Description
Optimizely is focused on unlocking digital potential. We are the recognized category leader in Digital Experience Platform (DXP) and created the category for A/B testing and experimentation software. We have incredible customers – isn’t that one of the most important things to look for in your next job? Optimizely serves over 9,000 brands, from global organizations such as Visa, Sky, Yamaha, and the Wall Street Journal to tech innovators like Atlassian, DocuSign, Fitbit, and Zillow.
Not only are we financially sound and growing, we have unicorn status: we exceeded $300M in revenue in 2020, are already profitable, and have every strategic option ahead of us. Optimizely continues to invest in a market opportunity north of $30 billion, providing significant personal career growth opportunities.
We have an inclusive culture and a global team of 1,200+ people across the US, Europe, Australia, and Vietnam. We blend European and American business culture with an emphasis on teamwork, inclusion, and moving fast. People make the difference!
We are looking for a Senior Software Engineer to join our Data Platform team. We have built sophisticated infrastructure that processes billions of events per day, enriches them via stream processing, and aggregates and stores them efficiently to support large-scale, performant queries. The team provides centralized data infrastructure and APIs for Optimizely's experimentation, event, and results data needs, including distributed databases, streaming platforms, storage solutions, and big data infrastructure.
Our users include both paying customers and engineers within the company who build data products. The Data Infrastructure team plays an important role in making it easy and efficient for our users to get accurate data into and out of our systems.
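For a concrete flavor of this kind of work, below is a minimal sketch of a windowed event-aggregation topology. It assumes a Java/Kafka Streams stack, and the topic names (raw-events, event-counts-per-minute) are illustrative placeholders; neither the stack nor the topics are details taken from this posting.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

import java.time.Duration;
import java.util.Properties;

// Illustrative sketch only: ingest events, enrich them, aggregate per-key
// counts in one-minute windows, and publish the rollups downstream.
public class EventCountTopology {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "event-aggregator");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Raw events keyed by, e.g., an experiment ID (hypothetical topic name).
        KStream<String, String> events = builder.stream("raw-events");

        // Enrichment step: here just tagging the payload; a real pipeline
        // would typically join against reference data instead.
        KStream<String, String> enriched = events.mapValues(v -> "enriched:" + v);

        // Aggregate: count events per key in 1-minute tumbling windows and
        // publish the rolled-up counts to a downstream topic for querying.
        enriched
            .groupByKey()
            .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(1)))
            .count()
            .toStream()
            .map((Windowed<String> key, Long count) ->
                    KeyValue.pair(key.key() + "@" + key.window().startTime(), count.toString()))
            .to("event-counts-per-minute");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

In practice the enrichment would join against reference data and the aggregates would land in a queryable store, but the overall shape – ingest, enrich, window, aggregate, publish – mirrors the pipeline described above.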
What you’ll be doing:
- Design, build, and maintain business-critical streaming data and distributed systems, utilizing the best languages, technologies, and platforms for the job.
- Work with highly scalable systems that process billions of events per day.
- Support near real-time streaming analytics insights for data consumers both internally and externally.
- Drive continuous improvements to the reliability, accuracy, performance, security, and cost of our data infrastructure.
Desired qualifications:
- Strong software engineering experience.
- Expertise in object-oriented programming languages like Java.
- Experience with cloud computing platforms like AWS and/or GCP.
- Knowledge of event streaming systems, big data systems, and build/release processes.
- Love for digging into and solving complex problems.
- Familiarity with microservice architectures.
- Deep passion for and experience with building infrastructure platforms and tools.
- Understanding of scaling and reliability concerns in large systems.
- Excellent communication and collaboration skills.
- Knowledge of container services (Docker/Kubernetes) is a plus.
Why you’ll succeed:
- You are curious about how distributed systems operate and fail at scale.
- You have an automation mindset.
- You seek to understand problems, and then produce workable and efficient solutions.
- You reflect and seek feedback on choices and trade-offs in your design process.
- You seek context to inform your decisions, and you adapt to changes according to the needs of the business.
- You are curious about emerging technologies and are interested in evaluating and adopting them where it makes sense.
- You are a team player who enjoys collaborating across engineering teams.
- You appreciate working with people from all walks of life, and you work to respectfully engage and collaborate with colleagues regardless of perspective or experience.