Spire operates a hybrid work model, with our colleagues coming onsite to a Spire office 3x per week. This role must be based in Boulder, Colorado.
What You'll Do:
Spire's next-generation AI weather modeling is a big deal. A REALLY big deal. We’re looking for a Senior Software Engineer to join our Weather Product Data Engineering team and help take our product pipeline, APIs, and other product delivery infrastructure to another level. You’ll have the opportunity to make a real impact on our products and infrastructure as we continue to grow our portfolio of data and services.
- Have a customer- and product-driven focus and mindset
- Design, implement, maintain, and document web services and APIs
- Design and develop cloud-native data pipelines and APIs for Earth Observation data-related use cases
- Deliver when it counts, with a collaborative spirit
- Handle large, complex datasets and create repeatable, reusable code across the whole software development lifecycle, from proof-of-concepts to operational services
- Be comfortable moving across domains (software, infrastructure, science) and projects
- Support user implementations through architecture guidance, best practices, capacity planning, implementation, troubleshooting, and monitoring
- Keep abreast of industry trends, advancements, and best practices in this domain, and leverage this knowledge to drive innovation by starting new proof-of-concepts
- Engage across multiple business areas, including engineering, project management, business development, and after-sales support
- Apply industry-standard security best practices
Who You Are:
You are a visionary and technical leader who loves to share your knowledge and experience with the world. You love weather and understand its multi-trillion-dollar impact on global economies, which we at Spire aim to help address.
- 5+ years of professional software development experience
- Experience with Amazon Web Services and cloud-native architectures
- Proficiency with Python and Linux
- Demonstrated experience creating and managing data processing pipelines for large geophysical datasets
- History of work in a mission-critical, production, or operational environment
- Experience leveraging the Pangeo ecosystem for large-scale data analysis
- Experience building and/or managing large-scale, high-performance systems in a complex, multi-tiered, distributed environment
- Experience working with Kubernetes
- Experience with Infrastructure as Code tools like Terraform and CloudFormation
- Experience building and scaling high-quality APIs