Position Overview:
We are currently looking for a keen Software Engineer to join our data ingestion team. Working on a greenfield project within a world-class data platform, this team builds highly scalable data ingestion pipelines for location-based analytics. You will build the automation and tooling that supports the retrieval and transformation of tens of thousands of externally sourced datasets, working alongside highly qualified and accomplished data and software engineers to build, enhance, and maintain the data platform behind our best-in-class products.
What you will do and achieve:
Reporting to the Software Development Manager, the duties and responsibilities of the Software Engineer include, but are not limited to:
- Work with an agile team to develop, test, and maintain data ingestion applications and workflows
- Participate in design reviews and pull requests
- Adhere to high-quality development principles while delivering solutions on time and on budget
- Analyze and resolve technical and application problems
- Contribute to our evolving Continuous Integration (CI/CD) pipeline
- Analyze use cases and propose solutions to meet business objectives
Who you are:
Education
- Bachelor's degree or better in Computer Science
- Excellent academic record with a solid grounding in software engineering theory, including at least one modern programming language
- Familiarity with the standards, concepts, practices, and procedures of the field of Computer Science
Experience
- 2 to 5 years’ experience as a software engineer
Key Knowledge & Skills
- At least one modern programming language, preferably Python
- Knowledge and experience with OOP design patterns
- Any RDBMS, such as MySQL or MS SQL Server
- Experience building scalable and maintainable data-intensive applications
- Pipeline orchestration technology, such as Prefect, Luigi, or Airflow
- Big data technologies, such as Hadoop and Spark, ideally including Python pandas and Dask
Core Competencies
- Keen interest in data engineering and a “tinkering” mindset
- Driven to continually learn about and incorporate new technologies
- Thrive in a self-driven environment
- Understanding and integrating human and machine workflows
- Enterprise data lake and warehouse modeling and design
- Docker, Kubernetes
- Cloud infrastructure, such as Azure, AWS, or Google Cloud
- Some full-stack experience, such as with Flask and React
- Development on Linux
Other Desirable Attributes
- Distributed systems – storage, compute & access patterns
- Unstructured data extraction – PDF, web scraping
- Graph database, such as Neo4j
- Elasticsearch
- Serverless Architecture
- Location – spatial data
- Data catalog platforms, such as Amundsen
This job description is a general listing of the required tasks and expectations of the position and in no way implies that the duties listed above are the employee’s only responsibilities. The employee is expected to perform other tasks, responsibilities, and training as instructed by their supervisors. Duties and responsibilities may change at any time with or without notice.
This position may require additional hours outside of the standard work schedule including occasional holiday, evening and/or weekend hours in order to meet deadlines or to accommodate customers.
The employee will regularly be required to talk, hear, walk, use hands, kneel, crouch and lift up to 25 pounds. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
LightBox and all its holding companies are an equal opportunity/affirmative action employer. It is the policy of LightBox and its holding companies to prohibit discrimination of any type and to afford equal employment opportunities to employees and applicants, without regard to race, color, religion, sex, national origin, age, disability, or veteran status.
We thank all applicants for their interest; however, only those selected for an interview will be contacted.
NO TELEPHONE CALLS OR AGENCY SOLICITATION PLEASE.