Job Title: Senior Data Engineer (Azure data services/Databricks)
Location: Austin, TX 78723
Hybrid – in person on Tuesday and Thursday
Austin local candidates only
Business Hours: Monday through Friday from 8:00 AM to 5:00 PM
Citizenship: USC/GC Only
Candidates should be able to start 2 weeks after selection.
Job Description
Important Notes:
- We need local candidates as the position is hybrid (two days per week).
- We prefer U.S. citizens or Green Card holders.
- Please share only the candidates who meet the qualifications.
Key skill set
Azure data services, Databricks, Python, SQL, PySpark, GitHub
Dept/Project
Enterprise Data & Analytics/EDM Team
Interview
First round: Microsoft Teams Video interview (Interviewed by: EDA Tech Team)
Second round: In-person interview (1900 Aldrich St, Austin, TX 78723) - (Interviewed by: EDA Tech & Mgt Team)
Interviewers
EDA Tech & Mgt Team
Job Description:
We are seeking a hands-on Senior Data Engineer with extensive experience in Azure data services and Databricks.
Responsibilities:
- Design and implement data solutions using Azure services (e.g., Synapse Analytics, Databricks, Snowflake, Azure SQL, Azure Blob Storage, ADLS Gen2, Azure Functions, Azure Key Vault, AI/ML).
- Develop data pipelines and transformations (ETL/ELT).
- Collaborate with stakeholders to understand data requirements and deliver solutions.
- Ensure data solutions are scalable, reliable, and secure.
- Implement CI/CD and DevOps practices.
- Maintain and publish code to GitHub.
- Follow PMO/Scrum Master guidance using Agile/Scrum methodologies.
Requirements:
Must Have:
- 10+ years of IT experience, including a minimum of 7 years as a Data Engineer.
- 5+ years of experience with the Azure cloud platform.
- Proven experience with Azure data services (e.g., Azure Key Vault, Azure Blob Storage, ADLS Gen2, Synapse Pipeline, Log Analytics, Logic App, Purview, Azure Functions).
- Experience with Databricks, Power Automate, Power BI, Azure SQL.
- Proficiency in Python, SQL, APIs, and PySpark.
- Expertise in optimizing ETL processes and managing data warehouses and data lakes.
- Experience with CI/CD and DevOps practices.
- Experience with structured, unstructured, and semi-structured data.
- Experience with maintaining and publishing code to GitHub.
- Ability to work in a cross-functional team and coordinate with Infra/Ops and Information Security departments.
- Experience with requirement gathering and Scrum methodologies.
Good To Have:
- Experience with on-premises to Azure cloud migration.
- Knowledge of Snowflake, Microsoft Fabric, Microsoft Purview, AI/ML, streaming data services, and marketplace data services.
- Relevant certifications.
- Experience with multiple cloud implementations.