Job: Cloud Data Engineer
Location: San Francisco, CA - Hybrid (2+ days in office)
Interview process: First round: phone/Teams/Webex interview with the hiring manager; second round: panel interview
Top must-have skills:
- Minimum 10 years of relevant experience, including 5+ years with AWS Cloud
- Strong SQL and cloud skills, with the ability to build data pipelines
- Experience migrating from SQL Server to the cloud
- Hands-on experience with SQL and data pipelines
- AWS data products and services: S3, Glue, Redshift, Lambda, Aurora, CI/CD, CloudWatch
- Python
- SAP Business Objects (Nice to Have)
- Data migration and data warehousing
Requirements:
- Expert in developing and analyzing complex SQL on a variety of RDBMS (Microsoft SQL Server, Oracle)
- Expert knowledge of data modeling and understanding of different data structures and their benefits and limitations under particular use cases
- Experience with ETL tools (Informatica)
- Ability to create quality ERDs (entity-relationship diagrams)
- Excellent writing skills for user and system documentation
- AWS Cloud Data Warehousing Technologies
- Experience using core AWS services to build and support data warehouse solutions leveraging AWS architecture best practices (S3, DMS, Glue, Lambda)
- Development/modeling experience with Amazon Redshift
- Experience using the AWS service APIs, AWS CLI, and SDKs to build applications
- Proficiency in developing, deploying, and debugging cloud-based applications using AWS
- Ability to use a CI/CD pipeline to deploy applications on AWS (GitLab, Terraform, DBMaestro)
- Ability to apply a basic understanding of cloud-native applications to write code
- Proficiency writing code for serverless applications
- Ability to write code using AWS security best practices (e.g., not using secret and access keys in the code, instead using IAM roles)
- Ability to author, maintain, and debug code modules on AWS
- Database: AWS Aurora Postgres
- Data Access: AWS Athena
- Data Analytics: AWS QuickSight
- Familiarity with or knowledge of Alteryx, Collibra, Immuta, or Okta is a plus
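To illustrate the serverless and security expectations above (a minimal sketch, not part of the role's codebase; the event shape and the DATA_BUCKET environment variable are hypothetical assumptions), a Lambda handler should carry no access keys in code and instead rely on the function's IAM execution role for credentials:

```python
import json
import os


def lambda_handler(event, context):
    """Minimal AWS Lambda handler sketch for a hypothetical S3-triggered event.

    No secret or access keys appear in the code: in a real deployment, any
    boto3 client created here would automatically receive temporary
    credentials from the function's IAM execution role.
    """
    # Configuration comes from the environment, not hard-coded values
    # (DATA_BUCKET is an assumed variable name for this sketch).
    bucket = os.environ.get("DATA_BUCKET", "example-bucket")

    # Count the records delivered in the triggering event.
    records = event.get("Records", [])

    return {
        "statusCode": 200,
        "body": json.dumps({"bucket": bucket, "record_count": len(records)}),
    }
```

Invoked locally with a sample event such as `lambda_handler({"Records": [{}, {}]}, None)`, the handler returns a 200 response whose body reports the record count, demonstrating code that is testable without embedded AWS credentials.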