Description
Embrace this pivotal role as an essential member of a high-performing team dedicated to reaching new heights in data engineering. Your contributions will be instrumental in shaping the future of one of the world’s largest and most influential companies.
As a Data Engineer on the Connected Commerce Technology team within Consumer and Community Banking, you serve as a seasoned member of an agile team to design and deliver trusted data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. You are responsible for developing, testing, and maintaining critical data pipelines and architectures across multiple technical areas in support of the firm’s business objectives.
Job responsibilities
- Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems.
- Creates secure and high-quality production code.
- Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development.
- Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems.
- Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture.
- Design & develop data pipelines end to end using Spark SQL, Java and AWS Services. Utilize programming languages like Java, Python, NoSQL databases, SQL, Container Orchestration services including Kubernetes, and a variety of AWS tools and services.
- Contributes to software engineering communities of practice and events that explore new and emerging technologies.
- Adds to team culture of diversity, equity, inclusion, and respect.
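For context, the pipeline work described above generally follows an extract-transform-load shape: read raw data from S3, reshape it declaratively with Spark SQL, and write a curated dataset back out. The Java sketch below is purely illustrative of that pattern; the bucket, view, and column names are hypothetical.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class OrdersPipeline {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("orders-pipeline")
                .getOrCreate();

        // Extract: read raw JSON events from S3 (hypothetical bucket).
        Dataset<Row> raw = spark.read().json("s3://example-bucket/raw/orders/");

        // Transform: declarative cleanup and aggregation with Spark SQL.
        raw.createOrReplaceTempView("orders");
        Dataset<Row> daily = spark.sql(
                "SELECT order_date, SUM(amount) AS total_amount "
              + "FROM orders WHERE amount IS NOT NULL "
              + "GROUP BY order_date");

        // Load: write curated Parquet, partitioned so query services
        // such as Athena can prune by date.
        daily.write()
             .mode("overwrite")
             .partitionBy("order_date")
             .parquet("s3://example-bucket/curated/daily_orders/");

        spark.stop();
    }
}
```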
Required qualifications, capabilities, and skills
- Formal training or certification on software engineering concepts and 5+ years of applied experience.
- Hands-on practical experience in system design, application development, testing and operational stability.
- Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages.
- Hands-on practical experience developing Spark-based frameworks for end-to-end ETL, ELT, and reporting solutions using key components such as Spark SQL and Spark Streaming (see the streaming sketch after this list).
- Proficiency in one or more coding languages such as Java or Python.
- Experience with relational and NoSQL databases.
- Cloud implementation experience with AWS, including:
- AWS data services: proficiency in Lake Formation, Glue ETL or EMR, S3, Glue Data Catalog, Athena, Kinesis or MSK, and Airflow or Lambda + Step Functions + EventBridge
- Data de/serialization: expertise in at least two of the following formats: Parquet, Iceberg, Avro, JSON-LD
- AWS data security: good understanding of security concepts such as Lake Formation, IAM, service roles, encryption, KMS, and Secrets Manager
- Proficiency in automation and continuous delivery methods.
- Proficiency in all aspects of the Software Development Life Cycle.
- Solid understanding of agile methodologies such as CI/CD, application resiliency, and security.
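The Spark Streaming and Kinesis/MSK items above pair naturally: a common pattern is a Spark Structured Streaming job that consumes a Kafka-compatible topic and lands micro-batches in S3. The Java sketch below illustrates that pattern only; the broker address, topic, and paths are hypothetical.

```java
import java.util.concurrent.TimeoutException;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;
import org.apache.spark.sql.streaming.StreamingQueryException;

public class OrdersStream {
    public static void main(String[] args)
            throws TimeoutException, StreamingQueryException {
        SparkSession spark = SparkSession.builder()
                .appName("orders-stream")
                .getOrCreate();

        // Source: a Kafka-compatible topic (e.g., Amazon MSK); hypothetical broker.
        Dataset<Row> events = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "broker.example.com:9092")
                .option("subscribe", "orders")
                .load();

        // Kafka delivers the payload as binary; cast it to a string for downstream parsing.
        Dataset<Row> payloads = events.selectExpr("CAST(value AS STRING) AS json");

        // Sink: append micro-batches to S3 as Parquet, checkpointing so the
        // job can recover its position on restart.
        StreamingQuery query = payloads.writeStream()
                .format("parquet")
                .option("path", "s3://example-bucket/streaming/orders/")
                .option("checkpointLocation", "s3://example-bucket/checkpoints/orders/")
                .start();

        query.awaitTermination();
    }
}
```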
Preferred qualifications, capabilities, and skills
- Knowledge of or experience with Snowflake.
- In-depth knowledge of the financial services industry and its IT systems.
- Experience building data lakes, data platforms, and data frameworks, and designing Data-as-a-Service APIs.