Description
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level.
As a Software Engineer III at JPMorgan Chase within the Consumer & Community Banking division, your role involves being a key contributor to an agile team, designing and delivering secure, stable, and scalable technology products that lead the market. Your responsibilities include implementing critical technology solutions across a variety of technical domains to support various business functions, thereby aiding the firm in achieving its business objectives.
Job responsibilities
- Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems.
- Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems.
- Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development.
- Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems.
- Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture.
- Contributes to software engineering communities of practice and events that explore new and emerging technologies.
- Adds to team culture of diversity, equity, inclusion, and respect.
- Delivers the book of work assigned to the team.
Required qualifications, capabilities, and skills
- Formal training or certification on software engineering concepts and 3+ years of applied experience.
- Hands-on Ab Initio/ETL (Informatica) development experience.
- Ability to gain expertise in the critical business processes of the Deposits ecosystem.
- Hands-on practical experience in system design, application development, testing, and operational stability.
- In-depth knowledge of Ab Initio/Informatica ETL programming (GDE/EME), Unix shell scripting, and Control-M/Autosys batch schedulers.
- In-depth knowledge of developing application and infrastructure architectures.
- Experience in developing, debugging, and maintaining code in a large corporate environment and in RDBMS/querying languages.
- Solid understanding of agile methodologies such as CI/CD, application resiliency, and security.
- Demonstrated knowledge of software applications and technical processes within a technical discipline (ETL Processing, Scheduling, Operations).
- Proficient in scripting with Python for data processing tasks and ETL workflows.
- Experience writing Splunk or CloudWatch queries and working with Datadog metrics.
Preferred qualifications, capabilities, and skills
- Familiarity with Java frameworks and modern front-end technologies, and exposure to cloud technologies.
- Practical cloud-native experience in AWS (EC2, S3, Glue, Lambda, Athena, RDS, SNS); proficiency with Python, PySpark, and machine learning disciplines.
- Strong experience with distributed computing frameworks such as Apache Spark (specifically PySpark) and event-driven architecture using Kafka.
- Experience with distributed databases such as Amazon DynamoDB.
- Working knowledge of AWS Glue services, including experience in designing, building, and maintaining ETL jobs for diverse data sources.
- Familiarity with AWS Glue DynamicFrames to streamline ETL processes.
- Ability to troubleshoot common issues in PySpark and AWS Glue jobs, and to identify and address performance bottlenecks.