Data Engineer - Platform Generative AI at McKinsey & Company
Data Engineer - Platform Generative AI
- Bengaluru, Bangalore Urban, Karnataka
- Not Disclosed
- Full-time
Your Impact
We are seeking a passionate Data Engineer with expertise in Python development who is excited about cloud-based data engineering using AWS services. You will be an integral part of a dynamic, multi-disciplinary team, working closely with digital product professionals, data scientists, cloud engineers, and other stakeholders.
As a key member of a global team working on our generative AI initiative, you will be based in one of our European offices. McKinsey's Tech Ecosystem function is responsible for developing and delivering all technology solutions for the firm's internal use, and your role will be crucial in driving the development of data solutions that support generative AI applications.
You will work with a team of data engineers to develop robust data ingestion pipelines and enhance data processing capabilities that integrate data into systems used by AI applications. Your responsibilities will include writing Python code, creating tests, developing and maintaining GitHub Actions CI/CD pipelines, and managing AWS-based infrastructure and Docker containers.
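As a loose illustration (not taken from the posting), the day-to-day work described above, small and well-tested Python transforms feeding ingestion pipelines, might look like the following sketch. The function name, field names, and validation rules are all hypothetical:

```python
def normalize_records(raw_rows):
    """Illustrative ingestion step: drop incomplete rows and standardize
    types so downstream loaders receive a predictable schema.

    Hypothetical schema assumption: each row is a dict with an "id"
    (string-like) and a numeric "value".
    """
    cleaned = []
    for row in raw_rows:
        # Skip rows missing either required field.
        if not row.get("id") or row.get("value") is None:
            continue
        cleaned.append({
            "id": str(row["id"]).strip(),
            "value": float(row["value"]),
        })
    return cleaned
```

In practice a transform like this would sit behind unit tests and run inside a containerized pipeline triggered by CI, matching the responsibilities listed above.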
Your Growth
As a member of the global team working on our generative AI initiative, you will play a key role in shaping and accelerating the delivery of McKinsey's target state data platform, which will enable AI use-cases. You will be part of our cloud-first approach, transforming data platforms and analytical applications across the firm.
Working closely with multidisciplinary teams, you will contribute to building cutting-edge data solutions in a fast-paced, innovative environment. McKinsey's Tech Ecosystem function is responsible for developing all technology solutions for the firm's internal needs, and you'll have the opportunity to shape how these solutions evolve.
Your Qualifications and Skills
- 3+ years of professional experience as a Data Engineer, with a focus on cloud-based data engineering using AWS services
- Expertise in Python development and a strong understanding of clean code, modularity, error handling, and test automation
- Extensive experience with relational databases and data pipeline performance tuning
- Hands-on experience with Docker and CI/CD pipelines (e.g., GitHub Actions)
- Strong execution focus, with the ability to work independently in complex, fast-paced environments and deliver results
- Demonstrable experience in diagnosing and resolving data pipeline performance issues
- Interest in generative AI and machine learning topics
- Experience with Kedro framework is a plus
- Opinionated and confident in sharing ideas, willing to speak up at all levels
- Familiarity with Agile principles and product development methodologies
- Excellent problem-solving skills and the ability to analyze and resolve complex data engineering challenges
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams

