Senior Data Engineer
- Bengaluru, Bangalore Urban, Karnataka
- Salary: Not Disclosed
- Full-time
Position Title: Senior Data Engineer (Databricks, PySpark, Cloud Platforms)
Location: Bengaluru, Bellandur (GTP)
Employment Type: Full-time
Job Summary
Synechron is looking for a Senior Data Engineer to join our advanced analytics team in Bengaluru. In this role, you will architect and build scalable, high-performance data pipelines that power data science, analytics, and business intelligence initiatives. You'll work with modern tools including Databricks, PySpark, and cloud data platforms, while collaborating across teams to ensure high-quality, secure, and efficient data solutions.
Key Responsibilities
- Design, develop, and maintain large-scale, secure, and efficient data pipelines using Databricks, PySpark, and cloud-native tools (a minimal sketch follows this list).
- Partner with data scientists, analysts, and business stakeholders to translate requirements into robust data solutions.
- Integrate data from various structured, semi-structured, and streaming sources.
- Ensure high standards for data quality, performance optimization, security, and cost efficiency.
- Drive data pipeline automation, orchestration, and monitoring using tools like Airflow.
- Lead troubleshooting efforts, performance tuning, and enhancements of existing pipelines.
- Stay informed about emerging data technologies and recommend adoption where relevant.
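For illustration, the sketch below shows the kind of batch pipeline this role owns, written in PySpark against a Delta table as on Databricks. The bucket, table, and column names are hypothetical placeholders, not a prescribed design.

```python
# Minimal PySpark batch-pipeline sketch: ingest raw JSON, clean it,
# and publish a partitioned Delta table for analytics and BI.
# All paths, table names, and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_daily_load").getOrCreate()

# Ingest semi-structured source data landed in cloud storage.
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Transform: enforce types, drop malformed rows, derive a partition column.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
)

# Publish as a partitioned Delta table for downstream consumers.
(clean.write.format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .saveAsTable("analytics.orders_clean"))
```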
Technical Skills
Core Expertise
- Programming: Python (expert), SQL (advanced), PySpark.
- Platforms: Databricks (clusters, notebooks, workflows), AWS/Azure/GCP.
- Data Orchestration: Apache Airflow (or similar); a DAG sketch follows this list.
- Data Warehousing: Snowflake (preferred), data modeling, ETL/ELT pipelines.
- Streaming: Kafka or other stream processing tools.
- DevOps: CI/CD (GitLab CI, Jenkins), version control (Git), containerization (Docker/Kubernetes preferred).
- Security: Familiarity with encryption, access controls, and compliance best practices.
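As a sketch of the orchestration style listed above, the hypothetical Airflow 2.x DAG below chains an extract step into a transform step on a daily schedule; the DAG id, callables, and schedule are illustrative assumptions.

```python
# Hypothetical Airflow DAG: run extract, then transform, once a day.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    """Placeholder for a source-extraction step."""

def transform():
    """Placeholder for a PySpark/Databricks transformation step."""

with DAG(
    dag_id="orders_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # extract must finish before transform starts
```

In practice the transform step would more likely trigger a Databricks job than run Spark inside the Airflow worker; the PythonOperator here just keeps the sketch self-contained.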
Experience
- 8+ years of experience in data engineering or related roles.
- Proven expertise in developing and deploying scalable data pipelines using Databricks, PySpark, and SQL.
- Hands-on experience with cloud platforms (AWS, Azure, or GCP).
- Strong background in data warehousing, especially with Snowflake.
- Exposure to real-time data processing and orchestration tools (see the streaming sketch after this list).
- Experience implementing CI/CD pipelines for data workflows is a plus.
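To make the real-time requirement concrete, here is a hedged sketch of Kafka ingestion with Spark Structured Streaming; the broker address, topic name, and storage paths are invented for illustration.

```python
# Hypothetical streaming ingestion: Kafka topic -> Delta table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events_stream").getOrCreate()

# Subscribe to a Kafka topic; Kafka delivers keys and values as binary.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "events")
         .load()
)

# Decode the payload and append it to a Delta table. The checkpoint
# location records progress so the stream restarts without data loss
# or duplication.
query = (
    events.select(F.col("value").cast("string").alias("payload"))
          .writeStream.format("delta")
          .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
          .start("s3://example-bucket/tables/events/")
)
query.awaitTermination()
```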
Daily Responsibilities
- Build and optimize data ingestion, transformation, and storage workflows.
- Collaborate with cross-functional teams to align data solutions with business objectives.
- Monitor, troubleshoot, and continuously improve pipeline performance.
- Conduct data quality checks and uphold governance and compliance standards (see the sketch after this list).
- Contribute to technical documentation, code reviews, and team knowledge sharing.
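The snippet below is a minimal, hypothetical example of the kind of data quality gate run before publishing a table; the table, columns, and pass/fail rules are assumptions, not a mandated standard.

```python
# Hypothetical data quality gate: fail the run rather than publish bad data.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.read.table("analytics.orders_clean")  # table from the earlier sketch

total = df.count()
null_ids = df.filter(F.col("order_id").isNull()).count()
duplicates = total - df.dropDuplicates(["order_id"]).count()

# Simple assertions; a production pipeline might report these to a
# monitoring system instead of raising.
assert total > 0, "orders_clean is empty"
assert null_ids == 0, f"{null_ids} rows missing order_id"
assert duplicates == 0, f"{duplicates} duplicate order_id values"
```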
Qualifications
- Bachelor's or Master's degree in Computer Science, IT, or a related field.
- Relevant certifications (e.g., Databricks Certified Data Engineer, AWS Certified Data Analytics) are preferred.
Professional Competencies
- Strong problem-solving and analytical mindset.
- Effective communicator with the ability to collaborate across technical and non-technical teams.
- Time management and prioritization skills under tight deadlines.
- Proactive leadership and a passion for innovation.
- Commitment to ethical data use and data security.
Diversity & Inclusion at Synechron
Synechron is committed to building an inclusive, diverse, and equitable workplace. Through our global Same Difference DEI initiative, we celebrate and support people from all backgrounds, including race, gender, sexual orientation, religion, age, disability, and more. We offer flexible work arrangements, continuous learning, internal mobility, and mentoring programs to support every employee's growth.