Data Engineer - Big Data Job at Nice Software Solutions Pvt Ltd

Data Engineer - Big Data

Job Summary

Data Engineer - Big Data

Job Location: Nagpur
Experience Required: 5 to 7 years
Positions Open: 3
Shift: Rotational

Job Overview:

We are looking for a talented Data Engineer with expertise in Big Data technologies to join our growing team. As a Data Engineer, you will be responsible for expanding and optimizing our data pipeline architecture. You will work closely with software developers, database architects, data analysts, and data scientists to ensure the seamless flow of data across various systems. This role requires proficiency in Hadoop, Hive, Spark, and other big data technologies, as well as solid experience in data transformation and ETL processes. If you are passionate about building and optimizing data systems, this is the role for you!

Key Responsibilities:

  • Data Pipeline Architecture: Expand and optimize the architecture for data flow and collection across various teams.
  • Big Data Technologies: Work with Hadoop ecosystem tools such as Hive, Sqoop, and Spark to process large datasets.
  • SQL Optimization: Craft and optimize complex SQL queries for data transformation, aggregation, and analysis.
  • Python Programming: Use Python (including libraries like Pandas and NumPy) to process and manipulate data efficiently.
  • ETL Processes: Ensure smooth data movement and transformation by applying industry best practices in ETL (see the sketch after this list).
  • Data Warehousing: Apply knowledge of data warehousing concepts to optimize data storage and retrieval.
  • Data Quality Management: Troubleshoot data-related issues, ensuring the quality, consistency, and integrity of data.
  • Collaboration: Work closely with cross-functional teams, translating complex technical concepts to non-technical stakeholders.
  • Cloud Integration: Use cloud platforms (e.g., AWS, Azure, GCP) to integrate data systems and services.
  • Automation: Write and maintain shell scripts or other automation tools for efficient data processing.
  • Documentation: Create clear, concise technical documentation to facilitate knowledge sharing and communication across teams.
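To give candidates a concrete feel for the day-to-day work described above, here is a minimal sketch of a Spark-based ETL step: reading a raw Hive table, applying a SQL aggregation, and loading the result into a warehouse table. The table names, columns, and aggregation logic are hypothetical illustrations only, not part of any actual project at Nice Software Solutions.

```python
# Minimal sketch of a Spark-based ETL step (all table and column names are hypothetical).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("daily_orders_etl")   # hypothetical job name
    .enableHiveSupport()           # read/write Hive tables
    .getOrCreate()
)

# Extract: read a raw Hive table (hypothetical name).
raw_orders = spark.table("raw.orders")

# Transform: a SQL aggregation of the kind described in the SQL Optimization bullet.
raw_orders.createOrReplaceTempView("orders")
daily_revenue = spark.sql("""
    SELECT order_date,
           region,
           SUM(amount)              AS total_revenue,
           COUNT(DISTINCT order_id) AS order_count
    FROM orders
    WHERE amount IS NOT NULL
    GROUP BY order_date, region
""")

# Load: write the result to a warehouse table, partitioned by date.
(
    daily_revenue.write
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("warehouse.daily_revenue")
)

spark.stop()
```

In practice, a job like this would typically be packaged and submitted with spark-submit from a scheduled shell script, which is where the Automation responsibility comes in.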

Required Skills and Qualifications:

  • 5 to 7 years of experience in Data Engineering, with a focus on Big Data technologies.
  • Proficiency in SQL for data manipulation and analysis.
  • Strong programming skills in Python (Pandas, NumPy) for data processing and manipulation.
  • Hands-on experience with big data tools such as Hadoop, Hive, Sqoop, and Spark.
  • In-depth understanding of ETL processes and data transformation methodologies.
  • Experience in designing data models and optimizing data storage in data warehouses.
  • Knowledge of cloud platforms (e.g., AWS, Azure, GCP) and their data engineering services is a plus.
  • Ability to write shell scripts or other automation scripts for data processing.
  • Ability to troubleshoot data-related issues and ensure data quality and consistency (a brief sketch follows this list).
  • Strong collaboration skills in a cross-functional team environment.
  • Ability to communicate complex technical concepts to non-technical stakeholders.
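As a rough illustration of the data quality and troubleshooting skills listed above, the sketch below runs a few routine consistency checks on an extract with Pandas. The file name, column names, and rules are hypothetical and would differ from pipeline to pipeline.

```python
# Minimal sketch of routine data quality checks with Pandas
# (file name, columns, and rules are hypothetical).
import pandas as pd

df = pd.read_csv("orders_extract.csv", parse_dates=["order_date"])  # hypothetical extract

issues = {}

# Completeness: count missing values per column.
issues["null_counts"] = df.isnull().sum().to_dict()

# Uniqueness: duplicate primary keys usually point to a broken upstream join or reload.
issues["duplicate_ids"] = int(df["order_id"].duplicated().sum())

# Validity: order amounts should be non-negative.
issues["negative_amounts"] = int((df["amount"] < 0).sum())

# Freshness: check how recent the latest record is.
issues["latest_order_date"] = str(df["order_date"].max().date())

for check, result in issues.items():
    print(f"{check}: {result}")
```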

Preferred Skills (Good to Have):

  • Experience with Data Lake architecture and integration.
  • Familiarity with Stream Processing and related technologies (a short sketch follows this list).
  • Knowledge of Cloud platforms (AWS, Azure, GCP) and their data services.
  • Experience with Big Data frameworks and data modeling.
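For candidates less familiar with stream processing, here is a minimal Spark Structured Streaming sketch of the general idea: reading order events from a data lake landing path and maintaining windowed aggregates. The path, schema, and sink are hypothetical placeholders chosen only for illustration.

```python
# Minimal sketch of a Spark Structured Streaming job (path, schema, and sink are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql.functions import window, col

spark = SparkSession.builder.appName("orders_stream_demo").getOrCreate()

# Read a stream of JSON order events from a landing folder in a data lake (hypothetical path).
events = (
    spark.readStream
    .schema("order_id STRING, amount DOUBLE, event_time TIMESTAMP")
    .json("/datalake/landing/orders/")
)

# Aggregate revenue in 5-minute tumbling windows, tolerating 10 minutes of late data.
revenue = (
    events
    .withWatermark("event_time", "10 minutes")
    .groupBy(window(col("event_time"), "5 minutes"))
    .agg({"amount": "sum"})
)

# Write running aggregates to the console for demonstration;
# a real pipeline would write to a lake table or another sink.
query = (
    revenue.writeStream
    .outputMode("update")
    .format("console")
    .start()
)

query.awaitTermination()
```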

Ready to be a part of our innovative data team? Apply now and help us build the next generation of data architecture!

