ETL Lead Job at ProKarma

ETL Lead

Job Summary


Job Locations: IN-Hyderabad | Requisition ID: 2020-27501

Overview

Overall experience of 7-10 years; at least 4 years of relevant data engineering experience is mandatory.

Ability to create and maintain optimal ETL/ELT data pipelines using IBM DataStage and AWS Glue, Azure ADF, GCP Dataflow, or Hadoop.

Experience with at least one cloud or Hadoop ecosystem, in addition to IBM DataStage, is mandatory.

Good experience in data warehousing using AWS Redshift, Azure Synapse/SQL DW, GCP BigQuery, or Hadoop-based data warehousing.

Very strong SQL knowledge and experience working with both relational and NoSQL databases.

Very strong knowledge of implementing data transformations using stored procedures.

Good experience in IBM DataStage, in addition to cloud data warehousing, is a strong plus.

Good experience migrating legacy mainframe data to modern data warehouses is a strong plus.

A successful history of manipulating, processing, and extracting value from large, disconnected datasets.

Ability to understand requirements and create the required data models, data mapping documents, tables, and other SQL objects.

Ability to write unit/integration tests, contribute to the engineering wiki, and document work.

Performs the data analysis required to troubleshoot data-related issues and assists in their resolution.

Ability to implement processes and systems that monitor data quality, ensuring production data is always accurate.

Collaborates with analytics and business teams to improve the data models that feed business intelligence tools.

Strong analytical skills for working with structured and unstructured datasets.

Knowledge of visualization/reporting tools such as Power BI, AWS QuickSight, or Google Data Studio is an added advantage.

Knowledge of real-time stream processing using Spark/Kafka is an added advantage.


Qualifications

B.E. / B.Tech / MCA / Master's in Computer Science, or equivalent experience, is preferred.


Experience Required: Fresher

Vacancy: 2 - 4 Hires
