
Azure Data Engineer

Job Summary

Are you someone with a strong data engineering background who is passionate about tackling complex enterprise business challenges by designing data-driven solutions? Then read on! ValueMomentum is seeking an Azure Data Engineer to join our DataLeverage team, where you will lead technology innovation in the area of Azure data technology. You will be part of a highly collaborative and growing team, solving complex business challenges with modern data and analytics technologies. In this role, you will develop batch processing solutions using Azure Data Factory and Azure Databricks. The ideal candidate will also implement best practices in designing and building the data warehouse using Azure Data Lake, Azure Databricks, and Azure SQL DWH. At ValueMomentum, we endorse a culture of experimentation and constantly strive for improvement and learning. You will be exposed to advanced tools, technologies, methodologies, and best practices that will help you develop your skills and grow in your career.

Know your team:

ValueMomentum's Data Leverage (DL) team is trusted by leading insurance, financial services, and healthcare payer companies to empower businesses with real-time insights. The DL team is focused on helping our customers modernize data infrastructure, create effective business intelligence capabilities, and develop analytics models that generate business-driven insights and outcomes. The DL team provides a wide range of capabilities to our customers, including advisory services, strategy consulting, data engineering, data management, data governance, and BI and analytics.

Responsibilities:

As an Azure Data Engineer, your day-to-day activities will include:

- Designing and developing ETL solutions; writing SQL, Python, and PySpark programs.
- Creating pipelines (simple and complex) using Azure Data Factory (ADF).
- Working with other Azure stack modules such as Azure Data Lake and SQL DW; you must be extremely well versed in handling large volumes of data.
- Understanding business requirements for the data flow process, along with functional and technical specification documents.
- Developing mapping documents and source-to-target transformation business rules as per scope and requirements.
- Providing continuous formal and informal communication on project status.
- Following the JIRA story process for SQL development activities.

Requirements:

Candidates are required to have these mandatory skills:

- Overall, 6+ years of developer experience with SQL and Python with Spark (PySpark).
- Experience with Azure Data Factory, datasets, DataFrames, Azure Blob Storage, and Storage Explorer.
- Experience implementing data ingestion pipelines from multiple data sources using ADF and Azure Databricks (ADB).
- Experience creating Data Factory pipelines, custom Azure development, deployment, and troubleshooting data loads/extractions using ADF.
- Extensive experience with SQL, Python, and PySpark in Azure Databricks; able to write PySpark code using DataFrames.
- Good understanding of Agile/Scrum methodologies.
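To give a flavor of the source-to-target transformation work this role involves, here is a minimal sketch in plain Python. In the job itself this logic would typically run as PySpark on Azure Databricks; the field names and rules below (policy_id, premium_usd, the status flag) are invented for illustration and are not from the posting.

```python
# Hypothetical source-to-target mapping rules, expressed in plain Python.
# In practice this kind of logic would be a PySpark DataFrame transformation
# in Azure Databricks; all column names and rules here are illustrative only.

def transform_record(source: dict) -> dict:
    """Apply simple source-to-target transformation rules to one record."""
    return {
        "policy_id": source["PolicyNumber"].strip().upper(),  # standardize the key
        "premium_usd": round(float(source["Premium"]), 2),    # cast text to number
        "is_active": source["Status"].lower() == "active",    # derive a flag
    }

# Example source records, as they might arrive from an upstream system.
records = [
    {"PolicyNumber": " pol-001 ", "Premium": "1250.456", "Status": "Active"},
    {"PolicyNumber": "pol-002", "Premium": "980.1", "Status": "Lapsed"},
]

target = [transform_record(r) for r in records]
```

A mapping document of the kind mentioned above would pin down each of these rules per column (source field, target field, data type, transformation) before any pipeline code is written.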

Experience Required:

Fresher

Vacancy:

2 - 4 Hires
