Big Data Engineer Job at Exponentia

Job Summary

Big Data Engineer


Experience: 4-6 Years


Company Profile:

Exponentia.ai is an AI tech organization with a presence across India, Singapore, the Middle East, and the UK. We are an innovative and disruptive organization, working on cutting-edge technology to help our clients transform into the enterprises of the future. We provide artificial intelligence-based products and platforms capable of automated cognitive decision-making to improve the productivity, quality, and economics of the underlying business processes. Currently, we are rapidly expanding across our machine learning, data engineering, and analytics functions.


Exponentia.ai has developed long-term relationships with world-class clients such as PayPal, PayU, SBI Group, HDFC Life, Kotak Securities, Wockhardt, and Adani Group, amongst others.


One of the top partners of Databricks, Azure, Cloudera (a leading analytics player), and Qlik (a leader in BI technologies), Exponentia.ai has recently been awarded the Innovation Partner Award by Qlik and the "Excellence in Business Process Automation Award" (IMEA) by Automation Anywhere.


Get to know more about us at http://www.exponentia.ai and https://in.linkedin.com/company/exponentiaai



Responsibilities:

As a Big Data Engineer, you will apply modern engineering practices to design, develop, integrate, and evaluate the data lake.

Ingest data from files, streams, and databases into the data lake

Build big data solutions using distributed computing frameworks

Cleanse and process data with Hive, Hadoop, Spark, and EMR

Develop programs in Scala, Java, and Python for data cleaning and processing

Design and develop distributed, high-volume, high-velocity, multi-threaded event-processing systems using the Core Java technology stack

Develop efficient software code leveraging Core Java and big data technologies for the various use cases built on the platform

Deliver high operational excellence, guaranteeing high availability and platform stability

Implement scalable solutions to meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Hadoop



Experience Required:

4 to 6 Years

Vacancy:

2 - 4 Hires
