Big Data Engineer Job at Quantiphi

Big Data Engineer

Job Summary

As a Big Data Engineer, you will build utilities that help orchestrate the migration of massive Hadoop/Big Data systems onto public cloud platforms. You will build data processing scripts and pipelines that serve many jobs and queries per day. The services you build will integrate directly with cloud services, opening the door to new, cutting-edge, re-usable solutions. You will work with engineering teams, co-workers, and customers to gain new insights and explore new possibilities.

The Big Data Engineering team is hiring in the following areas:

  • Distributed storage and compute solutions
  • Data ingestion, consolidation, and warehousing
  • Cloud migrations and replication pipelines
  • Hybrid on-premises and in-cloud Big Data solutions
  • Big Data processing with Hadoop and Spark

Role & Responsibilities:

  • Work with cloud engineers and customers to solve big data problems by developing utilities for migration, storage, and processing on the AWS cloud.
  • Design and build cloud migration strategies for cloud and on-premises applications.
  • Diagnose and troubleshoot complex distributed systems problems and develop solutions with a significant impact at massive scale.
  • Build ingestion tools and processing jobs that handle terabytes to petabytes of data per day.
  • Design and develop next-gen storage and compute solutions for several large customers.
  • Communicate with a wide set of teams, including Infrastructure, Network, Engineering, DevOps, SiteOps teams, and cloud customers.
  • Build advanced tooling for automation, testing, monitoring, administration, and data operations across multiple cloud clusters.

Required Skills:

  • 4+ years of hands-on experience with data structures, distributed systems, Hadoop and Spark, and SQL and NoSQL databases.
  • Strong software development skills in at least one of Java, C/C++, Python, or Scala.
  • Experience building and deploying cloud-based solutions at scale.
  • Experience developing Big Data solutions (migration, storage, processing).
  • BS, MS, or Ph.D. degree in Computer Science or Engineering, and 5+ years of relevant work experience in Big Data and cloud systems.
  • Experience building and supporting large-scale systems in a production environment.

Technology Stack:

  • Any of Apache Hadoop, CDH, HDP, EMR, Google Dataproc, or HDInsight
  • Distributed processing frameworks: one or more of MapReduce, Apache Spark, Apache Storm, or Apache Flink
  • Database/warehouse: Hive, HBase, and at least one cloud-native service
  • Orchestration frameworks: any of Airflow, Oozie, Apache NiFi, or Google Dataflow
  • Message/event solutions: any of Kafka, Kinesis, or Cloud Pub/Sub
  • Container orchestration (good to have): Kubernetes or Swarm

Experience Required:

3 to 7 Years

Vacancy:

2 - 4 Hires
