Big Data Lead at Srijan Technologies

Job Summary

Role:

  • Tech Lead with 4-6 years of experience, predominantly in Kafka (AWS experience is mandatory)

A Big Data Lead with Kafka (primary focus) and Hadoop skill sets, to work on an exciting Streaming / Data Engineering team

Responsibilities include:

  • Develop scalable and reliable data solutions to move data across systems from multiple sources, in real-time as well as batch mode (Kafka)
  • Build producer and consumer applications on Kafka and define the appropriate Kafka configurations (a minimal sketch follows this list)
  • Design, write, and operationalize new Kafka connectors using the Kafka Connect framework
  • Accelerate adoption of the Kafka ecosystem by creating a framework for leveraging technologies such as Kafka Connect, KStreams/KSQL, Schema Registry, and other streaming-oriented technologies
  • Implement stream processing alongside Kafka using Kafka Streams, KSQL, or Spark jobs
  • Bring forward ideas to experiment with, and work in teams to turn those ideas into reality
  • Architect data structures that meet the reporting timelines
  • Work directly with engineering teams to design and build their development requirements
  • Maintain high standards of software quality by establishing good practices and habits within the development team, while delivering solutions on time and on budget
  • Communicate clearly, both in writing and orally
  • Quickly learn new tools and paradigms to deploy cutting-edge solutions
  • Create large-scale deployments using newly conceptualized methodologies
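
To make the producer/consumer work above concrete, here is a minimal sketch in Java using the standard kafka-clients library. The broker address ("localhost:9092"), topic name ("events"), and record contents are illustrative placeholders, not details taken from this posting.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class EventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address; point this at the real cluster
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // acks=all waits for all in-sync replicas, trading latency for durability
        props.put("acks", "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // "events" is a hypothetical topic name
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("events", "key-1", "hello, kafka");
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace();
                } else {
                    System.out.printf("wrote to %s-%d at offset %d%n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
        } // try-with-resources close() flushes any buffered records
    }
}
```

The consumer side mirrors this: a KafkaConsumer subscribes to the topic under a group.id and iterates over poll() results. Tuning this configuration (acks, retries, batch sizes, consumer group sizing) is where the production fine-tuning requirement in the Skills section below comes in.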

Skills:

  • Proven hands-on experience with Kafka is a must
  • Proven hands-on experience with the Hadoop stack (HDFS, MapReduce, Spark)
  • Core development experience in one or more of these languages: Java, Python/PySpark, or Scala
  • Good experience in developing producers and consumers for Kafka, as well as custom Kafka connectors
  • 2+ years of experience developing applications using Kafka architecture, the Kafka Producer and Consumer APIs, and real-time data pipelines/streaming
  • 2+ years of experience configuring and fine-tuning Kafka for optimal production performance
  • Experience using the Kafka APIs to build producer and consumer applications, along with expertise in implementing KStreams components; has developed KStreams pipelines and deployed KStreams clusters (see the Kafka Streams sketch after this list)
  • Strong knowledge of the Kafka Connect framework, with experience using several connector types (HTTP REST proxy, JMS, File, SFTP, JDBC, Splunk, Salesforce) and supporting wire-format translations; knowledge of connectors available from Confluent and the community
  • Experience developing SQL queries, and knowledge of when to use KSQL versus KStreams, will be an added advantage
  • Expertise with the Hadoop ecosystem, primarily Spark, Kafka, and NiFi
  • Experience with integration of data from multiple data sources
  • Experience with stream-processing systems such as Storm and Spark Streaming will be an added advantage
  • Experience with relational SQL and NoSQL databases, e.g. one or more of Postgres, Cassandra, HBase, or MongoDB
  • Experience with AWS cloud services such as S3, EC2, EMR, RDS, and Redshift
  • Strong grasp of data structures and algorithms, and good analytical skills
  • Strong communication skills
  • Ability to work with and collaborate across the team
  • A good "can do" attitude

Experience Required:

4 - 6 years

Vacancy:

2 - 4 Hires
