Big Data Engineer Job at Pokkt
We are looking for a Big Data Engineer who will work on collecting, storing, processing, and analyzing huge sets of data. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them. You will also be responsible for integrating them with the architecture used across the company.
- 6+ years' experience with Big Data technologies (Hadoop, MapReduce, Hive, Sqoop, and Spark), with hands-on expertise in designing and implementing high-data-volume solutions (ETL and streaming)
- Experience building stream-processing systems using solutions such as Storm, Spark Streaming, or Kafka Streams
- Extensive experience working with Big Data tools such as Pig, Hive, Athena, Glue, Snowflake, and EMR
- Experience with NoSQL databases such as HBase, Cassandra, or MongoDB
- Knowledge of various ETL techniques and frameworks, such as Flume
- Experience with messaging systems such as Kafka or RabbitMQ
- Good understanding of Lambda Architecture, along with its advantages and drawbacks
Experience: Minimum 6 years
Openings: 2 - 4 hires