Lead Data Engineer Job at Sequoia

Job Summary

The Opportunity:

    • As a Lead Data Engineer at Sequoia, you will develop data-driven solutions with current and next-generation technologies to meet evolving business needs. You will own the technical design and develop Extract/Transform/Load (ETL) applications that interface with all key Sequoia applications. You will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives.

What You Get to Do:

    • Design and develop data-ingestion frameworks, real-time processing solutions, and data processing and transformation frameworks.
    • Deploy and provide support for application code and analytical models.
    • Provide senior-level technical consulting to peer data engineers during design and development for highly complex and critical data projects.
    • Create and enhance data assets that enable seamless integration and flow across the data ecosystem.
    • Provide business analysis and develop ETL code and scripting to meet all technical specifications and business requirements according to the established designs.
    • Develop real-time data ingestion and stream-analytics solutions leveraging technologies such as Kafka, Apache Spark, Python, and AWS-based services.
    • Utilize multiple development languages and tools, such as Python, Spark, Hive, and Java, to build prototypes and evaluate results for effectiveness and feasibility.
    • Contribute to determining programming approach, tools, and techniques that best meet the business requirements.
    • Provide subject-matter expertise in the analysis and preparation of specifications and plans for the development of data processes.
    • Operationalize open-source data-analytics tools for enterprise use.
    • Ensure data governance policies are followed by implementing or validating data lineage, quality checks, and data classification.

What You Bring:

    • 6+ years of experience in data platform administration/engineering
    • Capability to architect highly scalable distributed systems using a variety of tools
    • Expert knowledge of data modeling and an understanding of different data structures, including their benefits and limitations under particular use cases
    • Experience using Big Data batch and streaming tools
    • Knowledge and experience using query languages (SQL, Cypher) for relational and graph databases
    • Capability to collaborate with stakeholders and project leaders to understand requirements and deliverables, and to set expectations for the tasks you own
    • Ability to work in a fast-paced, rapidly changing environment
    • Experience working in an agile and collaborative team environment
    • Excellent written and verbal communication, presentation and professional speaking skills
    • Passion for learning and interest in pursuing classroom training and self-discovery on a variety of emerging technologies
    • Hands-on experience with Amazon Web Services (AWS)-based solutions such as Lambda, DynamoDB, and S3, as well as Snowflake
    • Experience in migrating ETL processes (not just data) from relational warehouse databases to AWS-based solutions
    • Experience within the financial industry
    • Strong documentation, customer service and communication skills
    • Innovative mindset and the ability to reach out to and collaborate with other team members and functional teams
    • Ability to handle multiple tasks in a fast-paced environment
    • Most importantly, a commitment to living our Sequoia values day in and day out
Experience Required:

Minimum 6 Years

Vacancy:

2 - 4 Hires
