As a Data Engineer on our Data team, you will play a critical role in designing, building, and maintaining reliable data pipelines and infrastructure that power analytics and data-driven decision-making across the organization. Your primary focus will be ingesting data into our Snowflake data warehouse and transforming and optimizing it using modern data tools such as dbt. Success in this role means delivering trusted, performant, and scalable data systems that enable downstream analytics, reporting, and machine learning.
Key Responsibilities:
- Build New Data Pipelines
  - Design, develop, and maintain reliable, scalable data pipelines from raw data sources to the silver layer within Snowflake.
  - Implement best practices for data ingestion, transformation, and quality assurance using dbt and modern orchestration tools.
  - Ensure data is consistently structured, performant, and available for downstream modeling and analytics.
- Unify and Modernize Orchestration
  - Migrate legacy scripts, manual workflows, and no-code data integrations into a centralized, code-based orchestration framework.
  - Collaborate with the broader data team to streamline and standardize the end-to-end ELT process.
- Troubleshoot and Support Data Consumers
  - Investigate and resolve data discrepancies reported by end users.
  - Perform root cause analysis and implement long-term fixes to prevent recurrence.
  - Maintain clear communication and documentation to build trust and transparency with data consumers.
  - Continuously identify opportunities to improve current processes.
- Enhance Source-to-Silver Data Modeling
  - Refine data modeling patterns for the silver layer to improve clarity, scalability, and usability.
  - Collaborate with Analytics Engineers and downstream consumers to align transformations with business logic and reporting needs.
Core Requirements:
- SQL
- Python
- dbt
- Snowflake (or Databricks)
- Git
- Docker
- AWS (Lambda, S3, Step Functions, ECS, SQS, ECR) or Azure/GCP equivalents
- ETL/ELT concepts
The ideal candidate will also have experience with:
- Retool
- Machine learning (ML)
- Looker / LookML
- Data Orchestration Tools (Airflow, Dagster, Prefect)
- Power BI / DAX
Qualifications:
- Bachelor's degree in a technical or analytical field, or equivalent practical experience
- 3+ years of experience in data engineering or in a similar role building data pipelines and architectures
- 3+ years of experience with SQL
- 3+ years of experience with Python
- 2+ years of experience with cloud-based data platforms such as Snowflake or Databricks
- 1+ year of experience with the AWS ecosystem or Azure/GCP equivalents
- Strong understanding of data modeling, transformation, and governance best practices
- Excellent problem-solving, communication, and collaboration skills
- Comfortable working independently and managing multiple priorities in a fast-paced environment
Compensation & Benefits:
- Competitive salary based on experience
- Comprehensive medical, dental, and vision insurance
- 401(k) plan with company match
- A collaborative and growth-oriented company culture