SVP Data Engineering

Recruiter Summary of Position

  • Job Title: SVP Data Engineering
  • Location: Remote (United States)
  • Work Arrangement: Remote (Periodic travel may be required)
  • Required Technical Skills: Data engineering, Databricks, Python/Spark, strong SQL querying skills, etc. Prior experience working with data scientists and data engineers on large-scale data modelling is required, along with machine learning implementation experience (SKU).
  • Preferred Technical Skills: Redis/Bigtable, Feature Store/Prediction Mart development, Delta Sharing, etc.
  • Techno-Functional Skills: Ability to take ideas/goals and translate them into planning and execution. Product road-mapping: driving future direction and priorities for engineering teams, and improving/modifying the product based on customer demands and feedback. The ability to implement/integrate data when working with products is valuable. People-management skills and the ability to manage a team are required, though this is a technically specialized role first and foremost.
  • Employer Value Proposition: Architect and own the core data foundation for The Company's next-generation AI products, driving measurable business impact in a high-momentum, executive-facing role. This is an established company that is not encumbered by top-down bureaucracy, offering the ability to make a rapid impact.


SVP Data Engineering

Imagine stepping into a role where your architectural decisions are the engine for The Company's entire AI and product strategy. As the SVP Data Engineering, you’ll spend your days driving high-impact decisions, partnering directly with company executives to ensure the company’s proprietary database is the most scalable, intelligent, and real-time data foundation in the industry. This is not a maintenance role; it’s an acceleration role. You will be clearing blockers, expediting dependencies, and translating the complexity of large-scale data sets into compelling stories that empower rapid adoption across both technical and non-technical executive teams. You're the bridge between pure data engineering and product growth: a technical executive who thrives on high-speed execution with imperfect information.


Why This Opportunity Is Different

This role is for the visionary Data Executive who is ready to architect and execute a core business asset, not just manage a team.

  • Executive Architectural Ownership: You own the entire strategic vision: from the proprietary architecture to the creation/modification of data models. Your design choices are gospel.
  • Direct AI Impact: Your team builds the foundational data systems that directly feed our next-generation machine learning and AI-driven products. You embed economic considerations into the ML pipeline.
  • High-Velocity Environment: The mandate is speed and impact. You'll operate in a decision-heavy environment, driving momentum and clarity for teams dedicated to real-time data ingestion (Kafka, Structured Streaming) and low-latency serving (Redis/Bigtable).


Core Leadership & Technical Requirements

We are looking for an expert leader with proven success in bringing strategic vision to life through technical execution.


Expert Management & Leadership:

  • Strategic Execution: Proven ability to convert a high-level strategic roadmap (in "mind-meld" with the executive team) into measurable, successfully executed Agile sprints.
  • Clarity and Momentum: Thrives in a fast-paced environment, making clear, data-informed decisions with imperfect information and relentlessly removing blockers for your teams.


Foundational Technical Expertise:

  • Data Engineering: Large-scale data engineering and data modelling experience, including implementing multi-speed data pipelines to meet diverse SLAs (real-time, daily, annual).
  • Databricks Lakehouse Mastery: Expert-level proficiency with the Databricks Lakehouse Architecture, including Delta Lake, Delta Live Tables (DLT), and leveraging Unity Catalog for governance and PII access control.
  • Data Quality Leadership: Demonstrated expertise in establishing quantitative data quality metrics and implementing sophisticated data fusion and canonicalization logic to resolve conflicts across multiple high-volume data feeds.
  • Real-time Processing: Experience designing and implementing data layers using Kafka/Structured Streaming and leveraging low-latency Key-Value caches (Redis/Bigtable).



Apply for this job