Databricks Engineer
Location: Remote (U.S.)
Employment Type: Full-Time / Contract
About TURNBRIDGE
TURNBRIDGE delivers precision-driven technical solutions and talent strategies that accelerate business outcomes. We partner with organizations to solve complex data, cloud, and engineering challenges—quickly, efficiently, and with unmatched quality. Our team focuses on measurable impact, fast feedback loops, and a streamlined hiring experience that gets the right people in the right seats.
Role Overview
We are seeking an experienced Databricks Engineer with deep expertise in big data engineering, Azure Databricks, and PySpark. This role will focus on building scalable data platforms, optimizing data pipelines, and enabling advanced analytics solutions within a modern cloud architecture. You’ll collaborate with cross-functional teams in an Agile environment to deliver high-quality, production-grade data engineering solutions.
Responsibilities
- Collaborate with stakeholders in an Agile environment to understand data requirements and design scalable data engineering solutions.
- Architect, build, and optimize data pipelines leveraging Azure Databricks, PySpark, and Delta technologies.
- Implement best practices for data governance, quality, and security across data platforms.
- Provide guidance and subject-matter expertise on medallion architecture, Delta Live Tables (DLT), and Unity Catalog.
- Lead and participate in code reviews, documentation, and knowledge-sharing sessions.
- Drive continuous improvement and identify opportunities to enhance information management and data processes.
- Build strong relationships with stakeholders responsible for analytics, data products, and performance management.
Required Qualifications
- 8+ years of enterprise data engineering experience, including design and build of large-scale data platforms.
- Extensive hands-on experience with Azure Databricks, including platform architecture, cluster design, and SQL Warehouse optimization.
- Strong proficiency in PySpark for building batch and real-time data pipelines.
- Deep understanding of Data Lakes, Data Warehouses, and Data Product architecture.
- Experience delivering solutions for migrations, batch processing, and streaming ingestion.
- Experience integrating, or working knowledge of how to integrate, data engineering workflows with metadata management tools (e.g., Collibra).
Key Technical Skills
- Databricks
- PySpark
- Data Modeling
- Azure Data Engineering
- Collibra (optional) – metadata ingestion, lineage mapping, and data quality integration
Nice to Have
- Strong client-facing communication and stakeholder management skills
- Passion for continuous learning in a fast-moving technology environment
- Experience with modern data governance frameworks or platform engineering concepts
What We’re Looking For
The ideal candidate brings:
- 8+ years of experience building analytics and engineering solutions
- 5+ years of hands-on experience with Azure Databricks and PySpark
- Deep technical expertise, intellectual curiosity, and a drive to stay at the forefront of cloud and data engineering technologies