A rapidly scaling tech company is building out its Data Engineering team to support a high-volume, real-time data platform.
The system processes tens of billions of events daily and powers reporting, optimization, and core business operations under strict SLAs. The environment is highly distributed, with a strong focus on scalability, performance, and modern data infrastructure.
Responsibilities
Tech Environment
Python, Spark, Kafka, Presto/Trino, Airflow, Kubernetes, Hive, SQL, Iceberg, and cloud-based analytics tools
Background
Fully remote (U.S. hours). The interview process includes a coding assessment and a technical evaluation.