Senior Data Platform Engineer (Python) | APIs, Analytics & Machine Learning | Hybrid (Montreal) | $140,000–$150,000 CAD + Equity & Benefits
The Company
They make cities understandable. Not in theory, but in data. They’ve built a platform that translates the chaos of urban environments into something usable: insights that help governments, developers, and businesses make better decisions about how cities are built and evolve. Backed by serious funding and born out of academic research, they’re now operating at scale, processing massive datasets across North America and turning them into real-world applications. The mission is ambitious: better cities, built with better data. The kind that actually works for the people living in them.
The Role
This is a senior, hands-on data engineering role focused on scaling and refining a high-volume data platform. You'll work on systems that process tens of terabytes of geospatial and real-world data, powering APIs, analytics products, and machine learning models used by both internal teams and external customers. The platform already works. Your job is to make it faster, cleaner, and harder to break. You won't just be building pipelines; you'll be shaping how data moves, how it's trusted, and how it scales as complexity grows.
The Responsibilities
- Design, build, and operate large-scale batch data pipelines and lakehouse datasets (~30TB+)
- Ensure data systems are reliable, cost-efficient, and consistently refreshed across daily, monthly, and quarterly cycles
- Translate requirements from data science, product, and engineering teams into scalable data architecture
- Establish and enforce standards around data quality, validation, lineage, and governance
- Improve observability, monitoring, and alerting across the data platform
- Drive best practices in software engineering, including testing, performance, and security
- Mentor other engineers through code reviews, design input, and architectural guidance
- Support data delivery into production systems, APIs, and downstream applications
The Requirements
- Proven experience building and operating production-grade batch data pipelines at scale
- Strong Python skills, with a focus on writing clean, testable, production-ready code
- Experience working with modern data stack tools (e.g. lakehouses, orchestration frameworks, distributed processing)
- Solid understanding of data modeling, orchestration, observability, and cost optimization
- Experience integrating pipelines with production databases while maintaining data integrity
- Familiarity with cloud-native environments (AWS preferred), containerization, and distributed systems
- Experience with tools such as Spark, Dagster (or similar), and modern CI/CD workflows
- Exposure to geospatial data or spatial analytics is a strong advantage
- Strong communication skills and the ability to work cross-functionally
- Comfortable operating in a fast-moving environment with evolving priorities
The Remuneration
- $140,000–$150,000 CAD with equity participation
- Comprehensive health coverage (medical, dental, vision)
- Access to mental health support, telemedicine, and employee assistance programs
- Unlimited vacation policy with an emphasis on actual use
- Annual health and wellness allowance
- Remote work setup support
- Annual professional development budget (~$1,500 CAD)
- Additional perks including commuter benefits and a well-located office in Montreal
- Hybrid work model (2–3 days in office per week)
Why Apply
If you like tidy datasets and predictable problems, this probably isn’t your role. This is messy, real-world data at scale, where things break, drift, and evolve constantly. But if you’re the kind of engineer who wants to build systems that actually shape how cities are understood, and you get a kick out of turning complexity into something usable, this is worth your time. Apply if you want your work to matter beyond dashboards and pipelines.