Senior Staff Graph Retrieval Engineer

About the Opportunity

Join an elite, venture-backed team building next-generation, AI-powered collaboration tools for the enterprise.

Technical Integrity has been retained to lead the search for a rapidly growing startup developing AI systems that improve how large organizations think, communicate, and make decisions. Their product uses cutting-edge AI to identify and resolve coordination gaps automatically — helping teams operate more intelligently and efficiently at scale.

Following a recent and substantial round of funding, the company is expanding its world-class engineering team in Colorado and beyond.


We're hiring a senior staff engineer specializing in knowledge graph retrieval to solve a critical scaling challenge: our AI agents build massive English-language knowledge graphs of enterprise operations, and we need intelligent retrieval systems to extract relevant information from graphs that are 100-10,000x larger than any LLM context window. This is a novel information retrieval problem combining classical search and ranking techniques with cutting-edge agentic LLM approaches, applied to highly unstructured natural language knowledge graphs.

The Problem We're Solving

Our product builds personal AI agents for leaders and managers in large, complex organizations. Each agent constructs a "world model" - an English-language knowledge graph capturing:



  • Company goals and project hierarchies
  • Cross-team dependencies and relationships
  • Project status, risks, blockers, and opportunities
  • People, roles, and communication patterns
  • Decisions, commitments, and timelines

 

An Example of the Retrieval Problem: When a leader asks "what could cause Project X to run behind?", we need to intelligently traverse the knowledge graph to find:



  • Upstream dependencies (projects Project X depends on)
  • Status of those dependencies (are they at risk?)
  • People involved (are they overcommitted?)
  • Recent decisions that might impact timelines
  • Communication patterns (are teams coordinating effectively?)

 

This isn't keyword search. This is graph-structured retrieval over unstructured natural language content where understanding business semantics (dependencies, criticality, risk) is essential.
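As a purely illustrative sketch (the graph schema, relation names, and data below are invented, not the product's actual model), a first cut at this kind of traversal might look like a bounded breadth-first walk that follows business-relevant edges and collects risk signals:

```python
from collections import deque

# Toy knowledge graph: nodes are entities, edges are typed relations.
# Everything here is hypothetical, for illustration only.
GRAPH = {
    "Project X": [("depends_on", "Auth Service"), ("owned_by", "Dana")],
    "Auth Service": [("depends_on", "Identity Platform"), ("status", "at_risk")],
    "Identity Platform": [("status", "on_track")],
    "Dana": [("assigned_to", "Project Y")],
}

def risk_traverse(graph, start, max_hops=3):
    """Breadth-first walk from `start`, collecting upstream risk signals."""
    findings, seen = [], {start}
    queue = deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_hops:  # bound the traversal radius
            continue
        for relation, target in graph.get(node, []):
            if relation == "status" and target == "at_risk":
                findings.append((node, "at_risk"))
            elif relation in ("depends_on", "owned_by", "assigned_to"):
                if target not in seen:
                    seen.add(target)
                    queue.append((target, depth + 1))
    return findings

print(risk_traverse(GRAPH, "Project X"))  # [('Auth Service', 'at_risk')]
```

A real system would replace the hand-coded relation filter with learned relevance scoring, and the exact-match status check with semantic understanding of natural language status updates.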

 


What Makes This Role Unique


Novel Technical Problem

  • Hybrid Structure: Graph structure (dependencies, hierarchies) with unstructured content (natural language descriptions, status updates, Slack messages)
  • Semantic Complexity: Need to understand business concepts like "critical path", "blocking issues", "resource contention", "scope creep"
  • Agentic Retrieval: Classical techniques (graph traversal, embeddings, ranking) PLUS novel approaches (LLM agents crawling their own knowledge graphs)
  • Dynamic Graphs: Knowledge graphs update continuously as we ingest chat messages, docs, emails
  • Query Diversity: From precise ("who owns Project X?") to exploratory ("why might this initiative fail?")


Full-Stack Ownership

This is a small team (11 people) building for massive enterprises. The ideal candidate needs to be able to handle the algorithmic core of this problem plus the surrounding tooling and infra:



  • Design and implement core retrieval algorithms
  • Build production infrastructure (caching, indexing, APIs)
  • Instrument and optimize performance
  • Work directly with product to understand use cases
  • Ship fast and iterate based on real user feedback



Must-Have Experience


Strong CS Fundamentals:


  • Algorithms and data structures (graph algorithms especially)
  • Complexity analysis and optimization
  • Data systems architecture

 

Information Retrieval Expertise:



  • Search Ranking & Relevance: Should be able to discuss specific algorithms (TF-IDF, BM25, learning-to-rank models like LambdaMART) and explain when to use each.
  • Evaluation & Metrics: Must understand precision, recall, F1, NDCG, MRR (Mean Reciprocal Rank). Should have run offline evaluations and A/B tests to measure retrieval quality.
  • Knowledge Graphs - Implementation Depth: Should have built graph data structures or worked with graph databases (e.g., Neo4j, Amazon Neptune), not just queried them.
  • Vector Search - Built Not Just Used: Should understand how vector indexes work (HNSW, IVF, product quantization), not just called Pinecone APIs.
  • Hybrid Retrieval: Should have combined multiple retrieval signals (keywords + vectors + graph structure, or dense + sparse retrieval).
  • Query Understanding: Should understand NLP techniques for parsing user intent—entity extraction, query expansion, semantic parsing, or intent classification.
  • Multi-Hop Reasoning: Bonus points for having built systems that retrieve information across multiple documents or hops (e.g., "find papers cited by papers that cite this paper").
  • Scalability Experience: Should have dealt with large-scale retrieval (millions of documents/nodes, thousands of queries per second).
  • Retrieval for LLMs (RAG): Should understand context window constraints, token budgets, and how to select what to include in LLM context.
  • Real-World Tradeoffs: Should be able to discuss precision vs. recall tradeoffs, latency vs. quality, and when to optimize for each.
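
To make the hybrid-retrieval point concrete: one common way to combine sparse and dense signals is Reciprocal Rank Fusion (RRF). This is a minimal sketch; the document IDs and rankings below are invented for illustration:

```python
def rrf(rankings, k=60):
    """Reciprocal Rank Fusion: score(d) = sum over rankings of 1 / (k + rank)."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    # Sort documents by fused score, highest first.
    return sorted(scores, key=scores.get, reverse=True)

sparse_ranking = ["doc_a", "doc_b", "doc_c"]   # e.g., from BM25 keyword search
dense_ranking  = ["doc_b", "doc_d", "doc_a"]   # e.g., from vector similarity

fused = rrf([sparse_ranking, dense_ranking])
print(fused)  # ['doc_b', 'doc_a', 'doc_d', 'doc_c']
```

RRF is attractive because it needs no score normalization across heterogeneous retrievers; a graph-structure signal could be fused in the same way as a third ranking.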

 

Production Engineering:



  • Built and scaled backend systems (Python, TypeScript, or similar)
  • Experience with databases (PostgreSQL, vector DBs)
  • API design and performance optimization
  • Comfortable with full dev lifecycle (design → build → deploy → monitor)

 

Modern ML/LLM Knowledge:



  • Understanding of RAG (Retrieval-Augmented Generation) architectures
  • Familiarity with LLM capabilities and limitations
  • Bonus: experience building with LLM APIs (OpenAI, Anthropic)
  • Bonus: agentic systems or multi-agent orchestration
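
One way to picture the context-window constraint: given scored chunks from retrieval, a minimal (hypothetical) selector greedily packs the highest-scoring chunks into a fixed token budget. Word count stands in for a real tokenizer here:

```python
def pack_context(chunks, budget):
    """Greedily select highest-scoring chunks that fit the token budget.

    chunks: list of (text, relevance_score) pairs; budget: max tokens,
    where tokens are approximated by whitespace-separated words.
    """
    selected, used = [], 0
    for text, score in sorted(chunks, key=lambda c: c[1], reverse=True):
        size = len(text.split())
        if used + size <= budget:  # skip any chunk that would overflow
            selected.append(text)
            used += size
    return selected

chunks = [
    ("status update on Auth Service migration", 0.9),
    ("quarterly goals for the platform org", 0.7),
    ("offsite planning notes", 0.4),
]
print(pack_context(chunks, budget=12))
```

Production systems typically also deduplicate, rerank, and reserve budget for the prompt and the model's answer.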

 


Educational Background

Ideal:


  • MS or PhD in Computer Science (specializing in IR, ML, NLP, or Data Systems)
  • Top undergrad CS program (Stanford, MIT, CMU, Berkeley, etc.) with relevant coursework

 

Acceptable:


  • Strong BS in CS with exceptional work experience in search/retrieval
  • Self-taught engineers with deep domain expertise (rare but possible)

 

Bonus:


  • Published papers in IR conferences (SIGIR, WWW, WSDM, RecSys)
  • Contributions to open-source search/graph projects
  • Side projects demonstrating depth in knowledge graphs or RAG


Compensation & Benefits:


  • Competitive salary and meaningful equity packages. (Equity is the big story here; take time to learn about it throughout your interviews.)
  • Recent offers for comparable roles have ranged from $275,000–$325,000 base, plus meaningful signing bonuses and equity stakes.
  • Comprehensive health benefits (medical, dental, and vision)
  • 401(k) match
  • Flexible work arrangements — ideally one or two days per week in the office in Boulder, Colorado; remote US also considered (with a preference for NYC or the Bay Area)
  • Opportunity to collaborate with world-class technical peers on groundbreaking AI systems.


Application Process

To apply, please contact Technical Integrity with your resume and a concise statement of interest.

We value transparency, prompt feedback, and a respectful candidate experience throughout.

If you’re a senior software engineer or principal technologist who thrives on autonomy, deep technical challenges, and building at the frontier of AI-assisted engineering, we’d love to hear from you.



