We are seeking a Backend AI Engineer to be a vital member of an AI-empowered Security Operations product engineering team, responsible for owning and developing the AI subsystems of our product. This role involves defining the architecture, tooling, and strategies necessary to deliver exceptional customer outcomes powered by large language models.
Working closely with the Head of Engineering and the broader engineering team, you will ensure that our AI-powered systems are performant, reliable, and impactful. You will also partner with product management to align your work with the product vision.
Location
● US Remote
Responsibilities
● Architect and Develop AI-Powered Backend Subsystems:
● Collaborate Across Engineering and Product Teams:
● Optimize Model and Prompt Performance:
· Design, test, and refine prompts, pipelines, and fine-tuning strategies to achieve reliable outputs.
· Monitor and optimize inference performance, cost, and accuracy across multiple LLM providers.
● Ensure High-Quality AI Deliverables:
· Establish best practices for prompt engineering, evaluation frameworks, and guardrails.
· Implement automated tests and metrics to ensure consistency and safety of AI outputs.
● Contribute to Full AI Lifecycle Development:
· Participate in all stages of the AI feature lifecycle, from research and prototyping to deployment and continuous improvement.
· Provide technical solutions for both immediate and long-term AI challenges.
● Drive Innovation in AI Systems:
· Stay current with cutting-edge LLM research and industry advancements, incorporating them into our stack.
· Develop creative solutions to complex natural language processing and reasoning problems.
Technical Skills
● Expert-level Knowledge:
· Large Language Models (e.g., GPT, Claude, Llama, Mistral)
· Prompt Engineering and Evaluation Strategies
· Backend AI Orchestration Frameworks (e.g., LangChain, LlamaIndex, custom pipelines)
· Programming Languages (e.g., Python, Go, TypeScript)
· API and Microservice Design for AI workloads
● Working Knowledge:
· Retrieval-Augmented Generation (RAG) and Vector Databases (e.g., Pinecone, Weaviate, FAISS)
· Fine-tuning and Model Adaptation (LoRA, PEFT, RLHF)
· Cloud Services for AI (AWS SageMaker, GCP Vertex AI, Azure OpenAI)
· DevOps for AI (CI/CD, infrastructure as code, model deployment pipelines)
● Familiarity:
· Model Evaluation Frameworks (e.g., RAGAS, TruLens)
· Guardrails and Safety Systems for AI outputs
· ML Experiment Tracking (e.g., Weights & Biases, MLflow)