Cohere
Scaling · AI Frontier · Enterprise LLMs
Currently tracking 60 active AI roles, up 80% versus the prior 4 weeks. Primary focus: Agent · Engineering.
Hiring
60 / 60
Momentum (4w)
↑+16 +80%
36 opens last 4w · 20 prior 4w
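The momentum figure follows directly from the two 4-week counts shown above; a minimal sketch of the arithmetic (function name is illustrative, not from the tracker):

```python
def momentum(current: int, prior: int) -> tuple[int, float]:
    """Return (absolute change, percent change) between two 4-week windows."""
    delta = current - prior
    pct = 100.0 * delta / prior
    return delta, pct

# 36 openings in the last 4 weeks vs. 20 in the prior 4 weeks
delta, pct = momentum(36, 20)
print(f"+{delta} +{pct:.0f}%")  # prints "+16 +80%"
```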
Salary range
—
Tracked since
Oct '24
Last role
today
Hiring velocity
Jobs (11)
| Title | Stage | AI score |
|---|---|---|
| **Senior Solutions Architect - San Francisco** Focuses on pre-sales and post-sales technical engagements for enterprise AI solutions. The role involves building customer demos and proofs of concept, translating business needs into technical solutions using Cohere's foundation models, fine-tuning, custom agents, and agent orchestration. Responsibilities include supporting deployment, leading technical discussions, and providing customer feedback to the product team. | Agent | 9 |
| **Member of Technical Staff, Senior/Staff MLE** Cohere is seeking a Senior/Staff Member of Technical Staff, Applied ML to work directly with enterprise customers on problems that push LLMs to their limits. This role involves designing custom LLM solutions, delivering production-ready models, and training/customizing frontier models using Cohere's full stack. The position also influences Cohere's foundation models and requires operating with early-startup levels of ownership. Responsibilities include technical leadership, solution design, modeling, customization, customer-facing impact, and team mentorship. | Post-train · Agent | 9 |
| **Member of Technical Staff, MLE** This role focuses on applying and customizing Cohere's frontier LLMs for enterprise customers, involving post-training, retrieval, and agent integrations. The individual will design and deliver production-ready models, influence the development of foundation models, and operate with significant ownership, combining application, research, and customer-facing engineering. | Post-train · Agent | 9 |
| **Applied AI Engineer – Agentic Workflows** Cohere is seeking an Applied AI Engineer to build production-grade AI agents for enterprise customers. This role involves designing, building, and deploying agentic workflows powered by LLMs, integrating them with tools, APIs, and data sources. The engineer will focus on reliability, observability, safety, and auditability, working closely with customers and shaping how agentic systems are built and deployed. | Agent | 9 |
| **Staff Research Engineer, Model Efficiency** Cohere is seeking a Staff Research Engineer focused on Model Efficiency to push the limits of LLM inference efficiency. This role involves exploring and shipping breakthroughs in model architecture, routing optimization, decoding algorithms, software/hardware co-design for GPU acceleration, and performance optimization without compromising model quality. The goal is to improve how fast and efficiently Cohere's foundation models run in production. | Serve · Pretrain | 9 |
| **Member of Technical Staff, Model Efficiency** Cohere is seeking an engineer to improve LLM inference efficiency by optimizing model execution, reducing latency, and increasing throughput. This role involves deep dives into model execution, identifying bottlenecks, and developing optimizations across the inference stack, including GPU/CUDA and kernel-level improvements. | Serve | 9 |
| **Senior Member of Technical Staff, Multimodal AI** Cohere is seeking a Senior Member of Technical Staff to focus on Multimodal AI. This role involves designing and developing cutting-edge multimodal AI systems integrating text, speech, and vision. The candidate will conduct research and experiments on advanced compute infrastructure, exploring novel ideas in multimodal representation learning and transfer learning. The role requires strong software engineering skills, proficiency in Python and deep learning frameworks (JAX, PyTorch, TensorFlow), and knowledge of distributed training strategies for large-scale multimodal models. Experience with autoregressive models for tasks like image/video captioning and speech-to-text is beneficial. The ideal candidate enjoys tuning and optimizing large multimodal models and building evaluations to measure their performance. | Post-train · Agent | 9 |
| **Lead Member of Technical Staff, Inference Infrastructure** Responsible for the design, deployment, and operation of the AI platform delivering large language models through API endpoints. Focuses on optimizing NLP models for low latency, high throughput, and high availability, with a strong emphasis on Kubernetes, GPU workloads, and multi-cloud environments. Requires extensive experience in production infrastructure, distributed systems, and technical leadership, including mentoring engineers and guiding strategic infrastructure decisions. | Serve | 8 |
| **Data Engineer** Cohere is seeking a Data Engineer to work on foundational infrastructure for AI systems, including storage, product launches, and customer experiences. The role involves collaborating with researchers and engineers, running implementations end-to-end, and partnering across departments to define growth strategies. The ideal candidate has 5+ years of experience with production-grade data processing systems, strong Python and SQL skills, and experience with distributed data processing frameworks. | Data | 8 |
| **Staff Software Engineer, Inference Infrastructure** Cohere is seeking a Staff Software Engineer to join its Model Serving team. This role focuses on developing, deploying, and operating the AI platform that delivers Cohere's large language models via API endpoints. The engineer will optimize NLP models for low latency, high throughput, and high availability, working with distributed systems, Kubernetes, and GPU workloads. Experience with cloud platforms and high-performance languages is required. | Serve | 8 |
| **Audio Inference Engineer, Model Efficiency** Cohere is seeking an Audio Inference Engineer to optimize audio inference serving efficiency, focusing on latency, throughput, and quality for real-time and streaming audio workloads. The role involves deep system analysis, bottleneck identification, and developing creative solutions for audio processing and inference. | Serve · Post-train | 8 |