Currently tracking 66 active AI roles, down 30% from the prior four weeks. Primary focus: Agent · Engineering. Salary range: $130k–$425k (average $220k).
| Title | Summary | Stage | AI score |
|---|---|---|---|
| Staff Software Engineer - GenAI Performance and Kernel | Optimizes GPU kernels for GenAI inference: low-level compute, performance tuning, and integration with ML systems. Requires deep expertise in GPU architecture and optimization techniques, with a focus on shipping high-performance production software. | Serve | 9 |
| Staff Software Engineer - GenAI inference | Owns architecture, development, and optimization of Databricks' high-throughput, low-latency LLM inference engine. Covers kernel-level optimization, runtime development, orchestration, and integration with ML frameworks, bridging research advances with production demands. | Serve | 9 |
| Sr. Manager, Engineering - AI Gateway (LLM Inference) | Leads the teams building the Databricks AI Gateway, an enterprise control plane for governing, routing, and monitoring LLM endpoints, coding agents, and model serving endpoints. Involves launching and growing new products while standardizing, securing, and observing LLM inference traffic and managing cost, performance, and quality. | Serve, Agent | 8 |
| Software Engineer - GenAI inference | Designs, develops, and optimizes the inference engine for Databricks' Foundation Model API. Works across the full GenAI inference stack — kernels, runtimes, orchestration, memory management — to deliver fast, scalable, efficient LLM serving. | Serve | 8 |
| Senior Machine Learning Engineer - GenAI Platform | Builds a customer-facing generative AI platform spanning the ML development lifecycle: data generation, training, evaluation, serving, and agent-building. End-to-end ownership from user-facing product interfaces down to backend distributed systems and low-level GPU orchestration. | Serve, Post-train | 8 |
| Staff Software Engineer - AI Research Infrastructure | Builds and operates Databricks' AI research infrastructure: services for large-scale training and inference workloads, developer tooling, and reliability, efficiency, and security for AI research. Partners with researchers and ML engineers on robust pipelines and the long-term roadmap for research computation. | Serve | 7 |
| Staff Backend Software Engineer - (AI Platform) | Focuses on Foundation Model Serving: high-throughput, low-latency inference for frontier AI models on GPU workloads, serving-infrastructure optimization, and the technical roadmap for LLM APIs and runtimes at scale. Prior ML/AI experience is not required, but experience with large-scale, operationally sensitive distributed systems is critical. | Serve | 7 |
| Staff Backend Software Engineer - (AI Platform) | Works on the Model Serving product: designs systems for high-throughput, low-latency inference across CPU and GPU workloads, optimizes performance, and ensures scalability and reliability. Leads technical initiatives to improve latency, availability, and cost-effectiveness. | Serve | 7 |
| Staff Backend Software Engineer - (AI Platform) | Works on the Model Serving product: builds scalable, low-latency inference systems for CPU and GPU workloads with a focus on operational excellence. Develops core serving infrastructure, drives architectural decisions, and collaborates across teams. | Serve | 7 |
| Staff Backend Software Engineer - (AI Platform) | Builds and improves the infrastructure behind AI offerings such as MLflow, AI Gateway, Agent Framework, and Foundation Model APIs, improving the reliability, latency, and efficiency of distributed AI workloads. | Serve, Agent | 7 |
| Staff Backend Software Engineer | On the AI Platform team: builds and improves LLM infrastructure — model serving, agent support, and Vector Search — powering customer AI workloads. | Serve, Agent | 7 |
| Staff Software Engineer, Foundational Model Serving | Builds and operates high-scale, low-latency inference systems for foundational AI models (LLMs): core systems and APIs for model serving, GPU performance optimization, and architectural direction for the Foundation Model Serving product. | Serve | 7 |
| Sr. Manager, Engineering - Model Serving | Leads the engineering team behind Databricks' Model Serving product, covering both customer-facing capabilities and the foundational infrastructure for scalable, low-latency AI/ML model inference. | Serve | 7 |
| Senior Software Engineer, Model Serving | Designs and builds scalable, low-latency inference systems for AI/ML models (traditional ML through LLMs) on CPU and GPU. Optimizes performance, throughput, autoscaling, and operational efficiency, and contributes to core serving components such as routing, caching, and observability. Requires strong experience in large-scale distributed systems and model serving infrastructure. | Serve | 7 |
| Staff Software Engineer, Model Serving | Works on Model Serving, a core pillar of the platform for deploying and managing AI/ML models: high-throughput, low-latency inference across CPU and GPU workloads, architectural direction, and cross-team collaboration on a world-class serving platform. | Serve | 7 |
| Staff Backline Engineer - Data & AI | Deep-dive troubleshooting, root-cause analysis, and architectural optimization across the Databricks Data and AI ecosystem. Develops automated workflows and AI-driven diagnostic tools to improve supportability; the AI track requires expertise in ML/GenAI systems, LLMs, and agentic workflows. | Serve, Agent | 7 |
| Sr. Solutions Architect - Strategic AI Native | Guides "AI native" customers on the Databricks platform for data engineering, data science, and ML workflows: architecture consulting, proof-of-concepts, and collaboration with sales and product teams. Requires expertise in distributed data systems, Python/SQL, and cloud providers. | Serve | 5 |
| Sr. Specialist Solutions Architect - Builder Team | Builds and maintains infrastructure and backend services for Databricks Labs, integrating AI capabilities into production systems, extending the Databricks platform, and ensuring reliability and observability. | Serve | 5 |
| Data & AI Platform Architect (Professional Services) | Customer engagements on the Databricks platform: designs and builds data engineering, data science, and cloud projects, provides architectural guidance, and drives successful adoption. Requires strong data engineering, distributed computing (Spark), and cloud experience, plus MLOps and CI/CD familiarity. | Serve, Data | 5 |
| Specialist Solutions Architect | Guides customers building big data and AI solutions on the Databricks Lakehouse Platform: architectural design, data engineering, and model deployment, with emphasis on production workloads, performance tuning, and optimization. Apache Spark, MLflow, and cloud platform experience are crucial. | Serve, Data | 5 |
| Data and AI Solution Architect (Professional Services) | Professional Services role in Bavaria, Germany: works with clients on big data challenges using the Databricks platform, designing reference architectures, writing how-to guides, productionizing use cases, and consulting on architecture for AI applications. Requires extensive data engineering, Python or Scala, cloud, Apache Spark, and MLOps experience. | Serve | 5 |
| Staff Data & AI Technical Solutions Engineer | Drives and mentors others in producing Data & AI technical solutions for customer issues: deep expertise in Data & AI architectures and production troubleshooting, including code- and architecture-level dives into Spark, Delta, DLT, and Model Serving. Advises customers and influences engineering on product improvements. | Serve | 5 |
| Senior Software Engineer (Backend) - AI/ML Environments | Builds the infrastructure behind AI training and serving environments, collaborating with other AI infrastructure teams, customers, and product managers to shape how developers interact with AI on Databricks. | Serve | 5 |
| Staff Designated Support Engineer | Specialized support for Databricks' largest customers: advanced troubleshooting of Spark, SQL, Delta, Streaming, and runtime features, AI/ML proof-of-concepts, playbook development, and customer training. Requires deep Big Data, Spark, ML/AI, and cloud expertise. | Serve | 5 |
| Senior Manager, Infrastructure Data Science | Leads a team of data scientists optimizing Databricks' infrastructure with data science and ML: capacity planning, performance optimization, reliability engineering, and efficiency improvements that reduce cost and enhance customer experience. | Serve | 5 |
| Sr Data & AI Technical Solutions Engineer | Supports customers in debugging and maintaining stable production data pipelines and AI workflows: initial analysis, troubleshooting, and resolution for data engineering and AI workloads, code-level deep dives, and contributions to product improvements. | Serve | 5 |
| Solutions Architect - Emerging Enterprise (Startups) | Leads adoption of the Databricks Unified Analytics Platform among emerging enterprise clients: big data architecture consulting, data science and ML proof of concepts, and implementation guidance. Emphasizes open-source projects such as Apache Spark, MLflow, and Delta Lake. | Serve | 5 |