AI Hire Signal

Tracking AI hiring across 200+ US tech companies. Stage, salary, and stack signals on every role — refreshed weekly.

© 2026 AI Hire Signal · Not affiliated with companies shown

Currently tracking 66 active AI roles, with new openings down 30% versus the prior 4 weeks. Primary focus: Agent · Engineering. Salary range $130k–$425k (avg $220k).

Hiring: 66 of 68 roles active
Momentum (4w): ↓ 52 (-30%) · 123 opens last 4w vs. 175 prior 4w
Salary range: $130k–$425k · avg $220k (USD, disclosed roles only)
Tracked since: Sep '23 · last role added yesterday
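The momentum figures above follow directly from the two windowed counts shown with them. A minimal sketch of the arithmetic (the function name and signature are illustrative, not the site's actual schema):

```python
def momentum(opens_last_4w: int, opens_prior_4w: int) -> tuple[int, int]:
    """Return (absolute change, percent change rounded to a whole percent)
    between the trailing 4-week window and the 4 weeks before it."""
    delta = opens_last_4w - opens_prior_4w
    pct = round(100 * delta / opens_prior_4w)
    return delta, pct

# The page's numbers: 123 opens in the last 4 weeks vs. 175 in the prior 4.
delta, pct = momentum(123, 175)
print(delta, pct)  # -52 -30
```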
Hiring velocity (weekly new-role counts)

[Chart: weekly new roles across roughly three years of tracking. Most weeks saw 1–6 new roles; counts climbed through the most recent January (8–14/week), ran in the 20s–60s from February through April (peaking at 64 the week of Mar 23), and stood at 15 in the latest full week, May 4.]

Jobs (17)

66 AI · 809 total active
Active filters: Stage = Serve · Function = Engineering
Show: Active only · AI only (score ≥ 7)
Stage: All · Data (1) · Pretrain (2) · Post-train (3) · Serve (18) · Agent (29) · Ship (13)
Function: All · Engineering (55) · Product (6) · Research (5)
Country: All · United States (50) · India (9) · Australia (2) · Singapore (2) · Serbia (1) · Sweden (1) · United Kingdom (1)
Sort: AI score · Recent · Title

Title · Stage · Function · Location · First seen · AI score
Staff Software Engineer - GenAI Performance and Kernel
Staff Software Engineer focused on optimizing GPU kernels for GenAI inference, involving low-level compute, performance tuning, and integration with ML systems. The role requires deep expertise in GPU architecture and optimization techniques, with a focus on shipping high-performance production software.
Serve · Engineering · San Francisco, CA · Oct '25 · AI score 9
Staff Software Engineer - GenAI inference
Staff Software Engineer focused on the GenAI inference engine at Databricks, responsible for architecture, development, and optimization of high-throughput, low-latency LLM inference. This role involves kernel-level optimization, runtime development, orchestration, and integration with ML frameworks, bridging research advances with production demands.
Serve · Engineering · San Francisco, CA · Oct '25 · AI score 9
Sr. Manager, Engineering - AI Gateway (LLM Inference)
Sr. Manager of Engineering to lead teams building the Databricks AI Gateway, an enterprise control plane for governing, routing, and monitoring LLM endpoints, coding agents, and model serving endpoints. The role involves launching and growing new products, focusing on standardizing, securing, and observing LLM inference traffic while managing cost, performance, and quality.
Serve, Agent · Engineering · New York, NY · 6w ago · AI score 8
Software Engineer - GenAI inference
Software Engineer focused on designing, developing, and optimizing the inference engine for Databricks' Foundation Model API. The role involves working on the full GenAI inference stack, including kernels, runtimes, orchestration, and memory management, to ensure fast, scalable, and efficient LLM serving systems.
Serve · Engineering · San Francisco, CA · Oct '25 · AI score 8
Senior Machine Learning Engineer - GenAI Platform
Hiring experienced machine learning platform engineers to build out a customer-facing generative AI platform for the ML development lifecycle, including data generation, training, evaluation, serving, and agent-building. The role involves end-to-end ownership, translating user requirements into product interfaces, and building backend distributed systems. Responsibilities span from user-facing features to low-level GPU orchestration.
Serve, Post-train · Engineering · San Francisco, CA · Sep '23 · AI score 8
Staff Software Engineer - AI Research Infrastructure
Staff Software Engineer focused on building and operating the AI research infrastructure at Databricks. This role involves designing and implementing services for large-scale training and inference workloads, improving developer tooling, and ensuring reliability, efficiency, and security for AI research. The engineer will partner with researchers and ML engineers to create robust pipelines and influence the long-term roadmap for research computation.
Serve · Engineering · San Francisco, CA · 2w ago · AI score 7
Staff Software Engineer - AI Research Infrastructure
Staff Software Engineer focused on building and operating the AI research infrastructure at Databricks. This role involves designing and implementing services for large-scale training and inference workloads, improving developer tooling, and ensuring reliability, efficiency, and security for AI research. The engineer will partner with researchers and ML engineers to create robust pipelines and influence the long-term roadmap for research computation.
Serve · Engineering · San Francisco, CA · 2w ago · AI score 7
Staff Backend Software Engineer - (AI Platform)
Staff Backend Software Engineer for Databricks' AI Platform, focusing on Foundation Model Serving. The role involves designing and implementing high-throughput, low-latency inference systems for frontier AI models on GPU workloads, optimizing serving infrastructure, and influencing the technical roadmap for LLM APIs and runtimes at scale. Prior ML/AI experience is not required, but experience with large-scale distributed systems and operationally sensitive systems is critical.
Serve · Engineering · Mountain View, CA · 6w ago · AI score 7
Staff Backend Software Engineer - (AI Platform)
Databricks is seeking a Staff Backend Software Engineer for their AI Platform team, focusing on the Model Serving product. The role involves designing and building systems for high-throughput, low-latency inference across CPU and GPU workloads, optimizing performance, and ensuring scalability and reliability. The engineer will contribute to core serving infrastructure, collaborate cross-functionally, and lead technical initiatives to improve latency, availability, and cost-effectiveness.
Serve · Engineering · Mountain View, CA · 6w ago · AI score 7
Staff Backend Software Engineer - (AI Platform)
Staff Backend Software Engineer for Databricks' AI Platform, focusing on the Model Serving product. The role involves designing and building scalable, low-latency inference systems for both CPU and GPU workloads, optimizing performance, and ensuring operational excellence. Key responsibilities include developing core serving infrastructure, driving architectural decisions, and collaborating across teams to deliver a world-class serving platform for enterprise AI/ML models.
Serve · Engineering · Mountain View, CA · 6w ago · AI score 7
Staff Backend Software Engineer - (AI Platform)
Staff Backend Software Engineer for Databricks' AI Platform team, focusing on building and improving the infrastructure that powers AI offerings like MLflow, AI Gateway, Agent Framework, and Foundation Model APIs. The role involves improving reliability, latency, and efficiency of distributed AI workloads and collaborating with various teams to deliver seamless end-to-end AI experiences.
Serve, Agent · Engineering · Mountain View, CA · 6w ago · AI score 7
Staff Backend Software Engineer
Staff Backend Software Engineer on the AI Platform team at Databricks, responsible for building and improving LLM infrastructure, including model serving, agent support, and Vector Search, to power customer AI workloads.
Serve, Agent · Engineering · New York, NY · 8w ago · AI score 7
Staff Software Engineer, Foundational Model Serving
Staff Software Engineer focused on building and operating high-scale, low-latency inference systems for foundational AI models (LLMs) at Databricks. The role involves designing and implementing core systems and APIs for model serving, optimizing performance on GPU workloads, and influencing architectural direction for the Foundation Model Serving product.
Serve · Engineering · Mountain View, CA · Oct '25 · AI score 7
Sr. Manager, Engineering - Model Serving
Lead the engineering team responsible for Databricks' Model Serving product, focusing on both customer-facing capabilities and foundational infrastructure for scalable, low-latency AI/ML model inference.
Serve · Engineering · San Francisco, CA · Oct '25 · AI score 7
Senior Software Engineer, Model Serving
Databricks is seeking a Senior Software Engineer to join their Model Serving product team. This role focuses on designing and building scalable, low-latency inference systems for AI/ML models (traditional ML to LLMs) on CPU and GPU. Responsibilities include optimizing performance, throughput, autoscaling, and operational efficiency, as well as contributing to core serving infrastructure components like routing, caching, and observability. The role requires strong experience in large-scale distributed systems and model serving infrastructure.
Serve · Engineering · Mountain View, CA · Oct '25 · AI score 7
Staff Software Engineer, Model Serving
Databricks is seeking a Staff Software Engineer to work on their Model Serving product, which is a core pillar of their platform for enterprises to deploy and manage AI/ML models. The role involves designing and building systems for high-throughput, low-latency inference across CPU and GPU workloads, influencing architectural direction, and collaborating with various teams to deliver a world-class serving platform.
Serve · Engineering · Mountain View, CA · Oct '25 · AI score 7
Staff Backline Engineer - Data & AI
Staff Backline Engineer role at Databricks focused on deep-dive troubleshooting, root cause analysis, and architectural optimization within the Databricks Data and AI ecosystem. The role involves developing automated workflows and AI-driven diagnostic tools to improve supportability and scale the organization. Requires expertise in one of three tracks: Data Engineering, Product Supportability, or AI (ML/GenAI systems, LLMs, agentic workflows).
Serve, Agent · Engineering · United States · Feb '25 · AI score 7