AI Hire Signal

Tracking AI hiring across 200+ US tech companies. Stage, salary, and stack signals on every role — refreshed weekly.

© 2026 AI Hire Signal · Not affiliated with companies shown

Currently tracking 995 active AI roles, up 64% versus the prior 4 weeks. Primary focus: Agent · Engineering. Salary range $65k–$465k (avg $196k).

Hiring: 995 / 995 active
Momentum (4w): +403 (+64%) · 1033 opens last 4w vs 630 prior 4w
Salary range: $65k–$465k · avg $196k (USD, disclosed roles only)
Tracked since: Oct '24 · last role added today
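The momentum card is simple window arithmetic: the absolute and percent change between the last four weeks of new-role counts and the four weeks before that. A minimal sketch (the `momentum` helper is illustrative, not part of the site):

```python
def momentum(last_4w: int, prior_4w: int) -> tuple[int, float]:
    """Return (absolute change, percent change) between two 4-week windows."""
    delta = last_4w - prior_4w
    pct = 100.0 * delta / prior_4w  # change relative to the older window
    return delta, pct

# Figures from the card: 1033 opens last 4w, 630 in the prior 4w.
delta, pct = momentum(1033, 630)
print(f"{delta:+d} {pct:+.0f}%")  # -> +403 +64%
```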
Hiring velocity
[Weekly new-roles chart, Oct '24 – May '26: a handful of roles per week through mid-'25, double digits from late Oct '25, 60+ per week from late Jan '26, peaking at 304 the week of May 4 '26; the most recent week (May 11) added 241.]

Jobs (92)

995 AI · 2722 total active
Filtered: Stage = Serve · Country = United States · active, AI-only (score ≥ 7) · sorted by AI score
Stage: Data 53 · Pretrain 9 · Post-train 93 · Serve 124 · Agent 437 · Eval Gate 25 · Ship 254
Function: Engineering 778 · Research 175 · Product 42
Country: United States 653 · Canada 48 · United Kingdom 18 · India 17 · Spain 13 · Australia 11 · Romania 7 · Belgium 6 · Germany 6 · Poland 6 · Taiwan 6 · China 5 · Japan 5 · Singapore 5 · Brazil 4 · Mexico 4 · France 3 · Netherlands 3 · Switzerland 3 · Philippines 2 · Vietnam 2 · Egypt 1 · Estonia 1 · Italy 1 · South Korea 1 · Sweden 1 · Thailand 1
Title · Stage · Function · Location · First seen · AI score
Sr Software Development Engineer, EC2 Nitro Machine Learning Systems
Senior Software Development Engineer role focused on building and scaling machine learning infrastructure for EC2 Nitro, supporting training and inference workloads for various ML applications including LLMs and multimodal systems. The role involves designing innovative technologies, leading technical projects, developing regression testing systems, and collaborating with hardware teams to optimize platform designs for ML performance.
Serve · Data · Engineering · Seattle, WA · 4w ago · 7
Applied Scientist II, Annapurna ML
Applied Scientist II role focused on enhancing ML accelerator software (Trainium/Inferentia) to accelerate customer adoption. Responsibilities include developing ML/RL for code generation/optimization, creating ML compiler techniques, building validation tools, and designing high-performance kernels. The role involves working with customers, engineering teams, and research communities to advance ML systems, with a focus on inference performance and training cost optimization.
Serve · Data · Engineering · Cupertino, CA · 4w ago · 7
Interdisciplinary Sys Engineer, GES NA Ops Engineering
This role focuses on integrating computer vision, edge computing, and physical automation systems to enable real-time operational intelligence, improve equipment performance, and optimize process flow within global fulfillment networks. The engineer will bridge AI/ML models with physical systems, leading the development and deployment of sensor-driven automation solutions and ensuring seamless integration across hardware, software, and control layers.
Serve · Agent · Engineering · Bellevue, WA · 5w ago · 7
Senior Applied Scientist, Agentic WorkSpaces
Senior Applied Scientist role focused on building predictive intelligence for capacity management in AWS workspaces. This involves developing ML systems for demand forecasting, resource optimization, and cost efficiency at enterprise scale. The role requires translating business needs into production ML systems, designing algorithms, and applying advanced ML techniques like time-series forecasting, reinforcement learning, and causal inference. Emphasis on low-latency, large-scale data processing, and collaboration with product and engineering teams.
Serve · Agent · Engineering · Seattle, WA · 5w ago · 7
ML Compiler Engineer II - Neuron Kernel Interface, Annapurna Labs
ML Compiler Engineer II on the Neuron Compiler Automated Reasoning Group, developing and maintaining tooling for fuzzers and specification synthesis for an LLVM-based compiler targeting ML accelerators (Inferentia/Trainium) for domains like Large Language and Vision. Focus on accuracy and reliability of the compiler stack.
Serve · Engineering · Boston, MA · 5w ago · 7
Sr. Worldwide Specialist - GenAI, Foundation Models, Data & AI GTM
This role focuses on defining and executing Go-to-Market (GTM) strategies for AWS's generative AI (GenAI) infrastructure, specifically targeting large-scale model training and inference workloads. The individual will work with key customers (Frontier AI model builders) to accelerate their adoption of AWS services, understand their infrastructure needs, and influence product roadmaps. The role involves business development, customer engagement, evangelism, and collaboration with internal AWS teams.
Serve · Pretrain · Product · San Francisco, CA · 6w ago · 7
Software Dev Engineer, AWS Identity Analytics Platform
Software Development Engineer role focused on building and operating the data platform infrastructure for an AI-driven analytics platform at AWS Identity. This involves designing and managing ingestion, transformation, and serving pipelines for petabyte-scale data to feed ML models and LLM agents. The role also includes productionizing ML models, building feature engineering infrastructure, and ensuring platform resilience and scalability.
Serve · Data · Engineering · Seattle, WA · 7w ago · 7
Senior AI Hardware Systems Engineer, Annapurna Labs, Trainium Machine Learning Fleet Operations
This role focuses on the operational excellence and reliability of a fleet of ML accelerators and server products, specifically Amazon's Trainium chips. The engineer will be responsible for debugging hardware and software issues, developing automation, analyzing fleet data, and ensuring the health and performance of the ML hardware infrastructure. This is an engineering role focused on the operational aspects of serving ML hardware.
Serve · Engineering · Austin, TX · 8w ago · 7
Software Development Engineer, Data Integration AI and Platform Excellence (APEX)
Software Development Engineer role focused on building AI/ML-powered products and infrastructure for data integration workflows at Amazon.
Serve · Engineering · NY +1 · 8w ago · 7
Senior Software Dev Engineer, EC2 Nitro
Senior Software Development Engineer to build and optimize infrastructure for AI/ML workloads on EC2 Nitro. Focus on performance measurement, benchmarking, regression testing, and influencing future hardware designs for LLMs, multimodal systems, and emerging architectures. Role involves both customer-facing performance problem-solving and foundational infrastructure development.
Serve · Engineering · Seattle, WA · Mar 13 · 7
Software Dev Engineer, EC2 Nitro
Software Development Engineer to build and optimize performance measurement infrastructure for AI/ML workloads on AWS EC2 Nitro. The role involves low-level systems, ML frameworks, and serving layers to translate performance insights into technical requirements for platform designs.
Serve · Engineering · Seattle, WA · Mar 13 · 7
Applied Scientist III - AMZ9675101
Applied Scientist III role focused on designing, developing, evaluating, deploying, and updating data-driven models and analytical solutions for machine learning and natural language applications. The role involves applying statistical modeling, optimization, and ML techniques, building and deploying models in production, and researching novel ML approaches. Requires a Master's degree (or equivalent experience) in a related field and experience in programming and developing supervised/unsupervised ML models.
Serve · Research · Chicago, IL · Mar 12 · 7
Software Development Engineer II, Post Silicon Validation
Software Development Engineer II, Post Silicon Validation for AWS's next-generation machine learning accelerators. This role involves validating the complete vertical stack of ML accelerators, from silicon to system, ensuring quality and performance for AWS cloud infrastructure. Responsibilities include developing validation strategies, executing test plans, hardware bring-up and debug, and collaborating with cross-functional teams.
Serve · Engineering · Austin, TX · Mar 4 · 7
Sr. Systems Development Engineer (AWS Generative AI & ML Servers), AWS HW Engineering
This role focuses on building and operating AWS cloud infrastructure for AI training and inference, specifically targeting high-performance and scalable solutions for large language models. The engineer will work on server designs, system-level debugging, and implementing automation solutions, including agentic workflows and AI-driven tools, to enhance the productivity of other engineers and influence AI implementation and core architecture.
Serve · Engineering · Austin, TX · Feb 24 · 7
Machine Learning Engineer II, Special Projects
Machine Learning Engineer II on an Amazon Special Projects team focused on creating new products and services using Generative AI and LLMs. Responsibilities include developing and maintaining platforms for LLM development, evaluation, and deployment, processing large datasets, scaling models, and optimizing performance. Experience with distributed model training is required.
Serve · Post-train · Engineering · Seattle, WA · Feb 11 · 7
Software Engineer- AI/ML, AWS Neuron Distributed Training - Performance Optimization
Software Engineer focused on performance optimization for distributed training of large-scale AI/ML models (LLMs, multi-modal) on AWS Neuron accelerators. This involves tuning across the software stack, including collective communications, memory utilization, compiler optimizations, and kernel performance, working with PyTorch and JAX.
Serve · Post-train · Engineering · Seattle, WA · Feb 5 · 7
Software Development Manager - Compiler, AWS Neuron, Annapurna Labs
Seeking a Software Engineering Manager to lead a team developing compiler optimization algorithms and deploying a new compiler for AWS custom hardware (Inferentia and Trainium chips). The role involves technical leadership, mentoring, and partnering with AWS ML services teams to improve deep learning model performance and productivity.
Serve · Engineering · Cupertino, CA · Feb 4 · 7
Software Development Engineer II, AI/ML Elastic Collectives - Annapurna Labs
Software Development Engineer II at Amazon's Annapurna Labs, focusing on distributed AI/ML systems and collective operations for scaling AI across multiple accelerators and servers. The role requires strong C/C++ and Linux skills, with experience in embedded systems, high-speed networking, or HPC interconnects being valuable. This position is on the forefront of AI/ML, working with large-scale clusters and models within AWS's EC2 infrastructure.
Serve · Engineering · Cupertino, CA · Jan 29 · 7
Sr. Machine Learning - Compiler Engineer III, AWS Neuron, Annapurna Labs
This role is for a Sr. Machine Learning Compiler Engineer III on the AWS Neuron team, focusing on the development and scaling of a compiler for ML accelerators. The role involves architecting and implementing features for a deep learning compiler stack that optimizes neural network performance on custom AWS hardware, integrating with frameworks like PyTorch and TensorFlow. The goal is to provide significant performance improvements for large-scale ML workloads.
Serve · Engineering · Cupertino, CA · Jan 28 · 7
Professional Services III - AMZ13646.11
This role focuses on building and deploying reliable, scalable, and high-performance ML/AI solutions, leveraging Big Data, AppDev, or DevOps experience. It involves working closely with Data Scientists and Data Engineers to deliver end-to-end solutions, utilizing ML frameworks, algorithms, and ML pipelines, with a strong emphasis on hosting and deployment of models.
Serve · Data · Engineering · NY +1 · Jan 27 · 7
Software Development Manager, Neuron Tools, Annapurna Labs
Software Development Manager for AWS Neuron Tools team, responsible for leading engineers to develop and maintain high-performance monitoring and profiling tools for AI accelerators (Inferentia, Trainium). The role involves managing the full development lifecycle of the Neuron Profiler, ensuring scalability, reliability, and usability, and collaborating with cross-functional teams to optimize AI workloads. Experience with ML-specific profiler tools and performance analysis is required.
Serve · Engineering · Seattle, WA · Jan 27 · 7
Software Development Engineer, ML Systems, Annapurna Labs
Software Development Engineer focused on building and applying AI agents to simplify and accelerate customer adoption of AWS Neuron ML chips (Trainium and Inferentia). The role involves working with external and internal customers to identify obstacles and opportunities for accelerating adoption, and transforming service performance, durability, cost, and security.
Serve · Engineering · NY +1 · Jan 3 · 7
Machine Learning Engineer, AWS Neuron Inference, Annapurna ML
Machine Learning Engineer role focused on optimizing and tuning inference performance for AWS Neuron accelerators, specifically for large language models (LLMs) and other key ML model families. The role involves developing and performance tuning building blocks for the distributed inference library, ensuring high performance and efficiency on Trn2 and Trn3 servers. Requires experience with LLM inference optimization, kernels, Python, PyTorch, or JAX.
Serve · Engineering · Seattle, WA · Dec '25 · 7
Sr. SoC Power Engineer, Annapurna Labs - Cloud Scale Machine Learning
This role is for a Senior SoC Power Engineer focused on developing and optimizing power consumption for machine learning accelerators (Inferentia and Trainium SoCs) within AWS. The engineer will be responsible for power analysis and modeling from RTL to netlist, identifying power saving opportunities, and correlating simulation results with lab measurements. This is an engineering role focused on the hardware infrastructure that powers AI workloads.
Serve · Engineering · Austin, TX · Dec '25 · 7
Software Development Manager, ML Accelerators, AWS Neuron, Annapurna Labs
Software Engineering Manager to lead a team focused on machine learning compiler design and development for AWS Neuron, driving optimization techniques, hardware bring-up, and influencing pre-silicon design decisions to accelerate ML infrastructure.
Serve · Engineering · Seattle, WA · Dec '25 · 7
Machine Learning Compiler Engineer
The Machine Learning Compiler Engineer will work on the Amazon Neuron team to develop and scale a deep learning compiler stack for Amazon's custom ML accelerators (Inferentia and Trainium). This role involves optimizing neural network models for inference and training performance, integrating with ML frameworks, and contributing to the software stack that enables large-scale ML workloads. The engineer will be involved in pre-silicon design and bringing new features to market.
Serve · Engineering · Cupertino, CA · Nov '25 · 7
C/C++ Hardware / Software Co-Design SDE, Machine Learning Acceleration Systems
This role involves developing bare metal firmware for custom ASIC-based ML Accelerator chips, focusing on hardware/software co-design for machine learning acceleration systems. The engineer will work on the firmware that drives neural network model execution on custom silicon, collaborating with hardware design teams. While no prior ML knowledge is required, the role is core to enabling ML infrastructure.
Serve · Engineering · Austin, TX · Nov '25 · 7
Post-Silicon Systems Validation Engineer, Annapurna Labs
This role focuses on validating next-generation machine learning accelerators for AWS, covering the entire vertical stack from silicon to system. The engineer will develop and execute validation strategies, conduct hands-on bring-up and debug, and collaborate with various teams to ensure the quality and performance of AI/ML accelerators used in AWS data centers for AI training and inference.
Serve · Engineering · Austin, TX · Nov '25 · 7
Sr Software Dev Engineer, Machine Learning, Sponsored Products and Brands Ads Response Prediction
This role focuses on enhancing the scalability, automation, and efficiency of large-scale training and real-time inference systems for Amazon Ads' Sponsored Products and Brands. The engineer will pioneer LLM inference infrastructure and work with applied scientists to optimize ML models and infrastructure, implementing end-to-end solutions. The team builds advanced ML models and infrastructure, from training to inference, including LLM-based systems, to deliver relevant ads.
Serve · Post-train · Engineering · Palo Alto, CA · Nov '25 · 7
Machine Learning - Compiler Engineer , AWS Neuron, Annapurna Labs
Software Engineer role focused on building and optimizing the AWS Neuron compiler for custom AI chips (Inferentia and Trainium). The role involves transforming ML models (PyTorch, TensorFlow, JAX) into optimized code for these accelerators, with a focus on large language models and diffusion models. Requires strong software engineering skills, particularly in C++, and experience with compiler technologies is preferred.
Serve · Engineering · Cupertino, CA · Oct '25 · 7
Sr. Post-Silicon Systems Software Validation Engineer, Annapurna Labs
This role focuses on validating next-generation machine learning accelerators for AWS, covering the full vertical stack from silicon to system. The engineer will be responsible for developing validation strategies, executing test plans, debugging hardware and software, and collaborating with cross-functional teams to ensure the quality and performance of AI/ML accelerators used in AWS data centers.
Serve · Engineering · Austin, TX · Oct '25 · 7
Sr. System Development Engineer, AGI Infrastructure
The AGI team is seeking engineers to develop and maintain multi-modal and multi-lingual LLMs using scalable training and inference systems. The role involves deeply understanding technology landscapes, evaluating new technologies, and driving operational excellence. Key responsibilities include leading the design and automation of GenAI training compute infrastructure, mentoring engineers, identifying performance bottlenecks, and working with core AWS services, CI/CD pipelines, and Kubernetes.
Serve · Engineering · IN, TN +1 · Oct '25 · 7
Sr. Software Development Engineer, Annapurna Labs
Senior Software Development Engineer at Amazon Annapurna Labs focused on leading a technical team to develop profiling and optimization tools for the Neuron ML accelerators fleet. The role involves working with hardware and software teams to identify bottlenecks and provide recommendations for improving performance of large ML workloads, including custom kernels.
Serve · Engineering · Seattle, WA · Oct '25 · 7
Software Development Manager, LLM Inference Model Enablement, Neuron SDK
Software Development Manager to lead a team optimizing LLMs for inference on AWS custom accelerators (Neuron, Trainium, Inferentia). Focus on improving model enablement speed, experience, usability, and quality through features, infrastructure, tools, and automation. Requires strong background in LLM architectures, performance optimizations, and distributed inference.
Serve · Engineering · Cupertino, CA · Sep '25 · 7
Software Development Engineer, ML Systems, Annapurna Labs
Software Development Engineer focused on ML Systems within Amazon Annapurna Labs, working on AWS Neuron software for ML chips (Inferentia and Trainium). The role involves building and applying AI agents to accelerate customer adoption of this technology, optimizing performance, durability, cost, and security for AWS customers.
Serve · Engineering · NY +1 · Aug '25 · 7
Sr. ML Kernel Performance Engineer, AWS Neuron, Annapurna Labs
Senior ML Kernel Performance Engineer for AWS Neuron SDK, focusing on optimizing deep learning and GenAI workloads on custom ML accelerators (Inferentia, Trainium). The role involves designing and implementing high-performance compute kernels, optimizing performance at the hardware-software boundary, and collaborating with customers and internal teams on model enablement and acceleration.
Serve · Engineering · Cupertino, CA · Aug '25 · 7
Senior Machine Learning Compiler Engineer
Senior Machine Learning Compiler Engineer responsible for the ground-up development and scaling of a deep learning compiler stack for Amazon's ML accelerators (Inferentia and Trainium). The role involves architecting and implementing business-critical features, optimizing neural net models for custom hardware, and integrating with ML frameworks like PyTorch and TensorFlow.
Serve · Engineering · Seattle, WA · Aug '25 · 7
Sr. Machine Learning - Compiler Engineer III, AWS Neuron, Annapurna Labs
This role is for a Sr. Machine Learning Compiler Engineer III on the AWS Neuron team, focusing on the development and scaling of a compiler for ML accelerators. The role involves architecting and implementing features for a deep learning compiler stack that optimizes neural network performance on custom AWS hardware, integrating with frameworks like PyTorch and TensorFlow. The goal is to provide significant performance improvements for large-scale ML workloads.
Serve · Engineering · Cupertino, CA · Jul '25 · 7
Senior Software Development Engineer - Generative AI, Neuron SDK
Senior Software Development Engineer focused on Generative AI within Amazon's Annapurna Labs, specifically working with the Neuron SDK and ML chips (Inferentia and Trainium). The role involves building and applying AI agents to improve customer adoption of these chips, optimizing software solutions for performance, durability, cost, and security, and collaborating with cross-functional teams including compiler, hardware, and ML engineers. Experience in the Generative AI space is a hard requirement.
Serve · Engineering · NY +1 · Jul '25 · 7
Sr. SDM, AI Inference Technology, Neuron SDK
Senior Manager for AI Inference Technology, leading a team to build fundamental inference technology building blocks and libraries for AWS Neuron SDK, optimizing models for Trainium and Inferentia devices. Focuses on the full development life cycle of inference libraries, enabling customers to optimize LLMs, multimodal, and generative models.
Serve · Engineering · Seattle, WA · Jun '25 · 7