Intel
- HQ: Santa Clara, US
- Founded: 1968
- Size: 120,000+
- Website: intel.com
Currently tracking 64 active AI roles; new openings are up 216% versus the prior 4 weeks. Primary focus: Serve · Engineering. Salary range $122k–$414k (avg $253k).
- Hiring: 64 / 66 roles open
- Momentum (4w): ↑ +356 (+216%) · 521 opens last 4w vs. 165 prior 4w
- Salary: $122k–$414k (avg $253k) · USD, disclosed roles only
- Tracked since: Feb 3 · last role posted today
Hiring velocity (weekly chart)
Jobs (6)
| Title | Stage | AI score |
|---|---|---|
| **Triton Compiler Engineer.** The role involves developing Triton front-end and back-end components for Intel GPUs, focusing on creating efficient custom GPU kernels for AI workloads. Responsibilities include defining, designing, developing, testing, and maintaining software tools for domain-specific programming languages; working with hardware design teams and compiler development communities; and participating in language standards groups. The ideal candidate has experience in GPU programming for AI, C/C++/Python, compiler stages, code generation, optimization, and GitHub. Familiarity with PyTorch attention techniques for transformer models is also required. | Serve | 7 |
| **AI Validation, Workload Enabling and Tools Engineer.** AI software solution engineer focused on validation and workload enabling for Intel platforms. The role involves optimizing AI model efficiency, accuracy, and performance by working with frameworks, algorithms, and hardware. Key responsibilities include enabling AI models on Intel GPUs, debugging deep learning models, conducting benchmarking and validation, developing automation pipelines, and evaluating AI models against competitors. The role also involves customer engagement for enablement and performance improvements, and translating AI workload needs into architecture insights. | Serve · Eval Gate | 7 |
| **Applied AI (Frameworks) Engineer.** Engineer to work on Intel's AI frameworks software stack, focusing on design, development, and optimization of features for AI accelerators and GPUs. This includes ML kernel development, enhancing training and inference capabilities, and contributing to open-source AI frameworks like PyTorch, TensorFlow, and JAX. | Serve | 7 |
| **Senior System Debug Engineer.** Responsible for the design and development of integrated AI solutions for deep learning and machine learning systems, spanning hardware, software, firmware, board, and silicon components. The role involves AI systems architecture, defining product specifications, and shaping the AI product roadmap. It requires developing new methods across AI/ML domains, leading component-level design choices for performance and cost, defining system integration approaches, and delivering end-to-end technical solutions. The role also includes debugging and ensuring the reliability of AI infrastructure, collaborating on next-generation requirements, and influencing the AI roadmap with customer knowledge. | Serve | 7 |
| **Applied AI Frameworks Engineer.** This role focuses on designing and developing features for Intel's AI frameworks software stack, specifically optimizing inference serving frameworks (e.g., SGLang, vLLM) and ML frameworks (PyTorch, TensorFlow, JAX) for Intel's AI accelerators and GPUs. The engineer will enhance deep learning training and inference capabilities, identify optimization opportunities, and contribute to open-source communities. | Serve | 7 |
| **Applied AI Frameworks Engineer.** Engineer to design and develop features for Intel's AI frameworks software stack, focusing on inference serving frameworks (SGLang, vLLM) and ML frameworks (PyTorch, TensorFlow, JAX). The role involves optimizing software for Intel's AI accelerators and GPUs, enhancing training and inference capabilities, and contributing to open-source communities. | Serve | 7 |