Intel
- HQ: Santa Clara, US
- Founded: 1968
- Size: 120,000+
- Website: intel.com
Currently tracking 64 active AI roles, up 216% versus the prior 4 weeks. Primary focus: Serve · Engineering. Salary range $122k–$414k (avg $253k).
- Hiring: 64 / 66
- Momentum (4w): ↑ +356 (+216%) · 521 opens last 4w · 165 prior 4w
- Salary range: $122k–$414k (avg $253k; USD, disclosed roles only)
- Tracked since: Feb 3 (last role posted today)
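The momentum figures above are consistent with a simple window-over-window comparison: 521 openings in the last 4 weeks versus 165 in the prior 4 weeks gives +356, about +216%. A minimal sketch of that calculation (an assumption about how the dashboard derives it; the function name is illustrative):

```python
def momentum(opens_last_4w: int, opens_prior_4w: int) -> tuple[int, int]:
    """Return (absolute change, rounded percent change) between
    two consecutive 4-week windows of job openings."""
    delta = opens_last_4w - opens_prior_4w
    pct = round(delta / opens_prior_4w * 100)
    return delta, pct

# The dashboard's reported windows: 521 opens last 4w, 165 prior 4w.
print(momentum(521, 165))  # → (356, 216)
```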
Hiring velocity (weekly chart not reproduced here)
Jobs (4)
| Title | Description | Stage | AI score |
|---|---|---|---|
| Research and Pathfinding Internship: AI Workload Compiler Optimization for CPU and GPU | Internship role focused on advancing compiler infrastructure for heterogeneous AI workloads by developing novel optimization techniques for AI kernel compilation targeting both CPU and GPU architectures using MLIR/LLVM. Explores algebraic optimization, hierarchical scheduling, and cost-driven pruning for high-performance fused kernels. | Serve | 8 |
| Systems Research Engineer/Scientist | Role focused on leveraging AI/ML for higher efficiency and performance in system-architecture innovations, including high-performance cluster computing, virtualization, and accelerated computing. Involves prototyping, characterizing, and analyzing workloads, developing tools for performance assessment, and influencing future product roadmaps. Requires strong systems knowledge and hands-on experience with AI workloads, with a focus on performance modeling and analysis of AI inference or training. | Serve | 7 |
| Cloud and AI System Intern | Research intern focusing on system reliability (RAS) and silent-data-error characterization and mitigation for AI and general-purpose compute platforms, including heterogeneous systems and large-scale server clusters. Responsibilities include designing and running experiments, analyzing logs, and prototyping detection/diagnosis methods to improve data integrity and platform robustness across the HW/FW/OS/runtime stack. | Serve | 5 |
| Research Intern for Supernode Solution | Research intern focusing on system innovation, cost optimization, and GPU interconnect protocols for disaggregated AI supernode architectures. Involves exploring architectural innovations, implementing distributed memory pooling, and researching Ethernet-native GPU interconnect protocols for large-scale AI inference and training clusters. Familiarity with RDMA, Mellanox tools, and LLM inference benchmarking methodologies is required. | Serve · Pretrain | 5 |