Intel
Building · Industrial
HQ: Santa Clara, US
Founded: 1968
Size: 120,000+
Website: intel.com

Currently tracking 64 active AI roles. New AI openings are up 216% versus the prior 4 weeks (521 vs. 165). Primary focus: Serve · Engineering. Disclosed salaries range from $122k to $414k (avg $253k).

Hiring: 64 / 66
Momentum (4w): +356 (+216%); 521 opens last 4w vs. 165 prior 4w
Salary: $122k–$414k, avg $253k (USD, disclosed roles only)
Tracked since: Feb 3 (last role seen today)
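The momentum figures above follow from the two window counts. A minimal sketch of the likely calculation (an assumption; the tracker's exact rounding rule is not documented):

```python
# Derive the 4-week momentum stats from the raw open counts.
opens_last_4w = 521    # AI openings in the most recent 4 weeks
opens_prior_4w = 165   # AI openings in the 4 weeks before that

delta = opens_last_4w - opens_prior_4w           # absolute change: +356
pct = round(100 * delta / opens_prior_4w)        # percent change: +216%

print(f"+{delta} (+{pct}%)")  # +356 (+216%)
```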
Hiring velocity (new roles per week)
Oct 6: 2 · Dec 8: 1
Jan 5: 3 · Jan 12: 5 · Jan 19: 1 · Jan 26: 2
Feb 2: 6 · Feb 9: 6 · Feb 16: 8 · Feb 23: 18
Mar 2: 22 · Mar 9: 38 · Mar 16: 45 · Mar 23: 29 · Mar 30: 37
Apr 6: 54 · Apr 13: 113 · Apr 20: 110 · Apr 27: 151
May 4: 147

Jobs (4 shown) · 64 AI · 734 total active

Title · Stage · Function · Location · First seen · AI score
Research and Pathfinding Internship: AI Workload Compiler Optimization for CPU and GPU
Internship role focused on advancing compiler infrastructure for heterogeneous AI workloads by developing novel optimization techniques for AI kernel compilation targeting both CPU and GPU architectures using MLIR/LLVM. Explores algebraic optimization, hierarchical scheduling, and cost-driven pruning for high-performance fused kernels.
Serve · Research · Gdansk, Poland · 3w ago · AI score 8
Systems Research Engineer/Scientist
Systems Research Engineer/Scientist role focused on applying AI/ML to improve the efficiency and performance of system architecture innovations, including high-performance cluster computing, virtualization, and accelerated computing. The role involves prototyping, characterizing, and analyzing workloads; developing tools for performance assessment; and influencing future product roadmaps. Requires strong systems knowledge and hands-on experience with AI workloads, with a focus on performance modeling and analysis of AI inference or training.
Serve · Research · Hillsboro, Oregon, United States · 6w ago · AI score 7
Cloud and AI System Intern
Research intern focusing on system reliability (RAS) and silent data error characterization and mitigation for AI and general-purpose compute platforms, including heterogeneous systems and large-scale server clusters. Responsibilities include designing and running experiments, analyzing logs, and prototyping detection/diagnosis methods to improve data integrity and platform robustness across the HW/FW/OS/runtime stack.
Serve · Research · Shanghai, China · 2w ago · AI score 5
Research Intern for Supernode Solution
Research Intern focusing on system innovation, cost optimization, and GPU interconnect protocols for disaggregated AI supernode architectures. The role involves exploring architectural innovations, implementing distributed memory pooling, and researching Ethernet-native GPU interconnect protocols for large-scale AI inference and training clusters. Familiarity with RDMA, Mellanox tools, and LLM inference benchmarking methodologies is required.
Serve / Pretrain · Research · Shanghai, China · 3w ago · AI score 5