Intel
- HQ: Santa Clara, US
- Founded: 1968
- Size: 120,000+
- Website: intel.com
Currently tracking 64 active AI roles. Open postings are up 216% over the prior 4 weeks (521 vs. 165). Primary focus: Serve · Engineering. Disclosed salaries range from $122k to $414k (avg $253k).
- Hiring: 64 / 66
- Momentum (4w): +356 (+216%); 521 opens in the last 4 weeks vs. 165 in the prior 4 weeks
- Salary range: $122k–$414k (avg $253k; USD, disclosed roles only)
- Tracked since: Feb 3 (last role posted today)
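The momentum figures above are simple arithmetic on the two 4-week windows. A minimal sketch of how the delta and percent change are presumably derived (the tracker's exact rounding rule is an assumption):

```python
def momentum(opens_last_4w: int, opens_prior_4w: int) -> tuple[int, int]:
    """Return (absolute change, percent change rounded to a whole percent)."""
    delta = opens_last_4w - opens_prior_4w
    pct = round(delta / opens_prior_4w * 100)
    return delta, pct

# The stats above: 521 opens in the last 4 weeks vs. 165 in the prior 4 weeks.
delta, pct = momentum(521, 165)
print(delta, pct)  # 356 216
```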
Hiring velocity (weekly chart not reproduced here)
Jobs (9)
| Title | Stage | AI score |
|---|---|---|
| **AI Software Engineer Intern**. Internship role focused on applied research and productization of Vision-Language Models (VLM) and Vision-Language-Action (VLA) models, including pre-training, fine-tuning, alignment, data pipelines, fusion strategies, action components, and model optimization for efficient deployment on Intel hardware. Involves evaluating models and potentially publishing results. | Post-train · Data | 9 |
| **AI Algorithm Research Intern – Neuromorphic Computing**. Focused on developing, implementing, and benchmarking algorithms for Intel's next-generation neuromorphic architecture to enable applications in edge computing, signal processing, and autonomous systems. Involves contributing to Intel's neuromorphic SDK and publishing research findings. | Data | 9 |
| **AI Algorithm Research Intern – Neuromorphic Computing**. Intern position at Intel's Neuromorphic Computing Lab focused on developing, implementing, and benchmarking algorithms for next-generation neuromorphic architectures. Involves supporting application development, publishing research, and contributing to the neuromorphic SDK, with a focus on edge computing, signal processing, and autonomous systems. | Data | 9 |
| **AI Algorithm Engineer Scientist**. Focused on generative AI, specifically building next-generation code-generation agents for GPU programming. Involves research and development of ML models, algorithm optimization for CPUs/GPUs, and translating models into deployable products, with a focus on areas such as audio, voice, speech, and vision processing. | Agent · Data | 8 |
| **Research and Pathfinding Internship: AI Workload Compiler Optimization for CPU and GPU**. Internship focused on advancing compiler infrastructure for heterogeneous AI workloads by developing novel optimization techniques for AI kernel compilation targeting both CPU and GPU architectures using MLIR/LLVM. Explores algebraic optimization, hierarchical scheduling, and cost-driven pruning for high-performance fused kernels. | Serve | 8 |
| **Senior Principal Engineer – AI Applied Research**. Focuses on applying AI/ML to logic IP design and semiconductor manufacturing. Involves conducting applied research, developing proof-of-concept models, and implementing solutions that demonstrate business value; requires expertise in deep learning, ML, RL, NLP, GNNs, and time-series methods. Emphasizes leadership, influencing partners, and mentoring technical leaders. | Post-train | 8 |
| **Robotics Research Intern**. Focuses on advanced algorithmic development and robotics research for next-generation robotic technologies. Involves researching, designing, and optimizing robotics algorithms, control systems, and AI/ML models to enable intelligent autonomous systems and innovative robotic applications. Collaboration with cross-functional teams to translate research into practical implementations is key. | Agent | 7 |
| **Research Intern: Agent-CC System**. Focused on Confidential AI systems and Agent-CC solutions for hyperscale, involving LLMs, AI systems, and microarchitecture. Requires programming, foundational computer architecture, and AI math skills. | Agent | 7 |
| **Systems Research Engineer/Scientist**. Focused on leveraging AI/ML for higher efficiency and performance in system-architecture innovations, including high-performance cluster computing, virtualization, and accelerated computing. Involves prototyping, characterizing, and analyzing workloads, developing tools for performance assessment, and influencing future product roadmaps. Requires strong systems knowledge and hands-on experience with AI workloads, with a focus on performance modeling and analysis of AI inference or training. | Serve | 7 |