Intel
- HQ: Santa Clara, US
- Founded: 1968
- Size: 120,000+
- Website: intel.com
Currently tracking 64 active AI roles, up 216% versus the prior 4 weeks. Primary focus: Serve · Engineering. Salary range $122k–$414k (avg $253k).
- Hiring: 64 / 66
- Momentum (4w): ↑ +356 (+216%); 521 opens in the last 4 weeks vs. 165 in the prior 4 weeks
- Salary range: $122k–$414k, avg $253k (USD, disclosed roles only)
- Tracked since: Feb 3 (last role posted today)
Hiring velocity
Jobs (798)
| Title | Description | Stage | AI score |
|---|---|---|---|
| Senior AI Software Architect - Runtime | Intel is seeking a Senior AI Software Architect to lead development of its neuromorphic AI execution stack for edge and robotic systems. Involves architecting and optimizing firmware, runtime components, and performance infrastructure; integrating the stack into robotics ecosystems; and providing technical leadership. Requires extensive experience in low-level systems software for AI accelerators, software architecture, and production-grade C++/Python development, with a strong background in AI/deep learning workloads. | Serve · Ship | 9 |
| AI Software Engineer Intern | Internship focused on applied research and productization of Vision-Language Models (VLM) and Vision-Language-Action (VLA) models, including pre-training, fine-tuning, alignment, data pipelines, fusion strategies, action components, and model optimization for efficient deployment on Intel hardware. Involves evaluating models and potentially publishing results. | Post-train · Data | 9 |
| AI Software Engineer Intern | Focuses on building and optimizing a next-generation LLM inference system, including model optimization, inference runtime, and system-level design. Involves research and engineering to implement and optimize core techniques across the stack, from model to kernels to runtime to distributed systems, with a key focus on GPU kernel and runtime optimization for an end-to-end AI rack software system for LLM inference. | Serve | 9 |
| AI Software Engineer Intern | Focuses on building and optimizing a next-generation LLM inference system, including model optimization, inference runtime, and system-level design. Involves research and engineering to implement and optimize core techniques across the stack, from model to kernels to runtime to distributed systems, with a key focus on GPU kernel and runtime optimization for an end-to-end AI rack software system for LLM inference. | Serve | 9 |
| AI Algorithm Research Intern – Neuromorphic Computing | Focused on developing, implementing, and benchmarking algorithms for Intel's next-generation neuromorphic architecture to enable applications in edge computing, signal processing, and autonomous systems. Involves contributing to Intel's neuromorphic SDK and publishing research findings. | Data | 9 |
| AI Algorithm Research Intern – Neuromorphic Computing | Intern position at Intel's Neuromorphic Computing Lab focused on developing, implementing, and benchmarking algorithms for next-generation neuromorphic architectures. Involves supporting application development, publishing research, and contributing to the neuromorphic SDK, with a focus on edge computing, signal processing, and autonomous systems. | Data | 9 |
| Senior GenAI Software Architect | Focused on building and architecting machine learning products and solutions, with a strong emphasis on GenAI algorithms, LLM-based systems, and AI agent development. Involves translating ML models into software, optimizing for edge devices, and supporting customer/partner deployments. | Agent · Post-train | 8 |
| Embodied AI Robot System Intern | Develops and integrates large models (LLMs/VLMs/VLAs) into ROS 2-based robotic systems for perception, planning, and execution. Designs reward functions, training curricula, and evaluation protocols for embodied tasks, with a focus on training RL policies for manipulation. | Agent · Data | 8 |
| Data Science Student for AI Solutions Group | Intel's AI Solutions Group is seeking an MSc/PhD student to work on state-of-the-art AI capabilities for chip development. Involves solving high-value problems using ML, DL, and LLMs, from ideation and research through preparing solutions for deployment. Requires strong Python, solid ML/DL knowledge, and familiarity with tools such as PyTorch or scikit-learn. | Post-train | 8 |
| GPU Power Architect | Designing and developing energy-efficient hardware architectures for AI/ML workloads, specifically for GPUs. Responsibilities include building and validating GPU power models, optimizing for performance per watt, and developing scalable power analysis flows. Requires a strong background in computer architecture, digital logic design, and power modeling. | Serve | 8 |
| Principal Engineer: XeSS and Neural Graphics | Principal Engineer to drive Intel's XeSS and related AI-based graphics technologies, impacting XeSS Super Resolution, Frame Generation, Neural Rendering, and next-gen AI rendering. Involves shaping technical direction, driving execution across research, software, hardware, validation, and ecosystem teams, and bringing AI graphics technologies from concept to product. Responsibilities span end-to-end development across model design, datasets, training, visual quality, performance optimization, and product integration, as well as guiding the application of modern AI model architectures to future graphics workloads. | Ship · Serve | 8 |
| AI Algorithm Engineer Scientist | Focused on generative AI, specifically building next-generation code-generation agents for GPU programming. Involves research and development of ML models, algorithm optimization for CPUs/GPUs, and translating models into deployable products, with a focus on areas such as audio, voice, speech, and vision processing. | Agent · Data | 8 |
| Principal Engineer – Distributed AI Systems Architecture (Heterogeneous Compute) | Architect next-generation distributed AI systems across heterogeneous compute platforms (CPUs, GPUs, accelerators). Focuses on dynamic execution of large-scale AI computation graphs, managing state, locality, and performance. Responsibilities include defining runtime models, stateful scheduling, graph introspection, integrating specialized accelerators, MoE-aware execution, and adaptive runtime optimization. Requires deep expertise in systems architecture, HPC, distributed systems, and heterogeneous compute environments; experience with AI/ML systems and inference infrastructure preferred. | Serve · Agent | 8 |
| Research and Pathfinding Internship: AI Workload Compiler Optimization for CPU and GPU | Advancing compiler infrastructure for heterogeneous AI workloads by developing novel optimization techniques for AI kernel compilation targeting both CPU and GPU architectures using MLIR/LLVM. Explores algebraic optimization, hierarchical scheduling, and cost-driven pruning for high-performance fused kernels. | Serve | 8 |
| Senior GenAI Software Architect | Focused on building and architecting machine learning products and solutions, with a strong emphasis on GenAI algorithms, LLM-based systems, and AI agent development. Involves translating ML models into software, optimizing for edge devices, and supporting customer/partner deployments. | Agent · Post-train | 8 |
| AI Frameworks Software Engineer – Model Compression Algorithm | Develop the Intel Neural Compressor product and related tools, optimized for Intel AI platforms (CPU, GPU, AI accelerators). Research and implement quantization and compression techniques for LLMs and text-to-image/video generation models; track and explore cutting-edge directions in efficient model deployment and inference/fine-tuning acceleration. | Serve · Post-train | 8 |
| Physical AI Engineer | Designing and developing integrated AI solutions for deep learning and machine learning systems, encompassing hardware, software, firmware, and silicon. Involves AI systems architecture, defining product specifications, and shaping the AI product roadmap. Key responsibilities include developing new methods in areas such as reinforcement learning, computer vision, and robotics; leading design and implementation of AI systems; and delivering end-to-end technical solutions for customer problems. Also involves analyzing AI infrastructure reliability and collaborating on next-generation requirements. | Ship · Data | 8 |
| GenAI Software Architect | Focused on building and optimizing AI/ML-based products and solutions, particularly LLM-based systems and AI agents. Requires expertise in GenAI algorithms, solution architecture, and performance tuning, plus experience with frameworks like LangChain and RAG pipelines. Involves developing and deploying machine learning models into software, with a focus on real-world use cases and edge-device optimization. | Agent | 8 |
| Data Scientist | Focused on accelerating pre- and post-silicon validation using AI/ML. Responsibilities include designing and deploying ML algorithms and generative AI pipelines, architecting end-to-end AI systems (data pipelines, training, inference, MLOps), developing advanced AI models for debug efficiency, and applying LLMs/RAG for log summarization and triage automation. Requires strong Python, ML framework, SQL, and software engineering skills; experience with validation environments, transformer models, LLM fine-tuning, and RAG preferred. | Agent · Data | 8 |
| Neuromorphic Applications Engineer (Temporary Position) | Demonstrates the value of Intel's neuromorphic technologies by developing, implementing, and benchmarking algorithms for next-generation neuromorphic architectures. Aims to enable applications in edge computing, signal processing, and autonomous systems for physical AI, with a focus on robotics applications such as VLA models for drones and humanoids. Involves validating the neuromorphic SDK, gathering metrics, proposing software enhancements, and presenting findings. A fixed-term position within Intel's CTO Office aimed at commercializing neuromorphic technology. | Ship · Serve | 8 |
| AI Robotics Engineer (Temporary Position) | Develop, implement, and benchmark advanced robotics algorithms optimized for modern heterogeneous Intel compute architectures, enabling high-performance, efficient solutions for real-world applications in autonomous systems, edge robotics, and intelligent physical systems. Involves integrating and simulating large-scale robotics systems and bringing real robotic platforms to life, with a focus on commercializing these technologies for future Intel and partner products. | Ship · Agent | 8 |
| Senior Principal Engineer – AI Applied Research | Focused on applying AI/ML to logic IP design and semiconductor manufacturing. Involves conducting applied research, developing proof-of-concept models, and implementing solutions that demonstrate business value; requires expertise in deep learning, ML, RL, NLP, GNNs, and time-series analysis. Emphasizes leadership, influencing partners, and mentoring technical leaders. | Post-train | 8 |
| Software Enabling and Optimization Engineer | Optimizing AI software solutions for Intel's AI PC environments in collaboration with customers and ecosystem partners. Develops, integrates, tests, tunes, and debugs software, leveraging tools such as OpenVINO, llama.cpp, Ollama, LM Studio, and vLLM, to enhance product adoption and differentiation. Key responsibilities include researching and prototyping software, evangelizing Intel's tools, leading pre-enabling efforts, and identifying key workloads for future product designs. | Serve | 7 |
| AI Software Engineering Intern | Focused on designing, developing, and optimizing AI algorithms and frameworks, with contributions to implementation, tuning, applied research, and prototyping for scalable AI solutions. Involves computer vision, machine learning, and deep learning; requires Python programming ability and familiarity with ML frameworks. | Serve · Post-train | 7 |
| Triton Compiler Engineer | Developing Triton front-end and back-end components for Intel GPUs, focusing on efficient custom GPU kernels for AI workloads. Responsibilities include defining, designing, developing, testing, and maintaining software tools for domain-specific programming languages; working with hardware design teams and compiler development communities; and participating in language standards groups. The ideal candidate has experience with GPU programming for AI, C/C++/Python, compiler stages, code generation, optimization, and GitHub. Familiarity with PyTorch attention techniques for transformer models is also required. | Serve | 7 |
| AI Software Development Engineer | Optimizing AI inference workloads (LLMs, diffusion models) on Intel GPUs. Involves end-to-end optimization across graph compilation, runtime execution, and low-level GPU kernels; requires strong C++ skills and an understanding of GPU architectures and neural network inference. | Serve | 7 |
| GPU Software Engineer | Focused on AI-driven software development and validation, building and optimizing software quality measurement and tracking systems. Responsibilities include developing high-performance software modules with AI technologies and models, optimizing media drivers, and deploying LLMs/VLMs within agentic frameworks. | Agent · Serve | 7 |
| AI Framework Software Intern | Optimizing AI software solutions, including algorithms, frameworks, and architectures for computer vision, machine learning, and deep learning. Responsibilities include researching model quantization and graph transformation, evaluating LLM performance on Intel platforms, analyzing software bottlenecks, and helping implement and tune AI models for performance and accuracy. Emphasizes hardware-software integration and collaboration toward scalable AI solutions. | Serve | 7 |
| AI Validation, Workload Enabling and Tools Engineer | AI software solution engineer focused on validation and workload enabling for Intel platforms. Optimizes AI model efficiency, accuracy, and performance across frameworks, algorithms, and hardware. Key responsibilities include enabling AI models on Intel GPUs, debugging deep learning models, conducting benchmarking and validation, developing automation pipelines, and evaluating AI models against competitors. Also involves customer engagement for enablement and performance improvements, and translating AI workload needs into architecture insights. | Serve · Eval Gate | 7 |
| Senior AI Algorithm Engineer in oneDNN | Develop and optimize oneDNN, a critical open-source performance library for deep learning applications, enabling state-of-the-art neural network performance across Intel hardware (CPUs, GPUs). Involves low-level performance engineering, parallel algorithm development, and contributing to the open-source community. | Serve · Post-train | 7 |
| Applied AI (Frameworks) Engineer | Work on Intel's AI frameworks software stack, focusing on design, development, and optimization of features for AI accelerators and GPUs. Includes ML kernel development, enhancing training and inference capabilities, and contributing to open-source AI frameworks such as PyTorch, TensorFlow, and JAX. | Serve | 7 |
| AI Frameworks Engineer | Software engineer for Intel's deep learning compiler team, developing and optimizing compiler technology for deep learning workloads on Intel NPUs. Involves analyzing deep learning networks, developing compiler optimization algorithms, and collaborating with hardware and software framework teams to achieve high performance on AI hardware accelerators, with the end goal of high-quality, high-performance, secure product software. | Serve | 7 |
| Lead Senior Design Engineer – AI SoC Development | Responsible for defining, implementing, and validating complex SoC IP blocks and subsystems for AI applications. Involves architectural leadership, microarchitecture and RTL development, verification collaboration, timing/physical design support, and silicon bring-up, while ensuring power, performance, and security requirements are met for next-generation AI solutions. | Serve | 7 |
| Senior System Debug Engineer | Responsible for the design and development of integrated AI solutions for deep learning and machine learning systems, spanning hardware, software, firmware, board, and silicon. Involves AI systems architecture, defining product specifications, and shaping the AI product roadmap. Requires developing new methods across AI/ML domains, leading component-level design choices for performance and cost, defining system integration approaches, and delivering end-to-end technical solutions. Also includes debugging and ensuring AI infrastructure reliability, collaborating on next-generation requirements, and influencing the AI roadmap with customer knowledge. | Serve | 7 |
| AI GPU Arch Perf Optimization Intern | Optimizing core GPU compute kernels for AI and numerical workloads, validating GPU IP with AI inference and training workloads, and performing GPU performance profiling and analysis. Involves hardware/software co-design for next-generation Intel GPU and AI accelerator platforms. | Serve | 7 |
| AI GPU Arch Perf Optimization Intern | Optimizing GPU compute kernels for AI workloads and validating GPU IP. Involves performance profiling, analysis, and modeling to improve next-generation Intel GPU and AI accelerator platforms. | Serve | 7 |
| AI GPU Arch Perf Optimization Intern | Optimizing core GPU compute kernels for AI and numerical workloads, validating GPU IP with AI inference and training workloads, and performing GPU performance profiling and analysis. Involves hardware/software co-design for next-generation Intel GPU and AI accelerator platforms. | Serve | 7 |
| Software Solutions Engineering Intern | Research and application of Vision-Language-Action (VLA) algorithms in robot scenarios. Responsibilities include VLA data collection, model fine-tuning, performance testing, problem analysis, and optimization using Python and frameworks such as lerobot, PyTorch, and TensorFlow. Aims to improve model adaptability and execution accuracy for robot systems. | Post-train · Agent | 7 |
| AI Software Engineer Intern | Optimizing CPU kernels for AI workloads, including LLMs and multimodal models, using Intel architecture features and performance profiling tools. Integrates custom operators into production frameworks. | Serve | 7 |
| Robotics Research Intern | Advanced algorithmic development and robotics research for next-generation robotic technologies. Involves researching, designing, and optimizing robotics algorithms, control systems, and AI/ML models, with a focus on enabling intelligent autonomous systems and innovative robotic applications. Collaboration with cross-functional teams to translate research into practical implementations is key. | Agent | 7 |
| Applied AI Frameworks Engineer | Designing and developing features for Intel's AI frameworks software stack, specifically optimizing inference serving frameworks (e.g., SGLang, vLLM) and ML frameworks (PyTorch, TensorFlow, JAX) for Intel's AI accelerators and GPUs. Enhances deep learning training and inference capabilities, identifies optimization opportunities, and contributes to open-source communities. | Serve | 7 |
| Applied AI Frameworks Engineer | Design and develop features for Intel's AI frameworks software stack, focusing on inference serving frameworks (SGLang, vLLM) and ML frameworks (PyTorch, TensorFlow, JAX). Involves optimizing software for Intel's AI accelerators and GPUs, enhancing training and inference capabilities, and contributing to open-source communities. | Serve | 7 |
| Efficient AI Solutions Engineering Intern | Developing efficient algorithmic solutions for accelerating large AI models and agentic systems, with an emphasis on deployment on resource-constrained computing platforms. | Serve · Agent | 7 |
| AI Tools Development Intern | Development, validation, and deployment of AI tools such as GenAI assistants, RAG-based knowledge tools, and workflow automation agents within an enterprise AI context. Requires basic knowledge of ML/GenAI concepts and Python programming. | Agent | 7 |
| AI Frameworks Engineer – GPU Performance for Generative AI (OpenVINO) | Implementing and optimizing generative AI workloads (LLMs, diffusion models) on Intel GPUs using the OpenVINO inference runtime. Involves analyzing performance bottlenecks, adapting state-of-the-art techniques, and optimizing for current and future GPU architectures; requires deep C++ and system-level expertise. | Serve | 7 |
| AI Compiler and Library Engineer - Intern | Contributing to the design, development, and optimization of AI software solutions, including algorithms, frameworks, and architectures. Focuses on implementing and tuning models for performance and accuracy, applied research, and hardware-software integration, with potential involvement in system-level deployment. Emphasizes learning and skill development through hands-on projects supporting Intel's business goals. | Serve | 7 |
| AI Software Engineering Intern | Contributing to the design, development, and optimization of AI software solutions across computer vision, machine learning, and deep learning. Responsibilities include implementing and tuning models, conducting applied research, and assisting with system-level deployment. | Post-train | 7 |
| AI framework vLLM optimization Intern | Designing, developing, and optimizing AI software solutions, including algorithms, frameworks, and architectures. Key responsibilities include tuning deep learning models, exploring model compression techniques (quantization, pruning), and conducting applied research for system-level deployment and hardware integration. Emphasizes practical engineering applications and inference optimization. | Serve | 7 |
| AI Software Engineer Intern | Design, development, and optimization of AI software solutions, including algorithms, frameworks, and architectures. Responsibilities include implementing and tuning models, applied research, hardware-software integration, and system-level deployment, applying knowledge in computer vision, machine learning, and deep learning with a focus on performance and accuracy. | Serve · Post-train | 7 |
| Workload optimization intern | Optimizing deep learning models and their deployment for Intel GPUs/CPUs. Responsibilities include performance tuning, debugging accuracy and memory issues, developing deployment frameworks (e.g., using vLLM), and creating high-performance kernels. Involves technical syncs with architects and transforming innovative ideas into production-ready features. | Serve | 7 |