The Android XR Ecosystem team is the engine responsible for accelerating the mass adoption of spatial computing. To win, software and silicon must evolve as one.
In this role, you will be responsible for defining the overall architecture of camera systems, graphics, video, and display pixel processing across AR/XR products. You will provide thought leadership in architecting the end-to-end pixel processing pipeline subsystem, including the platform architecture for the ISP/camera system, display pipeline requirements, and cutting-edge graphics and video pipelines, and you will address system-level topics such as camera synchronization, motion-to-render-to-photon latency, sensor road-maps, and display re-projection hardware engines for next-generation AR and XR products.
You will ensure all components work together effectively to meet performance, power, and cost goals for all critical user journeys (CUJs). You will understand the entire pixel processing pipeline, from sensor to final image/video output for human or AI consumption, and collaborate cross-functionally, both internally and externally, with silicon vendors, technology partners, and image sensor vendors.
For decades, the computing revolution has reshaped our world driven by breakthroughs in compute, connectivity, mobile, and now, AI. Google's XR team is at the forefront of the next major leap – the convergence of AI and XR. This is more than just new devices – it's about reimagining how we interact with the world around us. We're building a future where lightweight XR devices like smart glasses and headsets pair with helpful AI to augment human intelligence, offering personalized, conversational, and contextually aware experiences.
The US base salary range for this full-time position is $189,000-$274,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Responsibilities
- Define and document system-level and component-level requirements (e.g., create specifications for performance, power consumption, thermal constraints), and features to support various use cases like photography, videography, multi-modal Artificial Intelligence (AI) and computer vision.
- Drive Image Signal Processor (ISP) architecture and requirements, including throughput, power efficiency, image quality, and pipeline features. Research and evaluate emerging camera algorithms, architectures, and technologies to foster innovation in upcoming products.
- Architect, in collaboration with cross-functional partners, the end-to-end pixel data path for the display sub-systems, including reprojection accelerators, processing stages, and various correction or denoising blocks, and help establish hardware and software interfaces.
- Architect cutting-edge graphics pipelines with minimal motion-to-render-to-photon latency, and analyze bottlenecks to achieve key performance indicators (KPIs) with the best power efficiency.
Qualifications
Minimum qualifications:
- Bachelor’s degree in Electrical Engineering, Computer Engineering, Computer Science, Physics, or a specialized field (e.g., Optics, Sensors, Audio/DSP, etc.), or equivalent practical experience.
- 8 years of experience in camera or ISP system architecture.
- 3 years of experience in technical leadership.
Preferred qualifications:
- Master's degree or PhD degree in Materials Science, Electrical Engineering, Computer Engineering, Physics, or a related field.
- Experience in image processing, denoising, computer architecture, GenAI, event cameras, and super resolution techniques.
- Experience in XR (e.g., Augmented (AR)/Virtual Reality (VR)), mobile, or wearables ecosystems.
- Knowledge of display pipelines, image sensors, and emerging technologies in computer vision, computational photography, and AI on the edge.
- Ability to navigate ambiguity and manage consensus across engineering teams (e.g., Core Tech, Platform Software, Hardware NTI).