| Title | Stage | AI score |
|---|---|---|
| Software Engineer, Machine Learning Infrastructure: At Whatnot, focusing on scaling AI and ML infrastructure for large language models and other ML applications. Responsibilities include owning AI/ML infrastructure, prototyping and productionizing ML architectures, designing and scaling inference infrastructure for low-latency and high-throughput serving, and building distributed training and inference pipelines. | Serve, Post-train | 8 |
| Software Engineer, Machine Learning Infrastructure: At Whatnot, focusing on building and scaling the core infrastructure for AI and ML models, including low-latency large model serving and distributed training/inference pipelines. |  | 8 |
| Machine Learning Platform Engineer: At Whatnot, focusing on building and scaling the core infrastructure for AI and ML models, including LLM applications, low-latency serving, distributed training, and GPU inference. | Serve, Post-train | 8 |
| Technical Lead Manager, ML Infrastructure: Lead the development and scaling of core ML infrastructure, including low-latency model serving, streaming feature ingestion, distributed training, and high-throughput GPU inference, to power AI/ML applications at consumer scale. This role involves hands-on coding, architectural guidance, and empowering ML scientists. | Serve, Data | 8 |
| Machine Learning Infrastructure Engineer: Seeking an ML Infrastructure Engineer to design and scale core infrastructure for ML and LLM applications, focusing on low-latency serving, distributed training, and high-throughput GPU inference to productionize cutting-edge models. | Serve, Post-train | 8 |
| Senior Engineering Manager, ML Platform: At Whatnot, a livestream shopping platform. This role focuses on leading the development and scaling of core infrastructure for machine learning and self-hosted LLM applications. Responsibilities include building low-latency model serving, streaming feature ingestion, distributed training, and high-throughput GPU inference systems. The role requires strong technical depth, hands-on coding, and managing production ML systems at consumer scale. | Serve, Data | 8 |
| Feature Platform Engineer: This role focuses on building and scaling the feature ingestion and storage infrastructure that powers both core business logic and ML applications. The engineer will work on real-time feature pipelines, optimize system performance, and empower ML scientists to iterate faster by building abstractions and tools. The goal is to enable faster ML model responses to marketplace dynamics and scale AI across the ecosystem. | Serve | 5 |