Consumer · Live shopping marketplace
| Title | Stage | AI score |
|---|---|---|
| Software Engineer, Machine Learning Infrastructure at Whatnot, focusing on scaling AI and ML infrastructure for large language models and other ML applications. Responsibilities include owning AI/ML infrastructure, prototyping and productionizing ML architectures, designing and scaling inference infrastructure for low-latency, high-throughput serving, and building distributed training and inference pipelines. | Serve, Post-train | 8 |
| Software Engineer, Machine Learning Infrastructure at Whatnot, focusing on building and scaling the core infrastructure for AI and ML models, including low-latency large model serving and distributed training/inference pipelines. | Serve | 8 |
| Machine Learning Platform Engineer at Whatnot, focusing on building and scaling the core infrastructure for AI and ML models, including LLM applications, low-latency serving, distributed training, and GPU inference. | Serve, Post-train | 8 |
| Technical Lead Manager, ML Infrastructure. Lead the development and scaling of core ML infrastructure, including low-latency model serving, streaming feature ingestion, distributed training, and high-throughput GPU inference, to power AI/ML applications at consumer scale. This role involves hands-on coding, architectural guidance, and empowering ML scientists. | Serve, Data | 8 |
| Machine Learning Infrastructure Engineer. Seeking an ML Infrastructure Engineer to design and scale core infrastructure for ML and LLM applications, focusing on low-latency serving, distributed training, and high-throughput GPU inference to productionize cutting-edge models. | Serve, Post-train | 8 |
| Senior Engineering Manager, ML Platform at Whatnot, a livestream shopping platform. This role focuses on leading the development and scaling of core infrastructure for machine learning and self-hosted LLM applications. Responsibilities include building low-latency model serving, streaming feature ingestion, distributed training, and high-throughput GPU inference systems. The role requires strong technical depth, hands-on coding, and managing production ML systems at consumer scale. | Serve, Data | 8 |