| Title | Description | Stage | AI score |
|---|---|---|---|
| Forward Deployed Product Manager | AI Product Manager focused on customer success and translating technical requirements into product offerings for generative AI infrastructure, specifically LLM inference and fine-tuning. | Serve, Post-train | 9 |
| Member of Technical Staff, Performance Optimization | Software Engineer focused on performance optimization for AI infrastructure, improving speed and efficiency across the stack for LLMs, VLMs, and video models. Responsibilities include low-level GPU kernel optimization, distributed systems scaling, and performance analysis for training and inference. | Serve, Post-train | 9 |
| Associate Product Manager | Associate Product Manager role at Fireworks AI, a Series C company focused on generative AI infrastructure. The role involves working on the frontier of AI infrastructure, building tools for developers and AI teams. Responsibilities include acting as a technical advisor to customers, partnering with engineering teams to define and ship product features, and engaging with customers to identify use cases. Designed for early-career candidates with a technical background and curiosity about AI/LLMs. | Serve, Post-train | 8 |
| Solutions Architect | Solutions Architect role focused on customer engagement, technical sales, and solution design for generative AI infrastructure, specifically LLM inference and fine-tuning. The role involves understanding customer needs, designing AI solutions on the Fireworks platform, executing proofs of concept (POCs), and providing performance engineering and model recommendations. Requires strong technical depth in the LLM stack and customer-facing skills, with two tracks: Enterprise SA and Applied AI SA. | Serve, Post-train | 8 |
| Software Engineer, AI Infrastructure | Software Engineer on the AI Infrastructure team at Fireworks AI, designing and building core systems for their generative AI platform, including infrastructure for distributed training, inference, data pipelines, CI/CD, the control plane, and model serving. The role emphasizes reliability, performance, and quality of the AI system, bridging customer needs with the inference engine. | Serve | 8 |
| Member of Technical Staff, Software Engineer | Backend software engineer building the core infrastructure for a generative AI platform, including web applications, model orchestration, billing, APIs, and developer tooling. The role emphasizes platform engineering with product impact, working closely with various teams to ship end-to-end features and improve system reliability and performance. Experience with AI systems and a desire to build products in the AI space are required. | Serve | 7 |
| Member of Technical Staff, Cloud Infrastructure | Software Engineer on the Cloud Infrastructure team responsible for architecting and building foundational systems for a generative AI platform, serving AI workloads globally with high reliability, efficiency, and scalability. Requires deep expertise in distributed systems, cloud-native infrastructure, and ML platforms, with responsibilities including designing and implementing backend services, optimizing infrastructure, and collaborating with ML and product teams. | Serve | 7 |