Software Engineer, Observability

Weights & Biases · Bellevue, WA (+3 locations)

Software Engineer on the Observability team at Weights & Biases (a CoreWeave company), focused on building and operating the logging, tracing, and metrics platforms for AI workloads on GPU infrastructure. The role involves designing, building, and maintaining scalable systems that process and surface telemetry data, collaborating with cross-functional teams, and participating in on-call rotations.

What you'd actually do

  1. Design, build, and maintain scalable systems that process and surface telemetry data across distributed environments.
  2. Contribute production-quality code in languages like Go and Python, and improve system reliability through stronger monitoring, alerting, and incident response practices (a minimal instrumentation sketch follows this list).
  3. Collaborate with cross-functional engineering teams to implement observability best practices, support production systems, and help optimize performance across large-scale infrastructure.
  4. Participate in on-call rotations and contribute to continuous improvements based on real-world system behavior.
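
To make item 2 concrete, here is a minimal sketch of the kind of instrumentation work the role implies, using Prometheus's Go client (Prometheus is one of the tools named under "Nice to have" below). The metric name, label, and port are illustrative assumptions, not taken from the posting:

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// requestsTotal counts handled requests by status.
// Hypothetical metric name for illustration only.
var requestsTotal = promauto.NewCounterVec(
	prometheus.CounterOpts{
		Name: "app_requests_total",
		Help: "Total requests handled, by status.",
	},
	[]string{"status"},
)

func handler(w http.ResponseWriter, r *http.Request) {
	requestsTotal.WithLabelValues("ok").Inc()
	w.Write([]byte("ok\n"))
}

func main() {
	http.HandleFunc("/", handler)
	// Expose metrics for Prometheus to scrape at /metrics.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Prometheus would scrape the /metrics endpoint on its pull cycle; alerting rules and dashboards (e.g., in Grafana) build on counters like this one.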

Skills

Required

  • 2+ years of experience in Software Engineering, Site Reliability Engineering, DevOps, or a related field
  • Proficiency in at least one programming or scripting language (e.g., Python, Go)
  • Experience working with Kubernetes, containerization, and microservices architectures
  • Experience participating in on-call rotations, including triaging and escalating production issues
  • Hands-on experience using observability systems (metrics, logging, tracing) to debug distributed systems

Nice to have

  • Experience operating observability platforms or databases (e.g., ClickHouse, Elastic, Loki, VictoriaMetrics, Prometheus, Thanos, OpenTelemetry, Grafana); a minimal OpenTelemetry sketch follows this list
  • Familiarity with infrastructure-as-code tools such as Terraform
  • Experience with modern testing frameworks and deployment strategies (e.g., canary, blue-green)
  • Experience with data streaming technologies (e.g., Kafka, Kafka Connect)
  • Exposure to AI/ML infrastructure, including GPU-based systems, large-scale training/inference workloads, or MLOps tooling
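
As an illustration of the tracing side of the observability stack named above, here is a minimal sketch using the OpenTelemetry Go SDK with a stdout exporter. A production setup would export to an OTLP collector instead, and the tracer and span names are illustrative assumptions:

```go
package main

import (
	"context"
	"log"
	"time"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func work(ctx context.Context) {
	// Child span inherits the trace context, linking the two operations.
	_, span := otel.Tracer("example").Start(ctx, "db-query")
	defer span.End()
	time.Sleep(10 * time.Millisecond)
}

func main() {
	// Export spans to stdout; swap in an OTLP exporter for real deployments.
	exp, err := stdouttrace.New(stdouttrace.WithPrettyPrint())
	if err != nil {
		log.Fatal(err)
	}
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
	defer tp.Shutdown(context.Background())
	otel.SetTracerProvider(tp)

	ctx, span := otel.Tracer("example").Start(context.Background(), "handle-request")
	work(ctx)
	span.End()
}
```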

What the JD emphasized

  • AI workloads
  • GPU-dense infrastructure
  • massive scale
  • distributed systems
  • production systems
  • large-scale infrastructure