AI Infrastructure Operations Engineer

Cerebras · Semiconductors · Headquarters +2 · Deployment

The AI Infrastructure Operations Engineer will manage and operate Cerebras' advanced AI compute clusters, ensuring their health, performance, and availability. This role focuses on maximizing compute capacity, deploying container-based services, and providing 24/7 monitoring and support for large-scale machine learning infrastructure.

What you'd actually do

  1. Manage and operate multiple advanced AI compute infrastructure clusters.
  2. Monitor and oversee cluster health, proactively identifying and resolving potential issues.
  3. Maximize compute capacity through optimization and efficient resource allocation.
  4. Deploy, configure, and debug container-based services using Docker.
  5. Provide 24/7 monitoring and support, leveraging automated tools and performing hands-on troubleshooting as needed (a minimal health-check sketch follows this list).
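
To make items 2, 4, and 5 concrete, here is a minimal Python sketch of the kind of automated container health check this role might script. It assumes the Docker SDK for Python (the docker package) is installed and the local Docker daemon is reachable; the function name and alert format are illustrative, not taken from the JD.

    # Minimal container health-check sketch. Assumes the Docker SDK for Python
    # (`pip install docker`); flags stopped or explicitly unhealthy containers.
    import docker

    def report_unhealthy_containers():
        client = docker.from_env()
        problems = []
        for c in client.containers.list(all=True):  # include stopped containers
            state = c.attrs.get("State", {})
            # "Health" is only present when the image defines a HEALTHCHECK.
            health = state.get("Health", {}).get("Status")
            if c.status != "running" or health == "unhealthy":
                problems.append((c.name, c.status, health))
        return problems

    if __name__ == "__main__":
        for name, status, health in report_unhealthy_containers():
            print(f"ALERT: container {name} status={status} health={health}")

In practice a check like this would feed an alerting pipeline rather than print to stdout, but the shape of the work is the same: enumerate, inspect, flag.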

Skills

Required

  • 6-8 years of relevant experience in managing and operating complex compute infrastructure, preferably in the context of machine learning or high-performance computing.
  • Strong proficiency in Python scripting for automation and system administration.
  • Deep understanding of Linux-based compute systems and command-line tools.
  • Extensive knowledge of Docker containers and of orchestration and scheduling platforms such as Kubernetes (k8s) and SLURM.
  • Proven ability to troubleshoot and resolve complex technical issues in a timely and efficient manner.
  • Experience with monitoring and alerting systems (see the node-state sketch after this list).
  • Proven track record of owning challenges and driving them to completion.
  • Excellent communication and collaboration skills.
  • Ability to work effectively in a fast-paced environment.
  • Willingness to participate in a 24/7 on-call rotation.
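
As a hedged illustration of the orchestration and monitoring items above, the sketch below shells out to SLURM's sinfo to flag nodes in problem states. It assumes sinfo is on the PATH of the host it runs on; the set of states treated as unhealthy is an assumption chosen for illustration, not something specified in the JD.

    # Minimal Slurm node-state sketch. Assumes the `sinfo` CLI is available;
    # BAD_STATES is an illustrative choice, tune it for the actual cluster.
    import subprocess

    BAD_STATES = {"down", "drained", "draining", "fail", "failing"}

    def unhealthy_nodes():
        # One line per node: "<node name> <state>", e.g. "node042 drained".
        out = subprocess.run(
            ["sinfo", "-N", "-h", "-o", "%N %T"],
            capture_output=True, text=True, check=True,
        ).stdout
        bad = []
        for line in out.splitlines():
            if not line.strip():
                continue
            node, state = line.split()
            # Slurm appends "*" to a state when the node is not responding.
            if state.rstrip("*").lower() in BAD_STATES:
                bad.append((node, state))
        return bad

    if __name__ == "__main__":
        for node, state in unhealthy_nodes():
            print(f"ALERT: node {node} is {state}")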

Nice to have

  • Experience operating large-scale GPU clusters (see the GPU telemetry sketch after this list).
  • Knowledge of networking technologies such as Ethernet, RoCE, and TCP/IP.
  • Knowledge of cloud computing platforms (e.g., AWS, GCP, Azure).
  • Familiarity with machine learning frameworks and tools.
  • Experience with cross-functional team projects.
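
For the GPU-cluster item above, here is a minimal per-GPU telemetry sketch. It assumes nvidia-smi is installed on the node being checked; the temperature threshold is a hypothetical value used only to illustrate the alerting pattern.

    # Minimal GPU telemetry sketch. Assumes `nvidia-smi` is on PATH; the
    # temperature limit below is hypothetical, not a Cerebras-specified value.
    import subprocess

    TEMP_LIMIT_C = 85

    def hot_gpus():
        out = subprocess.run(
            ["nvidia-smi",
             "--query-gpu=index,temperature.gpu,utilization.gpu",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout
        hot = []
        for line in out.splitlines():
            if not line.strip():
                continue
            index, temp, util = (field.strip() for field in line.split(","))
            if int(temp) >= TEMP_LIMIT_C:
                hot.append((index, int(temp), util))
        return hot

    if __name__ == "__main__":
        for index, temp, util in hot_gpus():
            print(f"ALERT: GPU {index} at {temp} C ({util}% utilization)")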

What the JD emphasized

  • Proven ability to troubleshoot and resolve complex technical issues in a timely and efficient manner.
  • Proven track record of owning challenges and driving them to completion.

Other signals

  • AI compute clusters
  • Wafer-Scale Engine (WSE)
  • training and inference speeds
  • Generative AI inference solution