Staff Machine Learning Engineer, AI Research

Cribl · Enterprise · CA · Engineering

Cribl, a company building telemetry infrastructure for the AI era, is hiring a Staff Machine Learning Engineer focused on AI Research. The role involves designing, training, and evaluating ML models; running experiments; building ML pipelines; and optimizing model performance. It requires strong Python and ML framework skills, familiarity with MLOps tooling, and an understanding of NLP, computer vision, or reinforcement learning.

What you'd actually do

  1. Design, train, and evaluate machine learning models across a range of research and applied AI initiatives
  2. Run rapid, iterative experiments to test hypotheses and surface insights that drive model improvements
  3. Collaborate closely with researchers and engineers to translate cutting-edge academic advances into practical, production-ready systems
  4. Build and maintain robust ML pipelines for data ingestion, feature engineering, model training, and evaluation
  5. Optimize model performance through fine-tuning, hyperparameter search, and architecture experimentation

Skills

Required

  • Python
  • ML frameworks such as PyTorch or TensorFlow
  • MLOps tooling and infrastructure (e.g., MLflow, Weights & Biases, Kubeflow, or similar)
  • Modern NLP, computer vision, and/or reinforcement learning techniques
  • Ability to move fast without sacrificing rigor

Nice to have

  • Master's or PhD

What the JD emphasized

  • Deep hands-on experience training and evaluating ML models, including language models
  • Solid understanding of modern NLP, computer vision, and/or reinforcement learning techniques
  • Bachelor's degree in Computer Science, Mathematics, Statistics, or a related field with 5+ years of industry or research experience (Master's or PhD a plus)

Other signals

  • AI-enabled security/observability platforms
  • Integrating cutting-edge AI/ML technologies
  • Designing, training, and evaluating machine learning models
  • ML pipelines for data ingestion, feature engineering, model training, and evaluation
  • Optimizing model performance through fine-tuning, hyperparameter search, and architecture experimentation