Machine Learning Performance Engineer

at Jane Street · Quant · Hong Kong · Machine Learning

Jane Street is hiring a Machine Learning Performance Engineer to optimise the performance of ML models for both training and inference. The role calls for deep expertise in low-level systems programming and GPU optimisation, and a whole-systems approach to performance – including storage and networking – within a high-frequency trading environment.

What you'd actually do

  1. Optimise the performance of our models – both training and inference
  2. Improve straightforward CUDA where it helps, but take a whole-systems approach spanning storage systems, networking, and host- and GPU-level considerations
  3. Ensure the platform makes sense even at the lowest level – is all that throughput actually goodput?
  4. Debug a training run's performance end to end

Skills

Required

  • Experience in low-level systems programming and optimisation
  • Understanding of modern ML techniques and toolsets
  • Systems knowledge to debug training run performance end to end
  • Low-level GPU knowledge (PTX, SASS, warps, cooperative groups, Tensor Cores, memory hierarchy)
  • Debugging and optimisation experience with tools like CUDA-GDB, Nsight Systems and Nsight Compute
  • Library knowledge of Triton, CUTLASS, CUB, Thrust, cuDNN, cuBLAS
  • Intuition about latency and throughput characteristics of CUDA graph launch, tensor core arithmetic, warp-level synchronization, asynchronous memory loads
  • Background in InfiniBand, RoCE, GPUDirect, PXN, rail optimisation and NVLink
  • Understanding of collective algorithms supporting distributed GPU training in NCCL or MPI
  • Inventive approach and willingness to ask hard questions

Nice to have

  • Experience in finance


We are looking for an engineer with experience in low-level systems programming and optimisation to join our growing ML team.

Machine learning is a critical pillar of Jane Street's global business. Our ever-evolving trading environment serves as a unique, rapid-feedback platform for ML experimentation, allowing us to incorporate new ideas with relatively little friction.

Your part here is optimising the performance of our models – both training and inference. We care about efficient large-scale training, low-latency inference in real-time systems and high-throughput inference in research. Part of this is improving straightforward CUDA, but the interesting part needs a whole-systems approach, including storage systems, networking and host- and GPU-level considerations. Zooming in, we also want to ensure our platform makes sense even at the lowest level – is all that throughput actually goodput? Does loading that vector from the L2 cache really take that long?
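
The throughput-vs-goodput distinction above can be made concrete with a toy sketch (the function name and all numbers below are illustrative, not from the JD): raw throughput counts every byte that crosses the wire, while goodput counts only the bytes the application actually wanted.

```python
# Toy illustration of throughput vs goodput: a link can look fast while
# much of its traffic is framing overhead or retransmitted data.

def throughput_and_goodput(payload_bytes, overhead_bytes, retransmits, seconds):
    """Return (raw throughput, goodput), both in bytes per second.

    payload_bytes  - application data delivered exactly once
    overhead_bytes - headers, padding and protocol framing
    retransmits    - payload bytes that had to be sent again
    """
    wire_bytes = payload_bytes + overhead_bytes + retransmits
    return wire_bytes / seconds, payload_bytes / seconds

raw, good = throughput_and_goodput(
    payload_bytes=8_000_000,
    overhead_bytes=1_500_000,
    retransmits=500_000,
    seconds=1.0,
)
# 10 MB/s on the wire, but only 8 MB/s of it was useful work
```

The same accounting applies one level down: a kernel can saturate memory bandwidth while fetching cache lines it mostly discards.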

If you’ve never thought about a career in finance, you’re in good company. Many of us were in the same position before working here. If you have a curious mind and a passion for solving interesting problems, we have a feeling you’ll fit right in.

There’s no fixed set of skills, but here are some of the things we’re looking for:

  • An understanding of modern ML techniques and toolsets
  • The experience and systems knowledge required to debug a training run’s performance end to end
  • Low-level GPU knowledge of PTX, SASS, warps, cooperative groups, Tensor Cores and the memory hierarchy
  • Debugging and optimisation experience using tools like cuda-gdb, nsight-systems and nsight-compute
  • Library knowledge of Triton, CUTLASS, CUB, Thrust, cuDNN and cuBLAS
  • Intuition about the latency and throughput characteristics of CUDA graph launch, tensor core arithmetic, warp-level synchronization and asynchronous memory loads
  • Background in InfiniBand, RoCE, GPUDirect, PXN, rail optimisation and NVLink, and how to use these networking technologies to link up GPU clusters
  • An understanding of the collective algorithms supporting distributed GPU training in NCCL or MPI
  • An inventive approach and the willingness to ask hard questions about whether we're taking the right approaches and using the right tools
  • Fluency in English

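As a sketch of what "collective algorithms supporting distributed GPU training" means, here is a sequential simulation of a ring all-reduce – the classic algorithm behind gradient averaging. This is an illustration of the general technique, not NCCL's internals: each of n ranks ends up with the element-wise sum of all buffers after a reduce-scatter phase followed by an allgather phase.

```python
# Sequential simulation of ring all-reduce over n ranks. Each "step"
# models one round of simultaneous neighbour exchanges on the ring.

def ring_allreduce(buffers):
    """In-place all-reduce: every list in `buffers` ends as the element-wise sum."""
    n = len(buffers)
    size = len(buffers[0])
    assert size % n == 0, "assume the buffer splits evenly into n chunks"
    chunk = size // n

    def span(c):
        c %= n
        return range(c * chunk, (c + 1) * chunk)

    # Phase 1: reduce-scatter. In step s, rank r sends chunk (r - s) to
    # rank (r + 1), which accumulates it. After n - 1 steps, rank r holds
    # the fully reduced chunk (r + 1) mod n.
    for s in range(n - 1):
        for r in range(n):
            dst = (r + 1) % n
            for i in span(r - s):
                buffers[dst][i] += buffers[r][i]

    # Phase 2: allgather. In step s, rank r sends its complete chunk
    # (r + 1 - s) to rank (r + 1), which overwrites its stale copy.
    for s in range(n - 1):
        for r in range(n):
            dst = (r + 1) % n
            for i in span(r + 1 - s):
                buffers[dst][i] = buffers[r][i]

bufs = [
    [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
    [10.0, 20.0, 30.0, 40.0, 50.0, 60.0],
    [100.0, 200.0, 300.0, 400.0, 500.0, 600.0],
]
ring_allreduce(bufs)
# every rank now holds [111.0, 222.0, 333.0, 444.0, 555.0, 666.0]
```

Each rank sends roughly 2 * (n - 1) / n of its buffer in total, which is why the ring variant is bandwidth-optimal and why the latency/bandwidth trade-offs of the interconnect (NVLink, InfiniBand) matter so much in practice.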
If you're a recruiting agency and want to partner with us, please reach out to agency-partnerships@janestreet.com.