Software Engineer, Data Infrastructure

at Cursor · Coding AI · San Francisco, CA · Engineering

Cursor, a company focused on automating coding, is hiring a Software Engineer, Data Infrastructure. The role owns and operates the data pipelines and storage systems that power model improvement, evals, and experimentation, with a focus on correctness, cost, and ergonomics. It calls for experience with Spark, Ray Data, and debugging performance issues across the data stack.

What you'd actually do

  1. Own the full ladder: patch what should be patched, redesign what should be redesigned, ship the replacement, and operate it.
  2. Design and ship the replacement for a core pipeline while keeping the existing system running.
  3. Define what needs to be captured and wire it through for new product surfaces lacking instrumentation.
  4. Fix instrumentation gaps, add contracts to prevent recurrence, and ship dashboards to catch issues earlier.
  5. Design schema evolution and validation for multiple consumers depending on overlapping data.
  6. Decide what data is worth keeping, implement retention and compression, and delete what is not.

Skills

Required

  • Spark (Databricks or open-source Spark)
  • Ray Data
  • large data pipelines
  • storage systems
  • debugging performance issues
  • data modeling
  • maintainability

Nice to have

  • ClickHouse
  • dbt
  • Dagster


Our mission is to automate coding. The first step in our journey is to build the best tool for professional programmers, using a combination of inventive research, design, and engineering. Our organization is very flat, and our team is small and talent dense. We particularly like people who are truth-seeking, passionate, and creative. We enjoy spirited debate, crazy ideas, and shipping code.

About the Role

Cursor ships daily. Every release leaves signals behind: telemetry, prompts, completions, agent runs, sessions. Those signals power model improvement, evals, and experimentation. Data infrastructure is what turns them into something teams can trust.

A lot of systems here started simple so we could move fast. Over time, the constraints change and the “good enough” version becomes the bottleneck. This role owns the full ladder: patch what should be patched, redesign what should be redesigned, ship the replacement, and operate it.

Privacy guarantees are part of correctness. What we can retain and use depends on Privacy Mode and org configuration, and getting that wrong breaks a product promise. We choose work by business impact: what blocks product and model teams today, and what will block them next month.

Sample projects include...

  • A core pipeline started as a pragmatic reuse of infrastructure built for something else. It works, but it cannot guarantee properties downstream consumers now need (for example, point-in-time consistency). You design and ship the replacement while keeping the existing system running.
  • A new product surface ships without instrumentation. You talk to the team, define what needs to be captured, and wire it through before the absence becomes anyone else’s problem.
  • Eval coverage drops. You trace it to an instrumentation gap introduced weeks ago by a product change nobody flagged. You fix the gap, add a contract so it cannot recur, and ship the dashboard that would have caught it earlier.
  • Multiple consumers depend on overlapping data. You design schema evolution and validation so changes in one place do not silently degrade the others.
  • Storage costs rise faster than usage. You decide what is worth keeping, implement retention and compression, and delete what is not.

What we're looking for

We’re looking for someone who has built real systems at scale and cares about correctness, cost, and ergonomics.

Strong signals include:

  • Deep experience with Spark (Databricks or open-source Spark both count)
  • Production experience with Ray Data
  • Hands-on ownership of large data pipelines and storage systems
  • Comfort debugging performance issues across client instrumentation, streaming, storage, and model-facing workflows, as well as the compute, storage, and networking layers
  • Clear thinking about data modeling and long-term maintainability
  • Good judgment about when to patch and when to rebuild

Nice to have

  • Experience running or scaling ClickHouse
  • Familiarity with dbt, Dagster, or similar orchestration and modeling tools

We work in person, with cozy offices in North Beach, San Francisco, and Manhattan, New York, replete with well-stocked libraries.

Applying

If there appears to be a fit, we'll reach out to schedule 2-3 short technical interviews. Afterward, we'll schedule an onsite in our office, where you'll work on a small project, discuss ideas, and meet the team.
