AI Agents Applied Engineer - Senior Associate

JPMorgan Chase · Banking · Brooklyn, NY (+1 location) · Consumer & Community Banking

This role focuses on the end-to-end lifecycle of LLM-based agents, from research to production deployment. The engineer will define research directions in areas like multi-step planning, tool use, and safety, build production systems that meet strict constraints, and partner with other teams to bring these systems to market. The role emphasizes building AI that is auditable, explainable, and safe within a regulated financial domain, with a focus on customer-facing financial tasks.

What you'd actually do

  1. Research and deploy agentic AI systems with multi-step workflows, tool calling, and multi-agent orchestration.
  2. Fine-tune and optimize LLMs using parameter-efficient fine-tuning (PEFT), distillation, and quantization to meet production constraints such as latency, memory, and cost.
  3. Apply reinforcement learning and preference optimization to improve personalization and dialogue policies.
  4. Scale LLM systems through caching, batching, prompt governance, and evaluation frameworks.
  5. Implement privacy, safety, and security controls including PCI compliance, jailbreak resistance, and auditability.
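The first responsibility above centers on multi-step, tool-calling agent loops. The shape of such a loop can be sketched as below; this is an illustrative toy, not JPMorgan's implementation. A rule-based stub stands in for the LLM planner so the control flow is runnable end to end, and all names (`run_agent`, `plan_next_step`, `get_balance`) and data are made up.

```python
# Minimal sketch of a multi-step, tool-calling agent loop.
# A real system would query an LLM planner each turn; a rule-based
# stub stands in here so the control flow is runnable end to end.

def get_balance(account: str) -> str:
    """Toy tool: look up an account balance (hardcoded for the sketch)."""
    return {"checking": "$1,200.00"}.get(account, "unknown account")

TOOLS = {"get_balance": get_balance}

def plan_next_step(goal: str, history: list) -> dict:
    """Stub planner: an LLM would pick the next action from the goal and
    history; this stub issues one tool call, then finishes."""
    if not history:
        return {"action": "call_tool", "tool": "get_balance",
                "args": {"account": "checking"}}
    return {"action": "finish",
            "answer": f"Balance: {history[-1]['result']}"}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = []  # retained per step, which also gives an audit trail
    for _ in range(max_steps):
        step = plan_next_step(goal, history)
        if step["action"] == "finish":
            return step["answer"]
        result = TOOLS[step["tool"]](**step["args"])
        history.append({"tool": step["tool"], "result": result})
    return "step budget exhausted"  # bounded loop: fail closed, not forever

print(run_agent("What is my checking balance?"))
```

The step budget and per-step history are the pieces that map onto the JD's auditability and safety emphasis: every tool call is recorded, and the loop cannot run unbounded.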

Skills

Required

  • B.S. with 3+ years or M.S. with 2+ years building and deploying AI systems in production.
  • Applied GenAI experience with LLMs including fine-tuning, prompt engineering, and RAG.
  • Experience scaling LLM systems with caching, batching, governance, and evaluation.
  • Strong foundation in ML, deep learning, statistical modeling, and experimental design.
  • Experience in Information Retrieval (indexing, ranking, retrieval) and/or recommendation systems.
  • Proficiency in Python and ML frameworks (PyTorch/TensorFlow, Hugging Face, scikit-learn).
  • Demonstrated ability to set a technical research agenda and drive it from concept through production deployment.
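The RAG and information-retrieval bullets above boil down to one pipeline: rank documents against a query, then splice the top hits into the prompt. A deliberately tiny sketch, using term overlap as a stand-in for BM25 or embedding similarity; the documents and function names are invented for illustration.

```python
# Toy retrieval-augmented generation (RAG) pipeline: score documents by
# term overlap with the query, then splice the best hit into a prompt.
# Production systems would use a vector index plus a real LLM.

DOCS = [
    "Wire transfers over $10,000 require additional verification.",
    "Savings accounts accrue interest monthly.",
]

def score(query: str, doc: str) -> int:
    """Shared lowercase terms (stand-in for BM25 or embedding similarity)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list, k: int = 1) -> list:
    """Return the k highest-scoring documents (the 'ranking' step)."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list) -> str:
    """Ground the model by prepending retrieved context to the question."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("How do wire transfers work?", DOCS))
```

Grounding answers in retrieved policy text, rather than model memory, is one reason RAG shows up alongside the JD's auditability requirements: the source of each answer is inspectable.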

Nice to have

  • 2+ years developing conversational AI systems, virtual assistants, or LLM-based systems in production.
  • Experience with multi-agent orchestration, supervisor agents, and specialized toolkits.
  • Experience with reinforcement learning, bandit algorithms, and preference-based optimization (DPO, IPO), with practical exposure to data collection, labeling, and evaluation pipelines.
  • MLOps/LLMOps experience with CI/CD, monitoring, versioning, A/B testing, and rollbacks.
  • Track record of data-driven product development and experimentation.
  • Publications in top-tier AI/ML venues and/or open-source contributions.
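The preference-optimization bullet names DPO, whose per-pair objective is compact enough to show numerically. A sketch of the published DPO loss for one preference pair; the log-probabilities below are made-up toy values, where a real pipeline would score the chosen and rejected responses under the policy and a frozen reference model.

```python
# Numeric sketch of the DPO (Direct Preference Optimization) objective
# for a single preference pair: -log sigmoid(beta * margin), where the
# margin compares how much more the policy prefers the chosen response
# over the rejected one, relative to a frozen reference model.
import math

def dpo_loss(pi_w, pi_l, ref_w, ref_l, beta=0.1):
    """pi_*/ref_*: sequence log-probs of the chosen (w) and rejected (l)
    responses under the policy and reference models, respectively."""
    margin = beta * ((pi_w - ref_w) - (pi_l - ref_l))
    # -log(sigmoid(margin)), written as softplus(-margin) for stability
    return math.log(1.0 + math.exp(-margin))

# Toy values: the policy has shifted toward the chosen response (w) and
# away from the rejected one (l) relative to the reference -> small loss.
print(round(dpo_loss(pi_w=-4.0, pi_l=-9.0, ref_w=-5.0, ref_l=-6.0), 4))
```

Flipping the pair (rewarding the rejected response) raises the loss, which is the gradient signal that nudges dialogue policies toward labeled preferences.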

What the JD emphasized

  • production constraints
  • latency
  • accuracy
  • compliance constraints
  • auditable
  • explainable
  • safe
  • highly regulated
  • high-stakes domain
  • PCI compliance
  • jailbreak resistance
  • auditability

Other signals

  • LLM-based agents
  • multi-step planning
  • tool use
  • safety
  • production systems
  • real-world latency, accuracy, and compliance constraints
  • auditable, explainable, and safe in a highly regulated, high-stakes domain