Applied Scientist Intern

Amazon · Big Tech · Newark, NJ · Data Science

This role focuses on designing and implementing innovative AI solutions, developing ML models and frameworks, enabling self-service automation, and building evaluation frameworks to enhance productivity and unlock new value within Audible. The role involves applying ML/AI approaches to solve complex real-world problems and building the blueprint for how Audible works with AI.

What you'd actually do

  1. Design and implement innovative AI solutions across our three pillars: driving internal productivity, building the blueprint for how Audible works with AI, and unlocking new value through ML & AI-powered product features
  2. Develop machine learning models, frameworks, and evaluation methodologies that help teams streamline workflows, automate repetitive tasks, and leverage collective knowledge
  3. Enable self-service workflow automation by developing tools that allow non-technical teams to implement their own solutions
  4. Collaborate with product, design and engineering teams to rapidly prototype new product ideas that could unlock new audiences and revenue streams
  5. Build evaluation frameworks to measure AI system quality, effectiveness, and business impact

Skills

Required

  • Experience programming in Java, C++, Python, or a related language
  • Experience with SQL and an RDBMS (e.g., Oracle) or Data Warehouse
  • Currently enrolled in a Master's or PhD program in Computer Science, Machine Learning, Statistics, NLP, or a related quantitative field
  • Coursework or project experience in at least one of: NLP, recommender systems, machine learning, or deep learning
  • Familiarity with ML frameworks (e.g., PyTorch, TensorFlow, HuggingFace)
  • Experience implementing algorithms using both toolkits and self-developed code

Nice to have

  • Publications at top-tier peer-reviewed conferences or journals
  • Enrollment in a PhD program
  • Hands-on experience with LLMs, RAG pipelines, or fine-tuning (LoRA, PEFT)
  • Experience building or evaluating recommendation systems

What the JD emphasized

  • production-ready models
  • evaluation methodologies
  • evaluation frameworks

Other signals

  • develop machine learning models
  • implement innovative AI solutions
  • build evaluation frameworks
  • enable self-service workflow automation