Applied Scientist Intern

Amazon · Big Tech · Newark, NJ · Data Science

This Applied Scientist Intern role at Audible focuses on developing innovative AI solutions for recommendation, content understanding, and AI-powered product experiences. The intern will design and implement ML models, frameworks, and evaluation methodologies; enable self-service automation; and build AI systems that drive internal productivity and new product features. The role emphasizes practical implementation, collaboration with cross-functional teams, and raising AI fluency across the organization.

What you'd actually do

  1. Design and implement innovative AI solutions across our three pillars: driving internal productivity, building the blueprint for how Audible works with AI, and unlocking new value through ML & AI-powered product features
  2. Develop machine learning models, frameworks, and evaluation methodologies that help teams streamline workflows, automate repetitive tasks, and leverage collective knowledge
  3. Enable self-service workflow automation by developing tools that allow non-technical teams to implement their own solutions
  4. Collaborate with product, design and engineering teams to rapidly prototype new product ideas that could unlock new audiences and revenue streams
  5. Build evaluation frameworks to measure AI system quality, effectiveness, and business impact

Skills

Required

  • Experience programming in Java, C++, Python or related language
  • Experience with SQL and an RDBMS (e.g., Oracle) or Data Warehouse
  • Currently enrolled in a Master's or PhD program in Computer Science, Machine Learning, Statistics, NLP, or a related quantitative field
  • Coursework or project experience in at least one of: NLP, recommender systems, machine learning, or deep learning
  • Familiarity with ML frameworks (e.g., PyTorch, TensorFlow, HuggingFace)
  • Experience implementing algorithms using both toolkits and self-developed code

Nice to have

  • Publications at top-tier peer-reviewed conferences or journals
  • Current enrollment in a PhD program
  • Hands-on experience with LLMs, RAG pipelines, or fine-tuning (LoRA, PEFT)
  • Experience building or evaluating recommendation systems

What the JD emphasized

  • production-ready models
  • evaluation methodologies and frameworks

Other signals

  • develop machine learning models, frameworks, and evaluation methodologies
  • enable self-service workflow automation
  • build evaluation frameworks to measure AI system quality, effectiveness, and business impact
  • mentor and educate colleagues on AI best practices