Currently tracking 108 active AI roles, down 14% versus the prior 4 weeks. Primary focus: Agent · Research.
| Title | Stage | AI score |
|---|---|---|
| Research Engineer, Preparedness - Meta Superintelligence Labs Research Engineer role focused on evaluating frontier AI systems and risks, developing and refining evaluations for multimodal and agentic models, and producing technical artifacts to inform risk assessments and launch decisions. Requires strong ML engineering and research skills, experience with agentic and multimodal models, and understanding of AI safety and threat models. | Eval Gate · Agent | 9 |
| Data Scientist, Evaluations - Meta Superintelligence Labs Meta is seeking a Data Scientist for its Superintelligence Labs to lead the design, validation, and analysis of novel AI evaluations and benchmarks. This role focuses on scientific rigor, measuring frontier AI capabilities, and influencing research directions through data-driven insights and publications. | Eval Gate | 9 |
| AI Research Scientist - MSL FAIR Foundations Research Scientist role focused on designing and developing novel benchmarks and evaluation methodologies for frontier AI capabilities within Meta Superintelligence Labs (MSL). The role involves measuring and understanding AI capabilities, influencing research direction, and collaborating with researchers and technical leadership. Requires a strong publication record and experience in machine learning research, particularly in evaluation and deep learning. | Eval Gate · Post-train | 9 |
| AI Research Scientist - MSL FAIR Foundations Research Scientist role focused on developing and implementing novel evaluations for frontier AI systems, shaping research direction and model development. Requires a strong ML research background, experience with LLM/multimodal evaluation, and a publication record. | Eval Gate · Post-train | 9 |
| Research Engineer - MSL FAIR Foundations Research Engineer role focused on building and curating benchmarks and evaluation environments for advanced AI models across text, vision, and audio. The role involves developing novel benchmarks, integrating existing ones, and creating scalable evaluation tooling to directly impact research direction and model development. Requires strong ML engineering and research skills, Python proficiency, experience with ML frameworks, and a track record of publications in relevant venues. | Eval Gate · Post-train | 9 |
| Research Scientist Manager, Meta AI Assistant Measurement Research Scientist Manager to lead a team focused on measurement and evaluation of AI Assistants powered by foundation models. The role involves defining the scientific strategy for evaluation, ensuring methodological rigor, and collaborating with product, engineering, and training teams to ensure reliability and trustworthiness at scale. | Eval Gate · Post-train | 9 |