AI at Meta

Research Services

Menlo Park, California 830,794 followers

Together with the AI community, we’re pushing boundaries through open science to create a more connected world.

About us

Through open science and collaboration with the AI community, we are pushing the boundaries of artificial intelligence to create a more connected world. We can’t advance the progress of AI alone, so we actively engage with the AI research and academic communities. Our goal is to advance AI in Infrastructure, Natural Language Processing, Generative AI, Vision, Human-Computer Interaction and many other areas of AI, and to enable the community to build safe and responsible solutions to address some of the world’s greatest challenges.

Website
https://ai.meta.com/
Industry
Research Services
Company size
10,001+ employees
Headquarters
Menlo Park, California
Specialties
research, engineering, development, software development, artificial intelligence, machine learning, machine intelligence, deep learning, computer vision, speech recognition, and natural language processing

Updates

  • AI at Meta

    As part of our release of Llama 3.1 and our continued support of open science, this week we published the full Llama 3 research paper, covering a range of topics including insights on model training, architecture and the results of our current work to integrate image, video and speech capabilities via a compositional approach.

    The Llama 3 Herd of Models paper ➡️ https://go.fb.me/1nmc78

    We hope that sharing this research will help the larger research community understand the key factors of foundation-model development and contribute to a more informed discussion about the future of foundation models among the general public.

  • AI at Meta

    Starting today, open source is leading the way. Introducing Llama 3.1: our most capable models yet. Today we’re releasing a collection of new models, including our long-awaited 405B. Llama 3.1 delivers stronger reasoning, a larger 128K context window and improved support for 8 languages including English, among other improvements.

    Details in the full announcement ➡️ https://go.fb.me/hvuqhb
    Download the models ➡️ https://go.fb.me/11ffl7

    We evaluated performance on more than 150 benchmark datasets spanning a range of languages, in addition to extensive human evaluations in real-world scenarios. Trained on more than 16K NVIDIA H100 GPUs, Llama 3.1 405B is the industry-leading open source foundation model, delivering state-of-the-art capabilities that rival the best closed source models in general knowledge, steerability, math, tool use and multilingual translation. We’ve also updated our license to allow developers to use the outputs from Llama models, including the 405B, to improve other models for the first time. We’re excited about how synthetic data generation and model distillation workflows with Llama will help to advance the state of AI.

    As Mark Zuckerberg shared this morning, we strongly believe that open source will ensure that more people around the world have access to the benefits and opportunities of AI, and that’s why we continue to take steps on the path for open source AI to become the industry standard. With these releases we’re setting the stage for unprecedented new opportunities, and we can’t wait to see the innovation our newest Llama models will unlock across all levels of the AI community.
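
    As a rough illustration of the synthetic data generation workflow the new license enables, here is a minimal sketch using a Llama 3.1 model via Hugging Face transformers. The model ID and generation settings are assumptions for illustration; check the official model cards and the download link above for the released checkpoints (the 405B requires multi-GPU serving).

        # Minimal sketch: synthetic data generation with a Llama 3.1 model via
        # Hugging Face transformers. The model ID below is an assumption for
        # illustration; consult the official model card for released checkpoints.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # assumed ID
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(
            model_id, torch_dtype=torch.bfloat16, device_map="auto"
        )

        # Prompt the model to produce training examples for a downstream model.
        prompt = "Write three short math word problems with worked solutions."
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        outputs = model.generate(
            **inputs, max_new_tokens=512, do_sample=True, temperature=0.8
        )
        # Decode only the newly generated tokens, skipping the prompt.
        print(tokenizer.decode(
            outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
        ))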

  • AI at Meta

    We're looking forward to SIGGRAPH and this discussion!

    Reposted from NVIDIA:

    We’re excited to announce that our CEO Jensen Huang will be joined by Meta CEO Mark Zuckerberg to discuss how fundamental research is enabling AI breakthroughs at #SIGGRAPH2024. Attend the discussion to learn about the intersection of #generativeAI and virtual worlds, and discover how open source and AI will empower developers and creators. Read more on our blog: https://nvda.ws/4cWa4Tq 📅 Monday, July 29, 4 p.m. MT

  • AI at Meta

    In April, we published a research paper on a new approach for building better and faster LLMs by using multi-token prediction. Using this approach, we can train language models to predict multiple future words at once, improving model capabilities and training efficiency while allowing for faster inference. In the spirit of responsible open science, we’ve released pre-trained models for code completion using this approach to enable further exploration in the research community.

    Get the model on Hugging Face ➡️ https://go.fb.me/dm1giu
    More on this approach ➡️ https://go.fb.me/x1zhdq
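
    For intuition, here is a minimal sketch of the core idea: a shared trunk feeds several output heads, each predicting a token further ahead, with the per-head losses averaged. This is a simplified illustration under assumptions of our own, not the released implementation.

        # Minimal sketch of multi-token prediction: a shared trunk feeds n
        # independent linear heads, where head i predicts the token (i + 1)
        # positions ahead. Simplified illustration, not the released code.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class MultiTokenLM(nn.Module):
            def __init__(self, trunk: nn.Module, d_model: int, vocab_size: int, n_future: int = 4):
                super().__init__()
                self.trunk = trunk  # any module mapping token IDs to (batch, seq, d_model)
                self.heads = nn.ModuleList(
                    [nn.Linear(d_model, vocab_size) for _ in range(n_future)]
                )

            def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
                h = self.trunk(input_ids)  # (batch, seq, d_model)
                losses = []
                for i, head in enumerate(self.heads):
                    # Head i predicts the token (i + 1) steps ahead of each position.
                    logits = head(h[:, : h.size(1) - (i + 1)])
                    targets = input_ids[:, i + 1 :]
                    losses.append(F.cross_entropy(
                        logits.reshape(-1, logits.size(-1)), targets.reshape(-1)
                    ))
                return torch.stack(losses).mean()

    At inference time, the extra heads can be dropped for standard next-token decoding or used to draft several tokens ahead, which is one way the approach enables faster inference.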

  • AI at Meta

    Introducing Meta 3D Gen: new research from AI researchers at Meta that enables text-to-3D generation with high-quality geometry and textures.

    Research paper ➡️ https://go.fb.me/c9g4x6

    Meta 3D Gen delivers text-to-mesh generation with high-quality geometry, texture and PBR materials. It can generate high-quality 3D assets, with both high-resolution textures and material maps, end to end, producing results that are superior to previous state-of-the-art solutions, all at 3-10x the speed of previous work.

    In addition to the Meta 3D Gen technical report, we’re publishing our research on the two individual components of the system: Meta 3D AssetGen, for generating 3D models from text, and Meta 3D TextureGen, a model capable of high-quality texture generation and AI-assisted retexturing of artist-created or generated assets.

    Meta 3D AssetGen paper ➡️ https://go.fb.me/87tktg
    Meta 3D TextureGen paper ➡️ https://go.fb.me/tvbdf8
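
    To make the compositional design concrete, here is a hypothetical sketch of the two-stage pipeline described above. Every name below is an illustrative stub of our own; neither component’s real interface is public in this form.

        # Hypothetical sketch of the compositional text-to-3D pipeline:
        # stage 1 (AssetGen) produces a textured mesh with PBR materials,
        # stage 2 (TextureGen) refines or regenerates the texture.
        # All names are illustrative stubs, not a real API.
        from dataclasses import dataclass

        @dataclass
        class Mesh:
            vertices: list
            faces: list
            texture: object = None
            pbr_materials: object = None

        def assetgen_generate(prompt: str) -> Mesh:
            """Stage 1 stub: text -> 3D mesh with texture and PBR material maps."""
            raise NotImplementedError("stand-in for Meta 3D AssetGen")

        def texturegen_refine(mesh: Mesh, prompt: str) -> Mesh:
            """Stage 2 stub: texture generation / AI-assisted retexturing."""
            raise NotImplementedError("stand-in for Meta 3D TextureGen")

        def text_to_3d(prompt: str) -> Mesh:
            mesh = assetgen_generate(prompt)
            return texturegen_refine(mesh, prompt)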
