
TRIBE v2

TRIBE v2 is a Meta research model and demo focused on computational neuroscience rather than general chat or image generation. It is a trimodal brain encoder that fuses pretrained video, audio, and text features with temporal modeling to predict spatially resolved, time-varying whole-brain fMRI activity during rich, movie-like experiences. Meta describes TRIBE as spanning multiple modalities, cortical areas, and individuals; the earlier TRIBE system was reported as the first deep network of this kind in this setting and a top performer in the Algonauts 2025 brain-response prediction challenge.
Released: March 26, 2026

Overview

TRIBE v2 is Meta’s multimodal brain-encoding research demo. It predicts whole-brain fMRI responses to naturalistic stimuli by combining video, audio, and text representations, aiming to model how the brain reacts over time across different cortical regions and people. It builds on Meta’s TRIBE line for cross-modal brain response prediction.
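The general recipe described above, fusing per-modality features and adding temporal context before a linear readout to brain activity, can be illustrated with a minimal sketch. This is not TRIBE v2's actual architecture; all dimensions, the concatenation-based fusion, the lagged-feature temporal model, and the ridge readout are illustrative assumptions:

```python
import numpy as np

# Hypothetical dimensions (not taken from TRIBE v2)
T = 100                      # fMRI time points (TRs)
D_v, D_a, D_t = 64, 32, 48   # video / audio / text feature dims
P = 1000                     # brain parcels to predict

rng = np.random.default_rng(0)

# Stand-ins for pretrained per-modality features, already
# aligned to the fMRI time axis
video = rng.standard_normal((T, D_v))
audio = rng.standard_normal((T, D_a))
text = rng.standard_normal((T, D_t))

# Trimodal fusion: concatenate modality features per time point
fused = np.concatenate([video, audio, text], axis=1)  # (T, 144)

# Simple temporal modeling: stack lagged copies of the features to
# account for the delayed hemodynamic response (an FIR-style encoder)
lags = 3
lagged = np.concatenate(
    [np.roll(fused, lag, axis=0) for lag in range(lags)], axis=1
)  # (T, lags * 144)

# Ridge-regularized linear readout to whole-brain parcel activity
fmri = rng.standard_normal((T, P))  # stand-in for measured responses
lam = 10.0
W = np.linalg.solve(
    lagged.T @ lagged + lam * np.eye(lagged.shape[1]),
    lagged.T @ fmri,
)
pred = lagged @ W  # (T, P) predicted fMRI time series

print(pred.shape)  # (100, 1000)
```

In practice, encoding models of this family replace the concatenation and linear readout with learned fusion and per-subject heads, but the pipeline shape (modality features, temporal context, brain readout) is the same idea the overview describes.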

About Meta Platforms

We're connecting people to what they care about, powering new, meaningful experiences, and advancing the state-of-the-art through open research and accessible tooling.

Industry: Technology, Information and Media
Company Size: 78,000-79,000 employees
Location: Menlo Park, California, US
Website: ai.meta.com

Tools using TRIBE v2

No tools found for this model yet.

Last updated: March 26, 2026