
Phi-3.5-MoE

Released: August 22, 2024

Overview

Phi-3.5-MoE is Microsoft’s open-weight Mixture-of-Experts model (16×3.8B experts, ~6.6B active parameters) built for strong reasoning and coding with a 128K-token context window. It’s MIT-licensed and available in Azure AI Foundry and on Hugging Face.
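
For reference, a minimal sketch of loading the open weights from Hugging Face with the transformers library. The repo id and hardware settings below are illustrative assumptions; check the model card for exact requirements.

```python
# Minimal sketch (assumes the Hugging Face repo id "microsoft/Phi-3.5-MoE-instruct"
# and a GPU setup with enough memory; adjust dtype/device for your hardware).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3.5-MoE-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # half precision to fit the large total weight count
    device_map="auto",            # shard across available GPUs
    trust_remote_code=True,       # the model card notes custom modeling code
)

# Chat-style prompt using the model's built-in chat template.
messages = [{"role": "user", "content": "Summarize the key idea of mixture-of-experts models."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```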

Description

Phi-3.5-MoE is a small, efficient MoE LLM from Microsoft. It routes each token across 16 experts (~3.8B parameters each) with top-2 gating, so only ~6.6B parameters are active per token, delivering solid reasoning and code performance at low latency. The model is trained on high-quality, reasoning-dense data and post-trained with supervised fine-tuning (SFT) plus preference-optimization methods (PPO and DPO) for instruction following and safety. It supports a 128K context window, multilingual use, and an MIT open-weight license. Builders can deploy it serverlessly in Azure AI Foundry or run and fine-tune it locally (e.g., with LoRA). Typical uses include long-context QA and summarization, coding assistants, math, and multilingual applications in constrained environments.
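
To make the top-2 routing described above concrete, here is a toy sketch of gating over 16 feed-forward experts. The dimensions and layer shapes are illustrative assumptions; it mirrors the general mechanism, not Microsoft’s actual implementation.

```python
# Toy illustration of top-2 expert routing (hypothetical sizes, not Phi-3.5-MoE's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoELayer(nn.Module):
    def __init__(self, d_model: int = 64, n_experts: int = 16, d_ff: int = 128):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
             for _ in range(n_experts)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). The router scores every expert for each token.
        logits = self.router(x)                          # (tokens, n_experts)
        weights, idx = torch.topk(logits, k=2, dim=-1)   # keep the 2 highest-scoring experts
        weights = F.softmax(weights, dim=-1)             # normalize their gate weights
        out = torch.zeros_like(x)
        for slot in range(2):                            # only 2 experts run per token
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
        return out

tokens = torch.randn(8, 64)          # 8 toy token embeddings
print(Top2MoELayer()(tokens).shape)  # -> torch.Size([8, 64])
```

Because only two experts process each token, compute per token scales with the active parameters (~6.6B) rather than the full expert pool, which is what keeps latency low despite the larger total parameter count.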

About Microsoft

Location: Washington, US

Last updated: September 22, 2025