ministral-3-3b-reasoning

The reasoning post-trained version of Ministral 3 3B, combining a 3.4B language model with a 0.4B vision encoder, optimized for complex reasoning tasks.

633 Downloads

2 stars

Capabilities

Vision Input
Reasoning

Minimum system memory

2GB

Tags

3B
mistral3

README

Ministral 3 3B Reasoning by mistralai

The reasoning post-trained version of Ministral 3 3B, combining a 3.4B language model with a 0.4B vision encoder, optimized for complex reasoning tasks.

Supports context length of 256k tokens.

Excels at complex, multi-step reasoning and dynamic problem-solving, making it ideal for math, coding, and STEM-related use cases.

Vision-enabled for image analysis and multimodal reasoning tasks.

Multilingual support across dozens of languages including English, French, Spanish, German, Italian, Portuguese, Dutch, Chinese, Japanese, Korean, and Arabic.

Native function calling and JSON output generation with best-in-class agentic capabilities (see the sketch below).

Edge-optimized for deployment on a wide range of hardware including local devices.

Apache 2.0 License
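
The sketch below shows one way to exercise the function-calling capability through an OpenAI-compatible chat completions endpoint. The server URL, API key placeholder, model identifier, and the get_weather tool schema are assumptions made for illustration; adapt them to your own setup.

```python
# Hypothetical sketch: endpoint URL, model name, and tool schema are assumptions,
# not part of this model card.
from openai import OpenAI

# Assumes a local OpenAI-compatible server is serving this model.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

# A simple tool definition the model may choose to call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="ministral-3-3b-reasoning",  # assumed identifier; match your local model name
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model decided to call the tool, the call arrives as structured JSON arguments.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```

Because tool calls come back as structured JSON arguments, the same pattern extends to multi-step agentic workflows running entirely on local hardware.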

Parameters

Custom configuration options included with this model

Reasoning Section Parsing
{ "enabled": true, "startString": "[THINK]", "endString": "[/THINK]" }

Sources

The underlying model files this model uses