ministral-3b-instruct


Ministral 3 3B by mistralai

The smallest model in the Ministral 3 family, combining a 3.4B language model with a 0.4B vision encoder for efficient edge deployment.

Supports a context length of 256k tokens.

Vision-enabled for image analysis and multimodal tasks.
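As a quick illustration of the vision capability, the sketch below assumes the model is served behind an OpenAI-compatible chat completions endpoint (for example via vLLM or another local server); the base URL, API key, and image URL are placeholders, not part of this model card.

```python
from openai import OpenAI

# Assumption: ministral-3b-instruct is exposed through an OpenAI-compatible
# endpoint on localhost; the URL, key, and image link are placeholders.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="ministral-3b-instruct",
    messages=[{
        "role": "user",
        # Mixed text + image content for a multimodal request.
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```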

Multilingual support across dozens of languages including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, and more.

Native function calling and JSON output generation.
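A minimal function-calling sketch under the same assumption of an OpenAI-compatible endpoint; the `get_weather` tool is hypothetical and only illustrates the request and response shape.

```python
from openai import OpenAI

# Same assumption as above: an OpenAI-compatible server hosting the model.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# Hypothetical tool schema; the model can answer with a structured call
# (tool name plus JSON arguments) instead of free-form text.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="ministral-3b-instruct",
    messages=[{"role": "user", "content": "What's the weather in Paris right now?"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:  # the model chose to call the tool
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
else:
    print(message.content)
```

For structured output without tools, many OpenAI-compatible servers also accept `response_format={"type": "json_object"}` to constrain the reply to valid JSON.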

Released under the Apache 2.0 license.
