ministral-3-3b

Public

The smallest model in the Ministral 3 family, combining a 3.4B language model with a 0.4B vision encoder for efficient edge deployment.

16.2K Downloads

3 stars

Capabilities

Vision Input

Minimum system memory

2GB

Tags

3B
mistral3

README

Ministral 3 3B by mistralai

The smallest model in the Ministral 3 family, combining a 3.4B language model with a 0.4B vision encoder for efficient edge deployment.

Supports a context length of 256k tokens.

Vision-enabled for image analysis and multimodal tasks.
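
Below is a minimal sketch of sending an image to the model for analysis. It assumes the model is served behind an OpenAI-compatible endpoint; the base URL, API key, model identifier, and image URL are placeholders, not confirmed values.

```python
# Hedged sketch: query the model with an image via an OpenAI-compatible API.
# base_url, api_key, model name, and the image URL are assumed placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="ministral-3-3b",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```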

Multilingual support for dozens of languages, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, and Portuguese.

Native function calling and JSON output generation.
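
The sketch below illustrates function calling through the same assumed OpenAI-compatible endpoint. The tool schema (a hypothetical `get_weather` function) and connection details are illustrative only.

```python
# Hedged sketch: request a structured tool call instead of free-form text.
# The endpoint, model name, and tool definition are assumed placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="ministral-3-3b",
    messages=[{"role": "user", "content": "What is the weather in Paris?"}],
    tools=tools,
)
# If the model chooses the tool, the arguments arrive as JSON in tool_calls.
print(response.choices[0].message.tool_calls)
```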

Apache 2.0 License

Sources

The underlying model files used by this model