
Mistral Le Chat — User Guide
Mistral AI's official chat assistant.
Strengths
- Open models can be deployed locally, keeping data fully private
- Mistral Large is competitive with top models in reasoning and coding
- Fast responses and low API latency
- EU-compliant, suitable for businesses with strict data privacy requirements
Best for
- Privacy-sensitive scenarios requiring local deployment
- Code generation and technical documentation
- Compliance-sensitive AI applications for European enterprises
- High-concurrency, low-latency API workloads
Get started quickly with the API
Access Mistral's language models in a few lines of code through the official Python SDK.
Conversations using the Mistral API
from mistralai import Mistral

# Create a client with your API key
client = Mistral(api_key="your_key")

chat_response = client.chat.complete(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "Explain the basic principles of quantum computing"}],
)

print(chat_response.choices[0].message.content)
The mistral-small model is faster and cheaper and suits simple tasks; mistral-large is better for complex reasoning.
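For long answers, you can stream the response token by token instead of waiting for the full completion. A minimal sketch, assuming the 1.x mistralai SDK's chat.stream interface and reusing the client created above; verify the event shape against the current docs.
stream = client.chat.stream(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "Explain the basic principles of quantum computing"}],
)
for chunk in stream:
    # Each streamed event carries an incremental delta of the answer
    content = chunk.data.choices[0].delta.content
    if content:
        print(content, end="", flush=True)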
Local deployment (Ollama)
Use Ollama to run Mistral models locally; after the initial model download, no internet connection is required.
Running Mistral 7B locally
# After installing Ollama, pull and run the model:
ollama pull mistral
ollama run mistral
# Then type your question directly to start a conversation
The 7B model requires at least 8GB of RAM, with 16GB or more recommended for a smooth experience.
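Besides the interactive prompt, Ollama also exposes a local HTTP API on port 11434, so the same model can be called from code. A minimal sketch using the requests library against Ollama's /api/generate endpoint; the endpoint and response shape follow Ollama's published API, but verify them against your installed version.
import requests

# Query the locally running Ollama server (default port 11434)
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral",
        "prompt": "Explain the basic principles of quantum computing",
        "stream": False,  # return the whole answer as one JSON object
    },
)
resp.raise_for_status()
print(resp.json()["response"])
Because the request never leaves your machine, this preserves the privacy guarantees of local deployment.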