# Mistral Client

The Mistral client provides access to Mistral AI models. It inherits from `OpenAIClient`, since Mistral exposes an OpenAI-compatible API.
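The subclassing pattern can be sketched roughly as follows. This is an illustrative sketch only: `OpenAICompatibleClient` and its class attributes are hypothetical stand-ins for the library's internals; only the `MISTRAL_API_KEY` variable and `mistral-large-latest` default come from this page.

```python
import os

# Hypothetical base class standing in for OpenAIClient's role:
# it owns all request/response handling and reads provider-specific
# settings from class attributes.
class OpenAICompatibleClient:
    BASE_URL = "https://api.openai.com/v1"
    DEFAULT_MODEL = "gpt-4o"
    ENV_KEY = "OPENAI_API_KEY"

    def __init__(self, api_key=None, model=None):
        # Fall back to the provider-specific environment variable.
        self.api_key = api_key or os.environ.get(self.ENV_KEY, "")
        self.model = model or self.DEFAULT_MODEL

class MistralClient(OpenAICompatibleClient):
    # Only the endpoint, default model, and env var differ;
    # the chat logic is inherited unchanged.
    BASE_URL = "https://api.mistral.ai/v1"
    DEFAULT_MODEL = "mistral-large-latest"
    ENV_KEY = "MISTRAL_API_KEY"
```

The design means any fix to the OpenAI request path benefits the Mistral client for free; only provider-specific features (such as transcription below) need their own code.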
## Configuration
```python
from padwan_llm.mistral import MistralClient

client = MistralClient(
    api_key="...",                 # or set MISTRAL_API_KEY env var
    model="mistral-large-latest",  # default model
)
```
## Usage
### Basic Chat
```python
from padwan_llm.conversation import Message
from padwan_llm.mistral import MistralClient

async with MistralClient() as client:
    response, usage = await client.complete_chat([
        Message(role="user", content="Hello!")
    ])
    print(response["content"])
```
### Streaming
```python
from padwan_llm.conversation import Message
from padwan_llm.mistral import MistralClient

async with MistralClient() as client:
    stream = client.stream_chat([
        Message(role="user", content="Tell me a story")
    ])
    async for chunk in stream:
        print(chunk, end="")
```
### With System Prompt
```python
from padwan_llm import ConversationState
from padwan_llm.mistral import MistralClient

state = ConversationState(system="You are a helpful assistant.")
state.add_user_message("Hello!")

async with MistralClient() as client:
    response, usage = await client.complete_chat(state.messages)

state.add_assistant_message(response["content"])
state.accumulate_usage(usage)
```
## Audio Transcription
Transcribe audio using the `voxtral-mini-latest` model.
```python
async with MistralClient() as client:
    # From a local file
    result = await client.transcribe(file="recording.mp3")
    print(result["text"])

    # From a URL
    result = await client.transcribe(file_url="https://example.com/audio.mp3")

    # From an uploaded file ID
    result = await client.transcribe(file_id="file-abc123")
```
Exactly one of `file`, `file_id`, or `file_url` must be provided. `file` accepts a path (`str`/`Path`) or raw bytes.

Optional parameters: `language`, `temperature`, `diarize` (speaker detection), and `timestamp_granularities` (`["segment"]` and/or `["word"]`).
```python
result = await client.transcribe(
    file="meeting.mp3",
    language="en",
    diarize=True,
    timestamp_granularities=["segment", "word"],
)

for segment in result.get("segments", []):
    print(f"[{segment['start']:.1f}s] {segment['text']}")
```
## Embeddings
Generate text embeddings using the `mistral-embed` model.