AI Providers
Supported Providers
ReadAny supports multiple AI providers. You can use any of the following:
| Provider | Models | Notes |
|---|---|---|
| OpenAI | GPT-4o, GPT-4o-mini, etc. | Requires API key |
| Anthropic | Claude Sonnet, Claude Haiku, etc. | Requires API key |
| Google Gemini | Gemini Pro, Gemini Flash, etc. | Requires API key |
| DeepSeek | DeepSeek Chat, DeepSeek Reasoner | Requires API key |
| Ollama | Llama, Mistral, Qwen, etc. | Local, free, no API key needed |
Configuration
- Go to Settings → AI
- Select your preferred provider
- Enter your API key (not needed for Ollama)
- Choose the model you want to use
- Optionally set a custom API base URL
Using Ollama (Local AI)
For fully private, offline AI:
- Install Ollama on your machine
- Pull a model: `ollama pull llama3.2`
- In ReadAny, select Ollama as the provider
- The default endpoint `http://localhost:11434` will be used automatically
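
The steps above can be run from a terminal. A minimal sketch for macOS/Linux (the install script URL and the `/api/tags` endpoint come from Ollama's documentation; `llama3.2` is just one example model):

```shell
# Install Ollama (macOS/Linux; see ollama.com for other platforms)
curl -fsSL https://ollama.com/install.sh | sh

# Pull the model you want ReadAny to use
ollama pull llama3.2

# Verify the local server is up and the model is listed
# (Ollama listens on http://localhost:11434 by default)
curl http://localhost:11434/api/tags
```

If the last command returns a JSON list containing your model, ReadAny should be able to use it immediately after you select Ollama as the provider.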
Custom API Endpoints
You can use any OpenAI-compatible API by:
- Selecting “OpenAI” as the provider
- Setting a custom API Base URL pointing to your endpoint
- Entering the appropriate API key
This works with services like Azure OpenAI, Together AI, Groq, and self-hosted models.
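
Before entering a custom endpoint in ReadAny, you can check that it really speaks the OpenAI chat-completions protocol. A hedged sketch, using Groq's OpenAI-compatible base URL as an example; the API key and model name are placeholders you must replace with values from your own service:

```shell
# Placeholders: substitute your own service's values
BASE_URL="https://api.groq.com/openai/v1"   # example OpenAI-compatible base URL
API_KEY="sk-..."                            # your key for that service

# A valid OpenAI-compatible endpoint should return a JSON chat completion
curl "$BASE_URL/chat/completions" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "llama-3.1-8b-instant", "messages": [{"role": "user", "content": "Hello"}]}'
```

If this returns a completion rather than an error, set the same base URL and key in ReadAny's settings with "OpenAI" selected as the provider.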