
AI Providers

ReadAny supports multiple AI providers. You can use any of the following:

| Provider | Models | Notes |
|----------|--------|-------|
| OpenAI | GPT-4o, GPT-4o-mini, etc. | Requires API key |
| Anthropic | Claude Sonnet, Claude Haiku, etc. | Requires API key |
| Google | Gemini Pro, Gemini Flash, etc. | Requires API key |
| DeepSeek | DeepSeek Chat, DeepSeek Reasoner | Requires API key |
| Ollama | Llama, Mistral, Qwen, etc. | Local, free, no API key needed |
To configure a provider:

  1. Go to Settings → AI
  2. Select your preferred provider
  3. Enter your API key (not needed for Ollama)
  4. Choose the model you want to use
  5. Optionally set a custom API base URL
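Conceptually, the steps above amount to saving a small provider configuration. A minimal sketch in Python (the function and field names here are illustrative, not ReadAny's actual settings schema):

```python
# Hypothetical settings bundle; field names are illustrative only.
def make_provider_config(provider, model, api_key=None, base_url=None):
    """Collect the choices from Settings → AI into one record."""
    if provider != "ollama" and not api_key:
        raise ValueError(f"{provider} requires an API key")
    return {
        "provider": provider,
        "model": model,
        "api_key": api_key,    # None is fine for Ollama
        "base_url": base_url,  # optional custom endpoint
    }

config = make_provider_config("openai", "gpt-4o-mini", api_key="sk-...")
```

Only Ollama skips the API key, which is why the check above exempts it.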

For fully private, offline AI:

  1. Install Ollama on your machine
  2. Pull a model: `ollama pull llama3.2`
  3. In ReadAny, select Ollama as the provider
  4. The default endpoint `http://localhost:11434` is used automatically
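Under the hood, a client talks to the local Ollama server over plain HTTP. A minimal sketch using only the standard library (the request shape follows Ollama's `POST /api/generate` API; this is not ReadAny's actual code, and error handling is omitted):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default endpoint

def build_generate_request(model, prompt):
    """Build the HTTP request Ollama expects at POST /api/generate."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    # Requires a running Ollama instance with the model already pulled.
    req = build_generate_request("llama3.2", "Summarize this chapter.")
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
```

Because everything stays on `localhost`, no text ever leaves your machine.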

You can use any OpenAI-compatible API by:

  1. Selecting “OpenAI” as the provider
  2. Setting a custom API Base URL pointing to your endpoint
  3. Entering the appropriate API key

This works with services like Azure OpenAI, Together AI, Groq, and self-hosted models.
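All of these services accept the same OpenAI-style request shape, so switching between them is just a matter of swapping the base URL and key. A rough sketch of the idea (the key is a placeholder, and the model name shown is provider-specific; this is not ReadAny's actual client code):

```python
import json
import urllib.request

def build_chat_request(base_url, api_key, model, messages):
    """Build an OpenAI-style POST to {base_url}/chat/completions."""
    body = json.dumps({"model": model, "messages": messages})
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/chat/completions",
        data=body.encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Pointing the same client at a different provider is just a URL swap:
req = build_chat_request(
    "https://api.groq.com/openai/v1",  # or any compatible endpoint
    "YOUR_API_KEY",
    "llama-3.1-8b-instant",            # model names vary by provider
    [{"role": "user", "content": "Hello"}],
)
```

The trailing-slash trim keeps the URL well-formed whether or not the custom base URL you enter ends in `/`.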