Configuration
Retriqs keeps configuration focused on a few core choices.
For each storage, you configure:
- the LLM provider
- the embedding provider
- the storage backends
- a few runtime values such as context size and concurrency
Note: This page describes the configuration choices that are currently part of the actual app flow.
Last updated: April 4, 2026
Prerequisites
- A working installation of the Retriqs app
Configuration overview
In Retriqs, configuration is tied to a storage.
That means you do not just install the app and start uploading files immediately. You first create a storage, and then you configure how that storage should behave.
Each storage has its own settings for:
- generation
- embeddings
- graph storage
- vector storage
- KV storage
- document status storage
LLM provider
The LLM provider is used for tasks such as:
- extraction
- summarization
- query-time answer generation
Currently available LLM providers
In the current app flow, you can choose:
- OpenAI
- Ollama
LLM settings you can configure
Depending on the selected provider, the app allows you to configure:
- provider
- model
- API host
- API key
- context size (Ollama only)
- max async runners
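As an illustration, the values behind these fields could look like the sketch below. The key names and defaults are hypothetical assumptions; in Retriqs these settings are made through the app UI, not a config file:

```python
# Hypothetical shape of the LLM settings for one storage.
# All key names and defaults here are illustrative assumptions.
llm_settings = {
    "provider": "ollama",                  # "openai" or "ollama"
    "model": "llama3",                     # must exist on the Ollama host
    "api_host": "http://localhost:11434",  # Ollama's default local endpoint
    "api_key": None,                       # required for OpenAI, unused for local Ollama
    "context_size": 32768,                 # Ollama only
    "max_async": 4,                        # max concurrent LLM calls
}

# A cloud (OpenAI) setup needs an API key; a local Ollama setup does not.
uses_cloud = llm_settings["provider"] == "openai"
assert not uses_cloud or llm_settings["api_key"] is not None
```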
Practical guidance
- Choose OpenAI if you want a cloud-based setup.
- Choose Ollama if you want a local setup.
- If you use Ollama, make sure the selected model is actually available on your machine.
Embedding provider
The embedding provider is used to create vectors for retrieval.
Currently available embedding providers
In the current app flow, you can choose:
- OpenAI
- Ollama
Embedding settings you can configure
- provider
- model
- embedding dimension
- API host
- API key
- token limit
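These fields could be sketched as follows. The key names, model, and dimension are illustrative assumptions, not the app's actual schema:

```python
# Hypothetical shape of the embedding settings for one storage.
# Key names and values are illustrative assumptions.
embedding_settings = {
    "provider": "ollama",
    "model": "nomic-embed-text",
    "dimension": 768,                      # must match the model's output size
    "api_host": "http://localhost:11434",  # Ollama's default local endpoint
    "api_key": None,                       # unused for a local Ollama host
    "token_limit": 8192,                   # max tokens per embedding request
}
```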
Important rule
Your embedding setup should stay consistent once you start indexing documents.
If you change the embedding model or embedding dimension later, your existing vector data may no longer match your storage and you may need to rebuild or reindex.
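The rule above amounts to a simple consistency check: compare what is already stored with what is now configured, and rebuild when they diverge. A minimal sketch, with hypothetical function and field names:

```python
def needs_reindex(stored: dict, configured: dict) -> bool:
    """Return True when existing vectors no longer match the current
    embedding configuration and the storage must be rebuilt."""
    return (stored["model"] != configured["model"]
            or stored["dimension"] != configured["dimension"])

# Keeping the same model and dimension is safe:
assert not needs_reindex({"model": "nomic-embed-text", "dimension": 768},
                         {"model": "nomic-embed-text", "dimension": 768})

# Changing either one means existing vectors may no longer match:
assert needs_reindex({"model": "nomic-embed-text", "dimension": 768},
                     {"model": "text-embedding-3-small", "dimension": 1536})
```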
Storage configuration
Retriqs uses four storage layers.
1. Graph storage
Graph storage is used for entities and relationships.
Current options in the app:
- NetworkXStorage
- Neo4JStorage
2. Vector storage
Vector storage is used for embeddings and retrieval.
Current options in the app:
- NanoVectorDBStorage
- MilvusVectorDBStorage
3. KV storage
KV storage is used for internal records, cache, and document-related data.
Current options in the app:
- JsonKVStorage
- RedisKVStorage
4. Document status storage
Document status storage tracks indexing state, progress, and failures.
Current options in the app:
- JsonDocStatusStorage
- RedisDocStatusStorage
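Taken together, the four layers and their current options can be sketched as a lookup table, with a small helper that flags an invalid selection. The helper is illustrative only, not part of the app:

```python
# The backend options currently listed in the app, grouped by storage layer.
STORAGE_OPTIONS = {
    "graph": {"NetworkXStorage", "Neo4JStorage"},
    "vector": {"NanoVectorDBStorage", "MilvusVectorDBStorage"},
    "kv": {"JsonKVStorage", "RedisKVStorage"},
    "doc_status": {"JsonDocStatusStorage", "RedisDocStatusStorage"},
}

def invalid_backends(selection: dict) -> list:
    """Return the layers whose selected backend is not an available option."""
    return [layer for layer, backend in selection.items()
            if backend not in STORAGE_OPTIONS.get(layer, set())]

# A local/default selection passes validation:
local_defaults = {
    "graph": "NetworkXStorage",
    "vector": "NanoVectorDBStorage",
    "kv": "JsonKVStorage",
    "doc_status": "JsonDocStatusStorage",
}
assert invalid_backends(local_defaults) == []
```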
Recommended setup
For most users, the safest and simplest setup is the local-first path:
- OpenAI or Ollama for the LLM
- OpenAI or Ollama for embeddings
- local/default storage where possible
If you are just getting started, keep the setup simple first and only move to more advanced storage options when you actually need them.
What to configure first
When creating a new storage, make these decisions in order:
- Choose the LLM provider
- Choose the embedding provider
- Choose the graph storage
- Choose the vector storage
- Choose the KV storage
- Choose the document status storage
- Set any provider-specific values like API host, API key, or context size
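The decision order above can be sketched as assembling one configuration object step by step. All names here are illustrative assumptions; in the app, each choice is made through the UI:

```python
# Illustrative assembly of a new storage's configuration,
# following the decision order above. Key names are hypothetical.
config = {}
config["llm"] = {"provider": "ollama", "model": "llama3"}      # 1. LLM provider
config["embeddings"] = {"provider": "ollama",
                        "model": "nomic-embed-text",
                        "dimension": 768}                      # 2. embedding provider
config["graph_storage"] = "NetworkXStorage"                    # 3. graph storage
config["vector_storage"] = "NanoVectorDBStorage"               # 4. vector storage
config["kv_storage"] = "JsonKVStorage"                         # 5. KV storage
config["doc_status_storage"] = "JsonDocStatusStorage"          # 6. document status storage
config["llm"].update({"api_host": "http://localhost:11434",
                      "context_size": 32768})                  # 7. provider-specific values
```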
Retriqs is still evolving, and feedback from early users helps us decide what to improve next.
If you want to share ideas, report issues, suggest graph packs, or help test new features, join our Discord: