Configuration

Retriqs keeps configuration focused on a few core choices.

For each storage, you configure:

  • the LLM provider
  • the embedding provider
  • the storage backends
  • a few runtime values such as context size and concurrency

Note
This page describes the configuration choices that are currently part of the actual app flow.


Last updated: April 4, 2026


Prerequisites

  • A working installation of the Retriqs app

Configuration overview

In Retriqs, configuration is tied to a storage.

That means you do not just install the app and start uploading files immediately. You first create a storage, and then you configure how that storage should behave.

Each storage has its own settings for:

  • generation
  • embeddings
  • graph storage
  • vector storage
  • KV storage
  • document status storage
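The per-storage settings above can be pictured as one configuration object per storage. The sketch below is purely illustrative: the key names and values are assumptions, not the app's actual schema.

```python
# Hypothetical sketch of per-storage settings; key names are illustrative,
# not the app's real configuration schema.
storage_config = {
    "generation": {"provider": "ollama", "model": "llama3.1"},
    "embeddings": {"provider": "ollama", "model": "nomic-embed-text", "dimension": 768},
    "graph_storage": "NetworkXStorage",
    "vector_storage": "NanoVectorDBStorage",
    "kv_storage": "JsonKVStorage",
    "doc_status_storage": "JsonDocStatusStorage",
}

def has_all_settings_groups(config: dict) -> bool:
    """Check that every settings group listed above is present."""
    required = {"generation", "embeddings", "graph_storage",
                "vector_storage", "kv_storage", "doc_status_storage"}
    return required <= config.keys()
```

Because each storage carries its own copy of these settings, two storages in the same installation can use entirely different providers and backends.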

LLM provider

The LLM provider is used for tasks such as:

  • extraction
  • summarization
  • query-time answer generation

Currently available LLM providers

In the current app flow, you can choose:

  • OpenAI
  • Ollama

LLM settings you can configure

Depending on the selected provider, the app allows you to configure:

  • provider
  • model
  • API host
  • API key
  • context size for Ollama
  • max async runners
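As a rough sketch, the LLM settings above might be grouped like this. The field names are assumptions for illustration; the app's actual internal names may differ.

```python
# Illustrative shape of the LLM settings listed above; field names are
# assumptions, not the app's real schema.
llm_settings = {
    "provider": "ollama",                        # "openai" or "ollama"
    "model": "llama3.1",
    "api_host": "http://localhost:11434",
    "api_key": None,                             # typically needed for OpenAI, not local Ollama
    "context_size": 32768,                       # applies to Ollama only
    "max_async": 4,                              # max async runners
}

# Since context size only applies to Ollama, a simple guard might drop it
# for other providers:
if llm_settings["provider"] != "ollama":
    llm_settings.pop("context_size", None)
```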

Practical guidance

  • Choose OpenAI if you want a cloud-based setup.
  • Choose Ollama if you want a local setup.
  • If you use Ollama, make sure the selected model is actually available on your machine.
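To check that a model is actually available locally, Ollama exposes a GET /api/tags endpoint that lists installed models. The helper below only parses a response shaped like that endpoint's output; the sample payload is a stand-in, not a live request.

```python
def model_available(tags_response: dict, model: str) -> bool:
    """Return True if `model` appears in an Ollama GET /api/tags response.
    Ollama reports models as "name:tag", so a bare name matches any tag."""
    names = [m.get("name", "") for m in tags_response.get("models", [])]
    return any(n == model or n.split(":")[0] == model for n in names)

# Example payload shaped like Ollama's GET /api/tags response:
sample = {"models": [{"name": "llama3.1:latest"},
                     {"name": "nomic-embed-text:latest"}]}
```

If the check fails, pull the model first (for example, `ollama pull llama3.1`) before pointing the app at it.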

Embedding provider

The embedding provider is used to create vectors for retrieval.

Currently available embedding providers

In the current app flow, you can choose:

  • OpenAI
  • Ollama

Embedding settings you can configure

  • provider
  • model
  • embedding dimension
  • API host
  • API key
  • token limit

Important rule

Your embedding setup should stay consistent once you start indexing documents.

If you change the embedding model or embedding dimension later, your existing vector data may no longer match your storage and you may need to rebuild or reindex.
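The consistency rule can be made concrete with a small check: vectors produced at index time are only comparable to query-time vectors if the model and dimension match. The settings dict and helper below are illustrative sketches, not the app's API.

```python
# Illustrative embedding settings, mirroring the fields listed above.
embedding_settings = {
    "provider": "ollama",
    "model": "nomic-embed-text",
    "dimension": 768,
    "api_host": "http://localhost:11434",
    "api_key": None,
    "token_limit": 8192,
}

def embedding_config_matches(indexed: dict, current: dict) -> bool:
    """Vectors stay comparable only if the model and dimension used at
    index time are the same ones used at query time."""
    return (indexed["model"] == current["model"]
            and indexed["dimension"] == current["dimension"])
```

If this check fails for an existing storage, the safe options are to rebuild the vector index or create a new storage with the new embedding setup.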


Storage configuration

Retriqs uses four storage layers.

1. Graph storage

Graph storage is used for entities and relationships.

Current options in the app:

  • NetworkXStorage
  • Neo4JStorage

2. Vector storage

Vector storage is used for embeddings and retrieval.

Current options in the app:

  • NanoVectorDBStorage
  • MilvusVectorDBStorage

3. KV storage

KV storage is used for internal records, cache, and document-related data.

Current options in the app:

  • JsonKVStorage
  • RedisKVStorage

4. Document status storage

Document status storage tracks indexing state, progress, and failures.

Current options in the app:

  • JsonDocStatusStorage
  • RedisDocStatusStorage
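The four storage layers and their options can be summarized in one lookup table. The backend names below are exactly the options listed on this page; the mapping and validation helper around them are illustrative.

```python
# Backend names are the options listed above; the mapping structure and
# helper are illustrative, not the app's real API.
STORAGE_OPTIONS = {
    "graph": {"NetworkXStorage", "Neo4JStorage"},
    "vector": {"NanoVectorDBStorage", "MilvusVectorDBStorage"},
    "kv": {"JsonKVStorage", "RedisKVStorage"},
    "doc_status": {"JsonDocStatusStorage", "RedisDocStatusStorage"},
}

def invalid_backends(selection: dict) -> list:
    """Return the layers whose selected backend is not a listed option."""
    return [layer for layer, backend in selection.items()
            if backend not in STORAGE_OPTIONS.get(layer, set())]
```

Note the pattern: each layer has a file-backed or in-process default (NetworkX, NanoVectorDB, JSON) and a server-backed alternative (Neo4j, Milvus, Redis).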

Recommended setup

For most users, the safest and simplest setup sticks to the defaults wherever possible:

  • OpenAI or Ollama for the LLM
  • OpenAI or Ollama for embeddings
  • local/default storage where possible

If you are just getting started, keep the setup simple first and only move to more advanced storage options when you actually need them.


What to configure first

When creating a new storage, make these decisions in order:

  1. Choose the LLM provider
  2. Choose the embedding provider
  3. Choose the graph storage
  4. Choose the vector storage
  5. Choose the KV storage
  6. Choose the document status storage
  7. Set any provider-specific values like API host, API key, or context size
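Walked through in order, the seven decisions might assemble a configuration like the sketch below. Every name and value here is hypothetical, chosen only to mirror the steps above.

```python
# Hypothetical walk-through of the decision order above; the function and
# key names are illustrative, not the app's real API.
def build_storage_config() -> dict:
    config = {}
    config["llm"] = {"provider": "ollama", "model": "llama3.1"}          # step 1
    config["embeddings"] = {"provider": "ollama",
                            "model": "nomic-embed-text",
                            "dimension": 768}                            # step 2
    config["graph_storage"] = "NetworkXStorage"                          # step 3
    config["vector_storage"] = "NanoVectorDBStorage"                     # step 4
    config["kv_storage"] = "JsonKVStorage"                               # step 5
    config["doc_status_storage"] = "JsonDocStatusStorage"                # step 6
    config["llm"]["api_host"] = "http://localhost:11434"                 # step 7
    config["llm"]["context_size"] = 32768                                # step 7
    return config
```

Making the provider choices first (steps 1 and 2) matters because the provider-specific values in step 7, such as context size, only make sense once you know which providers are in play.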

Feedback and community

Retriqs is still evolving, and feedback from early users helps us decide what to improve next.

If you want to share ideas, report issues, suggest graph packs, or help test new features, join our Discord:

Join the Retriqs Discord