OpenAI

OpenAI is one of the two model providers currently available in the Retriqs app.

Use OpenAI if you want a cloud-based setup with hosted models for generation and embeddings.

Last updated: April 4, 2026


Prerequisites

To use OpenAI in Retriqs, you need:

  • an OpenAI API key
  • a valid API host
  • a chat model
  • an embedding model
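The checklist above can be sketched as a small pre-flight check. The variable names below (`OPENAI_API_KEY` and friends) are illustrative assumptions, not Retriqs's actual settings schema:

```python
# Hypothetical setting names used for illustration only --
# Retriqs may collect these values through its own settings screen.
REQUIRED = {
    "OPENAI_API_KEY": "an OpenAI API key",
    "OPENAI_API_HOST": "a valid API host",
    "OPENAI_CHAT_MODEL": "a chat model",
    "OPENAI_EMBEDDING_MODEL": "an embedding model",
}

def missing_prerequisites(settings: dict) -> list:
    """Return descriptions of any prerequisites that are unset or empty."""
    return [desc for name, desc in REQUIRED.items() if not settings.get(name)]

# Only the API key is set here, so the other three items are reported.
settings = {"OPENAI_API_KEY": "sk-..."}  # placeholder key
print(missing_prerequisites(settings))
```

Running a check like this before first use surfaces a missing value immediately, rather than as a failed request later.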

When to choose OpenAI

OpenAI is a good fit when you want:

  • a fast setup
  • hosted models without local model management
  • cloud-based generation
  • cloud-based embeddings

What you can configure

When you choose OpenAI in Retriqs, you can configure:

  • provider
  • model
  • API host
  • API key
  • max async runners

For embeddings, you can also configure:

  • embedding model
  • embedding dimension
  • token limit
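Taken together, the options above can be sketched as one configuration mapping. The field names are illustrative, not Retriqs's real schema; the dimension and token limit shown are the documented defaults for text-embedding-3-small:

```python
# Illustrative field names only -- the actual Retriqs settings may differ.
openai_config = {
    "provider": "openai",
    "model": "gpt-4o-mini",
    "api_host": "https://api.openai.com/v1",
    "api_key": "sk-...",          # placeholder key
    "max_async_runners": 4,       # assumed example value
    "embedding": {
        "model": "text-embedding-3-small",
        "dimension": 1536,        # default output dimension for this model
        "token_limit": 8191,      # documented input token limit for this model
    },
}

def validate(config: dict) -> None:
    """Fail fast on obviously bad values before any request is made."""
    assert config["api_host"].startswith("https://"), "API host should use HTTPS"
    assert config["api_key"], "API key must not be empty"
    assert config["embedding"]["dimension"] > 0, "embedding dimension must be positive"
    assert config["embedding"]["token_limit"] > 0, "token limit must be positive"

validate(openai_config)
```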

Typical OpenAI setup

A common setup looks like this:

  • LLM provider: OpenAI
  • Embedding provider: OpenAI
  • API host: https://api.openai.com/v1

Typical model examples:

  • chat: gpt-4o-mini
  • embeddings: text-embedding-3-small
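As a sanity check on those values, the request bodies a client sends to that host look roughly like this. This is a sketch of the OpenAI REST payloads (the `/chat/completions` and `/embeddings` endpoints), not of Retriqs internals:

```python
api_host = "https://api.openai.com/v1"

# Sent via POST to {api_host}/chat/completions
chat_request = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}],
}

# Sent via POST to {api_host}/embeddings
embedding_request = {
    "model": "text-embedding-3-small",
    "input": ["Hello"],
}

print(f"{api_host}/chat/completions")
print(f"{api_host}/embeddings")
```

If a request against these endpoints fails with an authentication or "model not found" error, revisit the API key and model names configured above.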

Recommended use

OpenAI is the simplest choice if you want to get started quickly and do not want to run local models yourself.

It is usually the most straightforward option for:

  • first-time setup
  • hosted inference
  • easy onboarding

Things to watch

  • Make sure your API key is valid.
  • Make sure the selected model name is correct.
  • Make sure the API host is correct.
  • Keep your embedding model stable after indexing starts.

If you change the embedding model or dimension later, vectors already stored in your index may no longer match newly generated embeddings, and you will typically need to re-index your data.
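One concrete failure mode: if the embedding dimension changes (for example from 1536 for text-embedding-3-small to 3072 for text-embedding-3-large), stored vectors can no longer be compared with new query vectors. A minimal guard, illustrative only and not part of Retriqs:

```python
def check_dimensions(index_dim: int, model_dim: int) -> None:
    """Refuse to query an index built with a different embedding dimension."""
    if index_dim != model_dim:
        raise ValueError(
            f"Index holds {index_dim}-dim vectors but the current embedding "
            f"model produces {model_dim}-dim vectors; re-index your data "
            f"with the new model before querying."
        )

check_dimensions(1536, 1536)   # matching dimensions: no error
# check_dimensions(1536, 3072) would raise ValueError
```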
