AI Limitations

Because Retriqs is built on top of Large Language Models (LLMs) and heuristic extraction pipelines, it has inherent, unavoidable limitations. We believe in being fully transparent about what the system can and cannot do.

1. Probabilistic Outputs

LLMs (whether hosted models from OpenAI or Anthropic, or local models served through Ollama) are probabilistic engines. They do not "know" facts the way a traditional database does; they predict the next most likely piece of text. As a result, the system may occasionally surface plausible but entirely false answers (hallucinations).
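The idea can be illustrated with a minimal sketch. The token strings and probabilities below are invented for illustration; a real model scores tens of thousands of tokens, and none of these names are Retriqs APIs:

```python
import random

# Toy next-token distribution for the prompt "The capital of France is".
# Purely illustrative values -- not produced by any real model.
next_token_probs = {
    "Paris": 0.80,      # likely correct continuation
    "Lyon": 0.15,       # plausible but wrong
    "Atlantis": 0.05,   # outright hallucination
}

def sample_next_token(probs, temperature=1.0, seed=None):
    """Sample one token; higher temperature flattens the distribution."""
    rng = random.Random(seed)
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Repeated sampling of the same prompt can surface a wrong token.
samples = [sample_next_token(next_token_probs, seed=i) for i in range(100)]
```

The point of the sketch is that even a low-probability wrong token is eventually sampled; no amount of prompting changes the fact that the output is drawn from a distribution rather than looked up.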

2. Missing Context in Retrieval

While our Knowledge Graph pipeline improves recall, neither standard vector search nor GraphRAG guarantees 100% retrieval. At query time, the system may miss highly relevant context buried in your documents, leading to incomplete answers.
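A simplified sketch of how a top-k vector search can miss relevant material. The 2-D "embeddings" and chunk names below are invented (real embeddings have hundreds of dimensions), but the failure mode is the same: a relevant chunk phrased very differently from the query lands outside the top-k cutoff and never reaches the LLM.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy 2-D embeddings; chunk_c is relevant but worded differently,
# so its vector sits far from the query.
chunks = {
    "chunk_a": [0.9, 0.1],
    "chunk_b": [0.8, 0.3],
    "chunk_c": [0.2, 0.9],
}
query = [1.0, 0.0]

top_k = 2
ranked = sorted(chunks, key=lambda c: cosine(query, chunks[c]), reverse=True)
retrieved = ranked[:top_k]
# chunk_c is never passed to the LLM, so the answer is incomplete.
```

Raising top_k reduces but does not eliminate this risk, and it trades off against context-window limits and noise from irrelevant chunks.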

3. Distorted Summarization

When summarizing large documents, the LLM must choose what to keep and what to discard. This compression process inherently omits details, and in some cases, the omission may inadvertently change the implied meaning of the original text.

4. Imperfect Graph Relationships

The relationships extracted to form the Knowledge Graph are inferred by the LLM from unstructured text. These extracted nodes and edges may be incomplete, incorrectly categorized, or fundamentally wrong.
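To make the failure modes concrete, here is a hypothetical extraction result. The sentence, entities, and relation labels are invented for illustration and do not reflect Retriqs's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    """One directed edge in the knowledge graph."""
    source: str
    relation: str
    target: str

# Source sentence: "Marie Curie, who worked with Pierre Curie, won the Nobel Prize."
# Hypothetical LLM extraction: one edge is correct, one is an unwarranted
# inference, and one stated relationship is missed entirely.
extracted = [
    Edge("Marie Curie", "WON", "Nobel Prize"),   # correct
    Edge("Pierre Curie", "WON", "Nobel Prize"),  # wrong: inferred, not stated here
    # Edge("Marie Curie", "WORKED_WITH", "Pierre Curie") was missed
]
```

Because downstream queries traverse these edges, a single mislabeled or missing edge can silently propagate into answers that look well-grounded.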

5. Mandatory Human Review

Because of these documented limitations, Retriqs must be treated strictly as a research assistant. Users must carefully review the primary source documents (which the software attempts to link back to) before making any decisions based on its output. No configuration, even running on completely local, locked-down hardware, alters the fundamentally probabilistic nature of AI output.