Conflux Docs
Overview

What is Conflux?

Conflux is an enterprise LLM Layer 2 for governed AI workspaces.

Diagram: What is Conflux?
Start here

If you are new to Conflux, read Quickstart, Execution modes, Memory overview, and API overview next. Those pages show how Conflux works as a user-facing product before you go deeper into governance.

Why it exists

Teams now use AI through web chat, APIs, Claude CLI, Codex CLI, and custom tools. Without a shared control layer, every request becomes isolated: no common policy, no workspace memory, no route visibility, and no durable understanding of what the team has already built or decided.

Conflux sits between users, clients, and models. It gives every workspace one governed runtime for policy, memory, routing, model execution, and analytics.

Platform layers

Policy and compliance: Inspects prompts, responses, evidence, secrets, PII, internal data, and destructive commands before model execution.
Memory layer: Turns conversations, tool work, file changes, and decisions into evidence-backed workspace knowledge.
Smart routing: Chooses direct or multi-model Flux execution based on prompt intent, client profile, model health, and workspace policy.
Multi-protocol runtime: Provides one execution path across Web, OpenAI-compatible APIs, Anthropic-compatible APIs, Claude CLI, and Codex CLI.
Context optimization: Preserves client prompts by default, creates compacted evidence views, and recovers from context-window failures when needed.
Analytics and lineage: Shows usage, cost, routing outcomes, request chains, compliance events, and memory health across the workspace.
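
As a sketch of the multi-protocol runtime above: an OpenAI-compatible client can point at a workspace gateway instead of a vendor endpoint, and the other layers apply before any model runs. The gateway URL, workspace header, and model alias below are hypothetical illustrations, not documented Conflux values.

```python
import json

# Hypothetical gateway endpoint and workspace header -- illustrative only,
# not documented Conflux configuration.
CONFLUX_BASE_URL = "https://conflux.example.com/v1"
HEADERS = {"X-Workspace": "acme-engineering"}

# A standard OpenAI-style chat completions payload. In this sketch, the
# gateway would apply policy checks and smart routing before execution;
# "workspace-default" is an assumed alias resolved by routing.
payload = {
    "model": "workspace-default",
    "messages": [
        {"role": "user", "content": "Summarize yesterday's deploy decisions."}
    ],
}

# Serialize the request body as it would be POSTed to
# f"{CONFLUX_BASE_URL}/chat/completions" with HEADERS attached.
body = json.dumps(payload)
print(body)
```

Because the payload shape is the familiar chat-completions format, existing OpenAI-compatible tooling needs only a base-URL change to run through the governed path.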

Who uses it

Conflux is for teams that want AI to behave like a governed workspace tool, not a collection of disconnected chat sessions. It is useful for engineering, product, legal, accounting, operations, support, and any team that needs shared context, traceability, and model control.