# Conversations
Chat threads between you and an agent — how messages, attachments, and tool calls are persisted.
A conversation is a chat thread between a user and either a single agent or a raw model. Conversations are scoped to a workspace.
## Anatomy of a conversation

Each conversation is stored as a row in the `conversations` table:
| Field | Purpose |
|---|---|
| `user_id` | The Supabase Auth user who owns the thread |
| `org_id` | Org-level scoping for RLS |
| `workspace_id` | The workspace the conversation belongs to |
| `agent_id` | The agent the user is chatting with (nullable) |
| `model` | Override model; defaults to the agent's or workspace's |
| `preview` | First-line preview text for the history list |
| `message_count` | Maintained incrementally as messages are appended |
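As a rough sketch of how client code might model a row and derive the `preview` field, the interface and helper below are illustrative assumptions, not the actual schema or app code:

```typescript
// Hypothetical shape of a conversations row (field names from the table
// above; the TypeScript types are assumptions, not the real schema).
interface ConversationRow {
  user_id: string;
  org_id: string;
  workspace_id: string;
  agent_id: string | null;
  model: string | null;
  preview: string;
  message_count: number;
}

// One plausible way to derive the first-line preview from the opening
// message: take the first line, trim it, and cap the length.
function makePreview(firstMessage: string, maxLen = 80): string {
  const firstLine = firstMessage.split("\n")[0].trim();
  return firstLine.length <= maxLen
    ? firstLine
    : firstLine.slice(0, maxLen - 1) + "…";
}
```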
Each message is a row in the `messages` table:

| Field | Purpose |
|---|---|
| `role` | `user` / `assistant` / `system` |
| `content` | Text (or structured content for multi-modal) |
| `sources` | Citations from the `search_pages` / `search_user_tables` tools |
| `token_count` | Tokens consumed, for billing |
| `metadata` | JSONB; tool calls, tool results, function-call traces |
| `attachments` | JSONB; file references (paths in the `chat-attachments` bucket) |
## What you can attach

Conversations support file attachments via the paperclip button, drag-and-drop, or clipboard paste. The browser uploads directly to the private `chat-attachments` Supabase Storage bucket, using the path `{user_id}/{conversation_id}/{uuid.ext}`.

Limits:

- 50 MB per file
- 10 files per message
- Images, PDFs, text, Office docs, audio, video, archives

RLS on the storage bucket scopes access to the uploading user.
## Streaming

The `chat-proxy` Edge Function streams responses via Server-Sent Events. The browser parser (`hosted.js`) handles three event types in a single loop:

- OpenAI-style `data: { choices: [...] }` chunks.
- Anthropic-style content-block deltas.
- A custom `event: tool_calls` SSE event carrying tool-call metadata.
The UI shows a typing cursor while streaming and a "Thinking..." spinner during the tool-call round-trip.
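The three-way dispatch could be sketched like this; it is a simplified stand-in for the real `hosted.js` parser, and the type and function names are invented:

```typescript
// A parsed SSE event: streamed text, tool-call metadata, or noise to skip.
type ParsedEvent =
  | { kind: "text"; delta: string }
  | { kind: "tool_calls"; payload: unknown }
  | { kind: "ignore" };

// Dispatch pre-split SSE lines through the three shapes described above.
function parseSse(lines: string[]): ParsedEvent[] {
  const out: ParsedEvent[] = [];
  let pendingEvent: string | null = null;
  for (const line of lines) {
    if (line.startsWith("event: ")) {
      pendingEvent = line.slice(7).trim();
      continue;
    }
    if (!line.startsWith("data: ")) continue;
    const data = line.slice(6);
    // Custom tool_calls event: the data payload is tool-call metadata.
    if (pendingEvent === "tool_calls") {
      out.push({ kind: "tool_calls", payload: JSON.parse(data) });
      pendingEvent = null;
      continue;
    }
    if (data === "[DONE]") {
      out.push({ kind: "ignore" });
      continue;
    }
    const json = JSON.parse(data);
    // OpenAI-style chunk: choices[0].delta.content
    const openAiDelta = json.choices?.[0]?.delta?.content;
    if (typeof openAiDelta === "string") {
      out.push({ kind: "text", delta: openAiDelta });
      continue;
    }
    // Anthropic-style content-block delta: delta.text
    if (json.type === "content_block_delta" && json.delta?.text) {
      out.push({ kind: "text", delta: json.delta.text });
      continue;
    }
    out.push({ kind: "ignore" });
  }
  return out;
}
```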
## Tool calls

When the LLM decides to call a tool, the proxy:

- Emits `event: tool_calls` with the planned calls.
- Executes each tool server-side (Composio v3 for external tools, native handlers for `search_user_tables`, `search_pages`, etc.).
- Truncates each tool result to 4000 chars to protect the next round.
- Sends the results back to the LLM, repeating for up to 5 rounds before stopping.
Tool calls are persisted to `messages.metadata` so they re-render exactly as before when you reload the conversation.
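The loop above, with its 4000-char truncation and 5-round cap, might look roughly like this; the `callLlm` and `runTool` signatures are stand-ins, not the real `chat-proxy` internals:

```typescript
// Limits taken from the docs above.
const MAX_TOOL_RESULT_CHARS = 4000;
const MAX_ROUNDS = 5;

// One LLM turn either returns final text or requests tool calls.
type LlmTurn = { text?: string; toolCalls?: { name: string; args: unknown }[] };

// Run the tool-call round-trip: execute requested tools, truncate each
// result, feed results back, and stop after MAX_ROUNDS.
async function runToolLoop(
  callLlm: (toolResults: string[]) => Promise<LlmTurn>,
  runTool: (name: string, args: unknown) => Promise<string>,
): Promise<string> {
  const toolResults: string[] = [];
  for (let round = 0; round < MAX_ROUNDS; round++) {
    const turn = await callLlm(toolResults);
    if (!turn.toolCalls?.length) return turn.text ?? "";
    for (const call of turn.toolCalls) {
      const result = await runTool(call.name, call.args);
      // Truncate so a chatty tool can't blow up the next round.
      toolResults.push(result.slice(0, MAX_TOOL_RESULT_CHARS));
    }
  }
  // Round cap reached: ask for a final answer without more tool calls.
  return (await callLlm(toolResults)).text ?? "";
}
```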
## History

The conversation history card in Chat mode reads from the same `conversations` table, grouped by date, with search and inline delete. The list updates optimistically when you start or delete a thread.
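Date grouping for the history list could be as simple as bucketing by day; this is an illustrative sketch, and the summary shape and group keys are assumptions:

```typescript
// Minimal shape of a history-list entry (assumed, not the real type).
interface ConversationSummary {
  id: string;
  preview: string;
  updatedAt: Date;
}

// Bucket conversations by calendar day (UTC), keyed as YYYY-MM-DD.
function groupByDay(
  items: ConversationSummary[],
): Map<string, ConversationSummary[]> {
  const groups = new Map<string, ConversationSummary[]>();
  for (const item of items) {
    const key = item.updatedAt.toISOString().slice(0, 10);
    const bucket = groups.get(key) ?? [];
    bucket.push(item);
    groups.set(key, bucket);
  }
  return groups;
}
```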
## Context budget

Long conversations are truncated oldest-first to fit a 400,000-character budget (roughly 100k tokens) before being sent to the model. Truncation re-runs after every tool-call round, so a chatty tool result can't blow the budget.
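An oldest-first truncation pass under this budget might look like the sketch below; the message shape and helper name are assumptions:

```typescript
// Budget from the docs above: 400,000 chars, roughly 100k tokens
// at ~4 chars per token.
const CONTEXT_BUDGET_CHARS = 400_000;

// Drop messages oldest-first until the remainder fits the budget.
function fitToBudget<T extends { content: string }>(
  messages: T[],
  budget = CONTEXT_BUDGET_CHARS,
): T[] {
  let total = messages.reduce((n, m) => n + m.content.length, 0);
  let start = 0;
  while (total > budget && start < messages.length) {
    total -= messages[start].content.length;
    start++;
  }
  return messages.slice(start);
}
```

Because this runs again after every tool-call round, a large tool result simply pushes more of the oldest history out of the window.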