System prompt and model
The two settings that have the biggest effect on agent behavior — how to write a system prompt and how to pick a model.
The system prompt and the model are the two settings with the biggest effect on how your agent behaves. Both live on the agent's General tab and can also be edited inline in the Role tab and in Identity / Model config nodes in pipelines.
The system prompt
The system prompt is the persona and instructions the model sees on every turn. It's prepended to every conversation with this agent.
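Concretely, "prepended to every conversation" means the system prompt rides along as the first message of every model call. A minimal sketch, assuming the common chat-completions messages-array convention; how Project88 actually assembles requests is an assumption here:

```python
# Sketch: the system prompt is the first message of every call.
# The messages-array shape is the common chat-completions convention,
# not a documented Project88 internal.

SYSTEM_PROMPT = "You are Aria, the support agent for Acme Inc."

def build_messages(history, user_turn):
    """Prepend the system prompt, replay the history, then add the new turn."""
    return (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + list(history)
        + [{"role": "user", "content": user_turn}]
    )

history = [
    {"role": "user", "content": "My invoice looks wrong."},
    {"role": "assistant", "content": "Let me check the help center."},
]
messages = build_messages(history, "Any update?")
```

Every turn rebuilds the list from scratch, which is why the prompt's length is paid on every call.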
What to put in it
A reliable system prompt covers, in order:
- Identity. Who is this agent and who do they work for?
- Goal. What is the user trying to accomplish when they talk to it?
- Tools. Which tools are available and when to use them.
- Style. Tone, formality, response length.
- Constraints. Things the agent should never do.
Example for a support agent:
You are Aria, the support agent for Acme Inc. Your job is to help
customers solve product issues quickly and politely.
When the user describes a problem:
1. Search the help center pages with search_pages before asking
clarifying questions.
2. If the issue references a customer record, look it up with
search_user_tables on the People table.
3. If you can't resolve it, escalate by delegating to the
"Engineering On-Call" agent.
Style: friendly, concise. Use bullet points for steps. Avoid
marketing speak. Never promise refunds — that requires a human.
Length and Markdown
You can use Markdown formatting (headings, bullet points, code blocks) and it will be passed through to the model. Keep system prompts short and direct: every token is sent on every turn.
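Because the prompt is resent on every turn, its cost multiplies over a conversation. A back-of-envelope sketch, using the rough heuristic of ~4 characters per token (an approximation, not a real tokenizer):

```python
def approx_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English prose.
    # A real tokenizer (per provider) will differ; this is order-of-magnitude only.
    return max(1, len(text) // 4)

prompt = "You are Aria, the support agent for Acme Inc. " * 10
per_turn = approx_tokens(prompt)   # tokens billed for the prompt, each turn
turns = 50
total = per_turn * turns           # prompt tokens across a 50-turn conversation
```

A prompt that looks cheap in isolation can dominate input-token spend on long conversations, which is the practical reason to keep it short.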
The model
The model selector is populated from your active provider keys. Project88 supports seven providers:
| Provider | Source of truth |
|---|---|
| OpenAI | src/components/chat/SearchInput.jsx + src/modals/AgentCanvasModal.jsx |
| Anthropic | same |
| Google | same |
| Mistral | same |
| Groq | same |
| Together | same |
| DeepSeek | same |
Specific model names update faster than these docs — the canonical list lives in the two files above. The chat input dropdown reflects whatever versions are currently wired up plus whichever providers you've added a key for.
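"Populated from your active provider keys" can be pictured as a filter over a provider-to-models map. Everything below (the catalog contents, provider keys, and model names) is illustrative; the real list lives in the two files named above:

```python
# Hypothetical catalog; the canonical list lives in SearchInput.jsx
# and AgentCanvasModal.jsx, not here.
CATALOG = {
    "openai": ["gpt-flagship", "gpt-mini"],
    "anthropic": ["claude-opus", "claude-haiku"],
    "groq": ["llama-fast"],
}

def dropdown_models(active_keys):
    """Only providers you've added a key for contribute models to the dropdown."""
    return [model
            for provider, models in CATALOG.items()
            if provider in active_keys
            for model in models]

options = dropdown_models({"anthropic", "groq"})
```

Adding or removing a provider key changes the options without touching any per-agent configuration.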
Picking a model
A pragmatic decision tree:
- Best quality → the largest Anthropic or OpenAI flagship currently in the list (e.g. the newest Claude Opus or GPT-4.1).
- Best reasoning at a cheaper tier → Anthropic Sonnet / mid-tier OpenAI / DeepSeek-R1.
- Fastest, cheapest → Anthropic Haiku, the mini OpenAI tiers, Gemini Flash, or anything on Groq.
- Open-weight, lowest cost → Together or DeepSeek.
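The decision tree above amounts to a priority-to-tier lookup. A sketch with placeholder model names; substitute whatever the dropdown currently offers:

```python
# Placeholder names: the real options come from the model dropdown,
# which tracks the provider files, not these docs.
TIERS = {
    "best_quality": "largest-flagship",
    "cheap_reasoning": "mid-tier-reasoner",
    "fast_cheap": "small-fast-model",
    "open_weight": "open-weight-model",
}

def pick_model(priority: str) -> str:
    """Map a priority to a model tier, defaulting to best quality."""
    return TIERS.get(priority, TIERS["best_quality"])
```

Encoding the choice this way makes it easy to revisit as model lists change: update the mapping, not every agent.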
Agents can override the workspace default model per conversation — the chat input always has a model picker.
Overriding model per pipeline step
In an agent's pipeline you can drop in a Model config node to override the model for a specific branch (e.g. a cheap model for classification, a strong model for the final response).
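A Model config node is effectively a per-step override of the default model. A sketch of how such a pipeline could be described in data; the step and field names are illustrative, not Project88's actual schema:

```python
# Hypothetical pipeline description; node/field names are illustrative only.
DEFAULT_MODEL = "big-flagship"

pipeline = [
    {"step": "classify", "model": "tiny-cheap"},  # Model config node overrides
    {"step": "final_response"},                   # no override: uses the default
]

def model_for(step: dict) -> str:
    """A step runs on its own model if a Model config node set one,
    otherwise on the workspace default."""
    return step.get("model", DEFAULT_MODEL)
```

The classification branch pays small-model prices on every request, while the user-facing response still gets the strong model.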