Overview
Ask lets you chat with your project memory in natural language. This guide covers practical techniques for getting accurate, useful answers -- and knowing when Ask is not the right tool for the job.
Build memory first
Ask can only answer questions about topics that exist in your project memory. Before using Ask, make sure you have a well-populated brain.
Minimum setup
- Scan your codebase -- Run `contox scan` to index your project structure, routes, components, and dependencies
- Save a few sessions -- Use the MCP server or CLI to save sessions with meaningful summaries about architecture, conventions, and decisions
- Run enrichment -- Trigger enrichment on your sessions from the dashboard to generate memory items with embeddings
For best results
- Run git digest -- Use `contox git-digest` to capture recent commit history and decisions
- Save after key work -- Save sessions after implementing features, fixing bugs, or making architectural decisions
- Approve enrichment results -- Review and approve generated memory items to ensure they appear in the brain
- Run hygiene -- Clean up duplicates and outdated items so Ask sources are accurate
If Ask returns "I couldn't find relevant information in the project memory," it means no memory items matched your question above the 50% similarity threshold. This is a signal to scan, save, or enrich more content.
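The cutoff behaves like a simple similarity filter. The sketch below illustrates the idea, assuming memory search scores items by cosine similarity over embedding vectors; the function and field names here are illustrative, not Contox APIs:

```python
import math

SIMILARITY_THRESHOLD = 0.50  # Ask excludes matches below 50%

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def relevant_items(question_embedding, memory_items):
    """Keep only memory items at or above the similarity threshold."""
    scored = [
        (cosine_similarity(question_embedding, item["embedding"]), item)
        for item in memory_items
    ]
    return [(score, item) for score, item in scored if score >= SIMILARITY_THRESHOLD]
```

If every item scores below the threshold, the result is empty -- which is exactly the "couldn't find relevant information" case.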
Crafting good questions
The quality of your question directly affects the quality of the answer. Ask uses semantic search, so questions that use the same terminology as your memory items will match better.
Specific beats vague
| Instead of... | Try... |
|---|---|
| "How does auth work?" | "How does JWT refresh token rotation work in the auth middleware?" |
| "Tell me about the API" | "What are the V2 ingest endpoint's HMAC signing requirements?" |
| "What's the stack?" | "What database and hosting infrastructure does the project use?" |
| "Are there bugs?" | "What known issues exist with the session timeout logic?" |
Mention concrete names
Semantic search works best when your question includes terms that match your memory items:
- Component names -- "What does the `ChatSessionPanel` component do?"
- File paths -- "What is the purpose of `src/lib/middleware/auth-derive.ts`?"
- Schema keys -- "What do we know about `root/architecture/auth`?"
- Technology names -- "How is Appwrite configured for the Frankfurt region?"
Ask about architecture and decisions
Ask excels at questions about:
- Architecture -- "What is the data flow from VS Code extension to the enrichment pipeline?"
- Conventions -- "What are the ESLint gotchas for arrow functions?"
- Decisions -- "Why was Gemini 2.0 Flash chosen over Mistral for the Ask feature?"
- Implementation patterns -- "How does the session 4-hour window logic work?"
- Known bugs -- "What bugs have been found in the billing system?"
Use follow-up questions
Conversations within the same chat session build context naturally. Start broad and narrow down:
- "What authentication methods does the project support?"
- "How are API keys validated in the ingest endpoint?"
- "What happens when an HMAC signature is invalid?"
Understanding sources
Every answer includes source cards that show where the information came from.
Similarity percentage
The percentage badge on each source card indicates how closely the memory item matched your question:
| Range | Meaning |
|---|---|
| 85--100% | Very strong match -- the source is directly about your question |
| 70--84% | Good match -- the source covers a closely related topic |
| 50--69% | Partial match -- the source has some relevant information but is not focused on your question |
The minimum threshold is 50%. Sources below this are excluded entirely.
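The ranges above map directly to labels; a small helper makes the boundaries explicit, assuming similarity arrives as a fraction between 0 and 1 (an illustrative sketch, not Contox code):

```python
def match_strength(similarity: float) -> str:
    """Classify a source card's similarity score per the ranges above."""
    if similarity >= 0.85:
        return "very strong match"
    if similarity >= 0.70:
        return "good match"
    if similarity >= 0.50:
        return "partial match"
    return "excluded"  # below the 50% minimum threshold
```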
Used vs Related
Sources are divided into two groups:
- Used Sources -- The AI explicitly cited these sources when constructing the answer. These are marked with a green "Used" badge and are the most reliable indicators of where the information came from.
- Related Sources -- These matched your question by semantic similarity but were not directly cited. They may contain additional relevant information worth exploring.
If you see many "Related" sources but few "Used" sources, your question may be too broad. Try narrowing it to a specific aspect of the topic.
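Conceptually, the split is a simple partition of the retrieved sources, assuming each source carries a flag indicating whether the model cited it (the `cited` field name is hypothetical):

```python
def group_sources(sources):
    """Split retrieved sources into Used (cited by the AI) and Related (similar only)."""
    used = [s for s in sources if s.get("cited")]
    related = [s for s in sources if not s.get("cited")]
    return used, related
```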
Exploring sources
Click a source card to expand it and see the full facts text. You can:
- Copy the schema key -- Use it to find the item in the brain hierarchy
- Copy the content -- Paste it into another tool or document
- View in Memory -- Jump directly to the memory item in the Brain tab
When Ask struggles
Ask is not always the right tool. Here are situations where it may produce incomplete or unhelpful answers:
Topics not in memory
Ask can only answer from what is in your project memory. If you have never saved, scanned, or enriched information about a topic, Ask will not know about it. Solution: Scan your codebase, save relevant sessions, and run enrichment.
Highly specific code questions
Ask has access to memory item descriptions and facts, not actual source code. It cannot show you the implementation of a function or the exact contents of a file. Solution: Use your IDE, GitHub search, or an AI coding assistant (Claude, Cursor, Copilot) that has direct access to your source files.
Very recent changes
The embedding cache has a 5-minute TTL. If you just ran enrichment, there may be a short delay before new items appear in Ask results. Additionally, memory items need embeddings (generated during the Embed stage of enrichment) to be searchable.
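The delay is ordinary TTL-cache behavior: entries stay live for five minutes, then the next read forces a refetch. A minimal sketch of that pattern (illustrative only, not the actual implementation):

```python
import time

CACHE_TTL_SECONDS = 300  # 5-minute TTL, matching the embedding cache

class TTLCache:
    """Key-value cache whose entries expire after a fixed TTL."""
    def __init__(self, ttl=CACHE_TTL_SECONDS, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock  # injectable for testing
        self._store = {}

    def set(self, key, value):
        self._store[key] = (value, self.clock())

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if self.clock() - stored_at > self.ttl:
            del self._store[key]  # expired: caller must refetch
            return None
        return value
```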
Cross-project questions
Ask searches only the currently selected project. For information spanning multiple projects, switch projects and ask each one separately.
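If you do this often, the per-project loop is trivial to script; a rough sketch assuming a hypothetical `ask(project, question)` helper:

```python
def ask_across_projects(projects, question, ask):
    """Ask the same question in each project and collect answers keyed by project."""
    return {project: ask(project, question) for project in projects}
```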
Ask vs alternatives
| Approach | Best for | Limitations |
|---|---|---|
| Ask | Architecture, conventions, decisions, implementation patterns, bugs | No source code access, only as good as the memory |
| Brain tab search | Finding specific memory items by keyword | Manual browsing, no synthesis across items |
| Context Packs | Feeding focused context to an AI coding assistant | Requires MCP/CLI integration, raw memory format |
| Full brain (`GET /api/v2/brain`) | Loading the complete project brain | Large payload, no semantic filtering |
| IDE search (grep/find) | Finding exact code, variable usage, file contents | No synthesis, no architectural understanding |
| AI coding assistant | Writing code, debugging, refactoring | No project memory -- answers from general knowledge unless given context |
The most effective workflow combines these tools:
- Use Ask to understand architecture and decisions
- Use Context Packs to give your AI coding assistant focused project context
- Use your IDE for actual code navigation and editing
Troubleshooting
"I couldn't find relevant information"
- Your project memory may be empty or very sparse. Run `contox scan` and save some sessions.
- Your question may use terminology that does not match your memory items. Try rephrasing with different terms.
- Embeddings may not have been generated yet. Check that enrichment has completed with the Embed stage.
Slow responses
- The first question after a cache miss takes longer because embeddings need to be fetched from Appwrite. Subsequent questions within 5 minutes are faster.
- Large projects with many memory items take longer to search.
- Network latency to the VPS worker affects streaming speed.
Incomplete answers
- Check the source cards. If the sources do not cover the full topic, Ask cannot synthesize a complete answer.
- Try breaking your question into smaller, more focused questions.
- Build more memory around the topic by saving sessions with relevant details.
Next steps
- Ask Dashboard Guide -- How to use the Ask interface
- Ask (Concept) -- How Ask works architecturally
- Best Practices -- General best practices for maintaining project memory
- Codebase Scanner -- Populate your brain with a scan