# AI integration
Hyperterse is designed for AI-first applications. This guide covers patterns for LLMs, retrieval-augmented generation (RAG), and multi-agent systems.
## Why Hyperterse for AI?

Traditional database access in AI systems is problematic:
| Challenge | Traditional Approach | Hyperterse Approach |
|---|---|---|
| SQL exposure | LLM generates raw SQL | LLM calls typed tools |
| Injection risk | Must sanitize LLM output | Automatic validation |
| Schema leakage | LLM needs schema knowledge | Tools are self-describing |
| Credential security | Connection strings in prompts | Fully contained runtime |
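The "typed tools" row in the table can be made concrete with a small sketch. This is a hypothetical illustration of the kind of input validation a typed tool layer performs automatically before any value reaches the database; the function and schema shape here are invented for illustration, not part of the Hyperterse API:

```python
def validate_inputs(declared, provided):
    """Check LLM-provided arguments against a declared input schema
    before they are bound as query parameters (illustrative sketch of
    the validation a typed tool layer performs automatically)."""
    checkers = {"string": str, "int": int, "float": float}
    validated = {}
    for name, spec in declared.items():
        if name not in provided:
            raise ValueError(f"missing required input: {name}")
        if not isinstance(provided[name], checkers[spec["type"]]):
            raise TypeError(f"{name} must be of type {spec['type']}")
        validated[name] = provided[name]
    return validated

# A well-typed call passes through; a mistyped one never reaches SQL.
print(validate_inputs({"email": {"type": "string"}}, {"email": "a@b.com"}))
```

Because values are validated and bound as parameters rather than spliced into SQL text, injection-style output from the model fails type checks instead of reaching the database.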
## Use cases

- Tool calling - Let LLMs discover and call database queries autonomously.
- RAG systems - Use structured queries for reliable context retrieval.
- Chatbots - Power conversational interfaces with live data.
- Multi-agent - Share consistent data access across agent teams.
## Tool calling

LLMs like GPT-4 and Claude support function/tool calling. Hyperterse queries become callable tools via MCP.
### Example: customer support bot

```yaml
queries:
  get-customer-orders:
    use: main_db
    description: 'Get recent orders for a customer. Use when customers ask about their order history.'
    statement: |
      SELECT id, status, total, created_at
      FROM orders
      WHERE customer_email = {{ inputs.email }}
      ORDER BY created_at DESC
      LIMIT 10
    inputs:
      email:
        type: string
        description: "Customer's email address"

  get-order-details:
    use: main_db
    description: 'Get detailed information about a specific order including items.'
    statement: |
      SELECT o.*, oi.product_name, oi.quantity, oi.price
      FROM orders o
      JOIN order_items oi ON o.id = oi.order_id
      WHERE o.id = {{ inputs.orderId }}
    inputs:
      orderId:
        type: int
        description: 'Order ID'

  check-product-availability:
    use: main_db
    description: 'Check if a product is in stock. Use when customers ask about availability.'
    statement: |
      SELECT id, name, stock_quantity, price
      FROM products
      WHERE id = {{ inputs.productId }}
    inputs:
      productId:
        type: int
```

The AI assistant can now:
1. List available tools via `tools/list`
2. Understand when to use each tool from descriptions
3. Call tools with validated inputs
4. Receive structured results
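The steps above can be sketched as a small dispatch loop. This is a hypothetical illustration, not a documented client: `mcp_call` stands in for a real MCP client's `tools/call`, and is stubbed here so the example is self-contained.

```python
import json

def mcp_call(tool_name, arguments):
    # Stub: a real MCP client would send a tools/call request here.
    return {"results": [{"id": 1, "status": "shipped"}]}

def handle_tool_call(tool_call):
    """Dispatch one model-issued tool call and return a JSON string
    suitable for appending to the conversation as a tool result."""
    args = json.loads(tool_call["arguments"])  # model emits JSON text
    result = mcp_call(tool_call["name"], args)
    return json.dumps(result)

# The model decided, from the description, to check order history:
reply = handle_tool_call({
    "name": "get-customer-orders",
    "arguments": '{"email": "ada@example.com"}',
})
print(reply)
```

The same loop works for any query in the config, because the model discovers names, descriptions, and input schemas from `tools/list` rather than from hard-coded knowledge.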
## Retrieval-augmented generation (RAG)

For RAG systems, Hyperterse provides reliable, structured context retrieval.
### Semantic search setup

```yaml
queries:
  search-knowledge-base:
    use: main_db
    description: 'Semantic search over documentation'
    statement: |
      SELECT id, title, content,
             1 - (embedding <=> {{ inputs.queryEmbedding }}::vector) as similarity
      FROM documents
      ORDER BY embedding <=> {{ inputs.queryEmbedding }}::vector
      LIMIT {{ inputs.limit }}
    inputs:
      queryEmbedding:
        type: string
        description: 'Query embedding as JSON array'
      limit:
        type: int
        optional: true
        default: '5'
```
### RAG pipeline

```python
# 1. generate embedding for user query
query_embedding = embed_text(user_query)

# 2. retrieve relevant documents via Hyperterse
docs = call_hyperterse("search-knowledge-base", {
    "queryEmbedding": json.dumps(query_embedding),
    "limit": 5,
})

# 3. build context
context = "\n\n".join([doc["content"] for doc in docs["results"]])

# 4. generate response with context
response = llm.complete(f"Context:\n{context}\n\nQuestion: {user_query}")
```
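The pipeline above assumes a `call_hyperterse` helper. A minimal sketch over HTTP might look like the following; note that the endpoint path and request body shape here are assumptions for illustration, so check your deployment's actual API before using it:

```python
import json
import urllib.request

def build_query_payload(query_name, inputs):
    """Build a request body for a Hyperterse query call.
    (Illustrative shape only; not the documented wire format.)"""
    return {"query": query_name, "inputs": inputs}

def call_hyperterse(query_name, inputs, base_url="https://api.example.com"):
    payload = json.dumps(build_query_payload(query_name, inputs)).encode()
    req = urllib.request.Request(
        f"{base_url}/queries/{query_name}",  # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

print(build_query_payload("search-knowledge-base", {"limit": 5}))
```

If you run Hyperterse as an MCP server instead, replace the HTTP call with a `tools/call` request through your MCP client.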
### Hybrid search

Combine semantic and keyword search:

```yaml
queries:
  hybrid-search:
    use: main_db
    description: 'Combined semantic and keyword search'
    statement: |
      SELECT id, title, content,
             (0.7 * (1 - (embedding <=> {{ inputs.queryEmbedding }}::vector))) +
             (0.3 * ts_rank(search_vector, plainto_tsquery({{ inputs.keywords }}))) as score
      FROM documents
      WHERE search_vector @@ plainto_tsquery({{ inputs.keywords }})
         OR (embedding <=> {{ inputs.queryEmbedding }}::vector) < 0.5
      ORDER BY score DESC
      LIMIT 10
    inputs:
      queryEmbedding:
        type: string
      keywords:
        type: string
```
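To see how the blended score behaves, the same arithmetic the query performs can be reproduced in a few lines (the weights are the 0.7/0.3 split from the SQL above; the function itself is just an illustration):

```python
def hybrid_score(cosine_distance, keyword_rank,
                 semantic_weight=0.7, keyword_weight=0.3):
    # Mirrors the score expression in the query: semantic similarity
    # is (1 - cosine distance), blended with the keyword rank.
    return (semantic_weight * (1 - cosine_distance)
            + keyword_weight * keyword_rank)

# A close semantic match (distance 0.2) with a modest keyword rank:
print(round(hybrid_score(0.2, 0.5), 2))  # 0.71
```

Tune the weights for your corpus: a higher semantic weight favors paraphrased matches, while a higher keyword weight favors exact terminology.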
## Multi-agent systems

In multi-agent architectures, Hyperterse provides a shared data layer:

```
┌────────────────┐   ┌────────────────┐   ┌────────────────┐
│ Research Agent │   │ Planning Agent │   │ Execution Agent│
└───────┬────────┘   └───────┬────────┘   └───────┬────────┘
        │                    │                    │
        └────────────────────┼────────────────────┘
                             │
                      ┌──────▼──────┐
                      │ Hyperterse  │
                      │ (MCP Tools) │
                      └──────┬──────┘
                             │
                      ┌──────▼──────┐
                      │  Database   │
                      └─────────────┘
```
### Example: research, planning, and execution agents

```yaml
queries:
  # Research agent queries
  get-market-data:
    use: analytics_db
    description: 'Get market data for analysis'
    statement: 'SELECT * FROM market_data WHERE date >= {{ inputs.startDate }}'
    inputs:
      startDate:
        type: datetime

  # Planning agent queries
  get-portfolio:
    use: main_db
    description: 'Get current portfolio holdings'
    statement: 'SELECT * FROM portfolio WHERE user_id = {{ inputs.userId }}'
    inputs:
      userId:
        type: int

  # Execution agent queries
  log-trade:
    use: main_db
    description: 'Log a trade execution'
    statement: 'INSERT INTO trades (symbol, quantity, price) VALUES ({{ inputs.symbol }}, {{ inputs.qty }}, {{ inputs.price }})'
    inputs:
      symbol:
        type: string
      qty:
        type: int
      price:
        type: float
```
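One way to enforce the division of labor above is to give each agent the same shared client but a different allow-list of queries. This is a hypothetical sketch: `stub_client` stands in for a real Hyperterse MCP client, and the `Agent` class is invented for illustration.

```python
class Agent:
    """An agent that shares one data-access client with its peers but
    may only call the queries it has been granted."""

    def __init__(self, name, client, allowed_queries):
        self.name = name
        self.client = client
        self.allowed = set(allowed_queries)

    def query(self, query_name, inputs):
        if query_name not in self.allowed:
            raise PermissionError(f"{self.name} may not call {query_name}")
        return self.client(query_name, inputs)

def stub_client(query_name, inputs):
    # Stub: a real client would execute the named Hyperterse query.
    return {"query": query_name, "inputs": inputs}

research = Agent("research", stub_client, ["get-market-data"])
execution = Agent("execution", stub_client, ["log-trade"])

print(research.query("get-market-data", {"startDate": "2024-01-01"}))
```

All agents see a consistent view of the data because every call goes through the same validated query layer; only the allow-lists differ.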
## LLM documentation

Generate AI-readable documentation:

```shell
hyperterse generate llms -f config.terse -o llms.txt --base-url https://api.example.com
```

Include in system prompts:
```python
system_prompt = f"""You are a helpful assistant with access to a database API.

Available tools:
{open("llms.txt").read()}

When users ask about data, use these tools to fetch information."""
```
## Agent skills

Generate agent skill archives:

```shell
hyperterse generate skills -f config.terse -o my-data-skill.zip
```

This creates a portable package for AI platforms that support skill imports.
## Best practices

### 1. Write AI-friendly descriptions

```yaml
# Good - explains when to use
description: "Search products by category. Use when customers ask about what products are available in a specific category."

# Less helpful
description: "Get products"
```
### 2. Limit result sizes

```yaml
# Prevent overwhelming LLM context windows
statement: |
  SELECT * FROM logs
  ORDER BY created_at DESC
  LIMIT 100  -- Always limit
```
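Even with a `LIMIT` in the query, it can help to cap what actually enters the prompt. A simple character budget is one option; this helper, its field name, and the budget value are illustrative assumptions, not part of Hyperterse:

```python
def fit_to_budget(rows, field="content", max_chars=4000):
    """Keep leading rows until adding the next one would exceed a
    character budget for the LLM context (illustrative helper)."""
    kept, used = [], 0
    for row in rows:
        text = row[field]
        if used + len(text) > max_chars:
            break
        kept.append(row)
        used += len(text)
    return kept

rows = [{"content": "a" * 3000}, {"content": "b" * 3000}]
print(len(fit_to_budget(rows)))  # 1
```

A token-based budget (using your model's tokenizer) is more precise, but a character cap is a reasonable first approximation.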
Section titled “3. Return relevant fields only”# Good - only useful fieldsstatement: "SELECT id, name, summary FROM articles WHERE ..."
# Avoid - too much datastatement: "SELECT * FROM articles WHERE ..."4. Group related queries
### 4. Group related queries

Organize queries logically so AI can understand the available operations:

```yaml
queries:
  # Customer queries
  get-customer: ...
  list-customers: ...
  search-customers: ...

  # Order queries
  get-order: ...
  list-orders: ...
  get-order-items: ...
```