AI integration

Hyperterse is designed for AI-first applications. This guide covers patterns for LLMs, retrieval-augmented generation (RAG), and multi-agent systems.

Traditional database access in AI systems is problematic:

| Challenge | Traditional Approach | Hyperterse Approach |
| --- | --- | --- |
| SQL exposure | LLM generates raw SQL | LLM calls typed tools |
| Injection risk | Must sanitize LLM output | Automatic validation |
| Schema leakage | LLM needs schema knowledge | Tools are self-describing |
| Credential security | Connection strings in prompts | Fully contained runtime |
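The "automatic validation" row deserves a closer look. The idea is that each declared input is type-checked before any SQL template is rendered, so raw LLM output never reaches the database layer. A minimal sketch of that idea (hypothetical code, not Hyperterse's actual implementation):

```python
# Hypothetical sketch of typed-input validation: supplied values are checked
# against the query's declared input types before the statement runs.
TYPE_CHECKS = {"string": str, "int": int, "float": float}

def validate_inputs(declared, supplied):
    """Reject missing, unknown, or wrongly typed inputs; return the validated dict."""
    validated = {}
    for name, spec in declared.items():
        if name not in supplied:
            raise ValueError(f"missing input: {name}")
        value = supplied[name]
        # bool is a subclass of int in Python, so exclude it explicitly
        if isinstance(value, bool) or not isinstance(value, TYPE_CHECKS[spec["type"]]):
            raise TypeError(f"{name} must be of type {spec['type']}")
        validated[name] = value
    for name in supplied:
        if name not in declared:
            raise ValueError(f"unexpected input: {name}")
    return validated
```

Because the check happens at the boundary, there is nothing for the LLM to "escape" out of; a value that is not the declared type simply never reaches the statement.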
- **Tool calling** - Let LLMs discover and call database queries autonomously.
- **RAG systems** - Use structured queries for reliable context retrieval.
- **Chatbots** - Power conversational interfaces with live data.
- **Multi-agent** - Share consistent data access across agent teams.

LLMs like GPT-4 and Claude support function/tool calling. Hyperterse queries become callable tools via the Model Context Protocol (MCP).

```yaml
queries:
  get-customer-orders:
    use: main_db
    description: 'Get recent orders for a customer. Use when customers ask about their order history.'
    statement: |
      SELECT id, status, total, created_at
      FROM orders
      WHERE customer_email = {{ inputs.email }}
      ORDER BY created_at DESC
      LIMIT 10
    inputs:
      email:
        type: string
        description: "Customer's email address"

  get-order-details:
    use: main_db
    description: 'Get detailed information about a specific order including items.'
    statement: |
      SELECT o.*, oi.product_name, oi.quantity, oi.price
      FROM orders o
      JOIN order_items oi ON o.id = oi.order_id
      WHERE o.id = {{ inputs.orderId }}
    inputs:
      orderId:
        type: int
        description: 'Order ID'

  check-product-availability:
    use: main_db
    description: 'Check if a product is in stock. Use when customers ask about availability.'
    statement: |
      SELECT id, name, stock_quantity, price
      FROM products
      WHERE id = {{ inputs.productId }}
    inputs:
      productId:
        type: int
```

The AI assistant can now:

  1. List available tools via `tools/list`
  2. Understand when to use each tool from descriptions
  3. Call tools with validated inputs
  4. Receive structured results
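The mapping behind step 1 can be sketched as follows. This is a simplified illustration of how a query definition might surface as a tool schema, not the exact MCP wire format (which the protocol specification defines):

```python
# Simplified sketch: turn a Hyperterse query definition into a
# tool-calling schema an LLM can discover and invoke.
def query_to_tool(name, description, inputs):
    """Build a tool descriptor from a query's name, description, and inputs."""
    return {
        "name": name,
        "description": description,
        "inputSchema": {
            "type": "object",
            "properties": {
                k: {"type": v["type"], "description": v.get("description", "")}
                for k, v in inputs.items()
            },
            # Inputs not marked optional become required parameters
            "required": [k for k, v in inputs.items() if not v.get("optional")],
        },
    }

tool = query_to_tool(
    "get-customer-orders",
    "Get recent orders for a customer.",
    {"email": {"type": "string", "description": "Customer's email address"}},
)
```

Because the description and input types travel with the tool, the model needs no separate schema documentation to decide when and how to call it.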

For RAG systems, Hyperterse provides reliable, structured context retrieval.

```yaml
queries:
  search-knowledge-base:
    use: main_db
    description: 'Semantic search over documentation'
    statement: |
      SELECT id, title, content,
             1 - (embedding <=> {{ inputs.queryEmbedding }}::vector) AS similarity
      FROM documents
      ORDER BY embedding <=> {{ inputs.queryEmbedding }}::vector
      LIMIT {{ inputs.limit }}
    inputs:
      queryEmbedding:
        type: string
        description: 'Query embedding as JSON array'
      limit:
        type: int
        optional: true
        default: '5'
```
A typical retrieval pipeline around this query (`embed_text`, `call_hyperterse`, and `llm` are placeholders for your own embedding, Hyperterse client, and LLM helpers):

```python
import json

# 1. Generate an embedding for the user query
query_embedding = embed_text(user_query)

# 2. Retrieve relevant documents via Hyperterse
docs = call_hyperterse("search-knowledge-base", {
    "queryEmbedding": json.dumps(query_embedding),
    "limit": 5,
})

# 3. Build context from the retrieved documents
context = "\n\n".join(doc["content"] for doc in docs["results"])

# 4. Generate a response grounded in that context
response = llm.complete(f"Context:\n{context}\n\nQuestion: {user_query}")
```

Combine semantic and keyword search:

```yaml
queries:
  hybrid-search:
    use: main_db
    description: 'Combined semantic and keyword search'
    statement: |
      SELECT id, title, content,
             (0.7 * (1 - (embedding <=> {{ inputs.queryEmbedding }}::vector))) +
             (0.3 * ts_rank(search_vector, plainto_tsquery({{ inputs.keywords }}))) AS score
      FROM documents
      WHERE search_vector @@ plainto_tsquery({{ inputs.keywords }})
         OR (embedding <=> {{ inputs.queryEmbedding }}::vector) < 0.5
      ORDER BY score DESC
      LIMIT 10
    inputs:
      queryEmbedding:
        type: string
      keywords:
        type: string
```
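The 0.7/0.3 weighting in that statement is easiest to sanity-check in isolation. A minimal sketch of the same score computed client-side (the weights are the ones from the query above; tune them for your corpus):

```python
# The hybrid score from the query above, computed in Python for clarity.
# cosine_distance is what pgvector's `<=>` operator returns (0 = identical);
# keyword_rank is what ts_rank returns for the full-text match.
def hybrid_score(cosine_distance, keyword_rank):
    semantic_similarity = 1.0 - cosine_distance
    return 0.7 * semantic_similarity + 0.3 * keyword_rank

# A semantically close document with a weak keyword match still outranks
# a semantically distant one with a decent keyword match:
# hybrid_score(0.1, 0.0) = 0.63 > hybrid_score(0.6, 0.5) = 0.43
```

Weighting semantic similarity higher keeps paraphrased questions working, while the keyword term rescues exact names and codes that embeddings tend to blur.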

In multi-agent architectures, Hyperterse provides a shared data layer:

```
┌────────────────┐   ┌────────────────┐   ┌────────────────┐
│ Research Agent │   │ Planning Agent │   │ Execution Agent│
└───────┬────────┘   └───────┬────────┘   └───────┬────────┘
        │                    │                    │
        └────────────────────┼────────────────────┘
                             │
                      ┌──────▼──────┐
                      │ Hyperterse  │
                      │ (MCP Tools) │
                      └──────┬──────┘
                             │
                      ┌──────▼──────┐
                      │  Database   │
                      └─────────────┘
```
Each agent calls role-appropriate queries through the same runtime:

```yaml
queries:
  # Research agent queries
  get-market-data:
    use: analytics_db
    description: 'Get market data for analysis'
    statement: 'SELECT * FROM market_data WHERE date >= {{ inputs.startDate }}'
    inputs:
      startDate:
        type: datetime

  # Planning agent queries
  get-portfolio:
    use: main_db
    description: 'Get current portfolio holdings'
    statement: 'SELECT * FROM portfolio WHERE user_id = {{ inputs.userId }}'
    inputs:
      userId:
        type: int

  # Execution agent queries
  log-trade:
    use: main_db
    description: 'Log a trade execution'
    statement: 'INSERT INTO trades (symbol, quantity, price) VALUES ({{ inputs.symbol }}, {{ inputs.qty }}, {{ inputs.price }})'
    inputs:
      symbol:
        type: string
      qty:
        type: int
      price:
        type: float
```
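One benefit of a shared data layer is that access can be scoped per role while every agent still goes through the same validated tools. A minimal sketch of that routing (hypothetical helper, reusing the `call_hyperterse` placeholder from the RAG example):

```python
# Hypothetical sketch: each agent role is allowed only its own query subset,
# but all calls flow through the same Hyperterse runtime.
AGENT_TOOLS = {
    "research": ["get-market-data"],
    "planning": ["get-portfolio"],
    "execution": ["log-trade"],
}

def is_allowed(agent, query):
    """True if this agent role may call this query."""
    return query in AGENT_TOOLS.get(agent, [])

def dispatch(agent, query, inputs):
    """Route an agent's tool call, enforcing the role scope first."""
    if not is_allowed(agent, query):
        raise PermissionError(f"{agent} may not call {query}")
    return call_hyperterse(query, inputs)  # same shared data layer for all agents
```

Because agents never hold credentials or raw SQL, a misbehaving agent can at worst call the wrong typed tool, which the scope check above rejects.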

Generate AI-readable documentation:

```sh
hyperterse generate llms -f config.terse -o llms.txt --base-url https://api.example.com
```

Include in system prompts:

```python
system_prompt = f"""
You are a helpful assistant with access to a database API.

Available tools:
{open("llms.txt").read()}

When users ask about data, use these tools to fetch information.
"""
```

Generate agent skill archives:

```sh
hyperterse generate skills -f config.terse -o my-data-skill.zip
```

This creates a portable package for AI platforms that support skill imports.

Write descriptions that tell the model *when* to use a tool, not just what it returns:

```yaml
# Good - explains when to use
description: "Search products by category. Use when customers ask about what products are available in a specific category."

# Less helpful
description: "Get products"
```

Keep result sets small to avoid overwhelming LLM context windows:

```yaml
statement: |
  SELECT * FROM logs
  ORDER BY created_at DESC
  LIMIT 100 -- always limit
```

Select only the fields the model actually needs:

```yaml
# Good - only useful fields
statement: "SELECT id, name, summary FROM articles WHERE ..."

# Avoid - too much data
statement: "SELECT * FROM articles WHERE ..."
```

Organize queries logically so AI can understand the available operations:

```yaml
queries:
  # Customer queries
  get-customer: ...
  list-customers: ...
  search-customers: ...

  # Order queries
  get-order: ...
  list-orders: ...
  get-order-items: ...
```