
Overview

ondoki integrates AI throughout the platform — from auto-processing workflow recordings to inline text editing to a full chat interface. All AI features are optional and require an LLM provider to be configured.

Supported Providers

Configure your LLM in Project Settings → AI/LLM or via environment variables.
Provider  | Models                             | Notes
OpenAI    | GPT-4o, GPT-4, GPT-3.5-turbo, etc. | Most commonly used
Anthropic | Claude Opus, Sonnet, Haiku         | Full function calling support
Ollama    | Llama, Mistral, any local model    | Self-hosted, no API key needed
Custom    | Any OpenAI-compatible endpoint     | Set ONDOKI_LLM_BASE_URL
LLM settings configured in the UI (Project Settings) take priority over environment variables. This allows different projects to use different providers.
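
The precedence rule above can be sketched as a simple merge, where project-level settings override environment-derived ones. This is an illustrative sketch, not ondoki's actual resolution code; all field and variable names are assumptions except ONDOKI_LLM_BASE_URL, which this page names.

```python
import os

def resolve_llm_config(project_settings: dict, env=os.environ) -> dict:
    """Merge env-derived LLM config with project settings; project wins."""
    env_config = {
        "base_url": env.get("ONDOKI_LLM_BASE_URL"),  # named on this page
    }
    merged = {k: v for k, v in env_config.items() if v is not None}
    # Project settings configured in the UI take priority over env vars.
    merged.update({k: v for k, v in project_settings.items() if v is not None})
    return merged

cfg = resolve_llm_config(
    {"provider": "anthropic"},
    env={"ONDOKI_LLM_BASE_URL": "http://localhost:11434/v1"},
)
# cfg == {"base_url": "http://localhost:11434/v1", "provider": "anthropic"}
```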

Inline AI

The editor includes seven AI commands accessible via the /ai slash command:
Command   | Description
Write     | Generate new text from a prompt
Summarize | Condense selected text into a brief summary
Improve   | Rewrite selected text for clarity and quality
Expand    | Add more detail to selected text
Simplify  | Make selected text easier to understand
Translate | Translate selected text to another language
Explain   | Generate an explanation of selected content
Responses stream in real time via Server-Sent Events (SSE) and are inserted directly into the editor at the cursor position. Endpoint: POST /api/v1/chat/inline
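
SSE framing itself is standard: each event's payload arrives on lines prefixed with `data:`, and a blank line terminates the event. A minimal parser for such a stream might look like this (the payload contents from the inline endpoint are an assumption; only the SSE framing is standard):

```python
def iter_sse_data(lines):
    """Yield the data payload of each SSE event from an iterable of text lines."""
    buf = []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("data:"):
            buf.append(line[5:].lstrip())        # accumulate multi-line data
        elif line == "" and buf:
            yield "\n".join(buf)                 # blank line ends the event
            buf = []
    if buf:                                      # flush a trailing event
        yield "\n".join(buf)

stream = ["data: Hel", "", "data: lo", ""]
chunks = list(iter_sse_data(stream))
# chunks == ["Hel", "lo"]
```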

AI Chat

The chat interface provides a context-aware AI assistant that understands your documents and workflows.

Features

  • Streaming responses — real-time output via SSE
  • Function calling — the AI can use tools to search, create, and modify content
  • Context injection — automatically includes relevant workflow/document context
  • Conversation history — maintains context across messages
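
The `chat/completions` path in the endpoint name suggests an OpenAI-style request shape, where conversation history travels as a `messages` array. The exact request schema is an assumption; this is only an illustration of how history might be carried across turns:

```python
# Hypothetical chat request carrying conversation history; field names follow
# the OpenAI-compatible convention the endpoint path suggests.
payload = {
    "messages": [
        {"role": "user", "content": "What does the onboarding workflow cover?"},
        {"role": "assistant", "content": "It walks through account setup."},
        {"role": "user", "content": "Summarize the third step."},
    ],
    "stream": True,  # request SSE streaming
}
```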

AI Tools

The chat AI has access to the following tools:
Tool             | What It Does
rag_search       | Semantic search across documents and workflows
create_page      | Create a new document
update_page      | Modify document content
read_document    | Fetch and read a document
read_workflow    | Fetch a workflow with all steps
analyze_workflow | AI analysis of a workflow
rename_workflow  | Change a workflow title
rename_steps     | Bulk rename workflow steps
merge_steps      | Combine multiple steps
suggest_workflow | Generate workflow suggestions
search_pages     | Full-text search on documents
list_workflows   | Browse accessible workflows
create_folder    | Create folder structure
add_context_link | Create URL → resource associations
Endpoint: POST /api/v1/chat/chat/completions
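
For providers with full function calling support, a tool such as rag_search would typically be declared to the model in the OpenAI function-calling format. The parameter schema below is an illustrative assumption, not ondoki's actual tool definition:

```python
# Hypothetical declaration of the rag_search tool in OpenAI tool format.
rag_search_tool = {
    "type": "function",
    "function": {
        "name": "rag_search",
        "description": "Semantic search across documents and workflows",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search query"},
                "limit": {"type": "integer", "description": "Max results"},
            },
            "required": ["query"],
        },
    },
}
```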

Auto-Processing

When a workflow recording is finalized, ondoki automatically runs an AI pipeline:
1. Title Generation: Generates a descriptive title based on the recorded steps and window titles.
2. Summary: Creates a 50–150 word summary of what the workflow demonstrates.
3. Tagging: Suggests topic tags for categorization and search.
4. Difficulty Assessment: Rates the workflow as Easy, Medium, or Advanced based on complexity.
5. Time Estimation: Estimates how long it would take someone to follow the workflow.
6. Step Annotation: For each step, generates a title and description explaining what happens. Identifies the UI element being interacted with and categorizes the action.
7. Guide Generation: Produces a complete step-by-step guide in Markdown format.
You can re-trigger annotation on individual steps via the Annotate button in the workflow editor. Endpoint: POST /api/v1/process-recording/steps/{step_id}/annotate
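
The annotation endpoint is parameterized by step ID, so a client builds the URL per step. A minimal sketch of that (authentication and request body, if any, are assumptions not covered here):

```python
def annotate_step_url(base_url: str, step_id: str) -> str:
    """Build the re-annotation URL for a single workflow step."""
    return f"{base_url.rstrip('/')}/api/v1/process-recording/steps/{step_id}/annotate"

url = annotate_step_url("https://ondoki.example.com/", "step_123")
# url == "https://ondoki.example.com/api/v1/process-recording/steps/step_123/annotate"
```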

Privacy and PII Protection

ondoki supports optional PII obfuscation before data reaches AI providers:

SendCloak

SendCloak is a proxy that intercepts LLM requests, detects PII using Microsoft Presidio, replaces it with synthetic data, and de-masks the AI response. Enable it with:
SENDCLOAK_ENABLED=true
docker compose --profile privacy up -d
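
The mask-and-restore round trip SendCloak performs can be illustrated with a toy example. Real detection uses Microsoft Presidio; this regex-only sketch merely shows the substitution mechanics (mask PII before the request, restore it in the response):

```python
import re

def mask_emails(text):
    """Replace email addresses with synthetic tokens; return text and mapping."""
    mapping = {}
    def repl(match):
        token = f"<EMAIL_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token
    masked = re.sub(r"[\w.+-]+@[\w-]+\.\w+", repl, text)
    return masked, mapping

def demask(text, mapping):
    """Restore original values in the AI response."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

masked, mapping = mask_emails("Contact alice@example.com for access.")
# masked == "Contact <EMAIL_0> for access."
restored = demask(masked, mapping)
```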

DataVeil

An alternative privacy proxy that masks sensitive data in AI requests. Configure via DATAVEIL_ENABLED and DATAVEIL_URL environment variables.

Circuit Breaker

The LLM gateway includes a circuit breaker pattern to handle provider outages gracefully:
  • After 3 consecutive failures, the circuit opens for 60 seconds
  • During the cooldown, requests return an error immediately instead of timing out
  • The circuit automatically resets after the cooldown period
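
The behavior above (3 consecutive failures open the circuit for a 60-second cooldown, then it resets) can be sketched as a small state machine. This is an illustrative sketch, not the gateway's actual implementation:

```python
import time

class CircuitBreaker:
    def __init__(self, threshold=3, cooldown=60.0, clock=time.monotonic):
        self.threshold = threshold
        self.cooldown = cooldown
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def allow(self):
        """Return True if a request may proceed."""
        if self.opened_at is None:
            return True
        if self.clock() - self.opened_at >= self.cooldown:
            self.opened_at = None    # cooldown elapsed: reset the circuit
            self.failures = 0
            return True
        return False                 # open: fail fast instead of timing out

    def record_success(self):
        self.failures = 0

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.threshold:
            self.opened_at = self.clock()  # open the circuit

# Usage with an injectable clock, so the cooldown can be simulated:
t = {"now": 0.0}
breaker = CircuitBreaker(clock=lambda: t["now"])
for _ in range(3):
    breaker.record_failure()         # third failure opens the circuit
```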

Usage Tracking

All LLM calls are tracked in the llmusage table with:
  • Input/output token counts
  • Estimated cost in USD
  • Model and provider used
  • Endpoint type (chat, inline, annotation, guide)
  • User and project association
View usage data in the Analytics Dashboard.
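
As an illustration, a row in the llmusage table might be derived from token counts roughly as follows. The field names and per-token prices below are assumptions, not ondoki's actual schema or pricing:

```python
# Hypothetical (input, output) USD prices per 1M tokens.
PRICES_PER_1M = {"gpt-4o": (2.50, 10.00)}

def estimate_cost_usd(model, input_tokens, output_tokens):
    """Estimate cost from token counts using the assumed price table."""
    price_in, price_out = PRICES_PER_1M[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

record = {
    "provider": "openai",
    "model": "gpt-4o",
    "endpoint_type": "chat",
    "input_tokens": 1200,
    "output_tokens": 300,
    "cost_usd": estimate_cost_usd("gpt-4o", 1200, 300),
}
```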