Build Brains
Inside your apps.
The first Stateful Agentic AI Framework for Flutter. Enable agents to reason, plan, and execute tools locally — with zero middleware latency.
Kill the
Middleman
Stop paying for bloated Python middleware. Vantura moves the reasoning orchestration directly into your Flutter app.
Old Way (Backend AI)
App → Server → Python (LangChain) → LLM. High Latency. Privacy Risk.
Vantura Way (On-Device)
App (Vantura Agent) → LLM. Local Tools & Privacy. Zero Middleware.
Agentic Powerhouse
Every feature you need to build production-grade autonomous agents, built into the core SDK.
On-Device ReAct
Native Reason+Act loop in Dart. Iterative thinking with under 50ms orchestration latency.
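The Reason+Act loop behind this card can be sketched in plain Dart. The `AgentStep` shape, `callModel` callback, and `Tool` typedef below are illustrative stand-ins, not Vantura's actual API:

```dart
// Minimal ReAct loop sketch in pure Dart (illustrative, not the Vantura API).
typedef Tool = String Function(String input);

class AgentStep {
  final String thought;
  final String? action;      // tool name to invoke, if any
  final String? actionInput; // argument for the tool
  final String? finalAnswer; // set when the agent is done
  AgentStep({required this.thought, this.action, this.actionInput, this.finalAnswer});
}

String runReActLoop({
  required AgentStep Function(List<String> transcript) callModel,
  required Map<String, Tool> tools,
  int maxSteps = 5,
}) {
  final transcript = <String>[];
  for (var i = 0; i < maxSteps; i++) {
    final step = callModel(transcript);       // Reason
    transcript.add('Thought: ${step.thought}');
    if (step.finalAnswer != null) return step.finalAnswer!;
    final tool = tools[step.action];          // Act
    final observation = tool == null
        ? 'Unknown tool: ${step.action}'
        : tool(step.actionInput ?? '');
    transcript.add('Action: ${step.action}(${step.actionInput})');
    transcript.add('Observation: $observation');
  }
  return 'Stopped after $maxSteps steps without a final answer.';
}
```

Because orchestration is just a local loop over a transcript, no server round-trip is needed between reasoning steps.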
Multi-Agent Teams
Task delegation between specialized agents via automatic transfer mechanisms.
PII Redaction Engine
VanturaSecurity masks emails, SSNs, and card numbers before prompts reach provider APIs.
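Conceptually, SDK-level redaction is regex masking applied before any network call. This pure-Dart sketch shows the idea; the patterns and placeholder tokens are assumptions, not VanturaSecurity's actual rules:

```dart
// Illustrative PII masking in pure Dart (not the VanturaSecurity API).
final _email = RegExp(r'[\w.+-]+@[\w-]+\.[\w.]+');
final _ssn = RegExp(r'\b\d{3}-\d{2}-\d{4}\b');
final _card = RegExp(r'\b(?:\d[ -]?){13,16}\b'); // 13-16 digit card numbers

String redactPii(String input) => input
    .replaceAll(_email, '[EMAIL]')
    .replaceAll(_ssn, '[SSN]')
    .replaceAll(_card, '[CARD]');
```

Running redaction on the client means raw PII never leaves the device, even in prompts.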
Multi-Provider
Deep integration for OpenAI, Anthropic, and Gemini. Swap with one line of code.
Dual-Layer Memory
Short-term context + Long-term summarized persistence across app restarts.
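A two-tier memory like this can be sketched as a verbatim recent-turn buffer plus a rolling summary. The class below is a conceptual illustration (in a real system the summarization step would be an LLM call and the summary would be persisted), not Vantura's memory classes:

```dart
// Sketch of dual-layer memory (illustrative, not Vantura's implementation):
// recent turns stay verbatim; older turns fold into a summary string that
// a real implementation would persist across app restarts.
class DualLayerMemory {
  DualLayerMemory({this.shortTermLimit = 4});
  final int shortTermLimit;
  final List<String> _recent = [];
  String summary = '';

  void add(String turn) {
    _recent.add(turn);
    while (_recent.length > shortTermLimit) {
      final oldest = _recent.removeAt(0);
      // Stand-in for LLM summarization: keep a truncated trace.
      summary += '${oldest.split(' ').take(5).join(' ')}; ';
    }
  }

  String buildContext() => 'Summary: $summary\nRecent:\n${_recent.join('\n')}';
}
```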
Isolate Workers
Heavy reasoning runs in background isolates. Zero impact on UI frame rates.
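Offloading work this way uses Dart's standard `Isolate.run` helper; the `planSteps` workload below is a placeholder, not a Vantura function:

```dart
import 'dart:isolate';

// Sketch: run an expensive computation in a background isolate so the
// main (UI) isolate keeps rendering frames. `planSteps` stands in for
// heavy agent reasoning.
List<String> planSteps(String goal) {
  return List.generate(3, (i) => 'step ${i + 1} toward: $goal');
}

Future<List<String>> planInBackground(String goal) =>
    Isolate.run(() => planSteps(goal));
```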
The Vantura Ecosystem
| Package | Version | Core Role |
|---|---|---|
| vantura | v1.1.0 | ReAct Engine, Memory & Coordination |
| vantura_tracing | v0.1.0 | Observability & Privacy-safe Logs |
| orbit_reference | v0.3.0 | Production-grade Flutter Suite |
Built for
Compliance & Privacy
PII Redaction
Auto-mask emails, phones, and SSNs at the SDK level.
Anti-SSRF Armor
Hostname blacklisting prevents internal network scanning.
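An SSRF guard of this kind amounts to vetting a hostname before a tool makes an outbound request. This sketch uses `dart:io` name resolution and RFC 1918 range checks; the denylist entries and fail-closed policy are assumptions, not Vantura's configuration:

```dart
import 'dart:io';

// Illustrative SSRF guard (not Vantura's API): refuse denylisted hosts
// and any host resolving to loopback, link-local, or private IPv4 ranges.
const _blockedHosts = {'localhost', 'metadata.google.internal'};

bool _isPrivate(InternetAddress a) {
  if (a.isLoopback || a.isLinkLocal) return true;
  final b = a.rawAddress;
  if (a.type == InternetAddressType.IPv4) {
    return b[0] == 10 ||                              // 10.0.0.0/8
        (b[0] == 172 && b[1] >= 16 && b[1] <= 31) ||  // 172.16.0.0/12
        (b[0] == 192 && b[1] == 168);                 // 192.168.0.0/16
  }
  return false;
}

Future<bool> isHostAllowed(String host) async {
  if (_blockedHosts.contains(host.toLowerCase())) return false;
  try {
    final addrs = await InternetAddress.lookup(host);
    return addrs.every((a) => !_isPrivate(a));
  } on SocketException {
    return false; // fail closed when resolution fails
  }
}
```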
Safe Logging
Recursive redaction of keys and secrets from all logs.
Human-in-the-loop
Mandatory confirmation triggers for sensitive tools.
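A confirmation trigger is simply an async gate in front of the tool call. The function and names below are a hypothetical sketch of the pattern, not Vantura's API:

```dart
// Illustrative human-in-the-loop gate: a sensitive tool runs only after
// an async confirmation callback (e.g. a dialog) resolves to true.
Future<String> guardedToolCall({
  required Future<bool> Function(String description) confirm,
  required Future<String> Function() tool,
  required String description,
}) async {
  if (!await confirm(description)) return 'Cancelled by user.';
  return tool();
}
```

In a Flutter app, `confirm` would typically show a dialog and resolve with the user's choice.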
Frequently Asked Questions
Q: How is Vantura different from LangChain or CrewAI?
LangChain and CrewAI are server-side Python frameworks. Vantura runs entirely on the client (Flutter), eliminating the need for a backend orchestration server, reducing latency, and keeping data private by design.
Q: Which LLM providers are supported?
Vantura ships with native support for OpenAI (and Groq/Ollama), Anthropic Claude, and Google Gemini. You can swap providers with a single line of code.
Q: Does Vantura work offline?
Tool execution and memory work offline. LLM calls require a network connection, but agent checkpointing preserves state so sessions resume cleanly once the connection is restored.
Q: Can it handle sensitive data?
Yes. Vantura's security engine redacts sensitive PII like emails and card numbers before data is sent to provider APIs, and all logging is scrubbed of secrets by default.
Intelligence
Starts Here.
Vantura is the missing cognitive framework for Flutter. Join the mission to build more autonomous, private, and powerful mobile apps.
v:0.1