A minimal, hackable agentic framework engineered to run entirely locally with Ollama or BitNet.
Inspired by the architecture of OpenClaw, rebuilt from scratch for local-first operation.
| Document | Description |
|---|---|
| Architecture.md | Technical documentation for developers (directory structure, core design, orchestrator modes) |
| CHANGELOG.md | Version history and release notes (includes LocalClaw history) |
| TESTS.md | Benchmark results, model recommendations, and testing guide |
```
# Install from GitHub using pip:
pip install git+https://github.com/VTSTech/AgentNova.git
```

The package was previously named `localclaw`. For backward compatibility:

```
# Old package name still works (shows deprecation warning)
pip install localclaw

# Old CLI command still works
localclaw run "What is the capital of Japan?"  # Redirects to agentnova
```

```python
# Old imports still work (with deprecation warning)
import localclaw  # Re-exports from agentnova
```

We recommend updating to the new package name:

```python
# Old
import localclaw
from localclaw import Agent

# New
import agentnova
from agentnova import Agent
```

Or install from source:

```
git clone https://github.com/VTSTech/AgentNova.git
cd AgentNova
pip install -e .
```

AgentNova uses only the Python stdlib — no dependencies! You can also just copy the `agentnova` directory into your project:

```
cp -r agentnova /path/to/your/project/
```

```
# Test all models for native tool support
agentnova models --tool_support
# Results saved to tested_models.json for future reference
```

```
# Simple Q&A
agentnova run "What is the capital of Japan?"

# With streaming output
agentnova run "Tell me a joke." --stream

# Specify a model
agentnova run "Explain quantum computing" -m llama3.2:3b
```

```
# Start interactive session
agentnova chat -m qwen2.5-coder:0.5b

# With tools enabled
agentnova chat -m llama3.1:8b --tools calculator,shell,read_file,write_file

# With skills loaded
agentnova chat -m llama3.2:3b --skills skill-creator --tools write_file,shell

# Fast mode (reduced context for speed)
agentnova chat -m qwen2.5-coder:0.5b --fast --verbose
```

To use the BitNet backend:

```
agentnova chat --backend bitnet --force-react
agentnova run "Calculate 17 * 23" --backend bitnet --tools calculator
```

- Zero dependencies — uses Python stdlib only
- Ollama + BitNet backends — switch with the `--backend` flag
- Three-tier tool support — native, ReAct, or none (auto-detected per model)
- Agent Skills — follows Agent Skills specification
- Small model optimized — pure reasoning mode for sub-500M models
- Built-in security — path validation, command blocklist, SSRF protection
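The "built-in security" bullet above (path validation, command blocklist) can be sketched with stdlib-only Python. This is an illustration of the general technique, not AgentNova's actual code; the blocklist contents and function names are assumptions.

```python
import os

# Assumed example blocklist; AgentNova's real list may differ
BLOCKED_COMMANDS = {"rm", "mkfs", "shutdown", "reboot"}

def command_allowed(command: str) -> bool:
    """Reject shell commands whose first token is on the blocklist."""
    tokens = command.strip().split()
    return bool(tokens) and tokens[0] not in BLOCKED_COMMANDS

def path_allowed(path: str, workspace: str) -> bool:
    """Only allow file access inside the agent's workspace directory.

    realpath() resolves symlinks and '..' segments, so traversal
    tricks like 'workspace/../etc/passwd' are caught.
    """
    resolved = os.path.realpath(path)
    root = os.path.realpath(workspace)
    return resolved == root or resolved.startswith(root + os.sep)

print(command_allowed("ls -la"), command_allowed("rm -rf /"))  # True False
```

A blocklist is a pragmatic default for local agents; a strict mode would typically invert this into an allowlist.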
AgentNova automatically detects each model's tool support level:
| Level | Description | When to Use |
|---|---|---|
| `native` | Ollama API tool-calling | Models trained for function calling |
| `react` | Text-based ReAct prompting | Models that accept tools but need format guidance |
| `none` | No tool support | Models that reject tools; use pure reasoning |
```
# Test all models
agentnova models --tool_support
```

Example output:

```
Model                                Family    Context   Tool Support
──────────────────────────────────────────────────────────────────────────────
gemma3:270m                          gemma3    32K       ○ none
granite4:350m                        granite   32K       ✓ native
qwen2.5-coder:0.5b-instruct-q4_k_m   qwen2     32K       ReAct
functiongemma:270m                   gemma3    32K       ✓ native
```

R02.6 Quick Diagnostic results (5 questions, ~30-120s/model):
| Model | Score | Time | Tool Support |
|---|---|---|---|
| `functiongemma:270m` | 100% | 19.6s | native |
| `granite4:350m` | 100% | 49.4s | native |
| `qwen2.5:0.5b` | 100% | 66.3s | native |
| `qwen2.5-coder:0.5b` | 100% | 116.5s | react |
| `qwen3:0.6b` | 100% | 122.8s | react |
| `gemma3:270m` | 80% | 14.3s | none |
| `dolphin3.0-qwen2.5:0.5b` | 80% | 26.6s | none |
| `qwen:0.5b` | 20% | 27.0s | none |
Key improvements in R02.6:
- 5 models achieve 100% - all tool-calling models now perfect!
- functiongemma:270m fastest at 19.6s (native tools + 270M params)
- Multi-step expression extraction - handles `8 times 7 minus 5`, word problems, time calculations
- ReAct JSON parsing fixed - clean extraction even with trailing text
- Verbose response fallback - uses numeric result when the model gives a long explanation
- Tool-calling outperforms pure reasoning - ALL native/react models score 100% vs a max of 80% for no-tool models
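The "clean extraction even with trailing text" fix can be sketched with the stdlib: `json.JSONDecoder.raw_decode` parses the first complete JSON object in a string and reports where it ends, so any chatter a small model appends after the tool call is simply ignored. This is an illustration, not AgentNova's actual parser, and the `tool`/`args` field names are assumptions.

```python
import json

def extract_tool_call(text: str):
    """Pull the first JSON object out of a ReAct-style reply, ignoring trailing text."""
    decoder = json.JSONDecoder()
    start = text.find("{")
    while start != -1:
        try:
            obj, _end = decoder.raw_decode(text, start)
            return obj
        except json.JSONDecodeError:
            # Not valid JSON at this brace; try the next one
            start = text.find("{", start + 1)
    return None

reply = 'Action: {"tool": "calculator", "args": {"expression": "17 * 23"}} Let me compute that...'
print(extract_tool_call(reply))
# {'tool': 'calculator', 'args': {'expression': '17 * 23'}}
```

Scanning forward to each `{` also tolerates a preamble before the JSON, which small ReAct-prompted models often produce.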
| Command | Description |
|---|---|
| `run "prompt"` | Run a single prompt and exit |
| `chat` | Interactive multi-turn conversation |
| `models` | List available Ollama models with tool support info |
| `tools` | List built-in tools |
| `skills` | List available Agent Skills |
| `test [example]` | Run example/test scripts (`--list` to see all) |
| `modelfile [model]` | Show a model's Modelfile system prompt |
| Flag | Description |
|---|---|
| `-m, --model` | Model name (default: `qwen2.5-coder:0.5b`) |
| `--tools` | Comma-separated tool list |
| `--skills` | Comma-separated skill list |
| `--backend` | `ollama` or `bitnet` |
| `--stream` | Stream output token-by-token |
| `--fast` | Preset: reduced context for speed |
| `-v, --verbose` | Show tool calls and timing |
| `--acp` | Enable ACP (Agent Control Panel) integration |
| `--use-mf-sys` | Use the Modelfile system prompt instead of the AgentNova default |
| `--force-react` | Force ReAct mode for all models |
| `--debug` | Show debug info (parsed tool calls, fuzzy matching) |
| `--num-ctx` | Context window size for test commands |
| `--num-predict` | Max tokens to predict for test commands |
```
# List models with family, context size, and tool support
agentnova models

# Test each model for native tool support (recommended)
agentnova models --tool_support
```

Output shows:
- Model - Model name
- Family - Model family from Ollama API
- Context - Context window size
- Tool Support - `✓ native`, `ReAct`, `○ none`, or `untested`
```
⚛️ AgentNova R02.6 Models

Model                                Family    Context   Tool Support
──────────────────────────────────────────────────────────────────────────────
gemma3:270m                          gemma3    32K       ○ none
granite4:350m                        granite   32K       ✓ native
qwen2.5-coder:0.5b-instruct-q4_k_m   qwen2     32K       ReAct
functiongemma:270m                   gemma3    32K       untested

1 model(s) untested. Use --tool_support to detect native support.
```
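Because detection results are cached in `tested_models.json`, they can also be reused from your own scripts. The schema below is an assumption for illustration (a flat model-name-to-tier mapping); the actual file format is not documented here.

```python
import json

# Stand-in for the contents of tested_models.json (assumed schema)
sample = '{"granite4:350m": "native", "qwen2.5-coder:0.5b": "react", "gemma3:270m": "none"}'
results = json.loads(sample)

# Pick only models with native tool-calling for agent runs
native_models = [name for name, tier in results.items() if tier == "native"]
print(native_models)  # ['granite4:350m']
```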
```
# List all available tests
agentnova test --list

# Quick diagnostic - 5 questions, ~30s/model (NEW in R02.3)
agentnova test 15 --model granite3.1-moe:1b
agentnova test 15 --model all --debug

# Run GSM8K benchmark (50 math questions)
agentnova test 14 --acp --timeout 6400

# Run with debug output
agentnova test 02 --debug --verbose
```

| Tool | Description |
|---|---|
| `calculator` | Evaluate math expressions |
| `python_repl` | Execute Python code |
| `shell` | Run shell commands |
| `read_file` | Read file contents |
| `write_file` | Write content to a file |
| `list_directory` | List directory contents |
| `http_get` | HTTP GET request |
| `save_note` / `get_note` | Save and retrieve notes |
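The `calculator` tool's core job — evaluating a math expression without handing the model arbitrary code execution — can be sketched with the stdlib `ast` module. This is a minimal illustration of the technique, not AgentNova's actual implementation.

```python
import ast
import operator

# Whitelist of arithmetic operators this sketch will evaluate
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expression: str) -> float:
    """Evaluate a pure-arithmetic expression; reject anything else."""
    def _eval(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError("unsupported expression")
    return _eval(ast.parse(expression, mode="eval").body)

print(safe_eval("17 * 23"))  # 391
```

Walking the AST with an operator whitelist means names, calls, and attribute access never evaluate, unlike a bare `eval()`.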
| Variable | Description | Default |
|---|---|---|
| `OLLAMA_BASE_URL` | Ollama server URL | `http://localhost:11434` |
| `BITNET_BASE_URL` | BitNet server URL | `http://localhost:8765` |
| `ACP_BASE_URL` | ACP (Agent Control Panel) server URL | `http://localhost:8766` |
| `AGENTNOVA_BACKEND` | Backend: `ollama` or `bitnet` | `ollama` |
| `AGENTNOVA_MODEL` | Default model | `qwen2.5-coder:0.5b-instruct-q4_k_m` |
| `AGENTNOVA_SECURITY_MODE` | Security mode: `strict`, `permissive`, `disabled` | `permissive` |
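A script that wraps AgentNova can honor the same variables with plain `os.environ.get` lookups. The variable names and defaults below come straight from the table above; the snippet itself is just a usage sketch.

```python
import os

# Fall back to the documented defaults when the variables are unset
OLLAMA_BASE_URL = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
BACKEND = os.environ.get("AGENTNOVA_BACKEND", "ollama")
MODEL = os.environ.get("AGENTNOVA_MODEL", "qwen2.5-coder:0.5b-instruct-q4_k_m")
SECURITY_MODE = os.environ.get("AGENTNOVA_SECURITY_MODE", "permissive")

print(f"backend={BACKEND} model={MODEL} security={SECURITY_MODE}")
```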
```
# Make sure Ollama is running:
ollama serve

# Pull a model:
ollama pull qwen2.5-coder:0.5b-instruct-q4_k_m

# Test tool support:
agentnova models --tool_support
```

**⚛️ AgentNova** is written and maintained by VTSTech.
- 🌐 Website: https://www.vts-tech.org
- 📦 GitHub: https://github.com/VTSTech/AgentNova
- 💻 More projects: https://github.com/VTSTech
For more details, see:
- Architecture.md — Technical architecture and design decisions
- CHANGELOG.md — Version history and release notes
- TESTS.md — Benchmark results and model recommendations