AI-augmented, schema-driven API penetration testing from OpenAPI/Swagger specs, with asynchronous execution and structured reporting.
SecNode API helps security engineers and backend teams run repeatable API risk assessments in staging and CI without writing one-off test scripts for every target.
- Ingests local or remote OpenAPI/Swagger schema files
- Performs multi-stage, specialized AI test generation (Auth, Injection, Infrastructure, Business Logic) to maximize vulnerability coverage
- Performs enhanced reconnaissance (mutations, method probing, parameter fuzzing) augmented by an AI Recon Analyzer for shadow endpoints
- Executes tests concurrently with optional proxy routing
- Supports autonomous agent mode with request budgets and iterative replanning
- Supports direct microservices mode with controller/planner/worker boundaries
- Produces both human-readable and machine-readable findings
SecNode API is a practical automation layer for API security testing.
- It is useful for fast risk triage, regression checks, and structured analyst review
- It does not replace manual penetration testing or threat modeling
- It can still produce false positives or miss undocumented behavior
- Python 3.10+
- Access to an LLM provider key (OpenAI, Anthropic, or Nebius)
On Linux/macOS:
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
pip install -e .[dev]
On Linux/macOS with uv:
uv sync --extra dev
On Windows (PowerShell):
python -m venv .venv
.venv\Scripts\Activate.ps1
pip install -r requirements.txt
pip install -e .[dev]
On Windows (PowerShell) with uv:
uv sync --extra dev
Set the model and provider credentials before running scans:
export SECNODE_LLM="openai/gpt-4o"
export OPENAI_API_KEY="your-api-key"
Or with Anthropic:
export SECNODE_LLM="anthropic/claude-3-5-sonnet-20241022"
export ANTHROPIC_API_KEY="your-anthropic-key"
Or with Ollama:
export SECNODE_LLM="ollama/llama3.1"
export OLLAMA_API_BASE="http://localhost:11434" # optional, defaults to localhost if omitted
Or with Nebius:
export SECNODE_LLM="nebius/meta-llama/Meta-Llama-3.1-70B-Instruct"
export NEBIUS_API_KEY="your-nebius-key"
Provider credentials are model-specific. openai/* requires OPENAI_API_KEY, anthropic/*
requires ANTHROPIC_API_KEY, nebius/* requires NEBIUS_API_KEY, and ollama/* can run locally without cloud API keys.
See the LiteLLM providers documentation for the full list of supported models and credential variables.
Run against a remote schema:
secnodeapi --target https://api.example.com/swagger.json
Run against a local schema file:
secnodeapi --target ./openapi.yaml
Reports are written to results/<target_or_local_schema>_<timestamp>/.
secnodeapi --target ./openapi.yaml --schema-only
secnodeapi --target https://api.example.com/swagger.json --dry-run --dry-run-output ./results/tests.json
secnodeapi --target https://api.example.com/swagger.json --auth-header "Authorization: Bearer <token>"
secnodeapi --target https://api.example.com/swagger.json --proxy http://127.0.0.1:8080 --insecure
secnodeapi --target https://api.example.com/swagger.json --mode agent --request-budget 500 --max-iterations 6
secnodeapi --target https://api.example.com/swagger.json --mode agent --instruction "username=admin, role=superuser" --instruction "username=user"
secnodeapi --target https://api.example.com/swagger.json --mode microservices
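Agent mode bounds its work with --request-budget, --per-endpoint-budget, and --max-iterations. A minimal sketch of that control loop, with hypothetical names (the real planner is more involved):

```python
from dataclasses import dataclass, field

@dataclass
class Budget:
    requests: int          # --request-budget
    per_endpoint: int      # --per-endpoint-budget
    max_iterations: int    # --max-iterations
    used: int = 0
    per_endpoint_used: dict[str, int] = field(default_factory=dict)

    def allow(self, endpoint: str) -> bool:
        return (self.used < self.requests
                and self.per_endpoint_used.get(endpoint, 0) < self.per_endpoint)

    def charge(self, endpoint: str) -> None:
        self.used += 1
        self.per_endpoint_used[endpoint] = self.per_endpoint_used.get(endpoint, 0) + 1

def run_agent(endpoints: list[str], budget: Budget, tests_per_plan: int = 3) -> int:
    """Iteratively plan and execute until budgets or iterations are exhausted."""
    executed = 0
    for _ in range(budget.max_iterations):
        plan = [e for e in endpoints if budget.allow(e)][:tests_per_plan]
        if not plan:
            break  # replanning produced no affordable work
        for endpoint in plan:
            if not budget.allow(endpoint):
                break  # global budget ran out mid-plan
            budget.charge(endpoint)
            executed += 1  # a real worker would send the request here
    return executed
```
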
- --target: URL or local path to OpenAPI schema (required)
- --mode: agent (default), legacy, or microservices execution pipeline
- --concurrency: number of concurrent request workers
- --auth-header: single inline auth header
- --auth-file: JSON file of auth headers
- --identities-file: JSON identities for differential auth testing
- --instruction: comma-separated key=value pairs for an instruction set (repeatable)
- --schema-only: output the normalized API structure and exit
- --dry-run: generate tests without executing them
- --dry-run-output: write generated tests to JSON (requires --dry-run)
- --request-budget: max request count in agent mode
- --per-endpoint-budget: max attempts per endpoint in agent mode
- --max-iterations: max plan/execute loops in agent mode
- --proxy: route traffic via a proxy
- --insecure: disable TLS verification for controlled environments
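Since --instruction takes comma-separated key=value pairs and is repeatable, parsing it might look like this (an illustrative sketch, not the actual implementation):

```python
def parse_instructions(values: list[str]) -> list[dict[str, str]]:
    """Turn repeated --instruction values into one dict per instruction set."""
    sets = []
    for value in values:
        pairs = {}
        for item in value.split(","):
            key, sep, val = item.strip().partition("=")
            if not sep or not key:
                raise ValueError(f"expected key=value, got {item!r}")
            pairs[key] = val
        sets.append(pairs)
    return sets
```
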
Each run generates an output directory containing:
- report.md with executive summary, severity overview, and evidence sections
- findings.json for machine processing and pipeline integration
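findings.json can gate a pipeline. This sketch assumes the file is a JSON array of findings with a severity field; the actual schema may differ:

```python
import json
from pathlib import Path

FAIL_ON = {"critical", "high"}  # severities that should fail the build

def gate(findings_path: str) -> int:
    """Return a non-zero exit code if any finding meets the failure threshold."""
    findings = json.loads(Path(findings_path).read_text())
    blocking = [f for f in findings
                if str(f.get("severity", "")).lower() in FAIL_ON]
    for f in blocking:
        print(f"[{f['severity']}] {f.get('title', 'untitled finding')}")
    return 1 if blocking else 0
```

Wired into CI, a non-zero return from gate() fails the scan job while informational findings pass through.
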
make install-dev
make lint
make test
make test-cov
make build
Using uv-native targets:
make install-dev-uv
make lint-uv
make test-uv
make test-cov-uv
make build-uv
The GitHub Actions workflow runs:
- lint checks
- test suite with coverage thresholds
- package build
- scan job template for staging targets
This repository now includes a direct microservices runtime foundation:
- Controller service
- Planner service
- Skill engine service with ranked skill dispatch
- Specialized workers (recon, discovery, fuzzing, exploit)
- Tool adapters (ffuf, nuclei, sqlmap, zap, kiterunner)
- Memory subsystem (session, history, skill metrics)
- Attack graph engine
- FastAPI control plane
Run local stack:
docker compose -f deploy/docker-compose.yml up --build
Only test systems you own or are explicitly authorized to assess.
- Read the disclosure process in SECURITY.md
- Follow community expectations in CODE_OF_CONDUCT.md
Contributions are welcome. For setup and PR expectations, see CONTRIBUTING.md.
Apache 2.0. See LICENSE.