SecNode API

AI-augmented, schema-driven API penetration testing from OpenAPI/Swagger specs, with asynchronous execution and structured reporting.

Why SecNode API

SecNode API helps security engineers and backend teams run repeatable API risk assessments in staging and CI without writing one-off test scripts for every target.

  • Ingests local or remote OpenAPI/Swagger schema files
  • Performs multi-stage, specialized AI test generation (Auth, Injection, Infrastructure, Business Logic) to broaden vulnerability coverage
  • Performs enhanced reconnaissance (mutations, method probing, parameter fuzzing) augmented by an AI Recon Analyzer for shadow endpoints
  • Executes tests concurrently with optional proxy routing
  • Supports autonomous agent mode with request budgets and iterative replanning
  • Supports direct microservices mode with controller/planner/worker boundaries
  • Produces both human-readable and machine-readable findings

What It Is and Is Not

SecNode API is a practical automation layer for API security testing.

  • It is useful for fast risk triage, regression checks, and structured analyst review
  • It does not replace manual penetration testing or threat modeling
  • It can still produce false positives or miss undocumented behavior

Installation

Requirements

  • Python 3.10+
  • Access to an LLM provider key (OpenAI, Anthropic, or Nebius)

Install from source

python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
pip install -e .[dev]

Install with uv (recommended)

uv sync --extra dev

On Windows (PowerShell):

python -m venv .venv
.venv\Scripts\Activate.ps1
pip install -r requirements.txt
pip install -e .[dev]

On Windows (PowerShell) with uv:

uv sync --extra dev

Configuration

Set the model and provider credentials before running scans:

export SECNODE_LLM="openai/gpt-4o"
export OPENAI_API_KEY="your-api-key"

Or with Anthropic:

export SECNODE_LLM="anthropic/claude-3-5-sonnet-20241022"
export ANTHROPIC_API_KEY="your-anthropic-key"

Or with Ollama:

export SECNODE_LLM="ollama/llama3.1"
export OLLAMA_API_BASE="http://localhost:11434" # optional, defaults to localhost if omitted

Or with Nebius:

export SECNODE_LLM="nebius/meta-llama/Meta-Llama-3.1-70B-Instruct"
export NEBIUS_API_KEY="your-nebius-key"

Provider credentials are model-specific: openai/* requires OPENAI_API_KEY, anthropic/* requires ANTHROPIC_API_KEY, nebius/* requires NEBIUS_API_KEY, and ollama/* runs locally without cloud API keys. See the LiteLLM providers documentation for the full list of supported model prefixes.

Quick Start

Run against a remote schema:

secnodeapi --target https://api.example.com/swagger.json

Run against a local schema file:

secnodeapi --target ./openapi.yaml

Reports are written to results/<target_or_local_schema>_<timestamp>/.

CLI Usage

secnodeapi --target ./openapi.yaml --schema-only
secnodeapi --target https://api.example.com/swagger.json --dry-run --dry-run-output ./results/tests.json
secnodeapi --target https://api.example.com/swagger.json --auth-header "Authorization: Bearer <token>"
secnodeapi --target https://api.example.com/swagger.json --proxy http://127.0.0.1:8080 --insecure
secnodeapi --target https://api.example.com/swagger.json --mode agent --request-budget 500 --max-iterations 6
secnodeapi --target https://api.example.com/swagger.json --mode agent --instruction "username=admin, role=superuser" --instruction "username=user"
secnodeapi --target https://api.example.com/swagger.json --mode microservices

Key options

  • --target URL or local path to OpenAPI schema (required)
  • --mode agent (default), legacy, or microservices
  • --concurrency concurrent request workers
  • --auth-header single inline auth header
  • --auth-file JSON file of auth headers
  • --identities-file JSON identities for differential auth testing
  • --instruction comma-separated key=value pairs for instruction sets (repeatable)
  • --schema-only output normalized API structure and exit
  • --dry-run generate tests without executing
  • --dry-run-output write generated tests to JSON (requires --dry-run)
  • --request-budget max request count in agent mode
  • --per-endpoint-budget max attempts per endpoint in agent mode
  • --max-iterations max plan/execute loops in agent mode
  • --proxy route traffic via proxy
  • --insecure disable TLS verification for controlled environments
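
The --auth-file and --identities-file options take JSON documents. The exact schema is not documented here; as an illustrative sketch only (the field names below are assumptions, not the tool's confirmed format), an identities file for differential auth testing might list one header set per identity:

```json
[
  {"name": "admin", "headers": {"Authorization": "Bearer <admin-token>"}},
  {"name": "user",  "headers": {"Authorization": "Bearer <user-token>"}}
]
```

An --auth-file would similarly hold a single set of headers. Check the tool's --help output for the authoritative format.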

Output

Each run generates an output directory containing:

  • report.md with executive summary, severity overview, and evidence sections
  • findings.json for machine processing and pipeline integration
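
Because findings.json is machine-readable, a CI job can gate on it. The sketch below assumes each finding carries a "severity" field; that field name is an assumption about the report schema, not a documented contract, so adjust it to the actual output.

```python
import json

# Severities that should fail the pipeline (illustrative policy).
FAIL_ON = {"critical", "high"}

def blocking_findings(findings):
    """Return the findings whose severity should block a CI run."""
    return [f for f in findings if f.get("severity", "").lower() in FAIL_ON]

# Inline sample standing in for json.load(open("results/<run>/findings.json")).
sample = json.loads("""[
  {"id": "F-1", "severity": "High", "endpoint": "/users"},
  {"id": "F-2", "severity": "Info", "endpoint": "/health"}
]""")

print([f["id"] for f in blocking_findings(sample)])  # -> ['F-1']
```

In a pipeline, exiting non-zero when blocking_findings() is non-empty turns the scan into a regression gate.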

Development

make install-dev
make lint
make test
make test-cov
make build

Using uv-native targets:

make install-dev-uv
make lint-uv
make test-uv
make test-cov-uv
make build-uv

CI

GitHub Actions workflow runs:

  • lint checks
  • test suite with coverage thresholds
  • package build
  • scan job template for staging targets

Direct Microservices Runtime

This repository now includes a direct microservices runtime foundation:

  • Controller service
  • Planner service
  • Skill engine service with ranked skill dispatch
  • Specialized workers (recon, discovery, fuzzing, exploit)
  • Tool adapters (ffuf, nuclei, sqlmap, zap, kiterunner)
  • Memory subsystem (session, history, skill metrics)
  • Attack graph engine
  • FastAPI control plane

Run local stack:

docker compose -f deploy/docker-compose.yml up --build

Security and Responsible Use

Only test systems you own or are explicitly authorized to assess.

Contributing

Contributions are welcome. For setup and PR expectations, see CONTRIBUTING.md.

License

Apache 2.0. See LICENSE.

About

Your agentic API security engineer. Built by the community, for builders who care about security but don't have unlimited time or budget. Point it at your API docs and it hunts down the deep vulnerabilities that actually get you breached.
