# Spider CLI
The Spider CLI (`spider`) is a standalone command-line tool for querying Spider-captured network traffic data and managing the capture lifecycle. It is designed for both humans and AI agents, and follows a kubectl-style verb-first command structure (`spider <verb> <resource>`).
## Install

```bash
curl -L https://repository.floocus.com/bin/spider-x86_64/latest/spider.xz | unxz > /usr/local/bin/spider
chmod +x /usr/local/bin/spider
```
## Quick start

```bash
# Interactive setup (prompts for all fields; password/secret are hidden,
# arrow keys supported for line editing)
spider add profile prod

# Or non-interactive with a service account
spider add profile prod \
  --api-url https://your-spider-instance \
  --auth-type service_account \
  --client-id <client_id> \
  --client-secret <client_secret> \
  --whisperer <whisperer_id> \
  --controller <controller_id>

# Or with a human login
spider add profile prod \
  --api-url https://your-spider-instance \
  --auth-type human \
  --email <email> \
  --password <password> \
  --whisperer <whisperer_id>

# Search last hour of HTTP errors
spider search http --query "stats.statusCode:[400 TO 599]" --pretty

# Get HTTP stats grouped by URL template
spider stats http --group-by template --pretty

# Detect duration outliers
spider outliers http --pretty
```
## Authentication

Two auth types are supported:

| Auth type | Credentials | Use case |
|---|---|---|
| `service_account` (default) | `--client-id` + `--client-secret` | Automation, CI, AI agents |
| `human` | `--email` + `--password` | Interactive use |

Credentials are stored in `~/.spider/profiles.json` (mode `0600`). Tokens are cached in `~/.spider/tokens.json` and refreshed automatically when they expire.
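The refresh decision boils down to comparing a cached expiry against the clock. A minimal Python sketch, assuming `tokens.json` maps profile names to entries carrying an `expires_at` Unix timestamp (the actual cache schema is an assumption, not documented here):

```python
import json
import time
from pathlib import Path

TOKENS_PATH = Path.home() / ".spider" / "tokens.json"

def needs_refresh(profile, skew=60, path=TOKENS_PATH):
    """Return True when the cached token for `profile` is missing or
    expires within `skew` seconds, so the caller re-authenticates."""
    try:
        cache = json.loads(path.read_text())
    except FileNotFoundError:
        return True
    entry = cache.get(profile)
    if entry is None:
        return True
    # Refresh slightly early so an in-flight request never carries
    # a token that expires mid-call.
    return entry["expires_at"] - time.time() < skew
```

The `skew` margin mirrors the common practice of refreshing a little before actual expiry.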
## Command tree

### Data search and retrieval

| Command | Description |
|---|---|
| `spider search http\|psql\|tcp\|packets` | Search captured communications |
| `spider get http\|psql\|tcp\|packets <id>` | Fetch a single communication by ID |
| `spider get http <id> req\|res body` | Fetch and decompress a request/response body (gzip, deflate, brotli handled client-side) |
| `spider stats http\|psql\|tcp\|packets` | Aggregated statistics with optional `--group-by` |
| `spider outliers http\|psql` | IQR-based duration/size outliers + z-score status anomalies |
| `spider aggs http\|psql\|tcp\|packets` | Run raw Elasticsearch aggregations |
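The duration/size detection behind `spider outliers` is Tukey's IQR fence rule. A minimal Python sketch of the idea (the exact quantile method and fence multiplier `k` that Spider uses are assumptions):

```python
from statistics import quantiles

def iqr_outliers(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's rule),
    the same idea `spider outliers` applies to durations and sizes."""
    q1, _, q3 = quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]
```

Status anomalies use a different test (z-score on status-code counts), which this sketch does not cover.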
### Management resources

| Command | Description |
|---|---|
| `spider search whisperers\|controllers\|gociphers\|teams\|users` | Search management resources |
| `spider get whisperer\|controller\|gocipher\|team\|user <id>` | Fetch a single resource by ID |
| `spider show whisperer\|controller\|team` | Show the resource referenced by the active profile |
### Capture control

| Command | Description |
|---|---|
| `spider show namespaces` | List Kubernetes namespaces visible to the profile's controller |
| `spider show collection -n <ns> -c <type>` | List workloads in a namespace (types: `pods`, `statefulsets`, `deployments`, `daemonsets`, `cronjobs`) |
| `spider attach -n <ns> -c <type> -t <name>` | Attach a whisperer to a workload and drive it to `RECORDING` |
### Sharing

| Command | Description |
|---|---|
| `spider add link` | Create a private or public shareable link |
### Profile management

| Command | Description |
|---|---|
| `spider add profile <name>` | Create or update a profile (interactive if no flags) |
| `spider list profiles` | List all saved profiles |
| `spider show profile [name]` | Show a profile (secrets redacted) |
| `spider use profile <name>` | Set the default profile |
| `spider delete profile <name>` | Remove a profile |
## Common flags

All data commands (`search`, `stats`, `outliers`, `aggs`) accept:

- `--start` — start time (ISO 8601, or relative like `-1h`, `-24h`, or `now`)
- `--stop` — stop time (default: `now`)
- `--whisperer` / `-w` — whisperer IDs (overrides profile, repeatable)
- `--query` / `-q` — Lucene query string
- `--size` — max results (default: 100)
- `--pretty` — human-readable JSON output

Global:

- `--profile <name>` — use a specific profile instead of the default
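Resolving a `--start`/`--stop` value is a small parsing exercise. A Python sketch under the assumption that the CLI accepts `now`, a `-<n><unit>` offset, or an ISO 8601 timestamp (the full set of unit suffixes beyond `h` is an assumption):

```python
import re
from datetime import datetime, timedelta, timezone

_REL = re.compile(r"^-(\d+)([smhd])$")
_UNITS = {"s": "seconds", "m": "minutes", "h": "hours", "d": "days"}

def resolve_time(spec, now=None):
    """Resolve a --start/--stop value: 'now', a relative offset like
    '-1h' or '-24h', or an ISO 8601 timestamp."""
    now = now or datetime.now(timezone.utc)
    if spec == "now":
        return now
    m = _REL.match(spec)
    if m:
        return now - timedelta(**{_UNITS[m.group(2)]: int(m.group(1))})
    # Fall through to strict ISO 8601 parsing.
    return datetime.fromisoformat(spec)
```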
## Fetching request/response bodies

`spider get http <id> req|res body` fetches the raw body from the server and decompresses it client-side based on the `Content-Encoding` header. Supported encodings: `gzip`, `deflate`, `br` (brotli). This avoids server-side decompression errors and returns the plain payload to stdout.

```bash
spider get http <id> res body > response.json
```
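The client-side decompression step can be sketched in Python; `decompress_body` is a hypothetical helper for illustration, not part of the CLI:

```python
import gzip
import zlib

def decompress_body(raw, content_encoding):
    """Decompress a body based on its Content-Encoding, mirroring the
    client-side step `spider get http <id> req|res body` performs."""
    enc = content_encoding.lower().strip()
    if enc == "gzip":
        return gzip.decompress(raw)
    if enc == "deflate":
        try:
            return zlib.decompress(raw)  # zlib-wrapped deflate
        except zlib.error:
            # Some servers send raw deflate without the zlib header.
            return zlib.decompress(raw, -zlib.MAX_WBITS)
    if enc == "br":
        import brotli  # third-party package, only needed for brotli bodies
        return brotli.decompress(raw)
    return raw  # identity / unknown encoding: pass through unchanged
```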
## Generating shareable links

Spider has two link types:

**Private link** — requires a Spider account to open. Created by default.

```bash
spider add link \
  [--view http|psql|tcp|packets] \
  [--query "<lucene>"] \
  [--start <t>] [--stop <t>] \
  [--whisperer <id>]
```

**Public link** — anyone can open it; an OTP is emailed to the recipient (or the link is fully open if no email/domain restriction is set).

```bash
spider add link --public --expiry 72h \
  [--emails analyst@example.com] \
  [--domains @example.com] \
  [--view http] [--query "<lucene>"] \
  [--start <t>] [--stop <t>]
```

Both link types open Network-View pre-loaded with the selected whisperer(s), team, time window, view, and optional Lucene filter (stored as `filters.freeQuery.part`). Public links also restrict visible data via `accessFilters` — the same `--query` limits what the recipient can see.
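A rough Python sketch of the state such a link carries; apart from `filters.freeQuery.part` and `accessFilters`, every field name below is an assumption made for illustration only:

```python
def link_payload(view="http", query=None, start=None, stop=None,
                 whisperers=(), public=False, emails=(), domains=()):
    """Illustrative shape of a shareable link's stored state.
    Field names other than filters.freeQuery.part and accessFilters
    are hypothetical."""
    payload = {
        "view": view,
        "timeWindow": {"start": start, "stop": stop},
        "whisperers": list(whisperers),
        # The Lucene filter pre-loaded into Network-View:
        "filters": {"freeQuery": {"part": query or ""}},
    }
    if public:
        # Public links also clamp what the recipient can see: the same
        # query is copied into accessFilters as a hard restriction.
        payload["accessFilters"] = {"freeQuery": {"part": query or ""}}
        payload["recipients"] = {"emails": list(emails),
                                 "domains": list(domains)}
    return payload
```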
## Attaching whisperers to Kubernetes workloads

Discover targets first (requires `controller_id` in the profile), then attach:

```bash
# List namespaces visible to the controller
spider show namespaces

# List workloads in a namespace
spider show collection -n my-namespace -c deployments

# Attach a whisperer to a target workload
spider attach -n my-namespace -c deployments -t my-service

# Uses the profile's whisperer_id and controller_id by default.
# Waits up to 30s for the whisperer to reach RECORDING state.
```
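The wait-for-`RECORDING` step amounts to a poll-with-deadline loop. A minimal Python sketch, with `get_status` standing in for whatever status call the CLI actually makes:

```python
import time

def wait_for_recording(get_status, timeout=30.0, interval=1.0):
    """Poll get_status() until it returns 'RECORDING' or `timeout`
    seconds elapse, mirroring the wait `spider attach` performs."""
    deadline = time.monotonic() + timeout
    while True:
        status = get_status()
        if status == "RECORDING":
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(
                f"whisperer still {status!r} after {timeout}s")
        time.sleep(interval)
```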