The invoke command is the primary way to make authenticated API calls through aivault. The broker validates the request, injects auth, and returns the response — the caller never sees the secret.
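For instance, the call below reaches the upstream API even though no provider key is present in the caller's environment; the broker resolves and injects the credential server-side. The capability id matches the examples further down.

# No OPENAI_API_KEY or other provider secret is exported here;
# auth is injected by the broker and never exposed to the caller.
aivault invoke openai/chat-completions \
  --body '{"model":"gpt-5.2","messages":[{"role":"user","content":"hello"}]}'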

invoke

Execute a proxied request and print the raw upstream response.
aivault invoke <capability-id> [options]
This is a top-level shortcut for aivault capability invoke.
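The two invocations below are therefore equivalent:

# Shortcut form
aivault invoke openai/chat-completions --body '...'

# Full form
aivault capability invoke openai/chat-completions --body '...'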

Examples

# JSON body
aivault invoke openai/chat-completions \
  --body '{"model":"gpt-5.2","messages":[{"role":"user","content":"hello"}]}'

# Multipart (file upload)
aivault invoke openai/transcription \
  --multipart-field model=whisper-1 \
  --multipart-file file=/tmp/audio.wav

# Custom method and path
aivault invoke github/repos \
  --method GET \
  --path /repos/owner/repo

# With specific credential
aivault invoke openai/chat-completions \
  --credential my-openai-staging \
  --body '...'

# With workspace/group context
aivault invoke openai/chat-completions \
  --workspace-id my-workspace \
  --group-id my-group \
  --body '...'

# From a request file
aivault invoke openai/chat-completions \
  --request-file /tmp/request.json

# Body from file
aivault invoke openai/chat-completions \
  --body-file-path /tmp/body.json

# Additional headers
aivault invoke openai/chat-completions \
  --header "X-Custom: value" \
  --body '...'

json

Invoke and print the response as parsed JSON.
aivault json openai/chat-completions \
  --body '{"model":"gpt-5.2","messages":[{"role":"user","content":"hello"}]}'
Same as aivault capability json.
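Because the command prints parsed JSON, the output composes well with standard JSON tooling such as jq. A minimal sketch, assuming an OpenAI-style chat-completion response shape (choices[0].message.content); adjust the jq path to the actual upstream schema:

# Extract just the assistant message (response shape assumed)
aivault json openai/chat-completions \
  --body '{"model":"gpt-5.2","messages":[{"role":"user","content":"hello"}]}' \
  | jq -r '.choices[0].message.content'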

markdown

Invoke and print the response converted to markdown. Useful for LLM-friendly output.
aivault markdown openai/chat-completions \
  --body '{"model":"gpt-5.2","messages":[{"role":"user","content":"hello"}]}'

# With namespace wrapping
aivault markdown openai/chat-completions \
  --namespace data \
  --body '...'
# → <begin data> ... </end data>

# Exclude fields from output
aivault markdown openai/chat-completions \
  --exclude-field usage \
  --body '...'

# Wrap fields containing markdown
aivault markdown openai/chat-completions \
  --wrap-field content \
  --body '...'
Alias: md
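The output-shaping flags can be combined in a single call, and the md alias accepts the same options. An illustrative sketch (flag composition assumed to work as in the individual examples above):

# Namespace-wrap the output, drop the usage field, and wrap markdown-bearing content
aivault md openai/chat-completions \
  --namespace data \
  --exclude-field usage \
  --wrap-field content \
  --body '{"model":"gpt-5.2","messages":[{"role":"user","content":"hello"}]}'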

Invoke options

Flag                Description
--method            HTTP method (defaults to capability’s first allowed method)
--path              Request path (defaults to capability’s first path prefix)
--header            Additional request header (repeatable)
--body              Request body (JSON string)
--body-file-path    Read request body from file
--request           Full request envelope as JSON
--request-file      Read full request envelope from file (see the sketch below)
--multipart-field   Multipart form field as name=value (repeatable)
--multipart-file    Multipart form file as name=/path/to/file (repeatable)
--credential        Specific credential to use (overrides default resolution)
--workspace-id      Workspace context for credential resolution
--group-id          Group context for credential resolution
--client-ip         Client IP for audit context (default: 127.0.0.1)
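For --request and --request-file, the envelope bundles the whole request into a single JSON document. The exact schema is not shown on this page; the sketch below assumes field names that mirror the individual flags (method, path, headers, body) and is illustrative only:

# Write a request envelope (hypothetical shape: field names assumed to mirror the flags above)
cat > /tmp/request.json <<'EOF'
{
  "method": "POST",
  "path": "/v1/chat/completions",
  "headers": { "X-Custom": "value" },
  "body": { "model": "gpt-5.2", "messages": [{ "role": "user", "content": "hello" }] }
}
EOF

# Then invoke with the envelope file
aivault invoke openai/chat-completions --request-file /tmp/request.json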

Response handling

  • Raw mode (invoke): prints the response body as-is
  • JSON mode (json): parses and pretty-prints as JSON
  • Markdown mode (markdown): converts JSON response to markdown with optional namespace wrapping, field exclusion, and field wrapping
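The same capability can be invoked in any of the three modes; only the output format differs:

# Same request, three output formats
aivault invoke openai/chat-completions --body '...'
aivault json openai/chat-completions --body '...'
aivault markdown openai/chat-completions --body '...'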
In all modes, upstream response headers are intentionally stripped: in untrusted execution environments, response headers can carry identifiers or cookies that would otherwise leak into the agent's context.