AI Action
The `ai` action is how you talk to AI models in JigSpec. It's a single action with multiple modes of operation — the mode is determined by which fields you set. Think of it like one knob with several positions.
The simplest case: just talk to the model
Implemented

```yaml
- name: write_bio
  action: ai
  prompt: "Write a short professional bio for {{ input.name }}"
```

That's it. You give it a prompt, it gives you back text. The output is in `write_bio.text`.
The `{{ }}` syntax is how you pass data between steps — see Data References for the full story.
You can reference this in the next step:
```yaml
- name: write_bio
  action: ai
  prompt: "Write a short professional bio for {{ input.name }}"

- name: translate
  action: ai
  prompt: "Translate this bio to French: {{ write_bio.text }}"
```

Mode 1: Prompt
Implemented

The simplest mode. Give it a prompt, get text back.
```yaml
- name: summarize
  action: ai
  prompt: "Summarize in 3 bullet points: {{ input.article }}"
```

Output: `summarize.text`
You can add optional fields to control the model:
```yaml
- name: write
  action: ai
  prompt: "Write a creative story about {{ input.topic }}"
  config:
    model: anthropic/claude-sonnet-4-5  # overrides pipeline default
    temperature: 0.8                    # higher = more creative
    max_tokens: 2000
```

Mode 2: Agent
Implemented

When you need the model to work autonomously — making multiple tool calls, planning, and iterating until done — use agent mode. Add the `tools` field.
```yaml
- name: researcher
  action: ai
  prompt: "Research {{ input.company }} and write a competitive analysis"
  tools:
    - Write
    - Read
  max_attempts: 20  # safety limit on tool call rounds
```

The model will call tools, observe results, plan next steps, and keep going until it decides it's done or hits `max_attempts`.
Output: `researcher.text` — the model's final response
Agent vs. Prompt
Use prompt mode when you want one round-trip: you ask, the model answers. Use agent mode when the model needs to gather information or take actions before it can answer.
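As a sketch (step names and prompts are illustrative), the same question in both modes — the first step is a single round-trip, while the second can gather context with tools before answering:

```yaml
# Prompt mode: one round-trip, the answer comes straight back.
- name: quick_answer
  action: ai
  prompt: "What does this error message mean: {{ input.error }}"

# Agent mode: the model may Read files first, then answer.
- name: investigated_answer
  action: ai
  prompt: "Find the likely cause of this error in the codebase: {{ input.error }}"
  tools:
    - Read
```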
Tools configuration
JigSpec ships with a set of built-in tools the agent can use. Declare which tools are allowed in the `tools` field:
```yaml
- name: analyst
  action: ai
  prompt: "Analyze the data and write a report to output/report.md"
  tools:
    - Write
    - Read
```

You can also restrict tools at the pipeline level and override at the step level. See Configuration — Tools.
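A hedged sketch of the two levels — the exact pipeline-level shape is defined in Configuration — Tools, so the top-level `tools` and `steps` keys below are assumptions for illustration:

```yaml
# Hypothetical pipeline-level default: all steps limited to Read.
tools:
  - Read

steps:
  - name: analyst
    action: ai
    prompt: "Analyze the data and write a report to output/report.md"
    tools:        # step-level override widens the allowance for this step
      - Read
      - Write
```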
Model configuration
Use provider-prefixed model strings:
```yaml
config:
  model: openai/gpt-5-codex-mini        # OpenAI
  # model: anthropic/claude-sonnet-4-5  # Anthropic
  # model: anthropic/claude-haiku-4-5   # Faster/cheaper Anthropic
```

The prefix tells JigSpec which provider to route to. This makes it easy to swap models without changing your pipeline logic.
Mode 3: Extract
Implemented

Pull structured data out of unstructured text. Add `output_schema` to activate extract mode.
```yaml
- name: parse_invoice
  action: ai
  prompt: "Extract invoice details from: {{ input.text }}"
  output_schema:
    vendor: string
    amount: number
    date: string
    line_items:
      - description: string
        quantity: number
        price: number
```

The model returns structured JSON matching your schema. JigSpec uses the Vercel AI SDK's `generateObject` under the hood, so the output is schema-validated before the step completes — if the model produces malformed JSON or fields that don't match, the step retries (up to `max_attempts`) with the validation error fed back to the model.
Output:

- `parse_invoice.data` — the full structured object (accessible via `{{ parse_invoice.data.vendor }}`, `{{ parse_invoice.data.amount }}`, etc.)
- The raw JSON is also written to `data.json` in the step's workspace directory.
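For the invoice schema above, `data.json` would hold an object shaped like this (all values are illustrative):

```json
{
  "vendor": "Acme Corp",
  "amount": 1250.0,
  "date": "2025-03-14",
  "line_items": [
    { "description": "Widgets", "quantity": 10, "price": 125.0 }
  ]
}
```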
Nested and array fields are supported — the example above produces a `line_items` array where each item has `description`, `quantity`, and `price`.
When to use extract
Reach for extract mode any time you'd otherwise ask the model to "reply in JSON format" in a prompt. Letting the runtime enforce the schema instead of trusting prose instructions is more reliable and removes boilerplate parsing code downstream.
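As a sketch of the difference (step names and fields are illustrative), the fragile prompt-only version next to its extract-mode replacement:

```yaml
# Fragile: the schema lives in prose, and the output is unvalidated text.
- name: parse_event
  action: ai
  prompt: "Reply in JSON with keys title and date. Event: {{ input.text }}"

# Reliable: the runtime enforces the schema and retries on mismatch.
- name: parse_event
  action: ai
  prompt: "Extract the event details from: {{ input.text }}"
  output_schema:
    title: string
    date: string
```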
Mode 4: Classify
Implemented

Route data into categories. Add `categories` to activate classify mode.
```yaml
- name: triage
  action: ai
  prompt: "Classify this support message: {{ input.message }}"
  categories:
    - bug_report
    - feature_request
    - question
    - spam
```

The model must choose exactly one of the declared strings. JigSpec uses `generateObject({ output: "enum" })` so the return value is enforced at the provider level — the model cannot reply with anything that isn't in your `categories` list. Invalid or off-list responses trigger a retry with corrective feedback.
Output (single-label, the default):

- `triage.category` — the chosen label as a string
- `category.txt` — same value, written to the step's workspace
Multi-label classification
Add `max_categories` greater than 1 to allow multiple labels per item:
```yaml
- name: tag_article
  action: ai
  prompt: "Tag this article with all relevant topics: {{ input.text }}"
  categories: [ai, politics, finance, health, sports, tech]
  max_categories: 3
```

Output (multi-label):

- `tag_article.categories` — an array of labels
- `categories.json` — same value as JSON
Using classification to drive control flow
Combine classify with the `route` action or the per-step `when:` gate (see Control Flow):
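A sketch of gating a step on the classification result — the `when:` expression syntax shown here is an assumption; see Control Flow for the real form:

```yaml
- name: triage
  action: ai
  prompt: "Classify: {{ input.message }}"
  categories: [bug_report, feature_request, question, spam]

# Hypothetical condition syntax: this step runs only for bug reports.
- name: file_ticket
  action: ai
  prompt: "Draft a bug ticket for: {{ input.message }}"
  when: "{{ triage.category }} == 'bug_report'"
```

You can also feed the label straight into a downstream prompt: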
```yaml
- name: triage
  action: ai
  prompt: "Classify: {{ input.message }}"
  categories: [bug_report, feature_request, question, spam]

- name: respond
  action: ai
  prompt: |
    This was classified as {{ triage.category }}.
    Write an appropriate response for: {{ input.message }}
```

How modes are selected
JigSpec uses a single `ai` action and determines the mode from the fields you set:
| Fields present | Mode |
|---|---|
| `prompt` only | Prompt mode |
| `prompt` + `tools` | Agent mode |
| `prompt` + `output_schema` | Extract mode |
| `prompt` + `categories` | Classify mode |
You never need to say `mode: agent` — adding `tools` is the declaration that this step needs agent behavior.