Workflows as Tools¶
Every Ploston workflow automatically becomes an MCP tool. Agents call workflows like any other tool—no special integration required.
This is Ploston's core differentiator: workflows are first-class MCP citizens.
The Concept¶
When Ploston starts:
1. Workflows are loaded from the configured directory
2. Each workflow is registered as an MCP tool with a w_ prefix
3. Agents discover workflows via standard MCP tools/list
4. Agents call workflows via standard MCP tools/call
flowchart TB
subgraph Startup["Ploston Startup"]
direction TB
subgraph Load["1. Load workflows from ./workflows/"]
W1["scrape-and-publish.yaml"]
W2["data-enrichment.yaml"]
W3["report-generator.yaml"]
end
subgraph Register["2. Register as MCP tools (w_ prefix)"]
T1["w_scrape-and-publish"]
T2["w_data-enrichment"]
T3["w_report-generator"]
end
subgraph Discover["3. Agent connects, calls tools/list"]
D1["Sees all workflows alongside native tools"]
end
Load --> Register --> Discover
end
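The registration step above can be sketched in a few lines. This is an illustrative sketch, not Ploston's actual API: `workflow_tool_name` and `register_workflows` are hypothetical names, and the registry is modeled as a plain dict.

```python
# Hypothetical sketch of the startup registration step: each loaded
# workflow's name becomes an MCP tool name with the "w_" prefix.

def workflow_tool_name(workflow_name: str) -> str:
    """Map a workflow name to its MCP tool name."""
    return f"w_{workflow_name}"

def register_workflows(workflow_names, registry):
    """Register each loaded workflow under its MCP tool name."""
    for name in workflow_names:
        registry[workflow_tool_name(name)] = name
    return registry

registry = register_workflows(
    ["scrape-and-publish", "data-enrichment", "report-generator"], {}
)
print(sorted(registry))
# ['w_data-enrichment', 'w_report-generator', 'w_scrape-and-publish']
```

After this step, a standard `tools/list` response simply enumerates the registry alongside native tools.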
Why This Matters¶
1. Stable Agent Interfaces¶
Workflow names don't change. The underlying implementation can evolve without breaking agent prompts.
# Agent prompt (stable)
"Use w_data-pipeline to process the data"
# Workflow implementation (can change)
v1.0: fetch → transform → save
v2.0: fetch → validate → transform → cache → save
2. Reduced Prompt Complexity¶
One tool call instead of many. Agents don't need to know the steps.
# Without Ploston: Agent must orchestrate
"First call firecrawl_scrape, then transform the result, then call kafka_publish..."
# With Ploston: Single tool
"Call w_scrape-and-publish with the URL"
3. Enforced Execution Boundaries¶
Workflows define exactly what can happen. Agents can't deviate.
# This workflow can ONLY do these three things
steps:
  - id: fetch
    tool: firecrawl_scrape   # ✅ Allowed
  - id: transform
    code: |                  # ✅ Allowed (sandboxed)
      ...
  - id: publish
    tool: kafka_produce      # ✅ Allowed

# Agent cannot call other tools through this workflow
4. Multi-Agent Cooperation¶
Multiple agents can share the same Ploston instance. All agents see the same workflows.
flowchart BT
subgraph Agents["All agents see the same tools, same governance"]
A["Agent A<br/>(Claude)"]
B["Agent B<br/>(GPT-4)"]
C["Agent C<br/>(Custom)"]
end
subgraph Ploston["Ploston Instance"]
subgraph Workflows["Registered Workflows"]
W1["w_scrape-and-publish"]
W2["w_data-enrichment"]
W3["w_report-generator"]
end
end
A --> Ploston
B --> Ploston
C --> Ploston
How It Works¶
Tool Naming Convention¶
Workflows appear as MCP tools with the w_ prefix:
Why w_ and not workflow:?
The colon character (:) causes some MCP clients and agents to misinterpret tool names as HTTP/curl-style URIs rather than tool identifiers. The w_ prefix is unambiguous, short, and grep-friendly.
Other tool naming patterns you'll see:
- local__github__create_issue — runner tool: <runner>__<server>__<tool>
- w_my-workflow — workflow tool: w_<workflow-name>
- slack_post — CP-side tool: <tool-name>
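A small helper makes the convention concrete. This function is illustrative, not part of Ploston; it just applies the three patterns listed above in order.

```python
# Classify a tool name by the naming conventions described above.
# The "w_" check comes first so workflow names containing "__" are
# never misread as runner tools.

def classify_tool(name: str) -> str:
    if name.startswith("w_"):
        return "workflow"        # w_<workflow-name>
    if "__" in name:
        return "runner"          # <runner>__<server>__<tool>
    return "native"              # CP-side tool: <tool-name>

print(classify_tool("w_my-workflow"))                # workflow
print(classify_tool("local__github__create_issue"))  # runner
print(classify_tool("slack_post"))                   # native
```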
From Workflow to Tool¶
Workflow definition:
name: scrape-and-publish
version: "1.0.0"
description: "Scrape a URL and publish content to Kafka"

inputs:
  - url:
      type: string
      required: true
      description: "URL to scrape"
  - topic:
      type: string
      required: true
      description: "Kafka topic to publish to"
  - format:
      type: string
      default: "markdown"
      enum: ["markdown", "html"]

steps:
  - id: fetch
    tool: firecrawl_scrape
    params:
      url: "{{ inputs.url }}"
      format: "{{ inputs.format }}"
  - id: transform
    code: |
      data = context.steps['fetch'].output
      return {"content": data['content'], "url": context.inputs['url']}
  - id: publish
    tool: kafka_produce
    params:
      topic: "{{ inputs.topic }}"
      message: "{{ steps.transform.output }}"

outputs:
  result:
    from: steps.publish.output
Generated MCP tool:
{
"name": "w_scrape-and-publish",
"description": "Scrape a URL and publish content to Kafka",
"inputSchema": {
"type": "object",
"properties": {
"url": { "type": "string", "description": "URL to scrape" },
"topic": { "type": "string", "description": "Kafka topic to publish to" },
"format": { "type": "string", "enum": ["markdown", "html"], "default": "markdown" }
},
"required": ["url", "topic"]
}
}
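The mapping from the `inputs` block to the `inputSchema` can be sketched as below. This is an assumption based on the example pair above, not Ploston's actual generator: required inputs land in the schema's `required` array, and `description`, `enum`, and `default` are copied through when present.

```python
# Minimal sketch: convert a workflow `inputs` list into a JSON Schema
# object suitable for an MCP tool's inputSchema.

def inputs_to_schema(inputs: list[dict]) -> dict:
    properties, required = {}, []
    for entry in inputs:
        (name, spec), = entry.items()      # each entry is a one-key mapping
        prop = {"type": spec["type"]}
        for key in ("description", "enum", "default"):
            if key in spec:
                prop[key] = spec[key]
        if spec.get("required"):
            required.append(name)
        properties[name] = prop
    return {"type": "object", "properties": properties, "required": required}

schema = inputs_to_schema([
    {"url": {"type": "string", "required": True, "description": "URL to scrape"}},
    {"topic": {"type": "string", "required": True,
               "description": "Kafka topic to publish to"}},
    {"format": {"type": "string", "default": "markdown",
                "enum": ["markdown", "html"]}},
])
print(schema["required"])  # ['url', 'topic']
```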
Agent Discovery¶
Agents discover workflows via standard MCP:
{
"tools": [
{ "name": "firecrawl_scrape", "description": "..." },
{ "name": "kafka_produce", "description": "..." },
{ "name": "local__github__create_issue", "description": "..." },
{ "name": "w_scrape-and-publish", "description": "..." },
{ "name": "w_data-enrichment", "description": "..." }
]
}
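Because workflows share the `w_` prefix, an agent (or client code wrapping one) can split a `tools/list` result into workflows and native tools with a one-liner. A sketch over the response shape shown above:

```python
# Separate workflow tools from native tools in a tools/list result.

tools_list = {
    "tools": [
        {"name": "firecrawl_scrape", "description": "..."},
        {"name": "kafka_produce", "description": "..."},
        {"name": "local__github__create_issue", "description": "..."},
        {"name": "w_scrape-and-publish", "description": "..."},
        {"name": "w_data-enrichment", "description": "..."},
    ]
}

workflows = [t["name"] for t in tools_list["tools"]
             if t["name"].startswith("w_")]
print(workflows)  # ['w_scrape-and-publish', 'w_data-enrichment']
```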
Agent Invocation¶
{
"name": "w_scrape-and-publish",
"arguments": {
"url": "https://example.com/article",
"topic": "content-updates"
}
}
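On the wire, this invocation travels as a standard JSON-RPC 2.0 `tools/call` request, per the MCP specification; the `id` below is arbitrary:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "w_scrape-and-publish",
    "arguments": {
      "url": "https://example.com/article",
      "topic": "content-updates"
    }
  }
}
```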
Response Format¶
{
"content": [
{
"type": "text",
"text": "{\"result\": {\"partition\": 0, \"offset\": 12345}}"
}
],
"_meta": {
"execution_id": "exec-abc123",
"workflow_id": "scrape-and-publish",
"workflow_version": "1.0.0",
"duration_ms": 5230
}
}
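Note that the workflow's output arrives as JSON-encoded text inside `content`, while execution metadata lives under `_meta`. A sketch of how a client might unpack the response shown above:

```python
import json

# Unpack a successful workflow response: the payload is a JSON string
# in content[0].text; timing and identity metadata sit under _meta.

response = {
    "content": [
        {"type": "text",
         "text": "{\"result\": {\"partition\": 0, \"offset\": 12345}}"}
    ],
    "_meta": {
        "execution_id": "exec-abc123",
        "workflow_id": "scrape-and-publish",
        "duration_ms": 5230,
    },
}

payload = json.loads(response["content"][0]["text"])
print(payload["result"]["offset"])       # 12345
print(response["_meta"]["duration_ms"])  # 5230
```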
Error Handling¶
When a workflow fails, the response includes structured error information:
{
"content": [
{
"type": "text",
"text": "Tool 'firecrawl_scrape' is unavailable: Connection refused"
}
],
"isError": true,
"_meta": {
"execution_id": "exec-abc123",
"error": {
"code": "TOOL_UNAVAILABLE",
"category": "TOOL",
"message": "Tool 'firecrawl_scrape' is unavailable",
"step_id": "fetch",
"retryable": true
}
}
}
Agents can use the retryable flag to decide whether to retry.
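A retry policy driven by that flag might look like the sketch below. `call_tool` here stands in for whatever MCP client call the agent uses; the backoff values are illustrative.

```python
import time

# Retry a workflow call only when the error is marked retryable,
# with exponential backoff between attempts.

def call_with_retry(call_tool, name, arguments, attempts=3, backoff_s=1.0):
    for attempt in range(attempts):
        response = call_tool(name, arguments)
        if not response.get("isError"):
            return response
        error = response.get("_meta", {}).get("error", {})
        if not error.get("retryable") or attempt == attempts - 1:
            return response                  # permanent failure, or out of tries
        time.sleep(backoff_s * (2 ** attempt))

# Fake client: fails once with a retryable error, then succeeds.
calls = {"n": 0}
def fake_call(name, arguments):
    calls["n"] += 1
    if calls["n"] == 1:
        return {"isError": True,
                "_meta": {"error": {"code": "TOOL_UNAVAILABLE", "retryable": True}}}
    return {"content": [{"type": "text", "text": "ok"}]}

result = call_with_retry(fake_call, "w_scrape-and-publish", {}, backoff_s=0)
print(calls["n"])  # 2
```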
What's NOT Implemented Yet¶
| Feature | Status | Phase |
|---|---|---|
| Workflow composition (workflows calling workflows) | Planned | Enterprise |
| Version pinning (calling a specific workflow version) | Planned | Enterprise |
| Per-workflow exposure control | Planned | Enterprise |
| Partial results on failure | Planned | Enterprise |
Next Steps¶
- How Ploston Works — The planning vs execution separation
- Execution Model — How workflows execute
- Workflow Schema — Complete YAML reference