AI-assisted authoring

Feed a use case,
get a deployable pipeline.

We publish a self-contained system-prompt document. Give it, together with a plain-English use case, to any LLM (Claude, GPT, Gemini), and the output is a ready-to-deploy pipeline JSON plus an @InvocableMethod wrapper when the use case calls for Flow Builder.

How it works

STEP 1

Feed the system prompt

Copy the contents of docs/05-reference/pipeline-authoring-system-prompt.md into the LLM's system message.

STEP 2

Describe the use case

Plain English. "When an Account is updated, summarize it into a custom field for the AE." No special syntax required.

STEP 3

Deploy

The output is valid JSON plus valid Apex. Drop both into your SFDX project and run sf project deploy start.

Pipeline JSON schema

Every pipeline is a DAG. Stages declare their upstream dependencies with depends_on and reference upstream outputs with {{ }} templates.

{
  "id": "summarize_account",
  "name": "Summarize Account",
  "description": "Fetch an Account with recent cases and produce a 3-sentence summary.",
  "input_schema": {
    "type": "object",
    "properties": { "accountId": { "type": "string" } },
    "required": ["accountId"]
  },
  "stages": [
    {
      "id": "fetch",
      "component_type": "soql_query",
      "config": {
        "query": "SELECT Id, Name, Industry FROM Account WHERE Id = :input.accountId LIMIT 1"
      },
      "depends_on": []
    },
    {
      "id": "summarize",
      "component_type": "llm_summarizer",
      "config": {
        "style": "brief",
        "maxLength": 120,
        "prompt": "Summarize this account in 3 sentences: {{ stages.fetch.records[0] }}"
      },
      "depends_on": ["fetch"]
    },
    {
      "id": "write_summary",
      "component_type": "dml_operation",
      "config": {
        "operation": "update",
        "sobjectType": "Account",
        "records": [ { "Id": "{{ input.accountId }}", "AI_Summary__c": "{{ stages.summarize.summary }}" } ]
      },
      "depends_on": ["summarize"]
    }
  ],
  "output_stage_id": "summarize"
}

Component catalog

50+ components across three categories — every shipped type, auto-registered via FM_Component_Type__mdt.

Nodes (AI / LLM)

llm_summarizer
Text → summary
{ summary, style, input }
llm_classifier
Text → one of N labels
{ label, confidence }
llm_extractor
Text → structured JSON
{ extracted: { … } }
llm_generator
Prompt → free-form text
{ text }
llm_translator
Text → target language
{ translated }
llm_rewriter
Text → rewritten
{ rewritten }
llm_qa
Context + question → answer
{ answer, citations }
llm_chat
Conversational turn
{ reply }
llm_critic
Text → critique + score
{ critique, score }
llm_improver
Text + critique → revised
{ improved }
llm_selector
N options → best choice
{ selected, reason }
llm_validator
Text + rules → valid?
{ valid, violations }
llm_analyzer
Text → structured analysis
{ analysis: { … } }
llm_embedder
Text → vector
{ embedding: number[] }
combiner
N upstream → merged
{ combined }
audio_transcriber
Audio → text
{ transcript, confidence }
text_to_speech
Text → audio
{ audioUrl }
document_reader
PDF/image → text (OCR)
{ text, pages }
form_extractor
Form/invoice → structured
{ fields: { … } }
rag_retriever
Query → relevant docs
{ documents[] }
knowledge_query
Query → grounded answer
{ answer, sources }
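
Each node becomes a pipeline stage whose config keys vary by component. A sketch of an llm_classifier stage — the labels key, the field reference, and the upstream fetch_case stage are illustrative assumptions, not a documented contract:

{
  "id": "triage",
  "component_type": "llm_classifier",
  "config": {
    "labels": ["billing", "technical", "cancellation"],
    "prompt": "Classify this case: {{ stages.fetch_case.records[0].Description }}"
  },
  "depends_on": ["fetch_case"]
}

Downstream stages would then read {{ stages.triage.label }} and {{ stages.triage.confidence }}.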

Operators (data movement / transform)

soql_query
Run SOQL (FLS-safe)
dml_operation
Insert/update/delete/upsert
http_callout
External HTTP via Named Credential
template
Render a string template
logger
Write to audit + debug log
filter
Collection → filtered collection
aggregate
Sum / avg / min / max / count
merge
N inputs → merged map
split
String or collection → parts
data_mapper
Remap input shape
json_transform
JSON-to-JSON transform
validate
Data + JSON Schema → valid?
cache
Get/set a cache entry
delay
Pause before next stage
email_sender
Send email via platform
file_parser
CSV/JSON/XML → structured
variable_set
Write to context.X
rule_checker
Input + rules → pass/fail
output_router
Route to 1 of N by rule
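
Operators take the same stage shape. A sketch of an http_callout stage; the config key names (namedCredential, method, path) and the Enrichment_API Named Credential are assumptions for illustration:

{
  "id": "enrich",
  "component_type": "http_callout",
  "config": {
    "namedCredential": "Enrichment_API",
    "method": "GET",
    "path": "/v2/companies?domain={{ stages.fetch.records[0].Website }}"
  },
  "depends_on": ["fetch"]
}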

Flow control

foreach
Iterate collection; run child stages per item
try_catch
Guarded run; on exception, run catch branch
conditional
if / elseif / else by boolean
switch
Pick 1 of N branches by value
router
Rule-based routing
parallel
Run multiple branches concurrently
loop
While / until loop
retry
Retry wrapped stage with backoff
timeout
Enforce time limit on wrapped stage
return_early
Short-circuit the pipeline
subpipeline
Invoke another pipeline by ID
circuit_breaker
Protect a call from cascading failure (short_circuit or buffered mode)
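
Flow-control components wrap other stages. One plausible shape for a retry around a callout — the nesting under a stages key and the maxAttempts/backoffMs names are assumptions, not the package's documented schema:

{
  "id": "resilient_callout",
  "component_type": "retry",
  "config": { "maxAttempts": 3, "backoffMs": 2000 },
  "stages": [
    {
      "id": "enrich",
      "component_type": "http_callout",
      "config": { "namedCredential": "Enrichment_API", "method": "GET", "path": "/v2/ping" },
      "depends_on": []
    }
  ],
  "depends_on": []
}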

Invocable Apex wrapper

When the use case needs Flow Builder or record-triggered entry, the authoring prompt emits a matching Apex class. The wrapper is always declared public with sharing and takes a List<Request>, so Flow can invoke it in bulk safely.

public with sharing class SummarizeAccountInvocable {
    public class Request {
        @InvocableVariable(label='Account ID' required=true)
        public Id accountId;
    }
    public class Response {
        @InvocableVariable(label='Summary')
        public String summary;
    }
    @InvocableMethod(label='AI: Summarize Account' category='FlowMason AI')
    public static List<Response> invoke(List<Request> requests) {
        List<Map<String, Object>> inputs = new List<Map<String, Object>>();
        for (Request r : requests) {
            inputs.add(new Map<String, Object>{ 'accountId' => r.accountId });
        }
        List<ExecutionResult> results = PipelineRunner.executeAll('summarize_account', inputs);
        List<Response> responses = new List<Response>();
        for (ExecutionResult result : results) {
            Response resp = new Response();
            resp.summary = (String) result.getOutput().get('summary');
            responses.add(resp);
        }
        return responses;
    }
}

Design patterns the prompt recognizes

Summarization
soql_query → llm_summarizer → dml_operation
Classification
soql_query → llm_classifier → switch → handlers
Extraction
form_extractor → dml_operation
Triage / routing
classifier + analyzer → conditional → dml_operation
Grounded Q&A (RAG)
rag_retriever → llm_qa
Batch scoring
soql_query → foreach { classifier → dml_operation }
Async resilience
circuit_breaker (mode: buffered) → wrapped stage
Multi-provider arbitration
parallel → combiner → llm_critic
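
Wired out, the Grounded Q&A pattern is a two-stage pipeline. A minimal sketch following the schema above (the config keys for rag_retriever and llm_qa are illustrative):

{
  "id": "grounded_qa",
  "name": "Grounded Q&A",
  "description": "Answer a question using retrieved documents as context.",
  "input_schema": {
    "type": "object",
    "properties": { "question": { "type": "string" } },
    "required": ["question"]
  },
  "stages": [
    {
      "id": "retrieve",
      "component_type": "rag_retriever",
      "config": { "query": "{{ input.question }}" },
      "depends_on": []
    },
    {
      "id": "answer",
      "component_type": "llm_qa",
      "config": {
        "question": "{{ input.question }}",
        "context": "{{ stages.retrieve.documents }}"
      },
      "depends_on": ["retrieve"]
    }
  ],
  "output_stage_id": "answer"
}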

Get the full system prompt

The complete 886-line authoring doc — every component, every pattern, worked examples, quality checklist. Ships with the managed package.

Request access