Pipeline Studio

Pipeline Studio is the visual canvas for building, testing, and publishing multi-step AI pipelines. Think of it as a workflow builder where every node is an AI-native Salesforce operation.

Access Studio: Open the Pipeline Studio Lightning app, or add the c-fm-pipeline-builder component to any Lightning page. You'll need the FlowMason_Full_Access permission set.
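
If you provision users from the command line, the permission set can be assigned with the Salesforce CLI (MyOrg below is a placeholder org alias):

CLI — assign permission set
// sf org assign permset --name FlowMason_Full_Access --target-org MyOrg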

Core concepts

Pipelines

A pipeline is an ordered graph of stages. Each stage performs one operation — an LLM call, a SOQL query, a DML write, a conditional branch. Stages declare their dependencies via depends_on, and the runtime builds a topological execution order automatically.
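
As a minimal illustration (configs left empty for brevity), the fragment below declares two stages where summarize depends on fetch:

JSON — minimal dependency graph
{
  "stages": [
    { "id": "fetch", "type": "soql_query", "category": "operator", "config": {} },
    { "id": "summarize", "type": "summarize", "category": "llm", "depends_on": ["fetch"], "config": {} }
  ]
}

Because summarize lists fetch in depends_on, the runtime always schedules fetch first; stages with no dependency path between them can run in any topological order.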

Stages

Every stage has a type (what it does), a category (LLM, operator, or control flow), and a config (type-specific settings). The Studio palette shows all available stage types:

Stage types reference
// Stage types available in the Studio palette:

// LLM / AI nodes (BaseNode subclasses):
// summarize         — summarize text (FMSummarize core)
// classify          — classify text into categories
// extract           — extract structured JSON from free text
// llm_call          — raw LLM call with full prompt control

// Data / IO operators (BaseOperator subclasses):
// soql_query        — SOQL SELECT with FLS enforcement
// dml_operation     — insert / update / delete with CRUD check
// http_callout      — outbound HTTP with URL allowlist
// logger            — write to Pipeline_Stage_Log__c
// variable_set      — set / update execution context variables

// Control flow (BaseFlowControl subclasses):
// for_each          — iterate over a list, fan stages out
// try_catch         — wrap stages in error recovery
// conditional       — branch based on expression result
// wait_for_signal   — pause until external event resumes
// parallel          — run multiple branches concurrently
// sub_pipeline      — embed another pipeline as a stage
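
Control-flow stages wrap or fan out other stages. As a hedged sketch (only id, type, category, and depends_on are documented shape; the config keys and the exact category string below are assumptions), a for_each fan-out over query results might look like:

for_each — config sketch
// The "items" key and the "control_flow" category string are
// illustrative assumptions, not a documented contract:
// {
//   "id": "brief_each",
//   "type": "for_each",
//   "category": "control_flow",
//   "depends_on": ["fetch"],
//   "config": {
//     "items": "{{stages.fetch.records}}",
//     ...
//   }
// }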

The expression language

Stage config values can reference upstream results using {{...}} expressions. The expression engine is sandboxed — it can't make callouts, run SOQL, or access Apex reflection. It evaluates at runtime when the stage executes.

Expression language — reference
// Inside any stage config value, use double-brace expressions:
// {{input.accountId}}               — from the pipeline input map
// {{stages.fetch.records[0].Name}}  — output of a previous stage
// {{env.UserInfo.userId}}           — Salesforce environment info
// {{input.tags | join(', ')}}       — pipe to a built-in function

// Supported functions: join, upper, lower, trim, length,
//                      toJson, fromJson, coalesce, if, now

// Example — conditional prompt based on stage output:
// "prompt": "{{if(stages.classify.category == 'vip', 'VIP brief:', 'Standard brief:')}} {{stages.fetch.records[0].Name}}"

Creating a pipeline

  1. Open Pipeline Studio → click New Pipeline
  2. Give it a name and optionally a description
  3. Click the + button on the canvas to add stages
  4. Connect stages by dragging from the output handle of one stage to the input handle of another — this sets the depends_on relationship
  5. Configure each stage in the right-hand panel — fill in the config fields, reference upstream outputs with {{stages.stageid.field}}
  6. Set the Output Stage — the stage whose output becomes the pipeline's return value
  7. Save (autosave fires 1.5 seconds after your last change by default; the undo stack holds 50 steps)

Pipeline JSON structure

Everything you build in Studio generates standard pipeline JSON. You can inspect it with View → JSON Editor, or export it as SFDX metadata. Here's the structure for a three-stage account enrichment pipeline:

JSON — pipeline structure
{
  "id": "account-enrich-v1",
  "name": "Account Enrichment",
  "version": 1,
  "stages": [
    {
      "id": "fetch",
      "type": "soql_query",
      "category": "operator",
      "config": {
        "query": "SELECT Id, Name, Industry, AnnualRevenue FROM Account WHERE Id = '{{input.accountId}}'"
      }
    },
    {
      "id": "summarize",
      "type": "summarize",
      "category": "llm",
      "depends_on": ["fetch"],
      "config": {
        "prompt": "Summarize this account for an enterprise AE: {{stages.fetch.records[0].Name}}, {{stages.fetch.records[0].Industry}}",
        "max_tokens": 300,
        "provider": "anthropic"
      }
    },
    {
      "id": "save",
      "type": "dml_operation",
      "category": "operator",
      "depends_on": ["summarize"],
      "config": {
        "operation": "update",
        "sobject": "Account",
        "record": {
          "Id": "{{input.accountId}}",
          "Description": "{{stages.summarize.content}}"
        }
      }
    }
  ],
  "output_stage_id": "summarize"
}
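
This section doesn't define a runtime entry point, so the Apex below is purely hypothetical: PipelineRunner.run is an assumed name, shown only to illustrate the input map and the output_stage_id contract.

Apex — hypothetical invocation
// PipelineRunner.run is NOT a documented API; the name is assumed
// here purely to illustrate the data flow.
Id accountId = [SELECT Id FROM Account LIMIT 1].Id;
Map<String, Object> input = new Map<String, Object>{ 'accountId' => accountId };

// Because output_stage_id is "summarize", the pipeline's return value
// is the summarize stage's output, not the final "save" stage's.
Object result = PipelineRunner.run('account-enrich-v1', input);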

Validation

The Studio validates your pipeline continuously, debounced to 400 ms after your last change. Errors block Run and Publish. Warnings let you save but flag things to fix before production.

Apex — PipelineValidator
// The Studio runs validation automatically on save.
// You can also call it from Apex:
List<PipelineValidator.Issue> issues = PipelineValidator.validate(configJson);

for (PipelineValidator.Issue issue : issues) {
    System.debug(issue.severity + ': ' + issue.message);
    // severity: 'error' | 'warning'
    // Errors block Run and Publish. Warnings allow save.
}

Boolean hasErrors = PipelineValidator.hasErrors(issues);

// Common errors:
// - cycles in depends_on (stage A depends on stage B which depends on A)
// - output_stage_id refers to a stage that doesn't exist
// - duplicate stage ids
// - unknown depends_on references (when strictDependencyResolution = true)
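// Example: the smallest possible cycle, which validate() reports as
// an error (stage ids illustrative):
//   "stages": [
//     { "id": "a", "depends_on": ["b"] },
//     { "id": "b", "depends_on": ["a"] }
//   ]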

Testing in Studio

Every pipeline has a Debug button alongside Run. Debug mode captures a full snapshot of the ExecutionContext at every stage boundary — inputs, outputs, variables — so you can step through what happened after the run completes.

  • Run: Executes with no debug overhead. No snapshots captured. Use in production.
  • Debug: Captures stage snapshots. Click Debug View after it completes to step through stages and inspect every variable.
  • Debug Session: Sets real breakpoints. The pipeline pauses and waits for you to step through it live. You can edit variable values mid-run (Edit & Continue).

For now, see the debugger notes in the Governance & Audit section; a dedicated debugger guide is planned.

Publishing and promoting

A pipeline stored in FlowMason_Pipeline__c (Studio-editable) can be promoted to FlowMasonPipeline__mdt (SFDX-deployable, read-only at runtime) for distribution. This is the path for:

  • Promoting from sandbox to production via change sets or SFDX
  • Shipping pipelines as part of an AppExchange package
  • Version-controlling pipelines in git alongside your Apex code

SFDX export
// Export a pipeline from Studio as SFDX metadata:
// Studio → pipeline → ... → Download as Metadata

// Or use the Apex API:
String metadataXml = PipelineBuilderController.getMetadataDownload(pipelineId);

// The output is a FlowMasonPipeline__mdt-compatible XML file.
// Deploy it with:
// sf project deploy start --target-org MyOrg --source-dir force-app

Canvas settings

Config key                  Default   What it controls
canvasMinScale              0.25      Minimum zoom level
canvasMaxScale              2.5       Maximum zoom level
studioAutosaveDebounceMs    1500      Autosave delay after last change (ms)
studioUndoLimit             50        Undo stack depth
studioValidateDebounceMs    400       Validation debounce after last change (ms)

Custom stage components

You can extend the palette with your own stage types. Implement one of the three base classes, then register it:

  • BaseNode — for LLM-backed stages that make callouts
  • BaseOperator — for pure data transformation (no callouts)
  • BaseFlowControl — for branching and looping logic

Deploy an FM_Component_Type__mdt record with Type_Name__c, Class_Name__c, and Category__c. The Studio palette picks it up automatically on next load.
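
As a sketch (BaseOperator's method contract isn't documented in this section, so the execute override below is an assumed signature), a custom operator and its registration record might look like this:

Apex — custom operator sketch
// NOTE: the execute(...) signature is an assumption; consult the
// BaseOperator class for the real contract.
public class SlugifyOperator extends BaseOperator {
    public override Map<String, Object> execute(Map<String, Object> config) {
        String text = (String) config.get('text');
        // Normalize to a lowercase, hyphen-separated slug.
        String slug = (text == null)
            ? ''
            : text.trim().toLowerCase().replaceAll('[^a-z0-9]+', '-');
        return new Map<String, Object>{ 'slug' => slug };
    }
}

Metadata — FM_Component_Type__mdt record (illustrative values)
<?xml version="1.0" encoding="UTF-8"?>
<CustomMetadata xmlns="http://soap.sforce.com/2006/04/metadata"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <label>Slugify</label>
    <protected>false</protected>
    <values>
        <field>Type_Name__c</field>
        <value xsi:type="xsd:string">slugify</value>
    </values>
    <values>
        <field>Class_Name__c</field>
        <value xsi:type="xsd:string">SlugifyOperator</value>
    </values>
    <values>
        <field>Category__c</field>
        <value xsi:type="xsd:string">operator</value>
    </values>
</CustomMetadata>

In standard SFDX layout the record is saved as customMetadata/FM_Component_Type.slugify.md-meta.xml and deployed alongside the class.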

What's next