Coming to AppExchange — Get Early Access

AI pipelines for
Salesforce.

The developer primitive layer that sits beside Flow, Apex, and Agentforce — not against them.

Chain SOQL queries, LLM calls, DML writes, and HTTP callouts into governor-safe pipelines — exposed as Flow actions, Apex facades, LWC components, REST endpoints, and trigger bindings. No orchestration boilerplate.

1 afternoon
Per new pipeline
6–10 weeks
DIY equivalent
7 providers
Zero lock-in
// Run a 5-stage account briefing pipeline — one call
ExecutionResult result = PipelineRunner.run(
    'account-briefing-v1',
    new Map<String, Object>{ 'accountId' => recordId }
);

// Stage 1 — soql_query:      fetch Account + open Opportunities (FLS-checked)
// Stage 2 — llm_summarizer:  summarize for the AE           (auto-retries on 429)
// Stage 3 — llm_classifier:  classify deal risk             (parallel to stage 2)
// Stage 4 — dml_operation:   save to Account.AI_Briefing__c (CRUD-checked)
// Stage 5 — http_callout:    post to Slack channel          (allowlisted endpoint)

String briefing  = (String) result.getOutput('content');
String riskLevel = (String) result.getOutput('riskLevel');
Surfaces: @InvocableMethod · fm.LLM.* · <c-fm-ai-*> · PipelineTrigger · REST endpoints · Platform Events · Scheduler
AppExchange Security Review
Submitted Q2 2026
SFDX-deployable
CustomMetadata + 2GP
FLS + CRUD enforced
Every SOQL / DML stage
Audit-logged
Full execution trail
Zero vendor lock-in
7 providers, swap by config

The Problem

Stop writing
HTTP callouts.

Every Salesforce AI project built from scratch carries a 100-hour tax. You've paid it before. You'll pay it again — unless you have FlowMason.

100+ hours of HTTP callout boilerplate

Named Credentials, HttpRequest, JSON parsing, retry logic — every. single. project.

Governor limits are a minefield

Async queueable chains, CPU time overruns, SOQL in loops. AI calls make it worse.

Einstein lock-in means one provider

Your enterprise wants Claude. Your security team wants Bedrock. Agentforce says no.

Zero testability

How do you unit test an HTTP callout to OpenAI? You can't mock it properly. Coverage suffers.

No cost attribution or audit trail

Which team spent $4,000 on tokens last month? Which pipeline touched that record? No idea.

Prompt Builder is UI-first, not SFDX-first

Prompt templates live in a UI. You can't git-review them. CI/CD breaks. Admins own the DX.

One callout is easy. Five chained steps aren't.

Any dev can write an HttpRequest to Claude in an afternoon. Chaining 5 stages with governor yield, per-stage retry, FLS-safe DML, async fan-out, and a mock test harness? That's 6–12 weeks — before you wire it to Flow and LWC.

FlowMason gives those 100 hours back.

Install the package. Call fm.LLM.summarize(). Ship AI features your team actually uses.

What FlowMason Actually Builds

You don't call an LLM.
You run a pipeline.

A pipeline is a directed graph of stages. Each stage is typed — SOQL query, LLM call, DML write, HTTP callout, conditional branch, for-each loop. FlowMason wires them together, monitors governors between every stage, retries on failure, and writes a full audit record on completion.

soql_query
Fetch Record
Account + Opps + Cases
FLS-checked
bulk-safe
llm_summarizer
Summarize
AE briefing
auto-retry 3x
any provider
llm_classifier
Classify Risk
deal health score
governor guard
async-safe
dml_operation
Update Record
AI_Briefing__c field
CRUD-checked
rollback-safe
http_callout
Notify Slack
post to channel
allowlisted
Named Credential
Governor monitored between every stage
Full audit record written on completion
Exposed as Flow action, REST endpoint, and trigger binding simultaneously

The pipeline definition (~20 lines)

account-briefing-v1.json (excerpt)
{
  "id": "account-briefing-v1",
  "stages": [
    { "id": "fetch",    "type": "soql_query",
      "query": "SELECT Id, Name, AnnualRevenue FROM Account WHERE Id = '{{input.accountId}}'" },
    { "id": "summarize", "type": "llm_summarizer",
      "depends_on": ["fetch"],
      "prompt": "Summarize for the AE: {{stages.fetch.records}}" },
    { "id": "classify",  "type": "llm_classifier",
      "depends_on": ["fetch"],
      "prompt": "Score deal risk 1-10: {{stages.fetch.records}}" },
    { "id": "save",     "type": "dml_operation",
      "depends_on": ["summarize", "classify"] },
    { "id": "notify",   "type": "http_callout",
      "depends_on": ["save"] }
  ]
}
Apex — execute all 5 stages
ExecutionResult result = PipelineRunner.run(
  'account-briefing-v1',
  new Map<String, Object>{
    'accountId' => recordId
  }
);

What you get for those 20 lines

Governor-aware execution
Monitors CPU, heap, and SOQL between every stage. Yields to a Queueable automatically before you hit a limit — no manual async chaining.
🔄
Per-stage retry with backoff
Each LLM stage retries independently on 429/503. The rest of the pipeline doesn't fail. Configurable via FM_Config__mdt — no code change.
🌐
7 surfaces, zero extra code
The same pipeline JSON runs as a Flow action, Apex call, LWC event, REST endpoint, trigger binding, Platform Event handler, and scheduled job.
📋
Full audit trail, automatically
Every execution writes to PipelineExecution__c and Pipeline_Stage_Log__c. Token counts, cost, duration, and who ran it — queryable by SOQL from day one.
~20 lines
to define this pipeline
vs. ~400 lines of Apex to build the equivalent from scratch — before tests, before Flow wiring, before the LWC.
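The per-stage retry card above says retry is configurable via FM_Config__mdt. A sketch of what such records might look like, following the DeveloperName/Value convention shown in the Provider Neutrality section — the key names here are illustrative assumptions, not the shipped schema:

```
// FM_Config__mdt — illustrative keys, not the shipped schema
DeveloperName: llm_retry_max_attempts   Value: 3
DeveloperName: llm_retry_backoff_ms     Value: 2000     // doubled per attempt
DeveloperName: llm_retry_on_statuses    Value: 429,503
```

Because these are Custom Metadata records, changing them is a config deploy — no Apex change, no new test run.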

The Build-vs-Buy Question

You could write this yourself.
Here's what that costs.

The first pipeline you could DIY. By the fifth, you're maintaining a framework — not shipping features.

DIY Apex
The "I'll just write it" path
  • HttpRequest + Named Credential wiring per provider
  • JSON serialization / deserialization for each LLM API
  • Retry with exponential backoff (429, 503, timeouts)
  • Queueable async chain + governor yield logic
  • ForEach bulkification for trigger contexts
  • Test mock interface that intercepts HTTP callouts
  • FLS + CRUD enforcement on every data access
  • Audit log record on every execution
  • Per-token cost calculation per model per provider
  • @InvocableMethod wrapper + Request/Response classes for Flow
  • @AuraEnabled surface + LWC component for record pages
  • REST endpoint for external systems
  • Platform Event subscriber for async fan-out
  • Scheduled batch job runner
  • Visual debugger / step-through replay mode
6–10 weeks
per team, for a solid v1
then repeat for every new pipeline shape
FlowMason
The "ship it this week" path
Everything above, in a JSON file
{
  "id": "my-pipeline",
  "stages": [
    { "type": "soql_query",   /* FLS auto-enforced */ },
    { "type": "llm_call",     /* retry built-in    */ },
    { "type": "dml_write",    /* CRUD auto-checked */ },
    { "type": "http_callout", /* allowlisted creds */ }
  ]
}
  • All 15 DIY items above are already built and AppExchange-reviewed
  • Switch providers by changing one config field — no code
  • Test any pipeline without burning real tokens (FMTestMocks)
  • Governor yield happens automatically — you never think about it
  • SFDX-deployable: git-review your pipelines like code
  • Ship the 4th pipeline as fast as the 1st
1 afternoon
to define and deploy a pipeline
the same afternoon for the next one
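FMTestMocks is what makes "test without burning real tokens" concrete. A minimal sketch of such a unit test — the FMTestMocks method name and stub behavior are assumptions, since the exact mock API isn't shown on this page:

```apex
// Sketch only — FMTestMocks.mockLLM is an assumed method name
@IsTest
private class AccountBriefingPipelineTest {
    @IsTest
    static void runsWithStubbedLLM() {
        // Hypothetical stub: all LLM stages return canned text, zero real tokens
        fm.FMTestMocks.mockLLM('stubbed AE briefing');

        Test.startTest();
        ExecutionResult result = PipelineRunner.run(
            'account-briefing-v1',
            new Map<String, Object>{ 'accountId' => '001000000000001AAA' }
        );
        Test.stopTest();

        System.assertNotEquals(null, result);
    }
}
```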

The first pipeline you could build yourself.

By the fifth, FlowMason pays for itself.

Get Early Access

Positioning

Built to coexist with Salesforce, not compete with it.

Salesforce already ships Flow, Apex, Einstein, and Agentforce. We don't replace any of them. We slot underneath as the developer-first AI primitive layer the platform is missing.

vs. Apex

Not an Apex replacement.

Apex is the right tool for 90% of what you build on the platform. We make the 10% that calls AI much easier — governor-safe callouts, retry, token accounting, mocking.

vs. Flow Builder

Not a Flow Builder competitor.

Flow is orchestration. FlowMason is the AI element Flow consumes. Every pipeline ships as a native Flow action — Flow gets more powerful, admins stay in the tool they already love.

vs. Agentforce

Not Agentforce.

Agentforce owns the autonomous agent chat UI. FlowMason is the primitive layer developers reach for when Agentforce's UI-first workflow doesn't fit their architecture. Different jobs, same org.

vs. No-code tools

Not a no-code platform.

FlowMason is developer-first infrastructure. Admins get drop-in Flow actions and LWC components, but someone on your team writes Apex and deploys via SFDX. That's the point.

The first pipeline you build might look like competition. By the fifth, it's infrastructure — and every Salesforce tool you already use gets better.

Real Use Cases

What teams actually build
with FlowMason

Six pipelines — from a single Slack notification to a nightly 10,000-record enrichment run. Every FlowMason component is represented. Every example is production-realistic.

Trigger + Batch

AI Case Triage

Case · On new Case creation

Route and prioritize support cases before a human reads them.

Pipeline stages
Trigger Framework · soql_query · llm_classifier · llm_selector · dml_operation · http_callout → Slack

A Case trigger fires FMTriggerFramework.dispatch(). The pipeline fetches the case body and account history via SOQL, classifies the issue type (billing / technical / product) and urgency via two parallel LLM calls, writes the result back to the Case, and posts a formatted summary to the right Slack channel — all before the first agent even opens their queue.

Avg triage time: 8 sec
Replaces 30-min manual review
Handles 200+ cases/day in bulk
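The wiring for this use case is a one-line trigger binding, reusing the fm.PipelineTrigger.handle pattern from the Surface Showcase section — the pipeline name here is an assumption:

```apex
// One-line binding — FlowMason handles bulkification, async, retry, audit
trigger CaseTriageTrigger on Case (after insert) {
    fm.PipelineTrigger.handle('case_auto_triage', Trigger.new, null);
}
```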
Scheduler + Batch

Nightly Lead Enrichment

Lead · Scheduled — 2 AM nightly

Enrich every stale lead automatically while your team sleeps.

Pipeline stages
FMScheduler · PipelineRunnerBatch · soql_query (bulk) · http_callout → enrichment API · llm_analyzer · ForEach + dml_operation

FMScheduler fires a Batch pipeline at 2 AM. PipelineRunnerBatch chunks leads into governor-safe groups of 50. Each chunk queries the lead's company data, calls an enrichment API for firmographic signals, scores fit with an LLM call using your ICP criteria, and writes a composite score back to the Lead record — 10,000 leads enriched before the SDRs log in.

10k leads enriched per night
Zero governor errors
Cost: ~$0.002 per lead
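FMScheduler's exact API isn't shown on this page; with standard Apex scheduling, the 2 AM wiring might look like the sketch below, where the Schedulable class name and its constructor arguments are assumptions:

```apex
// Hypothetical wiring — class name and constructor args are assumptions
System.schedule(
    'Nightly lead enrichment',
    '0 0 2 * * ?',   // cron: 2:00 AM every day
    new fm.PipelineRunnerBatchSchedulable('lead-enrichment-v1', 50) // chunks of 50
);
```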
LWC Drop-in

Renewal Intelligence

Opportunity · Record page / Flow action

Give every AE an AI briefing before the renewal call.

Pipeline stages
c-fm-ai-summary (LWC) · c-fm-ai-next-best-action · soql_query (Account + Opps + Cases) · llm_summarizer · llm_generator · conditional

Drop two LWC components on the Opportunity record page. When an AE opens a renewal opp, c-fm-ai-summary fetches the account, its open cases, past NPS scores, and last 3 closed-won/lost opps via SOQL and generates a 3-bullet AE briefing. c-fm-ai-next-best-action branches on deal health score and surfaces a tailored talk track — "expand" vs "save" mode — based on the classify stage output.

Used daily by 40+ AEs
Avg: 12 min saved per prep call
LWC: drag-and-drop, no code
LWC drop-in: c-fm-ai-summary + c-fm-ai-next-best-action — drag onto any record page in App Builder
REST + Document AI

Contract Intelligence

ContentVersion (PDF) · File upload / REST API

Extract, normalize, and store key contract terms automatically.

Pipeline stages
REST endpoint (/fm/v1/actions) · Document Provider (form_extractor) · llm_summarizer · llm_analyzer · combiner · dml_operation → Contract__c

An external CLM system posts a PDF contract to /services/apexrest/fm/v1/actions/contract-intel-v1. The Document Provider extracts structured fields (parties, term dates, liability caps) using a vision-capable model. Two parallel LLM stages summarize the contract and flag non-standard clauses. A combiner merges both outputs and the pipeline writes clean structured data into your Contract__c object — no manual data entry.

Processes 50-page PDFs in <30 sec
Accuracy: 94% vs manual entry
Works from any CLM via REST
Platform Events

Voice of Customer Analysis

Platform Event: Survey_Submitted__e · Platform Event subscriber

Turn every NPS response into structured product signal, automatically.

Pipeline stages
FMEventFramework (Platform Events) · llm_analyzer · llm_extractor · conditional (NPS score) · dml_operation → VoC_Record__c · http_callout → Jira (detractor only)

When a survey is submitted, Salesforce fires a Survey_Submitted__e Platform Event. FMEventFramework subscribes and kicks off the pipeline. Two LLM stages run in parallel: one scores sentiment, one extracts product themes from the free-text response. A conditional stage branches on NPS score: detractors get a Jira ticket created via HTTP callout; promoters get flagged for the customer marketing team. Every response ends up as a structured VoC record queryable by product area.

100% response coverage
Jira tickets in <60 sec for detractors
Theme taxonomy auto-evolves
Trigger + Chat LWC

Competitive Deal Intelligence

Opportunity · Field update (Competitor__c set)

Brief the rep on competitive positioning the moment a competitor is named.

Pipeline stages
Trigger Framework (field-change filter) · http_callout → knowledge API · llm_generator · c-fm-ai-chat (LWC) · circuit_breaker · Audit log → Pipeline_Audit__c

A trigger binding fires only when the Competitor__c field on Opportunity transitions from blank to a known value. The pipeline calls an internal knowledge API to fetch the latest competitive intel, passes it to an LLM stage that generates a tailored battle card (pricing objections, differentiation, landmines), and surfaces it in c-fm-ai-chat on the Opportunity page — so the rep can ask follow-up questions conversationally. circuit_breaker ensures a knowledge API outage doesn't cascade into pipeline failure.

Battle card ready in <10 sec
Rep never leaves Salesforce
Circuit breaker: zero API-down failures
LWC drop-in: c-fm-ai-chat — drag onto any record page in App Builder

Every example above uses the same PipelineRunner.run() call under the hood.

Talk to us about your use case

Surface Showcase

Real code.
Every surface.

Not pseudocode. Not simplified examples. The exact patterns your org will use on day one.

Callable Apex — fm.LLM Synchronous + Async
// 10 core methods — all provider-neutral
fm.LLM.summarize(recordId, promptTemplate);
fm.LLM.classify(text, categories);
fm.LLM.extract(text, jsonSchema);
fm.LLM.translate(text, targetLocale);
fm.LLM.rewrite(text, instructions);
fm.LLM.generate(promptTemplate, inputMap);
fm.LLM.critique(text, rubric);
fm.LLM.qa(question, recordId);
fm.LLM.validate(text, rules);
fm.LLM.rank(items, criteria);

// Async + provider override
String jobId = fm.LLM
    .withProvider('bedrock')
    .summarizeAsync(recordId, prompt, callbackClass);
LWC Drop-in Components 6 components
c-fm-ai-summary

Any record page

Prompt template, provider, cache TTL

c-fm-ai-chat

Any page

System prompt, history retention, record context

c-fm-ai-next-best-action

Record pages

Action set, ranking prompt

c-fm-ai-field-suggester

Record detail fields

Field list, source text mapping

c-fm-ai-similar-records

Any record page

SObject, embedding source, top-N

c-fm-ai-case-triage

Service Cloud

Priority/category prompt, auto-route

Auto-Generated REST Endpoints Zero custom Apex
# Every published pipeline gets an endpoint
POST /services/apexrest/fm/v1/actions/{pipelineName}

# Example
POST /services/apexrest/fm/v1/actions/summarize_opportunity
Content-Type: application/json

{ "recordId": "006Dn000001QpZr" }

# Response
{ "success": true, "output": {"summary": "..."}, "tokensUsed": 312 }
Trigger Framework Bulkified + Governor-safe
// bind once — FlowMason handles the rest
trigger CaseTrigger on Case (after insert) {
    fm.PipelineTrigger.handle(
        'case_auto_triage', Trigger.new, null
    );
}

// ✓ Bulkification  ✓ Async queueable chaining
// ✓ Governor limits  ✓ Retry on failures
// ✓ Audit log  ✓ Cost attribution

Provider Neutrality

Any model.
One interface.

Switch providers with a single config change. No refactoring. No redeployment. Your business rules, your model choice.

AN
Anthropic Claude
OA
OpenAI GPT-4
GV
Google Vertex AI
AB
AWS Bedrock
AZ
Azure OpenAI
EA
EdenAI Router
OL
Ollama (self-hosted)
FM_Config__mdt — change once, affects all pipelines
// org-default in FM_Config__mdt (one metadata record)
-  DeveloperName: default_provider  Value: anthropic
+  DeveloperName: default_provider  Value: openai

// or per-call override — no org config change needed
fm.LLM
  .withProvider('bedrock')
  .withModel('anthropic.claude-3-5-sonnet')
  .summarize(recordId, prompt);
7
Providers
0
Refactors to switch
Your model strategy

Enterprise Governance

The observability
Agentforce doesn't give you.

Audit logs, cost attribution, FLS enforcement, tenant isolation — built into the runtime, not bolted on.

Full Audit Log

Every LLM call logged: who triggered it, which record, which pipeline, prompt version, response, tokens. Queryable via SOQL. Exportable.

SELECT Pipeline__c, Record_Id__c, Token_Count__c,
  Created_By__c, CreatedDate
FROM FM_Audit_Log__c
WHERE CreatedDate = LAST_N_DAYS:30

Cost Attribution

Token costs broken down by pipeline, user, team, and time period. Know exactly which automation spent $4,000 last month.

fm.Cost.getByPipeline('opportunity_enrichment', 30);
// → { totalUSD: 412.30, calls: 1842, avgTokens: 312 }

FLS + CRUD Enforcement

Every SOQL query and DML goes through FMSecurityUtil. Field-level security checks are baked into every data operation — not optional.

FMSecurityUtil.checkObjectRead('Opportunity');
FMSecurityUtil.checkFieldAccessible(
  'Opportunity',
  new List<String>{'Amount', 'Name'}
);

Tenant Isolation

API keys encrypted at rest per org. No cross-tenant data access. Namespaced metadata. Built for Salesforce's multi-tenant security model.

// Encrypted — never in plaintext
FM_Secret__c key = FMVaultUtil.getKey(
  'anthropic_api_key'
);
// Org-scoped. Cannot be accessed cross-org.

Enterprise security

Three compliance layers most AI stacks skip.

FlowMason ships PII redaction, cross-user FLS enforcement, and async resilience as built-in primitives. All opt-in. All auditable. All with kill switches for incident response.

Phase A

Value-pattern redaction

PII never reaches the LLM

FMRedactor scans every string leaf for SSNs, credit cards, API keys, bearer tokens, emails, phone numbers, IBANs, IPv4, AWS keys, and provider keys — not just keys named "ssn". Ten seed patterns ship inactive; admins activate per tenant.

  • Catches PII embedded in free-text (Case.Description, Note.Body)
  • Every match audited on PipelineExecution__c.Redaction_Count__c
  • ReDoS linter on precommit — catastrophically backtracking regexes rejected before merge
  • Kill switch: FM_Config__mdt.REDACTION_VALUE_PATTERN_ENABLED
Phase B

FLS-aware prompt guard

End-user FLS honored even in system context

FMPromptGuard enforces the invoking user's field-level security on every LLM prompt — even when the pipeline runs system-mode for throughput. Uses UserFieldAccess (Spring '24+) so profiles, permission sets, permission set groups, muting, and expiration all resolve correctly.

  • Closes system-mode FLS gap in regulated verticals
  • Variable-laundering defense: vars carry source FieldRef provenance
  • Per-stage bypass flag is audited — not silent
  • Kill switch: FM_Config__mdt.FLS_PROMPT_GUARD_ENABLED
Phase C

Buffered circuit breaker

Provider outages stop cascading failures

Opt-in mode: "buffered" on any circuit_breaker stage. During an open window the pipeline's ExecutionState is persisted to FM_Circuit_Queue__c. When the breaker closes, a governor-safe Queueable replays each pipeline from the buffered stage — no re-running upstream stages, no double-DML.

  • FOR UPDATE row locks prevent concurrent-drainer double-replay
  • Dead-letter Platform Event fires after max-attempts failure
  • Expiry cutoff so stale items don't replay against changed data
  • Kill switches: CIRCUIT_BUFFER_ENABLED + 3 tuning keys
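Opting a circuit_breaker stage into buffered mode would presumably be a pipeline-JSON change; a sketch, where every field name other than "type" and "mode" is an assumption:

```json
{ "id": "notify", "type": "circuit_breaker",
  "mode": "buffered",
  "max_replay_attempts": 3,
  "expiry_minutes": 60 }
```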

Redaction + FLS + async resilience — built into the runtime. No bolt-on compliance layer. No vendor lock-in. AppExchange-review ready.

Honest Comparison

FlowMason vs.
Einstein / Agentforce

We slot underneath Agentforce as the developer primitive layer — not above it. Different tools for different jobs.

FlowMason compared to Einstein and Agentforce across 8 dimensions
Dimension | Einstein / Agentforce | FlowMason
AI Provider | Einstein/Atlas LLM only | Any of 7: Claude, GPT-4, Vertex (Gemini), Bedrock, Azure OpenAI, EdenAI, Ollama
Primary Surface | Agent chat UI, Prompt Builder UI | Apex, Flow, LWC, Triggers, REST — developer-first
Primary User | End users (agents), Admins (Prompt Builder) | Developers. Admins also covered via Invocable Actions.
Deployment | Declarative, UI-authored. Awkward in CI/CD. | SFDX-packaged. Git-reviewable. CI/CD native.
Unit Testing | Limited mock support | fm.PipelineTest with full LLM/SOQL/HTTP mocks
Cost Model | Einstein credits (opaque, per-conversation) | Flat per-org + your own provider API costs (transparent)
Vendor Lock-in | Einstein credits, Einstein models | Zero. Swap providers with a config change.
Autonomous Agents | Yes — Agentforce owns this | Not our focus. Use Agentforce for that.

Agentforce owns the agent chat UI. We're the primitive layer developers use when Agentforce's UI-first workflow doesn't fit their architecture.

Coming to AppExchange

Ready to ship AI in your Salesforce org?

One package install. Every Salesforce surface covered. Provider-neutral from day one.

$ sf package install --package FlowMasonAI
No credit card required Deploy to any Salesforce org SFDX-native from day one