AI pipelines for
Salesforce.
The developer primitive layer that sits beside Flow, Apex, and Agentforce — not against them.
Chain SOQL queries, LLM calls, DML writes, and HTTP callouts into governor-safe pipelines — exposed as Flow actions, Apex facades, LWC components, REST endpoints, and trigger bindings. No orchestration boilerplate.
// Run a 5-stage account briefing pipeline — one call
ExecutionResult result = PipelineRunner.run(
'account-briefing-v1',
new Map<String, Object>{ 'accountId' => recordId }
);
// Stage 1 — soql_query: fetch Account + open Opportunities (FLS-checked)
// Stage 2 — llm_summarizer: summarize for the AE (auto-retries on 429)
// Stage 3 — llm_classifier: classify deal risk (parallel to stage 2)
// Stage 4 — dml_operation: save to Account.AI_Briefing__c (CRUD-checked)
// Stage 5 — http_callout: post to Slack channel (allowlisted endpoint)
String briefing = (String) result.outputGet('content');
String riskLevel = (String) result.outputGet('riskLevel');
The Problem
Stop writing
HTTP callouts.
Every Salesforce AI project from scratch carries a 100-hour tax. You've paid it before. You'll pay it again — unless you have FlowMason.
100+ hours of HTTP callout boilerplate
Named credentials, HttpRequest, JSON parse, retry logic — every. single. project.
Governor limits are a minefield
Async queueable chains, CPU time overruns, SOQL in loops. AI calls make it worse.
Einstein lock-in means one provider
Your enterprise wants Claude. Your security team wants Bedrock. Agentforce says no.
Zero testability
How do you unit test an HTTP callout to OpenAI? You can't mock it properly. Coverage suffers.
No cost attribution or audit trail
Which team spent $4,000 on tokens last month? Which pipeline touched that record? No idea.
Prompt Builder is UI-first, not SFDX-first
Prompt templates live in a UI. You can't git-review them. CI/CD breaks. Admins own the DX.
One callout is easy. Five chained steps aren't.
Any dev can write an HttpRequest to Claude in an afternoon. Chaining 5 stages with governor yield, per-stage retry, FLS-safe DML, async fan-out, and a mock test harness? That's 6–12 weeks — before you wire it to Flow and LWC.
FlowMason gives those 100 hours back.
Install the package. Call fm.LLM.summarize(). Ship AI features your team actually uses.
What FlowMason Actually Builds
You don't call an LLM.
You run a pipeline.
A pipeline is a directed graph of stages. Each stage is typed — SOQL query, LLM call, DML write, HTTP callout, conditional branch, for-each loop. FlowMason wires them together, monitors governors between every stage, retries on failure, and writes a full audit record on completion.
bulk-safe
any provider
async-safe
rollback-safe
Named Credential
The pipeline definition (~20 lines)
{
"id": "account-briefing-v1",
"stages": [
{ "id": "fetch", "type": "soql_query",
"query": "SELECT Id, Name, AnnualRevenue FROM Account WHERE Id = '{{input.accountId}}'" },
{ "id": "summarize", "type": "llm_summarizer",
"depends_on": ["fetch"],
"prompt": "Summarize for the AE: {{stages.fetch.records}}" },
{ "id": "classify", "type": "llm_classifier",
"depends_on": ["fetch"],
"prompt": "Score deal risk 1-10: {{stages.fetch.records}}" },
{ "id": "save", "type": "dml_operation",
"depends_on": ["summarize", "classify"] },
{ "id": "notify", "type": "http_callout",
"depends_on": ["save"] }
]
}
ExecutionResult result = PipelineRunner.run(
'account-briefing-v1',
new Map<String, Object>{
'accountId' => recordId
}
);
What you get for those 20 lines
The Build-vs-Buy Question
You could write this yourself.
Here's what that costs.
You could DIY the first pipeline. By the fifth, you're maintaining a framework — not shipping features.
- HttpRequest + Named Credential wiring per provider
- JSON serialization / deserialization for each LLM API
- Retry with exponential backoff (429, 503, timeouts)
- Queueable async chain + governor yield logic
- ForEach bulkification for trigger contexts
- Test mock interface that intercepts HTTP callouts
- FLS + CRUD enforcement on every data access
- Audit log record on every execution
- Per-token cost calculation per model per provider
- @InvocableMethod wrapper + Request/Response classes for Flow
- @AuraEnabled surface + LWC component for record pages
- REST endpoint for external systems
- Platform Event subscriber for async fan-out
- Scheduled batch job runner
- Visual debugger / step-through replay mode
{
"id": "my-pipeline",
"stages": [
{ "type": "soql_query" /* FLS auto-enforced */ },
{ "type": "llm_call" /* retry built-in */ },
{ "type": "dml_write" /* CRUD auto-checked */ },
{ "type": "http_callout" /* allowlisted creds */ }
]
}
- All 15 DIY items above are already built and AppExchange-reviewed
- Switch providers by changing one config field — no code
- Test any pipeline without burning real tokens (FMTestMocks)
- Governor yield happens automatically — you never think about it
- SFDX-deployable: git-review your pipelines like code
- Ship the 4th pipeline as fast as the 1st
The first pipeline you could build yourself.
By the fifth, FlowMason pays for itself.
Get Early Access
Positioning
Built to coexist with Salesforce,
not compete with it.
Salesforce already ships Flow, Apex, Einstein, and Agentforce. We don't replace any of them. We slot underneath as the developer-first AI primitive layer the platform is missing.
Not an Apex replacement.
Apex is the right tool for 90% of your custom logic. We make the 10% that calls AI much easier — governor-safe callouts, retry, token accounting, mocking.
Not a Flow Builder competitor.
Flow is orchestration. FlowMason is the AI element Flow consumes. Every pipeline ships as a native Flow action — Flow gets more powerful, admins stay in the tool they already love.
Not Agentforce.
Agentforce owns the autonomous agent chat UI. FlowMason is the primitive layer developers reach for when Agentforce's UI-first workflow doesn't fit their architecture. Different jobs, same org.
Not a no-code platform.
FlowMason is developer-first infrastructure. Admins get drop-in Flow actions and LWC components, but someone on your team writes Apex and deploys via SFDX. That's the point.
The first pipeline you build might look like competition. By the fifth, it's infrastructure — and every Salesforce tool you already use gets better.
Real Use Cases
What teams actually build
with FlowMason
Six pipelines — from a single Slack notification to a nightly 10,000-record enrichment run. Every FlowMason component is represented. Every example is production-realistic.
AI Case Triage
Route and prioritize support cases before a human reads them.
A Case trigger fires FMTriggerFramework.dispatch(). The pipeline fetches the case body and account history via SOQL, classifies the issue type (billing / technical / product) and urgency via two parallel LLM calls, writes the result back to the Case, and posts a formatted summary to the right Slack channel — all before the first agent even opens their queue.
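Following the same conventions as the account-briefing definition earlier on this page, the triage stage graph might be sketched like this (stage ids, prompts, and the caseId input are illustrative, not shipped configuration):

```json
{
  "id": "case-auto-triage-v1",
  "stages": [
    { "id": "fetch", "type": "soql_query",
      "query": "SELECT Id, Subject, Description, AccountId FROM Case WHERE Id = '{{input.caseId}}'" },
    { "id": "issue_type", "type": "llm_classifier",
      "depends_on": ["fetch"],
      "prompt": "Classify the issue (billing / technical / product): {{stages.fetch.records}}" },
    { "id": "urgency", "type": "llm_classifier",
      "depends_on": ["fetch"],
      "prompt": "Rate urgency 1-5: {{stages.fetch.records}}" },
    { "id": "save", "type": "dml_operation",
      "depends_on": ["issue_type", "urgency"] },
    { "id": "notify", "type": "http_callout",
      "depends_on": ["save"] }
  ]
}
```

The two classifier stages share a single `depends_on: ["fetch"]` parent, so they run in parallel; the `save` stage fans them back in.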
Nightly Lead Enrichment
Enrich every stale lead automatically while your team sleeps.
FMScheduler fires a Batch pipeline at 2 AM. PipelineRunnerBatch chunks leads into governor-safe groups of 50. Each chunk queries the lead's company data, calls an enrichment API for firmographic signals, scores fit with an LLM call using your ICP criteria, and writes a composite score back to the Lead record — 10,000 leads enriched before the SDRs log in.
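A minimal sketch of the 2 AM binding, assuming `fm.Scheduler` accepts a pipeline name and a standard Apex cron expression — the exact signature and the pipeline id here are assumptions, not documented API:

```apex
// Illustrative only — fm.Scheduler's real signature may differ.
// Standard Apex cron format: seconds minutes hours day-of-month month day-of-week
fm.Scheduler.schedule(
    'nightly-lead-enrichment',   // hypothetical pipeline id
    '0 0 2 * * ?'                // every day at 2:00 AM
);
```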
Renewal Intelligence
Give every AE an AI briefing before the renewal call.
Drop two LWC components on the Opportunity record page. When an AE opens a renewal opp, c-fm-ai-summary fetches the account, its open cases, past NPS scores, and last 3 closed-won/lost opps via SOQL and generates a 3-bullet AE briefing. c-fm-ai-next-best-action branches on deal health score and surfaces a tailored talk track — "expand" vs "save" mode — based on the classify stage output.
c-fm-ai-summary + c-fm-ai-next-best-action — drag onto any record page in App Builder
Contract Intelligence
Extract, normalize, and store key contract terms automatically.
An external CLM system posts a PDF contract to /services/apexrest/fm/v1/actions/contract-intel-v1. The Document Provider extracts structured fields (parties, term dates, liability caps) using a vision-capable model. Two parallel LLM stages summarize the contract and flag non-standard clauses. A combiner merges both outputs and the pipeline writes clean structured data into your Contract__c object — no manual data entry.
Voice of Customer Analysis
Turn every NPS response into structured product signal, automatically.
When a survey is submitted, Salesforce fires a Survey_Submitted__e Platform Event. FMEventFramework subscribes and kicks off the pipeline. Two LLM stages run in parallel: one scores sentiment, one extracts product themes from the free-text response. A conditional stage branches on NPS score: detractors get a Jira ticket created via HTTP callout; promoters get flagged for the customer marketing team. Every response ends up as a structured VoC record queryable by product area.
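Subscribing could be as small as a one-line platform event trigger, reusing the same `fm.PipelineTrigger.handle` binding shown in the trigger example on this page — the pipeline id below is illustrative:

```apex
// Platform event trigger — fires once per Survey_Submitted__e batch
trigger SurveySubmittedTrigger on Survey_Submitted__e (after insert) {
    // 'voc-analysis-v1' is an illustrative pipeline id
    fm.PipelineTrigger.handle('voc-analysis-v1', Trigger.new, null);
}
```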
Competitive Deal Intelligence
Brief the rep on competitive positioning the moment a competitor is named.
A trigger binding fires only when the Competitor__c field on Opportunity transitions from blank to a known value. The pipeline calls an internal knowledge API to fetch the latest competitive intel, passes it to an LLM stage that generates a tailored battle card (pricing objections, differentiation, landmines), and surfaces it in c-fm-ai-chat on the Opportunity page — so the rep can ask follow-up questions conversationally. A circuit_breaker stage ensures a knowledge API outage doesn't cascade into pipeline failure.
c-fm-ai-chat — drag onto any record page in App Builder
Every example above uses the same PipelineRunner.run() call under the hood.
The Platform
One SDK.
Seven Salesforce surfaces.
One provider-neutral runtime powers every surface. Devs, admins, and architects each get the interface that matches how they work.
Invocable Actions
@InvocableMethod 30+ AI actions in Flow Builder. Summarize, Classify, Extract — drag and drop. No code required.
Callable Apex
fm.LLM.* Sync + async facade. One method call. No HTTP boilerplate, no retry logic, no credential management.
LWC Drop-ins
<c-fm-ai-*> 6 fully encapsulated components. Drop on any record page via App Builder. Configure without code.
Trigger Framework
PipelineTrigger Bulkified, async-chained, governor-safe AI in triggers. One line to bind a pipeline to any SObject event.
Platform Events
AIRequest__e Publish events from any process. Subscribe from pipelines. Native async AI without queueable boilerplate.
REST Endpoints
/fm/v1/actions/{name} Every pipeline gets an auto-generated REST endpoint. External systems integrate without custom Apex.
Scheduler
fm.Scheduler Cron-driven pipelines. Lead enrichment at 2am. Contract checks every day. Managed jobs in a dedicated tab.
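For the Platform Events surface above, publishing is plain `EventBus.publish` — standard Salesforce API. A minimal sketch, assuming `AIRequest__e` carries pipeline-name and record-id fields (the field names are illustrative):

```apex
// EventBus.publish is standard platform API; the AIRequest__e
// field names below are assumptions for illustration.
AIRequest__e req = new AIRequest__e(
    Pipeline_Name__c = 'summarize_opportunity',
    Record_Id__c     = recordId
);
Database.SaveResult sr = EventBus.publish(req);
System.debug('Published: ' + sr.isSuccess());
```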
Surface Showcase
Real code.
Every surface.
Not pseudocode. Not simplified examples. The exact patterns your org will use on day one.
// 10 core methods — all provider-neutral
fm.LLM.summarize(recordId, promptTemplate);
fm.LLM.classify(text, categories);
fm.LLM.extract(text, jsonSchema);
fm.LLM.translate(text, targetLocale);
fm.LLM.rewrite(text, instructions);
fm.LLM.generate(promptTemplate, inputMap);
fm.LLM.critique(text, rubric);
fm.LLM.qa(question, recordId);
fm.LLM.validate(text, rules);
fm.LLM.rank(items, criteria);
// Async + provider override
String jobId = fm.LLM
.withProvider('bedrock')
.summarizeAsync(recordId, prompt, callbackClass);
c-fm-ai-summary Any record page
Prompt template, provider, cache TTL
c-fm-ai-chat Any page
System prompt, history retention, record context
c-fm-ai-next-best-action Record pages
Action set, ranking prompt
c-fm-ai-field-suggester Record detail fields
Field list, source text mapping
c-fm-ai-similar-records Any record page
SObject, embedding source, top-N
c-fm-ai-case-triage Service Cloud
Priority/category prompt, auto-route
# Every published pipeline gets an endpoint
POST /services/apexrest/fm/v1/actions/{pipelineName}
# Example
POST /services/apexrest/fm/v1/actions/summarize_opportunity
Content-Type: application/json
{ "recordId": "006Dn000001QpZr" }
# Response
{ "success": true, "output": {"summary": "..."}, "tokensUsed": 312 }
// bind once — FlowMason handles the rest
trigger CaseTrigger on Case(after insert) {
fm.PipelineTrigger.handle(
'case_auto_triage', Trigger.new, null
);
}
// ✓ Bulkification ✓ Async queueable chaining
// ✓ Governor limits ✓ Retry on failures
// ✓ Audit log ✓ Cost attribution
Provider Neutrality
Any model.
One interface.
Switch providers with a single config change. No refactoring. No redeployment. Your business rules, your model choice.
// org-default in FM_Config__mdt (one metadata record)
- DeveloperName: default_provider Value: anthropic
+ DeveloperName: default_provider Value: openai
// or per-call override — no org config change needed
fm.LLM
.withProvider('bedrock')
.withModel('anthropic.claude-3-5-sonnet')
.summarize(recordId, prompt);
Enterprise Governance
The observability
Agentforce doesn't give you.
Audit logs, cost attribution, FLS enforcement, tenant isolation — built into the runtime, not bolted on.
Full Audit Log
Every LLM call logged: who triggered it, which record, which pipeline, prompt version, response, tokens. Queryable via SOQL. Exportable.
SELECT Pipeline__c, Record_Id__c, Token_Count__c,
Created_By__c, CreatedDate
FROM FM_Audit_Log__c
WHERE CreatedDate = LAST_N_DAYS:30
Cost Attribution
Token costs broken down by pipeline, user, team, and time period. Know exactly which automation spent $4,000 last month.
fm.Cost.getByPipeline('opportunity_enrichment', 30);
// → { totalUSD: 412.30, calls: 1842, avgTokens: 312 }
FLS + CRUD Enforcement
Every SOQL query and DML goes through FMSecurityUtil. Field-level security checks are baked into every data operation — not optional.
FMSecurityUtil.checkObjectRead('Opportunity');
FMSecurityUtil.checkFieldAccessible(
'Opportunity',
new List<String>{'Amount', 'Name'}
);
Tenant Isolation
API keys encrypted at rest per org. No cross-tenant data access. Namespaced metadata. Built for Salesforce's multi-tenant security model.
// Encrypted — never in plaintext
FM_Secret__c key = FMVaultUtil.getKey(
'anthropic_api_key'
);
// Org-scoped. Cannot be accessed cross-org.
Enterprise security
Three compliance layers
most AI stacks skip.
FlowMason ships PII redaction, cross-user FLS enforcement, and async resilience as built-in primitives. All opt-in. All auditable. All with kill switches for incident response.
Value-pattern redaction
PII never reaches the LLM
FMRedactor scans every string leaf for SSNs, credit cards, API keys, bearer tokens, emails, phone numbers, IBANs, IPv4, AWS keys, and provider keys — not just keys named "ssn". Ten seed patterns ship inactive; admins activate per tenant.
- Catches PII embedded in free-text (Case.Description, Note.Body)
- Every match audited on PipelineExecution__c.Redaction_Count__c
- ReDoS linter runs pre-commit — catastrophic regexes rejected before merge
- Kill switch: FM_Config__mdt.REDACTION_VALUE_PATTERN_ENABLED
FLS-aware prompt guard
End-user FLS honored even in system context
FMPromptGuard enforces the invoking user's field-level security on every LLM prompt — even when the pipeline runs system-mode for throughput. Uses UserFieldAccess (Spring '24+) so profiles, permission sets, permission set groups, muting, and expiration all resolve correctly.
- Closes system-mode FLS gap in regulated verticals
- Variable-laundering defense: vars carry source FieldRef provenance
- Per-stage bypass flag is audited — not silent
- Kill switch: FM_Config__mdt.FLS_PROMPT_GUARD_ENABLED
Buffered circuit breaker
Provider outages stop cascading failures
Opt-in mode: "buffered" on any circuit_breaker stage. During an open window the pipeline's ExecutionState is persisted to FM_Circuit_Queue__c. When the breaker closes, a governor-safe Queueable replays each pipeline from the buffered stage — no re-running upstream stages, no double-DML.
- FOR UPDATE row locks prevent concurrent-drainer double-replay
- Dead-letter Platform Event fires after max-attempts failure
- Expiry cutoff so stale items don't replay against changed data
- Kill switches: CIRCUIT_BUFFER_ENABLED + 3 tuning keys
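In pipeline JSON, opting in might look like the following sketch — the `circuit_breaker` key shape and tuning fields are assumptions based on the description above, not documented schema:

```json
{ "id": "intel", "type": "http_callout",
  "depends_on": ["fetch"],
  "circuit_breaker": {
    "mode": "buffered",
    "max_attempts": 3,
    "expiry_minutes": 120
  }
}
```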
Redaction + FLS + async resilience — built into the runtime. No bolt-on compliance layer. No vendor lock-in. AppExchange-review ready.
Honest Comparison
FlowMason vs.
Einstein / Agentforce
We slot underneath Agentforce as the developer primitive layer — not above it. Different tools for different jobs.
| Dimension | Einstein / Agentforce | FlowMason |
|---|---|---|
| AI Provider | Einstein/Atlas LLM only | Any: Claude, GPT-4, Gemini, Bedrock, Vertex, Ollama |
| Primary Surface | Agent chat UI, Prompt Builder UI | Apex, Flow, LWC, Triggers, REST — developer-first |
| Primary User | End users (agents), Admins (Prompt Builder) | Developers. Admins also covered via Invocable Actions. |
| Deployment | Declarative, UI-authored. Awkward in CI/CD. | SFDX-packaged. Git-reviewable. CI/CD native. |
| Unit Testing | Limited mock support | fm.PipelineTest with full LLM/SOQL/HTTP mocks |
| Cost Model | Einstein credits (opaque, per-conversation) | Flat per-org + your own provider API costs (transparent) |
| Vendor Lock-in | Einstein credits, Einstein models | Zero. Swap providers with a config change. |
| Autonomous Agents | Yes — Agentforce owns this | Not our focus. Use Agentforce for that. |
Agentforce owns the agent chat UI. We're the primitive layer developers use when Agentforce's UI-first workflow doesn't fit their architecture.
Ready to ship AI in your Salesforce org?
One package install. Every Salesforce surface covered. Provider-neutral from day one.