FAQ
Common questions, honest answers.
Everything a Salesforce developer or architect asks before evaluating FlowMason.
General
What is FlowMason?
FlowMason is a pipeline orchestration framework for Salesforce. It lets you chain SOQL queries, LLM calls, DML writes, HTTP callouts, and control-flow logic into multi-step AI pipelines — all running natively inside your Salesforce org. You define a pipeline in JSON or via Pipeline Studio, call PipelineRunner.run() from Apex, a Flow action, an LWC event, a REST endpoint, or a trigger binding — and FlowMason handles governor awareness, retry logic, async execution, FLS/CRUD enforcement, and audit logging automatically.
Who is FlowMason for?
Salesforce developers and architects who need to build AI-powered features beyond what a single LLM prompt can do. If you've ever written an HttpRequest callout to Claude and thought "this is going to get complicated fast" — FlowMason is for you.
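For instance, invoking a pipeline from Apex might look like the sketch below. The PipelineRunner.run(name, inputs) shape comes from this FAQ; the pipeline name, input keys, and record ID are illustrative.

```apex
// Hypothetical pipeline name and inputs; substitute your own.
Map<String, Object> inputs = new Map<String, Object>{
    'recordId' => '00Q5e000001AbCdEAF',
    'tone'     => 'concise'
};

// Entry point named in this FAQ: runs the named pipeline with the
// given input map; FlowMason decides sync vs. async execution.
PipelineRunner.run('lead_triage', inputs);
```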
How is this different from Einstein Copilot or Agentforce?
Einstein and Agentforce are declarative, Salesforce-model-only products designed for admins. FlowMason is code-first, works with any LLM provider (Anthropic, OpenAI, Azure, Bedrock, Google, Ollama, and more), is fully SFDX-deployable, and gives developers precise control over every stage. They solve different problems: Einstein is a product; FlowMason is infrastructure.
Do I need to be a developer to use it?
To define and deploy pipelines, yes — FlowMason is developer-first. Admins can trigger pre-built pipelines from Flow Builder and use LWC drop-in components in App Builder without writing code, but someone on your team needs Apex and SFDX skills to set things up.
Is FlowMason a managed package on AppExchange?
Yes. FlowMason installs as a managed package from AppExchange. Everything runs inside your org — no external dependencies, no data leaving your org unless you've explicitly configured an LLM provider.
Architecture
What exactly is a pipeline?
A pipeline is a JSON definition of ordered stages connected by depends_on relationships. Each stage has a type (soql_query, llm_call, dml_write, http_callout, for_each, conditional, etc.) and a configuration. FlowMason builds a dependency graph, executes stages in topological order, monitors governor limits between stages, and handles retry/async/error logic at the framework level. You define the what; FlowMason handles the how.
How does FlowMason handle Salesforce governor limits?
FlowMason's GovernorMonitor measures CPU time, heap, and SOQL count between every stage. When it detects that continuing synchronously would risk hitting a limit, it automatically yields execution to a Queueable job, serializes the current state, and resumes from exactly where it left off. You never write async chaining code manually.
Where does pipeline execution happen?
Everything runs inside your Salesforce org in native Apex. LLM calls go out via Named Credentials to your configured provider. There is no external FlowMason server or cloud service involved in execution.
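To make the stage graph concrete, here is a minimal hand-written sketch of a two-stage definition. Only depends_on and the stage type names come from this FAQ; every other key, including the config blocks, is illustrative.

```json
{
  "name": "summarize_account",
  "stages": [
    {
      "id": "fetch",
      "type": "soql_query",
      "config": { "query": "SELECT Name, Industry FROM Account WHERE Id = :recordId" }
    },
    {
      "id": "summarize",
      "type": "llm_call",
      "depends_on": ["fetch"],
      "config": { "prompt": "Summarize this account: {{fetch.result}}" }
    }
  ]
}
```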
How many LLM providers does FlowMason support?
Seven: Anthropic (Claude), OpenAI (GPT-4o, o1), Azure OpenAI, Google Vertex AI, AWS Bedrock, Eden AI (smart router across 20+ models), and Ollama (local/private models). Switching providers requires changing one FM_Config__mdt field — no code changes.
Installation & Setup
How do I install FlowMason?
Install the managed package from AppExchange into your sandbox first, then assign the FlowMason_Full_Access permission set to developers and FlowMason_Provider_Admin to whoever configures LLM providers. Configure your first provider by adding an API key to FM_Provider__mdt. Run PipelineRunner.run('ping', new Map<String,Object>()) from Anonymous Apex to confirm the install.
Can I install in sandbox first?
Yes, and we recommend it. Install in a Developer sandbox, run your pipelines and tests there, then deploy your pipeline JSON files (stored as FlowMasonPipeline__mdt custom metadata) via SFDX, change sets, or a deployment pipeline.
Does FlowMason work with scratch orgs and SFDX?
Yes. Pipelines are defined as Custom Metadata (FlowMasonPipeline__mdt) and are fully SFDX-deployable. You can source-track pipelines, code-review them in pull requests, and deploy them through your CI/CD pipeline exactly like any other Salesforce metadata.
Does it work with Experience Cloud or Communities?
The LWC drop-in components work anywhere LWC is supported, including Experience Cloud pages. Pipeline execution via REST endpoints also works from Experience Cloud guest/authenticated contexts with appropriate permission set assignments.
AI Providers & Data Privacy
Does my Salesforce data leave my org?
Only the data you explicitly include in your LLM prompt leaves your org — sent via HTTPS to your configured LLM provider using a Salesforce Named Credential. FlowMason itself never stores, proxies, or sees your data. Your LLM provider's data processing terms apply to that traffic.
Can I use a private or on-premise model?
Yes. The Ollama provider connects to any OpenAI-compatible local model endpoint. If you're running a private model inside your network, configure its endpoint as a Named Credential and point the Ollama provider at it — no data leaves your infrastructure.
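As a sketch of that setup, assuming the classic Named Credential metadata format and an illustrative internal endpoint (11434 is Ollama's default port):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- force-app/main/default/namedCredentials/Ollama_Local.namedCredential-meta.xml -->
<NamedCredential xmlns="http://soap.sforce.com/2006/04/metadata">
    <label>Ollama Local</label>
    <endpoint>https://ollama.internal.example.com:11434</endpoint>
    <principalType>Anonymous</principalType>
    <protocol>NoAuthentication</protocol>
</NamedCredential>
```

Point the Ollama provider at this credential and LLM traffic never leaves your network.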
How do I manage API keys securely?
LLM API keys are stored in Salesforce Named Credentials, never in custom settings or hardcoded in Apex. Rotating a key means updating the Named Credential — no deployments required.
Can I restrict which pipelines can use which providers?
Yes. Pipeline-level provider overrides let you pin specific pipelines to specific providers. Combined with FM_Config__mdt rate limits and cost caps, you can ensure a test pipeline never accidentally hits a production Bedrock endpoint.
Governance & Cost
How do I track LLM spend?
Every pipeline execution writes a PipelineExecution__c record with input/output token counts, cost in USD, duration, pipeline ID, and the running user. Report on this with standard Salesforce reports, SOQL queries, or the built-in analytics REST endpoint (GET /fm/v1/analytics/summary).
What's in the audit log?
Pipeline_Audit__c records every execution event: who ran it, when, from which surface (Flow, Apex, REST, Trigger), what input was passed, what output was returned, and whether it succeeded or failed. Pipeline_Stage_Log__c has per-stage detail, including individual LLM prompts and responses (redactable for compliance).
Can I set spending limits or rate limits?
Yes. FM_Config__mdt lets you configure providerDailyBudgetUsd, providerMaxRetries, and rateLimitPerMinute per provider. Executions that would exceed the budget fail with a RateLimitException rather than running.
How long is execution history retained?
Retention is configurable via FM_Config__mdt (executionRetentionDays). A scheduled Apex job (FMExecutionRetentionScheduler) purges records older than the configured threshold. You can query and archive records before the purge using standard Salesforce data tools.
Testing & Development
Can I test pipelines without burning real LLM tokens?
FMTestMocks.mockLLM(stageId, response) intercepts LLM calls and returns your canned response. FMTestMocks.mockSOQL(fromClause, records) intercepts SOQL queries. FMTestMocks.mockHTTP(domain, status, body) intercepts HTTP callouts. Your entire test suite runs without making a single external API call — fast, deterministic, and free.
Are pipelines SFDX-deployable?
Yes. Pipelines stored as FlowMasonPipeline__mdt are standard Custom Metadata and deploy via sf project deploy. Pipeline Studio can export any pipeline to MDT XML in SFDX source format. You can code-review pipeline changes in pull requests like any other metadata.
How do I promote a pipeline from sandbox to production?
Export the pipeline from Pipeline Studio as Custom Metadata XML, add it to your SFDX project under force-app/main/default/customMetadata/, commit it, and deploy via your standard change set or CI/CD pipeline. Pipelines deploy atomically with the Apex code that calls them.
Does FlowMason work with Salesforce DX scratch orgs?
Yes. Install the managed package into your scratch org definition, deploy your pipeline metadata, and run tests with sf apex run test. The mock framework means your Apex tests pass without any LLM provider configured in the scratch org.
Org Chat
What is Org Chat?
A Lightning Web Component (fmOrgChat) you drop on any Lightning page. Users type natural-language questions; FlowMason generates SOQL, validates it through an 8-gate sanitiser, runs it under FLS, and returns rows + a one-sentence answer. See /features/org-chat.
Where can I place the Org Chat component?
Seven surfaces: Tab, Utility Bar, Global Action, Agentforce Copilot, Record Page, Home Page, App Page. Surface gating (ADR-009) is a single CSV kill-switch (orgChatSurfacesEnabled): remove a surface's token from the list to disable it without a deploy.
Can the assistant write data, or is it read-only?
Read-only by default. Writes require three independent fail-closed gates: orgChatDmlEnabled = true, per-SObject Allow_Update__c / Allow_Insert__c / Allow_Delete__c = true, and the FlowMason_Org_Chat_Dml_User permset assigned. Plus the user types the SObject name in a confirmation modal. If any one gate says no, the answer is no. See ADR-005 and ADR-006.
Does it send my data to the LLM?
Only the SOQL the LLM generates is sent. Never raw rows. FMSoqlValidator runs the 8-gate sanitiser before any row leaves your org; the assistant gets shaped results back. PII redaction (FMRedactor + FMPromptGuard) runs on every prompt and every reply.
How does the assistant know my org's schema?
FMSchemaCatalog reads describe + allowlist at request time and threads a per-turn schema excerpt into the LLM prompt. With introspection on (orgChatManifestEnabled = true), it adds triggers, flows, validation rules, dependency degree, and perm-set FLS for the SObjects in scope.
What is tool-calling?
Provider-agnostic multi-step reasoning (ADR-013). It lets the assistant invoke run_soql, lookup_metadata, object_relationships, and inventory_search iteratively for cross-object questions. Off by default; flip orgChatToolCallingEnabled = true to enable it.
Which providers support tool-calling?
Anthropic, OpenAI, Azure OpenAI, AWS Bedrock, Ollama. EdenAI / Google Vertex / Salesforce Models API fall back to single-shot automatically. Full matrix on /features/org-chat.
What does the inventory harvester do?
Once daily, it populates FM_Org_Inventory_Snapshot__c with org component metadata (Apex classes, triggers, flows, validation rules, perm sets). This lets the inventory_search tool answer "what triggers fire on Lead?" without re-querying the Tooling API on every chat turn.
What about prompt injection?
FMSoqlValidator is the trust boundary. Even a successfully prompt-injected LLM cannot exfiltrate data, because every SOQL statement passes the 8-gate sanitiser before execution. DML is refused outright by the dispatcher; intent-detected DML routes to the human-confirm modal. Backed by 27+ refusal tests and a fuzz cohort.
How do I revoke chat access in an emergency?
Set orgChatSurfacesEnabled = none to kill all surfaces immediately. Or orgChatDmlEnabled = false to stop just DML. Or revoke the FlowMason_Org_Chat_User permset for one user. All take effect on the next request.
Are LLM calls FLS-aware?
Yes. FMPromptGuard scrubs field references the running user can't see before the prompt leaves Apex. SOQL execution honours FLS and CRUD via FMSecurityUtil. Master switches: FLS_PROMPT_GUARD_ENABLED, REDACTION_VALUE_PATTERN_ENABLED.
Inspector vs Debugger vs Telemetry. Which when?
Inspector for "why did this run fail?" (post-mortem). Debugger for "what's happening right now?" (live, paused, step). Telemetry Dashboard for "how are pipelines doing across all runs?" (aggregates).
How do I add a new component / provider / plugin?
Subclass BaseNode / BaseOperator / BaseFlowControl and register via FM_Component_Type__mdt. For providers, implement LLMProvider (and optionally LLMToolCapableProvider for tool support) and register via FM_Provider_Type__mdt. See the Plugin SDK reference.
Per-user rate limits?
Org Chat: 60 turns/min/user (orgChatMaxTurnsPerMinutePerUser). DML confirmations: 3/min/user (orgChatDmlMaxConfirmationsPerMinute). Tool calls per turn: 5 (orgChatToolCallingMaxCalls). All configurable.
Tool-calling adds round-trips (3-5 per turn vs 2 baseline) but reduces prompt tokens, because conversation history lives server-side in FMThreadState instead of being re-sent each turn. Long conversations (10+ turns) typically see 30-50% prompt-token savings. Per-call cost lands in Pipeline_Stage_Log__c.Cost__c.