
Governance, Audit & Compliance

FlowMason gives you full visibility into every AI action — who ran what, when, what it cost, and what the LLM produced. All data stays in your Salesforce org.

Data sovereignty: Every LLM call, its inputs, outputs, token counts, and cost are recorded in standard Salesforce objects inside your org. Nothing is sent to a FlowMason server. You query it with SOQL.

Execution tracking

Every pipeline run creates a PipelineExecution__c record. Every stage within that run creates a Pipeline_Stage_Log__c child record. Together, they give you complete lineage for every AI decision.

SOQL — execution and stage queries
// Query recent pipeline executions with stage-level detail:
List<PipelineExecution__c> execs = [
    SELECT Id, Pipeline_Id__c, Status__c, Input__c, Output__c,
           Total_Input_Tokens__c, Total_Output_Tokens__c, Total_Cost__c,
           CreatedById, CreatedDate, Duration_Ms__c
    FROM PipelineExecution__c
    WHERE Status__c IN ('Completed', 'Failed')
    AND CreatedDate = LAST_N_DAYS:7
    ORDER BY CreatedDate DESC
    LIMIT 100
];

// Drill into stage-level logs for a specific execution:
List<Pipeline_Stage_Log__c> stages = [
    SELECT Stage_Id__c, Stage_Type__c, Status__c,
           Input_Tokens__c, Output_Tokens__c, Duration_Ms__c,
           Error_Message__c
    FROM Pipeline_Stage_Log__c
    WHERE Execution__c = :execId
    ORDER BY Stage_Order__c
];
| Field | Object | What it captures |
| --- | --- | --- |
| Status__c | PipelineExecution__c | Pending / Running / Completed / Failed / Cancelled |
| Total_Cost__c | PipelineExecution__c | Sum of all stage costs for this run (USD) |
| Total_Input_Tokens__c | PipelineExecution__c | Total prompt tokens across all stages |
| Total_Output_Tokens__c | PipelineExecution__c | Total completion tokens across all stages |
| Duration_Ms__c | PipelineExecution__c | Wall-clock execution time in milliseconds |
| Input__c | PipelineExecution__c | Serialized pipeline input (sensitive fields redacted) |
| Output__c | PipelineExecution__c | Terminal stage output (sensitive fields redacted) |
| Error_Message__c | Pipeline_Stage_Log__c | Error detail for a failed stage |

Cost attribution reports

Build cost reports by pipeline, user, or time period directly with SOQL aggregate queries:

SOQL — cost reports
// Cost by pipeline (last 30 days):
SELECT Pipeline_Id__c,
       SUM(Total_Cost__c) totalCost,
       SUM(Total_Input_Tokens__c) totalInputTokens,
       SUM(Total_Output_Tokens__c) totalOutputTokens,
       COUNT(Id) runCount
FROM PipelineExecution__c
WHERE CreatedDate = LAST_N_DAYS:30
  AND Status__c = 'Completed'
GROUP BY Pipeline_Id__c
ORDER BY SUM(Total_Cost__c) DESC

// Cost by user (last 30 days):
SELECT CreatedById,
       SUM(Total_Cost__c) totalCost,
       COUNT(Id) runCount
FROM PipelineExecution__c
WHERE CreatedDate = LAST_N_DAYS:30
GROUP BY CreatedById
ORDER BY SUM(Total_Cost__c) DESC

These queries work as Salesforce Reports too — create a custom report type on PipelineExecution__c for a point-and-click cost dashboard without writing SOQL.
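In Apex, `GROUP BY` queries return `AggregateResult` rows rather than typed sObjects, and aliases are read back with `get()`. A minimal sketch of consuming the cost-by-pipeline query from Apex:

```apex
// Aggregate SOQL returns AggregateResult rows; read each alias with get().
List<AggregateResult> rows = [
    SELECT Pipeline_Id__c pid,
           SUM(Total_Cost__c) totalCost,
           COUNT(Id) runCount
    FROM PipelineExecution__c
    WHERE CreatedDate = LAST_N_DAYS:30
      AND Status__c = 'Completed'
    GROUP BY Pipeline_Id__c
    ORDER BY SUM(Total_Cost__c) DESC
];
for (AggregateResult ar : rows) {
    System.debug(ar.get('pid') + ': $' + ar.get('totalCost')
        + ' across ' + ar.get('runCount') + ' runs');
}
```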

The audit trail

Pipeline_Audit__c is the immutable action log. It records every pipeline execution event separately from the execution record itself — giving you an append-only history that persists even after executions are cleaned up by retention jobs.

SOQL — audit trail
// The Pipeline_Audit__c object records every pipeline execution event:
// Action__c        — 'run' | 'run_async' | 'cancel' | 'resume' | 'validate'
// Pipeline_Id__c   — which pipeline was involved
// User__c          — who triggered the action
// Execution_Id__c  — the PipelineExecution__c Id (when applicable)
// Timestamp__c     — when it happened
// Details__c       — JSON blob with action-specific context

// Query audit trail for a specific pipeline:
List<Pipeline_Audit__c> trail = [
    SELECT Action__c, User__c, Timestamp__c, Details__c, Execution_Id__c
    FROM Pipeline_Audit__c
    WHERE Pipeline_Id__c = 'account-summarize-v1'
    AND Timestamp__c = LAST_N_DAYS:30
    ORDER BY Timestamp__c DESC
];

// Retention: controlled by FM_Config__mdt.auditLogRetentionDays (default: 365)

FLS and CRUD enforcement

FlowMason enforces Salesforce's field-level (FLS) and object-level (CRUD) security on every SOQL query and DML operation. This isn't optional, bolted-on security; enforcement is the default behavior at every I/O seam in the framework.

FLS / CRUD enforcement notes
// FLS and CRUD are enforced by default on all SOQL and DML stages.
// You can verify this by reading the stage config:
// "enforce_fls": true   — default; applies FMSecurityUtil.stripReadable() to SOQL results
// "enforce_fls": false  — opt-out for advanced scenarios (keep the opt-out visible in pipeline JSON so it can be audited)

// In Apex, all @AuraEnabled methods check object-level read access:
// FMSecurityUtil.checkObjectRead('PipelineExecution__c');
// FMSecurityUtil.checkFieldAccessible('Account', 'Description');

// Cross-user execution access is blocked by default:
// A user can only resume/cancel their own executions, UNLESS they hold
// the FlowMason_Execution_Admin custom permission.
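Under the hood, this style of enforcement maps onto the platform's own primitives. A hedged sketch of an FLS-safe read using the standard `Security.stripInaccessible` API (the assumption here is that `FMSecurityUtil` delegates to it; the exact wrapper internals aren't documented above):

```apex
// Standard platform API for FLS enforcement on query results:
List<PipelineExecution__c> raw = [
    SELECT Id, Status__c, Total_Cost__c, Output__c
    FROM PipelineExecution__c
    LIMIT 50
];
SObjectAccessDecision decision =
    Security.stripInaccessible(AccessType.READABLE, raw);
// Fields the running user cannot read are removed from the returned records:
List<PipelineExecution__c> safe =
    (List<PipelineExecution__c>) decision.getRecords();
```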

Sharing model

| Object | Sharing model |
| --- | --- |
| FlowMason_Pipeline__c | Private |
| PipelineExecution__c | Private |
| Pipeline_Audit__c | Private |
| Pipeline_Stage_Log__c | ControlledByParent (parent: PipelineExecution__c) |
| FlowMason_Admin_Audit__c | Private |

Sensitive data redaction

Before any pipeline input, output, or stage log is persisted to the database, FMRedactor scans for sensitive patterns and replaces matching values with ***redacted***. This protects against accidentally storing API keys, passwords, or tokens that flow through pipeline data.

Default redaction patterns (regex, case-insensitive):

  • api[_-]?key, password, secret, bearer
  • authorization, access[_-]?token, refresh[_-]?token
  • client[_-]?secret, private[_-]?key

Add custom patterns by deploying FM_Redaction_Pattern__mdt records — no code change required.
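A minimal sketch of the match-and-replace approach, using the default patterns above. The assumption that redaction runs over the JSON-serialized payload (matching on key names) is illustrative; FMRedactor's actual implementation may differ:

```apex
// Sketch: redact string values whose JSON keys match a sensitive pattern.
// Assumption: redaction is applied to the serialized payload before DML.
public class RedactorSketch {
    static final Pattern SENSITIVE = Pattern.compile(
        '(?i)"(api[_-]?key|password|secret|bearer|authorization|' +
        'access[_-]?token|refresh[_-]?token|client[_-]?secret|' +
        'private[_-]?key)"\\s*:\\s*"[^"]*"'
    );
    public static String redact(String jsonPayload) {
        Matcher m = SENSITIVE.matcher(jsonPayload);
        // $1 preserves the matched key; only the value is replaced.
        return m.replaceAll('"$1": "***redacted***"');
    }
}
```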

Admin configuration audit

Every mutation to provider or SDK configuration is logged to FlowMason_Admin_Audit__c with before/after SHA-256 hashes of the affected record:

Admin audit log
// FlowMason_Admin_Audit__c records every admin configuration change:
// Action__c    — 'create' | 'update' | 'delete' | 'activate'
// Target_Id__c — the CMT record affected
// Before_Hash__c / After_Hash__c — SHA-256 of the record before/after
// Acting_User__c — who made the change
// Timestamp__c   — when it happened

// Any mutation to FM_Config__mdt requires the FlowMason_Config_Admin permission.
// Any mutation to LLMProviderConfig__mdt requires FlowMason_Provider_Admin.
// Both write to FlowMason_Admin_Audit__c automatically.
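The before/after hashes can be independently recomputed with standard Apex crypto. A sketch, assuming the hash input is the JSON-serialized record (the exact serialization FlowMason hashes isn't specified above):

```apex
// Recompute a record hash to verify an admin audit entry.
// Assumption: the hash covers the JSON-serialized record.
String payload = JSON.serialize(configRecord);
Blob digest = Crypto.generateDigest('SHA-256', Blob.valueOf(payload));
String hexHash = EncodingUtil.convertToHex(digest);
// Compare hexHash against Before_Hash__c / After_Hash__c.
```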

Permission sets

Permission set reference
// Permission sets deployed with the SDK:
// FlowMason_Full_Access      — run primitives, use LWCs, execute pipelines
// FlowMason_Provider_Admin   — manage LLMProviderConfig__mdt records
// FlowMason_Config_Admin     — manage FM_Config__mdt records
// FlowMason_Execution_Admin  — cancel/resume any user's pipeline execution

// Assign in Setup → Permission Sets → [set name] → Manage Assignments
// Or via SFDX:
// sf org assign permset --name FlowMason_Full_Access --on-behalf-of [email protected]

Data retention

Retention jobs are admin opt-in — they're not active by default. This keeps first-install side-effects to zero. Schedule them explicitly when you're ready:

Apex — retention setup
// Enable retention jobs (admin opt-in — not active by default):

// Delete Pipeline_Stage_Log__c rows older than stageLogRetentionDays (default 90):
FMLogRetentionBatch.scheduleDefault();

// Delete terminal PipelineExecution__c rows older than executionRetentionDays (default 90):
FMExecutionRetentionBatch.scheduleDefault();

// Custom schedule (cron expression):
FMLogRetentionBatch.schedule('0 0 2 * * ?');  // 2am daily

// Check what's scheduled:
// Setup → Scheduled Jobs — look for 'FMLogRetentionBatch' and 'FMExecutionRetentionBatch'

// Tune retention windows in FM_Config__mdt:
// auditLogRetentionDays    = 365   (Pipeline_Audit__c)
// stageLogRetentionDays    = 90    (Pipeline_Stage_Log__c)
// executionRetentionDays   = 90    (PipelineExecution__c terminal rows)
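The scheduled jobs can also be confirmed programmatically via `CronTrigger`, the standard platform object behind Setup → Scheduled Jobs (the job-name filter below is an assumption based on the class names above):

```apex
// List scheduled FlowMason retention jobs and their next fire times:
for (CronTrigger ct : [
    SELECT CronJobDetail.Name, CronExpression, NextFireTime, State
    FROM CronTrigger
    WHERE CronJobDetail.Name LIKE 'FM%Retention%'
]) {
    System.debug(ct.CronJobDetail.Name + ' next fires ' + ct.NextFireTime);
}
```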

Governor limit protections

In addition to Salesforce's own platform limits, FlowMason enforces explicit caps that throw GovernorLimitException — a typed exception you can catch and handle — before hitting the hard platform limit:

| Cap | Config key | Default |
| --- | --- | --- |
| Queueable chain depth | queueableChainDepthMax | 5 |
| ForEach items per stage | forEachMaxItems | 500 |
| Serialized context size | contextSizeCapBytes | 500,000 bytes |
| Pipeline stage iterations | pipelineMaxIterations | 1,000 |
| HTTP callout timeout | httpCalloutMaxTimeoutMs | 120,000 ms |
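Because GovernorLimitException is a typed exception, callers can catch it and degrade gracefully instead of failing the whole transaction. A hedged sketch (the `FMPipelineRunner.run` entry point shown here is an assumption, not a documented API):

```apex
try {
    // Assumed entry point: any FlowMason run that exceeds a configured cap
    // throws GovernorLimitException before the platform hard limit is hit.
    FMPipelineRunner.run('account-summarize-v1', inputMap);
} catch (GovernorLimitException e) {
    // Handle the capped run: log it, notify, or retry with a smaller batch.
    System.debug(LoggingLevel.WARN, 'FlowMason cap hit: ' + e.getMessage());
}
```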

What's next