# Connecting your LLM providers
FlowMason is provider-neutral. Point it at any of the seven supported providers — or switch between them — without touching pipeline logic. All configuration lives in Custom Metadata.
## How provider configuration works
Provider configuration is stored in LLMProviderConfig__mdt. The active record (the one with IsActive__c = true) is the default for all pipelines that don't specify a provider. You can override per-stage in pipeline config, or per-call from Apex.
Only one record should be active at a time. FlowMason reads the first active record it finds — activating a second record doesn't automatically deactivate the first.
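Because Custom Metadata is queryable with SOQL, you can verify the single-active-record invariant from Anonymous Apex. A quick sanity check (a sketch; FlowMason itself doesn't require this, but it catches the "two active records" mistake early):

```apex
// List every active provider record — there should be exactly one.
List<LLMProviderConfig__mdt> active = [
    SELECT DeveloperName, ProviderName__c, ModelName__c
    FROM LLMProviderConfig__mdt
    WHERE IsActive__c = true
];
if (active.size() > 1) {
    System.debug(LoggingLevel.WARN,
        'Multiple active provider records found: ' + active);
}
```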
## Supported providers
| Provider | ProviderName__c | Notes |
|---|---|---|
| Eden AI (default) | edenai | Smart router — picks best model automatically. Supports 30+ models from all providers via one API key. |
| Anthropic | anthropic | Claude 3.5, Claude 4 family. Best for reasoning-heavy tasks and long document analysis. |
| OpenAI | openai | GPT-4o, GPT-4o mini. Strong general-purpose performance and function calling. |
| Azure OpenAI | azure_openai | OpenAI models hosted in your Azure subscription. Meets data residency requirements. |
| Google Vertex AI | google_vertex | Gemini 2.5 Pro, Gemini Flash. Strong for multimodal and code generation. |
| AWS Bedrock | bedrock | Claude, Llama, Titan via AWS. Full VPC isolation available. |
| Ollama | ollama | Self-hosted local models. Zero data egress — everything stays on your infrastructure. |
## Setting up a provider
Go to Setup → Custom Metadata Types → LLM Provider Config → Manage Records → New, or deploy the following SFDX metadata:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<CustomMetadata xmlns="http://soap.sforce.com/2006/04/metadata"
                xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <label>Anthropic Claude</label>
    <values>
        <field>ProviderName__c</field>
        <value xsi:type="xsd:string">anthropic</value>
    </values>
    <values>
        <field>ModelName__c</field>
        <value xsi:type="xsd:string">claude-sonnet-4-6</value>
    </values>
    <values>
        <field>ApiKey__c</field>
        <value xsi:type="xsd:string">sk-ant-...</value>
    </values>
    <values>
        <field>IsActive__c</field>
        <value xsi:type="xsd:boolean">true</value>
    </values>
</CustomMetadata>
```

### API key configuration for each provider
```apex
// Anthropic
// ProviderName__c = 'anthropic'
// ApiKey__c       = 'sk-ant-...'

// OpenAI
// ProviderName__c = 'openai'
// ApiKey__c       = 'sk-...'

// Azure OpenAI
// ProviderName__c = 'azure_openai'
// ApiKey__c       = 'your-azure-key'
// BaseUrl__c      = 'https://your-resource.openai.azure.com/'
// ModelName__c    = 'gpt-4o' (deployment name in Azure)

// Google Vertex AI
// ProviderName__c = 'google_vertex'
// ApiKey__c       = 'service-account-json-or-access-token'
// BaseUrl__c      = 'https://us-central1-aiplatform.googleapis.com/'
// ModelName__c    = 'gemini-2-5-pro-preview'

// AWS Bedrock
// ProviderName__c = 'bedrock'
// ApiKey__c       = 'access-key-id::secret-access-key'
// BaseUrl__c      = 'https://bedrock-runtime.us-east-1.amazonaws.com/'
// ModelName__c    = 'anthropic.claude-3-5-sonnet-20241022-v2:0'

// Ollama (self-hosted)
// ProviderName__c = 'ollama'
// BaseUrl__c      = 'http://your-server:11434/'
// ModelName__c    = 'llama3.2'
// ApiKey__c       = '' (no key needed for local Ollama)
```

Storing a raw key in ApiKey__c is convenient for development and sandbox orgs, but Named Credentials provide a better security posture for production. Set ApiKey__c to callout:YourNamedCredential to route the request through a Named Credential.
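For production, the same metadata record can reference a Named Credential instead of a raw key. A fragment of the record (a sketch; Anthropic_NC is a placeholder Named Credential name, not something FlowMason ships):

```xml
<values>
    <field>ApiKey__c</field>
    <value xsi:type="xsd:string">callout:Anthropic_NC</value>
</values>
```

The key itself then lives in the Named Credential, where it is encrypted at rest and never exposed in metadata deployments.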
## Eden AI — the smart router
Eden AI is the default provider because it removes the "which model?" decision. Set ModelName__c = '@edenai' and Eden AI selects the best available model for each call based on task type, latency, and cost. You can also pin to a specific model through Eden AI:
```apex
// Eden AI is the default. It routes to the best available model.
// To use a specific model via Eden AI:
// ModelName__c = 'anthropic/claude-sonnet-4-6'   (Claude)
// ModelName__c = 'openai/gpt-4o'                 (GPT-4o)
// ModelName__c = 'google/gemini-2-5-pro-preview' (Gemini)
// ModelName__c = '@edenai'                       (smart router — default)

// In a pipeline stage config:
// "provider": "edenai",
// "model": "anthropic/claude-opus-4-7"
```

## Switching providers
Switching the active provider is a metadata change — no code changes, no pipeline edits. This is what "provider-neutral" means in practice:
```apex
// Change the active provider in FM_Config__mdt:
FMConfigAdmin.upsertConfig(new Map<String, Object>{
    'key'       => 'defaultProvider',
    'value'     => 'openai',
    'valueType' => 'string',
    'category'  => 'provider'
});

// Or override per-stage in pipeline JSON:
// "config": {
//     "provider": "anthropic",
//     "model": "claude-opus-4-7"
// }

// In a test — no permission needed:
FMConfig.setForTest('defaultProvider', 'openai', 'string');
```

## Retry and timeout settings
All retry behavior is controlled through FM_Config__mdt. The defaults are sensible for most scenarios, but you can tune them for high-volume or latency-sensitive pipelines:
```apex
// All retry + timeout settings are in FM_Config__mdt:
// providerMaxRetries        = 3     — retry count for transient failures
// providerBackoffBaseMs     = 1000  — base backoff before first retry (ms)
// providerBackoffMaxMs      = 30000 — ceiling for retry backoff (ms)
// providerBackoffMultiplier = 2.0   — exponential multiplier
// providerTimeoutMs         = 60000 — HTTP callout timeout (ms)

// Read and override in Apex:
Integer retries = FMConfig.getInteger('providerMaxRetries', 3);
FMConfig.setForTest('providerMaxRetries', '5', 'number'); // in tests
```

| Key | Default | Description |
|---|---|---|
| defaultMaxTokens | 2000 | Default max_tokens for LLM calls |
| defaultTemperature | 0.7 | Default temperature (0 = deterministic, 1 = creative) |
| providerMaxRetries | 3 | Retry count for transient failures |
| providerTimeoutMs | 60000 | HTTP callout timeout (ms) |
| providerBackoffBaseMs | 1000 | Base wait before first retry (ms) |
| providerBackoffMaxMs | 30000 | Ceiling for retry backoff (ms) |
| providerBackoffMultiplier | 2.0 | Exponential backoff multiplier |
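With the defaults above, retry delays grow as base × multiplier^(attempt − 1), capped at the ceiling. A sketch of the schedule these settings imply (the actual retry loop is internal to FlowMason; this only illustrates the arithmetic):

```apex
Integer baseMs  = 1000;   // providerBackoffBaseMs
Integer maxMs   = 30000;  // providerBackoffMaxMs
Double  mult    = 2.0;    // providerBackoffMultiplier
Integer retries = 3;      // providerMaxRetries

for (Integer attempt = 1; attempt <= retries; attempt++) {
    Double raw = baseMs * Math.pow(mult, attempt - 1);
    Integer delayMs = Math.min(maxMs, raw.intValue());
    System.debug('Retry ' + attempt + ' after ' + delayMs + ' ms');
    // → 1000 ms, 2000 ms, 4000 ms
}
```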
## Cost tracking and pricing
FlowMason tracks token usage on every LLM call. Pricing rates are seeded in FM_Provider_Pricing__mdt and used to calculate cost attribution per execution:
```apex
// Calculate the cost of an LLM call:
Decimal cost = ProviderResponse.calculateCost(
    'anthropic/claude-sonnet-4-6', // model identifier
    1500,                          // input tokens
    800                            // output tokens
);

// Pricing is seeded in FM_Provider_Pricing__mdt.
// Add or override rows via SFDX:
// Model_Name__c  = 'anthropic/claude-opus-4-7'
// Input_Cost__c  = 0.000015 (per token)
// Output_Cost__c = 0.000075 (per token)
// Currency__c    = 'USD'
```

Add or update pricing rows by deploying FM_Provider_Pricing__mdt metadata. Rows are matched by exact Model_Name__c first, then by provider-prefix fallback. See Governance & Audit for cost reporting queries.
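With per-token rates like those shown for claude-opus-4-7 (Input_Cost__c = 0.000015, Output_Cost__c = 0.000075), the cost of a 1,500-input / 800-output call works out as follows (hypothetical rates, shown only to illustrate the arithmetic):

```apex
// cost = inputTokens * inputRate + outputTokens * outputRate
Decimal inputCost  = 1500 * 0.000015;        // 0.0225 USD
Decimal outputCost = 800  * 0.000075;        // 0.0600 USD
Decimal total      = inputCost + outputCost; // 0.0825 USD
System.debug(total);
```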
## Troubleshooting
### HTTP 401 — authentication failed
ApiKey__c is invalid or expired. Update the key in the CMT record. For security, API keys are never returned by FMProviderAdmin.listProvidersAdmin — you must edit the record directly in Setup.

### Timeout errors on large inputs
Increase providerTimeoutMs, or lower defaultMaxTokens to reduce response time. Alternatively, pin to a faster model (e.g., anthropic/claude-3-5-haiku-20241022 or openai/gpt-4o-mini) via ModelName__c.

### Multiple active provider records
FlowMason uses the first active record it finds, so deactivate all but one. The provider admin component (c-fm-provider-admin) shows all records and their active state.