Org Chat. Chat with your data.
Drop one LWC on any Lightning page. Users type natural-language questions; FlowMason generates SOQL, validates it through an 8-gate sanitiser, runs it under FLS, and returns rows + a one-sentence answer. Optional tool-calling lets the model chain queries for cross-object questions.
What it ships
- Seven placement surfaces. Tab, Utility Bar, Global Action, Agentforce Copilot, Record Page, Home Page, App Page. One CSV kill-switch disables any surface without a deploy.
- FLS + CRUD honoured everywhere. Generated SOQL runs under USER_MODE. FMPromptGuard scrubs field references the user can't see before the prompt leaves Apex.
- Allowlist-governed. Per-SObject opt-in via FM_Org_Chat_Allowlist__mdt. Read-only by default; DML requires three independent fail-closed gates.
- Eight LLM providers. Anthropic, OpenAI, Azure OpenAI, Bedrock, Google Vertex, Ollama, EdenAI, Salesforce Models API. Switch with one MDT flag.
- Tool-calling (opt-in). Provider-agnostic multi-step reasoning. The assistant invokes run_soql, lookup_metadata, object_relationships, and inventory_search iteratively. ADR-013.
- Org introspection. INV-1 reads triggers, flows, and validation rules per turn. INV-2 harvests inventory nightly so the assistant can answer "what triggers fire on Lead?" without a Tooling round-trip.
- DML two-step. Assistant proposes a write; user types the SObject name to confirm. Audit row immutable.
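Once a write flag is opened, the DML two-step applies on top of the other gates. A minimal sketch of an allowlist record that opts Lead into updates, following the field names used in the read-only example in Step 3 (the other two gates, the orgChatDmlEnabled config flag and the FlowMason_Org_Chat_Dml_User permset, must still pass independently):

```xml
<!-- FM_Org_Chat_Allowlist.Lead.md-meta.xml — sketch: Lead opted into updates -->
<CustomMetadata xmlns="http://soap.sforce.com/2006/04/metadata"
                xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <label>Lead</label>
    <values><field>Sobject_Api_Name__c</field><value xsi:type="xsd:string">Lead</value></values>
    <values><field>Allow_Read__c</field><value xsi:type="xsd:boolean">true</value></values>
    <values><field>Allow_Update__c</field><value xsi:type="xsd:boolean">true</value></values>
    <values><field>Max_Limit__c</field><value xsi:type="xsd:double">200</value></values>
</CustomMetadata>
```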
Quickstart. 10 minutes
Step 1. Assign the permset
Without this permset the LWC renders nothing. No Apex calls, no UI flash. ADR-009 enforcement.
sf org assign permset --target-org Flowmason --name FlowMason_Org_Chat_User

Step 2. Add the chat surface
Setup → Lightning App Builder → edit any home/app/record page. Drag Org Chat (the fmOrgChat custom component) onto the page. Set the surface attribute to match the placement (home_page, app_page, record_page, etc.). Save + activate.
Step 3. Allowlist your SObjects
Setup → Custom Metadata Types → FM_Org_Chat_Allowlist__mdt → New per SObject. Read-only by default.
<!-- FM_Org_Chat_Allowlist.Lead.md-meta.xml -->
<CustomMetadata xmlns="http://soap.sforce.com/2006/04/metadata" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<label>Lead</label>
<values><field>Sobject_Api_Name__c</field><value xsi:type="xsd:string">Lead</value></values>
<values><field>Allow_Read__c</field><value xsi:type="xsd:boolean">true</value></values>
<values><field>Max_Limit__c</field><value xsi:type="xsd:double">200</value></values>
<!-- Read-only by default. Flip Allow_Update__c / Insert / Delete only when DML is intended. -->
</CustomMetadata>

Step 4. Try it
Refresh the page. In the chat composer, type:
show me 5 hot leads
Expect a SOQL chip ("SELECT Id, Name, ... FROM Lead WHERE Rating='Hot' LIMIT 5"), a result table inline, and Export-CSV / Open-in-Inspector action buttons.
Step 5. Enable tool-calling + introspection
Optional but recommended. Lets the model invoke tools iteratively for cross-object questions like "which leads work at the same accounts as my top 5 open opps?".
// Enable Org Chat surfaces (default: all 7 on)
// Setup → Custom Metadata Types → FM Config
// Key: orgChatSurfacesEnabled
// Value: tab,utility,global_action,copilot,record_page,home_page,app_page
// Enable per-turn org introspection (INV-1)
// Key: orgChatManifestEnabled
// Value: true
// Enable provider-agnostic tool-calling (ADR-013)
// Key: orgChatToolCallingEnabled
// Value: true
// Schedule the nightly inventory harvester (INV-2)
FMOrgInventoryScheduler.ensureScheduled();
FMOrgInventoryHarvester.enqueue(); // force first run

Ask "which triggers fire on Lead?" and confirm the 🔧 N tool calls badge renders on the assistant turn.
Tool-calling provider matrix (ADR-013)
| Provider | supportsTools() | Native API |
|---|---|---|
| Anthropic | ✓ | Messages API tool_use |
| OpenAI | ✓ | Chat Completions tools / tool_calls |
| Azure OpenAI | ✓ | Chat Completions (deployment-based) |
| AWS Bedrock | ✓ | bedrock-2023-05-31 Anthropic protocol |
| Ollama | ✓ | /api/chat tool_calls |
| EdenAI | ✗ | Smart-router; falls back to single-shot |
| Google Vertex | ✗ | Gemini function-calling not yet wired |
| Mock | ✗ | Test fixture only |
Security model
Org Chat preserves every existing FlowMason trust boundary:
- FMSoqlValidator. 8-gate sanitiser on every piece of assistant-generated SOQL. DML keywords, multi-statement queries, and non-allowlisted objects are rejected before any row leaves the org. 27+ refusal tests plus a fuzz cohort.
- No DML tool exposed. The assistant has read-only powers via tools; DML still requires the human-confirm modal.
- Three independent fail-closed gates for DML. orgChatDmlEnabled must be true, the per-object Allow_*__c flag must be true, and the FlowMason_Org_Chat_Dml_User permset must be assigned. If any one gate says no, the answer is no.
- Discovery nudge gating. The custom permission FlowMason_Org_Chat_Discovery is required to see "want me to introspect <object>?" suggestions. Mitigates the recon vector.
- Per-user rate limit. 60 turns/min/user (orgChatMaxTurnsPerMinutePerUser); 3 DML confirms/min/user.
- PII redaction. FMRedactor runs its value-pattern + key-pattern catalog on every prompt and every reply.
- FLS scrub. FMPromptGuard removes field references the running user can't see.
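The security-related knobs above live in the same FM Config custom metadata used in Step 5. A sketch of the relevant keys, assuming the same key/value layout as the Quickstart flags (defaults taken from the list above):

```
// Setup → Custom Metadata Types → FM Config
// Key: orgChatDmlEnabled
// Value: false            // default: fail closed
// Key: orgChatMaxTurnsPerMinutePerUser
// Value: 60
```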
Kill switches
| Kill condition | Switch | Effect |
|---|---|---|
| Disable Org Chat org-wide | orgChatSurfacesEnabled = none | Every surface refuses turns |
| Disable DML | orgChatDmlEnabled = false | Reads still work; writes refused |
| Disable tool-calling | orgChatToolCallingEnabled = false | Byte-identical legacy single-shot path |
| Disable manifest excerpt | orgChatManifestEnabled = false | Schema-only prompt |
| Suspend inventory tool | Empty FM_Org_Inventory_Snapshot__c | inventory_search degrades gracefully |
| Block a user | Revoke FlowMason_Org_Chat_User permset | LWC renders nothing for that user |
Effective on next request. FMConfig.refresh() runs automatically per request.
Cost model
Tool-calling adds round-trips but reduces prompt tokens (history lives server-side in FMThreadState, not re-sent each turn).
- Pre-tool-calling: 2 round-trips per turn (build + summarise).
- Tool-calling on: 3-5 round-trips, capped by orgChatToolCallingMaxCalls (default 5).
- Wall-clock budget: orgChatToolCallingTimeoutMs (default 25 s).
- Long conversations (10+ turns) typically see 30-50% prompt-token reduction via thread-state caching.
Per-call accounting in Pipeline_Stage_Log__c.Cost__c; per-turn aggregates in FlowMasonRun__e.Detail__c. Telemetry dashboard rolls up by surface, provider, and tool-call rate.
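Both budgets are tunable through FM Config, in the same key/value style as Step 5. A sketch with the defaults quoted above (25000 ms for the 25 s default assumes the key's millisecond encoding implied by its name):

```
// Setup → Custom Metadata Types → FM Config
// Key: orgChatToolCallingMaxCalls
// Value: 5                // round-trip cap per turn
// Key: orgChatToolCallingTimeoutMs
// Value: 25000            // wall-clock budget per turn
```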
Related
- Getting Started for the package install
- LWC Drop-in Components for placement details
- Provider Configuration for per-vendor setup
- Governance & Audit for the audit log schema
- Enterprise Security for the redaction + FLS guard layers