#salesforceadmins — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #salesforceadmins, aggregated by home.social.
-
Warn and Inform with Native Toast Messages in Salesforce Flow
Just when the Salesforce community thought we had fully digested the Summer ’26 release notes, the product team decided to drop a classic “one more thing.” Adam White recently announced that two new functionalities were “snuck” into the release at the last minute. For those of us who live and breathe Flow Builder, this is like finding an extra gift under the tree after you thought Christmas was over.
The star of this stealth update? The Native Show Toast Message Action.
In this post, we’re going to break down why this is such a great update for Salesforce Admins, how we used to handle this the “old way,” and a clever trick for implementing these notifications without cluttering your Flow logic.
What Exactly is a Toast Message?
In the world of User Experience (UX) and User Interface (UI) design, a Toast Message is a small, non-modal notification that “pops up” (like toast from a toaster) to provide feedback about an operation.
Unlike a modal or a popup window, a toast message doesn’t require the user to click “OK” to continue their work (though they can be configured to stay until dismissed). They are designed to be subtle but informative. In Salesforce, you usually see them at the top of the screen in green (Success), red (Error), yellow (Warning), or blue (Information).
Why Toasts Matter
Toasts are critical for a smooth user journey. They confirm that an action was successful or alert a user to a problem without breaking their concentration or forcing them to navigate to a new page. Without toasts, users are often left wondering, “Did that save?” or “Did my automation actually run?”
The Way We Were: The Era of UnofficialSF and AppExchange
For years, a native “Show Toast” action in Flow was one of the most requested ideas. But for a long time, the answer from Salesforce was silence. This led the community to innovate on its own.
The UnofficialSF Method
To get a toast message in a Screen Flow, most Admins turned to UnofficialSF. This incredible community resource offered a “Show Toast” flow component. While it worked beautifully, it came with technical debt considerations:
Installation Management: You had to install a managed or unmanaged package in your production environment.
Maintenance: Every time Salesforce updated its API, you had to ensure your community-sourced components remained compatible.
Security Audits: In highly regulated industries (like Finance or Healthcare), getting a third-party package approved by a security team can take months.
The AppExchange and Custom LWC
Other Admins turned to the AppExchange for “Flow Utility” packs or, if they had developer resources, they wrote custom Lightning Web Components (LWC). An LWC could use the ShowToastEvent in JavaScript, but it required writing code, which goes against the “Clicks, Not Code” mantra that makes Flow so powerful.
That era is officially over. With the Summer ’26 release, the power is finally native.
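Out of nostalgia, here is roughly what that custom-code route involved. This is a minimal sketch, not anyone’s production component: the class and property names are hypothetical, and the meta XML that exposes the properties to Flow is omitted.

```javascript
// Hypothetical LWC that fires the standard ShowToastEvent when it loads.
import { LightningElement, api } from 'lwc';
import { ShowToastEvent } from 'lightning/platformShowToastEvent';

export default class FlowToast extends LightningElement {
    @api title;   // set from Flow as an input property
    @api message; // set from Flow as an input property

    connectedCallback() {
        this.dispatchEvent(new ShowToastEvent({
            title: this.title,
            message: this.message,
            variant: 'warning', // success | warning | error | info
            mode: 'sticky'      // stays on screen until the user dismisses it
        }));
    }
}
```

Every org that wanted a Flow toast had to maintain some variation of this, or install someone else’s.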
Exploring the New Native “Show Toast” Action
The new functionality allows us to call a standard action directly from the Flow Builder. It is robust, flexible, and incredibly easy to configure. Here is what you can now do natively:
Style Selection
You can choose the “Flavor” of your notification. This dictates the icon and the color of the toast:
Success (Green): For when things go right.
Warning (Yellow): To alert users of a potential issue that doesn’t stop progress.
Information (Blue): General updates or helpful hints.
Error (Red): When a process fails or a validation is triggered.
Dismissal Control
You get to decide the “persistence” of the message.
Automatic: The toast appears and then fades away after a few seconds. This is great for simple success confirmations.
Manual: The toast stays on the screen until the user clicks the “X” to close it. This is vital for errors or warnings where you want to ensure the user has actually read the information.
Rich Messaging and URLs
This is where it gets really exciting. You aren’t limited to plain text.
Dynamic Resources: You can include Flow variables, formulas, or record fields in the title and description.
The “Curly Bracket” Trick: By using curly brackets { } in your message description, you can embed a URL. This could be a link to a public webpage, an internal Terms & Conditions document, or even a link to a specific Salesforce record.
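Purely as an illustration of the idea (the exact bracket syntax here is my assumption rather than confirmed documentation, and the URL is invented), a toast description might look something like:

```
Customer no longer qualifies for this service. Review the coverage policy: {https://example.com/service-area-policy}
```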
Use Cases for Native Toasts
How should you use this in your day-to-day Admin life? Here are a few examples:
Eligibility Alerts: As shown in the video below, if a customer no longer qualifies for a service (e.g., they moved out of the service area), a toast can immediately inform the user the moment they open the record.
Data Validation Feedback: Instead of a clunky fault screen, show a red Error toast if a user enters data that doesn’t meet business criteria.
Onboarding Guidance: When a new Lead is created, show an “Information” toast with a link to the “Sales Playbook” for that specific industry.
Process Confirmation: After a complex Screen Flow that updates multiple records, show a “Success” toast that includes a link to the primary updated record. Please note that Salesforce snuck in another action at the last minute that opens the newly created record in another tab for the user. Stay tuned for more updates on that action.
The Pro Trick: Conditional Visibility via Lightning Record Pages
In the video, I demonstrated a clever way to use this. Normally, you might think you need to build a complex Flow that runs, checks criteria using a Decision element, and then decides whether to show the toast. There is a simpler method.
Instead of putting the logic inside the Flow, keep the Flow extremely simple (just the “Show Toast” action) and put the logic on the Lightning Record Page.
How to do it:
Create a simple Screen Flow: The flow only contains one element: the “Show Toast” action.
Add to Record Page: Drag the “Flow” component onto your Contact or Account page layout.
Set Component Visibility: In the Lightning App Builder, click on the Flow component. In the right-hand sidebar, go to Set Component Visibility.
Define Your Criteria: For example, set the visibility to Record > Last Name > Equals > Brock.
The Result: The Flow only “exists” and runs when that specific condition is met. When I go to Lex Luthor’s record, nothing happens. But when I navigate to Eddie Brock’s record, the Flow triggers, and the toast message pops up instantly: “Customer no longer qualifies for our services.”
This keeps your Flow canvas clean and offloads the “heavy lifting” to the Lightning UI engine.
Stop Duct Taping Your Flow Notifications
The “Sneaky” Summer ’26 release features prove that Salesforce is listening to the community. By making the Show Toast action native, they have removed the need for third-party dependencies, reduced technical debt, and given Admins a powerful new tool to communicate with users.
The ability to include clickable URLs and dynamic variables means our notifications can now be functional bridges to other parts of the business.
Enjoy this new functionality, folks! It’s a game-changer for Flow UX.
Watch the video here:
Does this new native action mean you’ll be retiring your UnofficialSF packages? Let us know in the comments below!
Explore related content:
11 Flow Updates in Summer 26 Release
Get Your Org Ready: Summer ’26 Admin Highlights
Master Custom Batch Sizes for Schedule-Triggered Flows
#HowTo #NewReleaseUpdate #Salesforce #SalesforceAdmins #SalesforceDevelopers #Summer26 #Tutorial -
Master Custom Batch Sizes for Schedule-Triggered Flows
The wait is finally over! Summer ’26 has officially arrived, and while some might call this release “light,” those of us deep in the automation trenches have found some gems. If you’ve spent any time on Salesforce Break, you know I’m passionate about Flow performance and scalability. That’s why my #1 item for this release is the arrival of custom batch sizes for scheduled flows.
This is functionality I’ve been requesting for years, and it has finally rolled out to our Flow Builder toolset. Let’s get into why this matters, the technical hurdles it solves, and how you can use it to build more resilient automations.
What is a Schedule-Triggered Flow?
Before we get into the new settings, let’s define the foundation. A Schedule-Triggered Flow is a type of background automation that launches at a specific time and frequency (once, daily, or weekly).
Unlike Record-Triggered flows that fire the moment a record is edited, these flows are often used for “maintenance” tasks, such as:
- Sending follow-up emails for stale opportunities.
- Updating status fields on records that have reached an expiration date.
- Nightly data cleanups or syncing with external systems.
You define a start date, time, and an optional object with filter criteria. Salesforce then finds every record in your org that meets those criteria and runs a “flow interview” for each one.
Understanding Bulkification and Batching
Efficiency is at the heart of Salesforce’s architecture. To handle thousands of records without crashing the servers, Salesforce uses bulkification and batching.
By default, when a scheduled flow runs, Salesforce groups records into batches of 200. For example, if you have 300 accounts that need updating, Salesforce won’t run 300 separate transactions. Instead, it creates two transactions:
- Transaction 1: Processes 200 records.
- Transaction 2: Processes the remaining 100 records.
While this is great for overall system efficiency, it can lead to significant problems when your automation logic is complex or touches sensitive data.
The Danger Zone: Governor Limits and Errors
To ensure no single process hogs all the resources in a multi-tenant environment, Salesforce enforces Governor Limits, strict “usage caps” on things like the number of SOQL queries, DML statements (updates/inserts), and CPU time allowed in a single transaction.
When you process 200 records at once in a single transaction, the “math” of these limits adds up quickly. If your flow performs a few queries per record, multiplying those by 200 can easily blow past the 100-query limit, resulting in a dreaded `System.LimitException`.
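To make that math concrete, here is the anti-pattern as a hedged Apex sketch; in Flow, the equivalent shape is a Get Records element inside a Loop, and the variable names here are invented:

```apex
// With the default batch size, this loop can run for 200 records in a single
// transaction, issuing up to 200 SOQL queries (double the 100-query cap)
// and throwing System.LimitException partway through.
for (Account acct : accountsInThisBatch) {
    List<Contact> contacts = [
        SELECT Id FROM Contact WHERE AccountId = :acct.Id
    ];
    // ... per-record processing ...
}
```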
Another of the most common, and frustrating, issues we face is record locking. When Salesforce updates a record, it “locks” that record to prevent other processes from changing it at the same time. It also locks the parent (master) record of the record being updated.
Let’s say you have a custom course record in Salesforce, and you have a cohort record under it. The relationship is master-detail. When Salesforce updates a cohort record, it will attempt to lock both records first. If it can’t lock these records, the system will throw an error.
The Error Scenario:
If multiple batches of 200 contain child records that all belong to the same parent, Transaction A might try to lock the parent to update cohort 1. Simultaneously, a parallel transaction tries to lock that same parent to update cohort 2. The second attempt fails because it cannot acquire the lock, resulting in an UNABLE_TO_LOCK_ROW error.
The Solution: Custom Batch Sizes
In Summer ’26, we finally have the control to mitigate these issues. Under the “Select Object” settings of a scheduled flow, you can now enter a custom number of records to process at the same time.
The Default: 200 records.
The Power Move: You can decrease this number, even down to 1.
Why set a batch size of 1?
If you are experiencing frequent locking errors or hitting CPU limits, running the automation “one-by-one” (each transaction processing a single record) ensures that the parent record is only locked for that specific record’s update and then immediately released. This will decrease the possibility of locking errors.
Another classic mitigation for locking issues is sorting child records by parent before updating them. Since we cannot sort records by Parent ID in a schedule-triggered flow, decreasing the batch size is often your only tool to prevent parent-record locking conflicts.
Since scheduled flows often run at night or on weekends when user activity is low, the increased total processing time is usually a fair trade-off for reliability.
Best Practices and Recommendations
To get the most out of this new feature, keep these recommendations in mind:
1. Identify High-Risk Objects: Pay extra attention to flows running on the Task, Event, Contact, and Opportunity objects, or on any custom object that is the child in a Master-Detail relationship, as these are high-risk for locking issues. Remember that these standard-object relationships are not technically classified as master-detail, but they can act like one in some respects. They are special relationships with their own rules: for example, Account is not a required lookup on Opportunity, yet you can still add a roll-up summary field on Account that summarizes Opportunities.
2. Monitor Your Error Rates: Keep an eye on the new Element Error Rate column in your Flow list view. If you see a high percentage of errors on a scheduled flow, it’s a prime candidate for a smaller batch size. Disclaimer: this is brand-new functionality, and I have not played with it yet.
3. Test the “Middle Ground”: You don’t always have to drop to a batch size of 1. If 200 is too high, try 50 or 100 to balance speed and stability.
This update is a huge win for Salesforce Admins and Architects alike. It provides the granular control we need to ensure our “heavy lifting” automations run smoothly without constant manual intervention or error emails.
Take Control of Your Automations
The arrival of custom batch sizes in Summer ’26 is a testament to Salesforce listening to the community’s “real world” pain points. While it might seem like a small setting in the Flow Builder, it is a massive architectural lever for those of us responsible for high-volume data integrity.
No longer are we forced to “hack” our way around governor limits or cross our fingers that record locking doesn’t tank our nightly cleanups. We finally have the precision to tune our automations like a high-performance engine. So, take a look at your most troublesome scheduled flows, experiment with those batch sizes, and turn those “failed flow” emails into a thing of the past. Happy flowing!
A quick heads-up: this feature arrives with the Summer ’26 release.
Explore related content:
What’s New in the Salesforce Mobile App: Summer ’26 Release
11 Flow Updates in Summer 26 Release
Get Your Org Ready: Summer ’26 Admin Highlights
#HowTo #SalesforceAdmins #SalesforceDevelopers #SalesforceRelease #SalesforceUpdate #Summer26 #Tutorial -
Clean Data, Smart Flows: Automating Data Cleanup in Salesforce Nonprofit Cloud
I had the privilege of presenting at Nonprofit Dreamin, one of the most community-driven Salesforce events on the calendar. With a sold-out crowd of 300 participants, the energy in the room was exactly what you’d hope for when talking about technology that actually matters for mission-driven organizations. It was a great session, and the conversations that followed reminded me why this work matters. For everyone who attended, asked questions, or tracked me down afterward, thank you. Here’s a deeper look at everything we covered.
The Case for Clean Data in Nonprofit Cloud
Every Nonprofit wants to make decisions grounded in accurate, real-time data. But as any Salesforce professional knows, “accurate data” doesn’t just happen on its own. It requires deliberate architecture, thoughtful automation, and a clear understanding of which tools belong where.
In Salesforce Nonprofit Cloud (NPC), that challenge is multiplied. Built on the Salesforce Industries architecture, NPC introduces a purpose-built data model with Person Accounts, Gift Commitments, Gift Transactions, and volunteer management objects that all need to stay tightly synchronized. The good news? Salesforce Flow, especially with the addition of the Transform element, has become a powerful enough tool to handle both the data hygiene work and the complex calculations your fundraising and volunteer teams depend on, without touching your DPE credit limits.
This post covers two interconnected use cases: automating data sanitization for volunteer management and building advanced donor fulfillment calculations with Flow, including the new Transform element. Together, they demonstrate what’s possible when clean data and smart automation work in concert.
Why Clean Data Is the Non-Negotiable Starting Point
Before we get into calculations and check-in flows, let’s establish something foundational: none of this works without clean data.
In the Salesforce world, “clean data” means records that are accurate, consistent, and free of duplicates. For admins, this has always been best practice. But with the rise of AI Agents, autonomous programs that can execute real transactions inside your org, data quality has become a hard requirement. AI is only as good as what it’s grounded in. Garbage in, garbage out, and now that garbage can trigger a bad transaction at scale.
In NPC specifically, clean data is the backbone of reliable volunteer coordination, accurate donor reporting, and eventually, trustworthy AI-assisted fundraising. One of the most common, and most overlooked, data quality issues is mobile phone formatting.
Part 1: Automating Data Sanitization with Record-Triggered Flow
Volunteers check in using their last name and mobile phone number. That sounds simple until you realize that the same phone number can be stored dozens of different ways: (512) 555-0100, 512-555-0100, 5125550100, 512 555 0100. When a Get Records element tries to match on an exact value, any inconsistency breaks the lookup.
The fix is a record-triggered flow that strips all non-digit characters from the mobile phone field the moment a Person Account is created or updated.
Person Account
A person account is a Salesforce record type that combines Account and Contact into a single entity, allowing you to manage individuals like donors or volunteers without needing a separate business account record. NPC relies on Person Accounts as its primary constituent record.
The “Clean Mobile Phone” Flow
This flow runs when a Person Account is created, or when the mobile phone field is changed and is not blank. The sanitization logic uses a chained SUBSTITUTE formula that removes spaces, dashes, and parentheses in sequence, leaving only pure digits. The result: a consistent, matchable value in every record.
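As a sketch, a Flow formula along these lines produces that digits-only value (assuming the standard PersonMobilePhone field on Person Accounts; your field API name may differ):

```
SUBSTITUTE(
  SUBSTITUTE(
    SUBSTITUTE(
      SUBSTITUTE({!$Record.PersonMobilePhone}, " ", ""),
      "-", ""),
    "(", ""),
  ")", "")
```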
If you need flexibility, there are alternatives. Validation rules can reject improperly formatted entries at the point of save, preventing the problem before it’s created. Scheduled flows can run as a daily batch job to clean up any legacy data that snuck through before your automation was in place. For most organizations, a combination of all three provides the most airtight coverage.
Part 2: Reactive Screen Flows for Volunteer Check-In
Once your data is clean, you can build experiences that actually work. In NPC, volunteer management tracks jobs, positions, and shifts, and getting volunteers into the right slot quickly is a real operational challenge.
Rather than relying on a standard digital experience site, we built a custom screen flow that leverages reactive functionality: the ability for a screen to update dynamically based on user input without navigating to a new page.
Reactive Screen Flow
A reactive screen flow allows components on the same screen to communicate with each other in real time. A data table can update the moment a user types a search term or makes a selection, with no page reload.
How the Check-In Flow Works
The volunteer enters their last name and mobile phone number. Because we’ve already sanitized the phone field, the Get Records query finds an exact match reliably. If no match exists, a warning screen appears immediately.
From there, a data table displays available jobs, such as “Food Distribution.” Once the volunteer selects a job, a Screen Action triggers an auto-launched subflow in the background.
That subflow queries available shifts for that specific day and passes them back to a second data table on the same screen. The volunteer selects their shift and clicks Next, and the flow creates a Job Position Assignment record with a status of “Complete.” Clean, fast, no paper sign-in sheet required.
Part 3: Complex Donor Fulfillment Calculations with Flow and the Transform Element
With volunteers managed and data sanitized, let’s look at the other side of the NPC operation: donor management. Here, the goal is to give fundraising teams a real-time snapshot of donor health directly on the Account page.
Specifically, we want to calculate three things for each donor:
Current Year Gift Commitment: The donor’s pledge for the year. In NPC’s data model, this tracks promises rather than payments.
Current Year Paid Amount: The total actually received via Gift Transactions. A single commitment can have multiple transactions associated with it as the donor makes payments over time.
Fulfillment Rate and Membership Level: The percentage of the commitment that’s been paid, and a tiered classification (Gold, Silver, Bronze) based on actual payments.
Why Flow Instead of DPE?
NPC includes pre-built Data Processing Engine (DPE) calculations for Donor Gift Summary. Think of DPE as a mini-ETL tool built directly into Salesforce, designed to handle millions of records with joins, filters, and aggregations that would push a standard Flow to its governor limits. It’s powerful, but it comes with two significant constraints: a steep learning curve that many admins haven’t climbed yet, and a license-based DPE credit limit that can be exhausted quickly if calculations run in real time or too frequently.
Flow provides a low-code alternative that doesn’t count against those credits, making it the right choice for on-demand or daily updates across mid-sized datasets. The golden rule: always use the tool you already know if it fits the case at hand.
Step 1: The Auto-Launched Subflow
We start by building an Auto-Launched Flow to house all the calculation logic. Keeping the math in a subflow means the same logic can be triggered by a user button, a nightly schedule, or an automated event, without ever rebuilding it.
The flow takes three input variables: the Account ID we’re processing, a StartDate, and an EndDate. Formulas handle null inputs gracefully, defaulting to January 1st of the current year and today’s date respectively, so the flow still works if those values aren’t provided.
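As a sketch, the two defaulting formulas can be as simple as this (the variable names are placeholders for the flow’s input variables):

```
/* Effective start: the provided StartDate, else January 1 of the current year */
IF(ISBLANK({!StartDate}), DATE(YEAR(TODAY()), 1, 1), {!StartDate})

/* Effective end: the provided EndDate, else today */
IF(ISBLANK({!EndDate}), TODAY(), {!EndDate})
```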
Two Get Records elements pull the data. The first retrieves Gift Commitments filtered by DonorId and EffectiveStartDate within the selected range. The second retrieves Gift Transactions for the same donor where Status is Paid and TransactionDate falls within range.
The Transform Element
This is where Flow Builder has meaningfully evolved. The Transform element allows you to map and aggregate data collections without the traditional Loop + Assignment pattern. Instead of iterating through every transaction record manually, we point the Transform element at the Gift Transactions collection, set the target to a currency variable, select Sum, and choose the Amount field. The element does the rest. Repeat the process for Gift Commitments.
This approach is bulkified by design and significantly easier to debug than a loop-based alternative.
Categorization via Formulas
A nested IF formula handles Membership Level assignment: Bronze for paid amounts under $50,000, Silver up to $100,000, and Gold above that. A separate formula calculates the Fulfillment Rate as a percentage. Both formulas include null checks to handle donors who have commitments but no transactions yet.
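Here is a sketch of those two formulas, with {!varPaidAmount} and {!varCommitment} standing in for the Transform element outputs; in Flow, each would live in its own formula resource:

```
/* Membership Level: blank paid amounts are treated as zero */
IF(BLANKVALUE({!varPaidAmount}, 0) < 50000, "Bronze",
  IF(BLANKVALUE({!varPaidAmount}, 0) <= 100000, "Silver", "Gold"))

/* Fulfillment Rate (%): guarded against an empty or zero commitment */
IF(ISBLANK({!varCommitment}) || {!varCommitment} = 0, 0,
  ({!varPaidAmount} / {!varCommitment}) * 100)
```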
Step 2: The Screen Flow and Quick Action
The subflow handles all three rollups in a single execution: total paid amount, total commitment, and the derived fulfillment rate and membership tier. The Screen Flow itself grabs the Account ID from the page, passes it into the subflow, receives the calculated values back, and writes them to custom fields on the Account using an Update Records element. A Flow Message component displays a toast-style confirmation to the user when the calculation is complete.
Step 3: Nightly Automation via Scheduled Flow
A button is great for one-off checks. But data goes stale. The subflow architecture makes automation straightforward: a Schedule-Triggered Flow runs nightly at 8:00 PM, loops through all active donor Accounts, and calls the same subflow we built for the button. Every morning, the fundraising team logs in to dashboards and Account views that are already current.
Conclusion
Clean data and efficient automation are the engine of nonprofit effectiveness. Accurate volunteer check-ins mean accurate service records. Accurate service records mean accurate outcome data. And accurate outcome data is what allows organizations to apply for larger grants, deepen constituent relationships, and scale their mission year over year.
The same principle applies on the donor side. When gift fulfillment data is reliable and up to date, fundraising teams can have better conversations, identify at-risk donors earlier, and make the case for continued investment with confidence.
With NPC’s purpose-built data model and Flow’s growing capabilities, especially the Transform element, there has never been a better time to consolidate your automation strategy around tools your team already understands. The result is an org that’s not just manageable, but genuinely ready for whatever comes next, including AI.
Want to walk through these builds step by step? The Clean Data Playbook is available FREE on Flow Canvas Academy.
Explore related content:
Mastering Data Rollups in Nonprofit Cloud
What Nonprofits Taught Me About Building Salesforce for Humans, Not Just Systems
Salesforce NPSP vs Nonprofit Cloud Consultant Certifications
How the Salesforce Architecture Program Is Being Rebuilt with the Community
#Nonprofit #NonprofitCloud #NPC #NPSP #SalesforceAdmins #SalesforceDevelopers #SalesforceHowTo #SalesforceTutorials
Beyond the URL Button: The Salesforce Request Approval Lightning Component
Are you finding that as your company grows, the complexity of your approval workflows grows along with it? What once might have been a simple sign-off from a single manager can quickly transform into a multi-step process involving input from multiple departments, stakeholders, and even external partners. This complexity often leads to delays, inefficiencies, and frustration as approvals get stuck in bottlenecks or lost in email chains.
Salesforce’s free Flow Approval Processes, built on Flow Orchestrator, automate even the most intricate workflows. A previous post explored launching these Autolaunched Approval Orchestrations via a custom URL button. Today, we are taking that functionality a massive step forward. We will explore the new Request Approval Lightning component and its tie-in to autolaunched flow functionality. This component expands automation by allowing dynamic user inputs directly from the record page.
The Foundation: Autolaunched Flow Approvals
Before diving into the new component, let’s quickly recap how autolaunched flow approvals function. When you build an autolaunched approval process, you are essentially building an autolaunched automation that can be executed on demand, very similar to an autolaunched Orchestration or flow. However, the traditional method of launching Salesforce automations (the quick action button) has strict limitations. Quick actions can only be used to add an active screen flow to the page layout; orchestrations are simply not supported. Furthermore, quick actions do not allow you to pass additional input parameter values into your automation beyond the standard recordId.
Because of these limitations, the standard workaround has been to build an autolaunched Approval Orchestration and assign it to a custom URL button on the page layout. For example, a common use case is to escalate a case to a queue of level 2 experts when a second opinion is required. By appending variables to the custom URL, such as
?recordId={!Case.Id}&submitter={!$User.Id}&retURL={!Case.Id}, administrators could successfully pass the necessary parameters to kick off the orchestration. While highly effective, this URL button method is a bit rigid. It automatically submits the record based on predefined flow logic without giving the submitter much runtime flexibility.
Enter the Request Approval Lightning Component
This is where the new Request Approval component completely changes the game. Instead of relying on a custom URL button to trigger your background orchestration, you can now add a native, user-friendly interface directly to your Lightning record pages. This component bridges the gap between the UI of a screen flow and the processing power of an autolaunched orchestration.
To utilize this feature, you must first design, test, and activate an autolaunched flow approval process. Once your flow is ready, you can simply open the record page where you want to place the component. Click the gear icon on the navigation bar, and select Edit Page to open the Lightning App Builder. From the Components tab, search for “Request” and drag the Request Approval component directly onto the layout.
Straightforward Setup
You can customize the title of the component to display user-friendly text at run time. Then search for and select your active, autolaunched flow approval process to run whenever the user clicks the “Start” button. You can also assign a specific label to identify the associated flow approval process to your users.
Expanding the Use Case: What Can Be Added?
So, how exactly does this new component expand the capabilities of your autolaunched flow use cases? The true power of the Request Approval component lies in its ability to gather critical, dynamic inputs directly from the submitter at the exact moment of submission. When using the old custom URL button method, the approver destination (such as the Level 2 expert queue) was hardcoded into the flow steps. With the new component, you can dramatically increase the flexibility of your processes through two main enhancements:
Dynamic Approver Selection
The component allows you to require submitters to actively select an approver before the flow runs. To enable this, you must configure your underlying autolaunched flow approval process to assign one or more approval steps to a specific resource named firstApprover. In the Lightning App Builder, you then select the Require submitter to select an approver setting.
It is critical to ensure your flow is properly configured to accept this input. Consider whether the flow approval process you selected assigns one or more steps to the firstApprover resource. If it does, you must select this requirement on the component to prevent the flow approval process from failing when a submitter attempts to use it. This means a single autolaunched flow can now be routed to entirely different managers, departments, or external stakeholders on the fly.
Submission Comments
Another massive expansion of your use case is the ability to capture submission comments. Often, an approver needs context as to why a record is being submitted. The Request Approval component shows an Approval Request Comments field by default. This exposes optional submitter comments directly to the approvers via the submissionComments resource.
If your business process dictates that comments are unnecessary, or if you want to streamline the UI to prevent the submitter from adding comments about a submission, you can simply select Hide submitter comments within the component configuration. These comments are stored cleanly in the new data model under the Approval Submissions object, specifically within the Comments field, making them accessible via queries if you wish to display them in custom approver screen flows.
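Since the comments live in a standard object, pulling them into a custom approver experience is one query away. Here is a minimal sketch; the object and field names follow the post's description of the data model (Approval Submissions, Comments), so verify the exact API names in your org:

// Sketch: retrieve recent submitter comments from the Flow Approvals
// data model. Verify the object and field API names in your org.
List<ApprovalSubmission> recentSubmissions = [
    SELECT Id, Comments
    FROM ApprovalSubmission
    ORDER BY CreatedDate DESC
    LIMIT 10
];
for (ApprovalSubmission submission : recentSubmissions) {
    System.debug('Submitter comments: ' + submission.Comments);
}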
The Impact on Your Org’s Architecture
By tying the Request Approval component to your autolaunched orchestrations, you unlock a highly scalable and flexible architecture. You no longer need to build dozens of slightly different flows for different queues or approvers. Instead, you can rely on a single autolaunched flow that dynamically adapts based on the firstApprover and submissionComments variables passed from the component.
This ties seamlessly into the broader Flow Approval Process ecosystem. Once submitted, the process still leverages the brand-new UI and audit trail, including the Approvals Lightning app, Approval Submissions, and Approval Work Items. The orchestration sequences stages and steps behind the scenes. It potentially triggers automated background steps like updating records or sending notifications without requiring further user interaction. Approvers still receive their email notifications with links to the Work Guide, and they can still reply directly to the emails with keywords like “Approve” or “Reject” to complete their action. Furthermore, administrators must still remember to add the Flow Orchestration Work Guide component to the record page so that approvers have a centralized interface to actually interact with the assigned approval step.
It is important to note that this component allows the user to recall the approval process once it is started.
Conclusion
The Request Approval component takes the Autolaunched Flow Approval Process and makes it more dynamic and user-centric. By moving away from static URL buttons and embracing this native Lightning component, administrators can empower their users to select appropriate approvers and provide vital context through comments. All while leveraging the free, robust automation engine of Salesforce Flow Orchestrator.
Whether you are routing cases to level 2 experts or managing multi-million dollar contracts, this functionality ensures your approval workflows are as efficient and user-friendly as possible. Save and activate your record page layout, exit the Lightning App Builder, and watch your new approval processes in action.
Explore related content:
How to Build Custom Flow Approval Submission Related Lists
Start Autolaunched Flow Approvals From A Button
Supercharge Your Approvals with Salesforce Flow Approval Processes
#FlowApprovals #HowTo #LightningComponent #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorial #useCase -
Mastering Data Rollups in Nonprofit Cloud
Every forward-thinking organization wants to base their business management decisions on intelligence derived from accurate, real-time data. However, as any Salesforce professional knows, “accurate data” is often the result of sophisticated background calculations and complex statistics. Depending on your specific Salesforce environment, the tool you choose for these calculations can vary wildly. In a standard Sales Cloud environment, you might reach for a simple Summary Report or a Roll-up Summary field. But when you move into the world of Salesforce Industries (like the new Nonprofit Cloud – NPC, aka Agentforce Nonprofit), the decision-making process becomes significantly more complex. Suddenly, you have a massive arsenal of tools: the Data Processing Engine (DPE), the Business Rules Engine (BRE), OmniStudio, and of course, Salesforce Flow.
My golden rule is this: Always use the tool you already know if it fits the case at hand. While advanced tools like DPE are powerful, they often come with steep learning curves and license-based limitations. In this tutorial, I’m going to show you how to perform complex donor fulfillment calculations using Flow and the new Transform element, writing that data directly to the Account record in a way that is efficient, scalable, and easy to maintain.
Use Case: Calculating Donor Fulfillment Ratios
In this scenario, we want to provide our fundraising team with a clear snapshot of a donor’s health directly on their Account page. Specifically, we want to calculate:
Current Year Gift Commitment: In the Nonprofit Cloud data model, this represents a donor’s promise or pledge. For example, if a donor promises to pay $100,000 over the next year, that commitment is tracked here.
Current Year Paid Amount: Total amount actually received via transactions. Transactions represent the actual financial record of money received. A single Gift Commitment can have multiple paid Gift Transactions associated with it as the donor makes payments over time.
Fulfillment Rate: The percentage of the commitment that has been paid.
Membership Level: A tiered categorization (Gold, Silver, Bronze) based on their actual payments.
Why Flow Instead of DPE?
Nonprofit Cloud (NPC), built on the Salesforce Industries (formerly Vlocity) architecture, represents Salesforce’s modern reimagining for nonprofits. Unlike the legacy Nonprofit Success Pack (NPSP), NPC runs on the core Salesforce platform, leveraging Person Accounts and purpose-built objects designed to support high-scale fundraising and program management.
NPC includes pre-built Data Processing Engine (DPE) calculations for Donor Gift Summary. Think of DPE as a “mini-ETL” (Extract, Transform, Load) tool built directly into Salesforce. It is designed to handle massive datasets, millions of records, performing joins, filters, and aggregations that would typically cause a standard Flow to hit governor limits. While robust, it presents two significant limitations for many organizations:
Complexity: Customizing a DPE requires a deep understanding of data orchestration that many admins haven’t mastered yet.
Computational Limits: Your Salesforce license includes a specific number of “DPE Credits” or computational hours. If you run these calculations in real-time or too frequently, you can quickly exhaust your limits.
Flow provides a “Low-Code” alternative that is highly customizable and doesn’t count against your DPE hour limits, making it the perfect choice for on-demand or daily updates for mid-sized datasets.
Step 1: Building the Logic (The Auto-Launched Subflow)
We start by building an Auto-Launched Flow. By keeping the math in a subflow, we ensure that we can trigger the calculation from a button, a schedule, or even an automated trigger without ever having to rebuild the logic.
1. Defining Input Variables
We need three primary inputs to make this flow flexible:
recordId: The Account ID we are currently processing.
StartDate: The beginning of the date range (e.g., the start of the fiscal year).
EndDate: The end of the date range (e.g., today).
Pro Tip: Use formulas to handle “Null” inputs. If the user doesn’t provide a StartDate, my formula defaults to January 1st of the current year; a second formula defaults the EndDate to today:
IF(ISBLANK({!StartDateVar}), DATE(YEAR({!$Flow.CurrentDate}), 1, 1), {!StartDateVar})
IF(ISBLANK({!EndDateVar}), {!$Flow.CurrentDate}, {!EndDateVar})
2. Fetching the Data
We use two Get Records elements to gather our collections:
Get Gift Commitments: We filter by the DonorId (matching our recordId) and ensure the EffectiveStartDate falls within our selected range.
Get Gift Transactions: We filter for records for the same Donor where the Status is ‘Paid’ and the TransactionDate is within our range.
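If you ever want to cross-check those two Get Records elements from a developer console, the equivalent SOQL is below. This sketch uses the field names referenced in this post (DonorId, EffectiveStartDate, Status, TransactionDate); treat the exact API names, including the commitment Amount field, as assumptions to verify in your NPC org:

// Illustrative SOQL equivalents of the two Get Records elements.
// The date defaults mirror the flow's formulas: Jan 1 of this year to today.
Id donorId = [SELECT Id FROM Account LIMIT 1].Id; // sample donor for the sketch
Date startDate = Date.newInstance(Date.today().year(), 1, 1);
Date endDate = Date.today();

List<GiftCommitment> commitments = [
    SELECT Id, Amount
    FROM GiftCommitment
    WHERE DonorId = :donorId
      AND EffectiveStartDate >= :startDate
      AND EffectiveStartDate <= :endDate
];
List<GiftTransaction> paidTransactions = [
    SELECT Id, Amount
    FROM GiftTransaction
    WHERE DonorId = :donorId
      AND Status = 'Paid'
      AND TransactionDate >= :startDate
      AND TransactionDate <= :endDate
];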
3. Aggregating with the Transform Element
The Transform element is a powerful addition to Flow Builder that allows you to map and aggregate data collections. It eliminates the need for the traditional “Loop + Assignment” pattern when calculating sums or counts, making your flows significantly more readable and efficient. This is where the magic happens. Instead of a loop, we add a Transform element.
Source: The collection of Gift Transactions.
Target: A currency (single) variable.
Mapping: Connect the source collection to the target variable, select Sum, and choose the Amount field.
This method is “Bulkified” by nature and much easier to debug than traditional loops. We repeat this process for the Gift Commitments.
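To verify what a Transform element’s Sum mapping produced while debugging, an aggregate query returns the same figure. Same hedges on API names as above; note the null behavior, which matters for the formulas in the next step:

// The SUM() aggregate mirrors the Transform element's Sum mapping.
// Reuses the donorId variable from the previous sketch.
// 'total' is null (not zero) when no rows match: the same edge case
// the categorization formulas must guard against.
AggregateResult result = [
    SELECT SUM(Amount) total
    FROM GiftTransaction
    WHERE DonorId = :donorId AND Status = 'Paid'
];
Decimal sumPaid = (Decimal) result.get('total');
System.debug('Sum of paid transactions: ' + sumPaid);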
4. Categorization via Formulas
To determine the Membership Level, we use a Nested IF formula:
IF(ISNULL({!Sum_of_Paid_Transform}), "", IF({!Sum_of_Paid_Transform} <= 50000, "Bronze", IF({!Sum_of_Paid_Transform} <= 100000, "Silver", "Gold")))
We also calculate the Fulfillment Rate as a number (not a percentage field), guarding against a null or zero commitment total:
IF(OR(ISNULL({!Sum_of_Commitments_Transform}), {!Sum_of_Commitments_Transform} = 0), 0, 100 * ({!Sum_of_Paid_Transform} / {!Sum_of_Commitments_Transform}))
Step 2: Creating the User Interface (The Screen Flow)
Now that the logic is built, we need to expose it to the users. We create a simple Screen Flow and place it on the Account record page using a Quick Action.
Pass the ID: The Screen Flow automatically grabs the recordId from the page.
Call the Subflow: It passes that ID into our “Logic Subflow.”
Update the Account: The subflow returns the calculated values (Total Paid, Rate, Level). The Screen Flow then uses an Update Records element to save these values directly onto the custom fields on the Account.
The Success Message: We use the new Flow Message component to show a nice, “Toast-like” confirmation to the user that the calculation is complete.
Step 3: Automating with Scheduled Flows
While the “Calculate” button is great for one-off checks, you don’t want your data to go stale. This is the beauty of the subflow architecture.
We created a Schedule-Triggered Flow that runs every night at 8:00 PM.
Object: Account
Filter: All Active Donors.
Action: It simply loops through the accounts and calls the same subflow we used for the button.
This ensures that every morning when the fundraising team logs in, their dashboards and Account views are perfectly up to date without them having to click a single button.
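For readers who like to see the moving parts in code, the declarative schedule above is conceptually equivalent to a small Schedulable class that launches the same subflow. Everything here is a hypothetical sketch: the flow API name, the class name, and the active-donor field are placeholders, and at large data volumes you would use a Batch Apex pattern rather than a synchronous loop:

// Hypothetical sketch of the nightly job. 'Donor_Rollup_Subflow' stands in
// for your autolaunched subflow's API name; Active_Donor__c stands in for
// however your org flags active donors.
global class NightlyDonorRollup implements Schedulable {
    global void execute(SchedulableContext ctx) {
        for (Account donor : [SELECT Id FROM Account WHERE Active_Donor__c = true]) {
            Map<String, Object> inputs = new Map<String, Object>{ 'recordId' => donor.Id };
            Flow.Interview.Donor_Rollup_Subflow run =
                new Flow.Interview.Donor_Rollup_Subflow(inputs);
            run.start();
        }
    }
}
// Schedule it once from Anonymous Apex for 8:00 PM nightly:
// System.schedule('Nightly Donor Rollup', '0 0 20 * * ?', new NightlyDonorRollup());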
Advanced Considerations: Performance & Scalability
When deciding between real-time (Record-Triggered) and batch (Scheduled) processing, consider your data volume:
Real-Time: If you trigger this every time a GiftTransaction is created, it provides the most “Live” data. However, this could have performance implications.
Batch: Running this nightly is the safest and most efficient way to handle large volumes of data without impacting the user experience during business hours.
Troubleshooting Common Issues
Null Values: If a donor has zero transactions, the Transform element might return a null. Always ensure your formulas handle null values.
Currency Conversion: If your org uses Multi-Currency, ensure your Get Records and Transform elements are looking at the “Converted” currency fields to maintain accuracy across different regions.
Conclusion: The Power of Low-Code
The introduction of the Transform element in Flow Builder has significantly narrowed the gap between standard Flows and high-performance tools like the Data Processing Engine. For most Nonprofit Cloud and Industry users, Flow provides the perfect balance of ease of use and computational power.
By centralizing your logic in a subflow, you create a “single source of truth” for your calculations, whether they are triggered by a user, a schedule, or a system event. In Nonprofit Cloud, this approach is especially valuable, where fundraising, commitments, and transaction data must stay tightly aligned across multiple purpose-built objects. This not only makes your org easier to manage but also ensures that your business decisions are always backed by the most reliable data available.
Explore related content:
The Ultimate Guide to the Salesforce Screen Flow File Preview Component
What Nonprofits Taught Me About Building Salesforce for Humans, Not Just Systems
New Trailhead Badge: Accessible Salesforce Customizations
#Automation #Nonprofit #NonprofitCloud #Salesforce #SalesforceAdmins #SalesforceDevelopers -
The Ultimate Guide to the Salesforce Screen Flow File Preview Component
The Spring ’26 Release introduced the File Preview Screen Flow Component. This native tool allows Admins to embed document viewing directly into the flow of work. In this post, we’ll explore the technical requirements, real-world observations, and the strategic implications of this functionality.
Beyond the “Files” Tab: Why This Matters
Historically, viewing a file in Salesforce required navigating to the “Files” related list, clicking the file, and waiting for the standard previewer to launch in a separate overlay. If you were in the middle of a Screen Flow, perhaps a guided survey or a lead conversion process, leaving that flow to check a document meant breaking your concentration.
Salesforce introduced a file thumbnail preview that visually shows what is in the file without having to click into it. Note that the thumbnails display beautifully in the Single Related List component on Lightning record pages; in the multiple related list view, I did not see the thumbnails.
In addition to the Lightning record page and related list functionality, Salesforce introduced a file preview component that lets users preview a file they have just uploaded or one they find attached to a record in Salesforce.
Technical Blueprint: Configuring the Component
Setting up this component requires a shift in how Admins think about file data. The Files data model is unique: to make the component work, you need to navigate the relationship between ContentDocumentLink, ContentDocument, and ContentVersion, as sketched in the query below.
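If you need to find the file attached to a record before handing its ID to the component, the lookup runs through ContentDocumentLink. The SOQL below is a minimal sketch of that hop; the WHERE value stands in for the parent record’s ID. In Flow, the equivalent is a Get Records element on Content Document Link filtered by LinkedEntityId, with the ContentDocumentId field of the result passed to the preview.
SELECT ContentDocumentId, ContentDocument.Title, ContentDocument.FileExtension
FROM ContentDocumentLink
WHERE LinkedEntityId = '001XXXXXXXXXXXXXXX'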
Core Attribute Requirements
When you drag the File Preview component onto a screen in Flow Builder, you must configure the following:
Content Document ID (Required): This is the most critical field. The component needs the unique 18-character ID of the ContentDocument record. It will not accept the ContentVersion ID (which represents a specific iteration) or the Attachment ID (the legacy file format). Please note: the preview component always shows the latest version of the file.
Label: This attribute allows you to provide instructions above the preview window. This is highly effective for compliance-heavy roles, where the label can say: “Verify that the signature on this ID matches the physical application.”
API Name: The unique identifier for the element within your flow logic, following standard alphanumeric naming conventions.
Using Conditional Visibility
Because the preview window takes up significant screen real estate, it should not be set to “Always Display” if it is driven reactively by a data table. Salesforce allows you to specify logic that determines when the component appears. You can set it to display only when a specific file type is selected in the collection, and hide the component when the ContentDocumentID variable is null to avoid showing an empty box, as in the visibility formula sketched below.
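A minimal sketch of that null check, assuming a text variable named ContentDocumentIdVar holds the selected file’s ID (the variable name is illustrative). Build it as a Boolean formula resource, then reference the formula in the component’s visibility condition:
/* Show the preview only when a file has actually been selected */
NOT(ISBLANK({!ContentDocumentIdVar}))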
Lessons from the Field: Our “Around the Block” Test
In our recent hands-on testing, we put the component through its paces to see where it shines and where its boundaries lie.
The File Extension
The previewer is highly dependent on the browser’s ability to interpret file headers and extensions. During our test, we uploaded a standard log file. While the content was technically plain text, the file had a .log extension. The component struggled to render this because it didn’t recognize it as a standard format. However, once we switched to a .txt extension, the preview was crisp and readable. The admin takeaway here is that if your business process involves non-standard file types, you may need to implement a naming convention to ensure files are saved in formats the previewer can handle: primarily .pdf, .jpg, .png, and .txt. The formula sketch below shows one way to encode that check.
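Assuming a Get Records element named Get_File that fetched the ContentDocument record (the element name is illustrative), a Boolean formula like this can gate the preview alongside the visibility pattern shown earlier:
/* True only for extensions the previewer rendered reliably in our testing */
OR(
  LOWER({!Get_File.FileExtension}) = "pdf",
  LOWER({!Get_File.FileExtension}) = "jpg",
  LOWER({!Get_File.FileExtension}) = "png",
  LOWER({!Get_File.FileExtension}) = "txt"
)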
Real-World Use Case
How can you use this component in a live production environment? Here is a scenario where the File Preview component adds immediate value:
Imagine a customer service representative handling a shipping insurance claim. The customer has uploaded a photo of a broken item. Instead of the agent navigating to the “Files” tab, the Screen Flow surfaces the photo on the “Review Claim” screen. The agent sees the damage, verifies the details, and clicks “Approve” all on one page.
Conclusion: A New Era of Flow
The File Preview component is another step in Salesforce becoming a holistic workspace. By integrating document viewing into the automation engine of Flow, Salesforce has empowered Admins to build tools that feel like custom-coded applications without writing a single line of Apex. As we saw in our testing, the component is robust and user-friendly. Most importantly, it keeps users focused. Whether you are streamlining an approval process or simplifying a complex data entry task, the ability to see what you are working on without leaving the screen is *chef’s kiss.*
Explore related content:
What’s New With Salesforce’s Agentblazer Status in 2026
Add Salesforce Files and Attachments to Multiple Related Lists On Content Document Trigger
Profiles and Permissions in Salesforce: The Simple Guide for Admins
#Automation #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceHowTo #SalesforceTutorials #Spring26 #Winter25 -
Salesforce Spring ’26 Brings Major Debug Improvements to Flow Builder
If you’ve been building flows for any length of time, you already know this: a lot of the real work and time goes into debugging. It’s re-running the same automation over and over. Swapping out record IDs. Resetting input values. Clicking Debug, making a small change, saving, and sometimes starting the whole setup again. That loop is where Flow builders spend a lot of their time, especially once flows get even moderately complex.
Salesforce’s Spring ’26 release finally takes aim at that reality. Instead of piling on new features, this update focuses on removing friction from the debugging experience itself. The result is a Flow Builder that feels faster, less disruptive, and much closer to a modern development environment.
Debug Sessions That Don’t Forget Everything
One of the most impactful improvements in Spring ’26 is also one of the simplest: Flow Builder now remembers your debug configuration while you’re actively editing a flow. When you debug a flow, make a change, and save, Salesforce preserves the triggering record you used, your debug options, and your input variable values. That means no more losing your setup every time you click Save, no more re-pasting record IDs, and no more rebuilding your test scenario from scratch.
Your debug session stays intact until you refresh your browser, close Flow Builder, or manually click Reset Debug Settings. This is a big quality-of-life upgrade, especially if you work with record-triggered flows that have edge cases, complex decision logic, multi-screen flows with test data, or anything that requires several small iterations to get right. The practical impact is simple: you can now fix, save, and re-run flows much faster, without constantly breaking your momentum.
Flow Tests Are No Longer “Latest Version Only”
Spring ’26 also changes how flow tests work behind the scenes.
Previously, flow tests were tied only to the latest version of a flow. As soon as you created a new version, older tests were essentially left behind. If a test no longer applied, you deleted it. If it still applied, you recreated it. Now, tests can be associated with specific flow versions.
Source: https://help.salesforce.com/s/articleView?id=release-notes.rn_automate_flow_debug_test_versions.htm&release=260&type=5
You can now reuse the same test across multiple flow versions or limit it to only the versions it truly belongs to, and when you create a new version, Salesforce automatically carries those tests forward from the version you cloned. This gives you much tighter control over scenarios like preserving regression tests for older logic, maintaining multiple supported versions, validating breaking changes, and keeping historical test coverage intact. Instead of treating tests as disposable, they become part of your flow’s lifecycle. This is a foundational shift for teams building mission-critical automation.
Compare Screen Flow Versions to See What Changed
Salesforce has had version comparison in other areas of the platform, but Spring ’26 brings it to screen flows. You can now compare any two versions of a screen flow and instantly see what changed across elements, resources, fields, components, properties, and styles.
This makes it much easier to answer the first question most debugging starts with: what changed? Instead of manually opening versions side by side, you get a clear view of differences, helping you pinpoint where issues may have been introduced and focus your testing where it actually matters.
Source: https://help.salesforce.com/s/articleView?id=release-notes.rn_automate_flow_mgmt_compare_screen_flow_versions.htm&release=260&type=5
More Control When Debugging Approvals and Orchestrations
Debugging long approval chains or orchestrations has always been painful. You’d often have to run the entire thing just to test one step. Spring ’26 introduces several upgrades that make this far more surgical.
Complete work items directly in Flow Builder
You can now complete orchestration and approval work items without leaving Flow Builder.
While debugging, interactive steps can be opened directly on the canvas. Once completed, the orchestration or approval process resumes immediately.
This keeps the entire test cycle inside the builder instead of bouncing between apps, emails, and work queues.
Debug only the part you care about
You can now define a start point, an end point, or both when debugging orchestration and approval flows, which gives you much more control over what actually runs. Instead of being forced to execute the entire automation, you can skip earlier stages, stop before downstream logic, isolate a single phase, or focus on one problematic section. When you skip steps, you can also provide test inputs to simulate outputs from earlier stages. In other words, you no longer have to run the whole machine just to test one gear.
Selectively control which steps execute
Salesforce has expanded test output controls beyond rollback mode.
You can now decide which orchestration or approval steps should run while debugging, and which should be skipped, directly from the new Configure Test Output experience.
This makes it much easier to validate edge cases, exception handling, and conditional behavior without unnecessary noise.
Smarter Debugging for More Advanced Flow Types
Spring ’26 also delivers improvements for more specialized use cases.
Segment-Triggered Flows: Testing multiple records at once
For segment-triggered flows, you can now debug up to ten records at the same time instead of testing one record after another. You can select multiple segment members, run the debugger, and cycle through each result to see exactly how different records move through your flow.
The canvas highlights the active path for the selected record, and you can filter results by successes or failures, making it much easier to spot inconsistencies. This is especially useful when validating logic across different customer types, messy or incomplete data, and edge cases that would normally take many separate test runs to uncover.
Why This Release Actually Matters
It’s easy to skim release notes and see “debug improvements” as minor polish, but debugging speed directly affects how confidently people build automation, how complex flows can realistically become, how quickly teams can ship fixes, and how much risk is involved in every change.
With these changes, you can rerun the same scenarios without constantly rebuilding your debug setup, test individual flow versions with far more precision, and isolate only the parts of your logic you actually care about. You can walk through approvals and orchestrations directly inside Flow Builder instead of jumping between tools, and even validate how a flow behaves across multiple records in a single debug run. This is the kind of release that changes how Flow Builder feels to use.
Conclusion
Salesforce has spent the last few releases expanding what Flow can do, and Spring ’26 is about improving how Flow is built. Persistent debug sessions, version-aware tests, selective execution, in-builder work items, and targeted debugging all point in the same direction. Flow Builder is evolving from a configuration tool into a true development environment.
If you build anything non-trivial in Flow, these changes will save you time immediately. And if you teach, support, or scale Flow across teams, they open the door to far better testing practices going forward.
Explore related content:
Top Spring ’26 Salesforce Flow Features
Add Salesforce Files and Attachments to Multiple Related Lists On Content Document Trigger
Spring ’26 Release Notes: Highlights for Admins and Developers
#FlowBuilder #LowCode #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials -
Salesforce Spring ’26 Brings Major Debug Improvements to Flow Builder
If you’ve been building flows for any length of time, you already know this: a lot of the real work and time goes into debugging. It’s re-running the same automation over and over. Swapping out record IDs. Resetting input values. Clicking Debug, making a small change, saving, and sometimes starting the whole setup again. That loop is where Flow builders spend a lot of their time, especially once flows get even moderately complex.
Salesforce’s Spring ’26 release finally takes aim at that reality. Instead of piling on new features, this update focuses on removing friction from the debugging experience itself. The result is a Flow Builder that feels faster, less disruptive, and much closer to a modern development environment.
Debug Sessions That Don’t Forget Everything
One of the most impactful improvements in Spring ’26 is also one of the simplest: Flow Builder now remembers your debug configuration while you’re actively editing a flow. When you debug a flow, make a change, and save, Salesforce preserves the triggering record you used, your debug options, and your input variable values. That means no more losing your setup every time you click Save, no more re-pasting record IDs, and no more rebuilding your test scenario from scratch.
Your debug session stays intact until you refresh your browser, close Flow Builder, or manually click Reset Debug Settings. This is a big quality-of-life upgrade, especially if you work with record-triggered flows that have edge cases, complex decision logic, multi-screen flows with test data, or anything that requires several small iterations to get right. The practical impact is simple: you can now fix, save, and re-run flows much faster, without constantly breaking your momentum.
Flow Tests Are No Longer “Latest Version Only”
Spring ’26 also changes how flow tests work behind the scenes.
Previously, flow tests were tied only to the latest version of a flow. As soon as you created a new version, older tests were essentially left behind. If a test no longer applied, you deleted it. If it still applied, you recreated it. Now, tests can be associated with specific flow versions.
Source: https://help.salesforce.com/s/articleView?id=release-notes.rn_automate_flow_debug_test_versions.htm&release=260&type=5
You can now reuse the same test across multiple flow versions or limit it to only the versions it truly belongs to, and when you create a new version, Salesforce automatically carries those tests forward from the version you cloned. This gives you much tighter control over scenarios like preserving regression tests for older logic, maintaining multiple supported versions, validating breaking changes, and keeping historical test coverage intact. Tests stop being disposable and become part of your flow’s lifecycle. This is a foundational shift for teams building mission-critical automation.
Compare Screen Flow Versions to See What Changed
Salesforce has had version comparison in other areas of the platform, but Spring ’26 brings it to screen flows. You can now compare any two versions of a screen flow and instantly see what changed across elements, resources, fields, components, properties, and styles.
This makes it much easier to answer the first question most debugging starts with: what changed? Instead of manually opening versions side by side, you get a clear view of differences, helping you pinpoint where issues may have been introduced and focus your testing where it actually matters.
Source: https://help.salesforce.com/s/articleView?id=release-notes.rn_automate_flow_mgmt_compare_screen_flow_versions.htm&release=260&type=5
More Control When Debugging Approvals and Orchestrations
Debugging long approval chains or orchestrations has always been painful. You’d often have to run the entire thing just to test one step. Spring ’26 introduces several upgrades that make this far more surgical.
Complete work items directly in Flow Builder
You can now complete orchestration and approval work items without leaving Flow Builder.
While debugging, interactive steps can be opened directly on the canvas. Once completed, the orchestration or approval process resumes immediately.
This keeps the entire test cycle inside the builder instead of bouncing between apps, emails, and work queues.
Debug only the part you care about
You can now define a start point, an end point, or both when debugging orchestration and approval flows, which gives you much more control over what actually runs. Instead of being forced to execute the entire automation, you can skip earlier stages, stop before downstream logic, isolate a single phase, or focus on one problematic section. When you skip steps, you can also provide test inputs to simulate outputs from earlier stages. In other words, you no longer have to run the whole machine just to test one gear.
Selectively control which steps execute
Salesforce has expanded test output controls beyond rollback mode.
You can now decide which orchestration or approval steps should run while debugging, and which should be skipped, directly from the new Configure Test Output experience.
This makes it much easier to validate edge cases, exception handling, and conditional behavior without unnecessary noise.
Smarter Debugging for More Advanced Flow Types
Spring ’26 also delivers improvements for more specialized use cases.
Segment-Triggered Flows: Testing multiple records at once
For segment-triggered flows, you can now debug up to ten records at the same time instead of testing one record after another. You can select multiple segment members, run the debugger, and cycle through each result to see exactly how different records move through your flow.
The canvas highlights the active path for the selected record, and you can filter results by successes or failures, making it much easier to spot inconsistencies. This is especially useful when validating logic across different customer types, messy or incomplete data, and edge cases that would normally take many separate test runs to uncover.
Why This Release Actually Matters
It’s easy to skim release notes and see “debug improvements” as minor polish, but debugging speed directly affects how confidently people build automation, how complex flows can realistically become, how quickly teams can ship fixes, and how much risk is involved in every change.
With these changes, you can rerun the same scenarios without constantly rebuilding your debug setup, test individual flow versions with far more precision, and isolate only the parts of your logic you actually care about. You can walk through approvals and orchestrations directly inside Flow Builder instead of jumping between tools, and even validate how a flow behaves across multiple records in a single debug run. This is the kind of release that changes how Flow Builder feels to use.
Conclusion
Salesforce has spent the last few releases expanding what Flow can do, and Spring ’26 is about improving how Flow is built. Persistent debug sessions, version-aware tests, selective execution, in-builder work items, and targeted debugging all point in the same direction. Flow Builder is evolving from a configuration tool into a true development environment.
If you build anything non-trivial in Flow, these changes will save you time immediately. And if you teach, support, or scale Flow across teams, they open the door to far better testing practices going forward.
Explore related content:
Top Spring ’26 Salesforce Flow Features
Add Salesforce Files and Attachments to Multiple Related Lists On Content Document Trigger
Spring ’26 Release Notes: Highlights for Admins and Developers
#FlowBuilder #LowCode #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials -
Add Salesforce Files and Attachments to Multiple Related Lists On Content Document Trigger
Flow builders, rejoice! With the Spring ’26 release, you can now trigger your flow automations on the ContentDocument and ContentVersion objects for Files and Attachments. Salesforce delivered a new event type in the previous release that supported flow triggers for files and attachments on standard objects, but the functionality was limited. In this release, Salesforce gives us the ability to trigger on all new files and attachments, and on their updates, for all objects.
Use case: When a document is uploaded to a custom object with lookups to other objects like Contact and Account, add links to those objects so that the same file is visible and listed under their related lists.
You could easily expand this use case to add extra sharing to the uploaded file, which is also a common pain point in many organizations. I will leave that out for now; you can explore it by extending the functionality of this flow.
Objects that are involved when you upload a file
In Salesforce, three objects work together to manage files: ContentDocument, ContentVersion and ContentDocumentLink.
Think of them as a hierarchy that separates the file record, the actual data, and the location where it is shared. The definitions of these three core objects are:
ContentDocument: Represents the “shell” or the permanent ID of a file. It doesn’t store the data itself but acts as a parent container that remains constant even if you upload new versions.
ContentVersion: This is where the actual file data (the “meat”) lives. Every time you upload a new version of a file, a new ContentVersion record is created. It tracks the size, extension, and the binary data.
ContentDocumentLink: This is a junction object that links a file to other records (like an Account, Opportunity, or Case) or users. It defines who can see the file and what their permissions are.
Object Relationships:
The relationship is structured to allow for version control and many-to-many sharing:
ContentDocument > ContentVersion: One-to-Many. One document can have many versions, but only one is the “Latest Published Version.”
ContentDocument > ContentDocumentLink: One-to-Many. One document can be linked to many different records or users simultaneously.
ContentDocumentLink does not allow duplicate links: if you attempt to create the relationship between a linked entity and a content document when it already exists, your insert will fail.
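You can see these links for yourself with a quick query. Here is a minimal sketch in anonymous Apex, assuming the org contains at least one file; note that SOQL on ContentDocumentLink must filter on a ContentDocumentId or LinkedEntityId:
// Grab the most recently created file's document Id.
Id docId = [SELECT Id FROM ContentDocument ORDER BY CreatedDate DESC LIMIT 1].Id;
// List every record and user this file is shared with.
for (ContentDocumentLink cdl : [
        SELECT LinkedEntityId, ShareType, Visibility
        FROM ContentDocumentLink
        WHERE ContentDocumentId = :docId]) {
    System.debug(cdl.LinkedEntityId + ' | ' + cdl.ShareType + ' | ' + cdl.Visibility);
}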
What happens when a file is uploaded to the files related list under an object?
Salesforce creates the ContentDocument and ContentVersion records. It also creates the necessary ContentDocumentLink records: typically one for the record the file was uploaded to and one for the user who uploaded it.
For each new file (not a new version of the same file) a new ContentDocument record will be created. You can trigger your automation based on this record being created, and then create additional ContentDocumentLink records to expand relationships and sharing.
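To make this three-object dance concrete, here is a minimal anonymous Apex sketch that does programmatically what the upload UI does for you. The file name is a placeholder and the snippet assumes at least one Account exists; ShareType ‘V’ grants viewer access:
// 1. Insert a ContentVersion; Salesforce creates the parent ContentDocument for you.
ContentVersion cv = new ContentVersion(
    Title = 'Quote',
    PathOnClient = 'Quote.txt',
    VersionData = Blob.valueOf('Hello World')
);
insert cv;
// 2. Re-query to pick up the auto-generated ContentDocumentId.
cv = [SELECT ContentDocumentId FROM ContentVersion WHERE Id = :cv.Id];
// 3. Link the document to a record; this is what makes it appear in a related list.
Id accountId = [SELECT Id FROM Account LIMIT 1].Id; // assumes one Account exists
insert new ContentDocumentLink(
    ContentDocumentId = cv.ContentDocumentId,
    LinkedEntityId = accountId,
    ShareType = 'V'
);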
Building Blocks of the Content Document Triggered Automation
For this use case I used a custom object named Staging Record, which has dedicated lookup fields for Contact and Account. This pattern, where new documents are uploaded and field values written to a staging record, is often used when dealing with integrations and digital experience users. You can easily build a similar automation if a ContentDocumentLink for the Account needs to be created when a file is uploaded to a standard object like Contact.
Follow these steps to build your flow:
- Trigger your record-triggered flow when a ContentDocument record is created (no criteria)
- Add a scheduled path to your flow and set it to execute with a 0-minute delay. Under advanced settings, set the batch size to 1. An async path seems to work as well. I will explain the reason for this at the end of the post.
- Get all ContentDocumentLink records for the ContentDocument
- Check null for the get in the previous step (may not be necessary, but for good measure)
- If not null, use a collection filter to keep all records where the LinkedEntityId starts with the three-character prefix of your custom object (I pasted the prefix into a constant and referenced it). Here is the formula I used:
LEFT({!currentItem_Filter_Staging.LinkedEntityId},3) = {!ObjectPrefixConstant}
- Loop through the filtered records. There should be one at most. You have to loop because the collection filter element outputs a collection even when it contains a single record.
- Inside the loop, get the staging record. I know, it is a get inside the loop, but this will execute once. You can add a counter and a decision to execute it only in the first iteration if you want.
- Build two ContentDocumentLink records using an assignment. One between the ContentDocument and the Contact on the staging record, the other one between the ContentDocument and the Account. You could add additional records here for sharing.
- Add your ContentDocumentLink records to a collection.
- Exit the loop and create the ContentDocumentLink records using the collection you built in one shot.
Here is a screenshot of the resulting flow.
Here is what happens when you create a staging record and upload a file to Salesforce using the related list under this record.
Here is the resulting view on the Contact and Account records.
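If it helps to read the same logic as code, here is a rough Apex equivalent of the steps above. This is a sketch only, not the actual implementation (the flow is); Staging_Record__c, Contact__c, and Account__c are my assumed API names for the staging object and its two lookups:
// Sketch: given a newly created ContentDocument, link it to the Contact and
// Account referenced by the staging record it was uploaded to.
void linkUploadedFile(Id contentDocumentId) {
    // Derive the 3-character key prefix instead of hardcoding a constant.
    String prefix = Staging_Record__c.SObjectType.getDescribe().getKeyPrefix();
    List<ContentDocumentLink> newLinks = new List<ContentDocumentLink>();
    for (ContentDocumentLink cdl : [
            SELECT LinkedEntityId
            FROM ContentDocumentLink
            WHERE ContentDocumentId = :contentDocumentId]) {
        // Equivalent of the collection filter: keep only staging-record links.
        if (String.valueOf(cdl.LinkedEntityId).startsWith(prefix)) {
            // Like the flow, this lookup runs once, since there is one staging link at most.
            Staging_Record__c staging = [
                SELECT Contact__c, Account__c
                FROM Staging_Record__c
                WHERE Id = :cdl.LinkedEntityId];
            if (staging.Contact__c != null) {
                newLinks.add(new ContentDocumentLink(
                    ContentDocumentId = contentDocumentId,
                    LinkedEntityId = staging.Contact__c,
                    ShareType = 'V'));
            }
            if (staging.Account__c != null) {
                newLinks.add(new ContentDocumentLink(
                    ContentDocumentId = contentDocumentId,
                    LinkedEntityId = staging.Account__c,
                    ShareType = 'V'));
            }
        }
    }
    insert newLinks; // one DML for all links, outside the loop
}
Deriving the prefix with getKeyPrefix() avoids the hardcoded constant, and the single insert at the end mirrors the flow’s create-from-collection step.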
Why is the Scheduled Path or Async Path Necessary?
When a file is uploaded, a ContentDocument record and a ContentVersion record are created. The ContentDocumentLink junction record has to be created after these records, because the relationship is established by populating their Ids on the link record. If you build the automation on the immediate path, your Get will not find the ContentDocumentLink records yet. To ensure the flow can find them, use either an async path or a scheduled path.
Indeed, when you build the automation on the immediate path, the ContentDocumentLink records are not created. You don’t receive a fault email either, although the automation runs well in debug mode. I wanted to observe this behavior in detail, so I set up a user trace to log the steps involved. This is the message I found that was stopping the flow from executing:
(248995872)|FLOW_BULK_ELEMENT_NOT_SUPPORTED|FlowRecordLookup|Get_Contact_Document_Links|ContentDocumentLink
According to this, the Get element for ContentDocumentLink records cannot be bulkified, and therefore the flow cannot execute. The flow engine always attempts to bulkify Get elements, and there is nothing fancy about the criteria here, so the trouble must come from the unique nature of the ContentDocumentLink object.
The async path seems to bypass this issue. However, if you want to ensure this element is never executed in bulk, the better approach is to use a scheduled path with zero delay and set the batch size to one record in advanced settings. I have communicated this finding to the product team.
Please note that the scheduled path takes a minute to execute in my preview org. Be patient and check back if you don’t initially see the new ContentDocumentLink records.
Conclusion
In the past, handling file uploads gave flow builders a lot of trouble, because the related objects did not support flow triggers.
Now that we have this functionality rolling out in the latest release, the opportunities are pretty much limitless. The functionality still has its quirks as you can see above.
I would recommend that you set up a custom metadata kill switch for this automation so that it can easily be turned off for bulk upload scenarios.
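One lightweight way to build that kill switch: a custom metadata type with an Active checkbox, checked at the top of the flow with a Get Records plus a Decision element. The names below (Flow_Switch__mdt, Active__c) are hypothetical; in Apex terms, the check looks like this:
// Hypothetical kill switch: Flow_Switch__mdt is a custom metadata type with
// an Active__c checkbox, one record per automation, keyed by DeveloperName.
Flow_Switch__mdt sw = Flow_Switch__mdt.getInstance('Content_Document_Links');
Boolean isActive = (sw != null && sw.Active__c);
if (!isActive) {
    return; // in the flow: a Decision outcome that routes straight to the end
}
Because custom metadata records deploy like metadata and are readable without a SOQL limit hit, you can flip the checkbox off before a bulk file load and back on afterward.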
Watch the video on our YouTube channel.
https://www.youtube.com/watch?v=Gl0XCtMAhmc
Explore related content:
Top Spring ’26 Salesforce Flow Features
Should You Use Fault Paths in Salesforce Flows?
How to Use Custom Metadata Types in Flow
See the Spring ’26 Release Notes for full details.
#Automation #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials #Spring26 #UseCases -
Top Spring ’26 Salesforce Flow Features
What are the new features about? Spring ’26 brings new screen, usability, and platform enhancement features. Let’s dive into the details.
Top Screen Flow Spring ’26 Features
It seems like most of the new features involve screen flows.
I will not go into further detail, but this release introduces yet another file upload component for screen flows: LWR File Upload Component for Experience Cloud.
Here are the rest of the screen flow improvements.
Screen Flow Screen Element and Component Style Enhancements
The screen flow screen element gets features that allow you to set background, text, and border colors. Border weight and radius can be adjusted. For input components, the in-focus text color can be differentiated. Flow buttons get similar adjustments, gaining the ability to change colors on hover.
Any styling changes you set override your org or Experience Cloud site’s default theme.
Remember to keep your color and contrast choices in check for accessibility. Don’t do what I did below. Go to the WebAIM contrast checker website and plug in your color codes to check whether their contrast is sufficient for accessibility.
Screen Flow Message Element
The Screen Flow Message Element leverages the new styling options to display a message on the screen. A pulldown lets you create an information, success, warning, or error message. These come with standard color sets, which will guide flow developers toward a standard visual language.
This functionality is compliant with a11y accessibility standards.
See all four types on the same screen below.
Screen Flow Kanban Component (Beta)
The new Kanban component allows you to organize records into cards and columns. This is particularly useful for visualizing process phases and managing transitions across your workflow.
Use the new Kanban Board component to show records as cards in columns that represent workflow stages, without custom Lightning implementations. The Kanban Board is read-only, so users can’t drag cards between stages at run time.
Data Table Column Sort and Row Value Edit (TBD)
Now the user can sort the data table by columns and edit text fields in rows. This feature is not yet available in preview orgs; the product team is working hard in the background to land it in the Spring ’26 release at the last minute.
Preview Files Natively in Screen Flows
Elevate document-based processes by enabling your users to review file content directly within a screen flow. The new File Preview screen component removes the requirement to download files externally, ensuring easier document review and approval workflows.
This component seems to be already in production.
Open Screen Flows in Lightning Experience with a URL
Previously, when you opened a flow via URL, it did not launch in Lightning Experience. Now it launches in Lightning, preserving the experience your users are used to, especially when they are working in a customized Lightning console app.
I will quote the release notes for this one.
“To open a flow in Lightning Experience, append /lightning/flow/YourFlowNameHere to your URL. To run a specific flow version, append /lightning/flow/YourFlowNameHere/versionId to your URL. Flows that open in Lightning Experience have improved performance because most required Lightning components are already loaded into the browser session. In Lightning console apps, your tabs are preserved when a flow opens, and you can switch to other tabs while the flow is working. Using the new URL format also ensures that your browser behaves consistently, with forward, back, and your browser history working as expected.
To pass data into a flow through its URL, append ?flow__variableIdHere=value to the end of your URL. For example, to pass a case number into a flow, /lightning/flow/YourFlowNameHere?flow__variableIdHereID={!Case.CaseNumber}.
Use & to append multiple variables into a flow. For example, /lightning/flow/YourFlowNameHere?flow__varUserFirst={!$User.FirstName}&flow__varUserLast={!$User.LastName} passes both the user first name and last name into the flow.”
Usability and Platform Features
I listed all of the screen flow features above. The following two items are huge usability improvements that involve screen management for the whole flow canvas, not just screen flows.
Collapse and Expand Decision and Loop Elements
When your flow gets too big and you need to Marie Kondo (tidy up) your flow canvas, you can collapse the decision and loop elements that take up a lot of real estate. You can always expand them back when needed.
Now you can collapse and expand branching elements with Flow Builder, including Wait, Decision, Loop, Path Experiment, and Async Actions, helping you focus on the key parts of your flow.
This layout is saved automatically and locally in your browser, making it easier to return to your work without changing the view for other users.
Mouse, Trackpad and Keyboard Scroll
Now you don’t have to drag or use the scroll bar to move around the flow canvas. You can use the vertical and horizontal wheels on your mouse, the arrow keys on your keyboard, or your trackpad if you have one.
No need to use Salesforce Inspector Reloaded for this functionality anymore. Thanks to Salesforce Inspector Reloaded for filling the gap in the meantime.
Content Document and Content Version Flow Triggers for Files and Attachments (Beta)
Salesforce delivered a new event type in the last release that could trigger flows for files and attachments on standard objects, but the functionality was limited. In this release, Salesforce gives us the ability to trigger on all new files/attachments and their updates for all objects.
I was told by the product team that this functionality will be released as beta.
Flow Logging
I am not exactly sure what has been improved here. Salesforce had previously announced additional flow logging capabilities leveraging Data Cloud. Now, a new flow logging tab has been added to the Automation Lightning App.
Debug Improvements
The debugger in Flow Builder now remembers the record it ran on, and the updated field values when it runs in an update scenario. Debug inputs such as triggering record values, debug options, and input variable values remain set when you save flow changes within your Flow Builder session. To disassociate the debug run from the inputs of the last run, click the reset button. This change is intended to make debug reruns faster.
Flow Builder preserves debug configurations when you save changes to your flow. Refreshing your browser or closing Flow Builder clears all debug settings.
Conclusion
Salesforce product teams work hard delivering new features for every release, and Spring ’26 brings significant improvements to Flow Builder. I would have liked to see additional capabilities for flow types other than screen flows; this release is lighter in that area.
Additional bonus features include request for approval component for lightning page layouts (highly-requested feature), compare screen flow versions, and associating flow tests with flow versions.
The release notes are still in preview, and functionality could still be added or removed during the release cycle.
This post will be updated as additional details are made available.
Watch the video on our YouTube channel: https://www.youtube.com/watch?v=eZC_8W1IbUs
Explore related content:
Salesforce Optimizer Is Retired: Meet Org Check
One Simple Salesforce Flow Hack That Will Change Your Workflow Forever!
Automate Permissions in Salesforce with User Access Policies
Spring ’26 Release Notes: Highlights for Admins and Developers
What Is Vibe Coding? And What’s New in Agentforce Vibes for Developers?
#Kanban #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials #SalesforceUpdate #ScreenFlow #Spring26 -
Should You Use Fault Paths in Salesforce Flows?
If you build enough flows, you’ll eventually see the dreaded flow fault email. Maybe a record you tried to update was locked, a required field value was not set in a create operation, or a validation rule tripped your commit. Regardless of the root cause, the impact on your users is the same: confusion, broken trust, and a support ticket. The good news is that you can catch your faults using the fault path functionality. In this post, we’ll walk through practical patterns for fault handling, show how and when to use the Custom Error element, and explain why a dedicated error screen in screen flows is worth the extra minute to build. We’ll also touch on the Roll Back Records element for screen flows, where this functionality can make a difference.
Why Fault Paths Matter
Faults are opportunities for your Salesforce org’s automation to improve. While unhandled faults are almost always trouble, handled faults don’t have to be a huge pain in the neck.
The Core Building Blocks of Flow Fault Handling
1) Fault paths
Get elements (SOQL queries), DML elements (Create, Update, and Delete), and actions support fault paths. Fault paths give the developer a way to determine what to do in the event of an error.
2) Fault actions
You can add elements to your fault path to determine the next steps. You can also add a Custom Error element in record-triggered flows, or an error screen in screen flows, for user interactivity. Multiple fault paths in the flow can be connected to the same element to execute the same logic. A subflow can be used to standardize and maintain fault actions, such as temporarily logging fault events.
Logging Errors
Here is a list of data that may be important to include in your fault communications and logging:
- Flow label
- User Name
- Date/Time
- Technical details (e.g. $Flow.FaultMessage)
- Record Id(s) and business context (e.g., Opportunity Id, Stage)
- User-friendly message (plain English)
Subflow Solution
The advantage of a subflow for fault handling is that you can modify the logic once, in a central location. If you want to start logging temporarily, you can do that without modifying tons of flows. If you want to stop logging, that change can be completed fairly easily, as well.
Inside the subflow, decide whether to (a minimal sketch of a logging action follows this list):
- Log to a custom object (e.g., Flow_Error__c)
- Notify admins via Email/Slack
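If you want the logging step inside that subflow to be a single reusable action, one option is a small invocable Apex class that the subflow calls. Below is a minimal sketch, assuming a hypothetical Flow_Error__c custom object with the fields shown; the object, field, and label names are illustrative, so adapt them to your org.

public with sharing class FlowErrorLogger {
    public class Input {
        @InvocableVariable(label='Flow Label' required=true)
        public String flowLabel;
        @InvocableVariable(label='Fault Message')
        public String faultMessage;
        @InvocableVariable(label='Record Id')
        public String recordId;
    }

    // Appears in Flow Builder as an Apex action named 'Log Flow Error'.
    @InvocableMethod(label='Log Flow Error')
    public static void log(List<Input> inputs) {
        List<Flow_Error__c> logs = new List<Flow_Error__c>();
        for (Input i : inputs) {
            logs.add(new Flow_Error__c(
                Flow_Label__c = i.flowLabel,
                Fault_Message__c = i.faultMessage,
                Record_Id__c = i.recordId,
                User__c = UserInfo.getUserId(), // hypothetical lookup to User
                Occurred_At__c = System.now()
            ));
        }
        insert logs;
    }
}

In the flow, you would pass {!$Flow.FaultMessage} into Fault Message and the triggering record’s Id into Record Id.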
Meet the Custom Error Element
The Custom Error element in Salesforce Flow is a powerful yet often underutilized tool that allows administrators and developers to implement robust error handling and create more user-friendly experiences. Unlike system-generated errors that can be cryptic or technical, the Custom Error element gives you complete control over when to halt flow execution and what message to display to your users.
The Custom Error element lets you intentionally raise a validation-style error from inside your flow, without causing a system fault, so you can keep users on the same screen, highlight what needs fixing, and block navigation until it’s resolved. Think of it as flow-native inline validation.
What The Custom Error Element Does
It displays a message at a specific location (the entire screen or a specific field) and stops the user from moving forward. One caveat: if the user changes a picklist value through the Path component, the custom error appears as a less-than-ideal, self-dismissing red banner. Refrain from using custom error messages in those situations.
The unique thing about the custom error message is that it can be used to throw an intentional exception to stop the user from proceeding. In these use cases, it works very similarly to a validation rule on the object.
This becomes particularly valuable in complex business processes where you need to validate data against specific business rules that can’t be easily captured in standard validation rules. For instance, you might use a Custom Error to prevent a case from being closed if certain required child records haven’t been created, or to stop an approval process if budget thresholds are exceeded.
Please note that a custom error blocks the transaction from committing, while a fault path connected to any other element allows the original (triggering) DML to complete even when the record-triggered automation fails.
Custom Error Screen in Screen Flows
Incorporating a dedicated custom error screen in your screen flows dramatically improves the user experience by transforming potentially frustrating dead-ends into helpful, actionable moments. When users encounter an error in a screen flow without a custom error screen, they’re often left with generic system messages that don’t explain what went wrong in business terms or what they should do next, leading to confusion, repeated help desk tickets, and abandoned processes.
A well-designed custom error screen, however, allows you to explain the specific issue in plain language that resonates with your users’ understanding of the business process. Beyond clear messaging, custom error screens give you the opportunity to provide contextual guidance, such as directing users to the right person or department for exceptions, offering alternative paths forward, or explaining the underlying business rule that triggered the error. You can also leverage display text components with dynamic merge fields to show users what caused the problem, turning the error into a learning moment rather than a roadblock. Additionally, custom error screens maintain your organization’s branding and tone of voice, include helpful links to documentation or knowledge articles, and pair with logging actions to give you valuable insights into potential process improvements or additional training needs.
Here is an example custom error screen element format (customize to your liking):
Error
Your transaction has not been completed successfully. Everything has been rolled back. Please try again or contact your admin with the detailed information below.
Account Id: {!recordId}
Time and Date: {!$Flow.CurrentDateTime}
User: {!$User.Username}
System fault message: {!$Flow.FaultMessage}
Flow Label: Account - XPR - Opportunity Task Error Screen Flow
The “Roll Back Records” Element
There are use cases in screen flows where you create a record and then update it based on follow-up screen actions. You could be creating related records for a newly created record, which requires you to create the parent record first to get its record Id. If you experience a fault in your screen flow, unusable records can remain in your system. In these situations, the Roll Back Records element lets you undo database changes made earlier in the same transaction. Note that Roll Back Records does not roll everything back to its original state; it only rolls back the current transaction in a series of transactions.
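For readers who also write Apex, a rough analogy (my comparison, not official documentation): Roll Back Records behaves like rolling back to a savepoint taken in the current transaction, undoing uncommitted work rather than restoring records committed earlier.

// Rough Apex analogy for Roll Back Records
Savepoint sp = Database.setSavepoint();
try {
    insert new Account(Name = 'Parent');    // earlier DML in this transaction
    insert new Contact(LastName = 'Child'); // follow-up DML that might fail
} catch (DmlException e) {
    Database.rollback(sp); // undoes both inserts, similar to Roll Back Records
}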
Tips for fewer faults in the first place
Here are some practical tips:
- Validate early on screens with input rules (Required, min/max, regex).
- Use Decisions to catch known conflicts before DML.
- Place DMLs strategically in screen flows: Near the end so success is all-or-nothing (plus Roll Back Records if needed) or after each screen to record the progress without loss.
The fewer faults you surface, the more your users will trust your flows.
Putting it all together
Here’s a checklist you can apply to your next Screen Flow:
- Every DML/Callout element has a Fault connector.
- A reusable Fault Handler subflow logs & standardizes messages.
- Custom Error is used for predictable, user-fixable issues on screens.
- A custom error screen presents clear actions and preserves inputs.
- Technical details are available, not imposed (display only if helpful).
- Roll Back Records is used when it matters.
- Prevention first: validate and decide before you write.
Other Considerations
When you use a fault path on a Create element in a record-triggered flow and the create fails, keep in mind that you can get a partial commit: the records that fail won’t be created, while the others may be.
Example: You are creating three tasks in a case record-triggered flow. If one of your record field assignments writes a string longer than the text field’s max length (for example, Subject) and you use a fault path on that create element, one task fails while the other two create successfully.
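For comparison, this behavior resembles partial-success DML in Apex. The snippet below is a loose analogy on my part, not a claim about how Flow is implemented internally.

// Loose analogy: a fault path on a Create element acts like allOrNone = false.
List<Task> tasks = new List<Task>{
    new Task(Subject = 'Call customer'),
    new Task(Subject = 'Imagine this subject exceeds the 255-character limit'),
    new Task(Subject = 'Send follow-up email')
};
Database.SaveResult[] results = Database.insert(tasks, false); // partial success allowed
for (Database.SaveResult r : results) {
    if (!r.isSuccess()) {
        // This record failed; the others were still committed.
        System.debug(r.getErrors()[0].getMessage());
    }
}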
Conclusion
My philosophy regarding fault paths is to add them to your flows, but to avoid ever going down them if possible. When you see that you are going down fault paths, that means you have an opportunity to improve your automation design.
Every fault you handle offers insight into how your flow behaves in the real world. Each one reveals something about the assumptions built into your automation, the data quality in your org, or the user experience you’ve designed. Treating faults as signals rather than setbacks helps you evolve your automations into resilient, reliable tools your users can trust. Over time, these lessons refine both your technical build patterns and your understanding of how people interact with automation inside Salesforce.
Explore related content:
How to Use a Salesforce Action Button to Validate Lookup Fields in Screen Flows
Should You Leave Unused Input and Output Flow Variables?
How To Build Inline Editing for Screen Flow Data Tables in Salesforce
Salesforce Flow Best Practices
Add Salesforce Files and Attachments to Multiple Related Lists On Content Document Trigger
#CustomErrors #FaultHandling #FaultPath #SalesforceAdmins #SalesforceDevelopers #SalesforceHowTo #SalesforceTutorials #ScreenFlows -
Top 7 Key Takeaways from Salesforce Dreamforce 2025
Salesforce Break reviewed the press releases and sessions coming out of Salesforce Dreamforce 2025 and prepared the key takeaways in this post, so you don’t have to go through all the materials.
The biggest announcements for Salesforce at Dreamforce 2025 were centered on advancing the company’s foundational “Agentic Enterprise” vision through enhanced control, deeper context integration, and widespread collaboration tools.
The announced functionalities were more evolutionary than revolutionary.
Here is the list.
Top 7 Key Dreamforce Takeaways
1. Agentforce 360 Platform, New Agentforce Builder, and Agent Script
The cornerstone announcement was the launch of Agentforce 360, the latest version of the comprehensive platform designed to unify AI, trust, and data capabilities across all Salesforce products. Salesforce has completely reimagined the entire Customer 360 platform as Agentforce 360, ensuring that every app is now “agentic”. This platform emphasizes providing users with more control than ever over their AI systems. To make development accessible to a wider audience, including line-of-business leaders, IT teams, service, and sales teams, a brand new Agentforce Builder was introduced, featuring a radically simplified, clean, and beautiful interface built from the ground up.
This capability is powered by Agent Script, a new scripting language that exposes the reasoning engine and allows users to define deterministic chaining actions and conditional logic. Agent Script unlocks patterns needed for mission-critical use cases, blending fluid agentic reasoning with the certainty of rules-based control in a unified instruction set to ensure agents are predictable and stay “on track,” preventing costly unpredictability. Agent Script can be built at the topic level where previous LLM-based non-deterministic functionality produced unpredictable results.
In addition, Salesforce announced Slack as its future conversational interface. Several sessions demonstrated deeper integrations in action. Another major change of course was the ability to use external LLMs for the Atlas Reasoning Engine. I believe this demonstrates that Salesforce is positioning Agentforce more as an orchestrator and collaborator of agents and AI capabilities rather than competing to become the agent for the enterprise.
2. Agentforce Voice
Agentforce Voice extends the power of the Agentforce platform by allowing agents to talk, bringing AI capabilities directly to contact centers and 800 numbers. Businesses can now configure the voice, tone, and personality of the AI right inside Agentforce Builder. The goal is to deliver a unified customer experience across all channels, providing a highly human-like and interruptible conversational flow. A critical feature of Agentforce Voice is ensuring a seamless transition when the AI needs to transfer a customer to a human agent; the human representative automatically receives the full transcript and context of the AI conversation, allowing them to pick up the experience precisely where the AI left off. This functionality is generally available (GA) as of October ’25.
3. Intelligent Context Processing (Data 360)
Intelligent Context Processing tackles one of the greatest challenges for AI agents: understanding and utilizing vast amounts of complex, unstructured data. Agents often struggle with content in rich formats, pictures, tables, and existing workflows, the accumulated wisdom of the company. These new tools, built into Data 360, interpret and index this data by analyzing and parsing complex content (such as product manuals containing charts and images). This allows agents to pull in the exact, correct context required to deliver accurate and rich responses at the precise moment it is needed.
Furthermore, Data 360 enhances governance across both structured and unstructured data. Using natural language, administrators can create policies, such as masking internal FedEx employee contact details within agent responses, ensuring the information provided is not only accurate but also appropriate for the customer. It is not clear to us whether this is solely a rename of the product called Data Cloud. It seems that way.
4. Agentforce Vibes
Salesforce launched Agentforce Vibes as a new product that lets trailblazers quickly and easily build apps, dashboards, automations, and flows. Users achieve this via vibe coding, which involves providing a simple, natural language description of what they want the platform to build. The core innovation of Agentforce Vibes is its deep contextual understanding; it speaks “the language of the business,” including the organization’s data, relationships, customers, products, and security permissions. This contextual intelligence allows Agentforce Vibes to rapidly translate a descriptive idea into deployable, production-grade Salesforce metadata (such as a screen flow). This drastically reduces development time, saving what could amount to dozens of manual clicks inside a traditional flow builder. This effectively elevates the capabilities of every developer. Interesting tidbits: Developers can develop using the coding language of their choice, and there is a local LWC preview function that will be launched soon.
5. Slackbot
Salesforce unveiled Slackbot as a new personalized, AI-powered companion that boosts productivity directly within Slack. It will launch for General Availability (GA) in January and draws on each user’s unique organizational context, including conversations, files, workflows, and apps. The tool moves users beyond asking simple questions toward achieving complex, tangible outcomes. For example, a user can ask Slackbot to handle a multi-step process with one command. It can review deal status, find compliance files, and draft a customer email in the user’s tone. Slackbot can also create a prep document and calendar invite for key stakeholders automatically. Slackbot will be the home of AI capabilities within Slack, even for customers who don’t use Salesforce.
6. Support for Third-Party Agents in Slack (AgentExchange)
Salesforce affirmed its vision of Slack becoming the “home for all AI” by announcing support for all third-party AI agents, such as ChatGPT, Claude, and Gemini. This transformation positions Slack as an agentic operating system where external agents can exist as collaborative “teammates” alongside human employees. To ensure these external agents can perform sophisticated reasoning, they are grounded in the company’s real-time knowledge and context via a real-time search API and an MCP server. This initiative allows Salesforce agents to work in conjunction with agents from other platforms. This, coupled with the AI-assisted enterprise search capabilities of Slack, empowers Slack admins and users to be more productive.
7. Agentforce Observability
Agentforce Observability was introduced to help monitor and scale digital work in the new agentic enterprise era. It serves as one control center for managers to monitor and improve agent team performance. The tool gives leaders visibility into KPIs like escalation and deflection rates using Tableau Next analytics.
Most importantly, it features Agent Insights, which acts as a performance review by scoring every single agent session. This scoring helps managers find and analyze poor-performing conversations to uncover root causes like process issues. It enables tuning of agent prompts and behaviors for consistent results. This management layer is essential since prompts and loops alone aren’t enough.
This was a major pain point for clients. I am happy Salesforce is addressing it with this new functionality, which will be available to most clients.
Conclusion
I personally found the announcements more evolutionary than revolutionary. It was not a strong Dreamforce in terms of new functionalities covered.
Salesforce still needs to address adoption challenges and product cleanup to make its current offerings appealing, but these announcements mark real progress.
Explore related content:
Salesforce Ushers in the Age of the Agentic Enterprise at Dreamforce 2025
Dreamforce 2025: Standout Sessions Streaming on Salesforce+
Salesforce Winter ’26 Release: Comprehensive Overview of New Flow Features
#AgentScript #Agentforce #Agentforce360 #AgentforceBuilder #Data360 #Dreamforce #NewRelease #Salesforce #SalesforceAdmins #SalesforceDevelopers -
How to Quickly Build a Salesforce-Native Satisfaction Survey Using SurveyVista
SurveyVista by Ardira is a Salesforce native survey solution that allows you to design, distribute, and analyze surveys directly within your Salesforce org. Unlike external survey tools that require complex integrations or third-party data syncs, SurveyVista keeps everything in-platform. This gives admins and business users a secure, streamlined way to capture feedback without leaving Salesforce.
🚨 Use case: Build a satisfaction survey to measure CSAT and NPS, accept free-form responses in addition to scores, and attach them to records in Salesforce for visibility, action, and reporting purposes.
Why Salesforce-Native Matters
Many survey tools rely on connectors, middleware, or APIs to bring data back into Salesforce. While this approach works, it introduces several challenges. Data leaving Salesforce and traveling across external systems creates additional security risks. It also increases integration overhead by requiring ongoing maintenance, troubleshooting, and vendor updates. On top of that, responses may not be available in real time inside Salesforce, which can slow down reporting and automation.
SurveyVista avoids these issues because it is 100% Salesforce native. Data never leaves your org and remains protected under the Salesforce trust framework, giving you stronger security. Responses are available instantly, making them immediately usable for reporting, flows, and automation. Since no external integration is required, admin overhead is reduced and your tech stack stays simple.
SurveyVista Install and Preparation
SurveyVista is an AppExchange solution. You can head over to the AppExchange and install the free/trial version of SurveyVista in your Org. Get it HERE.
Once you install the AppExchange package, you can go to the app’s Lightning page and finish your configuration there. The required steps are fairly simple; they relate to publishing the digital experience site where the surveys will be hosted. A few steps require you to copy code into the Developer Console and execute it. You should also check in the digital experience builder whether your site requires login. If you are going to host the survey publicly and accept anonymous responses, your digital experience site needs to be made public.
You will also find on this page an option to download templates and examples. I find the template that includes all UI components very useful, because it quickly shows you what is possible.
You can start your survey from scratch or from a template.
Build
I decided to build a 5-question CSAT and NPS form. One question will accept the NPS score, while the last question will accept free-form text for open feedback.
The form structure is as follows:
Customer Satisfaction Survey
Q1. How satisfied are you with your overall experience?
Very satisfied / Satisfied / Neutral / Dissatisfied / Very dissatisfied
Q2. How would you rate the quality of our product/service?
Excellent / Good / Fair / Poor
Q3. How likely are you to recommend us to a friend or colleague?
NPS scale (0-10); a quick note on the NPS math follows this list
Q4. How responsive have we been to your questions or concerns?
Extremely responsive / Very responsive / Somewhat responsive / Not so responsive / Not at all responsive
Q5. Please share any additional feedback or suggestions you may have.
Paragraph (free-form text)
SurveyVista offers ready-made components for adding these inputs to your form. The customization options seem virtually limitless. Branding your survey is easy.
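Since Q3 captures a 0-10 NPS score, a quick refresher on the math: NPS is the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6). The snippet below is a small worked sketch with invented sample scores.

// NPS = % promoters (9-10) minus % detractors (0-6)
List<Integer> scores = new List<Integer>{ 10, 9, 8, 6, 10, 3, 9 };
Integer promoters = 0;
Integer detractors = 0;
for (Integer s : scores) {
    if (s >= 9) promoters++;
    else if (s <= 6) detractors++;
}
// 4 promoters, 2 detractors, 7 responses: (4 - 2) / 7 * 100 = 28.6
Decimal nps = 100.0 * (promoters - detractors) / scores.size();
System.debug('NPS: ' + nps.setScale(1));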
You can also customize your “Thank You” landing page and provide links on that page, as well.
Once you complete your design, you add the digital experience site to your survey and publish it. SurveyVista produces two links for your survey form: one for internal users, the other for external users. You can send this link to your audience anytime, on any channel, either manually or automatically.
Result
Here is the resulting form.
The beauty of SurveyVista is that the response is recorded in your Salesforce Org as an object record. You can trigger automation when the record is created, and relate this record to any record(s) you want in your Salesforce Org.
You can use the reports and dashboards SurveyVista package gives you, or set up your own reports and dashboards in Salesforce. In addition to relating to records, you can use response mapping features to automate creating and/or updating Salesforce standard or custom object records.
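To make the automation idea concrete, here is a hypothetical after-insert trigger sketch. The response object and field API names below are guesses for illustration only; check the actual API names the SurveyVista package installs in your org (and note that relating a Task via WhatId assumes the object allows activities).

// Hypothetical schema: Survey_Response__c and NPS_Score__c are placeholders.
trigger SurveyResponseFollowUp on Survey_Response__c (after insert) {
    List<Task> followUps = new List<Task>();
    for (Survey_Response__c resp : Trigger.new) {
        // Detractor score (0-6): queue a follow-up for the account team.
        if (resp.NPS_Score__c != null && resp.NPS_Score__c <= 6) {
            followUps.add(new Task(
                Subject = 'Follow up on detractor survey response',
                WhatId = resp.Id
            ));
        }
    }
    if (!followUps.isEmpty()) {
        insert followUps;
    }
}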
Overview of SurveyVista Features and Use Cases
SurveyVista includes a survey builder that lives entirely in Salesforce, allowing you to create surveys with customizable questions, logic, and branding. Responses are stored directly in Salesforce records through a native data model, eliminating the need for external syncs or integrations.
Because the tool is built on Salesforce, responses can trigger Flows, Approvals, or Processes automatically. You can also analyze results using standard Salesforce Reports and Dashboards, and distribute surveys securely through Salesforce email, Experience Cloud, or custom links.
One important note is that SurveyVista can handle both authenticated and unauthenticated respondents. If you want to collect responses from external participants who do not have a Salesforce login, you can do so through public or personalized links. For authenticated external respondents, such as community users who log in through a Salesforce Digital Experience site, additional Salesforce licensing may be required.
Use Cases:
- Customer Satisfaction (CSAT) and NPS: Gather customer insights after key interactions.
- Employee Feedback: Collect internal survey responses securely.
- Training Assessments: Get immediate feedback from attendees.
- Operational checklists: Inspection checklists guiding the inspector to complete a list of tasks.
- Custom Business Processes: Build forms and capture input tied directly to Salesforce records.
Why Choose SurveyVista?
If your team values security, speed, and simplicity, SurveyVista gives you a native-first alternative to tools like Qualtrics or SurveyMonkey. Because everything lives in Salesforce, you avoid integration headaches and keep sensitive data where it belongs, under your org’s security umbrella.
SurveyVista keeps all survey responses inside Salesforce, giving you real time insights that combine feedback data with your existing customer CRM data, so you can take immediate action without waiting on integrations or external syncs.
SurveyVista Pricing: What It Costs and What You Get
SurveyVista is priced on an annual, org-wide basis, with plans starting at US $2,999 per year for smaller organizations. This gives you full access to a Salesforce-native survey solution without the overhead of integrating an external system.
There is also a Free Edition that includes core survey builder functionality. The free version comes with certain limitations, such as restrictions on how respondents access the survey, but it is a good way to explore the product and test it out inside your Salesforce environment.
Paid tiers scale up depending on your organization’s size and requirements. Larger organizations or those needing more advanced features can expect higher-tier plans in the range of $5,499 per year or more. For enterprise needs, Ardira offers custom pricing tailored to the scope of your surveys and the scale of your Salesforce org.
SurveyVista also supports a free trial of its paid tiers, so you can evaluate the tool before committing. See more pricing details on their website HERE.
Conclusion
SurveyVista makes collecting and acting on feedback simple, secure, and Salesforce native. Whether you’re measuring customer satisfaction, running employee surveys, or embedding forms into business processes, everything stays inside your org, where it’s accessible in real time, protected by Salesforce security, and ready to power automation. With flexible pricing, a free edition to get started, and an intuitive builder that lives in Salesforce, SurveyVista is an accessible solution for any team that wants actionable insights without integration headaches. Try it today at the Ardira website to see how easily you can bring surveys into Salesforce!
This post was sponsored by SurveyVista by Ardira.
#Ardira #AppExchange #SalesforceTutorials #Salesforce #SalesforceAdmins #SalesforceDevelopers #SurveyVista
-
How to Quickly Build a Salesforce-Native Satisfaction Survey Using SurveyVista
SurveyVista by Ardira is a Salesforce native survey solution that allows you to design, distribute, and analyze surveys directly within your Salesforce org. Unlike external survey tools that require complex integrations or third-party data syncs, SurveyVista keeps everything in-platform. This gives admins and business users a secure, streamlined way to capture feedback without leaving Salesforce.
🚨 Use case: Build a satisfaction survey to measure CSAT and NPS, accept free-form responses in addition to scores, and attach them to records in Salesforce for visibility, action and reporting purposes.Why Salesforce-Native Matters
Many survey tools rely on connectors, middleware, or APIs to bring data back into Salesforce. While this approach works, it introduces several challenges. Data leaving Salesforce and traveling across external systems creates additional security risks. It also increases integration overhead by requiring ongoing maintenance, troubleshooting, and vendor updates. On top of that, responses may not be available in real time inside Salesforce, which can slow down reporting and automation.
SurveyVista avoids these issues because it is 100% Salesforce native. Data never leaves your org and remains protected under the Salesforce trust framework, giving you stronger security. Responses are available instantly, making them immediately usable for reporting, flows, and automation. Since no external integration is required, admin overhead is reduced and your tech stack stays simple.
SurveyVista Install and Preparation
SurveyVista is an AppExchange solution. You can head over to the AppExchange and install the free/trial version of SurveyVista in your Org. Get it HERE.
Once you install the AppExchange package you can go to the lightning page and finish up your configuration there. The required steps are fairly simple and they relate to publishing a digital experience site where the surveys will be hosted. There are a few steps that require you to copy and paste code into the Developer Console and execute them. You should also check on the digital experience builder whether your digital experience site requires login or not. If you are going to host the survey publicly and accept anonymous responses, then your digital experience site needs to be made public.
You will also find on this page an option to download templates and examples. I find the template that includes all UI components very useful, because it quickly shows you what is possible.
You can start your survey from scratch or from a template.
Build
I decided to build a 5-question CSAT and NPS form. One question will accept the NPS score, while the last question will accept free-form text for open feedback.
The form structure is as follows:
Customer Satisfaction Survey
Q1. How satisfied are you with your overall experience?
Very satisfied / Satisfied / Neutral / Dissatisfied / Very dissatisfied
Q2. How would you rate the quality of our product/service?
Excellent / Good / Fair / Poor
Q3. How likely are you to recommend us to a friend or colleague?
NPS scale (0-10)
Q4. How responsive have we been to your questions or concerns?
Extremely responsive / Very responsive / Somewhat responsive / Not so responsive / Not at all responsive
Q5. Please share any additional feedback or suggestions you may have.
Paragraph (free-form text)SurveyVista offers ready components for you to add these inputs on your form. The customization options seem virtually limitless. Branding your survey is easy.
You can also customize your “Thank You” landing page and provide links on that page, as well.
Once you complete your design, you add the digital experience site to your survey and publish it. SurveyVista produces two links for your Survey form. One can be used for internal users, the other one for external users. You can send this link to your audience anytime on any channel, either manually or automatically.
Result
Here is the resulting form.
The beauty of SurveyVista is that the response is recorded in your Salesforce Org as an object record. You can trigger automation when the record is created, and relate this record to any record(s) you want in your Salesforce Org.
You can use the reports and dashboards SurveyVista package gives you, or set up your own reports and dashboards in Salesforce. In addition to relating to records, you can use response mapping features to automate creating and/or updating Salesforce standard or custom object records.
Overview of SurveyVista Features and Use Cases
SurveyVista includes a survey builder that lives entirely in Salesforce, allowing you to create surveys with customizable questions, logic, and branding. Responses are stored directly in Salesforce records through a native data model, eliminating the need for external syncs or integrations.
Because the tool is built on Salesforce, responses can trigger Flows, Approvals, or Processes automatically. You can also analyze results using standard Salesforce Reports and Dashboards, and distribute surveys securely through Salesforce email, Experience Cloud, or custom links.
One important note is that SurveyVista can handle both authenticated and unauthenticated respondents. If you want to collect responses from external participants who do not have a Salesforce login, you can do so through public or personalized links. For authenticated external respondents, such as community users who log in through a Salesforce Digital Experience site, additional Salesforce licensing may be required.
Use Cases:
- Customer Satisfaction (CSAT) and NPS: Gather customer insights after key interactions.
- Employee Feedback: Collect internal survey responses securely.
- Training Assessments: Get immediate feedback from attendees.
- Operational checklists: Inspection checklists guiding the inspector to complete a list of tasks.
- Custom Business Processes: Build forms and capture input tied directly to Salesforce records.
Why Choose SurveyVista?
If your team values security, speed, and simplicity, SurveyVista gives you a native-first alternative to tools like Qualtrics or SurveyMonkey. Because everything lives in Salesforce, you avoid integration headaches and keep sensitive data where it belongs: under your org’s security umbrella.
SurveyVista keeps all survey responses inside Salesforce, giving you real-time insights that combine feedback data with your existing CRM data, so you can take immediate action without waiting on integrations or external syncs.
SurveyVista Pricing: What It Costs and What You Get
SurveyVista is priced on an annual, org-wide basis, with plans starting at US $2,999 per year for smaller organizations. This gives you full access to a Salesforce-native survey solution without the overhead of integrating an external system.
There is also a Free Edition that includes core survey builder functionality. The free version comes with certain limitations, such as restrictions on how respondents access the survey, but it is a good way to explore the product and test it out inside your Salesforce environment.
Paid tiers scale up depending on your organization’s size and requirements. Larger organizations or those needing more advanced features can expect higher-tier plans in the range of $5,499 per year or more. For enterprise needs, Ardira offers custom pricing tailored to the scope of your surveys and the scale of your Salesforce org.
SurveyVista also supports a free trial of its paid tiers, so you can evaluate the tool before committing. See more pricing details on their website HERE.
Conclusion
SurveyVista makes collecting and acting on feedback simple, secure, and Salesforce native. Whether you’re measuring customer satisfaction, running employee surveys, or embedding forms into business processes, everything stays inside your org, where it’s accessible in real time, protected by Salesforce security, and ready to power automation. With flexible pricing, a free edition to get started, and an intuitive builder that lives in Salesforce, SurveyVista is an accessible solution for any team that wants actionable insights without integration headaches. Try it today at the Ardira website to see how easily you can bring surveys into Salesforce!
This post was sponsored by SurveyVista by Ardira.
#Ardira #AppExchange #SalesforceTutorials #Salesforce #SalesforceAdmins #SalesforceDevelopers #SurveyVista
-
One Simple Salesforce Flow Hack That Will Change Your Workflow Forever!
What if I told you that the Flow you’ve been building could secretly hold the key to a much bigger impact in the automation world? A world where you don’t rebuild logic over and over… where one Flow powers multiple flows.
Sounds dramatic, right? But once you learn this trick, it will be an invaluable addition to your flow arsenal that will superpower your workflows going forward.
Use case: Create a follow-up task due in seven days when the stage is updated to Proposal (and on create with that stage), if there is no existing open task with the same subject.
Let’s start by building this use case. Then we will get to the hack part.
Step 1. Build the Original Record-Triggered Flow
We’ll start with something simple: a record-triggered Flow on Opportunity that creates a Task when the Opportunity hits a certain stage. Before creating another task, check whether an open task with the same subject is already related to the opportunity. If there is one, skip the create.
- Trigger: Opportunity → when Stage = “Proposal/Quote”
- Action: Create Task → Assigned to Opportunity Owner
- Due date: 7 days from the current date
- WhatId (Related to Id) set as the triggering Opportunity
Straightforward.
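For readers who think in code, the duplicate check in that Get element is roughly equivalent to the following Apex sketch. The variable values and the use of IsClosed to mean “open” are my assumptions, not details from the original flow:
    Id opportunityId = '006xx0000012345AAA';        // hypothetical Opportunity Id
    String taskSubject = 'Send Proposal Follow-Up'; // hypothetical task subject
    // Look for an open task with the same subject on this opportunity
    List<Task> existing = [
        SELECT Id
        FROM Task
        WHERE WhatId = :opportunityId
          AND Subject = :taskSubject
          AND IsClosed = false
        LIMIT 1
    ];
    Boolean skipCreate = !existing.isEmpty(); // skip the create if a match exists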
But here’s the catch: this logic lives in a record-triggered flow. What if I wanted to leverage the task creation logic for multiple record-triggered flows (including scheduled paths), schedule-triggered flows, and possibly screen flows as well? And could I leverage the same flow for objects other than Opportunity? Good food for thought.
Step 2. Save As an Autolaunched Flow
Here’s where the hack begins.
From the Flow Builder menu, click Save As → choose A New Flow → Autolaunched (No Trigger).
Now we have the same logic, but free from the record trigger.
Step 3. Replace $Record With Input Variables
The Autolaunched Flow still references $Record from the Opportunity. That won’t work anymore. Time to swap those out. The references are listed under Errors. The flow cannot be saved until these Errors are fixed.
- Create input variables for everything your logic needs, e.g., recordId (WhatId), OwnerUserIdVar, DelayInDaysVar.
- Update your Create Task and Get Task elements and the Due Date formula to reference those input variables instead of $Record (see the formula sketch below).
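For example, the refactored Due Date formula might look like this (a minimal sketch, assuming your delay input variable is named DelayInDaysVar; substitute your own variable name):
    {!$Flow.CurrentDate} + {!DelayInDaysVar}
This returns a Date equal to the current date plus the number of days passed into the subflow.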
Boom. Your Flow is now a Subflow – it can take in data from anywhere and run its magic.
Step 4. Refactor the Original Record-Triggered Flow
Time to circle back to the original record-triggered Flow.
Open the Flow, Save As a New Version.
Delete all the elements. (Yes, all. Feels risky, but trust me.)
Add a Subflow element.
Select your new Autolaunched Flow.
Map the input variables to $Record fields, and provide the delay in days parameter value.
Now, instead of directly creating the Task, your record-triggered Flow just hands $Record data to the Subflow – which does the real work.
Here is how the debug run works.
Why This Hack Changes Everything
This one move unlocks a whole new way of thinking about Flows:
Reusability – Logic built once, used anywhere.
Maintainability – Update the Subflow, and every Flow that calls it stays consistent.
Scalability – Build a library of Subflows and assemble them like Lego pieces.
Testing Ease – Some flow types are hard to test. Your autolaunched subflow takes in all the necessary parameters in debug mode, and rolls back or commits the changes based on your preference.
Suddenly, your automation isn’t a patchwork of disconnected Flows – it’s a modular, scalable system.
The Secret’s Out
I call this the “Save As Subflow” hack. It’s hiding in plain sight, but most builders never use it. Once you do, your workflow will never be the same.
Remember, you can make your subflow logic as flexible as you want. You can add input variables for subject and description. This would make your task creation even more flexible, so it can be used for other objects like Case or custom objects.
Try it today – and the next time you find yourself rebuilding logic, remember: you don’t have to. Just save it, strip $Record, add input variables, and let your Subflows do the heavy lifting.
Explore related content:
Automate Permissions in Salesforce with User Access Policies
When Your DMLs Have Criteria Conditions Other Than Id
Display Product and Price Book Entry Fields in the Same Flow Data Table
How to Use a Salesforce Action Button to Validate Lookup Fields in Screen Flows
#Hack #HowTo #RecordTriggered #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials #Subflow
-
Should You Leave Unused Input and Output Flow Variables?
In Salesforce Flow, input variables are special placeholders that allow data to be passed into a flow from an external source, such as a Lightning page, a button, another flow, or even an Apex class, so that the flow can use that data during its execution. When you create an input variable in Flow Builder, you mark it as Available for Input, which makes it visible and ready to receive values from outside the flow. Output variables, on the other hand, are used to send data out of a flow so it can be consumed by whatever triggered or called the flow, such as another flow, a Lightning web component, or an Apex class. When you create a variable and mark it as Available for Output, the flow can pass its final or intermediate values back to the caller once it finishes running.
Input variables are especially useful for building modular, reusable flows. You can design them to handle different scenarios based on the values provided at runtime. For example, a record ID provided as an input variable can help the flow retrieve and update that specific record without needing user input. By leveraging input variables, you can keep flows flexible, reduce duplication, and make them easier to maintain.
Similarly, output variables are powerful when building modular, subflow-based solutions. The parent flow can feed inputs to the subflow, receive outputs in return, and then continue processing without extra queries or logic. For example, a subflow might calculate a discount amount or generate a new record ID. It can then return it as an output variable for the parent flow to use. Output variables make flows more reusable, keep processes streamlined, and allow different automation components to share data seamlessly.
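To make the Apex angle concrete, here is a minimal sketch of calling an autolaunched flow from Apex and reading a value back. The flow API name (Create_Follow_Up_Task) and all variable names are hypothetical; getVariableValue can only return variables marked Available for Output:
    // Values for variables marked Available for Input in the flow
    Map<String, Object> inputs = new Map<String, Object>{
        'recordId'       => '006xx0000012345AAA',
        'DelayInDaysVar' => 7
    };
    Flow.Interview.Create_Follow_Up_Task interview =
        new Flow.Interview.Create_Follow_Up_Task(inputs);
    interview.start();
    // Read back a variable marked Available for Output
    Id createdTaskId = (Id) interview.getVariableValue('CreatedTaskIdVar');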
Security Implications of Variables Available for Input and Output
In programming, a variable’s scope defines the region of code where it exists and can be used, such as within a specific method, a class, or an entire module. For example, a variable defined inside a method is local to that method and cannot be seen or changed by code outside it, much like keeping notes in your own locked desk drawer. This “privacy” ensures that internal details remain protected from unintended interference, which is a key aspect of encapsulation in programming. If you want other parts of the program to access the data, you must explicitly expose it through return values, public properties, parameters, or other controlled interfaces. This principle not only prevents accidental bugs but also supports security. Sensitive data and logic remain inaccessible unless intentionally shared, helping keep the system stable, predictable, and easier to maintain.
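As a toy Apex illustration of that locked-drawer idea (my example, not from the original post): the private field below is invisible to outside code, which can only reach it through the interface the class chooses to expose.
    public class BankAccount {
        // Hidden internal state; no code outside this class can touch it
        private Decimal balance = 0;

        // The controlled way in
        public void deposit(Decimal amount) {
            if (amount > 0) {
                balance += amount;
            }
        }

        // The controlled way out
        public Decimal getBalance() {
            return balance;
        }
    }
Flow variables without Available for Input or Available for Output behave like that private field: the flow can use them internally, but callers cannot see or set them.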
When you allow input variables for your flow, you allow external environments that run this flow to pass parameters into it. This potentially makes your flow vulnerable to outside attacks. When you configure output variables for your flow, you are creating a risk of external environments accessing flow output data. This is often data recorded in your Salesforce org. This data may include personally identifiable information or sensitive data.
In addition, avoid using inputs that are easy to guess. If you look up a contact record based on their email address, attackers may guess the email address after a few tries (a predictable firstname.lastname@company.com pattern, for example).
What About Flows Built for Digital Experience Guest Users?
When you build a flow and deploy it on a digital experience site, where the guest user can execute it without logging in, you are exposing your flow to the outside world. This scenario makes your flow even more vulnerable to outside attacks.
Guest User Means Anybody Can Access Any Time
First of all, please know that this is a very risky approach. You should assume anybody can run that flow at any time, because that is what you allowed. Make sure that only limited inputs and outputs are defined and used. The flow should only execute the limited scope it absolutely needs. Do not allow the flow to perform a multitude of operations just because you aim for flexibility. Test many scenarios to ensure attacks cannot derail your flow or trick it into performing operations it is not intended to perform.
Limit the Data
Furthermore, you should not allow the flow to access any information it does not need to see. If you are dealing with records or record collections, make sure your Get elements specify only the fields that are absolutely necessary. Do not get the driver’s license number for the contact when you just need the name. In this scenario, do not let Salesforce automatically decide which fields to get. Also, when performing updates, do not update all the field values on the record. Update only the fields that matter for your process.
Isolate the Elevated Functionality
Finally, you may be tempted to set your flow to run in system context without sharing, or to allow a guest user to view records in the org through sharing rules. Both scenarios introduce additional risks that must be carefully considered.
When allowing your automation to run in system context without sharing, isolate the necessary part into a subflow. Ensure that logic is tightened well from a security standpoint. Do not run the whole flow in system context without sharing mode. Just run the necessary part in a subflow using this elevated setting.
Screen Flows and Reactivity
Whether you allow elevated access or not, screen flows present a couple of inherent risks.
When you pass information to a data table, a Lightning web component, or a screen action, that information is accessed locally by your browser. If you feed a collection of contact records to a data table and get all field values before the user reaches the data table screen, the browser receives all those field values. This happens before the user interacts with the table, and the user can see these values.
Recent developments of reactivity for screen flows are fantastic from a UI standpoint, but further complicate the security risks. The more reactive functionality you use in your flow, the more data you handle locally in your browser.
Conclusion
When flow builders, especially new starters, create flow variables, they often freely check the Available for Input and Available for Output checkboxes, thinking the alternative would limit them. This is risky and not necessary. You can change these settings at any time without having to recreate the variables.
Always plan your inputs and outputs carefully and review them at the end of development. Make sure you don’t have any unused variables still accepting inputs or producing outputs.
In an era when the Salesforce name comes up in client data security breach incidents, apply extreme caution when dealing with automation.
This post is part of our Flow Best Practice series. See the other posts HERE.
Sources and references:
Building Secure Screen Flows For External User Access by Adam White
Data Safety When Running Screen and Autolaunched Flows in System Context – Salesforce Help
Explore related content:
How To Attach Files Using the Flow Email Action in Salesforce
Getting Started with Salesforce Data Cloud: Your Roadmap to Unified Customer Insights
How To Build Flex and Field Generation Prompt Templates in the Prompt Builder
#Apex #BestPractices #InputVariables #LowCode #OutputVariables #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials #ScreenFlow #Security
-
How to Build Custom Flow Approval Submission Related Lists
In the Spring ’25 release, Salesforce introduced Flow Approvals to replace the legacy approval processes. This approval platform is based on the orchestration functionality. I recorded and released two videos and posts to share this functionality on Salesforce Break. The videos saw great interest from the community and are about to reach 20K views. So, why is everyone talking about flow approvals?
There are multiple reasons:
- Flow approvals are orchestration-based, but they are entirely free, unlike other orchestrations.
- Legacy approvals are really old. Salesforce has not been investing in them, and they are past due for a remake.
- Legacy approvals are limited. To enhance the functionality, clients had to use AppExchange solutions or paid alternatives from Salesforce, like Advanced Approvals for CPQ.
- Flow approvals allow for parallel approvals, dynamic steps, and flexibility in the approval process.
This is why I decided to create more content in this area, starting with:
- A self-paced video course that teaches Flow Approval processes in depth, with hands-on practice. See the details here.
- Additional resources focused on solutions that bridge the gaps between Flow Approvals and Legacy Approvals, addressing the limitations of the new platform.
Here is the first post detailing a solution filling one of the gaps.
Flow Approvals Don’t Provide Sufficient Detail In The Related Lists
Here is the first point I would like to address: Flow approvals don’t provide the detailed information in the related lists of the object record that the legacy approvals did.
Solution: Build a screen flow with reactive data tables to show the approval submission records and their related records. Add the screen flow to a tab on the record page.
Salesforce provides a component that can be added to the record page, called the Approval Trace component. It shows some information about the approval process, but is not customizable. I asked myself how I could go beyond that, and decided to build a reactive screen flow with data tables to fill this functionality gap.
To build and deploy this flow, you need to follow these steps:
- Build the screen flow.
- Build the autolaunched flow that will fetch the data you will need. This flow will be used as the screen action in step one.
- After testing and activation, add the screen flow to the record page.
If you have never built a screen flow with screen actions before, let me be the first to tell you that steps one and two are not really completed in sequence. You go back and forth building these two flows.
Let’s get started.
Build the Flow Approval Submission Screen Flow
What I usually do when building these flows is get the screen flow started first. Then I build the autolaunched flow, and go back to the screen flow to build out the rest of the functionality. The reason is that the screen flow data tables need the outputs from the autolaunched flow to be fully configured.
This is what the screen flow looks like, once it is completed.
For now, you can just ignore the loop section. This is there to ensure that there is a default selection for the first data table, when the flow first runs.
This is the structure of the flow excluding that part:
- Get all approval submission records for the recordId that will be provided as input into the flow.
- Check if there are approval submissions found.
- Display a screen saying “no records were found,” if the get returns null.
- Display a reactive screen mainly consisting of three data tables with conditional visibility calling an autolaunched flow as a screen action.
Here is what this screen looks like:
After you build, test, and activate the autolaunched flow, configure the screen action under the screen properties as shown below.
How the Loop Section Works
The first data table has an input parameter that determines the default selection when the flow first runs. This is a record variable representing one member of the collection record variable that supplies the data. You need to loop over the collection of records to get to that record variable. Follow these steps:
- Loop the collection record variable which is the output of your get step. Sort the data by last modified date in your get step.
- Assign the first member to a record variable.
- Exit the loop unconditionally by connecting the path to the next element outside the loop.
- Add the resulting record variable to the default selection parameter under the configure rows section of your data table.
This loop always runs once, setting the default selection to the most recent approval submission. This populates the related data tables when the flow first runs.
Build the Screen Action Autolaunched Flow for Related Lists
The autolaunched flow receives a single approval submission recordId as input. Then it gets the related records and data the screen flow needs, and returns the data as output.
Here is a screenshot of the autolaunched flow.
This flow executes the following steps:
- Gets the approval submission data.
- Gets the user data for the submitter to resolve the full name.
- Gets approval work items.
- Checks for null and sets a boolean (checkbox) variable when the get returns null. The output uses this variable to control conditional visibility of the relevant data table. I found this method yields the best results.
- Gets approval submission details.
- Checks for null and sets a boolean variable when the get returns null. This variable is then used in the output to drive conditional visibility of the relevant data table.
- Assigns the get results to output collection record variables.
Final Deployment Steps
After testing and activating the autolaunched flow, you need to add it to the screen flow as the screen action. The flow input will be fed from the selection of the first data table. You will see that this step makes all the outputs of the autolaunched flow available to the screen flow. Using these outputs, build the two additional data tables and configure the conditional visibility.
After testing and activating your screen flow, add the flow to the record page on a dedicated new tab (or to a section on an existing tab). Select the checkbox to pass the recordId to the flow. Note that this flow will work with any record for any object.
Limitations and Suggested Improvements
While this screen flow provides a lot of detail and customization options, it has two limitations:
- By default, the data table does not resolve and display record names in lookup fields when you add these fields as columns. To address this, I added the submitter’s full name in a read-only text field for display on the screen. Workaround: Create formula fields on the object and display those in the data table.
- The data tables do not provide a clickable link. Combined with the limitation above, you can create a formula field on the object to address both of these gaps: show the record name and make it a clickable link. Here is the formula example you need for this (shout out goes to Brad Weller for his contribution):
HYPERLINK("/" & Id, Name, "_self")
While I wanted to make these additions to these flows, I did not want to add custom fields to the objects. It should be your decision whether you want to do that or not.
Install the Package to Your Dev Org
Here is the second generation unprotected package for these two flows that you can install in your Dev Org:
For a more visual walkthrough of how these flows are built, watch the Salesforce Break YouTube video below.
With Salesforce phasing out legacy approvals, mastering Flow Approvals is essential to keep your org’s processes modern, flexible, and future-ready. Gain the confidence to handle any approval challenge with solutions that work seamlessly in real-world Salesforce environments HERE.
Explore related content:
Supercharge Your Approvals with Salesforce Flow Approval Processes
When Your DMLs Have Criteria Conditions Other Than Id
Start Autolaunched Flow Approvals From A Button
Get Ready for the New Time Data Type – Summer ‘25 Flow Goodness
#AutolaunchedFlow #FlowApprovals #FlowBuilder #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials
-
Simplify Salesforce Integrations with Declarative Webhooks
Salesforce continues to invest in tools that simplify integration tasks for administrators. Low-code setup for integrations is possible on the Salesforce platform today. However, the functionality is still dispersed across the platform, spanning several tools and keeping the difficulty of setup relatively high. This is where Declarative Webhooks comes in. This platform makes inbound and outbound integrations easy, and keeps all your configurations together in one single app.
What is Declarative Webhooks?
Declarative Webhooks is a native Salesforce application developed by Omnitoria. It allows admins to build significant, scalable integrations with external platforms without writing code. Built to work with REST APIs that use JSON or x-www-form-urlencoded data, the app makes it possible to configure both outbound and inbound connections from within Salesforce. It’s ideal for admins, developers, and operations teams looking to connect Salesforce to third-party tools quickly and securely.
Declarative Webhooks currently holds a 5-star rating on the AppExchange, with positive feedback from users across industries.
Key Declarative Webhooks Features
Declarative Webhooks enables bidirectional integrations. You can send data out of Salesforce (outbound) by triggering callouts through Flow, Process Builder, Apex, custom buttons, or scheduled batches. You can also receive data from external systems (inbound) by defining endpoints within Salesforce that respond to external webhooks.
Unlike standard Salesforce tools, Declarative Webhooks actually creates and hosts inbound URLs—eliminating the need for middleware, and enabling real-time sync with external systems directly from your org.
The interface is entirely point-and-click, making setup approachable even for non-developers. The app includes template-based configurations that streamline implementation without the need for custom Apex. Help and guidance are provided throughout the UI, each step of the way.
Security and flexibility are top priorities. Declarative Webhooks supports a variety of authentication methods, including OAuth and Basic Authentication, and allows you to configure secure handling of credentials and external tokens.
For more advanced use cases, the app includes features like retry logic, callout sequences, and detailed error handling. You can tailor integrations to your needs using scheduling tools or triggering logic from inside Salesforce.
Real-World Use Cases
Slack Webhook
Simple use case: Trigger Slack actions via Slack workflows from Salesforce – send a message to a channel and add a name to a Slack list.
Now granted, this can also be achieved with Salesforce-Slack actions; however, I wanted to take this opportunity to trigger Slack workflows with webhooks, and demo the Declarative Webhooks functionality with a simple use case.
I set up a Slack workflow that triggers based on a webhook. This workflow posts a message to a channel and adds the name passed in via the webhook to a list of contacts.
You can see the configuration of the Slack workflow and the Slack output results below.
How Did I Configure Declarative Webhooks to Achieve This Result?
First, you need to install Declarative Webhooks from the Salesforce AppExchange. I will give you the link further down in this post. The app is free to install and try.
- Complete the Slack configuration of the workflow. Slack will give you a webhook URL.
- Configure Declarative Webhooks and add the URL to the configuration page. Make sure you add the domain URL to Salesforce Remote Site Settings.
- Test and activate your callout.
- Use one of the many methods available in Declarative Webhooks to trigger the callout from Salesforce (see the Apex comparison below).
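For contrast, here is roughly the Apex you would otherwise write for this outbound call – the kind of code Declarative Webhooks saves you from maintaining. The webhook URL and the name key are placeholders from my Slack workflow setup, not values the product supplies:
    // Minimal sketch of a POST to a Slack workflow webhook
    HttpRequest req = new HttpRequest();
    req.setEndpoint('https://hooks.slack.com/triggers/T00000000/000000000/placeholder');
    req.setMethod('POST');
    req.setHeader('Content-Type', 'application/json');
    req.setBody(JSON.serialize(new Map<String, String>{ 'name' => 'Ada Lovelace' }));
    HttpResponse res = new Http().send(req);
    System.debug(res.getStatusCode()); // 200 means Slack accepted the trigger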
Inbound Call Template for Zoho Campaigns
Use case: Zoho Campaigns can generate a webhook callout when a contact unsubscribes from an email list. When a contact unsubscribes from the list, make a callout to Salesforce and activate the Do Not Call checkbox on the contact.
How Did I Configure Declarative Webhooks and Zoho Campaigns to Achieve This Outcome?
- Set up an Inbound Call Template in Declarative Webhooks. The magic of this platform is that it generates an external endpoint URL for you. You can choose whether to require authentication.
- Create a webhook on the Zoho Campaigns side and pass the Name and Email of the contact to Salesforce (see the sample payload after these steps). Enter the URL generated by Declarative Webhooks here.
- Build an autolaunched flow to update the checkbox on the matching record.
- Test and activate your flow and Declarative Webhooks template.
- Unsubscribe the contact from the list on the Zoho Campaigns side and see the magic unfold.
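For a sense of the payload shape, the webhook body in this setup carries just the two fields being passed. The key names below are illustrative, since you define them yourself in the Zoho Campaigns webhook configuration:
    {
      "name": "Ada Lovelace",
      "email": "ada.lovelace@example.com"
    }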
I really liked this functionality. The logs show whether the flow executed successfully. As a future enhancement, I would like the Declarative Webhooks logs to also show the output variable values coming from the flow.
Pricing Overview
Declarative Webhooks is free to install and use in Salesforce sandbox environments indefinitely. In a production or developer org, users get a 30-day free trial. After that, the app remains free for basic use, up to 100 inbound and 100 outbound calls per month, using one outbound and one inbound template.
For organizations that need more capacity or advanced functionality, paid plans are available. These plans scale with usage and support additional templates, retries, and enhanced features. Nonprofit discounts are available, making the app accessible to mission-driven organizations.
Follow this link to find out more about the product and try it yourself.
Why Declarative Webhooks?
This app removes the need for manual data entry and reduces the likelihood of human error. It lets teams centralize their business operations within Salesforce, replacing disconnected workflows with streamlined automations. Whether you’re connecting to popular SaaS tools or custom-built systems, Declarative Webhooks empowers teams of all skill levels to build reliable integrations that scale with their business.
How to Get Started
You can install Declarative Webhooks directly from the AppExchange. The installation process is quick, and the setup guide walks you through each step. Start experimenting in a sandbox or production trial, and configure your first outbound or inbound connection using the built-in templates. Whether you’re an admin looking to eliminate duplicate entries or a developer needing a fast integration framework, this tool provides the support you need to get started quickly.
Final Thoughts
I liked how Declarative Webhooks brought various integration methods together in one app. I especially like the inbound call functionality. Ease of setup, flexible pricing, and native integration with Salesforce automation tools are attractive features for Salesforce Admins. If you are in the market for integration solutions, I recommend you check out Declarative Webhooks by Omnitoria here.
This post was sponsored by Omnitoria.
Explore related content:
Getting Started with Salesforce Data Cloud: Your Roadmap to Unified Customer Insights
How To Use Custom Permissions In Salesforce Flow
Create Document Zip Archives in Salesforce Flow
Dynamically Create Documents Using PDF Butler
#DeclarativeWebhooks #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials
-
Simplify Salesforce Integrations with Declarative Webhooks
Salesforce continues to invest in tools that simplify integration tasks for administrators. Low-code set up for integrations are possible on the Salesforce platform today. However, the functionality still seems to be dispersed all over the platform utilizing several tools, and keeping the difficulty of setup relatively high. This is where Declarative Webhooks comes in. This platform makes inbound and outbound integrations easy, and keeps all your configurations together in one single app.
What is Declarative Webhooks?
Declarative Webhooks is a native Salesforce application developed by Omnitoria. It allows admins to build significant, scalable integrations with external platforms without writing code. Built to work with REST APIs that use JSON or x-www-form-urlencoded data, the app makes it possible to configure both outbound and inbound connections from within Salesforce. It’s ideal for admins, developers, and operations teams looking to connect Salesforce to third-party tools quickly and securely.
Declarative Webhooks currently holds a 5-star rating on the AppExchange, with positive feedback from users across industries.
Key Declarative Webhooks Features
Declarative Webhooks enables bidirectional integrations. You can send data out of Salesforce (outbound) by triggering callouts through Flow, Process Builder, Apex, custom buttons, or scheduled batches. You can also receive data from external systems (inbound) by defining endpoints within Salesforce that respond to external webhooks.
Unlike standard Salesforce tools, Declarative Webhooks actually creates and hosts inbound URLs—eliminating the need for middleware, and enabling real-time sync with external systems directly from your org.
The interface is entirely point-and-click, making setup approachable even for non-developers. The app includes template-based configurations that streamline implementation without the need for custom Apex. Help and guidance is provided throughout the UI each step of the way.
Security and flexibility are top priorities. Declarative Webhooks supports a variety of authentication methods, including OAuth and Basic Authentication, and allows you to configure secure handling of credentials and external tokens.
For more advanced use cases, the app includes features like retry logic, callout sequences, and detailed error handling. You can tailor integrations to your needs using scheduling tools or triggering logic from inside Salesforce.
Real-World Use Cases
Slack Webhook
Simple use case: Trigger Slack actions via Slack workflows from Salesforce – Send a message to a channel and add a name to a Slack list.Now granted, this can also be achieved with Salesforce-Slack actions, however, I wanted to take this opportunity to trigger Slack workflows with Webhooks, and demo the Declarative Webhooks functionality with a simple use case.
I set up a Slack workflow that triggers based on a webhook. This workflow posts a message to a channel and adds the name of the person that is passed into the workflow via the webhook to a list of contacts.
You can see the configuration of the Slack workflow and the Slack output results below.
How Did I Configure Declarative Webhooks to Achieve This Result?
First you need to install Declarative Webhooks from Salesforce AppExchange. I will give you the link further down on this post. This app is free to install and try.
- Complete the Slack configuration of the workflow. Slack will give you a webhook URL.
- Configure Declarative Webhooks and add the URL to the configuration page. Make sure you add the domain URL to Salesforce Remote Site Settings.
- Test and activate your callout
- Use one of the many methods available in Declarative Webhooks to trigger the callout from Salesforce.
Inbound Call Template for Zoho Campaigns
Use case: Zoho Campaigns can generate a webhook callout when a contact unsubscribes from an email list. When a contact unsubscribes from the list, make a callout to Salesforce and activate the Do Not Call checkbox on the contact.How Did I Configure Declarative Webhooks and Zoho Campaigns to Achieve This Outcome?
- Set up an Inbound Call Template on Declarative Webhooks. The magic on this platform is that it generates an external endpoint URL for you. You can chose to authenticate or not.
- Create a webhook on the Zoho Campaign side and pass the Name and Email of the contact to Salesforce. Enter the URL generated by Declarative Webhooks here.
- Build an autolaunched flow to update the checkbox on the matching record.
- Test and activate your flow and Declarative Webhooks template.
- Unsubscribe the contact from the list on the Zoho Campaigns side and see the magic unfold.
I really liked this functionality. The logs show whether the flow executed successfully. For future enhancements, I would like for the Declarative Webhooks logs to also show output variable values coming from the flow.
Pricing Overview
Declarative Webhooks is free to install and use in Salesforce sandbox environments indefinitely. In a production or developer org, users get a 30-day free trial. After that, the app remains free for basic use, up to 100 inbound and 100 outbound calls per month, using one outbound and one inbound template.
For organizations that need more capacity or advanced functionality, paid plans are available. These plans scale with usage and support additional templates, retries, and enhanced features. Nonprofit discounts are available, making the app accessible to mission-driven organizations.
Follow this link to find out more about the product and try it yourself.
Why Declarative Webhooks?
This app removes the need for manual data entry and reduces the likelihood of human error. It lets teams centralize their business operations within Salesforce, replacing disconnected workflows with streamlined automations. Whether you’re connecting to popular SaaS tools or custom-built systems, Declarative Webhooks empowers teams of all skill levels to build reliable integrations that scale with their business.
How to Get Started
You can install Declarative Webhooks directly from the AppExchange. The installation process is quick, and the setup guide walks you through each step. Start experimenting in a sandbox or production trial, and configure your first outbound or inbound connection using the built-in templates. Whether you’re an admin looking to eliminate duplicate entries or a developer needing a fast integration framework, this tool provides the support you need to get started quickly.
Final Thoughts
I like how Declarative Webhooks brings various integration methods together in one app. I especially like the inbound call functionality. Ease of setup, flexible pricing, and native integration with Salesforce automation tools are attractive features for Salesforce Admins. If you are in the market for integration solutions, I recommend you check out Declarative Webhooks by Omnitoria here.
This post was sponsored by Omnitoria.
Explore related content:
Getting Started with Salesforce Data Cloud: Your Roadmap to Unified Customer Insights
How To Use Custom Permissions In Salesforce Flow
Create Document Zip Archives in Salesforce Flow
Dynamically Create Documents Using PDF Butler
#DeclarativeWebhooks #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials
-
When Your DMLs Have Criteria Conditions Other Than Id
The Update Records element in Salesforce Flow is a powerful tool that allows you to modify existing records without writing any code. It’s commonly used to change field values and update statuses. You can configure it to update a specific record (like a record from the trigger or a record you’ve retrieved in a prior element), or you can set conditions to update multiple records that meet certain criteria. Best practice is to keep your updates efficient. Limit the number of records updated when possible, and always ensure that your flow logic avoids unnecessary updates to prevent hitting governor limits or creating infinite loops. Use it thoughtfully to streamline processes and maintain clean, accurate data.
Update Records
When you update records, there are three ways you can configure the update element:
- Update using Id(s): Your update element can point to one record Id or multiple record Ids using the IN operator when executing the update. This is efficient, as the record(s) are uniquely identified, and it consumes one DML statement against your governor limit.
- Update using a collection: This method is efficient, because the update element always consumes one DML against your governor limit, regardless of how many records you are updating in one shot. You can update up to 10K records in one update element.
- Update using criteria conditions for field values other than Id: When updating multiple records, we can also set conditions and update all the records that meet them. In this case, Salesforce queries the database to get the records that will be updated, then performs the update. This method therefore consumes one SOQL and one DML against your governor limits. Keep in mind that the conditions may match only one record, or none at all.
Update Using Criteria Conditions For Field Values Other Than Id
Let’s expand on the last method. For an inactive account, you may want to update all open cases to closed status. In a flow we could configure the update element with the following conditions:
- AccountId = Id of the inactive account
- Closed = false (case status is not closed)
And for these accounts the field update that will be performed is as follows:
Status = Closed (set status to closed)
In this scenario, what Salesforce will do is query and find the records using the two conditions listed above (SOQL) and set the Status field on these records to Closed (DML).
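In Apex terms, here is roughly what that single Update Records element does behind the scenes. This is a sketch for illustration, not literally what Flow generates; I use the standard IsClosed field on Case to express "not closed":

public class CloseCasesExample {
    public static void closeOpenCases(Id inactiveAccountId) {
        // One SOQL: find the records that meet the criteria conditions.
        List<Case> openCases = [
            SELECT Id, Status
            FROM Case
            WHERE AccountId = :inactiveAccountId AND IsClosed = false
        ];
        // Apply the field update in memory.
        for (Case c : openCases) {
            c.Status = 'Closed';
        }
        // One DML: persist all the changes in a single statement.
        update openCases;
    }
}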
Now, is this a bad thing? Not necessarily. This is a little-known fact that you should keep in mind when optimizing your flow for governor limit usage.
What is the alternative? You could perform the update using one of the other methods listed above. Let’s look at these alternatives in detail:
Update Using Id(s)
If you wanted to use this method, you could get the records according to the criteria conditions, extract the Ids into a text collection using the transform element, and perform the update using the IN operator. This alternative is more complicated, and it does not bring any efficiencies.
Update Using a Collection
You could get a collection of records using the conditions, loop through each item to update the case status (or, depending on your use case, use the transform element to update the status in one shot), and then update using the processed collection. Too complicated, and this alternative still uses one SOQL and one DML.
Conclusion
Updates that include conditions beyond specifying the Id of the record consume one SOQL and one DML against your execution governor limits; make sure you check and control your governor limit usage.
Explore related content:
Salesforce Flow Best Practices
Can You Start With a Decision Inside Your Record-Triggered Flow?
#Automation #DML #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials #UpdateElement
-
Getting Started with Salesforce Data Cloud: Your Roadmap to Unified Customer Insights
It’s not uncommon for businesses to lose track of their customers when data lives in too many places. Data scattered across various systems, from CRM and marketing automation to e-commerce platforms and mobile apps, creates “data silos” that hinder a complete understanding of customer behavior and preferences. This leads to misleading metrics, redundant communications, and missed opportunities for truly personalized engagement. This is where Salesforce Data Cloud steps in, offering a solution to connect, harmonize, and activate all your customer data, transforming it into actionable insights.
Evolving from Salesforce CDP (Customer Data Platform) and formerly known as Genie, Salesforce Data Cloud is designed to create a unified picture of your customers. It enables you to bring together data from any source, regardless of its format, using low-code tools and advanced architectural foundations like the lakehouse architecture and Hyperforce. The ultimate goal is not just data aggregation, but to empower every part of your organization, from marketing and sales to service and commerce, with real-time, intelligent actions.
This guide will walk you through the essential phases of getting started with Salesforce Data Cloud.
Why Data Cloud? The Core Problem It Solves
The primary challenge Salesforce Data Cloud addresses is the elimination of data silos. Imagine a customer interacting with your brand through multiple touchpoints: they browse your website, sign up for a newsletter, make a purchase through your e-commerce platform, and contact customer service. Each interaction generates data, but this data often resides in separate systems, each managed by different teams or individuals. Without a unified view, you might send generic emails, offer irrelevant products, or even annoy customers with redundant communications because you don’t recognize them as the same individual across all these systems.
Data Cloud provides a unified picture by ingesting data from diverse sources, including Salesforce CRM, Marketing Cloud, Commerce Cloud, Amazon S3, Google Cloud Storage, Azure, Workday, and SAP, using a rich library of pre-built connectors or flexible APIs. This consolidation is crucial for building unified customer profiles that represent a complete, 360-degree view of each individual, avoiding misleading metrics and improving personalization.
Beyond just collection, Data Cloud is built to make data actionable. It enables you to perform transformations and aggregations to generate calculated insights (e.g., Customer Lifetime Value, engagement scores), segment your audience with precision, and trigger real-time actions across various channels. Its architecture, based on a lakehouse model on Hyperforce, supports high-volume data ingestion and processing at the metadata level, ensuring efficiency and scalability.
It’s also important to note Data Cloud’s consumption-based pricing model, where you pay only for the services you use, making efficient data management even more critical. Despite improvements made over recent years, estimating Data Cloud costs remains a challenge.
Phase 1: Planning and Discovery – Laying the Groundwork
Any successful Data Cloud implementation begins with a meticulous planning and discovery phase. This foundational step ensures alignment with business goals and prepares the ground for effective data management. Data Cloud is a platform where most of the implementation time needs to be spent on preparation and design. Rushing these phases can be costly, causing rework and frustration.
Define Business Objectives and Use Cases
Before diving into technicalities, ask fundamental questions:
- Why are you starting a data platform solution?
- What is the vision for this Data Cloud solution?
- What are your primary use cases, and are they aligned with top business priorities?
- How will you measure the success of the implementation?
For optimal results, start small. Focus on one or two core use cases initially. This iterative approach allows you to:
- Identify platform nuances.
- Understand source systems and their data quality.
- Develop robust data dictionaries.
- Monitor use cases, then expand.
Ultimately, you should catalog the available data and build a prioritized list of use cases based on their tangible business value.
Understanding Roles and Ownership
A Data Cloud implementation necessitates a strong partnership between IT and marketing/business teams. Clearly define who owns what:
- CDP Administrator/Platform Owner: Manages the Data Cloud platform.
- Data Roles: Responsible for creating data pipelines.
- Marketing Roles: Focus on audience creation, campaign execution, and strategy.
- Customer Insights and Analytics Teams: Leverage the unified data for reporting and analysis.
Align these roles with your organization’s existing structure to ensure all necessary stakeholders are involved from the outset.
Data Inventory and Quality
This is arguably the most critical aspect of planning. Prepare a thorough data dictionary or inventory that comprehensively lists all data sources, preferred ingestion methods, necessary transformations, and how they relate to your defined use cases.
- Field-Level Data Inspection: Scrutinize individual fields for accuracy, identify primary keys, and assess whether data needs normalization or denormalization.
- Data Profiling Tools: These are invaluable for understanding your data. They can analyze field distribution, completion rates, and help identify relevant fields. Profiling helps confirm if your approach will stay within free credit limits and accelerates the design phase.
- Clean Data Upstream: It cannot be stressed enough: clean and sanitize your data at the source system before ingestion. Data Cloud is a unification tool, not primarily a data cleansing or deduplication tool. Ingesting bad or unnecessary data can significantly increase credit consumption and lead to inaccurate results.
- Prioritize Data: Avoid the common pitfall of trying to bring in “all the data”.
- Data Type Alignment: For Zero-Copy integrations, ensuring data type alignment between your source schema (e.g., Snowflake) and Data Cloud’s data model objects (DMOs) is crucial to prevent mapping issues.
- Unique Keys: Data Cloud operates on an upsert (update or insert) model. Ensure every row in your data files has a unique key (either a single field or a composite key) to prevent incorrect merging of records during ingestion.
Phase 2: Architecture and Setup – Building the Foundation
Once the planning is complete, the next phase involves architecting and setting up Data Cloud to receive and process your data.
Connector Selection and Data Ingestion
Salesforce Data Cloud offers flexible ways to ingest data:
- Out-of-the-Box (OOTB) Connectors:
- Prioritize using OOTB connectors for Salesforce CRM, Marketing Cloud, Commerce Cloud, Amazon S3, Google Cloud Storage, and Azure. These are pre-built and minimize effort.
- Ingestion API (Batch vs. Streaming):
- Batch Ingestion: Ideal for front-loading historical data or ingesting large volumes at scheduled, off-peak hours. Data is typically sent in CSV format.
- Streaming Ingestion: Designed for near real-time ingestion of small batches of data, such as user actions on websites or POS system events. Data is typically sent in JSON format (see the sketch after this list).
- Setup Process: First, create an Ingestion API connector, which defines the expected schema and data format. Then, create a data stream for each object you intend to ingest through that connector.
- Authentication: Secure API calls require setting up Connected Apps in Salesforce, leveraging OAuth flows like JWT for authentication.
- API Limits: Be aware of limitations, such as 250 requests per second for streaming APIs and a 200 KB payload size per request. These are important for designing your ingestion strategy.
- Schema Mistakes: If you get a data type wrong in your schema, you generally cannot change it directly after creation.
- Web & Mobile SDK:
- These SDKs are specifically tailored to capture interaction data from websites and mobile applications, such as page views and clicks.
- Key Benefits: They come with built-in identity tracking (managing both anonymous and known user profiles) and cookie management, simplifying the process of linking anonymous activity to known profiles once a user identifies themselves.
- Consent Management: The SDKs also include integrated consent management, ensuring data is only collected and used with user permission.
- Sitemap: A powerful feature that allows for centralized data capture logic across multiple web pages, reducing the need to embed code on every page.
- Experience Cloud Integration: For Experience Cloud sites, a new integration feature provides a data kit that simplifies setup and automatically captures standard events.
- SDK vs. Ingestion API for Web: For web and mobile applications, the SDK is generally preferred over the Ingestion API because it handles authentication more securely (no client-side exposure) and streamlines data capture.
- Zero-Copy Integration:
- This revolutionary feature allows Data Cloud to directly access live data stored in external data lakes and warehouses like Snowflake, Databricks, Google BigQuery, and AWS (S3, Redshift) without physically moving or duplicating the data.
- Advantages: Offers near real-time data access, eliminates data duplication, and extends the value of existing data lake/warehouse investments.
- Important Considerations: Data type alignment between your source system and Data Cloud is critical for successful mapping. Also, be prepared for network and security configurations (e.g., VPC, IP whitelisting) to ensure secure connectivity between Data Cloud (hosted on AWS) and your external cloud environments.
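To make the streaming pattern concrete, here is a hedged sketch of a small JSON batch being pushed to an Ingestion API endpoint, expressed in Apex for familiarity (in practice an external system usually makes this call). The endpoint path, source name, object name, and field keys are placeholders; take the real values from your connector's setup page, and obtain the bearer token via your Connected App's OAuth flow:

public class DataCloudStreamingDemo {
    public static void sendEvent(String accessToken) {
        HttpRequest req = new HttpRequest();
        // Placeholder endpoint: the tenant-specific URL, source name, and
        // object name come from your Ingestion API connector's setup page.
        req.setEndpoint('https://YOUR_TENANT_ENDPOINT/api/v1/ingest/sources/My_Connector/Web_Event');
        req.setMethod('POST');
        req.setHeader('Authorization', 'Bearer ' + accessToken);
        req.setHeader('Content-Type', 'application/json');
        // Streaming payloads are small JSON batches; the keys must match the
        // schema you defined when creating the connector.
        req.setBody(JSON.serialize(new Map<String, Object>{
            'data' => new List<Object>{
                new Map<String, Object>{
                    'event_id' => '12345',
                    'email' => 'jane@example.com',
                    'event_type' => 'page_view'
                }
            }
        }));
        HttpResponse res = new Http().send(req);
        System.debug('Ingestion API status: ' + res.getStatusCode());
    }
}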
Data Harmonization and Modeling
After data is ingested into Data Cloud, it enters the harmonization and modeling stage:
- Data Lake Objects (DLOs): When data first enters Data Cloud, it’s stored in DLOs, which are essentially raw, un-transformed representations of your source data.
- Data Model Objects (DMOs): DMOs represent Data Cloud’s canonical data model. The next crucial step is to map your DLOs to DMOs, transforming the raw data into a standardized structure that Data Cloud understands and uses for downstream processes.
- Standard vs. Custom DMOs/Fields: Data Cloud provides standard DMOs (e.g., Account, Contact, Individual). Leverage these where possible. For unique business requirements or custom fields from your source systems, you have the flexibility to create custom DMOs or add custom fields to standard DMOs.
- Formula Fields: These are powerful tools within Data Cloud, similar to Salesforce CRM formulas. Use them to augment your data (e.g., create composite unique keys for identity resolution) or cast data types if mismatches occurred during ingestion.
- Interim DLOs: In complex scenarios, consider creating “interim DLOs.” These can be used as an intermediate step to maintain additional business context, perform standardization, or scrub data before it’s mapped to the final target DMOs.
- Data Categories: When setting up data streams, you assign a category to the data, which influences how it’s used:
- Profile Data: Contains identification information (like name, email, address) and is crucial for identity resolution.
- Engagement Data: Represents event-driven interactions (e.g., website clicks, purchases, mobile app logins). This data is typically used for aggregated statistics and behavioral insights.
- Other: For data that doesn’t fit neatly into the above categories.
- Data Spaces: Data Cloud allows you to logically separate data using data spaces. These function similarly to business units in Marketing Cloud, enabling you to manage data for different regions, brands, or entities, and ensuring compliance with regulations like PDPA, GDPR, or CCPA by controlling data visibility and access.
- Relational Model: Maintain a comprehensive data dictionary that details your entire data model, including relationships between DLOs and DMOs.
Phase 3: Unification
With your data ingested and harmonized, the next critical phase is unification, where disparate customer profiles are brought together into a single, comprehensive view.
Identity Resolution
Identity Resolution is the core capability that enables Data Cloud to build a single, unified customer profile from various data sources. This process is crucial to:
- Avoid inflating your customer metrics.
- Prevent sending redundant communications.
- Enhance personalization across all touch points.
The identity resolution process is typically two-fold, matching followed by reconciliation:
- Matching Rules: These rules define the criteria for identifying when different records belong to the same individual. Examples include using fuzzy matching for first names (allowing for minor variations), exact matching for last names and email addresses, or linking records based on social handles.
- Party Identification Model: Leverage external identifiers like loyalty member IDs or driver’s license numbers to enhance matching accuracy. This model helps link profiles across systems that might not share common direct identifiers.
- Required Match Elements: Be aware of specific requirements when unifying accounts or individuals.
- Reconciliation Rules: Once potential matches are identified, reconciliation rules determine which attribute values will represent the unified profile. For instance, if a customer has multiple email addresses across different source systems, you can define rules to select the “most frequent” email, or prioritize data from a “source of truth” system.
Key Considerations for Identity Resolution:
- Thorough Data Understanding: A deep understanding of your data, including unique IDs, field values, and relationships, is paramount for configuring effective matching and reconciliation rules.
- Start with Unified Profiles Early: Even if your initial match rates are low, begin building calculated insights and segments against unified profiles from the outset. This prepares your Data Cloud environment for seamless integration of new data sources in the future.
- Credit Consumption: Identity resolution is a credit-intensive operation (e.g., 100,000 credits per million rows processed). While incremental processing is improving efficiency, careful planning of how often identity resolution runs is essential to manage costs.
- Anonymous Data: By default, the Marketing Cloud Personalization connector sends events only for known users. Enabling anonymous events drastically increases data volume and credit consumption, and you should note that Data Cloud doesn’t reconcile anonymous events to known users out of the box. You’ll need to implement custom solutions for that reconciliation.
- Data Quality is Paramount: The success of identity resolution hinges on the quality of your incoming data. If your source systems contain “garbage” (inaccurate or inconsistent data), your unified profiles will reflect that. Therefore, prioritize cleaning your source data before bringing it into Data Cloud.
Phase 4: Activation – Turning Data Into Actions
The final, and arguably most impactful, phase is activation. This is where you use your unified, intelligent data to drive personalized customer experiences and automate workflows across various channels.
Calculated Insights
Calculated Insights allow you to perform aggregations and transformations on your data to derive meaningful metrics. These can include:
- Customer Lifetime Value (LTV)
- Engagement Scores
- Total Deposit per Month
- Propensity to Buy
These insights enrich your unified customer profiles, providing deeper understanding and enabling more sophisticated segmentation and personalization strategies.
Segmentation
Data Cloud’s segmentation capabilities enable you to create dynamic audience segments based on any harmonized attribute or calculated insight. This allows for precise targeting of specific customer groups.
- Building Segments: Use the intuitive segment builder to drag and drop fields and apply criteria. You can combine rules with AND/OR logic to refine your audience.
- Nested Segments: This feature allows you to incorporate one segment within another. However, be mindful of limitations, such as a maximum of 50 filters per segment.
- Publishing: Publish segments to various activation targets. While Marketing Cloud Personalization supports only “standard publish,” other targets might allow “rapid publish” for faster audience delivery.
Activation Targets and Activations
After creating segments or calculated insights, you define activation targets, the destinations where you send this actionable data. Data Cloud offers broad activation capabilities:
- Marketing Cloud: Push segments into Marketing Cloud data extensions for email personalization and Journey Builder entry events. You can also use Data Cloud data to influence different journey paths within Marketing Cloud, for example, by attaching custom attributes to Contact Builder.
- Advertising Platforms: Directly send customer segments to major advertising platforms like Google, Meta, and Amazon for targeted campaigns.
- Salesforce Flow: Initiate real-time Salesforce automation (Flows) based on data changes, calculated insights, or streaming events processed by Data Cloud. You can configure this via Data Actions (see the sketch after this list).
- Webhooks: Data Actions can also trigger webhooks to send data to virtually any third-party system.
- Data Lakes & Warehouses: Securely share harmonized profiles, segments, or insights back to external platforms like Snowflake, Databricks, or Google BigQuery.
- Business Applications: Push unified data or activate segments directly into other downstream business applications like ERP systems or other analytics tools.
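If a flow triggered by a Data Action needs to hand off to code, the standard Salesforce pattern is an invocable Apex action. Here is a generic sketch; the class name and the input it receives are my own assumptions for illustration, not a Data Cloud API:

public class DataActionHandler {
    // Generic invocable action that a data-action-triggered flow could call.
    @InvocableMethod(label='Handle Data Cloud Event')
    public static void handle(List<String> profileIds) {
        // Placeholder logic: react to the IDs the flow passes in.
        System.debug('Received ' + profileIds.size() + ' profile IDs from Data Cloud');
    }
}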
Platform Monitoring
Consistent monitoring of your Data Cloud platform is crucial post-implementation. This includes:
- API Ingestion Monitoring: Track data flow from MuleSoft or other APIs to Data Cloud.
- Segment Publications: Verify that segments are publishing correctly and yielding expected results. Issues can occur if upstream data ingestion or unification breaks.
- Activations: Ensure data is successfully reaching its intended activation targets.
- Status Alerts: Subscribe to status.salesforce.com for updates on your instance to stay informed about any maintenance or performance degradations.
Key Lessons Learned & Continuous Evolution
Salesforce Data Cloud is a dynamic product that undergoes rapid evolution, with new features and changes rolling out frequently, often on a monthly basis, outside of the major seasonal releases. Staying current is key to maximizing your investment.
Key lessons from real-world implementations:
- Stay Connected: Maintain close communication with your Salesforce account team, participate in partner Slack channels, and engage with Trailblazer communities. This helps you stay informed about upcoming features, pilot programs, and best practices.
- Non-Reversible Data Ingestion: Be extremely diligent in your planning, especially regarding data types and unique keys. Correcting bad data types or core stream elements after you ingest and activate data is highly difficult and often requires you to delete downstream segments, calculated insights, and even DLO/DMO mappings to re-implement. Plan ahead to avoid costly rework.
- Marketing Cloud Connector Caution: The Marketing Cloud connector will bring in all subscriber data from your Marketing Cloud instance, including data from multiple business units. This can significantly impact your profile counts and potentially lead to overages if not anticipated and managed. Understand what’s in your “all subscribers” table before connecting.
- Consumption Costs: Data Cloud operates on a consumption-based model, so every operation has a cost.
- Data Ingestion: Volume of data ingested directly impacts cost.
- Batch Transforms: These process the entire dataset for every execution, potentially burning significant credits even if data hasn’t changed.
- Identity Resolution: This is a credit-intensive process.
- Segmentation: Publishing segments also consumes credits. Carefully plan your data volumes, refresh schedules, and automation frequencies to manage and optimize credit consumption.
- Zero-Copy Considerations: While revolutionary, ensure data type alignment between your source systems (e.g., Snowflake, Redshift) and Data Cloud. Also, factor in time for network and security setup for private connections between cloud environments.
- Optimize Journeys for Data Cloud: Instead of trying to force Data Cloud activations into existing, potentially inefficient Marketing Cloud Journey structures, take the opportunity to remediate and optimize your journeys for best practices aligned with Data Cloud’s capabilities.
- Data Cloud is NOT a Cleansing Tool: Reiterate this fundamental point: Data Cloud is primarily a data unification tool, not a data cleansing tool. It is your duty to ensure your source data is clean and accurate before it enters Data Cloud.
- No Master Data Management (MDM) Solution: Data Cloud adopts a “key ring” approach to identity, focusing on linking various identifiers to a unified profile, rather than aiming to be a traditional “golden record” MDM solution.
- Consent Management: The Web SDK includes built-in consent management. If you are using the Ingestion API, you will need to implement custom solutions to handle user consent requirements.
- AI Integration: Data Cloud offers robust AI capabilities. You can build your own regression models using Einstein Studio with your Data Cloud data, or integrate external AI models from platforms like Amazon SageMaker, Google Vertex AI, and Databricks, and even large language models from OpenAI or Azure OpenAI. This enables predictive analytics and smarter decision-making.
Conclusion
Salesforce Data Cloud represents a significant step forward in leveraging customer data. By breaking down silos, unifying profiles, and providing powerful activation capabilities, it empowers businesses to deliver hyper-personalized experiences and drive intelligent actions across their entire enterprise.
To get started, you need to take a strategic approach, plan carefully, understand your data deeply, and commit to continuous learning as the platform evolves. By prioritizing use cases, ensuring data quality upstream, and leveraging the diverse ingestion and activation methods, you can successfully implement Data Cloud and unlock the full value of your customer insights. The journey may present challenges, but a truly unified and actionable customer view – once implemented and maintained effectively – will be a precious asset for your business.
Explore related content:
Bring Customer Data into Slack with Salesforce Channels
How to Earn the Salesforce Data Cloud Consultant Certification
Can You Use DML or SOQL Inside the Loop?
How to Quickly Build a Salesforce-Native Satisfaction Survey Using SurveyVista
#DataCloud #MarketingCloud #Salesforce #SalesforceAdmins #SalesforceDevelopers
-
Display Product and Price Book Entry Fields in the Same Flow Data Table
The Salesforce Flow Data Table component is a powerful screen element that allows users to view and interact with records in a structured, spreadsheet-like format within a Flow. It supports features like record selection, sorting, and filtering, making it ideal for building guided user experiences. For example, in a product selection use case, a sales rep can launch a Flow that displays a list of products retrieved from the Product2 or PriceBookEntry objects. Using the data table, the rep can easily compare options and select multiple products to add to an opportunity, all within a single, streamlined Flow screen.
The data table component was added to Salesforce based on the success of Eric Smith’s open-source data table component published on UnofficialSF. The out-of-the-box component is still not as powerful as its UnofficialSF sibling.
In this post, I will show you how I leveraged the transform element’s inner join functionality to bring together Product2 and PriceBookEntry field values and display them in the UnofficialSF data table component.
The inner join functionality is powerful, but it falls short of its full potential because Flow Builder does not offer a way for us to generate custom data types to hold the information we bring together.
As a workaround, I created a placeholder Apex-defined data type and used it on the output side of the transform element. The UnofficialSF data table supports the display of Apex-defined collection data. Leveraging this, I brought the field values of both the Product and Price Book Entry objects into a single table so the user can make an informed product selection.
🚨 Use case 👇🏼 The user will select products and add them to the opportunity record. When making the selection, the user should be able to see product information and price book entry information from the selected price book on the same row: product name, code, family, description, and unit price.
Apex-Defined Data Types in Flow
Apex-Defined Data Types allow developers to create custom, structured objects in Apex that can be used as inputs and outputs within Flow. These types enable more complex data handling than standard Flow variables, supporting multiple fields, including nested data, within a single variable. For example, you might define an Apex class that bundles together a product’s name, price, discount, and inventory status, then use it in a Flow to display custom pricing logic or pass structured data between Flow and Apex actions. This approach enhances flexibility and scalability when building advanced automation.
The key to making an Apex-defined data type available in Flow is the @AuraEnabled annotation in the Apex class. Once you write an Apex class that defines the fields of the Apex-defined object and deploy it to production, you don’t need to do anything in Flow Builder to make the data type available. Wherever an Apex-defined resource selection is allowed, the new data type will be accessible.
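For instance, a minimal class for the pricing example above might look like this (the class and field names here are hypothetical, just to show the pattern):

public class ProductPricingInfo {
    // Each @AuraEnabled property becomes a field on the
    // Apex-defined data type that Flow can read and write.
    @AuraEnabled public String productName { get; set; }
    @AuraEnabled public Decimal price { get; set; }
    @AuraEnabled public Decimal discount { get; set; }
    @AuraEnabled public Boolean inStock { get; set; }
}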
I decided to create a generic Apex-defined data type with multiple fields of various types that I can reuse in Flow Builder. The fields I generated are:
- 4 strings
- 2 numbers
- 2 currency fields
- 1 boolean (checkbox)
Here is the simple (the name says complex, but it is simple) Apex code that does the trick:
/**
 * ComplexDataCollection - Apex-defined data type for Salesforce Flow
 */
public class ComplexDataCollection {
    @AuraEnabled public String string1 { get; set; }
    @AuraEnabled public String string2 { get; set; }
    @AuraEnabled public String string3 { get; set; }
    @AuraEnabled public String string4 { get; set; }
    @AuraEnabled public Decimal number1 { get; set; }
    @AuraEnabled public Decimal number2 { get; set; }
    @AuraEnabled public Decimal currency1 { get; set; }
    @AuraEnabled public Decimal currency2 { get; set; }
    @AuraEnabled public Boolean boolean1 { get; set; }
}

You will need a test class to deploy this code to production. That should be easy, especially with the help of AI, but let me know if you need me to post the test class.
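In the meantime, a minimal test class along these lines should be enough to deploy it (the test values are just placeholders):

@isTest
private class ComplexDataCollectionTest {
    @isTest
    static void setsAndReadsFields() {
        // Exercise the property getters and setters for coverage
        ComplexDataCollection row = new ComplexDataCollection();
        row.string1 = 'GenWatt Diesel 1000kW';
        row.string2 = 'GC1060';
        row.number1 = 1;
        row.currency1 = 120000.00;
        row.boolean1 = true;
        System.assertEquals('GC1060', row.string2);
        System.assertEquals(120000.00, row.currency1);
        System.assertEquals(true, row.boolean1);
    }
}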
Transform and Join Product and Price Book Entry Field Values to Populate the Apex-Defined Data Type
Follow these steps to prepare your data for the data table component:
- Get all the Price Book Entries for one Price Book.
- Get all the Products in the org (limit your Get Records element to 2,000 records for good measure).
- Join the two collections in the transform element using the Product2 Id.
- Map the fields from source collections to the Apex-defined data type.
Here is more detail about the transform element configuration:
- Add the transform element.
- Add the price book entries collection from the get element on the left side.
- Add the product collection on the left side.
- Add an Apex-defined collection on the right side. In my case this is called “ComplexDataCollection”. Search by name. Make sure you check the collection checkbox.
- Click on the first collection on the left side at the top collection level (not next to the individual fields). Connect this to the collection on the right side. You will see instructions for inner join.
- Click on the second collection on the left side. You should see a join configuration screen. Configure your join. More instructions will follow.
Configure your join:
- The order of the left source and right source does not matter for an inner join. Select both collections on the left side.
- The join key is Product2Id on PriceBookEntry and Id on Product2.
- Select the fields you want in the output. For me these are: Name, ProductCode, UnitPrice, Family, and Description. I also added IsActive, which I did not end up using in the data table.
- Map these to your Apex-defined object fields: string1 through string4, currency1, and boolean1 (if you want IsActive).
Your configured transform join should look like the screen image below.
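If it helps to see the join spelled out in code, here is a rough Apex sketch of what the transform element is doing with this configuration. This is illustrative only; selectedPricebookId stands in for whatever Price Book Id you filtered on in the Get element.

Id selectedPricebookId = '01s000000000001AAA';    // hypothetical Price Book Id

// Index products by Id so each price book entry can find its product
Map<Id, Product2> productsById = new Map<Id, Product2>(
    [SELECT Id, Name, ProductCode, Family, Description FROM Product2 LIMIT 2000]);

List<ComplexDataCollection> joined = new List<ComplexDataCollection>();
for (PricebookEntry pbe : [SELECT Product2Id, UnitPrice FROM PricebookEntry
                           WHERE Pricebook2Id = :selectedPricebookId]) {
    Product2 prod = productsById.get(pbe.Product2Id);
    if (prod == null) { continue; }    // inner join keeps matching rows only
    ComplexDataCollection row = new ComplexDataCollection();
    row.string1 = prod.Name;           // column 1: Name
    row.string2 = prod.ProductCode;    // column 2: Code (also the Key Field later)
    row.string3 = prod.Description;    // column 3: Description
    row.string4 = prod.Family;         // column 4: Family
    row.currency1 = pbe.UnitPrice;     // column 5: Price
    joined.add(row);
}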
Prepare the Apex-Defined Object Data for the Data Table
The UnofficialSF data table supports Apex-defined objects, but it requires that the input be serialized. The data table cannot process an Apex-defined collection as input; it expects JSON. More on that is available in Eric Smith’s post.
To serialize the data, you can either leverage Apex or do the processing in Flow. I tried both ways, and both methods work. The Flow method requires looping.
Here is the Apex code for the invocable action that serializes the data:
/**
 * Sample Apex Class Template to get data from a Flow,
 * process the data, and send data back to the Flow.
 * This example translates an Apex-Defined Variable
 * between a Collection of Object Records and a Serialized String.
 * Eric Smith - May 2020
 **/
public with sharing class TranslateApexDefinedRecords {    // *** Apex Class Name ***

    // Attributes passed in from the Flow
    public class Requests {
        @InvocableVariable(label='Input Record String')
        public String inputString;
        @InvocableVariable(label='Input Record Collection')
        public List<ComplexDataCollection> inputCollection;    // *** Apex-Defined Class Descriptor Name ***
    }

    // Attributes passed back to the Flow
    public class Results {
        @InvocableVariable
        public String outputString;
        @InvocableVariable
        public List<ComplexDataCollection> outputCollection;    // *** Apex-Defined Class Descriptor Name ***
    }

    // Expose this Action to the Flow
    @InvocableMethod
    public static List<Results> translateADR(List<Requests> requestList) {
        // Instantiate the record collection
        List<ComplexDataCollection> tcdList = new List<ComplexDataCollection>();    // *** Apex-Defined Class Descriptor Name ***

        // Prepare the response to send back to the Flow
        List<Results> responseWrapper = new List<Results>();

        // Bulkify processing of multiple requests
        for (Requests req : requestList) {
            // Create a fresh response for each request so bulk requests do not share state
            Results response = new Results();

            // Get input value(s)
            String inputString = req.inputString;
            tcdList = req.inputCollection;

            // BEGIN APEX ACTION PROCESSING LOGIC
            // Convert a serialized string to a record collection
            List<ComplexDataCollection> collectionOutput = new List<ComplexDataCollection>();    // *** Apex-Defined Class Descriptor Name ***
            if (inputString != null && inputString.length() > 0) {
                collectionOutput = (List<ComplexDataCollection>) System.JSON.deserialize(
                    inputString, List<ComplexDataCollection>.class);    // *** Apex-Defined Class Descriptor Name ***
            }

            // Convert a record collection to a serialized string
            String stringOutput = JSON.serialize(tcdList);
            // END APEX ACTION PROCESSING LOGIC

            // Set output values
            response.outputString = stringOutput;
            response.outputCollection = collectionOutput;
            responseWrapper.add(response);
        }

        // Return values back to the Flow
        return responseWrapper;
    }
}

Please note that this code refers to the name of the first Apex class (ComplexDataCollection). If you change that name, you will need to update the references here as well. Source: Eric Smith’s Blog.
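For context, the serialized string this action hands back to the Flow looks roughly like the snippet below (the values are illustrative; fields you never set will appear as null, since JSON.serialize includes them by default):

[
  {
    "string1" : "GenWatt Diesel 1000kW",
    "string2" : "GC1060",
    "string3" : "1000 kW diesel generator",
    "string4" : "Power Generators",
    "number1" : null,
    "number2" : null,
    "currency1" : 120000.00,
    "currency2" : null,
    "boolean1" : true
  }
]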
See how the action will be used and configured in the image below.
Data Table Configuration
Here is how you configure the data table for this data:
- Give your data table an API name.
- Scroll down to the advanced section and check the checkbox titled Input data is Apex-Defined.
- For Datatable Record String, add the string variable that holds the output of the translate action.
- For the required unique Key Field input, use the field that holds the product code; for me this is string2.
- For Column Fields, add string1,string2,string3,string4,currency1.
- For Column Labels, add 1:Name,2:Code,3:Description,4:Family,5:Price.
- For Column Types, add 1:text,2:text,3:text,4:text,5:currency.
Once completed, you should see a similar output to this image below.
Conclusion
While this example illustrates how Apex can boost the capabilities of Flow, setting up this solution to leverage Apex-defined data types in Flow Builder and in the data table is very cumbersome.
This was more of an experiment than a solution I will use frequently.
If you don’t want to write code, you can create a custom placeholder object to achieve a similar result with the out-of-the-box data table component.
I look forward to having this functionality built into Flow Builder in a coming release. I hope the Salesforce product teams will prioritize it.
Explore related content:
How to Use the Data Table Component in Screen Flow
Send Salesforce Reports and Dashboards to Slack with Flow
How to Use the Repeater Component in Screen Flow
London’s Calling and Antipatterns to Look For in Flow
#DataTable #InnerJoin #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials #TransformElement