SharePoint is SharePoint

  1. Introduction
  2. Why SharePoint to begin with
    1. Technical Debt from Using SharePoint
  3. Dataverse
  4. Conclusion

Introduction

This article pushes back against a formerly unenlightened Duke. When I started on the Power Platform, I, like many of us in the community, used SharePoint as my back-end data source. At the time, it was a suitable database for the solutions we needed to implement as an organization. Fast-forward six years, and I have long since arrived at the conclusion that SharePoint is SharePoint.

That is not to say SharePoint isn’t a powerful solution in its own right, but it is not a relational database, nor was it ever designed to be. While SharePoint excels at document management, collaboration, and metadata tagging, it falls short when it comes to handling complex, relational data structures or scaling to meet the demands of enterprise-level applications. Over time, I’ve realized that forcing SharePoint to act as a database often leads to performance bottlenecks, limitations in data relationships, and challenges with governance.


Why SharePoint to begin with

Let me share a bit of my personal journey. For me, it all comes down to cost. SharePoint is included with most Microsoft 365 licenses, which essentially makes it a “free” option.

SharePoint is great in that a list is easy to spin up, making it a quick and accessible solution for simple data storage. It also allows developers to work with flat data structures, which are ideal for straightforward use cases like task tracking, basic form submissions, or lightweight data management. However, as the complexity of your application grows—requiring relational data, scalability, or advanced querying capabilities—SharePoint’s limitations become more apparent.

While SharePoint shines in scenarios where document management and collaboration are key, it struggles with handling intricate relationships, large datasets, or scenarios demanding real-time performance. For these cases, leveraging a relational database like Dataverse or SQL Server ensures your solution is built on a foundation designed for robust, enterprise-grade applications. The key is to use SharePoint for its strengths and know when to transition to more advanced data solutions for evolving requirements.

Technical Debt from Using SharePoint

Technical debt can easily accumulate with ‘freemium’ offerings, as these often require workarounds to implement effectively. One thing that has consistently saved me when working with SharePoint is Power Automate. It enables functionality in SharePoint that is otherwise native to relational databases like Dataverse.

For example, in Power Apps, there’s a delegation limit when using SharePoint as a database. To address this, you might apply filters to narrow down your data. But what happens when your filters still exceed the delegation limit? In such cases, you’d need to get creative with your approach—either by devising clever patterns to work around the limitation or by using Power Automate to fetch the required rows and return the dataset to your app. This approach may work for smaller apps and for a limited time, but as your data grows, it can become obsolete in short order.


Dataverse

Today, solutions like Dataverse provide the robust, scalable, and secure foundation needed for modern Power Platform applications. While SharePoint still has its place for document-centric workflows or lightweight data storage, Dataverse is purpose-built for managing structured, relational data with advanced capabilities like business rules, row-level security, and seamless integration across the Power Platform.


If your solution design requires a model-driven app, you can use SharePoint data, but it would need to be integrated via a virtual table in Dataverse. This is because model-driven apps rely on Dataverse and can only access tables that exist in Dataverse, either directly or through virtual tables. However, this would make your solution a premium offering because of the Dataverse integration.

This applies not only to model-driven apps but also to other Power Platform solutions. For example, if a canvas app or a Power Automate flow needs to handle complex relationships, larger datasets, or enhanced security, using Dataverse may become necessary. SharePoint works well for lightweight scenarios, such as simple lists or document-centric workflows, but its limitations in relational data handling and delegation can pose challenges for more complex solutions.

Dataverse is designed to handle structured, relational data and works seamlessly across the Power Platform. While virtual tables can connect SharePoint to Dataverse, this approach adds complexity and costs due to the premium licensing required.

When deciding between SharePoint and Dataverse, it’s important to consider the needs of your solution. SharePoint can be a great option for straightforward use cases with a focus on cost efficiency, while Dataverse is better suited for scenarios requiring scalability, advanced relationships, and deeper integration with Power Platform features. Balancing these factors will help you choose the best approach for your project.

Key Differences

Feature/Criteria | SharePoint | Dataverse
Purpose | Document management system with lightweight data storage capabilities. | Enterprise-grade relational database for structured data.
Data Type | Suited for flat or semi-structured data (e.g., lists). | Designed for structured, relational data with complex relationships.
Scalability | Handles small to medium datasets (up to 30M items per list). | Built for large, enterprise-level datasets and scalable storage.
Complex Relationships | Limited support for lookups and relationships. | Supports many-to-many and one-to-many relationships, and hierarchical data models.
Security | Permissions are list- or document-level. | Fine-grained, row-level security with role-based access control (RBAC).
Data Capacity | Limited by the SharePoint site collection storage quota. | Capacity based on allocated Dataverse storage in your tenant.
Offline Capability | Limited offline capabilities (e.g., via synced lists). | Supports offline data access in Power Apps and Dynamics 365.
Integration | Seamless with Microsoft 365 (Teams, OneDrive, Office). | Seamless with Power Platform (Power Automate, Power Apps, Power BI).
Data Validation | Basic validation (via Power Automate or custom scripts). | Advanced validation with calculated columns, business rules, and workflows.
Licensing Cost | Included with Microsoft 365 subscriptions. | Requires premium licensing (e.g., Power Apps Plan 2, Dynamics 365 licenses).
API Integration | Limited REST API capabilities, slower for bulk operations. | Rich APIs, faster for batch and complex operations (OData, Web API).
Governance | Harder to implement strict data governance. | Strong governance with auditing, compliance, and managed environments.

Conclusion

SharePoint can be a great lightweight data source, and I’m not here to steer you away from it. Instead, my aim is to encourage careful planning when designing your solutions. I still use SharePoint for quick proofs of concept or in production environments to hold files that need to be manipulated within a flow. If SharePoint works for your needs and you’re confident in managing its limitations, by all means, use it. Just be mindful of the potential technical debt and additional programming it may require to handle tasks that Dataverse supports natively.

The key takeaway is this: leverage SharePoint for what it excels at—collaboration, document management, and lightweight data storage. When your application requires more complexity, scalability, or advanced capabilities, consider Dataverse or other dedicated database solutions. Understanding and respecting these distinctions has been pivotal in creating solutions that not only address current needs but also provide a solid foundation for future growth.

Streamlining Workflow: Comparing Objects with Inline JavaScript in Azure Functions Powered by Power Automate

  1. What Are Azure Functions?
  2. Exercise
    1. Code
    2. Full code
    3. Power Automate

The short of it is…Power Platform lives on Azure. Now, this article is not an attempt to push people into Azure, but I do at least want you to consider it. If you are like me, your introduction to Azure was most likely Azure AD, now known as Entra ID, and next year probably something else.

Azure is a big universe that supports many different workloads. Of particular interest to me, and for the benefit of this article, is Azure Functions.

What Are Azure Functions?

I don’t think there is a need to dive deep into the subject, but what I will say is that Azure Functions let us run serverless, on-demand code. Azure Functions support various programming languages such as C# and JavaScript. The ability to call these functions and have them execute a piece of code is beneficial to developers, especially “low-coders”: we don’t have to be as seasoned as a traditional “pro-coder,” because we can focus only on the code that matters to us.

With the advent of Copilot and ChatGPT, we can write code that serves our needs. I’d hate to cause a stir, so I will say this: be careful with AI-generated code, make sure your prompt is as explicit as you can get it, and if you are unsure about the piece of code you are using…ask for help.

Exercise

I once worked with a client who needed to compare two objects and identify their differences to create exceptions for their customer service representatives to address. Implementing this process in Power Automate significantly saved time, though the solution involved multiple actions, loops, and data transformations, making it less streamlined than desired. However, as I gained more expertise in Azure Functions, I discovered that the same task could be accomplished more efficiently using just a few lines of JavaScript and a single action within the flow.

Before we proceed with this exercise, it’s important to note that a foundational understanding of how to provision an Azure resource and set up your function app is essential. To ensure you’re adequately prepared, I strongly recommend reviewing the information provided in the following link: Getting started with Azure Functions | Microsoft Learn.

Keep in mind that we are utilizing JavaScript for this task, so make sure to configure your function accordingly. While you can create your function directly in the Azure portal, which is the simplest approach, there’s also the option to develop it in Visual Studio Code. This latter method is a bit more complex but certainly achievable if you’re up for the challenge. For our function app, we’ll be using an HTTP trigger as the primary mechanism to initiate the process.

Code

Keep in mind that a function app essentially serves as a container for your code snippet. In this exercise, we’ll be adding code to our function and then obtaining the URL to integrate it with Power Automate. While I utilized ChatGPT to generate my code, it’s crucial to be aware that certain parts of this code are indispensable. One such essential line of code is as follows:

module.exports = async function (context, req){}

It’s important to operate within this framework, as the context from Power Automate will be fed into this function. Once that’s in place, our code will handle the rest of the process. This setup ensures that the data from Power Automate is seamlessly integrated and processed by our Azure Function.

Full code

Simply copy and paste the entire code into your function. Pay special attention to the parameter names ‘object1’ and ‘object2’. These are the parameters through which we will pass our objects. The code will then perform the comparison and return an object highlighting the differences, which we can utilize on the Power Automate side.

module.exports = async function (context, req) {

    context.log('JavaScript HTTP trigger function processed a request.');

    try {
        // Assuming the input is a JSON string representing the objects.
        const obj1 = req.query.object1 ? JSON.parse(req.query.object1) : null;
        const obj2 = req.query.object2 ? JSON.parse(req.query.object2) : null;

        // Validate the input
        if (typeof obj1 !== 'object' || obj1 === null || typeof obj2 !== 'object' || obj2 === null) {
            context.res = {
                status: 400,
                body: "Please pass two JSON objects in the query string or in the request body",
                headers: {
                    'Content-Type': 'application/json; charset=utf-8'
                }
            };
            return;
        }

        // Find differences
        const differences = findObjectDifferences(obj1, obj2, "root");
        context.log('Differences:', differences);

        // Return the differences as a JSON string with the header "differences"
        context.res = {
            body: { "differences": differences },
            headers: {
                'Content-Type': 'application/json; charset=utf-8'
            }
        };

    } catch (error) {
        context.res = {
            status: 500,
            body: "Error processing your request",
            headers: {
                'Content-Type': 'application/json; charset=utf-8'
            }
        };
        context.log.error("Exception thrown", error);
    }
};

function findObjectDifferences(obj1, obj2, path) {
    let differences = [];
    let allKeys = new Set([...Object.keys(obj1), ...Object.keys(obj2)]);

    allKeys.forEach((key) => {
        let newPath = path + "." + key;
        if (obj1.hasOwnProperty(key) && !obj2.hasOwnProperty(key)) {
            differences.push({ path: newPath, obj1: JSON.stringify(obj1[key]), obj2: 'Not present' });
        } else if (!obj1.hasOwnProperty(key) && obj2.hasOwnProperty(key)) {
            differences.push({ path: newPath, obj1: 'Not present', obj2: JSON.stringify(obj2[key]) });
        } else if (obj1[key] && obj2[key] && typeof obj1[key] === 'object' && typeof obj2[key] === 'object') {
            differences = differences.concat(findObjectDifferences(obj1[key], obj2[key], newPath));
        } else if (obj1[key] !== obj2[key]) {
            differences.push({ path: newPath, obj1: JSON.stringify(obj1[key]), obj2: JSON.stringify(obj2[key]) });
        }
    });

    return differences;
}

After configuring your function, save it. Then, locate and click on the ‘Get function URL’ button found in the editor’s ribbon. This URL is crucial as you will need it to make the HTTP request in Power Automate.

Power Automate

  1. Set Up the Trigger: Begin by setting up your Power Automate flow with a trigger of your choice. This trigger will initiate the flow.
  2. Initialize Object Variables: Next, initialize two object variables named ‘varObject1’ and ‘varObject2’. Initially, I attempted to directly write the objects in the HTTP request but encountered issues. It’s more effective to initialize these variables and assign your arrays to them, allowing for a smoother conversion process.
  3. Create an HTTP Request:
    • Choose ‘POST’ as the method for your HTTP request.
    • In the query, you will need to add two lines:
      • The first line should have ‘object1’ as the key, with the value being the ‘varObject1’ variable.
      • The second line should have ‘object2’ as the key, with the value being the ‘varObject2’ variable.
  4. Paste the Function App URL: Paste the URL you obtained from your Azure Function app into the appropriate field in your HTTP request setup.
  5. Save and Run: After setting up everything, save your flow configuration. Then, run your flow to test and see it in action.
varObject1
varObject2
HTTP Request
successful run and differences in the body
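
For reference, here is a hypothetical pair of values for the two variables (made up, but chosen to match the sample output shown below):

varObject1: { "Borrower2": "Marsurice DeVan", "Address": "Dean" }
varObject2: { "Borrower2": "Marsurice Devan", "Address": "Down" }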

The following is the returned exceptions (differences) between the objects. The output identifies the affected columns, “root.Borrower2” and “root.Address”.

{ "differences": [ { "path": "root.Borrower2", "obj1": "\"Marsurice DeVan\"", "obj2": "\"Marsurice Devan\"" }, { "path": "root.Address", "obj1": "\"Dean\"", "obj2": "\"Down\"" } ] }

Why Not Azure (or however you pronounce it)?

Alright, Azure and Power Platform. It’s easy to overlook, but remember, Power Platform is cozily nestled right into Azure. I’ve gone from saying I build apps to now, more accurately, saying I work in the cloud.

Azure is essentially the backbone of the cloud world for Microsoft 365 and Power Platform, and now Azure OpenAI. Between you and me, Azure has long had AI capabilities with Azure Cognitive Services.

Azure has replaced the need for those loud, space-eating server rooms with a sleek, digital space. I remember the days of hiding in server rooms to escape the constant barrage of IT queries. Now, with Azure, those days are just a funny memory.

So, why Azure? Well, it’s vast. When my wife expressed interest in learning it, I had to ask, “Which part?” because it’s that big. It’s not just one thing; it’s a whole ecosystem waiting to be explored.

The cost might make some hesitant. I’m not claiming to be a Microsoft licensing expert, but generally, if the benefits are clear, people find the costs justifiable. And with Azure, the benefits are often worth it.

My journey and new role have led me deeper into Azure, especially Azure Functions. These have been a game-changer in my Power App development. They allow me to execute code, like JavaScript and my new favorite, PowerShell, for complex tasks I’d rather not handle directly in the app. It’s been a great push to brush up on my coding skills too. I’ve reacquainted myself with Visual Studio Code, as for the last four years or so I’ve lived in the Power Apps formula bar and Power Automate expressions.

But it’s not just about using Azure for the sake of it. Integrating Azure into your Power Platform solutions can significantly enhance what you’re able to do. It’s about making your solutions more robust and efficient. Azure Logic Apps, for instance, are like Power Automate on steroids, offering more capabilities and flexibility.

In essence, leveraging Azure within your Power Platform solutions isn’t just a good idea; it’s a strategic move. It’s about expanding your capabilities and delivering more sophisticated solutions. So, consider taking that step into Azure. It’s a solid move for anyone looking to enhance their Power Platform projects.

Enhancing Power Automate Desktop Proficiency with Access and PowerShell Integration

  1. Install ImportExcel Module
  2. Creating Desktop Flow
    1. Create Input Variable
    2. Retrieving Excel File From Folder
    3. Create “For each” loop
    4. Launch Excel
    5. Read from Excel worksheet
    6. Running PowerShell Script to insert data into our Access database

I recently had a conversation with a friend about the relevance of Access databases in today’s tech landscape. To my surprise, he emphatically affirmed that Access databases are still widely used. The last time I encountered Access was during my college days when it was used as a teaching tool for database modeling. Honestly, I had no idea that organizations continued to rely on it.

In my personal journey, Access has played a pivotal role in enhancing my skills with Power Automate Desktop. Initially, my goal was to create a Model Driven App and utilize it in conjunction with Power Apps desktop to execute desktop flows. This approach worked reasonably well, but over time, I no longer had the Model Driven App and didn’t want to recreate it. That’s when I turned to Access as my training ground.

However, even my practice with desktop flows on Access forms had its limitations. During my experimentation with PowerShell, I became determined to find a way to insert data into an Access database file without ever having to open Access itself. So, for those who still rely on Access in their workflows, this one is for you…

Here’s what you’ll need: a designated location to store your Excel file. Personally, I store mine in a folder on my desktop.

Install ImportExcel Module

To install the ImportExcel module, please open PowerShell and execute the provided script below:

Install-Module -Name ImportExcel -Scope CurrentUser
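
If you would like to confirm the module installed correctly, you can list it with the following command:

Get-Module -ListAvailable -Name ImportExcel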

Creating Desktop Flow

Create Input Variable


To create an input variable for storing the path to the Access database we are working with, follow these steps to configure it:

  1. Open the Power Automate Desktop workflow where you intend to use the variable.
  2. Locate the area where you can define variables, typically in the workflow settings or variable panel.
  3. Create a new variable and give it a meaningful name, such as “varAccessDb.”
  4. Set the data type of this variable to “String” since it will store the file path, which is typically a text value.
  5. Save your workflow to ensure the variable is retained.

Now, you have an input variable named “varAccessDb” that can be used within your PowerShell script to hold the path to the Access database.

Retrieving Excel File From Folder


To initiate the process, the first action we need to perform is to retrieve the Excel file(s) that will be used in our flow. Follow these steps:

  1. In the Actions pane on the left-hand side of your Power Automate Desktop interface, type “Get files in folder” in the search bar.
  2. Locate the “Get files in folder” action in the search results.
  3. Drag and drop the “Get files in folder” action onto your workflow canvas.

This action will allow you to specify the folder from which you want to fetch the Excel files for further processing in your flow.


To direct the “Get files in folder” action to the folder containing your Excel workbooks and set it up to collect all files with a .xlsx extension, follow these steps:

  1. Select the “Get files in folder” action block on your workflow canvas to access its configuration.
  2. In the “Folder path” input field, specify the path to the folder where your Excel workbooks are located. You can either type the path or use the browse button (…) to navigate to and select the folder.
  3. In the “File filter” input box, input “*.xlsx”. This filter will ensure that the action captures all files with the .xlsx extension.
  4. Save your configuration.

Now, the “Get files in folder” action is configured to retrieve all Excel files with the .xlsx extension from the designated folder for further processing in your flow.

File filter: *.xlsx

Save.

Create “For each” loop

To make your solution dynamic and capable of handling one or multiple files, you can set up a “For Each” loop. Follow these steps:

  1. In the Actions pane, search for “For Each” to find the For Each loop action.
  2. Drag and drop the “For Each” loop action onto your workflow canvas.
  3. In the “Values to iterate” field of the “For Each” loop, use the %Files% variable that was generated from your “Get files in folder” action.
  4. To enhance readability, you can rename the “Store into” section from “CurrentItem” to “CurrentFile.”

By doing this, you’ve set up a loop that will iterate through each file retrieved by the “Get files in folder” action, allowing your workflow to handle both single and multiple files seamlessly.

Launch Excel

To add a “Launch Excel” action within your “For Each” loop and configure it, follow these steps:

  1. In the Actions pane, search for “Launch Excel” to locate the “Launch Excel” action.
  2. Drag and drop the “Launch Excel” action into your workflow editor, making sure it is nested within the “For Each” loop.

Now, let’s configure the “Launch Excel” action with the following parameters:

  • File path: This should point to the current Excel file being processed in the loop. Use the “CurrentFile” variable to specify the file path dynamically.
  • Visible: You can choose whether you want Excel to be visible during the execution of your workflow. Set it to “No” if you want Excel to run in the background without displaying the user interface, or “Yes” if you want Excel to be visible.
  • Create new instance: You can choose to create a new instance of Excel for each file, ensuring that each file is processed independently. Set it to “Yes” if you want a new instance for each file, or “No” if you want to reuse the same instance for all files.
  • Run macro (optional): If you have a specific Excel macro to run, you can specify it here. Otherwise, leave it blank if you don’t need to run a macro.

By configuring the “Launch Excel” action in this manner, you ensure that your workflow opens and processes each Excel file one by one within the “For Each” loop.

Launch Excel: and open the following document
Document Path: %CurrentFile.FullName%

If you prefer to make the Excel instance visible, you can set the “Visible” option to “Yes” in the “Launch Excel” action configuration. This choice will allow you to see the Excel interface while the workflow is running, which can be helpful for monitoring and debugging your automation process.

Read from Excel worksheet

To add a “Read from Excel worksheet” action and configure it as described, follow these steps:

  1. In the Actions pane, search for “Read from Excel worksheet” and drag it into your workflow editor. Ensure that you place it within the same “For Each” loop as the “Launch Excel” action.
  2. Configure the “Read from Excel worksheet” action with the following parameters:
Excel instance: %ExcelInstance%
Retrieve: All available values from worksheet

Running PowerShell Script to insert data into our Access database

This PowerShell script will loop through the Excel data and insert records into the specified Access table without the need to have Access open. It should significantly improve the efficiency of your workflow compared to looping through each record in the spreadsheet using multiple actions in Power Automate Desktop.

  1. Add a “Run PowerShell script” action to your workflow.
  2. Copy and paste the following PowerShell script into the action, making any necessary updates to match your specific file paths, table names, and data variables:
# Define the path to the Access database
# varAccessDb is our input variable
$accessDBPath = "%varAccessDb%"

# Define the connection string
$connectionString = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=$accessDBPath;Persist Security Info=False;"

# Define the path to the Excel file
$excelFilePath = "%CurrentFile%"

# Import data from the Excel file
# Ensure that your Excel file has columns named PartName, SKU, Description, and Price
$dataCollection = Import-Excel -Path $excelFilePath

try {
    # Create a connection object
    $connection = New-Object System.Data.OleDb.OleDbConnection($connectionString)

    # Open the connection
    $connection.Open()

    foreach ($item in $dataCollection) {
        # Define the SQL query for inserting data
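        # Note: values are interpolated straight into the SQL string below, so an apostrophe in
        # the source data would break the statement; for anything beyond a demo, consider
        # switching to OleDbCommand parameters.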
        $query = "INSERT INTO Part ([Part Name], [SKU], [Part Description], [Price]) VALUES ('$($item.PartName)', '$($item.SKU)', '$($item.Description)', $($item.Price))"

        # Create a command object
        $command = New-Object System.Data.OleDb.OleDbCommand($query, $connection)

        # Execute the command (perform the insert operation)
        $command.ExecuteNonQuery()
    }

    # Close the connection
    $connection.Close()
}
catch {
    # Handle the error
    Write-Host "An error occurred: $_"
}
finally {
    # Ensure that the connection is closed even if an error occurs
    if ($connection.State -eq [System.Data.ConnectionState]::Open) {
        $connection.Close()
    }
}

Lastly, add an action to close Excel.

Power Automate Desktop – Turning on Track Changes in Word Document

Automation is a game-changer in the world of productivity and efficiency. It’s all about streamlining processes, ensuring consistency, and, believe it or not, even providing a sense of buoyancy. Yes, you read that right – buoyancy. Desktop automation can be your secret weapon in keeping those seemingly irreplaceable legacy systems afloat, even when they’re driving your CIO crazy. After all, many of your colleagues are still in love with those systems because they just work – that’s the magic of automation.

But desktop automation isn’t limited to legacy system support; it can also enhance your everyday tasks, like automating Microsoft Word. Now, you might be thinking, “I can do that with a cloud flow,” and you’re not wrong. Cloud-based automation is fantastic for updating templates and structured data. However, what I’m proposing is taking automation to the next level by working with free text.

But, before we dive into the exciting world of free-text automation, let’s talk safety first. There’s no point in automating something if we can’t roll it back in case of a mishap. We’ve all been there – that heart-pounding moment when you’ve pushed the proverbial ‘red button,’ and suddenly, you’re in a lonely, isolating place where your brain feels like it’s losing blood, and you start wondering if you’ll still have a job in the next five minutes. That’s why it’s crucial to implement safety measures to avoid such nerve-wracking situations.

The Guard Rails

In this exercise, we’ll be focusing on a SharePoint document library, which is already equipped with robust document management features such as check-in/check-out and version history. However, one feature that might be missing from your SharePoint arsenal is the ability to activate ‘Track Changes.’

Desktop automation opens up exciting possibilities, allowing us to activate ‘Track Changes’ effortlessly. One of my personal favorites for achieving this is harnessing the power of PowerShell in combination with desktop flows.

PowerShell provides the ultimate level of control and efficiency, eliminating the need for multiple unnecessary actions by allowing us to solve complex tasks with a single script. It’s a game-changer when it comes to automating tasks and streamlining processes in desktop flows.

The Solution

To work with a document in SharePoint and access its settings, follow these steps:

  1. Right-click on the document you want to work with within your SharePoint document library.
  2. From the context menu that appears, select “Copy Link.”
  3. After copying the link, proceed to select the “Settings” link or option to access and configure the document’s settings as needed. This step may vary slightly depending on your specific SharePoint environment and version.

  4. In the sharing settings, you should see an option to invite people or add users. Enter the email addresses of the users you want to grant “Can Edit” permissions to. You can also use SharePoint groups if applicable.

  5. Select “Apply”; the link will be copied to your clipboard and the sharing settings saved.

In Power Automate Desktop

  1. Add the ‘Terminate’ action
  2. Configure the “Terminate” action settings as follows:
    • Process Name: Enter “WINWORD.EXE” (without quotes) in the “Process Name” field. This is the executable name for Microsoft Word.
    • Action: Choose “Terminate” to close any running instances of Microsoft Word.
  3. Save your flow with the “Terminate” action in place.

Next, we’ll add an action called ‘Run application’.

For the “Run application” action with the specified parameters, here’s how you can configure it:

  1. Application Path: Enter “winword.exe” (without quotes) as the application path. This specifies that you want to run Microsoft Word.
  2. Command Line Arguments: Enter “/t” followed by the sharing link from the Word document in your document library. Make sure to replace “sharing link” with the actual URL or path to your Word document. For example, if your document’s sharing link is “https://example.com/document.docx,” your command line argument would be “/t https://example.com/document.docx.”
  3. Window Style: Set the window style to “Maximized.” This will ensure that the Microsoft Word application window is opened in a maximized state.

With these parameters, the “Run application” action will launch Microsoft Word with the specified command line arguments, opening the document in a maximized window.
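
For example, the completed action might look like this (hypothetical document URL):

Application path: winword.exe
Command line arguments: /t "https://contoso.sharepoint.com/sites/Projects/Shared Documents/Agreement.docx"
Window style: Maximized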

Sidebar – sort of

Introducing the concept of command line switches is crucial when working with Microsoft Office products and many other applications. Command line switches provide a convenient way to customize how an application behaves when launched from the command line or as part of an automation process.

In our specific case, the “/t” switch used before the document link is a common command line switch for Microsoft Word, indicating that we want to open an existing file. By including this switch in the command line arguments, we instruct Microsoft Word to treat the provided link as the path to an existing document that should be opened.

Back to regularly scheduled programming…


Next, we’ll send two left clicks to the foreground window, which should be Microsoft Word, using the “Mouse Click” action. The goal is to mitigate any potential dialog interruptions:

  1. Add a “Mouse Click” action to your automation.
  2. In the “Mouse Click” action settings, specify the following:
    • Click Type: Choose “Left Click” to perform a single left-click.
    • Target: Select “Foreground Window” to ensure that the click is directed to the currently active window, which should be Microsoft Word.
    • Repeat: Set the “Repeat” option to “1” to perform the click once.
  3. After the first “Mouse Click” action, add another “Mouse Click” action with the same settings as the first one.

Trust me, this step is warranted; I’ve seen some weird things in my time.

Now, let’s incorporate a PowerShell script into our workflow to achieve the goal of enabling track changes for our document. What makes this script particularly useful is its precision: it focuses solely on enabling track changes. Importantly, if track changes are already active, the script won’t “bat an eye”. However, if track changes are not yet enabled, the script will take action, enforcing track changes and ensuring that revisions are visible as they occur.

  1. Add a “Run PowerShell Script” action to your automation.
  2. In the “Run PowerShell Script” action settings, you can paste the provided script into the script editor. Make sure to include the entire script:
$wordApp = [System.Runtime.Interopservices.Marshal]::GetActiveObject("Word.Application")
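# Note: Marshal.GetActiveObject is part of the .NET Framework, so this script assumes it runs
# under Windows PowerShell rather than PowerShell 7+.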

# Get the active document
$document = $wordApp.ActiveDocument

# Enable track changes
$document.TrackRevisions = $true
$document.ShowRevisions = $true

# Save changes (optional)
$document.Save()
  3. Save your flow with the “Run PowerShell Script” action included.

Let’s introduce a step to test our automation by using the ‘Send Keys’ action. In the input parameters, we’ll enter a specific text of our choice. For instance, I’ll input “tested.” This action will transmit this free-text input directly to our Word document, allowing us to verify the effectiveness of our automation process.

Run the flow

Get what you need – Selecting Columns and Filtering Data

So, I have built some pretty cool stuff with the Power Platform, and I’ve built some things that I’ve come to regret. In the consulting world, or at least in my head, there is nothing more embarrassing than creating a problem that could’ve been solved with some due diligence.

Explicit column selection

Have you ever encountered a situation where a Power Platform flow seemed to take longer than expected, leaving you a bit puzzled? I certainly have. One day, I found myself dealing with a flow that was running longer than anticipated, even though I was only retrieving five columns from a Dataverse table. It worked fine in my development environment, but as is often the case, things tend to go smoothly in a controlled development setting. Additionally, I wasn’t working with a large volume of records, so the performance issue was a bit surprising.

After some investigation, I decided to take a closer look at the flow. To my surprise, I realized that I hadn’t explicitly selected the columns during the data retrieval process.

Now, you might be wondering why explicit column selection matters in Power Platform flows. Well, a more practical question to ask is why retrieve and process data that you don’t actually need? Is it worth the convenience of working with larger-than-necessary data?

Explicit column selection plays a crucial role in optimizing the efficiency of your Power Platform flows. By specifying the exact columns you need, you can significantly reduce the data transfer and processing load, resulting in faster execution and better resource management. Let’s explore why this practice is important:

  1. Efficiency: Retrieving only the necessary columns reduces the amount of data that needs to be transferred, leading to quicker data retrieval and processing.
  2. Resource Conservation: Unnecessary data retrieval can strain system resources, potentially causing performance issues. Selecting only the required columns helps conserve resources and ensures smoother flow execution.
  3. Clarity and Simplicity: Explicit column selection makes your flow more straightforward and easier to understand. It clearly communicates the purpose of your actions and contributes to a cleaner, more organized flow structure.

Filtering

Selecting only the necessary columns for your flow is undoubtedly important, but there’s something even more critical: filtering the data. We’ve matured in our approach, and there’s no reason to run a flow “wide-open,” retrieving every record from the database. Sure, we could do it, but that doesn’t mean we should. Whenever possible, applying filters is the key to ensuring that you retrieve the most relevant dataset or record for your specific needs.

Let’s pause and consider this: What exactly are you planning to do with all those rows once you’ve retrieved them? Are you prepared to create additional actions just to extract the record(s) you’re actually interested in?

I understand that you might be new to Power Automate, and it’s great that it can make things work effortlessly. However, let’s take a step further and aim to do it “right” from the very beginning.

Filtering data not only enhances the efficiency of your flow but also ensures that you work with the most pertinent information. Here’s why it’s crucial:

  1. Efficiency and Speed: Filtering data allows you to narrow down your dataset, reducing the processing time and making your flow run faster and smoother.
  2. Precision: Filters enable you to pinpoint the exact information you need, avoiding the hassle of extracting relevant records from a sea of data.
  3. Best Practices: Implementing data filtering is a best practice in Power Automate. It demonstrates a thoughtful and efficient approach to flow design.

Flow

Let’s create a flow. It doesn’t have to be fancy for this exercise; it will be quick.

We have a basic flow here with PowerApps (V2) as the trigger, followed by a Dataverse Action for listing all of the rows in a table.

The table that I am using is the Solutions table, which contains all of the metadata on solutions present in the current environment.

My goal is to list only ‘Unmanaged Solutions’ and return a select number of fields.

  1. Columns must be selected by their logical name.
  2. Select queries with more than one column must be comma-delimited.
  3. Our filter will use OData comparison operators (see the example below).
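
For example, against the Solutions table the action might be filled in as follows (the logical names below are assumed from the standard Solutions table; verify them against your own environment):

Select columns: uniquename,friendlyname,ismanaged
Filter rows: ismanaged eq false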

When you run your flow, your payload will include the columns and filter that you applied to it.


In summary, when retrieving data from any system, it’s best practice to retrieve only what you actually need.

Best Practices: Solutioning Part 2 – Adding Existing Tables to Solution

Welcome back for Part 2 of Solutioning. If you missed Part 1 please check it out.

In this week’s discussion on enhancing your Power Platform solutions, we’ll be delving into the process of incorporating existing tables into your solution. Before we dive into the details, it’s essential to recognize that there’s no mystical formula for adding a component to your solution. This reminds me of a valuable lesson imparted by one of my colleagues, especially when discussing governance: “Just because we can, doesn’t mean we should.” This principle directly applies to the inclusion of tables in a solution. Simply having the ability to add every field from a table into your solution doesn’t automatically mean it’s the best course of action. Let’s explore this further.

Business Value

In the context of Power Platform solutions, adding an existing object to a table may not seem immediately impactful, but it holds significant value for your business.

Here’s a brief story to help illustrate this:

To better understand the importance of adding only the necessary fields to your table within a Power Platform solution, let me share a personal experience that highlights this point. It relates to that insightful phrase from my colleague. Here’s what happened:

I was working on a Power Platform solution—whether it was an app or a flow—in a development environment, preparing it for the transition to the testing phase. In the early stages of building the solution, I faced a crucial decision when incorporating an existing table into the solution. At that moment, I chose to select the ‘Include all objects’ checkbox without giving it much thought.

Little did I know, I unintentionally created a situation that would require some effort to resolve. As we all understand, time is valuable in business.

In a development environment, it’s not always easy to track who’s working on what, and how changes to a table schema might affect your deployment. Communication isn’t always perfect. To reduce risks and minimize errors, it’s a good practice to take some precautions.

In simple terms, this approach helps to prevent errors, leading to quicker issue resolution. This saves time, money, and the stress of figuring out why you’re encountering errors related to missing dependencies that you didn’t directly modify.

Adding existing table to solution

Within our solution, follow these steps:

  1. Select “Add Existing.”
  2. Choose “Table” from the options.
  3. Pick the specific table you wish to work with.
  4. Then, click on “Next” to proceed.
  5. Now, go ahead and choose “Select objects.”

At this point, you’ll be presented with a list of all the objects that are currently part of the selected table.

  1. Pick only the objects that you plan to include in your solution.
  2. Then, click the “Add” button to add the selected objects.

On this screen, you’ll notice a count of the objects you’ve selected.

  1. Go ahead and click the “Add” button to proceed.

You will now see that you have added your table to the solution.

  1. To view the contents of your table, click on it. (Please note that if there is data in the table, you’ll be able to see all the columns and data. However, these columns are not automatically included in your solution when you move it.)
  2. Now, choose “Columns,” and you will only see the column or object that you added to the solution.

Thank you for joining us this week. In our next installment, we will continue to explore the world of solutioning. Our ultimate aim in solutioning is to lead into a discussion about Application Lifecycle Management (ALM). So, stay tuned for more exciting insights!

Best Practices: Solutioning Part 1

The Power Platform maker experience can seem like the Wild West. At any moment you can create a component that relies on other components and can create a disaster quickly.

In this series I’d like to cover some best practices that I’ve seen over the past five years of working with the Power Platform – some lessons learned – such as ‘DON’T BUILD IN THE DEFAULT ENVIRONMENT’ and others.

Solutions explained in a not so Microsoft way…A.K.A. – my way.

Imagine you have a kid with a room bursting with toys. It’s like a treasure trove of fun waiting to be explored. But hold on, it’s also a minefield! Stepping on a Lego here, tripping over something there – it’s an adventure you never signed up for. Sound familiar?

One fine day, you, the heroic parent, decide to bring order to this chaos. You’re on a mission to save your head and your feet from more unexpected encounters with toy-related perils. So, what’s your grand plan? You buy a container—a magical vessel that can hold all the toys.

In essence, a solution is your trusty container. It’s like that superhero utility belt, but for organizing digital goodies. You can get as fancy as you want with how you organize your toys – perhaps all the trucks go in one, Legos have their VIP section, and stuffed animals chill in another. Each container’s job? To amp up the fun!

Now, you might be thinking, “Why not just cram them all into one mega-container?” Well, my friend, think about it this way: when a child wants to supercharge their fun, they focus on one toy at a time, right? Or maybe you swap out the old for something shiny and new—a fantastic upgrade! We divide our solutions because not everything needs an upgrade. Remember, the bigger the toybox, the heavier it is to move.

So, there you have it – solutions are like these nifty toy containers, but for the Power Platform. They house all the bits and bobs that make the digital world go ’round. You can move them around effortlessly, using pipelines or a little digital muscle. And how you put these solutions together? Well, that’s where your creativity kicks in.

In a nutshell, solutions are your secret sauce, your digital playroom organizers. They’ll make your life as a Power Platform Developer/Maker a breeze. So go ahead, dive into the world of solutions, and let the fun (and efficiency) begin! Toys sold separately… 🚀

Creating a Solution

First things first, head on over to the Power Platform environment. You can do this by simply going to either make.powerapps.com or make.powerautomate.com – choose the one that suits your needs.

Now, on the left-hand side, look for and click on the “Solutions” option in the navigation bar.

Here comes the fun part – hit the “+ New Solution” button. That’s the one that gets the ball rolling for creating your new solution.

In the form that pops up, give your solution a cool name. If you’re not cool with the default publisher for your solution, no worries – you can make a new one. It’s actually a good idea to give it a name that makes sense. We can cover the publisher in future posts; until then, check out Microsoft’s “Solution Concepts” documentation.

Finally, just click “Create,” and there you have it – a solution.

Thank you for tuning in… until next time.

Dataverse Auditing Part 2 – Retrieve Old Value for Dataverse Record Using HTTP With Azure AD

Extending our exploration of auditing within Dataverse, we’ll employ the ‘HTTP With Azure AD’ action in Power Automate to retrieve the previous value of a modified record.

Before proceeding with this tutorial, it’s essential to have auditing enabled in your environment. If you’re not sure how to do this, please consult the first episode in this series for step-by-step guidance.

Business Value

“Garbage In, Garbage Out.” You’ve probably heard the importance of data integrity. Ensuring data consistency and quality is pivotal because it lays the foundation for reliable analytics and decision-making. Audit trails serve a dual purpose: they not only make users responsible for their actions within the system but also offer invaluable data points for analysts to identify trends, facilitating smarter business decisions.

Framework

Solution

In this tutorial, our focus will primarily be within the context of a solution. A solution serves as a container where we organize and house the components related to the digital transformation we are undertaking. While we won’t delve into an exhaustive explanation, it’s important to understand that solutions provide a structured framework for managing and deploying our project’s components.

Datasource

With auditing activated in our setup, go ahead and pick a table to experiment on. For this demonstration, I’ve selected the “Accounts” table from Dataverse’s Common Data Model.

Retrieve Environment URL

To access the Environment URL required for our flow, navigate to the ‘Power Platform Admin Center.’ This URL is crucial for configuring the flow within its respective environment.

Upon reaching the Admin center, follow these steps:

  1. Navigate to the left-hand side navigation menu.
  2. Select “environments.”
  3. Choose the specific environment you are currently working in.

This will allow you to access the environment URL needed later on.

Copy the environment URL and paste it for use later on.

Environment Variable

In our workflow, especially for the ‘HTTP with Azure AD’ action, we will create an environment variable. This variable will store the URL specific to our environment, which we will subsequently incorporate into the request URL for this action.

By leveraging environment variables, you’re not just streamlining your current workflow; you’re future-proofing your solution. When the time comes to move your solution to a new environment, you can easily update these variables to align with the context of the new environment. This ensures that your solution maintains its agility and effectiveness, regardless of where it’s deployed.

In your solution, follow these steps to create a new environment variable:

  1. Select “+New.”
  2. Click on “More.”
  3. Choose “Environment Variable.”

This will initiate the process of creating a new environment variable within your solution.

For the form, please complete it as follows (note that there is some flexibility with the Display Name and Description):

  1. Copy and paste the previously retrieved environment URL into the “Default Value” field.
  2. Save the changes.

This step ensures that the environment URL is properly integrated into the flow, facilitating the seamless configuration of your flow.

Connection Reference

Before we proceed to configure our flow, there’s one more crucial task to enhance its robustness. We need to create a connection reference for the ‘HTTP with Azure AD’ action. This connection reference serves a similar purpose to the environment variable by enabling the flow to seamlessly transition to a new environment while establishing a connection to the resources within that environment. This step ensures the continuity and adaptability of our flow across different environments.

Inside our solution, follow these steps to create a connection reference:

  1. Select “+New.”
  2. Click on “More.”
  3. Choose “Connection Reference” to open the connection reference form.

This form will allow us to establish a connection reference for our flow, enhancing its flexibility and portability across different environments within the solution.

In the connection reference form, follow these steps:

  1. Name the connection reference.
  2. Optionally, add a description.
  3. From the connector dropdown, select ‘HTTP with Azure AD.’
  4. In the connection selector, click “+ New connection” to create a new ‘HTTP with Azure AD’ connection. This action will open a new tab where you can set up the connection details for ‘HTTP with Azure AD.’

In the new tab for connection details, follow these steps to create our connection:

  1. Select ‘Connect directly (cloud-services).’
  2. Enter the environment URL that we retrieved from the admin center earlier into the “Base Resource URL” field.
  3. Enter the environment URL in the Azure AD Resource URI (Application ID URI) field.
  4. Select Create to create connection.
Creating connection

5. Next, you will be prompted to select an account to establish the connection with. Once you’ve chosen the account, the connection will be created, solidifying the link between your flow and the ‘HTTP with Azure AD’ action.

Return to the tab with the connection reference form and follow these steps:

  1. Select the refresh button to ensure your newly created connection is visible.
  2. Choose your newly created connection from the list.
  3. Finally, select “Create” to create the connection reference.

This step will integrate the connection reference into your solution.

Flow

Time to build a flow!

Within your solution, proceed as follows:

  1. Select “New.”
  2. Choose “Automation.”
  3. Select “Cloud Flow.”
  4. Opt for “Automated” to set up your flow.
  5. You will be directed to a screen where you can name your flow and select its trigger.

For the purposes of this demo, I’ve named the flow ‘Account Updates’ and selected the Dataverse trigger, specifically, ‘When a row is added, modified, or deleted.’ Now, proceed by selecting “Create” to start configuring your flow.

In the trigger configuration, set the following parameters:

  1. For “Change type,” select “Modified.”
  2. For “Table name,” choose “Accounts” or the table of your choice.
  3. Set the “Scope” to “Organization.”

These parameters will define the trigger conditions for your flow, ensuring it activates when a row is modified in the specified table.

Let’s add a new step to your flow. Follow these steps:

  1. Search for “HTTP with Azure AD.”
  2. Select “Invoke an HTTP request.”

To ensure that your connection reference is properly selected, follow these steps:

  1. Click on the three dots in the top right corner of the “Invoke an HTTP request” action.
  2. Confirm that your connection reference is correctly chosen. This step is essential for maintaining the flow’s integrity if you decide to migrate the solution to a different environment.

To configure the “Invoke an HTTP request” action, follow these steps:

  1. For the HTTP method, select “GET.”
  2. In the URL field of your request, use the following template:
{Environment Variable URL}/api/data/v9.2/RetrieveRecordChangeHistory(Target=@target,PagingInfo=@paginginfo)?@target={'@odata.id':'[table plural name](GUID of the updated record from our trigger)'}&@paginginfo={"PageNumber": 1,"Count": 1,"ReturnTotalRecordCount": true}

Replace the placeholders with the appropriate values:

  • {Environment Variable URL} should be replaced with the actual environment variable you created.
  • [table plural name] should be replaced with the name of the table (in plural form).
  • (GUID of the updated record from our trigger) should be replaced with the GUID of the updated record obtained from your trigger.

This configuration sets up the HTTP request to retrieve the record change history based on the trigger’s parameters.

Make sure to validate and adjust the URL according to your specific environment and requirements.
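
As a concrete illustration, a completed request URL might look like the following (hypothetical environment URL and record GUID):

https://contoso.crm.dynamics.com/api/data/v9.2/RetrieveRecordChangeHistory(Target=@target,PagingInfo=@paginginfo)?@target={'@odata.id':'accounts(00000000-0000-0000-0000-000000000000)'}&@paginginfo={"PageNumber": 1,"Count": 1,"ReturnTotalRecordCount": true}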

For the action headers, follow these steps:

  1. Switch to text mode by selecting the ‘T’ icon.
  2. Copy and paste the following headers configuration:
{
  "Accept": "application/json",
  "OData-MaxVersion": "4.0",
  "OData-Version": "4.0",
  "If-None-Match": "null",
  "Prefer": "odata.include-annotations=\"*\""
}

To capture the response for later use, follow these steps:

  1. Add a “Compose” action to your flow.
  2. In the “Compose” action, select the output from the “Invoke an HTTP Request” action as its input.

This way, you’re capturing the output from the HTTP request for further use in your flow.

Save your flow.

To test your flow, follow these steps:

  1. In Dataverse, open the table you are working on.
  2. Edit the value of a field on an existing record within that table.

This action will trigger your flow, allowing you to verify that it responds correctly to the changes made in Dataverse.

Old Value
New Value
Changes reflected in flow

Dataverse Auditing Part 1 – Configure Auditing Settings for Environment

Ever curious about the previous state of a modified record? In the Power Platform, you have various methods to track these changes. One approach is to set up a transaction or log table in Dataverse, capturing every event and its corresponding value. Alternatively, you can enable audit logs in your environment and then deploy a flow utilizing the ‘HTTP With Azure AD’ action to achieve a similar outcome.

Implementing Audit logs in your organization comes with intrinsic business advantages, chief among them being transparency and accountability. This principle aids in identifying the origin of data or transactions within the system, thereby establishing a framework that holds users accountable for their interactions with the platform.

Turn on Auditing for the Environment

To turn on auditing:

  1. Navigate to the Power Platform Admin Center at https://admin.powerplatform.microsoft.com/
  2. Open the appropriate environment.
Power Platform Admin Center

In the ‘Auditing Tile’, you can easily check whether auditing is activated for your environment. If it’s not yet enabled, simply click the ‘Manage’ link located in the upper right corner of the tile, which will direct you to the Auditing Settings page.

Audit Settings

On the Audit Settings screen, check the ‘Start Auditing’, ‘Log access’, and ‘Read logs’ checkboxes.

There is a prompt that instructs you to set the retention policy for the logs. Options are:

  1. 30 days
  2. 90 days
  3. 180 days
  4. One year
  5. Two years
  6. Seven years
  7. Custom
  8. Forever
Audit Settings – Configuration

Configure your Audit Settings and select the Save button.