The Art of the Start
A key report is slow.
Users are complaining, refresh cycles are timing out, and the business is starting to lose trust in the data. It’s a familiar story, and the immediate impulse is often to dive in and start fixing things.
But what if the secret to effective performance tuning isn't about tools, but about questions?
Performance is more than just load times; it’s a critical business issue that affects cost, capacity, and user adoption. Inefficient reports consume more Capacity Units (CUs) in Microsoft Fabric, potentially driving up costs and impacting other workloads. They lead to refresh failures, unreliable data, and frustrated users who abandon the solution altogether.
At our core, we believe that you can't fix what you don't understand.
The most impactful performance improvements come from a methodical approach that begins long before a diagnostic tool is ever opened. This first article is our playbook for that crucial first step: gathering the essential context to ensure that tuning efforts are targeted, effective, and lasting.
The Discovery Checklist: Asking the Right Questions
Before we analyze a single DAX query, we act like detectives, gathering clues about the solution's history, architecture, and purpose. This context is essential for interpreting performance data and prioritizing action.
Uncovering the History
We start by interviewing stakeholders to understand the timeline of the problem and what has already been tried.
- What are the specific performance pain points from the user's perspective?
- When did these issues begin? Was it after a specific change?
- What troubleshooting has already been done, and what was learned?
- Is the issue observed in Power BI Desktop, in the Service, or both?
- Who originally built the solution, and what is their experience level?
- How frequently is the solution used, and by how many users? Are there unusual usage patterns (e.g., heavy use at month-end)?
- Is the semantic model built for this specific solution or intended for broader self-service?
Assessing the Environment
Next, we map out the technical landscape to understand the resources and constraints at play.
- What Fabric or Power BI SKU is being used (e.g., F64, P1, PPU)? How heavily utilized is the capacity?
- What type of workspace hosts the content (Premium/Fabric, Pro, Personal)?
Deconstructing the Semantic Model
Finally, we examine the solution's architecture to identify potential areas of focus for both data refresh and report calculations.
- What is the model’s storage mode (Import, DirectQuery, Hybrid, Direct Lake)?
- Is row-level security (RLS) implemented?
- What is the source system and data format (e.g., SQL tables, views, CSV files)?
- For Refresh Time:
  - What data connector is used? Is it relying on a generic ODBC connection?
  - Is a data gateway required?
  - Is incremental refresh enabled?
  - Where are business rules applied: at the source in SQL, or later in Power Query?
For Calculation Time:
-
What is the modeling approach—a clean star schema or something more complex?
-
Are many-to-many or bi-directional relationships used?
-
Are there known bottlenecks on specific report pages or complex visuals?
-
How visually dense are the report pages? Are custom visuals being used?
-
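The relationship questions above can often be answered before any deep analysis by querying the model's Dynamic Management Views (DMVs) from DAX Studio, or from any client connected to the semantic model's XMLA endpoint. This is only a sketch: it assumes the documented TMSCHEMA_RELATIONSHIPS schema, and the numeric codes noted in the comments follow the Tabular Object Model enums (worth double-checking against the documentation).

```
-- Inventory the model's relationships (run in the DAX Studio query window).
-- CrossFilteringBehavior: 1 = single direction, 2 = both directions.
-- FromCardinality / ToCardinality: 1 = one side, 2 = many side.
-- Table IDs can be matched to names via $SYSTEM.TMSCHEMA_TABLES.
SELECT
    [Name],
    [FromTableID],
    [ToTableID],
    [IsActive],
    [CrossFilteringBehavior],
    [FromCardinality],
    [ToCardinality]
FROM $SYSTEM.TMSCHEMA_RELATIONSHIPS
```

Relationships that cross-filter in both directions, or that are many-to-many on both ends, are among the first things to revisit when visuals are slow, which is why they earn their own line in the checklist.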
Assembling Your Toolkit: Prep and Access
With the background established, the final step is to prepare for testing. This means ensuring you have the right tools and permissions. Most of these tools are lightweight and do not require complex installation.
- Power BI Desktop: Essential for local testing; it contains the built-in Performance Analyzer.
- DAX Studio: The go-to tool for analyzing DAX query performance and model memory usage (see the sample query after this list).
- Tabular Editor: Indispensable for its Best Practice Analyzer, which flags inefficient model design patterns.
- Measure Killer: Quickly identifies unused fields and measures that can be removed to simplify the model.
- Report Analyzer: Creates a visual wireframe of your report to identify slow-loading objects.
- SQL Server Profiler: Traces semantic model refreshes to analyze where refresh time is spent.
- Generative AI (ChatGPT, Copilot): Useful for reviewing DAX expressions and providing optimization suggestions based on best practices.
Before you begin, ensure you have the necessary access: a copy of the PBIX file for isolated desktop testing, Contributor or Build permissions in the service, and access to the underlying data sources to support refreshes.
What’s Next?
This methodical discovery is the foundation of every successful tuning engagement. It ensures that when we move to the next phase, our efforts are precise and impactful.
In the articles that follow, we’ll put this foundational knowledge to work. We’ll dive deep into the two primary culprits of poor performance, applying our toolkit to diagnose and solve issues related to semantic model refresh time and report calculation performance.