Microsoft Fabric, Microsoft’s all-in-one analytics solution covering everything from data movement to data science, real-time analytics, and business intelligence, is driven by computing power. Fabric capacities provide that computing power, offering a simple and unified way to scale resources.
Fabio Hepworth, Head of Data and Integration in our Business Solutions Team, tells us what he’s learned so far.
When learning about Fabric, a few things grabbed my interest. My previous article goes into some initial lessons learned from migrating a traditional SQL Server data warehouse into a Fabric warehouse. Shameless plug here.
This article explores what capacities are and how they drive the Fabric experience. To save you suffering from TL;DR, my key takeaways are:
- A capacity is the way compute power and billing are represented in Microsoft Fabric – there are three different types, and they all work in the same way.
- Budgeting and controlling spend is straightforward, as you can know the maximum a Fabric capacity will cost for any given month.
- Bursting and smoothing help you to manage your workload.
- The capacity tries to keep background or scheduled jobs running.
- The Fabric Capacity Metrics app in Power BI shows very detailed information on capacity usage.
- In the event of an extreme overage, or a one-off big process, capacity can be increased on the fly, with no downtime, to allow for more throughput.
If that’s piqued your interest, then read on.
What are “capacities”?
Fabric capacities are the way that compute power and billing are represented in Microsoft Fabric and Power BI Premium, aiming to give a simple and flexible foundation to the licensing model. Capacity size is measured in capacity units (CUs), and consumption in capacity unit seconds (CU seconds), which represent how much throughput and processing power you use during a given period.
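As a back-of-the-envelope illustration of the units (my own arithmetic, not from the Microsoft docs, assuming an F-SKU’s number equals its CUs, so an F2 provides 2 CUs):

```python
# Back-of-the-envelope CU-second arithmetic.
# Assumption: an F-SKU's number equals its CUs (F2 = 2 CUs, F64 = 64 CUs).

def cu_seconds(sku_cus: int, window_seconds: int) -> int:
    """CU seconds of throughput a capacity provides over a time window."""
    return sku_cus * window_seconds

print(cu_seconds(2, 60))         # an F2 over one minute: 120 CU seconds
print(cu_seconds(2, 24 * 3600))  # an F2 over a full day: 172,800 CU seconds
```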
There are three types of capacity within Fabric:
Power BI Premium capacities
These aren’t new and continue to be bought via Office 365. Since October 2023, Premium capacities are no longer measured in v-cores (virtual cores) and are instead measured in capacity units (CUs). This is a measurement change only; performance and costing haven’t changed. The entry point here is a Power BI P1 capacity, which is about £4,000 a month.
Fabric capacities
These are the new kid on the block, and they are purchased through Azure. They can be bought in the same sizes as Power BI Premium capacities, but more interesting to us is that they can be bought in much smaller sizes. This allows a lower cost of entry to enterprise-grade data analytics tools. The entry cost for an F2 is capped at about £220 a month (more on this later).
The trial capacity
This doesn’t need to be purchased and allows us to test out the Fabric experience with 64 CUs of throughput. This is a useful way to test and play with Fabric’s tools. It can also be used to run a real workload, which will help you understand what level of capacity is required before purchase (and 64 CUs is a lot).
The cost of Fabric capacities
A very welcome piece of Fabric news for budgeting and controlling spend is that you can know the maximum a Fabric capacity will cost for any given month. It has a single cost regardless of workload or tool (whereas estimating Azure Synapse Workspace costs requires at least a master’s degree!)
Other useful points to note on capacity cost are:
- Capacity is universal compute, and all workloads within Fabric use it.
- Capacity usage is billed per minute of runtime, so you only pay for what you use.
- There is a fixed upper cost to the capacity if you run it 24/7 for the entire month (a quick sanity check of that cap follows this list).
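As a minimal sketch of that sanity check, the monthly cap is just the per-minute rate multiplied by every minute in the month. The hourly rate below is an illustrative assumption (roughly consistent with the ~£220/month F2 figure above), not published pricing – check the Azure pricing page for your region:

```python
# Sketch: the maximum monthly cost of an always-on capacity.
# HOURLY_RATE_GBP is an illustrative assumption, not published pricing.
HOURLY_RATE_GBP = 0.30  # assumed pay-as-you-go rate for an F2

def max_monthly_cost(hourly_rate: float, days_in_month: int = 31) -> float:
    """Upper bound: the capacity billed for every minute of the month."""
    minutes_in_month = days_in_month * 24 * 60
    return hourly_rate / 60 * minutes_in_month

print(f"£{max_monthly_cost(HOURLY_RATE_GBP):,.2f}")  # ≈ £223 for a 31-day month
```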
Pushing the boundaries
The other thing I wondered was: can I break it? Is all this really true?
I value knowing what happens when you push against the boundary of a platform. I believe that this level of understanding adds to our confidence when making recommendations to clients.
How will Fabric react? Are there overage and fair usage charges? Will Microsoft stop my workloads from running?
To answer this, I did the only reasonable thing: I set out to run a workload that utterly exceeded any realistic scenario:
- Purchased an F2 capacity SKU (the smallest available).
- Created 2,190 parquet files containing roughly 9.5 billion rows of data (a sketch of this step follows the list).
- Created a Warehouse table with the same schema as the parquet files.
- Created a pipeline job that would copy all the data from the files into the Warehouse table.
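For illustration, here’s a minimal sketch of the kind of script that could generate those test files from a Fabric notebook – the schema, output path, and per-file row count are my assumptions, not the exact files used in the experiment:

```python
# Hypothetical generator for the parquet test files.
# ~9.5bn rows / 2,190 files ≈ 4.34m rows per file.
import numpy as np
import pandas as pd

ROWS_PER_FILE = 4_340_000
for i in range(2190):
    df = pd.DataFrame({
        "id": np.arange(i * ROWS_PER_FILE, (i + 1) * ROWS_PER_FILE),
        "value": np.random.rand(ROWS_PER_FILE),
    })
    # Fabric notebooks mount the default lakehouse under /lakehouse/default
    df.to_parquet(f"/lakehouse/default/Files/load_test/part_{i:04d}.parquet")
```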
What happened?
The pipeline ran successfully and all the data loaded, exactly as expected, in 30 minutes.
How did it happen?
That is because capacities feature ‘bursting’: Microsoft will give you more capacity than you purchased to get the work done. Additionally, Microsoft will ensure that any started jobs finish; any throttling only happens on new/future queries and jobs.
But there’s no free lunch in the cloud. To pay back for this bursting, Microsoft have built a feature called ‘smoothing’. This is the process where the burst of capacity is smoothed across the rest of the day whilst your capacity is in low or idle usage. I’ve come to think of it as borrowing compute power from future you: instead of slowing down the job, Fabric lets you exceed your capacity and pay it back as if the job took longer.
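To make “borrowing from future you” concrete, here’s my own simplified arithmetic (an illustration, not Microsoft’s actual algorithm):

```python
# Simplified smoothing arithmetic (my own illustration, not the real algorithm).
SKU_CUS = 2                # an F2 provides 2 CU seconds per second
job_cu_seconds = 7_200     # assumed consumption of a bursty job
job_duration_s = 600       # the job ran for 10 minutes

provided = SKU_CUS * job_duration_s   # 1,200 CU seconds earned in that window
overage = job_cu_seconds - provided   # 6,000 CU seconds borrowed from the future
payback_s = overage / SKU_CUS         # idle time needed to pay it back
print(f"Overage: {overage} CU s, repaid after {payback_s / 60:.0f} idle minutes")
```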
Bursting and smoothing are completely transparent to end users and fully managed by the capacity. I have had a couple of other experiences with this, and it works well. It is normal to have ETL jobs that run in the morning or out of hours; knowing they can go a bit over capacity and still get the data in on time is reassuring.
Overage and throttling
In short peaks of bursting (up to 10 minutes of future capacity use) no throttling is imposed. The table from Microsoft below shows how throttling gets stricter as the overage increases.
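| Future smoothed consumption | Policy | Experience impact |
|---|---|---|
| Usage <= 10 minutes | Overage protection | Jobs can consume 10 minutes of future capacity use without throttling |
| 10 minutes < usage <= 60 minutes | Interactive Delay | User-requested interactive jobs are delayed 20 seconds at submission |
| 60 minutes < usage <= 24 hours | Interactive Rejection | User-requested interactive jobs are rejected |
| Usage > 24 hours | Background Rejection | All requests are rejected |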
Source for the table above is Microsoft Docs: Understand your Fabric capacity throttling – Microsoft Fabric | Microsoft Learn.
The takeaway here is that the capacity tries to keep background or scheduled jobs running.
I went well into the “all requests are rejected” territory… So, my workspace was toast for a while.
How do I know how much capacity I am using?
Microsoft have released a Fabric Capacity Metrics app in Power BI that shows very detailed information on capacity usage, including:
- The capacity available on the current SKU.
- How much capacity is being utilised, and by what – including evidence that we are bursting over capacity and that this is being smoothed over the next day.
- Changes to utilisation after resizing the SKU to F4 to reduce how much time smoothing/throttling takes.
The metrics app also shows details on overages:
- By going over capacity, the workspace starts adding to the cumulative overage.
- As capacity continues to be used, the overage increases.
- Resizing the capacity to F4 (doubling the size from F2) gets you out of throttle purgatory quicker: the increased CUs mean the cumulative overage can be paid back faster.
- Once the capacity is idle, burndown starts. This could have been done quicker on a larger capacity.
- Once burndown is complete, resize the capacity back to F2.
The takeaway here is that in the event of an extreme overage, or a one-off big process, capacity can be increased to allow for more throughput. The resize is quick and is reflected in Fabric within minutes.
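The resize can be scripted as well as done in the portal. Below is a minimal sketch against the Azure management REST API’s Microsoft.Fabric/capacities resource type – the api-version and request body shape are my assumptions, so verify them against the current Azure docs:

```python
# Sketch: resize a Fabric capacity from F2 to F4 via the Azure management API.
# Assumed: the Microsoft.Fabric/capacities resource type and this api-version.
import requests
from azure.identity import DefaultAzureCredential

SUB, RG, NAME = "<subscription-id>", "<resource-group>", "<capacity-name>"
url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
       f"/providers/Microsoft.Fabric/capacities/{NAME}?api-version=2023-11-01")

token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
resp = requests.patch(
    url,
    headers={"Authorization": f"Bearer {token.token}"},
    json={"sku": {"name": "F4", "tier": "Fabric"}},  # double the F2
)
resp.raise_for_status()
print(resp.status_code)  # 200/202 once the resize is accepted
```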
Note: Power BI capacities with autoscale enabled will automatically take care of this capacity resizing.
Overall, the experience with Fabric continues to impress both clients and me. Performance of the platform over analytical workloads is blazing fast, and the costing around it is transparent, reasonable, and flexible.
This article would not have been possible without this great blog from Chris Novak. Chris gets extra points for a great gif explaining bursting/smoothing. If you want to know more, and how the new Fabric capacities compare to Power BI Premium capacities, check it out.