Microsoft Fabric has experienced significant growth, with its customer base expanding by nearly 75% over the past year—from 11,000 to over 19,000 organizations. This surge underscores the platform’s appeal as a unified solution for data engineering, analytics, and business intelligence.
A standout feature contributing to this adoption is OneLake Shortcuts. These shortcuts enable organizations to reference data across different domains, clouds, and accounts without the need to move or duplicate data. By creating a single virtual data lake, OneLake Shortcuts facilitate seamless data access and collaboration across various teams and departments.
In this blog, we’ll explore how OneLake Shortcuts address these needs by providing a streamlined approach to data sharing, reducing redundancy, and enhancing performance across the board.
What Is a OneLake Shortcut?
A OneLake Shortcut in Microsoft Fabric is a smart way to link to data without actually moving or copying it. Think of it as a virtual bridge — it connects to data stored in another location (like a different workspace, storage account, or even a separate capacity), letting you access and use that data as if it were local. The actual data never moves. It stays right where it was created. But thanks to the shortcut, you can still explore it, query it, build semantic models, and create Power BI reports on top of it — all from a different workspace.
It works just like a shortcut in Google Drive or a symbolic link in a file system—the same data, different access points.
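To make the analogy concrete, here is a minimal sketch of what consuming a shortcut looks like from a Fabric notebook. It assumes the notebook's default Lakehouse already contains a shortcut named Sales under Tables; the table and column names are illustrative, and spark is the session Fabric notebooks provide automatically.

```python
# Minimal sketch: reading a shortcut table as if it were local.
# Assumes a Fabric notebook whose default Lakehouse has a shortcut
# named "Sales" under Tables/. "ItemCategory" is an illustrative column.
df = spark.read.table("Sales")  # resolves through the shortcut; no data is copied
df.groupBy("ItemCategory").count().show()
```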
Here are a few features that make OneLake Shortcuts super useful:
No Data Movement: Data stays in its source location — no duplication, sync jobs, or migration tasks.
Cross-Workspace Access: Easily bring data from one Fabric workspace into another without importing or copying.
Cross-Capacity Support: You can access data even if the source is in a paused or separate capacity.
Consumer-Pays Model: When someone uses the shortcut, their capacity is charged — not the owner’s.
Unified View Across Domains: Perfect for domain-based data mesh setups — centralize data, decentralize access.
Low Overhead: Shortcuts are just pointers, so almost no performance or storage cost is added.
Where Can You Use OneLake Shortcuts in Microsoft Fabric?
OneLake Shortcuts are built to be flexible and work across many parts of the Microsoft Fabric ecosystem. Here’s where and how you can use them:
Across Capacities (Trial, Paid, On-Demand): Access data from a workspace that’s on a paused or different capacity. Shortcuts pull data through the active capacity where the shortcut lives — keeping things running even when the source is offline.
From Azure Data Lake Storage Gen2 (ADLS Gen2): Create shortcuts to external storage so you can read data files directly without uploading or duplicating them in Fabric (see the sketch below).
Across Domains and Business Units: Teams can share certified datasets across domains (great for data mesh setups) without shifting data ownership or causing version chaos.
In Hybrid Storage Environments: Use shortcuts to unify access to data that may live partly in Fabric and partly in external systems like Azure or other cloud storage.
No Ownership Conflicts: The original data remains owned and controlled by the source workspace or team. Shortcuts don’t change permissions or access rules, so you avoid accidental edits and permission headaches.
Lightweight and Non-Intrusive: Since shortcuts are just pointers, they don’t take up extra storage and add very little overhead — making them ideal for scaling access without scaling complexity.
Bottom Line: If you’re using Microsoft Fabric across departments, domains, or workloads, OneLake Shortcuts let you share and access data without hassle — no movement, no duplication, and no need to redesign your architecture.
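For the ADLS Gen2 case mentioned above, a shortcut can also be created programmatically. The sketch below uses the Fabric REST API's OneLake Shortcuts endpoint with an adlsGen2 target; all IDs, URLs, and the connection are placeholders, and the exact payload shape should be verified against the current API reference before use.

```python
# Hedged sketch: creating a shortcut to external ADLS Gen2 storage via the
# Fabric REST API. Placeholders must be replaced with real values, and the
# token needs the appropriate Fabric scope.
import requests

TOKEN = "<entra-access-token>"        # assumption: acquired via MSAL or azure-identity
workspace_id = "<workspace-id>"       # workspace holding the Lakehouse
lakehouse_id = "<lakehouse-item-id>"  # Lakehouse that will host the shortcut

resp = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}"
    f"/items/{lakehouse_id}/shortcuts",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "path": "Files",              # create the shortcut under Files/
        "name": "external_raw",       # hypothetical shortcut name
        "target": {
            "adlsGen2": {
                "location": "https://<account>.dfs.core.windows.net",
                "subpath": "/<container>/<folder>",
                "connectionId": "<fabric-connection-id>",  # a saved cloud connection
            }
        },
    },
)
resp.raise_for_status()
print("Shortcut created:", resp.status_code)
```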
Who Pays for the Data Access?
One of the most important—and often overlooked—features of OneLake Shortcuts is how billing works. It’s not just about linking to data; it’s also about who gets charged when that data is used.
Consumer Capacity Gets Billed, Not the Owner’s
When a user queries data through a shortcut — whether via a Power BI report, a SQL query, or a Spark job — the processing happens in their capacity, not the one that holds the original data. So:
The source workspace doesn’t take the hit.
The destination workspace (where the shortcut lives) pays the compute cost.
Storage charges still apply to the original location — but those are based on GB stored, not compute used.
Why This Matters in Practice
No resource conflicts: Your data engineering team can load and transform data without being slowed down by report users running heavy dashboards simultaneously.
Cleaner performance management: You can isolate noisy workloads (like large reports or frequent refreshes) to a separate capacity, keeping your main data processing smooth and stable.
Easier cost tracking: Since compute billing is tied to usage, each team pays for what it uses, which is especially useful in large organizations with multiple business units.
Works even when source capacity is paused: This is a game-changer. If the original workspace’s capacity is turned off (for example, paused to save costs), shortcuts still work — as long as the consuming capacity is active.
Example Scenario
Let’s say:
Team A owns a Lakehouse in Workspace A, using a paid capacity.
Team B needs that data for its own reporting. Instead of duplicating it, Team B creates shortcuts in its workspace.
Any queries, model refreshes, or reports built by Team B use their own capacity — not Team A’s.
So, Team A’s capacity isn’t throttled, and Team B gets the data access they need without stepping on anyone’s toes.
Why Use Multiple Capacities?
In real-world setups, one capacity often can’t handle everything smoothly. When data engineers run heavy jobs like Spark notebooks or SQL pipelines, and analysts are refreshing Power BI reports simultaneously, the system starts to lag.
The better approach is to split workloads:
Use one capacity for data processing — this handles ETL, data ingestion, and transformations.
Use a second capacity for reporting: this serves semantic models and Power BI reports.
This separation improves performance, reduces delays, and helps avoid resource conflicts. Reporting teams get faster results, while data teams can process without interruption.
OneLake Shortcuts make this possible without duplicating data. Using shortcuts, you can keep data in one workspace (on a processing capacity) and access it from another (on a reporting capacity). It keeps everything connected — and keeps your users out of each other’s way.
Shortcuts in Data Mesh & Domain-Driven Architecture
Microsoft Fabric is built to support modern data architecture patterns, two of the most relevant ones today being data mesh and domain-driven design. Both approaches focus on decentralizing data ownership by assigning responsibility to the teams who know the data best—the domain experts.
But they also introduce a new challenge: How do you share and reuse data across domains without duplicating it or losing control? That’s where OneLake Shortcuts step in.
1. How It Works in a Domain-Centric Setup
In a typical domain-driven setup:
Each team or department — like Sales, Finance, or Operations — has its own workspace.
They build pipelines, transform data, and produce certified datasets for others to use.
But instead of everyone downloading or copying the same dataset into their own workspace (leading to multiple versions of the same data), other domains can simply create shortcuts to the certified data in place.
With OneLake Shortcuts:
Data ownership stays with the producing domain.
Consuming teams can access the latest version instantly through a virtual link.
There’s no duplication, no extra storage, and no outdated copies floating around.
2. Why This Matters for Data Mesh
A true data mesh requires four key principles:
Domain-oriented ownership
Self-serve data infrastructure
Data as a product
Federated governance
OneLake Shortcuts help reinforce all four:
Domain-oriented ownership: Teams keep control over their data while still enabling access.
Self-serve data infrastructure: Consuming teams can set up access on their own by creating shortcuts, with no central copy jobs or hand-offs.
Data as a product: Certified, high-quality datasets can be shared broadly without risk of duplication or drift.
Federated governance: Permissions and access control stay centralized and consistent because the data doesn’t move.
3. Example Use Case
Let’s say the Finance team owns a master dataset of monthly revenue stored in a Fabric Lakehouse. The Marketing team needs to build a report using that data. Instead of exporting it or asking Finance to load it elsewhere, Marketing simply creates a shortcut to the dataset in their own workspace.
Now:
Finance keeps ownership and maintains data quality.
Marketing gets live access to the same version of the data.
There’s no confusion over which version is correct.
Everyone avoids duplication, sync errors, and redundant storage costs.
How OneLake Storage Billing Works
OneLake in Microsoft Fabric follows a straightforward billing model that separates storage from compute. This gives you the flexibility to scale workloads and manage costs independently. Think of it like a modern cloud storage service — you pay for what you store, not for how often you use it.
1. Storage Is Billed Per GB
OneLake charges based on the volume of data stored, much like Amazon S3 or Azure Blob Storage. If you store 500 GB of data, you’re billed for exactly that. There’s no added cost for simply keeping your data in OneLake. This approach makes it predictable and efficient, especially for teams managing large volumes of structured or semi-structured data.
2. No Extra Charges for Read or Write Operations
OneLake doesn’t tack on transaction fees when you read from or write to your data. Whether you’re querying data using Power BI, running transformations in Spark, or pushing updates via pipelines, you’re not penalized for how often you access it. This is a big plus for active environments where data is being refreshed or reported on frequently — the cost remains based solely on storage, regardless of usage intensity.
3. Storage Remains Active Even When Capacity Is Paused
Your data is always available, even if the workspace or capacity it was created in is currently paused. Storage billing continues in the background, but compute usage stops, which can help reduce costs. If you have a Lakehouse or Warehouse in a paused capacity, you can still access its data using a shortcut from another active workspace. This makes it easier to manage workloads and budgets without disrupting access to important datasets.
4. Separation of Storage and Compute Enables Flexibility
This design — keeping storage and compute separate — allows teams to plan resources more effectively. You can centralize your storage in OneLake and activate compute only when needed. Shortcuts let you reuse the same datasets across multiple teams and workspaces, all while keeping control of compute consumption.
Setting Up OneLake Shortcuts in Microsoft Fabric
1. Set Up Workspaces
Workspace A: Holds the Lakehouse with real data (Lake01). This is where the actual tables like Customer, Date, Geo, Item, and Sales are stored. It’s your core data processing environment.
Workspace B: This workspace will be used for reporting. It starts out empty but will later contain shortcuts to the tables in Workspace A. This setup helps separate compute workloads and avoids putting strain on the data processing layer.
2. Check Capacities
Both workspaces begin on trial capacities, which is common in initial development or testing. Later in the process, you will switch Workspace A to a paid Fabric capacity to handle heavy data loads more reliably. Workspace B can remain on a trial or lighter capacity since it will be used mostly for reading and reporting.
3. Create Shortcuts
In Workspace B:
Create a new Lakehouse (this Lakehouse will not store any real data).
Click on the “New Shortcut” option inside the Lakehouse.
Point the shortcut to Workspace A’s Lake01.
Select the required tables: Customer, Date, Geo, Item, and Sales.
Confirm and create the shortcuts.
At this point, Workspace B will have access to the tables, but they are not duplicated. All data still lives in Workspace A, and Workspace B only references it through shortcuts.
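The same shortcuts can be scripted instead of clicked through. Below is a hedged sketch using the Fabric REST API's OneLake Shortcuts endpoint with a oneLake target, looping over the five tables; all IDs are placeholders, and the payload shape should be checked against the official API reference.

```python
# Hedged sketch: creating the Workspace B shortcuts programmatically.
# All IDs are placeholders; TOKEN must be a Fabric-scoped Entra token.
import requests

FABRIC_API = "https://api.fabric.microsoft.com/v1"
TOKEN = "<entra-access-token>"           # assumption: acquired elsewhere

workspace_b = "<workspace-b-id>"         # where the shortcuts will live
lakehouse_b = "<shortcut-lakehouse-id>"  # the empty Lakehouse in Workspace B
workspace_a = "<workspace-a-id>"         # source workspace
lake01 = "<lake01-item-id>"              # source Lakehouse (Lake01)

for table in ["Customer", "Date", "Geo", "Item", "Sales"]:
    resp = requests.post(
        f"{FABRIC_API}/workspaces/{workspace_b}/items/{lakehouse_b}/shortcuts",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "path": "Tables",            # land the shortcut under Tables/
            "name": table,
            "target": {
                "oneLake": {
                    "workspaceId": workspace_a,
                    "itemId": lake01,
                    "path": f"Tables/{table}",
                }
            },
        },
    )
    resp.raise_for_status()
    print(f"Created shortcut for {table}")
```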
4. Build Semantic Model
Use the shortcut tables inside Workspace B to build a semantic model. This model will serve as the base for your Power BI reports. Set up key relationships such as:
Item ID in Sales → Item table
Customer ID in Sales → Customer table
Geo ID in Sales → Geo table
This step ensures your model is logically connected and ready for reporting.
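Before publishing the model, it can be worth a quick sanity check that the key columns actually line up across the shortcut tables. A small sketch, run from a notebook in Workspace B with the shortcut Lakehouse attached; the column name ItemID is an assumption based on the relationships listed above.

```python
# Sanity check: every Sales row should resolve to an Item. "ItemID" is an
# assumed column name; adjust to match your schema. Runs in a Fabric
# notebook where the shortcut tables are queryable by name.
orphans = spark.sql("""
    SELECT COUNT(*) AS sales_without_item
    FROM Sales s
    LEFT ANTI JOIN Item i ON s.ItemID = i.ItemID
""")
orphans.show()  # 0 means the Sales -> Item relationship is clean
```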
5. Build a Report
Create a simple Power BI report using the semantic model — for example, a chart showing total quantity by item category.
You can save this report in Workspace B, but it’s highly recommended to use a third workspace for reporting. This gives you better control over access, versioning, and user permissions — especially if different teams are handling modeling and reporting.
Switching to Paid Capacity
Once the setup is tested and working on trial capacities, it’s time to move Workspace A — the one holding your actual data — to a paid Fabric capacity. This ensures better performance, more consistent availability, and avoids the limitations of trial usage.
1. Turn On Your On-Demand Fabric Capacity
Go to the Azure portal or Microsoft 365 admin center, depending on how your organization manages Fabric. Locate your on-demand capacity (such as F2, F4, etc.), and start it. You’ll know it’s active when the status changes and you see the “Pause” option enabled — this means the capacity is now running and ready to handle workloads.
2. Assign Workspace A to the Paid Capacity
Open Microsoft Fabric (app.powerbi.com), navigate to Workspace A, and click on the settings icon in the top right corner. Under “Settings” > “Licenses,” you’ll see the current capacity assignment. Click Edit, and from the dropdown list, choose the active paid capacity that you just started.
This step shifts all compute operations for that workspace — including Spark jobs, SQL queries, and pipeline runs — to the paid capacity. This helps you avoid throttling and resource limits often seen in the trial setup.
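If you manage many workspaces, the same assignment can be scripted. Here is a hedged sketch using the Fabric REST API's Assign To Capacity operation; the IDs are placeholders, and the endpoint should be confirmed in the current API reference.

```python
# Hedged sketch: assigning Workspace A to the paid capacity via REST.
import requests

TOKEN = "<entra-access-token>"    # assumption: Fabric-scoped Entra token
workspace_a = "<workspace-a-id>"
paid_capacity = "<paid-capacity-id>"

resp = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_a}/assignToCapacity",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"capacityId": paid_capacity},
)
resp.raise_for_status()  # 202 Accepted: the assignment is processed asynchronously
```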
3. Confirm It’s Active Under Workspace Settings
After assigning the workspace to the new capacity, double-check the change by refreshing the workspace page and going back to the “Licenses” section in settings. You should now see that Workspace A is running on the Fabric capacity you selected.
This change doesn’t affect the data itself — the Lakehouse and its tables remain intact. But from this point forward, any processing done in Workspace A will use the more powerful paid capacity, giving you higher throughput and more stability for production-level workloads.
Testing Access When Source Capacity Is Paused
Now it’s time to see how OneLake Shortcuts handle real-world conditions — like a workspace going offline.
1. Pause the Paid Capacity Assigned to Workspace A
In the Azure portal, pause the Fabric capacity that Workspace A is using. This simulates a scenario where the capacity is turned off, either to save costs or due to planned downtime.
2. Try Opening the Lakehouse in Workspace A — It Fails
Go to Lake01 in Workspace A and try to open it. You’ll get an error because the workspace no longer has active compute. The data is still stored, but you can’t access it directly without capacity.
3. Go to Workspace B (Still on Trial Capacity)
Now switch to Workspace B, which contains the shortcuts pointing to Lake01. This workspace is still running and has compute available.
4. Open the Shortcut Lakehouse and Report — They Work
Open the Lakehouse in Workspace B. The shortcut tables load without issues. Open the Power BI report — it still shows the data and refreshes as expected.
Best Practices for Structuring Workspaces with OneLake Shortcuts
For a scalable and clean setup in Microsoft Fabric, it’s a good idea to separate your workloads into purpose-specific workspaces. This approach helps you manage permissions, organize roles, and control performance more effectively — especially when you’re using OneLake Shortcuts to share data across layers.
Workspace 1: Raw Data
Used for storing raw or curated data in Lakehouses or Warehouses. This is where your ingestion, transformation, and ETL tasks run. Data engineers typically manage it, and it is compute-heavy.
Workspace 2: Shortcuts and Semantic Models
This is where you create OneLake Shortcuts pointing to the raw data from Workspace 1. You also build semantic models here, which define tables, relationships, and business logic used by reports. It acts as a bridge between the data layer and reporting.
Workspace 3: Reports
Dedicated to report building and publishing. Analysts and business users use this layer to create dashboards using semantic models. This workspace stays lean and doesn’t handle heavy compute.
Kanerika: Helping You Build Smarter Data Architectures with Microsoft Fabric
Implementing Microsoft Fabric the right way — especially with features like OneLake Shortcuts and layered workspace design — can make a big difference in how teams access, manage, and act on data. At Kanerika, we help organizations do exactly that.
As a certified Microsoft partner with deep expertise in data and AI, Kanerika works closely with businesses to integrate Fabric into real-world workflows. From setting up multi-capacity environments to designing shortcut-driven models that avoid duplication, we build practical, scalable solutions tailored to your goals.
Our hands-on experience across industries means we don’t just recommend best practices—we implement them fast. Whether you’re modernizing reporting, consolidating data across teams, or building for scale, we ensure your Fabric environment is built to deliver results from day one.
Partner with Kanerika and take the next step toward faster insights, cleaner architecture, and smarter decisions.
Frequently Asked Questions
What is OneLake in Microsoft Fabric? A data lake is the foundation for all Fabric workloads; in Microsoft Fabric, that lake is called OneLake. It’s built into the platform and serves as a single store for all organizational data. OneLake is built on ADLS (Azure Data Lake Storage) Gen2.
What is the difference between OneLake and lakehouse? OneLake is a unified, logical data lake for an entire organization in Microsoft Fabric, while a Lakehouse is a specific architecture and storage location within OneLake that combines the flexibility of a data lake with the query capabilities of a data warehouse. Essentially, OneLake is the broader container for all data, and Lakehouses are built on top of it, serving as individual environments for structured and unstructured data management.
What are the benefits of OneLake? Microsoft OneLake offers scalable storage, built-in security, and integration with other Microsoft tools, and it brings many more benefits. Efficiency in data management: OneLake centralizes and organizes large volumes of information in one place, facilitating more effective access and management.
What is the difference between OneLake and OneDrive? OneLake is the OneDrive for data. Just like OneDrive, you can easily explore OneLake data from Windows using the OneLake file explorer for Windows. You can navigate all your workspaces and data items, easily uploading, downloading, or modifying files just like you do in Office.
What is the difference between OneLake and Direct Lake? Direct Lake is a storage mode option for tables in a Power BI semantic model that’s stored in a Microsoft Fabric workspace. It’s optimized for large volumes of data that can be quickly loaded into memory from Delta tables, which store their data in Parquet files in OneLake—the single store for all analytics data.