Microsoft released a new preview feature in the December 2020 version of Power BI Desktop. The official name seems to be “DirectQuery for Power BI datasets and Analysis Services”. However, an easier way for existing Power BI users to identify the feature could be “new composite models”. Think of it as “composite models 2.0” and you get the point.

It is not easy to find a name that accurately describes the massive step forward this feature represents in the evolution of Business Intelligence solutions. It would fully deserve a hyperbolic name, but marketing departments have already overused so many terms over the years: “universe” was taken back in the past century, and you can imagine what happened after that. We like “composite models” because it is a name that concisely describes the purpose of this new feature.

The idea is amazingly simple: you start with a Power BI dataset published on powerbi.com. You connect to it, and you can build a new model over the original one. You can extend the original model by adding tables, columns, and measures, or by connecting to other datasets and combining them into a single semantic model. While doing so, you do not lose any element from the original model. The measures continue to work; the relationships continue to work. You do not have to worry about anything but the additional data you want to integrate. When the original model is refreshed, your local model also sees any updated information.
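
As a concrete, hypothetical illustration: assuming the remote dataset exposes a Sales Amount measure and a Date table (these names are assumptions for the sake of the example, not part of any specific dataset), a measure added to the local composite model can build directly on the remote objects:

    Sales Amount YTD :=                   -- new measure, defined only in the local model
    CALCULATE (
        [Sales Amount],                   -- measure defined in the remote dataset
        DATESYTD ( 'Date'[Date] )         -- 'Date' table also lives in the remote model
    )

No data is copied: the local model only stores the new definition, and the measure reflects any refresh of the remote dataset.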

In other words, you work as if you had a local copy of the model and full rights to modify and expand it, even though you are not duplicating any data already stored on the server.

In the related video, Alberto shows more practical examples of what you can do and how this technology works. In this post, I want to share a few thoughts about this feature.

A concept as simple as “extending a semantic model” actually hides a problem that is incredibly difficult to solve. Since 1990, many products have provided a notion of a semantic model that enables users to navigate data and create reports without having to write a query for each visualization. Many companies implemented different approaches to the problem, millions of users adopted these technologies, and yet many users continued to download a copy of the data into Excel to perform their own analyses.

While Excel is a fundamental tool for any business, in many cases using Excel was a workaround to a problem common to many users: they needed a slightly different view of the data. For example, they needed to create a new classification of products, or to compare sales with budget data already stored in another data source (Excel, maybe?). In the absence of a better solution, “download in Excel” was always the way to go. There was never an architectural solution that solved this problem once and for all.

Until today.

The December 2020 version of Power BI Desktop contains a preview of the feature we have been awaiting for so many years. I doubt that the mainstream media will recognize the importance of such an innovation, but users who create and consume reports every day will immediately realize the impact of this change.

For example, you can open Power BI, connect to a Sales dataset published on powerbi.com, and:

  • Add a column to create a custom classification of customers or products;
  • Import an Excel file with the budget and create multiple relationships with the existing model, comparing budget and sales without downloading a local copy of the sales data;
  • Connect to a Purchases dataset and create relationships between the different models, without copying any data locally, and create new measures that leverage measures defined in both existing models (see the sketch after this list).
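
As a rough sketch of the first and third scenarios above (every table, column, and measure name here is an assumption made for illustration), the local composite model could define a calculated column and a cross-model measure along these lines:

    -- Hypothetical calculated column added locally to the remote 'Product' table
    Price Band =
    SWITCH (
        TRUE (),
        'Product'[Unit Price] <= 10, "Low",
        'Product'[Unit Price] <= 100, "Medium",
        "High"
    )

    -- Hypothetical measure leveraging measures defined in the Sales
    -- and Purchases datasets, related through the new local relationships
    Purchases over Sales % :=
    DIVIDE ( [Purchases Amount], [Sales Amount] )

Neither object duplicates any data stored on the server; both live only in the local composite model.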

As simple as it is to use, keep in mind that the technology required to solve these problems is incredibly complex. It is the result of a long-term investment over many years, and it is not easy to replicate in other products. I do not like to make comparisons between vendors, but starting today I really wonder how far along any Microsoft competitor is on this front. They may all consider the market for this feature a niche, whereas I anticipate broad adoption across the enterprise market.

Of course, I might be wrong. I might be overestimating the impact of the new composite models just because I am blinded by the brilliance of this technological diamond.

But if I am only 30% right, this feature still creates a huge gap between Microsoft and its competitors in enterprise reporting and analytics.

I might be biased, but I recognize that Microsoft does not always get it right. Sometimes they fail on simple implementations, and I have never had an issue saying so loud and clear. But this is not one of those times. The new composite models are one of those features that require a long-term commitment, which is hard to find in a world driven by quarterly results. Microsoft did it, and I am honestly amazed.

However, an easy prediction is that this feature will also be used in the wrong way. Beware of new architectures based on the new composite models: I am already scared of diagrams showing dozens of data sources federated into a “virtual” semantic model connected to smaller datasets. Even though we will teach, loud and clear, that this is a bad idea, someone will try it anyway.

This is how evolution works… and technological evolution is no exception. But failures caused by the misuse of a technology do not constitute proof of its irrelevance. It is the adoption of this technology throughout the next decade that will write the story.

This change will take time: time to create the proper models, to understand the limitations, to define new best practices, and to learn how to optimize performance. But I am sure that today is a historic moment. I used to call this the “ultimate feature” of a semantic model. I am so excited about the long journey ahead, where we get to learn and use something we could only dream of before.

It is here, and it just works.
