AI integration in Power BI continues to evolve and mature. However, we are still far from the transformative impact on business outcomes and productivity promised in marketing materials and conference presentations. Recent tools and technology have the potential to change this: the Model Context Protocol (MCP) and Power BI MCP servers. In this article, we review the state of AI in Power BI, introduce MCP servers, and explain why they could be a big deal for both Power BI developers and users.
We prefer to show and not just tell, so here is a demonstration where a user in Claude Desktop asks to diagnose a problem on a dashboard and to retrieve the file in Power BI Desktop so that they can fix it.
As you saw, the single natural language request results in the LLM taking a series of steps: it finds the workspace (by speculating on the name), finds the dashboard, and then inspects the visual to diagnose the issue. It even found an unintentional column misspelling in the underlying dataset (Reptitions instead of Repetitions). All this happens because the user has a Power BI MCP server, which enables them to query and control Power BI items – and even Power BI Desktop – by using natural language.
To summarize, it is now possible to control most aspects of Power BI and Fabric by using natural language from your desktop with an MCP server. This includes creating, reading, updating and deleting workspaces, items, and their configuration or metadata. It does not require a Fabric capacity, and can even run locally on your laptop (Windows or Mac).
At SQLBI, we think that MCP servers could dramatically change the way that we work with and in Power BI. For this reason, we are writing a series of three articles about MCP servers and their relevance for Power BI, Fabric, and business intelligence at large. This first article introduces MCP servers and explains how to use them to chat with and visualize your data in any Power BI workspace from an LLM tool such as Claude Desktop.
If you want to understand the technical details of what the Model Context Protocol is and how to make your own server, then we recommend the documentation from Anthropic. We will not explain these details here; instead, the next section focuses on what MCP servers are for Power BI and Fabric, and why they are relevant.
First, let us briefly review the state of AI in Power BI as of June 2025.
AI in Power BI: the story so far
There are various ways that people have been using AI in Power BI, including both first-party and third-party tools and services. The following diagram shows you an overview of some of these.
This overview is quite simple and high-level, but it splits AI in Power BI into experiences meant for creators (or developers) and experiences meant for consumers (or business users):
- Creator experiences help people build with AI. They include:
- Copilot experiences from Microsoft. These require access to a Fabric capacity, and consume from your available compute resources.
- Third-party LLMs and the tools to use them, like ChatGPT, Claude, and Gemini, which you can access from various applications, including a web browser or VS Code. Some common use cases include:
- Generating or troubleshooting DAX and M code.
- Using web search and other tools to summarize or search information.
- Generating documentation, code comments, or field descriptions.
- Generating report background images or (parts of) report theme files.
- Generating sample data for portfolios, testing, and prototyping.
- Consumer experiences let people use AI to consume data. This typically involves an experience where people chat with their data and get answers in text, tables, and visuals, which we will refer to henceforth as conversational BI. To succeed with conversational BI in Power BI and Fabric, you must invest significant time and effort in preparing semantic models and data for use with AI:
- Copilot lets you summarize information and ask questions about it. You can do this in various places, including the Copilot pane in reports and apps, as well as the standalone Copilot window in Fabric, where you can chat with any supported item in your tenant that you have access to.
- Fabric data agents are separate items you can create in Fabric to set up custom conversational BI experiences with semantic models, lakehouses, or data warehouses. Data agents can be more flexible to set up and configure, but are more limited in the type of outputs they provide, since they do not generate charts and visuals.
- Power BI Q&A is an experience that allows for limited conversational BI, but it does not require a Fabric capacity to set up and use.
So far, the biggest impact of these experiences and tools is that people can leverage them to help write and troubleshoot code. There has been little impact in other areas, however, and conversational BI with Copilot has thus far been rather underwhelming. This will likely change soon with the availability of MCP servers.
Introducing MCP servers
Here is a diagram so that you understand where MCP servers fit into the “big picture” of Power BI and Fabric.
You can understand the process as follows:
- A user asks a question to a large language model via a host application, like Claude Desktop or VS Code.
- The host application – if it supports the Model Context Protocol – can use this protocol to leverage one or more MCP servers. An MCP server specifies code that gives the LLM prompts, resources, and tools to interact with various local and external data and services, including Power BI and Fabric.
- The information can be passive context provided back to the LLM to improve outputs. Tools can also trigger actions, like executing queries, changing configuration, or reading and writing metadata.
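The round trip described above can be sketched with a toy dispatcher. This is a simplified illustration of the idea, not the real MCP SDK: the actual protocol uses JSON-RPC 2.0 messages over stdio or HTTP, and a real tool would call the Fabric REST API rather than return canned data.

```python
import json

# Toy illustration of the MCP tool-call flow (not the real MCP SDK).
# A server exposes named tools; the host forwards the LLM's tool calls
# as JSON-RPC-style requests and returns the results as context.

def list_workspaces() -> list[str]:
    # A real tool would call the Fabric REST API here; we return
    # canned data to keep the sketch self-contained.
    return ["Sales Analytics", "Finance"]

TOOLS = {"list_workspaces": list_workspaces}

def handle_request(raw: str) -> str:
    """Dispatch a tools/call request to the matching tool."""
    req = json.loads(raw)
    tool = TOOLS[req["params"]["name"]]
    result = tool(**req["params"].get("arguments", {}))
    return json.dumps({"id": req["id"], "result": result})

# The host might send a request like this on the LLM's behalf:
request = json.dumps({
    "id": 1,
    "method": "tools/call",
    "params": {"name": "list_workspaces", "arguments": {}},
})
print(handle_request(request))
```

The result flows back to the LLM as context, which it can then use to decide on its next step – for example, calling another tool against one of the returned workspaces.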
In short, MCP servers let you interact with external resources and services in natural language. For instance, you can use MCP servers to ask questions about your data in Power BI, but also to take actions in Power BI like refreshing semantic models, changing configuration, or even formatting reports. These actions can be done individually or in bulk from the chat interface. They can be combined or repeated, and even used together with multiple MCP servers in tandem, such as taking an action in Power BI, and then sending an email, modifying a file, or creating a support ticket. The potential is enormous.
You can see an example of what this looks like below. A user in Claude Desktop uses a Power BI MCP server to search for a potential data quality issue; the LLM uses the server to find and explore the underlying model, both by looking at its schema and by querying the data in DAX.
Tools like MCP servers are interesting because they aim not to replace you, but to augment your existing workflow and improve your efficiency. In this example, the user can dispatch the agent to review the report while they finish another task, and use the results later to better diagnose and fix the problem. Basically, MCP servers can be built to solve specific problems that exist, rather than being built to find problems to solve, as many AI tools and features in recent years seem to have been.
MCP servers do this by extending the capabilities of LLMs with custom, third-party integrations. These integrations are model-agnostic and easy to set up and use. To use an MCP server, you need a host application that supports them through an MCP client. Examples include the Claude Desktop app, Claude Code, and VS Code.
Basically, these applications let the model access the capabilities of MCP servers:
- Resources give it relevant information for context, like files or schemas.
- Tools let it do things on your computer or in the real world, such as by using APIs.
- Prompts give standard templates for instructions, so that tools or resources are easier to use.
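As a rough sketch of these three capability types, here is the kind of catalog a server might advertise to its host. This is a simplified illustration with hypothetical names; the real protocol exposes these through separate listing methods (resources/list, tools/list, prompts/list), and the exact schemas are defined in the MCP specification.

```python
# Simplified illustration of the three MCP capability types a server
# can advertise to a host application. The names here (execute_dax,
# diagnose_visual, the model:// URI) are hypothetical examples.

server_capabilities = {
    "resources": [
        # Read-only context the LLM can pull in, identified by URI.
        {"uri": "model://sales/schema", "description": "Semantic model schema"},
    ],
    "tools": [
        # Actions the LLM can invoke, described by a JSON schema.
        {
            "name": "execute_dax",
            "description": "Run a DAX query against a semantic model",
            "inputSchema": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    ],
    "prompts": [
        # Reusable instruction templates the user can invoke.
        {"name": "diagnose_visual", "description": "Investigate a broken visual"},
    ],
}

tool_names = [tool["name"] for tool in server_capabilities["tools"]]
print(tool_names)
```

The inputSchema is what lets the model know which arguments a tool expects, so that it can fill them in correctly when it decides to call the tool.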
Here are some examples of MCP servers and what they let you do from a chat window. Note that you should carefully validate and check open source or third-party MCP servers before you start using them:
- Obsidian: Search, read, and manage your notes in an Obsidian vault.
- Excel: Create, read, and modify Excel workbooks – even without Excel being installed.
- Filesystem: Search, read, and manage local files and directories. There are various tools, including the ability to create new files or directories on your computer, or read from existing ones.
- GitHub: Search, read, and manage files in a GitHub repository, including various toolsets like creating, merging or reviewing pull requests, and creating, updating, or commenting on issues.
An MCP server can also be either local or remote:
- Local servers run on your computer. This could be a simple program or a Python script that you made yourself, which you connect to by specifying its directory location. You can only use local servers with an MCP host on that same computer (like Claude Desktop).
- Remote servers run on cloud infrastructure. You connect to them by specifying an endpoint, which might require authentication. You can use a remote server from any device, with any supported host.
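As an example of how a host is pointed at these servers, Claude Desktop reads an mcpServers section from its configuration file. The server names and paths below are hypothetical, and the exact shape of a remote server entry varies by host and version, so check your host application’s documentation for the format it supports:

```json
{
  "mcpServers": {
    "powerbi-local": {
      "command": "python",
      "args": ["C:\\mcp\\powerbi_server.py"]
    },
    "fabric-remote": {
      "url": "https://example.com/mcp"
    }
  }
}
```

The local entry tells the host how to launch the server as a child process, while the remote entry points at an endpoint that the host connects to over the network.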
In summary, an MCP server lets you take actions or access information from an application using an LLM; you can ask the LLM to do something, and if it has an integration that facilitates the task, it will try to do it. The LLM does this using tools, which are specified in code. For this reason, you must be careful when you use MCP servers from untrusted sources, because they could execute code on your computer or use your data in ways you do not want. However, MCP servers from Microsoft, or custom servers you create yourself using Microsoft APIs, are safe to use.
This might all seem very complicated, but it is surprisingly easy to make your own custom MCP server and run it locally. For instance, with Claude or a similar tool helping you, you can create and run one to query a Power BI model in less than an hour.
Power BI MCP servers and Microsoft Fabric MCP servers
An MCP server for Power BI and Fabric could be many things. However, the simplest version gives you a way to programmatically view and manage items in your tenant, such as by using the Power BI or Fabric APIs. By using these APIs, you can define multiple tools. Here is an example of what that might look like:
A basic MCP server could have the following tools:
- list_workspaces which uses the Fabric API to return workspaces you have access to in a tenant.
- list_models which uses the Fabric API to return semantic models in a workspace.
- get_model_definition which uses the Fabric API to retrieve and read the model metadata.
- execute_dax which uses the Power BI executeQueries API to evaluate DAX queries against the model.
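To make the last of these tools concrete, here is a sketch of how execute_dax might prepare its call to the executeQueries REST endpoint. The dataset ID and the DAX query are hypothetical examples, and the actual HTTP POST – which needs an Authorization header with an Azure AD bearer token – is deliberately left out.

```python
import json

API_ROOT = "https://api.powerbi.com/v1.0/myorg"

def build_execute_queries_request(dataset_id: str, dax: str) -> tuple[str, str]:
    """Build the URL and JSON body for the executeQueries REST endpoint."""
    url = f"{API_ROOT}/datasets/{dataset_id}/executeQueries"
    body = json.dumps({
        "queries": [{"query": dax}],
        "serializerSettings": {"includeNulls": True},
    })
    return url, body

# An execute_dax tool would POST this body with an
# Authorization: Bearer <token> header (token acquisition omitted).
url, body = build_execute_queries_request(
    "00000000-0000-0000-0000-000000000000",  # hypothetical dataset ID
    'EVALUATE SUMMARIZECOLUMNS(\'Date\'[Year], "Sales", [Sales Amount])',
)
print(url)
```

The response from the endpoint contains the query result as rows of JSON, which the MCP server passes back to the LLM so it can summarize or visualize the data.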
These APIs do not require a Fabric capacity, and they respect user permissions and data security rules when using user authentication. This means that we can use our MCP server with any workspace.
Such a server allows you to use any supported MCP host to do many things. For instance, you can chat with your data that is in semantic models, but you could also add more tools to access data in lakehouses and other data items.
How does using an MCP server compare to other conversational BI tools?
When you use an MCP server to chat with your data, you will probably notice – depending on the host application you use – that the experience is quite different from what we have seen so far with conversational BI tools on the market.
Until now, most tools have used a one-question-one-query flow, meaning that you get a single query to answer a single question. These tools also provide information from only a single source, and cannot easily mix information from multiple data sources. They cannot use external context, like web search or documents, unless it is provided in the chat. You also cannot change the model that these tools use, and you have limited options to customize their behavior and outputs, including the type and format of those outputs; for instance, Copilot is limited to producing Power BI core visuals with limited variation in formatting, while Fabric data agents typically produce tables and text. You cannot ask these tools to customize or improve the visuals, either.
In contrast, the experience in many LLM tools like Claude uses a reasoning flow. It can execute multiple queries to provide the data answer in a richer, more meaningful context. It might even try alternate approaches when it encounters incorrect or unexpected query results. Furthermore, it can enhance that context with additional data sources, information from the web, or even other MCP servers. With these tools, you have more control over what models to use, how they behave, and what types of outputs you get back. For instance, you can get a fully customized visual, and refine or alter that visual on the fly. However, the visuals generated by these tools are typically just HTML and CSS; they can be minimally interactive (with tooltips and animations) but use a static dataset extracted from the model.
Given that results from the reasoning flow in conversational BI are significantly more interesting and useful, it is likely that most tools are switching, or will switch, to this approach. However, it is unlikely that most BI tools will give you the full flexibility to enrich answers with context from outside of that tool’s data platform, or to modify the tool or model configuration, because of potential enterprise security risks.
Furthermore, until now, you have needed to invest significant time and effort into preparing your data models for use with AI. This means following very specific naming conventions, adding a linguistic schema, using AI instructions, and limiting what fields the AI can see and use.
When using LLM tools with an MCP server, this information might be less important. Since you have more control over the tool, server, or host application, you can get it to ask clarifying questions before using fields, or correct it to improve results. This improvement becomes even more dramatic when you use special MCP servers to give the application a persistent “memory system” where it can store learnings (about your model and data) for later use. We will talk more about memory systems in a follow-up article.
So in summary, when using MCP servers, you have control and flexibility to get conversational BI experiences to produce more useful outputs. Users can leverage these outputs with different tools in flexible ways, generating diagrams or visuals, combining data, or working in tandem with other MCP servers. To create a conversational BI experience with MCP servers, you can take an approach that requires less up-front investment to get your models ready for AI, and even build this knowledge with the application over time.
Considering the implications of conversational BI for reporting
So as you have seen, it is becoming relatively easy, cheap, and effective to chat with your data, and even to generate your own custom visuals, in natural language. These conversational BI experiences have the potential to help people overcome technical barriers to using and working with data. They are also flexible; ambitious users can customize or extend the capabilities of MCP servers as much as they can or want to.
So, will everyone chat with their data instead of using dashboards and reports? Is the dashboard finally “dead”? Of course not. However, it is likely that as conversational BI tools and features mature, users might find them more convenient to find, explore, and retrieve data for deeper analysis. In the future, it is realistic to expect that conversational BI might fit well into the 300-second view (of the 3-30-300 rule) to get details-on-demand, or maybe the 30-second view to filter and zoom to what is important.
However, reports and dashboards will remain important for people to get an overview of the most important information, and to surface where they need to spend more time and attention. This critical information must be glanceable and readily available; no business user should have to ask what their sales are this month, or how they are performing against a target. They probably also should not have to ask what the top- and bottom-performing categories are; this information should just surface, and it is so common that it is better shown in a report or dashboard.
Then, once they know where they should look deeper, conversational BI can help them dive into details, or explore data from niche, ad hoc, or individual perspectives that reports cannot easily cater to. Of course, that assumes that your semantic models are built well, that your data is of appropriate quality, and that your users have been trained to use these tools effectively…
That said, it is important to also consider the caveats and limitations of conversational BI, mainly due to the nature of the underlying technology of LLMs:
- Responses are non-deterministic, so you could get different results with the same prompts and context. You can take steps to ensure robustness from MCP server tools, but any direct output of the LLM (including generated code or text) will vary.
- LLMs can still hallucinate, providing inaccurate or fabricated information.
- The technology is still very new, so neither developers nor users know the patterns and behaviors that most reliably produce the best results.
However, unlike using LLMs alone, there exist approaches in MCP server design, development, and implementation that both mitigate these caveats and limitations and build on the strengths of LLMs. At SQLBI, we have been exploring these topics and more in depth, but the development of MCP servers goes beyond the scope of this introductory article, and this is an area that continues to evolve rapidly.
Conclusions
MCP servers provide a way for LLMs to integrate with external services and information. For Power BI and Fabric, this means that you can retrieve information from the items and data in your tenant. You can create your own MCP servers, which gives you control and flexibility regarding how LLMs are used and the types of outputs that people see. In general, this can lead to better and more useful outputs in conversational BI experiences over time, but it is still important to mitigate the risks arising from the caveats and limitations of the underlying technology.
In the next article, we will demonstrate and explain how Power BI MCP servers can aid development, including via the creation of semantic models and reports. Specifically, we will explain how you might use Power BI and Fabric MCP servers to build agentic experiences that control Power BI, Fabric, and the contents of your tenant, and what the possible risks and benefits are of this approach.