UPDATE 2017-05-20: Article updated including the new tiers S8/S9.
UPDATE 2017-06-18: Added comment to Developer tier, which can be used in production.
UPDATE 2019-10-10: Article updated including the new tiers S8v2/S9v2.
Azure Analysis Services offers several levels of service, which differ in features and performance. As of April 30, 2017, there are three tiers (Developer, Basic, and Standard), and within each tier there are different amounts of processing power and memory available.
Azure AS tiers
Azure Analysis Services offers three tiers, which differ in which modeling features are included:
- The Developer tier corresponds to the SQL Server Analysis Services Developer edition and includes all the modeling features. Despite its name, however, this tier can be used in production and is not restricted to development purposes.
- The Basic tier corresponds to Analysis Services in the SQL Server Standard edition, and does not include perspectives, partitions, or DirectQuery support.
- The Standard tier corresponds to Analysis Services in the SQL Server Enterprise edition, and includes all the modeling features.
It is important to note that the concept of “Basic” tier in Azure AS corresponds to the “Standard” edition in SSAS, whereas the “Standard” tier in Azure AS corresponds to the “Enterprise” edition in SSAS. Moreover, the “Developer” tier in Azure AS can be used in production to deliver services to end users, whereas the “Developer” edition in SSAS can be used only for development purposes.
The memory available corresponds to the memory used by Analysis Services, including both the VertiPaq storage and the cache used for query execution. It is not clear whether any paging is allowed during query or processing operations, but it is safe to assume that paging is not possible, in order to maintain a predictable performance level.
In the Developer tier, memory is limited to 3 GB. When testing queries, you might need additional memory for materialization; in this tier you therefore want to avoid keeping a database online when it is not in use. Considering that a database cannot be detached, you should enable backup/restore on an Azure storage account, so you can quickly restore a database when you need it.
The Basic tier has two options, 10 GB and 20 GB. Considering that you cannot have partitions in the Basic tier, you will likely use relatively small databases in this scenario. You will therefore choose the instance based on processing power rather than on memory available, unless you have many databases on the same instance requiring more memory.
The Standard tier has several options. At the bottom level, you have the S0, which offers the same performance as the B1 but at three times the price. The main differences are the support for partitions and for DirectQuery, which are not available in the Basic tier. Only if you need a small processing window, or if you want to enable the DirectQuery scenario, should you choose the Standard tier. Otherwise, you can probably save money using the Basic tier, despite a likely longer processing time.
Microsoft defines a metric called QPU (Query Processing Unit) to identify the processing power of an Azure AS instance. This indicator makes it easy to compare different service levels, but it is interesting to compare the performance with on-premises hardware in order to get an external reference. Please note that this comparison will become obsolete over time: in the future, Microsoft could use different hardware generations, improving the actual performance per QPU without changing the metric itself.
Based on our tests, 1 virtual core corresponds to 20 QPUs, and this matches the Microsoft statement in the FAQ section. This means that you have the following number of cores for each tier option:
- Developer: 1 core
- B1: 2 cores
- B2: 4 cores
- S0: 2 cores
- S1: 5 cores
- S2: 10 cores
- S4: 20 cores
- S8: 16 cores
- S8v2: 32 cores
- S9: 32 cores
- S9v2: 64 cores
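Given the 20 QPU per core ratio, the core counts above can be derived directly from the QPU figures of each SKU. A minimal sketch (the QPU values below are computed back from the core counts in this article, not taken from an official price list):

```python
# 1 virtual core corresponds to roughly 20 QPUs, per the Microsoft FAQ.
QPUS_PER_CORE = 20

# QPU values derived from the core counts listed above (assumed figures).
tier_qpus = {
    "Developer": 20, "B1": 40, "B2": 80,
    "S0": 40, "S1": 100, "S2": 200, "S4": 400,
    "S8": 320, "S8v2": 640, "S9": 640, "S9v2": 1280,
}

# Derive the approximate number of virtual cores per tier.
tier_cores = {tier: qpus // QPUS_PER_CORE for tier, qpus in tier_qpus.items()}
print(tier_cores)  # {'Developer': 1, 'B1': 2, ..., 'S9v2': 64}
```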
It is not possible to get the exact specifications of the hardware used by Microsoft, but our tests seem to confirm that all the service tiers currently available run their cores on a single socket, except for S8/S9, which use 2 NUMA sockets. You should carefully estimate the number of cores required: only tables with at least as many segments as available cores reach the maximum parallelization. Each segment has 8 million rows by default, so a single table scan uses 20 cores only if the table has at least 160 million rows.
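The segment rule above can be sketched as a simplified model (the function name is illustrative, not an Analysis Services API):

```python
import math

SEGMENT_SIZE = 8_000_000  # default rows per segment in Tabular

def scan_parallelism(table_rows: int, cores: int) -> int:
    """Effective degree of parallelism of a single table scan:
    bounded by both the number of segments and the available cores."""
    segments = math.ceil(table_rows / SEGMENT_SIZE)
    return min(segments, cores)

print(scan_parallelism(160_000_000, 20))  # 20 -> all cores are busy
print(scan_parallelism(40_000_000, 20))   # 5  -> only 5 segments to scan
```

In other words, on an S4 (20 cores) a table smaller than 160 million rows cannot keep every core busy during a single scan.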
However, a very important factor is the clock speed of the CPUs used. As described in a previous article, CPU speed is critical in Tabular, because many operations are performed by a single thread (especially those in the formula engine, but also the scan of tables smaller than 16 million rows). The tests performed are very positive. This is the query used to stress the formula engine (you can execute this query connecting to any Analysis Services database):
EVALUATE
ROW (
    "x", COUNTROWS (
        CROSSJOIN (
            SELECTCOLUMNS ( CALENDAR ( 1, 10000 ), "Num1", INT ( [Date] ) ),
            SELECTCOLUMNS ( CALENDAR ( 1, 10000 ), "Num2", INT ( [Date] ) )
        )
    )
)
The execution on Azure AS ranges between 6.8 and 7.2 seconds. As a comparison, the same test runs between 7.7 and 8.4 seconds on a Xeon E5-2673 v3 at 2.40 GHz (with a turbo mode at 3.20 GHz), and around 6 seconds on an i7-6700HQ at 2.60 GHz. The formula engine test is a good way to evaluate the clock speed of the CPU, which matters for the many single-threaded formula engine operations executed by SSAS Tabular when evaluating DAX and MDX expressions. From this point of view, desktop CPUs have a great advantage over server-class CPUs, which trade clock speed for a higher number of cores. If your main focus is raw single-thread performance, an i7-4790K at 4.00 GHz would be much better (the same test runs in 5 seconds), but on a server you need to balance different requirements. The hardware choice made for Azure Analysis Services seems very good, because it is not easy to find virtual machines that are faster for single-thread operations in a hosted environment.
You can easily change the instance level of Azure Analysis Services within the same tier. The limitation is that you cannot downgrade a Standard tier to a Basic tier, although you can upgrade a Basic tier to a Standard tier. Besides that limitation, you can make any change between the options available within the same tier. Moving from a Standard tier to a Basic tier is possible by backing up the database and then restoring it to a new instance created in the Basic tier. This is possible only if the database does not use any feature reserved to the Standard tier (perspectives, partitions, and DirectQuery).
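As a sketch, the backup and restore steps can be expressed as TMSL commands, executed against the source instance and the new Basic-tier instance respectively (the database and file names below are placeholders):

```json
{
  "backup": {
    "database": "SalesModel",
    "file": "SalesModel.abf",
    "allowOverwrite": true
  }
}
```

Then, on the destination instance:

```json
{
  "restore": {
    "database": "SalesModel",
    "file": "SalesModel.abf",
    "allowOverwrite": true
  }
}
```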
Keep in mind that every time you change the tier selection, you lose any existing connection. You should do that only when users are not working and when there are no processing operations running.
Azure Analysis Services provides great flexibility in choosing service tiers, and delivers good performance even compared to the alternatives you might have on-premises. Hosting the same service on other virtual machines available in a hosted environment such as Microsoft Azure will hardly provide better performance. You would need a very careful hardware selection to obtain better numbers on your own server.
Cost considerations are beyond the goal of this article. The evaluation is complex and requires considering the total cost of ownership, plus many other variables such as the number of hours of availability required in a month. For example, you might cut the cost by 70% by running the service only during business hours. Cost optimization through varying the service availability is still largely unexplored territory, and it deserves more study.
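As a back-of-the-envelope check of that figure, assume a hypothetical schedule of 10 hours a day, 5 days a week:

```python
# Hypothetical availability schedule: 10 hours/day, 5 days/week.
full_week_hours = 7 * 24   # 168 hours for an always-on service
business_hours = 5 * 10    # 50 hours if paused outside business hours

savings = 1 - business_hours / full_week_hours
print(f"savings: {savings:.0%}")  # savings: 70%
```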