PASS Summit Announcements: Azure Analysis Services
Microsoft usually has some interesting announcements at the PASS Summit, and this year was no exception. I’m writing a set of blogs covering the major announcements. Perhaps the biggest one is the introduction of the Azure Analysis Services Public Preview (OLAP).
This is a PaaS for SQL Server Analysis Services (SSAS). So it’s PaaS SSAS 🙂 Read the official announcement.
It is based on the analytics engine in SSAS. For those not familiar with SSAS, it is an OLAP engine and BI modeling platform that enables developers and BI professionals to create BI Semantic Models that can power highly interactive and rich analytical experiences in BI tools (such as Power BI and Excel) and custom applications. It allows for much faster query and report processing compared to going directly against a database or data warehouse. It also creates a semantic model over the raw data to make it much easier for business users to explore the data.
Some of the main points:
- Developers can create a server in seconds, choosing from the Developer (D1) or Standard (S1, S2, S4) service tiers. Each tier comes with fixed capacity in terms of query processing units and model cache: the Developer tier (D1) supports up to a 3GB model cache, and the largest tier (S4) supports up to 100GB
- The Standard tiers offer dedicated capacity for predictable performance and are recommended for production workloads. The Developer tier is recommended for proof-of-concept, development, and test workloads
- Administrators can pause and resume the server at any time, and no charges are incurred while the server is paused (see the pause/resume sketch after this list). The ability to scale a server up and down between the Standard tiers is on the roadmap but not available currently
- Developers can use Azure Active Directory to manage user identity and role-based security for their models
- The service is currently available in the South Central US and West Europe regions. More regions will be added during the preview
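As a concrete illustration of the pause/resume point above, here is a minimal sketch that suspends and resumes a server through the Azure Resource Manager REST API. The subscription ID, resource group, server name, and bearer token are hypothetical placeholders, and the api-version shown reflects the preview timeframe, so treat the details as assumptions rather than a reference:

```python
# A minimal sketch of pausing and resuming an Azure Analysis Services server
# via the Azure Resource Manager REST API. Subscription, resource group,
# server name, and token are hypothetical placeholders.
import requests

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical
RESOURCE_GROUP = "my-rg"                                   # hypothetical
SERVER_NAME = "myssasserver"                               # hypothetical
TOKEN = "<AAD bearer token>"                               # obtain via Azure AD

BASE = (
    "https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}"
    "/providers/Microsoft.AnalysisServices/servers/{srv}"
).format(sub=SUBSCRIPTION_ID, rg=RESOURCE_GROUP, srv=SERVER_NAME)

HEADERS = {"Authorization": "Bearer " + TOKEN}
PARAMS = {"api-version": "2016-05-16"}  # preview-era api-version (assumption)

def pause_server():
    # POST .../suspend stops the server; no charges accrue while paused
    r = requests.post(BASE + "/suspend", headers=HEADERS, params=PARAMS)
    r.raise_for_status()

def resume_server():
    # POST .../resume brings the server back online at the same tier
    r = requests.post(BASE + "/resume", headers=HEADERS, params=PARAMS)
    r.raise_for_status()
```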
Similarities with SSAS:
- Developers can use SQL Server Data Tools (SSDT) in Visual Studio for creating models and deploying them to the service. Administrators can manage the models using SQL Server Management Studio (SSMS) and investigate issues using SQL Server Profiler
- Business users can consume the models in any major BI tool. Supported Microsoft tools include Power BI, Excel, and SQL Server Reporting Services. Other MDX-compliant BI tools can also be used after downloading and installing the latest drivers (see the connection sketch after this list)
- The service currently supports tabular models (compatibility level 1200 only). Support for multidimensional models will be considered for a future release, based on customer demand
- Models can consume data from a variety of sources in Azure (e.g. Azure SQL Database, Azure SQL Data Warehouse) and on-premises (e.g. SQL Server, Oracle, Teradata). Access to on-premises sources is made available through the on-premises data gateway
- Models can be cached in a highly optimized in-memory engine to provide fast responses to interactive BI tools. Alternatively, models can query the source directly using DirectQuery, thereby leveraging the performance and scalability of the underlying database or big data engine
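To make the client-tool bullet above concrete, here is a minimal sketch of querying a deployed tabular model from Python using pyadomd, a third-party wrapper over the ADOMD.NET client library. Nothing here comes from the announcement: the server URI, model name, credentials, and DAX query are all hypothetical placeholders, and the library assumes Windows with pythonnet and the ADOMD.NET client installed.

```python
# A hedged sketch: querying an Azure Analysis Services tabular model with
# pyadomd (third-party ADOMD.NET wrapper). Server, catalog, credentials,
# and the DAX query are hypothetical placeholders.
from sys import path
path.append(r"C:\Program Files\Microsoft.NET\ADOMD.NET\130")  # ADOMD.NET install path (assumption)
from pyadomd import Pyadomd

CONN_STR = (
    "Provider=MSOLAP;"
    "Data Source=asazure://westeurope.asazure.windows.net/myssasserver;"  # hypothetical server
    "Catalog=AdventureWorks;"                                             # hypothetical model
    "User ID=user@contoso.com;Password=<password>;"                       # Azure AD credentials
)

# A simple DAX query against a hypothetical Sales table in the model
DAX = """
EVALUATE
SUMMARIZECOLUMNS('Date'[Year], "Total Sales", SUM(Sales[Amount]))
"""

with Pyadomd(CONN_STR) as conn:
    with conn.cursor().execute(DAX) as cur:
        for row in cur.fetchall():
            print(row)
```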
Check out the pricing, the documentation, tutorial videos, and the top-rated feature requests.
Get started with the Azure Analysis Services preview by provisioning a resource in the Azure Portal (or via Azure Resource Manager templates) and pointing your Visual Studio project at the new server name.
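For those who prefer scripting the Portal step, here is a minimal sketch of the same provisioning call made directly against the Azure Resource Manager REST endpoint (the equivalent of deploying a small ARM template). The subscription ID, resource group, server name, administrator identity, and token are hypothetical placeholders:

```python
# A minimal sketch of provisioning an Azure Analysis Services server with a
# raw ARM REST call. All names and the token are hypothetical placeholders.
import requests

URL = (
    "https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}"
    "/providers/Microsoft.AnalysisServices/servers/{srv}"
).format(sub="00000000-0000-0000-0000-000000000000",  # hypothetical
         rg="my-rg", srv="myssasserver")

body = {
    "location": "westeurope",                   # one of the preview regions
    "sku": {"name": "S1", "tier": "Standard"},  # see the service tiers above
    "properties": {
        "asAdministrators": {"members": ["admin@contoso.com"]}  # AAD identities
    },
}

r = requests.put(URL, json=body,
                 headers={"Authorization": "Bearer <AAD token>"},
                 params={"api-version": "2016-05-16"})
r.raise_for_status()
print(r.json())  # the new server's metadata, including its asazure:// name
```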
More info:
Learn more about Azure Analysis Services
First Thoughts On Azure Analysis Services
Creating your first data model in Azure Analysis Services
Why a Semantic Layer Like Azure Analysis Services is Relevant (Part 1)
James, this is really great news – sort of.
I’d really like to see multi-dimensional support via ROLAP against an Azure SQL Data Warehouse. Tabular is just not that great – sorry.
A hybrid, hierarchy-aware ROLAP/in-memory approach is the way to go. That way you can support as many records as the DBMS will allow without the secondary loading, and users effectively choose what data they want to cache as they use it (in many cases users will run similar queries). Hierarchical awareness means that partial aggregations can be combined at higher levels for faster summary queries. The main drawback is that performance can be inconsistent, as the first user to run a particular query has to wait for it to be read from disk, but that can be mitigated to some extent by running daily reports to warm the cache.
Cognos has a hybrid solution with its Dynamic Cubes offering, and I believe they borrowed the idea from MicroStrategy.