Best solution for generating flat reports - sql-server

I have an SSAS Multidimensional cube (SQL Server 2019) of 350 GB with 10 years of data retention.
I noticed that users often use the cube to extract data at the leaf level (Excel tables with multiple columns).
I think that SSAS is not suited for producing these types of reports.
What is the best tool/solution to let users generate flat reports? I know that SQL is good for that, but the users aren't SQL developers.
Could a Power BI model with DirectQuery be more efficient than the actual SSAS cube?

SSAS Multidimensional is exceptionally bad at generating large flattened result sets; almost anything will be better. A Power BI or SSAS Tabular DirectQuery model is much better, though still not ideal for very large extracts; be sure to extract through DAX, not MDX. A Paginated Report exported to CSV or Excel is a good choice too.
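For the relational side of such an extract (what a Paginated Report or a DirectQuery model ultimately pushes down to the engine), the query is just a plain leaf-level join over the warehouse. A minimal sketch, assuming hypothetical dbo.FactSales and dimension tables (all names are illustrative, not from the original setup):

    -- Leaf-level flat extract of the kind a Paginated Report could
    -- expose to users as a parameterized CSV/Excel export.
    SELECT
        d.CalendarDate,
        p.ProductName,
        s.StoreName,
        f.Quantity,
        f.SalesAmount
    FROM dbo.FactSales AS f
    JOIN dbo.DimDate    AS d ON d.DateKey    = f.DateKey
    JOIN dbo.DimProduct AS p ON p.ProductKey = f.ProductKey
    JOIN dbo.DimStore   AS s ON s.StoreKey   = f.StoreKey
    WHERE d.CalendarDate >= '20230101';  -- let a report parameter narrow the extract

Users then pick filter values in the report UI instead of writing SQL themselves.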

Related

Long-running view in ssas-tabular

I have a SQL Server database where we have created some views based on dim and fact tables. I need to build an SSAS Tabular model based on my tables and views, but one of the views takes 1.5 hours to run as a plain SQL query (in SSMS). I need to use this same view to build my SSAS Tabular model, and 1.5 hours is not acceptable. The view is made up of more than 10 table joins and a lot of WHERE conditions.
1) Can I bring all the tables used in this view into my SSAS Tabular model? I am not sure how I would join them all and apply the WHERE clauses inside SSAS to build something similar to my view. Is that possible? If yes, how?
or
2) I build the SSAS model from that view one time; if I then want to load the data incrementally every day, what is the best way to do that?
The best option is to set up a proper ETL process. That is:
Extract the tables from your source SQL database into a new SQL database that you control.
Transform the data into a star schema.
Load the data from the star schema into SSAS.
On SQL Server, the most common approach is to use SSIS packages for data extraction, movement, and orchestration, and SQL Server Agent jobs for scheduling.
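As a minimal sketch of the Transform step for one dimension, assuming hypothetical stg.Customer (staging extract) and dbo.DimCustomer (star schema) tables:

    -- Type 1 upsert of a dimension from staging; all names are illustrative.
    MERGE dbo.DimCustomer AS tgt
    USING stg.Customer    AS src
        ON tgt.CustomerBusinessKey = src.CustomerID
    WHEN MATCHED THEN
        UPDATE SET tgt.CustomerName = src.CustomerName,
                   tgt.City         = src.City
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (CustomerBusinessKey, CustomerName, City)
        VALUES (src.CustomerID, src.CustomerName, src.City);

An SSIS package (or a stored procedure it calls) would run one such statement per dimension before the facts are loaded.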
To answer your questions:
Yes, it is certainly possible to bring all of the tables directly from your source system into your tabular model, but please don't do this! You will only create problems for yourself later on when writing DAX calculations.
Incremental loading is something you decide per table imported into your tabular model. Again, this is much easier with a proper star schema: you would typically run full processing on all your dimension tables and do incremental processing only on the largest fact tables.
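On the relational side, the usual incremental pattern is watermark-based: only rows newer than what the fact table already holds are moved. A minimal sketch, assuming a hypothetical dbo.FactSales fed from stg.Sales with an ever-increasing LoadDate column (names are illustrative):

    -- Load only rows newer than the current watermark.
    DECLARE @watermark datetime2 =
        (SELECT ISNULL(MAX(LoadDate), '19000101') FROM dbo.FactSales);

    INSERT INTO dbo.FactSales (DateKey, ProductKey, Quantity, SalesAmount, LoadDate)
    SELECT s.DateKey, s.ProductKey, s.Quantity, s.SalesAmount, s.LoadDate
    FROM stg.Sales AS s
    WHERE s.LoadDate > @watermark;

On the SSAS side you would then process just the newly loaded slice (e.g. ProcessAdd on the matching partition) instead of reprocessing the whole table.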

Power Pivot vs SQL connection: is there any difference in efficiency?

I use numerous SQL data connections to import data into Excel for use in pivot tables/slicers. Some of these take a while to update and display. Is there any advantage in swapping some of these larger queries to Power Pivot imports? Is Power Pivot more efficient, or is it essentially doing the same job as a SQL data connection?
Really, it depends on the setup.
If the pivot table is using the SQL database directly, i.e. a change in the slicers results in a SQL statement issued to the database server, then yes, Power Pivot would be more efficient. This is because the pivot table would instead query the Power Pivot data model, which is a static snapshot of the data; only when the Power Pivot data model is refreshed does it query the SQL back-end.
The main advantages of Power Pivot are the following:
Anything involving the pivot table hits the Power Pivot data model, which is local processing on the computer running Excel.
Loading data directly into the Power Pivot data model lets you bypass the maximum number of rows in an Excel sheet (1,048,576).
In addition, the data within the data model is typically compressed by a factor of around 10x, with data sets whose values repeat frequently compressing better. Row IDs, being unique, compress poorly.
As a real-life example, I managed to load 4.8 GB of CSV files of a retailer's by-store, by-item, by-week POS data (34M rows) using Excel 2016 on my low-powered work laptop. Since the data was fairly repetitive, it ended up as a 280 MB Excel file.
The Excel version, Power BI Desktop, the Power BI web service, and SSAS Tabular models all use the same calculation language (DAX) and design. In fact, an Excel Power Pivot model can be loaded directly into Power BI Desktop and then used to build dashboards.
It allows complex math to be performed within the pivot table.
Downsides:
The compression of the data model means that someone could walk away with a lot of data in a small file.
It may not be as useful if people are looking for real-time numbers straight from the source system.
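Because unique, high-cardinality columns compress poorly, it also helps to shape the import query so the model only carries what the pivot tables actually need. A minimal sketch, assuming a hypothetical dbo.Sales source table (names are illustrative):

    -- Import a trimmed, pre-aggregated projection instead of raw rows:
    -- dropping unique keys (e.g. a RowID) and aggregating to the grain
    -- you actually analyze keeps the Power Pivot model small.
    SELECT
        StoreID,
        ItemID,
        DATEFROMPARTS(YEAR(SaleDate), MONTH(SaleDate), 1) AS SaleMonth,
        SUM(Quantity)    AS Quantity,
        SUM(SalesAmount) AS SalesAmount
    FROM dbo.Sales
    GROUP BY StoreID, ItemID,
             DATEFROMPARTS(YEAR(SaleDate), MONTH(SaleDate), 1);

Pasting a query like this into the Power Pivot import dialog, instead of selecting the whole table, is often all it takes.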

How to create a SQL data warehouse in an SSAS Tabular model with denormalized tables?

We have 4 to 5 denormalized tables generated from SAP. How can we create a tabular data warehouse with denormalized tables? What is the recommended warehousing technology? All tables are pushed into SQL Server by SSIS, which processes flat files from SAP RFC reports.
It doesn't sound like you are using Tabular for any pre-aggregation, but rather as a transport mechanism to get the data to Power BI. You can model these tables directly in Power BI and keep all of the benefits you have to date, and you would gain additional ones as well: Power BI would then allow users to create their own measures, enhance the model with other tables, etc. If the issue is that you don't want people accessing this SQL instance directly, you have a lot of options there as well.

Represent an Oracle SQL cube with MicroStrategy

Hi, I have several cube tables in an Oracle 12c database. How can I represent them with MicroStrategy? MicroStrategy's Intelligent Cube object doesn't represent these cubes correctly, and it saves the SQL results in memory. I need to execute SQL against the cube tables in real time.
A MicroStrategy cube is an in-memory copy of the results of a SQL query executed against your data warehouse. It's not intended to be a representation of Oracle cubes.
I assume both these "cubes" organize data in a way that is easy and fast to use for dimensional queries, but I don't think you can import an Oracle cube directly into MicroStrategy IServer memory.
I'm not an expert with Oracle cubes, but I think you need to map dimensions and facts as you would with any other Oracle table. In the end, an Oracle cube is a tool that Oracle provides to organize your data (once dimensions and metrics are defined) and speed up your queries, but you still need to query it: MicroStrategy will write your queries, but MicroStrategy also needs to be aware of your dimensions and metrics (MicroStrategy facts).
In the end, a cube speeds up your queries by organizing and aggregating your data, and it seems to me that you have already achieved this with your Oracle cube. A MicroStrategy cube is an in-memory structure that additionally saves the time required by a query against the database.
If your requirement is to execute SQL against your database at all times, then you need to disable caching on the MicroStrategy side (this can be done on a report-by-report basis or at the project level).
MicroStrategy Intelligent Cubes aren't going to be a good fit for you here, because they explicitly cache data in order to decrease response time and reduce load on your source database.

Is a single table a bad starting point for OLAP cubes (SQL Server Analysis Services)?

I'm going to use a single table to aggregate historical data about our (very big) virtual infrastructure. The table will be composed of 15 to 30 fields, and I estimate 500 to 1000 records a day.
Why a single table? A couple of reasons:
Data is extracted to CSV using PowerShell scripts, and a bulk load into a single table is very easy and fast.
I will use the table to connect Excel and report through pivot tables, and for that a single table is perfect (otherwise I would have to create views).
Now my question:
If I'm planning to build cubes on top of this table in the future, is the single-table choice a bad solution?
Do cubes rely on relational databases, or can they easily be built on top of single-table databases?
Thanks for any suggestions.
I can't tell you specifically about SQL Server Analysis Services, but for OLAP you typically use denormalized and aggregated data. That means fewer tables than in a normal relational scenario. And since your data volume is not really big (at most ~365k rows/year, which is small even for OLAP), I don't see any problem with using a single table for your data.
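For what it's worth, a cube can happily source both its facts and its dimensions from views over one wide table, so the single-table start doesn't box you in. A minimal sketch, assuming a hypothetical dbo.InfraHistory table and CSV path (both illustrative):

    -- Single wide table, bulk-loaded daily from the PowerShell CSV extract.
    CREATE TABLE dbo.InfraHistory (
        SampleDate  date        NOT NULL,
        HostName    varchar(64) NOT NULL,
        ClusterName varchar(64) NOT NULL,
        CpuUsedMhz  int         NOT NULL,
        MemUsedMB   int         NOT NULL
    );
    GO
    BULK INSERT dbo.InfraHistory
    FROM 'C:\extracts\infra_history.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);
    GO
    -- A view like this can later play the role of a dimension table
    -- for a cube, without changing how the data is loaded.
    CREATE VIEW dbo.DimHost AS
        SELECT DISTINCT HostName, ClusterName
        FROM dbo.InfraHistory;
    GO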
