SQL Analysis Services OLAP TIME dimension - sql-server

Hi,
I'm struggling with adding a time dimension to an OLAP cube.
I can get everything in the cube to work except the date.
In my data source view I have a datetime column.
I go via Dimensions -> New Dimension -> Generate time dimension on the server.
I end up with a nice hierarchical time dimension (Date-Month-Quarter-Year).
Later I add this dimension to the cube and define a regular relationship with the datetime column from the data source view (the same table that holds the fact data).
When I try to deploy the cube, I get this error:
Errors in the OLAP storage engine: The attribute key cannot be found when processing: Table: 'table_name', Column: 'registration_date', Value: '3/29/2007 3:00:00 PM'. The attribute is 'Date'.
Maybe I'm missing something? Every manual I can find talks about a calendar table already created in the source database. There are plenty of scripts that will create a calendar table for you. But why should I? Isn't "Generate time dimension on the server" meant for exactly that?

I would guess that every date value in your fact table needs to exist as a key in the time dimension. The server-generated dimension holds dates only, so the time portion of a value like '3/29/2007 3:00:00 PM' prevents the key lookup from matching. Perhaps strip the time, or create a calculated field (named calculation) in the SSAS data source view. More experienced people may have better answers; I've only made one cube.
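For example, a named calculation in the data source view can floor the datetime to midnight so the fact column matches the dimension's date keys. A minimal sketch, assuming a fact table dbo.Registrations with a registration_date column (both names are illustrative), shown here as a view you could point the DSV at instead of the raw table:

    -- registration_day floors the datetime to midnight so it matches
    -- the generated Date keys; use it as the dimension key column.
    CREATE VIEW dbo.vwFactRegistrations
    AS
    SELECT r.*,
           CAST(FLOOR(CAST(r.registration_date AS float)) AS datetime) AS registration_day
    FROM dbo.Registrations AS r;

The FLOOR-over-float trick works on SQL Server 2005, which predates the date type introduced in 2008.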

Related

SSAS Tabular table always shows in Excel

In my SSAS tabular model I have a calendar dimension, and a wave dimension for half-year data only. The data flows to these tables as such:
Fact Table ---> Wave Dim <---> Calendar
No matter what I do I cannot hide the Wave table from Excel users. The table shows as hidden in Visual Studio, but Excel still displays the table, with no fields in it. I have tried deleting the table and reloading it, to no avail. My assumption is that it has to do with the way it's connected to the Calendar dimension, but I can't seem to find anything on my issue. Any help would be much appreciated.
This is a tough one to answer without seeing your model.bim file. Based on your information I have two guesses at the issue, though it may very well be neither:
1) It sounds like you've marked all fields in the Wave table as hidden, but not the Wave table itself. Could this be the reason?
2) Perhaps you are using perspectives, and only hid the Wave table in a perspective rather than in the Model (default perspective)?
The relationships in your model should have no impact on whether a table is displayed in client tools or not.
Feel free to upload your model.bim file if the above does not help.

SSAS - Empty Generated Time Dimension Table

I'm facing a strange problem with SSAS.
I created a time dimension using the Dimension Wizard.
I chose the option to generate the time table in my data source.
It works fine: I can browse my cube and filter on date.
Now I would like to browse the table itself to check the values, but it comes up empty in SSMS: selecting the TOP 100 rows returns 0 rows.
Due to some strange glitch, the generated table really did contain no rows when browsed in SQL Server.
I deleted and recreated the time dimension (generated in the data source) using the Dimension Wizard in SSDT, and this time it worked.
To customize the table, I created a view over the dimension table and made my changes there, then pointed the data source view in SSDT at the new view instead of the table.
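As a sketch of that last step, assuming the wizard generated a table named dbo.[Time] with a PK_Date key column (the actual names depend on your setup):

    -- Wrap the generated time table in a view, add custom columns here,
    -- then swap the table for this view in the SSDT data source view.
    -- (The weekend test assumes the default US DATEFIRST setting.)
    CREATE VIEW dbo.vwTimeDim
    AS
    SELECT t.*,
           CASE WHEN DATEPART(weekday, t.PK_Date) IN (1, 7)
                THEN 'Weekend' ELSE 'Weekday'
           END AS Day_Type
    FROM dbo.[Time] AS t;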

Recommended way of adding daily data to database

I receive new data files every day. Right now, I'm building the database with all the required tables to import the data and perform the required calculations.
Should I just append each new day's data to my current tables? Each file contains a date column, which would allow for a "WHERE" query in the future if I need to analyze data for one particular day. Or should I be creating a new set of tables for every day?
I'm new to database design (coming from Excel). I will be using SQL Server for this.
Assuming that the structure of the data being received is the same, you should only need one set of tables rather than creating new tables each day.
I'd recommend storing the value of the date column from your incoming data, and also having a CreateDate column in your tables with a default of GETDATE(), so it automatically gets populated with the current date when the row is inserted.
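A minimal sketch of such a table (DailyData, SourceDate, and Amount are invented names):

    -- One table for all days: SourceDate comes from the file,
    -- CreateDate records when the row was loaded.
    CREATE TABLE dbo.DailyData (
        DailyDataId int IDENTITY(1,1) NOT NULL PRIMARY KEY,
        SourceDate  datetime       NOT NULL,  -- the date column from the incoming file
        Amount      decimal(18, 2) NULL,      -- stand-in for your real measures
        CreateDate  datetime       NOT NULL DEFAULT GETDATE()
    );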
You may also want to have another column to store the data filename that the row was imported from, but if you're already storing the value of the date column and the date that the row was inserted, this shouldn't really be necessary.
In the past, when doing this type of activity with a custom data loader application, I've also found it useful to create log files for success/error/warning messages, including some kind of unique key for the source data and the target database; e.g. if coming from an Excel file into a database table, you could store the row index from Excel and the primary key of the inserted row. This helps when tracking down problems later on.
You might also want to have a look at SSIS (SQL Server Integration Services), SQL Server's tool for ETL activities.
Yes, append each day's data to the tables; one set of tables for all data.
Yes, use a date column to identify the day the data was loaded.
Maybe also have another table with a date column and a large text column: the date holds the load date, and the text column holds the file you imported.
Good question. You most definitely should have a single set of tables and append the data daily. Consider this: if you create a new set of tables each day, what would, say, a monthly report query look like? A quarterly report query? It would be a mess, with UNIONs and JOINs all over the place.
A single set of tables with a WHERE clause makes the querying and reporting manageable.
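To make that concrete (reusing the invented dbo.DailyData table from above), a monthly report against a single table is just a date-range filter:

    -- One table, one WHERE clause; no UNION over daily tables needed.
    SELECT SourceDate, SUM(Amount) AS DailyTotal
    FROM dbo.DailyData
    WHERE SourceDate >= '20130601'
      AND SourceDate <  '20130701'
    GROUP BY SourceDate
    ORDER BY SourceDate;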
You might do a little reading on relational database theory. Wikipedia is a good place to start. The basics are pretty straightforward if you have the knack for it.
I would load the data into a staging table regardless, and append to the main tables from there, as sketched below. Once a week I would then refresh all the data in the main table to ensure it stays correct as per the source.
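A minimal version of that staging pattern, again with invented names (dbo.DailyDataStage mirrors the columns of dbo.DailyData):

    -- Bulk-load the file into the stage table first, then append only
    -- rows that pass basic validation, and clear the stage for tomorrow.
    INSERT INTO dbo.DailyData (SourceDate, Amount)
    SELECT s.SourceDate, s.Amount
    FROM dbo.DailyDataStage AS s
    WHERE s.SourceDate IS NOT NULL;

    TRUNCATE TABLE dbo.DailyDataStage;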
Marcus

SQLBulkCopy and Dates (1/1/1753)

I've got an application which has been working fine for quite a while, but there is an annoying item that continues to get in the way on occasion.
Let's say that I use an object such as OracleDataReader or MySqlDataReader to pass the data to the SqlBulkCopy object for insert. Let's assume that all the columns map just fine and, for the most part, it all works well.
Granted, I don't have control over the source application or database (which is either MySQL or Oracle). So some goof goes into a different application and puts a date of 5/31/0210 on the invoice table. He really meant to put in 5/31/2010, but the application he's using doesn't validate the data very tightly, and the Oracle database accepts it. For all intents and purposes, 5/31/0210 is a valid date for the Oracle db. It might be stupid in terms of data entry, but it is what it is at this point.
Now our OracleDataReader comes along and transfers this invoice table over to SQL Server via SqlBulkCopy. It passes the data to a perfectly matched table with the right column names and data types. You can see what is going to happen. This date of 5/31/0210 from Oracle is not accepted by the SQL Server engine, as the DATETIME type only allows dates from 1/1/1753 to 12/31/9999.
When it encounters this record, it simply fails and gives an overflow error. It doesn't skip the record, it kills the feed. So if it happens a thousand records in on a million record table, you don't get the remaining 999,000 records.
Is there any way to get around this issue so that the feed will continue?
Ideally, I'd like to move the receiving SQL Server DB to 2008 and use DATETIME2, which would allow for these goofy dates, but unfortunately not all my clients are ready to move to this version yet, so I'm stuck with DATETIME in SQL 2000/2005/2008.
Any ideas on how to get around this without changing the SQL? Ideally, I wouldn't mind if it just skipped the record. I know I could handle this in the SQL for the data reader, but that would be extremely complicated when you have twenty date fields in a single query. It would be a maintenance nightmare.
Any thoughts would be appreciated.
One option would be to change the datetime column type to varchar, then add a derived column that converts the string to datetime. The trick is to use a function in the derived column that validates the date and substitutes an arbitrary datetime if the conversion would fail. If you do heavy date comparisons, persist the computed column and/or index it.
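A minimal sketch of that idea (table and column names are invented; note that ISDATE() is nondeterministic, so this particular version of the computed column could not be PERSISTED):

    -- Land the suspect Oracle date as a string, then derive a clean datetime.
    -- ISDATE() returns 0 for anything DATETIME cannot hold (e.g. '0210-05-31'),
    -- so such rows get an arbitrary placeholder date instead of failing the load.
    CREATE TABLE dbo.InvoiceStage (
        InvoiceId      int         NOT NULL,
        InvoiceDateRaw varchar(30) NULL,
        InvoiceDate AS (
            CASE WHEN ISDATE(InvoiceDateRaw) = 1
                 THEN CONVERT(datetime, InvoiceDateRaw)
                 ELSE CONVERT(datetime, '19000101')
            END
        )
    );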
I say all of this under the impression that SqlBulkCopy cannot do transforms; maybe it can, and hopefully someone will chime in with a way to do so.
SSIS would be great in this situation, as you could do the transform and also get the performance benefits of the bulk update lock.

Adding a new dimension based on a key in fact table linked to one of the dimension tables

I have a fact table that holds all date & time attributes as keys linking to actual DATE and TIME dimensions.
When I create a cube on top of it using SSAS 2005, these datetime attributes are split into individual dimensions for the cube, which is OK.
The problem is that when I add a new datetime attribute to the fact table, my cube doesn't accept it and will not create a new datetime dimension like the other ones, unless I recreate the cube from scratch.
Can anyone please suggest how I can add this new attribute as a separate dimension, without having to recreate the cube?
I'm struggling to understand your issue.
It sounds as if you are trying to add a new datetime column (fact), referencing the appropriate dimension attribute(s), to the fact table. If so, this changes the structure of the cube and so requires that the cube be re-processed.
To clarify the terminology: a dimension contains attributes; a fact table contains facts, not attributes.
The following reference may be of use.
http://msdn.microsoft.com/en-us/library/aa905984(SQL.80).aspx
Re: Comments
Any structural changes need to be applied/registered within the Data Source View (DSV) in Business Intelligence Development Studio (BIDS) prior to processing the cube. Clicking the refresh button on the DSV should prompt you with an option to apply any discovered changes to your tables. Also, should any of your additions/modifications affect the underlying tables of dimensions, you may also need to add the attributes in question to the appropriate dimension .dim file prior to re-processing the cube.
Hope this makes sense.
The problem usually comes from the Unknown Member and Null Processing options, together with the snowflake schema if you have one in your cube. I figured out what the problem actually was.
In a case like the one mentioned, SSAS doesn't pick up the structural changes by itself when you refresh the data source view. In my case, since they were date & time dimensions, I had to add the new cube dimensions manually and set their Null Processing option correctly (in my case UnknownMember, not Automatic).
Since it can be a tad tedious to make these changes for every new column added to the underlying fact table, you can try updating the XMLA script with a carefully crafted Find & Replace.
