Attribute Key Not Found Error SSAS Cube Rebuild - sql-server

I am rebuilding my SSAS cube and encountering the following error:
Errors in the OLAP storage engine: The attribute key cannot be found when processing: Table: 'MyFactTable', Column: 'MyKey', Value: '900763'. The attribute is 'Description'. Errors in the OLAP storage engine: The attribute key was converted to an unknown member because the attribute key was not found. Attribute Description of Dimension: Item from Database: OTD DATAMART, Cube: Data Mart, Measure Group: Transaction Fact, Partition: Transaction Fact, Record: 22438443.
I realize this could mean the key exists in my fact table but not in the dimension, so I ran a Process Update on the dimension first and then processed the cube, but the error keeps coming back. I can confirm that I can see the key and its entry in the dimension.
Any suggestions?

Try removing the dimension from the cube, then adding it back into the cube - effectively resetting the Dimension Usage. Or try changing the key of the dimension to a different field and then changing it back again - basically anything that jogs the background key mappings into being reset. Then reprocess the database in full (if you can; otherwise process the dimension and then the cube).

After adding the dimension back to the cube, remove all of the dimension key references to the measure groups on the Dimension Usage tab and try processing the cube, then add them back in. It is some disconnect on the keys, because the fact attributes all check out. Sometimes removing the dimension from the cube and reprocessing the cube before deleting the dimension on the server side, prior to re-adding it, does the trick.
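If the error persists after that, it can also be worth confirming at the relational level that every fact key really has a matching dimension row. A minimal T-SQL sketch, assuming the dimension table is called DimItem with a key column ItemKey (both names are placeholders, since the error message only shows the fact side):

    -- DimItem / ItemKey are hypothetical; substitute the actual dimension
    -- table and key column from your data source view.
    SELECT DISTINCT f.MyKey
    FROM dbo.MyFactTable AS f
    LEFT JOIN dbo.DimItem AS d
           ON d.ItemKey = f.MyKey
    WHERE d.ItemKey IS NULL;   -- any rows returned are fact keys with no dimension match

If this returns nothing, the mismatch may instead come from data type, collation, or trailing-space differences between the fact and dimension sources, since SSAS compares the keys exactly as it reads them from each table.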

Related

Star Schema from multiple source tables

I am struggling in figuring out how to create a star schema from multiple source tables. I work at a trading firm so the data is related to user trading activity. The issue I am having is that our datasets do not have primary ids for every field that could be a dimension. Instead, we usually relate our data together using the combination of date and account number. Here is an example of 3 source tables...
I would like to turn this into a star schema, something that looks like ...
Is my only option to denormalize my source tables into one wide table (joining trades to positions on account number and date, and joining the users table on account number), create keys for each dimension, and then re-normalize it into the star schema? Are star schemas ever built from multiple source tables?
Star schemas are almost always created from multiple source tables.
The normal process is:
Populate your dimension tables
Create a temporary/virtual fact record using your source data
Using this fact record, look up the relevant dimension keys (see the sketch after this list)
Write the actual fact record to your target fact table
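A minimal T-SQL sketch of the lookup-and-write steps, using hypothetical table and column names (DimDate, DimAccount, FactTrade and a SourceTrades table are assumptions loosely based on the trading example in the question):

    -- All table and column names here are hypothetical placeholders.
    INSERT INTO dbo.FactTrade (DateKey, AccountKey, Quantity, Price)
    SELECT
        dd.DateKey,          -- surrogate key looked up from the date dimension
        da.AccountKey,       -- surrogate key looked up from the account dimension
        t.Quantity,
        t.Price
    FROM dbo.SourceTrades AS t                                        -- the "virtual" fact rows
    JOIN dbo.DimDate    AS dd ON dd.FullDate      = t.TradeDate       -- key lookup
    JOIN dbo.DimAccount AS da ON da.AccountNumber = t.AccountNumber;  -- key lookup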
Data-warehousing is about query speed. The data-warehouse should not be concerned with data integrity. IT SHOULD NOT CLEAN OR CORRECT BAD DATA. It only needs to gather all the data together into a single record to present to the model for analysis. Denormalizing the data is how this is done.
In a star schema, dimensions do not know about each other and have no relationships with other dimensions. In a snowflake, dimensions are related to other dimensions. That is the primary difference between star and snowflake.
All the metadata options for events are rolled up into dimensions and used for slicing/filtering. All the measurable/calculation data for an event are in the event fact, along with a reference to the dimension(s) containing the relevant metadata. The Metadata/Dimension is reused across multiple fact records.
Based on the limited example you've provided, I'd suggest you research degenerate dimensions and junk dimensions. Your Trade and Position data may need to be turned into a fact and a dimension (degenerate), and some of your flag attributes may be best placed into a junk dimension.
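To illustrate the junk-dimension idea, the flag-style attributes could be collected into one small dimension whose rows are the distinct combinations of flags, so the fact only carries a single key. A sketch with hypothetical names:

    -- Hypothetical junk dimension: one row per distinct combination of trade flags.
    CREATE TABLE dbo.DimTradeFlags (
        TradeFlagsKey int IDENTITY(1,1) PRIMARY KEY,
        IsShortSale   bit NOT NULL,
        IsBlockTrade  bit NOT NULL,
        IsCancelled   bit NOT NULL
    );

    INSERT INTO dbo.DimTradeFlags (IsShortSale, IsBlockTrade, IsCancelled)
    SELECT DISTINCT IsShortSale, IsBlockTrade, IsCancelled
    FROM dbo.SourceTrades;   -- hypothetical source; the fact then stores TradeFlagsKey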
You should also make sure your dimension relationships are unambiguous. You should not have multiple paths to a dimension (account number: trade -> position -> user and trade -> user), as that will cause inconsistent results when querying, depending on which relationship you traverse.

Upload data in star data warehouse

I recently built a simple data warehouse with 2 dimension tables and 1 fact table.
The first dimension holds the user input: "queryId, dna sequence, dna database name, other parameters".
The second dimension holds the database description: "databaseId, other parameters".
The fact table will hold the result of the search: "queryId, databaseId, hits found, other parameters describing the hit".
Now, where should I load the data (the result)? Into the fact table, or into the dimension tables?
And where should I load "queryId" and "databaseId", since they appear both in the dimensions and in the fact? Sorry for this question, but I am new to DW.
Thanks a lot.
You have to create an ETL that loads like this (this assumes we rebuild the DW on each import; the steps are different for incremental loading):
Truncate fact table
Truncate dimensions
Populate dimensions, (your keys should be in the dimensions)
Populate the fact with your dimension keys and measures
Then, when querying, you'll join your dimensions onto your fact via the keys.
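For example, a query against this model might look like the following sketch, where DimQuery, DimDatabase and FactSearchResult are hypothetical names standing in for the two dimensions and the fact table described above:

    -- All object and column names are assumptions based on the description above.
    SELECT
        dq.DnaSequence,
        db.DatabaseName,
        SUM(f.HitsFound) AS TotalHits
    FROM dbo.FactSearchResult AS f
    JOIN dbo.DimQuery    AS dq ON dq.QueryId    = f.QueryId
    JOIN dbo.DimDatabase AS db ON db.DatabaseId = f.DatabaseId
    GROUP BY dq.DnaSequence, db.DatabaseName;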
Neither.
You UPLOAD data into staging tables. Those are created for optimal upload speed. Staging tables may be flat, may not be complete, and may require joining with other tables.
Then you use a loading process to load them from staging into the data warehouse.
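A minimal sketch of that staging pattern, with hypothetical names (Stg_SearchResult as the flat upload target and DimDatabase as one of the dimensions):

    -- Hypothetical flat staging table: loaded as-is for upload speed, then reshaped.
    CREATE TABLE dbo.Stg_SearchResult (
        QueryId        int,
        DatabaseId     int,
        HitsFound      int,
        OtherParameter nvarchar(200)
    );

    -- After the bulk upload, load the dimensions first (only keys not already present),
    -- then load the fact from the same staging rows.
    INSERT INTO dbo.DimDatabase (DatabaseId)
    SELECT DISTINCT s.DatabaseId
    FROM dbo.Stg_SearchResult AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.DimDatabase AS d WHERE d.DatabaseId = s.DatabaseId);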

Cannot insert duplicate key row in object with unique index. The duplicate key value. The statement has been terminated

I am new to OutSystems and SQL. I am trying to create a Bus Application where the entities are:
When I try to create a new rider with the same name but a different Route and Bus Id, I get:
Cannot insert duplicate key row in object 'dbo.OSUSR_6SL_RIDER' with unique index 'OSIDX_OSUSR_6SL_RIDER_4NAME'. The duplicate key value is (ABC).
The statement has been terminated.
When I check the Name field in the database table 'dbo.OSUSR_6SL_RIDER', it does not have a unique identifier set up. Can anybody please help me with this?
Open the Indexes tree under your table. You will find an Index named 'OSIDX_OSUSR_6SL_RIDER_4NAME'.
Script out that Index and you will see that it is a UNIQUE index on a "name" column that you are trying to create a duplicate value in.
You must either change that Index to include Route and Bus ID, or you must abandon your attempt to create a new row with a duplicate name.
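To see exactly which columns that index covers, a read-only query against the SQL Server catalog views along these lines can help (the index name is taken from the error message):

    -- Lists the columns covered by the index named in the error message.
    SELECT i.name AS index_name,
           c.name AS column_name,
           i.is_unique
    FROM sys.indexes AS i
    JOIN sys.index_columns AS ic
           ON ic.object_id = i.object_id
          AND ic.index_id  = i.index_id
    JOIN sys.columns AS c
           ON c.object_id = ic.object_id
          AND c.column_id = ic.column_id
    WHERE i.name = N'OSIDX_OSUSR_6SL_RIDER_4NAME';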
It looks like you are creating an exact duplicate, i.e. a record with the same Id value. The index name it refers to seems to be auto-generated by the database system, therefore it is not necessarily referring to the Name field. Have a look at your indexes and at the fields they contain. I wouldn't be surprised if OSIDX_OSUSR_6SL_RIDER_4NAME contains the Id field.
If you are using the OutSystems platform, all the database management is done/generated when you publish from Service Studio, so it isn’t advisable to manipulate the database directly: you’re setting yourself up for a lot of maintenance pain and inconsistencies between different environments.
Double-click on the Entity Rider and it’ll open the edit window of your entity. In the Indexes tab you can define and change your indexes (unique or not) and the tool will (re)generate all the needed SQL commands.
See OutSystems Platform 9 Help | Indexes Tab for more details.

SSAS 2012 - Dimension Modeling

I am working with a structure that results in a lot of single-attribute dimensions that require no hierarchy. Examples:
Status(Status Name)
Type(Type Name)
I get the following warning when compiling the project:
"Avoid having multiple dimensions containing a single attribute. Consider unifying them if possible."
A large number of single-attribute dimensions is workable for our users, but it causes a lot of clutter in the Excel pivot table: each dimension is listed along with its single attribute, which is redundant.
I would like to unify them as the warning suggests, so that I have a single dimension called 'Attributes' which contains status/type/etc., but I am unsure of the best way to do so. It doesn't make conceptual sense to me as a parent/child dimension.
Any suggestions?
I agree this is a worthwhile change. I would construct a view that brings together the required attributes. Often they are all available on the fact/measure group table/view, so you can just use the same source object (in your DSV) to construct the dimension.
The tricky part may be the dimension key. The most flexible key is a Fact Surrogate Key, e.g. a unique value per Fact row - in the future you can add any other fact-based attributes without affecting the key. However, this will not scale indefinitely - you are probably OK up to 1m rows at least.
Beyond that scale, I would concatenate the attributes to form the dimension key and deliver them to a new dimension table. I would normally do this back in the ETL layer. The identical concatenation logic must be used for both the dimension and fact.
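A sketch of the concatenated-key approach, using hypothetical object and column names (StatusName and TypeName standing in for the single attributes, FactTransactions for the measure-group source):

    -- Hypothetical view feeding a unified 'Attributes' dimension in the DSV.
    -- The identical concatenation must also be produced on the fact side as the key.
    CREATE VIEW dbo.vw_DimAttributes
    AS
    SELECT DISTINCT
        CONCAT(f.StatusName, '|', f.TypeName) AS AttributesKey,  -- concatenated dimension key
        f.StatusName,
        f.TypeName
    FROM dbo.FactTransactions AS f;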

OLAP Error while Processing

I am new to OLAP, and figured out how to make a cube and process it. However, when I play with it too much, I eventually come up against this error:
Errors in the OLAP storage engine: The attribute key cannot be found: Table: dbo_v_MYEntities, Column: uniqueId, Value: 2548. Errors in the OLAP storage engine: The record was skipped because the attribute key was not found. Attribute: Unique Id of Dimension: v MY Entities from Database: Test Cube New, Cube: MYdm MyApp - Views, Measure Group: v MY Entities, Partition: v MY Entities, Record: 2526.
It seems that some values get stuck and the cube expects them to be there. I know I can edit the error properties and stop it from throwing exceptions, but I would like to actually fix it.
I wouldn't mind clearing the cube so that it regenerates itself from scratch, but I can't seem to be able to do that.
Once I get this error, even if I delete the cube and create it again from scratch, the error is still there.
The only solution so far (in my test environment) was to change the database name in the project deployment target properties. Obviously this will not do the trick in production.
Basically,
Table: dbo_v_MYEntities, Column: uniqueId, Value: 2548
means that your table/view dbo.v_MYEntities has a column uniqueId which contains the value 2548, and that value is not present in the table related to dbo.v_MYEntities on the Dimension Usage tab in BIDS. This usually happens when dbo.v_MYEntities is a fact table and the related dimension table does not contain the key. I would check the referential integrity of the schema, try to determine why this is happening, and correct it in the ETL or in the view definition.
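If the cause turns out to be genuinely missing dimension rows, one common correction in the view or ETL layer (not the only one) is to map orphaned keys to a dedicated 'unknown' member instead of letting processing fail. A sketch, assuming a dimension table DimEntity that has been pre-seeded with an 'unknown' row whose key is -1 (all names here beyond dbo.v_MYEntities are placeholders):

    -- Orphaned fact keys fall back to the unknown member (-1), which must
    -- already exist as a row in the hypothetical dimension table DimEntity.
    SELECT
        COALESCE(d.uniqueId, -1) AS uniqueId,
        f.SomeMeasure                          -- placeholder for the real measure columns
    FROM dbo.v_MYEntities AS f
    LEFT JOIN dbo.DimEntity AS d
           ON d.uniqueId = f.uniqueId;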
