OLAP Error while Processing - sql-server

I am new to OLAP and have figured out how to build a cube and process it. However, when I play with it too much, I eventually run into this error:
Errors in the OLAP storage engine: The attribute key cannot be found: Table: dbo_v_MYEntities, Column: uniqueId, Value: 2548. Errors in the OLAP storage engine: The record was skipped because the attribute key was not found. Attribute: Unique Id of Dimension: v MY Entities from Database: Test Cube New, Cube: MYdm MyApp - Views, Measure Group: v MY Entities, Partition: v MY Entities, Record: 2526.
It seems that some values get stuck and the cube expects them to still be there. I know I can edit the error properties and stop it from throwing exceptions, but I would like to actually fix the problem.
I wouldn't mind clearing the cube entirely so that it regenerates itself from scratch, but I can't seem to manage that.
Once I get this error, even if I delete the cube and create it again from scratch, the error is still there.
The only solution so far (in my test environment) was to change the database name in the project's deployment target properties. Obviously this will not do the trick in production.

Basically,
Table: dbo_v_MYEntities, Column: uniqueId, Value: 2548
means that your table/view dbo.v_MYEntities has a column uniqueId containing the value 2548, and that value does not exist in the table related to dbo.v_MYEntities on the Dimension Usage tab in BIDS. This usually happens when dbo.v_MYEntities is a fact table and the related dimension table is missing that key. I would check the referential integrity of the schema to determine why this is happening, and correct it in the ETL or in the view definition.
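As a starting point, a query along these lines can surface the orphaned keys. The fact view and column names come from the error message above, but dbo.DimEntities is a hypothetical stand-in for whatever table backs the dimension in your data source view:

-- List fact key values that have no matching member in the dimension table.
-- dbo.DimEntities is hypothetical; substitute your actual dimension source.
SELECT f.uniqueId, COUNT(*) AS orphaned_rows
FROM dbo.v_MYEntities AS f
WHERE NOT EXISTS (SELECT 1 FROM dbo.DimEntities AS d WHERE d.uniqueId = f.uniqueId)
GROUP BY f.uniqueId;

Any value this returns (2548 should be among them) is a row the storage engine will complain about during processing.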

Related

Issue loading data to Dynamics D365 using Azure Data Factory

I have the following issue in a copy activity:
"Code": 23605,
"Message": "ErrorCode=DynamicsOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Dynamics operation failed with error code: -2147088238, error message: A record that has the attribute values SAP Client, SAP System Id, OrderPOS-ID, Order ID, SAP Module, Account Assignment ID already exists. The entity key Purchase Order Atl Key requires that this set of attributes contains unique values. Select unique values and try again..,Source=Microsoft.DataTransfer.ClientLibrary.DynamicsPlugin,''Type=System.ServiceModel.FaultException`1[[Microsoft.Xrm.Sdk.OrganizationServiceFault, Microsoft.Xrm.Sdk, Version=9.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]],Message=The creator of this fault did not specify a Reason.,Source=Microsoft.DataTransfer.ClientLibrary.DynamicsPlugin,'";
I'm using a dynamic query (with a stored procedure) and load it based on a config table where I have all the entities to load. Recently we started to get these error messages for some countries (but there's no pattern). The entity in this case is Purchase Orders; the process loads part of the data to Dynamics, but some countries are failing. I've already checked for duplicates and didn't find any using the keys listed in the message above.
These fields are defined as alternate keys in Dynamics in order to avoid duplicates, and the alternate key is composed of: SAP Client, SAP System Id, OrderPOS-ID, Order ID, SAP Module, Account Assignment ID.
To make our process faster I separated PO (Purchase Orders) into two views: (1) Purchase_Order_master, which basically copies new records with all keys to Dynamics; after this first view runs we have (2) a view with all attributes that upserts every value that was updated on the source side.
These two views have the following fields in common, and they are the only fields they share:
sie_purchaseordersid(uniqueidentifier, null)
Executionid(uniqueidentifier, not null)
Hashkey(varbinary(32), not null)
The fields that make up the hash key are the same ones found in the alternate key:
,HASHBYTES('SHA2_256', CONCAT(SAPSYSID,SAPMOD,MANDT,EBELN,EBELP,TRY_CONVERT(NVARCHAR(255),TRY_CONVERT(INT,ZEKKN)))) HashKey
I'm out of ideas, and I hope one of you has already tackled a similar issue in the past and can provide some help.
Thanks in advance.
I've tried to find duplicates based on the fields mentioned in the log message, but I didn't find any.
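One thing worth ruling out: Dynamics typically compares alternate-key strings case-insensitively and ignores trailing spaces, so a strict equality search in SQL can miss rows that Dynamics still treats as duplicates. A sketch of a normalized duplicate check, assuming the source column names used in the HASHBYTES expression above:

-- Hypothetical duplicate check on the alternate-key columns, normalized the
-- way Dynamics is likely to compare them; adjust names to your source view.
SELECT
    UPPER(RTRIM(MANDT))     AS SapClient,
    UPPER(RTRIM(SAPSYSID))  AS SapSystemId,
    UPPER(RTRIM(EBELP))     AS OrderPosId,
    UPPER(RTRIM(EBELN))     AS OrderId,
    UPPER(RTRIM(SAPMOD))    AS SapModule,
    TRY_CONVERT(INT, ZEKKN) AS AccountAssignmentId,
    COUNT(*)                AS Cnt
FROM dbo.Purchase_Order_master  -- hypothetical name for view (1)
GROUP BY
    UPPER(RTRIM(MANDT)), UPPER(RTRIM(SAPSYSID)), UPPER(RTRIM(EBELP)),
    UPPER(RTRIM(EBELN)), UPPER(RTRIM(SAPMOD)), TRY_CONVERT(INT, ZEKKN)
HAVING COUNT(*) > 1;

Also note that the same error fires when the row already exists in Dynamics from an earlier run, so comparing the source batch against what is already in the target may matter as much as deduplicating the source.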

Attribute Key Not Found Error SSAS Cube Rebuild

I am rebuilding my SSAS cube and encountering the following error:
Errors in the OLAP storage engine: The attribute key cannot be found when processing: Table: 'MyFactTable', Column: 'MyKey', Value: '900763'. The attribute is 'Description'. Errors in the OLAP storage engine: The attribute key was converted to an unknown member because the attribute key was not found. Attribute Description of Dimension: Item from Database: OTD DATAMART, Cube: Data Mart, Measure Group: Transaction Fact, Partition: Transaction Fact, Record: 22438443.
I realize this could mean the key was in my fact table but not in the dimension, so I ran a ProcessUpdate on the dimension first and then processed the cube, but the error keeps coming back. I can confirm that I can see the key and the entry in the dimension.
Any suggestions?
Try removing the dimension from the cube, then adding it back in, effectively resetting the Dimension Usage. Or try changing the key of the dimension to a different field and then changing it back again. Basically, try anything that might jog things into resetting the keys in the background. Then reprocess the database in full (if you can; otherwise process the dimension and then the cube).
After adding the dimension back to the cube, remove all the dimension key references to the measure groups on the Dimension Usage tab and try processing the cube; then add them back in. It is some disconnect on the keys, because the fact attributes all check out. Sometimes removing the dimension from the cube and reprocessing the cube before deleting the dimension on the server side, prior to re-adding it, works.
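When the key genuinely appears to exist in the dimension, a trailing-space or case difference between the fact and dimension values is a common culprit, since SSAS matches keys more strictly than a typical SQL Server comparison does. A diagnostic sketch, with dbo.DimItem as a hypothetical name for the Item dimension's source table and assuming MyKey is a string column (for integer keys a plain join suffices):

-- Join under a binary collation so case differences surface; SQL Server
-- equality ignores trailing spaces, so compare DATALENGTH explicitly too.
SELECT DISTINCT f.MyKey
FROM dbo.MyFactTable AS f
LEFT JOIN dbo.DimItem AS d
  ON f.MyKey COLLATE Latin1_General_BIN = d.MyKey COLLATE Latin1_General_BIN
 AND DATALENGTH(f.MyKey) = DATALENGTH(d.MyKey)
WHERE d.MyKey IS NULL;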

How to send data from OLE DB source to Anchor model tables using ETL procedure?

I'm currently working on this task: some data should be sent from AdventureWorks2012 to Anchor model tables on the same MS SQL server.
This is my Anchor Model
At this point I have a pretty simple Integration Services project in Visual Studio and it looks like this.
Control flow:
For example Load_territories is:
The main requirement is to fill all the Anchor model tables in MS SQL, but I'm constantly facing a problem: the number of attributes in the tables differs, and some of them repeat.
In this picture, in the second table, TR_ID, TR_GRP_TR_ID, TR_TID_TR_ID, and TR_TNM_TR_ID basically contain the same values from dwh_key, but it's impossible to create a one-to-many relation between attributes. My tutor has recommended that I use Lookups, but I cannot figure out how to implement them in this project.
This may be considered cheating, but if you insert data into the latest view rather than the separate 6NF tables, all of those ID fields will be populated by the underlying trigger logic. I suspect that this would defeat the purpose of using SSIS, though, since you would effectively be loading attributes sequentially rather than in parallel.
Another option is to leave surrogate key management to the ETL tool. This would require that you switch the data type of your identities from integers to GUIDs. SSIS can then generate a GUID, and you can use that very same GUID to populate all the attributes. Note that the anchor would have to be loaded first, or you will get a foreign key violation.
The most common solution though, is to leave surrogate key management to the database (and use integers). You would have a step in which you populate the metadata column in the anchor with the desired number of new identities to be created. Using the metadata number you can then select the newly generated identities and merge them into your data flow. It doesn't matter which number gets assigned to which row. After that all attributes can be populated in parallel, including their ID columns.
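A minimal T-SQL sketch of that pattern, with hypothetical names throughout (an anchor TR_Territory with an identity column TR_ID and a metadata column, plus a staging table): insert the required number of rows, capture the generated identities with OUTPUT, and pair them with the staged rows in arbitrary order:

-- Hypothetical anchor: TR_Territory(TR_ID int IDENTITY, Metadata_ID int).
DECLARE @batch int = 42;           -- marks this load in the metadata column
DECLARE @new TABLE (TR_ID int);    -- receives the generated surrogate keys

-- Create one new identity per staged source row.
INSERT INTO dbo.TR_Territory (Metadata_ID)
OUTPUT inserted.TR_ID INTO @new
SELECT @batch FROM dbo.StagedTerritories;

-- Pair each new identity with a staged row; which number lands on which
-- row is irrelevant, as noted above.
SELECT s.dwh_key, n.TR_ID
FROM (SELECT dwh_key, rn = ROW_NUMBER() OVER (ORDER BY dwh_key)
      FROM dbo.StagedTerritories) AS s
JOIN (SELECT TR_ID, rn = ROW_NUMBER() OVER (ORDER BY TR_ID)
      FROM @new) AS n ON n.rn = s.rn;

The result of that final SELECT is what you would merge back into the data flow so every attribute table can be loaded in parallel with the correct TR_ID.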
Of course, if this is intended to be used for more than an initial load, you would also have to add steps to detect if the data you are loading is already known or not.
I can also recommend watching the video tutorial referenced in this blog post: https://clinthuijbers.wordpress.com/2013/06/14/ssis-anchor-modeling-example-tutorial/

Cannot insert duplicate key row in object with unique index. The duplicate key value. The statement has been terminated

I am new to OutSystems and SQL. I am trying to create a bus application where the entities are
When I try to create a new rider with the same name but a different Route and Bus Id, I get
Cannot insert duplicate key row in object 'dbo.OSUSR_6SL_RIDER' with unique index 'OSIDX_OSUSR_6SL_RIDER_4NAME'. The duplicate key value is (ABC).
The statement has been terminated.
When I check the Name field in the database table 'dbo.OSUSR_6SL_RIDER', it does not have a unique identifier set up. Can anybody please help me with this?
Open the Indexes tree under your table. You will find an index named 'OSIDX_OSUSR_6SL_RIDER_4NAME'.
Script out that index and you will see that it is a UNIQUE index on the Name column in which you are trying to create a duplicate value.
You must either change that index to include Route and Bus ID, or abandon the attempt to create a new row with a duplicate name.
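For illustration only (and note the OutSystems-specific caveat below before touching the database directly), inspecting and widening the index might look like the following sketch; the Route and Bus ID column names are assumptions:

-- Inspect which columns the unique index actually covers.
SELECT i.name AS index_name, c.name AS column_name
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
  ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns AS c
  ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.name = 'OSIDX_OSUSR_6SL_RIDER_4NAME';

-- Hypothetical widening of the key (column names assumed):
DROP INDEX OSIDX_OSUSR_6SL_RIDER_4NAME ON dbo.OSUSR_6SL_RIDER;
CREATE UNIQUE INDEX OSIDX_OSUSR_6SL_RIDER_4NAME
  ON dbo.OSUSR_6SL_RIDER (NAME, ROUTEID, BUSID);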
It looks like you are creating an exact duplicate, i.e. a record with the same Id value. The index name appears to be auto-generated by the database system, so it is not necessarily referring to the Name field. Have a look at your indexes and the fields they contain. I wouldn't be surprised if OSIDX_OSUSR_6SL_RIDER_4NAME contains the Id field.
If you are using the OutSystems platform, all the database management is done/generated when you publish from Service Studio, so it isn’t advisable to manipulate the database directly: you’re setting yourself up for a lot of maintenance pain and inconsistencies between different environments.
Double-click on the Entity Rider and it’ll open the edit window of your entity. In the Indexes tab you can define and change your indexes (unique or not) and the tool will (re)generate all the needed SQL commands.
See OutSystems Platform 9 Help | Indexes Tab for more details.

What would you do to avoid conflicting data in this database schema?

I'm working on a multi-user internet database-driven website with SQL Server 2008 / LinqToSQL / custom-made repositories as the DAL. I have run across a normalization problem which can lead to an inconsistent database state if exploited correctly and I am wondering how to deal with the problem.
The problem: Several different companies have access to my website. They should be able to track their Projects and Clients at my website. Some (but not all) of the projects should be assignable to clients.
This results in the following database schema:
**Companies:**
ID
CompanyName
**Clients:**
ID
CompanyID (not nullable)
FirstName
LastName
**Projects:**
ID
CompanyID (not nullable)
ClientID (nullable)
ProjectName
This leads to the following relationships:
Companies-Clients (1:n)
Companies-Projects (1:n)
Clients-Projects(1:n)
Now, a malicious user might, for example, insert a Project with his own CompanyID but with a ClientID belonging to another company, leaving the database in an inconsistent state.
The problem occurs in a similar fashion all over my database schema, so I'd like to solve it in a generic way if at all possible. I had the following two ideas:
Check for database writes that might lead to inconsistencies in the DAL. This would be generic, but requires additional database queries before each update or create is performed, so it will cost some performance.
Create an additional table for the Clients-Projects relationship and make sure the relationships created this way are consistent. This also requires some additional select queries, but far fewer than in the first case. On the other hand it is not generic, so it is easier to miss something in the long run, especially when adding more tables and dependencies to the database.
What would you do? Is there any better solution I missed?
Edit: You might wonder why the Projects table has a CompanyID. This is because I want users to be able to add projects with and without clients. I need to keep track of which company (and therefore which website user) a clientless project belongs to, which is why a project needs a CompanyID.
I'd go with the latter, having one or more tables that define the allowable relationships between entities.
Note, there's no circularity in the references you have, so the title is misleading.
What you have is the possibility of conflicting data, that's different.
Why do you have "CompanyID" in the project table? The ID of the company involved is implicitly given by the client you link to. You don't need it.
Remove that column and you've removed your problem.
Additionally, what is the purpose of the "name" column in the client table? Can you have a client with one name, differing from the name of the company?
Or is "client" the person at that company?
Edit: OK, with the clarification about projects without clients, I would separate out the references, but you're not going to get rid of the problem you're describing without constraints that prevent multiple references being made.
A simple constraint for your existing tables would be that the CompanyID and ClientID fields of a project row cannot both be non-null at the same time.
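As a sketch, that rule is expressible as a table-level CHECK constraint, assuming CompanyID is made nullable so a project references either a company directly or a client that implies one:

-- Hypothetical constraint: at most one of the two references may be set.
ALTER TABLE Projects
ADD CONSTRAINT CK_Projects_CompanyXorClient
CHECK (CompanyID IS NULL OR ClientID IS NULL);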
If you want to keep the tables as they are and avoid all the extra queries, put triggers on the table: when a user tries to insert a row with wrong data, the trigger will stop him.
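A minimal sketch of such a trigger against the original three-table schema, assuming the table and column names from the question:

-- Reject projects whose ClientID belongs to a different company than the
-- project's own CompanyID.
CREATE TRIGGER TR_Projects_CheckClientCompany
ON Projects
AFTER INSERT, UPDATE
AS
BEGIN
    IF EXISTS (SELECT 1
               FROM inserted AS i
               JOIN Clients AS c ON c.ID = i.ClientID
               WHERE c.CompanyID <> i.CompanyID)
    BEGIN
        RAISERROR('Project client must belong to the project''s company.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END;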
Best Regards,
Iordan
My first thought would be to create a special client record for each company with name "No client". Then eliminate the CompanyId from the Project table, and if a project has no client, use the "No client" record rather than a "normal" client record. If processing of such no-client's is special, add a flag to the no-client record to explicitly identify it. (I'd hate to rely on the name being "No Client" or something like that -- too fuzzy.)
Then there would be no way to store inconsistent data so the problem would go away.
In the end I implemented a completely generic solution which solves my problem without much runtime overhead and without requiring any changes to the database. I'll describe it here in case someone else has the same problem.
First off, the approach only works because the only table that other tables are referencing through multiple paths is the Companies table. Since this is the case in my database, I only have to check whether all n:1 referenced entities of each entity that is to be created / updated / deleted are referencing the same company (or no company at all).
I am enforcing this by deriving all of my Linq entities from one of the following types:
SingleReferenceEntityBase - The norm. Only checks (via reflection) that there really is just one reference path (whether direct or transitive) to the Companies table. If this is the case, the references to the Companies table cannot become inconsistent.
MultiReferenceEntityBase - For special cases such as the Projects table above. Asks all directly referenced entities what company ID they are referencing. Raises an exception if there is an inconsistency. This costs me a few select queries per CRUD operation, but since MultiReferenceEntities are much rarer than SingleReferenceEntities, this is negligible.
Both of these types implement a "CheckReferences" method, and I call it whenever the Linq entity is written to the database by partially implementing the OnValidate(System.Data.Linq.ChangeAction action) method, which is automatically generated for all Linq entities.
