I have a cube that can no longer be processed because of a missing key in a dimension.
The process cube action keeps returning the following error message:
The attribute key cannot be found when processing: Table: 'MyFactTableName', Column: 'MyDimensionKey', Value: 'Value'. The attribute is 'MyDimensionKey'.
To explain the situation: I have a fact table FactSales with multiple columns, of which ProductID is one. ProductID is a key from the table DimProducts.
The problem, however, is that one record in the fact table has a ProductID which does not exist in DimProducts.
The problem lies with the ETL, but I have no access to that, and the person who does is not available for two weeks. I only have the SSAS project to work with.
So my question: is there a way (for example, a property on the dimension or its attributes) to temporarily ignore this error and still process the cube? I have heard about settings that can be applied in SSMS when processing manually, but processing also has to run daily (from a SQL job), so I am looking for an option in the SSAS solution itself.
I think there are two ways. When you process the cube, in the processing options you can specifically set it to ignore dimension errors and continue.
Likewise, I think you can set this in the processing options in Visual Studio for the dimension.
Longer term, it may be wise to change the ETL so that any missing keys are assigned to a generic missing key (see the sketch below); this prevents further cube processing errors and lets you fix the error properly.
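As a rough illustration of that generic-missing-key approach, here is a minimal T-SQL sketch. It assumes the FactSales/DimProducts names from the question, a -1 surrogate for the unknown member, a hypothetical ProductName column, and that ProductID in DimProducts is not an identity column (otherwise SET IDENTITY_INSERT would be needed):

```sql
-- Add an "Unknown" member to the dimension once (assumes -1 is unused).
IF NOT EXISTS (SELECT 1 FROM dbo.DimProducts WHERE ProductID = -1)
    INSERT INTO dbo.DimProducts (ProductID, ProductName)
    VALUES (-1, 'Unknown');

-- Repoint any fact rows whose ProductID has no matching dimension row.
UPDATE f
SET    f.ProductID = -1
FROM   dbo.FactSales AS f
WHERE  NOT EXISTS (SELECT 1
                   FROM   dbo.DimProducts AS d
                   WHERE  d.ProductID = f.ProductID);
```

Run as part of the ETL, or as a pre-processing step in the daily job, this keeps every fact row joinable so the cube processes while the real fix is pending.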
I am having an issue deploying an SSAS project to SQL Server Analysis Services. It complains of duplicate keys, but the column it references is not a primary key column. I queried the dimension table and saw that rows sharing a primary key have the same values in the affected columns, which is normal and expected. The attribute's Usage and Type properties are already set to Regular in SSDT. Please find the error I am receiving below. I would appreciate any idea on how to fix this issue. Thank you.
Errors and Warnings from Response
Server: The current operation was cancelled because another operation in the transaction failed. Errors in the OLAP storage engine: A duplicate attribute key has been found when processing: Table: 'dwh_Dim_By_Answers', Column: 'QB_AnswerText', Value: 'hazard'. The attribute is 'QB Answer Text'.
There are two solutions for this issue:
To avoid the duplicate key error when processing a dimension, set the dimension property ProcessingGroup to ByAttribute instead of ByTable.
Force SSAS to ignore the duplicate key error by setting KeyDuplicate to IgnoreError in the dimension key errors tab. To achieve this, go to SSMS or Visual Studio (SSDT) -> Process -> in the Process tab click Change Settings -> Dimension key errors -> Use custom error configuration -> set KeyDuplicate to IgnoreError.
See: https://www.mssqltips.com/sqlservertip/3476/sql-server-analysis-services-ssas-processing-error-configurations/
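Before ignoring the error, it can be worth confirming where the duplicates come from; trailing spaces and case-insensitive collation collisions are frequent culprits. A hedged sketch using the table and column from the error message:

```sql
-- List attribute values that collapse onto the same key after trimming
-- and case-folding, i.e. values SSAS may read as duplicate attribute keys.
SELECT UPPER(LTRIM(RTRIM(QB_AnswerText))) AS normalized_value,
       COUNT(DISTINCT QB_AnswerText)     AS distinct_variants
FROM   dbo.dwh_Dim_By_Answers
GROUP BY UPPER(LTRIM(RTRIM(QB_AnswerText)))
HAVING COUNT(DISTINCT QB_AnswerText) > 1;
```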
My company has a really old Access 2003 .ADP front-end connected to an on-premises SQL Server. I was trying to update the front-end to MS Access 2016, which is what we're transitioning to, but when linking the tables I get all the fields in one specific table shown as #Deleted. I've looked around and tried to change some of the settings, but I'm really not into SQL Server enough to know what I'm doing, hence asking for help.
When converting the table to a local one, all the info is correctly displayed, which makes it all the more puzzling. Also, skipping to the last record will reveal the info in that record, and sorting/filtering reveals some of the records, but most of the table stays "#Deleted"...
Since I know you're going to ask: yes, I need to edit the records. Although the snapshot method would work for people who only need to view the info, some of us need to edit it.
I'm hoping someone can shed some light on this,
Thanks in advance, Rafael.
There are 3 common reasons for this:
You have bit fields in SQL Server, but they are null. They should be assigned a default of 0.
The table in question does NOT have a PK (primary key).
Last but not least, you need (want) to add a timestamp column. Keep in mind that this is really what we call a "row version" column (it is not a date/time column, but a timestamp column). Adding this column helps Access determine whether a record has been changed, especially for any table/form in Access that allows editing of "real" number data types (single, double). If Access does not find a timestamp column, it reverts to a column-by-column comparison to detect changes, and because of how computers handle "real" numbers (with rounding), such comparisons often fail.
So, check for the above 3 issues (a sketch of the corresponding fixes follows). You likely should re-run the linked table manager after making any changes.
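For reference, a minimal T-SQL sketch of the three fixes; dbo.MyTable, the bit column IsActive, and the key column ID are placeholders, not names from the question, and the primary key step assumes ID is already NOT NULL:

```sql
-- 1. Backfill NULL bit values and give the column a default of 0.
UPDATE dbo.MyTable SET IsActive = 0 WHERE IsActive IS NULL;
ALTER TABLE dbo.MyTable
    ADD CONSTRAINT DF_MyTable_IsActive DEFAULT 0 FOR IsActive;

-- 2. Make sure the table has a primary key.
ALTER TABLE dbo.MyTable
    ADD CONSTRAINT PK_MyTable PRIMARY KEY (ID);

-- 3. Add a rowversion ("timestamp") column so Access can detect changes.
ALTER TABLE dbo.MyTable ADD RowVer rowversion;
```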
I'm struggling a bit to overcome the obstacle of creating a table with a foreign key to another table. It looks simple, right? It is, but unfortunately I'm not succeeding. The error thrown is the one in the title. Has anyone else had this error before? How did you resolve it? I'm using SQL Server 2014, but the error is thrown through the OutSystems IDE.
Best regards,
Rafael Valente
It would help if you could post a picture of your data model for us to take a look at.
One way of dealing with this kind of error in OutSystems is inspecting the database itself. There's a system table called ossys_espace. Get your eSpace id from there. Then query ossys_entity to see which physical table corresponds to that entity, and check if there's something wrong with it (a sketch of these lookups follows).
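A hedged sketch of those two lookups (ossys_espace and ossys_entity are OutSystems system tables, but the exact column names can vary between platform versions, so treat these as approximations):

```sql
-- Find the id of your eSpace ('MyEspace' is a placeholder).
SELECT id, name
FROM   ossys_espace
WHERE  name = 'MyEspace';

-- List that eSpace's entities with their physical table names and
-- deleted flag; replace 42 with the id returned by the query above.
SELECT name, physical_table_name, is_active
FROM   ossys_entity
WHERE  espace_id = 42;
```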
There's also the possibility that a table you created in the past is causing the error. Check for entities with the deleted flag set to true in that table. If it helps, there's a Forge component that you can use to clean those deleted entities.
If you have access to the server you can also look at the generated SQL and understand if there's a problem with it.
I find that error weird, but you might be bumping into a bug, and for sure we want to know that :)
I am trying to import data from an Access database file into SQL Server. To do that, I created an SSIS package through the SQL Server Import/Export wizard. All tables passed validation when I executed the package through the Execute Package Utility with the "validate without execution" option checked. However, during the execution I received the following chunk of errors (shown as a picture, since a blockquote would use a lot of space):
Upon investigation, I found exactly the table and the column which were causing the problem. However, I have been trying to solve this problem for a couple of days now, and I'm running dry on possible options.
Structure of the troubled table column
As noted in the error list, the trouble occurs in the RHF Repairs table, on the Date Returned column. In Access, the column in question is of Date/Time type. Inside the actual table, all inputs are in the form 'mmddyy', which, when clicked on, turns into 'mm/dd/yyyy' format.
In the SSIS package, the wizard created an OLE DB Source/Destination relationship.
Inside this relationship, the data type in both the output columns and the external columns is DT_DATE (I still think this is a key cause of my problems). What bugs me the most is that the column adjacent to Date Returned is exactly the same as described above, and none of the errors apply to it or any other column of the same type; Date Returned is literally the only black sheep in the flock.
What have I tried
I have tried every option from the following thread; the error remains the same.
I tried the Data Conversion option, converting this column into a timestamp or even a Unicode string. It didn't work.
I tried to specify the data type with the advanced source editor, as both a timestamp and a Unicode string. I tried specifying it only in the output columns, and in both the external and output columns; same result.
Plowing through the data in the Access table also did not give me anything. All entries use the same 6-character formatting throughout.
At this point, I have literally exhausted all the options I could think of. Can you please point me in the right direction on what else I could possibly try? It has been driving me nuts for the last two days.
PS: On my end, I will plow through each row individually, while trying not to get discouraged by the fact that there are 4000+ rows...
UPDATE:
I resolved this matter by plowing through the data. There were 3 faulty entries among the 4000+ rows... Since the issue was resolved in a manner unlikely to help others, please close this question.
It sounds to me like you have one or more bad dates in the column. With 4,000 rows, I would actually scan visually and look for something very short or very long.
You could also change your source to select the top 1 row instead of all 4,000. Does it insert? If so, that lends weight to the bad-date scenario; if even 1 row does not flow through, it is another issue. (A query for hunting bad dates is sketched below.)
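If the Access table were linked or staged into SQL Server, a hedged sketch of that hunt (the table and column names are taken from the question; the cutoff dates are arbitrary):

```sql
-- Surface rows whose Date Returned falls far outside the plausible range;
-- typo years such as 216 or 9216 show up immediately.
SELECT *
FROM   [RHF Repairs]
WHERE  [Date Returned] < '1950-01-01'
   OR  [Date Returned] > '2030-12-31';
```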
(I will just share my experience, how I overcame this problem, in case it helps someone)
My scenario:
The data type of one of the columns, Identifier, in the OLE DB data source changed from int to bigint. I was getting the error message: "Conversion failed because the data value overflowed the specified type."
Basically, it was telling me the source data size was greater than the destination data size.
What I have tried:
In both the OLE DB data source and the destination, I clicked "Show Advanced Editor" and checked that the data type of Identifier was bigint. But I was still getting the error message.
The solution that worked for me:
In the OLE DB data source --> Show Advanced Editor --> Input and Output Properties --> OLE DB Source Output --> there are two options: External Columns and Output Columns.
In my case, although the Identifier column under External Columns showed the data type bigint, under Output Columns it showed int. So I changed that data type to bigint, and it solved my problem.
I run into this problem now and then, especially when I have a big table with lots of data.
I hope it helps.
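As a quick sanity check for this scenario, a hedged sketch that counts source rows no longer fitting in a 32-bit int (dbo.SourceTable and Identifier are placeholders based on the description):

```sql
-- Any non-zero count confirms the int -> bigint overflow explanation.
SELECT COUNT(*) AS overflowing_rows
FROM   dbo.SourceTable
WHERE  Identifier > 2147483647
   OR  Identifier < -2147483648;
```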
We had this error when someone had entered the year as 216 instead of 2016. The data source was reading the data fine, but it was failing in the OLE DB destination task.
We use a script task in the data flow for validation. By adding a check that dates aren't too far in the past, we are able to trap this kind of error and at least generate a meaningful error message, so the problem can be found and corrected quickly. (The same guard is sketched in T-SQL below.)
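The check itself lives in an SSIS script task, but the same guard can be expressed in T-SQL against a staging table; a hedged sketch with placeholder names:

```sql
-- Flag rows whose dates are implausibly far in the past (e.g. year 216
-- instead of 2016) before they reach the OLE DB destination.
SELECT *
FROM   dbo.StagingOrders
WHERE  OrderDate < DATEADD(YEAR, -100, GETDATE());
```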
I'm currently creating my first real life project in Pervasive. The task is to map a certain XML structure containing orders (as in shops and products) to 3 tables I created myself. These tables rest inside a MS-SQL-Server instance.
All of the tables have a unique key called "id", an automatically incremented column. I've dropped this column from all mappings so that Pervasive will not try to fill it itself.
For certain calculations, for a split key in one of the tables, and for references to the created records in other tables, I will need the id that the database has just created. I have googled the answer for that: I can use "SELECT @@IDENTITY;" as a statement, and this returns the id most recently created for the current connection. This means that in Pervasive, I will have to execute this statement using the already existing target connection object.
But how do I do that? I am quite sure that I will need a DJImport or DJExport object, but how do I get one associated with the connection that Pervasive inserts the records with?
Or is there any other way to handle this auto-increment when I need to reference the id in other tables?
Not sure how things work in Pervasive, but you may run into issues with @@IDENTITY. SCOPE_IDENTITY() would probably be safer, but may still not work in Pervasive.
Hopefully your tables have a natural key in addition to the generated id, in which case you can select your id based on the natural key. This will avoid any issues you may have with disparate sessions and scope.
If anyone looks this post up and wonders about the answer, it's "You can't". Pervasive does not allow access to its very own connection object, the one it uses to query the database. Without access to it, you cannot reliably fetch the right id. The solution for us was this: we used a stored procedure, called in the Before-Transformation event, that created the header record and returned the id and an optional error message as a table. We execute it, save the returned id, and use it throughout our mapping (a sketch of such a procedure follows).
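A minimal sketch of such a procedure, assuming invented table and column names (the post does not show the original); it returns the new id and an optional error message as a one-row table:

```sql
CREATE PROCEDURE dbo.CreateOrderHeader
    @ShopCode varchar(20)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        -- Create the header record on the procedure's own connection...
        INSERT INTO dbo.OrderHeader (ShopCode) VALUES (@ShopCode);

        -- ...and return the generated id from the same scope, so the
        -- caller never depends on which connection did the insert.
        SELECT CAST(SCOPE_IDENTITY() AS int) AS id,
               CAST(NULL AS varchar(200))    AS error_message;
    END TRY
    BEGIN CATCH
        SELECT CAST(NULL AS int) AS id,
               ERROR_MESSAGE()   AS error_message;
    END CATCH
END;
```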