Excel PowerPivot update error "object was not found in the cube"

I have a PowerPivot file that pulls data directly from a SQL Server data warehouse. That data is then fed into pivot tables. When I try to update, I get the following error:
Query (20,3916) The level '&[Desktop]' object was not found in the cube when the string, [OfficeFlatFile].[TopicLevel2Name].&[Desktop], was parsed.
I checked my data source and found that the member "Desktop" was no longer available (no surprise there). But I can't get the file to update now. I tried updating the PowerPivot data connection first but that didn't work either.
This is the most recent info I could find, and it doesn't help.
https://connect.microsoft.com/SQLServer/feedback/details/756691/powerpivot-data-could-not-be-retrieved-from-the-exteral-data-source
Does anyone know a solution apart from rebuilding the file?

An .xlsx (or .xlsm) file is really just a set of XML files zipped together.
Try opening your Excel file with an archive tool such as WinRAR or 7-Zip.
Then go to the xl/pivotTables folder, where you should find a pivotTable1.xml file.
Manually delete the item that still references the missing member from that XML (its unique name, ending in &[Desktop], should appear in the element's attributes).
Then save your changes and reopen the Excel file containing the pivot table.
Since you manually deleted the "Desktop" item, the error will no longer occur.

Remove the "." in the column names that are used in the join.
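If the offending column lives in SQL Server, a minimal sketch of such a rename, assuming a hypothetical table OfficeFlatFile with a column named Topic.Level2Name (all names here are placeholders):

-- Rename a column whose name contains a dot; dots inside column names
-- can break the MDX member strings that PowerPivot generates.
EXEC sp_rename 'dbo.OfficeFlatFile.[Topic.Level2Name]',
               'TopicLevel2Name',
               'COLUMN';

Refresh the PowerPivot connection afterwards so the model picks up the new column name.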

Removing the "." in the column's name worked for me.

The cleanest solution I have found so far is to take your previous working model (the one that worked fine before the update) and find all the pivots that were filtered on "Desktop". Set these filters to "All" and then run your update.
This way you don't lose your pivot table, which can be a big rework to rebuild, especially when you have charts and other dependencies linked to that pivot.

In my case I had many sheets with PowerPivot reports.
One of them caused the error.
Removing that sheet and setting the filters to "All" in the other reports solved the problem.

I used the Clear Filter option on the PivotTable Analyze tab, and after that the error was resolved.

I was getting the error:
The query did not run or the Data Model could not be accessed. Here's the message we got:
Query (x,y) The level '<column name>' object was not found in the cube when the string was parsed.
Solution:
If there is a Slicer connected to the pivot table, disconnect it from the pivot table and remove all filters from the pivot table.
If there is no Slicer connected to the pivot table, just remove all filters from the pivot table.
Then check in the database whether the values in the filter column (of the pivot table) have changed. If you have an older working version of the pivot table report, compare the two to find the difference.
Thanks

Related

Remove duplicate rows from a SQL Server table using DISTINCT

I need to remove duplicated rows in SQL Server when importing a file into the database, using the DISTINCT method.
HallGroup is my table in the database. I'm using this SQL procedure:
-- Copy the distinct rows into a temp table, empty the original table,
-- then copy the de-duplicated rows back and drop the temp table.
SELECT DISTINCT * INTO tempdb.dbo.tmpTable
FROM HallGroup;
DELETE FROM HallGroup;
INSERT INTO HallGroup SELECT * FROM tempdb.dbo.tmpTable;
DROP TABLE tempdb.dbo.tmpTable;
This procedure works fine and the duplicated rows are deleted, but the problem is that when I import data into SQL Server again, the rows are duplicated once more. What am I missing? Any hints?
How do I properly remove duplicated rows in SQL Server when importing a file into the database with the DISTINCT method?
I am just getting back into SQL after being out for a bit, but I would not have solved your problem the way you are trying to (not that I completely understand why you are doing it that way). Even if it were working correctly, your process will take longer every time you run it as the size of the table increases.
It would be much more efficient to insert the new data based on the absence of a key (you indicate you are already using a stored proc). If you don't have a key to use (which very recently happened to me), make one. I just solved a similar problem to yours: I am importing data into a table from an external source and wanted to eliminate the possibility of duplicates. In my case, I associate the name of the external source datafile (which is distinct per dataset to import) with the data to be imported, and use that to ensure I am not re-importing already imported data. I load the external data into a staging table using a dtsx package and then run a stored proc to merge that data into the existing table, roughly as sketched below. This gives me the added advantage of an audit trail showing where each record came from.
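A minimal sketch of that insert-if-absent step, assuming a hypothetical staging table HallGroup_Staging, a key column HallId, and an audit column SourceFile (all placeholders for whatever your import produces):

-- Insert only the staged rows whose key is not already in the target table.
INSERT INTO dbo.HallGroup (HallId, HallName, SourceFile)
SELECT s.HallId, s.HallName, s.SourceFile
FROM dbo.HallGroup_Staging AS s
WHERE NOT EXISTS
(
    SELECT 1
    FROM dbo.HallGroup AS t
    WHERE t.HallId = s.HallId
);

Because the check runs against the key rather than the whole row, re-running the import is harmless: rows that are already present are simply skipped.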
Hope this helps.

Pivot off of an Access query is misbehaving

So I have a document with a pivot table that was previously linked to an Access query. It worked fine, no issues. The problem started when I wanted to change which database the pivot table was linked to. When I tried to change the external data source to the new database, Excel gave me the "cannot complete the task with available resources" error. I find this error can be a little finicky, so I tried deleting the pivot and creating a new pivot with the link I want, except now the pivot comes up empty. It pulls in the column headers (in the pivot field list) but no data comes up when I add fields to the pivot. I should also add that the new database is exactly the same as the old one; the only difference is a new column of data.
Any thoughts? This is driving me crazy. It might be that the results from Access are too big for Excel to process, but I've been paring down the results and none of it makes a difference.

SSIS Lookup transformation to prevent duplicate entries

I am trying to import some Excel files into a SQL Server table using SSIS.
The problem is that when we consolidate the data from all the Excel files into one table, there is a chance it may contain duplicate records.
To solve this I used a Lookup transformation with the "Lookup No Match Output", but with no luck.
Can someone explain how to make this work with the Lookup transform? Please refer to the attached image.
Could you use a sort task and check the 'Remove rows with duplicate sort values' box?
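If duplicates can also arrive within the same load (the Lookup only matches against rows that already exist in the reference table when the data flow starts), a set-based fallback is to land everything in a staging table and de-duplicate there. A minimal sketch, assuming a hypothetical staging table ExcelStaging with a business key BusinessKey, a value column Amount, and a load timestamp LoadDate (all names are placeholders):

-- Keep only one row per business key (the most recently loaded one)
-- and insert those survivors into the destination table.
WITH Ranked AS
(
    SELECT BusinessKey, Amount, LoadDate,
           ROW_NUMBER() OVER (PARTITION BY BusinessKey
                              ORDER BY LoadDate DESC) AS rn
    FROM dbo.ExcelStaging
)
INSERT INTO dbo.TargetTable (BusinessKey, Amount, LoadDate)
SELECT BusinessKey, Amount, LoadDate
FROM Ranked
WHERE rn = 1;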

Export large amounts of binary data from one SQL database and import it into another database of the same schema

I have one database with an image table that contains just over 37,000 records. Each record contains an image in the form of binary data. I need to get all of those 37,000 records into another database containing the same table and schema that has about 12,500 records. I need to insert these images into the database with an IF NOT EXISTS approach to make sure that there are no duplicates when I am done.
I tried exporting the data into Excel and formatting it into a script (I have done this before with other tables). The trouble is, Excel does not support binary data.
I also tried the "Generate Scripts" wizard in SSMS, which did not work because the .sql file was well over 18 GB and my PC could not handle it.
Is there some other SQL tool that can do this? I have Googled for hours but to no avail. Thanks for your help!
I have used SQL Workbench/J for this.
You can either use WbExport and WbImport through text files (the binary data will be written as separate files and the text file contains the filename).
Or you can use WbCopy to copy the data directly without intermediate files.
To achieve your "if not exists" approach you could use the update/insert mode, although that would change existing rows.
I don't think there is an "insert only if it does not exist" mode, but you should be able to achieve this by defining a unique index and ignoring errors (that wouldn't be really fast, but should be OK for this small number of rows).
If the "exists" check is more complicated, you could copy the data into a staging table in the target database, and then use SQL to merge that into the real table.
Why don't you try the 'Export data' feature? This should work.
Right click on the source database, select 'Tasks' and then 'Export data'. Then follow the instructions. You can also save the settings and execute the task on a regular basis.
Also, the bcp.exe utility could work to read data from one database and insert into another.
However, I would recommend using the first method.
Update: In order to avoid duplicates you have to be able to compare the images. Unfortunately, you cannot compare image columns directly (the legacy image data type does not support equality operators), but you can cast them to varbinary(max) for comparison.
So here's my advice:
1. Copy the table to the new database under the name tmp_images
2. Use a query like the following to insert only the new images.
-- Insert the rows from the copied table that are not already present in
-- the target, comparing on column_name (cast to varbinary(max) first, as
-- noted above, if the comparison column is the image itself).
INSERT INTO DB1.dbo.table_name
SELECT * FROM DB2.dbo.table_name
WHERE column_name NOT IN
(
    SELECT column_name FROM DB1.dbo.table_name
);
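If the image content itself is the only way to identify a duplicate, a hedged variation of the query above, assuming a hypothetical image column named ImgData:

-- NOT EXISTS also avoids the classic NOT IN pitfall: if the subquery
-- returns any NULL, NOT IN matches nothing and inserts no rows at all.
INSERT INTO DB1.dbo.table_name
SELECT src.*
FROM DB2.dbo.table_name AS src
WHERE NOT EXISTS
(
    SELECT 1
    FROM DB1.dbo.table_name AS tgt
    WHERE CAST(tgt.ImgData AS varbinary(max)) = CAST(src.ImgData AS varbinary(max))
);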

Importing CSV to database (duplicate entries)

My job requires that I look up information in a long spreadsheet that's updated and sent to me once or twice a week. Sometimes the newest spreadsheet leaves off information that was in the last one, forcing me to look through several different spreadsheets to find what I need. I recently discovered that I could convert the spreadsheet to a CSV file and then upload it to a database table. With a few lines of script, all I have to do is type in what I'm looking for and voila! Now I just got the newest spreadsheet and I'm wondering if I can simply import it on top of the old one. There is a unique number for each row that I have set as the primary key in the database. If I try to import it on top of the current info, will it just skip the rows where the primary key would be duplicated, or would it mess up my database?
Thought I'd ask the experts before I tried it. Thanks for your input!
Details:
The spreadsheet consists of clients of ours. Each row contains the client's name, a unique ID number, and their address and contact info. I can set the column containing the unique ID as the primary key, then upload it. My concern is that there is nothing to signify a new row in a CSV file (I think), so when I upload it and it gives me the option to skip duplicates, will it skip the entire row or just that cell, causing my data to be placed in the wrong rows? It's an Apache server, IDK what version of MySQL. I'm using 000webhost for this.
Higgs,
In database/ETL terminology, this is a deduplication problem, and how you handle it is your deduplication strategy.
There is no template answer for this, but I suggest these helpful readings:
Academic paper: Joint Deduplication of Multiple Record Types in Relational Data
Deduplication article
Some open source tools:
Duke tool
DataCleaner
There's a little checkbox near the bottom when you click Import that says "Ignore duplicates" or something like that. Simpler than I thought.
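To address the earlier worry directly: CSV imports are processed row by row, so skipping a duplicate key skips the entire row, never a single cell. A minimal sketch of the same behaviour in plain MySQL, assuming a hypothetical clients table whose unique ID column is the primary key (the file path and CSV layout are placeholders):

-- IGNORE tells MySQL to skip any row whose primary key already
-- exists instead of aborting the whole import.
LOAD DATA INFILE '/path/to/clients.csv'
IGNORE INTO TABLE clients
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;  -- skip the CSV header row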
