Excel Query Won't Update - sql-server

I have put a query that connects to SQL Server into an Excel spreadsheet (formatted as a table). I update this query daily. Today, when I tried to refresh the query, it generated an error and would not load new data.
The error says:
"This won't work because it would move cells in a table on your worksheet."
All the research I've done on this error seems to relate to users attempting to insert rows, but that's not what I'm doing.
My query will only return two rows, plus the headers. It's not possible for the query to return anything more, so cells shouldn't move in the table.
Any suggestions to solve this error?

Related

Simple SSIS from Oracle to SQL Server drops rows

I have a very simple SQL query in my SSIS (VS 2017) Data Flow. It connects to Oracle via the Native OLE DB\Oracle Provider for OLE DB and uses a SQL Command to query an Oracle view. The destination table is a SQL Server 2017 table. If I query only the first 20 columns or so (I am querying 57 columns), I get all 1,060,000-ish records. As I start to add more columns, the row count drops. I have already removed all date fields from both tables, and have done quite a few data conversions (the source table has several VARCHAR2(4000) fields that need to be SUBSTR'd to reasonable lengths in the SQL destination table; a sketch of that truncation appears after the queries below). All fields in the destination table are nullable.
When I pull the SQL out of SSIS and run it in SQL Developer, I get the right row count. When I run it in SSIS, it drops from 1.06M rows to around 28k.
I already tried the SQLChick hack (https://www.sqlchick.com/entries/2012/9/2/resolving-missing-records-in-ssis-from-oracle-source.html); it doesn't work and causes connection errors (I had to use VS Code to add that property to my Oracle connection, then when I went back to VS, the connection was broken. When opening it back up to re-enter connection credentials, the extra property gets dropped). I have reduced and increased the Rows per Batch and Maximum insert commit size values, to no avail. I have also set the RetainSameConnection property to True for all the Connection Managers. I'm at a loss! (As you can see from the pics, both jobs finish "successfully".)
This code returns all records:
SELECT
PIDM,
STUDENT_ID,
LAST_NAME,
FIRST_NAME,
MIDDLE_NAME,
LFM_NAME,
FML_NAME,
SORT_NAME,
GENDER,
ETHNIC_CODE,
ETHNIC_CODE_DESC,
LEGACY_CODE,
LEGACY_CODE_DESC,
ADDR_STR_LINE1,
ADDR_STR_LINE2,
ADDR_STR_LINE3,
ADDR_CITY,
ADDR_COUNTY,
ADDR_STATE,
ADDR_NATION,
ADDR_ZIPCODE,
ADDR_AREA_CODE,
ADDR_PHONE
FROM <TABLE_NAME>
This code returns only 28k:
SELECT
PIDM,
STUDENT_ID,
LAST_NAME,
FIRST_NAME,
MIDDLE_NAME,
LFM_NAME,
FML_NAME,
SORT_NAME,
GENDER,
ETHNIC_CODE,
ETHNIC_CODE_DESC,
LEGACY_CODE,
LEGACY_CODE_DESC,
ADDR_STR_LINE1,
ADDR_STR_LINE2,
ADDR_STR_LINE3,
ADDR_CITY,
ADDR_COUNTY,
ADDR_STATE,
ADDR_NATION,
ADDR_ZIPCODE,
ADDR_AREA_CODE,
ADDR_PHONE,
ORIGIN_STR_LINE1,
ORIGIN_STR_LINE2,
ORIGIN_STR_LINE3,
ORIGIN_CITY,
ORIGIN_COUNTY,
ORIGIN_NATION,
ORIGIN_STATE,
ORIGIN_ZIPCODE,
EMAIL,
HIGH_SCHOOL_CODE,
HIGH_SCHOOL_CODE_DESC,
HIGH_SCHOOL_CITY,
HIGH_SCHOOL_STATE,
HIGH_SCHOOL_GPA,
HIGH_SCHOOL_RANK,
PRIOR_COLLEGE_CODE,
PRIOR_COLLEGE_CODE_DESC,
PRIOR_COLLEGE_DEGREE_CODE,
PRIOR_COLLEGE_DEGREE_CODE_DESC,
PRIOR_COLLEGE_CITY,
PRIOR_COLLEGE_STATE,
ADMIT_FLAG,
GENERAL_STUDENT_FLAG,
CURRENT_ENROLLMENT_FLAG,
LETTER_CODES,
CONTACT_CODES,
COMMENT_CODES,
DIRECTORY_EMAIL,
ADDR_DIVISION_CODE,
HIGH_SCHOOL_CLASS_SIZE,
ETHNICITY,
RACE_CODE,
REGULATORY_RACE,
INT_LANG
FROM <TABLE_NAME>
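For reference, the SUBSTR truncation mentioned in the question would look something like this in the source query; the target length of 255 is illustrative, not the actual destination definition:
SELECT
PIDM,
-- truncate a VARCHAR2(4000) source field to fit the narrower destination column
SUBSTR(ADDR_STR_LINE1, 1, 255) AS ADDR_STR_LINE1
FROM <TABLE_NAME>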
Troubleshooting steps from the comments
If you run the all-column version of the query in SQL Developer (or whatever the Oracle query tool is) using the same credentials as the SSIS package, do you get 28k rows or 1M?
1M records are returned in SQL Developer when I use the same credentials SSIS is using.
As painful as it may be, I would add one column at a time, run, and observe the results. The first time you see a drop in row count, interrogate the heck out of the source data (data type, collation, whether some permission thing is at play). If nothing seems out of place, edit the question to include the full table definition and identify the first source column that throws the results off.
I've done that, column by column. I've even added a column that already existed (ADDR_STR_LINE1) as ORIGIN_STR_LINE1 and just aliased it, knowing that ADDR_STR_LINE1 had already worked and both fields share the exact same datatype/length. I just ran it with this code: SELECT PIDM, ORIGIN_STR_LINE1, ORIGIN_STR_LINE2, ORIGIN_STR_LINE3, ORIGIN_CITY, ORIGIN_COUNTY, ORIGIN_NATION, ORIGIN_STATE, ORIGIN_ZIPCODE FROM ODSMGR.RECRUIT_PERSON_OSU and it returned 1M records.
While it's little consolation, you're hitting all the troubleshooting steps I'd employ. I suppose the next item I would try to rule out is some bizarre row-width issue/bug. Add a new Data Flow. As your source query, take one of your VARCHAR2(4000) fields and duplicate it 60 times, i.e. SELECT ADDR_STR_LINE1 AS Col0, ADDR_STR_LINE1 AS Col1, ..., ADDR_STR_LINE1 AS Col59 FROM Owner.Table, and connect that to a Derived Column task (it doesn't need to do anything, just serve as an anchor point) and run it. Do you get 1M or 28k?
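Spelled out, that row-width test would be (middle aliases elided; the pattern just repeats through Col59):
-- duplicate one wide VARCHAR2(4000) column many times to stress row width
SELECT
ADDR_STR_LINE1 AS Col0,
ADDR_STR_LINE1 AS Col1,
ADDR_STR_LINE1 AS Col2,
-- ... Col3 through Col58 follow the same pattern ...
ADDR_STR_LINE1 AS Col59
FROM ODSMGR.RECRUIT_PERSON_OSU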
Adding more of my troubleshooting steps. 1) Created a view off the original table, casting all of the fields that would need to be truncated as VARCHAR(proper length based on the destination table). 2) Added/subtracted fields piecemeal until I thought I had a stable query, knowing that if I added <this field>, <this many rows> would be dropped. But, for instance, I added PRIOR_COLLEGE_CITY and the first time my counts dropped from 1,063,202 to 952,755, but when I ran it again later, the counts dropped from 1,063,202 to 953,989. So even if it were a data issue (it's not), it's not a consistent one.
Once I got my 953,989 rows into the destination table, I compared which PRIOR_COLLEGE_CITY records were missing. In the Source Data Flow, I explicitly queried for those records, and they loaded fine, so again, not a data issue.
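A minimal sketch of that kind of spot check, assuming the missing rows were identified by PIDM (the PIDM values here are hypothetical):
-- explicitly query the source for rows that never arrived in the destination
SELECT PIDM, PRIOR_COLLEGE_CITY
FROM ODSMGR.RECRUIT_PERSON_OSU
WHERE PIDM IN (12345, 23456, 34567)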
According to the picture you provided, records are lost when the Source component outputs them, so we can determine that the problem occurs in the Source component.
In this case, please check the following things:
1. Run the query (not the view, but the query inside the view) in your Oracle environment at the same time the Source component executes it, then check whether the number of records returned from the Oracle environment equals the number of records returned by the SSIS Source component. Do this in a separate Data Flow task.
2. Check whether there have been any changes to the source table.
3. If the results returned from the Oracle environment are correct, compare them with the results the SSIS Source returned and analyze the missing data.
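For step 1, a simple count comparison on both sides is usually enough; the source table name is taken from the comments above, and the destination table name is a hypothetical stand-in:
-- in Oracle (SQL Developer), against the view's underlying query
SELECT COUNT(*) FROM ODSMGR.RECRUIT_PERSON_OSU;
-- in SQL Server, against the destination table after the package runs
SELECT COUNT(*) FROM dbo.RECRUIT_PERSON_OSU;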
I had a similar problem, mostly with the ODBC driver for Oracle!
The problem lies not only in the number of rows returned; in my case, for some reason, it also grouped the values of the first column.
The only solution I've found is to use a driver other than ODBC and OLE DB.
Using the native Oracle Source and Oracle Destination components in VS 2017, it worked perfectly, and the performance was also better than ODBC and OLE DB.
I was having a similar issue: 1,470,491 rows in the Oracle view that I was querying. All of them would come across when I ran the package in Visual Studio, but only 377,257 rows would be read when I ran the package from SQL Agent. I tried the SQLChick "UseSessionFormat" hack that you mentioned. While editing the connection string used by the job (it comes in from configuration), I noticed that the connection string in the package had a "USERNAME" parameter as well as a "user id" parameter, but the configuration used by SQL Agent only had "USERNAME". I added the "user id" parameter to the configuration used by SQL Agent, and after that, the job retrieves all 1,470,491 rows.

An item with the same key has already been added - No duplicate column in query

When I run the SSRS report in SharePoint it runs fine, but when I try to execute it in the development environment, it results in a blank report. No data is displayed.
Also, when I try to refresh the fields of the dataset used in this report, I am unable to. It shows the error "could not update a list of fields for the query. Verify that you can connect to the data source and that your query syntax is correct", and the details show "An item with the same key has already been added".
I have checked my data source connection and it is fine, and I could not find anything wrong with my SQL query either. It gives the correct result when I run it as a query. I also checked that I have not used any column twice while aliasing.
There is probably a duplicate column name somewhere. Try wrapping your whole dataset query in a subquery,
for example: select * from ( [query from ssrs dataset] ) ssrs.
Execute that statement in MSSQL and see whether it raises an error.
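As a minimal illustration of why the subquery trick surfaces the problem (table and column names are made up): a query that silently returns two columns with the same name runs fine on its own, but fails as soon as it is wrapped.
-- runs, but produces two result columns both named CustomerID
SELECT o.CustomerID, c.CustomerID
FROM Orders o
JOIN Customers c ON c.CustomerID = o.CustomerID;

-- wrapped as a subquery, SQL Server rejects the duplicate:
-- "The column 'CustomerID' was specified multiple times for 'ssrs'."
SELECT *
FROM (
SELECT o.CustomerID, c.CustomerID
FROM Orders o
JOIN Customers c ON c.CustomerID = o.CustomerID
) ssrs;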

Pivot off of access query is misbehaving

So I have a document with a pivot table that was previously linked to an Access query. It worked fine, no issues. The problem started when I wanted to change which database the pivot table was linked to. When I tried to change the external data source to the new database, Excel gave me the "cannot complete the task with available resources" error. I find this error can be a little finicky, so I tried deleting the pivot and creating a new pivot with the link I want, except now the pivot comes up empty. It pulls in the column headers (in the pivot editor thing) but no data comes up when I add fields to the pivot. I should also add that the new database is exactly the same as the old one; the only difference is a new column of data.
Any thoughts? This is driving me crazy. It might be that the results from Access are too big for Excel to process, but I've been paring down the results and none of it makes a difference.

SQL delete row error

I am trying to delete a row in SQL Server Management Studio 2012, but an error appears:
No rows were deleted.
A problem occurred attempting to delete row 2.
Error Source: Microsoft.SqlServer.Management.DataTools
Error Message: The row value(s) updated or deleted either do not make the row unique or they alter multiple rows (2 rows).
Is there a way to fix that error without typing any query?
Thanks @Hani.
I had the same problem (actually a table with a unique ID, but with some rows accidentally duplicated, including the "unique ID", so I couldn't delete the duplicate rows), and your advice helped me solve it from the SQL Server Management Studio GUI.
I used the GUI to "Edit Top 200 Rows" in the table.
I then added a filter in the SQL Criteria pane, which brought up just my two duplicate rows. (This was where I couldn't delete one of the rows from.)
Inspired by your comment, I opened the SQL Pane and changed the:
SELECT TOP(200)...{snip my criteria created by filter}
to instead read:
SELECT TOP(1)...{snip my criteria created by filter}
I was then able to "Execute SQL" the tweaked SQL.
I was then able to use the interface to Delete the single line shown (no warnings this time).
Re-running the SQL Criteria with 200 rows confirmed that just one row had been successfully deleted and one remained.
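In other words, the tweak was just narrowing the grid's underlying query to a single row. With hypothetical table and filter names, the before and after look like:
-- original grid query (criteria generated by the filter pane)
SELECT TOP (200) * FROM dbo.MyTable WHERE DupColumn = 'duplicated value';
-- tweaked so the grid shows, and can safely delete, exactly one duplicate
SELECT TOP (1) * FROM dbo.MyTable WHERE DupColumn = 'duplicated value';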
Thanks for the help, this proved to be the perfect blend of GUI and SQL code for me to get the job done safely and efficiently.
I hope this helps others in a similar situation.
You don't have a primary or unique key on your table.
SQL Server is unable to delete the row because nothing distinguishes it from the other rows.
The solution is to add a primary key to your table; it is not recommended to go without one anyway. A simple auto-incrementing integer should work transparently for you.
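A minimal sketch of that fix, assuming a hypothetical table name and that no primary key exists yet:
-- add an auto-incrementing integer primary key to a keyless table
ALTER TABLE dbo.theTable
ADD Id INT IDENTITY(1,1) NOT NULL
CONSTRAINT PK_theTable PRIMARY KEY;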
If you are trying to delete one of the duplicate rows in a table that has no unique identifier, you could try something like:
DELETE TOP (1) FROM theTable WHERE condition1 AND condition2 AND ...
You should test it first with a SELECT statement before you apply the delete query.
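For example, with hypothetical column names and values:
-- 1. verify what the conditions match before deleting anything
SELECT * FROM theTable
WHERE col1 = 'value1' AND col2 = 'value2';

-- 2. then remove just one of the matching duplicates
DELETE TOP (1) FROM theTable
WHERE col1 = 'value1' AND col2 = 'value2';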
I've also found that if you have a text or ntext column, you will get this error. I converted that column to NVARCHAR(MAX) and haven't had any problems since.
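The conversion itself is a one-liner; the table and column names here are hypothetical:
-- convert a legacy ntext column so the designer can build a unique DELETE
ALTER TABLE dbo.theTable ALTER COLUMN Notes NVARCHAR(MAX);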

Combining MSSQL results with Oracle into CFSPREADSHEET

I have a report that generates an excel file daily with data extracted from a MS-SQL database. I now have to add additional columns to my spreadsheet from an Oracle database where the ID matches the ID in the MS-SQL query results.
My problem is that I have about 1,200-1,400+ unique IDs generated on this report from the first query. When I plug them into an IN list for the Oracle query and try to do a CFDUMP to see if the results come out as they should, I receive a CF error saying the query cannot list more than 1,000 items (Oracle's limit on expressions in an IN list).
I basically set the values from the first query into a valueList for the ID column and then put that into the IN clause of the Oracle query. I then do a CFDUMP on the Oracle query, where I receive that error. I've also tried wrapping <cfloop query="firstquery"> around the Oracle query and just placing #firstquery.columnIDname# in the IN clause, but that does not work either.
So the two questions I have here are:
How do I handle Oracle's 1,000-item limit when I only have read-only access to the Oracle database from ColdFusion?
Once #1 is figured out, how can I combine the results from the Oracle query with my MSSQL query, or in other words, add the columns I'm pulling from the Oracle query to the spreadsheet for the matching IDs?
Thanks.
For your quick, dirty, and sub-optimal approach, visit cflib.org and look for a function called ListSplit(). It converts a long list to an array of short lists.
You then loop through this array and run a query each time. Make sure the query name changes with each loop iteration.
After the loop, do a query-of-queries UNION. Then do whatever you have to do to combine that data with what you got from SQL Server.
Note that you will probably have to use array notation to access your dynamically named query objects.
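If you would rather handle the 1,000-item limit on the Oracle side instead of splitting queries in ColdFusion, the usual workaround is to break the list into multiple IN predicates joined with OR, since Oracle's limit applies per expression list rather than per query. Table and column names here are illustrative:
SELECT id, oracle_col1, oracle_col2
FROM some_schema.some_table
WHERE id IN (/* first 1000 IDs */ 101, 102, 103)
OR id IN (/* next batch */ 1101, 1102, 1103);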
