sql delete row error - sql-server

I am trying to delete a row in SQL Server Management Studio 2012, but an error appears:
No rows were deleted.
A problem occurred attempting to delete row 2.
Error Source: Microsoft.SqlServer.Management.DataTools
Error Message: The row value(s) updated or deleted either do not make the row unique or they alter multiple rows (2 rows).
Is there a way to fix that error without typing any query?

Thanks @Hani.
I had the same problem (actually a table with a unique ID, but with some rows accidentally duplicated, including the "unique ID", so I couldn't delete the duplicate rows), and your advice helped me solve it from the SQL Server Management Studio GUI.
I used the GUI interface to "edit top 200 rows" in table.
I then added a filter in the SQL Criteria pane which brought up just my two duplicate rows. (This was the point where I couldn't delete one of the rows.)
Inspired by your comment, I opened the SQL Pane and changed the:
SELECT TOP(200)...{snip my criteria created by filter}
to instead read:
SELECT TOP(1)...{snip my criteria created by filter}
I was then able to "Execute SQL" the tweaked SQL.
I was then able to use the interface to Delete the single line shown (no warnings this time).
Re-running the SQL Criteria with 200 rows confirmed that just one row had been successfully deleted and one remained.
Thanks for the help, this proved to be the perfect blend of GUI and SQL code for me to get the job done safely and efficiently.
I hope this helps others in a similar situation.

You don't have a primary or unique key on your table.
SQL Server is unable to delete your row because nothing distinguishes it from the other rows.
The solution is to add a primary key to your table; it is not recommended to go without one anyway. A simple auto-incrementing integer should work transparently for you.
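A minimal sketch of that fix, using a hypothetical table name:

-- Hypothetical table name; adds an auto-incrementing integer primary key
-- so every row becomes uniquely addressable.
ALTER TABLE dbo.MyTable ADD Id INT IDENTITY(1,1) CONSTRAINT PK_MyTable PRIMARY KEY;

Once the key exists, SSMS can generate a DELETE that targets exactly one row.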

If you are trying to delete one of the duplicate rows in a table that has no unique identifier, you could try something like:
DELETE TOP (1) FROM theTable WHERE condition1 AND condition2 AND ...
You should test it first with a SELECT statement before you apply the delete query.
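For example, a hedged sketch with hypothetical column names and values:

-- Preview the row first (hypothetical names/values):
SELECT TOP (1) * FROM theTable WHERE Col1 = 'A' AND Col2 = 42;
-- Once the SELECT returns exactly the row you expect, delete it:
DELETE TOP (1) FROM theTable WHERE Col1 = 'A' AND Col2 = 42;

Note that DELETE TOP (1) without an ORDER BY removes an arbitrary matching row, which is fine here precisely because the duplicates are indistinguishable.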

I've also found that if you have a text or ntext column you will get this error. I converted that column to NVARCHAR(MAX) and haven't had any problems since.
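A minimal sketch of that conversion, with hypothetical table and column names:

-- Hypothetical names; text/ntext are deprecated, and converting to
-- NVARCHAR(MAX) lets SSMS generate correct statements when editing rows.
ALTER TABLE dbo.MyTable ALTER COLUMN Notes NVARCHAR(MAX);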

Related

Simple SSIS from Oracle to SQL Server drops rows

I have a very simple SQL query in my SSIS (VS 2017) Data Flow. It connects to Oracle via Native OLE DB\Oracle Provider for OLE DB and uses SQL Command to query the Oracle view. The destination table is a SQL Server 2017 table. If I query only the first 20 columns or so (I am querying 57 columns), I get all 1,060,000-ish records. As I start to add more columns, the row count drops.

I have already removed any date fields from both tables, and have done quite a few data conversions (the source table has several varchar2(4000) fields that need to be SUBSTR'd to reasonable lengths in the SQL destination table). All fields in the destination table are nullable. When I pull the SQL out of SSIS and run it in SQL Developer, I get the right row count. When I run it in SSIS, it drops from 1.06M rows to around 28k.

I already tried the SQLChick hack (https://www.sqlchick.com/entries/2012/9/2/resolving-missing-records-in-ssis-from-oracle-source.html); it doesn't work and causes connection errors. (I had to use VS Code to add that property to my Oracle connection; when I went back to VS, the connection was broken, and when I opened it back up to re-enter connection credentials, the extra property got dropped.) I have reduced and increased the Rows per Batch and Maximum insert commit size values to no avail. I have also set the RetainSameConnection property to True for all the Connection Managers. I'm at a loss! (As you can see from the pics, both jobs finish "successfully".)
This code returns all records:
SELECT
PIDM,
STUDENT_ID,
LAST_NAME,
FIRST_NAME,
MIDDLE_NAME,
LFM_NAME,
FML_NAME,
SORT_NAME,
GENDER,
ETHNIC_CODE,
ETHNIC_CODE_DESC,
LEGACY_CODE,
LEGACY_CODE_DESC,
ADDR_STR_LINE1,
ADDR_STR_LINE2,
ADDR_STR_LINE3,
ADDR_CITY,
ADDR_COUNTY,
ADDR_STATE,
ADDR_NATION,
ADDR_ZIPCODE,
ADDR_AREA_CODE,
ADDR_PHONE
FROM <TABLE_NAME>
This code returns only 28k:
SELECT
PIDM,
STUDENT_ID,
LAST_NAME,
FIRST_NAME,
MIDDLE_NAME,
LFM_NAME,
FML_NAME,
SORT_NAME,
GENDER,
ETHNIC_CODE,
ETHNIC_CODE_DESC,
LEGACY_CODE,
LEGACY_CODE_DESC,
ADDR_STR_LINE1,
ADDR_STR_LINE2,
ADDR_STR_LINE3,
ADDR_CITY,
ADDR_COUNTY,
ADDR_STATE,
ADDR_NATION,
ADDR_ZIPCODE,
ADDR_AREA_CODE,
ADDR_PHONE,
ORIGIN_STR_LINE1,
ORIGIN_STR_LINE2,
ORIGIN_STR_LINE3,
ORIGIN_CITY,
ORIGIN_COUNTY,
ORIGIN_NATION,
ORIGIN_STATE,
ORIGIN_ZIPCODE,
EMAIL,
HIGH_SCHOOL_CODE,
HIGH_SCHOOL_CODE_DESC,
HIGH_SCHOOL_CITY,
HIGH_SCHOOL_STATE,
HIGH_SCHOOL_GPA,
HIGH_SCHOOL_RANK,
PRIOR_COLLEGE_CODE,
PRIOR_COLLEGE_CODE_DESC,
PRIOR_COLLEGE_DEGREE_CODE,
PRIOR_COLLEGE_DEGREE_CODE_DESC,
PRIOR_COLLEGE_CITY,
PRIOR_COLLEGE_STATE,
ADMIT_FLAG,
GENERAL_STUDENT_FLAG,
CURRENT_ENROLLMENT_FLAG,
LETTER_CODES,
CONTACT_CODES,
COMMENT_CODES,
DIRECTORY_EMAIL,
ADDR_DIVISION_CODE,
HIGH_SCHOOL_CLASS_SIZE,
ETHNICITY,
RACE_CODE,
REGULATORY_RACE,
INT_LANG
FROM <TABLE_NAME>
Troubleshooting steps from the comments
If you run the all-column version of the query in SQL Developer (or whatever the Oracle query tool is) using the same credentials as the SSIS package, do you get 28k rows or 1M?
1M records are returned in SQL Developer when I use the same credentials SSIS is using.
As painful as it may be, I would add 1 column, run, and observe the results. The first time you see a drop in row count, interrogate the heck out of the source data (data type, collation, whether some permission thing is at play). If nothing seems out of place, edit the question to include the full table definition and identify which source column first throws the results off.
I've done that, column by column. I've even added a column that already existed (ADDR_STR_LINE1) as ORIGIN_STR_LINE1 and just aliased it, knowing that ADDR_STR_LINE1 had already worked and both fields shared the exact same datatypes/lengths. I just ran it with this code:
SELECT PIDM, ORIGIN_STR_LINE1, ORIGIN_STR_LINE2, ORIGIN_STR_LINE3, ORIGIN_CITY, ORIGIN_COUNTY, ORIGIN_NATION, ORIGIN_STATE, ORIGIN_ZIPCODE FROM ODSMGR.RECRUIT_PERSON_OSU
and it returned 1M records.
While it's little consolation, you're hitting all the troubleshooting steps I'd employ. I suppose the next item I would try to rule out is some bizarre row-width issue/bug. Add a new data flow. As your source query, take one of your varchar2(4000) fields and duplicate it 60 times, i.e. SELECT ADDR_STR_LINE1 AS Col0, ADDR_STR_LINE1 AS Col1, ..., ADDR_STR_LINE1 AS Col59 FROM Owner.Table, and connect that to a Derived Column task (it doesn't need to do anything, just serve as an anchor point) and run it. Do you get 1M or 28k?
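A sketch of the diagnostic query that comment describes, extending the same expression out to Col59 (owner and table names as used elsewhere in this question):

SELECT ADDR_STR_LINE1 AS Col0,
       ADDR_STR_LINE1 AS Col1,
       ADDR_STR_LINE1 AS Col2,
       -- ...repeat the same expression for Col3 through Col58...
       ADDR_STR_LINE1 AS Col59
FROM ODSMGR.RECRUIT_PERSON_OSU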
Adding more of my troubleshooting steps. 1) Created a view off the original table, casting all of the fields that would need to be truncated as VARCHAR(proper length based on the destination table). 2) Added/subtracted fields piecemeal until I thought I had a stable query, knowing that if I added <these fields>, <this many rows> would be dropped. But, for instance, I added PRIOR_COLLEGE_CITY and the first time my counts dropped from 1063202 to 952755, but when I ran it again later the counts dropped from 1063202 to 953989. So even if it were a data issue (it's not), it's not a consistent one.
Once I got my 953989 rows into the destination table, I compared which PRIOR_COLLEGE_CITY records were missing. In the Source Data Flow, I explicitly queried for those records, and they loaded fine, so again, not a data issue.
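A hedged sketch of that comparison, assuming both the full and partial extracts have been staged in SQL Server under hypothetical table names:

-- Hypothetical staging tables; the anti-join returns the rows SSIS dropped.
SELECT f.PIDM
FROM dbo.FullExtract AS f
WHERE NOT EXISTS (SELECT 1 FROM dbo.PartialLoad AS p WHERE p.PIDM = f.PIDM);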
According to the picture you provided, some records are lost when the Source component outputs them, so we can determine that the problem occurs in the Source component.
In this case, please try to check the following things:
1. Run the query (not the view, but the query inside the view) in your Oracle environment at the same time you execute it in the Source component, then check whether the number of records returned from the Oracle environment is equal to the number returned by the SSIS Source component. Do this in a separate data task; a minimal count check is sketched below.
2. Check whether there have been any changes to the source table.
3. If the results returned by the query in the Oracle environment are correct, compare them with the results returned by the SSIS Source and analyze the missing data.
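For step 1, a minimal count check to run in both environments, using the placeholder table name from the question:

-- Run in SQL Developer and via the SSIS source; the counts should match.
SELECT COUNT(*) FROM <TABLE_NAME>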
I had a similar problem, mostly with the ODBC driver for Oracle!
The problem lies not only in the number of rows returned; in my case, for some reason, it also grouped the values of the first column.
The only solution I've found is to use another driver besides ODBC and OLE DB.
Using the native Oracle Source and Oracle Destination in VS 2017, it worked perfectly, and the performance was also better than ODBC and OLE DB.
I was having a similar issue: 1,470,491 rows in the Oracle view that I was querying would all come across when I ran the package in Visual Studio, but only 377,257 rows would be read when I ran the package from SQL Agent. I tried the SQLChick "UseSessionFormat" hack that you mentioned. While editing the connection string used by the job (it comes in from configuration), I noticed that the connection string in the package had a "USERNAME" parameter as well as a "user id" parameter, but the configuration used by SQL Agent only had "USERNAME". I added the "user id" parameter to the configuration used by SQL Agent, and after that the job retrieved all 1,470,491 rows.
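For illustration only, a hypothetical sketch of the difference described above (property names as reported in the answer; server and user values are made up):

Package connection string (works): Data Source=MyOracle;USERNAME=etl_user;user id=etl_user;...
SQL Agent configuration (drops rows; "user id" missing): Data Source=MyOracle;USERNAME=etl_user;...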

Excel Query Won't Update

I have put a query that connects to SQL Server into an Excel spreadsheet (formatted as a table). I update this query daily. Today, when I tried to refresh the query, it generated an error and would not load new data.
The error says:
"This won't work because it would move cells in a table on your worksheet."
All the research I've done on this error seems to relate to users attempting to insert rows, but that's not what I'm doing.
My query will only return two rows, plus the headers. It's not possible for the query to return anything more, so cells shouldn't move in the table.
Any suggestions to solve this error?

Remove duplicate rows from a SQL Server table using DISTINCT

I need to remove duplicated rows in SQL Server when importing a file into the database, using the DISTINCT method.
HallGroup is my table in the database. I'm using this SQL procedure:
SELECT DISTINCT * INTO tempdb.dbo.tmpTable
FROM HallGroup
DELETE FROM HallGroup
INSERT INTO HallGroup SELECT * FROM tempdb.dbo.tmpTable
DROP TABLE tempdb.dbo.tmpTable
This procedure works fine and the duplicated rows are deleted, but the problem is that when I try to import data into SQL Server again, the rows are duplicated once more. What am I missing? Any hint?
How do I properly remove duplicated rows in SQL Server when importing a file into the database with the DISTINCT method?
I am just getting back into SQL after being out for a bit, but I would not have solved your problem the way you are trying to (not that I completely understand why you are doing it that way), as I believe that (even if it were working correctly) your process will take longer each time you run it as the size of the table increases.
It would be much more efficient to insert the new data based on the absence of a key (you indicate you are already using a stored proc). If you don't have a key to use (which very recently happened to me), make one. I just solved a similar problem where I was importing data into a table from an external source and wanted to eliminate the possibility of duplicates. In my case, I associate the name of the external source datafile (which is distinct per dataset to import) with the data to be imported and use that to ensure I am not re-importing already-imported data. I load the external data into a table using a dtsx and then run a stored proc to merge that data with the existing table. This gives me the added advantage of having an audit trail of where each record came from.
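A minimal sketch of that key-based approach, with hypothetical staging-table and column names:

-- Insert only the staged rows whose key is not already in HallGroup.
INSERT INTO HallGroup (GroupId, GroupName)
SELECT s.GroupId, s.GroupName
FROM dbo.StagingHallGroup AS s
WHERE NOT EXISTS (SELECT 1 FROM HallGroup AS h WHERE h.GroupId = s.GroupId);

This keeps each import idempotent: re-running it adds nothing that is already there.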
Hope this helps.

How do I turn off the Results pane error message when editing a row

I am currently editing data in a table in SQL Server Management Studio 2012. Each time I move to the next row, it prompts me to commit the change.
Is there any way to turn this off so it just saves automatically, without asking me to commit after every row?
This is an issue in SSMS that occurs when:
1. The table contains one or more columns of the text or ntext data type.
2. The value of one of these columns contains the following characters: percent sign (%), underscore (_), or left bracket ([).
3. The table does not contain a primary key.
Apparently, SSMS generates an incorrect UPDATE statement in this situation.
To prevent this error, run an UPDATE statement from the query window instead.
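For example, a hypothetical update run from a query window rather than the results grid:

-- Hypothetical table/column names; bypasses the grid's generated statement.
UPDATE dbo.MyTable SET Notes = 'corrected value' WHERE SomeOtherColumn = 'row identifier';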
See the Microsoft support article (http://support.microsoft.com/kb/925719) for more information. As the article says, this only happens in the 2005 version; I'll leave this here for reference.
Someone else posted the correct answer but then deleted it; I'm not sure why.
The answer was contained here: http://support.microsoft.com/kb/925719
I had to convert my text fields to varchar(max).

Does SQL Server transform text in any way when performing a BULK INSERT?

I'm trying to use a BULK INSERT statement to populate a large (17-million-row) table in SQL Server from a text file. One column, of type nchar(17) has a UNIQUE constraint on it. I've checked (using some Python code) that the file contains no duplicates, but when I execute the query I get this error message from SQL Server:
Cannot insert duplicate key row in object 'dbo.tbl_Name' with unique index 'IX_tbl_Name'.
Could SQL Server be transforming the text in some way as it executes BULK INSERT? Do SQL Server databases forbid any punctuation marks in nchar columns, or require that any be escaped? Is there any way I can find out which row is causing the trouble? Should I switch to some other method for inserting the data?
Your collation settings on the column could be causing data to be seen as duplicate, whereas your code may see it as unique. Things like accents and capitals can cause issues under certain collation settings.
Another thought: empty or null values can also count as duplicates (a SQL Server unique index allows only one NULL), so your Python code may not have found any text duplicates, but what about the empties?
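One way to check both possibilities is to group the incoming data under a case- and accent-insensitive collation; a sketch with hypothetical staging-table and column names:

-- Latin1_General_CI_AI treats 'Abc', 'abc', and 'åbc' as equal;
-- any group with a count above 1 would violate the unique index.
SELECT KeyCol COLLATE Latin1_General_CI_AI AS NormalizedKey, COUNT(*) AS Dupes
FROM dbo.StagingTable
GROUP BY KeyCol COLLATE Latin1_General_CI_AI
HAVING COUNT(*) > 1;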
