I get a null value when doing SELECT SCOPE_IDENTITY() after an insert. Note that at the time I wrote the code, it worked like a charm. The code is in classic ASP.
The query I use is:
INSERT INTO EID_Mandates([Uidx],[NatNr],[FirstName],[LastName],[ValidFrom],[ValidUntil],[Active])
VALUES(24387,'1234567','Paul','Tergeist','09/11/2015','09/11/2018',1);
SELECT SCOPE_IDENTITY() As MYID
When executing, the INSERT is OK, but no value is returned (rs.Fields.Count = 0).
The ID is the id_mandate field (auto-increment).
I am puzzled!
EDIT: Nov 10, 10:56 am.
Okay, after accepting CPMunich's solution, I continued to search for the cause of the problem, because programming is not magic: you expect logical results from logical code.
What I discovered is even more frustrating.
1) The insertion code worked when I tested it, that is for sure, and no, I did not change anything in it.
2) I did some tests, namely using a simple script to check if SCOPE_IDENTITY() was still working... Yes.
3) Then, I slightly modified my previous code to go through a test function which returned (and displayed) the ID. Nope, there was exactly 1 field, containing nothing (NULL). In that very function I added the code to open another connection (the same code I use to open the main one). Dang, it worked!
4) I logged the SQL produced in both cases. Except for the datetime values and IDs, exactly the same.
So at this point, I know that my connection is somehow corrupted, but I have no idea why.
Finally, I must add that the whole server had a big problem some months ago in the form of a rotten MS update (KB2992611) which made the whole thing incredibly slow (this bug affected the way IIS connects via LSASS to the MSSQL server). In short, the server is 10 times slower for each and every query.
Not a problem; I am currently doing all the work to migrate to another one.
Conclusion: unsure, sadly. If I make new discoveries, I will keep you guys informed.
Thanks guys for your help, and Stack Overflow for being there!
Remark: when you think about it, after all, PHP is also quite "ancient", no? :)
Maybe this is an Option for you:
INSERT INTO EID_Mandates([Uidx],[NatNr],[FirstName],[LastName],[ValidFrom],[ValidUntil],[Active])
OUTPUT inserted.id_mandate
VALUES(24387,'1234567','Paul','Tergeist','09/11/2015','09/11/2018',1);
You will get back a recordset with the column id_mandate. If the insert fails, the recordset will be empty.
Make sure:
1. Both statements are executed using the same SqlCommand.
2. You have not set up an insert trigger for that table.
3. If you have set up an insert trigger for that table, use SCOPE_IDENTITY().
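If you keep the SCOPE_IDENTITY() approach, a common pitfall in classic ASP/ADO is that the INSERT itself produces a "rows affected" result, so the first recordset handed back is a closed one with no fields and the SELECT is only reachable via NextRecordset. Here is a minimal sketch of a single batch that avoids this, reusing the values from the question (assuming both statements are sent in one Execute call):
SET NOCOUNT ON; -- suppress the "rows affected" result so the SELECT becomes the first recordset
INSERT INTO EID_Mandates([Uidx],[NatNr],[FirstName],[LastName],[ValidFrom],[ValidUntil],[Active])
VALUES(24387,'1234567','Paul','Tergeist','09/11/2015','09/11/2018',1);
SELECT SCOPE_IDENTITY() AS MYID;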
Related
I am using DbVisualizer 8.0.12 as the client tool against an MS SQL Server 2012 database.
I want to perform a simple update:
update table1 set field1=0 where field2='something';
I expect exactly one row to be updated, since field2 is primary key of table1.
Also, doing a:
select * from table1 where field2='something';
returns exactly one row.
But when executing the UPDATE SQL, DbVisualizer informs me that two updates were successfully executed.
11:16:58 [UPDATE - 1 row(s), 0.003 secs] Command processed
11:16:58 [UPDATE - 1 row(s), 0.003 secs] Command processed
... 2 statement(s) executed, 2 row(s) affected, exec/fetch time: 0.006/0.000 sec [2 successful, 0 warnings, 0 errors]
I don't understand why there are two updates performed. Shouldn't there be only one update?
Can anybody please advise? Thank you in advance for any kind of information.
[EDIT]
I have used MS SQL Server profiler, as #TomTom suggested.
And I also ran my SQL update using Microsoft SQL Server Management Studio.
Things I had to turn on for the profiler (and for my needs) were:
1. 'Trace properties > Events Selection > Column Filters > Database name – Like: my_db_name', since we have a lot of databases on the server and I wanted to trace only my database named 'my_db_name'
2. 'Trace properties > Events Selection > Stored procedures > enable SP:StmtStarting and SP:StmtCompleted', since I wanted to enable trigger tracing
It seems that this info message from DbVisualizer is misleading (this happens only for tables that have triggers; in this particular case the trigger inserted data into another table, a so-called archive table, on every update). Actually, only one update was done, so all is fine there.
Microsoft SQL Server Management Studio shows correct info: 1 update and 1 insert.
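If you want to double-check which triggers are attached to the table that produced the second statement, a quick catalog query like this should list them (a sketch; the schema and table name are assumptions taken from the question):
-- list triggers defined on dbo.table1, including whether they are disabled
SELECT t.name AS trigger_name, t.is_disabled
FROM sys.triggers AS t
WHERE t.parent_id = OBJECT_ID('dbo.table1');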
Hope this will help someone having a similar "problem". #TomTom, please put your comment as an answer, so I can give you credit for it. Thank you.
Still, there is one more thing I would like to know about the Profiler.
Is there a way you can actually see which rows (in which table) will be updated?
From the information I have above, I can only see that there was one update (so I am assuming it is the one on table1, which I expected). But I would like to see information such as: in table 'tablename', these rows will be updated with these values, or something like that...
Is this possible with the Profiler?
Consider doing some work yourself. It is obvious that there are 2 commands issued. First, trace them - with the profiler - and check whether they are what you think they are.
SQL Server comes with a decent profiler out of the box. Old rule when you do stuff like that: NEVER assume, always validate. The statements may not even be the same... as long as you do not know that... it is all a wild guess.
[EDIT]
#TomTom
Hmmm, maybe not.
I think you had enough time to think about it ...
Your answer wasn't helpful at all (except for the little ray of light in the confirming form of:
"Yes, SQL server has profiler included, DAAAH ..."
with no constructive suggestions of your own and a lot of "being a smarty guy").
An answer to a question should include some more useful information and concrete guidance if you have it; otherwise, don't be a smartass.
Since I did all the work without your help, I think you don't actually deserve credit for it.
The funny thing about it is that you ACTUALLY think you do.
No comment on that, except that I really have ZERO (0.000000000000000000000 > is it going to change, hmmm, let's see ... 0000000000000000000000000000000000000000000000000000000000000000000000000000000000000 ... well, I guess not > that is a little bit of smartass for you :) tolerance policy with smartasses like you.
[END OF EDIT]
Thank you in advance for your time and answers.
I'm doing some updates to an old Classic ASP website. There is a table which contains several columns of text data and a datetime field. I need to get a list of unique years from all of the values in the table. I've tried this:
set objConnection = Server.CreateObject("ADODB.Connection")
objConnection.ConnectionString = "Provider=SQLNCLI10;Server=localhost;Database=mydb;Uid=myuser;Pwd=something;"
objConnection.Open
set objRst = objConnection.execute("SELECT DISTINCT(YEAR(report_date)) AS report_year FROM report;")
if not objRst.eof then
do while not objRst.eof
response.write objRst("report_year")
objRst.movenext
loop
end if
But when I run this script in the page it just does nothing - eventually the script times out.
Can anyone suggest how to accomplish this? Thanks!
This is going to be an index problem or a locking problem (or both).
First, try to select the data using the NOLOCK hint - this means your select query won't wait for any uncommitted transactions:
set objRst = objConnection.execute("SELECT YEAR(report_date) AS report_year FROM report (NOLOCK) GROUP BY YEAR(report_date)")
do while not objRst.eof
response.write objRst("report_year")
objRst.movenext
loop
If that still hangs, it would suggest an index issue. To sort that out you'll need to run the query in SSMS (if you can) and see how long it takes there. If it takes ages in SSMS, it would suggest you need to create an index on the report_date column (see the sketch after the query below); if it's quick in SSMS then it's something to do with your ASP, i.e. is the connection open? Have you tried doing a simple query to make sure that works? E.g.
SELECT TOP 1 YEAR(report_date)
FROM report
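If it does turn out to be an index problem, a minimal index on the date column might look like this (a sketch; the index name is made up):
-- narrow nonclustered index so the YEAR(report_date)/GROUP BY query only has to scan this index
CREATE NONCLUSTERED INDEX IX_report_report_date
    ON dbo.report (report_date);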
EDIT: just noticed your comment regarding the fact that there are only 5 rows in the table - so it's probably not an indexing issue! However, the NOLOCK hint and GROUP BY (as opposed to DISTINCT) might help.
It would be interesting to see an execution plan of this too (i.e. to make sure there are no triggers slowing things down).
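One cheap way to look at the plan without the SSMS GUI is to ask the server for the textual estimated plan (a sketch; run each batch separately):
SET SHOWPLAN_TEXT ON;
GO
-- in this mode the query is not executed; only the estimated plan is returned
SELECT YEAR(report_date) AS report_year
FROM report
GROUP BY YEAR(report_date);
GO
SET SHOWPLAN_TEXT OFF;
GO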
We've got a weird problem with joining tables from SQL Server 2005 and MS Access 2003.
There's a big table on the server and a rather small table locally in Access. The tables are joined via 3 fields, one of them a datetime field containing a day; the idea is to fetch additional data (daily) from the big server table to add to the local table.
Up until the weekend this ran fine every day. Since yesterday we have experienced strange non-time-outs in Access with this query. Non-time-out means that the query runs forever with rather high network transfer, but no timeout occurs. Access doesn't even show the progress bar. A server trace tells us that the same query is executed over and over on the SQL server without error, but without result either. We've narrowed it down to the problem seemingly being access to the big server table with either a JOIN or a WHERE containing a date, but we're not really able to narrow it down further. We rebuilt indices already and are currently restoring backup data, but maybe someone here has pointers to things we could try.
Thanks, Mike.
If you join a local table in Access to a linked table in SQL Server, and the query isn't really trivial according to specific limitations of joins to linked data, it's very likely that Access will pull the whole table from SQL Server and perform the join locally against the entire set. It's a known problem.
This doesn't directly address the question you ask, but how far are you from having all the data in one place (SQL Server)? IMHO you can expect the same type of performance problems to haunt you as long as you have some data in each system.
If it were all in SQL Server a pass-through query would optimize and use available indexes, etc.
Thanks for your quick answer!
The actual query is really huge; you won't be happy with it :)
However, we've narrowed it down to a simple:
SELECT * FROM server_table INNER JOIN local_table ON server_table.date = local_table.date;
The server_table is a big table (hard to say; we've got 1.5 million rows in it; test tables with 10 rows or so have worked) and the local_table is a table with a single cell containing a date. This runs forever. It's not only slow, it just does nothing besides - it seems - causing network traffic, and there is no timeout (this is what I find so strange; normally you get a timeout, but this just keeps on running).
We've just found KB article 828169; seems to be our problem, we'll look into that. Thanks for your help!
Use the DATEDIFF function to compare the two dates as follows:
' DATEDIFF returns 0 if dates are identical based on datepart parameter, in this case d
WHERE DATEDIFF(d,Column,OtherColumn) = 0
DATEDIFF is optimized for use with dates. Comparing the result of the CONVERT function on both sides of the equal (=) sign might result in a table scan if either of the dates is NULL.
Hope this helps,
Bill
Try another syntax? Something like:
SELECT * FROM BigServerTable b WHERE b.DateFld in (SELECT DISTINCT s.DateFld FROM SmallLocalTable s)
The strange thing in your problem description is "Up until the weekend this ran fine every day".
That would mean the problem is really somewhere else.
Did you try creating a new blank Access db and importing everything from the old one?
Or just refreshing all your links?
Please post the query that is doing this; just because you have indexes doesn't mean that they will be used. If your WHERE or JOIN clause is not sargable, then the index will not be used.
Take this, for example:
WHERE CONVERT(varchar(49),Column,113) = CONVERT(varchar(49),OtherColumn,113)
That will not use an index.
Or this:
WHERE YEAR(Column) = 2008
Functions on the left side of the operator (meaning on the column itself) will make the optimizer do an index scan instead of a seek, because it doesn't know the outcome of that function.
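For the YEAR(Column) = 2008 case above, a sargable rewrite keeps the bare column on the left and moves the work to constants (a sketch; the column name is kept from the example):
-- same logical filter, but the optimizer can now seek on an index over [Column]
WHERE [Column] >= '20080101'
  AND [Column] <  '20090101'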
We rebuilt indices already and are currently restoring backup data, but maybe someone here has pointers to things we could try.
Access can kill many good things... have you looked into blocking at all?
Run:
exec sp_who2
Look at the BlkBy column and see who is blocking what.
Just an idea, but in SQL Server you can attach your Access database and use the table there. You could then create a view on the server to do the join all in SQL Server. The solution proposed in the Knowledge Base article seems problematic to me, as it's a kludge (if LIKE works, then = ought to, also).
If my suggestion works, I'd say that it's a more robust solution in terms of maintainability.
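To sketch the idea of this answer (the linked-server name, provider, file path, and view name below are all assumptions, not from the question): expose the .mdb as a linked server and do the date join in a server-side view.
-- link the Access file (Jet provider; SQL Server 2005-era, 32-bit)
EXEC sp_addlinkedserver
    @server     = 'AccessData',
    @srvproduct = 'Access',
    @provider   = 'Microsoft.Jet.OLEDB.4.0',
    @datasrc    = 'C:\data\local.mdb';
GO
-- the join now runs entirely on the SQL Server side
CREATE VIEW dbo.vw_ServerPlusLocal AS
SELECT s.*
FROM dbo.server_table AS s
INNER JOIN AccessData...access_table AS a
    ON s.[date] = a.[date];
GO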
On a SQL Server 2005 database, one of our remote developers just checked in a change to a stored procedure that changed a "select scope_identity" to "select @@identity". Do you know of any reasons why you'd use @@identity over scope_identity?
@@IDENTITY will return the last identity value issued in the current session, in any scope. SCOPE_IDENTITY() returns the last identity value in the current session and the same scope. They are usually the same, but suppose a trigger fires which inserts something somewhere as part of the current statement: @@IDENTITY will return the identity value from the trigger's INSERT statement, not from the INSERT statement of the block. It's usually a mistake unless he knows what he's doing.
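To make the difference concrete, here is a small, self-contained sketch (all object names are made up for illustration):
-- a table with an identity column and a trigger that inserts into a second identity table
CREATE TABLE dbo.Orders   (OrderId INT IDENTITY(1,1) PRIMARY KEY, Amount MONEY);
CREATE TABLE dbo.AuditLog (AuditId INT IDENTITY(100,1) PRIMARY KEY, Note VARCHAR(50));
GO
CREATE TRIGGER trg_Orders_Audit ON dbo.Orders AFTER INSERT AS
    INSERT INTO dbo.AuditLog (Note) VALUES ('order inserted');
GO
INSERT INTO dbo.Orders (Amount) VALUES (9.99);

SELECT SCOPE_IDENTITY() AS ScopeIdentity, -- 1: from dbo.Orders, same scope
       @@IDENTITY       AS AtAtIdentity;  -- 100: from the trigger's insert into dbo.AuditLog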
Here is a link that may help differentiate them
It looks like:
@@IDENTITY - last identity on the connection
SCOPE_IDENTITY() - last identity you explicitly created (excludes triggers)
IDENT_CURRENT('tablename') - last identity in the table, regardless of scope or connection.
I can't think of any, unless there was a trigger that then inserted a row (or some such) and I really, really wanted the ID of that trigger-inserted row rather than of the row I physically changed.
In other words, no, not really.
DISCLAIMER: Not a T-SQL expert :)
Maybe you should ask the developer their rationale behind making the change.
Wanting the identity from a trigger's insert is the only reason I can come up with. Even then it's dangerous, as another trigger could be added and again you would get the wrong identity. I suspect the developer doesn't know what he is doing. But honestly, the best thing to do is to ask him why he made the change. You could change it back, but the developer needs to know not to do that again unless he needs the trigger's identity, as you may not catch it the next time.
So, I have 2 database instances: one is for development in general, the other was copied from development for unit tests.
Something changed in the development database that I can't figure out, and I don't know how to see what is different.
When I try to delete from a particular table, with for example:
delete from myschema.mytable where id = 555
I get the following normal response from the unit test DB indicating no row was deleted:
SQL0100W No row was found for FETCH, UPDATE or DELETE; or the result of a query is an empty table. SQLSTATE=02000
However, the development database fails to delete at all with the following error:
DB21034E The command was processed as an SQL statement because it was not a valid Command Line Processor command. During SQL processing it returned: SQL0440N No authorized routine named "=" of type "FUNCTION" having compatible arguments was found. SQLSTATE=42884
My best guess is there is some trigger or view that was added or changed that is causing the problem, but I have no idea how to go about finding it... has anyone had this problem, or does anyone know how to figure out what the root of the problem is?
(note that this is a DB2 database)
Hmm, applying the great oracle to this question, I came up with:
http://bytes.com/forum/thread830774.html
It seems to suggest that another table has a foreign key pointing at the problematic one; when that FK on the other table is dropped, the delete should work again. (Presumably you can re-create the foreign key afterwards.)
Does that help any?
You might have an open transaction on the dev DB... that gets me sometimes on SQL Server.
Is the type of id compatible with 555? Or has it been changed to a non-integer type?
Alternatively, does the 555 argument somehow go missing (e.g. if you are using JDBC and the prepared statement did not get its arguments set before executing the query)?
Can you add more to your question? That error sounds like the SQL statement parser is very confused by your statement. Can you do a SELECT on that table for the row where id = 555?
You could try running RUNSTATS and REORG TABLE on that table; those are supposed to sort out wonky tables.
#castaway
A SELECT with the same WHERE condition works just fine, just not the DELETE. Neither RUNSTATS nor REORG TABLE has any effect on the problem.
#castaway
We actually just solved the problem, and indeed it is just what you said (a coworker found that exact same page too).
The solution was to drop foreign key constraints and re-add them.
Another post on the subject:
http://www.ibm.com/developerworks/forums/thread.jspa?threadID=208277&tstart=-1
It indicates that the problem is referential constraint corruption, and it is actually (or supposedly, anyway) fixed in a later version of DB2 V9 (which we are not yet using).
Thanks for the help!
Please check:
1. the arguments of your triggers, procedures, functions, etc.
2. the data types of those arguments.