Calling SQL Server stored procedures via ODBC fails, leaving empty tables - sql-server

We are using ODBC from C on Linux. We can successfully execute direct statements ("INSERT ...", "SELECT ...", etc) using ODBC for both SQL Server 2008 and MySQL. We are migrating to stored procedures, so we first developed MySQL stored procedures. Calling MySQL stored procedures using ODBC works. Life is good.
The stored procedures are translated into T-SQL. We verify that they function by executing queries directly from Visual Studio. The database is filled, queries work. Huzzah.
We have a test program allowing us to use MySQL or SQL Server, with either direct execution or stored procedure calls. We call the T-SQL stored procedures from a C test program. Log output indicates that tables are being filled with data, queries are working, and so on, until the end, where a statement fails and the program exits (taking several seconds longer than normal to do so). The other 3 cases work (direct MySQL, direct SQL Server, stored proc MySQL).
We examine the SQL Server database. It's empty. We have autocommit turned on, so I don't think it's a commit problem. The stored procs are bog simple, being copies of the direct SQL. Any ideas?

It sounds like the query runs, then errors out for some reason, and because everything is wrapped up in a single transaction, it all rolls back. Hence the empty tables.
Does the stored procedure have any error trapping within it? SQL Server 2005 and later improved error handling enormously with TRY...CATCH.
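For illustration, a minimal sketch of that kind of error trapping inside a procedure; the procedure, table, and column names here are made up, not taken from the question. If the CATCH block fires, the re-raised error text usually points straight at the statement causing the rollback.

CREATE PROCEDURE dbo.usp_InsertRow        -- hypothetical procedure name
    @Value nvarchar(100)
AS
BEGIN
    BEGIN TRY
        INSERT INTO dbo.MyTable (Value) VALUES (@Value);   -- hypothetical table
    END TRY
    BEGIN CATCH
        -- Re-raise the real error so the ODBC client sees it instead of a silent rollback
        DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
        RAISERROR (@msg, 16, 1);
    END CATCH
END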

Related

Sql Server: logging in stored procedure

I have a stored procedure that is called from a c# application.
The transaction is started and commited/rolledback from this c# application.
The stored procedure can do some inserts/updates in various tables, which will all be commited or rolledback by the calling application.
The problem is that the stored procedure also insert records into a logtable, which must survive the rollback.
What is the best way of doing this ?
I think I remember that a company I worked for long ago solved this by creating a stored procedure for the logging, and this stored procedure had some exotic statements that made it work outside the transaction, something like that. As I said, it was a long time ago, so I could be remembering this wrong.
Some time ago, I developed a tool that logged stored procedure execution to a database table. The tool was written as a C# assembly compiled into the database server, with various SQL procedures and functions linked to its C# entry points.
To allow a rollback without losing the events already logged, the C# assembly had to use a fully defined connection string to connect to its database server (a SERVERNAME\INSTANCE server parameter instead of the local connection).
This is perhaps the solution used by your previous company.
However, there are some disadvantages:
those connections are qualified as "external", and the database's TRUSTWORTHY parameter has to be set to TRUE to allow the code to execute if the assembly is not signed
this solution is not supported on cloud-hosted databases (AWS RDS or Azure)
a new connection is created by the C# methods
For this last reason, and for a customer's needs, I rewrote the toolbox in 100% T-SQL.
I've just written an answer that may be useful; see: https://stackoverflow.com/a/32988757/1183297
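As a rough illustration of one pure T-SQL pattern for logging that can survive a rollback (not necessarily what the linked toolbox does): table variables are not affected by ROLLBACK, so log rows captured in one can be copied to the permanent log table after the transaction is undone. The table and column names below are placeholders, and this only helps while the table variable is still in scope when the rollback happens.

DECLARE @log TABLE (LogTime datetime NOT NULL, Message nvarchar(400) NOT NULL);

BEGIN TRANSACTION;
    INSERT INTO dbo.WorkTable (Col) VALUES ('some work');         -- placeholder work
    INSERT INTO @log VALUES (GETDATE(), N'inserted work row');    -- capture the log entry
ROLLBACK TRANSACTION;    -- undoes the WorkTable insert, but NOT the rows in @log

-- Persist the surviving log rows outside the rolled-back transaction
INSERT INTO dbo.LogTable (LogTime, Message)
SELECT LogTime, Message FROM @log;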

SQL Server transactions and parameterized queries from Access 2007 (DAO? ADO?)

I am being forced to port a SQL load script to VBA in Access 2007, where I read flat files and then insert into SQL Server (using ODBC and parameterized query strings). Can someone post a full example of how to use either a DAO QueryDef or ADO so that my inserts are executed in a single transaction on the SQL Server? I do not need linked tables; the file is read and all transforms are done in VBA prior to the insert, with the rest done using stored procs and triggers in SQL Server after the load. But I do need to be able to roll back my multiple inserts if there is an ODBC error.
None of the other stackoverflow posts I've found make much sense to me; all I know is that "ODBCDirect workspaces" are not supported in Access 2007. Does using ADO and BeginTrans make the most sense for me?

SQL Server: INSERT/UPDATE/DELETE failed because the following SET options have incorrect settings: ‘QUOTED_IDENTIFIER’

I have this rather awkward problem:
For two weeks now, whenever I've updated or created stored procedures using my SQL scripts, these stored procedures fail with the above error when they are run.
Other posts dealing with this problem didn't help in my case.
Here are a number of details that should help exclude the common solutions, which do not apply in my case:
My stored procedure scripts work flawlessly on my laptop (SQL Server 2012, Windows Server 2008 R2).
My stored procedure scripts correctly create stored procedures on every other machine (our build machine, with SQL Server 2012 installed; our TEST server, with SQL Server 2005 installed; and our PROD server, also with SQL Server 2005 installed). However, the stored procedures won't run on any machine other than mine.
I'm using a database backup of our production SQL Server (SQL Server 2005) on my machine (like any other machine here does).
Even the most basic stored procedure fails (e.g. DELETE myTable WHERE ID = #delID).
On every SQL Server installation I've checked, quoted identifier is set to OFF (!), both on server and on database level. So why do my stored procedures all of a sudden require to have this option set to ON?
I'm using SQLCMD to run my scripts. This gives me an option to dynamically set the server instance's database name in the USE statement.
My scripts contain only a USE statement followed directly by the ALTER PROCEDURE; or alternatively IF EXISTS (...) DROP PROCEDURE ... GO; CREATE PROCEDURE ...
This all worked for years, but since two weeks ago, stored procedures created with my scripts suddenly fail.
I know that I could manually set QUOTED_IDENTIFIER to ON in my scripts - but I don't want to. There is something wrong here. I want to know what that problem is.
What's happening here?
SQLCMD sets the QUOTED_IDENTIFIER option to OFF by default. You can change it with -I option.
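A hedged sketch of what that means in practice; the script and procedure below are placeholders, not the asker's actual code. The QUOTED_IDENTIFIER setting in effect when a procedure is created or altered is stored with the procedure, so either passing -I to SQLCMD or making the setting explicit in the script changes the created object:

-- Either run the script with:  sqlcmd -S MyServer -d MyDb -i my_script.sql -I
-- or make the setting explicit so the SQLCMD default (OFF) no longer matters:
SET QUOTED_IDENTIFIER ON;
GO
CREATE PROCEDURE dbo.usp_DeleteById       -- placeholder procedure
    @delID int
AS
    DELETE FROM dbo.myTable WHERE ID = @delID;
GO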
Could it be that your stored procedure is now doing something on a table that has had an index added? I've had the same issue, and it's due to a new index on a computed column.
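For reference, a minimal reproduction of that situation (entirely made-up table): once an index exists on a computed column, INSERT/UPDATE/DELETE against the table fails from any procedure that was created with QUOTED_IDENTIFIER OFF.

CREATE TABLE dbo.Orders (ID int, Qty int, Price money, Total AS (Qty * Price));
CREATE INDEX IX_Orders_Total ON dbo.Orders (Total);   -- new index on the computed column

-- A procedure created with QUOTED_IDENTIFIER OFF now fails on this statement
-- with the "SET options have incorrect settings" error:
DELETE FROM dbo.Orders WHERE ID = 1;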

Stored Procedure returns schema version change error when run from SSIS, but not when run directly

I have a SQL Server stored procedure that executes correctly every time when run manually with EXEC, but when it runs as part of an SSIS package, it fails with an error like this:
Executing the query "EXECUTE (ProcName) " failed with the following error:
"The OLE DB provider "SQLNCLI10" for linked server "(OtherServer)" reported a
change in schema version between compile time ("177833127975044") and
run time ("177841717910098") for table (Server.Database.Schema.Table)".
The procedure is a MERGE statement that merges data from a view into a table in another database on the same server as the SP.
The view refers to the linked server OtherServer. The database referenced on the linked server is dropped and re-created on a nightly basis.
So far, I've tried these things:
1) Dropping and re-creating the view before running the MERGE.
2) Defining the SP that contains the MERGE WITH RECOMPILE.
3) Wrapping the MERGE statement in EXEC() so it wouldn't be compiled in advance.
4) Setting Bypass Prepare to true on the relevant step in SSIS.
Edit:
The server with the stored procedure is running SQL Server 2008. The linked server is 2008 R2.
So the problem is that you're using a synonym for the linked server's objects, which doesn't play nicely with OLE DB's metadata catalog (that's what generates the numbers you see in the error message). There are two solutions to this:
1) Call
DBCC FREEPROCCACHE
on the linked server. Since the database is dropped every day anyway, clearing the cache might not be such a burden on other users of the database.
2) Use full four part notation (ServerName.DatabaseName.SchemaName.ObjectName) in your stored procedure.
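A hedged sketch of both suggestions; the linked server, database, and table names below are placeholders loosely modeled on the error message, not the actual objects.

-- 1) Clear the plan/metadata cache on the linked server (requires RPC Out enabled on it):
EXEC ('DBCC FREEPROCCACHE;') AT OtherServer;

-- 2) Reference the remote table by its full four-part name inside the procedure,
--    instead of going through a local synonym or view:
MERGE dbo.TargetTable AS t
USING OtherServer.SomeDatabase.dbo.SourceTable AS s
    ON t.ID = s.ID
WHEN MATCHED THEN UPDATE SET t.Value = s.Value
WHEN NOT MATCHED THEN INSERT (ID, Value) VALUES (s.ID, s.Value);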

Temp table trouble in SQL Server

I've used temporary tables before without any trouble, but today, they are not working for me. This returns
#MyTemp not found
from the last line.
scBld.CommandText = "select top 10 * into #MyTemp from elig_feeds";
scBld.ExecuteNonQuery();
scBld.CommandText = "select count(*) from #MyTemp";
int p = (int) scBld.ExecuteScalar();
If I remove the "#"s, it works fine.
The only thing that has changed recently is version compatibility of the database, but I don't see that would be a factor. The db is 2005 developer edition.
Thx.
I had a similar issue today with 2005 Express, using both ODBC and OLE DB.
As explained in this article, this behavior might be due to using prepared statements, which are wrapped in temporary stored procedures when prepared.
In SQL Server 2005, SQL Server 2000, and SQL Server 7.0, the prepared
statements cannot be used to create temporary objects and cannot
reference system stored procedures that create temporary objects, such
as temporary tables. These procedures must be executed directly.
Supplying statements directly using SQLExecDirect fixed the application. I'm not sure how that should be applied to ADO.NET, though.
Check whether the connection is getting closed automatically. You are executing two different commands; depending on the connection settings, the connection may get reset after you call ExecuteNonQuery().
[Temp tables are destroyed when the connection is closed.]
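A minimal sketch of the direct-execution workaround mentioned above, assuming the two statements can be sent as one batch that is executed directly (not prepared) over a single open connection; the DROP at the end is optional.

-- One batch, one session: the temp table is created and read before anything
-- can re-prepare the statement or recycle the connection.
SELECT TOP 10 * INTO #MyTemp FROM elig_feeds;
SELECT COUNT(*) FROM #MyTemp;
DROP TABLE #MyTemp;   -- optional; #MyTemp is dropped when the session ends anyway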
