Using NpgsqlDataAdapter for insert with returning clause - npgsql

I use NpgsqlDataAdapter (Npgsql 3.0.5) to insert new rows into a table. The primary key value for new rows is generated by the database, so I want to pass it back to the application. I therefore generated an InsertCommand with the proper parameters, as follows:
INSERT INTO TASK_JOURNAL(TJ_ID, TJ_COMMENT, TJ_COMPLETION_STATE)
VALUES(ID_GENERATOR.GenerateId('TASK_JOURNAL'), :p0_TJ_COMMENT, :p1_TJ_COMPLETION_STATE)
RETURNING TJ_ID, UPDATED_ON INTO :p11_TJ_ID, :p19_UPDATED_ON
However, it seems that Npgsql is not prepared to accept return values in a DataAdapter.InsertCommand; I am getting the following error:
Parameter ':p11_TJ_ID' referenced in SQL but is an out-only parameter
Call stack:
Npgsql.dll!Npgsql.SqlQueryParser.ParseRawQuery(string sql = "INSERT INTO NET_UT.TASK_JOURNAL(TJ_ID, TJ_COMMENT, TJ_COMPLETION_STATE) VALUES(NET_UT.ID_GENERATOR.GenerateId('TASK_JOURNAL'), :p0_TJ_COMMENT, :p1_TJ_COMPLETION_STATE) RETURNING TJ_ID, UPDATED_ON INTO :p11_TJ_ID, :p19_UPDATED_ON", bool standardConformantStrings = true, Npgsql.NpgsqlParameterCollection parameters = {Npgsql.NpgsqlParameterCollection}, System.Collections.Generic.List<Npgsql.NpgsqlStatement> queries = Count = 0) Line 127 C#
Npgsql.dll!Npgsql.NpgsqlCommand.ProcessRawQuery() Line 504 C#
Npgsql.dll!Npgsql.NpgsqlCommand.CreateMessagesNonPrepared(System.Data.CommandBehavior behavior = SequentialAccess) Line 563 C#
Npgsql.dll!Npgsql.NpgsqlCommand.ValidateAndCreateMessages(System.Data.CommandBehavior behavior = SequentialAccess) Line 555 + 0x12 bytes C#
Npgsql.dll!Npgsql.NpgsqlCommand.ExecuteDbDataReaderInternal(System.Data.CommandBehavior behavior = SequentialAccess) Line 914 C#
Npgsql.dll!Npgsql.NpgsqlCommand.ExecuteDbDataReader(System.Data.CommandBehavior behavior = SequentialAccess) Line 901 + 0xe bytes C#
System.Data.dll!System.Data.Common.DbCommand.System.Data.IDbCommand.ExecuteReader(System.Data.CommandBehavior behavior) + 0xf bytes
System.Data.dll!System.Data.Common.DbDataAdapter.UpdateRowExecute(System.Data.Common.RowUpdatedEventArgs rowUpdatedEvent = {Npgsql.NpgsqlRowUpdatedEventArgs}, System.Data.IDbCommand dataCommand = {Npgsql.NpgsqlCommand}, System.Data.StatementType cmdIndex = Insert) + 0xbe bytes
System.Data.dll!System.Data.Common.DbDataAdapter.Update(System.Data.DataRow[] dataRows, System.Data.Common.DataTableMapping tableMapping = {System.Data.Common.DataTableMapping}) + 0xa88 bytes
System.Data.dll!System.Data.Common.DbDataAdapter.Update(System.Data.DataRow[] dataRows) + 0x13c bytes
I know this could be achieved by using NpgsqlCommand.ExecuteReader instead; however, abandoning NpgsqlDataAdapter would require changing my whole framework, which I also use for Oracle (where returning parameters in INSERT and UPDATE are accepted).
I am interested in whether there is a solution using NpgsqlDataAdapter to return values from an insert statement.
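For context on why the parser complains: in PostgreSQL, RETURNING produces an ordinary result set rather than filling output parameters; the `INTO :param` form is Oracle syntax. A minimal sketch of the result-set pattern, reading a database-generated key back as query output instead of through an out-parameter (Python's stdlib sqlite3 stands in for the database here; the table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE task_journal (tj_id INTEGER PRIMARY KEY, tj_comment TEXT)"
)

# The database generates tj_id. Instead of binding an output parameter
# (the Oracle "RETURNING ... INTO :param" style), read the generated key
# back as query output -- the same shape as consuming PostgreSQL's
# "INSERT ... RETURNING tj_id" through ExecuteReader/ExecuteScalar.
cur = conn.execute(
    "INSERT INTO task_journal (tj_comment) VALUES (?)", ("first entry",)
)
new_id = cur.lastrowid
print(new_id)  # → 1
```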

Related

Snowflake JDBC parameter metadata returning VARCHAR for all datatypes

The Snowflake JDBC driver reports the parameter metadata for all datatypes as VARCHAR. Is there any way to overcome this problem?
DDL:
CREATE TABLE INTTABLE(INTCOL INTEGER)
Below is the output from the Snowflake ODBC driver:
SQLPrepare:
In:StatementHandle = 0x00000000021B1B50, StatementText = "INSERT INTO INTTABLE(INTCOL) VALUES(?)", TextLength = 42
Return: SQL_SUCCESS=0
SQLDescribeParam:
In:StatementHandle = 0x00000000021B1B50, ParameterNumber = 1, DataTypePtr = 0x00000000001294D0, ParameterSizePtr = 0x0000000000126950,DecimalDigits =0x0000000000126980, NullablePtr = 0x00000000001269B0
Return: SQL_SUCCESS=0
Out:*DataTypePtr = SQL_VARCHAR=12, *ParameterSizePtr = 16777216, *DecimalDigits = 0, *NullablePtr = SQL_NULLABLE=1
Below is the output with the Snowflake JDBC driver:
PreparedStatement ps = c.prepareStatement("INSERT INTO INTTABLE(INTCOL) VALUES(?)");
ParameterMetaData psmd = ps.getParameterMetaData();
for (int i = 1; i <= psmd.getParameterCount(); i++) {
    System.out.println(psmd.getParameterType(i) + " " + psmd.getParameterTypeName(i));
}
Output:
12 text
Thank you for adding more information to your thread. I still may be doing a little guesswork though.
If you are trying to change the column type from VARCHAR and there are no values in the table, you can drop the table and then re-create it.
If you want to ALTER a column that already holds data, try altering the table first: Manual Reference
There is also CREATE OR REPLACE TABLE (col1, col2, ...), which takes care of both.
Is this what you are looking for?

Having an Issue with too many parameters being passed to stored procedures in ColdFusion 2016

I have several stored procedures in an application that had functioned perfectly for years, until our recent upgrade from ColdFusion 10 to ColdFusion 2016. Now I am getting error messages stating either that too many parameters were passed or that a certain parameter is not contained in the procedure being called.
I have opted to upload some code so people can better understand what is actually happening. Still learning how to format code here so please forgive me if it is still lacking.
In both cases I have double-checked the parameter lists in the stored procedures and in the procedure calls, and found that they are all indeed correct. In fact, nothing in this code has changed in over 5 years; this behavior only began after the upgrade.
Below is the first example. I will list the procedure call (in cfscript) first, then the parameter list from the stored procedure, and then the error message it produced:
public query function readStorage(numeric group1=0, numeric group2=0) {
    local.group1Value = arguments.group1 ? arguments.group1 : "";
    local.group2Value = arguments.group2 ? arguments.group2 : "";
    spService = new storedproc();
    spService.setDatasource(variables.dsn);
    spService.setUsername(variables.userName);
    spService.setPassword(variables.password);
    spService.setProcedure("usp_readCompatibilityStorage");
    spService.addParam(dbvarname="@group1Id", cfsqltype="cf_sql_integer"
        , type="in", value=local.group1Value, null=!arguments.group1);
    spService.addParam(dbvarname="@group2Id", cfsqltype="cf_sql_integer"
        , type="in", value=local.group2Value, null=!arguments.group2);
    spService.addProcResult(name="rs1", resultset=1);
    local.result = spService.execute();
    return local.result.getProcResultSets().rs1;
}
Below is the parameter list from the stored procedure:
@groupId1 int = NULL
,@groupId2 int = NULL
Below is the error message I get:
[Macromedia][SQLServer JDBC Driver][SQLServer]@group1Id is not a parameter for procedure usp_readCompatibilityStorage.
Second Example:
public query function read(string cribIdList="",
        numeric cribNumber=0,
        string isAnnex="",
        numeric siteId=0,
        string parentCribIdList="",
        numeric supervisorId=0,
        numeric statusId=0,
        string orderBy="cribNumber ASC") {
    local.cribNumberValue = arguments.cribNumber ? arguments.cribNumber : "";
    local.siteIdValue = arguments.siteId ? arguments.siteId : "";
    local.superIdValue = arguments.supervisorId ? arguments.supervisorId : "";
    local.statusIdValue = arguments.statusId ? arguments.statusId : "";
    spService = new storedproc();
    spService.setDatasource(variables.dsn);
    spService.setUsername(variables.userName);
    spService.setPassword(variables.password);
    spService.setProcedure("usp_readCrib");
    spService.addParam(dbvarname="@cribIdList", cfsqltype="cf_sql_varchar"
        , type="in", value=arguments.cribIdList
        , null=!len(arguments.cribIdList));
    spService.addParam(dbvarname="@cribNumber", cfsqltype="cf_sql_integer"
        , type="in", value=local.cribNumberValue
        , null=!arguments.cribNumber);
    spService.addParam(dbvarname="@isAnnex", cfsqltype="cf_sql_varchar"
        , type="in", value=arguments.isAnnex, null=!len(arguments.isAnnex));
    spService.addParam(dbvarname="@siteId", cfsqltype="cf_sql_integer"
        , type="in", value=local.siteIdValue, null=!arguments.siteId);
    spService.addParam(dbvarname="@parentCribIdList"
        , cfsqltype="cf_sql_varchar", type="in"
        , value=arguments.parentCribIdList
        , null=!len(arguments.parentCribIdList));
    spService.addParam(dbvarname="@supervisorId"
        , cfsqltype="cf_sql_integer", type="in", value=local.superIdValue
        , null=!arguments.supervisorId);
    spService.addParam(dbvarname="@statusId"
        , cfsqltype="cf_sql_integer", type="in"
        , value=local.statusIdValue, null=!arguments.statusId);
    spService.addParam(dbvarname="@orderBy", cfsqltype="cf_sql_varchar"
        , type="in", value=arguments.orderBy);
    spService.addProcResult(name="rs1", resultset=1);
    local.result = spService.execute();
    return local.result.getProcResultSets().rs1;
}
Below is the parameter list from the stored procedure:
@cribIdList varchar(500) = NULL
,@cribNumber int = NULL
,@isAnnex varchar(3) = NULL
,@siteId int = NULL
,@parentCribIdList varchar(500) = NULL
,@supervisorId int = NULL
,@statusId int = NULL
,@orderBy varchar(50)
Below is the message returned from the server:
[Macromedia][SQLServer JDBC Driver][SQLServer]Procedure or function
usp_readCrib has too many arguments specified.
In the case of both errors, they seem to be occurring at the following path:
Error Details - struct
COLUMN: 0
ID: CFSTOREDPROC
LINE: 489
RAW_TRACE: at cfbase2ecfc235349229$funcINVOKETAG.runFunction(E:\ColdFusion2016\cfusion\CustomTags\com\adobe\coldfusion\base.cfc:489)
TEMPLATE: E:\ColdFusion2016\cfusion\CustomTags\com\adobe\coldfusion\base.cfc
TYPE: CFML
ColdFusion 10 and later limit the number of parameters in a request to 100 by default. Fortunately, this limit can be changed to allow the number of parameters your stored procedures require.

Entity Framework updates with wrong values after insert

This issue is discovered because I have an object with a field calculated off the ID, which contains the ID as part of it with a prefix and a checksum digit. It is a requirement that these calculated values are unique, but they also cannot be random, so this seemed the best way to do it.
The code in question looks like this:
entity = new Entity() { /* values */ };
context.SaveChanges(); //generate the ID field
entity.CALCULATED_FIELD = CalculateField(prefix, entity.ID);
This works just fine in 99% of cases, but occasionally we get a value in the database which looks like:
ID: 1234
CALCULATED_FIELD : prefix000{1233}8
EXPECTED: prefix000{1234}3
With the parts in the braces being calculated from the ID column.
The fact that the calculated field is incorrect is bad enough, but the implication is that upon calling SaveChanges there is no guarantee that the row returned to Entity Framework is the one that was originally worked on! I am looking into using a stored procedure on insert to fix the generated-field problem, but in the long run we are going to accumulate a lot of bad data if we keep working on the wrong rows.
When I told entity framework to map the table to stored procedures it generated the following boilerplate code:
INSERT [dbo].[tableName](fields...)
VALUES(values...)
DECLARE @ID int
SELECT @ID = [ID]
FROM [dbo].[tableName]
WHERE @@ROWCOUNT > 0 AND [ID] = scope_identity()
SELECT t0.[ID]
FROM [dbo].[tableName] AS t0
WHERE @@ROWCOUNT > 0 AND t0.[ID] = @ID
The best idea I can come up with is that an extra insert could occur before scope_identity() is called. We are migrating this system from stored procedures where we used @@IDENTITY instead; could there be a difference there?
EDIT: CalculateField:
public static string CalculateField(string prefix, int ID)
{
    var calculated = prefix.PadRight(17 - ID.ToString().Length)
        .Replace(" ", "0") + ID.ToString();
    var multiplier = 3;
    var sum = 0;
    foreach (char c in calculated.ToCharArray().Reverse())
    {
        sum += multiplier * int.Parse(c.ToString());
        multiplier = 4 - multiplier;
    }
    if (sum % 10 == 0) { return calculated + "0"; }
    return calculated + (10 - (sum % 10)).ToString();
}
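For illustration, the check-digit routine can be sketched as a direct Python port (assuming, as the example values imply, that the prefix contains only digits, since every character is parsed as an integer):

```python
def calculate_field(prefix: str, record_id: int) -> str:
    # Zero-pad the prefix so that prefix + ID is exactly 17 characters.
    calculated = (
        prefix.ljust(17 - len(str(record_id))).replace(" ", "0") + str(record_id)
    )
    multiplier = 3
    total = 0
    # Walk the digits right-to-left with alternating weights 3, 1, 3, 1, ...
    for ch in reversed(calculated):
        total += multiplier * int(ch)
        multiplier = 4 - multiplier  # toggles between 3 and 1
    # Append the check digit that brings the weighted sum up to a multiple of 10.
    if total % 10 == 0:
        return calculated + "0"
    return calculated + str(10 - total % 10)

print(calculate_field("12", 1234))  # → 120000000000012343
```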
UPDATE: Changing the called method from static to an instance method, and only running it later after additional changes were made instead of straight after creation, appears to have solved the problem, for reasons I can't comprehend. I'm leaving the question open for now, since I don't yet have a large enough sample to be completely sure the problem is resolved, and also because I have no explanation for what really changed.

Cannot understand how Entity Framework will generate a SQL statement for an update operation using timestamp?

I have the following method inside my ASP.NET MVC web application:
var rack = IT.ITRacks.Where(a => !a.Technology.IsDeleted && a.Technology.IsCompleted);
foreach (var r in rack)
{
    long? it360id = technology[r.ITRackID];
    if (it360resource.ContainsKey(it360id.Value))
    {
        long? CurrentIT360siteid = it360resource[it360id.Value];
        if (CurrentIT360siteid != r.IT360SiteID)
        {
            r.IT360SiteID = CurrentIT360siteid.Value;
            IT.Entry(r).State = EntityState.Modified;
            count = count + 1;
        }
    }
    IT.SaveChanges();
}
When I checked SQL Server Profiler, I noted that EF generated the following SQL statement:
exec sp_executesql N'update [dbo].[ITSwitches]
set [ModelID] = @0, [Spec] = null, [RackID] = @1, [ConsoleServerID] = null, [Description] = null, [IT360SiteID] = @2, [ConsoleServerPort] = null
where (([SwitchID] = @3) and ([timestamp] = @4))
select [timestamp]
from [dbo].[ITSwitches]
where @@ROWCOUNT > 0 and [SwitchID] = @3',N'@0 int,@1 int,@2 bigint,@3 int,@4 binary(8)',@0=1,@1=539,@2=1502,@3=1484,@4=0x00000000000EDCB2
I cannot understand the purpose of the following section:
select [timestamp]
from [dbo].[ITSwitches]
where @@ROWCOUNT > 0 and [SwitchID] = @3',N'@0 int,@1 int,@2 bigint,@3 int,@4 binary(8)',@0=1,@1=539,@2=1502,@3=1484,@4=0x00000000000EDCB2
Can anyone advise?
Entity Framework uses timestamps to check whether a row has changed. If the row has changed since the last time EF retrieved it, then it knows it has a concurrency problem.
Here's an explanation:
http://www.remondo.net/entity-framework-concurrency-checking-with-timestamp/
This is because EF (and you) want to update the client-side object with the newly generated rowversion value.
First the update is executed. If it succeeds (because the rowversion is still the one you had on the client), the database generates a new rowversion and EF retrieves that value. Suppose you immediately wanted to make a second update: that would be impossible if you didn't have the new rowversion.
This happens with all properties that are marked as identity or computed (by DatabaseGeneratedOption).
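The update/re-select pattern EF uses can be sketched with a plain integer version column (a hand-rolled illustration using Python's sqlite3; SQL Server's rowversion is maintained by the engine, so the manual `version = version + 1` here is only a stand-in):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE switches (id INTEGER PRIMARY KEY, site_id INTEGER, version INTEGER)"
)
conn.execute("INSERT INTO switches VALUES (1, 100, 1)")

def update_site(conn, switch_id, new_site, expected_version):
    # The update succeeds only if nobody changed the row since we read it.
    cur = conn.execute(
        "UPDATE switches SET site_id = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_site, switch_id, expected_version),
    )
    if cur.rowcount == 0:
        raise RuntimeError("concurrency conflict: row changed since last read")
    # Re-select the new version so a second update from this client can
    # succeed -- mirroring EF's "select [timestamp] where @@ROWCOUNT > 0" step.
    return conn.execute(
        "SELECT version FROM switches WHERE id = ?", (switch_id,)
    ).fetchone()[0]

v = update_site(conn, 1, 200, expected_version=1)
print(v)  # → 2
```

A second update with the stale version 1 would now affect zero rows, which is exactly how the concurrency conflict is detected.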

SQLBindParameter with SQL_VARBINARY(MAX) gives "Invalid precision value"

Using C++, I am trying to insert a large binary blob into MS SQL Server using a stored procedure. The table into which I am inserting has 5 columns, with types:
int
varchar
datetime
varbinary(max)
datetime
The stored procedure takes 4 parameters:
PROCEDURE [dbo].[spr_fff]
@act AS INT
,@id AS VARCHAR(255)
,@timestamp AS DATETIME
,@blob AS VARBINARY(MAX)
I set up my statement (with checks on return values that I am not showing):
const std::string queryString("{Call [spr_fff](?,?,?,?)}");
SQLHSTMT handle = NULL;
SQLAllocHandle(SQL_HANDLE_STMT, m_hConn, &handle);
SQLPrepare(handle, (SQLCHAR *)queryString.c_str(), SQL_NTS);
I bind the first three parameters with no problem, but I seem unable to figure out how to bind the 4th parameter. The code is essentially:
std::string sData; getData(sData); // fills sData with the binary data
SQLLEN len1 = ???;
SQLBindParameter(handle, (SQLUSMALLINT)4, SQL_PARAM_INPUT, SQL_C_BINARY, SQL_VARBINARY, len1, 0, (SQLCHAR*)sData.c_str(), (SQLLEN)sData.size(), NULL);
and the trick seems to be figuring out what len1 should be. If sData.size() < 8000, then len1 = sData.size() works fine. But if sData.size() > 8000, nothing seems to work. If I set len1 = sData.size(), or len1 = 2147483647, the call to SQLBindParameter results in the error code "Invalid precision value". If I set len1 = 0, as some of the (horrible) documentation seems to suggest, the call to SQLBindParameter works (for the 2008 native driver), but executing the statement results in a blob of size two, i.e. the two default 0 bytes, with all the input blob data truncated. I have tried all these combinations with all the client drivers listed below, to no avail. What am I doing wrong?
Environment
Client OS: Windows XP sp3
SQL Server is
Microsoft SQL Server 09.00.3042
SQL Clients tried:
Microsoft SQL Server Native Client Version 10.00.5500 (sqlncli10.dll, 2007.100.5500.00)
Microsoft SQL Native Client Version 09.00.5000 (sqlncli.dll, 2005.90.5000.00)
Microsoft SQL Server ODBC Driver Version 03.85.1132 (sqlsrv32.dll 2000.85.1132.0)
OK, the answer to my question is actually that I screwed up in the call to SQLBindParameter. If you look at my code above, I have the final parameter as NULL. A careful reading of the documentation - and believe me, it needs much careful reading! - shows that if the final parameter is NULL, the data is treated as zero-terminated. (the documentation for SQLBindParameter says "If StrLen_or_IndPtr is a null pointer, the driver assumes that all input parameter values are non-NULL and that character and binary data is null-terminated." - emphasis mine) And, by unfortunate coincidence, the data I was supplying had a zero as the second or third byte, so the blob that was actually stored was only 1 or 2 bytes. Not sure why it worked with a size under 8000 - there may have been some interplay with size<8000 and various driver versions, but I haven't taken the time to tease that out.
Also, in the code above I state "If I set len1 = 0 as some of the (horrible) documentation seems to suggest,". This is in fact the correct thing to do.
The correct code is thus:
SQLLEN len1 = 0;
SQLLEN nThisLen = (SQLLEN)sData.size();
SQLBindParameter(handle, (SQLUSMALLINT)4, SQL_PARAM_INPUT, SQL_C_BINARY, SQL_VARBINARY, len1, 0, (SQLCHAR*)sData.c_str(), nThisLen, &nThisLen);
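The truncation described above is easy to reproduce outside ODBC: once binary data is treated as a null-terminated string, everything after the first zero byte is lost. A small byte-level illustration in Python (not ODBC code, just the effect of the missing length indicator):

```python
blob = b"\xab\x00\xcd\xef"  # binary payload whose second byte happens to be zero

# What a driver effectively stores when it assumes null-terminated input:
# it scans up to the first zero byte and discards the rest.
as_c_string = blob.split(b"\x00", 1)[0]

print(len(blob))         # → 4
print(len(as_c_string))  # → 1  (truncated at the first zero byte)
```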
Jeff O was on the right track. Try changing your sproc to:
PROCEDURE [dbo].[spr_fff]
@act AS INT
,@id AS VARCHAR(255)
,@timestamp AS DATETIME
,@preblob AS NVARCHAR(MAX)

DECLARE @blob varbinary(MAX) = CAST(@preblob AS varbinary(MAX))
/* Continue using the varbinary blob as before */
Note the change of the sproc's parameter datatype and the subsequent cast to varbinary.
Cheers.