"Invalid data conversion" in DB2 with prepared statements and batch - database

I am using JDBC to create a temporary table, add records to it (with prepared statement and batch) and then transfer everything to another table:
String createTemporaryTable = "declare global temporary table temp_table (RECORD smallint,RANDOM_INTEGER integer,RANDOM_FLOAT float,RANDOM_STRING varchar(600)) ON COMMIT PRESERVE ROWS in TEMP";
statement.execute(createTemporaryTable);
String sql = "INSERT INTO session.temp_table (RECORD,RANDOM_INTEGER,RANDOM_FLOAT,RANDOM_STRING) VALUES (?,?,?,?)";
PreparedStatement preparedStatement = connection.prepareStatement(sql);
float f = 0.7401298f;
Integer integer = 123456789;
String string = "This is a string that will be inserted into the table over and over again.";
// add however many random records you want to the temporary table
int numberOfRecordsToInsert = 35000;
for (int i = 0; i < numberOfRecordsToInsert; i++) {
preparedStatement.setInt(1, i);
preparedStatement.setInt(2, integer);
preparedStatement.setFloat(3, (float) f);
preparedStatement.setString(4, string);
preparedStatement.addBatch();
}
preparedStatement.executeBatch();
// transfer everything from the temporary table just created to the main table
String transferFromTempTableToMain = "insert into main_table select * from session.temp_table";
statement.execute(transferFromTempTableToMain);
This works fine up to about 30000 records in this example. However, if I were to insert, say, 35000 records, I get the following error:
Invalid data conversion: Requested conversion would result in a loss
of precision of 32768. ERRORCODE=-4461, SQLSTATE=42815

The problem is that the field RECORD is a SMALLINT. A SMALLINT is a signed 16-bit integer with a range of -32768 to 32767.
So inserting an int value of 32768 is not allowed as it won't fit. You need to declare RECORD as INTEGER instead.
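For illustration, here is the temporary-table DDL from the question with only RECORD widened to INTEGER (a minimal sketch; everything else is unchanged):
declare global temporary table temp_table (
    RECORD         integer,   -- was smallint; INTEGER allows counters above 32767
    RANDOM_INTEGER integer,
    RANDOM_FLOAT   float,
    RANDOM_STRING  varchar(600)
) ON COMMIT PRESERVE ROWS in TEMP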

Related

Snowflake JDBC parameter returning VARCHAR for all datatypes

The Snowflake JDBC driver is reporting parameter metadata as VARCHAR for all datatypes. Is there any way to overcome this problem?
DDL:
CREATE TABLE INTTABLE(INTCOL INTEGER)
Below is the output from the Snowflake ODBC driver:
SQLPrepare:
In:StatementHandle = 0x00000000021B1B50, StatementText = "INSERT INTO INTTABLE(INTCOL) VALUES(?)", TextLength = 42
Return: SQL_SUCCESS=0
SQLDescribeParam:
In:StatementHandle = 0x00000000021B1B50, ParameterNumber = 1, DataTypePtr = 0x00000000001294D0, ParameterSizePtr = 0x0000000000126950,DecimalDigits =0x0000000000126980, NullablePtr = 0x00000000001269B0
Return: SQL_SUCCESS=0
Out:*DataTypePtr = SQL_VARCHAR=12, *ParameterSizePtr = 16777216, *DecimalDigits = 0, *NullablePtr = SQL_NULLABLE=1
Below is the output with the Snowflake JDBC driver:
PreparedStatement ps = c.prepareStatement("INSERT INTO INTTABLE(INTCOL) VALUES(?)");
ParameterMetaData psmd = ps.getParameterMetaData();
for (int i = 1; i <= psmd.getParameterCount(); i++) {
System.out.println(psmd.getParameterType(i)+ " " + psmd.getParameterTypeName(i));
}
Output:
12 text
Thank you for adding more information to your thread. I still may be doing a little guesswork though.
If you are trying to change the column type from VARCHAR and there are no values in the table, you can drop the table and then re-create it.
If you want to ALTER what is already in the table, try altering the table first: Manual Reference
There is also CREATE OR REPLACE TABLE (col1, col2), which takes care of both.
Is this what you are looking for?
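For reference, a minimal sketch of the two options mentioned above, using the INTTABLE definition from the question (adjust the column list and types to what you actually need):
-- Option 1: if the table can be dropped, re-create it with the desired column type
DROP TABLE INTTABLE;
CREATE TABLE INTTABLE (INTCOL INTEGER);
-- Option 2: CREATE OR REPLACE TABLE does both steps in a single statement
CREATE OR REPLACE TABLE INTTABLE (INTCOL INTEGER);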

sqlQuery failing with memory error when rows_at_time too large

I am sending this query to a SQL Server database from R using RODBC::sqlQuery:
MERGE "mytable" AS Target USING ( VALUES ('myname','POLYGON ((148.0000000000000000 -20.0000000000000000, 148.0000000000000000 -20.0000000000000000, 148.0000000000000000 -20.0000000000000000, 148.0000000000000000 -20.0000000000000000, 148.0000000000000000 -20.0000000000000000))')) AS Source ("name","polygon")
ON (Target."name" = Source."name")
WHEN MATCHED THEN
UPDATE SET Target."polygon" = Source."polygon"
WHEN NOT MATCHED BY TARGET THEN
INSERT ("name","polygon")
VALUES (Source."name", Source."polygon")
OUTPUT $action, Inserted.*, Deleted.*
It fails when the rows_at_time argument of sqlQuery is more than 10,
Error in odbcQuery(channel, query, rows_at_time) :
'Calloc' could not allocate memory (107374182400 of 1 bytes)
but works if rows_at_time < 10. (Still, the query takes quite a few seconds, which is surprising as the table is indexed and very small: fewer than 100 rows.)
Any idea why?
Thank you
EDIT: This is the structure of the table I am writing to:

How to return id from Insert with libpqtype

I have a C program which writes incoming aircraft data to a Postgres DB.
I would like to return the new record's ID after the INSERT but am having trouble.
The simplified code is like this:
int new_id;
conn = PQconnectdb("dbname = flight_dev");
PQinitTypes(conn);
PGresult *res = PQexecf(conn,"INSERT INTO aircrafts (hex) VALUES (%text)", aircraft_hex);
PQgetf(res, 0, "%int4", 0, &new_id);
The INSERT is successful; however, new_id is not assigned and the error "row number 1 is out of range 0..-1" is given.
Any help would be great thanks.
Use INSERT INTO aircrafts (hex) VALUES (%text) RETURNING id to return the id column of the newly inserted row.
I had been missing the semicolon when previously using RETURNING id:
PGresult *res = PQexecf(conn,"INSERT INTO aircrafts (hex) VALUES (%text) RETURNING ID;", aircraft_hex);

Number 0 is not saving to the database as a prefix in a SQL Server CHAR column

I am trying to insert a value such as '019393' into a table with a CHAR(10) column.
It is inserting only '19393' into the database.
I am implementing this feature in a stored procedure, doing some manipulation like incrementing that number by 15 and saving it back with '0' as the prefix.
I am using a SQL Server database.
Note: I tried CASTing that value as VARCHAR before saving to the database, but even that did not solve the problem.
Code
SELECT
    @fromBSB = fromBSB, @toBSB = toBSB, @type = Type
FROM
    [dbo].[tbl_REF_SpecialBSBRanges]
WHERE
    CAST(@inputFromBSB AS INT) BETWEEN fromBSB AND toBSB
SET @RETURNVALUE = @fromBSB
IF(@fromBSB = @inputFromBSB)
BEGIN
    PRINT 'Starting Number is Equal';
    DELETE FROM tbl_REF_SpecialBSBRanges
    WHERE Type = @type AND fromBSB = @fromBSB AND toBSB = @toBSB
    INSERT INTO [tbl_REF_SpecialBSBRanges] ([Type], [fromBSB], [toBSB])
    VALUES(@type, CAST('0' + @fromBSB + 1 AS CHAR), @toBSB)
    INSERT INTO [tbl_REF_SpecialBSBRanges] ([Type], [fromBSB], [toBSB])
    VALUES(@inputBSBName, @inputFromBSB, @inputToBSB)
END
Okay, without knowing the column datatypes, I would suggest trying this:
Change from
CAST('0' + @fromBSB + 1 AS CHAR)
to
'0' + CAST(@fromBSB + 1 AS CHAR(10))
But if the columns are integers this won't make a difference.
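As a minimal T-SQL sketch of the difference (assuming @fromBSB is a CHAR(10) variable holding '019393'):
DECLARE @fromBSB CHAR(10) = '019393';
-- Original expression: the + 1 forces the whole string to INT first,
-- so the leading zero is lost before the result is cast back to CHAR
SELECT CAST('0' + @fromBSB + 1 AS CHAR(10));   -- '19394'
-- Suggested expression: do the arithmetic first, cast the result,
-- then prepend '0' as a plain string concatenation
SELECT '0' + CAST(@fromBSB + 1 AS CHAR(10));   -- '019394' (plus trailing CHAR padding)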

How can I get data from the row being inserted in a C trigger?

PostgreSQL 9.1.0. OS: Ubuntu 11.10. Compiler: gcc 4.6.1.
Here is my table:
CREATE TABLE ttest
(
x integer,
str text
)
WITH (
OIDS=FALSE
);
ALTER TABLE ttest OWNER TO postgres;
CREATE TRIGGER tb
BEFORE INSERT
ON ttest
FOR EACH ROW
EXECUTE PROCEDURE out_trig();
out_trig is a C function.
Now I'm trying to get data from each row being inserted. Here is the code:
if (TRIGGER_FIRED_BY_INSERT(trigdata->tg_event))
{
rettuple = trigdata->tg_trigtuple;
bool isnull = false;
uint32 x=rettuple->t_len;
int64 f;
f = (int64) GetAttributeByNum(rettuple->t_data, 1, &isnull);//error here
elog(INFO,"len of tuple: %d",x);
elog(INFO,"first column being inserted x: %d",f);
}
I got ERROR: record type has not been registered
SQL state: 42809
What am I doing wrong, and how do I do it correctly?
GetAttributeByNum (or GetAttributeByName) only works with Datums, not on-disk tuples; use heap_getattr instead.
You've declared x as integer but are trying to read it as int64 (PostgreSQL uses int4 for integer columns unless you explicitly declare the column as int8).
Last but not least, use the DatumGet[YourType] macros when calling functions that return Datums; casting the value directly to the desired type breaks portability.
Long story short, the code should become something like this:
if (TRIGGER_FIRED_BY_INSERT(trigdata->tg_event))
{
    HeapTuple rettuple = trigdata->tg_trigtuple;
    TupleDesc tupdesc = trigdata->tg_relation->rd_att;  /* row descriptor of the target table */
    bool isnull = false;
    uint32 x = rettuple->t_len;
    /* fetch attribute 1 from the on-disk tuple and convert the Datum to int32 */
    int32 att = DatumGetInt32(heap_getattr(rettuple, 1, tupdesc, &isnull));
    elog(INFO, "len of tuple: %d", x);
    if (!isnull)
        elog(INFO, "first column being inserted x: %d", att);
    else
        elog(INFO, "first column being inserted x: NULL");
}
You might also want to take a look at the SPI interface, which simplifies access to the database from user-defined C functions:
http://www.postgresql.org/docs/current/interactive/spi.html
