How can I get data from the row being inserted in a C trigger?

PostgreSQL 9.1.0. OS: Ubuntu 11.10. Compiler: gcc 4.6.1.
Here is my table:
CREATE TABLE ttest
(
  x integer,
  str text
)
WITH (
  OIDS=FALSE
);
ALTER TABLE ttest OWNER TO postgres;
CREATE TRIGGER tb
BEFORE INSERT
ON ttest
FOR EACH ROW
EXECUTE PROCEDURE out_trig();
out_trig is a C function.
Now I'm trying to get data from each row being inserted. Here is the code:
if (TRIGGER_FIRED_BY_INSERT(trigdata->tg_event))
{
    rettuple = trigdata->tg_trigtuple;
    bool isnull = false;
    uint32 x = rettuple->t_len;
    int64 f;
    f = (int64) GetAttributeByNum(rettuple->t_data, 1, &isnull); // error here
    elog(INFO, "len of tuple: %d", x);
    elog(INFO, "first column being inserted x: %d", f);
}
I got ERROR: record type has not been registered
SQL state: 42809
What am I doing wrong, and how do I do it correctly?

GetAttributeByNum (or GetAttributeByName) only works with Datums, not with on-disk tuples; use heap_getattr instead.
You've declared x as integer but are trying to read it as int64 (PostgreSQL uses int4 for integer columns unless you explicitly declare them as int8).
Last but not least, use the DatumGet[YourType] macros when calling functions that return Datums; casting the value directly to the desired type breaks portability.
Long and short, the code should become something like this:
if (TRIGGER_FIRED_BY_INSERT(trigdata->tg_event))
{
    HeapTuple rettuple = trigdata->tg_trigtuple;
    TupleDesc tupdesc = trigdata->tg_relation->rd_att;
    bool isnull = false;
    uint32 x = rettuple->t_len;
    int32 att = DatumGetInt32(heap_getattr(rettuple, 1, tupdesc, &isnull));

    elog(INFO, "len of tuple: %d", x);
    if (!isnull)
        elog(INFO, "first column being inserted x: %d", att);
    else
        elog(INFO, "first column being inserted x: NULL");
}
You might also want to take a look at the SPI interface, which simplifies access to the database from user-defined C functions:
http://www.postgresql.org/docs/current/interactive/spi.html
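For illustration, here is a minimal sketch of what an SPI call could look like from inside the same trigger body. This is an assumption-laden example, not part of the original answer: the query and the use of SPI_getvalue are purely illustrative.
/* needs #include "executor/spi.h" at the top of the file */
if (SPI_connect() != SPI_OK_CONNECT)
    elog(ERROR, "SPI_connect failed");

/* run an illustrative query; SPI_tuptable holds the result rows */
if (SPI_exec("SELECT count(*) FROM ttest", 0) == SPI_OK_SELECT && SPI_processed > 0)
{
    /* SPI_getvalue returns the column value as a palloc'd C string */
    char *count = SPI_getvalue(SPI_tuptable->vals[0], SPI_tuptable->tupdesc, 1);
    elog(INFO, "rows already in ttest: %s", count ? count : "NULL");
}

SPI_finish();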

Related

Postgres C - Comparing Two Datum

I am trying to do this in C so I can learn how to program C stored procedures in Postgres. Please help me find the answer in C, regardless of whether there is a solution in pure SQL.
I would like to write a simple C function that can be called as follows.
SELECT f(col1, col2, col3, col4) from T
If col1 < col2, return col3
If col1 > col2, return col4
Else return null
I do not know the types of any of the columns before the function is called. However, it is guaranteed that col1 and col2 will be of the same type (again, that type won't be known until the function is called). It is also guaranteed that col3 and col4 are of the same type as one another, but not necessarily the same as the type of col1 or col2. I am deliberately placing these constraints to avoid the use of anyelement in the answers; the argument types and the return type are unknown, and we cannot use variadic anyelement since all types in the signature are not the same.
I started to write the function.
Datum f(PG_FUNCTION_ARGS)
{
    int nargs;
    Datum *args;
    bool *nulls;
    Oid *types;

    /* fetch argument values to build the object */
    nargs = extract_variadic_args(fcinfo, 0, false, &args, &types, &nulls);
    if (args[0] < args[1]) PG_RETURN_DATUM(args[2]); // This has to be wrong
    else if (args[0] > args[1]) PG_RETURN_DATUM(args[3]);
    else PG_RETURN_NULL();
}
CREATE FUNCTION f(VARIADIC "any") RETURNS ??IDontKnow??
AS '/var/lib/postgresql/f.so'
LANGUAGE C IMMUTABLE STRICT;
I quickly found myself stuck with two issues.
I do not know how to do a type-safe comparison of two datums if I don't know their types beforehand.
I am not sure what the return type is for the CREATE FUNCTION.
EDIT - Progressing To A Working Comparison (Is it right? Who knows?)
Datum max_by_cols(PG_FUNCTION_ARGS)
{
    int nargs;
    Datum *args;
    bool *nulls;
    Oid *types;
    Datum c;

    /* fetch argument values to build the object */
    nargs = extract_variadic_args(fcinfo, 0, false, &args, &types, &nulls);

    Oid elemtype = get_fn_expr_argtype(fcinfo->flinfo, 0);
    TypeCacheEntry *typentry = lookup_type_cache(elemtype, TYPECACHE_CMP_PROC_FINFO);
    FmgrInfo *cmpfunc = &typentry->cmp_proc_finfo;

    c = FunctionCall2Coll(cmpfunc, typentry->typcollation, args[0], args[1]);
    PG_RETURN_DATUM(c);
}
CREATE OR REPLACE FUNCTION max_by_cols(VARIADIC "any") RETURNS INT
AS '/var/lib/postgresql/max_by_cols.so'
LANGUAGE C IMMUTABLE STRICT;
create table test_t2(rc text, rc2 text);
insert into test_t2 values
('Bob','Tom'),
('Tom', 'Bob'),
('Rich', 'Rich');
create table test_t3(rc int, rc2 int);
insert into test_t3 values
(1,2),
(2,1),
(3,3);
select max_by_cols(rc, rc2) from test_t2;
-18
18
0
select max_by_cols(rc, rc2) from test_t3;
-1
1
0
The comparison now works for at least the two different data types I tested above. I do wonder if my compares should be using SortSupport, but that seems to be even more involved. I also still don't know how to specify an unknown return type (that isn't anyelement) in a CREATE FUNCTION statement. Maybe the magic word is INTERNAL. Time will tell. By then, I suspect I might be good friends with Laurenz, but first I must pass the twelve labors.
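For reference, a hedged sketch of where the comparison could go from here: the btree comparison proc returns an int32 that is negative, zero, or positive, so its sign can pick which Datum to return (the declared return type problem is still open, so treat this as a fragment, not a working answer).
/* illustrative continuation of max_by_cols, replacing PG_RETURN_DATUM(c) */
int32 cmp = DatumGetInt32(FunctionCall2Coll(cmpfunc,
                                            typentry->typcollation,
                                            args[0], args[1]));
if (cmp < 0)
    PG_RETURN_DATUM(args[2]);   /* col1 < col2: return col3 */
else if (cmp > 0)
    PG_RETURN_DATUM(args[3]);   /* col1 > col2: return col4 */
else
    PG_RETURN_NULL();           /* equal: return NULL */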
Edit 2 - A good non-C solution
https://dba.stackexchange.com/questions/63661/in-postgresql-is-there-a-type-safe-first-aggregate-function
select (array_agg(rc ORDER BY col1 desc, col2 desc, col3 desc))[1] as rc_value_prioritized
from test_t
I will continue to try to solve in C as time permits, but this solution seems quite good.

GORM and SQL Server: auto-incrementation does not work

I am trying to insert a new value into my SQL Server table using GORM. However, it keeps returning errors. Below you can find a detailed example:
type MyStructure struct {
    ID       int32  `gorm:"primaryKey;autoIncrement:true"`
    SomeFlag bool   `gorm:"not null"`
    Name     string `gorm:"type:varchar(60)"`
}
Executing the code below (with Create inside a transaction)
myStruct := MyStructure{SomeFlag: true, Name: "XYZ"}
result = tx.Create(&myStruct)
if result.Error != nil {
    return result.Error
}
results in the following error:
Cannot insert the value NULL into column 'ID', table 'dbo.MyStructures'; column does not allow nulls. INSERT fails
The SQL query generated by GORM then looks as follows:
INSERT INTO "MyStructures" ("SomeFlag","Name") OUTPUT INSERTED."ID" VALUES (1, 'XYZ')
On the other hand, executing Create directly on the DB connection (without using a transaction) results in the following error:
Table 'MyStructures' does not have the identity property. Cannot perform SET operation
The SQL query generated by GORM then looks as follows:
SET IDENTITY_INSERT "MyStructures" ON;INSERT INTO "MyStructures" ("SomeFlag", "Name") OUTPUT INSERTED."ID" VALUES (1, 'XYZ');SET IDENTITY_INSERT "MyStructures" OFF;
How can I make auto-incrementation work in this case?
Why do I get two different errors depending on whether the insert happens inside or outside a transaction?
I found this in GORM's issues:
gorm.DefaultCallback.Create().Remove("mssql:set_identity_insert")
https://github.com/go-gorm/gorm/issues/941#issuecomment-250267125
Just replace your struct
type MyStructure struct {
    ID       int32  `gorm:"primaryKey;autoIncrement:true"`
    SomeFlag bool   `gorm:"not null"`
    Name     string `gorm:"type:varchar(60)"`
}
with this
type MyStructure struct {
    ID       int32  `gorm:"AUTO_INCREMENT;PRIMARY_KEY;not null"`
    SomeFlag bool   `gorm:"not null"`
    Name     string `gorm:"type:varchar(60)"`
}
It's always better to embed gorm.Model in the struct, which provides the following fields by default: ID, CreatedAt, UpdatedAt, DeletedAt. ID will be the primary key by default, and it is auto-incremented (managed by GORM).
type MyStructure struct {
    gorm.Model
    SomeFlag bool   `gorm:"not null"`
    Name     string `gorm:"type:varchar(60)"`
}
Drop the existing table with db.Migrator().DropTable(&MyStructure{}), create it again with db.AutoMigrate(&MyStructure{}), and then try to insert the record.

SQLite: check if a table exists in C [duplicate]

How do I, reliably, check in SQLite, whether a particular user table exists?
I am not asking for unreliable ways like checking if a "select *" on the table returned an error or not (is this even a good idea?).
The reason is like this:
In my program, I need to create and then populate some tables if they do not exist already.
If they do already exist, I need to update some tables.
Should I take some other path instead to signal that the tables in question have already been created - say for example, by creating/putting/setting a certain flag in my program initialization/settings file on disk or something?
Or does my approach make sense?
I missed that FAQ entry.
Anyway, for future reference, the complete query is:
SELECT name FROM sqlite_master WHERE type='table' AND name='{table_name}';
Where {table_name} is the name of the table to check.
Documentation section for reference: Database File Format. 2.6. Storage Of The SQL Database Schema
This will return a list of tables with the specified name; that is, the cursor will have a count of 0 (the table does not exist) or a count of 1 (it does exist).
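Since the question asks about C, here is a hedged sketch of running that query through the SQLite C API; db is assumed to be an already-open sqlite3 handle, and the function name is just illustrative.
#include <sqlite3.h>

/* returns 1 if the table exists, 0 otherwise */
int table_exists(sqlite3 *db, const char *table_name)
{
    sqlite3_stmt *stmt;
    int exists = 0;
    const char *sql =
        "SELECT name FROM sqlite_master WHERE type='table' AND name=?";

    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK)
        return 0;                 /* treat prepare failure as "not found" */

    /* bind the table name instead of concatenating it into the SQL */
    sqlite3_bind_text(stmt, 1, table_name, -1, SQLITE_TRANSIENT);

    if (sqlite3_step(stmt) == SQLITE_ROW)
        exists = 1;               /* a row came back, so the table exists */

    sqlite3_finalize(stmt);
    return exists;
}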
If you're using SQLite version 3.3+ you can easily create a table with:
create table if not exists TableName (col1 typ1, ..., colN typN)
In the same way, you can remove a table only if it exists by using:
drop table if exists TableName
A variation would be to use SELECT COUNT(*) instead of SELECT NAME, i.e.
SELECT count(*) FROM sqlite_master WHERE type='table' AND name='table_name';
This will return 0 if the table doesn't exist, or 1 if it does. This is probably useful in your program, since a numerical result is quicker/easier to process. The following illustrates how you would do this in Android using SQLiteDatabase, Cursor, and rawQuery with parameters.
boolean tableExists(SQLiteDatabase db, String tableName)
{
    if (tableName == null || db == null || !db.isOpen())
    {
        return false;
    }
    Cursor cursor = db.rawQuery(
        "SELECT COUNT(*) FROM sqlite_master WHERE type = ? AND name = ?",
        new String[] {"table", tableName}
    );
    if (!cursor.moveToFirst())
    {
        cursor.close();
        return false;
    }
    int count = cursor.getInt(0);
    cursor.close();
    return count > 0;
}
You could try:
SELECT name FROM sqlite_master WHERE name='table_name'
See (7) How do I list all tables/indices contained in an SQLite database in the SQLite FAQ:
SELECT name FROM sqlite_master
WHERE type='table'
ORDER BY name;
Use:
PRAGMA table_info(your_table_name)
If the resulting table is empty then your_table_name doesn't exist.
Documentation:
PRAGMA schema.table_info(table-name);
This pragma returns one row for each column in the named table. Columns in the result set include the column name, data type, whether or not the column can be NULL, and the default value for the column. The "pk" column in the result set is zero for columns that are not part of the primary key, and is the index of the column in the primary key for columns that are part of the primary key.
The table named in the table_info pragma can also be a view.
Example output:
cid|name|type|notnull|dflt_value|pk
0|id|INTEGER|0||1
1|json|JSON|0||0
2|name|TEXT|0||0
SQLite table names are case-insensitive, but comparison is case-sensitive by default. To make this work properly in all cases, you need to add COLLATE NOCASE.
SELECT name FROM sqlite_master WHERE type='table' AND name='table_name' COLLATE NOCASE
If you are getting a "table already exists" error, change the SQL string as below:
CREATE table IF NOT EXISTS table_name (para1,para2);
This way you can avoid the exceptions.
If you're using FMDB, I think you can just import FMDatabaseAdditions and use the bool function:
[yourfmdbDatabase tableExists:tableName].
The following code returns 1 if the table exists or 0 if it does not.
SELECT CASE WHEN tbl_name = 'name' THEN 1 ELSE 0 END FROM sqlite_master WHERE tbl_name = 'name' AND type = 'table'
Note that to check whether a table exists in the TEMP database, you must use sqlite_temp_master instead of sqlite_master:
SELECT name FROM sqlite_temp_master WHERE type='table' AND name='table_name';
Here's the function that I used, given an SQLDatabase object db:
public boolean exists(String table) {
    try {
        db.query("SELECT * FROM " + table);
        return true;
    } catch (SQLException e) {
        return false;
    }
}
Use this code:
SELECT name FROM sqlite_master WHERE type='table' AND name='yourTableName';
If the returned array count is equal to 1, the table exists; otherwise it does not.
import sqlite3

class CPhoenixDatabase():
    def __init__(self, dbname):
        self.dbname = dbname
        self.conn = sqlite3.connect(dbname)

    def is_table(self, table_name):
        """This method seems to be working now"""
        query = "SELECT name FROM sqlite_master WHERE type='table' AND name='" + table_name + "';"
        cursor = self.conn.execute(query)
        result = cursor.fetchone()
        if result is None:
            return False
        else:
            return True
Note: This is working now on my Mac with Python 3.7.1
You can use the following query to check whether the table exists:
SELECT name FROM sqlite_master WHERE name='table_name'
Here 'table_name' is the name of the table you created. For example:
CREATE TABLE IF NOT EXISTS country(country_id INTEGER PRIMARY KEY AUTOINCREMENT, country_code TEXT, country_name TEXT)
and then check:
SELECT name FROM sqlite_master WHERE name='country'
Use
SELECT 1 FROM table LIMIT 1;
to prevent all records from being read.
Using a simple SELECT query is, in my opinion, quite reliable. Most of all, it can check table existence in many different database types (SQLite/MySQL).
SELECT 1 FROM table;
This makes sense when you can use another reliable mechanism for determining whether the query succeeded (for example, querying a database via QSqlQuery in Qt).
The most reliable way I have found in C# right now, using the latest sqlite-net-pcl NuGet package (1.5.231), which uses SQLite 3, is as follows:
var result = database.GetTableInfo(tableName);
if ((result == null) || (result.Count == 0))
{
    database.CreateTable<T>(CreateFlags.AllImplicit);
}
The function dbExistsTable() from the R DBI package simplifies this problem for R programmers. See the example below:
library(DBI)
con <- dbConnect(RSQLite::SQLite(), ":memory:")
# let us check if table iris exists in the database
dbExistsTable(con, "iris")
### returns FALSE
# now let us create the table iris below,
dbCreateTable(con, "iris", iris)
# Again let us check if the table iris exists in the database,
dbExistsTable(con, "iris")
### returns TRUE
I thought I'd put my 2 cents into this discussion, even though it's a rather old one.
This query returns scalar 1 if the table exists and 0 otherwise.
select
case when exists
(select 1 from sqlite_master WHERE type='table' and name = 'your_table')
then 1
else 0
end as TableExists
My preferred approach:
SELECT "name" FROM pragma_table_info("table_name") LIMIT 1;
If you get a row result, the table exists. This is better (for me) than checking with sqlite_master, as it will also check attached and temp databases.
This is my code for SQLite Cordova:
get_columnNames('LastUpdate', function (data) {
    if (data.length > 0) { // In data you also have columnNames
        console.log("Table full");
    }
    else {
        console.log("Table empty");
    }
});
And the other one:
function get_columnNames(tableName, callback) {
    myDb.transaction(function (transaction) {
        var query_exec = "SELECT name, sql FROM sqlite_master WHERE type='table' AND name ='" + tableName + "'";
        transaction.executeSql(query_exec, [], function (tx, results) {
            var columnNames = [];
            var len = results.rows.length;
            if (len > 0) {
                var columnParts = results.rows.item(0).sql.replace(/^[^\(]+\(([^\)]+)\)/g, '$1').split(','); // RegEx
                for (var i in columnParts) {
                    if (typeof columnParts[i] === 'string')
                        columnNames.push(columnParts[i].split(" ")[0]);
                }
                callback(columnNames);
            }
            else callback(columnNames);
        });
    });
}
Check whether a table exists in the database, in Swift:
func tableExists(_ tableName: String) -> Bool {
    sqlStatement = "SELECT name FROM sqlite_master WHERE type='table' AND name='\(tableName)'"
    var exists = false
    if sqlite3_prepare_v2(database, sqlStatement, -1, &compiledStatement, nil) == SQLITE_OK {
        if sqlite3_step(compiledStatement) == SQLITE_ROW {
            exists = true
        }
    }
    // finalize before returning; in the original the finalize call was unreachable
    sqlite3_finalize(compiledStatement)
    return exists
}
A C++ function that checks the db and all attached databases for the existence of a table and (optionally) a column:
bool exists(sqlite3 *db, string tbl, string col="1")
{
    sqlite3_stmt *stmt;
    bool b = sqlite3_prepare_v2(db, ("select " + col + " from " + tbl).c_str(),
                                -1, &stmt, 0) == SQLITE_OK;
    sqlite3_finalize(stmt);
    return b;
}
Edit: I recently discovered the sqlite3_table_column_metadata function. Hence:
bool exists(sqlite3 *db, const char *tbl, const char *col = 0)
{
    return sqlite3_table_column_metadata(db, 0, tbl, col, 0, 0, 0, 0, 0) == SQLITE_OK;
}
You can also use the database metadata to check whether the table exists.
DatabaseMetaData md = connection.getMetaData();
ResultSet resultSet = md.getTables(null, null, tableName, null);
if (resultSet.next()) {
    return true;
}
If you are running it from a Python file using sqlite3: open a command prompt or bash (whatever you use) and first run python3 file_name.py, the file your SQL code is written in. Then run sqlite3 file_name.db. The .tables command will list the tables if they exist.
I wanted to add to Diego Vélez's answer regarding the PRAGMA statement.
From https://sqlite.org/pragma.html we get some useful functions that can return information about our database.
Here I quote the following:
For example, information about the columns in an index can be read using the index_info pragma as follows:
PRAGMA index_info('idx52');
Or, the same content can be read using:
SELECT * FROM pragma_index_info('idx52');
The advantage of the table-valued function format is that the query can return just a subset of the PRAGMA columns, can include a WHERE clause, can use aggregate functions, and the table-valued function can be just one of several data sources in a join...
Diego's answer gave PRAGMA table_info(table_name) as an option, but this won't be of much use in your other queries.
So, to answer the OP's question and to improve Diego's answer, you can do
SELECT * FROM pragma_table_info('table_name');
or even better,
SELECT name FROM pragma_table_list('table_name');
if you want to mimic PoorLuzer's top-voted answer.
If you deal with a big table, I made a simple hack with Python and SQLite, and you can apply a similar idea in any other language.
Step 1: Don't use IF NOT EXISTS in your CREATE TABLE command.
As you may know, if you run this command after you have already created the table and you try to create it again, it will raise an exception, which leads us to the 2nd step.
Step 2: Use try and except (or try and catch in other languages) to handle that exception.
If you didn't create the table before, the try case will continue; but if you already did, you can do your processing in the except case, knowing that you already created the table.
Here is the code:
def create_table():
    con = sqlite3.connect("lists.db")
    cur = con.cursor()
    try:
        cur.execute('''CREATE TABLE UNSELECTED(
            ID INTEGER PRIMARY KEY)''')
        print('the table is created now')
    except sqlite3.OperationalError:
        print('you already created the table before')
    con.commit()
    cur.close()
You can use a simple way; I use this method in C# with Xamarin.
public class LoginService : ILoginService
{
    private SQLiteConnection dbconn;
}
In the login service class I have many methods for accessing the data in SQLite. I store the data in a table, and the login page is only shown when the user is not logged in.
For this purpose I only need to know whether the table exists; in this case, if it exists, it is because it has data.
public int ExisteSesion()
{
    var rs = dbconn.GetTableInfo("Sesion");
    return rs.Count;
}
If the table does not exist, this simply returns 0; if the table exists, a non-zero count comes back, which in my case means it has data.
In the model I have specified the name that the table must have to ensure correct operation.
[Table("Sesion")]
public class Sesion
{
[PrimaryKey]
public int Id { get; set; }
public string Token { get; set; }
public string Usuario { get; set; }
}
Look into the "try - throw - catch" construct in C++. Most other programming languages have a similar construct for handling errors.

"Invalid data conversion" in DB2 with prepared statements and batch

I am using JDBC to create a temporary table, add records to it (with a prepared statement and batch), and then transfer everything to another table:
String createTemporaryTable = "declare global temporary table temp_table (RECORD smallint,RANDOM_INTEGER integer,RANDOM_FLOAT float,RANDOM_STRING varchar(600)) ON COMMIT PRESERVE ROWS in TEMP";
statement.execute(createTemporaryTable);
String sql = "INSERT INTO session.temp_table (RECORD,RANDOM_INTEGER,RANDOM_FLOAT,RANDOM_STRING) VALUES (?,?,?,?)";
PreparedStatement preparedStatement = connection.prepareStatement(sql);
float f = 0.7401298f;
Integer integer = 123456789;
String string = "This is a string that will be inserted into the table over and over again.";
// add however many random records you want to the temporary table
int numberOfRecordsToInsert = 35000;
for (int i = 0; i < numberOfRecordsToInsert; i++) {
    preparedStatement.setInt(1, i);
    preparedStatement.setInt(2, integer);
    preparedStatement.setFloat(3, (float) f);
    preparedStatement.setString(4, string);
    preparedStatement.addBatch();
}
preparedStatement.executeBatch();
// transfer everything from the temporary table just created to the main table
String transferFromTempTableToMain = "insert into main_table select * from session.temp_table";
statement.execute(transferFromTempTableToMain);
This works fine up to about 30000 records in this example. However, if I insert, say, 35000 records, I get the following error:
Invalid data conversion: Requested conversion would result in a loss
of precision of 32768. ERRORCODE=-4461, SQLSTATE=42815
The problem is that the field RECORD is a smallint. A smallint is a signed 16-bit integer with a range of -32768 to 32767.
So inserting an int value of 32768 is not allowed, as it won't fit. You need to declare RECORD as INTEGER instead (i.e. RECORD integer rather than RECORD smallint in the DECLARE GLOBAL TEMPORARY TABLE statement).

Convert SQL Server varbinary(max) into a set of primary keys of type int

Disclaimer: not my code, not my database design!
I have a column of censusblocks(varbinary(max), null) in a MS SQL Server 2008 db table (call it foo for simplicity).
This column is actually null or a 1-to-n-long list of ints. The ints are actually foreign keys into another table (call it censusblock, with a PK id of type int), numbering from 1 to ~9600000.
I want a query that extracts the censusblocks list from foo and uses the extracted list of ints from each row to look up the corresponding censusblock rows. There's a long, boring rest of the query that will be used from there, but it needs to start with the census blocks pulled from the foo table's censusblocks column.
This conversion-and-look-up is currently handled on the middle tier, with a small .NET utility class to convert from List<int> to byte[] (and vice versa), which is then written into/read from the db as varbinary. I would like to do the same thing, purely in SQL.
The desired query would go something along the lines of
SELECT f.id, c.id
FROM foo f
LEFT OUTER JOIN censusblock c ON
c.id IN f.censusblocks --this is where the magic happens
where f.id in (1,2)
Which would result in:
f.id | c.id
1    | 8437314
1    | 8438819
1    | 8439744
1    | 8441795
1    | 8442741
1    | 8444984
1    | 8445568
1    | 8445641
1    | 8447953
2    | 5860657
2    | 5866881
2    | 5866881
2    | 5866858
2    | 5862557
2    | 5870475
2    | 5868983
2    | 5865207
2    | 5863465
2    | 5867301
2    | 5864057
2    | 5862256
NB: the 7-digit results are coincidental. The range is, as stated above, 1-7 digits.
The actual censusblocks column looks like
SELECT TOP 2 censusblocks FROM foo
which results in
censusblocks
0x80BE4280C42380C7C080CFC380D37580DC3880DE8080DEC980E7D1
0x596D3159858159856A59749D59938B598DB7597EF7597829598725597A79597370
For further clarification, here are the guts of the .NET utility class's conversion methods:
public static List<int> getIntegersFromBytes(byte[] data)
{
    List<int> values = new List<int>();
    if (data != null && data.Length > 2)
    {
        long ids = data.Length / 3;
        byte[] oneId = new byte[4];
        oneId[0] = 0;
        for (long i = 0; i < ids; i++)
        {
            oneId[0] = 0;
            Array.Copy(data, i * 3, oneId, 1, 3);
            if (BitConverter.IsLittleEndian)
            { Array.Reverse(oneId); }
            values.Add(BitConverter.ToInt32(oneId, 0));
        }
    }
    return values;
}
public static byte[] getBytesFromIntegers(List<int> values)
{
    byte[] data = null;
    if (values != null && values.Count > 0)
    {
        data = new byte[values.Count * 3];
        int count = 0;
        byte[] idBytes = null;
        foreach (int id in values)
        {
            idBytes = BitConverter.GetBytes(id);
            if (BitConverter.IsLittleEndian)
            { Array.Reverse(idBytes); }
            Array.Copy(idBytes, 1, data, count * 3, 3);
            count++;
        }
    }
    return data;
}
An example of how this might be done; it is unlikely to scale brilliantly.
If you have a numbers table in your database, it should be used in place of nums_cte.
This works by converting the binary value to a literal hex string, then reading it in 8-character chunks:
-- create test data
DECLARE @foo TABLE
(
    id int,
    censusblocks varbinary(max)
)
DECLARE @censusblock TABLE
(
    id int
)

INSERT @censusblock (id)
VALUES (1),(2),(1003),(5030),(5031),(2),(6)

INSERT @foo (id, censusblocks)
VALUES (1, 0x0000000100000002000003EB),
       (2, 0x000013A6000013A7)

-- query
DECLARE @biMaxLen bigint
SELECT @biMaxLen = MAX(LEN(CONVERT(varchar(max), censusblocks, 2))) FROM @foo

;WITH nums_cte
AS
(
    SELECT TOP (@biMaxLen) ((ROW_NUMBER() OVER (ORDER BY a.type) - 1) * 8) AS n
    FROM master..spt_values AS a
    CROSS JOIN master..spt_values AS b
)
, binCTE
AS
(
    SELECT d.id, CAST(CONVERT(binary(4), SUBSTRING(s, n + 1, 8), 2) AS int) AS cblock
    FROM (SELECT id, CONVERT(varchar(max), censusblocks, 2) AS s FROM @foo) AS d
    JOIN nums_cte
        ON n < LEN(d.s)
)
SELECT *
FROM binCTE AS b
LEFT JOIN @censusblock c
    ON c.id = b.cblock
ORDER BY b.id, b.cblock
You could also consider adding your existing .NET conversion methods into the database as an assembly and accessing them through CLR functions.
This is off-topic, but I couldn't resist rewriting these conversions so they use IEnumerables instead of arrays and Lists. This might not be faster per se, but it is more general and would allow you to perform the conversion without loading the whole array at once, which may be helpful if the arrays you are dealing with are large.
Here it is, for what it's worth:
static IEnumerable<int> BytesToInts(IEnumerable<byte> bytes) {
    var buff = new byte[4];
    using (var en = bytes.GetEnumerator()) {
        while (en.MoveNext()) {
            buff[0] = en.Current;
            if (en.MoveNext()) {
                buff[1] = en.Current;
                if (en.MoveNext()) {
                    buff[2] = en.Current;
                    if (en.MoveNext()) {
                        buff[3] = en.Current;
                        if (BitConverter.IsLittleEndian)
                            Array.Reverse(buff);
                        yield return BitConverter.ToInt32(buff, 0);
                        continue;
                    }
                }
            }
            throw new ArgumentException("Wrong number of bytes.", "bytes");
        }
    }
}
static IEnumerable<byte> IntsToBytes(IEnumerable<int> ints) {
    if (BitConverter.IsLittleEndian)
        return ints.SelectMany(
            b => {
                var buff = BitConverter.GetBytes(b);
                Array.Reverse(buff);
                return buff;
            }
        );
    return ints.SelectMany(BitConverter.GetBytes);
}
Your code seems to encode an int into 3 bytes instead of 4, which would cause problems with values that don't fit into 3 bytes (including negatives); is that intentional?
BTW, you should be able to adapt this (or your) code for execution in SQL Server CLR. That is not exactly "in SQL", but it is "in the DBMS".
You can use Convert(int, censusblocks) to convert the varbinary value to an int value.
Then you can join on that column.
Or have I misunderstood the question?
