I'm trying to insert an integer array into my PostgreSQL database. I'm aware that I could format everything as a string and then send that string as one SQL command. However, I believe the PQexecParams function should also help here, but I'm kind of lost as to how to use it.
//we need to convert the number into network byte order
int val1 = 131;
int val2 = 2342;
int val3[5] = { 0, 7, 15, 31, 63 };
//set the values to use
const char *values[3] = { (char *) &val1, (char *) &val2, (char *) val3 };
//calculate the lengths of each of the values
int lengths[3] = { sizeof(val1), sizeof(val2), sizeof(val3) * 5 };
//state which parameters are binary
int binary[3] = { 1, 1, 1 };
PGresult *res = PQexecParams(conn, "INSERT INTO family VALUES($1::int4, $2::int4, $3::INTEGER[])", 3, //number of parameters
NULL, //ignore the Oid field
values, //values to substitute $1, $2 and $3
lengths, //the lengths, in bytes, of each of the parameter values
binary, //whether the values are binary or not
0); //we want the result in text format
Yes, this is copied from some tutorial.
However, this returns:
ERROR: invalid array flags
Using a conventional method does work:
PQexec(conn, "INSERT INTO family VALUES (2432, 31, '{0,1,2,3,4,5}')");
This inserts the data just fine, and I can read it back out fine as well.
Any help would be greatly appreciated! :)
libpq's PQexecParams can accept values in text or binary form.
For text values, you must sprintf the integer into a buffer that you put in your char** values array. This is usually how it's done. You can use text format with query parameters; there is no particular reason to fall back to interpolating the parameters into the SQL string yourself.
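As a minimal text-mode sketch along those lines (the buffer sizes and the '{0,7,15,31,63}' literal are my additions; the rest follows the question's statement):
char buf1[12], buf2[12];
snprintf(buf1, sizeof buf1, "%d", 131);   /* render each integer as decimal text */
snprintf(buf2, sizeof buf2, "%d", 2342);
const char *values[3] = { buf1, buf2, "{0,7,15,31,63}" };  /* array as a text literal */
PGresult *res = PQexecParams(conn,
    "INSERT INTO family VALUES ($1::int4, $2::int4, $3::int4[])",
    3,       /* number of parameters */
    NULL,    /* let the casts in the SQL determine the types */
    values,
    NULL,    /* lengths are ignored for text-format parameters */
    NULL,    /* a NULL format array means every parameter is text */
    0);      /* text-format result */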
If you want to use binary mode transfers, you must instead ensure the integer is the correct size for the target field, is in network byte order, and that you have specified the type OID. Use htonl (for uint32_t) or htons (for uint16_t) for that. It's fine to cast away signedness since you're just re-ordering the bytes.
So:
You cannot ignore the OID field if you're planning to use binary transfer
Use htonl, don't brew your own byte-order conversion
Your values array construction needs care. Each entry must be a char * pointing at the raw bytes of that parameter's value; for the array parameter that means (char *) val3 or (equivalent in most/all real-world C implementations, but not technically the same per the spec) (char *) &val3[0], not a pointer to the array object itself. Even then, the bytes you point at are not yet in the format the server expects (see the next point)
You cannot assume that the on-wire format of integer[] is the same as C's int32_t[]. You must pass the type OID INT4ARRAYOID (see include/catalog/pg_type.h or select oid from pg_type where typname = '_int4' - the internal type name of an array is _ in front of its base type) and must construct a PostgreSQL array value compatible with the typreceive function in pg_type for that type (which is array_recv) if you intend to send in binary mode. In particular, binary-format arrays have a header. You cannot just leave out the header.
In other words, the code is broken in multiple exciting ways and cannot possibly work as written.
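Putting those points together, here is a rough sketch of what a binary-mode insert would have to look like. The hard-coded OIDs (23 for int4, 1007 for int4[]) and the header layout follow the stock array_recv wire format; treat it as an illustration of the moving parts, not a drop-in replacement:
#include <arpa/inet.h>   /* htonl */
#include <stdint.h>
#include <string.h>
#include <libpq-fe.h>

/* Append a 32-bit value in network byte order and advance the write pointer. */
static char *put_u32(char *p, uint32_t v)
{
    uint32_t be = htonl(v);
    memcpy(p, &be, 4);
    return p + 4;
}

/* Build the array_recv-compatible binary form of a one-dimensional int4[]. */
static int build_int4_array(char *buf, const int32_t *vals, int n)
{
    char *p = buf;
    p = put_u32(p, 1);             /* number of dimensions */
    p = put_u32(p, 0);             /* flags: 0 = no NULLs (anything else gives "invalid array flags") */
    p = put_u32(p, 23);            /* element type OID (int4) */
    p = put_u32(p, (uint32_t) n);  /* dimension 1: number of elements */
    p = put_u32(p, 1);             /* dimension 1: lower bound */
    for (int i = 0; i < n; i++) {
        p = put_u32(p, 4);                   /* per-element length in bytes */
        p = put_u32(p, (uint32_t) vals[i]);  /* element value, network byte order */
    }
    return (int) (p - buf);
}

/* ...then, with an open PGconn *conn: */
int32_t val1 = (int32_t) htonl(131);
int32_t val2 = (int32_t) htonl(2342);
int32_t elems[5] = { 0, 7, 15, 31, 63 };
char arraybuf[20 + 5 * 8];
int arraylen = build_int4_array(arraybuf, elems, 5);

Oid types[3] = { 23, 23, 1007 };                        /* int4, int4, int4[] */
const char *values[3] = { (char *) &val1, (char *) &val2, arraybuf };
int lengths[3] = { sizeof val1, sizeof val2, arraylen };
int formats[3] = { 1, 1, 1 };                           /* all parameters binary */

PGresult *res = PQexecParams(conn,
    "INSERT INTO family VALUES ($1, $2, $3)",
    3, types, values, lengths, formats, 0);
Compare those OIDs against select oid from pg_type where typname in ('int4', '_int4') on your own server before relying on them.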
Really, there is rarely any benefit in sending integers in binary mode. Sending in text mode is often actually faster, because it is frequently more compact on the wire (for small values). If you're going to use binary mode, you will need to understand how C represents integers, how network vs host byte order works, etc.
Especially when working with arrays, text format is easier.
libpq could make this a lot easier than it presently does by offering good array construct / deconstruct functions for both text and binary arrays. Patches are, as always, welcome. Right now, 3rd party libraries like libpqtypes largely fill this role.
Related
In C I can do the following:
bignum = BN_new();
BN_bin2bn(my_message, 32, bignum);
group = EC_GROUP_new_by_curve_name(NID_X9_62_prime256v1);
ecp = EC_POINT_new(group);
check = EC_POINT_set_compressed_coordinates_GFp(group, ecp, bignum, 0, NULL);
key = EC_KEY_new_by_curve_name(NID_X9_62_prime256v1);
check = EC_KEY_set_public_key(key, ecp);
check = EVP_PKEY_set1_EC_KEY(public_key, key);
In Ruby, I thought this would do the same thing, but I get an error*
bignum = OpenSSL::BN.new(my_message, 2)
group = OpenSSL::PKey::EC::Group.new('prime256v1')
group.point_conversion_form = :compressed
public_key = OpenSSL::PKey::EC::Point.new(group, bignum)
In both instances I can log bignum and see that it is the same, and I'm pretty positive prime256v1 is the correct group.
In both cases C and Ruby are using the same version of OpenSSL (OpenSSL 1.0.2p 14 Aug 2018)
Any advice on what I'm doing wrong here would be massively appreciated.
*The error message I get is invalid encoding (OpenSSL::PKey::EC::Point::Error)
The EC_POINT_set_compressed_coordinates_GFp function in C expects you to pass in the x-coordinate of the point and separately a value to specify which of the two possible points it could be (you are passing in a literal 0, in reality you should determine the actual value).
In Ruby, the Point initializer is expecting the point encoded as a string that includes information about both coordinates (this is the octet-string point encoding documented by the SECG in SEC 1). In the case of compressed coordinates this string is basically the same 32 bytes as in the C code, but with an extra byte at the start, either 0x02 or 0x03, which correspond to passing 0 or 1 as the y-bit to EC_POINT_set_compressed_coordinates_GFp.
If the string doesn’t start with 0x02 or 0x03 (or 0x04 for uncompressed points) or is the wrong length, then you will get the invalid encoding error.
It doesn’t look like the Ruby OpenSSL bindings provide a way to specify a point using separate x and y coordinates. The simplest way would be to add the 0x02 or 0x03 prefix to the string before passing it to Point.new.
If you already have this string you can use it in C to create a point using EC_POINT_oct2point. Ruby itself calls EC_POINT_oct2point if you pass a string to Point.new.
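For reference, a small C sketch of that route (choosing 0x02 here is an assumption; use 0x03 if you need the other of the two possible y values):
unsigned char encoded[33];
encoded[0] = 0x02;                     /* 0x02 or 0x03 selects which of the two points */
memcpy(encoded + 1, my_message, 32);   /* the 32-byte x-coordinate */

EC_GROUP *group = EC_GROUP_new_by_curve_name(NID_X9_62_prime256v1);
EC_POINT *ecp = EC_POINT_new(group);
if (!EC_POINT_oct2point(group, ecp, encoded, sizeof encoded, NULL)) {
    /* not a valid point encoding for this curve */
}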
I'm having a hard time figuring out how to use the repeated field rule.
for example, this is my .proto:
message Test
{
repeated float value = 1;
}
Now, I initialize a new Test object:
Test test = Test_init_zero;
Finally, I want to assign some values. For example:
float values[] = { 1.0f, 2.2f, 5.5f, 7.13f };
My question is how can I assign them?
is it like
test.value = values
//or
test.value[0] = values[0] //... etc.
and then, how do I read them back?
This depends on how you define the repeated field inside the proto file. According to the nanopb docs, you either just specify the repeated field like you did and then use a callback function to handle each item separately during encoding/decoding, or you use nanopb-specific options to get a fixed-length array:
Strings, bytes and repeated fields of any type map to callback functions by default.
If there is a special option (nanopb).max_size specified in the .proto file, string maps to null-terminated char array and bytes map to a structure containing a char array and a size field.
If (nanopb).fixed_length is set to true and (nanopb).max_size is also set, then bytes map to an inline byte array of fixed size.
If there is a special option (nanopb).max_count specified on a repeated field, it maps to an array of whatever type is being repeated. Another field will be created for the actual number of entries stored.
For example, byte arrays need to use max_size:
required bytes data = 1 [(nanopb).max_size = 40, (nanopb).fixed_length = true];
And this would create the following field, when compiled using nanopb:
// byte arrays get a special treatment in nanopb
pb_byte_t data[40];
Or, for a float, you would use max_count according to rule 4 above:
repeated float data = 1 [(nanopb).max_count = 40];
And then you'll get:
size_t data_count;
float data[40];
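So, assuming you add (nanopb).max_count to your own value field, the generated struct ends up with a value array plus a value_count field, and you can treat them as an ordinary C array. A sketch (not checked against your exact .proto):
Test test = Test_init_zero;
float values[] = { 1.0f, 2.2f, 5.5f, 7.13f };

/* assign: copy the data in and record how many entries are valid */
memcpy(test.value, values, sizeof values);
test.value_count = sizeof values / sizeof values[0];

/* read back */
for (size_t i = 0; i < test.value_count; i++)
    printf("value[%zu] = %f\n", i, test.value[i]);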
If you simply define a repeated field like you did, then nanopb will create a callback function:
// repeated float value = 1;
pb_callback_t value;
Which means you will have to provide your own function which will handle each incoming item:
yourobject.value.arg = &custom_args;
yourobject.value.funcs.decode = custom_function_for_decoding;
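For completeness, a rough sketch of what such a decode callback could look like (the exact callback signature differs slightly between nanopb versions, and my_float_list / my_float_list_append are hypothetical helpers standing in for whatever storage you point arg at):
bool custom_function_for_decoding(pb_istream_t *stream, const pb_field_t *field, void **arg)
{
    float value;
    /* floats are encoded as fixed 32-bit values on the wire; one item per call */
    if (!pb_decode_fixed32(stream, &value))
        return false;

    /* *arg is whatever was stored in yourobject.value.arg */
    my_float_list_append((struct my_float_list *) *arg, value);
    return true;
}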
I am trying to append an unknown number of bytes to a single large array. Which array type should I use? I am trying this:
len = temp_i.len();
for (i = 0; i < len; i++) begin
  bit [7:0] temp_ascii;
  temp_ascii = temp_i.getc(i);
  arr = {arr, temp_ascii};
end
where temp_i is an input string. My final aim is to convert the input string into the binary representation of its ASCII values and concatenate them together into a single large array.
I'm having a hard time choosing what kind of array to use: dynamic or associative, or whether I can use a queue.
Any help will be highly appreciated.
You use associative arrays when the index values are not consecutive, or the ordering is meaningless. Not applicable here.
You use queues when adding or removing one element at a time to an array. If arr was declared as a queue, you could write
string temp_i;
bit [7:0] arr[$];
int len;
len = temp_i.len();
for (int i = 0; i < len; i++)
arr.push_back(temp_i.getc(i));
If your strings are small, or you plan to concatenate many strings together, a queue is your best option. But if you only plan to convert one string to an array, then using a bit-stream cast to a dynamic array will be the most efficient.
string temp_i;
typedef bit [7:0] uint8_da_t[]; // typedef required for cast to target
uint8_da_t arr; // using typedef not required here, but A VERY GOOD IDEA
arr = uint8_da_t'(temp_i);
Is it supposed to be synthesizable code or a testbench?
None of the above is synthesizable.
You would do it differently in those two worlds.
Note: There are posts similar to this for C++ only; I didn't find any useful post regarding C.
I want to set all elements of an array to the same value. Of course, this can be achieved simply using a for loop.
But that consumes a lot of time, because in my algorithm this setting of an array to the same value happens many times. Is there any simple way to achieve this in C?
Use a for loop. Any decent compiler will optimize this as much as possible.
It is a near certainty that you wouldn't be able to improve substantially on the speed of your for loop. There is no magic way to set a value into multiple memory locations faster than it takes to store that value into these multiple memory locations. Regardless of whether you use the for loop or not, all the locations must be written to, which takes most of the time.
There is of course void *memset(void *ptr, int value, size_t num); for values composed of identical bytes [1], but under the hood it has a loop. Perhaps the implementation could be very smart about using that loop, but so can the optimizing compiler.
[1] Although memset takes an int, it converts it to unsigned char before setting it into the memory region.
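To make the footnote concrete, memset only gives you the value you expect when every byte of that value is identical:
#include <string.h>

int a[100];
memset(a, 0, sizeof a);      /* every int becomes 0 */
memset(a, 0xFF, sizeof a);   /* every int becomes -1 on two's-complement machines */
memset(a, 1, sizeof a);      /* careful: every int becomes 0x01010101, not 1 */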
As suggested by other users, use memset if you want to initialize your array with 0 values, but don't do it if the values are not that simple.
For more complicated values, you can have a constant copy of your initial values and copy them later with memcpy:
float original_values[100]; // don't modify these values
original_values[0] = 1.2f;
original_values[1] = 10.9f;
...
float working_values[100]; // work with these values
memcpy(working_values, original_values, 100 * sizeof(float));
// do your task
working_values[0] *= working_values[1];
...
You can use memset(). It fills the number of bytes you specify with the same byte value. Here
you can read the man page.
You can use the memset() function.
Example:
memset(<array-name>,<initialization-value>,<len>);
You can easily memset an array to 0.
If you want a different value, it all depends on the type used.
For char arrays you can memset them to any value, since char is by definition exactly one byte long.
For an array of structures, if all fields of a structure are to be initialized with 0 or NULL, you can memset it with 0.
You cannot memset an array (or an array of structures) to an arbitrary value, because memset operates on single bytes. So if you memset an int[] with 1, you will not have an array of 1's; each int ends up as 0x01010101.
To initialize an array of structures with a custom value, just fill one structure with the desired data and assign it in a for loop. The compiler should do this relatively efficiently for you.
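A minimal sketch of that idea (the structure and names are made up for illustration):
struct point { int x, y; };

struct point template_value = { 3, 7 };   /* one structure filled with the desired data */
struct point points[100];

for (int i = 0; i < 100; i++)
    points[i] = template_value;           /* plain struct assignment in a loop */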
If you are talking about initialization, see this question. If you want to set the values at a later time, then use memset.
Well, with an initializer you can only set all of an array's values to zero. Here is an example:
int arr[5]={0};
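Note that an initializer list zero-fills whatever you leave out, which is why only the all-zero case comes for free:
int zeros[5] = { 0 };   /* {0, 0, 0, 0, 0} */
int ones[5]  = { 1 };   /* {1, 0, 0, 0, 0}, not all ones */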
I am trying to make a look up table. Here is the pretext:
Suppose the following is the list of #defines of certain macros:
#define ENTITY1 0x10001001
#define ENTITY2 0x10001002
.
.
.
The ENTITYn name is the user-readable string for the otherwise unsigned-long-integer value, and there can be any number of such macros (say more than 200, or even 500).
Now, there is a list which keeps track of which entity exists in which file number. Something like this:
0x10001001 1
0x10001002 2
0x10001003 3
.
.
.
The use of the long unsigned integers for each ENTITY is necessary because of proprietary conventions.
The first list is already present, and the second list needs to be generated through a program by using the macro strings in #defines of the first list as the user enters the record.
Since the number of such entries is very large, hard coding each value is a burdensome task. Also, if the first list is updated, the second list will not update appropriately if additional switch cases are not coded.
When the user makes an entry, he says through a string variable that the entry is to be made in ENTITY3; the system should look up whether a macro by the name ENTITY3 exists. If yes, then open the file with number 3 and do the necessary processing; otherwise, display a warning that such an entry does not exist.
So, how do I compare the string variable entered by the user with a macro name without using a switch case?
I am using C programming. GNU C Library.
Edit: Here is the scenario.
The different entities named ENTITYn (n can be any number) can exist in different files which have a certain integer number 1,2,3...
But, the proprietary environment has built up these entities such that they are recognized using certain unsigned long integers like 0x01001001 etc. For each entity, the macros have been defined in some header files corresponding to those entities by the name ENTITY1 ENTITY2...
Now when a certain manager wants to change something, or enter certain data for a particular entity, he would address it by the name ENTITYn, and the program would look in a lookup table for a corresponding entry. If a match is found, it would use the unsigned long integer code for that entity for subsequent processing internal to the proprietary system, access another lookup table which records which file number has this entry, and open that file location for processing.
I need to populate this second table with the unsigned long ints of the entities and their corresponding locations (let all of them be in a single file 1 for now). I want to avoid requiring whoever builds that LUT to know the corresponding entity unsigned long integer codes; the program should take the input string, i.e. ENTITY1, and map it directly.
But now I am beginning to think that hardcoding a LUT would be a better option. :)
Macro names don't exist in a C program. The preprocessor has replaced every instance of the macro name by its substitution value. If I understand your problem correctly, you'll probably need some kind of lookup table, like:
#define ENTITY1 0x10001001
#define ENTITY2 0x10001002
#define STR(x) #x
struct lookup { char *name; unsigned value; } ;
struct lookup mylut[] = {
{ STR(ENTITY1), ENTITY1 }
, { STR(ENTITY2), ENTITY2 }
};
The preprocessor will expand that to:
struct lookup { char *name; unsigned value; } ;
struct lookup mylut[] = {
{ "ENTITY1", 0x10001001 }
, { "ENTITY2", 0x10001002 }
};
, which you can use to look up the string literals.
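A lookup over that table can then be a simple linear scan; the function name and the choice of 0 as the "not found" value are my own:
#include <string.h>

unsigned lookup_entity(const char *name)
{
    for (size_t i = 0; i < sizeof mylut / sizeof mylut[0]; i++)
        if (strcmp(mylut[i].name, name) == 0)
            return mylut[i].value;
    return 0;   /* assumption: 0 is never a valid entity code */
}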
Macros are preprocessor features, they're not visible to the C compiler. So you cannot directly reference the "values" of macros from code.
It seems you need two look-up tables, if I get this correctly:
One table mapping a string such as ENTITY1 to a unique unsigned integer, such as 0x10001001.
One table mapping an unsigned integer such as 0x10001001 to a "file number" which looks like a (small) unsigned integer such as 1.
Both of these tables can be generated by processing the source code you seem to have. I would recommend gathering the ENTITYn strings into something like this:
struct entity_info
{
const char *name;
unsigned int key;
};
Then have your pre-processing code build a sorted array of these:
const struct entity_info entities[] = {
{ "ENTITY1", 0x10001001 },
{ "ENTITY2", 0x10001002 },
/* and so on */
};
Now you can implement an efficient function like this:
unsigned int get_entity_key(const char *entity_name);
It could perhaps use binary search internally; a sketch of that follows.
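Something like the following, assuming entities[] is kept sorted by name in strcmp order (note that "ENTITY10" sorts before "ENTITY2" lexically) and that 0 can serve as a "not found" result:
#include <stdlib.h>
#include <string.h>

static int cmp_entity(const void *key, const void *elem)
{
    return strcmp((const char *) key,
                  ((const struct entity_info *) elem)->name);
}

unsigned int get_entity_key(const char *entity_name)
{
    const struct entity_info *e =
        bsearch(entity_name, entities,
                sizeof entities / sizeof entities[0], sizeof entities[0],
                cmp_entity);
    return e ? e->key : 0;
}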
Then you need to do the second step, obviously. I'm not sure of the exact details of these values (how and when they can change); if the "file number" for a given entity is constant, it could of course be added directly into the entity_info structure.
So, how do I compare the string variable entered by the user with a macro name?
You can't. Macros exist only at compile-time (technically, only at preprocess-time, which happens before compile-time).
I'm not going to suggest a solution until I'm sure I understand your scenario correctly (see my comment above).