G-WAN Key-Value persistent store (C)

I want to use the G-WAN Key-Value API to record and read a number of data items (in a multi-threaded way). The problem is that my entries are only available in the current page and therefore cannot be used by my other pages.
Can you show me an example or explain how to create a persistent KV store (one that will be accessible from all my subdomains)?
Here is an example that I currently use:
kv_t store;
kv_init(&store, "users", 10, 0, 0, 0);
kv_item item;
item.key = "pierre";
item.klen = sizeof("pierre") - 1;
item.val = "pierre@example.com";
item.flags = 0;
kv_add(&store, &item);
char *p = kv_get(&store, "pierre", sizeof("pierre") - 1);
xbuf_xcat(get_reply(argv), "<br>pierre's email address: %s<br>", p);
but it is not persistent.

As G-WAN scripts are compiled and linked independently, 'global' variables are 'static' (to each script) rather than available for all scripts.
So, you have to attach the KV store to a persistent pointer. G-WAN offers persistent pointers with different scopes:
US_REQUEST_DATA = 200, // Request-wide pointer
US_HANDLER_DATA, // Listener-wide pointer
US_VHOST_DATA, // Virtual-Host-wide pointer
US_SERVER_DATA, // global pointer (for maintenance script)
There are several G-WAN script examples demonstrating how to do that:
http://gwan.ch/source/persistence.c
http://gwan.ch/source/stream1.c
http://gwan.ch/source/forum.c
etc.
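From memory of the persistence.c sample (check that sample for the exact get_env() call in your G-WAN version), the pattern looks roughly like the sketch below: fetch the virtual-host-wide slot, create the store once, and reuse it from every script of that host.
// Rough sketch, modeled on gwan.ch/source/persistence.c (not verbatim):
// attach the KV store to the virtual-host-wide slot so all scripts share it.
kv_t **store = (kv_t**)get_env(argv, US_VHOST_DATA);
if(!store[0])                                 // first call: create the store once
{
    store[0] = (kv_t*)calloc(1, sizeof(kv_t));
    kv_init(store[0], "users", 10, 0, 0, 0);
}

kv_item item;
item.key   = "pierre";
item.klen  = sizeof("pierre") - 1;
item.val   = "pierre@example.com";
item.flags = 0;
kv_add(store[0], &item);

// any other script of this virtual host can now do:
char *p = kv_get(store[0], "pierre", sizeof("pierre") - 1);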


Why call `free(malloc(8))`?

The Objective-C runtime's hashtable2.mm file contains the following code:
static void bootstrap (void) {
    free(malloc(8));
    prototypes = ALLOCTABLE (DEFAULT_ZONE);
    prototypes->prototype = &protoPrototype;
    prototypes->count = 1;
    prototypes->nbBuckets = 1; /* has to be 1 so that the right bucket is 0 */
    prototypes->buckets = ALLOCBUCKETS(DEFAULT_ZONE, 1);
    prototypes->info = NULL;
    ((HashBucket *) prototypes->buckets)[0].count = 1;
    ((HashBucket *) prototypes->buckets)[0].elements.one = &protoPrototype;
}
Why does it allocate and immediately release the 8-byte space?
Another source of confusion is this function from objc-os.h:
static __inline void *malloc_zone_malloc(malloc_zone_t z, size_t size) { return malloc(size); }
It uses only one of the parameters, so why does the signature ask for two?
For the first question I can only guess. My bet is that it was done to avoid/reduce memory churn, or to segment the memory for some other reason. You can briefly find it discussed in the ChangeLog of bmalloc (which is not quite relevant, but I could not find a better reference):
2017-06-02 Geoffrey Garen <ggaren@apple.com>
...
Updated for new APIs. Note that we cache one free chunk per page
class. This avoids churn in the large allocator when you
free(malloc(X))
It's unclear, however, whether the memory churn is caused by this technique or whether the technique was supposed to address it.
For the second question: the Objective-C runtime used to work with "zones" so that all allocated variables could be destroyed just by destroying the zone, but this proved to be error-prone and it was later agreed not to use zones anymore. The API, however, still takes a zone parameter for historical reasons (backward compatibility, I assume), but the documentation says that zones are ignored:
Zones are ignored on iOS and 64-bit runtime on OS X. You should not use zones in current development.

How do I read OpenVINO IR models from memory with the OpenVINO C API

I am having trouble reading OpenVINO IR networks (XML and bin) from memory using ie_core_read_network_from_memory() in the OpenVINO 2021.4 C API ie_c_api.h.
I suspect that I am creating the network weight blob wrong, but I cannot find any information on how to create weight blobs correctly for networks.
I have read the OpenVINO C API docs but cannot deduce from docs what I am doing wrong. The OpenVINO code repo contains some C code samples, but none of the samples seem to use ie_core_read_network_from_memory().
Below is a cut out of the code I am having trouble with.
// void* dmem->data - network memory buffer (float32)
// size_t dmem->size - size of network memory buffer (bytes)
ie_core_t* ov_core = NULL;
IEStatusCode status = ie_core_create("", &ov_core);
if (status != OK)
{
    // error handling
}

const dimensions_t weights_tensor_dims =
    { 4, { 1, 1, 1, dmem->size/sizeof(float) } };
tensor_desc_t weights_tensor_desc = { OIHW, weights_tensor_dims, FP32 };
ie_blob_t* ov_model_weight_blob = NULL;
status = ie_blob_make_memory_from_preallocated(
    &weights_tensor_desc, dmem->data, dmem->size, &ov_model_weight_blob);
if (status != OK)
{
    // error handling
}

// char* model_xml_desc - the model's XML string
uint8_t* ov_model_xml_content = (uint8_t*)model_xml_desc;
ie_network_t* ov_network = NULL;
size_t xml_sz = strlen((const char*)ov_model_xml_content);
status = ie_core_read_network_from_memory(
    ov_core, ov_model_xml_content, xml_sz, ov_model_weight_blob, &ov_network);
if (status != OK)
{
    // Always get "GENERAL_ERROR (-1)"
}
The code works fine down to the ie_core_read_network_from_memory() call which results in "GENERAL_ERROR".
I have tried two models that were converted from Tensorflow. One is a simple [X] -> [Y] regression model (single input value, single output value). The other is also a regression model [X_1, X_2, ..., X_9] -> [Y] (nine input values, single output value). They work fine when reading them from file with ie_core_read_network(), but for my use case I must provide the network as a binary memory buffer and XML string.
I would appreciate any help, either by pointing out what I am getting wrong or directing me to some code samples that use ie_core_read_network_from_memory().
System information:
Windows 10
OpenVINO v2021.4.689
Microsoft Visual Studio 2019
UPDATE: An Intel employee reached out to me in another forum and pointed out that there is a unit test for ie_core_read_network_from_memory(). The unit test successfully reads a network from memory and made clear that I was in fact using a faulty tensor description to produce the weight blob, just as I suspected. Apparently the weight blob descriptor should be one dimensional, have memory layout ANY and datatype U8 even though the model weights are fp32.
From the unit test:
std::string bin_std = TestDataHelpers::generate_model_path("test_model", "test_model_fp32.bin");
const char* bin = bin_std.c_str();
//...
std::vector<uint8_t> weights_content(content_from_file(bin, true));
tensor_desc_t weights_desc { ANY, { 1, { weights_content.size() } }, U8 };
However, simply changing the tensor descriptor was not enough to get my code to work, so it remains for me to properly translate the C++ code from the unit test to my C environment before the issue can be considered solved.
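For reference, the C equivalent of that descriptor would presumably look something like the following sketch (assuming dmem->data / dmem->size still hold the raw .bin contents, as in the code above):
// 1-D dimensions, layout ANY, precision U8 - mirroring the unit test above
const dimensions_t weights_dims = { 1, { dmem->size } };
const tensor_desc_t weights_desc = { ANY, weights_dims, U8 };

ie_blob_t* ov_model_weight_blob = NULL;
status = ie_blob_make_memory_from_preallocated(
    &weights_desc, dmem->data, dmem->size, &ov_model_weight_blob);
if (status != OK)
{
    // error handling
}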
Thanks
Refer to the tensor_desc struct and the standard layout formats.
Apart from that, it is recommended to use the benchmark_app tool to test inference performance.

Efficient way to detect changes in structure members?

This seems like it should be simple, but I wasn't able to find much related to it. I have a structure with different fields used to store data about the program's operation. I want to log that data so that I can analyse it later. Attempting to continuously log the data over the course of the program's operation eats up a lot of resources, so I would only like to call the logging function when the data has changed. I would love it if there was an efficient way to check whether the structure members have been updated. Currently I am playing a shell game with three structures (old, current, and new) in order to detect when the data has changed. Thanks in advance.
You can track the structures and their hashes in your log function.
Say you have a hash function:
int hash(void* ptr, size_t size);
and a mapping from a struct pointer to the struct's hash, like:
/* Stores hash value for ptr*/
void ptr2hash_update_hash(void* ptr, int hash);
/* Remove ptr from mapping */
void ptr2hash_remove(void* ptr);
/* Returns 0 if ptr was not stored, or the stored hash otherwise */
int ptr2hash_get_hash(void* ptr);
Then you may check if your object was changed between log calls like this:
int new_hash = hash(ptr, sizeof(TheStruct));
int old_hash = ptr2hash_get_hash(ptr);
if (old_hash == new_hash)
return;
ptr2hash_update_hash(ptr, new_hash);
/* Then do the logging */
Don't forget to remove ptr from the mapping when you free(ptr) :)
Here is a simple hash table implementation; you will need it to implement the ptr2hash mapping.
Simple hash functions are here.
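If you just need something to start with, a byte-wise FNV-1a hash over the raw bytes of the struct is enough for this purpose (illustration only; note that padding bytes are hashed too, so zero the struct before first use):
#include <stddef.h>
#include <stdint.h>

/* FNV-1a over the raw bytes of the object (matches the hash() prototype above). */
int hash(void *ptr, size_t size)
{
    const unsigned char *p = ptr;
    uint32_t h = 2166136261u;           /* FNV offset basis */
    for (size_t i = 0; i < size; i++)
    {
        h ^= p[i];
        h *= 16777619u;                 /* FNV prime */
    }
    return (int)h;
}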
If you're running on Linux (x86 or x86_64) then another possible approach is the following:
Install a segment descriptor for a non-writable segment in the local descriptor table using the modify_ldt system call. Place your data inside this segment (or install the segment such that your data structure is within it).
Upon write access, your process will receive a SIGSEGV (segmentation fault). Install a handler using sigaction to catch segmentation faults. Within that handler, first check that the fault occurred inside the previously set segment (si_addr member of the siginfo_t) and if so prepare to record a notification. Now, change the segment descriptor such that the segment becomes writable and return from the signal handler.
The write will now be performed, but you need a way to change the segment to be non-writable again and to actually check what was written and if your data actually changed.
One possible approach is to send yourself another signal (SIGUSR1, for example), either directly or via a "delay" process that then signals the main process back, and do the above in the handler for that signal.
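The description above uses modify_ldt; for what it's worth, here is a minimal sketch of the same write-trap idea on Linux using mprotect() on a page instead (one-shot: it does not re-protect the page after the write, error checks are omitted, and mprotect() is not formally async-signal-safe):
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static void  *watched_page;   /* page holding the watched structure */
static size_t page_size;

/* SIGSEGV handler: if the fault is an access to the watched page,
   make it writable again (one-shot) and let the write proceed. */
static void on_segv(int sig, siginfo_t *si, void *ctx)
{
    (void)sig; (void)ctx;
    char *addr = (char *)si->si_addr;
    if (addr >= (char *)watched_page && addr < (char *)watched_page + page_size)
    {
        mprotect(watched_page, page_size, PROT_READ | PROT_WRITE);
        /* ...record "data changed" in an async-signal-safe way here... */
    }
    else
    {
        abort();   /* a genuine crash somewhere else */
    }
}

int main(void)
{
    page_size = (size_t)sysconf(_SC_PAGESIZE);
    watched_page = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = on_segv;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    int *data = watched_page;                      /* the "structure" lives here */
    mprotect(watched_page, page_size, PROT_READ);  /* arm the write trap */

    *data = 42;   /* faults once, the handler unprotects, the write then succeeds */
    printf("data = %d\n", *data);
    return 0;
}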
Is this portable? No.
Is this reliable? No.
Is this easy to implement? No.
So if you can, and I really hope you can, use an interface like the one already suggested.
The easiest thing you can try is to keep two structure pointers. Once you receive the new updated values, compare the contents of the new structure with those of the old one; if there is any difference you can detect it and then copy the new values into the old structure, so that you can detect further changes in the future.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct testStruct
{
    int x;
    float y;
} TESTSTRUCT;

TESTSTRUCT* getUpdatedValue(void)
{
    TESTSTRUCT *ptr = malloc(sizeof(TESTSTRUCT));
    ptr->x = 5;
    ptr->y = 6;
    // You can put your code to update the value.
    return ptr;
}

void updateTheChange(TESTSTRUCT* oldObj, TESTSTRUCT* newObj)
{
    printf("Change Detected\n");
    *oldObj = *newObj;  /* remember the new values for the next comparison */
}

int main(void)
{
    TESTSTRUCT oldObj;
    memset(&oldObj, 0, sizeof(oldObj));

    TESTSTRUCT *newObj = getUpdatedValue();

    // each time a value is updated, compare it with the old structure
    if (memcmp(newObj, &oldObj, sizeof(TESTSTRUCT)) == 0)
    {
        printf("Same\n");
    }
    else
    {
        updateTheChange(&oldObj, newObj);
    }

    free(newObj);
    return 0;
}
I am not sure whether this gives you your exact answer or not.
Hope this helps.

How to pass data between multiple Lua states (multi-threaded)?

I initialize a Redis connection pool in redis.lua; by calling it from C I get a redis_lua_state. This Lua state is global, initialized once, and the other threads only read from it.
When an HTTP request comes in (worker thread), I need to fetch a Redis connection from redis_lua_state, then create another Lua state to load other Lua scripts, and these scripts will use this Redis connection to communicate with Redis. How can I do this? Or how should I design my Lua scripts to keep this simple?
Code Sample:
/* on main thread, to init redis pool connection */
lua_State *g_ls = NULL;

lua_State *init_redis_pool(void) {
    int ret = 0;
    g_ls = luaL_newstate();
    lua_State *ls = g_ls;
    luaL_openlibs(ls);
    ret = luaL_loadfile(ls, "redis.lua");
    const char *err;
    (void)err;
    /* preload */
    ret = lua_pcall(ls, 0, 0, 0);
    lua_getglobal(ls, "init_redis_pool");
    ret = lua_pcall(ls, 0, 0, 0);
    return ls;
}

/* worker thread */
int worker() {
    ...
    lua_State *ls = luaL_newstate();
    ret = luaL_loadfile(ls, "run.lua");
    /* How to fetch data from g_ls? */
    ...
    lua_getglobal(ls, "run");
    ret = lua_pcall(ls, 0, 0, 0);
    lua_close(ls);
    ...
    return 0;
}
If your Lua states are separate, then there's no way to do this. Your worker thread will have to initialize the Redis connection and do processing on it.
One way to do it is to implement copying of variables between Lua states on the C side. I have done a similar thing in my ERP project, but it is a bit of a hassle, especially when it comes to copying userdata.
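For plain values the copy itself is short. Here is a sketch that copies a string global (the name "redis_host" is just an example) from the shared state into a worker state, guarding g_ls with a mutex because a lua_State is not thread-safe by itself:
#include <lua.h>
#include <pthread.h>

extern lua_State *g_ls;   /* the shared state set up by init_redis_pool() */
static pthread_mutex_t g_ls_lock = PTHREAD_MUTEX_INITIALIZER;

/* Copy a string global from the shared state into a worker state.
   Illustration only: real code would also handle numbers, tables, userdata. */
static void copy_string_global(lua_State *worker, const char *name)
{
    pthread_mutex_lock(&g_ls_lock);
    lua_getglobal(g_ls, name);
    size_t len = 0;
    const char *s = lua_tolstring(g_ls, -1, &len);
    if (s)
        lua_pushlstring(worker, s, len);
    else
        lua_pushnil(worker);
    lua_pop(g_ls, 1);
    pthread_mutex_unlock(&g_ls_lock);
    lua_setglobal(worker, name);
}

/* usage in worker(), before lua_pcall(): copy_string_global(ls, "redis_host"); */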
What I did was implement a sort of super-global variable system (a C class in my case), exposed through the __index and __newindex metamethods of a proxy for the default Lua global table. When a script sets a global variable, __newindex copies it into the super-global store; when another Lua state accesses that global, __index retrieves it from the same structure.
The Redis connection itself could then be shared behind a mutex, so that while one state accesses it the others cannot, for instance.
Of course there's also the issue of accessing Lua's default globals, which you have to take care of.
Alternatively, you can check out Lua Lanes, if that's an option (I have no Redis experience, so I don't know how open its Lua side is, but I see that you have full access to the C API, so it should work).

Combining XML configuration reading and writing functions

I'm using libxml2 to create and read XML files in C that contain configuration information for the program I'm writing. The program makes its own configuration files (or another program sends it a configuration file and asks the program to run based off the config file), so the XML config files don't need to be really easy for a human to read.
These configuration files contain lots of values and are really long. Right now I have one function that makes the XML files and another that reads them. However, any time I change the write function I also need to change the read function. So there isn't actual code duplication, but something really close (i.e. BAD), and because the configuration files are so long it is rather tedious to make sure everything is read and written the same way.
This is the current set up.
struct config_data
{
    // category one
    int X;
    int Y;
    // category two
    int Z;
    int A;
};
int makeXMLsheet(char* fileout)
{
    xmlDocPtr doc = NULL;        /* document pointer */
    xmlNodePtr root_node = NULL; /* node pointers */
    LIBXML_TEST_VERSION;

    doc = xmlNewDoc((xmlChar*) "1.0");
    root_node = xmlNewNode(NULL, BAD_CAST "configuration_file");
    xmlDocSetRootElement(doc, root_node);

    // category one
    xmlNodePtr category_one = xmlNewChild(root_node, NULL, BAD_CAST "category_one", NULL);
    xmlNewChild(category_one, NULL, BAD_CAST "x", BAD_CAST "12345");
    xmlNewChild(category_one, NULL, BAD_CAST "y", BAD_CAST "1");

    // category two
    xmlNodePtr category_two = xmlNewChild(root_node, NULL, BAD_CAST "category_two", NULL);
    xmlNewChild(category_two, NULL, BAD_CAST "Z", BAD_CAST "12345");
    xmlNewChild(category_two, NULL, BAD_CAST "A", BAD_CAST "1");

    xmlSaveFormatFileEnc(fileout, doc, "UTF-8", 1);
    xmlFreeDoc(doc);
    xmlCleanupParser();
    return 0;
}
int readXMLsheet(char* filename, struct config_data *config_data)
{
    xmlDocPtr doc = getdoc(filename);
    config_data->X = getIntegerFromXML(0, doc, (xmlChar*)"//configuration_file/category_one/x");
    config_data->Y = getIntegerFromXML(0, doc, (xmlChar*)"//configuration_file/category_one/y");
    config_data->Z = getIntegerFromXML(0, doc, (xmlChar*)"//configuration_file/category_two/Z");
    config_data->A = getIntegerFromXML(0, doc, (xmlChar*)"//configuration_file/category_two/A");
    xmlFreeDoc(doc);
    return 0;
}
Where
int getIntegerFromXML(int defaultValue, xmlDocPtr doc, xmlChar *xpath)
does as its name suggests: it gets an integer from the opened XML document at the given XPath location, and if it fails it returns the default value so that the program doesn't crash and burn.
So I want to somehow combine the read and write functions into one. My sample struct config_data is tiny compared to the number of values I actually have in my configuration struct, so combining them would make keeping track of everything much easier.
So I was thinking something like this.
int openXMLvalue(X, Y, Z, readOrWrite, defaultValue, value);
where X, Y, Z are the parent nodes, but there might be more or less than 3.
Any ideas on how to do this? Maybe make some type of array?
I would make generic read and write functions that populate (or serialize) a generic configuration structure.
A simplified case would be to create a key/value structure in memory with get/set methods. The generic writeToXml function would simply create elements named after the keys, containing the values.
If warranted, a hierarchical tree structure could be used instead, and perhaps add a few validation rules when reading a configuration file (a simple one would be to use an XML Schema for validation) to verify that required configuration values exist and are valid.
To add, change, or remove configuration values would then only require the following steps (note that neither the read nor the write function requires an update):
Decide the new format of the configuration file
Update existing configuration files
Update any places in the application using the configuration values
Optionally update validation rules
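For example, a field table in C can drive both directions, so each value is declared exactly once. This is only a sketch: it reuses getIntegerFromXML() and the element names from the question, while findOrCreateChild() is a made-up helper that returns, or creates, the parent element.
#include <stddef.h>
#include <stdio.h>

struct config_field
{
    const char *category;  /* parent element, e.g. "category_one" */
    const char *name;      /* element name, e.g. "x" */
    const char *xpath;     /* read location */
    size_t      offset;    /* offsetof() into struct config_data */
    int         def;       /* default value if the element is missing */
};

static const struct config_field config_fields[] =
{
    { "category_one", "x", "//configuration_file/category_one/x", offsetof(struct config_data, X), 0 },
    { "category_one", "y", "//configuration_file/category_one/y", offsetof(struct config_data, Y), 0 },
    { "category_two", "Z", "//configuration_file/category_two/Z", offsetof(struct config_data, Z), 0 },
    { "category_two", "A", "//configuration_file/category_two/A", offsetof(struct config_data, A), 0 },
};
#define NUM_CONFIG_FIELDS (sizeof(config_fields) / sizeof(config_fields[0]))

/* reading: one loop instead of one line per value */
void readConfig(xmlDocPtr doc, struct config_data *cfg)
{
    for (size_t i = 0; i < NUM_CONFIG_FIELDS; i++)
    {
        int *dst = (int*)((char*)cfg + config_fields[i].offset);
        *dst = getIntegerFromXML(config_fields[i].def, doc, (xmlChar*)config_fields[i].xpath);
    }
}

/* writing: the same table drives element creation */
void writeConfig(xmlNodePtr root, const struct config_data *cfg)
{
    for (size_t i = 0; i < NUM_CONFIG_FIELDS; i++)
    {
        char buf[32];
        snprintf(buf, sizeof buf, "%d", *(const int*)((const char*)cfg + config_fields[i].offset));
        xmlNodePtr parent = findOrCreateChild(root, config_fields[i].category);  /* made-up helper */
        xmlNewChild(parent, NULL, BAD_CAST config_fields[i].name, BAD_CAST buf);
    }
}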
Because of the large configuration file size, we switched to using SQLite. Then we made a function that reads the database and makes an XML sheet, a function that reads an XML sheet and populates the database, and we are working on functions to print the database to stdout and fill the C struct. I think this is going to make life much easier.
