OCIDate getting mangled on the way into Oracle (C)

I have some C code to populate an OCIDate from the epoch time:
In my main program:
OCIDate ocidate;
epoch_to_ocidate(c.f, &ocidate);
And in a library:
void epoch_to_ocidate(double d, OCIDate* ocidate) {
    time_t t = (time_t)d;
    struct tm *ut = localtime(&t); /* convert the epoch seconds to broken-down local time */
    OCIDateSetDate(ocidate, ut->tm_year + 1900, ut->tm_mon + 1, ut->tm_mday);
    OCIDateSetTime(ocidate, ut->tm_hour + 1, ut->tm_min + 1, ut->tm_sec + 1);
}
I am pretty certain this is correct, because I have a check in the calling routine:
#ifdef DEBUG
char* fmt = "DD-MON-YYYY HH24:MI:SS";
ub4 dbufsize=255;
debug("testing converted OCIDate:");
OCIDateToText(h.err, (const OCIDate*)&ocidate, (text*)fmt, (ub1)strlen(fmt), (text*)0, (ub4)0, &dbufsize, (text*)dbuf);
debug(dbuf);
#endif
And I am binding it with:
OCIBindByPos(s, &bh, h.err, (ub4)p, (dvoid*)&ocidate, (sb4)sizeof(ocidate), SQLT_ODT, 0, 0, 0, 0, 0, OCI_DEFAULT);
(dbuf is already defined.) That displays exactly what I would expect. But when it arrives in Oracle it's gibberish, resulting in either a nonsensical date (e.g. 65-JULY-7896 52:69:0) or an error (ORA-1858 or ORA-1801). Has anyone seen anything like this before? Thanks!

Solved it - the problem was that ocidate was stack-allocated, and binding doesn't copy the value into the bind handle, it merely stores a pointer to it, so once ocidate went out of scope the bind could be pointing at anything. So I heap-allocated it instead. Now of course I have to bookkeep it, but that's straightforward enough. Cheers!
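For reference, a minimal sketch of that fix (the handle names s, bh, h.err and the position p are just the ones from the snippets above; the OCIStmtExecute comment stands in for however the statement is actually executed). The key point is that the bound buffer must stay valid until the statement has executed:

OCIDate *ocidate = malloc(sizeof *ocidate);   /* heap-allocate so the buffer outlives the enclosing scope */
epoch_to_ocidate(c.f, ocidate);
OCIBindByPos(s, &bh, h.err, (ub4)p, (dvoid*)ocidate, (sb4)sizeof(*ocidate),
             SQLT_ODT, 0, 0, 0, 0, 0, OCI_DEFAULT);
/* ... execute the statement, e.g. OCIStmtExecute(...) ... */
free(ocidate);                                /* only after the execute */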

Related

Code (C) won't work when I remove junk code?

I have the following code. It reads a string from a PHP web page, converts it using the MultiByteToWideChar function, then splits the string on the comma delimiter. It then removes the "xxx=" prefix before each value (for example, "cid={abcd-1234-5678},tid=AAD23HKJD23KVAAAHN23") and appends the split substrings to an array. Finally it displays the parameters one by one in a text box.
I had it working, but I reinstalled my system two days ago and since then I cannot get it working again (even though I backed up the working project before reinstalling, which leaves me wondering how this is even possible).
I have spent hours over the past day changing every compiler setting and project property I could think of, reinstalling Visual Studio, trying other versions of Visual Studio, installing extra SDK packages... Nothing helped. Then I added some junk code to test whether the wcstok function was even working (the project seemed to die there), and suddenly the whole thing worked and the array I mentioned above was returned properly. However, if I remove this junk code, or the code after the splitting/returning of the string (which should have no influence on the code above it, leaving me as confused as I could possibly be), it stops working again and seems to die at the wcstok function.
This is my code:
BOOL Commands(LPBYTE command, DWORD size)
{
    unsigned int sizeint;
    sizeint = (unsigned int)size;
    wchar_t params[MAXCHAR];
    MultiByteToWideChar(CP_ACP, MB_COMPOSITE, (LPCCH)command, sizeint, (LPWSTR)params, size);
    MessageBoxW(0, params, 0, 0);
    //wchar_t input[100] = L"A bird came down the walk";
    //wchar_t* buffer;
    //wchar_t* token = std::wcstok(input, L" ", &buffer);
    wchar_t buf2[MAXCHAR], *ptr;
    int i;
    for (ptr = wcstok(params, L","); ptr != NULL; ptr = wcstok(NULL, L","))
    {
        CWA(lstrcpyW, kernel32, buf2, ptr);
        for (i = 0; i < lstrlenW(buf2); i++)
        {
            if (buf2[i] == '=' )
            {
                wchar_t *a[1000];
                wcscpy(a[0] + i, buf2 + i + 1);
                MessageBoxW(0, a[0] + i, 0, 0);
            }
            else
            {
            }
        }
    }
    //HRESULT hr;
    //LPCTSTR Url = _T("http://cplusplus.com/img/cpp-logo.png"), File = _T("C:\\Users\\Public\\file.exe");
    //////hr = URLDownloadToFile(0, Url, File, 0, 0);
    //switch (hr)
    //{
    //    PROCESS_INFORMATION *piinfo; //size = 0x10
    //    STARTUPINFO *siinfo; //size = 0x44
    //    CWA(CreateProcessW, kernel32, 0, File, 0, 0, 0, DETACHED_PROCESS, 0, 0, siinfo, piinfo);
    //}
    CWA(Sleep, kernel32, 100);
    return 1;
}
It doesn't work properly with the code above, but when I uncomment the junk code that is commented out above, it somehow does work. The first MessageBoxW always shows the string from the PHP page correctly; however, once I remove the junk code, the program is terminated as soon as the wcstok call in the splitting loop is reached.
It worked before exactly like this and I am honestly clueless. I hope someone has an idea what could cause this, because it is driving me nuts.
Thanks a lot in advance!
Millie Smith pointed out where the code was bugged.
Instead of defining "a" as a wchar_t*, I had to define it as a plain wchar_t array, so it became "wchar_t a[MAXCHAR]". Then I had to change the "wcscpy(a[0] + i, buf2 + i + 1);" inside the splitting loop to "wcscpy(a + i, buf2 + i + 1);".
Learned something new again, undefined behaviour can be a tricky thing.
Thanks for the help and have a nice day!
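For completeness, the inner block after that change looks roughly like this (a sketch using the same names as above):

wchar_t a[MAXCHAR];              /* a real buffer instead of an array of uninitialized pointers */
if (buf2[i] == '=')
{
    wcscpy(a + i, buf2 + i + 1); /* copy the value that follows the '=' */
    MessageBoxW(0, a + i, 0, 0);
}

Writing through the old uninitialized pointer a[0] was the undefined behaviour; whether it crashed depended on whatever happened to be on the stack, which is why adding or removing unrelated code changed the symptom.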

Lua - pcall with "entry point"

I am loading a Lua script with:
lua_State * L = lua_open();
luaL_openlibs(L);
const char lua_script[] = "function sum(a, b) return a+b; end print(\"_lua_\")";
int load_stat = luaL_loadbuffer(L,lua_script,strlen(lua_script),lua_script);
lua_pcall(L, 0, 0, 0);
Now I can call
lua_getglobal(L,"sum");
and get the result from it on the C side.
However, when I call lua_pcall the whole script is executed, which prints "_lua_" to the console; without the lua_pcall, the later lua_getglobal finds nothing. Is there any way around this? I don't want to call lua_pcall before setting up the "entry point" function via lua_getglobal.
If you can modify the script, a different approach to this is to pack your initialization code (the print and whatever else may be there) into a separate function, like so:
lua_State * L = lua_open();
luaL_openlibs(L);
const char lua_script[] = "function sum(a,b) return a+b end return function() print'_lua_' end";
int load_stat = luaL_loadbuffer(L,lua_script,strlen(lua_script),lua_script);
lua_pcall(L, 0, 1, 0); // run the string, defining the function(s)…
// also puts the returned init function onto the stack, which you could just leave
// there, save somewhere else for later use, … then do whatever you need, e.g.
/* begin other stuff */
lua_getglobal(L, "sum");
lua_pushinteger( L, 2 );
lua_pushinteger( L, 3 );
lua_pcall(L, 2, 1, 0);
printf( "2+3=%d\n", lua_tointeger(L,-1) );
lua_pop(L, 1);
/* end other stuff (keep stack balanced!) */
// and then run the init code:
lua_pcall(L, 0, 0, 0); // prints "_lua_"
Now, while you still have to run the chunk to define the function(s), the other initialization code is returned as a function which you can run at a later time / with a modified environment / … (or not at all, if it's unnecessary in your case.)
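(If "save somewhere else for later use" sounds vague: one concrete option, sketched with the same API as above, is to stash the returned function in the registry and fetch it back whenever the init code should actually run.)

int init_ref = luaL_ref(L, LUA_REGISTRYINDEX);   /* pops the returned function and stores it under a key */
/* ... later, whenever the init code should run ... */
lua_rawgeti(L, LUA_REGISTRYINDEX, init_ref);     /* push the saved function back onto the stack */
lua_pcall(L, 0, 0, 0);                           /* prints "_lua_" */
luaL_unref(L, LUA_REGISTRYINDEX, init_ref);      /* release the reference when done */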
The function sum is not defined until you run the script because function definition is an assignment in Lua, and it needs to be executed.
So, there is no way to avoid running the script that defines sum. That is what lua_pcall does. You could use lua_call, but then you wouldn't be able to handle errors.
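Another variant along the same lines (a sketch, not necessarily better): instead of returning the init code as a value, give it a name and call it explicitly only when you want it to run.

const char lua_script[] = "function sum(a,b) return a+b end "
                          "function init() print('_lua_') end";
luaL_loadbuffer(L, lua_script, strlen(lua_script), lua_script);
lua_pcall(L, 0, 0, 0);      /* defines sum and init; nothing is printed yet */
lua_getglobal(L, "init");   /* fetch the named init function... */
lua_pcall(L, 0, 0, 0);      /* ...and run it when you choose: prints "_lua_" */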

Clang sqlite3 float error

Any ideas on how to track down this error would be appreciated.
I have some C code that runs as two or more processes. The first process listens on a message queue and saves the resulting struct to a database. The remaining processes query one or more serial devices and pass this information through the message queue to the first process, which stores it in the database.
It all works great except for the following. One of the structs I am using contains a float. This struct is sent through the queue and decoded correctly; however, when the value is bound with sqlite3_bind_double(), the value that ends up in the database is 0. Placing a printf() statement around the sqlite3_bind_double() call makes the code work and the correct value is stored.
Even more interesting: if I remove the printf() statement and compile the program with gcc, the code also works.
Any help would be great. Thanks in advance.
Code:
int
add_inverter_stat(sqlite3 *db_conn, struct inverter_stat const *istat
                 ,int *sqlite3_err)
{
    sqlite3_stmt *stmt = NULL;
    *sqlite3_err = sqlite3_prepare_v2(db_conn, SQL_INSERT_INVERTER_STAT, -1
                                     ,&stmt, NULL);
    *sqlite3_err = sqlite3_bind_int(stmt, 1, istat->stat_id);
    *sqlite3_err = sqlite3_bind_text(stmt, 2, istat->serial_no, -1, NULL);
    *sqlite3_err = sqlite3_bind_int64(stmt, 3, istat->time_taken);
    *sqlite3_err = sqlite3_bind_double(stmt, 4, (double)istat->value);
    *sqlite3_err = sqlite3_step(stmt);
    sqlite3_finalize(stmt);
    return 1;
}
That's the sort of thing I would expect from something trying to read a four-byte float as an eight-byte double, with the other four bytes being ... well, whatever. This really shouldn't work, but if you put an explicit cast (double)flt_var as the third parameter, does that help?
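A tiny standalone demonstration of that hypothesis (hedged: it shows why such a misread tends to come out near 0, not that this is what clang is doing in your build):

#include <stdio.h>
#include <string.h>

int main(void) {
    float f = 230.5f;
    unsigned char bytes[sizeof(double)] = {0};
    memcpy(bytes, &f, sizeof f);   /* only 4 of the 8 bytes are defined */
    double d;
    memcpy(&d, bytes, sizeof d);   /* read them back as a double */
    printf("float: %f, misread as double: %g\n", f, d);
    return 0;
}

On a little-endian machine the zero padding lands in the double's sign/exponent bytes, so the result is a subnormal number vanishingly close to 0 - consistent with the 0 that shows up in the database.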

Need help resetting static global variables in C

I'm going to be straight here: I'm an absolute novice when it comes to C, and I'm a bit out of my depth and need a little help. I am tweaking some source code and need to reset some static globals so that they can be used again. Everything I've tried just ends in bad access errors - any help would be appreciated.
static struct option long_options[2 * countof (option_data) + 1];
static char short_options[128];
static unsigned char optmap[96];
Here's what I've tried:
memset(&long_options[0], 0, 2 * countof (option_data) + 1);
memset(&short_options[0], 0, sizeof(short_options));
memset(long_options, 0, sizeof(long_options));
memset(short_options, 0, sizeof(short_options));
memset(optmap, 0, sizeof(optmap));
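Those sizeof-based calls are the right shape (note that the first memset above passes an element count, 2 * countof (option_data) + 1, where memset expects a byte count). One thing worth checking, offered as a hedged guess rather than a diagnosis: if those arrays are file-scope statics defined in another source file, they are only visible inside that translation unit, so the usual pattern is a small reset helper placed next to the definitions:

#include <string.h>

/* In the same .c file that defines the static tables (sketch; countof and
   option_data come from the original source). */
void reset_option_tables(void)
{
    memset(long_options,  0, sizeof long_options);
    memset(short_options, 0, sizeof short_options);
    memset(optmap,        0, sizeof optmap);
}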

Implementation code for GetDateFormat Win32 function

I am porting some legacy code from windows to Linux (Ubuntu Karmic to be precise).
I have come across a Win32 function GetDateFormat().
The statements I need to port over are called like this:
GetDateFormat(LOCALE_USER_DEFAULT, 0, &datetime, "MMMM", 'January', 31);
OR
GetDateFormat(LOCALE_USER_DEFAULT, 0, &datetime, "MMMM", 'May', 30);
Where datetime is a SYSTEMTIME struct.
Does anyone know where I can get the code for the function - or failing that, tips on how to "roll my own" equivalent function?
The Linux equivalent (actually, plain ANSI C) to a call to GetDateFormat like this:
GetDateFormat(LOCALE_USER_DEFAULT, 0, &datetime, "MMMM", date_str, len);
is:
char *old_lc_time;
/* Save the current LC_TIME locale (copy it, since the string returned by
   setlocale may be overwritten by a later call), then switch to the user's
   default locale */
old_lc_time = strdup(setlocale(LC_TIME, NULL));
setlocale(LC_TIME, "");
strftime(date_str, len, "%B", &datetime);
/* Restore the previous LC_TIME locale */
setlocale(LC_TIME, old_lc_time);
free(old_lc_time);
(where datetime is now a struct tm rather than a SYSTEMTIME)
You may not need to worry about setting the locale each time and setting it back - if you are happy for all of your date/time formatting to be done in the user default locale (which is usual), then you can just call setlocale(LC_TIME, ""); once at program startup and be done with it.
Note however that the values your code is passing to GetDateFormat in the lpDateStr and cchDate parameters (second-last and last respectively) do not make sense. 'January' is a character constant, when it should be a pointer to a buffer where GetDateFormat will place its result.
The Win32 GetDateFormat function should be equivalent to the strftime function in the time.h header.
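Since the ported call takes a struct tm where the Win32 code had a SYSTEMTIME, here is a hedged sketch of filling one from the other's fields (the wYear/wMonth/... names follow the Win32 SYSTEMTIME definition; only the date fields matter for the "MMMM" format):

#include <string.h>
#include <time.h>

struct tm systemtime_to_tm(int wYear, int wMonth, int wDay,
                           int wHour, int wMinute, int wSecond)
{
    struct tm t;
    memset(&t, 0, sizeof t);
    t.tm_year = wYear - 1900;  /* struct tm counts years from 1900 */
    t.tm_mon  = wMonth - 1;    /* struct tm months are 0-11; SYSTEMTIME's are 1-12 */
    t.tm_mday = wDay;
    t.tm_hour = wHour;
    t.tm_min  = wMinute;
    t.tm_sec  = wSecond;
    mktime(&t);                /* normalize and fill in tm_wday / tm_yday */
    return t;
}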
