I am porting some legacy code from Windows to Linux (Ubuntu Karmic, to be precise).
I have come across a Win32 function GetDateFormat().
The statements I need to port over are called like this:
GetDateFormat(LOCALE_USER_DEFAULT, 0, &datetime, "MMMM", 'January', 31);
OR
GetDateFormat(LOCALE_USER_DEFAULT, 0, &datetime, "MMMM", 'May', 30);
Where datetime is a SYSTEMTIME struct.
Does anyone know where I can get the code for the function - or failing that, tips on how to "roll my own" equivalent function?
The Linux equivalent (actually, plain ANSI C) to a call to GetDateFormat like this:
GetDateFormat(LOCALE_USER_DEFAULT, 0, &datetime, "MMMM", date_str, len);
is:
char *old_lc_time;
/* Save the current LC_TIME locale; copy it, because the string
   returned by setlocale() may be overwritten by later calls */
old_lc_time = strdup(setlocale(LC_TIME, NULL));
/* Switch LC_TIME to the user's default locale */
setlocale(LC_TIME, "");
strftime(date_str, len, "%B", &datetime);
/* Restore the previous LC_TIME locale */
setlocale(LC_TIME, old_lc_time);
free(old_lc_time);
(where datetime is now a struct tm rather than a SYSTEMTIME)
You may not need to worry about setting the locale each time and setting it back - if you are happy for all of your date/time formatting to be done in the user default locale (which is usual), then you can just call setlocale(LC_TIME, ""); once at program startup and be done with it.
Note, however, that the values your code is passing to GetDateFormat in the lpDateStr and cchDate parameters (second-last and last, respectively) do not make sense: 'January' is a (multi-character) character constant, when it should be a pointer to a buffer where GetDateFormat will place its result.
The Win32 GetDateFormat function should be equivalent to the strftime function in the time.h header.
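For illustration, a minimal self-contained sketch of that equivalence ("MMMM" in GetDateFormat corresponds to "%B" in strftime; the sample date values are made up):
#include <locale.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct tm datetime = {0};
    char date_str[64];

    datetime.tm_year = 110; /* years since 1900, i.e. 2010 */
    datetime.tm_mon  = 0;   /* January (tm_mon is 0-based) */
    datetime.tm_mday = 31;

    setlocale(LC_TIME, ""); /* user default locale */
    strftime(date_str, sizeof date_str, "%B", &datetime);
    printf("%s\n", date_str); /* prints the localized month name */
    return 0;
}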
I have built a libpcre2-8.dll with the help of this Git repo.
I'm now trying to access the function pcre2_compile from an ABL (Progress) program. (Progress is an old 4GL language.) I'm constantly hitting the error
Could not find the entrypoint _pcre2_compile#40. (3260)
I've already tried many things but it still doesn't work.
The Dynamic Library is 64 bit and Progress is also running in 64 bit.
In ABL (Progress) you can specify the LIBRARY-CALLING-CONVENTION but whether I set it to STDCALL or CDECL or just don't specify it, the error remains the same.
This is a snippet of the Progress ABL with which I'm trying to execute the function (the code comes from this Git repo, which works, but only for 32-bit):
PROCEDURE pcre2_compile :
    DEFINE INPUT  PARAMETER pattern    AS CHARACTER. /* const char * */
    DEFINE INPUT  PARAMETER options    AS INTEGER.   /* int */
    DEFINE OUTPUT PARAMETER errcodeptr AS INTEGER.   /* int * */
    DEFINE OUTPUT PARAMETER errptr     AS MEMPTR.    /* const char ** */
    DEFINE OUTPUT PARAMETER erroffset  AS MEMPTR.    /* int * */
    DEFINE INPUT  PARAMETER tableptr   AS INTEGER.   /* const unsigned char * */
    DEFINE OUTPUT PARAMETER result     AS MEMPTR.    /* pcre * */

    DEFINE VARIABLE libName AS CHARACTER NO-UNDO.
    DEFINE VARIABLE hCall   AS HANDLE    NO-UNDO.

    libName = get-library().

    CREATE CALL hCall.
    ASSIGN
        hCall:CALL-NAME             = "pcre2_compile"
        hCall:LIBRARY               = "lib/libpcre2-8.dll"
        //hCall:LIBRARY-CALLING-CONVENTION = "STDCALL"
        hCall:CALL-TYPE             = DLL-CALL-TYPE
        hCall:NUM-PARAMETERS        = 6
        hCall:RETURN-VALUE-DLL-TYPE = "MEMPTR".

    hCall:SET-PARAMETER(1, "CHARACTER",      "INPUT",  pattern   ).
    hCall:SET-PARAMETER(2, "LONG",           "INPUT",  options   ).
    hCall:SET-PARAMETER(3, "HANDLE TO LONG", "OUTPUT", errcodeptr).
    hCall:SET-PARAMETER(4, "MEMPTR",         "OUTPUT", errptr    ).
    hCall:SET-PARAMETER(5, "MEMPTR",         "OUTPUT", erroffset ).
    hCall:SET-PARAMETER(6, "LONG",           "INPUT",  tableptr  ).

    hCall:INVOKE().
    ASSIGN result = hCall:RETURN-VALUE.

    DELETE OBJECT hCall.
END PROCEDURE.
What am I missing?
Update: I checked with Dependency Walker and the functions seem to be visible. They do have a _8 suffix... but even when trying pcre2_compile_8 it still gives me the same error.
I think that you need to change your long integers to INT64.
Is the entrypoint externally visible/accessible?
I've used https://dependencywalker.com/ in the past to figure that out.
Does that change if you specify the ORDINAL option?
So the problem was that the name of the entry point was "pcre2_compile_8" instead of "pcre2_compile"... I wanted to delete the question because it now looks quite dumb, but I'm leaving it anyway...
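For context, the _8 suffix comes from pcre2.h itself: the generic names are macros that expand to width-suffixed functions, so the DLL only ever exports the suffixed symbols. A minimal C sketch of this (the pattern is just an example):
/* pcre2_compile is a macro; with PCRE2_CODE_UNIT_WIDTH set to 8 it
   expands to pcre2_compile_8 - the symbol the DLL actually exports,
   and the name a foreign-function interface such as ABL must use. */
#define PCRE2_CODE_UNIT_WIDTH 8
#include <pcre2.h>
#include <stdio.h>

int main(void)
{
    int errcode;
    PCRE2_SIZE erroffset;
    pcre2_code *re = pcre2_compile((PCRE2_SPTR)"a+b", PCRE2_ZERO_TERMINATED,
                                   0, &errcode, &erroffset, NULL);
    if (re == NULL) {
        printf("compile failed at offset %zu\n", (size_t)erroffset);
        return 1;
    }
    pcre2_code_free(re);
    return 0;
}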
I have a C app that I need to compile on Windows, and I am really unable to wrap my head around the UNICODE and ANSI concepts in Windows.
I want to use the GetDriveType function, and there are two variants, GetDriveTypeA and GetDriveTypeW. There is also a note here saying that GetDriveType is an alias for both and will select one or the other based on a preprocessor definition.
But how should I call this function?
This is what I am trying:
const TCHAR* path = "C:\\Users\\";
const TCHAR* trailing_slash = "\\";
size_t requiredSize = mbstowcs(NULL, path, 0);
TCHAR* win_path = (char*)malloc((requiredSize + 2) * sizeof(char));
UINT driveType = 0;
strncpy(win_path, path, requiredSize + 1);
strncat(win_path, trailing_slash, 2);
printf("Checking path: %s\n", win_path);
driveType = GetDriveType(win_path);
wprintf(L"Drive type is: %d\n", driveType);
if (driveType == DRIVE_FIXED)
    printf("Success\n");
else
    printf("Failure\n");
return 0;
It produces the result
Checking path: C:\Users\
Drive type is: 1
Failure
If I replace GetDriveType with GetDriveTypeA it returns the correct value 3 and succeeds.
I tried another variant too:
size_t requiredSize = mbstowcs(NULL, path, 0);
uint32_t drive_type = 0;
const wchar_t *trailing_slash = L"\\";
wchar_t *win_path = (wchar_t*) malloc((requiredSize + 2) * sizeof(wchar_t));
/* Convert char* to wchar* */
size_t converted = mbstowcs(win_path, path, requiredSize+1);
/* Add a trailing backslash */
wcscat(win_path, trailing_slash);
/* Finally, check the path */
drive_type = GetDriveType(win_path);
I see this warning:
'function' : incompatible types - from 'wchar_t *' to 'LPCSTR'
So, which one should I use? How is it generic? The path I will be reading comes from an environment variable on Windows.
What are TCHAR, wchar_t, etc.? I found this post, but could not understand much of it.
This Microsoft post says
Depending on your preference, you can call the Unicode functions explicitly, such as SetWindowTextW, or use the macros
So is it OK to use wchar_t everywhere and call GetDriveTypeW directly?
Back in the mid-'90s you had Windows 95/98/ME, which did not support Unicode, and NT4/2000/XP, which did. You could create source code that compiled with or without Unicode support just by changing the UNICODE define.
This type of code looks like this:
UINT type = GetDriveType(TEXT("c:\\"));
There is no actual function named GetDriveType; 99% of the Windows functions that take a string parameter come in two versions, in this case GetDriveTypeA and GetDriveTypeW.
Inside the Windows header files you have code that looks like this:
#ifdef UNICODE
#define GetDriveType GetDriveTypeW
#else
#define GetDriveType GetDriveTypeA
#endif
If UNICODE is defined before including windows.h the above code expands to:
UINT type = GetDriveTypeW(L"c:\\");
and if not:
UINT type = GetDriveTypeA("c:\\");
These days most applications should use Unicode. Whether you should use wchar_t/WCHAR and call GetDriveTypeW directly or still rely on the defines is a style question. There might be situations where you need to force the A or W function and that is OK as well.
The same applies to the C library with the _TEXT macro and the _tcs functions, except that those are controlled by the _UNICODE define.
If you get a warning about incompatible string types, then you are calling the wrong function or you have not added #define UNICODE (and _UNICODE). If you are compiling cross-platform code intended for Unix, you might have to convert from char* to a wide string in some places.
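Applied to the drive check from the question, a minimal sketch that calls the wide-character function explicitly, so it compiles the same way whether or not UNICODE is defined (the path is the one from the question):
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Call the W version directly with a wide string literal */
    UINT driveType = GetDriveTypeW(L"C:\\Users\\");
    wprintf(L"Drive type is: %u\n", driveType);
    if (driveType == DRIVE_FIXED)
        wprintf(L"Success\n");
    else
        wprintf(L"Failure\n");
    return 0;
}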
See also:
TEXT vs. _TEXT vs. _T, and UNICODE vs. _UNICODE
I want to get the complete language name from the locale in Linux. For example, on Windows there is an API, GetLocaleInfoEx, that we can use; it will return "English" for the locale "en-US".
wchar_t buffer[LOCALE_NAME_MAX_LENGTH];
GetLocaleInfoEx(L"en-US", LOCALE_SENGLISHLANGUAGENAME,
(LPWSTR)buffer, LOCALE_NAME_MAX_LENGTH)
This will fill the buffer with "English". Is there anything similar in Linux?
You can use
nl_langinfo(_NL_IDENTIFICATION_LANGUAGE)
from
#include <langinfo.h>
If the locale is not set, you can set it from the environment with
s = getenv("LANG");
setlocale(LC_ALL, s);
(calling setlocale(LC_ALL, "") has the same effect, since the empty string tells setlocale to use the locale from the environment).
After playing around with the other answers, I thought I would summarize them here.
#include <langinfo.h>
#include <locale.h> // for LC_ALL_MASK flag, freelocale(), and newlocale() functions.
Access locale information from the system locale.
char *nl_langinfo(nl_item item);
Access locale information from the locale given as a parameter.
char *nl_langinfo_l(nl_item item, locale_t locale);
The use of GetLocaleInfoEx in the given example is equivalent to the following on Linux:
locale_t loc = newlocale(LC_ALL_MASK, "en_US.UTF-8", NULL);
if (loc) {
language = strdup(nl_langinfo_l(_NL_IDENTIFICATION_LANGUAGE, loc));
freelocale(loc);
}
Refer to nl_langinfo for more details.
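Putting it together, a minimal self-contained sketch; note that _NL_IDENTIFICATION_LANGUAGE is a glibc extension (hence the _GNU_SOURCE define), and the example assumes the en_US.UTF-8 locale is installed:
#define _GNU_SOURCE /* for _NL_IDENTIFICATION_LANGUAGE (glibc extension) */
#include <langinfo.h>
#include <locale.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *language = NULL;
    locale_t loc = newlocale(LC_ALL_MASK, "en_US.UTF-8", (locale_t)0);
    if (loc) {
        language = strdup(nl_langinfo_l(_NL_IDENTIFICATION_LANGUAGE, loc));
        freelocale(loc);
    }
    if (language) {
        printf("%s\n", language); /* prints "English" */
        free(language);
    }
    return 0;
}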
This is my first time using Ghidra and debugging. My project deals with reverse engineering a DOS executable from 2007, to understand how it generates a code.
I looked for the strings I can read when launching the program through Wine (debugging under Linux) and found one place:
/* Reverses the string */
__strrev(local_8);
local_4 = 0;
DISPLAY_MESSAGE(s__Code_=_%s_0040704c);
with DISPLAY_MESSAGE being:
int __cdecl DISPLAY_MESSAGE(byte *param_1)
{
int iVar1;
int errorCode;
iVar1 = FUN_004019c0((undefined4 *)&DAT_004072e8);
errorCode = FUN_00401ac0((char **)&DAT_004072e8,param_1,(undefined4 *)&stack0x00000008);
FUN_00401a60(iVar1,(int *)&DAT_004072e8);
return errorCode;
}
I named the function "DISPLAY_MESSAGE" because I saw the string on the screen ;-). I would like to name it printf, but its signature does not match that of printf, since it takes byte * instead of char *, ... as input parameters and returns an int, whereas I expected the actual printf to return void.
The string "Code = %s" (stripping the CRs and newlines) is actually located at address "0040704c", and I am very surprised not to see the variable holding the generated code value instead (that could help me rename the variables).
If I change the signature to that of printf, it yields:
DISPLAY_MESSAGE(s__Code_=_%s_0040704c,local_8)
which looks better, because local_8 could be the code, but I don't know if it is correct to change the signature like this (since then the local variable that I renamed errorCode is never used, whereas it was returned before the signature change).
void __cdecl DISPLAY_MESSAGE(char *param_1,...)
{
int iVar1;
int errorCode;
iVar1 = FUN_004019c0((undefined4 *)&DAT_004072e8);
FUN_00401ac0((char **)&DAT_004072e8,(byte *)param_1,(undefined4 *)&stack0x00000008);
FUN_00401a60(iVar1,(int *)&DAT_004072e8);
return;
}
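For comparison, this is what a printf-style variadic function looks like in plain C; whether DISPLAY_MESSAGE really forwards its arguments to something like vprintf is an assumption, but the sketch shows why a parameter list of (char *, ...) is plausible here:
#include <stdarg.h>
#include <stdio.h>

/* A printf-like wrapper: a format string plus variable arguments,
   forwarded via va_list; like the real printf, it returns an int
   (the number of characters written). */
int display_message(const char *fmt, ...)
{
    va_list ap;
    int n;

    va_start(ap, fmt);
    n = vprintf(fmt, ap);
    va_end(ap);
    return n;
}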
So my questions are:
Why is Ghidra appending _0040704c to the string (should it help me, and how should I make use of this piece of info)?
If my signature change is correct, what prevents Ghidra from finding the correct signature from its analysis?
Should I think there is a problem with the function signature whenever I see undefinedX, as it appears in DISPLAY_MESSAGE?
Any help greatly appreciated!
I need to know how to use the ICU4C version 52 C API to display the locale currency symbol and code, i.e. ($ - USD).
There is probably more than one way to do this. Here is one that I think should work (untested):
Get the number format and format the value using it:
UErrorCode success = U_ZERO_ERROR;
UNumberFormat *nf;
const char *myLocale = "fr_FR";
// get a locale-specific currency format (no pattern, hence NULL/-1)
nf = unum_open(UNUM_CURRENCY, NULL, -1, myLocale, NULL, &success);
// use it to format the value
UChar buf[100];
unum_formatDouble(nf, 10.0, buf, 100, NULL, &success);
// close the format handle
unum_close(nf);
Or, more directly, use ucurr_getName() with the UCURR_SYMBOL_NAME selector. You can also use ucurr_forLocale() or ucurr_forLocaleAndDate() to get the currency code without needing a formatter. Note that there can be multiple currencies for a locale.
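A sketch of that second approach (untested like the first; link against the icuio library for u_printf, and copy the name into a buffer because the result of ucurr_getName is not guaranteed to be NUL-terminated):
#include <unicode/ucurr.h>
#include <unicode/ustdio.h>
#include <unicode/ustring.h>

int main(void)
{
    UErrorCode status = U_ZERO_ERROR;
    UChar code[4];    /* ISO 4217 code: 3 UChars plus terminator */
    UChar symbol[16];
    UBool isChoice = 0;
    int32_t symLen = 0;

    /* Currency code for the locale, e.g. "USD" for en_US */
    ucurr_forLocale("en_US", code, 4, &status);

    /* Localized display symbol for that code, e.g. "$" */
    const UChar *name = ucurr_getName(code, "en_US", UCURR_SYMBOL_NAME,
                                      &isChoice, &symLen, &status);
    if (U_SUCCESS(status)) {
        if (symLen > 15)
            symLen = 15; /* clamp to the buffer size */
        u_strncpy(symbol, name, symLen);
        symbol[symLen] = 0;
        u_printf("(%S - %S)\n", symbol, code); /* e.g. ($ - USD) */
    }
    return 0;
}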