How to write a uint64_t to a char* buffer in C

So I am trying to write a uint64_t return address to a buffer and then verify that the correct return address got written to the correct spot. Here is my code.
uint64_t new_ret = ret - 8 - 32 - 32 - 1001 + (ret_buffer_offset + sizeof(shellcode));
printf("new_ret :%lu\n", new_ret);
snprintf(&buffer[ret_buffer_offset], 8, "%s", new_ret);
// debug code
char buffer_debug[10];
uint64_t* buffer_uint = (uint64_t*)buffer_debug;
bzero(buffer_debug, 10);
strncpy(buffer_debug, &buffer[ret_buffer_offset], 8);
uint64_t ret_debug = *buffer_uint;
printf("ret debug: %lu\n", ret_debug);
the two printfs should output the same thing, but the bottom printf outputs a very different number. I'm not sure if the way I'm writing to the buffer is wrong or if the way I'm getting the value back is wrong. What is the issue here?
Thanks

snprintf(&buffer[ret_buffer_offset], 8, "%s", new_ret);
buffer now contains the string representation of your original value (or at least the first 8 bytes of the string representation). Your code then takes the first 8 bytes of this string and interprets that binary sequence as if it was a uint64_t. Step through this chunk of code in a debugger and you'll see exactly where the value changes.
I'm not sure exactly what this code is trying to do, but it seems like it's doing more copying and conversion than necessary. If you have a pointer to where you want the value to go, you should be able to either do memcpy(ptr, &new_ret, sizeof(new_ret)) or possibly even *(uint64_t*)ptr = new_ret if your platform allows potentially-misaligned writes. To print out the value for debugging, you'd use printf("%"PRIu64, *(uint64_t*)ptr).
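For instance, here is a self-contained sketch of that memcpy round-trip (the buffer size, offset and value are made-up stand-ins for the question's values):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    char buffer[64] = {0};          /* stand-in for the question's buffer */
    size_t ret_buffer_offset = 16;  /* stand-in for the question's offset */
    uint64_t new_ret = 0x00007fffdeadbeefULL;

    /* Copy the raw 8 bytes of the value; no text conversion involved. */
    memcpy(&buffer[ret_buffer_offset], &new_ret, sizeof(new_ret));

    /* Read them back the same way to verify. */
    uint64_t ret_debug;
    memcpy(&ret_debug, &buffer[ret_buffer_offset], sizeof(ret_debug));
    printf("ret debug: %" PRIu64 "\n", ret_debug); /* matches new_ret */
    return 0;
}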

I like to use a union, but making a char pointer (char*) point to the address of the uint64_t also works.
Using a pointer it would be:
uint64_t new_ret = ret - 8 - 32 - 32 - 1001 + (ret_buffer_offset + sizeof(shellcode));
buffer = (char*) &new_ret;
Test the code below, which uses both a union and a pointer:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(){
    union {
        uint64_t u64;
        char str[8];
    } handler;

    handler.u64 = 65;
    printf("Using union:\n");
    printf(" uint64: %" PRIu64 "\n", handler.u64);
    printf(" char*: %s\n", handler.str); /* prints "A" on little endian: 65 is 'A', remaining bytes are 0 */

    uint64_t u64 = 65;
    char *str = (char*)&u64;
    printf("Using pointer:\n");
    printf(" uint64: %" PRIu64 "\n", u64);
    printf(" char*: %s\n", str);
    return 0;
}

Related

Why does sscanf result in 0 in this case?

I am trying to convert a series of decimal numbers to their hex representation in string format and then back from string to decimal. This might sound strange but is a simplified representation of a more complex situation.
So, either way, I have the following piece of code which almost works fine. For some reason my variable a is still equal to 0 at the end while it should equal 43, all the other variables seem to be alright:
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(void)
{
    /********************* A *******************/
    uint16_t a = 43;   //0x002b
    int16_t b = -43;   //0xffd5
    uint32_t c = 234;  //0x000000ea
    int32_t d = -234;  //0xffffff16
    char aStr[10]={0};
    char bStr[10]={0};
    char cStr[10]={0};
    char dStr[10]={0};
    snprintf(aStr, sizeof(aStr), "%04hhx", a);
    snprintf(bStr, sizeof(bStr), "%04x", b & 0xFFFF);
    snprintf(cStr, sizeof(cStr), "%08hhx", c);
    snprintf(dStr, sizeof(aStr), "%08x", d & 0xFFFFFFFF);
    fprintf(stdout, "TX a = %s.\n", aStr);
    fprintf(stdout, "TX b = %s.\n", bStr);
    fprintf(stdout, "TX c = %s.\n", cStr);
    fprintf(stdout, "TX d = %s.\n", dStr);
    /********************* B *******************/
    uint16_t aOut = 0;
    int16_t bOut = 0;
    uint32_t cOut = 0;
    int32_t dOut = 0;
    sscanf(aStr, "%04hhx", &aOut);
    sscanf(bStr, "%04x", &bOut);
    sscanf(cStr, "%08hhx", &cOut);
    sscanf(dStr, "%08x", &dOut);
    fprintf(stdout, "rx a = %d\n", aOut); //<---- this line prints 0 for a. Why?
    fprintf(stdout, "rx b = %d\n", bOut);
    fprintf(stdout, "rx c = %d\n", cOut);
    fprintf(stdout, "rx d = %d\n", dOut);
    return 0;
}
Does anybody know why or what I am missing?
The line
sscanf(aStr, "%04hhx", &aOut);
is wrong. The %hhx conversion format specifier requires an argument of type unsigned char *, but you are instead passing it an argument of type uint16_t *. This invokes undefined behavior.
I suggest that you change that line to the following:
sscanf(aStr, "%04"SCNx16, &aOut);
On most platforms, the macro SCNx16 will simply expand to "hx", but it is generally safer to use the macro, in case your code happens to run on a (future) platform on which uint16_t is not equivalent to an unsigned short.
The lines
sscanf(bStr, "%04x", &bOut);
sscanf(cStr, "%08hhx", &cOut);
sscanf(dStr, "%08x", &dOut);
are also using the wrong conversion format specifiers. I recommend that you use the following code instead:
sscanf(bStr, "%04"SCNx16, &bOut);
sscanf(cStr, "%08"SCNx32, &cOut);
sscanf(dStr, "%08"SCNx32, &dOut);
Strictly speaking, the above code also invokes undefined behavior, because the %x conversion format specifier requires a pointer to an unsigned type instead of to a signed type. However, this should not be a problem in practice, provided that the converted value can be represented both in the signed type and in the unsigned type.
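Putting it together, a minimal corrected round-trip for a alone might look like this (the same pattern applies to the other variables; PRIx16/PRIu16 are the output-side counterparts of the scan macros):

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint16_t a = 43; // 0x002b
    char aStr[10] = {0};
    snprintf(aStr, sizeof(aStr), "%04" PRIx16, a); /* "002b" */

    uint16_t aOut = 0;
    sscanf(aStr, "%04" SCNx16, &aOut);             /* pointer type now matches the format */
    fprintf(stdout, "rx a = %" PRIu16 "\n", aOut); /* prints 43 */
    return 0;
}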

Function is returning a different value every time?

I'm trying to convert a hexadecimal INT to a char so I could convert it into a binary to count the number of ones in it. Here's my function to convert it into char:
#include <stdio.h>
#include <stdlib.h>
#define shift(a) a=a<<5
#define parity_even(a) a = a+0x11
#define add_msb(a) a = a + 8000
void count_ones(int hex){
    char *s = malloc(2);
    sprintf(s, "0x%x", hex);
    free(s);
    printf("%x", s);
};
int main() {
    int a = 0x01B9;
    shift(a);
    parity_even(a);
    count_ones(a);
    return 0;
}
Every time I run this, I always get different outputs, but the last three hex digits are always the same. Example outputs:
8c0ba2a0
fc3b92a0
4500a2a0
d27e82a0
c15d62a0
What exactly is happening here? I allocated 2 bytes for the char since my hex int is 2 bytes.
It's too long to write a comment so here goes:
I'm trying to convert a hexadecimal INT
An int is stored as a group of value bits, padding bits (possibly none) and a sign bit, so there is no such thing as a hexadecimal int, but you can represent (print) a given number in hexadecimal format.
convert a ... INT to a char
That would be a lossy conversion, as an int might have 4 bytes of data that you are trying to cram into a single byte. char specifically may be signed or unsigned. You probably mean string (generic term) or char [] (the standard way to represent a string in C).
binary to count the number of ones
That's the real issue you are trying to solve and this is a duplicate of:
How to count the number of set bits in a 32-bit integer?
count number of ones in a given integer using only << >> + | & ^ ~ ! =
To address the question you ask:
You need to allocate more than 2 bytes: specifically ceil(log16(hex)) + 2 (for the "0x") + 1 (for the trailing '\0').
One way to get the size is to just ask snprintf(s, 0, ...), then allocate a suitable array via malloc (see the first implementation below) or use a stack-allocated variable length array (VLA).
You can use INT_MAX instead of hex to get an upper bound: log16(INT_MAX) <= CHAR_BIT * sizeof(int) / 4, and the latter is a compile-time constant. This means you can allocate your string on the stack (see the second implementation below).
It's undefined behavior to use a variable after it's deallocated. Move the free() to after the last use.
Here is one of the dynamic versions mentioned above:
void count_ones(unsigned hex) {
    char *s = NULL;
    size_t n = snprintf(s, 0, "0x%x", hex) + 1;
    s = malloc(n);
    if(!s) return; // memory could not be allocated
    snprintf(s, n, "0x%x", hex);
    printf("%s (size = %zu)", s, n);
    free(s);
}
Note, I initialized s to NULL, which would cause the first call to snprintf() to return an undefined value on SUSv2 (legacy); it's well defined in C99 and later. The output is:
0x3731 (size = 7)
And the compile-time version using a fixed upper bound:
#include <limits.h>
#include <stdio.h>

// compile-time
void count_ones(unsigned hex) {
    char s[CHAR_BIT * sizeof(int) / 4 + 3];
    sprintf(s, "0x%x", hex);
    printf("%s (size = %zu)", s, sizeof s);
}
and the output is:
0x3731 (size = 11)
Your biggest problem is that malloc isn't allocating enough. As Barmar said, you need at least 7 bytes to store it, or you could calculate the amount needed. Another problem is that you are freeing the memory and then using it. It is only one line after the free that you use it again, which will appear to work most of the time, but it is still undefined behavior: only free the memory once you know you are done using it.
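Since counting one-bits was the real goal, here is a short sketch of one classic approach from the linked duplicates (Kernighan's method of clearing the lowest set bit), which sidesteps the string conversion entirely:

#include <stdio.h>

/* Counts set bits by clearing the lowest set bit until none remain. */
static unsigned popcount(unsigned v)
{
    unsigned count = 0;
    while (v != 0)
    {
        v &= v - 1; /* clears exactly one set bit per iteration */
        ++count;
    }
    return count;
}

int main(void)
{
    printf("%u\n", popcount(0x3731u)); /* 0x3731 has 8 set bits */
    return 0;
}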

Converting a struct to a hex string in C

I have the following struct
typedef struct __attribute__((packed)) word {
    uint16_t value;
    uint8_t flag;
} Word;
I want to convert it to a hex string. For example if value = 0xabcd and flag = 0x01 I want to get the string 01abcd
if I do some pointer juggling
Word word;
word.value = 0xabcd;
word.flag = 0x01;
printf("word: %X\n", *(int *)&word);
I get the output that I want (word: 1ABCD) but this doesn't seem safe
and when I tried to do this after looking at some of the answers here
char ptr[3];
memcpy(ptr, &word, 3);
printf("word: %02X%02X%02X\n", ptr[2], ptr[1], ptr[0]);
I got word: 01FFFFFFABFFFFFFCD, for some reason the first two bytes are being extended to a full int
There's no real gain from messing around with pointers or type-punning, if all you want is to output the values of the two structure members. Just print them individually:
printf("word: %02x%04x\n", (unsigned int)word.flag, (unsigned int)word.value);
Use a simple sprintf to convert to a string:
int main(void)
{
    Word v = { 0xabcd, 0x01 };
    char s[10];
    sprintf(s, "%02x%04x", v.flag, v.value);
    puts(s);
}
I want to get the string 01abcd
So you want to print the binary representation of the struct on a little endian machine backwards. There's no obvious advantage of using sprintf for this apart from it being quick & dirty to implement.
If you want something with performance, then hacking this out manually isn't rocket science - simply iterate over the struct byte by byte and convert each nibble to the corresponding hex character:
void stringify (const uint8_t* data, size_t size, char* dst)
{
    for(size_t i=0; i<size; i++)
    {
        uint8_t byte = data[size-i-1];                 // read starting from the end of the data
        dst[i*2]   = "0123456789ABCDEF"[ byte >> 4 ];  // ms nibble
        dst[i*2+1] = "0123456789ABCDEF"[ byte & 0xF ]; // ls nibble
    }
    dst[size*2] = '\0';
}
This will give "01ABCD" on a little endian machine and "01CDAB" on a big endian machine.
(Optionally add restrict to the data and dst parameters, but I doubt it will improve performance, since the const correctness already blocks aliasing to some extent.)
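Usage with the struct from the question might look like this (assuming the packed Word typedef and the stringify() above are in scope):

Word word = { .value = 0xabcd, .flag = 0x01 };
char hex[sizeof(word) * 2 + 1]; /* two hex chars per byte plus '\0' */
stringify((const uint8_t*)&word, sizeof(word), hex);
printf("word: %s\n", hex); /* "01ABCD" on a little endian machine */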
Could join them mathematically first.
printf("word: %06lx\n", wordk.flag * 0x10000L + word.value);
Using long in case we are on a 16-bit int machine.

sscanf returning -1 while scanning a short int from a memory pointer in C?

I am trying to scan a short int (16 bits/2 bytes) from a memory pointer using sscanf as shown below. But it is showing some weird behaviour.
#include <stdio.h>
#include <errno.h>
int main()
{
    unsigned char p[] = {0x00, 0x1A};
    short int s = 0;
    int ret = sscanf(p,"%hu",&s);
    printf("Actual data %02x %02x\n",p[0],p[1]);
    printf("s = %hu ret = %d errno = %d\n",s,ret,errno);
    return 0;
}
Output:
Actual data 00 1a
s = 0 ret = -1 errno = 0
Please help me to understand what I am doing wrong!
p is pointing at a 0x00 byte, so it looks like an empty string, hence there is nothing for sscanf to parse.
It looks like you just want to copy those two bytes verbatim into your short, not convert a C string to a short? If so then instead of the sscanf call you could just use memcpy:
memcpy(&s, p, sizeof(s));
Note that this assumes that your buffer and the destination short have the same endianness. If not then you will also need to take care of the byte order.
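If the buffer's byte order is fixed regardless of host (the {0x00, 0x1A} data looks big endian), one option is to assemble the value by hand; a one-line sketch using the question's p and s:

s = (short)((p[0] << 8) | p[1]); /* 0x001A == 26 regardless of host byte order */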

How much space to allocate for printing long int value in string?

I want to store a long value (LONG_MAX in my test program) in a dynamically allocated string, but I'm confused about how much memory I need to allocate for the number to be displayed in the string.
My first attempt:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <limits.h>
int main(void)
{
    char *format = "Room %lu somedata\n";
    char *description = malloc(sizeof(char) * strlen(format) + 1);
    sprintf(description, format, LONG_MAX);
    puts(description);
    return 0;
}
Compiled with
gcc test.c
And then running it (and piping it into hexdump):
./a.out | hd
Returns
00000000 52 6f 6f 6d 20 39 32 32 33 33 37 32 30 33 36 38 |Room 92233720368|
00000010 35 34 37 37 35 38 30 37 20 62 6c 61 62 6c 61 0a |54775807 blabla.|
00000020 0a |.|
00000021
Looking at the output, it seems my memory allocation of sizeof(char) * strlen(format) + 1 is wrong (too little memory allocated) and it only works by accident?
What is the correct amount to allocate then?
My next idea was (pseudo-code):
sizeof(char) * strlen(format) + strlen(LONG_MAX) + 1
This seems too complicated and pretty non-idiomatic. Or am I doing something totally wrong?
You are doing it totally wrong. LONG_MAX is an integer, so you can't call strlen() on it. And it's not the number that gives the longest result; LONG_MIN is, because it prints a minus character as well.
A nice method is to write a function
char* mallocprintf (...)
which has the same arguments as printf and returns a string allocated using malloc with the exactly right length. How you do this: First figure out what a va_list is and how to use it. Then figure out how to use vsnprintf to find out how long the result of printf would be without actually printing. Then you call malloc, and call vsnprintf again to produce the string.
This has the big advantage that it works when you print strings using %s, or strings using %s with some large field length. Guess how many characters %999999d prints.
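A minimal sketch of such a function following exactly those steps (the name mallocprintf and the error handling here are illustrative, not a standard API):

#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* Like sprintf, but allocates a result buffer of exactly the right size.
   Returns NULL on failure; the caller must free the returned string. */
char *mallocprintf(const char *fmt, ...)
{
    va_list ap, ap2;
    va_start(ap, fmt);
    va_copy(ap2, ap);                       /* the first vsnprintf consumes ap */
    int len = vsnprintf(NULL, 0, fmt, ap);  /* measure, printing nothing */
    va_end(ap);

    char *s = NULL;
    if (len >= 0 && (s = malloc((size_t)len + 1)) != NULL)
        vsnprintf(s, (size_t)len + 1, fmt, ap2); /* now actually produce the string */
    va_end(ap2);
    return s;
}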
You can use snprintf() to figure out the length without worrying about the size of LONG_MAX.
When you call snprintf with a NULL buffer and zero size, it returns the number of bytes that would have been required had it written into a buffer, so you know exactly how many bytes are needed.
char *format = "Room %lu somedata\n";
int len = snprintf(0, 0, format, LONG_MAX); // returns the number of bytes that
                                            // writing the string would require
char *description = malloc( len+1 );
if(!description)
{
    /* error handling */
}
snprintf(description, len+1, format, LONG_MAX);
Convert the predefined constant numeric value to a string, using macro expansion as explained in convert digital to string in macro:
#define STRINGIZER_(exp) #exp
#define STRINGIZER(exp) STRINGIZER_(exp)
(code courtesy of Whozcraig). Then you can use
int max_digit = strlen(STRINGIZER(LONG_MAX))+1;
or
int max_digit = strlen(STRINGIZER(LONG_MIN));
for signed values, and
int max_digit = strlen(STRINGIZER(ULONG_MAX));
for unsigned values.
Since the value of LONG_MAX is a compile-time constant, not a run-time value, you are ensured this writes the correct constant for your compiler into the executable. (Beware that LONG_MAX may expand to a literal with a suffix such as L; stringizing then counts the suffix too, which only over-allocates, so the result still errs on the safe side.)
To allocate enough room, consider worst case
// Over approximate log10(pow(2,bit_width))
#define MAX_STR_INT(type) (sizeof(type)*CHAR_BIT/3 + 3)
char *format = "Room %lu somedata\n";
size_t n = strlen(format) + MAX_STR_INT(unsigned long) + 1;
char *description = malloc(n);
sprintf(description, format, LONG_MAX);
Pedantic code would consider potential other locales
snprintf(description, n, format, LONG_MAX);
Yet in the end, recommend a 2x buffer
char *description = malloc(n*2);
sprintf(description, format, LONG_MAX);
Note: printing with the specifier "%lu", which is meant for unsigned long, while passing the (signed) long value LONG_MAX is undefined behavior. Suggest ULONG_MAX:
sprintf(description, format, ULONG_MAX);
With credit to the answer by @Jongware, I believe the ultimate way to do this is the following:
#define STRINGIZER_(exp) #exp
#define STRINGIZER(exp) STRINGIZER_(exp)
const size_t LENGTH = sizeof(STRINGIZER(LONG_MAX)) - 1;
The stringification turns the value into a string literal, whose size includes the terminating null, hence the -1.
And note that, since everything here is a compile-time constant, you could just as well simply declare the string as
const char *format = "Room " STRINGIZER(LONG_MAX) " somedata\n";
You cannot size it from the format string alone. You need to observe:
LONG_MAX = 2147483647 = 10 characters
"Room  somedata\n" = 15 characters
add the null terminator = 26 characters total
so
malloc(26)
should suffice.
You have to allocate a number of chars equal to the number of digits in LONG_MAX, which is 2147483647: that is 10 digits. In your format string you then have:
Room = 4 chars
somedata\n = 9 chars
spaces = 2 chars
null termination = 1 char
So you have to malloc 26 chars.
If you want to determine at runtime how many digits your number has, you can write a loop that tests the number digit by digit:
int count = 0;
while(n != 0)
{
    n /= 10; /* n = n/10: drop one decimal digit per pass */
    ++count;
}
Another way is to store the sprintf result temporarily in a local buffer and then malloc strlen(tempStr)+1 chars.
Usually this is done by formatting into a "known" large-enough buffer on the stack and then dynamically allocating whatever is needed to fit the formatted string, i.e.:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <limits.h>
int main(void)
{
    char buffer[1024];
    sprintf(buffer, "Room %lu somedata\n", LONG_MAX);
    char *description = malloc( strlen( buffer ) + 1 );
    strcpy( description, buffer );
    puts(description);
    return 0;
}
Here, using strlen(format) to size the allocation is a bit problematic: it counts the literal characters "%lu" in the format string, not the width of the value that will actually be printed in their place.
You should consider the maximum possible value for unsigned long:
ULONG_MAX = 4294967295
i.e. 10 chars wide when printed (assuming a 32-bit unsigned long).
So you have to allocate space for:
the actual string (the literal characters), plus
10 chars (at most) for the value printed in place of %lu, plus
1 char to represent a - sign, in case the value is negative, plus
1 null terminator.
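In code, that sizing might look like the following sketch (the 10 is the 32-bit assumption from above; the names are the question's):

const char *format = "Room %lu somedata\n";
size_t n = strlen(format) - 3 /* the "%lu" itself is replaced, not printed */
         + 10                 /* max digits of a 32-bit unsigned long */
         + 1                  /* room for a '-' sign, just in case */
         + 1;                 /* terminating '\0' */
char *description = malloc(n);
if (description)
    sprintf(description, format, (unsigned long)LONG_MAX);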
Well, if long is 32-bit on your machine, then LONG_MAX should be 2147483647, which is 10 characters long. You need to account for that, the rest of your string, and the null character.
Keep in mind that long is a signed value with maximum value of LONG_MAX, and you are using %lu (which is supposed to print an unsigned long value). If you can pass a signed value to this function, then add an additional character for the minus sign, otherwise you might use ULONG_MAX to make it clearer what your limits are.
If you are unsure which architecture you are running on, you might use something like:
// this should work for signed 32 or 64 bit values
#define NUM_CHARS ((sizeof(long) == 8) ? 21 : 11)
Or, play safe and simply use 21. :)
Use the following code to calculate the number of characters necessary to hold the decimal representation of any positive integer:
#include <math.h>
...
size_t characters_needed_decimal(long long unsigned int llu)
{
    size_t s = 1;
    if (0LL != llu)
    {
        s += log10(llu);
    }
    return s;
}
Remember to add 1 when using a C string to store the number, as C strings are 0-terminated.
Use it like this:
#include <limits.h>
#include <stdlib.h>
#include <stdio.h>
size_t characters_needed_decimal(long long unsigned int);
int main(void)
{
    size_t s = characters_needed_decimal(LONG_MAX);
    ++s; /* 1+ as we want a sign */
    char * p = malloc(s + 1); /* add one for the 0-termination */
    if (NULL == p)
    {
        perror("malloc() failed");
        exit(EXIT_FAILURE);
    }
    sprintf(p, "%ld", LONG_MAX);
    printf("LONG_MAX = %s\n", p);
    sprintf(p, "%ld", LONG_MIN);
    printf("LONG_MIN = %s\n", p);
    free(p);
    return EXIT_SUCCESS;
}
Safest:
Rather than predict the allocation needed, use asprintf(). This function allocates memory as needed.
char *description = NULL;
asprintf(&description, "Room %lu somedata\n", LONG_MAX);
asprintf() is not standard C, but is common in *nix and its source code is available to accommodate other systems.
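asprintf() returns -1 on failure (whether the pointer is also set to NULL afterwards varies between implementations), so a careful call site checks the return value; a sketch, using ULONG_MAX to keep the %lu specifier matched:

char *description = NULL;
if (asprintf(&description, "Room %lu somedata\n", ULONG_MAX) < 0) {
    /* allocation or formatting failed; description must not be used */
} else {
    puts(description);
    free(description);
}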