I am making a function to get the maximum value of an array of NMEMB members, each of size SIZ, comparing the members with memcmp(). The problem is that when comparing signed integers the result is incorrect, yet at the same time correct. Here is an example:
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

void hexdump(const void *buf, size_t len); /* debugging helper, defined elsewhere */

void *
getmax(const void *data, size_t nmemb, size_t siz){
    const uint8_t *bytes = (const uint8_t *)data;
    void *max = malloc(siz);
    if (!max){
        errno = ENOMEM;
        return NULL;
    }
    memcpy(max, bytes, siz);
    while (nmemb > 0){
        hexdump(bytes, siz);
        if (memcmp(max, bytes, siz) < 0)
            memcpy(max, bytes, siz);
        bytes += siz;
        --nmemb;
    }
    return max;
}
int
main(int argc, char **argv){
    int v[] = {5, 1, 3, 1, 34, 198, -12, -11, -0x111118};
    size_t nmemb = sizeof(v)/sizeof(v[0]);
    int *maximum = getmax(v, nmemb, sizeof(v[0]));
    printf("%d\n", *maximum);
    return 0;
}
hexdump() is just a debugging function; it doesn't alter the program.
When compiling and executing, the output is the following:
05 00 00 00 // hexdump() output
01 00 00 00
03 00 00 00
01 00 00 00
22 00 00 00
c6 00 00 00
f4 ff ff ff
f5 ff ff ff
e8 ee ee ff
-11 // "maximum" value
This is correct in the sense that memcmp() compares a string of bytes and doesn't care about types or sign, so -11 = 0xfffffff5 is the maximum string of bytes in the array v[]. But at the same time it is incorrect, since -11 is not the maximum integer in the array.
Is there any way of getting the maximum integer of an array using this function?
memcmp() compares the memory locations as raw bytes and does not care about sign. To it, -11 means 0xFFFFFFF5, -12 means 0xFFFFFFF4, and the biggest number in the array, 198, means 0x000000C6. Out of all of these, -11 is the biggest unsigned byte string, so it is returned to you. You should not use memcmp() to compare signed numbers.
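A minimal sketch of this in isolation (my own example, assuming a 4-byte little-endian int):

#include <stdio.h>
#include <string.h>

int main(void) {
    int a = -11, b = 198;
    /* On a little-endian machine memcmp() sees -11 as f5 ff ff ff and
       198 as c6 00 00 00, so -11 compares as the "bigger" byte string. */
    printf("memcmp: %d\n", memcmp(&a, &b, sizeof a)); /* prints a positive value */
    return 0;
}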
Go down the qsort route and require a custom comparator. Note that you absolutely don't need dynamic memory allocation in a function this simple:
#include <stdio.h>

void const *getmax(void const *data, size_t const count, size_t const elm_sz,
                   int (*cmp)(void const *, void const *)) {
    char const *begin = data;
    char const *end = begin + count * elm_sz;
    char const *max = begin;
    while (begin != end) {
        if (cmp(max, begin) < 0) max = begin;
        begin += elm_sz;
    }
    return max;
}

int int_cmp(void const *e1, void const *e2) {
    int const i1 = *(int const *)e1;
    int const i2 = *(int const *)e2;
    if (i1 > i2) return 1;
    if (i1 < i2) return -1;
    return 0;
}

int main() {
    int v[] = {5, 1, 3, 1, 34, 198, -12, -11, -0x111118};
    int const *maximum = getmax(v, sizeof(v) / sizeof(*v), sizeof(*v), int_cmp);
    printf("%d\n", *maximum);
}
All comparisons made by memcmp() are unsigned and operate on char-sized elements. When you feed it an array of signed int, whose elements are a different size, the result can only be used to test equality of the binary representations: a result of zero means equality, non-zero means inequality. The sign of a non-zero result comes from comparing the individual bytes of the integers as unsigned values, including the bytes that carry sign information. In addition, the significance of the different bytes within an integer affects the sort order: the bytes are compared from lower addresses to higher addresses, which matches numeric order only if the integers are stored as unsigned values and (very important) stored in memory in big-endian order. If you are on an Intel architecture, which is little-endian, it is exactly the opposite, so you cannot use memcmp() this way.
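As an illustration of that last point (my own sketch, not part of the original answer): if each int is first mapped to unsigned by flipping its sign bit and then stored big-endian, memcmp() ordering does match numeric ordering.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Flip the sign bit and store big-endian: an order-preserving encoding. */
static void encode_be(uint8_t out[4], int v) {
    uint32_t u = (uint32_t)v ^ 0x80000000u;
    out[0] = (uint8_t)(u >> 24);
    out[1] = (uint8_t)(u >> 16);
    out[2] = (uint8_t)(u >> 8);
    out[3] = (uint8_t)u;
}

int main(void) {
    uint8_t a[4], b[4];
    encode_be(a, -11);
    encode_be(b, 198);
    /* Byte order now agrees with numeric order: prints "a < b". */
    printf("a %s b\n", memcmp(a, b, 4) < 0 ? "<" : ">=");
    return 0;
}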
I want to make something like a small hex editor for my project, so I wrote this function (to replace the original code with the new code):
int replace(FILE *binaryFile, long offset, unsigned char *replaced, int length) {
    if (binaryFile != NULL) {
        for (int i = 0; i < length; i++) {
            fseek(binaryFile, offset + i, SEEK_SET);
            fwrite(&replaced[i], sizeof(replaced), 1, binaryFile);
        }
        fclose(binaryFile);
        return 1;
    } else return -1;
}
So I wrote this code to test the function and sent it to address 0x0:
unsigned char code[] = "\x1E\xFF\x2F\xE1";
and I got this hexadecimal result:
1e ff 2f e1 00 10 2b 35 ff fe 07 00
But I don't need the data after E1 (00 10 2b 35 ff fe 07 00).
How can I write the function so that only the data sent to the function is stored?
sizeof(replaced) is wrong: replaced is an unsigned char *, so that expression gives the size of the pointer, not the size you want.
You probably want sizeof(unsigned char) or sizeof(*replaced).
Currently, you end up writing eight times too much (on a platform with 8-byte pointers).
Note that you could also write in a single step:
if (binaryFile != NULL)
{
    fseek(binaryFile, offset, SEEK_SET);
    fwrite(replaced, sizeof(unsigned char), length, binaryFile);
}
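Putting the fix together, a corrected replace() might look like this (a sketch based on the above, keeping the original's minimal error handling):

#include <stdio.h>

int replace(FILE *binaryFile, long offset, unsigned char *replaced, int length) {
    if (binaryFile == NULL)
        return -1;
    fseek(binaryFile, offset, SEEK_SET);
    /* one element per byte, length elements total */
    fwrite(replaced, sizeof *replaced, (size_t)length, binaryFile);
    fclose(binaryFile);
    return 1;
}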
Goal: Print variable number of bytes using a single format specifier.
Environment: x86-64 Ubuntu 20.04.3 LTS running in VM on an x86-64 host machine.
Example:
Let %kmagic be the format specifier I am looking for, which prints k bytes by popping them from the stack and adding them to the output. Then, for %rsp pointing to a region in memory holding bytes 0xde 0xad 0xbe 0xef, I want printf("Next 4 bytes on the stack: %4magic") to print Next 4 bytes on the stack: deadbeef.
What I tried so far:
%khhx, which unfortunately just results in k-1 blank spaces followed by two hex characters (one byte of data).
%kx, which I expected to print k/2 bytes interpreted as one number. This only prints 8 hex characters (4 bytes) preceded by k-8 blank spaces.
The number of non-blank characters printed matches the natural length of the conversion, i.e. the expected length of %hhx is 2, which is also the number of non-blank characters printed. The same holds for %x, which one expects to print 8 characters.
Question:
Is it possible to get the desired behavior? If so, how?
Is it possible to get the desired behavior? If so, how?
There is no printf format specifier that does what you want.
Is it possible
Write your own printf implementation that supports what you want, or use implementation-specific tools to create your own printf format specifier. You can take inspiration from the Linux kernel's printk %*phN format specifier.
It is not possible using standard printf. You need to write your own function and customize printf:
http://www.gnu.org/software/libc/manual/html_node/Customizing-Printf.html
Example (simple dump):
#include <printf.h>
#include <stdio.h>

int printdump (FILE *stream, const struct printf_info *info, const void *const *args)
{
    const unsigned char *ptr = *(const unsigned char **)args[0];
    size_t size = *(size_t*)args[1];
    for(size_t i = 1; i <= size; i++)
    {
        fprintf(stream, "%02X%c", ptr[i-1], i % 8 ? ' ' : '\n');
    }
    return 1;
}

int printdumpargs (const struct printf_info *info, size_t n, int *argtypes)
{
    if (n == 2)
    {
        /* braces matter: only fill the slots we were actually given */
        argtypes[0] = PA_POINTER;
        argtypes[1] = PA_INT;
    }
    return 2;
}

int main(void)
{
    double x[4] = {456543645.6786e45, 456543654, 1e345, -345.56e67};
    register_printf_function ('Y', printdump, printdumpargs);
    printf("%Y\n", &x, sizeof(x));
}
As far as I can see, it is deprecated now (probably no one was using it):
https://godbolt.org/z/qKs6e1d9q
Output:
30 18 CB 5A EF 10 13 4B
00 00 00 A6 4D 36 BB 41
00 00 00 00 00 00 F0 7F
C4 5D ED 48 9C 05 60 CE
There is no standard conversion specifier for your purpose, but you can achieve your goal in C99 using an ancillary function and a compound literal array:
#include <stdio.h>
char *dump_bytes(char *buf, const void *p, size_t count) {
    const unsigned char *src = p;
    char *dest = buf;
    while (count --> 0) {
        dest += sprintf(dest, "%.2X", *src++);
        if (count)
            *dest++ = ' ';
    }
    *dest = '\0'; // return an empty string for an empty memory chunk
    return buf;
}
int main() {
    long n = 0x12345;
    printf("n is at address %p with contents: %s\n",
           (void *)&n,
           dump_bytes((char[3 * sizeof(n)]){""}, &n, sizeof(n)));
    return 0;
}
Output: n is at address 0x7fff523f57d8 with contents: 45 23 01 00 00 00 00 00
You can use a macro for simpler invocation:
#include <stdlib.h> /* for malloc() and free() */

#define DUMPBYTES(p, n) dump_bytes((char[3 * (n)]){""}, p, n)

int main() {
    char *p = malloc(5);
    printf("allocated 5 bytes at address %p with contents: %s\n",
           p, DUMPBYTES(p, 5));
    free(p);
    return 0;
}
I have a struct of six 16-bit integers and one 32-bit integer (16 bytes total) and I'm trying to print the struct's contents one byte at a time. Currently I use
printf("%.4x %.4x %.4x %.4x %.4x %.4x %.4x\n", );
with the 7 struct members as the following parameters.
My output is as following:
0001 0100 0010 0002 0058 0070 464c45
And I would like to format it as:
01 00 00 01 10 00 02 00 58 00 70 00 45 4c 46 00
I've been searching everywhere to try and find out how to properly format it. Any help would be greatly appreciated! Thank you in advance!
You can just move an unsigned char pointer over the struct, reading byte by byte (I hope I don't mix things up with C++; getting into undefined behavior can happen when doing such things):
#include <stdio.h>
#include <stdint.h>
struct Data {
    int16_t small[6];
    int32_t big;
};

void funky_print(struct Data const * data) {
    unsigned char const * ptr = (unsigned char const *)data;
    size_t i;
    printf("%.2hhx", *ptr);
    ++ptr;
    for (i = 1; i < sizeof(*data); ++i) {
        printf(" %.2hhx", *ptr);
        ++ptr;
    }
}

int main(void) {
    struct Data d = {{0xA0B0, 0xC0D0, 84, 128, 3200, 0}, 0x1BADCAFE};
    funky_print(&d);
    return 0;
}
I want to print a character string in hexadecimal format on machine A. Something like:
ori_mesg = gen_rdm_bytestream (1400, seed)
sendto(machine B, ori_mesg, len(mesg))
On machine B
recvfrom(machine A, mesg)
mesg_check = gen_rdm_bytestream (1400, seed)
for(i=0; i<20; i++){
    printf("%02x ", *(mesg+i) & 0xFF);
}
printf("\n");
for(i=0; i<20; i++){
    printf("%02x ", *(mesg_check+i));
}
printf("\n");
seed varies among 1, 2, 3, ...
The byte-generation function is:
u_char *gen_rdm_bytestream (size_t num_bytes, unsigned int seed)
{
    u_char *stream = malloc (num_bytes+4);
    size_t i;
    u_int16_t seq = seed;
    seq = htons(seq);
    u_int16_t tail = num_bytes;
    tail = htons(tail);
    memcpy(stream, &seq, sizeof(seq));
    srand(seed);
    for (i = 3; i < num_bytes+2; i++){
        stream[i] = rand ();
    }
    memcpy(stream+num_bytes+2, &tail, sizeof(tail));
    return stream;
}
But I got results from printf like:
00 01 00 67 c6 69 73 51 ff 4a ec 29 cd ba ab f2 fb e3 46 7c
00 01 00 67 ffffffc6 69 73 51 ffffffff 4a ffffffec 29 ffffffcd ffffffba ffffffab fffffff2 fffffffb ffffffe3 46 7c
or
00 02 88 fa 7f 44 4f d5 d2 00 2d 29 4b 96 c3 4d c5 7d 29 7e
00 02 00 fffffffa 7f 44 4f ffffffd5 ffffffd2 00 2d 29 4b ffffff96 ffffffc3 4d ffffffc5 7d 29 7e
Why are there so many ffffff prefixes for mesg_check?
Are there any potential reasons for this phenomenon?
Here's a small program that illustrates the problem I think you might be having:
#include <stdio.h>
int main(void) {
    char arr[] = { 0, 16, 127, 128, 255 };
    for (int i = 0; i < sizeof arr; i++) {
        printf(" %2x", arr[i]);
    }
    putchar('\n');
    return 0;
}
On my system (on which plain char is signed), I get this output:
0 10 7f ffffff80 ffffffff
The value 255, when stored in a (signed) char, is stored as -1. In the printf call, it's promoted to (signed) int, but the "%2x" format tells printf to treat it as an unsigned int, so it displays ffffffff.
Make sure that your mesg and mesg_check arrays are defined as arrays of unsigned char, not plain char.
UPDATE: Rereading this answer more than a year later, I realize it's not quite correct. Here's a program that works correctly on my system, and will almost certainly work on any reasonable system:
#include <stdio.h>
int main(void) {
    unsigned char arr[] = { 0, 16, 127, 128, 255 };
    for (int i = 0; i < sizeof arr; i++) {
        printf(" %02x", arr[i]);
    }
    putchar('\n');
    return 0;
}
The output is:
00 10 7f 80 ff
An argument of type unsigned char is promoted to (signed) int (assuming that int can hold all values of type unsigned char, i.e., INT_MAX >= UCHAR_MAX, which is the case on practically all systems). So the argument arr[i] is promoted to int, while the " %02x" format requires an argument of type unsigned int.
The C standard strongly implies, but doesn't quite state directly, that arguments of corresponding signed and unsigned types are interchangeable as long as they're within the range of both types -- which is the case here.
To be completely correct, you need to ensure that the argument is actually of type unsigned int:
printf("%02x", (unsigned)arr[i]);
Yes, always print the string in hexadecimal format as:
for (i = 0; i < string length; i++)
    printf("%02X", (unsigned char)str[i]);
You cannot print the whole string in one go, and when printing it character by character the cast to unsigned char is needed whenever the string is stored in a format other than unsigned char; otherwise bytes with the high bit set get sign-extended.
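As a self-contained illustration (my own sketch, with a hypothetical test string):

#include <stdio.h>
#include <string.h>

int main(void) {
    const char *str = "ABC\xff";
    for (size_t i = 0; i < strlen(str); i++)
        printf("%02X ", (unsigned char)str[i]); /* cast prevents sign extension */
    putchar('\n'); /* prints: 41 42 43 FF */
    return 0;
}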
I want to learn how the computer represents the double type in bits, but the & and | bit operators can't be used with a double. And memcpy(&d, &src, 8) also doesn't seem to work. Any suggestions?
Here:
#include <stdio.h>
int main ()
{
    double decker = 1.0;
    unsigned char * desmond = (unsigned char *) & decker;
    int i;
    for (i = 0; i < sizeof (double); i++) {
        printf ("%02X ", desmond[i]);
    }
    printf ("\n");
    return 0;
}
You can try it: http://codepad.org/onHnAcnC
#include <stdio.h>

union {
    double d;
    unsigned char c[sizeof(double)];
} d;

int main(int ac, char **av) {
    int i;
    char s1[80], s2[80];
    d.d = 1.0;
    for(i = 0; i < sizeof d; ++i) {
        sprintf(s1 + i * 3, " %02x", d.c[i]);
        sprintf(s2 + i * 3, " %02x", d.c[sizeof d - 1 - i]);
    }
    printf("%s\n%s\n", s1, s2);
    return 0;
}
$ ./a.out
00 00 00 00 00 00 f0 3f
3f f0 00 00 00 00 00 00
Or you could just read about the IEEE 754 standard, which specifies representation.
http://en.wikipedia.org/wiki/IEEE_754-1985
A particular bit layout by itself is meaningless. Suppose I have the following: 1101
Maybe I say that is unsigned and it represents the value 13.
Maybe it is signed and that high bit signifies that the value is a negative which means it is now -5.
Consider further that I consider the high two bits to be a base and the low two bits to be an exponent, then I get the value 3.
You see, it isn't the storage, it's the interpretation. Read up on how floating-point values are represented and interpreted; it will serve you much better than seeing how the bits are packed.
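Here is a small sketch (my own) that makes those three readings of 1101 concrete:

#include <math.h>
#include <stdio.h>

int main(void) {
    unsigned bits = 0xD;                   /* the four bits 1101 */
    printf("unsigned:       %u\n", bits);  /* 13 */
    /* sign-magnitude: high bit is the sign, low three bits the magnitude */
    int sm = (bits & 0x8) ? -(int)(bits & 0x7) : (int)bits;
    printf("sign-magnitude: %d\n", sm);    /* -5 */
    /* high two bits as base, low two bits as exponent: 3^1 = 3 */
    printf("base^exponent:  %.0f\n", pow((double)(bits >> 2), (double)(bits & 0x3)));
    return 0;
}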
That isn't going to be very enlightening unless you also know a bit about typical IEEE FP representations.
Most likely the way your machine represents doubles is spelled out in the IEEE 754 material linked above.
This works for me
#include <stdio.h>
#include <string.h> /* memmove */
int main(void) {
    unsigned char dontdothis[sizeof (double)];
    double x = 62.42;
    printf("%f\n", x);
    memmove(&dontdothis, &x, sizeof dontdothis);
    /* examine/change the array dontdothis */
    dontdothis[sizeof x - 1] ^= 0x80;
    /* examine/change the array dontdothis */
    memmove(&x, &dontdothis, sizeof dontdothis);
    printf("%f\n", x);
    return 0;
}
The result is
62.420000
-62.420000
The key is to convert the double to a long long (assuming sizeof(double) == sizeof(long long)) without changing the binary representation. This can be achieved by one of the following methods:
cast: double a; long long b = *((long long *)&a); (note that this cast violates the strict-aliasing rule)
union: union { double a; long long b; };
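A third option (my own sketch, under the same size assumption) is memcpy, which sidesteps the strict-aliasing problem of the cast:

#include <stdio.h>
#include <string.h>

int main(void) {
    double d = 1.0;
    unsigned long long bits;
    /* copy the raw representation; assumes sizeof(double) == sizeof bits */
    memcpy(&bits, &d, sizeof bits);
    printf("%016llx\n", bits); /* 1.0 is 3ff0000000000000 in IEEE 754 */
    return 0;
}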
Another option is to use bitfields. Armed with such a structure and knowledge of how a double is supposed to be stored on your computer, you can very easily print out the different parts of the internal representation of the double.
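A hedged sketch of the bitfield approach (bitfield layout is implementation-defined; this assumes a little-endian machine and a compiler, such as GCC or Clang, that allocates these fields starting at the least significant bit):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Matches the IEEE 754 double layout on a typical little-endian system. */
struct ieee754_double {
    uint64_t mantissa : 52;
    uint64_t exponent : 11;
    uint64_t sign     : 1;
};

int main(void) {
    double d = -62.42;
    struct ieee754_double parts;
    memcpy(&parts, &d, sizeof parts); /* reinterpret the bits */
    printf("sign=%u exponent=0x%03x mantissa=0x%013llx\n",
           (unsigned)parts.sign, (unsigned)parts.exponent,
           (unsigned long long)parts.mantissa);
    return 0;
}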