scanf("%d", char*) - char-as-int format string? - c

What is the format string modifier for char-as-number?
I want to read in a number never exceeding 255 (actually much less) into an unsigned char type variable using sscanf.
Using the typical
char source[] = "x32";
char separator;
unsigned char dest;
int len;
len = sscanf(source,"%c%d",&separator,&dest);
// validate and proceed...
I'm getting the expected warning: argument 4 of sscanf is type char*, int* expected.
As I understand the specs, there is no length modifier for char (like %hd for short, or %lld for long long)
is it dangerous? (will overflow just overflow (roll-over) the variable or will it write outside the allocated space?)
is there a prettier way to achieve that than allocating a temporary int variable?
...or would you suggest an entirely different approach altogether?

You can use %hhd under glibc's scanf() (the hh length modifier is standard since C99). MSVC does not appear to support storing an integer directly into a char (see the MSDN scanf Width Specification for more information on the supported conversions).

It is dangerous to use that. Since the unsigned char * is implicitly treated as an int *, sscanf writes a full int (typically 4 bytes), so up to 3 bytes next to the variable on the stack are overwritten and their values corrupted.
The caveat with %hhd is that it requires a C99 library; an older runtime that does not recognize the hh modifier may still write a full int rather than a single byte.
sscanf does not portably support storing numbers into a char, so I suggest you use an int instead. If you want the char roll-over behavior, you can just cast the int to an unsigned char afterward, like:
int dest;
int len;
len = sscanf(source,"%c%d",&separator,&dest);
dest = (unsigned char)dest;

I want to read in a number never
exceeding 255 (actually much less)
into an unsigned char type variable
using sscanf.
In most contexts you save little to nothing by using char for an integer.
It generally depends on the architecture and compiler, but most modern CPUs do not handle data types of a different size from a register very well. (A notable exception is 32-bit int on 64-bit architectures.)
Add to that the penalties for memory access that is not word-aligned (do not ask me why CPUs do that), and the use of char should be limited to cases where you really need a char, or where memory consumption is a concern.

It is not possible to do.
sscanf will never write a single byte when reading an integer.
If you pass a pointer to a single allocated byte as a pointer to int, the write will run out of bounds. It may happen to work because of default alignment and padding, but you should not rely on that.
Create a temporary int. That way you can also range-check the value at run time.

Probably the easiest way would be to read the number into a temporary integer and store it only if it's within the required boundaries (and you would probably need something like unsigned char result = tempInt & 0xFF;).

It's dangerous. scanf will write an int-sized value over the char-sized variable. In your example (most probably) nothing bad will happen unless the compiler reorganizes your stack variables: scanf partially overwrites len while writing the int-sized dest, but it then returns the correct len and overwrites it with the "right" value.
Instead, do the "correct thing" rather than relying on compiler mood:
char source[] = "x32";
char separator;
unsigned char dest;
int temp;
int len;
len = sscanf(source, "%c%d", &separator, &temp);
// validate and proceed...
if (temp >= YOUR_MIN_VALUE && temp <= YOUR_MAX_VALUE) {
    dest = (unsigned char)temp;
} else {
    // validation failed
}

Related

How can I put int or size_t variables into a char array?

Hello everyone, so I want to store a size_t variable holding a number up to 4096, in order to keep track of how much "memory" I have used in the array (yes, it has to be a char array). So for example:
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char mem[4096] = {0};
    size_t size = 4095;
    mem[0] = size;
    // then somehow be able to do size = mem[0] to recall the size later
    return 0;
}
I just want to be able to put a number in the first array element to keep track of how many elements I've used.
I'm implementing heap memory, and I also have to implement malloc() and free(). I have a char array that simulates the heap memory, and I can't use global/static local variables, but I want to keep track of the heap size. My best thought was to store the heap size in the "heap" (the char array) itself.
If you must use a char array for storing the number of bytes used, you have (at least) two choices:
create a union of a uint16_t with the array. This will allow you to use the first two bytes to hold the number of bytes used.
essentially the same as #1 but do the math manually, and maintain the bytes yourself.
I'd recommend #1.
About the most straightforward way to do this that is defined by the C standard is to copy the bytes from the size object into the memory:
memcpy(&mem[index], &size, sizeof size);
Good compilers with optimization enabled will optimize this to a store from size to memory if they can determine &mem[index] is suitably aligned (or an unaligned store instruction is available and a good method). The reverse, to retrieve the size, is memcpy(&size, &mem[index], sizeof size);.
It can be done with unions, but that may be cumbersome in this situation. The union would have to include the entire array.
Old C code would simply cast an address in mem to the desired type, as with * (size_t *) &mem[index] = size;. However, the C standard does not define the behavior when an array defined as an array of char is accessed through a size_t lvalue as the above does. This worked with old compilers, and modern compilers may still support it if an appropriate switch is given to ask the compiler to support it, but it is needless, so new code should avoid this in favor of more portability.
Note that you do not need to use a size_t if the size will only be 4096. For this, you can use uint16_t.
Another option is to calculate the two bytes:
static void RecordSize(unsigned char *memory, uint16_t size)
{
    memory[0] = size >> 8; // Record high bits.
    memory[1] = size;      // Record low bits.
                           // This will automatically “chop off” high bits.
}

static uint16_t RetrieveSize(unsigned char *memory)
{
    return (uint16_t) memory[0] << 8 | memory[1];
}
…
RecordSize(&mem[index], size);
…
size = RetrieveSize(&mem[index]);
That would be used more rarely than the direct copy, but it could be useful if data is being recorded in memory to transmit to another computer system that may use a different byte ordering for its integers. This method explicitly stores the high bits first and the low bits second (or the code could be written to do the opposite, as desired).
Note that this code uses unsigned char. You should generally prefer to use unsigned char for manipulations of bytes as arbitrary data, as it avoids some problems that can occur with signed types. (The C standard leaves it implementation-defined whether the char type is signed or unsigned.)

When to cast size_t

I'm a little confused as how to use size_t when other data types like int, unsigned long int and unsigned long long int are present in a program. I try to illustrate my confusion minimally. Imagine a program where I use
void *calloc(size_t nmemb, size_t size)
to allocate an array (one- or multidimensional). Let the call to calloc() be dependent on nrow and sizeof(unsigned long int). sizeof(unsigned long int) is obviously fine because it returns size_t. But let nrow be such that it needs to have type unsigned long int. What do I do in such a case? Do I cast nrow in the call to calloc() from unsigned long int to size_t?
Another case would be
char *fgets(char *s, int size, FILE *stream)
fgets() expects type int as its second parameter. But what if I pass it an array, let's say save, as its first parameter and use sizeof(save) to pass it the size of the array? Do I cast the result of sizeof() to int? That would be dangerous, since int isn't guaranteed to hold all possible values returned by sizeof().
What should I do in these two cases? Cast, or just ignore possible warnings from tools such as splint?
Here is an example regarding calloc() (I explicitly omit error-checking for clarity!):
long int **arr;
unsigned long int mrow;
unsigned long int ncol;
unsigned long int i;

arr = calloc(mrow, sizeof(long int *));
for (i = 0; i < mrow; i++) {
    arr[i] = calloc(ncol, sizeof(long int));
}
Here is an example for fgets() (Error-handling again omitted for clarity!):
char save[22];
char *ptr_save;
unsigned long int mrow;

if (fgets(save, sizeof(save), stdin) != NULL) {
    save[strcspn(save, "\n")] = '\0';
    mrow = strtoul(save, &ptr_save, 10);
}
I'm a little confused as how to use size_t when other data types like
int, unsigned long int and unsigned long long int are present in a
program.
It is never a good idea to ignore warnings. Warnings are there to direct your attention to areas of your code that may be problematic. It is much better to take a few minutes to understand what the warning is telling you -- and fix it -- than to get bitten by it later when you hit a corner case and stumble off into undefined behavior.
size_t itself is just a data type like any other. While its width can vary between platforms, it is an unsigned integer type large enough to hold the size of the largest possible object (the type was intended to be consistent across platforms; the actual width on each may differ). Your choice of data type is a basic and fundamental part of programming. You choose the type based on the range of values your variable can represent (or should be limited to representing). So if whatever you are dealing with can't be negative, then an unsigned type or size_t is the proper choice. The choice then allows the compiler to help identify areas where your code would violate that.
When you compile with warnings enabled (e.g. -Wall -Wextra) which you should use on every compile, you will be warned about possible conflicts in your data-type use. (i.e. comparison between signed and unsigned values, etc...) These are important!
Virtually all modern x86 and x86_64 computers use the two's complement representation for signed values. In simple terms it means that if the leftmost bit of a signed number is 1, the value is negative. Herein lie the subtle traps you may fall into when mixing, casting, or comparing numbers of varying types. If you cast an unsigned number to a signed type and that number happens to have the most significant bit set, your large number just became a very small (negative) number.
What should I do in these two cases? Cast, or just ignore possible
warnings...
You do what you do each time you are faced with warnings from the compiler: you analyze what is causing the warning, and then you fix it. (Or, if you can't fix it -- e.g. it comes from some library you don't have access to -- you understand the warning well enough that you can make an educated decision to disregard it, knowing you will not hit any corner cases that would lead to undefined behavior.)
In your examples (while neither should produce a warning, they may on some compilers):
arr = calloc (mrow, sizeof(long int *));
What is the range of sizeof(long int *)? Well -- it's the size of a pointer. So what's that? (4 bytes on x86 or 8 bytes on x86_64.) So the range of values is 4-8; yes, that can be properly fixed with a cast to size_t if needed, or better, just:
arr = calloc (mrow, sizeof *arr);
Looking at the next example:
char save[22];
...
fgets(save, sizeof(save), stdin)
Here again, what is the possible range of sizeof save? From 22 to 22. So yes, if a warning is produced complaining about the fact that sizeof returns long unsigned while fgets calls for int, 22 can safely be cast to int.
When to cast size_t
You shouldn't.
Use it where it's appropriate.
(As you already noticed) the libc-library functions tell you where this is the case.
Additionally use it to index arrays.
If in doubt whether the type suits your program's needs, you might go for the useful assertion statement as per Steve Summit's answer, and if it fails, start over with your program's design.
More on this here by Dan Saks: "Why size_t matters" and "Further insights into size_t"
My other answer got waaaaaaay too long, so here's a short one.
Declare your variables of natural and appropriate types. Let the compiler take care of most conversions. If you have something that is or might be a size, go ahead and use size_t. (Similarly, if you have something that's involved in file sizes or offsets, use off_t.)
Try not to mix signed and unsigned types.
If you're getting warnings about possible data loss because of larger types getting downconverted to possibly smaller types, and if you can't change the types to make the warnings go away, first (a) convince yourselves that the values, in practice, will not ever actually overflow the smaller type, then (b) add an explicit downconversion cast to make the warning go away, and for extra credit (c) add an assertion to document and enforce your assumption:
assert(size_i_need <= SIZE_MAX);
char *buf = malloc((size_t)size_i_need);
In general, you're right, you should not ignore the warnings! And in general, if you can, you should shy away from explicit casts, because they can make your code less reliable, or silence warnings which are really trying to tell you something important.
Most of the time, I believe, the compiler should do the right thing for you. For example, malloc() expects a size_t, and the compiler knows from the function prototype that it does, so if you write
int size_i_need = 10;
char *buf = malloc(size_i_need);
the compiler will insert the appropriate conversion from int to size_t, as necessary. (I don't believe I've had warnings here I had to worry about, either.)
If the variables you're using are already unsigned, so much the better!
Similarly, if you were to write
fgets(buf, sizeof(buf), ifp);
the compiler will again insert an appropriate conversion. Here, I guess I see what you're getting at, a 64-bit compiler might emit a warning about the downconversion from long to int. Now that I think about it, I'm not sure why I haven't had that problem, because this is a common idiom.
(You also asked about passing unsigned long to malloc, and on a machine where size_t is smaller than long, I suppose that might get you warnings, too. Is that what you were worried about?)
If you've got a downsize that you can't avoid, and your compiler or some other tool is warning about it, and you want to get rid of the warning safely, you could use a cast and an assertion. That is, if you write
unsigned long long size_i_need = 23;
char *buf = malloc(size_i_need);
this might get a warning on a machine where size_t is 32 bits. So you could silence the warning with a cast (on the assumption that your unsigned long long values will never actually be too big), and then back up your assumption with a call to assert:
unsigned long long size_i_need = 23;
assert(size_i_need <= SIZE_MAX);
char *buf = malloc((size_t)size_i_need);
In my experience, the biggest nuisance is printing these things out. If you write
printf("int size = %d\n", sizeof(int));
or
printf("string length = %d\n", strlen("abc"));
on a 64-bit machine, a modern compiler will typically (and correctly) warn you that "format specifies type 'int' but the argument has type 'unsigned long'", or something to that effect. You can fix this in two ways: cast the value to match the printf format, or change the printf format to match the value:
printf("int size = %d\n", (int)sizeof(int));
printf("string length = %lu\n", strlen("abc"));
In the first case, you're assuming that sizeof's result will fit in an int (which is probably a safe bet). In the second case, you're assuming that size_t is in fact unsigned long, which may be true on a 64-bit compiler but may not be true on some other. So it's actually safer to use an explicit cast in the second case, too:
printf("string length = %lu\n", (unsigned long)strlen("abc"));
The bottom line is that abstract types like size_t don't work so well with printf; this is where we can see that the C++ output style of cout << "string length = " << strlen("abc") << endl has its advantages.
To solve this problem, there are some special printf modifiers that are guaranteed to match size_t and I think off_t and a few other abstract types, although they're not so well known. (I wasn't sure where to look them up, but while I've been composing this answer, some commenters have already reminded me.) So the best way to print one of these things (if you can remember, and unless you're using old compilers) would be
printf("string length = %zu\n", strlen("abc"));
Bottom line:
You obviously don't have to worry about passing plain int or plain unsigned to a function like calloc that expects size_t.
When calling something that might result in a downcast, such as passing a size_t to fgets where size_t is 64 bits but int is 32, or passing an unsigned long long to calloc where size_t is only 32 bits, you might get warnings. If you can't make the passed-in types smaller (which in the general case you won't be able to do), you'll have little choice but to insert a cast to silence the warnings. In this case, to be strictly correct, you might want to add some assertions.
With all of that said, I'm not sure I've actually answered your question, so if you'd like further clarification, please ask.

c memory allocation and arrays of arrays

I am currently writing a terminal based hex editor. and I have a few question regarding memory allocation.
To track changes the user has made, I write them to an array of arrays like so, where [i][0] is the absolute offset of the change from the beginning of the file and [i][1] is the change itself:
unsigned long long writebuffer[10000][2];
but I have 2 problems with this. The first element (writebuffer[i][0]) NEEDS to be the size of unsigned long long, but the second one ([i][1]) can be as small as sizeof(unsigned char). Is it possible to do something like this?
Also, can I dynamically allocate the first dimension of writebuffer, so I wouldn't initialize it like above but more like:
unsigned long long **writebuffer;
and then grow the first dimension with malloc() and realloc(), while the second dimension would still have 2 elements, the second as small as an unsigned char?
Why not use a struct?
typedef struct {
    long long offset;
    int change; /* or unsigned short, or whatever you feel is right */
} t_change;
Be aware that the struct will likely get padded by the compiler to a different size if you choose to use unsigned char for the change element. What it gets padded to depends on your compiler, the compiler settings, and the target architecture.
You may define an array of pointers:
void **writebuffer;
then allocate the array itself and each element as you like, for example:
writebuffer = malloc(n * sizeof *writebuffer);
writebuffer[0] = malloc(sizeof(char));

Distinguish between string and byte array?

I have a lot of functions that expect a string as argument, for which I use char*, but all my functions that expect a byte-array, also use char*.
The problem is that I can easily make the mistake of passing a byte-array in a string-function, causing all kinds of overflows, because the null-terminator cannot be found.
How is this usually dealt with? I can imagine changing all my byte-array functions to take a uint8_t *, and then the compiler will warn about signedness when I pass a string. Or what is the right approach here?
I generally make an array something like the following
typedef struct {
    unsigned char* data;
    unsigned long length;
    unsigned long max_length;
} array_t;
then pass array_t* around
and create array functions that take array_t*
void array_create(array_t* a, unsigned long length) // allocates memory, sets max_length, zero length
void array_add(array_t* a, unsigned char byte) // add a byte
etc
The problem is more general in C than you are thinking. Since char* and char[] are equivalent for function parameters, such a parameter may refer to three different semantic concepts:
a pointer on one char object (this is the "official" definition of pointer types)
a char array
a string
In most cases where it is possible, the modern interfaces in the C standard use void* for an untyped byte array, and you should probably adhere to that convention and use char* only for strings.
char[] by itself is probably rarely used as such; I can't imagine a lot of use cases. If you think of the elements as numbers, you should use the signed or unsigned variant; if you see them just as bit patterns, unsigned char should be your choice.
If you really mean an array as function parameter (char or not) you can mark that fact for the casual reader of your code by clearly indicating it:
void toto(size_t n, char A[const n]);
This is equivalent to
void toto(size_t n, char *const A);
but makes your intention clearer. And in the future there might even be tools that do the bounds checking for you.
Write a common structure to handle both string and bytes.
struct str_or_byte
{
    int type;
    union
    {
        char *buf;
        char *str;
    } pointer;
    int buf_length;
};
If type is not string, access pointer.buf only up to buf_length. Otherwise, directly access pointer.str without checking buf_length and maintain it as a null-terminated string.
Or else maintain strings as byte arrays too, by tracking only the length and not keeping a null terminator for the string:
struct str_or_byte
{
    char *buf;
    int buf_length;
};
And don't use string manipulation functions that don't take a length; that means use strncpy, strncat, strncmp, ... instead of strcpy, strcat, strcmp, ...
C uses convention. Here are the rules I use (fashioned after the standard library):
void foo(char* a_string);
void bar(void* a_byte_array, size_t number_of_bytes_in_the_array);
This is easy to remember. If you are passing a single char* ptr, then it MUST be a null-terminated char array.

Why does memset take an int instead of a char?

Why does memset take an int as the second argument instead of a char, whereas wmemset takes a wchar_t instead of something like long or long long?
memset predates (by quite a bit) the addition of function prototypes to C. Without a prototype, you can't pass a char to a function -- when/if you try, it'll be promoted to int when you pass it, and what the function receives is an int.
It's also worth noting that in C, (but not in C++) a character literal like 'a' does not have type char -- it has type int, so what you pass will usually start out as an int anyway. Essentially the only way for it to start as a char and get promoted is if you pass a char variable.
In theory, memset could probably be modified so it receives a char instead of an int, but there's unlikely to be any benefit, and a pretty decent possibility of breaking some old code or other. With an unknown but potentially fairly high cost, and almost no chance of any real benefit, I'd say the chances of it being changed to receive a char fall right on the line between "slim" and "none".
Edit (responding to the comments): The CHAR_BIT least significant bits of the int are used as the value to write to the target.
Probably the same reason why the functions in <ctype.h> take ints and not chars.
On most platforms, a char is too small to be pushed on the stack by itself, so one usually pushes the type closest to the machine's word size, i.e. int.
As the link in #Gui13's comment points out, doing that also increases performance.
See fred's answer, it's for performance reasons.
On my side, I tried this code:
#include <stdio.h>
#include <string.h>
int main(int argc, const char * argv[])
{
    char c = 0x00;
    printf("Before: c = 0x%02x\n", c);
    memset(&c, 0xABCDEF54, 1);
    printf("After:  c = 0x%02x\n", c);
    return 0;
}
And it gives me this on a 64bits Mac:
Before: c = 0x00
After: c = 0x54
So as you see, only the least significant byte gets written. This is not actually endianness-dependent: memset converts its int argument to unsigned char, so the low-order byte is used on any architecture.
