I'm reading some data from a file. The format is stated to be ASCII text with UNIX-style line endings: a series of 32-bit signed integers in hexadecimal, e.g.
08000000
I'm using fscanf to read in this data.
long data_size;
FILE *fp;
fp=fopen("test01.bin", "r"); // open for reading
if (fp==0) {cerr << "Error openeing file"<<endl; return 1;}
fscanf(fp, "%x", &data_size);
Everything runs ok with my test file but I get the compile-time warning,
warning: format ‘%x’ expects type ‘unsigned int*’, but argument 3 has type ‘long int*’
However, a hex value is unsigned and is being stored into a long. Does this matter? Will the long take the most significant bit as indicating the sign? Or will I end up with problems? Or am I well off the mark in my understanding?
Thanks
You should pass a pointer of the type the warning states, or else you will run into serious trouble if you want to port your code to other architectures (e.g. 64-bit architectures) where long has a different size than int. This is especially tricky if you are using pointers. (I once had a bug originating from exactly this problem.)
Just use int data_size and you will be fine.
The problem is that %x requires an unsigned int * to read the value into, but you have a long *. The <stdint.h> header provides integer types with fixed widths, and <inttypes.h> defines corresponding macros for use with printf, scanf, and their derivatives. I think it would be better to fscanf the data into an int32_t variable using the macro provided by <inttypes.h>:
#include <inttypes.h>
...
int32_t data_size;
fscanf(fp, "%" SCNx32, &data_size);
Related
I am trying to write an integer to a binary file and then read the same thing. However, my program reads a different number than the one that was written. What am I doing wrong?
unsigned short int numToWrite = 2079;
// Write to output
FILE *write_ptr;
write_ptr = fopen("test.bin","wb"); // w for write, b for binary
printf("numToWrite: %d\n", *(&numToWrite));
fwrite(&numToWrite, sizeof(unsigned short int), 1, write_ptr); // write one unsigned short from our buffer
fclose(write_ptr);
// Read the binary file
FILE *read_ptr = fopen(filename, "rb");
if (!read_ptr) {
perror("fopen");
exit(EXIT_FAILURE);
}
unsigned short int* numToRead = malloc(sizeof (unsigned short int));
fread(numToRead, sizeof(unsigned short int), 1, read_ptr);
printf("numToRead: %d\n", *numToRead);
free(numToRead);
fclose(read_ptr);
The output is this:
numToWrite: 2079
numToRead: 26964
man printf
Length modifier
h - A following integer conversion corresponds to a short or unsigned short argument, ...
Conversion specifiers
d,i - The int argument is converted to signed decimal notation.
Format of the format string
...Each conversion specification is introduced by the character %, and ends with a conversion specifier. In between there may be (in this order) zero or more flags, an optional minimum field width, an optional precision and an optional length modifier.
You're using unsigned short int, but that's not what you're telling to printf.
Hence, your expectations are not fulfilled.
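For instance, a minimal sketch with the matching specifier (the h length modifier combined with the u conversion for an unsigned short):

#include <stdio.h>

int main(void)
{
    unsigned short int num = 2079;
    /* %hu tells printf the argument originated as an unsigned short */
    printf("num: %hu\n", num);
    return 0;
}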
A few things that are going on:
'printf' does not have any binary format specifier, so you would have to do it manually.
You need to take a deep dive into data types and their ranges, so I recommend this source: Microsoft Data Type Ranges. Although it says C++, that's irrelevant, since it gives you a good idea of the ranges.
I know this isn't your entire code, but understand that some things I'm seeing are not defined, such as 'filename'.
In case someone mentions atoi() and itoa(): theoretically you could use them, but keep in mind that itoa() is a non-standard function supported by only some compilers (atoi() itself is standard).
Lastly, why are you using 'unsigned short int' and '*(&numToWrite)', and is there any more of the program that you can show us?
Why is the value of the input variable set to zero if I pass an incorrectly ordered type specifier for the id variable?
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define MAX 100
int main()
{
int i=0;
int input;
char *name=(char *)malloc(sizeof(char)*MAX);
unsigned short int id;
printf("\nPlease enter input:\t");
scanf("%d", &input);
getchar();//to take \n
printf("\nEnter name.....\n");
fgets(name,MAX,stdin);
printf("\nEnter id: ");
scanf("%uh",&id);//type specifier should have been %hu
printf("%d",input);//value set to 0?
}
Why is input being overwritten by scanf("%uh", &id)?
You have what I call a memory overrun error. Microsoft (Visual Studio) calls it:
"Stack around the variable ‘id’ was corrupted"
Not all computer systems treat memory overrun errors the same. Visual Studio/Windows catches this error and throws an exception.
OpenVMS would just execute the instruction and then continue on to the next instruction. In either case, if the program continues, its behavior will be undefined.
In plain terms, you are writing more bits of data into a variable than it can contain. The extra bits are written to other memory locations not assigned to the variable. As mentioned by chux, the statements that follow will have undefined behavior.
Solution:
As you already know, and as mentioned by others, change the format specifier from "uh" to "hu".
Things to note about your code:
As stated by Adrian, using "hu" as the format specifier will work.
This specifier expects a pointer to an unsigned short int variable (see the scanf format specifiers wiki page).
When you specify "%uh", scanf parses out the %u as the format specifier and treats the rest ('h') as literal text to match against the input.
The format specifier 'u' expects a pointer to an unsigned int variable (unsigned int *).
The format specifier 'hu' expects a pointer to an unsigned short variable (unsigned short int *).
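A minimal sketch of the corrected read, assuming the surrounding program is unchanged:

#include <stdio.h>

int main(void)
{
    unsigned short int id;
    printf("Enter id: ");
    /* %hu: the h length modifier plus the u conversion matches unsigned short int * */
    if (scanf("%hu", &id) == 1)
        printf("id = %hu\n", id);
    return 0;
}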
Why is the value of the input variable set to zero if I pass an incorrectly ordered type specifier for the id variable?
unsigned short int id;
scanf("%uh",&id);//type specifier should have been %hu
scanf() first uses "%u" and looks for a matching unsigned * argument. As the code passed an unsigned short *, the result is undefined behavior (UB).
The result of 0 is not demo'd with printf("%d",input);. Perhaps OP meant printf("%d",id);?
In any case, code after the UB of scanf() is irrelevant. It might print 0 today or crash the code tomorrow - it is UB.
EDIT (After re-reading question/comments):
Mentat has nailed the problem! I'll leave some fragments of my original answer, with some additions ...
(1) The warning about using %uh should be accepted at face value: to input a short integer (signed or unsigned), you need the h modifier before the type specifier (u or d) - that's just the way scanf format specifiers work.
(2) Your code happens to work on MSVC, so I missed the point. With clang-cl I found your error: as the h size modifier is being ignored, the value read in by '%uh' is a 32-bit unsigned int, which is written to memory that you've declared as a 16-bit unsigned short. As it happens, that variable (id) is next to input in memory, so part of input is being overwritten by the high 16 bits of the 32-bit value you are writing to id.
(3) Try entering a value of 65537 for your first input and then the value will still be modified (probably to 65536), but not cleared to zero.
(4) Recommend: Accept and upvote the answer from Mentat. Feel free to downvote my answer!
Yours, humbly!
I wrote some code to print the sizes of different data types in C.
#include <stdio.h>
int main()
{
printf("%d", sizeof(int));//size of integer
printf("%d", sizeof(float));
printf("%d", sizeof(double));
printf("%d", sizeof(char));
}
This does not work, but if I replace %d with %ld, it works. I do not understand why I have to use long int to print such a small number.
Both of those are wrong; you must use %zu to print values of type size_t, which is what sizeof returns.
This is because different types have different sizes, and the conversion specifier must match the type of the argument.
Mismatching them as you do is undefined behavior, so anything could happen.
This is because the sizes mismatch. You can fix the problem either by using %zu, or by using %u and casting the value to unsigned. As it stands, your code has undefined behaviour.
printf("%u", (unsigned)sizeof(int));//size of integer
printf("%u", (unsigned)sizeof(float));
printf("%u", (unsigned)sizeof(double));
printf("%u", (unsigned)sizeof(char));
Since stdout is line-buffered, don't forget to print \n at the end to get anything to the screen.
sizeof has the return type size_t. From the Standard,
6.5.3.4 The sizeof and _Alignof operators
5 The value of the result of both operators is
implementation-defined, and its type (an unsigned integer type) is
size_t, defined in <stddef.h> (and other headers).
size_t is implementation-defined. On my Linux system, size_t is defined as __SIZE_TYPE__.
In your case, it happens that size_t is implemented as a long, which is longer than int.
I did not understand why I have to take long int to print a small range number.
Because size_t may represent values much larger than what an int can support; on my particular implementation, the max size value is 18446744073709551615, which definitely won't fit in an int.
Remember that the operand of sizeof may be a parenthesized type name or an object expression:
static long double really_big_array[100000000];
...
printf( "sizeof really_big_array = %zu\n", sizeof really_big_array );
size_t must be able to represent the size of the largest object the implementation allows.
You say it does not work, but you do not say what it does. The most probable reason for this unexpected behavior is:
the conversion specifier %d expects an int value. sizeof(int) has type size_t which is unsigned and, on many platforms, larger than int, causing undefined behavior.
The conversion specifier and the type of the passed argument must be consistent because different types are passed in different ways to a vararg function like printf(). If you pass a size_t and printf expects an int, it will retrieve the value from the wrong place and produce inconsistent output if at all.
You say it works if you put %ld. This conversion may work because size_t happens to have the same size as long on your platform, but it is only a coincidence; on 64-bit Windows, size_t is larger than long.
To correct the problem, you can either:
use the standard conversion specifier %zu or
cast the value to int.
The first is the correct fix but some C libraries do not support %zu, most notably Microsoft C runtime libraries prior to VS2013. Hence I recommend the second as more portable and sufficient for types and objects that obviously have a small size:
#include <stdio.h>
int main(void) {
printf("%d\n", (int)sizeof(int));
printf("%d\n", (int)sizeof(float));
printf("%d\n", (int)sizeof(double));
printf("%d\n", (int)sizeof(char));
return 0;
}
Also note that you do not output a newline: depending on the environment, the output may not be visible to the user until a newline is output or fflush(stdout) is called. It is even possible that the output is not flushed to the console upon program exit, causing your observed behavior, but such environments are uncommon. It is recommended to output newlines at the end of meaningful pieces of output. In your case, not doing so causes all the sizes to be clumped together as a sequence of digits like 4481, which may or may not be what you expect.
I have this line in my code:
`printf("Rcvd pkt from %s:%d at <%ld.%06ld>\n", inet_ntoa(servAddr.sin_addr), ntohs(servAddr.sin_port), timeVal.tv_sec, timeVal.tv_usec);`
This is the warning I get from gcc when compiling it:
cc1: warnings being treated as errors
client12.c: In function ‘main’:
client12.c:131: warning: format ‘%06ld’ expects type ‘long int’, but argument 5 has type ‘__darwin_suseconds_t’
What am I doing wrong?
PS - I have included time.h and sys/time.h.
Cast the numbers to the correct types:
printf("Rcvd pkt from %s:%d at <%ld.%06ld>\n", inet_ntoa(servAddr.sin_addr), ntohs(servAddr.sin_port), (long int)(timeVal.tv_sec), (long int)(timeVal.tv_usec));
The type of the members of struct timeval will vary from system to system. As with many other C data types, the safe and portable thing to do is to cast the values when printing them:
printf("Rcvd pkt from %s:%d at <%ld.%06ld>\n",
inet_ntoa(servAddr.sin_addr), ntohs(servAddr.sin_port),
(long) timeVal.tv_sec, (long) timeVal.tv_usec);
This will work correctly for any data type smaller than or equal in size to a long. And this idiom is so common that people will think twice about making any of these common data types longer than a long, although be careful with data types that refer to file sizes (like off_t); those could be long long in some cases.
To be maximally safe, you'd cast to long long and use a %lld format, but that swaps one portability problem for another, since not all printf implementations support %lld yet. And I've not seen an implementation where those time values require that treatment.
See sys/types.h for an explanation of suseconds_t. Apparently, it is
a signed integral type capable of storing values at least in the range [-1, 1,000,000].
This means it might be defined as an int on your system. Try removing the long specifier l from the format string and just printing it as a regular decimal.
Edit
As per rra's answer, this will not be portable. It will only work on systems that define suseconds_t in the same way. We know from the spec that the type is a signed integral type of at least 32 bits. The most portable way is to cast it to the biggest signed integral type that you can get away with.
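For example, a sketch of that approach, casting to long long (this assumes your printf supports %lld, per the caveat above):

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval timeVal;
    gettimeofday(&timeVal, NULL);
    /* Cast both members to the widest signed type we expect to need;
     * the format then matches regardless of how the system defines
     * time_t and suseconds_t. */
    printf("<%lld.%06lld>\n",
           (long long)timeVal.tv_sec, (long long)timeVal.tv_usec);
    return 0;
}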
Greetings, and again today when I was experimenting with the C language under the C99 standard, I came across a problem which I cannot comprehend and need an expert's help with.
The Code:
#include <stdio.h>
int main(void)
{
int Fnum = 256; /* The First number to be printed out */
printf("The number %d in long long specifier is %lld\n" , Fnum , Fnum);
return 0;
}
The Question:
1.) This code prompted a warning message when I compiled it.
2.) But the strange thing is, when I change the specifier %lld to %hd or %ld, the warning message is not shown and the value printed to the console is the correct digit 256. Everything also seems normal if I try %u, %hu and %lu. In short, the warning message and the wrong printed digit only happen when I use a variation of the long long specifier.
3.) Why is this happening? I thought the memory size for long long is large enough to hold the value 256, so why can it not be used to print out the appropriate value?
The Warning Message (for the above source code):
C:\Users\Sam\Documents\Pelles C Projects\Test1\Test.c(7): warning #2234: Argument 3 to 'printf' does not match the format string; expected 'long long int' but found 'int'.
Thanks for spending time reading my question. God bless.
You're passing the Fnum variable to printf, which is typed int, but it's expecting long long. This has very little to do with whether a long long can hold 256, just that the variable you chose is typed int.
If you just want to print 256, you can use a constant that's typed long long as follows:
printf("The number %d in long long specifier is %lld\n", 256, 256LL);
or cast:
printf("The number %d in long long specifier is %lld\n" , Fnum , (long long int)Fnum);
There are three things going on here.
printf takes a variable number of arguments. That means the compiler doesn't know what type the arguments (beyond the format string) are supposed to be. So it can't convert them to an appropriate type.
For historical reasons, however, integer types smaller than int are "promoted" to int when passed in a variable argument list.
You appear to be using Windows. On Windows, int and long are the same size, even when pointers are 64 bits wide (this is a willful violation of C89 on Microsoft's part - they actually forced the standard to be changed in C99 to make it "okay").
The upshot of all this is: The compiler is not allowed to convert your int to a long long just because you used %lld in the argument list. (It is allowed to warn you that you forgot the cast, because warnings are outside standard behavior.) With %lld, therefore, your program doesn't work. But if you use any other size specifier, printf winds up looking for an argument the same size as int and it works.
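To illustrate the promotion rules, a minimal sketch (the variable names are just for demonstration):

#include <stdio.h>

int main(void)
{
    short s = 256;
    int   i = 256;
    /* s is promoted to int in the variable argument list, so %hd and %d
     * both end up reading an int-sized argument and work */
    printf("%hd %d\n", s, i);
    /* an int is NOT promoted to long long, so %lld needs an explicit cast */
    printf("%lld\n", (long long)i);
    return 0;
}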
When dealing with a variadic function, the caller and callee need some way of agreeing the types of the variable arguments. In the case of printf, this is done via the format string. GCC is clever enough to read the format string itself and work out whether printf will interpret the arguments in the same way as they have been actually provided.
You can get away with slightly different types of arguments in some cases. For example, if you pass a short then it gets implicitly converted to an int. And when sizeof(int) == sizeof(long int) then there is also no distinction. But sizeof(int) != sizeof(long long int) so the parameter fails to match the format string in that case.
This is due to the way varargs work in C. Unlike a normal function, printf() can take any number of arguments. It is up to the programmer to tell printf() what to expect by providing a correct format string.
Internally, printf() uses the format specifiers to access the raw memory that corresponds to the input arguments. If you specify %lld, it will try to access a 64-bit chunk of memory (on Windows) and interpret what it finds as a long long int. However, you've only provided a 32-bit argument, so the result would be undefined (it will combine your 32-bit int with whatever random garbage happens to appear next on the stack).
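To see the mechanism, consider a toy variadic function (toy_print is a hypothetical illustration, not a real library function). Just as with printf, the format string is the only thing telling the callee which type to pull from the argument list:

#include <stdarg.h>
#include <stdio.h>

/* 'd' pulls an int from the argument list, 'L' pulls a long long;
 * nothing checks that the caller actually passed those types */
static void toy_print(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    for (; *fmt; fmt++) {
        if (*fmt == 'd')
            printf("%d\n", va_arg(ap, int));
        else if (*fmt == 'L')
            printf("%lld\n", va_arg(ap, long long));
    }
    va_end(ap);
}

int main(void)
{
    toy_print("dL", 256, 256LL); /* argument types must match the format */
    return 0;
}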