printing uid of a file on linux system - c

I am learning C programming. I am trying to write my own program similar to the ls command, but with fewer options. It takes a directory or file name as an argument and then, if it is a directory, reads all the directory entries via the dirent struct.
After that I use stat() to get all the information about each file. Here is my problem: when I use write() to print these values it's fine, but when I print them with printf() I get the warning: format ‘%ld’ expects type ‘long int’, but argument 2 has type ‘__uid_t’. I don't know what I should use in place of %ld, and likewise for the other special data types.

There is no format specifier for __uid_t, because this type is system-specific and not part of the C standard, so printf is not aware of it.
The usual workaround is to promote the value to a type that can hold the entire range of UID values on all the systems you are targeting:
printf("%lu\n", (unsigned long int)uid); /* some systems support 32-bit UIDs */

You could cast it to a long int:
printf("foobar: %ld\n", (long int)your_uid_t_variable);

Related

Please, why is this code printing with an extra "n" when I run it?

I am trying to get the pid using this code, but when I run the compiled code I get the warning "warning: format specifies type 'unsigned long' but the argument has type 'pid_t' (aka 'int') [-Wformat]".
When I change the format specifier to just "%lu", it prints without the extra character.
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main()
{
    pid_t pid = getpid();
    printf("pid: %lun", pid);
}
I expected a pid of "60316". I get "60316n".
The n prints out for the same reason the p prints: n is not part of a specifier.
      vvv --- specifier for unsigned long
"pid: %lun"
 ^^^^^   ^ --- non-specifier characters
Aside:
The pid_t data type is a signed integer type which is capable of representing a process ID
See What is the correct printf specifier for printing pid_t?
The format string you pass to printf is a mixture of ordinary characters and format specifiers. The ordinary characters print as themselves, and the format specifiers cause one of the extra arguments to be converted and printed. But the type of each extra argument must match, more or less exactly, the type expected by its format specifier.
We can tell from the warning message that pid is an int. So the correct format specifier is %d.
The specifier %lu would have been correct if pid had type unsigned long int.
Format specifiers are sort of their own little miniature programming language. The basic format specifier for type unsigned int is %u, and you can modify it to print unsigned long int instead with the l (letter ell) modifier: %lu.
But when you write %lun, the n wasn't actually part of any format specifier, so it printed as itself.
(Also, you got lucky: despite the mismatch between pid's type and the format specifier %lu, you got a sensible value printed anyway.)
Usually, getting the arguments and the format specifiers to line up is easy: int for %d, unsigned int for %u or %x, long int for %ld, float or double for %f, etc.

But what's the right specifier to use for pid, which has type pid_t? Today, on your computer, it looks like pid_t is an int, so %d would be correct. But what if, on some other computer, it has a different type? How can you write one piece of code that will work correctly on any computer? One way is to pick a type that's probably right, use the specifier for that type, and then use an explicit cast to convert your value to the type you picked. Like this:
printf("pid: %d\n", (int)pid);
Or like this:
printf("pid: %lu\n", (unsigned long int)pid);
If the cast isn't necessary (if the format specifier you picked is already correct for the type of pid on your computer), the cast won't hurt.
Finally, congratulations on using a compiler that actually gave you the warning message:
warning: format specifies type 'unsigned long' but the argument has type 'pid_t' (aka 'int')
Too many beginning programmers are stuck using older compilers that don't print warnings like these, and that obviously makes it much more difficult to track down problems caused by mismatched format specifiers! (Your compiler was particularly helpful in that it gave you both the apparent type of pid, namely pid_t, and also the actual underlying type, int.)
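Putting the pieces together, a corrected version of the program from the question might look like this (using the %ld/long pairing; %d with a cast to int would work just as well):

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = getpid();

    /* Cast to a type named explicitly in the format, and note the
       backslash: "\n" is a newline, a bare "n" is just the letter n. */
    printf("pid: %ld\n", (long)pid);
    return 0;
}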

output of negative integer to %u format specifier

Consider the following code
char c = 125;
c += 10;
printf("%d", c);    // outputs -121, which is understood
printf("%u", c);    // outputs 4294967175
printf("%u", -121); // outputs 4294967175
%d accepts negative numbers, so the output of -121 in the first case is understood.
The output in cases 2 and 3 is 4294967175. I don't understand why.
Do the math:

2^32 - 121 = 4294967175

printf interprets the data you provide according to the % specifiers:

%d: signed integer, value from -2^31 to 2^31 - 1
%u: unsigned integer, value from 0 to 2^32 - 1
In binary, both integer values (-121 and 4294967175) are (of course) identical:
`0xFFFFFF87`
See Two's complement
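You can see the reinterpretation directly; with explicit casts the specifiers match their arguments (this sketch assumes a 32-bit int):

#include <stdio.h>

int main(void)
{
    int n = -121;

    /* The same 32-bit pattern, read three different ways: */
    printf("%d\n", n);           /* -121       */
    printf("%u\n", (unsigned)n); /* 4294967175 */
    printf("%x\n", (unsigned)n); /* ffffff87   */
    return 0;
}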
printf is a function with variadic arguments. In such cases the "default argument promotions" are applied to the arguments before the function is called. In your case, c is first converted from char to int and then passed to printf. The conversion does not depend on the corresponding % specifier in the format. The value of this int parameter can then be interpreted as 4294967175 or -121 depending on signedness. The relevant parts of the C standard are:
6.5.2.2 Function calls

6. ... If the expression that denotes the called function has a type that does not include a prototype, the integer promotions are performed on each argument, and arguments that have type float are promoted to double. These are called the default argument promotions.

7. If the expression that denotes the called function has a type that does include a prototype, the arguments are implicitly converted, as if by assignment, to the types of the corresponding parameters, taking the type of each parameter to be the unqualified version of its declared type. The ellipsis notation in a function prototype declarator causes argument type conversion to stop after the last declared parameter. The default argument promotions are performed on trailing arguments.
If char is signed in your compiler (the most likely case) and 8 bits wide (extremely likely), then c += 10 computes c + 10 as an int (135, thanks to integer promotion) and converts the result back to char. Since 135 does not fit in a signed 8-bit char, the result of that conversion is implementation-defined: on the usual two's-complement systems it wraps to -121, but you can't rely on that portably.
If char is unsigned (not very likely on most PC platforms), then see the other answers.
printf uses something called variadic arguments. If you do a bit of research about them, you'll find that a function using them does not know the types of the inputs you pass to it. Therefore there must be a way to tell the function how to interpret its input, and you do that with the format specifiers.
In your particular case, c is an 8-bit signed integer, so if you store the value -121 in it, it will hold the bits 10000111. Then, by the integer promotion mechanism, it is converted to an int: 11111111111111111111111110000111.
With "%d" you tell printf to interpret 11111111111111111111111110000111 as a signed integer, so you get -121 as output. With "%u", however, you're telling printf that 11111111111111111111111110000111 is an unsigned integer, so it outputs 4294967175.
EDIT: As stated in the comments, strictly speaking the behaviour here is undefined in C, and the encoding of negative numbers is itself implementation-defined (sign and magnitude, one's complement, two's complement, ...). So you may, in principle, get a different output than 4294967175. But the main concepts I explained, the same string of bits admitting different interpretations and the loss of type information through variadic arguments, still hold.
Try converting the number to base 10, first as a pure binary number, then knowing that it's stored as a 32-bit two's-complement value: you get two different results. But if I don't tell you which interpretation to use, that binary string could represent anything (a 4-character ASCII string, a number, a small 8-bit 2x2 image, your safe combination, ...).
EDIT: you can think of the format string as a sort of "file extension" for the string of bits. When you create a file, you usually give it an extension, which is actually part of the filename, to record the format/encoding in which the file has been stored. Suppose your favorite song is saved as song.ogg on your PC. If you rename the file to song.txt, song.odt, song.pdf, song, or song.awkwardextension, that does not change the content of the file. But a program that opens it will read the bytes and may fail when it tries to interpret them: open song.ogg with Emacs or Vim or any text editor and you get what looks like garbage, GIMP cannot read it at all, and VLC plays your favorite song. The extension is just a reminder of how to interpret that sequence of bits. Since printf has no way of knowing the right interpretation, you must provide one, and if you tell printf that a signed integer is actually unsigned, well, it's like opening song.ogg with Emacs...

how to print struct timeVal

I have this line in my code:
printf("Rcvd pkt from %s:%d at <%ld.%06ld>\n", inet_ntoa(servAddr.sin_addr), ntohs(servAddr.sin_port), timeVal.tv_sec, timeVal.tv_usec);
This is the warning I get from gcc while compiling it:
cc1: warnings being treated as errors
client12.c: In function ‘main’:
client12.c:131: warning: format ‘%06ld’ expects type ‘long int’, but argument 5 has type ‘__darwin_suseconds_t’
What am I doing wrong?
PS: I have included time.h and sys/time.h.
Cast the numbers to the correct types:
printf("Rcvd pkt from %s:%d at <%ld.%06ld>\n",
       inet_ntoa(servAddr.sin_addr), ntohs(servAddr.sin_port),
       (long int)timeVal.tv_sec, (long int)timeVal.tv_usec);
The type of the members of struct timeval will vary from system to system. As with many other C data types, the safe and portable thing to do is to cast the values when printing them:
printf("Rcvd pkt from %s:%d at <%ld.%06ld>\n",
inet_ntoa(servAddr.sin_addr), ntohs(servAddr.sin_port),
(long) timeVal.tv_sec, (long) timeVal.tv_usec);
This will work correctly for any data type smaller than or equal in size to a long. And this idiom is so common that people will think twice about making any of these common data types longer than a long, although be careful with data types that refer to file sizes (like off_t); those could be long long in some cases.
To be maximally safe, you'd cast to long long and use a %lld format, but that swaps one portability problem for another, since not all printf implementations support %lld yet. And I've not seen an implementation where those time values require that treatment.
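A self-contained sketch of the cast-to-long idiom (gettimeofday fills in a struct timeval; the exact field types vary by platform, which is the whole point of the cast):

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval tv;

    if (gettimeofday(&tv, NULL) == -1) {
        perror("gettimeofday");
        return 1;
    }
    /* Cast both members: their exact types differ across systems. */
    printf("now: %ld.%06ld\n", (long)tv.tv_sec, (long)tv.tv_usec);
    return 0;
}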
See sys/types.h for an explanation of suseconds_t. Apparently, it is
a signed integral type capable of storing values at least in the range [-1, 1,000,000].
This means it might be defined as an int on your system. Try removing the l length modifier from the format string and printing it as a regular decimal (%d).
Edit
As per rra's answer, this will not be portable; it will only work on systems that define suseconds_t the same way. We know from the spec that the type is a signed integral type of at least 32 bits, so the most portable approach is to cast it to the widest signed integer type you can get away with.
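For example, assuming your printf supports %lld:
printf("%lld\n", (long long)timeVal.tv_usec);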

reading hex data from file fscanf format compile time warning

I'm reading some data from a file. The format is stated to be:

ASCII text with UNIX-style line-endings, a series of 32-bit signed integers in hexadecimal.

e.g.
08000000
I'm using fscanf to read in this data.
long data_size;
FILE *fp;

fp = fopen("test01.bin", "r"); // open for reading
if (fp == 0) { cerr << "Error opening file" << endl; return 1; }
fscanf(fp, "%x", &data_size);
Everything runs OK with my test file, but I get the compile-time warning:
warning: format ‘%x’ expects type ‘unsigned int*’, but argument 3 has type ‘long int*’
However, a hex value is unsigned and is being read into a long. Does this matter, since long will take the most significant bit as indicating the sign? Or will I end up with problems? Or am I well off the mark in my understanding?
Thanks
You should pass a pointer of exactly the type the warning states, or else you will run into serious trouble if you ever port your code to other architectures (e.g. 64-bit architectures) where long has a different size than int. This is especially tricky when pointers are involved. (I once had a bug originating from exactly this problem.)
Just use unsigned int data_size and you will be fine.
The problem is that %x requires an unsigned int * to read the value into, but you have a long *. The <stdint.h> header provides integer types with fixed widths, and <inttypes.h> defines corresponding format macros for use with printf, scanf, and their derivatives. Since %x is an unsigned conversion, read the data into a uint32_t using the SCNx32 macro provided by <inttypes.h>, then convert:
#include <inttypes.h>
...
uint32_t raw;
fscanf(fp, "%" SCNx32, &raw);
int32_t data_size = (int32_t)raw; /* the file stores 32-bit signed values */

C int datatype and its variations

Greetings. Today, while experimenting with C under the C99 standard, I came across a problem which I cannot comprehend and need an expert's help with.
The Code:
#include <stdio.h>

int main(void)
{
    int Fnum = 256; /* The first number to be printed out */
    printf("The number %d in long long specifier is %lld\n", Fnum, Fnum);
    return 0;
}
The Question:
1.) This code prompts a warning message when I compile it.
2.) But the strange thing is, when I change the specifier %lld to %hd or %ld, the warning message is not shown and the value printed on the console is the correct digit, 256. Everything also seems normal if I try %u, %hu and %lu. In short, the warning message and the wrong printed digit only happen when I use a variation of the long long specifier.
3.) Why is this happening? I thought the memory size of a long long is large enough to hold the value 256, so why can't it be used to print the appropriate value?
The Warning Message (for the above source code):
C:\Users\Sam\Documents\Pelles C Projects\Test1\Test.c(7): warning #2234: Argument 3 to 'printf' does not match the format string; expected 'long long int' but found 'int'.
Thanks for spending time reading my question. God bless.
You're passing the Fnum variable to printf, which is typed int, but the format expects long long. This has very little to do with whether a long long can hold 256; it's just that the variable you chose is typed int.
If you just want to print 256, you can use a constant that's typed long long as follows:
printf("The number %d in long long specifier is %lld\n", 256, 256LL);
or cast:
printf("The number %d in long long specifier is %lld\n", Fnum, (long long int)Fnum);
There are three things going on here.
printf takes a variable number of arguments. That means the compiler doesn't know what type the arguments (beyond the format string) are supposed to be. So it can't convert them to an appropriate type.
For historical reasons, however, integer types smaller than int are "promoted" to int when passed in a variable argument list.
You appear to be using Windows. On Windows, int and long are the same size (32 bits), even when pointers are 64 bits wide (the LLP64 model), while long long is 64 bits.
The upshot of all this is: The compiler is not allowed to convert your int to a long long just because you used %lld in the argument list. (It is allowed to warn you that you forgot the cast, because warnings are outside standard behavior.) With %lld, therefore, your program doesn't work. But if you use any other size specifier, printf winds up looking for an argument the same size as int and it works.
When dealing with a variadic function, the caller and callee need some way of agreeing the types of the variable arguments. In the case of printf, this is done via the format string. GCC is clever enough to read the format string itself and work out whether printf will interpret the arguments in the same way as they have been actually provided.
You can get away with slightly different argument types in some cases. For example, if you pass a short, it gets implicitly converted to an int. And when sizeof(int) == sizeof(long int), there is no practical distinction. But sizeof(int) != sizeof(long long int), so the argument fails to match the format string in that case.
This is due to the way varargs work in C. Unlike a normal function, printf() can take any number of arguments. It is up to the programmer to tell printf() what to expect by providing a correct format string.
Internally, printf() uses the format specifiers to access the raw memory that corresponds to the input arguments. If you specify %lld, it will try to access a 64-bit chunk of memory (on Windows) and interpret what it finds as a long long int. However, you've only provided a 32-bit argument, so the result would be undefined (it will combine your 32-bit int with whatever random garbage happens to appear next on the stack).
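To make that mechanism concrete, here is a minimal variadic function (print_longs is a made-up name for illustration): the callee recovers each argument with va_arg, and nothing checks that the caller's argument types match what the callee reads.

#include <stdarg.h>
#include <stdio.h>

/* A tiny printf-like function: the 'count' parameter is the only thing
   telling the callee how many long long arguments follow. */
static void print_longs(int count, ...)
{
    va_list ap;
    va_start(ap, count);
    for (int i = 0; i < count; i++) {
        /* If the caller had passed a plain int here, this read would be
           undefined behavior: exactly the %lld/int mismatch above. */
        long long v = va_arg(ap, long long);
        printf("%lld\n", v);
    }
    va_end(ap);
}

int main(void)
{
    print_longs(2, 1LL, 256LL); /* correct: arguments match the va_arg type */
    return 0;
}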
