long integer problem - c

I'm a beginner at C, using the Turbo C++ compiler (16-bit).
In the software I'm writing, the largest result I can get from an int is around 32000, so if I want a number larger than that I use long int.
If I execute the following program:
#include <stdio.h>
void main()
{
    long int x;
    x = 40000;
    printf("%d", x);
}
Then I get an error that the constant value is long in function main().
How can I get an answer of more than 32000 and get rid of this error?
Also: I have since changed %d to %ld and used 40000L, but if I use an unsigned int instead, do I still need the L suffix on 40000?

Use %ld in printf for the long int; %d is for int, which is only 16 bits in your compiler. As for the error message, write the constant as x = 40000L.
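Putting those two fixes together, a minimal corrected version of the program might look like this (a sketch, assuming Turbo C's 16-bit int and 32-bit long):

#include <stdio.h>
int main(void)
{
    long int x;
    x = 40000L;        /* L suffix: make the constant explicitly long, which silences the complaint */
    printf("%ld", x);  /* %ld: tell printf the argument is a long */
    return 0;
}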

Alternatively, change long to unsigned int; 40000 fits in a 16-bit unsigned int, whose range is 0 to 65535.
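A sketch of that variant (again assuming a 16-bit int):

#include <stdio.h>
int main(void)
{
    unsigned int x = 40000;  /* 40000 fits in the 0..65535 range of a 16-bit unsigned int */
    printf("%u", x);         /* %u for unsigned int */
    return 0;
}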

Assuming you're on Windows, the best solution is to target a 32- or 64-bit platform; 16-bit programs won't even run on 64-bit versions of Windows, so you should really upgrade.
Microsoft has a free version of Visual Studio: Visual C++ Express Edition. This is also an excellent option because it comes with a full IDE.
GCC is also available for Windows in the form of MinGW. Unfortunately, the MinGW project itself does not release ready-to-use compiler packages, but others do, such as equation.com or TDM.

Perhaps brushing up on variadic formatting might help :) By the time you (or the printf() subsystem) actually get around to expanding the variadic arguments, it is assumed that you know what type they are.
This goes not only for printf(), but for any other function that uses va_*() or the v*printf() family. Don't lose track of your types.
Also, keep track of signedness to avoid unexpected results.
In other words, by the time you call printf(), or anything else accepting an ellipsis, be sure of what you are passing. This isn't limited to printf(); in fact, venturing beyond it will often not even produce compiler warnings.
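To illustrate why the types matter (the function below is purely hypothetical, not from any answer above): a variadic function, like printf(), has no way to check what the caller actually passed, so a mismatched argument type is undefined behavior.

#include <stdarg.h>
#include <stdio.h>

/* Hypothetical helper: sums 'count' arguments, all of which must be long.
 * va_arg() simply trusts the caller about the type, exactly as printf() does. */
long sum_longs(int count, ...)
{
    va_list ap;
    long total = 0;
    int i;
    va_start(ap, count);
    for (i = 0; i < count; i++)
        total += va_arg(ap, long);  /* wrong if the caller passed a plain int */
    va_end(ap);
    return total;
}

int main(void)
{
    printf("%ld\n", sum_longs(3, 10L, 20L, 40000L));  /* OK: every argument is long */
    /* sum_longs(1, 30000) would pass an int, and reading it with va_arg(ap, long)
     * on a 16-bit compiler is undefined behavior, just like printf("%ld", 30000). */
    return 0;
}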

Related

Dealing with long type from a 32-bit codebase on a 64-bit system (Linux)

I have a program written originally WAY back in 1995, maintained to 2012.
It's obviously written for a 32-bit architecture. I've managed to get the damn thing running, but I'm stumped on how it saves data...
My issue is with sizeof(long) under 64-bit (a common problem, I know). I've tried doing a sed across the code and replacing long with int32_t, but then I get errors where it tries to define a variable like:
unsigned long int count;
I've also tried -m32 on the gcc options, but then it fails to link due to 64-bit libraries being required.
My main issue is where it tries to save player data (it's a MUD), at the following code lines:
if ((sizeof(char) != 1) || (int_size != long_size))
{
    logit(LOG_DEBUG,
          "sizeof(char) must be 1 and int_size must == long_size for player saves!\n");
    return 0;
}
Commenting this out allows the file to save, but because it's reading bytes from a buffer as it reloads the characters, the saved file is no longer readable by the load function.
Can anyone offer advice, maybe using a typedef?
I'm trying to avoid having to completely rewrite the save/load routines - this is my very last resort!
Thanks in advance for answers!
Instead of using types like int and long you can use int32_t and int64_t, which are typedefs for types that have the correct size in your environment. They exist in signed and unsigned variants, e.g. int32_t and uint32_t.
In order to use them you need to include stdint.h. If you include inttypes.h instead, you will also get macros that are useful when printing with printf, e.g. PRIu64.
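A small sketch of what that looks like in practice (the variable names here are just illustrative):

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int32_t  count  = -42;             /* exactly 32 bits, signed, on every platform */
    uint64_t offset = 40000000000ULL;  /* exactly 64 bits, unsigned */

    printf("count  = %" PRId32 "\n", count);
    printf("offset = %" PRIu64 "\n", offset);
    return 0;
}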

C: Warning when casting int to int* on Windows 64-bit machine when working on 32-bit program

I'm working on a legacy 32-bit program where there are a lot of casts like DWORD* a = (DWORD*)b, where b is a native int, and I get lots of these warnings:
Cast to 'DWORD *' (aka 'unsigned int *') from smaller integer type 'int' [clang: -Wint-to-pointer-cast]
Since the sizes are equal during compilation it's fine, but I don't see how Clang would know that. What can I do to satisfy this warning other than disabling it entirely?
EDIT: The premise of the question is bad due to my misunderstanding of Clang, a compiler, and clangd, the language server which invokes Clang. The language server didn't know I was targeting x86.
So the problem is (DWORD*)b but b is of type int. This means the code needs to be redesigned, because somebody is stuffing pointers into int. Microsoft made a special type for a pointer-sized integer: DWORD_PTR. Yeah sure there's one in stdint.h and you can use that one if you want, but if you're already using DWORD you might as well use DWORD_PTR. The problem didn't happen on this line. The problem happened on the line where b was assigned the value from a pointer.
Change the type of b to intptr_t, uintptr_t, or DWORD_PTR and back-propagate the change until the errors go away. If you come to a place where you can't, that part of the code needs to be redesigned.
Microsoft's own compiler now yields warnings for this stuff even in 32-bit compilation when the type isn't one of the pointer-in-integer types. Best to heed the warnings.
Stuffing pointers in integers is not a recommended practice anymore, but the Win32 API does it all over the place, so when in Rome ...
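A rough sketch of the change being suggested (the variable names are invented for illustration; DWORD and DWORD_PTR come from the Windows headers):

#include <stdint.h>
#include <windows.h>

void example(DWORD *p)
{
    /* Before: int is narrower than a pointer on 64-bit targets,
     * which is what the warning is about.
     *     int b = (int)p;
     *     DWORD *a = (DWORD *)b;
     */

    /* After: an integer type guaranteed to be wide enough to hold a pointer. */
    uintptr_t b = (uintptr_t)p;   /* or DWORD_PTR in Win32-flavoured code */
    DWORD *a = (DWORD *)b;        /* round-trips without loss */
    (void)a;
}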

How to detect if printf will support %a?

I need to losslessly represent a double precision float in a string, and so I am using
sprintf(buf, "%la", x);
This works fine on my system, but when built on MinGW under Windows, gives a warning:
unknown conversion type character 'a' in format
I coded up a workaround for this case, but have trouble detecting when I should use the workaround -- I tried #if __STDC_VERSION__ >= 199901L, but it seems GCC/MinGW defines that even if it doesn't support %a. Is there another macro I could check?
This doesn't answer the question "How to detect if printf will support %a?" in the general case, but you can modify your compiler installation so that %a is supported.
First of all, use mingw-w64. This is an up-to-date fork of MinGW. The original version of MinGW is not well maintained and does not fix bugs such as the one you are experiencing (preferring to blame Microsoft or something).
Using mingw-w64 4.9.2 in Windows 10, the following code works for me:
#include <stdio.h>
int main()
{
    double x = 3.14;
    printf("%a\n", x);
}
producing 0x1.91eb85p+1 which is correct. This is still deferring to the Microsoft runtime.
Your question mentions %la; however, %a and %la are the same and can be used to print either a float or a double argument.
If you want to print a long double, then the Microsoft runtime does not support that; gcc and MS use different sizes of long double. You have to use mingw-w64's own printf implementation:
#define __USE_MINGW_ANSI_STDIO 1
#include <stdio.h>
int main()
{
    long double x = 3.14;
    printf("%La\n", x);
}
which outputs 0xc.8f5c28f5c28f8p-2. This is actually the same number as 0x1.91eb85p+1, with more precision and a different placement of the binary point; it is also correct.
As jxh already suspected, MinGW uses the MSVCRT C library from Windows. Its C99 support is not complete; in particular, some printf(3) conversions such as %a are missing.
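If you still need a compile-time switch for the workaround, one heuristic (an assumption on my part, not a guaranteed-reliable test) is to key it off MinGW's own predefined macros rather than __STDC_VERSION__: __MINGW32__ identifies the MinGW toolchain, and __USE_MINGW_ANSI_STDIO indicates that its conforming stdio replacement is in use.

#include <stdio.h>

/* Heuristic: assume %a is missing only when building with MinGW and its
 * ANSI-conforming stdio replacement has not been requested. */
#if defined(__MINGW32__) && !defined(__USE_MINGW_ANSI_STDIO)
#define NO_PRINTF_HEX_FLOAT 1
#endif

void my_hex_float_fallback(char *buf, size_t n, double x);  /* hypothetical: your existing workaround */

void format_double(char *buf, size_t n, double x)
{
#ifdef NO_PRINTF_HEX_FLOAT
    my_hex_float_fallback(buf, n, x);
#else
    snprintf(buf, n, "%a", x);
#endif
}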

cross-platform printing of 64-bit integers with printf

On Windows it is "%I64d"; on Linux and Solaris it is "%lld".
If I want to write a cross-platform printf that prints long long values, what is a good way of doing so?
long long ll;
printf(???, ll);
There are a couple of approaches.
You could write your code in C99-conforming fashion, and then supply system-specific hacks when the compiler-writers let you down. (Sadly, that's rather common in C99.)
#include <stdint.h>
#include <inttypes.h>
printf("My value is %10" PRId64 "\n", some_64_bit_expression);
If one of your target systems has neglected to implement <inttypes.h> or has in some other way fiendishly slacked off because some of the type features are optional, then you just need a system-specific #define for PRId64 (or whatever) on that system.
The other approach is to pick something that's currently always implemented as 64-bits and is supported by printf, and then cast. Not perfect but it will often do:
printf("My value is %10lld\n", (long long)some_64_bit_expression);
MSVC supports long long and %lld starting with Visual Studio 2005.
You could check the value of the _MSC_VER macro (>= 1400 for 2005), or simply don't support older compilers.
It doesn't provide the C99 macros, so you will have to cast to long long rather than using PRId64.
This won't help if you're using older MSVC libraries with a non-MSVC compiler (I think MinGW, at least, provides its own version of printf that supports %lld).
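A minimal sketch of the version check this answer suggests (assuming you are content to refuse compilers older than Visual Studio 2005 and to use %lld with a cast everywhere, instead of PRId64):

#include <stdio.h>

#if defined(_MSC_VER) && _MSC_VER < 1400
#error "Visual Studio 2005 or newer is required (no long long / %lld support)"
#endif

int main(void)
{
    long long ll = 9000000000LL;
    printf("value = %lld\n", (long long)ll);  /* cast, rather than relying on PRId64 */
    return 0;
}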
No, on Linux and Solaris it is only incidental that %lld matches a 64-bit type. C99 prescribes simple (but ugly) macros, such as PRId64, to make these things portable. Since some Windows compilers don't follow the standard, you might be out of luck there, unfortunately.
Edit: In your example you are using something different from a 64-bit integer, namely a long long, which could well be 128 bits on some architectures. C99 has typedefs that guarantee the minimum or exact width of a type (if they are implemented on the platform). These types come with the inttypes.h header, e.g. int64_t for a fixed-width 64-bit type represented in two's complement. Your Windows compiler may or may not have them.
As an alternative, you can split the value into two 32-bit halves and print those:
uint64_t currentTimeMs = ...;
printf("currentTimeMs = 0x%08x%08x\n",
       (uint32_t)(currentTimeMs >> 32),
       (uint32_t)(currentTimeMs & 0xFFFFFFFF));
Or maybe:
/* caveat: values below 10^9 come out with leading zeros, e.g. 123 prints as 0000000123 */
printf("currentTimeMs = %u%09u\n",
       (uint32_t)(currentTimeMs / 1000000000),
       (uint32_t)(currentTimeMs % 1000000000));

Is using %zu correct syntax in a printf format string as shown in some C code found on Wikipedia?

I just found this code on Wikipedia.
Link: http://en.wikipedia.org/wiki/Sizeof#Use
The code:
/* the following code illustrates the use of sizeof
* with variables and expressions (no parentheses needed),
* and with type names (parentheses needed)
*/
char c;
printf("%zu,%zu", sizeof c, sizeof(int));
It states that: "The z prefix should be used to print it, because the actual size can differ on each architecture."
I tried it on my compiler, but it gives the following result:
zu,zu
Yes, that syntax is correct (at least for C99). It looks like your compiler isn't set up to handle it, though. Just take out the z and you'll probably be fine. To be correct, make sure your printf format specifiers match the sizes of the types; turning on all the warnings your compiler can give will probably help in that respect.
Your quotation:
The z prefix should be used to print it, because the actual size can differ on each architecture
is referring to the fact that size_t (which is the type returned by the sizeof operator) can vary from architecture to architecture. The z is intended to make your code more portable. However, if your compiler doesn't support it, that's not going to work. Just fiddle with combinations of %u, %lu, etc. until the output makes sense.
The z length modifier was added to C in the C99 standard; you might have a compiler that doesn't support C99.
If your C compiler doesn't support that, you can probably treat the sizes as unsigned long:
printf("%lu,%lu", (unsigned long)sizeof c, (unsigned long)sizeof(int));
Yes, but it only works on C99-compliant compilers. From Wikipedia:
z: For integer types, causes printf to expect a size_t sized integer argument.
Did you tell your compiler that you want it thinking with a C99 brain? There is probably a switch to do that. For instance, -std=c99 for gcc.
If your compiler does not support it, but you know others will, you can do a PRId64-style workaround (disclaimer: pseudo-code ahead):
#ifdef __SOME_KNOWN_C99_COMPILER
#define PORTUNSIGNED "zu"
#define PORTUNSIGNED_T size_t
#else
#define PORTUNSIGNED "lu"
#define PORTUNSIGNED_T unsigned long
#endif
printf("%-11" PORTUNSIGNED " ways to skin a cat\n", (PORTUNSIGNED_T)sizeof(int));
It's probably better to get a compiler that has functional support for C99, however.
I've made a test using gcc 4.0; it works with -std=c99.
