Is there any compiler option to make time_t 64-bit on Solaris 5.8 with the Forte compiler? I need to build the library as 32-bit, and I cannot change it to 64-bit as that affects existing client applications.
Sun does not (yet) provide any compiler option for this, other than compiling for 64-bit. If you simply need to be able to handle post-2038 dates in your 32-bit application (e.g. for a 30-year mortgage calculation) and do not need such dates in the Solaris kernel (e.g. current time, file timestamps), then there are packages that you can use within your application to handle such dates. For example y2038 is a simple package that provides a 64-bit time_t-like type and the corresponding replacements for localtime(), gmtime(), ctime(), etc. If you are not tied to the POSIX interfaces you could instead use something like libtai, which also handles leap seconds.
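If it helps to see the shape of such a replacement, here is a minimal illustrative sketch; the names time64_t and gmtime64 are made up for the example and are not necessarily the actual y2038 API:

    #include <stdint.h>
    #include <time.h>

    /* A 64-bit time type, usable even where the platform time_t is a
       signed 32-bit value. */
    typedef int64_t time64_t;

    /* Sketch of a gmtime() replacement. A real package does the full
       civil-calendar arithmetic itself; this sketch only delegates to
       the platform gmtime() when the value happens to fit in 32 bits. */
    struct tm *gmtime64(const time64_t *t64, struct tm *out)
    {
        if (*t64 >= INT32_MIN && *t64 <= INT32_MAX) {
            time_t t = (time_t)*t64;
            struct tm *tmp = gmtime(&t);  /* value fits, safe to call */
            if (tmp == NULL)
                return NULL;
            *out = *tmp;
            return out;
        }
        /* Post-2038 (or pre-1902) values need independent calendar
           arithmetic, which is exactly what such packages implement. */
        return NULL;
    }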
The short answer is no: there is no compiler option to make time_t a 64-bit value in a 32-bit application. It was extended to a 64-bit value for 64-bit applications, as that seemed like a good change to make, but for compliance with the various standards it has to remain a signed 32-bit value in 32-bit applications.
If you want to use a 64-bit value to represent time, then you will have to make sure that any values returned to the existing client applications do not overflow. If they can overflow, you need a way to report that to the client application, and you need to understand how clients will deal with such values; all of that is part of the library's API.
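For illustration, here is a minimal sketch of such a boundary check. The function name to_client_time and the EOVERFLOW convention are assumptions for the example, not part of any existing API:

    #include <errno.h>
    #include <stdint.h>
    #include <time.h>

    /* Hypothetical boundary function: the library keeps time in 64 bits
       internally, but existing 32-bit clients still receive a time_t.
       Out-of-range values are reported instead of silently truncated. */
    int to_client_time(int64_t internal, time_t *out)
    {
        if (internal < INT32_MIN || internal > INT32_MAX) {
            errno = EOVERFLOW;  /* the client must define how to handle this */
            return -1;
        }
        *out = (time_t)internal;
        return 0;
    }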
I've built a programme in C with Visual Studio, using the standard C libraries plus a couple of Windows libraries.
The code just acquires user input (with scanf, so through a cmd window), does some calculations based on the input, and outputs some text files; that's about it.
I'm wondering what I would then need to do to run my exe on another standard Windows computer, without it needing any additional files installed (e.g. the whole Windows SDK)?
Is it just a case of building the release version (as opposed to the debug one)?
Many thanks
If you pick the right CPU target (Project Configuration Properties -> C/C++ -> Enable Enhanced Instruction Set) so that the binary doesn't include instructions understood by only a narrow subset of CPUs, your program will run on almost any Windows computer; the Visual Studio default is to use instructions supported by the widest set of x86 or x64 CPUs. You do, however, need to distribute any DLLs that aren't part of the base Windows installation, which includes any additional dynamically linked language runtimes such as the Visual C++ runtime.
A good way to derive the list of DLLs that you have to package with your executable is to create a virtual machine with a fresh Windows installation without any development tools in it and then try to run your code there. If you get an error for a missing DLL, add it and repeat. When it comes to the Visual C++ runtime, Microsoft provides installable packages for the different versions that you are allowed to distribute as part of your installation (Visual C++ Whatever Redistributable Package).
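If you prefer probing to trial and error, a small check like the one below, run on the target machine, tells you whether a given DLL can be found; "vcruntime140.dll" is just an example name, so substitute whatever your binary actually depends on:

    #include <windows.h>
    #include <stdio.h>

    /* Try to load one DLL the same way the OS loader would; a failure
       here means the DLL would also be missing when the exe starts. */
    int main(void)
    {
        HMODULE h = LoadLibraryA("vcruntime140.dll");
        if (h != NULL) {
            printf("found\n");
            FreeLibrary(h);
        } else {
            printf("missing (error %lu)\n", GetLastError());
        }
        return 0;
    }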
Also, mind that programs compiled for 32-bit Windows will mostly run on 64-bit versions, but the opposite is not true.
Generally speaking, compiled C programs are not "portable", meaning they can't be copied over to other machines and expected to run.
There are a few exceptional cases where C executables can safely be run on other machines:
The CPUs support the same instruction set, with the same set of compatible side-effects and possibly the same bugs.
The operating systems support the same system API entry points, with the same set of compatible side-effects and possibly the same bugs.
The installed non-operating-system libraries support the same API entry points, with the same set of compatible side-effects and possibly the same bugs.
The API calling conventions are the same between the source (platform you built the code on) and destination (platform you will run the executable on).
Of course, not all of the CPU and OS needs to be 100% compatible, just the parts your compiled program uses (which are not always easy to identify, as the program is compiled and is not a 100% identical representation of the source code).
For these conditions to hold, typically you are using the same release of the operating system, or a compatibility interface designed by the operating system packagers that supports the current version of the operating system and older versions too.
The details on how this is most easily done differ between operating systems, and generally speaking, even if a compatibility layer is present, you need to do adequate testing as the side-effects and bugs tend to differ despite promises of multi-operating system compatibility.
Finally, there are some environments (like QEMU) that can run executables built for a different CPU by simulating the foreign CPU's instruction set at runtime, interpreting those instructions into ones that are compatible with the current CPU. Such systems are not common across non-Linux operating systems, and they may stumble if the loading of dynamic libraries can't locate and load foreign-instruction-set libraries.
With all of these caveats, you can see why most people decide to write portable source code and recompile it on each of the target platforms; or, write their code for an interpreter that already exists on multiple platforms. With the interpreter approach, the CPU is conceptually a virtual one, which is implemented to be identical across hardware, letting you write one set of source code to be interpreted across all the hardware platforms.
I've built a programme in C with Visual Studio, using the standard C libraries plus a couple of Windows libraries.
I'm wondering what I would then need to do to run my exe on another standard Windows computer, without it needing any additional files installed (e.g. the whole Windows SDK)?
You don't explain what your program is really doing. Does it have a graphical user interface? Is it a web server? Do you have some time to improve it or enhance it? Can it run on the command line?
Why can't you share the C source code (e.g. on GitHub)?
If you want some graphical interface, consider using GTK. It has been ported to Windows.
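If you go the GTK route, here is a minimal sketch, assuming GTK 3 with its development headers installed (build with pkg-config --cflags/--libs gtk+-3.0):

    #include <gtk/gtk.h>

    /* Called when the application starts; creates and shows a window. */
    static void activate(GtkApplication *app, gpointer user_data)
    {
        GtkWidget *window = gtk_application_window_new(app);
        gtk_window_set_title(GTK_WINDOW(window), "Hello");
        gtk_widget_show_all(window);
    }

    int main(int argc, char **argv)
    {
        GtkApplication *app =
            gtk_application_new("org.example.hello", G_APPLICATION_FLAGS_NONE);
        g_signal_connect(app, "activate", G_CALLBACK(activate), NULL);
        int status = g_application_run(G_APPLICATION(app), argc, argv);
        g_object_unref(app);
        return status;
    }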
If a web interface is enough, consider using libonion in your program, or find some HTTP server library in C for your particular platform.
But what you need to understand is mostly related to linking, not only to the C programming language (read the n1570 specification). Hence read about Linkers and loaders.
You should prefer static libraries. Their availability is platform specific. Read more about Operating Systems. Also, sharing some Microsoft Windows system WinAPI libraries could be illegal (even when technically possible), so consult your lawyer after showing him the EULA that you are bound to.
My opinion is that Linux distributions are very friendly when learning to program in C (e.g. using GCC or Clang as your C compiler). Both Ubuntu and Debian are freely downloadable; plan on having plenty of free disk space (some tens of gigabytes). Be sure to back up your important data before installing one of them.
Did you consider porting, adapting and compiling your C code to WebAssembly? If you did that, your code would run inside most recent Web browsers. Look also into Bellard's JSLinux.
Notice also that C can be transpiled to JavaScript and then run locally inside recent web browsers.
Is there a way to force GCC and/or Clang to use the LP64 data model when targeting Windows (ignoring that Windows uses the LLP64 data model)?
No, because the requested capability would not work
You are "targeting Windows", presumably meaning you want the compiler to produce code that will run under Windows in the usual way. In order to do that, the program must invoke functions in the Windows API. There are effectively three versions of the Windows API: win16, win32, and win64. Since you want 64-bit pointers (the "P64" in "LP64"), the only possible target is win64.
In order to call a win64 function, you must include windows.h. That header file uses long. If there were a compiler switch to insist that long be treated as a 64-bit integer (LP64) rather than 32-bit (LLP64), then the compiler's understanding of how to call functions and lay out data structures that use long would be wrong; the resulting program would not run correctly.
The same problem applies to the standard C and C++ libraries. If you link to an existing compiled library (as is typical), the calls into it won't work (since it will use LLP64). If you were to build one from source using a hypothetical switch to force LP64, its calls into the Windows API would fail.
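A quick way to see which data model a compiler targets is to print the fundamental type sizes. Under MSVC (or MinGW) targeting win64 this prints 4 and 8 (LLP64); a typical 64-bit Unix compiler prints 8 and 8 (LP64):

    #include <stdio.h>

    int main(void)
    {
        /* LLP64: long is 4 bytes, pointers are 8.
           LP64:  long is 8 bytes, pointers are 8. */
        printf("sizeof(long) = %zu, sizeof(void *) = %zu\n",
               sizeof(long), sizeof(void *));
        return 0;
    }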
But you can try Cygwin
Cygwin uses LP64 and produces binaries that run on Windows. That is possible, despite what I wrote above, because the Cygwin DLL acts as a bridge between the Cygwin LP64 environment and the native win64 LLP64 environment. If you have code originally written for win32 and you now want it to take advantage of a 64-bit address space with no or minimal code changes, I suspect this is the easiest path. But I should acknowledge that I've never used Cygwin in quite this way, so there might be problems I am not aware of.
I'm developing a few programs on my PC, which runs 64-bit Ubuntu.
I'd like to run these applications on another PC that runs a 32-bit system. Is it possible to compile them on my machine, or do I need to recompile the applications on the other one?
In general you need to provide the compiler an environment similar to the target execution environment. Depending on how similar or different one environment is to another, this may be simple or complicated.
Assuming the compiler is GCC, you should only need to add -m32 to your compilation flags to make the binaries work on a 32-bit system, all other things being equal. Ensure you have the necessary 32-bit dependencies installed on your build system (this means the base C library as well as a 32-bit version of each library your application links against).
Since you are only compiling for x86 on a 64-bit host, the path to this is generally simple. I would recommend, however, setting up a dedicated environment in which to compile, typically some kind of chroot (see pbuilder, schroot, chroot, debootstrap and others).
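As a quick sanity check for the -m32 route (hello32.c is just a hypothetical filename, and on Debian/Ubuntu hosts you typically need the gcc-multilib package for this to link):

    /* hello32.c - confirm the toolchain is producing 32-bit output.
     * Build:  gcc -m32 -o hello32 hello32.c
     * Then `file hello32` should report a 32-bit ELF executable.
     */
    #include <stdio.h>

    int main(void)
    {
        /* Prints 4 in a 32-bit build, 8 in a 64-bit build. */
        printf("sizeof(void *) = %zu\n", sizeof(void *));
        return 0;
    }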
There are compiler settings/flags that should allow you to do this on your machine; which specific ones you need would depend on the compiler you are using.
Take, for example, a program downloaded from some website: the options to pick from are the usual operating systems (Linux, Mac, Windows), but what about CPU architecture? The program is a binary executable. Does it just assume amd64? Or is the program compiled for all of the supported architectures and packaged together with a script on top that chooses the right one?
I'm only interested in C and would like to know how this is accomplished.
On further investigation, thanks to the lovely information provided by the individuals below, I came across fat binaries, with support on both Mac and Linux. It doesn't seem as though Windows supports them.
The Mac OS X binary format includes a mechanism for providing code for multiple architectures in the same file, so a single Mac application can support 32- and 64-bit x86; in the recent past PowerPC support was possible too, although it is now obsolete. But for Windows and Linux, you generally need separate binaries for each CPU architecture (as comments have pointed out, it's possible to jury-rig something similar, although it's far from standard practice). The default, and by far the most common, is amd64, but sometimes you'll still see separate downloads for 32-bit machines. The world used to be more interesting in this respect, but nowadays things are more standardized than ever.
Suppose you are designing, and writing a compiler for, a new language called Foo, among whose virtues is intended to be that it's particularly good for implementing compilers. A classic approach is to write the first version of the compiler in C, and use that to write the second version in Foo, after which it becomes self-compiling.
This does mean you have to be careful to keep backup copies of the binary (as opposed to most programs where you only have to keep backup copies of the source); once the language has evolved away from the first version, if you lost all copies of the binary, you would have nothing capable of compiling the current version. So be it.
But suppose it is intended to support both Linux and Windows. As long as it is in fact running on both platforms, it can compile itself on each platform, no problem. Supposing however you lost the binary on one platform (or had reason to suspect it had been compromised by an attacker); now there is a problem. And having to safeguard the binary for every supported platform is at least one more failure point than I'm comfortable with.
One solution would be to make it a cross-compiler, such that the binary on either platform can target both platforms.
This is not quite as easy as it sounds: while there is no problem selecting the binary output format, each platform provides the system API in the form of C header files, which normally exist only on their native platform; e.g. there is no guarantee that code compiled against the Windows stdio.h will work on Linux, even if compiled into the Linux binary format.
Perhaps that problem could be solved by downloading the Linux header files onto a Windows box and using the Windows binary to cross-compile a Linux binary.
Are there any caveats with that solution I'm missing?
Another solution might be to maintain a separate minimum bootstrap compiler in Python, that compiles Foo into portable C, accepting only that subset of the language needed by the main Foo compiler and performing minimum error checking and no optimization, the intent being that the bootstrap compiler will thus remain simple enough that maintaining it across subsequent language versions wouldn't cost very much.
Again, are there any caveats with that solution I'm missing?
What methods have people used to solve this problem in the past?
This is a problem for C compilers themselves. It's typically solved by the use of a cross-compiler, exactly as you suggest.
The process of cross-compiling a compiler is no more difficult than cross-compiling any other project: that is to say, it's trickier than you'd like, but by no means impossible.
Of course, you first need the cross-compiler itself. This probably means some major surgery to your build-configuration system, and you'll need some kind of "sysroot" taken from the target (headers, libraries, anything else you'll need to reference in a build).
So, in the end it depends on how your compiler is structured. Either it's easier to re-bootstrap using historical sources, repeating each phase of language compatibility you went through in the first place (you did use source revision control, right?), or it's easier to implement a cross-compiler configuration. I can't tell you which from here.
For many years, the GCC compiler was written only in standard-compliant C code for exactly this reason: they wanted to be able to bring it up on any OS, given only the native C compiler for that system. Only in 2012 was it decided that C++ was sufficiently widespread that the compiler itself could be written in it. Even then, they only permit themselves a subset of the language. In future, if anybody wants to port GCC to a platform that does not already have C++, they will need to either use a cross-compiler, or first port GCC 4.7 (the last major C-only version) and then move to the latest.
Additionally, the GCC build process does not "trust" the compiler it was built with. When you type "make", it first builds a reduced version of itself, then uses that to build a full version. Finally, it uses the full version to rebuild another full version and compares the two binaries. If the two do not match, it knows that the original compiler was buggy and introduced some bad code, and the build has failed.