Where do I get the BIOS.h file to include in MinGW? - c

MinGW doesn't ship a BIOS.h file by default, and I'm doing systems programming using the NetBeans IDE with MinGW as a third-party toolchain.
Can anyone tell me where I can get that file?
This is the code:
#include <stdio.h>
#include <BIOS.H>
#include <DOS.H>

char st[80] = {"Hello World$"};

void main()
{
    _DX = (unsigned int) st;   /* DS:DX -> '$'-terminated string */
    _AH = 0x09;                /* DOS function 09h: print string */
    geninterrupt(0x21);        /* raise DOS interrupt 21h */
}

Nowhere, you don't.
Those header files (dos.h and bios.h) are from 16-bit DOS compilers such as Turbo C or Open Watcom C. MinGW is a 32-bit compiler for Windows. As such, even if you get these header files, they will be useless because:
they are incompatible with gcc
they also need counterpart libraries because the headers themselves do not contain definitions of things like geninterrupt()
DOS interrupt services (int 21h) are not available to Win32 programs
Further, gcc does not support variables aliasing to CPU registers (e.g. _DX, _AH).
You either need to use the appropriate 16-bit DOS compiler or write a Windows program using functionality available from gcc and Win32 API.
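For comparison, here is a minimal sketch (illustrative only, not from the question) of the same "print a string" operation written as a native Win32 program that MinGW can compile, using the Win32 console API instead of DOS interrupt 21h:
#include <windows.h>

int main(void)
{
    /* Equivalent of the DOS AH=09h "print string" service, done via Win32 */
    const char msg[] = "Hello World\r\n";
    DWORD written = 0;
    HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);   /* console output handle */
    WriteConsoleA(out, msg, sizeof msg - 1, &written, NULL);
    return 0;
}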

Do you really need it? It has been obsolete for a very long time. But from what I've heard, some older Turbo C versions might have it. You can also try http://www.sandroid.org/TurboC/, but they say the file might not have all the functions.

Related

Is it possible to compile C89 code on MS Windows?

I'm trying to work with some legacy C89 code, and am having trouble getting it to build. My usual environment is Visual Studio, but that only seems to support C99, and some C99 features (such as stdio etc. not necessarily being constant) break the code - a lot. Before I start tampering with the code I want to write some tests, so I don't break the old behaviour, but I can't test the tests, so to speak, before I can get the code to build.
So is there still any way to compile C89 code on Windows?
Edit: Steve Summit has identified that stdio and so on have never been guaranteed to be constant; it's just a feature of some compilers that my legacy code happens to depend on, in a rather deeply embedded way. So my question shifts to: is there any C compiler available for Windows (preferably free!) that supports that assumption? Alternatively, I have an Ubuntu installation in a virtual machine, although I have little experience using it - is there such a compiler available on Ubuntu?
MSVC is a C++ compiler and has only gained C99 support recently; before that it supported only C89 with some MS extensions. To compile in strict C89 mode, use the /Za option. Make sure to also pass /Tc (or /TC) so the code is compiled as C rather than C++.
/Za, /Ze (Disable Language Extensions)
The /Za compiler option disables and emits errors for Microsoft extensions to C that aren't compatible with ANSI C89/ISO C90. The deprecated /Ze compiler option enables Microsoft extensions. Microsoft extensions are enabled by default.
See Enforce ANSI C Standard in Visual Studio 2015
Most other compilers use options like -ansi, -std=c90 or -std=iso9899:1990.
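For example (hypothetical file name legacy.c), strict C89/C90 mode could be requested roughly like this:
cl /Za /Tc legacy.c
gcc -std=c90 -pedantic legacy.c -o legacy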
However, if this is just about stdin/stdout not being constant when used in a static initializer, then it is completely unrelated to C89 and is actually an XY problem. The following snippet compiles without problems in VS2019 C++ mode, so if you don't have any conflicts, just compile the code in C++ mode:
#include <stdio.h>

FILE* ifp = stdout;

int main()
{
    fprintf(ifp, "test\n");
    return 0;
}
Otherwise it's easy to fix the code so it compiles in C mode, by moving the initialization into main():
FILE* ifp = NULL;

int main()
{
    ifp = stdout;   /* stdout is usable at run time, just not as a static initializer */
    fprintf(ifp, "test\n");
    return 0;
}
[This isn't really an answer, but it's too elaborate for a comment.]
If you've got code that does things like
#include <stdio.h>
FILE *ifp = stdin;
int main() { ... }
and if the problem you're having is errors stating that stdin is not a compile-time constant suitable for a static initializer, I think you're going to have to rewrite that aspect of your code. I could be wrong, but if I remember correctly, the idea that stdin et al. were compile-time constants was never a guarantee, just a useful property of the earliest Unix implementations. It wasn't necessarily true of all old implementations, so the "change" to the Standard that explicitly said they weren't necessarily constant wasn't a change per se, but rather, more or less a codification of the divergence of existing practice.
(In other words, if you've got a compiler that's rejecting the code, and even if it has a backwards-compatibility mode, I'd be surprised if the backwards-compatibility mode turned stdin into a compile-time constant.)
All supported (and even older) versions of Visual Studio are perfectly capable of compiling C89 code. Also, C99 is backward compatible with previous revisions of the language, so a C99 compiler should be able to compile C89 code just fine.
You might get some warnings, but the code should compile and work just fine, provided the code is portable, of course.

Create a Static Library on Windows Which Is macOS and Linux Compatible

I would like to generate a Static Library file in Windows using MSVC / IXX which is macOS and Linux compatible.
I'm using C (Let's say C99) and the functions are really simple. For example:
void AddArray(float* mA, float* mB, float* mC, int numElements){
    int ii;
    for(ii = 0; ii < numElements; ii++){
        mC[ii] = mA[ii] + mB[ii];
    }
}
Is there a way to build the library only once on Windows and use it everywhere?
If not on Windows, could it be done on Linux and work on Windows and macOS?
The idea is to compile once with the same compiler, and not to use MinGW on Linux, for instance.
This is not possible in the simple way you want (a single library file that works for every OS/compiler), because static libraries are compiled binary code, which inevitably references platform specifics and must (in general) be in a compiler-specific format. Some level of compatibility exists between several compilers on the same OS, but it is never going to work across different OSes. Mac OS used to have the concept of fat binaries (in which both the 32-bit and 64-bit binary code resided next to each other), but since the platform moved to exclusively 64-bit, this isn't really relevant anymore (although they still exist and can still be used).
If you want to distribute in binary form, you will need to provide different binaries for each platform (OS/architecture/toolchain) combination you want to support.
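To make that concrete, the same source (say, AddArray.c) would simply be compiled and archived once per toolchain; a rough sketch:
cl /c AddArray.c
lib /OUT:AddArray.lib AddArray.obj         (Windows, MSVC)
gcc -c AddArray.c -o AddArray.o
ar rcs libAddArray.a AddArray.o            (Linux/macOS, gcc or clang)
The source stays identical; only the build step is repeated per platform.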

Include Headers OpenCL (32bit vs 64bit)

I'm programming OpenCL via PyOpenCL on Ubuntu 16.04.3 64-bit,
on Nvidia's Tesla K10.G2.8GB.
So far, everything runs smoothly as long as I don't include header files in my OpenCL kernel. As soon as I put #include <stdlib.h> at the top of my header file, the compilation of my OpenCL kernels fails with various files reported missing, among them
gnu/stubs-32.h
sys/cdefs.h
Searching for that problem brings up answers like
Error "gnu/stubs-32.h: No such file or directory" while compiling Nachos source code
or
https://askubuntu.com/questions/470796/fatal-error-sys-cdefs-h-no-such-file-or-directory
basically suggesting installing libc6-dev-i386 or gcc-multilib and g++-multilib, on the assumption that the underlying problem is a 64-bit/32-bit problem. My question is: are my OpenCL binaries for the GPU compiled as 32-bit binaries (and how can I check)?
If yes:
Are there other caveats, when I want to compile 32bit binaries on a 64bit OS?
Furthermore: can I use 64-bit floats when my kernel is compiled as 32-bit?
(e.g., will #pragma OPENCL EXTENSION cl_khr_fp64 : enable still work?)
If no:
Do I have to manually locate / copy all the needed header files and include them by hand?
Also: some of my co-workers even doubt that including standard C headers in OpenCL kernels is possible at all, because there is no linker involved. Any light on that is also appreciated.
Standard C library and other system headers cannot be included in OpenCL C code, basically because they are only compatible with the current system (the host), whereas OpenCL C code could run on a different device with a different architecture (a GPU in your case).
As a replacement for the standard C functions, OpenCL C defines a set of built-in functions which are available without any #include: printf, a large number of math functions, atomics, image-related functions, etc.
See "OpenCL Specification: 6.12 Built-in Functions" for the complete list:
https://www.khronos.org/registry/OpenCL/specs/opencl-1.2.pdf
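As a rough illustration (not from the question), a kernel can use these built-ins directly, with no #include at all:
__kernel void scale(__global float* data, int n)
{
    size_t i = get_global_id(0);          /* built-in work-item function */
    if (i < (size_t)n)
        data[i] = sqrt(fabs(data[i]));    /* built-in math functions */
    if (i == 0)
        printf("items: %d\n", n);         /* printf is built in as of OpenCL C 1.2 */
}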
That doesn't mean you can't create a header with OpenCL C code and #include it into an OpenCL C program. This works fine:
// foo.h
void foo() {
    printf("hello world!");
}

// kernel.cl
#include "foo.h"

__kernel void use_foo() {
    foo();
}

Can an executable file generated by compiling C be copied and run on a different OS (UNIX)?

I am a Java programmer, but I have a few things to do in C, so I started with the simple example below. If I compile it and generate an executable file (hello), can I run that executable on any Unix platform without the original file (hello.c)? Also, is there a way to read the data back from the executable file, i.e. decompile the executable back into the original file (hello.c)?
[oracle@oracleapps test]$ cat hello.c
#include <stdio.h>

int main(){
    int i, data = 0;
    for(i = 1; i <= 64; i += 1){
        data = i * 2;
        printf("data=%d\n", data);
    }
    return 0;
}
To compile
gcc -Wall -W -Werror hello.c -o hello
You can run the resulting executable on platforms that are ABI-compatible with the one for which you compiled the executable. ABI compatibility basically means that the same physical processor architecture and OS interfaces (plus calling convention) are used on two (possibly different) OSes. For example, you can run binaries compiled for Linux on a FreeBSD system (with the same processor type), because FreeBSD includes Linux ABI compatibility. However, it may not be possible to run a binary on all other types of Unices unless some hackery is done. For example, you can't run Mac OS X applications on Linux; however, this guy has a solution with which it's possible to use some OS X command line tools (including the GCC compiler itself) on Linux.
Reverse engineering: there are indeed decompilers which aim to generate C code from machine code, but they're not (yet) very powerful. The reason is that they are by nature extremely hard to write. Machine code patterns have to be recognized, and even then you can't recover all the original information. For example, the kinds of loops used, comments, non-static local variable names, and most of the types are all gone during the compilation process. If you have a C source file like this:
#include <stdio.h>

int main(int argc, char **argv)
{
    int i;
    for (i = 0; i < 10; i++)
    {
        printf("I is: %d\n", i); /* Write the value of I */
    }
    return 0;
}
a C decompiler may be able to reconstruct the following code:
int main(int _var1, void *_var2)
{
    int _var3 = 0;
    while (_var3 < 10)
    {
        printf("I is: %d\n", _var3);
        _var3 = _var3 + 1;
    }
    return 0;
}
But this would be a rather advanced decompiler, such as this one.
You can't run the executable on just any platform.
You can run the executable on other machines (or this one) without the .c file, provided it is the same OS/distro running on the same hardware.
You can use a decompiler or disassembler to read the file and view it as assembly or C, but the result won't look much like the original .c file.
The compiled file is pure machine code (plus some metadata), so it is self-sufficient in that it does not require the source files to be present. The downside? Machine code is both OS- and platform-specific. By platform, we usually mean just roughly the CPU's instruction set, i.e. "x86" or "PowerPC", but some code compiled with certain compiler flags may require specific instruction set extensions. The OS dependence is caused not only by different formats for executable files (e.g. ELF as opposed to PE), but also by the use of OS-specific services, or of common OS services in an OS-specific manner (e.g. system calls). In addition to that, almost all nontrivial code depends on some libraries (a C runtime library at least), so you probably won't be able to run an executable without having the right libraries in compatible versions. So no, your executable likely won't run on a 10-year-old proprietary UNIX, and may not run on different Linux distributions (though with your program there's a good chance it will, because it likely only depends on glibc).
While machine code can be easily disassembled, the result is very low-level and useless to many people. Decompilation to C is almost always much harder, though there are attempts. The algorithms can be recovered, simply because they have to be encoded in the machine code somehow. Assuming you didn't compile for debugging, it will never recover comments, formatting, variable names, etc. so even a "perfect" decompiler would yield a different C file from the one you put in.
No ... each platform may have different executable format requirements, different hardware architectures, different executable memory layouts determined by the linker, etc. A compiled executable is "native" to the platform it was compiled for, not other platforms. You can cross-compile for another architecture on your current machine, though.
For instance, even though they may have many similarities, a compiled executable on Linux x86 is not guaranteed to run under BSD, depending on its flavor (i.e., you could probably run it under FreeBSD but typically not under OS X's Darwin version of BSD, even though both machines may have the same underlying hardware architecture). You also couldn't compile something on an SGI MIPS machine running IRIX and run it on a Sun SPARC running Solaris.
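As an aside, cross-compiling usually just means using a toolchain that targets the other platform. For example (assuming the mingw-w64 cross toolchain is installed on a Linux machine), x86_64-w64-mingw32-gcc hello.c -o hello.exe produces a Windows executable without ever leaving Linux.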
With C programs, the program is tied to the environment it was compiled for (which is usually the same as the platform it was compiled on, unless you are cross-compiling). You could copy something built for one version of Linux (and a particular hardware architecture) to another machine with the same architecture running the same version of Linux, and you'll be fine. You can often get away with running it on a related version of Linux. But you won't get x86-64 code to run on an IA32 machine, nor on a PPC machine, nor on a SPARC machine. You can likely get IA32 code to run on an x86-64 machine, if the basic O/S is sufficiently similar. And you may or may not be able to get something compiled for Debian to run under Red Hat or vice versa; it depends on which libraries your program uses.
Java avoids this by compiling to a platform-neutral byte code and providing a platform-specific JVM (JRE) to run it on each platform. This "write once, run anywhere" behaviour was a key selling point for Java.
Yes, you can run it on any Unix that QEMU runs on. This is pretty comparable to Java programs, which you can run on any Unix the JVM runs on...

How to compile assembly language in C

I am looking at a lot of assembly language code that is compiled along with C. They are using simple #define directives in the boot.s code without any headers. How does this work?
Typically .s files are processed by an assembler. Without knowing any other details, there's nothing more to say. .s file goes in, .o file comes out.
Many assemblers provide some kind of include directive to allow use of headers, which would also be in assembly language.
Ah, the code you linked is for use by the GNU as assembler. If you're on Linux or Mac, do man as to learn about it. If you're on Windows, install MinGW or Cygwin.
Compilers can frequently include in-line assembly, but I believe it is compiler specific.
I don't remember the precise details, but I think it's something like this:
void myFunc(void)
{
    int myNum;   /* plain old C */
    __asm        /* assembly (MSVC-style inline asm block) */
    {
        mov ax, bx
        xor cx, cx
    }
    myNum = 5;   /* more C */
}
Research your specific compiler for details.
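With gcc, the rough equivalent is "extended" inline assembly, which uses AT&T syntax and explicit operand lists. A minimal sketch, assuming x86 and a hypothetical function name:
void my_func(void)
{
    int my_num = 0;               /* plain old C */
    __asm__ volatile (            /* gcc extended inline assembly */
        "movl $5, %0"             /* load the constant 5 into the output operand */
        : "=r" (my_num)           /* output operand: a register tied to my_num */
    );
    /* my_num now holds 5 */
}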
The link you posted in your comment is an assembly language source file that is meant to be run through the C preprocessor first. It's just a programming convenience, but lots of assemblers support similar constructs anyway, so I'm not sure why they went the C-preprocessor route.
If you have "main proc" inside of your code, you are using x86 architecture and your file ends with .asm you con use for compilation:
tasm fileName.asm
As a result you will get a fileName.obj file. After that you need to link it, and for that you can use tlink fileName.obj.
To run, just enter fileName.exe on the command line.
If you need to link more than one file, use tlink filename1.obj filename2.obj and so on.
During compilation and linking it is not necessary to specify the file extension (.obj or .asm); using just the file name is fine.

Resources