As you can see above, I want to know how library functions (like printf) are made in C. I am using the Borland C++ compiler.
They are defined in lib files (*.lib); header files only have prototypes.
Lib files cannot be read in text editors.
So, please let me know how they can be read.
C is a compiled language, so the C source code gets translated to binary machine-language code.
Because of that, you can't see the actual source code of any given library you have.
If you want to know how it works, you can see if it's an open source library, find the source code of the particular revision that generated the version you're using, and read it.
If it's not open source, you could try decompiling - using a tool that tries to guess what the original source code could have looked like in order to generate the machine code your library contains. As you can guess, this is not an accurate process - compilation is not a reversible, one-to-one process - and, as you probably wouldn't have guessed, it could be illegal - but I'm not really sure what conditions that depends on, if any.
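If you just want a feel for how such a function can be built at all, here is a minimal sketch (my_print_int and my_puts are made-up names; real printf implementations, e.g. in glibc or Borland's RTL, are far more involved and sit on top of lower-level I/O routines):

    #include <stdio.h>

    /* Print a non-negative decimal number one digit at a time. */
    static void my_print_int(int n)
    {
        if (n >= 10)
            my_print_int(n / 10);
        putchar('0' + n % 10);
    }

    /* Print a plain string character by character. */
    static void my_puts(const char *s)
    {
        while (*s)
            putchar(*s++);
    }

    int main(void)
    {
        my_puts("answer = ");
        my_print_int(42);
        putchar('\n');
        return 0;
    }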
Related
So I'm going through the CS50 introduction course.
And I'm confused about the fact that if you write #include some header file in the IDE, which tells the computer to find some library, your compiler will find that code and combine it with your code.
But then how does compiler find that code? Like I just type in #include <cs50.h> for example. But how does it find that code when I don't have it on my PC? Why would I have it without finding it online and downloading it beforehand? Does it look online and then download file, which it uses now and in future for my programs when I call #include? And if so does it mean that you can't properly program without connection to internet?
And im confused about the fact that if you write in ide #include some header file which tells computer to find some library,
Be careful with terminology. #include tells the compiler to find some header. The word library in C usually refers to the file with compiled code (.a, .lib, .so, .dll).
and your compiler will find that code and combine it with your code.
This is right. By and large, the effect is the same as copying and pasting the contents of the header in the place of the #include statement.
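For a made-up example: suppose greet.h contains nothing but one prototype. Then compiling main.c behaves exactly as if that prototype had been typed where the #include line stands:

    /* greet.h (hypothetical) */
    void greet(const char *name);

    /* main.c */
    #include <stdio.h>
    #include "greet.h"   /* the pre-processor pastes the prototype here */

    void greet(const char *name)
    {
        printf("hello, %s\n", name);
    }

    int main(void)
    {
        greet("world");
        return 0;
    }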
But then how does compiler find that code? Like i just type in #include Cs50 for example.
The compiler has default search paths built in, where it looks for the header that is being #included. Typically directories like /usr/include and /usr/local/include are built into the compiler, as well as the current directory (.). You can add more search paths through compiler command line arguments, for example -I/some/path in gcc and clang.
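As a sketch (the paths are hypothetical, and this assumes the CS50 library is installed as libcs50): a header in a default directory like /usr/local/include needs nothing special, while a copy in your home directory needs an extra -I flag, and the matching library still has to be named for the linker:

    /* hello.c
       Build, if cs50.h is in a standard location:
           gcc hello.c -o hello -lcs50
       Build, if cs50.h lives in ~/cs50/include instead:
           gcc -I"$HOME/cs50/include" hello.c -o hello -lcs50            */
    #include <cs50.h>
    #include <stdio.h>

    int main(void)
    {
        string name = get_string("Name: ");   /* get_string comes from libcs50 */
        printf("hello, %s\n", name);
        return 0;
    }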
But how does it find that code when i dont have it on my pc
It doesn't. You will get an error like "Cs50: No such file or directory".
If this works on your university's system, probably the header is installed on that system in some central location.
why would i have it without finding it online and downloading it beforehand? Does it look online and then download file which it uses now and in future for my programs when i call #include?
It doesn't.
And if so does it mean that you cant properly program without connection to internet?
C was developed in the 1970s. The internet barely existed yet. You can program perfectly well in C without an internet connection – if you can do it without StackOverflow, of course ;)
The code of the library must be present on your computer or nothing will work. There's no such thing as magic downloads of libraries; C compilers and linkers have worked the same way since long before the Internet was even invented.
Standard library headers are typically downloaded & installed along with the compiler. They are also very likely already pre-linked into some convenient format for the target system. You don't need to worry about manually adding standard libs to your project since the compiler will take care of that for you.
Custom headers require their corresponding .c files or linked libs to be present too, but they must be manually added to the current project, either by adding them in your IDE's project or, as in the old days, by creating a makefile.
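As a sketch of what "present and added to the project" means in practice (file names are made up), a project with its own header needs the matching .c file compiled and linked in as well:

    /* helper.h */
    int twice(int x);

    /* helper.c */
    #include "helper.h"
    int twice(int x) { return 2 * x; }

    /* main.c */
    #include <stdio.h>
    #include "helper.h"

    int main(void)
    {
        printf("%d\n", twice(21));
        return 0;
    }

    /* Both source files go to the compiler, e.g.  gcc main.c helper.c -o program,
       or get listed in the IDE project / makefile. */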
How that works in CS50 I don't know, but the lib obviously comes pre-installed somehow and they have hidden how from the students. We wouldn't want to risk CS50 students actually learning how programming works, now would we...
if you write in ide #include some header file which tells computer to find some library
Slight misunderstanding. When you type #include "somefile.h" into your program, the first stage of C and C++ compilation, called the pre-processor, will search for that file (called a "header") along the INCLUDE path. That is, the pre-processor will search through the local directory of the .c file, then a specified set of standard directories, and perhaps your own project directories to find a file called "somefile.h". The INCLUDE path is highly configurable with IDEs, command lines, and environment variables.

Upon finding that file, its contents are virtually substituted directly into your source code, exactly where the #include statement originally appeared. It's as if you had typed that exact file's contents yourself into your .c file. After the textual substitution of the #include statements with the file contents, the intermediate result is handed off to the compiler stage to convert to object (assembly) code.
Like i just type in #include Cs50 for example. But how does it find that code when i dont have it on my pc
If the file can't be found, the pre-processor stage of the compile will fail and the whole thing comes to an end. I'm not sure what's in Cs50, but if you type #include "Cs50" and it works, then my guess is that your university environment has some project or environment configuration that adds a course specific include directory into your compilation path. Difficult to say since you didn't specify how you were building your code. Most IDEs will let you right-click on a #include statement and navigate to the actual source of the header file so you can inspect its contents and see where it originates from on your disk.
Also, your title says "linking", but you really mean "including". Linking is the final stage, after the compiler compiles all source files to object code, to produce a final executable program.
.h files (should only) contain data type definitions, extern declarations, and function prototypes. .h files are not libraries.
So the compiler will know what parameters functions take and what their return types are. The compiler will know how to call them. It will also know the types of variables defined somewhere else in the code (including in the libraries). When the compiler compiles the code, it generates intermediate files called object files.
Those files are later linked together by a special program called the linker. This program will find the actual code of the functions in other object files or libraries (libraries are basically sets of object files grouped into one larger file). It happens behind the scenes. If you need to tell the linker to use a specific library, you simply use a command line option. For example, to use the math library you need to use the -lm compiler command line option.
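For example, a tiny program that calls sqrt from the math library compiles fine on its own, but on many systems the final link fails with an "undefined reference to sqrt" unless you add -lm:

    /* Build with:  gcc main.c -o main -lm   */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        printf("%f\n", sqrt(2.0));   /* sqrt's code lives in libm, not in math.h */
        return 0;
    }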
But how does it find that code when I don't have it on my PC?
The library file has to be present in your file system. Otherwise the linker will not be able to link functions or variables from that library.
And if so does it mean that you can't properly program without connection to internet?
No, it means that you have to have a properly configured toolchain (i.e. all needed libraries present in your file system).
But then how does compiler find that code?
For your level of knowledge: some libraries are linked by default. Others are not - so you need to tell the compiler/linker what you want to use.
--gcc & binutils related--
Linking is generally quite a complicated process: the compiler has its own configuration files called "spec files" and the linker has "linker scripts". But explaining what those files do is rather an advanced topic, far beyond the scope of this question.
I am trying to learn C and I have this C file that I want to view the macros of. Is there a tool to view the macros of a compiled C file?
No. That's literally impossible.
The preprocessor is a textual replacement that happens before the main compile pass. There is no difference between using a macro and putting the code the macro expands to in its place.*
*Ignoring the debugger output. But even then you can do it if you know the right #pragma to tell it the file and line number.
They're always defined in the header file(s) that you've imported with #include, or that those files in turn #include.
This may involve a lot of digging. It may involve going into files that make no sense to you because they're not written for casual inspection.
Any macros of any importance are usually documented. They may use other more complex implementation-specific macros that you shouldn't concern yourself with ordinarily, but if you're curious how they work the source is all there.
That being said, this is only relevant if you have the source and, more specifically, a complete build environment. Once compiled, all these definitions, like the source itself, do not appear in the executable and cannot be inferred directly from the executable, especially not from a release build.
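If you do have the source and a gcc-style compiler, one quick way to see what a macro turns into is to run only the pre-processor stage (gcc -E) and read its output; a made-up example:

    /* square.c - MAX is an illustrative macro, not a standard one */
    #define MAX(a, b) ((a) > (b) ? (a) : (b))

    int biggest(int x, int y)
    {
        /* "gcc -E square.c" prints this function with the macro already
           expanded to ((x) > (y) ? (x) : (y)). */
        return MAX(x, y);
    }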
Unlike Java or C#, C compiles directly to machine code, so there's no way to easily reverse that back to the source. There are "decompilers" that try, but they can only really guess at the original source. VM-based languages like Java and C# only lightly compile the code, so there are a lot of hints as to how that code was generated, and reversing it is an easier process.
Specifically, my issue is that I have CUDA code that needs <curand_kernel.h> to run. This isn't included by default in NVRTC. Presumably then when creating the program context (i.e. the call to nvrtcCreateProgram), I have to send in the name of the file (curand_kernel.h) and also the source code of curand_kernel.h? I feel like I shouldn't have to do that.
It's hard to tell; I haven't managed to find an example from NVIDIA of someone needing standard CUDA files like this as a source, so I really don't understand what the syntax is. Some issues: curand_kernel.h also has includes... Do I have to do the same for each of these? I am not even sure the NVRTC compiler will run correctly on curand_kernel.h, because there are some language features it doesn't support, aren't there?
Next: if I've sent in the source code of a header file to nvrtcCreateProgram, do I still have to #include it in the code to be executed / will it cause an error if I do so?
A link to example code that does this or something like it would be appreciated much more than a straightforward answer; I really haven't managed to find any.
You have to send the "filename" and the source of each header separately.
When the preprocessor does its thing, it'll use any #include filenames as a key to find the source for the header, based on the collection that you provide.
I suspect that, in this case, the compiler (driver) doesn't have file system access, so you have to give it the source in much the same way that you would for shader includes in OpenGL.
So:
Include your header's name when calling nvrtcCreateProgram. The compiler will, internally, generate the equivalent of a std::map<string,string> containing the source of each header indexed by the given name.
In your kernel source, use #include "foo.cuh" as usual.
The compiler will use foo.cuh as an index or key into its internal map (created when you called nvrtcCreateProgram), and will retrieve the header source from that collection.
Compilation proceeds as normal.
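A minimal sketch of those steps (names and header contents are made up, and all error checking is omitted):

    #include <nvrtc.h>
    #include <stddef.h>

    void build_program(void)
    {
        const char *kernel_src =
            "#include \"foo.cuh\"\n"
            "__global__ void k(float *out) { out[0] = foo(); }\n";

        /* One entry per header: its source text, and the name that the
           kernel's #include uses to refer to it. */
        const char * const header_srcs[]  =
            { "__device__ float foo() { return 1.0f; }\n" };
        const char * const header_names[] = { "foo.cuh" };

        nvrtcProgram prog;
        nvrtcCreateProgram(&prog, kernel_src, "my_prog.cu",
                           1, header_srcs, header_names);
        nvrtcCompileProgram(prog, 0, NULL);
        /* ... check the log, fetch the PTX, then nvrtcDestroyProgram(&prog). */
    }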
One of the reasons that nvrtc provides only a "subset" of features is that the compiler plays in a somewhat sandboxed environment, without necessarily having all of the supporting tools and utilities lying around that you have with offline compilation. So, you have to manually handle a lot of the stuff that the normal nvcc + (gcc | MSVC | clang) combination provides.
A possible, but non-ideal, solution would be to preprocess the file that you need in your IDE, save the result and then #include that. However, I bet there is a better way to do that. If you just want curand, consider diving into the library and extracting the part you need (blech) or using another GPU-friendly rand implementation. On older CUDA versions, I just generated a big array of random floats on the host, uploaded it to the GPU, and sampled it in the kernels.
This related link may be helpful.
You do not need to load curand_kernel.h yourself and add it to the include "aliases" mechanism.
Instead, you can simply add the CUDA include directory to your (set of) include paths, e.g. by adding --include-path=/usr/local/cuda/include to your NVRTC compiler options.
(I do this in my GPU-kernel-runner test harness, by default, to be on the safe side.)
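A short sketch of what that looks like in code (prog is a previously created nvrtcProgram; the path shown is just the common default install location and may differ on your system):

    const char * const opts[] = { "--include-path=/usr/local/cuda/include" };
    nvrtcCompileProgram(prog, 1, opts);   /* prog from nvrtcCreateProgram */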
I'm attempting to write a C program incorporating OPENFILENAME and of course I need the header file. So, following the instructions provided with tcc, I downloaded the header files MinGW uses for the Win32 API (and the library files) and put them in the appropriate directories as instructed. However, when I come to compile the program, I get the following error:
In file included from sw1.c:2:
c:/prg/tcc/include/winapi/commdlg.h:503: declaration list expected
This seems rather odd, given it's a standard header. So, I look up the line and it's typedef __AW(CHOOSECOLOR) CHOOSECOLOR,*LPCHOOSECOLOR;, which doesn't look very valid to me, but then I'm not really a C expert and I've mainly been writing on Linux. I have no idea why it's going wrong, though, and no knowledge of how to fix it. Is it a bug in tcc?
As evidence that this should be possible, here is the appropriate passage from the tcc readme:
Header Files:
The system header files (except _mingw.h) are from the MinGW
distribution:
http://www.mingw.org/
From the windows headers, only a minimal set is included. If you need
more, get MinGW's "w32api" package.
I understand that this question is similar to Error when including Windows.h with TCC but my "windows.h" file does work - it's just this that doesn't.
Does anyone know how to solve this? I'm really at a loose end!
I know that, when we call any library function in our source code, the function definitions will be loaded into RAM (assuming dynamic linking) at run time.
But where exactly are the definitions of library functions stored?
If they are not in .c format, how are they stored?
If you need to get any function definition, you need to check the source code [That was obvious].
To get the function definitions which are part of a library [e.g. glibc], you have to get the source code of the library and browse through that. Usually, the library source code (.c format, if that's what you mean) will be compiled to produce a library, either
static [usually denoted by .a]
dynamic [usually denoted by .so, a shared object]
to be linked with some source code to produce the final binary.
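As a rough sketch (file and library names are made up, GNU-style tools assumed), here is a one-function library source and the commands that turn it into a static or dynamic library:

    /* foo.c - a trivial "library" source file */
    int add(int a, int b)
    {
        return a + b;
    }

    /* Typical build steps, shown here as comments:
         cc -c foo.c -o foo.o                    compile to an object file
         ar rcs libfoo.a foo.o                   pack it into a static library
         cc -shared -fPIC foo.c -o libfoo.so     build a dynamic (shared) library
       A program then links against it with, e.g.,  cc main.c -L. -lfoo  */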
So, yes, they are in .c format (or at least a human-readable format, I should say), which you can browse through.
Note: An online browsable version of glibc.
P.S. - Sorry if my answer is biased towards Linux implementations; however, it is still valid for a Windows (XP) PC.
The header file contains the definition. The header file named alloc.h, for instance, can be found in the include folder. You have to specify the environment you are using; it is saved with the extension .h.
You can find an example Windows implementation of malloc here. On Windows, it's mostly a wrapper for WinAPI functions such as HeapAlloc. You can find other implementations of this and other functions in various opensource libraries.
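To give a rough idea of what "a wrapper for HeapAlloc" means (this is only a sketch, not the actual CRT source; my_malloc and my_free are made-up names):

    #include <windows.h>
    #include <stddef.h>

    void *my_malloc(size_t size)
    {
        /* Allocate the block from the default process heap via the WinAPI. */
        return HeapAlloc(GetProcessHeap(), 0, size);
    }

    void my_free(void *p)
    {
        if (p != NULL)
            HeapFree(GetProcessHeap(), 0, p);
    }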
Note that on Windows, a compiler doesn't have to provide implementations for the standard C functions, as they are all available in msvcrt.dll. You can't get the source code of these implementations, but you can still disassemble the DLL and look at the assembly.