I want to know how I can see the functions and the contents of a library in C. How do I do that?
Is nm libconfig.a | c++filt what you are looking for?
In general, you cannot see the source code of functions of a library (e.g. some libfoo.a or libfoo.so.* Linux files) compiled from C source code.
You should ask for the documentation of that library. If it is developed in house, you may need to discuss with colleagues.
I do like open-source libraries such as GNU libc or GTK, because you are allowed to download their source code and study it.
Be aware of code obfuscation techniques and of Rice's theorem. Decompilation of binary libraries is legally forbidden in some countries.
But a lot of libraries are not open source.
On Linux, tools like objdump(1) or readelf(1) could be helpful.
Notice that some libraries (e.g. GNU lightning) generate new functions and machine code at runtime. Hence the notion of a function might not be as simple as you think.
On Linux, programs like my manydl.c, or Bismon, or RefPerSys, or SBCL, generate code at runtime: manydl.c and Bismon generate C code, compile it as a plugin, and then dlopen(3) it.
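For a flavour of that generate-compile-dlopen approach, here is a minimal, self-contained sketch (not the actual manydl.c; the file names and the generated function gen_fun are invented for the example, and it assumes gcc is installed and the program is linked with -ldl):

    /* Sketch: emit C code, compile it as a shared object, load it with
       dlopen(3) and call the freshly generated function.
       Build with: cc demo.c -ldl */
    #include <stdio.h>
    #include <stdlib.h>
    #include <dlfcn.h>

    int main(void)
    {
        /* 1. Generate some C code at runtime. */
        FILE *src = fopen("/tmp/gen.c", "w");
        if (!src) { perror("fopen"); return 1; }
        fprintf(src, "int gen_fun(int x) { return x * x + 1; }\n");
        fclose(src);

        /* 2. Compile it as a plugin (a position-independent shared object). */
        if (system("gcc -O2 -fPIC -shared /tmp/gen.c -o /tmp/gen.so") != 0)
            return 1;

        /* 3. Load the plugin and look up the generated function by name. */
        void *handle = dlopen("/tmp/gen.so", RTLD_NOW);
        if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        int (*gen_fun)(int) = (int (*)(int)) dlsym(handle, "gen_fun");
        if (!gen_fun) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        printf("gen_fun(7) = %d\n", gen_fun(7));
        dlclose(handle);
        return 0;
    }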
Jacques Pitrat's book Artificial Beings: The Conscience of a Conscious Machine (ISBN 978-1-84821-101-8) explains why generating code at runtime is interesting. A relevant concept is partial evaluation.
In recent months, I have been seeing mentions of "LLVM" all over the place. I've looked it up, but the description of a "modern compiler infrastructure" doesn't really tell me anything. I can't find much about it, other than some mention of a C compiler that comes along with it (which doesn't seem to be any different from any other C compiler out there).
Is there some difference between this LLVM thing and any other compiler, say, GCC? Or is it an over-hyped replacement benefiting from being newer than the competition?
There is some academic literature on the matter; I recommend the AOSA book chapter on LLVM, written by its principal author (Chris Lattner).
LLVM is a collection of libraries built to support compiler development and related tasks. Each library supports a particular component in a typical compiler pipeline (lexing, parsing, optimizations of a particular type, machine code generation for a particular architecture, etc.). What makes it so popular is that its modular design allows its functionality to be adapted and reused very easily. This is handy when you're developing a compiler for an existing language to target a new hardware architecture (you only have to write the hardware-specific components; all the lexing, parsing, machine-independent optimization, etc. are handled for you), or when you're developing a compiler for a new language (all the back-end stuff is handled for you), or when you're doing something compiler-adjacent (like analyzing source code, embedding a language in a larger application, etc.).
In order to support this, LLVM employs a pretty sophisticated internal representation (called the LLVM IR, creatively enough) that is basically assembly language for a theoretical hardware architecture designed to make targeting it with a compiler very easy. Most of the LLVM libraries take the IR in, operate on it, and output the modified IR, supporting the project's aim of modularity. This is in contrast to GCC, which (historically, I haven't checked recently) has a less complete IR and thus the separate phases of compilation are very tightly coupled because they have to share a lot of information.
Clang is the flagship compiler built on the LLVM framework.
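To make the IR less abstract: compiling the trivial C function below with clang -S -emit-llvm add.c produces a textual add.ll file containing the LLVM IR, roughly a define i32 @add(i32 %a, i32 %b) followed by an add and a ret instruction (the exact output varies with the clang version and optimization level). That IR is the common currency the LLVM libraries pass between phases.

    /* add.c -- a trivial function to inspect the IR of.
       Emit textual LLVM IR with:  clang -S -emit-llvm add.c
       Then look at the generated add.ll file. */
    int add(int a, int b)
    {
        return a + b;
    }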
I have been developing for Windows for a long time, mainly with the WinAPI (and .NET).
I've started learning basic Linux, and I have some questions regarding the differences:
In Windows I have barely used the C standard library.
If I needed an API, I would search MSDN and find the appropriate library/function.
From what I can tell, in Linux the C standard library is EVERYTHING.
All the code samples I have seen use the standard library (instead of some Linux-internal functions, like a Linux "CreateFile").
Is this really how "proper" Linux code is written? Using the C standard library?
If I wish to read a file or allocate memory, are fopen/malloc the way to go?
If the answer to my first question is yes (and I guess it will be):
The C standard library is POWERLESS compared to the powerful WinAPI.
Let's say I wish to get a list of running processes (CreateToolhelp32Snapshot) or create a thread or a process (CreateThread/CreateProcess); how should I do that in Linux?
Documentation:
In Windows, everything I need can be found on MSDN.
If I have a "how do I do X" question (like the ones above), where should I go?
Where is my main source of documentation?
Thanks a lot,
Michael.
Perhaps you've forgotten that the standard C library isn't environment-specific: it specifies least-common-denominator functionality among all systems that can run C programs, and C runs on systems that don't even have processes.
If you want consistent cross-platform GUI/multithreading/etc. APIs, pick a likely-looking framework. You might start with Qt; it's quite comprehensive and produces good-looking, near-native UIs on a host of systems.
It's not generally considered polite to point this out, but most questions that get asked publicly are asked by people who lack the discipline to do even simple research. Once people can do that, they don't need to ask very many, and that's why what you see is so ... trivial. You're past that. For more options, you could start here.
For more general-purpose tools, the top hit on a search for important linux tools might be helpful.
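As a concrete follow-up to the question's examples: on Linux these jobs are usually done with the POSIX APIs rather than with the C standard library alone, and the man pages (man 2, man 3) plus the POSIX specification are the closest equivalent of MSDN. Below is a minimal sketch, assuming a POSIX system and linking with -pthread; listing processes (the CreateToolhelp32Snapshot part) is typically done by reading the /proc filesystem, which is not shown here.

    /* Rough POSIX counterparts to the Win32 calls named in the question:
       CreateThread  -> pthread_create
       CreateProcess -> fork + exec
       CreateFile    -> open (or fopen from the C standard library)
       Build with: cc demo.c -pthread */
    #include <stdio.h>
    #include <unistd.h>
    #include <pthread.h>
    #include <sys/wait.h>

    static void *worker(void *arg)
    {
        printf("hello from a thread, arg=%s\n", (const char *) arg);
        return NULL;
    }

    int main(void)
    {
        /* Thread: pthread_create instead of CreateThread. */
        pthread_t tid;
        if (pthread_create(&tid, NULL, worker, "42") != 0)
            return 1;
        pthread_join(tid, NULL);

        /* Process: fork + exec instead of CreateProcess. */
        pid_t pid = fork();
        if (pid == 0) {                 /* child */
            execlp("ls", "ls", "-l", (char *) NULL);
            _exit(127);                 /* only reached if exec failed */
        }
        waitpid(pid, NULL, 0);          /* parent waits for the child */
        return 0;
    }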
Let's assume that I wrote a primitive bootloader in assembly language. The computer is still in real mode. Now I want to write a primitive kernel and a shell in C.
Questions:
1. Do I need to write a C compiler in assembly for this new OS, or can I use a C compiler running on a different OS? I guess it could be either!
2. If I use a C compiler from a different OS, can functions like printf() be compiled to target the BIOS's functions instead of the OS's API, to avoid dependencies?
3. If my bootloader switches the computer into protected mode, will the kernel need to implement the equivalent of the BIOS functions?
4. Assuming a YES to question 3: if I use a C compiler from a different OS, what is required to make it target the new OS's kernel functions? Rewriting the header files?
(EDIT) P.S.: These are theoretical questions. I don't need specific details about the actual implementation; I just want to validate the concepts. Don't feel obligated to answer all of them!
1. You can do either, but I would strongly recommend just using a standard compiler; writing a good compiler is very complex and time-consuming.
2. As long as you do not have an implementation of the standard library, just don't use it. You can tell C compilers to assume that there is no standard library and write your own printf-like functions.
3. You lose access to the BIOS functions once you leave real mode, so in protected mode you do need to reimplement them if you need their services.
4. Not much change is required, actually. Object files are incomplete and have to be linked against something that implements the standard library functions and whatever other libraries they use. All you need to do is link against a (possibly self-written) library that implements the required functions.
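To illustrate points 2 and 3: a freestanding kernel supplies its own output routine instead of a hosted printf. Below is a minimal sketch for 32-bit protected mode that writes straight into the VGA text buffer; the function name kputs is invented for the example, and it assumes a build with flags such as gcc -m32 -ffreestanding -fno-builtin -nostdlib.

    /* Freestanding sketch: no C standard library and no BIOS calls.
       In protected mode the VGA text buffer lives at physical 0xB8000;
       each cell is one ASCII byte plus one colour attribute byte. */
    #include <stdint.h>   /* freestanding header, usable without libc */

    static volatile uint16_t *const vga = (volatile uint16_t *) 0xB8000;
    static unsigned cursor;            /* cell index into the 80x25 buffer */

    void kputs(const char *s)
    {
        while (*s) {
            /* 0x07 = light grey on black. */
            vga[cursor++] = (uint16_t) ((0x07 << 8) | (uint8_t) *s++);
        }
    }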
I was reading "Operating Systems Design and Implementation" by Andrew S. Tanenbaum. It has Source code of MINIX when approaching the back of the book. I know C programming and have also studied the subject of Operating Systems, but still I was not able to understand the source code. Very few lines were such which could be understood.
I found many new libraries included over there. These libraries are not taught the syllabus of my university. Also, when asked teachers of this, even they don't understand the code. So, where can whole C programming be learnt, with all of its libraries? Because university doesn't go in depths.
In a nut shell, how can we learn C programming or any other programming language to such a depth, that by just looking at any code, one can tell what is the code gonna do.
A long time ago I used to refer to a book by Plauger; you may find it useful if you implement your own versions and then compare, or if you just study them. Ref: http://www.amazon.com/The-Standard-Library-P-J-Plauger/dp/0131315099
You need not learn all the libraries; you have to learn how to use them. Libraries are provided as object files linked into your C programs, so you will not find their source code alongside yours. Learn how to use the libraries to get your task done. Here is a reference manual for the standard C library; you can go through it.
I'm looking at D's licensing and see that the frontend is open source but the backend is not; what is the backend?
Why did GNU make gdc? Is it related to licensing?
There are different compilers with different goals. The frontend analyzes the source code, whereas the backend performs optimization and machine code generation. Since the frontend is open source, it can be shared by multiple compilers.
DMD is the default implementation of D; its backend is closed source. It is feature-complete, but may not be the best compiler performance-wise.
GDC uses the mature GNU Compiler Collection as its backend. The same backend is widely used for C and C++ compilation and is capable of advanced optimizations.
LDC targets the LLVM platform. This allows some interesting things like fast compilation, portable bitcode, and JIT compilation.
As the frontend is shared across all compilers, one source file will parse the same way with every compiler. Contrast this with C or C++ dialects.
DMD is just a reference implementation of the D compiler, just as, say, GlassFish is the reference implementation of the enterprise Java application server.
DMD's backend has its roots in the DigitalMars C/C++ compiler. That makes sense, since D's original creator is also the author of the DigitalMars C/C++ compiler. Walter could not legally open-source the backend completely, because part of it was written while it was in the hands of Symantec...
Second, GNU did not make GDC; it was made by a few enthusiasts and will hopefully soon be merged into the GCC tree. GDC is GPL, simple as that.
LDC was also mentioned; it uses LLVM as its backend.
What really matters is that the D frontend is open source. The fact that DMD's backend is not is irrelevant, as there are so many alternatives. Both the GCC and LLVM backends are superior to the DMD backend anyway.
If you are into compiler/interpreter design, I suggest you take a look at the SDC, MCI, and DIL projects. You can find more information about them at http://wiki.dlang.org.