Tool to find global/static variables in C codebase

I have created a C++ wrapper (a class) for a larger C codebase which was originally written for a microprocessor. Now we want to simulate multiple instances of "agents" running this C code. As we want to see how they interact, we need to run them simultaneously. If possible, we'd like to run them in one process.
This failed at first because the C code used static variables and thus wasn't thread safe. We thought we had removed all static and global variables but still don't get the expected results. (Everything runs fine if we have only one instance.)
So my question is: instead of searching the whole code base for such variables, is there any tool that might assist in finding the problem? The C code was written in Keil μVision and is now compiled in Visual Studio 2008 Team Suite.
Thanks for suggestions!

If you can build it in a more Unix-ish environment, you should have a size command you can run on the .o files, which will tell you the data and bss segment sizes for each one. This is a very quick way to find variables of static storage duration (just look for nonzero sizes in either of those fields).
Perhaps you could try building with mingw or cygwin, or looking for a similar tool in the MSVC toolset.
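For illustration, here is a contrived file (hypothetical names) with the kinds of variables this catches; compiled to an object file, size reports nonzero data/bss for it, while a translation unit free of static-duration variables reports zero in both columns:

/* example.c - contrived sketch of variables with static storage duration */
static int counter;              /* zero-initialized: counted under bss        */
static int limit = 100;          /* explicitly initialized: counted under data */
int shared_state = 42;           /* externally visible global: data as well    */

int next_value(void)
{
    static int call_count;       /* function-local static: bss too             */
    ++call_count;
    return counter++ + limit + shared_state;
}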

Related

Verify generated library for encapsulation

I have two C projects prepared under Visual Studio 2015. The first project is simply a static library project, whereas the second one is a console application which uses the static library file generated by the first project.
I checked the static library file with the DUMPBIN tool in Windows and saw that there are many variables and functions exposed to the outside, which is very bad for encapsulation.
My question is: how can I be sure that I don't expose the functions which are supposed to be private? Do I need to check with that tool every time? My question also covers the variables. All my static global variables are exposed to the outside as well. How can I force them to be private?
I don't think that presence in dumpbin output can be considered "exposure". All your static global variables require some space allocation and probably initialization at runtime, so it's only natural for them to be present in dumpbin output. Also, if you are compiling with link-time code generation, then everything is actually "exposed".
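That said, if the goal is to keep helpers private to one translation unit, the standard C mechanism is internal linkage via static. Such symbols may still show up in DUMPBIN /SYMBOLS output (they live in the COFF symbol table), but no other object file can link against them. A minimal sketch with hypothetical names:

/* mylib.c - sketch: internal vs. external linkage */
static int s_cache_size = 64;   /* internal linkage: unreachable from other TUs   */

static int grow_cache(void)    /* private helper, not part of the link interface */
{
    return s_cache_size *= 2;
}

int mylib_do_work(void)        /* the one deliberately public entry point */
{
    return grow_cache();
}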

LoadLibrary() an EXE?

I have an executable (that I created using Visual C++ 10), and I need to use its capabilities from another program I wrote (same environment). Due to complex deployment requirements which I won't go into, building a DLL from the required functionality and loading it in both programs is not something I can do.
So I thought that I could __declspec(dllexport) some functions in the EXE, and then LoadLibrary() would let me GetProcAddress() them.
Obviously this can't be done, though when I started looking into it, it seemed feasible.
Specifically, when you __declspec(dllexport) functions in an EXE project, Visual C++ also generates a lib file for dynamic linking - so you don't even need to use LoadLibrary() - just link against the resulting lib and call the functions.
Unfortunately, the main problem is that when you declare the resulting file as an EXE, Visual C++ adds the "CRTmain" entry point to the resulting file, instead of the "CRTDLLmain" that a DLL gets. When your main program LoadLibrary()s the EXE, Windows doesn't call the "CRTDLLmain" entry point (because it doesn't exist), the C runtime for the module doesn't get initialized, and as a result all interesting work (such as memory allocation) fails with interesting(*) runtime exceptions.
So my question is: is there a way to get Visual C++ to build both the "CRTmain" entry point and the "CRTDLLmain" entry point into the resulting file?
(*) "Interesting" as in an old Chinese curse.
Yes, it is possible.
http://www.codeproject.com/Articles/1045674/Load-EXE-as-DLL-Mission-Possible
The idea is a) to patch the IAT and b) to call the CRT before calling your exports.
Simply, no!
The problem is that the CRT in the EXE you want to load uses some global variables, and your main EXE does the same. So how should memory allocation work?
If you want such a structure, you must use a DLL, so that multithreading, CRT initialization, and all the other machinery are handled properly. You need this!
But what about COM Automation? Wouldn't that be an easy way to use your code in one EXE from another?
The short answer is "no". After looking far and wide, there is no way to get VC++ to do what I want, and quite likely no other compiler can either.
The main issue is that the main() entry point most people know and love is not the real entry point in C++ executables: the compiler needs to do a lot of initialization work to get the "C++ Run Time library" to a usable state, as well as initialize globals, statics and the like. This initialization uses different code in shared libraries than in executables, and there is no way to make one behave like the other.
One thing that possibly can be done is to build the shared functionality into a DLL, have the primary executable embed the DLL as a resource, and load it from a memory-mapped region of the executable file (there are several code samples showing how to do this with VC++ on Stack Overflow and elsewhere on the web). Now another program can do the same thing by loading the DLL from the bundling executable.
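As a rough sketch of that idea - simplified to extract the embedded DLL to a temporary file instead of loading it from a memory-mapped region, and with the resource name invented for illustration - the loading side might look like this:

#include <windows.h>
#include <stdio.h>
#include <string.h>

/* Assumes the DLL was embedded as an RCDATA resource named "SHARED_DLL". */
HMODULE load_embedded_dll(void)
{
    HRSRC res;
    HGLOBAL blob;
    DWORD size;
    char path[MAX_PATH];
    FILE *f;

    res = FindResource(NULL, TEXT("SHARED_DLL"), RT_RCDATA);
    if (res == NULL)
        return NULL;
    blob = LoadResource(NULL, res);
    size = SizeofResource(NULL, res);
    if (blob == NULL || size == 0)
        return NULL;

    /* Write the embedded image out, then take the normal DLL load path,
       so DllMain and CRT initialization run as usual. */
    GetTempPathA(MAX_PATH, path);
    strcat_s(path, MAX_PATH, "shared.dll");
    if (fopen_s(&f, path, "wb") != 0)
        return NULL;
    fwrite(LockResource(blob), 1, size, f);
    fclose(f);

    return LoadLibraryA(path);
}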

C object file compatibility between computers

First I want to state for the record that this question is related to school/homework.
Let’s say computers CP1 and CP2 both share the same operating system and machine language. If a C program is compiled on CP1, then in order to move it to CP2, is it necessary to transfer the source code and recompile on CP2, or is it sufficient to simply transfer the object files?
My gut answer is that the object files should suffice. The C code is translated into assembly by the compiler and assembled into machine code by the assembler. Because the architectures share the same machine language and operating system, I don't see a problem.
But the more I think about it, the more confused I’m starting to get.
My questions are:
a) Since it's referring to object files and not executables, I'm assuming there has been no linking. Would there be any problems that surface when linking on CP2?
b) Would it matter if the code used C11 standard on CP1 but the only compiler on CP2 was C99? I'm assuming this is irrelevant once the code has been compiled/assembled.
c) The question doesn't specify shared/dynamically linked libraries. So this would only really work if the program had no dependencies on .dll/.so/.dylib files, or else these would be required on CP2 as well.
I feel like there are so many gotchas, and considering how vague the question is I now feel that it would be safer to simply recompile.
Halp!
The answer is: it depends. When you compile a C program and move the object files to link on a different computer, it should work. But because of factors such as endianness or name mangling, your program might not work as intended, and might even crash when you try to run it.
C11 is not supported by a C99 compiler, but that does not matter once the source has been compiled and assembled.
As long as the source is compiled against the libraries on one machine, you don't need those libraries to link or run the file(s) on the other computer (static libraries only; dynamic libraries will have to be present on the computer you run the application on). That said, you should make the program self-contained so you don't run into the same problems as before, where the program doesn't work as intended or crashes.
You could get a compiler that supports EABI so you don't run into these problems. Compilers that support the EABI create object code that is compatible with code generated by other such compilers, thus allowing developers to link libraries generated with one compiler with object code generated with a different compiler.
I have tried to do this before, but not a whole lot, and not recently. Therefore, my information may not be 100% accurate.
a) I've already heard the term "object files" used to refer to linked binaries - even though that's kinda inaccurate. So maybe they mean "binaries". I'd say linking on a different machine could be problematic if it has a different compiler - unless object file formats are standardized, which I'm not sure about.
b) Using different standards or even compilers doesn't matter for binary code - if it's linked statically. If it relies on functions from a dynamic lib, there could be problems. Which answers c) as well: Yes, this will be a problem. The program won't start if it doesn't have all required dynamic libs in the correct version. Depends on linking mode (static vs. dynamic), again.
Q: Let’s say computers CP1 and CP2 both share the same operating system and machine language.
A: Then you can run the same .exe's on both computers.
Q: If a C program is compiled on CP1, in order to move it to CP2, is it necessary to transfer the source code
A: No. You only need the source code if you want to recompile. You only need to recompile if it's a different, incompatible CPU and/or OS.
"Object files" are generally not needed at all for program execution:
http://en.wikipedia.org/wiki/Object_files
An object file is a file containing relocatable format machine code
that is usually not directly executable. Object files are produced by
an assembler, compiler, or other language translator, and used as
input to the linker.
An "executable program" might need one or more "shared libraries" (aka .dll's). In which case the same restrictions apply: the shared libraries, if not already resident, must be copied along with the .exe, and must also be compatible with the CPU and OS.
Finally, "scripts" do not need to be recompiled. You may copy the script freely from computer to computer. But each computer must have an "interpreter" to run the script: a Perl script needs a Perl interpreter, a Python script a python interpreter, and so on.

Establish call tree for C code

I have a large code base written in C, but I did not write all of it myself. I wish to create an overview of the call structure in the code for reference. That is: I wish to know what (non-standard) functions are called by the different functions in the code, and thus create a hierarchy or tree of the different functions. Are there any free, Unix-compatible programs (that means no Visual Studio, but a Vim plugin or such would be neat) that can do this, or will I have to write something myself?
Doxygen does that too; it has to be enabled, though.
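If I remember the option names correctly, the relevant Doxyfile settings are roughly these (Graphviz's dot must be installed for the graphs to be rendered):

# Doxyfile settings to turn on call-graph generation
EXTRACT_ALL  = YES   # document everything, not only commented entities
HAVE_DOT     = YES   # use Graphviz's dot to draw diagrams
CALL_GRAPH   = YES   # for each function, graph the functions it calls
CALLER_GRAPH = YES   # and the functions that call it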
For an overview of available tools see
http://en.wikipedia.org/wiki/Call_graph
There is a Vim plugin C Call-Tree Explorer called CCTree
http://www.vim.org/scripts/script.php?script_id=2368
As you mentioned a Vim plug-in, check out http://sites.google.com/site/vimcctree/. It uses CScope to generate the tree, so you will need to first generate a CScope db of your source files.
Have a look at http://www.gson.org/egypt/ This uses GCC to process the code and extracts the interdependencies within the program from the AST it emits.
gprof will do that. It also generates an execution profile, but in doing so it creates a call tree.
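To make concrete what these tools report, here is a contrived toy program and the hierarchy any of them should recover from it:

/* toy.c - a tiny program with an obvious call hierarchy */
#include <stdio.h>

static void leaf(void)     { puts("leaf"); }
static void branch_a(void) { leaf(); }
static void branch_b(void) { leaf(); }

int main(void)
{
    branch_a();
    branch_b();
    return 0;
}

/* Expected call tree (each tool renders it in its own way):
 *   main
 *   +-- branch_a
 *   |   +-- leaf
 *   +-- branch_b
 *       +-- leaf
 */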
I just downloaded SourceTrail (https://github.com/CoatiSoftware/Sourcetrail/releases) and it did what I wanted, which was pretty close to what I think you want.
(What I wanted was to find out what routines called the function I was considering changing, or needed to understand).
Note that it is no longer maintained, but it did exactly what I wanted. It runs under Windows and Linux, and made finding who calls a function pretty trivial (as well as following that function's call tree down as needed). If you care, it has a GUI (is a GUI? whatever).
It does the parsing itself, but it didn't take very long to run, perhaps about the same time or a little less than compiling the code.
But if you want text only, or don't want to use a gui, or don't want to have it scan the code, this isn't for you.
(Notes - in my case, I was hyper-focused on one or two functions and didn't care what system functions were being called. I spent some time stubbing out all the include files that were needed, since I ran the parse on one machine (a Linux machine) that didn't have all the include files needed for the Windows program I was looking at, and then did the exploration on a different (Windows) machine. Which, I should mention, worked perfectly. I just copied the entire source tree from my Linux machine to my Windows machine (which included the Sourcetrail project file), loaded Sourcetrail, and had it load the project - done.)

How do I define HAVE_STDIO_H in VC++ 2005?

I just built an updated version of SDL.dll, an open-source C DLL that my Delphi project uses, with the Express edition of Visual C++ 2005. I dropped it in the folder with my EXE and tried to run it, but it won't load:
The procedure entry point SDL_RWFromFP could not be located in the dynamic
link library SDL.dll.
Now C never was my strong point, but I remember enough of it from college to try and track this one down. I went poking around in the source code to see what had happened to this function, and I found it grayed out, beneath a preprocessor directive:
#ifdef HAVE_STDIO_H
IIRC, STDIO is the standard C I/O library. I assume this means that it's not available. Anyone know why that would be and how to fix it? Is this a Visual C++ issue or an SDL one?
Most often in the Unix/Linux world, names like HAVE_STDIO_H indicate that the code has been 'autoconfiscated' (which is the official term used to describe the state of having been made to work with the 'autotools' such as 'autoconf'). In such a set up, the configure process would determine whether <stdio.h> was available and would set #define HAVE_STDIO_H 1 in the config.h file that it generates. The compilation would then discover that the platform has <stdio.h> and would compile the matching code (the stuff that is currently greyed out).
Adapting this to your Windows environment - somewhat less than 100% confidently, since there could be some other significance to HAVE_STDIO_H on Windows - you might decide that it would be OK to include -DHAVE_STDIO_H in the command-line options when you run the compiler. Or you might create the config file by hand and define -DHAVE_CONFIG_H (which is the normal way to indicate that configuration settings are in the file 'config.h'). In 'config.h', you'd have #define HAVE_STDIO_H 1 as mentioned above.
Note: on Unix, you normally find a shell script called 'configure' which you run to create the config.h file. If you have Cygwin, there's an outside chance that you can use that script on Windows - I've just checked that an autoconfiscated package I created on Solaris was configurable on Windows under Cygwin, and it mostly worked - all except some network handling. I'd not guarantee that it will always work (but it's software - guaranteeing anything is pretty dangerous). I should add that the problem is in my auto-configuration code (the tests for the network functionality clearly aren't quite correct), and not in Cygwin per se. If I'd done the job properly, it would have worked. (Someone said "There is no portable code; there is only code that has been ported". That applies here.)
You do need a good simulation of a Unix environment. MinGW might also work.
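A minimal sketch of the hand-written variant described above - one definition in config.h plus the guards in the source (the real SDL configuration is of course more involved):

/* config.h - written by hand for this Windows build */
#define HAVE_STDIO_H 1

/* in the source, compiled with /DHAVE_CONFIG_H (cl.exe) or -DHAVE_CONFIG_H (gcc): */
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif

#ifdef HAVE_STDIO_H
#include <stdio.h>
/* ...the previously greyed-out code, e.g. SDL_RWFromFP, is compiled again... */
#endif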
What do the preprocessor directives above it do?
Most likely, this is simply for determining some properties of your compiler, and it's greyed out because whatever the macro signifies does not apply to your compiler.
Your problem is most likely elsewhere.
Since it can't find the function SDL_RWFromFP, you should probably see if that function is for some reason disabled by a preprocessor directive as well.
However, my guess is that the function exists, but does not get marked as __declspec( dllexport )
Without that, it won't get exposed so programs loading the DLL can call it. Most likely you have to #define something to specify that you want to create a DLL, which will enable the necessary preprocessor magic to insert the dllexport part in front of the function.
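The usual shape of that preprocessor magic, sketched with illustrative macro names (SDL defines its own variant for this):

/* mylib.h - common export-macro pattern; the names here are made up */
#ifdef BUILDING_MYLIB                      /* defined only while building the DLL  */
#define MYLIB_API __declspec(dllexport)    /* emit the symbol from the DLL         */
#else
#define MYLIB_API __declspec(dllimport)    /* import it when compiling client code */
#endif

MYLIB_API int my_function(int x);          /* exported only if BUILDING_MYLIB was set */

Without the dllexport branch taking effect during the DLL build, the function is compiled in but never exported, which matches the symptom above.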
Normally, you should not have to add any HAVE_STDIO_H: those are not meant to be public anyway (although sadly many projects pollute the public namespace with those).
I would guess you did not build SDL correctly - or that Visual Studio 2005 is not well supported by SDL (I don't know much about SDL, so these are really wild guesses without much information). Is there any test suite for SDL, so that you can test the built DLL?
