I am debugging a very large C file, approximately 70,000+ lines of code. The debugger is not functioning properly, even though the code compiles correctly. Is there a flag or setting that needs to be enabled to debug a file this large?
Edit:
I moved the function from the bottom of the file to the top, and the debugger now steps through it as expected. I don't know the reason.
The easiest solution is to split the file in two, keeping each file under 65,535 lines (presumably the point at which 16-bit line numbers in the debug information overflow). There is rarely a good reason to have files that big. Optimization used to be a weak excuse for it, but Visual Studio nowadays has link-time code generation (/LTCG) for that.
I'm writing my "main" S-function based on the MATLAB template (with mdlStart, mdlOutputs, etc.), which communicates with some ANSI C files I already had, and this S-function is used in Simulink. I compiled the files correctly with mex, and to debug I'm using Visual Studio 2015. I can set and hit breakpoints, so that part is working fine.
The problem is that after the 'mdlOutputs' function finishes (where all the contents are correctly printed to MATLAB), the debugger breaks inside 'simulink.c'. After that, debugging stops with a message saying that "libmex.pdb" cannot be found.
If I run the Simulink model without Visual Studio in the loop, MATLAB just crashes and stops working.
So, any idea how to properly check where the error is coming from? Also, do you have any clue why the process crashes after leaving mdlOutputs and before entering mdlTerminate? What happens in between these two functions?
I'm using Windows 7 64-bit and MATLAB 2012b (I'm going to try running it in 2015b as well).
I think I finally found the solution (even though I don't quite understand it yet).
In mdlInitializeSizes(SimStruct *S), I replaced ssSetOptions(S, 0); with ssSetOptions(S, SS_OPTION_EXCEPTION_FREE_CODE);, as suggested here.
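For anyone hitting the same crash, this is roughly where the change sits in the usual Level-2 template. The sketch below is trimmed and partly hypothetical (the S-function name and the omitted port/parameter setup are placeholders); only the ssSetOptions line is the actual change.

#define S_FUNCTION_NAME  my_sfun       /* hypothetical S-function name */
#define S_FUNCTION_LEVEL 2
#include "simstruc.h"

static void mdlInitializeSizes(SimStruct *S)
{
    ssSetNumSFcnParams(S, 0);          /* ports, states, etc. as in the template */

    /* was: ssSetOptions(S, 0); */
    ssSetOptions(S, SS_OPTION_EXCEPTION_FREE_CODE);
}

static void mdlInitializeSampleTimes(SimStruct *S)
{
    ssSetSampleTime(S, 0, INHERITED_SAMPLE_TIME);
    ssSetOffsetTime(S, 0, 0.0);
}

static void mdlOutputs(SimStruct *S, int_T tid) { /* ... */ }
static void mdlTerminate(SimStruct *S) { /* ... */ }

#ifdef MATLAB_MEX_FILE
#include "simulink.c"                  /* MEX-file interface mechanism */
#else
#include "cg_sfun.h"                   /* code generation registration */
#endif

Per the Simulink documentation, SS_OPTION_EXCEPTION_FREE_CODE declares that the S-function never calls routines that can throw exceptions (such as mexErrMsgTxt), so Simulink skips its exception-handling setup around the callbacks; if your code does call such routines, setting this option is unsafe.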
I am aiming to reduce the size of the executable for my C project, and I have tried all the compiler/linker options, which have helped to some extent. My code consists of many separate files. My question is whether combining all the source code into a single file would help with the optimization I'm after. I read somewhere that a compiler can optimize better if it finds all the code in a single file instead of in multiple separate files. Is that true?
A compiler can indeed optimize better when it finds the code it needs in the same compilable (*.c) file. If your program is longer than 1000 lines or so, you'll probably regret putting all the code in one file, because doing so will make the program hard to maintain; but if it is shorter than 500 lines, you might try the single file and see whether it helps.
The crucial consideration is how often code in one compilable file calls or otherwise uses objects (including functions) defined in another. If there are few transfers of control across this boundary, then erasing the boundary will not help performance appreciably. Therefore, when coding for performance, the key is to put tightly related code in the same file.
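To illustrate the point (the file and function names below are hypothetical): when a small helper sits in the same translation unit as the code that calls it, the compiler can inline it into the loop; if the helper were defined in another .c file, an ordinary non-LTO build would have to emit a real call on every iteration.

/* one_file.c -- everything in one translation unit */
static int scale(int x)                /* static: visible only in this file */
{
    return 3 * x + 1;
}

int sum_scaled(const int *a, int n)
{
    int s = 0;
    for (int i = 0; i < n; ++i)
        s += scale(a[i]);              /* easily inlined: no call, arithmetic folded into the loop */
    return s;
}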
I like your question a great deal. It is the right kind of question to ask, in my view; and, though the complete answer is not simple enough to treat fully in a Stack Exchange answer, your pursuit of it will teach you much. Though you may not yet realize it, your question is really about linking, a subject every advancing programmer eventually has to learn. It touches on symbol tables, inlining, the in-place construction of return values, and several other subtle factors.
At any rate, if your program is shorter than 500 lines or so, then you have little to lose by trying the single-file approach. If longer than 1000 lines, then a single file is not recommended.
It depends on the compiler. Intel C++ Composer XE, for example, can automatically optimize across multiple files (when building with icc -fast *.c *.cpp on Linux or icl /fast *.c *.cpp on Windows).
When you use Microsoft Visual Studio, or a derived product (like Atmel Studio for microcontrollers), every source file is compiled on its own (i.e. one cl, icl, or gcc command is issued for every .c and .cpp file in the project). This means no cross-file optimization.
For microcontroller projects I sometimes have to put everything in a single file just to make it fit in the controller's limited flash memory. If your compiler/IDE behaves like Visual Studio, you can use a trick: select all the source files and exclude them from the build process (but leave them in the project), then create one file (I always call it whole_program.c) and #include every single source (i.e. non-header) file in it. (Note that including .c files is frowned upon by many high-level programmers, but sometimes you have to do it the dirty way, and with microcontrollers that's actually more often than not.)
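A minimal sketch of such a whole_program.c (the included file names are made up; the point is that only this one file is handed to the compiler, so it sees the entire program as a single translation unit):

/* whole_program.c -- the only file that actually gets compiled */
#include "uart.c"      /* hypothetical source files from the project */
#include "adc.c"
#include "main.c"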
My experience has been that with GNU/gcc the optimization happens within a single file plus its includes, producing a single object. With clang/LLVM it is quite easy, and I recommend it: do NOT optimize at the clang step; use clang to get from C to bitcode, then use llvm-link to link all of your bitcode modules into one bitcode module, and then optimize that whole module, so all source files are optimized together; llc adds more optimization as it lowers to the target. You get the best results if you tell clang, via the target-triple command-line option, what your ultimate target is. To do the same thing on the GNU path, either use includes to make one big file compiled into one object, or hope for a machine-code-level optimizer; beyond the few things the linker does, that is where it would have to happen. Maybe gnu has an exposed IR file format, optimizer, and IR-to-target tool, but I think I would have seen that by now.
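A sketch of that clang/LLVM flow as command lines (file names are illustrative; add clang's -target <your-triple> option when cross-compiling):

clang -O0 -emit-llvm -c a.c -o a.bc
clang -O0 -emit-llvm -c b.c -o b.bc
llvm-link a.bc b.bc -o whole.bc        # one bitcode module for the whole program
opt -O2 whole.bc -o whole.opt.bc       # optimize across all source files at once
llc whole.opt.bc -o whole.s            # lower to target assembly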
At http://github.com/dwelch67 a number of my projects, although very simple programs, have both LLVM and GNU builds for the same source files; in the LLVM builds I make one binary from unoptimized bitcode and another from optimized bitcode. (LLVM's optimizer has problems with small while loops and sometimes generates non-working code. A quick check to see whether the problem is you or them: try the non-optimized LLVM binary and the GNU binary and see if they all behave the same (you), or if only the optimized LLVM binary misbehaves (them).)
I have created my very own (very simple) byte code language, and a virtual machine to execute it. It works fine, but now I'd like to use gcc (or any other freely available compiler) to generate byte code for this machine from a normal C program. So the question is: how do I modify or extend gcc so that it can output my own byte code? Note that I do NOT want to compile my byte code to machine code; I want to "compile" C code to (my own) byte code.
I realize that this is a potentially large question, and it is possible that the best answer is "go look at the gcc source code". I just need some help with how to get started with this. I figure that there must be some articles or books on this subject that could describe the process to add a custom generator to gcc, but I haven't found anything by googling.
I am busy porting gcc to an 8-bit processor we designed earlier. It is kind of a difficult task for our machine because it is 8-bit and we have only one accumulator, but with more resources it becomes easier. This is how we are managing it with gcc 4.9 under Cygwin:
Download gcc 4.9 source
Add your architecture name to config.sub: around line 250, look for # Decode aliases for certain CPU-COMPANY combinations. and add | my_processor \ to that list.
In the same file, look for # Recognize the basic CPU types with company name. and add yourself to that list: | my_processor-* \
Open the file gcc/config.gcc and look for case ${target} (around line 880); add your entry as follows:
;;
my_processor*-*-*)
    c_target_objs="my_processor-c.o"
    cxx_target_objs="my_processor-c.o"
    target_has_targetm_common=no
    tmake_file="${tmake_file} my_processor/t-my_processor"
    ;;
Create a folder gcc-4.9.0\gcc\config\my_processor
Copy the files from an existing port and just edit them, or create your own from scratch. In our project we copied all the files from the msp430 port and edited them all.
You should have the following files (not all files are mandatory):
my_processor.c
my_processor.h
my_processor.md
my_processor.opt
my_processor-c.c
my_processor.def
my_processor-protos.h
constraints.md
predicates.md
README.txt
t-my_processor
Create a directory gcc-4.9.0/build/object
From inside it, run ../../configure --target=my_processor --prefix=<path for my compiler> --enable-languages="c"
make
make install
Do a lot of research and debugging.
Have fun.
It is hard work.
I also designed my own "architecture" with my own byte code, for example, and wanted to compile C/C++ code for it with GCC. This is how I did it:
First, read everything about porting in the GCC manual.
Also, don't forget to read the GCC Internals documentation.
Read a lot about compilers.
Also look at this question and the answers here.
Google for more information.
Ask yourself if you are really ready.
Be sure to have a very good coffee machine... you will need it.
Start adding the machine-dependent files to gcc.
Build gcc as a cross compiler (host and target are different).
Check the generated code in a hex editor.
Do more tests.
Now have fun with your own architecture :D
When you are finished, you can use only C or C++ without OS-dependent libraries (you currently have no running OS on your architecture), and you should then (if you need it) compile many other libraries with your cross compiler to build up a good framework.
PS: LLVM (Clang) is easier to port... maybe you want to start there?
It's not as hard as all that. If your target machine is reasonably like another, take its RTL (machine description) definitions as a starting point and amend them, then build and run test compiles through the bootstrap stages; rinse and repeat until it works. You probably don't have to write any actual code, just machine definition templates.
Windows XP, Visual Studio 2005, C/C++, automation for Unigraphics NX using Open C
I'm trying to code an external program for NXOpen (i.e. a program with the NX library that runs on Windows, as opposed to an internal program that runs within NX). Right now I'm just testing to make sure the link structure is good, etc.
When I try to run the .exe that was generated, it does nothing for a few moments and then I get the following error: "The procedure entry point ?JPEG_convert_to_rgb@@YAPAEHPAEPAH1@Z could not be located in the dynamic link library libimage.dll"
I have nothing to go on and Googling so far has been vastly unhelpful. The stuff on here seems to be file-specific for each case, and I'd never heard of this JPEG_convert_to_rgb before now. What can I do to fix this?
Additional info: I'm not sure if I broke something when trying to solve my last issue, or if this would have happened anyway.
It looks like you are compiling a C header file in C++ and suffering from the C++ compiler mangling your names. The DLL should export non-mangled names. Try wrapping the include of the header file in an extern "C" block.
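A minimal sketch of that wrapping (the header name here is a guess; use whichever NX Open header actually declares JPEG_convert_to_rgb):

/* In the C++ source that uses the function: */
extern "C" {
#include "libimage_api.h"   /* hypothetical header declaring JPEG_convert_to_rgb */
}

extern "C" tells the C++ compiler to use unmangled C linkage for those declarations, so the executable asks the DLL for plain JPEG_convert_to_rgb instead of the decorated ?JPEG_convert_to_rgb@@... name in the error message.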
Well, I called up GTAC. The issue turned out to be quite specific to the NX library and I'm not even fully certain what happened.
Basically, I had some environment variables that needed to be set: TC_DATA and TC_ROOT (for some installations they will be IMAN_DATA and IMAN_ROOT instead). You can find their values if you open NX through Teamcenter, go to Help->NX Log File, and Ctrl-F for those names; set the variables to the values shown there. You should also make sure UGII_BASE_DIR is set properly, and that UGII_ROOT_DIR is at the beginning of your PATH variable. Also: call %TC_DATA%\tc_profilevars to set the other TC variables, or %IMAN_DATA%\iman_profilevars to set the other IMAN variables. There is also something else that I can't remember, so this answer is not complete; it is just as complete as I can make it.
If this makes no sense to you, and you're using NX Open, you should probably call GTAC; if you can use an internal application instead of an external, you might be better off doing so.
Essentially, what I'm looking for is a function that would allow me to do something like this:
Dumper(some_obj); /* outputs some_obj's data structure */
Thanks.
C doesn't support any kind of reflection out of the box. It also isn't "strongly typed" in the sense that once the code is compiled to machine code, the types aren't there any more (unlike in some higher-level languages). You would need to build your executable with full symbols and debug info and then use some debugging tool or library to retrieve that data.
I suppose just using an established debugger such as the Visual Studio debugger or gdb would be much simpler.
Short answer: no.
Long answer: by the time your program's been compiled and linked, all of that information has been thrown away. C (and C++) don't have reflection, so none of this information can be recovered at runtime.
Intriguing answer: Since you're on Windows, you can do various things with debug information (i.e. PDB files) and the DbgHelp API.
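For a taste of what that looks like, here is a minimal, hypothetical sketch (error handling omitted; it assumes the program was built with /Zi so a PDB is present, and is linked against dbghelp.lib). It only maps an address back to a symbol name; recovering full struct layouts would additionally need the much more involved SymGetTypeInfo machinery.

#include <windows.h>
#include <dbghelp.h>
#include <stdio.h>

int main(void)
{
    HANDLE process = GetCurrentProcess();
    SymInitialize(process, NULL, TRUE);                 /* load symbols for this process */

    /* SYMBOL_INFO is followed by a variable-length name buffer (the usual MSDN pattern) */
    ULONG64 buffer[(sizeof(SYMBOL_INFO) + MAX_SYM_NAME * sizeof(TCHAR) + sizeof(ULONG64) - 1) / sizeof(ULONG64)];
    PSYMBOL_INFO sym = (PSYMBOL_INFO)buffer;
    sym->SizeOfStruct = sizeof(SYMBOL_INFO);
    sym->MaxNameLen = MAX_SYM_NAME;

    DWORD64 displacement = 0;
    if (SymFromAddr(process, (DWORD64)(ULONG_PTR)&main, &displacement, sym))
        printf("address of main resolves to symbol: %s\n", sym->Name);

    SymCleanup(process);
    return 0;
}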