Error: "The procedure entry point ?JPEG_convert_to_rgb##YAPAEHPAEPAH1#Z could not be located in the dynamic link library libimage.dll" - c

Windows XP, Visual Studio 2005, C/C++, automation for Unigraphics NX using Open C
I'm trying to code an external program for NXOpen (i.e. a program with the NX library that runs on Windows, as opposed to an internal program that runs within NX). Right now I'm just testing to make sure the link structure is good, etc.
When I try to run the .exe that was generated, it does nothing for a few moments and then I get the following error: "The procedure entry point ?JPEG_convert_to_rgb@@YAPAEHPAEPAH1@Z could not be located in the dynamic link library libimage.dll"
I have nothing to go on and Googling so far has been vastly unhelpful. The stuff on here seems to be file-specific for each case, and I'd never heard of this JPEG_convert_to_rgb before now. What can I do to fix this?
Additional info: I'm not sure if I broke something when trying to solve my last issue, or if this would have happened anyway.

It looks like you are compiling a C header file in C++ and suffering from the C++ compiler mangling your names. The DLL should export non-mangled names. Try wrapping the include of the header file in an extern "C" block.
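A minimal sketch of that wrapping (the header name libimage.h is a placeholder for whichever NX Open header declares the function):

#ifdef __cplusplus
extern "C" {
#endif

#include "libimage.h"   /* hypothetical header declaring JPEG_convert_to_rgb */

#ifdef __cplusplus
}
#endif

With the declarations seen as extern "C", the compiler emits undecorated names that match what the DLL actually exports.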

Well, I called up GTAC. The issue turned out to be quite specific to the NX library and I'm not even fully certain what happened.
Basically, I had some environment variables that needed to be set: TC_DATA and TC_ROOT (for some installations these are IMAN_DATA and IMAN_ROOT instead). To find the right values, open NX through Teamcenter, go to Help->NX Log File, and do a ctrl-F search for those terms; set the variables to the values shown there. You should also make sure UGII_BASE_DIR is set properly, and that UGII_ROOT_DIR is at the beginning of your PATH variable. Also: call %tc_data%\tc_profilevars to set the other TC variables, or %iman_data%\iman_profilevars to set the other IMAN variables. There's also something else that I can't remember - this answer is not complete, it's just as complete as I can make it.
If this makes no sense to you, and you're using NX Open, you should probably call GTAC; if you can use an internal application instead of an external, you might be better off doing so.

Related

How do you include standard CUDA libraries to link with NVRTC code?

Specifically, my issue is that I have CUDA code that needs <curand_kernel.h> to run. This isn't included by default in NVRTC. Presumably then when creating the program context (i.e. the call to nvrtcCreateProgram), I have to send in the name of the file (curand_kernel.h) and also the source code of curand_kernel.h? I feel like I shouldn't have to do that.
It's hard to tell; I haven't managed to find an example from NVIDIA of someone needing standard CUDA files like this as a source, so I really don't understand what the syntax is. Some issues: curand_kernel.h also has includes of its own... Do I have to do the same for each of those? I'm not even sure the NVRTC compiler will run correctly on curand_kernel.h, because there are some language features it doesn't support, aren't there?
Next: if I've sent in the source code of a header file to nvrtcCreateProgram, do I still have to #include it in the code to be executed, and will it cause an error if I do so?
A link to example code that does this or something like it would be appreciated much more than a straightforward answer; I really haven't managed to find any.
You have to send the "filename" and the source of each header separately.
When the preprocessor does its thing, it'll use any #include filenames as a key to find the source for the header, based on the collection that you provide.
I suspect that, in this case, the compiler (driver) doesn't have file system access, so you have to give it the source in much the same way that you would for shader includes in OpenGL.
So (a short sketch follows these steps):
Include your header's name when calling nvrtcCreateProgram. The compiler will, internally, generate the equivalent of a std::map<string,string> containing the source of each header indexed by the given name.
In your kernel source, use #include "foo.cuh" as usual.
The compiler will use foo.cuh as a key into its internal map (created when you called nvrtcCreateProgram) and will retrieve the header source from that collection.
Compilation then proceeds as normal.
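Here is a minimal sketch of that flow (error checking omitted; the kernel source, header name, and header contents are placeholders):

#include <nvrtc.h>
#include <stddef.h>

int main(void)
{
    /* kernel source that #includes the header by name, as usual */
    const char *kernel_src =
        "#include \"foo.cuh\"\n"
        "__global__ void kernel(void) { /* uses declarations from foo.cuh */ }\n";

    /* placeholder: the full text of foo.cuh, read from disk or embedded */
    const char *header_src = "/* contents of foo.cuh */\n";

    const char *header_sources[] = { header_src };  /* source of each header        */
    const char *header_names[]   = { "foo.cuh" };   /* the key #include resolves to */

    nvrtcProgram prog;
    nvrtcCreateProgram(&prog, kernel_src, "my_prog.cu",
                       1, header_sources, header_names);
    nvrtcCompileProgram(prog, 0, NULL);   /* no extra options, for brevity */
    return 0;
}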
One of the reasons that nvrtc provides only a "subset" of features is that the compiler plays in a somewhat sandboxed environment, without necessarily having all of the supporting tools and utilities lying around that you have with offline compilation. So, you have to manually handle a lot of the stuff that the normal nvcc + (gcc | MSVC| clang) combination provides.
A possible, but non-ideal, solution would be to preprocess the file that you need in your IDE, save the result, and then #include that. However, I bet there is a better way. If you just want curand, consider diving into the library and extracting the part you need (blech), or using another GPU-friendly rand implementation. On older CUDA versions, I just generated a big array of random floats on the host, uploaded it to the GPU, and sampled it in the kernels.
This related link may be helpful.
You do not need to load curand_kernel.h yourself and add it to the include "aliases" mechanism.
Instead, you can simply add the CUDA include directory to your (set of) include paths, e.g. by adding --include-path=/usr/local/cuda/include to your NVRTC compiler options.
(I do this in my GPU-kernel-runner test harness, by default, to be on the safe side.)
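A sketch of passing that option at compile time (the install path is an assumption; adjust it for your system, and prog is the nvrtcProgram created earlier):

const char *opts[] = { "--include-path=/usr/local/cuda/include" };
nvrtcCompileProgram(prog, 1, opts);   /* headers are then found on disk as usual */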

Compile-time test if function is optimized out

I'm writing a small operating system for microcontrollers in C (not C++, so I can't use templates). It makes heavy use of some gcc features, one of the most important being the removal of unused code. The OS doesn't load anything at runtime; the user's program and the OS source are compiled together to form a single binary.
This design allows gcc to include only the OS functions that the program actually uses. So if the program never uses i2c or USB, support for those won't be included in the binary.
The problem is when I want to include optional support for those features without introducing a dependency. For example, a debug console should provide functions to debug i2c if it's being used, but including the debug console shouldn't also pull in i2c if the program isn't using it.
The methods that come to mind to achieve this aren't ideal:
Have the user explicitly enable the modules they need (using #define), and use #if to only include support for them in the debug console if enabled. I don't like this method, because currently the user doesn't have to do this, and I'd prefer to keep it that way.
Have the modules register function pointers with the debug module at startup. This isn't ideal, because it adds some runtime overhead and means the debug code is split up over several files.
Do the same as above, but using weak symbols instead of pointers. But I'm still not sure how to actually accomplish this.
Do a compile-time test in the debug code, like:
if(i2cInit is used) {
debugShowi2cStatus();
}
The last method seems ideal, but is it possible?
This seems like an interesting problem. Here's an idea, although it's not perfect:
Two-pass compile.
What you can do is first compile the program with a flag like FINDING_DEPENDENCIES=1. Surround all the dependency checks with #ifs on this flag (I'm assuming you're not as concerned about adding extra #ifs there).
Then, when the compile is done (without any optional features), use nm or similar to detect the usage of functions/features in the program (such as i2cInit), and format this information into a .h file.
#ifndef FINDING_DEPENDENCIES
#include "dependency_info.h"
#endif
Now the optional dependencies are known.
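For illustration, the generated header might contain nothing more than guards like these (the macro names are hypothetical):

/* dependency_info.h -- generated after the first pass, e.g. from nm output */
#define PROGRAM_USES_I2C 1
#define PROGRAM_USES_USB 0

The debug console can then test these macros:

#if !defined(FINDING_DEPENDENCIES) && PROGRAM_USES_I2C
    debugShowi2cStatus();
#endif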
This still doesn't seem like a perfect solution, but ultimately it's mostly a chicken-and-egg problem: when compiling, the compiler doesn't know which symbols are going to be gc'd out. You basically need to get this information from the link stage and feed it back to the compilation stage.
Theoretically, this might not increase build times much, especially if you used a temp file for the generated header and only replaced it if it changed. You'd need to use different object dirs, though.
Also this might help (pre-strip, of course):
How can I view function names and parameters contained in an ELF file?
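As an aside on the weak-symbol option mentioned in the question, the usual GCC pattern looks roughly like this (a sketch only; the reference is resolved at link time, so the test itself still happens at run time, and the debug helpers it guards would need the same treatment to avoid pulling i2c in unconditionally):

/* In the debug console: a weak reference does not force the i2c module
 * to be linked in; the address is NULL when i2c is absent. */
extern void i2cInit(void) __attribute__((weak));
extern void debugShowi2cStatus(void);

void debugShowStatus(void)
{
    if (i2cInit) {              /* non-NULL only if something else linked i2c in */
        debugShowi2cStatus();
    }
}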

How to automatically link symbols using TinyCC?

Using TinyCC in my C program lets me use C as a sort of scripting language, reload C files on the fly, and do a lot of fairly neat things... But, one thing is really bothering me. Linking.
I do my normal tcc_new, and tcc_set_output_type with TCC_OUTPUT_MEMORY, but if I don't include a lot of these:
tcc_add_symbol(tcc_ctx, "printf", &printf);
tcc_add_symbol(tcc_ctx, "powf", &powf);
tcc_add_symbol(tcc_ctx, "sinf", &sinf);
everything is very limited.
I want a way to automatically bring in all the symbols in the host program. I don't want to have to manually link in every last function in libc and libm. What mechanisms exist to facilitate auto-linking or adding symbols? How can I use libm in my code without manually dropping in every last component?
I'm currently using GCC, but on another platform I use Visual Studio to compile my program. I could switch entirely to TCC.
TCC comes with a rudimentary runtime library, libtcc1. It includes basic functions like those you mention. Therefore, in most cases you can replace all your calls with a single tcc_add_library(tcc_ctx, "libtcc1.a").
libtcc1 is not complete, so you might still have to add some functions manually.
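A minimal sketch of that setup (assuming the classic 0.9.2x-era libtcc API; the script source is a stand-in, and the exact library name/path may need adjusting for your installation):

#include <libtcc.h>
#include <stdio.h>

/* stand-in for whatever C text you reload at run time */
static const char *script_source = "int answer(void) { return 42; }";

int main(void)
{
    TCCState *tcc_ctx = tcc_new();
    tcc_set_output_type(tcc_ctx, TCC_OUTPUT_MEMORY);

    /* pull in TCC's bundled runtime instead of adding symbols one by one */
    tcc_add_library(tcc_ctx, "libtcc1.a");

    /* anything missing from libtcc1 can still be added individually */
    tcc_add_symbol(tcc_ctx, "printf", &printf);

    tcc_compile_string(tcc_ctx, script_source);
    tcc_relocate(tcc_ctx, TCC_RELOCATE_AUTO);

    int (*answer)(void) = (int (*)(void))tcc_get_symbol(tcc_ctx, "answer");
    printf("%d\n", answer());

    tcc_delete(tcc_ctx);
    return 0;
}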

How can I jump to function when doing C development in Emacs?

I am doing C development in Emacs. If I have a source file open with multiple functions and "the marker" is at a function call, e.g. int n = get_number(arg);, is there any way I can "jump to" the implementation of that function? E.g. to int get_number(int *arg) { ... }
I have done some Java development in Eclipse and I miss this functionality. I'm not that used to Emacs, but I would like to learn.
You have to create a tag file.
Under Unix, you have the etags program, which understands the syntax of C, C++, Java... and creates a tag file that can be used by Emacs.
This rather old page (2004) provides more information.
To jump to a function use M-. (that's Meta-Period) and type the name of the function. If you simply press Enter, Emacs will jump to the function declaration that matches the word under the cursor.
There are several "tags" systems which allows that (there is one bundled with emacs, there is GNU global which isn't bundled with emacs but integrate well with it and has some advantages). Compared with Eclipse, you'll need to build the tags file.
Then there is semantic/EDE which is now bundled with emacs which should provide a solution without needing to build a database explicitly. I've not tried to use it recently. When I did, it has performance problem and I found the set up was painful. (Both possibly due to the fact that I'm working on a big -- several 10's millions lines -- and old -- some things date back to the mid 80's -- project without the possibility of reorganizing it).
I think semantic-mode should give you the same result. I haven't tried jumping to another file, but within one file it works very well. Go to a variable and press C-c , j to jump to its definition. Go back to the previous position with C-u C-SPC. To display references to the symbol, use C-c , g.
It really helps me.
I haven't tried jumping to another file because my current project is a modified Java program where we are using a preprocessor (a non-standard Java process), so I think that is where the problem lies.
Has anyone had success with semantic-mode?
Thanks.
I really like cscope for this, but etags probably works as well.

Moving libraries and headers

I have some C code which provides libfoo.so and libfoo.a, along with the header file foo.h. A large number of clients currently use these libraries from the /old_location/lib and /old_location/include directories, which is where they are distributed.
Now I want to move this code to /new_location, yet I am not in a position to inform the clients about this change. I want the old clients to continue accessing the libs and headers from /old_location.
For this, will creating symlinks to the libs/headers to the new locations work?
/old_location/lib/libfoo.so -> /new_location/lib/libnewfoo.so
/old_location/lib/libfoo.a -> /new_location/lib/libnewfoo.a
/old_location/include/foo.h -> /new_location/include/foo.h
[Note that I need to name the new lib as libnewfoo and not libfoo due to some constraints. Can this renaming cause any problem? Yet the C code that generates these has not changed.]
It seems to work for the few simple cases I tried, but can there be cases where clients are using the libs and headers in a way which may break as a result of this change? Please let me know what kind of intricacies can be involved. Sorry if this seems to be a novice question; I've hardly worked with C before and am a Java person.
You have to differentiate between compile time and run time.
For compile time, clients need to update their Makefile and / or configure logic.
For run time, you simply tell ld.so via ld.so.conf where to find the .so library (or tell your clients to adjust LD_LIBRARY_PATH, a second-best choice). The static library does not matter, as its code is already built into the executable.
And yes, by providing symbolic links you can make the move 'disappear' as well and provide all files via the old location.
And all this is pretty testable from your end before roll-out.
I don't see any reason why this would break, this is more a question about symlinks than C. To an unsuspecting user program (one which doesn't have special code to detect symlinks and complain), a symlink is transparent.
If you do experience errors feel free to post them and we'll do our best to advise. However I see nothing off the top of my head that would cause issues.
The only problem with the symlinks could be if some clients mount the new location with a different path, which is possible in a networked unix type environment. For example, you could have the location as:
/var/stuff/new_location/include/...
and the client could be mounting that as:
/auto/var/stuff/new_location/include/..
In which case a relative symlink might work better, i.e.:
old_location/include/foo.h -> ../new_location/include/foo.h
Another thing to consider is to replace old_location/foo.h with:
/*
* Please note that this library has moved to a new location...
*/
#include "new_location/include/foo.h"
The symlinks will work on any operating system and file system that supports symlinks.
