How to auto-populate "exposed-modules or other-modules" in *.cabal

Somewhat annoyingly, I keep getting this warning (for some 20 modules or so, polluting the build output I'd otherwise see without scrolling, such as actual GHC warnings):
"The following modules should be added to exposed-modules or other-modules in proj-name.cabal"
This happens with a freshly created (via stack new proj-name simple) project, with its .cabal set to contain only an executable proj-name (no library), where right after stack new I copied the src files/sub-dirs over from a non-stack/cabal project.
What's the intended workflow here? Am I seriously expected to manually keep those module listings in the .cabal in sync with my module files?
In this thread someone suggests "the modern answer is Stack (and hpack)", but I was really hoping stack alone would somehow suffice here, or could be set up to. If I have to set up yet another tool (a third, after stack and thus implicitly cabal) just for builds, I might as well go back to build scripts invoking ghc directly.
So the question: how can the overall very flexible, powerful, and robust stack help overcome this cabal abomination, too? =)
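For what it's worth, stack has hpack support built in: if a package.yaml file is present in the project root, stack regenerates the .cabal file from it on every build, and hpack infers the module lists from the files on disk, so nothing needs to be kept in sync by hand. A minimal sketch, reusing the names from the question:

# package.yaml: stack regenerates proj-name.cabal from this on each build;
# hpack fills in other-modules/exposed-modules from the files under source-dirs.
name: proj-name
version: 0.1.0.0

executables:
  proj-name:
    main: Main.hs
    source-dirs: src
    dependencies:
      - base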

Related

Partially pre-compile code (or maybe use .so library) while leaving another part of code open to edits

I'm trying to do a somewhat odd thing that, realistically, I'm not sure is even possible with current constraints, but it's outside my scope of knowledge, so it could be. I'll hopefully be able to make everything clear enough in the question, but it will be a little broad in scope; it's too big to get detailed.
Anyway, I have a C codebase (we'll call it bar) that is rather large and takes a while to compile. Not a huge deal normally, but now there is a set of files that change often, and currently the changes can only be confirmed as good after running a compile. Given how these files are changed, people could end up running multiple compiles in a day, which takes quite a lot of time.
What I want to do, on a broad scale, is only have to compile the set of files that might change (about 20, all in one directory, we'll call it foo) and have everything else (bar and everything under it except foo) ready beforehand. Initially I was looking at a .so library for the task, but I'm no longer positive that's correct. Either way, it still seemed reasonably possible until I realized that some of the files in directory foo are included by other files in bar. Mostly the files in foo only include other files and are end points, not being included themselves, but with a few of them being included I'm not sure what can be done.
My two thoughts are: generate a .so library of everything outside of foo that somehow still checks the needed included files at compile time, or set up some kind of general pre-compilation. Neither of these seems like it would work at a glance, but I could very well be wrong.
A third option, less ideal but better than nothing, is to generate the .so library with everything, including any files in foo that are needed at that point, just leaving out the files that aren't included anywhere. It seems like this would work better, though even if it would, I'm still not really sure how to go about it.
So basically: is there a way to do what I want to some extent, and if so, what is the best method?
Sorry about the broadness of the question; the codebase is too large to provide a lot of detail. I will try to edit in any information people think is needed, though. Thanks for the help.
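The split described above, building everything outside foo once into a shared library and then recompiling and relinking only foo on each edit, might look roughly like this with gcc (all paths and names are assumed; note that if a header inside foo/ is included by files compiled into the library, editing that header still forces a library rebuild):

# One-time step (repeat only when bar itself changes):
# compile everything outside foo/ into a shared library
gcc -fPIC -shared $(find bar -name '*.c' -not -path '*/foo/*') -o libbar.so

# Per-edit step: compile only the ~20 files in foo/ and relink
gcc bar/foo/*.c -Ibar -L. -lbar -Wl,-rpath,'$ORIGIN' -o app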

How does PC-Lint (by Gimpel) look across multiple modules?

I'm using Gimpel's PC-Lint v8.00 on a C codebase and am looking to understand how it traverses modules. The PC-lint manual only goes as far as to say that PC-Lint "looks across multiple modules". How does it do this? For example, does it start with one module and combine all related include files and source files into one large piece of code to analyze? How deep does it search in order to understand the program flow?
In a second related question, I have a use case where it is beneficial for me to lint one C module from the codebase at a time instead of providing every C module in a long list to PC-Lint. However, if I only provide one C module, will it automatically find the other C modules which it depends on, and use those to understand the program flow of the specified C module?
PC-Lint creates some sort of run-time database when it parses your source files, noting things like global variables, extern-declarations, etc.
When it has processed all compilation units (C files with all included files, recursively), it does what a linker does to generate your output, but instead of generating code, it reports certain types of errors, for instance: an extern-declaration that has never been used, an unused prototype without implementation, unused global functions. These are issues not always reported by the linker, since code generation is still perfectly possible: the items were simply never used anywhere.
The search depth can be influenced by the option -passes, which enables far better value tracking at the cost of execution time. Refer to section 10.2.2.4 in the PDF manual (for version 9.x).
To your second question: no, if you only provide one (or a few) source (C) file name(s) on your lint command line, PC-Lint will process only those files, plus all include files they use, recursively. You may want to use the option -u for "unit checkout" to tell PC-Lint that it is processing only part of a full project. Lint will then suppress certain kinds of warnings that are not useful for a partial project.
I think in principle you're asking about lint object modules; see Chapter 9 of the Lint manual PDF.
Running, say, lint -u a1.c -oo produces a1.lob, and the .lob files can then be "linked" together using lint *.lob to produce the inter-module messages.
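Spelled out, that two-phase workflow looks like this (file names are just examples):

# Phase 1: unit-check each module, emitting a lint object module (.lob)
lint -u a1.c -oo
lint -u a2.c -oo
# Phase 2: "link" the object modules to get the inter-module messages
lint *.lob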
You also asked a related, specific question (Any tips for speeding up static analysis tool PC-Lint? Any experiences using .LOB files?), but I'm not sure I understand your concern with "How much would you say it affected linting time?", because I would say it depends. What is your current lint time/speed? You posted some years ago now; how about running the job on a newer machine, with a newer CPU? KR

Error: "The procedure entry point ?JPEG_convert_to_rgb##YAPAEHPAEPAH1#Z could not be located in the dynamic link library libimage.dll"

Windows XP, Visual Studio 2005, C/C++, automation for Unigraphics NX using Open C
I'm trying to code an external program for NXOpen (i.e. a program with the NX library that runs on Windows, as opposed to an internal program that runs within NX). Right now I'm just testing to make sure the link structure is good, etc.
When I try to run the .exe that was generated, it does nothing for a few moments and then I get the following error: "The procedure entry point ?JPEG_convert_to_rgb@@YAPAEHPAEPAH1@Z could not be located in the dynamic link library libimage.dll"
I have nothing to go on and Googling so far has been vastly unhelpful. The stuff on here seems to be file-specific for each case, and I'd never heard of this JPEG_convert_to_rgb before now. What can I do to fix this?
Additional info: I'm not sure if I broke something when trying to solve my last issue, or if this would have happened anyway.
It looks like you are compiling a C header file in C++ and suffering from the C++ compiler mangling your names. The DLL should export non-mangled names. Try wrapping the include of the header file in an extern "C" block.
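A sketch of that wrapping; the header name is assumed, use whichever NX Open header declares JPEG_convert_to_rgb:

/* Force C linkage so the C++ compiler looks the symbol up by its plain,
 * unmangled name, matching what libimage.dll actually exports. */
#ifdef __cplusplus
extern "C" {
#endif
#include "libimage.h"   /* assumed header name */
#ifdef __cplusplus
}
#endif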
Well, I called up GTAC. The issue turned out to be quite specific to the NX library and I'm not even fully certain what happened.
Basically, I had some environment variables that needed to be set: TC_DATA and TC_ROOT, though for some people it will be IMAN_DATA and IMAN_ROOT. These can be found if you open up NX through Teamcenter, go to Help -> NX Log File, and Ctrl-F to search for those terms; there you should find what the variables should be set to, and then set them accordingly. You should also make sure that UGII_BASE_DIR is set properly, and that your UGII_ROOT_DIR is at the beginning of your PATH variable. Also: call %tc_data%\tc_profilevars to set the other TC variables, or call %iman_data%\iman_profilevars to set the other IMAN variables. There's also something else that I can't remember; this answer is not complete, it's just as complete as I can make it.
If this makes no sense to you, and you're using NX Open, you should probably call GTAC; if you can use an internal application instead of an external, you might be better off doing so.

Moving libraries and headers

I have some C code which provides libfoo.so and libfoo.a, along with the header file foo.h. A large number of clients currently use these libraries from the /old_location/lib and /old_location/include directories, which is where they are distributed.
Now I want to move this code to /new_location, yet I am not in a position to inform the clients about the change. I want the old clients to continue accessing the libs and headers from /old_location.
For this, will creating symlinks from the old locations to the new ones work?
/old_location/lib/libfoo.so -> /new_location/lib/libnewfoo.so
/old_location/lib/libfoo.a -> /new_location/lib/libnewfoo.a
/old_location/include/foo.h -> /new_location/include/foo.h
[Note that I need to name the new lib libnewfoo and not libfoo due to some constraints. Can this renaming cause any problem? The C code that generates these has not changed.]
It seems to work for the few simple cases I tried, but can there be cases where clients use the libs and headers in a way that would break as a result of this change? Please let me know what kind of intricacies can be involved. Sorry if this seems to be a novice question; I've hardly worked with C before and am a Java person.
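For reference, the mapping above expressed as commands; ln -s takes the target first, then the link name:

ln -s /new_location/lib/libnewfoo.so /old_location/lib/libfoo.so
ln -s /new_location/lib/libnewfoo.a  /old_location/lib/libfoo.a
ln -s /new_location/include/foo.h    /old_location/include/foo.h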
You have to differentiate between compile time and run time.
For compile time, clients need to update their Makefile and / or configure logic.
For run time, you simply tell ld.so via ld.so.conf where to find the .so library (or tell your clients to adjust LD_LIBRARY_PATH, a second-best choice). The static library does not matter, as its code is already built into the executable.
And yes, by providing symbolic links you can make the move 'disappear' as well and provide all files via the old location.
And all this is pretty testable from your end before roll-out.
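The run-time options mentioned above, sketched for a Linux system with ldconfig (paths assumed):

# System-wide: register the new library directory with the dynamic linker
echo /new_location/lib > /etc/ld.so.conf.d/foo.conf
ldconfig

# Per-user fallback, the second-best choice noted above
export LD_LIBRARY_PATH=/new_location/lib:$LD_LIBRARY_PATH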
I don't see any reason why this would break, this is more a question about symlinks than C. To an unsuspecting user program (one which doesn't have special code to detect symlinks and complain), a symlink is transparent.
If you do experience errors feel free to post them and we'll do our best to advise. However I see nothing off the top of my head that would cause issues.
The only problem with the symlinks could be if some clients mount the new location under a different path, which is possible in a networked Unix-type environment. For example, you could have the location as:
/var/stuff/new_location/include/...
and the client could be mounting that as:
/auto/var/stuff/new_location/include/..
In which case a relative symlink might work better, i.e.:
old_location/include/foo.h -> ../new_location/include/foo.h
Another thing to consider is to replace old_location/foo.h with:
/*
* Please note that this library has moved to a new location...
*/
#include "new_location/include/foo.h"
The symlinks will work on any operating system and file system that supports symlinks.

Optimized code on Unix?

What is the best and easiest method to debug optimized C code on Unix?
Sometimes we also don't have the source to build an unoptimized version of the library.
This is a very good question. I had similar difficulties in the past, when I had to integrate 3rd-party tools into my application. From my experience, you need at least meaningful call stacks, and for those you need the associated symbol files, which are essentially lists of addresses and associated function names. These are usually stripped away, and from the binary alone you won't get them... If you have the symbol files, you can load them when starting gdb, or add them afterwards. If not, you are stuck at the assembly level...
One weird behavior: even if you have the source code, execution will jump back and forth to places you would not expect (statements may be reordered for better performance), and variables may not exist anymore (optimized away!). Setting breakpoints in inlined functions is pointless, too: they are not there, but are part of the place where they are inlined. So even with source code, watch out for these pitfalls.
I forgot to mention: the symbol files usually have the extension .gdb, but it can be different...
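Loading such symbol files in gdb looks like this (the file names and the load address are assumed):

# Load symbols for the main binary after starting gdb
gdb ./app
(gdb) symbol-file app.gdb

# Load symbols for a shared library at a known load address
(gdb) add-symbol-file libbar.gdb 0x7f0000001000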
This question is not unlike "what is the best way to fix a passenger car?"
The best way to debug optimized code on UNIX depends on exactly which UNIX you have, what tools you have available, and what kind of problem you are trying to debug.
Debugging a crash in malloc is very different from debugging an unresolved symbol at runtime.
For general debugging techniques, I recommend this book.
Several things will make it easier to debug at the "assembly level" (see the gdb sketch after this list):
- You should know the calling convention for your platform, so you can tell what values are being passed in and returned, where to find the this pointer, which registers are "caller saved" and which are "callee saved", etc.
- You should know your OS "calling convention": what a system call looks like, which register the syscall number goes into, where the first parameter goes, etc.
- You should "master" the debugger: know how to find threads, how to stop individual threads, how to set a conditional breakpoint on an individual instruction, how to single-step, and how to step into or skip over function calls.
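A quick gdb tour of the debugger skills listed above (the thread number, address, and register are made up):

(gdb) info threads                    # find the threads
(gdb) thread 3                        # switch to an individual thread
(gdb) break *0x400f2a if $rdi == 0    # conditional breakpoint on one instruction
(gdb) stepi                           # single-step one machine instruction
(gdb) step                            # step into a function call
(gdb) next                            # step over a function call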
It often helps to debug a working program and a broken program "in parallel". If version 1.1 works and version 1.2 doesn't, where do they diverge with respect to a particular API? Start both programs under the debugger, set breakpoints on the same set of functions, run both programs, and observe differences in which breakpoints are hit and what parameters are passed.
Write small code samples that implement the same interfaces (whatever is in the library's header), and call your samples instead of the optimized code, as a simulation, to narrow down the scope of the code you are debugging. Furthermore, this lets you do error injection in your samples.
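A sketch of that stub/simulation idea; the header and function names are invented for illustration:

/* foo_stub.c: link this instead of the optimized library to narrow down
 * the code under suspicion and to inject errors deliberately. */
#include "foo.h"            /* assumed to declare: int foo_compute(int x); */

int foo_compute(int x)
{
    if (x < 0)
        return -1;          /* injected error path, for testing callers */
    return x * 2;           /* simplified, predictable stand-in behavior */
}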
