I have a benchmark suite written with criterion that I invoke via stack bench.
My package.yaml contains:
benchmarks:
  criterion-benchmarks:
    dependencies:
    - criterion
    - linear
    ghc-options:
    - -O2
    - -threaded
    - -rtsopts
    - -with-rtsopts=-N
    main: Main.hs
    source-dirs: benchmark
I invoke this with:
stack bench --ba "--output bench.html"
This runs the benchmark and generates bench.html for viewing results.
The Stack YAML Configuration documentation shows the use of project-wide build options, implying something like this:
build:
  # ...
  benchmark-opts:
    benchmark-arguments: "--output bench.html"
My question: package.yaml supports multiple benchmarks (children of benchmarks:), but I'd like to generate a different HTML output file for each one, rather than have them all collide on the project-wide bench.html.
I haven't found any documentation that shows how to specify run-time arguments per benchmark defined in package.yaml. The hpack documentation doesn't provide any obviously relevant options, either among the common fields or for the test and benchmark targets.
Is it possible?
I have a Haskell + stack + nix project that makes heavy use of FFI code. The problem is that two of the C files that I depend upon have to be generated before I can compile my Haskell project at large. These two files are (i) ./cbits/xdg-shell-protocol.c and (ii) ./include/xdg-shell-protocol.h.
First, here is the Makefile which can generate these files:
WAYLAND_PROTOCOLS=$(shell pkg-config --variable=pkgdatadir wayland-protocols)
WAYLAND_SCANNER=$(shell pkg-config --variable=wayland_scanner wayland-scanner)

# wayland-scanner is a tool which generates C headers and rigging for Wayland
# protocols, which are specified in XML. wlroots requires you to rig these up
# to your build system yourself and provide them in the include path.
xdg-shell-protocol.h:
	$(WAYLAND_SCANNER) server-header \
	    $(WAYLAND_PROTOCOLS)/stable/xdg-shell/xdg-shell.xml $@

xdg-shell-protocol.c: xdg-shell-protocol.h
	$(WAYLAND_SCANNER) private-code \
	    $(WAYLAND_PROTOCOLS)/stable/xdg-shell/xdg-shell.xml $@
Notice that I depend upon the system programs wayland-protocols and wayland-scanner, both of which are specified in my project's shell.nix:
buildInputs = with pkgs; [
  # ...
  # These are bleeding edge so I crafted my own nix expressions:
  (callPackage ./nix/wayland.nix { })
  (callPackage ./nix/wayland-protocols.nix { })
  # ...
];
And finally notice that I tell Haskell about these files in my package.yaml:
c-sources:
- cbits/xdg-shell-protocol.c
include-dirs:
- include
Question: How can I make it so that every time someone runs stack [--nix] build (with or without nix), the two files (i) ./cbits/xdg-shell-protocol.c and (ii) ./include/xdg-shell-protocol.h are guaranteed to be up to date?
I'm writing some code for a project that uses a Kinetis processor. These processors have a Flash Config Field which is stored in flash at a particular address. If the wrong value gets written to that field, you can lock yourself out of your chip for good.
My code for this consists of a packed struct, an instance of it placed in the .flashConfig section, and some static asserts to ensure both that the struct is the required size and that the #define written to the FSEC field (the important one) has the expected value. Then in the linker script I place that section at the correct flash address. Additionally I have an ASSERT to check that the section contains the correct amount of data.
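Roughly, the arrangement looks like this (a sketch only: the 16-byte layout of the Kinetis flash config field is standard, but the names and 0xFF fillers here are illustrative, and 0xBE is just the FSEC value this question works with):

#include <stdint.h>

#define FSEC_EXPECTED 0xBEu  /* the value the linker check below looks for */

typedef struct __attribute__((packed)) {
    uint8_t backdoor_key[8];  /* 0x400-0x407: backdoor comparison key */
    uint8_t fprot[4];         /* 0x408-0x40B: flash protection bytes */
    uint8_t fsec;             /* 0x40C: flash security byte (FSEC) */
    uint8_t fopt;             /* 0x40D: flash option byte */
    uint8_t feprot;           /* 0x40E: EEPROM protection */
    uint8_t fdprot;           /* 0x40F: data flash protection */
} flash_config_t;

/* catch layout mistakes at compile time */
_Static_assert(sizeof(flash_config_t) == 16, "flash config field must be 16 bytes");

__attribute__((section(".flashConfig"), used))
static const flash_config_t flash_config = {
    .backdoor_key = {0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
    .fprot        = {0xFF, 0xFF, 0xFF, 0xFF},
    .fsec         = FSEC_EXPECTED,
    .fopt         = 0xFF,
    .feprot       = 0xFF,
    .fdprot       = 0xFF,
};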
This is all pretty good, but I'm still nervous (I've seen these chips get themselves locked up, on several occasions now). What I want to do is add an extra assert to the linker script, something like:
ASSERT(BYTE_AT(0x40C) == 0xBE);
Is that possible?
I considered using objdump / objcopy to dump this from a .bin in a post build step. However I'm building this on Windows, so there's no grep / awk, which would have been nice and easy. Other people will also have to build this, so I don't want to rely on cygwin being installed or what not. Plus this is a little more removed than the linker, and therefore could easily be missed if someone removes the post_build script.
I don't want to rely on cygwin being installed or what not.
Write a C program that performs the same check objdump and grep would have done.
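For example, a minimal sketch of such a checker, assuming the build produces a raw .bin image and using the 0x40C / 0xBE values from the question:

/* check_fsec.c - fail (non-zero exit) unless the byte at offset 0x40C
 * of the given raw image is 0xBE. Offset and value are from the question. */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s image.bin\n", argv[0]);
        return 2;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror(argv[1]);
        return 2;
    }
    if (fseek(f, 0x40C, SEEK_SET) != 0) {
        perror("fseek");
        fclose(f);
        return 2;
    }
    int fsec = fgetc(f);  /* EOF (-1) if the image is too short */
    fclose(f);
    if (fsec != 0xBE) {
        fprintf(stderr, "FSEC check FAILED: got 0x%02X at 0x40C, expected 0xBE\n", fsec);
        return 1;
    }
    return 0;
}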
Plus this is a little more removed than the linker, and therefore could easily be missed if someone removes the post_build script.
Make the verification program invoke the linker, and then verify the result. That is, instead of
${LD} -o foo.bin ${LDFLAGS} ${OBJS} && ./post_build foo.bin
do this:
./build_and_verify -o foo.bin ${LDFLAGS} ${OBJS}
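A sketch of what build_and_verify might look like, assuming the output file is whatever follows -o (the linker name here is a placeholder for your toolchain's):

/* build_and_verify.c - run the linker with the given arguments, then
 * refuse to report success unless the produced image passes the FSEC check. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int check_fsec(const char *path)   /* same check as above */
{
    FILE *f = fopen(path, "rb");
    if (!f || fseek(f, 0x40C, SEEK_SET) != 0) {
        if (f) fclose(f);
        return 1;
    }
    int fsec = fgetc(f);
    fclose(f);
    return fsec == 0xBE ? 0 : 1;
}

int main(int argc, char **argv)
{
    char cmd[4096] = "arm-none-eabi-ld";  /* placeholder linker name */
    const char *out = NULL;
    for (int i = 1; i < argc; i++) {
        if (strcmp(argv[i], "-o") == 0 && i + 1 < argc)
            out = argv[i + 1];            /* remember the output file */
        strncat(cmd, " ", sizeof(cmd) - strlen(cmd) - 1);
        strncat(cmd, argv[i], sizeof(cmd) - strlen(cmd) - 1);
    }
    if (out == NULL || system(cmd) != 0)
        return 1;                         /* no output named, or link failed */
    return check_fsec(out);               /* fail the build on a bad FSEC */
}

This way the check cannot quietly disappear: skipping it means skipping the link itself.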
I'm writing unit tests for a project in C using Throw The Switch's Ceedling/Unity/CMock combo as the unit testing framework.
I've run into an interesting dilemma where I'm using mqueue.h in one of my unit tests. When the tests compile, I get gcc linker undefined reference errors for mq_open(), mq_close(), etc.
From what I understand, based on this finding, the -lrt flag needs to go at the end of the gcc command, after the sources (and executables?) are listed: gcc test_foo.c -lrt. Unfortunately, Ceedling is written to put the flag right after the command: gcc -lrt test_foo.c, and I can't find a way to change the order.
The documentation supplied with Ceedling only covers how to add flags to the gcc command, not how to change their order. I've tried poking around in Ceedling's vast source code, but it's written in Ruby, which I'm unfamiliar with.
So my questions are:
Does the placement of -lrt really affect the linking of the mq_*() functions?
Any thoughts on how to change the placement of the -lrt flag?
Almost 3 years later, I had a similar issue. This feature has since been added via https://github.com/ThrowTheSwitch/Ceedling/issues/136, but its usage is still not easy to work out from the documentation. I needed to include the math library (which requires the -lm flag at the end of the command) and ended up with the following config section (note particularly the :system: part):
:libraries:
  :placement: :end
  :flag: "${1} "  # or "-L ${1}" for example
  :common: &common_libraries []
  :system:
    - -lm
  :test:
    - *common_libraries
  :release:
    - *common_libraries
For some reason, Ceedling did not add the flags at all when they were added to :common: or to particular build sections.
I have an open source project that relies on another open source project (let's call that other project X). Both are written in C. I've had to hack pieces of X to get multi-threading to work. This causes some issues when trying to package up the code for distribution. To make things easier, I've just included the entirety of X within mine along with the few little hacks I've made.
I'd like to do something more sophisticated now in order to keep the improved functionality of X (it has frequent releases and mine does not) without having to repackage the whole project (with my hacks) within my project again each time that X has a release.
There are only 3 or 4 functions in X that I need to override. I can follow what is going on in this IBM tutorial, but how can I modify my Makefile.am to generate the Makefile changes suggested in that article? To summarize, the article suggests writing my own functions with the same signatures as the ones I want to override (in a file called libfuncs.c) and then adding the following 'libs' target to the makefile:
all: libs setresgid-tester

libs: libfuncs.c
	gcc -shared -Wl,-soname,libfuncs.so.1 -o libfuncs.so.1.0 libfuncs.c
	ln -s libfuncs.so.1.0 libfuncs.so.1
	ln -s libfuncs.so.1 libfuncs.so

setresgid-tester: setresgid-tester.c
	gcc -o setresgid-tester setresgid-tester.c
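For reference, one of those libfuncs.c replacements might look roughly like this (setresgid matches the article's tester; chaining to the original through dlsym(RTLD_NEXT, ...) is a glibc-specific assumption on my part):

/* libfuncs.c - override setresgid but still call the real implementation.
 * Build with -ldl on older glibc. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

typedef int (*setresgid_fn)(gid_t, gid_t, gid_t);

int setresgid(gid_t rgid, gid_t egid, gid_t sgid)  /* same signature */
{
    /* find the next definition of setresgid in link order (the real one) */
    setresgid_fn real = (setresgid_fn)dlsym(RTLD_NEXT, "setresgid");
    fprintf(stderr, "setresgid(%d, %d, %d) intercepted\n",
            (int)rgid, (int)egid, (int)sgid);
    return real ? real(rgid, egid, sgid) : -1;
}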
All of that makes sense to me. What I need to do, however, is to have this 'libs' target created for me by the autotools. I have a Makefile.am that works well right now. How do I modify it to produce the desired results above? It was difficult to find this in the autotools documentation, but I may just not have known exactly what to look for.
I'm happy to provide more details if helpful. Thanks in advance.
I am working in a Linux environment. I have two C source packages, train and test_train.
The train package, when compiled, generates libtrain.so.
test_train links against libtrain.so and generates the executable train-test.
Now I want to generate a call graph using gprof which shows the calling sequence of functions in the main program as well as those inside libtrain.so.
I am compiling and linking both packages with the -pg option, and the optimization level is -O0.
After I run ./train-test, gmon.out is generated. Then I do:
$ gprof -q ./train-test gmon.out
The output shows the call graph of functions in train-test but not those in libtrain.so.
What could be the problem?
gprof won't work here; you need to use sprof instead. I found these links helpful:
How to use sprof?
http://greg-n-blog.blogspot.com/2010/01/profiling-shared-library-on-linux-using.html
Summary from the 2nd link:
1. Compile your shared library (libmylib.so) in debug mode (-g). No -pg.
2. export LD_PROFILE_OUTPUT=`pwd`
3. export LD_PROFILE=libmylib.so
4. rm -f $LD_PROFILE.profile
5. Execute your program that loads libmylib.so.
6. sprof PATH-TO-LIB/$LD_PROFILE $LD_PROFILE.profile -p >log
7. See the log.
I found that in step 2, LD_PROFILE_OUTPUT needs to point to an existing directory; otherwise you get a helpful warning. And in step 3, you might need to specify the library as libmylib.so.X (maybe even .X.Y, I'm not sure); otherwise you get no warning whatsoever.
I'm loading my library from Python and didn't have any luck with sprof. Instead, I used oprofile, which was in the Fedora repositories, at least:
operf --callgraph /path/to/mybinary
Wait for your application to finish, or press Ctrl-C to stop profiling. Now let's generate a profile summary:
opreport --callgraph --symbols
See the documentation to interpret it. It's kind of a mess. In the generated report, each symbol is listed in a block of its own. The block's main symbol is the one that's not indented. The entries above it are the functions that call it, and the entries below it are the functions it calls. The percentages in the lower part of a block are the relative amounts of time spent in those callees.
If you're not on Linux (like me, on Solaris), you're simply out of luck, as there is no sprof there.
If you have the sources of your library, you can solve the problem by building it as a static library and making your profiling binary with that one instead.
Another way I manage to trace calls into shared libraries is by using truss. With the option -u [!]lib,...:[:][!]func,... one can get a good picture of the call history of a run. It's not completely the same as profiling, but it can be very useful in some scenarios.