When I try to compile example1.cpp that comes with Armadillo 2.4.2, I keep getting the following linking error:
/tmp/ccbnLbA0.o: In function `double arma::blas::dot<double>(unsigned int, double const*, double const*)':
main.cpp:(.text._ZN4arma4blas3dotIdEET_jPKS2_S4_[double arma::blas::dot<double>(unsigned int, double const*, double const*)]+0x3b): undefined reference to `wrapper_ddot_'
/tmp/ccbnLbA0.o: In function `void arma::blas::gemv<double>(char const*, int const*, int const*, double const*, double const*, int const*, double const*, int const*, double const*, double*, int const*)':
main.cpp:(.text._ZN4arma4blas4gemvIdEEvPKcPKiS5_PKT_S8_S5_S8_S5_S8_PS6_S5_[void arma::blas::gemv<double>(char const*, int const*, int const*, double const*, double const*, int const*, double const*, int const*, double const*, double*, int const*)]+0x68): undefined reference to `wrapper_dgemv_'
/tmp/ccbnLbA0.o: In function `void arma::blas::gemm<double>(char const*, char const*, int const*, int const*, int const*, double const*, double const*, int const*, double const*, int const*, double const*, double*, int const*)':
main.cpp:(.text._ZN4arma4blas4gemmIdEEvPKcS3_PKiS5_S5_PKT_S8_S5_S8_S5_S8_PS6_S5_[void arma::blas::gemm<double>(char const*, char const*, int const*, int const*, int const*, double const*, double const*, int const*, double const*, int const*, double const*, double*, int const*)]+0x7a): undefined reference to `wrapper_dgemm_'
collect2: ld returned 1 exit status
Can someone help? I manually installed:
the latest version of BLAS
lapack-3.4.0
boost-1.48.0
the latest version of ATLAS
I'm using Ubuntu 11.04 on a MacBook Pro 7,1.
Thank you so much to osgx! After reading his comment, I took a second look at the README file. It turns out I was missing '-larmadillo' in the command (the README's example also adds '-O1').
Here's the command I used to get it working:
g++ example1.cpp -o example1 -O1 -larmadillo
Stupid mistake, I know... It just goes to show how important it is to read the README.
The README also mentions:
If you get linking errors, or if Armadillo was installed manually
and you specified that LAPACK and BLAS are available, you will
need to explicitly link with LAPACK and BLAS (or their equivalents),
for example:
g++ example1.cpp -o example1 -O1 -llapack -lblas
I didn't have to include '-llapack -lblas' but maybe this will help anyone else who's having similar problems.
As of Armadillo 5.0.0 (this might also apply to earlier versions):
You actually only need -larmadillo; on Fedora 21, -llapack and -lopenblas are no longer explicitly necessary.
There's an oddity I just discovered by comparing a previously working build against the exact problem in this thread, which points at the GNU toolchain (I'm no expert in this): on my machine, whether compilation succeeds depends on the order of the arguments passed to gcc/g++, where
g++ infile -o outfile -larmadillo ... worked, but
g++ -larmadillo infile -o outfile ... didn't, with (almost) the same error as mentioned above.
(Hope that helps.)
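If I understand the GNU linker correctly, this is expected rather than a bug: arguments are processed left to right, archive members are only pulled in to satisfy symbols that are already undefined at that point, and with --as-needed (the default on many distributions) a shared library listed before the objects that need it is dropped entirely. A minimal illustration with the stock example1.cpp:
g++ example1.cpp -o example1 -O1 -larmadillo    # works: the undefined wrapper_ddot_ etc. are resolved from libarmadillo
g++ -larmadillo example1.cpp -o example1 -O1    # may fail: libarmadillo is scanned before anything needs it
So libraries generally belong at the end of the command line.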
Related
I tried to build WebKit GTK on my ARM Mac, but the linking process fails:
Undefined symbols for architecture arm64:
"_u_charDirection_67", referenced from:
WTF::StringImpl::stripWhiteSpace() in libWTFGTK.a(StringImpl.cpp.o)
WTF::StringImpl::simplifyWhiteSpace() in libWTFGTK.a(StringImpl.cpp.o)
WTF::StringImpl::defaultWritingDirection(bool*) in libWTFGTK.a(StringImpl.cpp.o)
int WTF::toIntegralType<int, char16_t>(char16_t const*, unsigned long, bool*, int) in libWTFGTK.a(WTFString.cpp.o)
unsigned int WTF::toIntegralType<unsigned int, char16_t>(char16_t const*, unsigned long, bool*, int) in libWTFGTK.a(WTFString.cpp.o)
long long WTF::toIntegralType<long long, char16_t>(char16_t const*, unsigned long, bool*, int) in libWTFGTK.a(WTFString.cpp.o)
unsigned long long WTF::toIntegralType<unsigned long long, char16_t>(char16_t const*, unsigned long, bool*, int) in libWTFGTK.a(WTFString.cpp.o)
...
"_u_foldCase_67", referenced from:
WTF::StringImpl::foldCase() in libWTFGTK.a(StringImpl.cpp.o)
"_u_strFoldCase_67", referenced from:
WTF::StringImpl::foldCase() in libWTFGTK.a(StringImpl.cpp.o)
"_u_strToLower_67", referenced from:
WTF::StringImpl::convertToLowercaseWithoutLocale() in libWTFGTK.a(StringImpl.cpp.o)
WTF::StringImpl::convertToLowercaseWithLocale(WTF::AtomString const&) in libWTFGTK.a(StringImpl.cpp.o)
"_u_strToUpper_67", referenced from:
WTF::StringImpl::convertToUppercaseWithoutLocale() in libWTFGTK.a(StringImpl.cpp.o)
...
WTF::normalizedNFC(WTF::StringView) in libWTFGTK.a(StringView.cpp.o)
"_unorm2_isNormalized_67", referenced from:
WTF::normalizedNFC(WTF::StringView) in libWTFGTK.a(StringView.cpp.o)
"_unorm2_normalize_67", referenced from:
WTF::normalizedNFC(WTF::StringView) in libWTFGTK.a(StringView.cpp.o)
"_utext_close_67", referenced from:
WTF::setTextForIterator(UBreakIterator&, WTF::StringView) in libWTFGTK.a(TextBreakIterator.cpp.o)
WTF::acquireLineBreakIterator(WTF::StringView, WTF::AtomString const&, char16_t const*, unsigned int, WTF::LineBreakIteratorMode) in libWTFGTK.a(TextBreakIterator.cpp.o)
WTF::TextBreakIteratorICU::TextBreakIteratorICU(WTF::StringView, WTF::TextBreakIteratorICU::Mode, char const*) in libWTFGTK.a(TextBreakIterator.cpp.o)
"_utext_setup_67", referenced from:
WTF::openLatin1UTextProvider(WTF::UTextWithBuffer*, unsigned char const*, unsigned int, UErrorCode*) in libWTFGTK.a(UTextProviderLatin1.cpp.o)
WTF::openLatin1ContextAwareUTextProvider(WTF::UTextWithBuffer*, unsigned char const*, unsigned int, char16_t const*, int, UErrorCode*) in libWTFGTK.a(UTextProviderLatin1.cpp.o)
WTF::uTextLatin1Clone(UText*, UText const*, signed char, UErrorCode*) in libWTFGTK.a(UTextProviderLatin1.cpp.o)
WTF::openUTF16ContextAwareUTextProvider(UText*, char16_t const*, unsigned int, char16_t const*, int, UErrorCode*) in libWTFGTK.a(UTextProviderUTF16.cpp.o)
WTF::uTextCloneImpl(UText*, UText const*, signed char, UErrorCode*) in libWTFGTK.a(UTextProvider.cpp.o)
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [bin/WebKitWebDriver] Error 1
I'm not sure why this is happening. It compiled just fine on my Intel Mac. Any ideas on how to fix this?
While tracing a bug in a static library we use in our iOS application I stumbled upon the following question:
The library we use has code that is automatically executed during the launch of our application, before our application's main method is executed. The stack trace while the function is executed looks as follows:
#0 0x000000010050ce04 in _runOnLoad ()
#1 0x000000012008ceb0 in ImageLoaderMachO::doModInitFunctions(ImageLoader::LinkContext const&) ()
#2 0x000000012008d050 in ImageLoaderMachO::doInitialization(ImageLoader::LinkContext const&) ()
#3 0x0000000120088808 in ImageLoader::recursiveInitialization(ImageLoader::LinkContext const&, unsigned int, char const*, ImageLoader::InitializerTimingList&, ImageLoader::UninitedUpwards&) ()
#4 0x00000001200879b0 in ImageLoader::processInitializers(ImageLoader::LinkContext const&, unsigned int, ImageLoader::InitializerTimingList&, ImageLoader::UninitedUpwards&) ()
#5 0x0000000120087a64 in ImageLoader::runInitializers(ImageLoader::LinkContext const&, ImageLoader::InitializerTimingList&) ()
#6 0x000000012007a08c in dyld::initializeMainExecutable() ()
#7 0x000000012007e0fc in dyld::_main(macho_header const*, unsigned long, int, char const**, char const**, char const**, unsigned long*) ()
#8 0x0000000120079044 in _dyld_start ()
My question is: What is going on here?
How does the image loader know that this method should be executed? When looking at the assembly of the static library, _runOnLoad() looks like a regular function. How is the information that this method should be executed on launch stored in the static library? Where is this information stored once the library is linked into our main application?
How can you specify this when compiling? Can you flag a method to be executed on load? Is this something you do in code, or is it a compiler argument?
While this method is running, what can I do? I assume other functions and classes are not guaranteed to be loaded yet?
This whole thing is happening inside an iOS application, but this seems to me to be very low-level functionality, so I assume it works on every Unix-based platform?
I would love to learn some more about this.
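For what it's worth, my current (possibly wrong) understanding is that a function marked as a constructor ends up as a mod-init entry in the image, and dyld's doModInitFunctions() calls it exactly as in the stack trace above. A minimal sketch of what I suspect the library does (my own example, not the library's actual code):

// on_load.cpp -- compile with clang++; the attribute is a Clang/GCC extension
#include <cstdio>

// Registers runOnLoad() to be run by dyld when the image is loaded,
// before main(). Only rely on things already initialized at that point;
// other static initializers may not have run yet.
__attribute__((constructor))
static void runOnLoad()
{
    std::printf("running before main\n");
}

int main()
{
    std::printf("main\n");
    return 0;
}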
I'm having some trouble compiling a CUDA project that mixes C, CUDA, and the lodepng library.
My makefile looks like this:
gpu: super-resolution.cu
    gcc -g -O -c lodepng.c
    nvcc -c super-resolution.cu
    nvcc -o super-resolution-cuda super-resolution.o
    rm -rf super-resolution.o
    rm -rf lodepng.o
Could anyone tell me what I am doing wrong? It is complaining about:
nvcc warning : The 'compute_10' and 'sm_10' architectures are deprecated, and may be removed in a future release.
super-resolution.o: In function `main':
parallel-algorithm/super-resolution.cu:238: undefined reference to `lodepng_decode32_file(unsigned char**, unsigned int*, unsigned int*, char const*)'
parallel-algorithm/super-resolution.cu:259: undefined reference to `lodepng_encode32_file(char const*, unsigned char const*, unsigned int, unsigned int)'
parallel-algorithm/super-resolution.cu:269: undefined reference to `lodepng_encode32_file(char const*, unsigned char const*, unsigned int, unsigned int)'
parallel-algorithm/super-resolution.cu:282: undefined reference to `lodepng_encode32_file(char const*, unsigned char const*, unsigned int, unsigned int)'
parallel-algorithm/super-resolution.cu:292: undefined reference to `lodepng_encode32_file(char const*, unsigned char const*, unsigned int, unsigned int)'
parallel-algorithm/super-resolution.cu:301: undefined reference to `lodepng_encode32_file(char const*, unsigned char const*, unsigned int, unsigned int)'
...
I just need a way to compile my .cu file and link a C .o file into it using nvcc.
EDIT: I tried the suggestion, with no success.
gcc -g -O -c lodepng.c
nvcc -c super-resolution.cu
nvcc warning : The 'compute_10' and 'sm_10' architectures are deprecated, and may be removed in a future release.
super-resolution.cu:1:2: warning: #import is a deprecated GCC extension [-Wdeprecated]
#import "cuda.h"
^
super-resolution.cu(106): warning: expression has no effect
super-resolution.cu(116): warning: expression has no effect
super-resolution.cu(141): warning: variable "y" was declared but never referenced
super-resolution.cu:1:2: warning: #import is a deprecated GCC extension [-Wdeprecated]
#import "cuda.h"
^
super-resolution.cu(106): warning: expression has no effect
super-resolution.cu(116): warning: expression has no effect
super-resolution.cu(141): warning: variable "y" was declared but never referenced
ptxas /tmp/tmpxft_00000851_00000000-5_super-resolution.ptx, line 197; warning : Double is not supported. Demoting to float
nvcc -o super-resolution-cuda super-resolution.o lodepng.o
nvcc warning : The 'compute_10' and 'sm_10' architectures are deprecated, and may be removed in a future release.
super-resolution.o: In function `main':
tmpxft_00000851_00000000-3_super-resolution.cudafe1.cpp:(.text+0x5d): undefined reference to `lodepng_decode32_file(unsigned char**, unsigned int*, unsigned int*, char const*)'
It still can't find the reference to the object file.
Edit: here's our .cu file.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <cstdio>
extern "C" unsigned lodepng_encode32_file(const char* ,const unsigned char* , unsigned , unsigned h);
extern "C" unsigned lodepng_decode32_file(unsigned char** , unsigned* , unsigned* ,const char* );
Don't use #import. If you want to include cuda.h (which should be unnecessary), use #include. I would simply delete that line from your super-resolution.cu file.
What you did not show before, but is now evident, is that your super-resolution.cu includes lodepng.h and also later specifies C linkage for two functions: lodepng_decode32_file and lodepng_encode32_file. When I tried compiling your super-resolution.cu, the compiler gave me errors like this (I don't know why you don't see them):
super-resolution.cu(8): error: linkage specification is incompatible with previous "lodepng_encode32_file"
lodepng.h(184): here
super-resolution.cu(9): error: linkage specification is incompatible with previous "lodepng_decode32_file"
lodepng.h(134): here
So basically you are tripping over C vs. C++ linkage.
I believe the simplest solution is to use lodepng.cpp (instead of lodepng.c) and to delete the following lines from your super-resolution.cu:
extern "C" unsigned lodepng_encode32_file(const char* ,const unsigned char* , unsigned , unsigned h);
extern "C" unsigned lodepng_decode32_file(unsigned char** , unsigned* , unsigned* ,const char* );
And just compile and link everything C++-style:
$ g++ -c lodepng.cpp
$ nvcc -c super-resolution.cu
nvcc warning : The 'compute_10' and 'sm_10' architectures are deprecated, and may be removed in a future release.
$ nvcc -o super-resolution super-resolution.o lodepng.o
nvcc warning : The 'compute_10' and 'sm_10' architectures are deprecated, and may be removed in a future release.
$
If you really want to link lodepng.o C-style instead of C++-style, then you will need to modify lodepng.h with appropriate extern "C" wrappers where the necessary functions are declared. In my opinion this gets messy.
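For illustration only (my sketch, not lodepng's actual header), the kind of wrapper meant here would go around the two declarations the .cu file uses, so that the C++ side expects the unmangled names produced by gcc-compiled lodepng.o:

/* in lodepng.h, around the declarations used from super-resolution.cu */
#ifdef __cplusplus
extern "C" {
#endif

unsigned lodepng_decode32_file(unsigned char** out, unsigned* w, unsigned* h,
                               const char* filename);
unsigned lodepng_encode32_file(const char* filename, const unsigned char* image,
                               unsigned w, unsigned h);

#ifdef __cplusplus
}
#endif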
If you want to get rid of the warnings about sm_10 then add the nvcc switch to compile for a different architecture, e.g.:
nvcc -arch=sm_20 ...
but make sure whatever you choose is compatible with your GPU.
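As a quick sanity check (my own sketch, not part of the original build), you can query the device's compute capability with the CUDA runtime API and pick the -arch value accordingly:

// check_cc.cu -- build with: nvcc check_cc.cu -o check_cc
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        std::printf("no CUDA device found\n");
        return 1;
    }
    // e.g. 2.0 -> -arch=sm_20, 3.0 -> -arch=sm_30
    std::printf("compute capability: %d.%d\n", prop.major, prop.minor);
    return 0;
}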
Here is a simple snippet of the code.
The lodepng library can be obtained from http://lodev.org/lodepng/.
Renaming lodepng.cpp to lodepng.c makes it usable from C.
Even at this level, there are compilation issues with:
"undefined reference to `lodepng_decode32_file'"
"undefined reference to `lodepng_encode32_file'"
File: Makefile
all: gpu
    gcc -g -O -c lodepng.c
    nvcc -c super-resolution.cu
    nvcc -o super-resolution-cuda super-resolution.o lodepng.o
    rm -rf super-resolution.o
    rm -rf lodepng.o
File: super-resolution.cu
#import "cuda.h"
#include "lodepng.h"
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <cstdio>
extern "C" unsigned lodepng_encode32_file(const char* ,const unsigned char* , unsigned , unsigned h);
extern "C" unsigned lodepng_decode32_file(unsigned char** , unsigned* , unsigned* ,const char* );
//GPU 3x3 Blur.
__global__ void gpuBlur(unsigned char* image, unsigned char* buffer, int width, int height)
{
    int i = threadIdx.x % width;
    int j = threadIdx.x / width;
    if (i == 0 || j == 0 || i == width - 1 || j == height - 1)
        return;
    int k;
    for (k = 0; k <= 4; k++)
    {
        buffer[4*width*j + 4*i + k] = (image[4*width*(j-1) + 4*(i-1) + k] +
                                       image[4*width*(j-1) + 4*i + k] +
                                       image[4*width*(j-1) + 4*(i+1) + k] +
                                       image[4*width*j + 4*(i-1) + k] +
                                       image[4*width*j + 4*i + k] +
                                       image[4*width*j + 4*(i+1) + k] +
                                       image[4*width*(j+1) + 4*(i-1) + k] +
                                       image[4*width*(j+1) + 4*i + k] +
                                       image[4*width*(j+1) + 4*(i+1) + k]) / 9;
    }
}

int main(int argc, char *argv[])
{
    //Items for image processing;
    //int threshold = 100;
    unsigned int error;
    unsigned char* image;
    unsigned int width, height;

    //Load the image;
    if (argc > 1)
    {
        error = lodepng_decode32_file(&image, &width, &height, argv[1]);
        printf("Loaded file: %s[%d]\n", argv[1], error);
    }
    else
    {
        return 0;
    }

    unsigned char* buffer = (unsigned char*)malloc(sizeof(char) * 4*width*height);

    //GPU Blur Section.
    unsigned char* image_gpu;
    unsigned char* blur_gpu;
    cudaMalloc( (void**) &image_gpu, sizeof(char) * 4*width*height);
    cudaMalloc( (void**) &blur_gpu, sizeof(char) * 4*width*height);
    cudaMemcpy(image_gpu, image, sizeof(char) * 4*width*height, cudaMemcpyHostToDevice);
    cudaMemcpy(blur_gpu, image, sizeof(char) * 4*width*height, cudaMemcpyHostToDevice);

    gpuBlur<<< 1, height*width >>> (image_gpu, blur_gpu, width, height);

    cudaMemcpy(buffer, blur_gpu, sizeof(char) * 4*width*height, cudaMemcpyDeviceToHost);

    //Spit out buffer as an image.
    error = lodepng_encode32_file("GPU_OUTPUT1_Blur.png", buffer, width, height);

    cudaFree(image_gpu);
    cudaFree(blur_gpu);
    free(buffer);
    free(image);
}
I ran into the following error while compiling Z3 (the opt branch from git); it seems to be an error from ld, and I wonder what I can do to make it compile. I am on an iMac with OS X 10.9.2 (13C1021).
I have Xcode Version 5.1.1 (5B1008) with the xcode-select command line tools installed (version 2333). I use MacPorts 2.2.1, with ld installed through it.
The problem seems to be a linking problem. The linker in use is: ld64 #136_2+llvm33 (active)
My gcc is gcc (MacPorts gcc48 4.8.2_0) 4.8.2
Thank you very much!
g++ -o z3 shell/datalog_frontend.o shell/dimacs_frontend.o
shell/gparams_register_modules.o shell/install_tactic.o shell/main.o
shell/mem_initializer.o shell/smtlib_frontend.o shell/z3_log_frontend.o api/api.a opt/opt.a parsers/smt/smtparser.a tactic/portfolio/portfolio.a tactic/ufbv/ufbv_tactic.a tactic/smtlogics/smtlogic_tactics.a muz/fp/fp.a muz/duality/duality_intf.a muz/bmc/bmc.a muz/tab/tab.a muz/clp/clp.a muz/pdr/pdr.a muz/rel/rel.a muz/transforms/transforms.a muz/base/muz.a duality/duality.a qe/qe.a tactic/sls/sls_tactic.a smt/tactic/smt_tactic.a tactic/fpa/fpa.a tactic/bv/bv_tactics.a smt/user_plugin/user_plugin.a smt/smt.a smt/proto_model/proto_model.a smt/params/smt_params.a ast/rewriter/bit_blaster/bit_blaster.a ast/pattern/pattern.a ast/macros/macros.a ast/simplifier/simplifier.a ast/proof_checker/proof_checker.a parsers/smt2/smt2parser.a cmd_context/extra_cmds/extra_cmds.a cmd_context/cmd_context.a interp/interp.a solver/solver.a tactic/aig/aig_tactic.a math/subpaving/tactic/subpaving_tactic.a nlsat/tactic/nlsat_tactic.a tactic/arith/arith_tactics.a sat/tactic/sat_tactic.a tactic/core/core_tactics.a math/euclid/euclid.a math/grobner/grobner.a parsers/util/parser_util.a ast/substitution/substitution.a tactic/tactic.a model/model.a ast/normal_forms/normal_forms.a ast/rewriter/rewriter.a ast/ast.a math/subpaving/subpaving.a math/realclosure/realclosure.a math/interval/interval.a math/simplex/simplex.a math/hilbert/hilbert.a nlsat/nlsat.a sat/sat.a math/polynomial/polynomial.a util/util.a -lpthread -fopenmp
0 0x1079c1a68 __assert_rtn + 144
1 0x107a3bccd mach_o::relocatable::Parser<x86_64>::parse(mach_o::relocatable::ParserOptions const&) + 1039
2 0x107a2b899 mach_o::relocatable::Parser<x86_64>::parse(unsigned char const*, unsigned long long, char const*, long, ld::File::Ordinal, mach_o::relocatable::ParserOptions const&) + 313
3 0x107a290f0 mach_o::relocatable::parse(unsigned char const*, unsigned long long, char const*, long, ld::File::Ordinal, mach_o::relocatable::ParserOptions const&) + 208
4 0x107a18797 archive::File<x86_64>::makeObjectFileForMember(archive::File<x86_64>::Entry const*) const + 795
5 0x107a182b3 archive::File<x86_64>::justInTimeforEachAtom(char const*, ld::File::AtomHandler&) const + 139
6 0x1079c5d46 ld::tool::InputFiles::searchLibraries(char const*, bool, bool, bool, ld::File::AtomHandler&) const + 210
7 0x107a0b772 ld::tool::Resolver::resolveUndefines() + 200
8 0x107a0d6e1 ld::tool::Resolver::resolve() + 75
9 0x1079c1d44 main + 370
A linker snapshot was created at:
/tmp/z3-2014-03-25-110931.ld-snapshot
ld: Assertion failed: (cfiStartsArray[i] != cfiStartsArray[i-1]), function parse, file src/ld/parsers/macho_relocatable_file.cpp, line 1555.
collect2: error: ld returned 1 exit status
make: *** [z3] Error 1
It is because we used MacPorts to install gcc, ld, and other packages.
Another possibility is that ld depended on llvm 3.3 rather than llvm 3.4. The problem was solved after updating ld.
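For reference, updating the linker through MacPorts would be something along these lines (the exact port name is an assumption on my part; check with 'port installed' first):
sudo port selfupdate
sudo port upgrade ld64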
I'm developing software that uses a DB2 database via ODBC (unixODBC). The issue is that running the test suite under valgrind produces a massive amount of errors; a single connect and disconnect alone generates around 4k error messages (code provided below). My questions are:
Am I doing something wrong with connect and disconnect?
Is there a cleanup function that frees the memory allocated by libdb2?
Valgrind also has a message-suppression feature; is there a maintained suppression file for the libdb2.so library?
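For illustration, this is the kind of suppression entry I have been drafting myself from the stack trace further down; the name and the choice of frames are my own guesses:

{
   libdb2_static_init_leak
   Memcheck:Leak
   fun:malloc
   fun:_ossMemAlloc
   obj:*libdb2.so*
}

(Loaded with: valgrind --suppressions=db2.supp ./test)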
Code:
static void
connect_disconnect(SQLCHAR *dsn)
{
    SQLRETURN ret = -1;
    SQLHENV env = NULL;
    SQLHDBC dbc = NULL;
    SQLCHAR msg[1024];
    SQLSMALLINT msglen = 0;

    /* env handle */
    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (void*)SQL_OV_ODBC3, 0);

    /* connection */
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
    ret = SQLDriverConnect(dbc, NULL, (SQLCHAR *)dsn,
                           SQL_NTS, msg, sizeof(msg), &msglen, SQL_DRIVER_COMPLETE);
    if (!SQL_SUCCEEDED(ret))
    {
        fprintf(stderr, "Failed to connect to database '%s'.\n", dsn);
        extract_error(dbc, SQL_HANDLE_DBC);
    }

    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    dbc = NULL;
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    env = NULL;

    return;
}
I'm using:
libdb2.so acquired from DSClients-linuxx64-odbc_cli-10.1.0.2-FP002 package for Linux 64bit.
libodbc.so version 2.3.1
Edit
Last valgrind message (biggest leak):
==1318== 425,880 bytes in 1 blocks are possibly lost in loss record 145 of 145
==1318== at 0x4C2C04B: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==1318== by 0x68B313D: _ossMemAlloc (in /usr/local/lib/libdb2.so.1)
==1318== by 0x6A3B513: sqlnlscmsg(char const*, SQLNLS_MSG_FILE_HEADER**, char const*, bool*, char*) (in /usr/local/lib/libdb2.so.1)
==1318== by 0x6A3AC90: sqlnlsMessage (in /usr/local/lib/libdb2.so.1)
==1318== by 0x6A3A589: sqlnlsMessage (in /usr/local/lib/libdb2.so.1)
==1318== by 0x6C43128: sqloMessage (in /usr/local/lib/libdb2.so.1)
==1318== by 0x6BDCDE0: sqllcGetMessage(char const*, int, char*, char*, unsigned long, bool, char const*) (in /usr/local/lib/libdb2.so.1)
==1318== by 0x6BE0F79: sqllcInitComponent(unsigned int) (in /usr/local/lib/libdb2.so.1)
==1318== by 0x6BE14E2: sqllcInitData() (in /usr/local/lib/libdb2.so.1)
==1318== by 0x6BD813C: sqllcGetInstalledKeyType (in /usr/local/lib/libdb2.so.1)
==1318== by 0x6C26653: sqloGetInstalledKeyType (in /usr/local/lib/libdb2.so.1)
==1318== by 0x6B42494: sqleuInvokeActivationRoutine(db2UCconHandle*, SQLEU_UDFSP_ARGS*, sqlca*, bool, unsigned int) (in /usr/local/lib/libdb2.so.1)
==1318== by 0x6B413A3: sqleuPerformServerActivationCheck(db2UCconHandle*, sqlca*) (in /usr/local/lib/libdb2.so.1)
==1318== by 0x6B3FF72: sqleUCappConnect (in /usr/local/lib/libdb2.so.1)
==1318== by 0x69E8F9A: CLI_sqlConnect(CLI_CONNECTINFO*, sqlca*, CLI_ERRORHEADERINFO*) (in /usr/local/lib/libdb2.so.1)
==1318== by 0x699997D: SQLConnect2(CLI_CONNECTINFO*, unsigned char*, short, unsigned char*, short, unsigned char*, short, unsigned char*, short, unsigned char) (in /usr/local/lib/libdb2.so.1)
==1318== by 0x69B2640: SQLDriverConnect2(CLI_CONNECTINFO*, void*, unsigned char*, short, unsigned char*, short, short*, unsigned short, unsigned char, unsigned char, CLI_ERRORHEADERINFO*) (in /usr/local/lib/libdb2.so.1)
==1318== by 0x698BD4E: SQLDriverConnect (in /usr/local/lib/libdb2.so.1)
==1318== by 0x4E45962: SQLDriverConnect (in /usr/lib/libodbc.so.2.0.0)
==1318== by 0x400BF2: connect_disconnect (in /.../db2_leak/test)
==1318== by 0x400A8F: main (in /.../db2_leak/test)
Most of the leaks are static (initialization). Each connect/disconnect adds 80 bytes to the definitely-lost byte count.
A somewhat larger part of the valgrind output (I could not paste more than 500k): http://pastebin.com/xZfjy21Q
The biggest issue is that I can't tell which of the reported problems are caused by my own code.
Edit
Double-checked the binaries; all are 64-bit.