MSYS2 and SDL2, distributing a binary?

I just compiled an SDL2 program in MSYS2 and would like to distribute a binary copy of it in a ZIP file.
But to run the program on "normal" Windows I have to copy a long list of DLLs from /mingw64/bin.
Is there a better way of doing this? Maybe linking statically?
How would the license work either way?
My own program is BSD 3-clause.
I'm using SDL2, SDL2_image, SDL2_ttf and all their dependencies.

You can build statically. Licensing makes no practical difference in your case between static and dynamic linkage: SDL2, SDL2_image and SDL2_ttf are all zlib-licensed, which permits static linking as long as you keep the attribution notices for them and their dependencies.
I'm not sure how you build, but here is an example with CMake: CMake - Compile in Linux, Execute in Windows
(In the example the binary is cross-compiled, but the static parts are the same for native compilation as well.)
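For MSYS2 specifically, here is a plain command-line sketch (it assumes the mingw-w64 SDL2, SDL2_image and SDL2_ttf packages ship static libraries and pkg-config files; exact flags and link order may need adjusting for your setup):
gcc main.c -o game.exe -static $(pkg-config --cflags --libs --static sdl2 SDL2_image SDL2_ttf)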

I just found out that all library licenses are located in /mingw64/share/licenses
So it's pretty easy to copy the needed licenses to the ZIP file.
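For example, from the MSYS2 shell (the directory names under /mingw64/share/licenses depend on your actual dependency set; SDL2 and SDL2_image below are just examples):
mkdir -p dist/licenses
cp -r /mingw64/share/licenses/SDL2 /mingw64/share/licenses/SDL2_image dist/licenses/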

Related

How can I make a binary that uses OpenMP, compiled with Intel's C compiler, portable?

Normally I compile code (all in a single file, main.c) with the Intel oneAPI command prompt like so:
icl.exe main.c -o binary_name
I can then run binary_name.exe without issue from a regular command prompt. However, I recently used OpenMP multithreading and compiled like so:
icl.exe main.c -o binary_name /Qopenmp /MD /link libiomp5md.lib
Then, when I try to run it through an ordinary command prompt, I get an error message.
I'd ultimately like to move this simple code around (say, to another computer with the same OS). Is there some procedure, through a command prompt or batch file, for packaging and linking a dynamic library? It also looks like statically linking OpenMP is not supported on Windows.
Either make a statically linked version, or distribute the dependency DLL file(s) along with the EXE file.
You can check the dependencies of your EXE with Dependency Walker.
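If you have the oneAPI or Visual Studio command prompt handy, the bundled dumpbin tool lists the same information:
dumpbin /dependents binary_name.exe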
As you correctly stated, statically linking OpenMP is not supported on Windows. Depending on your use case you have a couple of options. The simplest one, fine for basic testing, is to ship the dynamic-link library with your executable and place it in the same directory on the target system. Having built a lot of systems using DLLs, I can say this is typically what developers do, even in production environments, to make sure a compatible copy of the library travels with their code.
If you are looking to do something more elaborate on the target system, you can place the DLL in a shared location and follow the search-order guidance from Microsoft's documentation:
https://learn.microsoft.com/en-us/windows/win32/dlls/dynamic-link-library-search-order
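As a concrete packaging sketch, you can copy the OpenMP runtime next to your EXE after building. The redist path below is an assumption that varies between oneAPI versions, so check your install (%ONEAPI_ROOT% is set by the oneAPI environment script):
copy "%ONEAPI_ROOT%\compiler\latest\windows\redist\intel64_win\compiler\libiomp5md.dll" .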

Statically linking libvips to a Rust program in Windows

There is a *-sys crate for libvips on crates.io; however, it uses pkg-config, which searches the system for a library to link against dynamically, not statically.
I want to ship libvips inside the final binary of my software (or at least alongside the .exe), since the user should only have to install one executable with everything in it, C and Rust code alike.
Looking at the Rust book's FFI linking section, we can link lib.a files easily, but libvips is a huge, complex C library whose Windows releases come in .dll and .exe form. I do not know how to link these Windows binaries of libvips into a Rust program statically.
Potentially I could build it from source and link it manually, but the build process seems very complex and relies on scripts and Docker. I would have to replace those scripts with a build.rs of my own that does the same, which seems very hard to me since I'm a beginner at this. I know I would have to set rustc-link-search in build.rs, but I don't know how to compile the .a files for libvips. Rust book on this
My goal is to FFI into libvips from Rust. I am using Windows 10 and want to cargo build the project and get the Rust code and libvips in one distributable binary with no other dependencies.
edit-1:
the vips-dev-w64-web-x.y.z.zip contains libvips.lib and libvips.dll.a files, pkg-config .pc files, as well as the normal .dlls and vips.exe. Can I use these .lib and .a files, and if so, exactly how? Will it link statically? Are these .pc files useful for my situation?
Just ship the libvips bin area somewhere in your tree, add it to PATH on startup, and link dynamically. This is how Electron apps and node.js packages that use libvips work. Static linking for complex libraries died a while ago; it's not worth trying.
As much as anything, it would be a potential licence violation. libvips is LGPL, so you MUST link dynamically (or include enough of your code to allow relinking), or your whole application has to be released under LGPL-compatible terms.
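One low-tech way to implement the "add it to PATH on startup" approach is a small launcher script next to the executable. This sketch assumes the libvips DLLs were unpacked into a vips\bin subdirectory and that myapp.exe is your (placeholder) binary name:
@echo off
set PATH=%~dp0vips\bin;%PATH%
"%~dp0myapp.exe" %*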

How to work with external libraries when cross compiling?

I am writing some code for a Raspberry Pi ARM target on an x86 Ubuntu machine. I am using the gcc-linaro-armhf toolchain. I am able to cross-compile and run some self-contained programs on the Pi. Now I want to link my code with an external library such as ncurses. How can I achieve this?
Should I just link my program against the existing ncurses lib on the host machine and then run it on ARM? (I don't think this will work.)
Do I need to get a source or prebuilt version of the lib for ARM, put it in my lib path and then compile?
What is the best practice in this kind of situation?
I also want to know how this works for the C standard library. In my program I used the stdio functions, and it worked after cross-compiling without doing anything special; I just provided the path to my ARM gcc in the makefile. So I want to know how it found the correct standard headers and libs.
Regarding your general questions:
Why the C library works:
The C library is part of your cross-toolchain. That's why its headers are found and the program links and runs correctly. The same holds for some other very basic system libraries like libm and libstdc++ (though not in every case; it depends on the toolchain configuration).
In general, when dealing with cross-development you need some way to get your desired libraries cross-compiled. Using prebuilt binaries is rare in this case, especially with ARM hardware, because there are so many different configurations and systems are often stripped down in different ways. That's why binaries are rarely compatible between different devices and Linux configurations.
If you're running Ubuntu on the Raspberry Pi, there is a chance you may find a suitable ncurses library on the internet or even in some Ubuntu apt repository. The typical way, however, is to cross-compile the library with the specific toolchain you have.
When many complex libraries need to be cross-compiled, there are solutions that make life a bit easier, like buildroot or ptxdist. These programs build complete Linux kernels and root filesystems for embedded devices.
In your case, however, as long as you only want ncurses, you can compile the sources yourself. You just need to download them and run configure, specifying your toolchain with the --host option; the --prefix option chooses the installation directory. After running make and make install, assuming everything went fine, you will have a set of headers and the ARM-compiled library for your application to link against.
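For example, from inside the unpacked ncurses source tree (a sketch, assuming the Linaro tools use the arm-linux-gnueabihf- prefix and are on your PATH, and that $HOME/arm/ncurses is where you want the result):
$ ./configure --host=arm-linux-gnueabihf --prefix=$HOME/arm/ncurses
$ make
$ make install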
Regarding cross compilation you will surely find loads of information on the internet and maybe ncurses has got some pointers in its shipped documentation, too.
Regarding the query about how the C library works in cross-tools:
When the cross-toolchain itself is compiled and built, a sysroot is provided at configure time,
like --with-sysroot=${CLFS_CROSS_TOOLS}
--with-sysroot
--with-sysroot=dir
Tells GCC to consider dir as the root of a tree that contains (a subset of) the root filesystem of the target operating system. Target system headers, libraries and run-time object files will be searched for in there. More specifically, this acts as if --sysroot=dir was added to the default options of the built compiler. The specified directory is not copied into the install tree, unlike the options --with-headers and --with-libs that this option obsoletes. The default value, in case --with-sysroot is not given an argument, is ${gcc_tooldir}/sys-root. If the specified directory is a subdirectory of ${exec_prefix}, then it will be found relative to the GCC binaries if the installation tree is moved.
So instead of looking in /lib and /usr/include, the compiler looks for libc and the include files inside the toolchain's sysroot when it compiles.
You can check with
arm-linux-gnueabihf-gcc -print-sysroot
which shows where the toolchain looks for libc.
Also,
arm-linux-gnueabihf-gcc -print-search-dirs
gives you a clearer picture.
Clearly, you will need an ncurses compiled for the ARM target - the one on the host will do you absolutely no good at all [unless your host has an ARM processor - but you said x86, so clearly not the case].
There MAY be some prebuilt libraries available, but I suspect it's more work to find one that works and matches your specific conditions than to build the library yourself from source - it shouldn't be that hard, and I expect ncurses doesn't take many minutes to build.
As to your first question: if you intend to use the ncurses library with your cross-compiler toolchain, you need its ARM-built binaries prepared.
Your second question is about how it works with the standard libs: well, it is really NOT the system libc/libm that the toolchain uses to compile and link your program. You can see this with the --print-file-name= option of your compiler:
arm-none-linux-gnuabi-gcc --print-file-name=libm.a
...(my working folder)/arm-2011.03(arm-toolchain folder)/bin/../arm-none-linux-gnuabi/libc/usr/lib/libm.a
arm-none-linux-gnuabi-gcc --print-file-name=libpthread.so
...(my working folder)/arm-2011.03(arm-toolchain folder)/bin/../arm-none-linux-gnuabi/libc/usr/lib/libpthread.so
I think your Raspberry Pi toolchain might be the same. You can try this out.
Vinay's answer is pretty solid. Just one correction: when compiling the ncurses library for the Raspberry Pi, the option that sets your rootfs is --sysroot=<dir>, not --with-sysroot (the latter is a configure-time option for building GCC itself). That's what I found when I was using the following compiler:
arm-linux-gnueabihf-gcc --version
arm-linux-gnueabihf-gcc (crosstool-NG linaro-1.13.1+bzr2650 - Linaro GCC 2014.03) 4.8.3 20140303 (prerelease)
Copyright (C) 2013 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
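To illustrate the corrected flag (a sketch; it assumes the Pi's root filesystem, including the ncurses headers and libraries, has been copied to $HOME/rpi/rootfs):
arm-linux-gnueabihf-gcc --sysroot=$HOME/rpi/rootfs -o app app.c -lncurses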

Dealing with static libraries when porting C code from one operating system to another

I have been working on some C code on a Windows machine, and now I am in the process of transferring it to a Linux computer where I do not have full privileges. In my code I link against several static libraries.
Is it correct that these libraries need to be rebuilt for the Linux machine?
The library in question is the GSL-1.13 scientific library.
Side question: does anyone have a pre-compiled version of the above for Linux?
I have tried using automake to compile the source on the Linux machine, but no makefile seems to be created and no error is output.
Thanks
Yes, you do need to compile any library again when you switch from Windows to GNU/Linux.
As for how to do that: you don't need automake to build GSL. You should read the file INSTALL that comes inside the tarball (the file gsl-1.16.tar.gz) very carefully. In a nutshell, you run the commands
$ ./configure
$ make
inside the directory that you unpacked from the tarball.
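Since you mention not having full privileges, you can install into your home directory and point the compiler at it afterwards (a sketch; $HOME/gsl is an arbitrary location and myprog.c a placeholder):
$ ./configure --prefix=$HOME/gsl
$ make
$ make install
$ gcc myprog.c -I$HOME/gsl/include -L$HOME/gsl/lib -lgsl -lgslcblas -lm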

How do I install C packages on Windows?

I have to use LU decomposition to fit a simple model to some (simulated) data in C. An example of what I need to do is here:
However, I'm stuck with a more basic problem: how do I install packages in C and call them in my code?
I'm new to C and I'm used to R. But I have an assignment to do some tests on matrix inversion and LU decomposition, and the professor suggested using LAPACK to ease things (so I don't need to code the LU decomposition etc. myself). But I don't know how to install the package and call it in my code in order to use the LAPACK functions.
I have Windows 7 64-bit and I'm using the Code::Blocks 8.02 IDE.
Thanks for any help.
Normally you don't "install" C libraries in that sense. On Windows you typically deal with three kinds of files: the header files, usually ending in .h; the dynamic library, .dll; and most likely some import/linker files (typically .lib or .a). The linker and compiler need to be able to find these files somewhere, so you set the include directory paths and the library directory paths.
E.g. let's say you downloaded a library called foo and extracted it to C:\foo.
In that folder, libfoo.a, foo.dll and foo.h reside. In Code::Blocks you will have to point the include directory path to C:\foo and the library path to C:\foo so that the compiler and linker know where to look for these files. Since you're linking against the foo library, you will also have to add -lfoo or something similar to the linker command line. This is GCC syntax, but Code::Blocks uses the GCC compiler behind the scenes anyway.
In the C code you can then just #include <foo.h> and the compiler will find it for you.
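On the command line, the equivalent of those IDE settings would look something like this (foo being the made-up library from the example above):
gcc main.c -IC:\foo -LC:\foo -lfoo -o main.exe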
You need to install that library, and it might actually supply a tool for that. Check its documentation (e.g. a file INSTALL or README in the distributed sources). If the library is header-only you might only need to copy its headers to some directory on your system, but its build system might be able to do that for you.
Once that is done, tell your IDE where to look for the headers and, if the library is not header-only, which library file to link against. See the documentation in the Code::Blocks wiki on how this is done for some example cases and adapt it for your library.
The simplest thing to do in your situation is to install Cygwin. You can use its setup.exe installer to install GCC and the LAPACK libraries. When you want to use the LAPACK library, add the -llapack option to your GCC command line.
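For example, from the Cygwin shell (a sketch; depending on how the Cygwin LAPACK package was built you may also need the reference BLAS):
gcc main.c -o main -llapack -lblas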
