Swift Package Manager link shared archive library on macOS - linker

I have a swift package manager library with multiple targets and one product.
Its Package.swift looks something like this:
// swift-tools-version:5.3
import PackageDescription

let package = Package(
    name: "FooPackage",
    products: [
        .library(
            name: "FooLibrary",
            targets: ["FooLibrary"]),
    ],
    dependencies: [],
    targets: [
        .target(name: "FooHeaders"),
        .target(
            name: "FooLibrary",
            dependencies: [
                "FooHeaders"
            ],
            path: nil,
            exclude: [],
            sources: nil,
            resources: nil,
            publicHeadersPath: "include",
            cSettings: nil,
            cxxSettings: nil,
            swiftSettings: nil,
            linkerSettings: [
                .linkedLibrary("/path/to/libfoo.a")
            ]
        ),
        .testTarget(
            name: "FooTests",
            dependencies: [
                "FooHeaders",
                "FooLibrary"
            ],
            path: nil,
            exclude: [],
            sources: nil,
            cSettings: nil, cxxSettings: nil, swiftSettings: nil,
            linkerSettings: nil)
    ]
)
I also have a directory structure that looks like this:
FooPackage
  Libraries
    libfoo.a
  Sources
    FooHeaders
      include
        some_header_files.h
    FooLibrary
      SomeSwiftFiles.swift (these call a bunch of the methods exposed in the header files)
  Tests
    FooTests
      SomeTestFiles.swift
Now my issue is that whenever I run swift test, I get the following error message on Mac:
Building for debugging...
ld: library not found for -l/path/to/libfoo.a
[0/1] Linking FooTests
error: fatalError
Note that I specify the path to libfoo.a from the root. Oddly enough, running swift build does work on Mac, though that appears to be merely a compilation step that does not do any linking.
However, when I compile the library on a Linux machine, be it physical or inside Docker, with everything else exactly equal, swift test does work there. That seems to be the case on both Debian and Ubuntu; I haven't tried other operating systems yet.
What could be going wrong, and what am I doing wrong? I have looked at a bunch of similar posts on StackOverflow, but nobody else appears to have a project where compilation and running actually do work everywhere except on Mac. I would also like to stress that I am not using Xcode; all of this is compiled straight from the command line.
Also, if it's any help, the library is compiled from a Rust project. I simply run cargo build and copy the library from target/debug into the Libraries folder in my project, which is the path specified in Package.swift.
Some things I tried were using the same linker settings in both the standard target and the test target, using the .dylib extension on Mac instead of .a (cargo build generates both), removing the file extension altogether, and various experiments with specifying the path relative to the root folder of the project and various folders within it. So far, all to no avail.

I believe I have figured it out, and the solution is as follows.
The underlying problem is that .linkedLibrary is translated into a -l<name> linker flag, and on macOS ld does not accept a file path in that position; the path has to be split into a -L search directory plus the bare library name.
In Package.swift, add this snippet at the beginning:
var linkerSettings: [PackageDescription.LinkerSetting] = [.linkedLibrary("/path/to/libfoo.a")]
#if os(macOS)
linkerSettings = [
    .unsafeFlags(["-L/path/to/"]),
    .linkedLibrary("foo")
]
#endif
And then, where it used to say
linkerSettings: [
    .linkedLibrary("/path/to/libfoo.a")
]
replace that code with
linkerSettings: linkerSettings
instead.

Related

How to set up intellisense in vscode for the rp2040

I'm aware there are some questions just like this one out there, but I wasn't able to get any value out of them (there's probably something I'm missing here).
I'm trying to set up IntelliSense in VS Code to work with the RP2040 (Pico). I am already able to build the project with CMake, which was the first huge wall I struggled to get across, but I'm now having difficulty getting IntelliSense to work the way it would for a normal C/C++ file in VS Code: it can't find or recognize the libraries in the pico-sdk directory, even when I point directly at them (giving it the exact path to the .h and .c files).
{
    "configurations": [
        {
            "name": "RP2040",
            "includePath": [
                "${workspaceFolder}/**",
                "C:/Users/Julio/VSARM/sdk/pico/pico-sdk/src/**",
                "${env:PICO_SDK_PATH}/**",
                "${workspaceFolder}/build/pico-sdk/**"
            ],
            "defines": [
                "_DEBUG",
                "UNICODE",
                "_UNICODE"
            ],
            "windowsSdkVersion": "10.0.22000.0",
            "compilerPath": "C:/Users/Julio/VSARM/armcc/10 2021.10/bin/arm-none-eabi-gcc.exe",
            "intelliSenseMode": "gcc-arm",
            "cStandard": "c11",
            "cppStandard": "c++17",
            "configurationProvider": "ms-vscode.makefile-tools"
        }
    ],
    "version": 4
}
That's what's inside my c_cpp_properties.json file for this project. I attempted various paths to what I believed was the location of the libraries, but maybe I misunderstood that as well, because it doesn't seem to matter. "${workspaceFolder}/build/pico-sdk/**" is there because building with CMake creates a pico-sdk folder inside the build folder too, so I thought that might work, but it didn't.
cmake_minimum_required(VERSION 3.12)

# Pull in SDK (must be before project)
include(pico_sdk_import.cmake)

project(blink C CXX ASM)
set(CMAKE_C_STANDARD 11)
set(CMAKE_CXX_STANDARD 17)

# Initialize the SDK and include the libraries
pico_sdk_init()

# Tell CMake where to find the executable source file
add_executable(${PROJECT_NAME}
    main.c
)

# Create map/bin/hex/uf2 files
pico_add_extra_outputs(${PROJECT_NAME})

# Link to pico_stdlib (gpio, time, etc. functions)
target_link_libraries(${PROJECT_NAME}
    pico_stdlib
)

# Enable usb output, disable uart output
pico_enable_stdio_usb(${PROJECT_NAME} 1)
pico_enable_stdio_uart(${PROJECT_NAME} 0)

add_compile_options(-Wall
    -Wno-format           # int != int32_t as far as the compiler is concerned because gcc has int32_t as long int
    -Wno-unused-function  # we have some for the docs that aren't called
    -Wno-maybe-uninitialized
)
That's the content of my CMakeLists.txt, just in case it's important. Is there maybe a way to make VS Code IntelliSense work with CMake? Perhaps that could work better, I don't know. I know for a fact the project builds successfully, because I uploaded the file to the MCU and it works just fine; so I think this is more of a VS Code problem than a CMake one.
UPDATE: I was able to get IntelliSense to detect the libraries for the Pico before actually trying to set CMake as the IntelliSense provider, as I was advised to do. The problem is that I'm not really sure how I managed it: I opened an unrelated C++ project in parallel to the RP2040 project, and some automatic configuration from one of the extensions kicked in and fixed it. I assume this is bad practice, since I'm not aware of the reason why it works now. So I'm going to try the CMake way shortly regardless.
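Regarding the CMake route: a sketch of the usual setup, not verified against this exact project, is to let the C/C++ extension delegate IntelliSense to the CMake Tools extension (ms-vscode.cmake-tools), which learns the include paths the Pico SDK adds at configure time. The c_cpp_properties.json then shrinks to roughly:

```json
{
    "configurations": [
        {
            "name": "RP2040",
            "compilerPath": "C:/Users/Julio/VSARM/armcc/10 2021.10/bin/arm-none-eabi-gcc.exe",
            "intelliSenseMode": "gcc-arm",
            "cStandard": "c11",
            "cppStandard": "c++17",
            "configurationProvider": "ms-vscode.cmake-tools"
        }
    ],
    "version": 4
}
```

Alternatively, configuring CMake with -DCMAKE_EXPORT_COMPILE_COMMANDS=ON and pointing the extension's "compileCommands" setting at the generated build/compile_commands.json achieves a similar result without a configuration provider.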

Linker error in Meson & Ninja when trying to add custom dependency

I am trying to add a source-based library to my project using Meson, but when I do, I get object-file linking errors.
I tried adding a custom dependency to the project executable, but of course it says it is undefined (srclibdep in the code below), since it is declared after the executable. And if I declare it before the executable, then I can't link.
This is my ./meson.build
project('ProjectName', 'cpp',
    version: '0.1',
    default_options: ['warning_level=3', 'cpp_std=c++14'])

srclibinc = include_directories('SourceLibraryName')
cpp = meson.get_compiler('cpp')
add_languages('cpp')

proj = executable('ProjectName', 'main.cpp',
    install: true,
    include_directories: srclibinc)

srclibdep = declare_dependency(include_directories: srclibinc, link_with: proj)
And ./SourceLibraryName/meson.build
files = run_command('files.sh').stdout().strip().split('\n')

foreach f : files
    install_headers(f)
endforeach

srclib = shared_library('SourceLibrary', files, install: true)

pkg_mod = import('pkgconfig')
pkg_mod.generate(libraries: srclib,
    version: '0.1',
    name: 'libsrc',
    description: 'Source-based library.')
I am getting hundreds of linking errors saying that references to x::Y don't exist, even though the compiler compiled the code as if the dependency were already there.
I think it should be
# make srclib available to the code below:
subdir('SourceLibraryName')

# create a dependency object with the library to link against:
srclibdep = declare_dependency(link_with: srclib)

# add this object to the executable's dependencies:
proj = executable('ProjectName', 'main.cpp',
    install: true,
    include_directories: srclibinc,
    dependencies: srclibdep)
PS: not related to the matter, but a few things I noticed:
- you don't have to generate a pkg-config file if you use the shared library only within your own project
- it's good practice to add a version to a shared library, especially if it will be shared with other projects:
  shared_library('SourceLibrary', files, install: true, version: meson.project_version())
- meson.project_version() can be used for the pkg-config file as well, so you won't forget to update the version in all places
- you don't install any headers for the library, so other projects won't find the API that your library provides

Is there a way, in meson, to overwrite the built-in options from the cross build definition file?

I'm currently evaluating different build systems for embedded projects (e.g. FreeRTOS based), and I came across Meson. I find it good, especially the idea of a cross build definition file that defines how my project needs to be compiled.
Nevertheless, I do have an issue with some of the base options, such as:
b_pch
b_staticpic
which default to true. In my project, these options generate a wrong binary...
The current solution, as Meson proposes it, is:
meson debug --cross-file boards/SensGate/meson_config_stm32l4_gcc8.ini -Db_pch=false -Db_staticpic=false
cd debug && ninja hex
But I somehow find it inelegant to have to define compiler and linker options outside the cross build definition file...
I was wondering whether there is a way to override these options in the file itself...
If not, do you think I should open a ticket in the Meson project to request this feature?
I would expect something like:
[binaries]
c = 'arm-none-eabi-gcc'

[buildin_option] # New section?
b_pch = false
b_staticpic = false

[properties]
objcopy = 'arm-none-eabi-objcopy'
objcopy_args = [
    ...]
c_args = [
    ...]
c_link_args = [
    ...]

[host_machine]
...
Thanks to @Matt for the support here.
My cross build definition file looks like:
[binaries]
...

[properties]
...
project_configuration = [
    'b_pch=false',
    'b_staticpic=false']
...

[host_machine]
...
and in my root meson.build, I have:
# Define the project
project('Project', 'c', default_options: meson.get_cross_property('project_configuration'))
...
This way, I only need to run:
meson debug --cross-file boards/SensGate/meson_config_stm32l4_gcc8.ini
cd debug && ninja hex
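A follow-up note, based on my reading of the Meson release notes (so verify against the Meson version you are using): since Meson 0.56, cross and native files support a [built-in options] section, which makes the get_cross_property workaround above unnecessary:

```ini
[binaries]
c = 'arm-none-eabi-gcc'

[built-in options]
b_pch = false
b_staticpic = false

[properties]
objcopy = 'arm-none-eabi-objcopy'
```

With this, plain `meson debug --cross-file <file>` picks up the option values directly.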

Swift Package Manager C-interop: Non-system libraries

How can I use the Swift Package Manager to include C code (in my case, a single .c file and a header file) without requiring the user to install my C library into /usr/local/lib?
I had thought to create a Package in a subdirectory of my main package containing the header + lib, and use relative paths, and finally build with swift build -Xlinker ./relative/path/to/mylib, however I'm not having any success resolving the dependency since it's expected to be a standalone git repository. Error message is:
error: failed to clone; fatal: repository '/absolute/path/to/mylib' does not exist
Moreover it's not clear to me whether using the -Xlinker flag is the correct approach.
I can't use a bridging header with a pure SwiftPM approach and installing my library system-wide seems overkill as well as not very portable.
Any ideas?
I have done this in this project on GitHub. It replaces pthread_once_t by wrapping it in C and re-exposing it to Swift. It was done as a fun exercise in getting around what Swift tries to limit you to, since pthread_once_t and dispatch_once are not available directly.
Here is a trimmed-down version of the Package.swift file:
// swift-tools-version:4.0
import PackageDescription

let package = Package(
    name: "Once",
    products: [
        .library(
            name: "Once",
            targets: ["OnceC", "Once"]),
    ],
    dependencies: [
    ],
    targets: [
        .target(
            name: "OnceC",
            dependencies: [],
            path: "Sources/OnceC"),
        .target(
            name: "Once",
            dependencies: ["OnceC"],
            path: "Sources/Swift"),
        .testTarget(
            name: "OnceTests",
            dependencies: ["Once"]),
    ]
)
You can easily replace the library product with an executable. The main point is that the product's targets need to contain both the C and Swift targets needed to build.
Then, in your targets section, make sure the Swift target lists the C target as a dependency.
You can learn more about the required layout for C targets in the SwiftPM Usage.md here
C language targets
C language targets are similar to Swift targets, except that C language libraries should contain a directory named include to hold the public headers. To allow a Swift target to import a C language target, add a target dependency in the manifest file. Swift Package Manager will automatically generate a modulemap for each C language library target for these three cases:
1. If include/Foo/Foo.h exists and Foo is the only directory under the include directory, then include/Foo/Foo.h becomes the umbrella header.
2. If include/Foo.h exists and include contains no other subdirectory, then include/Foo.h becomes the umbrella header.
3. Otherwise, if the include directory only contains header files and no other subdirectory, it becomes the umbrella directory.
In case of complicated include layouts, a custom module.modulemap can be provided inside include. SwiftPM will error out if it cannot generate a modulemap with respect to the above rules.
For executable targets, only one valid C language main file is allowed, i.e. it is invalid to have main.c and main.cpp in the same target.
The only other important thing is how you actually write your #import in the C code once it is compiled as a compatible module. If you use the include/Foo/Foo.h organization, you need to use #include <Foo/Foo.h>, and if you use include/Foo.h, you can use #import "Foo.h".
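To make the second rule above concrete, here is a minimal illustrative layout (names are hypothetical) where SwiftPM generates the modulemap automatically and the Swift target can import the C module directly:

```
Sources/
  OnceC/
    include/
      OnceC.h        <- only header under include: becomes the umbrella header for module OnceC
    OnceC.c
  Swift/
    Once.swift       <- can now use `import OnceC`
```

The module name Swift imports comes from the C target name, not from the header file name, so the target dependency in the manifest is what makes the import resolve.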

Biometric matching with futronic sdk using nodejs server

I have successfully captured biometric prints and posted them to the Node server using the Futronic SDK. I want to be able to use this library for matching on the server as well, because that's where the biometric prints for all users are stored. I stumbled upon the node-ffi library, which lets me define equivalents of the C functions that I have exported and compiled into a .dll file.
Now the challenge is that I have tried to port the ftrAnsiSDK functions, but the ftrScanAPI.dll and ftrAnsiSDK.dll files could not be compiled together. It gives this error:
...collect2.exe [Error] ld returned 5 exit status
When I compile and export the functions that do not depend on these two libraries, my code works fine and the functions are easily exported and used in the Node server.
Here is the link to the repo. It contains the lib and .dll libraries that are being used.
For the server code here is a snippet of what I am trying to achieve:
var libm = ffi.Library('lib/visystem', {
    'HelloWorld': ['void', []],
    'PrintErrorMessage': ['void', ['int']],
    'CaprureImage': ['int', ['int', 'int', 'int']]
});
The HelloWorld and PrintErrorMessage methods are the test case I used to make sure functions were being exported before proceeding to the main functions that depend on the Futronic lib and SDK (you can see the function definitions in the code in the repo).
I am currently using a 64-bit operating system, and I installed the same program on a 32-bit machine to be sure, but it still did not compile and export the functions. The code editor I am using is Dev-C++.
Can anyone help, or even give me a hint on how to achieve this goal?
As a disclaimer, I'm not familiar with the Dev-C++ IDE or MinGW development.
However, after a cursory look at your github repo, according to your libvisystem.def file, it appears that the only functions that are exported by your DLL are:
HelloWorld
PrintErrorMessage
ReadTemplateFile
SaveBmpFile
SaveTemplateFile
This is also confirmed by inspecting the libvisystem.a library header (the original post included a screenshot of the symbol listing here).
So you should probably start by manually adding the rest of the exported functions from your dll.h to the def file, in a similar manner to the ones that are already there, and see if that changes anything.
NOTE:
I'm not sure whether the __declspec(dllexport) directive is ignored by the Dev-C++ compiler/linker and it uses the def file instead. Perhaps others on SO have an idea.
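A hypothetical sketch of what the extended libvisystem.def might look like; everything beyond the five names listed above is a placeholder, and the real names must come from the declarations in dll.h:

```
LIBRARY libvisystem
EXPORTS
    HelloWorld
    PrintErrorMessage
    ReadTemplateFile
    SaveBmpFile
    SaveTemplateFile
    CaprureImage
    ...
```

Each function that node-ffi's ffi.Library declaration references has to appear under EXPORTS (or otherwise be exported), or the symbol lookup will fail at load time.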
