How to configure build input packages/dependencies within Nix development shells? (alsa)

I'm not using NixOS but I wrote a flake that I'm using to generate a dev shell to build a Rust project (this is essentially just the audio example from the Bevy repository). My issue is that I encounter the following error when attempting to run the project in the dev shell:
$ nix --extra-experimental-features nix-command --extra-experimental-features flakes develop
bash-4.4$ cargo run
Finished dev [unoptimized + debuginfo] target(s) in 6.62s
Running `target/debug/audio`
ALSA lib pcm_dmix.c:1075:(snd_pcm_dmix_open) unable to open slave
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: NoDevice', /home/a/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_audio-0.5.0/src/audio_output.rs:22:67
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
bash-4.4$
One of the project's dependencies is the Bevy crate, which requires ALSA, so I'm assuming the problem is that the ALSA package exposed by the dev shell is misconfigured. My actual system uses PipeWire (which, as I understand it, also uses ALSA as a backend), and I tried adding ALSA and PipeWire to the flake's buildInputs, but I'm not sure how these are supposed to be configured within the dev shell.
According to this issue on the Bevy repository, the usual fix for this problem, at least on Arch-based distros, is to install the pipewire-alsa package. I'm not sure what the equivalent of doing that is in the context of a Nix dev shell, though, since there is no pipewire-alsa package in nixpkgs that I can add to my flake. So with that said, how should I go about configuring ALSA or PipeWire in the dev shell?
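For reference, the devShell part of my flake looks roughly like this (a sketch: alsa-lib and pipewire are the nixpkgs attribute names, alsaLib in older revisions; pkg-config is included so Cargo build scripts can find the ALSA headers):
# flake.nix (sketch)
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      # the environment that `nix develop` enters
      devShell.x86_64-linux = pkgs.mkShell {
        buildInputs = with pkgs; [ cargo rustc pkg-config alsa-lib pipewire ];
      };
    };
}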

Related

How to start a code-server terminal with chroot?

I have a code-server instance that runs on Android with termux. For university reasons, there are times when I must share my vscode environment, and I would not like to expose my entire system and personal files through the terminal.
So I was wondering whether it is possible to expose a terminal from an Alpine distro with chroot (proot in the case of termux) by default every time code-server opens a terminal.
After some time reading the vscode config, I realized that you can use the shell option to force the terminal to start inside Alpine, exposing neither my files nor my Android system under termux:
"terminal.integrated.shell.linux":"/data/data/com.termux/files/usr/bin/startalpine"

Set up kenlm for Windows

The official website makes it pretty clear that there is no Windows support for kenlm. There is a Windows tag at the GitHub repository, but it seems to be maintained only sporadically by a few contributors.
How do you set up kenlm for Windows, then?
The new DeepSpeech PlayBook also includes instructions for setting up a Docker image and running training from within a Docker container. If you have Docker on Windows, this might be another solution.
The information for building a new Scorer is still in a PR, but may also be useful.
The solution is to use Ubuntu on Windows through the Windows Subsystem for Linux (WSL).
Get WSL for Windows
From your Ubuntu bash, navigate to the folder where you want to do the setup. You can access the Windows file system through the /mnt/c/ folder, which you can find at the root directory.
From there simply follow the official instructions: clone the git repo, then run cmake .. and make -j2 to build the project (after first installing the necessary packages in your Ubuntu system).
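Put together, the Ubuntu-side setup looks roughly like this (a sketch; the dependency list follows kenlm's build notes and exact package names may vary by Ubuntu release):
# build dependencies (names may vary by release)
sudo apt-get update
sudo apt-get install -y build-essential cmake libboost-all-dev zlib1g-dev libbz2-dev liblzma-dev
# clone and build kenlm
git clone https://github.com/kpu/kenlm.git
cd kenlm
mkdir -p build && cd build
cmake ..
make -j2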
Obviously, you must train the models or scorers using the Linux bash. You can also use these models from Windows using the kenlm python library.
E.g.
The two steps to build a scorer for the DeepSpeech model, as described here, should be executed from your Ubuntu system. But after you have the scorer, you should be able to run the command
deepspeech --model deepspeech-0.9.3-models.pbmm --scorer kenlm.scorer --audio audio.wav
from Windows. However, once you have WSL there's no need to do this work from Windows. Things will work nicely from your Ubuntu system.
I faced the same problem and solved it by building the kenlm wheel from a Cygwin terminal, as the home page advises (pip wheel pypi-kenlm).
I've also uploaded a wheel to PyPI called kenlm-cygwin, but it only supports Python 3.7.

Build app in AppCenter that uses Carthage

I've inherited a project that builds with Carthage. Using Xcode 12, I was faced with this error:
fatal error: /Applications/Xcode_12.3.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/lipo: /Users/runner/Library/Caches/org.carthage.CarthageKit/DerivedData/12.3_12C33/AEXML/4.6.0/Build/Intermediates.noindex/ArchiveIntermediates/AEXML iOS/IntermediateBuildFilesPath/UninstalledProducts/iphoneos/AEXML.framework/AEXML and /Users/runner/Library/Caches/org.carthage.CarthageKit/DerivedData/12.3_12C33/AEXML/4.6.0/Build/Products/Release-iphonesimulator/AEXML.framework/AEXML have the same architectures (arm64) and can't be in the same fat output file
Building universal frameworks with common architectures is not possible. The device and simulator slices for "AEXML" both build for: arm64
Rebuild with --use-xcframeworks to create an xcframework bundle instead.
A quick Google search brought me to this, which works on my local machine.
Using AppCenter for the first time, I created a Pre-Build script with the following:
#!/usr/bin/env bash
# Pre-build
# See: https://learn.microsoft.com/en-us/appcenter/build/custom/scripts/#pre-build
echo "Pre-build has started."
sh ./carthage.sh update --use-submodules
echo "Pre-build has ended."
I assume Carthage should be used to build this? I get this error in AppCenter:
*** Building scheme "AEXML iOS" in AEXML.xcodeproj
A shell task (/usr/bin/xcrun lipo -create /Users/runner/Library/Caches/org.carthage.CarthageKit/DerivedData/12.3_12C33/AEXML/4.6.0/Build/Intermediates.noindex/ArchiveIntermediates/AEXML\ iOS/IntermediateBuildFilesPath/UninstalledProducts/iphoneos/AEXML.framework/AEXML /Users/runner/Library/Caches/org.carthage.CarthageKit/DerivedData/12.3_12C33/AEXML/4.6.0/Build/Products/Release-iphonesimulator/AEXML.framework/AEXML -output /Users/runner/work/1/s/Carthage/Build/iOS/AEXML.framework/AEXML) failed with exit code 1:
fatal error: /Applications/Xcode_12.3.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/lipo: /Users/runner/Library/Caches/org.carthage.CarthageKit/DerivedData/12.3_12C33/AEXML/4.6.0/Build/Intermediates.noindex/ArchiveIntermediates/AEXML iOS/IntermediateBuildFilesPath/UninstalledProducts/iphoneos/AEXML.framework/AEXML and /Users/runner/Library/Caches/org.carthage.CarthageKit/DerivedData/12.3_12C33/AEXML/4.6.0/Build/Products/Release-iphonesimulator/AEXML.framework/AEXML have the same architectures (arm64) and can't be in the same fat output file
Building universal frameworks with common architectures is not possible. The device and simulator slices for "AEXML" both build for: arm64
Rebuild with --use-xcframeworks to create an xcframework bundle instead.
How to build in AppCenter?
--use-xcframeworks
This option is only available from Carthage 0.37.0 onward. App Center's Carthage version is 0.36.0; they need to update the Carthage used in App Center projects.
Can you look at the logs and see if this script is being run? Or is App Center running its own carthage binary?
EDIT
The good news is that App Center picks up Carthage 0.37.0! I added an appcenter-post-clone.sh to my project directory:
#!/usr/bin/env bash
set -e
set -x
carthage update --cache-builds --use-xcframeworks --platform ios
carthage version
echo "" > Cartfile
echo "" > Cartfile.resolved
App Center recognises that --use-xcframeworks is used and therefore that 0.37.0 is required.
NOTE: I'm emptying the Cartfile* files so that App Center doesn't run its native carthage command (which it does on noticing Cartfile and Cartfile.resolved).
EDIT 2
I'm now considering using something like carthage_cache in App Center, as the carthage checkout and build end up taking a lot of time.
Try this one (you may need to upgrade your Carthage first):
carthage update --no-use-binaries --use-xcframeworks --platform iOS
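If the agent's Carthage is older than 0.37.0, one option (a sketch, assuming Homebrew is available on the macOS build agents) is to upgrade it at the top of the same script:
#!/usr/bin/env bash
# upgrade Carthage before the xcframeworks build
brew update
brew upgrade carthage || brew install carthage
carthage version    # confirm 0.37.0 or later
carthage update --no-use-binaries --use-xcframeworks --platform iOS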

VOLTTRON install on Raspbian Buster

Can I get a tip for installing on Raspbian Buster? I'm hung up on the install directions at the step that checks the status of the RabbitMQ server. Bash console output:
(volttron) pi@raspberry:~/Desktop/volttron $ echo 'export RABBITMQ_HOME=$HOME/rabbitmq_server/rabbitmq_server-3.7.7'|sudo tee --append ~/.bashrc
export RABBITMQ_HOME=$HOME/rabbitmq_server/rabbitmq_server-3.7.7
(volttron) pi@raspberry:~/Desktop/volttron $ source ~/.bashrc
pi@raspberry:~/Desktop/volttron $ RABBITMQ_HOME/sbin/rabbitmqctl status
bash: RABBITMQ_HOME/sbin/rabbitmqctl: No such file or directory
There are a few tracebacks earlier in the installation...
In case it makes a difference, here is the entire bash console session. In the git gist link I named the file install.py even though it's just bash commands copied and pasted per the install directions...
pi@raspberry:~/Desktop $ git clone https://github.com/VOLTTRON/volttron --branch releases/7.x
It looks like there are a couple of different issues going on here:
The issue you quote above (RABBITMQ_HOME/sbin/rabbitmqctl: No such file or directory) is that your shell isn't finding the rabbitmqctl command. It looks like you added the RABBITMQ_HOME environment variable to your .bashrc, but used the string RABBITMQ_HOME instead of the variable expansion $RABBITMQ_HOME when you tried to run the command. Try running it as $RABBITMQ_HOME/sbin/rabbitmqctl status instead.
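Concretely, the difference is just the leading $:
# without $, the shell treats RABBITMQ_HOME as a literal directory name
RABBITMQ_HOME/sbin/rabbitmqctl status     # -> No such file or directory
# with $, the shell expands the variable you appended to ~/.bashrc
$RABBITMQ_HOME/sbin/rabbitmqctl status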
The rabbitmqctl status command checks the status of the RabbitMQ application, but I don't think you've done anything to start it yet (that happens when you bootstrap the platform and/or start the platform configured to use the RMQ broker).
I think the traces earlier in the installation process are problematic (it appears to be the same error hit two different ways), but you just haven't run into them yet. I haven't seen any issues building gevent on the RPi 4 with Buster (though it is pretty slow), but the ctypes error makes me wonder if there's an issue with the underlying C library it is trying to build on top of. I did notice that you're getting amd64 Erlang packages; are you running Raspbian on an x86 processor? (If so, this isn't a permutation we've tried, and you may be hitting a package-compatibility edge case we haven't seen.)
One thing to try is to manually install cython into your virtual environment and then run the bootstrap script again with the virtual environment activated. You could also try pip install gevent==20.6.1 directly in that virtual environment (this is what the bootstrap script was doing at the failure point). VOLTTRON depends on gevent, so if that isn't installing, the platform won't be able to run.
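As a sketch (assuming the virtual environment the bootstrap script created at env/ inside the VOLTTRON checkout):
# from the VOLTTRON checkout, with the virtual environment active
source env/bin/activate
pip install cython            # sometimes helps gevent build on ARM
pip install gevent==20.6.1    # the version the bootstrap was installing when it failed
python bootstrap.py           # then re-run the bootstrap script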

Heroku C application. Server using TCP/UDP sockets

I am working on a server application using BSD sockets; it's a C project and has been built on Heroku using a custom buildpack.
I can't figure out how to execute the binary afterwards.
The buildpack contains:
bin/
detect.sh
compile.sh
release.sh
release.sh
#!/usr/bin/env bash
# bin/release <build-dir>
cat <<EOF
---
config_vars:
  PATH: /app/bin:/usr/local/bin:/usr/bin:/bin
EOF
The binary builds fine using make as reported in the activity feed of the dashboard.
I need to run the server so I can connect to it using the client I have developed from my local machine.
EDIT: I have added a Procfile; to run the binary, the Procfile contents are:
spinup: bin/serverUDP 1071
serverUDP is the name of the binary file inside the bin/ folder of the application.
EDIT:
Build log:
-----> Fetching set buildpack https://github.com/damorton/heroku-buildpack-c.git... done
-----> C app detected
-----> Compiling with Make
make: `vendor/bin/busltee' is up to date.
-----> Discovering process types
Procfile declares types -> spinup
-----> Compressing... done, 4K
-----> Launching... done, v20
https://hangman-udp.herokuapp.com/ deployed to Heroku
Logs:
2015-12-04T10:45:25.977074+00:00 heroku[spinup.1]: Process exited with status 0
2015-12-04T10:45:25.992332+00:00 heroku[spinup.1]: State changed from up to crashed
2015-12-04T10:51:53.697297+00:00 heroku[api]: Deploy ebe93d8 by damorton@xmail.com
2015-12-04T10:51:53.697370+00:00 heroku[api]: Release v21 created by damorton@xmail.com
2015-12-04T10:51:55.209687+00:00 heroku[spinup.1]: Starting process with command `bin/serverUDP 1071`
2015-12-04T10:51:55.814271+00:00 heroku[spinup.1]: State changed from starting to up
2015-12-04T10:51:57.750368+00:00 heroku[spinup.1]: State changed from up to crashed
Command after deploy:
heroku ps:scale spinup=1
I found out that the Procfile is used to execute the binary after the build. The problem I was having wasn't that the binary wasn't being executed; it was that the binary wasn't being built. So I used a cmake buildpack to install cmake, then used cmake to build my project. Everything worked out fine on the build side except for linking to a relative directory for the shared libs.
For anyone with the same problem:
Use buildpacks for cmake and then C
Use the Procfile to execute the binary with arguments after the build (see the sketch below)
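A sketch of that setup (the cmake buildpack URL is a placeholder for whichever one you pick; the C buildpack URL is the one from the build log above):
# ordering matters: the cmake buildpack has to run before the C buildpack
heroku buildpacks:add <your-cmake-buildpack-url>    # placeholder
heroku buildpacks:add https://github.com/damorton/heroku-buildpack-c.git
git push heroku master
heroku ps:scale spinup=1    # start the process declared in the Procfile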
Assuming it built and deployed successfully, a single dyno should load the slug and execute the command. To scale and/or change the dyno configuration, you'll need to issue a command with your chosen options. For example:
$ heroku ps:scale web=2 queue=1
This would start three dynos: two for web and a single one for queue processes. You can also scale the individual power of the dynos by increasing the RAM and CPU share using a similar command:
$ heroku ps:scale web=2:standard-2x queue=1
