How to fake MacPorts into using my build script for ffmpeg?

I have my own bash script to configure ffmpeg; it successfully updates my git repo, builds, tests, and installs everything correctly.
Basically, I want to bypass the MacPorts environment and configuration but still have MacPorts recognize my build, for ffmpeg only. I don't want MacPorts to look in /usr/local or some other location; I want it installed in /opt/local.
So it probably boils down to this: how do I completely disable the MacPorts environment and just launch a subshell running my script?
I have created a tarball, checksums, etc. Here is my Portfile:
PortSystem      1.0
PortGroup       muniversal 1.0
name            ffmpeg
epoch           1
version         9.9.9
revision        9
license         LGPL-2.1+
categories      multimedia
maintainers     nomaintainer
platforms       darwin
homepage        http://www.ffmpeg.org/
master_sites    file:///Volumes/Apps_Media/my_repo
use_zip         yes
checksums       {they work}
depends_build   port:pkgconfig \
                port:gmake \
                port:texinfo
use_configure   no
build.cmd       $HOME/bin/configFFMPEG
MacPorts chokes on these lines in ffmpeg's configure:
FFmpeg/configure: line 3596: ffbuild/config.log: Operation not permitted
FFmpeg/configure: line 3597: ffbuild/config.log: Operation not permitted
The lines in question are:
echo "# $0 $FFMPEG_CONFIGURATION" > $logfile
set >> $logfile

MacPorts runs builds as the macports user, not your normal user or root. Make sure that user can both read your script and write to the locations where your script is trying to write.
Even though you invoke MacPorts with superuser privileges, it will not use these privileges while building software.
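A quick way to verify this is to reproduce the build step as the macports user yourself. This is only a sketch: the script path is the one from the question, and the chown target is a placeholder for wherever your build tree actually lives:
sudo -u macports "$HOME/bin/configFFMPEG"   # rerun the failing configure as the build user
sudo chown -R macports /path/to/FFmpeg      # if it cannot write ffbuild/config.log, grant access
sudo chmod -R u+rwX /path/to/FFmpeg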


How to completely download Anaconda Cloud bz2 files and dependencies for offline package installation? [duplicate]

I want to create a Python environment with the data science libraries NumPy, Pandas, PyTorch, and Hugging Face Transformers. I use Miniconda to create the environment, then download and install the libraries. There is a flag for conda install, --download-only, to download the required packages without installing them so that they can be installed afterwards from a local directory. But even when conda just downloads the packages without installing them, it also extracts them.
Is it possible to download the packages without extracting them and extract them afterwards before installation?
There is no simple command in the CLI to prevent the extraction step. The extraction is regarded as part of the FETCH operation to populate the package cache before running the LINK operation to transfer the package to the specified environment.
The alternative would be to do something manually. Naively, one could search Anaconda Cloud and download packages by hand; however, it is probably better to go through the solver to ensure package compatibility. All the info for the operations to be run can be viewed by including the --json flag. This can be filtered down to just the tarball URLs, which can then be downloaded directly. Here's a script along these lines (assuming Linux/Unix):
File: conda-download.sh
#!/bin/bash -l
conda create -dn null --json "$@" |\
grep '"url"' | grep -oE 'https[^"]+' |\
xargs wget -c
which can be used as
./conda-download.sh -c conda-forge -c pytorch numpy pandas pytorch transformers
that is, it accepts all arguments conda create would, and will download all the tarballs locally.
Ignoring Cached Packages
If you already have some packages cached then the above will not redownload them. Instead, if you wish to download all tarballs needed for an environment, then you could use this alternate version which overrides the package cache using an empty temporary directory:
File: conda-download-all.sh
#!/bin/bash -l
tmp_dir=$(mktemp -d)
CONDA_PKGS_DIRS=$tmp_dir conda create -dn null --json "$@" |\
grep '"url"' | grep -oE 'https[^"]+' |\
xargs wget -c
rm -r "$tmp_dir"
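Once the tarballs are on disk, one way to consume them offline is to lay them out as a local channel. This is a sketch: it assumes conda-build is installed (for conda index) and a linux-64 platform:
mkdir -p local-channel/linux-64
mv *.tar.bz2 local-channel/linux-64/   # move the downloaded tarballs into the channel
conda index local-channel              # write repodata.json for the channel
conda create -n offline-env --offline -c file://$PWD/local-channel numpy pandas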
Do you really want to use conda-pack? That lets you archive a conda environment for reproduction without using the internet or re-solving dependencies. To just prevent re-solving you can also use conda env export --explicit, but that still ties you to the source (internet or a local disk repository).
If you have a static (read-only) environment and want to really reduce Docker image size, you can volume-mount the environment at runtime. You would need to match the file paths (i.e., mount at the same path, /opt/anaconda => /opt/anaconda).
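For completeness, a minimal conda-pack round trip looks something like this (environment name and target path are placeholders):
conda pack -n myenv -o myenv.tar.gz       # archive the environment on the source machine
mkdir -p /opt/anaconda/envs/myenv         # on the target, unpack at the matching path
tar -xzf myenv.tar.gz -C /opt/anaconda/envs/myenv
source /opt/anaconda/envs/myenv/bin/activate
conda-unpack                              # rewrite prefix paths baked into the env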

How do I run End to End tests for the Azure IoT C SDK on a Raspberry Pi?

I'm trying to compile and test the azure-iot-sdk-c on a raspberry pi. How do I compile it on the raspberry pi and then run the E2E tests provided in the SDK?
In order to achieve this, there are a couple of approaches you could take. You could download a cross-compiler for the Pi and keep the source code only on your development machine. Then, when you wanted to run code or tests on the Pi, you would use the cross-compiler to produce output that runs on the Pi, transfer the executables over, and send the results back to the development machine. This approach would probably be quite fast, and if your project contains many files, it might be a good way to go about it. Setting up a cross-compiler isn't the simplest thing to do, but there are many documented cases online of people who have already done it.
The other approach would be to develop the source code on your development machine but build the code for the Pi on the Pi itself. This removes the need to set up a cross-compiler and it makes getting the test results back to your development machine very simple.
You can use your text editor to develop the code on your development machine, then use rsync to transfer your source files to the Raspberry Pi. Finally, you can install Ruby and Ceedling (a C unit-testing tool) on both your development machine and the Pi to assist in running tests. Here's how to make it all happen.
Set Up SSH Keys
This step is important because it allows you to transfer files from your development machine to the Pi and execute commands remotely without having to type in a username and password every time. First, make sure you have an SSH key generated on your development machine. If you don’t, or if you’re not sure, check out this excellent GitHub article that explains how to generate one.
Now if you open up your ~/.ssh directory (or your/user/directory/.ssh on Windows) on your development machine, you should have a file called id_rsa.pub. This is the "public" piece of your SSH key. You need to transfer this file to the Raspberry Pi so that it can recognize you as an approved user. Do that with the following command:
scp ~/.ssh/id_rsa.pub user@remote.host:pubkey.txt
Make sure to replace ‘user’ with a username on the Raspberry Pi and ‘remote.host’ with the IP address of the Pi.
Once you’ve done that, you need to append the key to the “authorized_keys” file on the Pi. To do so you will need to SSH into the Pi and manually edit/create the file. That can be done as follows:
scp ~/.ssh/id_rsa.pub user@remote.host:pubkey.txt
ssh user@remote.host
mkdir ~/.ssh
cat pubkey.txt >> ~/.ssh/authorized_keys
rm ~/pubkey.txt
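On systems that ship it, ssh-copy-id does all of the above in one command:
ssh-copy-id user@remote.host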
Install ‘rsync’
The next step is to install rsync, a utility that allows you to synchronize directories between two computers. When we make changes on our local machine, rsync will transfer those changes to the Pi for testing. rsync is smart enough to only transfer files that have been updated since the last transfer, which speeds up the process. For rsync to work, it must be installed on both your development machine and the Raspberry Pi. To install it on the Pi, execute the following command.
sudo apt-get install rsync
The process for installing rsync on your development machine will vary greatly depending on which OS you are running. On the Mac, it’s already installed. Some Linux distros come with it as well. Windows, on the other hand, is a little behind the game. Search Google for “Installing rsync on Windows” for instructions on getting it setup.
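As a quick sanity check before wiring rsync into the build, a manual push looks like this (user, address, and paths are placeholders):
rsync -rv --exclude=build/ ./ pi@192.168.1.10:~/my_project/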
Install Ruby
Ruby is another component that needs to be installed on both the development machine and the target. Ruby is a scripting language that Ceedling uses to automate unit test execution. Again, refer to the all-wise Google for instructions on installing the latest version on your dev machine. To install Ruby on the Raspberry Pi, use the following command:
sudo apt-get install ruby
Install Rake
Rake is a Ruby gem (package) that provides build automation support similar to ‘make’. Once you have Ruby installed, Rake is as simple to install as typing the following:
sudo gem install rake
Set Up a Ceedling Project
With a Ceedling project in place, we can already write code locally and execute tests on our development machine using the command "rake test:all".
The final thing we need to do is set up a custom rake task that will run tests on the Pi without having to manually SSH into it. Look in the root directory of your Ceedling project and you will see a file named Rakefile.rb. This is where we will put our custom rake task. Add the following to the bottom of the file:
desc "Run rake test:all on RPi with latest changes"
desc "Update the RPi with the latest changes on dev machine."
task :update_pi_source do
#send the latest changes to the pi
puts cmd = "rsync -r -v . #{REMOTE_RPI_USER}##{REMOTE_RPI_IP_ADDR}:#{REMOTE_RPI_PROJ_ROOT} --exclude=#{PROJECT_BUILD_ROOT}"
system(cmd)
end
desc "Run rake test:all in the project directory on the pi"
task :run_all_tests_pi do
#execute tests on the pi
puts cmd = "ssh #{REMOTE_RPI_USER}##{REMOTE_RPI_IP_ADDR} "cd #{REMOTE_RPI_PROJ_ROOT} && rake test:all""
system(cmd)
end
task :pi_test_all > [:update_pi_source, :run_all_tests_pi] do
end
This actually defines three rake tasks. The first one, update_pi_source, uses rsync to update the source code on the Pi. The second, run_all_tests_pi, uses SSH to execute the commands that compile the code and run the tests on the Pi. The third, pi_test_all, is just a wrapper that combines the first two.
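With those in place, a single command from the development machine syncs the source and runs the whole suite on the Pi:
rake pi_test_all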
Hope it helps.

How do I run only some makefile commands as root?

I have an install target in my Makefile and wish to run some commands that install shared libraries (requires root permissions) and some that install config files into $HOME/.config
Usually I'd just tell the user to run sudo make install, however that results in the config file being installed to /root/.config instead of the actual users config directory.
How do I work around this issue?
Thanks a lot.
You can just change the owner and permissions of the config files, although a Makefile that installs per-user configuration files is not a good idea anyway: it would ideally need to find out how many users exist on the system and install the files for each of them.
If you use the install command, you could even do
install -v -m644 -o$(USERNAME) -g$(USERGROUP) $(FILE) $(USERHOME)/.config/$(FILE)
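To make $(USERNAME), $(USERGROUP), and $(USERHOME) meaningful under sudo make install, one option (a sketch, not from the original answer) is to resolve the invoking user inside the recipe: sudo sets SUDO_USER to the name of the user who ran it. In shell terms, with FILE as a placeholder:
real_user="${SUDO_USER:-$USER}"                        # the user who invoked sudo, or the current user
real_home=$(getent passwd "$real_user" | cut -d: -f6)  # their home directory from the passwd database
install -v -m644 -o "$real_user" "$FILE" "$real_home/.config/$FILE"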
A better approach would be to let the program install the default config files from a system wide directory when it doesn't find them, for example
/usr/share/my-application/default-config/config.conf
and then the program would search for the files in the appropriate directory and copy them to the $HOME directory of the user currently running the program; that applies if the files are meant to be modifiable by the user, otherwise you just access them from their system-wide location.
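A minimal sketch of that first-run logic, with hypothetical paths:
conf="$HOME/.config/my-application/config.conf"
default="/usr/share/my-application/default-config/config.conf"
if [ ! -f "$conf" ]; then               # no per-user config yet: copy the system-wide default
    mkdir -p "$(dirname "$conf")"
    cp "$default" "$conf"
fi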

Why is my debian postinst script not being run?

I have made a .deb of my app using fpm:
fpm -s dir -t deb -n myapp -v 9 -a all -x "*.git" -x "*.bak" -x "*.orig" \
--after-remove debian/postrm --after-install debian/postinst \
--description "Automated build." -d mysql-client -d python-virtualenv home
Among other things, the postinst script is supposed to create a user for the app:
#!/bin/sh
set -e

APP_NAME=myapp

case "$1" in
    configure)
        virtualenv /home/$APP_NAME/local
        #supervisorctl start $APP_NAME
        ;;

    # http://www.debian.org/doc/manuals/securing-debian-howto/ch9.en.html#s-bpp-lower-privs
    install|upgrade)

        # If the package has default file it could be sourced, so that
        # the local admin can overwrite the defaults
        [ -f "/etc/default/$APP_NAME" ] && . /etc/default/$APP_NAME

        # Sane defaults:
        [ -z "$SERVER_HOME" ] && SERVER_HOME=/home/$APP_NAME
        [ -z "$SERVER_USER" ] && SERVER_USER=$APP_NAME
        [ -z "$SERVER_NAME" ] && SERVER_NAME=""
        [ -z "$SERVER_GROUP" ] && SERVER_GROUP=$APP_NAME

        # Groups that the user will be added to, if undefined, then none.
        ADDGROUP=""

        # create user to avoid running server as root
        # 1. create group if not existing
        if ! getent group | grep -q "^$SERVER_GROUP:" ; then
            echo -n "Adding group $SERVER_GROUP.."
            addgroup --quiet --system $SERVER_GROUP 2>/dev/null || true
            echo "..done"
        fi

        # 2. create homedir if not existing
        test -d $SERVER_HOME || mkdir $SERVER_HOME

        # 3. create user if not existing
        if ! getent passwd | grep -q "^$SERVER_USER:"; then
            echo -n "Adding system user $SERVER_USER.."
            adduser --quiet \
                --system \
                --ingroup $SERVER_GROUP \
                --no-create-home \
                --disabled-password \
                $SERVER_USER 2>/dev/null || true
            echo "..done"
        fi

        # … and a bunch of other stuff.
It seems like the postinst script is being called with configure, but not with install, and I am trying to understand why. In /var/log/dpkg.log, I see the lines I would expect:
2012-06-30 13:28:36 configure myapp 9 9
2012-06-30 13:28:36 status unpacked myapp 9
2012-06-30 13:28:36 status half-configured myapp 9
2012-06-30 13:28:43 status installed myapp 9
I checked that /etc/default/myapp does not exist. The file /var/lib/dpkg/info/myapp.postinst exists, and if I run it manually with install as the first parameter, it works as expected.
Why is the postinst script not being run with install? What can I do to debug this further?
I think the example script you copied is simply wrong. postinst is not supposed to be called with any install or upgrade argument, ever.
The authoritative definition of the dpkg format is the Debian Policy Manual. The current version describes postinst in chapter 6 and only lists configure, abort-upgrade, abort-remove, and abort-deconfigure as possible first arguments.
I don't have complete confidence in my answer, because your bad example is still up on debian.org and it's hard to believe such a bug could slip through.
I believe the answer provided by Alan Curry is incorrect, at least as of 2015 and beyond.
There must be some fault with the way that your package is built, or an error in the postinst file, which is causing your problem.
You can debug your install by adding the -D (debug) option to your command line, i.e.:
sudo dpkg -D2 -i yourpackage_name_1.0.0_all.deb
-D2 should sort out this type of issue
For the record, the debug levels are as follows:
Number  Description
     1  Generally helpful progress information
     2  Invocation and status of maintainer scripts
    10  Output for each file processed
   100  Lots of output for each file processed
    20  Output for each configuration file
   200  Lots of output for each configuration file
    40  Dependencies and conflicts
   400  Lots of dependencies/conflicts output
  1000  Lots of drivel about e.g. the dpkg/info dir
  2000  Insane amounts of drivel
 10000  Trigger activation and processing
 20000  Lots of output regarding triggers
 40000  Silly amounts of output regarding triggers
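The levels are bit flags and can be ORed together, so for example:
sudo dpkg -D3 -i yourpackage_name_1.0.0_all.deb   # 3 = 1 + 2: progress info plus maintainer-script tracing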
The install command calls the configure step, and in my experience the postinst script will always be run. One thing that may trip you up is that the postrm script of the "old" version, if you are upgrading a package, is run after the current package's preinst script; this can cause havoc if you don't realise what is going on.
From the dpkg man page:
Installation consists of the following steps:
1. Extract the control files of the new package.
2. If another version of the same package was installed before the new installation, execute prerm script of the old package.
3. Run preinst script, if provided by the package.
4. Unpack the new files, and at the same time back up the old files, so that if something goes wrong, they can be restored.
5. If another version of the same package was installed before the new installation, execute the postrm script of the old package. Note that this script is executed after the preinst script of the new package, because new files are written at the same time old files are removed.
6. Configure the package.
Configuring consists of the following steps:
1. Unpack the conffiles, and at the same time back up the old conffiles, so that they can be restored if something goes wrong.
2. Run postinst script, if provided by the package.
This is an old issue that has already been resolved, but it seems to me that the accepted solution is not totally correct, so I want to provide information for those who, like me, are having this same problem.
Chapter 6.5 of the Debian Policy Manual details all the parameters with which the preinst and postinst files are called.
At https://wiki.debian.org/MaintainerScripts the installation and uninstallation flow is detailed.
Watch what happens in the following case:
apt-get install package
- runs preinst install, and then postinst configure
apt-get remove package
- runs postrm remove, and the package is left in the "Config Files" state
For the package to actually end up in the "not installed" state, you must use:
apt-get purge package
That's the only way preinst install and postinst configure will be run the next time the package is installed.
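To watch the full cycle yourself (the package name is a placeholder):
sudo apt-get install myapp   # preinst install, then postinst configure
sudo apt-get remove myapp    # postrm remove; dpkg -l shows state "rc" (removed, conffiles kept)
sudo apt-get purge myapp     # postrm purge; package returns to the "not installed" state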

xdebug not loading. not found in phpinfo() after apache restart

I've been scouring every resource I could find, but came up empty. I get the dreaded "Waiting for Connection" message in NetBeans 6.9 when I start a debug session. After much reading, most folks are able to get phpinfo() to display that it loaded the xdebug module. Not so with me.
I downloaded the source through SVN using this call
svn co svn://svn.xdebug.org/svn/xdebug/xdebug/trunk xdebug
I switched to the xdebug directory and then ran phpize on the source
sudo /Applications/MAMP/bin/php5/bin/phpize
Password:
grep: /Applications/MAMP/bin/php5/include/php/main/php.h: No such file or directory
grep: /Applications/MAMP/bin/php5/include/php/Zend/zend_modules.h: No such file or directory
grep: /Applications/MAMP/bin/php5/include/php/Zend/zend_extensions.h: No such file or directory
Configuring for:
PHP Api Version:
Zend Module Api No:
Zend Extension Api No:
A big fat nothing! The referenced directories don't even exist. So I assume that any .ini tweaking I do beyond this point is useless. If I do a whereis php, I find it in /usr/bin. That's the default PHP preloaded with the OS. I don't want that one; I need to use the PHP installed with MAMP. I cannot believe how insanely frustrating it is to get this thing working!
For the record, my xdebug section in my php.ini looks like this:
[xdebug]
; xdebug config for Linux and Mac OS X
zend_extension="/Applications/MAMP/bin/php5/lib/php/extensions/no-debug-non-zts-20060613/xdebug.so"
xdebug.remote_enable=1
xdebug.remote_handler=dbgp
xdebug.remote_mode=req
xdebug.remote_host=localhost
xdebug.remote_port=9000
xdebug.idekey="netbeans-xdebug"
xdebug.profiler_enable=1
xdebug.profiler_output_name=xdebug.cachegrind-out.%s.%p
xdebug.remote_log="/Applications/MAMP/logs/xdebug_log.log"
It's a mish-mash of many different attempts to get xdebug to work. So, I don't know which pieces are valid or not.
I throw myself on the mercy of the experts because I obviously am not one of them. I have absolutely no idea how to proceed at this point.
Thanks in advance.
To use phpize in the MAMP directory instead of your system path, you should add MAMP's directory for PHP binaries to your $PATH. Below I'm using MAMP 1.9.1, which offers PHP 5.2 and PHP 5.3. We'll assume you're compiling for PHP 5.3.
Open or create ~/.bash_profile and put the following contents:
#Add MAMP binaries to path
export PATH="/Applications/MAMP/Library/bin:/Applications/MAMP/bin/php5.3/bin:$PATH"
You may also need to chmod the binaries inside /Applications/MAMP/bin/php5.3/bin to be executable:
chmod 755 /Applications/MAMP/bin/php5.3/bin/pear
chmod 755 /Applications/MAMP/bin/php5.3/bin/peardev
chmod 755 /Applications/MAMP/bin/php5.3/bin/pecl
chmod 755 /Applications/MAMP/bin/php5.3/bin/phar
chmod 755 /Applications/MAMP/bin/php5.3/bin/phar.phar
chmod 755 /Applications/MAMP/bin/php5.3/bin/php
chmod 755 /Applications/MAMP/bin/php5.3/bin/php-config
chmod 755 /Applications/MAMP/bin/php5.3/bin/phpcov
chmod 755 /Applications/MAMP/bin/php5.3/bin/phpize
Restart your Terminal session for the new $PATH to be loaded. Run the command which phpize and it should display /Applications/MAMP/bin/php5.3/bin/phpize. If not, the path to phpize in your MAMP directory is not being loaded in your $PATH. Use echo $PATH in Terminal to make sure /Applications/MAMP/bin/php5.3/bin is in the $PATH.
To get xDebug to compile, you need the header files from when PHP was compiled. These are available on the MAMP website in a DMG, and called "MAMP Components": http://www.mamp.info/en/downloads/index.html
Unpack MAMP Components and copy MAMP_src to your Desktop. Unpack MAMP_src/php-5.3.2.tar.gz and move it into the include path reported by php-config --includes, which should include /Applications/MAMP/bin/php5.3/include/php:
cd ~/Desktop/MAMP_src
tar -xvzf php-5.3.2.tar.gz
mkdir -p /Applications/MAMP/bin/php5.3/include
mv php-5.3.2/ /Applications/MAMP/bin/php5.3/include/php
You can now run phpize in the xDebug source dir.
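From there the usual build sequence is the standard one from the Xdebug install docs; the extension directory below is an assumption, so check which no-debug-non-zts-* directory your PHP actually uses:
cd xdebug
phpize    # should now print real API version numbers
./configure --enable-xdebug --with-php-config=/Applications/MAMP/bin/php5.3/bin/php-config
make
sudo cp modules/xdebug.so /Applications/MAMP/bin/php5.3/lib/php/extensions/no-debug-non-zts-20090626/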
I had a similar problem with XAMPP on Mac OSX 10.6.
I got no version numbers when running phpize in the xdebug download directory:
PHP Api Version:
Zend Module Api No:
Zend Extension Api No:
I had to install the 'Development Package', which adds /Applications/XAMPP/xamppfiles/include and other files to your XAMPP install. Installing the Development Package also fixed pecl, so I tried using pecl to install xdebug:
pecl install xdebug
but Apache failed to start with this error:
Failed loading /Applications/XAMPP/xamppfiles/lib/php/php-5.3.1/extensions/no-debug-non-zts-20090626/xdebug.so: dlopen(/Applications/XAMPP/xamppfiles/lib/php/php-5.3.1/extensions/no-debug-non-zts-20090626/xdebug.so, 9): no suitable image found. Did find:
/Applications/XAMPP/xamppfiles/lib/php/php-5.3.1/extensions/no-debug-non-zts-20090626/xdebug.so: mach-o, but wrong architecture
I tried compiling from source and got the same 'wrong architecture' errors.
Finally I just used the Komodo IDE binary from ActiveState, which worked.
I just started working with xdebug myself due to problems with PHP 5.3.1. I had used PECL per the instructions a couple of weeks ago, but it looks like phpize is the new black. I looked over the new instructions (generated from my phpinfo()) at http://xdebug.org/find-binary.php
this is of note:
Run: phpize
As part of its output it should show:
Configuring for:
PHP Api Version: 20090626
...
Zend Extension Api No: 220090626
If it does not, you are using the wrong phpize. Please follow this FAQ entry and skip the next step.
2 things:
have you checked that phpize is up to date?
if that doesn't work, try these instructions: http://xdebug.org/docs/install
