OS: ubuntu 18.04
Installation: ROS2 Dashing
Installation date: 2021/05/29
Official documentation:
https://docs.ros.org/en/dashing/Installation/Ubuntu-Install-Debians.html
I tried to install it by following the official documentation, but I can't fetch the apt repository because the public key is no longer accepted:
W: GPG error: http://packages.ros.org/ros2/ubuntu bionic InRelease: The following signatures were invalid: EXPKEYSIG F42ED6FBAB17C654 Open Robotics <info@osrfoundation.org>
E: The repository 'http://packages.ros.org/ros2/ubuntu bionic InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
I tried:
curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | sudo apt-key add -
Another article said that running the following would solve it, so I tried that as well, but it still didn't work:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys F42ED6FBAB17C654
Has there been another change recently?
Please help me.
Just had a similar issue, and for me this fixed it. Basically, I had to add the new repository key and delete the old one. Listing the commands here for convenience:
# add new repository key:
sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
# remove old repository key:
sudo apt-key del 421C365BD9FF1F717815A3895523BAEEB01FA116
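After swapping the keys, a quick way to check that the fix took (no assumptions beyond the standard ROS 2 apt source already being in your sources list):
# should now refresh without the EXPKEYSIG error
sudo apt update
# and the new Open Robotics key should be listed
apt-key list | grep "Open Robotics"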
Hope this helps you as well!
Update: If it does not, please check these two:
https://answers.ros.org/question/379190/apt-update-signatures-were-invalid-f42ed6fbab17c654/
https://discourse.ros.org/t/ros-gpg-key-expiration-incident/20669/3
A few hours ago my Selenium setup in Google Colab worked fine. Now it has suddenly stopped working.
This is a sample:
!pip install selenium
!apt-get update
!apt install chromium-chromedriver
from selenium import webdriver
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
driver = webdriver.Chrome('chromedriver',
                          chrome_options=chrome_options)
I get the error:
WebDriverException: Message: Service chromedriver unexpectedly exited. Status code was: 1
Any ideas on solving it?
This error message...
WebDriverException: Message: Service chromedriver unexpectedly exited. Status code was: 1
...implies that the chromedriver service unexpectedly exited.
This is because of an issue introduced when the Colab backend was recently upgraded from Ubuntu 18.04 to Ubuntu 20.04 LTS.
The main reason is that with Ubuntu 20.04 LTS, Ubuntu no longer distributes chromium-browser outside of a snap package, and snap packages are not usable in the Colab environment.
Quick Fix
@mco-gh created a new notebook following @metrizable's guidance (details below), which is working perfectly as of now:
https://colab.research.google.com/drive/1cbEvuZOhkouYLda3RqiwtbM-o9hxGLyC
Solution
As a solution, you can install a compatible version of chromium-browser from the Debian buster repository using the following code block, published by @metrizable in the discussion Issues when trying to use Chromedriver in Colab:
%%shell
# Ubuntu no longer distributes chromium-browser outside of snap
#
# Proposed solution: https://askubuntu.com/questions/1204571/how-to-install-chromium-without-snap
# Add debian buster
cat > /etc/apt/sources.list.d/debian.list <<'EOF'
deb [arch=amd64 signed-by=/usr/share/keyrings/debian-buster.gpg] http://deb.debian.org/debian buster main
deb [arch=amd64 signed-by=/usr/share/keyrings/debian-buster-updates.gpg] http://deb.debian.org/debian buster-updates main
deb [arch=amd64 signed-by=/usr/share/keyrings/debian-security-buster.gpg] http://deb.debian.org/debian-security buster/updates main
EOF
# Add keys
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys DCC9EFBF77E11517
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 648ACFD622F3D138
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 112695A0E562B32A
apt-key export 77E11517 | gpg --dearmour -o /usr/share/keyrings/debian-buster.gpg
apt-key export 22F3D138 | gpg --dearmour -o /usr/share/keyrings/debian-buster-updates.gpg
apt-key export E562B32A | gpg --dearmour -o /usr/share/keyrings/debian-security-buster.gpg
# Prefer debian repo for chromium* packages only
# Note the double-blank lines between entries
cat > /etc/apt/preferences.d/chromium.pref << 'EOF'
Package: *
Pin: release a=eoan
Pin-Priority: 500


Package: *
Pin: origin "deb.debian.org"
Pin-Priority: 300


Package: chromium*
Pin: origin "deb.debian.org"
Pin-Priority: 700
EOF
# Install chromium and chromium-driver
apt-get update
apt-get install chromium chromium-driver
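Once that block has run (in the same %%shell cell, or with each command prefixed by !), a quick sanity check that both packages really came from the buster repo and that the browser and driver versions match:
# both should report an installed version originating from deb.debian.org
apt-cache policy chromium chromium-driver
chromium --version
chromedriver --version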
When I do yum update I get the following error response:
One of the configured repositories failed (Unknown),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:

    1. Contact the upstream for the repository and get them to fix the problem.

    2. Reconfigure the baseurl/etc. for the repository, to point to a working
       upstream. This is most often useful if you are using a newer
       distribution release than is supported by the repository (and the
       packages for the previous distribution release still work).

    3. Run the command with the repository temporarily disabled
           yum --disablerepo=<repoid> ...

    4. Disable the repository permanently, so yum won't use it by default. Yum
       will then just ignore the repository until you permanently enable it
       again or use --enablerepo for temporary usage:

           yum-config-manager --disable <repoid>
       or
           subscription-manager repos --disable=<repoid>

    5. Configure the failing repository to be skipped, if it is unavailable.
       Note that yum will try to contact the repo. when it runs most commands,
       so will have to try and fail each time (and thus. yum will be be much
       slower). If it is a very temporary problem though, this is often a nice
       compromise:

           yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true

database is locked
I already did yum clean all, rm -f /var/lib/rpm/__db* and rpm --rebuilddb without any change.
After spending a couple of days, I finally fixed this error by deleting the following folder:
/var/lib/yum/history
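For anyone hitting the same thing, a minimal sketch of that fix; moving the folder aside rather than deleting it keeps a backup of your transaction history (run as root):
# move yum's history database (which holds the lock) out of the way
mv /var/lib/yum/history /var/lib/yum/history.bak
yum clean all
yum update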
On the Yesod homepage (http://www.yesodweb.com/page/quickstart) the following installation sequence is suggested:
wget http://www.stackage.org/lts/cabal.config
cabal update # download package list
cabal install alex happy yesod-bin # install build tools
yesod init --bare # answer questions as prompted
cabal sandbox init # set up a sandbox
cabal install --run-tests # install libraries
yesod devel # launch devel server
My question is:
why is "cabal sandbox init" not directly after "cabal update"?
In the suggested way alex happy yesod-bin are all installed in the global space instead of inside the sandbox.
Thanks,
Alex.
Because it generally confuses people when they can't run yesod directly; installing the executables into ~/.cabal/bin means that the user can always access them. It does leak some information outside of the sandbox, but it's typically the right trade-off to take.
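For completeness, a sketch of the sandbox-first ordering the question asks about. The trade-off described above is that alex, happy, and yesod-bin then land in .cabal-sandbox/bin instead of ~/.cabal/bin, so you have to invoke them by path (or add that directory to PATH yourself):
cabal update
cabal sandbox init
cabal install alex happy yesod-bin   # executables go to .cabal-sandbox/bin
.cabal-sandbox/bin/yesod init --bare
cabal install --run-tests
.cabal-sandbox/bin/yesod devel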
I would like to have Drush on Hostgator shared hosting. I just spent an hour trying various outdated tutorials (Drush now requires Composer). Does somebody have a proven, tested, and working solution for how to install Drush there? I'm using PHP 5.4.
The furthest I got is this error from drush st:
Unable to load autoload.php. Drush now requires Composer in order to install its dependencies and autoload classes. Please see README.md
Content-type: text/html
When I run php composer.phar diagnose I see:
Content-type: text/html
Warning: Composer should be invoked via the CLI version of PHP, not the cgi-fcgi SAPI
Checking platform settings: OK
Checking git settings: OK
Checking http connectivity: OK
Checking disk free space: OK
Checking composer version: OK
In case it helps others: I had been using a variety of Google-inspired resources to install Drush, which just complicated the situation for me. I would highly recommend following the official documentation, which is the main source of information. I even read there that the documentation is maintained only on that site, not on drupal.org.
Based on other instructions, the step I was missing was:
Now add Drush to your system path by placing
export PATH="$HOME/.composer/vendor/bin:$PATH"
into your ~/.bash_profile (Mac OS users) or into your ~/.bashrc (Linux users).
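On a shared host where you only control your own dotfiles, that step amounts to something like this (assuming Composer's global bin directory is at the default ~/.composer/vendor/bin):
echo 'export PATH="$HOME/.composer/vendor/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
drush --version   # should now resolve from the Composer bin directory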
These instructions helped me resolve the error:
curl -sS https://getcomposer.org/installer | php
mv composer.phar /usr/local/bin/composer
ln -s /usr/local/bin/composer /usr/bin/composer
git clone https://github.com/drush-ops/drush.git /usr/local/src/drush
cd /usr/local/src/drush
git checkout 7.0.0-alpha5   # or whatever version you want
ln -s /usr/local/src/drush/drush /usr/bin/drush
composer install
drush --version
I think you are trying to use Drush version 7.x.
Try using Drush 6.x instead; I don't think it requires Composer (see the Drush releases page). I have had Drush 6.4 installed on a shared hosting environment successfully without any problems.
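If you go the Drush 6 route, a rough composer-free sketch for shared hosting, assuming the 6.x branch is still available on GitHub (otherwise grab a 6.x tarball from the releases page) and that ~/bin is on your PATH:
# clone the 6.x line instead of 7.x; no composer install step is needed
git clone --branch 6.x https://github.com/drush-ops/drush.git ~/drush6
ln -s ~/drush6/drush ~/bin/drush
drush --version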
I am trying to use the static binary of wkhtmltopdf on Ubuntu Server 10.04. The reason is that it apparently has a built-in modified QT that will allow me to run wkhtmltopdf without an X server.
Result:
Once installed (see steps below), when I execute wkhtmltopdf in the terminal, it does not fire up; it just returns me to the prompt as if it ran and did something, with no error but no output:
:/usr/bin$ wkhtmltopdf
:/usr/bin$
Same behavior if I pass arguments:
:/usr/bin$ wkhtmltopdf http://www.google.com test.pdf
:/usr/bin$
Am I doing something wrong? My understanding is that the static binary should just fire up. Perhaps I'm missing some dependency? Is there a way to get some verbose output?
These are the steps I have followed:
In /usr/bin:
1) Confirmed that the existing (non-static) wkhtmltopdf resides there and that it executes. When I execute it with no args I get the help/about output from the app.
2) Moved the existing wkhtmltopdf out of the directory (renamed it)
3) Get the static binary: sudo curl -C - -O http://wkhtmltopdf.googlecode.com/files/wkhtmltopdf-0.9.9-static-i386.tar.bz2
4) Untar: tar xvjf wkhtmltopdf-0.9.9-static-i386.tar.bz2
5) Rename: mv wkhtmltopdf-i386 wkthtmltopdf
6) Get (apparently) necessary packages: sudo apt-get install openssl build-essential xorg libssl-dev
I was having the same problem. I removed the existing wkhtmltopdf and followed the steps below and the installation worked.
First, install the dependencies:
sudo aptitude install openssl build-essential xorg libssl-dev
for 64-bit OS
wget http://wkhtmltopdf.googlecode.com/files/wkhtmltopdf-0.9.2-static-amd64.tar.bz2
tar xvjf wkhtmltopdf-0.9.2-static-amd64.tar.bz2
chown root:root wkhtmltopdf-amd64
mv wkhtmltopdf-amd64 /usr/bin/wkhtmltopdf
The only difference is that I put it in /usr/local/bin/wkhtmltopdf.
I hope this helps!
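A quick smoke test after installing the static binary, assuming it now sits on your PATH as wkhtmltopdf:
wkhtmltopdf --version
wkhtmltopdf http://www.google.com /tmp/test.pdf
ls -lh /tmp/test.pdf   # should be a non-empty PDF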
Following deb's answer got it working for me on Ubuntu 10.04 64bit - thanks!
However, rather than downloading 0.9.2 as per deb's instructions, I would suggest downloading the latest version:
Go to http://code.google.com/p/wkhtmltopdf/downloads/list
Download the latest version of wkhtmltopdf-[version number]-static-amd64.tar.bz2
At this time, the latest 64bit is http://wkhtmltopdf.googlecode.com/files/wkhtmltopdf-0.11.0_rc1-static-amd64.tar.bz2.
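Concretely, that just swaps the newer version number into the same steps as above (keep or change the install path to taste):
wget http://wkhtmltopdf.googlecode.com/files/wkhtmltopdf-0.11.0_rc1-static-amd64.tar.bz2
tar xvjf wkhtmltopdf-0.11.0_rc1-static-amd64.tar.bz2
chown root:root wkhtmltopdf-amd64
mv wkhtmltopdf-amd64 /usr/bin/wkhtmltopdf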
On my Debian server, trying to run wkhtmltopdf-i386 led to the same blank prompt.
The non-static version of wkhtmltopdf (with unpatched QT), installed with "aptitude install wkhtmltopdf", worked.
The problem was solved by switching to wkhtmltopdf-amd64; the server was 64-bit and I had missed it.
After that, wkhtmltopdf-amd64 said 'libxrender shared library not found'; this was solved by "aptitude install xorg".
0.11.0_rc1 seems to be buggy.
It keeps throwing the error "Cannot create a QPixmap when no GUI is being used".
Reverting to 0.9.9 worked for me.
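Reverting is the same download-and-replace dance with the 0.9.9 tarball; the exact amd64 filename below follows the naming pattern of the other releases and is untested here (the 32-bit equivalent appears earlier in this thread):
wget http://wkhtmltopdf.googlecode.com/files/wkhtmltopdf-0.9.9-static-amd64.tar.bz2
tar xvjf wkhtmltopdf-0.9.9-static-amd64.tar.bz2
sudo mv wkhtmltopdf-amd64 /usr/bin/wkhtmltopdf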