How to get the current dev_appserver version? - google-app-engine

How can I get the GAE SDK to tell me what version it is? I could not find anything like this:
dev_appserver.py --version
Note that this is different from os.environ['CURRENT_VERSION_ID'], which returns the application version, and it seems that os.environ['SERVER_SOFTWARE'] always returns Development/1.0 when I run it inside the Interactive Console.
I would like to create a GAE SDK updater script that performs the following logic:
Checks to see what the latest version of the GAE SDK for Python on Linux is (as of this writing, 1.7.5, available for download at https://storage.googleapis.com/appengine-sdks/deprecated/175/google_appengine_1.7.5.zip).
Checks the currently installed version of the GAE SDK.
If the available version > installed version, downloads the latest package and unzips it into the correct directory.
If there is no "supported" way to do step #1, I am willing to hard-code the "latest version" in the script, but I still only want to download/install it once even if the script itself is run multiple times. In other words, the script should be idempotent.

The directory where the GAE SDK zip is unpacked to contains a VERSION file with the following contents:
release: "1.7.5"
timestamp: 1357690550
api_versions: ['1']
So I wrote a script to pull the version out of there:
#!/bin/sh
INSTALLEDVERSION=`grep release /usr/local/google_appengine/VERSION | cut -d: -f2 | cut -d\" -f2`
LATESTVERSION="1.7.5"
if [ "$INSTALLEDVERSION" != "$LATESTVERSION" ]; then
    echo "Update GAE SDK"
fi
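Note that != only detects that the two strings differ; step 3 above asks for a real "available > installed" test. A sketch of one way to get that, assuming GNU sort is available (BSD sort lacks -V):
# version-sort the two strings; if the installed one is not the newest, update
NEWEST=`printf '%s\n%s\n' "$INSTALLEDVERSION" "$LATESTVERSION" | sort -V | tail -n1`
if [ "$NEWEST" != "$INSTALLEDVERSION" ]; then
    echo "Update GAE SDK"
fi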
Or, you can use this to obtain the version string on non-default installs; note that readlink resolves only a single level of symlink and prints nothing when dev_appserver.py is not a symlink at all, so it may not work correctly on every setup:
INSTALLEDDIR=`which dev_appserver.py | xargs readlink | xargs dirname`
INSTALLEDVERSION=`grep release "$INSTALLEDDIR/VERSION" | cut -d: -f2 | cut -d\" -f2`
But this still does not provide a way to perform step 1, which would query the web for the latest version and do auto-updating.
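One possible (unsupported) way to implement step 1: the SDK's own update check historically queried an updatecheck endpoint that returns the same release: "x.y.z" format as the local VERSION file. A sketch, assuming that endpoint is still reachable:
# assumption: this is the endpoint the dev_appserver's update check used
LATESTVERSION=`curl -s https://appengine.google.com/api/updatecheck | grep release | cut -d\" -f2`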

Related

How to completely download Anaconda Cloud bz2 files and dependencies for offline package installation? [duplicate]

I want to create a Python environment with the data science libraries NumPy, Pandas, PyTorch, and Hugging Face Transformers. I use Miniconda to create the environment and to download and install the libraries. conda install has a --download-only flag to download the required packages without installing them, so that they can be installed later from a local directory. However, even when conda just downloads the packages without installing them, it also extracts them.
Is it possible to download the packages without extracting them, and to extract them later, just before installation?
There is no simple CLI option to prevent the extraction step. Extraction is regarded as part of the FETCH operation, which populates the package cache before the LINK operation transfers the package into the specified environment.
The alternative is to do some of this manually. Naively, one could search Anaconda Cloud and download packages by hand; however, it is probably better to go through the solver to ensure package compatibility. All the info for the operations to be run can be viewed by including the --json flag. That output can be filtered down to just the tarball URLs, which can then be downloaded directly. Here's a script along these lines (assuming Linux/Unix):
File: conda-download.sh
#!/bin/bash -l
# Dry-run (-d) a solve into a throwaway env name, emit the plan as JSON,
# filter out the package tarball URLs, and download them (resumably).
conda create -dn null --json "$@" |\
  grep '"url"' | grep -oE 'https[^"]+' |\
  xargs wget -c
which can be used as
./conda-download.sh -c conda-forge -c pytorch numpy pandas pytorch transformers
that is, it accepts all arguments conda create would, and will download all the tarballs locally.
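Once downloaded, one way to consume the tarballs offline is to pass them to conda directly; a sketch, assuming the tarballs sit in the current directory (myenv is a placeholder, and installing by path skips the solver, so supply all dependencies together):
# create an empty environment, then install the local tarballs without network access
conda create -n myenv --yes
conda install -n myenv --offline ./*.tar.bz2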
Ignoring Cached Packages
If you already have some packages cached then the above will not redownload them. Instead, if you wish to download all tarballs needed for an environment, then you could use this alternate version which overrides the package cache using an empty temporary directory:
File: conda-download-all.sh
#!/bin/bash -l
# Same as above, but point CONDA_PKGS_DIRS at an empty temporary directory
# so nothing counts as cached and every tarball is fetched.
tmp_dir=$(mktemp -d)
CONDA_PKGS_DIRS=$tmp_dir conda create -dn null --json "$@" |\
  grep '"url"' | grep -oE 'https[^"]+' |\
  xargs wget -c
rm -r "$tmp_dir"
Do you really want conda-pack? That lets you archive a conda environment for reproducing it without using the internet or re-solving dependencies. To merely avoid re-solving, you can also use conda env export --explicit, but that still ties you to the source (the internet or a local disk repository).
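For reference, a minimal conda-pack invocation looks roughly like this (myenv is a placeholder; conda-pack ships on conda-forge):
# install conda-pack outside the env being packed, then archive the env
conda install -n base -c conda-forge conda-pack
conda pack -n myenv -o myenv.tar.gz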
If you have a static (read-only) environment and really want to reduce Docker image size, you can volume-mount the environment at runtime. You would need to match the file paths (i.e. /opt/anaconda => /opt/anaconda).
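A sketch of that runtime mount (the image name and entry command are placeholders):
# mount the host environment read-only at the identical path inside the container
docker run -v /opt/anaconda:/opt/anaconda:ro myimage /opt/anaconda/bin/python app.py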

gcloud: how to download the app via cli

I deployed an app with gcloud preview app deploy.
Is there a way to download it to another local machine?
How can I get the files? I tried via ssh with no success (I can't access the docker dir).
UPDATE:
I found this:
gcloud preview app modules download default --version 1 --output-dir=my_dir
but it doesn't download any files.
Log
Downloading module [default] to [my_dir/default]
Fetching file list from server...
|- Downloading [0] files... -|
I am coming back to Google App Engine after two years. I see that they have made lots of improvements and added tons of features, but sadly their documentation sometimes leaves much to be desired.
I used to download the code of an uploaded version with appcfg.py, using the following command:
appcfg.py download_app -A <app_id> -V <version> <output-dir>
But of course everything has now been consolidated into the gcloud shell, where appcfg.py is not accessible.
However, the following method helped me to download the deployed code:
Go to the console and into Google App Engine.
Select the project you want to work with.
Once the project's dashboard opens, click on the top right to open the built-in console window.
This loads the Cloud Shell at the bottom; if you check, appcfg.py is available for you to use in this VM.
Hence, use appcfg.py download_app -A <app_id> -V <version> <output-dir> to download the code.
Now once you have the code in the desired folder, in order to download it to your local machine you can open the Docker code editor.
Here I assumed that if I right-clicked and exported the desired folder it would work, but instead it gave me the following error message:
{"Error":"'concurrency' must be a number but it is [object Undefined]","Message":"'concurrency' must be a number but it is [object Undefined]"}
So I thought maybe it would play along nicely if the folder was an archive. Go back to the Cloud Shell and, using whatever utility you fancy, make an archive of the folder:
zip -r mycode.zip mycode
Go to the Docker code editor, export, and download.
Of course there might be many more ways to do it (hopefully), but this is what made sense to me after returning to Google App Engine after two years.
Currently, the best way to do this is to pull the files out of Docker.
Put instance into self-managed mode, so that you can ssh into it:
$ gcloud preview app modules set-managed-by default --version 1 --self
Find the name of the instance:
$ gcloud compute instances list | grep gae-default-1
Copy it out of the Docker container, change the permissions, and copy it back to your local machine:
$ gcloud compute ssh --zone=us-central1-f gae-default-1-1234 'sudo docker cp gaeapp:/app /tmp'
$ gcloud compute ssh --zone=us-central1-f gae-default-1-1234 "chown -R $USER /tmp/app"
$ gcloud compute copy-files --zone=us-central1-f gae-default-1-1234:/tmp/app /tmp/
$ ls /tmp/app
Dockerfile
[...]
IMHO, the best option today (Aug 2018) is:
Under the main menu, under Products, go to Tools -> Cloud Build -> Build history.
There, click the ID of the build you want.
Then, in the opened window (Build details), click the source link, the download of your compressed code begins.
As simple as that.
HTH.
As of Feb 2021, you can install appengine-sdk using pip
pip install appengine-sdk
Once installed, appcfg can be used to download the app code.
python -m appcfg download_app -A app_id [ -V version ] out-dir
Nothing else worked for me; I finally found the source code this way: go to Google Cloud Storage, choose the bucket whose name starts with us.artifacts...., select containers > images, and download the latest object (look at the created date). Unzip the downloaded file; it contains all the deployed App Engine source code.
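The same console steps can be scripted with gsutil; a sketch assuming the default us.artifacts.<project-id>.appspot.com bucket naming (check your project's actual bucket, and note the object layout can vary):
# list the stored image objects with sizes and creation dates
gsutil ls -l gs://us.artifacts.<project-id>.appspot.com/containers/images/
# copy the object you want to the local machine
gsutil cp gs://us.artifacts.<project-id>.appspot.com/containers/images/<object> ./app-image.tar.gz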

How do I ignore/unmark certain ports?

I would like to know how to either ignore upgrading certain ports or unmark them as "outdated".
This is motivated by certain ports failing to upgrade while I wish to upgrade all the rest. I know about sudo port install -n, which allows one to install a port without upgrading its dependencies, as in the case of mongodb requiring an older (not the current) version of the Boost libraries, but that is not applicable here.
For example:
$ sudo port list outdated
gdb @7.5 devel/gdb
py27-scikits-image @0.7.1 python/py-scikits-image
As gdb @7.5 fails to update, I would just like to upgrade the others, i.e. py27-scikits-image, without going through the whole sudo port list outdated | awk '{print $1}' | grep -v gdb | xargs sudo port upgrade pipeline.
Much appreciated.
I would advise creating a local portfile for gdb with a lower version number.
Create a local portfile repository: howto
Copy the gdb portfile directory (a directory called "gdb" containing the file "Portfile" and directory "files") into your local portfile repository
Change the version number in the portfile to e.g. 0.0
Run portindex in your local portfile repository
The local portfile overrides the one downloaded from the default port repository, and the low version number makes MacPorts think your version of gdb is up to date; see the command sketch below.
I hope this can help.
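A command-line sketch of those four steps (the rsync tree path below is the usual default; adjust it for your install, and remember the local repository must also be listed in sources.conf as per the howto):
# 1-2. create the local repo and copy the gdb port into it
mkdir -p ~/ports/devel
cp -R /opt/local/var/macports/sources/rsync.macports.org/release/tarballs/ports/devel/gdb ~/ports/devel/
# 3. edit ~/ports/devel/gdb/Portfile and lower the version, e.g. to 0.0
# 4. rebuild the local port index
cd ~/ports && portindex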
BTW: you can do sudo port upgrade outdated and not gdb

Why is my debian postinst script not being run?

I have made a .deb of my app using fpm:
fpm -s dir -t deb -n myapp -v 9 -a all -x "*.git" -x "*.bak" -x "*.orig" \
--after-remove debian/postrm --after-install debian/postinst \
--description "Automated build." -d mysql-client -d python-virtualenv home
Among other things, the postinst script is supposed to create a user for the app:
#!/bin/sh
set -e
APP_NAME=myapp
case "$1" in
configure)
virtualenv /home/$APP_NAME/local
#supervisorctl start $APP_NAME
;;
# http://www.debian.org/doc/manuals/securing-debian-howto/ch9.en.html#s-bpp-lower-privs
install|upgrade)
# If the package has default file it could be sourced, so that
# the local admin can overwrite the defaults
[ -f "/etc/default/$APP_NAME" ] && . /etc/default/$APP_NAME
# Sane defaults:
[ -z "$SERVER_HOME" ] && SERVER_HOME=/home/$APP_NAME
[ -z "$SERVER_USER" ] && SERVER_USER=$APP_NAME
[ -z "$SERVER_NAME" ] && SERVER_NAME=""
[ -z "$SERVER_GROUP" ] && SERVER_GROUP=$APP_NAME
# Groups that the user will be added to, if undefined, then none.
ADDGROUP=""
# create user to avoid running server as root
# 1. create group if not existing
if ! getent group | grep -q "^$SERVER_GROUP:" ; then
echo -n "Adding group $SERVER_GROUP.."
addgroup --quiet --system $SERVER_GROUP 2>/dev/null ||true
echo "..done"
fi
# 2. create homedir if not existing
test -d $SERVER_HOME || mkdir $SERVER_HOME
# 3. create user if not existing
if ! getent passwd | grep -q "^$SERVER_USER:"; then
echo -n "Adding system user $SERVER_USER.."
adduser --quiet \
--system \
--ingroup $SERVER_GROUP \
--no-create-home \
--disabled-password \
$SERVER_USER 2>/dev/null || true
echo "..done"
fi
# … and a bunch of other stuff.
It seems like the postinst script is being called with configure, but not with install, and I am trying to understand why. In /var/log/dpkg.log, I see the lines I would expect:
2012-06-30 13:28:36 configure myapp 9 9
2012-06-30 13:28:36 status unpacked myapp 9
2012-06-30 13:28:36 status half-configured myapp 9
2012-06-30 13:28:43 status installed myapp 9
I checked that /etc/default/myapp does not exist. The file /var/lib/dpkg/info/myapp.postinst exists, and if I run it manually with install as the first parameter, it works as expected.
Why is the postinst script not being run with install? What can I do to debug this further?
I think the example script you copied is simply wrong. postinst is not supposed to be called with any install or upgrade argument, ever.
The authoritative definition of the dpkg format is the Debian Policy Manual. The current version describes postinst in chapter 6, and it only lists configure, abort-upgrade, abort-remove, and abort-deconfigure as possible first arguments.
I don't have complete confidence in my answer, because your bad example is still up on debian.org and it's hard to believe such a bug could slip through.
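For reference, a minimal postinst skeleton that matches the policy's argument list (a sketch; the unknown-argument branch is conventional rather than mandated):
#!/bin/sh
set -e
case "$1" in
    configure)
        # all real setup goes here; it must be safe to run repeatedly
        ;;
    abort-upgrade|abort-remove|abort-deconfigure)
        ;;
    *)
        echo "postinst called with unknown argument \`$1'" >&2
        exit 1
        ;;
esac
exit 0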
I believe the answer provided by Alan Curry is incorrect, at least as of 2015 and beyond.
There must be some fault with the way your package is built, or an error in the postinst file, which is causing your problem.
You can debug your install by adding the -D (debug) option to your command line i.e.:
sudo dpkg -D2 -i yourpackage_name_1.0.0_all.deb
-D2 should sort out this type of issue
For the record, the debug levels are as follows:
Number  Description
     1  Generally helpful progress information
     2  Invocation and status of maintainer scripts
    10  Output for each file processed
   100  Lots of output for each file processed
    20  Output for each configuration file
   200  Lots of output for each configuration file
    40  Dependencies and conflicts
   400  Lots of dependencies/conflicts output
 10000  Trigger activation and processing
 20000  Lots of output regarding triggers
 40000  Silly amounts of output regarding triggers
  1000  Lots of drivel about e.g. the dpkg/info dir
  2000  Insane amounts of drivel
The install command calls the configure step, and in my experience the postinst script will always be run. One thing that may trip you up is that, when upgrading a package, the postrm script of the old version is run after the new package's preinst script; this can cause havoc if you don't realise what is going on.
From the dpkg man page:
Installation consists of the following steps:
1. Extract the control files of the new package.
2. If another version of the same package was installed before the new installation, execute the prerm script of the old package.
3. Run the preinst script, if provided by the package.
4. Unpack the new files, and at the same time back up the old files, so that if something goes wrong, they can be restored.
5. If another version of the same package was installed before the new installation, execute the postrm script of the old package. Note that this script is executed after the preinst script of the new package, because new files are written at the same time old files are removed.
6. Configure the package.
Configuring consists of the following steps:
1. Unpack the conffiles, and at the same time back up the old conffiles, so that they can be restored if something goes wrong.
2. Run the postinst script, if provided by the package.
This is an old issue that has been resolved, but it seems to me that the accepted solution is not totally correct, and I believe it is necessary to provide information for those who, like me, are having this same problem.
Chapter 6.5 of the Debian Policy Manual details all the arguments with which the preinst and postinst scripts are called.
At https://wiki.debian.org/MaintainerScripts the installation and uninstallation flow is detailed.
Watch what happens in the following cases:
apt-get install package
- It runs preinst install and then postinst configure
apt-get remove package
- It executes postrm remove and the package is set to the "Config Files" state
For the package actually to reach the "not installed" state, you must use:
apt-get purge package
That's the only way preinst install and postinst configure will run again the next time the package is installed.
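You can watch those states directly with dpkg; a quick sketch (mypkg is a placeholder package name):
apt-get remove mypkg
dpkg -l mypkg    # status flags show 'rc': removed, config files remain
apt-get purge mypkg
dpkg -l mypkg    # the package is now gone (dpkg may report no match at all)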

How to capture screenshot of specified website?

I want to know a technique for capturing screenshots of websites given a URL list, like Google Fast Flip does. What technology or techniques does this kind of task require? If such a technique is available in Rails, that would be great.
Thanks
You'll need an HTML rendering engine for that.
The easy way is to use a browser plugin for that task.
Check out this: 15 Ways To Create Website Screenshots
I have been using the excellent Firefox plugin Grab Them All (https://addons.mozilla.org/en-US/firefox/addon/7800/), which is a version of the author's also excellent Screengrab add-on.
Grab Them All allows you to point the browser at a list of URLs and will then produce all the screenshots for you in a specified directory. It works brilliantly with most websites.
However, I am trying to generate screenshots of Google Maps URLs, which is not working for me at the moment because the pages are not standard pages (they use frames etc.). But for most purposes the above is great: super quick and easy to set up. Hope that helps.
I'm using a headless web browser and Xvfb. First, install the package dependencies, for example on Ubuntu:
sudo apt-get install xvfb imagemagick x11-apps
Then run the shell script below via sudo as some "nobody" user, like this:
/usr/bin/sudo -u nobody /path/screengrab.sh www.ibm.com 34344 >>/tmp/screengrab.log 2>&1
You might need to adjust the cropping etc.
#!/bin/bash
# $1 = hostname to grab, $2 = X display number to use
rm -rf /home/nobody/.mozilla/
XAUTHORITY=
# start a virtual framebuffer X server on display :$2
Xvfb :$2 -pixdepths 32 -screen 0 1024x1024x24 >/dev/null 2>&1 &
XPID=$!
sleep 1
# load the page in Firefox on the virtual display
firefox -width 2000 -height 1024 --display :$2 http://$1 &
FPID=$!
sleep 6
# dump the root window, then convert/resize/crop with ImageMagick
xwd -display :$2 -root -out /tmp/$2-$$.xwd
convert /tmp/$2-$$.xwd /u0/screengrabs/$1.png # Cache
convert -resize 300x300 /tmp/$2-$$.xwd /tmp/$2-$$.png
convert -crop 287x248+0+29 /tmp/${2}-$$.png /tmp/${2}2-$$.png
mkdir -p /home/je/www/domaintool.se/docs/images/$1
cp /tmp/${2}2-$$.png /home/je/www/domaintool.se/docs/images/$1/`date +%Y%m%d`.png
rm -f /tmp/$2-$$.png /tmp/$2-$$.xwd /tmp/${2}2-$$.png
kill $XPID >/dev/null 2>&1
kill $FPID >/dev/null 2>&1
