I know I can configure which host to upload gems to with:
gem inabox -c
How can I tell geminabox to upload to multiple hosts at the same time? We have two gem servers, and when I run:
gem inabox my_custom_gem.gem
I'd like for it to upload to both of our private gem servers.
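As far as I know the inabox plugin only pushes to a single configured host, so the simplest workaround is to loop over the hosts in the shell. A minimal sketch, assuming your version of the plugin supports the --host flag (the host URLs below are placeholders):
# push the same gem to both private servers
for host in https://gems1.example.com https://gems2.example.com; do
  gem inabox my_custom_gem.gem --host "$host"
done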
I'm using a bitbucket pipeline to deploy my react application.
Right now my pipeline looks like this:
image: node:10.15.3

pipelines:
  default:
    - step:
        name: Build
        script:
          - npm cache clean --force
          - rm -rf node_modules
          - npm install
          - CI=false npm run deploy-app
        artifacts: # defining build/ as an artifact
          - 'build-artifact/**'
    - step:
        name: Deploy
        script:
          - apt-get update
          - apt-get install ncftp
          - ncftpput -v -u "$USERNAME" -p "$PASSWORD" -R $SERVER build 'build-artifact/*'
          - echo Finished uploading build
It works really well like this, but the FTP upload takes about 8 minutes, which is way too long because with the free plan of Bitbucket I can only use the pipeline feature for 50 minutes per month.
It seems like the upload of every small file takes forever. That's why I thought that uploading a single zip file might be much more performant.
So my question is: is it really faster? And how is it possible to zip the artifact, upload the zip to the server, and unzip it there?
Thanks for your help
In fact, you should consider using another tool to upload the files, for example rsync, which has a couple of useful features, such as compression of the data. It also only uploads files that have changed since the previous upload, which will speed up the uploads as well. You can use the rsync-deploy pipe, for example:
script:
  - pipe: atlassian/rsync-deploy:0.3.2
    variables:
      USER: 'ec2-user'
      SERVER: '127.0.0.1'
      REMOTE_PATH: '/var/www/build/'
      LOCAL_PATH: 'build'
      EXTRA_ARGS: '-z'
Note the -z option passed via EXTRA_ARGS; this enables data compression when transferring files.
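If you still want to go the zip route from your original question: yes, one large archive is usually much faster than many small FTP uploads, because every file pays a per-file round-trip overhead. A sketch of a zip-based Deploy step, assuming you have SSH access to the server (not just FTP) and an SSH key configured in Bitbucket; the user, paths, and target directory are placeholders:
script:
  - apt-get update && apt-get install -y zip openssh-client
  - zip -r build.zip build-artifact        # one big file instead of thousands of small ones
  - scp build.zip "$USERNAME@$SERVER:/tmp/"
  - ssh "$USERNAME@$SERVER" "unzip -o /tmp/build.zip -d /var/www/build && rm /tmp/build.zip"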
I see the info below in Flink's documentation about copying the respective JAR to the plugins directory in order to use S3.
How can I do this if I deploy Flink using Kubernetes?
"To use flink-s3-fs-hadoop or flink-s3-fs-presto, copy the respective JAR file from the opt directory to the plugins directory of your Flink distribution before starting Flink, e.g.
mkdir ./plugins/s3-fs-presto
cp ./opt/flink-s3-fs-presto-1.9.0.jar ./plugins/s3-fs-presto/"
If you are referring to the Kubernetes setup in the official docs, you can simply re-create your image:
Check out the Dockerfile in the GitHub repository.
Download flink-s3-fs-presto-1.9.0.jar into the same folder as your Dockerfile.
Add the following right before the COPY docker-entrypoint.sh line:
# install Flink S3 FS Presto plugin
RUN mkdir ./plugins/s3-fs-presto
COPY ./flink-s3-fs-presto-1.9.0.jar ./plugins/s3-fs-presto/
Build the image, tag it, and push it to Docker Hub.
In your Deployment YAML file, change the image name to the one you just created.
You can then use s3://xxxxx in your config YAML file (e.g. flink-configuration-configmap.yaml), as sketched below.
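For illustration, the S3-related entries in flink-configuration-configmap.yaml might then look like this (the bucket name and credentials are placeholders; s3.access-key and s3.secret-key are the options Flink documents for its S3 filesystem plugins):
flink-conf.yaml: |+
  state.checkpoints.dir: s3://my-bucket/checkpoints
  s3.access-key: YOUR_ACCESS_KEY
  s3.secret-key: YOUR_SECRET_KEY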
If you are using the build.sh script that's part of Flink to build an application-specific Docker image, it has a parameter (--job-artifacts) that allows you to specify a list of artifacts (JAR files) to include in the image. These JAR files all end up in the lib directory. See https://github.com/apache/flink/blob/master/flink-container/docker/build.sh.
You could extend this to handle the plugins correctly, or not worry about it for now (putting them in the lib directory is still supported).
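For reference, a typical invocation looks something like this (the exact flag set varies between Flink versions, so check the script's usage output; the versions, JAR path, and image name below are placeholders):
./build.sh \
  --from-release \
  --flink-version 1.9.0 \
  --scala-version 2.11 \
  --job-artifacts /path/to/my-job.jar \
  --image-name my-flink-app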
I am creating a new React.js application using Docker and I want to create a new instance without installing Node.js on the host system. I have seen many tutorials, but every time the first step was to install Node.js on the host, init the app, and then set up Docker. The problem I ran into is that the official Node.js Docker images are designed to run an application only, not to run as a detached container, so I cannot use the container's command line for the initial install. I was about to create an image based on some Linux distro and install Node.js on my own, but with that approach I cannot take advantage of the prepared official Node.js images.
Is there any way to init a React app using Docker without installing Node.js on the host system?
Thank You
EDIT: Based on @David Maze's answer I decided to use docker-compose: just mount the project directory into the container and put command: ["sleep", "infinity"] in the docker-compose file. That way I didn't need to install Node.js on the host and I can manage everything from the container command line, in the project folder, as usual. I haven't solved a shared global cache, but I'm not sure it's really needed: if I end up with several Node versions containerized, npm packages of different versions would conflict anyway. Maybe one day I'll try to mount it as a volume into the containers from some global place on the host, but disk space is not such a big problem...
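For reference, a minimal docker-compose.yml along those lines might look like this (the service name and Node version are illustrative):
version: "3"
services:
  node:
    image: node:10
    working_dir: /app
    volumes:
      - .:/app
    command: ["sleep", "infinity"]
After docker-compose up -d, running docker-compose exec node bash drops you into a shell in the project folder, where npm and npx work as usual.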
You should be able to run something like:
sudo docker run \
  --rm \
  -it \
  -u$(id -u):$(id -g) \
  -w/ \
  -v"$PWD":/app \
  node:10 \
  npx create-react-app app
You will have to repeat this litany of Docker options every time you want to do anything with a Docker-packaged version of Node.
Ultimately this sequence of things starts in the container root directory (-w/) and uses create-react-app to create an app directory; the -v option has that backed by the current directory on the host, and the -u option is needed to make filesystem permissions line up. The -it options make it possible to answer interactive questions, and --rm causes the container to clean up after itself.
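If you do go this route, wrapping the invocation in a small shell function keeps it bearable. A sketch (the function name is arbitrary; note it uses -w/app so commands run inside the mounted project directory, which is what you want once the project exists):
# run any command with the Docker-packaged Node against the current directory
dnode() {
  sudo docker run --rm -it \
    -u"$(id -u):$(id -g)" \
    -w/app \
    -v"$PWD":/app \
    node:10 "$@"
}
# e.g.: dnode npm install   or   dnode npm test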
I suspect you will find it much easier to just install Node.
Google official documentation is available here:
https://cloud.google.com/appengine/downloads#Google_App_Engine_SDK_for_PHP
But it doesn't provide sufficient information about the following step:
"4 - Build and install the PHP interpreter and App Engine PHP extension. Specify the path to php-cgi and gae_runtime_module.so when running the development server."
I'm using a new Virtualbox machine with Ubuntu 15.10 and PhpStorm to test GAE.
Could someone please provide clear instructions about step 4? What do I need to do to install the php interpreter and the App Engine php extension?
P.S. I've already searched with Google but I only found old/confusing tutorials.
That GAE PHP extension seems to be quite a new thing; I don't remember using it with the SDK on Ubuntu 14.04.
You need to build PHP and that extension from source. You should grab the latest PHP 5.5 branch from the official source repo (http://php.net/git.php) and build it. The linked page contains instructions on building PHP, but the procedure is similar to the following:
$ git clone <php-src>
$ cd ./php-src/
$ git checkout PHP-5.5
$ ./buildconf
$ ./configure --prefix="/opt/php55"
$ sudo make && sudo make install
Remember to pick the modules and packages you want to compile into PHP 5.5 for use in the SDK; I think Google has an official list of the modules and extensions they use inside GAE PHP and inside the SDK PHP. The --prefix argument tells the compiler where to install the resulting application.
Then you need to get the source for the PHP extension and build it:
$ git clone https://github.com/GoogleCloudPlatform/appengine-php-extension
$ cd appengine-php-extension
$ phpize # remember to use the phpize from the just built PHP5.5 binaries
$ ./configure
$ sudo make && sudo make install
(That Git repository contains detailed building instructions so you should probably refer to them when building.)
Enable the resulting .so for the PHP5.5 you just built using the PHP configuration files.
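For example, in the php.ini used by the new build (the exact path depends on your --prefix; the one shown here is an assumption):
; /opt/php55/lib/php.ini
extension=gae_runtime_module.so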
After that you need to install the PHP SDK and configure it to use the newly built PHP binary:
$ dev_appserver.py <...> --php_executable_path=/opt/php55/bin/php-cgi
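Step 4 of the quoted docs also asks for the path to gae_runtime_module.so; if your SDK version supports it, that is passed with a separate flag (the flag name below is an assumption, so check dev_appserver.py --help):
$ dev_appserver.py <...> \
    --php_executable_path=/opt/php55/bin/php-cgi \
    --php_gae_extension_path=<path-to>/gae_runtime_module.so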
The SDK will let you know if the built PHP binaries are incompatible with the SDK version you use. I remember compiling PHP from source around 5 times before it worked without any warnings.
But essentially they are telling you to compile PHP from source, then compile their extension from source, and then use the built PHP + extension with the downloaded SDK. These instructions are from the top of my head, so you may need to adjust the commands and procedures.
The process can be simplified by using Docker; here is an image you can use: https://hub.docker.com/r/mhariri/docker-google-appengine-php/
To run your app, you just need Docker installed; then run the following command in your app directory:
docker run -it -v $(pwd):/app --rm --net=host mhariri/docker-google-appengine-php
I deployed an app with gcloud preview app deploy.
Is there a way to download it to another local machine?
How can I get the files? I tried it via SSH with no success (I can't access the Docker dir).
UPDATE:
I found this:
gcloud preview app modules download default --version 1 --output-dir=my_dir
but it doesn't download any files.
Log
Downloading module [default] to [my_dir/default]
Fetching file list from server...
|- Downloading [0] files... -|
I am coming back to Google App Engine after two years. I see that they have made lots of improvements and added tons of features, but sadly their documentation sometimes leaves much to be desired.
I used to download the code of an uploaded version with appcfg.py, using the following command:
appcfg.py download_app -A <app_id> -V <version> <output-dir>
But of course they have now consolidated everything into the gcloud shell, where appcfg.py is not accessible.
However, the following method helped me to download the deployed code:
Go to the console and into Google App Engine.
Select the project you want to work with.
Once the project's dashboard opens, click on the top right to open the built-in console window.
This should load the Cloud Shell at the bottom; if you check now, appcfg.py is available for you to use in this VM.
Hence, use appcfg.py download_app -A <app_id> -V <version> <output-dir> to download the code.
Once you have the code in the desired folder, in order to download it to your local machine you can open the Docker code editor.
I assumed that if I right-clicked and exported the desired folder it would work, but instead it gave me the following error message:
{"Error":"'concurrency' must be a number but it is [object Undefined]","Message":"'concurrency' must be a number but it is [object Undefined]"}
So I thought maybe it would play along nicely if the folder was an archive. Go back to the Cloud Shell and, using whatever utility you fancy, make an archive of the folder:
zip -r mycode.zip mycode
Go to the Docker code editor, export, and download.
Now, of course, there might be many more ways to do it (hopefully), but this is what made sense to me after returning to Google App Engine after 2 years.
Currently, the best way to do this is to pull the files out of Docker.
Put the instance into self-managed mode so that you can SSH into it:
$ gcloud preview app modules set-managed-by default --version 1 --self
Find the name of the instance:
$ gcloud compute instances list | grep gae-default-1
Copy it out of the Docker container, change the permissions, and copy it back to your local machine:
$ gcloud compute ssh --zone=us-central1-f gae-default-1-1234 'sudo docker cp gaeapp:/app /tmp'
$ gcloud compute ssh --zone=us-central1-f gae-default-1-1234 "chown -R $USER /tmp/app"
$ gcloud compute copy-files --zone=us-central1-f gae-default-1-1234:/tmp/app /tmp/
$ ls /tmp/app
Dockerfile
[...]
IMHO, the best option today (Aug 2018) is:
Under the main menu, under Products, go to Tools -> Cloud Build -> Build history.
There, click the ID of the build you want.
Then, in the window that opens (Build details), click the source link; the download of your compressed code begins.
As simple as that.
HTH.
As of Feb 2021, you can install appengine-sdk using pip:
pip install appengine-sdk
Once installed, appcfg can be used to download the app code:
python -m appcfg download_app -A app_id [ -V version ] out-dir
Nothing else worked for me; I finally found the source code this way. Simply go to Google Cloud Storage, choose the bucket whose name starts with us.artifacts...., select containers > images, and download the latest one (sort by created date). Unzip the downloaded file; it will contain all the deployed App Engine source code.
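For illustration, the same thing from the command line might look like this (the bucket name follows the usual us.artifacts naming convention; the project id and the object name are placeholders):
# list the stored container images
gsutil ls gs://us.artifacts.my-project.appspot.com/containers/images/
# download the newest one, then unpack it locally
gsutil cp gs://us.artifacts.my-project.appspot.com/containers/images/<latest-object> .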