Drone.io: publish to ECR in another AWS account

I'm using the Drone.io CI/CD software to build and publish an image to an ECR registry in another AWS account, in the same region (eu-west-1).
I have set up the IAM artefacts and it is connecting, but it fails on the last step. It seems that Drone is ignoring the registry option I'm passing and just using the registry returned by the AWS API call.
Drone is running in account 0987654321 and I'm trying to push the image to account 1234567890.
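For reference, cross-account pushes like this need the target account's repository to allow the source account, typically via a repository policy along these lines (the account IDs and repo name are the placeholders from above):
# run against the target account (1234567890): allow the Drone account (0987654321) to push
aws ecr set-repository-policy \
  --repository-name repo1/app_name \
  --region eu-west-1 \
  --policy-text '{
    "Version": "2012-10-17",
    "Statement": [{
      "Sid": "AllowCrossAccountPush",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::0987654321:root" },
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:PutImage",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload"
      ]
    }]
  }'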
ecr:
  image: plugins/ecr
  registry: 12345677890.dkr.ecr.eu-west-1.amazonaws.com
  repo: repo1/app_name
  region: eu-west-1
  tags: latest
  dockerfile: Dockerfile
I get the following error:
+ /usr/local/bin/docker tag blah 0987654321.dkr.ecr.eu-west-1.amazonaws.com/repo1/app_name:latest
+ /usr/local/bin/docker push 0987654321.dkr.ecr.eu-west-1.amazonaws.com/repo1/app_name:latest
The push refers to repository [0987654321.dkr.ecr.eu-west-1.amazonaws.com/repo1/app_name]
name unknown: The repository with name repo1/app_name does not exist in the registry with id '0987654321'
Which makes it appear as though it's not even using the registry attribute, and is just using the registry from the API call.

I know it might be a little late, but I used Drone at my old job for this specific case of pushing images to an ECR repository in another AWS account. In the old bash ECR plugin (it's now written in Go) this used to be done with the --registry-ids option, if I'm not mistaken. It seems it is now documented.

This looks like a known bug in the ECR plugin: the registry setting is ignored in favour of the default registry returned from AWS, which will be the one in the account Drone is running in.
Once this PR is merged it should work.

Related

How to deploy SQL Server Express on Docker Desktop Kubernetes

I've been studying "Kubernetes Up and Running" by Hightower et al. (first edition), Chapter 13, where they discuss creating a Reliable MySQL Singleton. (Since I just discovered that there is a second edition, I guess I'll be buying it soon.)
Using their MySQL reliable singleton example as a model, I've been looking for some sample YAML files to make a similar deployment with Microsoft SQL Server (Express) on Docker Desktop for Kubernetes.
Apparently I need YAML files to deploy
Persistent Volume
Volume claim (should this be NFS?)
SQL Server (Express edition) replica set (in spite of the fact that this is just a singleton).
I've tried this example, but I'm confused because it does not contain a persistent volume and claim, and it does not work. I get the error:
Error: unable to recognize "sqlserver.yml": no matches for kind "Deployment" in version "apps/v1beta1"
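From what I can tell, apps/v1beta1 was removed from newer clusters, so I'm guessing the Deployment part needs to target apps/v1 and carry a matchLabels selector, roughly like this (the image tag and password are placeholders I made up, and this still lacks the persistent volume piece):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql            # apps/v1 requires an explicit selector
  template:
    metadata:
      labels:
        app: mssql
    spec:
      containers:
        - name: mssql
          image: mcr.microsoft.com/mssql/server:2017-latest   # tag is a guess; pick the one you need
          env:
            - name: ACCEPT_EULA
              value: "Y"
            - name: MSSQL_PID
              value: Express
            - name: SA_PASSWORD
              value: "YourStrong!Passw0rd"                     # placeholder password
          ports:
            - containerPort: 1433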
Can someone please point me to some sample YAML files that are not Azure specific that will work on Docker Desktop Kubernetes for Windows 10? After debugging my application, I'll want to deploy this to Azure (AKS).
Wed Jul 15 2020 Update
I left out the "-n namespace" for the helm install command (possibly because I'm using Helm v3 and you are using Helm v2?).
That install command still did not work. Then I did a
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
Now this command works:
helm install todo-app-database stable/mssql-linux
Progress!
When I do a "k get pods" I see that my todo-app-mssql-linux database is in the pending state. So I did a
kubectl get events
and I see
Warning FailedScheduling pod/todo-app-database-mssql-linux-8668d9b88c-lsh5l 0/1 nodes are available: 1 Insufficient memory.
I've been google searching for "Kubernetes insufficient memory" and can find no match.
I suspect this is a problem specific to "Docker Desktop Kubernetes".
When I look at the output for
helm -n ns-todolistdemo template todo-app-database stable/mssql-linux
I see the deployment is asking for 2Gi. (Interesting: when I use the template command, the "-n ns-todolistdemo" does not cause an error like it does with the install command).
So I do
kubectl describe deployment todo-app-database-mssql-linux >todo-app-database-mssql-linux.yaml
I edit the yaml file to change 2Gi to 1Gi.
kubectl apply -f todo-app-database-mssql-linux.yaml
I get this error:
error: error parsing todo-app-database-mssql-linux.yaml: error converting YAML to JSON: yaml: line 9: mapping values are not allowed in this context
Hmm... that did not work (in hindsight, kubectl describe produces human-readable text rather than YAML, which would explain the parsing error). I try delete:
kubectl delete deployment todo-app-database-mssql-linux
kubectl create -f todo-app-database-mssql-linux.yaml
I get this error:
error: error validating "todo-app-database-mssql-linux.yaml": error validating data: invalid object to validate; if you choose to ignore these errors, turn validation off with --validate=false
So I try apply:
kubectl apply -f todo-app-database-mssql-linux.yaml
Same error!
Shucks.... Is there a way to adjust the memory allocation for Docker Desktop?
Thank you
Siegfried
Short answer
https://github.com/helm/charts/blob/master/stable/mssql-linux/templates/pvc-master.yaml
Detailed Answer
Docker Desktop already comes with a default StorageClass.
This storage class is responsible for auto-provisioning a PV whenever you create a PVC.
If you have a YAML definition of a PVC (PersistentVolumeClaim), just leave storageClassName out so the default class is used.
k get storageclass
NAME PROVISIONER AGE
hostpath (default) docker.io/hostpath 11d
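For example, a minimal PVC that relies on that default class could look like this (the name and size are just placeholders):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data          # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi          # placeholder size
  # storageClassName is deliberately omitted so the default (hostpath) class is used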
This is fair enough, as the Docker-for-Desktop cluster is a one-node cluster. So if your DB crashes and the cluster starts it again, it will not move to another node, simply because you only have a single node :)
Now, should you write the PVC YAML from scratch?
No, you don't need to, because Helm should be your best friend.
(I explain below why you should use Helm; it doesn't even have a steep learning curve.)
Fortunately, the community provides a chart called stable/mssql-linux.
Let's run it together:
helm -n <your-namespace> install todo-app-database stable/mssql-linux
# helm -n <namespace> install <release-name> <chart-name-from-community>
If you want to check the YAML (namely the PVC) that Helm computed, you can run template instead of install:
helm -n <your-namespace> template todo-app-database stable/mssql-linux
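If the chart's defaults don't fit your machine (for example the 2Gi memory request that left your pod Pending), you can inspect the chart's values and override them at install time. The exact key name comes from the chart, so check it first rather than trusting my guess:
helm show values stable/mssql-linux            # list the chart's configurable values
helm -n <your-namespace> install todo-app-database stable/mssql-linux \
  --set resources.requests.memory=1Gi          # key path is a guess; confirm it against the values output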
Why did I give you the answer with Helm?
Writing YAML from scratch means reinventing a wheel that others have already built.
The most efficient way is to reuse what the community has prepared for you.
However, you may ask: how can I reuse what others have done?
That's where Helm comes in.
Helm is your installer for any application on top of Kubernetes, regardless of how much YAML your app requires.
Install it now and hit the ground running: choco install kubernetes-helm

Mesosphere installation PermissionError: /genconf/config.yaml

I got Mesosphere-EE and installed it on a Fedora 23 server (kernel 4.4) with:
$ bash dcos_generate_config.ee.sh --web -v
Then the output was:
Running mesosphere/dcos-genconf docker with BUILD_DIR set to /home/mesos-ee/genconf
Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
07:53:46:: Logger set to DEBUG
07:53:46:: ====> Starting DCOS installer in web mode
07:53:46:: DCOS Installer v1
07:53:46:: Starting server ('0.0.0.0', 9000)
Then I start Firefox through VNC (the VNC session runs as root), and then:
07:53:57:: Root page requested.
07:53:57:: Serving /usr/local/lib/python3.4/site-packages/dcos_installer/templates/index.html
07:53:58:: Request for configuration type made.
07:53:58:: Configuration file not found, /genconf/config.yaml. Writing new one with all defaults.
07:53:58:: Error handling request
PermissionError: [Errno 13] Permission denied: '/genconf/config.yaml'
But I already have a genconf/config.yaml; it looks like this:
bootstrap_url: http://<bootstrap_public_ip>:<your_port>
cluster_name: '<cluster-name>'
exhibitor_storage_backend: zookeeper
exhibitor_zk_hosts: <host1>:2181,<host2>:2181,<host3>:2181
exhibitor_zk_path: /dcos
master_discovery: static
master_list:
- <master-private-ip-1>
- <master-private-ip-2>
- <master-private-ip-3>
superuser_username: <username>
superuser_password_hash: <hashed-password>
resolvers:
- 8.8.8.8
- 8.8.4.4
I do not know what's going on. If you have any ideas, please let me know. Thank you very much!
Disable SELinux!
Configure SELINUX=disabled in the /etc/selinux/config file and then reboot!
Make sure SELinux is disabled by running the getenforce command:
$ getenforce
Disabled
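If you just want to test without rebooting, you can also switch SELinux to permissive mode for the current session (this does not persist across reboots):
$ sudo setenforce 0
$ getenforce
Permissive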
zhe.
Correctly installing the Enterprise Edition depends on having the correct system prerequisites. Anyway, I suppose you're still on the bootstrap node, so I will give you a path to succeed in your current task.
Run the script as root, or as a user issuing sudo dcos_generate_config.ee.sh
The script will also generate the config file automatically; if you want to use your own configuration file, create a folder named genconf and put it inside before running the script. You should change the values inside <> to your specific configuration. If you need more help for your specific case, send me an email at infofs2 at gmail.com
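For example, something along these lines should avoid the permission error on genconf/config.yaml (run from the directory that contains the genconf folder):
# run the installer with root privileges
sudo bash dcos_generate_config.ee.sh --web -v

# or, if you keep running it as a normal user, make sure that user owns the genconf directory first
sudo chown -R $(whoami) genconf/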

What is CrashLoopBackOff status for openshift pods?

There is more than one example where I have seen this status from a pod running on OpenShift Origin. In this case it was the quickstart for the CDI camel example. I was able to successfully build and run it locally (non-OpenShift), but when I try to deploy it on my local OpenShift (using mvn -Pf8-local-deploy), I get this output for that particular example (snipped for relevance):
[vagrant#vagrant camel]$ oc get pods
NAME READY STATUS RESTARTS AGE
cdi-camel-z4czs 0/1 CrashLoopBackOff 4 2m
The tail of the logs is as follows:
Error occurred during initialization of VM
Error opening zip file or JAR manifest missing : agents/jolokia.jar
agent library failed to init: instrument
Can someone help me solve this?
If the state of the pod goes in to CrashLoopBackOff it usually indicates that the application within the container is failing to start up properly and the container is exiting straight away as a result.
If you use oc logs on the pod name, you may not see anything useful though as it would capture what the latest attempt to start it up is doing and may miss messages.
What you should do is instead provide the --previous or -p option to oc logs along with the pod name. That will show you the complete logs from the previous attempt to start up the container.
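For the pod above, that would be something like:
oc logs -p cdi-camel-z4czs    # -p / --previous shows the logs from the last crashed container start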
If this is an arbitrary Docker image you are using, a common problem that can occur, and which would cause the container not to start, is an application image that requires being run as the root user. Because running an application inside of a container as root still has risks, OpenShift doesn't allow you to do that by default and will instead run the container as an arbitrarily assigned user ID. The application image may not be designed with this possibility in mind and so fails.
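If it turns out the image really does need root and you are only experimenting on a local cluster, one possible workaround (not something the quickstart documents, and not recommended on shared clusters) is to grant the anyuid SCC to the service account the pod runs under, for example:
oc adm policy add-scc-to-user anyuid -z default -n <your-project>   # lets pods using the "default" service account run as the image's own user, including root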
So try and get those logs messages and see what the problem is.
Temporary workaround -> https://github.com/fabric8io/ipaas-quickstarts/issues/1157
Basically, the src/main/hawt-app directory needs to be deleted.

GAE Upload Download Data / Import Data to localhost for testing on my dev server

I needed to test some changes on my local dev server before pushing to production. Doing so required having the full dataset on my local machine.
A colleague directed me to:
https://developers.google.com/appengine/docs/python/tools/uploadingdata?csw=1
I downloaded the data using an administrator's username and password, but unfortunately, I was unable to upload the data to my localhost "dev" app engine server.
I ran this command from the command line:
appcfg.py upload_data --filename=../data/data1.dat --url=http://localhost:9080/_ah/remote_api ./
Where:
9080 was my app port on my localhost copy of the app
I was running this command from my app directory
The downloaded data was stored in the relative directory
../data/data1.dat
Received this error:
raise _ToDatastoreError(err)
google.appengine.api.datastore_errors.BadRequestError: app "dev~appname" cannot access app "appname"'s data
UPDATE: It seems that the answer was as simple as adding the following to my upload_data call:
--application="dev~appname"
Thanks @DavidBennett.
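Putting that together, the call that works would presumably look like this (the port and paths are from my setup above):
appcfg.py upload_data --application="dev~appname" --filename=../data/data1.dat --url=http://localhost:9080/_ah/remote_api ./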
ORIGINAL ANSWER: (which also works)
After a ton of searching on SO and code.google.com, the solution I found that worked was a comment on this question:
devappserver2, remote_api, and --default_partition
I used my original command as described in the question:
appcfg.py upload_data --filename=../data/data1.dat --url=http://localhost:9080/_ah/remote_api ./
The username and password I entered when prompted were my app's username (in my case, my email) and the corresponding password. (If that doesn't work, you might want to try a blank value or test@example.com based on other comments I've read, but I have not tested that theory.)
I also restarted my app engine with the following flag: (don’t forget to remove the flag the next time you restart the server) (You might want to try without using this flag, since I can’t confirm that it affects anything - I’m including it here, since it was a setting that I used.)
--clear_datastore=yes
The commenter recommends deleting “dev~” in your local server code at line 84 in this file:
google/appengine/tools/devappserver2/application_configuration.py, line 84
Where:
that base directory 'google' is located inside of:
/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/
assuming your GoogleAppEngineLauncher.app directory is in your Applications directory on your Mac
IMPORTANT: Restart your local app engine server for the changes to take effect.

How do I download the source code of a google app engine project?

This seems like it should be very easy but I don't see a link to it anywhere.
How do I download the source code of a google app engine project?
Windows
appengine-java-sdk\bin\appcfg.cmd -A <your_app_id> -V <your_app_version> download_app <output-dir>
Linux
./appengine-java-sdk/bin/appcfg.sh -A <your_app_id> -V <your_app_version> download_app <output-dir>
For completeness, using the Python implementation:
appcfg.py download_app -A $appID -V $appVersionNumber $downloadDirectory --oauth2
--oauth2 is of course optional, you can omit it and provide your email + app-specific password (or your password, and then go implement two-factor authentication right after), but it's easier, and frankly there's no reason not to.
Documentation.
App Engine actually recently added the ability for the developer who uploaded a given app version to download its source code.
As of October 2019 you can simply go to App Engine --> Services, and in the tool dropdown select 'source'; the source code is there.
Posting this since none of the methods listed above took me to the code (as of June 2021).
You could try accessing it through:
Google Cloud Platform > Debugger > choose the version of the application from the combo box at the top.
This will list the files of that version on the left pane. There is no way to download it automatically but you can copy-paste the code.
Hope you will find this helpful.
IMHO, the best option today (Aug 2018) is:
Under the main menu, under Products, go to Tools -> Cloud Build -> Build history.
There, click the ID of the build you want (for me, the last one).
Then, in the window that opens (Build details), click the "source" link and the download of your compressed code begins.
As simple as that.
HTH.
Working with App engine standard using Go, the debugger isn't available yet.
How I managed to download the source code for an existing service was to use the gcloud tool.
First: Get the version id of your service using the app engine console or running: gcloud app versions list
Second: use the version and service name and run: gcloud app versions describe <versionID> --service=<service name>
The describe command will give you the storage locations of your source files, which look like this:
cmd/main.go:
  sha1Sum: e3fe5848c2640eca7ac3591490e1debc2d3a9b09
  sourceUrl: https://storage.googleapis.com/<project>/<file id>
Third: you can then use the storage console, using the file id, to download the files you are interested in.
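If you prefer the command line to the storage console, the same object can usually be fetched with gsutil by rewriting the sourceUrl as a gs:// path (the bucket and file id below are the placeholders from the output above):
gsutil cp gs://<project>/<file id> ./cmd/main.go   # save the stored object back under its original file name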
This process is based on the Java SDK.
It works for me...
Download the Google Cloud SDK
gcloud init
Follow through the process of logging in using your credentials.
Enter the following command from the SDK directory:
C:\Program Files (x86)\Google\appengine-java-sdk-1.9.49\bin
Enter the following command to download the source code:
appcfg.sh -A [YOUR_APP_ID] -V [YOUR_APP_VERSION] download_app [OUTPUT_DIR]
Eg: appcfg.sh -A my-project-name-1234 -V 2 download_app C:\Users\india\Desktop\my project
Note: this process is based on the Java App Engine SDK, so we use appcfg.sh instead of appcfg.py.
Check whether your app was uploaded with the same email ID that is in your App Engine account. If you are not sure, then in App Engine go to Control > Clear deployment credentials, then click on any project and deploy to sign in again. Then use this:
appcfg.py download_app -A {app id from google app engine} -V {1} "{c:\path}" --oauth2_credential_file=C:\Users\{your account name}/.appcfg_oauth2_tokens
Change all the {} placeholders to match your setup.
Things have changed since this question was asked, so I'm adding an updated answer. Note that this only applies to the GAE Standard Environment.
Google has deprecated appcfg.py, so the appcfg.py download_app approach from the previous responses no longer works.
gcloud, which is the SDK now in use (it replaced appcfg), does not have the functionality to download your source code.
When you deploy your app via gcloud app deploy, it copies your source code to a bucket. The default bucket is staging.<project_name>.appspot.com. Your files will stay in this bucket for a maximum of 15 days before they are deleted. You can modify the rule so that the files are retained for a longer or shorter time.
The file names in the bucket are encoded so you can't figure out what each file is unless you open it (i.e. download it). Google has a mapping of the encoded names to the original file names. To get this mapping, you run the gcloud app versions describe command and it will list the file names and their encoded names. To download the files, you have to manually click each url one by one. So essentially, you have to download each file manually and then use the mapping to rename them (or open the file, check the content and then rename them). Also note that downloading the files manually will not maintain the folder structure in which they were uploaded.
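For example, assuming the default staging bucket is still in place, listing the stored objects and pulling up the name mapping looks roughly like this (project, service and version are placeholders):
gsutil ls gs://staging.<project_name>.appspot.com             # list the (encoded) source objects
gcloud app versions describe <version> --service=<service>    # shows original file names with their sourceUrl mappings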
If you do not wish to go through all of the above hassles (imagine having to manually open each URL for each file in a small to mid-sized project with hundreds of files), our app, https://nocommandline.com, now supports downloading source code from the default bucket, staging.<project_name>.appspot.com, as long as your files are still there; that means any deployment or update not older than 15 days from the current date, unless you previously increased the deletion age on your staging bucket's lifecycle page.
In simple terms, you enter your project name, the version number and our App will take care of retrieving the original file name to encoded name mapping, automatically downloading the files and renaming them to the original names, while maintaining the folder structure. For more information, refer to https://nocommandline.com/help/#faq_download_source_code_from_gae.
Log in to the console.developers.google.com
Select the project you want to download the code from (Google App Engine Standard Environment).
Go to the App Engine Dashboard. Under Summary is Debug and Source. Click on Source.
Select each file one at a time and copy it (highlight the code, copy and paste into your local editor.)
Select the next file....
You need to use svn to check out the files.
If you are on Windows, you can use TortoiseSVN as your GUI front end.
Here are tutorials on how to do it; here is the related question.
