using unit tests to keep index.yaml updated - google-app-engine

So far, the only way I have been able to keep index.yaml updated when I make code changes is to hit the URLs via the browser, or to use TransparentProxy, while the application is being served through dev_appserver.
This sucks.
Is there a way to bootstrap the App Engine environment in the unit test runner so that whatever process is used to update index.yaml can be run without incurring the overhead of the single-threaded dev_appserver?
The difference is significant. My test suite (80% coverage) runs in 2 minutes but does not update index.yaml; if I run the same suite using TransparentProxy to forward requests to port 8080, index.yaml does get updated, but it takes about 4 hours. Again, this sucks.

You can use my Nose plugin for this, called nose-gae-index. It uses the internal IndexYamlUpdater class from the SDK, so it is definitely better than proxying requests.
Despite this improvement, there is definitely no need to have it enabled all the time. I use it before deployment and to inspect changes to index configuration caused by new commits.
Remember not to use queries that require indexes in the tests themselves, or they will be added to the configuration file as well!
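If you can't use the plugin, a related trick is to point the SDK's datastore stub at your application root from a plain unittest setup. This is a minimal sketch, not what the plugin does internally, and it assumes the Python SDK's testbed forwards require_indexes and root_path to the datastore stub (it did in the 2.7-era SDK): with require_indexes=True, any query whose index is missing from index.yaml raises NeedIndexError, so a fast test run at least tells you the file is stale.

import unittest
from google.appengine.ext import testbed

class IndexCoverageTest(unittest.TestCase):
    def setUp(self):
        self.testbed = testbed.Testbed()
        self.testbed.activate()
        # root_path tells the stub where index.yaml lives;
        # require_indexes turns every uncovered composite query into
        # a datastore_errors.NeedIndexError instead of a silent pass.
        self.testbed.init_datastore_v3_stub(require_indexes=True,
                                            root_path='.')

    def tearDown(self):
        self.testbed.deactivate()

    def test_queries_have_indexes(self):
        # Issue the composite queries your handlers run here; any
        # query not covered by index.yaml fails fast.
        pass

This detects a stale index.yaml rather than rewriting it, which is why the plugin's use of IndexYamlUpdater is still the nicer option.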

Related

Cache busting a Reactjs web application

I'm developing an application in ReactJS where I quite often push new changes to the application.
When users load up the application they do not always get the newest version, which causes breaking changes and errors against the Express backend I have.
From what I have researched, you can invalidate the cache using "cache busting" or a similar method, although the questions I have seen on Stack Overflow show no clear consensus on how to do it, and the latest update was sometime in 2017.
How would one, in a modern ReactJS application, invalidate the browser's cache in an efficient and automatic way when deploying?
If it's relevant, I'm using Docker and docker-compose to deploy my application.
There's no one-size-fits-all solution. A pretty common approach is adding a content hash to the bundle file name, which causes the browser to fetch the file from the server again.
Something like app.js?v=435893452 instead of app.js. Most modern bundling tools like Webpack can do all of that automatically, but it's hard to give you direction without knowing your setup.
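For example, with Webpack the usual mechanism is [contenthash] in the output file name. A minimal sketch, where the entry path and output directory are placeholders for your own project:

// webpack.config.js -- sketch; adjust entry and output to your project
const path = require('path');

module.exports = {
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'build'),
    // [contenthash] changes only when the file's contents change,
    // so every deploy yields a URL the browser has never cached.
    filename: '[name].[contenthash].js',
    clean: true, // webpack 5: remove stale bundles from earlier builds
  },
};

One caveat: the HTML that references the bundle must itself be served with a no-cache header (html-webpack-plugin rewrites the script tag on each build), otherwise browsers can keep loading the old hashed file.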

GAE: Specify min_instances only for default service version

We have a service running on Google App Engine.
If that service does not receive traffic for some time, all instances are killed and the next call takes a few additional seconds while the application starts up.
We are thinking about specifying a min_instances option in app.yaml to always keep at least one instance alive.
We deploy new versions of that service quite frequently and keep old versions around for some time. Those old versions are not serving traffic and are kept just in case.
What we would like is to always keep at least one instance of the default service version alive, and to leave all other, non-default versions with the default behavior – we want them to be scaled down to 0 instances automatically if they do not receive any traffic.
I didn't find such an option in the documentation (https://cloud.google.com/appengine/docs/standard/python3/config/appref#scaling_elements) and didn't come up with any workarounds.
I am thinking about creating a cron job (https://cloud.google.com/appengine/docs/flexible/python/scheduling-jobs-with-cron-yaml) which would periodically "ping" only the default version of my application, thus keeping it awake. But I am not sure it is a good solution.
Are there any better solutions for such a case?
Thanks!
The min_idle_instances config option seems to solve my problem.
Note the following in the documentation: "This setting only applies to the version that receives most of the traffic", which is almost exactly my case:
automatic_scaling:
  min_idle_instances: 1
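For completeness, the cron-based workaround from the question would look roughly like this; /ping is a hypothetical warm-up handler you would have to add, and as far as I can tell cron requests without a target are routed to the version currently serving default traffic, which is what you want here:

cron:
- description: keep the default version warm
  url: /ping
  schedule: every 5 minutes

min_idle_instances is still the cleaner option, since it doesn't spend requests just to keep an instance around.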

Frontend App shows a blank page if I scale up kubernetes deployment to 3

I have a frontend application that works perfectly fine when I have just one instance of it running in a Kubernetes cluster. But when I scale the deployment up to 3 replicas, it shows a blank page on the first load, and then after a refresh it loads the page. As soon as I scale the app back down to 1, it starts loading fine again.
Here is what the console prints in the browser.
hub.xxxxx.me/:1 Refused to execute script from 'https://hub.xxxxxx.me/static/js/main.5a4e61df.js' because its MIME type ('text/html') is not executable, and strict MIME type checking is enabled.
Adding the screenshot as well. Any ideas what might be the cause? I know it is an infrastructure issue, since it happens only when I scale the application.
One thing I noticed is that 2 pods have a different js file than the other pod.
2 pods have this file - build/static/js/main.b6aff941.js
The other pod has this file - build/static/js/main.5a4e61df.js
I think the mismatch is causing the problem. Any Idea how to fix this mismatch issue so that the pods always have the same build?
Yes, this is actually pretty common in a build where those resources change like that. You won't want to use the traditional rolling-update mechanism, because your deployment is closer to a blue-green one: only one "family" of Pods should be in service at a time, else the HTML from Pod 1 is served but the subsequent request for the JavaScript lands on Pod 2 and 404s. And since single-page-app servers typically answer unknown paths with index.html, that fallback is exactly the text/html response behind the MIME-type error you see.
There is also the pretty grave risk of a browser having a cached copy of the HTML, but kubernetes can't -- by itself -- help you with that.
One pretty reasonable solution is to scale the Deployment down to one replica, apply the image patch, wait for it to report healthy, then scale back up, so there is only one source of truth for the application running in the cluster at any time. A rollback would look very similar: scale to 1, roll back the Deployment, scale back up. See the kubectl sketch below.
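Roughly, with kubectl (a sketch: the Deployment and the container inside it are both assumed to be named frontend, and the image reference is a placeholder):

# scale down to a single source of truth
kubectl scale deployment frontend --replicas=1
# patch the image; "frontend=" names the container in the Pod spec
kubectl set image deployment/frontend frontend=registry.example.com/frontend:v2
# block until the new Pod is rolled out and Ready
kubectl rollout status deployment/frontend
# scale back up -- every replica now serves the same build
kubectl scale deployment frontend --replicas=3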
An alternative mechanism would be to use label patching, to atomically switch the Service (and presumably thus the Ingress) over to the new Pods all at once, but that would require having multiple copies of the application in the cluster at the same time, which for a front-end app is likely more trouble than it's worth.

How can docker help software automation testers?

How can Docker help automation testers?
I know it provides Linux containers, which are similar to virtual machines, but how can I use those containers in software automation testing?
Short answer
You can use Docker to easily create an isolated, reproducible and portable environment for testing. Every dependency goes into an image, and whenever you need an environment to test your application you just run those images.
Long answer
Applications have a lot of dependencies
A typical application has a lot of dependencies on other systems. You might have a database, an LDAP server, a Memcache, or many more things your system depends on. The application itself needs a certain runtime (Java, Python, Ruby) in a specific version (Java 7 or Java 8). You might also need a server (Tomcat, Jetty, NGINX) with settings for your application. You might need a special folder structure for your application, and so on.
Setting up a test environment becomes complicated
All these things make up the environment your application needs. You need this environment to run your application in production, to develop it, and to test it (manually or automated). This environment can become quite complicated, and maintaining it will cost you a lot of time and trouble.
Dependencies become images
This is where Docker comes into play: Docker lets you put your database (with your application's initial data already set up) into a Docker image. The same goes for your LDAP, your Memcache and all the other applications you depend on. Docker even lets you package your own application into an image which provides the correct runtime, server, folder structure and configuration.
Images make your environment easily reproducible
Those images are self-contained, isolated and portable. This means you can pull them onto any machine and just run them as they are. Instead of installing and configuring a database, LDAP and Memcache, you just pull the images and run them. This makes it super easy to spin up a new, fresh environment in seconds whenever you need one.
Testing becomes easier
And that's the basis for your tests, because you need a clean, fresh and reproducible environment to perform tests against. "Reproducible" and "fresh" are especially important. If you run automated tests (locally on the developer machine or on your build server) you must use the same environment; otherwise your tests are not reliable. Fresh is important because it means you can just stop all containers when your tests are finished, and any data mess your tests created is gone. When you run the tests again, you spin up a new environment which is clean and in its initial state. See the Compose sketch below.
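As a concrete sketch, a Compose file can describe the whole throwaway environment; the image names, port and credentials below are placeholders, not anything prescribed:

# docker-compose.yml -- hypothetical test environment
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: test   # throwaway credentials, tests only
  cache:
    image: memcached:1.6
  app:
    image: my-app:latest        # your application image
    depends_on:
      - db
      - cache
    ports:
      - "8080:8080"

Run docker compose up -d before the suite and docker compose down -v afterwards; the -v flag also removes the volumes, so the next run starts from a clean initial state.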

Zero downtime deployment for angular app

I have a RESTful Angular app that is hosted on AWS, and I'm looking for a clean and quick deployment solution to put the new site live without taking down the previous one. I don't have much DevOps experience, so any advice would be great. The site is fully RESTful, so it's just static pages.
I was looking at setting up Dokku with the AWS plugin, but I'm pretty sure that's overkill and it may not be able to detect my app because it's just static pages (no Node, Rails, etc.).
The best way to do this is to reconfigure the web server on the fly to point to the new application.
Install the new version of the app to a new location, update the web server config files to point to the new location, and reload the server.
In-flight requests will be satisfied by the old application, and all new requests will hit the new application, with no downtime between them save for the trivial delay when refreshing the web server (don't restart it, just tickle it to reload its configuration files).
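With nginx, for example, the switch is just an edit and a graceful reload (a sketch; the release path is a placeholder):

# in the server block, point root at the new release directory:
#     root /var/www/releases/v2;
sudo nginx -t          # validate the edited configuration first
sudo nginx -s reload   # graceful reload: old workers finish in-flight requests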
Similarly, you can do this solely at the filesystem level, by installing the new app in a new directory parallel to the old one. Then:
mv appdir appdir.bak
mv appdir.new appdir
This is not quite zero downtime, but it is a very, very short downtime while the two inodes are renamed. Just ensure that both the new and old directories are on the same filesystem, and each mv will be instantaneous. The advantage is that you can trivially "undo" the operation in the same way.
There IS a window where you have no app at all: for a fraction of a second there will be no appdir, and you will serve up 404s during those few microseconds. So do it when the system is quiet. But it's trivial to instrument and do.
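If even that window matters, a common variant closes it: have the web server's document root point at a symlink named appdir and swap the symlink instead, since replacing it is a single atomic rename. A sketch, assuming GNU mv (for the -T flag):

ln -s /srv/releases/v2 appdir.new   # stage a symlink to the new release
mv -T appdir.new appdir             # atomically replace the old symlink

Rolling back is the same two commands pointed at the previous release directory.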
We ended up going with TeamCity for our build/tests and deploying via Shipit.
https://github.com/shipitjs/grunt-shipit
https://www.jetbrains.com/teamcity/
Try using a git repo for live deployment: https://danbarber.me/using-git-for-deployment/
A simple solution is to use an ELB. This lets you deploy a new instance, deploy the code, test it, update the ELB to switch traffic to the new instance, and then remove the old instance.
An easy solution is to always be running two instances, a production and a staging. These should be identical and interchangeable (because they are going to switch). Assign an Elastic IP to your production instance. When it's time to update, copy the code onto staging, make sure it's working, and then attach the Elastic IP to staging. It is now production, and production is now staging. This is not an ideal solution, but it is very easy, and the same principles apply to better solutions.
A better solution involves an Elastic Load Balancer. Make sure you have 2 instances attached. When it is time to update, detach an instance, perform your update, make sure it is working, and reattach it. Now there will be a brief period when a client could get either your new website or your old one. Then detach the other old node, perform the update, and reattach it. A CLI sketch follows.
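With a classic ELB, the detach/reattach steps map onto the CLI like this (a sketch; the load balancer name and instance id are placeholders):

# take one instance out of rotation, update and verify it...
aws elb deregister-instances-from-load-balancer \
    --load-balancer-name my-lb --instances i-0123456789abcdef0
# ...then put it back and repeat with the other instance
aws elb register-instances-with-load-balancer \
    --load-balancer-name my-lb --instances i-0123456789abcdef0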
The fact of the matter is that even if you just overwrite files on the live server, there will only be a 10ms window or so where a client could get the new version of one file (e.g. the HTML) and the old version of another (e.g. the CSS). After that it will be consistent again.
