How can Docker help software automation testers?

How can Docker help automation testers?
I know it provides Linux containers, which are similar to virtual machines, but how can I use those containers in software automation testing?

Short answer
You can use Docker to easily create an isolated, reproducible, and portable environment for testing. Every dependency goes into an image, and whenever you need an environment to test your application you just run those images.
Long answer
Applications have a lot of dependencies
A typical application has a lot of dependencies on other systems. You might have a database, an LDAP server, a Memcache instance, or many other things your system depends on. The application itself needs a certain runtime (Java, Python, Ruby) in a specific version (Java 7 or Java 8). You might also need a server (Tomcat, Jetty, NGINX) with settings for your application. You might need a special folder structure for your application, and so on.
Setting up a test environment becomes complicated
All these things make up the environment you need for your application. You need this environment to run your application in production, to develop it, and to test it (manually or automated). This environment can become quite complicated, and maintaining it will cost you a lot of time and trouble.
Dependencies become images
This is where Docker comes into play: Docker lets you put your database (with the initial data of your application already set up) into a Docker image. The same goes for your LDAP, your Memcache, and all other applications you depend on. Docker even lets you package your own application into an image which provides the correct runtime, server, folder structure, and configuration.
Images make your environment easily reproducible
Those images are self-contained, isolated, and portable. This means you can pull them onto any machine and just run them as they are. Instead of installing a database, LDAP, and Memcache and configuring all of them, you just pull the images and run them. This makes it super easy to spin up a new and fresh environment in seconds whenever you need one.
Testing becomes easier
And that's the basis for your tests, because you need a clean, fresh, and reproducible environment to perform tests against. "Reproducible" and "fresh" are especially important. If you run automated tests (locally on the developer machine or on your build server) you must use the same environment; otherwise your tests are not reliable. Fresh is important because it means you can just stop all containers when your tests are finished, and every data mess your tests created is gone. When you run your tests again, you just spin up a new environment which is clean and in its initial state.
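As a minimal sketch of that workflow (the image names, port, and password below are illustrative assumptions, not from the original answer), a test run could look like this:

    # create a throwaway network and spin up the dependencies as containers
    docker network create test-net
    docker run -d --name test-db --network test-net \
        -e POSTGRES_PASSWORD=test postgres:16
    docker run -d --name test-app --network test-net \
        -p 8080:8080 myorg/myapp:latest

    # ... run the automated tests against http://localhost:8080 ...

    # tear everything down; any data mess the tests created is gone
    docker rm -f test-app test-db
    docker network rm test-net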

Related

Vespa application config best practices

What is the best way to dynamically provide configuration to a Vespa application?
It seems the only method that is talked about is baking configuration values into the application package, but is there any way to provide configuration values outside of that? I.e., are there CLI tools to update individual configuration values at runtime?
Are there any recommendations or best practices for managing configuration across different environments (i.e., production vs. development)? At Oath/VMG, is configuration checked into source control or managed outside of that?
Typically, all configuration changes are made by deploying an updated application package. As you suggest, this is usually done by a CI/CD setup which builds and deploys the application package from a git repository whenever it changes.
This way it is easy to ensure changes have been reviewed (before merge), track all changes that have been made and roll them back if necessary. It is also easy to verify that the same changes which have been deployed and tested (preferably by automated tests) in a development / test environment are the ones that are deployed to production - because the same application package is deployed through each of those environments in order.
It is, however, also possible to update files in a deployed application package and create a new session from it, which may be useful if your application package has some huge resources. See https://docs.vespa.ai/documentation/cloudconfig/deploy-rest-api-v2.html#use-case-modify
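As a sketch of the CI/CD step described above, assuming the classic vespa-deploy tool and an application package built with Maven (both the build command and the path are illustrative):

    # build the application package from the git repository
    mvn package
    # upload and validate the package on the config server, then make it live
    vespa-deploy prepare target/application.zip
    vespa-deploy activate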

Deploying Create-React-App applications into different environments

I've got an app that I'm working on that consists of a Frontend - written using CRA - and a Backend - written in Java. We would like to support deploying this in multiple environments throughout our process, namely:
Local development machine - This is easy. Just run "npm start" and use the "development" environment
Local End-to-end tests - these build an infrastructure in Docker consisting of the Frontend, Backend and Database, and runs all of the tests against that
Development CI system
QA CI System
Pre-production System
Production System
The problem that I've got is: across this myriad of different systems, what is the best way for the Frontend to know where the Backend is? I'm also hoping that the Frontend can be deployed onto a CDN, so if it can be plain static files with minimal complexity, that would be preferred.
I've got the first two of these working so far - they work through .env files and a hard-coded hostname to call.
Once I get into the third and beyond - which is my next job - this falls down, because each of those environments calls a different hostname, while the same output of npm run build does the work everywhere.
I've seen suggestions of:
Hard-code every option and determine it based on the current browser location in a big switch statement. That just scares me - not only do I have to maintain my deployment properties inside my source code, but the Production source will know the internal hostnames, which is arguably a security risk.
Run different builds with provided environment variables - e.g. REACT_APP_BACKEND=http://localhost:8080/backend npm run build. This will work, but then the code going into QA, PreProd and Prod are actually different and could arguably invalidate testing.
Adding another JavaScript file that gets deployed separately onto the server and is loaded from index.html (sketched after this list). That again means the different test environments are technically different code.
Use DNS Tricks so that the hostnames are the same everywhere. That's just a mess.
Serve the files from an active web server, and have it dynamically write the URL out somehow. That would work, but we're hoping that the Production one can just be a simple CDN that just serves up static files.
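As a sketch of the "separate JavaScript file" option above: the deployment step for each environment could write one tiny config file that index.html loads via a plain script tag before the app bundle. The file name, global variable, and URL here are hypothetical:

    # written by the deployment pipeline for each environment (hypothetical names)
    cat > build/env-config.js <<'EOF'
    window.ENV = { BACKEND_URL: "https://qa.example.com/backend" };
    EOF

The bundle itself then stays byte-identical across QA, PreProd, and Prod; only this one static file differs per environment, which keeps the setup compatible with serving everything from a CDN.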
So - what is the best way to manage this?
Cheers

How to launch app from the web/cloud

I have developed an app in Twilio which I would like to run from the cloud. I tried learning about AWS and Google App Engine but am quite confused at this stage:
I have 2 questions which I hope to get your help on:
1) How can I store my scripts and database in the cloud? Right now, everything is running out of my local machine but I would like to transfer the scripts and db to another server and run my app at a predetermined time of day. What would be the best way to do this?
2) How can I write a batch file to run my app at a predetermined time of day in the cloud?
I understand this does not have code, but I really hope someone can point me in the right direction. I have spent lots of time trying to understand this myself but am still unsure. Thanks in advance.
Update: The application is a Twilio app that makes calls to people; the script simply applies an algorithm to make the calls in a certain fashion, and the database is a MySQL DB that provides the details of the people to be called.
It is quite difficult to provide an exact answer without understanding what the application is, what the DB is, or what the script you wish to run does.
I can give you a couple of ideas that might be helpful in such cases.
OpsWorks (http://aws.amazon.com/opsworks/) is a managed service for running applications. You can define your stack (multiple layers like web, workers, DB...) and which Chef recipes should run at various points in the life of the instances in each layer (startup, shutdown, app deployment, or stack modification). Then you can use the ability to add instances to each layer on specific days and hours to implement the functionality of running at predetermined times, as you requested.
In such a solution you can either keep some of your instances (like the DB) always on, or even bootstrap them using the Chef recipes every day, restoring from a snapshot on start and creating a snapshot on shutdown.
Another AWS service that you can use is Data Pipeline (http://aws.amazon.com/datapipeline/). It is designed to move data periodically between data sources, for example from a MySQL database to Amazon Redshift, the data warehouse service. But you can also use it to trigger arbitrary shell scripts (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-shellcommandactivity.html) and schedule them to run under various conditions, like every hour/day or at specific times (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-concepts-schedules.html).
A simple path here would be just to create an EC2 instance in AWS and put the components needed to run your app there. A thorough walk-through is here:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/get-set-up-for-amazon-ec2.html
Essentially you will create an EC2 virtual machine, which you can for most purposes treat just like any other Linux server. You can install MySQL on it, copy your script there, and run it. Of course whatever container or support libraries your code requires will need to be installed as well.
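For the "predetermined time of day" part of the question, a standard cron entry on the instance is the simplest sketch (the script path, interpreter, and schedule below are illustrative):

    # crontab -e, then add: run the calling script every day at 09:00
    0 9 * * * /usr/bin/python /home/ec2-user/app/make_calls.py >> /home/ec2-user/app/make_calls.log 2>&1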
You don't say what OS you are using locally, but if it is Mac or Linux, you should be able to follow almost the same process to get your script running on an EC2 instance that you used on your local machine.
As you get to know AWS, there are sophisticated services you can use for deployment, infrastructure orchestration, database services, and so on. But just to get started running a script from a virtual machine should be pretty straightforward.
I recently developed a Twilio application using Ruby on Rails for the backend and found Heroku extremely simple to set up and launch. While Heroku does cost more than AWS, I found that the time I saved using Heroku more than made up for it. As an early-stage startup, we wanted to spend our time developing important features, not "wasting" time optimizing our AWS cloud.
However, while I believe Heroku is ideal for early-stage websites/startups, I do believe hosting should be reevaluated once a company reaches a certain size. At some point it becomes economically viable to devote resources to optimizing an AWS cloud solution, because it will be cheaper than Heroku in the long run.

Integration tests in Continuous Integration environment: Database and filesystem state

I'm trying to implement automated integration tests for my application. It's a very complex monster. You could say that its database and part of the filesystem are part of its state, because it saves image files on the hard drive, and references to those in the DB. The software needs all of those, in a coherent state, to work properly.
Back to writing tests: to run any relevant test, I need some image files in the filesystem and certain records filled in the database. I thought of putting all of these in a separate folder called TestEnvironmentData in the repository and retrieving them from the continuous integration server (TeamCity), but a colleague said the repo is quite full as it is, and that I should set up a special directory, and databases, only on the continuous integration server. I don't like that, because the tests' success would depend on me manually maintaining stuff on the server, and restoring the initial state before every test becomes cumbersome.
What do you guys do when you need to write integration tests for an app like this? The main goal is having an automated test harness with which to approach a large-scale refactoring. There's lots of spaghetti code, and the app's current architecture is hardly unit-testable; that's why I decided on integration tests first.
Any alternative approach is welcome.
Developer repeatability is key when setting up a continuous integration server. I have set one up for each of my last three employers, and I have found the key to success is developers being able to run the same tests on their dev systems and get the same results as the CI server.
The easiest way to do this would be to check the test artifacts into source control, but you could also use Dropbox or a network share and copy them from there in one of the build steps.
For a .NET solution I have always used MSBuild, as you can most easily replicate the build process of Visual Studio and get the same binaries/deployables. As for keeping your database in sync so that tests are repeatable: in the past I used the MbUnit test framework and its [Rollback] attribute, as it would roll back any changes to SQL Server that happened in the test. I believe NUnit now has this attribute as well.
The CI server is great for finding code that breaks existing functionality but unless developers can reproduce the error on their machine they won't trust the CI server for some time.
First of all, we use Maven to build our code. It's like Ant, but it relies on convention instead of configuration for many things, like Ruby on Rails does. One of those conventions is a standardized directory structure:
(project)----src----main----(language)
    |         |        \--resources
    |         \--test----(language)
    |                \--resources
    \--target---...
Using a directory structure like this makes it easy to keep your application resources and testing resources near each other, yet still be able to build for test or for production, or build both but package up only the application parts after running the tests.
As far as resetting the database between tests goes, how you do that depends greatly on the DBMS you're using. For instance, if you're using MySQL it's very easy to get the test data the way you want and do a mysqldump to a file that you then load before each test. With other DBMSs you may have to drop and recreate the tables and reload the data, or make separate tables for the starting point and use a CREATE/SELECT SQL statement to duplicate it each time.
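A minimal sketch of that MySQL approach (the database name and user are illustrative):

    # capture the known-good starting state once
    mysqldump -u testuser -p testdb > baseline.sql
    # restore it before every test run
    mysql -u testuser -p testdb < baseline.sql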
There really is no reliable way around the "reset the database between tests" step.

Which tool or framework can be used to run WPF automation tests in Parallel?

What I want is to run my WPF automation tests (integration tests) as part of the continuous integration process when possible. That means every time something is pushed to source control, I want to trigger an event that starts the WPF automation tests. However, integration tests are slower than unit tests, which is why I would like to execute them in parallel, across several virtual machines. Is there any framework or tool that allows me to run my WPF automation tests in parallel?
We use Jenkins. Our system tests are built on top of a proprietary framework written in C#.
Jenkins allows jobs to be triggered by SCM changes (SVN, Git, and Mercurial are all supported via plugins). It also allows jobs to be run on remote slaves (in parallel, if needed). You do need to configure your jobs and slaves by hand. Configuring jobs can be done with build parameters: say you have only one job that accepts test IDs as parameters, but it can run on several slaves; you can configure one trigger job that starts several test jobs on different slaves, passing the test IDs to them as parameters.
Configuring slaves is made much easier when your slaves are virtual machines. You configure one VM and then copy it (make sure that node-specific information, such as Node Name is not hard-coded and is easily configurable).
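To make the trigger-job idea concrete, here is a hedged sketch using the Jenkins HTTP API mentioned below; the job name, parameter name, suite IDs, and credentials are illustrative:

    # start the parameterized test job once per test ID, fanning out across slaves
    for id in suite-a suite-b suite-c; do
        curl -X POST --user user:apitoken \
            "http://jenkins.example.com/job/wpf-tests/buildWithParameters" \
            --data "TEST_ID=$id"
    done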
The main advantages of Jenkins:
It's free
It has an extensible architecture: people can extend it via plugins. As a matter of fact, at this stage (unlike, say, a year and a half ago) everything I need to do can be done either via plugins or via the Jenkins HTTP API.
It can be rapidly deployed. Jenkins runs in its own servlet container; you can deploy it and start playing with it in less than an hour.
Active community. Check, for example, [jenkins], [hudson], and [jenkins-plugins] tags on SO.
I propose that you give it a try: play with it for a couple of days; chances are you'll like it.
Update: here's an old answer I've liked that recommends Jenkins.
