Any way to run multiple Google App Engine local instances for appengineFunctionalTest?

Background
From the docs, at https://github.com/GoogleCloudPlatform/gradle-appengine-plugin
I see that putting my functional tests in /src/functionalTests/java does the following:
Starts the local GAE instance
Runs the tests in the functionalTests directory
Stops the local instance after the tests are complete
My Issue
For my microservices, I need two local servers running during my tests. One server is responsible for a lot of auth operations, and the other microservices talk to this server for some verification operations.
I've tried
appengineFunctionalTest.dependsOn ':authservice:appengineRun'
This does start the dependent server, but then the build hangs and the tests don't continue. I see that I can set daemon = true to start the server on a background thread, but I can only seem to do that in isolation.
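For reference, this is roughly what I mean (the port value is just an example):

// authservice/build.gradle
appengine {
    httpPort = 8081  // example port, so it doesn't clash with the main service
    daemon = true    // runs the dev server on a background thread
}

// my service's build.gradle
appengineFunctionalTest.dependsOn ':authservice:appengineRun'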
Is there a way to have 'dependsOn' also pass parameters to the dependent task? I haven't found a way to make that happen.
Or perhaps there is another way to accomplish this.
Any help appreciated

Related

How to not register global commands to my bot's test server?

I've been developing my Discord bot in a test server where I deploy all the commands locally to only that server. However, I finished developing my bot today, and so I deployed all my commands globally to test them in other servers and everything works great...
Except now in my test server I have two copies of every command?? One local copy and one global copy...
I would like to keep only the local versions in the test server, because the global versions take up to one hour to update, which is not convenient for quick testing.
As an example, when I try to run the /ping command, my bot's command list shows /ping twice, the local (guild) copy and the global copy.
Is there a way to not register the global commands to my test server? Or at least some way to mark which commands are the local (updated) ones and which ones are the global (outdated) ones? This has been a huge pain for testing and I have to think there's some way to get rid of it...
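For context, my deploy script does roughly this (token, clientId, guildId and commands are placeholders for my actual values):

const { REST, Routes } = require('discord.js');

const rest = new REST().setToken(token);

(async () => {
    // Guild commands: update instantly, but only exist in the test server
    await rest.put(Routes.applicationGuildCommands(clientId, guildId), { body: commands });

    // Global commands: exist in every server, but take up to an hour to propagate
    await rest.put(Routes.applicationCommands(clientId), { body: commands });
})();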

Test Rest Api and verify using db access, pros and cons

I'm testing a REST API end to end, and I want to verify that the application behaved as expected.
Where possible I use the existing REST API for verification, but in other cases there is no suitable API available, so I have two options:
creating an API for test purposes, or checking in the database that the data has changed as expected.
Which one is better and why? Is there any harm in doing DB calls from your tests?
Since you want to test the API itself (E2E testing), I would probably go the route of creating a test DB and working on that. If you are performing this manually, you can easily, albeit cumbersomely, reset the DB after each test.
You can also, and I would probably go this route, create a Docker container with a ready-to-use DB that you update as needed alongside the development of the API itself. You could spawn a new container/DB every time you need to run the tests, which would also ease automation and integration into CI/CD pipelines.
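A minimal sketch of that idea, assuming MySQL (the image, credentials and seed file are placeholders):

# docker-compose.yml — a throwaway test database, recreated for every test run
services:
  testdb:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: test
      MYSQL_DATABASE: app_test
    ports:
      - "3306:3306"
    volumes:
      - ./seed.sql:/docker-entrypoint-initdb.d/seed.sql  # preload known test data

Running docker compose up -d before the suite and docker compose down afterwards gives every run a clean, known state.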
Either of these solutions lets you test the real API against a real DB, providing more accurate test results.
Hope this helps!
Cheers!
In e2e or acceptance tests you usually check only the observable behavior, i.e. what the user (in your case, the user of the API) would be able to see, so in theory you should not need to check the database.
If you really want to check the results in the database, I would use a project like xmysql (https://github.com/o1lab/xmysql) to get a REST API that can be used to check the DB. Reasons are:
you don't have to maintain your own code serving API endpoints that are not needed in production; that code might end up public and become a security issue
you don't need to mock any database or calls, you can test what you fly, and fly what you test
it's easy to test when the system where you run the tests and the system under test are different machines. You can test everything remotely without needing to open ports to your DB; e.g. it's easy to use Docker to set it all up, run the tests and then deploy. The only thing the deployment does not have is xmysql
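For reference, the xmysql README boils down to pointing it at the DB (the credentials here are placeholders); every table is then exposed as a REST resource:

npm install -g xmysql
xmysql -h localhost -u mysqlUsername -p mysqlPassword -d databaseName
# tables become endpoints on port 3000, e.g.
# GET http://localhost:3000/api/people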

How to launch app from the web/cloud

I have developed an app with Twilio which I would like to run from the cloud. I tried learning about AWS and Google App Engine, but I am quite confused at this stage.
I have 2 questions which I hope to get your help on:
1) How can I store my scripts and database in the cloud? Right now, everything is running out of my local machine but I would like to transfer the scripts and db to another server and run my app at a predetermined time of day. What would be the best way to do this?
2) How can I write a batch file to run my app at a predetermined time of day in the cloud?
I understand this question does not have code, but I really hope someone can point me in the right direction. I have spent lots of time trying to understand this myself but am still unsure. Thanks in advance.
Update: The application is a Twilio app that makes calls to people; the script simply applies an algorithm to make the calls in a certain fashion, and the database is a MySQL DB that provides the details of the people to be called.
It is quite difficult to provide an exact answer without knowing what the application is, what the DB is, or what the script you wish to run does.
I can give you a couple of ideas that might be helpful in such cases.
OpsWorks (http://aws.amazon.com/opsworks/) is a managed service for managing applications. You can define your stack (multiple layers like web, workers, DB...) and which Chef recipes should run at various points in the life of the instances in each layer (startup, shutdown, app deployment or stack modification...). You can then use the ability to add instances to each layer on specific days and hours to implement the run-at-predetermined-times functionality you requested.
In such a solution you can either keep some of your instances (like the DB) always on, or even bootstrap them with the Chef recipes every day, restoring from a snapshot on start and creating a snapshot on shutdown.
Another AWS service you can use is Data Pipeline (http://aws.amazon.com/datapipeline/). It is designed to move data periodically between data sources, for example from a MySQL database to Amazon Redshift, the data warehouse service. But you can also use it to trigger arbitrary shell scripts (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-shellcommandactivity.html) and schedule them to run under various conditions, such as every hour/day or at specific times (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-concepts-schedules.html).
A simple path here would be just to create an EC2 instance in AWS and put the components needed to run your app there. A thorough walkthrough is here:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/get-set-up-for-amazon-ec2.html
Essentially you will create an EC2 virtual machine, which you can for most purposes treat just like any other Linux server. You can install MySQL on it, copy your script there, and run it. Of course whatever container or support libraries your code requires will need to be installed as well.
You don't say what OS you are using locally, but if it is Mac or Linux, you should be able to follow almost the same process to get your script running on an EC2 instance that you used on your local machine.
As you get to know AWS, there are sophisticated services you can use for deployment, infrastructure orchestration, database services, and so on. But just to get started running a script from a virtual machine should be pretty straightforward.
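For the "predetermined time of day" part, a plain cron entry on the instance is usually enough; a minimal sketch (the path and schedule are examples):

# added via crontab -e: run the calling script every day at 09:00 server time
0 9 * * * /usr/bin/python /home/ec2-user/app/make_calls.py >> /home/ec2-user/app/calls.log 2>&1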
I recently developed a Twilio application using Ruby on Rails for the backend and found Heroku extremely simple to set up and launch. While Heroku does cost more than AWS, I found that the time I saved using Heroku more than made up for it. As an early-stage startup, we wanted to spend our time developing important features, not "wasting" time optimizing our AWS cloud.
However, while I believe Heroku is ideal for early-stage websites/startups, I do believe hosting should be reevaluated once a company reaches a certain size. At some point it becomes economically viable to devote resources to optimizing an AWS cloud solution, because it will be cheaper than Heroku in the long run.

BIRT and iServer, dev/qa/production environments

I'm trying to set up my BIRT reports, and the iServer they sit on, so that the database the Data Sources connect to is determined by the environment. Our current setup is one iServer instance and many environments running a Tomcat webapp that hit it (this may be the problem...).
Essentially, the ideal is that the report connects differently in each of these places:
Local development, which runs a local Tomcat instance of the application that talks to the iPortal/iServer. Local database, but it should be easy to switch to other databases for debugging etc.
QA deploy, QA database
Production deploy, production database
I've seen two options for how to fix this:
The first option is to bind the Data Source to a configuration file in resources somewhere. The problem here is that if you have only one iServer, its resources are local to the server it is on, not to where the webapp runs. So, if I understand correctly, this does not provide the flexibility I'm looking for.
The second option is to pass in all the connection info as report parameters and have the application determine the correct parameters to send. This way the application could pull from a local configuration file. This option would work, but I'm wary of the security (or lack thereof) in passing around connection info/credentials.
Does anyone have a better option? Or do people just run local iServer instances for development? I can see that running an iServer for each environment may simplify this issue and allow the reports released to production to be updated and tested in a QA environment without disrupting production, so maybe that is the solution.
One possible approach would be to set each of the connection properties conditionally in the Property Binding section of the Edit Data Source dialog, based on the value of a hidden parameter indicating which environment is to be accessed.
An example of this approach can be found here.
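In rough terms, a binding on the data source's JDBC URL property could look like this (the parameter name and URLs are made-up placeholders):

// BIRT property binding expression for the data source URL
params["p_environment"].value == "qa"
    ? "jdbc:mysql://qa-db:3306/reports"
    : "jdbc:mysql://prod-db:3306/reports"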
You mention that you are looking for an option for development, including the possibility of a local iServer. I think this would be overkill. Do your dev and initial testing in BIRT; you do not need an iServer to run the report. If you need resources on the iServer to run and test the report, you can reference those through the Server Explorer in BIRT Pro. Once you are ready to deploy, I would follow Mark's strategy above, using property bindings on the data source itself. That is as close to a best practice as exists in BIRT for this migration requirement.

Running UI automation tests on build server

We use UI Automation and NUnit to create UI tests for a WPF application.
The tests work fine when you run them from a local machine, but they never run successfully on our build server (using TeamCity): the build always hangs after the application window opens. However, if I am logged in to the build server (via remote desktop), all UI Automation tests run successfully.
So I am guessing it has something to do with having an active Windows session. Any ideas how to convince our build server to create an active Windows session, or any other solutions for making these tests run on the build server?
You don't have many options. I will list the two I know, the most preferred option first:
Set up a virtual machine on your build server and have your builds execute in the virtual machine. You can lock the host (i.e. your build server), keeping things secure.
Keep someone logged on all the time. This of course creates a security problem. You can alleviate it a little by removing the mouse, keyboard and screen, and only accessing the build server through RDP or something similar.
Edit
Take a look at this TestComplete FAQ item: Can TestComplete execute scripts when the computer is locked?
OK, I'm just guessing here.
Try and run the TeamCity service with a local build server user instead of the system account.
Maybe you have to log in with that account once before starting a new build.
It definitely sounds like you need to run your tests in an interactive session rather than as a service. Adding the "Allow service to interact with desktop" option might help, but apparently this is no longer supported in Vista.
If you can run your builds interactively from a command line, rather than as a service, that should work too.
We used to run our UI Automation tests using the Visual Studio 2008 load agent to distribute them, running as a command-line tool on VMs, with no problem.
I also agree that you probably shouldn't be running UI tests on a build server as part of your daily build.
"Build always hang after opening application window."
Tests that instantiate the UI? That's not going to work, e.g. if you get a modal dialog the build will hang. This is the reason the MVP pattern was invented, to isolate the active presentation code from a concrete view.
Are you using a mock view in your automated tests?
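To illustrate the idea, a minimal MVP sketch (the names are made up): the presenter depends only on an interface, so a test can pass in a mock view and exercise the presentation logic without ever opening a window.

// The presenter sees only this interface, never a WPF window
public interface ILoginView
{
    string UserName { get; }
    void ShowError(string message);
}

public class LoginPresenter
{
    private readonly ILoginView view;

    public LoginPresenter(ILoginView view) { this.view = view; }

    // All presentation logic lives here, testable without any UI session
    public bool Submit()
    {
        if (string.IsNullOrEmpty(view.UserName))
        {
            view.ShowError("User name is required.");
            return false;
        }
        return true;
    }
}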
