Setting up a Selenium automation framework to run tests depending on cloud settings - selenium-webdriver

We have about 200 products sharing the same codebase. Apps are identified by AppID, and the content, features, and feature configs are controlled via a CMS. For example, Feature 1 is enabled for AppID=1 and AppID=18 but not for AppID=53. Feature 1 can also behave differently in AppID=1 and AppID=18 (again, depending on the configs received from the CMS).
We have automated tests based on AppID=1 and run them in a CI environment. The problem is that whenever we make a change, e.g. for AppID=1, it affects the other apps as well, so we want to test all apps to make sure nothing is broken.
We can get the config from the CMS for each AppID, but we don't know how we should proceed with it.
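To illustrate, here is a minimal sketch of the kind of parameterization we have in mind (hypothetical names such as fetchCmsConfig; assuming Mocha with selenium-webdriver and a JSON feature config per AppID):

const { Builder } = require('selenium-webdriver');

// Hypothetical sketch: run the same suite once per AppID, skipping
// feature specs that the CMS config disables for that app.
const appIds = [1, 18, 53]; // in practice, the full list of ~200 apps

appIds.forEach((appId) => {
  describe(`App ${appId}`, function () {
    let driver;
    let config;

    before(async function () {
      config = await fetchCmsConfig(appId); // hypothetical CMS client
      driver = await new Builder().forBrowser('chrome').build();
    });

    after(function () {
      return driver.quit();
    });

    it('Feature 1 behaves according to its config', async function () {
      if (!config.features.feature1.enabled) {
        this.skip(); // Mocha: skip apps where the feature is off
      }
      // ...drive the feature and assert against config.features.feature1...
    });
  });
});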
Can someone share their experience or a solution, if any?

Related

Load testing a Google App Engine Application using JMeter

I've created an application and I'd like to test how well it scales to large numbers of users.
To run my application a user has to go to the homepage, sign in to a Google account, click a button and then upload a video file.
First of all, is this possible to emulate using JMeter? I'm signed in to my Google account locally, but I'm not sure whether simulated users will have access to it.
Secondly, I've recorded a session in JMeter performing the actions above and have run the test with 10 simulated users; however, the App Engine dashboard doesn't detect any activity. I've followed the steps mentioned here, but obviously with the details of my application, etc.
Here's a screenshot of the summary report.
Is there anything obvious I might be doing wrong? Am I using JMeter in the correct way to test the application as desired?
Apologies for my JMeter inexperience.
This is not something you will be able to simply record and replay; my expectation is that your application is protected by OAuth, so you will need a token in order to execute your calls.
Not knowing the details of your application's implementation, it's quite hard to guess what went wrong. I would recommend:
Running your test with 1 user and 1 loop first, to ensure that it's doing what it is supposed to do, by adding a View Results Tree listener and inspecting the request and response details for each sampler (especially the failed ones).
Once you figure out what's wrong with a particular request, amend the JMeter configuration so that it succeeds. Repeat until you're happy with the test end-to-end.
Add load only after that, and be careful, as the test might be sensitive to extra users/loops, especially if you're using a single login account (which is not recommended).
References:
How to Handle Correlation in JMeter
How to Run Performance Tests on OAuth Secured Apps with JMeter

How to specify a different API URL in an Azure deployment vs running locally?

So my setup is like this.
I have a solution with two projects. The first project is an ASP.NET WebAPI project that represents a REST API. It is completely view-less and returns only JSON responses for the API calls.
The second project is an AngularJS client. I started by creating an empty Web app in Visual Studio. So this project does have a Web.Config and an Azure publish profile but no C# controllers, routes, app_start, etc. It is all JavaScript and HTML.
The two projects are deployed as two independent Web Apps in Azure. Project_API and Project_Web.
My question is: in my Angular app, in the service responsible for communicating with the REST API, how do I gracefully detect or set the URL depending on whether I am deployed in Azure or running locally?
// Use this api URL when running locally
var BaseURL = 'http://localhost:15774/api/games/';
// Use this api URL when deployed to Azure
// var BaseURL = 'http://Project_API.azurewebsites.net/api/games/';
It is similar to how, inside the Project_API project, I can set a different connection string for my local vs production database. That part I understand, because the C# code can read the database connection string from Web.Config, and I can override that value in the Azure application settings for the deployed app. I just don't know the right way to do something similar for a JavaScript client web app.
The actual solution I went with was to create an ASP.NET 5 project, a project type which has built-in support for the gulp task runner. It still feels a little unnatural to use Visual Studio to develop an AngularJS client in the first place, but at least this brings it closer to a common front-end web development workflow, with the task runner support.
The other suggested solution I am sure works as well. It just seems to me that if you choose to:
Separate your REST API and client front-end into independent projects, rather than serving both your client and server from a single project, and
Write your front-end client as an Angular SPA,
then it would be undesirable to have to use C# and Razor in the Angular client. That might be common in traditional ASP.NET development, but it is not common or standard in most Angular client development. Using task runners for Angular clients seems closer to the general practice. However, as the previous solution points out, Visual Studio support for this is brand new at this time.
The rest of the details of my solution:
Pull a new module/dependency into gulp: gulp-ng-constant.
Create app/config.json:
{
  "development": { "ApiEndpoint": "http://localhost:15774/api/games/" },
  "production": { "ApiEndpoint": "http://myapp.azurewebsites.net/api/games/" }
}
Setup new gulp task: "gulp config"
Set this task to call the ngConstant function that comes with gulp-ng-constant. Have it load the development settings from the config file:
var myConfig = require(paths.webroot + 'js/app/config.json');
var envConfig = myConfig["development"];
Follow the gulp-ng-constant documentation to set any options, and specify the name of the Angular module you want the constants to be registered in.
Bind this new task to the After Build event in Task Runner Explorer so it runs after every build.
Set up another gulp task: gulp production:config.
Make it exactly the same as the gulp config task, except loading myConfig["production"].
Don't bind it to the After Build event; instead, add it to the prepublish tasks in your project.json file:
"prepublish": [ "npm install", "bower install", "gulp clean", "gulp production:config", "gulp min" ]
Now, whenever you build and/or publish, the gulp task will automatically generate the file /app/ngConstants.js. If you set the tasks up correctly, the file will contain the Angular code to register the constants with the correct module:
angular.module("game", [])
.constant("ApiEndpoint", "http://localhost:15774/api/games/")
The only thing I don't really like about this solution is that there is no obvious way to tell in gulp whether a build is "Debug" or "Release". Reading some forums, it sounds like the VS team is aware of this issue and is planning to fix it in the future; some method of exposing the build configuration to the task runner is needed. In my solution, the "development" constants are written on every build and then overwritten with the "production" values on publish. This works for this API-endpoint case, but other constants might have different requirements and need the Release vs Debug configuration, or you would be forced to run the release tasks by hand, which might be acceptable depending on how often you run a release build locally.
In your case, you should have a .cshtml file which provides more information on the page. You will need MVC if you're intending to deploy this with IIS; otherwise, your options would be different with something like Node.
Whether you read that information from a registry value, environment variable, database, web config, or whatever is up to you.
At the end of the day, you will have something that sets that value, which you generate in the cshtml with Razor:
<script>window.ENDPOINT = '@someEndpoint';</script>
And then you can either just read that off the window in your JavaScript, or you can make a constant in your app and use it that way:
app.constant('myAppGlobal', window.ENDPOINT || {});
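A minimal sketch of consuming that constant from a service, rather than touching window directly (hypothetical service and route names):

app.factory('gamesApi', ['$http', 'myAppGlobal', function ($http, myAppGlobal) {
  return {
    list: function () {
      // myAppGlobal holds the endpoint emitted by the cshtml
      return $http.get(myAppGlobal + 'games/');
    }
  };
}]);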

Orchestrating Non-JavaScript Backend from Protractor Test

I am writing an application which has a LAMP backend, which provides a RESTful API. This API is consumed by an AngularJS front-end.
I have unit and integration tests for the API / GUI in isolation. Now I am expanding my test horizon to encompass a full front-to-back test suite.
I've been playing around with Protractor, which has been extremely useful for the browser-based inputs for my front-to-back tests.
However, I'm struggling to see how I can integrate this with the orchestration of the PHP backend.
Within the PHP integration tests, I have a set of utility classes that populate the system (i.e. the database) with a pre-determined universe. For example, there is a pre-created user called "Bob" and an administrator called "Alice".
Between each integration test case, the entire universe is reset to this base state, so that any interactions with Bob and Alice in the test are undone.
This is one example of the orchestration I would like to manage from Protractor. However, I cannot see an easy pattern to integrate with the PHP code which constructs the test universe.
After the initial population of the world, there are other orchestration tasks I would like to be able to execute to get the system into the desired target state; e.g. locking Bob's user account.
Without accessing the MySQL DB directly from the Protractor code, how can I reuse my PHP logic and invoke the business logic from Protractor?
I could offer RESTful orchestration URLs. But this seems dangerous, as it would pose a serious risk if ever released to production.
There are a number of suggestions on other Stack Overflow questions (e.g. this one) about using mocks. These seem to completely miss the mark, given that Protractor is designed for front-to-back tests.
Business Logic
We have done that, but in a different way. We wrote the business logic into our API service code so that it checks which kind of environment you are trying to reset. If your target environment is your local codebase or a test environment, then it offers the REST /reset APIs:
if (environment === 'qa' || environment === 'local') {
    // Allow the reset service
} else {
    // No reset service
}
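On the Protractor side, the suite can then hit that endpoint between test cases. A minimal sketch using Node's built-in http module (hypothetical host, port, and route):

var http = require('http');

// Hypothetical helper: POST to the test-only reset route exposed by the
// PHP backend in qa/local environments, failing the test run on errors.
function resetUniverse(done) {
  http.request({
    method: 'POST',
    host: 'localhost',
    port: 8080,
    path: '/api/test/reset'
  }, function (res) {
    res.resume(); // drain the response body
    done(res.statusCode === 200 ? null : new Error('reset failed: ' + res.statusCode));
  }).on('error', done).end();
}

beforeEach(resetUniverse);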

How to work with authentication in local Google App Engine tests written in Go?

I'm building a webapp in Go that requires authentication. I'd like to run local tests using appengine/aetest that validate the authentication behavior. However, I do not see any way to create an aetest.Context with a dummy user. Am I missing something?
I had a similar issue with the Python SDK. The gist of the solution is to bypass authentication when tests run locally.
You should have access to the [web] app object at test setup time; create a user object and save it into the app (or wherever your get_current_user() method will check).
This will let you unit-test all application functions except authentication itself. For the latter part, you can deploy your latest changes as an unpublished App Engine version, then test authentication, and if all works, publish the version.
I've discovered some header values that seem to do the trick. appengine/user/user_dev.go has the following:
X-AppEngine-Internal-User-Email
X-AppEngine-Internal-User-Federated-Identity
X-AppEngine-Internal-User-Federated-Provider
X-AppEngine-Internal-User-Id
X-AppEngine-Internal-User-Is-Admin
If I set those headers on the Context's Request when doing in-process tests, things seem to work as expected. If I set the headers on a request that I create separately, things are less successful, since the 'user.Current()' call consults the Context's Request.
These headers might work in a Python environment as well.

Unit Testing the Server Interface for a Silverlight-Facebook Application

I have a Silverlight 4 client running on a Facebook page hosted on Google App Engine. It's using gminifb to communicate with the Facebook API. The Silverlight client uses POST calls to the URIs for each method and passes the session information from Facebook with each call.
The project's growing, and it would be super useful if I could set up a unit-testing system that makes a variety of the server calls, so that when I make changes I can ensure everything else still works. I've worked with NUnit before, and I like what I've read of Pex, but I'm not sure how to apply them to this situation.
What are the choices for creating a test system for this? What are the pros and cons of each?
How do I get started setting something like this up?
Solved. I did it as follows:
Created a special user account to be used for testing on the server, bypassing authentication. I only did this in the test environment, by checking a debug flag in that environment's settings. This avoided creating any security hole in the live site (since the same debug flag will be false there).
Created a C#/.NET solution to test each API call. The host project is a console app (no need for a GUI) with three reusable synchronous methods:
SendFormRequest(WebRequest request, Dictionary<string,string> pairs),
GetJsonFromResponse(HttpWebResponse response),
and ResetAccount().
These three methods allow the host project to make HTTP requests against the server and to read the JSON responses.
Wrapped each server API call inside a method call in the host project.
Created an NUnit test project in the solution. Then I simply created tests that call each wrapper method in the host project, using different parameters and changing values on the server.
Created a series of tests to verify correct error handling for invalid parameters and data.
It's working perfectly and has already identified a few minor issues. The result is immensely useful and will catch breaking changes on new deployments.
