I've created an application and I'd like to test how well it scales to large numbers of users.
To run my application a user has to go to the homepage, sign in to a Google account, click a button and then upload a video file.
First of all, is it possible to emulate this using JMeter? I'm signed into my Google account locally, but I'm not sure whether simulated users will have access to it.
Secondly, I've recorded a session in JMeter doing the actions above and have run the test with 10 simulated users; however, the App Engine dashboard doesn't detect any activity. I've followed the steps mentioned here, but obviously with the details of my own application etc.
Here's a screenshot of the summary report.
Is there anything obvious I might be doing wrong? Am I using JMeter in the correct way to test the application as desired?
Apologies for my JMeter inexperience.
This is not something you will be able to simply record and replay: my expectation is that your application is protected by OAuth, so you will need a token in order to execute your calls.
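In practice that usually means obtaining an access token outside the recorded flow and attaching it to every sampler, for example via an HTTP Header Manager entry along the lines of (the ${access_token} variable name is an assumption; how you acquire and refresh the token depends on your OAuth setup):

    Authorization: Bearer ${access_token}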
Not knowing the details of your application's implementation, it's quite hard to guess what went wrong, so I would recommend:
Running your test with 1 user and 1 loop first to ensure it's doing what it is supposed to do: add a View Results Tree listener and inspect the request and response details for each sampler (especially the failed ones).
Once you figure out what's wrong with a particular request, amend the JMeter configuration so it succeeds. Repeat until you're happy with the test end-to-end.
Add load only after that, and be careful, as the test might be sensitive to extra users/loops, especially if you're using a single login account (which is not recommended).
References:
How to Handle Correlation in JMeter
How to Run Performance Tests on OAuth Secured Apps with JMeter
When I have recorded, the requests are not visible
As per the JMeter project main page:
JMeter is not a browser, it works at protocol level. As far as web-services and remote services are concerned, JMeter looks like a browser (or rather, multiple browsers); however JMeter does not perform all the actions supported by browsers. In particular, JMeter does not execute the Javascript found in HTML pages. Nor does it render the HTML pages as a browser does (it's possible to view the response as HTML etc., but the timings are not included in any samples, and only one sample in one thread is ever displayed at a time).
Given the above:
JMeter won't execute any JavaScript, hence it won't generate any traffic connected with AJAX requests
If a JavaScript call doesn't generate an HTTP request, you don't need to worry about it, as it runs only on the client side
If JMeter doesn't record anything, first of all check the jmeter.log file for any suspicious entries. The most common reasons are:
people forget to import JMeter's certificate into their browser; see the HTTPS recording and certificates chapter of the HTTP(S) Test Script Recorder user manual entry for more details
people fail to configure the browser properly, e.g. Firefox cannot record local traffic unless you set the network.proxy.allow_hijacking_localhost property to true (a user.js sketch follows below)
Also be aware of an alternative way of recording a JMeter test: the JMeter Chrome Extension. In this case you don't need to worry about proxies and certificates: just follow your test scenario steps in your browser, and at the end you will be able to export the recorded script in the form of a JMeter .jmx test plan.
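For the Firefox property mentioned above, one way to persist it (an assumption on my side; you can equally toggle it in about:config) is a user.js file in the Firefox profile used for recording:

    // user.js in the Firefox profile used for recording
    user_pref("network.proxy.allow_hijacking_localhost", true);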
I have JMeter and the WebDriver plugin (Chrome, Firefox, PhantomJS, ...).
The problem is that when I launch the scenario with multiple threads, the headless browsers (Chrome, PhantomJS) open and log in for the first thread, but all other threads don't log in, because we are already connected to the application (the aim is to have several users on the application at the same time). I don't know how to isolate sessions like Firefox does (the problem with Firefox is that it is not headless and only version 45 works).
I have tried a Recording Controller via the proxy and recording in the WorkBench, but when I relaunch the test the requests don't go well (asynchronous). There is an explanation that says "use a Transaction Controller", fine, but how? I don't want to go to the BlazeMeter website; I want to make it work locally. Has anyone made this work? Does nobody stress test AngularJS applications?
I prefer the second solution: call the browser via JMeter and test AJAX via HTTP requests, but I don't know how that works.
Any ideas?
Depending on how many users you need:
You can parameterize your test so that different JMeter threads (virtual users) use different credentials to log into the application from different browsers, e.g. via the CSV Data Set Config (see the sketch after this list). All browsers kicked off by the WebDriver Sampler should be isolated from each other, and given that you use different credentials you should be good to go. But it will only work for several users, as per the WebDriver Sampler 10 Minute Guide:
However, for the Web Driver use case, the reader should be prudent in the number of threads they will create as each thread will have a single browser instance associated with it. Each browser consumes a significant amount of resources, and a limit should be placed on how many browsers the reader should create.
If you go the HTTP Requests way, the easiest option to mimic AJAX calls would be putting them under a Parallel Controller, so your test would look like:
Transaction Controller
    Main Request
    Parallel Controller
        AJAX request 1
        AJAX request 2
        etc.
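For the parameterization mentioned in the first option, a minimal sketch: assume a users.csv file next to the test plan (the file name, column layout and credentials are made up for illustration):

    jane@example.com,secret1
    john@example.com,secret2

Point a CSV Data Set Config at the file, set Variable Names to email,password, and reference ${email} and ${password} in the login step (whether that is an HTTP Request or a WebDriver Sampler script). With the default sharing mode each thread picks up its own line, so every virtual user logs in with different credentials.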
Strangely, I made a simple configuration and it works. My AngularJS application is embedded in a WAR, but I don't know if that makes a difference. The structure is like this:
Test Plan
    Thread Group
        HTTP Cookie Manager
        HTTP Header Manager
        HTTP Request Defaults
        Recording Controller
I recorded the scenario and simply played it back (I assume the login happens in the right order). It is HTML pages; I don't see the JS because the application runs on the application server.
I am developing an application and decided on Nagios3 for monitoring. But I am stuck on two points. I am using the check_http plug-in for monitoring the load on my service API. Now I want to perform the tasks below.
I need to set a threshold in check_http so that some task is performed after crossing that threshold. I tried the command below:
'check_command check_nrpe_1arg!check_service_api'
but it only tells me the load; no threshold is set. Meanwhile, the one below doesn't work:
'check_command check_service_api!100!200'
I need to send a simple text message to some port (my application).
I am new to Nagios, so please help me figure out a solution other than email notifications.
There is a check command that you can download called "notify_sms" that integrates with an API server hosted by a company called Esendex. They charge for their service but it works well.
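On the threshold side: check_http itself has -w/--warning and -c/--critical options that take a response time in seconds, and the values after each ! in a check_command are passed into the command definition as $ARG1$, $ARG2$ and so on. A rough sketch, assuming check_http can be run directly from the Nagios server (the command name, host name, URI and the 2/5 second thresholds are assumptions for illustration):

    define command{
        command_name    check_service_api
        command_line    $USER1$/check_http -H $HOSTADDRESS$ -u /api -w $ARG1$ -c $ARG2$
        }

    define service{
        use                     generic-service
        host_name               api-server
        service_description     Service API response time
        check_command           check_service_api!2!5
        }

With that in place, check_service_api!2!5 goes WARNING when the response takes longer than 2 seconds and CRITICAL after 5; without $ARG1$/$ARG2$ placeholders in the command definition, the !100!200 arguments are simply ignored. If the check has to run through NRPE (as with check_nrpe_1arg), the same idea applies, but the thresholds go on the check_http command line defined in the remote host's nrpe.cfg.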
I'm trying to migrate a web app from Google App Engine to a dedicated server and I've got stuck on the logging issue. Basically, I would like to organise the logs per request/context (like on GAE) so that I can easily review the errors/trace for each request. The most advanced logging library I could find is the glog package, but I still can't figure out how to log per request/context.
Each request gives you an http.Request object to work with.
If you're using sessions, then you'll also have a sessions.Session object to work with.
You will want to use those objects to help log per request/context, as they identify the request/session.
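As far as I know glog doesn't give you a request scope by itself, but since every handler receives the *http.Request you can build one. Below is a minimal sketch using only the standard library (the standard log package instead of glog, the random request ID, and the wrapper signature are all assumptions made to keep the example self-contained): each request gets an ID, and a small logger prefixes every line with it so all output for one request can be grouped afterwards.

    package main

    import (
        "fmt"
        "log"
        "math/rand"
        "net/http"
    )

    // requestLogger prefixes every line with a per-request ID so all log
    // output belonging to one request can be grouped together later.
    type requestLogger struct {
        id string
    }

    func (l *requestLogger) Printf(format string, args ...interface{}) {
        log.Printf("[req %s] "+format, append([]interface{}{l.id}, args...)...)
    }

    // withRequestLog wraps a handler and hands it a fresh logger per request.
    func withRequestLog(h func(http.ResponseWriter, *http.Request, *requestLogger)) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            rl := &requestLogger{id: fmt.Sprintf("%08x", rand.Uint32())}
            rl.Printf("start %s %s", r.Method, r.URL.Path)
            h(w, r, rl)
            rl.Printf("done")
        }
    }

    func main() {
        http.HandleFunc("/", withRequestLog(func(w http.ResponseWriter, r *http.Request, rl *requestLogger) {
            rl.Printf("handling request")
            fmt.Fprintln(w, "ok")
        }))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

In a real application you would typically put the request ID (or the logger itself) into the request's context.Context so it travels down through lower layers, which gets you close to the per-request grouping App Engine gives you for free.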
I have a Silverlight 4 client running on a Facebook page hosted on Google App Engine. It's using gminifb to communicate with the Facebook API. The Silverlight client uses POST calls to the URIs for each method and passes the session information from Facebook with each call.
The project's growing and it would be super-useful if I could set up a unit-testing system to make a variety of the server calls so that when I make changes I can ensure everything else still works. I've worked with nUnit before and I like what I've read of PEX but I'm not sure how to apply them to this situation.
What're the choices for creating a test system for this? Pros/cons of each?
How do I get started setting something like this up?
Solved. I did it as follows:
Created a special user account to be used for testing on the server, which bypassed authentication. I only did this in the test environment, by checking a debug flag in that environment's settings. This avoided creating any security hole in the live site (since the same debug flag will be false there).
Created a C#.NET solution to test each API call. The host project is a console app (no need for a GUI) with three reusable synchronous methods:
SendFormRequest(WebRequest request, Dictionary<string,string> pairs),
GetJsonFromResponse(HttpWebResponse response),
and ResetAccount().
These three methods allow the host project to make HTTP requests on the server and to read the JSON responses.
Wrapped each server API call inside a method call in the host project.
Created an nUnit test project in the solution. Then simply created tests that call each wrapper method in the host project, using different parameters and changing values on the server.
Created a series of tests to verify correct error handling for invalid parameters and data.
It's working perfectly and has already identified a few minor issues. The result is immensely useful and will catch breaking changes on new deployments.