Do JMeter scripts actually create records in the database?

Let's say I run a recorded script for the 'New User Registration' function of a web site to evaluate the response time for the entire scenario. When I run the recorded script from JMeter, is a new user record created in the application database for each registration?

Yes. If you record the registration and correlate it (meaning you generate a valid, unique name for every request), you will create a real user in your environment.
JMeter simulates a real scenario, which affects your environment.
That is part of the reason JMeter is usually executed in an environment other than production, such as staging.
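For example (a sketch; "username" stands in for whatever parameter your registration form actually posts), JMeter's built-in __UUID function is one way to make each request unique:

username=user_${__UUID()}

Each sampler evaluation produces a fresh UUID, so every virtual user registers under a different name instead of colliding on the recorded value.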

A well-behaved JMeter script must represent a real user using a real browser as closely as possible.
Browsers execute HTTP requests and render the response.
JMeter executes the same HTTP requests but doesn't render the response; instead it records performance metrics like response time, connect time, latency, throughput, etc.
HTTP is a stateless protocol, so executing the same request yields the same kind of response. Given that, if there are no mistakes in your script, it should either create a new user or fail with a non-unique-username error.

Yes. If your script accurately represents the full set of data flows associated with the 'New User Registration' business process, then the end state of that process should be identical to that of the user behavior being modeled.
A record will be created in the database. If not, then your script does not accurately reproduce the user's behavior.

Related

Azure Logic App to grab email attachment is slow to trigger

I have an Azure Logic App that monitors my emails and, when a target is found, drops the attachment into Blob storage. The plan is a Consumption plan.
The issue is that it sometimes takes up to 50 minutes for the email to be grabbed and dropped. I know there is a startup delay when things go idle, but I was reading about seconds or minutes, not close to an hour. Does anyone know how I can troubleshoot this?
"sometimes it takes up to 50 minutes to grab and drop the email"
Based on this doc, the reason for the delay is:
When the trigger encounters a new file, it tries to ensure that the new file has been completely written. For instance, the file may still be being written or modified, with updates being made at the time the trigger polled the file server. To avoid returning a file with partial content, the trigger takes note of the timestamp of recently modified files but does not return them immediately; they are returned only when the trigger polls again. Sometimes this may lead to a delay of up to twice the trigger polling interval. This also means that the trigger does not guarantee to return all files in a single run when the "Split On" option is disabled.
As a worked example: with a polling interval of, say, 15 minutes, an item can show up roughly 30 minutes late in the worst case, and cold-start latency on a Consumption plan is added on top of that.
For more information, you can refer to:
Automate tasks to process emails by using Azure Logic Apps | MS DOC
How to send an email with one or more attachments after getting the content from Blob storage? | SO Thread, and a Logic App created with Add email attachments in Blob storage

Programmatically listing and sending requests to dynamic App Engine instances

I want to send a particular HTTP request (or otherwise communicate a message) to every (dynamic/autoscaled) instance which is currently running for a particular App Engine application.
My goal is to trigger each instance to discard some locally cached data (because I have just modified the underlying data and want them to reload it).
One possible solution is to store a value in Memcache, and have instances check this each time they handle a request to see if they should flush their cache. But this adds latency to every request.
Another possible solution would be to somehow stop all running instances. No fixed overhead, but some impact while instances are restarted.
An even less desirable solution would be to redeploy the application code in order to stop all instances. This adds further delay on my end, since a deployment takes some time.
You could use the management API to list the instances for a given version, but you'd probably want to use something like the Pub/Sub API to create a subscription on each of your App Engine instances. Since each instance has its own subscription, any message sent to the monitored topic will be received by all instances (sketched below).
You can create the subscription at startup (the /_ah/start endpoint may be useful) and delete it at shutdown (using the /_ah/stop endpoint).
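Here is a minimal sketch of that idea in Java, assuming the google-cloud-pubsub client library, a pre-created topic named cache-invalidation, and the GAE_INSTANCE environment variable; exact class names vary between client-library versions, and the cache itself is just a stand-in map:

import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.cloud.pubsub.v1.SubscriptionAdminClient;
import com.google.pubsub.v1.ProjectSubscriptionName;
import com.google.pubsub.v1.ProjectTopicName;
import com.google.pubsub.v1.PushConfig;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheInvalidationListener {

    // Stand-in for the locally cached data the question is about.
    private static final Map<String, Object> CACHE = new ConcurrentHashMap<>();

    private Subscriber subscriber;

    // Call from a handler mapped to /_ah/start.
    public void start(String projectId) throws Exception {
        String instanceId = System.getenv("GAE_INSTANCE"); // unique per instance
        ProjectTopicName topic = ProjectTopicName.of(projectId, "cache-invalidation");
        ProjectSubscriptionName sub =
                ProjectSubscriptionName.of(projectId, "cache-invalidation-" + instanceId);

        // One subscription per instance, so a single published message
        // fans out to every running instance.
        try (SubscriptionAdminClient admin = SubscriptionAdminClient.create()) {
            admin.createSubscription(sub, topic, PushConfig.getDefaultInstance(), 10);
        }

        // Pull messages in the background and flush the local cache on arrival.
        MessageReceiver receiver = (message, consumer) -> {
            CACHE.clear();
            consumer.ack();
        };
        subscriber = Subscriber.newBuilder(sub, receiver).build();
        subscriber.startAsync().awaitRunning();
    }

    // Call from a handler mapped to /_ah/stop.
    public void stop(String projectId) throws Exception {
        if (subscriber != null) {
            subscriber.stopAsync().awaitTerminated();
        }
        try (SubscriptionAdminClient admin = SubscriptionAdminClient.create()) {
            admin.deleteSubscription(ProjectSubscriptionName.of(
                    projectId, "cache-invalidation-" + System.getenv("GAE_INSTANCE")));
        }
    }
}

Publishing one message to the cache-invalidation topic after you modify the underlying data then triggers a reload on every instance without adding any work to the request path.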

Best approach for real time process information / Server + JS Client

I have a C# Web API project on the server side, and on the front end I have ExtJS 4.2.1 (a JavaScript client framework).
There is a section in my app where I start a long-running process (about 5 minutes), and I want to show the user the status of the process while it executes.
Basically, the process runs a special calculation for every employee in the database (about 800), so I want to let the user know which employee is being processed at any given moment.
I was thinking of two ways of doing this, and I don't know if having both is OK:
Use SignalR to show the information about the process in real time.
Write the whole process log to a database table (every employee that is being processed).
With the first approach, if the user closes the browser he will lose all the information about the process, and if he logs into the app again he will only see the current status.
With the second approach, if he logs into the app again he can see all the information, and with a timer on the client side the data could be refreshed every 5 seconds.
Has anyone implemented something like this? Any advice is appreciated.
You should use a combination of the two. When you have finished calculating an employee, save the state to the database and publish the change on a service bus.
Let SignalR pick these messages up and forward them to the clients. This way the user sees the old state when he connects and the new state as updates arrive over SignalR. I have created an event aggregator proxy that makes this very easy.
https://github.com/AndersMalmgren/SignalR.EventAggregatorProxy/wiki
Follow the wiki to set it up; here is a demo project:
https://github.com/AndersMalmgren/SignalR.EventAggregatorProxy/tree/master/SignalR.EventAggregatorProxy.Demo.MVC4
Live demo
http://malmgrens.org/Signalr/
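The pattern itself is small. Here is a hedged, framework-neutral sketch (in Java for brevity; ProgressRepository and EventBus are hypothetical stand-ins for the database table and the SignalR/service-bus publisher described above):

import java.util.List;

// Durable log of processed employees; replayed when a client (re)connects.
interface ProgressRepository {
    void save(String employeeId, String status);
    List<String> history();
}

// Live push channel; in the asker's stack this would be the SignalR hub.
interface EventBus {
    void publish(String event);
}

class CalculationJob {
    private final ProgressRepository repo;
    private final EventBus bus;

    CalculationJob(ProgressRepository repo, EventBus bus) {
        this.repo = repo;
        this.bus = bus;
    }

    void run(List<String> employeeIds) {
        for (String id : employeeIds) {
            calculate(id);
            repo.save(id, "DONE");         // 1. survives a closed browser
            bus.publish("employee:" + id); // 2. reaches currently open clients
        }
    }

    private void calculate(String id) { /* the long-running per-employee work */ }
}

On reconnect the client first loads repo.history() to rebuild the full picture, then subscribes for new events, so no progress information is ever lost.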

App Engine, Excel files and the 30-second request limit

How can I upload, parse, and download Excel files in Google App Engine when the work requires more than 30 seconds? I use Java POI and backend tasks, but once the backend finishes the job I cannot notify the client, and I cannot download the Excel file that was created by the backend task... Any suggestions would be much appreciated.
The best approach here is not to fight HTTP and a web service architecture but rather to work with it.
Introduce the notion of a job ID. When your client uploads a file, immediately return a token that represents that job. For extra credit, include an estimated duration for the job; for starters, let's say it's 2 minutes.
The client is then responsible for querying the server for the state of that job using the token. The server either returns the answer, or it returns the token with an updated ETA.
For starters, you could just always tell the client to check back in 2 minutes (or whatever constant makes the most sense for your workload). As your server-side processing becomes smarter, you can give more accurate estimates and decrease the busy-waiting the client does. A sketch of this pattern follows.
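Here is a minimal sketch of the job-token idea, assuming the plain Servlet API and an in-memory registry (in a real App Engine app the status would live in the datastore or Memcache so it survives across instances; the endpoint layout and the ETA value are made up):

import java.io.IOException;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class JobServlet extends HttpServlet {

    // jobId -> status ("RUNNING", "DONE", or eventually a download URL).
    private static final Map<String, String> JOBS = new ConcurrentHashMap<>();

    // POST /job: accept the upload, hand the work to a backend task,
    // and immediately return a token plus a rough ETA.
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String jobId = UUID.randomUUID().toString();
        JOBS.put(jobId, "RUNNING");
        // enqueueBackendTask(jobId); // placeholder: the POI work happens there
        resp.setContentType("application/json");
        resp.getWriter().printf("{\"jobId\":\"%s\",\"etaSeconds\":120}%n", jobId);
    }

    // GET /job?jobId=...: the client polls here until the status flips to DONE.
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String jobId = req.getParameter("jobId");
        String status = (jobId == null) ? "UNKNOWN" : JOBS.getOrDefault(jobId, "UNKNOWN");
        resp.setContentType("application/json");
        resp.getWriter().printf("{\"status\":\"%s\"}%n", status);
    }
}

When the backend task finishes writing the generated Excel file (e.g. to the Blobstore or Cloud Storage), it sets the job's status to the file's download URL and the client's next poll picks it up.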

Heavy database load when using CodeIgniter Session class?

After reading about how CodeIgniter handles sessions, I am concerned about the performance impact when sessions are configured to be stored in and retrieved from the database.
This is from the CI documentation: "When session data is available in a database, every time a valid session is found in the user's cookie, a database query is performed to match it."
So every AJAX call, every HTML fragment I request, is going to incur this overhead? That is potentially a huge issue for systems that are trying to scale!
I would have guessed that CI would have implemented it better: include an MD5 hash covering both the session ID and the timestamp when encoding them in the session record, then only check the database for the session record every X minutes, whenever the session ID gets regenerated. Am I missing something?
You can make your AJAX requests use a different controller, for example Ajax_Controller instead of MY_Controller. MY_Controller would load the Session class, but Ajax_Controller wouldn't. That way your AJAX calls never touch session data and therefore don't make any unnecessary database queries.
If you are autoloading the Session class, maybe you can try unloading it for AJAX requests. I've never tried it, but it's discussed here: http://codeigniter.com/forums/viewthread/65191/#320552. You could then do something like this:
if ($this->input->is_ajax_request()) {
    // unload session class code goes here (see the forum thread above)
}
