Comment Box update time - reactjs

I have an app with a comments box. Everything is working fine, but there is one small thing bugging me. I am using React and set the update interval to 2 seconds, so every 2 seconds a REST call is made which returns either a new comment or no comment (I do this by sending the last-updated timestamp in the API call). However, this REST call still returns about 200 B even when empty. On its own that size is minimal, but if a user stays on the page for 10 minutes with no new comments, they would download 10 * 60 / 2 * 200 B ≈ 60,000 B ≈ 60 KB.
Is this considered appropriate, or should I look into other solutions?

I would use a websocket.
You can then poll your comments source for changes on the server, with no need to involve the browser. Only if you detect new comments on the server would you broadcast an appropriate socket event with the payload. All listening clients then update their comments only when required.
This way you avoid the overhead on both sides: the server load caused by creating and destroying HTTP connections, and the client load of receiving 'empty' payloads.
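For illustration, a minimal sketch of that idea using socket.io (an assumption; any websocket library works). The pollForNewComments() helper, the 2-second server-side check and the port are hypothetical stand-ins for your comment source and deployment details.

const { Server } = require('socket.io');
const io = new Server(3000, { cors: { origin: '*' } });

let lastChecked = new Date();

setInterval(async () => {
    // pollForNewComments() is a placeholder for whatever reads your comment source.
    const newComments = await pollForNewComments(lastChecked);
    lastChecked = new Date();
    if (newComments.length > 0) {
        io.emit('comments', newComments);   // broadcast only when there is something new
    }
}, 2000);

// Client side (e.g. inside a React effect):
//   const socket = io('http://localhost:3000');
//   socket.on('comments', (batch) => setComments((prev) => [...prev, ...batch]));

The clients receive nothing at all during quiet periods, which is the whole point compared to the 2-second REST poll.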

Related

JMeter Think Time

Apologies if this request is similar to others - I am new to JMeter and have searched for other relevant posts but couldn't find anything - or maybe I just didn't understand them!
I'm performance testing a system with a web based application. The front end system will be processing records submitted into the system via MQ - the front end allows the user to pick up a record from the queue, validate some detail, make changes and submit the changes.
There will be 20 users using the front end to do this message validation, update and submission.
Each user is expected to need 30 seconds to pick a message from the queue, make changes and resubmit, so we are expecting 1 user to process 120 records/hour, and 20 users to process 2400 records/hour.
Picking the record up off the queue, changing it and submitting the changes will be done via 3 individual web pages.
So think time across the 3 pages has been defined as 24 seconds (leaving 6 seconds of the 30-second limit for rendering, server responses, DB calls, etc.).
However, I don't know how to specify this within JMeter. From my reading I can see that I can add a Timer in as a parent to a sampler, and I assume I can add a Timer in as a parent of the Recording Controller? But I need to be able to specify that the 24-second think time is spread across those 3 different pages.
I read a post elsewhere suggesting that if I record using the proxy after adding a Gaussian Random Timer as a child of the Test Plan (parent to everything else), then the HTTP proxy will record the think time as a ${T} variable in the Gaussian Random Timer. I tried this and it didn't work (and I don't want to rely on it anyway - I'd like to understand and set think time properly rather than relying on JMeter to do it for me).
To reiterate: 20 users, 30 seconds for 1 user to complete a transaction, think time defined as 24 seconds. I am struggling with which Timer to use and where to put it so that the think time is spread across the samplers that equate to the GETs associated with the 3 pages the user will navigate through.
Apologies for the lengthy post - I just wanted to be clear and concise.
Many thanks in advance,
As per the JMeter Timers documentation:
Note that timers are processed before each sampler in the scope in which they are found; if there are several timers in the same scope, all the timers will be processed before each sampler.
Timers are only processed in conjunction with a sampler. A timer which is not in the same scope as a sampler will not be processed at all.
To apply a timer to a single sampler, add the timer as a child element of the sampler. The timer will be applied before the sampler is executed. To apply a timer after a sampler, either add it to the next sampler, or add it as the child of a Test Action Sampler.
Now regarding "what timer to use"
There are 2 scenarios:
Virtual-User-oriented scenario - when you try to simulate N users working together
Goal-Oriented-scenario - when you try to produce N hits per second load.
In case of scenario 1 even Constant Timer can be quite enough, besides it will provide repeatability of results. See above quote for information on where to put your timer(s)
In case of scenario 2 you'll need Constant Throughput Timer. If 20 users process 2400 records per hour and each record assumes 3 web page calls, it means that 7200 requests will be made in one hour which in its turn stands for 120 requests per minute (this is what you should enter into the timer's "throughput" area) or 2 requests per second.
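Written out as a quick check (purely illustrative, in JavaScript-style pseudo-arithmetic):

const recordsPerHour = 20 * 120;                    // 20 users * 120 records/hour = 2400
const requestsPerHour = recordsPerHour * 3;         // 3 page calls per record = 7200
const requestsPerMinute = requestsPerHour / 60;     // 120 - the value to put in the timer
const requestsPerSecond = requestsPerMinute / 60;   // 2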

Extend Store class to always execute a function after load on ExtJS

I am working on a project where we were asked to "patch" (they don't want a lot of time spent on development, as they will soon replace the system) a system implemented in ExtJS 4.1.0.
That system is used over a very slow and unstable network connection, so sometimes the stores don't get the expected data.
The first two things that come to my mind as patches are:
1. Every time a store is loaded for the first time, wait 5 seconds and try again. Most times, a page refresh fixes the problem of stores not loading.
2. Somehow detect that no data was received after loading a store and try to get it again.
These patches should be executed only once, to avoid infinite loops or unnecessary recursion, given that it's OK if stores sometimes don't get any data back.
I don't like this kind of solution, but it was requested by the client.
This link should help with your question.
One of the posters suggests adding the code below in an overrides.js file, which is loaded between the ExtJS source code and your application's code.
Ext.util.Observable.observe(Ext.data.Connection);
Ext.data.Connection.on('requestexception', function (dataconn, response, options) {
    if (response.responseText != null) {
        window.document.body.innerHTML = response.responseText;
    }
});
Using this example, on any error you could log the error details for debugging later and try the load again, instead of echoing the error as the example does. I would suggest adding some additional logic so that it only retries a certain number of times; otherwise it could run indefinitely while the browser window is open, which would more than likely crash the browser and put additional load on your server.
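Here is a minimal sketch of that retry-with-a-limit idea, assuming ExtJS 4.1; attachSingleRetry() is a hypothetical helper you would call for each store you want to guard, and the 5-second delay mirrors patch idea 1 above.

function attachSingleRetry(store) {
    store.on('load', function (theStore, records, successful) {
        var empty = !successful || !records || records.length === 0;
        if (empty && !theStore.hasRetriedOnce) {
            theStore.hasRetriedOnce = true;                    // guard against infinite retries
            Ext.defer(function () { theStore.load(); }, 5000); // wait 5 seconds, then try again
        }
    });
}

// Usage (the 'Users' store id is hypothetical):
// attachSingleRetry(Ext.getStore('Users'));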
Obviously the root cause of the issue is not the code itself but your slow connection. I'd try to address that rather than anything else.

AFNetworking url request every 20 sec

I want to download some information from a URL every 20 seconds and update the view based on that info (2-3 labels change their text values). I'm using AFNetworking for making request operations in my app.
Should I use an NSTimer and have it call a method that makes the AFNetworking request every 20 seconds? Or is there a better way to implement this?
Thanks
You can use an NSTimer. There is a repeats parameter in NSTimer's scheduledTimerWithTimeInterval method for making a repeating request.
Alternatively, you can define a method that is called every 20 seconds and, inside it, decide whether to make the request based on some logic (like a boolean indicating whether the previous request was successful). This can be useful if there is a server problem, so you don't keep requesting the server unnecessarily.

Timeout in Ext Direct

I'm using Ext Direct to communicate with the server side. My server side takes more than 45 seconds to return all the data to ExtJS. I can see in the network tab (in the Chrome browser) that my request was cancelled because the operation took more than 30 seconds.
Where can I override this setting?
Is it possible?
I understand that in Leo's answer he suggests editing the ExtJS code directly. I don't think this is good practice, all the more so as the parameter exists in the REMOTING_API:
Ext.app.REMOTING_API = {
    "url": "/usermanagement/extdirect/router",
    "actions": {"myService": [{"len": 0, "name": "myMethod"}]},
    "type": "remoting",
    "timeout": 120
};
I'm pretty sure it's a browser thing. It's not ExtJS breaking your connection attempt but the browser itself.
Update: I haven't tried using Ext Direct with huge data. Honestly speaking, you should not force your users to wait that long on a load; it's very bad design. If you have something that takes that long, you need to provide some kind of progress feedback and break the whole communication into smaller pieces.
In your ext-all-debug.js, under
Ext.define('Ext.data.Connection', { timeout: 30000, ...
you can edit the timeout to a higher value; the default is 30 seconds (30000 ms).
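If you'd rather not edit ext-all-debug.js itself, the same default can be changed from application code; a minimal sketch, assuming ExtJS 4.x and that your Ext Direct calls honour Ext.data.Connection's default timeout (the REMOTING_API timeout shown above may also need raising):

// Place this after ext-all(-debug).js is loaded but before the application starts.
Ext.override(Ext.data.Connection, {
    timeout: 120000   // 120 seconds, in milliseconds (the shipped default is 30000)
});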

Is there an elegant way to post messages to AWS SQS with visibility delay of longer than 15 minutes?

In Amazon Web Services, queues allow you to post messages with a visibility delay of up to 15 minutes. What if I don't want messages visible for 6 months?
I'm trying to come up with an elegant solution to the poll/push problem. I can write code to poll SQS (or a database) every few seconds, check for messages that are ready to be visible, then move them to a "visible queue", or something like that. I wish there were a simpler, more reliable way to have messages become visible in queues far into the future without me having to worry about my polling application working perfectly all the time.
I'm not married to AWS, SQS or any of that, but I'd prefer to find a cloud-friendly solution that is stable, reliable and will trigger an event far into the future without me having to worry about checking on its status every day.
Any thoughts or alternate trees for me to explore barking up are welcome.
Thanks!
It sounds like you might be misunderstanding the visibility delay. Its purpose is to make sure that the polling application doesn't pull the same item off the queue more than once.
In other words, when the item is pulled off the queue it becomes invisible for a predetermined period of time (the default is 30 seconds; it can be set as high as 12 hours) in case the polling system has a cluster of machines reading from the queue all at once.
Here's the relevant documentation:
http://docs.amazonwebservices.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/IntroductionArticle.html#AboutVT
...and the sentence in particular that relates to my comment is:
"Immediately after the component receives the message, the message is still in the queue. However, you don't want other components in the system receiving and processing the message again. Therefore, Amazon SQS blocks them with a visibility timeout, which is a period of time during which Amazon SQS prevents other consuming components from receiving and processing that message."
You should be able to use SQS for your purpose since you can leave an item in the queue for as long as you want.
7 years later, and Amazon still doesn't support the feature you need!
The two ways you can sort of get it to work are:
1. Have messages contain a delivery target datetime in their message_attributes, and have the workers that consume the queue's messages simply delete and recreate any message that is consumed before its target, with delay = max(0, min(secs_until_target_datetime, 900)). That would allow you to effectively schedule a message for any arbitrary future time (see the sketch below).
2. (Slightly less frequent and costly:) similarly, if a message isn't due to be handled yet, recreate it and change its visibility timeout to timeout = max(0, min(secs_until_target_datetime, 43200)).
The disadvantage of using visibility timeout is that any read will re-trigger it.
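A minimal sketch of the first approach, using the AWS SDK for JavaScript (v2); the queue URL, the deliverAt message attribute and handleMessage() are hypothetical:

const AWS = require('aws-sdk');
const sqs = new AWS.SQS({ region: 'us-east-1' });
const QueueUrl = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue';

async function poll() {
    const { Messages = [] } = await sqs.receiveMessage({
        QueueUrl,
        MessageAttributeNames: ['deliverAt'],
        WaitTimeSeconds: 20
    }).promise();

    for (const msg of Messages) {
        const target = new Date(msg.MessageAttributes.deliverAt.StringValue);
        const secsUntilTarget = (target - Date.now()) / 1000;

        if (secsUntilTarget <= 0) {
            await handleMessage(msg);                    // due now: actually process it
        } else {
            // Not due yet: re-post a copy with a capped delay, then drop the consumed one.
            await sqs.sendMessage({
                QueueUrl,
                MessageBody: msg.Body,
                MessageAttributes: msg.MessageAttributes,
                DelaySeconds: Math.floor(Math.max(0, Math.min(secsUntilTarget, 900)))
            }).promise();
        }
        await sqs.deleteMessage({ QueueUrl, ReceiptHandle: msg.ReceiptHandle }).promise();
    }
}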
There has been a direct AWS solution available since 2016-12-01: AWS Step Functions.
Each execution can last/idle up to one year, persists the state between transitions, and doesn't cost you any money while it waits.
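For example, a Wait state with a timestamp does exactly this. A minimal sketch of an Amazon States Language definition, written as a JavaScript object here; the state names, the $.deliverAt input field and the Lambda ARN are hypothetical:

const definition = {
    StartAt: 'WaitUntilDue',
    States: {
        WaitUntilDue: {
            Type: 'Wait',
            TimestampPath: '$.deliverAt',   // e.g. "2026-01-01T00:00:00Z", up to a year out
            Next: 'ProcessMessage'
        },
        ProcessMessage: {
            Type: 'Task',
            Resource: 'arn:aws:lambda:us-east-1:123456789012:function:processMessage',
            End: true
        }
    }
};
// JSON.stringify(definition) is what you would pass when creating the state machine.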
