Sending a request every N seconds with libcurl in C

I am trying to figure out how to make a request in C every N seconds. I want it to be asynchronous, meaning new requests are made even if the previous ones have not been answered yet.
I want to achieve this in order to test a server.
Any ideas?
Thank you.

Use the multi interface. Add a new easy handle and start a new request every N seconds and let it take its time. It'll handle "any" number of simultaneous transfers for you. "any" because there's probably a limit on the number of open sockets a process is allowed to use (depending on the environment you want this for).
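For illustration, here is a rough sketch of that pattern. It is written with pycurl (libcurl's Python binding) purely to keep it compact; the calls map directly onto libcurl's curl_multi_* calls in C, and the URL and the 2-second interval are only placeholders:

import time
import pycurl

URL = "http://localhost:8080/"   # assumed address of the server under test
INTERVAL = 2                     # start a new request every N seconds

multi = pycurl.CurlMulti()
last_start = 0.0

while True:
    now = time.time()
    if now - last_start >= INTERVAL:
        easy = pycurl.Curl()
        easy.setopt(pycurl.URL, URL)
        easy.setopt(pycurl.WRITEFUNCTION, len)   # discard the response body
        multi.add_handle(easy)                   # runs alongside the other transfers
        last_start = now

    # drive all transfers without blocking on any single one
    while True:
        status, active = multi.perform()
        if status != pycurl.E_CALL_MULTI_PERFORM:
            break

    multi.select(1.0)   # wait up to a second for socket activity

    # reap finished transfers so handles do not pile up
    _, finished, failed = multi.info_read()
    for easy in finished:
        multi.remove_handle(easy)
        easy.close()
    for easy, errno, errmsg in failed:
        multi.remove_handle(easy)
        easy.close()

The C version is the same loop built from the equivalent curl_multi_* calls: the multi handle keeps every added transfer running concurrently, so a slow response never delays starting the next request.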

Related

Set up Gatling tests to limit by number of requests

So, I've written a few Gatling tests and know how to write test setup for a max duration.
setUp(testScenario.inject(atOnceUsers(3))).maxDuration(5 minutes)
Now, I want to achieve something along these lines:
setUp(testScenario.inject(atOnceUsers(3))).maxRequests(1000 requests)
How should I approach that?
Here, instead of limiting by time, I want to limit my test setup by the number of requests made.
Any assistance is appreciated. Thanks.
In general there is no maxRequests() option. You should think of each injected user as an actual user that independently executes some steps and finishes its work, rather than as a thread that executes steps in a loop. With that approach it is as simple as setting up a suitable injection strategy, e.g. inject(constantUsersPerSec(10) during(100 seconds)). This way you simulate actual user behaviour (real users are independent and do not rely on other users). Of course there may be cases where you want to simulate users that make a lot of requests, but then you should write a scenario that executes a certain number of requests, e.g. with a repeat loop:
val floodingScenario = scenario("Flood").repeat(250) {
  // some execs here
}

setUp(
  floodingScenario.inject(
    atOnceUsers(4) // each user executes the steps 250 times = 1000 executions in total
  )
)

Alexa: How to know where large responses are interrupted?

My skill has some intents which give out very large responses (text). So there is a good chance the user might want to interrupt it and listen to the remaining part of the response later. I want to make the intent continue from where it left off (I guess I will have to use user state management). Is there a way for the backend to know where it was interrupted? Or, even better, is there a way to send the response line by line so that the backend knows exactly which line was read out last?
Currently there is no way to find out where the speech was interrupted, nor can you send the response in multiple pieces line by line. However, you could calculate the time difference between when the response was sent and when the interrupting request was received, and based on that difference roughly determine where it was interrupted. This is not an accurate method, just a hack, and you should keep network latency in mind.
When you send the response, include a "response generated" timestamp in sessionAttributes so that you can use it to work out the time difference.
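As a minimal sketch of that idea, assuming a Python backend that builds the response JSON by hand (the attribute name responseSentAt and the helper names are made up for this example):

import time

def build_response(speech_text, session_attributes):
    # record when this response was generated so the next request can
    # estimate how far Alexa got before the user interrupted
    session_attributes["responseSentAt"] = int(time.time())
    return {
        "version": "1.0",
        "sessionAttributes": session_attributes,
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": False,
        },
    }

def estimate_interrupt_offset(event):
    # rough number of seconds the user listened before interrupting;
    # ignores network latency, so treat it as an approximation only
    attributes = event.get("session", {}).get("attributes") or {}
    sent_at = attributes.get("responseSentAt")
    if sent_at is None:
        return None
    return int(time.time()) - sent_at

From that elapsed time you can guess which part of the text had already been read out and resume from roughly there.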

Is it possible to detect if you are in the last processing attempt of a specific task?

When using push task queues in Google App Engine, I know we can use the "X-AppEngine-TaskRetryCount" and "X-AppEngine-TaskExecutionCount" request header parameters to tell how many times we have tried to process a specific task.
Is it possible to detect if it's the last attempt or not?
A workaround is to pass the max retry count as a parameter in the HTTP request when you add tasks to the TaskQueue. Then you can detect whether it is the last attempt by comparing the header attribute "X-AppEngine-TaskRetryCount" with your custom param:
Boolean isLastAttempt = (taskRetryCount == (maxRetryCount - 1));
Not exactly a good design approach though...
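In a Python handler, for example, the equivalent wiring could look like this sketch (assuming the Python 2.7 standard environment with webapp2 and the built-in taskqueue API; the /worker URL and the max_retry_count parameter name are placeholders):

import webapp2
from google.appengine.api import taskqueue

MAX_RETRY_COUNT = 5   # whatever limit your queue is actually configured with

def enqueue_work():
    # pass the limit along with the task so the handler can compare it
    # against the X-AppEngine-TaskRetryCount header later
    taskqueue.add(url='/worker', params={'max_retry_count': MAX_RETRY_COUNT})

class WorkerHandler(webapp2.RequestHandler):
    def post(self):
        retry_count = int(self.request.headers.get('X-AppEngine-TaskRetryCount', 0))
        max_retry_count = int(self.request.get('max_retry_count'))
        is_last_attempt = retry_count == max_retry_count - 1
        if is_last_attempt:
            # last chance: log or alert here instead of failing silently
            pass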

DeadlineExceededException and DataStore/Task Queue Operations

I'm doing some operations that should complete in under 60 seconds, but there may be rare cases where they take longer (though never longer than 10 minutes). The App Engine docs say that if you catch a DeadlineExceededException you have less than a second to do operations before the request permanently fails. Would this be enough time to add a task to a queue and/or do a datastore write? I assume the safest way would be to add a task/write a datastore entity (asynchronously) at the beginning of an operation and remove it from the queue if the operation completes. The latter method would use twice as many API calls, but is it worth it?
I would suggest using the task queue by default for all these operations, so you won't have to implement a fallback to it when you catch a deadline-exceeded error. It is cleaner and easier to maintain, and the user doesn't have to wait for the operation to complete. To achieve this you can trigger the queued work with an AJAX call and collect the result in the background, so the user does not wait for the operation to finish. And yes, it is worth it, since it can "guarantee" the window of time you might need.
The runtime environment gives the request handler a little more time (less than a second) after raising the exception to prepare a custom response, so it should be enough to add the remaining work to a task queue.
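As a rough sketch of that in the Python runtime (where the exception is called DeadlineExceededError; the /finish URL and do_long_work are placeholders for this example):

import webapp2
from google.appengine.api import taskqueue
from google.appengine.runtime import DeadlineExceededError

class LongOperationHandler(webapp2.RequestHandler):
    def post(self):
        try:
            do_long_work()   # the operation that usually finishes in under 60 seconds
        except DeadlineExceededError:
            # less than a second left: enqueuing a task is cheap enough to fit here,
            # and the push queue handler then gets up to 10 minutes to finish the job
            taskqueue.add(url='/finish', params={'job': self.request.get('job')})
            self.response.set_status(202)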
If you do not want the client to keep polling for the task queue result, I suggest you have a look at the Channel API. It enables you to push notifications to the client.
At the end of your task, you just have to send a notification to the client to let it know that its task has been processed.

How to make my U_store.load() wait indefinitely in ExtJS

How can I make my U_store.load() wait indefinitely?
var U_store = new Ext.data.JsonStore({
    id: 'jfields',
    totalProperty: 'totalcount',
    root: 'rows',
    url: 'first-utility/index_json.php'
});
My index_json.php returns its result in 10 minutes, but load() in ExtJS does not wait that long and returns immediately. Can somebody help me get the result from index_json.php?
Your users are going to wait 10 mins for data to load?
You'd probably be better off with a solution based on periodic polling rather than "infinite" waiting. Maybe the initial call starts your long process and you have a separate call that checks for the results? Without knowing what you're doing it's hard to know what the best approach is.
Why don't you gather all the information and save it to a database/file with a separate process? Then you can easily load the store from that saved data.
In fact, if it takes 10 minutes to load, it is probably a huge amount of data. If possible, you could load the data partially, depending on client events/actions.
To answer the question as asked, I'd try to use the Ext.Ajax object, since you can define a timeout on it.
In the success handler of the Ajax call you can take the response object and load the data into the store with something like:
var myResponseData = Ext.decode(response.responseText);
myStore.loadData(myResponseData);
The drawback to going this route is that you cannot use to Ext.Ajax again while it is processing the query since it is a static member.
It might take some tweaking but I hope the basic idea is sound. If anyone sees a problem with this idea, I'd like to know. This has me thinkin'
