I came across this article by Brian Ford which talks about throttling Socket.io requests to help with digests in a large app - http://briantford.com/blog/huuuuuge-angular-apps.html
I recently built a Factory to support PubNub's JS API and am having difficulties implementing a throttle in JS to prevent an apply/digest from happening every time a message is received. Here is a working Plunkr of the Factory in action - http://plnkr.co/edit/0w6dWQ4lcqTtdxbOa1EL?p=preview
I think the main issue I'm having is understanding how Brian's example does it with regard to Socket.io's syntax, and how that can apply to the way PubNub handles message callbacks.
Thanks!
PubNub Rate Limiting vs. Throttling Messages in AngularJS
Before diving into the solution, let's talk about two variations of desirable rate limiting and throttling behavior. Both limit the frequency at which the UI is updated or a function() is invoked.
If you want to skip to the plunker source code: http://plnkr.co/edit/Kv698u?p=preview
Rate limiting enforces a maximum messages-per-second rate with an evenly distributed delay between each triggered event. Rather than triggering an event at whatever rate messages arrive in the pipe, you recognize every message but spread the event triggering evenly over X milliseconds.
Throttling, however, differs from rate limiting in that messages are intentionally dropped and only the most recent message is used. Throttling goes one step further than rate limiting: each older message is tossed away, leaving only the most recent one to be processed at a set interval.
There is also the concept of capping, which over a timespan allows only X messages to arrive and then pauses events until the timespan completes. Capping is not the same as rate limiting or throttling: messages are processed at the same rate at which they arrive, rather than being evenly distributed over an interval, and all messages within the quota are recognized (dropping after the quota is exceeded is optional).
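The capping behavior described above can be sketched in the same style as the limit() and throttle() helpers shown below. Note the cap() name and its window-tracking approach are my own illustration, not part of the PubNub API:

```javascript
//
// Capping Function (illustrative sketch): allow at most `max` messages per
// `windowMs` window; messages beyond the quota are dropped until the window
// completes. Within the quota, messages are processed at their arrival rate.
//
function cap( fun, max, windowMs ) {
    var count = 0;
    var windowStart = Date.now();
    return function (message) {
        var now = Date.now();
        if (now - windowStart >= windowMs) {
            // the timespan completed: start a new window
            windowStart = now;
            count = 0;
        }
        if (count < max) {
            count += 1;
            fun(message); // processed immediately, at the arrival rate
        }
        // otherwise the message is dropped until the window completes
    };
}
```

It would be used in the subscribe call the same way as limit() and throttle(), e.g. `message : cap( handler, 10, 1000 )` to allow ten messages per second.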
Plunker JavaScript Source Editor for AngularJS
http://plnkr.co/edit/Kv698u?p=preview - Code View with AngularJS bindings.
Use Plunker to preview a working example!
PubNub Rate Limiting in AngularJS
This process requires a queue to receive and store all the messages until they are processed in a slow, steady fashion. This method will not drop any messages; instead it slowly chews through each message at a set rate until all are processed, regardless of the network receive rate.
//
// Listen for messages and process all messages at steady rate limit.
//
pubnub.subscribe({
    channel : "hello_world",
    message : limit( function(message) {
        // process your messages at set interval here.
    }, 500 /* milliseconds */ )
});
//
// Rate Limit Function
//
function limit( fun, rate ) {
    var queue = [];
    setInterval( function() {
        // check queue length rather than truthiness, so that
        // falsy messages (0, "", false) are still processed
        if (queue.length) fun(queue.shift());
    }, rate );
    return function(message) { queue.push(message); };
}
Notice the function of interest is limit() and how it is used in the message response callback of the subscribe call.
PubNub Throttling in AngularJS
This is a process which only keeps the most recent message to be processed at a regular interval while purposefully dropping all older messages that are received in a certain window of time.
//
// Listen for events and process last message each 500ms.
//
pubnub.subscribe({
    channel : "hello_world",
    message : throttle( function(message) {
        // process your last message here each 500ms.
    }, 500 /* milliseconds */ )
});
//
// Throttle Function
//
function throttle( fun, rate ) {
    var last = null; // initialize to null so the first tick does not fire with undefined
    setInterval( function() {
        last !== null && fun(last);
        last = null;
    }, rate );
    return function(message) { last = message; };
}
The throttle() function drops messages received within a given window, always processing only the last received message at a set interval.
Or Combine Both
//
// Listen for events and process last message each 500ms.
//
pubnub.subscribe({
    channel : "hello_world",
    message : thrimit( function( last_message, all_messages ) {
        // process your last message (and the full queue) here each 500ms.
    }, 500 /* milliseconds */ )
});
//
// Throttle + Limit Function
//
function thrimit( fun, rate ) {
    var last = null; // initialize to null so the first tick does not fire with undefined
    var queue = [];
    setInterval( function() {
        last !== null && fun( last, queue );
        last = null;
        queue = [];
    }, rate );
    return function(message) {
        last = message;
        queue.push(message);
    };
}
Related
I am loading data (idle time and timeout time) from the database via a REST API call. On the frontend side I am using an AngularJS 1.3.1 $http GET call to get the data.
I need the above data (idle time and timeout time) at the app.config() level --- which is working fine.
I need to pass the same data (idle time and timeout time) from app.config() to app.run(). Any idea how to do that?
Also, how do I make sure the $http GET call has completed and the idle time and timeout time have been fetched from the database before they are sent to app.run()?
I hope people will understand the question and respond to it. I am stuck at it right now.
code block:
angular.module().config(function() {
    var idleWarnTime, idleTimeOut;
    var http = angular.injector([ 'ng' ]).get('$http');
    http.get('timeout').success(function(response) {
        idleWarnTime = response.data.warntime;
        idleTimeOut = response.data.timeout;
    }).error(function(error) {
        console.log('session timeout details fetching from db failed');
    });
});
angular.module().run(function() {
    // need idleWarnTime and idleTimeOut values here, and only after the "timeout" rest api provides the result
});
Trying to learn how to use Akka.NET Streams to process items in parallel from a Source.Queue, with the processing done in an Actor.
I've been able to get it to work by calling a function with Sink.ForEachParallel, and it works as expected.
Is it possible to process items in parallel with Sink.ActorRefWithAck (as I would prefer it utilize back-pressure)?
I was about to press Post when I tried combining previous attempts and voilà!
My previous attempts with ForEachParallel failed when I tried to create the actor within it, since I couldn't do so in an async function. If I used a single actor declared previously, the Tell would work, but I couldn't get the parallelism I desired.
I got it to work with a router with a round-robin configuration.
var props = new RoundRobinPool(5).Props(Props.Create<MyActor>());
var actor = Context.ActorOf(props);

flow = Source.Queue<Element>(2000, OverflowStrategy.Backpressure)
    .Select(x => {
        return new Wrapper() { Element = x, Request = ++cnt };
    })
    .To(Sink.ForEachParallel<Wrapper>(5, (s) => { actor.Tell(s); }))
    .Run(materializer);
The Request ++cnt is for console output to verify the requests are being processed as desired.
MyActor has a long delay on every 10th request to verify the backpressure was working.
I am making lots of API calls in my application, say 50.
The total time to complete all the API calls is around 1 minute. The priority for all the API calls is 2. I have enabled the Angular cache.
In the meantime, if the user of my application wants to focus on just some of those API calls, say 6 of them,
then I re-issue those 6 API calls with priority 1.
But I still don't get what I aimed for, i.e. these 6 API calls need to receive the data ASAP.
Kindly refer to the example code below.
On Initial load :
for (var i = 1, priority = 19; i <= 19, priority >= 1; i++, priority--) {
    $http.get("http://localhost:65291/WebService1.asmx/HelloWorld" + i + "?test=hari", { priority: 2 })
        .then(function(response) { });
}
On some event click :
$http.get("http://localhost:65291/WebService1.asmx/HelloWorld7?test=hari", { priority: 1 })
    .then(function(response) { });
If you want to send multiple HTTP requests in one shot, use $q.all.
Inside the loop, push the HTTP requests into an array, then send that array at once.
var httpArr = [];
// use && rather than a comma so both loop conditions are actually checked
for (var i = 1, priority = 19; i <= 19 && priority >= 1; i++, priority--) {
    httpArr.push($http.get("http://localhost:65291/WebService1.asmx/HelloWorld" + i + "?test=hari", {
        priority: 2
    }));
}
$q.all(httpArr).then(function(responses) {
    console.log(responses[0].data); // 1st request response
    console.log(responses[1].data); // 2nd request response
    console.log(responses[2].data); // 3rd request response
});
I use an interval of 10 seconds for sending a request to get the most recent data:
var pollInterval = 10000;
var poll;

poll = $interval(function() {
    getNewestData(); // $resource factory to get server data
}, pollInterval);
This works fine for 99% of the time, but if the internet connection is really slow (I have actually experienced this), it will send the next request before the current one has finished. Is there a way to just skip the current interval request if the previous one is still busy? Obviously I could use booleans to keep the state of the request, but I wonder if there is a better (native to Angular) way of doing this?
Use the $resolved property of the Resource object to check if the previous operation is done.
From the Docs:
The Resource instances and collections have these additional properties:
$promise: the promise of the original server interaction that created this instance or collection.
$resolved: true after first server interaction is completed (either with success or rejection), false before that. Knowing if the Resource has been resolved is useful in data-binding.
$cancelRequest: If there is a cancellable, pending request related to the instance or collection, calling this method will abort the request.
-- AngularJS ngResource $resource API Reference.
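As a sketch of how $resolved can gate the interval (makePoller is a hypothetical helper name, and getNewestData is assumed to return a $resource Resource instance):

```javascript
// Hypothetical helper: keep the last Resource around and skip the tick
// while its request is still in flight ($resolved stays false until the
// server interaction completes, with success or rejection).
function makePoller(getNewestData) {
    var latest = null;
    return function tick() {
        if (latest && !latest.$resolved) {
            return false; // previous request still pending: skip this tick
        }
        latest = getNewestData(); // returns immediately; $resolved flips later
        return true; // a new request was issued
    };
}

// In the controller:
// poll = $interval(makePoller(getNewestData), pollInterval);
```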
How about making the request, waiting for it to complete, and then waiting 10 seconds before making the same request again? Something along these lines:
var pollInterval = 10000;

var getNewestData = function () {
    // returns data in a promise using $http, $resource or some other way
};

var getNewestDataContinuously = function () {
    getNewestData().then(function (data) {
        // do something with the data
        $timeout(function () {
            getNewestDataContinuously();
        }, pollInterval);
    });
};
getNewestData is the function that actually makes the request and returns the data in a promise.
Once the data is fetched, a $timeout is started with the timer set to 10 seconds, which then repeats the process.
I'm testing my angular application with Protractor.
Once the user is logged in to my app, I set a $timeout to do some job in one hour (so if the user logged in at 13:00, the $timeout will run at 14:00).
I keep getting these failures:
"Timed out waiting for Protractor to synchronize with the page after 20 seconds. Please see https://github.com/angular/protractor/blob/master/docs/faq.md. The following tasks were pending: - $timeout: function onTimeoutDone(){....."
I've read this timeouts page: https://github.com/angular/protractor/blob/master/docs/timeouts.md
so I understand Protractor waits until the page is fully loaded, which means it's waiting for the $timeout to complete...
How can I make Protractor NOT wait for that $timeout?
I don't want to use:
browser.ignoreSynchronization = true;
Because then my tests will fail for other reasons (other angular components still needs the time to load...)
The solution would be to flush active timeouts (as @MBielski mentioned in the comments), but the original flush method itself is available only in angular-mocks. To use angular-mocks directly you would have to include it on the page as a <script> tag, and you would also have to deal with all the overrides it creates; it produces a lot of side effects. I was able to re-create flush without angular-mocks by listening to any timeouts that get created and then resetting them on demand.
For example, if you have a timeout in your Angular app:
$timeout(function () {
    alert('Hello World');
}, 10000); // say hello in 10 sec
The test will look like:
it('should reset timeouts', function () {
    browser.addMockModule('e2eFlushTimeouts', function () {
        angular
            .module('e2eFlushTimeouts', [])
            .run(function ($browser) {
                // store all created timeouts
                var timeouts = [];
                // listen to all timeouts created by overriding
                // the method responsible for creating them
                var originalDefer = $browser.defer;
                $browser.defer = function (fn, delay) {
                    // originally it returns the timeout id
                    var timeoutId = originalDefer.apply($browser, arguments);
                    // store it to be able to remove it later
                    timeouts.push({ id: timeoutId, delay: delay });
                    // preserve original behavior
                    return timeoutId;
                };
                // compatibility with the original method
                $browser.defer.cancel = originalDefer.cancel;
                // create a global method to flush timeouts greater than `delay`;
                // call it using browser.executeScript()
                window.e2eFlushTimeouts = function (delay) {
                    timeouts.forEach(function (timeout) {
                        if (timeout.delay >= delay) {
                            $browser.defer.cancel(timeout.id);
                        }
                    });
                };
            });
    });
    browser.get('example.com');
    // do test stuff
    browser.executeScript(function () {
        // flush everything that has a delay of more than 6 sec
        window.e2eFlushTimeouts(6000);
    });
    expect(something).toBe(true);
});
It's kind of experimental; I am not sure if it will work for your case. This code can also be simplified by moving browser.addMockModule into a separate node.js module. There may also be problems if you want to remove short timeouts (like 100ms), as that can cancel currently running Angular processes, and therefore the test will break.
An alternative solution is to use interceptors and modify the HTTP request that is causing the timeout: set a custom timeout of some milliseconds (your desired value) on that request, so that after some time the long-running HTTP request is closed (because of the new timeout) and you can then test the immediate response.
This works well and is promising.
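A minimal sketch of such an interceptor (the factory name, the URL pattern, and the 500ms value are all assumptions for illustration, not from the original answer):

```javascript
// Hypothetical interceptor factory: cap the timeout of matching $http
// requests so the long-running request is aborted early and Protractor
// stops waiting for it. The interceptor is a plain object with a
// `request` hook that receives and returns the request config.
function makeTimeoutInterceptor(urlPattern, timeoutMs) {
    return {
        request: function (config) {
            if (urlPattern.test(config.url)) {
                config.timeout = timeoutMs; // $http aborts the request after this many ms
            }
            return config;
        }
    };
}

// Registration in the app config (or in a mock module added only for the e2e run):
// $httpProvider.interceptors.push(function () {
//     return makeTimeoutInterceptor(/long-poll/, 500);
// });
```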