Currently I'm working on an application which embeds the mongoose webserver. In some cases I have to call additional functions inside begin_request_handler to create the desired HTTP header. While doing this, I noticed that these functions seem to be called only after the request handler is done. For example:
void test() {
    printf("HELLO");
}

static int begin_request_handler(struct mg_connection *conn) {
    test();
    const struct mg_request_info *request_info = mg_get_request_info(conn);
    ...
    return 1;
}
Here, HELLO is printed only after the browser closes the TCP connection. Is there even a way to call functions from inside the callbacks? Or am I just missing something?
If you want to create a custom HTTP header, then the function you mentioned above (begin_request_handler) may not be the correct approach by itself. Look into the structure mg_request_info, which is a field of struct mg_connection; that is where the header names and values are stored.
I think these structures are populated right at the start, after the connection is established. Also look at pull() and read(); these are the low-level functions where all the data is read in.
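For example, a minimal sketch based on the classic mongoose callback API (mg_get_request_info() and mg_printf(); the header name and body are placeholders) that reads the parsed request headers and writes a custom response header from inside begin_request_handler:
static int begin_request_handler(struct mg_connection *conn) {
    const struct mg_request_info *ri = mg_get_request_info(conn);
    int i;

    /* inspect the request headers that mongoose has already parsed */
    for (i = 0; i < ri->num_headers; i++) {
        printf("%s: %s\n", ri->http_headers[i].name, ri->http_headers[i].value);
    }

    /* write the reply, including a custom header, ourselves */
    mg_printf(conn,
              "HTTP/1.1 200 OK\r\n"
              "Content-Type: text/plain\r\n"
              "X-Custom-Header: example\r\n"
              "\r\n"
              "hello\r\n");

    /* non-zero tells mongoose the request has been handled */
    return 1;
}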
And yes, there is a way to call your own functions from the callbacks. You can write your own callback, add a function pointer for it to struct mg_callbacks in mongoose.h, and then call it appropriately from handle_request().
Example:
struct mg_callbacks callbacks;
struct mg_context *ctx;

memset(&callbacks, 0, sizeof(callbacks));
callbacks.begin_request = begin_request_handler;
// place your own function in place of begin_request_handler

// Start the web server.
ctx = mg_start(&callbacks, NULL, options);
Please specify any more details you may be interested in.
Well, got it. I got confused by printf()'s stdout buffering: the methods ARE called at the right time, the output just isn't shown immediately. Thanks anyway.
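For reference, stdout is usually line- or block-buffered, so a minimal sketch of making the output appear immediately is to flush explicitly (or disable buffering at startup):
#include <stdio.h>

void test() {
    printf("HELLO\n");   /* a trailing newline flushes a line-buffered stream */
    fflush(stdout);      /* forces the output out even when stdout is block-buffered */
}

/* or, once at program start, disable buffering entirely: */
/* setvbuf(stdout, NULL, _IONBF, 0); */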
I am working on a project where I need to send back 302 reply. Everything seems to work, except I can't remove certain headers, i.e. From, Contact, etc. (I don't want to remove them completely, but rather substitute with my own version of it). I use KEMI with Lua to do so:
KSR.hdr.remove("From")
As I mentioned, this does not work (while other functions from hdr work fine in the same context, namely KSR.hdr.append_to_reply(...)).
I decided to look at the Kamailio source code and found following lines of code in kemi.c file:
int sr_kemi_hdr_remove(sip_msg_t *msg, str *hname)
{
...
anchor=del_lump(msg, hf->name.s - msg->buf, hf->len, 0);
if (anchor==0) {
LM_ERR("cannot remove hdr %.*s\n", hname->len, hname->s);
return -1;
}
}
return 1;
}
Looking at the last parameter that del_lump takes, it is of type _hdr_types_t, which is an enum of the different header types. In particular, there were three headers I was working with:
From (type 4)
Contact (type 7)
Other (type 0)
So my question is: why is that function hardcoded to pass only the OTHER header type (0), and not the actual type of the header, e.g. From or Contact? Is that to safeguard against breaking the SIP request (by inadvertently removing required headers)?
And as a follow up question, is it even possible to remove From and Contact from reply messages?
I assume the 302 is generated by Kamailio; in that case, several headers are copied from the incoming request, like From, To, Call-Id and CSeq. Therefore, if you want a different From in the generated reply, change it in the request and then do msg_apply_changes().
The Contact headers for redirect (3xx) replies are generated from the destination set of the request (the modified R-URI plus any branches created by append_branch(), lookup("location"), etc.).
More headers can be added to the generated replies using append_to_reply().
Note that I gave the names of the functions for the native kamailio.cfg, but you can find them exported to KEMI as well (by the core or by the textops/textopsx modules).
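To tie this together, a rough, untested kamailio.cfg sketch of the flow described above; the URIs and header values are placeholders, the actual From rewrite is left as a comment, and it assumes the sl, textops and textopsx modules are loaded:
request_route {
    if (is_method("INVITE")) {
        # ...rewrite From in the request here (e.g. with uac/textops functions),
        # then commit the change so the generated reply copies the new value
        msg_apply_changes();

        # Contact headers of the 3xx come from the destination set
        $ru = "sip:primary@203.0.113.10";
        append_branch("sip:backup@203.0.113.11");

        # extra headers for the generated reply
        append_to_reply("X-Redirect-Reason: policy\r\n");

        sl_send_reply("302", "Moved Temporarily");
        exit;
    }
}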
I know how to source-stream from an entity via a POST request, but I want to be able to also create a source stream from the query parameters of a GET request.
I know I can get the query parameters into a case class via the as[] directive, but it seems like a miss to have to wrap that in a Source in order to stream it.
The query parameters that are part of the URL are not "streamed" from the client; rather, they are part of the request line. Therefore, by the time you have an HttpRequest object in memory, you have already consumed the space needed to hold the query parameters. This means you lose any back-pressure benefit from using a Source. I recommend analyzing why you want to create a Source in the first place...
If you absolutely have to create a Source out of the parameters then you can use the parameterSeq Directive:
val route =
  parameterSeq { params: Seq[(String, String)] =>
    val parameterSource: Source[(String, String), _] = Source(params.toList) // toList guarantees an immutable collection for Source(...)
    // ... use parameterSource, then complete the route
    complete("ok")
  }
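For completeness, a hedged, self-contained sketch of one way that Source could then be consumed; echoing the parameters back as a chunked text/plain entity is my own assumption, not part of the original answer:
import akka.http.scaladsl.model.{ContentTypes, HttpEntity}
import akka.http.scaladsl.server.Directives._
import akka.stream.scaladsl.Source
import akka.util.ByteString

val echoParams =
  parameterSeq { params =>
    // the parameters are already fully in memory; Source(...) only re-wraps them
    val bytes = Source(params).map { case (k, v) => ByteString(s"$k=$v\n") }
    // stream them back as a chunked text/plain entity
    complete(HttpEntity(ContentTypes.`text/plain(UTF-8)`, bytes))
  }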
I have an Angular 1 app in which I am trying to increase the performance of a particular service that makes a lot of calculations (it is probably not optimized, but that's beside the point for now; running it in another thread is the goal right now, to improve animation performance).
The App
The app runs calculations on your GPA, terms, courses, assignments, etc. The service name is calc. Inside calc there are user, term, course and assign namespaces. Each namespace is an object of the following form:
{
    // Times for the calculations (for development only)
    times: {
        // an array of calculation times, for logging and average calculation
        array: [],
        // prints the min, max, average and total calculation times
        report: function(){...}
    },
    // Hashes the object (with service.hash()) and checks whether we have cached
    // calculations for the item; if not, calls runAllCalculations()
    refresh: function(item){...},
    // Runs the calculations, saves them in the cache (the service.calculations array)
    // and returns the calculation object
    runAllCalculations: function(item){...}
}
Here is a screenshot from the very nice structure tab of IntelliJ to help visualization
What Needs To Be Done?
Detect Web Worker Compatibility (MDN)
Build the service depending on Web Worker compatibility (a rough sketch of this conditional wiring is shown right after this list):
a. Structure it exactly the same as it is now
b. Replace it with a Web Worker "proxy" (correct terminology?) service
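Here is roughly what I mean by the conditional part (just a sketch: app is my angular.module, and the two build functions are placeholders for the real and the worker-proxied implementations, which are the open question):
app.factory('calc', ['$q', function ($q) {
    // Web Worker feature detection, as on MDN
    if (window.Worker) {
        // hypothetical: same public API, but each call does a postMessage round-trip
        return buildWorkerProxyService($q);
    }
    // fallback: the service exactly as it is today, running on the main thread
    return buildInProcessService();
}]);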
The Problem
The problem is how to create the Web Worker "Proxy" to maintain the same service behavior from the rest of the code.
Requirements/Wants
A few things that I would like:
Most importantly, as stated above, keep the service behavior unchanged
Keep one code base for the service (keep it DRY, not having to modify two spots). I have looked at WebWorkify for this, but I am unsure how best to implement it.
Use Promises while waiting for the worker to finish
Use Angular and possibly other services inside the worker (if it's possible); again, WebWorkify seems to address this
The Question
...I guess there hasn't really been a question thus far, it's just been an explanation of the problem...So without further ado...
What is the best way to use an Angular service factory to detect Web Worker compatibility, conditionally implement the service as a Web Worker, while keeping the same service behavior, keeping DRY code and maintaining support for non Web Worker compatible browsers?
Other Notes
I have also looked at VKThread, which may be able to help with my situation, but I am unsure how best to implement it.
Some more resources:
How to use a Web Worker in AngularJS?
http://www.html5rocks.com/en/tutorials/workers/basics/
https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers#Worker_feature_detection
In general, a good way to write manageable code that works in a worker - and especially code that can also run in the same window (e.g. when workers are not supported) - is to make the code event-driven and then use a simple proxy to drive the events through the communication channel - in this case, the worker.
I first created an abstract "class" that doesn't define any particular way of sending events to the other side.
function EventProxy() {
    // Object that will receive the events that come from the other side
    this.eventSink = null;
    // This is just a trick I learned to simulate real OOP for methods that
    // are used as callbacks.
    // It also gives you a reference you can use to remove the callback later.
    this.eventFromObject = this.eventFromObject.bind(this);
}
// The object gets this as the all-events callback;
// typically you will extract the event parameters from the "arguments" variable
EventProxy.prototype.eventFromObject = function(name) {
    // Not implemented here; an inheriting class such as WorkerProxy must override it.
    throw new Error("This is an abstract method. The object dispatched an event " +
        "but this class doesn't do anything with events.");
};
EventProxy.prototype.setObject = function(object) {
    // If an object is already set, remove the event listener from the old object
    if (this.eventSink != null) {
        // do it depending on your framework, e.g.:
        // this.eventSink.removeListener("*", this.eventFromObject);
    }
    this.eventSink = object;
    // Listen on all events. Obviously, your event framework must support this
    object.addListener("*", this.eventFromObject);
};
// Child classes will call this when they receive
// events from the other side (e.g. the worker)
EventProxy.prototype.eventReceived = function(name, args) {
    // put the event name as the first parameter
    args.unshift(name);
    // Run the event on the object
    this.eventSink.dispatchEvent.apply(this.eventSink, args);
};
Then you implement this for the worker, for example:
function WorkerProxy(worker) {
    // call the superconstructor
    EventProxy.call(this);
    // the worker
    this.worker = worker;
    worker.addEventListener("message",
        this.eventFromWorker = this.eventFromWorker.bind(this));
}
WorkerProxy.prototype = Object.create(EventProxy.prototype);
// The object gets this as the all-events callback;
// typically you will extract the event parameters from the "arguments" variable
WorkerProxy.prototype.eventFromObject = function(name) {
    // include the event args but skip the first one, the name
    var args = [];
    args.push.apply(args, arguments);
    args.splice(0, 1);
    // Send the event to the script in the worker
    // You could use an additional parameter to use different proxies for different objects
    this.worker.postMessage({type: "proxyEvent", name: name, arguments: args});
};
WorkerProxy.prototype.eventFromWorker = function(event) {
    if (event.data.type == "proxyEvent") {
        // Use the superclass method to handle the event
        this.eventReceived(event.data.name, event.data.arguments);
    }
};
The usage then would be that you have some service and some interface, and in the page code you do:
// Or other proxy type, eg socket.IO, same window, shared worker...
var proxy = new WorkerProxy(new Worker("runServiceInWorker.js"));
//eg user clicks something to start calculation
var interface = new ProgramInterface();
// join them
proxy.setObject(interface);
And in the runServiceInWorker.js you do almost the same:
importScripts("myservice.js", "eventproxy.js");
// Here we're of course really lucky that web worker API is symethric
var proxy = new WorkerProxy(self);
// 1. make a service
// 2. assign to proxy
proxy.setObject(new MyService());
// 3. profit ...
In my experience, I eventually sometimes had to detect which side I was on, but that was with web sockets, which are not symmetric (there's one server and many clients). You could run into similar problems with a shared worker.
You mentioned Promises - I think the approach with promises would be similar, though maybe more complicated, as you would need to store the callbacks somewhere and index them by the ID of the request. But it's surely doable, and if you're invoking worker functions from different sources, maybe even better.
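A minimal sketch of that promise-based variant, assuming a plain dedicated worker; the message shape and the callWorker name are illustrative only (runServiceInWorker.js reused from above):
var worker = new Worker("runServiceInWorker.js");
var pending = {};   // request id -> {resolve, reject}
var nextId = 0;

function callWorker(method, args) {
    return new Promise(function (resolve, reject) {
        var id = nextId++;
        pending[id] = {resolve: resolve, reject: reject};
        worker.postMessage({id: id, method: method, args: args});
    });
}

worker.addEventListener("message", function (event) {
    var msg = event.data;
    var entry = pending[msg.id];
    if (!entry) return;              // not a reply to one of our calls
    delete pending[msg.id];
    if (msg.error) entry.reject(new Error(msg.error));
    else entry.resolve(msg.result);
});

// usage: callWorker("runAllCalculations", [course]).then(function (result) { ... });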
I am the author of the vkThread plugin which was mentioned in the question. And yes, I developed an Angular version of the vkThread plugin, which allows you to execute a function in a separate thread.
The function can be defined directly in the main thread or loaded from an external javascript file. It can be:
a regular function
an object's method
a function with dependencies
a function with context
an anonymous function
Basic usage:
/* function to execute in a thread */
function foo(n, m) {
    return n + m;
}

/* to execute this function in a thread,
   create an object which you pass to vkThread as an argument */
var param = {
    fn: foo,       // <-- function to execute
    args: [1, 2]   // <-- arguments for this function
};

/* run the thread */
vkThread.exec(param).then(
    function (data) {
        console.log(data); // <-- the thread returns 3
    }
);
Examples and API doc: http://www.eslinstructor.net/ng-vkthread/demo/
Hope this helps,
--Vadim
I am using services in my AngularJS project and I am trying to call a service method in a for loop, like this:
for( key in URLs) {
Service.fetchXML(key);
}
Service description:
[..]
fetchXML : function(resource) {
    var prop = SomeVar[resource]; // SomeVar is declared within the service
    $.get('xmlURL.xml?resource=' + prop, function(data) {
        // adds data to the IndexedDB after properly parsing it
        console.log(resource);
        dbAdd();
    });
}
The problem is that when I inspect resource inside the fetchXML() method, it seems to be set permanently; that is, if the loop runs five times, only one instance of fetchXML() appears to be created and console.log(resource) prints the same value for all five iterations.
Please tell me what I am doing wrong here.
for( key in URLs) {
Service.fetchXML();
}
You should be passing the parameter to the function, since it is used as resource to create prop:
for( key in URLs) {
Service.fetchXML(key);
}
This should have been fairly easy to troubleshoot. First, it would be apparent from the request URL when inspected in the browser console/dev tools.
Also, using the debugger or some simple console.log() statements in the function would have helped, or setting a breakpoint on the function and stepping through it to see the variable values.
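For example, a quick instrumented sketch of the service method (same names as above) that logs the value it receives and the URL it requests on each call:
fetchXML: function (resource) {
    console.log('fetchXML called with:', resource);
    var prop = SomeVar[resource];
    var url = 'xmlURL.xml?resource=' + prop;
    console.log('requesting:', url);       // compare with the network tab
    $.get(url, function (data) {
        console.log('response received for:', resource);
        dbAdd();
    });
}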
I'm writing an API using Kohana. Each external request must be signed by the client to be accepted.
However, I also sometimes need to do internal requests by building a Request object and calling execute(). In these cases the signature is unnecessary, since I know the request is safe. So I need to know that the request was internal so that I can skip the signature check.
So is there any way to find out if the request was manually created using a Request object?
Can you use the is_initial() method of the request object? Using this method, you can determine if a request is a sub request.
Kohana 3.2 API, Request - is_initial()
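A minimal sketch of how that check might look in the receiving controller, assuming the signature check is done there:
if ($this->request->is_initial()) {
    // initial (external) request: verify the signature
} else {
    // sub-request created internally via Request::factory()->execute(): skip the check
}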
It sounds like you could easily solve this issue by setting some sort of static variable your app can check: if it's not FALSE, then you know the request is internal.
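A hedged sketch of that idea; the RequestOrigin helper and its property are made-up names for illustration, not Kohana API:
// a small application-level helper (hypothetical)
class RequestOrigin {
    public static $internal = FALSE;
}

// when building the internal request:
RequestOrigin::$internal = TRUE;
$response = Request::factory($url)->execute();
RequestOrigin::$internal = FALSE;

// in the signature check:
if (RequestOrigin::$internal === FALSE) {
    // external request: verify the signature
}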
This is how I ended up doing it: I've overridden the Request object and added an is_server_side property to it. Now, when I create the request, I just set this to true so that I know it's been created server-side:
$request = Request::factory($url);
$request->is_server_side(true);
$response = $request->execute();
Then later in the controller receiving the request:
if ($this->request->is_server_side()) {
// Skip signature check
} else {
// Do signature check
}
And here is the overridden request class in application/classes/request.php:
<?php defined('SYSPATH') or die('No direct script access.');

class Request extends Kohana_Request {
    protected $is_server_side_ = false;

    public function is_server_side($v = null) {
        if ($v === null) return $this->is_server_side_;
        $this->is_server_side_ = $v;
    }
}
Looking through Request, it looks like your new request would be considered an internal request, but it does not set any special flag to tell you this. Look at lines 782 to 832 in Kohana_Request... nothing there to help you.
Given that, I'd suggest extending Kohana_Request_Internal to add a flag that marks it as internal, and checking that in your app when you need to distinguish internal requests from all the others.
Maybe you are looking for the is_external() method:
http://kohanaframework.org/3.2/guide/api/Request#is_external
Kohana 3.3, in the controller:
$this->request->is_initial()
http://kohanaframework.org/3.3/guide-api/Request#is_initial