Kamailio removing headers from reply

I am working on a project where I need to send back a 302 reply. Everything seems to work, except that I can't remove certain headers, e.g. From, Contact, etc. (I don't want to remove them completely, but rather substitute them with my own versions). I use KEMI with Lua to do so:
KSR.hdr.remove("From")
As I mentioned, this does not work, while other functions from hdr work fine in the same context, namely KSR.hdr.append_to_reply(...).
I decided to look at the Kamailio source code and found the following lines in the kemi.c file:
int sr_kemi_hdr_remove(sip_msg_t *msg, str *hname)
{
...
anchor=del_lump(msg, hf->name.s - msg->buf, hf->len, 0);
if (anchor==0) {
LM_ERR("cannot remove hdr %.*s\n", hname->len, hname->s);
return -1;
}
}
return 1;
}
Looking at the last parameter that del_lump takes, it is of type _hdr_types_t, an enum of the different header types. In my case there were three header types of interest:
From (type 4)
Contact (type 7)
Other (type 0)
So my question is: why is that function hardcoded to pass only the OTHER header type, and not the actual one, e.g. From or Contact? Is that to safeguard against breaking the SIP request (inadvertently removing required headers)?
And as a follow-up question: is it even possible to remove From and Contact from reply messages?

I assume the 302 is generated by Kamailio; in that case several headers are copied from the incoming request, like From, To, Call-Id and CSeq. Therefore, if you want a different From in the generated reply, change it in the request and then do msg_apply_changes().
Contact headers for redirect (3xx) replies are generated from the destination set of the request (modified R-URI and branches that can be created by append_branch(), lookup("location") etc.).
More headers can be added to the generated replies using append_to_reply().
Note that I gave the names of the functions for the native kamailio.cfg, but you can find them exported to KEMI as well (by the core or by the textops/textopsx modules).
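To make that concrete, here is a rough KEMI Lua sketch of the approach described above. It is untested and only illustrative: the exact KEMI function names and module prefixes (uac_replace_from, msg_apply_changes, send_reply) depend on your Kamailio version and on the uac, textopsx and sl modules being loaded, and all URIs and header values are made-up placeholders.

function ksr_request_route()
    -- rewrite From in the incoming request; the generated 302 copies it
    KSR.uac.uac_replace_from("Redirect Service", "sip:redirect@example.com")
    KSR.textopsx.msg_apply_changes()

    -- the Contact of the 3xx reply is built from the destination set,
    -- so point the R-URI (and/or extra branches) at the redirect target
    KSR.pv.sets("$ru", "sip:target@gw.example.com")

    -- any additional headers for the generated reply
    KSR.hdr.append_to_reply("X-Redirect-Reason: policy\r\n")

    KSR.sl.send_reply(302, "Moved Temporarily")
    return 1
end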

PollingConsumer using smb protocol and pollEnrich()

Problem: The file is not consumed from the server
I am using
from("test")
.routeId("test")
.pollEnrich()
.simple("smb://myUrl?password=test&fileName=${in.headers.test}")
.aggregationStrategy((Exchange oldExchange, Exchange newExchange) -> {
//do things
return newExchange;
})
I get no error, and I am sure that the URL is OK, because when I use the same URL in from(), the file gets consumed.
I don't understand what is happening here. I am using Camel 2.24.0 and camel-extra:camel-jcifs:2.23.1. I have also tried SMB2 using the library from github.jborza.camel-smbj, with the same outcome.
I tried to debug: in the GenericFileComponent class, in the createEndpoint method, I can see that the endpoint is correctly created. Then I tried (in debug mode) to get the exchanges from my endpoint, and I can get them successfully; the endpoint is an SmbEndpoint, and when I get the exchanges from it, it returns exactly the needed file from the server. Further on, an EventDrivenPollingConsumer is created for this endpoint; I had a look at it, and it is started (seems OK). When it hits consumer.receive() in the PollEnricher, it blocks and no file is consumed. If I use a timeout, it returns null, so somehow it cannot find the file, or the consumer is wrong; I honestly have no clue at this point.
I had a look here too: https://github.com/apache/camel/blob/b9a3117f19dd19abd2ea8b789c42c3e86fe4c488/core/camel-core/src/test/java/org/apache/camel/component/file/FileConsumePollEnrichFileTest.java
and I have played with delays
&consumer.initialDelay=100&consumer.delay=100&consumer.bridgeErrorHandler=true
Then I tried to implement with processor like here:
https://github.com/apache/camel/blob/b9a3117f19dd19abd2ea8b789c42c3e86fe4c488/core/camel-core/src/test/java/org/apache/camel/component/file/FileConsumePollEnrichFileUsingProcessorTest.java
The same result :(
At some point the file was suddenly consumed, but this happened only once; I cannot explain this behavior.
Sounds like you have a read-lock problem: can you find any files in .done with the same name as the file you are trying to consume?
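Not part of the original answer, but if it is indeed a read-lock issue, the generic file-consumer options can be set on the endpoint polled by pollEnrich(). A hedged sketch (it assumes camel-jcifs passes these generic file options through unchanged; from("test"), myUrl and the password are the placeholders from the question, and the option values are arbitrary):

from("test")
    .routeId("test")
    .pollEnrich()
        // readLock, readLockTimeout and readLockCheckInterval are generic
        // file-component consumer options
        .simple("smb://myUrl?password=test&fileName=${in.headers.test}"
              + "&readLock=changed&readLockTimeout=10000&readLockCheckInterval=1000")
        .timeout(20000) // avoid blocking forever if the file never becomes readable
        .aggregationStrategy((Exchange oldExchange, Exchange newExchange) -> newExchange);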

Errors when retrieving request body in Mongoose WebServer

I'm working with an old version of mongoose (an open source web server) in C, which did not provide native access to the request payload. In order to support POST and PUT requests, I manually modified it: after mongoose reads the headers, I check if Content-Length is set and, if so, I read from the socket again for Content-Length bytes.
findCL = strstr(conn->buf, "Content-Length:");
if (findCL)
{
    // skip the "Content-Length:" prefix and parse the numeric value
    findCL += 15 * sizeof(char);
    findCLEnd = (char*)strchr(findCL, delimiter);
    sizeLen = findCLEnd - findCL;
    strncpy(CLSize, findCL, sizeLen);
    CLSize[sizeLen] = '\0';
    size = strtoll(CLSize, NULL, 10);
    if (size > 0)
    {
        // read the rest of the request from the socket into conn->buf,
        // then record the announced body size
        conn->content_len = read_request(NULL, conn->client.sock, conn->ssl,
                                         conn->buf, conn->buf_size, &conn->data_len);
        conn->content_len = size;
        perror("recv");
        // copy the body, which starts after the request_len header bytes
        body = (char*)malloc(sizeof(char) * (size + 1));
        strncpy(body, conn->buf + conn->request_len, size);
        body[size] = '\0';
    }
}
So far so good; even if the code is not that beautiful, it does the dirty job. The problem is that while the code works fine in debug, when it runs as a simple background process the body is not parsed correctly: sometimes the resulting body is truncated, other times it is just empty. The problem seems to be caused by clients sending requests in quick succession.
Not a real answer, but this is how I solved it.
The web server module started as a Mongoose web server; I ported it to Civetweb some months ago, hoping that the latest versions of the project also supported parsing of the body. They did not, and I had to implement it manually. After some time I discovered that Civetweb had issues serving Javascript files to IE8 browsers (for some mysterious reason). I reverted to Mongoose and everything worked, except the body parsing, which led to the code above and the subsequent errors. I finally solved it by reverting back to Civetweb, stable version 1.9.1, keeping the manual body-parsing procedure. This solved both the truncated request bodies and the truncated served Javascript files. Probably the first version of Civetweb was a not-so-stable beta, although I wonder how it managed to work for months.
I still haven't checked the diffs between the two versions, but I expect it to be something related to maximum request sizes depending on the platform, the request headers, or something else.
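For what it's worth (this is not from either post): truncated or empty bodies under rapid client traffic are the classic symptom of reading the socket only once, since a single recv() may return only part of the body. The usual approach is to keep reading until Content-Length bytes have arrived. A minimal standalone sketch with plain POSIX sockets; read_body is a hypothetical helper, not mongoose's read_request:

#include <sys/types.h>
#include <sys/socket.h>

/* Read exactly 'len' body bytes, looping over recv() until they have all
 * arrived, the peer closes the connection, or an error occurs.
 * Returns the number of bytes actually read. */
static ssize_t read_body(int sock, char *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(sock, buf + got, len - got, 0);
        if (n <= 0)             /* 0 = connection closed, <0 = error */
            break;
        got += (size_t)n;
    }
    return (ssize_t)got;
}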

How to read the body of ssl response using BIO

Hello, I've got a problem trying to create a Twitter console app. When getting a response I get some garbage in it.
Here is my code:
char new_request[1024] = "GET /1.1/statuses/user_timeline.json?count=4&screen_name=twitterapi HTTP/1.1\r\nHost: api.twitter.com\r\nUser-Agent: twitter-terminal-app$
strcat(new_request, bearer);
strcat(new_request, "\r\nAccept-Encoding: gzip\r\n\r\n\0");
BIO_write(bio, new_request, strlen(new_request));
printf("%s\n", new_request);
p = BIO_read(bio, ans, 2047); // Getting header
ans[p] = 0;
printf("%s\n", ans);
char ans2[100000] = "";
p = BIO_read(bio, ans2, 10000); // Getting body
BIO_should_retry(bio);
FILE *file = fopen("result.txt", "w+");
fputs(ans2, file);
printf("%s\n%i\n", ans2, p);
The answer I get in the body looks like this:
▒▒is▒HǿJ▒_
▒▒▒▒V▒▒▒▒.▒▒T▒▒f
▒Tm▒m
▒P▒▒1▒|▒}▒%▒▒▒▒Q^▒▒▒▒?▒cx{Չ
F%▒C*;▒▒ŴD/#▒L▒▒▒▒?L▒A▒▒▒N▒▒ĝ▒▒▒▒$▒El▒▒▒▒X▒▒B0▒~%▒▒5▒˲#Y)GY▒ctz▒h&-▒▒▒O>AG▒▒▒l6b▒z▒:K
▒T▒\▒▒2+▒b▒H▒&B ▒▒▒B▒qV▒▒▒▒▒▒
▒▒a_▒l▒?#▒o▒▒▒W▒▒u%▒▒▒;▒1M▒v▒▒L▒▒
Maybe the answer is encoded somehow and that's the problem. I tried to surf dev.twitter.com but didn't find any answer. If I use BIO_gets() instead of BIO_read() the result is -2. Any ideas?
strcat(new_request, "\r\nAccept-Encoding: gzip\r\n\r\n\0");
...
p = BIO_read(bio, ans, 2047); // Getting header
...
p = BIO_read(bio, ans2, 10000); // Getting body
... Maybe the answer is encoded somehow
You've declared in your HTTP request that you accept data compressed with gzip, and that's why the server has sent you the data compressed. If you did not just read and ignore the HTTP response header but actually took a look at it, you would probably notice:
Content-Encoding: gzip
Apart from explicitly allowing compressed data, you are doing an HTTP/1.1 request. This means that you would also need to be able to deal with chunked transfer encoding. HTTP/1.1 also implies that connections are persistent by default, so you would need to properly parse the header to find out where the response really ends instead of relying on the end of the connection. You also cannot rely on a fixed size of the header, or on the header and body being readable with separate BIO_read calls. For example, you might need multiple reads for the body, or part of the body might already be included in the single read you do for the header.
Unless you really want to deal with all these problems yourself, I recommend using an existing library which implements this properly and thus gets you the correct response reliably, not by chance.
If you instead want to learn how this is done, I recommend starting by learning more about HTTP, e.g. by reading the Wikipedia entry, and then continuing with the standards referenced there. I suggest starting with HTTP/1.0 since it is simpler than HTTP/1.1.
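If you just want to sidestep the compression and framing issues for a quick experiment, the simplest change is to stop advertising gzip support, ask the server to close the connection, and keep calling BIO_read() until it returns 0, instead of assuming one read for the header and one for the body. This is a sketch only: fetch_plain is a hypothetical helper, bearer is assumed here to hold just the token, and bio is assumed to be the already-connected SSL BIO from the question.

#include <stdio.h>
#include <string.h>
#include <openssl/bio.h>

static void fetch_plain(BIO *bio, const char *bearer)
{
    char request[2048];
    /* No Accept-Encoding header, and Connection: close so the end of the
     * body is signalled by the server closing the connection. */
    snprintf(request, sizeof(request),
             "GET /1.1/statuses/user_timeline.json?count=4&screen_name=twitterapi HTTP/1.1\r\n"
             "Host: api.twitter.com\r\n"
             "User-Agent: twitter-terminal-app\r\n"
             "Authorization: Bearer %s\r\n"
             "Connection: close\r\n\r\n", bearer);
    BIO_write(bio, request, strlen(request));

    char buf[4096];
    int n;
    while ((n = BIO_read(bio, buf, sizeof(buf))) > 0)
        fwrite(buf, 1, (size_t)n, stdout);   /* headers followed by the plain body */
}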

Provide a callback URL in Google Cloud Storage signed URL

When uploading to GCS (Google Cloud Storage) using the BlobStore's createUploadURL function, I can provide a callback together with header data that will be POSTed to the callback URL.
There doesn't seem to be a way to do that with GCS's signed URLs.
I know there is Object Change Notification but that won't allow the user to provide upload specific information in the header of a POST, the way it is possible with createUploadURL's callback.
My feeling is, if createUploadURL can do it, there must be a way to do it with signed URL's, but I can't find any documentation on it. I was wondering if anyone may know how createUploadURL achieves that callback calling behavior.
PS: I'm trying to move away from createUploadURL because of the __BlobInfo__ entities it creates, which for my specific use case I do not need, and somehow seem to be indelible and are wasting storage space.
Update: It worked! Here is how:
Short Answer: It cannot be done with PUT, but can be done with POST
Long Answer:
If you look at the signed-URL page, in front of HTTP_Verb, under Description, there is a subtle note that this page is only relevant to GET, HEAD, PUT, and DELETE, but POST is a completely different game. I had missed this, but it turned out to be very important.
There is a whole page of HTTP Headers that does not list an important header that can be used with POST; that header is success_action_redirect, as voscausa correctly answered.
In the POST page Google "strongly recommends" using PUT, unless dealing with form data. However, POST has a few nice features that PUT does not have. They may worry that POST gives us too many strings to hang ourselves with.
But I'd say it is totally worth dropping createUploadURL, and writing your own code to redirect to a callback. Here is how:
Code:
If you are working in Python voscausa's code is very helpful.
I'm using apejs to write javascript in a Java app, so my code looks like this:
var json = {};                                // holds the form fields sent to GCS
var exp = new Date();
exp.setTime(exp.getTime() + 1000 * 60 * 100); // 100 minutes
json['GoogleAccessId'] = String(appIdentity.getServiceAccountName());
json['key'] = keyGenerator();
json['bucket'] = bucket;
json['Expires'] = exp.toISOString();
json['success_action_redirect'] = "https://" + request.getServerName() + "/test2/";
json['uri'] = 'https://' + bucket + '.storage.googleapis.com/';
var policy = {'expiration': json.Expires
  , 'conditions': [
    ["starts-with", "$key", json.key],
    {'Expires': json.Expires},
    {'bucket': json.bucket},
    {"success_action_redirect": json.success_action_redirect}
  ]
};
var plain = StringToBytes(JSON.stringify(policy));
json['policy'] = String(Base64.encodeBase64String(plain));
var result = appIdentity.signForApp(Base64.encodeBase64(plain, false));
json['signature'] = String(Base64.encodeBase64String(result.getSignature()));
The code above first provides the relevant fields.
Then it creates a policy object, stringifies it, and converts it into a byte array (you can use .getBytes in Java; I had to write a function for JavaScript).
A Base64-encoded version of this array populates the policy field.
The policy is then signed using the appidentity package. Finally the signature is Base64-encoded, and we are done.
On the client side, all members of the json object will be added to the form, except uri, which is the form's address.
var formData = new FormData(document.forms.namedItem('upload'));
var blob = new Blob([thedata], {type: 'application/json'})
var keys = ['GoogleAccessId', 'key', 'bucket', 'Expires', 'success_action_redirect', 'policy', 'signature']
for(field in keys)
formData.append(keys[field], url[keys[field]])
formData.append('file', blob)
var rest = new XMLHttpRequest();
rest.open('POST', url.uri)
rest.onload = callback_function
rest.send(formData)
If you do not provide a redirect, the response status will be 204 on success. But if you do redirect, the status will be 200. If you get 403 or 400, something about the signature or policy may be wrong. Look at the responseText; it is often helpful.
A few things to note:
Both POST and PUT have a signature field, but these mean slightly different things. In case of POST, this is a signature of the policy.
PUT has a base URL which contains the key (object name), but the URL used for POST may only include the bucket name.
PUT requires expiration as seconds from UNIX epoch, but POST wants it as an ISO string.
A PUT signature should be URL encoded (Java: by wrapping it with a URLEncoder.encode call). But for POST, Base64 encoding suffices.
By extension, for POST do Base64.encodeBase64String(result.getSignature()), and do not use the Base64.encodeBase64URLSafeString function
You cannot pass extra headers with the POST; only those listed in the POST page are allowed.
If you provide a URL for success_action_redirect, it will receive a GET with the key, bucket and eTag.
The other benefit of using POST is that you can provide size limits (sketched below). With PUT, however, if a file breaches your size restriction, you can only delete it after it has been fully uploaded, even if it is multiple terabytes.
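For illustration (not part of the original answer): such a size limit is expressed as an extra condition in the same policy document. A hedged sketch extending the policy object from the code above; the limits are arbitrary example values:

var policy = {'expiration': json.Expires
  , 'conditions': [
    ["starts-with", "$key", json.key],
    {'bucket': json.bucket},
    {"success_action_redirect": json.success_action_redirect},
    // reject uploads larger than 10 MiB before any data is stored
    ["content-length-range", 0, 10 * 1024 * 1024]
  ]
};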
What is wrong with createUploadURL?
The method above is a manual createUploadURL.
But:
You don't get those __BlobInfo__ objects which create many indexes and are indelible. This irritates me as it wastes a lot of space (which reminds me of a separate issue: issue 4231. Please go give it a star)
You can provide your own object name, which helps create folders in your bucket.
You can provide different expiration dates for each link.
For the very, very few JavaScript App Engine developers:
function StringToBytes(sz) {
    var map = function(x) { return x.charCodeAt(0); };
    return sz.split('').map(map);
}
You can include success_action_redirect in a policy document when you use the GCS POST object API.
Docs here: https://cloud.google.com/storage/docs/xml-api/post-object
Python example here: https://github.com/voscausa/appengine-gcs-upload
Example callback result:
def ok(self):
""" GCS upload success callback """
logging.debug('GCS upload result : %s' % self.request.query_string)
bucket = self.request.get('bucket', default_value='')
key = self.request.get('key', default_value='')
key_parts = key.rsplit('/', 1)
folder = key_parts[0] if len(key_parts) > 1 else None
A solution I am using is to turn on Object Change Notifications. Any time an object is added, a POST is sent to a URL, in my case a servlet in my project.
In doPost() I get all the information about the object added to GCS, and from there I can do whatever I need.
This worked great in my App Engine project.

Setting the header to be content image in CakePHP

I am writing an action in a controller where, in a certain case, I want to output raw image data directly, and I want to set the Content-Type header appropriately. However, I think the header is already being set earlier by CakePHP (I am setting render to false).
Is there a way to get around this? Thanks!
As said before, CakePHP does not send headers when render is false. Beware, though, that any code doing an echo will send headers (unless you are using output buffering). This includes messages from PHP (warnings etc.).
Sending the file can be done in numerous ways, but there are two basic ways:
Send the file using plain PHP
function send_file_using_plain_php($filename) {
    // Avoids hard-to-understand error messages
    if (!file_exists($filename)) {
        throw new RuntimeException("File $filename not found");
    }
    $fileinfo = new finfo(FILEINFO_MIME);
    $mime_type = $fileinfo->file($filename);
    // The call above also returns the charset; if you don't want that,
    // keep only the first part (the bare MIME type):
    $mime_type = reset(explode(";", $mime_type));
    header("Content-Type: $mime_type");
    header("Content-Length: ".filesize($filename));
    readfile($filename);
}
Use X-Sendfile and have the Webserver serve the file
// This was only tested with nginx
function send_file_using_x_sendfile($filename) {
    // Avoids hard-to-understand error messages
    if (!file_exists($filename)) {
        throw new RuntimeException("File $filename not found");
    }
    $fileinfo = new finfo(FILEINFO_MIME);
    $mime_type = $fileinfo->file($filename);
    // The call above also returns the charset; if you don't want that,
    // keep only the first part (the bare MIME type):
    $mime_type = reset(explode(";", $mime_type));
    header("Content-Type: $mime_type");
    // The slash makes it absolute (relative to the document root of your server)
    // For apache and lighttpd use:
    header("X-Sendfile: /$filename");
    // or for nginx: header("X-Accel-Redirect: /$filename");
}
The first function occupies one PHP process/thread while the data is being sent, and supports no Range requests or other advanced HTTP features. It should therefore only be used with small files, or on very small sites.
Using X-Sendfile you get all of that, but you need to know which webserver is running, and maybe even a configuration change is needed. Especially when using lighttpd or nginx this really pays off performance-wise, because these webservers are extremely good at serving static files from disk.
Both functions support files outside the document root of the webserver. In nginx there are so-called "internal locations" (http://wiki.nginx.org/HttpCoreModule#internal), which can be used with the X-Accel-Redirect header. Even rate throttling is possible; have a look at http://wiki.nginx.org/XSendfile.
If you use apache, there is mod_xsendfile, which implements the feature needed by the second function.
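To illustrate the nginx side (this is a sketch only; the location name and file path are hypothetical), an internal location that X-Accel-Redirect can point at might look like this:

# nginx: files under /var/www/protected/ can only be served via X-Accel-Redirect
location /protected/ {
    internal;                   # not reachable directly by clients
    alias /var/www/protected/;  # where the real files live
}

The PHP side would then send header("X-Accel-Redirect: /protected/" . basename($filename)); instead of the X-Sendfile header.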
It's not $this->render(false); it's $this->autoRender = false. The header is not sent in the controller action unless you echo something out.
If render is false, Cake will not send a header.
You can rely on plain ol' PHP here.
GIF:
header('Content-Type: image/gif');
readfile('path/to/myimage.gif');
JPEG:
header('Content-Type: image/jpeg');
readfile('path/to/myimage.jpg');
PNG:
header('Content-Type: image/png');
readfile('path/to/myimage.png');
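Putting the pieces together, a minimal CakePHP 2.x-style sketch of such an action (the ImagesController, the raw() action and the file path are all hypothetical):

<?php
class ImagesController extends AppController {

    public function raw($name) {
        // stop CakePHP from rendering a view/layout for this action
        $this->autoRender = false;

        // hypothetical location of the image on disk
        $path = APP . 'files' . DS . basename($name) . '.png';
        if (!file_exists($path)) {
            throw new NotFoundException("Image not found");
        }

        // plain PHP from here on: emit the headers, then the file
        header('Content-Type: image/png');
        header('Content-Length: ' . filesize($path));
        readfile($path);
    }
}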
