Errors when retrieving request body in Mongoose web server (C)

I'm working with an old version of Mongoose (the open source web server) in C, which did not provide native access to the request payload. In order to support POST and PUT requests, I modified it manually: after Mongoose reads the headers, I check whether Content-Length is set and, if so, I read from the socket again for Content-Length bytes.
findCL = strstr(conn->buf, "Content-Length:");
if (findCL)
{
    // skip the "Content-Length:" prefix (15 characters)
    findCL += 15;
    findCLEnd = strchr(findCL, delimiter);
    sizeLen = findCLEnd - findCL;
    strncpy(CLSize, findCL, sizeLen);
    CLSize[sizeLen] = '\0';
    size = strtoll(CLSize, NULL, 10);
    if (size > 0)
    {
        // read more data from the socket, then copy out the body
        conn->content_len = read_request(NULL, conn->client.sock, conn->ssl,
                                         conn->buf, conn->buf_size, &conn->data_len);
        conn->content_len = size;
        perror("recv");
        body = malloc(size + 1);
        strncpy(body, conn->buf + conn->request_len, size);
        body[size] = '\0';
    }
}
So far so good: the code is not beautiful, but it does the dirty job. The problem is that while the code works fine under a debugger, when it runs as a plain background process the body is not parsed correctly: sometimes the resulting body is truncated, other times it is just empty. The problem seems to be triggered by clients issuing requests in quick succession.
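A plausible explanation for the truncation is that a single read from the socket is not guaranteed to return Content-Length bytes: recv() returns whatever has arrived so far, so under fast client traffic the body can be split across several reads. Below is a minimal sketch of a loop that keeps reading until the full body has arrived, assuming a plain blocking socket (the SSL path would need SSL_read() instead); read_full_body and its parameters are illustrative, not mongoose API:

#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

/* A minimal sketch, NOT mongoose API: keep calling recv() until the
 * whole Content-Length worth of body has arrived or the peer closes.
 * `already` is the number of body bytes that were received together
 * with the headers. */
static ssize_t read_full_body(int sock, char *body, size_t size, size_t already)
{
    size_t got = already;

    while (got < size) {
        ssize_t n = recv(sock, body + got, size - got, 0);
        if (n == 0)
            break;            /* peer closed the connection */
        if (n < 0) {
            if (errno == EINTR)
                continue;     /* interrupted by a signal, retry */
            return -1;        /* real error */
        }
        got += (size_t) n;
    }
    return (ssize_t) got;     /* may be < size if the peer closed early */
}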

Not a real answer, but that's how I solved it.
The web server module started as a Mongoose web server; I ported it to Civetweb some months ago, hoping that the latest versions of that project supported body parsing natively. They did not, and I had to implement it manually. After some time I discovered that Civetweb had issues serving JavaScript files to IE8 browsers (for some mysterious reason). I reverted to Mongoose and everything worked except the body parsing, which led to the code above and the subsequent errors. I finally solved it by going back to Civetweb, stable version 1.9.1, keeping the manual body-parsing procedure. This fixed both the truncated request bodies and the truncated served JavaScript files. Probably the first Civetweb version I used was a not-so-stable beta, although I wonder how it managed to work for months.
I still haven't checked the diffs between the two versions, but I expect the cause to be something related to maximum response sizes depending on platform, request headers, or something similar.

Related

Kamailio removing headers from reply

I am working on a project where I need to send back a 302 reply. Everything seems to work, except that I can't remove certain headers, i.e. From, Contact, etc. (I don't want to remove them completely, but rather substitute them with my own versions). I use KEMI with Lua to do so:
KSR.hdr.remove("From")
As I mentioned, this does not work (while other functions from hdr work fine in the same context, namely KSR.hdr.append_to_reply(...)).
I decided to look at the Kamailio source code and found the following lines in the kemi.c file:
int sr_kemi_hdr_remove(sip_msg_t *msg, str *hname)
{
    ...
        anchor = del_lump(msg, hf->name.s - msg->buf, hf->len, 0);
        if (anchor == 0) {
            LM_ERR("cannot remove hdr %.*s\n", hname->len, hname->s);
            return -1;
        }
    }
    return 1;
}
The last parameter that del_lump takes is of type _hdr_types_t, which is an enum of the different header types. In my case, there were three headers I was working with:
From (type 4)
Contact (type 7)
Other (type 0)
So my question is, why is that function hardcoded to pass only the OTHER header type, rather than the actual type, e.g. From or Contact? Is that to safeguard against breaking the SIP request (by inadvertently removing required headers)?
And as a follow up question, is it even possible to remove From and Contact from reply messages?
I assume the 302 is generated by Kamailio; several headers are then copied from the incoming request, like From, To, Call-Id, CSeq. Therefore, if you want a different From in the generated reply, change it in the request and then do msg_apply_changes().
Contact headers for redirect (3xx) replies are generated from the destination set of the request (modified R-URI and branches that can be created by append_branch(), lookup("location") etc.).
More headers can be added to the generated replies using append_to_reply().
Note that I gave the names of the functions for the native kamailio.cfg, but you can find them exported to KEMI as well (by the core or by the textops and textopsx modules).
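A rough KEMI/Lua sketch of the above, assuming the uac, textopsx and sl modules are loaded and exported to KEMI (uac_replace_from is one way to "change it in the request"; the URIs are placeholders):

-- A rough sketch, not a drop-in config: uac_replace_from rewrites From
-- on the incoming request, msg_apply_changes() commits the edit so the
-- generated 302 copies the new value.
function ksr_request_route()
    KSR.uac.uac_replace_from("", "sip:myfrom@example.com")
    KSR.textopsx.msg_apply_changes()
    -- the Contact of the 3xx is built from the destination set,
    -- i.e. the (possibly rewritten) R-URI and any extra branches
    KSR.pv.sets("$ru", "sip:target@gw.example.com")
    -- extra headers can still be attached with append_to_reply()
    KSR.hdr.append_to_reply("X-Redirect-Reason: policy\r\n")
    KSR.sl.sl_send_reply(302, "Moved Temporarily")
end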

How to read the body of an SSL response using BIO

Hello, I've got a problem trying to create a Twitter console app. When getting a response, there is some garbage in it.
Here my code is:
char new_request[1024] = "GET /1.1/statuses/user_timeline.json?count=4&screen_name=twitterapi HTTP/1.1\r\nHost: api.twitter.com\r\nUser-Agent: twitter-terminal-app$
strcat(new_request, bearer);
strcat(new_request, "\r\nAccept-Encoding: gzip\r\n\r\n\0");
BIO_write(bio, new_request, strlen(new_request));
printf("%s\n", new_request);
p = BIO_read(bio, ans, 2047); // Getting header
ans[p] = 0;
printf("%s\n", ans);
char ans2[100000] = "";
p = BIO_read(bio, ans2, 10000); // Getting body
BIO_should_retry(bio);
FILE *file = fopen("result.txt", "w+");
fputs(ans2, file);
printf("%s\n%i\n", ans2, p);
The answer I have in body looks like that:
▒▒is▒HǿJ▒_
▒▒▒▒V▒▒▒▒.▒▒T▒▒f
▒Tm▒m
▒P▒▒1▒|▒}▒%▒▒▒▒Q^▒▒▒▒?▒cx{Չ
F%▒C*;▒▒ŴD/#▒L▒▒▒▒?L▒A▒▒▒N▒▒ĝ▒▒▒▒$▒El▒▒▒▒X▒▒B0▒~%▒▒5▒˲#Y)GY▒ctz▒h&-▒▒▒O>AG▒▒▒l6b▒z▒:K
▒T▒\▒▒2+▒b▒H▒&B ▒▒▒B▒qV▒▒▒▒▒▒
▒▒a_▒l▒?#▒o▒▒▒W▒▒u%▒▒▒;▒1M▒v▒▒L▒▒
Maybe the answer is encoded somehow and that's the problem. I tried to surf dev.twitter.com but didn't find any answer. If I use BIO_gets() instead of BIO_read(), the result is -2. Any ideas?
strcat(new_request, "\r\nAccept-Encoding: gzip\r\n\r\n\0");
...
p = BIO_read(bio, ans, 2047); // Getting header
...
p = BIO_read(bio, ans2, 10000); // Getting body
... Maybe the answer is encoded somehow
You've declared in your HTTP request that you support data compressed with gzip, and that's why the server has sent you the data compressed. If you did not just read and ignore the HTTP response header, but actually took a look at it, you would probably notice:
Content-Encoding: gzip
Apart from explicitly allowing compressed data, you are doing an HTTP/1.1 request. This means that you also need to be able to deal with chunked transfer encoding. HTTP/1.1 also implies that connections are persistent by default, so you need to properly parse the header to find out where the response really ends, instead of relying on the connection closing. You also cannot rely on a fixed size for the header, or on the header and body being readable with separate BIO_read calls: you might need multiple reads for the body, or the first body bytes might already be included in the single read you do for the header.
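To illustrate just the framing part (a minimal sketch, not a complete HTTP parser; it assumes a connected BIO *bio and omits buffer growth): the header has to be accumulated until the blank line, and whatever follows it in the same read is already body.

#include <string.h>
#include <openssl/bio.h>

/* Accumulate reads until the "\r\n\r\n" header/body separator shows up.
 * Returns a pointer into buf at the first body byte, or NULL on error.
 * The body bytes may still be gzip'd and/or chunked and need decoding. */
static char *read_until_body(BIO *bio, char *buf, int bufsize)
{
    int used = 0;
    char *body = NULL;

    while (body == NULL && used < bufsize - 1) {
        int n = BIO_read(bio, buf + used, bufsize - 1 - used);
        if (n <= 0) {
            if (BIO_should_retry(bio))
                continue;      /* transient condition, retry the read */
            return NULL;       /* real error or connection closed */
        }
        used += n;
        buf[used] = '\0';
        body = strstr(buf, "\r\n\r\n");   /* end of header found? */
    }
    return body ? body + 4 : NULL;  /* first body bytes, may be empty so far */
}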
Unless you really want to deal with all these problems yourself, I recommend you use an existing library which implements this properly, and thus gets you the correct response reliably and not by chance.
If you instead want to learn how this is done, I recommend you start by learning more about HTTP, e.g. by reading the Wikipedia entry, and then continue with the standards referenced there. I suggest starting with HTTP/1.0, since it is simpler than HTTP/1.1.

Provide a callback URL in Google Cloud Storage signed URL

When uploading to GCS (Google Cloud Storage) using the BlobStore's createUploadURL function, I can provide a callback together with header data that will be POSTed to the callback URL.
There doesn't seem to be a way to do that with GCS's signed URLs.
I know there is Object Change Notification, but that won't allow the user to provide upload-specific information in the header of a POST, the way it is possible with createUploadURL's callback.
My feeling is that if createUploadURL can do it, there must be a way to do it with signed URLs, but I can't find any documentation on it. I was wondering if anyone may know how createUploadURL achieves that callback-calling behavior.
PS: I'm trying to move away from createUploadURL because of the __BlobInfo__ entities it creates, which for my specific use case I do not need, and which somehow seem to be indelible and are wasting storage space.
Update: It worked! Here is how:
Short Answer: It cannot be done with PUT, but can be done with POST
Long Answer:
If you look at the signed-URL page, in front of HTTP_Verb, under Description, there is a subtle note that the page is only relevant to GET, HEAD, PUT, and DELETE; POST is a completely different game. I had missed this, but it turned out to be very important.
There is a whole page of HTTP headers that does not list an important header that can be used with POST: that header is success_action_redirect, as voscausa correctly answered.
On the POST page, Google "strongly recommends" using PUT unless you are dealing with form data. However, POST has a few nice features that PUT does not have. Maybe they worry that POST gives us too many strings to hang ourselves with.
But I'd say it is totally worth dropping createUploadURL, and writing your own code to redirect to a callback. Here is how:
Code:
If you are working in Python, voscausa's code is very helpful.
I'm using apejs to write javascript in a Java app, so my code looks like this:
var exp = new Date()
exp.setTime(exp.getTime() + 1000 * 60 * 100); //100 minutes
json['GoogleAccessId'] = String(appIdentity.getServiceAccountName())
json['key'] = keyGenerator()
json['bucket'] = bucket
json['Expires'] = exp.toISOString();
json['success_action_redirect'] = "https://" + request.getServerName() + "/test2/";
json['uri'] = 'https://' + bucket + '.storage.googleapis.com/';
var policy = {'expiration': json.Expires
, 'conditions': [
["starts-with", "$key", json.key],
{'Expires': json.Expires},
{'bucket': json.bucket},
{"success_action_redirect": json.success_action_redirect}
]
};
var plain = StringToBytes(JSON.stringify(policy))
json['policy'] = String(Base64.encodeBase64String(plain))
var result = appIdentity.signForApp(Base64.encodeBase64(plain, false));
json['signature'] = String(Base64.encodeBase64String(result.getSignature()))
The code above first fills in the relevant fields.
Then it creates a policy object, stringifies it, and converts the string into a byte array (you can use .getBytes in Java; for JavaScript I had to write a function, included at the end).
A base64-encoded version of this array populates the policy field.
Then the policy is signed using the appidentity package. Finally, the signature is base64-encoded, and we are done.
On the client side, all members of the json object will be added to the form, except uri, which is the form's address.
var formData = new FormData(document.forms.namedItem('upload'));
var blob = new Blob([thedata], {type: 'application/json'})
var keys = ['GoogleAccessId', 'key', 'bucket', 'Expires', 'success_action_redirect', 'policy', 'signature']
for(field in keys)
formData.append(keys[field], url[keys[field]])
formData.append('file', blob)
var rest = new XMLHttpRequest();
rest.open('POST', url.uri)
rest.onload = callback_function
rest.send(formData)
If you do not provide a redirect, the response status will be 204 on success; if you do redirect, the status will be 200. If you get 403 or 400, something about the signature or policy is probably wrong: look at the responseText, it is often helpful.
A few things to note:
Both POST and PUT have a signature field, but these mean slightly different things. In case of POST, this is a signature of the policy.
PUT has a base URL which contains the key (object name), but the URL used for POST may only include the bucket name.
PUT requires expiration as seconds from UNIX epoch, but POST wants it as an ISO string.
A PUT signature should be URL encoded (Java: by wrapping it with a URLEncoder.encode call). But for POST, Base64 encoding suffices.
By extension, for POST do Base64.encodeBase64String(result.getSignature()), and do not use the Base64.encodeBase64URLSafeString function
You cannot pass extra headers with the POST; only those listed in the POST page are allowed.
If you provide a URL for success_action_redirect, it will receive a GET with the key, bucket and eTag.
The other benefit of using POST is that you can provide size limits (see the sketch just below this list). With PUT, if a file breaches your size restriction, you can only delete it after it has been fully uploaded, even if it is multiple terabytes.
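For reference, the size limit is expressed as a content-length-range condition in the policy document. A sketch extending the policy object from the code above (the 10 MB bound is an arbitrary example):

// Sketch: same policy as above, plus a size limit in bytes.
// Uploads outside [0, 10485760] are rejected before the data is stored.
var policy = {
    'expiration': json.Expires,
    'conditions': [
        ["starts-with", "$key", json.key],
        {'bucket': json.bucket},
        {"success_action_redirect": json.success_action_redirect},
        ["content-length-range", 0, 10485760]  // 10 MB cap
    ]
};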
What is wrong with createUploadURL?
The method above is a manual createUploadURL.
But:
You don't get those __BlobInfo__ objects, which create many indexes and are indelible. This irritates me, as it wastes a lot of space (which reminds me of a separate issue: issue 4231. Please go give it a star).
You can provide your own object name, which helps create folders in your bucket.
You can provide different expiration dates for each link.
For the very very few javascript app-engineers:
function StringToBytes(sz) {
    var map = function(x) { return x.charCodeAt(0); };
    return sz.split('').map(map);
}
You can include success_action_redirect in a policy document when you use GCS POST Object.
Docs here: https://cloud.google.com/storage/docs/xml-api/post-object
Python example here: https://github.com/voscausa/appengine-gcs-upload
Example callback result:
def ok(self):
""" GCS upload success callback """
logging.debug('GCS upload result : %s' % self.request.query_string)
bucket = self.request.get('bucket', default_value='')
key = self.request.get('key', default_value='')
key_parts = key.rsplit('/', 1)
folder = key_parts[0] if len(key_parts) > 1 else None
A solution I am using is to turn on Object Change Notifications. Any time an object is added, a POST is sent to a URL (in my case, a servlet in my project).
In the doPost() I get all the info about the object added to GCS, and from there I can do whatever I need.
This worked great in my App Engine project.
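A minimal sketch of what that servlet might look like (class name and handling are illustrative; Object Change Notification delivers one POST per change, with the X-Goog-Resource-State header describing what happened):

import java.io.BufferedReader;
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class GcsNotificationServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // "sync", "exists" or "not_exists", depending on the change
        String state = req.getHeader("X-Goog-Resource-State");
        // the body is JSON describing the affected object
        StringBuilder body = new StringBuilder();
        BufferedReader reader = req.getReader();
        for (String line; (line = reader.readLine()) != null; )
            body.append(line);
        // parse body here and act on the new object
        resp.setStatus(HttpServletResponse.SC_OK); // acknowledge so GCS stops retrying
    }
}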

chan->cdr no data after upgrade from Asterisk 1.4.21

I have a legacy Asterisk application in C which does authentication of users, routing, and billing using MySQL. I have kept it on Asterisk 1.4.21 because none of the CDR data is returned in newer versions of Asterisk.
Apparently there were some changes in 1.4.22 (https://issues.asterisk.org/jira/browse/ASTERISK-13064) that completely changed the way CDRs are handled. Unfortunately, no helpful information was given on how to properly migrate existing code.
They changed the order of execution: the 'h' extension is called and the CDR data is reset.
My code:
ast_log(LOG_NOTICE,"Dialing string: '%s'\n", dialstr);
app = pbx_findapp("Dial");
if (app)
res = pbx_exec(chan, app, dialstr);
ast_log(LOG_NOTICE,"Return from pbx_exec '%i', Disposition: '%s'\n", res, ast_cdr_disp2str(chan->cdr->disposition));
Other parts of the code handle chan->cdr->billsec etc, but it always gives 0 values.
After a successful call I always get this log from the CLI:
Return from pbx_exec '-1', Disposition: 'NO ANSWER'
while the same code works fine on 1.4.21.
One solution I have heard of is to use ast_reset() before Dial, but I am not sure how to implement it.
Any help on how to adapt this application?
You can just get the DIALSTATUS variable; that is enough for your application and it will be supported in future releases.
pbx_builtin_getvar_helper(chan, "DIALSTATUS");
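For example (a sketch against the 1.4-era API; check the exact locking idiom for your version):

/* After pbx_exec() of Dial returns, read DIALSTATUS instead of
 * relying on chan->cdr->disposition. */
const char *dialstatus;

ast_channel_lock(chan);
dialstatus = pbx_builtin_getvar_helper(chan, "DIALSTATUS");
ast_log(LOG_NOTICE, "DIALSTATUS: '%s'\n",
        dialstatus ? dialstatus : "(unset)");
ast_channel_unlock(chan);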

Fiddler: is it possible to compress/gzip the request body?

Great tool, it does everything I need. I love its Transform tab, which allows compression of the response. But what about the request? It seems like a simple thing, but I don't see that functionality. Am I missing something?
Fiddler Web Debugger, v2.3.4.4.
You can write a bit of script to compress the request body. Click Rules > Customize Rules, and add something like this:
static function OnBeforeRequest(oSession: Session) {
    if (oSession.requestBodyBytes != null && oSession.requestBodyBytes.Length > 0) {
        // compress the body, then fix up the request headers to match
        oSession.requestBodyBytes = Utilities.GzipCompress(oSession.requestBodyBytes);
        oSession.oRequest["Content-Length"] = oSession.requestBodyBytes.Length.ToString();
        oSession.oRequest["Content-Encoding"] = "gzip";
    }
}
However, I'm not aware of any servers that actually support compressed request bodies. There's no good way for a server to signal that it accepts compressed requests, and zip-bomb attacks are a real threat for servers.
