Making HTTPS requests from ESP32 - c

I am making a POST request from my ESP32-S2 Kaluga kit. I have tested the HTTP request while running a server program on my LAN. I am using esp_http_client_handle_t and esp_http_client_config_t from esp_http_client.h to do this.
Now I have an HTTPS API set up in AWS API Gateway, and I get the following error with HTTPS:
E (148961) esp-tls-mbedtls: No server verification option set in esp_tls_cfg_t structure. Check esp_tls API reference
E (148961) esp-tls-mbedtls: Failed to set client configurations, returned [0x8017] (ESP_ERR_MBEDTLS_SSL_SETUP_FAILED)
E (148971) esp-tls: create_ssl_handle failed
E (148981) esp-tls: Failed to open new connection
E (148981) TRANSPORT_BASE: Failed to open a new connection
E (148991) HTTP_CLIENT: Connection failed, sock < 0
How can I solve this? Thank you
Edit:
Following is the code I use.
I create an HTTP client for the POST request:
esp_err_t client_event_get_handler(esp_http_client_event_handle_t evt)
{
    switch (evt->event_id)
    {
    case HTTP_EVENT_ON_DATA:
        printf("HTTP GET EVENT DATA: %s", (char *)evt->data);
        break;
    default:
        break;
    }
    return ESP_OK;
}

static void post_rest_function(char *payload, int len)
{
    esp_http_client_config_t config_post = {
        .url = SERVER_URL,
        .method = HTTP_METHOD_POST,
        .event_handler = client_event_get_handler,
        .auth_type = HTTP_AUTH_TYPE_NONE,
        .transport_type = HTTP_TRANSPORT_OVER_TCP
    };
    esp_http_client_handle_t client = esp_http_client_init(&config_post);
    esp_http_client_set_post_field(client, payload, len);
    esp_http_client_set_header(client, "Content-Type", "image/jpeg");
    esp_http_client_perform(client);
    esp_http_client_cleanup(client);
}
and I use it in main with an image payload:
void app_main() {
    ....
    post_rest_function((char *)pic->buf, pic->len);
    ....
}

You need a certificate to make HTTPS requests. In case you don't want to implement this, just edit your sdkconfig:
"Allow potentially insecure options" -> true
"Skip server certificate verification by default" -> true
Careful, this is unsafe.
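For reference, a sketch of the corresponding sdkconfig entries (option names taken from the ESP-TLS component Kconfig; verify them against your IDF version):

# WARNING: disables TLS server verification - for testing only
CONFIG_ESP_TLS_INSECURE=y
CONFIG_ESP_TLS_SKIP_SERVER_CERT_VERIFY=y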

Additionally, you may choose to include the certificates to make sure that your transfer is safe (the server is valid).
You can obtain the root SSL certificate of your host as shown in the linked video (watch through to the 56-minute mark for a complete explanation).
OR you may use the certificate bundle that Espressif provides in the IDF framework. For that:
in your code add #include "esp_crt_bundle.h"
and in your esp_http_client_config_t add these:
.transport_type = HTTP_TRANSPORT_OVER_SSL, //Specify transport type
.crt_bundle_attach = esp_crt_bundle_attach, //Attach the certificate bundle
after which the process remains quite the same.
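Putting it together, your post_rest_function would look roughly like this (a sketch assuming SERVER_URL now points at the https:// endpoint, and that CONFIG_MBEDTLS_CERTIFICATE_BUNDLE is enabled, as it is by default):

#include "esp_http_client.h"
#include "esp_crt_bundle.h"

static void post_rest_function(char *payload, int len)
{
    esp_http_client_config_t config_post = {
        .url = SERVER_URL,                          // must be an https:// URL
        .method = HTTP_METHOD_POST,
        .event_handler = client_event_get_handler,
        .transport_type = HTTP_TRANSPORT_OVER_SSL,  // TLS instead of plain TCP
        .crt_bundle_attach = esp_crt_bundle_attach, // attach the built-in root CA bundle
    };
    esp_http_client_handle_t client = esp_http_client_init(&config_post);
    esp_http_client_set_post_field(client, payload, len);
    esp_http_client_set_header(client, "Content-Type", "image/jpeg");
    esp_http_client_perform(client);
    esp_http_client_cleanup(client);
}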
The video I linked above is quite helpful; I recommend you watch the whole thing :)

Related

nodejs modify server to accept only requests from array of IPs

I have a TCP server which I need to modify to accept only requests from predefined IPs. My idea was to create an array containing all allowed IPs, but how do I do the check, and how do I put this check around my existing code?
code:
// Load the TCP Library
var net = require('net')
// Start a TCP Server
net.createServer(function (socket) {
    socket.setKeepAlive(true)
    // TODO: Add mysql connection list entry
    console.log('connected', socket.remoteAddress)
    socket.on('close', function (err) {
        if (err) throw err;
        // TODO: Add mysql connection list entry
        console.log('disconnected', socket.remoteAddress)
    })
}).listen(5000);
// Put a friendly message on the terminal of the server.
console.log("Server running at port 5000");
I think this is the wrong tool for the job. You should configure access to the application using the system firewall. Firewalls allow you to:
select IP ranges in a flexible manner
e.g. blocking as well as allowing
work with different IP versions
work with different protocols
integrate better into the IT infrastructure
However, if you don't have access to the firewall and you need something quick and dirty, you can easily kick connections that are not in your list by checking the IP address against it:
var allow_list = ['10.1.1.1', '10.1.1.2'];
var net = require('net')
net.createServer(function (socket) {
    if (allow_list.indexOf(socket.remoteAddress) < 0) {
        socket.destroy();
        return;
    }
    socket.setKeepAlive(true)
    // do stuff
}).listen(5000);
console.log("Server running at port 5000");

Cannot receive multiple responses to HTTP requests with lwIP Raw TCP connection

I am unable to receive the responses to multiple HTTP requests when I attempt to enqueue data to send to a server.
We are able to establish a connection to a server and immediately issue an HTTP request inside the connected_callback() function (called as soon as a connection to the server is established) using the tcp_write() function. However, if I attempt to generate two or more HTTP requests using the following syntax:
err_t connected_callback(void *arg, struct tcp_pcb *tpcb, err_t err) {
    xil_printf("Connected to JUPITER server\n\r");
    LWIP_UNUSED_ARG(arg);
    /* set callback values & functions */
    tcp_sent(tpcb, sent_callback);
    tcp_recv(tpcb, recv_callback);
    if (err == ERR_OK) {
        char* request = "GET /circuits.json HTTP/1.1\r\n"
                        "Host: jupiter.info.polymtl.ca\r\n\r\n";
        (void) tcp_write(tpcb, request, 100, 1);
        request = "GET /livrable1/simulation.dee HTTP/1.1\r\n"
                  "Host: jupiter.info.polymtl.ca\r\n\r\n";
        (void) tcp_write(tpcb, request, 100, 1);
        tcp_output(tpcb);
        xil_printf("tcp_write \n");
    } else {
        xil_printf("Unable to connect to server");
    }
    return err;
}
I manage to send all of the data to the server, but I never receive any data for the second HTTP request. I manage to print the payload of the first request (the JSON file), but I never receive anything for the .dee file. Are there any specific instructions for enqueueing HTTP requests with lwIP, or am I missing something?
If you require any more code to accurately analyze my problem, feel free to say so.
Thanks!
The problem I see is that you have a double \r\n combination at the end of your request header.
You need the \r\n\r\n only at the end of your header; right now you have it twice. Remove it from the first write.

301/302 error in http c client sockets

I am making an HTTP client socket in C. So far I have made a custom URL parser, and now the problem is connecting to absolute URLs. The program works fine with relative URLs but not absolute ones.
Here is a sample output for the results of both absolute and relative URLs:
absolute URL: http://www.google.com
relative URL: http://techpatterns.com/downloads/firefox/useragentswitcher.xml
For the absolute URL it gives a 301/302 status code, while for the relative URL the status is 200 OK.
Here is the code for the key areas:
char ip[100], *path, *domain, *abs_domain, *proto3;
char *user_agent = "Mozilla/5.0 (Windows NT 6.2; WOW64; rv:31.0) Gecko/20100101 Firefox/31.0";
char *accept_type = "Accept: text/html, application/xhtml+xml, */*\r\nAccept-Language: en-US\r\n";
char *encoding = "Accept-Encoding: gzip, deflate\r\n";
char *proxy_conn = "Proxy-Connection: Keep-Alive\r\n";
char hostname[1000];
url:
fgets(hostname, sizeof(hostname), stdin);
for (i = 0; i < strlen(hostname); i++) { // remove new line
    if (hostname[i] == '\n') {
        hostname[i] = '\0';
    }
}
proto3 = get_protocol(hostname); // get protocol i.e. http, ftp, etc
// get domain i.e. http://mail.google.com/index -> mail.google.com
// http://www.google.com/ssl_he -> www.google.com
domain = get_domain(hostname);
if (strlen(domain) == 0) {
    printf("invalid url\n\n");
    goto url;
}
abs_domain = get_abs_domain(hostname); // gets abs domain google.com, facebook.com etc
path = get_path(hostname);
// getting the ip address from the hostname
if ((he = gethostbyname(abs_domain)) == NULL)
{
    printf("gethostbyname failed : %d", WSAGetLastError());
    goto url;
}
// Cast the h_addr_list to in_addr, since h_addr_list also has the ip address in long format only
addr_list = (struct in_addr **) he->h_addr_list;
for (i = 0; addr_list[i] != NULL; i++)
{
    // Return the first one;
    strcpy(ip, inet_ntoa(*addr_list[i]));
}
clientService.sin_addr.s_addr = inet_addr(ip);
clientService.sin_family = AF_INET;
clientService.sin_port = htons(80);
sprintf(sendbuf, "GET /%s HTTP/1.1\r\n%sUser-Agent: %s\r\nHost: %s\r\n\r\n", path, accept_type, user_agent, abs_domain);
Brief explanation of the code:
i.e. if the URL entered by the user is http://mail.deenze.com/control_panel/index.php
the protocol will be -> http
the domain will be -> mail.deenze.com
the abs_domain will be -> deenze.com
the path will be -> control_panel/index.php
Finally, these values, in conjunction with the user agent, are used to send the request.
301 and 302 status codes are redirects, not errors. They indicate that you should try the request at a different URL instead.
In this case, it looks like despite the fact that you entered the URL http://www.google.com/, the Host header you are sending only includes google.com. Google is sending you back a redirect telling you to use www.google.com instead.
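For illustration, the redirect exchange looks roughly like this (a typical response; the exact Location value may differ):

GET / HTTP/1.1
Host: google.com

HTTP/1.1 301 Moved Permanently
Location: http://www.google.com/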
I notice that you seem to have a get_abs_domain function that is stripping the www off; there is no reason you should do this. www.google.com and google.com are different hostnames, and may give you entirely different contents. In practice, most sites will give you the same result for them, but you can't depend on that; some will redirect from one to the other, some will simply serve up the same content, and some may only work at one or the other.
Instead of trying to rewrite one to the other, you should just follow whatever redirect you are returned.
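As a rough sketch of that change against the posted code (reusing the asker's variables; it assumes domain holds the full hostname returned by get_domain, e.g. www.google.com):

// Resolve the full hostname the user typed, not the stripped abs_domain...
he = gethostbyname(domain);
// ...and send the same name in the Host header.
sprintf(sendbuf, "GET /%s HTTP/1.1\r\n%sUser-Agent: %s\r\nHost: %s\r\n\r\n",
        path, accept_type, user_agent, domain);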
I would recommend using an existing HTTP client library rather than trying to write your own (unless this is just an exercise for your own edification). For example, there's cURL if you want to be portable or HttpClient if you only need to work on Windows (based on your screenshots, I'm assuming that's the platform you're using). There is a lot of complexity in writing an HTTP client that can actually handle most of the web; SSL, compression, redirects, chunked transfer encoding, etc.
@Brian Campbell, I think the problem was the www, because if I use www.google.com it gives me a redirect URL (https://www.google.com/?gws_rd=ssl), same as my browser. But because it is HTTPS, I think I will have to use SSL. Thanks for your answer.
I can't copy-paste the text from my terminal, but I have increased the fonts for visibility purposes.

Building realtime app using Laravel and Latchet websocket

I'm building a closed app (users need to authenticate in order to use it). I'm having trouble identifying the currently authenticated user from my Latchet session. Since Apache does not support long-lived connections, I host Latchet on a separate server instance. This means that my users receive two session_ids, one for each connection. I want to be able to identify the current user on both connections.
My client code is a SPA based on AngularJS. For the client WS, I'm using the Autobahn.ws WAMP v1 implementation. The ab framework specifies methods for authentication: http://autobahn.ws/js/reference_wampv1.html#session-authentication, but how exactly do I go about using them?
Do I save the username and password on the client and retransmit these once login is performed (which, by the way, is separate from the rest of my SPA)? If so, won't this be a security concern?
And what should receive the auth request server side? I cannot find any examples of this...
Please help?
P.S. I do not have enough reputation to create the tag "Latchet", so I'm using Ratchet (which Latchet is built on) instead.
Create an AngularJS service called AuthenticationService, inject it where needed, and call it with:
AuthenticationService.check('login_name', 'password');
This code lives in a file called authentication.js. It assumes that autobahn is already included. I did have to edit this code heavily, removing all the extra crap I had in it, so it may have a syntax error or two, but the idea is there.
angular.module(
    'top.authentication',
    ['top']
)
.factory('AuthenticationService', ['$rootScope', function($rootScope) {
    return {
        check: function(aname, apwd) {
            console.log("here in the check function");
            $rootScope.loginInfo = { channel: aname, secret: apwd };
            var wsuri = 'wss://' + '192.168.1.11' + ':9000/';
            $rootScope.loginInfo.wsuri = wsuri;
            ab.connect(wsuri,
                function(session) {
                    $rootScope.loginInfo.session = session;
                    console.log("connected to " + wsuri);
                    onConnect(session);
                },
                function(code, reason) {
                    $rootScope.loginInfo.session = null;
                    if (code == ab.CONNECTION_UNSUPPORTED) {
                        console.log(reason);
                    } else {
                        console.log('failed');
                        $rootScope.isLoggedIn = 'false';
                    }
                }
            );
            function onConnect(sess) {
                console.log('onConnect');
                var wi = $rootScope.loginInfo;
                sess.authreq(wi.channel).then(
                    function(challenge) {
                        console.log("onConnect().then()");
                        var secret = ab.deriveKey(wi.secret, JSON.parse(challenge).authextra);
                        var signature = sess.authsign(challenge, secret);
                        sess.auth(signature).then(onAuth, ab.log);
                    }, ab.log
                );
            }
            function onAuth(permission) {
                $rootScope.isLoggedIn = 'true';
                console.log("authentication complete");
                // do whatever you need when you are logged in..
            }
        }
    };
}])
Then you need code (as you point out) on the server side. I assume your server-side web socket is PHP code. I can't help with that; I haven't coded in PHP for over a year. In my case I use Python: I include the Autobahn gear, then subclass WampCraServerProtocol and replace a few of the methods (onSessionOpen, getAuthPermissions, getAuthSecret, onAuthenticated and onClose). As you can envision, these are the 'other side' of the Angular code knocking at the door. I don't think Autobahn supports PHP, so you will have to program the server side of the authentication yourself.
Anyway, my backend works much more like what @oberstet describes. I establish authentication via old-school HTTPS, create a session cookie, then make an Ajax request for a 'ticket' (a temporary name/password which I associate with the authenticated web session). It is a one-use name/password and must be used within a few seconds or it disappears. The point being, I don't have to keep the user's credentials around; I already have the cookie/session from which I can create tickets. This has a neat side effect as well: my Ajax session becomes related to my WebSocket session, and a query on either is attributed to the same session in the backend.
-g
I can give you a couple of hints regarding WAMP-CRA, which is the authentication mechanism this is referring to:
WAMP-CRA does not send passwords over the wire. It works by a challenge-response scheme. The client and server have a shared secret. To authenticate a client, the server sends a challenge (something random) that the client needs to sign using the secret, and only the signature is sent back. The client might store the secret in browser local storage. It's never sent.
In a variant of the above, the challenge the server sends is not signed directly within the client; instead, the client might have the signature created via an Ajax request. This is useful when the client has already been authenticated by other means (e.g. classical cookie-based), and the signing can then be done by the classical web app that did the authenticating.
Ok, Greg was kind enough to provide a full example of the client implementation on this, so I won't do anything more on that. It works, with just a few tweaks and modifications, for almost any use case I can think of. I will mark his answer as the correct one. But his input only covered the theory of the backend implementation, so I will try to fill in the blanks here for posterity.
I have to point out, though, that the solution here is not complete, as it does not give me a shared session between my SPA/REST connection and my WS connection.
I discovered that the authentication request transmitted by autobahn is in fact a variant of RPC, and for some reason has hardcoded topic names curiously resembling regular URLs:
- 'http://api.wamp.ws/procedure#authreq' - for auth requests
- 'http://api.wamp.ws/procedure#auth' - for signed auth client responses
I needed to create two more routes in my Laravel routes.php:
// WS CRA routes
Latchet::topic('http://api.wamp.ws/procedure#authreq', 'app\\socket\\AuthReqController');
Latchet::topic('http://api.wamp.ws/procedure#auth', 'app\\socket\\AuthReqController');
Now, a Latchet controller has four methods: subscribe, publish, call and unsubscribe. Since both the authreq and the auth calls made by autobahn are RPC calls, they are handled by the call method on the controller.
The solution first proposed by oberstet and then backed up by Greg describes a temporary auth key and secret being generated upon request and held just long enough to be validated by the WS CRA procedure. I've therefore created a REST endpoint which generates a persisted key-value pair. The endpoint is not included here, as I am sure that this is trivial.
class AuthReqController extends BaseTopic {

    public function subscribe ($connection, $topic) { }
    public function publish ($connection, $topic, $message, array $exclude, array $eligible) { }
    public function unsubscribe ($connection, $topic) { }

    public function call ($connection, $id, $topic, array $params) {
        switch ($topic) {
            case 'http://api.wamp.ws/procedure#authreq':
                return $this->getAuthenticationRequest($connection, $id, $topic, $params);
            case 'http://api.wamp.ws/procedure#auth':
                return $this->processAuthSignature($connection, $id, $topic, $params);
        }
    }

    /**
     * Process the authentication request
     */
    private function getAuthenticationRequest ($connection, $id, $topic, $params) {
        $auth_key = $params[0]; // A generated temporary auth key
        $tmpUser = $this->getTempUser($auth_key); // Get the key-value pair as persisted in the temporary store
        if ($tmpUser) {
            $info = [
                'authkey' => $tmpUser->username,
                'secret' => $tmpUser->secret,
                'timestamp' => time()
            ];
            $connection->callResult($id, $info);
        } else {
            $connection->callError($id, $topic, array('User not found'));
        }
        return true;
    }

    /**
     * Process the final step in the authentication
     */
    private function processAuthSignature ($connection, $id, $topic, $params) {
        // This should do something smart to validate this response.
        // The session should be ours right now. So store the Auth::user()
        $connection->user = Auth::user(); // A null object is stored.
        $connection->callResult($id, array('msg' => 'connected'));
    }

    private function getTempUser($auth_key) {
        return TempAuth::findOrFail($auth_key);
    }
}
Now, somewhere in here I've gone wrong, because if I were supposed to inherit the Ajax session my app holds, I would be able to call Auth::user() from any of my other WS Latchet-based controllers and automatically be presented with the currently logged-in user. But this is not the case. So if somebody sees what I'm doing wrong, give me a shout. Please!
Since I'm unable to get the shared session, I'm currently cheating by transmitting the real username as an RPC call instead of performing a full CRA.

Redis - Pub/Sub Internals

I've written a small Socket.IO server, which works fine: I can connect to it and I can send/receive messages, so everything is working OK. Just the relevant part of the code is presented here:
var RedisStore = require('socket.io/lib/stores/redis');
const pub = redis.createClient('127.0.0.1', 6379);
const sub = redis.createClient('127.0.0.1', 6379);
const store = redis.createClient('127.0.0.1', 6379);

io.configure(function() {
    io.set('store', new RedisStore({
        redisPub : pub,
        redisSub : sub,
        redisClient : store
    }));
});

io.sockets.on('connection', function(socket) {
    socket.on('message', function(msg) {
        pub.publish("lobby", msg);
    });
    /*
     * Subscribe to the lobby and receive messages.
     */
    var sub = redis.createClient('127.0.0.1', 6379);
    sub.subscribe("lobby");
    sub.on('message', function(channel, msg) {
        socket.send(msg);
    });
});
Here, I'm interested in the case where a client is subscribed to a different room, which is why I also create the sub Redis client inside each socket connection: each client can be subscribed to a different room and receive messages from there. I'm not entirely sure whether the code above is OK, so please let me know if I need to do anything other than define the sub Redis connection inside the Socket.IO connection. This also means that a new Redis connection is spawned for each connecting client to serve messages from its subscribed room. I guess this is quite an overhead, so I would like to solve it in any way possible?
Thank you
Both node.js and redis are very good at handling lots of connections (thousands is no problem), so what you're doing is fine.
As a side note, you will want to look into upping your file descriptor limits if you do intend on supporting thousands of connections.
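On Linux, for example, the per-process limit can be raised in /etc/security/limits.conf (a sketch; the values here are illustrative, not a recommendation from the answer):

# /etc/security/limits.conf - raise the open-file limit
*   soft   nofile   65536
*   hard   nofile   65536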
