Lightweight web authentication for an embedded system (C)

I'm working on a slightly esoteric project where we need to implement some basic authentication in a small/slow embedded micro (no OS). The device serves a couple of web-pages through its serial port which then get squirted over the IP network by a bit of hardware we have no control over.
The server code, such as it is (think nweb on a starvation diet), takes in HTTP GET/POST requests and spits out pages & changes its settings accordingly.
We need some way of authenticating a user login/session so that we don't allow people to see data or change settings they shouldn't.
The device is not intended to be directly exposed to the internet or be 100% impregnable to serious hacking (network security / separation is the customer's issue*); the security requirement is more about keeping the lower ranks from touching the blinkenlights ;)
Due to the lack of space/processing power (assume we have ~2k of code space and not many MHz) we can't implement things like SSL, but it would be nice to go at least one better than bog-standard HTTP access control.
We can handle GET, POST and set/read cookie data. One thing our micro does have is a decent crypto-standard hardware random number generator, should that be of any help at all.
* Really the customers should be hanging the device on its own network, physically disconnected, or at least firewalled to death, from anything else. But hey, if it works for Boeing...

If you only want to protect against access:
Any time there is a GET request, look for a cookie containing the password.
If the cookie isn't set, send back an HTML login form that POSTs the password to the server.
If the server gets the POST data with the right password, send back a "logged in OK" page that sets the cookie to the password. Then anyone who logs in (with the right password, obviously) will have the cookie set in all future GET requests, and anyone who has never logged in will always see the login page. You can 'hide' the password by setting two cookie values: a random number, and the XOR of that random number with the password. This way clients won't be able to figure out what the values in the cookies are. If you go further and XOR in, say, the client IP, clients won't be able to copy the cookies to other computers. The server will always be able to undo everything and figure out the password/IP from the random number and the other cookie value.
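Here's a minimal sketch of that two-cookie scheme in C, assuming the password and the client IPv4 address are packed into 4-byte values and that hw_rand32() wraps the device's hardware RNG (all names here are illustrative, not from any particular library):

#include <stdint.h>

extern uint32_t hw_rand32(void);   /* assumed wrapper around the hardware RNG */

#define PASSWORD 0x53454355u       /* example 4-byte password block */

/* On a successful POST login: fill in the two cookie values. */
void make_cookies(uint32_t client_ip, uint32_t *c_rand, uint32_t *c_mask)
{
    *c_rand = hw_rand32();
    *c_mask = *c_rand ^ PASSWORD ^ client_ip;
}

/* On each GET: undo the XOR and check the recovered password. */
int cookies_valid(uint32_t client_ip, uint32_t c_rand, uint32_t c_mask)
{
    return (c_rand ^ c_mask ^ client_ip) == PASSWORD;
}

Bear in mind XOR is self-inverse: anyone who captures both cookie values can XOR them together to recover password ^ ip, so this hides the password from casual inspection rather than from a determined eavesdropper.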
If you want simple encryption you could use XMLHttpRequest in JavaScript. Have the server encrypt data with a simple pseudo-random number generator (or a simple XOR obfuscation, or whatever) and have the client do the same thing backwards in JavaScript. You can have the server encrypt every page except, say, index.html, and in index.html use XMLHttpRequest to fetch the other pages, decrypt them in JavaScript, and put the contents into a div using innerHTML or whatever.
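A hedged sketch of the server side of that idea, using a tiny linear congruential generator as the keystream (the LCG constants and the seed handling are illustrative; this is obfuscation, not real encryption):

#include <stddef.h>
#include <stdint.h>

static uint32_t lcg_state;

static uint8_t lcg_next(void)
{
    /* Numerical Recipes LCG constants; NOT cryptographically secure. */
    lcg_state = lcg_state * 1664525u + 1013904223u;
    return (uint8_t)(lcg_state >> 24);
}

/* XOR is symmetric, so the same routine encrypts and decrypts;
 * the JavaScript side runs the same generator with the same seed. */
void xor_obfuscate(uint8_t *buf, size_t len, uint32_t seed)
{
    lcg_state = seed;
    for (size_t i = 0; i < len; i++)
        buf[i] ^= lcg_next();
}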

Related

Authorizing requests to get webpack code chunks

When chunking client code with webpack, e.g. using React lazy loading, is there any way to restrict access to a particular chunk by gating it behind some sort of authorization credentials?
All approaches I've seen so far have involved client-side code that only renders a given component when certain conditions are met on the client side (e.g. something along the lines of this as a simple example). This does not strike me as particularly secure, more "security by obscurity" than anything: a motivated attacker could modify either the client code they do have, or perhaps their own memory, to force the browser to request the additional chunks - or just send requests to the server using some other agent.
Is there any way to enforce this on the server side, in the CDN that serves the client code? Perhaps some way to attach credentials to the requests to obtain additional chunks such that the CDN could verify that the client is indeed authorized to be sent the code? (The server side authorization would be trivial enough to implement myself, but I don't know of a way to have the requests actually attach the credentials - that whole mechanism is opaque and I can't find any documentation.)
Alternatively, is there some justification for the client-side-only approach actually being secure?

Decrypting HTTPS traffic with a proxy

I am implementing a Web proxy (in C), with the end goal of implementing some simple caching and adblocking. Currently, the proxy supports normal HTTP sites, and also supports HTTPS sites by implementing tunneling with HTTP CONNECT. The proxy works great running from localhost and configured with my browser.
Despite all of this, I'll never be able to implement my desired features as long as the proxy cannot decrypt HTTPS traffic. The essence of my question is: what general steps do I need to take to be able to decrypt this traffic and implement what I would like? I've been researching this, and there seems to be a good amount of information on existing proxies that are capable of this, such as Squid.
Currently, my server uses select() and keeps all client file descriptors in an fd_set. When a CONNECT request is made, it makes a TCP connection to the specified host and places the file descriptors of both the client and the host into the fd_set. It also places the pair of fds into a list, and the list is scanned whenever more data is ready from select() to see if data is coming from an existing tunnel. The data is then read and forwarded blindly. I am struggling to see how to intercept this data at all, due to the nature of the CONNECT verb requiring opening a simple TCP socket to the desired host and then "staying out of it" while the client and host set up their own SSL session. I am simply asking for the right direction for how I can go about using the proxy as a MITM attacker in order to read and manipulate the data coming in.
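For reference, a stripped-down sketch of that blind forwarding step, assuming the (client_fd, host_fd) pair was already set up by the CONNECT handler (error handling and fd bookkeeping omitted):

#include <sys/select.h>
#include <unistd.h>

/* Copy bytes verbatim in both directions of an established tunnel. */
void relay_tunnel(int client_fd, int host_fd, fd_set *ready)
{
    char buf[4096];
    ssize_t n;

    if (FD_ISSET(client_fd, ready)) {          /* client -> origin */
        n = read(client_fd, buf, sizeof buf);
        if (n > 0) (void)write(host_fd, buf, (size_t)n);
    }
    if (FD_ISSET(host_fd, ready)) {            /* origin -> client */
        n = read(host_fd, buf, sizeof buf);
        if (n > 0) (void)write(client_fd, buf, (size_t)n);
    }
}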
As a brief aside, this project is solely for my own use, so no security or advanced functionality is needed. I just need it to work for one browser, and I am happy to get any warnings from the browser if certificate-spoofing is the best approach.
proxy cannot decrypt HTTPS traffic
You are trying to mount a man-in-the-middle attack, and SSL is designed to prevent exactly that. But there is a weak point: the list of trusted certificate authorities.
I am simply asking for the right direction for how I can go about using the proxy as a MITM attacker in order to read and manipulate the data coming in.
You can get inspiration from Fiddler. Fiddler has its own CA (certificate authority) certificate, and once you add this CA certificate as trusted, Fiddler generates server certificates on the fly for each host you connect to.
This comes with serious security considerations: your browser will then trust any site the proxy signs for. I've even seen the Fiddler core used inside malware, so be careful.
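To make that concrete, here is a hedged sketch of the certificate-forging step using OpenSSL; forge_cert and the surrounding CA/key loading are assumptions for illustration, not a complete MITM implementation:

#include <time.h>
#include <openssl/evp.h>
#include <openssl/x509.h>

/* Sign a throwaway leaf certificate for `host` with your own CA so a
 * browser that trusts that CA will accept the proxy's TLS endpoint. */
X509 *forge_cert(const char *host, EVP_PKEY *leaf_key,
                 X509 *ca_cert, EVP_PKEY *ca_key)
{
    X509 *crt = X509_new();
    X509_set_version(crt, 2);                              /* X509v3 */
    ASN1_INTEGER_set(X509_get_serialNumber(crt), (long)time(NULL));
    X509_gmtime_adj(X509_get_notBefore(crt), 0);
    X509_gmtime_adj(X509_get_notAfter(crt), 60L * 60 * 24 * 365);
    X509_set_pubkey(crt, leaf_key);

    X509_NAME *name = X509_get_subject_name(crt);
    X509_NAME_add_entry_by_txt(name, "CN", MBSTRING_ASC,
                               (const unsigned char *)host, -1, -1, 0);
    X509_set_issuer_name(crt, X509_get_subject_name(ca_cert));

    X509_sign(crt, ca_key, EVP_sha256());                  /* CA signs */
    return crt;
}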

Do I need a service worker when my application often communicates with the server?

I'm working on a simple chat application that uses these frameworks and libraries: React, socket.io, Express.
When a user opens the web app for the first time, they see a login form, and after login the server retrieves the list of all users and sends it to the client. When someone writes a new message, the server sends the message to all the clients.
As you can see, every part of the app depends on the server.
Does it make sense to use a service worker? Is it even possible?
As far as I know, a service worker is good at storing images, CSS, and JS files, and it helps users keep using the app while they don't have an internet connection.
But I don't know what can be done when everything depends on the server.
You have a great question.
You can most certainly use a Service Worker, but most likely not to the extent some other apps could. You have outlined the problem yourself: your website depends on the server, so it can't meaningfully be made to work offline. Some other websites can be made fully or mostly offline, showing some content without a network connection and giving the full experience when connectivity comes back, but that doesn't sound like the case for your website.
Based on the description you've given, there's still something you could easily use a Service Worker for, however. You've understood correctly that a SW is very good at storing (caching) static assets and serving them from the device's cache without any network connectivity. You could use this feature to make your site faster: have the SW proactively cache all the static assets of your site and return them from the local cache without requesting anything from the network. How much faster depends on the user's connectivity (if the user has a slow 3G connection, the SW would make the site feel super fast; if the user has steady fiber or whatnot, the difference wouldn't be that huge).
You could also make the site itself load without any internet connectivity and, in that situation, show the user a message saying "Hey, it seems like you're offline! Shoot! You need connectivity to use the app. We'll continue as soon as we get the bits flowing!", since that would probably make for a nicer user experience than a browser error page.
So, in conclusion: you can leverage SW to make the initial loading of the site faster but you most likely won't get as much out of a SW configuration as some other site would get.
If you have any other questions or would like to have some clarifications, just comment :)
Sure you can benefit from having a Service Worker; it is general enough to be useful for all kinds of applications, and I don't agree it is only good for static assets.
It all depends on the actual requirements for your application, obviously. But technically there is no limitation that would prevent you from caching your user-list response in the Service Worker.
Remember that "offline" is a condition that happens in multiple circumstances: not only being far from network coverage, but also outages, interference, lie-fi, or going through a tunnel. So it can just as well happen intermittently during your app's operation, and it might make sense to prepare for it.
You can, for example, store your messages for offline use in IndexedDB and, for messages written during that time, register a Background Sync event to send them to the server when connectivity is back. This way users might still be able to use the app in a limited fashion (read the previously exchanged messages and post their own messages to be sent out later).

How to authenticate a WPF application against the server?

Assume the following:
I have a WPF application which reads text from a file and sends the text to my server's REST API via HTTPS, and the server sends a response which depends on the text that was sent in the request.
The WPF application should be the only client that gets a useful response to this request, so the application has to somehow prove to the server that the request was sent by the application itself.
The user of the WPF application should not be asked to enter any login credentials.
What are the best practices here?
My thoughts:
The WPF application could send a hard-coded password along with the request, which is checked on the server side, but that doesn't sound like a good solution to me because the security depends on the fact that nobody is able to sniff the HTTPS request.
Is it possible to sniff the HTTPS request to get the password easily?
Thanks in advance
If your server already supports HTTPS, the client knows the server can be trusted based on the certificate it presents, so that side is handled (client trusts server).
To complete the trust relationship, the server needs to do the same (server trusts client): the client should hold a certificate it can pass to the server so the server can verify the client's identity.
As always, this brings up the problem of how to hide the key in the client, for which there are various schemes; but since the client needs to get at the key eventually, you cannot prevent a dedicated hacker from finding that info, only make it harder for them (obfuscation, etc.).
Depending on your application, the best option is a simple whitelist of clients allowed to connect. Some apps can do this but many cannot, since they don't know the users' IPs, etc., but it's something else to keep in mind if it fits your use case.
You can send a password to the server like you suggest. As long as the message is encrypted (HTTPS), you're probably fine. Nothing is 100% secure: it can be intercepted via a man-in-the-middle style attack, but those are fairly rare, or at least very targeted, so it would depend on what your software does, etc.

Best practices for authentication and authorization in Angular without breaking RESTful principles?

I've read quite a few SO threads about authentication and authorization with REST and Angular, but I'm still not feeling like I have a great solution for what I'm hoping to do. For some background, I'm planning to build an app in AngularJS where I want to support:
Limited guest access
Role-based access to the application once authenticated
Authentication via APIs
All of the calls to the REST API will be required to occur over SSL. I'd like to build the app without breaking RESTful principles, namely not keeping session state stored on the server. Of course, whatever is done vis-a-vis authorization on the client side has to be reinforced on the server side. Since we need to pass the entire state with each request, I know I need to pass some sort of token so that the backend server receiving the REST request can both authenticate and authorize the call.
With that said, my main question is around authentication - what are the best practices here? It seems there are lots of different approaches discussed, here's just a few that I've found:
http://broadcast.oreilly.com/2009/12/principles-for-standardized-rest-authentication.html
http://frederiknakstad.com/2013/01/21/authentication-in-single-page-applications-with-angular-js/
http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html
There was a similar question asked (AngularJS best practice application authentication), but unless I'm misunderstanding the answer, it seems to imply that a server session should be used, which is breaking RESTful principles.
My main concern with the Amazon AWS and the George Reese articles is that they seem to assume the consumer is a program, rather than an end user. A shared secret can be issued to a programmer in advance, who can then use it to encode calls. This isn't the case here: I need to call the REST API from the app on behalf of the user.
Would this approach be enough? Let's say I have a session resource:
POST /api/session
Create a new session for a user
To create a session, you need to POST a JSON object containing the "email" and "password".
{
  "email" : "austen@example.com",
  "password" : "password"
}
Curl Example
curl -v -X POST --data '{"email":"austen@example.com","password":"password"}' "https://app.example.com/api/session" --header "Content-Type:application/json"
Response
HTTP/1.1 201 Created

{
  "session": {
    "id": "520138ccfa4634be08000000",
    "expires": "2014-03-20T17:56:28+0000"
  }
}
Status Codes
201 - Created, new session established
400 - Bad Request, the JSON object is not valid or required information is missing
401 - Unauthorized, Check email/password combo
403 - Access Denied, disabled account or license invalid
I'm leaving out the HATEOAS details for clarity. On the backend, there would be a new, limited duration session key created and associated with the user. On subsequent requests, I could pass this as part of the HTTP headers:
Authorization: MyScheme 520138ccfa4634be08000000
Then the backend servers would be responsible for digesting this out of the request, finding the associated user and enforcing authorization rules for the request. It should probably update the expiration for the session as well.
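For illustration, that digest-and-lookup step might look something like this minimal C sketch; session_lookup and the 30-minute sliding window are hypothetical, not from any framework mentioned above:

#include <string.h>
#include <time.h>

struct session { char id[32]; time_t expires; /* plus user, roles... */ };

extern struct session *session_lookup(const char *id);  /* hypothetical store */

/* Given the raw header value "MyScheme 520138ccfa4634be08000000",
 * return the session if it exists and has not expired, else NULL. */
struct session *authorize(const char *auth_header)
{
    static const char prefix[] = "MyScheme ";

    if (!auth_header || strncmp(auth_header, prefix, sizeof prefix - 1) != 0)
        return NULL;

    struct session *s = session_lookup(auth_header + sizeof prefix - 1);
    if (!s || s->expires < time(NULL))
        return NULL;

    s->expires = time(NULL) + 30 * 60;   /* sliding 30-minute expiration */
    return s;
}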
If all this is happening over SSL, am I leaving the door open to any kind of attacks that I should be protecting against? Someone could try to guess session keys and place them in the header, so I suppose I could additionally append a user GUID to the session key to further prevent brute-force attacks.
It's been a few years since I've actively programmed and I'm just getting back into the swing here. Apologies if I'm being obtuse or unnecessarily reinventing the wheel, just hoping to run my ideas by the community here based on my reading thus far and see if they pass the litmus test.
When someone asks about REST authentication, I defer to Amazon Web Services and basically suggest "do that". Why? Because, from a "wisdom of the crowds" point of view, AWS solves the problem, is heavily used, heavily analyzed, and vetted by people who know and care far more than most about what makes a secure request. And security is a good place to "not reinvent the wheel". In terms of "shoulders to stand on", you can do worse than AWS.
Now, AWS does not use a token technique; rather, it uses a secure hash based on shared secrets and the payload. It is arguably a more complicated implementation (with all of its normalization processes, etc.).
But it works.
The downside is that it requires your application to retain the person's shared secret (i.e. the password), and it also requires the server to have access to a plain-text version of the password. That typically means the password is stored encrypted and then decrypted as appropriate, and that invites yet more complexity of key management and other things on the server side versus a secure hashing technique.
The biggest issues, of course, with any token-passing technique are man-in-the-middle attacks and replay attacks. SSL mostly mitigates these, naturally.
Of course, you should also consider the OAuth family, which has its own issues, notably with interoperability, but if that's not a primary goal, then the techniques are certainly valid.
For your application, the token lease is not a big deal. Your application will still need to operate within the time frame of the lease, or be able to renew it. In order to do that it will need to either retain the user credentials or re-prompt for them. Just treat the token as a first-class resource, like anything else. If practical, try to associate some other information with the request and bundle it into the token (browser signature, IP address), just to enforce some locality.
You are still open to (potential) replay problems, where the same request can be sent twice. With a typical hash implementation, a timestamp is part of the signature, which can bracket the lifespan of the request. That's solved differently in this case: for example, each request can be sent with a serial ID or a GUID, and you can record that the request has already been played to prevent it from happening again. There are different techniques for that.
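One hedged sketch of that record-and-reject idea: keep a small fixed-size table of recently seen request IDs and refuse duplicates (the table size and ring-buffer eviction are illustrative choices):

#include <string.h>

#define SEEN_SLOTS 1024
#define ID_LEN     37            /* GUID string plus NUL terminator */

static char seen[SEEN_SLOTS][ID_LEN];
static unsigned next_slot;       /* ring-buffer eviction of oldest entries */

/* Return 1 if this request ID was already played; otherwise record it. */
int replayed(const char *request_id)
{
    for (unsigned i = 0; i < SEEN_SLOTS; i++)
        if (strcmp(seen[i], request_id) == 0)
            return 1;

    strncpy(seen[next_slot], request_id, ID_LEN - 1);
    seen[next_slot][ID_LEN - 1] = '\0';
    next_slot = (next_slot + 1) % SEEN_SLOTS;
    return 0;
}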
Here is an incredible article about authentication and login services built with Angular.
https://medium.com/opinionated-angularjs/7bbf0346acec
This SO question does a good job of summing up my understanding of REST:
Do sessions really violate RESTfulness?
If you store a token in a session you are still creating state on the server side (this is an issue since that session is typically stored on only one server; it can be mitigated with sticky sessions or other solutions).
I'd like to know what your reasoning is for creating a RESTful service, though, because perhaps this isn't really a large concern.
If you send a token in the body along with every request (since everything is encrypted with SSL this is okay), then you can have any number of (load-balanced) servers servicing the request without any prior knowledge of state.
Long story short, I think aiming for RESTful implementations is a good goal, but being purely stateless certainly creates an extra layer of complexity when it comes to authentication and verifying authorization.
Thus far I've started building my back ends with REST in mind, making URIs that make sense and using the correct HTTP verbs, but I still use a token in a session for the simplicity of authentication (when not using multiple servers).
I read through the links you posted. The AngularJS one seems to focus just on the client and doesn't explicitly address the server in that article; he does link to another one (I'm not a Node user, so forgive me if my interpretation is wrong here), but it appears the server is relying on the client to tell it what level of authorization it has, which is clearly not a good idea.
