How to protect WebRTC from a MitM attack?

I am working on a PoC with WebRTC for static assets, and discovered it could be attacked with a MitM: https://webrtchacks.com/webrtc-and-man-in-the-middle-attacks/. Is there a way to protect WebRTC from such an attack?

Note that the attack in the post starts by compromising the signaling server. You can safely assume that if the signaling server is compromised, everything goes down the drain. The reason is that, at the end of the day, you need to trust some kind of entity to broker the communication, and that's the signaling server.
The only thing you need to do to protect against such an attack is to protect your signaling server - something quite common on the web today.
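One mitigation sometimes suggested for exactly this scenario (a signaling server that swaps SDP) is to compare DTLS certificate fingerprints over a second channel the attacker cannot modify. A minimal sketch of that comparison, in Python for illustration only - it assumes you already have the peer's DER-encoded certificate bytes:

```python
import hashlib
import hmac

def cert_fingerprint(der_bytes: bytes) -> str:
    # SHA-256 fingerprint in the colon-separated form used in
    # SDP "a=fingerprint:sha-256 ..." lines.
    digest = hashlib.sha256(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def verify_peer(sdp_fingerprint: str, out_of_band_fingerprint: str) -> bool:
    # Compare the fingerprint received via the (untrusted) signaling
    # server against one obtained out of band (e.g. shown to both
    # users, or fetched over a separately authenticated channel).
    return hmac.compare_digest(sdp_fingerprint, out_of_band_fingerprint)
```

If the fingerprints differ, the DTLS endpoint you are about to talk to is not the peer the signaling claimed, which is the signature of the MitM described in the post.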


elastic4s automatic reconnections when connection is dropped

Is there a way (or best practice) for handling automatic reconnections in elastic4s?
I have the situation where the elastic cluster gets rebooted behind my application (security updates etc). [Obviously this is not ideal and would be better handled by a rolling restart but we're not quite there yet.]
However when this happens the connection is dropped and never recovers when the cluster comes back online. It keeps saying no nodes are available. If I restart the application it will reconnect without issues.
Is there a way to handle this nicely without having to create a new connection (i.e. a new TcpClient)? Currently I'd have to distribute the new TcpClient to the various parts of the application or wrap the API in something which handles this situation. Neither appeals much.
Thanks
You could consider switching to the HttpClient, which will obviously keep working after a cluster restart because it doesn't maintain a persistent connection. The elastic4s API is the same regardless of which underlying client you use, so, in theory, it should be an easy change.

Security considerations for TCP client request / server response of non-sensitive information

I have read several similar topics on stackoverflow in which fellow programmers discourage the practice of using simple client/server applications and raw TCP sockets for communication. I acknowledge that there are concerns but for what I'm trying to accomplish I don't see any other reasonable way.
Here is what I'm planning:
I have a simple working prototype client/server that I wrote in C. The client application sends a request to my server, which remotely executes code to generate a value and then relays this value back to the client. The transmitted data is not sensitive, will only be held in RAM, and will be rejected if it exceeds a predefined length. If I run a (hardened) dedicated server with the sole purpose of remote code execution to generate a response, are there any security issues I'm overlooking?
I am less worried about my server being compromised and more worried about possible harm to client computers. I'm not blind to the potential that my server gets hacked - I'm just trying to convey that there won't be any sensitive data on it even if it does get compromised. I don't see how anything malicious could be injected (MitM) given the narrow scope of the data being transmitted, but maybe I'm naive and overlooking something? Please let me know.
I could accomplish this over HTTP with a rewrite trick, but that is convoluted, I'll incur more overhead than I want, and I'm unsure it would be any safer.
Thanks.
You need to think about the possibility of your server being damaged, wiped, or even powered off by an intruder. Anything involving remote execution of code rings alarm bells for me. You must at least use a secure client authentication scheme.
Confidentiality - not needed if you feel the data is not sensitive.
Authentication - certificates can be used to make sure the client talks to the real server. Please check OpenSSL.
Make sure the server runs with reduced privileges rather than as root, so that it can't be completely compromised in the case of a memory corruption attack.
Using HTTP won't make any difference; HTTP uses TCP at the transport layer.
HTTPS would be a possible solution.
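Two of the hardening steps above (rejecting over-length requests and dropping privileges) can be sketched as follows - in Python rather than the C of the prototype, and with the message limit and uid/gid values as pure assumptions:

```python
import os
import socket

MAX_MSG_LEN = 512  # hypothetical "predefined length" from the question

def drop_privileges(uid=65534, gid=65534):
    # After binding any privileged port as root, switch to an
    # unprivileged account (65534 is often 'nobody'; adjust for
    # your system) so a memory-corruption bug yields less.
    if os.getuid() == 0:
        os.setgid(gid)
        os.setuid(uid)

def recv_bounded(sock: socket.socket, max_len: int = MAX_MSG_LEN) -> bytes:
    # Read one byte more than allowed so an over-long request is
    # detectable, then reject it instead of buffering it.
    data = sock.recv(max_len + 1)
    if len(data) > max_len:
        raise ValueError("request exceeds predefined length")
    return data
```

The same two ideas apply directly to the C prototype: cap every read against the predefined length, and call setgid()/setuid() once the listening socket is set up.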

Single database connection for application

My colleague argues that opening a single database connection for an application is much better and faster than acquiring and releasing connections from a pool.
He has an ApplicationStart method where he initializes Application('db') and keeps this connection alive across the whole app. The app mostly contains read-only data.
How can I persuade him?
That depends a lot on what the "application" here is. If this is a client application that works on a single thread and does things sequentially, then frankly there won't be any noticeable difference either way. In that scenario, if you use the pool it will basically be a pool of 1 item, and opening a connection from the pool will be virtually instantaneous (and certainly not noticeable compared to network IO). In that scenario I would still say use the inbuilt pooling, as it will avoid assumptions when you change scenario.
However, if your application uses more than one thread, or via any other mechanism does more than one thing at a time (async etc.), using a single connection would be very bad; either it will outright fail, or you will need to synchronize around the connection, which would limit you severely. Note that any server-side application (any kind of web application, WCF service, SOAP service, or socket service) would react very badly to his idea.
Perhaps the main way to convince him is simply: ask him to prove it. Ask for a repeatable test / demonstration that shows this difference.
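To make the multi-threaded case concrete, here is a deliberately tiny illustrative pool (all names made up; in a real app you would rely on the driver's built-in pooling, as the answer says):

```python
import os
import queue
import sqlite3
import tempfile
import threading

class ConnectionPool:
    # Minimal pool: a thread-safe queue of pre-opened connections.
    def __init__(self, factory, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self):
        return self._pool.get()   # blocks if every connection is in use

    def release(self, conn):
        self._pool.put(conn)

# Demo: four threads safely share a pool of two connections.
db_path = os.path.join(tempfile.mkdtemp(), "demo.db")
pool = ConnectionPool(
    lambda: sqlite3.connect(db_path, check_same_thread=False), size=2)

results = []
def worker():
    conn = pool.acquire()
    try:
        results.append(conn.execute("SELECT 1").fetchone()[0])
    finally:
        pool.release(conn)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With a single shared connection instead of the pool, the worker threads would have to serialize on one lock around every query - which is exactly the limitation the answer describes.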

Implementing Comet on the database-side

This is more out of curiosity and "for future reference" than anything, but how is Comet implemented on the database-side? I know most implementations use long-lived HTTP requests to "wait" until data is available, but how is this done on the server-side? How does the web server know when new data is available? Does it constantly poll the database?
What DB are you using? If it supports triggers, which many RDBMSs do in some shape or form, then you could have the trigger fire an event that actually tells the HTTP request to send out the appropriate response.
Triggers remove the need to poll... polling is generally not the best idea.
PostgreSQL seems to have pretty good support (even PL/Python).
This is very much application dependent. The most likely implementation is some sort of messaging system.
Most likely, your server-side code will consist of quite a few parts:
a few app servers that handle incoming requests,
a (separate) comet server that deals with all the open connections to clients,
the database, and
some sort of messaging infrastructure
The last one, the messaging infrastructure, is really the key. This provides a way for the app servers to talk to the comet server. So when a request comes in, the app server will put a message into the message queue telling the comet server to notify the correct client(s).
How messaging is implemented is, again, very much application dependent. A very simple implementation would just use a database table called messages and poll that.
But depending on the stack you plan on using, there should be more sophisticated tools available.
In Rails I'm using Juggernaut, which simply listens on a network port. Whenever there is data to send, the Rails application server opens a connection to this Juggernaut push server and tells it what to send to the clients.
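The app-server-to-comet-server handoff described above can be sketched in-process with per-client queues (Python, purely illustrative - a real deployment would put a network message queue between separate processes):

```python
import queue

# Per-client queues held by the "comet server". Each long-lived HTTP
# request blocks on its client's queue instead of polling the database.
client_queues = {}

def register(client_id):
    client_queues[client_id] = queue.Queue()

def long_poll(client_id, timeout=30):
    # What the comet server does for each held-open request: block
    # until a message arrives or the timeout expires.
    try:
        return client_queues[client_id].get(timeout=timeout)
    except queue.Empty:
        return None   # the client simply reconnects and waits again

def notify(client_id, payload):
    # What an app server does after writing to the database: push a
    # message instead of having anything poll for changes.
    client_queues[client_id].put(payload)
```

The key property is the one the answers emphasize: nothing polls. The held-open request sleeps until `notify` (triggered by a database write, or a trigger firing an event) wakes it.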

.NET CF mobile device application - best methodology to handle potential offline-ness?

I'm building a mobile application in VB.NET (compact framework), and I'm wondering what the best way to approach the potential offline interactions on the device. Basically, the devices have cellular and 802.11, but may still be offline (where there's poor reception, etc). A driver will scan boxes as they leave his truck, and I want to update the new location - immediately if there's network signal, or queued if it's offline and handled later. It made me think, though, about how to handle offline-ness in general.
Do I cache as much data to the device as I can so that I can use it while it's offline - essentially, each device would have a copy of the (relevant) production data on it? Or is it better to disable certain functionality when it's offline, so as to avoid the headache of synchronization later? I know this is a pretty specific question that depends on my app, but I'm curious to see whether others have taken this route.
Do I build the application itself to act as though it's always offline, submitting everything to a local queue of sorts that's owned by a local class (essentially abstracting away the online/offline thing), and then have the class submit things to the server as it can? What about data lookups - how can those be handled in a "Semi-live" fashion?
Or should I have the application attempt to submit requests to the server directly, in real time, and handle it if the request itself fails? I can see the potential problem of making the user wait for the timeout, but is this the most reliable way to do it?
I'm not looking for a specific solution, but really just stories of how developers accomplish this with the smoothest user experience possible, with a link to a how-to or heres-what-to-consider or something like that. Thanks for your pointers on this!
We can't give you a definitive answer because there is no "right" answer that fits all usage scenarios. For example if you're using SQL Server on the back end and SQL CE locally, you could always set up merge replication and have the data engine handle all of this for you. That's pretty clean. Using the offline application block might solve it. Using store and forward might be an option.
You could store locally and then roll your own synchronization with a direct connection, web service, or WCF service used when a network is detected. You could use MSMQ for delivery.
What you have to think about is not what the "right" way is, but how your implementation will affect application usability. If you disable features due to lack of connectivity, is the app still usable? If you have stale data, is that a problem? Maybe some critical data needs to be transferred when you have GSM/GPRS (which typically isn't free) and more would be done when you have 802.11. Maybe you can run all day with lookup tables pulled down in the morning and upload only transactions, with the device tracking what changes it's made.
Basically it really depends on how it's used, the nature of the data, the importance of data transactions between fielded devices, the effect of data latency, and probably other factors I can't think of offhand.
So the first step is to determine how the app needs to be used, then determine the infrastructure and architecture to provide the connectivity and data access required.
I haven't used it myself, but have you looked into the "store and forward" capabilities of the CF? It may suit your needs. I believe it uses an Exchange mailbox as a message queue to send SOAP packets to and from the device.
The best way to approach this is to always work offline, then use message queues to handle sending changes to and from the device. When the driver marks something as delivered, for example, update the item as delivered in your local store and also place a message in an outgoing queue to tell the server it's been delivered. When the connection is up, send any queued items back to the server and get any messages that have been queued up from the server.
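The always-work-offline pattern in that last answer might look like this in outline - Python rather than VB.NET/CF, and every name here is hypothetical:

```python
import json
from collections import deque

class OfflineQueue:
    # Sketch of "always work offline": updates go to a local queue,
    # and flush() sends whatever it can whenever a connection appears.
    def __init__(self, send):
        self.send = send        # callable that raises on network failure
        self.pending = deque()

    def record(self, event):
        # Update the local store first (omitted), then queue the
        # change so the server eventually hears about it too.
        self.pending.append(event)

    def flush(self):
        # Called when connectivity is detected (or periodically).
        sent = 0
        while self.pending:
            event = self.pending[0]
            try:
                self.send(json.dumps(event))
            except ConnectionError:
                break           # still offline; keep the item queued
            self.pending.popleft()  # remove only after a successful send
            sent += 1
        return sent
```

Note that an item is only dequeued after the send succeeds, so a drop in signal mid-flush never loses a scan - the delivery just happens on the next attempt.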
