How can a number of angular clients communicate between themselves even when they lose connection to a central server? - angularjs

So the scenario is like this...
I have a number of different users in an organization. Each has their own session of an AngularJS app running in their browser. They share an internet connection over a local LAN.
I need them to continue working together (data, notifications, etc.) even when they lose internet, i.e. communication with the central server.
What is the best architecture for solving this?

Having clients communicate directly, without a server, requires peer-to-peer connections.
If your users are updating data that should be reflected in the database, then you will have to cache that data locally on each client until the server is available again. But if you want to first share that data with other peers, then you need to think carefully about which client updates the database when the server comes back up: should it be the original client that made the edit (who may no longer be online), or the first client that re-establishes a server connection? There is a lot to consider in your architecture.
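The local caching side can be sketched in plain JavaScript (all names here are hypothetical, a minimal sketch rather than a full implementation): edits are queued while the server is unreachable and flushed once connectivity returns.

```javascript
// Minimal sketch of an offline edit queue (all names hypothetical).
class EditQueue {
  constructor(sendFn) {
    this.sendFn = sendFn;   // sends one edit to the server, returns true on success
    this.pending = [];      // edits cached while the server is unreachable
  }
  record(edit) {
    this.pending.push(edit);
  }
  // Call this when connectivity is restored; returns the number of edits flushed.
  flush() {
    let flushed = 0;
    while (this.pending.length > 0) {
      const edit = this.pending[0];
      if (!this.sendFn(edit)) break; // server still unreachable, keep caching
      this.pending.shift();
      flushed++;
    }
    return flushed;
  }
}

// Usage: queue edits while offline, flush when the server is back.
const sent = [];
const q = new EditQueue((e) => { sent.push(e); return true; });
q.record({ field: 'status', value: 'done' });
q.record({ field: 'owner', value: 'amira' });
console.log(q.flush()); // 2
console.log(sent.length); // 2
```

The "which client flushes" question from above still applies: in a real app each queued edit would carry the editor's identity and a timestamp so the server can resolve conflicts when several clients flush overlapping edits.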
To cope with this scenario you need the Angular service-worker library, which you can read about here.
If you just want the clients/users to communicate without persisting data in the database (e.g. simple chat messages), then you don't have to worry about the above complexity.
Refer to this example, which shows how to use the simple-peer library with Angular2.

An assisting answer (doesn't fit in a comment) was provided here: https://github.com/amark/gun/issues/506
Here it is:
Since GUN can connect to multiple peers, you can have the browser connect to both outside/external servers AND peers running on your local area network. All you have to do is npm install gun, then npm start it on a few machines within your LAN, and then hardcode/refresh/update their local IPs in the browser app (you could perhaps even use GUN itself for that, by storing/syncing a table of local IPs as they update/change).
Ideally we would all use WebRTC and have our browsers connect to each other directly. This is possible, but it has a big problem: WebRTC depends upon a relay/signal server every time the browser is refreshed. This is kinda stupid and is the browser/WebRTC's fault, not GUN's (or other P2P systems'). So either way, you'd also have to do (1).
If you are on the same computer, in the same browser, in the same browser session, it is possible to relay changes (although I didn't bother to code for this, as it is kinda useless behavior) - but it wouldn't work with other machines on your LAN.
Summary: As long as you are running some local peers within your network, and can access them locally, then you can do "offline" (where offline here is referencing external/outside network) sync with GUN.
GUN is also offline-first in that, even if 2 machines are truly disconnected, if they make local edits while they are offline, they will sync properly when the machines eventually come back online/reconnect.
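A minimal bootstrap for the setup described above might look like this. Treat it as a configuration sketch, not a tested example: the relay URL and LAN IPs are placeholders, and 8765 is the port GUN's example server commonly uses.

```javascript
// Browser side: connect to both an external relay and local LAN peers.
// The URLs/IPs below are placeholders for machines running `npm start` with gun.
const Gun = require('gun');
const gun = Gun({
  peers: [
    'https://your-relay.example.com/gun', // external/outside server (placeholder)
    'http://192.168.1.10:8765/gun',       // local LAN peer (placeholder IP)
    'http://192.168.1.11:8765/gun'
  ]
});
// Reads and writes sync across whichever of these peers are currently reachable.
gun.get('chat').get('latest').put({ msg: 'hello LAN' });
```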
I hope this helps.

How to prevent people or a program from extracting data out of a system?

Let us say there is a system containing data, where the user can view or manipulate it using the options in the system, but should not be able to copy/extract/export the data out of the system. Also, no bots such as RPA tools or crawlers should be able to export it. The data strictly resides in the system.
E.g. VDI (Virtual Desktop Infrastructure) does something like this. People can connect to remote machines and do some work, but cannot extract data out of them to their local machine unless the system allows the user to do so. Even RPA bots will not be allowed to run in that remote system; they can only run in the local system, and building such a bot would be tedious. That is the closest existing solution to the above problem.
I am just looking for alternate, lightweight options. Please let me know if there is any solution available.
There is simply no way of stopping all information export.
A user could just take a photo of the screen and share the info.
If by exporting you mean exporting files, then simply do not allow exporting files in your program, or restrict the option. If you need to store data on the disk, store it encrypted.
The best option would be to configure a machine to run only that software: on boot it would launch the software fullscreen, deny any USB autorun keys, have something like Veyon installed so it can be remotely controlled, and keep some config data on the disk but pretty much all the data on a remote server.
If you need a local cache, you can keep it encrypted.
That said, theoretically, if a user had physical access to the RAM, he/she could retrieve that data, but it is highly unlikely.
First of all, you'll have to make SSH and FTP useless! This is to prevent scp or other FTP software from being used to move things from inside the system out, and vice versa. Block ports 20, 21, and 22!
If possible, I'd block access to cloud storage services (at the DNS/firewall level), so that no one with access to the machine would be able to upload stuff to common cloud services, or to any known address that might be a potential destination for your protected data. Make sure that online code repositories are also blocked! If the data can be stored as text, it can also be transferred to github/gitlab/bitbucket as a normal repo... you can block them at the DNS level too. Make sure that users don't have the privilege to change network settings, otherwise they can bypass your DNS blocks!
You should prevent any kind of external storage connectivity, by disallowing your VM from connecting to the server's USB ports, or even Bluetooth if it exists.
That's off the top of my head... I'll edit this answer if I remember any more things to block.

Vuejs/JavaScript and Outbound Transfer Behavior

I'm currently trying to wrap my head around outbound transfer as a project of mine makes it a concern.
I discovered that if I try to play music directly off of my server, it counts towards my outbound transfer, which is understandable.
My idea is this: if I host the file elsewhere, would the outbound transfer be counted against my initial server, the 3rd-party server, or both? I'm considering putting the music on Dropbox, for example, and streaming it from there through my server.
Is what I want even possible?
"outbound transfer" in this case most likely refers to the amount of bytes sent from that server. If you proxy the 3rd party server, you still send that data through your own server, so it won't net you any benefit, other than storage space. In fact, the latency will probably increase.
What you want to do is of course possible if you let the client connect directly to the streaming service. Just make sure that service allows you to stream data that way through their TOS. Also make sure that the service is actually designed for live streaming of data, or your user experience will be horrible.

Scaling WebSockets on Google Compute Engine

I would like to implement a chat system as part of a game I am developing on App Engine. To implement this, I would like to use WebSockets, and have clients connect to each other though a hub, in this case an instance of GCE. Assuming this game needed to scale to multiple instances on GCE, how would this work? If I had a client 1, and the load balancer directed that request of client 1 to instance A, and another client (2) came in and was directed to instance B, but those clients wanted to chat with each other, they would each be connected to different hubs, and would be unable to reach each other. How would this be set up to work with scale? Would I implement it using queues, where each instance listens on that queue, and if so, how would I do that?
Google Play Game Services offers exactly the functionality that you want, but only for Android and iOS clients, so this option may not be compatible with your game's tech design.
In general your reasoning is correct. Messages from clients who want to talk to each other will most of the time hit different server instances. What you want is for the instances to handle the communication between users. Pub/sub (the publish-subscribe pattern) is very suitable in this scenario. Roughly:
whenever there's a message directed to client X a message is published on the channel X,
whenever client X creates a session, instance handling it subscribes to channel X.
You can use one of many existing solutions for starters. It's very easy to set this up using Redis. If you need something more low-level and more flexible, check out ZeroMQ.
You can expect a single instance of either solution to be able to handle thousands of QPS.
Unfortunately I don't have any experience with scaling either of these solutions, so I can't offer you any practical advice as to the limits of their scalability.
PS. There are also other topics you may want to explore, such as message persistence and failure recovery, which I didn't address here at all.
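The channel-per-user pattern above can be sketched in-process like this (a plain map stands in for the broker; with Redis or ZeroMQ, publish/subscribe would cross instance boundaries instead):

```javascript
// In-process stand-in for a pub/sub broker (Redis etc. would replace this map).
class Broker {
  constructor() { this.channels = new Map(); }
  subscribe(channel, handler) {
    if (!this.channels.has(channel)) this.channels.set(channel, []);
    this.channels.get(channel).push(handler);
  }
  publish(channel, message) {
    (this.channels.get(channel) || []).forEach((h) => h(message));
  }
}

const broker = new Broker();
const inboxX = [];

// Instance B: client X opens a session, so B subscribes to channel "user:X".
broker.subscribe('user:X', (msg) => inboxX.push(msg));

// Instance A: a message directed to client X is published on channel "user:X".
broker.publish('user:X', { from: 'Y', text: 'hi X' });

console.log(inboxX.length); // 1
```

The key property is that instance A never needs to know which instance holds X's WebSocket; it only publishes to X's channel, and whichever instance is subscribed delivers it.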
I haven't tried to implement this yet, but I'll probably have to soon. I think it should be fairly simple to handle yourself.
You have server 1 with a list of clients and server 2 with another list of clients,
so if a client wants to send data to another client, which might be on server 2, you have to:
Look up whether the receiver is on the current server - if it is, you just send it (standard).
Otherwise you send the same data to all the other servers, so they check their lists for that particular client (or clients) and send the data to them.
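The lookup-then-broadcast logic above can be sketched like this (plain JavaScript, with in-memory objects standing in for the server-to-server transport; all names are hypothetical):

```javascript
// Each server keeps its own client list; unknown recipients are forwarded
// to the other servers, which deliver only if the client is local to them.
function makeServer(name) {
  return { name, clients: new Map(), peers: [] };
}
function connect(server, clientId) {
  server.clients.set(clientId, []); // the client's inbox
}
function send(server, clientId, data, forwarded = false) {
  if (server.clients.has(clientId)) {          // 1. receiver is local: deliver
    server.clients.get(clientId).push(data);
    return true;
  }
  if (forwarded) return false;                 // one hop only, avoid loops
  // 2. otherwise forward to every other server, which checks its own list
  return server.peers.some((peer) => send(peer, clientId, data, true));
}

const s1 = makeServer('server1');
const s2 = makeServer('server2');
s1.peers.push(s2);
s2.peers.push(s1);
connect(s1, 'alice');
connect(s2, 'bob');

send(s1, 'bob', 'hello from alice'); // not on s1, forwarded to s2
console.log(s2.clients.get('bob')); // [ 'hello from alice' ]
```

Note the broadcast cost grows with the number of servers, which is why the pub/sub approach in the other answer scales better: the broker routes each message to exactly the instance that needs it.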

Best method to secure connection to firebird over internet

I have a client-server application which uses a Firebird 2.5 server over the internet.
I have met the problem of giving secure access to the FB databases. As a first approach I tried to solve it by integrating a tunnel solution into the application (the STunnel software, more exactly). BUT this approach suffers in several respects:
- it adds more resource consumption (CPU, memory, threads) at both the client and server side,
- software deployment becomes a serious problem, because STunnel is written as a WinNT service, not a DLL or a component (a WinNT service needs administrator privileges to install),
and my client application needs to run without installation!
SO, I decided to take the bull by the horns (or the bird by the feathers, as we are talking about Firebird). I downloaded the Firebird 2.5 source code and injected secure tunnelling code directly into its low-level communication layer (the INET socket layer).
NOW, encryption/decryption is done directly by the Firebird engine for each TCP/IP packet.
What do you think about this approach vs external tunnelization ?
I would recommend wrapping the data exchange in an SSL/TLS stream, on both sides. This is a proven standard,
while custom implementations with static keys can be insecure.
For instance, CTR mode with a constant IV can reveal a lot of information, since it only encrypts an incremented vector and XORs it with the data, so XORing two encrypted packets yields the XOR of the unencrypted packets.
In general, my view of security critical code is this, "you want as many eyes on the code in question as possible and you do not want to be maintaining it yourself." The reason is that we all make mistakes and in a collaborative environment these are more likely to be caught. Additionally these are likely to be better tested.
In my view there are a few acceptable solutions here. All approaches do add some overhead but this overhead could, if you want, be handled on a separate server if that becomes necessary. Possibilities include:
stunnel
IPSec (one of my favorites). Note that with IPSec you can create tunnels, and these can then be forwarded on to other hosts, so you can move your VPN management onto a computer other than your db host. You can also do IPSec directly to the host.
PPTP
Cross-platform vpn software like tinc and the like.
Note that in security there is no free lunch; you need to review your requirements very carefully and make sure you thoroughly understand the solutions you are working with.
The stunnel suggestion is a good one, but, if that's not suitable, you can run a true trusted VPN of sorts, in a VM. (Try saying that a few times.) It's a bit strange, but it would work something like this:
Set up a VM on the firebird machine and give that VM two interfaces: one which goes out to your external LAN (best if you can actually bind a LAN card to it) and one that is a host-only LAN to firebird.
Load an openvpn server into that VM and use both client and server certificates.
Run your openvpn client on your clients.
Strange, but it ensures the following:
Your clients don't get to connect to the server unless BOTH the client and server agree on the certificates.
Your firebird service only accepts connections over this trusted VPN link.
Technically, local entities could still connect to the firebird server outside of the VPN if you wanted it -- for example, a developer console on the same local LAN.
The fastest way to get things done would not be to improve firebird, but improve your connection.
Get two firewall devices which can do SSL certificate authentication and put them in front of your DB server and your Firebird client.
Let the firewall devices do the encryption/decryption, and have your DB server do its job without the hassle of meddling with every packet.

.NET CF mobile device application - best methodology to handle potential offline-ness?

I'm building a mobile application in VB.NET (compact framework), and I'm wondering what the best way to approach the potential offline interactions on the device. Basically, the devices have cellular and 802.11, but may still be offline (where there's poor reception, etc). A driver will scan boxes as they leave his truck, and I want to update the new location - immediately if there's network signal, or queued if it's offline and handled later. It made me think, though, about how to handle offline-ness in general.
Do I cache as much data to the device as I can, so that I can use it when it's offline - essentially, each device would have a copy of the (relevant) production data on it? Or is it better to disable certain functionality when it's offline, so as to avoid the headache of synchronization later? I know this is a pretty specific question that depends on my app, but I'm curious to see whether others have taken this route.
Do I build the application itself to act as though it's always offline, submitting everything to a local queue of sorts that's owned by a local class (essentially abstracting away the online/offline thing), and then have the class submit things to the server as it can? What about data lookups - how can those be handled in a "Semi-live" fashion?
Or should I have the application attempt to submit requests to the server directly, in real time, and handle it if the request itself fails? I can see the potential problem of making the user wait for a timeout, but is this the most reliable way to do it?
I'm not looking for a specific solution, but really just stories of how developers accomplish this with the smoothest user experience possible, with a link to a how-to or heres-what-to-consider or something like that. Thanks for your pointers on this!
We can't give you a definitive answer because there is no "right" answer that fits all usage scenarios. For example if you're using SQL Server on the back end and SQL CE locally, you could always set up merge replication and have the data engine handle all of this for you. That's pretty clean. Using the offline application block might solve it. Using store and forward might be an option.
You could store locally and then roll your own synchronization over a direct connection, web service, or WCF service used when a network is detected. You could use MSMQ for delivery.
What you have to think about is not what the "right" way is, but how your implementation will affect application usability. If you disable features due to lack of connectivity, is the app still usable? If you have stale data, is that a problem? Maybe some critical data needs to be transferred when you have GSM/GPRS (which typically isn't free) and more would be done when you have 802.11. Maybe you can run all day with lookup tables pulled down in the morning and upload only transactions, with the device tracking what changes it's made.
Basically it really depends on how it's used, the nature of the data, the importance of data transactions between fielded devices, the effect of data latency, and probably other factors I can't think of offhand.
So the first step is to determine how the app needs to be used, then determine the infrastructure and architecture to provide the connectivity and data access required.
I haven't used it myself, but have you looked into the "store and forward" capabilities of the CF? It may suit your needs. I believe it uses an Exchange mailbox as a message queue to send SOAP packets to and from the device.
The best way to approach this is to always work offline, then use message queues to handle sending changes to and from the device. When the driver marks something as delivered, for example, update the item as delivered in your local store and also place a message in an outgoing queue to tell the server it's been delivered. When the connection is up, send any queued items back to the server and get any messages that have been queued up from the server.
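That work-offline-always pattern can be sketched like this (plain JavaScript rather than VB.NET, and every name is hypothetical): each change hits the local store first and lands in an outbox that drains whenever a connection appears.

```javascript
// Sketch of an offline-first outbox: update locally, queue for the server.
class DeliveryTracker {
  constructor(transport) {
    this.transport = transport;      // function(msg) -> true if the server accepted it
    this.localStore = new Map();     // local copy of item state
    this.outbox = [];                // messages awaiting a connection
  }
  markDelivered(itemId) {
    this.localStore.set(itemId, 'delivered');        // 1. update the local store
    this.outbox.push({ type: 'delivered', itemId }); // 2. queue a message for the server
  }
  // Call whenever the connection comes up; stops again if a send fails.
  sync() {
    while (this.outbox.length > 0 && this.transport(this.outbox[0])) {
      this.outbox.shift();
    }
  }
}

// Usage: the driver scans while offline, then a sync once the network is back.
const received = [];
let online = false;
const tracker = new DeliveryTracker((m) => online && (received.push(m), true));
tracker.markDelivered('box-17');
tracker.markDelivered('box-18');
tracker.sync();                 // offline: nothing sent, outbox keeps both
online = true;
tracker.sync();                 // online: outbox drains
console.log(received.length); // 2
```

Because the local store is updated immediately, the UI never waits on the network; the outbox is the only part that cares whether the device is online.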
