Client-server architecture design - database

The application follows a client-server model.
The client application has a local database which the customer uses for their day-to-day transactions.
The server holds another database which has consolidated information from the clients, along with some other key data.
Periodically, the client and server need to communicate for data migration, accessing data from the server that is not available on the client side, and so on.
Neither the client nor the server has a static IP address.
How can I make sure the client can connect to the server seamlessly?
Putting everything in a single location (e.g. in the cloud or at a datacentre) is not an option due to business requirements.

If there's a single server, why can't it have a static IP?
Does it always have an outside-visible IP? Then dynamic DNS is for you - the server notifies the DNS provider whenever its IP address changes.
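A minimal sketch of that approach in Python, assuming a hypothetical dynamic-DNS provider that accepts updates over HTTPS (the update URL, token and hostname below are placeholders, not a real API):

```python
import requests  # pip install requests

# Placeholders for a hypothetical dynamic-DNS provider; substitute your
# provider's real update endpoint, hostname and credentials.
UPDATE_URL = "https://dyndns.example.com/update"
HOSTNAME = "server.mydomain.example"
TOKEN = "secret-update-token"

def current_public_ip() -> str:
    # Any "what is my IP" service will do; this one is just an example.
    return requests.get("https://api.ipify.org", timeout=10).text.strip()

def update_dns(ip: str) -> None:
    # Most providers accept the hostname, an auth token and the new address.
    resp = requests.get(UPDATE_URL,
                        params={"hostname": HOSTNAME, "token": TOKEN, "ip": ip},
                        timeout=10)
    resp.raise_for_status()

if __name__ == "__main__":
    # Run this from cron / Task Scheduler on the server every few minutes.
    update_dns(current_public_ip())
```

Clients then always connect to the DNS name rather than to an IP address.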
If neither of them has a publicly reachable address, you'll need a mediator (proxy) which does. This mediator will either be able to see the data, or the data will be encrypted end-to-end so that it cannot.
Such a mediator could be anything, for example an XMPP server, where the server would be assigned a specific JID, like server@mydomain, and clients would be assigned their own JIDs (like customername@mydomain); or perhaps some PubSub solution; or it could even be an e-mail-based solution (yes, that's dirty), where both the client and the server periodically read their mailboxes.
I guess most ESB solutions would do as well.
The main thing is that, in order to create a client-server architecture on the internet, the server (or a mediator which helps to reach the server with its own application-specific protocol) must be publicly reachable.
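To illustrate the mediator idea without committing to XMPP or a particular ESB, here is a rough sketch of both sides exchanging messages through a publicly reachable HTTP "mailbox"; the mediator URL, identities and message format are invented for this example:

```python
import json
import time
import requests  # pip install requests

MEDIATOR = "https://mediator.example.com"  # publicly reachable; invented for this sketch
MY_ID = "customername"                     # this party's identity at the mediator

def send(to: str, payload: dict) -> None:
    # Drop a message into the recipient's mailbox on the mediator.
    requests.post(f"{MEDIATOR}/mailbox/{to}",
                  json={"from": MY_ID, "body": payload},
                  timeout=10).raise_for_status()

def poll() -> list:
    # Fetch whatever is waiting in our own mailbox.
    resp = requests.get(f"{MEDIATOR}/mailbox/{MY_ID}", timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    send("server", {"type": "sync-request", "since": "2014-01-01"})
    while True:
        for msg in poll():
            print("received:", json.dumps(msg))
        time.sleep(30)  # both sides only ever make outbound connections
```

Because both the client and the server only make outbound connections to the mediator, neither of them needs a static or publicly reachable IP address.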

Related

Connecting to a remote SQL Server without configuring the firewall

I have an SQL Server located in the US. I've written a program that connects to a database on the server and retrieves data from it. The users of the program are spread around the world. The majority of them can easily use the program (i.e. the connection is successfully established).
But some of the users who try to run the program from inside their office building can't connect to the server because of their companies' firewalls. Since the number and location of the users is not known (the application is distributed for free with no notification to me), customizing every firewall isn't really an option (even though it helped when I was able to do it).
I believe there should be an option like a kind of "certificate" that has to be embedded in my program and registered somewhere on the user's machine that would allow establishing the connection. Or anything of that sort. Unfortunately, I haven't found anything specific on the Internet, most probably because I googled the wrong words.
Any help or advice is very much appreciated!
If a firewall (or other security device) is blocking the connection, then there is no magic bullet: you need to avoid talking to SQL Server directly.
Even if you changed the port, many of those company workers will be limited to HTTP(S) access, and then only via a proxy.
So you need to talk HTTP to an API you provide, and the implementation of that API then talks (under your control) to the database.
This has the enormous advantage of giving you an extra layer protecting the integrity of the data in the database.
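A minimal sketch of such an API layer, assuming Flask and pyodbc on the server side; the connection string, table and endpoint are placeholders for whatever your application actually needs:

```python
from flask import Flask, jsonify, request  # pip install flask pyodbc
import pyodbc

app = Flask(__name__)

# Only this API server talks to the database; end users never see this string.
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=db.internal.example;DATABASE=Sales;UID=api_user;PWD=...")

@app.route("/orders")
def list_orders():
    # Hypothetical endpoint: expose only the specific queries the client needs.
    customer = request.args.get("customer", "")
    with pyodbc.connect(CONN_STR) as cn:
        rows = cn.execute(
            "SELECT OrderId, OrderDate, Total FROM dbo.Orders WHERE CustomerName = ?",
            customer).fetchall()
    return jsonify([{"id": r.OrderId, "date": str(r.OrderDate), "total": float(r.Total)}
                    for r in rows])

if __name__ == "__main__":
    # In production, put this behind IIS/nginx and serve it over HTTPS on port 443
    # so that corporate proxies will let the traffic through.
    app.run(host="0.0.0.0", port=8080)
```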
To build a connection, the firewall at the client's site needs to allow access to the IP address where your SQL Server is running.
You could show users a message asking them to allow access to the SQL Server's IP address, but it is not safe to do so due to security concerns.
Instead, you can build a third application, such as a web service, that takes requests from clients and forwards them to your SQL Server. Host this application on a public IP and inform the clients that they need to allow that address through their firewall to run the program. This keeps the database secure and solves your problem.
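On the client side, the program would then call that web service over HTTPS instead of opening a database connection; the URL and endpoint below are invented to match the sketch above:

```python
import requests  # pip install requests

API = "https://api.example.com"  # the publicly hosted forwarding service (placeholder)

def fetch_orders(customer: str) -> list:
    # Plain HTTPS on port 443, which corporate firewalls and proxies normally allow.
    resp = requests.get(f"{API}/orders", params={"customer": customer}, timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for order in fetch_orders("Acme Corp"):
        print(order["id"], order["date"], order["total"])
```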

Transactional replication on tablet devices

I have an MS SQL 2012 Enterprise server (publisher & distributor) on a desktop PC which is constantly online (public static IP address), and it needs to do transactional replication to tablet devices that also run MS SQL 2012 (Express). Since the tablets don't have static IP addresses, I'm worried whether this will work.
I thought about using push subscriptions, but will this work if the tablets change their IPs constantly? Or should I use pull subscriptions? Or would either be fine?
The IP address won't be an issue as long as DNS is constantly updated to resolve the correct name/IP. I've done this previously with fixed devices (desktop PCs). You may have to bump down the TTLs on those machine names to something pretty low if they reconnect frequently with new IPs.
My $.02: I would probably advise against creating a push model at the server because of the dynamic nature of the roaming clients. You might want to create the publication at the server side and use a pull-style method. If the clients are constantly connecting and disconnecting, it will most likely aggravate the distribution agent. If you do use push, you'll want to really consider how frequently you build the snapshots and how you handle initializations and expirations.
MSDN suggests the following use cases:
http://msdn.microsoft.com/en-us/library/ms151170.aspx

Database mirroring between Windows Azure VMs

I have two Windows Server 2012 R2 Datacenter VMs with SQL Server 2012 Standard running. I installed both my application and the DB server on each of the VMs. Both VMs reside within the same cloud service.
I also set up load balancing between the two VMs on port 80. Now it's a matter of mirroring the databases. I tried to set up SQL mirroring but have had no luck so far; I'm not sure how these two VMs can communicate with each other via the same port, 5022.
I've done some reading but I'm still not sure what the right way of doing this is. I definitely need help now.
Questions:
a) Do I need to set up a virtual network in order to mirror databases?
b) Can I mirror databases that reside within the same virtual network?
c) If my assumptions above are incorrect, what is the best way forward on this?
Thanks in advance!!
UPDATE: I managed to set up the principal server on VM1 and both the mirror and witness servers on VM2 (if you have the resources, it's best to have them on separate VMs). Both VMs reside within the same virtual network and the same cloud service.
So when the principal is not available, the witness automatically promotes the mirror to principal and it is no longer stuck in the recovering state.
If you're planning to have both the witness and mirror SQL instances on the same server, make sure you use a different port for the witness endpoint.
e.g.
- Principal : 5022
- Mirror : 5022
- Witness : 5023
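For completeness, clients of a mirrored database should name the mirror as a failover partner in their connection string so they can reconnect after an automatic failover. A rough sketch with pyodbc (server names, database and credentials are placeholders, and the exact keyword can vary between drivers):

```python
import pyodbc  # pip install pyodbc

# "Failover_Partner" tells the driver to retry against the mirror (vm2)
# when the principal (vm1) is unavailable. All names here are placeholders.
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=vm1;Failover_Partner=vm2;"
            "DATABASE=MyAppDb;UID=app_user;PWD=...")

with pyodbc.connect(CONN_STR) as cn:
    row = cn.execute("SELECT @@SERVERNAME").fetchone()
    print("Connected to:", row[0])  # shows which partner served the connection
```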
They can be in the same cloud service, and placing them into the same cloud service will make life simpler, because there is no need to deal with public endpoints. In fact, a cloud service is just a container for VMs and/or roles that is also associated with an implicitly created network (thus you need an endpoint to access it from outside).
It is also recommended to put the VMs into the same availability set; in that case Azure will try not to shut down all the VMs at the same time.
The simplest way to set up database mirroring is to use certificates. You can see an example here.
Note: don't forget to set up a witness if you need automatic failover.
In order for two Azure VMs to talk to each other, they need to be set up in the same affinity group.
A virtual network is a specialized affinity group, where you are able to control the IPs.
Once the virtual network is set up, you should have no problem setting up replication for your database.

Payment Card Industry DSS - Storing cardholder data in systems not connected to the Internet

Background
Though I've looked through some posts on Stack Overflow that partially cover this point, I have yet to find one that provides a comprehensive question/answer.
As a developer of POS systems, I'm interested in two components of the PCI DSS:
PA-DSS (Payment Application), which concerns the software I develop
PCI DSS (Merchants), which concerns all my clients that use the software
The PA DSS seems to put the point most bluntly:
"9.1 The payment application must be developed such that the database server and web server are not required to be on the same server, nor is the database server required to be in the DMZ with the web server"
Testing Procedures:
9.1.a To verify that the payment application stores cardholder data in the internal network, and never in the DMZ, obtain evidence that the payment application does not require data storage in the DMZ, and will allow use of a DMZ to separate the Internet from systems storing cardholder data (e.g., payment application must not require that the database server and web server be on the same server, or in the DMZ with the web server).
9.1.b If customers could store cardholder data on a server connected to the Internet, examine PA-DSS Implementation Guide prepared by vendor to verify customers and resellers/integrators are told not to store cardholder data on Internet-accessible systems (e.g., web server and database server must not be on same server).
And from the merchant's PCI DSS:
1.3.5 Restrict outbound traffic from the cardholder data environment to the Internet such that outbound traffic can only access IP addresses within the DMZ.
Question
My question is quite simple - can the database and application server be logically different (on different virtualised OS) or must they be physically different (on different physical/dedicated servers)?
Also, I'm a bit concerned about having to place a database server with no connection to the Internet whatsoever. How am I supposed to administer this server remotely?
Or is it okay to access the database server via the application server - though surely that defeats the purpose?
No simple answer, sadly.
The SSC has released a new supplement on virtualisation which has some relevant information: https://www.pcisecuritystandards.org/documents/Virtualization_InfoSupp_v2.pdf
While mixing guest OSs of different functions on the same hypervisor is not prohibited, you will need to show that you've thought about the extra risk that this brings.
They will also have to be logically separated, with network traffic from one VM to the other going through a firewall of some sort to protect the different OSs and applications. Being on the same physical host is not an excuse for skipping controls like firewalling, so you may have to be creative about how you meet these requirements.

How to secure database traffic the other way around, that is to say, from client to server

My scenario:
I am trying to develop a service which will query different databases.
To clear the above statement up:
I use the word service in its broadest sense: a software component that will provide some value to the database owner.
These databases will in no way be under my control, as they will belong to different companies. They won't be known beforehand, and multiple vendors are to be supported: Oracle, MS SQL Server, MySQL, PostgreSQL. Also, OLE DB and ODBC connections will be supported.
The problem: security of the database credentials and of the overall traffic is a big concern, but the configuration effort should be kept to a minimum. Ideally, all the security issues should be addressed programmatically in the service implementation and require no configuration effort from the database owner other than providing a valid connection string.
Usually, database SSL support is done through server certificates, which I want to avoid as it is cumbersome for the client (the database owner).
I have been looking into how to do this, to no avail. Hopefully this might be done with OpenSSL, SSPI, client SSL certificates or some form of tunnelling; or maybe it is just not possible. Some advice would be greatly appreciated.
I am having a bit of difficulty understanding how this service would work without being extremely cumbersome for the database owner, even before you try to secure the traffic to the database.
Take Oracle in particular (though I assume there would be similar issues with other databases). In order for your service to access an Oracle database, the owner of the database would have to open up a hole in their firewall to allow your server(s) to access the database on a particular port so they would need to know the IP addresses of your servers and there is a good chance that they would need to configure a service that does all of its communication on a single port (by default, the Oracle listener will frequently redirect the client to a different port for the actual interaction with the database). If they are at all security conscious, they would have to install Oracle Connection Manager on a separate machine to proxy the connection between your server and the database rather than exposing the database directly to the internet. That's quite a bit of configuration work that would be required internally and that's assuming that the database account already exists with appropriate privileges and that everyone signs off on granting database access from outside the firewall.
If you then want to encrypt communication with the database, you'd either need to establish a VPN connection to the database owner's network (which would potentially eliminate some of the firewall issues) or you'd need to use something like Oracle Advanced Security to encrypt the communication between your servers. Creating VPN connections to many different customer networks would require a potentially huge configuration effort and could require that you maintain one server per customer because different customers will have different VPN software requirements that may be mutually incompatible. The Advanced Security option is an extra cost license on top of the enterprise edition Oracle license that the customer would have to go out and purchase (and it would not be cheap). You'd only get to the point of worrying about getting an appropriate SSL certificate once all these other hoops had been jumped through. The SSL certificate exchange would seem like the easiest part of the whole process.
And that's just to support Oracle. Support for other databases will involve a similar series of steps but the exact process will tend to be slightly different.
I would tend to expect that you'd be better served, depending on the business problem you're trying to solve, by creating a product that your customers could install on their own servers inside their network. It would connect to the database and either send data to your central server via something like HTTPS POST calls, or listen for HTTPS requests that it would pass to the database, returning the results over the same channel.
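A rough sketch of what such an installable agent might look like, assuming the customer supplies an ODBC connection string locally and that your central endpoint is a hypothetical HTTPS URL (the query and payload format are invented for the example):

```python
import pyodbc
import requests  # pip install pyodbc requests

# Both values would come from the customer's local configuration file;
# the central URL and the query below are invented for this sketch.
CONNECTION_STRING = "DSN=CustomerDb;UID=report_user;PWD=..."
CENTRAL_URL = "https://collector.example.com/api/upload"

def collect() -> list:
    # The agent runs inside the customer's network, so the database is never
    # exposed to the Internet and no inbound firewall holes are needed.
    with pyodbc.connect(CONNECTION_STRING) as cn:
        cursor = cn.execute("SELECT id, amount, created_at FROM sales")
        cols = [d[0] for d in cursor.description]
        # Stringify values so dates and decimals serialize cleanly to JSON.
        return [dict(zip(cols, [str(v) for v in row])) for row in cursor.fetchall()]

def upload(rows: list) -> None:
    # Outbound HTTPS only: the traffic is protected by TLS without any
    # per-database SSL configuration on the customer's side.
    requests.post(CENTRAL_URL, json={"rows": rows}, timeout=60).raise_for_status()

if __name__ == "__main__":
    upload(collect())
```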
SSL is very important in order to keep a client's database safe, but there is more to it than that. You have to make sure that each database account is locked down: each client must only have access to their own database. Furthermore, every database has other privileges which are nasty. For instance, MySQL has FILE_PRIV, which allows an account to read/write files. MS SQL has xp_cmdshell, which allows the user to access cmd.exe from SQL (why would they do this!?). PostgreSQL allows you to write stored procedures in any language, and from there you can call all sorts of nasty functions.
Then there are other problems. A malformed query can cause a buffer overflow, which will give an attacker the keys to the kingdom. You have to make sure all of your databases are up to date, and then pray no one drops a 0-day.

Resources