I have a SQL Server instance located in the US. I've written a program that connects to a database on that server and retrieves data from it. The users of the program are spread around the world. The majority of them can use the program without problems (i.e. the connection is successfully established).
But some of the users who try to run the program from inside their office buildings can't connect to the server because of their companies' firewalls. Since the number and location of the users are not known (the application is distributed for free, with no notification to me), customizing every firewall isn't really an option (even though it helped in the cases where I was able to do it).
I believe there should be an option like a kind of "certificate" that is embedded in my program and registered somewhere on the user's machine, which would allow the connection to be established. Or anything of that sort. Unfortunately, I haven't found anything specific on the Internet, most probably because I searched for the wrong terms.
Any help or advice is very much appreciated!
If a firewall (or other security device) is blocking the connection, then there is no magic bullet. You need to avoid talking to SQL Server directly.
Even if you changed the port, many of those company workers will be limited to HTTP(S) access, and then only via a proxy.
So you need to talk HTTP to an API you provide, and the implementation of that API then talks (under your control) to the database.
This has the enormous advantage of giving you an extra layer protecting the integrity of the data in the database.
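As a rough illustration, such an API layer could look like the sketch below. This is only an assumption about the stack: Python with Flask and pyodbc, a hypothetical route, query, and connection string - not the only way to build it.

```python
# Minimal sketch of an HTTP API in front of the database (all names hypothetical).
# Clients talk HTTPS to this service instead of opening TCP 1433 to SQL Server,
# which corporate firewalls and proxies generally allow.
import pyodbc
from flask import Flask, jsonify

app = Flask(__name__)

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=db.internal.example.com;DATABASE=AppDb;"
    "UID=api_user;PWD=placeholder-password"
)

@app.route("/api/products")
def products():
    # Only the queries exposed here can ever reach the database.
    conn = pyodbc.connect(CONN_STR)
    try:
        rows = conn.cursor().execute("SELECT Id, Name FROM Products").fetchall()
    finally:
        conn.close()
    return jsonify({"products": [{"id": r.Id, "name": r.Name} for r in rows]})

if __name__ == "__main__":
    app.run()  # in production, run behind a proper web server with TLS
```

Because the API validates and shapes every request, the database never has to accept connections from unknown networks at all.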
To establish a connection, the firewall at the client's site needs to allow access to the IP address where your SQL Server is hosted.
You could show users a message asking them to allow access to the SQL Server's IP address, but exposing the database directly like that raises security concerns.
Instead, you can build an intermediate application, such as a web service, that takes requests from clients and forwards them to your SQL Server. Host this application on a public IP address and, if necessary, tell clients to allow that address through their firewall. This keeps the database itself hidden and solves your connectivity problem.
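On the client side, the distributed program would then call that web service over HTTPS instead of opening a direct SQL connection. A minimal sketch, assuming Python's requests library and a purely placeholder URL:

```python
# Hypothetical client-side call to the intermediate web service.
# requests honours the HTTP_PROXY/HTTPS_PROXY environment variables,
# so traffic can flow through a corporate proxy without extra code.
import requests

resp = requests.get("https://api.example.com/api/products", timeout=10)
resp.raise_for_status()
for product in resp.json()["products"]:
    print(product["id"], product["name"])
```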
SUMMARY: if sites have separate application pools, can their traffic avoid contention through "NIC teaming"?
((Let me know if this is better posted on http://networkengineering.stackexchange.com))
DETAILS:
Our hosting provider has priced a scenario where NIC teaming could be done, between the server hosting our websites, and the server hosting our databases.
Tech details (in case they matter):
(1) The websites are hosted on a server running Windows Server 2008, with IIS 7.0.
(2) The databases are hosted on a server running Windows Server 2003, with SQL Server 2005.
(3) The NIC teaming scenario they described would involve each of the two servers having a dual-port 10GbE NIC, with crossover cables between them.
(4) Each site has its own web.config, and its own application pool in IIS.
(5) Currently, the connection strings to SQL Server, for each website, all look exactly the same, but we could make each website use a different connection string.
HOWEVER, the hosting provider told us we will only see "bandwidth aggregation" if
(A) Our application is coded to use the NIC teaming (it is not), or
(B) Our communication goes over more than one TCP stream.
So, here are my first two questions... call this "PLAN A" --
(I) because our sites all have separate application pools (detail #4 above -- resulting in "w3wp.exe" appearing over 10 times, in Task Manager),
would that mean we have more than one TCP stream?
(II) could there be any effective decrease in network contention -- that is, could the traffic from the different sites / different application pools travel on separate tNICs?
My third question... call this "PLAN B":
(III) If the answer to both the above is "No", then I still see a possibility of giving ONE of our sites a separate SQL Server connection string, to give it a separate NIC, or separate tNIC. Does that make sense?
It sounds like it does, if I'm understanding another post here at StackOverflow:
.NET SqlConnection NIC usage
But I'd still PREFER Plan A -- an automatic decrease in contention, based on separate application pools -- because I trust a NIC teaming solution to direct traffic far more intelligently -- based on varying demand -- than exclusively dedicating a port to one site's SQL Server traffic.
Please forgive if this is TMI... feedback welcome.
Thanks for your interest...
The number of application pools does not determine the number of TCP streams. Each HTTP request to your server will be a separate TCP stream, unless a client reuses an existing connection (HTTP keep-alive).
If you are experiencing network contention, using a teamed NIC should help you decrease it. You are creating another physical path to the server, but the router or switch will have to know to use it.
The application is a client-server model.
The client application has a local database which the customer uses for their day-to-day transactions.
The server holds another database that consolidates information from the clients, along with some other key data.
Periodically, the client and server need to communicate: for data migration, for accessing data from the server that is not available on the client side, etc.
Neither the client nor the server has a static IP address.
How can I make sure the client can connect to the server seamlessly?
Putting everything in a single location (e.g. in the cloud or at a datacentre) is not an option due to business requirements.
If there's a single server, why can't it have a static IP?
Does it always have an outside-visible IP? Then dynamic DNS is for you - the server notifies DNS servers on IP address change.
If neither of them has a publicly reachable address, you'll need a mediator (proxy) that does. This mediator will either see the data, or the data will be encrypted so that it cannot read it.
Such a mediator could be anything, for example an XMPP server, where the server would be assigned a specific JID, like server@mydomain, and clients would be assigned their own IDs (like customername@mydomain); or perhaps some PubSub solution; or it could even be an e-mail based solution (yeah, that's dirty), where both the client and server periodically read their mailboxes.
I guess most ESB solutions would also do.
The main thing is, in order to create a client-server architecture on the internet, the server (or a mediator which helps to reach the server with its own application-specific protocol) must be publicly reachable.
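For the dynamic DNS route mentioned above, the server typically just calls its provider's update endpoint whenever its public address changes. A hedged sketch follows; the dyndns2-style URL, hostname, and credentials are entirely hypothetical and depend on the provider you pick:

```python
# Hypothetical dynamic DNS update, assuming a provider with a dyndns2-style HTTP API.
# The server runs this whenever its public IP changes; clients then always
# connect by hostname, never by raw IP.
import requests

def update_dns(hostname, new_ip):
    resp = requests.get(
        "https://dyndns.example.com/nic/update",   # provider-specific endpoint
        params={"hostname": hostname, "myip": new_ip},
        auth=("ddns_user", "ddns_password"),       # placeholder credentials
        timeout=10,
    )
    resp.raise_for_status()

update_dns("server.mydomain.example", "203.0.113.17")
```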
My scenario:
I am trying to develop a service which will query different databases.
To clear the above statement up:
I use the word service in its broadest sense: a software component that will provide some value to the database owner.
These databases will be in no way under my control as they will belong to different companies. They won't be known beforehand and multiple vendors are to be supported: Oracle, MS (SQL Server), MySql, PostgreSQL. Also, OLE DB and ODBC connections will be supported.
The problem: security of database credentials and overall traffic is a big concern but the configuration effort should be reduced at a minimum. Ideally, all the security issues should be addressed programmatically in the service implementation and require no configuration effort for the database owner other than provide a valid connection string.
Usually, database SSL support is done through server certificates which I want to avoid as it is cumbersome for the client (the database owner).
I have been looking into how to do this, to no avail. Hopefully this might be done with OpenSSL, SSPI, client SSL certificates, or some form of tunneling; or maybe it is just not possible. Some advice would be greatly appreciated.
I am having a bit of difficulty understanding how this service would work without being extremely cumbersome for the database owner even before you try to secure the traffic with the database.
Take Oracle in particular (though I assume there would be similar issues with other databases). In order for your service to access an Oracle database, the owner of the database would have to open up a hole in their firewall to allow your server(s) to access the database on a particular port so they would need to know the IP addresses of your servers and there is a good chance that they would need to configure a service that does all of its communication on a single port (by default, the Oracle listener will frequently redirect the client to a different port for the actual interaction with the database). If they are at all security conscious, they would have to install Oracle Connection Manager on a separate machine to proxy the connection between your server and the database rather than exposing the database directly to the internet. That's quite a bit of configuration work that would be required internally and that's assuming that the database account already exists with appropriate privileges and that everyone signs off on granting database access from outside the firewall.
If you then want to encrypt communication with the database, you'd either need to establish a VPN connection to the database owner's network (which would potentially eliminate some of the firewall issues) or you'd need to use something like Oracle Advanced Security to encrypt the communication between your servers. Creating VPN connections to many different customer networks would require a potentially huge configuration effort and could require that you maintain one server per customer because different customers will have different VPN software requirements that may be mutually incompatible. The Advanced Security option is an extra cost license on top of the enterprise edition Oracle license that the customer would have to go out and purchase (and it would not be cheap). You'd only get to the point of worrying about getting an appropriate SSL certificate once all these other hoops had been jumped through. The SSL certificate exchange would seem like the easiest part of the whole process.
And that's just to support Oracle. Support for other databases will involve a similar series of steps but the exact process will tend to be slightly different.
I would tend to expect that, depending on the business problem you're trying to solve, you'd be better served by creating a product that your customers could install on their own servers inside their network. It would connect to the database and would have an interface that either sends data to your central server via something like HTTPS POST calls, or listens for HTTPS requests, forwards them to the database, and returns the results.
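To make that last suggestion concrete, a sketch of such an on-premises agent might look like the following. Python with pyodbc and requests is just one possible stack, and the DSN, query, and central endpoint are placeholders:

```python
# Hypothetical on-premises agent: it connects to the customer's database locally
# and pushes results to the vendor's central server over outbound HTTPS, so no
# inbound firewall holes and no direct database exposure are required.
import pyodbc
import requests

LOCAL_CONN_STR = "DSN=CustomerDb;UID=report_user;PWD=placeholder-password"  # placeholder
CENTRAL_URL = "https://collector.vendor.example/api/upload"                 # placeholder

def push_report():
    conn = pyodbc.connect(LOCAL_CONN_STR)
    try:
        rows = conn.cursor().execute(
            "SELECT region, SUM(amount) AS total FROM sales GROUP BY region"
        ).fetchall()
    finally:
        conn.close()
    payload = [{"region": r.region, "total": float(r.total)} for r in rows]
    requests.post(CENTRAL_URL, json=payload, timeout=30).raise_for_status()

if __name__ == "__main__":
    push_report()
```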
SSL is very important in order to keep a client's database safe, but there is more to it than that. You have to make sure that each database account is locked down: each client must only have access to their own database. Furthermore, every database has other privileges that are dangerous. For instance, MySQL has FILE_PRIV, which allows an account to read/write files. MS SQL Server has xp_cmdshell, which allows the user to run cmd.exe from SQL (why would they do this!?). PostgreSQL allows you to write stored procedures in any language, and from there you can call all sorts of nasty functions.
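As a small illustration of locking each account down, you might provision every client with a minimally privileged user. The exact statements differ per engine; this is only a sketch for PostgreSQL via psycopg2, with all names and the password as placeholders:

```python
# Hypothetical provisioning of a least-privilege account for one client (PostgreSQL).
# The role can log in and use only its own schema; it is not a superuser and has
# no ability to create databases or touch other clients' data.
import psycopg2

admin = psycopg2.connect(host="db.example.com", dbname="appdb",
                         user="admin_user", password="admin_password")
admin.autocommit = True
cur = admin.cursor()
cur.execute("CREATE ROLE client_acme LOGIN PASSWORD 'generated-secret' NOSUPERUSER NOCREATEDB")
cur.execute("CREATE SCHEMA acme AUTHORIZATION client_acme")
cur.execute("GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA acme TO client_acme")
```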
Then there are other problems. A malformed query can cause a buffer overflow, which will give an attacker the keys to the kingdom. You have to make sure all of your databases are up to date, and then pray no one drops a 0-day.
I am about to start working on a new system which will need to support multiple users and potentially allow the database to be accessed over the Internet.
The system will be Win32, not web-based; the database will just be in an office but needs to be accessible from anywhere. I am not sure if this is a dangerous approach security-wise, and I'm open to suggestions.
The database will be SQL Server and the system will be implemented in Delphi 6.
Does anyone know how I go about starting this? I will need to take into account record locking as well.
If anyone could provide links to good articles that would be appreciated.
Cheers
Paul
IMHO, the easiest way for you is to create a VPN, securely exposing your database over the Internet.
Security will be very good, because access to the database will be available only through a trusted VPN connection.
And your database will be available from anywhere, using the Internet just as a tunnel to transport your database packets safely.
So your Delphi code will connect to the database just as usual, using TCP/IP connection, via the VPN secure tunnel.
No need to add additional Delphi-only artifacts, like Indy components and such.
And you will be able to connect to your database from non-Delphi clients, which could be handy if you want to use some database browsing tool.
Exposing the database on the Internet is a security risk; security flaws could be easily exploited remotely.
Solutions are:
VPN, as said in other answers. Simple and secure, but it requires some setup on both endpoints (clients and VPN server), and may require proper software on the server - or a VPN router/appliance - and on the client as well if you're not using standard VPN protocols.
An n-tier application, where only the application server is exposed to the Internet. You still have to protect the application server properly and secure the transmission channel, but it may require less setup on the client side. Delphi 6 offers DataSnap as an n-tier library (it also still supports CORBA, but that was dropped in D7). DCOM is not very firewall friendly (though it can be configured to work across firewalls) and can secure the channel on its own; the other two options (socket and HTTP) are easier to set up but a little less secure (they work using DCOM proxies, so the client identity is lost, and they require custom code or certificates to secure the channel).
A third solution could be to let users connect remotely via Remote Desktop, but that requires licenses and a machine able to sustain the load of the remote sessions.
Record locking is handled by the database itself - read the documentation about SQL Server locking modes carefully to avoid bad surprises later. If the connection is not fast enough you may choose to cache some data on the client side (TClientDataSet works well for that), which can also reduce locking issues, but it can introduce update conflicts.
You probably mean a client-server system that communicates through TCP/IP.
You can create this using the Indy components. Be sure to check the examples because they are not easy to use, but you can create almost anything network related with them.
Actually, there are dozens of techniques possible, depending on your experience, preferences and the tools that you have available. I would advise you to use ADO to connect to the database and not the BDE, though. To do this, you can use the ADO components that are part of Delphi or import the msado15.dll type library into your project to use raw ADO API calls. The latter will require a lot more experience!
SQL Server is able to just expose itself to the Internet, although this creates a security risk. Still, someone who wants to access it will need a username and password to get a connection, and you would need to open the ports that SQL Server uses. Technically speaking, to use ADO over the Internet all you need to know is the IP address of a working server, plus login information. It is a security risk, though, and for that reason most developers will not expose SQL Server to the Internet but will instead write web services that wrap around the specific database functions they want to expose.
Record locking is something SQL Server will do for you, and if you use transactions you can make it even a bit more secure.
In the end, the things you need to learn and read about depend heavily on the things you want to do in your application. So before you even start to write code, start writing a functional design to get an overview of what you want and what you would need for this. From this document, start writing technical documents to describe more precisely what your code needs to do. Once you have this, you can ask more direct questions about the things you need, yet don't know at the moment.
Typically I use a database such as MySQL or PostgreSQL on the same machine as the application using it, which makes access easy and secure. I'm just now building the first site that will have a separate physical database server (it will get one later this year). I'm wondering three things:
(security) What things should I look into for starters pertaining to security of accessing a separate machine's database?
(scalability) Are there scalability issues that I should think about pertaining to this (technology agnostic)?
(more ServerFaultish but related) If starting the DB out on the same physical server (using a separate VMWare VM) and later moving to a different physical server, are there implicit problems that I'll have to deal with? Isn't another VM still accessed via localhost?
If these questions are completely ludicrous, I apologize to you DB experts.
Easy, I'll grant you. Secure.. well, security has very little to do with the physical location of the database server.
To get to your three questions though:
First, look at how you can limit access to database tables using the database server's security model. Namely, if your application does not need to drop tables, make sure the user it connects as does not have that ability. Second, look into how to encrypt the connection between the database server and your application. On Windows this is pretty transparent through Kerberos and can even be enforced by group policy settings; I'm not sure about other platforms. Third, look into what features the database has for encrypting the data "at rest": does it natively support encryption of the actual data files themselves?
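For the second point - encrypting the client/server link - most drivers let you require TLS from the client side. A hedged example with PostgreSQL and psycopg2 (the host, credentials, and CA path are placeholders, and other engines have equivalent connection options):

```python
# Hypothetical example of requiring TLS for the application/database link with
# PostgreSQL; 'verify-full' also checks the server certificate and hostname.
import psycopg2

conn = psycopg2.connect(
    host="db.internal.example.com",
    dbname="appdb",
    user="app_user",
    password="app_password",
    sslmode="verify-full",                 # refuse unencrypted or unverified connections
    sslrootcert="/etc/ssl/certs/db-ca.pem",
)
```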
The point here is that your application is only one possible entry point to the database server itself. Ask yourself what would happen if someone could connect directly, without going through your application, using your app's credentials. Next ask what could happen if they find a SQL injection issue. Also ask yourself what information could be gleaned if someone were able to monitor the IP traffic going between your app and the server: can they discern any data? Finally, ask yourself: what if they get a copy of the database itself?
The lengths you go to for #1 will depend on several factors, such as how valuable the data is (e.g. what would happen to you, your company, or your clients if it were lost), and how much time you have to come up with an ideal solution.
scalability: This is purely a function of load. Unfortunately, the only way to scale most database applications is to scale up, meaning that you acquire a larger database server as the need arises. Stack Overflow went through this not too long ago. Some database types (NoSQL, MongoDB, etc.) support a concept known as sharding; MySQL, PostgreSQL, etc. don't. Instead you'll have to specifically design the app to handle it, which means not using things like auto-incrementing keys. This can be a royal PITA... which is why scaling up is a much easier prospect, depending on your application.
Another VM is not accessed via "localhost"; localhost refers to the current server, and whether that server is a VM or not is immaterial. You'll have to reference your database server by name. Given that, transitioning the database VM to another physical server should have zero impact, as you are referencing it by name. Beyond that there aren't any other considerations.
In addition to Chris's valid response,
Security
Use a security mechanism on the network in addition to whatever security features the database or app framework provides. Perhaps this is as simple as firewalling the network, running IPsec, or tunneling the traffic over SSL. The point is that you shouldn't assume the DB authors are network security experts, or that the DB authentication mechanism has addressed network security at all.
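As one hedged example of the tunneling option, assuming the database host (or a gateway in front of it) is reachable over SSH, Python's sshtunnel package can forward a local port through an encrypted SSH session; every hostname and credential below is hypothetical:

```python
# Hypothetical SSH tunnel in front of the database: the app connects to
# 127.0.0.1:<local_bind_port>, and the traffic crosses the network inside
# the encrypted SSH session rather than in the clear.
import psycopg2
from sshtunnel import SSHTunnelForwarder

with SSHTunnelForwarder(
    ("db-gateway.example.com", 22),
    ssh_username="tunnel_user",
    ssh_pkey="/home/app/.ssh/id_ed25519",
    remote_bind_address=("127.0.0.1", 5432),   # database as seen from the gateway
) as tunnel:
    conn = psycopg2.connect(
        host="127.0.0.1",
        port=tunnel.local_bind_port,
        dbname="appdb",
        user="app_user",
        password="app_password",
    )
    conn.close()
```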
Scalability
One scalability issue comes to mind when moving from local to remote databases: remote TCP/IP communication is much slower than local pipe communication. Your app may have hidden scalability issues due to frequent round-trips to the DB, because between queries your app waits for each DB response in succession. On a local system the latency is so small that you may never have noticed it.
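One common way this shows up is query chattiness: N small queries in a loop cost N network round-trips, while a single batched query costs one. An illustrative sketch only - the table, driver, and connection details are placeholders:

```python
# Illustrative only: N round-trips versus one. Over a network link, each extra
# round-trip adds latency that was invisible when the database was local.
import psycopg2

conn = psycopg2.connect(host="db.internal.example.com", dbname="appdb",
                        user="app_user", password="app_password")
cur = conn.cursor()
user_ids = [1, 2, 3, 4, 5]

# Chatty: one network round-trip per id.
for uid in user_ids:
    cur.execute("SELECT name FROM users WHERE id = %s", (uid,))
    cur.fetchone()

# Batched: one round-trip for all ids.
cur.execute("SELECT id, name FROM users WHERE id = ANY(%s)", (user_ids,))
rows = cur.fetchall()
```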