What are the security issues of exposing the database connection string on the client side in 2-tier applications?

Recently I have been investigating how to build a multi-tier application. Every article I have read suggests that a 3-tier architecture is better than a 2-tier architecture, because by exposing the connection string of the database on the client side you create a big security hole in your system. All of these articles just explain that it is a bad idea to expose the location of the database, but none of them explains why.
Can anybody help me and explain the threats of exposing the location of the database? I mean, they will know the location, but they will not know the username and the password needed to log in and modify the database. What makes the 3-tier architecture safer than the 2-tier architecture? Is it only the extra hop needed to reach the database?
Thanks in advance,
Constantin Patak

The connection string includes the username and password. If your client application can hit the database directly, then the user can inspect the client application and extract the connection credentials to do the same.
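To make that concrete, here is a minimal sketch (file layout and key names are just illustrative, not from the original answer) of how a 2-tier client typically carries its credentials, and how trivially anyone with the installed binary can read them back:

```csharp
// App.config shipped with the client (illustrative names):
//   <connectionStrings>
//     <add name="MainDb"
//          connectionString="Server=db.example.com;Database=Shop;User Id=app_user;Password=S3cret!" />
//   </connectionStrings>

using System;
using System.Configuration;   // requires a reference to System.Configuration

class Program
{
    static void Main()
    {
        // Any user who installed the client can do exactly this (or simply open
        // the .config file in Notepad) and obtain full database credentials.
        string cs = ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString;
        Console.WriteLine(cs);
    }
}
```

Even if the string is obfuscated or compiled in, the client process must be able to reconstruct it to connect, so a determined user can recover it.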
The middle tier will provide APIs which correspond to the operations you want clients to be able to perform. The client is shielded from the internal implementation which may or may not include a database. You will be able to change the implementation without affecting the client. Perhaps you will find that the load is so high you need to switch from RDS to NoSQL. The client doesn't need to know or change. Perhaps you will start caching some results without hitting your database. Again, the client doesn't need to know or change. This is why the industry has standardized around not hitting the database directly from client applications.
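As an illustration of that shielding, a middle tier exposes only narrow operations instead of raw database access. The endpoint below is a hypothetical ASP.NET Core sketch (controller, route, and store names are assumptions, not part of the original answer):

```csharp
// Hypothetical middle-tier endpoint: the client calls this over HTTPS and never
// sees a connection string. The storage behind it (SQL, NoSQL, a cache) can
// change without any change on the client.
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/orders")]
public class OrdersController : ControllerBase
{
    private readonly IOrderStore _store;              // internal implementation detail
    public OrdersController(IOrderStore store) => _store = store;

    [HttpGet("{id}")]
    public ActionResult<OrderDto> Get(int id)
    {
        var order = _store.Find(id);                  // could hit SQL, NoSQL, or a cache
        if (order is null) return NotFound();
        return Ok(order);
    }
}

public record OrderDto(int Id, decimal Total);
public interface IOrderStore { OrderDto? Find(int id); }
```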

Related

Why do we use REST to connect to a database on a mobile app?

I am currently studying how to make cross-platform mobile apps (with Xamarin Forms), and I have heard that the "correct" way to connect to a database on a non-local server (in my case hosted in Azure) is by using REST services (or REST APIs, or whatever they are called), instead of connecting directly to the database with the Server Explorer option of VS like you would do in Windows Forms, for example (using SqlConnection, DataSet, etc., which I think are not necessary in the first case, but I am not sure).
The only answer that I have received about this is that in mobile apps "they are not permanent connections. It connects, gives you data and disconnects. They are asynchronous connections", and that this is done "for optimization of connection resources. The mobile is suspended or the user passes the app to the background."
But I still don't know if this is the actual reason, and if it is, I don't understand how it optimizes the connection resources. So if someone has time to explain this, I would appreciate it.
Thank you for your time, I hope I have explained myself correctly, and that you all have a great day.
As Jason said, there are the security issues: with proper authorization, having a mediator is definitely much more secure than giving a user direct access to the database, because you restrict them to endpoints that run only the queries you want. From a platform-independence and maintenance point of view, if the apps are developed in different languages and on different platforms, it can be beneficial to create a common REST interface to allow sharing of the data model, caching, etc. For performance and scalability, the HTTP layer of your REST API provides another valuable caching mechanism: the servers for your REST API can put caching headers on their responses, and those responses can be cached at the network layer, which scales exceptionally well.
You could also read Why do people do REST API's instead of DBAL's?; I think the answers there are pretty good.
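As a small illustration of the caching-header point above (a hypothetical ASP.NET Core sketch; the route and data are placeholders, not from the original answer):

```csharp
// Hypothetical REST endpoint that sets HTTP caching headers so intermediate
// proxies/CDNs can serve repeat requests without ever touching the database.
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/products")]
public class ProductsController : ControllerBase
{
    [HttpGet]
    [ResponseCache(Duration = 60, Location = ResponseCacheLocation.Any)] // emits Cache-Control: public, max-age=60
    public IEnumerable<string> GetAll()
    {
        // In a real service this would come from the data layer (possibly cached itself).
        return new[] { "keyboard", "mouse", "monitor" };
    }
}
```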

On database communication security

So, I've been reading about security in relation to desktop applications and database servers. Previously, when I've built applications that are linked to a database, I've taken the easy route and simply hard-coded the connection string directly in the source code. This has worked because the binaries were not distributed to third parties. However, now I'm working on a project whose binaries are bound for third-party use, and in this case the communication with the server becomes a security issue that I need to deal with.
Since it is a priority that there be no direct connection to the remote database from the client machine, I understand that a server/client database service is a good choice. In this case, the client machine sends requests using TCP to a server, which then processes the request using stored procedures and responds accordingly to the client.
My questions in relation to this are:
i. Would this setup be an advisable one, or are other setups of which I am unaware more advisable for the kind of project I am working on?
ii. How does one go about securing such a connection? I can easily set up an SSL connection to the server using a security certificate generated with OpenSSL; however, I'm not sure whether this is the correct way of securing the connection for a desktop application, or whether this method is primarily used for HTTPS. And when, in general, should one secure the connection (are there instances where this wouldn't matter, for instance if all I do is send booleans back and forth)? Any good resources that discuss these issues? For instance, I have a lot of applications installed on my Windows PC that are networked, but I don't see many of them installing a security certificate on my PC. What gives?
Full disclosure: I'm a C++ (hobbyist) programmer using Boost libraries for my network programming needs and OpenSSL for my SSL cryptography. However, I hope this can be answered without paying too much attention to these facts :)
Answers:
i. Having your application talk to a web service that then talks to the database is a better setup. This abstracts the database away from the clients (and therefore direct access from the internet).
ii. This depends on what the threats to your system are. If the data you are vending from the web service mentioned above is not sensitive and is not user-specific (say, an app that allows searching of public photo galleries, so your web service simply returns a result set with URLs), then you might be able to get by with simply using SSL. Other apps get around installing their own cert in a myriad of ways. They can either get a cert from a CA like Verisign, so your computer already trusts it, or they can deploy the public cert with the binary of their app and handle it inside of their app (this is a form of certificate pinning).
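To illustrate the pinning approach: the question uses C++/Boost/OpenSSL, but here is a minimal sketch of the same idea in C# using HttpClientHandler; the thumbprint value is a placeholder:

```csharp
// Hypothetical certificate pinning: the client ships with the expected server
// certificate thumbprint and rejects any other certificate, even CA-signed ones.
using System;
using System.Net.Http;

class PinnedClient
{
    // Placeholder: thumbprint of the certificate deployed with the app.
    const string ExpectedThumbprint = "REPLACE_WITH_KNOWN_THUMBPRINT";

    public static HttpClient Create()
    {
        var handler = new HttpClientHandler
        {
            ServerCertificateCustomValidationCallback = (request, cert, chain, errors) =>
                cert != null &&
                string.Equals(cert.GetCertHashString(), ExpectedThumbprint,
                              StringComparison.OrdinalIgnoreCase)
        };
        return new HttpClient(handler);
    }
}
```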
ii part 2. If you need the clients to authenticate, for reasons of either wanting to make sure that not just anyone can use your web service, or to support a more advanced authorization model, then you would need to implement some sort of authentication. That would be a much bigger question to address.
Make sure you use CA-signed certificates, and not self-signed. You might also want to consider mutual authentication between your service and the database.

Best way to access a remote database: via webservice or direct DB-access?

I'm looking to develop an application for Mac and iOS-devices. The application will rely on information stored in a remote database. It needs both read (select) and write (insert, update, delete) access to the database. The application will be a multi-user application.
Now I'm looking at two different approaches to access the database:
- via web service: the application accesses the web service (REST, JSON) which accesses the database. Authentication will be done via HTTP authentication over SSL (https).
- access the remote database directly over a VPN.
The app will be used by a maximum of let's say 100 people and is aimed at small groups/organizations/businesses.
So my question is: what would be the best approach to access the database? What about security and performance? What would a typical implementation for a small business look like?
Any advice will be appreciated.
Thanks
Using web services adds a level of indirection between the clients and the database. This has several advantages that are all due to the fact that the clients need to have no knowledge of the database, only of your web service interface. Since client applications are more complicated to control and update than your server side code, it pays to add a level of business logic on the server that lets you tweak your system without pushing updates to the clients. Main advantages:
Flexibility - you can change the database configuration / replace the data layer altogether and change nothing on the client apps as long as you keep the same web service interface.
Security - implement some authentication mechanism for your web services, and avoid giving clients access credentials to your database engine.
There are some disadvantages too: you pay for that flexibility by adding a level of complexity - it'd probably be faster to just code the database access into the clients and get done with it. Consider the web services layer as an investment that might pay dividends down the road. Whether it's worth it really depends on your business requirements and outlook.
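For the web-service route the question describes (REST/JSON with HTTP authentication over SSL), the client-side call could look something like this minimal sketch; the URL and credentials are placeholders:

```csharp
// Hypothetical client call: HTTPS plus Basic authentication against the REST service.
// The client never holds database credentials, only its own service account.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class ApiClient
{
    public static async Task<string> GetReportAsync()
    {
        using var client = new HttpClient();
        var token = Convert.ToBase64String(Encoding.UTF8.GetBytes("alice:placeholder-password"));
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

        // HTTPS ensures the credentials and the data are encrypted in transit.
        return await client.GetStringAsync("https://api.example.com/reports/latest");
    }
}
```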
Given the information you have provided, the answer is almost certainly web services, unless the VPN is fast.
If the VPN is fast enough to handle the traffic, you will save a lot of time, effort and expense by accessing the database directly from your application.
You can also provide remote access to virtual PC sessions, if that's your thing.
So it's all going to depend on what your requirements are. There are a lot of ways to do this, and each has its advantages and disadvantages. Making the right decision will require a fair amount of systems analysis, probably beyond the scope of a question posted on StackOverflow.

Storing original password text

My web application stores external website login/passwords for interaction with them. To interact with these websites I need to use the original password text, so storing just the hash in my database is not going to work.
How should I store these passwords?
Edit:
I am concerned if someone gets access to my server. If I use some kind of 2-way encryption and they have server access then they can just check how the passwords are decrypted in my backend code.
It seems to me that you want to store passwords in a similar fashion as Firefox and Chrome. So why not look at how they do it?
This is how Chrome does it:
http://www.switchonthecode.com/tutorials/how-google-chrome-stores-passwords
If you MUST do this, you should use two-way encryption. There are a lot of algorithms (ciphers) for this, but basically you encrypt your data with an encryption key and use the same key for decrypting it again.
Choosing the right cipher depends on which are supported by the programming language of your choice, but examples are:
Blowfish
3DES
Skipjack
They come in different complexity and some are harder to crack than others. You should realize though, that no two-way encryption is safe from cracking, given enough time. So it all depends on, how sensitive these passwords are.
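The answer lists older ciphers; as an illustration of the same two-way pattern, here is a minimal sketch using AES in .NET (a swapped-in cipher choice, not from the original answer):

```csharp
// Illustrative two-way (symmetric) encryption using AES. Note that the key must be
// stored somewhere the application can read it, which is exactly the residual risk
// the answer describes.
using System;
using System.Security.Cryptography;
using System.Text;

static class TwoWayCrypto
{
    public static byte[] Encrypt(string plaintext, byte[] key)
    {
        using var aes = Aes.Create();
        aes.Key = key;
        aes.GenerateIV();
        using var enc = aes.CreateEncryptor();
        byte[] plainBytes = Encoding.UTF8.GetBytes(plaintext);
        byte[] cipher = enc.TransformFinalBlock(plainBytes, 0, plainBytes.Length);

        // Prepend the IV so it is available for decryption.
        byte[] result = new byte[aes.IV.Length + cipher.Length];
        Buffer.BlockCopy(aes.IV, 0, result, 0, aes.IV.Length);
        Buffer.BlockCopy(cipher, 0, result, aes.IV.Length, cipher.Length);
        return result;
    }

    public static string Decrypt(byte[] data, byte[] key)
    {
        using var aes = Aes.Create();
        aes.Key = key;
        aes.IV = data[..16];                              // first 16 bytes are the IV
        using var dec = aes.CreateDecryptor();
        byte[] plain = dec.TransformFinalBlock(data, 16, data.Length - 16);
        return Encoding.UTF8.GetString(plain);
    }
}
```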
/Carsten
Decide what you are protecting them against. Options include (but are not limited to): Accidental disclosure, disclosure by you, disclosure in transmission, disclosure due to code error, disclosure due to physical theft of hardware, etc.
If this is a web application, and each user is storing his/her own set of passwords, then you might encrypt these passwords with their login password to your application. If this is an application that each user installs separately, and which keeps its own local database, you could have an optional master password (like Firefox does).
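A sketch of that first suggestion: deriving the encryption key from the user's own login password, so a usable key never sits on the server at rest. The salt handling and iteration count are illustrative assumptions:

```csharp
// Illustrative key derivation: the AES key used to encrypt the stored external
// passwords is derived from the user's login password plus a per-user salt,
// so it only exists in memory while that user is logged in.
using System.Security.Cryptography;

static class KeyDerivation
{
    public static byte[] DeriveKey(string loginPassword, byte[] perUserSalt)
    {
        using var kdf = new Rfc2898DeriveBytes(
            loginPassword, perUserSalt,
            iterations: 100_000,                 // illustrative work factor
            HashAlgorithmName.SHA256);
        return kdf.GetBytes(32);                 // 256-bit key for AES
    }
}
```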
If you are just ensuring that the data is safe if the hardware is stolen, you might use a full disk encryption solution like TrueCrypt or PGP WDE, or Ubuntu, Debian, or Fedora's built-in approach, and require a PIN or password on every boot.
If you just care about secure transmission, have code to ensure that you use transport security, and don't worry about encrypting the data in your database.
I would go about this in the following way.
Protecting data against the hardware being stolen:
Use disk encryption as discussed in previous posts.
Protecting data if the server is compromised (hacked):
I would use two different servers for this project: one worker server and one front server.
A) Worker server
This has the DB with passwords etc., and it also connects to other services.
- Users can only contact the worker server through an API. The API should have one function, insertUserData, which allows user data to be inserted; the API escapes all of its input (see the sketch after this list).
- The API uses a DB user which only has INSERT privileges on the userData table. This is the only way to contact this server.
- Only allow SSL connections.
- This server in turn runs cron jobs that connect to external services, pull data from them and populate its DB. Use a different DB with different user privileges.
- This server runs another cron job which connects to the front server and pushes new data to it.
- Run a minimal number of services.
- Only allow SSH/SCP from your IP, with tight firewalling. Block connections if they exceed X per minute, etc., as clients would only do an occasional insert.
- No FTP etc.
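A minimal sketch of the insertUserData idea above, using a parameterized query so input never reaches the SQL text directly (the table, columns, and connection string are hypothetical):

```csharp
// Hypothetical worker-server API function: the only operation exposed, backed by a
// DB account that has INSERT permission on userData and nothing else.
using Microsoft.Data.SqlClient;

static class WorkerApi
{
    // Connection string for the restricted insert-only account (placeholder).
    const string InsertOnlyConnection =
        "Server=localhost;Database=Worker;User Id=insert_only;Password=placeholder";

    public static void InsertUserData(string userName, string externalPassword)
    {
        using var conn = new SqlConnection(InsertOnlyConnection);
        conn.Open();
        using var cmd = new SqlCommand(
            "INSERT INTO userData (userName, externalPassword) VALUES (@user, @pass)", conn);
        cmd.Parameters.AddWithValue("@user", userName);
        cmd.Parameters.AddWithValue("@pass", externalPassword);   // encrypt before storing in practice
        cmd.ExecuteNonQuery();
    }
}
```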
B) Front server
Receives data from the worker server and never uses the passwords itself. The only way to contact the worker server is through the API mentioned above, and only for new user information. This is where all users log in to see their information, etc.
The problem with doing it all on the same server is that if you get hacked, the hacker can sit and sniff all incoming data, passwords, etc., so even if they are stored, encrypted and decrypted securely, with some patience he would sniff them all.
When the application is first run, it will generate a random key. This key will be used to encrypt and decrypt sensitive data. Store the key in a local file, and set the file permissions so that nobody else can read it. Ensure that the user running the web server has no login access (this is a good idea anyway).
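A minimal sketch of that first-run key generation, assuming a POSIX-style host; the file path is a placeholder and permission tightening is left as an out-of-band step:

```csharp
// Illustrative first-run key setup: generate a random key once and keep it in a
// file that only the application's (non-login) account can read.
using System.IO;
using System.Security.Cryptography;

static class KeyFile
{
    const string Path = "/var/lib/myapp/secret.key";   // placeholder location

    public static byte[] LoadOrCreate()
    {
        if (File.Exists(Path))
            return File.ReadAllBytes(Path);

        byte[] key = new byte[32];                      // 256-bit key
        RandomNumberGenerator.Fill(key);
        File.WriteAllBytes(Path, key);
        // Restrict access out-of-band, e.g. chmod 600 and ownership by the app user.
        return key;
    }
}
```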
Possible ways to break this system:
Get root access.
Get sudo access.
Deploy a malicious application on the web server - this application will then have access to the key, and may be able to send it elsewhere.
As long as you take reasonable precautions against all of these, you should be OK.
EDIT: Come to think of it, you could just store the sensitive data right in the key file. Encryption would provide an extra layer of security, but it wouldn't be a very strong layer; if an attacker gets access to the file, chances are good that he also has access to the DB.

Would it be a bad idea to develop a desktop application that directly accesses the SQL server?

I want to install a desktop application on many stations (about 10-20) that accesses the SQL Server directly, with no services and no server-side DALs.
The application will be installed on a local network of about 10 machines, one of which is the server.
When I install the program I will set the connection string, and the applications will talk directly to the SQL Server.
Is this a bad idea?
If yes, then how bad?
It is not necessarily a bad idea. If you won't need to scale then it's a valid approach.
What you are describing is often called a 2-tier client-server architecture.
You should probably encrypt the connection string in the config file (but this will only stop prying eyes, not someone intent on recovering your password). The other option is to use Windows authentication via a trusted connection; you do lose the ability to pool connections, but that should not be an issue with 10-50 clients (ballpark).
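As a sketch of those two options: a trusted-connection string carries no password at all, and the local config's connectionStrings section can be protected with DPAPI (the section-encryption call assumes a .NET Framework-style app.config; names are placeholders):

```csharp
// Option 1 (illustrative): Windows authentication, so no password appears anywhere.
//   Server=SERVER01;Database=Shop;Integrated Security=true

// Option 2 (illustrative): encrypt the connectionStrings section of the local
// config with DPAPI so it is at least not readable in plain text.
using System.Configuration;   // requires a reference to System.Configuration

static class ConfigProtection
{
    public static void ProtectConnectionStrings(string exePath)
    {
        Configuration config = ConfigurationManager.OpenExeConfiguration(exePath);
        ConfigurationSection section = config.GetSection("connectionStrings");
        if (!section.SectionInformation.IsProtected)
        {
            section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
            config.Save(ConfigurationSaveMode.Modified);
        }
    }
}
```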
Of course not.
What you're describing is classic client-server architecture, and around 50% of apps are still built this way, according to a survey I saw recently.
Bob.
I have built plenty of apps like that. I would suggest you build a DAL that lives in the app itself, so that if you ever need to separate the data access and presentation layers, you can do so easily (plus there are other benefits, like standardization of code, a single place to change things, etc.).
I don't see an issue with it as long as you are consistent and follow best practices.
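For instance, the in-app DAL might be nothing more than an interface plus one implementation, so a service-backed implementation can be swapped in later. This is a hypothetical sketch, not from the original answer:

```csharp
// Hypothetical in-app DAL: the UI only talks to ICustomerRepository, so replacing
// the direct-SQL implementation with a web-service-backed one later does not
// touch the presentation layer.
using System.Collections.Generic;
using Microsoft.Data.SqlClient;

public record Customer(int Id, string Name);

public interface ICustomerRepository
{
    IReadOnlyList<Customer> GetAll();
}

public sealed class SqlCustomerRepository : ICustomerRepository
{
    private readonly string _connectionString;
    public SqlCustomerRepository(string connectionString) => _connectionString = connectionString;

    public IReadOnlyList<Customer> GetAll()
    {
        var result = new List<Customer>();
        using var conn = new SqlConnection(_connectionString);
        conn.Open();
        using var cmd = new SqlCommand("SELECT Id, Name FROM Customers", conn);
        using var reader = cmd.ExecuteReader();
        while (reader.Read())
            result.Add(new Customer(reader.GetInt32(0), reader.GetString(1)));
        return result;
    }
}
```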
If it's on your local network, go for it.
If it's over the internet, I'd create a web service.
Also note that there are plenty of off-the-shelf DALs (NHibernate, Entity Framework), so you don't need to roll your own, and they work just as well in client-server architectures.
