I'm working on a JSF project in which one web application has many users, each with an account and a separate password, and on the back end the application is connected to a PostgreSQL database. I want the tightest security possible, so I would like to design the system so that, if the JSF web application gets hacked or compromised, the attacker still does not gain access to most of the user accounts in the database (I expect the database to run on a separate server, or separate servers if necessary, hosted on a hardened OpenBSD OS).
Normally a web application connects to the database as a single role with one password, and the application then has direct access to all of that application's users in the database. To make this more secure, I am thinking about giving each client user of the web application a separate role in the database (each with a separate password, of course), as sketched below. But that would mean millions of roles in the database, so I would like to ask whether anyone else has experience with such a design, and what performance penalties it might incur.
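For reference, the design I have in mind would look roughly like this (a minimal sketch; the role, table, and column names are made up, and the row-level security part requires PostgreSQL 9.5+):

```sql
-- Group role that holds the application's table privileges (hypothetical names).
CREATE ROLE app_users NOLOGIN;
GRANT SELECT, INSERT, UPDATE ON accounts TO app_users;

-- One login role per end user, each with its own password.
CREATE ROLE alice LOGIN PASSWORD 'secret-for-alice' IN ROLE app_users;

-- Row-level security (PostgreSQL 9.5+) so a compromised session
-- can only see its own rows; owner_name is a hypothetical column.
ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;
CREATE POLICY own_rows ON accounts
    USING (owner_name = current_user);
```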
And what happens if the system becomes very popular (think Facebook) and the number of roles needed grows to several hundred million (admittedly a best-case scenario)? Would such a design become a dead end?
Thanks.
Related
I have been researching load balancing, which allows traffic to be distributed among back-end servers. But what about the database, since a single database server can get overloaded? If we have many database servers, how are they kept synchronized? And shouldn't that happen instantly?
If I create a new account on a website, the balancer decides which database server to put my new account information on. But if another user loads the page from a different database server, won't he fail to see my new account until the databases are synced? In that case real-time collaboration wouldn't be possible... or is my understanding here completely wrong?
We are building a web application and plan to run it on AWS. We have created an RDS instance with MySQL. The proposed architecture is as follows:
Data is uploaded from the company data mart to a Core DB in RDS. On the other side, users send data through our REST API via POST requests. This user input is saved in a separate DB within the same RDS instance, as one of our architects suggested, and is then periodically copied to a table inside the Core DB. A rule engine runs against the Core DB, and whenever it detects an exception a notification is sent to customers.
The overall structure seems fine. One thing I would change, though, is that instead of having two separate DBs, we could have just one DB and keep the user input in a table in that same database. The reasoning behind separate DBs, according to our architect, is security: since the Core DB holds our company's data, it is better off on its own, so HTTP requests from clients only touch the user input DB.
While that makes sense, I am not sure it is really necessary. First, all user input is authenticated. Second, the web API provides another layer of protection in front of the database, since it only allows certain requests (in this case, a couple of POST endpoints). Besides, if someone could somehow hack into the User Input DB in RDS anyway, then since it resides on the same RDS instance and there is data transfer between the DBs, it is not impossible for them to get to the Core DB as well.
That said, do we really need separate DBs? And if that is the way to go, what is the best way to sync from the User Input DB to a user input table in the Core DB?
Separating the DBs does not magically make things secure on its own. My suggestions:
Restrict the API layer's database account, for example to insert-only access (partly to avoid accidentally deleting data); see the sketch after this list.
For credentials, don't put them in source code; put them in environment variables instead, for example Elastic Beanstalk environment variables.
Put the RDS instance itself inside a VPC.
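For the first point, a minimal MySQL sketch of an insert-only API account (the account and table names are hypothetical):

```sql
-- Hypothetical API account that can insert but not read, change, or delete.
CREATE USER 'api_writer'@'%' IDENTIFIED BY 'strong-password-here';
GRANT INSERT ON userinput_db.submissions TO 'api_writer'@'%';
-- No SELECT, UPDATE, or DELETE granted, so a compromised API credential
-- cannot read or destroy existing data.
```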
In terms of synchronizing data, if you do have to go with two DBs:
if your two databases have exactly the same schema, you can use the DB's replication capability (such as MySQL replication); see the sketch after this list
if not, you can send the data to a message broker service (SQS), then create a worker that pulls from it and saves into the target database
or you can use another service such as AWS Data Pipeline
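For the first option, plain MySQL replication is configured roughly like this (a sketch with hypothetical host names and credentials; on RDS you would normally use the managed read-replica feature or the mysql.rds_* procedures rather than running CHANGE MASTER TO yourself):

```sql
-- On the source: a dedicated replication account (hypothetical name).
CREATE USER 'repl'@'%' IDENTIFIED BY 'repl-password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

-- On the replica: point it at the source and start replicating.
CHANGE MASTER TO
    MASTER_HOST = 'userinput-db.example.com',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'repl-password',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 4;
START SLAVE;
```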
We have a big system running with thousands of users (some on Android apps, others on the web app, etc.).
The system is distributed, with databases in two locations (within the same country). In one location there are 5 servers in the same network, and each one has a copy of the database (via replication).
A few of the software developers have direct access to the production databases. Sometimes, when users request technical support for operations that cannot be performed from the system itself, the developers/support team have to access the database directly and modify some records.
We know this is not the ideal way of working, but it has been like this for years.
Recently we have run into a few problems. One day someone updated hundreds of records in a table by mistake.
Since then we have been analyzing how to improve this access.
We are looking for some way to improve security. We would like to have a two-factor authentication system in place: something that asks the user for two passwords when connecting from SQL Server Management Studio...
Is that possible? Or is there another approach we can use to improve security while still allowing the devs/support team to access the production database when necessary?
The users also (currently) have Remote Desktop access to all the servers.
At the very least, we would like to KNOW when this access happens.
Make PROD access read-only for those users. Let them write their scripts and then submit them for review at a minimum, and for testing if possible, like any other deployable. Then follow the standard deployment process with someone who does have write access. A sketch of the read-only part follows.
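A minimal T-SQL sketch of the read-only setup (database, login, and user names are hypothetical; on SQL Server 2005/2008 use sp_addrolemember instead of ALTER ROLE):

```sql
-- Give the dev/support users read-only access in the production DB.
USE ProdDb;
CREATE USER [DOMAIN\dev_support] FOR LOGIN [DOMAIN\dev_support];
ALTER ROLE db_datareader ADD MEMBER [DOMAIN\dev_support];

-- Belt and braces: explicitly deny writes even if another role grants them.
DENY INSERT, UPDATE, DELETE ON SCHEMA::dbo TO [DOMAIN\dev_support];
```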
If my other answer isn't workable and these updates are always the same kinds of fixes, you could create support stored procedures to perform them and grant permission only on the procs. But this depends heavily on how common the fixes are, and it is less preferable than my other answer.
I haven't used it myself, but EXECUTE AS might let you give the users read-only permission while the procs execute under credentials with higher access.
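A sketch of what that could look like (the procedure, table, and role names are hypothetical):

```sql
-- Runs with the owner's permissions, so callers need no direct table access.
CREATE PROCEDURE dbo.usp_FixStuckOrder
    @OrderId int
WITH EXECUTE AS OWNER
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE dbo.Orders
    SET Status = 'Open'
    WHERE OrderId = @OrderId
      AND Status = 'Stuck';   -- narrow, predefined fix only
END;
GO

-- Support staff get EXECUTE on the proc and nothing else.
GRANT EXECUTE ON dbo.usp_FixStuckOrder TO SupportRole;
```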
Background: no copyright enforcement
My company develops an inventory control application for clients in a region where there is no copyright protection from the government. The only option is to hide and protect things ourselves. It is common for competitors here to copy another company's database, build a front end on it, and then start selling it as their own app.
The Problem
We use MS SQL Server Express edition, and sometimes Standard edition. We have found that any customer can stop the SQL engine, copy the files from the PC where the application was installed, and then attach those database files to another system where they have full Windows admin rights; and that's it, they can fully explore our database.
What I am looking for
Is there any reliable way to protect our database design from being viewed by other people, so that only our application can connect through the users we have created inside the DB?
In the past I had heard that Sybase Adaptive Server had such functionality: Windows users had no access, and the users were stored inside each DB itself, so there was no way to log in without the password of a user stored in the DB.
Thanks, your help will be highly appreciated.
As suggested by Sean, hosting it yourself or in a cloud service like Azure SQL DB is your best bet. It is still no guarantee, but it makes getting at everything significantly harder, and it is a lot easier to lock down than the alternatives. It is also a lot easier to manage and to handle user requests for restricted data than anything deployed on-site.
Outside of that, there is really no practical way to do it if the software is deployed at the customer's site. Even if you lock down all logins and users (Windows and SQL Server logins alike) so that no customer login has admin-level privileges, you still can't prevent them from copying the database file and mounting it on a different instance where they do have admin privileges, or even just resetting the SA password (for example, by restarting the instance in single-user mode). If they have physical access, all bets are off; it's only a matter of knowledge and time.
You can make it harder by encrypting the entire database so that only your app holds the key. Users then have to break either the encryption algorithm (hard if done right) or your application that holds the key (easier, but still not trivial). Both are expensive to do correctly, and they really only delay access rather than prevent it. You also introduce other problems, like key management and rotation, which, if not done right, can result in customers losing access to their own data.
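One concrete variant of this at the file level is SQL Server's Transparent Data Encryption, which stops a copied .mdf from being attached on another instance without the certificate. Note that it is not quite the "app holds the key" model described above (the server holds the key), and it is not available in Express edition (Standard only gained it in SQL Server 2019). A minimal sketch, with hypothetical names:

```sql
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'strong-master-password';
CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';

USE AppDb;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TdeCert;
ALTER DATABASE AppDb SET ENCRYPTION ON;
-- Back up TdeCert and its private key; without them the database
-- cannot be restored or attached anywhere, including by you.
```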
You could also leave a cookie trail (e.g., functions and tables that are active, look like they're part of the app, and are tightly coupled to useful parts of the app, but actually aren't related to the application's core functionality). That makes it easier to prosecute later, but if the country has no laws protecting intellectual property, it will only be useful if the software is re-used or re-sold in a country that does have such laws.
I am working with an application that needs to provide row- and column-level security for user reports. The logic for the row filtering and column masking is in place, but there are still decisions to be made about identifying users at report execution time.
The application authenticates with a single SQL Server login, as all rights are data-driven within the application itself. This mechanism does not carry over well to reports, since clients like Crystal and MS Office do not authenticate through the (web and WinForms) application.
The traditional approach of using SQL Server logins and database users would work well, but it may have one issue: in some deployments of the application, the number of users who run reports and need to be uniquely identified may run into the hundreds.
Are there any practical limits to the number of logins or users on a SQL Server database (v 2005+) where this approach may cause problems? Administration of the users on the database server can be automated by the application, but the potential number of credentials may be a concern.
We have looked into user impersonation techniques, but they become difficult to implement when a report client such as Excel authenticates directly to the server.
Edit: The concern is not concurrency or workload but administration on remote instances where a local DBA is not available, especially when the server is not dedicated to the application. I am interested in scenarios where the number of logins was problematic.
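For concreteness, the per-user provisioning the application would automate is along these lines (a sketch; the login, database, and role names are hypothetical, and sp_addrolemember is used because it works from SQL Server 2005 onward):

```sql
-- Create one login/user per report consumer and put it in a reporting
-- role that the row/column security logic keys off.
CREATE LOGIN rpt_jsmith WITH PASSWORD = 'per-user-password';
USE ReportDb;
CREATE USER rpt_jsmith FOR LOGIN rpt_jsmith;
EXEC sp_addrolemember 'ReportReaders', 'rpt_jsmith';
```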
I've used the approach you describe (SQL Server accounts managed automatically by our application) and we didn't have any trouble, although we only ever had a maximum of perhaps 200 SQL accounts. We experienced no administrative overhead except when "power users" restored databases without telling us, causing the SQL login accounts to fall out of sync with the database users.*
I think your approach is sound.
EDIT: Our solution for this was a proc that simply ran through the user accounts and called our procs that deleted/recreated them. When the power users called this proc, all was well, and it was reasonably fast.
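The core of such a repair proc is just re-linking each orphaned database user to its server login after a restore; a sketch of the key statement (not our exact proc, and the user name is hypothetical):

```sql
-- Re-attach an orphaned database user to its server login after a restore.
-- ALTER USER ... WITH LOGIN exists from SQL Server 2005 SP2 onward;
-- the older sp_change_users_login 'Auto_Fix' does the same job.
ALTER USER rpt_jsmith WITH LOGIN = rpt_jsmith;
```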