User or User Profile model for app wide relationships - database

I recently read a tweet suggesting that if you want to avoid headaches later in an app's life, the user table should hold only authentication information, with a separate user profile table for everything else. That is, if you have bikes and peaches in the system, they should be linked to the user that owns them via the user profile id. The tweet was not clear about the consequences of this choice. Are there maintainability/scalability repercussions to not following it, especially in a large web app?

Well, don't take it as dogma, though it isn't completely worthless advice either. Dependency is the problem: if you have to store a lot of different data representing a particular user, you will be changing the underlying schema often. If everything lives in a single table, you may find yourself doing the repetitive monkey work of "making it work" with your types/ORM and whatever else is involved in the DB <-> runtime communication.
It is all about splitting a complicated task into smaller, less complex subtasks: auth is a self-standing task - one of the most important - and it definitely deserves its own dedicated space. However, your app might not be that big, or not that concerned with users, in which case splitting the data across multiple tables won't help much. You must develop a sense of purpose and proportion when it comes to software design.
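For what it's worth, here is a minimal sketch of that split (SQLite for illustration only; all table and column names are my own, not from the tweet):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Authentication concerns only.
CREATE TABLE users (
    id            INTEGER PRIMARY KEY,
    email         TEXT NOT NULL UNIQUE,
    password_hash TEXT NOT NULL
);

-- Everything else about the person.
CREATE TABLE user_profiles (
    id           INTEGER PRIMARY KEY,
    user_id      INTEGER NOT NULL UNIQUE REFERENCES users(id),
    display_name TEXT,
    avatar_url   TEXT
);

-- Domain objects hang off the profile, not the auth record.
CREATE TABLE bikes (
    id         INTEGER PRIMARY KEY,
    profile_id INTEGER NOT NULL REFERENCES user_profiles(id),
    model      TEXT
);
""")
```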

Related

Which method requires the least amount of resources?

I am currently working on the development of a social network with Cassandra. My problem is that I am hesitating between two solutions to optimize the resource consumption of my server. In the first case, when a user publishes a post, the post contains all of the user's information, like the nickname or the profile picture. But when the user changes his nickname, for example, I have to change the value in all of his posts.
In the second case, the user's information and the posts are stored separately, but each time a user retrieves a post he makes two queries instead of one.
In the long run, which of these solutions is the best to optimize the speed of my server?
Thank you for your answers,
Jesver
Since you mentioned that it's a social media application, the assumption is that a single user can have plenty of posts. And since you mentioned Cassandra, the first scenario will not scale well, for a few reasons.
Extensive updates can be costly, since they involve a read operation prior to the write. This can be considered an antipattern.
The information about the user will be redundant in every post. In the longer run, it will be a pain to keep the user information consistent across all the posts.
The second alternative might be better: create two tables, one for users and one for posts, and maintain the relationship at the application level.
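Roughly, the second alternative could look like this with the DataStax Python driver (keyspace, table and column names are purely illustrative):

```python
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("social")  # illustrative keyspace

# One table for the user's own data, keyed by user id.
session.execute("""
    CREATE TABLE IF NOT EXISTS users (
        user_id uuid PRIMARY KEY,
        nickname text,
        avatar_url text)""")

# One table for posts, partitioned by author.
session.execute("""
    CREATE TABLE IF NOT EXISTS posts_by_user (
        user_id uuid,
        post_id timeuuid,
        body text,
        PRIMARY KEY (user_id, post_id))""")

def post_with_author(user_id, post_id):
    # Two reads instead of one, but a nickname change touches only `users`.
    post = session.execute(
        "SELECT body FROM posts_by_user WHERE user_id=%s AND post_id=%s",
        (user_id, post_id)).one()
    author = session.execute(
        "SELECT nickname, avatar_url FROM users WHERE user_id=%s",
        (user_id,)).one()
    return post, author
```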

How to structure/coordinate multiple databases?

Imagine a large corp with dozens of companies, each with their own website and each website will have their own unique functional requirements
Most data on each website will be specific to that website
Each website can edit its own data
Some data will be shared across all websites
There will be a central CMS that is allowed to edit this data, but other websites can read and use that data
e.g. say you're planning the infrastructure for a company that owns multiple sub-companies that make different kinds of products, some in the same category (cereal, food), others in completely different categories (books, instruments). Some are marketing websites, some are for CRM, some are online stores
there are a list of regulatory requirements that affect all products
each company should manage the status of compliance of its own products to each requirement
when a new requirement surfaces, details regarding that requirement should only be entered once
How would the multiple databases be coordinated?
edit: added more info per Bob's suggestions
Thanks for the incredibly insightful questions!
compliance data is not shared, silo'd within each site
shared data is only on the one enterprise-wide database, they will mostly be "types of [thing]"
no conclusive list of instances where they'll be used but currently it'd be to populate CMS dropdowns for individual sites.
changes to shared data would occur a few times a year.
Ideally changes would be reflected within a few minutes, but an hour or so should be acceptable
very low volume in shared data.
All DBs will be new, decision on which DB is pending current investigation.
Sub-systems will expose REST api
Here are some ways I have seen this handled; you need to think about the implications of each structure based on the details of your particular business domain. All of them can work, but all of them have to be carefully set up.
One database for shared information and one per client for client-specific information. Set up the overall application so that the first thing chosen on login is the client, and the application then connects to the correct client database. You may also need a way to switch clients if some users will handle more than one.
Separate servers for each client if they need to be completely siloed. Database changes are made by script (kept in source control) and applied to each server as needed. The central database might then have a job that runs to push any shared data changes out to the other servers.
All the data in one database, with every table carrying a client_id so that the data is always filtered correctly by client. You can set up separate views per client, so that users can only see the clients they are supposed to see. This only works if the data for each client has substantially the same shape.
And since you are in a regulatory environment, I strongly urge you to create an audit database for each database, updated by database triggers (never audit from the application - you will lose changes to the data).
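As a rough illustration of the third option (one database, a client_id on every table, plus per-client views), with made-up names and SQLite standing in for whatever engine you pick:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Every client-owned table carries a client_id.
CREATE TABLE products (
    id        INTEGER PRIMARY KEY,
    client_id INTEGER NOT NULL,
    name      TEXT NOT NULL
);

-- A per-client view; in a real RDBMS you would grant SELECT on the
-- view (not the base table) so users only ever see their own rows.
CREATE VIEW products_client_42 AS
    SELECT id, name FROM products WHERE client_id = 42;
""")
```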
I agree with Chris that, even after both sets of questions, there is still a big space of possible solutions. For instance, if the databases were the same technology, and the shared data were stored in the same way in each one, you could do db-level replication from the central db to the others. Is it OK to have 2 separate dbs per application (one with shared stuff and one with non-shared)? This would influence the kind of replication.
Or you could have a purely code solution, where clicking publish in a GUI that updates the central db calls a set of APIs that also update the other dbs. Or micro-services - updating the central db also creates a message on a shared queue, that is picked up by services that each look after a different db and apply the updates in whatever form makes sense for that db.
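A very rough sketch of that queue-shaped variant (the in-process queue below is just a stand-in for a real broker, and every name is invented):

```python
import queue

shared_updates = queue.Queue()  # stand-in for a real message broker

def publish_shared_change(record):
    """Called after the central CMS database commits a change to shared data."""
    shared_updates.put(record)

def site_consumer(apply_to_local_db):
    """Each site runs one of these, translating updates into its own schema."""
    while True:
        record = shared_updates.get()
        apply_to_local_db(record)
        shared_updates.task_done()
```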
It depends on (among the things already mentioned) what your organisation's technology strategy is, what technology and skills you already have in-house, and so on.
So this is as much an architecture question as it is a db question.
I don't think this question is sufficiently clear to get a single answer. However there are a few possibilities.
In many cases, where you have shared data you want to have a single point of ownership of that information. It could be in a database, in an excel file (which can then be turned into csv and periodically loaded on all dbs), or some other form. The specifics depend on what is shared exactly.
Now in this case it sounds like you are going to have some sort of legal department in charge of some shared information and they will manage that data, which will then be shared to the other sites. This might be done with an application they manage which aggregates information from the other companies or it could be data which is pushed to their systems.
A final point:
Software is at its best when it facilitates human solutions to human problems, not when it tries to solve those problems directly. In these cases, you probably want a good human solution in place and then to look at what software can do to support that. A lot of the issues (who owns the information?) will already have been solved and you will be simply automating what is already done.

How to merge user data after login?

It doesn't matter whether you're building an eshop or any other application that uses a session to store some data between requests.
If you don't want to annoy the user by requiring him to register, you need to allow him to do certain tasks anonymously where possible (the user really has to have a reason to register).
Here comes the problem - if the user decides to log in with his existing profile, he may already have some data in his "anonymous" session.
What are the best practices for merging this data? I'm guessing the application should merge it automatically where possible and let the user decide where that's not possible.
But what I'm really asking is whether there are any resources on how to do the magic in the database (where the session data is usually stored) efficiently.
I have two basic solutions in my mind:
To keep the anonymous session data and just add another "relation" saying what is actually used where and how it is merged
To physically merge the data
We could say that the first solution will probably be more efficient, because the information about the relation will probably mean less data than the data about the user. But it also means more effort when reading the data (as we first need to read the relation to get to the actual user data).
Are there any articles/resources for designing data structures for this particular use case (anonymous + user data)?
An excellent question that any app developer handling user data should ask - and, sadly, very few do :(
In fact, there are two completely independent questions here:
Q1 - At what stage require user to sign in/up?
Q2 - Data concurrency and conflict resolution (see below).
And here is some analysis of each question. Please excuse the extra passion coming from my own "frustrated user" experience. :)
Q1 is a pure usability question, to which the answer is actually obvious:
Avoid or delay forcing the user to sign in as much as possible!
Even the need to save state is not reason enough by itself. If I, as a user, am not interested in saving that state, then don't force me to sign in! Please!
The only reason for you (as the website) to justify forcing me to sign in is when I (as the user) want to save my data for later use. I speak here as a user who has wasted time signing up only to find the site useless. If you want to get rid of too many users, that is the right way to do it. In any other case - please, delay it as much as possible!
Why do so many sites completely disregard such an obvious rule? The possible reasons I can see:
R1 - developer friendly vs user friendly. Yes, it is developer friendly to require sign-in right away, so we don't need to bother with concurrency (Q2), and we save developer costs, time, etc. But every saving comes at a cost! Which in this case is called user experience - not necessarily where you want to look for savings, especially since the solution should not be that hard (see below).
R2 - the designer or manager making the decision is an "indoor enthusiast" :) She lives a happy life surrounded by super-fast computers with a super-fast internet connection and can't imagine that signing up can be that hard for any user. So why is it such a big deal? So many reasons:
It breaks the application flow. Sites living in the previous century still replace the whole screen with a sometimes rather lengthy form. Some forms are badly designed, some have erratic instructions, some simply don't work. Some have submit buttons that are, for some reason, disabled in the browser being used.
Some form designers have the genius idea of locking certain fields with a barely noticeable change of colour. Then don't show me the field if you don't want me to fill it in!
If the site is serious about the user's data, it must request an email address and must verify it! Why request it? How else will you get back to a user who has forgotten all other credentials? Why verify it? Because the user may have mistyped the email: if you don't verify it, then the next time the user tries to recover her password with her correct email, the recovery fails and all data is lost. Obvious, yet there are still sites out there not doing it. Then I need to wait until the verification email arrives and click on a hopefully well-formatted and uniquely identifiable link that does not break in my browser, nor pick up funny characters due to broken encoding detection, making the whole link unusable.
The internet connection can be slow or broken, making every additional step a pain. Even with a good connection, it happens here and there that a page suddenly takes much longer to load. The email may also not arrive right away. Then the impatient user starts furiously clicking the "resend verification" link, in which case 90% of sites resend the link with a new token but also disable all previous tokens. Then several emails arrive in unpredictable order and the poor user has to guess in vain which one (and only one) is still valid. Why those sites find it so hard to keep several tokens active, just for this case, is beyond my understanding.
Finally, there is the hard-to-unlearn habit of sites insisting on a so-called "username". So now, besides my email, I have to think hard to come up with a unique username, different from that of every previous user. Thank you so much for making it sweet and easy! My own way of dealing with this is to use my email as the username. Sadly, there are still sites that don't accept that. And what if some joker used my email as his username? Not so unrealistic if your email is bill#gates.com. Why not simply use email and password and avoid all this mess?
Here are some possible guidelines to relieve the user's pain:
Only force me to sign in/up if you absolutely need to, and give me the chance to choose not to!
Make it a one-page form, so I know what I am in for, and, needless to say, use as few input fields as possible. Ideally only email and password (possibly twice), no username!
Show your sign-in form as a small window on top of your page without reloading it, and allow me to dismiss it with a single click away from that window. Don't force me to look for a "close" button or, even worse, an icon I could confuse with something else!
Account for the user clicking the back/forward and reload buttons. Don't clear the form on reload! Don't put a clear button there at all - it is too easy to click by accident. The data you ask me to fill in should not be so long in the first place that I could not re-enter it without the "assistance" of a clear button.
Now to question Q2. Here we have the well-known problem of conflict resolution that occurs any time two sets of data need to be merged: for instance, the anonymous data and the registered user's data, but also whenever two users modify the same data, or the same user modifies it from different devices at different times, or locally stored data conflicts with server data, and so on.
But whatever the source, the problem is always the same. We have two objects, say $obj1 and $obj2, and we need to produce a single merged object $obj3. The logic can be as simple as the rule that the server's object always wins, or that the last modified object always wins, or that the last modified keys always win, or any more complicated logic; it really depends on the nature of your application. But in each case, all you need to do is write a logic function that takes $obj1 and $obj2 and returns $obj3.
A solution that will work in many cases is to store a timestamp on each object attribute (key) and let the most recently changed key win at the moment of synchronisation. That accounts, for example, for the situation where the same user modifies different attributes anonymously from different devices.
Imagine I modified keys A and B on device AA yesterday, then logged in today from device BB to enter another B and saved it to the server, then switched back to device AA, where I am anonymous, to enter yet another A without touching yesterday's B, and then decided to log in and synchronise. My local B is now obviously stale and should clearly not overwrite the B that I changed more recently on device BB. In this seemingly complicated case, the above solution works seamlessly and efficiently. In contrast, putting the timestamp only on whole objects would get it wrong.
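A minimal sketch of that per-attribute rule (plain dictionaries holding (value, timestamp) pairs; the shape of the data is my own invention):

```python
def merge(obj1, obj2):
    """Per-key last-write-wins: for each attribute, the most recently
    modified (value, timestamp) pair survives."""
    merged = dict(obj1)
    for key, (value, stamp) in obj2.items():
        if key not in merged or stamp > merged[key][1]:
            merged[key] = (value, stamp)
    return merged

# Device AA (anonymous, has the newer A but yesterday's B) vs. the server
# copy saved earlier today from device BB (has the newer B):
local  = {"A": ("new A", 300), "B": ("old B", 100)}
server = {"B": ("newer B", 200)}
print(merge(local, server))  # keeps "new A" and "newer B"
```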
In some cases, it can make sense to keep both objects and, for example, distinguish them by adding extra properties, as in case 1 of Radek's question. For instance, Dropbox appends something like "conflicted copy by user X" to the file name. As in the Dropbox case, this is sensible for collaboration apps, where users like to have some version control.
In those cases, however, you as the developer simply save two copies and let the users deal with the problem.
If, on the other hand, you have to write complicated logic based on the user's data, having two different copies hanging around can be a nightmare. In that case, I would split the data into two groups (e.g. create two objects out of one). The first group holds data representing the state of the application as a whole, where uniqueness matters; for that data I would use conflict resolution as above or similar. The second group is user-specific; there I would store both versions as two separate entries in the database, mark them properly (as Dropbox does), and let users deal with the list of two (or more) entries of their project.
Finally, if that additional complication of database management makes the developer uneasy - and since Radek asked for a resource reference - I want to kill two birds with one stone by mentioning the blog entry StackMob offline Sync, whose solution provides both database and user management functionality and so relieves the developer of that pain. There is surely a lot more to be found when searching for data concurrency, conflict resolution and the like.
To conclude, I have to add the obligatory disclaimer that everything written here is merely my own thoughts and suggestions, to be used at your own risk - and don't hold me responsible if you suddenly get so many happy users that your system crashes :)
As I am working on an app myself where I am implementing all these aspects, I am certainly very interested to hear other opinions and whatever else folks have to say on the subject.
From my experience - both as a user of sites that require a login, and as a developer working with logged in users - I don't think I've ever seen a site behave this way.
The common pattern is to let a user be anonymous and the first time they do something that would require saving state, they are prompted to login. Then the previous action is remembered and the user can continue. For example, if they try to add something to their shopping cart, they are prompted to login and then after login, the item is in their cart.
I suppose some places would allow you to fill a cart and then log in, at which point the cart is associated with a concrete user.
I would create a SessionUser object that holds the state of the site interaction and one field called UserId that is used to retrieve other things like name, address, etc.
For anonymous users, I would create the SessionUser object with an empty reference for UserId. This means we can't resolve a name or an address, but we can still save state: the actions they are performing, the pages they're viewing, etc.
Once they log in, we don't have to merge two objects; we just populate the UserId field on the SessionUser, and now we can traverse the object graph to get the name, email, address or whatever else.
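A small sketch of that shape (field names are illustrative, not from the answer):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SessionUser:
    cart: list = field(default_factory=list)          # anonymous state
    pages_viewed: list = field(default_factory=list)
    user_id: Optional[int] = None                      # empty until login

    def attach(self, user_id: int) -> None:
        # No merge step: the session state stays where it is, we just gain
        # the reference needed to look up name, email, address, etc.
        self.user_id = user_id
```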

Building a web application with multiple database instances or just a single instance

I am currently designing a web application where customers will sign up as companies. Each company will have its own set of users. As I design this, I am wondering which approach would work best. I see sites like FogBugz or Basecamp using subdomains. In cases with subdomains, do you have a database instance per subdomain? I'm wondering whether it is recommended to have a database instance per company, or whether I should have some kind of company table and manage the company and user data/credentials all from one database.
Which approach is best? Is there literature on this subject (i.e. any web or book)?
thanks in advance!
You have to weigh up your options, as some of this will be a matter of opinion and might not be feasible for your implementation.
That being said, I'd consider the single database approach, for these reasons:
Maintenance: when running a database per registered 'client', you will very quickly reach a situation where any changes or upgrades you make to your app's schema have to be applied to every single database instance. This will get ridiculous, fast.
Convenience: you might want analytics and usage stats, or some way to administer all these databases. Querying a single database is trivial compared to aggregating the same query across all your databases. This isn't going to scale.
Scalability *: as mentioned in 2, you're going to need a special sort of aggregation to query things about your clients and about your app as a whole. The bigger your app gets, the more complex your querying. The other issue is: if one client uses the app a lot more than another, what will you be encouraged to optimise? Your app, the bigger client's database, or the smaller client's? And don't forget that anything you do change has to be copied to all databases.
Backups: you can back up one database easily, just by creating a dump and stashing it somewhere. Get a thousand clients and now you have to run 1000 database dumps, and name them well enough to identify each one if a single database corrupts. How will you even know if this happens? Database errors will be localised to that specific database, as opposed to your entire app.
UI: a user signs up or is invited to use your app, and belongs to one particular client. Are you going to save that user account to the client's database? If so, see Scalability for the issue of working with that data when the user wants to change their password, or you want to email them. Or do you tell the user to let you know which database they're in so you can find them?
Simplification: say you have a database per client and want to move to just a single one. How do you merge them all together without significantly breaking things? There'll be primary key conflicts if you use auto-incremented IDs; bookmarked URLs will break if you decide to just regenerate the keys; foreign keys across tables will no longer point to the right records. Your data integrity will go down the pan.
You mention 'white label' services that offer their product through custom subdomains. I'm not privy to how these work, but the subdomain is only a basic CNAME or A record in their DNS zone file. The process of adding these can be automated, and the design of the application plus a bit of server configuration can deal with linking these subdomains to the correct accounts and data. They're just URLs, so on the backend the app may not even differentiate between:
http://client.example.com
http://example.com/client
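For example, the backend might resolve both styles to the same account with something as small as this (purely illustrative):

```python
def resolve_client(host: str, path: str) -> str:
    """Map client.example.com or example.com/client to a client key."""
    subdomain = host.split(".")[0]
    if subdomain not in ("example", "www"):
        return subdomain
    return path.strip("/").split("/")[0]

assert resolve_client("client.example.com", "/cart") == "client"
assert resolve_client("example.com", "/client/cart") == "client"
```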
Overall though, you may decide that all these problems are things you can and would prefer to deal with. Be warned, however, that by doing so you may be shooting yourself in the foot, and you can gain a lot more from crafting a well-designed single database schema and a well-abstracted front-end.
*#xQbert mentions the very real benefit of scalability with multiple databases. I've amended this answer to clarify that I was more concerned with other aspects.

Should application users be database users?

My previous job involved maintenance and programming for a very large database with massive amounts of data. Users viewed this data primarily through an intranet web interface. Instead of having a table of user accounts, each user account was a real first-class account in the RDBMS, which permitted them to connect with their own query tools, etc., as well as permitting us to control access through the RDBMS itself instead of using our own application logic.
Is this a good setup, assuming you're not on the public internet dealing with potentially millions of (potentially malicious) users or something? Or is it always better to define your own means of handling user accounts, your own permissions, your own application security logic, and only hand out RDBMS accounts to power users with special needs?
I don't agree that using the database for user access control is as dangerous as others are making it out to be. I come from the Oracle Forms development world, where this type of user access control is the norm. Like any design decision, it has its advantages and disadvantages.
One of the advantages is that I could control select/insert/update/delete privileges for EACH table from a single setting in the database. On one system we had 4 different applications (managed by different teams, in different languages) hitting the same database tables. We were able to declare that only users with the Manager role could insert/update/delete data in a specific table. If we hadn't managed it through the database, each application team would have had to correctly implement (duplicate) that logic throughout their application. If one application got it wrong, the other apps would suffer. Plus you would have duplicate code to maintain if you ever wanted to change the permissions on a single resource.
Another advantage is that we did not need to worry about storing user passwords in a database table (and all the restrictions that come with that).
I don't agree that "database user accounts are inherently more dangerous than anything in an account defined by your application". The privileges required to change database-level privileges are normally MUCH harder to obtain than the privileges required to update/delete a single row in a "PERSONS" table.
And "scaling" was not a problem, because we assigned privileges to Oracle roles and then assigned roles to users. With a single Oracle statement we could change the privilege for millions of users (not that we had that many).
Application authorization is not a trivial problem. Many custom solutions have holes that hackers can easily exploit. The big names like Oracle have put a lot of thought and code into providing a robust application authorization system. I agree that using Oracle security doesn't work for every application. But I wouldn't be so quick to dismiss it in favor of a custom solution.
Edit: I should clarify that despite anything in the OP, what you're doing is logically defining an application even if no code exists. Otherwise it's just a public database with all the dangers that entails by itself.
Maybe I'll get flamed to death for this post, but I think this is an extraordinarily dangerous anti-pattern in security and design terms.
A user object should be defined by the system it's running in. If you're actually defining these in another application (the database) you have a loss of control.
It makes no sense from a design point of view, because if you want to extend those accounts with any kind of data at all (email address, employee number, MyTheme...), you're not going to be able to extend the DB user and you're going to need to build that users table anyway.
Database user accounts are inherently more dangerous than anything in an account defined by your application, because they can be promoted, deleted, accessed or otherwise manipulated not only by the database and any passing DBA, but by anything else connected to the database. You've exposed a critical system element as public.
Scaling is out of the question. Imagine a scenario where you're going to have tens or hundreds of thousands of users. That's just not going to be manageable as DB accounts, but as records in a table it's just data. The age-old argument of "well, there's only ever going to be X users" doesn't hold any water with me, because I've seen very limited internal apps become publicly exposed when the business felt they could add value to the customer, or when the company got bought by a giant partner who now needs access. You must plan for reasonable extensibility.
You're not going to be able to share connection pooling, you're not going to be any more secure than if you had just created a handful of, e.g., role accounts, and you're not necessarily going to be able to make mass changes when you need to, or back up effectively.
All in all, there seem to be numerous serious problems here, and I imagine other more experienced SOers could list more.
I think that, generally, in your traditional database application they shouldn't be, for all the reasons already given. In a traditional database application there is a business layer that handles all the security, and that's because there is such a strong line between the people who interact with the application and the people who interact with the database.
In this situation it is generally better to manage these users and roles yourself. You can decide what information you need to store about them, and what you log and audit. Most importantly, you define access based on pure business rules rather than database rules: it's got nothing to do with which tables they access and everything to do with whether they can [insert business action here]. However, these are not technical issues; they are design issues. If that is what you are required to control, then it makes sense to manage your users yourself.
You have described a system where you allow users to query the database directly. In that case, why not use DB accounts? They will do the job far better than you will if you attempt to analyse the queries that users write and vet them against some rules you have designed. That, to me, sounds like a nightmare of a system to write and maintain.
Don't lock things down just because you can. Explain to those in charge what the security implications are, but don't attempt to prevent people from doing things simply because you can - especially not when they are used to accessing the data directly.
Our job as developers is to enable people to do what they need to do. And in the situation you have described - specifically, connecting to the database and querying it with their own tools - I think that anything other than database accounts is going to be either insecure or unnecessarily restrictive.
"each user account was a real first-class account in the RDBMS, which permitted them to connect with their own query tools, etc.,"
not a good idea if the RDBMS contains:
any information covered by HIPAA or Sarbanes-Oxley or The Official Secrets Act (UK)
credit card information or other customer credit info (POs, lines of credit etc)
personal information (ssn, dob, etc)
competitive, proprietary, or IP information
because when users can use their own non-managed query tools the company has no way of knowing or auditing what information was queried or where the query results were delivered.
oh and what #annakata said.
I would avoid giving any user direct database access. Later, when this starts causing problems, taking away their access becomes very difficult.
At the very least, give them access to a read-only replica of the database so they can't kill your whole company with a bad query.
A lot of database query tools are very advanced these days, and it can feel like a real shame to reimplement the world just to add restrictions. As long as the database user permissions are properly locked down, it might be okay. In many cases, though, you can't do this: you should be exposing a high-level API to the database that inserts objects across many tables properly, without the user needing specific training along the lines of "just add an address into that table there - why isn't it working?".
If they only want to use the data to generate reports in Excel, etc., then maybe you could use a reporting front end like BIRT instead.
So basically: if the users are knowledgeable about databases, and the resources to implement a proper front end are scarce, keep doing this. If the resources do come up, however, it is probably time to gather people's requirements and create a simpler, task-oriented front end for them.
This is, in a way, similar to: is sql server/AD good for anything
I don't think it's a bad idea to throw your security model, at least a basic one, in the database itself. You can add restrictions in the application layer for cosmetics, but whichever account the user is accessing the database with, be it based on the application or the user, it's best if that account is restricted to only the operations the user is allowed.
I don't speak for all apps, but there are a large number I have seen where capturing the password is as simple as opening the code in notepad, using an included dll to decrypt the configuration file, or finding a backup file (e.g. web.config.bak in asp.net) that can be accessed from the browser.
"not a good idea if the RDBMS contains: any information covered by HIPAA or Sarbanes-Oxley or The Official Secrets Act (UK); credit card information or other customer credit info (POs, lines of credit etc); personal information (ssn, dob, etc); competitive, proprietary, or IP information"
Not true: one can perfectly well manage which data a database user can see and which data they can modify. A database (at least Oracle) can also audit all activity, including selects. Having thousands of database users is also perfectly normal.
It is actually more difficult to build good, secure applications yourself, because you have to program that security; a database already offers it and lets you configure it declaratively, with no code required.
I know I am replying to a very old post, but I recently came across the same situation in my current project and was also wondering along similar lines: should application users be database users?
This is what I analysed:
It definitely doesn't make sense to create that large a number of application users on the database (if your application is going to be used by many users).
Let's say you created X (a huge number) of users on the database. You are opening a clear gateway into your database.
Let's take a scenario for the solution:
There are two types of application users (Managers and Assistants). Both need access to the database for some transactions.
It's obvious you would create two roles in the database, one for each type (Manager and Assistant). But what about the database user the application connects with? If you create one account per user, you end up creating accounts on the database linearly.
What I suggest:
Create one database account per role (let's say Manager_Role_Account).
Let your application contain the business logic to map an application user to the corresponding role (the user Tom with the Manager role maps to Manager_Role_Account).
Use the database user corresponding to the role identified in step 2 (Manager_Role_Account) to connect to the database and execute your query.
Hope this makes sense!
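A tiny sketch of that mapping at the application layer (account names and the connect function are invented for illustration):

```python
# Application role -> shared database account (credentials are placeholders).
ROLE_ACCOUNTS = {
    "manager":   {"user": "manager_role_account",   "password": "..."},
    "assistant": {"user": "assistant_role_account", "password": "..."},
}

def connect_for(app_user_role, connect):
    """`connect` stands in for your driver's connect function, e.g. psycopg2.connect."""
    creds = ROLE_ACCOUNTS[app_user_role]
    return connect(user=creds["user"], password=creds["password"], dbname="app")
```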
Update: as I said, I came across a similar situation in my project (a PostgreSQL database at the back end and a Java web app at the front end), and I found something very useful called Proxy Authentication.
This means that you can log in to the database as one user but limit or extend your privileges based on the proxy user.
I found some very good links explaining it:
For PostgreSQL: Choice of authentication approach for financial app on PostgreSQL
For Oracle: Proxy Authentication
Hope this helps!
It depends (like most things).
Having multiple database users negates connection pooling, since most libraries handle pooling based on connection strings and user accounts.
On the other hand, it's probably a more secure solution than anything you or I would build from scratch. It leaves security up to the OS and the database server, which I trust much more than myself. However, this is only the case if you go to the effort of configuring the database permissions well. If you're using a bunch of OS/DB users with the same permissions, it won't help much. You'll still get an audit trail, but that's about it.
All that said, I don't know that I'd feel comfortable letting normal users connect directly to the database with their own tools.
I think it's worth highlighting what other answers have touched upon:
A database can only define restrictions based on the data, i.e. restrict select/insert/update/delete on particular tables or columns. I'm sure some databases can do somewhat cleverer things, but they'll never be able to implement business-rule-based restrictions the way an application can. What if a certain user is allowed to update a column only to certain values (say < 1000), or only to increase prices, or to change either of two columns but not both?
I'd say that unless you are absolutely sure you'll never need anything but table/column granularity, this is reason enough by itself.
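For instance, a rule like "prices may only go up, and only below 1000" is a couple of lines in the application layer but has no direct equivalent in table/column grants (illustrative only):

```python
def update_price(current_price: float, new_price: float) -> float:
    # Business rules a table/column grant cannot express:
    if new_price < current_price:
        raise ValueError("prices may only be increased")
    if new_price >= 1000:
        raise ValueError("price must stay below 1000")
    return new_price
```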
This is not a good idea for any application where you store data for multiple users in the same table and you don't want one user to be able to read or modify another user's data. How would you restrict access in this case?

Resources