I have noticed a lot of companies use a prefix for their database tables. E.g. tables would be named MS_Order, MS_User, etc. Is there a good reason for doing this?
The only reason I can think of is to avoid name collision. But does it really happen? Do people run multiple apps in one database? Is there any other reason?
Personally, I don't see any value in it. In fact, it's a bummer for intellisense-like features because everything begins with MS_. :) The Master agrees with me too.
Huge schemas often have many tables with similar, but distinct, purposes. Thus, various "segmented" naming conventions.
In SQL Server 2005 and above, the schema feature eliminates the need for any kind of prefix. A good example of their usage can be found by reading about the schemas in the AdventureWorks sample database.
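For instance, a minimal sketch of the idea (the schema and table names below are just illustrative, not taken from AdventureWorks):

    -- Group related tables under a schema instead of using a name prefix
    CREATE SCHEMA Sales AUTHORIZATION dbo;
    GO
    CREATE TABLE Sales.SalesOrder
    (
        SalesOrderID int      IDENTITY(1,1) PRIMARY KEY,
        OrderDate    datetime NOT NULL
    );
    GO
    -- Two-part names keep things grouped without cluttering every table name
    SELECT SalesOrderID, OrderDate FROM Sales.SalesOrder;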
In some older versions of SQL Server, having a prefix to create a pseudo-namespace might have been useful for databases with lots of tables.
Other than that I can't really see the point.
Even when the database only contains one application, prefixes can be useful for grouping related parts of the application together. So tables that contain customer information might be prefixed with cust_, those that contain warehouse information might be prefixed with inv_ (for inventory), those that contain financial information might be prefixed with fin_, etc.
I've worked on systems where an existing database for an application was created and maintained by a different company, and we needed to add another app that uses large amounts of the same data with just a few extra tables of our own. In that case, an app-specific prefix can help with the separation.
Slightly tangentially to the original question, I've seen databases use prefixes to indicate the type of data a table holds. There'd be one prefix for lookup tables, which are obviously pretty static in both size and content, and a different prefix for tables that contain variable data. That in turn may be broken down into one prefix for tables that are added to but not really changed (logging, processed orders, customer transactions, etc.) and another for more variable data like customer balances. Link tables could also have their own prefix to separate them out.
I have never seen a naming collision, as it usually doesn't make sense to put tables from different applications into the same database namespace. If you had some sort of reusable library that could be integrated into different applications, perhaps that might be a reason, but I haven't seen anything like that.
Though, now that I think about it, there are some cheap web hosting providers that only allow users to create a very small number of databases, so it would be possible to run a number of different applications using a single database, so long as the names didn't collide (and such a prefixing convention would certainly help).
Multiple applications using a particular table, correct. Prefixes prevent name collisions. They also make it rather simple to back up tables and keep the copies in the same database: just change the prefix and your backup will be fully functional, etc. Aside from that, it's just good practice.
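A quick sketch of that backup-by-prefix idea, with hypothetical table names:

    -- Keep a working copy of app_Orders in the same database,
    -- distinguished only by its prefix
    SELECT *
    INTO   bak_Orders
    FROM   app_Orders;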
Prefixes are a good way to sort out which sql objects are associated with which app when multiple apps dip into the same database.
I have also prefixed sql objects differently within the same app to facilitate easier management of security. i.e. all the objects with admin_ need this security applied and the rest need something else.
Prefixes can be handy for humans, search tools and scripts. If the situation is a simple one, however, there is probably no use for them at all.
It's most often used when several applications share one database. For example, if you install WordPress, it prefixes all tables with "wp_". This is good if you want your applications to share data very easily (sessions across all applications in your company, for example).
There are better ways to accomplish this, however, and I never prefix my table names, as each application has its own self-contained database.
I'm developing a tool where I've prefixed tables etc. with "dbo"; now I'm getting requests for custom schema names. I'm thinking of skipping them and instead letting the user control this via the login associated with the database. I know there's talk about "performance", since the server needs to search the user's default schema and then fall back on dbo, etc., but is that really an issue? Opinions?
First, I would look at this question as a feature request from your customers (users?). So the immediate decision to make is, should you even consider looking into this now, or do you have other feature requests that are obviously more important and deliver more benefit to the customer?
For example, for now you could simply tell customers that your application requires its own database that should not be shared with other applications or manipulated in any way by the customer. Then you don't have to worry about schemas or the same object name in two schemas because your application 'owns' the database. Perhaps this is already the case, but if so then I don't understand why your customers care which schema your objects are in.
Second, assuming that you do decide to work on it, you should gather some information about why people are asking for this, to make sure that you clearly understand what they expect you to deliver and what the benefit is for them. If customers are really saying "your application runs slowly" then the choice of schema is highly unlikely to be the reason, it's much more probable that indexing, schema design or your application code are the areas to look at.
Finally, if you still want to go ahead you need to find a technical solution. This is partly a deployment issue and partly a coding issue. It's a deployment issue because you have to deploy your database objects in a specific schema that is specified at installation time, and all your patches and later releases need to be aware of that too. The coding issue is that you need your database code to be "schema-aware", in case you end up in a situation where you have dbo.TableName, MyTool.TableName and OtherSchema.TableName all in the same database. The solution is obviously to reference the schema name in all code, which is considered an important best practice anyway. But exactly how you do this depends on how you have structured your application, if you use an ORM etc.
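For example, with SQL Server one option is a sqlcmd scripting variable for the schema, so the choice made at installation time flows through the deployment scripts; all names below are assumptions:

    -- deploy.sql -- run as: sqlcmd -d MyToolDb -i deploy.sql -v SchemaName = "MyTool"
    IF SCHEMA_ID(N'$(SchemaName)') IS NULL
        EXEC (N'CREATE SCHEMA [$(SchemaName)]');
    GO
    CREATE TABLE [$(SchemaName)].TableName
    (
        TableNameID int NOT NULL PRIMARY KEY
    );
    GO
    -- application code then always uses two-part, schema-qualified names:
    -- SELECT TableNameID FROM [$(SchemaName)].TableName;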
I want to learn how practical it is to use an LDAP server (say, AD) as a storage base. To be clearer: how much sense does it make to use an LDAP server instead of an RDBMS to store data?
I can guess that most of you might just say "it doesn't", but there might be some reasons that make it meaningful (especially business-wise).
A few points first:
Each table becomes a container entity and each row becomes a new entity as its child. The row entities contain attributes for the columns. So you represent your data in this way. (This seems like the most meaningful representation to me; suggestions are welcome.)
So storing data the way a DB server does is possible, but the lack of FK and PK support (not sure about PK) is an issue. On the other hand, it does support indexing of attributes (an attribute relates to a column), though I'm not sure how efficient that is. So the consistency of the data is the responsibility of the application layer.
Why would somebody ever do this?
The data the application uses/stores closely matches the existing data in AD (users, machines, department info, etc.). (Still, some customization of the existing entity schema is required, and new schema definitions are needed for data that isn't closely related.)
(I think the strongest reason would be this business one.) Most mid-sized companies have very well-configured AD servers (replicated, backed up, etc.), but they don't have an equivalent DB setup (comment on this as much as you want). When you sell these companies software that requires a DB, they have to manage that DB setup; but if you can say "you don't need a DB setup and management; you can just use your existing AD", it sounds appealing.
Obviously there are many disadvantages to giving up the DB; feel free to mention them, but let's assume they are acceptable. (I can mention more if the question is not clear enough.)
LDAP is a terrible tool for maintaining most business data.
Think about a typical one-to-many relationship - say, customer and orders. One customer has many orders.
There is no good way to represent this data in an LDAP directory.
You could try having a mock "foreign key" by making every entry of that given object class have a "foreign key" attribute, but your referential integrity just went out the window. Cascade deletes are impossible.
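For contrast, here is a minimal sketch of that one-to-many relationship in SQL (table and column names are just illustrative), with the referential integrity and cascade delete that the directory can't give you:

    CREATE TABLE Customer
    (
        CustomerID int           NOT NULL PRIMARY KEY,
        FullName   nvarchar(100) NOT NULL
    );

    CREATE TABLE [Order]
    (
        OrderID    int      NOT NULL PRIMARY KEY,
        CustomerID int      NOT NULL
            REFERENCES Customer (CustomerID)
            ON DELETE CASCADE,        -- deleting a customer removes their orders
        OrderDate  datetime NOT NULL
    );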
You could try having a "customer" object that has "order" children. However, you've just introduced a specific hierachy - you're now tied to it.
And that's the simplest use case. Once you start getting into more complex relationships, you're basically re-inventing an RDBMS in a system explicity designed for a different purpose. The clue's in the name - directory.
If you're storing a phonebook, then sure, use LDAP. For anything else, use a real database.
For relatively small, flexible data sets I think an LDAP solution is workable. However, an RDBMS provides a number of significant advantages:
Backup and Recovery: just about any RDBMS provides ACID properties, and its backups are generally easy to script, with several options (e.g. full vs. differential). I just don't know about LDAP here, but I imagine these qualities are not as widespread.
Reporting: AFAIK LDAP doesn't offer a way to JOIN values easily, much less do things like calculate summations, so you would put a lot of effort into application code to reproduce those behaviors when you do need reporting (and what application doesn't, ultimately?). See the sketch after this list.
Indexing: LDAP solutions do have indexing, but again it seems hit or miss, whereas seemingly every database out there has put some real effort into getting this right.
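To make the reporting point concrete, something like the query below is trivial in SQL but would take a fair amount of application code against a directory (the Customer/Order schema here is purely hypothetical):

    -- Total order value per customer: a join plus an aggregate
    SELECT c.FullName,
           SUM(o.TotalAmount) AS TotalOrdered
    FROM   Customer AS c
           JOIN [Order] AS o ON o.CustomerID = c.CustomerID
    GROUP BY c.FullName;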
I think any serious business system's storage should be backed up in the same fashion you believe LDAP is in most environments. If what you're really after is the flexibility of representing hierarchies and the ability to define dynamic schemas, I'd suggest looking into NoSQL solutions or the Java Content Repository.
LDAP is very useful for storing that kind of information, and if you want to, you may use it. An RDBMS is just more comfortable to work with through ORM systems; your persistence logic with LDAP will be far more complex.
It is also worth mentioning that this is not a standard approach, so the people who will support the project will spend more time on analysis.
I've used this approach for fun: I generate a phonebook from Active Directory. But I don't think it's a good idea to use LDAP as a store for business applications.
In short: Use the right tool for the right job.
When people see LDAP, you have already set an expectation for your system. Don't forget what the L stands for: Lightweight. LDAP was designed for accessing directories over a network.
With a “directory database” you can build a certain type of application: if you can map your data to a tree-like data structure, it will work. I surely would not want to stream videos from LDAP! You could probably hack something together, but I would prefer a streaming server.
There might be some hidden gotchas down the line if you use a tool for something it wasn't designed to do. So the downside is that you'll have to test things that would otherwise have been a given.
It's not just a technical concern, either. Your operational support team might “frown” on your application, as they will have certain expectations/preconceptions based on its architectural nature. Imagine their surprise if you hand them a CRM system (website + files and popped email, etc.) with an LDAP server as the database to maintain.
If I were in your position, I would steer towards one of the NoSQL DB solutions rather than trying to use LDAP. LDAP is fine for things like storing user and employee information, but it is terrible to interact with when you need to make changes. A NoSQL DB will allow you to store your data the way you want, without the RDBMS overhead you would like to avoid.
The answer is actually easy: think of CRUD (Create, Read, Update, Delete). If your system will mostly perform reads, you can consider LDAP, because LDAP is quick at read operations and was designed for them. If the other operations will be more common, an RDBMS would be the better option.
I am practicing SQL, and suddenly I have so many tables. What is a good way to organize them? Is there a way to put them into different directories?
Or is the only option to create a tablespace as explained here?
It depends what you mean by organise - tablespaces are really focused on organising storage.
For organising tables, grouping them into different SCHEMAS may be more useful.
This is more like the concept of a 'namespace' - i.e. schema1.people is not the same as schema2.people.
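For example (a sketch that assumes both schemas already exist; the names are purely illustrative):

    -- Each schema can hold its own PEOPLE table without any collision
    CREATE TABLE schema1.people ( id INT PRIMARY KEY, name VARCHAR(100) );
    CREATE TABLE schema2.people ( id INT PRIMARY KEY, name VARCHAR(100) );

    -- The qualified name says which one you mean
    SELECT name FROM schema1.people;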
It often pays off to separate Operational and Configuration data into different schemas.
If you are talking about organising tables within a schema - and in a real world application, having hundreds of tables in one schema is not unknown - then all you can really do is come up with good naming conventions.
Some places group tables with prefixes at the start of the table name. Personally, I think this leads to duplication - EMP_ADDRESSES and CUST_ADDRESSES rather than a properly linked Addresses.
It depends why you want to organise them and why (and when) you're creating them. If the number is just overwhelming when you look in, say, user_tables, then splitting into tablespaces won't help much, as you'd need to specify which one you wanted to query each time. And there isn't really a 'directory' equivalent.
If you're creating practice tables just to experiment with mini projects, then one option might be to create a new Oracle user for each project and create all the related tables under that user's schema. Then you'd only see the relevant tables when logged in as that user, while working on that project. This has the advantage of allowing you to reuse table names, which can simplify things a bit if you're doing lots of similar projects.
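A minimal sketch of that per-project setup (the user name, password, and tablespace are assumptions; adjust to your environment):

    -- Run as a privileged user: one Oracle user (= schema) per practice project
    CREATE USER project1 IDENTIFIED BY a_password;
    GRANT CREATE SESSION, CREATE TABLE TO project1;
    ALTER USER project1 QUOTA UNLIMITED ON users;  -- assuming USERS is the default tablespace

    -- Connect as PROJECT1 and anything you create lives in that schema only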
You should also probably be thinking about tidying up a bit, dropping tables when you're sure you've finished that bit of experimentation.
They are already organised, because they are in a database and you have a repository.
I have a SQL Server with a number of databases. Most are for applications, but some store data for reporting and analysis. I also have information that is not specific to any one database, but can be used by several of them.
A good example is my company's fiscal calendar, which I store in a table. Putting the same fiscal calendar table in each database is a bad idea for me: even with the downside of multiple database dependencies, I think it is worth it, because otherwise there is too much risk of inconsistency. What I do now is put the fiscal calendar and other similar functions and procedures in a database simply titled "Community".
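Concretely, the application databases just reference it with a three-part name (the column names here are made up for illustration):

    SELECT fc.FiscalYear, fc.FiscalPeriod, fc.StartDate, fc.EndDate
    FROM   Community.dbo.FiscalCalendar AS fc
    WHERE  fc.FiscalYear = 2014;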
I have the rare and glorious opportunity of moving to a new server and refactoring everything as I go. I am wondering if I should change this practice. Below are a few specific questions:
Am I unaware of any disadvantages of my current method?
Is there a better place or name to use to store this type of information?
What is your experience with issues like this, and am I missing what should be an obvious solution?
You've already taken the important step of separating the shared data into its own database. I don't think there's a better approach. The name is fairly subjective, but Common is another term frequently used for this purpose.
I would hide this behind a "shared data service" or something, rather than relying on the existence of a particular database.
You don't have to be a big shop before you need to move one app onto its own servers, and then you're bollixed.
At the very least, I'd use a linked server to hide it even if on the same server so you are independent of actual server names.
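For example, a synonym keeps the actual server and database names out of application code entirely (all names below are placeholders):

    -- One-time setup in each application database
    CREATE SYNONYM dbo.FiscalCalendar
        FOR [SharedServer].Community.dbo.FiscalCalendar;

    -- Application code never mentions the server or database again
    SELECT * FROM dbo.FiscalCalendar;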
We have a set of applications that work with multiple database engines, including SQL Server and Access. The schemas for each are maintained separately and are not stored in text form, which makes source control difficult. We are interested in moving to a system where the schema is stored in some text-based format (such as XML or YAML) with descriptions of field data types, foreign key relationships, etc.
When all is said and done, we want to have a single text file in source control that can be used to generate a clean database that works with at least SQL Server and Access (and preferably is capable of working with Oracle, DB2 and other engines).
I'm certain that there are tools or libraries out there that can get us at least part of the way there. For one, I've found Altova MapForce, which looks like it may do the trick, but I'm interested in hearing about any alternative tools or libraries, or even entirely different solutions, for those in the same predicament.
Note: The applications are written in C++ and ORM solutions are both not readily available in C++ and would take far too long to integrate into our aging products.
If you don't use an object-relational mapper that does this (and many other things) for you, the easiest way might be to whip up a few structures that define your tables and attributes in some form of (static) code and write little generators to create the actual databases from that description.
That makes source control easy, and if you're careful when designing those structures, you can easily re-use them for other DBs if the need arises.
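For instance, a single table description in that text format might be rendered into engine-specific DDL by the little generators, roughly like this (the Access/Jet syntax is from memory, so treat it as an assumption):

    -- Generated for SQL Server
    CREATE TABLE Customer
    (
        CustomerID int           IDENTITY(1,1) PRIMARY KEY,
        FullName   nvarchar(255) NOT NULL
    );

    -- Generated for Access (Jet SQL)
    CREATE TABLE Customer
    (
        CustomerID COUNTER PRIMARY KEY,
        FullName   TEXT(255) NOT NULL
    );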
The consensus when I asked a similar (if rather more naive) question seemed to be to use raw SQL and to manage the RDBMS dependencies with an additional layer. Good luck.
The tool you're looking for is Liquibase. No support for Access, though...