I have to implement some SQL views for a third-party report engine.
Each view has a complex query that joins multiple tables (some tables contain millions of rows).
Some view queries have subqueries.
Views are not accessed by the application (they are only read by the report engine).
I have the following concerns
we have some doughts this will badly impact the current application (since the complex queries in the view are running every time when accessing the view)
What would be the performance impact of executing the complex views? (memory, time, etc)
These are some other solutions we are considering for now:
Use new tables instead of views and update the tables using triggers and stored procedures (sketched below).
Use a replicated database and create those views on that database (so it will not affect the current system).
Can you give me comments on the above concerns and the solutions, please? New suggestions are welcome.
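For the first alternative (tables maintained by triggers and stored procedures), a minimal sketch, assuming a hypothetical dbo.Orders base table and a hypothetical dbo.rpt_DailySales reporting table; a real version would also have to handle UPDATE and DELETE:

CREATE TABLE dbo.rpt_DailySales
(
    SalesDate date  NOT NULL PRIMARY KEY,
    Total     money NOT NULL
);
GO

-- Keep the reporting table current as rows are inserted into the base table.
CREATE TRIGGER trg_Orders_MaintainDailySales
ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    MERGE dbo.rpt_DailySales AS t
    USING (SELECT CAST(OrderDate AS date) AS SalesDate, SUM(Amount) AS Total
           FROM inserted
           GROUP BY CAST(OrderDate AS date)) AS s
        ON t.SalesDate = s.SalesDate
    WHEN MATCHED THEN
        UPDATE SET t.Total = t.Total + s.Total
    WHEN NOT MATCHED THEN
        INSERT (SalesDate, Total) VALUES (s.SalesDate, s.Total);
END;
GO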
I know Oracle offers several refresh mode options for its materialized views (on demand, on commit, periodically).
Does Microsoft SQL Server offer the same functionality for its indexed views?
If not, how else can I use indexed views on SQL Server if my purpose is to export data on a daily and on-demand basis, and I want to avoid performance overhead problems? Does a workaround exist?
A materialized (indexed) view in SQL Server is always up to date; the overhead falls on every INSERT/UPDATE/DELETE that affects the view.
I'm not completely sure what you require; your question isn't completely clear to me. However, if you only want to pay the overhead once, on a daily and on-demand basis, I suggest that you drop the index when you don't need it and recreate it when you do. The index will be built when you create it, and it will then be up to date. While the index is dropped there will not be any overhead on your INSERT/UPDATE/DELETE commands.
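A rough sketch of that drop/recreate cycle, using a hypothetical view and index name:

-- Drop the clustered index after the export: the view definition stays,
-- but it is no longer materialized, so INSERT/UPDATE/DELETE pay no extra cost.
DROP INDEX IX_vSalesSummary ON dbo.vSalesSummary;

-- ... normal OLTP activity, no indexed-view overhead ...

-- Recreate it just before the next export; the index is rebuilt from the
-- base tables at this point and is then fully up to date.
CREATE UNIQUE CLUSTERED INDEX IX_vSalesSummary
    ON dbo.vSalesSummary (CustomerID, SalesDate);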
Currently I am designing a database for use in our company. We are using SQL Server 2008. The database will hold data gathered from several customers. The goal of the database is to acquire aggregate benchmark numbers over several customers.
Recently, I have become worried with the fact that one table in particular will be getting very big. Each customer has approximately 20.000.000 rows of data, and there will soon be 30 customers in the database (if not more). A lot of queries will be done on this table. I am already noticing performance issues and users being temporarily locked out.
My question, will we be able to handle this table in the future, or is it better to split this table up into smaller tables for each customer?
Update: It has now been about half a year since we first created the tables. Following the advice below, I created a handful of huge tables. Since then, I have been experimenting with indexes and decided on a clustered index on the first two columns (Hospital code and Department code), on which we would have partitioned the table had we had Enterprise Edition. This setup worked fine until recently; as Galwegian predicted, performance issues are springing up. Rebuilding an index takes ages, users lock each other out, queries frequently take longer than they should, and for most queries it pays off to first copy the relevant part of the data into a temp table, create indexes on the temp table and run the query. This is not how it should be. Therefore, we are considering buying Enterprise Edition for use of partitioned tables. If the purchase cannot go through, I plan to use a workaround to accomplish partitioning in Standard Edition.
Start out with one large table, and then apply 2008's table partitioning capabilities where appropriate, if performance becomes an issue.
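A rough sketch of what that could look like (names, boundary values and filegroups are made up; note that table partitioning in SQL Server 2008 requires Enterprise Edition):

-- Partition by customer: one range per block of customer IDs.
CREATE PARTITION FUNCTION pfCustomer (int)
    AS RANGE LEFT FOR VALUES (10, 20, 30);
GO

CREATE PARTITION SCHEME psCustomer
    AS PARTITION pfCustomer ALL TO ([PRIMARY]);
GO

CREATE TABLE dbo.BenchmarkData
(
    CustomerID  int            NOT NULL,
    MeasureDate datetime       NOT NULL,
    Value       decimal(18, 4) NOT NULL
) ON psCustomer (CustomerID);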
Datawarehouses are supposed to be big (the clue is in the name). Twenty million rows is about medium by warehousing standards, although six hundred million can be considered large.
The thing to bear in mind is that such large tables have a different physics, like black holes. So tuning them takes a different set of techniques. The other thing is, users of a datawarehouse must understand that they are dealing with huge amounts of data, and so they must not expect sub-second response (or indeed sub-minute) for every query.
Partitioning can be useful, especially if you have clear demarcations such as, in your case, CUSTOMER. You have to be aware that partitioning can degrade the performance of queries which cut across the grain of the partitioning key, so it is not a silver bullet.
Splitting tables for performance reasons is called sharding. Also, a database schema can be more or less normalized. A normalized schema has separate tables with relations between them, and data is not duplicated.
I am assuming you have your database properly normalized. It shouldn't be a problem to deal with the data volume you refer to on a single table in SQL Server; what I think you need to do is review your indexes.
Since you've tagged your question as 'datawarehouse' as well, I assume you know something about the subject. Depending on your goals you could go for a star schema (a multidimensional model with fact and dimension tables, see the sketch below): store all fast-changing data in one fact table (per subject) and the slow-changing data in separate dimension/'snowflake' tables.
Another option is the Data Vault method by Dan Linstedt, which is a bit more complex but provides you with full flexibility.
http://danlinstedt.com/category/datavault/
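As a sketch of the star-schema idea, with purely hypothetical names that reuse the hospital/department codes mentioned in the question:

-- Slow-changing dimensions.
CREATE TABLE dbo.DimHospital
(
    HospitalKey  int IDENTITY(1,1) PRIMARY KEY,
    HospitalCode varchar(10)   NOT NULL,
    HospitalName nvarchar(100) NOT NULL
);

CREATE TABLE dbo.DimDepartment
(
    DepartmentKey  int IDENTITY(1,1) PRIMARY KEY,
    DepartmentCode varchar(10)   NOT NULL,
    DepartmentName nvarchar(100) NOT NULL
);

-- Fast-changing measures go in the fact table, keyed to the dimensions.
CREATE TABLE dbo.FactBenchmark
(
    HospitalKey   int  NOT NULL REFERENCES dbo.DimHospital (HospitalKey),
    DepartmentKey int  NOT NULL REFERENCES dbo.DimDepartment (DepartmentKey),
    MeasureDate   date NOT NULL,
    MeasureValue  decimal(18, 4) NOT NULL
);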
In a properly designed database, that is not a huge amount of records and SQL Server should handle it with ease.
A single partitioned table is usually the best way to go. Trying to maintain separate individual customer tables is very costly in terms of time and effort and far more prone to errors.
Also examine your current queries if you are experiencing performance issues. If you don't have proper indexing (did you, for instance, index the foreign key fields?) queries will be slow; if you don't have sargable queries they will be slow (a sargable rewrite is sketched below); if you used correlated subqueries or cursors, they will be slow. Are you returning more data than is strictly needed? If you have SELECT * anywhere in your production code, get rid of it and only return the fields you need. If you used views that call views that call views, or if you used an EAV table, you will have performance issues at this level. If you allowed a framework to autogenerate SQL code, you may well have badly performing queries. Remember, Profiler is your friend. Of course you could also have a hardware issue; you need a pretty good-sized dedicated server for that number of records. It won't work to run this on your web server or a small box.
I suggest you hire a professional DBA with performance tuning experience; it is quite complex stuff. Databases designed by application programmers are often bad performers once they get a real number of users and records. A database MUST be designed with data integrity, performance and security in mind; if you didn't do that, the chances of having them are slim indeed.
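To illustrate the indexing and sargability points, with hypothetical table and column names:

-- Index the foreign key column that joins and filters actually use.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID);

-- Non-sargable: wrapping the column in a function prevents an index seek.
SELECT OrderID, Amount
FROM dbo.Orders
WHERE YEAR(OrderDate) = 2008;

-- Sargable rewrite of the same filter: a range the index can seek on.
SELECT OrderID, Amount
FROM dbo.Orders
WHERE OrderDate >= '20080101' AND OrderDate < '20090101';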
Partitioning is definitely something to look into. I had a database with two sharded tables, each containing around 30-35 million records. I have since merged these into one large table and assigned some good indexes. So far I've not had to partition this table as it's working a treat, but I'm keeping partitioning in mind. One thing I have noticed, compared to when the data was sharded, is the data import. It is now slower, but I can live with that as the import tool can be re-written ;o)
One table and use table partitioning.
I think the advice to use NOLOCK is unjustified based on the information given. NOLOCK means you will get inaccurate and unreliable results from your queries (dirty and phantom reads). Before using NOLOCK you need to be sure that's not going to be a problem for your customers.
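If the underlying problem is readers blocking writers, row versioning is usually a safer option than NOLOCK; a minimal example, with a hypothetical database name:

-- Readers see the last committed version of each row instead of taking
-- shared locks, without the dirty/phantom reads that NOLOCK allows.
ALTER DATABASE BenchmarkDB SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;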
Is this a single flat table (no particular model)? Typically in data warehouses, you either have a normalized data model (third normal form at least - usually in an entity-relationship-model) or you have dimensional data (Kimball method or variations - usually fact tables with associated dimension tables in a set of stars).
In both cases, indexes play a large part, and partitioning can also help queries over very large data sets perform (although partitioning is usually less about performance and more about maintenance: being able to add and drop partitions quickly) - but it really depends on the order of aggregation and the types of queries.
One table, then worry about performance. That is, assuming you are collecting the exact same information for each customer. That way, if you have to add/remove/modify a column, you are only doing it in one place.
If you're on MS SQL server and you want to keep the single table, table partitioning could be one solution.
Keep one table - 20M rows isn't huge, customers aren't exactly the kind of table that you can easily 'archive off', and the aggravation of searching multiple tables to find a customer isn't worth the effort (SQL is likely to be much more efficient at B-tree searching than your own invention).
You will need to look into the performance and locking issues, however - those are what will prevent your db from scaling.
You can also create supplemental tables that hold already calculated details on historical information if there are common queries.
There has been a lot of talk recently about NoSQL.
The #1 reason I hear people use NoSQL is that they de-normalize their DBMS data so much to increase performance that they end up with just one table holding all of their data.
With Materialized Views however, you can keep your data normalized, yet have it stored as a single table view for the same reasons why you'd use NoSQL.
As such, why would someone use NoSQL over Materialized Views?
One reason is that materialized views will perform poorly in an OLTP situation where there is a heavy amount of INSERTs vs. SELECTs.
Every time data is inserted, the materialized view's indexes must be updated, which slows down not only inserts but selects as well. The primary reason for using NoSQL is performance: by being basically a hash-key store, you get insanely fast reads/writes, at the cost of less control over constraints, which typically must be enforced at the application layer.
So, while materialized views may help reads, they do nothing to speed up writes.
NoSQL is not about getting better performance out of your SQL database. It is about considering options other than the default SQL storage when there is no particular reason for the data to be in SQL at all.
If you have an established SQL Database with a well designed schema and your only new requirement is improved performance, adding indexes and views is definitely the right approach.
If you need to save a user profile object that you know will only ever need to be accessed by its key, SQL may not be the best option - you gain nothing from a system with all sorts of query functionality you won't use, but being able to leave out the ORM layer while improving the performance of the queries you will be using is quite valuable.
Another reason is the dynamic nature of NoSQL. Each view you create needs to be created beforehand, with a "guess" as to how an application might use it.
With NoSQL you can change as the data changes; dynamically varying your data to suit the application.
I'm familiar with SQL Server indexed views (or Oracle materialized views); we use them in our OLAP applications. They have the really cool feature of being able to usurp an execution plan and remap it to the indexed view without having to change existing code.
E.g., let's say I had a SPROC that was a really expensive join:
SELECT [SOME COLUMNS]
FROM Table1 INNER JOIN Table2 [DETAILS]
INNER JOIN Table3 [BUNCH MORE JOINS]
...
If I authored an indexed view that held a similar result set then the Query Optimizer will very likely send the SPROC to my indexed view as opposed to the base tables and I get a big performance increase.
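Such an indexed view might be authored roughly like this (table and column names are made up; note that automatic matching of base-table queries to an indexed view is an Enterprise Edition feature, and other editions need the WITH (NOEXPAND) hint):

-- SCHEMABINDING is required for an indexed view, as is COUNT_BIG(*) when grouping.
CREATE VIEW dbo.vOrderTotals
WITH SCHEMABINDING
AS
SELECT  o.CustomerID,
        COUNT_BIG(*)    AS OrderCount,
        SUM(d.Quantity) AS TotalQuantity   -- Quantity assumed NOT NULL here
FROM    dbo.Orders AS o
INNER JOIN dbo.OrderDetails AS d ON d.OrderID = o.OrderID
GROUP BY o.CustomerID;
GO

-- Materialize the view.
CREATE UNIQUE CLUSTERED INDEX IX_vOrderTotals ON dbo.vOrderTotals (CustomerID);
GO

-- On non-Enterprise editions, reference the view directly with NOEXPAND.
SELECT CustomerID, TotalQuantity
FROM dbo.vOrderTotals WITH (NOEXPAND);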
Now say I wanted to use indexed views in an OLTP!? I mean most OLTPs (like this site) are relatively read-heavy; if they have expensive joins then we could speed them up a ton AND potentially reduce locking contention (http://www.codinghorror.com/blog/archives/001166.html). Even better, you wouldn't have to change any code, just author the indexed view.
But this also means the database gets bigger, since we need to keep a copy of this data in the indexed view...
Has anyone ever used indexed views to solve contention or speed issues in an OLTP? How come I've never seen this in use?
Materialized views can be useful for reporting against OLTP, especially if large numbers of rows are aggregated to get the results. The space requirements are completely dependent on how much data you are saving. Think of it as a cache.
The tricky balance is between how recent the data needs to be for the reports, and how much of a hit you can take on OLTP performance. If somewhat stale data is OK, you may be able to schedule the updates to the views during a time when system activity is low.
The one time I could not, and needed very current data, I ended up with some custom development. Each update to the base table fired a trigger which wrote a record to a transaction table. The view looked at a cached aggregate plus the delta stored in the transaction table (sketched below). As system resources allowed, the transactions were applied to the aggregate table as delta transactions. This gave me up-to-the-second data, good performance on reporting (the only aggregation happening was over recent transactions) and fairly little load on the database (only doubling the size of every write, not re-calculating a huge aggregate every time).
Unfortunately, it was complex to maintain and did not use simple built-in tools. If you can wait on your reporting data, it is often best to use the built-in materialized views and defer the refresh.
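A hedged sketch of that aggregate-plus-delta pattern (all object names are made up):

-- Cached aggregate, refreshed in the background as resources allow.
CREATE TABLE dbo.SalesAggregate
(
    ProductID int   NOT NULL PRIMARY KEY,
    Total     money NOT NULL
);

-- Deltas captured on every write to the base table.
CREATE TABLE dbo.SalesDelta
(
    ProductID int   NOT NULL,
    Amount    money NOT NULL
);
GO

CREATE TRIGGER trg_Sales_CaptureDelta
ON dbo.Sales
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT dbo.SalesDelta (ProductID, Amount)
    SELECT ProductID, Amount FROM inserted;
END;
GO

-- Reports read the cached aggregate plus the not-yet-applied deltas.
CREATE VIEW dbo.vSalesCurrent
AS
SELECT ProductID, SUM(Total) AS Total
FROM (SELECT ProductID, Total FROM dbo.SalesAggregate
      UNION ALL
      SELECT ProductID, Amount FROM dbo.SalesDelta) AS x
GROUP BY ProductID;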
We use materialized views to speed things up where I work, most often for reports against the OLTP system. Many of our reports run from a data warehouse, but since we refresh the warehouse overnight, up-to-the-moment data has to come from the OLTP tables.