I have a log table that will receive inserts from several web apps. I won't be doing any searching/sorting/querying of this data. I will be pulling the data out to another database to run reports. The initial table is strictly for RECEIVING the log messages.
Is there a way to ensure that the web applications don't have to wait on these inserts? For example I know that adding a lot of indexes would slow inserts, so I won't. What else is there? Should I not add a primary key? (Each night the table will be pumped to a reports DB which will have a lot of keys/indexes)
If performance is key, you may not want to write this data to a database at all. I think most databases will process a write as a round trip, and it sounds like you don't want to wait for the returned confirmation message. Check whether, as S. Lott suggests, it might be faster to just append a row to a simple text file somewhere.
If the database write is faster (or necessary, for security or other business/operational reasons), I would put no indexes on the table--and that includes a primary key. If it won't be used for reads or updates, and if you don't need relational integrity, then you just don't need a PK on this table.
To recommend the obvious: as part of the nightly reports run, clear out the contents of the table. Also, never reset the database file sizes (ye olde shrink database command); after a week or so of regular use, the database files should be as big as they'll ever need to be and you won't have to worry about the file growth performance hit.
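To make that concrete, here is a minimal sketch of the kind of bare table I mean; the table and column names are just placeholders, not anything from your schema:

    CREATE TABLE dbo.WebAppLog
    (
        LoggedAt datetime      NOT NULL DEFAULT GETDATE(),
        AppName  varchar(50)   NOT NULL,
        Message  nvarchar(max) NULL
    );
    -- no PK, no indexes: inserts just append to the heap

    -- nightly, after the rows have been pumped to the reports DB:
    TRUNCATE TABLE dbo.WebAppLog;   -- minimally logged, far cheaper than DELETE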
Here are a few ideas; note that the last ones only matter if you have extremely high volumes:
Do not have a primary key; it is enforced via an index
Do not have any other index
Create the database large enough that you do not have any database growth
Place the database on its own disk to avoid contention
Avoid software RAID
Place the database on a mirrored disk; this saves the parity calculation done by RAID 5
No keys,
no constraints,
no validation,
no triggers,
no calculated columns
If you can, have the services insert async, so as to not wait for the results (if that is acceptable).
You can even try to insert into a "daily" table, which will hold fewer records, and then move that data across before the batch runs at night (see the sketch after this list).
But mostly, on the table itself: NO KEYS/validation (PK and unique indexes will kill you)
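Rough sketch of the daily-table idea (all names here are placeholders):

    -- during the day the apps insert into dbo.Log_Daily, a bare heap like the main log table;
    -- before the nightly batch, move the rows across and empty it
    BEGIN TRAN;
        INSERT INTO dbo.Log_Main (LoggedAt, AppName, Message)
        SELECT LoggedAt, AppName, Message
        FROM dbo.Log_Daily WITH (TABLOCKX, HOLDLOCK);  -- block writers so no row slips in between

        TRUNCATE TABLE dbo.Log_Daily;
    COMMIT;
    -- (swapping two identical tables with sp_rename is another way to do the same thing)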
Related
We have a database where logs about customer activities are saved every second.
On the other hand, the dashboard site selects recent data from this database every second. So there are hundreds of insert and select queries executed every second.
Should I use indexes on the database to reduce select statement execution time?
I would avoid indexes.
If you are logging in real time, every user action that gets logged requires an insert statement. Indexes will slow down the inserts substantially, from O(1) to O(log n), where n is the number of rows in the table. If your application is synchronous and single threaded, this will be bad.
I would ask myself these questions:
Do I need real time dashboards?
If so, can I make logging occur on a separate thread or separate server so the slow insert speed isn't an issue?
Without indexes your selects are going to become slower and slower over time as more data needs to be scanned. This is going to be unacceptable. Eventually you'll be forced to index anyway.
Add indexes.
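If you do end up indexing, one narrow index on the timestamp is probably all the dashboard needs. A sketch in SQL Server syntax, with made-up names:

    CREATE NONCLUSTERED INDEX IX_ActivityLog_LoggedAt
        ON dbo.ActivityLog (LoggedAt)
        INCLUDE (CustomerId, Activity);   -- covers the dashboard query

    -- the every-second dashboard query can then seek instead of scanning the table:
    SELECT CustomerId, Activity, LoggedAt
    FROM dbo.ActivityLog
    WHERE LoggedAt >= DATEADD(SECOND, -10, GETDATE());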
Probably not. Whether every table should have a clustered index has become something of a religious debate, and I used to be firmly in the school of thought that every table should have one. But after some very difficult issues I have discovered that, while very useful for selects, a clustered index can (will?) slow down insertions so much that it hurts the application.
You probably want to test with and without a clustered index to see if this happens. I have also noticed that at a certain point (between 8 and 12 indexes on a table) insertions can really slow down, so even if you decide to go with indexes (which can be a fine idea) too many can slow down your app. Test and monitor your app carefully.
You can get the best of both worlds (no indexes on the log table so it's fast, plus a table with lots of indexes for reporting) by using CDC (change data capture) on the log table. This is a low-impact way of copying all changes to another table, which can have indexes up the wazoo for fast selects, with very little impact on the log table.
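Setting it up is just two system procedure calls (SQL Server 2008+ Enterprise; the table name is a placeholder):

    EXEC sys.sp_cdc_enable_db;

    EXEC sys.sp_cdc_enable_table
        @source_schema = N'dbo',
        @source_name   = N'WebAppLog',   -- your log table
        @role_name     = NULL;           -- no gating role

    -- changes are captured asynchronously from the transaction log into cdc.dbo_WebAppLog_CT,
    -- which you can index heavily for reporting without touching the log table itself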
I have a production Oracle database which contains a large amount of data backed up in tables which were made during previous work. The tables are independent of each other and the rest of the database.
I want to remove these backups, preferably in one shot. I know that in more recent versions of Oracle, dropped tables don't actually get dropped until purged from the recycle bin. I will take care of that.
Is it safe to DROP them all at once? Is there a performance penalty during the DROP operation? Is there a chance to run out of resources during the operation?
What is the safest way to do this?
It's probably safe to drop them all at once.
In general, dropping a table is very quick regardless of the size of the table. DROP doesn't really change any data, Oracle just changes the data dictionary to mark the space as available. I've dropped lots of tables with hundreds of gigabytes or more of data and never had a problem. (Your datafiles may not be sized properly anymore, but that's another issue.)
Other than dependencies and locks, the only time I've ever seen a drop take a (relatively) long time was because of delayed block cleanout. Basically, if you update, delete, or insert (without append) a lot of data, Oracle may write some transaction data to the blocks. The reason for this is to make COMMIT instantaneous, but it means that the next query that even reads from the table may have to clean up the old transaction entries.
But your chances of running into that problem are small. If you have very large tables they were probably created with direct path inserts, or someone else has already queried the table and cleaned out the blocks. Even in the worst case, if your system was good enough to write the data it will probably be good enough to get rid of it (although you could run into ORA-01555 snapshot too old if the transactions are too old, or out of archive log space from the extra redo from delayed block cleanout, etc.).
If the tables have no dependents and are not in use, it's safe to drop them all at once. If you are worried about the new recycle bin feature, you can do "drop table table_name purge" and it will bypass the recycle bin and be purged immediately, without you having to purge it from the recycle bin later.
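If the backup tables follow a naming convention you can generate the statements from the data dictionary; the BACKUP prefix here is just an example:

    SELECT 'DROP TABLE ' || table_name || ' PURGE;' AS stmt
    FROM   user_tables
    WHERE  table_name LIKE 'BACKUP%';

    -- review the generated list, then run it; PURGE skips the recycle bin entirely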
I am developing a Job application which executes multiple parallel jobs. Every job pulls data from a third-party source and processes it. Each job has at least 100,000 records. So I am creating a new table for each job (like Job123, where 123 is the job ID) and processing it. When a job starts, it clears the old records, gets new records, and processes them. Now the problem is I have 1000 jobs and the DB has 1000 tables. The DB size has increased drastically due to the large number of tables.
My question is whether it is OK to create a new table for each job, or to have only one table called Job with a jobId column, then enter the data there and process it. The only problem is that every job will have 100,000+ records. If we have only one table, will DB performance be affected?
Please let me know which approach is better.
Don't create all those tables! Even though it might work, there's a huge performance hit.
Having a big table is fine, that's what databases are for. But...I suspect that you don't need 100 million persistent records, do you? It looks like you only process one Job at a time, but it's unclear.
Edit
The database will grow to the largest size needed, but the space from deleted records is reused. If you add 100k records and delete them, over and over, the database won't keep growing. But even after the delete it will take up as much space as 100k records.
I recommend a single large table for all jobs. There should be one table for each kind of thing, not one table for each thing.
If you make the Job ID the first field in the clustered index, SQL Server will use a b-tree index to determine the physical order of data in the table. In principle, the data will automatically be physically grouped by Job ID due to the physical sort order. This may not stay strictly true forever due to fragmentation, but that would affect a multiple table design as well.
The performance impact of making the Job ID the first key field of a large table should be negligible for single-job operations as opposed to having a separate table for each job.
Also, a single large table will generally be more space efficient than multiple tables for the same amount of total data. This will improve performance by reducing pressure on the cache.
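A minimal sketch of that single-table layout (all names are placeholders):

    CREATE TABLE dbo.JobRecord
    (
        JobId    int           NOT NULL,
        RecordId bigint        IDENTITY(1,1) NOT NULL,
        Payload  nvarchar(max) NULL,
        CONSTRAINT PK_JobRecord PRIMARY KEY CLUSTERED (JobId, RecordId)
    );

    -- clearing one job's old data before a rerun touches only that job's range:
    DELETE FROM dbo.JobRecord WHERE JobId = 123;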
I am running an archive script which deletes rows from a large (~50m record DB) based on the date they were entered. The date field is the clustered index on the table, and thus what I'm applying my conditional statement to.
I am running this delete in a while loop, trying anything from 1,000 to 100,000 records in a batch. Regardless of batch size, it is surprisingly slow: something like 10,000 records deleted per minute. Looking at the execution plan, a lot of time is spent on "Index Delete"s. There are about 15 fields in the table, and roughly 10 of them have some sort of index on them. Is there any way to get around this issue? I'm not even sure why it takes so long to do each index delete; can someone shed some light on exactly what's happening here? This is a sample of my execution plan:
Execution plan screenshot: http://img94.imageshack.us/img94/1006/indexdelete.png (the Sequence points to the Delete command)
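For reference, the delete loop looks roughly like this (table and column names changed, and the cutoff is just an example):

    DECLARE @batch int, @cutoff datetime;
    SET @batch  = 10000;
    SET @cutoff = DATEADD(MONTH, -6, GETDATE());

    WHILE 1 = 1
    BEGIN
        DELETE TOP (@batch)
        FROM dbo.EventLog
        WHERE EntryDate < @cutoff;   -- EntryDate is the clustered index key

        IF @@ROWCOUNT = 0 BREAK;
    END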
This database is live and is getting inserted into often, which is why I'm hesitant to use the copy-and-truncate method of trimming the size. Are there any other options I'm missing here?
Deleting 10k records from a clustered index plus 5 non-clustered ones should definitely not take 1 minute. It sounds like you have a really, really slow IO subsystem. What are the values for:
Avg. Disk sec/Write
Avg. Disk sec/Read
Avg. Disk Write Queue Length
Avg. Disk Read Queue Length
On each drive involved in the operation (including the log ones!). If you placed indexes in separate filegroups and allocated each filegroup to its own LUN or its own disk, then you can identify which indexes are the most problematic. Also, the log flush may be a major bottleneck. SQL Server doesn't have much control here; it is all in your own hands how to speed things up. That time is not spent in CPU cycles; it is spent waiting for IO to complete, and you need an IO subsystem calibrated for the load you demand.
To reduce the IO load, you should look into making the indexes narrower. Primarily, make sure the clustered index is the narrowest possible one that works. Then make sure the nonclustered indexes don't include spurious, unused large columns (I've seen that...). A major gain may be had by enabling page compression. And finally, inspect the index usage stats in sys.dm_db_index_usage_stats and see if any index is a good candidate for the axe.
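For that last check, something along these lines (the table name is a placeholder) shows which indexes are maintained on every write but never read:

    SELECT i.name,
           s.user_seeks, s.user_scans, s.user_lookups,   -- read activity
           s.user_updates                                 -- maintenance cost
    FROM sys.indexes AS i
    LEFT JOIN sys.dm_db_index_usage_stats AS s
           ON  s.object_id   = i.object_id
           AND s.index_id    = i.index_id
           AND s.database_id = DB_ID()
    WHERE i.object_id = OBJECT_ID('dbo.EventLog')
    ORDER BY s.user_updates DESC;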
If you can't reduce the IO load much, you should try to split it. Add filegroups to the database, move large indexes on separate filegroups, place the filegroups on separate IO paths (distinct spindles).
For future regular delete operations, the best alternative is to use partition switching, have all indexes aligned with the clustered index partitioning and when the time is due, just drop the last partition for a lightning fast deletion.
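The switch itself is a metadata-only operation, so it is nearly instant regardless of row counts. Roughly, with placeholder names:

    -- the staging table must match the source structure and sit on the same filegroup
    ALTER TABLE dbo.EventLog
        SWITCH PARTITION 1 TO dbo.EventLog_Staging;   -- move the oldest slice out, metadata only

    TRUNCATE TABLE dbo.EventLog_Staging;              -- discard the rows cheaply

    ALTER PARTITION FUNCTION pf_EventLogByDate()
        MERGE RANGE ('20100101');                     -- retire the now-empty boundary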
Assume for each record in the table there are 5 index records.
Now each delete is in essence 5 operations.
Add to that, you have a clustered index. Notice how the clustered index delete time is huge, (10x) longer than the other indexes? This is because your data is being reorganized with every record deleted.
I would suggest dropping at least that index, doing the mass delete, then reapplying it. Index operations on delete and insert are inherently costly. A single rebuild is likely a lot faster.
I second the suggestion that #NickLarsen made in a comment. Find out if you have unused indexes and drop them. This could reduce the overhead of those index-deletes, which might be enough of an improvement to make the operation more timely.
Another, more radical, strategy is to drop all the indexes, perform your deletes, and then quickly recreate the indexes for the now-smaller data set. This doesn't necessarily interrupt service, but it could make queries a lot slower in the meantime. I am not a Microsoft SQL Server expert, though, so take my advice on this strategy with a grain of salt.
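In outline, with placeholder index and table names:

    DROP INDEX IX_EventLog_CustomerId ON dbo.EventLog;
    DROP INDEX IX_EventLog_Status     ON dbo.EventLog;
    -- ...and the rest of the nonclustered indexes

    DELETE FROM dbo.EventLog
    WHERE EntryDate < '20100101';    -- the archive delete, now maintaining far fewer indexes

    CREATE NONCLUSTERED INDEX IX_EventLog_CustomerId ON dbo.EventLog (CustomerId);
    CREATE NONCLUSTERED INDEX IX_EventLog_Status     ON dbo.EventLog (Status);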
More of a workaround, but can you add an IsDeleted flag to the table and update that to 1 rather than deleting the rows? You will need to modify your SELECTs and UPDATEs to use this flag.
Then you can schedule deletion or archiving of these records for off-hours.
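Roughly, with placeholder names:

    ALTER TABLE dbo.EventLog
        ADD IsDeleted bit NOT NULL CONSTRAINT DF_EventLog_IsDeleted DEFAULT (0);
    -- note: on SQL Server 2005/2008 adding a NOT NULL column with a default rewrites every row

    -- online traffic flags rows instead of deleting them:
    UPDATE dbo.EventLog SET IsDeleted = 1 WHERE EventLogId = 12345;

    -- the off-hours job does the physical delete:
    DELETE FROM dbo.EventLog WHERE IsDeleted = 1;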
It would take some work to implement given that this is in production, but if you are on SQL Server 2005/2008 you should investigate converting the table to a partitioned one; the removal of old data can then be achieved extremely quickly. Partitioning is designed for this 'rolling window' type of effect and prevents large-scale deletes from tying up a table/process.
Unfortunately, with the table in production, migrating it across to this technique will take some T-SQL coding, knowledge, and a weekend to upgrade/migrate it. Once in place, though, any existing selects and inserts will work against it seamlessly; the partition maintenance and addition/removal is where you need the T-SQL to control the process.
A DBA (with only 2 years of Google for training) has created a massive data management table (108 columns and growing) containing all necessary attributes for any data flow in the system. We'll call this table BFT for short.
Of these columns:
10 are for meta-data references.
15 are for data source and temporal tracking
1 instance of new/curr columns for textual data
10 instances of new/current/delta/ratio/range columns for multi-value numeric updates (totaling 50 columns)
Multi valued numeric updates usually only need 2-5 of the update groups.
Batches of 15K-1500K records are loaded into the BFT and processed by stored procs with logic to validate those records and shuffle them off to permanent storage in about 30 other tables.
In most of the record loads, 50-70 of the columns are empty throughout the entire process.
I am no database expert, but this model and process seem to smell a little; I don't know enough to say why, though, and I don't want to complain without being able to offer an alternative.
Given this very small insight into the data processing model, does anyone have thoughts or suggestions? Can the database (SQL Server) be trusted to handle records with mostly empty columns efficiently, or does processing in this manner waste lots of cycles/memory, etc.?
Sounds like he reinvented BizTalk.
I typically have multiple staging tables corresponding to the input loads. These may or may not correspond to the destination tables, but we don't do what you're talking about. If he doesn't like to have a lot of what are basically temporary work tables, they could be put into their own schema or even a separate database.
As far as the columns which are empty: if they aren't referenced in the particular query which is processing the BFT, it doesn't matter. HOWEVER, it becomes much more crucial that the index chosen is a non-clustered covering index. When your BFT is used and a table scan or clustered index scan is chosen, the unused columns have to be read and ignored or skipped, and this definitely seems to affect processing in my experience. Whereas with a non-clustered index scan or seek, fewer columns are read, and hopefully that doesn't include (m)any of the unused columns.
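For example (the column names are invented), a covering index for one of the processing procs might look like:

    CREATE NONCLUSTERED INDEX IX_BFT_Validate
        ON dbo.BFT (LoadBatchId, SourceSystem)            -- keys the proc filters on
        INCLUDE (New_Value1, Curr_Value1, Delta_Value1);  -- columns it actually reads

    -- a proc touching only these columns can seek/scan this index and never read
    -- the other ~100 columns in each row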
Normalization is the keyword here. If you have so many NULL values, chances are high that you're wasting a lot of space. Normalizing the table should also make data integrity in this table easier to enforce.
One thing that might make things a little more flexible (other than normalizing) could be to create one or more views or table functions to present the data. Particularly if the table is outside your control, these would enable you to filter the spurious crap out and grab only what you need from the table.
However, if you're going to be one of the people who will be working with (and frowning every time you have to crack open) that massive table, you might want to trump the DBA's "design" and normalize that beast, and maybe give the DBA the task of creating some views and/or table functions to help you out.
I currently work with a similar but not so huge table which has been around on our system for years and has had new fields and indices and constraints rather hastily tacked on Frankenstein-style. Unfortunately some other workgroups rely on the structure as gospel, so we've created such views and functions to enable us to "shape" the data the way we need it.
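For example, a view like this (all column names invented) lets a workgroup see only the slice of the BFT it cares about:

    CREATE VIEW dbo.vw_BFT_NumericUpdates
    AS
    SELECT  SourceSystem,
            LoadDate,
            New_Value1,
            Curr_Value1,
            Delta_Value1
    FROM    dbo.BFT
    WHERE   New_Value1 IS NOT NULL;   -- skip rows where this group's columns are empty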