I am about to import around 500 million rows of telemetry data into SQL Server 2008 R2, and I want to make sure I get the indexing/schema right to allow for fast searches of the data. I've been working with databases for a while but nothing on this scale. I'm hoping I can describe my data and the application, and someone can advise me on a good strategy for indexing it.
The data is instrument readings from a data collection system, and has 3 columns: SentTime (datetime2(3)), Topic (nvarchar(255)), and Value (float). The SentTime precision is to the millisecond, and is NOT unique. There are around 400 distinct Topics (e.g. "Voltage1", "PumpPressure", etc.) in the data, and my plan was to break the data out into about 30 tables, each with 10-15 columns, grouped into logical groupings like Voltages, Pressures, Temperatures, etc., each with its own SentTime column.
A typical search will be to retrieve various Values (could be across several tables) for a given time range. Another possible search will be to retrieve all times/values for a given value range and topic. The user interface will show coarse graphs of the data, to allow the user to find the interesting data and export it to Excel or CSV.
My main question is, if I add an index based on SentTime alone, will that speed searches for a given time range? Would it be better to make a composite index on time and value, since the time is not unique? Any point in adding a unique primary key? Is there any other overall strategy or schema I should be looking at for this application?
Another note, I will not be inserting any data once the import is done, so no need to worry about the insertion overhead of indexes.
It seems that you'll be doing a lot of range searches over the SentTime column. In that case, I would create a clustered index on SentTime; with a nonclustered index there would be the overhead of key lookups to retrieve the additional columns. It doesn't matter that SentTime is not unique; the engine will add a hidden uniquifier to duplicate key values.
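For example, a minimal sketch, assuming one of the split-out tables is called dbo.Voltages with its own SentTime column (the table and column names are only illustrative):

    -- Cluster the table on SentTime so a time-range predicate becomes a single range seek/scan.
    CREATE CLUSTERED INDEX CIX_Voltages_SentTime
        ON dbo.Voltages (SentTime);

    -- A typical time-range query can then seek directly into the clustered index.
    SELECT SentTime, Voltage1, Voltage2
    FROM dbo.Voltages
    WHERE SentTime >= '2013-01-01' AND SentTime < '2013-01-02';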
Does the Topic column have to be nvarchar? Why not varchar?
My relational self will punish me for this, but it seems that you don't need an additional PK. The data is read-only, right?
One more thought: check out the sparse columns feature; it seems like a perfect fit for your scenario. A table can have a large number of sparse columns (up to 30,000 in SQL Server 2008), they can be grouped and manipulated as XML via a column set, and the main point is that NULLs are almost free storage-wise.
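If you go that route, a rough sketch could look like the following; the table name and topic columns are assumptions, not the poster's actual schema:

    -- One wide table keyed by time, with one sparse column per topic.
    CREATE TABLE dbo.Telemetry
    (
        SentTime     datetime2(3) NOT NULL,
        Voltage1     float SPARSE NULL,
        PumpPressure float SPARSE NULL,
        -- ...one sparse column per topic, up to ~400...
        AllTopics    xml COLUMN_SET FOR ALL_SPARSE_COLUMNS
    );

    CREATE CLUSTERED INDEX CIX_Telemetry_SentTime
        ON dbo.Telemetry (SentTime);

NULLs in the sparse columns cost essentially no space, and the AllTopics column set returns the non-NULL topics as XML.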
I have a SQL database where, among other things, I have a prices table with one price per product per store.
There are 50 stores and over 500,000 products, so this table will easily have 25 to 30 million records.
This table is fed with price updates overnight every day, and sees heavy read activity during the day. Reads are made with read-only intent.
All queries contain storeid as part of identifying the record to update or read.
I am not yet able to determine how this will behave, since I am still waiting on the external supply of prices, but I am expecting performance issues, at least on read operations, even though indexes are in place for now.
My question is whether I should consider partitioning the table by store, since it is always part of the queries. But then I have indexes where storeid is not the only column in the index.
Based on this scenario, would you recommend partitioning? The alternative I see is having 50 tables, one per store, but that seems painful and something I would rather avoid if possible.
"...whether I should consider partitioning the table by store, since it is always part of the queries"
Yes. That sounds promising.
"But then I have indexes where storeid is not the only column in the index."
That's fine. So long as the partitioning column is one of the clustered index columns, you can partition by it. In fact with partitioning, you can get partition elimination for a trailing column of the clustered index, then a clustered index seek within the target partition.
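A rough sketch of what that could look like, assuming a dbo.Prices table with StoreId and ProductId columns (the names and boundary values are made up; here the 50 stores are grouped ten per partition):

    -- Partition function and scheme keyed on the store id (illustrative boundaries only).
    CREATE PARTITION FUNCTION pf_StoreId (int)
        AS RANGE LEFT FOR VALUES (10, 20, 30, 40, 50);

    CREATE PARTITION SCHEME ps_StoreId
        AS PARTITION pf_StoreId ALL TO ([PRIMARY]);

    -- The partitioning column is part of the clustered index key, so queries that
    -- filter on StoreId get partition elimination plus a seek within the partition.
    CREATE CLUSTERED INDEX CIX_Prices
        ON dbo.Prices (StoreId, ProductId)
        ON ps_StoreId (StoreId);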
Hi all and thank you for your replies.
I was able to gather significant information in a contained test environment, where I confirmed that I can achieve excellent performance with the appropriate indexes alone.
So for now we will keep it "as is" and have the partition strategy on hand just in case.
Thanks again, nice tips, guys.
I have a large amount of data, around 5M rows, stored in a very flat table with 12 columns. The table contains aggregated data and has no relationships with other tables. I want to run dynamic queries on this data for reporting purposes. The table contains fields like District, City, Year, Category, SubCategory, SaleAmount, etc.
I want to view reports such as sales between the years 2010 and 2013.
Sales of each product in various years, compared against each other.
Sales by specific salesmen in a year.
Sales by category, Subcategory etc.
I am using SQL Server 2008, but I am not a DBA, so I do not know things like what type of indexes to create or which columns to index in order to make my queries fast.
If the amount of data were small I would not have bothered with all these questions and just proceeded, but knowing which columns to index and what type of indexes to create is vital in this case.
Kindly let me know the best way to ensure fast execution of queries.
Will it work if I create a clustered index on all my columns, or will that hurt me?
Keep in mind that this table will not be updated very frequently, maybe on a monthly basis.
Given your very clear and specific requirements, I would suggest you create a non-clustered index for each field and leave it to the optimiser as a first step (i.e. you create 12 indexes). Place only a single field in each index. Don't index (or at least use caution with) any long text-type fields. Also don't index a field such as M/F that has only two values and a 50/50 split. I am assuming you have predicates on each field, but don't bother indexing any fields that are never used for selection purposes.
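As a sketch under those assumptions (the column names come from the question; the table name is invented):

    -- One single-column non-clustered index per field that appears in predicates.
    CREATE NONCLUSTERED INDEX IX_Sales_District    ON dbo.SalesSummary (District);
    CREATE NONCLUSTERED INDEX IX_Sales_City        ON dbo.SalesSummary (City);
    CREATE NONCLUSTERED INDEX IX_Sales_Year        ON dbo.SalesSummary ([Year]);
    CREATE NONCLUSTERED INDEX IX_Sales_Category    ON dbo.SalesSummary (Category);
    CREATE NONCLUSTERED INDEX IX_Sales_SubCategory ON dbo.SalesSummary (SubCategory);
    -- ...and so on for the remaining columns that appear in WHERE clauses.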
If you still have problems after this, use SQL Server's query analysis tools (the graphical execution plans) to see how your queries are being processed.
Multi-column (composite) indexes are sometimes better, but if your queries mostly restrict to a small subset of the table then single-field indexes will be fine.
You might have residual performance issues with queries that use ORDER BY, but let's just leave that as a heads-up at this stage.
My reasoning is based on the following:
You only have 12 columns, so we won't overload anything.
There are only 5M rows, which is quite easy for SQL Server to handle.
The growth in the data is small, so index updates shouldn't be too much of an issue.
The optimiser will love these queries combined with the indexes.
We don't have typical query examples that would justify multi-column indexes, and the question seems to imply highly variable queries.
Typically, databases are designed as below to allow multiple types for an entity:
Entity Name
Type
Additional info
The entity name can be something like an account number, and the type could be something like savings, current, etc., in a bank database, for example.
Mostly, the type will be some kind of string. There could be additional information associated with an entity type.
Normally, queries will be posed like this:
Find account numbers of a particular type.
Find account numbers of type X having a balance greater than 1 million.
To answer these queries, the query analyzer will scan an index if one is associated with the particular column. Otherwise, it will do a full scan of all the rows.
I am thinking about the optimization below.
Why not store a hash or integral value of each column's data in the actual table, such that the ordering property is maintained and comparisons become easy?
It has the advantages below.
1. The table size will be a lot less, because we will be storing small values for each column's data.
2. We can construct a clustered B+-tree index on the hash values of each column to retrieve the rows that match, or are greater or smaller than, some value.
3. The corresponding values can be retrieved easily by keeping the B+-tree index in main memory and looking up the corresponding rows.
4. Infrequent values will never need to be retrieved.
I have more optimizations in mind; I will post them based on the feedback to this question.
I am not sure if this is already implemented in any database; this is just a thought.
Thank you for reading this.
-- Bala
Update:
I am not trying to emulate what the database does. Normally, indexes are created by the database administrator. I am trying to propose a physical schema that has indexes on all the fields in the database, so that the table size is reduced and it is easy to answer a few queries.
Update (Joe's answer):
"How does adding indexes to every field reduce the size of the database? You still have to store all of the true values in addition to the hash; we don't just want to query for existence but want to return the actual data."
In a typical table, all the physical data is stored. But by generating a hash value for each column's data, I am only storing the hash value in the actual table. I agree that it's not reducing the size of the database, but it is reducing the size of the table. It will be useful when you don't need to return all the column values.
"Most RDBMSes answer most queries efficiently now (especially with key indices in place). I'm having a hard time formulating scenarios where your database would be more efficient and save space."
There can be only one clustered index on a table, and all other indexes have to be non-clustered. With my approach I would effectively have a clustered index on all the values in the database, which would improve query performance.
"Putting indexes within the physical data -- that doesn't really make sense. The key to indexes' performance is that each index is stored in sorted order. How do you propose doing that across any possible field if they are only stored once in their physical layout? Ultimately, the actual rows have to be sorted by something (in SQL Server, for example, this is the clustered index)?"
The basic idea is that instead of creating a separate table for each column for efficient access, we are doing it at the physical level.
Now the table will look like this:
Row1 - OrderedHash(Column1), OrderedHash(Column2), OrderedHash(Column3)
Google for "hash index". For example, in SQL Server such an index is created and queried using the CHECKSUM function.
This is mainly useful when you need to index a column which contains long values, e.g. varchars which are on average more than 100 characters or something like that.
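A minimal sketch of that pattern; the table and column names are invented for illustration:

    -- Computed column holding the hash, then an ordinary index on it.
    ALTER TABLE dbo.Documents
        ADD TitleChecksum AS CHECKSUM(Title);

    CREATE NONCLUSTERED INDEX IX_Documents_TitleChecksum
        ON dbo.Documents (TitleChecksum);

    -- Query via the hash, but repeat the original predicate to weed out checksum collisions.
    SELECT DocumentId, Title
    FROM dbo.Documents
    WHERE TitleChecksum = CHECKSUM(N'Some long document title')
      AND Title = N'Some long document title';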
How does adding indexes to every field reduce the size of the database? You still have to store all of the true values in addition to the hash; we don't just want to query for existence but want to return the actual data.
Most RDBMSes answer most queries efficiently now (especially with key indices in place). I'm having a hard time formulating scenarios where your database would be more efficient and save space.
Putting indexes within the physical data -- that doesn't really make sense. The key to indexes' performance is that each index is stored in sorted order. How do you propose doing that across any possible field if they are only stored once in their physical layout? Ultimately, the actual rows have to be sorted by something (in SQL Server, for example, this is the clustered index)?
I don't think your approach is very helpful.
Hash values only help for equality/inequality comparisons, not for less-than/greater-than comparisons, unlike pretty much every database index.
Even for (in)equality, hash functions do not offer a 100% guarantee of giving you the right answer, as hash collisions can happen, so you will still have to fetch and compare the original value - boom, you just lost what you wanted to save.
You can have the rows in a table ordered only one way at a time. So if you have an application where you have to order rows differently in different queries (e.g. query A needs a list of customers ordered by their name, query B needs a list of customers ordered by their sales volume), one of those queries will have to access the table out-of-order.
If you don't want the database to have to work around columns you do not use in a query, then use indexes with extra data columns - if your query is ordered according to that index, and your query only uses columns that are in the index (columns the index is based on plus columns you have explicitly added into the index), the DBMS will not read the original table. See the sketch below.
Etc.
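A sketch of that covering-index idea in SQL Server syntax (the table and column names are made up):

    -- Key columns give the ordering; INCLUDE adds extra columns at the leaf level so the
    -- query below can be answered from the index alone, without touching the base table.
    CREATE NONCLUSTERED INDEX IX_Customers_Name
        ON dbo.Customers (LastName, FirstName)
        INCLUDE (SalesVolume);

    SELECT LastName, FirstName, SalesVolume
    FROM dbo.Customers
    ORDER BY LastName, FirstName;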
The database I'm working with is currently over 100 GiB and promises to grow much larger over the next year or so. I'm trying to design a partitioning scheme that will work with my dataset but thus far have failed miserably. My problem is that queries against this database will typically test the values of multiple columns in this one large table, ending up in result sets that overlap in an unpredictable fashion.
Everyone (the DBAs I'm working with) warns against having tables over a certain size and I've researched and evaluated the solutions I've come across but they all seem to rely on a data characteristic that allows for logical table partitioning. Unfortunately, I do not see a way to achieve that given the structure of my tables.
Here's the structure of our two main tables to put this into perspective.
Table: Case
Columns:
Year
Type
Status
UniqueIdentifier
PrimaryKey
etc.
Table: Case_Participant
Columns:
Case.PrimaryKey
LastName
FirstName
SSN
DLN
OtherUniqueIdentifiers
Note that any of the columns above can be used as query parameters.
Rather than guess, measure. Collect usage statistics (queries run), look at the engine's own statistics like sys.dm_db_index_usage_stats, and then make an informed decision: the partitioning that best balances data size and gives the best affinity for the most frequently run queries will be a good candidate. Of course, you'll have to compromise.
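For example, a quick (and rough) way to see which indexes are actually being used in the current database:

    -- Seeks/scans/lookups vs. updates per index, most-read indexes first.
    SELECT OBJECT_NAME(s.object_id) AS table_name,
           i.name                   AS index_name,
           s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
    FROM sys.dm_db_index_usage_stats AS s
    JOIN sys.indexes AS i
        ON i.object_id = s.object_id AND i.index_id = s.index_id
    WHERE s.database_id = DB_ID()
    ORDER BY s.user_seeks + s.user_scans + s.user_lookups DESC;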
Also don't forget that partitioning is per index (the 'table' itself is really just its clustered index or heap), not per table, so the question is not what to partition on, but which indexes to partition (or not) and what partitioning function to use. Your clustered indexes on the two tables are the most likely candidates, obviously (there is not much sense in partitioning just a non-clustered index and not the clustered one), so unless you're considering a redesign of your clustered keys, the question is really what partitioning function to choose for your clustered indexes.
If I had to venture a guess, I'd say that for any data that accumulates over time (like 'cases' with a 'year'), the most natural partitioning scheme is the sliding window.
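A sketch of a year-based sliding window on the Case table (the boundary years and index key are illustrative only):

    -- One partition per year; new boundaries get SPLIT in and old ones switched/merged out over time.
    CREATE PARTITION FUNCTION pf_CaseYear (int)
        AS RANGE RIGHT FOR VALUES (2009, 2010, 2011, 2012);

    CREATE PARTITION SCHEME ps_CaseYear
        AS PARTITION pf_CaseYear ALL TO ([PRIMARY]);

    -- Recreate the clustered index on the partition scheme, with Year in the key.
    CREATE CLUSTERED INDEX CIX_Case
        ON dbo.[Case] ([Year], [PrimaryKey])
        ON ps_CaseYear ([Year]);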
If you have no other choice, you can partition by the key modulo the number of partition tables.
Let's say that you want to partition into 10 tables.
You will define tables:
Case00
Case01
...
Case09
And partition your data by UniqueIdentifier or PrimaryKey modulo 10, placing each record in the corresponding table (depending on your UniqueIdentifier, you might need to start allocating ids manually).
When performing a query, you will need to run the same query on all the tables and use UNION to merge the result sets into a single result.
It's not as good as partitioning the tables based on some logical separation that corresponds to the expected queries, but it's better than hitting the size limit of a table.
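A sketch of that manual scheme, assuming the tables above and an invented dbo.StagingCase source table:

    -- At load time, route each row by PrimaryKey modulo 10, e.g.:
    --   INSERT INTO dbo.Case03 SELECT * FROM dbo.StagingCase WHERE PrimaryKey % 10 = 3;
    -- A view then stitches the pieces back together for querying:
    CREATE VIEW dbo.AllCases AS
        SELECT * FROM dbo.Case00
        UNION ALL SELECT * FROM dbo.Case01
        -- ...Case02 through Case08...
        UNION ALL SELECT * FROM dbo.Case09;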
Another possible thing to look at (before partitioning) is your model.
Is your database normalized? Are there further steps that could improve performance through different choices in normalization, de-normalization, or partial normalization? Are there options to transform the data into a Kimball-style dimensional star model, which is optimal for reporting/querying?
If you aren't going to drop partitions of the table (sliding window, as mentioned) or treat different partitions differently (you say any columns can be used in the query), I'm not sure what you are trying to get out of the partitioning that you won't already get out of your indexing strategy.
I'm not aware of any table limits on rows. AFAIK, the number of rows is limited only by available storage.
When working with tables in Oracle, how do you know when you are setting up a good index versus a bad index?
This depends on what you mean by 'good' and 'bad'. Basically you need to realise that every index you add will increase performance on any search by that column (so adding an index to the 'lastname' column of a person table will increase performance on queries that have "where lastname = " in them) but decrease write performance across the whole table.
The reason for this is that when you add or update a row, it must write to both the table itself and every index that row is a member of. So if you have five indexes on a table, each insert must write to six places - five indexes plus the table - and an update may touch up to six places in the worst case.
Index creation is a balancing act then between query speed and write speed. In some cases, such as a datamart that is only loaded with data once a week in an overnight job but queried thousands of times daily, it makes a great deal of sense to overload with indexes and speed the queries up as much as possible. In the case of online transaction processing systems however, you want to try and find a balance between them.
So in short, add indexes to columns that are used a lot in select queries, but try to avoid adding too many, and add the most-used columns first.
After that it's a matter of load testing to see how the performance reacts under production conditions, and a lot of tweaking to find an acceptable balance.
Fields that are diverse, highly specific, or unique make good indexes: dates and timestamps, unique incrementing numbers (commonly used as primary keys), people's names, license plate numbers, and so on.
A counterexample would be gender - there are only two common values, so the index doesn't really help reduce the number of rows that must be scanned.
Full-length descriptive free-form strings make poor indexes, as whoever is performing the query rarely knows the exact value of the string.
Linearly-ordered data (such as timestamps or dates) are commonly used as a clustered index, which forces the rows to be stored in index order, and allows in-order access, greatly speeding range queries (e.g. 'give me all the sales orders between October and December'). In such a case the DB engine can simply seek to the first record specified by the range and start reading sequentially until it hits the last one.
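For instance, a minimal sketch in SQL Server syntax (Oracle gets a similar effect with an index-organized table; the table and column names here are invented):

    -- Rows are stored in OrderDate order, so the range below is one seek plus a sequential read.
    CREATE CLUSTERED INDEX CIX_SalesOrders_OrderDate
        ON dbo.SalesOrders (OrderDate);

    SELECT OrderId, OrderDate, TotalAmount
    FROM dbo.SalesOrders
    WHERE OrderDate >= '2008-10-01' AND OrderDate < '2009-01-01';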
#Infamous Cow -- you must be thinking of primary keys, not indexes.
#Xenph Yan --
Something others have not touched on is choosing what kind of index to create. Some databases don't really give you much of a choice, but some have a large variety of possible indexes. B-trees are the default but not always the best kind of index. Choosing the right structure depends on the kind of usage you expect to have. What kind of queries do you need to support most? Are you in a read-mostly or write-mostly environment? Are your writes dominated by updates or appends? Etc, etc.
A description of the different types of indexes and their pros and cons is available here: http://20bits.com/2008/05/13/interview-questions-database-indexes/.
Here's a great SQL Server article:
http://www.sql-server-performance.com/tips/optimizing_indexes_general_p1.aspx
Although the mechanics won't work on Oracle, the tips are very apropos (minus the thing on clustered indexes, which don't quite work the same way in Oracle).
Some rules of thumb if you are trying to improve a particular query:
For a particular table (where you think Oracle should start), try indexing each of the columns used in the WHERE clause. Put columns with equality predicates first, followed by columns with a range or LIKE.
For example:
WHERE CompanyCode = ? AND Amount BETWEEN 100 AND 200
If columns are very large in size (e.g. you are storing some XML or something) you may be better off leaving them out of the index. This will make the index smaller to scan, assuming you have to go to the table row to satisfy the select list anyway.
Alternatively, if all the values in the SELECT and WHERE clauses are in the index, Oracle will not need to access the table row at all. So sometimes it is a good idea to put the selected values last in the index and avoid a table access altogether.
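Following those rules of thumb for the example above, a sketch might look like this (the table name, the extra Status column, and the literal 'ACME' are assumptions):

    -- Equality column first, then the range column; Status is appended only so that a query
    -- selecting CompanyCode, Amount and Status can be satisfied from the index alone.
    CREATE INDEX ix_orders_company_amount
        ON Orders (CompanyCode, Amount, Status);

    SELECT CompanyCode, Amount, Status
    FROM Orders
    WHERE CompanyCode = 'ACME' AND Amount BETWEEN 100 AND 200;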
You could write a book about the best ways to index - look for author Jonathan Lewis.
A good index is something that you can rely on to be unique for a specific table row.
One commonly used index scheme is the use of numbers which increment by 1 for each row in the table. Every row will end up having a different number index.