If I have a table column with data and create an index on this column, will the index take the same amount of disk space as the column itself?
I'm interested because I'm trying to understand whether B-trees actually keep copies of the column data in their leaf nodes, or somehow point to it.
Sorry if this is a "Will Java replace XML?" kind of question.
UPDATE:
created a table with a single GUID column and no index, added 1M rows - 26 MB
same table with a primary key (clustered index) - 25 MB (even less!), index size - 176 KB
same table with a unique key (nonclustered index) - 26 MB, index size - 27 MB
So only nonclustered indexes take as much space as the data itself.
All measurements were done in SQL Server 2005
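For anyone who wants to repeat the experiment, here is a rough sketch using sp_spaceused (the table and column names are made up):

    CREATE TABLE GuidHeap (val UNIQUEIDENTIFIER NOT NULL);                      -- no index
    CREATE TABLE GuidPk   (val UNIQUEIDENTIFIER NOT NULL PRIMARY KEY);          -- clustered index by default
    CREATE TABLE GuidUq   (val UNIQUEIDENTIFIER NOT NULL UNIQUE NONCLUSTERED);  -- heap + nonclustered index

    -- after loading 1M rows into each table:
    EXEC sp_spaceused 'GuidHeap';   -- reports data and index_size separately
    EXEC sp_spaceused 'GuidPk';
    EXEC sp_spaceused 'GuidUq';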
The B-Tree points to the row in the table, but the B-Tree itself still takes some space on disk.
Some databases have a special kind of table that embeds the main index and the data. In Oracle, it's called an IOT -- index-organized table.
Each row in a regular table can be identified by an internal ID (though the details are database-specific), which the B-Tree uses to identify the row. In Oracle, it's called a rowid and looks like AAAAECAABAAAAgiAAA :)
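For illustration, this is what an Oracle IOT declaration looks like; the table and column names are made up:

    -- Index-organized table: the row data lives inside the primary-key
    -- B-tree itself, so there is no separate table heap.
    CREATE TABLE accounts (
        account_no NUMBER PRIMARY KEY,
        balance    NUMBER
    ) ORGANIZATION INDEX;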
If I have a table column with data and create an index on this column, will the index take the same amount of disk space as the column itself?
In a basic B-Tree, you have the same number of nodes as there are items in the column.
Consider 1, 2, 3, 4:

      2
     / \
    1   3
         \
          4
The exact space can still be a bit different (the index is probably a bit bigger, as it needs to store links between nodes, it may not be perfectly balanced, etc.), and I guess databases can use optimizations to compress parts of the index. But the order of magnitude of the index and the column data should be the same.
I'm almost sure it's quite DB-dependent, but generally – yeah, they take additional space. This happens for two reasons:
1. This way you can utilize the fact that the data in B-tree leaves is sorted;
2. You gain a lookup speed advantage, as you don't have to seek back and forth to fetch the necessary stuff.
PS just checked our MySQL server: for a 20 GB table, indexes take 10 GB of space :)
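If you want to check the same thing on your own MySQL server, a query along these lines should work (the schema name is a placeholder):

    SELECT table_name,
           data_length  / 1024 / 1024 AS data_mb,
           index_length / 1024 / 1024 AS index_mb
    FROM information_schema.TABLES
    WHERE table_schema = 'your_db';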
Judging by this article, it will, in fact, take at least the same amount of space as the data in the column (in PostgreSQL, anyway).
The article also goes on to suggest a strategy to reduce disk and memory usage.
A way to check for yourself would be to use e.g. Derby: create a table with a million rows and a single column, check its size, create an index on the column, and check its size again. If you take the 10-15 minutes to do so, let us know the results. :)
Related
I have a database with 2 TB of data, and I want to reduce it to 500 GB by dropping some rows and removing some useless columns, but I have other ideas for optimization, and I need answers to some questions first.
My database has one .mdf file and 9 other .ndf files, and each file has an initial size of 100 GB.
Should I reduce the initial size of each .ndf file to 50 GB? Can this operation affect my data?
Does dropping an index help to reduce space?
PS: My database contains only a single table, which has one clustered index and two other nonclustered indexes.
I want to remove the two nonclustered indexes and remove the insertdate column.
If you have any other optimization ideas, they would be very helpful.
Before dropping any indexes, run these two views:
sys.dm_db_index_usage_stats
sys.dm_db_index_operational_stats
They will let you know if any of them are being used to support queries. The last thing you want is to remove an index and start seeing full table scans on a 2TB table.
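A minimal sketch of such a check, run in the database in question (the join and aliases are illustrative):

    SELECT OBJECT_NAME(s.object_id) AS table_name,
           i.name                   AS index_name,
           s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
    FROM sys.dm_db_index_usage_stats AS s
    JOIN sys.indexes AS i
      ON i.object_id = s.object_id
     AND i.index_id  = s.index_id
    WHERE s.database_id = DB_ID();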
If you can't split up the table into a relational model then try these for starters.
Check your data types (see the sketch after this list):
- Can you replace NVARCHAR with VARCHAR, or NCHAR with CHAR? (They take up half the space.)
- Does your table experience a lot of updates or a lot of inserts (the views above will tell you this)? If there are very few updates, then consider changing CHAR fields to VARCHAR fields. Heavy updates can cause page splits and result in poor page fullness.
- Check that columns storing only a date with no time are not declared as DATETIME.
- Check value ranges in numeric fields, i.e. try to use SMALLINT instead of INT.
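A sketch of what such narrowing could look like, assuming SQL Server 2008+ for the DATE type; the table and column names are made up, and you should verify the data fits before converting:

    ALTER TABLE dbo.BigTable ALTER COLUMN customer_name VARCHAR(100); -- was NVARCHAR(100)
    ALTER TABLE dbo.BigTable ALTER COLUMN created_on    DATE;         -- was DATETIME, time part unused
    ALTER TABLE dbo.BigTable ALTER COLUMN status_code   SMALLINT;     -- was INT, values fit in 2 bytes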
Look at the activity on the table - update and insert behaviour. If the activity means very few pages are rearranged, then consider increasing your fill factor.
Look at the plan cache to get an idea of how the table is being queried; if the bulk of queries focus on a specific portion of the table, then implement a filtered index, as sketched below.
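A minimal sketch of a filtered index; the index name, key column, and predicate are all illustrative:

    CREATE NONCLUSTERED INDEX IX_BigTable_Active
    ON dbo.BigTable (customer_id)
    WHERE status_code = 1;   -- only index the frequently queried portion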
Is your clustered index unique? If not, then SQL Server creates a hidden extra integer column (the "uniquifier") that enforces uniqueness under the bonnet.
I am reading that RDBMSs store table data on disk in some form of B-tree, and that table indexes are also stored in B-tree form.
I read that a primary-key index is created automatically for a defined primary key, but that it can also be dropped at any time. This implies that the primary-key index is an additional structure next to the B-tree used for storing the table data itself.
Isn't that a waste of resources - why wouldn't the whole table be kept in the primary-key index?
If it isn't like that, what ordering is used for the B-tree that stores the table data?
Thanks for clarifying
The primary key index is an optimization for finding the place on disk where the row is held. As a structure, it contains simply the PK data, not the whole row.
On a database, performance is often gated by how many pages are read from disk vs. cache. Since the PK index is smaller than the whole table, it is more likely to be in cache, it causes fewer blocks to be read from disk, and fewer blocks of other tables are evicted from cache. It is therefore a major performance optimization.
Further, while modifying the table data, rows are locked. If the primary key were being scanned from the table data on disk, locked rows would slow access for all the other queries. By separating the index as a separate structure, the index can be used even while the row being pointed to is locked.
So overall, the separate PK structure is a classic space-for-time optimization.
EDIT: What is the order of the rows in the table? The following answer is for Oracle, but is applicable to many databases.
Short answer: rows are not ordered on disk which is why the PK index (and other indexes) are so important.
Long answer:
While the primary-key B-tree structure is necessarily sorted (it's a B-tree), the rows of the table are scattered across the tablespace. To understand this we need to drill down into the various data structures.
First, the database is structured into logical entities called tablespaces. A tablespace is the space in one or more files on one or more disks. The files start empty. When the tablespace becomes full (technically, when the data in it reaches a threshold) the tablespace can be grown automatically. It can also be grown manually by enlarging a file (adding an 'extent') or by adding new files. Tablespaces can be clustered across multiple machines as well as disks.
Second: a tablespace is divided into segments, each segment for the use of a single table or index.
Third: the segment is divided into blocks, each block with space for one or more rows. These blocks are not the same as disk or OS blocks; Oracle blocks are one or more OS blocks. (This is for transportability, and for managing media with different block sizes.)
On insert, the database will select a space in a block from anywhere in the tablespace. Rows can be inserted sequentially (especially when bulk-inserting into an empty table), but normally the database will also reuse space where rows have been deleted or moved by some types of update. While the placement is theoretically somewhat predictable, in practice you should never rely on or expect a row to be placed in any specific block.
One interesting thing in Oracle is the ROWID. This is the reference stored in the index that allows the DB to look up the row:
An extended rowid has a four-piece format, OOOOOOFFFBBBBBBRRR:
The first 6 characters OOOOOO represent the data object number, using 32 bits.
The next 3 characters FFF represent the tablespace-relative datafile number, using 10 bits.
The next 6 characters BBBBBB represent the block number, using 22 bits.
The last 3 characters RRR represent the row number, using 16 bits.
For much more detail, see http://docs.oracle.com/cd/E11882_01/server.112/e25789/logical.htm#autoId0
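You can see these rowids yourself with a query along these lines (the table and column names are illustrative):

    SELECT ROWID, employee_id
    FROM employees
    WHERE ROWNUM <= 3;   -- returns values like AAAAECAABAAAAgiAAA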
One other thought: there is a concept in the DB world called partitioning, where a dataset is divided across different tablespaces (frequently on different disks or nodes in a cluster) depending on some expression logic. For example, on a table of customers, a partition could be defined by the customer's country. That way you can ensure that the US customers are physically on one disk while the Australians are on another.
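A hedged sketch of what that could look like with Oracle list partitioning; all table, column, and tablespace names are made up:

    CREATE TABLE customers (
        customer_id NUMBER PRIMARY KEY,
        name        VARCHAR2(100),
        country     VARCHAR2(2) NOT NULL
    )
    PARTITION BY LIST (country) (
        PARTITION p_us VALUES ('US') TABLESPACE ts_us,  -- US customers on one disk
        PARTITION p_au VALUES ('AU') TABLESPACE ts_au   -- Australian customers on another
    );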
I have a huge database, around 1 TB in size. Most of the space is consumed by a table that stores images; the table currently has almost 800k rows.
Server response time has increased. I would like to know which techniques I should use, or which you recommend - partitioning? Or how should I reorganize the table?
Every row is accessed by the image id column, and the table has its clustered index on that column. Every two days I reorganize the index, and every 7 days I rebuild it, but it seems not to be working.
Any suggestions?
If the table is clustered by image_id and you always access it by image_id, then the size of the table is irrelevant, and so is the fragmentation (no need to rebuild).
If you see a performance decrease, then there must be something else at play. Are you doing range scans? Look in sys.dm_db_index_usage_stats: does the user_scans column differ from 0? If so, you have queries that do scans.
Unless you measure where the time increase occurs, you'll be shooting in the dark and never solve the problem correctly. Apply a methodical approach, like Waits and Queues, to identify the problem.
One thing I can tell you right now: partitioning is never a performance improvement. It is intended for data maintenance (switch in/switch out) and for spreading the load in a controlled fashion across filegroups. But you can never expect partitioning to improve performance; you can at best hope for equal performance with a non-partitioned table.
If the response time is increasing, you must be doing more with this table than just pulling images by id?
What other data columns are stored in your images table?
If you have a clustered index on an id (probably an identity), that's fine, but adding a nonclustered index that can cover your search criteria will probably help.
Say you also have columns for name or tag or region or whatever in this images table (and assuming you aren't going to vertically partition this table into separate tables). Then a nonclustered index on (tag, id) INCLUDE(name), say, or something that matches your usage patterns, will help a lot - see the sketch below.
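A minimal sketch of such a covering index; the table and column names are guesses based on the description above:

    CREATE NONCLUSTERED INDEX IX_Images_Tag
    ON dbo.Images (tag, id)
    INCLUDE (name);   -- covers queries that filter on tag and return name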
Remember: A clustered index is not an index, it's just the way the data is organized. It will usually not help much in any kind of search operations - it primarily works well on identity lookups, when you are reading almost every column, and streaming data in the order of the clustered index.
Typically, databases are designed as below to allow multiple types for an entity:
- Entity name
- Type
- Additional info
Entity name can be something like an account number, and type could be savings, current, etc., in a bank database, for example.
Mostly, type will be some kind of string. There could be additional information associated with an entity type.
Normally, queries will be posed like this:
Find account numbers of a particular type.
Find account numbers of type X having a balance greater than 1 million.
To answer these queries, the query analyzer will scan the index if an index is associated with that particular column. Otherwise, it will do a full scan of all the rows.
I am thinking about the below optimization.
Why not store a hash or integral value of each column's data in the actual table, such that the ordering property is maintained, making comparisons easy?
It has the advantages below.
1. Table size will be a lot less, because we will be storing small values for each column's data.
2. We can construct a clustered B+ tree index on the hash values of each column to retrieve the rows matching, greater than, or smaller than some value.
3. The corresponding values can easily be retrieved by keeping the B+ tree index in main memory and fetching the corresponding rows.
4. Infrequent values will never need to be retrieved.
I still have more optimizations in mind. I will post those based on the feedback to this question.
I am not sure if this is already implemented in a database; this is just a thought.
Thank you for reading this.
-- Bala
Update:
I am not trying to emulate what the database does. Normally, indexes are created by the database administrator. I am trying to propose a physical schema with indexes on all the fields in the database, so that the table size is reduced and it's easy to answer certain queries.
Updates (responding to Joe's answer):
How does adding indexes to every field reduce the size of the database? You still have to store all of the true values in addition to the hash; we don't just want to query for existence but want to return the actual data.
In a typical table, all the physical data will be stored. But now, by generating a hash value for each column's data, I am only storing the hash value in the actual table. I agree that it's not reducing the size of the database, but it is reducing the size of the table. It will be useful when you don't need to return all the column values.
Most RDBMSes answer most queries efficiently now (especially with key indices in place). I'm having a hard time formulating scenarios where your database would be more efficient and save space.
There can be only one clustered index on a table, and all other indexes have to be nonclustered. With my approach, I will have a clustered index on all the values in the database. It will improve query performance.
Putting indexes within the physical data -- that doesn't really make sense. The key to indexes' performance is that each index is stored in sorted order. How do you propose doing that across any possible field if they are only stored once in their physical layout? Ultimately, the actual rows have to be sorted by something (in SQL Server, for example, this is the clustered index)?
The basic idea is that instead of creating a separate table for each column for efficient access, we are doing it at the physical level.
Now the table will look like this:
Row1 - OrderedHash(Column1), OrderedHash(Column2), OrderedHash(Column3)
Google for "hash index". For example, in SQL Server such an index is created and queried using the CHECKSUM function.
This is mainly useful when you need to index a column which contains long values, e.g. varchars which are on average more than 100 characters or something like that.
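A minimal sketch of the CHECKSUM pattern in SQL Server (the table and column names are made up). Note that the query has to repeat the original comparison, because checksums can collide:

    ALTER TABLE dbo.Documents ADD url_checksum AS CHECKSUM(url);
    CREATE INDEX IX_Documents_UrlChecksum ON dbo.Documents (url_checksum);

    SELECT *
    FROM dbo.Documents
    WHERE url_checksum = CHECKSUM('http://example.com/page')
      AND url = 'http://example.com/page';   -- re-check to rule out collisions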
How does adding indexes to every field reduce the size of the database? You still have to store all of the true values in addition to the hash; we don't just want to query for existence but want to return the actual data.
Most RDBMSes answer most queries efficiently now (especially with key indices in place). I'm having a hard time formulating scenarios where your database would be more efficient and save space.
Putting indexes within the physical data -- that doesn't really make sense. The key to indexes' performance is that each index is stored in sorted order. How do you propose doing that across any possible field if they are only stored once in their physical layout? Ultimately, the actual rows have to be sorted by something (in SQL Server, for example, this is the clustered index)?
I don't think your approach is very helpful.
Hash values only help with equality/inequality comparisons, not less-than/greater-than comparisons, unlike pretty much every database index.
Even with (in)equality, hash functions do not offer a 100% guarantee of having given you the right answer, as hash collisions can happen, so you will still have to fetch and compare the original value - boom, you just lost what you wanted to save.
You can have the rows in a table ordered only one way at a time. So if you have an application where you have to order rows differently in different queries (e.g. query A needs a list of customers ordered by their name, query B needs a list of customers ordered by their sales volume), one of those queries will have to access the table out-of-order.
If you don't want the database to have to work around columns you do not use in a query, then use indexes with extra data columns - if your query is ordered according to that index, and your query only uses columns that are in the index (columns the index is based on, plus columns you have explicitly added into the index), the DBMS will not read the original table.
Etc.
I have an ETL process performance problem. I have a table with 4+ billion rows in it. Structure is:
id bigint identity(1,1)
raw_url varchar(2000) not null
md5hash char(32) not null
job_control_number int not null
Clustered unique index on id and a nonclustered unique index on md5hash.
SQL Server 2008 Enterprise
Page level compression is turned on
We have to store the raw URLs from our web-server logs as a dimension. Since the raw string can exceed 900 bytes (SQL Server's index key size limit), we cannot put a unique index on that column. We use an MD5 hash function to create a unique 32-character string for indexing purposes. We cannot allow duplicate raw_url strings in the table.
The problem is poor performance. The md5hash is of course random by nature, so index fragmentation climbs to 50%, which leads to inefficient IO.
Looking for advice on how to structure this to allow better insertion and lookup performance as well as less index fragmentation.
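For reference, a sketch of how the hash itself can be derived in T-SQL; CONVERT style 2 (hex string output) requires SQL Server 2008, which matches the setup above:

    DECLARE @url varchar(2000) = 'http://example.com/some/long/path';
    SELECT CONVERT(char(32), HASHBYTES('MD5', @url), 2) AS md5hash;  -- 32-character hex string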
I would break up the table into physical files, with the older, non-changing data in a read-only filegroup. Make sure the nonclustered index is also in that filegroup.
Edit (from comment): And while I'm thinking about it, if you turn off page level compression, that'll improve I/O as well.
I would argue that it should be a degenerate dimension in the fact table.
And figure out some way to do partitioning on the data. Maybe take the first xxx characters and store them as a separate field, and partition by that.
Then when you're doing lookups, you pass both the short and long columns, so it looks in a single partition first.
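A hedged sketch of that idea; the table name url_dim, the column names, and the prefix length are all illustrative, and a partition function would then range over the prefix column:

    ALTER TABLE dbo.url_dim
    ADD url_prefix AS CAST(LEFT(raw_url, 10) AS varchar(10)) PERSISTED;

    -- Lookups then pass both columns, so only one partition is touched:
    -- WHERE url_prefix = LEFT(@url, 10) AND raw_url = @url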