Sybase nonclustered index selection

We have a table with two nonclustered indexes. Both indexes have the same three columns, in the same order; they differ only in that one is sorted ascending and the other descending. A developer created a stored procedure that does a SELECT where he intended (but forgot!) to force the use of an index rather than do an ORDER BY. When one user runs the query, one index is consistently selected (ironically the correct one, which masked this error for some time); when another user runs the procedure, the other index is chosen. What would be different between two users running the exact same procedure that would influence index selection?
(Note: this code will be rewritten, but I am trying to come to an understanding of what went on here for an After Action Report.)
Thanks in Advance

You have not specified which Sybase you have. I will assume ASE.
Index selection is dependent on several factors.
Given your case, where the code has not changed, and the two users are using the same stored proc, there are two possibilities:
Check whether statistics are up to date. Depending on how your DBA has automated the UPDATE STATISTICS function, and at which level (index or table), one index's statistics could be current and the other's out of date. Unlike the ASE 12.5.4 optimiser, the ASE 15.x optimiser is sensitive to statistics.
Each user is supplying a different set of data (search arguments, variables, etc.) as input to the same stored proc. ASE makes index choices at run time, based on (a) the exact input data (search arguments) vs (b) the usefulness of the indices, and all it knows is the statistics info as of the last UPDATE STATISTICS.
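If you want to check the first possibility, a minimal sketch (assuming ASE, with hypothetical table and proc names) is to refresh the statistics, flush the cached plan, and compare the plans each user gets:

-- refresh statistics for the table and all of its indexes (table name is hypothetical)
update index statistics my_table
go
-- force cached plans that reference the table to be recompiled with the fresh statistics
exec sp_recompile my_table
go
-- show the plan ASE picks for a given set of search arguments
set showplan on
go
exec my_proc @arg1 = 'value supplied by user A'
go
set showplan off
go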

Indexes are a little more complex than they seem. A database system decides whether or not to use an index based on the query plan, table volume, number of rows, and database cache.
The database system does a cost estimation (cardinality probability, I/O estimates, etc.) based on the query and the data above.
If you have two similar indexes with different sort orders, there is a chance that the required index key (i) is located at roughly n/2, where n = index size.
There is also a possibility that, based on the data in the table (duplicate data / serial data), Sybase does not consider the indexes useful and thus can't decide which one to use.
Drop one index at a time and see what happens.
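Alternatively, while the code is being rewritten, you can force each index explicitly and compare; a minimal sketch with hypothetical table and index names (this is what the developer apparently meant to do):

-- force a specific index with ASE's index hint syntax
select col1, col2, col3
from my_table (index my_table_idx_asc)
where col1 = 42
go
-- or rely on an explicit order by instead of the index's physical sort order
select col1, col2, col3
from my_table
where col1 = 42
order by col1, col2, col3
go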

Related

Azure Database Large Table Group By Performance

I'm looking for design and/or index recommendations for the problem listed below.
I have a couple of denormalized tables in an Azure S1 Standard (20 DTU) database. One of those tables has ~20 columns and a million rows. My application requirements need me to support sub-second (or at least close to it) querying of this table by any combination of columns in my WHERE clause, as well as sub-second (or at least close to it) querying of DISTINCT values in each column.
In order to picture the use case behind this, here is an example. Imagine you were using an HR application that allowed you to search for employees and view employee information. The employee table might have 5 columns and millions of rows. The application allows you to filter by any column, and provides an interface to allow this. Therefore, the underlying SQL queries that must be made are:
A GROUP BY (or DISTINCT) query for each column, which provides the interface with the available filter options
A general employee search query, that filters all rows by any combination of filters
In order to solve performance issues on the first set of queries, I've implemented the following:
Index columns with a large variety of values
Full-Text index columns that require string matching (So CONTAINS querying instead of LIKE)
Do not index columns with a small variety of values
In order to solve the performance issues on the second query, I've implemented the following:
Forcing the front end to use pagination, implemented using SELECT * FROM table ORDER BY <column> OFFSET 0 ROWS FETCH NEXT n ROWS ONLY, and ensuring the ORDER BY column is indexed (sketched below)
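The paging query looks roughly like this (table and column names here are placeholders, not the real schema):

DECLARE @department nvarchar(50) = N'Engineering',
        @pageNumber int = 0,
        @pageSize   int = 50;

SELECT *
FROM dbo.Employees                 -- placeholder table; the real one has ~20 columns
WHERE Department = @department     -- any combination of user-selected filters
ORDER BY LastName                  -- the ORDER BY column is indexed
OFFSET @pageNumber * @pageSize ROWS
FETCH NEXT @pageSize ROWS ONLY;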
Locally, this seemed to work fine. Unfortunately, an Azure Standard database doesn't have the same performance as my local machine, and I'm seeing issues. Specifically, the columns I am not indexing (the ones with a very small set of distinct values) are taking 30+ seconds to query. Additionally, while the paging is initially very quick, the query takes longer and longer as I increase the offset.
So I have two targeted questions, but any other advice or design suggestions would be most welcome:
How bad is it to index every column in the table? Note that the table does need to be updated, but the columns that I update won't actually be part of any filters or WHERE clauses. Will the indexes still need to be rebuilt on update? You can also safely assume that the table will not see any inserts/deletes, except for once a month when the entire table is truncated and rebuilt from scratch.
In regards to the paging getting slower and slower the deeper I go, I've read this is expected, but the performance becomes unacceptable at a certain point. Outside of making the sort column my clustered index, are there any other suggestions to get this working?
Thanks,
-Tim

SQL Server: Best technique to regenerate a computed table

We have a few tables that are periodically recomputed within SQL Server. The computation takes a few seconds to a few minutes and we do the following:
Dump the results in computed_table_tmp
Drop computed_table
Rename computed_table_tmp to computed_table (and all of its indexes).
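In T-SQL terms, the current process looks roughly like this (the source table and the computation itself are placeholders):

-- build the new copy (placeholder computation)
SELECT s.Id, SUM(s.Amount) AS Total
INTO dbo.computed_table_tmp
FROM dbo.source_table AS s
GROUP BY s.Id;

-- swap it in: between these two statements the table briefly does not exist
DROP TABLE dbo.computed_table;
EXEC sp_rename 'dbo.computed_table_tmp', 'computed_table';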
However, we seem to still run into concurrency issues where we have our application requesting a view that utilizes this computed table at the precise moment where it no longer exists.
What would be the best technique to avoid this type of problem while ensuring high availability?
If this table is part of your high-availability requirement, then you can't do this the way you've been doing it. Dropping a table in a production SQL environment breaks the concept of high availability.
You might be able to accomplish what you're trying to achieve by creating one or more partitions on this table. A partitioned table is divided into subgroups of rows that can be spread across more than one filegroup in your database. For querying purposes, however, the table is still a single logical entity. The advantage of using a table partition is that you can move around subsets of your data without breaking the integrity of the database, i.e., high-availability is still in place.
In your scenario, you'd have to modify your process such that all activity takes place in the production version of the table. The new rows are dumped into a separate partition, based on the value of your partition function. Then you'll need to switch the partitions.
One of the things you'll need to do is identify a column in your table that may be used as the partition column, which determines which partition a row will be allocated to. This might be, for example, a datetime column indicating when the row was generated. You can even use a computed column for this purpose, provided it is a PERSISTED column.
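A rough sketch of the switch itself, with hypothetical names and assuming a partition function/scheme on the partition column already exists (the staging table must match the target table's schema and indexes, sit on the same filegroup, and carry a CHECK constraint restricting it to the target partition's range):

-- switch the stale data out into an empty table with a matching structure...
ALTER TABLE dbo.computed_table SWITCH PARTITION 2 TO dbo.computed_table_stale;
-- ...and switch the freshly computed rows in; both are metadata-only operations,
-- so the production table never disappears during the swap
ALTER TABLE dbo.computed_table_staging SWITCH TO dbo.computed_table PARTITION 2;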
One caveat: Table partitioning is not available in all editions of SQL Server... I don't believe Standard has it.

Handling large datasets with SQL Server

I'm looking to manage a large dataset of log files. There is an average of 1.5 million new events per month that I'm trying to keep. I've used Access in the past, though it's clearly not meant for this, and managing the dataset is a nightmare because I'm having to split it into months.
For the most part, I just need to filter event types and count them. But before I do a bunch of work on the data-import side of things, I wanted to see if anyone can verify that SQL Server is a good choice for this. Is there a row-count limit beyond which I should archive entries? Is there a way of archiving entries?
The other part is that I'm entering logs from multiple sources. With this number of entries, is it wise to put them all into the same table, or should each source have its own table, to make queries faster?
edit...
There would be no joins, and about 10 columns. Data would be filtered through a view, and I'm interested to see whether a SELECT query that filters on one or more columns would have a reasonable response time. Does creating a set of views speed things up for frequent queries?
In my experience, SQL Server is a fine choice for this, and you can definitely expect better performance from SQL Server than MS-Access, with generally more optimization methods at your disposal.
I would probably go ahead and put this stuff into SQL Server Express as you've said, hopefully installed on the best machine you can use (though you did mention only 2GB of RAM). Use one table so long as it only represents one thing (I would think a pilot's flight log and a software error log wouldn't be in the same "log" table, as an absurdly contrived example). Check your performance. If it's an issue, move forward with any number of optimization techniques available to your edition of SQL Server.
Here's how I would probably do it initially:
Create your table with a nonclustered primary key, if you use a PK on your log table -- I normally use an identity column to give me a guaranteed order of events (unlike duplicate datetimes) and to show possible log insert failures (missing identities). Set a clustered index on the main datetime column (you mentioned that you're already splitting into separate tables by month, so I assume you'll query this way, too). If you have a few queries that you run on this table routinely, by all means make views of them, but don't expect a speedup by simply doing so. You'll more than likely want to look at indexing your table based upon the WHERE clauses in those queries. This is where you'll be giving SQL Server the information it needs to run those queries efficiently.
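A minimal sketch of that layout, with hypothetical column names:

CREATE TABLE dbo.EventLog (
    EventId   int IDENTITY(1,1) NOT NULL,
    EventTime datetime          NOT NULL,
    EventType varchar(50)       NOT NULL,
    Source    varchar(50)       NOT NULL,
    Message   varchar(1000)     NULL,
    CONSTRAINT PK_EventLog PRIMARY KEY NONCLUSTERED (EventId)
);

-- cluster on the datetime column, since filtering and archiving are date-driven
CREATE CLUSTERED INDEX CIX_EventLog_EventTime ON dbo.EventLog (EventTime);

-- index whatever your routine queries filter on, e.g. counting by event type
CREATE NONCLUSTERED INDEX IX_EventLog_EventType ON dbo.EventLog (EventType, EventTime);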
If you're unable to get your desired performance through optimizing your queries, indexes, using the smallest possible datatypes (especially on your indexed columns) and running on decent hardware, it may be time to try partitioned views (which require some form of ongoing maintenance) or partitioning your table. Unfortunately, SQL Server Express may limit you on what you can do with partitioning, and you'll have to decide if you need to move to a more feature-filled edition of SQL Server. You could always test partitioning with the Enterprise evaluation or Developer editions.
Update:
For the most part, I just need to filter event types and count the number.
Since past logs don't change (sort of like past sales data), storing the past aggregate numbers is an often-used strategy in this scenario. You can create a table which simply stores your counts for each month and insert new counts once a month (or week, day, etc.) with a scheduled job of some sort. Using the clustered index on your datetime column, SQL Server could much more easily aggregate the current month's numbers from the live table and add them to the stored aggregates for displaying the current values of total counts and such.
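For example (hypothetical names), a small summary table plus a once-a-month insert might look like this:

CREATE TABLE dbo.MonthlyEventCounts (
    MonthStart datetime    NOT NULL,
    EventType  varchar(50) NOT NULL,
    EventCount int         NOT NULL,
    CONSTRAINT PK_MonthlyEventCounts PRIMARY KEY (MonthStart, EventType)
);

-- run on a schedule once the previous month has closed
INSERT INTO dbo.MonthlyEventCounts (MonthStart, EventType, EventCount)
SELECT DATEADD(month, DATEDIFF(month, 0, EventTime), 0), EventType, COUNT(*)
FROM dbo.EventLog
WHERE EventTime >= DATEADD(month, DATEDIFF(month, 0, GETDATE()) - 1, 0)
  AND EventTime <  DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0)
GROUP BY DATEADD(month, DATEDIFF(month, 0, EventTime), 0), EventType;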
Sounds like one table to me, that would need indexes on exactly the sets of columns you will filter. Restricting access through views is generally a good idea and ensures your indexes will actually get used.
Putting each source into its own table will require a UNION in your queries later, and SQL Server is not very good at optimizing UNION queries.
"Archiving" entries can of course be done manually, by moving entries in a date-range to another table (that can live on another disk or database), or by using "partitioning", which means you can put parts of a table (e.g. defined by date-ranges) on different disks. You have to plan for the partitions when you plan your SQL-Server installation.
Be aware that Express edition is limited to 4GB, so at 1.5 million rows per month this could be a problem.
I have a table like yours with 20M rows and few problems querying and even joining, as long as the indexes are used.

How would you optimize this SQL Server 2008 R2 Table

I have a private messaging system for my browser game. When I check which queries use the most CPU time, I see that this table is the most expensive one. I am not good with indexes or query optimization, so I would like to get your optimization tips for this table.
Alright, I am going to show you the table structure first:
[table structure image]
Alright, the following query reads how many unread messages a user has, and it is the biggest CPU user since it runs on every page load:
SELECT COUNT([Id]) [Number]
FROM [MyTable]
WHERE [ReceiverUserId] = #1
AND [ReceiverReaded] = #2
AND [ReceiverDeleted] = #3
So what kind of indexes etc might improve my performance?
Why allow NULLs on those columns at all - either it's read or not. Just default to 0. Then index on ReceiverReaded/ReceiverDeleted/ReceiverUserId (in that order they will be "partitioned" if you need a lot of ALL READ access; alternatively, if most reads are just for a single user, put an index on ReceiverUserId).
What you want is for your index to be covering for this query. In your case, you could put an index on ReceiverUserId and INCLUDE the columns ReceiverReaded and ReceiverDeleted, and it would be covering (for that query). In the execution plan you should then see just an index seek, since you filter on a single user.
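For example (the index name is arbitrary; if [Id] is the clustered primary key it is carried in the nonclustered index automatically):

CREATE NONCLUSTERED INDEX IX_MyTable_ReceiverUserId
ON [MyTable] ([ReceiverUserId])
INCLUDE ([ReceiverReaded], [ReceiverDeleted]);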
You could capture the workload and then run it through the index tuning wizard in SQL Server and it would probably make pretty good suggestions. You need to interpret what it's telling you, of course.
You always want indexes on the fields you are searching for, so you would probably improve the query performance by adding indexes on [ReceiverUserId], [ReceiverReaded] and [ReceiverDeleted].
Of course the more columns you index, the slower your UPDATES and INSERTS will be.
A fairly simple rule of thumb in db optimization is to index any column that appears as part of a predicate in a WHERE clause or a JOIN. From your example these would include:
ReceiverUserId
ReceiverReaded
ReceiverDeleted
There are also a number of optimizer tools available that will "observe" your db and tell you what columns to index for best performance.
Different approach that may be viable for your application: don't query the messages table at all when the user isn't explicitly requesting any content, e.g. when he's not in the "messages" section of your game.
Try extending your user table with integer-valued columns indicating how many messages there are and how many are already read. Every time you modify the messages table, you also modify the corresponding value in the user table.
This way you won't need to look through the whole table on every refresh. Note, though, that the downside of this method is some extra synchronization work on the programmer's part. If you've encapsulated the modification of the messages table (add message, read message, delete message) properly, this shouldn't be a problem.
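A minimal sketch of that bookkeeping, assuming a Users table with an UnreadMessages counter column (names are hypothetical):

DECLARE @receiverUserId int = 123;  -- hypothetical user id

-- when a new message is inserted for the user
UPDATE dbo.Users
SET UnreadMessages = UnreadMessages + 1
WHERE UserId = @receiverUserId;

-- when the receiver reads (or deletes an unread) message
UPDATE dbo.Users
SET UnreadMessages = UnreadMessages - 1
WHERE UserId = @receiverUserId;

-- the per-page-load check then becomes a single-row lookup
SELECT UnreadMessages FROM dbo.Users WHERE UserId = @receiverUserId;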
Index your search fields (per #PaulStock's answer).
Change your tinyint fields to bit fields (default value = 0)
Does your body really need to be nvarchar(4000)? That's HUGE! Consider much shorter messages (such as nvarchar(300) or smaller -- for reference, Twitter is just 140.)

How to deal with billions of records in SQL Server?

I have a SQL Server 2008 database with 30,000,000,000 records in one of its major tables. Now we are looking at the performance of our queries. We have created all the indexes. I found that we can split our database tables into multiple partitions, so that the data is spread over multiple files, which will increase query performance.
But unfortunately this functionality is only available in SQL Server Enterprise edition, which is unaffordable for us.
Is there any way to optimize the query performance? For example, the query
select * from mymajortable where date between '2000/10/10' and '2010/10/10'
takes around 15 min to retrieve around 10000 records.
A SELECT * will obviously be less efficiently served than a query that uses a covering index.
First step: examine the query plan and look for table scans and the steps taking the most effort (%).
If you don't already have an index on your 'date' column, you certainly need one (assuming sufficient selectivity). Try to reduce the columns in the select list, and if there are sufficiently few, add them to the index as included columns (this can eliminate bookmark lookups into the clustered index and boost performance).
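For example (the included columns here are hypothetical stand-ins for the ones the query actually returns):

CREATE NONCLUSTERED INDEX IX_mymajortable_date
ON mymajortable ([date])
INCLUDE (col1, col2);   -- list only the columns the SELECT really needs

-- then select only those columns instead of *
SELECT col1, col2
FROM mymajortable
WHERE [date] BETWEEN '2000/10/10' AND '2010/10/10';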
You could break your data up into separate tables (say by a date range) and combine via a view.
It is also very dependent on your hardware (# cores, RAM, I/O subsystem speed, network bandwidth)
Suggest you post your table and index definitions.
First, always avoid SELECT * as that causes the select to fetch all columns; if there is an index with just the columns you need, you are fetching a lot of unnecessary data. Using only the exact columns you need to retrieve lets the server make better use of indexes.
Secondly, have a look at included columns for your indexes; that way, often-requested data can be included in the index to avoid having to fetch rows.
Third, you might try using an int column for the date and converting the date into an int. Ints are usually more effective in range searches than dates, especially if you have time information too, and if you can skip the time information the index will be smaller.
One more thing to check is the execution plan the server uses; you can see this in Management Studio if you enable showing the execution plan in the menu. It can indicate where the problem lies; you can see which indexes it tries to use, and sometimes it will suggest new indexes to add.
It can also indicate other problems: a Table Scan or Index Scan is bad, as it means the whole table or index has to be scanned, while an Index Seek is good.
It is a good way to understand how the server works.
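A sketch of the int-date idea (column names are hypothetical): store the date as a yyyymmdd integer, for example via a persisted computed column, which keeps the index small and range-friendly:

-- add a compact yyyymmdd key derived from the datetime column
ALTER TABLE mymajortable
ADD date_key AS CONVERT(int, CONVERT(char(8), [date], 112)) PERSISTED;

CREATE NONCLUSTERED INDEX IX_mymajortable_date_key
ON mymajortable (date_key);

-- range query against the int key
SELECT col1, col2
FROM mymajortable
WHERE date_key BETWEEN 20001010 AND 20101010;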
If you add an index on date, you will probably speed up your query due to an index seek + key lookup instead of a clustered index scan, but if your filter on date will return too many records the index will not help you at all because the key lookup is executed for each result of the index seek. SQL server will then switch to a clustered index scan.
To get the best performance you need to create a covering index, that is, include all the columns you need in the "included columns" part of your index, but that will not help you if you use SELECT *.
Another issue with the SELECT * approach is that you can't use the cache or the execution plans in an efficient way. If you really need all columns, make sure you specify all the columns instead of the *.
You should also fully qualify the object name to make sure your plan is reusable.
You might consider creating an archive database and moving anything older than, say, 10-20 years into it. This should drastically speed up your primary production database while retaining all of your historical data for reporting needs.
What type of queries are we talking about?
Is this a production table? If yes, look into normalizing a bit more and see if you cannot take the DB a bit further in that direction.
If this is for reports, including a lot of Ad Hoc report queries, this screams data warehouse.
I would create a DW with separate pre-processed reports which include all the calculations and aggregations you could expect.
I am a bit worried about a business model which involves dealing with BIG data but does not generate enough revenue or even attract enough venture investment to upgrade to enterprise.
