How to optimize a SQL Server full text search - sql-server

I want to use full text search for an autocomplete service, which means I need it to work fast! Two seconds max.
The search results are drawn from different tables and so I created a view that joins them together.
The SQL function that I'm using is FREETEXTTABLE().
The query runs very slowly, sometimes up to 40 seconds.
To optimize the query execution time, I made sure the base table has a clustered index column that's an integer data type (and not a GUID).
I have two questions:
First, any additional ideas about how to make the full text search faster? (not including upgrading the hardware...)
Second, how come each time after I rebuild the full text catalog, the search query runs very fast (less than one second), but only for the first run? The second time I run the query it takes a few seconds more, and it's all downhill from there... any idea why this happens?

The reason why your query is very fast the first time after rebuilding the catalog might be very simple:
When you delete the catalog and rebuild it, the indexes have to be rebuilt, which takes some time. If you run a query before the rebuild is finished, the query is faster simply because there is less data. You should also notice that your query result contains fewer rows.
So testing the query speed only makes sense after rebuilding of the indexes is finished.
The following SELECT might come in handy to check the size (and also the fragmentation) of the indexes. When the size stops growing, the rebuild is finished ;)
-- Compute fragmentation information for all full-text indexes on the database
SELECT c.fulltext_catalog_id, c.name AS fulltext_catalog_name, i.change_tracking_state,
       i.object_id, OBJECT_SCHEMA_NAME(i.object_id) + '.' + OBJECT_NAME(i.object_id) AS object_name,
       f.num_fragments, f.fulltext_mb, f.largest_fragment_mb,
       100.0 * (f.fulltext_mb - f.largest_fragment_mb) / NULLIF(f.fulltext_mb, 0) AS fulltext_fragmentation_in_percent
FROM sys.fulltext_catalogs c
JOIN sys.fulltext_indexes i
    ON i.fulltext_catalog_id = c.fulltext_catalog_id
JOIN (
    -- Compute fragment data for each table with a full-text index
    SELECT table_id,
           COUNT(*) AS num_fragments,
           CONVERT(DECIMAL(9,2), SUM(data_size/(1024.*1024.))) AS fulltext_mb,
           CONVERT(DECIMAL(9,2), MAX(data_size/(1024.*1024.))) AS largest_fragment_mb
    FROM sys.fulltext_index_fragments
    GROUP BY table_id
) f
    ON f.table_id = i.object_id

Here's a good resource to check out. However, if you really want to improve performance, you'll have to think about upgrading your hardware. (I saw a significant performance increase by moving my data and full text index files to separate read-optimized disks, and by moving logs and tempdb to separate write-optimized disks -- a total of 4 extra disks, plus 1 more for the OS and SQL Server binaries.)
Some other non-hardware solutions I recommend:
Customize the built-in stop word list to define more stop words, thereby reducing the size of your full text index (see the sketch after this list).
Change the file structure of tempdb. See here and here.
If your view performs more than 1 call to FREETEXTTABLE then consider changing your data structure so that the view only has to make 1 call.
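For example, here is a minimal sketch of a custom stoplist (SQL Server 2008 and later; on 2005 you would edit the noise word files instead). The stoplist, word, and table names below are hypothetical:
-- Start from the system stoplist, then add domain-specific words that add no search value
CREATE FULLTEXT STOPLIST MyStoplist FROM SYSTEM STOPLIST;
ALTER FULLTEXT STOPLIST MyStoplist ADD 'inc' LANGUAGE 1033;
ALTER FULLTEXT STOPLIST MyStoplist ADD 'ltd' LANGUAGE 1033;
-- Attach the stoplist to an existing full-text index
ALTER FULLTEXT INDEX ON dbo.MySearchTable SET STOPLIST MyStoplist;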
However, none of these by itself is likely to be the silver bullet solution you're looking for. I suspect there may be other factors here (maybe a poorly performing server, network latency, resource contention on the server...), especially since you said the full text searches get slower with each execution, which is the opposite of what I've seen in my experience.

Related

Is there a way I can tell what OPTION (FAST 1) does at the end of my query?

I have a number of horribly large queries which work OK on small databases, but as the volume of data grows, the performance of these queries degrades. They are badly designed, really, and we must address that. These queries have a very large number of LEFT OUTER JOINs. I note that when the number of LEFT OUTER JOINs goes past 10, performance gets dramatically slower each time a new join is added. If I put OPTION (FAST 1) at the end of my query, the results appear almost immediately. Of course I do not want to use this: firstly, it is not going to help all of the time (if it did, every query would have it), and secondly, I want to know how to optimise these joins better. When I run the query without the option set, the execution plan shows a number of nested loops on my LEFT OUTER JOINs with a high percentage cost; with the option on, it does not. How can I find out what OPTION (FAST 1) does to speed the query up, so I can reflect it in the query itself?
I cannot get the query or the execution plans today, as the server I am on does not let me copy data from it. If they are needed I can arrange to get them sent, but that will not be until the morning.
I would be really interested to hear your comments.
Kind regards,
Derek.
You can set a column as the primary key, and by default that column will automatically get the clustered index.
Clustered index benefits and drawbacks:
Benefit: a performance boost, if implemented correctly.
Drawback: it requires an understanding of clustered/non-clustered indexes and their storage implications.
Note: varchar foreign keys can lead to poor performance as well. Change the base table to have an integer primary key instead.
And also:
I would suggest using database paging (e.g. via the ROW_NUMBER function) to partition your result set and query only the data you want to show (e.g. 20 rows per page in a GridView).
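A minimal sketch of that paging pattern (SQL Server 2005+; the table and column names are hypothetical):
-- Return page 2 at 20 rows per page (rows 21-40)
SELECT *
FROM (
    SELECT ROW_NUMBER() OVER (ORDER BY r.Name) AS RowNum, r.*
    FROM dbo.SearchResults r
) AS paged
WHERE paged.RowNum BETWEEN 21 AND 40;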

Create a large dataset for speed testing

I need a Microsoft SQL Server 2005 or above stored procedure that will create a large number of rows (example: one million) so that I can then try various things like seeing how much slower SELECT * is compared to selecting each individual field name, or selecting from a view that selects from another view rather than selecting directly from the tables.
Does that make sense?
If it is just the number of rows you want and you don't mind having the same content in each row, then you can do this in SQL Server Management Studio easily. Write your insert statement to insert a single row, then use:
GO 1000000
This will execute the batch the number of times specified after the GO statement.
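For example (the table here is hypothetical; note that GO and GO n are batch separators interpreted by SSMS/sqlcmd, not by T-SQL itself):
CREATE TABLE dbo.SpeedTest (ID int IDENTITY(1,1) PRIMARY KEY, Payload varchar(50) NOT NULL);
GO
-- The batch below is executed one million times
INSERT dbo.SpeedTest (Payload) VALUES ('test row');
GO 1000000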
If you need different data per row (or cannot have duplicate data because of indexes, etc.), then tools such as SQL Data Generator will help. They enable you to define the type of data that gets generated, so the tool produces realistic data.
I can tell you right now how much slower it is to perform SELECT * instead of SELECT specific_column_names. If the columns you are selecting are not covered by any index, it will make hardly any difference at all; if the columns you would normally be selecting are covered by an index, and the table contains any significant amount of data, it will be an order of magnitude slower, maybe worse.
Here's a quick and dirty example. First create the test schema and data:
CREATE TABLE #TestTable
(
ID int NOT NULL IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
Name varchar(50) NOT NULL,
Age int NOT NULL
)
INSERT #TestTable (Name, Age)
SELECT 'John', s1.number % 10 + 25
FROM master.dbo.spt_values s1
CROSS JOIN master.dbo.spt_values s2
WHERE s1.type = 'P' AND s2.type = 'P'
AND s2.number < 20
CREATE INDEX IX_#TestTable_Age ON #TestTable (Age)
Now run this query in SSMS and turn on the actual execution plan:
SELECT ID
FROM #TestTable
WHERE Age = 30
SELECT *
FROM #TestTable
WHERE Age = 30
The first SELECT is executed as an index seek, which on my machine is 7% of the total cost. On the second query, the optimizer decides that the IX_#TestTable_Age index isn't worth it and does a clustered index scan instead, using up 93% of the total cost, or 13 times as expensive as the non-SELECT * version.
If we force a nested loop key lookup, to mimic the absence of a clustered index or a very large clustered index, it gets even worse:
SELECT *
FROM #TestTable
WITH (INDEX(IX_#TestTable_Age))
WHERE Age = 30
This takes more than 100 times as long as the covering query. Compared to the very first query, the cost is simply astronomical.
Why I bothered to write all that information:
Before you start going out and "testing" things, you need to shake off the common misconception that the exact order in which you write your query statements, or irrelevant factors like views selecting from other views, actually makes any appreciable difference if your database is even remotely optimized.
Indexing is the first thing that matters in the area of database performance. How you use them is the second thing that matters. The way in which you write your query may matter - such as performing a SELECT * when your WHERE condition is on anything other than the clustered index, or using non-sargable functions like DATEPART in your WHERE condition, but for the most part, chucking a bunch of random data into a table without seriously thinking about how the table will actually be used is going to give you mostly-meaningless results in terms of performance.
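To illustrate the DATEPART point, here is a sketch against a hypothetical Orders table; the second form can seek on an index over OrderDate, the first cannot:
-- Non-sargable: the function wraps the column, forcing a scan
SELECT OrderID FROM dbo.Orders WHERE DATEPART(YEAR, OrderDate) = 2008;
-- Sargable rewrite: a range predicate on the bare column allows an index seek
SELECT OrderID FROM dbo.Orders WHERE OrderDate >= '20080101' AND OrderDate < '20090101';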
Data generators are useful when you are planning a large project and need to perform scalability tests. If you are simply experimenting, trying to understand performance differences between different types of queries in an abstract sense, then I would have to say that you'll be better off just grabbing a copy of the Northwind or AdventureWorks database and banging around on that one - it's already normalized and indexed and you'll be able to glean meaningful information about query performance in an actual production database.
But even more importantly than that, before you even start to think about performance in a SQL database, you need to actually start reading about performance and understand what factors affect it. As I mentioned, the number one factor is indexing. Other factors include sort orders, selectivity, join types, cursor types, plan caching, and so on. Don't just go and start fooling around thinking you'll learn how best to optimize a database.
Educate yourself before fumbling around. I would start with the slightly-dated but still comprehensive Improving SQL Server Performance article from Microsoft Patterns and Practices. Also read about Indexing Basics and Covering Indexes. Then go to sites like SQL Server Performance and try to absorb whatever you can from the articles.
Then, and only then, should you start playing around with large-scale test data. If you're still not completely sure why a SELECT * can hurt performance then it is way too early to be running tests.
Take a look at http://databene.org/databene-benerator. It's free, quick, provides realistic data and you have the option of using your own plugins.

Any suggestions for identifying what indexes need to be created?

I'm in a situation where I have to improve the performance of about 75 stored procedures (created by someone else) used for reporting. The first part of my solution was creating about 6 denormalized tables that will be used for the bulk of the reporting. Now that I've created the tables I have the somewhat daunting task of determining what Indexes I should create to best improve the performance of these stored procs.
I'm curious to see if anyone has any suggestions for finding which columns would make sense to include in the indexes. I've contemplated using Profiler/DTA, or possibly fashioning some sort of query like the one below to figure out the popular columns.
SELECT so.name, COUNT(so.name) AS hits, so.xtype
FROM syscomments AS sc
INNER JOIN sysobjects AS so ON sc.id = so.id
WHERE sc.text LIKE '%ColumnName%'
AND so.xtype = 'P'
GROUP BY so.name, so.xtype
ORDER BY hits DESC
Let me know if you have any ideas that would help me not have to dig through these 75 procs by hand.
Also, inserts are only performed on this DB once per day so insert performance is not a huge concern for me.
Any suggestions for identifying what indexes need to be created?
Yes! Ask SQL Server to tell you.
SQL Server automatically keeps track of which indexes it could have used to improve query performance. This is already happening in the background for you. See this link:
http://msdn.microsoft.com/en-us/library/ms345417.aspx
Try running a query like this (taken right from msdn):
SELECT mig.*, statement AS table_name,
column_id, column_name, column_usage
FROM sys.dm_db_missing_index_details AS mid
CROSS APPLY sys.dm_db_missing_index_columns (mid.index_handle)
INNER JOIN sys.dm_db_missing_index_groups AS mig ON mig.index_handle = mid.index_handle
ORDER BY mig.index_group_handle, mig.index_handle, column_id;
Just be careful. I've seen people take the missing index views as Gospel, and use them to push out a bunch of indexes they don't really need. Indexes have costs, in terms of upkeep at insert, update, and delete time, as well as disk space and memory use. To make real, accurate use of this information you want to profile actual execution times of your key procedures both before and after any changes, to make sure the benefits of an index (singly or cumulative) aren't outweighed by the costs.
If you know all of the activity is coming from the 75 stored procedures, then I would use Profiler to track which stored procedures take the longest and are called the most. Once you know which ones they are, look at those procs and see what columns are used most often in the WHERE clause and JOIN ON sections. Most likely, those are the columns you will want to put non-clustered indexes on. If a set of columns is often used together, there is a good chance you will want to make one non-clustered index for the group. You can have many non-clustered indexes on a table (up to 249 in SQL Server 2005, 999 in 2008), but you probably don't want to put more than a handful on it. I think you will find the data is being searched and joined on the same columns over and over. Remember the 80/20 rule: you will probably get 80% of your speed increase from the first 20% of the work you do. There will be a point where you get very little speed increase for the added indexes; that is when you want to stop.
I concur with bechbd - use a good sample of your database traffic (by running a server trace on a production system during real office hours, to get the best snapshot), and let the Database Tuning Advisor analyze that sampling.
I agree with you - don't blindly rely on everything the Database Tuning Advisor tells you to do - it's just a recommendation, but the DTA can't take everything into account. Sure - by adding indices you can speed up querying - but you'll slow down inserts and updates at the same time.
Also - to really find out if something helps, you need to implement it, measure again, and compare - that's really the only reliable way. There are just too many variables and unknowns involved.
And of course, you can use the DTA to fine-tune a single query to perform outrageously well - but that might neglect the fact that the query is only ever called once per week, or that by tuning this one query and adding an index, you hurt other queries.
Index tuning is always a balance, a tradeoff, and a trial-and-error kind of game - it's not an exact science with a formula and a recipe book to strictly determine what you need.
You can use SQL Server Profiler to see what and how your tables are being called, and then feed that trace to the Database Engine Tuning Advisor to at least start you down the correct path. I know most DBAs will probably scream at me for recommending this, but for us non-DBA types it at least gives us a starting point.
If this is strictly a reporting database and you need performance, consider moving to a data warehouse design. A star or snowflake schema will outperform even a denormalized relational design when it comes to reporting.

SQL Server 2008 Performance: No Indexes vs Bad Indexes?

I'm running into a strange problem in Microsoft SQL Server 2008.
I have a large database (20 GB) with about 10 tables, and I'm attempting to make a point regarding how to correctly create indexes.
Here's my problem: on some nested queries I'm getting faster results without using indexes! It's close (one or two seconds), but in some cases using no indexes at all seems to make these queries run faster... I'm running a CHECKPOINT and DBCC DROPCLEANBUFFERS to reset the caches before running the scripts, so I'm kinda lost.
What could be causing this?
I know for a fact that the indexes are poorly constructed (think one index per relevant field), the whole point is to prove the importance of constructing them correctly, but it should never be slower than having no indexes at all, right?
EDIT: here's one of the guilty queries:
SET STATISTICS TIME ON
SET STATISTICS IO ON
USE DBX;
GO
CHECKPOINT;
GO
DBCC DROPCLEANBUFFERS;
GO
DBCC FREEPROCCACHE;
GO
SELECT * FROM Identifier WHERE CarId IN (SELECT CarId FROM Car WHERE ManufactId = 14) AND DataTypeId = 1
Identifier table:
- IdentifierId int not null
- CarId int not null
- DataTypeId int not null
- Alias nvarchar(300)
Car table:
- CarId int not null
- ManufactId int not null
- (several fields follow, all nvarchar(100))
Each of the columns listed above has an index, along with some composite indexes that combine two of them (e.g. CarId and DataTypeId).
Finally, the Identifier table has over a million entries, while the Car table has two or three million.
My guess would be that SQL Server is incorrectly deciding to use an index, which is then forcing a bookmark lookup*. Usually when this happens (the incorrect use of an index) it's because the statistics on the table are incorrect.
This can especially happen if you've just loaded large amounts of data into one or more of the tables. Or, it could be that SQL Server is just screwing up. It's pretty rare that this happens (I can count on one hand the times I've had to force index use over a 15 year career with SQL Server), but the optimizer is not perfect.
* A bookmark lookup is when SQL Server finds a row that it needs on an index, but then has to go to the actual data pages to retrieve additional columns that are not in the index. If your result set returns a lot of rows this can be costly and clustered index scans can result in better performance.
One way to get rid of bookmark lookups is to use covering indexes - an index which has the filtering columns first, but then also includes any other columns which you would need in the "covered" query. For example:
SELECT
my_string1,
my_string2
FROM
My_Table
WHERE
my_date > '2000-01-01'
covering index would be (my_date, my_string1, my_string2)
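On SQL Server 2005 and later you can also keep the index key narrow and cover the remaining columns with INCLUDE; a sketch for the query above:
-- Key column first for filtering; INCLUDE covers the selected columns without widening the key
CREATE NONCLUSTERED INDEX IX_My_Table_my_date_covering
ON My_Table (my_date)
INCLUDE (my_string1, my_string2);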
Indexes don't really have any benefit until you have many records. I say "many" because I don't really know where that tipping point is... it depends on the specific application and circumstances.
It does take time for SQL Server to work with an index. If that time exceeds the benefit, the index makes things slower. This is especially true in subqueries, where a small difference is multiplied.
If it works better without the index, leave out the index.
Try DBCC FREEPROCCACHE to clear the execution plan cache as well.
This is just a guess. Maybe if you have a lot of indexes, SQL Server spends time analyzing and picking one, and then rejecting all of them. If you had no indexes, the engine wouldn't have to waste its time with this vetting process.
How long this vetting process actually takes, I have no idea.
For some queries, it is faster to read directly from the table (clustered index scan), than it is to read the index and fetch records from the table (index scan + bookmark lookup).
Consider that a record lives alongside other records in a data page. The data page is the basic unit of IO. If the table is read directly, you can get 10 records for the cost of 1 IO. If the index is read first and records are then fetched from the table, you pay 1 IO per record.
Generally SQL Server is very good at picking the best way to access a table (direct vs index). There may be something in your query that is blinding the optimizer. Query hints can instruct the optimizer to use an index when it is wrong to do so. Join hints can alter the order or method of access of a table. Table variables have no statistics and are assumed by the optimizer to hold only a single row, so if you have a large table variable the optimizer may choose a bad plan.
One more thing to look out for: varchar vs nvarchar. Make sure all parameters are of the same type as the target columns. In the event of a type mismatch, data type precedence makes SQL Server convert the column side of the comparison, which defeats any index on that column.
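A quick sketch of the mismatch, using a hypothetical Customers table with an indexed varchar Name column:
-- The N'...' literal is nvarchar; data type precedence converts the varchar column,
-- so the index on Name cannot be used for a seek
SELECT CustomerID FROM dbo.Customers WHERE Name = N'Smith';
-- Types match, so an index seek is possible
SELECT CustomerID FROM dbo.Customers WHERE Name = 'Smith';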
Normally SQL Server does a good job at deciding what index to use if any to retrieve the data in the fastest way. Quite often it will decide not to use any indexes as it can retrieve small amounts of data from small tables quicker without going away to the index (in some situations).
It sounds like in your case SQL Server may not be taking the optimal route. Having lots of badly created indexes may be causing it to pick the wrong route to the data.
I would suggest viewing the query plan in Management Studio to check which indexes it is using and where the time is being taken. This should give you a good idea of where to start.
Another note: these indexes may have become fragmented over time and may no longer be performing at their best. It may be worth checking this and rebuilding some of them if needed.
Check the execution plan to see if it is using one of these indexes that you "know" to be bad?
Generally, indexing slows down writing data and can help to speed up reading data.
So yes, I agree with you. It should never be slower than having no indexes at all.
SQL Server actually creates some indexes for you (e.g. on the primary key).
Indexes can become fragmented.
Too many indexes will always reduce performance (there are FAQs on why not to index every column in the db), and there are some situations where indexes will always be slower.
Run:
SET SHOWPLAN_ALL ON
and then run your query with and without the index usage. This will let you see which indexes, if any, are being used, where the "work" is going on, etc.
No. SQL Server analyzes both the indexes and the statistics before deciding whether to use an index to speed up a query. It is entirely possible that running a non-indexed version is faster than an indexed version.
A few things to try
ensure the indexes are created, rebuilt, and reorganized (defragmented) - see the sketch after this list.
ensure that auto create statistics is turned on.
try using SQL Profiler to capture a tuning trace and then use the Database Engine Tuning Advisor to create your indexes.
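A sketch of the first two items, reusing the table and database names from the question (ALTER INDEX requires SQL Server 2005+):
-- Rebuild all indexes on a table (use REORGANIZE instead for light fragmentation)
ALTER INDEX ALL ON dbo.Identifier REBUILD;
-- Let the optimizer auto-create column statistics as needed
ALTER DATABASE DBX SET AUTO_CREATE_STATISTICS ON;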
Surprisingly, the MS Press examination book for SQL administration explains indexes and statistics pretty well.
See the Chapter 4 table of contents in this Amazon reader preview of the book:
Amazon Reader of Sql 2008 MCTS Exam Book
To me it sounds like your SQL is written very poorly and is thus not utilizing the indexes you are creating.
You can add indexes till you're blue in the face, but if your queries aren't optimized to use those indexes, then you won't get any performance gain.
Give us a sample of the queries you're using.
Alright...
Try this and see if you get any performance gains (with the PK indexes):
SELECT i.*
FROM Identifier i
INNER JOIN Car c ON i.CarID = c.CarID
WHERE c.ManufactId = 14 AND i.DataTypeId = 1

MS-SQL Server 2000 slow full text indexing

We have a full text index on a fairly large table of 633,569 records. The index is rebuilt from scratch as part of a maintenance plan every evening, after a bunch of DTS packages run that delete / insert records. Large chunks of data are deleted, then inserted (to take care of updates and inserts), so incremental indexing is not a possibility. Changing the packages to only delete when necessary is not a possibility either as it is a legacy application that will eventually be replaced.
The FTI includes two columns - one a varchar(50) not null and a varchar(255) null.
There is a clustered index on the primary key column, which is just an identity column. There is also a combined index on an integer column and the varchar(50) column mentioned above. This latter index was added for performance reasons.
The problem is that the re-indexing is painfully slow - about 8 hours.
The server is fairly robust (dual processor, 4 GB of RAM), and everything runs quickly apart from this re-indexing.
Any tips on how to speed this up?
UPDATE
Our client has access to the sql box. Turns out they turned on change tracking on the table that is part of the full text index. We turned this off, and the full population took less than 3 hours. Still not great, but better than 8.
UPDATE 2
The FTI is again taking ~8 hours to populate.
SQL Server's indexing is slow primarily because of its asynchronous data extraction scheme.
Use change tracking with the "update index in background" option.
The easiest way to improve the performance of full-text indexing is to use change tracking with the "update index in background" option. When you index a table (FTI, like "standard" SQL indexes, works on a per-table basis), you specify full population, incremental population, or change tracking. When you opt for full population, every row in the table you're full-text indexing is extracted and indexed. This is a two-step process.
First, you (or Enterprise Manager) run this system stored procedure:
sp_fulltext_getdata CatalogID, object_id
After all the result sets of timestamps and PK values are returned to MSSearch, MSSearch will issue another sp_fulltext_getdata, but this time once for every row in your table. So if you have 50 million rows in your database, this procedure will be issued 50 million times.
On the other hand, if you use an incremental population, MSSearch will issue an initial:
sp_fulltext_getdata CatalogID, object_id
for each row in the table that you're full-text indexing. So if you have 50 million rows in your database, this statement will also be issued 50 million times. Why? Because even with an incremental population, MSSearch must figure out exactly which rows have been added, updated, and deleted. Another problem with incremental populations is that they'll index or re-index a row even if the change was made to a column that you aren't full-text indexing.
Although an incremental population is generally faster than a full population, you can see that for large tables, either will be time-consuming.
I recommend you enable change tracking with background or scheduled updating. If you do, you'll see that MSSearch will first issue another:
sp_fulltext_getdata CatalogID, object_id
for every row in the table with change tracking enabled. Then, for every row that has a column that you're full-text indexing and that's modified after your initial full population, the row information will be written (in the database you're indexing) to the sysfulltextnotify table. MSSearch will then issue the following only for the rows that appear in this table, and will then remove them from the sysfulltextnotify table.
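For example (the table name is hypothetical; both actions exist in SQL Server 2000's sp_fulltext_table):
-- Enable change tracking, then propagate tracked changes in the background
EXEC sp_fulltext_table 'dbo.MyTable', 'start_change_tracking';
EXEC sp_fulltext_table 'dbo.MyTable', 'start_background_updateindex';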
Consider using a separate build server.
Tables that are heavily updated while you're indexing can create locking problems, so if you can live with a catalog that's periodically out of date and an MSSearch engine that's sometimes unavailable, consider using a separate build server. You do this by making sure the indexing server has a copy of the table to be full-text indexed and exporting the catalog. Clearly, if you need real-time or near real-time updates to your catalog, this is not a good solution.
Limit activity when population is running.
When population is running, don't run Profiler, and limit other database activity as much as possible. Profiler consumes significant resources.
Increase the number of threads for the indexing process.
Increase the number of threads you're running for the indexing process. The default is only five, and on quads or 8-ways you can bump this up to much higher values. MSSearch will, however, throttle itself if it's slurping too much data from SQL Server, so avoid doing this on single- or dual-processor systems.
Stop any anti-virus or open file-agent backup software.
If this is not possible, try to prevent them from scanning the temporary directories being used by SQL FTI and the catalog directories.
Place the catalog, temp directory, and pagefiles on their own controllers.
If you can make that investment, place the catalog on its own controller, preferably on a RAID-1 array. Place the temp directory on a RAID-1 array. Similarly, consider putting the pagefile on its own RAID-1 array with its own controller.
Consider creating secondary data files for tempdb - one per CPU/core.
Do you have enough RAM?
What are your file drive placements in terms of RAID configuration?
Are you seeing high tempDB activity?
(BTW, half a million records is not large; it's not even medium... ;) )
Is the system offline whilst you are doing the reindex, or live?
Are these the only items in your full text catalog? If not, you might want to consider separating them out from the remainder of your FTS data (might help with monitoring too). In the index, is the identity column configured as the unique key?
Can you quantify the large amounts of changes? There are 3 basic options for repopulation; you might want to try switching to full or incremental, as one may suit you better than the one you are using now. In my experience, incremental works well if changes to the total DB are less than 40% (I had a similar issue during large data take-ons into the database). If there is more than 40% change, then full is likely better (from my experience - I index documents, so it might work differently for you). The third option you might want to consider is change tracking with the scheduled update reindex option.
If you can take the server offline to users, then what performance settings do you have FTS running under whilst reindexing? You can check this on the Full-Text Search Service Properties / Performance tab - System Resource Usage is a slider (I think there are 4 or 5 positions). There is probably a system proc to change this; I don't know it and don't have a 2000 machine to check anymore.
FTS / reindexing loves RAM, and lots of it; the general rule of thumb is to have virtual memory 3x the physical memory. If you have several physical disks, then create several Pagefile.sys files, so that each Pagefile.sys file is placed on its own physical disk. Are you on NT or Windows 2000? Check that extended memory over 2 GB is actually configured properly.
Try putting the index on a separate physical disk than the database.
EDIT: Scott reports this is already the case.
Disallowing nulls in the column that currently allows them might not speed up the index, but in my experience it is better practice, especially for indexing purposes. The only columns I can justify allowing nulls in are date columns.
Here is a checklist of parameters for full-text indexing performance on SQL Server. Most of them have already been quoted and checked here. One of them I don't find in your comments though:
The SQL Server MAX SERVER MEMORY setting should be set manually (with dynamic memory allocation turned off) so that enough virtual memory is left for the Full-Text Search service to run. To achieve this, select a MAX SERVER MEMORY setting that leaves enough virtual memory for the Full-Text Search service to access an amount equal to 1.5 times the physical RAM in the server. This will take some trial and error.
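A sketch of the setting (the 2560 MB value is purely illustrative; the right number depends on your RAM and on that trial and error):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Cap the buffer pool so the Full-Text Search service has memory left over
EXEC sp_configure 'max server memory (MB)', 2560;
RECONFIGURE;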
Improve the Performance of Full-Text Indexes: http://msdn.microsoft.com/en-us/library/ms142560.aspx
