I have these tables:
Book:
Id | Name | ...
UrlRecord:
Id | EntityId | EntityName | Slug -- stores the ID-less URL (slug) for many other tables such as Category, Book, BookChapter, ...
So the data is huge.
EntityId contains the Id from the referenced table (BookId, CategoryId, ChapterId, ...).
Id | EntityId | EntityName | Slug
1 | 2 | Category | truyen-tranh
2 | 2 | BookChapter | chapter-one
....
SearchBookDetails stored procedure:
SELECT p.Source,
       (SELECT Slug
        FROM UrlRecord url
        WHERE EntityName = 'Category'
          AND EntityId = (SELECT TOP(1) CategoryId
                          FROM Book_Category_Mapping bc
                          WHERE bc.BookId = p.Id)
       ) AS CategorySeName
FROM ....
The performance is very slow, up to 22 seconds, when the CategorySeName subquery above is included, because it is a heavy query.
However, I don't know how to improve the performance while still returning the CategorySeName value as above.
Your problem is the correlated subquery. This is an extremely poor technique: it effectively turns your SELECT statement into a cursor and runs it row by agonizing row. Never use one on a large data set. Use a derived table, a CTE, or a temp table instead.
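For example, based on the tables shown (and assuming p aliases the Book table, since the FROM clause is elided), the correlated lookup could be pulled out into joins against a derived table along these lines. This is an untested sketch; MIN stands in for the original TOP(1) without ORDER BY:

SELECT p.Source,
       url.Slug AS CategorySeName
FROM Book p
LEFT JOIN (
    SELECT BookId, MIN(CategoryId) AS CategoryId   -- one representative category per book
    FROM Book_Category_Mapping
    GROUP BY BookId
) bc ON bc.BookId = p.Id
LEFT JOIN UrlRecord url
    ON url.EntityName = 'Category'
   AND url.EntityId = bc.CategoryId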
You use EntityId to point to N other tables (BookId, CategoryId, ChapterId, ...).
Your table design is wrong; it's actually impossible to set a foreign key.
It is wrong because that way you cannot enforce foreign keys.
And much worse, this will result in slow query performance, because no index was created automagically, as happens when you create a foreign key.
The query optimizer will thus come up with a very ugly execution plan, which explains why it is that slow.
If you must have an object id, you can create a view and do:
COALESCE(bookid, categoryid, chapterId) AS EntityId
but I very much doubt object_id, or EntityId as you call it, is of any use to you that way.
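A sketch of what that view could look like, assuming UrlRecord were redesigned with separate nullable foreign-key columns (BookId, CategoryId, ChapterId here are assumptions, not your current schema):

CREATE VIEW dbo.UrlRecordWithEntityId
AS
SELECT Id,
       Slug,
       COALESCE(BookId, CategoryId, ChapterId) AS EntityId
FROM dbo.UrlRecord;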
PS:
string comparison instead of using an id is always a bad idea
where EntityName = 'Category'
Combining those two antipatterns is an especially bad idea.
I'm using Access 2016 to view data from a table on our SQL server. I have a massive audit log where the record being viewed is represented by a "FolderID" field. I have another table that has values for the FolderID (represented as "fid") along with columns identifying the record's name and other ID numbers.
I want to be able to replace the FolderID value in the first table with CUSTOMER_NAME value from the second table so I know what's being viewed at a glance.
I've tried googling different join techniques to build a query that will accomplish this, but my google-fu is weak or I'm just not caffeinated enough today.
Table 1.
EventTime EventType FolderID
4/4/2019 1:23:39 PM A 12345
Table 2
fid acc Other_ID Third_ID CUSTOMER_NAME
12345 0 9875 12345678 Doe, John
Basically I want to query Table 2 to search for fid using the value in Table 1 for FolderID, and I want it to respond with the CUSTOMER_NAME associated with the FolderID/fid. The result would look like:
EventTime EventType FolderID
4/4/2019 1:23:39 PM A Doe, John
I'm stupid because I thought I was too smart to use the freaking Query Wizard. When I did, and it prompted me to create relationships and actually think about what I was doing, it came up with this.
SELECT [table1].EventTime, [table1].EventType, [table1].FolderID, [table1].ObjRef, [table1].AreaID, [table1].FileID, [table2].CUSTOMER_NAME, [table2].fid FROM [table2]
LEFT JOIN [table1] ON [table2].[fid] = [table1].[FolderID];
You can run this query and check if it helps!
Select EventTime, EventType , CUSTOMER_NAME AS FolderID FROM Table1, Table2 Where Table1.FolderID = Table2.fid;
Basically, 'AS' is doing what you want here as you can rename your column to whatever you want.
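If you prefer explicit JOIN syntax, the equivalent query (using the table names from the post) would be:

SELECT t1.EventTime, t1.EventType, t2.CUSTOMER_NAME AS FolderID
FROM Table1 AS t1
INNER JOIN Table2 AS t2 ON t1.FolderID = t2.fid;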
I've got 2 tables,
[Item] with field [name] nvarchar(255)
[Transaction] with field [short_description] nvarchar(3999)
And I need to do this:
Select [Transaction].id, [Item].id
From [Transaction] inner join [Item]
on [Transaction].[short_description] like ('%' + [Item].[name] + '%')
The above works if limited to a handful of items, but unfiltered is just going over 20 mins and I cancel.
I have a NC index on [name], but I cannot index [short_description] due to its length.
[Transaction] has 320,000 rows
[Items] has 42,000.
That's 13,440,000,000 combinations.
Is there a better way to perform this query ?
I did poke at full-text, but I'm not really that familiar with it; the answer was not jumping out at me there.
Any advice appreciated !!
Starting a comparison string with a wildcard (% or _) will NEVER use an index, and will typically be disastrous for performance. Your query will need to scan indexes rather than seek through them, so indexing won't help.
Ideally, you should have a third table that would allow a many-to-many relationship between Transaction and Item based on IDs. The design is the issue here.
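If the data could be captured that way, the mapping table might look roughly like this (a sketch; the table name is made up and int keys are assumed):

CREATE TABLE TransactionItem (
    TransactionId int NOT NULL REFERENCES [Transaction] (id),
    ItemId        int NOT NULL REFERENCES [Item] (id),
    PRIMARY KEY (TransactionId, ItemId)
);

-- The lookup then becomes two indexable equality joins instead of a LIKE scan:
SELECT t.id, i.id
FROM [Transaction] t
INNER JOIN TransactionItem ti ON ti.TransactionId = t.id
INNER JOIN [Item] i ON i.id = ti.ItemId;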
After some more sleuthing I have utilized some Fulltext features.
sp_fulltext_keymappings
gives me my transaction table id, along with the FT docID
(I found out that 'doc' = text field)
sys.dm_fts_index_keywords_by_document
gives me FT documentId along with the individual keywords within it
Once I had that, the rest was simple.
Although, I do have to look into the term 'keyword' a bit more... seems that definition can be variable.
This only works because the text I am searching for has no white space.
I believe that you could tweak the FTI configuration to work with other scenarios... but I couldn't promise.
I need to look more into Fulltext.
My current 'beta' code below.
CREATE TABLE #keyMap
(
docid INT PRIMARY KEY ,
[key] varchar(32) NOT NULL
);
DECLARE @db_id int = db_id(N'<database name>');
DECLARE @table_id int = OBJECT_ID(N'Transactions');
INSERT INTO #keyMap
EXEC sp_fulltext_keymappings @table_id;
select km.[key] as transaction_id, i.[id] as item_id
from
sys.dm_fts_index_keywords_by_document ( @db_id, @table_id ) kbd
INNER JOIN
#keyMap km ON km.[docid]=kbd.document_id
inner join [items] i
on kbd.[display_term] = i.name
;
My actual version of the code includes inserting the data into a final table.
Execution time is coming in at 30 seconds, which serves my needs for now.
I am trying to store metadata about documents in SQL Server. The documents themselves are stored in a document archive, which returns an identifier so I can get a document back by asking the archive for it by that identifier.
Our users would like to be able to search for documents based on different metadata. The metadata could be 1 attribute or 5 depending on the document type, and users should be able to create new document types from an admin site.
I can see two solutions here. One is that each document type gets its own metadata table, where all metadata attributes are predefined; if one should be added, a new column needs to be created, and if a new document type is created, a new metadata table needs to be created. Our DBA will freak out with a solution like this, and I also see a problem with indexes: if a document type has 5 different metadata attributes, it needs to be searchable with any 1 to 4 of them specified in the search. Then I would need to create indexes for all the different combinations of possible searches.
Here is an example (fictitious):
|documentId | Name | InsertDate | CustomerId | City
| 1 | John | 2014-01-01 | 2 | London
| 2 | John | 2014-01-20 | 5 | New York
| 3 | Able | 2014-01-01 | 10 | Paris
Here I could say:
Give me all documents where Name = 'John'
Give me all documents where Name = 'John' and CustomerId = 5
Give me all documents where InsertDate = '2014-01-01' and City = 'London'
These would be 3 different indexes, and then I haven't covered all possible combinations. This isn't practical.
So I am looking into the evil 'EAV' (anti)pattern.
Instead of having the metadata as columns, I can have them as rows.
|documentId | MetaAttribute | MetaValue
| 1 | Name | John
| 1 | InsertDate | 2014-01-01
| 1 | CustomerId | 2
| 1 | City | London
| 2 | Name | John
| 2 | InsertDate | 2014-01-20
| 2 | CustomerId | 5
| 2 | City | New York
| 3 | Name | Able
| 3 | InsertDate | 2014-01-01
| 3 | CustomerId | 10
| 3 | City | Paris
Here it's simple to create one index on MetaAttribute and MetaValue, and it's covered. If a new document type is created, new metadata can be registered for that document type in a MetaAttribute table (that contains all MetaAttributes for the different document types). So there's no need to create new tables or columns if a new document type is added or if a new attribute is added to a document type. Instead, all MetaValues must be strings :( and the SQL query to find the document id is a bit more complicated.
This is what I figured out. (In this example MetaAttribute is a string, but it would really be an ID referencing the MetaAttribute table.)
SELECT * FROM [Document]
WHERE ID IN (SELECT documentId FROM [MetaData]
WHERE ((MetaAttribute = 'Name' AND MetaValue = 'John')
OR (MetaAttribute = 'CustomerId' and MetaValue = '5'))
GROUP BY [documentId]
HAVING Count(1) = 2)
Here I need to ask whether Name = 'John' and CustomerId = 5. I do that by finding all metadata rows that match either condition (Name = 'John' or CustomerId = '5'), then grouping them on documentId and counting the number of rows in each group. If I get 2, then both Name = 'John' and CustomerId = '5' are true for that document. I return the documentId and use it to retrieve information about the document, like the document archive storage id.
There should be a better SQL statement for this, shouldn't there?
So my question is: is there a better approach than these two? Is the EAV pattern so bad that I should stick with the first approach and have a freaked-out DBA and "ten million indexes"?
We are talking about a system that will have around 10-20 million new records each month, and will contain data for at least 3 years, so the tables will be pretty big and good indexes are necessary for performance.
Best Regards
Magnus
The EAV model is appealing if you have unbounded attributes--that is, anyone can set up anything as an attribute. However, it sounds from your description that this is not the case--the possible document attributes come from a known and fairly limited set. If this is the case, routine normalization suggests the following:
-- One per document
CREATE TABLE Document
(
DocumentId -- primary key
,DocumentType
,<etc>
)
-- One per "type" of document
CREATE TABLE DocumentType
(
DocumentTypeId -- primary key
,Name
)
-- One per possible document attribute.
-- Note that multiple document types can reference the same attribute
CREATE TABLE DocumentAttributes
(
AttributeId -- primary key
,Name
)
-- This lists which attributes are used by a given type
CREATE TABLE DocumentTypeAttributes
(
DocumentTypeId
,AttributeId
-- compound primary key on both columns
-- foreign keys on both columns
)
-- This contains the final association of document and attributes
CREATE TABLE DocumentAttributeValues
(
DocumentId
,AttributeId
,Value
-- compound primary key on DocumentId, AttributeId
-- foreign keys on both columns to their respective parent tables
)
A tighter model with more robust keys could be implemented to ensure at the database level that an attribute cannot be assigned to a document with an “inappropriate” type.
Queries have to use joins, but (presumably) only the Documents and DocumentAttributeValues tables will ever be large. An index on (AttributeId + Value) facilitates lookups by attribute type, and depending on cardinality an index on (Value + AttributeId) could make searches for specific attributes quite efficient.
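For example (a sketch; pick index names and key order to match your real workload):

CREATE INDEX IX_DocAttrValues_Attribute_Value
    ON DocumentAttributeValues (AttributeId, Value);

-- Optionally, if Value is selective enough on its own:
CREATE INDEX IX_DocAttrValues_Value_Attribute
    ON DocumentAttributeValues (Value, AttributeId);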
(Edit)
Ooh, clever, I created two tables with the same name. I've renamed the last one to DocumentAttributeValues. (Free advice is clearly worth what you paid for it!)
This shows how ugly these systems can get in SQL, as you have to “look up” both attributes separately. On the plus side you don’t have to worry about “does this type go with this document”, as those rules have (better had) been applied when the data was loaded. Two examples:
This one spells everything out in joins, and as such I think it might perform worse than the next:
-- Top-down
SELECT do.DocumentId
from Documents do
inner join DocumentAttributes da1
on da1.Name = 'Name'
inner join DocumentAttributeValues dav1
on dav1.AttributeId = da1.AttributeId
and dav1.DocumentId = do.DocumentId
and dav1.Value = 'John'
inner join DocumentAttributes da2
on da2.Name = 'CustomerId'
inner join DocumentAttributeValues dav2
on dav2.AttributeId = da2.AttributeId
and dav2.DocumentId = do.DocumentId
and dav2.Value = '5'
This one picks out the attributes, then finds which documents have all of them. It might perform better, as there’s one less table to process:
-- Bottom-up
SELECT xx.DocumentId
from (-- All documents with name "John"
select dav.DocumentId
from DocumentAttributes da
inner join DocumentAttributeValues dav
on dav.AttributeId = da.AttributeId
where da.Name = 'Name'
and dav.Value = 'John'
-- This combines the two sets, with "all" keeping any duplicate entries
union all
-- All documents with CustomerId = "5"
select dav.DocumentId
from DocumentAttributes da
inner join DocumentAttributeValues dav
on dav.AttributeId = da.AttributeId
where da.Name = 'CustomerId'
and dav.Value = '5') xx -- Have to give the subquery an alias
group by xx.DocumentId
having count(*) = 2
While further refinements might be possible, the more attributes you’re filtering on, the uglier the queries will be. Five attributes max might work ok in SQL, but if you’ve got tons of attributes, a NoSQL solution might be what you’re looking for.
(Please note that, as with my original post, I have not tested this code, so there may be typos or subtle--or not so subtle--errors in here.)
SQL Server 2008+ offers three related features for dealing with such cases:
Sparse Columns which allow you to define hundreds of columns even if only a subset are used at a time
Column Sets allow you to group these columns and treat them as a group
Filtered indexes can index only the rows that actually have values in them.
These features allow you to work with more-or-less normal SQL statements to handle all metadata columns.
These features were specifically added to address the EAV/metadata scenario.
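A minimal sketch of how the three features fit together (the table and column names are illustrative, not from the question):

CREATE TABLE DocumentMetadata (
    DocumentId    int PRIMARY KEY,
    Name          nvarchar(100) SPARSE NULL,
    CustomerId    int           SPARSE NULL,
    City          nvarchar(100) SPARSE NULL,
    AllAttributes xml COLUMN_SET FOR ALL_SPARSE_COLUMNS
);

-- Filtered index: only the rows that actually have a CustomerId are indexed
CREATE INDEX IX_DocumentMetadata_CustomerId
    ON DocumentMetadata (CustomerId)
    WHERE CustomerId IS NOT NULL;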
EDIT
If you have a limited set of attributes that are always filled, there is no need for Sparse Columns or the EAV anti-pattern either.
You can create your tables as you normally would and add indexes to optimize the real workload you encounter. Certain types of queries will occur far more often than others and SQL Server's Index tuning advisor can propose the indexes and statistics to use based on a trace captured using SQL Server's Profiler.
It's quite possible that only a subset of the columns will accelerate searches and the rest can be added as include columns in the index.
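For example, an index tailored to the most common search might look like this (column names taken from the fictitious example above; adjust to the real workload):

CREATE INDEX IX_Document_Name_InsertDate
    ON Document (Name, InsertDate)
    INCLUDE (CustomerId, City);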
Full Text Search
A more powerful option is to use SQL Server's Full Text Search. This will allow you to execute queries using arbitrary attributes. This is another technique used by document/content management systems, ERPs and CRMs to handle arbitrary attributes.
With FTS you simply specify the columns to include in one FTS index and don't have to create separate indexes for each attribute.
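Setting it up might look roughly like this (a sketch; the catalog name and the unique key index name are assumptions):

CREATE FULLTEXT CATALOG DocumentCatalog AS DEFAULT;

CREATE FULLTEXT INDEX ON Production.Product (Name)
    KEY INDEX PK_Product_ProductID
    WITH CHANGE_TRACKING AUTO;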
You can use FTS predicates in SELECT queries like this:
SELECT Name, ListPrice
FROM Production.Product
WHERE ListPrice = 80.99
AND CONTAINS(Name, 'Mountain')
This can result in much simpler queries (you just write a modified SELECT) and simpler administration (no worries about column order in indexes; only one FTS index to manage).
I have a specific need for a computed column called ProductCode
ProductId | SellerId | ProductCode
1 | 1 | 000001
2 | 1 | 000002
3 | 2 | 000001
4 | 1 | 000003
ProductId is identity, increments by 1.
SellerId is a foreign key.
So my computed column ProductCode must look at how many products the Seller has and be in the format 000000. The problem here is how to know which Seller's products to look for.
I've written the T-SQL below, but it doesn't take into account how many products a seller has:
ALTER TABLE dbo.Product
ADD ProductCode AS RIGHT('000000' + CAST(ProductId AS VARCHAR(6)) , 6) PERSISTED
You cannot have a computed column based on data outside of the current row that is being updated. The best you can do to make this automatic is to create an after-trigger that queries the entire table to find the next value for the product code. But in order to make this work you'd have to use an exclusive table lock, which will utterly destroy concurrency, so it's not a good idea.
I also don't recommend using a view because it would have to calculate the ProductCode every time you read the table. This would be a huge performance-killer as well. Unless the value is saved in the database, never to be touched again, your product codes would be subject to spurious changes (as in the case of deleting an erroneously entered and never-used product).
Here's what I recommend instead. Create a new table:
dbo.SellerProductCode
SellerID LastProductCode
-------- ---------------
1 3
2 1
This table reliably records the last-used product code for each seller. On INSERT to your Product table, a trigger will update the LastProductCode in this table appropriately for all affected SellerIDs, and then update all the newly-inserted rows in the Product table with appropriate values. It might look something like the below.
See this trigger working in a Sql Fiddle
CREATE TRIGGER TR_Product_I ON dbo.Product FOR INSERT
AS
SET NOCOUNT ON;
SET XACT_ABORT ON;
DECLARE @LastProductCode TABLE (
SellerID int NOT NULL PRIMARY KEY CLUSTERED,
LastProductCode int NOT NULL
);
WITH ItemCounts AS (
SELECT
I.SellerID,
ItemCount = Count(*)
FROM
Inserted I
GROUP BY
I.SellerID
)
MERGE dbo.SellerProductCode C
USING ItemCounts I
ON C.SellerID = I.SellerID
WHEN NOT MATCHED BY TARGET THEN
INSERT (SellerID, LastProductCode)
VALUES (I.SellerID, I.ItemCount)
WHEN MATCHED THEN
UPDATE SET C.LastProductCode = C.LastProductCode + I.ItemCount
OUTPUT
Inserted.SellerID,
Inserted.LastProductCode
INTO @LastProductCode;
WITH P AS (
SELECT
NewProductCode =
L.LastProductCode + 1
- Row_Number() OVER (PARTITION BY I.SellerID ORDER BY P.ProductID DESC),
P.*
FROM
Inserted I
INNER JOIN dbo.Product P
ON I.ProductID = P.ProductID
INNER JOIN @LastProductCode L
ON P.SellerID = L.SellerID
)
UPDATE P
SET P.ProductCode = Right('00000' + Convert(varchar(6), P.NewProductCode), 6);
Note that this trigger works even if multiple rows are inserted. There is no need to preload the SellerProductCode table, either--new sellers will automatically be added. This will handle concurrency with few problems. If concurrency problems are encountered, proper locking hints can be added without deleterious effect as the table will remain very small and ROWLOCK can be used (except for the INSERT which will require a range lock).
Please do see the Sql Fiddle for working, tested code demonstrating the technique. Now you have real product codes that have no reason to ever change and will be reliable.
I would normally recommend using a view to do this type of calculation. The view could even be indexed if select performance is the most important factor (I see you're using persisted).
You cannot have a subquery in a computed column, which essentially means that you can only access the data in the current row. The only ways to get this count would be to use a user-defined function in your computed column, or triggers to update a non-computed column.
A view might look like the following:
create view ProductCodes as
select p.ProductId, p.SellerId,
(
select right('000000' + cast(count(*) as varchar(6)), 6)
from Product
where SellerID = p.SellerID
and ProductID <= p.ProductID
) as ProductCode
from Product p
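For comparison, the UDF route mentioned above might look roughly like this (an untested sketch; the function name is made up, and a computed column that does data access like this cannot be PERSISTED):

CREATE FUNCTION dbo.fn_ProductCode (@SellerId int, @ProductId int)
RETURNS varchar(6)
AS
BEGIN
    DECLARE @Count int;
    SELECT @Count = COUNT(*)
    FROM dbo.Product
    WHERE SellerId = @SellerId
      AND ProductId <= @ProductId;
    RETURN RIGHT('000000' + CAST(@Count AS varchar(6)), 6);
END;
GO

ALTER TABLE dbo.Product
ADD ProductCode AS dbo.fn_ProductCode(SellerId, ProductId);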
One big caveat to your product numbering scheme, and a downfall for both the view and UDF options, is that we're relying upon a count of rows with a lower ProductId. This means that if a Product is inserted in the middle of the sequence, it would actually change the ProductCodes of existing Products with a higher ProductId. At that point, you must either:
Guarantee the sequencing of ProductId (identity alone does not do this)
Rely upon a different column that has a guaranteed sequence (still dubious, but maybe CreateDate?)
Use a trigger to get a count at insert which is then never changed.
I have a large table with a lot of duplicate string data. To save space, I have moved the string data to a separate table. My tables now look something like this:
MyRecords
RecordId (int) | FieldA (int) | FieldB (datetime) | FieldC (...) | MyString1Id (int) | MyString2Id (int) | MyString3Id (int) | ...
MyStrings
StringId (int) | StringValue (varchar)
The MyRecords table has about 10 foreign keys to the string table. I have a stored procedure GetMyRecords that retrieves a list of records with the actual string values. This sp now has 10 joins to the string table for each string relation:
SELECT [Field1], [Field2], [Field3], ..., [Strings1].[StringValue], [Strings2].[StringValue], ...
FROM MyRecords INNER JOIN
MyStrings AS Strings1 ON MyRecords.MyString1Id = Strings1.StringId INNER JOIN
MyStrings AS Strings2 ON MyRecords.MyString2Id = Strings2.StringId INNER JOIN
MyStrings AS Strings3 ON MyRecords.MyString3Id = Strings3.StringId INNER JOIN
(more joins)
WHERE [Field1] = @Field1 AND [Field2] = @Field2
GetMyRecords is considerably slower than I would want because of all the joins. How could I improve performance for this sp? Can I somehow turn this into a single join?
The strings table has a clustered primary key on StringId, and all the where fields are in a nonclustered index on the MyRecords table.
You should probably take one further step toward normalization and create a join table. Instead of having the MyStringNId columns in MyRecords, have a third table:
CREATE TABLE RecordsStrings (
RecordId [theDataType] NOT NULL REFERENCES MyRecords (RecordId),
StringId [theDataType] NOT NULL REFERENCES MyStrings (StringId)
)
It is not convenient then to have all the strings in the same row of the returned data from the SELECT (though maybe there's a way to do this with a pivot somehow), so it's probably better to restructure the calling code to deal with results returned from:
SELECT [StringValue]
FROM [MyStrings] s
INNER JOIN [RecordsStrings] rs ON rs.StringId = s.StringId
INNER JOIN [MyRecords] r ON rs.RecordId = r.RecordId
WHERE r.Field1 = @Field1 AND r.Field2 = @Field2
If you need the other fields from MyRecords, you can select those as well, though they would appear in every relevant row. If you have multiple matches on Field1 and Field2, though, that may be helpful.
Can I somehow turn this into a single join?
If it is common for the same combination of strings to occur on multiple rows of MyRecords then it would make sense to store those combinations in a separate table. Then you could do a single join.
So long as you are only storing individual strings, then it is not possible to do this in a single join, since it has to search for each string separately.
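A sketch of that idea, with made-up names and only three string slots shown for brevity (MyRecords would carry a single StringCombinationId instead of the ten MyStringNId columns):

CREATE TABLE MyStringCombinations (
    StringCombinationId int IDENTITY PRIMARY KEY,
    String1Value varchar(255) NOT NULL,
    String2Value varchar(255) NOT NULL,
    String3Value varchar(255) NOT NULL
    -- ... one column per string slot
);

-- The lookup is then a single join:
SELECT r.[Field1], r.[Field2], c.String1Value, c.String2Value, c.String3Value
FROM MyRecords r
INNER JOIN MyStringCombinations c ON c.StringCombinationId = r.StringCombinationId
WHERE r.[Field1] = @Field1 AND r.[Field2] = @Field2;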
You can make the queries easier to read and write by creating a view of the table that includes all of the joins. This will not improve performance, but it will make your queries look a lot better.
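For example (a sketch showing just the first three string columns; the view name is made up):

CREATE VIEW dbo.MyRecordsWithStrings
AS
SELECT r.RecordId, r.[Field1], r.[Field2],
       s1.StringValue AS String1Value,
       s2.StringValue AS String2Value,
       s3.StringValue AS String3Value
       -- ... repeat for the remaining string columns
FROM MyRecords r
INNER JOIN MyStrings s1 ON r.MyString1Id = s1.StringId
INNER JOIN MyStrings s2 ON r.MyString2Id = s2.StringId
INNER JOIN MyStrings s3 ON r.MyString3Id = s3.StringId;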
How could I improve performance for this sp?
There are things you can do, depending on the form of the data.
If the strings in one field contain (mostly) different information than those in another field, then you could try putting them into different tables. There is a chance this could improve performance if the maximum length of one field is much smaller than the other, or if the number of distinct values for one field is much smaller than the other.
First step would be to run a performance analysis to see where the problems are.
Just on a lark though, you can pick up a bit of a performance gain by using (nolock) on the joined tables.