Why use CROSS APPLY to get the values in XML? - sql-server

A few days ago, I tried Extended Events as a replacement for SQL Server Profiler.
Then I wanted to load the generated .xel files into a SQL Server database using SQL.
What I find odd is that a lot of sites use the nodes() function with a CROSS APPLY to get the values out of the XML, even though it is slower than not using it.
Am I missing something?
A sample of my queries:

In short: you can cut bread with a chain saw, or you might use some bare wire, but don't blame the tool if the result is not convincing (even if it's very fast :-D)...
You need .nodes() if there are 1:n related sub-nodes in order to retrieve them as a derived table.
Many people use .nodes() just to get a bit more readability into the code, especially with very deeply nested elements, when the XPath grows to some size...
You get a named current node from where you can continue with XPath navigation or XQuery. With multiple nested XML you can dive deeper and deeper using a cascade of .nodes().
If you can be sure that there will be only one element below <event> with the name <data> and an attribute @name with the value NTCanonicalUserName, there's no need for .nodes(). Presumably the fastest would be to use something like
/event[1]/data[@name="NTCanonicalUserName"][1]/value[1]/text()[1]
Important!
If there might be more than one <event>, or more than one <data> with the given attribute, your "fast" call would return the first occurrence, wherever it is found. This might be the wrong one... And you would never find out that there is another one...
Your "fast" call is fast because the engine knows in advance that there will be exactly one result. With .nodes() the engine has to look for further occurrences, and it incurs the overhead of creating the structures for a derived table.
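A sketch of the two approaches, assuming a hypothetical table dbo.XelEvents with an XML column EventXml holding one <event> document per row:

```sql
-- Singleton path: fast, because the engine knows there is exactly one result,
-- but it silently returns only the first match
SELECT x.EventXml.value(
         '/event[1]/data[@name="NTCanonicalUserName"][1]/value[1]/text()[1]',
         'nvarchar(256)') AS UserName
FROM dbo.XelEvents AS x;

-- CROSS APPLY .nodes(): one row per matching <data> element,
-- so 1:n sub-nodes come back as a derived table instead of being dropped
SELECT d.n.value('(value/text())[1]', 'nvarchar(256)') AS UserName
FROM dbo.XelEvents AS x
CROSS APPLY x.EventXml.nodes('/event/data[@name="NTCanonicalUserName"]') AS d(n);
```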
General advice
If you know something the engine cannot know, but that would make the reading faster, you should help the engine...
Good to know
Internally, the real/native XML data type is not stored as the string representation you see, but as a hierarchically structured tree. Dealing with XML is astonishingly fast... In your case the search just jumps down the tree directly to the wanted element.

Related

What is the best solution to store a volunteers availability data in access 2016 [duplicate]

Imagine a web form with a set of check boxes (any or all of them can be selected). I chose to save them in a comma separated list of values stored in one column of the database table.
Now, I know that the correct solution would be to create a second table and properly normalize the database. It was quicker to implement the easy solution, and I wanted to have a proof-of-concept of that application quickly and without having to spend too much time on it.
I thought the saved time and simpler code were worth it in my situation. Is this a defensible design choice, or should I have normalized it from the start?
For some more context: this is a small internal application that essentially replaces an Excel file that was stored on a shared folder. I'm also asking because I'm thinking about cleaning up the program and making it more maintainable. There are some things in there I'm not entirely happy with, and one of them is the topic of this question.
In addition to violating First Normal Form because of the repeating group of values stored in a single column, comma-separated lists have a lot of other more practical problems:
Can’t ensure that each value is the right data type: no way to prevent 1,2,3,banana,5
Can’t use foreign key constraints to link values to a lookup table; no way to enforce referential integrity.
Can’t enforce uniqueness: no way to prevent 1,2,3,3,3,5
Can’t delete a value from the list without fetching the whole list.
Can't store a list longer than what fits in the string column.
Hard to search for all entities with a given value in the list; you have to use an inefficient table-scan. May have to resort to regular expressions, for example in MySQL:
idlist REGEXP '[[:<:]]2[[:>:]]' or in MySQL 8.0: idlist REGEXP '\\b2\\b'
Hard to count elements in the list, or do other aggregate queries.
Hard to join the values to the lookup table they reference.
Hard to fetch the list in sorted order.
Hard to choose a separator that is guaranteed not to appear in the values.
To solve these problems, you have to write tons of application code, reinventing functionality that the RDBMS already provides much more efficiently.
Comma-separated lists are wrong enough that I made this the first chapter in my book: SQL Antipatterns, Volume 1: Avoiding the Pitfalls of Database Programming.
There are times when you need to employ denormalization, but as @OMG Ponies mentions, these are exceptional cases. Any non-relational "optimization" benefits one type of query at the expense of other uses of the data, so be sure you know which of your queries need to be treated so specially that they deserve denormalization.
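The normalized alternative described above can be sketched like this (table and column names are hypothetical):

```sql
-- One row per selected checkbox instead of a comma-separated string
CREATE TABLE VolunteerAvailability (
    volunteer_id INT NOT NULL REFERENCES Volunteers (id),
    slot_id      INT NOT NULL REFERENCES AvailabilitySlots (id),
    PRIMARY KEY (volunteer_id, slot_id)  -- duplicates are impossible
);

-- "Who is available in slot 2?" becomes an indexable equality search,
-- not a LIKE or REGEXP over a string column
SELECT volunteer_id
FROM VolunteerAvailability
WHERE slot_id = 2;
```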
"One reason was laziness".
This rings alarm bells. The only reason you should do something like this is that you know how to do it "the right way" but you have come to the conclusion that there is a tangible reason not to do it that way.
Having said this: if the data you are choosing to store this way is data that you will never need to query by, then there may be a case for storing it in the way you have chosen.
(Some users would dispute the statement in my previous paragraph, saying that "you can never know what requirements will be added in the future". These users are either misguided or stating a religious conviction. Sometimes it is advantageous to work to the requirements you have before you.)
There are numerous questions on SO asking:
how to get a count of specific values from the comma-separated list
how to get records that have only the given 2/3/etc. specific values from that comma-separated list
Another problem with the comma separated list is ensuring the values are consistent - storing text means the possibility of typos...
These are all symptoms of denormalized data, and highlight why you should always model for normalized data. Denormalization can be a query optimization, to be applied when the need actually presents itself.
In general anything can be defensible if it meets the requirements of your project. This doesn't mean that people will agree with or want to defend your decision...
In general, storing data in this way is suboptimal (e.g. harder to do efficient queries) and may cause maintenance issues if you modify the items in your form. Perhaps you could have found a middle ground and used an integer representing a set of bit flags instead?
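The bit-flag middle ground could look roughly like this (schema and flag values are hypothetical):

```sql
-- Each checkbox maps to one bit: 1 = Mon, 2 = Tue, 4 = Wed, 8 = Thu, ...
CREATE TABLE Volunteers (
    id   INT PRIMARY KEY,
    days INT NOT NULL DEFAULT 0
);

-- Available Monday and Wednesday: 1 | 4 = 5
INSERT INTO Volunteers (id, days) VALUES (1, 5);

-- Everyone available on Wednesday; note the bitwise test defeats a plain index
SELECT id FROM Volunteers WHERE days & 4 = 4;
```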
Yes, I would say that it really is that bad. It's a defensible choice, but that doesn't make it correct or good.
It breaks first normal form.
A second criticism is that putting raw input results directly into a database, without any validation or binding at all, leaves you open to SQL injection attacks.
What you're calling laziness and lack of SQL knowledge is the stuff that neophytes are made of. I'd recommend taking the time to do it properly and view it as an opportunity to learn.
Or leave it as it is and learn the painful lesson of a SQL injection attack.
I needed a multi-value column; it could be implemented as an XML field.
It could be converted to a comma-delimited list as necessary, and queried as an XML list in SQL Server using XQuery.
By being an XML field, some of the concerns can be addressed.
With CSV: Can't ensure that each value is the right data type: no way to prevent 1,2,3,banana,5
With XML: values in a tag can be forced to be the correct type
With CSV: Can't use foreign key constraints to link values to a lookup table; no way to enforce referential integrity.
With XML: still an issue
With CSV: Can't enforce uniqueness: no way to prevent 1,2,3,3,3,5
With XML: still an issue
With CSV: Can't delete a value from the list without fetching the whole list.
With XML: single items can be removed
With CSV: Hard to search for all entities with a given value in the list; you have to use an inefficient table-scan.
With XML: xml field can be indexed
With CSV: Hard to count elements in the list, or do other aggregate queries.
With XML: not particularly hard
With CSV: Hard to join the values to the lookup table they reference.
With XML: not particularly hard
With CSV: Hard to fetch the list in sorted order.
With XML: not particularly hard
With CSV: Storing integers as strings takes about twice as much space as storing binary integers.
With XML: storage is even worse than CSV
With CSV: Plus a lot of comma characters.
With XML: tags are used instead of commas
In short, using XML gets around some of the issues with delimited lists and can be converted to a delimited list as needed.
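A minimal sketch of that in T-SQL (element names are hypothetical):

```sql
-- A multi-value column stored as XML instead of '1,2,3'
DECLARE @t TABLE (id INT, vals XML);
INSERT INTO @t VALUES (1, N'<v>1</v><v>2</v><v>3</v>');

-- Shred the list into rows with XQuery: one row per <v> element
SELECT t.id, v.n.value('text()[1]', 'int') AS val
FROM @t AS t
CROSS APPLY t.vals.nodes('/v') AS v(n);

-- Remove a single item in place, without rewriting the whole list
UPDATE @t SET vals.modify('delete /v[text()="2"]') WHERE id = 1;
```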
Yes, it is that bad. My view is that if you don't like using relational databases then look for an alternative that suits you better, there are lots of interesting "NOSQL" projects out there with some really advanced features.
Well, I've been using a key/value pair tab-separated list in an NTEXT column in SQL Server for more than 4 years now and it works. You do lose the flexibility of making queries, but on the other hand, if you have a library that serializes/deserializes the key-value pairs, then it's not that bad an idea.
I would probably take the middle ground: make each field in the CSV into a separate column in the database, but not worry much about normalization (at least for now). At some point, normalization might become interesting, but with all the data shoved into a single column you're gaining virtually no benefit from using a database at all. You need to separate the data into logical fields/columns/whatever you want to call them before you can manipulate it meaningfully at all.
If you have a fixed number of boolean fields, you could use an INT(1) NOT NULL (or BIT NOT NULL if it exists) or CHAR(0) (nullable) for each. You could also use a SET (I forget the exact syntax).
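For reference, the MySQL SET syntax alluded to above looks roughly like this (column and member names are hypothetical):

```sql
-- A SET column stores up to 64 predefined members packed into one integer
CREATE TABLE availability (
    volunteer_id INT PRIMARY KEY,
    days SET('Mon','Tue','Wed','Thu','Fri') NOT NULL DEFAULT ''
);

INSERT INTO availability VALUES (1, 'Mon,Wed');

-- FIND_IN_SET tests membership; note it cannot use an ordinary index
SELECT volunteer_id FROM availability WHERE FIND_IN_SET('Wed', days);
```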

SORT transformation takes forever

I have a big XML file, about 800 MB, with many tags and attributes. I need to pull different values from this file; therefore, I have used many SORT and JOIN transformations. All of them work well and do not take too much time, except for the last SORT transformation, shown in a red oval in the picture below. This one takes forever.
If I use a smaller XML file, it goes through and does not take too much time. So I assume the problem is the size of the dataset it's dealing with. I was wondering if you know of any way that can help me handle this situation, or of any property that needs to be changed to improve the performance of this particular case. I'm using Visual Studio 2015. Thanks!
You can't really do much to speed up the Sort transformation in SSIS. The best solution is to find a way to not have to use the Sort transformation at all. This usually means putting the data into an indexed database table, and doing the sort in a SELECT...ORDER BY query.
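That workaround might be sketched like this, assuming a hypothetical staging table for the shredded rows:

```sql
-- Land the shredded XML rows in a staging table first
CREATE TABLE dbo.StagingEvents (
    EventId   INT,
    EventTime DATETIME2,
    Payload   NVARCHAR(400)
);

-- An index on the sort key lets the database return ordered rows cheaply
CREATE INDEX IX_StagingEvents_Sort ON dbo.StagingEvents (EventTime, EventId);

-- The SSIS Sort transformation is then replaced by ORDER BY in the source query
SELECT EventId, EventTime, Payload
FROM dbo.StagingEvents
ORDER BY EventTime, EventId;
```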

Is storing XML with a standard structure in SQL Server a bad use of the XML datatype?

We have a table in our database that stores XML in one of the columns. The XML is always in the exact same format out of a set of 3 different XML formats which is received via web service responses. We need to look up information in this table (and inside of the XML field) very frequently. Is this a poor use of the XML datatype?
My suggestion is to create separate tables for each different XML structure, as we are only talking about 3, with a growth rate of maybe one new table a year.
I suppose ultimately this is a matter of preference, but here are some reasons I prefer not to store data like that in an XML field:
Writing queries against XML in TSQL is slow. Might not be too bad for a small amount of data, but you'll definitely notice it with a decent amount of data.
Sometimes there is special logic needed to work with an XML blob. If you store the XML directly in SQL, then you find yourself duplicating that logic all over. I've seen this before at a job where the guy that wrote the XML to a field was long gone and everyone was left wondering how exactly to work with it. Sometimes elements were there, sometimes not, etc.
Similar to (2), in my opinion it breaks the purity of the database. In the same way that a lot of people would advise against storing HTML in a field, I would advise against storing raw XML.
But despite these three points ... it can work and TSQL definitely supports queries against it.
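If the XML column stays, one common mitigation is to promote a frequently searched value into a persisted computed column so lookups hit a regular index; a sketch with hypothetical names:

```sql
-- XML methods can't appear directly in a computed column,
-- so wrap the extraction in a schema-bound scalar function
CREATE FUNCTION dbo.GetResponseId (@x XML)
RETURNS INT
WITH SCHEMABINDING
AS
BEGIN
    RETURN @x.value('(/response/id/text())[1]', 'int');
END;
GO

ALTER TABLE dbo.WebServiceResponses
    ADD ResponseId AS dbo.GetResponseId(ResponseXml) PERSISTED;

-- Frequent lookups now use a plain B-tree index instead of shredding XML per row
CREATE INDEX IX_Responses_ResponseId ON dbo.WebServiceResponses (ResponseId);
```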
Are you reading the field more than you are writing it?
You want to do the conversion on whichever step you do least often or the step that doesn't involve the user.

CakePHP virtual HABTM table/relation, something like fixture

First of all, I'd like to tell you that you're a terrific audience.
I'm making an application where I have a model Foo with table Foos, and I'd like to give Foo another parameter, a HABTM parameter, let's say Bar. But I'd rather not create a table for Bar, because Bar will have about 5 entries at the start, and in 5 years it may grow to maybe 7 entries, or not at all. So I don't see a need to create another table and make CakePHP look into it with another SELECT. Does anyone have an idea how this can be achieved?
One solution I can think of is creating a fixture for the Bars table and adding only the Bars_Foos table for real (it won't be big anyway). But I can't find a way to use test fixtures in a normal Controller.
A second solution is to save JSON or a serialized array in one field of Foo and move the logic to the model, but I don't know if that is the best solution. Something like a virtual field.
Real life example:
So I have, say, Bikes. And every Bike has its main_type, which is for now {"MTB","Road","Trekking","City","Downhill"}. I know that this list will not grow much over a long time; maybe 2 to 5 entries in a few years. It will stay relatively short.
(For those who say that there may be a hundred specialized bike types: I have another parameter column, specialized_type.)
It needs to be a HABTM relation, but the main_types table will be very small, so I'd like to avoid creating it and find a simpler solution.
Because
It burdens MySQL for such a small amount of data
It complicates MySQL queries
I have to make additional model for MainType
I have more models to unbind when I don't need most of data and would like use recursive
Insert here anything you'd like...
Judging from your real-life example, I'd say you're on the wrong track. The queries won't be complicated; CakePHP uses additional queries for HABTM relations, so it would be just one additional query, which shouldn't be very costly, and it's very easy to pare it down by using the Containable behaviour. And if you really need to use recursive only (for whatever reason), then it's just one single additional model to unbind; that doesn't seem like overkill to me.
This might not be what you wanted to hear, but I really think a proper database solution is better than trying to hack in "virtual data". Also note that fixtures as used in tests, only define data which is written to the database on the fly when running the test, so that would be definitely more costly than using data that already exists in the database.
Maybe you'll get a small performance boost for selects that do not query the main type when using an additional column to store the data, but you'll definitely lose all the flexibility that the RDBMS has to offer, including faster selects using proper indexing, affecting multiple records by updating a single related value, etc. That doesn't sound like a good trade-off to me. Think about it: how would you select all Downhill and Trekking bikes when this information is stored as a string in a single column? You would probably end up using ugly LIKE selects.
Now wait, there's a SET data type in MySQL that can hold multiple values. Right, and it looks easier and less complex. Right, but in the background it isn't: while a complex-looking join query can be pretty fast with proper indexing, a query on the SET type will have to scan every single row, since the data stored in the column cannot be indexed appropriately in order to make more specific selects.
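The contrast can be sketched like this (hypothetical MySQL schema):

```sql
-- Junction table: the lookup is an indexed equality search
SELECT b.id
FROM bikes AS b
JOIN bikes_main_types AS bmt ON bmt.bike_id = b.id
JOIN main_types AS mt ON mt.id = bmt.main_type_id
WHERE mt.name = 'Downhill';

-- SET column: every row must be scanned and its member list decoded
SELECT id FROM bikes WHERE FIND_IN_SET('Downhill', main_types);
```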
In the end it probably depends on your data, so I'd suggest testing both methods in your specific environment and see how they compare under workload.

Identifying some parts of Query Plan in SQL Server

I have recently encountered a problem I am having a hard time working around. I have to tune a rather nasty query that extensively uses XML, in many forms and on several layers. The problem is that while it is easy to spot the slow part of the whole process, a Table Valued Function [XML Reader], it is rather hard to map it to any specific part of the query. The properties do not indicate which XML column this particular object works on, and I'm really puzzled why that is. Generally, it is relatively easy to map an object to a part of a query, but these XML objects are giving me pause.
Thanks in advance.