String Building in SQL Server - sql-server

According to the Microsoft documentation, the recipient list passed to sp_send_dbmail must be a semicolon-delimited list. However, in many of my applications I am unable to guarantee that this list will be clean. I need a robust way of ensuring a syntactically valid list of emails so that my applications will not break if it isn't.
I've been looking into XML PATH, STUFF, and other methods, but the syntax is hideously confusing. Does anyone have a handy method in their bag of tricks (no stored procedures or functions please, I'd like an out-of-the-box solution)?
Conceptually I thought it best to parse the CSV list into a temp table, handle each email separately, then recompose the list using a semicolon delimiter instead. I suspect there is a better method though.
Again, I would like an out-of-the-box solution if at all possible. :)
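For what it's worth, here is a minimal sketch of that split-and-recompose idea, assuming SQL Server 2017 or later so that the built-in STRING_SPLIT and STRING_AGG functions are available; the variable names and the crude LIKE validation are only placeholders to adapt to your data.

-- Sketch only: assumes SQL Server 2017+ (STRING_SPLIT / STRING_AGG available).
DECLARE @dirty_list nvarchar(max) = N'a@example.com, b@example.com;; c@example.com ,';
DECLARE @clean_list nvarchar(max);

-- normalise both separators to a comma, split, trim, drop anything that does not
-- look vaguely like an address, then rebuild the list with semicolons
SELECT @clean_list = STRING_AGG(CONVERT(nvarchar(max), LTRIM(RTRIM(value))), ';')
FROM STRING_SPLIT(REPLACE(@dirty_list, ';', ','), ',')
WHERE LTRIM(RTRIM(value)) LIKE '%_@_%.__%';

-- @clean_list is now a semicolon-delimited list suitable for sp_send_dbmail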

Related

What is the best solution to store a volunteer's availability data in Access 2016? [duplicate]

Imagine a web form with a set of check boxes (any or all of them can be selected). I chose to save them in a comma separated list of values stored in one column of the database table.
Now, I know that the correct solution would be to create a second table and properly normalize the database. It was quicker to implement the easy solution, and I wanted to have a proof-of-concept of that application quickly and without having to spend too much time on it.
I thought the saved time and simpler code was worth it in my situation, is this a defensible design choice, or should I have normalized it from the start?
Some more context: this is a small internal application that essentially replaces an Excel file that was stored on a shared folder. I'm also asking because I'm thinking about cleaning up the program and making it more maintainable. There are some things in there I'm not entirely happy with, and one of them is the topic of this question.
In addition to violating First Normal Form because of the repeating group of values stored in a single column, comma-separated lists have a lot of other more practical problems:
Can’t ensure that each value is the right data type: no way to prevent 1,2,3,banana,5
Can’t use foreign key constraints to link values to a lookup table; no way to enforce referential integrity.
Can’t enforce uniqueness: no way to prevent 1,2,3,3,3,5
Can’t delete a value from the list without fetching the whole list.
Can't store a list longer than what fits in the string column.
Hard to search for all entities with a given value in the list; you have to use an inefficient table-scan. May have to resort to regular expressions, for example in MySQL:
idlist REGEXP '[[:<:]]2[[:>:]]' or in MySQL 8.0: idlist REGEXP '\\b2\\b'
Hard to count elements in the list, or do other aggregate queries.
Hard to join the values to the lookup table they reference.
Hard to fetch the list in sorted order.
Hard to choose a separator that is guaranteed not to appear in the values.
To solve these problems, you have to write tons of application code, reinventing functionality that the RDBMS already provides much more efficiently.
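For contrast, here is a minimal sketch of the normalized alternative (all table and column names are made up for illustration); the constraints alone take care of the data-type, referential-integrity and uniqueness points above.

-- Sketch only; names are hypothetical.
CREATE TABLE LookupValue (value_id int PRIMARY KEY, label varchar(50) NOT NULL);
CREATE TABLE Entity      (entity_id int PRIMARY KEY);
CREATE TABLE EntityValue (
    entity_id int NOT NULL REFERENCES Entity (entity_id),      -- referential integrity enforced
    value_id  int NOT NULL REFERENCES LookupValue (value_id),  -- correct type, FK to the lookup
    PRIMARY KEY (entity_id, value_id)                          -- duplicates impossible
);

-- searching, joining and sorting become ordinary indexed queries:
SELECT entity_id FROM EntityValue WHERE value_id = 2;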
Comma-separated lists are wrong enough that I made this the first chapter in my book: SQL Antipatterns, Volume 1: Avoiding the Pitfalls of Database Programming.
There are times when you need to employ denormalization, but as @OMG Ponies mentions, these are exceptional cases. Any non-relational “optimization” benefits one type of query at the expense of other uses of the data, so be sure you know which of your queries need to be treated so specially that they deserve denormalization.
"One reason was laziness".
This rings alarm bells. The only reason you should do something like this is that you know how to do it "the right way" but you have come to the conclusion that there is a tangible reason not to do it that way.
Having said this: if the data you are choosing to store this way is data that you will never need to query by, then there may be a case for storing it in the way you have chosen.
(Some users would dispute the statement in my previous paragraph, saying that "you can never know what requirements will be added in the future". These users are either misguided or stating a religious conviction. Sometimes it is advantageous to work to the requirements you have before you.)
There are numerous questions on SO asking:
how to get a count of specific values from the comma-separated list
how to get records that have only the same 2/3/etc. specific values from that comma-separated list
Another problem with the comma separated list is ensuring the values are consistent - storing text means the possibility of typos...
These are all symptoms of denormalized data, and highlight why you should always model for normalized data. Denormalization can be a query optimization, to be applied when the need actually presents itself.
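To illustrate the kind of workaround those questions end up with, finding or counting rows whose list contains a given value forces a string hack like the following (hypothetical table t with a CSV column idlist); nothing here can use an index.

-- wrap the list in commas so that '2' does not also match '12' or '21'
SELECT COUNT(*)
FROM t
WHERE ',' + idlist + ',' LIKE '%,2,%';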
In general anything can be defensible if it meets the requirements of your project. This doesn't mean that people will agree with or want to defend your decision...
In general, storing data in this way is suboptimal (e.g. harder to do efficient queries) and may cause maintenance issues if you modify the items in your form. Perhaps you could have found a middle ground and used an integer representing a set of bit flags instead?
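A rough sketch of that bit-flag middle ground (the table, column and flag values are all hypothetical):

CREATE TABLE Volunteer (
    volunteer_id       int PRIMARY KEY,
    availability_flags int NOT NULL DEFAULT 0   -- 1 = Mon, 2 = Tue, 4 = Wed, 8 = Thu, ...
);

INSERT INTO Volunteer (volunteer_id, availability_flags) VALUES (42, 1 | 4);  -- Mon + Wed

SELECT volunteer_id FROM Volunteer WHERE availability_flags & 4 = 4;          -- available on Wed

Note that the bitwise test cannot use an ordinary index, so this only pays off for small, fixed sets of flags.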
Yes, I would say that it really is that bad. It's a defensible choice, but that doesn't make it correct or good.
It breaks first normal form.
A second criticism is that putting raw input results directly into a database, without any validation or binding at all, leaves you open to SQL injection attacks.
What you're calling laziness and lack of SQL knowledge is the stuff that neophytes are made of. I'd recommend taking the time to do it properly and view it as an opportunity to learn.
Or leave it as it is and learn the painful lesson of a SQL injection attack.
I needed a multi-value column; it could be implemented as an XML field.
It can be converted to a comma-delimited list as necessary, and an XML list can be queried in SQL Server using XQuery.
By being an XML field, some of the concerns can be addressed.
With CSV: Can't ensure that each value is the right data type: no way to prevent 1,2,3,banana,5
With XML: values in a tag can be forced to be the correct type
With CSV: Can't use foreign key constraints to link values to a lookup table; no way to enforce referential integrity.
With XML: still an issue
With CSV: Can't enforce uniqueness: no way to prevent 1,2,3,3,3,5
With XML: still an issue
With CSV: Can't delete a value from the list without fetching the whole list.
With XML: single items can be removed
With CSV: Hard to search for all entities with a given value in the list; you have to use an inefficient table-scan.
With XML: xml field can be indexed
With CSV: Hard to count elements in the list, or do other aggregate queries.
With XML: not particularly hard
With CSV: Hard to join the values to the lookup table they reference.
With XML: not particularly hard
With CSV: Hard to fetch the list in sorted order.
With XML: not particularly hard
With CSV: Storing integers as strings takes about twice as much space as storing binary integers.
With XML: storage is even worse than a csv
With CSV: Plus a lot of comma characters.
With XML: tags are used instead of commas
In short, using XML gets around some of the issues with a delimited list and can be converted to a delimited list as needed.
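A small sketch of what that looks like in practice (the element names are made up; an XML schema collection could additionally enforce the value types):

DECLARE @t TABLE (id int, ids xml);
INSERT INTO @t VALUES (1, N'<i>1</i><i>2</i><i>3</i>');

-- shred the list into rows
SELECT t.id, x.i.value('.', 'int') AS list_value
FROM @t AS t
CROSS APPLY t.ids.nodes('/i') AS x(i);

-- find rows whose list contains the value 2
SELECT id FROM @t WHERE ids.exist('/i[. = 2]') = 1;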
Yes, it is that bad. My view is that if you don't like using relational databases then look for an alternative that suits you better, there are lots of interesting "NOSQL" projects out there with some really advanced features.
Well, I've been using a key/value pair tab-separated list in an NTEXT column in SQL Server for more than 4 years now and it works. You do lose the flexibility of making queries, but on the other hand, if you have a library that persists/depersists the key-value pairs, then it's not that bad an idea.
I would probably take the middle ground: make each field in the CSV into a separate column in the database, but not worry much about normalization (at least for now). At some point, normalization might become interesting, but with all the data shoved into a single column you're gaining virtually no benefit from using a database at all. You need to separate the data into logical fields/columns/whatever you want to call them before you can manipulate it meaningfully at all.
If you have a fixed number of boolean fields, you could use an INT(1) NOT NULL (or BIT NOT NULL if it exists) or CHAR(0) (nullable) for each. You could also use a SET (I forget the exact syntax).

Why use cross Apply to get the values in XML?

A few days ago, I tried Extended Events to replace the SQL Server Profiler.
Then I wanted to load the generated .xel files into the SQL Server database with SQL.
What I find odd is that a lot of sites use the nodes() function with a CROSS APPLY to get the values out of the XML, even though it is slower than not using it.
Am I missing something?
Sample of my queries
In short: you can cut bread with a chain saw or you might use some bare wire, but don't blame the tool if the result is not convincing (even if it's very fast :-D)...
You need .nodes() if there are 1:n related sub-nodes in order to retrieve them as a derived table.
Many people use .nodes() just to get a bit more readability into the code, especially with very deeply nested elements, when the XPath grows to some size...
You get a named current node from where you can continue with XPath navigation or XQuery. With multiple nested XML you can dive deeper and deeper using a cascade of .nodes().
If you can be sure that there will be only one element below <event> with the name <data> and an attribute @Name with the value NTCanonicalUserName, there's no need for .nodes(). Presumably the fastest would be to use something like
/event[1]/data[@Name="NTCanonicalUserName"][1]/value[1]/text()[1]
Important!
If there might be more than one <event> or more than one <data> with the given attribute, your "fast" call would return the first occurrence, wherever it is found. This might be the wrong one... And you would never find out that there is another one...
Your "fast" call is fast because the engine knows in advance that there will be exactly one result. With .nodes() the engine will have to look for further occurrences and will incur the overhead of creating the structures for a derived table.
General advice
If you know something the engine cannot know, but that would make the reading faster, you should help the engine...
Good to know
Internally the real / native XML data type is not stored as the string representation you see, but as a hierarchically structured tree. Dealing with XML is astonishingly fast... In your case the search is just jumping down the tree directly to the wanted element.
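To make the difference concrete, here is a small sketch; the XML is a cut-down, hypothetical event with simplified names. The first query is the direct singleton path, the second is the .nodes() form you would need if <data> could occur more than once per event.

DECLARE @x xml = N'<event><data name="NTCanonicalUserName"><value>DOMAIN\user</value></data></event>';

-- direct path: fine when exactly one matching <data> can exist
SELECT @x.value('(/event[1]/data[@name="NTCanonicalUserName"][1]/value[1]/text())[1]', 'nvarchar(256)') AS user_name;

-- .nodes(): every matching <data> becomes a row of a derived table
SELECT d.n.value('(value/text())[1]', 'nvarchar(256)') AS user_name
FROM @x.nodes('/event/data[@name="NTCanonicalUserName"]') AS d(n);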

Is there a way to translate database table rows into Prolog facts?

After doing some research, I was amazed with the power of Prolog to express queries in a very simple way, almost like telling the machine verbally what to do. This happened because I've become really bored with Propel and PHP at work.
So, I've been wondering if there is a way to translate database table rows (Postgres, for example) into Prolog facts. That way, I could stop using so many boring joins and using ORM, and instead write something like this to get what I want:
mantenedora_ies(ID_MANTENEDORA, ID_IES) :-
    papel_pessoa(ID_PAPEL_MANTENEDORA, ID_MANTENEDORA, 1),
    papel_pessoa(ID_PAPEL_IES, ID_IES, 6),
    relacionamento_pessoa(_, ID_PAPEL_IES, ID_PAPEL_MANTENEDORA, 3).
To see why I've become bored, look at this post. The code there would be replaced by the simple lines above, which are much easier to read and understand. I'm just curious about this, since it will be impossible to replace things around here.
It would also be cool if something like that could be done in PHP. Does anyone know of something like that?
Check the ODBC interface of SWI-Prolog (maybe there is something equivalent for other Prolog implementations too):
http://www.swi-prolog.org/pldoc/doc_for?object=section%280,%270%27,swi%28%27/doc/packages/odbc.html%27%29%29
I can think of a few approaches to this -
On initialization, call a method that selects all data from a table and asserts it into the Prolog database. Do this for each table. You will need to declare the shape of each row as :- dynamic ies_row/4 etc.
You could modify load_files by overriding user:prolog_load_file. From this activity you could do something similar to #1. This has the benefit of looking like a load_files call. http://www.swi-prolog.org/pldoc/man?predicate=prolog_load_file%2F2 ... This documentation mentions library(http_load), but I cannot find this anywhere (I was interested in this recently)!
There is the Draxler Prolog-to-SQL compiler, which translates a pattern (like the conjunction you wrote) into the more verbose SQL joins. You can find more info in the related post (Prolog to SQL converter).
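To give an idea of what such a compiler produces, the conjunction above corresponds roughly to a join like the one below; be aware that the table shapes and column names are only guesses inferred from the argument positions of the Prolog goals.

SELECT pm.id_pessoa AS id_mantenedora,
       pi.id_pessoa AS id_ies
FROM papel_pessoa AS pm                        -- papel_pessoa(ID_PAPEL_MANTENEDORA, ID_MANTENEDORA, 1)
JOIN papel_pessoa AS pi ON pi.tipo = 6         -- papel_pessoa(ID_PAPEL_IES, ID_IES, 6)
JOIN relacionamento_pessoa AS r                -- relacionamento_pessoa(_, ID_PAPEL_IES, ID_PAPEL_MANTENEDORA, 3)
    ON r.id_papel_ies = pi.id_papel
   AND r.id_papel_mantenedora = pm.id_papel
   AND r.tipo = 3
WHERE pm.tipo = 1;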
But beware that Prolog has its weaknesses too, especially regarding aggregates. Without a library, getting sums, counts and the like is not very easy. And such libraries aren't so common, or easy to use.
I think you could try to specialize the PHP DB interface for equijoins, using the built-in features that allow you to shorten the query text (when this results in more readable code). Working in SWI-Prolog / ODBC, where (like in PHP) you need to compose SQL, I effectively found myself working that way, to handle something very similar to what you have shown in the other post.
Another approach I found useful: I wrote a parser for the subset of SQL used by the MySQL backup interface (PHPMyAdmin, really). So I routinely dump my CMS's DB locally, load it into memory, apply whatever task I need, computing and writing (or applying) the insert/update/delete statements, then upload these. This can be done due to the limited size of the DB, which fits in memory. I've developed, and am now maintaining, this small e-commerce site with this naive approach.
Writing Prolog from PHP should not be too difficult: I'd try to modify an existing interface, like the awesome Adminer, which already offers a choice among basic serialization formats.

Extracting information from millions of simple but inconsistent text files

We have millions of simple txt documents containing various data structures we extracted from PDF; the text is printed line by line, so all formatting is lost (because when we tried tools to maintain the format they just messed it up). We need to extract the fields and their values from these text documents, but there is some variation in the structure of these files (a new line here and there, noise on some sheets so spellings are incorrect).
I was thinking we would create some sort of template structure with information about the coordinates (line, word/words number) of keywords and values, and use this information to locate and collect keyword values, using various algorithms to make up for inconsistent formatting.
Is there any standard way of doing this? Any links that might help? Any other ideas?
The noise can be corrected or ignored by using fuzzy text-matching tools like agrep: http://www.tgries.de/agrep/
However, the problem with extra new-lines will remain.
One technique that I would suggest is to limit error propagation in a similar way to what compilers do. For example, you try to match your template or a pattern, and you can't. Later on in the text there is a sure match, but it might be a part of the current unmatched pattern.
In this case, the sure match should be accepted and the chunk of text that was left unmatched should be set aside for future processing. This will enable you to skip errors that are too hard to parse.
Larry Wall's Perl is your friend here. This is precisely the sort of problem domain at which it excels.
Sed is OK, but for this sort of thing, Perl is the bee's knees.
While I second the recommendations for the Unix command-line and for Perl, a higher-level tool that may help is Google Refine. It is meant to handle messy real-world data.
I would recommend using graph regular expressions here, with very weak rules and a final acceptance predicate. That way you can write fuzzy matching at the token level, then at the line level, etc.
I suggest the Talend data integration tool. It is open source (i.e. FREE!). It is built on Java and you can customize your data integration project any way you like by modifying the underlying Java code.
I used it and found it very helpful on low-budget, highly complex data integration projects. Here's the link to their web site: Talend.
Good luck.

I need help splitting addresses (number, addition, etc)

Apologies for the fuzzy title...
My problem is this; I have a SQL Server table persons with about 100,000 records. Every person has an address, something like "Nieuwe Prinsengracht 12 - III". The customer now wants to separate the street from the number and addition (so each address becomes two or three fields). The problem is that we cannot be sure of the format the current address is in; it could also simply be something like "Velperweg 30".
The only thing we do know about it is that it's a piece of text, followed by a number, possibly followed by some more text (which can contain a number).
A possible solution would be to do this with regexes, but I would much (much, much) rather do this using a query. Is there any way to use regexes in a query? Or do you have any other suggestions how to solve such a problem?
Something like this maybe?
SELECT
    substring([address_field], 1, patindex('%[1-9]%', [address_field]) - 1) as [STREET],
    substring([address_field], patindex('%[1-9]%', [address_field]), len([address_field])) as [NUMBER_ADDITION]
FROM
    [table]
It relies on the assumption that the [street] field will not contain any numbers, and the [number_addition] field will begin with a number.
SQL Server and T-SQL are rather limited in their string-processing prowess - if you're really serious about heavy lifting and regexes etc., your best bet is probably creating an assembly in C# or VB.NET that does all that tricky regex business, and then deploying that into SQL-CLR and using the functions from T-SQL.
"Pure" T-SQL cannot really handle much string manipulation - it has SUBSTRING and CHARINDEX, but that's about it.
In answer to your "Is there any way to use regexes in a query?": yes, there is, but it needs a little .NET knowledge. Create a CLR assembly with a user-defined function that does your regex work. Visual Studio 2008 has a template project for this. Deploy it to your SQL Server and call it from your query.
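Once such an assembly is deployed, the call from T-SQL could look something like this; dbo.RegexMatch is a hypothetical CLR scalar function you would have to write yourself (it is not built in), and the column name is assumed.

-- hypothetical CLR UDF returning the first match of the pattern, or NULL
SELECT
    dbo.RegexMatch(address, N'^\D+')  AS street_part,   -- leading non-digits
    dbo.RegexMatch(address, N'\d.*$') AS number_part    -- from the first digit onward
FROM persons;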
Name and Address parsing and standardization is probably one of the most difficult problems we can encounter as programmers for precisely the reasons you've mentioned.
I assume that, whoever you work for, their main business is not address parsing. My advice is to buy a solution rather than build one of your own.
I am familiar with this company. Your address examples appear to be non US or Canadian so I don't know if their products would be useful, but they may be able to point you to another vendor.
Other than a user of their products I am not affiliated with them in any way.
This sounds like the common "take a piece of complex text that could look like anything and make it look like what we now want it to look like" problem. These tend to be very hard to do using only T-SQL (which does not have native regex functionality). You will probably have to work with complex code outside of the database to solve this problem.
TGnat is correct. Address standardization is complicated.
I've encountered this problem before.
If your customer doesn't want to spring for the custom software, develop a simple GUI that allows a person to take an address and split it manually. You'd delete the address row with the old format and insert the row with the new address format.
It wouldn't take long for typists familiar with your addresses to manually make 100,000 changes. Of course, it's up to the customer if he wants to spend the money on custom software or typists.
But you shouldn't be stuck with the data cleaning bill, either.
I realize that this is an old question, but for future reference, I still decided to add an answer using regex (also so I don't forget it myself). Today, I ran into a similar problem in Excel, in which I had to split the address in street and house number too. In the end, I've copied the column to SublimeText (a shareware text editor), and used a regex to do the job (CTRL-H, enable regex):
FIND: ^('?\d?\d?\d?['-\.a-zA-Z ]*)(\d*).*$
REPLACE FOR THE HOUSE NUMBER: $2
REPLACE FOR THE STREET NAME: $1
Some notes:
Some addresses started with a quote, e.g. 't Hofje, so I needed to add '?
Some addresses contained digits at the start, e.g. 17 Septemberplein or 2e Molendwarsstraat, so I added \d?\d?\d?
Some addresses contained a - (e.g. Willem-Alexanderlaan) or a '
