Combining multiple text fields into one text field - sql-server

I'm trying to merge multiple text columns into one concatenated text column. Each of the fields was previously used for a different description, but per new requirements, I need all of those fields to be combined into one.
I tried converting them to varchar first and then concatenating, but some of the rows have values in these columns that are longer than varchar's 8,000-character limit, so they get truncated in the result.
Is there a way to combine multiple text fields in SQL Server 2000?

The best advice I have for you is to either perform the concatenation in your middle or presentation tier (or add an abstraction layer that allows this, such as routing your query through a newer version of SQL Server that performs the concatenation after pulling the data across a linked server to 2000), or upgrade.
You can't fool SQL Server 2000 into supporting [n]varchar(max), and the limitation you've come across is just one of many, many, many reasons the [n]text data types were deprecated.
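For illustration, here is a rough sketch of the linked-server route, run from a SQL Server 2005 or later instance; the linked server name, database, table and column names are all made up for the example:
-- Run on the newer instance; the text columns coming from the 2000 box are
-- widened to varchar(max) there, so nothing is truncated at 8,000 characters.
SELECT  d.RowID,
        CAST(d.Description1 AS varchar(max))
      + CAST(d.Description2 AS varchar(max))
      + CAST(d.Description3 AS varchar(max)) AS CombinedDescription
FROM    SQL2000.MyDatabase.dbo.Descriptions AS d;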

Related

How to create an Excel Spreadsheet that formats a field in one of a few different ways based on the data in the field

I have a SQL View that I'm working on that spits out some important information for my boss's boss's boss. The view includes a field called Item ID, which can be in several different formats.
Here are some examples (that may or may not be made up to protect the innocent):
ATS-LC-PLN-RT-RH-0.3125-18-3X2.125X1.5-1
012345.012345
01234567.0123
123456789012
000000.000000
000000.000002
I'd like to take the view and use it to (eventually) produce an Excel spreadsheet, but I'm not confident that there's a way to format this column in a way that will work for all of these different Item IDs.
When playing around with Excel, these numbers drop their trailing zeroes and switch to scientific notation, among other shenanigans. I just need to format this column in a way that preserves the Item ID.
If you know of a way to programmatically create an Excel spreadsheet that allows me to assign a format based on the data in the cell, that would work great. The main problem I'm suffering from is that this spreadsheet naturally has hundreds of lines, soon to be thousands, and there's no feasible way to hand-format these lines one at a time on a daily or weekly basis.
I've got SQL Server 2014 and Excel via Microsoft Office Standard 2013, which may offer more options.
Permit me to suggest another way of framing your issue. I don't think you really want to analyze (either manually or programmatically) each Item ID and determine whether it is an integer, a decimal, or alphanumeric text. Since your Item ID data varies, the only Excel formatting that will work for all of your cases is 'Text.' So my suggestion is to look for a way to automate the export of your data to Excel while making sure that every cell that will contain Item ID data is formatted as 'Text.' As you've noticed, if you paste data into Excel without first setting the target cells to 'Text' formatting, Excel makes its own 'corrections' to each pasted value, including removing leading and trailing zeros.
The best solution is to use SQL Server Reporting Services (SSRS). You can set the field formatting in SSRS, and then (if you choose) automate the export of your data to Excel by calling the report server by URL with &rs:Format=excel. (There is a learning curve for SSRS, but if you plan to continue doing things like this, it will be worth it.)
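For example, a report URL along these lines returns the rendered report directly as an Excel file (the server name and report path here are made up):
http://yourserver/ReportServer?/Reports/ItemIdExport&rs:Format=EXCEL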
Other options
The easiest manual option is to 1) export the data to .csv format, then 2) open Excel and use the Text Import Wizard; during Step 3 of the wizard, make sure to click the data column and choose 'Text' as the data format. (You could automate this somewhat with an Excel VBA macro.)
The most complicated method involves using Excel VBA and ADO to automate connecting to and querying the data from your database view, and then rendering that data to a spreadsheet, using VBA to set the formatting to 'Text.'

Visual Studio Data Comparison with fewer columns

My project currently has a database which contains several tables, the most important of which has one binary column with very large entries (representing serialized C# objects). There are a large number of entries in the production database, and when debugging, it is often necessary to pull these entries down into the local development database (as remote debugging does not seem to work, which is a separate issue).
If I attempt to compare the local and production databases on this table with all columns, the comparison can take up to an hour or eventually time out, although this has worked in the past and allowed me to download the entries and debug them successfully. If I compare on all table columns except the binary data column, the comparison is almost instantaneous, but that column is then not transferred to the target database.
My question is: is there any way to run a data comparison between two tables, excluding a particular column for the comparison itself (other fields give enough information to differentiate without it) but including it when updating the target database?
You could use a hash function on your large varbinary fields and compare those. HASHBYTES with MD5 is a good method for comparing, as it's astronomically unlikely to generate the same hash value for two different inputs. The problem is that HASHBYTES only accepts inputs up to 8,000 bytes. There are workarounds, though, by creating a function; a few are posted here:
SQL Server 2008 and HashBytes
You would have the option of storing the hash values in your table at insert or update time by using persisted computed columns, or you could just generate the hash values while doing your comparison query.
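A rough sketch of the comparison-query approach, assuming the production server is reachable as a linked server; the table, column and server names are placeholders, and dbo.fn_HashVarbinary stands in for whichever large-input hash workaround you pick from the link above:
-- Rows whose hashes differ are the ones whose binary payloads need to be pulled down.
SELECT  l.EntryID
FROM    dbo.Entries AS l
JOIN    PRODSERVER.ProdDb.dbo.Entries AS p
        ON p.EntryID = l.EntryID
WHERE   dbo.fn_HashVarbinary(l.BlobData) <> dbo.fn_HashVarbinary(p.BlobData);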

ADO - Can I edit results of a complex query with multiple join statements?

I'm working on a data conversion utility which can push data from one master database out to a number of different databases. The utility itself will have no knowledge of how data is kept in the destination (table structure), but I would like to support writing a SQL statement that returns data from the destination using a complex query with multiple join statements, as long as the data comes back in a standardized format (field names) that the utility can recognize in an ADO query.
What I would like to do is then modify the live data in this ADO query. However, since there are multiple join statements, I'm not sure if it's possible. I know that BDE, at least, was very strict (I've never used BDE) and you had to return all fields (*) and such. ADO, I know, is more flexible, but I don't know quite how flexible in this case.
Is it supposed to be possible to modify data in a TADOQuery in this manner, when the results include fields from different tables? And even if so, suppose I want to append a new record to the end (TADOQuery.Append). Would it append to two different tables?
The actual primary table I'm selecting from has a complementary table which is joined on the same primary key field; one is a "Small" table (brief info) and the other is a "Detail" table (more info for each record in the Small table). So, a typical statement would include something like this:
select ts.record_uid, ts.SomeField, td.SomeOtherField from table_small ts
join table_detail td on td.record_uid = ts.record_uid
There are also a number of other joins to records in other tables, but I'm not worried about appending to those. I'm only worried about appending to the "Small" and "Detail" tables - at the same time.
Is such a thing possible in an ADO Query? I'm willing to tweak and modify the SQL statement in any way necessary to make this possible. I have a bad feeling though that it's not possible.
Compatibility:
SQL Server 2000 through 2008 R2
Delphi XE2
Editing fields which have no influence on the joins is usually no problem.
Appending is trickier; you can limit the append to one of the tables by
procedure TForm.ADSBeforePost(DataSet: TDataSet);
begin
  inherited;
  // Tell ADO which underlying table should receive inserts/updates from this joined recordset.
  TCustomADODataSet(DataSet).Properties['Unique Table'].Value := 'table_small';
end;
but without a Requery you won't get much further.
The better way would be to set the values yourself (e.g. via a stored procedure call in BeforePost), then Requery and Abort.
If your view were a persistent database view, you would be able to use INSTEAD OF triggers.
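For what it's worth, a rough sketch of that trigger approach on the SQL Server side, reusing the tables and columns from the query in the question (the view and trigger names are made up):
CREATE VIEW dbo.v_small_detail AS
SELECT ts.record_uid, ts.SomeField, td.SomeOtherField
FROM dbo.table_small ts
JOIN dbo.table_detail td ON td.record_uid = ts.record_uid;
GO
CREATE TRIGGER dbo.tr_v_small_detail_ins
ON dbo.v_small_detail
INSTEAD OF INSERT
AS
BEGIN
    -- Split each inserted row across the two underlying tables, so a single
    -- Append from the client lands in both table_small and table_detail.
    INSERT INTO dbo.table_small (record_uid, SomeField)
    SELECT record_uid, SomeField FROM inserted;
    INSERT INTO dbo.table_detail (record_uid, SomeOtherField)
    SELECT record_uid, SomeOtherField FROM inserted;
END;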
Jerry,
I encountered the same problem on Firebird, and from experience I can tell you that it can be done (with a small amount of added complexity) by using CachedUpdates. A very good resource is this one - http://podgoretsky.com/ftp/Docs/Delphi/D5/dg/11_cache.html. This article has the answers to all your questions.
I have abandoned the original idea of live ADO query updates, as it has become more complex than I can wrap my head around. The scope of the data push project has changed, and therefore this is no longer an issue for me, however still an interesting subject to know.
The new structure of the application consists of attaching multiple "Field Links" to various fields from the original set of data. Each of these links references the original field name and a SQL statement which is to be executed when that field is being imported. Multiple field links can be attached to one single field, and can therefore execute multiple statements, placing the value in various tables, etc. The end goal was an app with which I can easily and repeatedly export a common dataset from an original source to any outside source with a different data structure, without having to recompile the app.
However, the concept of cached updates was not appealing to me, simply because of the fact pointed out in the link in RBA's answer: data can be changed in the database in the meantime. So I will instead integrate my own method of customizable data pushes.

How to store XML result of WebService into SQL Server database?

We have a .NET client that calls a web service. We want to store the result in a SQL Server database.
I think we have two options for how to store the data, and I am a bit undecided as I can't see the pros and cons clearly. One would be to map the results into database fields; that would require us to have database fields corresponding to each possible result type, e.g. for each "normal" result type as well as those for faults.
On the other hand, we could store the resulting XML and query it via SQL Server's built-in XML functions.
Personally, I am comfortable with dealing with both SQL and XML, so both look fine to me.
Are there any big pros and cons, and what would I need to consider in terms of database design when trying to store the resulting XML for quite a few different possible web service operations? I was thinking about a result table for each operation that we call, with different entries for the different possible outcomes/types, and then storing the XML in the right field, e.g. a fault in the fault field, a "normal" return type in the appropriate field, etc.
We use a combination of both. XML for reference and detailed data, and text columns for fields you might search on. Searchable columns include order number, customer reference, ticket number. We just add them when we need them since you can extract them from the XML column.
I wouldn't recommend just the XML. If you store 10,000 messages a day, a query like:
select * from XmlLogging with (nolock) where Response like '%Order12%'
can become slow and interfere with other queries. You also can't display the logging in a GUI because retrieval is too slow.
I wouldn't recommend just the text columns either. If the XML format changes, you'd get an empty column. That's hard to troubleshoot without the XML message. In addition, if you need to "replay" the message stream, that's a lot easier with the XML messages. Few requirements demand replay, but it's really helpful when repairing the fallout of production problems.
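A rough sketch of that hybrid layout (all names here are invented; the searchable columns are filled in by the application when the message is logged):
CREATE TABLE dbo.WebServiceLog
(
    LogID         int IDENTITY(1,1) PRIMARY KEY,
    ReceivedAt    datetime     NOT NULL DEFAULT GETDATE(),
    OperationName varchar(100) NOT NULL,
    OrderNumber   varchar(50)  NULL,  -- extracted from the XML for fast searching
    TicketNumber  varchar(50)  NULL,
    Response      xml          NOT NULL -- full message kept for detail and replay
);
CREATE INDEX IX_WebServiceLog_OrderNumber ON dbo.WebServiceLog (OrderNumber);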

Export tables from SQL Server to be imported to Oracle 10g

I'm trying to export some tables from SQL Server 2005 and then create those tables and populate them in Oracle.
I have about 10 tables, varying from 4 columns up to 25. I'm not using any constraints/keys, so this should be reasonably straightforward.
Firstly I generated scripts to get the table structure, then modified them to conform to Oracle syntax standards (i.e. changed nvarchar to varchar2).
Next I exported the data using SQL Server's export wizard, which created a CSV flat file. However, my main issue is that I can't find a way to force SQL Server to double-quote column names. One of my columns contains commas, so unless I can find a method for SQL Server to quote column names, I will have trouble when it comes to importing this.
Also, am I going the difficult route, or is there an easier way to do this?
Thanks
EDIT: By quoting I'm referring to quoting the column values in the CSV. For example, I have a column which contains addresses like
101 High Street, Sometown, Some
county, PO5TC053
Without changing it to the following, it would cause issues when loading the CSV
"101 High Street, Sometown, Some
county, PO5TC053"
After looking at some options with SQL Developer and at manually exporting/importing, I found a utility in SQL Server Management Studio that gets the desired results and is easy to use. Do the following:
Go to the source schema on SQL Server
Right click > Export data
Select source as current schema
Select destination as "Oracle OLE provider"
Select properties, then add the service name into the first box, then username and password, be sure to click "remember password"
Enter query to get desired results to be migrated
Enter table name, then click the "Edit" button
Alter mappings, change nvarchars to varchar2, and INTEGER to NUMBER
Run
Repeat process for remaining tables, save as jobs if you need to do this again in the future
Use the SQL Developer migration tools
I think quoting column names in Oracle is something you should avoid. It causes all sorts of problems.
As Robert has said, I'd strongly advise against quoting column names. The result is that you'd have to quote them not only when importing the data, but also whenever you want to reference that column in a SQL statement - and yes, that probably means in your program code as well. Building SQL statements becomes a total hassle!
From what you're writing, I'm not sure if you are referring to the column names or the data in these columns. (Can SQL Server really have a comma in a column name? I'd be really surprised if there was a good reason for that!) Quoting the column content should be done for any string-like columns (although I found that other characters usually work better, as the need to "escape" quotes becomes another issue). If you're exporting to CSV, that should be an option... but then I'm not familiar with the export wizard.
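If it is the column content, one way to get the quoting without relying on the wizard is to do it in the query that feeds the export - a rough sketch, with the table and column names made up:
-- Wrap the comma-containing column in double quotes and escape any embedded
-- quotes, so the resulting CSV field survives the round trip to Oracle.
SELECT  CustomerID,
        '"' + REPLACE(Address, '"', '""') + '"' AS Address
FROM    dbo.Customers;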
Another idea for moving the data (depending on the scale of your project) would be to use an ETL/EAI tool. I've been playing around a bit with the Pentaho suite and their Kettle component. It offered a good range of options to move data from one place to another. It may be a bit oversized for a simple transfer, but if it's a big "migration" with the corresponding volume, it may be a good option.
