Entering a comma-delimited string into an INT column in SQL - sql-server

I'm trying to add a comma-delimited string into a column in my table that is set to INT.
I used it to reference a Category ID, but now I'd like the possibility to add more than just one category.
I thought I could accomplish this by entering a string like 1,2,3 instead of just 1, but I'm getting errors that the changed value in this cell was not recognized as valid.
Does this mean I need to change that column to VARCHAR instead of INT?

No, this means that you should set up proper tables to support a one-to-many relationship. That is, you want a separate table (sketched below, after the list), not a bogus data representation in a varchar column.
Why? Well, SQL already has a great data structure for lists, called a table. In addition:
Numbers should be stored in their native format, not as strings.
SQL has (relatively) poor functions for manipulating strings.
Operations on the column will not take advantage of indexes.
If the numbers represent ids in another table, you cannot declare proper foreign key references.
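A minimal sketch of that separate table, assuming hypothetical Item and Category tables keyed by ItemID and CategoryID:
-- Hypothetical junction table: one row per (item, category) pair.
CREATE TABLE ItemCategory (
    ItemID INT NOT NULL REFERENCES Item (ItemID),
    CategoryID INT NOT NULL REFERENCES Category (CategoryID),
    PRIMARY KEY (ItemID, CategoryID)
);

-- Assigning categories 1, 2, and 3 to item 42 becomes three rows:
INSERT INTO ItemCategory (ItemID, CategoryID) VALUES (42, 1), (42, 2), (42, 3);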

Related

Since looping is bad, how should I accomplish what I am trying to do?

I have a table with records, each having a unique identifier, a start date, and two semicolon-delimited strings, which I need to break out into one value per date, with the first date being the start date value for each record.
Currently, I am doing all sorts of bad things: I am using a WHILE loop to go through each record in the table (16K+ records), and I am using a split function that itself uses a WHILE loop. The split function is based on Oskar Austegard's dbo.fnSplit function.
I've been reading about how using a loop to split is bad for performance, but these delimited strings have at most 100 items. I am beginning to learn about CROSS APPLY, and I have been using CTEs for several SPs I've created, but I am not sure whether using a CTE here would work or how it would work. I am especially unsure about CROSS APPLY or any other APPLY.
I will gladly post my query and the adulterated split function if someone is willing to assist.
Below is the same answer to this question on the MSDN forums.
You were forced into doing bad things because you violated first normal form in your data model (https://en.wikipedia.org/wiki/First_normal_form). A column should not contain a delimited list because the data isn't atomic. Also, repeating data should typically be stored as separate rows (probably in a different table with a one-to-many relationship) instead of as columns of the same row.
If you fix your model to adhere more closely to normalization principles, your queries will become much prettier and perform much better too.
Looping is bad.
I have a table with records each having ... two semi-colon delimited strings... [T]hese delimited strings have at most 100 Items.
You know what's just about as bad as looping, maybe worse? Storing delimited data in single columns. DON'T DO THAT!
Instead, add a new table to your database. This table will carry the primary key from the original table and a column for one single item from the delimited column. So if one row in the original table had 100 items in the delimited column, you now have one hundred rows in the new table, each with the primary key from the original table and just one of the elements from the other column.
Now you can use queries with a simple JOIN to identify each item.
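A minimal sketch of that migration, assuming hypothetical names (dbo.Source with a SourceID key and a semicolon-delimited DelimitedColumn). STRING_SPLIT requires SQL Server 2016 or later; on older versions, a set-based splitter function plays the same role:
-- Child table: one row per element of the old delimited column.
CREATE TABLE dbo.SourceItem (
    SourceID INT NOT NULL REFERENCES dbo.Source (SourceID),
    ItemValue VARCHAR(100) NOT NULL
);

INSERT INTO dbo.SourceItem (SourceID, ItemValue)
SELECT s.SourceID, ss.value
FROM dbo.Source AS s
CROSS APPLY STRING_SPLIT(s.DelimitedColumn, ';') AS ss;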

How can I change the datatype of a column from integer to text in SQL Server?

I need to change a database column from integer to string/text, but I am not sure how to go about it.
This column is meant to store identification numbers, but recently the ID format changed and the IDs now contain non-numeric characters as well (so the new IDs cannot be stored as integers).
The application I am updating is written in Delphi 7 and uses the odbcexpress components for the SQL Server library.
Is it possible to use ALTER TABLE for this? Or do I need to copy the data to a new string column, delete the old column, and rename the new column to the old name?
Can you provide an example of how I might do this? I am not very familiar with the workings of SQL Server.
Thanks!
ALTER TABLE is precisely what you want to do.
Your SQL might look something like this:
ALTER TABLE dbo.MyTable ALTER COLUMN MyColumn VARCHAR(20) NOT NULL;
Note that if you have columns that reference this one, you will have to update those as well, generally by dropping the foreign key constraints temporarily, making your changes, then recreating your foreign key constraints.
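A rough sketch of that sequence, assuming a hypothetical referencing table dbo.ChildTable, a constraint named FK_ChildTable_MyTable, and that MyColumn is the referenced key (any primary key or index on the column needs the same drop/recreate treatment):
-- Drop the foreign key, retype both sides, then recreate it.
ALTER TABLE dbo.ChildTable DROP CONSTRAINT FK_ChildTable_MyTable;

ALTER TABLE dbo.MyTable ALTER COLUMN MyColumn VARCHAR(20) NOT NULL;
ALTER TABLE dbo.ChildTable ALTER COLUMN MyColumn VARCHAR(20) NOT NULL;

ALTER TABLE dbo.ChildTable
    ADD CONSTRAINT FK_ChildTable_MyTable
    FOREIGN KEY (MyColumn) REFERENCES dbo.MyTable (MyColumn);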
Don't forget to change anything that is dependent or downstream as well, such as any variables in stored procedures or your Delphi code.
Additional info related to comments (thanks, all):
This ALTER COLUMN operation will preserve the data, as it is implicitly cast to the new type. An int casts to varchar without a problem, so long as the varchar is wide enough to accommodate at least the largest converted value. For total safety with ints, I often use a varchar(11) or larger, in order to handle the widest int value: -2,147,483,648, which is 11 characters including the sign.
ALTER TABLE your_table ALTER COLUMN your_column_name varchar(255) NULL;

Is it possible to select all columns as nvarchar or varchar without explicit casting in SQL Server 2008?

Basically, what I want is that when I fill a dataset by executing some query, every data column must be a string, irrespective of its type in the database tables. Is that possible?
I want to know whether any short SQL syntax exists, instead of casting every column, when you have a huge number of columns or even a dynamic number of columns.
No, this isn't possible. If you select data from a table, the data has the datatype of the column as defined.
You can create a table with only nvarchar fields instead.
No. The database assumes that you defined your columns with a given type because you wanted them to be that type. That's how the data was given to it to store, so that's how it's going to return it. If you don't want them to be that type, the database requires you to explicitly state that.
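For illustration only, with a hypothetical dbo.Orders table: the explicit statement the database requires is one CAST per column (or the equivalent text built with dynamic SQL when the column list isn't fixed):
SELECT CAST(OrderID   AS NVARCHAR(20)) AS OrderID,
       CAST(OrderDate AS NVARCHAR(30)) AS OrderDate,
       CAST(Amount    AS NVARCHAR(50)) AS Amount
FROM dbo.Orders;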

Is a comma-separated value a substitute for dynamically changing columns?

Requirement: A set of data containing decimal numbers needs to be inserted into the DB. The total number of values in this set can vary from 12 to 288.
These data are used just for calculations and will be fetched based on other parameters in other columns, such as date.
Solution: Should the columns be created dynamically to insert each of the values,
OR
should we use one column that holds these values in comma-separated format?
Please suggest the most efficient approach.
Should the columns be created dynamically to insert each of the values, OR should we use one column that holds these values in comma-separated format?
Depends on whether a set of values can be considered "atomic" from the database management perspective.
If yes (all the values are always read from and written to the database together as a unit), then you can just go ahead and encode them into one field. A comma-separated list is one possible encoding, but using a "special" data type such as Oracle's RAW or PostgreSQL array might be even better.
If no (the values do need to be accessed individually), consider the number of possible values versus the number expected to be used in a given set. If the former number is relatively small, and you can live with dynamically constructing the queries to accommodate new columns, and costs of storing NULLs in unused columns are acceptable, then you can consider dynamically adding columns.
But there is another alternative... If the data is "sparse" and there is a large number of applicable values, but only a small fraction will apply to a given set, then using a form of EAV might be more appropriate...
Create two tables, one for the "set" and the other for the "value in set", and connect them via a 1:N relationship. For example:
-- SET is a reserved keyword, so the table name is bracketed here.
CREATE TABLE [SET] (
    SET_ID INT PRIMARY KEY
    -- Other fields...
);

CREATE TABLE SET_VALUE (
    SET_ID INT REFERENCES [SET] (SET_ID),
    NAME VARCHAR(50),
    VALUE DECIMAL(18, 6),  -- bare DECIMAL defaults to scale 0; pick precision/scale to suit the data
    PRIMARY KEY (SET_ID, NAME)
);
This way, you don't need to add new columns dynamically, yet you retain the ability to access the individual values (unlike with a comma-separated list).
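A brief usage sketch (the set number 7 is hypothetical): fetching the individual values of one set is then a plain join:
SELECT s.SET_ID, v.NAME, v.VALUE
FROM [SET] AS s
JOIN SET_VALUE AS v ON v.SET_ID = s.SET_ID
WHERE s.SET_ID = 7;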

Prefix field with letters

I've been given the task of combining 7 databases into one legacy system that will only hold static data.
So what I want to do is add a two-letter prefix to the identity field of each table so that I can tell from the ID which database it originally came from, e.g.
StaffID = 1 in the old database would become StaffID = AB1 in the new single database. This also gets rid of the problem of users in all seven databases having the same StaffID (along with other ID fields).
Is there a quick and easy way of doing this? Or do you guys think there is a better solution?
Cheers!
Quick and easy? Not really, if you want to update the existing IDs rather than create a new column to refer to:
First, you'll need to change any ID fields that are numeric (int, etc.) into a varchar field to hold the new key. The varchar would need to be of an appropriate size to hold the largest ID plus the prefix.
You would also need to convert any foreign key fields from a numeric field to a varchar field of an appropriate size.
Finally, you would need to import the data into the new database, appending the prefix to any primary keys and foreign keys.
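A rough sketch of the import step, assuming a hypothetical Staff table in each source database and the prefix 'AB' for this one (repeat the same expression for every foreign key column that references StaffID):
-- StaffID in the new database is assumed to already be VARCHAR.
INSERT INTO NewDb.dbo.Staff (StaffID, StaffName)
SELECT 'AB' + CAST(StaffID AS VARCHAR(10)), StaffName
FROM OldDbAB.dbo.Staff;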
