Converting bigint to smallint shows an error - sql-server

ALTER TABLE employee
ALTER COLUMN emp_phoneNo SMALLINT;
I am trying to alter the data type from BIGINT to SMALLINT and it is showing this error:
Arithmetic overflow error converting expression to data type int.
I am not able to understand what is wrong.

You have existing rows with values in that specific column that are bigger than the new data type allows.
You need to update or delete the rows that are currently "oversized".
(or not perform the column alter at all .. because most likely you don't want to lose the information)
You can find the rows with this query:
SELECT 'CurrentlyOverSized' as MyLabel, * FROM dbo.employee WHERE ABS(emp_phoneNo) > 32767
Note that a phone number like 5555555555 (the numeric form of 555-555-5555) is far greater than 32767.
Even 5555555 (for 555-5555, with no area code) is too big for a SMALLINT.
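Before retrying the ALTER, you could check whether any rows fall outside the SMALLINT range at all; a sketch, assuming the table and column names from the question:

```sql
-- Count rows that would overflow SMALLINT (-32,768 to 32,767)
SELECT COUNT(*) AS OversizedRows
FROM dbo.employee
WHERE emp_phoneNo NOT BETWEEN -32768 AND 32767;

-- Only once that count is zero will this succeed:
ALTER TABLE employee
ALTER COLUMN emp_phoneNo SMALLINT;
```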
Also: whether to use a number or a string for storing phone numbers is a debatable topic. Check out this link for food for thought:
What datatype should be used for storing phone numbers in SQL Server 2005?
Personally I think numeric is the wrong data type for phone numbers.
Whatever you do, be consistent. If you go with a string (varchar(xyz)), for example, store them all the same way: with no extra characters (5555555555), with hyphens (555-555-5555), or with dots (555.555.5555). Whichever format you pick, apply it to every row; that would be my advice.
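If you do go the string route and want to normalize values that were stored inconsistently, a sketch (assuming the column has already been converted to a varchar type) might be:

```sql
-- Strip common separators so every phone number is stored as digits only
UPDATE dbo.employee
SET emp_phoneNo = REPLACE(REPLACE(REPLACE(emp_phoneNo, '-', ''), '.', ''), ' ', '');
```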


String or Binary Data would be truncated SQL Server

I have a table
Table1:
someColumnName0 char(10) DoesNotAllowNull
someColumnName1 char(20) AllowsNull
someColumnName2 char(20) AllowsNull
....
...
...
someColumnNameN char(100) AllowsNull
I got this error
String or Binary value would be truncated....
and I thought one of the values in one of the columns was exceeding size limit and I didn't know which column.
Since all the columns allow NULL, I tried inserting a row with a value in only the first column, and I still got the same message even though I entered that one value manually.
I don't know what is going on or what I am missing.
Your help would be really appreciated.

SQL INSERT using tables where the columns have different data types and getting error converting

I have two tables and I would like to insert from one into the other. In my staging (source) table every column is defined as nvarchar(300), and this restriction cannot change.
In my destination table the columns have various types. I want, for example, to select a column from the source table (data type nvarchar(300)) and insert it into a destination column of data type decimal(28, 16).
When this happens I get the following error:
Error converting data type nvarchar to numeric.
Even when I use a cast I get the error.
INSERT INTO Destination (
Weighting
)
VALUES (
CAST(src.Weighting AS decimal(28, 16))
)
Could null values be affecting this at all? Is there anything else to consider?
If all data in your staging table column can be implicitly converted to the target data type then you do not have to set up an explicit cast.
But if any one value cannot be converted implicitly (i.e. one cell contains a non-numeric or ill-formatted string value that is supposed to end up in a decimal type column) then the entire transaction will fail.
You can mitigate the risk of a failing transaction by setting up the insert like this:
INSERT
LiveTable (
VarcharCol,
DecimalCol,
NonNullableCol
)
SELECT
NvarcharCol1,
CASE WHEN ISNUMERIC(NvarcharCol2) = 1 THEN NvarcharCol2 END,
ISNULL(NvarcharCol3, '')
FROM
StagingTable
But clearly that approach carries the risk of losing potentially relevant data or numeric precision.
You can read which data types are implicitly convertible between each other on the MSDN (scroll down to the matrix). For all other conversions you'll have to use CAST or CONVERT.
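On SQL Server 2012 and later, TRY_CAST (or TRY_CONVERT) is a terser alternative to the CASE/ISNUMERIC pattern: it returns NULL instead of raising an error when a value cannot be converted. A sketch using the same illustrative table names:

```sql
INSERT LiveTable (VarcharCol, DecimalCol, NonNullableCol)
SELECT
    NvarcharCol1,
    TRY_CAST(NvarcharCol2 AS decimal(28, 16)), -- NULL when not convertible
    ISNULL(NvarcharCol3, '')
FROM StagingTable;
```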
This will find the non-numeric strings:
select src.Weighting from src where isnumeric(src.Weighting) = 0
INSERT INTO Destination (Weighting)
SELECT CAST(src.Weighting AS decimal(28, 16))
FROM [Source] src
should work OK, provided your varchar values are in correct format.
If the error still occurs, please give an example of value being converted.
NULLs will successfully convert to NULLs.
TSQL has functions for casting or converting data to the type you want it to be. If the data types in the source are strictly what you are trying to store them as in the destination table, and within the specifications of the destination table, you won't have much trouble.
If you have a column of numbers and one of them is 'three' instead of '3', it gets complicated. Here is a question about converting a varchar to a decimal.
An example: I can cast 123 as a varchar(20) then cast the varchar into a decimal with no problem when it is appropriate.
SELECT cast(cast('123' as varchar(20)) as decimal(8,2))
However if I try to convert a character it will give an error.
SELECT cast(cast('1a3' as varchar(20)) as decimal(8,2))
NULLs will only be a problem if the target column does not allow them. I think the real problem is a string format that cannot always be converted to a decimal; check whether the decimal separator is a comma instead of a point.
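One illustration of why ISNUMERIC alone is not a reliable guard: it accepts some strings that a CAST to decimal rejects, including comma-separated values:

```sql
SELECT ISNUMERIC('1,5');  -- returns 1: the comma is accepted as a money separator
-- SELECT CAST('1,5' AS decimal(8,2));  -- yet this raises a conversion error
SELECT CAST(REPLACE('1,5', ',', '.') AS decimal(8,2));  -- works once the separator is fixed
```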

Creating a unique id (PIN) for each record of a table

I want to create a PIN that is unique within a table but not incremental to make it harder for people to guess.
Ideally I'd like to be able to create this within SQL Server but I can do it via ASP.Net if needed.
EDIT
Sorry if I wasn't clear: I'm not looking for a GUID as all I need is a unique id for that table; I just don't want it to be incremental.
Add a uniqueidentifier column to your table, with a default value of NEWID(). This ensures that each row gets a new unique identifier, which is not incremental.
CREATE TABLE MyTable (
...
PIN uniqueidentifier NOT NULL DEFAULT newid()
...
)
The uniqueidentifier is, for practical purposes, guaranteed to be unique, not just for this table but across all tables.
If it's too large for your application, you can derive a smaller PIN from this number, you can do this like:
SELECT RIGHT(REPLACE((SELECT PIN from MyTable WHERE UserID=...), '-', ''), 4/*PinLength*/)
Note that the returned smaller PIN is not guaranteed to be unique for all users, but may be more manageable, depending upon your application.
EDIT: If you want a small PIN with guaranteed uniqueness, the tricky part is that you need to know at least the maximum number of users in order to choose an appropriate PIN size. As the number of users grows, the chance of a PIN collision grows with it. This is similar to the coupon collector's problem: the fuller the PIN space gets, the more random attempts each insert needs to find a free value, so inserts become progressively slower. The simplest way to avoid this is to use a large unique ID and select only a portion of it for your PIN, assuming you can forgo uniqueness of PIN values.
EDIT2:
If you have a table definition like this
CREATE TABLE YourTable (
[id] [int] IDENTITY(1,1) NOT NULL,
[pin] AS (CONVERT(varchar(9), id, 0) + RIGHT(CONVERT(varchar(36), pinseed), 3)) PERSISTED,
[pinseed] [uniqueidentifier] NOT NULL
)
This builds the pin from the row id plus the tail of the pinseed, a unique value. (RAND() does not work here, since SQL Server will use the same value to initialize multiple rows; that is not the case with NEWID().)
Just so that it is said: I advise you not to consider this secure in any way. You should always assume another user could guess someone else's PIN, unless you somehow limit the number of allowed guesses (e.g. stop accepting requests after 3 attempts, much like a bank withholding your card after 3 incorrect PIN entries).
What you want is a GUID
http://en.wikipedia.org/wiki/Globally_unique_identifier
Most languages have some sort of API for generating this... a google search will help ;)
How about a UNIQUEIDENTIFIER type column with a default value of NEWID()?
That will generate a new GUID for each row.
Please keep in mind that by requiring a unique PIN (which is uncommon) you limit the maximum number of allowed users to the size of the PIN space. Are you sure you want this?
A not very elegant solution, but one that works, is to use a UNIQUE field and then loop, attempting to insert a randomly generated PIN until the insert succeeds.
You can use the following to generate a BIGINT, or other datatype.
SELECT CAST(ABS(CHECKSUM(NEWID()))%2000000000+1 as BIGINT) as [PIN]
This creates a number between 1 and 2 billion. It gives some level of randomness, since it is derived from the NEWID function. You can also format the result as you wish.
This doesn't guarantee uniqueness. I suggest you put a unique constraint on the PIN column, and have the code that creates a new PIN check that the value is unique before assigning it.
Use a random number.
SET @uid = ROUND(RAND() * 100000, 0)
The sparser your values are in the table, the better this works. If the number of assigned values gets large in relation to the number of available values, it does not work as well.
Once the number is generated you have a couple of options.
1) INSERT the value inside of a retry loop. If you get a dupe error, regenerate the value (or try the value +/-1) and try again.
2) Generate the value and look for the MAX and MIN existing unique identifiers.
DECLARE
@uid INTEGER
SET @uid = ROUND(RAND() * 100000, 0)
SELECT @uid
SELECT MAX(uid) FROM table1 WHERE uid < @uid
SELECT MIN(uid) FROM table1 WHERE uid > @uid
The MIN and MAX value give you a range of available values to work from if the random value is already assigned.
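Option 1 above could be sketched like this (all table and column names are illustrative, and the uid column is assumed to carry a UNIQUE constraint):

```sql
DECLARE @uid INT, @done BIT = 0;
WHILE @done = 0
BEGIN
    SET @uid = ROUND(RAND() * 100000, 0);
    BEGIN TRY
        INSERT INTO table1 (uid) VALUES (@uid);  -- violates the UNIQUE constraint on a dupe
        SET @done = 1;
    END TRY
    BEGIN CATCH
        -- duplicate key: fall through and generate a new candidate
    END CATCH
END
```

Note this loops forever once the value space is exhausted, which is exactly the sparseness caveat above.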

Sql Server Column with Auto-Generated Data

I have a customer table, and my requirement is to add a new varchar column that automatically obtains a random unique value each time a new customer is created.
I thought of writing an SP that randomizes a string, then check and re-generate if the string already exists. But to integrate the SP into the customer record creation process would require transactional SQL stuff at code level, which I'd like to avoid.
Help please?
edit:
I should have emphasized: the value has to be 5 characters long, holding numbers between 1000 and 99999, and if the number is less than 10000, pad it with a 0 on the left.
If it has to be varchar, you can cast a uniqueidentifier to varchar.
To get a random uniqueidentifier, call NEWID().
here's how you cast it:
CAST(NewId() as varchar(36))
EDIT
as per your comment to @Brannon:
are you saying you'll NEVER have over 99k records in the table? if so, just make your PK an identity column, seed it with 1000, and take care of "0" left padding in your business logic.
This question gives me the same feeling I get when users won't tell me what they want done, or why, they only want to tell me how to do it.
"Random" and "Unique" are conflicting requirements unless you create a serial list and then choose randomly from it, deleting the chosen value.
But what's the problem this is intended to solve?
With your edit/update, sounds like what you need is an auto-increment and some padding.
Below is an approach that uses a bogus table, then adds an IDENTITY column (assuming that you don't have one) which starts at 1000, and then which uses a Computed Column to give you some padding to make everything work out as you requested.
CREATE TABLE Customers (
CustomerName varchar(20) NOT NULL
)
GO
INSERT INTO Customers
SELECT 'Bob Thomas' UNION
SELECT 'Dave Winchel' UNION
SELECT 'Nancy Davolio' UNION
SELECT 'Saded Khan'
GO
ALTER TABLE Customers
ADD CustomerId int IDENTITY(1000,1) NOT NULL
GO
ALTER TABLE Customers
ADD SuperId AS right(replicate('0',5)+ CAST(CustomerId as varchar(5)),5)
GO
SELECT * FROM Customers
GO
DROP TABLE Customers
GO
I think Michael's answer with the auto-increment should work well - your customer will get "01000" and then "01001" and then "01002" and so forth.
If you want to or have to make it more random, in this case, I'd suggest you create a table that contains all possible values, from "01000" through "99999". When you insert a new customer, use a technique (e.g. randomization) to pick one of the existing rows from that table (your pool of still available customer ID's), and use it, and remove it from the table.
Anything else will become really bad over time. Imagine you've used up 90% or 95% of your available customer IDs: trying to randomly find one of the few remaining possibilities leads to an almost endless retry of "is this one taken? Yes -> try the next one".
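The pool approach just described could be sketched as follows; every name here is illustrative, and under concurrency you would want locking hints around the claim step:

```sql
-- One-time setup: fill the pool with every possible ID, '01000' through '99999'
CREATE TABLE AvailableIds (CustomerId char(5) PRIMARY KEY);

INSERT INTO AvailableIds (CustomerId)
SELECT RIGHT('0000' + CAST(n AS varchar(5)), 5)
FROM (SELECT TOP (99000) 999 + ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
      FROM sys.all_objects a CROSS JOIN sys.all_objects b) AS nums;

-- At customer creation: claim one random ID and remove it from the pool
DECLARE @claimed TABLE (CustomerId char(5));

WITH pick AS (
    SELECT TOP (1) CustomerId
    FROM AvailableIds
    ORDER BY NEWID()
)
DELETE FROM pick
OUTPUT deleted.CustomerId INTO @claimed;
```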
Marc
Does the random string data need to be a certain format? If not, why not use a uniqueidentifier?
insert into Customer ([Name], [UniqueValue]) values (@Name, NEWID())
Or use NEWID() as the default value of the column.
EDIT:
I agree with @rm: use a numeric value in your database, and handle the conversion to string (with padding, etc.) in code.
Try this:
ALTER TABLE Customer ADD AVarcharColumn varchar(50)
CONSTRAINT DF_Customer_AVarcharColumn DEFAULT CONVERT(varchar(50), GETDATE(), 109)
It returns a date and time down to milliseconds, which would be enough in most cases.
Do you really need an unique value?

SQL Server Row Length

I'm attempting to determine the row length in bytes of a table by executing the following stored procedure:
CREATE TABLE #tmp
(
[ID] int,
Column_name varchar(640),
Type varchar(640),
Computed varchar(640),
Length int,
Prec int,
Scale int,
Nullable varchar(640),
TrimTrailingBlanks varchar(640),
FixedLenNullInSource varchar(640),
Collation varchar(256)
)
INSERT INTO #tmp exec sp_help MyTable
SELECT SUM(Length) FROM #tmp
DROP TABLE #tmp
The problem is that I don't know the table definition (data types, etc..) of the table returned by 'sp_help.'
I get the following error:
Insert Error: Column name or number of supplied values does not match table definition.
Looking at the sp_help stored procedure does not give me any clues.
What is the proper CREATE TABLE statement to insert the results of a sp_help?
How about doing it this way instead?
CREATE TABLE tblShowContig
(
ObjectName CHAR (255),
ObjectId INT,
IndexName CHAR (255),
IndexId INT,
Lvl INT,
CountPages INT,
CountRows INT,
MinRecSize INT,
MaxRecSize INT,
AvgRecSize INT,
ForRecCount INT,
Extents INT,
ExtentSwitches INT,
AvgFreeBytes INT,
AvgPageDensity INT,
ScanDensity DECIMAL,
BestCount INT,
ActualCount INT,
LogicalFrag DECIMAL,
ExtentFrag DECIMAL
)
GO
INSERT tblShowContig
EXEC ('DBCC SHOWCONTIG WITH TABLERESULTS')
GO
SELECT * from tblShowContig WHERE ObjectName = 'MyTable'
GO
Try this:
-- Sum up lengths of all columns
select SUM(sc.length)
from syscolumns sc
inner join systypes st on sc.xtype = st.xtype
where id = object_id('table')
-- Look at various items returned
select st.name, sc.*
from syscolumns sc
inner join systypes st on sc.xtype = st.xtype
where id = object_id('table')
No guarantees, but it appears to match the lengths that sp_help 'table' reports.
DISCLAIMER:
Note that I read the article linked by John Rudy and in addition to the maximum sizes here you also need other things like the NULL bitmap to get the actual row size. Also the sizes here are maximum sizes. If you have a varchar column the actual size is less on most rows....
Vendoran has a nice solution, but I do not see the maximum row size anywhere (based on table definition). I do see the average size and all sorts of allocation information which is exactly what you need to estimate DB size for most things.
If you are interested in just what sp_help returns for Length and adding it up, then I think (I'm not 100% sure) that the query against syscolumns returns those same numbers. Do they represent the full maximum row size? No; you are missing things like the NULL bitmap. Do they represent a realistic measure of your actual data? No. Again, a VARCHAR(500) does not take 500 bytes if you are only storing 100 characters. Also, TEXT fields and other fields stored separately from the row do not show their actual size, just the size of the pointer.
None of the aforementioned answers is correct or valid.
The question is one of determining the number of bytes consumed per row by each column's data type.
The only method(s) I have that work are:
exec sp_help 'mytable' - then add up the Length field of the second result set (If working from Query Analyzer or Management Studio - simply copy and paste the result into a spreadsheet and do a SUM)
Write a C# or VB.NET program that accesses the second resultset and sums the Length field of each row.
Modify the code of sp_help.
This cannot be done using Transact SQL and sp_help because there is no way to deal with multiple resultsets.
FWIW: The table definitions of the resultsets can be found here:
http://msdn.microsoft.com/en-us/library/aa933429(SQL.80).aspx
I can't help you with creating a temp table to store sp_help information, but I can help you with calculating row lengths. Check out this MSDN article; it helps you calculate such based on the field lengths, type, etc. Probably wouldn't take too much to convert it into a SQL script you could reuse by querying against sysobjects, etc.
EDIT:
I'm redacting my offer to do a script for it. My way was nowhere near as easy as Vendoran's. :)
As an aside, I take back what I said earlier about not being able to help with the temp table. I can: You can't do it. sp_help outputs seven rowsets, so I don't think you'll be able to do something as initially described in the original question. I think you're stuck using a different method to come up with it.
This will give you all the information you need
Select * into #mytables
from INFORMATION_SCHEMA.columns
select * from #mytables
drop table #mytables
UPDATE:
The answer I gave was incomplete, NOT incorrect. If you look at the data returned, you'd realize that you could write a query using CASE to calculate a row's size in bytes. It has all you need: the data type, size, and precision. BOL lists the bytes used by each data type.
I will post the complete answer when I get a chance.
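A sketch of that CASE-based calculation; note the assumptions: it covers only a handful of common types, uses declared maximum lengths for variable-length columns, and ignores the NULL bitmap and other row overhead:

```sql
SELECT c.TABLE_NAME,
       SUM(CASE c.DATA_TYPE
               WHEN 'tinyint'  THEN 1
               WHEN 'smallint' THEN 2
               WHEN 'int'      THEN 4
               WHEN 'bigint'   THEN 8
               WHEN 'datetime' THEN 8
               WHEN 'char'     THEN c.CHARACTER_MAXIMUM_LENGTH
               WHEN 'varchar'  THEN c.CHARACTER_MAXIMUM_LENGTH  -- -1 means (max)
               WHEN 'nchar'    THEN c.CHARACTER_MAXIMUM_LENGTH * 2
               WHEN 'nvarchar' THEN c.CHARACTER_MAXIMUM_LENGTH * 2
               ELSE 0  -- extend with further types as needed
           END) AS MaxRowBytes
FROM INFORMATION_SCHEMA.COLUMNS c
WHERE c.TABLE_NAME = 'MyTable'
GROUP BY c.TABLE_NAME;
```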
