Unable to Increase Column size - sql-server

I need to increase the size of a column in a table. I am using the query below to increase the size, but I get the following error:
Alter Table Tabl1 Alter Column Col1 VarChar(6) Not NULL
Msg 5074, Level 16, State 1, Line 1
The object 'Tabl1' is dependent on column 'Col1'.
Msg 5074, Level 16, State 1, Line 1
The statistics '_WA_Sys_Col1_5070F446' is dependent on column 'Col1'.
Msg 4922, Level 16, State 9, Line 1
ALTER TABLE ALTER COLUMN Col1 failed because one
or more objects access this column.
It seems the table itself is reported as a dependency on the column.
I need help with this.

SQL Server automatically adds statistics to a table over time to use when it parses a query and builds a query plan. You have to drop the statistic to change the column. For instance:
drop statistics [dbo].[Tabl1].[_WA_Sys_Col1_5070F446]
However, you should use SSMS to view the columns that are in the _WA_Sys_Col1_5070F446 statistics before you drop it so that you can recreate it. Something like this:
create statistics [_WA_Sys_Col1_5070F446] on [dbo].[Tabl1]([Col1])
But there may be more columns..., so be sure to find out which need to be included before you drop it.
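Putting it together, the usual sequence is drop, alter, recreate. A sketch, using the statistic and column names from the error message above (adjust to the columns you found in the previous step):

```sql
-- Drop the auto-created statistic that blocks the change
DROP STATISTICS [dbo].[Tabl1].[_WA_Sys_Col1_5070F446];

-- Now the column can be altered
ALTER TABLE [dbo].[Tabl1] ALTER COLUMN [Col1] VARCHAR(6) NOT NULL;

-- Recreate the statistic (include every column it originally covered)
CREATE STATISTICS [_WA_Sys_Col1_5070F446] ON [dbo].[Tabl1]([Col1]);
```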
You can run this SQL to find most of the dependencies. It doesn't report statistics dependencies, but it catches most of the others:
SELECT OBJECT_NAME(d.object_id) AS SP_Or_Function,
       OBJECT_NAME(d.referenced_major_id) AS TableReferenced
FROM sys.sql_dependencies AS d
INNER JOIN sys.all_sql_modules AS m ON m.object_id = d.object_id
WHERE OBJECT_ID(N'Tabl1') = d.referenced_major_id
GROUP BY OBJECT_NAME(d.object_id), OBJECT_NAME(d.referenced_major_id)
ORDER BY SP_Or_Function, TableReferenced
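On newer versions, sys.sql_dependencies is deprecated; a sketch of the same lookup using sys.dm_sql_referencing_entities (available from SQL Server 2008, and mentioned in the SQLAuthority post credited below):

```sql
-- Lists objects (procedures, functions, views) that reference the table
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities(N'dbo.Tabl1', N'OBJECT');
```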
You can find all statistics used by a given table with this query:
SELECT DISTINCT
OBJECT_NAME(s.[object_id]) AS TableName,
c.name AS ColumnName,
s.name AS StatName,
s.auto_created,
s.user_created,
s.no_recompute,
s.[object_id],
s.stats_id,
sc.stats_column_id,
sc.column_id,
STATS_DATE(s.[object_id], s.stats_id) AS LastUpdated
FROM sys.stats s JOIN sys.stats_columns sc ON sc.[object_id] = s.[object_id] AND sc.stats_id = s.stats_id
JOIN sys.columns c ON c.[object_id] = sc.[object_id] AND c.column_id = sc.column_id
JOIN sys.partitions par ON par.[object_id] = s.[object_id]
JOIN sys.objects obj ON par.[object_id] = obj.[object_id]
WHERE OBJECTPROPERTY(s.OBJECT_ID,'IsUserTable') = 1
AND (s.auto_created = 1 OR s.user_created = 1)
AND object_id(N'Tabl1') = s.[object_id]
Thanks to SQLAuthority for the last two SQL queries:
SQL SERVER – Get the List of Object Dependencies – sp_depends and information_schema.routines and sys.dm_sql_referencing_entities (Gabriel's post)
SQL SERVER – Find Details for Statistics of Whole Database – DMV – T-SQL Script

Here is a quote from the SQL Server 2000 help:
ALTER COLUMN
The altered column cannot be:
.....
Used in an index, unless the column is a varchar, nvarchar, or varbinary data type, the data type is not changed, and the new size is
equal to or larger than the old size.
Used in statistics generated by the CREATE STATISTICS statement. First remove the statistics using the DROP STATISTICS statement.
Statistics automatically generated by the query optimizer are
automatically dropped by ALTER COLUMN. .....

Related

How to make consistent queries to SQL Server metadata

My application needs to cache SQL Server metadata (tables, columns, indexes, etc).
It makes several subsequent queries to system tables and views like sysobjects.
Sometimes data synchronization procedure runs simultaneously that creates tables and indexes.
In this case queried metadata becomes inconsistent:
Application reads tables and columns lists.
Data synchronization procedure creates new table and index.
Application reads indexes list, and the new index is for "non-existing" table.
A simple example to reproduce this.
In session 1:
-- 0. Drop example table if exists
if object_id('test') is not null drop table test
-- 1. Query tables (nothing returned)
select * from sysobjects where name = 'test'
-- 3. Query indexes (index returned for the new table)
select IndexName = x.name, TableName = o.name
from sysobjects o join sysindexes x on x.id = o.id
where o.name = 'test'
In session 2:
-- 2. Create table with index
create table test (id int primary key)
Is there a way to make metadata queries consistent, something like Schema Modification lock on the entire database or database schema?
Running metadata queries in transaction with serializable isolation level does not help.
You can "simulate" consistency by snapshotting sysobjects (tables) into a temp table and then using that temp table to query for the indexes that belong to those tables.
Like this:
if object_id('tempdb..#tempTables') is not null
drop table #tempTables;
select *
into #tempTables
from sys.objects as o
where o.type = 'U';

select *
from #tempTables t;

select i.*
from #tempTables t
inner join sys.indexes as i on t.object_id = i.object_id;
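If you also want the index list itself to be frozen at snapshot time, the same trick can be extended. A sketch (#tempIndexes is a name I made up for illustration):

```sql
-- Snapshot the indexes for the snapshotted tables in the same batch
select i.*
into #tempIndexes
from #tempTables t
inner join sys.indexes as i on t.object_id = i.object_id;

-- Later metadata queries read only the frozen copies
select * from #tempTables;
select * from #tempIndexes;
```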

SQL Server: does data type change of column (kept column name same) requires recreation of views?

I changed the data type of a column from nvarchar to datetime in SQL Server, keeping the column name the same. Do I need to drop and recreate the views that depend on that column?
When you change base table column types, you should at the very least refresh the view:
exec sp_refreshview 'yourView'
Here is why.
When you change your table definition, the view metadata is not refreshed.
Now imagine you have users who have no permission on the base table but do have SELECT permission on the view. If they ask sp_help about the columns of this view, or if they open the Columns folder in Object Explorer, it will still show the old types. The users can then write invalid queries and will go crazy trying to figure out what is happening.
I give you this example to show what can happen.
-- (table definition assumed from the discussion below: both columns varchar)
create table dbo.test_types (art_code varchar(10), art_desc varchar(50));
go
create view dbo.vw_test_types as
select art_code, art_desc
from dbo.test_types;
go
insert into dbo.test_types (art_code, art_desc)
values ('123', 'trekking shoes');
go
alter table dbo.test_types alter column art_code int;
go
exec sp_help 'dbo.vw_test_types'
select art_desc + ' ' + art_code as full_art_desc
from dbo.vw_test_types;
--Msg 245, Level 16, State 1, Line 33
--Conversion failed when converting the varchar value 'trekking shoes ' to data type int.
exec sp_refreshview 'dbo.vw_test_types'
go
exec sp_help 'dbo.vw_test_types'
go
Here you create a table containing only varchar columns. You then create a view based on this table and enter a row. At this point you decide to change the art_code type to int; the command works fine.
But look at your view's metadata (exec sp_help 'dbo.vw_test_types'), it still shows you only varchar columns.
Now suppose a user who has no access to the base table wants to display the whole description, including the art_code. He opens SSMS -> Object Explorer -> dbo.vw_test_types -> Columns and sees that all columns are varchar, so he just concatenates art_desc and art_code. And gets the error! He can be really perplexed by it: he SEES only varchar columns, but the error tells him about an int type.
And it gets even worse. Imagine the user has built some reports based on that query. One day all these reports stop working, and they may be configured NOT to show the error to the user; they simply fail with "an error happened when processing the query".
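If several views sit on top of the altered table, you can refresh them all in one pass. A sketch using sys.sql_expression_dependencies (available from SQL Server 2008) with a cursor:

```sql
DECLARE @view sysname;
DECLARE v CURSOR FOR
    SELECT DISTINCT OBJECT_SCHEMA_NAME(referencing_id) + '.' + OBJECT_NAME(referencing_id)
    FROM sys.sql_expression_dependencies
    WHERE referenced_id = OBJECT_ID('dbo.test_types')
      AND OBJECTPROPERTY(referencing_id, 'IsView') = 1;
OPEN v;
FETCH NEXT FROM v INTO @view;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_refreshview @view;  -- re-reads the base table's current column types
    FETCH NEXT FROM v INTO @view;
END
CLOSE v;
DEALLOCATE v;
```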
Not as an answer but to further sepupic's answer above (who beat me to the punch!), consider the following:
Begin Transaction
Create Table dbo.SOTest(col1 Numeric(12,2));
Go
Create View dbo.SOTestV As Select col1 From dbo.SOTest
Go
Select c.name, t.name
From sys.columns As c
Join sys.types As t
On c.user_type_id = t.user_type_id
Where c.object_id = Object_Id('dbo.SoTestV')
Go
Alter Table dbo.SOTest Alter Column col1 Int
Go
Select c.name, t.name
From sys.columns As c
Join sys.types As t
On c.user_type_id = t.user_type_id
Where c.object_id = Object_Id('dbo.SoTestV')
Go
Exec sys.sp_refreshview N'dbo.SOTestV'
Go
Select c.name, t.name
From sys.columns As c
Join sys.types As t
On c.user_type_id = t.user_type_id
Where c.object_id = Object_Id('dbo.SoTestV')
Go
Rollback Transaction
The results of which will be
col1 numeric
(1 row(s) affected)
col1 numeric
(1 row(s) affected)
col1 int
Thus proving that the metadata in the view is still showing the previous datatype until refreshed.

Looping through a table in a nested query in SQL Server 2005

I have a task where I have to identify each company database table's last_user_update and store that in a separate table, so that reports on each table's last update can be monitored. What I'm aiming to do is have my query loop through a list of table names. So far I have the following:
Identify all tables in the database:
INSERT INTO [CorpDB1].[dbo].[tblTableNames]
([name])
SELECT *
FROM sys.Tables
ORDER by name
The result is a table containing 984 rows of table names for all tables in the DB.
Next I have the following:
Query to return and insert last_user_update into a new table called DatabaseTableHistory:
INSERT INTO [CorpDB1].[dbo].[DatabaseTableHistory]
([DatabaseName]
,[last_user_update]
,[database_id]
,[object_id]
,[index_id]
,[user_seeks]
,[user_scans]
,[user_lookups]
,[user_updates]
,[last_user_seek])
SELECT OBJECT_NAME(OBJECT_ID) AS DatabaseName, last_user_update,
database_id, object_id, index_id, user_seeks, user_scans, user_lookups,
user_updates, last_user_seek
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID( 'CorpDB1')
AND OBJECT_ID=OBJECT_ID('tblStatesList') AND last_user_update <
'20150430'
This query works as designed. What I'm trying to do is have the last query loop through the table containing the list of table names, substituting each name where OBJECT_ID = OBJECT_ID('tblStatesList') appears, so I don't have to run the query by typing each table name in by hand. Any suggestions would be appreciated.
First things first, apart from being wrong (SELECT * will return much more than just name), your first query is completely unnecessary. There's no need to select your table names into a cache table.
You can get the list of tables you need by using an INNER JOIN between sys.dm_db_index_usage_stats and sys.tables on the object_id.
I'm assuming you want all the indexes on CorpDB1 tables that have a last_user_update prior to 30th April 2015.
USE CorpDB1;
GO
INSERT INTO [CorpDB1].[dbo].[DatabaseTableHistory]
([DatabaseName]
,[last_user_update]
,[database_id]
,[object_id]
,[index_id]
,[user_seeks]
,[user_scans]
,[user_lookups]
,[user_updates]
,[last_user_seek])
SELECT
DB_NAME(ddius.database_id) AS DatabaseName, -- Fix this on your query
ddius.last_user_update,
ddius.database_id,
ddius.object_id,
ddius.index_id,
ddius.user_seeks,
ddius.user_scans,
ddius.user_lookups,
ddius.user_updates,
ddius.last_user_seek
FROM sys.dm_db_index_usage_stats ddius
INNER JOIN sys.tables AS t
ON t.object_id = ddius.object_id
WHERE database_id = DB_ID( 'CorpDB1')
AND last_user_update < '20150430';
If you are working with MSSQL, I would suggest creating a stored procedure to loop through the first dataset using a cursor.
See the link below for a short tutorial on how to create and use cursors for this purpose.
http://stevestedman.com/2013/04/t-sql-a-simple-example-using-a-cursor/
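For completeness, a sketch of what such a cursor could look like against the tblTableNames list from the question (untested; adapt the names to your schema):

```sql
DECLARE @tableName sysname;

DECLARE tbl_cursor CURSOR FOR
    SELECT [name] FROM [CorpDB1].[dbo].[tblTableNames];

OPEN tbl_cursor;
FETCH NEXT FROM tbl_cursor INTO @tableName;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Same insert as in the question, but parameterised by table name
    INSERT INTO [CorpDB1].[dbo].[DatabaseTableHistory]
        ([DatabaseName], [last_user_update], [database_id], [object_id], [index_id],
         [user_seeks], [user_scans], [user_lookups], [user_updates], [last_user_seek])
    SELECT DB_NAME(database_id), last_user_update, database_id, object_id, index_id,
           user_seeks, user_scans, user_lookups, user_updates, last_user_seek
    FROM sys.dm_db_index_usage_stats
    WHERE database_id = DB_ID('CorpDB1')
      AND object_id = OBJECT_ID(@tableName)
      AND last_user_update < '20150430';

    FETCH NEXT FROM tbl_cursor INTO @tableName;
END

CLOSE tbl_cursor;
DEALLOCATE tbl_cursor;
```

That said, the JOIN-based answer above does the same work in a single set-based statement, which is generally preferable to a cursor.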

Why aren't system tables updated after compressing tables

SQL Server 2012
I wanted to compress tables and indexes. I did a search to find tables that weren't compressed and manually checked accuracy of script by looking at table properties/storage prior to compressing. I generated scripts for tables as follows:
ALTER TABLE [R_CompPen].[CP2507BodySystem]
REBUILD WITH (DATA_COMPRESSION=PAGE);
After the script ran I verified compression through SSMS; however, the script I ran to find the uncompressed tables and generate the statements still showed them as uncompressed.
So the question is: why didn't the ALTER TABLE script update the system tables? And if it did, but the results are showing indexes, how can the script be written to show only tables, and conversely a separate script to show only indexes?
SELECT DISTINCT 'ALTER TABLE ['
    + sc.[name] + '].[' + st.[name]
    + '] REBUILD WITH (DATA_COMPRESSION=PAGE);'
FROM sys.partitions sp
INNER JOIN sys.tables st ON st.object_id = sp.object_id
INNER JOIN sys.schemas sc ON sc.schema_id = st.schema_id
WHERE sp.data_compression = 0
The DISTINCT is the culprit here. Once you have multiple indexes, you also have multiple entries in sys.partitions, but the DISTINCT hides the extra entries.
Here I have a table called Album with 2 indexes, which I compressed using
ALTER TABLE Album REBUILD WITH (DATA_COMPRESSION = PAGE);
After running this statement, the non clustered index remains uncompressed and keeps appearing in the list.
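To compress the nonclustered indexes as well, a separate ALTER INDEX is needed; a sketch against the Album table above:

```sql
-- ALTER TABLE ... REBUILD only touches the heap/clustered index (index_id 0 or 1);
-- nonclustered indexes need their own rebuild:
ALTER INDEX ALL ON Album REBUILD WITH (DATA_COMPRESSION = PAGE);
```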
EDIT:
Turns out that when you only want to know about table-level compression, you simply filter for index_id 0 or 1. Higher numbers refer to nonclustered indexes. Shameless copy from Barguast's solution on his own question:
SELECT [t].[name] AS [Table], [p].[partition_number] AS [Partition],
[p].[data_compression_desc] AS [Compression]
FROM [sys].[partitions] AS [p]
INNER JOIN sys.tables AS [t] ON [t].[object_id] = [p].[object_id]
WHERE [p].[index_id] in (0,1)

SQL Server ambiguous query validation

I have just come across a curious SQL Server behaviour.
In my scenario I have a sort of dynamic database, so I need to check for the existence of tables and columns before running queries involving them.
I can't explain why the query
IF 0 = 1 -- Check if NotExistingTable exists in my database
BEGIN
SELECT NotExistingColumn FROM NotExistingTable
END
GO
executes correctly, but the query
IF 0 = 1 -- Check if NotExistingColumn exists in my ExistingTable
BEGIN
SELECT NotExistingColumn FROM ExistingTable
END
GO
returns Invalid column name 'NotExistingColumn'.
In both cases the IF block is not executed and contains an invalid query (the first references a missing table, the second a missing column).
Is there any reason why the SQL engine checks for errors in just one case?
Thanks in advance
Deferred name resolution:
Deferred name resolution can only be used when you reference nonexistent table objects. All other objects must exist at the time the stored procedure is created. For example, when you reference an existing table in a stored procedure you cannot list nonexistent columns for that table.
You can look through the system tables for the existence of a specific table / column name
SELECT t.name AS table_name,
SCHEMA_NAME(schema_id) AS schema_name,
c.name AS column_name
FROM sys.tables AS t
INNER JOIN sys.columns c ON t.OBJECT_ID = c.OBJECT_ID
WHERE c.name LIKE '%colname%'
AND t.name LIKE '%tablename%'
ORDER BY schema_name, table_name;
The query above will pull back all tables/columns with a partial match on the column name and table name; just remove the % wildcards for an exact match.
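For a quick yes/no check before building the query, COL_LENGTH and OBJECT_ID are handy, and wrapping the statement in dynamic SQL defers the column check until execution time. A sketch:

```sql
IF COL_LENGTH('dbo.ExistingTable', 'NotExistingColumn') IS NOT NULL
BEGIN
    -- Dynamic SQL is compiled only when executed, so the outer batch
    -- parses even while the column does not exist
    EXEC sp_executesql N'SELECT NotExistingColumn FROM dbo.ExistingTable';
END
```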
