Can you change the column length in a view in SQL Server 2000? - sql-server

Not sure if this is even allowed, but if so, can someone tell me what the T-SQL is? I've tried the following but to no avail.
alter [View_Name]
alter column [Coln_Name] [New size/length] not null
GO
Any help is appreciated. Thanks!

Not directly.
This is derived automatically from the column expression. You can CAST the expression in the View SELECT list to a particular datatype though.
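For example, a minimal sketch along these lines (the view, table, and length are placeholders):
ALTER VIEW [View_Name]
AS
SELECT CAST([Coln_Name] AS varchar(50)) AS [Coln_Name]
FROM [Underlying_Table];
GO
The view will then report varchar(50) for that column regardless of the length in the underlying table.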

You would need to change the column length in the underlying table, or to change the SELECT statement forming the view to CAST or CONVERT the column to a different length data type.

Views are ways to see data in other tables; typically the data is simply whatever is in the underlying table, so you would need to change the column there.
However, you can have views that do things like cast() or convert(); these are often a bad idea, because the expression is re-evaluated every time the view is queried, and those operations add overhead. In the view definition you can cast to another data type, or apply any transformation you like, but it carries that overhead and does not alter the original data.
If you know what the current view selects, you can use something like:
ALTER VIEW ViewName (Column_Name) AS SELECT CAST(original_data AS varchar(n)) FROM Original_Table

I just ran into the same situation. What I did was:
1. Change the column size in the table that the view looks at.
2. Create a script to recreate the view (if you don't already have one).
3. Delete the view.
4. Use the script to recreate the view.
After that the column sizes in the view were the same as the changes I made to the underlying table.
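A rough sketch of that sequence, with placeholder names and lengths:
ALTER TABLE [Table_Name] ALTER COLUMN [Coln_Name] varchar(50) NOT NULL;
GO
DROP VIEW [View_Name];
GO
CREATE VIEW [View_Name] AS SELECT [Coln_Name] FROM [Table_Name];
GO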

You cannot alter the column size in a view, as a view is derived from other tables. So if you need to change the column size, change the column size of the underlying table. To change the column size, use ALTER TABLE:
ALTER TABLE [Table_Name]
ALTER COLUMN [Column_Name] Data_Type(Size)
After changing the column size you might need to drop and recreate the view.
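If the view is not created WITH SCHEMABINDING, refreshing its metadata may be enough instead of dropping and recreating it, for example (view name is a placeholder):
EXEC sp_refreshview N'dbo.View_Name';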

If the length shown in the view doesn't match the underlying table then drop and recreate the view.
Use something like this to investigate the column lengths in the table and view
SELECT o.[name], *
FROM sys.all_columns c
INNER JOIN sys.objects o on o.[object_id]=c.[object_id]
WHERE c.[name] = 'OML'
-- AND c.[max_length]=11
ORDER BY O.[name];
To get the Drop and Create sql I use SSMS and right click context menu (on the view in the Object Explorer), then go to 'Script View as', and 'DROP And CREATE To'.

Related

Use column set for ALL_SPARSE_COLUMNS in the Sql View

I have a table where I add sparse columns dynamically:
CREATE TABLE [dbo].[my_table](
[id] [BIGINT] NOT NULL,
[column_set] XML COLUMN_SET FOR ALL_SPARSE_COLUMNS)
I add sparse columns at runtime with the following SQL:
ALTER TABLE my_table ADD my_sparse_column ... SPARSE
I want to create the SQL view for this table:
CREATE VIEW [dbo].[v_my_view]
AS
SELECT v.*
FROM my_table v
However, I cannot query data from my sparse columns through the view:
SELECT my_sparse_column FROM v_my_view
Instead, I receive an error; the same query works fine when executed against the original table.
Is it possible to make it work?
This behaviour is documented for SPARSE columns when there is a COLUMN_SET present.
Warning:
Adding a column set changes the behavior of SELECT * queries. The query will return the column set as an XML column and not return the individual sparse columns. Schema designers and software developers must be careful not to break existing applications. Individual sparse columns can still be queried by name in a SELECT statement.
So the view will never contain that column unless you select it by name in the view definition, rather than relying on select *.
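For example, a sketch of a view that names the sparse column explicitly (v_my_view_explicit is a made-up name, and it assumes my_sparse_column has already been added to the table):
CREATE VIEW [dbo].[v_my_view_explicit]
AS
SELECT v.[id], v.[my_sparse_column]
FROM my_table v;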
There is another issue that you would get even if it wasn't SPARSE.
You are adding the column after creating the view.
You need to then run the following statement:
EXEC sp_refreshview N'dbo.v_my_view';
When creating a view, the view is parsed into a compiled expression tree (without any optimizations). Then, when you use the view, the compiler does not simply dump the view text into the outer query. Instead, it parses the outer query into an expression tree, and uses the expression tree from the view in the correct place.
So when you add a column to the table, the view's expression tree is not updated, and you need to refresh the view definition.
You also need to rebuild any stored procedures which access this table or the view, for the same reason.
EXEC sp_refreshsqlmodule N'dbo.YourProc';
db<>fiddle

How to dynamically exclude non-copyable fields in trigger tables

Background: I am trying to write an AFTER UPDATE trigger which stores the changed values dynamically in another table. The trigger should be generic, easy to transfer to other tables, and should keep working if I add additional columns. (If my whole code is required to solve this, I'll update the question.)
While trying to do this, I encountered the following issue: I want to store the inserted table in a temporary table, which I do this way:
SELECT *
INTO #tempINSERTED
FROM INSERTED
But the original table contains both ntext and timestamp columns, which aren't allowed in temporary tables.
Another approach I tried was looping through INFORMATION_SCHEMA.COLUMNS and building a SQL statement as a string that excludes the non-copyable columns, but this way I cannot access the inserted table - I already figured out that I cannot access inserted if I use sp_executesql.
So my question: is there a way to access the inserted table and exclude non-copyable columns such as ntext, text, and image?
Thanks in advance
You want the triggers to run fast. So the better approach would be to generate the create trigger code rather than looping through the fields in the trigger itself. Then if the table schema changes you will need to regenerate the trigger.
For your #tempINSERTED table you can use nvarchar(max) in place of ntext, varchar(max) in place of text, and varbinary(max) in place of image. You can also use binary(8) or bigint in place of timestamp.
I would suggest using a table variable instead of a #temp table, i.e.:
declare @tempTable table (
fieldname int -- and so on
)
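To generate the column list for such a trigger, a rough sketch along these lines could work, run once when (re)generating the trigger code (the table name is a placeholder):
SELECT STUFF((
SELECT ', ' + QUOTENAME(COLUMN_NAME)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'my_table'
AND DATA_TYPE NOT IN ('text', 'ntext', 'image')
FOR XML PATH('')
), 1, 2, '') AS column_list;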

How To change the column order of An Existing Table in SQL Server 2008

I have a situation where I need to change the order of the columns / add new columns to an existing table in SQL Server 2008.
Existing column order:
MemberName
MemberAddress
Member_ID(pk)
and I want this order
Member_ID(pk)
MemberName
MemberAddress
I found an answer for this:
1. In SQL Server Management Studio go to Tools → Options → Designers → Table and Database Designers and unselect "Prevent saving changes that require table re-creation".
2. Open the table in design view, drag the columns up or down into the order you want, and save your changes.
It is not possible with an ALTER statement. If you wish to have the columns in a specific order, you will have to create a new table, use INSERT INTO newtable (col_x, col_a, col_b) SELECT col_x, col_a, col_b FROM oldtable to transfer the data from the old table to the new one, delete the old table, and rename the new table to the old table's name.
This is not necessarily recommended because it does not matter which order the columns are in the database table. When you use a SELECT statement, you can name the columns and have them returned to you in the order that you desire.
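As a rough sketch of that approach (the table name and data types are assumed for illustration, and constraints/indexes other than the primary key are omitted):
CREATE TABLE dbo.Member_new (
Member_ID int NOT NULL PRIMARY KEY,
MemberName varchar(100) NULL,
MemberAddress varchar(200) NULL
);
INSERT INTO dbo.Member_new (Member_ID, MemberName, MemberAddress)
SELECT Member_ID, MemberName, MemberAddress FROM dbo.Member;
DROP TABLE dbo.Member;
EXEC sp_rename 'dbo.Member_new', 'Member';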
If your table doesn't have any records you can just drop and recreate the table.
If it has records, you can do it using SQL Server Management Studio:
Just right-click your table, click Design, arrange the columns by dragging the fields into the order that you want, then click save.
I tried this and don't see any way of doing it with ALTER, so here is my approach:
1. Right-click the table, choose Script Table as CREATE, and keep the script open in a query window.
2. Rename the original table:
EXEC sp_rename 'Employee', 'Employee1' -- original table name is Employee
3. Run the CREATE script for Employee, making sure you arrange the columns in the order you need.
4. Copy the data across, listing the columns explicitly so the new column order doesn't matter:
INSERT INTO Employee (Name, Company) SELECT Name, Company FROM Employee1
5. Drop the renamed table:
DROP TABLE Employee1
Relying on column order is generally a bad idea in SQL. SQL is based on Relational theory where order is never guaranteed - by design. You should treat all your columns and rows as having no order and then change your queries to provide the correct results:
For Columns:
Try not to use SELECT *, but instead specify the order of columns in the select list as in: SELECT Member_ID, MemberName, MemberAddress from TableName. This will guarantee order and will ease maintenance if columns get added.
For Rows:
Row order in your result set is only guaranteed if you specify the ORDER BY clause.
If no ORDER BY clause is specified the result set may differ as the Query Plan might differ or the database pages might have changed.
This can be an issue when using Source Control and automated deployments to a shared development environment. Where I work we have a very large sample DB on our development tier to work with (a subset of our production data).
Recently I did some work to remove one column from a table and then add some extra ones on the end. I then had to undo my column removal, so I re-added it at the end, which means the table and all references are correct in the environment, but the Source Control automated deployment no longer works because it complains about the table definition changing.
The real problem here is that the table + indexes are ~120GB and the environment only has ~60GB free so I'll need to either:
a) Rename the existing columns which are in the wrong order, add new columns in the right order, update the data then drop the old columns
OR
b) Rename the table, create a new table with the correct order, insert to the new table from the old and delete from the old as I go along
The SSMS/TFS schema compare option of using a temp table won't work because there isn't enough room on disk to do it.
I'm not trying to say this is the best way to go about things or that column order really matters, just that I have a scenario where it is an issue, and I'm sharing the options I've thought of to fix it.
Note that the following is MySQL syntax, not SQL Server - SQL Server's ALTER TABLE has no FIRST or AFTER clause. In MySQL, to move the id column into first position:
ALTER TABLE `student` CHANGE `id` `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT FIRST;
or to place it after another column:
ALTER TABLE `student` CHANGE `id` `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT AFTER `column_name`;

Is it bad to use ALTER TABLE to resize a varchar column to a larger size?

I need a simple resize of a column from VARCHAR(36) to VARCHAR(40).
If you try to use SQL Server Enterprise Manager, the script it generates is effectively creating a new table with the new structure, inserting all of the data from the existing table into it, dropping the existing table, renaming the new table, and recreating any indexes.
If you read the documentation (and many online resources including SO), you can use an ALTER statement for the resize.
Does the ALTER affect the way the data is stored in any way? Indexes? Statistics? I want to avoid performance hits because of this modification due to the fact that the table can get large.
Just use ALTER TABLE. SSMS is a bit, er, stupid sometimes
You'll need to drop and recreate dependent constraints (FK, unique, index, check etc)
However, this is only a metadata change and will be very quick for any size table (unless you also change NOT NULL to NULL or varchar to nvarchar or such)
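A minimal sketch of that, with made-up table and column names (state the nullability explicitly so it isn't reset to the session default):
ALTER TABLE dbo.MyTable ALTER COLUMN MyCol varchar(40) NOT NULL;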
No, ALTER TABLE (http://msdn.microsoft.com/de-de/library/ms190273.aspx) is how Microsoft intended this kind of change to be made.
And if you do not add extra options to your command, no indexes or statistics should be harmed.
There is also no possibility of data loss, because you are only making the column bigger.
Everything should be fine.
Changes to database structure should NEVER be made using SSMS on a production environment, for just the reason you brought up: it can destroy performance in a large table. ALTER TABLE is the preferred method; it is faster and can be stored in source control as a change to push to production after testing.
The following should be a better way to handle this:
IF EXISTS (SELECT 1
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = '<tablename>'
AND COLUMN_NAME = '<field>')
BEGIN
ALTER TABLE <tablename> ALTER COLUMN [<field>] varchar(xxxx) NULL
END
ELSE
BEGIN
PRINT 'Column <field> not found on <tablename>'
END

Is there something like a "column symlink" in Oracle?

I would like to have a column in my DB accessible via two column names temporarily.
Why? The column name was badly chosen, I would like to refactor it. As I want my webapp to remain stable while changing the column name, it would be good to
have a (let's call it) symlink named better_column_name pointing to the column bad_column_name
change the webapplication to use better_column_name
drop the symlink and rename column to better_column_name
"Refactoring Databases" suggests to actually add a second column which is synchronized on commit in order to achieve this. I am just hoping that there might be an easier way with Oracle, with less work and less overhead.
As long as you have code that uses both column names, I don't see a way to get around the fact that you'll have two (real) columns in that table.
I would add the new column with the correct name and then create a trigger that checks which column has been modified and updates the "other" column correspondingly. So whatever is being updated, the value is synch'ed with the other column.
Once all the code that uses the old column has been migrated, remove the trigger and drop the old column.
Edit
The trigger would do something like this:
CREATE OR REPLACE TRIGGER ...
BEFORE UPDATE OF bad_column_name, better_column_name ON the_table
FOR EACH ROW
BEGIN
IF UPDATING ('BAD_COLUMN_NAME') THEN
:new.better_column_name := :new.bad_column_name;
END IF;
IF UPDATING ('BETTER_COLUMN_NAME') THEN
:new.bad_column_name := :new.better_column_name;
END IF;
END;
The order of the IF statements controls which change has a "higher priority" in case someone updated both columns at the same time.
Rename the table:
alter table mytable rename to mytable_old;
Create a view with the original tablename with both bad_column_name and better_column_name that point to the same column (and of course all the other columns):
create or replace view mytable as
select column1
, column2
, ...
, bad_column_name
, bad_column_name better_column_name
from mytable_old
;
Since this view is updatable by default (I assume here that mytable has a primary key), you can insert/update/delete from the view and it doesn't matter if you use bad_column_name or better_column_name.
After the refactoring, drop the view and rename the table and column:
drop view mytable;
alter table mytable_old rename column bad_column_name to better_column_name;
alter table mytable_old rename to mytable;
The best solution to this is only available in Oracle 11g Release 2: Edition-based Redefinition. This really cool feature allows us to maintain different versions of database tables and PL/SQL code, using special triggers and views.
Essentially this is Oracle's built-in implementation of #AHorseWithNoName's suggestion.
You can create a view for the table and port your application to use that view instead of the table.
create table t (bad_name varchar2(10), c2 varchar2(10));
create view vt as select bad_name AS good_name, c2 from t;
insert into vt (good_name, c2) values ('blub', 'blob');
select * from t;
select * from vt;
If you're on 11g you could look at using a virtual column. I'd probably be tempted to change the order slightly; rename the real column and create the virtual one using the old (bad) name, which can then be dropped at leisure. You may be restricted, of course, and there may be implications on other objects being invalidated that make this order less suitable for you.
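A rough sketch of that order, assuming Oracle 11g (names are placeholders, and note the virtual column is read-only, so any code that writes to the old name would still need to be fixed first):
alter table mytable rename column bad_column_name to better_column_name;
alter table mytable add (bad_column_name as (better_column_name));
-- later, once the application only uses better_column_name:
-- alter table mytable drop column bad_column_name;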
