How to update column_name in information_schema.columns - sql-server

Unable to update the column_name column in information_schema.columns
I have a table named 'knd' in MS-SQL server. Now I want to alter the column names of all the columns in this table in this way:
for example, my column names in this table are: Fuel category, fuel type, end date, start date
I want to update these names to [Fuel category], [fuel type], [end date], [start date], i.e. the column names must include [] and the update should be done in one shot.
What I have tried:
update INFORMATION_SCHEMA.COLUMNS
Set COLUMN_NAME = CONCAT('[',COLUMN_NAME,']')
where TABLE_NAME = 'knd'
I get the below error:
Ad hoc updates to system catalogs are not allowed.
I tried to reconfigure with override as below, but it didn't work:
exec sp_configure 'allow updates','1';
go
reconfigure with override
go
Even if I have to use exec sp_rename, how can I do it for all columns in one shot? I believe using sp_rename requires more manual intervention, as my column names might change tomorrow.
Can someone please help to accomplish this?

First: This is a terrible idea, as everyone wrote in the comments. Adding square brackets to column names will only force you to refer to the columns with doubled square brackets - to refer to a column named [fuel type] you will have to write [[fuel type]]].
Second: You can't directly update system tables or the views that rely on them. Everything in the sys schema and in the INFORMATION_SCHEMA schema is read-only. To rename a column in a view, you must write an ALTER VIEW statement or use sp_rename. To rename a column in a table, you must write an ALTER TABLE statement or use sp_rename.
That being said, it's best to first find all objects that depend on the column you want to rename, because renaming a column will not rename every reference to it, so you might break things when renaming.
You can query the built-in table-valued function sys.dm_sql_referencing_entities to get the dependencies of an object in SQL Server.
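For example, a minimal dependency check might look like this (assuming the table lives in the dbo schema):
-- Lists objects (views, procedures, functions) that reference dbo.knd
-- and would need review before any column is renamed.
SELECT referencing_schema_name, referencing_entity_name, referencing_class_desc
FROM sys.dm_sql_referencing_entities('dbo.knd', 'OBJECT');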

QUOTENAME does what I need in my case, as suggested by @jeroen-mostert in one of the comments above.
Below is my simple code snippet to perform this for all the columns in my table set.
SELECT STUFF((SELECT N',' + QUOTENAME(C.COLUMN_NAME)
              FROM INFORMATION_SCHEMA.COLUMNS C
              WHERE C.TABLE_NAME = 'knd'
              FOR XML PATH(N''), TYPE).value(N'.', N'nvarchar(MAX)'), 1, 1, N'')
Result is as follows:
[Start Date_(MM-DD-YYYY)],[End Date_(MM-DD-YYYY)],[mode],[scac],[fuel category],[Fuel Type],[Base],[Escalator],[Surcharge],[FSC # $3.2],[Step],[Co_ID]
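For what it's worth, on SQL Server 2017 and later STRING_AGG can replace the FOR XML PATH trick; a minimal equivalent sketch for the same 'knd' table:
SELECT STRING_AGG(QUOTENAME(C.COLUMN_NAME), ',')
FROM INFORMATION_SCHEMA.COLUMNS C
WHERE C.TABLE_NAME = 'knd';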

Related

How can I remove tool generated columns named createdby updatedat etc

I used a Redgate tool to synchronize data from a SQL Server database, and in the process, the tool created four new columns in each table with names like createdby, updatedby, etc.
Now that the data is in sync, I don't want these columns anymore.
Is there a simple way, maybe a script, to remove these columns?
You can drop the columns by running the following statement
ALTER TABLE table_name
DROP COLUMN column_name;
https://www.sqlservertutorial.net/sql-server-basics/sql-server-alter-table-drop-column/
EDIT:
As Dale suggested, the intention might be to have a way to drop these columns en masse, so here's an update:
I tend to generate code for tasks that don't have to be fully automated but need to be relatively easy to repeat. If I had dozens or hundreds of tables with extra columns that I wanted to remove, I would write a query similar to the one below, then copy the results from the lower pane in SSMS and execute the resulting script.
select 'alter table ' + quotename(table_schema) + '.' + quotename(table_name) + ' drop column ' + quotename(column_name)
from information_schema.columns
where 1=1
and column_name in ('createdby', 'updatedby')
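For illustration only (hypothetical table names; the real output depends on your schema), the query emits statements such as:
alter table [dbo].[Customers] drop column [createdby]
alter table [dbo].[Customers] drop column [updatedby]
alter table [dbo].[Orders] drop column [createdby]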
Here is an alternative solution, not as good as @dalek's:
Import the schema into Visual Studio using a new database project and the Schema Compare tool. Then do a replace-all using the regexes below, replacing with nothing.
This removes the unwanted column
\[CreatedAt\][^\,.]+\,
This removes the dangling commas that preceded the unwanted columns at the end of each table's CREATE statement:
\,([\s\r][\s]+)+\)

Failing to understand how to select all rows based on table- and database-name

WHAT I EXPECT:
I want to create a Job in my SQL Server Agent that allows me to fire off a stored procedure to clean up a particular table. The stored procedure would take two parameters: TableName and Days.
TableName would be the name of the table I'm looking for and Days would be how far back I wish to delete records.
WHAT I'VE DONE:
After having looked around online I've found sources on how to see if a User Database holds the supplied TableName:
SELECT *
FROM INFORMATION_SCHEMA.Tables
WHERE TABLE_NAME = @TableName
This results in a few rows looking a bit like this:
TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE
Database_A    | table_schema | table_A    | table_type
WHAT I DON'T UNDERSTAND:
How can I use the resulting rows of the previous query to find all rows of the supplied @TableName in a particular database? In pseudocode:
SELECT * FROM table_A WHERE database = database_A
I know I need to use a cursor somehow, that's not the problem.
What I'm simply struggling to understand is how I can use the database name and the table name to find the rows of the table in a particular database.
In my case I've got 10 or so databases that need to be iterated through to find the initial dataset (all user databases where @TableName exists), and then a secondary query to find all rows of the @TableName in the database that the cursor is currently pointing at.
You have to do SELECT * FROM <database_name>..table_A, but you can't parameterize database and table names in plain T-SQL. You would have to generate the statement as a string and execute it as dynamic SQL.
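A rough dynamic SQL sketch of such a procedure, not a definitive implementation: it assumes the tables live in the dbo schema and that each has a datetime column named CreatedDate to filter on; adjust both to your schema.
CREATE PROCEDURE dbo.usp_CleanupTable
    @TableName sysname,
    @Days      int
AS
BEGIN
    DECLARE @sql nvarchar(max) = N'';

    -- For every user database, build a statement that deletes old rows only
    -- if the table actually exists there (the existence check uses the
    -- three-part INFORMATION_SCHEMA name of each database).
    SELECT @sql = @sql + N'
        IF EXISTS (SELECT 1 FROM ' + QUOTENAME(d.name) + N'.INFORMATION_SCHEMA.TABLES
                   WHERE TABLE_NAME = ''' + REPLACE(@TableName, '''', '''''') + N''')
            DELETE FROM ' + QUOTENAME(d.name) + N'.dbo.' + QUOTENAME(@TableName) + N'
            WHERE CreatedDate < DATEADD(DAY, -' + CAST(@Days AS nvarchar(10)) + N', GETDATE());'
    FROM sys.databases d
    WHERE d.database_id > 4;   -- skip master, tempdb, model, msdb

    EXEC (@sql);
END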

SQL How to SELECT specific fields from tables using a table of table names

SO,
I am trying to find a (messy?) solution to an even more messy problem. I have a SQL Server 2014 database which, in part, stores data from another software package but also stores data for me. The software creates a table with specific fields for each set of data - a Name and a Geometry field. For example, one might contain cities (dtCitiesData), another contains roads (dtRoadsData), another contains states (dtStates), etc. I also have a table (dtSpatialDataTables) which stores the names of the tables which store the data I want. That table only has 2 fields: ID and TableName.
I would like to create a SELECT statement which queries dtSpatialDataTables for all entries, then queries all tables with the name corresponding to each TableName result, and SELECTs Name and Geometry from them.
In pseudocode, effectively I want to do this:
SELECT TableName FROM dtSpatialDataTables
FOREACH TableName :
SELECT Name, Geometry FROM (TableName)
I can do this easily in PHP via a first query against dtSpatialDataTables and then a loop of queries to each of the returned TableNames, but I want to know if this is possible via SQL directly.
In reality, what I want to do is create a VIEW with this query so I can query the VIEW directly rather than soak up processing time on potentially lots of queries.
Is this possible? Unfortunately, my Google-ing doesn't turn up any meaningful results.
Thanks everyone!
PS: I figure this is messy and not the way this should be done. But I have no choice in how the software puts data in my database. I simply have to use what I get. So... whether this is the "right" way or the "wrong" way, I need a solution. :)
You could do something like this using dynamic SQL:
CREATE PROCEDURE dbo.usp_SpatialData_GetByID
(
    @ID INT
)
AS
BEGIN
    DECLARE @SQL NVARCHAR(MAX),
            @Selects NVARCHAR(MAX) = 'SELECT Name, Geometry, ''<<TableName>>'' AS Source FROM <<TableName>>'

    SELECT @SQL = COALESCE(@SQL + ' UNION ALL ', '') + REPLACE(@Selects, '<<TableName>>', TableName)
    FROM dtSpatialDataTables
    WHERE ID = @ID

    EXEC(@SQL)
END
GO
I feel like you left out filtering of the Geometry tables somewhere, so you might have to add a filter to the @Selects statement.
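A hypothetical call, assuming dtSpatialDataTables has a row with ID = 1:
EXEC dbo.usp_SpatialData_GetByID @ID = 1;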

Searching for the existence of one column or another gives me an error

IF EXISTS( SELECT * FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'SiteInformation'
AND COLUMN_NAME = 'Number_Injectors')
BEGIN
SELECT [Number_Injectors] as Injectors
FROM [BLEND].[dbo].[SiteInformation]
END
ELSE
BEGIN
IF EXISTS( SELECT * FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'SiteInformation'
AND COLUMN_NAME = 'Injectors')
BEGIN
SELECT [Injectors] as Injectors
FROM [BLEND].[dbo].[SiteInformation]
END
END
The basic premise is that I have Visual Studio code that references a table called SiteInformation from different servers to collect information concerning a certain piece of machinery. The thing is, I found out that a couple of those servers have different column names (Injectors and Number_Injectors). The data variable Injectors from Visual Studio is looking for columns named Injectors to collect information. When it comes to a SiteInformation table that has a column named Number_Injectors instead of Injectors, the value that is returned is NULL. Number_Injectors and Injectors are one and the same except in name.
I checked Stack Overflow and found a topic on how to check if a column exists and created the code mentioned above. The IF EXISTS portion of the code works fine, but I get an error if I use this query on a server that doesn't contain one of the two column names.
Example: The SiteInformation table from Server A has the column Injectors. It would give me an error because of this code:
SELECT [Number_Injectors] as Injectors
FROM [BLEND].[dbo].[SiteInformation]
Likewise, the SiteInformation table from Server B has the column Number_Injectors. It would give me an error because of this code:
SELECT [Injectors] as Injectors
FROM [BLEND].[dbo].[SiteInformation]
I am a bit lost on how to correct it. It seems like both SELECT queries are run at the same time despite the IF EXISTS part. Any suggestions will be helpful.
The SQL compiler is going to try to validate both select statements, so you'd need to "hide" them from the compiler by embedding them in an EXEC like this:
EXEC ('SELECT [Number_Injectors] as Injectors
FROM [BLEND].[dbo].[SiteInformation]')
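Putting it together, a sketch of the original batch with each branch hidden inside EXEC (same table and column names as above):
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS
           WHERE TABLE_NAME = 'SiteInformation'
             AND COLUMN_NAME = 'Number_Injectors')
BEGIN
    EXEC ('SELECT [Number_Injectors] as Injectors FROM [BLEND].[dbo].[SiteInformation]')
END
ELSE IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS
                WHERE TABLE_NAME = 'SiteInformation'
                  AND COLUMN_NAME = 'Injectors')
BEGIN
    EXEC ('SELECT [Injectors] as Injectors FROM [BLEND].[dbo].[SiteInformation]')
END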

Is there any way to ALTER a column to switch off 'NOT NULL'?

Can this be done in bulk too? So that all columns in the table can be set to switch off the 'NOT NULL' flag?
You should be able to use an ALTER TABLE xxx ALTER COLUMN statement to redefine the column.
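For a single column, a minimal sketch (hypothetical table and column names; repeat the column's full current data type, changing only NOT NULL to NULL):
ALTER TABLE dbo.MyTable ALTER COLUMN MyColumn varchar(50) NULL;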
If this is a one-time thing you need to run, you could use a trick: write a query that pulls the column names for the table from the system catalog views and generates your ALTER statements. You copy the results of the query (your 15 or however many ALTER statements) into your script and just run that. I don't have much MSSQL experience nor an environment to test on right now, but something along the lines of:
SELECT
'ALTER TABLE ' + table_name + ' ALTER COLUMN ' + column_name + ' ' + data_type
FROM INFORMATION_SCHEMA.Columns
WHERE TABLE_NAME = 'xxx'
where you will need to manipulate the data_type part to add/remove the NULL constraint text
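A slightly fuller, untested sketch of that generation query, adding the length for string types and the NULL keyword (extend it for decimal precision/scale and other types as needed):
SELECT
    'ALTER TABLE ' + QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME) +
    ' ALTER COLUMN ' + QUOTENAME(COLUMN_NAME) + ' ' + DATA_TYPE +
    CASE
        WHEN CHARACTER_MAXIMUM_LENGTH = -1 THEN '(MAX)'
        WHEN CHARACTER_MAXIMUM_LENGTH IS NOT NULL
            THEN '(' + CAST(CHARACTER_MAXIMUM_LENGTH AS varchar(10)) + ')'
        ELSE ''
    END + ' NULL;'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'xxx'
  AND IS_NULLABLE = 'NO';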
To do it in bulk, once...
Use SSMS designer to generate a script. This will rebuild your table (create a temp table, copy data, drop old table, rename temp table).
Otherwise, it's one at a time using ALTER TABLE...
Yes, you can do it. See Books Online.
No, it can't be done in bulk, but you could execute several statements in a single query.
Get a list of the columns and a template that has the required SQL and use some tool to create the statements for you.
I have done this in Excel before, but you could write a real program using your language of choice.
When the number of tables is low enough I'm using SSMSE (SQL Server Management Studio Express), by entering design mode on each table and checking Allow Nulls on the required columns.
For a larger number of tables, try the answer provided by ChrisCM.
