I want to fetch all the columns except one. Can anybody help me get the result without writing out every column name? Listing them is fine for a small number of columns, but if the table has more than 100 columns it gets very lengthy.
For this you need to execute dynamic SQL. You can create a function that returns the column names, or you can do something like:
DECLARE @ColList varchar(1000), @SQLStatement varchar(4000)
SET @ColList = ''
SELECT @ColList = @ColList + Name + ' , ' FROM syscolumns WHERE id = OBJECT_ID('Table1') AND Name != 'Column20'
SELECT @SQLStatement = 'SELECT ' + SUBSTRING(@ColList, 1, LEN(@ColList) - 1) + ' FROM Table1'
EXEC(@SQLStatement)
Here is the link for this example:
http://social.msdn.microsoft.com/Forums/en-US/transactsql/thread/39eb0314-4c2f-4e07-84c8-e832499049f8
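On a newer version you can avoid the deprecated syscolumns view entirely. A minimal sketch using sys.columns and STRING_AGG (this assumes SQL Server 2017 or later, and reuses the table and column names from the example above):
DECLARE @ColList varchar(max), @SQLStatement varchar(max)
-- build the column list, skipping the unwanted column
SELECT @ColList = STRING_AGG(CONVERT(varchar(max), QUOTENAME(name)), ', ')
FROM sys.columns
WHERE object_id = OBJECT_ID('dbo.Table1') AND name <> 'Column20'
SET @SQLStatement = 'SELECT ' + @ColList + ' FROM dbo.Table1'
EXEC(@SQLStatement)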
If this is a frequent need, I'd create a view that contains the columns you're interested in.
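For example, a minimal sketch (the view name and column list are placeholders for your actual schema):
CREATE VIEW dbo.vw_Table1_NoColumn20
AS
SELECT Column1, Column2, Column3 -- every column except Column20
FROM dbo.Table1;
After that, SELECT * FROM dbo.vw_Table1_NoColumn20 returns everything except the excluded column.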
I don't believe this is possible.
This is not possible without writing another query to loop over the column names.
If you know which columns you need, you should SELECT them by name.
If not, you should SELECT *.
You have to list all the names, I'm afraid. Assuming this is a permanent database object (e.g. a table or view), then in Management Studio you can right-click the object in the tree view and choose Script Table As -> SELECT To to avoid typing them all.
Alternatively, drag the "Columns" folder onto your query window to get the comma-delimited list of column names added.
I'm using Microsoft SQL Server Management Studio.
I would like to add a new column to a table (altertable1), and name that column using the data from a cell (Date) of another table (stattable1).
DECLARE @Data nvarchar(20)
SELECT @Data = Date
FROM stattable1
WHERE Adat = 1
DECLARE @sql nvarchar(1000)
SET @sql = 'ALTER TABLE altertable1 ADD ' + @Data + ' nvarchar(20)'
EXEC (@sql)
Executing this, I get the following error and can't find out why:
"Incorrect syntax near '2021'."
The stattable1 looks like this:
Date       | Adat
2021-09-08 | 1
The 2021-09-08 value is generated daily as CONVERT(date, GETDATE()).
As Larnu said in a comment, this may not be your main problem, but if you want to do this you need to add [ ] around a column name that starts with a number.
Like this:
SET @sql = 'ALTER TABLE altertable1 ADD [' + @Data + '] nvarchar(20)'
And of course, naming columns by date or year is not a best practice.
The problem with your overall design is that you seem to be adding a column to the table every day. A table is not a spreadsheet and you should be storing data for each day in a row, not in a separate column. If your reports need to look that way, there are many ways to pivot the data so that you can handle that at presentation time without creating impossible-to-maintain technical debt in your database.
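For illustration, a hedged sketch of the row-per-day design (the table and column names here are my own invention, based on the columns shown in the question):
CREATE TABLE dbo.DailyStats
(
    StatDate date NOT NULL,
    Adat int NOT NULL,
    Value nvarchar(20) NULL,
    CONSTRAINT PK_DailyStats PRIMARY KEY (StatDate, Adat)
);

-- one INSERT per day instead of one ALTER TABLE per day:
INSERT dbo.DailyStats (StatDate, Adat, Value)
VALUES (CONVERT(date, GETDATE()), 1, N'some value');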
The problem with your current code is that 2021-09-08 is not a valid column name, both because it starts with a number and because it contains dashes. Even if you use a more language-friendly form like YYYYMMDD (see this article to see what I mean), it still starts with a number.
The best solution to the local problem is to not name columns that way. If you must, the proper way to escape it is to use QUOTENAME() (and not just manually slap [ and ] on either side):
DECLARE @Data nvarchar(20), @sql nvarchar(max);
SELECT @Data = Date
FROM dbo.stattable1
WHERE Adat = 1;
SET @sql = N'ALTER TABLE altertable1
ADD ' + QUOTENAME(@Data) + N' nvarchar(20);';
PRINT @sql;
--EXEC sys.sp_executesql @sql;
This also demonstrates your ability to debug a statement instead of trying to decipher the error message that came from a string you can't inspect.
Some other points to consider:
if you're declaring a string as nvarchar, and especially when dealing with SQL Server metadata, always use the N prefix on any literals you define.
always reference user tables with two-part names.
always end statements with statement terminators.
generally prefer sys.sp_executesql over EXEC().
some advice on dynamic SQL:
Protecting Yourself from SQL Injection - Part 1
Protecting Yourself from SQL Injection - Part 2
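To illustrate the sys.sp_executesql point with a parameterized call (this reuses the question's table; the parameter name is hypothetical):
DECLARE @sql nvarchar(max) = N'SELECT Date FROM dbo.stattable1 WHERE Adat = @Adat;';
EXEC sys.sp_executesql @sql, N'@Adat int', @Adat = 1;
Parameters passed this way are never concatenated into the statement text, which is the core of the injection advice above.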
Say I have columns is_return_foo, is_return_bar and is_return_baz.
I need to return the foo, bar, baz columns if any of the above are respectively set to true...
Is CASE WHEN the best option?
Something like:
SELECT
CASE is_return_foo WHEN true THEN foo ELSE null END,
CASE is_return_bar WHEN true THEN bar ELSE null END,
CASE is_return_baz WHEN true THEN baz ELSE null END,
another_column
FROM
my_table
Update
Basically I want to return columns based on on/off flags. So if flag A is on then return the column A value, and if flag B is on then return the column B value.
Maybe we could say it's based on permissions, but more fine-grained.
So say you have an email message with to, from, body, headers, read, read time.
A standard user will only see to, from and body, while a premium customer might be configured to also read headers, read and read time.
But I would like to do it per column instead of per group of columns.
If it were a group of columns then we could easily say CASE WHEN premium THEN headers, read, read time.
Update 2
I think we can do group-based "permissions", so if you are a silver member you only see some fields, but if you are a gold member you see all fields.
Maybe something like this is the solution you are looking for:
SELECT
your columns here
FROM my_table
where COALESCE(is_return_foo,is_return_bar,is_return_baz) is not null
Dynamic TSQL and pivot tables work for this use case.
DECLARE @Columns nvarchar(max);
DECLARE @Sql nvarchar(max);
SELECT @Columns = COALESCE(@Columns + ', ', '') + QUOTENAME([Column]) FROM Permissions;
SET @Sql = '
SELECT pvt.*
FROM Data AS d
PIVOT (MIN(ColumnValue) FOR ColumnName IN (' + @Columns + ')) AS pvt';
EXEC sp_executesql @Sql;
sp_executesql Reference
Pivot Reference
The CASE statement is the only way I know of to accomplish what you are trying to do in your question...at least short of having a lot of IF/ELSE conditions that would duplicate the code for your base select statement (would make for a management nightmare). There might be better solutions available to you if we understood more about what these fields are and why this scenario exists though. My gut tells me these would be better as separate queries, but it's hard to tell based on "foo" and "bar" type examples.
So,
I am trying to find a (messy?) solution to an even messier problem. I have a SQL Server 2014 database which, in part, stores data from another software package but also stores data for me. The software creates a table with specific fields for each set of data - a Name and a Geometry field. For example, one might contain cities (dtCitiesData), another contains roads (dtRoadsData), another contains states (dtStates), etc. I also have a table (dtSpatialDataTables) which stores the names of the tables holding the data I want. That table only has 2 fields: ID and TableName.
I would like to create a SELECT statement which queries dtSpatialDataTables for all entries, then queries all tables with the name corresponding to each TableName result, and SELECTs Name and Geometry from them.
In pseudocode, effectively I want to do this:
SELECT TableName FROM dtSpatialDataTables
FOREACH TableName :
SELECT Name, Geometry FROM (TableName)
I can do this easily in PHP via a first query against dtSpatialDataTables and then a loop of queries against each of the returned TableNames, but I want to know if this is possible in SQL directly.
In reality, what I want to do is create a VIEW with this query so I can query the VIEW directly rather than soak up processing time on potentially many queries.
Is this possible? Unfortunately, my Google-ing doesn't turn up any meaningful results.
Thanks everyone!
PS: I figure this is messy and not the way this should be done. But I have no choice in how the software puts data in my database. I simply have to use what I get. So... whether this is the "right" way or the "wrong" way, I need a solution. :)
You could do something like this using dynamic SQL:
CREATE PROCEDURE dbo.usp_SpatialData_GetByID
(
    @ID INT
)
AS
BEGIN
    DECLARE @SQL NVARCHAR(MAX),
            @Selects NVARCHAR(MAX) = 'SELECT Name, Geometry, ''<<TableName>>'' AS Source FROM <<TableName>>'
    SELECT @SQL = COALESCE(@SQL + ' UNION ALL ', '') + REPLACE(@Selects, '<<TableName>>', TableName)
    FROM dtSpatialDataTables
    WHERE ID = @ID
    EXEC(@SQL)
END
GO
I feel like you left out filtering of the Geometry tables somewhere, so you might have to add a filter to the @Selects statement.
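If you do want the VIEW you mentioned rather than a procedure, the same string-building trick can create it. This is a sketch only; it assumes every listed table really has Name and Geometry columns, and the view has to be rebuilt whenever dtSpatialDataTables changes:
DECLARE @SQL NVARCHAR(MAX),
        @Selects NVARCHAR(MAX) = 'SELECT Name, Geometry, ''<<TableName>>'' AS Source FROM <<TableName>>'
SELECT @SQL = COALESCE(@SQL + ' UNION ALL ', '') + REPLACE(@Selects, '<<TableName>>', TableName)
FROM dtSpatialDataTables
SET @SQL = 'CREATE VIEW dbo.vwAllSpatialData AS ' + @SQL
EXEC(@SQL)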
The year is 2010.
SQL Server licenses are not cheap.
And yet, this error still does not indicate the row or the column or the value that produced the problem. Hell, it can't even tell you whether it was "string" or "binary" data.
Am I missing something?
A quick-and-dirty way of fixing these is to select the rows into a new physical table like so:
SELECT * INTO dbo.MyNewTable FROM <the rest of the offending query goes here>
...and then compare the schema of this table to the schema of the table into which the INSERT was previously going - and look for the larger column(s).
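A hedged sketch of that comparison using INFORMATION_SCHEMA (both table names are placeholders):
SELECT n.COLUMN_NAME,
       n.CHARACTER_MAXIMUM_LENGTH AS NewTableLength,
       t.CHARACTER_MAXIMUM_LENGTH AS TargetTableLength
FROM INFORMATION_SCHEMA.COLUMNS AS n
JOIN INFORMATION_SCHEMA.COLUMNS AS t
    ON t.COLUMN_NAME = n.COLUMN_NAME
WHERE n.TABLE_NAME = 'MyNewTable'     -- the SELECT ... INTO result
  AND t.TABLE_NAME = 'MyTargetTable'  -- the table the INSERT was going into
  AND n.CHARACTER_MAXIMUM_LENGTH > t.CHARACTER_MAXIMUM_LENGTH;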
I realize that this is an old one. Here's a small piece of code that I use that helps.
What this does is return a table of the max lengths in the table you're trying to select from. You can then compare the field lengths to the max returned for each column and figure out which ones are causing the issue. Then it's just a simple query to clean up the data or exclude it.
DECLARE @col NVARCHAR(50)
DECLARE @sql NVARCHAR(MAX);
CREATE TABLE ##temp (colname nvarchar(50), maxVal int)
DECLARE oLoop CURSOR FOR
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'SOURCETABLENAME' AND TABLE_SCHEMA = 'dbo'
OPEN oLoop
FETCH NEXT FROM oLoop INTO @col;
WHILE (@@FETCH_STATUS = 0)
BEGIN
SET @sql = '
DECLARE @val INT;
SELECT @val = MAX(LEN(' + @col + ')) FROM dbo.SOURCETABLENAME;
INSERT INTO ##temp
( colname, maxVal )
VALUES ( N''' + @col + ''', -- colname - nvarchar(50)
@val -- maxVal - int
)';
EXEC(@sql);
FETCH NEXT FROM oLoop INTO @col;
END
CLOSE oLoop;
DEALLOCATE oLoop
SELECT * FROM ##temp
DROP TABLE ##temp;
Another way is to use binary search.
Comment out half of the columns in your code and try again. If the error persists, comment out half of the remaining half and try again. You will narrow your search down to just two columns in the end.
You could check the length of each inserted value with an IF condition, and if the value needs more width than the current column allows, truncate the value and throw a custom error.
That should work if you just need to identify which field is causing the problem. I don't know of a better way to do this, though.
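A minimal sketch of that idea (the 50-character limit and all the names here are assumptions):
DECLARE @value nvarchar(4000) = N'some incoming value that might be too long';
IF LEN(@value) > 50
BEGIN
    SET @value = LEFT(@value, 50);                          -- truncate to the column width
    RAISERROR(N'Value truncated to 50 characters.', 10, 1); -- custom warning, severity 10 does not abort
END
INSERT dbo.MyTargetTable (MyColumn) VALUES (@value);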
I recommend you vote for the enhancement request on Microsoft's site. It's been active for 6 years now, so who knows if Microsoft will ever do anything about it, but at least you can be a squeaky wheel: Microsoft Connect
For string truncation, I came up with the following solution to find the max lengths of all of the columns:
1) Select all of the data into a temporary table (supplying column names where needed), e.g.
SELECT col1
,col2
,col3_4 = col3 + '-' + col4
INTO #temp;
2) Run the following SQL Statement in the same connection (adjust the temporary table name if needed):
DECLARE @table VARCHAR(MAX) = '#temp'; -- change this to your temp table name
DECLARE @select VARCHAR(MAX) = '';
DECLARE @prefix VARCHAR(256) = 'MAX(LEN(';
DECLARE @suffix VARCHAR(256) = ')) AS max_';
DECLARE @nl CHAR(2) = CHAR(13) + CHAR(10);
SELECT @select = @select + @prefix + name + @suffix + name + @nl + ','
FROM tempdb.sys.columns
WHERE object_id = OBJECT_ID('tempdb..' + @table);
SELECT @select = 'SELECT ' + @select + '0' + @nl + 'FROM ' + @table
EXEC(@select);
It will return a result set with the column names prefixed with 'max_' and show the max length of each column.
Once you identify the faulty column you can run other select statements to find extra long rows and adjust your code/data as needed.
I can't think of a good way really.
I once spent a lot of time debugging a very informative "Division by zero" message.
Usually you comment out various pieces of output code to find the one causing problems.
Then you take the piece you found and make it return a value that indicates there's a problem instead of the actual value (in your case, replace the string output with LEN(output)). Then manually compare that to the length of the column you're inserting into.
From the line number in the error message, you should be able to identify the INSERT query that is causing the error. Modify it into a SELECT query and include AND LEN(your_expression_or_column_here) > CONSTANT_COL_INT_LEN for the various string columns in your query. Look at the output and it will give you the bad rows.
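For example (the column name and the 50-character limit are placeholders for your own schema):
SELECT *
FROM dbo.SourceTable
WHERE LEN(your_column) > 50 -- 50 = defined length of the destination column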
Technically, there isn't a row to point to because SQL didn't write the data to the table. I typically just capture the trace, run it in Query Analyzer (unless the problem is already obvious from the trace, which it may be in this case), and quickly debug from there with the ages-old "modify my UPDATE to a SELECT" method. Doesn't it really just break down to one of two things:
a) Your column definition is wrong, and the width needs to be changed
b) Your column definition is right, and the app needs to be more defensive
?
The best thing that worked for me was to put the rows into a temporary table first using SELECT ... INTO #temptable.
Then I took the max length of each column in that temp table, e.g. SELECT MAX(LEN(jobid)) AS JobId, ...
and then compared that to the field definitions in the source table.
It looks like #temptables created using dynamic SQL via the EXECUTE string method have a different scope and can't be referenced by "fixed" SQL in the same stored procedure.
However, I can reference a temp table created by one dynamic SQL statement in a subsequent dynamic SQL statement, but it seems that a stored procedure does not return a query result to a calling client unless the SQL is fixed.
A simple 2 table scenario:
I have 2 tables. Let's call them Orders and Items. Orders has a primary key of OrderId and Items has a primary key of ItemId. Items.OrderId is the foreign key that identifies the parent Order. An Order can have 1 to n Items.
I want to be able to provide a very flexible "query builder" type interface to the user to allow the user to select which Items he wants to see. The filter criteria can be based on fields from the Items table and/or from the parent Orders table. If an Item meets the filter condition, including any condition on the parent Order if one exists, the Item should be returned in the query as well as the parent Order.
Usually, I suppose, most people would construct a join between the Items table and the parent Orders table. I would like to perform 2 separate queries instead: one to return all of the qualifying Items and the other to return all of the distinct parent Orders. The reason is twofold, and you may or may not agree.
The first reason is that I need to query all of the columns in the parent Order table, and if I did a single query joining the Orders table to the Items table, I would be repeating the Order information multiple times. Since there are typically a large number of Items per Order, I'd like to avoid this because it would result in much more data being transferred to a fat client. Instead, as mentioned, I would like to return the two tables individually in a dataset and use the two tables within it to populate custom Order and child Items client objects. (I don't know enough about LINQ or Entity Framework yet. I build my objects by hand.) The second reason I would like to return two tables instead of one is because I already have another procedure that returns all of the Items for a given OrderId along with the parent Order, and I would like to use the same 2-table approach so that I can reuse the client code to populate my custom Order and Client objects from the 2 datatables returned.
What I was hoping to do was this:
Construct a dynamic SQL string on the client which joins the Orders table to the Items table and filters appropriately on each table, as specified by the custom filter created in the WinForms fat-client app. The SQL built on the client would have looked something like this:
TempSQL = "
INSERT INTO #ItemsToQuery (OrderId, ItemId)
SELECT
    Orders.OrderId, Items.ItemId
FROM
    Orders, Items
WHERE
    Orders.OrderId = Items.OrderId AND
    /* Some unpredictable Order filters go here */
    AND
    /* Some unpredictable Items filters go here */
"
Then, I would call a stored procedure:
CREATE PROCEDURE GetItemsAndOrders(@tempSql AS NVARCHAR(MAX))
AS
BEGIN
EXEC (@tempSql) -- to create the #ItemsToQuery table
SELECT * FROM Items WHERE Items.ItemId IN (SELECT ItemId FROM #ItemsToQuery)
SELECT * FROM Orders WHERE Orders.OrderId IN (SELECT DISTINCT OrderId FROM #ItemsToQuery)
END
The problem with this approach is that the #ItemsToQuery table, since it was created by dynamic SQL, is inaccessible from the following 2 static SQL statements, and if I change the static SQL to dynamic, no results are passed back to the fat client.
Three workarounds come to mind, but I'm looking for a better one:
1) The first SQL could be performed by executing the dynamically constructed SQL from the client. The results could then be passed as a table to a modified version of the above stored procedure. I am familiar with passing table data as XML. If I did this, the stored proc could then insert the data into a temporary table using static SQL which, because the table was not created by dynamic SQL, could then be queried without issue. (I could also investigate passing the new table type parameter instead of XML.) However, I would like to avoid passing up potentially large lists to a stored procedure.
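Something like this, perhaps (untested; the type and procedure names are invented, and it needs SQL Server 2008 or later):
CREATE TYPE dbo.ItemIdList AS TABLE (OrderId int NOT NULL, ItemId int NOT NULL);
GO
CREATE PROCEDURE dbo.GetItemsAndOrdersFromList
    @ItemsToQuery dbo.ItemIdList READONLY
AS
BEGIN
    SELECT * FROM Items WHERE ItemId IN (SELECT ItemId FROM @ItemsToQuery);
    SELECT * FROM Orders WHERE OrderId IN (SELECT DISTINCT OrderId FROM @ItemsToQuery);
END
GO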
2) I could perform all the queries from the client.
The first would be something like this:
SELECT Items.* FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND (dynamic filter)
SELECT Orders.* FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND (dynamic filter)
This still provides me with the ability to reuse my client-side object-population code because the Orders and Items continue to be returned in two different tables.
I have a feeling, too, that I might have some options using a table data type within my stored proc, but that is also new to me and I would appreciate a little bit of spoon feeding on that one.
If you even scanned this far into what I wrote, I am surprised, but if so, I would appreciate any of your thoughts on how to best accomplish this.
You need to create your table first; then it will be available in the dynamic SQL.
This works:
CREATE TABLE #temp3 (id INT)
EXEC ('insert #temp3 values(1)')
SELECT *
FROM #temp3
This will not work:
EXEC (
'create table #temp2 (id int)
insert #temp2 values(1)'
)
SELECT *
FROM #temp2
In other words:
Create temp table
Execute proc
Select from temp table
Here is a complete example:
CREATE PROC prTest2 @var VARCHAR(100)
AS
EXEC (@var)
GO
CREATE TABLE #temp (id INT)
EXEC prTest2 'insert #temp values(1)'
SELECT *
FROM #temp
1st Method - Enclose multiple statements in the same Dynamic SQL Call:
DECLARE @DynamicQuery NVARCHAR(MAX)
SET @DynamicQuery = 'Select * into #temp from (select * from tablename) alias
select * from #temp
drop table #temp'
EXEC sp_executesql @DynamicQuery
2nd Method - Use Global Temp Table:
(Careful: you need to take extra care with global temp tables, as they are visible to all sessions.)
IF OBJECT_ID('tempdb..##temp2') IS NULL
BEGIN
EXEC (
'create table ##temp2 (id int)
insert ##temp2 values(1)'
)
SELECT *
FROM ##temp2
END
Don't forget to drop the ##temp2 object manually once you're done with it:
IF (OBJECT_ID('tempdb..##temp2') IS NOT NULL)
BEGIN
DROP Table ##temp2
END
Note: Don't use this second method if you don't know the full structure of the database.
I had the same issue that @Muflix mentioned. When you don't know the columns being returned, or they are being generated dynamically, what I've done is create a global table with a unique id and then delete it when I'm done with it. That looks something like what's shown below:
DECLARE @DynamicSQL NVARCHAR(MAX)
DECLARE @DynamicTable VARCHAR(255) = 'DynamicTempTable_' + CONVERT(VARCHAR(36), NEWID())
DECLARE @DynamicColumns NVARCHAR(MAX)
--Get "@DynamicColumns", example: SET @DynamicColumns = '[Column1], [Column2]'
SET @DynamicSQL = 'SELECT ' + @DynamicColumns + ' INTO [##' + @DynamicTable + ']' +
    ' FROM [dbo].[TableXYZ]'
EXEC sp_executesql @DynamicSQL
SET @DynamicSQL = 'IF OBJECT_ID(''tempdb..##' + @DynamicTable + ''' , ''U'') IS NOT NULL ' +
    ' BEGIN DROP TABLE [##' + @DynamicTable + '] END'
EXEC sp_executesql @DynamicSQL
Certainly not the best solution, but this seems to work for me.
I would strongly suggest you have a read through http://www.sommarskog.se/arrays-in-sql-2005.html
Personally, I like the approach of passing a comma-delimited text list, then parsing it with a text-to-table function and joining to it. The temp table approach can work if you create the table first in the connection, but it feels a bit messier.
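For example, on SQL Server 2016 or later the built-in STRING_SPLIT does the parsing (on older versions you need your own split function; the ID list here is just an illustration):
DECLARE @ids nvarchar(max) = N'1,2,3';
SELECT i.*
FROM Items AS i
JOIN STRING_SPLIT(@ids, ',') AS s
    ON i.ItemId = CAST(s.value AS int);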
Result sets from dynamic SQL are returned to the client. I have done this quite a lot.
You're right about the issues with sharing data through temp tables, variables and things like that between the static SQL and the dynamic SQL it generates.
I think in trying to get your temp table working, you have probably got some things confused, because you can definitely get data from a SP which executes dynamic SQL:
USE SandBox
GO
CREATE PROCEDURE usp_DynTest(@table_type AS VARCHAR(255))
AS
BEGIN
DECLARE @sql AS VARCHAR(MAX) = 'SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = ''' + @table_type + ''''
EXEC (@sql)
END
GO
EXEC usp_DynTest 'BASE TABLE'
GO
EXEC usp_DynTest 'VIEW'
GO
DROP PROCEDURE usp_DynTest
GO
Also:
USE SandBox
GO
CREATE PROCEDURE usp_DynTest(@table_type AS VARCHAR(255))
AS
BEGIN
DECLARE @sql AS VARCHAR(MAX) = 'SELECT * INTO #temp FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = ''' + @table_type + '''; SELECT * FROM #temp;'
EXEC (@sql)
END
GO
EXEC usp_DynTest 'BASE TABLE'
GO
EXEC usp_DynTest 'VIEW'
GO
DROP PROCEDURE usp_DynTest
GO