Why is this procedure not working? - sql-server

This is my first question here. I am very new to SQL Server and T-SQL.
I would like to create a table with a column that is populated using data from another table. I thought I could use a SELECT in the column definition, but it is not allowed.
How can I do it?
It is very simple to create a view this way, but I would like to have a table, not a view.
It should look like
Column A, ColumnB,
Column C=select count(*) from [another table] where....
Could you please advise?

SELECT [COLUMN A], [COLUMN B], COUNT(*) AS [COLUMN C]
INTO [destination table]
FROM [another table]
WHERE ...
GROUP BY [COLUMN A], [COLUMN B]
You should use an alias. Note that COUNT(*) alongside non-aggregated columns requires the GROUP BY.

You create a table using the CREATE TABLE syntax, because you will need to define the field names and sizes; look the syntax up in Books Online. Do not ever use SELECT INTO unless you are creating a staging table for one-time use or a temp table; it is not a good choice for creating a new permanent table. Besides, you don't say where any of the other columns come from except column C, so it may be impossible to set up the correct field sizes from the initial insert. Frankly, you should take the time to think about what columns you need and what data types they should be; it is irresponsible to skip this for a table that will be used permanently.
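For instance, a minimal sketch with hypothetical names and types (adjust them to your real data):
-- Define the schema explicitly instead of letting SELECT INTO infer it.
CREATE TABLE dbo.Table1 (
    ColA VARCHAR(50) NOT NULL,
    ColB INT         NOT NULL,
    ColC INT         NOT NULL
);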
To populate it, you use an INSERT statement with a SELECT instead of the VALUES clause. If only column C comes from another table, it might be something like:
INSERT INTO table1 (ColA, ColB, ColC)
SELECT 'test', 10, COUNT(*)
FROM tableb
WHERE ...
If you have to get the data from multiple tables, then you may need a join.
If you need to maintain the computed column as the values change in TableB, then you may need to write triggers on TableB, or better, use a view instead of a separate table; a view is easier to develop and maintain, and less likely to be buggy or to create a data integrity problem.
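A minimal sketch of the view approach, assuming hypothetical names and that ColA correlates the two tables:
-- The view recomputes ColC on every read, so there is nothing to maintain.
CREATE VIEW dbo.vwTable1
AS
SELECT t.ColA,
       t.ColB,
       (SELECT COUNT(*) FROM dbo.TableB b WHERE b.ColA = t.ColA) AS ColC
FROM dbo.Table1 t;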

Related

Split field and insert rows in SQL Server trigger, when multiple rows are affected, without using a cursor

I have an INSERT trigger on a table where one field contains a comma-separated list of key-value pairs, each pair separated by a colon (:).
I can select this field with the two values into a temp table easily with this statement:
-- SAMPLE DATA FOR PRESENTATION ONLY
DECLARE @messageIds VARCHAR(2000) = '29708332:55197,29708329:54683,29708331:54589,29708330:54586,29708327:54543,29708328:54539,29708333:54538,29708334:62162,29708335:56798';
SELECT
    SUBSTRING(value, 1, CHARINDEX(':', value) - 1) AS MessageId,
    SUBSTRING(value, CHARINDEX(':', value) + 1, LEN(value)) AS DeviceId
INTO #temp_messages
FROM STRING_SPLIT(@messageIds, ',');
SELECT * FROM #temp_messages;
DROP TABLE #temp_messages;
The result will look like this
29708332 55197
29708329 54683
29708331 54589
29708330 54586
29708327 54543
29708328 54539
29708333 54538
29708334 62162
29708335 56798
From here I can join the temp table to other tables and insert some of the results into a third table.
Inside the trigger I can get the messageIds with a simple SELECT statement like
DECLARE @messageIds VARCHAR(2000) = (SELECT ProcessMessageIds FROM INSERTED)
Now I create the temp table (as described above) and process my
INSERT INTO <new_table> SELECT col1, col2, .. FROM #temp_messages
JOIN <another_table> ON ...
Unfortunately this will only work for single row inserts. As soon as there is more than one row, my SELECT ProcessMessageIds FROM INSERTED will fail, as there are multiple rows in the INSERTED table.
I can process the rows in a CURSOR but as far as I know CURSORS are a no-go in triggers and I should avoid them whenever it is possible.
Therefore my question is: is there another way to do this without using a CURSOR inside the trigger?
Before we get into the details of the solution, let me point out that you would have no such issues if you normalized your database, as @Larnu pointed out in the comment section of your question.
Your
DECLARE @messageIds VARCHAR(2000) = (SELECT ProcessMessageIds FROM INSERTED)
statement assumes that there will be a single value to be assigned to @messageIds and, as you have pointed out, this is not necessarily true.
Solution 1: Join with INSERTED rather than loading it into a variable
INSERT INTO t1
SELECT ...
FROM t2
JOIN T3
ON ...
JOIN INSERTED
ON ...
and then you can reach INSERTED.ProcessMessageIds without issues. This will no longer assume that a single value was used.
Solution 2: cursors
You can use a CURSOR, as you have already pointed out, but it's not a very good idea to use cursors inside a trigger; see https://social.msdn.microsoft.com/Forums/en-US/87fd1205-4e27-413d-b040-047078b07756/cursor-usages-in-trigger-in-sql-server?forum=aspsqlserver for a discussion.
Solution 3: insert a single line at a time
While this would not require a change in your trigger, it would require a change in how you insert and it would increase the number of db requests necessary, so I would advise you not to choose this approach.
Solution 4: normalize
See https://www.simplilearn.com/tutorials/sql-tutorial/what-is-normalization-in-sql
If you had a proper table rather than a table of composite values, you would have no such issues and you would have a much easier time processing the message IDs in general.
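A minimal sketch of what the normalized design might look like (hypothetical names; one key-value pair per row instead of one delimited string):
-- Hypothetical child table: one MessageId/DeviceId pair per row.
CREATE TABLE dbo.MessageDevice (
    ParentId  INT NOT NULL,  -- FK to the row that used to hold the delimited string
    MessageId INT NOT NULL,
    DeviceId  INT NOT NULL
);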
Summary
It would be wise to normalize your tables and perform the refactoring that would be needed afterwards. It's a great effort now, but you will enjoy its fruits. If that's not an option, you can "act as if it was normalized" and choose Solution 1.
As pointed out in the answers, joining with the INSERTED table solved my problem.
SELECT INTAB.Id,
       SUBSTRING(value, 1, CHARINDEX(':', value) - 1) AS MessageId,
       SUBSTRING(value, CHARINDEX(':', value) + 1, LEN(value)) AS DeviceId
FROM INSERTED AS INTAB
CROSS APPLY STRING_SPLIT(ProcessMessageIds, ',')
I had never used CROSS APPLY before, thank you.
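For completeness, a hedged sketch of how the whole trigger might look; the table names dbo.Messages and dbo.MessageDevices are assumptions, only ProcessMessageIds comes from the question:
CREATE TRIGGER trMessages_AfterInsert ON dbo.Messages
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- One pass, set-based: works for any number of inserted rows.
    INSERT INTO dbo.MessageDevices (MessageId, DeviceId)
    SELECT SUBSTRING(s.value, 1, CHARINDEX(':', s.value) - 1),
           SUBSTRING(s.value, CHARINDEX(':', s.value) + 1, LEN(s.value))
    FROM INSERTED AS i
    CROSS APPLY STRING_SPLIT(i.ProcessMessageIds, ',') AS s;
END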

Use result of stored procedure to join to a table

I have a stored procedure that returns a dataset from a dynamic pivot query (meaning the pivot columns aren't known until run-time because they are driven by data).
The first column in this dataset is a product id. I want to join that product id with another product table that has all sorts of other columns that were created at design time.
So, I have a normal table with a product id column and I have a "dynamic" dataset that also has a product id column that I get from calling a stored procedure. How can I inner join those 2?
Dynamic SQL is very powerful, but it has some severe drawbacks. One of them is exactly this: you cannot use its result in ad-hoc SQL.
The only way to get the result of an SP into a table is to create a table with a fitting schema and use the INSERT INTO NewTbl EXEC ... syntax.
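A minimal sketch of that pattern (the procedure and column names are assumptions; the catch is that a dynamic pivot's schema is exactly what you don't know in advance):
-- The target table must match the SP's result-set schema exactly.
CREATE TABLE #Result (ProductId INT, Col1 INT, Col2 INT);
INSERT INTO #Result EXEC dbo.GetDynamicPivot;

SELECT p.*, r.*
FROM #Result r
JOIN dbo.Products p ON p.ProductId = r.ProductId;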
But there are other possibilities:
1) Use SELECT ... INTO ... FROM
Within your SP, when the dynamic SQL is executed, you could add INTO NewTbl to your select:
SELECT Col1, Col2, [...] INTO NewTbl FROM ...
This will create a table with the fitting schema automatically.
You might even hand in the name of the new table as a parameter, as it is dynamic SQL, but in this case it will be more difficult to handle the join outside (it must be dynamic again).
If you need your SP to return the result, you just add SELECT * FROM NewTbl. This will return the same resultset as before.
Outside your SP you can join this table as any normal table...
BUT, there is a big BUT (ups, this sounds nasty somehow): this will fail if the table already exists...
So you have to drop it first, which can lead to deep trouble if this is a multi-user application with possible concurrency.
If not: use IF EXISTS(SELECT 1 FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME='NewTbl') DROP TABLE NewTbl;
If yes: create the table with a name you pass in as a parameter and run your external query dynamically with this name.
After this you can re-create this table using the SELECT ... INTO syntax...
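Putting option 1 together, a hedged sketch; dbo.Sales, Region, Amount, ProductId and dbo.Products are all assumptions standing in for your real pivot source:
IF EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'NewTbl')
    DROP TABLE NewTbl;

-- Build the column list from the data, then let the dynamic pivot create NewTbl via INTO.
DECLARE @cols NVARCHAR(MAX) =
    (SELECT STRING_AGG(QUOTENAME(Region), ',') FROM (SELECT DISTINCT Region FROM dbo.Sales) AS r);

DECLARE @sql NVARCHAR(MAX) = N'
SELECT ProductId, ' + @cols + N'
INTO NewTbl
FROM (SELECT ProductId, Region, Amount FROM dbo.Sales) s
PIVOT (SUM(Amount) FOR Region IN (' + @cols + N')) p;';

EXEC sp_executesql @sql;

-- Outside the SP, NewTbl now joins like any normal table:
SELECT p.ProductName, n.*
FROM NewTbl n
JOIN dbo.Products p ON p.ProductId = n.ProductId;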
2) Use XML
One advantage of XML is the fact that any structure and any amount of data can be stuffed into one single column.
Let your SP return a table with one single XML column. You can - as you know the schema now - create a table and use INSERT INTO XmlTable EXEC ....
Knowing that there will be a ProductID element, you can extract this value and create a two-column derived table with the ID and the corresponding XML. This is easy to join.
Using wildcards in XQuery makes it possible to query XML data without knowing all the details...
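A hedged sketch of the extraction step; the temp table, procedure name, and element name are assumptions:
-- One XML payload per row; pull ProductId out with a wildcard XQuery path.
CREATE TABLE #XmlResult (Payload XML);
INSERT INTO #XmlResult EXEC dbo.GetDynamicPivotAsXml;

SELECT Payload.value('(//*[local-name()="ProductId"])[1]', 'int') AS ProductId,
       Payload
FROM #XmlResult;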
3) This was my favourite: Don't use dynamic queries...

SQL query to find all the updated columns of a table

I need a dynamic SQL query that can get the column values of a table on the basis of any case/condition. I want to do that while updating any record, and I need the updated value of the column; for that I am using the Inserted and Deleted tables of SQL Server.
I made one query which is working fine with one column, but I need a generic query that should work for all of the columns.
SELECT i.name
FROM inserted i
INNER JOIN deleted d ON i.id = d.id
WHERE d.name <> i.name
With the help of the above query we can get the "Name" column value if it's updated. But as it's specific to one column, I want the same thing for all the columns of a table, with no need to name any column; it should be a generic/dynamic query.
I am trying to achieve that by adding one more inner join with a PIVOT of INFORMATION_SCHEMA.COLUMNS for the columns, but I am not sure whether that can work.
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME LIKE '%table1%'
You can't get that kind of information using just a query. It sounds like you need to be running an UPDATE trigger; in there you can code the criteria, get your columns, etc. From your question, it sounds like only one column can be updated, and you are just not sure which one it will be. If multiple columns are being updated, that complicates things, but not that much.
There are several ways that you can get the data you need. Off the top, I'd suggest some simple looping once you get the column names.
What has worked for me in similar scenarios is to do a simple query against INFORMATION_SCHEMA.COLUMNS and put the results in a table variable. From there, iterate through the table variable to get one column name at a time and run a simple dynamic query to see if the value has changed.
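A hedged sketch of that loop, assuming the table is table1 with key column id; note that dynamic SQL cannot see the inserted/deleted pseudo-tables, so they are copied to temp tables first:
-- Inside an UPDATE trigger on table1 (all names are assumptions).
SELECT * INTO #i FROM inserted;
SELECT * INTO #d FROM deleted;

DECLARE @cols TABLE (name SYSNAME, done BIT NOT NULL DEFAULT 0);
INSERT INTO @cols (name)
SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'table1';

DECLARE @col SYSNAME, @sql NVARCHAR(MAX);
WHILE EXISTS (SELECT 1 FROM @cols WHERE done = 0)
BEGIN
    SELECT TOP (1) @col = name FROM @cols WHERE done = 0;
    -- <> misses NULL transitions; extend the predicate if columns are nullable.
    SET @sql = N'IF EXISTS (SELECT 1 FROM #i i JOIN #d d ON i.id = d.id'
             + N' WHERE i.' + QUOTENAME(@col) + N' <> d.' + QUOTENAME(@col) + N')'
             + N' PRINT ' + QUOTENAME(@col, '''') + N' + '' was updated'';';
    EXEC sp_executesql @sql;
    UPDATE @cols SET done = 1 WHERE name = @col;
END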
Hope this helps.
:{)

Can I use a cursor in the select part of a SQL query?

I would like to select columns from two tables, add an extra computed column, and put this into a table. My question is: can I use a cursor to loop through a table, calculate a value, and then assign this to the new column in the SELECT part, like below?
SELECT a.cola,
a.colB,
b.colC,
b.colD,
(CURSOR TO LOOP THROUGH a table and then calculate some value) as new column
INTO NEWTABLE
FROM a
INNER JOIN b
ON a.id=b.id
WHERE etc
I just need to know whether this is possible.
You can use a correlated subquery (though it does mean you will have to ensure only one value can be returned per record), but it is far better to do this through joins, if possible, for performance reasons. You should never consider looping as a way to get data if a set-based alternative works.
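A minimal sketch of the correlated-subquery version, reusing the tables from the question plus an assumed table c supplying the calculated value:
-- The subquery runs per output row and must yield exactly one value.
SELECT a.cola,
       a.colB,
       b.colC,
       b.colD,
       (SELECT COUNT(*) FROM c WHERE c.id = a.id) AS newColumn
INTO NEWTABLE
FROM a
INNER JOIN b ON a.id = b.id
WHERE a.cola IS NOT NULL;  -- stand-in for the original WHERE clause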

Data Warehousing with minimal changes

Ok, I have a table that has 10 years' worth of data, and performance is taking a hit. I am planning on moving the older data to a separate historical table. The problem is I need to select from the first table if the record is in there, and from the second table if not. I do not want to do a join, because then it will always do a lookup on the second table. HELP?
If you still need to query the data, in no way would I move it to another table. How big is the table now? What are the indexes? Have you considered partitioning the table?
If you must move it to another table, you could query in stored procs with an IF statement: query the main table first, and then if the rowcount = 0, query the other table. It will be slower for records not in the main table, but should stay fast if they are in there. However, this won't help when you need records from both.
Sample of code to do this:
CREATE PROC myproc (@test INT)
AS
-- Look in the main table first
SELECT field1, field2 FROM table1 WHERE field1 = @test
IF @@ROWCOUNT = 0
BEGIN
    -- Fall back to the historical table
    SELECT field1, field2 FROM table2 WHERE field1 = @test
END
But really, partitioning and indexing correctly is probably your best choice. Also optimize existing queries: if you are using known poorly performing techniques such as cursors, correlated subqueries, views that call views, scalar functions, non-sargable WHERE clauses, etc., just fixing your queries may mean you don't have to archive.
Sometimes, buying a better server would help as well.
Rather than using a separate historical table, you might want to look into partitioning the table by some function of the date (year perhaps?) to improve performance instead.
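A hedged sketch of partitioning by year; the table, date column, and boundary values are all assumptions:
-- Queries that filter on OrderDate touch only the relevant partitions.
CREATE PARTITION FUNCTION pfByYear (DATETIME)
AS RANGE RIGHT FOR VALUES ('2014-01-01', '2015-01-01', '2016-01-01');

CREATE PARTITION SCHEME psByYear
AS PARTITION pfByYear ALL TO ([PRIMARY]);

CREATE TABLE dbo.BigTablePartitioned (
    Id        INT            NOT NULL,
    OrderDate DATETIME       NOT NULL,
    Amount    DECIMAL(18, 2) NULL,
    CONSTRAINT PK_BigTablePartitioned PRIMARY KEY (Id, OrderDate)
) ON psByYear (OrderDate);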
