I have a master table named Master_Table; its columns and values are shown below:
| ID | Database   | Schema | Table_name  | Common_col | Value_ID |
+----+------------+--------+-------------+------------+----------+
| 1  | Database_1 | Test1  | Test_Table1 | Test_ID    | 1        |
| 2  | Database_2 | Test2  | Test_Table2 | Test_ID    | 1        |
| 3  | Database_3 | Test3  | Test_Table3 | Test_ID2   | 2        |
I have another table, Value_Table, which consists of the values that need to be deleted.
| Value_ID | Common_col | Value |
+----------+------------+-------+
| 1        | Test_ID    | 110   |
| 1        | Test_ID    | 111   |
| 1        | Test_ID    | 115   |
| 2        | Test_ID2   | 999   |
I need to build a query that generates SQL DELETE statements: for each row of Master_Table, delete from the table whose database, schema and name are given in that row. The column to filter on is given in the Common_col column of the master table, and the values to delete are in the Value column of Value_Table, matched via Value_ID.
The result of my query should be statements like the ones given below:
DELETE FROM Database_1.Test1.Test_Table1 WHERE Test_ID=110;
or
DELETE FROM Database_1.Test1.Test_Table1 WHERE Test_ID in (110,111,115);
These queries should run inside a loop so that I can delete all the rows from all the databases and tables listed in the master table.
Queries don't really create queries.
One way to do what you're describing, which could be useful if this is a one-time or very occasional thing, is to use SSMS to generate the statements: run a query that builds them as text, copy the results to the clipboard, paste them into a query window, and execute them there.
SELECT 'DELETE FROM ' + M.[Database] + '.' + M.[Schema] + '.' + M.Table_name
     + ' WHERE ' + M.Common_col
     + ' = ' + CONVERT(VARCHAR(10), V.Value) + ';'
FROM Master_Table M
INNER JOIN Value_Table V ON V.Value_ID = M.Value_ID
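Against your sample data, that produces one row per delete:
DELETE FROM Database_1.Test1.Test_Table1 WHERE Test_ID = 110;
DELETE FROM Database_1.Test1.Test_Table1 WHERE Test_ID = 111;
DELETE FROM Database_1.Test1.Test_Table1 WHERE Test_ID = 115;
DELETE FROM Database_2.Test2.Test_Table2 WHERE Test_ID = 110;
DELETE FROM Database_2.Test2.Test_Table2 WHERE Test_ID = 111;
DELETE FROM Database_2.Test2.Test_Table2 WHERE Test_ID = 115;
DELETE FROM Database_3.Test3.Test_Table3 WHERE Test_ID2 = 999;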
This probably isn't what you want; it sounds more like you want to automate cleanup or something.
You can turn this into one big query if you don't mind repeating yourself a little:
DELETE T1
FROM Database_1.Test1.Test_Table1 T1
INNER JOIN Database_1.Test1.Value_Table VT ON
    (VT.Common_col = 'Test_ID'  AND T1.Test_ID  = VT.Value) OR
    (VT.Common_col = 'Test_ID2' AND T1.Test_ID2 = VT.Value)
You can also use dynamic SQL combined with the first part. I hate dynamic SQL, but since you want this to run in a loop, a hedged sketch of that route is below (it builds one DELETE per master row, using an IN list as in your second example, and executes the whole batch; all names follow the question):
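DECLARE @sql NVARCHAR(MAX) = N'';

-- Concatenate one DELETE statement per Master_Table row.
SELECT @sql = @sql
    + N'DELETE FROM ' + QUOTENAME(M.[Database]) + N'.' + QUOTENAME(M.[Schema]) + N'.' + QUOTENAME(M.Table_name)
    + N' WHERE ' + QUOTENAME(M.Common_col)
    + N' IN (' + STUFF((SELECT ',' + CONVERT(VARCHAR(10), V.Value)
                        FROM Value_Table V
                        WHERE V.Value_ID = M.Value_ID
                        FOR XML PATH('')), 1, 1, '') + N');'
FROM Master_Table M;

EXEC sp_executesql @sql;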
I have a table called product, and its data is as follows:
| ProductID | ProductName | Code | SortValue |
+-----------+-------------+------+-----------+
| 10        | AAA         | 13RT | 1         |
| 11        | BBB         | 14RT | 2         |
| 12        | CCC         | 15RT | 3         |
| 13        | DDD         | 16RT | 4         |
| 14        | EEE         | 17RT | 5         |
| 15        | FFF         | 19RT | 6         |
I wrote a merge query to insert product data as follows:
MERGE [product] AS target
USING (SELECT @productName, @code) AS source (productname, code)
ON (target.code = source.code)
WHEN MATCHED THEN
    UPDATE
    SET productname = source.productname,
        code = source.code
WHEN NOT MATCHED THEN
    INSERT (productname, code, sortvalue)
    VALUES (source.productname, source.code, 1)
OUTPUT inserted.productid INTO #insertedTable;
You can see the product table has a SortValue column and each product has a sort value. Newly inserted products should get a sort value of 1, and every existing product's sort value should be incremented by one. To do that, I wrote the following query.
UPDATE product
SET SortValue = SortValue + 1
WHERE SortValue >= 1;
I need to execute the above query before inserting new records. How can I do that? I tried putting it after the merge statement's NOT MATCHED part, but it raises an error. How can I sort this issue out?
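One approach that should work, sketched under the assumption that sort values should shift only when the incoming code is new (names follow your MERGE; treat it as an outline rather than tested code):

BEGIN TRANSACTION;

-- Shift existing products down only when the code is new, so that a
-- plain update via WHEN MATCHED does not disturb the ordering.
IF NOT EXISTS (SELECT 1 FROM [product] WHERE code = @code)
BEGIN
    UPDATE product
    SET SortValue = SortValue + 1
    WHERE SortValue >= 1;
END;

MERGE [product] AS target
USING (SELECT @productName, @code) AS source (productname, code)
ON (target.code = source.code)
WHEN MATCHED THEN
    UPDATE SET productname = source.productname
WHEN NOT MATCHED THEN
    INSERT (productname, code, sortvalue)
    VALUES (source.productname, source.code, 1)
OUTPUT inserted.productid INTO #insertedTable;

COMMIT TRANSACTION;

The point is that the re-sort cannot live inside the MERGE itself: a WHEN MATCHED / WHEN NOT MATCHED branch only allows a single UPDATE, INSERT or DELETE against the target, so the extra UPDATE has to run as its own statement beforehand.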
I have a table (TAB1) with a column named IDENTIFIER, and the table has an index on this column. Whenever I query a single value using a simple WHERE clause, the explain plan shows that it uses the existing index on that column.
But when the values are in another table, say a temporary table (TEMP_IDENTIFIER) holding the list of all identifiers I want to query, and I frame the query against the same table with an IN clause, the explain plan does not consider the index; instead it performs a full table scan.
Ideally I would want the second query to utilize the existing index as well.
Please find both queries and their explain plans below.
Query 1
explain plan for
select * from schemaowner.TAB1
where IDENTIFIER = 'A';
Explain Plan
Plan hash value: 4172144893
------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 51 | 12750 | 11 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| TAB1 | 51 | 12750 | 11 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | COL_INDEX | 51 | | 4 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("IDENTIFIER"='A')
Query 2
explain plan for
select * from schemaowner.TAB1
where IDENTIFIER in (select IDENTIFIER from SCHEMAOWNER.temp_IDENTIFIER);
Explain Plan:
Plan hash value: 935676029
-------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 3135K| 822M| | 74751 (1)| 00:14:58 |
|* 1 | HASH JOIN RIGHT SEMI| | 3135K| 822M| 2216K| 74751 (1)| 00:14:58 |
| 2 | TABLE ACCESS FULL | TEMP_IDENTIFIER | 61115 | 1492K| | 85 (2)| 00:00:02 |
| 3 | TABLE ACCESS FULL | TAB1 | 3745K| 893M| | 28028 (2)| 00:05:37 |
-------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("IDENTIFIER"="IDENTIFIER")
Note
-----
- dynamic sampling used for this statement (level=2)
That's the beauty of the optimizer. It's figured out (or costed) that a semi join is the most efficient method :) With roughly 61,000 rows in TEMP_IDENTIFIER driving the lookup, probing the index once per value would almost certainly cost more than a single full scan feeding a hash semi join.
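If you want to sanity-check that costing yourself, you can gather real statistics on the temp table (the plan's note shows it was dynamically sampled) and compare against a hinted nested-loops plan. A sketch, assuming the index is the COL_INDEX from your first plan:

-- Give the optimizer real cardinality instead of dynamic sampling.
exec dbms_stats.gather_table_stats('SCHEMAOWNER', 'TEMP_IDENTIFIER');

-- Ask for the index-driven plan and compare its cost to the hash semi join.
explain plan for
select /*+ leading(v) use_nl(t) index(t COL_INDEX) */ t.*
from schemaowner.TAB1 t
where t.IDENTIFIER in (select v.IDENTIFIER from schemaowner.TEMP_IDENTIFIER v);

If the hinted plan comes out more expensive, the full scan really is the cheaper option for this many driving rows.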
We currently use TPT (Table Per Type) in Entity Framework. This is very slow: we have about 20 tables, and when they are queried, Entity Framework generates some massive, convoluted SQL which performs badly.
Each table has an auto-increment integer column; this lets each type have a number that increments per type, which is what the clients wanted. Now that we want to move to the more performant TPH, we need all these table columns moved into the one table.
How can we keep the auto-increment columns based on the type, as in the results below?
e.g.
Current Job Task
| TaskId | TaskNumber |
+--------+------------+
| 1234   | 1          |
| 2345   | 2          |
Current Work Task
| TaskId | TaskNumber |
+--------+------------+
| 3244   | 1          |
| 3245   | 2          |
This is the TPH table structure we want; as you can see, the task number should increment based on the type of task.
| TaskId | Type | JobTaskNumber | WorkTaskNumber |
+--------+------+---------------+----------------+
| 1234   | Job  | 1             | null           |
| 2345   | Job  | 2             | null           |
| 3244   | Work | null          | 1              |
| 3245   | Work | null          | 2              |
I am wondering if we should use a seeding table, but any solutions would be greatly appreciated.
Many thanks
Andrew
OK, so I did what I thought would work.
It's not a hugely nice approach, as we need about 20 seed tables. Each table has just an identity id defined as a BIGINT in SQL Server.
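Each seed table is a minimal sketch along these lines (SeedMyTable is the illustrative name used in the snippet below):

-- One seed table per task type; the identity column is the whole table.
CREATE TABLE SeedMyTable
(
    Id BIGINT IDENTITY(1,1) PRIMARY KEY
);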
When we want to add and get a new incremented id, we just call this using Dapper and read back the result:
INSERT INTO SeedMyTable DEFAULT VALUES;
SELECT CAST(SCOPE_IDENTITY() AS BIGINT);
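SCOPE_IDENTITY() returns the identity value generated in the current scope, so concurrent callers each get their own number without blocking each other the way a MAX(...) + 1 approach would.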
I have 2 tables, ShareButton and SharePage.
ShareButton table:
+----+---------------+---------------+
| ID | Name | TotalShare |
+----+---------------+---------------+
| 1 | Facebook | 0 |
| 2 | Twitter | 0 |
+----+---------------+---------------+
SharePage table:
+----+--------------------+-------+---------------+
| ID | URL | Share | ShareButtonID |
+----+--------------------+-------+---------------+
| 1 | www.abc.xyz/page1 | 3 | 1 |
| 2 | www.abc.xyz/page1 | 14 | 2 |
| 3 | www.abc.xyz/page2 | 6 | 1 |
| 4 | www.abc.xyz/page2 | 10 | 2 |
+----+--------------------+-------+---------------+
After a record is inserted or updated in the SharePage table, the TotalShare column of ShareButton should be updated, roughly like this (where @updatedShareButtonID stands for the ShareButtonID of the inserted/updated record):
UPDATE ShareButton
SET TotalShare = (SELECT SUM(Share)
                  FROM SharePage
                  WHERE ShareButtonID = @updatedShareButtonID)
WHERE ID = @updatedShareButtonID;
Thanks for reading!
Let me start my answer by saying I agree with Mureinik. Unless you have a really bad performance hit getting the sum of shares using a simple group by query, I wouldn't recommend saving that sum in the ShareButton table.
If you really want a trigger to calculate it, I guess the simplest way to do it is this:
CREATE TRIGGER trSharePage_Changed ON SharePage
FOR UPDATE, INSERT, DELETE   -- an AFTER trigger; fires once per statement
AS
UPDATE buttons
SET TotalShare = SumOfShares
FROM ShareButton buttons
INNER JOIN
(
    -- Recompute every button's total from the current SharePage rows.
    SELECT ShareButtonID, SUM(Share) AS SumOfShares
    FROM SharePage
    GROUP BY ShareButtonID
) pages ON buttons.ID = pages.ShareButtonID
Note that this trigger fires after any insert, update or delete statement on table SharePage has completed. Since it's an AFTER trigger that recomputes the totals from scratch, you don't need to deal with the inserted and deleted tables at all.
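If SharePage grows large, a possible refinement (a sketch, not something you strictly need) is to use inserted and deleted just to restrict the recalculation to the buttons the statement actually touched, and to reset buttons whose last page row was removed:

CREATE TRIGGER trSharePage_Changed ON SharePage
FOR UPDATE, INSERT, DELETE
AS
UPDATE buttons
SET TotalShare = ISNULL(pages.SumOfShares, 0)
FROM ShareButton buttons
LEFT JOIN
(
    SELECT ShareButtonID, SUM(Share) AS SumOfShares
    FROM SharePage
    GROUP BY ShareButtonID
) pages ON buttons.ID = pages.ShareButtonID
WHERE buttons.ID IN (SELECT ShareButtonID FROM inserted
                     UNION
                     SELECT ShareButtonID FROM deleted);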
Is it possible in SQL Server to take two select statements and combine them into a single row without knowing how many entries one of the select statements will return?
I've been looking at various join solutions, but they all seem to assume that the number of columns is predetermined. I have a case where one table (t1) has a fixed set of columns and the other table (t2) has an undetermined number of entries, all of which carry a key that matches one entry in t1.
+----+------+-----+
| id | name | ... |
+----+------+-----+
| 1 | John | ... |
+----+------+-----+
And
+-------------+----------------+
| activity_id | account_number |
+-------------+----------------+
| 1 | 12345467879 |
| 1 | 98765432515 |
| ... | ... |
| ... | ... |
+-------------+----------------+
The number of account numbers belonging to the first query is unknown.
After the query it would become:
+----+------+-----+----------------+------------------+-----+------------------+
| id | name | ... | account_number | account_number_2 | ... | account_number_n |
+----+------+-----+----------------+------------------+-----+------------------+
| 1 | John | ... | 12345467879 | 98765432515 | ... | ... |
+----+------+-----+----------------+------------------+-----+------------------+
So I don't know how many account numbers could be associated with the id beforehand.
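One way is a dynamic pivot: number each account per activity, build the column list at runtime, and execute the pivot with sp_executesql. A sketch, assuming the tables are literally named t1 and t2 and that t2.activity_id matches t1.id as in your example:

DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

-- Build one column name per account position: [account_number_1], [account_number_2], ...
SELECT @cols = STUFF((
    SELECT DISTINCT ', ' + QUOTENAME('account_number_' + CAST(rn AS VARCHAR(10)))
    FROM (SELECT ROW_NUMBER() OVER (PARTITION BY activity_id
                                    ORDER BY account_number) AS rn
          FROM t2) numbered
    FOR XML PATH('')), 1, 2, '');

-- Pivot the numbered accounts into those columns, then join back to t1.
SET @sql = N'
SELECT t1.id, t1.name, p.' + REPLACE(@cols, ', ', ', p.') + N'
FROM t1
JOIN (
    SELECT activity_id, ' + @cols + N'
    FROM (SELECT activity_id, account_number,
                 ''account_number_''
                   + CAST(ROW_NUMBER() OVER (PARTITION BY activity_id
                                             ORDER BY account_number) AS VARCHAR(10)) AS col
          FROM t2) src
    PIVOT (MAX(account_number) FOR col IN (' + @cols + N')) pv
) p ON p.activity_id = t1.id;';

EXEC sp_executesql @sql;

Rows with fewer accounts than the maximum simply get NULL in the trailing columns, which matches the ragged shape you described.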