Hierarchical List of All tables - sql-server

In a SQL Server database, I need to find all the "master" (parent) tables and build a
hierarchical list of parent/child tables. I would then like to traverse that list
bottom-up, deleting all the child table data first, so that at the end I can also
delete the parent data.
I have tried one approach: using the system catalog views (sys.objects and so on), I
queried the database's metadata, such as its primary and foreign keys. But I don't know
how to turn that into a tree-like structure.

Try this in SQL Server Management Studio:
EXEC sp_msdependencies @intrans = 1
If you insert the results into a temp table, you can then filter them down to just tables or just views, or use the proc's other parameters to do the same thing:
EXEC sp_msdependencies @intrans = 1, @objtype = 8 --8 = tables
EXEC sp_msdependencies @intrans = 1, @objtype = 3 --correction: 3 (not 8) is the object type for tables
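Alternatively, if you'd rather build the hierarchy yourself from the catalog views mentioned in the question, here is a minimal sketch of my own (not part of the answer above); it assumes the foreign-key graph has no cycles:

-- Sketch: derive a parent/child level for every table from sys.foreign_keys,
-- so deleting in descending level order removes children before parents.
WITH fk AS (
    SELECT DISTINCT
        parent_id = referenced_object_id,  -- the "master" side of the FK
        child_id  = parent_object_id       -- the referencing (child) table
    FROM sys.foreign_keys
    WHERE parent_object_id <> referenced_object_id  -- ignore self-references
),
tree AS (
    -- Anchor: "master" tables that are not a child of any other table.
    SELECT t.object_id, lvl = 0
    FROM sys.tables AS t
    WHERE NOT EXISTS (SELECT 1 FROM fk WHERE fk.child_id = t.object_id)
    UNION ALL
    -- Recurse: each child sits one level below its parent.
    SELECT fk.child_id, tree.lvl + 1
    FROM tree
    JOIN fk ON fk.parent_id = tree.object_id
)
SELECT table_name = OBJECT_NAME(object_id), lvl = MAX(lvl)
FROM tree
GROUP BY object_id
ORDER BY MAX(lvl) DESC;  -- delete table data in this order, children first

Iterating over this result in descending lvl order lets you DELETE the child table data first and finish with the masters.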

Related


SQL Query to get total number of tables available in Snowflake account (including all DBs and schemas)
You can query account_usage.tables or information_schema.tables views to find the total number of tables:
select count(*) from information_schema.tables;
https://docs.snowflake.com/en/sql-reference/info-schema/tables.html
select count(*) from snowflake.account_usage.tables;
https://docs.snowflake.com/en/sql-reference/account-usage/tables.html
There are three ways:
1) You can query the view INFORMATION_SCHEMA.TABLES to find all tables of your current database. So you have to write a SELECT COUNT(*) FROM [database].INFORMATION_SCHEMA.TABLES for each of your databases, combine the results with UNION ALL, and SUM() the per-database counts to get the total number of tables across all databases (see the sketch below).
2) You can query the view ACCOUNT_USAGE.TABLES to find all tables and views of your account; one row represents one table. As ACCOUNT_USAGE.TABLES also contains views, you have to add a WHERE clause on the attribute TABLE_TYPE. Keep in mind that this view may have a latency of up to 90 minutes.
3) Run SHOW TABLES IN ACCOUNT; to see all tables.
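For illustration, rough sketches of the first two options; db1 and db2 are placeholder database names, and the TABLE_TYPE filter is my addition to count only base tables:

-- Option 1: one COUNT per database, combined and summed.
select sum(cnt) as total_tables
from (
    select count(*) as cnt from db1.information_schema.tables where table_type = 'BASE TABLE'
    union all
    select count(*) as cnt from db2.information_schema.tables where table_type = 'BASE TABLE'
) as t;

-- Option 2: one query across the whole account (latency up to ~90 minutes);
-- the filter excludes views and dropped tables.
select count(*)
from snowflake.account_usage.tables
where table_type = 'BASE TABLE'
  and deleted is null;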
More info about INFORMATION_SCHEMA.TABLES: https://docs.snowflake.com/en/sql-reference/info-schema/tables.html
More info about ACCOUNT_USAGE.TABLES: https://docs.snowflake.com/en/sql-reference/account-usage/tables.html
More info about SHOW TABLES: https://docs.snowflake.com/en/sql-reference/sql/show-tables.html
Note: For all three ways you can only see objects for which your current role has access privileges.

Merge query using two tables in SQL Server 2012

I am very new to SQL and SQL Server, and would appreciate any help with the following problem.
I am trying to update a share price table with new prices.
The table has three columns: share code, date, price.
The share code + date = PK
As you can imagine, if you have thousands of share codes and 10 years' data for each, the table can get very big. So I have created a separate table called a share ID table, and use a share ID instead in the first table (I was reliably informed this would speed up the query, as searching by integer is faster than string).
So, to summarise, I have two tables as follows:
Table 1 = Share_code_ID (int), Date, Price
Table 2 = Share_code_ID (int), Share_name (string)
So let's say I want to update the table/s with today's price for share ZZZ. I need to:
Look for the Share_code_ID corresponding to 'ZZZ' in table 2
If it is found, update table 1 with the new price for that date, using the Share_code_ID I just found
If the Share_code_ID is not found, update both tables
Let's ignore for now how the Share_code_ID is generated for a new code, I'll worry about that later.
I'm trying to use a merge query loosely based on the following structure, but have no idea what I am doing:
MERGE INTO [Table 1]
USING (VALUES (1,23-May-2013,1000)) AS SOURCE (Share_code_ID,Date,Price)
{ SEEMS LIKE THERE SHOULD BE AN INNER JOIN HERE OR SOMETHING }
ON Table 2 = 'ZZZ'
WHEN MATCHED THEN UPDATE SET Table 1.Price = 1000
WHEN NOT MATCHED THEN INSERT { TO BOTH TABLES }
Any help would be appreciated.
http://msdn.microsoft.com/library/bb510625(v=sql.100).aspx
You use Table1 as the target table and Table2 as the source table.
You want to act when a given ID is not found in Table2, i.e. in the source table.
In the documentation that you have already read, that corresponds to the clause
WHEN NOT MATCHED BY SOURCE ... THEN <merge_matched>
and the latter corresponds to
<merge_matched>::=
{ UPDATE SET <set_clause> | DELETE }
Ergo, you cannot insert into the source table there.
You could use triggers for auto-insertion, so that inserting into Table1 fires a trigger, but that trigger would not be able to insert the proper Share_name - it just won't know it.
So you have two options, I guess:
1) Write a T-SQL code block - look into stored procedures. I think there is also a construct to execute an anonymous code block in MS SQL, like the EXECUTE BLOCK command in Firebird, but I don't know it for sure.
2) Create an updatable SQL VIEW joining Table1 and Table2 to show the most current data, so that when you insert a row into this view, the view's on-insert trigger actually inserts rows into both tables; and when you update data through the view, the on-update trigger modifies the data.
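To make option 1 concrete, here is a rough, untested sketch of such a stored procedure using the table and column names from the question; generating the new ID via an IDENTITY column is an assumption:

CREATE PROCEDURE dbo.UpsertSharePrice
    @Share_name varchar(20),
    @Date date,
    @Price decimal(18, 4)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @Share_code_ID int;

    -- Step 1: look for the Share_code_ID corresponding to the share name in Table2.
    SELECT @Share_code_ID = Share_code_ID
    FROM dbo.Table2
    WHERE Share_name = @Share_name;

    -- If it is not found, add the share to Table2 first.
    IF @Share_code_ID IS NULL
    BEGIN
        INSERT INTO dbo.Table2 (Share_name) VALUES (@Share_name);
        SET @Share_code_ID = SCOPE_IDENTITY();  -- assumes an IDENTITY column generates the ID
    END;

    -- Step 2: upsert the price for that date into Table1.
    MERGE INTO dbo.Table1 AS t
    USING (VALUES (@Share_code_ID, @Date, @Price)) AS s (Share_code_ID, [Date], Price)
        ON t.Share_code_ID = s.Share_code_ID AND t.[Date] = s.[Date]
    WHEN MATCHED THEN
        UPDATE SET t.Price = s.Price
    WHEN NOT MATCHED THEN
        INSERT (Share_code_ID, [Date], Price) VALUES (s.Share_code_ID, s.[Date], s.Price);
END;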

Stored procedure to update a table based on data from table parameter

I want to begin by stating I'm an SQL noob, so I'd appreciate any suggestions or comments on my workflow and/or mindset when trying to solve this issue.
What I'm doing is gathering usage statistics about several applications, in several categories (not all categories necessarily apply to all applications), storing them in a database.
I've set up a few tables to do that, and then one table to link everything together that's structured like so (from now on: Dtable):
(column name - details)
UserID - foreign key to another table which stores users data
ApplicationID - foreign key to another table which stores applications data
CategoryID - foreign key to another table which holds a list of different categories
Value - the actual data
Each application gathers the data, then submits it to the database using a stored procedure. As the amount of data can be different based on actual usage (not always sending every category) and for each application, I was thinking of sending the data as a DataTable with a list of CategoryID and Value so I won't have to call a procedure for every individual category (Ptable).
I need to update each record in Dtable to the correct value in Ptable according to CategoryID, but also filtered by UserID and ApplicationID. UserID and ApplicationID will be given as two other parameters to the Stored Procedure. Ptable only contains a list of CategoryID / Value records.
Now, I read about Cursors (for each record in the table parameter set the relevant data in the database table), but the consensus seems to be "Avoid at all costs".
How would I go about updating the table, then, based on the varying records in Ptable?
P.S.
The tables are structured like so to keep agility and scalability in adding more categories/applications in the future. If there's a better way to do it I'll be happy to know.
I believe the update statement would look something like this, where @ApplicationID and @UserID are the stored proc's other parameters:
update Dtable
set Dtable.Value = p.Value
from Ptable p
where Dtable.UserID = @UserID
and Dtable.ApplicationID = @ApplicationID
and Dtable.CategoryID = p.CategoryID;
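If it helps, here is a hedged sketch of the surrounding table type and procedure; the type name, procedure name, and the int type of Value are assumptions:

-- Hypothetical table type matching Ptable's shape.
CREATE TYPE dbo.CategoryValueList AS TABLE
(
    CategoryID int NOT NULL PRIMARY KEY,
    Value int NOT NULL  -- assumed type; adjust to the real data
);
GO
CREATE PROCEDURE dbo.UpdateUsageStats
    @UserID int,
    @ApplicationID int,
    @Ptable dbo.CategoryValueList READONLY  -- table-valued parameters must be READONLY
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE d
    SET d.Value = p.Value
    FROM Dtable AS d
    JOIN @Ptable AS p ON p.CategoryID = d.CategoryID
    WHERE d.UserID = @UserID
      AND d.ApplicationID = @ApplicationID;
END;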

order hint for openquery?

I need to execute the following SQL (SQL Server 2008) in a scheduled job periodically. The query plan shows that 53% of the cost is a sort after the data is pulled from the Oracle server, even though I've already ordered the data inside the OPENQUERY. How can I tell the query not to sort when merge joining?
merge target as t
using (select * from openquery(oracle, '
select * from t1 where UpdateTime > ''....'' order by k1, k2')
) as s on s.k1=t.k1 and s.k2=t.K2 -- the clustered PK of "target" is K1,k2
when matched then ......
when not matched then ......
Is there something like BULK INSERT's "WITH (ORDER (column [ASC | DESC] [, ...n]))" hint? Would it improve the query plan of the MERGE statement if it existed?
If the Oracle table already has a PK on (K1, K2), would just using oracle.db.owner.tablename as the source be better? (Will SQL Server figure out the index from the Oracle metadata?)
Or is the best I can do to store the Oracle data in a local temp table and create a clustered primary key on (K1, K2)? I am trying to avoid creating a temp table because the returned OPENQUERY data set can sometimes be large.
I think a table is the best way to go because then you can create whatever indexes you need, but there's no reason why it should be temporary; why not create a permanent staging table? A local join using local indexes will probably be much more efficient than a join on the results of a remote query, although the only way to know for sure is to test it and see.
If you're worried about the large number of rows, you can look into only copying over new or changed rows. If the Oracle table already has columns for row creation and update times, that would be quite easy.
Alternatively, you could consider using SSIS instead of a scheduled job. I understand that if you're not already using SSIS you may not want to invest time in learning it, but it's a very powerful tool and it's designed for moving large amounts of data into MSSQL. You would create a package with the following workflow:
Delete existing rows from the staging table (only if you can't populate it incrementally)
Copy the data from Oracle
Execute the MERGE statement
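In plain T-SQL, that workflow might look roughly like this; the staging-table name and the val column are invented stand-ins for the real column list, which the question elides:

-- One-time setup: a permanent staging table keyed like the MERGE target.
CREATE TABLE dbo.t1_staging
(
    k1 int NOT NULL,
    k2 int NOT NULL,
    val varchar(100) NULL,  -- placeholder for the real t1 columns
    CONSTRAINT PK_t1_staging PRIMARY KEY CLUSTERED (k1, k2)
);

-- Each scheduled run: refresh the staging table, then merge locally,
-- so the join can use the local clustered index instead of sorting remote rows.
TRUNCATE TABLE dbo.t1_staging;

INSERT INTO dbo.t1_staging (k1, k2, val)
SELECT k1, k2, val
FROM OPENQUERY(oracle, 'select k1, k2, val from t1 where UpdateTime > ''....''');

MERGE dbo.[target] AS t
USING dbo.t1_staging AS s
    ON s.k1 = t.k1 AND s.k2 = t.k2
WHEN MATCHED THEN
    UPDATE SET t.val = s.val
WHEN NOT MATCHED THEN
    INSERT (k1, k2, val) VALUES (s.k1, s.k2, s.val);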

What is best approach of joining 2 tables from different Database?

What is the best approach for joining two tables from different databases? In my situation, I have a development database with a postfix such as _DEV, while production has _PROD.
The issue is that if I join these two tables I have to reference the full database name, such as DB1_DEV.dbo.table1 INNER JOIN DB2_DEV.dbo.table100.
This works well, but if you want to move it into production it will be a nightmare, because I would have to change all of these references.
Thanks
You can use Synonyms to simplify your queries. For example:
-- Create a synonym for the Product table in AdventureWorks.
USE tempdb;
GO
CREATE SYNONYM MyProduct
FOR AdventureWorks.Production.Product;
GO
-- Query the Product table by using the synonym.
USE tempdb;
GO
SELECT ProductID, Name FROM MyProduct WHERE ProductID < 5;
GO
EDIT: You can define the Synonyms for the tables in question. Use the Synonym instead of the full name in any place where you query the tables.
When you deploy to production, all you have to do is change the synonym.
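For example, with the names from the question, the production switch could be as small as this (sketch):

-- Development: point the synonym at the _DEV database.
CREATE SYNONYM dbo.table1 FOR DB1_DEV.dbo.table1;

-- Production: re-point the same synonym; no query has to change.
DROP SYNONYM dbo.table1;
CREATE SYNONYM dbo.table1 FOR DB1_PROD.dbo.table1;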
Depending on your situation, a SYNONYM may be the best answer, or possibly a VIEW.
Example with a VIEW:
CREATE VIEW table1 AS SELECT * FROM DB1_DEV.dbo.table1
Later, when you move to PROD:
ALTER VIEW table1 AS SELECT * FROM DB1_PROD.dbo.table1
Just like with a SYNONYM, the update magically fixes all queries referring to simply "table1".
Here is a discussion explaining the differences between synonyms and views:
What are the pros/cons of using a synonym vs. a view?
Here's a suggestion: host your Dev and Prod databases on different servers but give the databases the same name.
If you can't or won't do that, I suggest you find some way to parameterize the database names in your queries.
You must be storing your schema name somewhere in the database. Fetch that schema name, build dynamic SQL, and use Oracle's internal package DBMS_SQL.EXECUTE() to execute your SELECT query.
Then when you move to PROD you don't have to change anything.
