I would like to ask about the best approach for analyzing the columns of a materialized table in order to find their source tables.
As an example: the table to be analyzed is CUSTOMERS and the column is customer_name.
I would like to have an output like:
current_name  | source_name | source_table
customer_name | nombre_cust | nombre_cust
Is there any way to analyze the sources of the columns from the final table?
I am using Snowflake, and I have checked INFORMATION_SCHEMA.COLUMNS and SNOWFLAKE.ACCOUNT_USAGE.COLUMNS, but they did not help me find the source name of the column or its source table.
I would appreciate any suggestions.
Thanks,
Kind Regards
In case you are asking for the source of a particular table: in theory you could have numerous scripts/ways inside or outside of Snowflake to load a target table. That's why there is no straightforward way to detect the source of a certain table with Snowflake's built-in capabilities; it really depends on how you are loading the table.
Procedure based: You could run DESC PROCEDURE xyz to get the procedure code and parse it for source objects.
INSERT based: If someone is executing a simple INSERT ... SELECT statement, you do not get this dependency unless you parse the query history.
CREATE based: If you are asking for the dependency based on a CREATE TABLE AS ... SELECT statement, you also need to check the query history (see the sketch below).
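For the INSERT and CREATE cases, here is a minimal sketch of searching the query history; CUSTOMERS is the table name from the question, and the text patterns are assumptions you would adapt to your load statements:

-- Search the account's query history for statements that wrote into the target table
SELECT query_id,
       query_text,
       start_time
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE query_text ILIKE '%INSERT INTO CUSTOMERS%'
   OR query_text ILIKE '%CREATE TABLE CUSTOMERS%'
ORDER BY start_time DESC;

From there you would still need to parse the matched query text to extract the source objects.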
In case you are asking about views/materialized views: you can check out OBJECT_DEPENDENCIES in Snowflake: https://docs.snowflake.com/en/user-guide/object-dependencies.html. With this you can query which view depends on which source object. Since a materialized view can only be based on one source table, every column is based on this source table or was somehow derived from it.
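A sketch of such a dependency lookup against ACCOUNT_USAGE, again using CUSTOMERS from the question as a placeholder:

SELECT referenced_database,
       referenced_schema,
       referenced_object_name,
       referenced_object_domain
FROM SNOWFLAKE.ACCOUNT_USAGE.OBJECT_DEPENDENCIES
WHERE referencing_object_name = 'CUSTOMERS'
  AND referencing_object_domain IN ('VIEW', 'MATERIALIZED VIEW');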
I was referencing this question, How to export all data from table to an insertable sql format?,
while looking for a way to create an insert statement for a single row from a table without having to write it manually, since the table has many columns. In my case I simply followed the steps listed, then performed a Ctrl-F search in the resulting script for the record I wanted, then copied and pasted that single line to another query window. But this would be terrible if I had hundreds of millions of rows. Is there a way to get the same functionality but tell the script generator I only want rows where id = value? Is there a better way to do this using only the out-of-the-box Microsoft tools?
There is no built-in way to filter the generated script, but you can work around it with a temp table:
1. Create a new table with INSERT INTO ... SELECT (or SELECT ... INTO), selecting only the records you want to insert.
2. Generate the script, then change the table name using find and replace.
3. Finally, drop the temporary table.
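A minimal sketch of that workaround; the table name, key column, and id value are placeholders:

-- Stage only the row(s) you want scripted
SELECT *
INTO dbo.MyTable_ScriptMe
FROM dbo.MyTable
WHERE id = 42

-- Run the Generate Scripts wizard against dbo.MyTable_ScriptMe,
-- find/replace the table name in the generated output, then clean up
DROP TABLE dbo.MyTable_ScriptMe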
How can we manage the archive process without writing separate stored procedures in SQL Server 2000?
For example,
There are two tables in the current database: student and employee.
The objective is to archive the data in these tables:
student table - data older than 1 year
employee table - data older than 2 years
The date field to be compared in the student table is CreatedDate, and in the employee table it is DOJ.
In addition, I have kept a configuration table with columns ConfigtableName, ConfigColumnName, ConfigCutoffdate.
a) How can I write a generic query so that it dynamically takes the table name as well as the column name from the configuration table and inserts the data into the archive database's tables? Something like this:
INSERT INTO <ArchiveDb>.Dbo.<Table name obtained from config table>
SELECT *
FROM <CurrentDb>.Dbo.<Table name obtained from config table>
WHERE
<ConfigColumnName obtained from config table> < <Cutoffdate obtained from config table>
b) How do I manage the identity field set option?
c) If an error occurs in the nth iteration, is it possible to save the error details to a log?
The only way to construct such a dynamic query in a stored procedure is by using the sp_executesql stored procedure. Read the documentation I linked; it's pretty straightforward.
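A minimal sketch of the dynamic archive step, assuming a config table named dbo.ArchiveConfig (the question does not name it) and the placeholder database names from the template above:

DECLARE @table sysname, @col sysname, @cutoff datetime, @sql nvarchar(4000)

-- Read one row of configuration (looping over all rows would need a cursor)
SELECT @table  = ConfigtableName,
       @col    = ConfigColumnName,
       @cutoff = ConfigCutoffdate
FROM dbo.ArchiveConfig

-- QUOTENAME guards the identifiers; the cutoff is passed as a real parameter
SET @sql = N'INSERT INTO ArchiveDb.dbo.' + QUOTENAME(@table) +
           N' SELECT * FROM CurrentDb.dbo.' + QUOTENAME(@table) +
           N' WHERE ' + QUOTENAME(@col) + N' < @cutoff'

EXEC sp_executesql @sql, N'@cutoff datetime', @cutoff = @cutoff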
I am not sure I understand what you mean by "identity field set option", but if you are concerned about duplicate values in a column that should have unique values (PK), I'd recommend disabling the unique indexes in the archive tables, since they are only for archiving. I don't expect a major issue with duplicate values in an id column; most importantly, that situation should never arise anyway if the archiving tables are identical copies of the source tables.
If you want to catch errors in the nth iteration, you are going to have to enclose every iteration in a BEGIN TRAN/COMMIT TRAN block and check for errors. If there is one, you can log it to any table you choose; if not, you commit the transaction. Read this for an example (scroll all the way down to the Transactions section).
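A sketch of one such iteration in SQL Server 2000, which has no TRY/CATCH; note that @@ERROR is reset after every statement, so it must be captured immediately. The log table dbo.ArchiveErrorLog is a hypothetical name:

DECLARE @err int

BEGIN TRAN
    INSERT INTO ArchiveDb.dbo.student
    SELECT * FROM CurrentDb.dbo.student
    WHERE CreatedDate < '20070101'   -- illustrative cutoff date

SET @err = @@ERROR
IF @err <> 0
BEGIN
    ROLLBACK TRAN
    INSERT INTO dbo.ArchiveErrorLog (TableName, ErrorNumber, ErrorTime)
    VALUES ('student', @err, GETDATE())
END
ELSE
    COMMIT TRAN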
We're implementing a new system using Java/Spring/Hibernate on PostgreSQL. This system needs to make a copy of every record as soon as a modification/deletion is done on the record(s) in the table(s). Later, the audit table(s) will be queried by reports to display the data to the users.
I was planning to implement this auditing/versioning feature by having a trigger on the table(s) that would copy the modified (or deleted) row to a table called ENTITY_VERSIONS, which would have about 20 columns called col1, col2, col3, col4, etc. to store the columns from the above table(s). However, the problem is: if there is more than one table to be versioned and only one target table (ENTITY_VERSIONS) to store all the tables' versions, how do I design the target table?
Or is it better to have a copy of the versions table for each table that needs versioning?
It would be a bonus if some pointers to PostgreSQL trigger (and associated stored procedure) code for implementing the auditing/versioning could be shared.
P.S.: I looked at Suggestions for implementing audit tables in SQL Server? and kind of like the answer, except I would not know what type OldValue and NewValue should be.
P.P.S.: If the tables use soft deletes (phantom deletes) instead of hard deletes, does any of your advice change?
I would have a copy of each table to hold the versions of that table you wish to keep. Maintaining and using a global versioning table sounds like a bit of a nightmare.
This link in the Postgres documentation shows some audit trigger examples in Postgres.
In a global table, all columns can be stored in a single column of the hstore type. I just tried this audit approach and it works great; I recommend it. This audit table example (this link) tracks all changes in a single table: you simply add a trigger to the tables you want to keep audit history on, and all changes are stored as hstore. It works for PostgreSQL 9.1+.
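A minimal sketch of that idea; all names here (audit_log, audit_row_change, some_table) are assumptions, not taken from the linked example:

-- Requires the hstore extension (PostgreSQL 9.1+)
CREATE EXTENSION IF NOT EXISTS hstore;

CREATE TABLE audit_log (
    id          bigserial PRIMARY KEY,
    table_name  text        NOT NULL,
    operation   text        NOT NULL,   -- 'UPDATE' or 'DELETE'
    changed_at  timestamptz NOT NULL DEFAULT now(),
    row_data    hstore      NOT NULL    -- the OLD row flattened to key/value pairs
);

CREATE OR REPLACE FUNCTION audit_row_change() RETURNS trigger AS $$
BEGIN
    INSERT INTO audit_log (table_name, operation, row_data)
    VALUES (TG_TABLE_NAME, TG_OP, hstore(OLD));
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

-- Attach the same trigger to every table you want versioned:
CREATE TRIGGER some_table_audit
AFTER UPDATE OR DELETE ON some_table
FOR EACH ROW EXECUTE PROCEDURE audit_row_change();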
A huge database in MSSQL 2005, with a big codebase depending on the structure of this database.
I have about 10 similar tables, all of which contain either the file name or the full path to the file. The full path is always dependent on the item id, so it doesn't make sense to store it in the database. Getting useful data out of these tables goes a little like this:
SELECT a.item_id
, a.filename
FROM (
SELECT id_item AS item_id
, path AS filename
FROM xMedia
UNION ALL
-- media_path has a different collation
SELECT item_id AS item_id
, (media_path COLLATE SQL_Latin1_General_CP1_CI_AS) AS filename
FROM yMedia
UNION ALL
-- fullPath contains more than just the filename
SELECT itemId AS item_id
, RIGHT(fullPath, CHARINDEX('/', REVERSE(fullPath))-1) AS filename
FROM zMedia
-- real database has over 10 of these tables
) a
I'd like to create a single view of all these tables so that new code using this data-disaster doesn't need to know about all the different media tables. I'd also like use this view for insert and update statements. Obviously old code would still rely on the tables to be up to date.
After reading the msdn page about creating views in mssql2005 I don't think a view with SCHEMABINDING would be enough.
How would I create such an updateable view?
Is this the right way to go?
Scroll down on the page you linked and you'll see a paragraph about updatable views. You cannot update a view based on unions, among other limitations. The logic behind this is probably simple: how should SQL Server decide which source table/view should receive the update/insert?
You can modify partitioned views, provided they satisfy certain conditions.
These conditions include having a partitioning column as part of the primary key on each table, and having a set of non-overlapping check constraints for the partitioning column.
This does not seem to be your case.
In your case, you may do either of the following:
Recreate your tables as views (with computed columns) so your legacy software keeps working, and refer to the whole table from the new software.
Use INSTEAD OF triggers to update the tables.
If a view is based on multiple base tables, an UPDATE statement on the view may or may not work, depending on the statement. If the UPDATE statement affects multiple base tables, SQL Server throws an error; if it affects only one base table in the view, it will work (though not always correctly). INSERT and DELETE statements will always fail.
INSTEAD OF triggers are used to correctly UPDATE, INSERT, and DELETE from a view that is based on multiple base tables. The following links have examples, along with a video tutorial on the same:
INSTEAD OF INSERT Trigger
INSTEAD OF UPDATE Trigger
INSTEAD OF DELETE Trigger
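As a sketch of the idea, an INSTEAD OF INSERT trigger on the union view could route new rows to one of the underlying tables. Here dbo.AllMedia is a hypothetical name for a view wrapping the UNION ALL query above, and routing everything to xMedia is an assumption; a real trigger would need a rule to pick the right target table:

CREATE TRIGGER trg_AllMedia_Insert
ON dbo.AllMedia
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON
    -- Route every inserted row to xMedia (adapt this to your routing rule)
    INSERT INTO dbo.xMedia (id_item, path)
    SELECT item_id, filename
    FROM inserted
END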
In SQL Server, given a table/view, how can you generate a definition of the table/view in the form:
C1 int,
C2 varchar(20),
C3 double
The information required to do it is contained in the meta-tables of SQL Server, but is there a standard script / IDE facility to output the data contained there in the form described above?
For the curious: I want this because I have to maintain a number of SPs which contain TABLE objects (that is, a form of temporary table used by SQL Server). The TABLE objects need to match the definition of tables or views already in the database; it would make life a lot easier if these definitions could be generated automatically.
Here is an example of listing the names and types of columns in a table:
select
COLUMN_NAME,
COLUMN_DEFAULT,
IS_NULLABLE,
DATA_TYPE,
CHARACTER_MAXIMUM_LENGTH,
NUMERIC_PRECISION,
NUMERIC_SCALE
from
INFORMATION_SCHEMA.COLUMNS
where
TABLE_NAME = 'YOUR_TABLE_NAME_HERE'
order by
Ordinal_Position
Generating DDL from that information is more difficult. There seem to be some suggestions at SQLTeam.
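As a rough sketch of that step, you can build "name type," lines directly from INFORMATION_SCHEMA.COLUMNS (which also covers views). It only handles a few common types, and the table name is a placeholder:

select COLUMN_NAME + ' ' + DATA_TYPE +
       case
           when DATA_TYPE in ('char', 'varchar', 'nchar', 'nvarchar')
                then '(' + cast(CHARACTER_MAXIMUM_LENGTH as varchar(10)) + ')'
           when DATA_TYPE in ('decimal', 'numeric')
                then '(' + cast(NUMERIC_PRECISION as varchar(10)) + ',' +
                     cast(NUMERIC_SCALE as varchar(10)) + ')'
           else ''
       end + ','
from
    INFORMATION_SCHEMA.COLUMNS
where
    TABLE_NAME = 'YOUR_TABLE_NAME_HERE'
order by
    ORDINAL_POSITION

Note that CHARACTER_MAXIMUM_LENGTH is -1 for varchar(max) columns, which this sketch does not special-case.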
If you want to duplicate a table definition you could use:
select top 0
*
into
newtable
from
mytable
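The same trick should also work against a view and with a temp table; dbo.MyView is a placeholder:

select top 0
    *
into
    #tmpFromView
from
    dbo.MyView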
Edit: Sorry, I just re-read your question and realised this might not answer it. Could you clarify what you are after: do you want an exact duplicate of the table definition, or a table that contains information about the table's definition?
Thanks for your replies. Yes, I do want an exact duplicate of the DDL, but I've realised I misstated exactly what I needed: it's DDL which will create a temporary table matching the columns of a view.
I realised this while looking at Duckworth's suggestion, which is good but unfortunately doesn't cover the case of a view.
SELECT VIEW_DEFINITION FROM
INFORMATION_SCHEMA.VIEWS
... will give you the definition of a view, and (assuming that all columns in the view are derived directly from a table) it should then be possible to use an amended version of Duckworth's suggestion to pull together the relevant DDL.
I'm just amazed it's not easier! I was expecting someone to tell me that there was a well-established routine to do this, given that TABLE objects need to have all columns fully defined (rather than the way Oracle does it, which is to say, "give me something which looks like table X").
Anyway thanks again for help and any further suggestions welcomed.
In this posting on another question, I've got a DB reverse-engineering script that will do tables, views, PK, UK and index definitions, and foreign keys. This one is for SQL Server 2005 and is a port of one I originally wrote for SQL Server 2000. If you need a SQL Server 2000 version, add a comment to this post and I'll post it up here.