I have the following requirement.
I want to drop the table DIM_SALES_DETAILS.
Before dropping it, I want to run a query that will give me all the object names (views, materialized views, stored procedures, tasks, etc.) where this table is used.
Is there a way to get this?
Thank you.
Have you tried to query the OBJECT_DEPENDENCIES view in the Account Usage schema of the shared SNOWFLAKE database?
https://docs.snowflake.com/en/user-guide/object-dependencies.html#impact-analysis-find-the-objects-referenced-by-a-table
There are some sample queries there for finding the dependent objects.
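For example, something along these lines should work as a starting point (note that Account Usage views have some latency, and the object types covered by OBJECT_DEPENDENCIES are listed on the documentation page above):

-- Sketch: list objects that reference DIM_SALES_DETAILS
SELECT referencing_database,
       referencing_schema,
       referencing_object_name,
       referencing_object_domain  -- e.g. VIEW, MATERIALIZED VIEW
FROM snowflake.account_usage.object_dependencies
WHERE referenced_object_name = 'DIM_SALES_DETAILS';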
The left panel of the Snowflake web interface shows me databases, schemas, tables, and views.
Is there a way to show functions that were created in a schema, or does it just show tables and views?
The new Snowsight interface has this :-)
You have to run the various "SHOW ..." commands to see these objects.
Alternatively, you can view the definition of a function by using either of the following commands in the query interface:
SELECT GET_DDL('function', '<function_name>(<argument_datatype>)');
or:
DESC FUNCTION <function_name>(<argument_datatype>);
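For example, assuming a hypothetical schema my_db.my_schema containing a function area_of_circle(FLOAT):

-- List the user-defined functions in a schema
SHOW USER FUNCTIONS IN SCHEMA my_db.my_schema;

-- Retrieve the full definition of one of them
SELECT GET_DDL('function', 'my_db.my_schema.area_of_circle(FLOAT)');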
Some records are missing values. I would like to modify/fill in data in Snowflake from Alteryx, modifying many records at once.
What is the best way to modify a Snowflake database from Alteryx:
deleting specific rows and appending the modified data?
modifying data using a SQL statement in the Alteryx Output tool?
cloning the original table --> creating a modified table --> replacing the table?
any other ways?
Sincerely,
knozawa
Use an UPDATE statement in Snowflake. There is no benefit to the other methods you've suggested. When you run an UPDATE in Snowflake, it recreates the micro-partitions containing those records regardless. Your other methods would do the same operation, but more than once. Stick with UPDATE.
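As a sketch, assuming the corrected values have been loaded into a staging table first (all table and column names here are hypothetical):

-- Fill in the missing values from a staging table loaded by Alteryx
UPDATE my_table
SET some_value = s.some_value
FROM staging_corrections s
WHERE my_table.record_id = s.record_id
  AND my_table.some_value IS NULL;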
I need to find all references to a table's field, and I have hundreds of stored procedures to search through. The goal is to find where it is being used in WHERE clauses and add an extra value to the IN list. For example,
where myTable.Field_X in (1,2,3)
needs to become
where myTable.Field_X in (1,2,3,4)
So I'm curious if there is a system table, like dba_dependencies, or something that I can query that would show me what procs, views, or functions are referencing the field? Thanks for any help you can give.
What version of Oracle are you using? If you are on at least 11.1, which introduced column-level dependency tracking, and you're not afraid to leverage some undocumented data dictionary tables, you can create a dba_dependency_columns view that will give you this information. But that will show every piece of code that depends on the column, not just the ones where it appears in the WHERE clause of a statement.
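If you'd rather stay with documented views, DBA_DEPENDENCIES gives you the object-level (not column-level) picture; the schema and table names below are placeholders:

-- Objects that depend on the table (object level only)
SELECT owner, name, type
FROM dba_dependencies
WHERE referenced_owner = 'MY_SCHEMA'
  AND referenced_name = 'MY_TABLE'
  AND referenced_type = 'TABLE';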
You can search dba_source for the text of procedures, functions, and triggers to look for code that has that sort of WHERE clause. That gets a little tricky, though -- if someone put the list of values on a different line, or if there is an inline view where the column is aliased to something else, a plain text search will miss it. For views, you'd need to use dbms_metadata to generate the DDL for each view and search that in a loop.
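A basic text search might look like this (the column name is a placeholder), with the caveats above about multi-line and aliased usages:

-- Source lines in procedures/functions/triggers mentioning the column
SELECT owner, name, type, line, text
FROM dba_source
WHERE UPPER(text) LIKE '%FIELD_X%'
ORDER BY owner, name, line;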
The setup
I have the following database setup:
CentralDB
    Table: Stores
    Table: Users
Store1DB
    Table: Orders
Store2DB
    Table: Orders
Store3DB
    Table: Orders
Store4DB
    Table: Orders
... etc.
CentralDB contains the users, logging, and a Stores table with the name of each store database and general information about each store, such as address, name, description, image, etc.
All the StoreDBs use the same structure, just different data.
It is important to know that the list of stores will shrink and grow in the future.
The main client communicating with this setup is a REST API service which gets passed a STOREID in the header of each request, telling it which database to connect to. This works flawlessly so far.
The reasoning
Whenever we need to do database maintenance on one store, we don't want all other stores to be down.
Backup management should be per store
Not having to write the WHERE storeID=x every time and for every table
Performance: each store could run on its own database server if the need arises
The goal
I need my REST API Service to somehow get all orders from all stores in one query.
Can you help me figure out a way to do this without hardcoding all the store database names? I was thinking about a stored procedure on the CentralDB, but I was hoping there would be other solutions. In any case, it has to be very efficient.
One option would be to have a list of databases stored in a "system" table in CentralDB.
Then you could create a stored procedure that reads the database names from the table, loops through them with a cursor, and generates dynamic SQL that UNIONs the results from all the databases; see the sketch below. This way you would get a single recordset of results.
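A minimal sketch of such a procedure, assuming SQL Server and a dbo.Stores table in CentralDB with a DatabaseName column (all names here are hypothetical):

CREATE PROCEDURE dbo.GetAllOrders
AS
BEGIN
    DECLARE @db sysname,
            @sql nvarchar(max) = N'';

    -- Loop over the store databases registered in CentralDB
    DECLARE store_cur CURSOR LOCAL FAST_FORWARD FOR
        SELECT DatabaseName FROM dbo.Stores;

    OPEN store_cur;
    FETCH NEXT FROM store_cur INTO @db;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        IF @sql <> N'' SET @sql += N' UNION ALL ';
        SET @sql += N'SELECT ' + QUOTENAME(@db, '''')
                  + N' AS StoreDB, o.* FROM '
                  + QUOTENAME(@db) + N'.dbo.Orders AS o';
        FETCH NEXT FROM store_cur INTO @db;
    END;
    CLOSE store_cur;
    DEALLOCATE store_cur;

    -- One recordset, UNIONed across all store databases
    EXEC sys.sp_executesql @sql;
END;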
However, this database design is IMHO flawed. There is no reason to use multiple databases to store data that belongs to the same "domain". All the reasons you have mentioned can be addressed with a single database and a proper database design. Having multiple databases will create multiple problems in the long term:
you will need to change the structure of all the DBs when you modify your database model
you will need to create/drop new databases when new stores are added/removed from your system
you will need to have items and other entities that are "common" to all the stores duplicated in all the DBs
what about reporting requirements (e.g. getting sales data for stores 1 and 2 together)? This will require complex UNION queries...
etc...
In the long term, managing and maintaining this model will be a big pain.
I'd maintain a set of views that UNION ALL all the data. Every time a store is added or deleted, those views must be updated. This can be automated.
The views provide an illusion to the application that there is only one database.
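A generated view could look like this (assuming SQL Server; the store database names are hypothetical, and the script that regenerates the view when dbo.Stores changes is left out):

CREATE VIEW dbo.AllOrders
AS
SELECT 'Store1' AS StoreDB, o.* FROM Store1DB.dbo.Orders AS o
UNION ALL
SELECT 'Store2', o.* FROM Store2DB.dbo.Orders AS o
UNION ALL
SELECT 'Store3', o.* FROM Store3DB.dbo.Orders AS o;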
What I would not do is have each SQL query or procedure read all the database names and build dynamic SQL. That would entail lots of code duplication and an unnecessary loss of performance, and the approach is error prone. Better to generate the code once in a central place and have all other SQL code reference that generated code.
Hey, can someone tell me what the FIELD, FILE, and INDEX .DDF files do in Pervasive? Do they have to be changed or updated when a table definition changes? Any insight would be GREATLY appreciated.
Cheers.
FILE.DDF links the underlying Btrieve Data files to a logical table name.
FIELD.DDF uses the File Id from FILE.DDF to define all of the fields including offsets, data types, etc for each table.
INDEX.DDF defines the indexes on the fields in FIELD.DDF.
They are the table and field metadata used by PSQL to access the data files through the relational access methods (ODBC, OLEDB, ADO.NET, etc.).
They do have to be changed if the underlying data file is changed through Btrieve. If the table definition changes through SQL (like ALTER TABLE statements), the Pervasive Control Center, DTI (Distributed Tuning Interface), DTO (Distributed Tuning Object), PDAC, ActiveX, or DDF Builder, then the DDFs are updated automatically.