Snowflake Materialized View Not Updating - snowflake-cloud-data-platform

I have a materialized view in Snowflake that is not refreshing. Below is a basic example of what I'm doing.
--Create table and insert two records
CREATE OR REPLACE TABLE T1 (ID INTEGER);
INSERT INTO T1 VALUES (1);
INSERT INTO T1 VALUES (2);
--Create materialized view on table
CREATE OR REPLACE MATERIALIZED VIEW VW_T1 AS SELECT ID AS AVG_ID FROM T1;
--Insert two more records after creating the materialized view
INSERT INTO T1 VALUES (3);
INSERT INTO T1 VALUES (4);
-- Show metadata
SHOW MATERIALIZED VIEWS LIKE '%T1';
No matter how long I wait, the view does not seem to be updating. The row count is always 2, and behind_by always has a value.
What am I doing wrong? I have followed the troubleshooting steps in the Snowflake documentation, but with no success. https://docs.snowflake.com/en/user-guide/views-materialized.html#troubleshooting
Marius

This is expected behaviour. Snowflake materialized views are different from materialized views in other databases. Two important points:
1) Materialized views are automatically and transparently maintained by Snowflake.
2) Materialized views provide always current data. If a query is run before the materialized view is up-to-date, Snowflake either updates the materialized view or uses the up-to-date portions of the materialized view and retrieves any required newer data from the base table.
So you do not need to worry about the updates. The view will be refreshed in the background from time to time (based on criteria such as DML size, DML count, and elapsed time). You can see when it was last refreshed by checking the "refreshed_on" column in the output of the SHOW command.
---------- Extra info --------------
The MV keeps its data in its own data files. The SHOW command shows when the data was refreshed, how many rows it contains, etc. Marius saw 2 rows because the MV contained 2 rows at that point. When Marius adds more rows to the source table, the MV will not copy them immediately; there are some thresholds. But if you read from the MV, it will read the delta from the source table and always return current data. Users do not need to worry about "behind_by", "refreshed_on" or the number of rows (unless the lag is several days).
In summary, both the SHOW command and the MV are working as expected.
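To see this with the example above, compare the SHOW output with a query against the view itself (the counts below assume the four inserts from the question):
-- SHOW may still report 2 rows and a non-zero behind_by...
SHOW MATERIALIZED VIEWS LIKE '%T1';
-- ...but querying the view already returns all four rows, because Snowflake
-- combines the materialized data with the delta from the base table T1.
SELECT COUNT(*) FROM VW_T1;           -- 4
SELECT * FROM VW_T1 ORDER BY AVG_ID;  -- 1, 2, 3, 4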

Related

Oracle: keep a table up to date with another

We use an Oracle database and I am now facing a problem.
We need to copy a subset of columns from table A to a new table B:
Table A
Name|Birth Date|Location|Office
Table B
Name|Location
Table A changes quite often (several times a month), and it is managed by another team.
What is the best way to keep table B synchronized with table A?
Thank you very much.
Instead of a new table - think of a view or a materialized view.
A view won't even occupy any space; it is just a stored query:
create or replace view v_b as
select name, location
from some_user.table_a;
It would always be "synchronized"; you'd instantly see all committed data that belongs to some_user.
A materialized view occupies space and acts as if it were another table; you can even create indexes on it. Set it to refresh on a schedule (for example, every night), on demand, or whenever some_user commits changes to their table_a.
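A minimal sketch of the materialized view option, assuming the same some_user.table_a as above (the name mv_b and the nightly schedule are only illustrative; ON COMMIT refresh would additionally require a materialized view log on table_a):
create materialized view mv_b
  build immediate
  refresh force
  start with sysdate
  next trunc(sysdate) + 1   -- refresh again every night at midnight
as
select name, location
from some_user.table_a;
-- or refresh it on demand whenever you like:
-- exec dbms_mview.refresh('MV_B');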
If I were you, I wouldn't create another table; (materialized) view seems to be a more appropriate solution.

Is it possible to create a view where deleted rows in the original table are kept

I have a table COMPANY where companies are kept. I want to create a view of that table, let's name it COMPANY_CDC but with one caveat:
When an entry in the original table is deleted, I want to set a deleted flag on the view entry instead of deleting it.
EDIT: Why soft deletes? The point is that I'm performing change data capture using JDBC, and JDBC is only able to capture soft deletes. Inserts/updates are no problem.
If this cannot be done by using a view, what would be an alternative solution?
You can insert the deleted values into another table using a trigger, and with a join of these two tables you can create your view.
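A rough sketch of that idea, assuming SQL Server-style trigger syntax and a made-up two-column COMPANY table (the view here uses a UNION ALL rather than a join to expose the deleted flag):
-- Hypothetical columns; adapt to the real COMPANY table.
CREATE TABLE COMPANY_DELETED (ID INT PRIMARY KEY, NAME VARCHAR(100));
GO
CREATE TRIGGER TRG_COMPANY_DELETE ON COMPANY
AFTER DELETE
AS
BEGIN
    -- "deleted" holds the rows removed by the triggering statement
    INSERT INTO COMPANY_DELETED (ID, NAME)
    SELECT ID, NAME FROM deleted;
END;
GO
CREATE VIEW COMPANY_CDC AS
SELECT ID, NAME, 0 AS IS_DELETED FROM COMPANY
UNION ALL
SELECT ID, NAME, 1 AS IS_DELETED FROM COMPANY_DELETED;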

How to find out what is acting on an Oracle database table?

There is this table in my Oracle database that is used to store audit information.
When I first did a SELECT * on that table, the audit timestamps were all on the same day, within the same hour (e.g. 18/10/2013 15:06:45, 18/10/2013 15:07:29); the next time I did it, the previous entries were gone, and the table then only contained entries with the 16:mm:ss timestamp.
I think something is acting on that table, such that at some interval the table contents are backed up somewhere (I don't know where) and the table is then cleared. However, as I'm not familiar with databases, I'm not sure what is doing this.
I'd like to know how I can find out what is acting on this table, so that I can in turn retrieve the previous data I need.
EDIT:
What I've tried thus far...
SELECT * FROM DBA_DEPENDENCIES WHERE REFERENCED_NAME='MY_AUDIT_TABLE';
I got back four results, all of which (as far as I can tell) were about putting data into the table, none about backing it up anywhere.
SELECT * FROM MY_AUDIT_TABLE AS OF TIMESTAMP ...
This only gives me a snapshot at a certain time, but since the table is being updated very frequently, it does not make sense for me to query every second.
The dba_dependencies view will give you an idea of which procedures, functions, etc. act on the table:
SELECT * FROM DBA_DEPENDENCIES WHERE REFERENCED_NAME='MY_AUDIT_TABLE';
where MY_AUDIT_TABLE is the audit table name
If a synonym for the table is used in the database, then:
SELECT * FROM DBA_DEPENDENCIES WHERE REFERENCED_NAME='MY_AUDIT_TABLE_SYNONYM';
where MY_AUDIT_TABLE_SYNONYM is the synonym for MY_AUDIT_TABLE
Or check whether any triggers are acting on the table:
Select * from dba_triggers where table_name='MY_AUDIT_TABLE';
To catch an external script processing the table, you can ask the DBA to turn on fine-grained auditing (FGA) for it.
Then query the DBA_FGA_AUDIT_TRAIL view with a timestamp between 15:00:00 and 16:00:00 to check for the external call (the OS_PROCESS column gives the operating system process ID) or to see what SQL (SQL_TEXT) is executing against the table.
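If the DBA agrees, enabling FGA and reading the trail could look roughly like this (the schema name APP_OWNER is a placeholder for the table's real owner):
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'APP_OWNER',           -- placeholder owner
    object_name     => 'MY_AUDIT_TABLE',
    policy_name     => 'AUDIT_MY_AUDIT_TABLE',
    statement_types => 'SELECT,INSERT,UPDATE,DELETE');
END;
/
-- Once the purge happens again, see who did what in that window:
SELECT timestamp, db_user, os_process, sql_text
FROM   dba_fga_audit_trail
WHERE  object_name = 'MY_AUDIT_TABLE'
AND    timestamp BETWEEN TO_DATE('18/10/2013 15:00:00', 'DD/MM/YYYY HH24:MI:SS')
                     AND TO_DATE('18/10/2013 16:00:00', 'DD/MM/YYYY HH24:MI:SS');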

How to make SQLDataSource insert into view instead of the underlying table?

We have replaced 20 tables with a consolidated table that separates each set of data via a "set id" (all the records for table "A" have a set_id of 1, table "B" is 2, etc.).
We then built views on the table and named them so the views had the same names as the original 20 tables, each with a WHERE clause on the set_id. Net result: inserts/updates/selects against the views still work.
We did this so our web page, which uses a sqldatasource with "sql command builder", wouldn't have to change. We added an INSTEAD OF INSERT trigger on each view, so that when you insert into the view, it adds the set_id and inserts into the consolidated table. So far, so good.
It partially works: UPDATEs and DELETEs work, because they know the actual ID for the record.
However, INSERTs don't: when the command actually executes, we see "exec sp_executesql insert into consolidatedtable". Rather than hitting the view, the data source control finds the underlying table and inserts directly into it. If we try adding fields to the views, they then show up in the data source control, but the web page then shows a configurable field.
Is there a way to change things on the database side to force it to use the view? My only other option at this point is to replace the views with tables, add AFTER INSERT, UPDATE, DELETE triggers so that the consolidated table gets updated, and then add a process to make sure they stay in sync and there are no issues.
MANY thanks in advance.
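For reference, a stripped-down version of the setup described above might look like this (table and column names are illustrative, not the real schema):
CREATE TABLE dbo.ConsolidatedTable (
    id     INT IDENTITY PRIMARY KEY,
    set_id INT NOT NULL,
    name   VARCHAR(100) NOT NULL
);
GO
-- Stand-in for the old table "A" (set_id = 1)
CREATE VIEW dbo.TableA
AS
SELECT id, name
FROM   dbo.ConsolidatedTable
WHERE  set_id = 1;
GO
CREATE TRIGGER dbo.trg_TableA_insert ON dbo.TableA
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Add the set_id and write to the consolidated table
    INSERT INTO dbo.ConsolidatedTable (set_id, name)
    SELECT 1, name FROM inserted;
END;
GO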

Updateable view in mssql with multiple tables and computed values

Huge database in mssql2005 with big codebase depending on the structure of this database.
I have about 10 similar tables; they all contain either the file name or the full path to a file. The full path is always derived from the item id, so it doesn't make sense to store it in the database. Getting useful data out of these tables goes a little like this:
SELECT a.item_id
     , a.filename
FROM (
      SELECT id_item AS item_id
           , path AS filename
      FROM xMedia
      UNION ALL
      -- media_path has a different collation
      SELECT item_id AS item_id
           , (media_path COLLATE SQL_Latin1_General_CP1_CI_AS) AS filename
      FROM yMedia
      UNION ALL
      -- fullPath contains more than just the filename
      SELECT itemId AS item_id
           , RIGHT(fullPath, CHARINDEX('/', REVERSE(fullPath))-1) AS filename
      FROM zMedia
      -- real database has over 10 of these tables
     ) a
I'd like to create a single view of all these tables so that new code using this data-disaster doesn't need to know about all the different media tables. I'd also like to use this view for insert and update statements. Obviously, old code would still rely on the tables being up to date.
After reading the msdn page about creating views in mssql2005 I don't think a view with SCHEMABINDING would be enough.
How would I create such an updateable view?
Is this the right way to go?
Scroll down on the page you linked and you'll see a paragraph about updatable views. You cannot update a view based on unions, amongst other limitations. The logic behind this is probably simple: how should SQL Server decide which source table/view should receive the update/insert?
You can modify partitioned views, provided they satisfy certain conditions.
These conditions include having the partitioning column as part of the primary key on each table, and having a set of non-overlapping check constraints for the partitioning column.
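For reference, a partitioned view that does meet those conditions looks roughly like this (illustrative tables, not your media schema):
-- The partitioning column (region) is part of each primary key and has
-- non-overlapping CHECK constraints, so SQL Server can route DML through
-- the view to the right member table.
CREATE TABLE SalesEast (
    region CHAR(4) NOT NULL CHECK (region = 'EAST'),
    id     INT     NOT NULL,
    amount MONEY   NOT NULL,
    PRIMARY KEY (region, id)
);
CREATE TABLE SalesWest (
    region CHAR(4) NOT NULL CHECK (region = 'WEST'),
    id     INT     NOT NULL,
    amount MONEY   NOT NULL,
    PRIMARY KEY (region, id)
);
GO
CREATE VIEW SalesAll
AS
SELECT region, id, amount FROM SalesEast
UNION ALL
SELECT region, id, amount FROM SalesWest;
GO
INSERT INTO SalesAll (region, id, amount) VALUES ('EAST', 1, 100);  -- lands in SalesEast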
This seems to be not your case.
In your case, you may do either of the following:
Recreate your tables as views (with computed columns) so your legacy software keeps working, and refer to the whole table from the new software
Use INSTEAD OF triggers to update the tables.
If a view is based on multiple base tables, an UPDATE statement on the view may or may not work, depending on the statement. If the UPDATE affects multiple base tables, SQL Server throws an error, whereas if it affects only one base table in the view, the UPDATE will work (though not always correctly). INSERT and DELETE statements will always fail.
INSTEAD OF triggers are used to correctly UPDATE, INSERT and DELETE from a view that is based on multiple base tables. The following links have examples along with a video tutorial on the same; a minimal sketch also follows the links.
INSTEAD OF INSERT Trigger
INSTEAD OF UPDATE Trigger
INSTEAD OF DELETE Trigger
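For illustration, an INSTEAD OF INSERT trigger on a view like the one in the question might look like this (view and trigger names are made up; for simplicity all new rows go to xMedia, since the UNION alone gives SQL Server no way to pick a target table):
CREATE VIEW vAllMedia
AS
SELECT id_item AS item_id, path AS filename
FROM   dbo.xMedia
UNION ALL
SELECT item_id, media_path COLLATE SQL_Latin1_General_CP1_CI_AS
FROM   dbo.yMedia;
GO
CREATE TRIGGER trg_vAllMedia_insert ON vAllMedia
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Map the view's columns back to xMedia's column names.
    INSERT INTO dbo.xMedia (id_item, path)
    SELECT item_id, filename
    FROM   inserted;
END;
GO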
