SAP PowerDesigner: link to another model and reflecting changes made

I have tables in a PDM Y that are linked, via Copy or via Mapping, from a table A in PDM Y to a table A in PDM X.
If a change is made to table A in PDM X, PDM Y does not seem to pick it up automatically.
Can this be achieved?
Or is there a manual process I can execute?

Related

Snowflake Materialized View Not Updating

I have materialized views in Snowflake that are not refreshing. Below is a basic example of what I'm doing.
--Create table and insert two records
CREATE OR REPLACE TABLE T1 (ID INTEGER);
INSERT INTO T1 VALUES (1);
INSERT INTO T1 VALUES (2);
--Create materialized view on table
CREATE OR REPLACE MATERIALIZED VIEW VW_T1 AS SELECT ID AS AVG_ID FROM T1;
--Insert two more records after creating the materialized view
INSERT INTO T1 VALUES (3);
INSERT INTO T1 VALUES (4);
-- Show metadata
SHOW MATERIALIZED VIEWS LIKE '%T1';
No matter how long I wait, the view does not seem to be updating. The row count is always 2, and behind_by always has a value.
What am I doing wrong? I have followed the troubleshooting steps in the Snowflake documentation, but with no success. https://docs.snowflake.com/en/user-guide/views-materialized.html#troubleshooting
Marius
This is expected behaviour. Snowflake materialized views are different from materialized views in other databases. Two important points:
1) Materialized views are automatically and transparently maintained by Snowflake.
2) Materialized views provide always-current data. If a query is run before the materialized view is up to date, Snowflake either updates the materialized view or uses the up-to-date portions of the materialized view and retrieves any required newer data from the base table.
So you do not need to worry about the updates. The view will be updated in the background from time to time (based on criteria such as DML size, DML count, and elapsed time). You can see when it was last updated by checking the "refreshed_on" column in the output of the SHOW command.
---------- Extra info --------------
The MV keeps its data in its own data files. The SHOW command shows when the data was refreshed, how many rows it contains, and so on. Marius saw 2 rows because the MV had 2 rows at that point. When Marius adds more rows to the source table, the MV will not copy them immediately; there are thresholds for that. But if you read from the MV, it will read the delta from the source table and provide current data at all times. Users do not need to worry about "behind_by", "refreshed_on" or the number of rows (unless the lag is several days).
In summary, the SHOW command and the MV are working as expected.
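For example (continuing the script above), you should be able to see this by querying the view directly: even while SHOW MATERIALIZED VIEWS still reports 2 rows and a non-zero behind_by, the SELECT should return all four rows, because Snowflake merges the materialized data with the delta from T1.
-- Query the MV directly; SHOW metadata may lag, but the result set is current
SELECT * FROM VW_T1 ORDER BY AVG_ID;   -- expected: 1, 2, 3, 4
SELECT COUNT(*) FROM VW_T1;            -- expected: 4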

Initial load creates my tables in the sym_x tables' schema

After setting up master-master replication on top of PostgreSQL, I tried to perform an initial load using:
./symadmin -engine octopusdb reload-node 2
My setup is:
1. I created all sym_x tables in a separate schema (replication).
2. I created all my application tables in other schemas of their own.
3. I inserted into sym_trigger.source_schema_name the suitable schema name for each application table.
Still, the initial load seems to create the application tables under the 'replication' schema instead of in their own schemas.
Is there some parameter I am missing for the properties file, or the initial load command?
So apparently, for a multi-schema configuration you need to create a separate record for each schema in sym_router (with a separate router_id and the appropriate target_schema_name), and for each table put a record in sym_trigger and sym_trigger_router with the appropriate schema name and router_id.
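For illustration, the extra rows might look roughly like this (a sketch only: the node group IDs, channel, schema and table names are assumptions, and the column lists are trimmed to the essentials):
-- One router per target schema (assumed node group 'master' on both sides)
INSERT INTO replication.sym_router
    (router_id, source_node_group_id, target_node_group_id,
     router_type, target_schema_name, create_time, last_update_time)
VALUES ('router_app_schema', 'master', 'master',
        'default', 'app_schema', current_timestamp, current_timestamp);

-- One trigger per table, pointing at the table's own schema
INSERT INTO replication.sym_trigger
    (trigger_id, source_schema_name, source_table_name, channel_id,
     create_time, last_update_time)
VALUES ('customer_trigger', 'app_schema', 'customer', 'default',
        current_timestamp, current_timestamp);

-- Link the trigger to the router
INSERT INTO replication.sym_trigger_router
    (trigger_id, router_id, initial_load_order, create_time, last_update_time)
VALUES ('customer_trigger', 'router_app_schema', 100,
        current_timestamp, current_timestamp);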
Also, after a failed attempt I needed to remove everything from the tmp directory under the SymmetricDS installation so that the updates to the sym tables would be recognized.

CDC ODI - Why does ODI need two views, JV$ and JV$D

During the CDC process, ODI creates two views, JV$ and JV$D. Both appear to have the same structure, so why does ODI need two views if they do the same work?
In the next paragraphs you will see the differences (extract from the link).
The JV$ view is the view that is used in the mappings where you select the option Journalized data only. Records from the J$ table are filtered so that only the following records are returned:
Only locked records: JRN_CONSUMED = '1'.
If the same PK appears multiple times, only the last entry for that PK (based on the JRN_DATE) is taken into account. Again the logic here is that we want to replicate values as they are currently in the source database. We are not interested in the history of intermediate values that could have existed.
An additional filter is added in the mappings at design time so that only the records for the selected subscriber are consumed from the J$ table, as we saw in figure 5.
Similarly to the JV$ view, the JV$D view joins the J$ table with the source table on the primary key. This view shows all changed records, locked or not, but applies the same filter on the JRN_DATE column so that only the last entry is taken into account when the same record has been modified multiple times since the last consumption cycle. It lists the changes for all subscribers.
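As a rough illustration only (not the actual ODI-generated DDL, which depends on the JKM and the technology): assuming a source table CUSTOMER with primary key CUST_ID and a journal table J$CUSTOMER with the usual JRN_SUBSCRIBER, JRN_CONSUMED, JRN_FLAG and JRN_DATE columns, the logic described above corresponds roughly to:
-- Rough sketch of the JV$ logic: locked records only, last entry per PK,
-- joined back to the current source row (column names are assumptions)
CREATE OR REPLACE VIEW JV$CUSTOMER AS
SELECT JRN.JRN_SUBSCRIBER,
       JRN.JRN_FLAG,                                   -- I = insert/update, D = delete
       JRN.JRN_DATE,
       JRN.CUST_ID,
       T.CUST_NAME,                                    -- assumed source columns
       T.CUST_EMAIL
FROM   J$CUSTOMER JRN
LEFT JOIN CUSTOMER T ON T.CUST_ID = JRN.CUST_ID        -- deleted rows keep their journal entry
WHERE  JRN.JRN_CONSUMED = '1'                          -- only locked records
AND    JRN.JRN_DATE = (SELECT MAX(J2.JRN_DATE)         -- last entry per PK
                       FROM   J$CUSTOMER J2
                       WHERE  J2.CUST_ID = JRN.CUST_ID);

-- JV$D would follow the same pattern but without the JRN_CONSUMED filter,
-- so it shows the latest change per PK for all subscribers, locked or not.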

How to use the pre-copy script from the copy activity to remove records in the sink based on the change tracking table from the source?

I am trying to use change tracking to copy data incrementally from a SQL Server to an Azure SQL Database. I followed the tutorial in the Microsoft Azure documentation, but I ran into some problems when implementing this for a large number of tables.
In the source part of the copy activity I can use a query that gives me a change table of all the records that are updated, inserted or deleted since the last change tracking version. This table will look something like
PersonID  Age   Name   SYS_CHANGE_OPERATION
--------  ----  -----  --------------------
1         12    John   U
2         15    James  U
3         NULL  NULL   D
4         25    Jane   I
with PersonID being the primary key for this table.
The problem is that the copy activity can only append the data to the Azure SQL Database so when a record gets updated it gives an error because of a duplicate primary key. I can deal with this problem by letting the copy activity use a stored procedure that merges the data into the table on the Azure SQL Database, but the problem is that I have a large number of tables.
I would like the pre-copy script to delete the deleted and updated records on the Azure SQL Database, but I can't figure out how to do this. Do I need to create separate stored procedures and corresponding table types for each table that I want to copy or is there a way for the pre-copy script to delete records based on the change tracking table?
You have to use a Lookup activity before the Copy activity. With that Lookup activity you can query the database so that you get the deleted and updated PersonIDs, preferably all in one field, separated by commas (so it's easier to use in the pre-copy script). More information here: https://learn.microsoft.com/en-us/azure/data-factory/control-flow-lookup-activity
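A hypothetical query for that Lookup activity (named 'MyLookUp' to match the script below) could look like this, assuming SQL Server change tracking is enabled on the Person table, you keep your own last-sync version somewhere, and STRING_AGG is available (SQL Server 2017+ / Azure SQL Database):
-- Returns a single row with one column, e.g. PersonIDs = '1,2,3',
-- which the pre-copy script references as output.firstRow.PersonIDs
DECLARE @last_sync_version BIGINT = 0;   -- assumption: read from your own watermark table

SELECT STRING_AGG(CAST(CT.PersonID AS VARCHAR(20)), ',') AS PersonIDs
FROM   CHANGETABLE(CHANGES dbo.Person, @last_sync_version) AS CT
WHERE  CT.SYS_CHANGE_OPERATION IN ('U', 'D');   -- updated or deleted rows only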
Then you can do the following in your pre-copy script:
delete from TableName where PersonID in (@{activity('MyLookUp').output.firstRow.PersonIDs})
This way you will be deleting all the deleted or updated rows before inserting the new ones.
Hope this helped!
In the meantime, Azure Data Factory provides a metadata-driven copy task. After going through the dialog-driven setup, a metadata table is created which has one row for each dataset to be synchronized. I solved this UPSERT problem by adding a stored procedure as well as a table type for each dataset to be synchronized. Then I added the relevant information to the metadata table for each row, like this:
{
    "preCopyScript": null,
    "tableOption": "autoCreate",
    "storedProcedure": "schemaname.UPSERT_SHOP_SP",
    "tableType": "schemaname.TABLE_TYPE_SHOP",
    "tableTypeParameterName": "shops"
}
After that you need to adapt the sink properties of the copy task like this (stored procedure, table type, table type parameter name):
@json(item().CopySinkSettings).storedProcedure
@json(item().CopySinkSettings).tableType
@json(item().CopySinkSettings).tableTypeParameterName
If the destination table does not exist yet, you need to run the whole task once before adding the above variables, because auto-creation of tables only works as long as no stored procedure is specified in the sink properties.
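For reference, the stored procedure and table type referenced in the JSON above might look roughly like this. This is a sketch only: the procedure, type and parameter names come from the JSON, while the target table schemaname.SHOP and its columns are assumptions.
-- Hypothetical table type matching the source dataset's columns
CREATE TYPE schemaname.TABLE_TYPE_SHOP AS TABLE
(
    ShopID INT,
    Name   NVARCHAR(200),
    City   NVARCHAR(100)
);
GO

-- Hypothetical UPSERT procedure; ADF passes the copied rows in via the
-- table type parameter named in "tableTypeParameterName" ("shops")
CREATE PROCEDURE schemaname.UPSERT_SHOP_SP
    @shops schemaname.TABLE_TYPE_SHOP READONLY
AS
BEGIN
    MERGE schemaname.SHOP AS target
    USING @shops AS source
        ON target.ShopID = source.ShopID
    WHEN MATCHED THEN
        UPDATE SET target.Name = source.Name,
                   target.City = source.City
    WHEN NOT MATCHED THEN
        INSERT (ShopID, Name, City)
        VALUES (source.ShopID, source.Name, source.City);
END;
GO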

How to implement Auditing/versioning of Table Modifications on PostgreSQL

We're implementing a new system using Java/Spring/Hibernate on PostgreSQL. This system needs to make a copy of every record as soon as a modification/deletion is made to the record(s) in the table(s). Later, the audit table(s) will be queried by reports to display the data to the users.
I was planning to implement this auditing/versioning feature with a trigger on the table(s) that copies the modified (or deleted) row to a table called ENTITY_VERSIONS, which would have about 20 columns called col1, col2, col3, col4, etc. to store the columns of the above table(s). However, the problem is: if there is more than one table to be versioned and only ONE target table (ENTITY_VERSIONS) to store all the tables' versions, how do I design the target table?
OR is it better to have a copy of the VERSION table for each table that needs versioning?
It would be a bonus if some pointers towards PostgreSQL trigger (and associated stored procedure) code for implementing the auditing/versioning could be shared.
P.S.: I looked at "Suggestions for implementing audit tables in SQL Server?" and kind of like the answer, except I would not know what type OldValue and NewValue should be.
P.P.S.: If the tables use SOFT deletes (phantom deletes) instead of HARD deletes, does any of your advice change?
I would have a copy of each table to hold the versions of that table you wish to keep. It sounds like a bit of a nightmare to maintain and use a global versioning table.
This link in the Postgres documentation shows some audit trigger examples.
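A minimal sketch of that per-table approach, assuming a hypothetical application table customer(id int primary key, name text, email text); the trigger copies the pre-modification row image into a sibling audit table:
-- Per-table audit copy: same columns as the audited table plus audit metadata
CREATE TABLE customer_audit (
    audit_id   bigserial   PRIMARY KEY,
    audit_op   char(1)     NOT NULL,                  -- 'U' = update, 'D' = delete
    audit_ts   timestamptz NOT NULL DEFAULT now(),
    audit_user text        NOT NULL DEFAULT current_user,
    LIKE customer                                     -- copies the audited table's columns
);

CREATE OR REPLACE FUNCTION customer_audit_fn() RETURNS trigger AS $$
BEGIN
    -- Copy the pre-modification row image into the audit table
    INSERT INTO customer_audit (audit_op, id, name, email)
    VALUES (left(TG_OP, 1), OLD.id, OLD.name, OLD.email);
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER customer_audit_trg
    AFTER UPDATE OR DELETE ON customer
    FOR EACH ROW EXECUTE FUNCTION customer_audit_fn();   -- EXECUTE PROCEDURE before PostgreSQL 11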
In a global table, all columns can be stored in a single column of type hstore. I just tried this audit approach and it works great; I recommend it. This awesome audit table example (this link) tracks all changes in a single table by simply adding a trigger onto the tables you want to keep audit history on. All changes are stored as hstore; it works for version 9.1+.
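A minimal sketch of that single-table, hstore-based approach, reusing the hypothetical customer table from the sketch above:
-- Single audit table shared by all audited tables (requires the hstore extension)
CREATE EXTENSION IF NOT EXISTS hstore;

CREATE TABLE audit_log (
    audit_id   bigserial   PRIMARY KEY,
    table_name text        NOT NULL,
    audit_op   text        NOT NULL,                  -- INSERT / UPDATE / DELETE
    audit_ts   timestamptz NOT NULL DEFAULT now(),
    audit_user text        NOT NULL DEFAULT current_user,
    row_data   hstore      NOT NULL,                  -- the whole row as key/value pairs
    changed    hstore                                 -- only the changed columns (UPDATE)
);

CREATE OR REPLACE FUNCTION audit_log_fn() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'UPDATE' THEN
        INSERT INTO audit_log (table_name, audit_op, row_data, changed)
        VALUES (TG_TABLE_NAME, TG_OP, hstore(OLD), hstore(NEW) - hstore(OLD));
    ELSIF TG_OP = 'DELETE' THEN
        INSERT INTO audit_log (table_name, audit_op, row_data)
        VALUES (TG_TABLE_NAME, TG_OP, hstore(OLD));
    ELSE  -- INSERT
        INSERT INTO audit_log (table_name, audit_op, row_data)
        VALUES (TG_TABLE_NAME, TG_OP, hstore(NEW));
    END IF;
    RETURN NULL;   -- AFTER trigger: return value is ignored
END;
$$ LANGUAGE plpgsql;

-- Attach the same function to every table you want to audit, e.g.:
CREATE TRIGGER customer_audit_all
    AFTER INSERT OR UPDATE OR DELETE ON customer
    FOR EACH ROW EXECUTE FUNCTION audit_log_fn();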
