I am developing a web application using Oracle ADF. I have a bounded task flow containing a page fragment, and on that fragment I have a table that is generated from a managed bean.
I have set the table's value property in the Property Inspector to #{pageFlowScope.tableUtilBean.tableList}, and the table is generated successfully.
I have a method in the managed bean called generateTable(). The table is generated after executing a query; if the query result contains 10 records, the table has 10 rows.
My problem is that if the query result contains 100 records, this method executes 100 times and the query executes 100 times. Because of this, generating the table takes too much time. I need to make sure that this method executes only once.
Please help me: how do I achieve this?
Thanks in advance.
In your task flow, create a Method Call activity and make it the default activity. This method call should invoke #{pageFlowScope.tableUtilBean.generateTable} before the fragment is loaded.
And when you already have a query that produces the result, why are you populating the table from a managed bean?
Just create a ViewObject from the SQL query and drop it on the page as an af:table.
Make use of ADF Business Components.
Ashish
Background: I have a few models which are materialized as tables. These tables are populated with a wipe (truncate) and load. Now I want to protect the existing data in a table if the query used to populate it returns an empty result set. How can I make sure an empty result set does not replace the existing data in the table?
My tables live in Snowflake and I am using dbt to model the output tables.
In a nutshell: commit the transaction only when the SQL statement returns a non-empty result set.
Have you tried the dbt ref() function, which allows you to reference one model within another?
https://docs.getdbt.com/reference/dbt-jinja-functions/ref
If you are loading data in a way that is not controlled by dbt and then using that table, it is called a source. You can read more about this here.
dbt does not control what you load into a source; everything else (the T in ELT) is controlled wherever you reference a model via the ref() function. A good example: if you have a source that changes, you load it into a table, and you want to make sure the incoming data does not "drop" already recorded data, use the "incremental" materialization, as sketched below. I suggest you read more about it here.
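For illustration, here is a minimal sketch of an incremental model; the model's source name and the column names (id, updated_at, payload) are assumptions, not taken from your project:

{{ config(materialized='incremental', unique_key='id') }}

select id, updated_at, payload
from {{ source('raw', 'events') }}   -- hypothetical source

{% if is_incremental() %}
  -- on incremental runs, only pull rows newer than what is already loaded;
  -- an empty upstream batch then adds nothing instead of wiping the table
  where updated_at > (select max(updated_at) from {{ this }})
{% endif %}

With unique_key set, dbt merges changed rows into the existing table on incremental runs instead of truncating and reloading it, and a run that selects zero rows simply leaves the table as it was.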
Thinking incrementally takes time and practice; it is also recommended to do a --full-refresh every now and then.
You can have pre-hooks and post-hooks that check your sources with clever macros, and you can add dbt tests. We would need a bit more context about what you have and what you want to achieve to suggest a concrete answer.
I have three fields, badge number, termination date, and status, to update in Salesforce based on badge number. I am using an Update Strategy in the mapping and, at the session level, upsert with badge_number_c as the external ID lookup field and 'Treat source rows as' set to Data Driven (session properties). However, only 50 records get updated and 20,000 records are rejected because their badge numbers are not present in the target; those 20k records are attempted as inserts and rejected (since we did not map all the fields needed to form a record in Salesforce, as we only update). Writing this error log consumes a lot of time and the workflow run time is high.
I tried removing the upsert and external ID lookup field, but then it throws an error saying the Id field is missing.
It looks like you are trying to update a Salesforce target using an Informatica target definition and mixing two things.
If you are using only an Update Strategy plus 'Treat source rows as' set to Data Driven (session properties), then please make sure you handle the update condition in the Update Strategy.
For example,
First, calculate an INSERT_UPDATE_FLAG using a lookup on the target, joining on the primary key columns.
Then use it in the Update Strategy with logic like the following.
IIF(INSERT_UPDATE_FLAG = 'UPD', DD_UPDATE, DD_INSERT) -- if you want UPSERT logic.
or
IIF(INSERT_UPDATE_FLAG = 'UPD', DD_UPDATE, DD_REJECT) -- if you want UPDATE-only logic (non-matching rows are rejected).
Also please note that you need to specify the primary key columns in the Informatica target definition, otherwise the update won't work.
Now, as per your screenshot, if you want to use Salesforce-specific logic, you probably need to be careful and follow the link below. It is a multi-step process: create the external ID first, then use it to do the lookup and update.
https://knowledge.informatica.com/s/article/124909?language=en_US
We have a large production MSSQL database (the mdf is approximately 400 GB) and I have a test database. All the tables, indexes, views, etc. are identical in both. I need to make sure that the data in the tables of these two databases stays consistent, so every night I need to insert all new rows and update all changed rows from production into the test database.
I came up with the idea of using SSIS packages to keep the data consistent by checking for updated and new rows in all the tables. My SSIS flow is as follows.
I have a separate SSIS package for each table.
In order:
1. I read the timestamp value in the table so that I only fetch the last day's rows instead of the whole table.
2. I get those rows from the production table.
3. I use the Lookup component to compare this data with the test database table.
4. I use a Conditional Split to determine whether each row is new or updated.
5. If the row is new, I insert it into the destination; if it is updated, I update it in the destination table.
The data flow is in the MTRule and STBranch packages in the picture.
The problem is that I am repeating this same flow for each table, and I have more than 300 tables like this. It takes hours and hours :(
What I am asking is:
Is there any way in SSIS to do this dynamically?
PS: Every table has its own columns and PK values, but my data flow schema is always the same (below).
You can look into BiMLScript, which lets you create packages dynamically based on metadata.
I believe the best way to achieve this is to use Expressions. They empower you to dynamically set the source and Destination.
One possible solution might be as follows:
Create a table which stores all your table names and PK columns.
Define a package which loops through this table and builds a SQL statement.
Call your main package and pass the statement to it.
Use the statement as the data source for your data flow.
If applicable, pass the destination table as a parameter as well (another column in your config table).
This is how I processed several really huge tables: the data had to be fetched from 20 tables and moved to one single table.
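As a rough sketch (all table, column, and variable names here are made up for illustration), the config table and the statement built inside the loop could look like this:

-- metadata table that drives the loop
CREATE TABLE dbo.ETL_TableConfig (
    SourceTable   sysname NOT NULL,
    TargetTable   sysname NOT NULL,
    PKColumn      sysname NOT NULL,
    TimestampCol  sysname NOT NULL
);

-- for the current loop iteration, build the source statement and hand it
-- to the child package / data flow as a string variable
DECLARE @SourceTable  sysname = N'dbo.MTRule',      -- values read from ETL_TableConfig
        @TimestampCol sysname = N'LastUpdated';

DECLARE @stmt nvarchar(max) =
      N'SELECT * FROM ' + @SourceTable
    + N' WHERE ' + QUOTENAME(@TimestampCol)
    + N' >= DATEADD(day, -1, SYSDATETIME());';

SELECT @stmt AS SourceStatement;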
You are better off writing a stored procedure that takes the table name as a parameter and does your CRUD there.
Then call the stored procedure in a Foreach Loop container in SSIS.
Why do you need to use SSIS?
In fact you might be able to do everything using a Stored Procedure and scheduling it in a SQL Agent Job.
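A hedged sketch of what such a procedure could look like; the database names (ProdDb, TestDb), the key column Id, and the LastUpdated column are assumptions, and in practice the column lists would be generated from metadata:

-- the SET / INSERT column lists below are placeholders for the real column lists
CREATE OR ALTER PROCEDURE dbo.usp_SyncTable
    @TableName sysname
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @sql nvarchar(max) = N'
        MERGE TestDb.dbo.' + QUOTENAME(@TableName) + N' AS tgt
        USING (SELECT * FROM ProdDb.dbo.' + QUOTENAME(@TableName) + N'
               WHERE LastUpdated >= DATEADD(day, -1, SYSDATETIME())) AS src
          ON tgt.Id = src.Id
        WHEN MATCHED THEN
            UPDATE SET tgt.LastUpdated = src.LastUpdated
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (Id, LastUpdated) VALUES (src.Id, src.LastUpdated);';

    EXEC sys.sp_executesql @sql;
END;

The SQL Agent job (or the SSIS Foreach Loop) would then call EXEC dbo.usp_SyncTable @TableName = N'MTRule'; once per row of your table list.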
We have duplicate data in entities in Master Data Services, but not in the staging tables. How can we delete these? We cannot delete each row manually because there are more than 100 of them.
Did you create a view for this entity? see: https://msdn.microsoft.com/en-us/library/ff487013.aspx
Do you have access to the database via SQL Server Management Studio?
If so:
1. Write a query against the view that returns the value of the Code field for each record you want to delete.
2. Write a query that inserts the following into the staging table for that entity: the Code (from step 1), a BatchTag, and an ImportType of 4 (delete).
3. Run the import stored procedure, e.g. EXEC [stg].[udp_YourEntityName_Leaf]. See: https://msdn.microsoft.com/en-us/library/hh231028.aspx
4. Run the validation stored procedure. See: https://msdn.microsoft.com/en-us/library/hh231023.aspx
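Putting those steps together, a hedged T-SQL sketch could look like this (the entity name Product, the subscription view name, the duplicate filter, and the model/version IDs are placeholders you would replace with your own):

-- steps 1-2: stage the Codes to delete with ImportType 4
INSERT INTO stg.Product_Leaf (ImportType, ImportStatus_ID, BatchTag, Code)
SELECT 4, 0, N'DeleteDuplicates', v.Code
FROM mdm.viw_SYSTEM_1_1_Product AS v        -- your subscription view for the entity
WHERE v.Name LIKE N'DUP%';                  -- replace with your duplicate-detection logic

-- step 3: run the import stored procedure for the entity
EXEC stg.udp_Product_Leaf
     @VersionName = N'VERSION_1',
     @LogFlag     = 1,
     @BatchTag    = N'DeleteDuplicates';

-- step 4: validate the model (IDs are placeholders)
EXEC mdm.udpValidateModel @User_ID = 1, @Model_ID = 2, @Version_ID = 3, @Status_ID = 1;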
Use ImportType 6 instead of 4: with ImportType 4 the deletion will fail if the Code you are trying to delete is referenced by a domain-based attribute in another entity. All the other steps remain the same as Daniel described.
I deleted the duplicate data from the transaction tables, which cleared the duplicates from the UI as well.
MDS comes out-of-the-box with two front-end UIs:
Web UI
Excel plugin
You can use either of them to easily delete multiple records. I'd suggest using the Excel plugin.
Are there any Domain-based attributes linked to the entity you're deleting values from? If so, if the values are related to child entity members, you'll have to delete those values first.
I'm a SQL Server developer learning MDS. I loaded some entities via staging tables and via Excel add-in.
I'm trying to update members in an entity in MDS via the staging table. I can successfully add new members, but any attribute updates to existing members aren't populated to the entity view. The import process runs successfully with no errors.
I've tried ImportType = 0 and 2; neither works. When I set it to 1, I get an error, as expected. I also tried updating the Code value using the NewCode column, and that does not get updated either.
I've set up the staging data with an SSIS package, and also with a direct T-SQL INSERT INTO statement.
I am using almost the same T-SQL INSERT statement for a test entity which I created to load a new member, and then to modify attributes for the new member in a second batch.
Do you have any ideas why the updates would be ignored, or suggestions for things I can try?
Look at your batch in the staging table to see if any errors occurred. If ImportStatus_ID = 2, the record failed to import. You can see the reason for the failure by querying the view that shows import failure details; it will be named stg.viw_EntityName_MemberErrorDetails.
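For example (assuming the entity is named Product; substitute your own entity name):

-- rows that failed to import
SELECT *
FROM stg.Product_Leaf
WHERE ImportStatus_ID = 2;

-- the reason for each failure
SELECT *
FROM stg.viw_Product_MemberErrorDetails;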
Here is a Microsoft link for reference:
https://technet.microsoft.com/en-us/library/ff486990(v=sql.110).aspx
Hope this helps.
As suggested above, the member error details view describes the error.
Make sure you check the points below when updating in MDS:
1) Include the Code column in your INSERT statement.
2) Include all columns of the staging table in the INSERT query when using ImportType = 2 (otherwise the omitted columns will be updated to NULL).
You should insert the data into the staging table with ImportType 0 or 2 along with a BatchTag, and then run the staging stored procedure to load the data from the staging table into the entity table. The SP compares the staged data with the data in the entity table based on the Code value and updates the entity table.
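For example, a hedged sketch for an entity assumed to be named Product with a single Status attribute (all names and values here are placeholders):

-- ImportType 2 creates new members and overwrites existing attribute values;
-- any attribute column left NULL will overwrite the existing value with NULL,
-- so populate every attribute column you care about.
INSERT INTO stg.Product_Leaf (ImportType, ImportStatus_ID, BatchTag, Code, Name, Status)
VALUES (2, 0, N'UpdateBatch01', N'P-1001', N'Widget', N'Terminated');

-- load the batch from the staging table into the entity
EXEC stg.udp_Product_Leaf
     @VersionName = N'VERSION_1',
     @LogFlag     = 1,
     @BatchTag    = N'UpdateBatch01';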
You can also reset ImportStatus_ID in the stg leaf table:
update stg.C_Leaf
set ImportStatus_ID = 0;
I think this forces the data to be ready for staging and loading into the MDS entity.
Using ImportType = 0 will update an attribute only as long as the staged value for that attribute is not NULL; if it is NULL, the update will not be applied. Recheck the data in the entity.
If that doesn't work, try refreshing the model cache and fetching the entity details again.
Learn more about import types in MDS from the link below:
https://learn.microsoft.com/en-us/sql/master-data-services/leaf-member-staging-table-master-data-services?view=sql-server-2017
Hope this helps.