Updating the Salesforce user object using Informatica - salesforce

I have three fields, badge number, termination date, and status, to update in Salesforce based on badge number. I'm using an Update Strategy in the mapping, and at the session level I set upsert with the external lookup field badge_number_c and 'Treat source rows as' set to Data Driven (session properties). However, only 50 records get updated and 20,000 records are rejected because their badge numbers are not present in the target; those 20k records are then attempted as inserts and rejected (since we only mapped the fields needed for the update, not enough to form a complete record in Salesforce). Writing the error log for these rejections consumes a lot of time and the workflow run time is high.
I tried removing the upsert option and the external lookup field, but then the session throws an error saying the Id field is missing.

It looks like you are trying to update the Salesforce target using an Informatica target definition and mixing two things.
If you are using only an Update Strategy plus 'Treat source rows as' = Data Driven (session properties), then please ensure you handle the update condition in the Update Strategy.
For example,
First, calculate an INSERT_UPDATE_FLAG using a lookup on the target, joining on the primary key columns.
Then use it in the Update Strategy expression, like the logic below.
IIF(INSERT_UPDATE_FLAG = 'UPD', DD_UPDATE, DD_INSERT) -- if you want UPSERT logic.
or
IIF(INSERT_UPDATE_FLAG = 'UPD', DD_UPDATE, DD_REJECT) -- if you want UPDATE-only logic (rows with no match are rejected instead of inserted).
Also please note, you need to mark the primary key columns in the Informatica target definition, otherwise the update won't work.
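For illustration only, here is the same decision expressed as a small Python sketch (the sample badge numbers and row fields are made up; in Informatica this is the Lookup transformation feeding the Update Strategy described above):

# Hypothetical sketch of the flag logic described above: rows whose key exists
# in the target become updates, everything else is rejected rather than inserted.
existing_badges = {"1001", "1002"}  # in practice, keys looked up from the target

def flag_row(row):
    return "UPD" if row["badge_number"] in existing_badges else "REJ"

rows = [
    {"badge_number": "1001", "termination_date": "2015-06-30", "status": "Terminated"},
    {"badge_number": "9999", "termination_date": "2015-07-01", "status": "Terminated"},
]
updates = [r for r in rows if flag_row(r) == "UPD"]
rejected = [r for r in rows if flag_row(r) == "REJ"]
print(len(updates), "to update,", len(rejected), "rejected")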
Now, as per your screenshot, if you want to use SFDC-specific upsert logic, you probably need to be careful and follow the link below. It is a multi-step process: create the external ID first and then use it to do the lookup and update.
https://knowledge.informatica.com/s/article/124909?language=en_US

Related

Python+peewee: retrieve a field after Model.save() (and trigger execution in Sqlite database)

In my application I work with SQLite. On one of the tables in the database I've implemented a trigger (basically, after an insert event on the table TAB, it updates a column named codecolumn which depends on the ID primary key field).
In my code I create an object from a previously defined peewee Model:
objfromModel = Model(params....)
After executing the line
objfromModel.save()
we hoped to get back not only the generated id field (in fact objfromModel.id is retrieved from the DB), but also the new codecolumn value generated by the trigger on the insert event. But objfromModel.codecolumn is None.
Question: is there a trick in peewee to recover this new field value generated in the database by the trigger?
Unfortunately SQLite does not support INSERT ... RETURNING (support only arrived later, in SQLite 3.35). What you could do is one of a couple of things:
A. After creation, simply re-fetch the codecolumn, e.g. self.codecolumn = MyModel.select(MyModel.codecolumn).where(MyModel.id == self.id).scalar(convert=True). The use of scalar() says "return just one value", and convert=True says "convert the underlying database type to a Python type"; this is really only necessary if the database type is a date or datetime.
B. Create a post-insert trigger that calls a user-defined function. Register a handler for the user-defined function on your database instance, and have your callback receive the new codecolumn value and set it as an attribute in the callback. Hopefully this makes sense?
C. Move the codecolumn trigger out of SQL and into Python, making it easier to know ahead-of-time what its value will be. This depends obviously on what that column contains.
Hope these ideas help.
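A minimal, runnable sketch of option A, assuming a model named MyModel with an auto-increment id and a codecolumn maintained by an AFTER INSERT trigger (the model and field names are illustrative, not taken from the question):

from peewee import Model, SqliteDatabase, AutoField, TextField

db = SqliteDatabase('app.db')

class MyModel(Model):
    id = AutoField()
    name = TextField()
    codecolumn = TextField(null=True)  # filled in by an AFTER INSERT trigger

    class Meta:
        database = db

db.connect()
db.create_tables([MyModel])  # the trigger itself is assumed to exist already

obj = MyModel(name='example')
obj.save()  # obj.id is now populated, but obj.codecolumn is still None

# Option A: re-fetch just the trigger-generated column for this row.
obj.codecolumn = (MyModel
                  .select(MyModel.codecolumn)
                  .where(MyModel.id == obj.id)
                  .scalar())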

SSIS no-match lookup? SQL Server Integration Services - prevent duplicate rows

In SSIS 2012, let's presume I simply copy customer data from one DB source to a DB destination (they are different database instances; one cannot "see" the other).
How do I prevent adding customer data I have already added before? In other words, when I rerun the task, it should not add a customer twice or more (only the ones that previously failed). We have a non-unique reference available in the destination customer table, e.g. 'SourceCustomerID', which is non-unique!
So we cannot rely on a unique index in the destination table(s), and even if we could, I don't want to go that way (it would cause failures)...
Added based on the questions below: there ARE columns that uniquely identify the data in the target table, and we need them for this, but they are neither implemented as unique indexes, nor do I want to let the job (or rows) fail because of that. I want to prevent adding these rows in a controlled way.
I tried the Lookup component, playing with the "Lookup No Match Output", etc... no luck yet.
Any ideas how to accomplish this using SSIS principles?
Best regards
Bart.
Use the SCD (Slowly Changing Dimension) component:
https://msdn.microsoft.com/en-us/library/ms141715.aspx
You map the business key, which it uses to check for an existing record, and you can then insert/update. You can alter it to insert only.
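Outside the SCD wizard, the underlying idea is simply "insert only the source rows whose business key is not yet in the destination". A rough Python/pyodbc sketch of that pattern, with connection strings, table and column names assumed for illustration:

import pyodbc

src = pyodbc.connect("DSN=SourceDB")       # placeholder connections
dst = pyodbc.connect("DSN=DestinationDB")

# Business keys already present in the destination.
existing = {row.SourceCustomerID for row in
            dst.execute("SELECT SourceCustomerID FROM dbo.Customer")}

# Copy only the source rows whose business key is not there yet.
for row in src.execute("SELECT CustomerID, Name FROM dbo.Customer"):
    if row.CustomerID not in existing:
        dst.execute(
            "INSERT INTO dbo.Customer (SourceCustomerID, Name) VALUES (?, ?)",
            row.CustomerID, row.Name,
        )

dst.commit()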

Set up relation on two existing Salesforce objects

I have a custom object in Salesforce for which I need to set up a Master-Detail relationship from Accounts, Accounts being the master and CompHist being the detail. The problem I am running into is that I need the relation to work off custom fields within the objects. Example:
1.) Accounts has a custom field called CustomerId.
2.) CompHist also has custom field called CustomerId.
3.) I need to be able to have these linked together by the CustomerId field for report generation.
About 2,000 records are inserted into CompHist around the 8th of each month. This is done from a .NET application that kicks off at the scheduled time, collects info from our databases and then uploads that data to salesforce via the SOAP API.
Maybe I'm misunderstanding how Salesforce relationships work, as I am fairly new (a couple of months) to Salesforce development.
Thanks,
Randy
There is a way to get this to work without triggers that link the records, and without pre-querying Salesforce from .NET to learn the Account Ids before you push the CompHist records.
Setup
On Account: set the "External ID" checkbox on your CustomerId field. I'd recommend setting "Unique" too.
On CompHist: you'll need to decide whether it's acceptable to move the records around later, or whether, once the relation to an Account is set, it stays like that forever. When you've made that decision, tick / untick the "reparentable master-detail" option in the definition of your lookup / master-detail to Account.
And if you have some Id on these details, something like a "line item number", consider making an External Id for them too. It might save your bacon some time in the future when an end user questions the report, or when you have to do some kind of "flush" and push all lines from .NET again (it will help you figure out what's to insert and what's to update).
At this point it's useful to think about how you are going to fill in the missing data (all the nulls in the Ext. Id field).
Actually establishing the relationship
If you have the external IDs set, it's pretty easy to tell Salesforce to figure out the linking for you. The operation is called upsert (a mix between update and insert) and can be used in two flavours.
"Basic" upsert is for deciding between create and update; it means "dear Salesforce, please save this CompHist record with MyId=1234. I don't know what its Id is in your database and frankly I don't care, go figure it out, will ya?"
If there was no such record - 1 will be created.
If there was exactly 1 match - it will be updated.
If more than one match was found, SF won't know which one to update and will throw an error back at you (that's why marking the field as "unique" is a good idea; there's a chance you'll spot errors sooner).
"Advanced" upsert is for maintaining foreign keys and establishing lookups: "Dear SF, please hook this CompHist up to the Account which is marked as "ABZ123" in my DB. Did I mention I don't care about your Ids and I can't be bothered to query your database before uploading my stuff?"
Again, an exact match works as expected.
0 or 2 Accounts with the same ext. id value = error.
Code plz
I'd recommend you play with the Data Loader or a similar tool first to get a grasp of what exactly happens, how to map fields, and how not to get confused (these two flavours of upsert can be used at the same time). Once you manage to push the changes the way you want, you can modify your integration a bit.
SOAP API upsert: http://www.salesforce.com/us/developer/docs/api/Content/sforce_api_calls_upsert.htm (C# example at the bottom)
REST API: http://www.salesforce.com/us/developer/docs/api_rest/Content/dome_upsert.htm
If you'd prefer a Salesforce Apex example: Can I insert deserialized JSON SObjects from another Salesforce org into my org?
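For a quick feel of the "advanced" flavour outside .NET, here is a hedged Python sketch using the simple_salesforce library. The API names are assumptions, not confirmed: LineItemNumber__c stands in for the "line item number" external id suggested above, Amount__c is a made-up data field, and Account__r assumes the master-detail field to Account is named Account__c.

from simple_salesforce import Salesforce

# Placeholder credentials.
sf = Salesforce(username="user@example.com", password="pw", security_token="token")

# Upsert one CompHist record by its own (hypothetical) external id, and relate it to
# the Account whose external id CustomerId__c equals "ABZ123". No Account Id is
# queried up front; Salesforce resolves it from the external id.
sf.CompHist__c.upsert(
    "LineItemNumber__c/CH-0001",
    {
        "Amount__c": 125.50,                        # assumed data field
        "Account__r": {"CustomerId__c": "ABZ123"},  # relate to the Account via its external id
    },
)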

Dynamic SQL statement return value using the current target connection

I'm currently creating my first real-life project in Pervasive. The task is to map a certain XML structure containing orders (as in shops and products) to 3 tables I created myself. These tables live in an MS SQL Server instance.
All of the tables have a unique key called "id", an automatically incremented column. I've dropped this column from all mappings so that Pervasive will not try to fill it itself.
For certain calculations, for a split key in one of the tables, and for references to the created records from other tables, I will need the id that the database has just created. I have googled the answer: I can use "SELECT @@IDENTITY;" as a statement, which returns the id most recently created for the current connection. This means that in Pervasive I will have to execute this statement using the already existing target connection object.
But how do I do that? I am quite sure that I will need a DJImport or DJExport object, but how do I get one associated with the connection that Pervasive inserts the records through?
Or is there any other way to handle this auto increment when I need to reference the id in other tables?
Not sure how things work in Pervasive, but you may run into issues with @@IDENTITY; SCOPE_IDENTITY() would probably be safer, but may still not work from Pervasive.
Hopefully your tables have a natural key in addition to the generated id, in which case you can select your id based on the natural key. This will avoid any issues you may have with disparate sessions and scope.
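As a rough illustration of the natural-key approach (shown here from a separate connection using Python/pyodbc against SQL Server; the table and column names are made up for the example):

import pyodbc

conn = pyodbc.connect("DSN=OrdersDB")  # placeholder connection string

# Insert the header row; the IDENTITY column 'id' is generated by SQL Server.
conn.execute(
    "INSERT INTO dbo.OrderHeader (OrderNumber, ShopCode) VALUES (?, ?)",
    "ORD-2015-0042", "SHOP01",
)
conn.commit()

# Re-select the generated id via the natural key instead of @@IDENTITY,
# so it does not matter which connection or scope performed the insert.
order_id = conn.execute(
    "SELECT id FROM dbo.OrderHeader WHERE OrderNumber = ?", "ORD-2015-0042"
).fetchval()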
If anyone looks this post up and wonders about the answer, it's "You can't". Pervasive does not give you access to its own connection object, the one it uses to query the database, and without access to it you cannot be guaranteed to fetch the right id. The solution for us was this: we used a stored procedure, called in the Before-Transformation event, which created the header record and returned the id and an optional error message as a result set. We execute it, it returns the id, and we then save and use that id throughout our mapping.
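For SQL Server, the shape of that call (seen from outside Pervasive, with a hypothetical procedure and column names) would be roughly:

import pyodbc

conn = pyodbc.connect("DSN=OrdersDB")  # placeholder connection string
cur = conn.cursor()

# Call a hypothetical procedure that inserts the header record and returns
# the generated id plus an optional error message as a one-row result set.
cur.execute("{CALL dbo.CreateOrderHeader (?, ?)}", "ORD-2015-0042", "SHOP01")
row = cur.fetchone()
header_id, error_message = row.Id, row.ErrorMessage
conn.commit()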

Optimization suggestions for a SQL Server table

I have a table containing user input which needs to be optimized.
I have some ideas about how to solve this, but I would really appreciate your input. The table that needs optimizing is called Value in the structure below.
All tables mentioned below have integer primary keys called Id.
Specs: MS SQL Server 2008, Linq2Sql, ASP.NET website, C#.
The current structure looks as follows:
Page -> Field -> FieldControl -> ValueGroup -> Value
Page
A page is a container for one or more Fields.
Field
A field is a container for one or more FieldControls such as a textbox or dropdown-options.
Relationships: PageId
FieldControl
If a Field is of the type 'TextBox' then a single FieldControl is created for the Field.
If a Field is of the type 'DropDown' then one FieldControl per dropdown option is created for the Field containing the option text.
Relationships: FieldId
ValueGroup
Each time a user fills in the Fields within a Page and saves it, a new ValueGroup (Id) is created to keep track of the user input belonging to that save. When a user wants to look at a previously filled-in form, the ValueGroup is used to load the Values into the FieldControls of that previous instance.
Relationships: None
Value
The actual input of a FieldControl. If the user typed 'Hello' in a TextBox, then 'Hello' is stored in a row in this table, together with a reference back to the FieldControl that 'Hello' was entered in. A ValueGroup is linked to the Values in order to group them by the save/instance they belong to, as described under ValueGroup.
Relationships: ValueGroupId, FieldControlId
The problem
If 100,000 Pages are fully filled in, each containing 10 TextBoxes, then we get 100,000 * 10 records in the Values table, meaning we quickly reach one million records, and it is really slow as it is now. The user can create as many different Pages with as many different Fields as he/she likes, and all these values are stored in the Values table. The way I use this data is either by displaying a gridview with pagination that shows all records for a single Page type, or by looking at a specific Page instance (Values grouped by ValueGroupId).
Some ideas that I have:
Good indexing should be very important when optimizing the Values table.
Should I perhaps add a foreign key directly back to Page from Value, ending up with an index on (Id, PageId, ValueGroup), allowing the gridview to retrieve values that are relevant to only one Page?
Should I look into partitioning the table, and if so, how would you recommend I do this? I was thinking that partitioning by Page, hence getting chunks of values that are only relevant to a certain Page, would be wise in this case, right? How would the script/schema look for something like that, where Pages can be created and removed at any time by the users?
PS. There should be a badge on this forum for all the people that finished reading this long post, and I hope I've made myself clear :)
Just to close this post: correct indexing solved all the performance problems.
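For readers wondering what "correct indexing" might mean for the Value table described above, here is a hedged sketch (ValueGroupId and FieldControlId come from the description; the index names and included columns are assumptions), creating nonclustered indexes for the two access paths, loading one saved instance and querying by FieldControl:

import pyodbc

conn = pyodbc.connect("DSN=FormsDB")  # placeholder connection

# Index for loading one saved instance: all Values of a ValueGroup.
conn.execute(
    "CREATE NONCLUSTERED INDEX IX_Value_ValueGroupId "
    "ON dbo.Value (ValueGroupId) INCLUDE (FieldControlId)"
)

# Index for queries that start from a FieldControl (and hence a Page).
conn.execute(
    "CREATE NONCLUSTERED INDEX IX_Value_FieldControlId "
    "ON dbo.Value (FieldControlId) INCLUDE (ValueGroupId)"
)

conn.commit()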
This may be slightly off-topic, but why? Is this data that you need to access in real-time, or is it for some later processing? Could you perhaps pack the data into a single row and then unpack it later?
Generic
You say it is slow now, but there can be many reasons for that other than the database: low memory, high CPU, disk fragmentation, network load, socket problems, etc. These should show up in a system monitor.
Try for instance Sysinternals (now MS) tool: http://live.sysinternals.com/procexp.exe
But if that is all under control then back to the database.
Database index
One million records is not "that much" and should not be a problem.
An index should do the trick if you don't have any indexes right now.
You should probably set indexes on all tables if you haven't done so already.
I tried to draw a database model; is this right?
http://www.freeimagehosting.net/image.php?a39cf99ae5.png
Table structure (?)
Page -> Field -> FieldControl -> ValueGroup -> Value
The table structure looks like it may not be the optimal one, but it is hard to say exactly when I don't know how the application works.
Do all tables have foreign keys to the table above them?
Is this somewhat similar to your code?
Pseudo code:
1. Get the page info. Gives key "page-id".
2. Get all "Field" rows marked with that "page-id".
Gives keys "field-id" & "fieldcontrol-id".
3. Loop through all field-ids and get the FieldControl for each one.
4. Loop through all field-ids and get all ValueGroups.
Gives a list of "valuegroup-id" keys.
5. Loop through all ValueGroups and get all fields.
