How can I import Landslide CRM data into Salesforce?

I'm importing some customer information from Landslide CRM into Salesforce.
Anyone have advice on the best methodology for doing the import?
It seems like the Apex Data Loader is the best way to go, but I don't know
if there are any issues with handling the objects in question, or if there
might be a specific tool or script to perform this migration.
Any experience with this import specifically, or with importing data into Salesforce
in general, would be appreciated.

Importing data into Salesforce can be achieved in multiple ways depending on the type of data and the requirements you have.
The first thing to do is get your data into CSV files, so you'll need to find a way to export the data first. For UTF-8 encoded data, don't use Excel; use something like OpenOffice instead (this only matters if you have UTF-8 characters).
If it's account and contact data, for example, there is an import wizard available in Setup > Administration Setup > Import Business Accounts/Contacts.
The next option, as you say, is to use the Apex Data Loader. This is probably the best approach.
The first thing, and this is critical for big migrations, is to create a field on your Account object which will serve as a unique field for reference purposes. When creating this field, set it as an External ID field and populate it with a unique reference for your accounts; the same goes for anything else that will be a parent. (You'll see why shortly.)
Next, use the Insert option in the Data Loader to load the data, mapping all the fields, especially the External ID.
Now, when you upload child objects, use the Upsert option and map the Account ID via the External ID created earlier. This will match the accounts using your unique ID instead of forcing you to look up the Salesforce ID, which saves a lot of time.
Repeat the same for other objects and you should be good to go.
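If you end up scripting part of this rather than clicking through the Data Loader UI, the same insert-then-upsert-by-External-ID flow can be sketched in Python with the third-party simple_salesforce library. This is a minimal sketch, not the Data Loader itself; the field names (Landslide_Id__c, Landslide_Contact_Id__c) and the credentials are placeholders you'd replace with your own External ID fields:

# Minimal sketch of the insert/upsert-by-External-ID flow using the
# third-party simple_salesforce library. All field names and credentials
# below are hypothetical placeholders.
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com",
                password="password",
                security_token="token")

# 1. Insert the parent Account, populating the External ID field.
sf.Account.create({
    "Name": "Acme Shoes Ltd",
    "Landslide_Id__c": "LS-0001",  # unique key carried over from the source system
})

# 2. Upsert a child Contact, matched on its own External ID, and linked to
#    its parent through the Account External ID rather than a Salesforce ID.
sf.Contact.upsert("Landslide_Contact_Id__c/LS-C-0042", {
    "LastName": "Smith",
    "Account": {"Landslide_Id__c": "LS-0001"},
})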
Apologies for the lack of structure here... doing this while at work and don't have a lot of time, but I hope this helps.

The data loader works great for most types of imports. The one suggestion I would give you is to create a new custom field on your target objects (presumably Account and Contact) called "Landslide ID" or similar, identify it as an external ID field, and then import the primary keys from your source system into this field (along with the "real" data).
Doing this achieves a couple of things: first, you have an easy unique link back to the source data for troubleshooting or tracing back to the source system. Second, if you find yourself in a situation where you need to import more fields or related data from the original source system, you'll be able to do so in an easy and correct way. It's just a good standard practice to adopt when doing data migrations -- it's almost no additional effort and can save you many hours in the future.

Related

Adding and storing json-ld to an existing website / database

I've seen that there is a vote to close this question because it is too broad. Let me rewrite the question:
There is a need to add metadata to different database tables (and thus records) which exposes itself as linked data in the form of json-ld on the website.
Database records of the same type could have different metadata; depending on the content, I should be able to store different metadata.
For example, I could add this fixed metadata to every page, because I have this data available:
<script type="application/ld+json">
{"@context":"https://schema.org","@type":"City","name":"@Model.MyCityVariable"}
</script>
Let's say I want to link my city to Geonames and Getty TGN. Under normal circumstances, I would need to add two database fields: one to store the Geonames ID and one to store the Getty TGN ID.
For Brussels this would be:
ID 2800866 from https://www.geonames.org/2800866/brussels.html
ID 7007868 from https://www.getty.edu/vow/TGNFullDisplay?find=Brussel&place=&nation=&prev_page=1&english=Y&subjectid=7007868
What if I want to add other sources or other metadata? Adding a database field for each new item doesn't feel right.
The goal is finding out how to store additional metadata, without having to add a specific database field for each item.
Before searching, I didn't know SQL Server was capable of working with JSON: https://learn.microsoft.com/en-us/sql/relational-databases/json/json-data-sql-server?view=sql-server-ver16
It looks like this might be a good approach: add one extra database field and store a serialized JSON object in it, which contains my different metadata fields (like the IDs).
Maybe the question now can be:
Is this JSON + SQL Server combination a proper way of storing metadata and linking/providing linked data for an existing website/database? Are there other, better ways?
Tech used: SQL Server, ASP.NET Core MVC, EF Core
(I might be adding an asp.net core web api also.)
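To make the JSON-column idea concrete, here is a minimal sketch. JSON_VALUE and ISJSON are built-in SQL Server (2016+) functions; the table, column and connection string are made up for illustration, and pyodbc is used here just for brevity rather than EF Core:

# Sketch: one NVARCHAR(MAX) column holding a serialized JSON object of
# external IDs, queried with SQL Server's built-in JSON functions.
# Table/column names and the connection string are illustrative only.
import json
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=MyDb;Trusted_Connection=yes;"
)
cur = conn.cursor()

# Store the metadata for Brussels as one serialized JSON object.
metadata = json.dumps({"geonamesId": "2800866", "tgnId": "7007868"})
cur.execute("UPDATE dbo.City SET Metadata = ? WHERE Name = ?", metadata, "Brussels")

# Pull a single ID back out with JSON_VALUE; adding a new source later means
# adding a new JSON key, not a new column.
cur.execute(
    "SELECT Name, JSON_VALUE(Metadata, '$.geonamesId') AS GeonamesId "
    "FROM dbo.City WHERE ISJSON(Metadata) = 1"
)
for name, geonames_id in cur.fetchall():
    print(name, geonames_id)
conn.commit()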

Is there a way to export an Informatica mapplet's 'graphical' data to a simple CSV/Excel file?

The firm I work at has a lot of data sources entering the firm's database through the Informatica ETL tool, stored in mapplets and other data models (sorry if I'm not using the exact terminology).
The problem is that all the business logic is stored in the 'graphical interface' and nowhere else - every time I want to see what goes into a target field, I have to trace the inputs through the mapplet, and that takes a very long time.
The question is: is there a tool that can take all the relationships in an Informatica mapplet and somehow export them to an Excel table (so I can see it all without tracing)? That way I could try to make proper documentation...
Thanks in advance.
It's possible to export mappings or whole workflows to XML. Next, you can use this tool - it will create tables with source to target dependency for every mapping.
Keep in mind it will only map input to output; it won't extract the full logic and transformations done along the way - that would have been too complex for simple visualization.
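If you can't use such a tool, a rough do-it-yourself alternative is to parse the exported XML. The sketch below assumes the export contains CONNECTOR elements with FROMINSTANCE/FROMFIELD/TOINSTANCE/TOFIELD attributes, which is how PowerCenter mapping exports typically encode field-level links; verify against your own export before relying on it:

# Rough sketch: flatten field-level links from an exported mapping XML
# into a CSV. Assumes CONNECTOR elements with FROMINSTANCE/FROMFIELD/
# TOINSTANCE/TOFIELD attributes; check your own export's structure first.
import csv
import xml.etree.ElementTree as ET

tree = ET.parse("mapping_export.xml")  # hypothetical export file name
with open("field_links.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["from_instance", "from_field", "to_instance", "to_field"])
    for conn in tree.iter("CONNECTOR"):
        writer.writerow([
            conn.get("FROMINSTANCE"),
            conn.get("FROMFIELD"),
            conn.get("TOINSTANCE"),
            conn.get("TOFIELD"),
        ])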
Informatica supports exporting mapping information to Excel - just search the documentation which tells you how to do it.
However, for anything other than the simplest of mappings, what ends up in Excel is not that easy to understand. If your Informatica installation supports it, then using the lineage capabilities is a much better bet.

Export Outlook Emails Into SQL (Via Access?)

I have an email folder in Outlook that contains hundreds of emails recording my discussions with a developer of some bespoke software. I want to import these into SQL to create a knowledge base of information that can be searched to extract all the decisions we have made during the course of the two-year project.
Having searched the net, I found that it is very easy to dump the contents of an email folder into Access using the import data functionality. In fact I have linked the table, and so believe (never used Access before!!) that I now have an Access table that is connected in 'real time' to the Outlook folder. This is exactly what I want, BUT in SQL, as that is something I am very familiar with using.
So I have tried to import the Access database into SQL (which also appears to be relatively easy) but keep getting the message that 'The source database ...contains no visible tables or views'. Checking SQL permissions, I am the owner of this new database.
Two questions please. First, I can't believe that going through Access is the simplest way to do this, and presume that I will lose the 'real-time' link - am I right? Second, given that I can see my Access database has a visible table, why am I getting the error?
The easiest and quickest way is to create a VBA macro with which you can populate your SQL database from Outlook emails. You can build the table structure according to your needs and extract the required information from Outlook using VBA. I'd suggest processing emails in chunks using the Find/FindNext or Restrict methods of the Items class, so you don't hit the reference counter limit. The MailItem properties are described on MSDN. (A Python variant of the same approach is sketched below.)
BTW, the internal store (if you use cached mode) in Outlook acts like a database. So why do you need to introduce yet another database?
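If VBA isn't a hard requirement, the same Outlook COM objects are scriptable from Python via pywin32, which may be handier if the destination is SQL Server anyway. A minimal sketch, where the subfolder name, table schema and connection string are all assumptions:

# Sketch: walk an Outlook folder via COM (pywin32) and insert the basics of
# each message into a SQL Server table. Folder path, table name and
# connection string are placeholders.
import pyodbc
import win32com.client

mapi = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
inbox = mapi.GetDefaultFolder(6)                 # 6 = olFolderInbox
folder = inbox.Folders("Developer Discussions")  # hypothetical subfolder

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=KnowledgeBase;Trusted_Connection=yes;"
)
cur = conn.cursor()

# Restrict() filters on the Outlook side, as suggested above, instead of
# loading every item at once.
items = folder.Items.Restrict("[ReceivedTime] >= '01/01/2020'")
for item in items:
    cur.execute(
        "INSERT INTO dbo.Emails (Subject, Sender, ReceivedTime, Body) "
        "VALUES (?, ?, ?, ?)",
        item.Subject, item.SenderName, str(item.ReceivedTime), item.Body,
    )
conn.commit()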

Is it possible to connect the company DB to Odoo 9 (OpenERP)?

I am creating a business app to manage the inventory of a mid-sized shoe company.
I already have a database created with PostgreSQL. I would now like to connect it to Odoo 9 to develop applications.
How can I do that?
You can't just use your existing data directly with Odoo.
But Odoo makes it easy to reuse existing data from other applications and databases with its flexible import system, in which data is imported via CSV files. You can import any data you can possibly think of; you just have to know the format Odoo is expecting and the required fields you have to provide.
Every model in Odoo, even in custom-built modules, can have data imported into it.
If you want to import data, just open the list view of any model and you should see an Import link beside the Create button; from there you can carry on. There's even an FAQ there which you can read if you need to find out more about the import system.
As for knowing the required columns that have to be present when you're importing, just create sample data and save it, then:
Go to the listview
Tick the check-box beside the record name to select it
Now click on the More button; you should see an Export option
From there you can pick the fields you want to export; the required fields will be highlighted in blue
There are other ways of reusing existing data with Odoo: you can write to the database directly (but this is not recommended, as several constraints would not be checked and you could end up messing up the database), or you can use XML-RPC or JSON-RPC, which is more time-consuming (see the sketch below).
Using the import functionality is the best and the easiest way.
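For completeness, the XML-RPC route mentioned above looks like this with Python's standard library; this follows the pattern in Odoo's external API documentation, with the URL, database name, credentials and the res.partner example model standing in for your own:

# Sketch of Odoo's external XML-RPC API using only the standard library.
# URL, database and credentials are placeholders for your own instance.
import xmlrpc.client

url, db = "http://localhost:8069", "company_db"
username, password = "admin", "admin"

common = xmlrpc.client.ServerProxy(url + "/xmlrpc/2/common")
uid = common.authenticate(db, username, password, {})

models = xmlrpc.client.ServerProxy(url + "/xmlrpc/2/object")

# Create a record, e.g. a partner migrated from the existing PostgreSQL data.
partner_id = models.execute_kw(db, uid, password, "res.partner", "create",
                               [{"name": "Acme Shoes"}])

# Read it back to confirm the write went through.
print(models.execute_kw(db, uid, password, "res.partner", "read",
                        [[partner_id]], {"fields": ["name"]}))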

Database Driven Model Mapping in Java

So, I have a project where I get data from a dozen different sources; some are database objects, but most often the data is in various JSON or XML formats. I need to take this disparate data and pull it into one single, clean, managed object that we control.
I have seen dozens of different posts on various tools for object-to-object mapping, Orika being one of them. But the problem is that Orika, like many of these, still needs solid classes defined to do the mapping. If there is a change to the mapping, then I have to change my class, re-commit it, then build and deploy new code... and testing would also have to be done, like with any code change. So maybe some of these tools aren't a great solution for me.
Then I was looking to do some sort of database-driven mapping, where I have a source, a field, and then the new field or function I would like to take it to. So, with a database-driven tool, I could modify the fields in the database, and everything would keep working as it should. I could always create a front-end to modify this tool.
So, with that ... I am asking whether there is any database-driven tool where I can map field to field, or fields to functions? Drools was my first choice, but I don't know if it is my best choice; maybe it is overkill for my needs. So I was looking for advice on what might be the best tool to do my mapping.
Please let me know if you need any more information from me, and thanks for all the help!
Actually, Orika can handle dynamic data sources like that; there is even an example of how to convert from an XML Element (DOM API) or even a JsonObject.
You can use an XML parser to convert your data into Element objects, or Jackson to get a JsonObject.
Then define your class map between your "canonical" Java class and these dynamic "classes".
See "Customizing the PropertyResolverStrategy" in the Orika docs: http://orika-mapper.github.io/orika-docs/advanced-mappings.html
Here is an example of Orika mapping a MongoDB DBObject to a Java bean:
https://gist.github.com/elaatifi/ade7321a1405c61ff8a9
However, converting JSON is more straightforward than XML (the semantics of attributes, child elements and custom tags do not map cleanly onto JavaBeans).
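Separately from Orika: the database-driven "mapping table" idea from the question is simple enough to sketch directly. Below is a purely illustrative sketch (in Python for brevity, though the question is about Java) of driving field-to-field and field-to-function mappings from data instead of compiled classes; in practice the rules would live in a database table, and every name here is hypothetical:

# Minimal illustration of data-driven mapping: each rule names a source
# field, a target field, and an optional transform. Rules are inline here
# but would normally be loaded from a database table.
TRANSFORMS = {
    "upper": str.upper,
    "cents_to_dollars": lambda v: float(v) / 100,
}

# (source_field, target_field, transform_name_or_None) - hypothetical rules
RULES = [
    ("cust_nm", "customerName", "upper"),
    ("amt_cents", "amount", "cents_to_dollars"),
    ("src_id", "externalId", None),
]

def map_record(source):
    """Apply the mapping rules to one source record (a dict)."""
    target = {}
    for src_field, dst_field, transform in RULES:
        value = source.get(src_field)
        if transform is not None:
            value = TRANSFORMS[transform](value)
        target[dst_field] = value
    return target

print(map_record({"cust_nm": "acme", "amt_cents": "1999", "src_id": "A-7"}))
# -> {'customerName': 'ACME', 'amount': 19.99, 'externalId': 'A-7'}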
