We would like to keep Salesforce synced with data from our organization's back-end. The organizational data gets updated by nightly batch processes, so "real-time" syncing to Salesforce isn't in view. We intend to refresh Salesforce nightly, after our batch processes complete.
We will have somewhere around 1 million records in Salesforce (some are Accounts, some are Contacts, and some belong to custom objects).
We want the refresh to be efficient, so it would be nice to send only updated records to Salesforce. One thought is to use Salesforce's Bulk API to first get all records, then compare to our data, and only send updated records to Salesforce. But this might be an expensive GET.
Another thought is to just send all 1 million records through the Bulk API as upserts to Salesforce - as a "full refresh".
What we'd like to avoid is the burden/complexity of keeping track of what's in Salesforce ourselves (i.e. tables that attempt to reflect what's in Salesforce, so that we can determine the changes to send to Salesforce).
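For concreteness, here is roughly what the "full refresh" option could look like with Bulk API 2.0 (a sketch only; the instance URL, object, and external-ID field name are placeholders). Because an upsert keys on an external ID from our side, we would not need to track Salesforce IDs ourselves:

    import requests

    INSTANCE = "https://yourinstance.my.salesforce.com"  # placeholder
    TOKEN = "..."  # OAuth access token
    API = f"{INSTANCE}/services/data/v57.0/jobs/ingest"
    HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

    def full_refresh_upsert(csv_payload: str, sobject="Contact", ext_id="External_Id__c"):
        """Create a Bulk API 2.0 upsert job, upload the CSV rows, close the job."""
        job = requests.post(API, headers=HEADERS, json={
            "object": sobject,
            "operation": "upsert",
            "externalIdFieldName": ext_id,  # assumed external-ID field on the object
            "contentType": "CSV",
            "lineEnding": "LF",
        })
        job.raise_for_status()
        job_id = job.json()["id"]

        # Upload the CSV body (the first line must be the column header row).
        requests.put(
            f"{API}/{job_id}/batches",
            data=csv_payload.encode(),
            headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "text/csv"},
        ).raise_for_status()

        # Mark the job UploadComplete so Salesforce starts processing it.
        requests.patch(f"{API}/{job_id}", headers=HEADERS,
                       json={"state": "UploadComplete"}).raise_for_status()
        return job_id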
Related
I have an Express API application running on GAE. It is my understanding that every time someone makes a GET request to it, a log entry is created. In the Logs Explorer (Operations -> Logging -> Logs Explorer), I can filter to view only GET requests from a certain source by querying:
protoPayload.method="GET"
protoPayload.referrer="https://fakewebsite.com/"
In the top-right of the Logs Explorer, I can also select a time range of 1 day to view logs from the past 24 hours.
I want to be able to see how many GET requests the app receives from a given referrer every day. Is this possible? And is there functionality to display the daily logs in a bar chart (say, to easily visualize how many logs I get every day over the period of a week)?
You can achieve this, but not directly in the Cloud Logging service. You have started well: you already have a custom filter.
Then, create a sink and save the logs to BigQuery.
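A minimal sketch of creating that sink with the google-cloud-logging client, assuming the destination BigQuery dataset already exists (project, dataset, and sink names are placeholders); after creation, the sink's writer identity still needs write access to the dataset:

    from google.cloud import logging

    client = logging.Client()

    # The same filter used in the Logs Explorer; the sink routes every
    # matching entry to the BigQuery dataset given as the destination.
    FILTER = (
        'protoPayload.method="GET" '
        'protoPayload.referrer="https://fakewebsite.com/"'
    )
    sink = client.sink(
        "get-requests-to-bq",
        filter_=FILTER,
        destination="bigquery.googleapis.com/projects/my-project/datasets/app_logs",
    )
    sink.create()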
From there, you can run a query that counts the GET requests per day, and build a Data Studio dashboard to visualize your logs.
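For example, with the BigQuery Python client the daily count is a single GROUP BY. The table and column names below follow the usual Cloud Logging export schema for App Engine request logs, but check them against your dataset's actual schema:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Count GET requests per day for one referrer. The wildcard covers
    # date-sharded export tables; adjust names to your exported schema.
    query = """
    SELECT DATE(timestamp) AS day, COUNT(*) AS get_requests
    FROM `my-project.app_logs.appengine_googleapis_com_request_log_*`
    WHERE protopayload_requestlog.method = 'GET'
      AND protopayload_requestlog.referrer = 'https://fakewebsite.com/'
    GROUP BY day
    ORDER BY day
    """
    for row in client.query(query).result():
        print(row.day, row.get_requests)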
If only the daily count is needed, you can create a sink that streams the data directly into BigQuery. Since the data needs to be segregated by day, a better option when creating the sink is to use a partitioned table, which helps you in two ways:
You get a new partition for each day's logs.
Although BigQuery provides a free tier, if this data is not needed in the near future, storing it this way will reduce both your storage cost and your querying cost.
BigQuery also integrates with Data Studio: as soon as you query a table, you'll have the option to explore the result in Data Studio and generate reports as needed.
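As an illustration of the cost point, a query against an ingestion-time partitioned table can restrict itself to recent partitions through the _PARTITIONDATE pseudo-column, so only those days' data is scanned and billed (table name is a placeholder):

    from google.cloud import bigquery

    client = bigquery.Client()

    # Last 7 days of GET counts; the _PARTITIONDATE predicate prunes the
    # scan to one week of partitions instead of the whole table.
    query = """
    SELECT _PARTITIONDATE AS day, COUNT(*) AS get_requests
    FROM `my-project.app_logs.appengine_googleapis_com_request_log`
    WHERE _PARTITIONDATE >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY day
    ORDER BY day
    """
    for row in client.query(query).result():
        print(row.day, row.get_requests)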
My team and I are working on a full-stack application using ReactJS on the frontend and AWS Amplify on the backend. We are using AWS AppSync to query data in our DynamoDB tables (through GraphQL queries), Cognito for user authentication, and SES to send out emails to users. Basically, the user inputs some info (DynamoDB Table #1), that is matched against an opportunity database (DynamoDB Table #2), and the top 3 opportunities are shown to the user. If none are found, an email is sent informing the user that they will receive an email when opportunities are found.
Now for the question: is there a way to automatically query a DynamoDB table (say, once a day, or every time a new opportunity is added to DynamoDB Table #2) and send out emails with matching opportunities to the users who were waiting for them? I tried using Lambda triggers, but the only way I could make it work was by querying each row of DynamoDB Table #1 against DynamoDB Table #2, which is computationally infeasible at scale.
I am asking for advice on how to go about making that daily check, because I haven't been able to figure it out yet. Any responses are appreciated, and let me know if you need any additional information from my side. Thank you!
You could look into using DynamoDB Streams. When a new opportunity is added to DynamoDB, the stream triggers a Lambda function, which can then execute your business logic to match the opportunity with the appropriate users.
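A minimal sketch of that stream-triggered Lambda, assuming a category attribute links the two tables and that the users table ("Table #1") has a global secondary index on it (all table, index, and attribute names here are placeholders):

    import boto3
    from boto3.dynamodb.conditions import Key

    dynamodb = boto3.resource("dynamodb")
    ses = boto3.client("ses")
    users_table = dynamodb.Table("UserInputs")  # "Table #1"; name is a placeholder

    def handler(event, context):
        """Invoked by the DynamoDB stream on the opportunities table."""
        for record in event["Records"]:
            if record["eventName"] != "INSERT":
                continue
            # Stream images use DynamoDB's typed JSON, hence the ["S"] accessors.
            new = record["dynamodb"]["NewImage"]
            category = new["category"]["S"]  # assumed matching attribute

            # Query only the users waiting on this category via the GSI,
            # instead of scanning the whole users table.
            waiting = users_table.query(
                IndexName="category-index",
                KeyConditionExpression=Key("category").eq(category),
            )["Items"]

            for user in waiting:
                ses.send_email(
                    Source="noreply@example.com",
                    Destination={"ToAddresses": [user["email"]]},
                    Message={
                        "Subject": {"Data": "A new opportunity matches your profile"},
                        "Body": {"Text": {"Data": f"New opportunity: {new['title']['S']}"}},
                    },
                )

This inverts the expensive per-row comparison: each new opportunity triggers one indexed query rather than a nightly scan of both tables.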
Here is the problem:
I have a tenant with 50,000 users. Every day I need to pull that user list to see what has changed: which users were added or removed, and what each user's mySite URL is.
I can get some general information calling /users but, I need each user's mySite. The only way I have found to retrieve that is to call /users/userId?$select=mySite.
This implies I must make 50k calls and I then encounter throttling issues.
Is there a way through Microsoft Graph (or some other mechanism) to pull the user data, including mySite efficiently?
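Even batching the per-user calls with Graph's JSON batching endpoint ($batch accepts up to 20 requests per POST) only reduces 50k calls to roughly 2,500 round trips, and the inner requests still count against throttling limits. Here is a sketch of that batched approach (token acquisition omitted):

    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    TOKEN = "..."  # acquired via MSAL or similar; placeholder

    def fetch_mysites(user_ids):
        """Fetch each user's mySite in JSON batches of 20 (Graph's batch limit)."""
        headers = {"Authorization": f"Bearer {TOKEN}"}
        results = {}
        for i in range(0, len(user_ids), 20):
            body = {"requests": [
                {"id": str(n), "method": "GET",
                 "url": f"/users/{uid}?$select=id,mySite"}
                for n, uid in enumerate(user_ids[i:i + 20])
            ]}
            resp = requests.post(f"{GRAPH}/$batch", json=body, headers=headers)
            resp.raise_for_status()
            for item in resp.json()["responses"]:
                if item["status"] == 200:
                    results[item["body"]["id"]] = item["body"].get("mySite")
                # 429 responses inside the batch should be retried after the
                # Retry-After interval they carry.
        return results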
We're using Cloudant as the remote database for our app. The database contains documents for each user of the app. When the app launches, we need to query the database for all the documents belonging to a user. What we found is that the CDTDatastore API only allows pulling the entire database, storing it inside the app, and then performing the query on the local copy. The initial pull to the local datastore takes about 10 seconds, and I imagine it will take longer as more users are added.
Is there a way I can save only part of the remote database to the local datastore? Or, are we using the wrong service for our app?
You can use a server-side replication filter function; you'll need to add information about your filter to the pull replicator. Note, however, that replication takes a performance hit when a filter function is used.
That being said, a common pattern is to use one database per user; however, this has other trade-offs, and it is something you should read up on. There is some information on the one-database-per-user pattern here.
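As a rough sketch of the server-side piece (the owner field and the database and design-document names are assumptions), the filter is a JavaScript function stored in a design document on the Cloudant database:

    import requests

    CLOUDANT = "https://ACCOUNT.cloudant.com"  # placeholder account URL
    AUTH = ("api_key", "api_secret")           # placeholder credentials
    DB = "app_data"

    # The filter passes only the documents belonging to the requested user;
    # the replicator supplies ?user=... as a query parameter.
    design_doc = {
        "_id": "_design/app",
        "filters": {
            "by_user": "function(doc, req) { return doc.owner === req.query.user; }"
        },
    }
    requests.put(f"{CLOUDANT}/{DB}/_design/app",
                 json=design_doc, auth=AUTH).raise_for_status()

On the client side, the pull replication is then pointed at this filter by name ("app/by_user") together with a parameters dictionary carrying the user value; see the CDTPullReplication documentation for the exact property names.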
I am working on a WinForms application that can create and send invoices. In addition to creating invoices locally, the app uses PayPal's Permissions API (and the Invoicing API) to allow the application to optionally send invoices on the client's behalf.
My question is as follows: what is the best method of keeping track of the current status of invoices sent using PayPal? The application needs to know the status of each invoice so it can update its records locally.
I am aware of PayPal's Instant Payment Notifications (IPN), although I am unsure how this would fit together with a WinForms application.
My initial thought was to use the PayPal Invoicing API to query the required information on an as-and-when, need-to-know basis. Additionally, a function running on a background thread could periodically retrieve information from the API and update records locally.
Am I failing to acknowledge a better solution?
IPN will not fit invoice payments when the invoice is created through the Permissions API, as there's no parameter to set the IPN URL in the Invoicing API. Your idea of calling GetInvoiceDetails to retrieve the invoice status on a need-to-know basis sounds like a good solution to me.
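A rough sketch of that background polling thread, assuming the classic Invoicing API's JSON interface for GetInvoiceDetails; the endpoint, header names, payload shape, and response field path are assumptions to verify against PayPal's documentation, and all credentials are placeholders:

    import threading
    import requests

    ENDPOINT = "https://svcs.paypal.com/Invoice/GetInvoiceDetails"  # assumed endpoint
    HEADERS = {
        # Placeholder API credentials; when acting on a client's behalf via
        # the Permissions API, an X-PAYPAL-AUTHORIZATION header derived from
        # the client's token is used instead of these three.
        "X-PAYPAL-SECURITY-USERID": "api_user",
        "X-PAYPAL-SECURITY-PASSWORD": "api_pass",
        "X-PAYPAL-SECURITY-SIGNATURE": "api_sig",
        "X-PAYPAL-APPLICATION-ID": "APP-XXXXXXXXXXXX",
        "X-PAYPAL-REQUEST-DATA-FORMAT": "JSON",
        "X-PAYPAL-RESPONSE-DATA-FORMAT": "JSON",
    }

    def poll_invoice_statuses(invoice_ids, update_local_record, interval_sec=300):
        """Fetch each invoice's status, update local records, then re-arm the timer."""
        for invoice_id in invoice_ids:
            resp = requests.post(ENDPOINT, headers=HEADERS, json={
                "invoiceID": invoice_id,
                "requestEnvelope": {"errorLanguage": "en_US"},
            })
            resp.raise_for_status()
            # Field path is an assumption; inspect the real response shape.
            status = resp.json().get("invoiceDetails", {}).get("status")
            update_local_record(invoice_id, status)
        # Re-arm the timer so the check keeps running in the background.
        threading.Timer(interval_sec, poll_invoice_statuses,
                        args=(invoice_ids, update_local_record, interval_sec)).start()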