Concurrency limitations for Salesforce API?

I am running this code that uses jsforce to interact with Salesforce API.
In two different steps, one step upserts into a collection, and the next step needs to find the object again in that collection. Occasionally the 2nd step fails, and according to my logs, in these failure cases the 2nd step was triggered before the 1st step (the upsert) had responded.
I suspect that Salesforce isn't allowing me to read that object while another operation is concurrently updating it. Is this expected behavior?

It turns out I can simply wait for the 1st request (the upsert) to respond before proceeding, or add an arbitrary delay before firing off the 2nd request, to work around this problem.
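For anyone hitting the same race, here is a minimal sketch of the first approach (awaiting the upsert before querying) with jsforce and async/await; the object name, external ID field, and values are placeholders rather than the ones from my actual code:

```typescript
import { Connection } from 'jsforce';

// Hypothetical example: 'Account' and 'My_External_Id__c' stand in for the
// real object and external ID field used in the actual flow.
async function upsertThenFind(conn: Connection): Promise<void> {
  // Step 1: upsert and wait for Salesforce to acknowledge the write.
  await conn
    .sobject('Account')
    .upsert({ My_External_Id__c: 'abc-123', Name: 'Acme' }, 'My_External_Id__c');

  // Step 2: only issue the follow-up query once the upsert has responded,
  // so the read can no longer race the write.
  const record = await conn
    .sobject('Account')
    .findOne({ My_External_Id__c: 'abc-123' }, 'Id, Name');

  console.log('Found:', record);
}
```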
Marking this as solved as I've filed the question in the library's repository: https://github.com/jsforce/jsforce/issues/1177

Related

Specifying trigger conditions to reduce API calls as opposed to using a condition in the flow itself

I am using an automated flow on a SharePoint list with the "item created or modified" trigger. The flow will trigger a logic app when certain conditions are met. I've tested two different solutions:
Using a condition in the flow
This is the simplest solution and it works well. However, this means that the flow still runs even when the conditions are not met.
Using trigger conditions
This also works well, and the logic app is not run unless the conditions are met.
Does the approach with trigger conditions reduce the API call count, compared to: When modified > Check values > Do or don't do stuff?
Even when the workflow doesn't get triggered, I believe the polling request made by the trigger is still billed, as mentioned in the pricing model and usage metering documentation.
But even so, you will reduce the action calls made for checking the conditions within your logic app, and you will have a cleaner workflow run history.

Getting Error Pipe Notifications bind failure "Bucket already bound from another Snowflake account"

I am getting the error: Pipe Notifications bind failure "Bucket already bound from another Snowflake account"
We have 2 accounts, so I removed the bucket references from one account, but I am still getting this error. I have an S3 Integration setup.
Do I need to re-do the integration to get this working properly? I am unable to create additional / new transforms on this pipe.
Does this require hands-on help from Snowflake to fix?
Thanks!
Someone (bstora) answered this yesterday with a correct answer; I'm not sure why it's been deleted, but it was 100% correct.
So, to rewrite what they wrote, if this happens to you then you will need to reach out to Snowflake Support to have them determine the correct course of action. There is currently no action you can take as a user to correct or fix this.
This page will show you how to create a support ticket, if you haven't created one in the past.
https://community.snowflake.com/s/article/How-To-Submit-a-Support-Case-in-Snowflake-Lodge

How to implement wait() without passing the element in selenium java

I have a scenario in my project where, when I upload a file, I need to wait for the file to be processed in the backend. Based on that, success or failure is displayed.
Currently I am using Thread.sleep() as there is no element I could wait for.
Can you please suggest a way to handle this?
Thank You,
This requires either a flag in the DB that states the file was processed successfully.
If you have such a thing, then connect to the DB using JDBC and poll the record until the flag changes (or until the record appears).
Another way is an API response; it may need the developers to add a key to the API response, which you can validate using a Java library like rest-assured.
Third, if you can see the file in the UI after processing completes, then loop until the element with the file name is shown in the UI; make sure the loop breaks after a particular time period, otherwise it could run indefinitely.
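Here is a rough sketch of that third option: a bounded wait for the element that shows the file name. For brevity the sketch uses the Node selenium-webdriver bindings in TypeScript; the equivalent in Selenium Java is a WebDriverWait with ExpectedConditions.visibilityOfElementLocated and an explicit timeout. The locator and timeout below are assumptions for illustration:

```typescript
import { By, until, WebDriver } from 'selenium-webdriver';

// Wait until an element containing the uploaded file's name appears,
// but give up after a fixed timeout so the wait cannot run forever.
async function waitForProcessedFile(driver: WebDriver, fileName: string): Promise<void> {
  const timeoutMs = 60_000; // assumed upper bound for backend processing
  await driver.wait(
    until.elementLocated(By.xpath(`//*[contains(text(), '${fileName}')]`)),
    timeoutMs,
    `File "${fileName}" was not shown in the UI within ${timeoutMs} ms`
  );
}
```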

Rollback a set of actions in Azure Logic Apps

I have a workflow like this as an Azure Logic App:
Read from Azure Table -> Process it in a Function -> Send Data to SQL Server -> Send an email
Currently we can check if the previous action ended with an error and based on that we do not execute any further steps.
Is it possible in Logic Apps to perform a rollback of actions when one of the steps goes wrong? Meaning, can we undo all the steps back to the beginning when something goes wrong in step 3, for example?
Thanks in advance.
Regards.
Currently there is no support for rollback in Logic Apps (as they are not transactional).
Note that Logic Apps provide out-of-the-box resiliency against intermittent errors (retry strategies), which should minimize execution failures.
You can add custom handling of errors (e.g. following your example, if something goes wrong in step 3, you can explicitly handle the failure and add rollback steps). Take a look at https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-exception-handling for details.
Depending on whether the steps in your logic app are idempotent, you can also make use of the resubmit feature. It allows you to re-trigger the run with the same trigger contents with which the original run instance was invoked. Take a look at https://www.codit.eu/blog/2017/02/logic-apps-resubmit-considerations/ for a good overview of this feature.

will gatling actually perform the operation or will it check only the urls' response time?

I have a Gatling test for an application that answers a survey; upon answering the survey, the application identifies possible answers that may pose a risk and creates what we call riskareas. These riskareas are normally created in the background as soon as the survey answering is finished. My question is this: I have a Gatling test with ten users who go and answer the survey and log out, and I used the recorder to record the test; now after these ten users are finished, I do not see any riskareas being created in the application. Am I missing something? Should the survey really be answered by the Gatling user (as it is in Selenium), or is it just the URLs that the Gatling test will touch?
I am new to Gatling, please help.
Gatling should be indistinguishable from a user in a web browser (or Selenium) as far as the server is concerned, so the end result should be exactly the same as if you'd gone through the process yourself. However, writing a Gatling script is a little more work than writing a Selenium script.
For performance reasons, Gatling operates at a lower level than Selenium. Gatling works with the actual data that is sent and received from the server (i.e, the actual GETs and POSTs sent to the server), rather than with user-level interactions (such as clicking links and filling forms).
The recorder will generally produce a relatively "dumb" script. It records the exact data that was sent to the server, and makes no attempt to account for things that may change from run to run. For example, the web application you are testing might have hidden form fields that contain session information, or the link addresses might contain a unique identifier or a session id.
This means that your script may not be doing what you think it's doing.
To debug the script, the first thing to do is to add checks on each of the requests, to validate that you are getting the response you expect (for example, check that when you submit page 1 of the survey, you are taken to page 2 - check for something that you'd only expect to find on page 2, like a specific question).
Once you know which requests are failing, look at what data was sent with the request, and try to figure out where it came from. You will probably find that there are session ids, view state, or similar, that must be extracted from the previous page.
It will help to enable request and response logging, as per the documentation.
To simplify testing of web apps, we wrote some helper functions to allow tests to be written in a more Selenium-like way. Once you understand what your application is doing, you may find that it simplifies scripting for you too. However, understanding why your current script doesn't work the way you expect should be your first step.
