I have a few Azure Logic Apps that all have the same structure: run on an HTTP trigger > call a stored procedure > parse the JSON result > call a REST API for each record returned by the stored procedure.
The only difference between the apps is the name of the stored procedure and the fields in the result set.
I'd like to create a single Logic App (with a parameter in the HTTP call to define which stored procedure to execute), but I got stuck at the Parse JSON step because it needs a fixed schema.
Is there a way to accomplish this?
The whole point of Parse JSON is to retrieve the values inside the JSON. If the JSON schema is going to vary, you can either make a combined JSON schema that satisfies all the shapes the flow can receive, or create parallel branches, each with a schema of its own.
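For the combined-schema option, a minimal sketch (the field names here are made up) is to list every field any of the stored procedures can return and mark none of them as required, so each result shape validates against the one schema:

{
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "OrderId": { "type": "integer" },
            "OrderDate": { "type": "string" },
            "CustomerId": { "type": "integer" },
            "InvoiceNo": { "type": "string" }
        },
        "required": []
    }
}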
Here is a screenshot of how we can add a parallel branch.
One of the best ways to write the schema is to take the previous step's JSON and add it in "Use sample payload to generate schema", so that it automatically generates a schema for the flow.
Here is my Logic App
For the schema at the Parse JSON step, I am taking the JSON from the Insert Row action and using it in Parse JSON.
I am new to Azure Logic Apps and am trying to develop our first logic app in the designer. The goal is simple - connect to our Salesforce production instance and extract data from an object, then write that data to a delimited file in our storage container, from where the data warehouse can ingest the file.
The first steps are pretty straightforward - I added a recurrence trigger and connected a Salesforce Get records action to it. I chose the User object in Salesforce for now because it's got minimal data.
Executed the logic app and it runs fine. I can see the data extracted is correct.
However, now I am lost as to how to create a delimited file in the container. I obviously could not connect the Create blob action directly to the Get records action. Based on some documentation I tried creating a CSV table first, but got an error saying that an array is expected while the Get records action returns an object.
If anyone has faced something similar and has any pointers or documentation, I would appreciate it. Thanks in advance for any help.
Regards
Sid
The Get records output will have two properties, '#odata.context' and 'value'. The 'value' property holds the actual array of records, so that is what you need to use.
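In the Create CSV table action, the From input would then be an expression along these lines (the exact name inside body() depends on what your Get records step is called):

body('Get_records')?['value']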
Please refer to the screenshot below of a logic app that works; it fetches records from the User object and stores them as a delimited CSV file.
Here is the output screenshot for Create CSV table:
We have a .NET Core web application where some data is generated dynamically (some mathematical calculations) and displayed on the screen.
When the user clicks "Print", we need to convert all the data displayed into a PDF.
Currently, we are storing the data in a DB and sending the PrimaryKey as a parameter to the SSRS report in the URL. And then SSRS calls another REST API to access that data.
As described here: Pass a Report Parameter Within a URL
Is there any possibility for us to skip the Storing data in DB and directly pass the dataset to the SSRS server in a URL, and generate a PDF?
Unfortunately, you can only send GET variables to the report, and you can't send a lot of data that way because GET parameters have a length restriction. If you have a lot of information, saving it in the DB and printing from there is the best way; if not, you can put the big string in a GET variable and parse it in the report.
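For the small-data case, a rough sketch of building such a URL (the report path and the DataPayload parameter name are made up; rs:Command=Render and rs:Format=PDF are the standard SSRS URL access arguments):

import json
import urllib.parse

REPORT_SERVER = "http://reportserver/ReportServer"   # placeholder server
REPORT_PATH = "/Reports/CalculationSummary"          # placeholder report path

def build_pdf_url(data):
    # Serialize the whole dataset into one string parameter; the report has to parse it itself.
    payload = urllib.parse.quote(json.dumps(data))
    return "%s?%s&DataPayload=%s&rs:Command=Render&rs:Format=PDF" % (
        REPORT_SERVER, REPORT_PATH, payload)

print(build_pdf_url({"total": 42.5, "items": [1, 2, 3]}))

Keep in mind that browsers and web servers commonly cap URL length at a few thousand characters, which is why the DB route is safer for anything but small payloads.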
I have a specific field that I am trying to find. The salesforce instance I am in has hundreds of tables/objects so I can't look through them manually.
I also only have read-only access, so I can't run an Apex script or create objects. I am using an API to access the database and store the data outside of Salesforce.
What I need is to find the object/table that this field is stored in so I can write an SOQL query to get the field's values. Any ideas?
The easiest way is with Workbench.
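Since you're already hitting the org through an API, a read-only alternative is to walk the describe metadata and look for the field name. A rough sketch against the REST API (instance URL, API version, and token are placeholders; the field name is whatever you are hunting for):

import requests

INSTANCE = "https://yourInstance.salesforce.com"        # placeholder
API_VERSION = "v52.0"                                   # placeholder
HEADERS = {"Authorization": "Bearer <access_token>"}    # placeholder OAuth/session token
TARGET_FIELD = "My_Field__c"                            # the field you are looking for

# List every object in the org, then describe each one and check its field names.
objects = requests.get("%s/services/data/%s/sobjects/" % (INSTANCE, API_VERSION),
                       headers=HEADERS).json()["sobjects"]

for obj in objects:
    describe = requests.get(INSTANCE + obj["urls"]["describe"], headers=HEADERS).json()
    if any(f["name"] == TARGET_FIELD for f in describe["fields"]):
        print("%s found on %s" % (TARGET_FIELD, obj["name"]))

Note that this issues one describe call per object, so on an org with hundreds of objects it will burn a few hundred API calls.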
I am writing a web app and I am trying to improve the performance of search/displaying results. I am relatively new to programming this sort of thing, so I apologize in advance if these are simple questions/concepts.
Right now I have a database of ~20,000 sites, each with properties, and I have a search form that (for now) just asks the database to pull all sites within a set distance (for this example, say 50km). I have put the data into an index and use the Search API to find sites.
I am noticing that the database search takes ~2-3 seconds to:
1) Search the index
2) Get a list of key names (this is stored in the search index)
3) Using key names, pull from datastore (in a loop) and extract data properties to be displayed to the user
4) Transmit data to the user via jinja template variables
This is also only getting 20 results (the default maximum for a Search API query; I haven't implemented cursors here yet, although I will have to).
For whatever reason, it feels quite slow. I am wondering what websites do to make the process seem faster. Do they implement some kind of "asynchronous" search, where the page loads first while the search/data pulls are processed in the background and the results are then shown to the user?
Are there "standard" ways of performing searches here where the processing/loading feels seamless to the user?
Thanks.
edit
Would something like passing a "query ID" via the page, and then using AJAX to get the data from the datastore as JSON, work? Like... can App Engine redirect the user to the final page with only a "query ID", run the search in the meantime, and then, once the data is ready, pass the information to the user via JSON?
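Something like this is what I'm imagining - a rough sketch where the handler and helper names are just placeholders:

import json
import webapp2

class SiteDataHandler(webapp2.RequestHandler):
    # Called by the already-loaded page via AJAX; returns the results as JSON.
    def get(self):
        query_id = self.request.get('query_id')      # placeholder: identifies the saved search
        sites = run_search_and_fetch(query_id)        # placeholder helper: Search API + datastore fetch
        self.response.headers['Content-Type'] = 'application/json'
        self.response.write(json.dumps(sites))

app = webapp2.WSGIApplication([('/site-data', SiteDataHandler)])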
Make sure you are getting entities from the datastore in parallel. Since you already have the key names, you just have to pass your list of keys to the appropriate method.
For db:
MyModel.get_by_key_name(key_names)
For ndb:
ndb.get_multi([ndb.Key('MyModel', key_name) for key_name in key_names])
If you needed to do datastore queries, you could enable parallel fetches with the query.run (db) and query.fetch_async (ndb) methods.
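Putting the pieces together, a rough sketch of steps 1-4 with the batched fetch (the index name, model, and properties are placeholders, and it assumes the datastore key name is stored as the search document id):

from google.appengine.api import search
from google.appengine.ext import ndb

class Site(ndb.Model):                      # placeholder model
    name = ndb.StringProperty()
    location = ndb.GeoPtProperty()

def find_sites(query_string):
    # 1) Search the index (placeholder index name)
    results = search.Index(name='sites').search(query_string)

    # 2) Collect the key names stored in the search documents
    key_names = [doc.doc_id for doc in results]

    # 3) One batched RPC instead of a datastore get() per entity in a loop
    sites = ndb.get_multi([ndb.Key(Site, key_name) for key_name in key_names])

    # 4) Shape the data for the jinja template
    return [{'name': s.name, 'lat': s.location.lat, 'lon': s.location.lon}
            for s in sites if s is not None]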
I'm currently trying to back up the attachment files from old cases and emails in our Salesforce org through an automated process. I first tried to do this with the Bulk API, but sadly it doesn't allow me to export the Body column of attachments.
I did manage to pull this data out with the dataloader via command line (and the FileExporter tool). Now what I would like to do is export only those attachments that are attached to old emails or cases.
Is it possible to use a collection of IDs (preferably in a file) from those parent objects in the WHERE clause of the query in the beans file? If so, could somebody post an example?
Something along the lines of:
entry key="sfdc.extractionSOQL" value="SELECT Id, Body FROM Attachment WHERE id IN parentIdFile.csv"/>
Would be much appreciated!
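As far as I know, the extraction SOQL in the beans file has to be a literal query, so one workaround is to generate the IN list from the file yourself and paste (or template) it into the sfdc.extractionSOQL entry. A rough sketch, assuming the parent IDs sit in a one-column CSV called parentIds.csv:

import csv

# Read the parent record IDs exported from the Case/EmailMessage queries.
with open('parentIds.csv') as f:
    parent_ids = [row[0].strip() for row in csv.reader(f) if row and row[0].strip()]

id_list = ", ".join("'%s'" % pid for pid in parent_ids)
soql = "SELECT Id, Body FROM Attachment WHERE ParentId IN (%s)" % id_list

print(soql)  # goes into the sfdc.extractionSOQL value of the beans file

For a large number of parent IDs you may have to split the list over several extraction runs, since SOQL queries have a length limit.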