I am using the List secrets action to get all the secrets from Key Vault. I am only able to get the first few values because pagination is not working for this action. Is there any other way I can get all the secret values from Logic Apps? Right now I can only retrieve the first page of values, and as per Microsoft there is a limit of a maximum of 25 items.
I've managed to recreate the problem in my own tenant and yes, it is indeed an issue. There should be a paging option in the settings but there's not.
To get around this, I suggest calling the REST APIs directly. The only consideration is how you authenticate, and if it were me, I'd be using a managed identity to do so.
I've mocked up a small example for you ...
The steps are ...
Create a variable that stores the nextLink property. Initialise it with the URL for the first call to the REST API; it looks something like this ... https://my-test-kv.vault.azure.net/secrets?maxresults=25&api-version=7.3 ... and comes straight out of the documentation ... https://learn.microsoft.com/en-us/rest/api/keyvault/secrets/get-secrets/get-secrets?tabs=HTTP
In the HTTP call as shown, use the Next Link variable, given it will contain the URL for each page. As for authentication, my suggestion is to use a managed identity. If you're unsure how to do that, sorry, but it's a whole other question. In simple terms, go to the Identity tab on the Logic App and switch the system-assigned managed identity on. You'll then need to assign it access to the Key Vault itself (the Key Vault Secrets User or Key Vault Secrets Officer role will do the job).
Next, create an Until action and set the left-hand side of the condition to the Next Link variable, with the "is equal to" value being the expression string(''), which checks for a blank string (that's how I like to do it).
Finally, set the Next Link variable to the nextLink property in the response from the last call; the expression is ... body('HTTP')?['nextLink']
From here, you can choose what you do with the output. I'd suggest creating an array and appending all of the entries to it so you can process them later. I haven't taken the answer that far, given I don't know exactly how you want to process the results.
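If it helps to see the shape of that loop outside of the designer, here's a rough Python sketch of the same idea (purely illustrative: the vault URL is the example one above, and DefaultAzureCredential from azure-identity is standing in for the managed identity):

import requests
from azure.identity import DefaultAzureCredential

# Get a token for the Key Vault data plane (the stand-in for the managed identity).
credential = DefaultAzureCredential()
token = credential.get_token("https://vault.azure.net/.default").token
headers = {"Authorization": f"Bearer {token}"}

# Same idea as the Next Link variable: start with the initial URL ...
next_link = "https://my-test-kv.vault.azure.net/secrets?maxresults=25&api-version=7.3"
all_secrets = []

# ... and keep calling until nextLink comes back empty (the Until condition).
while next_link:
    response = requests.get(next_link, headers=headers)
    response.raise_for_status()
    body = response.json()
    all_secrets.extend(body.get("value", []))   # append this page's entries to an array
    next_link = body.get("nextLink") or ""      # body('HTTP')?['nextLink']

print(f"Fetched {len(all_secrets)} secret items")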
That should get you across the line.
I have a Case record type which has a lookup to an Opportunity, which in turn has a lookup to Account. When the query is run in Workbench it works fine and I can get the account; however, when it's run in Apex only the Opportunity ID is returned, not the Account. I am new to Salesforce, but this is driving me crazy - the fact that the query tool and the code would return different information is beyond me. Any advice would be great!
SELECT RelatedOpportunity__r.Account.Name
FROM Case
From Apex:
(Case:{RelatedOpportunity__c=0061k00000B5GqiAAF, Id=5001k00000FwaivAAB})
Everything is queried the same way; write your code as normal.
System.debug(myCase) focuses on what's important - what's actually on "this" record. It doesn't go "up" via lookups or "down" via subqueries (if you have any); that would spam the debug logs too much.
(Well, it's not even System.debug's fault. It's supposed to emit strings; a Case is an object, not a string, so toString() is called on it.)
You can System.debug(myCase.RelatedOpportunity__r.Account); and you'll get your name. Or play with System.debug(JSON.serializePretty(myCase));
Even better - try to learn the right way from the start. Checking your code's flow by spamming debug statements is passé. There's no true debugging of Apex (you can't attach an interactive debugger like with .NET, or add breakpoints and pause execution like in JavaScript), but all the cool kids use the Apex Replay Debugger: https://trailhead.salesforce.com/en/content/learn/projects/find-and-fix-bugs-with-apex-replay-debugger/apex-replay-debugger-debug-your-code - check it out.
I have a WPF application that uses entity framework. I am going to be implementing a repository pattern to make interactions with EF simple and more testable. Multiple clients can use this application and connect to the same database and do CRUD operations. I am trying to think of a way to synchronize clients repositories when one makes a change to the database. Could anyone give me some direction on how one would solve this type of issue, and some possible patterns that would be beneficial for this type of problem?
I would be very open to any information/books on how to keep clients synchronized, and even be alerted to things other clients are doing (the only thing I could think of was having a server process running that passes messages around). Thank you.
The easiest way by far to keep every client UI up to date is simply to refresh the data every so often. If it's really that important, you can set a DispatcherTimer to tick every minute, at which point you can fetch the latest version of the data that is being displayed.
Clearly, I'm not suggesting that you refresh an item that is being edited, but if you get the fresh data, you can certainly compare collections with what's being displayed currently. Rather than just replacing the old collection items with the new, you can be more user friendly and just add the new ones, remove the deleted ones and update the newer ones.
You could even detect whether an item being currently edited has been saved by another user since the current user opened it and alert them to the fact. So rather than concentrating on some system to track all data changes, you should put your effort into being able to detect changes between two sets of data and then seamlessly integrating it into the current UI state.
UPDATE >>>
There is absolutely no benefit in holding a complete set of data in your application (or repository). In fact, you may well find that it has detrimental effects, due to the extra RAM requirements. If you are polling data every few minutes, it will always be up to date anyway.
So rather than asking for all of the data all of the time, just ask for what the user wants to see (dependent on which view they are currently in) and update it every now and then. I do this by simply fetching the same data that the view requires when it is first opened. I wrote some methods that compare every property of every item with their older counterparts in the UI and switch old for new.
Think of the Equals method... You could do something like this:
public override bool Equals(Release otherRelease)
{
    // Assumes the base class exposes its own Equals(Release) overload that covers the inherited properties.
    return base.Equals(otherRelease) && Title == otherRelease.Title &&
           Artist.Equals(otherRelease.Artist) && Artists.Equals(otherRelease.Artists);
}
(Don't actually use the Equals method though, or you'll run into problems later). And then something like this:
if (!oldRelease.Equals(newRelease)) oldRelease.UpdatePropertyValues(newRelease);
And/Or this:
if (!oldReleases.Contains(newRelease)) oldReleases.Add(newRelease);
I'm guessing that you get the picture now.
How are folks generally handling the whitelisting of foreign key values? Let's ignore the use case of an associated user record which brings an additional set of issues and stick to a fairly benign scenario: A Task belongs to a Project. When I create the task, I want to create it with its project_id value, but I don't want that value to be editable. The property is passed by a hidden field in the shared form.
I know I could just unset that property in the controller before calling save() in the edit action, but I was wondering whether anyone had a better solution. I've used/tried several, but all are laborious or less "universal" than I'd like.
Does anyone have a solution that they really like to solve this particular problem?
Thanks.
I handle this manually as well. The process is something like this.
Load the object and show the edit screen to the user.
When the user submits, take the primary ID and load the object again. Check ownership.
Have a whitelist of user-editable fields, loop through those keys to populate the freshly loaded object, and leave everything else alone.
Save.
You could move this into some kind of before-save hook or behaviour, I would say. But this seems to be the best practice, given the equivalent Ruby on Rails mass-assignment feature (we all know what happened with GitHub).
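If a concrete sketch of steps 3 and 4 helps, here's the idea in Python (the field and column names are made up; translate it into whatever hooks or behaviours your framework gives you):

# Fields the user is allowed to change from the edit form; everything else
# (project_id, ownership, timestamps) is left exactly as it was loaded.
EDITABLE_FIELDS = {"title", "description", "due_date"}   # hypothetical field names

def apply_edit(task, submitted_data, current_user):
    # Step 2: the record was re-loaded server-side, so check ownership first.
    if task["owner_id"] != current_user["id"]:
        raise PermissionError("Not your task")

    # Step 3: copy only whitelisted keys onto the loaded record.
    for field in EDITABLE_FIELDS:
        if field in submitted_data:
            task[field] = submitted_data[field]

    # Step 4: save; project_id stays whatever it already was in the database.
    return task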
I'm putting together a small web app that writes to a database (Perl CGI & MySQL). The CGI script takes some info from a form and writes it to a database. I notice, however, that if I hit 'Reload' or 'Back' on the web browser, it'll write the data to the database again. I don't want this.
What is the best way to protect against the data being re-written in this case?
Do not use GET requests to make modifications! Be RESTful; use POST (or PUT) instead, and the browser will warn the user if they try to reload the request. Redirecting (using an HTTP redirect) to a receipt page via a normal GET request after a POST/PUT request makes it possible to refresh the page without being warned about resubmitting.
EDIT:
I assume the user is logged in somehow, and therefore you already have some way of tracking the user, e.g. a session or similar.
You could generate a timestamp (or a random hash, etc.) when displaying the form, storing it both as a hidden field (right beside the anti cross-site-request-forgery token I'm sure you already have there) and in a session variable (which is stored safely on your server). When you receive the POST/PUT request for this form, check that the timestamp is the same as the one in the session. If it is, set the session value to something variable and hard to guess (the timestamp concatenated with some secret string, for instance), and then save the form data. If someone repeats the request now, you won't find the same value in the session variable and can deny the request.
The problem with doing this is that the form becomes invalid if the user clicks back to change something, which might be a bit too harsh unless it's money you're updating. So if your problem is "stupid" users who refresh and click the back button, thus accidentally reposting something, just using POST will remind them not to do that, and redirecting will make it less likely. If your problem is malicious users, you should use a timestamp too, although it will confuse users sometimes; and if users are deliberately posting the same message over and over, you probably need to find a way to ban them. Using POST, having a timestamp, and even doing a full comparison against the whole database to check for duplicate posts won't help at all if a malicious user just writes a script to load the form and submit random garbage automatically. (But cross-site-request protection makes that a lot harder.)
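To make that concrete, here's a small sketch of the token-in-session idea in Python (the session dict, form dict and save_to_database call are placeholders for your own CGI/session plumbing):

import secrets

# When rendering the form: create a one-time token, store it in the session
# and emit it as a hidden field next to your CSRF token.
def render_token_field(session):
    token = secrets.token_hex(16)
    session["form_token"] = token
    return f'<input type="hidden" name="form_token" value="{token}">'

# When handling the POST: only accept the submission if the token matches,
# then rotate it so a refresh or back-button resubmit is rejected.
def handle_post(session, form, save_to_database):
    if form.get("form_token") != session.get("form_token"):
        return "duplicate or stale submission - ignored"
    session["form_token"] = secrets.token_hex(16)   # burn the old token
    save_to_database(form)                          # your existing insert logic
    return "saved"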
Using a POST request will cause the browser to try to prevent the user from submitting the same request again, but I'd recommend using session-based transaction tracking of some kind, so that if the user ignores the warning from the browser and resubmits the request, your application prevents duplicate changes to the database. You could include a hidden input in the submission form with its value set to a cryptographic hash, and record that hash once the request has been submitted and processed without error.
I find it handy to track the number of form submissions the user has performed in their session. When rendering the form, I create a hidden field that contains that number. If the user then resubmits the form by pressing the back button, it'll submit the old number, and the server can tell the user has already submitted the form by comparing what's in the session with what the form is sending.
Just my 2 cents.
If you aren't already using some sort of session management (which would let you note and track form submissions), a simple solution is to include a unique identifier in the form (as a hidden element) that is either part of the main DB transaction itself or tracked in a separate DB table. Then, when a form is submitted, you check the unique ID to see if it has already been processed. Each time the form itself is rendered, you just have to make sure it carries a fresh unique ID.
First of all, you can't trust the browser, so any talk about using POST rather than GET is mostly nerd flim-flam. Yes, the client might get a warning along the lines of "Did you mean to resubmit this data again?", but they're quite possibly going to say "Yes, now leave me alone, stupid computer".
And rightly so: if you don't want duplicate submissions, then it's your problem to solve, not the user's.
You presumably have some idea what it means to be a duplicate submission. Maybe it's the same IP within a few seconds, maybe it's the same title of a blog post or a URL that has been submitted recently. Maybe it's a combination of values - e.g. IP address, email address and subject heading of a contact form submission. Either way, if you've manually spotted some duplicates in your data, you should be able to find a way of programmatically identifying a duplicate at the time of submission, and either flagging it for manual approval (if you're not certain), or just telling the submitter "Have you double-clicked?" (If the information isn't amazingly confidential, you could present the existing record you have for them and say "Is this what you meant to send us? If so, you've already done it - hooray")
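As a rough illustration of that kind of check (in Python with a DB-API style connection, and with made-up table and column names), it's just a lookup before the insert:

from datetime import datetime, timedelta

def is_duplicate(db, submission, window_seconds=60):
    # Same email + same subject within the last minute counts as a duplicate;
    # swap in whatever combination of columns defines "duplicate" for you.
    cutoff = datetime.utcnow() - timedelta(seconds=window_seconds)
    cursor = db.cursor()
    cursor.execute(
        "SELECT COUNT(*) FROM submissions "
        "WHERE email = %s AND subject = %s AND created_at > %s",
        (submission["email"], submission["subject"], cutoff),
    )
    (count,) = cursor.fetchone()
    return count > 0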
I'd not rely on POST warnings from the browser. Users just click OK to make messages go away.
Any time you have a request that needs to be one-time only, e.g. 'make a payment', send a unique token down that gets submitted back with the request. Throw the token out after it comes back, so you can tell when something isn't a valid submission (anything with a token that is no longer 'active'). Expire active tokens after X amount of time, e.g. when the user session ends.
(Alternatively, track the tokens that have come back; if you have received one before, the request is invalid.)
Do a POST every time you alter data, but never return an HTML response from a POST... instead return a redirect to a GET that retrieves the updated data as a confirmation page. That way, there is no worry about them refreshing the page. If they refresh, all that happens is another retrieve, never a data-altering action.
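If a concrete shape helps, here's a minimal Post/Redirect/Get sketch in Python/Flask (rather than Perl CGI; save_item stands in for your own insert code):

from flask import Flask, request, redirect, url_for

app = Flask(__name__)

def save_item(form):
    pass   # placeholder for the real data-altering work

@app.route("/items", methods=["POST"])
def create_item():
    save_item(request.form)
    # Never render HTML from the POST itself: a 303 sends the browser off to a GET.
    return redirect(url_for("confirmation"), code=303)

@app.route("/confirmation")
def confirmation():
    # Refreshing this page just re-runs a harmless GET.
    return "Saved - thanks!"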