I am trying to pass an object from one zul page to another, where:
Both zul pages have their own composers.
I want to set the value of the object in the first zul's composer,
and get it in the second zul's composer.
I have tried Executions.sendRedirect(), but it clears the value of the object, and Executions.forward() throws an exception saying "use sendRedirect instead to process user request".
Your problem is scope.
In ZK, like other web application frameworks, you have access to a number of different scopes: webapp, desktop, page, session, request, etc. If you have two different pages served from two different URLs, those will have distinct request scopes.
When moving from one request to another, you can pass information on the request URL:
Executions.sendRedirect("page2.zul?myId=1234");
Then, in the composer on page2, you can retrieve this value from the Execution:
Execution execution = Executions.getCurrent();
String myId = execution.getParameter("myId"); // "1234"
This is standard HTTP query string business so you're limited to text and numbers here. For passing things like database ids though, this can be quite convenient.
A more robust solution might be to leverage some of ZK's other scopes. For example, you could put your object in the user's Session scope. Refer to my answer to ZK session variable with a menu for implementation details. Note that when using the Session, you are no longer limited to text but can store an actual Object.
I am using the List secrets activity to get all the secrets from a key vault. I am only able to get the first few values, as pagination is not working for this activity. Is there any other way I can get all the secret values from Logic Apps? Right now I am only able to get the first page of values, and as per Microsoft there is a limitation of a maximum of 25 items.
I've managed to recreate the problem in my own tenant and yes, it is indeed an issue. There should be a paging option in the settings but there's not.
To get around this, I suggest calling the REST APIs directly. The only consideration is how you authenticate, and if it were me, I'd be using a managed identity to do so.
I've mocked up a small example for you ...
The steps are ...
Create a variable that stores the nextLink property. Initialise it with the URL for the first call to the REST API; it looks something like this ... https://my-test-kv.vault.azure.net/secrets?maxresults=25&api-version=7.3 ... and is straight out of the doco ... https://learn.microsoft.com/en-us/rest/api/keyvault/secrets/get-secrets/get-secrets?tabs=HTTP
In the HTTP call as shown, use the Next Link variable, given that it will contain the URL. As for authentication, my suggestion is to use a managed identity. If you're unsure how to do that, sorry, but it's a whole other question. In simple terms, go to the Identity tab on the Logic App and switch the system-assigned status to On. You'll then need to assign it access in the Key Vault itself (Key Vault Secrets User or Officer will do the job).
Next, create an Until action and set the left-hand side to the Next Link variable, with the "is equal to" value being the expression string(''), which checks for a blank string (that's how I like to do it).
Finally, set the Next Link variable to the nextLink property in the response from the last call; the expression is ... body('HTTP')?['nextLink']
From here, you can choose what you do with the output. I'd suggest creating an array and appending all of the entries to it so you can process them later. I haven't taken the answer that far given I don't know exactly how you want to process the results.
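If it helps to see the same loop outside of a Logic App, here's a minimal sketch in JavaScript (Node 18+ for the built-in fetch). It assumes you've already acquired a bearer token for the vault, e.g. via a managed identity, and reuses the made-up vault name from above:

// Page through the Key Vault secrets list by following nextLink
// until it is absent, the same shape as the Until loop above.
// Note: this lists secret *items* (ids/attributes); reading each
// secret's value is a separate GET per secret.
async function listAllSecrets(token) {
    let url = 'https://my-test-kv.vault.azure.net/secrets?maxresults=25&api-version=7.3';
    const items = [];
    while (url) {
        const res = await fetch(url, { headers: { Authorization: 'Bearer ' + token } });
        if (!res.ok) throw new Error('Key Vault returned ' + res.status);
        const body = await res.json();
        items.push(...body.value);    // append this page's entries
        url = body.nextLink || null;  // null ends the loop, like string('')
    }
    return items;
}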
That should get you across the line.
I want to get the type of the page, i.e. whether it is a PLP or a PDP.
You can determine the "type" of the page in a couple of different ways within SFCC's controllers or backend logic (server-executed JavaScript), as well as in some other ways when trying to detect this condition in frontend logic (browser-executed JavaScript). Additionally, for both the frontend and backend solutions, the approach may differ depending on the implementation you're working on. Depending on its age and who implemented it, you may be working in a SiteGenesis, MFRA, or SFRA context, or possibly even a 100% custom implementation. Therefore, please accept this as a very general answer given that the question currently has absolutely no context whatsoever.
Backend Context
In all backend contexts, the solution relies upon the ClickStream data store of a session. Be conscious that this may not be 100% accurate: there could be race conditions wherein users browsing in multiple tabs visit pages after the current request is processed, and it is unknown whether the last entry in the ClickStream is 'newer' than the request being processed currently.
Why the pipelineName attribute isn't present on the Request class is a bit confusing to me. Why wouldn't it be? Regardless, we must do what we must do.
SFRA
// in the context of a Controller method
req.session.clickStream.last.pipelineName // returns string like: 'Pipeline-Name'
All other contexts (Site Genesis & MFRA)
// session is a global variable in SFCC. It should be accessible
// in all backend contexts with the exception of certain hook
// contexts.
session.clickStream.last.pipelineName // returns string like: 'Pipeline-Name'
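To turn that into the PLP/PDP answer specifically, you can compare the pipeline name against the controllers that render those pages. A rough sketch, assuming the conventional 'Search-Show' and 'Product-Show' names (a custom implementation may use different ones):

// Map the last click's pipeline/controller name to a page type.
// 'Product-Show' renders the PDP and 'Search-Show' the PLP in
// standard SiteGenesis/SFRA; adjust for your implementation.
function getPageType(pipelineName) {
    if (pipelineName === 'Product-Show') return 'PDP';
    if (pipelineName === 'Search-Show') return 'PLP';
    return 'other';
}

var pageType = getPageType(session.clickStream.last.pipelineName);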
Frontend Context
The frontend context, which represents JavaScript running in the browser, relies upon variables that have been set by the server within the browser's execution context: specifically, variables that are accessible within the context of the app.js monolithic module in the case of SiteGenesis, or within SFRA 'client' modules.
SFRA
In SFRA, there's a kind of 'global' JS module, which is what you find in main.js, and then there are page-type-specific JS modules like productDetail.js and search.js. In the context of SFRA, I'd recommend adding your modules/scripts to one of those page-type-specific modules in order to execute different logic based on the page type.
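If you do need an explicit check from a shared module such as main.js, the standard SFRA page layout renders the controller action into the DOM, so something like this sketch should work (it assumes your templates kept the stock data-action attribute on the .page element):

// <div class="page" data-action="Product-Show" ...> is emitted by
// the stock SFRA layout template; read it back in client code.
var action = $('.page').data('action');
if (action === 'Product-Show') {
    // PDP-specific behavior
} else if (action === 'Search-Show') {
    // PLP-specific behavior
}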
Site Genesis
Within the context of JavaScript executing on the page the following expression will typically give sufficient values to adjust behavior based on page type. Note, you may need to have this expression execute in the context of the closure within app.js in order for it to work.
app.page.type // equals 'product' or 'search' for example (IIRC)
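For example (the exact type strings depend on how your implementation sets app.page.type, so treat these as illustrative):

// Inside the app.js closure where `app` is in scope
if (app.page.type === 'product') {
    // PDP-specific behavior
} else if (app.page.type === 'search') {
    // PLP-specific behavior
}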
I'm creating a design for a Twitter application to practice DDD. My domain model looks like this:
The user and tweet are marked blue to indicate that they are aggregate roots. Between the user and the tweet I want a bounded context boundary; each will run in its respective microservice (auth and tweet).
To reference which user has created a tweet, but not run into a self-referencing loop, I have created the UserInfo object. The UserInfo object is created via events when a new user is created. It stores only the information the Tweet microservice will need of the user.
When I create a tweet, I only provide the user id and the relevant fields to the tweet; with that user id I want to be able to retrieve the UserInfo object, via id reference, to use it in the various child objects, such as Mentions and Poster.
The issue I run into is persistence. At first glance I thought, "Just provide the UserInfo object in the tweet constructor and it's done; all the child aggregates have access to it." But it's a bit harder for the Mention class, since a Mention will contain a dynamic username like "#anyuser". To validate whether anyuser exists as a UserInfo object, I need to query the database. However, I don't know who is mentioned before the tweet's content has been parsed, and that logic resides in the domain model itself and is called as a result of using the tweet's constructor. Without this logic, no mentions are extracted, so nothing can yet be validated.
If I cannot validate it before creating the tweet, because I need the extraction logic, and I cannot use the database repository inside the domain model layer, how can I validate the mentions properly?
Whenever an AR needs to reach out of its own boundary to gather data, there are two main solutions:
You pass in a service to the AR's method which allows it to perform the resolution. The service interface is defined in the domain, but most likely implemented in the infrastructure layer.
e.g. someAr.someMethod(args, someServiceImpl)
Note that if the data is required at construction time you may want to introduce a factory that takes a dependency on the service interface, performs the validation and returns an instance of the AR.
e.g.
tweetFactory = new TweetFactory(new SqlUserInfoLookupService(...));
tweet = tweetFactory.create(...);
You resolve the dependencies in the application layer first, then pass in the required data. Note that the application layer could take a dependency on a domain service in order to perform some reverse resolutions first.
e.g.
If the application layer would like to resolve the UserInfo for all mentions, but can't because it doesn't know how to parse mentions within the text, it could always rely on a domain service or value object to perform that task first, then resolve the UserInfo dependencies and provide them to the Tweet AR. Be cautious here not to leak too much logic into the application layer, though. If the orchestration logic becomes intertwined with business logic, you may want to extract such use-case processing logic into a domain service.
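To make that second option concrete, here's a minimal sketch in JavaScript (all names are hypothetical: MentionParser, userInfoRepository, findByHandles; Tweet stands in for the aggregate from your model):

// Domain: a value object that owns the mention-parsing rule, so the
// application layer never duplicates business logic.
const MentionParser = {
    parse(text) {
        // matches the question's "#anyuser" style of mention
        return [...text.matchAll(/#(\w+)/g)].map((m) => m[1]);
    }
};

// Application layer: parse first, resolve the UserInfo dependencies,
// then hand fully resolved data to the Tweet aggregate.
async function postTweet(posterId, text, userInfoRepository) {
    const handles = MentionParser.parse(text);
    const mentioned = await userInfoRepository.findByHandles(handles);

    const missing = handles.filter((h) => !mentioned.some((u) => u.handle === h));
    if (missing.length > 0) {
        throw new Error('Unknown users mentioned: ' + missing.join(', '));
    }

    // The Tweet constructor receives already-validated UserInfo
    // objects and needs no repository access of its own.
    return new Tweet(posterId, text, mentioned);
}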
Finally, note that any data validated outside the boundary of an AR is always considered stale. The #xyz user could currently exist, but not exist anymore (e.g. deactivated) 1ms after the tweet was sent.
My problem is I'm making too many API requests, which I want to cut down if possible. Below I'll describe the situation:
I have three pages, all linked using ngRoute. Like this:
Page A: Teams (list of teams)
URL: "/teams"
Page B: Team Details (list of players)
URL: "/teams/team-details"
Page C: Player Details (list of player stats)
URL: "/teams/team-details/player-details"
Page A is populated by pulling an array of the teams from an API very easily using a simple $resource.query() request, and using ng-repeat to iterate through them.
Page B is populated by calling an html template and populating specific fields with values from a separate API request to the /team-details endpoint, taking the team_id value from the clicked element on Page A.
Page C (as with page B) takes a player_id from the clicked player on Page B and calls the /player-details endpoint using that value. This is yet another separate request.
This all works fine, but as you can imagine, a single user could quite easily rack up in excess of 100 API requests within an hour.
I have a request limit of 1000/hour, so if a mere 10 users are online at the same time, it could easily exceed my limit and shut down my API.
If I could access the API as one single master endpoint that outputted all data and subdata in one set, then that would solve my problem, but since I need to request separate endpoints I can't see how to do this.
Is there a better way to approach this? Or are these excessive API requests the only way?
Any help would be appreciated.
As far as I can see, your model looks suitable for the application and meets how an API-driven application should work...
However, one potential cut-down you could make is to cache some of the results locally, i.e. store a local version of some of the data that is unlikely to change within a session. For example, if the list of teams is unlikely to change, then store the results of one API request locally and use that instead of re-requesting the data from your API.
Following on from this, you could choose to only update certain data after a certain time period. So, if a user has looked at some team details, then don't refresh that data for the next 10-20 minutes. However, this does again depend on how time-sensitive your data is.
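As a rough sketch of what that could look like (names like myApp, TeamService, and the /api/teams endpoint are made up to fit the routes above, and the TTL is arbitrary):

// Assumes the app module was declared with 'ngResource' as a dependency.
angular.module('myApp').factory('TeamService', function ($resource) {
    var Team = $resource('/api/teams/:id');
    var cache = { data: null, fetchedAt: 0 };
    var TTL = 15 * 60 * 1000; // 15 minutes; tune to how fresh the data must be

    return {
        getTeams: function () {
            var now = Date.now();
            if (cache.data && now - cache.fetchedAt < TTL) {
                return cache.data; // serve the local copy, no API request
            }
            cache.data = Team.query();
            cache.fetchedAt = now;
            return cache.data;
        }
    };
});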
I'm new to AngularJS and I am currently building a webapp using a Django/Tastypie API. This webapp works with posts, and an API call (GET) looks like:
{
    "title": "Bootstrap: wider input field - Stack Overflow",
    "link": "http://stackoverflow.com/questions/978...",
    "author": "/v1/users/1/",
    "resource_uri": "/v1/posts/18/"
}
To request these objects, I created an Angular service which embeds resources, declared like the following one:
Post: $resource(ConfigurationService.API_URL_RESOURCE + '/v1/posts/:id/')
Everything works like a charm but I would like to solve two problems :
How can I properly replace the author field with its value? In other words, how can I resolve every reference field as automatically as possible?
How can I cache this value to avoid several calls to the same endpoint?
Again, I'm new to AngularJS and I might have missed something in my comprehension of the $resource object.
Thanks,
Maxime.
With regard to question one, I know of no trivial, out-of-the-box solution. I suppose you could use custom response transformers to launch subsidiary $resource requests, attaching promises from those requests as properties of the response object (in place of the URLs). Recent versions of the $resource factory allow you to specify such a transformer for $resource instances. You would delegate to the global default response transformer ($httpProvider.defaults.transformResponse) to get your actual JSON, then substitute properties and launch subsidiary requests from there. And remember, when delegating this way, to pass along the first TWO, not ONE, parameters your own response transformer receives when it is called, even though the documentation mentions only one (I learned this the hard way).
There's no way to know when every last promise has been fulfilled, but I presume you won't have any code that will depend on this knowledge (as it is common for your UI to just be bound to bits and pieces of the model object, itself a promise, returned by the original HTTP request).
As for question two, I'm not sure whether you're referring to your main object (in which case $scope should suffice as a means of retaining a reference) or these subsidiary objects that you propose to download as a means of assembling an aggregate on the client side. Presuming the latter, I guess you could do something like maintaining a hash relating URLs to objects in your $scope, say, and have the success functions on your subsidiary $resource requests update this dictionary. Then you could make the response transformer I described above check the hash first to see if it's already got the resource instance desired, getting the $resource from the back end only when such a local copy is absent.
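A very rough sketch of those two ideas combined (hedged: it assumes $resource and $http are injected where this runs, simplifies the hash to a plain object rather than something on $scope, and borrows the /v1/... URLs from the question):

// A Post resource whose response transformer swaps the author URL
// for a (possibly cached) User resource instance.
var userCache = {}; // maps resource URIs to loaded/in-flight User objects

var User = $resource(ConfigurationService.API_URL_RESOURCE + '/v1/users/:id/');
var Post = $resource(ConfigurationService.API_URL_RESOURCE + '/v1/posts/:id/', {}, {
    get: {
        method: 'GET',
        transformResponse: function (data, headers) {
            // Delegate to the default transformers first, passing BOTH
            // parameters along (see the caveat above).
            var post = $http.defaults.transformResponse.reduce(function (acc, fn) {
                return fn(acc, headers);
            }, data);

            var authorUri = post.author; // e.g. "/v1/users/1/"
            if (!userCache[authorUri]) {
                var id = authorUri.split('/').filter(Boolean).pop();
                userCache[authorUri] = User.get({ id: id }); // subsidiary request
            }
            post.author = userCache[authorUri]; // object instead of URL
            return post;
        }
    }
});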
This is all a bunch of work (and round trips) to assemble resources on the client side when it might be much easier just to assemble your aggregate in your application layer and serve it up pre-cooked. REST sets no expectations for data normalization.