I am using AEM 6.3 with allowProxy for clientlibs. As expected, the dispatcher caches the clientlibs under the path /cache/etc.clientlibs/myapp/clientlibs/clientlib.css, but the corresponding JCR path is /apps/myapp/clientlibs/clientlib/mystyle.css.
So when clientlibs are modified during a deployment and published, the corresponding Apache cache entries are not cleared automatically. Today we are doing this manually.
We also use the automated cache buster VersionedClientlibs, so we never end up loading an obsolete clientlib. But the Apache cache piles up with thousands of obsolete clientlib files if it is not cleared manually.
What is the recommended approach to clearing obsolete, versioned, proxied clientlibs from the Apache cache?
This is a known limitation, and we've also been flushing the whole /etc.clientlibs path after each deployment. We do this via the ACS dispatcher-flush-ui.
Typically, when deploying to production, you'd flush all or part of the dispatcher cache anyway to make sure component changes are reflected, so adding this task to that process is easy.
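For reference, the manual flush ultimately boils down to an invalidation request against the dispatcher's flush handler, roughly like the following (the host name and handle are just examples; the flush UI or a flush agent sends the equivalent request for you):

curl -H "CQ-Action: Activate" -H "CQ-Handle: /etc.clientlibs/myapp/clientlibs" -H "Content-Length: 0" http://dispatcher-host/dispatcher/invalidate.cache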
If you really want this to become an automatic process, you can:
Write a ResourceChangeListener (example here) or a JCR EventListener (example here) that listens for changes under the clientlib path and replicates the corresponding /etc.clientlibs/ path (a rough sketch is shown after this list).
Write a ReplicationPathTransformer so that when your clientlib path is replicated, you can transform it to the corresponding /etc.clientlibs/ path to be flushed on the dispatcher.
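For the first option, a minimal sketch of such a listener might look like the following. Everything project-specific here is an assumption: the class name, the /apps/myapp/clientlibs path, the simple /apps-to-/etc.clientlibs string mapping, and the use of ACTIVATE as the flush action. You would also need a service user mapping for the bundle, and in practice you would probably flush the generated .css/.js proxy URLs of the owning clientlib rather than the raw source file path.

import java.util.List;

import javax.jcr.Session;

import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.api.resource.ResourceResolverFactory;
import org.apache.sling.api.resource.observation.ResourceChange;
import org.apache.sling.api.resource.observation.ResourceChangeListener;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

import com.day.cq.replication.ReplicationActionType;
import com.day.cq.replication.Replicator;

@Component(service = ResourceChangeListener.class, property = {
        ResourceChangeListener.PATHS + "=/apps/myapp/clientlibs",
        ResourceChangeListener.CHANGES + "=ADDED",
        ResourceChangeListener.CHANGES + "=CHANGED",
        ResourceChangeListener.CHANGES + "=REMOVED"
})
public class ClientlibProxyFlushListener implements ResourceChangeListener {

    @Reference
    private Replicator replicator;

    @Reference
    private ResourceResolverFactory resolverFactory;

    @Override
    public void onChange(List<ResourceChange> changes) {
        // requires a service user mapping for this bundle
        try (ResourceResolver resolver = resolverFactory.getServiceResourceResolver(null)) {
            Session session = resolver.adaptTo(Session.class);
            for (ResourceChange change : changes) {
                // map the JCR path to the proxied URL the dispatcher actually caches
                String proxiedPath = change.getPath().replaceFirst("^/apps/", "/etc.clientlibs/");
                // replicating the proxied path lets the dispatcher flush agent invalidate it
                replicator.replicate(session, ReplicationActionType.ACTIVATE, proxiedPath);
            }
        } catch (Exception e) {
            // log and swallow: a failed flush should not break check-ins or deployments
        }
    }
}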
Hope this helps.
What's exactly the difference between the IdempotentRepository and the InProgressRepository?
I have the following definitions from the File component page:
IdempotentRepository: "Option to use the Idempotent Consumer EIP pattern to let Camel skip already processed files"
InProgressRepository: "The in-progress repository is used to account the current in progress files being consumed."
To me these look like the same definition, only phrased slightly differently.
They can also both use the same idempotent repository.
So I'm slightly confused: do I need both, or is the idempotentRepository good enough?
Before you read the info below, make sure you have read about and understand the concept of idempotency.
IdempotentRepository - the place used to store the cache of already-processed files (i.e. files that have been consumed and handled by your route). Only in use when you turn on the idempotent feature.
InProgressRepository - the place used to store the cache of files currently in progress (i.e. files being consumed in the current batch). Always in use for the file consumer.
IMO, you always need the InProgressRepository, and the default setup (a memory-based repository) is generally fine. You only need an IdempotentRepository if idempotency is required, and you would then choose your own backing store (file-based, JPA-based, ...) so it survives an app restart. A minimal route wiring both up is sketched below.
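To make the distinction concrete, here is a minimal sketch of a file consumer with the idempotent option turned on and both repositories wired up explicitly. The directory, file and bean names are made up, and it uses Camel 2-era class and package names (they differ slightly in Camel 3):

import java.io.File;

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.impl.SimpleRegistry;
import org.apache.camel.processor.idempotent.FileIdempotentRepository;
import org.apache.camel.processor.idempotent.MemoryIdempotentRepository;

public class FileConsumerRepositories {
    public static void main(String[] args) throws Exception {
        SimpleRegistry registry = new SimpleRegistry();
        // idempotentRepository: remembers already-processed files; file-backed here
        // so the cache survives an application restart
        registry.put("processedFiles",
                FileIdempotentRepository.fileIdempotentRepository(new File("data/processed.dat")));
        // inProgressRepository: only tracks the files of the current batch,
        // so the default in-memory flavour is normally good enough
        registry.put("inFlightFiles",
                MemoryIdempotentRepository.memoryIdempotentRepository());

        CamelContext context = new DefaultCamelContext(registry);
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("file:inbox?noop=true"
                        + "&idempotent=true"
                        + "&idempotentRepository=#processedFiles"
                        + "&inProgressRepository=#inFlightFiles")
                    .to("log:consumed");
            }
        });

        context.start();
        Thread.sleep(60_000); // let the consumer poll for a while
        context.stop();
    }
}

With this setup, a file that has been consumed once is skipped on every later poll (processedFiles), while inFlightFiles only prevents the same file from being picked up twice at the same time.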
I'm wondering, when I press "Deploy" in the Google App Engine Launcher, how it syncs my changes to the actual instance... maybe it would be better to ask specific questions :)
1) Does it only upload the delta changes (as opposed to the entire file) for changed files?
2) Does it only upload new files and changed files (i.e. does not copy pre-existing) unchanged files?
3) Does it delete remote files that do not exist locally?
4) Does all of this happen instantaneously for the end user once the app has finished deploying? (i.e. let's say I accidentally uploaded an insecure file that sits at example.com/passwords.txt - if #3 is true, then once I remove it from the local directory and re-deploy, it should be gone - but can I be sure it is really gone and not cached on some edge somewhere?)
If you use only the Launcher or the appcfg util, as opposed to managing your code by means of git, App Engine will keep only one 'state' of that particular version of your app and will not store any past state. So,
1) Yes, it uploads only deltas, not full files.
2) Yes, only new, modified or deleted files.
3) Yes, it deletes them if you delete locally and deploy. As Ibrahim Arief suggested, it is a good idea to use appcfg so you can prove it to yourself.
4) Here there are some caveats. With your new deploy, your old instances are sent a kill signal, and until it actually gets executed, there is a time span (seconds to minutes) during which new requests could hit your previous version.
The point Port Pleco made is also very important: you have to be careful with caching of static files. If you have a file with Expires or Cache-Control headers and it is actually served, it could be cached in various places, so the existence of old copies of it is completely out of your control.
Happy coding!
I'm not a Google employee, so I don't have guaranteed accurate answers, but I can speak a little about your questions from my experience:
1) From what I've seen, it does upload all files each time
2) See 1; I'm fairly sure everything is uploaded
3) I'm not entirely sure whether it "deletes" the files, but I'm 99% sure that they're inaccessible if they don't exist in your current version. If you want to ensure that a file is inaccessible, then you can deploy your project with a new version number, and switch your app version to the new version in your admin panel. That will force google to use all your most recent files in that new version.
4) From what I've seen, changes that are rendered/executed, like hardcoded HTML text or controller changes or similar, appear instantly. Static files might be cached, as is normal in web development, which means that you might have old versions of files saved on users' machines. You can use a query string on the end of the file name with the version to force an update.
For example, if I had a javascript file that I knew I would want to redeploy regularly, I would reference it like this:
<script type="text/javascript" src="../javascript/file.js?version=1.2"></script>
Then I would just increment the version number manually whenever I needed to force deployment of the JavaScript to my users.
Is it possible to simulate a multi-developer scenario with RTC source control so that, when I make code changes, I can test accepting change sets, for example? This is just so I can test a multi-developer environment while using just one user.
I've tried creating multiple Eclipse workspaces and loading the same project area into each Eclipse workspace. Using this method I am unable to accept change sets, as RTC source control will just ask me to resync my workspace once I make a change in one Eclipse workspace:
It seems the only method of accepting incoming changes is to
1. Right-click on the stream from within the 'Pending Changes' view
2. Select Load
3. Select the following option:
Make sure you use the Stream (i.e. make sure you don't deliver directly to another repo workspace simulating another user).
(Note: this is entirely different in ClearCase, where the "out of sync" can happen between the configuration of a UCM view and that of a Stream after a rebase.)
If you create different repo workspaces (loaded into different Eclipse workspaces), this can cause some confusion when they are used within the same Eclipse instance.
As said in this thread:
Repository workspaces are meant to isolate changes - they are your private stream.
There is no automatic accepting of changes, so you are in full control of what flows in. You can also run private builds on them. That is the whole idea.
If you want to run several repository workspaces with shared code, you should use a Stream, I think.
The clean repo workspace would be used to accept the changes you decide to deliver to your stream.
So you are trying to use a repository workspace as a stream. While they are almost identical, I am not sure how they would react to changes delivered to them, especially while being loaded.
You should use two Eclipse instances. I am concerned about having the same Eclipse projects loaded multiple times in the same sandbox and the same Eclipse instance.
That "confusion" is explained in the same thread:
This is expected behavior.
When you change WS1 by delivering to it, the content you've loaded to disk for WS1 isn't updated. So you have to reload.
For this reason, you are not allowed to deliver to other users' workspaces. You can't alter someone else's workspace, but you can alter your own, because you would know why it went out of sync.
Check out point 7 and 10 of "Good practices and key workflows for Rational Team Concert Source Control users".
Note: the article "Loading Content from a Jazz Source Control Repository in Rational Team Concert 2.0" (also valid for RTC 3.0) mentions, in the section "Reloading Out-of-sync Shared Folders", advice similar to that given by the OP:
The local workspace can become out of sync with the remote workspace for a couple of reasons:
The remote workspace is loaded multiple times and changes have been checked in or accepted from another client session.
An error was encountered during an operation (e.g. Accept) that modifies both the local and remote workspaces.
When the local workspace became out of sync with the remote workspace in RTC 1.0, the user was forced to run the Load wizard and reselect the folders that needed to be reloaded.
In RTC 2.0, this new option will automatically select the out of sync folders and reload them so they are no longer out of sync.
Also new in RTC 2.0 is an indication in the Pending Changes view that there are projects out of sync, as shown below.
Clicking on the Reload out of sync link in the Pending Changes view will open the Load wizard.
The reload option will be selected by default and clicking next will then allow you to select which folders to reload.
As you can see in the following screen shot, all the projects in the Foundation component are out of sync and need to be reloaded.
Clicking Finish will reload these folders and bring them back in sync.
Also, the thread "How to handle project out of sync" provides an interesting illustration of that mechanism (even though it isn't exactly your situation).
We have a config spec that we use for our builds that we encourage all developers in our organization to use so that they can run any task in our build without fear of failure. Every now and again we need to update that config spec to include new elements or exclude old elements.
When we do this, the process is to write a quick mail to all of our developers telling them to manually update any views that they use to build our system with the current config spec.
This is annoying and error prone and thus leads many developers to just ignore those mails and then we get called because the build's broken.
I'm very interested in defining the config spec centrally somehow so that all views can use that config spec and we can update it underneath people. This may seem draconian, but when you have hundreds of developers who are all supposed to be running the same builds, it seems to make sense.
I've already investigated the idea of using a share to store the config spec and then including it in the developers' views using an include line, but as the documentation states: "Include files are re-read on each execution of setcs and edcs." In testing, this appears to mean what it seems to mean: the only time the rules are re-evaluated is when the config spec is edited in some way.
The solution I'm looking for would re-evaluate the config spec every time you interact with ClearCase, or at the very least on update. That way, I could manage the config spec for everyone.
Thoughts?
It can work, especially if your included config spec doesn't change too often.
Each time it changes, your users will have to run
cleartool setcs -current
(as explained in example #2 of this technote)
You will then need to decide where to store that common config spec:
on a shared drive
in a ClearCase view, in order to benefit from the history feature for that common config spec content.
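For illustration, each developer's config spec would then contain little more than the include line; the path and the surrounding rules below are made up:

element * CHECKEDOUT
include \\buildserver\configspecs\build_current.cs
element * /main/LATEST

After the shared build_current.cs changes, the new rules only take effect in a view once cleartool setcs -current is run again, which is exactly the step described above.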
You can see a full debate in this thread:
However, I have encountered situations where a version-controlled include file was necessary because it referred to plenty of elements from legacy code which users had to use to continue their work on some of the new code. It was a pain and we had to live with it.
Just like any other 'process', this too needs some 'education' of the users.
I found "Suspend Change-set" in RTC to be very useful, and since we're working with ClearCase as well (dozens of users) I'm wondering if that feature is also available in ClearCase as well.
If not - could it be generated by script/trigger/hook ?
We use UCM, and I'd like to explain my question:
if I have to deliver and I want to skip delivering one activity, I can decide not to deliver it (if no dependencies...) , so my question is regarding working on my current stream: Is that possible to "suspend" an activity from my current stream ?
Thanks in advance
Simply put, not easily.
RTC is basically ClearCase rewritten from scratch, and the "suspend" mode (also called stash or shelve) takes advantage of the notion of applying a change set (to any state of a repository).
A UCM change set is a list of file versions. Each version is tied to its predecessor, and you cannot easily remove one (unless you do some negative or subtractive merges) and then re-apply it later.
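For the record, a subtractive merge (removing the contribution of one version from the current version of an element) looks roughly like this; the file name and version are made up:

cleartool merge -to foo.c -delete -version \main\mybranch\3

It creates a new version that undoes the change rather than taking the old version out of the history, which is why it is nowhere near as convenient as RTC's suspend.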
That being said, Reuven just contacted me this morning because he had files checked out in a snapshot view on a Stream which he wants to rebase (a similar issue to your deliver problem).
A possible way to do that is to create another view (a dynamic one) to use for the rebase, and then go back to your snapshot view and update it: it will detect the updated config spec (following the rebase) and will not erase any of your currently checked-out files.
On check-in, those files will be merged with the updated versions.