How do I find out which files Wicket writes in the "generated" directory? - Payara

I'm new to Wicket, but I have to maintain a project with Wicket 6.20 components (running on a Payara application server). Now I have a disk-space problem on my live server. In the directory
payara41/glassfish/domains/domain1/generated/jsp/my-ear-21628-LIVE-CLUSTER/my-web_war/wicket.my-web-filestore
the web filestore collects a huge amount of data files over time:
./7857
./7857/9907
./7857/9907/6cb5a7b9cb89ad9e0d8b1422af63
./7857/9907/6cb5a7b9cb89ad9e0d8b1422af63/data
./7857/1851
./7857/1851/6bc6a644f4ab91a7674b0e91b4fe
./7857/1851/6bc6a644f4ab91a7674b0e91b4fe/data
...
How can I find the cause of this file generation?

Wicket stores stateful pages on disk (by default). It creates a file for each HTTP session. On session expiration the file is deleted, so the content of the temp folder should not grow endlessly.
But there was a bug in the past that left such files behind.
I don't remember in which version exactly the bug was fixed.
The best option would be to upgrade to the latest available version. 6.x has not been supported for several years, but you can still upgrade to the latest 6.x release.
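If upgrading is not immediately possible, you can at least audit the filestore yourself. Below is an illustrative plain-JDK sketch (not part of Wicket) that treats any `data` file older than your longest plausible session lifetime as a leftover from the expiration bug described above; the seven-day cutoff is an assumption to adjust to your session timeout.

```java
import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.FileTime;
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

public class FilestoreAudit {

    /**
     * Walks a Wicket-style filestore tree and returns every "data" file
     * older than maxAge. Files that outlive any plausible session timeout
     * are likely leftovers that session expiration failed to remove.
     */
    public static List<Path> findStaleDataFiles(Path root, Duration maxAge) throws IOException {
        Instant cutoff = Instant.now().minus(maxAge);
        List<Path> stale = new ArrayList<>();
        try (Stream<Path> files = Files.walk(root)) {
            files.filter(Files::isRegularFile)
                 .filter(p -> p.getFileName().toString().equals("data"))
                 .forEach(p -> {
                     try {
                         FileTime modified = Files.getLastModifiedTime(p);
                         if (modified.toInstant().isBefore(cutoff)) {
                             stale.add(p);
                         }
                     } catch (IOException ignored) {
                         // File vanished mid-walk (e.g. session expired); skip it.
                     }
                 });
        }
        return stale;
    }

    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : ".");
        // Anything older than a week has certainly outlived its session.
        for (Path p : findStaleDataFiles(root, Duration.ofDays(7))) {
            System.out.println(p);
        }
    }
}
```

Run it against the `wicket.my-web-filestore` directory first to confirm the files really are stale before deleting anything.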

Related

CDN caching strategy for React websites that use chunks

What's the best caching strategy for React websites that use chunks (code splitting)?
Before using chunks I would just cache everything for one year in CloudFront, and I would delete the old files and invalidate the cache after a new version of the website was deployed. Works great.
However, after I started using chunks I started running into problems. One common issue: after deploying a new version of the site, I delete the old files and invalidate the cache, but a user who is still active on the old version of the site tries to load a chunk that no longer exists, so the site crashes for him.
One potential solution would be to keep all old files for a month or longer, and delete all files that are older than X months during the deployment process.
Is there any better solution to this problem? Am I missing something special about the service worker that CRA (Create React App) provides? If I remember correctly, it provides some kind of cache busting.
Thanks.

Archive artifact in Nexus

Our internal Nexus repository has an artifact that we wish we never published, but unfortunately we did. And various environments installed it.
I'd like to delete it from the repository so that nobody downloads the erroneous release again, but for the people who already downloaded & installed that artifact, it seems a bit deceptive to pretend the release never happened. Is there a way to "archive" or "disable" an artifact so that it still is preserved somewhere for analysis or archeological purposes, but won't be installed by someone pulling artifacts for installation?
There isn't a feature in NXRM for disabling access to individual artifacts. This is a bit hacky, but you could achieve it by:
1. Creating a new hosted repository (either raw format or the same format that you are currently using)
2. Marking that repository as "offline"
3. Moving the artifact to the archived repository.
Step 3 is the problematic part: if you are an NXRM PRO user on a recent version, there are REST calls you can use for moving components. See https://help.sonatype.com/repomanager3/staging#Staging-RESTEndpoints for more details.
If you are an OSS user you will likely have to republish the same artifact to the archive repository and then delete it from the original repository.
(Note the above assumes you are using NXRM3 - if you are using NXRM2 take a look at https://blog.sonatype.com/2010/04/nexus-tip-moving-artifacts-between-nexus-repositories/)
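The PRO move call can be driven from any HTTP client. Here is a minimal sketch using Java 11's `java.net.http`, assuming the NXRM 3 staging move endpoint shape (destination repository in the path, component-search criteria such as `repository`, `group`, and `name` as query parameters); the host, repository, and artifact names are placeholders.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class NexusMove {

    /**
     * Builds the POST request for NXRM 3 PRO's "staging move" endpoint.
     * The destination repository goes in the URL path; the query string
     * carries the search criteria selecting which components to move.
     */
    public static HttpRequest buildMoveRequest(String baseUrl, String destinationRepo,
                                               String sourceRepo, String group, String name) {
        URI uri = URI.create(String.format(
                "%s/service/rest/v1/staging/move/%s?repository=%s&group=%s&name=%s",
                baseUrl, destinationRepo, sourceRepo, group, name));
        return HttpRequest.newBuilder(uri)
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
    }

    public static void main(String[] args) {
        // Authentication and the actual client.send(...) call are omitted;
        // the endpoint also requires an NXRM PRO license.
        HttpRequest req = buildMoveRequest("http://nexus.example.com",
                "archived-releases", "maven-releases", "com.example", "bad-artifact");
        System.out.println(req.method() + " " + req.uri());
    }
}
```

Check the request against your server's Swagger UI before using it, since the exact parameters vary between NXRM versions.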

Version control strategy with Google Cloud Endpoints

When working with Google Cloud Endpoints in an App Engine project (Eclipse-based), some files that describe the API are automatically generated every time the endpoint is edited, and for every version.
The files are *-v1.api, *-v1-rest.discovery and *-v1-rpc.discovery (the version number may change) and are placed in WEB-INF.
Should these files be committed to source control?
My impression is that if the files are automatically generated, they will always be available and there is no need to track them.
Even if I add more versions of the endpoint in the future, I'd need to keep all those versions for backwards compatibility, so the .api and .discovery files will also be generated for all of the versions.
Personally, I don't version control (or even worry about backing up) any generated files. I only worry about source and published binaries. And in theory you don't need the binary either because you should be able to recreate the binary from those source files.
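If you take that approach and decide not to track them, the generated descriptors can simply be ignored. A minimal `.gitignore` fragment, assuming Git and the WEB-INF layout described in the question (adjust the path prefix to your project structure):

```
# Generated Cloud Endpoints API descriptors (recreated on every build)
WEB-INF/*.api
WEB-INF/*-rest.discovery
WEB-INF/*-rpc.discovery
```
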

For mobile app updates why does the entire app need to be downloaded again?

I have noticed on a number of platforms (iOS, Android and BlackBerry) that when updating an app, the entire app is downloaded again (other mobile platforms may work this way too, but I have only been exposed to these three).
Why does the entire app need to be downloaded again for an update instead of incremental updates?
This seems very inefficient especially if you are dealing with large apps.
Your premise is no longer true for iOS, as of iOS 6:
Starting with iOS 6, the app store will automatically produce an update package for all new versions of apps submitted to the store. This package is optimized for updating an app from one version to another, and contains files that have changed between the prior version of an app and the new version of the app, excluding files that have not changed.

When used optimally, an update package is significantly smaller to download than the full package of the app and the update will install more quickly. Also, in many cases, this mechanism allows updates to large apps to be downloadable over cellular networks where app downloads are subject to a size limit.

In addition to new content, the update package contains instructions on how to transform the prior version of the app into the new version of the app. New files will be added, modified files will be replaced with their updated counterpart, and deleted files will be removed as part of this transformation. As far as the developer and user are concerned, this process is entirely transparent and the resulting updated app will be indistinguishable from a full download of the corresponding updated version of their app.
So, yes, it is possible to do delta application updates, as well as delta OS updates, on mobile platforms. This capability simply has to be added by the OS vendor.
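The transformation the quoted text describes can be modeled as a manifest of per-file operations applied to the installed app directory. This is an illustrative sketch of the idea, not Apple's actual package format; the `Op` enum and `apply` signature are inventions for the example.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.Map;

public class UpdateApplier {

    /** Operations an update package may request for a single file. */
    public enum Op { ADD, REPLACE, DELETE }

    /**
     * Applies a manifest of per-file instructions to an installed app
     * directory: new files are written, changed files are overwritten,
     * and removed files are deleted. Files not named in the manifest
     * are kept as-is, which is what makes the update package small.
     */
    public static void apply(Path appDir, Map<String, Op> manifest,
                             Map<String, byte[]> payload) throws IOException {
        for (Map.Entry<String, Op> entry : manifest.entrySet()) {
            Path target = appDir.resolve(entry.getKey());
            switch (entry.getValue()) {
                case ADD:
                case REPLACE:
                    // The payload carries bytes only for added/changed files.
                    Files.createDirectories(target.getParent());
                    Files.write(target, payload.get(entry.getKey()));
                    break;
                case DELETE:
                    Files.deleteIfExists(target);
                    break;
            }
        }
    }
}
```

A real implementation would also verify signatures and apply binary diffs per file rather than shipping whole replacements, but the add/replace/delete structure is the core of any delta update.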
Some of the code and other content might be updated and/or changed, which requires an update. Since Android does not allow you to change/delete/add files in the installation folder, you have to download and reinstall the whole app.

How to remove previous versions of an offline clickonce application

We have a .NET 4.0 WinForms application that we publish with ClickOnce to the client PCs. The installation is about 80 MB. The application is available offline, and the update occurs at app startup using
ApplicationDeployment.CurrentDeployment.Update
Each time we do an update of the application, everything works fine and each client gets updated. However, the application cache keeps growing in size... We noticed that more than two versions are kept in the LocalAppData folder. The size of the ClickOnce installation folder is more than 1 GB.
ClearOnlineAppCache works only for online applications, and we can't find any information on cleaning up LocalAppData for offline applications.
Is there any way to manage previous versions of our application in the LocalAppData folder on our clients' PCs?
Update:
We removed our custom update code and used the update mechanism of the ClickOnce framework. Now old versions are removed properly and only two versions are kept in LocalAppData. I still have no idea why all versions were kept when we updated through the custom update code.
I've seen this issue before, but I clarified with the ClickOnce lead at Microsoft before answering.
It keeps two versions of the deployment plus there are extra folders for each assembly. When processing an update, ClickOnce figures out which files have changed by comparing against the assembly it has already cached, and it only downloads the ones that have changed. The deployment folders have hard links to the assemblies in the separate folders. So you might see additional files, but it's not actually the file, it's a link to the files in the assembly-only folders. Explorer will show it as a file, but it's not. So unless you're running out of disk space and are just concerned about the folder size, be aware that the information reported by Windows Explorer may not be accurate.
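The hard-link effect described above can be checked programmatically rather than trusting Explorer's size report. Here is a sketch that sums sizes while counting each physical file only once, using NIO's `fileKey()` to detect directory entries that share the same underlying file (this works on filesystems that expose a file key, such as ext4 or NTFS; it is a diagnostic, not a ClickOnce API).

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class DedupedSize {

    /**
     * Sums file sizes under root, counting each physical file only once
     * even when several directory entries (hard links) point at it.
     * Falls back to counting every entry on platforms that expose no
     * file key, in which case the result matches the naive total.
     */
    public static long dedupedSize(Path root) throws IOException {
        List<Path> regular;
        try (Stream<Path> files = Files.walk(root)) {
            regular = files.filter(Files::isRegularFile).collect(Collectors.toList());
        }
        Set<Object> seen = new HashSet<>();
        long total = 0;
        for (Path p : regular) {
            BasicFileAttributes attrs = Files.readAttributes(p, BasicFileAttributes.class);
            Object key = attrs.fileKey();
            if (key == null || seen.add(key)) {
                total += attrs.size();
            }
        }
        return total;
    }
}
```

If the deduplicated total is far below what Explorer reports for the ClickOnce cache folder, the "extra" versions really are just links, as the answer explains.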
There is an answer to this problem here
I wrote a function to clean old ClickOnce versions on the client side.
On my machine I freed 6 GB of space. I don't even want to know the total space used by old versions org-wide...
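A hedged sketch of what such a cleanup could look like, assuming version folders can be ordered by last-modified time and that the two newest must be kept (ClickOnce's own behavior per the answer above). The actual cache under `LocalAppData\Apps\2.0` uses obfuscated folder names, so adapt the root path; this sketch only selects deletion candidates and leaves the recursive delete to you.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class OldVersionCleaner {

    /**
     * Returns the subfolders of root that are NOT among the `keep` most
     * recently modified ones, i.e. the candidates for deletion. Sorting
     * is newest-first, so everything past index keep-1 is old.
     */
    public static List<Path> foldersBeyondNewest(Path root, int keep) throws IOException {
        List<Path> sorted;
        try (Stream<Path> entries = Files.list(root)) {
            sorted = entries.filter(Files::isDirectory)
                    .sorted(Comparator.comparing((Path p) -> {
                        try {
                            return Files.getLastModifiedTime(p);
                        } catch (IOException ex) {
                            throw new UncheckedIOException(ex);
                        }
                    }).reversed())
                    .collect(Collectors.toList());
        }
        return sorted.size() > keep ? sorted.subList(keep, sorted.size()) : List.of();
    }
}
```

Whatever you delete, leave the two newest folders untouched so ClickOnce's own rollback mechanism keeps working.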
