'NSAllocateMemoryPages() failed' with NSPersistentCloudKitContainer

DESCRIPTION OF PROBLEM:
When syncing large items (such as images) with NSPersistentCloudKitContainer, my app crashes with the error 'NSAllocateMemoryPages() failed'. This happens repeatedly within a minute of launching the app. It happens both in production and development environments.
The crash does not happen if the user disconnects from the internet, or turns off iCloud sync for my app.
The images are stored in Core Data as "Binary Data" attributes in separate entities with many-to-one relationships to their parent entities. The setup is adapted from this example: https://developer.apple.com/documentation/coredata/synchronizing_a_local_store_to_the_cloud
STEPS TO REPRODUCE:
Add multiple images to Core Data at once, or sync with an iCloud account that already has multiple images in the private iCloud container.
PLATFORM AND VERSION:
Happens on both iPhone X and iPad Pro (gen 1) running iOS 13.3.1.
Has anybody else come across this issue? Any suggestions would be very welcome!

I found the problem in the end. I was storing full-resolution images of sizes exceeding 20 MB. Compressing the images made the problem go away. It looks like NSPersistentCloudKitContainer is not happy syncing Binary Data attributes larger than around 15 MB. If anyone has more details on this size limitation and where (or whether) it is documented, I would be very interested to learn more!
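The fix itself was done with UIImage's JPEG encoding on iOS, but as a platform-neutral illustration of the same compress-above-a-threshold idea, here is a hedged Java sketch using javax.imageio. The ~15 MB threshold reflects the observation above (it is not a documented CloudKit limit), and the 0.7 quality factor is an arbitrary choice.

```java
// Sketch: recompress an image blob before persisting it if it exceeds a
// size threshold. The ~15 MB threshold is an observed guess, not a
// documented limit; quality 0.7 is arbitrary. JPEG re-encoding assumes
// the image has no alpha channel.
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.MemoryCacheImageOutputStream;

public class ImageShrinker {
    static final long MAX_BLOB_BYTES = 15L * 1024 * 1024; // assumed threshold

    static byte[] shrinkIfNeeded(byte[] original) throws IOException {
        if (original.length <= MAX_BLOB_BYTES) {
            return original; // small enough to sync as-is
        }
        BufferedImage img = ImageIO.read(new ByteArrayInputStream(original));
        ImageWriter writer = ImageIO.getImageWritersByFormatName("jpeg").next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionQuality(0.7f);

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writer.setOutput(new MemoryCacheImageOutputStream(out));
        writer.write(null, new IIOImage(img, null, null), param);
        writer.dispose();
        return out.toByteArray();
    }
}
```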

Related

Codename One - Reduce the loading time of a web-app

I have a Codename One web-app that, after showing the logo, remains completely blank and white for a variable time (from a few seconds to more than ten seconds). My Internet connection is very fast (optical fiber).
Are there any tips to reduce the loading time of a Codename One web-app? The build size is 663 KB and the generated application is 10.5 MB (unzipped).
Chrome has some really nice benchmarking tools that help point out the time spent on each stage. You should run these, and make sure that the downloaded binaries are gzipped so the download isn't the bottleneck (a quick check is sketched below).
Also make sure to run your tests against a deployed app and not on the preview which might exhibit different behavior.
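As a quick sanity check of the gzip point, here is a minimal Java sketch that requests a resource with gzip allowed and prints the Content-Encoding header. The URL is a placeholder for your deployed app's main script, and this only checks one resource, not the whole app.

```java
// Minimal check that a deployed resource is served gzipped: request it
// with an Accept-Encoding header and inspect Content-Encoding.
// The URL below is a placeholder -- point it at your deployed app.
import java.net.HttpURLConnection;
import java.net.URL;

public class GzipCheck {
    public static void main(String[] args) throws Exception {
        HttpURLConnection con = (HttpURLConnection)
                new URL("https://example.com/app/app.js").openConnection();
        con.setRequestProperty("Accept-Encoding", "gzip");
        con.connect();
        System.out.println("HTTP " + con.getResponseCode());
        System.out.println("Content-Encoding: " + con.getHeaderField("Content-Encoding"));
        System.out.println("Content-Length:   " + con.getHeaderField("Content-Length"));
    }
}
```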
In terms of the app, try to show a form quickly without any server requests or IO, then defer the actual loading code until after that first form is visible, as in the sketch below. If you trigger a server call during startup, it will significantly slow down loading.
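To illustrate that ordering, here is a hedged Codename One sketch: the form is shown immediately and the server call is deferred until after the first render. The URL and labels are placeholders, not part of the original answer.

```java
// Show a form immediately, then fetch data afterwards so the network
// round-trip never blocks the first paint. Form, Display.callSerially,
// ConnectionRequest and NetworkManager are standard Codename One APIs;
// the URL is a placeholder.
import com.codename1.io.ConnectionRequest;
import com.codename1.io.NetworkManager;
import com.codename1.ui.Display;
import com.codename1.ui.Form;
import com.codename1.ui.Label;

public class FastStart {
    public void start() {
        Form home = new Form("Home");
        home.add(new Label("Loading..."));
        home.show(); // user sees the UI right away

        // Defer the server call until after the form is on screen
        Display.getInstance().callSerially(() -> {
            ConnectionRequest req = new ConnectionRequest("https://example.com/api/apps") {
                @Override
                protected void postResponse() { // runs on the EDT when done
                    home.removeAll();
                    home.add(new Label("Data loaded"));
                    home.revalidate();
                }
            };
            NetworkManager.getInstance().addToQueue(req);
        });
    }
}
```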

WSO2 EMM - App Management Database Bug

Running WSO2 EMM 1.1.0, everything has been working just fine except for one big issue.
From the moment I first click on an app in the App Management tab, the WSO2EMM_DB.h2.db file starts growing steadily for as long as the server is running, even with absolutely no changes. Eventually it gets so big that clicking an app on that tab takes a ridiculously long time to load the list of devices using the app. We're talking 5+ minutes; it becomes completely unusable. I have checked the error logs and found no errors at all, every time.
Restarting the server does nothing to correct the issue. Even if I click an app on the App Management tab once and never again, the database file will continue to grow. Even if I restart the server and never log into the EMM page, it continues to grow.
The only thing I've found so far that can possibly help is keeping backup copies of the database file and overwriting the current file when it gets too big. Obviously that's not a solution, as I'd need to create a new backup file every time there's a change on the server, and eventually the database file would grow too big from that too.
It's not an issue with the H2 database either. Not only have I started over fresh several times and seen the same behavior, but the only other report of this issue I could find had it regardless of whether the database was H2 or MySQL.
I've been trying to find a solution for this for over a month with no success. Any help would be appreciated!
EDIT: It looks like this might be the subject of EMM-826. Unfortunately there seems to be no response to that bug report so far.
EDIT 2: EMM-826 was closed with a message saying the following:
This issue is fixed in the EMM 1.1.0 GA latest pack. Please get all the patches for the product/build the product from the latest source [ https://github.com/wso2/product-emm ] and try again.
Unfortunately, that did not work for me. I'm not sure what exactly I'm doing wrong, so I'll list what I did to try to fix it:
Downloaded the EMM 1.1.0 zip from http://wso2.com/products/enterprise-mobility-manager/.
Downloaded the zip from https://github.com/wso2/product-emm and pasted the files from that into my EMM_HOME directory.
When that didn't work, I searched for patches and found I was only using patches 1-6. In the documentation I found I could download patches 7-12 here. Patches 9 and 10 didn't work right for some reason, leaving me unable to reach the EMM dashboard or Publisher; I could only access the Carbon manager. I was able to make patches 7, 8, 11, and 12 work, though, with no change in behavior.
Here are the steps I take to reproduce the issue:
After setting a fresh copy of the EMM up, I log in to the EMM dashboard as Admin, set up a user account, and upload an app through the Publisher.
Register a device to the user account I set up. In this case, an Android device running Android 4.2.2.
From the dashboard, I go to App Management and click the app I uploaded. The list of devices loads, but from that point on the database file starts growing, and eventually, after several hours, it becomes so large that the device list will never load.
Please help!
Found this happening too. From a quick look, it's the WSO2EMM_DB notifications table: it seems to keep a history of all notifications over time, and the info for app installs comes from non-optimized queries, which degrade as the table grows. You could delete all rows from the table, and it will re-populate as devices check back and report their info.
But you'd probably want to write a query that keeps just the latest notification of each type for each user (a rough sketch follows below), and, as was mentioned, it is apparently fixed in the latest version.
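As an illustration of that cleanup (not an official fix), here is a hedged JDBC sketch against the H2 file. The table and column names (NOTIFICATIONS, ID, DEVICEID, NOTIFICATIONCODE) are guesses at the schema, so verify them against your actual WSO2EMM_DB, and take a backup before trying anything like this.

```java
// Hypothetical cleanup: keep only the newest notification per
// (device, type) pair and delete the rest. Table/column names are
// guesses -- check the real WSO2EMM_DB schema first, and back up the DB.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class NotificationPruner {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:h2:/path/to/WSO2EMM_DB", "wso2carbon", "wso2carbon");
             Statement st = con.createStatement()) {
            int deleted = st.executeUpdate(
                "DELETE FROM NOTIFICATIONS WHERE ID NOT IN ("
                + " SELECT MAX(ID) FROM NOTIFICATIONS"
                + " GROUP BY DEVICEID, NOTIFICATIONCODE)");
            System.out.println("Deleted " + deleted + " old notification rows");
        }
    }
}
```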
Issue appears to be resolved in EMM 2.0, which can be found here.

Is it OK for a web app to refresh itself when the server pushes an update?

Basically, we are building a small mobile app that lists and displays all applications/builds/details available on the server. The listing contains between 3 and 25 apps, and each app in the display occupies about 60 pixels of height with a nice UI.
Apps 1 version ABC build date...
Details of the apps here (1-100 lines)
Apps 2 version ABC build date...
Details of the apps here (1-100 lines)
Apps 3 version ABC build date...
Details of the apps here (1-100 lines)
Apps 4 version ABC build date...
Details of the apps here (1-100 lines)
etc.
The listing order/size may change a few times during a day (1-3). Is it good practice to just update the listing without any warning?
We have concerns that if, for example, the user is reading details while the refresh occurs, the refresh will cause a scroll back to the top, etc. We are asking about good practice for app behavior.
Any comment or suggestion would be appreciated.
Thanks!
I think it's all right to refresh data in an application if the goal of the application is to show the most current data at all times.
But it is VERY important to give clear and noticeable feedback to the user that the data has been refreshed, with an animation, a color change, or just an effect that goes well with the application.
Another option (and you have a great example right here in Browser Stack) is to tell the user that there is new data and make them do something to update it.
For example, a small prompt at the top of the screen that says "New data is available, tap here to update". That way you are sure that the user is 100% aware that data is being pushed to the application again, and that they agree to the update.
Hope this helps.

How can I find out why some classic ASP pages randomly take a very long time to execute?

I'm working on a rather large classic ASP / SQL Server application.
A new version was rolled out a few months ago with a lot of new features, and I must have a very nasty bug somewhere: some very basic pages randomly take a very long time to execute.
A few clues:
It isn't the database: when I run the query profiler, it doesn't detect any long-running queries
When I launch IIS Diagnostic tools, reqviewer shows that the request is in state "processing"
This can happen on ANY page
I can't reproduce it easily, it's completely random.
To give an idea of "a very long time": this morning I had a page take more than 5 minutes to execute, when it normally should be returned to the client in less than 100 ms.
The application handles rather large uploads and downloads of files (up to 2 GB in size). These are also handled with a classic ASP script, using SoftArtisan FileUp. I don't think that causes the problem, though; we've had these uploads for quite a while now.
I've had the problem on two separate servers (in two separate locations, with different sets of data). One is running the application with good ol' SQL Server 2000 and the other runs SQL Server 2005. The web server is IIS 6 in both cases.
Any idea what the problem is, or how to troubleshoot this kind of problem?
Thanks.
Sebastien
Edit:
The problem came from memory fragmentation. Some ASP pages were used to download files from the server. File sizes could range from a few KB to more than 2 GB. These variations in size induced memory fragmentation. The ASP pages could also take quite some time to execute (the time for the user to download the file, minus what IIS serves from its cache), which is not standard for server pages, which should execute quickly.
This is what I did to improve things :
Put all the download logic in a single ASP page with session state turned off
That allowed me to put that ASP page in a dedicated application pool that could be recycled every so often (downloads no longer disturb the rest of the application)
Turn on the LFH (Low Fragmentation Heap), which is not on by default on Windows 2003, in order to reduce memory fragmentation
References for LFH :
http://msdn.microsoft.com/en-us/library/aa366750(v=vs.85).aspx
Link (there is a dll there that you can use to turn on LFH, but the article is in French. You'll have to learn our beautiful language now!)
I noticed the same thing on a classic ASP + Ajax application that I worked on. Using Timer, I timed the page load at 153 milliseconds, but the Firebug waterfall chart randomly says 3.5 seconds. The Timer output is in the response, and the waterfall chart claims that Firefox is waiting for a response from the server. Because the waterfall chart also shows the response, I can compare it to the Timer output, and there's a huge discrepancy every so often.
Can you establish whether this is a problem for all pages or a common subset of pages?
If it's a subset, examine what those pages have in common; for example, perhaps they all use a specific COM DLL that other pages don't.
Does this problem affect multiple clients or just a few?
In other words, is there an issue with a specific browser or OS version?
Is this public or intranet?
Can you reproduce the problem from a client you own?
Is there any chance there are some full-text search queries running on SQL Server?
Because if so, and if SQL Server has no internet access, it may cause a 45-second delay every few hours or so when it tries to verify certificates (though this does not apply to SQL Server 2000).
For a detailed explanation of what I'm referring to, read this.
Are any other apps running on your web server? If so, is your problematic application in the same app pool as any of them? If so, try creating a dedicated app pool for it. Maybe one of the other apps is having a problem and is adversely affecting yours.
One thing to watch out for: if you have server-side debugging turned on in IIS, the web server will run in single-threaded mode.
So if you try to load a page and someone else has hit that URL at the same time, you will be queued up behind them. It will seem like pages take a long time to load, but it's simply because the server is doling out page requests in a single-file line and sometimes you aren't at the front of the line.
You may have turned this on for debugging and forgotten to turn it off for production.

Elegant way to determine total size of website?

Is there an elegant way to determine the size of data downloaded from a website, bearing in mind that not all requests will go to the same domain that you originally visited, and that other browsers may be polling in the background at the same time? Ideally I'd like to look at the size of each individual page, or for a Flash site, the total downloaded over time.
I'm looking for some kind of browser plug-in or Fiddler script. I'm not sure Fiddler would work due to the issues pointed out above.
I want to compare sites similar to mine for total filesize - and keep track of my own site also.
Firebug and HttpFox are two Firefox plugins that can be used to determine the size of data downloaded from a website for a single page. While Firebug is a great tool for any web developer, HttpFox is a more specialized plugin for analyzing HTTP requests/responses (with their respective sizes).
You can install both and try them out; just be sure to disable one while the other is enabled.
If you need a website-wide measurement:
If the website is made of plain HTML and assets (like CSS, images, Flash, ...), you can check how big the folder containing the website is on the server (this assumes you can log in to the server)
You can mirror the website locally using wget, curl, or some GUI-based application like SiteSucker, and check how big the folder containing the mirror is
If you know the website is huge but you don't know how much, you can estimate its size. E.g., www.mygallery.com has 1000 galleries; each gallery has an average of 20 images; every image is stored in 2 different sizes (thumbnail and full size) at an average of n KB per image; ...
Keep in mind that if you download or estimate a dynamic website, you are dealing with what the website produces, not with the real size of the website on the server. A small PHP script can produce tons of HTML. (If you'd rather script the per-page measurement, see the sketch below.)
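If you'd rather script it than eyeball a plugin, here is a hedged Java sketch that downloads a hand-collected list of URLs (e.g. copied out of Firebug's Net panel) and tallies the bytes received. The URLs are placeholders, and it only counts what you list, so it misses anything you forget.

```java
// Tally the bytes actually transferred for a hand-collected list of
// resource URLs. With Accept-Encoding set, desktop Java's
// HttpURLConnection does not transparently decompress, so the count
// approximates bytes on the wire. URLs below are placeholders.
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class SizeTally {
    public static void main(String[] args) throws Exception {
        String[] urls = {                 // paste real resource URLs here
            "https://example.com/",
            "https://example.com/style.css"
        };
        long total = 0;
        for (String u : urls) {
            HttpURLConnection con = (HttpURLConnection) new URL(u).openConnection();
            con.setRequestProperty("Accept-Encoding", "gzip"); // mimic a browser
            try (InputStream in = con.getInputStream()) {
                byte[] buf = new byte[8192];
                for (int n; (n = in.read(buf)) != -1; ) {
                    total += n; // count raw (possibly compressed) bytes
                }
            }
        }
        System.out.printf("Total downloaded: %.1f KB%n", total / 1024.0);
    }
}
```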
Have you tried Firebug for Firefox?
The "Net" panel in Firebug will tell you the size and fetch time of each fetched file, along with the totals.
You can download the entire site and then you will know for sure!
https://www.httrack.com/
