Background fetch in iOS not working - Codename One

Background fetch functionality with the Codename One 1.5 release is working fine on Android devices, but it's not working on iOS devices. I have also added the "ios.background_modes=fetch" property. Can you please help me with this?

On iOS you'll have less control over background fetch than you do on Android. The OS decides when it will allow you to perform a background fetch, and this depends on many factors. On Android it works more like clockwork: you specify the preferred interval, and it performs an update on that interval (minimum 60 seconds, I think). On iOS it could be 10 minutes, or the OS could decide to disallow you altogether if your app has been taking too long on previous fetches, among other things.
So given that information, what tests have you run to conclude that it is not working?

Related

React app becomes slow after long usage in Safari

I'm running a learning app written in React. The average session duration is 15 minutes. Recently some Safari users (mobile and desktop) reported that the app feels laggy after a fair bit of use (answering questions).

I monitored memory usage in Safari while answering some questions in the app. After a while, there was a spike in page memory usage (a ridiculous amount). A minute later, page memory went up again and stayed there. From then on, the app felt quite laggy.

The vast majority of JavaScript events are Animation Frame Requested and Animation Frame Fired events. That's why I thought animations might be the culprit. I'm using react-spring and found this open issue.

I can only run Safari in a Docker container on my Linux machine, as I don't have an Apple device (https://github.com/sickcodes/Docker-OSX). Maybe that is the reason for the weirdly high page memory usage (I wish I had that amount of RAM).

I'm kind of stuck and don't know how to proceed. Has anyone experienced similar issues with React, react-spring and Safari?
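One common cause of runaway Animation Frame Requested / Animation Frame Fired events, worth ruling out here, is a requestAnimationFrame loop started in an effect and never cancelled, so every mount/unmount cycle leaves another loop running and retaining its closure. A minimal sketch of guarding against that, using a hypothetical useRafLoop hook (not something from your code or from react-spring):

```typescript
import { useEffect, useRef } from "react";

// Hypothetical hook: drives an animation via requestAnimationFrame and
// guarantees the loop is cancelled when the component unmounts.
function useRafLoop(onFrame: (elapsedMs: number) => void) {
  const frameId = useRef(0);

  useEffect(() => {
    const start = performance.now();

    const tick = (now: number) => {
      onFrame(now - start);
      frameId.current = requestAnimationFrame(tick);
    };
    frameId.current = requestAnimationFrame(tick);

    // Without this cleanup, each mounted instance leaves a loop running
    // forever, which shows up as a steady stream of animation-frame events.
    return () => cancelAnimationFrame(frameId.current);
  }, [onFrame]);
}

export default useRafLoop;
```

react-spring drives its own frame loop internally, so if the leak is inside the library the linked issue is the place to watch; the sketch above only covers hand-rolled loops in your own components.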

How can I spoof the time when running a webpack/React app (only for testing)?

I have had a user report an issue where between midnight and 1am, some of my app's functions don't work correctly.
To recreate these issues, I would either need to wait until midnight to fix them, or somehow emulate the time so the app thinks it's midnight. Is there a way I can do this without changing my Windows system time, and preferably without using a VM? My current technologies are:
Windows 10
React 17.0.1
Webpack 5.9.0
I am open to using more npm packages if required. Any suggestions would be helpful.
I have answered my own question. There is a Chrome extension, https://chrome.google.com/webstore/detail/change-timezone-time-shif/nbofeaabhknfdcpoddmfckpokmncimpj/related?hl=en, that will quickly change your timezone in the current browser session. Hope someone else finds this useful.
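For automated tests, as opposed to manually poking the running app, Jest's modern fake timers can pin the clock to just after midnight without touching the Windows system time. A minimal sketch, assuming Jest 27+ and a hypothetical isQuietHours helper exported by the app:

```typescript
// midnight.test.ts -- sketch only; isQuietHours is a hypothetical app helper.
import { isQuietHours } from "./timeUtils";

describe("behaviour between midnight and 1am", () => {
  beforeEach(() => {
    jest.useFakeTimers();
    // Pin "now" to 00:30 local time so new Date() / Date.now() in app code see it.
    jest.setSystemTime(new Date(2021, 0, 15, 0, 30, 0));
  });

  afterEach(() => {
    jest.useRealTimers();
  });

  it("detects the midnight-to-1am window", () => {
    expect(isQuietHours(new Date())).toBe(true);
  });
});
```

The extension covers manual verification in the browser; the fake-timer route turns the midnight behaviour into a repeatable regression test.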

Get running / walking pace in steps per minute using React Native

I'm developing an app in which I'll play music that is synced to your current walking or running pace. To do so, I have tried using the pedometer input from Expo, but this isn't working too well: the pedometer input is delayed and is therefore hard to work with. Is there a "better" way to get pedometer input in React Native more reliably? Otherwise I could opt to estimate a walking/running pace by measuring how fast the user is travelling with Google Maps or similar.
How would you approach my problem?
I would drop Expo. With Expo you will be limited in the capabilities of your app, meaning you won't be able to add third-party native libraries, and those are what you are looking for to get the data you want. A quick Google search found this, and I guarantee there are many other similar packages if this doesn't quite fit your needs: https://github.com/mathieudutour/react-native-pedometer
Once you drop Expo, a world of possibilities will open up to you. If you are unable to drop Expo, then, not being an Expo expert, I'm not sure what can be done to get better data. Depending on what you have built so far, dropping Expo may be a difficult process. Either way, good luck.
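Whichever pedometer source you end up with, steps per minute still has to be derived from timestamped step-count samples. A rough sketch of that calculation, written against Expo's Pedometer.watchStepCount from expo-sensors purely for illustration (the 15-second window is an arbitrary choice; swap in the subscription API of whatever library you pick):

```typescript
import { Pedometer } from "expo-sensors";

// Sliding window of (timestamp, cumulative step count) samples.
const WINDOW_MS = 15_000; // trade-off between responsiveness and noise
const samples: { t: number; steps: number }[] = [];

function cadenceStepsPerMinute(): number {
  if (samples.length < 2) return 0;
  const first = samples[0];
  const last = samples[samples.length - 1];
  const minutes = (last.t - first.t) / 60_000;
  return minutes > 0 ? (last.steps - first.steps) / minutes : 0;
}

// watchStepCount reports steps counted since the subscription started.
const subscription = Pedometer.watchStepCount(({ steps }) => {
  const now = Date.now();
  samples.push({ t: now, steps });
  // Drop samples that have fallen out of the window.
  while (samples.length && now - samples[0].t > WINDOW_MS) samples.shift();
  console.log(`~${Math.round(cadenceStepsPerMinute())} steps/min`);
});

// Later, e.g. when the screen unmounts: subscription.remove();
```

The same windowed-delta idea applies if you switch to a native pedometer library or fall back to GPS speed; only the data source changes.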

React Native production app remote diagnostics / user assistance

I have published an app built with React Native. Currently it's iOS only, but it may eventually be released for Android as well. I'd like a cross-platform solution to remotely assist customers who run into bugs, crashes or any unexpected behavior. While the app could continuously log everything to a server, I've found that that's not very helpful, since customers usually have very specific points in time that they need help with. Sifting through continuous logs is time-consuming and generally a waste of resources.
My hope is to give the user the ability to press a button to send the stack trace, the last N minutes' worth of logs, etc. directly to me. This wouldn't work in the case of a hard crash, of course, but the vast majority of the time the app is functional when there's something they need help with.
A pie-in-the-sky idea would be to let the user share their screen with me.
Found this related question but it doesn't fully encompass what I'm trying to accomplish:
Release mode diagnostics in React Native
BugSnag looks promising. It's a paid service.
https://www.bugsnag.com/platforms/react-native-error-reporting
I tried BugSnag and a few other services. In the end, Sentry had the most reliable and simplest React Native library. It's also free on the Developer plan (5k errors per month is plenty for us, and it supports multiple apps).
https://sentry.io/pricing/
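For the "press a button to send the last N minutes of logs" idea from the question, Sentry's breadcrumbs map onto it fairly directly: breadcrumbs recorded before an event are attached to that event. A minimal sketch with @sentry/react-native, where the log wrapper and the report handler are assumptions for illustration rather than anything the SDK requires:

```typescript
import * as Sentry from "@sentry/react-native";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // your project's DSN
});

// Hypothetical logging wrapper: every app log also becomes a breadcrumb,
// so recent activity travels with whatever event is sent next.
export function log(message: string, data?: Record<string, unknown>) {
  console.log(message, data);
  Sentry.addBreadcrumb({ category: "app", message, data, level: "info" });
}

// Handler for a "Send report" button: ships a message event carrying the
// accumulated breadcrumbs so the user can flag a specific moment.
export function sendUserReport(userComment: string) {
  Sentry.captureMessage(`User report: ${userComment}`);
}

// Unexpected errors can be captured explicitly as well:
export function reportError(error: unknown) {
  Sentry.captureException(error);
}
```

Hard crashes should still be picked up by Sentry's native crash reporting on the next launch, which covers the case the button can't.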

In-app A/B Testing for Mobile

Is there a good solution for A/B testing in mobile apps, like there is for the web? I know that with iOS it's against the TOS to serve different user experiences for identical actions, but what about Android? And what about firms like Apsalar, which claim to offer A/B testing in their analytics for apps? How would one implement that?
Artisan Mobile makes an A/B testing solution for iOS and Android.
The basic idea is that you drop the SDK in your app and then put it out in the app store. You can use the service to create A/B tests and optimize your application without having to touch the code or go back through the app store for each test.
For mobile apps, A/B testing basically works by replacing static, hard-coded objects with dynamic objects that can be controlled from a remote server.
This methodology raises a potential performance issue: what if the end user's device is not connected and cannot pull configuration data for an object being tested? We've built Splitforce (http://splitforce.com) to seamlessly set up and manage A/B testing in mobile apps while controlling for that performance risk.
The details
Once the SDK and experiment have been integrated, non-technical product or marketing folks can set up new tests or tweak existing tests on the fly, without having to resubmit to the app stores or hassle engineers.
On first app launch, the mobile app requests configuration data from the server and then caches that data locally on the device. This both ensures a consistent user experience on subsequent app launches and prevents corrupted test results by guaranteeing accurate attribution of conversion events to variations.
If the end user's connection fails or times out on first app launch, the library displays a hard-coded 'default' variation. And to make sure that everything looks good before you go live, we've built 'shake to preview' functionality in debug mode that does just that :-)
Once the app is deployed with Splitforce, event data are stored locally and sent back to the website, where they are displayed for each variation alongside measurements of observed improvement and statistical confidence.
Instructions on integration of the SDKs and new tests can be found at https://splitforce.com/documentation.
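The first-launch fetch, local cache and hard-coded default described above are the core of most of these SDKs. A platform-agnostic sketch of that pattern in TypeScript, where the endpoint, cache key and AsyncStorage-based cache are illustrative assumptions rather than Splitforce's actual implementation:

```typescript
import AsyncStorage from "@react-native-async-storage/async-storage";

// Hard-coded defaults shipped with the app: used when nothing can be
// fetched or read from cache (e.g. an offline first launch).
const DEFAULT_CONFIG = { buttonColor: "#3478f6", menuPlacement: "bottom" };
type Config = typeof DEFAULT_CONFIG;

const CONFIG_URL = "https://example.com/ab/config"; // hypothetical endpoint
const CACHE_KEY = "ab-config";

export async function loadConfig(userId: string): Promise<Config> {
  try {
    // Short timeout so a slow network never blocks first render.
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), 2000);
    const res = await fetch(`${CONFIG_URL}?user=${userId}`, { signal: controller.signal });
    clearTimeout(timer);
    const config: Config = await res.json();
    // Cache so the same user keeps seeing the same variation next time, even offline.
    await AsyncStorage.setItem(CACHE_KEY, JSON.stringify(config));
    return config;
  } catch {
    // Fetch failed or timed out: fall back to the cache, then to the defaults.
    const cached = await AsyncStorage.getItem(CACHE_KEY);
    return cached ? (JSON.parse(cached) as Config) : DEFAULT_CONFIG;
  }
}
```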
And how is it used?
We've seen Splitforce used to A/B test:
UI elements + layouts (color, text, images, ad/menu placements)
UX workflows
Game dynamics + rules
Prices + promotions
We've also seen the tool used to control mobile apps remotely, by essentially setting one variation of a test subject to 100%.
Yes, there is. For example, the company Leanplum offers a Visual Interface Editor for iOS and Android. This requires no coding; Leanplum will automatically detect the elements and allow you to change them. No engineers or app store resubmissions are required.
Apple must have updated their TOS (https://developer.apple.com/app-store/review/guidelines/#user-interface); at least, I am not aware of anything that prohibits altering the UI in the way the Leanplum Visual Editor does.
Generally that is achieved by method swizzling (iOS) and reflection (Android).
To learn more about the Leanplum Visual Interface Editor, check out leanplum.com. They offer a free 30-day trial.
(Disclaimer: I am an Engineer at Leanplum.)
I wrote a small open source project called Switchboard.
It lets you A/B test, remotely configure and stage the rollout of things in your native mobile app. It contains a server component that specifies what information the application should have, and two native clients for Android and iOS.
You can find the codebase at github.com/keepsafe/switchboard and a blog post about how you can use it HERE
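The bucketing behind a tool like Switchboard is typically a deterministic hash of a user or device ID, so the same user always lands in the same variation, and a staged rollout is just a threshold on that hash. A small illustrative sketch (not Switchboard's actual algorithm):

```typescript
// Deterministically map a user ID to a bucket in [0, 100).
// Same user + same experiment -> same bucket on every launch.
function bucketFor(userId: string, experiment: string): number {
  const input = `${experiment}:${userId}`;
  let hash = 0;
  for (let i = 0; i < input.length; i++) {
    hash = (hash * 31 + input.charCodeAt(i)) >>> 0; // simple 32-bit rolling hash
  }
  return hash % 100;
}

// A/B split: the first half of the buckets sees variant A, the rest variant B.
export function variantFor(userId: string): "A" | "B" {
  return bucketFor(userId, "checkout-button") < 50 ? "A" : "B";
}

// Staged rollout: enable a feature for the first N% of buckets and raise
// the percentage server-side as confidence grows.
export function isFeatureEnabled(userId: string, rolloutPercent: number): boolean {
  return bucketFor(userId, "new-onboarding") < rolloutPercent;
}
```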
The new kid on the block is Arise.io. They provide an A/B testing service for iOS and Android.
I wrote MSActiveConfig, an extremely flexible framework for remote configuration and A/B testing on iOS, with a portable format so that clients can be implemented on other platforms: https://github.com/mindsnacks/MSActiveConfig.
This framework is being used in applications with more than 5 million users.
There has been a spate of new entrants in this field; you could check out Swerve, Appiterate and Leanplum. All of them seem to have SDKs for iOS as well. I'm not really sure whether and how the Apple TOS allows for that, but since so many of them are doing it, there must be a way.
Yes, new entrants are showing up in app A/B testing practically every week! But I think Appiterate has gone two steps ahead of its competitors by creating a visual interface, without any need to rewrite code. I have seen their platform (you can ask for an invite; I got a demo within 12 hours) and, believe me, it is actual WYSIWYG that they are providing.