Detect or react to Guided Access? - ios6

In an app we're creating, we need to add some extra screens to configure the app.
It would be nice if we could add extra buttons to the opening screen of the app that are only visible when the iPad is not in Guided Access.
Is it possible to detect that the device is currently running in Guided Access, and to react to it being enabled or disabled?

You want something like this:
NSLog(#"Accessabilitiy enabled: %#", UIAccessibilityIsGuidedAccessEnabled() ? #"YES" : #"NO");
if (!UIAccessibilityIsGuidedAccessEnabled()) {
    // show something since I'm not in guided access
}
If you want to know when it changes...
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(guidedAccessChanged) name:UIAccessibilityGuidedAccessStatusDidChangeNotification object:nil];
then check to see if it is on or off as per the first test.

Guided Access in depth at WWDC 2013 (begins at 39:26)
Check if Guided Access is enabled (iOS 6+):
UIAccessibilityIsGuidedAccessEnabled()
Respond to Guided Access status changes (iOS 6+):
UIAccessibilityGuidedAccessStatusDidChangeNotification
Add custom restrictions while in Guided Access mode (iOS 7+):
UIGuidedAccessRestrictionDelegate
Get the restriction state for a specified restriction (iOS 7+):
Swift:
func UIGuidedAccessRestrictionStateForIdentifier(_ restrictionIdentifier: String) -> UIGuidedAccessRestrictionState
Obj-C:
UIGuidedAccessRestrictionState UIGuidedAccessRestrictionStateForIdentifier(NSString *restrictionIdentifier);
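To illustrate the iOS 7+ restriction delegate listed above, here is a minimal sketch of how adopting it might look; the AppDelegate extension, the identifier string, and the comments are illustrative assumptions, not code from the original answer:

import UIKit

// Sketch only: a custom Guided Access restriction (iOS 7+).
// The protocol is typically adopted by the app delegate; the identifier below is a made-up example.
extension AppDelegate: UIGuidedAccessRestrictionDelegate {
    var guidedAccessRestrictionIdentifiers: [String]? {
        return ["com.example.restriction.configuration"] // hypothetical identifier
    }
    func textForGuidedAccessRestriction(withIdentifier restrictionIdentifier: String) -> String? {
        return "Configuration"
    }
    func detailTextForGuidedAccessRestriction(withIdentifier restrictionIdentifier: String) -> String? {
        return "Allow access to the app's configuration screens"
    }
    func guidedAccessRestriction(withIdentifier restrictionIdentifier: String,
                                 didChange newRestrictionState: UIGuidedAccessRestrictionState) {
        // React here, e.g. hide the configuration UI while the restriction state is .deny
    }
}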

Swift 4.2:
if !UIAccessibility.isGuidedAccessEnabled {
    // show something since I'm not in guided access
}
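If you also need to react when the status changes, a minimal Swift 4.2 sketch could look like this; configureButton and guidedAccessObserver are hypothetical names for a button that should only be visible outside Guided Access and a stored property holding the observer token:

// e.g. in viewDidLoad of the opening screen's view controller
guidedAccessObserver = NotificationCenter.default.addObserver(
    forName: UIAccessibility.guidedAccessStatusDidChangeNotification,
    object: nil,
    queue: .main
) { [weak self] _ in
    // Hide the (hypothetical) configuration button while Guided Access is active
    self?.configureButton.isHidden = UIAccessibility.isGuidedAccessEnabled
}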

Related

How to implement Appium's new feature mobile:getContexts for Android hybrid automation

Appium introduced a feature to collect details of a webview using the mobile command mobile:getContexts. Using this feature, I want to rule out the detached window from the webview so I can switch to the window I actually need. How do I do that in Java? How do I filter the response of driver.executeScript("mobile:getContexts") and validate the value of the attached key? Thank you.
Please use the approach below for the latest Appium (v1.22+):

// Get the current context
String currentContext = (String) appiumDriver.execute(DriverCommand.GET_CURRENT_CONTEXT_HANDLE).getValue();
System.out.println("getContext: " + currentContext);

// Switch to context (NATIVE_APP)
appiumDriver.execute(DriverCommand.SWITCH_TO_CONTEXT, ImmutableMap.of("name", "NATIVE_APP"));

// Switch to context (WEBVIEW)
appiumDriver.execute(DriverCommand.SWITCH_TO_CONTEXT, ImmutableMap.of("name", "WEBVIEW"));

From iOS Objective-C code and Android Java code to a Codename One PeerComponent

At the page https://www.wowza.com/docs/how-to-build-a-basic-app-with-gocoder-sdk-for-ios there are the following examples:
if (self.goCoder != nil) {
    // Associate the U/I view with the SDK camera preview
    self.goCoder.cameraView = self.view;

    // Start the camera preview
    [self.goCoder.cameraPreview startPreview];
}

// Start streaming
[self.goCoder startStreaming:self];

// Stop the broadcast that is currently running
[self.goCoder endStreaming:self];
The equivalent Java code for Android is reported at https://www.wowza.com/docs/how-to-build-a-basic-app-with-gocoder-sdk-for-android#start-the-camera-preview:
// Associate the WOWZCameraView defined in the U/I layout with the corresponding class member
goCoderCameraView = (WOWZCameraView) findViewById(R.id.camera_preview);

// Start the camera preview display
if (mPermissionsGranted && goCoderCameraView != null) {
    if (goCoderCameraView.isPreviewPaused())
        goCoderCameraView.onResume();
    else
        goCoderCameraView.startPreview();
}

// Start streaming
goCoderBroadcaster.startBroadcast(goCoderBroadcastConfig, this);

// Stop the broadcast that is currently running
goCoderBroadcaster.endBroadcast(this);
The code is self-explanatory: the first blocks start a camera preview, the second blocks start streaming, and the third blocks stop it. I want the preview and the streaming inside a Codename One PeerComponent, but I don't remember / understand how I have to modify both of these native code examples to return a PeerComponent from the native interface.
(I tried reading the developer guide again, but I'm a bit confused on this point.)
Thank you
This is the key line in the iOS instructions:
self.goCoder.cameraView = self.view;
This is where you define the view that you need to return as the peer so it can be placed. You need to change it from self.view to a view object you create. I think you can just allocate a UIView and assign/return that.
For the Android code, instead of using the XML layout they use there, you can construct the WOWZCameraView directly and return that, as far as I can tell.

How to add a custom span into a Trace, in Go

My app runs on App Engine Standard and the Go runtime.
I have this trace for my recent request:
There is a big gap between the "urlfetch" span and the "datastore_v3" span, because my app processes some CPU-bound computation for ~1000ms.
I would love to programmatically add my computation as a custom span into the Trace view, and get something like this:
Is there a way to do this in my app written in Go? (source here)
It appears it might be possible. From Setting Up Stackdriver Trace for Go:
Alpha: This is an alpha release of the OpenCensus package for Go. These libraries might be changed in backward-incompatible ways and are not recommended for production use. They are not subject to any SLA or deprecation policy.
Stackdriver Trace can be used by Go applications using the OpenCensus package for Go.
Stackdriver Trace's Go support is provided by OpenCensus, a set of tracing and application metrics instrumentation libraries that work with multiple backends. The latest details about OpenCensus for Go, along with additional documentation and examples, can be found on its GitHub page.
Support is enabled by default in the flexible environment; however, the docs make no mention of the standard environment (if that's your case, I'd say just give it a try). From App Engine:
On Google App Engine flexible environment, the Stackdriver Trace API access scope is enabled by default, and the OpenCensus client library can be used without needing to provide credentials or a project ID.
An application code sample is provided on the same page.
I could make it work with the new AppEngine runtime for Go 1.11 (currently in beta) and OpenCensus with the Stackdriver exporter.
In order to attach my custom span to the main Trace of the request, I use this utility func:
// Start a new span "With Remote Parent"
func startSpanfWRT(r *http.Request, msg string, args ...interface{}) (c2 context.Context, endSpan func()) {
    caption := fmt.Sprintf(msg, args...)
    c := r.Context()
    spanContext, ok := (&propagation.HTTPFormat{}).SpanContextFromRequest(r)
    if !ok {
        return c, func() {}
    }
    var span *trace.Span
    c2, span = trace.StartSpanWithRemoteParent(c, caption, spanContext)
    endSpan = func() {
        span.End()
    }
    return c2, endSpan
}
Note that it requires the *http.Request as an argument (a context.Context wouldn't be enough here).
Here is the sample app source code.
As a span needs to be started and then later stopped, the start func returns an "end" callback, and a new Context as well.
It is fine to call startSpanfWRT multiple times, and the spans may overlap. It requires passing the *http.Request around, which is not super convenient (usually we only pass Contexts around).
However, after a call to startSpanfWRT, you may add child spans conveniently, just paying attention to the respective Contexts:
c2, childSpan := trace.StartSpan(c, caption)

Application crashes when setting the vocabulary for carName using SiriKit

I am trying to develop an application using SiriKit to get the car door lock status and set it from Siri. I followed this blog https://www.appcoda.com/sirikit-introduction/ and did all the setup, replacing INStartWorkoutIntent with INGetCarLockStatusIntent.
But when I try to set the vocabulary for carName, the application crashes with the following exception:
Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Illegal attempt to provide vocabulary of type INVocabularyStringTypeCarName by an app that does not handle any intents that could use that type of vocabulary'
The source code that I am using to set the vocabulary is:
INPreferences.requestSiriAuthorization { (status) in
}
INVocabulary.shared().setVocabularyStrings(["benz", "bmw", "audi"], of: .carName)
In the AppDelegate, I have added the following method:
func application(_ application: UIApplication, continue userActivity: NSUserActivity, restorationHandler: @escaping ([Any]?) -> Void) -> Bool {
    guard let intent = userActivity.interaction?.intent as? INGetCarLockStatusIntent else {
        print("AppDelegate: Start Workout Intent - FALSE")
        return false
    }
    print("AppDelegate: Start Workout Intent - TRUE")
    print(intent)
    return true
}
I also created the extension for the intent handler and implemented the INSetCarLockStatusIntentHandling and INGetCarLockStatusIntentHandling protocols. I am getting this issue when I try to run it on an iPhone 10.
Check whether your Siri extension has been added in your project's TARGETS under Build Phases -> Embed App Extensions. Maybe when you replaced INStartWorkoutIntent with INGetCarLockStatusIntent, the old INStartWorkoutIntent remained there.
This fixed my crash.
I was facing a similar issue. Make sure your extension's Deployment Target is set to an appropriate iOS version. Creating an extension with the latest Xcode (at the moment, 10.1) will set the Deployment Target to 12.1 and thus cause a crash when run on iOS 10. So you should change it to your desired minimum.
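For completeness, the exception above means the carName vocabulary can only be registered by an app whose extension actually handles a car lock intent (listed under IntentsSupported in the extension's Info.plist). A minimal, hypothetical handler in the Intents extension might look roughly like this; the class name and the hard-coded response are illustrative assumptions, not the original poster's code:

import Intents

// Hypothetical minimal handler for the car lock status intent.
class CarLockStatusIntentHandler: NSObject, INGetCarLockStatusIntentHandling {
    func handle(intent: INGetCarLockStatusIntent,
                completion: @escaping (INGetCarLockStatusIntentResponse) -> Void) {
        // Look up the real lock state here; this sketch just reports success.
        let response = INGetCarLockStatusIntentResponse(code: .success, userActivity: nil)
        completion(response)
    }
}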

UIActivityViewController showing "Unable to Load" row in iOS 11

I was using a UIActivityViewController instance in several previous versions of an app in order to share content through AirDrop and other services. Now, without even recompiling, when running the app on the iOS 11 beta, the dialog (the UIActivityViewController instance) shows the following row with the message "Unable to load":
Here is the code:
let handlerOk = { () -> Void in
    let activityItems = [NSURL(fileURLWithPath: fullPath, isDirectory: true)]
    self.viewControllerShare = UIActivityViewController(activityItems: activityItems, applicationActivities: nil)
    if let viewControllerShare = self.viewControllerShare {
        viewControllerShare.popoverPresentationController?.sourceView = sender
        let excludedActivities = [
            UIActivityType.postToFacebook,
            UIActivityType.postToTwitter,
            UIActivityType.postToWeibo,
            //UIActivityType.message,
            //UIActivityType.mail,
            UIActivityType.print,
            UIActivityType.copyToPasteboard,
            UIActivityType.assignToContact,
            UIActivityType.saveToCameraRoll,
            UIActivityType.addToReadingList,
            UIActivityType.postToFlickr,
            UIActivityType.postToVimeo,
            UIActivityType.postToTencentWeibo,
            UIActivityType.openInIBooks
        ]
        viewControllerShare.excludedActivityTypes = excludedActivities
        self.present(viewControllerShare, animated: true, completion: nil)
    }
}
Right now I'm compiling with Xcode 9 in Swift 4, but I don't think this is relevant since, as said, if an older version of the app is run on iOS 11 (compiled with Xcode 8.something, or even downloading the binary from TestFlight), the problem is the same. So this really looks like an iOS 11 API problem...
Is there anything new regarding the dialog initialization that may prevent the "Unable to load" row to appear?
EDIT: I tested a third-party app (from Apple) that uses the same sharing dialog, and found that that row belongs to iCloud. But I'm not using iCloud in my app, so the previous UIActivityViewController behaviour was correct: it just did not show anything. So I made an experiment: I disabled the iCloud permission for that third-party app in iOS Settings, and it showed the same unkind "Unable to load" message. I hope this is not the way the final iOS 11 version works, and that it's just a beta problem.
That said, workarounds are still welcome! Thanks!
