ARKit allowsCameraControl - iOS 11

I'm enabling allowsCameraControl in my ARKit scene via an IBAction tied to a switch that sets self.sceneView.allowsCameraControl = true, and it works great. My switch logic looks fine in the debugger; however, when I set self.sceneView.allowsCameraControl = false, the camera doesn't return to its original tracking state. The objects stay stationary in my scene view. Any clues?

The allowsCameraControl option is defined by ARSCNView's superclass SCNView — which is to say, it's designed for non-AR situations. That it behaves strangely in that view's ARKit subclass is probably a bug (arguably, it shouldn't work at all, since in AR the camera is supposed to always match device movement). You might want to file that bug with Apple.
In the meantime, if you want to switch between AR (user controls camera by moving device) and non-AR (you control camera, or user controls camera via touch gestures) views of the same content, you might experiment with moving your scene between instances of ARSCNView and SCNView.
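A minimal sketch of that hand-off, assuming a view controller that owns both an ARSCNView and a plain SCNView (the outlet names and switch action below are hypothetical, and sharing one SCNScene between the two renderers is something you'd want to test with your own content):
import UIKit
import SceneKit
import ARKit

class ViewerViewController: UIViewController {
    // Hypothetical outlets: one view for AR tracking, one for free camera control.
    @IBOutlet weak var arView: ARSCNView!
    @IBOutlet weak var freeView: SCNView!

    @IBAction func cameraControlSwitched(_ sender: UISwitch) {
        if sender.isOn {
            // Hand the same scene to the non-AR view and let the user drive the camera.
            arView.session.pause()
            freeView.scene = arView.scene
            freeView.allowsCameraControl = true
            freeView.isHidden = false
            arView.isHidden = true
        } else {
            // Back to AR: show the AR view again and re-run the session so tracking resumes
            // (you may want run options such as resetting tracking, depending on your app).
            freeView.isHidden = true
            arView.isHidden = false
            arView.session.run(ARWorldTrackingConfiguration())
        }
    }
}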

Related

What is the intended use case of ARKit's tracked raycast?

I'm developing an AR app for iOS that lets the user place a model in the physical world, using ARKit and SceneKit.
I've looked at this Apple sample code for inspiration. In the sample project, they use tracked raycasts to position 3D models in a scene. This is close to what I want, and what led me to assume I need to do the same to achieve the most accurate positioning.
However, when I use a tracked raycast to position my model, the model drifts around the scene a lot as ARKit updates the position of the raycast.
I get much more stable positioning when using a non-tracked raycast.
That makes me ask: what actually is the intended use case for a tracked raycast? Am I understanding this API wrong?
I've tried:
Positioning the model using an image anchor. This is very stable.
Positioning the model using a non-tracked raycast. This is about as stable as the image anchor.
Positioning the model using a tracked raycast. This drifts all over the scene.
I also understand what an AR raycast in general is for:
getting the intersection of a 2D point on the screen with the 3D geometries that ARKit is tracking.
As this post has explained already.
In Apple's example app you mentioned, raycasting is used to continuously update the FocusSquare. You don't really need it for placing your model. You can take a single (real-world) position (via the FocusSquare) and place the model at that exact location; for this, you can fetch the static position data from the FocusSquare at the moment you add your model to the scene. I hope I understood correctly what you want.
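For example, here is a rough sketch of a one-shot (non-tracked) placement at the moment the user taps, assuming an ARSCNView outlet named sceneView and an already loaded modelNode (both names are placeholders, not from the sample project):
@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    let point = gesture.location(in: sceneView)
    // Build a raycast query from the tap location against estimated horizontal planes.
    guard let query = sceneView.raycastQuery(from: point,
                                             allowing: .estimatedPlane,
                                             alignment: .horizontal),
          let result = sceneView.session.raycast(query).first else { return }
    // One-shot raycast: read the position once and leave the node where it is,
    // instead of letting a tracked raycast keep refining (and moving) it.
    modelNode.simdWorldTransform = result.worldTransform
    sceneView.scene.rootNode.addChildNode(modelNode)
}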

Create something similar to the iPhone camera functions selector in Codename One

In this picture, there is a screenshot of the iPhone camera function selector: the user can scroll the functions horizontally, and the function name moved to the center is selected (it changes its color and calls a listener that activates the function). It's easier to try on a real iPhone than to describe.
The behavior is very similar to the lightweight string picker, but the main differences are that it's horizontal and it's always shown (while the string picker can be opened and closed).
At the moment, I have no idea how to replicate it in Codename One: I need to put it over a camera PeerComponent. I need something similar enough and usable: the rotating effect (which I suppose is hard to replicate) is very nice but not strictly necessary.
This is the one case where List has no better substitute. Notice that this doesn't cover the slight 3D effect in iOS. You can fake it a bit by using a layered layout and a gradient fade on top of the list, but that might not look great:
Form f = new Form("Horizontal List", new BorderLayout());
DefaultListCellRenderer.setShowNumbersDefault(false);
com.codename1.ui.List<String> l = new com.codename1.ui.List<>("Time-Lapse", "Slo-Mo", "Video", "Foto", "Ritrato");
l.setOrientation(com.codename1.ui.List.HORIZONTAL);
l.setFixedSelection(com.codename1.ui.List.FIXED_CENTER);
f.add(SOUTH, l);
f.show();

Can I create a motion colorizing pixel shader in WPF?

I have a video playing of lines being drawn on the screen. Is it possible to create a pixel shader (for WPF) that turns newly colored pixels a certain color for N milliseconds?
That way, there is some indication to the user of movement on the screen, since the lines don't change often and the user isn't always looking at the screen.
You can use DirectShow. It's written in unmanaged code, so you need the DirectShow.NET wrapper to use it from a C# application running in a managed environment (samples are included, even one using the EVR, the Enhanced Video Renderer, which gives much better video quality). When you pass a control handle to the wrapper method that sets the video output, you need a WinForms control, because only a WinForms control can give you the required handle. You can then host that WinForms control in your WPF application using the WindowsFormsHost control, which exists precisely for situations where you need WinForms control(s) in a WPF application. This is just theory, so I don't know if it's the ultimate solution for you.
BTW: The whole idea is based on the fact that DirectShow is essentially a graph built from separate filters. The renderer is a filter (EVR, VMR-7, VMR-9). The sound player is a filter. They are connected through their pins, like a diagram or an electronic schematic. You can, for example, put a grayscale filter in there, and voila, the video output will be grayscale. There are plenty of tutorials for that, as well as complete simple filters. Unfortunately, filters must be written in C++ :(
PS: I never said it's gonna be easy :D

iOS 6 Constraints Using Autolayout

I have an app that can be rotated, so I need to deal with portrait and landscape orientations. Additionally, users will be allowed to use pinch gestures to change the scale of views. Here is the basic hierarchy of the views.
mainView is a subview of self.view (from the context of the main view controller). It is a UIImageView, although the image part of it is relatively unimportant. In any case, this is the view within which the rest of the views in this discussion are placed as subviews.
The first is what I call the board. It is the view on which items are assembled by the user. These items are themselves image views.
Additionally, there are what I call palettes. These are simply views that can be resized and scaled by the user. The image views just mentioned can be dragged from one palette to another or to the board. The palettes can be thought of as work space for the user. When they have finished their work, they place their assembly onto the board.
So far, I've been working with the app so that the board is part of autolayout but the palettes are created programmatically as needed. This is good because when the user rotates the device, autolayout automatically places the board appropriately. At least it did until I wanted to add pinch scaling to it.
The board has the following constraints set on it in Interface Builder:
Leading, top, trailing, and bottom all set to superview default.
When the user scales the view, the result is that it sort of sticks to the upper-left corner of the screen. I'd rather have it retain the center.
I tried changing this programmatically by adding the following code to the pinch gesture recognizer for this view:
if (self.pinchView.tag == TAGBOARD) {
[NSLayoutConstraint constraintWithItem:self.pinchView
attribute:NSLayoutAttributeCenterX
relatedBy:NSLayoutRelationEqual
toItem:self.mainView
attribute:NSLayoutAttributeCenterX
multiplier:1.0
constant:0];
}
but this seemed to do nothing. I'm guessing it's because it conflicts with the IB constraints. Is there something else I can do to make this work with autolayout? Or should I just do it all programmatically like I do with the other views?
In this code example, self.pinchView is the view to which the pinch gesture is applied. For the sake of this discussion, it is what I've called the board. The self.mainView view is its superview.
The part we're not seeing in the code is something like
[self.pinchView addConstraint:yourNewConstraint];
I can see where you create the constraint, but not where you add it to the view. If you want your constraint to win, you'll need to remove the conflicting constraints or make sure the new constraint has a higher priority.
If your view should stay centered, try adjusting the constraints in the storyboard to pin the width and height to their defaults and to align the view horizontally in the center. That should satisfy autolayout and replicate what you're trying to add; then, in your pinch recognizer, you can change the constants of the width and height constraints. Be sure to drag your width and height constraints into your controller to create outlets so that you can adjust them during the pinch gesture.
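As a rough sketch of that last part (in Swift, with hypothetical boardWidthConstraint and boardHeightConstraint outlets, and assuming the width and height are pinned to constants):
@IBOutlet weak var boardWidthConstraint: NSLayoutConstraint!
@IBOutlet weak var boardHeightConstraint: NSLayoutConstraint!

@IBAction func handlePinch(_ recognizer: UIPinchGestureRecognizer) {
    // Scale the size constraints; the center-alignment constraint keeps the board centered.
    boardWidthConstraint.constant *= recognizer.scale
    boardHeightConstraint.constant *= recognizer.scale
    recognizer.scale = 1.0   // reset so the next callback reports an incremental scale
    view.layoutIfNeeded()
}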

WPF Flick along a path (Surface)

I'm developing an app for Microsoft Surface and I'm trying to make the most of the libraries that are out there. The functionality I'm after is to be able to flick a UI element.
The ScatterView control makes this easy, but I would like to restrict the UI element so it can only be flicked along a set path. This is where I'm having trouble.
So my questions are:
1) Can you restrict a ScatterViewItem to only be flicked along a path?
2) If not, how would you implement a flick gesture to flick a UI element along a set path?
Thanks!
Mark
1) Not that I know of, and this probably isn't the best way to approach it.
2) Assuming you have the object you want flicked and the path at design time, I've previously implemented dragging and flicking along a path by creating a timeline animation that represents the movement across the entire path. At runtime, I capture contacts on that object, feed them to an Affine2DManipulationProcessor, and seek the animation based on the manipulation events.
In my case I was creating a drawer. When the user touches the drawer, I start the animation and pause it immediately. If the user drags it open, I seek the animation forward by an appropriate amount based on how far the manipulation processor tells me they've moved.
To get the flick behavior, you just hand off the manipulation to the Affine2DInertiaProcessor and continue handling the delta events.
This all works surprisingly well.
