Using SCNRenderer with ARKit produces unexpected results

I'm trying to export the depth texture created during the rendering pass of an ARSCNView. To do that, I wrote code that renders the SCNScene offscreen with a custom MTLRenderPassDescriptor. When I trace the resources bound to the GPU with the GPU Capture tool, I find that the custom MTLRenderPassDescriptor is ignored by the SCNRenderer.render method.
I use this code to render the SCNScene offscreen:
// Render Pass - render sceneView
renderer.scene = sceneView.scene
renderer.pointOfView = sceneView.pointOfView
renderer.render(atTime: 0, viewport: viewport, commandBuffer: commandBuffer, passDescriptor: renderPassDescriptor)
When I check the resources in GPU Capture, the renderer produces its own frame texture and depth texture rather than the ones described in the renderPassDescriptor, which contradicts the documentation. I also tested this without an ARKit session, and there it works as I expect (the renderer uses the textures described in the renderPassDescriptor).
How can I fix this? Is this a SceneKit bug?
Image 1. GPU Capture shows that the depth texture is not linked to the blit pass.
Image 2. The color attachment texture address is 0x144a4f310
Image 3. The depth attachment texture address is 0x144a50050
Image 4. The textures bound to the render method have different addresses.
Here is a minimal working example.
import UIKit
import SceneKit
import ARKit
class ViewController: UIViewController, ARSCNViewDelegate, ARSessionDelegate {
  @IBOutlet var sceneView: ARSCNView!
  var ship: SCNNode!
  var device: MTLDevice!
  var renderer: SCNRenderer!
  var commandQueue: MTLCommandQueue!
  let textureSizeX = 2732
  let textureSizeY = 2048
  lazy var viewport = CGRect(x: 0, y: 0, width: CGFloat(textureSizeX), height: CGFloat(textureSizeY))
   
   
  override func viewDidLoad() {
    super.viewDidLoad()
     
    // Set the view's delegate
    sceneView.delegate = self
    sceneView.session.delegate = self
    sceneView.showsStatistics = true
    sceneView.scene = SCNScene()
     
    ship = SCNScene(named: "art.scnassets/ship.scn")?.rootNode.childNode(withName: "shipMesh", recursively: true)!
    sceneView.scene.rootNode.addChildNode(ship)
     
     
    // background renderer
    device = MTLCreateSystemDefaultDevice()!
    renderer = SCNRenderer(device: device, options: nil)
    commandQueue = device.makeCommandQueue()!
  }
   
  override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
     
    guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) else {
      fatalError("Missing expected asset catalog resources.")
    }
    // Create a session configuration
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionImages = referenceImages
    // Run the view's session
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
  }
   
  override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)
     
    // Pause the view's session
    sceneView.session.pause()
  }
  // MARK: - ARSCNViewDelegate
   
  // Override to create and configure nodes for anchors added to the view's session.
  func renderer(_ renderer: SCNSceneRenderer, willRenderScene scene: SCNScene, atTime time: TimeInterval) {
    doRender()
  }
   
  func doRender() {
    let renderPassDescriptor = makeRenderPassDescriptor()
    let commandBuffer = commandQueue.makeCommandBuffer()!
     
    // Render Pass - render sceneView
    renderer.scene = sceneView.scene
    renderer.pointOfView = sceneView.pointOfView
    renderer.render(atTime: 0, viewport: viewport, commandBuffer: commandBuffer, passDescriptor: renderPassDescriptor)
     
    // Blit Pass - copy depth texture to buffer
    let imageWidth = Int(textureSizeX)
    let imageHeight = Int(textureSizeY)
    let pixelCount = imageWidth * imageHeight
    let depthImageBuffer = device.makeBuffer(length: 4 * pixelCount, options: .storageModeShared)!
    let blitEncoder = commandBuffer.makeBlitCommandEncoder()!
    blitEncoder.copy(from: renderPassDescriptor.depthAttachment.texture!,
             sourceSlice: 0,
             sourceLevel: 0,
             sourceOrigin: MTLOriginMake(0, 0, 0),
             sourceSize: MTLSizeMake(imageWidth, imageHeight, 1),
             to: depthImageBuffer,
             destinationOffset: 0,
             destinationBytesPerRow: 4 * imageWidth,
             destinationBytesPerImage: 4 * pixelCount,
             options: .depthFromDepthStencil)
    blitEncoder.endEncoding()
     
     
    commandBuffer.commit()
    // Wait until depth buffer copying is done.
    commandBuffer.waitUntilCompleted()
  }
   
  func makeRenderPassDescriptor() -> MTLRenderPassDescriptor {
    let renderPassDescriptor = MTLRenderPassDescriptor()
    let frameBufferDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm_srgb, width: textureSizeX, height: textureSizeY, mipmapped: false)
    frameBufferDescriptor.usage = [.renderTarget, .shaderRead]
    renderPassDescriptor.colorAttachments[0].texture = device.makeTexture(descriptor: frameBufferDescriptor)!
    renderPassDescriptor.colorAttachments[0].loadAction = .clear
    renderPassDescriptor.colorAttachments[0].storeAction = .store
    renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColorMake(1, 1, 1, 1.0)
    let depthBufferDescriptor: MTLTextureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .depth32Float, width: textureSizeX, height: textureSizeY, mipmapped: false)
    depthBufferDescriptor.usage = .renderTarget
    renderPassDescriptor.depthAttachment.texture = device.makeTexture(descriptor: depthBufferDescriptor)
    renderPassDescriptor.depthAttachment.loadAction = .clear
    renderPassDescriptor.depthAttachment.storeAction = .store
     
    return renderPassDescriptor
  }
   
  func session(_ session: ARSession, didFailWithError error: Error) {
    // Present an error message to the user
     
  }
   
  func session(_ session: ARSession, didUpdate frame: ARFrame) {
  }
   
  func sessionWasInterrupted(_ session: ARSession) {
    // Inform the user that the session has been interrupted, for example, by presenting an overlay
     
  }
   
  func sessionInterruptionEnded(_ session: ARSession) {
    // Reset tracking and/or remove existing anchors if consistent tracking is required
     
  }
}
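For reference, once commandBuffer.waitUntilCompleted() returns, the exported depth values can be read back on the CPU. This is only a minimal sketch, reusing depthImageBuffer, imageWidth, imageHeight and pixelCount from doRender() above; if every value still equals the depth clear value (1.0 by default), SceneKit never wrote into the custom attachment.
// Interpret the blitted bytes as 32-bit floats, matching the .depth32Float attachment.
let depthPointer = depthImageBuffer.contents().bindMemory(to: Float.self, capacity: pixelCount)
let depthValues = Array(UnsafeBufferPointer(start: depthPointer, count: pixelCount))
// Sample the center pixel as a quick sanity check.
print("depth at center:", depthValues[(imageHeight / 2) * imageWidth + imageWidth / 2])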

Related

How to put an image overlay over video

I want to put an image overlay over a video, but I'm not sure how to do this. I'm trying to modify the example from this repo: Azure Media Services v3 .NET Core tutorials. Basically, what I changed here is the transform:
private static Transform EnsureTransformForOverlayExists(IAzureMediaServicesClient client, string resourceGroupName, string accountName, string transformNameNew)
{
    Console.WriteLine(transformNameNew);
    Transform transform = client.Transforms.Get(resourceGroupName, accountName, transformName);
    if (transform == null)
    {
        TransformOutput[] outputs = new TransformOutput[]
        {
            new TransformOutput(
                new StandardEncoderPreset(
                    codecs: new Codec[]
                    {
                        new AacAudio(
                            channels: 2,
                            samplingRate: 48000,
                            bitrate: 128000,
                            profile: AacAudioProfile.AacLc
                        ),
                        new H264Video(
                            stretchMode: "AutoFit",
                            keyFrameInterval: TimeSpan.FromSeconds(2),
                            layers: new[]
                            {
                                new H264Layer(
                                    bitrate: 1500000,
                                    maxBitrate: 1500000,
                                    width: "640",
                                    height: "360"
                                )
                            }
                        ),
                        new PngImage(
                            start: "25%",
                            step: "25%",
                            range: "80%",
                            layers: new PngLayer[]
                            {
                                new PngLayer(
                                    width: "50%",
                                    height: "50%"
                                )
                            }
                        ),
                    },
                    filters: new Filters
                    {
                        Overlays = new List<Overlay>
                        {
                            new VideoOverlay("input1")
                        }
                    },
                    formats: new Format[]
                    {
                        new Mp4Format(
                            filenamePattern: "{Basename}_letterbox{Extension}"
                        ),
                        new PngFormat(
                            filenamePattern: "{Basename}_{Index}_{Label}_{Extension}"
                        ),
                    }
                ))
        };
        transform = client.Transforms.CreateOrUpdate(resourceGroupName, accountName, transformName, outputs);
    }
    return transform;
}
and the RunAsync method to provide multiple inputs, one of which should be an overlay:
private static async Task RunAsync(ConfigWrapper config)
{
    IAzureMediaServicesClient client = await CreateMediaServicesClientAsync(config);

    // Set the polling interval for long running operations to 2 seconds.
    // The default value is 30 seconds for the .NET client SDK
    client.LongRunningOperationRetryTimeout = 2;

    try
    {
        // Ensure that you have customized encoding Transform. This is really a one time setup operation.
        Transform overlayTransform = EnsureTransformForOverlayExists(client, config.ResourceGroup, config.AccountName, transformName);

        // Creating a unique suffix so that we don't have name collisions if you run the sample
        // multiple times without cleaning up.
        string uniqueness = Guid.NewGuid().ToString().Substring(0, 13);
        string jobName = "job-" + uniqueness;
        string inputAssetName = "input-" + uniqueness;
        string outputAssetName = "output-" + uniqueness;

        Asset asset = client.Assets.CreateOrUpdate(config.ResourceGroup, config.AccountName, inputAssetName, new Asset());

        var inputs = new JobInputs(new List<JobInput>());
        var input = new JobInputHttp(
            baseUri: "https://nimbuscdn-nimbuspm.streaming.mediaservices.windows.net/2b533311-b215-4409-80af-529c3e853622/",
            files: new List<String> { "Ignite-short.mp4" },
            label: "input1"
        );
        inputs.Inputs.Add(input);

        input = new JobInputHttp(
            baseUri: "SomeBaseUriHere",
            files: new List<string> { "AssetVideo_000001_None_.png" },
            label: "overlay");
        inputs.Inputs.Add(input);

        Asset outputAsset = CreateOutputAsset(client, config.ResourceGroup, config.AccountName, outputAssetName);
        Job job = SubmitJob(client, config.ResourceGroup, config.AccountName, transformName, jobName, inputs, outputAsset.Name);

        DateTime startedTime = DateTime.Now;
        job = WaitForJobToFinish(client, config.ResourceGroup, config.AccountName, transformName, jobName);
        TimeSpan elapsed = DateTime.Now - startedTime;

        if (job.State == JobState.Finished)
        {
            Console.WriteLine("Job finished.");
            if (!Directory.Exists(outputFolder))
                Directory.CreateDirectory(outputFolder);
            await MakeContainerPublic(client, config.ResourceGroup, config.AccountName, outputAsset.Name, config.BlobConnectionString);
            DownloadResults(client, config.ResourceGroup, config.AccountName, outputAsset.Name, outputFolder).Wait();
        }
        else if (job.State == JobState.Error)
        {
            Console.WriteLine($"ERROR: Job finished with error message: {job.Outputs[0].Error.Message}");
            Console.WriteLine($"ERROR: error details: {job.Outputs[0].Error.Details[0].Message}");
        }
    }
    catch (ApiErrorException ex)
    {
        string code = ex.Body.Error.Code;
        string message = ex.Body.Error.Message;
        Console.WriteLine("ERROR:API call failed with error code: {0} and message: {1}", code, message);
    }
}
But I get this error:
Microsoft.Cloud.Media.Encoding.PresetException: Preset ERROR: There are 2 input assets. Preset has 2 Source but does NOT specify AssetID for each Source
and I have no idea how to overcome this.
At this time, it is not possible to use v3 APIs to create overlays. The feature is not fully implemented. See this link for other gaps between the v2 and v3 APIs.
For more details, you can see this site.

Why is the exception thrown: Can't endBackgroundTask: no background task exists with identifier, or it may have already been ended?

Can't endBackgroundTask: no background task exists with identifier 48fcd6, or it may have already been ended. Break in UIApplicationEndBackgroundTaskError() to debug.
Here is what I tried:
- (void)applicationDidEnterBackground:(UIApplication *)application
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSUserDefaults *prefs = [NSUserDefaults standardUserDefaults];
        NSString *myString = [prefs stringForKey:@"getTime"];
        int timeInterval = [myString integerValue];
        intervalForTimer = timeInterval;

        [timer invalidate];
        timer = nil;
        timer = [NSTimer scheduledTimerWithTimeInterval:intervalForTimer
                                                 target:self
                                               selector:@selector(startCoreUpdate)
                                               userInfo:nil
                                                repeats:YES];
        // change to NSRunLoopCommonModes
        [[NSRunLoop currentRunLoop] addTimer:timer forMode:NSRunLoopCommonModes];
        [[NSRunLoop currentRunLoop] run];
    });

    UIBackgroundTaskIdentifier back =
        [[UIApplication sharedApplication] beginBackgroundTaskWithExpirationHandler:^{
            [self startCoreUpdate];
            [[UIApplication sharedApplication] endBackgroundTask:back];
        }];
}

- (void)startCoreUpdate {
    notification = [[AlertNotificationViewController alloc] init];
    [notification recentUpdatesVideo];
}
So my question is: why am I getting this error/warning? Is there any solution? It interrupts my notification timer, and the timer fires before its scheduled time.
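For comparison, the usual begin/end pairing (shown here only as a minimal Swift sketch, not your exact code; the class name is a placeholder) keeps the identifier in a property, ends the task exactly once, and resets it to .invalid so it can never be ended twice, which is what the warning complains about:
import UIKit

final class AppDelegateSketch: UIResponder, UIApplicationDelegate {
    private var backgroundTask: UIBackgroundTaskIdentifier = .invalid

    func applicationDidEnterBackground(_ application: UIApplication) {
        // Begin the task and remember its identifier.
        backgroundTask = application.beginBackgroundTask { [weak self] in
            // Expiration handler: finish up, then end the task exactly once.
            self?.endBackgroundTaskIfNeeded()
        }
        // ... schedule the timer / update work here ...
    }

    private func endBackgroundTaskIfNeeded() {
        guard backgroundTask != .invalid else { return } // never end a task twice
        UIApplication.shared.endBackgroundTask(backgroundTask)
        backgroundTask = .invalid
    }
}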

When and how is collectionView:viewForSupplementaryElementOfKind:atIndexPath: called?

I'm trying to implement a timeline view like the native Calendar app's, using UICollectionView and a custom layout, and I'm new to it.
Here is where my problem comes from.
I'm using decoration views for the background gridlines, trying to use supplementary views for the time labels (near the gridlines), and will use cells for the events, but I'm not that far yet.
But before getting to the events, I found that when I run it, none of the supplementary views show up, whether or not I have cells, and my collectionView:viewForSupplementaryElementOfKind:atIndexPath: method is never called.
So I'm wondering: how and when is this method called? What could be causing it not to be called in my case?
Actually, is it a good idea to make those time labels supplementary views? I'm not sure, since I need them to be visible even when there's no event (no cell/item in a section).
Here is my code:
View Controller
- (void)viewDidLoad
{
    [super viewDidLoad];
    self.collectionView.backgroundColor = [UIColor whiteColor];
    [self.collectionView registerClass:TodayCellKindTask.class forCellWithReuseIdentifier:CellKindTaskIdentifier];
    [self.collectionView registerClass:TodayTimelineTimeHeader.class forSupplementaryViewOfKind:TimelineKindTimeHeader withReuseIdentifier:TimelineTimeHeaderIdentifier];
    [self.timelineViewLayout registerClass:TodayTimelineTileWhole.class forDecorationViewOfKind:TimelineKindTileWholeHour];
    [self.timelineViewLayout registerClass:TodayTimelineTileHalf.class forDecorationViewOfKind:TimelineKindTileHalfHour];
}

- (NSInteger)collectionView:(UICollectionView *)collectionView numberOfItemsInSection:(NSInteger)section {
    return 5;
}

- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath {
    TodayCellKindTask *cellTask = [collectionView dequeueReusableCellWithReuseIdentifier:CellKindTaskIdentifier forIndexPath:indexPath];
    return cellTask;
}

- (UICollectionReusableView *)collectionView:(UICollectionView *)collectionView viewForSupplementaryElementOfKind:(NSString *)kind atIndexPath:(NSIndexPath *)indexPath {
    NSLog(@"viewForSupplementaryElementOfKind");
    TodayTimelineTimeHeader *timeHeader = [self.collectionView dequeueReusableSupplementaryViewOfKind:TimelineKindTimeHeader withReuseIdentifier:TimelineTimeHeaderIdentifier forIndexPath:indexPath];
    NSCalendar *calendar = [NSCalendar currentCalendar];
    NSDate *today = [NSDate date];
    NSDateComponents *comps = [calendar components:(NSYearCalendarUnit | NSMonthCalendarUnit | NSDayCalendarUnit) fromDate:today];
    [comps setHour:indexPath.item];
    timeHeader.time = [calendar dateFromComponents:comps];
    return timeHeader;
}
Custom Layout
// in prepareLayout
NSMutableDictionary *timeHeaderAttributes = [NSMutableDictionary dictionary];
CGSize headerSize = [TodayTimelineTimeHeader defaultSize];
CGFloat headerOffsetY = (tileSize.height - headerSize.height) / 2;
for (NSInteger hour = 24; hour >= 0; hour--) {
    NSIndexPath *timeHeaderIndexPath = [NSIndexPath indexPathForItem:hour inSection:0];
    UICollectionViewLayoutAttributes *currentTimeHeaderAttributes = [UICollectionViewLayoutAttributes layoutAttributesForSupplementaryViewOfKind:TimelineKindTimeHeader withIndexPath:timeHeaderIndexPath];
    CGFloat headerPosY = hour * 2 * tileSize.height + headerOffsetY;
    currentTimeHeaderAttributes.frame = CGRectMake(TimeHeaderPosX, headerPosY, headerSize.width, headerSize.height);
    timeHeaderAttributes[timeHeaderIndexPath] = currentTimeHeaderAttributes;
}
self.timelineTileAttributes[TimelineKindTimeHeader] = timeHeaderAttributes;

// layoutAttributesForSupplementaryViewOfKind
- (UICollectionViewLayoutAttributes *)layoutAttributesForSupplementaryViewOfKind:(NSString *)kind atIndexPath:(NSIndexPath *)indexPath {
    return self.timelineTileAttributes[kind][indexPath];
}

// layoutAttributesForElementsInRect
- (NSArray *)layoutAttributesForElementsInRect:(CGRect)rect {
    NSMutableArray *allAttributes = [NSMutableArray arrayWithCapacity:self.timelineTileAttributes.count];
    [self.timelineTileAttributes enumerateKeysAndObjectsUsingBlock:^(NSString *elementIdentifier, NSDictionary *elementsInfo, BOOL *stop) {
        [elementsInfo enumerateKeysAndObjectsUsingBlock:^(NSIndexPath *indexPath, UICollectionViewLayoutAttributes *attributes, BOOL *stop) {
            if (CGRectIntersectsRect(rect, attributes.frame)) {
                [allAttributes addObject:attributes];
            }
        }];
    }];
    return allAttributes;
}

// collectionViewContentSize
- (CGSize)collectionViewContentSize {
    CGSize tileSize = [TodayTimelineTileWhole defaultSize];
    CGFloat contentHeight = tileSize.height * self.numberOfTiles;
    return CGSizeMake(tileSize.width, contentHeight);
}
I tried not to post all the code here since that'd be a lot, but let me know if you need to see more.
Any tip is appreciated!
Pine
So, this problem is solved, and the reason was my own silly mistake: I set those views' x position off screen, and because of that the method was not called. Nothing else.
I found this by logging each view's frame in prepareLayout and noticing the x position was wrong. I had taken the position from the retina design, so…
I used supplementary views and they worked without any problem.
My lesson: no worries, calm down. That can save you from making mistakes.
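As a general sanity check for this kind of issue, logging each supplementary view's frame while the layout is prepared makes an off-screen origin obvious immediately. A minimal Swift sketch (the helper name and parameters are only illustrative):
import UIKit

// Hypothetical helper: dump supplementary frames so an off-screen origin is easy to spot.
func logSupplementaryFrames(_ attributesByIndexPath: [IndexPath: UICollectionViewLayoutAttributes],
                            contentSize: CGSize) {
    let contentRect = CGRect(origin: .zero, size: contentSize)
    for (indexPath, attributes) in attributesByIndexPath {
        // A frame that never intersects the content rect is never returned from
        // layoutAttributesForElements(in:), so the data source is never asked for that view.
        let visible = contentRect.intersects(attributes.frame)
        print("supplementary \(indexPath.item): \(attributes.frame) intersectsContent=\(visible)")
    }
}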

UIPanGestureRecognizer and BringSubviewToFront

I have an app where the user taps a button and the app instantiates an image layer (UIImageView). I want to let the user move whichever image layer they tap. After reading some topics here, I've learned that I can use UIPanGestureRecognizer to move the selected image layer.
- (IBAction)buttonClicked:(id)sender {
    imageview = [[UIImageView alloc] initWithFrame:CGRectMake(100, 0, 300, 22)];
    UIPanGestureRecognizer *imageviewGesture = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(recognizePan:)];
    [imageviewGesture setMinimumNumberOfTouches:1];
    [imageviewGesture setMaximumNumberOfTouches:1];
    imageview.image = [UIImage imageNamed:@"pinimage.png"];
    [imageview addGestureRecognizer:imageviewGesture];
    [imageview setUserInteractionEnabled:YES];
    [self.view addSubview:imageview];
    NSUInteger currentag = [self UniqueTag]; // Assigning a unique integer
    imageview.tag = currentag;
}
- (void)recognizePan:(UIPanGestureRecognizer *)sender {
    [[[(UITapGestureRecognizer *)sender view] layer] removeAllAnimations];
    [self.view bringSubviewToFront:[(UIPanGestureRecognizer *)sender view]];
    CGPoint translatedPoint = [(UIPanGestureRecognizer *)sender translationInView:self.view];
    if ([(UIPanGestureRecognizer *)sender state] == UIGestureRecognizerStateBegan) {
        firstX = [[sender view] center].x;
        firstY = [[sender view] center].y;
    }
    translatedPoint = CGPointMake(firstX + translatedPoint.x, firstY + translatedPoint.y);
    [[sender view] setCenter:translatedPoint];
}
Now, what I cannot figure out is how to bring the tapped layer to the front when there are multiple image layers. [self.view bringSubviewToFront:[(UIPanGestureRecognizer *)sender view]] doesn't seem to be effective. How can I revise my code so that the application brings the tapped layer to the top of the others?
Thank you for your help.
A good way to do this is to make a CustomImageView subclass of UIImageView. You attach a UIPanGestureRecognizer to each instance of CustomImageView and set that instance as its target. Then the action method triggered by the gesture is implemented in the view itself, so you can refer to the view with self:
In buttonClicked
MyImageView *imageview = [[MyImageView alloc] initWithFrame:CGRectMake(100, 0, 300, 22)];
UIPanGestureRecognizer *imageviewGesture =
    [[UIPanGestureRecognizer alloc] initWithTarget:imageview
                                            action:@selector(recognizePan:)];
In CustomImageView.m
- (void)recognizePan:(UIPanGestureRecognizer *)sender {
    [self.layer removeAllAnimations];
    [self.superview bringSubviewToFront:self];
    CGPoint translatedPoint = [sender translationInView:self];
    if ([sender state] == UIGestureRecognizerStateBegan) {
        self.firstX = [self center].x;
        self.firstY = [self center].y;
    }
    translatedPoint = CGPointMake(self.firstX + translatedPoint.x,
                                  self.firstY + translatedPoint.y);
    [self setCenter:translatedPoint];
}
update
Not thinking straight - you can of course do this from the view controller, as you are doing, by accessing the view property of the gesture recognizer. Your error is rather here:
- (void)recognizePan:(UIPanGestureRecognizer *)sender {
    [[[(UITapGestureRecognizer *)sender view] layer] removeAllAnimations];
You are changing the sender type from UIPanGestureRecognizer to UITapGestureRecognizer. In fact you don't need to do any of that sender typecasting in the body of the method.
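For illustration only, the same handler without any casting might look like this in Swift — a minimal sketch, where the class name and the firstX/firstY properties are just placeholders:
import UIKit

final class DraggableImagesViewController: UIViewController {
    private var firstX: CGFloat = 0
    private var firstY: CGFloat = 0

    @objc func recognizePan(_ sender: UIPanGestureRecognizer) {
        guard let pannedView = sender.view else { return }
        pannedView.layer.removeAllAnimations()
        // Bring the view being dragged above its sibling image views.
        view.bringSubviewToFront(pannedView)
        let translation = sender.translation(in: view)
        if sender.state == .began {
            firstX = pannedView.center.x
            firstY = pannedView.center.y
        }
        pannedView.center = CGPoint(x: firstX + translation.x, y: firstY + translation.y)
    }
}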

iOS6 MKMapView Zoom

I'm trying to get my MKMapView to zoom in to an annotation. I've tried all sorts of things; however, the map remains zoomed out to my region.
- (void)mapView:(MKMapView *)mapView didAddAnnotationViews:(NSArray *)views
{
    NSLog(@"Did add annotations");
    MKAnnotationView *annotationView = [views objectAtIndex:0];
    id <MKAnnotation> mp = [annotationView annotation];

    MKCoordinateSpan span;
    span.longitudeDelta = 0.02;
    span.latitudeDelta = 0.02;

    MKCoordinateRegion region;
    region.center = mapView.userLocation.coordinate;
    region.span = span;

    [mapView selectAnnotation:mp animated:YES];
    [self.spotMapView setRegion:region animated:YES];
    [self.spotMapView regionThatFits:region];
}
This does run; however, the map stays zoomed out.
Changing didAddAnnotationViews to this:
- (void)mapView:(MKMapView *)mapView didAddAnnotationViews:(NSArray *)views
{
    NSLog(@"Did add annotations");
    MKAnnotationView *annotationView = [views objectAtIndex:0];
    id <MKAnnotation> mp = [annotationView annotation];
    [mapView selectAnnotation:mp animated:NO];
}
That seemed to do the trick; the map now zooms in.
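If you do want to zoom explicitly rather than rely on selecting the annotation, one option is to center the region on the annotation's own coordinate instead of mapView.userLocation.coordinate. A minimal Swift sketch, not taken from the question (the delegate class name is hypothetical):
import MapKit

final class ZoomingMapDelegate: NSObject, MKMapViewDelegate {
    func mapView(_ mapView: MKMapView, didAdd views: [MKAnnotationView]) {
        guard let annotation = views.first?.annotation else { return }
        let span = MKCoordinateSpan(latitudeDelta: 0.02, longitudeDelta: 0.02)
        // Center on the annotation itself, not on the user's location.
        let region = MKCoordinateRegion(center: annotation.coordinate, span: span)
        mapView.setRegion(mapView.regionThatFits(region), animated: true)
        mapView.selectAnnotation(annotation, animated: true)
    }
}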
