Change texture randomly from an array

I have some textures:
var texturesOfEnemies = [SKTexture(imageNamed: "EnemyTexture1"),
                         SKTexture(imageNamed: "EnemyTexture2"),
                         SKTexture(imageNamed: "EnemyTexture3"),
                         SKTexture(imageNamed: "EnemyTexture4"),
                         SKTexture(imageNamed: "EnemyTexture5")]
I'm using arc4random_uniform to select a random texture index:
var randomTextureOfEnemies = Int(arc4random_uniform(UInt32(texturesOfEnemies.count)))
and assign the selected texture to a node:
enemy.texture = texturesOfEnemies[randomTextureOfEnemies]
I have some actions for my node. When the actions are done, I want to change the texture of my node.
enemy.run(someAction, completion: { enemy.texture = texturesOfEnemies[randomTextureOfEnemies] })
These actions repeat and work well, but the texture only changes once. How can I change the texture every time the action completes?
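The likely culprit (my reading, not stated in the thread): randomTextureOfEnemies is computed once, so the completion closure applies the same index on every run. A minimal sketch of a fix, reusing the names from the question, that draws a fresh index each time the action completes:

enemy.run(someAction) {
    // Re-roll the index on every completion instead of reusing the captured one.
    let freshIndex = Int(arc4random_uniform(UInt32(texturesOfEnemies.count)))
    enemy.texture = texturesOfEnemies[freshIndex]
}

On Swift 4.2 and later, texturesOfEnemies.randomElement() would express the same pick without the UInt32 conversion.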

Related

How to convert a RenderTexture to a 1D array?

I thought I would find an answer somewhere instantly but apparently not. So here it goes:
I need to convert a RenderTexture to a 1D array for each frame update in the script (C#) in Unity. How can I do that?
That is it, actually. The rest is beside the point of the question, but let me also explain why I need to do this anyway:
...so that I can then dump the 1D array within a cell of a row in a .csv file. I collect data (including the rendered texture data) at each frame update and write all of this per-frame data using WriteLine(line). My idea is that this saves the computation time that would otherwise be spent encoding images and saving them as separate files on each update.
The RenderTexture and underlying Texture objects don't give you much to work with here. So the idea is to copy the data in the RenderTexture into a new Texture2D, and read the data you're looking for from that.
Here's a piece of code that should return a 1D array of Color32 structs (untested):
public Color32[] GetRenderTexturePixels(RenderTexture renderTexture)
{
    var texture = new Texture2D(renderTexture.width, renderTexture.height, TextureFormat.RGB24, false);

    // Record the currently active render texture.
    var currentRenderTexture = RenderTexture.active;

    // Activate the render texture we want to read, then copy its pixels into the Texture2D.
    RenderTexture.active = renderTexture;
    texture.ReadPixels(new Rect(0, 0, renderTexture.width, renderTexture.height), 0, 0);
    texture.Apply();

    // Restore the previous render texture.
    RenderTexture.active = currentRenderTexture;

    // Return the Texture2D pixels. This assumes mipmap level 0.
    return texture.GetPixels32();
}
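Two caveats worth noting (my additions, not from the thread): ReadPixels blocks until the GPU has finished rendering, so calling it every frame is itself a significant cost; and the code above allocates a new Texture2D per call, so a per-frame caller should allocate the texture once and reuse it (or destroy it) to avoid leaking memory.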

Metal / SceneKit - Repeating texture on sampler

Not sure why, but I am not able to repeat the texture when using a custom fragment shader.
Here is my fragment shader:
fragment float4 bFragment(VertexOut vertexOut [[stage_in]],
                          texture2d<float, access::sample> textureInput [[texture(0)]])
{
    constexpr sampler sampler2d(coord::normalized, address::repeat, filter::linear);
    float4 outputColor;
    outputColor = textureInput.sample(sampler2d, vertexOut.texCoord);
    return float4(outputColor.x, outputColor.y, outputColor.z, 1.0);
}
Here is how I pass the texture:
let imageProperty = SCNMaterialProperty(contents: texture)
imageProperty.wrapS = .repeat
imageProperty.wrapT = .repeat
imageProperty.contentsTransform = SCNMatrix4MakeScale(screenRatio * numberOfRepetitionsOnX, numberOfRepetitionsOnX, 1)
node.geometry!.firstMaterial!.setValue(imageProperty, forKey: "textureInput")
Image is NOT repeated, just clamped to the object, no matter the size of the texture.
If I use the same settings with NO custom shader:
let myMaterial = SCNMaterial()
myMaterial.lightingModel = .constant
myMaterial.diffuse.contents = texture
myMaterial.diffuse.wrapS = .repeat
myMaterial.diffuse.wrapT = .repeat
myMaterial.diffuse.contentsTransform = SCNMatrix4MakeScale(screenRatio * numberOfRepetitionsOnX, numberOfRepetitionsOnX, 1)
node.geometry!.firstMaterial! = myMaterial
the texture is correctly repeated.
Questions:
What do I have to change so that the contentsTransform value also takes effect when sampling in a custom fragment shader?
If that is not possible, what is the easiest way to achieve this? (Scaling, repeating, or redrawing the texture itself is not an option.)
Thanks.
When using SCNProgram, the contentsTransform property of SCNMaterialProperty has no effect (support for it is implemented in SceneKit's built-in vertex and fragment shaders, which an SCNProgram replaces).
You will need to pass the 2D matrix to the custom SCNProgram shaders and manually transform the texture coordinates there.
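A minimal sketch of that approach, assuming SceneKit's KVC binding of NSValue-wrapped matrices to shader arguments; the key name textureTransform is hypothetical and must match the argument name declared in the shader:

// Swift side: pass the transform explicitly instead of using contentsTransform.
let textureTransform = SCNMatrix4MakeScale(screenRatio * numberOfRepetitionsOnX, numberOfRepetitionsOnX, 1)
node.geometry!.firstMaterial!.setValue(NSValue(scnMatrix4: textureTransform), forKey: "textureTransform")

// Metal side (shown as comments to keep this snippet in one language): declare a
// matching argument and apply it to the texture coordinates before sampling, e.g.
//     constant float4x4& textureTransform [[buffer(0)]]
//     float2 uv = (textureTransform * float4(vertexOut.texCoord, 0.0, 1.0)).xy;
//     outputColor = textureInput.sample(sampler2d, uv);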

SceneKit ARKit glowing effect

Hi, I'm trying to get a glowing effect around a node.
I used the SCNNode filters property and set it to an array of CIFilter.
It works, but it only renders when the node has no other node behind it, which I don't understand. I tried setting the rendering order and the readDepth options without success. I'm really stuck at this point and would appreciate your input!
Please see the screenshot for an example and the code sample.
func addBloom() -> [CIFilter]? {
    let bloomFilter = CIFilter(name: "CIBloom")!
    bloomFilter.setValue(10.0, forKey: "inputIntensity")
    bloomFilter.setValue(30.0, forKey: "inputRadius")
    return [bloomFilter]
}
Calling this using:
myNode.filters = addBloom()
A final note: I noticed that for CIFilter to work with Metal, the antialiasing mode needs to be set to .none:
arSceneView.antialiasingMode = .none
Thanks a lot!
Adrien
Have you tried setting writesToDepthBuffer to false on the nodes you aren't applying the filters to?
For your information, writesToDepthBuffer refers to:
SceneKit's rendering process uses a depth buffer to determine the ordering of rendered surfaces relative to the viewer. The default value of this property is YES, specifying that SceneKit saves depth information for each rendered pixel for use by later rendering passes. Typically, you disable writing to the depth buffer when rendering semitransparent objects, because later stages of the rendering process may require depth information about the opaque objects behind them.
This example seems to be working fine:
/// Generates An SCNPlane & A Red & Green SCNSphere
func generateNodes() {
    let planeNode = SCNNode(geometry: SCNPlane(width: 1, height: 0.5))
    planeNode.geometry?.firstMaterial?.diffuse.contents = UIColor.black
    planeNode.position = SCNVector3(0, 0, -1)

    let redSphereNode = SCNNode(geometry: SCNSphere(radius: 0.1))
    redSphereNode.geometry?.firstMaterial?.diffuse.contents = UIColor.red
    redSphereNode.position = SCNVector3(-0.3, 0, -1)
    redSphereNode.filters = addBloom()

    let greenSphereNode = SCNNode(geometry: SCNSphere(radius: 0.1))
    greenSphereNode.geometry?.firstMaterial?.diffuse.contents = UIColor.green
    greenSphereNode.position = SCNVector3(0.3, 0, -1)
    greenSphereNode.filters = addBloom()

    self.augmentedRealityView.scene.rootNode.addChildNode(planeNode)
    self.augmentedRealityView.scene.rootNode.addChildNode(redSphereNode)
    self.augmentedRealityView.scene.rootNode.addChildNode(greenSphereNode)

    planeNode.geometry?.firstMaterial?.writesToDepthBuffer = false
}
/// Creates An Array Of CIBloom Filters
///
/// - Returns: [CIFilter]?
func addBloom() -> [CIFilter]? {
    let bloomFilter = CIFilter(name: "CIBloom")!
    bloomFilter.setValue(10.0, forKey: "inputIntensity")
    bloomFilter.setValue(30.0, forKey: "inputRadius")
    return [bloomFilter]
}
One thing I did notice, however: if I used an image with a transparent background as the contents of the SCNPlane it didn't work, although with another image it was fine.
Hope it points you in the right direction...
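As an aside (my addition, not from the thread): on iOS 10 and later, SceneKit can also produce a bloom without CIFilter via the camera's HDR pipeline, which sidesteps the depth-buffer interaction entirely, at the cost of blooming everything bright in the scene rather than a single node:

if let camera = arSceneView.pointOfView?.camera {
    camera.wantsHDR = true
    // Pixels brighter than the threshold bleed outward, producing the glow.
    camera.bloomThreshold = 0.8
    camera.bloomIntensity = 2.0
}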

Fast-rendering array of CALayers vs array of CAShapeLayers in Swift

I'm writing an app in which I need to populate 500 or so layers with previously defined bezier paths, and I'm running into either PERFORMANCE or INTERACTIVITY issues depending on which route I choose. Note that I require no animation features, as the paths I draw are static:
If I use CALayers, drawing the paths to screen takes about 15 seconds (bad), but the resulting interactive experience (i.e. moving around the screen) is great.
If I use CAShapeLayers, drawing the paths to screen takes a fraction of a second (good), but the interactivity is terrible.
This is my code with CALayers:
func drawPathToCALayer(myView: UIImageView, pointArray: [CGPoint], bbox: CGRect, color: UIColor) {
    // step 1. create path
    let path = CGPathCreateMutable()
    var pathOffset = CGAffineTransformMakeTranslation((bbox.origin.x * -1), (bbox.origin.y * -1))
    CGPathAddLines(path, &pathOffset, pointArray, pointArray.count)
    CGPathCloseSubpath(path)

    // step 2. draw to context -- this is the part that kills us, given how many times we call this function
    UIGraphicsBeginImageContextWithOptions(bbox.size, false, 0)
    CGContextAddPath(UIGraphicsGetCurrentContext(), path)
    CGContextSetFillColorWithColor(UIGraphicsGetCurrentContext(), color.CGColor)
    CGContextFillPath(UIGraphicsGetCurrentContext())

    // step 3. assign context drawing to sublayer
    let sublayer = CALayer()
    let strokeImage = UIGraphicsGetImageFromCurrentImageContext()
    sublayer.frame = bbox
    sublayer.contents = strokeImage.CGImage
    UIGraphicsEndImageContext()

    myView.layer.addSublayer(sublayer)
}
This is the code with CAShapeLayers:
func drawPathToCAShapeLayer(myView: UIImageView, pointArray: [CGPoint], bbox: CGRect, color: UIColor) {
    // step 1. create path
    let path = CGPathCreateMutable()
    var pathOffset = CGAffineTransformMakeTranslation((bbox.origin.x * -1), (bbox.origin.y * -1))
    CGPathAddLines(path, &pathOffset, pointArray, pointArray.count)
    CGPathCloseSubpath(path)

    // step 2. assign path to sublayer (note fillColor takes a CGColor)
    let sublayer = CAShapeLayer()
    sublayer.path = path
    sublayer.fillColor = color.CGColor
    myView.layer.addSublayer(sublayer)
}
I like the succinctness and speed of the CAShapeLayer approach, but from an interactivity point of view, this route is a no go.
The question is (thanks for hanging in there): is there a way to take a hybrid approach, where I draw to a CAShapeLayer temporarily and use it to populate a CALayer, like so?
func drawPathToHybrid(myView: UIImageView, pointArray: [CGPoint], bbox: CGRect, color: UIColor) {
    // step 1. create path
    let path = CGPathCreateMutable()
    var pathOffset = CGAffineTransformMakeTranslation((bbox.origin.x * -1), (bbox.origin.y * -1))
    CGPathAddLines(path, &pathOffset, pointArray, pointArray.count)
    CGPathCloseSubpath(path)

    // step 2. assign path to sublayer
    let sublayer = CALayer()
    let tmplayer = CAShapeLayer()
    tmplayer.path = path
    tmplayer.fillColor = color.CGColor
    sublayer.contents = tmplayer.contents // ---> I know this doesn't work, but is there something similar I can take advantage of that doesn't rely on defining a context?
    myView.layer.addSublayer(sublayer)
}
Or better yet, is there some other way that I can populate an array of CALayers with bezier paths to get both good INTERACTIVITY and PERFORMANCE?
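The thread ends there, but one hedged possibility (my suggestion, not from the thread) is to keep the fast CAShapeLayer setup and ask Core Animation to cache each layer as a bitmap, so the path is rasterized once instead of on every frame of scrolling:

let sublayer = CAShapeLayer()
sublayer.path = path
sublayer.fillColor = color.CGColor
// Rasterize once; scrolling then composites the cached bitmap.
sublayer.shouldRasterize = true
sublayer.rasterizationScale = UIScreen.mainScreen().scale
myView.layer.addSublayer(sublayer)

Whether this restores interactivity depends on why the 500 shape layers were slow in the first place (per-frame path fills are the usual culprit), so it is worth profiling before committing to it.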

MKMapView setRegion not exact

Starting Situation
iOS 6, Apple Maps. I have two annotations on an MKMapView. The two annotations' coordinates have the same latitude but different longitudes.
What I want
I now want to zoom the map so that one annotation is exactly on the left edge of the mapView's frame, and the other annotation on the right edge of the frame. For that I tried MKMapView's setRegion and setVisibleMapRect methods.
The Problem
The problem is that those methods seem to snap to certain zoom levels instead of setting the region as exactly as I need it. I saw a lot of questions on Stack Overflow pointing out that this behavior is normal on iOS 5 and below. But since Apple Maps uses vector graphics, the view is not bound to certain zoom levels to display imagery at a proper resolution.
Tested in...
I tested on iPhone 4, 4S, and 5, iPad 3, and iPad Mini (all with iOS 6.1), and in the Simulator on iOS 6.1.
My question
So why do setRegion and setVisibleMapRect snap to certain zoom levels instead of adjusting to the exact region values I pass?
Sample Code
In viewDidAppear I define 4 different coordinates as ivars and set up the map view:
// define coords
_coord1 = CLLocationCoordinate2DMake(46.0, 13.0);
_coord2 = CLLocationCoordinate2DMake(46.0, 13.1);
_coord3 = CLLocationCoordinate2DMake(46.0, 13.2);
_coord4 = CLLocationCoordinate2DMake(46.0, 13.3);
// define frame for map in landscape mode
CGRect mainScreen = [[UIScreen mainScreen] bounds];
CGRect newSRect = mainScreen;
newSRect.size.width = mainScreen.size.height;
newSRect.size.height = mainScreen.size.width;
// setup map view
customMapView = [[MKMapView alloc] initWithFrame:newSRect];
[customMapView setDelegate:self];
[customMapView setShowsUserLocation:NO];
[self.view addSubview:customMapView];
Then I add 3 buttons, which all trigger the same method, addAnnotationsWithCoord1:coord2:. The first button passes _coord1 and _coord2, the second passes _coord1 and _coord3, and the third passes _coord1 and _coord4. The method looks like this (TapAnnotation is my class conforming to MKAnnotation):
- (void)addAnnotationsWithCoord1:(CLLocationCoordinate2D)coord1 coord2:(CLLocationCoordinate2D)coord2 {
    // Make 2 new annotations with the passed coordinates
    TapAnnotation *annot1 = [[TapAnnotation alloc] initWithNumber:0 coordinate:coord1];
    TapAnnotation *annot2 = [[TapAnnotation alloc] initWithNumber:0 coordinate:coord2];

    // Remove all existing annotations
    for (id<MKAnnotation> annotation in customMapView.annotations) {
        [customMapView removeAnnotation:annotation];
    }

    // Add annotations to map
    [customMapView addAnnotation:annot1];
    [customMapView addAnnotation:annot2];
}
After that I determine the southwest and northeast points of the rect containing my 2 annotations:
// get northEast and southWest
CLLocationCoordinate2D northEast;
CLLocationCoordinate2D southWest;
northEast.latitude = MAX(coord1.latitude, coord2.latitude);
northEast.longitude = MAX(coord1.longitude, coord2.longitude);
southWest.latitude = MIN(coord1.latitude, coord2.latitude);
southWest.longitude = MIN(coord1.longitude, coord2.longitude);
Then I calculate the center point between the two coordinates and set it as the map's center coordinate (remember, since all coordinates share the same latitude, the following calculation should be correct):
// determine center coordinate and set to map
double centerLon = ((coord1.longitude + coord2.longitude) / 2.0f);
CLLocationCoordinate2D center = CLLocationCoordinate2DMake(southWest.latitude, centerLon);
[customMapView setCenterCoordinate:center animated:NO];
Now I try to set the region of the map so that it fits like I want:
CLLocation *loc1 = [[CLLocation alloc] initWithLatitude:southWest.latitude longitude:southWest.longitude];
CLLocation *loc2 = [[CLLocation alloc] initWithLatitude:northEast.latitude longitude:northEast.longitude];
CLLocationDistance meterSpanLong = [loc1 distanceFromLocation:loc2];
CLLocationDistance meterSpanLat = 0.1;
MKCoordinateRegion region = MKCoordinateRegionMakeWithDistance(center, meterSpanLat, meterSpanLong);
[customMapView setRegion:region animated:NO];
This does not behave as expected, so I try this:
MKCoordinateSpan span = MKCoordinateSpanMake(0.01, northEast.longitude-southWest.longitude);
MKCoordinateRegion region = MKCoordinateRegionMake(center, span);
[customMapView setRegion:region animated:NO];
This still doesn't behave as expected, so I try setVisibleMapRect:
MKMapPoint westPoint = MKMapPointForCoordinate(southWest);
MKMapPoint eastPoint = MKMapPointForCoordinate(northEast);
MKMapRect mapRect = MKMapRectMake(westPoint.x, westPoint.y, eastPoint.x - westPoint.x, 1);
[customMapView setVisibleMapRect:mapRect animated:NO];
And still it does not behave as I want. As a verification, I calculate the distance in points from the left annotation to the left edge of the mapView's frame:
// log the distance from the southwest point to the left edge of the map frame
CGPoint tappedWestPoint = [customMapView convertCoordinate:southWest toPointToView:customMapView];
NSLog(@"distance: %f", tappedWestPoint.x);
For _coord1 and _coord2 it shows: 138
For _coord1 and _coord3 it shows: 138
For _coord1 and _coord4 it shows: 65
So why do I get these values? If everything worked as expected, they would all be 0.
Thanks for any help; I've been struggling with this problem for a week now.
Read the docs on setRegion:
When setting this property, the map may adjust the new region value so that it fits the visible area of the map precisely. This is normal and is done to ensure that the value in this property always reflects the visible portion of the map. However, it does mean that if you get the value of this property right after setting it, the returned value may not match the value you set. (You can use the regionThatFits: method to determine the region that will actually be set by the map.)
You will have to figure out the math yourself if you want a precise zoom.
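A sketch of that math (my addition, in modern Swift for brevity, and untested against iOS 6-era snapping): build a map rect that already has the view's aspect ratio, so the fitting step described above has nothing left to adjust.

import MapKit

// Spans the two coordinates horizontally and pads vertically to the view's
// aspect ratio, so mapRectThatFits should return it (nearly) unchanged.
// Names here are illustrative, not from the question.
func edgeToEdgeRect(from west: CLLocationCoordinate2D,
                    to east: CLLocationCoordinate2D,
                    in mapView: MKMapView) -> MKMapRect {
    let p1 = MKMapPoint(west)
    let p2 = MKMapPoint(east)
    let width = abs(p2.x - p1.x)
    let height = width * Double(mapView.bounds.height / mapView.bounds.width)
    return MKMapRect(x: min(p1.x, p2.x),
                     y: (p1.y + p2.y) / 2 - height / 2,
                     width: width,
                     height: height)
}

// Usage: mapView.setVisibleMapRect(edgeToEdgeRect(from: southWest, to: northEast, in: mapView), animated: false)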
