SceneKit giving buffer size error while passing array data to uniform array in OpenGL shader

This is the surface shader modifier I use to make a trail on the floor surface:
#pragma arguments
uniform vec2 trailPoints[5];
uniform float count;
#pragma body
float trailRadius = 10.0;
float x = _surface.diffuseTexcoord.x;
float x100 = float(x * 100);
float y = _surface.diffuseTexcoord.y;
float y100 = float(y * 100);
for (int i = 0; i < int(count); i++) {
    vec2 position = trailPoints[i];
    if ((x100 > position.x - trailRadius && x100 < position.x + trailRadius) &&
        (y100 > position.y - trailRadius && y100 < position.y + trailRadius)) {
        _surface.diffuse.rgb = vec3(0.0, 10.0, 0.0);
    }
}
And this is the Swift-side code I use to pass the vector data to the surface shader:
if let geometry = self.floorNode.geometry {
    if let material = geometry.firstMaterial {
        // This is temporary data which I use to track down the problem.
        // This data will be dynamic later on.
        let myValueArray: [float2] = [float2(x: 80, y: 80), float2(x: 60, y: 60), float2(x: 40, y: 40), float2(x: 20, y: 20), float2(x: 0, y: 0)]

        // Passing the array count to the shader. There is no problem here.
        var count = Float(myValueArray.count)
        let countData = Data(buffer: UnsafeBufferPointer(start: &count, count: 1))
        material.setValue(countData, forKey: "count")

        // And here is where the problem starts.
        // myValueArray converted to Data with its full size.
        let valueArrayData = Data(buffer: UnsafeBufferPointer(start: myValueArray, count: myValueArray.count))
        material.setValue(valueArrayData, forKey: "trailPoints")
    }
}
When I build and run the project, the following error occurs and no data is passed to "trailPoints" in the shader.
Error: arguments trailPoints : mismatch between the NSData and the buffer size 40 != 8
When I change the count to 1 while converting the array to Data,
let valueArrayData = Data(buffer: UnsafeBufferPointer(start: myValueArray, count: 1))
the error disappears, but only the first member of the array is passed to the shader.
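For what it's worth, the numbers in the error message line up with the sizes involved: SceneKit appears to bind the vec2 symbol as a single 8-byte value, while the NSData holds five float2 values, i.e. 40 bytes. A quick size check (assuming simd's float2):
let elementSize = MemoryLayout<float2>.stride        // 8 bytes, one vec2
let fullArraySize = elementSize * myValueArray.count // 40 bytes, five vec2s
// SceneKit expects 8 bytes for "trailPoints", hence "40 != 8".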
So, the problem is: how can I pass all of the array members to the shader?

I think the answer to this question is the following.
I recently realized that OpenGL ES 2.0 only allows arrays to be defined and filled this way:
float myValue[3];
myValue[0] = 1.0;
myValue[1] = 2.0;
myValue[2] = 3.0;
But as far as I can tell, it is not possible to do this through SCNShaderModifierEntryPoint in the following way:
material.setValue(1.0, forKey: "myValue[0]")
material.setValue(2.0, forKey: "myValue[1]")
material.setValue(3.0, forKey: "myValue[2]")
And finally I found a way to pass the array to the fragment shader, using SCNProgram together with the handleBinding(ofSymbol:handler:) method:
let myValue: [Float] = [1.0, 2.0, 3.0]

material.handleBinding(ofSymbol: "myValue", handler: { (programId: UInt32, location: UInt32, node: SCNNode?, renderer: SCNRenderer) in
    for (index, v) in myValue.enumerated() {
        var v1 = v
        let aLoc = glGetUniformLocation(programId, String(format: "myValue[%i]", index))
        glUniform1fv(GLint(aLoc), 1, &v1)
    }
})
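The same approach should extend to the vec2 array from the question; here is a hedged sketch of what the loop body could look like inside the binding handler (trailPointsArray is an assumed name for the [float2] source data):
for (index, point) in trailPointsArray.enumerated() {
    // Upload each element of the GLSL array "trailPoints" individually.
    let xy: [Float] = [point.x, point.y]
    let loc = glGetUniformLocation(programId, String(format: "trailPoints[%i]", index))
    glUniform2fv(GLint(loc), 1, xy)
}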
But SCNProgram completely replaces SceneKit's default shader program with your own.
SceneKit's default shader program is highly complex and does a lot of work on your behalf:
the default vertex shader of SceneKit
the default fragment shader of SceneKit
So maybe it's not a good idea to use SCNProgram only to pass arrays to a shader.
And one interesting thing: SCNProgram does not work on SCNFloor geometry.

Related

SceneKit point cloud from ARKit depth buffers

I am attempting to find a simple way in SceneKit to calculate the depth of pixels from the LiDAR data in
sceneView.session.currentFrame?.smoothedSceneDepth?.depthMap
Ideally I don't want to use Metal shaders. I would prefer to find points in my currentFrame and their corresponding values in the depth map, to get the depth of points in SceneKit (ideally in world coordinates, not just local to the frustum at that point in time).
Fast performance isn't necessary, as it won't be calculated at capture time.
I am aware of the Apple project at the link; however, it is far too complex for my needs.
As a starting point, my code works like this:
guard let depthData = frame.sceneDepth else { return }
let camera = frame.camera

let depthPixelBuffer = depthData.depthMap
let depthHeight = CVPixelBufferGetHeight(depthPixelBuffer)
let depthWidth = CVPixelBufferGetWidth(depthPixelBuffer)
let resizeScale = CGFloat(depthWidth) / CGFloat(CVPixelBufferGetWidth(frame.capturedImage))
let resizedColorImage = frame.capturedImage.toCGImage(scale: resizeScale)
guard let colorData = resizedColorImage.pixelData() else {
    fatalError()
}

// Scale the camera intrinsics from the capture resolution down to the depth-map resolution.
var intrinsics = camera.intrinsics
let referenceDimensions = camera.imageResolution
let ratio = Float(referenceDimensions.width) / Float(depthWidth)
intrinsics.columns.0[0] /= ratio
intrinsics.columns.1[1] /= ratio
intrinsics.columns.2[0] /= ratio
intrinsics.columns.2[1] /= ratio

var points: [SCNVector3] = []
let depthValues = depthPixelBuffer.depthValues()
for vv in 0..<depthHeight {
    for uu in 0..<depthWidth {
        let z = -depthValues[uu + vv * depthWidth]
        let x = Float32(uu) / Float32(depthWidth) * 2.0 - 1.0
        let y = 1.0 - Float32(vv) / Float32(depthHeight) * 2.0
        points.append(SCNVector3(x, y, z))
    }
}
The resulting point cloud looks OK, but is severely bent on the Z axis. I realize this code is not adjusting for screen orientation either.
Cupertino kindly got back to me with this response on the forums at developer.apple.com:
The unprojection calculation itself is going to be identical, regardless of whether it is done CPU side or GPU side.
CPU side, the calculation would look something like this:
/// Returns a world-space position given a point in the camera image, the eye-space depth (sampled/read from the corresponding point in the depth image), the inverse camera intrinsics, and the inverse view matrix.
func worldPoint(cameraPoint: SIMD2<Float>, eyeDepth: Float, cameraIntrinsicsInversed: simd_float3x3, viewMatrixInversed: simd_float4x4) -> SIMD3<Float> {
    let localPoint = cameraIntrinsicsInversed * simd_float3(cameraPoint, 1) * -eyeDepth
    let worldPoint = viewMatrixInversed * simd_float4(localPoint, 1)
    return (worldPoint / worldPoint.w)[SIMD3(0, 1, 2)]
}
Implemented, this looks like:
for vv in 0..<depthHeight {
    for uu in 0..<depthWidth {
        let z = -depthValues[uu + vv * depthWidth]
        let viewMatInverted = (sceneView.session.currentFrame?.camera.viewMatrix(for: UIApplication.shared.statusBarOrientation))!.inverse
        let worldPoint = worldPoint(cameraPoint: SIMD2(Float(uu), Float(vv)),
                                    eyeDepth: z,
                                    cameraIntrinsicsInversed: intrinsics.inverse,
                                    viewMatrixInversed: viewMatInverted * rotateToARCamera)
        points.append(SCNVector3(worldPoint))
    }
}
The point cloud is pretty messy, needs confidence worked out, and there are gaps vertically where Int rounding has occurred, but it's a solid start. Missing functions come from the link to the Apple demo project in the question above.
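For completeness, the rotateToARCamera matrix used above is one of those missing pieces from Apple's point-cloud sample. A rough sketch of it follows (the helper name and the exact orientation angles here are assumptions, so check them against the sample project):
func makeRotateToARCameraMatrix(orientation: UIInterfaceOrientation) -> simd_float4x4 {
    // Flip Y and Z to go from the image coordinate convention to ARKit's camera convention.
    let flipYZ = simd_float4x4(
        SIMD4<Float>(1,  0,  0, 0),
        SIMD4<Float>(0, -1,  0, 0),
        SIMD4<Float>(0,  0, -1, 0),
        SIMD4<Float>(0,  0,  0, 1))

    // Extra rotation around Z depending on the interface orientation.
    let rotationAngle: Float
    switch orientation {
    case .landscapeLeft:      rotationAngle = .pi
    case .portrait:           rotationAngle = .pi / 2
    case .portraitUpsideDown: rotationAngle = -.pi / 2
    default:                  rotationAngle = 0
    }
    return flipYZ * simd_float4x4(simd_quaternion(rotationAngle, SIMD3<Float>(0, 0, 1)))
}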

Convert Image to Rulinalg Matrix in Rust

I am learning Rust and I want to perform some basic image operations. I am currently reading my images with the image crate like this: image::open("path/to/img.jpg").unwrap().
The problem is that I want my image to be in a specific 2D format with N = height * width rows and 3 columns (one for each color). I have found the rulinalg crate, but I cannot create a rulinalg::matrix::Matrix from the DynamicImage object. I have tried the following:
// Normalize to [0, 1]
let (width, height) = img.dimensions();
let n = width*height;
let tmp = img.as_rgb8().unwrap().to_vec().iter().map(|&e| e as f32 / 255.0).collect::<Vec<f32>>();
let mat = Matrix::new(n, 3, tmp);
So now I have a matrix of n rows and 3 columns but I am not sure if it is a correct representation. By that, I mean that I am not sure that the first pixel consists of the values mat[[0, 0]], mat[[n, 0]], mat[[n*2, 0]].
So, in order to test it, I thought I should try to recreate the image by using the mat matrix with the following code:
let mut img_buf = image::ImageBuffer::new(width, height);
for i in 0..mat.rows() {
    let color0 = (mat[[i, 0]] * 255.0) as u8;
    let color1 = (mat[[i, 1]] * 255.0) as u8;
    let color2 = (mat[[i, 2]] * 255.0) as u8;
    let x = i as u32 / height;
    let y = i as u32 - x * height;
    let pixel = img_buf.get_pixel_mut(x, y);
    // let image = *pixel;
    *pixel = image::Rgb([color0, color1, color2]);
}
img_buf.save("./tmp.jpg").unwrap();
But the output is only noise (even though the color structure seems to be kept the same). I have tried a lot of things and nothing seems to work. I also tried to find similar projects on GitHub, but the only relevant thing I found was applying filters to images, which simply called functions from the image crate.
Desired Workflow
So, what I want is the following:
Read image from path (3d image since it is not grayscale)
Convert image to a Matrix (rulinalg::matrix::Matrix)
Apply functions such as element-wise logarithm/exponential/square-root and axis-wise sum/max (numpy equivalents: np.log(arr), np.exp(arr), np.sqrt(arr), np.sum(arr, axis=0), np.amax(arr, axis=0))
Save the modified matrix as an image.
Reason
My goal is to perform image segmentation and reduce the colors of the image (by clustering).
Question
Does anyone have any ideas or pointers on how I can do the above in Rust?
Following @Jmb's comment, the reason that the output image was noise was that I had mixed up the axes.
I just want to post what I did in case there is someone else who wants to do something similar. Do keep in mind, though, that I am new to Rust and so I may have made some mistakes.
Using rulinalg
If you want to use the rulinalg crate, the following should work (version 0.4.2):
let img = image::open("path/to/img.jpg").unwrap();
let (width, height) = img.dimensions();
let n = (width * height) as usize;

// Normalize image pixels to [0, 1]
let tmp = img.as_rgb8().unwrap().to_vec().iter().map(|&e| e as f32 / 255.0).collect::<Vec<f32>>();
// Reduce dimensions
let mut mat = Matrix::new(n, 3, tmp);
// Change the array values by using some other method
mat = my_processing_method(mat);

// Image buffer for the new image
let mut img_buf = image::ImageBuffer::new(width, height);
for i in 0..mat.rows() {
    // Move back to the [0, 255] range
    let color0 = (mat[[i, 0]] * 255.0) as u8;
    let color1 = (mat[[i, 1]] * 255.0) as u8;
    let color2 = (mat[[i, 2]] * 255.0) as u8;
    let x = i as u32 % width;
    let y = i as u32 / width;
    let pixel = img_buf.get_pixel_mut(x, y);
    *pixel = image::Rgb([color0, color1, color2]);
}
// Save the updated image
img_buf.save("path/to/new/image.jpg").unwrap();
Using ndarray and ndarray_image:
For ease, you may also use the ndarray-image crate (version 0.3.0) in order to open the image and load it into an ndarray::Array3 (a 3D array). If you want to use ndarray instead of the rulinalg crate, then you can do the following:
use ndarray::{Array2, Dim};
use ndarray_image::{open_image, save_image, Colors};

let img = open_image("path/to/img.jpg", Colors::Rgb).expect("unable to open input image");
// A vector of 3 spots for each dimension
let sh = img.shape();
let (height, width, colors) = (sh[0] as u32, sh[1] as u32, sh[2]);
let n = (width * height) as usize;
// The dimension of the new 2d array
let new_dim = Dim([n, 3]);
// Normalize to [0, 1] and convert to a 1d vector
let img_vec = img.map(|&e| e as f32 / 255_f32).into_raw_vec();
// Convert the 1d vector to the 2d ndarray::Array2
let img_arr = Array2::from_shape_vec(new_dim, img_vec).ok().unwrap();

let mut img_buf = image::ImageBuffer::new(width, height);
for i in 0..img_arr.nrows() {
    let color0 = (img_arr[[i, 0]] * 255.0) as u8;
    let color1 = (img_arr[[i, 1]] * 255.0) as u8;
    let color2 = (img_arr[[i, 2]] * 255.0) as u8;
    let x = i as u32 % width;
    let y = i as u32 / width;
    let pixel = img_buf.get_pixel_mut(x, y);
    *pixel = image::Rgb([color0, color1, color2]);
}
img_buf.save("path/to/new/image.jpg").unwrap();
Hope this is helpful!

Dereference UnsafeMutablePointer<UnsafeMutableRawPointer>

I have a block that is passing data in which I'd like to convert to an array of arrays of floats, e.g. [[0.1, 0.2, 0.3, 1.0], [0.3, 0.4, 0.5, 1.0], [0.5, 0.6, 0.7, 1.0]]. This data is passed to me in the form data: UnsafeMutablePointer<UnsafeMutableRawPointer> (the inner arrays are RGBA values).
FWIW, the block parameters are from SCNParticleEventBlock.
How can I dereference data into a [[Float]]? Once I have the array containing the inner arrays, I can reference the inner array (colorArray) data with:
let rgba: UnsafeMutablePointer<Float> = UnsafeMutablePointer(mutating: colorArray)
let count = 4
for i in 0..<count {
    print((rgba + i).pointee)
}
FWIW, this is Apple's example Objective-C code for referencing the data (from SCNParticleSystem handle(_:forProperties:handler:)):
[system handleEvent:SCNParticleEventBirth
      forProperties:@[SCNParticlePropertyColor]
          withBlock:^(void **data, size_t *dataStride, uint32_t *indices, NSInteger count) {
    for (NSInteger i = 0; i < count; ++i) {
        float *color = (float *)((char *)data[0] + dataStride[0] * i);
        if (rand() & 0x1) { // Switch the green and red color components.
            color[0] = color[1];
            color[1] = 0;
        }
    }
}];
You can actually subscript the typed UnsafeMutablePointer without having to create an UnsafeMutableBufferPointer, as in:
let colorsPointer: UnsafeMutableRawPointer = data[0] + dataStride[0] * i
let rgbaBuffer = colorsPointer.bindMemory(to: Float.self, capacity: dataStride[0])

if arc4random_uniform(2) == 1 {
    rgbaBuffer[0] = rgbaBuffer[1]
    rgbaBuffer[1] = 0
}
Were you ever able to get your solution to work? It appears only a handful of SCNParticleProperties can be used within an SCNParticleEventBlock block.
Based on this answer, I've written the particle system handler function in Swift as:
ps.handle(SCNParticleEvent.birth, forProperties: [SCNParticleSystem.ParticleProperty.color]) {
    (data: UnsafeMutablePointer<UnsafeMutableRawPointer>, dataStride: UnsafeMutablePointer<Int>, indicies: UnsafeMutablePointer<UInt32>?, count: Int) in

    for i in 0..<count {
        // Get an UnsafeMutableRawPointer to the i-th rgba element in the data
        let colorsPointer: UnsafeMutableRawPointer = data[0] + dataStride[0] * i

        // Convert the UnsafeMutableRawPointer to a typed pointer by binding it to a type:
        let floatPtr = colorsPointer.bindMemory(to: Float.self, capacity: dataStride[0])

        // Convert that to an UnsafeMutableBufferPointer
        var rgbaBuffer = UnsafeMutableBufferPointer(start: floatPtr, count: dataStride[0])

        // At this point, I could convert the buffer to an Array, but doing so copies the data into the
        // array and any changes made in the array are not reflected in the original data.
        // UnsafeMutableBufferPointer is subscriptable, nice.
        // var rgbaArray = Array(rgbaBuffer)

        // About half the time, mess with the red and green components
        if arc4random_uniform(2) == 1 {
            rgbaBuffer[0] = rgbaBuffer[1]
            rgbaBuffer[1] = 0
        }
    }
}
I'm really not certain this is the most direct way to go about it, and it seems rather cumbersome compared to the Objective-C code (see the question above). I'm certainly open to other solutions and/or comments on this one.
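If a true [[Float]] is wanted rather than working through the buffer pointers, the values can simply be copied out inside the handler. A minimal sketch (note that this copies the data, so writing into the resulting arrays does not affect the particle system):
var colors: [[Float]] = []
for i in 0..<count {
    // Each particle's color is 4 floats (RGBA) starting at data[0] + dataStride[0] * i.
    let base = (data[0] + dataStride[0] * i).bindMemory(to: Float.self, capacity: 4)
    colors.append(Array(UnsafeMutableBufferPointer(start: base, count: 4)))
}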

Passing Metal buffer to SceneKit shader

I'd like to use a Metal compute shader to calculate some positions that are then fed into a Metal shader. Sounds straightforward, but I'm having trouble getting my MTLBuffer data into the Metal-based SCNProgram.
The compute kernel is as follows; in this contrived example it takes in three 3D vectors (in both buffers).
kernel void doSimple(const device float3 *inVector [[ buffer(0) ]],
                     device float3 *outVector [[ buffer(1) ]],
                     uint id [[ thread_position_in_grid ]]) {
    float yDisplacement = 0;
    . . . .
    outVector[id] = float3(
        inVector[id].x,
        inVector[id].y + yDisplacement,
        inVector[id].z);
}
This kernel function is run each frame in the renderer:willRenderScene:atTime: method of my SCNSceneRendererDelegate. There are two buffers, and they get switched after each frame.
Buffers are created as follows:
func setupBuffers() {
    positions = [vector_float3(0,0,0), vector_float3(1,0,0), vector_float3(2,0,0)]
    let bufferSize = sizeof(vector_float3) * positions.count
    // Copy the same data into two different buffers for initialisation
    buffer1 = device.newBufferWithBytes(&positions, length: bufferSize, options: .OptionCPUCacheModeDefault)
    buffer2 = device.newBufferWithBytes(&positions, length: bufferSize, options: .OptionCPUCacheModeDefault)
}
And the compute shader is run using the following (in the willRenderScene func):
let computeCommandBuffer = commandQueue.commandBuffer()
let computeCommandEncoder = computeCommandBuffer.computeCommandEncoder()

computeCommandEncoder.setComputePipelineState(pipelineState)
computeCommandEncoder.setBuffer(buffer1, offset: 0, atIndex: 0)
computeCommandEncoder.setBuffer(buffer2, offset: 0, atIndex: 1)
computeCommandEncoder.dispatchThreadgroups(numThreadgroups, threadsPerThreadgroup: threadsPerGroup)
computeCommandEncoder.endEncoding()

computeCommandBuffer.commit()
computeCommandBuffer.waitUntilCompleted()

// Read the results back on the CPU to check that the compute pass worked
let bufferSize = positions.count * sizeof(vector_float3)
var data = NSData(bytesNoCopy: buffer2.contents(), length: bufferSize, freeWhenDone: false)
var resultArray = [vector_float3](count: positions.count, repeatedValue: vector_float3(0,0,0))
data.getBytes(&resultArray, length: bufferSize)
for outPos in resultArray {
    print(outPos.x, ", ", outPos.y, ", ", outPos.z)
}
This works, and I can see my compute shader is updating the y coordinate for each vector in the array.
This scene consists of three spheres evenly spaced. The vertex shader simply takes the position calculated in the compute shader and adds it to each vertex position (well the y component anyway). I give each sphere an index, the vertex shader uses this index to pull the appropriate position out of my computed array.
The Metal vertex function is shown below; it's referenced by an SCNProgram and set on the material of each sphere.
vertex SimpleVertex simpleVertex(SimpleVertexInput in [[ stage_in ]],
                                 constant SCNSceneBuffer& scn_frame [[ buffer(0) ]],
                                 constant MyNodeBuffer& scn_node [[ buffer(1) ]],
                                 constant MyPositions &myPos [[ buffer(2) ]],
                                 constant uint &index [[ buffer(3) ]])
{
    SimpleVertex vert;
    float3 posOffset = myPos.positions[index];
    float3 pos = float3(in.position.x,
                        in.position.y + posOffset.y,
                        in.position.z);

    vert.position = scn_node.modelViewProjectionTransform * float4(pos, 1.0);
    return vert;
}
MyPositions is a simple struct containing an array of float3s.
struct MyPositions
{
    float3 positions[3];
};
I have no problem passing data to the vertex shader using the setValue method of each sphere's material as shown below (also done in the willRenderScene method). Everything works as expected (the three spheres move upwards).
var i0:UInt32 = 0
let index0 = NSData(bytes: &i0, length: sizeof(UInt32))
sphere1Mat.setValue(index0, forKey: "index")
sphere1Mat.setValue(data, forKey: "myPos")
BUT this requires the data to be copied from the GPU to the CPU and back to the GPU, which is really something I'd rather avoid. So my question is... how do I pass an MTLBuffer to an SCNProgram?
I have tried the following in willRenderScene, but get nothing but EXC_BAD... errors:
let renderCommandEncoder = renderer.currentRenderCommandEncoder!
renderCommandEncoder.setVertexBuffer(buffer2, offset: 0, atIndex: 2)
renderCommandEncoder.endEncoding()
Complete example is over on GitHub.
Thanks for reading; I've been struggling with this one. My workaround is to use an MTLTexture in place of an MTLBuffer, as I've been able to pass those into an SCNProgram via the diffuse material property.
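For reference, that texture workaround could look roughly like this (a sketch only, using current Metal API names; it assumes a SceneKit version that accepts an MTLTexture as material property contents):
// Pack the computed positions into a 1-row RGBA32Float texture.
let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba32Float,
                                                          width: positions.count,
                                                          height: 1,
                                                          mipmapped: false)
let positionsTexture = device.makeTexture(descriptor: descriptor)!

var texels = [Float]()
for p in positions {
    texels += [p.x, p.y, p.z, 0]   // pad each float3 out to one RGBA texel
}
positionsTexture.replace(region: MTLRegionMake2D(0, 0, positions.count, 1),
                         mipmapLevel: 0,
                         withBytes: texels,
                         bytesPerRow: texels.count * MemoryLayout<Float>.stride)

// Hand the texture to the material so the SCNProgram's shaders can sample it.
sphere1Mat.diffuse.contents = positionsTexture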
Just switch the bindings of the buffers from step to step.
Step 1:
computeCommandEncoder.setBuffer(buffer1, offset: 0, atIndex: 0)
computeCommandEncoder.setBuffer(buffer2, offset: 0, atIndex: 1)
Step 2:
computeCommandEncoder.setBuffer(buffer1, offset: 0, atIndex: 1)
computeCommandEncoder.setBuffer(buffer2, offset: 0, atIndex: 0)
Step 3:
computeCommandEncoder.setBuffer(buffer1, offset: 0, atIndex: 0)
computeCommandEncoder.setBuffer(buffer2, offset: 0, atIndex: 1)
and so on...
The out buffer becomes the new in buffer and vice versa.
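In code, one way to express this ping-pong is to keep a frame counter and derive the bindings from it; a small sketch using the same API style as above (frameIndex is an assumed stored property):
// Alternate which buffer is the input and which is the output every frame.
let (inBuffer, outBuffer) = (frameIndex % 2 == 0) ? (buffer1, buffer2) : (buffer2, buffer1)
computeCommandEncoder.setBuffer(inBuffer, offset: 0, atIndex: 0)
computeCommandEncoder.setBuffer(outBuffer, offset: 0, atIndex: 1)
frameIndex += 1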

Index expression must be constant - WebGL/GLSL error

I'm having trouble accessing an array in a fragment shader using a non-constant int as the index. I've removed the formula as it wouldn't make much sense here anyway, but my code is meant to calculate the tileID based on the current pixel and use that to determine the color.
Here's my code:
int tileID = <Insert formula here>;
vec3 colorTest;
int arrayTest[1024];
for (int x = 0; x < 1024; x++) {
    if (x == 1) arrayTest[x] = 1;
    else arrayTest[x] = 2;
}
if (arrayTest[tileID] == 1) colorTest = vec3(0.0, 1.0, 0.0);
else if (arrayTest[tileID] == 2) colorTest = vec3(1.0, 0.0, 0.0);
else colorTest = vec3(0.0, 0.0, 0.0);
Apparently GLSL doesn't like this and I get the error:
'[]' : Index expression must be constant
Does anyone know how I would fix this? Thanks.
As background: GLSL looks a lot like C, but compiles a bit differently. Things are very unrolled, and conditionals may be executed in parallel and selected at the end, that sort of thing. It depends on the hardware...
You can use loop indices or constants to index into arrays. The assignment in your loop is OK, but the access by tileID isn't.
The WebGL shader language comes from GLSL ES, documented at
http://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf
The Appendix, section 5, discusses:
Indexing of Arrays, Vectors and Matrices
Definition:
constant-index-expressions are a superset of constant-expressions. Constant-index-expressions can include loop indices as defined in Appendix A section 4.
The following are constant-index-expressions:
• Constant expressions
• Loop indices as defined in section 4
• Expressions composed of both of the above
When used as an index, a constant-index-expression must have integral type.
Hope that helps!
Oh, as for fixing it in the exact example above... it looks like you could compute the value from tileID directly, rather than precomputing an array and indexing into it.
Or, precompute whatever array you like, and pass it in as a texture. A texture, of course, can be indexed however you like.
Here's a JavaScript helper method I use to pass floats down to the shaders:
function glSetupStuff() {
    ...
    if (!gl.getExtension("OES_texture_float")) // <<-- enables RGBA float values, handy!
        alert("cant pass in floats, use 8-bit values instead.");
    ...
}
/*
* Pass in an array of rgba floats,
* for example: var data = new Float32Array([0.1,0.2,0.3,1, .5,.5,1.0,1]);
*/
function textureFromFloats(gl, width, height, float32Array)
{
    var oldActive = gl.getParameter(gl.ACTIVE_TEXTURE);
    gl.activeTexture(gl.TEXTURE15); // working register 15, thanks.

    var texture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA,
                  width, height, 0,
                  gl.RGBA, gl.FLOAT, float32Array);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    gl.bindTexture(gl.TEXTURE_2D, null);

    gl.activeTexture(oldActive);
    return texture;
}
Note the use of gl.NEAREST, so it doesn't "blur" your values! Then you can set it up before the gl.drawXxx call, with something like
textureUnit = 3; // from 0 to 15 is ok
gl.activeTexture(gl.TEXTURE0 + textureUnit);
gl.bindTexture(gl.TEXTURE_2D, texture);
var z = gl.getUniformLocation(prog, "uSampler");
gl.uniform1i(z, textureUnit);
And in the shader (fragment or vertex, I believe; some earlier WebGL implementations didn't support vertex textures...):
uniform sampler2D uSampler;
...
vec4 value = texture2D(uSampler, vec2(xValueBetween0And1,yValueBetween0And1));
So, you have to index appropriately for the array-as-texture size, within the range 0 to 1. Try to sample from the middle of each value/pixel. For example, if the array is 2 values wide, index by 0.25 and 0.75.
That's the gist of it!
Tested in Safari 9.1.2 on OS X 10.11.6
uniform float data[32];

float getData(int id) {
    for (int i = 0; i < 32; i++) {
        if (i == id) return data[i];
    }
    return 0.0; // fallback if id is out of range
}

void main(void) {
    float f = getData(yourVariable);
}
I hit this error because I was attempting to use an integer variable to take the nth texture from an array of textures:
// this doesn't compile!
varying vec2 vUv; // uv coords
varying float vTexture; // texture index in array of textures
uniform sampler2D textures[3]; // identify that there are 3 textures
void main() {
    int textureIndex = int(floor(vTexture));
    gl_FragColor = texture2D(textures[textureIndex], vUv);
}
The solution was to break out the texture indexing into a sequence of conditionals:
// this compiles!
varying vec2 vUv; // uv coords
varying float vTexture; // texture index in array of textures
uniform sampler2D textures[3]; // identify that there are 3 textures
void main() {
    int textureIndex = int(floor(vTexture));
    if (textureIndex == 0) {
        gl_FragColor = texture2D(textures[0], vUv);
    } else if (textureIndex == 1) {
        gl_FragColor = texture2D(textures[1], vUv);
    } else if (textureIndex == 2) {
        gl_FragColor = texture2D(textures[2], vUv);
    }
}
