So I'm trying to send an array of values to my fragment shader.
The shader reads values from a texture, and depending on the value currently being read from the texture, I want to retrieve a value from the array.
I am able to cast the value (u.r) to an int using int(u.r), but when I actually use that as the array index, it says the integer isn't a constant, so I can't use it:
ERROR: 0:75: '[]' : Index expression must be constant
Is there a better way of sending arrays of values to the shader?
Here is some of the code; as you can see, the array "tab" is what I'm looking at mostly.
<script id="shader-fs" type="x-shader/x-fragment">
#ifdef GL_ES
precision highp float;
#endif
uniform sampler2D uTexSamp;
uniform sampler2D uTabSamp;
uniform float dt;
uniform float dte;
uniform float dth2;
uniform float a;
uniform float nb;
uniform float m;
uniform float eps;
uniform float weee;
uniform float tab[100];
//uniform float temp;
uniform int fframes;
uniform vec2 vStimCoord;
varying vec2 vTexCoord;
const float d = 0.001953125; // 1./512.
void main(void) {
    vec4 t = texture2D(uTexSamp, vTexCoord);
    float u = t.r, v = t.g, u2 = t.b, v2 = t.a;
    //const mediump int arrindex = floor(u*10 + u2);
    //float sigvaluetab = tab[arrindex];
    u += u2/255.; v += v2/255.;
    //u += u2 * 0.003921568627451;
    v += v2 * 0.003921568627451;
    //Scaling factors
    v = v*1.2;
    u = u*4.;
    float temp = (1.0 / (exp(2.0 * (u-3.0)) + 1.0)); // (1-tanh(u-3)) * 0.5
    //const mediump int utoint;
    //utoint = int(u);
    //for(int index = 0; index< 50; index++)
    int u2toint;
    u2toint = int(u2);
    // int arrindex = utoint*10 + u2toint;
    float sigmoid = tab[u2toint];//(tab[5] + 1.);
    //float sigmoid= temp;//tab[arrindex];
    float hfunc = sigmoid * u * u;
    float ffunc = -u +(a - pow(v*nb,m))*hfunc ;
    float gfunc = -v;
    if (u > 1.0) { //u-1.0 > 0.0
        gfunc += 1.4990;
    }
... MORE STUFF UNDER, BUT THIS IS THE IDEA
Fragment shaders are tricky: unlike vertex shaders, where you can index a uniform using any integer expression, in a fragment shader the expression must qualify as const-index. This can go as far as ruling out indexing uniforms in a loop in fragment shaders :-\
GLSL ES Specification (version 100) - Appendix A: Limitations for ES 2.0 - p. 110
Many implementations exceed these requirements, but understand that fragment shaders are more restrictive than vertex shaders. If you could edit your question to include the full fragment shader, I might be able to offer you an alternate solution.
One solution might be to use a 1D texture lookup instead of an array. Technically, texture lookups that use non-const coordinates are dependent lookups, which can be significantly slower. However, texture lookups do overcome the limitations of array indexing in GLSL ES.
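For example (a sketch, not from the original answer): WebGL / OpenGL ES 2.0 has no true 1D textures, so the table can be uploaded as an N x 1 2D texture, and the question's shader already declares a uTabSamp sampler that could receive it. The upload_table helper below is hypothetical C against the ES 2.0 API (the WebGL calls are analogous); note that GL_LUMINANCE quantizes the table to 8-bit values in [0,1], so rescale in the shader if your values fall outside that range:

#include <GLES2/gl2.h>

/* Hypothetical: upload tab[0..n-1] as an n x 1 single-channel texture. */
GLuint upload_table(const unsigned char *tab, int n)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* NEAREST + CLAMP_TO_EDGE: exact per-texel lookups, and NPOT-safe in ES 2.0. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, n, 1, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, tab);
    return tex;
}

In the fragment shader the array access then becomes a lookup, e.g. replacing tab[u2toint] with float sigmoid = texture2D(uTabSamp, vec2((float(u2toint) + 0.5) / 100.0, 0.5)).r; where 100.0 is the declared table length.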
Programming Language: C
I'm currently in the process of implementing a 3D wireframe model represented through isometric projection.
My current understanding of the project is to:
parse a text map containing the x,y,z coordinates of the wireframe model
transform the 3D coordinates to 2D using isometric projection
draw the lines using the Bresenham line algorithm and a few functions out of my graphic library of choice
I'm done with step 1; however, I've been stuck on step 2 for the last few days.
I understand that isometric projection is the process of projecting 3D coordinates onto a 2D plane at an angle such that the result looks 3D, even though we are only working with x,y when drawing the lines. That is definitely not the best way of describing it, so if I'm incorrect please correct me.
Example of a text map:
0 0 0
0 5 0
0 0 0
My data structure of choice (implemented as an array of structs):
typedef struct point
{
    float x;
    float y;
    float z;
    bool  is_last;
    int   color; // Implemented after mandatory part
} t_point;
I pretty much just read out the rows, columns, and values of the text map and store them in the x,y,z fields respectively.
Now that I have to transform them, I've tried the following formulas:
const double angle = 30 * M_PI / 180.0;

void isometric(t_dot *dot, double angle)
{
    dot->x = (dot->x - dot->y) * cos(angle);
    dot->y = (dot->x + dot->y) * sin(angle) - dot->z;
}

static void iso(int x, int y, int z)
{
    int previous_x;
    int previous_y;

    previous_x = x;
    previous_y = y;
    x = (previous_x - previous_y) * cos(0.523599);
    y = -z + (previous_x + previous_y) * sin(0.523599);
}
t_point *calc_isometric(t_point *pts, int max_pts)
{
    float x;
    float y;
    float z;
    const double angle = 30 * M_PI / 180.0;
    int num_pts;

    num_pts = 0;
    while (num_pts < max_pts)
    {
        x = pts[num_pts].x;
        y = pts[num_pts].y;
        z = pts[num_pts].z;
        printf("x: %f y: %f z: %f\n", x, y, z);
        pts[num_pts].x = (x - y) * cos(angle);
        pts[num_pts].y = (x + y) * sin(angle) - z;
        printf("x_iso %f\ty_iso %f\n\n", pts[num_pts].x, pts[num_pts].y);
        num_pts++;
    }
    return (pts);
}
It spits out various things which make no sense to me. I could just go on and try to implement the line algorithm from here and hope for the best, but I would like to understand what I'm actually doing.
On top of that, I learned through my research that I need to set up my camera in a certain way to create the projection.
All in all I'm just very lost, and my question boils down to this:
Please help me understand the concept of isometric projection.
How do I transform 3D coordinates (x,y,z) into 2D coordinates using isometric projection?
I see it like this:
// constants:
float deg = M_PI/180.0;
float ax = 30*deg;
float ay = 150*deg;
vec2 X = vec2(cos(ax),-sin(ax)); // x axis
vec2 Y = vec2(cos(ay),-sin(ay)); // y axis
vec2 Z = vec2( 0.0, -1.0);       // z axis
vec2 O = vec2(0,0); // position of point (0,0,0) on screen
// conversion:
vec3 p=vec3(?,?,?); // input point
vec2 q=O+(p.x*X)+(p.y*Y)+(p.z*Z); // output point
the coordinatewise version:
float Xx = cos(ax);
float Xy = -sin(ax);
float Yx = cos(ay);
float Yy = -sin(ay);
float Zx = 0.0;
float Zy = -1.0;
float Ox = 0;
float Oy = 0;
// conversion:
float px=?,py=?,pz=?; // input point
float qx=Ox+(px*Xx)+(py*Yx)+(pz*Zx); // output point
float qy=Oy+(px*Xy)+(py*Yy)+(pz*Zy); // output point
Assuming the x axis goes to the right and the y axis goes down ... O is usually set to the center of the screen instead of (0,0), unless you add pan capabilities to your isometric world.
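Applied to the question's t_point struct, a minimal C sketch of the same conversion (the function name and the ox,oy origin parameters are mine; note that both outputs are computed from the original coordinates before either field is overwritten, which is also where the question's isometric function goes wrong):

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

typedef struct point { float x; float y; float z; } t_point;

/* Project one point; ox,oy is where the world origin (0,0,0) lands on screen. */
void iso_project(t_point *p, float ox, float oy)
{
    const float ax = 30.0f  * (float)(M_PI / 180.0); /* screen angle of x axis */
    const float ay = 150.0f * (float)(M_PI / 180.0); /* screen angle of y axis */
    /* Compute both outputs before overwriting p->x and p->y. */
    float qx = ox + p->x * cosf(ax) + p->y * cosf(ay);        /* Zx =  0 */
    float qy = oy - p->x * sinf(ax) - p->y * sinf(ay) - p->z; /* Zy = -1 */

    p->x = qx;
    p->y = qy;
}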
In case you want to add arbitrary rotations within the "3D" XY plane, see this:
How can I warp a shader matrix to match isometric perspective in a 3d scene?
You just compute the X,Y axis vectors on the ellipse (beware: they will not be unit vectors anymore!). So if I see it right, it would be:
float ax=?,ay=ax+90*deg;
float Xx = cos(ax) ;
float Xy = -sin(ax)*0.5;
float Yx = cos(ay) ;
float Yy = -sin(ay)*0.5;
where ax is the rotation angle...
I am replacing my project's use of glRotatef because I need to be able to transform double matrices. glRotated is not an option because OpenGL does not guarantee the stored matrices or any operations performed to be double precision. However, my new implementation only rotates around the global axes, and does not give the same result as glRotatef.
I have looked at some implementations of glRotatef (like OpenGl rotate custom implementation) and don't see how they account for the initial transformation matrix's local axes when calculating the rotation matrix.
I have a generic rotate function, taken (with some changes) from https://community.khronos.org/t/implementing-rotation-function-like-glrotate/68603:
typedef double double_matrix_t[16];

void rotate_double_matrix(const double_matrix_t in, double angle, double x, double y, double z,
                          double_matrix_t out)
{
    double sinAngle, cosAngle;
    double mag = sqrt(x * x + y * y + z * z);

    sinAngle = sin ( angle * M_PI / 180.0 );
    cosAngle = cos ( angle * M_PI / 180.0 );

    if ( mag > 0.0f )
    {
        double xx, yy, zz, xy, yz, zx, xs, ys, zs;
        double oneMinusCos;
        double_matrix_t rotMat;

        x /= mag;
        y /= mag;
        z /= mag;

        xx = x * x;
        yy = y * y;
        zz = z * z;
        xy = x * y;
        yz = y * z;
        zx = z * x;
        xs = x * sinAngle;
        ys = y * sinAngle;
        zs = z * sinAngle;
        oneMinusCos = 1.0f - cosAngle;

        rotMat[0] = (oneMinusCos * xx) + cosAngle;
        rotMat[4] = (oneMinusCos * xy) - zs;
        rotMat[8] = (oneMinusCos * zx) + ys;
        rotMat[12] = 0.0F;

        rotMat[1] = (oneMinusCos * xy) + zs;
        rotMat[5] = (oneMinusCos * yy) + cosAngle;
        rotMat[9] = (oneMinusCos * yz) - xs;
        rotMat[13] = 0.0F;

        rotMat[2] = (oneMinusCos * zx) - ys;
        rotMat[6] = (oneMinusCos * yz) + xs;
        rotMat[10] = (oneMinusCos * zz) + cosAngle;
        rotMat[14] = 0.0F;

        rotMat[3] = 0.0F;
        rotMat[7] = 0.0F;
        rotMat[11] = 0.0F;
        rotMat[15] = 1.0F;

        multiply_double_matrices(in, rotMat, out); // Generic matrix multiplication function.
    }
}
I call this function with the same rotations I used to call glRotatef with and in the same order, but the result is different. All rotations are done around the global axes, while glRotatef would rotate around the local axis of in.
For example, I have a plane, and I pitch up 90 degrees (this gives the expected result with both glRotatef and my rotation function) and persist the transformation.
If I bank 90 degrees with glRotatef (glRotatef(90, 0.0f, 0.0f, 1.0f)), the plane rotates around the transformation's local Z axis pointing out of the plane's nose, which is what I want.
But if I bank 90 degrees with my code (rotate_double_matrix(in, 90.0f, 0.0, 0.0, 1.0, out)), the plane is still rotating around the global Z axis.
Similar issues happen if I change the order of rotations - the first rotation gives the expected result, but subsequent rotations still happen around the global axes.
How does glRotatef rotate around a matrix's local axes? What do I need to change in my code to get the same result? I assume rotate_double_matrix needs to modify the x, y, z values passed in based on the in matrix somehow, but I'm not sure.
You're probably multiplying the matrices in the wrong order. Try changing
multiply_double_matrices(in, rotMat, out);
to
multiply_double_matrices(rotMat, in, out);
I can never remember which way is right, and there's a reasonable chance multiply_double_matrices is backwards anyway (at least if I'd written it :)
The order you multiply matrices in matters. Since rotMat holds your rotation, and in holds the combination of all other matrices applied so far, i.e. "everything else", multiplying in the wrong order means that rotMat gets applied after everything else instead of before everything else. (And I didn't get that part backwards! If you want rotMat to be the "top of stack" transformation, that means you actually want it to be the first when your vertex coordinates are processed)
Another possibility is that you mixed up rows with columns. OpenGL matrices go down, then across, i.e.
matrix[0] matrix[4] matrix[8] matrix[12]
matrix[1] matrix[5] matrix[9] matrix[13]
matrix[2] matrix[6] matrix[10] matrix[14]
matrix[3] matrix[7] matrix[11] matrix[15]
even though 2D arrays are traditionally stored across, then down:
matrix[0] matrix[1] matrix[2] matrix[3]
matrix[4] matrix[5] matrix[6] matrix[7]
matrix[8] matrix[9] matrix[10] matrix[11]
matrix[12] matrix[13] matrix[14] matrix[15]
Getting this wrong can cause similar-looking, but mathematically different, issues.
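For reference, here is one way multiply_double_matrices could look for the column-major layout above (a sketch; the question never shows the real implementation). With this definition, multiply_double_matrices(in, rotMat, out) computes in * rotMat, i.e. it post-multiplies and applies the rotation in local coordinates the way glRotatef does; if you see global-axis behavior with that call order, your own multiply (or your storage convention) is flipped:

#include <string.h>

typedef double double_matrix_t[16];

/* out = a * b, column-major: element (row, col) is stored at m[col * 4 + row]. */
void multiply_double_matrices(const double_matrix_t a,
                              const double_matrix_t b,
                              double_matrix_t out)
{
    double_matrix_t tmp; /* scratch copy so that out may alias a or b */
    for (int col = 0; col < 4; col++)
        for (int row = 0; row < 4; row++)
        {
            double sum = 0.0;
            for (int k = 0; k < 4; k++)
                sum += a[k * 4 + row] * b[col * 4 + k];
            tmp[col * 4 + row] = sum;
        }
    memcpy(out, tmp, sizeof(double_matrix_t));
}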
I have the following function in Core Image Kernel Language and I need something equivalent in Metal Shading Language, but I have problems with the destCoord, unpremultiply, and premultiply functions.
kernel vec4 MyFunc(sampler src, __color color, float distance, float slope) {
    vec4 t;
    float d;

    d = destCoord().y * slope + distance;
    t = unpremultiply(sample(src, samplerCoord(src)));
    t = (t - d*color) / (1.0-d);
    return premultiply(t);
}
My function in MSL so far is:
float4 MyFunc(sample_t image, float3 color, float dist, float slope) {
    float4 t;
    float d;

    d = color[1] * slope + dist;
    ...
    return t;
}
Any help would be appreciated!
This should work:
float4 MyFunc(sampler src, float4 color, float dist, float slope, destination dest) {
    const float d = dest.coord().y * slope + dist;
    float4 t = unpremultiply(src.sample(src.coord()));
    t = (t - d * color) / (1.0 - d);
    return premultiply(t);
}
Note the destination parameter. This is an optional last parameter to a kernel that gives you access to information about the render destination (like the coordinate in destination space that you are rendering to). You don't need to pass anything for this when invoking the CIKernel; Core Image will fill it in automatically.
Since you are only sampling the input src at the current location, you can also optimize the kernel to be a CIColorKernel. These are kernels that have a 1:1 mapping of input to output pixels. They can be concatenated by the Core Image runtime. The kernel code would look like this:
float4 MyFunc(sample_t src, float4 color, float dist, float slope, destination dest) {
    const float d = dest.coord().y * slope + dist;
    float4 t = unpremultiply(src);
    t = (t - d * color) / (1.0 - d);
    return premultiply(t);
}
Notice sample_t (which is basically a float4) vs. sampler.
I've been writing GLSL shaders and using an integer texture (GL_RED) to store values for use in the shader.
When I attempt to divide a value taken from the usampler2D texture, it stays the same.
The following is the minimal reproducible shader:
#version 440

in vec2 uv;
out vec3 color;

layout (binding = 1) uniform usampler2D tm_data;

void main(){
    float index = texture(tm_data, uv).r;
    float divisor = 16.0f;
    color = vec3(index / divisor, 0, 0);
}
The rendered red value is always 1.0, regardless of how I try to divide or mutate the index value.
When the sampler is changed to a normalized one (sampler2D), the color manipulation works as expected:
#version 440

in vec2 uv;
out vec3 color;

layout (binding = 1) uniform sampler2D tm_data; // Loads as normalized from [0,255] to [0,1]

void main(){
    float index = texture(tm_data, uv).r * 255.0f; // Convert back to integer approximation
    float divisor = 4.0f;
    color = vec3(index / divisor, 0, 0); // Shade of red now appears considerably darker
}
Does anyone know why this unexpected behaviour happens?
The tm_data texture is loaded as GL_RED -> GL_RED
The OpenGL version used is 4.4
There is no framework being used (no sneaky additions); everything is loaded using plain GL function calls.
For the use of usampler2D, the internal format has to be an unsigned integral format (e.g. GL_R8UI). See Sampler types.
If the internal format is the basic format GL_RED, then the sampler type has to be sampler2D.
Note: sampler* is for floating-point formats, isampler* is for signed integral formats, and usampler* is for unsigned integral formats.
See OpenGL Shading Language 4.60 Specification - 4.1.7. Opaque Types and OpenGL Shading Language 4.60 Specification - 8.9. Texture Functions
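For example, creating a texture that actually matches usampler2D could look like this in C (a sketch; it assumes a loader such as GLEW provides the entry points). Two details matter: the pixel transfer format must be GL_RED_INTEGER rather than GL_RED, and integer textures must use NEAREST filtering, otherwise the texture is incomplete:

#include <GL/glew.h> /* assumption: any GL loader works here */

GLuint create_r8ui_texture(int width, int height, const unsigned char *data)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* LINEAR filtering is invalid for integer textures. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    /* Integral internal format + GL_RED_INTEGER transfer format. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R8UI, width, height, 0,
                 GL_RED_INTEGER, GL_UNSIGNED_BYTE, data);
    return tex;
}

In the shader, the fetch then reads uint index = texture(tm_data, uv).r; and you convert explicitly with float(index) before dividing.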
I'm looking for a fast, accurate implementation of RGB to HSB and HSB to RGB in pure C. Note that I'm specifically looking for Hue, Saturation, Brightness and not HSL (Luminosity).
Of course I have Googled this extensively, but speed is of the utmost importance here and I am looking for any specific recommendations for solid, fast, reliable code.
Here is a straightforward implementation in standard C.
This is - without further context - about as good as it can get. Perhaps you'd care to shed some more light on:
how you store your RGB samples (bits/pixel, to begin with!?)
how you store your pixel data (do you want to efficiently transform larger buffers, and if so, what is their organization?)
how you want to represent the output (I assumed floats for now)
I could come up with a further optimized version (perhaps one that utilizes SSE4 instructions nicely...)
All that said, when compiled with optimizations, this doesn't work too badly:
#include <stdio.h>
#include <math.h>

typedef struct RGB_t { unsigned char red, green, blue; } RGB;
typedef struct HSB_t { float hue, saturation, brightness; } HSB;

/*
 * Returns the hue, saturation, and brightness of the color.
 */
void RgbToHsb(struct RGB_t rgb, struct HSB_t* outHsb)
{
    // TODO check arguments
    float r = rgb.red / 255.0f;
    float g = rgb.green / 255.0f;
    float b = rgb.blue / 255.0f;

    float max = fmaxf(fmaxf(r, g), b);
    float min = fminf(fminf(r, g), b);
    float delta = max - min;

    if (delta != 0)
    {
        float hue;
        if (r == max)
        {
            hue = (g - b) / delta;
        }
        else
        {
            if (g == max)
            {
                hue = 2 + (b - r) / delta;
            }
            else
            {
                hue = 4 + (r - g) / delta;
            }
        }
        hue *= 60;
        if (hue < 0) hue += 360;
        outHsb->hue = hue;
    }
    else
    {
        outHsb->hue = 0;
    }
    outHsb->saturation = max == 0 ? 0 : (max - min) / max;
    outHsb->brightness = max;
}
Typical usage and test:
int main()
{
    struct RGB_t rgb = { 132, 34, 255 };
    struct HSB_t hsb;

    RgbToHsb(rgb, &hsb);
    printf("RGB(%u,%u,%u) -> HSB(%f,%f,%f)\n", rgb.red, rgb.green, rgb.blue,
           hsb.hue, hsb.saturation, hsb.brightness);
    // prints: RGB(132,34,255) -> HSB(266.606354,0.866667,1.000000)
    return 0;
}
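The question asks for the reverse direction as well; a matching sketch in the same style (the standard sector-based HSB-to-RGB conversion, with hue in [0,360) and saturation/brightness in [0,1]):

/*
 * Inverse of RgbToHsb: standard sector-based conversion.
 */
void HsbToRgb(struct HSB_t hsb, struct RGB_t* outRgb)
{
    float h = hsb.hue, s = hsb.saturation, v = hsb.brightness;
    float r = v, g = v, b = v; /* achromatic case (s == 0) */

    if (s > 0)
    {
        float sector = h / 60.0f;   /* which 60-degree sector the hue is in */
        int i = (int)sector;
        float f = sector - i;       /* position within the sector */
        float p = v * (1 - s);
        float q = v * (1 - s * f);
        float t = v * (1 - s * (1 - f));
        switch (i % 6)
        {
            case 0:  r = v; g = t; b = p; break;
            case 1:  r = q; g = v; b = p; break;
            case 2:  r = p; g = v; b = t; break;
            case 3:  r = p; g = q; b = v; break;
            case 4:  r = t; g = p; b = v; break;
            default: r = v; g = p; b = q; break;
        }
    }
    outRgb->red   = (unsigned char)(r * 255.0f + 0.5f);
    outRgb->green = (unsigned char)(g * 255.0f + 0.5f);
    outRgb->blue  = (unsigned char)(b * 255.0f + 0.5f);
}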
First off
HSB and HLS were developed to specify numerical Hue, Saturation and Brightness (or Hue, Lightness and Saturation) in an age when users had to specify colors numerically. The usual formulations of HSB and HLS are flawed with respect to the properties of color vision. Now that users can choose colors visually, or choose colors related to other media (such as PANTONE), or use perceptually-based systems like L*u*v* and L*a*b*, HSB and HLS should be abandoned [source]
Look at the open-source Java implementation here.
The Boost library (I know, it's C++) seemed to contain a conversion to HSB at one time, but nowadays I can only find a luminance conversion (here).
I would suggest using a lookup table to store the HSB and RGB values. First convert the RGB value (which is presumably 8 bits per component) to a 16-bit value (5 bits per component). The HSB values, also 16-bit, can be handled the same way, though the hue component should probably get more bits than saturation and brightness: say 8 bits for hue and 4 bits each for saturation and brightness. The reverse applies when converting from HSB back to RGB.
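A minimal sketch of that idea, reusing RgbToHsb and the RGB_t/HSB_t types from the answer above (the names here are mine; 5 bits per component gives 32768 entries, and at 12 bytes per HSB entry the table costs roughly 384 KiB plus the quantization error of dropping 3 bits per component):

#define HSB_LUT_SIZE (1 << 15) /* 5 bits per RGB component */

static HSB hsb_lut[HSB_LUT_SIZE];

/* Quantize 8-bit components to 5 bits and pack them into a table index. */
static unsigned rgb_to_lut_index(unsigned char r, unsigned char g, unsigned char b)
{
    return ((unsigned)(r >> 3) << 10) | ((unsigned)(g >> 3) << 5) | (unsigned)(b >> 3);
}

/* Fill the table once at startup using the exact conversion. */
void init_hsb_lut(void)
{
    for (unsigned i = 0; i < HSB_LUT_SIZE; i++)
    {
        struct RGB_t rgb;
        /* Expand each 5-bit component back to the 0..255 range. */
        rgb.red   = (unsigned char)(((i >> 10) & 31) * 255 / 31);
        rgb.green = (unsigned char)(((i >> 5) & 31) * 255 / 31);
        rgb.blue  = (unsigned char)((i & 31) * 255 / 31);
        RgbToHsb(rgb, &hsb_lut[i]);
    }
}

A lookup is then simply hsb_lut[rgb_to_lut_index(r, g, b)].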
A fast RGB to HSV floating point conversion from lolengine.net takes a "common" RGB2HSV implementation and makes these observations:
Only the hue offset K changes. The idea now is the following:
Sort the triplet (r,g,b) using comparisons
Build K while sorting the triplet
Perform the final calculation
We notice that the last swap effectively changes the sign of K and the sign of g - b. Since both are then added and passed to fabs(), the sign reversal can actually be omitted.
The step before that last point looks like C code, but their final form is C++ that's trivially convertible to C:
static void RGB2HSV(float r, float g, float b,
                    float &h, float &s, float &v)
{
    float K = 0.f;

    if (g < b)
    {
        std::swap(g, b);
        K = -1.f;
    }

    if (r < g)
    {
        std::swap(r, g);
        K = -2.f / 6.f - K;
    }

    float chroma = r - std::min(g, b);
    h = fabs(K + (g - b) / (6.f * chroma + 1e-20f));
    s = chroma / (r + 1e-20f);
    v = r;
}
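And converting that to C really is trivial; the same function with out-parameters and a small swap helper (swapf is my addition):

#include <math.h>

static void swapf(float *a, float *b) { float t = *a; *a = *b; *b = t; }

static void RGB2HSV(float r, float g, float b,
                    float *h, float *s, float *v)
{
    float K = 0.f;

    if (g < b)
    {
        swapf(&g, &b);
        K = -1.f;
    }

    if (r < g)
    {
        swapf(&r, &g);
        K = -2.f / 6.f - K;
    }

    float chroma = r - (g < b ? g : b); /* std::min(g, b) */
    *h = fabsf(K + (g - b) / (6.f * chroma + 1e-20f));
    *s = chroma / (r + 1e-20f);
    *v = r;
}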