Why can't I access matrices in an HLSL pixel shader? - wpf

I'm learning how to create effects for WPF using HLSL.
I'm currently trying to make a simple effect that marks edges in an image.
I want to use the Sobel operator for this, so I set up a float2x3 in my HLSL code, but I can't seem to access the elements of that matrix.
I've tried manually plugging in the proper values and it works, but not when I use a loop.
sampler2D imageSampler : register(s0);
float imageWidth : register(c0);
float imageHeight : register(c1);
float threshold : register(c2);

float2x3 op =
{
    1.0f, 2.0f, 1.0f,
    -1.0f, -2.0f, -1.0f
};

float grayScale(float3 color)
{
    return (color.r + color.g + color.b) / 3;
}

float4 GetEdgeLoop(float2 coord, float2 pixelSize)
{
    float2 current;
    float avrg = 0;
    float holder;
    float gsHolder;

    current.x = coord.x - pixelSize.x;
    for (int x = 0; x < 2; x++)
    {
        current.y = coord.y - pixelSize.y;
        for (int y = 0; y < 3; y++)
        {
            holder = op[x][y];
            gsHolder = grayScale(tex2D(imageSampler, current).rgb);
            avrg += gsHolder * holder;
            current.y += pixelSize.y;
        }
        current.x += pixelSize.x * 2;
    }

    avrg = abs(avrg / 8);
    if (avrg > threshold)
        return float4(1, 0, 0, 1);
    return tex2D(imageSampler, coord);
}

float4 main(float2 uv : TEXCOORD) : COLOR
{
    float2 pixelSize = (1 / imageWidth, 1 / imageHeight);
    return GetEdgeLoop(uv, pixelSize);
}
This method should return the color red for strong enough edges.
This does return red sometimes, but clearly not for edges.
I have another method for detecting edges that actually works, but it samples the required pixels manually:
float4 GetEdge(float2 coord, float2 pixelSize)
{
    float avrg = 0;
    avrg += grayScale(tex2D(imageSampler, float2(coord.x - pixelSize.x, coord.y - pixelSize.y)).rgb) * 1;
    avrg += grayScale(tex2D(imageSampler, float2(coord.x - pixelSize.x, coord.y)).rgb) * 2;
    avrg += grayScale(tex2D(imageSampler, float2(coord.x - pixelSize.x, coord.y + pixelSize.y)).rgb) * 1;
    avrg += grayScale(tex2D(imageSampler, float2(coord.x + pixelSize.x, coord.y - pixelSize.y)).rgb) * (-1);
    avrg += grayScale(tex2D(imageSampler, float2(coord.x + pixelSize.x, coord.y)).rgb) * (-2);
    avrg += grayScale(tex2D(imageSampler, float2(coord.x + pixelSize.x, coord.y + pixelSize.y)).rgb) * (-1);

    avrg = abs(avrg / 8);
    if (avrg > threshold)
        return float4(1, 0, 0, 1);
    return tex2D(imageSampler, coord);
}
This method isn't very elegant and I want to replace it.

I'm not an expert on shaders in WPF, but in the code that you provided the float2x3 isn't bound to a register and isn't set from the application side, so its values aren't available to the pixel shader code. I would expect something like this to be part of your code:
sampler2D implicitInputSampler : register(S0);
float opacity : register(C0);
(The first declaration is a texture sampler, the second a floating-point constant register.)
You need to 'declare' the objects that will be passed into the shader function. Microsoft's guide to shader effect writing has a nice example at https://learn.microsoft.com/en-us/dotnet/api/system.windows.media.effects.shadereffect?view=netframework-4.8. Here's the code for posterity:
// Threshold shader
// Object Declarations
sampler2D implicitInput : register(s0);
float threshold : register(c0);
float4 blankColor : register(c1);

//------------------------------------------------------------------------------------
// Pixel Shader
//------------------------------------------------------------------------------------
float4 main(float2 uv : TEXCOORD) : COLOR
{
    float4 color = tex2D(implicitInput, uv);
    float intensity = (color.r + color.g + color.b) / 3;
    float4 result;

    if (intensity > threshold)
    {
        result = color;
    }
    else
    {
        result = blankColor;
    }

    return result;
}
You can see that the code declares a texture sampler, a float and a float4 (vector with 4 floats) as input. You can then bind to these through DependencyProperties like this:
#region Input dependency property
public Brush Input
{
    get { return (Brush)GetValue(InputProperty); }
    set { SetValue(InputProperty, value); }
}

public static readonly DependencyProperty InputProperty =
    ShaderEffect.RegisterPixelShaderSamplerProperty("Input", typeof(ThresholdEffect), 0);
#endregion

///////////////////////////////////////////////////////////////////////

#region Threshold dependency property
public double Threshold
{
    get { return (double)GetValue(ThresholdProperty); }
    set { SetValue(ThresholdProperty, value); }
}

public static readonly DependencyProperty ThresholdProperty =
    DependencyProperty.Register("Threshold", typeof(double), typeof(ThresholdEffect),
        new UIPropertyMetadata(0.5, PixelShaderConstantCallback(0)));
#endregion

///////////////////////////////////////////////////////////////////////

#region BlankColor dependency property
public Color BlankColor
{
    get { return (Color)GetValue(BlankColorProperty); }
    set { SetValue(BlankColorProperty, value); }
}

public static readonly DependencyProperty BlankColorProperty =
    DependencyProperty.Register("BlankColor", typeof(Color), typeof(ThresholdEffect),
        new UIPropertyMetadata(Colors.Transparent, PixelShaderConstantCallback(1)));
#endregion
Do note that PixelShaderConstantCallback takes an int parameter that lets you specify the index of the register to which you want to bind the value.
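For the shader in the question, that means exposing three properties bound to registers c0, c1 and c2. A minimal sketch of what that could look like is below; the class name, property names and default values are placeholders of mine, not something taken from the question:

public class EdgeEffect : ShaderEffect
{
    // Matches "float imageWidth : register(c0);" in the question's shader.
    public static readonly DependencyProperty ImageWidthProperty =
        DependencyProperty.Register("ImageWidth", typeof(double), typeof(EdgeEffect),
            new UIPropertyMetadata(512.0, PixelShaderConstantCallback(0)));

    // Matches "float imageHeight : register(c1);".
    public static readonly DependencyProperty ImageHeightProperty =
        DependencyProperty.Register("ImageHeight", typeof(double), typeof(EdgeEffect),
            new UIPropertyMetadata(512.0, PixelShaderConstantCallback(1)));

    // Matches "float threshold : register(c2);".
    public static readonly DependencyProperty ThresholdProperty =
        DependencyProperty.Register("Threshold", typeof(double), typeof(EdgeEffect),
            new UIPropertyMetadata(0.1, PixelShaderConstantCallback(2)));

    public double ImageWidth
    {
        get { return (double)GetValue(ImageWidthProperty); }
        set { SetValue(ImageWidthProperty, value); }
    }

    public double ImageHeight
    {
        get { return (double)GetValue(ImageHeightProperty); }
        set { SetValue(ImageHeightProperty, value); }
    }

    public double Threshold
    {
        get { return (double)GetValue(ThresholdProperty); }
        set { SetValue(ThresholdProperty, value); }
    }
}

The sampler for s0 is registered the same way as the Input property above, with RegisterPixelShaderSamplerProperty.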
Edit: OK, so I've read up some more on this topic, played around with your code and got it to work. Here are the changes I made, in no particular order:
I declared the kernel array as static const. Without static, the array is treated as a shader constant that the application is expected to fill, so the initializer is effectively ignored; making it static const keeps the values inside the shader (and apparently 'might' also allow some minor compiler optimizations).
I changed the float2x3 to a float3x3 holding a Laplace kernel, since we want edges detected from all sides of the image.
I changed the grayscale function to the weighted (luminosity) approach.
Here's the relevant code:
static const float3x3 laplace =
{
    -1.0f, -1.0f, -1.0f,
    -1.0f, 8.0f, -1.0f,
    -1.0f, -1.0f, -1.0f
};

float grayScaleByLumino(float3 color)
{
    return (0.299 * color.r + 0.587 * color.g + 0.114 * color.b);
}

float4 GetEdgeGeorge(float2 coord, float2 pixelSize)
{
    float2 current = coord;
    float avrg = 0;
    float kernelValue;
    float4 currentColor;
    float grayScale;
    float4 result;

    current.x = coord.x - pixelSize.x;
    for (int x = 0; x < 3; x++)
    {
        current.y = coord.y - pixelSize.y;
        for (int y = 0; y < 3; y++)
        {
            kernelValue = laplace[x][y];
            grayScale = grayScaleByLumino(tex2D(imageSampler, current).rgb);
            avrg += grayScale * kernelValue;
            current.y += pixelSize.y;
        }
        current.x += pixelSize.x;
    }

    avrg = abs(avrg / 8);
    if (avrg > threshold)
    {
        result = float4(1, 0, 0, 1);
    }
    else
    {
        result = tex2D(imageSampler, coord);
    }

    return result;
}
The whole solution I've created is available here: https://github.com/georgethejournalist/WPFShaders. I hope this helps.
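If it helps, here is a rough sketch of the C# side needed to wire a compiled version of a shader like the one above into a ShaderEffect. The pack URI, assembly name and shader file name are assumptions and have to match your own project:

using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Effects;

public class EdgeDetectionEffect : ShaderEffect
{
    // s0: the element (or brush) the effect is applied to.
    public static readonly DependencyProperty InputProperty =
        ShaderEffect.RegisterPixelShaderSamplerProperty("Input", typeof(EdgeDetectionEffect), 0);

    // c2: the edge threshold used by the shader.
    public static readonly DependencyProperty ThresholdProperty =
        DependencyProperty.Register("Threshold", typeof(double), typeof(EdgeDetectionEffect),
            new UIPropertyMetadata(0.1, PixelShaderConstantCallback(2)));

    public EdgeDetectionEffect()
    {
        // Load the compiled pixel shader bytecode ("edge.ps" is a placeholder name).
        PixelShader = new PixelShader
        {
            UriSource = new Uri("pack://application:,,,/MyAssembly;component/Shaders/edge.ps")
        };

        // Push the initial dependency property values into the shader registers.
        UpdateShaderValue(InputProperty);
        UpdateShaderValue(ThresholdProperty);
    }

    public Brush Input
    {
        get { return (Brush)GetValue(InputProperty); }
        set { SetValue(InputProperty, value); }
    }

    public double Threshold
    {
        get { return (double)GetValue(ThresholdProperty); }
        set { SetValue(ThresholdProperty, value); }
    }
}

Applying it is then just a matter of setting image.Effect = new EdgeDetectionEffect { Threshold = 0.1 }; on whatever element you want filtered.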

Related

first glsl complex shader for me, how to make it evolve with time (with three-fiber and react)?

I created the code sandbox below where I try to show a mesh.
https://codesandbox.io/s/template-shader-opcqst?file=/src/App.js
For the moment, the mesh is shown but not animated. I don't know how to change the 'time' parameter of this mesh. It must be linked with the proper use of useMemo, useEffect or useFrame.
The code of the mesh is taken from this codepen, where it is written with plain three.js rather than fiber:
https://codepen.io/marioecg/pen/mdrvgpq (the link to the three.js package is blocked by CORS)
import * as THREE from "https://cdn.skypack.dev/three@0.124.0";
import { OrbitControls } from "https://cdn.skypack.dev/three/examples/jsm/controls/OrbitControls";
import * as dat from "https://cdn.skypack.dev/dat.gui@0.7.7";
const gui = new dat.GUI();
const settings = {
speed: 0.2,
density: 1.5,
strength: 0.2,
frequency: 3.0,
amplitude: 6.0,
intensity: 7.0,
};
const folder1 = gui.addFolder('Noise');
const folder2 = gui.addFolder('Rotation');
const folder3 = gui.addFolder('Color');
folder1.add(settings, 'speed', 0.1, 1, 0.01);
folder1.add(settings, 'density', 0, 10, 0.01);
folder1.add(settings, 'strength', 0, 2, 0.01);
folder2.add(settings, 'frequency', 0, 10, 0.1);
folder2.add(settings, 'amplitude', 0, 10, 0.1);
folder3.add(settings, 'intensity', 0, 10, 0.1);
const noise = `
// GLSL textureless classic 3D noise "cnoise",
// with an RSL-style periodic variant "pnoise".
// Author: Stefan Gustavson (stefan.gustavson@liu.se)
// Version: 2011-10-11
//
// Many thanks to Ian McEwan of Ashima Arts for the
// ideas for permutation and gradient selection.
//
// Copyright (c) 2011 Stefan Gustavson. All rights reserved.
// Distributed under the MIT license. See LICENSE file.
// https://github.com/ashima/webgl-noise
//
vec3 mod289(vec3 x)
{
return x - floor(x * (1.0 / 289.0)) * 289.0;
}
vec4 mod289(vec4 x)
{
return x - floor(x * (1.0 / 289.0)) * 289.0;
}
vec4 permute(vec4 x)
{
return mod289(((x*34.0)+1.0)*x);
}
vec4 taylorInvSqrt(vec4 r)
{
return 1.79284291400159 - 0.85373472095314 * r;
}
vec3 fade(vec3 t) {
return t*t*t*(t*(t*6.0-15.0)+10.0);
}
// Classic Perlin noise, periodic variant
float pnoise(vec3 P, vec3 rep)
{
vec3 Pi0 = mod(floor(P), rep); // Integer part, modulo period
vec3 Pi1 = mod(Pi0 + vec3(1.0), rep); // Integer part + 1, mod period
Pi0 = mod289(Pi0);
Pi1 = mod289(Pi1);
vec3 Pf0 = fract(P); // Fractional part for interpolation
vec3 Pf1 = Pf0 - vec3(1.0); // Fractional part - 1.0
vec4 ix = vec4(Pi0.x, Pi1.x, Pi0.x, Pi1.x);
vec4 iy = vec4(Pi0.yy, Pi1.yy);
vec4 iz0 = Pi0.zzzz;
vec4 iz1 = Pi1.zzzz;
vec4 ixy = permute(permute(ix) + iy);
vec4 ixy0 = permute(ixy + iz0);
vec4 ixy1 = permute(ixy + iz1);
vec4 gx0 = ixy0 * (1.0 / 7.0);
vec4 gy0 = fract(floor(gx0) * (1.0 / 7.0)) - 0.5;
gx0 = fract(gx0);
vec4 gz0 = vec4(0.5) - abs(gx0) - abs(gy0);
vec4 sz0 = step(gz0, vec4(0.0));
gx0 -= sz0 * (step(0.0, gx0) - 0.5);
gy0 -= sz0 * (step(0.0, gy0) - 0.5);
vec4 gx1 = ixy1 * (1.0 / 7.0);
vec4 gy1 = fract(floor(gx1) * (1.0 / 7.0)) - 0.5;
gx1 = fract(gx1);
vec4 gz1 = vec4(0.5) - abs(gx1) - abs(gy1);
vec4 sz1 = step(gz1, vec4(0.0));
gx1 -= sz1 * (step(0.0, gx1) - 0.5);
gy1 -= sz1 * (step(0.0, gy1) - 0.5);
vec3 g000 = vec3(gx0.x,gy0.x,gz0.x);
vec3 g100 = vec3(gx0.y,gy0.y,gz0.y);
vec3 g010 = vec3(gx0.z,gy0.z,gz0.z);
vec3 g110 = vec3(gx0.w,gy0.w,gz0.w);
vec3 g001 = vec3(gx1.x,gy1.x,gz1.x);
vec3 g101 = vec3(gx1.y,gy1.y,gz1.y);
vec3 g011 = vec3(gx1.z,gy1.z,gz1.z);
vec3 g111 = vec3(gx1.w,gy1.w,gz1.w);
vec4 norm0 = taylorInvSqrt(vec4(dot(g000, g000), dot(g010, g010), dot(g100, g100), dot(g110, g110)));
g000 *= norm0.x;
g010 *= norm0.y;
g100 *= norm0.z;
g110 *= norm0.w;
vec4 norm1 = taylorInvSqrt(vec4(dot(g001, g001), dot(g011, g011), dot(g101, g101), dot(g111, g111)));
g001 *= norm1.x;
g011 *= norm1.y;
g101 *= norm1.z;
g111 *= norm1.w;
float n000 = dot(g000, Pf0);
float n100 = dot(g100, vec3(Pf1.x, Pf0.yz));
float n010 = dot(g010, vec3(Pf0.x, Pf1.y, Pf0.z));
float n110 = dot(g110, vec3(Pf1.xy, Pf0.z));
float n001 = dot(g001, vec3(Pf0.xy, Pf1.z));
float n101 = dot(g101, vec3(Pf1.x, Pf0.y, Pf1.z));
float n011 = dot(g011, vec3(Pf0.x, Pf1.yz));
float n111 = dot(g111, Pf1);
vec3 fade_xyz = fade(Pf0);
vec4 n_z = mix(vec4(n000, n100, n010, n110), vec4(n001, n101, n011, n111), fade_xyz.z);
vec2 n_yz = mix(n_z.xy, n_z.zw, fade_xyz.y);
float n_xyz = mix(n_yz.x, n_yz.y, fade_xyz.x);
return 2.2 * n_xyz;
}
`;
const rotation = `
mat3 rotation3dY(float angle) {
float s = sin(angle);
float c = cos(angle);
return mat3(
c, 0.0, -s,
0.0, 1.0, 0.0,
s, 0.0, c
);
}
vec3 rotateY(vec3 v, float angle) {
return rotation3dY(angle) * v;
}
`;
const vertexShader = `
varying vec2 vUv;
varying float vDistort;
uniform float uTime;
uniform float uSpeed;
uniform float uNoiseDensity;
uniform float uNoiseStrength;
uniform float uFrequency;
uniform float uAmplitude;
${noise}
${rotation}
void main() {
vUv = uv;
float t = uTime * uSpeed;
float distortion = pnoise((normal + t) * uNoiseDensity, vec3(10.0)) * uNoiseStrength;
vec3 pos = position + (normal * distortion);
float angle = sin(uv.y * uFrequency + t) * uAmplitude;
pos = rotateY(pos, angle);
vDistort = distortion;
gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.);
}
`;
const fragmentShader = `
varying vec2 vUv;
varying float vDistort;
uniform float uTime;
uniform float uIntensity;
vec3 cosPalette(float t, vec3 a, vec3 b, vec3 c, vec3 d) {
return a + b * cos(6.28318 * (c * t + d));
}
void main() {
float distort = vDistort * uIntensity;
vec3 brightness = vec3(0.5, 0.5, 0.5);
vec3 contrast = vec3(0.5, 0.5, 0.5);
vec3 oscilation = vec3(1.0, 1.0, 1.0);
vec3 phase = vec3(0.0, 0.1, 0.2);
vec3 color = cosPalette(distort, brightness, contrast, oscilation, phase);
gl_FragColor = vec4(color, 1.0);
}
`;
class Scene {
constructor() {
this.renderer = new THREE.WebGLRenderer({ antialias: true });
this.renderer.setPixelRatio(Math.min(window.devicePixelRatio, 1.5));
this.renderer.setSize(window.innerWidth, window.innerHeight);
this.renderer.setClearColor('black', 1);
this.camera = new THREE.PerspectiveCamera(
45,
window.innerWidth / window.innerHeight,
0.1,
1000
);
this.camera.position.set(0, 0, 4);
this.scene = new THREE.Scene();
this.clock = new THREE.Clock();
this.controls = new OrbitControls(this.camera, this.renderer.domElement);
this.init();
this.animate();
}
init() {
this.addCanvas();
this.addElements();
this.addEvents();
}
addCanvas() {
const canvas = this.renderer.domElement;
canvas.classList.add('webgl');
document.body.appendChild(canvas);
}
addElements() {
const geometry = new THREE.IcosahedronBufferGeometry(1, 64);
const material = new THREE.ShaderMaterial({
vertexShader,
fragmentShader,
uniforms: {
uTime: { value: 0 },
uSpeed: { value: settings.speed },
uNoiseDensity: { value: settings.density },
uNoiseStrength: { value: settings.strength },
uFrequency: { value: settings.frequency },
uAmplitude: { value: settings.amplitude },
uIntensity: { value: settings.intensity },
},
// wireframe: true,
});
this.mesh = new THREE.Mesh(geometry, material);
this.scene.add(this.mesh);
}
addEvents() {
window.addEventListener('resize', this.resize.bind(this));
}
resize() {
let width = window.innerWidth;
let height = window.innerHeight;
this.camera.aspect = width / height;
this.renderer.setSize(width, height);
this.camera.updateProjectionMatrix();
}
animate() {
requestAnimationFrame(this.animate.bind(this));
this.render();
}
render() {
this.controls.update();
// Update uniforms
this.mesh.material.uniforms.uTime.value = this.clock.getElapsedTime();
this.mesh.material.uniforms.uSpeed.value = settings.speed;
this.mesh.material.uniforms.uNoiseDensity.value = settings.density;
this.mesh.material.uniforms.uNoiseStrength.value = settings.strength;
this.mesh.material.uniforms.uFrequency.value = settings.frequency;
this.mesh.material.uniforms.uAmplitude.value = settings.amplitude;
this.mesh.material.uniforms.uIntensity.value = settings.intensity;
this.renderer.render(this.scene, this.camera);
}
}
new Scene();
Here's the bubble that should be created by the code: https://tympanus.net/Tutorials/WebGLBlobs/index3.html
How would you do it? Thanks.
If you bring the mesh into its own function component, you can then use the useFrame hook to update the uniform every frame using a ref of the mesh. (In this case, you could also apply the ref directly to the material)
function MovingBlob() {
  const mesh = useRef()

  useFrame(({ clock }) => {
    if (mesh.current) {
      mesh.current.material.uniforms.uTime.value = clock.elapsedTime;
    }
  })

  return (
    <mesh ref={mesh}>
      <icosahedronBufferGeometry attach="geometry" args={[1, 64]} />
      <shaderMaterial attach="material" {...data} />
    </mesh>
  )
}
This will update the uniform every frame without triggering any re-rendering in the scene, which would happen if you were calling setState every time.

Cannot create a diffuse lighting shader that works in Unity with a SPOTLIGHT

Code below. I have tried using https://catlikecoding.com/unity/tutorials/rendering/part-5/#5, but I cannot make sense of the tutorials, what they want me to do, or how to make the edits they request.
Shader "Unlit/DiffuseLighting"
{
Properties
{
_MainTex("Texture", 2D) = "white" {}
_LightPoint("Light Point Position", Vector) = (0, 0, 0, 0)
}
SubShader
{
Tags { "RenderType" = "Opaque" }
LOD 100
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma multi_compile_fog
#pragma multi_compile DIRECTIONAL POINT SPOT
#include "UnityCG.cginc"
struct appdata
{
float4 vertex : POSITION;
float4 normal : NORMAL;
float2 uv : TEXCOORD0;
};
struct v2f
{
float2 uv : TEXCOORD0;
float3 worldNormal : TEXCOORD1;
float3 worldPosition : TEXCOORD2;
UNITY_FOG_COORDS(1)
float4 vertex : SV_POSITION;
};
sampler2D _MainTex;
float4 _MainTex_ST;
float4 _LightPoint;
v2f vert(appdata v)
{
v2f o;
o.worldNormal = UnityObjectToWorldNormal(v.normal);
o.vertex = UnityObjectToClipPos(v.vertex);
o.worldPosition = mul(unity_ObjectToWorld, v.vertex);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
UNITY_TRANSFER_FOG(o,o.vertex);
return o;
}
fixed4 frag(v2f i) : SV_Target
{
#if defined(POINT)||defined(SPOT)
fixed3 lightDirection = normalize(_WorldSpaceLightPos0.xyz - i.worldPosition);
fixed3 lightDifference = i.worldPosition - _LightPoint.xyz;
fixed intensity = max(-1 * dot(lightDirection, i.worldNormal), 0);
fixed4 col = intensity * tex2D(_MainTex, i.uv);
UNITY_APPLY_FOG(i.fogCoord, col);
return col;
#else
fixed3 lightDifference = i.worldPosition - _LightPoint.xyz;
fixed3 lightDirection = normalize(lightDifference);
fixed intensity = max(-1 * dot(lightDirection, i.worldNormal), 0);
fixed4 col = intensity * tex2D(_MainTex, i.uv);
UNITY_APPLY_FOG(i.fogCoord, col);
return col;
#endif
}
ENDCG
}
}
}
I tried following a tutorial; I went to its Discord and they redirected me to https://catlikecoding.com/unity/tutorials/rendering/part-5/#5, and now I am stuck and do not know how to make this work with the spotlight in our scene.

What am I doing wrong when trying to assign a value in hlsl?

I have this in a geometry shader
cbuffer cbFixed
{
    float2 TexC[4] =
    {
        float2(0.0f, 1.0f),
        float2(0.0f, 0.0f),
        float2(1.0f, 1.0f),
        float2(1.0f, 0.0f)
    };
};

struct PS_INPUT
{
    float4 Pos : SV_POSITION;
    float4 PosWorld : POSITION;
    float4 Norm : NORMAL;
    float2 Tex : TEXCOORD;
    uint PrimID : SV_PrimitiveID;
};
and later...
PS_INPUT gout;
for (int i = 0; i < 4; i++)
{
    // other stuff
    gout.Tex = TexC[i];
    // other stuff
}
However, for some reason this does not work as expected, by which I mean that the textures are not applied, whereas this does:
for (int i = 0; i < 4; i++)
{
    if (i == 0) gout.Tex = float2(0.0f, 1.0f);
    if (i == 1) gout.Tex = float2(0.0f, 0.0f);
    if (i == 2) gout.Tex = float2(1.0f, 1.0f);
    if (i == 3) gout.Tex = float2(1.0f, 0.0f);
}
Any idea why? I didn't want to make this a long post so kept the detail to a minimum.
Your TexC values are ignored because variables inside a cbuffer have to be filled from the application side with a *SetConstantBuffers call (for a geometry shader, GSSetConstantBuffers); default initializers like yours are not applied, and if you never set the buffer you get undefined behaviour (in most cases you'll read zeros). What you want is to write something like this:
static const float2 TexC[4] =
{
    float2(0.0f, 1.0f),
    float2(0.0f, 0.0f),
    float2(1.0f, 1.0f),
    float2(1.0f, 0.0f)
};
instead of cbuffer.

OpenGL ortho projection is broken

So I just added ortho projection to my rendering and everything stopped rendering... If I remove it, it works again. This is my matrix code:
#include <stdlib.h>
#include <stdlib.h>
#include <math.h>
matrix4x4 init_matrix4x4() {
matrix4x4 m = calloc(16, sizeof(float));
m[0] = 1; m[1] = 0; m[2] = 0; m[3] = 0;
m[4] = 0; m[5] = 1; m[6] = 0; m[7] = 0;
m[8] = 0; m[9] = 0; m[10] = 1; m[11] = 0;
m[12] = 0; m[13] = 0; m[14] = 0; m[15] = 1;
return m;
}
void translate_matrix4x4(matrix4x4* matrix, float x, float y, float z) {
matrix4x4 m = (*matrix);
m[12] = m[0] * x + m[4] * y + m[8] * z + m[12];
m[13] = m[1] * x + m[5] * y + m[9] * z + m[13];
m[14] = m[2] * x + m[6] * y + m[10] * z + m[14];
m[15] = m[3] * x + m[7] * y + m[11] * z + m[15];
}
void ortho_matrix4x4(matrix4x4* matrix, float left, float right, float bottom, float top, float near, float far) {
matrix4x4 m = (*matrix);
m[0] = 2 / (right-left);
m[1] = 0;
m[2] = 0;
m[3] = 0;
m[4] = 0;
m[5] = 2 / (top - bottom);
m[6] = 0;
m[7] = 0;
m[8] = 0;
m[9] = 0;
m[10] = 1 / (far - near);
m[11] = 0;
m[12] = (left + right) / (left - right);
m[13] = (top + bottom) / (bottom - top);
m[14] = near / (near - far);
m[15] = 1;
}
void mat4_identity(matrix4x4* matrix) {
matrix4x4 out = (*matrix);
out[0] = 1;
out[1] = 0;
out[2] = 0;
out[3] = 0;
out[4] = 0;
out[5] = 1;
out[6] = 0;
out[7] = 0;
out[8] = 0;
out[9] = 0;
out[10] = 1;
out[11] = 0;
out[12] = 0;
out[13] = 0;
out[14] = 0;
out[15] = 1;
}
void mat4_lookAtf(matrix4x4* matrix, float eye[3], float center[3], float up[3]) {
matrix4x4 out = (*matrix);
float x0, x1, x2, y0, y1, y2, z0, z1, z2, len,
eyex = eye[0],
eyey = eye[1],
eyez = eye[2],
upx = up[0],
upy = up[1],
upz = up[2],
centerx = center[0],
centery = center[1],
centerz = center[2];
if (fabs(eyex - centerx) < 0.000001 &&
fabs(eyey - centery) < 0.000001 &&
fabs(eyez - centerz) < 0.000001) {
mat4_identity(&out);
return;
}
z0 = eyex - centerx;
z1 = eyey - centery;
z2 = eyez - centerz;
len = 1 / sqrt/*f*/(z0 * z0 + z1 * z1 + z2 * z2);
z0 *= len;
z1 *= len;
z2 *= len;
x0 = upy * z2 - upz * z1;
x1 = upz * z0 - upx * z2;
x2 = upx * z1 - upy * z0;
len = sqrt(x0 * x0 + x1 * x1 + x2 * x2);
if (!len) {
x0 = 0;
x1 = 0;
x2 = 0;
} else {
len = 1 / len;
x0 *= len;
x1 *= len;
x2 *= len;
}
y0 = z1 * x2 - z2 * x1;
y1 = z2 * x0 - z0 * x2;
y2 = z0 * x1 - z1 * x0;
len = sqrt(y0 * y0 + y1 * y1 + y2 * y2);
if (!len) {
y0 = 0;
y1 = 0;
y2 = 0;
} else {
len = 1 / len;
y0 *= len;
y1 *= len;
y2 *= len;
}
out[0] = x0;
out[1] = y0;
out[2] = z0;
out[3] = 0;
out[4] = x1;
out[5] = y1;
out[6] = z1;
out[7] = 0;
out[8] = x2;
out[9] = y2;
out[10] = z2;
out[11] = 0;
out[12] = -(x0 * eyex + x1 * eyey + x2 * eyez);
out[13] = -(y0 * eyex + y1 * eyey + y2 * eyez);
out[14] = -(z0 * eyex + z1 * eyey + z2 * eyez);
out[15] = 1;
};
And here is main.c, where I render things:
#include <glad/glad.h>
#include <GLFW/glfw3.h>
#include <stdio.h>
#include <stdlib.h>
#include "include/matrix.h"
#include "include/io.h"
const int WIDTH = 640;
const int HEIGHT = 480;
// called when user resizes window
void framebuffer_size_callback(GLFWwindow* window, int width, int height) {
glViewport(0, 0, width, height);
}
// called when we receive input
void processInput(GLFWwindow *window) {
if(glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS)
glfwSetWindowShouldClose(window, 1);
}
GLuint get_checker_texture() {
unsigned char texDat[64];
for (int i = 0; i < 64; ++i)
texDat[i] = ((i + (i / 8)) % 2) * 128 + 127;
//upload to GPU texture
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, 8, 8, 0, GL_RED, GL_UNSIGNED_BYTE, texDat);
glGenerateMipmap(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);
return tex;
}
//void render_box(renderable* this, unsigned int vbo, unsigned int vao, unsigned int ebo) {
// draw_texture(this->texture, this->x, this->y, this->z, vbo, vao, ebo);
//}
int main(int argc, char* argv[]) {
glfwInit();
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
#ifdef __APPLE__
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); // only on MACOS
#endif
// creating the window
GLFWwindow* window = glfwCreateWindow(WIDTH, HEIGHT, "OpenGL App", NULL, NULL);
if (window == NULL) {
printf("Failed to create GLFW window");
glfwTerminate();
return -1;
}
glfwMakeContextCurrent(window);
// hook on window resize
glfwSetFramebufferSizeCallback(window, framebuffer_size_callback);
if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress)) {
printf("Failed to initialize GLAD");
return -1;
}
printf("OpenGL %d.%d\n", GLVersion.major, GLVersion.minor);
glEnable(GL_DEPTH_TEST);
glViewport(0, 0, WIDTH, HEIGHT);
unsigned int tex = get_checker_texture();
const char* vertex_shader_src = read_file("res/shaders/textured_and_positioned.vs.glsl");
unsigned int vertex_shader;
vertex_shader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vertex_shader, 1, &vertex_shader_src, NULL);
glCompileShader(vertex_shader);
int success;
char infoLog[512];
glGetShaderiv(vertex_shader, GL_COMPILE_STATUS, &success);
if (!success) {
glGetShaderInfoLog(vertex_shader, 512, NULL, infoLog);
printf("%s\n", infoLog);
}
const char* fragment_shader_src = read_file("res/shaders/textured_and_positioned.fs.glsl");
unsigned int fragment_shader;
fragment_shader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fragment_shader, 1, &fragment_shader_src, NULL);
glCompileShader(fragment_shader);
int success0;
char infoLog0[512];
glGetShaderiv(fragment_shader, GL_COMPILE_STATUS, &success0);
if (!success0) {
glGetShaderInfoLog(fragment_shader, 512, NULL, infoLog0);
printf("%s\n", infoLog0);
}
unsigned int shaderProgram;
shaderProgram = glCreateProgram();
glAttachShader(shaderProgram, vertex_shader);
glAttachShader(shaderProgram, fragment_shader);
glLinkProgram(shaderProgram);
unsigned uniform_sampler_ourTexture = glGetUniformLocation(shaderProgram, "ourTexture");
unsigned uniform_mat4_model = glGetUniformLocation(shaderProgram, "model");
unsigned uniform_mat4_view = glGetUniformLocation(shaderProgram, "view");
unsigned uniform_mat4_perspective = glGetUniformLocation(shaderProgram, "perspective");
int success1;
char infoLog1[512];
glGetProgramiv(shaderProgram, GL_LINK_STATUS, &success1);
if(!success1) {
glGetProgramInfoLog(shaderProgram, 512, NULL, infoLog1);
printf("%s\n", infoLog1);
}
float vertices[] = {
// positions // colors // texture coords
0.1f, 0.1f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, // top right
0.1f, -0.1f, 0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, // bottom right
-0.1f, -0.1f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, // bottom left
-0.1f, 0.1f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0 // top left
};
unsigned elements[] = {
0, 1, 2, // triangle
2, 3, 0 // triangle
};
unsigned int vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
matrix4x4 model = init_matrix4x4();
matrix4x4 view = init_matrix4x4();
translate_matrix4x4(&view, 0.0f, 0.0f, 0.0f);
float x = 0.0f;
float y = 0.0f;
float z = 0.0f;
matrix4x4 perspective = calloc(16, sizeof(float));
ortho_matrix4x4(&perspective, 0.0f, 640.0f, 0.0f, 480.0f, 0.1f, 100.0f);
unsigned int vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
// positions
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(0 * sizeof(float)));
glEnableVertexAttribArray(0);
// colors
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(3 * sizeof(float)));
glEnableVertexAttribArray(1);
// texture coords
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(6 * sizeof(float)));
glEnableVertexAttribArray(2);
unsigned int ebo;
glGenBuffers(1, &ebo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(elements), elements, GL_STATIC_DRAW);
glUseProgram(shaderProgram);
glUniformMatrix4fv(uniform_mat4_view, 1, GL_FALSE, view);
glUniformMatrix4fv(uniform_mat4_perspective, 1, GL_FALSE, perspective);
// render loop
while(!glfwWindowShouldClose(window)) {
processInput(window);
// render here
glClearColor(
0, 0, 0, 0
);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
glUniform1i(uniform_sampler_ourTexture, 0);
translate_matrix4x4(&model, x, y, z);
glUniformMatrix4fv(uniform_mat4_model, 1, GL_FALSE, model);
//x += 0.0001f;
//y += 0.0001f;
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
glfwSwapBuffers(window);
glfwPollEvents();
}
glfwTerminate();
return 0;
}
Here is the vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aColor;
layout (location = 2) in vec2 aTexCoord;
uniform mat4 model;
uniform mat4 view;
uniform mat4 perspective;
out vec3 ourColor;
out vec2 TexCoord;
void main()
{
gl_Position = perspective * view * model * vec4(aPos, 1.0);
ourColor = aColor;
TexCoord = aTexCoord;
}
Here is the fragment shader:
#version 330 core
out vec4 FragColor;
in vec3 ourColor;
in vec2 TexCoord;
uniform sampler2D ourTexture;
void main()
{
FragColor = vec4(vec3(texture(ourTexture, TexCoord).r), 1.);
}
Now, if I remove the perspective matrix (the ortho matrix) from the shader, the checkered texture renders as it should.
What is wrong here?
Is it my shader or is it the matrix ortho function?
Your matrices are stored in row-major order, yet you submit them to the uniforms without transposing and multiply right-associatively in the shader. You can either:
store the matrices in column-major order, or
transpose them when loading the uniform (pass GL_TRUE as the transpose argument of glUniformMatrix4fv), or
switch to left-associative multiplication in the shader (gl_Position = vec4(aPos, 1.0) * model * view * perspective;).
Each has the same effect.

Hand touches move the object model in rend ios?

I downloaded the project from https://github.com/antonholmquist/rend-ios.
I ran this project: the teapot does a full 360-degree rotation, but the rotation never stops and touches don't rotate the teapot. I managed to stop the rotation, and that works properly; now that the rotation is stopped, I want to move the teapot with touches, so I tried the code below:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint currentMovementPosition = [[touches anyObject] locationInView:glView_];
    [self renderByRotatingAroundX:(lastMovementPosition.x - currentMovementPosition.x)
                  rotatingAroundY:(lastMovementPosition.y - currentMovementPosition.y)
                          scaling:1.0f
                   translationInX:0.0f
                   translationInY:0.0f];
    lastMovementPosition = currentMovementPosition;
}
- (void)renderByRotatingAroundX:(float)xRotation rotatingAroundY:(float)yRotation scaling:(float)scaleF translationInX:(float)xTranslation translationInY:(float)yTranslation{
currentCalculatedMatrix = CATransform3DIdentity;
currentCalculatedMatrix = CATransform3DTranslate(currentCalculatedMatrix, 0.0, -0.2, 0.0);
currentCalculatedMatrix = CATransform3DScale(currentCalculatedMatrix, 4.5, 4.5 * (320.0/480.0), 4.5);
glClearColor(0.0f,0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
GLfloat currentModelViewMatrix[16];
// Perform incremental rotation based on current angles in X and Y
if ((xRotation != 0.0) || (yRotation != 0.0))
{
GLfloat totalRotation = sqrt(xRotation*xRotation + yRotation*yRotation);
CATransform3D temporaryMatrix = CATransform3DRotate(currentCalculatedMatrix, totalRotation * M_PI / 180.0,
((xRotation/totalRotation) * currentCalculatedMatrix.m12 + (yRotation/totalRotation) * currentCalculatedMatrix.m11),
((xRotation/totalRotation) * currentCalculatedMatrix.m22 + (yRotation/totalRotation) * currentCalculatedMatrix.m21),
((xRotation/totalRotation) * currentCalculatedMatrix.m32 + (yRotation/totalRotation) * currentCalculatedMatrix.m31));
if ((temporaryMatrix.m11 >= -100.0) && (temporaryMatrix.m11 <= 100.0))
currentCalculatedMatrix = temporaryMatrix;
}
else
{
}
// Draw the teapot model
[self convert3DTransform:&currentCalculatedMatrix toMatrix:currentModelViewMatrix];
[plainDisplayProgram use];
glUniformMatrix4fv(plainDisplayModelViewMatrix, 1, 0, currentModelViewMatrix);
glVertexAttribPointer(plainDisplayPositionAttribute, 3, GL_FLOAT, 0, 0, cube_vertices);
glEnableVertexAttribArray(plainDisplayPositionAttribute);
NSLog(#"posit:%d,matrix=%d",plainDisplayPositionAttribute,plainDisplayModelViewMatrix);
//Draw teapot. The new_teapot_indicies array is an RLE (run-length encoded) version of the teapot_indices array in teapot.h
for(int i = 0; i < num_cube_indices; i += cube_indices[i] + 1)
{
NSLog(#"count:%d,i=%d",num_cube_indices,i);
glDrawElements(GL_TRIANGLES, cube_indices[i], GL_UNSIGNED_SHORT, &cube_indices[i + 1]);
}
[glView_ presentFramebuffer];
}
- (BOOL)presentFramebuffer
{
BOOL success = FALSE;
if ([EAGLContext currentContext])
{
#ifdef MSAA
glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, multisampleFramebuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE,framebuffer);
glResolveMultisampleFramebufferAPPLE();
#endif
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
success = [[EAGLContext currentContext] presentRenderbuffer:GL_RENDERBUFFER];
#ifdef MSAA
glBindFramebuffer(GL_FRAMEBUFFER, multisampleFramebuffer);
#endif
}
return success;
}
When I try to move the teapot it does not move; only a black screen is shown.
How can I show the teapot model and move it with touches?
Using touchesMoved, it will rotate cleanly in all directions. Try this; xAngle and yAngle are declared globally and initialised to 0.0.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    NSLog(@"Touch Returns : %@", touches);
    CGPoint location = [touch locationInView:glView_];
    CGPoint lastLoc = [touch previousLocationInView:glView_];
    CGPoint diff = CGPointMake(lastLoc.x - location.x, lastLoc.y - location.y);
    float rotX = 1 * DegreesToRadians(diff.y / 0.2);
    float rotY = 1 * DegreesToRadians(diff.x / 0.2);
    xAngle += rotX;
    yAngle += rotY;
    teapotNode_.rotation = CC3VectorMake(xAngle, yAngle, 0);
    director_.running = YES;
}
I found the answer myself in the rend-ios project (https://github.com/antonholmquist/rend-ios): every button touch moves the object model using the code below:
- (void)rightButtonPressed
{
    float angle;
    angle = teapotNode_.rotationAngle;
    angle += 0.4;
    director_.running = YES; // Redirector object
    if (director_.frameInterval == 2)
    {
        director_.running = NO;
    }
}
It works fine.
