Why is 'static' needed in front of this variable?

The documentation does not say much about this behavior:
Variable Syntax
static Mark a local variable so that it is initialized one time and persists between function calls. If the declaration does not include an initializer, the value is set to zero. A global variable marked static is not visible to an application.
Can you explain why removing the static modifier from the matrices produces unexpected output?
static float3x3 protanopia ={
0.567f, 0.433f, 0.000f,
0.558f, 0.442f, 0.000f,
0.000f, 0.242f, 0.758f,
};
Normal result with static:
Incorrect without static:
Here's the complete code:
sampler2D input : register(s0);
// new HLSL shader
// modify the comment parameters to reflect your shader parameters
/// <summary>Explain the purpose of this variable.</summary>
/// <minValue>0</minValue>
/// <maxValue>8</maxValue>
/// <defaultValue>0</defaultValue>
float Filter : register(C0);
static float3x3 norm ={
1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f,
};
static float3x3 protanopia ={
0.567f, 0.433f, 0.000f,
0.558f, 0.442f, 0.000f,
0.000f, 0.242f, 0.758f,
};
float4 main(float2 uv : TEXCOORD) : COLOR
{
int filter = (int)abs(Filter);
float3x3 mat;
switch (filter)
{
case 0:
mat = norm;
break;
case 1:
mat = protanopia;
break;
default:
mat = norm; // fall back to identity so mat is never left uninitialized
break;
}
float4 color = tex2D( input , uv.xy);
float3 rgb = {
color.x * mat._m00 + color.y * mat._m01 + color.z * mat._m02,
color.x * mat._m10 + color.y * mat._m11 + color.z * mat._m12,
color.x * mat._m20 + color.y * mat._m21 + color.z * mat._m22
};
return float4(rgb,1);
}

You have to manage the storage of non-static globals yourself. With static everything works as expected, because the compiler takes care of reserving storage and baking the initializer values into it. Without static the variable becomes a shader constant that the application is expected to fill, so you would have to retrieve the default value yourself and copy it into the constant buffer by hand, for instance. Since nothing sets those constants here, the matrices read as zeros and the output is wrong.
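Whatever the storage class, the shader body is just a 3x3 matrix-vector product; as a quick sanity check of the protanopia values, here is the same transform as plain Python (the matrix entries are the ones from the shader above):

```python
# Protanopia matrix from the shader, as rows.
PROTANOPIA = [
    (0.567, 0.433, 0.000),
    (0.558, 0.442, 0.000),
    (0.000, 0.242, 0.758),
]

def apply_matrix(mat, rgb):
    """Multiply a 3x3 row-major matrix by an RGB column vector."""
    return tuple(sum(row[i] * rgb[i] for i in range(3)) for row in mat)

# Pure red gets smeared across the red and green channels.
print(apply_matrix(PROTANOPIA, (1.0, 0.0, 0.0)))  # (0.567, 0.558, 0.0)
```

With the matrix zeroed out (what happens when the initializer is lost), every input maps to black, which matches the "incorrect without static" result.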

Related

WPF Shader Effect - antialiasing not showing

I am running into a problem where I have a WPF shader effect (modified from Rene Schulte) to simulate a Dot Matrix Display (DMD). Everything works great, but all the dots are aliased.
See attached image.
I tried many WPF features to get antialiasing, with no luck.
In the constructor (the image is inside a TextBox):
RenderOptions.SetBitmapScalingMode(MarqueeTB, BitmapScalingMode.HighQuality);
RenderOptions.SetEdgeMode(MarqueeTB, EdgeMode.Unspecified);
RenderOptions.SetClearTypeHint(MarqueeTB, ClearTypeHint.Enabled);
I don't think it is my graphics card or Windows config. I did some testing on two PCs (Windows 8.1 and Windows 7), with the same results.
I have no clue how to proceed. Any help or advice would be welcome.
Thanks in advance, regards,
Shader code:
// Project: Shaders
//
// Description: Mosaic Shader for Coding4Fun.
//
// Changed by: $Author$
// Changed on: $Date$
// Changed in: $Revision$
// Project: $URL$
// Id: $Id$
//
//
// Copyright (c) 2010 Rene Schulte
//
/// <description>Mosaic Shader for Coding4Fun.</description>
/// <summary>The number pixel blocks.</summary>
/// <type>Single</type>
/// <minValue>2</minValue>
/// <maxValue>500</maxValue>
/// <defaultValue>50</defaultValue>
float BlockCount : register(C0);
/// <summary>The rounding of a pixel block.</summary>
/// <type>Single</type>
/// <minValue>0</minValue>
/// <maxValue>1</maxValue>
/// <defaultValue>0.45</defaultValue>
float Max : register(C2);
/// <summary>The aspect ratio of the image.</summary>
/// <type>Single</type>
/// <minValue>0</minValue>
/// <maxValue>10</maxValue>
/// <defaultValue>1</defaultValue>
float AspectRatio : register(C3);
// Sampler
sampler2D input : register(S0);
// Static computed vars for optimization
static float2 BlockCount2 = float2(BlockCount, BlockCount / AspectRatio);
static float2 BlockSize2 = 1.0f / BlockCount2;
// Shader
float4 main(float2 uv : TEXCOORD) : COLOR
{
// Calculate block center
float2 blockPos = floor(uv * BlockCount2);
float2 blockCenter = blockPos * BlockSize2 + BlockSize2 * 0.5;
// Scale coordinates back to original ratio for rounding
float2 uvScaled = float2(uv.x * AspectRatio, uv.y);
float2 blockCenterScaled = float2(blockCenter.x * AspectRatio, blockCenter.y);
// Round the block by testing the distance of the pixel coordinate to the center
float dist = length(uvScaled - blockCenterScaled) * BlockCount2;
if(dist < 0 || dist > Max)
{
return 1;
}
// Sample color at the calculated coordinate
return tex2D(input, blockCenter);
}
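The block-center arithmetic in the shader above just snaps each UV coordinate to the center of its grid cell; a small Python sketch of the same computation (a BlockCount of 10 is an arbitrary example value):

```python
import math

def block_center(u, block_count):
    """Snap a texture coordinate in [0, 1) to the center of its block."""
    block_size = 1.0 / block_count
    block_pos = math.floor(u * block_count)  # integer index of the block
    return block_pos * block_size + block_size * 0.5

print(block_center(0.26, 10))  # ~0.25, the center of the block [0.2, 0.3)
```

Every pixel inside a block samples the texture at that single center coordinate, which is what produces the dot pattern.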
Here is a solution and the corrected code. I am not sure it is 'the best' way, but it worked (see the antialiasedCircle part; solution from: http://gamedev.stackexchange.com/questions/34582/how-do-i-use-screen-space-derivatives-to-antialias-a-parametric-shape-in-a-pixel ).
//
// Project: Dot Matrix Display (DMD) Shader
// Inspired from the Mosaic shader, Copyright (c) 2010 Rene Schulte
/// <summary>The number pixel blocks.</summary>
/// <type>Single</type>
/// <minValue>2</minValue>
/// <maxValue>500</maxValue>
/// <defaultValue>34</defaultValue>
float BlockCount : register(C0);
/// <summary>The rounding of a pixel block.</summary>
/// <type>Single</type>
/// <minValue>0</minValue>
/// <maxValue>1</maxValue>
/// <defaultValue>0.45</defaultValue>
float Max : register(C2);
/// <summary>The aspect ratio of the image.</summary>
/// <type>Single</type>
/// <minValue>0</minValue>
/// <maxValue>10</maxValue>
/// <defaultValue>1.55</defaultValue>
float AspectRatio : register(C3);
/// <summary>The monochrome color used to tint the input.</summary>
/// <defaultValue>Yellow</defaultValue>
float4 FilterColor : register(C1);
/// <summary>Whether the output is monochrome.</summary>
/// <defaultValue>1</defaultValue>
float IsMonochrome : register(C4);
// Sampler
sampler2D input : register(S0);
// Static computed vars for optimization
static float2 BlockCount2 = float2(BlockCount, BlockCount / AspectRatio);
static float2 BlockSize2 = 1.0f / BlockCount2;
float4 setMonochrome(float4 color) : COLOR
{
float4 monochrome = color;
if(((int)IsMonochrome) == 1)
{
float3 rgb = color.rgb;
float luminance = dot(rgb, float3(0.30, 0.59, 0.11));
monochrome = float4(luminance * FilterColor.rgb, color.a);
}
return monochrome;
}
float4 SetDMD(float2 uv : TEXCOORD, sampler2D samp) : COLOR
{
// Calculate block center
float2 blockPos = floor(uv * BlockCount2);
float2 blockCenter = blockPos * BlockSize2 + BlockSize2 * 0.5;
// Scale coordinates back to original ratio for rounding
float2 uvScaled = float2(uv.x * AspectRatio, uv.y);
float2 blockCenterScaled = float2(blockCenter.x * AspectRatio, blockCenter.y);
// Round the block by testing the distance of the pixel coordinate to the center
float dist = length(uvScaled - blockCenterScaled) * BlockCount2;
float4 insideColor = tex2D(samp, blockCenter);
float4 outsideColor = insideColor;
outsideColor.r = 0;
outsideColor.g = 0;
outsideColor.b = 0;
outsideColor.a = 1;
float distFromEdge = Max - dist; // positive when inside the circle
float thresholdWidth = .22; // a constant you'd tune to get the right level of softness
float antialiasedCircle = saturate((distFromEdge / thresholdWidth) + 0.5);
return lerp(outsideColor, insideColor, antialiasedCircle);
}
// Shader
float4 main(float2 uv : TEXCOORD) : COLOR
{
float4 DMD = SetDMD(uv, input);
DMD = setMonochrome(DMD);
return DMD;
}
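The key lines are distFromEdge, thresholdWidth and the saturate/lerp pair: the signed distance from the dot's edge is divided by a transition width and clamped, giving a fractional coverage value instead of a hard in/out test. A Python sketch of the same math (Max and thresholdWidth values taken from the shader; the inside/outside stand-in values of 1 and 0 are just illustrative):

```python
def saturate(x):
    """Clamp to [0, 1], like HLSL's saturate()."""
    return max(0.0, min(1.0, x))

def lerp(a, b, t):
    """Linear interpolation, like HLSL's lerp()."""
    return a + (b - a) * t

def circle_coverage(dist, edge=0.45, width=0.22):
    """Coverage in [0, 1]: 1 well inside the dot, 0 well outside, 0.5 on the edge."""
    return saturate((edge - dist) / width + 0.5)

def shade(dist, inside=1.0, outside=0.0):
    return lerp(outside, inside, circle_coverage(dist))

print(shade(0.0))   # 1.0 (fully inside)
print(shade(0.45))  # 0.5 (right on the edge)
print(shade(1.0))   # 0.0 (fully outside)
```

It is the smooth ramp between 0 and 1 across the thresholdWidth band that removes the jagged edge the hard if/return version produced.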
Here is an image of the successful solution:
Is it layout rounding? Try MarqueeTB.UseLayoutRounding = false;

glGetUniform returns -1 for active uniform

I have this code calling glGetUniformLocation, but it returns -1 even though I'm using the uniform in my vertex shader. I get no errors from glGetError, glGetProgramInfoLog or glGetShaderInfoLog, and the shaders/program are all created successfully. I also only call this after the program is compiled and linked.
int projectionUniform = glGetUniformLocation( shaderProgram, "dfProjection" );
This is the vertex shader:
#version 410
uniform float dfRealTime;
uniform float dfGameTime;
uniform mat4 dfProjection;
uniform mat4 dfModelView;
layout(location = 0) in vec3 vertPosition;
layout(location = 1) in vec4 vertColor;
smooth out vec4 color;
out vec4 position;
void main() {
color = vertColor;
position = (dfModelView * dfProjection) * vec4(vertPosition, 1.0);
}
This is the fragment shader:
smooth in vec4 color;
out vec4 fragColor;
void main() {
fragColor = color;
}
There are three possibilities:
You have misspelled dfProjection in glGetUniformLocation, but that doesn't seem to be the case.
You are not binding the correct program with glUseProgram before calling glGetUniformLocation.
Or you are not using position in your fragment shader, which means dfProjection is not really active.
One more thing: from the code, it seems you are passing the shader handle to glGetUniformLocation; you should pass the linked program handle instead.
After your edit you are not using position in your fragment shader,
smooth in vec4 color;
out vec4 fragColor;
in vec4 position;
void main() {
// do sth with position here
fragColor = color*position;
}
Keep in mind that you still need to write gl_Position in order for the pipeline to know the final vertex position. But I was answering the question of why a uniform variable is not being detected.

Displaying RGBA images using Image class w/o pre-multiplied alpha

I'm trying to display the separate R, G, B and A channels of a texture based on user input. I'm using an Image class to display textures that have alpha channels. These textures are loaded in to BitmapSource objects and have a Format of Bgra32. The problem is that when I set the Image's Source to the BitmapSource, if I display any combination of the R, G or B channels, I always get pixels that are pre-multiplied by the alpha value. I wrote a really simple shader, pre-compiled it, and used a ShaderEffect class to assign to the Image's Effect property in order to separate and display the separate channels, but apparently, the shader is given the texture after WPF has pre-multiplied the alpha value onto the texture.
Here's the code snippet for setting the Image's Source:
BitmapSource b = MyClass.GetBitmapSource(filepath);
// just as a test, write out the bitmap to file to see if there's an alpha.
// see attached image1
BmpBitmapEncoder test = new BmpBitmapEncoder();
//test.Compression = TiffCompressOption.None;
FileStream stest = new FileStream(@"c:\temp\testbmp2.bmp", FileMode.Create);
test.Frames.Add(BitmapFrame.Create(b));
test.Save(stest);
stest.Close();
// effect is a class derived from ShaderEffect. The pixel shader associated with the
// effect displays the color channel of the texture loaded in the Image object
// depending on which member of the Point4D is set. In this case, we are showing
// the RGB channels but no alpha
effect.PixelMask = new System.Windows.Media.Media3D.Point4D(1.0f, 1.0f, 1.0f, 0.0f);
this.image1.Effect = effect;
this.image1.Source = b;
this.image1.UpdateLayout();
Here's the shader code (its pretty simple, but I figured I'd include it just for completeness):
sampler2D inputImage : register(s0);
float4 channelMasks : register(c0);
float4 main (float2 uv : TEXCOORD) : COLOR0
{
float4 outCol = tex2D(inputImage, uv);
if (!any(channelMasks.rgb - float3(1, 0, 0)))
{
outCol.rgb = float3(outCol.r, outCol.r, outCol.r);
}
else if (!any(channelMasks.rgb - float3(0, 1, 0)))
{
outCol.rgb = float3(outCol.g, outCol.g, outCol.g);
}
else if (!any(channelMasks.rgb - float3(0, 0, 1)))
{
outCol.rgb = float3(outCol.b, outCol.b, outCol.b);
}
else
{
outCol *= channelMasks;
}
if (channelMasks.a == 1.0)
{
outCol.r = outCol.a;
outCol.g = outCol.a;
outCol.b = outCol.a;
}
outCol.a = 1;
return outCol;
}
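For reference, premultiplied alpha simply scales each color channel by alpha, so (when alpha is nonzero) dividing the channels by alpha inside the shader would recover the straight values. A Python sketch of that round trip:

```python
def premultiply(rgb, a):
    """WPF-style premultiplication: each color channel is scaled by alpha."""
    return tuple(c * a for c in rgb)

def unpremultiply(rgb, a):
    """Recover straight (non-premultiplied) channels; alpha 0 carries no color."""
    if a == 0.0:
        return (0.0, 0.0, 0.0)
    return tuple(c / a for c in rgb)

straight = (0.8, 0.4, 0.2)
pre = premultiply(straight, 0.5)
print(pre)                      # (0.4, 0.2, 0.1)
print(unpremultiply(pre, 0.5))  # back to (0.8, 0.4, 0.2)
```

The catch, as described above, is that fully transparent pixels lose their color entirely, so un-premultiplying in the shader can only recover channels where alpha is nonzero.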
Here's the output from the code above:
(Sorry, I don't have enough reputation points to post images or, apparently, more than 2 links.)
The file save to disk (C:\temp\testbmp2.bmp) in photoshop:
http://screencast.com/t/eeEr5kGgPukz
Image as displayed in my WPF application (using image mask in code snippet above):
http://screencast.com/t/zkK0U5I7P7

How to fill polygon with different color than boundary?

I need to draw a polygon whose boundary lines are one color, filled with another color. Is there an easy way to do this? I currently draw two polygons, one for the interior color and one for the boundary. I think there must be a better way to do this. Thanks for your help.
glColor3d (1, 1., .7);
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
glBegin(GL_TRIANGLES);
glVertex3f(-0.8f, 0.0f, 0.0f);
glVertex3f(-0.6f, 0.0f, 0.0f);
glVertex3f(-0.7f, 0.2f, 0.0f);
glEnd();
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
glColor3d (.5, .5, .7);
glBegin(GL_TRIANGLES);
glVertex3f(-0.8f, 0.0f, 0.0f);
glVertex3f(-0.6f, 0.0f, 0.0f);
glVertex3f(-0.7f, 0.2f, 0.0f);
glEnd();
Thank you everyone for answering my question. I am fairly new to OpenGL and was looking for a simple answer to a simple question. The answer appears to be not so simple and could probably take a semester's worth of study.
A more modern approach would be to implement this via geometry shaders. This would work for OpenGL 3.2 and above as part of the core functionality, or for OpenGL 2.1 with extension GL_EXT_geometry_shader4.
This paper has all the relevant theory: Shader-Based Wireframe Drawing. It also provides a sample implementation of the simplest technique in GLSL.
Here is my own stab at it, basically a port of their implementation for OpenGL 3.3, limited to triangle primitives:
Vertex shader: (replace the inputs with whatever you use to pass in vertex data and the model-view (mv) and projection matrices)
#version 330
layout(location = 0) in vec4 position;
layout(location = 1) in mat4 mv;
out Data
{
vec4 position;
} vdata;
uniform mat4 projection;
void main()
{
vdata.position = projection * mv * position;
}
Geometry shader:
#version 330
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
in Data
{
vec4 position;
} vdata[3];
out Data
{
noperspective vec3 dist;
} gdata;
void main()
{
vec2 scale = vec2(500.0f, 500.0f); // scaling factor to make 'd' in frag shader big enough to show something
vec2 p0 = scale * vdata[0].position.xy/vdata[0].position.w;
vec2 p1 = scale * vdata[1].position.xy/vdata[1].position.w;
vec2 p2 = scale * vdata[2].position.xy/vdata[2].position.w;
vec2 v0 = p2-p1;
vec2 v1 = p2-p0;
vec2 v2 = p1-p0;
float area = abs(v1.x*v2.y - v1.y*v2.x);
gdata.dist = vec3(area/length(v0),0,0);
gl_Position = vdata[0].position;
EmitVertex();
gdata.dist = vec3(0,area/length(v1),0);
gl_Position = vdata[1].position;
EmitVertex();
gdata.dist = vec3(0,0,area/length(v2));
gl_Position = vdata[2].position;
EmitVertex();
EndPrimitive();
}
Fragment shader: (replace the colors with whatever you need!)
#version 330
in Data
{
noperspective vec3 dist;
} gdata;
out vec4 outputColor;
uniform sampler2D tex;
const vec4 wireframeColor = vec4(1.0f, 0.0f, 0.0f, 1.0f);
const vec4 fillColor = vec4(1.0f, 1.0f, 1.0f, 1.0f);
void main()
{
float d = min(gdata.dist.x, min(gdata.dist.y, gdata.dist.z));
float I = exp2(-2*d*d);
outputColor = mix(fillColor, wireframeColor, I);
}
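The heart of the technique: for each vertex the geometry shader emits its screen-space distance to the opposite edge (the cross-product "area" is twice the triangle's area, and dividing by an edge length gives the corresponding height); interpolation then hands every fragment its distance to the nearest edge, and exp2(-2*d*d) turns that into a wireframe intensity that is 1 on an edge and falls off smoothly. A Python sketch of the distance computation for one triangle:

```python
import math

def edge_distances(p0, p1, p2):
    """Screen-space distance from each vertex to its opposite edge."""
    v0 = (p2[0] - p1[0], p2[1] - p1[1])  # edge opposite p0
    v1 = (p2[0] - p0[0], p2[1] - p0[1])  # edge opposite p1
    v2 = (p1[0] - p0[0], p1[1] - p0[1])  # edge opposite p2
    area = abs(v1[0] * v2[1] - v1[1] * v2[0])  # twice the triangle's area
    return (area / math.hypot(*v0),
            area / math.hypot(*v1),
            area / math.hypot(*v2))

def wire_intensity(d):
    """1.0 on an edge, falling off smoothly with distance, as in the shader."""
    return 2.0 ** (-2.0 * d * d)

# Right triangle with legs 3 and 4: heights to the opposite edges.
print(edge_distances((0.0, 0.0), (3.0, 0.0), (0.0, 4.0)))  # (2.4, 3.0, 4.0)
print(wire_intensity(0.0))  # 1.0 on the edge itself
```

The noperspective interpolation qualifier matters here: the distances are measured in screen space, so they must be interpolated linearly in screen space rather than perspective-correctly.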
You can switch the fill mode between polygons, lines and points, using glPolygonMode.
In order to draw polygon lines in a different color you can do the following:
glPolygonMode(GL_FRONT_AND_BACK,GL_FILL);
draw_mesh( fill_color );
glPolygonMode(GL_FRONT_AND_BACK,GL_LINE);
glEnable(GL_POLYGON_OFFSET_LINE);
glPolygonOffset(-1.f,-1.f);
draw_mesh( line_color );
A line offset may be needed because OpenGL doesn't guarantee that the edges of polygons will be rasterized in exactly the same pixels as the lines. Without an explicit offset you may end up with lines being hidden by polygons due to a failed depth test.
There are two ways to do this:
the one you use at the moment (two polygons, one slightly larger than the other, or drawn after it)
a texture
To my knowledge there are no other possibilities, and from a performance standpoint both, especially the first as long as you only color-fill, are extremely fast.
I think you should see this answer:
fill and outline
First draw your triangle using
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL) and your desired fill color.
Then draw the triangle again using
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE) with your outline color.

HLSL hue change algorithm

I want to make a pixel shader for Silverlight that will help me change the Hue/Saturation/Lightness using sliders.
* Hue slider range: [-180, 180]
* Saturation slider range: [-100, 100]
* Lightness slider range: [-100, 100]
I managed to create a pixel shader which manipulates the Saturation and Lightness values, but I can't find any algorithm for changing the Hue value.
Can anyone provide me with an algorithm? Thank you.
Here is my HLSL code:
/// <summary>The hue offset.</summary>
/// <minValue>-180</minValue>
/// <maxValue>180</maxValue>
/// <defaultValue>0</defaultValue>
float Hue : register(C0);
/// <summary>The saturation offset.</summary>
/// <minValue>-100</minValue>
/// <maxValue>100</maxValue>
/// <defaultValue>0</defaultValue>
float Saturation : register(C1);
/// <summary>The lightness offset.</summary>
/// <minValue>-100</minValue>
/// <maxValue>100</maxValue>
/// <defaultValue>0</defaultValue>
float Lightness : register(C2);
sampler2D input : register(S0);
//--------------------------------------------------------------------------------------
// Pixel Shader
//--------------------------------------------------------------------------------------
float4 main(float2 uv : TEXCOORD) : COLOR
{
// some vars
float saturation = Saturation / 100 + 1;
float lightness = Lightness / 100;
float3 luminanceWeights = float3(0.299,0.587,0.114);
// input raw pixel
float4 srcPixel = tex2D(input, uv);
// Apply saturation
float luminance = dot(srcPixel, luminanceWeights);
float4 dstPixel = lerp(luminance, srcPixel, saturation);
// Apply lightness
dstPixel.rgb += lightness;
//retain the incoming alpha
dstPixel.a = srcPixel.a;
return dstPixel;
}
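The lerp-based saturation above has a clean interpretation: lerp(luminance, src, s) with s = 0 gives pure grayscale, s = 1 returns the input unchanged, and s > 1 extrapolates past the original (over-saturation). A Python sketch of the per-channel math, using the same Rec. 601 luminance weights as the shader:

```python
WEIGHTS = (0.299, 0.587, 0.114)  # Rec. 601 luminance weights, as in the shader

def adjust_saturation(rgb, slider):
    """slider in [-100, 100] as in the shader; 0 leaves the color unchanged."""
    s = slider / 100 + 1  # map to [0, 2]
    lum = sum(w * c for w, c in zip(WEIGHTS, rgb))
    # lerp(lum, c, s) per channel
    return tuple(lum + s * (c - lum) for c in rgb)

print(adjust_saturation((0.2, 0.5, 0.9), -100))  # grayscale: all channels = luminance
print(adjust_saturation((0.2, 0.5, 0.9), 0))     # unchanged
```

Lightness in the shader is then just a per-channel additive offset on top of this.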
This uses a slightly different input domain, but it can be adapted easily:
float Hue : register(C0); // 0..360, default 0
float Saturation : register(C1); // 0..2, default 1
float Luminosity : register(C2); // -1..1, default 0
sampler2D input1 : register(S0);
static float3x3 matrixH =
{
0.8164966f, 0, 0.5352037f,
-0.4082483f, 0.70710677f, 1.0548190f,
-0.4082483f, -0.70710677f, 0.1420281f
};
static float3x3 matrixH2 =
{
0.84630f, -0.37844f, -0.37844f,
-0.37265f, 0.33446f, -1.07975f,
0.57735f, 0.57735f, 0.57735f
};
float4 main(float2 uv : TEXCOORD) : COLOR
{
float4 c = tex2D(input1, uv);
float3x3 rotateZ =
{
cos(radians(Hue)), sin(radians(Hue)), 0,
-sin(radians(Hue)), cos(radians(Hue)), 0,
0, 0, 1
};
matrixH = mul(matrixH, rotateZ);
matrixH = mul(matrixH, matrixH2);
float i = 1 - Saturation;
float3x3 matrixS =
{
i*0.3086f+Saturation, i*0.3086f, i*0.3086f,
i*0.6094f, i*0.6094f+Saturation, i*0.6094f,
i*0.0820f, i*0.0820f, i*0.0820f+Saturation
};
matrixH = mul(matrixH, matrixS);
float3 c1 = mul(c.rgb, matrixH); // explicit .rgb avoids implicit float4-to-float3 truncation
c1 += Luminosity;
return float4(c1, c.a);
}
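What matrixH, rotateZ, and matrixH2 compose is a rotation of RGB space about the gray diagonal (1, 1, 1): matrixH aligns the gray axis with z, rotateZ spins by the hue angle, and matrixH2 rotates back. As a sketch of the underlying idea, here is the equivalent single-step rotation via Rodrigues' formula in Python (a plain hue rotation about the gray axis, without any luminance-preserving correction):

```python
import math

def hue_rotate(rgb, degrees):
    """Rotate an RGB color about the gray axis (1, 1, 1) by the given angle."""
    theta = math.radians(degrees)
    c, s = math.cos(theta), math.sin(theta)
    a = 1.0 / math.sqrt(3.0)  # each component of the normalized gray axis
    # Rodrigues: R = cos*I + (1-cos)*axis*axis^T + sin*[axis]_x
    m = [[(c if i == j else 0.0) + (1.0 - c) / 3.0 for j in range(3)]
         for i in range(3)]
    m[0][1] -= s * a; m[0][2] += s * a  # cross-product (skew) term
    m[1][0] += s * a; m[1][2] -= s * a
    m[2][0] -= s * a; m[2][1] += s * a
    return tuple(sum(m[i][j] * rgb[j] for j in range(3)) for i in range(3))

# A 120-degree hue rotation permutes the primaries: red becomes green.
print(hue_rotate((1.0, 0.0, 0.0), 120))  # approximately (0, 1, 0)
```

A 360-degree rotation returns the original color, which is a handy sanity check for any hue-shift implementation.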
The conversions between colour spaces are available at EasyRGB; see this post at nokola.com for a Silverlight implementation of a hue shift. It may be possible to fit hue, saturation and brightness into one PS 2.0 shader if you take the approach mentioned here, but I haven't tried.
Here is a great example of how to work with hue:
http://www.silverlightshow.net/news/Hue-Shift-in-Pixel-Shader-2.0-EasyPainter-Silverlight.aspx