Converting shadertoy to Metal (cube mapping ?) - scenekit

I'm trying to rewrite this shader in Metal and I'm stuck on this line:
float backColor = dot(texture(iChannel0, direction).rgb, channel);
How would I do it? Following this tutorial I should be able to pass my cube texture here, but I can't wrap my head around this task.
Right now, without this line, I get some random colors over time, so I assume that's the part I'm missing.
I'm using SceneKit with SCNProgram.

Assuming you've ported the relevant parts of the shader, loaded a cube map, and bound it as a shader argument, the equivalent line of Metal Shading Language code is simply:
float backColor = dot(texCube.sample(cubeSampler, direction).rgb, channel);
where texCube is of type texturecube<float, access::sample> and cubeSampler is something like
constexpr sampler cubeSampler(coord::normalized, filter::linear, mip_filter::linear);
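For context, here is a minimal sketch of how those pieces fit together in a fragment function. All names here (the FragmentIn struct, the channel selector, the texture binding index) are placeholders, not part of the original shader:
#include <metal_stdlib>
using namespace metal;

struct FragmentIn {
    float3 direction; // interpolated cube-map lookup direction from the vertex stage
};

fragment float4 cubeFragment(FragmentIn in [[stage_in]],
                             texturecube<float, access::sample> texCube [[texture(0)]])
{
    constexpr sampler cubeSampler(coord::normalized, filter::linear, mip_filter::linear);
    const float3 channel = float3(1.0, 0.0, 0.0); // placeholder channel selector
    float backColor = dot(texCube.sample(cubeSampler, normalize(in.direction)).rgb, channel);
    return float4(float3(backColor), 1.0);
}
On the SceneKit side, a cube map can be bound to that argument by key-value coding a material property whose contents is an array of six face images, e.g. material.setValue(SCNMaterialProperty(contents: faceImages), forKey: "texCube"); the key must match the argument name in the shader.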

Can a GLSL fragment shader run without a framebuffer and similar inconveniences?

Repeating the title: can a GLSL fragment shader run without a framebuffer and any rasterization stage?
This perfect answer gives an insight into where to start with SSBOs. The answer links to an OpenGL ARB extension that contains boilerplate code. The code works for me after some changes to make it run as an OpenGL compute program. But I really don't get how to do it with a fragment program, and without any buffers other than the SSBO.
The extension clearly contains fragment shader source code without any pixel operations, only SSBO ones:
#version 430
// Declarations assumed from the extension example; the binding points are illustrative.
layout(binding = 0) uniform atomic_uint fragmentCounter;
struct FragmentData { ivec2 position; vec4 color; };
layout(std430, binding = 0) buffer FragmentBuffer { FragmentData fragments[]; };
uniform uint maxFragmentCount;

in vec4 color;
void main()
{
    uint fragmentNumber = atomicCounterIncrement(fragmentCounter);
    if (fragmentNumber < maxFragmentCount) {
        fragments[fragmentNumber].position = ivec2(gl_FragCoord.xy);
        fragments[fragmentNumber].color = color;
    }
}
And later in the C program file:
// Generate, bind, and specify the data store for the atomic counter.
glGenBuffers(1, &counterBuffer);
glBindBufferBase(GL_ATOMIC_COUNTER_BUFFER, 0, counterBuffer);
glBufferData(GL_ATOMIC_COUNTER_BUFFER, sizeof(GLuint), NULL, GL_DYNAMIC_DRAW);

// Reset the atomic counter to zero, then draw stuff. This will record
// values into the shader storage buffer as fragments are generated.
GLuint zero = 0;
glBufferSubData(GL_ATOMIC_COUNTER_BUFFER, 0, sizeof(GLuint), &zero);
glUseProgram(program);
glDrawElements(GL_TRIANGLES, ...);
In my setup I do not produce any output in the form of OpenGL pixels, and I wish it to stay that way. Is that possible, or am I missing something?
P.S. The above setup gives me an invalid framebuffer operation error after glDrawElements (which is immediately followed by glFinish).
Update 21.03.2021
There are framebuffers with no attachments; the only state you have to set is their width and height. That is roughly the direction to head in if you wish to minimize setup.
The downside of the aforementioned is that it still requires some geometry to be fed to the rasterization stage in order to start the shader stages. As a plus, you get geometry rasterization, whether you wish it or not.
If I find time, I will leave some code here as a reminder to myself.
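In that spirit, a minimal sketch of a framebuffer with no attachments (core since OpenGL 4.3; the dimensions are arbitrary):
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// With no attachments, the rasterization size comes from these default parameters.
glFramebufferParameteri(GL_FRAMEBUFFER, GL_FRAMEBUFFER_DEFAULT_WIDTH, 512);
glFramebufferParameteri(GL_FRAMEBUFFER, GL_FRAMEBUFFER_DEFAULT_HEIGHT, 512);
Rendering to this framebuffer produces no pixel output, but fragments are still generated and the fragment shader can still write to the SSBO.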
Can a GLSL fragment shader run without a framebuffer and similar inconveniences?
No. Fragment shaders need the step that invokes them; the stage that produces fragments is called rasterization.
From the Khronos wiki:
A Fragment Shader is the Shader stage that will process a Fragment generated by the Rasterization into a set of colors and a single depth value.
The fragment shader is the OpenGL pipeline stage after a primitive is rasterized.
Rasterization needs a render step to produce fragments, and rendering goes somewhere; in OpenGL, that somewhere is a framebuffer. So without a framebuffer you cannot render, and hence OpenGL cannot produce fragments.
Framebuffer setup can be minimized with framebuffers with no attachments, but one still needs to supply geometry and render it to invoke the fragment shaders.
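For instance, a hedged sketch (not from the original answer): draw a single triangle that covers the whole viewport, with positions generated in the vertex shader from gl_VertexID so that no vertex buffers are needed at all (bind an empty VAO and call glDrawArrays(GL_TRIANGLES, 0, 3)):
#version 430
void main()
{
    // gl_VertexID 0,1,2 maps to (0,0), (2,0), (0,2), which after the
    // transform below becomes one big triangle covering clip space.
    vec2 p = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2);
    gl_Position = vec4(p * 2.0 - 1.0, 0.0, 1.0);
}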
Fragment shaders can read and write arbitrary SSBOs, but the usage is not like compute shaders: fragment shaders are invoked once per produced fragment, whereas compute shaders can be dispatched, so to speak, arbitrarily.
Many thanks to all the commenters who pointed me to the, by now obvious, reason why fragment shaders need a render operation.

SceneKit – access destination color in shader modifier for blending

Is there a way to access last fragment color (destination color) in Metal shader modifier similar to gl_LastFragData in GLES?
My goal is to perform custom blending using shader modifiers (SceneKit's SCNBlendModes do not suffice in my situation). Currently I'm using an SCNTechnique with 3 passes (render the destination, render the source, combine) to achieve this, which seems like major overkill to me, and it is really hard to have several blending groups without introducing new passes.
SCNProgram does not seem like an option for several reasons (I'm using PBR, tessellation/subdivision; I'd rather stick with using techniques for now I guess).
I've tried using #extension GL_EXT_shader_framebuffer_fetch : require as suggested in this answer, but it doesn't work even for GLSL shader modifiers (I'm using Xcode 9.0 and iOS 11).
I've also stumbled upon this wonderful gist that has SceneKit's default metal shader implementation, but it seems that blending is not performed there. Which makes me wonder if that is the reason why I can't find any destination color reference: blending happens somewhere else.
Is SCNProgram the only way besides the SCNTechnique atrocity?
P.S.: The only mention of gl_LastFragData in the context of Metal that I've found is in chapter 4.8, Programmable Blending, of the Metal Shading Language Specification, which would be helpful if I could somehow access [[color(0)]] or something similar in a shader modifier (if that's even possible).
I just wanted to check that you hadn't overlooked the fragment entry point.
The documentation says: "Use this entry point to change the color of a fragment after all other shading has been performed."
I'm not sure if this is exactly what you mean by accessing the "last fragment color", but I thought it might be worth mentioning.
https://developer.apple.com/documentation/scenekit/scnshadermodifierentrypoint/1523342-fragment
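For illustration, a minimal fragment shader modifier (Metal) using that entry point; note that _output.color is the color SceneKit's own shading has computed so far for this fragment, not the framebuffer's destination color:
#pragma body
// Tint the shaded fragment toward red; _output.color is SceneKit's result so far.
_output.color.rgb = mix(_output.color.rgb, float3(1.0, 0.0, 0.0), 0.5);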

Representing images as graphs based on pixels using OpenCV's CvGraph

I need to use C for a project, and I saw this screenshot in a PDF which gave me the idea:
http://i983.photobucket.com/albums/ae313/edmoney777/Screenshotfrom2013-11-10015540_zps3f09b5aa.png
It says you can treat each pixel of an image as a graph node (or vertex, I guess), so I was wondering how I would do this using OpenCV and the CvGraph set of functions. I'm trying to do this to learn about graphs and how to use them in computer vision, and I think this would be a good starting point.
I know I can add a vertex to a graph with
int cvGraphAddVtx(CvGraph* graph, const CvGraphVtx* vtx=NULL, CvGraphVtx** inserted_vtx=NULL )
and the documentation says of the above function's vtx parameter:
"Optional input argument used to initialize the added vertex (only user-defined fields beyond sizeof(CvGraphVtx) are copied)"
Is this how I would represent a pixel as a graph vertex, or am I barking up the wrong tree? I would love to learn more about graphs, so if someone could help me by posting code, links, or good ol' fashioned advice... I'd be grateful =)
http://vision.csd.uwo.ca/code has an implementation of multi-label optimization. The GCoptimization.cpp file has a GCoptimizationGridGraph class, which I guess is what you need. I am not a C++ expert, so I still can't figure out how it works. I am also looking for a simpler solution.
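To make the quoted vtx behavior concrete, here is a hedged sketch using the legacy C API: a custom vertex struct whose first member is the CvGraphVtx header, so that the user-defined fields after it are the ones cvGraphAddVtx copies. The struct layout and field names are illustrative, not from the documentation:
#include <opencv/cxcore.h>

// Custom vertex: the CvGraphVtx header must come first; the
// user-defined fields (pixel coordinates and gray value) follow it.
typedef struct {
    CvGraphVtx base;
    int x, y;
    uchar value;
} PixelVtx;

void buildPixelGraph(const IplImage* gray) {
    CvMemStorage* storage = cvCreateMemStorage(0);
    CvGraph* graph = cvCreateGraph(CV_GRAPH, sizeof(CvGraph),
                                   sizeof(PixelVtx), sizeof(CvGraphEdge), storage);
    for (int y = 0; y < gray->height; y++) {
        for (int x = 0; x < gray->width; x++) {
            PixelVtx v;
            v.x = x;
            v.y = y;
            v.value = CV_IMAGE_ELEM(gray, uchar, y, x);
            cvGraphAddVtx(graph, (CvGraphVtx*)&v, NULL);
        }
    }
    // Edges between neighboring pixels (4- or 8-connectivity) would be
    // added with cvGraphAddEdge, using vertex indices y * width + x.
    cvReleaseMemStorage(&storage); // releases the graph as well
}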

OpenGL saving objects for later drawing

I'm making a program using OpenGL with transparent objects in it, so obviously I have to paint those last. Unfortunately I was unaware of this requirement when I began the project, and now it would be a real pain to restructure it to paint those last.
I'm drawing objects by calling my drawing functions after translating and rotating the scene. There can be multiple translations and rotations before an actual drawing (e.g. first I draw the ground, then translate, then call the drawing of the house, which repeatedly translates and rotates, then calls the drawing of the walls and so on).
So my idea was saving the current modelview matrices in a list instead of painting the transparent objects when I normally would, then when I'm done with the opaque stuff, I iterate through my list and load each matrix and paint each object (a window, to be precise).
I do this for saving a matrix:
GLdouble * modelMatrix = (GLdouble *)malloc(16 * sizeof(GLdouble));
glGetDoublev(GL_MODELVIEW, modelMatrix);
addWindow(modelMatrix); // save it for later painting
And this is the "transparent stuff management" part:
/***************************************************************************
*** TRANSPARENT STUFF MANAGEMENT ******************************************
**************************************************************************/
typedef struct wndLst {
    GLdouble * modelMatrix;
    struct wndLst * next;
} windowList;

windowList * windows = NULL;
windowList * pWindow;

void addWindow(GLdouble * windowModelMatrix) {
    pWindow = (windowList *)malloc(sizeof(windowList));
    pWindow->modelMatrix = windowModelMatrix;
    pWindow->next = windows;
    windows = pWindow;
}

void clearWindows() {
    while(windows != NULL) {
        pWindow = windows->next;
        free(windows->modelMatrix);
        free(windows);
        windows = pWindow;
    }
}

void paintWindows() {
    glPushMatrix(); // I've tried putting this and the pop inside the loop, but it didn't help either
    pWindow = windows;
    while(pWindow != NULL) {
        glLoadMatrixd(pWindow->modelMatrix);
        Size s;
        s.height = 69;
        s.width = 49;
        s.length = 0.1;
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glDepthMask(GL_FALSE);
        glColor4f(COLOR_GLASS, windowAlpha);
        drawCuboid(s);
        glDepthMask(GL_TRUE);
        glDisable(GL_BLEND);
        pWindow = pWindow->next;
    }
    glPopMatrix();
}

/* INTERFACE
 * paint all the components, that are not yet painted,
 * then clean up.
 */
void flushComponents() {
    paintWindows();
    clearWindows();
}
/**************************************************************************/
I call flushComponents(); at the very end of my drawings.
The problem is that the windows don't end up in their places; instead I get weird-shaped blue objects randomly appearing and disappearing in the scene.
Am I doing something wrong? Or can such matrix manipulations not even be used like this? If so, what other method could I use?
Here is the full code if you need it: farm.zip Matrix-saving is at components.c line 1548, management is at line 142. It might not work on Windows without some minor hacking with the includes, which should probably be done in global.h.
Edit: I can only use C code and the glut library to write this program.
Edit 2: The problem is that glGetDoublev does not return anything for some reason; it leaves the modelMatrix array intact. Though I still have no idea what causes this, I could make a workaround using bernie's idea.
OpenGL is not a math library. You should not use it for doing matrix calculations. In fact, that part has been completely removed from OpenGL 3. Instead you should rely on a specialized matrix math library, which lets you calculate the matrices for each object with ease, without jumping through the hoops of OpenGL's glGet… API (which was never meant for this kind of abuse). For a good replacement, look at GLM: http://glm.g-truc.net/
Try adding glMatrixMode(GL_MODELVIEW) before your paintWindows() method. Perhaps you are not modifying the correct matrix.
The basic idea of your method is fine and is very similar to what I used for transparent objects. I would however advise for performance reasons against reading back the matrices from OpenGL. Instead, you can keep a CPU version of the current modelview matrix and just copy that to your window array.
As for your comment about push and pop matrix, you can safely put it outside the loop like you did.
edit
Strictly speaking, your method of rendering transparent objects misses one step: before rendering the list of windows, you should sort them back to front. This allows for overlapping windows to have the correct final color. In general, for 2 windows and a blending function:
blend(blend(scene_color, window0_color, window0_alpha), window1_color, window1_alpha)
    != blend(blend(scene_color, window1_color, window1_alpha), window0_color, window0_alpha)
However, if all windows consist of the exact same uniform color (e.g. plain texture or no texture) and alpha value, the two sides above are equal (window0_color == window1_color and window0_alpha == window1_alpha), so you don't need to sort your windows. Also, if it's not possible for windows to overlap, don't worry about sorting.
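If sorting does turn out to be necessary, a hedged sketch of one way to do it with the data already in the question's window list: the saved column-major modelview matrix holds the view-space translation in elements 12-14, so element 14 approximates the window's view-space depth (assuming the window geometry is centered near its local origin):
#include <stdlib.h>

// qsort comparator over an array of windowList pointers: most negative
// view-space z (farthest from the camera) first, i.e. back to front.
static int compareWindowDepth(const void* a, const void* b) {
    GLdouble za = (*(windowList* const*)a)->modelMatrix[14];
    GLdouble zb = (*(windowList* const*)b)->modelMatrix[14];
    return (za > zb) - (za < zb);
}
Copy the list nodes' pointers into a temporary array, qsort it with this comparator, and paint the windows in the resulting order.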
edit #2
Now you have found something interesting with the erroneous matrix read-back. Try it out with the following instead (you certainly don't need double precision). Note that the pname for reading the matrix is GL_MODELVIEW_MATRIX; plain GL_MODELVIEW is not a valid glGet* enum, which would explain the array being left untouched:
GLfloat * modelMatrix = (GLfloat *)malloc(16 * sizeof(GLfloat));
glGetFloatv(GL_MODELVIEW_MATRIX, modelMatrix);
addWindow(modelMatrix); // save it for later painting
If that still doesn't work, you could directly store references to your houses in your transparent-object list. During the transparent rendering pass, re-render each house, but only issue actual OpenGL draw calls for the transparent parts. In your code, putWallStdWith would take another boolean argument specifying whether to render the transparent or the opaque geometry. This way, your succession of OpenGL matrix-manipulation calls would be redone for the transparent parts instead of being read back with glGet*.
The correct way to do it, however, is to do the matrix computation on the CPU and simply load complete matrices into OpenGL. That allows you to reuse matrices, control the operation precision, inspect the actual matrices easily, and so on.
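A hedged sketch of that CPU-side approach, within the question's plain-C constraint (the struct and function names are made up for illustration): keep your own current matrix, mirror every glTranslatef/glRotatef on it, and store a copy in the window list instead of calling glGetDoublev:
// Column-major 4x4 matrix, same layout OpenGL uses.
typedef struct { GLdouble m[16]; } Mat4;

// out = a * b (out must not alias a or b).
void mat4Multiply(Mat4 * out, const Mat4 * a, const Mat4 * b) {
    for (int col = 0; col < 4; col++) {
        for (int row = 0; row < 4; row++) {
            GLdouble sum = 0.0;
            for (int k = 0; k < 4; k++)
                sum += a->m[k * 4 + row] * b->m[col * 4 + k];
            out->m[col * 4 + row] = sum;
        }
    }
}
Each translation or rotation becomes a multiplication by the corresponding elementary matrix, and the finished matrix is handed to OpenGL with glLoadMatrixd, exactly as paintWindows already does.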

What vertex shader code should be used for a pixel shader used for simple 2D SpriteBatch drawing in XNA?

Preface
First of all, why is a vertex shader required for a SilverlightEffect (.slfx file) in Silverlight 5? I'm trying to port a simple 2D XNA game to Silverlight 5 RC, and I would like to use a basic pixel shader. This shader works great in XNA for Windows and Xbox, but I can't get it to compile with Silverlight as a SilverlightEffect.
The MS blog for the Silverlight Toolkit says that "there is no difference between .slfx and .fx", but apparently this isn't quite true -- or at least SpriteBatch is working some magic for us in "regular XNA", and it isn't in "Silverlight XNA".
If I try to directly copy my pixel shader file into a Silverlight project (and change it to the supported "Effect - Silverlight" importer/processor), when I try to compile I see the following error message:
Invalid effect file. Unable to find vertex shader in pass "P0"
Indeed, there isn't a vertex shader in my pixel shader file. I haven't needed one with my other 2D XNA apps since I'm just doing basic SpriteBatch drawing.
I tried adding a vertex shader to my shader file, using Remi Gillig's comment on this Shawn Hargreaves blog post for guidance, but it doesn't quite work. The shader file successfully compiles, and I see some semblance of my game on screen, but it's tiny, twisted, repeated, and all jumbled up. So clearly something's not quite right.
The Real Question
So that brings me to my real question: Since a vertex shader is required, is there a basic vertex shader function that works for simple 2D SpriteBatch drawing?
And if the vertex shader requires world/view/projection matrices as parameters, what values am I supposed to use for a 2D game?
Can any shader pros help? Thanks!
In XNA, SpriteBatch is providing its own vertex shader. Because there is no SpriteBatch in Silverlight 5, you have to provide your own.
You may want to look at the source code for the shader that SpriteBatch uses (it's in SpriteEffect.fx). Here is its vertex shader, it's pretty simple:
float4x4 MatrixTransform; // set from application code

void SpriteVertexShader(inout float4 color    : COLOR0,
                        inout float2 texCoord : TEXCOORD0,
                        inout float4 position : SV_Position)
{
    position = mul(position, MatrixTransform);
}
So now you just need the input position and the transformation matrix (and the texture co-ordinates, but those are fairly simple).
The same blog post you linked, towards the end, tells you how to set up the matrix (also here). Here it is again:
Matrix projection = Matrix.CreateOrthographicOffCenter(0, viewport.Width, viewport.Height, 0, 0, 1);
Matrix halfPixelOffset = Matrix.CreateTranslation(-0.5f, -0.5f, 0);
Matrix transformation = halfPixelOffset * projection;
(Note: I'm not sure if Silverlight actually requires the half-pixel offset to maintain texel alignment (ref). It probably does.)
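To wire the matrix up, set it on the effect once per frame (a hedged sketch; in standard XNA the parameter is looked up by name, and I am assuming SilverlightEffect exposes the same Parameters collection):
effect.Parameters["MatrixTransform"].SetValue(transformation);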
The tricky bit is that the positioning of the sprite is done outside of the shader, on the CPU. Here's the order of what SpriteBatch does:
Start with four vertices in a rectangle, with (0,0) being the top-left and the texture's (width,height) as the bottom right
Translate backwards by origin
Scale
Rotate
Translate by position
This places the sprite vertices in "client space", and then the transformation matrix transforms those vertices from client space into projection space (which is used for drawing).
I've got a highly-inlined version of this transformation in ExEn (SpriteBatch.InternalDraw in SpriteBatchOpenGL.cs) that you could easily adapt for your use.
I came across this question today and I want to add that you can do all of the calculations in the vertex shader itself:
float2 Viewport; // Set to viewport size from application code

void SpriteVertexShader(inout float4 color    : COLOR0,
                        inout float2 texCoord : TEXCOORD0,
                        inout float4 position : POSITION0)
{
    // Half pixel offset for correct texel centering.
    position.xy -= 0.5;

    // Viewport adjustment.
    position.xy = position.xy / Viewport;
    position.xy *= float2(2, -2);
    position.xy -= float2(1, -1);
}
(I didn't write this; credit goes to the comments in the article mentioned above.)
