OpenGL sampler object not updating bound texture

I am currently binding a sampler object to a texture unit (GL_TEXTURE12 to be specific) with
glBindSampler(12, sampler)
and the sampler's initial settings are visibly different from the texture's own settings. But when I change the sampler's parameters with
glSamplerParameteri(sampler, GL_TEXTURE_***_FILTER, filter);
the texture bound to that texture unit filters exactly as it did before, with no apparent change from any perspective.
I have tried re-binding the sampler to the texture unit again after the parameter change, but I'm pretty sure this isn't required.
What changes can I make to get this working?

Since I could not explain in the comments why the statement "I have tried re-binding the texture unit to the sampler again after the parameter change" makes no sense, consider the following C pseudo-code.
/* Thin state wrapper */
struct SamplerObject {
    SamplerState sampler_state;
};

/* Subsumes SamplerObject */
struct TextureObject {
    ImageData* image_data;
    ...
    SamplerState sampler_state;
};

/* Binding point: GL4.x gives you at least 80 of these (16 per shader stage) */
struct TextureImageUnit {
    TextureObject* bound_texture; /* Default = NULL */
    SamplerObject* bound_sampler; /* Default = NULL */
} TextureUnits[16 * 5];

vec4 texture2D ( GLuint n,
                 vec2 tex_coords )
{
    /* By default, sampler state is sourced from the bound texture object */
    SamplerState* sampler_state = &TextureUnits[n].bound_texture->sampler_state;

    /* If there is a sampler object bound to texture unit N, use its state instead
       of the sampler state built-in to the bound texture object. */
    if (TextureUnits[n].bound_sampler != NULL)
        sampler_state = &TextureUnits[n].bound_sampler->sampler_state;
    ...
}
I believe the source of confusion is coming from the fact that in GLSL the uniforms used to identify which texture image unit to sample from (and how) are called sampler[...]. Hopefully this clears up some of the confusion so we are all on the same page.
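To make the original problem concrete, here is a minimal sketch of the expected behaviour, assuming a GL 3.3+ context; the names tex and sampler are placeholders, not taken from the question's code. Once a sampler object is bound to a texture unit, later glSamplerParameteri calls on that object take effect on the next draw without re-binding anything:
GLuint sampler;
glGenSamplers(1, &sampler);

glActiveTexture(GL_TEXTURE12);
glBindTexture(GL_TEXTURE_2D, tex); /* the texture's built-in sampler state...          */
glBindSampler(12, sampler);        /* ...is now overridden by the bound sampler object */

/* Draw call #1 samples with whatever state `sampler` currently holds. */
glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
/* Draw call #2 picks up GL_NEAREST automatically; no re-bind is needed. If it does not,
 * the handle being modified is likely not the sampler bound to unit 12, or the shader's
 * sampler uniform is not set to 12. */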

Related

Win32 GDI color palette transparency bug

This question could be considered more of a bug report on an unsightly and time-wasting issue I've recently encountered while using Win32/GDI:
That is, loading a bitmap image into a static control (a bitmap static control, not an icon). I'll demonstrate with the following code (this follows the creation of the main window):
HBITMAP hbmpLogo;

/* Load the logo bitmap graphic, compiled into the executable file by a resource compiler */
hbmpLogo = (HBITMAP)LoadImage(
    wc.hInstance,             /* <-- derived from GetModuleHandle(NULL) */
    MAKEINTRESOURCE(ID_LOGO), /* <-- ID_LOGO defined in a header */
    IMAGE_BITMAP,
    0, 0,
    LR_CREATEDIBSECTION | LR_LOADTRANSPARENT);

/* We have a fully functioning handle to a bitmap at this line */
if (!hbmpLogo)
{
    /* Thus this statement is never reached */
    abort();
}
We then create the control, which is a child of the main window:
/* Add static control */
m_hWndLogo = CreateWindowExW(
    0,            /* Extended styles, not used */
    L"STATIC",    /* Class name, we want a STATIC control */
    (LPWSTR)NULL, /* Would be window text, but we would instead pass an integer identifier
                   * here, formatted (as a string) in the form "#100" (let 100 = ID_LOGO) */
    SS_BITMAP | WS_CHILD | WS_VISIBLE, /* Styles specified. SS = Static Style. We select
                                        * bitmap, rather than other static control styles. */
    32,           /* X */
    32,           /* Y */
    640,          /* Width. */
    400,          /* Height. */
    hMainParentWindow,
    (HMENU)ID_LOGO, /* hMenu parameter, repurposed in this case as an identifier for the
                     * control, hence the obfuscatory use of the cast. */
    wc.hInstance,   /* Program instance handle appears here again ( GetModuleHandle(NULL) ) */
    NULL);

if (!m_hWndLogo)
{
    abort(); /* Also never called */
}

/* We then arm the static control with the bitmap by the, once more quite obfuscatory, use of
 * a 'SendMessage'-esque interface function: */
SendDlgItemMessageW(
    hMainParentWindow,    /* Window containing the control */
    ID_LOGO,              /* The identifier of the control, passed in via the HMENU parameter
                           * of CreateWindow(...). */
    STM_SETIMAGE,         /* The action we want to effect, which is, arming the control with the
                           * bitmap we've loaded. */
    (WPARAM)IMAGE_BITMAP, /* Specifying a bitmap, as opposed to an icon or cursor. */
    (LPARAM)hbmpLogo);    /* Passing in the bitmap handle. */

/* At this line, our static control is sufficiently initialised. */
What is not impressive about this segment of code is the mandated use of LoadImage(...) to load the graphic from the program resources, where it is otherwise seemingly impossible to specify that our image will require transparency. Both flags LR_CREATEDIBSECTION and LR_LOADTRANSPARENT are required to effect this (once again, very ugly and not very explicit behavioural requirements. Why isn't LR_LOADTRANSPARENT good on its own?).
I will elaborate: the bitmap has been tried at different bit depths, each less than 16 bits per pixel (that is, using colour palettes), and the results are distractingly inconsistent between them. [Edit: See further discoveries in my answer]
What exactly do I mean by this?
A bitmap loaded at 8 bits per pixel, thus having a 256-length colour palette, renders with the first colour of the bitmap deleted (that is, set to the window class background brush colour); in effect, the bitmap is now 'transparent' in the appropriate areas. This behaviour is expected.
I then recompile the executable, now loading a similar bitmap but at (a reduced) 4 bits per pixel, thus having a 16-length colour palette. All is good and well, except I discover that the transparent region of the bitmap is painted with the WRONG background colour, one that does not match the window background colour. My wonderful bitmap has an unsightly grey rectangle around it, revealing its bounds.
What should the window background colour be? All documentation leads back, very explicitly, to this (HBRUSH)NULL-inclusive eyesore:
WNDCLASSEX wc = {}; /* Zero initialise */
/* initialise various members of wc
* ...
* ... */
wc.hbrBackground = (HBRUSH)(COLOR_WINDOW+1); /* Here is the eyesore. */
Where a certain colour preset must be incremented, then cast to an HBRUSH, to specify the desired background colour. 'Window colour' is an obvious choice, and this fragment of code recurs very frequently.
You may note that when this is not done, the window instead assumes the colour of the preceding colour index, which on my system happens to be the 'Scroll' colour. Indeed, and alas, if I happen to forget the notorious and glorious +1 appended to the COLOR_WINDOW HBRUSH, my window will become the unintended colour of a scroll bar.
And it seems this mistake has propagated into Microsoft's own library. Evidence? A 4-bpp bitmap, when loaded, will also erase the bitmap's transparent areas to the wrong background colour, where an 8-bpp bitmap does not.
TL;DR
It seems the programmers at Microsoft themselves do not fully understand their own Win32/GDI interface jargon, especially regarding the peculiar design choice behind adding 1 to the Window Class WNDCLASS[EX] hbrBackground member (supposedly to support (HBRUSH)NULL).
This is unless, of course, anyone can spot a mistake on my part?
Shall I submit a bug report?
Many thanks.
As though to patch over a hole in a parachute, there is a solution that produces consistency, implemented in the window callback procedure:
LRESULT CALLBACK WndProc(HWND hWnd, UINT uiMsg, WPARAM wp, LPARAM lp)
{
    /* ... */
    switch (uiMsg)
    {
    /* This message is sent to us as a 'request' for the background colour
     * of the static control. */
    case WM_CTLCOLORSTATIC:
        /* WPARAM will contain the handle of the DC */
        /* LPARAM will contain the handle of the control */
        if (lp == (LPARAM)g_hLogo)
        {
            SetBkMode((HDC)wp, TRANSPARENT);
            return (LRESULT)GetSysColorBrush(COLOR_WINDOW); /* Here's the magic */
        }
        break;
    }
    return DefWindowProc(hWnd, uiMsg, wp, lp);
}
It turns out the problem was not reproducible when other transparent bitmaps of varying sizes (not only bit depths) were loaded.
This is horrible. I am not sure why this happens. Insights?
EDIT: All classes have been removed to produce a neat 'minimal reproducible example'.

Load an image only once (SDL2)

I want to simulate a TV screen in my game. A 'no signal' image is displayed the whole time; it will eventually be replaced by a scene of one man shooting another, that's all. So I wrote this, which loads my image every time:
void Display_nosignal(SDL_Texture* Scene, SDL_Renderer* Rendu)
{
    SDL_Rect dest = {416, 0, 416, 416};
    SDL_Rect dest2 = {SCENE};
    SDL_SetRenderTarget(Rendu, Scene);
    SDL_Rect src = {(1200-500)/2, 0, 500, 545};
    /*Scene = IMG_LoadTexture(Rendu, "mages/No_Signal.jpg");
    if (Scene==NULL){printf("Erreur no signal : %s\n", SDL_GetError());}*/
    if (Scene == NULL)
    {
        Scene = IMG_LoadTexture(Rendu, "mages/No_Signal.jpg");
        if (Scene == NULL){printf("Erreur no signal : %s\n", SDL_GetError());}
    }
    SDL_RenderCopy(Rendu, Scene, NULL, &src);
    SDL_SetRenderTarget(Rendu, NULL);
    //SDL_DestroyTexture(Scene);
    //500, 545
}
and it causes memory leaks. I've tried to destroy the texture in the loop, etc., but nothing changes. So, can you advise me on a way to load the image at the very beginning, keep it, and display it only when needed?
I agree with the commenters that a dedicated texture loader is the correct solution, but if you only want this behaviour for one particular texture it may be overkill. In that case you can write a separate function which loads this particular texture and make sure it is only called once.
Alternatively, you can use static variables. If a variable declared in a function is marked as static it will retain its value across calls to that function. You can find a simple example here (it's a tutorial-grade source but it shows basic usage) or here (SO source).
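As a quick illustration of that behaviour (a toy example, not part of the original code), a function-local static keeps its value between calls:
#include <stdio.h>

void tick(void)
{
    static int count = 0; /* initialised once, lives until the program exits */
    printf("tick %d\n", ++count);
}

/* Calling tick() three times prints "tick 1", "tick 2", "tick 3". */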
By modifying your code ever so slightly, you should be able to make sure that the texture is loaded only once. By marking a pointer to it as static, you ensure that its value (the address of the loaded texture) is not lost between calls to the function. The pointer will then live in memory until the program terminates. Thanks to this, we do not have to free the texture's memory (unless you explicitly want to free it at some point, but then the texture manager is probably a better idea). A memory leak will not occur, since we never lose the reference to the texture.
void Display_nosignal(SDL_Texture* Scene, SDL_Renderer* Rendu)
{
    static SDL_Texture* Scene_cache = NULL; // Introduce a variable which remembers the texture across function calls
    SDL_Rect dest = {416, 0, 416, 416};
    SDL_Rect dest2 = {SCENE};
    SDL_SetRenderTarget(Rendu, Scene);
    SDL_Rect src = {(1200-500)/2, 0, 500, 545};
    if (Scene_cache == NULL) // First time we call the function, Scene_cache will be NULL, but in next calls it will point to a loaded texture
    {
        Scene_cache = IMG_LoadTexture(Rendu, "mages/No_Signal.jpg"); // If Scene_cache is NULL we load the texture
        if (Scene_cache == NULL){printf("Erreur no signal : %s\n", SDL_GetError());}
    }
    Scene = Scene_cache; // Set Scene to point to the loaded texture
    SDL_RenderCopy(Rendu, Scene, NULL, &src);
    SDL_SetRenderTarget(Rendu, NULL);
}
If you care about performance, memory usage, etc., you should read about the consequences of static variables, for instance their impact on caching and how they work internally. This might be considered a "dirty hack", but it might be just enough for a small project that does not need a bigger solution.
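If you would rather avoid the static and go with the load-once approach mentioned above, a minimal sketch looks like this (function and parameter names here are illustrative, not from the original code): load the texture during initialisation, pass it into the display function every frame, and destroy it once at shutdown.
SDL_Texture* LoadNoSignal(SDL_Renderer* Rendu)
{
    SDL_Texture* tex = IMG_LoadTexture(Rendu, "mages/No_Signal.jpg");
    if (tex == NULL) { printf("Erreur no signal : %s\n", SDL_GetError()); }
    return tex;
}

void Display_nosignal(SDL_Texture* Scene, SDL_Renderer* Rendu, SDL_Texture* NoSignal)
{
    SDL_Rect src = {(1200-500)/2, 0, 500, 545};
    SDL_SetRenderTarget(Rendu, Scene);
    SDL_RenderCopy(Rendu, NoSignal, NULL, &src);
    SDL_SetRenderTarget(Rendu, NULL);
}

/* At startup:   SDL_Texture* noSignal = LoadNoSignal(Rendu);
 * Each frame:   Display_nosignal(Scene, Rendu, noSignal);
 * At shutdown:  SDL_DestroyTexture(noSignal); */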

How to use FT_RENDER_MODE_SDF in freetype?

I'm quite new to font rendering and I'm trying to generate a signed distance field with FreeType so that it can be used in a fragment shader in OpenGL. Here is the code that I tried:
error = FT_Load_Glyph(face, glyph_index, FT_LOAD_DEFAULT);
if (error)
{
    // Handle error
}

error = FT_Render_Glyph(face->glyph, FT_RENDER_MODE_SDF);
if (error)
{
    // Handle error
}
Maybe I completely misunderstand the idea of SDFs, but my thought was that I could give FreeType a TTF file and, with FT_RENDER_MODE_SDF, it should produce a buffer with signed distances. But FT_Render_Glyph returns an error (19), which happens to be "cannot render this glyph format".
SDF support was added at the end of 2020, with a new module in the second half of 2021, so make sure you have a more recent version than that. For example, 2.6 is older than 2.12.0 (the newest at the time of writing).
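If you are not sure which FreeType your program is actually linked against, you can query it at runtime. This is a small sketch; FT_Library_Version and the FREETYPE_* macros are standard FreeType API, the surrounding function is just illustrative:
#include <ft2build.h>
#include FT_FREETYPE_H
#include <stdio.h>

/* Print the headers' version (compile time) and the linked library's version (run time). */
void print_freetype_version(FT_Library library)
{
    FT_Int major, minor, patch;
    FT_Library_Version(library, &major, &minor, &patch);
    printf("built against %d.%d.%d, running %d.%d.%d\n",
           FREETYPE_MAJOR, FREETYPE_MINOR, FREETYPE_PATCH,
           major, minor, patch);
}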
With that out of the way, let's get started.
I'm assuming you've completed the font rendering tutorial from LearnOpenGL and you can successfully render text on the screen. You should have something like this (notice the new additions):
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // Disable byte-alignment restriction

FT_GlyphSlot slot = face->glyph; // <-- This is new

for (unsigned char c = 0; c < 128; c++)
{
    // Load character glyph
    if (FT_Load_Char(face, c, FT_LOAD_RENDER))
    {
        // error message
        continue;
    }

    FT_Render_Glyph(slot, FT_RENDER_MODE_SDF); // <-- And this is new

    // Generate texture
    GLuint texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D( ... );
    ...
}
When rendering the text, you have to tell OpenGL not to write the fragments of the quads to the depth buffer, otherwise adjacent glyphs will overlap and start to flicker:
glDepthMask(GL_FALSE); // Don't write into the depth buffer
RenderText(pTextShader, text, 25.0f, 25.0f, 1.0f, glm::vec3(0.5, 0.8f, 0.2f));
glDepthMask(GL_TRUE); // Re-enable writing to the depth buffer
If you want to place the text as an object in your scene, in world-space, then in the vertex shader you can use:
gl_Position = uVp * uModel * vec4(vertex.xy, 0.0, 1.0); // uVp is "projection * view" on the CPU side
However, this is a bit outside the scope of your question. It just makes it easier to inspect the text from all angles by circling the camera around it. Make sure you run glDisable(GL_CULL_FACE) before drawing the glyphs, to disable backface culling, so they're visible from both sides.
As for the fragment shader I suggest you watch this video.
The bare minimum would be:
in vec2 TexCoords;
out vec4 oFragColor;
uniform sampler2D uGlyphTexture;
uniform vec3 uTextColor;

void main()
{
    float glyphShape = texture(uGlyphTexture, TexCoords).r;
    if (glyphShape < 0.5) // 0.5 is the glyph edge in the distance field
        discard;
    oFragColor = vec4(uTextColor, 1.0);
}
Result:
I think there's a pretty stark difference between them, wouldn't you say?
Have fun!

Rendering on a WPF Control with DirectX 11

I am trying to create a map editor based on WPF. Currently I'm using a hack to render DirectX content: I created a WindowsFormsHost and render onto a WinForms panel.
This is all because DirectX (I'm using DirectX 11 with feature level 10) wants a handle (alias IntPtr) to render to. I don't know how I can initialise and use the DX device without a handle.
But a WPF control has no handle. So I just found out there is an interop class called "D3DImage". But I don't understand how to use it.
My current system works like this:
The inner loop goes through a list of "IGameloopElement"s. For each, it renders its content calling "Draw()". After that, it calls "Present()" of the swap chain to show the changes. Then it resets the device to switch the handle to the next element (mostly there is only one element).
Now, because D3DImage doesn't have a handle, how do I render onto it? I just know I have to use "Lock()" then "SetBackBuffer()", "AddDirtyRect()" and then "Unlock()".
But how do I render onto a DirectX11.Texture2D object without specifying a handle for the device?
I'm really lost... I just found the "DirectX 4 WPF" sample on CodePlex, but this implements all versions of DirectX, manages the device itself and has a huge amount of overhead.
I want to stick with my current system. I'm managing the device myself. I don't want the WPF control to handle it.
The loop should just call "Render()" and then pass the backbuffer texture to the WPF control.
Could anyone tell me how to do this? I'm totally stuck...
Thanks a lot :)
R
WPF's D3DImage only supports Direct3D 9/Direct3D 9Ex; it does not support Direct3D 11. You need to use DXGI Surface Sharing to make it work.
Another answer wrote, "D3DImage only supports Direct3D9/Direct3D9Ex"... which is perhaps not entirely true for the last few years anyway. As I summarized in a comment here, the key appears to be that Direct3D 11 with DXGI has a very specific interop compatibility mode (the D3D11_SHARED_WITHOUT_MUTEX flag), which makes the ID3D11Texture2D1 directly usable as a D3DResourceType.IDirect3DSurface9 without copying any bits. That happens to be exactly (and only) what WPF's D3DImage is willing to accept.
This is a rough sketch of what worked for me to create a D3D11 sample allocator that produces ID3D11Texture2D1 textures directly compatible with WPF's Direct3D 9. Because all the .NET interop shown here is of my own design, this will not be totally ready-to-run code to drop into your project, but the method, intent, and procedures should be clear for easy adaptation.
1. preliminary helper
static D3D_FEATURE_LEVEL[] levels =
{
    D3D_FEATURE_LEVEL._11_1,
    D3D_FEATURE_LEVEL._11_0,
};

static IMFAttributes GetSampleAllocatorAttribs()
{
    MF.CreateAttributes(out IMFAttributes attr, 6);
    attr.SetUINT32(in MF_SA_D3D11_AWARE, 1U);
    attr.SetUINT32(in MF_SA_D3D11_BINDFLAGS, (uint)D3D11_BIND.RENDER_TARGET);
    attr.SetUINT32(in MF_SA_D3D11_USAGE, (uint)D3D11_USAGE.DEFAULT);
    attr.SetUINT32(in MF_SA_D3D11_SHARED_WITHOUT_MUTEX, (uint)BOOL.TRUE);
    attr.SetUINT32(in MF_SA_BUFFERS_PER_SAMPLE, 1U);
    return attr;
}

static IMFMediaType GetMediaType()
{
    MF.CreateMediaType(out IMFMediaType mt);
    mt.SetUINT64(in MF_MT_FRAME_SIZE, new SIZEU(1920, 1080).ToSwap64());
    mt.SetGUID(in MF_MT_MAJOR_TYPE, in WMMEDIATYPE.Video);
    mt.SetUINT32(in MF_MT_INTERLACE_MODE, (uint)MFVideoInterlaceMode.Progressive);
    mt.SetGUID(in MF_MT_SUBTYPE, in MF_VideoFormat.RGB32);
    return mt;
}
2. the D3D11 device and context instances go somewhere
ID3D11Device4 m_d3D11_device;
ID3D11DeviceContext2 m_d3D11_context;
3. initialization code is next
void InitialSetup()
{
    D3D11.CreateDevice(
        null,
        D3D_DRIVER_TYPE.HARDWARE,
        IntPtr.Zero,
        D3D11_CREATE_DEVICE.BGRA_SUPPORT,
        levels,
        levels.Length,
        D3D11.SDK_VERSION,
        out m_d3D11_device,
        out D3D_FEATURE_LEVEL _,
        out m_d3D11_context);

    MF.CreateDXGIDeviceManager(out uint tok, out IMFDXGIDeviceManager m_dxgi);
    m_dxgi.ResetDevice(m_d3D11_device, tok);

    MF.CreateVideoSampleAllocatorEx(
        ref REFGUID<IMFVideoSampleAllocatorEx>.GUID,
        out IMFVideoSampleAllocatorEx sa);

    sa.SetDirectXManager(m_dxgi);
    sa.InitializeSampleAllocatorEx(
        PrerollSampleSink.QueueMax,
        PrerollSampleSink.QueueMax * 2,
        GetSampleAllocatorAttribs(),
        GetMediaType());
}
4. use sample allocator to repeatedly generate textures, as needed
ID3D11Texture2D1 CreateTexture2D(SIZEU sz)
{
    var vp = new D3D11_VIEWPORT
    {
        TopLeftX = 0f,
        TopLeftY = 0f,
        Width = sz.Width,
        Height = sz.Height,
        MinDepth = 0f,
        MaxDepth = 1f,
    };
    m_d3D11_context.RSSetViewports(1, ref vp);

    var desc = new D3D11_TEXTURE2D_DESC1
    {
        SIZEU = sz,
        MipLevels = 1,
        ArraySize = 1,
        Format = DXGI_FORMAT.B8G8R8X8_UNORM,
        SampleDesc = new DXGI_SAMPLE_DESC { Count = 1, Quality = 0 },
        Usage = D3D11_USAGE.DEFAULT,
        BindFlags = D3D11_BIND.RENDER_TARGET | D3D11_BIND.SHADER_RESOURCE,
        CPUAccessFlags = D3D11_CPU_ACCESS.NOT_REQUESTED,
        MiscFlags = D3D11_RESOURCE_MISC.SHARED,
        TextureLayout = D3D11_TEXTURE_LAYOUT.UNDEFINED,
    };
    m_d3D11_device.CreateTexture2D1(ref desc, IntPtr.Zero, out ID3D11Texture2D1 tex2D);

    return tex2D;
}

How do I make a stationary image out of particles in LSL?

LSL (Linden Scripting Language) allows for various particle effects using the llParticleSystem function. What are the right parameters to give to that function in order to make a non-moving particle-based image hover over the prim?
(This question was asked in the Script Academy discussion group today. I'm reposting the question and my answer here to help get more LSL users into Stack Overflow.)
The following script will create a stationary hovering image out of particles, using the first texture found in the contents of the prim.
ParticleImage(string tex, vector scale)
{
    list params;

    //set texture and size
    params += [PSYS_SRC_TEXTURE, tex];
    params += [PSYS_PART_START_SCALE, scale];

    //make particles follow source
    params += [PSYS_PART_FLAGS, PSYS_PART_FOLLOW_SRC_MASK];

    //use drop pattern, which has no velocity
    params += [PSYS_SRC_PATTERN, PSYS_SRC_PATTERN_DROP];

    llParticleSystem(params);
}

default
{
    state_entry()
    {
        //make the prim invisible
        llSetAlpha(0.0, ALL_SIDES);

        if (llGetInventoryNumber(INVENTORY_TEXTURE))
        {
            string tex = llGetInventoryName(INVENTORY_TEXTURE, 0);
            ParticleImage(tex, <1.0, 1.0, 0.0>);
        }
    }
}
