Skybox texture not showing in DirectX9

I am trying to render a skybox in DirectX9 with the fixed-function pipeline (no shader or effect file). The cube itself renders fine, but when I set the texture, every face is painted a single odd color (not white or blank). The texture file is a valid cube map, so I suspect some texture state is missing. Below is the code snippet:
// for creating the cube and the cube texture
HRESULT apiResult = D3DXCreateBox(g_pd3dDevice,1,1,1,&g_pMesh,NULL);
apiResult = D3DXCreateCubeTextureFromFile(g_pd3dDevice,L"cubeMap.dds",&g_pTexture);
Both calls return S_OK.
// for rendering the skybox
g_pd3dDevice->SetSamplerState(0,D3DSAMP_MIPFILTER,D3DTEXF_LINEAR);
g_pd3dDevice->SetSamplerState(0,D3DSAMP_MAGFILTER,D3DTEXF_LINEAR);
g_pd3dDevice->SetSamplerState(0,D3DSAMP_MINFILTER,D3DTEXF_LINEAR);
// set texture
g_pd3dDevice->SetTexture(0,g_pTexture);
// Begin the scene
if( SUCCEEDED( g_pd3dDevice->BeginScene() ) )
{
    g_pMesh->DrawSubset(0);
    g_pd3dDevice->EndScene();
}
// Present the backbuffer contents to the display
g_pd3dDevice->Present( NULL, NULL, NULL, NULL );
Which setting is missing?

I'm not an expert, but try lowering the resolution of the image you are loading. I had a similar problem once and found that the image failed to load because its resolution was too large.
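Another thing worth checking: D3DXCreateBox creates a mesh with positions and normals but no texture coordinates, so a cube texture has nothing to sample from under the fixed-function pipeline. Below is a minimal sketch of one common fixed-function approach that generates the cube-map lookup vector from the camera-space vertex position; the texture-transform details are assumptions and may need adjusting for your setup.

// Assumed sketch: derive 3D cube-map coordinates from the camera-space position,
// since the D3DXCreateBox mesh provides no texture coordinates of its own.
g_pd3dDevice->SetTexture(0, g_pTexture);
g_pd3dDevice->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_SELECTARG1);
g_pd3dDevice->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
g_pd3dDevice->SetTextureStageState(0, D3DTSS_TEXCOORDINDEX, D3DTSS_TCI_CAMERASPACEPOSITION);
g_pd3dDevice->SetTextureStageState(0, D3DTSS_TEXTURETRANSFORMFLAGS, D3DTTFF_COUNT3);

// Rotate the camera-space position back into world orientation so the lookup
// vector follows the skybox rather than the camera (translation is removed).
D3DXMATRIX view, invView;
g_pd3dDevice->GetTransform(D3DTS_VIEW, &view);
D3DXMatrixInverse(&invView, NULL, &view);
invView._41 = invView._42 = invView._43 = 0.0f;
g_pd3dDevice->SetTransform(D3DTS_TEXTURE0, &invView);

// Typical skybox render states: draw the inside faces and skip depth writes.
g_pd3dDevice->SetRenderState(D3DRS_CULLMODE, D3DCULL_CW);
g_pd3dDevice->SetRenderState(D3DRS_ZWRITEENABLE, FALSE);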

Related

ARSCNPlaneGeometry update and re-calculate texture coordinates, instead of stretching them

I'm having a problem with the texture coordinates of plane geometries that ARKit updates: texture images get stretched, and I want to avoid that.
Right now I'm detecting horizontal and vertical walls and applying a texture to them. It's working like a charm...
But when the geometry gets updated because the detected wall/floor extends, the texture coordinates get stretched instead of re-mapped, which makes the texture look stretched.
You can also see an un-edited video of the problem happening: https://www.youtube.com/watch?v=wfwYPwzND74
This is the piece of code where the geometry gets updated:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else {
        return
    }
    let planeGeometry = ARSCNPlaneGeometry(device: device)!
    planeGeometry.update(from: planeAnchor.geometry)
    // I suppose I need to do some texture re-mapping here.
    planeGeometry.materials = node.geometry!.materials
    node.geometry = planeGeometry
}
I have seen that you can define the texture coordinates by providing them as a geometry source, like this:
let textCords: [vector_float2] = [] // array of texture coordinates
let uvData = Data(bytes: textCords, count: textCords.count * MemoryLayout<vector_float2>.size)
let textureSource = SCNGeometrySource(data: uvData,
                                      semantic: .texcoord,
                                      vectorCount: textCords.count,
                                      usesFloatComponents: true,
                                      componentsPerVector: 2,
                                      bytesPerComponent: MemoryLayout<Float>.size,
                                      dataOffset: 0,
                                      dataStride: MemoryLayout<vector_float2>.size)
But I have no idea how to fill the textCords array so that it maps correctly onto the updated planeGeometry.
Edit:
Re-defining the approach:
Thinking more deeply about the problem, I came to the idea that I need to modify the texture's transform to fix the stretching, which leaves me two options:
Either keep the texture big enough to fill the entire geometry while keeping a 1:1 ratio to avoid stretching,
or keep the texture at its original size, with a 1:1 aspect ratio, and repeat it multiple times to fill the entire geometry.
I'm still lost on how to implement either approach. What would you suggest?
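For the texture-coordinate approach above, one way to fill textCords is a simple planar mapping in plane-local space, so the UVs grow with the plane instead of stretching. This is only a rough sketch under that assumption; textureScale is a made-up parameter for how many metres one texture repeat should cover.

// Assumed sketch: compute UVs from the plane-local vertex positions so the
// texture repeats every `textureScale` metres instead of stretching.
let textureScale: Float = 1.0
let vertices = planeAnchor.geometry.vertices // plane-local positions, y is ~0
let textCords: [vector_float2] = vertices.map { v in
    vector_float2(v.x / textureScale, v.z / textureScale)
}
let uvData = Data(bytes: textCords, count: textCords.count * MemoryLayout<vector_float2>.stride)
let textureSource = SCNGeometrySource(data: uvData,
                                      semantic: .texcoord,
                                      vectorCount: textCords.count,
                                      usesFloatComponents: true,
                                      componentsPerVector: 2,
                                      bytesPerComponent: MemoryLayout<Float>.size,
                                      dataOffset: 0,
                                      dataStride: MemoryLayout<vector_float2>.stride)
// The vertex and element sources still need to be combined with this texcoord
// source into a new SCNGeometry; that part is omitted here.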

What is SceneKit doing between calls to didApplyConstraints and willRenderScene?

The SceneKit rendering loop is well documented here https://developer.apple.com/documentation/scenekit/scnscenerendererdelegate and here https://www.raywenderlich.com/1257-scene-kit-tutorial-with-swift-part-4-render-loop. However, neither of these documents explains what SceneKit does between calls to didApplyConstraints and willRenderScene.
I've modified my SCNSceneRendererDelegate to measure the time between each call and I can see that around 5ms elapses between those two calls. It isn't running my code in that time, but presumably some aspect of the way I've set up my scene is creating work which has to be done there. Any insight into what SceneKit is doing would be very helpful.
I am calling SceneKit myself from an MTKView's draw call (rather than using an SCNView) so that I can render the scene twice. The first render is normal, the second uses the depth buffer from the first but draws just a subset of the scene that I want to "glow" onto a separate colour buffer. That colour buffer is then scaled down, gaussian blurred, scaled back up and then blended over the top of the first scene (all with custom Metal shaders).
The 5ms spent between didApplyConstraints and willRenderScene started happening when I introduced this extra rendering pass. To control which nodes are in each scene I switch the opacity of a small number of parent nodes between 0 and 1. If I remove the code which switches opacity but keep everything else (so there are two rendering passes but they both draw everything) the extra 5ms disappears and the overall frame rate is actually faster even though much more rendering is happening.
I'm writing Swift targeting macOS on a 2018 MacBook Pro.
UPDATE: mnuages has explained that changing the opacity causes SceneKit to rebuild the scene graph, and that explains part of the lost time. However I've now discovered that my use of a custom SCNProgram for the nodes in one rendering pass also triggers a 5ms pause between didApplyConstraints and willRenderScene. Does anyone know why this might be?
Here is my code for setting up the SCNProgram and the SCNMaterial, both done once:
let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()
glowProgram = SCNProgram()
glowProgram.library = library
glowProgram.vertexFunctionName = "emissionGlowVertex"
glowProgram.fragmentFunctionName = "emissionGlowFragment"
...
let glowMaterial = SCNMaterial()
glowMaterial.program = glowProgram
let emissionImageProperty = SCNMaterialProperty(contents: emissionImage)
glowMaterial.setValue(emissionImageProperty, forKey: "tex")
Here's where I apply the material to the nodes:
let nodeWithGeometryClone = nodeWithGeometry.clone()
nodeWithGeometryClone.categoryBitMask = 2
let geometry = nodeWithGeometryClone.geometry!
nodeWithGeometryClone.geometry = SCNGeometry(sources: geometry.sources, elements: geometry.elements)
glowNode.addChildNode(nodeWithGeometryClone)
nodeWithGeometryClone.geometry!.firstMaterial = glowMaterial
The glow nodes are a deep clone of the regular nodes, but with an alternative SCNProgram. Here's the Metal code:
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>
struct NodeConstants {
    float4x4 modelTransform;
    float4x4 modelViewProjectionTransform;
};

struct EmissionGlowVertexIn {
    float3 pos [[attribute(SCNVertexSemanticPosition)]];
    float2 uv [[attribute(SCNVertexSemanticTexcoord0)]];
};

struct EmissionGlowVertexOut {
    float4 pos [[position]];
    float2 uv;
};

vertex EmissionGlowVertexOut emissionGlowVertex(EmissionGlowVertexIn in [[stage_in]],
                                                constant NodeConstants &scn_node [[buffer(1)]]) {
    EmissionGlowVertexOut out;
    out.pos = scn_node.modelViewProjectionTransform * float4(in.pos, 1) + float4(0, 0, -0.01, 0);
    out.uv = in.uv;
    return out;
}

constexpr sampler linSamp = sampler(coord::normalized, address::clamp_to_zero, filter::linear);

fragment half4 emissionGlowFragment(EmissionGlowVertexOut in [[stage_in]],
                                    texture2d<half, access::sample> tex [[texture(0)]]) {
    return tex.sample(linSamp, in.uv);
}
By changing the opacity of nodes you're invalidating parts of the scene graph, which can result in additional work for the renderer.
It would be interesting to see if setting the camera's categoryBitMask is more performant (it doesn't modify the scene graph).
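A rough sketch of that suggestion, with names such as scnRenderer and glowCategory assumed from the question's setup (the render-pass plumbing itself is omitted):

// Assumed sketch: give the glow clones their own category and restrict each
// pass's camera to it instead of toggling node opacity.
let glowCategory = 2
glowNode.enumerateHierarchy { child, _ in child.categoryBitMask = glowCategory }

// First pass: render everything except the glow clones.
scnRenderer.pointOfView?.camera?.categoryBitMask = ~glowCategory
// ... render the main scene ...

// Second pass: render only the glow clones.
scnRenderer.pointOfView?.camera?.categoryBitMask = glowCategory
// ... render the glow buffer ...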

Codename One rounded image from an internet source

I am trying to display a rounded image that I get straight from the Internet.
I used the code below to create a round mask, get the image from the Internet, and then tried to set the mask on either the image or the label itself. None of these approaches worked. If I remove the mask, the image is displayed fine; if I keep the code that sets the mask, all I see is an empty white circle.
My guess is that applying the mask to the image itself may not take effect because the image had not yet been downloaded when the mask was applied.
But I don't understand why calling setMask on the label does not work either.
// Create MASK
Image maskImage = Image.createImage(w, l);
Graphics g = maskImage.getGraphics();
g.setAntiAliased(true);
g.setColor(0x000000);
g.fillRect(0, 0, w, l);
g.setColor(0xffffff);
g.fillArc(0, 0, w, l, 0, 360);
Object mask = maskImage.createMask();
// GET IMAGE
com.cloudinary.Cloudinary cloudinary = new com.cloudinary.Cloudinary(ObjectUtils.asMap(
        "cloud_name", "REMOVED",
        "api_key", "REMOVED",
        "api_secret", "REMOVED"));
// Disable private CDN URLs as this doesn't seem to work with free accounts
cloudinary.config.privateCdn = false;
Image placeholder = Image.createImage(150, 150);
EncodedImage encImage = EncodedImage.createFromImage(placeholder, false);
Image img2 = cloudinary.url()
        .type("fetch")   // says we are fetching an image
        .format("jpg")   // we want it to be a jpg
        .transformation(new Transformation()
                .radius("max").width(150).height(150).crop("thumb").gravity("faces"))
        .image(encImage, "http://upload.wikimedia.org/wikipedia/commons/4/46/Jennifer_Lawrence_at_the_83rd_Academy_Awards.jpg");
Label label = new Label(img2);
label.setMask(mask); // also tried img2.applyMask(mask) before passing img2
So I tried various things:
1) Removing the mask that was set through Cloudinary - that did not work.
2) Applying the mask to the placeholder and the encoded image - as expected, these shouldn't affect the final version that gets displayed.
3) This is what works! I am not sure whether the issue is really about downloading the picture before or after applying the mask; time will tell.
Label label = new Label();
img2.applyMask(mask); // If you remove this line, the image is no longer displayed and I only see a white circle. I am not sure what it does - it might simply stall until the image is downloaded, or somehow trigger a repaint/revalidate.
label.setIcon(img2.applyMask(mask));
Here is what worked for me if anyone else is having similar issues:
//CREATE MASK
Image maskImage = Image.createImage(w, l);
Graphics g = maskImage.getGraphics();
g.setAntiAliased(true);
g.setColor(0x000000);
g.fillRect(0, 0, w, l);
g.setColor(0xffffff);
g.fillArc(0, 0, w, l, 0, 360);
Object mask = maskImage.createMask();
//CONNECT TO CLOUDINARY
com.cloudinary.Cloudinary cloudinary = new com.cloudinary.Cloudinary(ObjectUtils.asMap(
        "cloud_name", "REMOVED",
        "api_key", "REMOVED",
        "api_secret", "REMOVED"));
// Disable private CDN URLs as this doesn't seem to work with free accounts
cloudinary.config.privateCdn = false;
//CREATE IMAGE PLACEHOLDERS
Image placeholder = Image.createImage(w, l);
EncodedImage encImage = EncodedImage.createFromImage(placeholder, false);
//DOWNLOAD IMAGE
Image img2 = cloudinary.url()
        .type("fetch")   // says we are fetching an image
        .format("jpg")   // we want it to be a jpg
        .transformation(new Transformation()
                .crop("thumb").gravity("faces"))
        .image(encImage, url);
// Add the image to a label and place it on the form.
//GetCircleImage(img2);
Label label = new Label();
img2.applyMask(mask); // If you remove this line, the image is no longer displayed and I only see a white circle. It might simply stall until the image is downloaded, or somehow trigger a repaint/revalidate.
label.setIcon(img2.applyMask(mask));
Shai, I seriously appreciate your time!! Thank you very much. I'll have to dig into it more if it gives me any other problems later, but it seems to work consistently for now.
The Cloudinary API returns a URLImage which doesn't work well with the Label.setMask() method because, technically, a URLImage is an animated image (it is a placeholder image until it finishes loading, and then "animates" to become the target image).
I have just released a new version of the cloudinary cn1lib which gives you a couple of options for working around this.
I have added two new image() methods. One that takes an ImageAdapter parameter that you can use to apply the mask to the image itself, before setting it as the icon for the label. Then you wouldn't use Label.setMask() at all.
See javadocs for this method here
The other method uses the new Async image loading APIs underneath to load the image asynchronously. The image you receive in the callback is a "real" image so you can use it with a mask normally.
See javadocs for this method here
We are looking at adding a soft warning to the Label.setMask() and setIcon() methods if you try to add an "animated" image and mask it so that it is more clear.
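As a rough illustration of the ImageAdapter option: the exact cn1lib image(...) overload and its parameter order are assumptions here, so check the javadocs linked above for the real signature.

// Assumed sketch: let the adapter apply the round mask as the image arrives,
// so Label.setMask() is not needed at all.
URLImage.ImageAdapter maskAdapter = URLImage.createMaskAdapter(mask);
Image img2 = cloudinary.url()
        .type("fetch")
        .format("jpg")
        .transformation(new Transformation().crop("thumb").gravity("faces"))
        .image(maskAdapter, encImage, url); // hypothetical overload taking the adapter
Label label = new Label(img2);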
I think the masking code you set on the label might be conflicting with the masking you get from Cloudinary.

WPF Canvas to animated gif with transparent background to display on svg canvas

In my web project, I need to dynamically render XAML frames to an animated GIF. It is working now - I render each frame to PNG with this code:
// Save current canvas transform
var transform = surface.LayoutTransform;
// reset current transform (in case it is scaled or rotated)
surface.LayoutTransform = null;
// Get the size of canvas
var size = new System.Windows.Size(surface.Width, surface.Height);
// Measure and arrange the surface
// VERY IMPORTANT
surface.Measure(size);
surface.Arrange(new Rect(size));
// Create a render bitmap and push the surface to it
var renderBitmap = new RenderTargetBitmap(
    (int)size.Width,
    (int)size.Height,
    96d,
    96d,
    PixelFormats.Pbgra32);
renderBitmap.Render(surface);
Bitmap bmp;
// Create a memory stream for saving the image
using (var stream = new MemoryStream()) {
    // Use the PNG encoder for our data
    var encoder = new PngBitmapEncoder();
    // push the rendered bitmap to it
    encoder.Frames.Add(BitmapFrame.Create(renderBitmap));
    // save the data to the stream
    encoder.Save(stream);
    bmp = new Bitmap(stream);
}
return bmp;
I then create the animated GIF with MagickImage.
But when I put it on a web page (on an SVG canvas), the background is black instead of transparent.
How can I make it transparent?
You have to ensure that the transparency flag is set. You also have to get the index of the transparent colour, make sure it is set, and make sure all transparent pixels use that transparent colour index (note that the transparent pixel can have any RGB value).
I had a look at the MagickImage documentation and it gave me nothing on GIF. Then I looked at your code again and you are encoding a PNG, not a GIF, so maybe you only showed an intermediate step. Since the API documentation does not cover GIF, all I can do is give you the details of the GIF data blocks so you can find the correct flags and settings yourself.
The stuff you have to change is in the GCE block before the start of every image data block.
Correct settings for transparency
options.delay = 2; // whatever you want, but don't go below 2: many GIF renderers are limited to 50 frames per second. Frame delay in 1/100 s.
options.transparentFlag = true; // this must be true for transparency
options.transparentIndex = transColour; // the 8-bit index of the transparent colour;
// this must be set correctly for transparency to work.
options.dispose = 3; // use 3 or 2 if you want transparency
Writing the GCE block
// The following writes the GCE block that must precede every image.
// write() writes an array or a single byte to the stream.
// shortData() creates a little-endian 16-bit short (low byte first) as a byte array.
//
GCE_ID = 0xf9; // GCE block identifier
GCE_size = 4;  // block length
write([0x21, GCE_ID, GCE_size]); // extension introducer
write(
    0 +                                 // bits 7:5 reserved
    (options.dispose << 2) +            // bits 4:2 disposal method
    (options.transparentFlag ? 1 : 0)); // bit 0 transparency flag
write(shortData(options.delay));   // delay x 1/100 sec
write(options.transparentIndex);   // transparent colour index
write(0); // block terminator
// image block follows
Not the type of answer you are after, but I don't think you are saving GIFs. If you are, then you should be able to find out what to set so that the correct GCE blocks are written to the GIF file. One last note: use a different colour index for the background colour than for the transparent index. I reserve the last two indexes of the global RGB lookup table for the background and transparent colours.
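Since the question builds the animation with MagickImage, a rough Magick.NET sketch of assembling the PNG frames into a GIF that keeps their alpha might look like this; the property and method names are assumed to match the Magick.NET version in use, so treat them as something to verify.

using System.Collections.Generic;
using ImageMagick;

static void WriteTransparentGif(IEnumerable<byte[]> pngFrames, string outputPath)
{
    using (var gif = new MagickImageCollection())
    {
        foreach (var png in pngFrames)
        {
            var frame = new MagickImage(png);                     // PNG frames keep their alpha channel
            frame.AnimationDelay = 2;                             // frame delay in 1/100 s
            frame.GifDisposeMethod = GifDisposeMethod.Background; // clear to background between frames
            gif.Add(frame);
        }
        gif.OptimizeTransparency(); // keep unchanged transparent pixels transparent
        gif.Write(outputPath);      // the .gif extension selects the GIF encoder
    }
}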

DirectX9 Use Geometry Instancing for a Mesh with multiple materials

I am trying to write flexible geometry-instancing code able to handle meshes with multiple materials. For a mesh with one material everything is fine: I manage to render as many instances as I want with a single draw call.
Things get a bit more complicated with multiple materials. My mesh comes from an .x file. It has one vertex buffer and one index buffer, but several materials. The indices to render for each subset (material) are stored in an attribute array.
Here is the code I use:
d3ddev->SetVertexDeclaration( m_vertexDeclaration );
d3ddev->SetIndices( m_indexBuffer );
d3ddev->SetStreamSourceFreq(0, (D3DSTREAMSOURCE_INDEXEDDATA | m_numInstancesToDraw ));
d3ddev->SetStreamSource(0, m_vertexBuffer, 0, D3DXGetDeclVertexSize( m_geometryElements, 0 ) );
d3ddev->SetStreamSourceFreq(1, (D3DSTREAMSOURCE_INSTANCEDATA | 1ul));
d3ddev->SetStreamSource(1, m_instanceBuffer, 0, D3DXGetDeclVertexSize( m_instanceElements, 1 ) );
m_effect->Begin(NULL, NULL); // begin using the effect
m_effect->BeginPass(0); // begin the pass
for( DWORD i = 0; i < m_numMaterials; ++i ) // loop through each subset
{
    d3ddev->SetMaterial(&m_materials[i]); // set the material for the subset
    if(m_textures[i] != NULL)
    {
        d3ddev->SetTexture( 0, m_textures[i] );
    }
    d3ddev->DrawIndexedPrimitive(
        D3DPT_TRIANGLELIST,            // Type
        0,                             // BaseVertexIndex
        m_attributes[i].VertexStart,   // MinIndex
        m_attributes[i].VertexCount,   // NumVertices
        m_attributes[i].FaceStart * 3, // StartIndex
        m_attributes[i].FaceCount      // PrimitiveCount
    );
}
m_effect->EndPass();
m_effect->End();
d3ddev->SetStreamSourceFreq(0,1);
d3ddev->SetStreamSourceFreq(1,1);
This code works for the first material only. By "first" I mean the one at index 0: if I start my loop with the second material, it is not rendered. However, when I debug the vertex buffer in PIX, I can see all my materials being processed properly, so something goes wrong after the vertex shader.
Another odd observation: all my materials are rendered if I set the stream source containing the instance data to a vertex stride of zero.
So Instead of this:
d3ddev->SetStreamSource(1, m_instanceBuffer, 0, D3DXGetDeclVertexSize( m_instanceElements, 1 ) );
I replace it by:
d3ddev->SetStreamSource(1, m_instanceBuffer, 0, 0 );
But of course, with this code all my instances are rendered at the same position, since I reuse the same instance data over and over again.
One last point: everything works fine if I create my device with D3DCREATE_SOFTWARE_VERTEXPROCESSING. Only hardware vertex processing has the issue, and unfortunately DirectX does not report any problem in debug mode.
See the Shader Model 3 docs
If you are implementing shaders in hardware, you may not use vs_3_0 or ps_3_0 with any other shader versions, and you may not use either shader type with the fixed function pipeline. These changes make it possible to simplify drivers and the runtime. The only exception is that software-only vs_3_0 shaders may be used with any pixel shader version.
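In other words, with a vs_3_0/ps_3_0 effect the fixed-function lighting state set by SetMaterial is ignored, so per-subset data is usually passed through effect parameters instead. A rough sketch of that idea, where g_DiffuseTexture and g_MaterialDiffuse are assumed parameter names rather than ones from the original .fx file:

// Assumed sketch: feed per-subset material data to the effect instead of
// relying on fixed-function state while an SM3 shader is bound.
for (DWORD i = 0; i < m_numMaterials; ++i)
{
    m_effect->SetTexture("g_DiffuseTexture", m_textures[i]);
    m_effect->SetVector("g_MaterialDiffuse", (D3DXVECTOR4*)&m_materials[i].Diffuse);
    m_effect->CommitChanges(); // required when changing parameters inside BeginPass/EndPass
    d3ddev->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0,
        m_attributes[i].VertexStart, m_attributes[i].VertexCount,
        m_attributes[i].FaceStart * 3, m_attributes[i].FaceCount);
}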
I had the same problem, and in my case it was the memory pool of the instancing mesh. I originally had that mesh in D3DPOOL_SYSTEMMEM while the instanced mesh was in D3DPOOL_DEFAULT. When I moved the instancing mesh into the default pool, everything worked as desired.
Hope it helps.
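For reference, a minimal sketch of creating the per-instance stream in the default pool; the usage flags and buffer size here are assumptions, not taken from the answer.

// Assumed sketch: per-instance vertex buffer in D3DPOOL_DEFAULT rather than
// D3DPOOL_SYSTEMMEM.
IDirect3DVertexBuffer9* instanceBuffer = NULL;
UINT instanceStride = D3DXGetDeclVertexSize(m_instanceElements, 1);
HRESULT hr = d3ddev->CreateVertexBuffer(
    m_numInstancesToDraw * instanceStride,
    D3DUSAGE_WRITEONLY | D3DUSAGE_DYNAMIC, // dynamic buffers must live in the default pool
    0,                                     // no FVF; a vertex declaration is used
    D3DPOOL_DEFAULT,
    &instanceBuffer,
    NULL);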
