How to continuously animate an SCNProgram shader within an SCNScene / SCNView?

Within a macOS app, I have a pattern defined by an SCNProgram that is mapped to an SCNPlane.
It looks like this:
The shader is supposed to make the rows of triangles shift, like in this video (a screen grab of the same shader running inside an MTKView):
animated texture
In the SceneKit version, the shader only animates when I click on the plane within the view.
How do I make the SceneKit view (or scene?) animate the shader continuously, all the time? Again, this app is on macOS. I have tried setting
self.gameView!.isPlaying = true
but that doesn't seem to fix the problem.
Here is the Metal shader:
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>

struct myPlaneNodeBuffer {
    float4x4 modelTransform;
    float4x4 modelViewTransform;
    float4x4 normalTransform;
    float4x4 modelViewProjectionTransform;
    float2x3 boundingBox;
};

typedef struct {
    float3 position  [[ attribute(SCNVertexSemanticPosition) ]];
    float2 texCoords [[ attribute(SCNVertexSemanticTexcoord0) ]];
} VertexInput;

static float rand(float2 uv)
{
    return fract(sin(dot(uv, float2(12.9898, 78.233))) * 43758.5453);
}

static float2 uv2tri(float2 uv)
{
    float sx = uv.x - uv.y / 2;
    float sxf = fract(sx);
    float offs = step(fract(1 - uv.y), sxf);
    return float2(floor(sx) * 2 + sxf + offs, uv.y);
}

struct SimpleVertexWithUV
{
    float4 position [[position]];
    float2 uv;
};

vertex SimpleVertexWithUV trianglequiltVertex(VertexInput in [[ stage_in ]],
                                              constant SCNSceneBuffer& scn_frame [[buffer(0)]],
                                              constant myPlaneNodeBuffer& scn_node [[buffer(1)]])
{
    SimpleVertexWithUV vert;
    vert.position = scn_node.modelViewProjectionTransform * float4(in.position, 1.0);
    vert.uv = in.texCoords;
    return vert;
}

fragment float4 trianglequiltFragment(SimpleVertexWithUV in [[stage_in]],
                                      constant SCNSceneBuffer& scn_frame [[buffer(0)]],
                                      constant myPlaneNodeBuffer& scn_node [[buffer(1)]])
{
    float4 fragColor;
    float2 uv = in.uv * 10;
    float timer = scn_frame.time;

    uv.y += timer;
    float t = timer * 0.8;
    float tc = floor(t);
    float tp = smoothstep(0, 0.8, fract(t));
    float2 r1 = float2(floor(uv.y), tc);
    float2 r2 = float2(floor(uv.y), tc + 1);
    float offs = mix(rand(r1), rand(r2), tp);

    uv.x += offs * 8;
    float2 p = uv2tri(uv);
    float ph = rand(floor(p)) * 6.3 + p.y * 0.2;
    float c = abs(sin(ph + timer));

    fragColor = float4(c, c, c, 1);
    return fragColor;
}
Here is the view controller:
import SceneKit
import QuartzCore

class GameViewController: NSViewController {

    @IBOutlet weak var gameView: GameView!

    override func awakeFromNib() {
        super.awakeFromNib()

        // create a new scene
        let scene = SCNScene()
        Swift.print(gameView.colorPixelFormat.rawValue)

        // create and add a camera to the scene
        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        scene.rootNode.addChildNode(cameraNode)

        // place the camera
        cameraNode.position = SCNVector3(x: 0, y: 0, z: 15)

        // turn off default lighting
        self.gameView!.autoenablesDefaultLighting = false

        // set the scene to the view
        self.gameView!.scene = scene

        // allows the user to manipulate the camera
        self.gameView!.allowsCameraControl = true

        // show statistics such as fps and timing information
        self.gameView!.showsStatistics = true

        // configure the view
        self.gameView!.backgroundColor = NSColor.black

        // play it always?
        self.gameView!.isPlaying = true

        var geometry: SCNGeometry
        geometry = SCNPlane(width: 10, height: 10)
        let geometryNode = SCNNode(geometry: geometry)

        let program = SCNProgram()
        program.fragmentFunctionName = "trianglequiltFragment"
        program.vertexFunctionName = "trianglequiltVertex"

        let gradientMaterial = SCNMaterial()
        gradientMaterial.program = program
        gradientMaterial.specular.contents = NSColor.black
        gradientMaterial.locksAmbientWithDiffuse = true
        geometry.materials = [gradientMaterial]
        geometryNode.geometry?.firstMaterial?.lightingModel = .constant

        scene.rootNode.addChildNode(geometryNode)
    }
}

Try setting:
gameView?.rendersContinuously = true
(You don’t need all those extra ‘self.’s either.)
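For reference, a minimal sketch of how this could look in the awakeFromNib above (rendersContinuously requires a reasonably recent macOS SDK; treat this as an illustration rather than a guaranteed fix):
override func awakeFromNib() {
    super.awakeFromNib()
    // ... scene, camera and SCNProgram setup as in the question ...

    // Redraw every frame instead of only when SceneKit detects a change
    // in the scene graph, so scn_frame.time keeps advancing in the shader.
    gameView.rendersContinuously = true

    // Keeping playback enabled (the original attempt) does no harm here.
    gameView.isPlaying = true
}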

Related

My first complex GLSL shader: how to make it evolve with time (with three-fiber and react)?

I created the code sandbox below where I try to show a mesh.
https://codesandbox.io/s/template-shader-opcqst?file=/src/App.js
For the moment, the mesh is shown but not animated. I don't know how to change the 'time' parameter of this mesh. It must be linked to a proper use of useMemo, useEffect or useFrame.
The code for the mesh is taken from this codepen, where it is written with plain three.js, not fiber:
https://codepen.io/marioecg/pen/mdrvgpq (the link to the three.js package is blocked by CORS)
import * as THREE from "https://cdn.skypack.dev/three@0.124.0";
import { OrbitControls } from "https://cdn.skypack.dev/three/examples/jsm/controls/OrbitControls";
import * as dat from "https://cdn.skypack.dev/dat.gui@0.7.7";
const gui = new dat.GUI();
const settings = {
speed: 0.2,
density: 1.5,
strength: 0.2,
frequency: 3.0,
amplitude: 6.0,
intensity: 7.0,
};
const folder1 = gui.addFolder('Noise');
const folder2 = gui.addFolder('Rotation');
const folder3 = gui.addFolder('Color');
folder1.add(settings, 'speed', 0.1, 1, 0.01);
folder1.add(settings, 'density', 0, 10, 0.01);
folder1.add(settings, 'strength', 0, 2, 0.01);
folder2.add(settings, 'frequency', 0, 10, 0.1);
folder2.add(settings, 'amplitude', 0, 10, 0.1);
folder3.add(settings, 'intensity', 0, 10, 0.1);
const noise = `
// GLSL textureless classic 3D noise "cnoise",
// with an RSL-style periodic variant "pnoise".
// Author: Stefan Gustavson (stefan.gustavson@liu.se)
// Version: 2011-10-11
//
// Many thanks to Ian McEwan of Ashima Arts for the
// ideas for permutation and gradient selection.
//
// Copyright (c) 2011 Stefan Gustavson. All rights reserved.
// Distributed under the MIT license. See LICENSE file.
// https://github.com/ashima/webgl-noise
//
vec3 mod289(vec3 x)
{
return x - floor(x * (1.0 / 289.0)) * 289.0;
}
vec4 mod289(vec4 x)
{
return x - floor(x * (1.0 / 289.0)) * 289.0;
}
vec4 permute(vec4 x)
{
return mod289(((x*34.0)+1.0)*x);
}
vec4 taylorInvSqrt(vec4 r)
{
return 1.79284291400159 - 0.85373472095314 * r;
}
vec3 fade(vec3 t) {
return t*t*t*(t*(t*6.0-15.0)+10.0);
}
// Classic Perlin noise, periodic variant
float pnoise(vec3 P, vec3 rep)
{
vec3 Pi0 = mod(floor(P), rep); // Integer part, modulo period
vec3 Pi1 = mod(Pi0 + vec3(1.0), rep); // Integer part + 1, mod period
Pi0 = mod289(Pi0);
Pi1 = mod289(Pi1);
vec3 Pf0 = fract(P); // Fractional part for interpolation
vec3 Pf1 = Pf0 - vec3(1.0); // Fractional part - 1.0
vec4 ix = vec4(Pi0.x, Pi1.x, Pi0.x, Pi1.x);
vec4 iy = vec4(Pi0.yy, Pi1.yy);
vec4 iz0 = Pi0.zzzz;
vec4 iz1 = Pi1.zzzz;
vec4 ixy = permute(permute(ix) + iy);
vec4 ixy0 = permute(ixy + iz0);
vec4 ixy1 = permute(ixy + iz1);
vec4 gx0 = ixy0 * (1.0 / 7.0);
vec4 gy0 = fract(floor(gx0) * (1.0 / 7.0)) - 0.5;
gx0 = fract(gx0);
vec4 gz0 = vec4(0.5) - abs(gx0) - abs(gy0);
vec4 sz0 = step(gz0, vec4(0.0));
gx0 -= sz0 * (step(0.0, gx0) - 0.5);
gy0 -= sz0 * (step(0.0, gy0) - 0.5);
vec4 gx1 = ixy1 * (1.0 / 7.0);
vec4 gy1 = fract(floor(gx1) * (1.0 / 7.0)) - 0.5;
gx1 = fract(gx1);
vec4 gz1 = vec4(0.5) - abs(gx1) - abs(gy1);
vec4 sz1 = step(gz1, vec4(0.0));
gx1 -= sz1 * (step(0.0, gx1) - 0.5);
gy1 -= sz1 * (step(0.0, gy1) - 0.5);
vec3 g000 = vec3(gx0.x,gy0.x,gz0.x);
vec3 g100 = vec3(gx0.y,gy0.y,gz0.y);
vec3 g010 = vec3(gx0.z,gy0.z,gz0.z);
vec3 g110 = vec3(gx0.w,gy0.w,gz0.w);
vec3 g001 = vec3(gx1.x,gy1.x,gz1.x);
vec3 g101 = vec3(gx1.y,gy1.y,gz1.y);
vec3 g011 = vec3(gx1.z,gy1.z,gz1.z);
vec3 g111 = vec3(gx1.w,gy1.w,gz1.w);
vec4 norm0 = taylorInvSqrt(vec4(dot(g000, g000), dot(g010, g010), dot(g100, g100), dot(g110, g110)));
g000 *= norm0.x;
g010 *= norm0.y;
g100 *= norm0.z;
g110 *= norm0.w;
vec4 norm1 = taylorInvSqrt(vec4(dot(g001, g001), dot(g011, g011), dot(g101, g101), dot(g111, g111)));
g001 *= norm1.x;
g011 *= norm1.y;
g101 *= norm1.z;
g111 *= norm1.w;
float n000 = dot(g000, Pf0);
float n100 = dot(g100, vec3(Pf1.x, Pf0.yz));
float n010 = dot(g010, vec3(Pf0.x, Pf1.y, Pf0.z));
float n110 = dot(g110, vec3(Pf1.xy, Pf0.z));
float n001 = dot(g001, vec3(Pf0.xy, Pf1.z));
float n101 = dot(g101, vec3(Pf1.x, Pf0.y, Pf1.z));
float n011 = dot(g011, vec3(Pf0.x, Pf1.yz));
float n111 = dot(g111, Pf1);
vec3 fade_xyz = fade(Pf0);
vec4 n_z = mix(vec4(n000, n100, n010, n110), vec4(n001, n101, n011, n111), fade_xyz.z);
vec2 n_yz = mix(n_z.xy, n_z.zw, fade_xyz.y);
float n_xyz = mix(n_yz.x, n_yz.y, fade_xyz.x);
return 2.2 * n_xyz;
}
`;
const rotation = `
mat3 rotation3dY(float angle) {
float s = sin(angle);
float c = cos(angle);
return mat3(
c, 0.0, -s,
0.0, 1.0, 0.0,
s, 0.0, c
);
}
vec3 rotateY(vec3 v, float angle) {
return rotation3dY(angle) * v;
}
`;
const vertexShader = `
varying vec2 vUv;
varying float vDistort;
uniform float uTime;
uniform float uSpeed;
uniform float uNoiseDensity;
uniform float uNoiseStrength;
uniform float uFrequency;
uniform float uAmplitude;
${noise}
${rotation}
void main() {
vUv = uv;
float t = uTime * uSpeed;
float distortion = pnoise((normal + t) * uNoiseDensity, vec3(10.0)) * uNoiseStrength;
vec3 pos = position + (normal * distortion);
float angle = sin(uv.y * uFrequency + t) * uAmplitude;
pos = rotateY(pos, angle);
vDistort = distortion;
gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.);
}
`;
const fragmentShader = `
varying vec2 vUv;
varying float vDistort;
uniform float uTime;
uniform float uIntensity;
vec3 cosPalette(float t, vec3 a, vec3 b, vec3 c, vec3 d) {
return a + b * cos(6.28318 * (c * t + d));
}
void main() {
float distort = vDistort * uIntensity;
vec3 brightness = vec3(0.5, 0.5, 0.5);
vec3 contrast = vec3(0.5, 0.5, 0.5);
vec3 oscilation = vec3(1.0, 1.0, 1.0);
vec3 phase = vec3(0.0, 0.1, 0.2);
vec3 color = cosPalette(distort, brightness, contrast, oscilation, phase);
gl_FragColor = vec4(color, 1.0);
}
`;
class Scene {
constructor() {
this.renderer = new THREE.WebGLRenderer({ antialias: true });
this.renderer.setPixelRatio(Math.min(window.devicePixelRatio, 1.5));
this.renderer.setSize(window.innerWidth, window.innerHeight);
this.renderer.setClearColor('black', 1);
this.camera = new THREE.PerspectiveCamera(
45,
window.innerWidth / window.innerHeight,
0.1,
1000
);
this.camera.position.set(0, 0, 4);
this.scene = new THREE.Scene();
this.clock = new THREE.Clock();
this.controls = new OrbitControls(this.camera, this.renderer.domElement);
this.init();
this.animate();
}
init() {
this.addCanvas();
this.addElements();
this.addEvents();
}
addCanvas() {
const canvas = this.renderer.domElement;
canvas.classList.add('webgl');
document.body.appendChild(canvas);
}
addElements() {
const geometry = new THREE.IcosahedronBufferGeometry(1, 64);
const material = new THREE.ShaderMaterial({
vertexShader,
fragmentShader,
uniforms: {
uTime: { value: 0 },
uSpeed: { value: settings.speed },
uNoiseDensity: { value: settings.density },
uNoiseStrength: { value: settings.strength },
uFrequency: { value: settings.frequency },
uAmplitude: { value: settings.amplitude },
uIntensity: { value: settings.intensity },
},
// wireframe: true,
});
this.mesh = new THREE.Mesh(geometry, material);
this.scene.add(this.mesh);
}
addEvents() {
window.addEventListener('resize', this.resize.bind(this));
}
resize() {
let width = window.innerWidth;
let height = window.innerHeight;
this.camera.aspect = width / height;
this.renderer.setSize(width, height);
this.camera.updateProjectionMatrix();
}
animate() {
requestAnimationFrame(this.animate.bind(this));
this.render();
}
render() {
this.controls.update();
// Update uniforms
this.mesh.material.uniforms.uTime.value = this.clock.getElapsedTime();
this.mesh.material.uniforms.uSpeed.value = settings.speed;
this.mesh.material.uniforms.uNoiseDensity.value = settings.density;
this.mesh.material.uniforms.uNoiseStrength.value = settings.strength;
this.mesh.material.uniforms.uFrequency.value = settings.frequency;
this.mesh.material.uniforms.uAmplitude.value = settings.amplitude;
this.mesh.material.uniforms.uIntensity.value = settings.intensity;
this.renderer.render(this.scene, this.camera);
}
}
new Scene();
Here's the bubble that should be created by the code: https://tympanus.net/Tutorials/WebGLBlobs/index3.html
How would you do it? Thanks.
If you bring the mesh into its own function component, you can then use the useFrame hook to update the uniform every frame using a ref of the mesh. (In this case, you could also apply the ref directly to the material)
import { useRef } from "react";
import { useFrame } from "@react-three/fiber"; // "react-three-fiber" in older versions

function MovingBlob() {
  const mesh = useRef()
  useFrame(({ clock }) => {
    if (mesh.current) {
      mesh.current.material.uniforms.uTime.value = clock.elapsedTime;
    }
  })
  return (
    <mesh ref={mesh}>
      <icosahedronBufferGeometry attach="geometry" args={[1, 64]} />
      {/* `data` is assumed to hold the vertexShader, fragmentShader and uniforms from the question */}
      <shaderMaterial attach="material" {...data} />
    </mesh>
  )
}
This will update the uniform every frame without triggering any re-rendering in the scene, which would happen if you were calling setState every time.
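The snippet spreads a `data` object that isn't shown. As a minimal sketch (an assumption, not part of the original answer), it could be built inside MovingBlob with useMemo, reusing the vertexShader, fragmentShader and settings from the question, so the material is created once and only its uTime uniform is mutated each frame:
const data = useMemo(
  () => ({
    vertexShader,
    fragmentShader,
    uniforms: {
      uTime: { value: 0 },
      uSpeed: { value: settings.speed },
      uNoiseDensity: { value: settings.density },
      uNoiseStrength: { value: settings.strength },
      uFrequency: { value: settings.frequency },
      uAmplitude: { value: settings.amplitude },
      uIntensity: { value: settings.intensity },
    },
  }),
  [] // created once; useMemo is imported from React
);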

Cannot create a diffuse lighting shader that works in Unity with a SPOTLIGHT

Code below. I have tried using https://catlikecoding.com/unity/tutorials/rendering/part-5/#5, but I cannot make sense of the tutorials, what they want me to do, or how to make the edits they request.
Shader "Unlit/DiffuseLighting"
{
Properties
{
_MainTex("Texture", 2D) = "white" {}
_LightPoint("Light Point Position", Vector) = (0, 0, 0, 0)
}
SubShader
{
Tags { "RenderType" = "Opaque" }
LOD 100
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma multi_compile_fog
#pragma multi_compile DIRECTIONAL POINT SPOT
#include "UnityCG.cginc"
struct appdata
{
float4 vertex : POSITION;
float4 normal : NORMAL;
float2 uv : TEXCOORD0;
};
struct v2f
{
float2 uv : TEXCOORD0;
float3 worldNormal : TEXCOORD1;
float3 worldPosition : TEXCOORD2;
UNITY_FOG_COORDS(1)
float4 vertex : SV_POSITION;
};
sampler2D _MainTex;
float4 _MainTex_ST;
float4 _LightPoint;
v2f vert(appdata v)
{
v2f o;
o.worldNormal = UnityObjectToWorldNormal(v.normal);
o.vertex = UnityObjectToClipPos(v.vertex);
o.worldPosition = mul(unity_ObjectToWorld, v.vertex);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
UNITY_TRANSFER_FOG(o,o.vertex);
return o;
}
fixed4 frag(v2f i) : SV_Target
{
#if defined(POINT)||defined(SPOT)
fixed3 lightDirection = normalize(_WorldSpaceLightPos0.xyz - i.worldPosition);
fixed3 lightDifference = i.worldPosition - _LightPoint.xyz;
fixed intensity = max(-1 * dot(lightDirection, i.worldNormal), 0);
fixed4 col = intensity * tex2D(_MainTex, i.uv);
UNITY_APPLY_FOG(i.fogCoord, col);
return col;
#else
fixed3 lightDifference = i.worldPosition - _LightPoint.xyz;
fixed3 lightDirection = normalize(lightDifference);
fixed intensity = max(-1 * dot(lightDirection, i.worldNormal), 0);
fixed4 col = intensity * tex2D(_MainTex, i.uv);
UNITY_APPLY_FOG(i.fogCoord, col);
return col;
#endif
}
ENDCG
}
}
}
I tried following a tutorial and went to its Discord, where they redirected me to https://catlikecoding.com/unity/tutorials/rendering/part-5/#5, and now I am stuck and do not know how to make it work with the spotlight in our scene.

SceneKit transparency over black background

I'm currently learning how to use SceneKit, and am having issues with transparent objects. I've written a shader that increases transparency when faces are viewed head on, and everything works as expected when I choose a white background:
transparency with white background
But when I choose a black background, the object acts as if it's completely opaque:
transparency with black background
Here's my view controller code.
import Cocoa
import SceneKit
import SceneKit.ModelIO
struct shaderSettings {
var Color: vector_float4
var minTransparency: Float
}
class ViewController: NSViewController {
var scnView: SCNView!
var scnScene: SCNScene!
override func viewDidLoad() {
super.viewDidLoad()
scnView = self.view as! SCNView
scnScene = SCNScene()
scnView.scene = scnScene
scnView.allowsCameraControl = true
scnView.backgroundColor = NSColor.black
//load mesh
let url = URL(fileURLWithPath: "monkey.obj")
let monkey = MDLAsset(url: url).object(at: 0)
let node = SCNNode(mdlObject: monkey)
node.name = "monkey"
//use metal shader
let program = SCNProgram();
program.vertexFunctionName = "vert";
program.fragmentFunctionName = "frag";
program.isOpaque = false
//setup material
var s = shaderSettings(Color: vector_float4(1, 0, 0, 1) ,minTransparency: 0.3)
let mat = SCNMaterial();
mat.program = program;
mat.setValue(NSData(bytes: &s, length: MemoryLayout<shaderSettings>.stride), forKey: "s")
mat.blendMode = SCNBlendMode.alpha
mat.writesToDepthBuffer = false
mat.readsFromDepthBuffer = false
mat.cullMode = SCNCullMode.back
//create node
node.geometry!.firstMaterial = mat
scnScene.rootNode.addChildNode(node)
}
override func viewDidDisappear() {
//quit when window closes
exit(0)
}
}
And, while I don't think the issue is in here, here's my shader program anyway:
#include <metal_stdlib>
using namespace metal;
#include <metal_geometric>
#include <metal_common>
#include <SceneKit/scn_metal>
struct settings {
float4 Color;
float minTransparency;
};
typedef struct {
float4 pos [[ attribute(SCNVertexSemanticPosition) ]];
float4 norm [[ attribute(SCNVertexSemanticNormal) ]];
} Input;
typedef struct {
float4 pos [[position]];
float3 camDir;
float3 norm;
} v2f;
struct nodeBuffer {
float4x4 modelViewProjectionTransform;
float4x4 modelViewTransform;
float4x4 normalTransform;
};
vertex v2f vert(Input in [[ stage_in ]],
constant SCNSceneBuffer& scn_frame [[buffer(0)]],
constant nodeBuffer& scn_node [[buffer(1)]]) {
v2f o;
o.pos = scn_node.modelViewProjectionTransform * in.pos;
o.norm = normalize(float3(scn_node.normalTransform * in.norm));
o.camDir = normalize(float3(scn_node.modelViewTransform*in.pos));
return o;
}
fragment half4 frag(v2f in [[stage_in]], constant settings& s [[buffer(2)]]) {
float nDotCam = abs(dot(float3(in.norm), float3(-in.camDir)));
half4 col;
col.rgb = half3(s.Color.rgb);
col.a = half(mix(s.Color.a, s.minTransparency, nDotCam));
return col;
}
Thanks for your time!
I figured it out. I read here that SceneKit assumes premultiplied alpha, so I added
col.rgb *= col.a;
before the return in the fragment shader, and now it works fine.
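For reference, the corrected fragment function assembled from the code above, with the premultiplication as the only added line:
fragment half4 frag(v2f in [[stage_in]], constant settings& s [[buffer(2)]]) {
    float nDotCam = abs(dot(float3(in.norm), float3(-in.camDir)));
    half4 col;
    col.rgb = half3(s.Color.rgb);
    col.a = half(mix(s.Color.a, s.minTransparency, nDotCam));
    col.rgb *= col.a; // premultiply, since SceneKit expects premultiplied alpha
    return col;
}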

SharpDX Constant/Texture Buffers Don't Work

I've been trying to get constant/texture buffers to work in SharpDX (it's just like SlimDX), but the data I put in them doesn't seem to reach the shaders.
I've looked up how to do it and followed examples, but I just can't get it to work.
Ultimately I will need to pass several large arrays of various data types into my shaders, so if anyone can give me a working example that can do that, it would be great!
But for now I've written a simple test example, and I just can't get it to work. Usually I can at least get something to display when I draw a triangle, but right now it won't even do that.
That's probably a silly mistake I overlooked, but anyway, if someone could take a look at it and point out what's wrong, or better yet, fix it and post the updated code, that would help (it is complete and should compile).
I'm sorry the code is long, but I tried to make it as simple as possible. Anyway, here it is:
using SharpDX;
using SharpDX.Direct3D;
using SharpDX.Direct3D11;
using SharpDX.DXGI;
using SharpDX.Windows;
using SharpDX.D3DCompiler;
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;
using System.Runtime.InteropServices;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
namespace test_namespace
{
class Test
{
[StructLayout(LayoutKind.Explicit, Size = 80, Pack = 16)]
public struct Data
{
[FieldOffset(0)]
public Matrix mat;
[FieldOffset(64)]
public Vector4 testColor;
}
[StructLayout(LayoutKind.Explicit)]
public struct Point
{
[FieldOffset(0)]
public Vector4 pos;
[FieldOffset(16)]
public Vector2 tex;
}
int width = 1000;
int height = 1000;
const int vertSize = 6 * sizeof(float);
RenderForm form;
PictureBox pic;
SharpDX.Direct3D11.Device dev;
DeviceContext dc;
SwapChainDescription scd;
SwapChain sc;
RasterizerStateDescription rsd;
RasterizerState rs;
Viewport vp;
Texture2DDescription depthDesc;
DepthStencilView dsv;
RenderTargetView rtv;
SharpDX.Direct3D11.Buffer buffer;
InputLayout il;
VertexShader vs;
ShaderBytecode vsCode;
PixelShader ps;
ShaderBytecode psCode;
Matrix view;
Matrix proj;
Matrix mat;
Data data;
DataStream pointStream;
SharpDX.Direct3D11.Buffer pointBuffer;
public Test()
{
init();
initMat();
data.testColor = new Vector4(1.0f, 0.5f, 0.25f, 0.0f);
string code = "struct vert { float4 pos : POSITION; float2 tex : TEXCOORD; };\n"
+ "struct pix { float4 pos : SV_POSITION; float2 tex : TEXCOORD; };\n"
+ "cbuffer buf1 : register(b0) { float4x4 mat; float4 testColor; }\n"
+ "pix VS(vert vertIn) { pix pixOut = (pix)0; pixOut.pos = mul(vertIn.pos, mat); pixOut.tex = vertIn.tex; return pixOut; }\n"
+ "float4 PS(pix pixIn) : SV_Target { return testColor; }";
vsCode = ShaderBytecode.Compile(code, "VS", "vs_5_0");
vs = new VertexShader(dev, vsCode);
psCode = ShaderBytecode.Compile(code, "PS", "ps_5_0");
ps = new PixelShader(dev, psCode);
dc.VertexShader.Set(vs);
dc.PixelShader.Set(ps);
il = new InputLayout(dev, ShaderSignature.GetInputSignature(vsCode),
new InputElement[] {new InputElement("POSITION", 0, Format.R32G32B32_Float, 0, 0),
new InputElement("TEXCOORD", 0, Format.R32G32_Float, 16, 0)});
dc.InputAssembler.InputLayout = il;
dc.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
updateBuffer();
RenderLoop.Run(form, () =>
{
dc.ClearDepthStencilView(dsv, DepthStencilClearFlags.Depth, 1.0f, 0);
dc.ClearRenderTargetView(rtv, Color4.Black);
float dist = 10.0f;
draw(new Vector3(-dist, -dist, dist), Vector2.Zero, new Vector3(-dist, dist, dist), Vector2.UnitY,
new Vector3(dist, dist, dist), Vector2.One);
});
}
void init()
{
form = new RenderForm();
form.ClientSize = new System.Drawing.Size(width, height);
form.BackColor = System.Drawing.Color.Black;
form.FormClosed += form_FormClosed;
pic = new PictureBox();
pic.Location = new System.Drawing.Point(0, 0);
pic.Size = new Size(width, height);
pic.Show();
form.Controls.Add(pic);
scd = new SwapChainDescription();
scd.BufferCount = 1;
scd.Flags = SwapChainFlags.AllowModeSwitch;
scd.IsWindowed = true;
scd.ModeDescription = new ModeDescription(width, height, new Rational(60, 1), Format.R8G8B8A8_UNorm);
scd.OutputHandle = pic.Handle;
scd.SampleDescription = new SampleDescription(1, 0);
scd.SwapEffect = SwapEffect.Discard;
scd.Usage = Usage.RenderTargetOutput;
rsd = new RasterizerStateDescription();
rsd.CullMode = CullMode.None;
rsd.DepthBias = 0;
rsd.DepthBiasClamp = 0;
rsd.FillMode = FillMode.Solid;
rsd.IsAntialiasedLineEnabled = true;
rsd.IsDepthClipEnabled = true;
rsd.IsFrontCounterClockwise = false;
rsd.IsMultisampleEnabled = true;
rsd.IsScissorEnabled = false;
rsd.SlopeScaledDepthBias = 0;
SharpDX.Direct3D11.Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.Debug, scd, out dev, out sc);
rs = new RasterizerState(dev, rsd);
vp = new Viewport(0, 0, width, height, 0.0f, 1.0f);
dc = dev.ImmediateContext;
dc.Rasterizer.State = rs;
dc.Rasterizer.SetViewports(vp);
depthDesc = new Texture2DDescription();
depthDesc.ArraySize = 1;
depthDesc.BindFlags = BindFlags.DepthStencil;
depthDesc.CpuAccessFlags = CpuAccessFlags.None;
depthDesc.Format = Format.D32_Float_S8X24_UInt;
depthDesc.Height = height;
depthDesc.MipLevels = 1;
depthDesc.OptionFlags = ResourceOptionFlags.None;
depthDesc.SampleDescription = new SampleDescription(1, 0);
depthDesc.Usage = ResourceUsage.Default;
depthDesc.Width = width;
dsv = new DepthStencilView(dev, new Texture2D(dev, depthDesc));
rtv = new RenderTargetView(dev, (SharpDX.Direct3D11.Resource)SharpDX.Direct3D11.Resource.FromSwapChain<Texture2D>(sc, 0));
dc.OutputMerger.SetTargets(dsv, rtv);
buffer = new SharpDX.Direct3D11.Buffer(dev, Marshal.SizeOf(typeof(Data)),
ResourceUsage.Default, BindFlags.ConstantBuffer, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
dc.VertexShader.SetConstantBuffer(0, buffer);
}
void initMat()
{
view = Matrix.LookAtLH(Vector3.Zero, Vector3.UnitZ, Vector3.UnitY);
proj = Matrix.PerspectiveFovLH((float)Math.PI / 4.0f, (float)width / (float)height, 0.001f, 10000.0f);
mat = view * proj;
mat.Transpose();
data.mat = mat;
}
void updateBuffer()
{
dc.UpdateSubresource<Data>(ref data, buffer);
}
public void draw(Vector3 p1, Vector2 t1, Vector3 p2, Vector2 t2, Vector3 p3, Vector2 t3)
{
Vector3[] p = new Vector3[3] {p1, p2, p3};
Vector2[] t = new Vector2[3] {t1, t2, t3};
Point[] points = new Point[3];
for(int i = 0; i < 3; i++)
{
points[i] = new Point();
points[i].pos = new Vector4(p[i].X, p[i].Y, p[i].Z, 1.0f);
points[i].tex = new Vector2(t[i].X, t[i].Y);
}
using(pointStream = new DataStream(vertSize * 3, true, true))
{
pointStream.WriteRange<Point>(points);
using(pointBuffer = new SharpDX.Direct3D11.Buffer(dev, pointStream, vertSize * 3,
ResourceUsage.Default, BindFlags.VertexBuffer, CpuAccessFlags.None, ResourceOptionFlags.None, 0))
{
dc.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
dc.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(pointBuffer, vertSize, 0));
dc.Draw(3, 0);
}
}
}
void form_FormClosed(object sender, FormClosedEventArgs e)
{
buffer.Dispose();
il.Dispose();
ps.Dispose();
psCode.Dispose();
vs.Dispose();
vsCode.Dispose();
rtv.Dispose();
dsv.Dispose();
dc.ClearState();
dc.Flush();
dc.Dispose();
dev.Dispose();
sc.Dispose();
}
}
}
Also, here is the shader code formatted in a more readable way:
struct vert
{
float4 pos : POSITION;
float2 tex : TEXCOORD;
};
struct pix
{
float4 pos : SV_POSITION;
float2 tex : TEXCOORD;
};
cbuffer buf1 : register(b0)
{
float4x4 mat;
float4 testColor;
}
pix VS(vert vertIn)
{
pix pixOut = (pix)0;
pixOut.pos = mul(vertIn.pos, mat);
pixOut.tex = vertIn.tex;
return pixOut;
}
float4 PS(pix pixIn) : SV_Target
{
return testColor;
}
I'm not sure if this is of any help, but why use UpdateSubresource in your updateBuffer()? In SharpDXTutorial/Tutorial16 (the cubemap example), the constant buffer is updated through the "device" object:
device.UpdateData<Data>(dataConstantBuffer, sceneInformation);
This is a very handy helper. It is part of SharpHelper, in SharpDXTutorial:
https://github.com/RobyDX/SharpDX_Demo/blob/master/SharpDXTutorial/SharpHelper/SharpHelper.csproj
Maybe it takes care of something that was missed when updating the constant buffer?

How to move the object model with touches in rend-ios?

I downloaded the project from https://github.com/antonholmquist/rend-ios.
When I run this project, the teapot rotates a full 360 degrees, but the rotation never stops and touches do not rotate the teapot. I worked on stopping the rotation and that works properly; now that the rotation is stopped, I want touches to move the teapot, so I tried the code below:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
CGPoint currentMovementPosition = [[touches anyObject] locationInView:glView_];
[self renderByRotatingAroundX:(lastMovementPosition.x - currentMovementPosition.x)
rotatingAroundY: (lastMovementPosition.y - currentMovementPosition.y) scaling:1.0f translationInX:0.0f translationInY:0.0f];
lastMovementPosition = currentMovementPosition;
}
- (void)renderByRotatingAroundX:(float)xRotation rotatingAroundY:(float)yRotation scaling:(float)scaleF translationInX:(float)xTranslation translationInY:(float)yTranslation{
currentCalculatedMatrix = CATransform3DIdentity;
currentCalculatedMatrix = CATransform3DTranslate(currentCalculatedMatrix, 0.0, -0.2, 0.0);
currentCalculatedMatrix = CATransform3DScale(currentCalculatedMatrix, 4.5, 4.5 * (320.0/480.0), 4.5);
glClearColor(0.0f,0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
GLfloat currentModelViewMatrix[16];
// Perform incremental rotation based on current angles in X and Y
if ((xRotation != 0.0) || (yRotation != 0.0))
{
GLfloat totalRotation = sqrt(xRotation*xRotation + yRotation*yRotation);
CATransform3D temporaryMatrix = CATransform3DRotate(currentCalculatedMatrix, totalRotation * M_PI / 180.0,
((xRotation/totalRotation) * currentCalculatedMatrix.m12 + (yRotation/totalRotation) * currentCalculatedMatrix.m11),
((xRotation/totalRotation) * currentCalculatedMatrix.m22 + (yRotation/totalRotation) * currentCalculatedMatrix.m21),
((xRotation/totalRotation) * currentCalculatedMatrix.m32 + (yRotation/totalRotation) * currentCalculatedMatrix.m31));
if ((temporaryMatrix.m11 >= -100.0) && (temporaryMatrix.m11 <= 100.0))
currentCalculatedMatrix = temporaryMatrix;
}
else
{
}
// Draw the teapot model
[self convert3DTransform:&currentCalculatedMatrix toMatrix:currentModelViewMatrix];
[plainDisplayProgram use];
glUniformMatrix4fv(plainDisplayModelViewMatrix, 1, 0, currentModelViewMatrix);
glVertexAttribPointer(plainDisplayPositionAttribute, 3, GL_FLOAT, 0, 0, cube_vertices);
glEnableVertexAttribArray(plainDisplayPositionAttribute);
NSLog(#"posit:%d,matrix=%d",plainDisplayPositionAttribute,plainDisplayModelViewMatrix);
//Draw teapot. The new_teapot_indicies array is an RLE (run-length encoded) version of the teapot_indices array in teapot.h
for(int i = 0; i < num_cube_indices; i += cube_indices[i] + 1)
{
NSLog(#"count:%d,i=%d",num_cube_indices,i);
glDrawElements(GL_TRIANGLES, cube_indices[i], GL_UNSIGNED_SHORT, &cube_indices[i + 1]);
}
[glView_ presentFramebuffer];
}
- (BOOL)presentFramebuffer
{
BOOL success = FALSE;
if ([EAGLContext currentContext])
{
#ifdef MSAA
glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, multisampleFramebuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE,framebuffer);
glResolveMultisampleFramebufferAPPLE();
#endif
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
success = [[EAGLContext currentContext] presentRenderbuffer:GL_RENDERBUFFER];
#ifdef MSAA
glBindFramebuffer(GL_FRAMEBUFFER, multisampleFramebuffer);
#endif
}
return success;
}
When I try to move the teapot, it does not move; only a black screen is shown.
How do I show the teapot model and move it in touchesMoved?
Using touchesMoved:, it will rotate cleanly in all directions. Try this.
xAngle and yAngle are declared globally and initialised to 0.0.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch * touch = [touches anyObject];
NSLog(#"Touch Returns : %#",touches);
CGPoint location = [touch locationInView:glView_];
CGPoint lastLoc = [touch previousLocationInView:glView_];
CGPoint diff = CGPointMake(lastLoc.x - location.x, lastLoc.y - location.y);
float rotX = 1 * DegreesToRadians(diff.y / 0.2);
float rotY = 1 * DegreesToRadians(diff.x / 0.2);
xAngle += rotX;
yAngle += rotY;
teapotNode_.rotation = CC3VectorMake(xAngle, yAngle, 0);
director_.running = YES;
}
I found the answer myself. In this rend-ios project (https://github.com/antonholmquist/rend-ios), every button touch moves the object model using the code below:
-(void)rightButtonPressed
{
float angle;
angle = teapotNode_.rotationAngle;
angle +=0.4;
director_.running = YES;//Redirector object
if (director_.frameInterval == 2)
{
director_.running = NO;
}
}
It works fine.
