Increase Intensity of SCNLights - scenekit

Hello, I have an SCNScene that is the basis of my game. The lighting was tricky, and to get the effect I wanted I ended up duplicating three lights three times. This increased the intensity of the lights to create the effect and colors I wanted. However, I know that 9 lights all casting shadows has been taking a toll on my FPS. Is there any way to increase the intensity of the lights like I did by duplicating them, without destroying my FPS?
Thanks!

What type of lights do you have? Do they have non-default attenuation values? (See attenuationStartDistance, attenuationEndDistance and attenuationFalloffExponent.)
You can try to increase the brightness of your lights' colors if that's possible (if they aren't already 100% white, for instance).
Otherwise you can use shader modifiers. The SCNShaderModifierEntryPointLightingModel entry point will let you customize the effect of each light.

In iOS 10 & macOS 10.12, SCNLight now has an intensity: CGFloat property which'll let you multiply the brightness of each light.  Assuming you're not using PBR/IES, intensity acts as a permille multiplier— 1000 = 1×, 3000 = 3×, 100 = 0.1×, etc.  (When using PBR or IES lighting, intensity instead controls the luminous flux of the light.)
To triple the brightness of each SCNLight, simply do:
myLight1.intensity = 3000

Uniform random sampling of CIELUV for RGB colors

Selecting a random color on a computer is a touch harder than I thought it would be.
The naive way of uniform random sampling of 0..255 for R,G,B will tend to draw lots of similar greens. It would make sense to sample from a perceptually uniform space like CIELUV.
A simple way to do this is to sample L, u, v uniformly within a box, making sure the color solid is contained in the bounds (I've seen different bounds for this). If the sample falls outside the embedded RGB solid (tested by mapping it to XYZ and then to RGB), reject it and sample again. You can settle for a kludgy-but-guaranteed-to-terminate "bailout" selection (like the naive procedure) if you reject more than some arbitrary threshold number of times.
The test for whether the sample lies within RGB needs to handle the special case of black (some implementations are silent about the divide by zero there), I believe. If L=0 and either u != 0 or v != 0, then the sample needs to be rejected, or else you end up oversampling the L=0 plane in Luv space.
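In rough JavaScript, the procedure looks something like this (the Luv box bounds here are deliberately generous placeholders; finding the right tight bounds is part of what I'm asking):
// uniform rejection sampling of the sRGB solid inside a Luv bounding box
var L_RANGE = [0, 100];
var U_RANGE = [-100, 195]; // generous placeholder bounds
var V_RANGE = [-145, 115]; // generous placeholder bounds

function luvToXyz(L, u, v) {
  var refU = 0.19784, refV = 0.46832; // D65 white point in u'v'
  if (L <= 0) return [0, 0, 0];
  var Y = L > 8 ? Math.pow((L + 16) / 116, 3) : L / 903.3;
  var up = u / (13 * L) + refU;
  var vp = v / (13 * L) + refV;
  var X = Y * (9 * up) / (4 * vp);
  var Z = Y * (12 - 3 * up - 20 * vp) / (4 * vp);
  return [X, Y, Z];
}

function xyzToLinearRgb(xyz) {
  var X = xyz[0], Y = xyz[1], Z = xyz[2];
  return [
     3.2406 * X - 1.5372 * Y - 0.4986 * Z,
    -0.9689 * X + 1.8758 * Y + 0.0415 * Z,
     0.0557 * X - 0.2040 * Y + 1.0570 * Z
  ];
}

function randomLuvColor(maxTries) {
  for (var i = 0; i < (maxTries || 100); i++) {
    var L = L_RANGE[0] + Math.random() * (L_RANGE[1] - L_RANGE[0]);
    var u = U_RANGE[0] + Math.random() * (U_RANGE[1] - U_RANGE[0]);
    var v = V_RANGE[0] + Math.random() * (V_RANGE[1] - V_RANGE[0]);
    if (L === 0 && (u !== 0 || v !== 0)) continue; // the L = 0 special case mentioned above
    var rgb = xyzToLinearRgb(luvToXyz(L, u, v));
    if (rgb.every(function (c) { return c >= 0 && c <= 1; })) {
      return rgb; // in gamut; still linear, apply sRGB gamma before display
    }
  }
  // bailout: fall back to naive uniform RGB after too many rejections
  return [Math.random(), Math.random(), Math.random()];
}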
Does this procedure have an obvious flaw? It seems to work but I did notice that I was rolling black more often than I thought made sense until I thought about what was happening in that case. Can anyone point me to the right bounds on the CIELUV grid to ensure that I am enclosing the RGB solid?
A useful reference for those who don't know it:
https://www.easyrgb.com/en/math.php
The key problem with this is that you need bounds to reject samples that fall outside of RGB. I was able to find it worked out here (nice demo on page, API provides convenient functions):
https://www.hsluv.org/
A few things I noticed with uniform sampling of CIELUV in RGB:
most colors are green and purple (this is true independent of RGB bounds)
you have a hard time sampling what we think of as yellow (very small volume of high lightness, high chroma space)
I implemented various strategies that focus on sampling hues (which is really what we want when we think of "sampling colors") by weighting according to the maximum chroma at a given lightness. This makes colors like chromatic light yellows easier to catch and avoids oversampling greens and purples. You can see these methods in action here (select "randomize colors"):
https://www.mysticsymbolic.art/
Source for color randomizers here:
https://github.com/mittimithai/mystic-symbolic/blob/chromacorners/lib/random-colors.ts
Okay, while you don't show the code you are using to generate the random numbers and then apply them to the CIELUV color space, I'm going to guess that you are creating a random number 0.0-100.0 from a random number generator, and then just assigning it to L*.
That will most likely give you a lot of black or very dark results.
Let Me Explain
L* of L*u*v* is not linear with respect to light. Y of CIEXYZ is linear with respect to light. L* is perceptual lightness, so a power curve is applied to Y to make it linear to perception, which makes it non-linear with respect to light.
TRY THIS
To get L* as a random value from 0 to 100:
Generate a random number between 0.0 and 1.0
Then apply an exponent of 0.42
Then multiply by 100 to get L*
Lstar = Math.pow(Math.random(), 0.42) * 100;
This takes your random number that represents light and applies a power curve that emulates human lightness perception.
UV Color
As for the u and v values, you can probably just leave them as linear random numbers. Constrain u to about -84 to +176, and v to about -132.5 to +107.5:
Urnd = (Math.random() - 0.3231) * 260;  // u in about [-84, +176]
Vrnd = (Math.random() - 0.5521) * 240;  // v in about [-132.5, +107.5]
Polar Color
It might be interesting to convert u,v to polar form, i.e. LCh(uv) or Lsh(uv).
For hue, it's probably as simple as H = Math.random() * 360
For chroma constrained 0 to 178: C = Math.random() * 178
The next question is: should you use chroma or saturation? CIELUV can provide either hue/chroma or hue/saturation, but for directly generating random colors, chroma seems to work a bit better.
And of course these simple examples do not prevent over-runs, so the color values need to be tested to see whether they are legal sRGB or not. There are a few things that can be done to constrain the generated values to legal colors, but the object here was to get you a better distribution without excess black/dark results.
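Putting the pieces together, here's a minimal sketch in the polar form (the final gamut test is left to whatever Luv-to-sRGB conversion you're using):
// random LCh(uv) color: perceptual power curve for L*, uniform hue and chroma
function randomLchUv() {
  var L = Math.pow(Math.random(), 0.42) * 100; // perceptual lightness
  var C = Math.random() * 178;                 // chroma
  var H = Math.random() * 360;                 // hue in degrees
  var hRad = H * Math.PI / 180;
  var u = C * Math.cos(hRad);
  var v = C * Math.sin(hRad);
  // convert (L, u, v) to sRGB and re-roll if it lands out of gamut
  return { L: L, u: u, v: v };
}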
Please let me know of any questions.

What do the channels do in CNN?

I am a newbie to CNNs and I want to ask: what do the channels do, in SSD for example? Why do they exist? For example, in 18x18x1024, what does the third number mean?
Thanks for any answer.
The dimensions of an image can be represented using 3 numbers. For example, a color image in CIFAR-10 dataset has a height of 32 pixels, width of 32 pixels and is represented as 32 x 32 x 3. Here 3 represents the number of channels in your image. Color images have a channel size of 3 (usually RGB), while a grayscale image will have a channel size of 1.
A CNN will learn features of the images that you feed it, with increasing levels of complexity. These features are represented by the channels. The deeper you go into the network, the more channels you will have representing these complex features. These features are then used by the network to perform object detection.
In your example, 18x18x1024 means your input image is now represented with 1024 channels, where each channel represents some complex feature/information about the image.
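As a concrete illustration (the 512 input channels and the 3x3 kernel size below are made-up numbers, chosen only to show where the third dimension comes from), the channel count of a convolution layer's output equals the number of filters in that layer:
input feature map:   18 x 18 x 512
conv layer:          1024 filters, each 3 x 3 x 512, stride 1, "same" padding
output feature map:  18 x 18 x 1024   (one 18 x 18 map per filter)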
Since you are a beginner, I suggest you look into how CNNs work in general, before diving into object detection. A good start would be image classification using CNNs. I hope this answers your question. Happy learning!! :)

Do depth values in AVDepthData (from TrueDepth camera) indicate distance from camera or camera plane?

Do depth values in AVDepthData (from TrueDepth camera) indicate distance in meters from the camera, or perpendicular distance from the plane of the camera (i.e. z-value in camera space)?
My goal is to get an accurate 3D point from the depth data, and this distinction is important for accuracy. I've found lots online regarding OpenGL or Kinect, but not for TrueDepth camera.
FWIW, this is the algorithm I use. I find the value of the depth buffer at a pixel located using some OpenCV feature detection. Below is the code I use to find the real-world 3D point at a given pixel, let cgPt: CGPoint. This algorithm seems to work quite well, but I'm not sure whether a small error is introduced by the assumption that depth is the distance to the camera plane.
// the TrueDepth data here is disparity, so invert to get depth in meters
let depth = 1/disparity
// project a point `depth` units in front of the camera (assuming the camera
// sits at the origin looking down -z) to recover the screen-space z for that depth
let vScreen = sceneView.projectPoint(SCNVector3Make(0, 0, -depth))
// cgPt is the 2D coordinates at which I sample the depth;
// unproject that pixel using the recovered screen-space z to get the world-space point
let worldPoint = sceneView.unprojectPoint(SCNVector3Make(cgPt.x, cgPt.y, vScreen.z))
I'm not sure of authoritative info either way, but it's worth noticing that capture in a disparity (not depth) format uses distances based on a pinhole camera model, as explained in the WWDC17 session on depth photography. That session is primarily about disparity-based depth capture with back-facing dual cameras, but a lot of the lessons in it are also valid for the TrueDepth camera.
That is, disparity is 1/depth, where depth is distance from subject to imaging plane along the focal axis (perpendicular to imaging plane). Not, say, distance from subject to the focal point, or straight-line distance to the subject's image on the imaging plane.
IIRC the default formats for TrueDepth camera capture are depth, not disparity (that is, depth map "pixel" values are meters, not 1/meters), but lacking a statement from Apple it's probably safe to assume the model is otherwise the same.
It looks like it measures distance from the camera's plane rather than a straight line from the pinhole. You can test this out by downloading the Streaming Depth Data from the TrueDepth Camera sample code.
Place the phone vertically 10 feet away from the wall, and you should expect to see one of the following:
If it measures from the focal point to the wall as a straight line, you should expect to see a radial pattern (e.g. the point closest to the camera is straight in front of it; the points farthest from the camera are those closer to the floor and ceiling).
If it measures distance from the camera's plane, then you should expect the wall color to be nearly uniform (as long as you're holding the phone parallel to the wall).
After downloading the sample code and trying it out, you will notice that it behaves like #2, meaning it's distance from the camera's plane, not from the camera itself.

Making a dynamic gradient with HSL or RGB

I have a standard 50-state map built with d3 in which I'm dynamically coloring states according to various datasets. Whatever the dataset, the values are normalized on a scale of 0 to 1, where 1 corresponds to the state with the highest value. I'm looking for a way to calculate the shade of the state using the value of the normalized data point.
In the past, I've chosen a base color that I like -- say, #900 -- and set the fill of each state to that color and the opacity to the normalized value. This works okay save for two problems:
when the canvas has a background color, it requires drawing a blank white state beneath every shaded state; and
fading out colors this way can look pasty
But I really like being able to set the color dynamically rather than dealing with bins for the data and preset arrays of RGB values for the gradient. So I'm wondering if there's a better way. I can take care of conversion if an alternate color system would work better.
d3 has a baked-in HSL converter, so I tried this:
// 0 <= val <= 1
function colorize(val) {
  // nudge in the extremes
  val = 0.2 + 0.6 * val;
  return d3.hsl(0, val, 1 - val);
}
It works okay -- This is a map of fishing jobs, which are most prevalent in Maine and Oregon -- but I suspect there's a better way. Ideas?
I like what you did actually, but if you wish to do something different, you can always do a D3 scale. For example:
var scale = d3.scale.linear()
    .domain([rangeMin, rangeMid, rangeMax])
    .range(["#Color1", "#Color2", "#Color3"]);
And then set each state by
return scale(dataValue);
You can set your rangeMin and rangeMax variables to be the minimum and maximum values of your data. The median number, rangeMid, that I added is optional. I would suggest using this if you would like some variety in your color. I have used this scale feature to make a word frequency heatmap that came out pretty nice. I hope that I was able to help in some way!
Note: I used this with css hex values, but I believe RGB and HSL could also work.
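For example, a minimal sketch (the data array and the .value accessor here are placeholders for however your dataset is actually structured):
var values = data.map(function (d) { return d.value; }); // hypothetical accessor
var scale = d3.scale.linear()
    .domain([d3.min(values), d3.max(values)])
    .range(["#fee5d9", "#99000d"]); // light red to dark red
// then, when drawing each state:
// .style("fill", function (d) { return scale(d.value); })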

RGB value detection and implementation

I'm writing an application that displays different color swatches to help people with color coordination. How can I find the RGB values of real world objects?
For example, one of the colors is Red Apple but obviously a red apple isn't just red. It has hints of other colors in it.
Well, it's not an easy task to be honest, but a good place to start would be with a digital camera and/or a flatbed scanner.
Once you have an image in the computer, the task is somewhat easier because all you need to do is use a picture/photo editing package such as Photoshop or GIMP to sample a selection of colours before using them in your application.
Once you have a few different samples, you need to average them, and that's quite easy to do. Let's say you took 5 samples of RGB values:
255,50,10
250,40,11
253,51,15
248,60,13
254,45,20
You simply need to add up each component and divide by how many samples you took so:
Red = (255 + 250 + 253 + 248 + 254) / 5
Green = (50 + 40 + 51 + 60 + 45) / 5
Blue = (10 + 11 + 15 + 13 + 20) / 5
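In code, that averaging is just a loop over the samples (plain JavaScript here, purely as an illustration):
// average an array of [r, g, b] samples
function averageRgb(samples) {
  var sum = [0, 0, 0];
  samples.forEach(function (s) {
    sum[0] += s[0]; sum[1] += s[1]; sum[2] += s[2];
  });
  return sum.map(function (c) { return Math.round(c / samples.length); });
}
averageRgb([[255,50,10], [250,40,11], [253,51,15], [248,60,13], [254,45,20]]);
// -> [252, 49, 14]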
Now, if what you're asking is how to do this automatically in program code, that's a whole different kettle of fish. First you'll need something like a webcam, then you'll need to write code to capture images from it, and once you have your image you'll need not just the ability to pick a colour, but to actually figure out where in the image the object you want to pick the colour from actually is.
For now, I'd look at using the first method; it's a bit manual, I agree, but far easier and will get you started.
The image processing required for the second approach has given software engineers and computer scientists headaches for years and is still not a perfect science... and that's before we even start thinking about the maths.
For each object, I would do it this way:
1. Use Google Images to search for pictures of the object you want.
2. Select the one that has the most accurate color for your idea of a "red apple", for example. (You can skip steps 1 and 2 if you already have a digital picture of the object.)
3. Open that image in Paint; you can do this by pressing the Print Screen key on your keyboard, opening Paint, and then pressing Ctrl+V to paste the screenshot into Paint.
4. Select the color picker tool in Paint (the one that looks like a dropper) and click on the image, right on the spot with the color you want.
5. From the menu, select "Colors -> Edit colors", and then in the Colors palette that opens, click on "Define Custom Colors".
6. You've got it: the RGB values are there on your right.
There must be an easier way, but this will work.
If you're looking for a programmatic solution, then you would look into bitwise operations. The general idea here is that you read the image's raw binary pixel data and then use bitwise logic to convert the bits into RGB values. There are several methods for doing this depending on the programming language. Here is a method for ActionScript 3:
http://www.flashandmath.com/intermediate/rgbs/explanations.html
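The gist of the bitwise approach, shown here in JavaScript with a made-up packed ARGB pixel value, is:
var pixel = 0xFFCC3311;        // hypothetical 32-bit ARGB pixel
var r = (pixel >> 16) & 0xFF;  // 0xCC = 204
var g = (pixel >> 8) & 0xFF;   // 0x33 = 51
var b = pixel & 0xFF;          // 0x11 = 17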
Also, if you're looking for the average colour, look here (for AS3):
http://blog.soulwire.co.uk/code/actionscript-3/extract-average-colours-from-bitmapdata
A related method and explanation for Java:
Bitwise version of finding RGB in java
