Writing C methods in F#

I have a question:
There is a method like this in C:
inline void ColorSet(int face, int pos, int col)
{
    color[face*9+pos] = col;
}
I've tried to write it in F#:
type ColorSet =
    member this.ColorSet (face: int, pos: int, col: int) =
        color.[face*9+pos] = col
but I encountered this error:
The operator 'expr.[idx]' has been used on an object of indeterminate type based on information prior to this program point. Consider adding further type constraints...
Could you help me write the equivalent method?

Reading the comments, it seems you may be trying to do this:
let itemCount = 9
let faceCount = 6
let color : int [] = Array.zeroCreate (faceCount * itemCount)

let setColor face pos col =
    color.[face * itemCount + pos] <- col
Two things to note:
The "object of indeterminate type" error can usually be solved with a type annotation: by declaring color as : int [], you specify that color must be an array of integers.
The operator = is a test for equality in F#. To assign to a mutable variable or an array component, use <-.
Usage might look like this:
let red = 0xFFFF0000 // Assuming ARGB (machine endianness)
setColor 0 8 red // Set the last component of the first face to red
Note that this is unusual style for F#. I do use code like this, but only if it is known to be performance critical and the compiler can't optimize it. Normally, you would use a type for color, e.g. System.Drawing.Color for compatibility, and a type for the objects iterated by the face parameter.
Edit: Are you storing the colors of the 6 faces of dice or cuboids interleaved in one array? Just in case someone is interested, I'll assume so and show how it might look in more typical F#. I don't know if this is relevant, but I guess it can't hurt to add it.
/// A color, represented as an int. Format, from most to least
/// significant byte: alpha, red, green, blue
type Color = Color of int

let black = Color 0xFF000000
let red = Color 0xFFFF0000

type CubeColors =
    { Top : Color; Bottom : Color
      Left : Color; Right : Color
      Front : Color; Back : Color }

    /// Creates CubeColors where all faces have the same color
    static member Uniform c =
        { Top=c; Bottom=c; Left=c
          Right=c; Front=c; Back=c }

// Make an array with nine completely black cubes
let cubes = Array.create 9 (CubeColors.Uniform black)

// Change the top of the second cube to red
cubes.[1] <- { cubes.[1] with Top = red }
This uses a single-case discriminated union for the Color type and a record for the CubeColors type. This is much safer to use, and often more readable, than doing low-level array stuff.

You need to define color somewhere, such as in the class constructor.
For example if color is an array:
type ColorSet() =
    let color = Array.zeroCreate 100
    member this.ColorSet (face: int, pos: int, col: int) =
        color.[face*9+pos] <- col

Related

Is it possible to define the size of a Float32Array type in typescript?

I know that sizes of arrays can be expressed with tuple types. That isn't applicable to Float32Array, which is a class itself, though.
Can that somehow be done with Float32Array as well?
I tried const foo: Float32Array[4], but that resolves the type directly to number.
I also tried to check if types might be compatible:
let foo: [number, number, number, number];
foo = new Float32Array([1, 2, 3, 4]);
But they are not.
Changing all the types in my code to '[number, number, number, number]' (in my case I need a 4-float array for a point coordinate) is a possibility, although I would need to make changes in quite a lot of places in the code.
However, I was wondering if there might be a 'child type' extending Float32Array in which the number of elements is fixed by the type.
JavaScript typed arrays are, in fact, fixed-length - see the docs for your example. The constructors in particular:
new Float32Array(); // new in ES2017
new Float32Array(length);
new Float32Array(typedArray);
new Float32Array(object);
new Float32Array(buffer [, byteOffset [, length]]);
all have the length deducible on creation (the new no-argument form creates an empty array with 0 elements; I guess it simplifies some edge cases).
I'm not sure how you are determining the type, but as soon as you get an item from your array it will be converted to a number, the only number type available in JS - so looking at your log is misleading here. Take a look at the following instance property:
Float32Array.prototype.byteLength
Returns the length (in bytes) of the Float32Array. Fixed at construction time and thus read only.
This is the only thing that counts. If you still don't believe the docs, try logging a cell after you overflow it (easiest with an Int8Array - assign 200 to a cell and read it back). This is relevant to your example: nothing stored in the array is converted to a number; the array object is a view onto fixed-width numbers.
This is a view into raw data. If you extract a value and do mathematical operations on it, you are in JS realm and working with Numbers - but once you assign stuff back, you'd better make sure the data fits. You cannot get JS/TS to show you something like float32 in your console, but each cell of the array itself does have an exact byte length.
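As an aside, the wrap-around described above is plain two's-complement truncation - the same thing a C int8_t does. A tiny C sketch for illustration (the conversion is formally implementation-defined in C, but wraps like this on mainstream platforms):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int8_t cell = 200;    /* 200 doesn't fit in 8 signed bits... */
    printf("%d\n", cell); /* ...so this prints -56, exactly what an Int8Array cell reads back */
    return 0;
}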
Unfortunately, making the length part of the type is non-trivial within the type system as far as I can tell, since the length is a property determined at construction (even if fixed and read-only) and not part of the type. If you do want something like this, a thin wrapper could do the trick:
class vec4 extends Float32Array {
    constructor(initial_values?: [number, number, number, number]) {
        initial_values ? super(initial_values) : super(4);
    }
}
If you are willing to give up square brackets, you can add index out-of-bounds checking in accessor methods (you can assign to any cell of a fixed-length array, but out of bounds it does nothing, and retrieving an out-of-bounds cell yields undefined, which may be error-prone):
get(index: number) {
    if (index >= 4 || index < 0) ... // reject out-of-range indices (valid ones are 0-3)
    return this.private_data[index];
}
set(index: number, value: number) {
    if (index >= 4 || index < 0) ... // reject out-of-range indices
    this.private_data[index] = value;
}
Of course, since TS typing is structural, a plain Float32Array and your class are still interchangeable as far as the compiler is concerned, so enforcement is really only done at construction, and only if you do not try to break your own code (let foo: vec4; foo = new Float32Array([1, 2]); etc.).

Need help understanding a color-mapping algorithm

So I'm working on a project that involves taking pre-existing skeleton code of an application that simulates "fluid flow and visualization" and applying different visualization techniques to it.
The first step of the project is to apply different color-mapping techniques to three different data sets: fluid density (rho), fluid velocity magnitude (||v||) and force field magnitude (||f||).
The skeleton code provided already has an example that I can study to be able to determine how best to design and implement different color-maps such as red-to-white or blue-to-yellow or what have you.
The snippet of code I'm trying to understand is the following:
//rainbow: Implements a color palette, mapping the scalar 'value' to a rainbow color RGB
void rainbow(float value, float* R, float* G, float* B)
{
    const float dx = 0.8;
    if (value < 0) value = 0;
    if (value > 1) value = 1;
    value = (6 - 2*dx)*value + dx;
    *R = max(0.0, (3 - fabs(value-4) - fabs(value-5))/2);
    *G = max(0.0, (4 - fabs(value-2) - fabs(value-4))/2);
    *B = max(0.0, (3 - fabs(value-1) - fabs(value-2))/2);
}
The float value being passed as the first parameter is, as far as I can tell, the fluid density. I've determined this by studying these two snippets.
//set_colormap: Sets three different types of colormaps
void set_colormap(float vy)
{
    float R, G, B;
    if (scalar_col == COLOR_BLACKWHITE)
        R = G = B = vy;
    else if (scalar_col == COLOR_RAINBOW)
        rainbow(vy, &R, &G, &B);
    else if (scalar_col == COLOR_BANDS)
    {
        const int NLEVELS = 7;
        vy *= NLEVELS; vy = (int)(vy); vy /= NLEVELS;
        rainbow(vy, &R, &G, &B);
    }
    glColor3f(R, G, B);
}
and
set_colormap(rho[idx0]);
glVertex2f(px0, py0);
set_colormap(rho[idx1]);
glVertex2f(px1, py1);
set_colormap(rho[idx2]);
glVertex2f(px2, py2);
set_colormap(rho[idx0]);
glVertex2f(px0, py0);
set_colormap(rho[idx2]);
glVertex2f(px2, py2);
set_colormap(rho[idx3]);
glVertex2f(px3, py3);
With all of this said, could somebody please explain to me how the first method works?
Here's the output when the method is invoked and matter is injected into the window with the cursor, versus the default gray-scale view (screenshots omitted).
I suspect that this is a variation of HSV to RGB.
The idea is that you can map your fluid density (on a linear scale) to the Hue parameter of a color in HSV format. Saturation and Value can just be held constant at 1. Normally Hue starts and ends at red, so you also want to shift your Hue values into the [red, blue] range. This will give you a "heatmap" of colors in HSV format depending on the fluid density, which you then have to map back to RGB for display.
Because some of your values can be kept constant and because you don't care about any of the intermediate results, the algorithm that transforms fluid density to RGB can be simplified to the snippet above.
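To see the connection, here is a standard hue-to-RGB conversion specialised to S = V = 1 (a sketch for comparison; hue_to_rgb and its [0, 6) hue scale are illustrative, not part of the skeleton code):
#include <math.h>

/* Hue on a [0, 6) scale (i.e. degrees / 60), saturation and value fixed at 1. */
void hue_to_rgb(float h, float* R, float* G, float* B)
{
    float x = 1.0f - fabsf(fmodf(h, 2.0f) - 1.0f); /* rising/falling ramp */
    switch ((int)h)
    {
        case 0:  *R = 1; *G = x; *B = 0; break; /* red -> yellow */
        case 1:  *R = x; *G = 1; *B = 0; break; /* yellow -> green */
        case 2:  *R = 0; *G = 1; *B = x; break; /* green -> cyan */
        case 3:  *R = 0; *G = x; *B = 1; break; /* cyan -> blue */
        case 4:  *R = x; *G = 0; *B = 1; break; /* blue -> magenta */
        default: *R = 1; *G = 0; *B = x; break; /* magenta -> red */
    }
}
The rainbow function above bakes similar piecewise-linear ramps into three closed-form expressions, running from blue at low input values up to red at high ones.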
I'm not sure which part of the function you don't understand, so let me explain this line by line:
void rainbow(float value, float* R, float* G, float* B)
{
}
This part is probably clear to you - the function takes in a single density/color value and outputs a rainbow color in rgb space.
const float dx=0.8;
Next, the constant dx is initialised. I'm not sure what the name "dx" stands for, but it looks like it's later used to determine which part of the color spectrum is used.
if (value<0) value=0; if (value>1) value=1;
This clamps the input to a value between 0 and 1.
value = (6-2*dx)*value+dx;
This maps the input to a value between dx and 6-dx; with dx = 0.8, the range [0, 1] becomes [0.8, 5.2].
*R = max(0.0,(3-fabs(value-4)-fabs(value-5))/2);
This is probably the most complicated part. If value is smaller than 4, this simplifies to max(0.0,(2*value-6)/2) or max(0.0,value-3). This means that if value is less than 3, the red output will be 0, and if it is between 3 and 4, it will be value-3.
If value is between 4 and 5, this line instead simplifies to max(0.0,(3-(value-4)-(5-value))/2) which is equal to 1. So if value is between 4 and 5, the red output will be 1.
Lastly, if value is greater than 5, this line simplifies to max(0.0,(12-2*value)/2) or just 6-value.
So the output R is 1 when value is between 4 and 5, 0 when value is smaller than 3, and something in between otherwise. The calculations for the green and blue output are pretty much the same, just with tweaked values; green is brightest for values between 2 and 4, and blue is brightest for values between 1 and 2. This way the output forms a smooth rainbow color spectrum.
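If it helps, a small test harness makes that progression visible by printing the three channels across the input range (a sketch; the max macro here stands in for whatever the skeleton code defines):
#include <stdio.h>
#include <math.h>

#define max(a, b) ((a) > (b) ? (a) : (b))

/* rainbow() as given in the question */
void rainbow(float value, float* R, float* G, float* B)
{
    const float dx = 0.8;
    if (value < 0) value = 0;
    if (value > 1) value = 1;
    value = (6 - 2*dx)*value + dx;
    *R = max(0.0, (3 - fabs(value-4) - fabs(value-5))/2);
    *G = max(0.0, (4 - fabs(value-2) - fabs(value-4))/2);
    *B = max(0.0, (3 - fabs(value-1) - fabs(value-2))/2);
}

int main(void)
{
    /* Blue dominates first, then green, then red, as derived above. */
    for (int i = 0; i <= 10; i++)
    {
        float R, G, B;
        rainbow(i / 10.0f, &R, &G, &B);
        printf("value=%.1f  R=%.2f  G=%.2f  B=%.2f\n", i / 10.0f, R, G, B);
    }
    return 0;
}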

Non-scalar enumeration in Matlab

Is it possible to have enumeration members that are non-scalar?
For example, how can I enumerate colors such that each color is a 1x3 double (as needed for plots), without using methods?
With the following class definition
classdef color
    properties
        R
        G
        B
    end
    methods
        function c = color(r, g, b)
            c.R = r;
            c.G = g;
            c.B = b;
        end
        function g = get(c)
            g = [c.R, c.G, c.B];
        end
    end
    enumeration
        red (1, 0, 0)
        green (0, 1, 0)
    end
end
I can write color.green.get() to get [0 1 0], but I would like the same result with color.green to make the code cleaner.
A different solution may be setting color as a global struct, but it's not practical because global variables can cause confusion and I have to write global color; in each script/function.
I'm not sure exactly what you're asking here, but I think the main answer is that you're currently doing basically the right thing (although I'd suggest a few small changes).
You can certainly have non-scalar arrays of enumeration values - using your class, for example, you could create mycolors = [color.red, color.green]. You can also have an enumeration with non-scalar properties, such as the following:
classdef color2
    properties
        RGB
    end
    methods
        function c = color2(r, g, b)
            c.RGB = [r, g, b];
        end
    end
    enumeration
        red (1, 0, 0)
        green (0, 1, 0)
    end
end
and then you could just say color2.red.RGB and you'd get [1,0,0].
But I'm guessing that neither of those are really what you want. The thing that I imagine you're aiming for, and unfortunately what you explicitly can't do, is something like:
classdef color3 < double
    enumeration
        red ([1,0,0])
        green ([0,1,0])
    end
end
where you would then just type color3.red and you'd get [1,0,0]. You can't do that, because when an enumeration inherits from a built-in, it has to be a scalar.
Personally, I would do basically what you're doing, but instead of calling your method get I would call it toRGB, so you'd say color.red.toRGB, which feels quite natural (especially if you also give it some other methods like toHSV or toHex as well). I'd also modify it slightly, so that it could accept arrays of colors:
function rgb = toRGB(c)
    rgb = [[c.R]', [c.G]', [c.B]'];
end
That way you can pass in an array of n colors, and it will output an n-by-3 array of RGB values. For example, you could say mycolors = [color.red, color.green]; mycolors.toRGB and you'd get [1,0,0;0,1,0].
Hope that helps!

Require an Array to be a minimum size in a Swift protocol

I have an object that will output pixels line by line (just like old televisions did). This object simply writes bytes into a two-dimensional array: a number of horizontal lines, each holding a number of pixels. These numbers are fixed: there are x horizontal lines, and each line has y pixels. A pixel is a struct of red, green and blue.
I would like clients of this class to plug in their own object to which these values can be written, as I would like this code to work well on Apple platforms (where CALayer is present), but also on other platforms (e.g. Linux, where the rendering needs to be done without CALayer). So I was thinking of making protocols like this:
struct Pixel
{
    var red: UInt8 = 0
    var green: UInt8 = 0
    var blue: UInt8 = 0
}

protocol PixelLine
{
    var pixels: [Pixel] { get }
}

protocol OutputReceivable
{
    var pixelLines: [PixelLine] { get }
}
These protocols would be used at some point like this:
let pixelLineIndex = ... // max 719
let pixelIndex = ... // max 1279
// outputReceivable is an object that conforms to the OutputReceivable protocol
outputReceivable.pixelLines[pixelLineIndex][pixelIndex].red = 12
outputReceivable.pixelLines[pixelLineIndex][pixelIndex].green = 128
outputReceivable.pixelLines[pixelLineIndex][pixelIndex].blue = 66
Two questions arise:
how to require the protocol PixelLine to have a minimum of 1280 Pixel units in the array, and the protocol OutputReceivable a minimum of 720 PixelLine elements in the array?
as I learned from a video, using generics can help the compiler generate optimal code. Is there a way for me to use generics to generate more performant code than using plain protocols as a type?
There are no dependent types in Swift. You cannot directly require Arrays to be of a minimum size. What you can do is create new types that can only be constructed with particular data. So in this case, the better model would be to make PixelLine a struct rather than a protocol. Then you can have an init? that ensures that it is legal before using it.
A simple struct wrapper around an Array is zero-cost in memory and extremely low cost in dispatch. If you're dealing with a high performance system, a struct wrapping an Array is an excellent starting point.
struct PixelLine {
    let pixels: [Pixel]

    init?(pixels: [Pixel]) {
        guard pixels.count >= 1280 else { return nil }
        self.pixels = pixels
    }
}
You can either expose pixels directly like this does, or you can make PixelLine a Collection (or even just a Sequence) that forwards its required methods to pixels.

RGB palette designated for ordered values

I have a function f(x,y), mostly monotonic, which produces values in the range {0.0 .. 100.0}. I would like to draw them using different colors as a 2D picture, where (x,y) are coordinates and where distinctive colors stand for distinctive values of the function. The problem is the following: I don't know how to map the values of this function to RGB color space while preserving the order (visibly). I have found that something like:
R = f(x,y) * 10.0f;
G = f(x,y) * 20.0f;
B = f(x,y) * 30.0f;
color = B<<16|G<<8|R; //#low-endian
works fine, but the resulting picture is too dark. Increasing these constants doesn't make things better, because at some point a color component becomes greater than 0xFF and overflows (each color component should be in the range {0 .. 0xFF}).
Do you have any idea how to map values from {0.0 .. 100.0} to
RGB=[{0 .. 0xFF}<<16|{0 .. 0xFF}<<8|{0 .. 0xFF}] so that the resulting RGB values are visibly Ok?
PS: maybe you know where to find more info about the related theory online? I only remember Computer Graphics by Foley/Van Dam, but I don't have this book.
UPDATE: I am looking for how to generate a chroma palette like the one pictured (image omitted).
You could just try clamping the values to a maximum of 255 (0xff).
R = min((int)(f(x,y) * 10.0f), 0xff);
G = min((int)(f(x,y) * 20.0f), 0xff);
B = min((int)(f(x,y) * 30.0f), 0xff);
Edit: There are a lot of different ways to convert to colors automatically, but you might find that none of them generates the exact progression you're looking for. Since you already have a picture of an acceptable palette, one method would be to create a lookup table of 256 colors.
#define RGB(R,G,B) ((B)<<16|(G)<<8|(R))
int palette[256] = { RGB(0,0,0), RGB(0,0,128), ... };
int greyscale = (int)(f(x,y) * 2.559999);
assert(greyscale >= 0 && greyscale <= 255);
int rgb = palette[greyscale];
If the lookup table is too much trouble, you could also break the greyscale range into different subranges and do a linear interpolation between the endpoints of each range.
int interpolate(int from, int to, double ratio)
{
    return from + (int)((to - from) * ratio);
}

if (greyscale <= 48)
{
    R = 0;
    G = 0;
    B = interpolate(0, 255, greyscale/48.0);
}
else if (greyscale <= 96)
{
    R = 0;
    G = interpolate(0, 255, (greyscale-48)/48.0);
    B = interpolate(255, 0, (greyscale-48)/48.0);
}
else if ...
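To make that concrete, here is one possible completion of the chain (a sketch only; the remaining breakpoints and the black -> blue -> green -> yellow -> red -> white progression are assumptions, since I don't know the exact palette you're after):
/* Map greyscale in [0, 255] to R, G, B with piecewise-linear ramps. */
void palette(int greyscale, int* R, int* G, int* B)
{
    if (greyscale <= 48)          /* black -> blue */
    {
        *R = 0;
        *G = 0;
        *B = interpolate(0, 255, greyscale/48.0);
    }
    else if (greyscale <= 96)     /* blue -> green */
    {
        *R = 0;
        *G = interpolate(0, 255, (greyscale-48)/48.0);
        *B = interpolate(255, 0, (greyscale-48)/48.0);
    }
    else if (greyscale <= 144)    /* green -> yellow */
    {
        *R = interpolate(0, 255, (greyscale-96)/48.0);
        *G = 255;
        *B = 0;
    }
    else if (greyscale <= 192)    /* yellow -> red */
    {
        *R = 255;
        *G = interpolate(255, 0, (greyscale-144)/48.0);
        *B = 0;
    }
    else                          /* red -> white */
    {
        *R = 255;
        *G = interpolate(0, 255, (greyscale-192)/63.0);
        *B = interpolate(0, 255, (greyscale-192)/63.0);
    }
}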
Actually you could use the YUV color model since it is kind of based on two coordinates, i.e. U and V.
This seems more appropriate for the task.
And YUV -> RGB conversion is pretty straightforward.
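For reference, a sketch of that conversion using common BT.601-style coefficients (an assumption - the exact constants depend on which YUV variant you pick):
/* Y in [0, 1], U and V in [-0.5, 0.5]; clamp results to [0, 1] before scaling to 0..255. */
void yuv_to_rgb(float y, float u, float v, float* r, float* g, float* b)
{
    *r = y + 1.402f * v;
    *g = y - 0.344f * u - 0.714f * v;
    *b = y + 1.772f * u;
}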
You can convert RGB to HSL and increase the brightness/contrast. You can find formulas for the conversions and other useful info on this page: http://lodev.org/cgtutor/color.html
Use a different colour space, one that makes it easy to assign coordinates to different colours.
YUV or YCrCb might suit, as the UV or CrCb dimensions could be treated as being "at right angles".
Or HSL/HSV if one of your dimensions wraps around like the hue does.
I believe you want a heatmap based on the ordered values. There are different types of heatmaps; solutions here: http://www.andrewnoske.com/wiki/Code_-_heatmaps_and_color_gradients
