Larger data decimals than BigDecimal

I recently made a basic Pi calculator that works with java.math.BigDecimal. It outputs to a text file and calculates extremely fast, but even BigDecimal has its limits. I was wondering what the basic code would be for numbers even more precise (not that any code would actually need that). Here is the Pi code:
package mainPack;

import java.io.IOException;
import java.math.BigDecimal;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Scanner;

public class Main {

    public static BigDecimal piDigCalc(int i) {
        double o = (double) i;
        BigDecimal ret = new BigDecimal((1 / (Math.pow(16.0D, o)) * ((4 / ((8 * o) + 1)) - (2 / ((8 * o) + 4)) - (1 / ((8 * o) + 5)) - (1 / ((8 * o) + 6))))); // Just the code for a hex digit of Pi.
        return ret;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("Enter accuracy:");
        Scanner s = new Scanner(System.in);
        int acc = s.nextInt();
        s.close();
        BigDecimal pi = new BigDecimal(0);
        for (int i = 0; i < acc; i++) {
            pi = pi.add(piDigCalc(i));
        }
        Path file = Paths.get("C:\\tmp\\PI.txt");
        Files.write(file, pi.toString().getBytes());
        System.out.println("File has been saved at " + file.toString());
    }
}

I rewrote your piDigCalc function to use entirely arbitrary-precision values. You can see it running on ideone (1300 was about the largest precision I could get to finish under ideone's 15-second time limit).
It is in no way optimized; it's just a verbatim translation of the math in your code to BigDecimal.
You might notice that there is still a precision limit on the values, but that is partly because setting huge limits increases the runtime considerably. The limit I set scales with the requested accuracy.
In case ideone ever explodes, here's the code for piDigCalc:
public static BigDecimal piDigCalc(int _i, int x) {
    BigDecimal
        i = BigDecimal.valueOf(_i),
        a = SIXTEEN.pow(_i),
        b = EIGHT.multiply(i),
        c = ONE.divide(a, x, BigDecimal.ROUND_HALF_EVEN),
        d = b.add(ONE),   // b + 1
        e = b.add(FOUR),  // b + 4
        f = e.add(ONE),   // b + 5
        g = f.add(ONE),   // b + 6
        h = FOUR.divide(d, x, BigDecimal.ROUND_HALF_EVEN),
        j = TWO.divide(e, x, BigDecimal.ROUND_HALF_EVEN),
        k = ONE.divide(f, x, BigDecimal.ROUND_HALF_EVEN),
        l = ONE.divide(g, x, BigDecimal.ROUND_HALF_EVEN),
        m = h.subtract(j).subtract(k).subtract(l),
        n = c.multiply(m);
    return n;
}
The constants ONE, TWO, FOUR, EIGHT, SIXTEEN are defined static to the class and have exact integer values (ONE also exists as a constant in BigDecimal).
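For completeness, here is a minimal, self-contained sketch of how those constants and the summation loop might fit together. The class name, the term count, and the scale passed to piDigCalc are my own choices, not from the original answer:

```java
import java.math.BigDecimal;

public class PiBBP {
    // Constants assumed by piDigCalc; ONE is also available as BigDecimal.ONE.
    static final BigDecimal ONE = BigDecimal.ONE;
    static final BigDecimal TWO = BigDecimal.valueOf(2);
    static final BigDecimal FOUR = BigDecimal.valueOf(4);
    static final BigDecimal EIGHT = BigDecimal.valueOf(8);
    static final BigDecimal SIXTEEN = BigDecimal.valueOf(16);

    // One term of the BBP series, computed entirely in BigDecimal;
    // x is the scale (decimal digits) kept by each division.
    public static BigDecimal piDigCalc(int _i, int x) {
        BigDecimal i = BigDecimal.valueOf(_i),
                a = SIXTEEN.pow(_i),
                b = EIGHT.multiply(i),
                c = ONE.divide(a, x, BigDecimal.ROUND_HALF_EVEN),
                d = b.add(ONE),
                e = b.add(FOUR),
                f = e.add(ONE),
                g = f.add(ONE),
                h = FOUR.divide(d, x, BigDecimal.ROUND_HALF_EVEN),
                j = TWO.divide(e, x, BigDecimal.ROUND_HALF_EVEN),
                k = ONE.divide(f, x, BigDecimal.ROUND_HALF_EVEN),
                l = ONE.divide(g, x, BigDecimal.ROUND_HALF_EVEN);
        return c.multiply(h.subtract(j).subtract(k).subtract(l));
    }

    public static void main(String[] args) {
        int acc = 10;     // number of series terms to sum
        int scale = 30;   // digits kept per division
        BigDecimal pi = BigDecimal.ZERO;
        for (int i = 0; i < acc; i++) {
            pi = pi.add(piDigCalc(i, scale));
        }
        System.out.println(pi); // 3.14159265358...
    }
}
```

Even ten terms already match Pi to more than a dozen decimal places, since each term shrinks by a factor of 16.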

Related

Linearly interpolate from one speed to the other based on percentage

I am trying to slowly decelerate based on a percentage.
Basically: if the percentage is 0, the speed should be speed_max; when the percentage hits 85, the speed should be speed_min, and it should stay at speed_min until the percentage hits 100%. At percentages between 0% and 85%, the speed should be interpolated from the percentage.
I started writing the code already, though I am not sure how to continue:
// Target
int degrees = 90;
// Making sure we're at 0
resetGyro(0);
int speed_max = 450;
int speed_min = 150;
float currentDeg = 0;
float percentage = 0;
while (percentage < 100)
{
    //??
    getGyroDeg(&currentDeg);
    // fraction of the target reached, as a percentage
    percentage = (currentDeg / degrees) * 100;
}
killMotors(1);
Someone in the comments asked why I am doing this.
Unfortunately, I am working with very limited hardware and a pretty bad gyroscope, all while trying to guarantee +- 1 degree precision.
To do this, I am starting at speed_max, slowly decreasing to speed_min (this is to have better control over the motors) when nearing the target value (90).
Why does it stop decelerating at 85%? This is to really be precise and hit the target value successfully.
Assuming speed is linearly calculated based on percentages from 0 to 85 (and stays at speed_min when percentage is greater than 85), this is your formula for calculating speed:
if (percentage >= 85)
{
    speed = speed_min;
}
else
{
    speed = speed_max - (((speed_max - speed_min) * percentage) / 85);
}
Linear interpolation is fairly straightforward.
At percentage 0, the speed should be speed_max.
At percentage 85, the speed should be speed_min.
At percentage values greater than 85, the speed should still be speed_min.
Between 0 and 85, the speed should be linearly interpolated between speed_max and speed_min, so percentage acts as an 'amount of drop from maximum speed'.
Assuming percentage is of type float:
float speed_from_percentage(float percent)
{
    if (percent <= 0.0)
        return speed_max;
    if (percent >= 85.0)
        return speed_min;
    return speed_min + (speed_max - speed_min) * (85.0 - percent) / 85.0;
}
You can also replace the final return with the equivalent:
return speed_max - (speed_max - speed_min) * percent / 85.0;
If you're truly pedantic, all the constants should be suffixed with F to indicate float and hence use float arithmetic instead of double arithmetic. And hence you should probably also use float for speed_min and speed_max. If everything is meant to be integer arithmetic, you can change float to int and drop the .0 from the expressions.
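The same clamp-then-interpolate shape, sketched in Java for illustration (the class name, method name, and parameter order are my own):

```java
public class SpeedCurve {
    // Linear interpolation of speed from a completion percentage:
    // clamp outside [0, 85], interpolate linearly inside it.
    static float speedFromPercentage(float percent, float speedMax, float speedMin) {
        if (percent <= 0f) return speedMax;   // not started: full speed
        if (percent >= 85f) return speedMin;  // near target: minimum speed
        return speedMax - (speedMax - speedMin) * percent / 85f;
    }

    public static void main(String[] args) {
        System.out.println(speedFromPercentage(0f, 450f, 150f));    // 450.0
        System.out.println(speedFromPercentage(42.5f, 450f, 150f)); // 300.0
        System.out.println(speedFromPercentage(100f, 450f, 150f));  // 150.0
    }
}
```

Halfway through the 0–85 ramp the speed sits exactly halfway between the two endpoints, which is a handy spot check for any lerp.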
Assuming getGyroDeg is input from the controller, what you are describing is proportional control. A constant response curve (i.e., 0 to 85 maps to an output of 450 down to 150, and 150 after that) is an ad-hoc approach based on experience. However, a properly initialised PID controller generally attains a faster time to set-point and greater stability.
#include <stdio.h>
#include <time.h>
#include <assert.h>
#include <stdlib.h>

static float sim_current = 0.0f;
static float sim_dt = 0.01f;
static float sim_speed = 0.0f /* 150.0f */;

static void getGyroDeg(float *const current) {
    assert(current);
    sim_current += sim_speed * sim_dt;
    /* Simulate measurement error. */
    *current = sim_current + 3.0 * ((2.0 * rand() / RAND_MAX) - 1.0);
}

static void setGyroSpeed(const float speed) {
    assert(speed >= /*150.0f*/-450.0f && speed <= 450.0f);
    sim_speed = speed;
}

int main(void) {
    /* https://en.wikipedia.org/wiki/PID_controller
       u(t) = K_p e(t) + K_i \int_0^t e(\theta)d\theta + K_d de(t)/dt */
    const float setpoint = 90.0f;
    const float max = 450.0f;
    const float min = -450.0f /* 150.0f */;
    /* Random value; actually get this number. */
    const float dt = 1.0f;
    /* Tune these. */
    const float kp = 30.0f, ki = 4.0f, kd = 2.0f;
    float current, last = 0.0f, integral = 0.0f;
    float t = 0.0f;
    float e, p, i, d, pid;
    size_t count;
    for (count = 0; count < 40; count++) {
        getGyroDeg(&current);
        e = setpoint - current;
        p = kp * e;
        i = ki * integral * dt;
        d = kd * (e - last) / dt;
        last = e;
        pid = p + i + d;
        if (pid > max) {
            pid = max;
        } else if (pid < min) {
            pid = min;
        } else {
            integral += e;
        }
        setGyroSpeed(pid);
        printf("%f\t%f\t%f\n", t, sim_current, pid);
        t += dt;
    }
    return EXIT_SUCCESS;
}
Here, instead of the speed linearly decreasing, the speed is calculated in a control loop. However, if the minimum is 150, it's not going to achieve greater stability: if you overshoot 90, you have no way of getting back.
If the controls are [-450, 450], the output can pass through zero and the behaviour is much nicer; I think this might be what you are looking for. It actively corrects for errors.

Implementing equations with very small numbers in C - Planck's Law generating a blackbody curve

I have a problem that, after much head scratching, I think is to do with very small numbers in a long double.
I am trying to implement Planck's law equation to generate a normalised blackbody curve at 1nm intervals between a given wavelength range and for a given temperature. Ultimately this will be a function accepting inputs, for now it is main() with the variables fixed and outputting by printf().
I have seen examples in MATLAB and Python that implement the same equation as me in a similar loop with no trouble at all.
This is the equation (Planck's law for spectral radiance):
B(λ, T) = (2hc² / λ⁵) · 1 / (e^(hc / (λkT)) − 1)
My code generates an incorrect blackbody curve (plot omitted):
I have tested key parts of the code independently. After trying to test the equation by breaking it into blocks in Excel, I noticed that it does produce very small numbers, and I wonder if my handling of very large and very small numbers could be causing the issue? Does anyone have any insight into implementing equations like this in C? This is a new area to me, and I have found the maths much harder to implement and debug than normal code.
#include <stdio.h>
#include <math.h>
#include <stdlib.h>

//global variables
const double H = 6.626070040e-34; //Planck's constant (Joule-seconds)
const double C = 299800000; //Speed of light in vacuum (meters per second)
const double K = 1.3806488e-23; //Boltzmann's constant (Joules per Kelvin)
const double nm_to_m = 1e-6; //conversion between nm and m
const int interval = 1; //wavelength interval to calculate at (nm)

//typedef structure to hold results
typedef struct {
    int *wavelength;
    long double *radiance;
    long double *normalised;
} results;

int main() {
    int min = 100, max = 3000; //wavelength bounds to calculate between, later to be swapped to function inputs
    double temprature = 200; //temperature in kelvin, later to be swapped to function input
    double new_valu, old_valu = 0;
    static results SPD_data, *SPD; //set up a static results structure and a pointer to it
    SPD = &SPD_data;
    SPD->wavelength = malloc(sizeof(int) * (max - min)); //allocate memory based on wavelength bounds
    SPD->radiance = malloc(sizeof(long double) * (max - min));
    SPD->normalised = malloc(sizeof(long double) * (max - min));
    for (int i = 0; i <= (max - min); i++) {
        //Fill wavelength vector
        SPD->wavelength[i] = min + (interval * i);
        //Computes radiance for every wavelength of blackbody of given temperature
        SPD->radiance[i] = ((2 * H * pow(C, 2)) / (pow((SPD->wavelength[i] / nm_to_m), 5))) * (1 / (exp((H * C) / ((SPD->wavelength[i] / nm_to_m) * K * temprature)) - 1));
        //Copy SPD->radiance to SPD->normalised
        SPD->normalised[i] = SPD->radiance[i];
        //Find largest value
        if (i <= 0) {
            old_valu = SPD->normalised[0];
        } else if (i > 0) {
            new_valu = SPD->normalised[i];
            if (new_valu > old_valu) {
                old_valu = new_valu;
            }
        }
    }
    //for debug purposes
    printf("wavelength(nm) radiance(Watts per steradian per meter squared) normalised radiance\n");
    for (int i = 0; i <= (max - min); i++) {
        //Normalise SPD
        SPD->normalised[i] = SPD->normalised[i] / old_valu;
        //for debug purposes
        printf("%d %Le %Lf\n", SPD->wavelength[i], SPD->radiance[i], SPD->normalised[i]);
    }
    return 0; //later to be swapped to 'return SPD';
}
/*********************UPDATE Friday 24th Mar 2017 23:42*************************/
Thank you for the suggestions so far; lots of useful pointers, especially on the way numbers are stored in C (IEEE 754), but I don't think that is the issue here, as it only applies to significant digits. I implemented most of the suggestions but still made no progress on the problem. I suspect Alexander in the comments is probably right: changing the units and the order of operations is likely what I need to do to make the equation work like the MATLAB or Python examples, but my knowledge of maths is not good enough to do this. I broke the equation down into chunks to take a closer look at what it was doing.
//global variables
const double H = 6.6260700e-34; //Planck's constant (Joule-seconds) 6.626070040e-34
const double C = 299792458; //Speed of light in vacuum (meters per second)
const double K = 1.3806488e-23; //Boltzmann's constant (Joules per Kelvin) 1.3806488e-23
const double nm_to_m = 1e-9; //conversion between nm and m
const int interval = 1; //wavelength interval to calculate at (nm)
const int min = 100, max = 3000; //max and min wavelengths to calculate between (nm)
const double temprature = 200; //temperature (K)

//typedef structure to hold results
typedef struct {
    int *wavelength;
    long double *radiance;
    long double *normalised;
} results;

//main program
int main()
{
    //set up a static results structure and a pointer to it
    static results SPD_data, *SPD;
    SPD = &SPD_data;
    //allocate memory based on wavelength bounds
    SPD->wavelength = malloc(sizeof(int) * (max - min));
    SPD->radiance = malloc(sizeof(long double) * (max - min));
    SPD->normalised = malloc(sizeof(long double) * (max - min));
    //break the equation into visible parts for debugging
    long double aa, bb, cc, dd, ee, ff, gg, hh, ii, jj, kk, ll, mm, nn, oo;
    for (int i = 0; i < (max - min); i++) {
        //Computes radiance at every wavelength interval for blackbody of given temperature
        SPD->wavelength[i] = min + (interval * i);
        aa = 2 * H;
        bb = pow(C, 2);
        cc = aa * bb;
        dd = pow((SPD->wavelength[i] / nm_to_m), 5);
        ee = cc / dd;
        ff = 1;
        gg = H * C;
        hh = SPD->wavelength[i] / nm_to_m;
        ii = K * temprature;
        jj = hh * ii;
        kk = gg / jj;
        ll = exp(kk);
        mm = ll - 1;
        nn = ff / mm;
        oo = ee * nn;
        SPD->radiance[i] = oo;
    }
    //for debug purposes
    printf("wavelength(nm) | radiance(Watts per steradian per meter squared)\n");
    for (int i = 0; i < (max - min); i++) {
        printf("%d %Le\n", SPD->wavelength[i], SPD->radiance[i]);
    }
    return 0;
}
Equation variable values during runtime in Xcode (screenshot omitted):
I notice a couple of things that are wrong and/or suspicious about the current state of your program:
You have defined nm_to_m as 10⁻⁹, yet you divide by it. If your wavelength is measured in nanometers, you should multiply it by 10⁻⁹ to get it in meters. To wit: if hh is supposed to be your wavelength in meters, it is currently on the order of several light-hours.
The same is obviously true for dd as well.
mm, being the exponential expression minus 1, is zero, which gives you infinity in the results derived from it. This is apparently because you don't have enough digits in a double to represent the significant part of the exponential. Instead of using exp(...) - 1 here, try the expm1() function, which implements a well-defined algorithm for calculating exponentials minus 1 without cancellation errors.
Since interval is 1, it doesn't currently matter, but you can probably see that your results wouldn't match the meaning of the code if you set interval to something else.
Unless you plan to change something about this in the future, there shouldn't be a need for this program to "save" the values of all calculations. You could just print them out as you run them.
On the other hand, you don't seem to be in any danger of underflow or overflow. The largest and smallest numbers you use don't stray far from 10^±60, which is well within what ordinary doubles can deal with, let alone long doubles. That being said, it might not hurt to use more normalized units, but at the magnitudes you currently display, I wouldn't worry about it.
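The cancellation that expm1() avoids is easy to demonstrate. Here is a quick illustration (in Java for brevity; Math.expm1 behaves the same way as C's expm1):

```java
public class Expm1Demo {
    public static void main(String[] args) {
        double x = 1e-12;
        // Naive form: exp(x) rounds to a double extremely close to 1, so
        // subtracting 1 destroys most significant digits of the true answer.
        double naive = Math.exp(x) - 1.0;
        // expm1 computes e^x - 1 directly, with full relative accuracy.
        double accurate = Math.expm1(x);
        System.out.println(naive);    // noticeably off from 1.0e-12
        System.out.println(accurate); // essentially 1.0e-12
    }
}
```

The naive result carries an absolute error around 10⁻¹⁶ (the spacing of doubles near 1.0), which is enormous relative to a true value of 10⁻¹².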
Thanks for all the pointers in the comments. For anyone else running into a similar problem with implementing equations in C, I had a few silly errors in the code:
writing a 6 not a 9
dividing when I should be multiplying
an off-by-one error between the size of my array and the number of iterations of the for() loop
200 when I meant 2000 in the temperature variable
As a result of the last one in particular, I was not getting the results I expected (my wavelength range was not right for plotting the temperature I was calculating), and this led me to assume something was wrong in the implementation of the equation; specifically, I suspected big/small numbers in C because I did not understand them. This was not the case.
In summary, I should have made sure I knew exactly what my equation should be outputting for given test conditions before implementing it in code. I will work on getting more comfortable with maths, particularly algebra and dimensional analysis.
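One such test condition is Wien's displacement law: the peak of the spectral radiance curve should sit near λ = 2.898×10⁻³ / T meters. A quick sanity check along those lines (sketched in Java for brevity; the constants match the question's code, everything else is my own):

```java
public class PlanckCheck {
    static final double H = 6.626070040e-34; // Planck's constant (J·s)
    static final double C = 299792458;       // speed of light (m/s)
    static final double K = 1.3806488e-23;   // Boltzmann's constant (J/K)

    // Planck spectral radiance; wavelength in meters, temperature in kelvin.
    static double radiance(double lambda, double T) {
        return (2 * H * C * C) / Math.pow(lambda, 5)
             / Math.expm1((H * C) / (lambda * K * T));
    }

    public static void main(String[] args) {
        double T = 5000;
        int peakNm = 0;
        double peak = 0;
        for (int nm = 100; nm < 3000; nm++) {
            double r = radiance(nm * 1e-9, T);
            if (r > peak) { peak = r; peakNm = nm; }
        }
        // Wien's displacement law predicts a peak near 2.898e-3 / 5000 ≈ 580 nm.
        System.out.println(peakNm);
    }
}
```

If the computed peak lands far from the Wien prediction, the units or the order of operations are wrong somewhere.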
Below is the working code, implemented as a function. Feel free to use it for anything, but obviously there is no warranty of any kind, etc.
blackbody.c
//
// Computes radiance for every wavelength of blackbody of given temperature
//
// INPUTS: int min wavelength to begin calculation from (nm), int max wavelength to end calculation at (nm), int temperature (kelvin)
// OUTPUTS: pointer to structure containing:
// - spectral radiance (Watts per steradian per meter squared per wavelength at 1nm intervals)
// - normalised radiance
//

//include & define
#include "blackbody.h"

//global variables
const double H = 6.626070040e-34; //Planck's constant (Joule-seconds) 6.626070040e-34
const double C = 299792458; //Speed of light in vacuum (meters per second)
const double K = 1.3806488e-23; //Boltzmann's constant (Joules per Kelvin) 1.3806488e-23
const double nm_to_m = 1e-9; //conversion between nm and m
const int interval = 1; //wavelength interval to calculate at (nm); to change this, line 45 also needs to be changed

bbresults* blackbody(int min, int max, double temperature) {
    double new_valu, old_valu = 0; //variables for normalising the result
    bbresults *SPD;
    SPD = malloc(sizeof(bbresults));
    //allocate memory based on wavelength bounds
    SPD->wavelength = malloc(sizeof(int) * (max - min));
    SPD->radiance = malloc(sizeof(long double) * (max - min));
    SPD->normalised = malloc(sizeof(long double) * (max - min));
    for (int i = 0; i < (max - min); i++) {
        //Computes radiance for every wavelength of blackbody of given temperature
        SPD->wavelength[i] = min + (interval * i);
        SPD->radiance[i] = ((2 * H * pow(C, 2)) / (pow((SPD->wavelength[i] * nm_to_m), 5))) * (1 / (expm1((H * C) / ((SPD->wavelength[i] * nm_to_m) * K * temperature))));
        //Copy SPD->radiance to SPD->normalised
        SPD->normalised[i] = SPD->radiance[i];
        //Find largest value
        if (i <= 0) {
            old_valu = SPD->normalised[0];
        } else if (i > 0) {
            new_valu = SPD->normalised[i];
            if (new_valu > old_valu) {
                old_valu = new_valu;
            }
        }
    }
    for (int i = 0; i < (max - min); i++) {
        //Normalise SPD
        SPD->normalised[i] = SPD->normalised[i] / old_valu;
    }
    return SPD;
}
blackbody.h
#ifndef blackbody_h
#define blackbody_h

#include <stdio.h>
#include <math.h>
#include <stdlib.h>

//typedef structure to hold results
typedef struct {
    int *wavelength;
    long double *radiance;
    long double *normalised;
} bbresults;

//function declarations
bbresults* blackbody(int, int, double);

#endif /* blackbody_h */
main.c
#include <stdio.h>
#include "blackbody.h"

int main() {
    bbresults *TEST;
    int min = 100, max = 3000, temp = 5000;
    TEST = blackbody(min, max, temp);
    printf("wavelength | normalised radiance | radiance |\n");
    printf(" (nm) | - | (W per meter squr per steradian) |\n");
    for (int i = 0; i < (max - min); i++) {
        printf("%4d %Lf %Le\n", TEST->wavelength[i], TEST->normalised[i], TEST->radiance[i]);
    }
    //free the members before the structure that holds them
    free(TEST->wavelength);
    free(TEST->radiance);
    free(TEST->normalised);
    free(TEST);
    return 0;
}
Plot of output (omitted):

Calculating and printing the Standard Deviation of a user-defined Array

My program takes an array of numbers from the user and calculates the mean and the standard deviation of those numbers. I'm having an issue with my standard deviation part, and I'm not sure if I'm even doing this correctly. Here is what I have:
public static void main(String args[])
{
    Scanner scan = new Scanner(System.in);
    System.out.println("How many numbers do you want to calculate?");
    int n = scan.nextInt();
    double a[] = new double[n]; // array of n values
    double sum = 0.0;
    double sd = 0.0;
    int ifLoop = 0;
    System.out.println("Fill in the values for all " + n + " numbers.");
    for (int i = 0; i < a.length; i++)
    {
        a[i] = scan.nextDouble();
        sum = sum + a[i];
        ifLoop++;
        if (ifLoop == a.length)
        {
            sd = sd + Math.pow(a[i] - (sum / a.length), 2); //THIS IS WHERE I NEED HELP
        }
    }
    System.out.println("The Mean of the " + n + " numbers is " + sum / a.length); // this line finds the average
    System.out.println("The Standard Deviation of the " + n + " numbers is " + sd);
}
Example input:
30.7
190.9
11
14
Output:
The Mean of the 4 numbers is 61.65
The Standard Deviation of the 4 numbers is 2270.5225
I know this is wrong because 2270.5225 is bogus, and I'm not sure how to correctly implement the standard deviation formula. Any help is very much appreciated.
Try the following link; it should answer your query:
How to calculate standard deviation using JAVA
Use the following method to calculate the standard deviation after reading all the numbers into the array. You have to calculate the mean first, then the standard deviation, which takes two loops.
double calculateSD(double[] values) {
    double mean = 0.0;
    double sum = 0.0;
    int n = values.length;
    for (double value : values) {
        sum += value;
    }
    mean = sum / n;
    sum = 0.0;
    for (double value : values) {
        sum += Math.pow(mean - value, 2);
    }
    return Math.sqrt(sum / n); // sum / n alone would be the variance
}
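Applying that two-pass approach to the example input from the question (the class and method names here are my own):

```java
public class SdDemo {
    // Two-pass population standard deviation: compute the mean first,
    // then take the square root of the mean squared deviation.
    static double standardDeviation(double[] values) {
        double sum = 0.0;
        for (double v : values) sum += v;
        double mean = sum / values.length; // 61.65 for the example input
        double sqSum = 0.0;
        for (double v : values) sqSum += (v - mean) * (v - mean);
        return Math.sqrt(sqSum / values.length);
    }

    public static void main(String[] args) {
        double[] a = { 30.7, 190.9, 11, 14 };
        System.out.println(standardDeviation(a)); // about 74.999
    }
}
```

Incidentally, the 2270.5225 from the question is exactly (14 − 61.65)², the squared deviation of the last value alone, which shows why only accumulating inside the final loop iteration goes wrong.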

Most efficient way to get first digit of a double/float

For example if I have a float/double variable:
float f = 1212512.028423;
double d = 938062.3453;
int f1 = 1;
int d1 = 9;
What's the fastest way to get the first digit of those numbers?
int first_digit(double n) {
    while (n > 10) n /= 10;
    return n;
}
Is this the most efficient?
I need an implementation that doesn't involve chars/strings and that works (or gives the best performance) in each specific language. The languages are Ruby, Python, C#, Java, C++, Go, JavaScript, and PHP.
Try this:
double d = 938062.3453;
int f1 = Int32.Parse(d.ToString().Substring(0, 1));
Both of the solutions mentioned will crash on a number like -0.0123. I have constructed a different method for a slightly different purpose:
public static double InteligentRound(double X, int kam = 0)
{
    int zn = (X < 0) ? (-1) : (1); //signum of the input
    X *= zn;                       //we work with positive values only
    double exp = Math.Log10(X);    //exponent of 10
    exp = Math.Floor(exp);
    double B = Math.Pow(10, exp);  //the pure power of 10
    double FD = X / B;             //lies between 1 and 10
    if (kam == 0) FD = Math.Round(FD);
    if (kam < 0) FD = Math.Floor(FD); //Now, FD is the first digit
    if (kam > 0) FD = Math.Ceiling(FD);
    return (zn * FD * B);
}
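For comparison, here is a loop-based variant (sketched in Java, since the question spans several languages) that handles the cases the simple loop misses: values of exactly 10 or a power of ten, negatives, and fractions below 1. The class and method names are my own:

```java
public class FirstDigit {
    static int firstDigit(double n) {
        n = Math.abs(n);                 // handle negatives such as -0.0123
        while (n >= 10) n /= 10;         // >= also catches exact powers of ten
        while (n > 0 && n < 1) n *= 10;  // scale fractions up into [1, 10)
        return (int) n;
    }

    public static void main(String[] args) {
        System.out.println(firstDigit(1212512.028423)); // 1
        System.out.println(firstDigit(938062.3453));    // 9
        System.out.println(firstDigit(-0.0123));        // 1
    }
}
```

Note that the question's `while (n > 10)` returns 10 for an input of exactly 10, which is why the comparison here is `>=`.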

VsampFactor and HsampFactor in FJCore library

I've been using the FJCore library in a Silverlight project to help with some realtime image processing, and I'm trying to figure out how to get a tad more compression and performance out of the library. Now, as I understand it, the JPEG standard allows you to specify a chroma subsampling ratio (see http://en.wikipedia.org/wiki/Chroma_subsampling and http://en.wikipedia.org/wiki/Jpeg); and it appears that this is supposed to be implemented in the FJCore library using the HsampFactor and VsampFactor arrays:
public static readonly byte[] HsampFactor = { 1, 1, 1 };
public static readonly byte[] VsampFactor = { 1, 1, 1 };
However, I'm having a hard time figuring out how to use them. It looks to me like the current values are supposed to represent 4:4:4 subsampling (i.e., no subsampling at all), and that if I wanted to get 4:1:1 subsampling, the right values would be something like this:
public static readonly byte[] HsampFactor = { 2, 1, 1 };
public static readonly byte[] VsampFactor = { 2, 1, 1 };
At least, that's the way that other similar libraries use these values (for instance, see the example code here for libjpeg).
However, neither the above values of {2, 1, 1} nor any other set of values that I've tried besides {1, 1, 1} produce a legible image. Nor, in looking at the code, does it seem like that's the way it's written. But for the life of me, I can't figure out what the FJCore code is actually trying to do. It seems like it's just using the sample factors to repeat operations that it's already done -- i.e., if I didn't know better, I'd say that it was a bug. But this is a fairly established library, based on some fairly well established Java code, so I'd be surprised if that were the case.
Does anybody have any suggestions for how to use these values to get 4:2:2 or 4:1:1 chroma subsampling?
For what it's worth, here's the relevant code from the JpegEncoder class:
for (comp = 0; comp < _input.Image.ComponentCount; comp++)
{
    Width = _input.BlockWidth[comp];
    Height = _input.BlockHeight[comp];
    inputArray = _input.Image.Raster[comp];
    for (i = 0; i < _input.VsampFactor[comp]; i++)
    {
        for (j = 0; j < _input.HsampFactor[comp]; j++)
        {
            xblockoffset = j * 8;
            yblockoffset = i * 8;
            for (a = 0; a < 8; a++)
            {
                // set Y value, checking bounds
                int y = ypos + yblockoffset + a;
                if (y >= _height) break;
                for (b = 0; b < 8; b++)
                {
                    int x = xpos + xblockoffset + b;
                    if (x >= _width) break;
                    dctArray1[a, b] = inputArray[x, y];
                }
            }
            dctArray2 = _dct.FastFDCT(dctArray1);
            dctArray3 = _dct.QuantizeBlock(dctArray2, FrameDefaults.QtableNumber[comp]);
            _huf.HuffmanBlockEncoder(buffer, dctArray3, lastDCvalue[comp], FrameDefaults.DCtableNumber[comp], FrameDefaults.ACtableNumber[comp]);
            lastDCvalue[comp] = dctArray3[0];
        }
    }
}
And notice that in the i and j loops, they're not doing any kind of pixel skipping: if HsampFactor[0] is set to 2, it just grabs two blocks instead of one.
I figured it out. I thought that by setting the sampling factors, you were telling the library to subsample the raster components itself. Turns out that when you set the sampling factors, you're actually telling the library the relative size of the raster components that you're providing. In other words, you need to do the chroma subsampling of the image yourself, before you ever submit it to the FJCore library for compression. Something like this is what it's looking for:
private byte[][,] GetSubsampledRaster()
{
    byte[][,] raster = new byte[3][,];
    raster[Y] = new byte[width / hSampleFactor[Y], height / vSampleFactor[Y]];
    raster[Cb] = new byte[width / hSampleFactor[Cb], height / vSampleFactor[Cb]];
    raster[Cr] = new byte[width / hSampleFactor[Cr], height / vSampleFactor[Cr]];
    int rgbaPos = 0;
    for (short y = 0; y < height; y++)
    {
        int Yy = y / vSampleFactor[Y];
        int Cby = y / vSampleFactor[Cb];
        int Cry = y / vSampleFactor[Cr];
        int Yx = 0, Cbx = 0, Crx = 0;
        for (short x = 0; x < width; x++)
        {
            // Convert to YCbCr colorspace.
            byte b = RgbaSample[rgbaPos++];
            byte g = RgbaSample[rgbaPos++];
            byte r = RgbaSample[rgbaPos++];
            YCbCr.fromRGB(ref r, ref g, ref b);
            // Only include the byte in question in the raster if it matches the appropriate sampling factor.
            if (IncludeInSample(Y, x, y))
            {
                raster[Y][Yx++, Yy] = r;
            }
            if (IncludeInSample(Cb, x, y))
            {
                raster[Cb][Cbx++, Cby] = g;
            }
            if (IncludeInSample(Cr, x, y))
            {
                raster[Cr][Crx++, Cry] = b;
            }
            // For YCbCr, we ignore the Alpha byte of the RGBA byte structure, so advance beyond it.
            rgbaPos++;
        }
    }
    return raster;
}

static private bool IncludeInSample(int slice, short x, short y)
{
    // Hopefully this gets inlined . . .
    return ((x % hSampleFactor[slice]) == 0) && ((y % vSampleFactor[slice]) == 0);
}
There might be additional ways to optimize this, but it's working for now.
