Bitmap scaling using nearest neighbor not correct - C

I am currently developing a tile-based game in C and I'm trying to implement zooming using the nearest neighbor algorithm.
This is how the algorithm looks right now:
u32 *DestPixel = (u32 *)DestBitmap.Pixels;
u32 *SourcePixel = (u32 *)SourceBitmap->Pixels;
f64 RatioX = (f64)SourceBitmap->Width / (f64)DestBitmap.Width;
f64 RatioY = (f64)SourceBitmap->Height / (f64)DestBitmap.Height;
for(s32 Y = 0; Y < DestBitmap.Height; ++Y)
{
    for(s32 X = 0; X < DestBitmap.Width; ++X)
    {
        s32 ScaledX = (s32)(X * RatioX);
        s32 ScaledY = (s32)(Y * RatioY);
        s32 DestOffset = Y*DestBitmap.Width + X;
        s32 SourceOffset = ScaledY*SourceBitmap->Width + ScaledX;
        *(DestPixel + DestOffset) = *(SourcePixel + SourceOffset);
    }
}
However, it is not producing the results I want. When trying to convert the source bitmap (64x64) to a bitmap whose size is not a power of 2 (in this case 30x30), the scaled bitmap looks weird on the right side. Left side is the original bitmap while the right side is the scaled bitmap:
What I've tried:
Rounding ScaledX and ScaledY (instead of truncating)
Flooring ScaledX and ScaledY (instead of truncating)

I actually solved it. The problem was the rendering of the bitmaps, not the algorithm. I was drawing 4 pixels at a time in my rendering code, which doesn't work with 30x30, only with powers of 2.
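For reference, the usual shape of the fix is to blit the bulk of each row four pixels at a time and finish the leftover pixels in a scalar tail loop. This is only an illustrative sketch of that structure, not the actual rendering code from the game:

s32 X = 0;
// Wide loop: process 4 destination pixels per iteration (e.g. with SIMD).
for(; X + 4 <= DestBitmap.Width; X += 4)
{
    // ... write DestPixel[Y*DestBitmap.Width + X] through [Y*DestBitmap.Width + X + 3] ...
}
// Tail loop: finish the remaining 0-3 pixels one at a time, so widths
// like 30 that are not multiples of 4 are fully covered.
for(; X < DestBitmap.Width; ++X)
{
    // ... write DestPixel[Y*DestBitmap.Width + X] ...
}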

Related

How do I create a two-way sliding door in Second Life?

I'm trying to create double doors that slide opposite ways, but I want to only use a single object. Basically I'm using this script:
But I'm wondering if it's possible to make it separate in the middle and retract, rather than retracting from one end? Supermarket doors would be my best example.
//When touched the prim is retracted towards one end and when touched again stretched back out.
//
//Prim moves/changes size along the local coordinate specified in the offset vector below.
//
//To change the overall size, edit the prim when stretched out and reset the script when done.
//
//The script works both in unlinked and linked prims.
//
vector offset = <0,1,0>; //Prim moves/changes size along this local coordinate
float hi_end_fixed = TRUE; //Which end of the prim should remain in place when size changes?
                           //The one with the higher (local) coordinate?
float min = 0.2; //The minimum size of the prim relative to its maximum size
integer ns = 10; //Number of distinct steps for move/size change

default {
    state_entry() {
        offset *= ((1.0 - min) / ns) * (offset * llGetScale());
        hi_end_fixed -= 0.5;
    }
    touch_start(integer detected) {
        integer i;
        do llSetPrimitiveParams([PRIM_SIZE, llGetScale() - offset,
                                 PRIM_POSITION, llGetLocalPos() + ((hi_end_fixed * offset) * llGetLocalRot())]);
        while ((++i) < ns);
        offset = - offset;
    }
}

Intersection problem with Möller-Trumbore algorithm on one dimension of the triangle

I am currently working on a raytracer project and I just found an issue with the triangle intersections.
Sometimes, and I don't understand when or why, some of the pixels of the triangle don't appear on the screen. Instead I can see the object right behind it. It only occurs on one dimension of the triangle, and it depends on the camera and triangle positions (see picture below).
Triangle with pixels missing
I am using the Möller-Trumbore algorithm to compute every intersection. Here's my implementation:
t_solve s;
t_vec   v1;
t_vec   v2;
t_vec   tvec;
t_vec   pvec;

v1 = vec_sub(triangle->point2, triangle->point1);
v2 = vec_sub(triangle->point3, triangle->point1);
pvec = vec_cross(dir, v2);
s.delta = vec_dot(v1, pvec);
if (fabs(s.delta) < 0.00001)
    return ;
s.c = 1.0 / s.delta;
tvec = vec_sub(ori, triangle->point1);
s.a = vec_dot(tvec, pvec) * s.c;
if (s.a < 0 || s.a > 1)
    return ;
tvec = vec_cross(tvec, v1);
s.b = vec_dot(dir, tvec) * s.c;
if (s.b < 0 || s.a + s.b > 1)
    return ;
s.t1 = vec_dot(v2, tvec) * s.c;
if (s.t1 < 0)
    return ;
if (s.t1 < rt->t)
{
    rt->t = s.t1;
    rt->last_obj = triangle;
    rt->flag = 0;
}
The only clue at the moment is that using a different method of calculating my ray (called dir in the code) results in fewer missing pixels.
Moreover, when I turn the camera and look behind, I see that the bug occurs on the opposite side of the triangle. All of this makes me think that the issue is mainly linked to the ray.
Take a look at Watertight Ray/Triangle Intersection. I would much appreciate it if you could provide a minimal example where a ray should hit the triangle but misses it. I had this a long time ago with the Cornell Box - inside the box there were some "black" pixels because on edges none of the triangles had been hit. It's a common problem stemming from floating-point imprecision.
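If adopting the watertight algorithm is too invasive, a common stopgap is to relax the barycentric rejection tests by a small tolerance, so a ray that grazes an edge shared by two triangles is accepted by at least one of them. A sketch against the code above; BARY_EPS is an illustrative value you would tune for your scene:

#define BARY_EPS 1e-7

/* replace the two rejection tests with: */
if (s.a < -BARY_EPS || s.a > 1.0 + BARY_EPS)
    return ;
/* ... */
if (s.b < -BARY_EPS || s.a + s.b > 1.0 + BARY_EPS)
    return ;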

Is there any way for image thresholding without GDI+? [duplicate]

Is it possible to directly read/write to a WriteableBitmap's pixel data? I'm currently using WriteableBitmapEx's SetPixel() but it's slow and I want to access the pixels directly without any overhead.
I haven't used HTML5's canvas in a while, but if I recall correctly you could get its image data as a single array of numbers, and that's kind of what I'm looking for.
Thanks in advance
To answer your question, you can more directly access a writable bitmap's data by using the Lock, write, Unlock pattern, as demonstrated below, but it is typically not necessary unless you are basing your drawing upon the contents of the image. More typically, you can just create a new buffer and make it a bitmap, rather than the other way around.
That being said, there are many extensibility points in WPF to perform innovative drawing without resorting to pixel manipulation. For most controls, the existing WPF primitives (Border, Line, Rectangle, Image, etc...) are more than sufficient - don't be concerned about using many of them, they are rather cheap to use. For complex controls, you can use the DrawingContext to draw D3D primitives. For image effects, you can implement GPU assisted shaders using the Effect class or use the built in effects (Blur and Shadow).
But, if your situation requires direct pixel access, pick a pixel format and start writing. I suggest BGRA32 because it is easy to understand and is probably the most common one to be discussed.
BGRA32 means the pixel data is stored in memory as 4 bytes representing the blue, green, red, and alpha channels of an image, in that order. It is convenient because each pixel ends up on a 4 byte boundary, lending it to storage in a 32 bit integer. When dealing with a 32 bit integer, keep in mind the order will be reversed on most platforms (check BitConverter.IsLittleEndian to determine proper byte order at runtime if you need to support multiple platforms; x86 and x86_64 are both little endian).
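To illustrate (a sketch in C; the helper name is made up): on little-endian hardware the in-memory byte sequence B, G, R, A reads back through a 32 bit integer as 0xAARRGGBB.

#include <stdint.h>

/* Pack one BGRA32 pixel; storing the result on a little-endian machine
   writes the bytes B, G, R, A at increasing addresses. */
static uint32_t pack_bgra(uint8_t b, uint8_t g, uint8_t r, uint8_t a)
{
    return ((uint32_t)a << 24) | ((uint32_t)r << 16)
         | ((uint32_t)g << 8)  |  (uint32_t)b;
}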
The image data is stored in horizontal strips, one stride wide, each composing a single row the width of the image. The stride width is always greater than or equal to the pixel width of the image multiplied by the number of bytes per pixel in the chosen format. Certain situations, specific to certain architectures, can cause the stride to be longer than width * bytesPerPixel, so you must use the stride to calculate the start of a row rather than multiplying the width. Since we are using a 4 byte wide pixel format, our stride does happen to be width * 4, but you should not rely upon it.
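The addressing rule that follows from this, sketched in C (the helper name is illustrative):

#include <stddef.h>
#include <stdint.h>

/* Address of pixel (x, y): rows begin at multiples of stride, which is
   not necessarily width * bytesPerPixel. */
static uint8_t *pixel_at(uint8_t *base, int stride, int bytesPerPixel,
                         int x, int y)
{
    return base + (size_t)y * stride + (size_t)x * bytesPerPixel;
}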
As mentioned, the only case I would suggest using a WritableBitmap is if you are accessing an existing image, so that is the example below:
Before / After:
// must be compiled with /UNSAFE
// get an image to draw on and convert it to our chosen format
BitmapSource srcImage = JpegBitmapDecoder.Create(File.Open("img13.jpg", FileMode.Open),
BitmapCreateOptions.None, BitmapCacheOption.OnLoad).Frames[0];
if (srcImage.Format != PixelFormats.Bgra32)
srcImage = new FormatConvertedBitmap(srcImage, PixelFormats.Bgra32, null, 0);
// get a writable bitmap of that image
var wbitmap = new WriteableBitmap(srcImage);
int width = wbitmap.PixelWidth;
int height = wbitmap.PixelHeight;
int stride = wbitmap.BackBufferStride;
int bytesPerPixel = (wbitmap.Format.BitsPerPixel + 7) / 8;
wbitmap.Lock();
byte* pImgData = (byte*)wbitmap.BackBuffer;
// set alpha to 50% transparent for any pixel with red < 0x44 and invert the others
int cRowStart = 0;
int cColStart = 0;
for (int row = 0; row < height; row++)
{
    cColStart = cRowStart;
    for (int col = 0; col < width; col++)
    {
        byte* bPixel = pImgData + cColStart;
        UInt32* iPixel = (UInt32*)bPixel;

        if (bPixel[2 /* bgRa */] < 0x44)
        {
            // set to 50% transparent
            bPixel[3 /* bgrA */] = 0x7f;
        }
        else
        {
            // invert but maintain alpha
            *iPixel = *iPixel ^ 0x00ffffff;
        }
        cColStart += bytesPerPixel;
    }
    cRowStart += stride;
}
wbitmap.Unlock();
// if you are going across threads, you will need to additionally freeze the source
wbitmap.Freeze();
However, it really isn't necessary if you are not modifying an existing image. For example, you can draw a checkerboard pattern using all safe code:
Output:
// draw rectangles
int width = 640, height = 480, bytesperpixel = 4;
int stride = width * bytesperpixel;
byte[] imgdata = new byte[width * height * bytesperpixel];
int rectDim = 40;
UInt32 darkcolorPixel = 0xffaaaaaa;
UInt32 lightColorPixel = 0xffeeeeee;
UInt32[] intPixelData = new UInt32[width * height];
for (int row = 0; row < height; row++)
{
    for (int col = 0; col < width; col++)
    {
        intPixelData[row * width + col] = ((col / rectDim) % 2) != ((row / rectDim) % 2) ?
            lightColorPixel : darkcolorPixel;
    }
}
Buffer.BlockCopy(intPixelData, 0, imgdata, 0, imgdata.Length);
// compose the BitmapImage
var bsCheckerboard = BitmapSource.Create(width, height, 96, 96, PixelFormats.Bgra32, null, imgdata, stride);
And you don't really even need an Int32 intermediate, if you write to the byte array directly.
Output:
// draw using byte array
int width = 640, height = 480, bytesperpixel = 4;
int stride = width * bytesperpixel;
byte[] imgdata = new byte[width * height * bytesperpixel];
// draw a gradient from green to red from left to right (G ff -> 00; R 00 -> ff)
// draw a gradient of alpha from 00 to ff from top to bottom
// Blue constant at 00
for (int row = 0; row < height; row++)
{
    for (int col = 0; col < width; col++)
    {
        // BGRA
        imgdata[row * stride + col * 4 + 0] = 0;
        imgdata[row * stride + col * 4 + 1] = Convert.ToByte((1 - (col / (float)width)) * 0xff);
        imgdata[row * stride + col * 4 + 2] = Convert.ToByte((col / (float)width) * 0xff);
        imgdata[row * stride + col * 4 + 3] = Convert.ToByte((row / (float)height) * 0xff);
    }
}
var gradient = BitmapSource.Create(width, height, 96, 96, PixelFormats.Bgra32, null, imgdata, stride);
Edit: apparently, you are trying to use WPF to make some sort of image editor. I would still be using WPF primitives for shapes and source bitmaps, and then implement translations, scaling, and rotation as RenderTransforms, bitmap effects as Effects, and keep everything within the WPF model. But, if that does not work for you, we have many other options.
You could use WPF primitives to render to a RenderTargetBitmap which has a chosen PixelFormat to use with WriteableBitmap as below:
Canvas cvRoot = new Canvas();
// position primitives on canvas
var rtb = new RenderTargetBitmap(width, height, dpix, dpiy, PixelFormats.Bgra32);
var wb = new WriteableBitmap(rtb);
You could use a WPF DrawingVisual to issue GDI style commands then render to a bitmap as demonstrated on the sample on the RenderTargetBitmap page.
You could use GDI using an InteropBitmap created using System.Windows.Interop.Imaging.CreateBitmapSourceFromHBitmap from an HBITMAP retrieved from a Bitmap.GetHbitmap method. Make sure you don't leak the HBITMAP, though.
After a nice long headache, I found this article that explains a way to do it without using bit arithmetic, and allows me to treat it as an array instead:
unsafe
{
    IntPtr pBackBuffer = bitmap.BackBuffer;
    byte* pBuff = (byte*)pBackBuffer.ToPointer();
    pBuff[4 * x + (y * bitmap.BackBufferStride)] = 255;
    pBuff[4 * x + (y * bitmap.BackBufferStride) + 1] = 255;
    pBuff[4 * x + (y * bitmap.BackBufferStride) + 2] = 255;
    pBuff[4 * x + (y * bitmap.BackBufferStride) + 3] = 255;
}
You can access the raw pixel data by calling the Lock() method and using the BackBuffer property afterwards. When you're finished, don't forget to call AddDirtyRect and Unlock.
For a simple example, you can take a look at this: http://cscore.codeplex.com/SourceControl/latest#CSCore.Visualization/WPF/Utils/PixelManipulationBitmap.cs

Calculate (x exponent 0.19029) with low memory using lookup table?

I'm writing a C program for a PIC micro-controller which needs to do a very specific exponential function. I need to calculate the following:
A = k * (1 - (p/p0)^0.19029)
k and p0 are constant, so it's all pretty simple apart from finding x^0.19029.
The (p/p0) ratio would always be in the range 0-1.
It works well if I add in math.h and use the power function, except that uses up all of the available 16 kB of program memory. Talk about bloatware! (Rest of program without power function = ~20% flash memory usage; add math.h and power function, =100%).
I'd like the program to do some other things as well. I was wondering if I can write a special case implementation for x^0.19029, maybe involving iteration and some kind of lookup table.
My idea is to generate a look-up table for the function x^0.19029, with perhaps 10-100 values of x in the range 0-1. The code would find a close match, then (somehow) iteratively refine it by re-scaling the lookup table values. However, this is where I get lost because my tiny brain can't visualise the maths involved.
Could this approach work?
Alternatively, I've looked at using Exp(x) and Ln(x), which can be implemented with a Taylor expansion. b^x can then be found with:
b^x = (e^(ln b))^x = e^(x.ln(b))
(See: Wikipedia - Powers via Logarithms)
This looks a bit tricky and complicated to me, though. Am I likely to get the implementation smaller than the compiler's math library, and can I simplify it for my special case (i.e. base = 0-1, exponent always 0.19029)?
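For a sense of what that route costs, here is a minimal sketch (in C, with illustrative and untuned series lengths) that assumes x stays well away from 0, say 0.3 <= x <= 1:

/* ln(x) = 2*artanh(t) = 2*(t + t^3/3 + t^5/5 + ...), t = (x-1)/(x+1). */
static double my_ln(double x)
{
    double t = (x - 1.0) / (x + 1.0);
    double t2 = t * t;
    double term = t, sum = 0.0;
    int k;
    for (k = 1; k <= 19; k += 2) {
        sum += term / k;
        term *= t2;
    }
    return 2.0 * sum;
}

/* exp(x) by Taylor series; |x| stays below about 0.23 on this domain. */
static double my_exp(double x)
{
    double term = 1.0, sum = 1.0;
    int k;
    for (k = 1; k <= 12; k++) {
        term *= x / k;
        sum += term;
    }
    return sum;
}

double pow_019029(double x)
{
    return my_exp(0.19029 * my_ln(x));
}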
Note that RAM usage is OK at the moment, but I've run low on Flash (used for code storage). Speed is not critical. Somebody has already suggested that I use a bigger micro with more flash memory, but that sounds like profligate wastefulness!
[EDIT] I was being lazy when I said "(p/p0) ratio would always be in the range 0-1". Actually it will never reach 0, and I did some calculations last night and decided that in fact a range of 0.3 - 1 would be quite adequate! This means that some of the simpler solutions below should be suitable. Also, the "k" in the above is 44330, and I'd like the error in the final result to be less than 0.1. I guess that means an error in the (p/p0)^0.19029 needs to be less than 1/443300 or 2.256e-6.
Use splines. The relevant part of the function is shown in the figure below. It varies approximately like the 5th root, so the problematic zone is close to p / p0 = 0. There is mathematical theory how to optimally place the knots of splines to minimize the error (see Carl de Boor: A Practical Guide to Splines). Usually one constructs the spline in B form ahead of time (using toolboxes such as Matlab's spline toolbox - also written by C. de Boor), then converts to Piecewise Polynomial representation for fast evaluation.
In C. de Boor, PGS, the function g(x) = sqrt(x + 1) is actually taken as an example (Chapter 12, Example II). This is exactly what you need here. The book comes back to this case a few times, since it is admittedly a hard problem for any interpolation scheme due to the infinite derivatives at x = -1. All software from PGS is available for free as PPPACK in netlib, and most of it is also part of SLATEC (also from netlib).
Edit (Removed)
(Multiplying by x once does not significantly help, since it only regularizes the first derivative, while all other derivatives at x = 0 are still infinite.)
Edit 2
My feeling is that optimally constructed splines (following de Boor) will be best (and fastest) for relatively low accuracy requirements. If the accuracy requirements are high (say 1e-8), one may be forced to get back to the algorithms that mathematicians have been researching for centuries. At this point, it may be best to simply download the sources of glibc and copy (provided GPL is acceptable) whatever is in
glibc-2.19/sysdeps/ieee754/dbl-64/e_pow.c
Since we don't have to include the whole math.h, there shouldn't be a problem with memory, but we will only marginally profit from having a fixed exponent.
Edit 3
Here is an adapted version of e_pow.c from netlib, as found by @Joni. This seems to be the grandfather of glibc's more modern implementation mentioned above. The old version has two advantages: (1) it is public domain, and (2) it uses a limited number of constants, which is beneficial if memory is a tight resource (glibc's version defines over 10000 lines of constants!). The following is completely standalone code, which calculates x^0.19029 for 0 <= x <= 1 to double precision (I tested it against Python's power function and found that at most 2 bits differed):
#define __LITTLE_ENDIAN
#ifdef __LITTLE_ENDIAN
#define __HI(x) *(1+(int*)&x)
#define __LO(x) *(int*)&x
#else
#define __HI(x) *(int*)&x
#define __LO(x) *(1+(int*)&x)
#endif
static const double
bp[] = {1.0, 1.5,},
dp_h[] = { 0.0, 5.84962487220764160156e-01,}, /* 0x3FE2B803, 0x40000000 */
dp_l[] = { 0.0, 1.35003920212974897128e-08,}, /* 0x3E4CFDEB, 0x43CFD006 */
zero = 0.0,
one = 1.0,
two = 2.0,
two53 = 9007199254740992.0, /* 0x43400000, 0x00000000 */
/* poly coefs for (3/2)*(log(x)-2s-2/3*s**3 */
L1 = 5.99999999999994648725e-01, /* 0x3FE33333, 0x33333303 */
L2 = 4.28571428578550184252e-01, /* 0x3FDB6DB6, 0xDB6FABFF */
L3 = 3.33333329818377432918e-01, /* 0x3FD55555, 0x518F264D */
L4 = 2.72728123808534006489e-01, /* 0x3FD17460, 0xA91D4101 */
L5 = 2.30660745775561754067e-01, /* 0x3FCD864A, 0x93C9DB65 */
L6 = 2.06975017800338417784e-01, /* 0x3FCA7E28, 0x4A454EEF */
P1 = 1.66666666666666019037e-01, /* 0x3FC55555, 0x5555553E */
P2 = -2.77777777770155933842e-03, /* 0xBF66C16C, 0x16BEBD93 */
P3 = 6.61375632143793436117e-05, /* 0x3F11566A, 0xAF25DE2C */
P4 = -1.65339022054652515390e-06, /* 0xBEBBBD41, 0xC5D26BF1 */
P5 = 4.13813679705723846039e-08, /* 0x3E663769, 0x72BEA4D0 */
lg2 = 6.93147180559945286227e-01, /* 0x3FE62E42, 0xFEFA39EF */
lg2_h = 6.93147182464599609375e-01, /* 0x3FE62E43, 0x00000000 */
lg2_l = -1.90465429995776804525e-09, /* 0xBE205C61, 0x0CA86C39 */
ovt = 8.0085662595372944372e-0017, /* -(1024-log2(ovfl+.5ulp)) */
cp = 9.61796693925975554329e-01, /* 0x3FEEC709, 0xDC3A03FD =2/(3ln2) */
cp_h = 9.61796700954437255859e-01, /* 0x3FEEC709, 0xE0000000 =(float)cp */
cp_l = -7.02846165095275826516e-09, /* 0xBE3E2FE0, 0x145B01F5 =tail of cp_h*/
ivln2 = 1.44269504088896338700e+00, /* 0x3FF71547, 0x652B82FE =1/ln2 */
ivln2_h = 1.44269502162933349609e+00, /* 0x3FF71547, 0x60000000 =24b 1/ln2*/
ivln2_l = 1.92596299112661746887e-08; /* 0x3E54AE0B, 0xF85DDF44 =1/ln2 tail*/
double pow0p19029(double x)
{
    double y = 0.19029e+00;
    double z,ax,z_h,z_l,p_h,p_l;
    double y1,t1,t2,r,s,t,u,v,w;
    int i,j,k,n;
    int hx,hy,ix,iy;
    unsigned lx,ly;

    hx = __HI(x); lx = __LO(x);
    hy = __HI(y); ly = __LO(y);
    ix = hx&0x7fffffff; iy = hy&0x7fffffff;
    ax = x;

    /* special value of x */
    if(lx==0) {
        if(ix==0x7ff00000||ix==0||ix==0x3ff00000){
            z = ax; /*x is +-0,+-inf,+-1*/
            return z;
        }
    }

    s = one; /* s (sign of result -ve**odd) = -1 else = 1 */

    double ss,s2,s_h,s_l,t_h,t_l;
    n = ((ix)>>20)-0x3ff;
    j = ix&0x000fffff;
    /* determine interval */
    ix = j|0x3ff00000; /* normalize ix */
    if(j<=0x3988E) k=0; /* |x|<sqrt(3/2) */
    else if(j<0xBB67A) k=1; /* |x|<sqrt(3) */
    else {k=0;n+=1;ix -= 0x00100000;}
    __HI(ax) = ix;

    /* compute ss = s_h+s_l = (x-1)/(x+1) or (x-1.5)/(x+1.5) */
    u = ax-bp[k]; /* bp[0]=1.0, bp[1]=1.5 */
    v = one/(ax+bp[k]);
    ss = u*v;
    s_h = ss;
    __LO(s_h) = 0;
    /* t_h=ax+bp[k] High */
    t_h = zero;
    __HI(t_h)=((ix>>1)|0x20000000)+0x00080000+(k<<18);
    t_l = ax - (t_h-bp[k]);
    s_l = v*((u-s_h*t_h)-s_h*t_l);
    /* compute log(ax) */
    s2 = ss*ss;
    r = s2*s2*(L1+s2*(L2+s2*(L3+s2*(L4+s2*(L5+s2*L6)))));
    r += s_l*(s_h+ss);
    s2 = s_h*s_h;
    t_h = 3.0+s2+r;
    __LO(t_h) = 0;
    t_l = r-((t_h-3.0)-s2);
    /* u+v = ss*(1+...) */
    u = s_h*t_h;
    v = s_l*t_h+t_l*ss;
    /* 2/(3log2)*(ss+...) */
    p_h = u+v;
    __LO(p_h) = 0;
    p_l = v-(p_h-u);
    z_h = cp_h*p_h; /* cp_h+cp_l = 2/(3*log2) */
    z_l = cp_l*p_h+p_l*cp+dp_l[k];
    /* log2(ax) = (ss+..)*2/(3*log2) = n + dp_h + z_h + z_l */
    t = (double)n;
    t1 = (((z_h+z_l)+dp_h[k])+t);
    __LO(t1) = 0;
    t2 = z_l-(((t1-t)-dp_h[k])-z_h);

    /* split up y into y1+y2 and compute (y1+y2)*(t1+t2) */
    y1 = y;
    __LO(y1) = 0;
    p_l = (y-y1)*t1+y*t2;
    p_h = y1*t1;
    z = p_l+p_h;
    j = __HI(z);
    i = __LO(z);

    /*
     * compute 2**(p_h+p_l)
     */
    i = j&0x7fffffff;
    k = (i>>20)-0x3ff;
    n = 0;
    if(i>0x3fe00000) { /* if |z| > 0.5, set n = [z+0.5] */
        n = j+(0x00100000>>(k+1));
        k = ((n&0x7fffffff)>>20)-0x3ff; /* new k for n */
        t = zero;
        __HI(t) = (n&~(0x000fffff>>k));
        n = ((n&0x000fffff)|0x00100000)>>(20-k);
        if(j<0) n = -n;
        p_h -= t;
    }
    t = p_l+p_h;
    __LO(t) = 0;
    u = t*lg2_h;
    v = (p_l-(t-p_h))*lg2+t*lg2_l;
    z = u+v;
    w = v-(z-u);
    t = z*z;
    t1 = z - t*(P1+t*(P2+t*(P3+t*(P4+t*P5))));
    r = (z*t1)/(t1-two)-(w+z*w);
    z = one-(r-z);
    __HI(z) += (n<<20);
    return s*z;
}
Clearly, 50+ years of research have gone into this, so it's probably very hard to do any better. (One has to appreciate that there are 0 loops, only 2 divisions, and only 6 if statements in the whole algorithm!) The reason for this is, again, the behavior at x = 0, where all derivatives diverge, which makes it extremely hard to keep the error under control: I once had a spline representation with 18 knots that was good up to x = 1e-4, with absolute and relative errors < 5e-4 everywhere, but going to x = 1e-5 ruined everything again.
So, unless the requirement to go arbitrarily close to zero is relaxed, I recommend using the adapted version of e_pow.c given above.
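Usage with the constants from the question would then look like this (assuming p and p0 are defined as in the question):

/* A = k * (1 - (p/p0)^0.19029) with k = 44330 */
double A = 44330.0 * (1.0 - pow0p19029(p / p0));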
Edit 4
Now that we know that the domain 0.3 <= x <= 1 is sufficient, and that we have very low accuracy requirements, Edit 3 is clearly overkill. As @MvG has demonstrated, the function is so well behaved that a polynomial of degree 7 is sufficient to satisfy the accuracy requirements, which can be considered a single spline segment. @MvG's solution minimizes the integral error, which already looks very good.
The question arises as to how much better we can still do? It would be interesting to find the polynomial of a given degree that minimizes the maximum error in the interval of interest. The answer is the minimax polynomial, which can be found using Remez' algorithm, which is implemented in the Boost library. I like @MvG's idea to clamp the value at x = 1 to 1, which I will do as well. Here is minimax.cpp:
#include <iostream>
#include <iomanip>

#define TARG_PREC 64
#define WORK_PREC (TARG_PREC*2)

#include <boost/shared_ptr.hpp>
#include <boost/multiprecision/cpp_dec_float.hpp>
typedef boost::multiprecision::number<boost::multiprecision::cpp_dec_float<WORK_PREC> > dtype;
using boost::math::pow;

#include <boost/math/tools/remez.hpp>
boost::shared_ptr<boost::math::tools::remez_minimax<dtype> > p_remez;

dtype f(const dtype& x) {
    static const dtype one(1), y(0.19029);
    return one - pow(one - x, y);
}

void out(const char *descr, const dtype& x, const char *sep="") {
    std::cout << descr << boost::math::tools::real_cast<double>(x) << sep << std::endl;
}

int main() {
    dtype a(0), b(0.7);   // range to optimise over
    bool rel_error(false), pin(true);
    int orderN(7), orderD(0), skew(0), brake(50);

    int prec = 2 + (TARG_PREC * 3010LL)/10000;
    std::cout << std::scientific << std::setprecision(prec);

    p_remez.reset(new boost::math::tools::remez_minimax<dtype>(
        &f, orderN, orderD, a, b, pin, rel_error, skew, WORK_PREC));
    out("Max error in interpolated form: ", p_remez->max_error());

    p_remez->set_brake(brake);

    unsigned i, count(50);
    for (i = 0; i < count; ++i) {
        std::cout << "Stepping..." << std::endl;
        dtype r = p_remez->iterate();
        out("Maximum Deviation Found: ", p_remez->max_error());
        out("Expected Error Term: ", p_remez->error_term());
        out("Maximum Relative Change in Control Points: ", r);
    }

    boost::math::tools::polynomial<dtype> n = p_remez->numerator();
    for(i = n.size(); i--; ) {
        out("", n[i], ",");
    }
}
Since all parts of boost that we use are header-only, simply build with:
c++ -O3 -I<path/to/boost/headers> minimax.cpp -o minimax
We finally get the coefficients, which, after multiplication by 44330, are:
24538.3409, -42811.1497, 34300.7501, -11284.1276, 4564.5847, 3186.7541, 8442.5236, 0.
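For concreteness, a sketch of evaluating these coefficients in the same Horner style as @MvG's code below (highest degree first, in the variable 1 - x, with the constant term 0 reflecting the clamp at x = 1):

/* Approximates 44330 * (1 - x^0.19029) on [0.3, 1]. */
double minimax7(double x)
{
    double u = 1.0 - x;
    double fx = 24538.3409;
    fx = fx*u - 42811.1497;
    fx = fx*u + 34300.7501;
    fx = fx*u - 11284.1276;
    fx = fx*u + 4564.5847;
    fx = fx*u + 3186.7541;
    fx = fx*u + 8442.5236;
    return fx*u;
}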
The following error plot demonstrates that this is really the best possible degree-7 polynomial approximation, since all extrema are of equal magnitude (0.06659):
Should the requirements ever change (while still keeping well away from 0!), the C++ program above can be simply adapted to spit out the new optimal polynomial approximation.
Instead of a lookup table, I'd use a polynomial approximation:
1 - x^0.19029 ≈ -1073365.91783x^15 + 8354695.40833x^14 - 29422576.6529x^13 + 61993794.537x^12 - 87079891.4988x^11 + 86005723.842x^10 - 61389954.7459x^9 + 32053170.1149x^8 - 12253383.4372x^7 + 3399819.97536x^6 - 672003.142815x^5 + 91817.6782072x^4 - 8299.75873768x^3 + 469.530204564x^2 - 16.6572179869x + 0.722044145701
Or in code:
double f(double x) {
    double fx;
    fx = - 1073365.91783;
    fx = fx*x + 8354695.40833;
    fx = fx*x - 29422576.6529;
    fx = fx*x + 61993794.537;
    fx = fx*x - 87079891.4988;
    fx = fx*x + 86005723.842;
    fx = fx*x - 61389954.7459;
    fx = fx*x + 32053170.1149;
    fx = fx*x - 12253383.4372;
    fx = fx*x + 3399819.97536;
    fx = fx*x - 672003.142815;
    fx = fx*x + 91817.6782072;
    fx = fx*x - 8299.75873768;
    fx = fx*x + 469.530204564;
    fx = fx*x - 16.6572179869;
    fx = fx*x + 0.722044145701;
    return fx;
}
I computed this in sage using the least squares approach:
f(x) = 1-x^(19029/100000) # your function
d = 16 # number of terms, i.e. degree + 1
A = matrix(d, d, lambda r, c: integrate(x^r*x^c, (x, 0, 1)))
b = vector([integrate(x^r*f(x), (x, 0, 1)) for r in range(d)])
A.solve_right(b).change_ring(RDF)
Here is a plot of the error this will entail:
Blue is the error from my 16 term polynomial, while red is the error you'd get from piecewise linear interpolation with 16 equidistant values. As you can see, both errors are quite small for most parts of the range, but will become really huge close to x=0. I actually clipped the plot there. If you can somehow narrow the range of possible values, you could use that as the domain for the integration, and obtain an even better fit for the relevant range. At the cost of worse fit outside, of course. You could also increase the number of terms to obtain a closer fit, although that might also lead to higher oscillations.
I guess you can also combine this approach with the one Stefan posted: use his to split the domain into several parts, then use mine to find a close low degree polynomial for each part.
Update
Since you updated the specification of your question, with regard to both the domain and the error, here is a minimal solution to fit those requirements:
44330(1 - x^0.19029) ≈ 23024.9160933(1-x)^7 - 39408.6473636(1-x)^6 + 31379.9086193(1-x)^5 - 10098.7031260(1-x)^4 + 4339.44098317(1-x)^3 + 3202.85705860(1-x)^2 + 8442.42528906(1-x)
double f(double x) {
    double fx, x1 = 1. - x;
    fx = + 23024.9160933;
    fx = fx*x1 - 39408.6473636;
    fx = fx*x1 + 31379.9086193;
    fx = fx*x1 - 10098.7031260;
    fx = fx*x1 + 4339.44098317;
    fx = fx*x1 + 3202.85705860;
    fx = fx*x1 + 8442.42528906;
    fx = fx*x1;
    return fx;
}
I integrated x from 0.293 to 1 or equivalently 1 - x from 0 to 0.707 to keep the worst oscillations outside the relevant domain. I also omitted the constant term, to ensure an exact result at x=1. The maximal error for the range [0.3, 1] now occurs at x=0.3260 and amounts to 0.0972 < 0.1. Here is an error plot, which of course has bigger absolute errors than the one above due to the scale factor k=44330 which has been included here.
I can also state that the first three derivatives of the function will have constant sign over the range in question, so the function is monotonic, convex, and in general pretty well-behaved.
Not meant to answer the question, but it illustrates the Road Not To Go, and thus may be helpful:
This quick-and-dirty C code calculates pow(i, 0.19029) for 0.000 to 1.000 in steps of 0.01. The first half displays the error, in percents, when stored as 1/65536ths (as that theoretically provides slightly over 4 decimals of precision). The second half shows both interpolated and calculated values in steps of 0.001, and the difference between these two.
It kind of looks okay if you read from the bottom up, all 100s and 99.99s there, but about the first 20 values from 0.001 to 0.020 are worthless.
#include <stdio.h>
#include <math.h>

float powers[102];

int main (void)
{
    int i, as_int;
    double as_real, low, high, delta, approx, calcd, diff;

    printf ("calculating and storing:\n");
    for (i=0; i<=101; i++)
    {
        as_real = pow(i/100.0, 0.19029);
        as_int = (int)round(65536*as_real);
        powers[i] = as_real;
        diff = 100*as_real/(as_int/65536.0);
        printf ("%.5f %.5f %.5f ~ %.3f\n", i/100.0, as_real, as_int/65536.0, diff);
    }
    printf ("\n");

    printf ("-- interpolating in 1/10ths:\n");
    for (i=0; i<1000; i++)
    {
        as_real = i/1000.0;
        low = powers[i/10];
        high = powers[1+i/10];
        delta = (high-low)/10.0;
        approx = low + (i%10)*delta;
        calcd = pow(as_real, 0.19029);
        diff = 100.0*approx/calcd;
        printf ("%.5f ~ %.5f = %.5f +/- %.5f%%\n", as_real, approx, calcd, diff);
    }
    return 0;
}
You can find a complete, correct standalone implementation of pow in fdlibm. It's about 200 lines of code, about half of which deal with special cases. If you remove the code that deals with special cases you're not interested in I doubt you'll have problems including it in your program.
LutzL's answer is a really good one: calculate your power as (x^1.52232)^(1/8), computing the inner power by spline interpolation or another method. The eighth root deals with the pathological non-differentiable behavior near zero. I took the liberty of mocking up an implementation this way. The below, however, only does a linear interpolation to do x^1.52232, and you'd need to get the full coefficients using your favorite numerical mathematics tools. You'll be adding scarcely 40 lines of code to get your needed power, plus however many knots you choose to use for your spline, as dictated by your required accuracy.
Don't be scared by the #include <math.h>; it's just for benchmarking the code.
#include <stdio.h>
#include <math.h>

double my_sqrt(double x) {
    /* Newton's method for a square root. */
    int i = 0;
    double res = 1.0;
    if (x > 0) {
        for (i = 0; i < 10; i++) {
            res = 0.5 * (res + x / res);
        }
    } else {
        res = 0.0;
    }
    return res;
}

double my_152232(double x) {
    /* Cubic spline interpolation for x ** 1.52232. */
    int i = 0;
    double res = 0.0;
    /* coefs[i] will give the cubic polynomial coefficients between x =
       i and x = i+1. Out of laziness, the below numbers give only a
       linear interpolation. You'll need to do some work and research
       to get the spline coefficients. */
    double coefs[3][4] = {{0.0, 1.0, 0.0, 0.0},
                          {-0.872526, 1.872526, 0.0, 0.0},
                          {-2.032706, 2.452616, 0.0, 0.0}};
    if ((x >= 0) && (x < 3.0)) {
        i = (int) x;
        /* Horner's method cubic. */
        res = ((coefs[i][3] * x + coefs[i][2]) * x + coefs[i][1]) * x
              + coefs[i][0];
    } else if (x >= 3.0) {
        /* Scaled x ** 1.5 once you go off the spline. */
        res = 1.024824 * my_sqrt(x * x * x);
    }
    return res;
}

double my_019029(double x) {
    return my_sqrt(my_sqrt(my_sqrt(my_152232(x))));
}

int main() {
    int i;
    double x = 0.0;
    for (i = 0; i < 1000; i++) {
        x = 1e-2 * i;
        printf("%f %f %f \n", x, my_019029(x), pow(x, 0.19029));
    }
    return 0;
}
EDIT: If you're just interested in a small region like [0,1], even simpler is to peel off one sqrt(x) and compute x^1.02232, which is quite well behaved, using a Taylor series:
double my_152232(double x) {
    double part_050000 = my_sqrt(x);
    double part_102232 = 1.02232 * x + 0.0114091 * x * x - 3.718147e-3 * x * x * x;
    return part_102232 * part_050000;
}
This gets you within 1% of the exact power for approximately [0.1,6], though getting the singularity exactly right is always a challenge. Even so, this three-term Taylor series gets you within 2.3% for x = 0.001.

OpenGL - Mapping between x and y in glVertex2f(x, y) to screen integer coordinates

I would like to know how the vertices given to glVertex2f(x, y) map to actual integer screen coordinates.
I intend to use a complex plane with minR, minI and maxR, maxI (I and R - imaginary and real part), such that the plane gets mapped to 512 x 512 pixels on the screen. I have 512 steps between the min and max values.
The mapping between the vertices is unclear, since I had to scale my planar image using glScalef(100, 100, 0) to get it to roughly fit the screen. But still, a large portion of it is left blank.
Please note that I am using the glBegin(GL_POINTS) routine to map the points in the plane to the screen.
The code looks thus,
for (X = 0; X < 512; X++)
    for (Y = 0; Y < 512; Y++)
        glVertex2f (Complexplane[X][Y].real, Complexplane[X][Y].imag);
P.S.:
Complexplane[0][0].real = -2, Complexplane[0][0].imag = -1.2
Complexplane[511][511].real = 1.0, Complexplane[511][511].imag = 1.8
I'm assuming you haven't set the projection or modelview matrices - they will be set to the identity matrix by default BTW...
For X,Y coordinates, a point will be visible if: -1 <= X <= 1, -1 <= Y <= 1
The glViewport function describes how this range is mapped to the window. It is initially set to (0, 0, window_width, window_height) when the GL context is created. The fact that glScale(100, 100, 0) is only taking up a portion of the window suggests that you are applying another transform elsewhere.
The mapping depends on the transformation matrices that are set. Up to OpenGL-2 the pipeline is
v_eye = ModelviewMatrix * v
v_projected = ProjectionMatrix * v_eye
v_clipped = clip(v_projected)
v_NDC.xyzw = v_clipped.xyzw / v_clipped.w
The default matrices are identity, so the only operation applied in the default state is the clipping. v_NDC then undergoes the viewport transform:
p.xyz = (v_NDC.xyz + 1) * viewport.wh / 2 + viewport.xy
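Concretely, for the 512 x 512 complex-plane case in the question, the usual approach is to set an orthographic projection matching the plane's extents instead of hand-tuning glScalef. A sketch, assuming the fixed-function pipeline and a 512 x 512 window:

/* Map [minR, maxR] x [minI, maxI] directly onto the viewport. */
glViewport(0, 0, 512, 512);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-2.0, 1.0, -1.2, 1.8, -1.0, 1.0); /* minR, maxR, minI, maxI */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();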
