Why is the ratio of print units to mm 34/9? - Silverlight

I'm trying to print a grid of labels to a sheet of A4 label paper.
The user specifies the left and top margins of the paper in mm.
A4 is 210 x 297.
Silverlight tells me that the page is 793 x 1122 if I add the printable area and the page margins together.
printDocument.PrintPage += (s, e) =>
{
    var printableArea = e.PrintableArea;  // printable size, in device-independent units
    var pageMargin = e.PageMargins;       // hardware margins, also in device-independent units
};
If I do the maths, the ratio between 210 and 793, and between 297 and 1122, is roughly 3.777 recurring, which as a fraction is 34/9.
Why is it this value?
Is it always this value, regardless of the print destination? I've checked an actual printer and an XPS document set up as A4, and it appears to be the case, but I don't want to get caught out in six months' time.
If it does change how can I work out this relationship in code?

After a little bit more research I've worked out the answer.
The sizes Silverlight is using are the paper dimensions in device-independent units, where one unit is 1/96th of an inch.
297 mm = 11.6929 inches
Multiply that by 96 and you get 1122.52.
Similarly
210 mm = 8.2677 inches
which works out to 793.70.
So now that I understand where the numbers come from, I can happily use the exact conversion factor of 96/25.4 units per mm (which the 34/9 I measured only approximates), along with a comment explaining where it comes from.
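For reference, here is a minimal C# sketch of that conversion, meant to sit alongside the handler above; the helper names are just illustrative:

// 96 device-independent units per inch, 25.4 mm per inch.
private const double UnitsPerMm = 96.0 / 25.4;   // ~3.7795, which 34/9 approximates

private static double MmToUnits(double mm)
{
    return mm * UnitsPerMm;     // e.g. 210 mm -> ~793.7 units (A4 width)
}

private static double UnitsToMm(double units)
{
    return units / UnitsPerMm;  // e.g. 1122.5 units -> ~297 mm (A4 height)
}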

Related

d3 x axis won't fit on svg [duplicate]

I have a temperature range of 33 degrees to 64 degrees F. I'm trying to figure out the pixel position of 59 degrees along a y-axis that spans 600 pixels. This is for a column chart.
Full disclosure: this is homework that I've been grappling with for a while and simply have not yet figured out.
var scale = d3.scale.linear().domain([33,64]).range([0,64]);
scale(59) returns 53.677419354838705 which is the wrong answer.
600 - 53.677419354838705 is also the wrong answer.
What might I be missing?
Thank you.
The domain is the complete set of values, so in this case that is all of your temperatures, from 33 to 64. The range is the set of resulting values of a function, in this case the resulting values of scaling your temperatures from 0 to 600.
So you're almost there; you're just using the wrong range. In this case your range should be the span of your y-axis (0 to 600):
var scale = d3.scale.linear().domain([33, 64]).range([0, 600]);
Which will result in scale(59) giving an output of 503.2258064516129.
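One follow-up note, as a sketch only: if the bars are drawn in SVG coordinates, where y grows downwards, a common approach is to flip the range so that warmer temperatures land nearer the top of the axis (same d3 v3 API as above):

var yScale = d3.scale.linear().domain([33, 64]).range([600, 0]);
yScale(59); // ~96.77 pixels down from the top of the 600px axis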

Replace several numbers in an array every step in Matlab

I have a set of data. Let's say daily temperatures on an n x m grid (n: latitude, m: longitude) for the whole world during a month. However, the temperature at my location of interest is not correct, so I need to update it. In other words, I have to change the data at certain grid points for every time step (daily). I attach a simple example here. Let's say each 1x2 matrix on the left is the correct data, while each 6x4 matrix contains some incorrect data (6: latitude, 4: longitude). What I need is to copy the correct data from the left into the right, as indicated by matching colours, for every time step.
Could anyone help me?
Many thanks
For example, take this data:
A = rand(4,2)     % correct data: 4 time steps x 2 grid points
B = rand(6,4,4)   % incorrect data: 6 latitudes x 4 longitudes x 4 time steps
You would want these values to be replaced by A:
B(3,2:3,:)
Just make sure the sizes match:
size(B(3,2:3,:))
> 1 2 4
A = reshape(A',[1 2 4])   % transpose so that time runs along the third dimension
and then you can put it there:
B(3,2:3,:) = A
[edit] Sorry, I probably just don't see the problem. Here is a fuller example:
T = randi(255,[1E3,1E3,31],'uint8'); %1000 longitude, 1000 latitude, 31 days
C = repmat([50,100],[31,1,1]); %correction for 31 days and two locations. must become 50 and 100.
%location 20,10 and 20,11 must change.
T(20,10:11,:)=reshape(C',[1 2 31]);
T(20,10,3) %test for third day.
>> 50
T(20,11,10) %test for tenth day.
>> 100
The replacement takes 0.000365 seconds on my PC.
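As a small safeguard against the size mismatch mentioned above, you could wrap the assignment in an explicit check. This is just a sketch using the same T and C as above:

correction = reshape(C',[1 2 31]);   % time along the third dimension
assert(isequal(size(T(20,10:11,:)), size(correction)), ...
    'Correction block does not match the target region');
T(20,10:11,:) = correction;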

Opacity from RGB array

So I'm no expert in coding: I just have a vague understanding of a few bits and bobs.
I understand that images of pixel dimensions X*Y are stored in an array of size 3*X*Y, where the vector pulled from a given (x,y) position has 3 elements for the 3 RGB values.
I also understand that one can also store an image in a 4*X*Y array, where each pulled vector now has 4 values, RGBA with A being Alpha, used to represent the opacity of a particular pixel.
Now I'm into animation, and have pencil drawings of white clouds on a black background that I want to import into Flash. I would like the blacker parts of the drawing to be more transparent and the lighter parts to be more opaque. I have the scans of the drawings saved in .png format.
If I had any idea how to manipulate an image at the 'array level', I could have a stab at this myself but I'm at a loss.
I need a program that, given a .png image and a reference RGB value {a b c}, obtains the RGB array of the image and converts it into an RGBA array such that:
a pixel of RGB value {p q r}
...... Becomes ......
a pixel of RGBA value {p q r 1-M[(|p-a|^2 + |q-b|^2 + |r-c|^2)^1/2]}.
Where M is a normalisation factor which scales the distance so that it lies between 0 and 1 (the largest possible distance gives an alpha of 0).
i.e. M = 1/[(255^2 + 255^2 + 255^2)^(1/2)] = 0.0022641...
i.e. the alpha value of the replacement pixel is the 'distance' between the colour of the pixel and some reference colour which can be input.
This then needs to export the new RGBA Array as a png image.
Any ideas or any fellow animators know if this can be done directly with actionscript?
Example: Reference = {250 251 245}
RGB array =
|{250 251 245} {250 250 250}|
|{30 255 22} {234 250 0 }|
...... Becomes ......
RGBA array =
|{250 251 245 1} {250 250 250 0.988}|
|{30 255 22 0.291} {234 250 0 0.444}|
You can do this quite simply, just at the command-line, with ImageMagick which is installed on most Linux distros and is available for free on OSX and Windows.
The "secret sauce" is the -fx operator - described here.
So, let's generate a 300x200 black image and then use -fx to calculate the red channel so that the red varies across the image according to what fraction of the width (w) we are from the left side (i):
convert -size 300x200 xc:black -channel R -fx 'i/w' result.png
Note that I am generating an image "on-the-fly" with -size 300x200 xc:black, whereas if you have a PNG file with your animation frame in it, you can put that in, in its place.
Now let's say we want to vary the opacity/alpha too - according to the distance down the image from the top:
convert -size 300x200 xc:black -alpha on \
-channel R -fx 'i/w' \
-channel A -fx 'j/h' result.png
OK, we are getting there... Your function is a bit more complicated, so, rather than typing it on the command line every time, we can put it in a script file called RGB2Opacity.fx and use it like this:
convert -size 300x200 xc:black -alpha on -channel A -fx #RGB2Opacity.fx result.png
where RGB2Opacity.fx is simple and looks like this for the moment:
0.2
Now we need to put your "reference" pixel on the command line with your animation frame so that ImageMagick can work out the difference. That means your actual command-line will look more or less exactly like the following:
convert -size 300x200 xc:"rgb(250,251,245)" YourImage.png -alpha on -channel A -fx #RGB2Opacity.fx result.png
And then we need to implement your formula in the -fx script file. Your variable names must be at least 2 letters long with no digits in them, and you should return a single value for the opacity. Variables are all scaled between [0,1.0] so your 255 scaling is a little different. I have no sample image and I am not sure how the answer is supposed to look, but it will be pretty close to this:
MM=1/pow(3,0.5);
pmasq=pow((u.r-v.r),2.0);
qmbsq=pow((u.g-v.g),2.0);
rmcsq=pow((u.b-v.b),2.0);
1-MM*pow((pmasq+qmbsq+rmcsq),0.5)
I don't know if/how to put comments in -fx scripts, so I will explain the variable names below:
pmasq is the square of p minus a.
qmbsq is the square of q minus b.
rmcsq is the square of r minus c.
u.r refers to the red channel of the first image in ImageMagick's list, i.e. the red channel of your reference pixel.
v.g refers to the green channel of the second image in ImageMagick's list, i.e. the green channel of your animation frame.
Let's create your animation frame now:
convert xc:"rgb(250,251,245)" xc:"rgb(250,250,250)" xc:"rgb(30,255,22)" xc:"rgb(234,250,0)" +append frame.png
And check it looks correct:
convert frame.png txt:
Output
# ImageMagick pixel enumeration: 4,1,65535,srgb
0,0: (64250,64507,62965) #FAFBF5 srgb(250,251,245)
1,0: (64250,64250,64250) #FAFAFA grey98
2,0: (7710,65535,5654) #1EFF16 srgb(30,255,22)
3,0: (60138,64250,0) #EAFA00 srgb(234,250,0)
If we apply that to your image and check the results, you can see I have got it slightly wrong somewhere (all four output pixels have taken on the reference colour's RGB values), but I'll leave you (or some other bright spark) to work that out...
convert -size 4x1 xc:"rgb(250,251,245)" frame.png -alpha on -channel A -fx #RGB2Opacity.fx result.png
convert result.png txt:
# ImageMagick pixel enumeration: 4,1,65535,srgba
0,0: (64250,64507,62965,65535) #FAFBF5FF srgba(250,251,245,1)
1,0: (64250,64507,62965,64764) #FAFBF5FC srgba(250,251,245,0.988235)
2,0: (64250,64507,62965,19018) #FAFBF54A srgba(250,251,245,0.290196)
3,0: (64250,64507,62965,29041) #FAFBF571 srgba(250,251,245,0.443137)
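If you would rather do this in a standalone script than on the command line, here is a minimal sketch of the same distance formula, assuming Python with NumPy and Pillow available; the file names are just placeholders:

import numpy as np
from PIL import Image

def add_opacity(in_path, out_path, ref=(250, 251, 245)):
    # Load the frame as plain RGB and work in floating point.
    rgb = np.asarray(Image.open(in_path).convert('RGB'), dtype=float)
    # Euclidean distance of every pixel from the reference colour,
    # normalised by the largest possible distance sqrt(3 * 255^2).
    dist = np.sqrt(((rgb - np.array(ref, dtype=float)) ** 2).sum(axis=-1))
    alpha = 1.0 - dist / np.sqrt(3 * 255.0 ** 2)
    # Stack RGB + alpha and save as an RGBA PNG (alpha scaled back to 0-255).
    rgba = np.dstack([rgb, alpha * 255.0]).astype(np.uint8)
    Image.fromarray(rgba, 'RGBA').save(out_path)

add_opacity('frame.png', 'result.png')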

Creating a 3D plot in Matlab

I want to create a 3D plot of the final fraction of the Earth covered by grass (A, at 2 billion years from now) as a function of a varying grass death rate (D) and grass growth rate (G).
The final value of A (2 billion years from now) can be calculated using a loop with the following discretised equation:
A(t+dt) = A(t)*((1-A(t))*G-D)*dt + A(t)
%Define variables and arrays
D=0.1; %constant value
G=0.4; %constant value
A=0.001; %initial value of A at t=0
t=0;
dt=10E6;
startloop=1; %define number of iterations
endloop=200;
timevector=zeros(1,endloop); %create vector with 0
grassvector=zeros(1,endloop);
%Define the loop
for t=startloop:endloop
A=A.*((((1-A).*G)-D)) + A;
grassvector(t)=A;
timevector(t)=t*dt;
end
Now I'm stuck on how to create a 3D plot of this final value of A as a function of varying G and D. I got this far, but after a few trials it keeps giving errors:
%(1) Create array of values for G and D varying between 0 and 1
A=0.001;
G=[0.005:0.005:1]; %Vary from 0.005 to 1 in steps of 0.005
D=[0.005:0.005:1]; %Vary from 0.005 to 1 in steps of 0.005
%(2) Meshgrid both variables = all possible combinations in a matrix
[Ggrid,Dgrid]=meshgrid(G,D);
%(3) Calculate the final grass fraction with varying G and D
D=0.1;
G=0.4;
A=0.001;
t=0;
dt=10E6;
startloop=1; %define number of iterations
endloop=200;
timevector=zeros(1,endloop); %create vector with 0
grassvector=zeros(1,endloop);
%Define the loop
for t=startloop:endloop
A=A.*((((1-A).*Ggrid)-Dgrid)) + A;
grassvector(t)=A;
timevector(t)=t*dt;
end
%(4) mesh together with D and G
...??
Can someone help? Thanks!
Your code is wrong, as grassvector(t)=A; cannot be executed: the sizes are not consistent. However, I think you may want to do:
grassvector=zeros([size(Ggrid),endloop]);
and in the loop:
grassvector(:,:,t)=A;
Also, while completely unnecessary computationally, you may want to initialize A to A=0.001*ones(size(Dgrid)), as it makes more sense logically.
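Putting those two fixes together, the simulation part would look roughly like this (same parameters as in the question):

[Ggrid,Dgrid] = meshgrid(0.005:0.005:1, 0.005:0.005:1);
A = 0.001*ones(size(Dgrid));                 % same initial fraction for every G/D pair
dt = 10E6;
endloop = 200;
grassvector = zeros([size(Ggrid) endloop]);  % one 200x200 slice per time step
timevector = (1:endloop)*dt;
for t = 1:endloop
    A = A.*((1-A).*Ggrid - Dgrid) + A;
    grassvector(:,:,t) = A;
end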
Anyways: this is how you can plot it in the end:
surf(Ggrid,Dgrid,A,'LineStyle','none');
xlabel('growth rate ')
ylabel('death rate ')
zlabel('grass')
colorbar
This gives a surface of the final grass fraction over the growth-rate/death-rate grid (figure not reproduced here).
But, as I was actually interested in your research, I decided to make a couple of plots to see how fast the grass grows. Here is some nice plotting code; you can modify things here to change its appearance. I use custom colormaps, so if it doesn't work, delete the colormap(viridis()) line. If you like the colormap, visit this.
fh = figure();
filename = 'grass.gif';
for t = startloop:endloop
    clf
    hold on
    surf(Ggrid,Dgrid,grassvector(:,:,t),'LineStyle','none');
    [c,h] = contour3(Ggrid,Dgrid,grassvector(:,:,t)+0.05,[0:0.1:1],'LineColor',[153,0,18]/255,'LineWidth',2);
    clabel(c,h);
    xlabel('growth rate')
    ylabel('death rate')
    zlabel('grass')
    title(['Years passed: ' num2str(t*dt/1000000) ' million'])
    colormap(viridis())
    axis([0 1 0 1 0 1])
    grid on
    view(-120,40);
    frame = getframe(fh);
    im = frame2im(frame);
    [imind,cm] = rgb2ind(im,256);
    if t == 1
        imwrite(imind,cm,filename,'gif','Loopcount',inf);
    else
        imwrite(imind,cm,filename,'gif','WriteMode','append','DelayTime',0.1);
    end
end
Results: an animated GIF of the grass fraction evolving over the two billion years (not reproduced here).

Converting two bit color to eight bit color

I have color values coming in as six bits, with two bits each for red, green and blue. So black would be represented as binary 000000, red as 110000, blue as 000011, yellow as 111100, and so on.
I have to convert this color into a 24-bit RGB value to pass to the graphics layer (DirectFB). Since three (binary 11) should become 255 (0xFF), I used the following formula with 85 (= 255/3) as the conversion factor:
r = (color_6bit >> 4) * FACTOR;          /* top two bits    */
g = ((color_6bit >> 2) & 0x3) * FACTOR;  /* middle two bits */
b = (color_6bit & 0x3) * FACTOR;         /* bottom two bits */
color_32bit = (r << 16) | (g << 8) | b;
This correctly converts the colors (white [0x3F -> 0xFFFFFF], red [0x30 -> 0xFF0000] and so on).
Now, these colors are the text and background colors of captions to be displayed on a TV, and we have test streams with a reference color palette embedded in the video. When I draw the colors obtained with this formula on the screen, they do not exactly match the reference colors in the video; they are fairly close, but there is a difference.
Am I doing the conversion correctly, or is there a standard algorithm for converting two-bit-per-channel RGB values to eight-bit-per-channel RGB values? Could DirectFB be using some different representation (like RGB565) internally?
For what it is worth, when the factor 85 is replaced with 48 (value found by trial and error), the colors are matching almost perfectly.
The only standard I know of is EGA; there's a reference on Wikipedia.
For what it's worth, 6 bits is a very small space - only 64 values. The quickest way of doing the conversion in terms of both cpu time and memory is almost certainly just looking up the value in an array of size 64. Especially if you have test data, this is really easy - just put the correct 64 values from the test data in the array. That way you don't need to worry if it is a standard - the standard is exactly whatever works for this system.
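A minimal C sketch of that lookup-table approach follows; the channel_map values below are just the 0/85/170/255 mapping from the question, and should be replaced with whatever values you take from your reference palette:

#include <stdint.h>

static uint32_t color_lut[64];

/* Build the 64-entry table once at startup. */
static void init_color_lut(void)
{
    static const uint8_t channel_map[4] = { 0x00, 0x55, 0xAA, 0xFF };  /* 0, 85, 170, 255 */
    for (int c = 0; c < 64; c++) {
        uint8_t r = channel_map[(c >> 4) & 0x3];
        uint8_t g = channel_map[(c >> 2) & 0x3];
        uint8_t b = channel_map[c & 0x3];
        color_lut[c] = ((uint32_t)r << 16) | ((uint32_t)g << 8) | b;
    }
}

/* Per pixel: uint32_t color_32bit = color_lut[color_6bit & 0x3F]; */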
