This is a problem that I have been trying to solve for some time. I have a binary file that, after processing, leaves me with a binary BMP file, i.e., the pixels have only two values. Now, I have the following HDR file:
ENVI
description = {
PolSARpro File Imported to ENVI}
samples = 2618
lines = 2757
bands = 1
header offset = 0
file type = ENVI Standard
data type = 4
interleave = bsq
byte order = 0
map info = {UTM, 1, 1, 399711.555, 2641320.529, 12.500, 12.500, 45, North, WGS-84}
wavelength units = meters
band names = {
SPF_L1.bin }
generated by ENVI and PolSARPro. The problem I am facing is that software like ENVI calculates the latitude & longitude values for each pixel, while I am not able to find any method for replicating the same in my program (I am using C, with PolSARPro's source files as a base). If anyone could help me by explaining how to assign the positional information, it would be highly appreciated!
P.S.: From my point of view, map info lists geographic coordinate information in the order: projection name (UTM), reference pixel x location in file coordinates, pixel y, x pixel size, y pixel size, projection zone, and North or South (for UTM only).
Looks like all of the information is there to do what you want.
You have a pixel size (meters, I presume) and a reference. Getting the coordinates of a particular pixel involves offsetting the reference coordinates by the appropriate amount (12.5 meters times the number of pixels). Looks like it's the same for both directions.
The 399711.555 and 2641320.529 are Easting and Northing coordinates in UTM. (Near Steel City in India?)
You'll need another conversion to get to Lat/Long, though.
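As a sketch of the arithmetic (in Python for illustration, since the question's C sources aren't shown; the function name and the 1-based reference-pixel assumption are mine):

```python
# Map a (1-based) pixel position to UTM coordinates using the header's
# "map info" values. Assumes northing decreases as rows increase (the
# usual convention for north-up imagery).
def pixel_to_utm(col, row,
                 ref_col=1.0, ref_row=1.0,            # reference pixel (x, y)
                 ref_easting=399711.555, ref_northing=2641320.529,
                 x_size=12.5, y_size=12.5):
    easting = ref_easting + (col - ref_col) * x_size
    northing = ref_northing - (row - ref_row) * y_size
    return easting, northing

print(pixel_to_utm(1, 1))        # the reference pixel itself
print(pixel_to_utm(2618, 2757))  # bottom-right pixel of this file
```

The resulting easting/northing pair is still in UTM zone 45 N; converting it to latitude/longitude requires a proper projection library (e.g. PROJ) rather than plain arithmetic.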
I am trying to make a graph of the brightness of each pixel versus its distance from the center of the image. To do so I used for loops to check each pixel for these values, but when adding them to my arrays I find that I can't. One of the issues is that I have to define the array size first, so the values don't get placed in the right spots. I believe everything else is working except adding values to the arrays.
I've tried various methods of concatenation to add the values of each pixel to the array, and I don't have any more solutions to try.
folder3 = 'C:\Users\slenka\Desktop\Image_Analysis\Subtracted';
cd('C:\Users\slenka\Desktop\Image_Analysis\Subtracted');
subtractedFiles = [dir(fullfile(folder3,'*.TIF')); dir(fullfile(folder3,'*.PNG')); dir(fullfile(folder3,'*.BMP')); dir(fullfile(folder3,'*.jpg'))];
numberOfSubImages= length(subtractedFiles);
for b = 1 : numberOfSubImages
    subFileName = fullfile(folder3, subtractedFiles(b).name);
    chartImage = imread(subFileName);
    [chartY, chartX, chartNumberOfColorChannels] = size(chartImage);
    ccY = chartY/2;
    ccX = chartX/2;
    c = [ccX, ccY];
    distanceArray = zeros(1, chartX);
    intensityArray = zeros(1, chartY);
    f = 1;
    g = 1;
    for y = 1:chartY
        for x = 1:chartX
            D = sqrt((y - c(1)) .^ 2 + (x - c(2)) .^ 2);
            grayScale = impixel(chartImage, x, y);
            distanceArray(f) = [D];
            intensityArray(g) = [grayScale];
            f = f + 1;
            g = g + 1;
        end
    end
    xAxis = distanceArray;
    yAxis = intensityArray;
    plot(xAxis, yAxis);
end
I'm expecting two arrays: one full of the light-intensity values of each pixel in the image, and another with each pixel's distance from the center of the image. I want to plot these two arrays as the y and x axes respectively. At the moment the actual result is arrays that stay entirely full of zeros.
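For what it's worth, the intended bookkeeping can be sketched like this (Python/NumPy for illustration, since the question is in MATLAB; the function name is mine). The key point is that the arrays need one slot per pixel, i.e. height × width elements, not a single row's worth:

```python
import numpy as np

# Return one distance and one intensity value per pixel, flattened,
# ready to be plotted against each other.
def radial_profile(image):
    h, w = image.shape
    cy, cx = h / 2.0, w / 2.0
    ys, xs = np.mgrid[0:h, 0:w]                       # row/col index grids
    distances = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2).ravel()
    intensities = image.astype(float).ravel()
    return distances, intensities

img = np.arange(16, dtype=float).reshape(4, 4)
d, v = radial_profile(img)
print(d.size, v.size)  # 16 16
```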
I have a cell array called output. Output contains matrices of size 1024 x 1024, type = double, grayscale. I would like to plot each matrix and its corresponding histogram on a single plot. Here is what I have so far:
for i = 1:size(output,2)
    figure
    subplot(2,1,1)
    imagesc(output{1,i});
    colormap('gray')
    colorbar;
    title(num2str(dinfo(i).name))
    subplot(2,1,2)
    [pixelCount, grayLevels] = imhist(output{1,i});
    bar(pixelCount);
    title('Histogram of original image');
    xlim([0 grayLevels(end)]); % Scale x axis manually.
    grid on;
end
The plot I get, however, seems to be faulty; I was expecting a distribution of bars.
I am somewhat lost at how to proceed, any help or suggestions would be appreciated!
Thanks :)
Based on the colorbar of your image plot, your pixel values range over [0, 5*10^6].
For many image processing functions, MATLAB assumes one of two color models: double values in the range [0, 1] or integer values in the range [0, 255]. While these supported ranges are not explicitly stated, the "Tips" section of the imhist documentation contains a table of scale factors for different numeric types that hints at these assumptions.
I think the discrepancy between your image range and these models is the root of the problem.
For example, I load a grayscale image and scale the pixels by 1000 to approximate your data.
% Toy data to approximate your image
I = im2double(imread('cameraman.tif'));
output = {I, I .* 1000};
for i = 1:size(output,2)
    figure
    subplot(2,1,1)
    imagesc(output{1,i});
    colormap('gray')
    colorbar;
    subplot(2,1,2)
    [pixelCount, grayLevels] = imhist(output{1,i});
    bar(pixelCount);
    title('Histogram of original image');
    grid on;
end
The first image uses a matrix with the standard [0, 1] double value range, and imhist calculates the histogram as expected. The second image uses a matrix with the scaled [0, 1000] double value range; imhist assigns all the pixels to the 255 bin since that is the maximum bin. Therefore, we need a method that allows us to scale the bins.
Solution: use histogram
histogram is designed for any numeric type and range. You may need to fiddle with the bin edges to show the structures that you are interested in as it doesn't initialize bins the same way imhist does.
figure
subplot(2,1,1)
imagesc(output{1,2});
colormap('gray')
colorbar;
subplot(2,1,2)
histogram(output{1,2});
title('Histogram of original image');
grid on;
I have an array which stores the information of a 20x20 black and white image.
int array[][] = new int[20][20];
If the pixel is black at a specific point, for example (0,5) I insert the value one into my array.
array[0][5] = 1;
I am trying to create a dataset so I can feed it into a neural network. I was wondering if there is a way to reduce the number of input values (20x20 = 400) by compressing the information.
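One simple, lossless option (a sketch of my own, shown in Python for brevity rather than the question's Java) is to pack each row of 20 one-bit pixels into a single integer, turning 400 inputs into 20; lossy alternatives such as downsampling or PCA can reduce the count further:

```python
# Pack a 20x20 grid of 0/1 pixels into 20 integers, one per row.
# Each row's bits are read left to right into the integer's binary digits.
def pack_rows(grid):
    packed = []
    for row in grid:
        value = 0
        for bit in row:
            value = (value << 1) | bit
        packed.append(value)
    return packed

grid = [[0] * 20 for _ in range(20)]
grid[0][5] = 1                 # the example pixel from the question
print(pack_rows(grid)[0])      # bit 5 of 20 -> 2**14 = 16384
```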
I have an Nx3 array which stores the values at N coordinates. The first and second columns correspond to the x and y coordinates respectively, and the third column represents the value at those coordinates. I want to plot a 2D intensity plot; what's the best way to do it?
If the coordinates were evenly spaced, I could use meshgrid and then imshow, but in my data the coordinates are not evenly spaced. Besides, the array is very large (N ~ 100000), and the values (third column) span several orders of magnitude (so I should be using a log plot?). What's the best way to plot such a graph?
You can use griddata to interpolate your data at all 100000 points onto a uniform grid (say 100 x 100) and then plot everything with a log scaling of the colours:
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

x = data[:,0]
y = data[:,1]
z = data[:,2]
# define a uniform 100 x 100 grid
xi = np.linspace(np.min(x), np.max(x), 100)
yi = np.linspace(np.min(y), np.max(y), 100)
Xi, Yi = np.meshgrid(xi, yi)
# interpolate the scattered data onto the grid
zi = griddata((x, y), z, (Xi, Yi), method='linear')
# pcolormesh of the interpolated uniform grid with a log colormap
plt.pcolormesh(Xi, Yi, zi, norm=matplotlib.colors.LogNorm())
plt.colorbar()
plt.show()
I've not tested this, but the basic idea should be correct. It has the advantage that downstream you no longer need your original (large) dataset and can work simply with the gridded data Xi, Yi and zi.
The alternative is to colour a scatterplot,
plt.scatter(x, y, c=z,edgecolors='none', norm=matplotlib.colors.LogNorm())
and turn off the outer edges of the points so they make up a continuous picture.
For the sake of illumination analysis, based on this document, I am trying to determine three things for an array of lights and a series of points on a solid surface:
(Image key: big blue points are lights with illumination direction shown, small points are the points on my surface)
1) The distances between each of the lights and each of the points,
2) the angles between the direction each light is facing and the normal vectors of all of the points:
Note in this image I have replicated the normal vector and moved it to more clearly show the angle.
3) the angles between the direction each light is facing, and the vector from that light to all of the points on the solid:
Originally I had nested for loops iterating through all of the lights and points on the solid, but am now doing my best to do it in true MATLAB style with matrices:
I have found the distances between all the points with the pdist2 function, but have not managed to find a similar method to find the angles between the lights and all the points, nor the lights and the normal vectors of the points. I would prefer to do this with matrix methods rather than with iteration as I have been using.
My data is set out so that each row of Lmat holds the x, y, z position of one of my lights; Dmat gives the x, y, z direction of each light, so the combination of each row from both matrices fully defines a light and the direction it is facing. Similarly, Omega and nmat do the same for the points on the surface.
I am fairly sure that to get angles I want to do something along the lines of:
distMatrix = pdist2(Omega, Lmat);
LmatNew = zeros(numPoints, numLights, 3);
DmatNew = zeros(numPoints, numLights, 3);
OmegaNew = zeros(numPoints, numLights, 3);
nmatNew = zeros(numPoints, numLights, 3);
for i = 1:numLights
    LmatNew(:,i,1) = Lmat(i,1);
    LmatNew(:,i,2) = Lmat(i,2);
    LmatNew(:,i,3) = Lmat(i,3);
    DmatNew(:,i,1) = Dmat(i,1);
    DmatNew(:,i,2) = Dmat(i,2);
    DmatNew(:,i,3) = Dmat(i,3);
end
for j = 1:numPoints
    OmegaNew(j,:,1) = Omega(j,1);
    OmegaNew(j,:,2) = Omega(j,2);
    OmegaNew(j,:,3) = Omega(j,3);
    nmatNew(j,:,1) = nmat(j,1);
    nmatNew(j,:,2) = nmat(j,2);
    nmatNew(j,:,3) = nmat(j,3);
end
angleMatrix = -dot(LmatNew-OmegaNew, DmatNew, 3);
angleMatrix = atand(angleMatrix);
angleMatrix = angleMatrix.*(angleMatrix > 0);
But I am getting conceptually stuck trying to get my head around what to do after my dot product.
Am I on the right track? Is there an inbuilt angle equivalent of pdist2 that I am overlooking?
Thanks all for your help, and sorry for the paint images!
Context: This image shows my lights (big blue points), the directions the lights are facing (little black traces), and my model.
According to MathWorks, there is no built-in function to calculate the angle between vectors. However, you can use trigonometry to calculate the angles.
Inputs
Since you unfortunately didn't explain your input data in great detail, I'm going to assume that you have a matrix Lmat containing a location vector of a light source in each row and a matrix Dmat containing the directional vectors for the light sources, both of size n×3, where n is the number of light sources in your scene.
The matrices Omega and Nmat supposedly are of size m×3 and contain the location vectors and normal vectors of all m surface points. The desired results are the angles between all light direction vectors and surface normal vectors, of which there are n⋅m, and the angles between the light direction vectors and the vectors connecting each light to each point on the surface, of which there are n⋅m as well.
To get results for all combinations of light sources and surface points, the input matrices have to be expanded so that matching rows pair every light with every surface point (note that the sizes must be captured before the matrices are overwritten):
n = size(Lmat, 1);   % number of lights
m = size(Omega, 1);  % number of surface points
Lmat  = repmat(Lmat, m, 1);
Dmat  = repmat(Dmat, m, 1);
Omega = repelem(Omega, n, 1);
Nmat  = repelem(Nmat, n, 1);
Using the inner product / dot product
The definition of the inner product of two vectors a and b is
a · b = |a| |b| cos(θ),
where θ is the angle between the two vectors. Reordering the equation yields
θ = arccos( (a · b) / (|a| |b|) ).
You can therefore calculate the angles between your directional vectors Dmat and your normal vectors Nmat like this:
normProd = sqrt(sum(Dmat.^2,2)).*sqrt(sum(Nmat.^2,2));
anglesInDegrees = acos(dot(Dmat.',Nmat.')' ./ normProd) * 180 / pi;
To calculate the angles between the light-to-point vectors and the directional vectors, just replace Nmat with Omega - Lmat.
Using the vector product / cross product
It has been mentioned that the above method will have problems with accuracy for very small (θ ≈ 0°) or very large (θ ≈ 180°) angles. The suggested solution is calculating the angles using the cross product and the inner product.
The norm of the vector product of two vectors is
|a × b| = |a| |b| sin(θ).
You can combine this with the above definition of the inner product to get
|a × b| / (a · b) = sin(θ) / cos(θ) = tan(θ),
which can obviously be reordered to this:
θ = atan2(|a × b|, a · b).
The corresponding MATLAB code looks like this:
normCross = sqrt(sum(cross(Dmat,Nmat,2).^2,2));
anglesInDegrees = atan2(normCross,dot(Dmat.',Nmat.')') * 180/pi;
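For comparison, the same atan2 formulation can be sketched in NumPy (names are mine; broadcasting replaces the row-repetition step):

```python
import numpy as np

# Angle between every light direction (rows of D) and every surface
# normal (rows of N), via atan2(|a x b|, a . b); returns an n x m
# matrix of angles in degrees.
def angles_deg(D, N):
    D = D[:, None, :]                              # n x 1 x 3
    N = N[None, :, :]                              # 1 x m x 3
    cross_norm = np.linalg.norm(np.cross(D, N), axis=-1)
    dots = np.sum(D * N, axis=-1)
    return np.degrees(np.arctan2(cross_norm, dots))

D = np.array([[1.0, 0.0, 0.0]])
N = np.array([[0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
print(angles_deg(D, N))                            # [[90. 45.]]
```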