I am attempting to write some Postscript in order to produce artwork in files I can send to a printer to get some signs printed.
The printer has various requirements for PDFs, one of which is that they should use CMYK.
In all my prior use of Postscript I have used setrgbcolor and never really dealt with colorspace management, ICC profiles, etc.
One of the colours I am using is called RAL 1507 (Traffic Blue), with RGB and CMYK values I obtained by searching for the colour name. I checked by using an online RGB-to-CMYK converter (with no colourspace profile specified).
I thought I'd try setcmykcolor and created the following:
%!PS-Adobe-3.0
%
% Test use of CMYK in Postscript in preparation for creating a PDF/A-1a file
% for use by a commercial printer.
%
%%Pages: 1
%%Page: One 1
/Helvetica-Bold 20 selectfont
0 90 255 div 140 255 div setrgbcolor
100 100 250 100 rectfill
120 130 moveto 1 setgray (RGB: 0 90 140) show
100 255 div 60 255 div 0 10 255 div setcmykcolor
100 200 250 100 rectfill
120 230 moveto 1 setgray (CMYK: 100 60 0 10) show
100 255 div 36 255 div 0 45 255 div setcmykcolor
100 300 250 100 rectfill
120 330 moveto 1 setgray (CMYK: 100 36 0 45) show
0 0 1 setrgbcolor
100 400 250 100 rectfill
120 430 moveto 1 setgray (RGB: 0 0 255) show
showpage
%%EOF
(Forgive the DSC - it's intended to be just enough to placate GSView)
GSView 5.0 on MS-Windows 10 with Ghostscript 9.05 renders it like this
I had expected at least one of the CMYK colours to be rendered much closer to the bottom RGB colour.
The colour in question is designed for printing road-signs, so I'd be surprised if it is outside the relevant colour gamut used by commercial printers.
What do I need to do to be confident the printer will print my CMYK value with a result close to what I expect from GSView's rendering of the RGB value?
I don't know where you got the CMYK values from but they are not (IMO) a good representation of the RGB colour. Try 0.74 0.44 0 0.27 setcmykcolor instead.
The numbers you have used would be reasonable if you treated them as percentages, not as values in the range 0-255. 100% cyan, 36% magenta, 0% yellow and 45% black produces quite a respectable match. I wonder if that's your mistake?
That would be:
1 0.36 0 0.45 setcmykcolor
By the way, I think you mean RAL 5017, not 1507 which is red.
On top of that, bear in mind that you are converting an RGB colour to CMYK, then displaying that CMYK value on an RGB monitor, which involves converting it back to RGB, so some loss of precision is to be expected.
The highly simplistic calculation given in the Red Book (PostScript Language Reference Manual) is that cyan = 1 - red, magenta = 1 - green, yellow = 1 - blue. However equal values of CMYK do not generally create black, so we also apply undercolor removal.
Take the lowest value of c, m, y and make that value k (black). Then subtract k from each of c, m, y. The final result is:
c = 1 - red
m = 1 - green
y = 1 - blue
k = min (c, m, y)
cyan = c - k
magenta = m - k
yellow = y - k
black = k
For your values (mapped to the range 0-1, assuming an input range of 0-255):
red = 0
green = 0.353
blue = 0.549
c = 1 - 0 = 1
m = 1 - 0.353 = 0.647
y = 1 - 0.549 = 0.451
k = 0.451
cyan = 1 - 0.451 = 0.549
magenta = 0.647 - 0.451 = 0.196
yellow = 0.451 - 0.451 = 0
black = 0.451
so
0.549 0.196 0 0.451 setcmykcolor
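
If you want to sanity-check the arithmetic programmatically, here is a small C sketch of the same calculation (the function name and the 0-255 input range are my own assumptions); it prints the setcmykcolor line above for your RGB value:

#include <stdio.h>

/* Simple RGB -> CMYK conversion with undercolor removal, as described above.
 * Inputs are 0-255 per channel (an assumption); outputs are 0.0-1.0 per
 * channel, ready for setcmykcolor. */
static void rgb_to_cmyk(int r8, int g8, int b8,
                        double *cyan, double *magenta, double *yellow, double *black)
{
    double c = 1.0 - r8 / 255.0;
    double m = 1.0 - g8 / 255.0;
    double y = 1.0 - b8 / 255.0;
    double k = c;                       /* k = min(c, m, y) */
    if (m < k) k = m;
    if (y < k) k = y;
    *cyan    = c - k;
    *magenta = m - k;
    *yellow  = y - k;
    *black   = k;
}

int main(void)
{
    double c, m, y, k;
    rgb_to_cmyk(0, 90, 140, &c, &m, &y, &k);   /* the RGB value from the question */
    printf("%.3f %.3f %.3f %.3f setcmykcolor\n", c, m, y, k);
    return 0;
}
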
That's a cheap and cheerful calculation; it's intended to be done by a PostScript interpreter in a printer, so it's chosen to be quick rather than accurate. But I think you'll see that it's closer than the values you were using.
For proper colour space management, the RGB colours you are using would be values in a particular RGB space, for example the colour space of your monitor. You would then use the ICC profile associated with that device to turn the RGB values into values in the CIE XYZ space (a device-independent space). Then you would choose a particular destination CMYK space (e.g. the printer you want to use) and use the ICC profile associated with the destination device to go the other way, turning the XYZ values into CMYK values.
In a properly colour-managed workflow, where all the devices are characterised by ICC profiles, the result is that the colour on all the devices is as close as it is possible to get to being the same.
Of course, this relies on you having everything characterised, and clearly you don't.
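
If you do eventually get hold of ICC profiles, a colour-management library can do that two-step conversion for you. Here is a rough C sketch using LittleCMS (my choice of library, not something discussed above; the profile filenames are placeholders), linked with -llcms2:

#include <stdio.h>
#include <lcms2.h>

int main(void)
{
    /* Placeholder profile filenames: substitute the profile for your monitor
     * and the profile for the printer's CMYK condition. */
    cmsHPROFILE rgb_profile  = cmsOpenProfileFromFile("my_monitor.icc", "r");
    cmsHPROFILE cmyk_profile = cmsOpenProfileFromFile("printer_cmyk.icc", "r");
    if (!rgb_profile || !cmyk_profile) { fprintf(stderr, "missing profile\n"); return 1; }

    cmsHTRANSFORM xform = cmsCreateTransform(rgb_profile, TYPE_RGB_8,
                                             cmyk_profile, TYPE_CMYK_8,
                                             INTENT_PERCEPTUAL, 0);
    if (!xform) return 1;

    unsigned char rgb[3] = { 0, 90, 140 };    /* the colour from the question */
    unsigned char cmyk[4];
    cmsDoTransform(xform, rgb, cmyk, 1);      /* convert 1 pixel */

    printf("%.3f %.3f %.3f %.3f setcmykcolor\n",
           cmyk[0] / 255.0, cmyk[1] / 255.0, cmyk[2] / 255.0, cmyk[3] / 255.0);

    cmsDeleteTransform(xform);
    cmsCloseProfile(rgb_profile);
    cmsCloseProfile(cmyk_profile);
    return 0;
}
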
Note that spot colours (/Separation colours in PostScript and PDF) are somewhat 'different'. These are intended to be printed using the specific ink so there's no need to characterise the values, 50% Pantone 1495 is an absolutely accurate value.
However, if your printer isn't equipped to print that colour, because for example you're doing a quick check on your local CMYK printer, these colours are normally defined to have an 'alternate' representation. Ideally these would be CMYK values which will print something which is not entirely unlike the desired colour. Some ink manufacturers specify an alternate representation which is not a particularly good representation of the actual colour, arguably because they have a number of inks which map to the same colour in CMYK, so they use 'off' values to be able to tell the difference. Suspicious users have been known to comment that it's done to make sure you can't do a decent print without using the manufacturer's inks...
I'm using xterm-256color. Here is my short program snippet:
mvwprintw(stdscr,1,1,"You have %d colors",COLORS);
mvwprintw(stdscr,2,1,"You have %d color pairs",COLOR_PAIRS);
wprintw(stdscr,"\n\n");
for (i = 1; i < 10; i++)
{
    short r, g, b;
    short thiscolor = i + 70;

    init_pair(i, thiscolor, COLOR_BLACK);
    color_content(thiscolor, &r, &g, &b);
    wattron(stdscr, COLOR_PAIR(i));
    wprintw(stdscr, "This is color %d\t%d %d %d\n", thiscolor, r, g, b);
    wattroff(stdscr, COLOR_PAIR(i));
}
refresh();
It prints out nine shades of green, but the output of color_content does not match the greens it is printing:
You have 256 colors
You have 256 color pairs
This is color 71 1000 1000 1000
This is color 72 0 0 0
This is color 73 1000 0 0
This is color 74 0 1000 0
This is color 75 1000 1000 0
This is color 76 0 0 1000
This is color 77 1000 0 1000
This is color 78 0 1000 1000
This is color 79 1000 1000 1000
I would have expected to see the middle value (G) always be a fairly high number. I would not have expected to see a 0.
Am I doing something wrong? Or am I misunderstanding what color_content is supposed to output?
ncurses has no foreknowledge of the palette used by a given terminal emulator. Unless you initialized colors (init_color), it has only its built-in table. There is no portable means for determining a terminal's color palette.
The section on start_color in the manual says
If the terminal supports the initc (initialize_color) capability,
start_color initializes its internal table representing the red,
green and blue components of the color palette.
The components depend on whether the terminal uses CGA (aka "ANSI")
or HLS (i.e., the hls (hue_lightness_saturation) capability is
set). The table is initialized first for eight basic colors
(black, red, green, yellow, blue, magenta, cyan, and white), and
after that (if the terminal supports more than eight colors) the
components are initialized to 1000.
start_color does not attempt to set the terminal's color palette to
match its built-in table. An application may use init_color to alter the internal table along with the terminal's color.
That "initialized to 1000" could be clearer. The library uses the red/green/blue pattern from the 8 ANSI colors as something to repeat (using 1000 for the non-zero values) repetitively after the first 8 colors (see source-code...).
That's the default, built into the library. If you want something different, you will have to tell it what that is, using init_color. The ncurses-examples have sample palette data files which can be used by some of the programs (ncurses, picsmap, savescreen), to do just that.
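
For example, here is a minimal sketch of that (the colour number 71 and the 0-1000 component values are arbitrary choices of mine); after init_color, color_content reports exactly what was set:

#include <curses.h>

int main(void)
{
    initscr();
    start_color();

    if (can_change_color()) {
        short r, g, b;

        /* Overwrite colour 71 in ncurses' table (and, via initc, in the
         * terminal) with a medium green; component values are 0-1000. */
        init_color(71, 0, 700, 0);
        init_pair(1, 71, COLOR_BLACK);

        color_content(71, &r, &g, &b);
        attron(COLOR_PAIR(1));
        printw("color 71 is now %d %d %d\n", r, g, b);
        attroff(COLOR_PAIR(1));
    } else {
        printw("this terminal cannot redefine colors\n");
    }

    refresh();
    getch();
    endwin();
    return 0;
}

Build with something like cc example.c -lncurses.
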
I want to load gray scale images in C, pre process it and then display the modified image. My question is:
What is the right way to import grayscale images (JPEG, PNG formats) in C?
My own search before asking this question:
1- With fread or other file I/O we can read the image file, but the data will be encoded/compressed; I want the grayscale value (0-255) of each pixel.
2- There is ImageMagick, which could be helpful, but I am having problems installing it on Mac OS X.
I have done image processing in Python and MATLAB but have no experience with C.
Thanks
You have a number of options, but I will go through them starting with the easiest and least integrated with OSX, and getting progressively more integrated with OSX.
Easiest Option
Personally, if I was intent on processing greyscale images, I would write my software to use NetPBM's Portable Greymap (PGM) format as that is the very simplest to read and write and readily interchangeable with other formats. There is no compression, DCT, quantisation, colorspaces, EXIF data - just your data with a simple header. The documentation is here.
Basically a PGM file looks like this:
P2
# Shows the word "FEEP" (example from Netpbm man page on PGM)
24 7
15
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 3 3 3 3 0 0 7 7 7 7 0 0 11 11 11 11 0 0 15 15 15 15 0
0 3 0 0 0 0 0 7 0 0 0 0 0 11 0 0 0 0 0 15 0 0 15 0
0 3 3 3 0 0 0 7 7 7 0 0 0 11 11 11 0 0 0 15 15 15 15 0
0 3 0 0 0 0 0 7 0 0 0 0 0 11 0 0 0 0 0 15 0 0 0 0
0 3 0 0 0 0 0 7 7 7 7 0 0 11 11 11 11 0 0 15 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
You can see the P2 says it is in ASCII (easy to read) and a greymap. Then the next line says it is 24 pixels wide by 7 tall and that the brightest pixel is 15. Very simple to read and write. You can change the P2 to P5 and write everything after the MAXVAL in binary to save space.
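
To give a feel for how little code that needs, here is a minimal C sketch of a reader for the ASCII (P2) variant (the function name is mine, and '#' comment lines are deliberately not handled to keep it short):

#include <stdio.h>
#include <stdlib.h>

/* Minimal P2 (ASCII) PGM reader: returns a malloc'd buffer of width*height
 * grey values (0..maxval), or NULL on error. Assumes maxval <= 255 and no
 * '#' comment lines. */
unsigned char *read_pgm_ascii(const char *filename, int *width, int *height, int *maxval)
{
    FILE *f = fopen(filename, "r");
    if (!f) return NULL;

    char magic[3] = {0};
    if (fscanf(f, "%2s", magic) != 1 || magic[0] != 'P' || magic[1] != '2') {
        fclose(f);
        return NULL;
    }
    if (fscanf(f, "%d %d %d", width, height, maxval) != 3) {
        fclose(f);
        return NULL;
    }

    unsigned char *pixels = malloc((size_t)(*width) * (size_t)(*height));
    if (!pixels) { fclose(f); return NULL; }

    for (long i = 0; i < (long)(*width) * (*height); i++) {
        int v;
        if (fscanf(f, "%d", &v) != 1) { free(pixels); fclose(f); return NULL; }
        pixels[i] = (unsigned char)v;
    }
    fclose(f);
    return pixels;
}

Reading the binary (P5) variant is the same header parse followed by a single fread of width*height bytes (for maxval <= 255).
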
Now you can just use ImageMagick outside your program to convert JPEG, PNG, GIF, TIFF files to PGM like this - without needing any linking or libraries or compiler switches:
convert input.png output.pgm
convert input.jpg output.pgm
Likewise, when you have finished processing and created your resulting output file in PGM format you can convert it to JPEG or TIFF by simply reversing the parameters:
convert result.pgm result.jpg
Personally, I would install ImageMagick using homebrew. You go to the homebrew website and copy the one-liner and paste it into a terminal to install homebrew. Then you can install ImageMagick with:
brew install imagemagick
and, by the way, if you want to try out the other suggestion here, using OpenCV, then that is as easy as
brew search opencv
brew install homebrew/science/opencv
If you want a small, self-contained example of a little OpenCV project, have a look at my answer to another question here - you can also see how a project like that is possible from the command line with ImageMagick in my other answer to the same question.
Magick++
If you choose to install ImageMagick using homebrew you will get Magick++, which will allow you to write your algorithms in C and C++. It is pretty easy to use and runs on many platforms, including OSX, Windows and Linux, so it is attractive from that point of view. It also has many, many image-processing functions built right in. There is a good tutorial here, and documentation here.
Your code will look something like this:
// Read an image from a URL
Image url_image("http://www.serverName.com/image.gif");
// Read an image from the local filesystem
Image my_image("my_image.gif");
// Modify the image via its pixel cache
Pixels my_pixel_cache(my_image);
PixelPacket* pixels;
// define the view area that will be accessed via the image pixel cache
int start_x = 10, start_y = 20, size_x = 200, size_y = 100;
// return a pointer to the pixels of the defined pixel cache
pixels = my_pixel_cache.get(start_x, start_y, size_x, size_y);
// set the color of the first pixel from the pixel cache to black (x=10, y=20 on my_image)
*pixels = Color("black");
// set pixel 200 from the pixel cache to green:
// this pixel is located at x=0, y=1 in the pixel cache (x=10, y=21 on my_image)
*(pixels+200) = Color("green");
// now that the operations on my_pixel_cache have been finalized,
// ensure that the pixel cache is transferred back to my_image
my_pixel_cache.sync();
// Save the result as a BMP file
my_image.write("result.bmp");
Apple OSX Option
The other ENTIRELY SEPARATE option would be to use the tools that Apple provides for manipulating images - they are fast and easy to use, but are not going to work on Linux or Windows. So, for example, if you want to:
a) load a PNG file (or a TIFF or JPEG, just change the extension)
b) save it as a JPEG file
c) process the individual pixels
// Load a PNG file
NSImage *strImage = [[NSImage alloc] initWithContentsOfFile:@"/Users/mark/Desktop/input.png"];
// Save NSImage as JPG
NSData *imageData = [strImage TIFFRepresentation];
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:imageData];
NSDictionary *imageProps = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:1.0] forKey:NSImageCompressionFactor];
imageData = [imageRep representationUsingType:NSJPEGFileType properties:imageProps];
[imageData writeToFile:@"/Users/Mark/Desktop/result.jpg" atomically:YES];
// Access individual pixels
int w=imageRep.pixelsWide;
int h=imageRep.pixelsHigh;
int bps=imageRep.bitsPerSample;
printf("Dimensions: %dx%d\n",w,h);
printf("bps: %d\n",bps);
// Get a pointer to the uncompressed, unencoded pixel data
unsigned char *pixelData = [imageRep bitmapData];
for(int j=0;j<10;j++){
printf("Pixel %d: %d\n",j,pixelData[j]);
}
Of course, you could take the code above and easily make a little utility that converts any file format to PGM and then you could go with my first suggestion of using PGM format and wouldn't need to install ImageMagick - although it is actually dead simple with homebrew.
Bear in mind that you can mix Objective-C (as in the last example) with C++ and C in a single project using clang (Apple's compiler), so you can go ahead in C as you indicate in your question with any of the examples I have given above.
If you are new to developing on OSX, you need to go to the AppStore and download, for free, Apple's Xcode to get the compiler and libraries. Then you must do
xcode-select --install
to install the command-line tools if you wish to do traditional development using Makefiles and command-line compilation/linking.
This answer is an attempt to demonstrate how to develop with ImageMagick's MagickWand API with Xcode, and is based on the comments from the OP.
After installing ImageMagick, start a new Xcode command-line C project. Before writing any code, you need to tell llvm about the ImageMagick resources/libraries/etc.
There are many, many, ways to achieve this, but here's the quickest I can think of.
Navigate to top-level project's "Build Settings"
Under "All" (not "Basic") search for "Other Linker Flags"
Outside of Xcode, open up Terminal.app and enter the following
MagickWand-config --ldflags
Enter the output from Terminal.app as the value for "Other Linker Flags"
Back in the settings search; enter "Other C Flags"
Back in Terminal.app run the following
MagickWand-config --cflags
Enter the resulting output as the value for "Other C Flags"
Over in main.c, you should notice Xcode picking up MagickWand symbols right away.
Try the following (needs X11 installed) ...
#include <stdio.h>
#include <wand/MagickWand.h>
int main(int argc, const char * argv[]) {
MagickWandGenesis();
MagickWand * wand;
wand = NewMagickWand();
MagickReadImage(wand, "wizard:");
MagickQuantizeImage(wand, 255, GRAYColorspace, 0, MagickFalse, MagickTrue);
MagickDisplayImage(wand, ":0");
wand = DestroyMagickWand(wand);
MagickWandTerminus();
return 0;
}
... and build + run to verify.
edit
To get the gray scale (0-255) value for each pixel, you can invoke a pixel iterator (see second example here), or export the values. Here is an example of dynamically populating a list of gray values by exporting...
// Get image size from wand instance
size_t width = MagickGetImageWidth(wand);
size_t height = MagickGetImageHeight(wand);
size_t total_gray_pixels = width * height;
// Allocate memory to hold values (total pixels * size of data type)
unsigned char * blob = malloc(total_gray_pixels);
MagickExportImagePixels(wand, // Image instance
0, // Start X
0, // Start Y
width, // Number of columns (width)
height, // Number of rows (height)
"I", // Map where "I" = intensity = gray value
CharPixel, // Storage type, where unsigned char = 0 ~ 255
blob); // Destination pointer
// Dump to stdout
for (int i = 0; i < total_gray_pixels; i++ ) {
printf("Gray value # %lux%lu = %d\n", i % width, i / height, (int)blob[i]);
}
/** Example output...
* Gray value # 0x0 = 226
* Gray value # 1x0 = 189
* Gray value # 2x0 = 153
* Gray value # 3x0 = 116
* Gray value # 4x0 = 80
* ... etc
*/
You can install ImageMagick on OS/X with this command:
sudo port install ImageMagick
You can get the macports package from https://www.macports.org
I don't have much experience with Macs, but I have quite a lot with OpenCV. Now, granted, a lot of OpenCV is in C++ (which may or may not be a problem), but it definitely supports everything you want to do and more. It's very easy to work with, has lots of helpful wikis, and a very good community.
Link to installing on Mac: http://blogs.wcode.org/2014/10/howto-install-build-and-use-opencv-macosx-10-10/
Link to some OpenCV wikis: http://opencv.org/documentation.html
EDIT:
I should also mention that older versions of OpenCV are in C, and a lot of the functions are still supported if you choose to go the C route.
I want to extract the two points (i.e. their values) which are marked with a black outline in the figure. These minima points are 2 and 5. After extracting the coordinates of these marked points I want to calculate the distance between them.
The code that I am using to plot the average values of the image and to find the minima and their locations is:
I1=imread('open.jpg');
I2=rgb2gray(I1);
figure, title('open');
plot(1:size(I2,1), mean(I2,2));
hold on
horizontalAverages = mean(I2 , 2);
plot(1:size(I2,1) , horizontalAverages)
[Minimas locs] = findpeaks(-horizontalAverages)
plot(locs , -1*Minimas , 'r*')
Minimas =
-86.5647
-80.3647
-81.3588
-106.9882
-77.0765
-77.8235
-92.2353
-106.2235
-115.3118
-98.3706
locs =
30
34
36
50
93
97
110
121
127
136
It is a bit unclear from your question what you are actually looking for, but the following one-liner will get you the local minima:
% Some dummy data
x = 1:11;
y = [3 2 1 0.5 1 2 1 0 1 2 3];
min_idx = ([0 sign(diff(y))] == -1) & ([sign(diff(y)) 0] == 1);
figure
plot(x, y);
hold on;
scatter(x(min_idx), y(min_idx))
hold off;
Use the findpeaks function if you have the Signal Processing Toolbox.
[y,locs]=findpeaks(-x)
will find the local minima. This function has a ton of options to handle all kinds of special cases, so is very useful.
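
If you ever need the same test outside MATLAB, here is a rough C sketch of the strict local-minimum check (without any of findpeaks' extra options), run on the dummy data from the first answer; indices are 0-based:

#include <stdio.h>

/* Find indices of strict local minima: the previous and next values are both
 * higher. Returns the number of minima found; indices are written to locs. */
static int local_minima(const double *x, int n, int *locs)
{
    int count = 0;
    for (int i = 1; i < n - 1; i++)
        if (x[i - 1] > x[i] && x[i] < x[i + 1])
            locs[count++] = i;
    return count;
}

int main(void)
{
    double y[] = {3, 2, 1, 0.5, 1, 2, 1, 0, 1, 2, 3};
    int n = sizeof y / sizeof y[0];
    int locs[16];
    int count = local_minima(y, n, locs);

    for (int i = 0; i < count; i++)
        printf("minimum %g at index %d\n", y[locs[i]], locs[i]);
    if (count >= 2)
        printf("distance between the first two minima: %d samples\n", locs[1] - locs[0]);
    return 0;
}
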
I am trying to losslessly compress an image, and in order to take advantage of regularities, I want to convert the image from RGB to Y'CbCr. (The exact details of what I mean by RGB and Y'CbCr are not important here; the RGB data consists of three bytes, and I have three bytes to store the result in.)
The conversion process itself is pretty straightforward, but there is one problem: although the transformation is mathematically invertible, in practice there will be rounding errors. Of course these errors are small and virtually unnoticeable, but it does mean that the process is not lossless any more.
My question is: does a transformation exist, that converts three eight-bit integers (representing red, green and blue components) into three other eight-bit integers (representing a colour space similar to Y'CbCr, where two components change only slightly with respect to position, or at least less than in an RGB colour space), and that can be inverted without loss of information?
YCoCg24
Here is one color transformation I call "YCoCg24" that converts three eight-bit integers (representing red, green and blue components) into three other eight-bit (signed) integers (representing a colour space similar to Y'CbCr), and is bijective (and therefore can be inverted without loss of information):
G R B Y Cg Co
| | | | | |
| |->-(-1)->(+) (+)<-(-/2)<-| |
| | | | | |
| (+)<-(/2)-<-| |->-(+1)->(+) |
| | | | | |
|->-(-1)->(+) | | (+)<-(-/2)<-|
| | | | | |
(+)<-(/2)-<-| | | |->-(+1)->(+)
| | | | | |
Y Cg Co G R B
forward transformation reverse transformation
or in pseudocode:
function forward_lift( x, y ):
signed int8 diff = ( y - x ) mod 0x100
average = ( x + ( diff >> 1 ) ) mod 0x100
return ( average, diff )
function reverse_lift( average, signed int8 diff ):
x = ( average - ( diff >> 1 ) ) mod 0x100
y = ( x + diff ) mod 0x100
return ( x, y )
function RGB_to_YCoCg24( red, green, blue ):
(temp, Co) = forward_lift( red, blue )
(Y, Cg) = forward_lift( green, temp )
return( Y, Cg, Co)
function YCoCg24_to_RGB( Y, Cg, Co ):
(green, temp) = reverse_lift( Y, Cg )
(red, blue) = reverse_lift( temp, Co)
return( red, green, blue )
Some example colors:
color        RGB         YCoCg24
white        0xFFFFFF    0xFF0000
light grey   0xEFEFEF    0xEF0000
dark grey    0x111111    0x110000
black        0x000000    0x000000
red          0xFF0000    0xFF01FF
lime         0x00FF00    0xFF0001
blue         0x0000FF    0xFFFFFF
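
Here is a rough C translation of the pseudocode above (the names and the int8_t/uint8_t choices are mine; the right shift of the signed diff is assumed to be arithmetic, as it is on common compilers), with a brute-force check that every 24-bit colour round-trips:

#include <stdio.h>
#include <stdint.h>

/* Forward lifting step: (x, y) -> (average, diff), arithmetic mod 0x100.
 * The cast to int8_t wraps mod 0x100 on two's-complement targets. */
static void forward_lift(uint8_t x, uint8_t y, uint8_t *average, int8_t *diff)
{
    *diff = (int8_t)(y - x);
    *average = (uint8_t)(x + (*diff >> 1));
}

/* Inverse lifting step: (average, diff) -> (x, y). */
static void reverse_lift(uint8_t average, int8_t diff, uint8_t *x, uint8_t *y)
{
    *x = (uint8_t)(average - (diff >> 1));
    *y = (uint8_t)(*x + diff);
}

static void rgb_to_ycocg24(uint8_t r, uint8_t g, uint8_t b,
                           uint8_t *Y, int8_t *Cg, int8_t *Co)
{
    uint8_t temp;
    forward_lift(r, b, &temp, Co);
    forward_lift(g, temp, Y, Cg);
}

static void ycocg24_to_rgb(uint8_t Y, int8_t Cg, int8_t Co,
                           uint8_t *r, uint8_t *g, uint8_t *b)
{
    uint8_t temp;
    reverse_lift(Y, Cg, g, &temp);
    reverse_lift(temp, Co, r, b);
}

int main(void)
{
    /* Round-trip every 24-bit colour to confirm the transform is invertible. */
    for (unsigned c = 0; c < 1u << 24; c++) {
        uint8_t r = c >> 16, g = (c >> 8) & 0xFF, b = c & 0xFF;
        uint8_t Y, r2, g2, b2;
        int8_t Cg, Co;
        rgb_to_ycocg24(r, g, b, &Y, &Cg, &Co);
        ycocg24_to_rgb(Y, Cg, Co, &r2, &g2, &b2);
        if (r != r2 || g != g2 || b != b2) {
            printf("mismatch at %06X\n", c);
            return 1;
        }
    }
    printf("all 16.7M colours round-trip losslessly\n");
    return 0;
}
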
G, R-G, B-G color space
Here is another color transformation that converts three eight-bit integers into three other eight-bit integers:
function RGB_to_GCbCr( red, green, blue ):
Cb = (blue - green) mod 0x100
Cr = (red - green) mod 0x100
return( green, Cb, Cr)
function GCbCr_to_RGB( green, Cb, Cr ):
blue = (Cb + green) mod 0x100
red = (Cr + green) mod 0x100
return( red, green, blue )
Some example colors:
color        RGB         GCbCr
white        0xFFFFFF    0xFF0000
light grey   0xEFEFEF    0xEF0000
dark grey    0x111111    0x110000
black        0x000000    0x000000
comments
There seem to be quite a few lossless color space transforms.
Several lossless color space transforms are mentioned in Henrique S. Malvar, et al. "Lifting-based reversible color transformations for image compression";
there's the lossless colorspace transformation in JPEG XR;
the original reversible color transform (ORCT) used in several "lossless JPEG" proposals;
G, R-G, B-G color space;
etc.
Malvar et al seem pretty excited about the 26-bit YCoCg-R representation of a 24-bit RGB pixel.
However, nearly all of them require more than 24 bits to store the transformed pixel color.
The "lifting" technique I use in YCoCg24 is similar to the one in Malvar et al and to the lossless colorspace transformation in JPEG XR.
Because addition is reversible (and addition modulo 0x100 is bijective), any transform from (a,b) to (x,y) that can be produced by the following Feistel network is reversible and bijective:
a b
| |
|->-F->-(+)
| |
(+)-<-G-<-|
| |
x y
where (+) indicates 8-bit addition (modulo 0x100), a b x y are all 8-bit values, and F and G indicate any arbitrary function.
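
As a quick check of that claim, here is a small C sketch (F and G are arbitrary made-up functions) that pushes every (a, b) pair through one round of the network and back:

#include <stdint.h>
#include <stdio.h>

/* Any functions at all can be used for F and G; these are arbitrary examples. */
static uint8_t F(uint8_t v) { return (uint8_t)(v * 37 + 11); }
static uint8_t G(uint8_t v) { return (uint8_t)(v ^ 0x5A); }

/* One round of the network in the diagram: y = b + F(a), x = a + G(y), mod 0x100. */
static void net_forward(uint8_t a, uint8_t b, uint8_t *x, uint8_t *y)
{
    *y = (uint8_t)(b + F(a));
    *x = (uint8_t)(a + G(*y));
}

/* Undo the additions in the reverse order. */
static void net_inverse(uint8_t x, uint8_t y, uint8_t *a, uint8_t *b)
{
    *a = (uint8_t)(x - G(y));
    *b = (uint8_t)(y - F(*a));
}

int main(void)
{
    /* Check every (a, b) pair round-trips, whatever F and G happen to be. */
    for (int a = 0; a < 256; a++)
        for (int b = 0; b < 256; b++) {
            uint8_t x, y, a2, b2;
            net_forward((uint8_t)a, (uint8_t)b, &x, &y);
            net_inverse(x, y, &a2, &b2);
            if (a2 != a || b2 != b) { printf("broken at %d,%d\n", a, b); return 1; }
        }
    printf("reversible for every 8-bit (a, b) pair\n");
    return 0;
}
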
details
Why do you only have 3 bytes to store the result in?
That sounds like a counter-productive premature optimization.
If your goal is to losslessly compress an image into as small a compressed file as possible in a reasonable amount of time, then the size of the intermediate stages is irrelevant.
It may even be counter-productive --
a "larger" intermediate representation (such as Reversible Colour Transform or the 26-bit YCoCg-R) may result in smaller final compressed file size than a "smaller" intermediate representation (such as RGB or YCoCg24).
EDIT:
Oopsies.
Either one of "(x) mod 0x100" or "(x) & 0xff" gives exactly the same results --
the results I wanted.
But somehow I jumbled them together to produce something that wouldn't work.
I did find one such solution, used by JPEG 2000. It is called a Reversible Colour Transform (RCT), and it is described at Wikipedia as well as the JPEG site (though the rounding methods are not consistent). The results are not as good as with the irreversible colour transform, however.
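
For completeness, here is a small C sketch of the RCT as I understand it from those descriptions (the function names are mine); note that Cb and Cr land in the range -255..255, so they need 9 bits each and the result no longer fits in 24 bits:

#include <stdio.h>

/* Floor division by 4 that also works for negative values
 * (plain C integer division truncates toward zero instead). */
static int floordiv4(int v) { return (v >= 0) ? v / 4 : -((-v + 3) / 4); }

/* JPEG 2000 reversible colour transform (RCT). */
static void rct_forward(int R, int G, int B, int *Y, int *Cb, int *Cr)
{
    *Y  = floordiv4(R + 2 * G + B);
    *Cb = B - G;
    *Cr = R - G;
}

static void rct_inverse(int Y, int Cb, int Cr, int *R, int *G, int *B)
{
    *G = Y - floordiv4(Cb + Cr);
    *R = Cr + *G;
    *B = Cb + *G;
}

int main(void)
{
    /* Round-trip all 24-bit colours to check losslessness. */
    for (int R = 0; R < 256; R++)
        for (int G = 0; G < 256; G++)
            for (int B = 0; B < 256; B++) {
                int Y, Cb, Cr, r, g, b;
                rct_forward(R, G, B, &Y, &Cb, &Cr);
                rct_inverse(Y, Cb, Cr, &r, &g, &b);
                if (r != R || g != G || b != B) {
                    printf("mismatch at %02X%02X%02X\n", R, G, B);
                    return 1;
                }
            }
    printf("RCT round-trips losslessly (at the cost of 9-bit chroma)\n");
    return 0;
}
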
I also found a better method described in the paper Improved Reversible Integer-to-integer Color Transforms by Soo-Chang Pei and Jian-Jiun Ding. However, the methods described in that paper, and the method used by JPEG 2000, require extra bits to store the result. This means that the transformed values do not fit in 24 bits any more.
I was reading about dp, dip, px and sp measurements, but I still have some questions about dp/dpi vs ppi vs px vs inch. I am not able to compare them... is an inch the largest?
They say 160 dpi means 160 pixels per one inch. Does that mean 1 inch contains 160 pixels?
They also say 1 pixel on a 160 dpi screen = 1 dp. Does that mean 1 pixel and 1 dp are equal?
And lastly, why should we use dp instead of px? I understand that it is ideal, but why?
You should (almost) always use flexible sizing units, like dp, which is Density-Independent Pixels, because 300px on one device is not necessarily the same amount of screen real estate as 300px on another. The biggest practical implication is that your layout would look significantly different on devices with a different density than the one your design targeted.
dp or dip means Density-independent Pixels
dpi or ppi means Dots (or Pixels) Per Inch
inch is a physical measurement connected to actual screen size
px means Pixels — a pixel fills an arbitrary amount of screen area depending on density.
For example, on a 160dpi screen, 1dp == 1px == 1/160in, but on a 240dpi screen, 1dp == 1.5px. So no, 1dp != 1px. There is exactly one case when 1dp == 1px, and that's on a 160dpi screen. Physical measurement units like inches should never be part of your design—that is, unless you're making a ruler.
A simple formula for determining how many pixels 1dp works out to is px = dp * (dpi / 160).
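
That formula is all there is to it. As a plain-C sketch of the arithmetic (not Android API code; on Android you would normally use the density reported by getDisplayMetrics(), as a later answer here shows):

#include <stdio.h>

/* px = dp * (dpi / 160): 160 dpi (mdpi) is the baseline density. */
static float dp_to_px(float dp, float dpi) { return dp * (dpi / 160.0f); }

int main(void)
{
    const float dpis[] = {120, 160, 240, 320, 480, 640};   /* ldpi .. xxxhdpi */
    for (int i = 0; i < 6; i++)
        printf("48 dp at %3.0f dpi = %5.1f px\n", dpis[i], dp_to_px(48, dpis[i]));
    return 0;
}
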
dp is a physical measurement like inches. (Yes, it is. Read on.)
"A dp corresponds to the physical size of a pixel at 160 dpi" (https://developer.android.com/training/multiscreen/screendensities.html#TaskUseD)
The physical size of a pixel at 160 dpi is exactly 1/160th of an inch. Therefore the size of a dp is 1/160th of an inch. 160 dp = 1 inch.
Px is a somewhat arbitrary unit of measurement on a screen.
For examples of what dp converts to in px on different devices, see here:
https://stackoverflow.com/a/39495538/984003
How do dp, dip, dpi, ppi, pixels and inches relate?
For the purpose of android development:
dp = dip
dpi = ppi
inch x dpi = pixels
dp = 160 x inch
dp = 160*pixels/dpi
So, on a 160dpi phone (mdpi):
2 inches = 320 dp
2 inches = 320 pixels
On a 180 dpi phone:
2 inches = 320 dp
2 inches = 360 pixels
Note that 2 inches is ALWAYS 320dp, independent of screen size. A dp is a physical distance of 1/160th of an inch.
The dp to pixels formula is interesting:
dp = 160*pixels/dpi
Is equivalent to:
dp = pixels/(dpi/160)
dpi/160 is an interesting factor. It's the relative density compared to Android's mdpi bin, and the amount you must scale your graphics by for the various resource bins. You'll see that factor mentioned a few times on this page, 0.75 being the factor for ldpi.
I will explain using an example.
float density = context.getResources().getDisplayMetrics().density;
float px = someDpValue * density;
float dp = somePxValue / density;
density equals
.75 on ldpi (120 dpi)
1.0 on mdpi (160 dpi; baseline)
1.5 on hdpi (240 dpi)
2.0 on xhdpi (320 dpi)
3.0 on xxhdpi (480 dpi)
4.0 on xxxhdpi (640 dpi)
so for example,
I have a Samsung S5 with 432 dpi (http://dpi.lv/#1920×1080#5.1″).
So, density = 432/160 = phone's dpi/baseline = 2.7
Let's say my top bar is 48dp. This is referenced to the baseline (160dpi).
So, with respect to my S5, it will be 48 * 2.7 = 129.6 px.
Then if I want to see the actual height:
it will be (48 * 2.7) px / 432 dpi = 0.3 inches.
DP is a unit that factors out pixel density: when you use DP, your layout will scale to other similarly sized screens with different pixel densities.
Occasionally you actually want pixels though, and when you deal with dimensions in code you are always dealing with real pixels, unless you convert them.
When not referring to android, but rather to monitors,
DP actually means Dot Pitch, which originally came about with CRT monitors. It refers to the diagonal distance between two adjacent pixels in mm. In LCD monitors a pixel is larger, and assuming the pixels are right next to each other without a gap (usually there is a very small gap, but for simplicity we will assume it is zero), the diagonal distance between the centres of two adjacent pixels is equal to the diagonal size of one pixel. The lower the DP, the crisper the image.
DP = 25.4÷ppi
0.25 DP is standard, jagged edge
0.20 DP is considered much clearer
160 ppi = 0.158 DP
So DiP is actually a rounded approximation of 1000 x DP, and the 2 are not equivalent, just very close approximations.
As mentioned before, you should not base things off of pixel size since you can zoom. This is for how clear something will appear on the screen.
In monitors, if you want clarity at 20 inches (the average distance between monitor and eye) (< 0.20 DP for medium clarity / 0.16 DP for ultra sharp), this would equate to:
1920x1080 (HD) 17.4 inch/ 14 inch
3840x2160 (4K) 35 inch / 27.8 inch
A high resolution phone might have a DP of 0.05 (approximately 500 ppi), about 3 times the pixel density of an ultra sharp monitor, but viewed 3 times closer.
Anything larger than these dimensions for the monitor size will appear pixelated, smaller would be clearer.
Also noteworthy is that 72 pixels per inch is a Mac standard, and very old. 96 ppi is what Windows resolution is referenced to. Photoshop was originally designed for the Mac.