I'm new to PDFlib. When I use PDFlib to create a PDF, I find that the DPI of the page (paper) is 72, and I want to set the DPI to 300 for print use, but I don't know how to do that with PDFlib.
As flomei already mentioned, the PDF format itself does not have any kind of resolution. For placing content or specifying the page dimensions, PDFlib uses PDF's default coordinate system, which uses the DTP point as its unit. From the PDFlib 9.2 Tutorial, chapter 3.2.1:
PDF’s default coordinate system is used within PDFlib. The default coordinate system (or default user space) has the origin in the lower left corner of the page, and uses the DTP point as unit:
1 pt = 1/72 inch = 25.4/72 mm = 0.3528 mm
When you want to address positions in a different unit, you can scale the coordinate system. Please check out the same PDFlib Tutorial chapter, section "Using metric coordinates":
p.scale(28.3465, 28.3465);
After this call PDFlib will interpret all coordinates (except for interactive features, see below) in centimeters since 72/2.54 = 28.3465.
Of course you can use other scale values as well.
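As a minimal sketch using the PDFlib Java binding (the file name, A4 page size and rectangle coordinates are just example values):

pdflib p = new pdflib();
if (p.begin_document("metric.pdf", "") == -1)
    throw new RuntimeException("Error: " + p.get_errmsg());
p.begin_page_ext(595, 842, "");   // A4 page, dimensions given in DTP points
p.scale(28.3465, 28.3465);        // from here on, 1 unit = 1 cm
p.rect(2, 2, 10, 5);              // a 10 cm x 5 cm rectangle, 2 cm from the lower left corner
p.stroke();
p.end_page_ext("");
p.end_document("");
p.delete();

Regarding the 300 dpi requirement: for print output this usually only matters for raster images you place on the page (an image just needs enough pixels for its physical size on the page); vector text and graphics are rendered at the full resolution of the output device.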
I followed multiple examples to train a custom object detector in TensorFlow.js. The main problem I am facing is that every one of them uses a pretrained model.
Pretrained models are fine for general use cases, but they fail in custom scenarios. For example, take this example from the official TensorFlow.js examples: it uses MobileNet, and MobileNet has a 224x224 image size restriction, which defeats the whole purpose, because my images are big and not all of the same aspect ratio, so resizing is not an option.
I have tried multiple examples; they all follow the same path one way or another.
What I want:
Any example by which I can train a custom object detector from scratch in TensorFlow.js.
Although the question sounds simple, trust me, I have been searching for this for days. Any help will be greatly appreciated. Thanks
Currently it is not yet possible to use the TensorFlow Object Detection API in Node.js. But the image size should not be a restriction: instead of resizing, you can crop your image and keep only the part that contains the object to be detected.
One approach would be to partition the image into 224x224 tiles and run the model on every partition, but what if the object lies between two partitions?
The image does not need to be partitioned. When labelling the image, you will need to know the x, y coordinates (from the top left) and the w, h of the detected box. You only need to crop a part of the image that will contain the box. Cropping at the coordinates x - (224-w)/2, y - (224-h)/2 can be a good start. There are two issues with these coordinates (see the sketch after this list):
the detected boxes will always be in the center, so the training will be biased. To prevent this, a random factor can be used: x - (224-w)/r, y - (224-h)/r, where r can be taken randomly from [1, 10], for instance
if a detected box is bigger than 224x224, you might first resize the image, keeping its aspect ratio, before cropping. In this case the box size (w, h) will need to be readjusted according to the scale used for the resizing
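A rough sketch of that cropping logic in plain JavaScript (the function name and the clamping to the image bounds are my own additions; it assumes the image is at least 224x224):

// (x, y) is the labelled box's top-left corner, (w, h) its size, in pixels
function cropWindow(x, y, w, h, imgW, imgH) {
    const size = 224;
    const r = 1 + Math.random() * 9;             // random factor in [1, 10]
    let cx = Math.round(x - (size - w) / r);     // crop origin, jittered so the
    let cy = Math.round(y - (size - h) / r);     // box is not always centered
    cx = Math.min(Math.max(cx, 0), imgW - size); // keep the window inside the image
    cy = Math.min(Math.max(cy, 0), imgH - size);
    return { cx: cx, cy: cy, size: size };
}

The labelled box then becomes (x - cx, y - cy, w, h) relative to the cropped window.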
My Codename One app features a MapContainer. I need to show points of interest (POIs) on it, whose coordinates reside on the server. There can be hundreds (maybe thousands in the future) of such POIs on the server. That's why I would like to download from the server only the POIs that can be shown on the map. Consequently, I need to get the map boundaries to pass them to the server.
I read this for Android and this other SO question for iOS, and the key seems to be to get the map Projection and the map bounding box. However, neither the getProjection() method nor the getBoundingBox() method seems to be exposed.
A workaround could be to combine the coordinates from getCameraLocation(), which is the map center, with getZoom() to infer those boundaries. But the result may vary depending on the device (the shown area can be larger).
How can I get the map boundaries in Codename One?
Any help appreciated,
Cheers,
The problem is in the Javadocs for getCoordAtPosition(). This will be corrected. getCoordAtPosition() expects absolute coordinates, not relative ones.
E.g.
Coord NE = currentMap.getCoordAtPosition(currentMap.getWidth(), 0);
Coord SW = currentMap.getCoordAtPosition(0, currentMap.getHeight());
Should be
Coord NE = currentMap.getCoordAtPosition(currentMap.getAbsoluteX() + currentMap.getWidth(), currentMap.getAbsoluteY());
Coord SW = currentMap.getCoordAtPosition(currentMap.getAbsoluteX(), currentMap.getAbsoluteY() + currentMap.getHeight());
I tried this out on the coordinates that you provided and it returns valid results.
EDIT March 21, 2017: It turns out that some of the platforms expected relative coordinates, and others expected absolute coordinates. I have had to standardize it, and I have chosen to use relative coordinates across all platforms to be consistent with the Javadocs. So your first attempt:
Coord NE = currentMap.getCoordAtPosition(currentMap.getWidth(), 0);
Coord SW = currentMap.getCoordAtPosition(0, currentMap.getHeight());
will now work in the latest version of the library.
I have also added another method, getBoundingBox(), that will get the bounding box for you without worrying about relative/absolute coordinates.
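With that, the whole question reduces to something like this (a sketch; I'm assuming the corner accessors on com.codename1.maps.BoundingBox, so double-check the names):

BoundingBox box = currentMap.getBoundingBox();
Coord ne = box.getNorthEast();   // pass these two corners to the server
Coord sw = box.getSouthWest();   // to fetch only the POIs in the visible area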
This is probably something that can be exposed easily by forking the project and providing a pull request. We're currently working on updating the map component so this is a good time to make changes and add features.
Depending on which skin I use in the simulator, the result of the following method differs:
Display.getInstance().getDeviceDensity();
The results have nothing to do with the real device density: for a Xoom skin it outputs 30 (149 ppi in reality), and for an iPhone 6 it outputs 50 (326 ppi in reality).
I noticed this because I need to translate a character height measured in GIMP (72 dpi) into device pixels so that it looks the same as in the image.
Any help on that topic would be appreciated!
Cheers
The JavaDocs for getDeviceDensity state:
Returns one of the density variables appropriate for this device, notice that density doesn't always correspond to resolution and an implementation might decide to change the density based on DPI constraints.
Returns: one of the DENSITY constants of Display
The DENSITY constants refer to one of these.
Notice that you can also use convertToPixels, which is probably a far better API to use. The density API is mostly used to pick the right multi-image and should rarely be used in user code.
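For example (the 5 millimeter value is arbitrary):

// convertToPixels() takes a size in DIPs (roughly millimeters) and returns
// the pixel count on the current device; the flag selects the horizontal
// or vertical DPI
int heightInPixels = Display.getInstance().convertToPixels(5, false);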
I have two black-and-white printers, two color printers, and some virtual printers (Fax, CutePDF Writer, etc).
According to the DC_COLORDEVICE query to DeviceCapabilities, only the Fax virtual printer is black-and-white.
According to PLANES and BITSPIXEL queries to GetDeviceCaps, all of the printers have one plane, and only Fax and CutePDF have 1 bit/pixel (are black-and-white).
According to the NUMCOLORS query to GetDeviceCaps, only Fax is black-and-white.
I'm not excited about querying the driver directly, so I haven't tried it yet.
How do I accurately detect a color printer with Win32?
Bummer that DC_COLORDEVICE doesn't give the right answer. The rest of your findings don't surprise me.
You could try creating an information context for the printer with CreateIC, and then use GetDeviceCaps to check the COLORRES property.
(An information context is like a device context that you can query but can't actually draw to. It's useful when you want to know what a printer driver is going to do without actually creating a real device context, which may require the printer being online.)
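For example, a minimal sketch of that check in C (the printer name is a placeholder; error handling omitted):

HDC ic = CreateICW(L"WINSPOOL", L"My Printer", NULL, NULL);   // information context for the printer
if (ic != NULL) {
    int colorres = GetDeviceCaps(ic, COLORRES);  // actual color resolution, in bits per pixel
    DeleteDC(ic);
}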
Checking the number of planes is useless, since everything (to a good approximation) uses a single plane. The number of bits per pixel doesn't actually tell you if those pixels can be color or just grayscale (or just palette entries).
Another idea is to look at the dmColor field in the default DEVMODE for the device.
I had to solve the same problem many, many years ago (before DeviceCapabilities), but I don't remember how I did it.
UPDATE 2022-12-27: I just came across my own answer while trying to figure out how to handle the Fax virtual printer. When querying DeviceCapabilitiesW with DC_COLORDEVICE, the Fax driver returns a value of -1, and GetLastError reports 122 (ERROR_INSUFFICIENT_BUFFER: "The data area passed to a system call is too small"). That's weird, since there's no requirement to pass a buffer for this query.
My current solution is to check everything. If DeviceCapabilities with DC_COLORDEVICE doesn't explicitly say color, OR if the DEVMODE's dmFields bitmask doesn't have the DM_COLOR bit set, OR if the DEVMODE's dmColor field isn't explicitly DMCOLOR_COLOR, OR if GetDeviceCaps NUMCOLORS isn't at least 8, then I assume it's a monochrome printer or that the user selected monochrome for this print job.
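A rough C sketch of that combined check (the function name is mine; error handling is trimmed, and you need to link against winspool.lib):

#include <windows.h>
#include <winspool.h>

// Returns TRUE only if every signal agrees that the printer prints color.
BOOL IsColorPrinter(LPWSTR name, LPWSTR port)
{
    if (DeviceCapabilitiesW(name, port, DC_COLORDEVICE, NULL, NULL) != 1)
        return FALSE;                            // driver doesn't explicitly say "color"

    BOOL color = FALSE;
    HANDLE hPrinter;
    if (OpenPrinterW(name, &hPrinter, NULL)) {
        LONG cb = DocumentPropertiesW(NULL, hPrinter, name, NULL, NULL, 0);
        DEVMODEW *dm = (cb > 0) ? (DEVMODEW *)LocalAlloc(LPTR, cb) : NULL;
        if (dm != NULL
                && DocumentPropertiesW(NULL, hPrinter, name, dm, NULL, DM_OUT_BUFFER) == IDOK
                && (dm->dmFields & DM_COLOR) != 0
                && dm->dmColor == DMCOLOR_COLOR) {
            HDC ic = CreateICW(L"WINSPOOL", name, NULL, dm);  // information context, no drawing
            if (ic != NULL) {
                color = GetDeviceCaps(ic, NUMCOLORS) >= 8;    // last sanity check
                DeleteDC(ic);
            }
        }
        if (dm != NULL)
            LocalFree(dm);
        ClosePrinter(hPrinter);
    }
    return color;
}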
Given an image (i.e. a newspaper, a scanned newspaper, a magazine, etc.), how do I detect the regions containing text? I only need to know the regions and remove them; I don't need to do text recognition.
The purpose is to remove these text areas in order to speed up my feature extraction procedure, as these text areas are meaningless for my application. Does anyone know how to do this?
BTW, it would be good if this could be done in Matlab!
Best!
You can use Stroke Width Transform (SWT) to highlight text regions.
Using my mex implementation posted here, you can do the following:
img = imread('http://i.stack.imgur.com/Eyepc.jpg');
[swt swtcc] = SWT( img, 0, 10 );
Playing with internal parameters of the edge-map extraction and image filtering in SWT.m can help you tweak the resulting mask to your needs.
To get this result, I used these parameters for the edge map computation in SWT.m:
edgeMap = single( edge( img, 'canny', [0.05 0.25] ) );
Text detection in natural images is an active area of research in the computer vision community; you can refer to the ICDAR papers. But in your case I think it should be simple enough: since your text comes from newspapers or magazines, it should be of fixed size and horizontally oriented.
So you can apply a scanning window of a fixed size, say 32x32. Train on the ICDAR 2003 training dataset, taking windows that contain text as positives. You can use a small feature set of color and gradients and train an SVM that classifies whether or not a window contains text.
For reference, see http://crypto.stanford.edu/~dwu4/ICDAR2011.pdf. For code, you can try the authors' homepages.
This example in the Computer Vision System Toolbox in Matlab shows how to detect text using MSER regions.
If your image is well binarized and you know the usual size of the text, you could use the HorizontalRunLengthSmoothing and VerticalRunLengthSmoothing algorithms. They are implemented in the open-source library AForge.NET, but it should be easy to reimplement them in Matlab.
Intersecting the result images from these two algorithms will give you a good indication of which regions contain text; it is not perfect, but it is fast.
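If you want a quick Matlab starting point, morphological closing with line structuring elements is a rough approximation of run-length smoothing (bw is the binarized image with text pixels set to 1; the gap thresholds are guesses you will need to tune):

hGap = 25; vGap = 15;                            % assumed maximum gap sizes in pixels
smoothH = imclose(bw, strel('line', hGap, 0));   % horizontal smoothing
smoothV = imclose(bw, strel('line', vGap, 90));  % vertical smoothing
textMask = smoothH & smoothV;                    % intersection marks likely text regions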