I'm developing a small proof of concept on a small ARM SoC, using ImageMagick as the UI/OSD builder. While trying to improve the fanciness of the UI with transparent gradients, I found a strange performance problem with ImageMagick.
Files:
background_with_alpha_gradient.png is http://i.imgur.com/zvnaubR.png.
icon.bmp is a random small image (105x170) with no alpha (http://i.imgur.com/XKk1Vrp.png)
The idea is to use background_with_alpha_gradient.png as, well, the background, and then draw the icon and some text over it.
$ time convert /tmp/background_with_alpha_gradient.png /tmp/icon.bmp -compose Atop -composite /tmp/foo.png
real 0m0.520s
user 0m0.480s
sys 0m0.030s
Pretty fast. The problem happens when I try to change the position of the icon:
$ time convert prototype_client/assets/base_transp_workaround.png /tmp/icon.bmp -geometry +49+22 -compose Atop -composite /tmp/foo.png
real 0m5.685s
user 0m5.530s
sys 0m0.080s
I'm also quite intrigued by why composing with Copy is so much slower than using Atop:
$ time convert prototype_client/assets/base_transp_workaround.png /tmp/icon.bmp -compose Copy -composite foo.png
real 0m5.379s
user 0m5.310s
sys 0m0.050s
Any ideas? Thanks :)
I created a font binary file from a TTF file, and I can place it on the TFT screen of the TTGO T5 T-Display just fine. Looks great! I have been looking for 2 days for how to center this text on screen. I cannot find the format of the .vlw file, or the format of the data I included in the sketch to print from (converted from the .vlw format by an online site), and I can't find a routine that does the centering for me.
I am using TFT_eSPI, and it does not contain a getTextBounds routine. There is one in the Adafruit_GFX library, and I included that, but it is not available any way I have tried, and I can't read the library well enough to adapt it to what I need. Too deep for me at this time, especially since I don't know the data file format. I have been programming for decades, but I can't make this stuff up!
So, a simple question with a complex answer, it seems: how do I center a proportional font (invoked by name, not one of the numbered default fonts) on the TTGO TFT screen using an ESP32?
This makes my brain hurt. Help, please...
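For what it's worth, TFT_eSPI has its own measuring and alignment calls (textWidth, fontHeight, setTextDatum) that stand in for Adafruit_GFX's getTextBounds. A minimal sketch, assuming the .vlw font was converted to a header array named NotoSans24 (that name is made up; use whatever your converter produced):
#include <TFT_eSPI.h>
#include "NotoSans24.h"  // hypothetical .vlw font converted to a C header array

TFT_eSPI tft = TFT_eSPI();

void setup() {
  tft.init();
  tft.setRotation(1);
  tft.fillScreen(TFT_BLACK);
  tft.loadFont(NotoSans24);    // load the proportional (smooth) font
  tft.setTextDatum(MC_DATUM);  // datum: middle-centre of the rendered string
  // With MC_DATUM, (x, y) is treated as the centre of the text, so passing
  // the screen centre centres the string both horizontally and vertically.
  // For manual placement, tft.textWidth("...") and tft.fontHeight() give
  // the bounds instead of getTextBounds.
  tft.drawString("Hello, world", tft.width() / 2, tft.height() / 2);
}

void loop() {}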
I'm trying to get pixel data from my X11 instance. I've seen this thread (How do take a screenshot correctly with xlib?), and the double for loop it uses takes far too long for me (over a million iterations; the system I'm building requires the highest efficiency possible, and sitting around for 600 milliseconds is just not an option). Is there no way to just get a raw array of pixels and avoid the loop? I know the XImage struct has a "data" member which is supposed to contain all of the pixels, but the layout it uses is foreign to me. Any help would be greatly appreciated! The best end game here would be for X11 to write directly to /dev/fb0.
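For illustration, a minimal C++ sketch of reading XImage::data directly, assuming the common 32-bit TrueColor ZPixmap case (4 bytes per pixel, laid out B, G, R, X on a little-endian machine, rows padded to bytes_per_line); the MIT-SHM extension (XShmGetImage) can additionally avoid the copy through the X socket:
#include <X11/Xlib.h>
#include <cstddef>

int main() {
    Display *dpy = XOpenDisplay(nullptr);
    if (!dpy) return 1;

    Window root = DefaultRootWindow(dpy);
    XWindowAttributes attr;
    XGetWindowAttributes(dpy, root, &attr);

    // One round trip fetches every pixel into a single contiguous buffer;
    // no per-pixel XGetPixel loop is needed.
    XImage *img = XGetImage(dpy, root, 0, 0, attr.width, attr.height,
                            AllPlanes, ZPixmap);
    if (!img) return 1;

    // img->data holds img->height rows, each img->bytes_per_line bytes long,
    // with img->bits_per_pixel bits per pixel.
    for (int y = 0; y < img->height; ++y) {
        const unsigned char *row = (const unsigned char *)img->data
                                   + (std::size_t)y * img->bytes_per_line;
        (void)row;  // e.g. memcpy the row to a framebuffer here
    }

    XDestroyImage(img);
    XCloseDisplay(dpy);
    return 0;
}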
Given an image (e.g. a newspaper, a scanned newspaper, a magazine), how do I detect the regions containing text? I only need to find those regions and remove them; I don't need to do text recognition.
The purpose is to remove these text areas so that my feature extraction procedure runs faster, since the text areas are meaningless for my application. Does anyone know how to do this?
BTW, it would be good if this could be done in Matlab!
Best!
You can use Stroke Width Transform (SWT) to highlight text regions.
Using my mex implementation posted here, you can:
img = imread('http://i.stack.imgur.com/Eyepc.jpg');
[swt, swtcc] = SWT(img, 0, 10);
Playing with internal parameters of the edge-map extraction and image filtering in SWT.m can help you tweak the resulting mask to your needs.
To get this result, I used these parameters for the edge map computation in SWT.m:
edgeMap = single( edge( img, 'canny', [0.05 0.25] ) );
Text detection in natural images is an active area of research in the computer vision community; you can refer to the ICDAR papers. But in your case I think it should be simple enough: since your text comes from newspapers or magazines, it should be of a fixed size and horizontally oriented.
So you can apply a scanning window of a fixed size, say 32x32, and train it on the ICDAR 2003 training dataset using positive windows that contain text. You can use a small feature set of color and gradients and train an SVM that gives a positive or negative result depending on whether a window contains text.
For reference, see http://crypto.stanford.edu/~dwu4/ICDAR2011.pdf. For code, you can try the authors' homepages.
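A rough C++ sketch of the scanning-window scheme described above; classifyWindow is a hypothetical stand-in for the trained SVM, not a real library call:
#include <utility>
#include <vector>

// Hypothetical placeholder for the trained SVM: a real version would
// compute the colour/gradient features of the 32x32 window at (x, y)
// and evaluate the SVM decision function.
bool classifyWindow(const std::vector<unsigned char>& gray,
                    int imgW, int x, int y) {
    (void)gray; (void)imgW; (void)x; (void)y;
    return false;  // replace with the actual classifier
}

// Slide a fixed-size window across the image and record the windows
// the classifier marks as text.
std::vector<std::pair<int, int>> findTextWindows(
        const std::vector<unsigned char>& gray, int imgW, int imgH) {
    const int win = 32;     // window size, as suggested above
    const int stride = 16;  // overlap windows by half for better coverage
    std::vector<std::pair<int, int>> hits;
    for (int y = 0; y + win <= imgH; y += stride)
        for (int x = 0; x + win <= imgW; x += stride)
            if (classifyWindow(gray, imgW, x, y))
                hits.emplace_back(x, y);  // top-left corner of a text window
    return hits;
}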
This example in the Computer Vision System Toolbox in Matlab shows how to detect text using MSER regions.
If your image is well binarized and you know the usual size of the text, you could use the HorizontalRunLengthSmoothing and VerticalRunLengthSmoothing algorithms. They are implemented in the open-source library AForge.NET, but it should be easy to reimplement them in Matlab.
The intersection of the result images from these two algorithms gives a good indication that a region contains text. It is not perfect, but it is fast.
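A minimal C++ sketch of the horizontal pass, assuming a binary image with 0 = ink and 255 = background; the vertical pass is the same loop run down each column, and ANDing the two results gives the text mask described above:
#include <cstddef>
#include <vector>

// Horizontal run-length smoothing: white runs shorter than `threshold`
// are filled with black, fusing adjacent characters into solid blocks.
void horizontalRLS(std::vector<unsigned char>& img, int w, int h, int threshold) {
    for (int y = 0; y < h; ++y) {
        unsigned char *row = &img[(std::size_t)y * w];
        int lastBlack = -1;  // x of the previous black pixel in this row
        for (int x = 0; x < w; ++x) {
            if (row[x] == 0) {
                if (lastBlack >= 0 && x - lastBlack <= threshold)
                    for (int k = lastBlack + 1; k < x; ++k)
                        row[k] = 0;  // fill the short white run
                lastBlack = x;
            }
        }
    }
}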
This question isn't exactly about Touch Develop; it's really about basic programming structure and syntax.
What I am trying to do is create a simple compass based on the phone's heading capability. The heading capability just spits out degree readings to several (like 12) decimal places.
Anyway, even just letting the phone spit out the heading, the phone eventually crashes. Why is that? Is it running out of memory?
The reason I came here is this:
I want to update the page with a picture rotated according to the degree readout, and I can't figure out how to express something like "if 0 < x < 1, post this picture", since the heading readout varies like 321.18364947363 and 321.10243635471.
So currently I am testing this: several if / else if statements saying that if the heading output is 1, post the picture with a 1-degree rotation; if it is 2, post the picture with a 2-degree rotation; and so on. This definitely and reliably crashes the phone. Why? Memory?
If you are a Touch Develop user: would it be easier and more sane to simply take a round object, center it in relation to a square image, and use it as a sprite or object whose angular velocity and position you can set directly, instead of using 360 individual images?
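For the quantization part, the usual trick is to round the fractional heading to a whole degree and use that as an index, instead of chaining 360 comparisons. A sketch in C++ (Touch Develop's math operations should allow the same rounding in a single expression):
#include <cmath>
#include <cstdio>

// Round the fractional heading to the nearest whole degree, keeping the
// result in the range 0..359 so it can index one of 360 rotations.
int headingToIndex(double heading) {
    int deg = (int)std::lround(heading) % 360;  // 321.18364947363 -> 321
    if (deg < 0) deg += 360;                    // guard against negative input
    return deg;
}

int main() {
    std::printf("%d\n", headingToIndex(321.18364947363));  // prints 321
    std::printf("%d\n", headingToIndex(359.7));            // wraps to 0
}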
GAH! Damn character limits / thread format
This is what follows what I last wrote, for anyone who cares:
The concept seems simple enough, but I am basically a programming noob; I was all over the place trying to learn Python, Java and C/C#/C++. (I wrote this on my Windows Phone 8 but was unable to copy the text.) I am happy to have come across Touch Develop because it is better for me as a visual learner. (Thanks for the life story, right? Haha.)
The idea would have been to use this dumb pink-against-black giant compass with three headings / points of interest: a fixed relative north, the current heading, and a bearing computed from the lat/long coordinates of the person to be found, relative to the finder's phone's current location (lat and long). In my mind this app would be used for party scenarios. I would have benefited from it had the circumstances been right: I was once lost at a party and had to take a cab home for $110.00 because I didn't drive to that party.
I am designing a JPEG-to-BMP decoder which scales the image. I have been supplied with the source code for the decoder, so my actual work is to design the scaler, and I do not know where to begin. I have scouted the internet for the various scaling algorithms, but I am not sure where to introduce the scaling: should I scale after the image is converted into BMP, or should I do it during the decoding, at the MCU level? I'm confused :(
If you have some information to help me out, it's appreciated: any material to read, source code to analyse, etc.
Oh, I forgot to mention one more thing: this is a porting project from the PC platform to an FPGA, so not all the library files are available on the target platform.
There are many ways to scale an image.
The easiest way is to decode the image and then scale using a naive scaling algorithm, something like:
dest_pixel[x, y] = src_pixel[x * x_scale_factor, y * y_scale_factor]
where x/y_scale_factor is
src_size / dest_size
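A minimal C++ sketch of that naive (nearest-neighbour) scaler for 24-bit RGB pixels stored row-major, using 16.16 fixed-point arithmetic since floating point may be scarce on an FPGA target:
#include <cstddef>
#include <cstdint>
#include <vector>

void scaleNearest(const std::vector<uint8_t>& src, int srcW, int srcH,
                  std::vector<uint8_t>& dst, int dstW, int dstH) {
    // x/y_scale_factor = src_size / dest_size, held in 16.16 fixed point
    const uint32_t xStep = ((uint32_t)srcW << 16) / dstW;
    const uint32_t yStep = ((uint32_t)srcH << 16) / dstH;
    dst.resize((std::size_t)dstW * dstH * 3);
    for (int y = 0; y < dstH; ++y) {
        const uint32_t sy = (uint32_t)y * yStep >> 16;      // source row
        for (int x = 0; x < dstW; ++x) {
            const uint32_t sx = (uint32_t)x * xStep >> 16;  // source column
            const uint8_t *s = &src[((std::size_t)sy * srcW + sx) * 3];
            uint8_t *d = &dst[((std::size_t)y * dstW + x) * 3];
            d[0] = s[0]; d[1] = s[1]; d[2] = s[2];          // copy R, G, B
        }
    }
}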
Once you have that working, you can look into more complex scaling systems, such as bilinear filtering, where the destination pixel is an average of several source pixels when reducing the size and an interpolation of neighboring source pixels when increasing it.
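And a hedged sketch of the bilinear sampling step for a single-channel image: the caller maps each destination pixel to a fractional source position (fx, fy), and the four surrounding source pixels are blended by distance:
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

uint8_t sampleBilinear(const std::vector<uint8_t>& src, int w, int h,
                       double fx, double fy) {
    // The four source pixels around the ideal (fractional) position,
    // clamped so sampling at the right/bottom edge stays in bounds.
    int x0 = (int)fx, y0 = (int)fy;
    int x1 = std::min(x0 + 1, w - 1), y1 = std::min(y0 + 1, h - 1);
    double ax = fx - x0, ay = fy - y0;  // fractional offsets in x and y
    // Blend horizontally along the top and bottom rows, then vertically.
    double top = src[y0 * w + x0] * (1 - ax) + src[y0 * w + x1] * ax;
    double bot = src[y1 * w + x0] * (1 - ax) + src[y1 * w + x1] * ax;
    return (uint8_t)std::lround(top * (1 - ay) + bot * ay);
}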