I want to show some text on a 640*480 screen. Where can I get the codes for ASCII characters in RGB565 format for a C program, so that I can get the natural look and feel of a command-line terminal on such a screen?
1- What would be the best width and height for a character?
2- Where can I get the 16-bit hex code (known as a Bitmap Font or Raster Font) for each character?
e.g. const unsigned short myChar[] = {0x0001, 0x0002, 0x0003, 0x0004 ...}
"... the 16-bit hex code ..." is a misconception. You must have meant 16 bytes – one byte (8 pixels) per character line. A 640*480 screen resolution with 'natural' sized text needs 8x16 bitmaps. That will show as 30 lines of 80 columns (the original MCGA screens actually showed only 25 lines, but that was with the equivalent of 640*400 – stretched a bit).
Basic Google-fu turns up this page: https://fossies.org/dox/X11Basic-1.23/8x16_8c_source.html, and the character set comes pretty close to how I remember it from ye olde monochrome monitors: [a]
................................................................
................................................................
................................................................
................................................................
...XXXX.........................................................
....XX..........................................................
....XX..........................................................
....XX...XXXXX..XX.XXX...XXX.XX.XX...XX..XXXX...XX.XXX...XXXXX..
....XX..XX...XX..XX..XX.XX..XX..XX...XX.....XX...XXX.XX.XX...XX.
....XX..XX...XX..XX..XX.XX..XX..XX.X.XX..XXXXX...XX..XX.XXXXXXX.
XX..XX..XX...XX..XX..XX.XX..XX..XX.X.XX.XX..XX...XX.....XX......
XX..XX..XX...XX..XX..XX..XXXXX..XXXXXXX.XX..XX...XX.....XX...XX.
.XXXX....XXXXX...XX..XX.....XX...XX.XX...XXX.XX.XXXX.....XXXXX..
........................XX..XX..................................
.........................XXXX...................................
................................................................
Since this is a simple monochrome bitmap pattern, you don't need "RGB565 format for a C program" (another misconception). It is way easier to loop over each bitmap and use your local equivalent of PutPixel to draw each character in any color you want. You can choose between not drawing the background (the 0 pixels) at all, or having a "background color". The space at the bottom of the bitmap is large enough to put in an underline.
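For illustration, the drawing loop can be as simple as the sketch below. It is only a sketch: put_pixel() and the font8x16[] table are placeholder names for whatever your display driver and chosen font provide (the 8x16 table behind the link above would fit the assumed layout of 16 row bytes per glyph, MSB = leftmost pixel).

#include <stdint.h>

extern const uint8_t font8x16[256][16];               /* assumed: 16 row bytes per glyph, MSB = leftmost pixel */
extern void put_pixel(int x, int y, uint16_t rgb565); /* hypothetical display primitive */

void draw_char(int x, int y, unsigned char c, uint16_t fg, uint16_t bg, int draw_bg)
{
    int row, col;
    for (row = 0; row < 16; row++) {
        uint8_t bits = font8x16[c][row];
        for (col = 0; col < 8; col++) {
            if (bits & (0x80 >> col))
                put_pixel(x + col, y + row, fg);       /* foreground pixel in any RGB565 colour */
            else if (draw_bg)
                put_pixel(x + col, y + row, bg);       /* optional background colour */
        }
    }
}

With 8x16 glyphs, draw_char(8 * column, 16 * row, c, 0xFFFF, 0x0000, 1) gives exactly the 80x30 character grid mentioned above (white on black).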
That said: I've used such bitmaps for years but I recently switched to a fully antialiased gray shade format. The bitmaps are thus larger (a byte per pixel instead of a single bit) but you don't have to loop over individual bits anymore, which is a huge plus. Another is, I now can use the shades of gray as they are (thus drawing 'opaque') or treat them as alpha, and get nicely antialiased text in any color and over any background.
That looks like this:
I did not draw this font; I liked the way it looked on my terminal, so I wrote a C program to dump a basic character set and grabbed a copy of the screen. Then I converted the image to pure grayscale and wrote a quick-and-dirty program to convert the raw data into a proper C structure.
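If you go the grayscale route and treat the shades as alpha, the per-pixel blend onto an RGB565 target could look roughly like the sketch below; blend565() is just an illustrative name, and real code would call it for every byte of an 8-bit-per-pixel glyph bitmap:

#include <stdint.h>

static uint16_t blend565(uint16_t fg, uint16_t bg, uint8_t alpha)  /* alpha: 0 = background, 255 = full text colour */
{
    uint32_t fr = (fg >> 11) & 0x1F, fgc = (fg >> 5) & 0x3F, fb = fg & 0x1F;
    uint32_t br = (bg >> 11) & 0x1F, bgc = (bg >> 5) & 0x3F, bb = bg & 0x1F;
    uint32_t r = (fr * alpha + br * (255 - alpha)) / 255;
    uint32_t g = (fgc * alpha + bgc * (255 - alpha)) / 255;
    uint32_t b = (fb * alpha + bb * (255 - alpha)) / 255;
    return (uint16_t)((r << 11) | (g << 5) | b);
}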
[a] Not entirely true. The font blitter in the MCGA video card added another column at the right of each character, so effectively each character cell was 9x16 pixels. For the small set of border graphics – ╔╦╤╕╩ and so on – the extra column was copied from the rightmost column of the glyph.
Not the most elegant solution, but I created an empty BMP image and filled it with characters.
Then I used this tool to convert the BMP file to a C bitmap array.
You should then be able to distinguish the characters in your array.
If you can access some type of 16-bit DOS mode, you might be able to get the fonts from a BIOS INT 10 (hex 10) call. In this example, the address of the font table is returned in ES:BP (ES is usually 0xC000). This works for 16-bit programs in the Windows DOS console on 32-bit versions of Windows. For 64-bit versions of Windows, DOSBox may work, or using a virtual PC should also work. If this doesn't work, do a web search for "8 by 16 font", which should get you some example fonts.
INT 10 - VIDEO - GET FONT INFORMATION (EGA, MCGA, VGA)
AX = 1130h
BH = pointer specifier
00h INT 1Fh pointer
01h INT 43h pointer
02h ROM 8x14 character font pointer
03h ROM 8x8 double dot font pointer
04h ROM 8x8 double dot font (high 128 characters)
05h ROM alpha alternate (9 by 14) pointer (EGA,VGA)
06h ROM 8x16 font (MCGA, VGA)
07h ROM alternate 9x16 font (VGA only) (see #0020)
11h (UltraVision v2+) 8x20 font (VGA) or 8x19 font (autosync EGA)
12h (UltraVision v2+) 8x10 font (VGA) or 8x11 font (autosync EGA)
Return: ES:BP = specified pointer
CX = bytes/character of on-screen font (not the requested font!)
DL = highest character row on screen
Note: for UltraVision v2+, the 9xN alternate fonts follow the corresponding
8xN font at ES:BP+256N
BUG: the IBM EGA and some other EGA cards return in DL the number of rows on
screen rather than the highest row number (which is one less).
SeeAlso: AX=1100h,AX=1103h,AX=1120h,INT 1F"SYSTEM DATA",INT 43"VIDEO DATA"
For a project on character recognition, I found a database I could use as a training set. However, I am not able to understand its format, even though the instructions below were given with it. I could find no further help on how to figure this format out.
Fields 1-6 are separated by commas.
ID number of source article
2-byte symbol code (written in hexadecimal, using 4 bytes)
Character height of bitmap
Character width of bitmap
Bitmap image, where each 8-bit unit is written as a decimal from 0 to 255
Line feed
The link to the file (Google Drive) for the database is below.
https://drive.google.com/file/d/0B-WsCQkhd_1iUUtJdHg0R1hfTHM/view?usp=sharing
It would be of great help if someone could figure out how this format is laid out; it is really puzzling me.
Well, as far as I can understand this format, every character description takes one line (up to the line feed character).
ID number of source article
2-byte symbol code (written in hexadecimal, using 4 bytes)
Character height of bitmap
Character width of bitmap
Bitmap image, where each 8-bit unit is written as a decimal from 0 to 255 - and here the magic starts. The bitmap image is not a single comma-separated value but all of the values up to the line feed, so you get a long run of comma-separated values that you can split into rows using the bitmap height and width values.
If you open the file in, for example, Notepad++ instead of the standard Windows Notepad, you will get a slightly better view (turn on "Show all characters" to see the line feeds).
Hope it will help you.
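Since the exact layout is guesswork from the description, here is only a rough parsing sketch along those lines: it reads one record per line, splits off the first four comma-separated fields, and treats everything up to the line feed as bitmap bytes. The file name and the ceil(width/8) bytes-per-row assumption are mine and need checking against the real data:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char line[16384];
    FILE *f = fopen("database.txt", "r");        /* placeholder file name */
    if (!f) return 1;

    while (fgets(line, sizeof line, f)) {        /* one character record per line */
        int id, height, width;
        char code[8];
        if (sscanf(line, "%d,%4[^,],%d,%d", &id, code, &height, &width) != 4)
            continue;

        /* Everything after the fourth comma is the bitmap, one decimal byte per value. */
        char *p = line;
        int commas = 0;
        while (*p && commas < 4) if (*p++ == ',') commas++;

        int bytes_per_row = (width + 7) / 8;     /* assumption: rows are bit-packed */
        int total = height * bytes_per_row;
        unsigned char *bitmap = malloc(total);
        int n = 0;
        char *tok = strtok(p, ",\n");
        while (tok && n < total) {
            bitmap[n++] = (unsigned char)atoi(tok);
            tok = strtok(NULL, ",\n");
        }
        printf("char 0x%s: %dx%d, %d bitmap bytes read\n", code, width, height, n);
        free(bitmap);
    }
    fclose(f);
    return 0;
}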
For an 8-bit embedded system that has a small monochrome LCD with black/white pixels (not grayscale), I need an efficient way of storing and displaying fonts. I will probably choose two fixed-width fonts, 4x5 pixels and 5x7 pixels. The resources are very limited: 30k ROM, 2k RAM. The fonts will be written to the buffer with a 1:1 scale, as a one-line string with a given start offset in pixels (char* str, byte x, byte y)
I think I would use 1k of RAM for the buffer. Unless there's a more efficient structure for writing the fonts to, I would have it arranged so it can be written sequentially into the LCD, which would be as follows:
byte buffer[1024];
Where each byte represents a horizontal line of 8 pixels (MSB on the left), and each line of the display is completed from left to right, and in that fashion, top to bottom. (So each line is represented by (128px / 8 =) 16 bytes.)
So my question:
How should the fonts be stored?
What form should the buffer take?
How should the fonts be written to the buffer?
I presume there are some standard algorithms for this, but I can't find anything in searches. Any suggestions would be very helpful (I don't expect anyone to code this for me!!)
Thanks
As a first cut, implement bit blit, a primitive with many uses, including drawing characters. This dictates the following answers to your questions.
As bitmaps.
A bitmap.
Bit blit.
The implementation of bit blit itself involves a bunch of bitwise operations repeatedly extracting a byte or combination of two partial bytes from the source bitmap to be combined with the destination byte.
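As a rough illustration of that, here is a minimal sketch of blitting a 5x7 glyph into the 128-pixel-wide, 16-bytes-per-row buffer described in the question. The glyph layout (7 row bytes, pattern in the top 5 bits) and the example 'A' bitmap are my own assumptions, not a standard:

#include <stdint.h>
#include <stdio.h>

#define LCD_W 128
#define LCD_H 64
#define BYTES_PER_ROW (LCD_W / 8)

static uint8_t buffer[LCD_H * BYTES_PER_ROW];   /* 1024 bytes, MSB = leftmost pixel */

/* Assumed glyph layout: 7 row bytes, pattern in the top 5 bits; this one is 'A'. */
static const uint8_t glyph_A[7] = {
    0x20,  /* ..X.. */
    0x50,  /* .X.X. */
    0x88,  /* X...X */
    0x88,  /* X...X */
    0xF8,  /* XXXXX */
    0x88,  /* X...X */
    0x88   /* X...X */
};

static void draw_glyph(const uint8_t *glyph, uint8_t x, uint8_t y)
{
    uint8_t row;
    for (row = 0; row < 7; row++) {
        /* Put the glyph row into a 16-bit window and shift it to pixel column x. */
        uint16_t bits = (uint16_t)(glyph[row] << 8) >> (x & 7);
        uint16_t idx  = (uint16_t)(y + row) * BYTES_PER_ROW + (x >> 3);
        buffer[idx] |= (uint8_t)(bits >> 8);                /* left destination byte */
        if ((x & 7) > 3 && (x >> 3) + 1 < BYTES_PER_ROW)
            buffer[idx + 1] |= (uint8_t)bits;               /* spill into the next byte */
    }
}

int main(void)
{
    int r, c;
    draw_glyph(glyph_A, 5, 0);                  /* draw 'A' starting at pixel column 5 */
    for (r = 0; r < 7; r++) {                   /* dump the top-left 16 pixels of each row */
        for (c = 0; c < 16; c++)
            putchar((buffer[r * BYTES_PER_ROW + (c >> 3)] & (0x80 >> (c & 7))) ? 'X' : '.');
        putchar('\n');
    }
    return 0;
}

Drawing a whole string is then just a loop advancing x by 6 pixels (5-pixel glyph plus 1 pixel of spacing) per character. Note that OR-ing only sets pixels; drawing over existing content would also need to clear the glyph's 0 bits first.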
I am not a software engineer. Excuse me if you find the question awkward.
I'd like to have an image format which is not supposed to be memory efficient but easy to manipulate in plain C. To be more specific, I desire to store every pixel in an array of the form:
pixel[row#][column#][Color]
where the indexes row# and column# (255 at max) are coordinates, and the index Color (2 at max) contains the RGB values of the pixel specified by the position ( i.e. pixel[255][255][1] is used to check or manipulate the Green amount inside the pixel on the bottom right corner ).
I aim to use this form in robotic applications so that I can find the coordinates of the first red/blue/green pixel easily by scanning the image from the top-left corner with nested for loops (yes, not a creative solution). You might object that if there is a white area in the image, the code will return wrong coordinates. I am aware of this, but the images will not have a complex pattern, and (if necessary) I can store the irrelevant colors as if they were black. I do not care about brightness, gamma, alpha and so on, either.
So, is it possible to write C (or C++ if necessary) code that takes a snapshot from the webcam, say, every 0.5 seconds and converts the raw image into the form specified above? If C cannot access the camera directly, is it possible to write code that calls a program which can reach the camera, takes a snapshot and stores the raw data in a file? If yes, how can I read this raw data file from C to at least try a conversion? I am using Windows Vista on my laptop.
Sorry for keeping the question long, but I don't want to leave any points unclear.
Yes, such a file format would be possible. Only sanity prevents it from being implemented/used.
Some formats in fairly wide use would be almost as simple to scan in the way you're considering, though. A 32-bit BMP, for example, has a small header at the start of the file giving a few things like the size of the picture (x,y pixel dimensions), followed by the raw pixel values, so it's basically just ColorColorColor... for the number of pixels in the image.
Code to do the scanning you're talking about with a 32-bit BMP will be pretty trivial -- the code to open the file, allocate space for a buffer to read data into, etc., could easily be longer than the scanning code itself.
Adopting a 'standard' image format also means you have tools to generate test data and independently view your results.
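For illustration, a crude scan of an uncompressed 32-bit BMP along those lines might look like the sketch below. The file name and the "mostly red" thresholds are placeholders, and it assumes the usual 14+40-byte header with a bottom-up BGRA pixel array and no compression:

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

static uint32_t le32(const uint8_t *p) { return p[0] | (p[1] << 8) | ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24); }

int main(void)
{
    uint8_t hdr[54];
    FILE *f = fopen("image.bmp", "rb");           /* placeholder file name */
    if (!f || fread(hdr, 1, 54, f) != 54) return 1;

    uint32_t data_offset = le32(hdr + 10);        /* start of the pixel array */
    int32_t  width  = (int32_t)le32(hdr + 18);
    int32_t  height = (int32_t)le32(hdr + 22);    /* positive height = bottom-up rows */

    uint8_t *pixels = malloc((size_t)width * height * 4);
    if (!pixels || fseek(f, (long)data_offset, SEEK_SET) != 0) return 1;
    if (fread(pixels, 4, (size_t)width * height, f) != (size_t)width * height) return 1;
    fclose(f);

    for (int row = 0; row < height; row++) {              /* row 0 = top of the image */
        int file_row = height - 1 - row;                  /* rows are stored bottom-up */
        for (int col = 0; col < width; col++) {
            const uint8_t *px = pixels + ((size_t)file_row * width + col) * 4;  /* B,G,R,A */
            if (px[2] > 200 && px[1] < 64 && px[0] < 64) { /* crude "mostly red" test */
                printf("first red pixel at row %d, column %d\n", row, col);
                return 0;
            }
        }
    }
    printf("no red pixel found\n");
    return 0;
}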
The easiest format to code in pure C (even if it's not very efficient) is portable pixmap (PPM).
It's just a plain text file:
P3 <cr> # P3 means an ascii color file R,G,B
640 480 <cr> # 640 pixels wide, 480 rows deep
255 <cr> # maximum value is 255
# then just a row of RGB values separated by space with a CR
255 0 0 0 255 0 0 0 255 #....... 640 triplets <cr>
255 255 0 255 255 255 0 0 0 # ....... next row etc
There is also a more efficient binary version (P6) where the data is raw RGB bytes, so it is very easy to read into a C array in a single operation.
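As a sketch of that, the following reads a binary (P6) PPM straight into the pixel[row][col][color] array from the question; frame.ppm is a placeholder name, and the simple header parsing assumes no comment lines and a maxval of 255:

#include <stdio.h>

#define ROWS 480
#define COLS 640

static unsigned char pixel[ROWS][COLS][3];    /* pixel[row][column][0=R,1=G,2=B] */

int main(void)
{
    FILE *f = fopen("frame.ppm", "rb");
    int w, h, maxval;

    if (!f || fscanf(f, "P6 %d %d %d", &w, &h, &maxval) != 3) return 1;
    if (w != COLS || h != ROWS || maxval != 255) return 1;
    fgetc(f);                                 /* consume the single whitespace after maxval */

    if (fread(pixel, 3, (size_t)ROWS * COLS, f) != (size_t)ROWS * COLS) return 1;
    fclose(f);

    /* Example access: index 1 is the green amount of the bottom-right pixel. */
    printf("green at bottom-right: %d\n", pixel[ROWS - 1][COLS - 1][1]);
    return 0;
}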
How to determine resolution (width x height) and type (gif, jpeg, png, bmp, etc) of an image from a stream (or byte array) without incurring the cost of decoding the entire image?
I know this can be done by just reading the headers. Just wondering if any such code or library already exists.
In addition to the JPEG info provided in the answer Leon links to:
GIF files start with the ASCII encoding for "GIF87a" or "GIF89a", so you can use that signature to determine the file type. Immediately following it are the Width and the Height, both Int16 values using little-endian byte ordering.
PNG files start with the byte value 0x89, then the ASCII encoding for "PNG", followed by 4 other bytes. That 8-byte signature is followed by the IHDR chunk (4-byte length, 4-byte type), so the Width and Height are 4-byte values at offsets 16 and 20, using big-endian byte ordering.
BMP files start with the ASCII encoding for "BM". At offset 18 there is an Int32 value specifying the width, and at offset 22 the height; both use little-endian byte ordering.
Armed with this info you should be able to write a little bit of code to read the first 26 bytes of a file stream and from that determine the file type along with the Width and the Height.
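Putting that together, a small sketch in C might look like the following; get_image_size() is just an illustrative name, and JPEG is left out because its dimensions sit in a variable-position SOF marker rather than at a fixed offset:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

static uint32_t le16(const uint8_t *p) { return p[0] | (p[1] << 8); }
static uint32_t le32(const uint8_t *p) { return p[0] | (p[1] << 8) | ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24); }
static uint32_t be32(const uint8_t *p) { return ((uint32_t)p[0] << 24) | (p[1] << 16) | (p[2] << 8) | p[3]; }

/* Determine width/height from the first bytes of a GIF, PNG or BMP stream. */
static int get_image_size(const uint8_t *h, size_t n, uint32_t *w, uint32_t *ht)
{
    if (n >= 10 && memcmp(h, "GIF8", 4) == 0) {               /* GIF87a or GIF89a */
        *w = le16(h + 6); *ht = le16(h + 8);
        return 0;
    }
    if (n >= 24 && memcmp(h, "\x89PNG\r\n\x1a\n", 8) == 0) {  /* IHDR follows the signature */
        *w = be32(h + 16); *ht = be32(h + 20);
        return 0;
    }
    if (n >= 26 && h[0] == 'B' && h[1] == 'M') {              /* BMP */
        *w = le32(h + 18); *ht = le32(h + 22);
        return 0;
    }
    return -1;                                                 /* unknown format */
}

int main(int argc, char **argv)
{
    uint8_t buf[26];
    uint32_t w, h;
    FILE *f;
    if (argc < 2 || (f = fopen(argv[1], "rb")) == NULL) return 1;
    if (get_image_size(buf, fread(buf, 1, sizeof buf, f), &w, &h) == 0)
        printf("%u x %u\n", (unsigned)w, (unsigned)h);
    fclose(f);
    return 0;
}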
I'm trying to read a 16-bit greyscale TIFF file (BitsPerSample=16) using a small C program, to convert it into an array of floating point numbers for further analysis. The pixel data are, according to the header information, in a single strip of 2048x2048 pixels. Encoding is little-endian.
With that header information, I was expecting to be able to read a single block of 2048x2048x2 bytes and interpret it as 2048x2048 2-byte integers. What I in fact get is a picture split into four quadrants of 1024x1024 pixels each, the lower two of which contain only zeros. Each of the top two quadrants looks like I expected the whole picture to look: http://users.aber.ac.uk/ruw/unlinked/15_inRT_0p457.png
If I read the same file into Gimp or ImageMagick, both tell me that they have to reduce to 8-bit (which doesn't help me - I need the full range), but the pixels turn up in the right places: http://users.aber.ac.uk/ruw/unlinked/15_inRT_0p457_gimp.png
This would suggest that my idea about how the data are arranged within the one strip is wrong. On the other hand, the file must be correctly formatted in terms of the header information as otherwise Gimp wouldn't get it right. Where am I going wrong?
Output from tiffdump:
15_inRT_0p457.tiff:
Magic: 0x4949 Version: 0x2a
Directory 0: offset 8 (0x8) next 0 (0)
ImageWidth (256) LONG (4) 1<2048>
ImageLength (257) LONG (4) 1<2048>
BitsPerSample (258) SHORT (3) 1<16>
Compression (259) SHORT (3) 1<1>
Photometric (262) SHORT (3) 1<1>
StripOffsets (273) LONG (4) 1<4096>
Orientation (274) SHORT (3) 1<1>
RowsPerStrip (278) LONG (4) 1<2048>
StripByteCounts (279) LONG (4) 1<8388608>
XResolution (282) RATIONAL (5) 1<126.582>
YResolution (283) RATIONAL (5) 1<126.582>
ResolutionUnit (296) SHORT (3) 1<3>
34710 (0x8796) LONG (4) 1<0>
(Tag 34710 is camera information; to make sure this doesn't somehow make any difference, I've zeroed the whole range from the end of the image file directory to the start of data at 0x1000, and that in fact doesn't make any difference.)
I've found the problem - it was in my C program...
I had allocated memory for an array of longs and used fread() to read in the data:
#define PPR 2048
#define BPP 2
long *pix;
pix = malloc(PPR * PPR * sizeof(long));
fread(pix, BPP, PPR * PPR, in);
Since the data come in 2-byte chunks (BPP=2) while sizeof(long)=4, fread() packs the data densely into the allocated memory rather than into long-sized parcels. Thus I ended up with two rows packed together into one and the second half of the picture empty.
I've changed it to loop over the number of pixels and read two bytes each time and store them in the allocated memory instead:
for (m = 0; m < PPR * PPR; m++) {
    b1 = fgetc(in);
    b2 = fgetc(in);
    *(pix + m) = b1 + 256 * b2;   /* little-endian data: low byte first */
}
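For completeness: assuming a little-endian host (matching the file's byte order) and a compiler with <stdint.h>, the same data can also be read with a single fread() by giving the buffer the element size that actually matches the file, which was the root of the problem:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PPR 2048

/* Sketch: read the whole 2048x2048 strip of 16-bit samples in one call. */
static uint16_t *read_strip(FILE *in)
{
    uint16_t *pix = malloc((size_t)PPR * PPR * sizeof *pix);
    if (pix && fread(pix, sizeof *pix, (size_t)PPR * PPR, in) != (size_t)PPR * PPR) {
        free(pix);               /* short read */
        return NULL;
    }
    /* On a big-endian host each 16-bit value would still need its bytes swapped. */
    return pix;
}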
You understand that if StripOffsets is an array, it is an offset to an array of offsets, right? You might not be doing that dereference properly.
What's your platform? What are you trying to do? If you're willing to work in .NET on Windows, my company sells an image processing toolkit that includes a TIFF codec that works on pretty much anything you can throw at it and will return 16 bpp images. We also have many tools that operate natively on 16bpp images.