To display a degree symbol on the OLED you can use
display.print((char)247); // degree symbol
example:
void loop(void) {
  display.setTextSize(1);
  display.setTextColor(WHITE);
  display.setCursor(0, 0);
  display.clearDisplay();
  display.print("Temperature: ");
  display.print("23");
  display.print((char)247); // degree symbol
  display.println("C");
  display.display();
  delay(1000);
}
The documentation says:
Each filename starts with the face name (“FreeMono”, “FreeSerif”, etc.) followed by the style (“Bold”, “Oblique”, none, etc.), font size in points (currently 9, 12, 18 and 24 point sizes are provided) and “7b” to indicate that these contain 7-bit characters (ASCII codes “ ” through “~”); 8-bit fonts (supporting symbols and/or international characters) are not yet provided but may come later.
In other words, the degree symbol (U+00B0, °) is not supported by the fonts by default.
Of course, nothing stops you from redefining one of them as the degree symbol in the .h file that corresponds to the font you wish to use -- in particular, by making a copy of the existing file, renaming the copy, and replacing one of the ASCII characters you do not need with the degree symbol. I would suggest replacing the backtick ` (ASCII code 96), because it is very similar to the more commonly used apostrophe ' (ASCII code 39).
An easier option is to just draw a small circle (say, radius 1, 2, or 3 pixels) at the correct position using display.drawCircle(xcenter, ycenter, radius, WHITE);. After all, the degree glyph is just a small circle -- consider the looks of e.g. 25°C or 77°F.
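For instance, here is a rough sketch of that approach (the offsets and radius are guesses you would tune to your text size; getCursorX()/getCursorY() come from Adafruit_GFX):
void loop(void) {
  display.clearDisplay();
  display.setTextSize(1);
  display.setTextColor(WHITE);
  display.setCursor(0, 0);
  display.print("Temperature: 23");

  int16_t x = display.getCursorX();            // just after the "23"
  int16_t y = display.getCursorY();
  display.drawCircle(x + 2, y + 1, 1, WHITE);  // the "degree" circle

  display.setCursor(x + 5, y);
  display.print("C");
  display.display();
  delay(1000);
}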
For more complex glyphs, if you don't want to create your own font .h files, you can use the display.drawBitmap() interface. Examine and experiment with the Adafruit example for details.
You can use the image2cpp web page at GitHub to generate the Arduino data array. It can also output font-format data, so you can generate the data for your degree-symbol glyph just by "drawing" it on that page.
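For instance, here is a rough sketch of the drawBitmap() route with a hand-packed 8x8 degree glyph (the array, its name and the 8x8 size are only illustrative -- image2cpp would normally generate this for you):
// one byte per row, MSB = leftmost pixel
static const unsigned char PROGMEM degreeGlyph[] = {
  0b00111000,
  0b01000100,
  0b01000100,
  0b00111000,
  0b00000000,
  0b00000000,
  0b00000000,
  0b00000000
};

// ... then, wherever the degree sign should appear:
display.drawBitmap(display.getCursorX(), display.getCursorY(),
                   degreeGlyph, 8, 8, WHITE);
display.setCursor(display.getCursorX() + 8, display.getCursorY());
display.print("C");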
Related
I have a file with a sequence of NxM unsigned integral values of fixed width - let's even assume they're single bytes - and I would like to "wrap" it in some kind of common image file format. Preferably something usable with popular image viewers, or failing that, with an image editor like GIMP.
What image format would require the minimum amount of conversion work, i.e. be as close as possible to just slapping some small header onto the raw data?
Notes:
This is a grayscale/intensity image - there are no color channels / interleaved values etc.
I don't care if the image format is row-major or column-major, i.e. if the image appears transposed relative to the order I wrote the data originally.
I've just noticed the Portable Pixmap Formats, which include the PGM format. The closest I've seen to a spec is on this page.
PGM files are supported by apps such as: Eye of Gnome (on Linux), IrfanView (on Windows), GIMP and others.
Create the image file programmatically
If I understand correctly, the following C function should convert the raw data the OP has into a PGM file:
#include <stdio.h>
#include <stdint.h>

void write_pgm(FILE* file, size_t width, size_t height, uint8_t* data)
{
    const char* magic = "P5"; /* "P5" marks a binary (raw) graymap */
    fprintf(file, "%s %zu %zu 255\n", magic, width, height);
    fwrite(data, 1, width * height, file);
}
This is a variation on a similar function for a PPM, here.
You can easily adapt this to whatever programming language you like.
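For instance, an (untested) driver could look like this -- note the "wb" mode, so that Windows does not mangle the bytes:
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

/* write_pgm() from above goes here */

int main(void)
{
    const size_t width = 640, height = 480;   /* your N and M */
    uint8_t *data = malloc(width * height);
    if (!data) return 1;
    /* ... fill data with your raw values ... */

    FILE *file = fopen("out.pgm", "wb");       /* binary mode matters on Windows */
    if (!file) { perror("out.pgm"); return 1; }
    write_pgm(file, width, height, data);
    fclose(file);
    free(data);
    return 0;
}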
Converting a file
So, suppose you've put your output in a file on disk. Apparently, there's a utility for making a PGM of it: rawtopgm. Here's an invocation for your case:
rawtopgm -bpp 1 -maxval 255 N M my_data_file > out.pgm
or, exploiting defaults:
rawtopgm N M my_data_file > out.pgm
Pretty simple.
So I started off by declaring an array of char
char symbols[52] = { '\x03', '\x04', /* etc. */ };
and the first time I did it, it actually did print the hearts, spades, etc. But after changing my system locale to a Korean one (I don't know if this is what caused it), it started giving me weird symbols that had nothing to do with them.
I also tried to do it on another computer, and there it actually printed the symbols correctly.
However, when I then moved it over to Linux, it printed weird squares with a "0 0 0 3" in them.
Does anyone know why these appear, or is there a better way to print these symbols?
P.S.: I'm using Visual Studio on Windows and then used the same .c code on Linux.
Linux systems typically use UTF-8 encoding, in which:
♠ = U+2660 = "\xE2\x99\xA0"
♣ = U+2663 = "\xE2\x99\xA3"
♥ = U+2665 = "\xE2\x99\xA5"
♦ = U+2666 = "\xE2\x99\xA6"
Edit: Unfortunately, the Windows command prompt doesn't support UTF-8, but uses the old DOS code pages. If you want your program to work cross-platform, you'll have to do something like:
#if defined(_WIN32) || defined(__MSDOS__)
#define SPADE "\x06"
#define CLUB "\x05"
#define HEART "\x03"
#define DIAMOND "\x04"
#else
#define SPADE "\xE2\x99\xA0"
#define CLUB "\xE2\x99\xA3"
#define HEART "\xE2\x99\xA5"
#define DIAMOND "\xE2\x99\xA6"
#endif
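With those macros in place, a quick usage sketch (nothing is assumed beyond the macro block above):
#include <stdio.h>

/* assumes the SPADE/CLUB/HEART/DIAMOND macros above are defined */
int main(void)
{
    printf("Suits: %s %s %s %s\n", SPADE, HEART, DIAMOND, CLUB);
    return 0;
}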
Characters like '\x03' are control characters. \x03 in particular is the character you get when you type Control-C (though typing that character often has other effects).
Some fonts happen to assign images to control characters. As far as I know, there's no standard for doing so. Apparently, in your previous configuration, some of the control characters happened to be displayed as playing-card suit symbols.
There are standard Unicode characters for those symbols:
2660 BLACK SPADE SUIT
2661 WHITE HEART SUIT
2662 WHITE DIAMOND SUIT
2663 BLACK CLUB SUIT
2664 WHITE SPADE SUIT
2665 BLACK HEART SUIT
2666 BLACK DIAMOND SUIT
2667 WHITE CLUB SUIT
(The numbers are in hexadecimal.)
Different systems encode Unicode in different ways. Windows usually uses some form of UTF-16. Linux usually uses UTF-8, but you can vary that by setting the locale.
If your Linux terminal character set is set to UTF-8, as it usually is, then you should be able to display all of the Unicode glyphs you have fonts for. Suit symbols are in the neighborhood of \u2660.
printf("A\u2660");
Should print "A" and the spade symbol, for example.
On Linux:
1. Verify that your locale is set to <somelanguage>_<SOMEPLACE>.UTF-8 (type locale in the terminal).
2. Find a Unicode chart on the web. Verify that you can copy interesting characters with the mouse and paste them into the terminal.
3. Learn what UTF-8 is.
4. Write a program.
5. When it doesn't work, search this site using something like "how to print utf-8 in Linux". There are tons and tons of relevant answers. You want to call setlocale(LC_ALL, "") at the beginning of your program, as the language standard requires, and use char, not wchar_t. (A minimal sketch follows below.)
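Here is roughly what such a program looks like (plain char strings; the \u escapes assume a compiler that encodes them as UTF-8, which is the norm on Linux):
#include <stdio.h>
#include <locale.h>

int main(void)
{
    setlocale(LC_ALL, "");   /* adopt the environment's locale, e.g. en_US.UTF-8 */

    /* plain char strings; the compiler encodes the \u escapes as UTF-8 */
    printf("\u2660 \u2665 \u2666 \u2663\n");
    return 0;
}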
On Windows:
1. Give up and cry. No, really.
2. Type chcp 65001 in the console. No, really!
3. Set the console font to Lucida Console. No, really!!!
4. Copy your Linux program over and compile and run it. (You do want to use the same program source on both OSes, right?) When it doesn't work, search for solutions online.
5. Go to 1.
In the file text.txt I have this sentence:
"Příliš žluťoučký kůň úpěl ďábelské ódy."
(I think Windows uses the Windows-1250 code page to represent this text.)
In my program I save it to a buffer
char string[1000];
and render the string with SDL_ttf to an SDL_Surface *surface:
surface = TTF_RenderText_Blended(font, string, color);
/* (the font is TrueType and supports this text) */
But it does not give me the correct result:
I need some reputation points to post images
so I can only describe that ř,í,š,ž,ť,ů,ň,ď are not displayed correctly.
Is it possible to use SDL_ttf to render this sentence correctly?
(I also tried TTF_RenderUTF8_Blended, TTF_RenderUNICODE_Solid... with worse results.)
The docs for TTF_RenderText_Blended say that it takes a Latin-1 (ISO 8859-1) string - this is why it isn't working with your Windows-1250 text.
You'll need to convert your input text to UTF-8 and use RenderUTF8, or to UTF-16 and use RenderUNICODE to ensure it is interpreted correctly.
How you do this depends on what platform your app is targeting - if it is Windows, then the easiest way would be to use the MultiByteToWideChar Win32 API to convert the text to UTF-16 and then use TTF_RenderUNICODE_Blended to draw it.
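For instance, a rough Windows-only sketch (the helper name render_cp1250 is mine; 1250 is the Central European ANSI code page, and error handling is minimal):
#include <windows.h>
#include <stdlib.h>
#include "SDL_ttf.h"

SDL_Surface *render_cp1250(TTF_Font *font, const char *text, SDL_Color color)
{
    /* ask for the required length first, then convert */
    int len = MultiByteToWideChar(1250, 0, text, -1, NULL, 0);
    if (len <= 0)
        return NULL;

    WCHAR *wide = (WCHAR *)malloc(len * sizeof(WCHAR));
    if (!wide)
        return NULL;
    MultiByteToWideChar(1250, 0, text, -1, wide, len);

    /* on Windows, WCHAR is 16-bit, so it can be passed as the Uint16
       string that TTF_RenderUNICODE_Blended expects */
    SDL_Surface *surface =
        TTF_RenderUNICODE_Blended(font, (const Uint16 *)wide, color);

    free(wide);
    return surface;
}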
My solution will be this:
Three input files. In the first file there will be a set of symbols from the Czech alphabet. The second file will be a sprite bitmap in which the graphic symbols are sorted in the same order as in the first file. In my program, symbols from the third input file will be compared with symbols from the first file, and the corresponding section of the sprite will be copied to the screen, one symbol at a time.
I will leave out SDL_ttf. It has some advantages and disadvantages, but I think this approach will work for my purposes.
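Roughly, the drawing part should look something like this (the glyph size, the names, and the single-row sprite layout are placeholders; the alphabet buffer is assumed to hold the single-byte Windows-1250 symbols from the first input file, in the same order as the glyphs in the sprite bitmap):
#include "SDL.h"
#include <string.h>

#define GLYPH_W 16
#define GLYPH_H 24

extern char alphabet[];   /* loaded from the first input file elsewhere */

void draw_text(SDL_Surface *screen, SDL_Surface *sprites,
               const char *text, int x, int y)
{
    for (; *text; ++text) {
        const char *hit = strchr(alphabet, *text);
        if (!hit)
            continue;                        /* symbol not in the set: skip it */
        int index = (int)(hit - alphabet);

        /* glyphs sit left to right in one row of the sprite sheet */
        SDL_Rect src = { (Sint16)(index * GLYPH_W), 0, GLYPH_W, GLYPH_H };
        SDL_Rect dst = { (Sint16)x, (Sint16)y, 0, 0 };
        SDL_BlitSurface(sprites, &src, screen, &dst);
        x += GLYPH_W;
    }
}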
Thanks for all the responses.
I came across this excellent tutorial on image processing by Bill Green - http://dasl.mem.drexel.edu/alumni/bGreen/www.pages.drexel.edu/_weg22/edge.html
He works with the BMP format in the tutorial since it is the simplest. I tried the Sobel edge detection code and got it to compile and run. When I try it on the images on that web site (for example, LIAG.bmp, the photo of the lady), the code works just fine. However, when I use other .bmp images (for example, take any image and convert it at http://www.online-convert.com/result/6c0ce763b5e6cadf3a76a966acdb9505), the code spits out an image that can't be read by any image editor. The issue is most probably in the line -
nColors = (int)getImageInfo(bmpInput, 46, 4);
of his code. There seems to be some hard-coding here which only works on the image sizes in his tutorial. The nColors variable is 256 for all images on his site, but 0 for all the images I get otherwise. Can anyone tell me how I might change this piece of code to generalize it?
The 46 in this line:
nColors = (int)getImageInfo(bmpInput, 46, 4);
...refers to the byte offset into the header of the BMP. Unless you are creating BMPs that do not use this file structure, it should theoretically work. He is referring to 8-bit images on that page. Perhaps 16- or 32-bit images use a different header layout.
Read this Wikipedia page for more info: https://en.wikipedia.org/wiki/BMP_file_format#File_structure
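If you'd rather not rely on a single hard-coded read, you can pull the relevant fields out of the header yourself. Here is a rough sketch for an uncompressed BMP with the usual 40-byte BITMAPINFOHEADER (offset 28 = biBitCount, offset 46 = biClrUsed; error handling omitted). Note that a biClrUsed of 0 is legal and simply means "the maximum number of colors for this bit depth", which may be why your converted files report 0:
#include <stdio.h>
#include <stdint.h>

/* read a little-endian field of 1..4 bytes at a given byte offset */
static uint32_t read_le(FILE *f, long offset, int bytes)
{
    uint32_t value = 0;
    fseek(f, offset, SEEK_SET);
    for (int i = 0; i < bytes; ++i)
        value |= (uint32_t)fgetc(f) << (8 * i);
    return value;
}

int bmp_palette_size(FILE *f)
{
    uint32_t bit_count = read_le(f, 28, 2);   /* biBitCount */
    uint32_t clr_used  = read_le(f, 46, 4);   /* biClrUsed  */

    if (clr_used != 0)
        return (int)clr_used;
    /* 0 means "all colors for this bit depth"; 16/24/32 bpp BMPs
       normally have no palette at all */
    return bit_count <= 8 ? (1 << bit_count) : 0;
}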
I'm trying to get the Spot color information from a TIFF file, it normally shows up under 'channels' in Photoshop. Each extra channel would have a name, which is usually a Pantone swatch name, and a CMYK equivalent.
So far, I'm getting the TIFFTAG_PHOTOSHOP with libtiff, and stepping through the blocks within. I'm finding the IRB WORD 0x03EE, which gives me the channel names, and IRB WORD 0x03EF which gives me their color equivalents...
BUT the color equivalents are in CIELab format (luminance and two axes of color-space data), so I'm trying to use littleCMS to convert just a few TIFF-packed Lab colors to CMYK.
My question: Is there an easier way? The CMYK is just an approximation of the Pantone, so if there was a quick rough translation from Lab to CMYK, I would use it.
The answer was to use the Photoshop docs to parse out the binary Photoshop block in the TIFF file and grab the fields I needed with bit manipulation.
littleCMS did the job of Lab -> CMYK just right.
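For reference, the littleCMS part boils down to something like this (an untested sketch; "cmyk_profile.icc" stands in for whichever CMYK press profile you use, and the Lab value is made up):
#include <lcms2.h>
#include <stdio.h>

int main(void)
{
    cmsHPROFILE lab  = cmsCreateLab4Profile(NULL);               /* built-in D50 Lab */
    cmsHPROFILE cmyk = cmsOpenProfileFromFile("cmyk_profile.icc", "r");
    if (!cmyk) return 1;

    cmsHTRANSFORM xform = cmsCreateTransform(lab,  TYPE_Lab_DBL,
                                             cmyk, TYPE_CMYK_DBL,
                                             INTENT_PERCEPTUAL, 0);

    cmsCIELab in = { 47.0, 74.0, 48.0 };   /* one spot color's Lab value */
    cmsFloat64Number out[4];               /* C, M, Y, K on a 0..100 scale */
    cmsDoTransform(xform, &in, out, 1);

    printf("C=%.1f M=%.1f Y=%.1f K=%.1f\n", out[0], out[1], out[2], out[3]);

    cmsDeleteTransform(xform);
    cmsCloseProfile(lab);
    cmsCloseProfile(cmyk);
    return 0;
}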