I have created a C-source image dump with GIMP, like the following:
/* GIMP RGBA C-Source image dump (example.c) */
static const struct {
  guint width;
  guint height;
  guint bytes_per_pixel; /* 2:RGB16, 3:RGB, 4:RGBA */
  guint8 pixel_data[304 * 98 * 2 + 1];
} example = {
  304, 98, 2,
  "\206\061\206\061..... }
Is there a way to convert this image from RGB565 to RGB888?
I mean, I have found a way to convert pixel by pixel:
for (i = 0; i < w * h; i++)
{
    uint16_t color = *RGB565p++;
    /* Expand each channel from 5/6/5 bits to 8 bits, with rounding. */
    uint8_t r = ((((color >> 11) & 0x1F) * 527) + 23) >> 6;
    uint8_t g = ((((color >> 5) & 0x3F) * 259) + 33) >> 6;
    uint8_t b = (((color & 0x1F) * 527) + 23) >> 6;
    uint32_t RGB888 = r << 16 | g << 8 | b;
    printf("%d \n", RGB888);
}
The problem is that using this logic I get numbers that are not represented like the ones used in the original image:
P3
304 98
255
3223857
3223857
3223857
3223857
3223857
3223857
3223857
3223857
Did I miss something?
EDIT: here you can find the original image:
https://drive.google.com/file/d/1YBphg5_V6M2FA3HWcaFZT4fHqD6yeEOl/view
There are two things you need to do to create a C file similar to the original:

1. Increase the size of the pixel buffer, because you are creating three bytes per pixel from the original's two bytes.
2. Write strings that represent the new pixel data.
The first part means simply changing the 2 to 3, so you get:
guint8 pixel_data[304 * 98 * 3 + 1];
} example = {
304, 98, 3,
In the second part, the simplest method would be to print ALL characters in hexadecimal or octal representation. (The original dump has the "printable" characters visible, but the non-printable ones as octal escape sequences.)
To print ALL the characters in hexadecimal representation, do something like this:
for (i = 0; i < w * h; i++)
{
    ...
    R, G and B calculation goes here
    ...

    // Print start of line and string (every 16 pixels)
    if (i % 16 == 0)
        printf("\n\"");

    printf("\\x%02x\\x%02x\\x%02x", r, g, b);

    // Print end of string and line (every 16 pixels)
    if ((i+1) % 16 == 0)
        printf("\"\n");
}
printf("\"\n"); // Termination of last line
This prints three bytes in hex representation (\xab\xcd\xef) and, after every 16 pixels, prints the end of the string and a newline.
Note that the byte order might need changing depending on your implementation. So b, g, r instead of r, g, b.
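Putting the two parts together, a complete sketch could look like the following (the tiny pixel_data array and the W and H macros are my stand-ins for your actual dump, and the RGB565 data is assumed to be stored low byte first):

#include <stdio.h>
#include <stdint.h>

/* Stand-ins for the real GIMP dump: 2 pixels of RGB565 data. */
#define W 2
#define H 1
static const uint8_t pixel_data[W * H * 2] = { 0x86, 0x31, 0x86, 0x31 };

int main(void)
{
    printf("guint8 pixel_data[%d * %d * 3 + 1] =", W, H);
    for (int i = 0; i < W * H; i++) {
        /* Assemble one RGB565 pixel, assuming little-endian byte order. */
        uint16_t color = pixel_data[2 * i] | (pixel_data[2 * i + 1] << 8);
        /* Expand 5/6/5 bits to 8 bits with rounding. */
        uint8_t r = ((((color >> 11) & 0x1F) * 527) + 23) >> 6;
        uint8_t g = ((((color >> 5) & 0x3F) * 259) + 33) >> 6;
        uint8_t b = (((color & 0x1F) * 527) + 23) >> 6;

        if (i % 16 == 0)
            printf("\n\"");
        printf("\\x%02x\\x%02x\\x%02x", r, g, b);
        if ((i + 1) % 16 == 0 && i + 1 < W * H)
            printf("\"");
    }
    printf("\";\n");
    return 0;
}

For the sample bytes above (color 0x3186), each pixel expands to \x31\x31\x31, which matches the 3223857 (0x313131) values in the question.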
I want to convert a buffer of binary data in bytes into a buffer of sextets, where a sextet is a byte with the two most significant bits set to zero. I also want to do the reverse, i.e. convert a buffer of sextets back to bytes. As a test I am generating a buffer of bytes using a pseudo-random number generator that creates numbers between 0 and 255, using the built-in version available in C. This is in order to simulate binary data. The details of the pseudo-random number generator and how good it is are of little importance; what matters is that a stream of bytes with various values is generated. Eventually a binary file will be read.
I've modified the functions in the link:
How do I base64 encode (decode) in C?
so that instead of encoding bytes to base64 characters, then decoding them back to bytes, sextets are used instead of base64. My encoding functions is as follows:
int bytesToSextets(int inx, int iny, int numBytes, CBYTE* byteData, BYTE* sextetData) {
    static int modTable[] = { 0, 2, 1 };
    int numSextets = 4 * ((numBytes + 2) / 3);
    int i, j;
    for (i = inx, j = iny; i < numBytes;) {
        BYTE byteA = i < numBytes ? byteData[i++] : 0;
        BYTE byteB = i < numBytes ? byteData[i++] : 0;
        BYTE byteC = i < numBytes ? byteData[i++] : 0;
        UINT triple = (byteA << 0x10) + (byteB << 0x08) + byteC;
        sextetData[j++] = (triple >> 18) & 0x3F;
        sextetData[j++] = (triple >> 12) & 0x3F;
        sextetData[j++] = (triple >> 6) & 0x3F;
        sextetData[j++] = triple & 0x3F;
    }
    for (int i = 0; i < modTable[numBytes % 3]; i++) {
        sextetData[numSextets - 1 - i] = 0;
    }
    return j - iny;
}
where inx is the index in the input byte buffer where I want to start encoding, iny is the index in the output sextet buffer where the beginning of the sextets is written, numBytes is the number of bytes to be encoded, and *byteData and *sextetData are the respective buffers to read from and write to. The last for-loop sets elements of sextetData to zero, not to '=' as in the original code, when there is padding. Although zero bytes can be valid data, as the lengths of the buffers are known in advance, I presume this is not a problem. The function returns the number of sextets written, which can be checked against 4 * ((numBytes + 2) / 3). The first few sextets of the output buffer encode the number of bytes of data encoded in the rest of the buffer, with the number of sextets given by the formula.
The code for decoding sextets back to bytes is as follows:
int sextetsToBytes(int inx, int iny, int numBytes, CBYTE* sextetData, BYTE* byteData) {
    int numSextets = 4 * ((numBytes + 2) / 3);
    int padding = 0;
    if (sextetData[numSextets - 1 + inx] == 0) padding++;
    if (sextetData[numSextets - 2 + inx] == 0) padding++;
    int i, j;
    for (i = inx, j = iny; i < numSextets + inx;) {
        UINT sextetA = sextetData[i++];
        UINT sextetB = sextetData[i++];
        UINT sextetC = sextetData[i++];
        UINT sextetD = sextetData[i++];
        UINT triple = (sextetA << 18) + (sextetB << 12) + (sextetC << 6) + sextetD;
        if (j < numBytes) byteData[j++] = (triple >> 16) & 0xFF;
        if (j < numBytes) byteData[j++] = (triple >> 8) & 0xFF;
        if (j < numBytes) byteData[j++] = triple & 0xFF;
    }
    return j - iny - padding;
}
where, as before, inx and iny are the indices to start reading from and writing to a buffer, and numBytes is the number of bytes that will be in the output buffer, from which the number of input sextets is calculated. The length of the input buffer is found from the first few sextets written by bytesToSextets(), so inx is the position in the input sextet buffer where the actual conversion back to bytes starts. In the original function the number of sextets is given, from which the number of bytes is calculated using numSextets / 4 * 3. As this is already known, this is not done, and it should not make a difference. The last two arguments, *sextetData and *byteData, are respectively the input and output buffers.
An input buffer in bytes is created, converted to sextets, then as a test converted back to bytes. A comparison is made between the initial generated buffer of bytes and the output buffer in bytes after converting back from the intermediate sextet buffer. When the length of the input buffer is a multiple of 3, the match is perfect and the final output buffer is exactly the same. However, if the number of bytes in the initial buffer is not a multiple of 3, the last 3 bytes in the final output buffer may not match the original bytes. This obviously has something to do with the padding when the number of bytes is not a multiple of 3, but I am unable to find the source of the problem. Incidentally, the return values from the two functions are always correct, even when the last few bytes do not match.
In a header file I have the following typedefs:
typedef unsigned char BYTE;
typedef const unsigned char CBYTE;
typedef unsigned int UINT;
Although the main function is more complicated, in its simplest version it would have a form like:
// Allocate memory for bufA and bufB.
// Write the data length and other information into sextets 0 to 4 in bufB.
// Convert the bytes in bufA starting at index 0 to sextets in bufB starting at index 5.
int countSextets = bytesToSextets(0, 5, lenBufA, bufA, bufB);
// Allocate memory for bufC.
// Convert the sextets in bufB starting at index 5 back to bytes in bufC starting at index 0.
int countBytes = sextetsToBytes(5, 0, lenBufC, bufB, bufC);
As I said, this all works correctly, except that when lenBufA is not a multiple of 3, the last 3 recovered bytes in bufC do not match those in bufA, even though the calculated buffer lengths are all correct.
Perhaps someone can kindly help throw some light on this.
sextetData[numSextets - 1 - i] = 0; should be sextetData[iny + numSextets - 1 - i] = 0;.
The version of sextetsToBytes() I originally posted tested for padding by using:
if (sextetData[numSextets - 1 + inx] == 0) padding++;
if (sextetData[numSextets - 2 + inx] == 0) padding++;
because of course testing for '=' as in base64 cannot be used. However, testing for zero can still cause problems, as zero can be a valid data item. This indeed sometimes caused a difference between the specified number of output bytes and the number found by counting the bytes in the loop and subtracting the padding bytes. Simply removing the padding logic from the function, and then checking the returned count against the specified input value numBytes, works. The modified code is as follows:
int sextetsToBytes(int numBytes, CBYTE* sextetData, BYTE* byteData) {
    int numSextets = 4 * ((numBytes + 2) / 3);
    int i, j;
    for (i = 0, j = 0; i < numSextets;) {
        UINT sextetA = sextetData[i++];
        UINT sextetB = sextetData[i++];
        UINT sextetC = sextetData[i++];
        UINT sextetD = sextetData[i++];
        UINT triple = (sextetA << 18) + (sextetB << 12) + (sextetC << 6) + sextetD;
        if (j < numBytes) byteData[j++] = (triple >> 16) & 0xFF;
        if (j < numBytes) byteData[j++] = (triple >> 8) & 0xFF;
        if (j < numBytes) byteData[j++] = triple & 0xFF;
    }
    return j;
}
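For completeness, a small round-trip harness of my own (assuming bytesToSextets() with the index fix applied and this final sextetsToBytes() are in the same file):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef unsigned char BYTE;
typedef const unsigned char CBYTE;

/* Defined above. */
int bytesToSextets(int inx, int iny, int numBytes, CBYTE* byteData, BYTE* sextetData);
int sextetsToBytes(int numBytes, CBYTE* sextetData, BYTE* byteData);

int main(void) {
    BYTE bufA[50], bufC[50];
    BYTE bufB[4 * ((50 + 2) / 3)];
    for (int len = 1; len <= 50; len++) {
        for (int i = 0; i < len; i++)
            bufA[i] = (BYTE)rand();
        int countSextets = bytesToSextets(0, 0, len, bufA, bufB);
        int countBytes = sextetsToBytes(len, bufB, bufC);
        if (countSextets != 4 * ((len + 2) / 3) || countBytes != len ||
            memcmp(bufA, bufC, len) != 0)
            printf("mismatch at length %d\n", len);
    }
    puts("done");
    return 0;
}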
I'm taking my first steps in C and was trying to make a gradient color function that draws a bunch of rectangles to the screen (vertically).
This is the code so far:
void draw_gradient(uint32_t start_color, uint32_t end_color) {
    int steps = 8;
    int draw_height = window_height / 8;
    //Change this value inside the loop to write different color
    uint32_t loop_color = start_color;
    for (int i = 0; i < steps; i++) {
        draw_rect(0, i * draw_height, window_width, draw_height, loop_color);
    }
}
Ignoring the end_color for now, I want to try to pass in a simple red color like 0xFFFF0000 (ARGB), then take the red 'FF', convert it to an integer, and decrease it using the loop_color variable.
I'm not sure how to get the red value from the hex code, manipulate it as a number, and then write it back to hex. Any ideas?
So in 8 steps the code should, for example, go in hex from FF to 00, or as an integer from 255 to 0.
As you have said, your color is in ARGB format. This calculation assumes a vertical gradient, meaning the color changes linearly from the top to the bottom.
The steps are:
Get number of lines to draw; this is your rectangle height
Get A, R, G, B color components from your start and end colors
uint8_t start_a = start_color >> 24;
uint8_t start_r = start_color >> 16;
uint8_t start_g = start_color >> 8;
uint8_t start_b = start_color >> 0;
uint8_t end_a = end_color >> 24;
uint8_t end_r = end_color >> 16;
uint8_t end_g = end_color >> 8;
uint8_t end_b = end_color >> 0;
Calculate step for each of the components
float step_a = (float)(end_a - start_a) / (float)height;
float step_r = (float)(end_r - start_r) / (float)height;
float step_g = (float)(end_g - start_g) / (float)height;
float step_b = (float)(end_b - start_b) / (float)height;
Run a for loop and apply a different step to each color component:
for (int i = 0; i < height; ++i) {
    uint32_t color =
        (((uint32_t)(start_a + i * step_a) & 0xFF) << 24) |
        (((uint32_t)(start_r + i * step_r) & 0xFF) << 16) |
        (((uint32_t)(start_g + i * step_g) & 0xFF) << 8) |
         ((uint32_t)(start_b + i * step_b) & 0xFF);
    draw_horizontal_line(i, color);
}
It is better to use float for the step_x values and multiply/add on each iteration. Otherwise, with integer rounding, you may never increase the number, as it will always get rounded down.
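Putting the steps together, draw_gradient() could look like this (a sketch; the extern declarations for draw_horizontal_line() and window_height are assumptions about your existing framebuffer code):

#include <stdint.h>

extern int window_height;
extern void draw_horizontal_line(int y, uint32_t color);

void draw_gradient(uint32_t start_color, uint32_t end_color) {
    int height = window_height;

    /* Split both colors into their A, R, G, B components. */
    uint8_t start_a = start_color >> 24, end_a = end_color >> 24;
    uint8_t start_r = start_color >> 16, end_r = end_color >> 16;
    uint8_t start_g = start_color >> 8,  end_g = end_color >> 8;
    uint8_t start_b = start_color,       end_b = end_color;

    /* Per-line increments; float avoids the integer-rounding trap. */
    float step_a = (float)(end_a - start_a) / (float)height;
    float step_r = (float)(end_r - start_r) / (float)height;
    float step_g = (float)(end_g - start_g) / (float)height;
    float step_b = (float)(end_b - start_b) / (float)height;

    for (int i = 0; i < height; ++i) {
        uint32_t color =
            (((uint32_t)(start_a + i * step_a) & 0xFF) << 24) |
            (((uint32_t)(start_r + i * step_r) & 0xFF) << 16) |
            (((uint32_t)(start_g + i * step_g) & 0xFF) << 8)  |
             ((uint32_t)(start_b + i * step_b) & 0xFF);
        draw_horizontal_line(i, color);
    }
}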
I am writing a program that converts a binary value to a string holding its hexadecimal representation. Each byte of the value becomes the two hexadecimal characters of its representation in the string, so the result will be twice the size; one byte of the value needs two bytes in the string.
Hexadecimal Characters
0123456789 ;0x30 - 0x39
ABCDEF ;0x41 - 0x46
Example
0xF05C1E3A ;hex
4032568890 ;dec
would become
0x4630354331453341 ;hex
5057600944242766657 ;dec
Question?
Are there any elegant/alternative(/interesting) methods for converting between these states, other than a lookup table (bitwise operations, shifts, modulo, etc.)?
I'm not looking for a function in a library, but rather how one would/should be implemented. Any ideas?
Here's a solution with nothing but shifts, and/or, and add/subtract. No loops either.
uint64_t x, m;
x = 0xF05C1E3A;
/* Spread the eight nibbles out, one per byte. */
x = ((x & 0x00000000ffff0000LL) << 16) | (x & 0x000000000000ffffLL);
x = ((x & 0x0000ff000000ff00LL) << 8) | (x & 0x000000ff000000ffLL);
x = ((x & 0x00f000f000f000f0LL) << 4) | (x & 0x000f000f000f000fLL);
/* Add 6 to every byte: nibbles 10-15 now carry into bit 4. */
x += 0x0606060606060606LL;
/* m: high bit clear in bytes holding 0-9, set in bytes holding A-F. */
m = ((x & 0x1010101010101010LL) >> 4) + 0x7f7f7f7f7f7f7f7fLL;
/* Add '0'-6 (0x2a) to digit bytes and 'A'-10-6 (0x31) to letter bytes. */
x += (m & 0x2a2a2a2a2a2a2a2aLL) | (~m & 0x3131313131313131LL);
Above is the simplified version I came up with after a little time to reflect. Below is the original answer.
uint64_t x, m;
x = 0xF05C1E3A;
x = ((x & 0x00000000ffff0000LL) << 16) | (x & 0x000000000000ffffLL);
x = ((x & 0x0000ff000000ff00LL) << 8) | (x & 0x000000ff000000ffLL);
x = ((x & 0x00f000f000f000f0LL) << 4) | (x & 0x000f000f000f000fLL);
x += 0x3636363636363636LL;
m = (x & 0x4040404040404040LL) >> 6;
x += m;
m = m ^ 0x0101010101010101LL;
x -= (m << 2) | (m << 1);
See it in action: http://ideone.com/nMhJ2q
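In the simplified version above, x ends up holding the eight ASCII characters packed most significant byte first, so getting a printable string is just a matter of unpacking the bytes (a small demo of my own; the out[] buffer is mine):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t x = 0xF05C1E3A;
    x = ((x & 0x00000000ffff0000LL) << 16) | (x & 0x000000000000ffffLL);
    x = ((x & 0x0000ff000000ff00LL) << 8)  | (x & 0x000000ff000000ffLL);
    x = ((x & 0x00f000f000f000f0LL) << 4)  | (x & 0x000f000f000f000fLL);
    x += 0x0606060606060606LL;
    uint64_t m = ((x & 0x1010101010101010LL) >> 4) + 0x7f7f7f7f7f7f7f7fLL;
    x += (m & 0x2a2a2a2a2a2a2a2aLL) | (~m & 0x3131313131313131LL);

    /* The most significant byte holds the first hex digit. */
    char out[9];
    for (int i = 0; i < 8; i++)
        out[i] = (char)(x >> (56 - 8 * i));
    out[8] = '\0';
    printf("%s\n", out); /* prints F05C1E3A */
    return 0;
}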
Spreading out the nibbles to bytes is easy with pdep:
spread = _pdep_u64(raw, 0x0F0F0F0F0F0F0F0F);
Now we'd have to add 0x30 to bytes in the range 0-9 and 0x37 ('A' - 10) to higher bytes. This could be done by SWAR-subtracting 10 from every byte and then using the sign to select which number to add, such as (not tested):
H = 0x8080808080808080;
ten = 0x0A0A0A0A0A0A0A0A;
cmp = ((spread | H) - (ten & ~H)) ^ ((spread ^ ~ten) & H); // SWAR subtract
masks = ((cmp & H) >> 7) * 255;
// if x-10 is negative, take 0x30, else 0x37
add = (masks & 0x3030303030303030) | (~masks & 0x3737373737373737);
asString = spread + add;
That SWAR compare can probably be optimized since you shouldn't need a full subtract to implement it.
There are some different suggestions here, including SIMD: http://0x80.pl/articles/convert-to-hex.html
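For reference, a compilable sketch of the pdep route (my own assembly of the pieces: it needs a BMI2-capable x86 CPU and something like gcc -mbmi2, and it reuses the add-and-select ASCII fix-up from the answer above instead of the untested SWAR compare):

#include <stdio.h>
#include <stdint.h>
#include <immintrin.h> /* _pdep_u64, requires BMI2 */

int main(void) {
    uint64_t raw = 0xF05C1E3A;
    /* Spread each nibble of the low 32 bits into its own byte. */
    uint64_t spread = _pdep_u64(raw, 0x0F0F0F0F0F0F0F0F);

    /* ASCII fix-up: add 0x30 to 0-9, 0x37 to A-F (add-and-select trick). */
    spread += 0x0606060606060606;
    uint64_t m = ((spread & 0x1010101010101010) >> 4) + 0x7f7f7f7f7f7f7f7f;
    spread += (m & 0x2a2a2a2a2a2a2a2a) | (~m & 0x3131313131313131);

    for (int i = 7; i >= 0; i--)
        putchar((int)((spread >> (8 * i)) & 0xFF));
    putchar('\n'); /* prints F05C1E3A */
    return 0;
}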
A slightly simpler version based on Mark Ransom's:
uint64_t x = 0xF05C1E3A;
x = ((x & 0x00000000ffff0000LL) << 16) | (x & 0x000000000000ffffLL);
x = ((x & 0x0000ff000000ff00LL) << 8) | (x & 0x000000ff000000ffLL);
x = ((x & 0x00f000f000f000f0LL) << 4) | (x & 0x000f000f000f000fLL);
x = (x + 0x3030303030303030LL) +
(((x + 0x0606060606060606LL) & 0x1010101010101010LL) >> 4) * 7;
And if you want to avoid the multiplication:
uint64_t m, x = 0xF05C1E3A;
x = ((x & 0x00000000ffff0000LL) << 16) | (x & 0x000000000000ffffLL);
x = ((x & 0x0000ff000000ff00LL) << 8) | (x & 0x000000ff000000ffLL);
x = ((x & 0x00f000f000f000f0LL) << 4) | (x & 0x000f000f000f000fLL);
m = (x + 0x0606060606060606LL) & 0x1010101010101010LL;
x = (x + 0x3030303030303030LL) + (m >> 1) - (m >> 4);
A somewhat more general conversion from an integer to a string, in any base from 2 up to the length of the digits array:
#include <string.h>

char *reverse(char *);

const char digits[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";

char *convert(long long number, char *buff, int base)
{
    char *result = (buff == NULL || base > (int)strlen(digits) || base < 2) ? NULL : buff;
    char sign = 0;
    if (number < 0)
    {
        sign = '-';
        number = -number;
    }
    if (result != NULL)
    {
        do
        {
            *buff++ = digits[number % base];
            number /= base;
        } while (number);
        if (sign) *buff++ = sign;
        *buff = 0;
        reverse(result);
    }
    return result;
}
char *reverse(char *str)
{
    char tmp;
    int len;
    if (str != NULL)
    {
        len = strlen(str);
        for (int i = 0; i < len / 2; i++)
        {
            tmp = *(str + i);
            *(str + i) = *(str + len - i - 1);
            *(str + len - i - 1) = tmp;
        }
    }
    return str;
}
Example: counting from -50 to 50 decimal in base 23:
-24 -23 -22 -21 -20 -1M -1L -1K -1J -1I -1H -1G -1F -1E -1D
-1C -1B -1A -19 -18 -17 -16 -15 -14 -13 -12 -11 -10 -M -L
-K -J -I -H -G -F -E -D -C -B -A -9 -8 -7 -6
-5 -4 -3 -2 -1 0 1 2 3 4 5 6 7 8 9
A B C D E F G H I J K L M 10 11
12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E 1F 1G
1H 1I 1J 1K 1L 1M 20 21 22 23 24
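The table above can be reproduced with a small driver like this (my own test code, assuming convert() and reverse() from above are in the same file):

#include <stdio.h>

char *convert(long long number, char *buff, int base); /* defined above */

int main(void)
{
    char buff[66]; /* enough for a 64-bit value in base 2, plus sign and terminator */
    for (long long n = -50; n <= 50; n++)
        printf("%s ", convert(n, buff, 23));
    putchar('\n');
    return 0;
}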
A LUT (lookup table) C++ variant. I didn't check the actual machine code produced, but I believe any modern C++ compiler can catch the idea and compile it well.
static const char nibble2hexChar[] { "0123456789ABCDEF" };
// 17B in total, because I'm lazy to init it per char

void byteToHex(std::ostream & out, const uint8_t value) {
    out << nibble2hexChar[value>>4] << nibble2hexChar[value&0xF];
}
// this one is actually written more toward short+simple source than performance
void dwordToHex(std::ostream & out, uint32_t value) {
    int i = 8;
    while (i--) {
        out << nibble2hexChar[value>>28];
        value <<= 4;
    }
}
EDIT: For C code you just have to switch from std::ostream to some other output means; unfortunately your question lacks any details about what you are actually trying to achieve and why you don't use the built-in printf family of C functions.
For example C like this can write to some char* output buffer, converting arbitrary amount of bytes:
/**
 * Writes the "n" bytes of array "values" into "outputBuffer" in hexadecimal form.
 * Make sure there's enough space allocated in the output buffer, and add the zero
 * terminator yourself if you plan to use it as a C-string.
 *
 * @return pointer after the last character written.
 */
char* dataToHex(char* outputBuffer, const size_t n, const unsigned char* values) {
    for (size_t i = 0; i < n; ++i) {
        *outputBuffer++ = nibble2hexChar[values[i]>>4];
        *outputBuffer++ = nibble2hexChar[values[i]&0xF];
    }
    return outputBuffer;
}
And finally, I once helped somebody on Code Review who had a performance bottleneck exactly in hexadecimal formatting. There I wrote a conversion variant without a LUT, and the whole thread, with the other answer and the performance measurements, may be instructive for you: the fastest solution doesn't blindly convert the result, but mixes the conversion into the main operation to achieve better overall performance. That's why I wonder what you are actually trying to solve, as the whole problem often allows for a more optimal solution; if you are just asking about conversion, printf("%x", ...) is the safe bet.
Here is that another approach for "to hex" conversion:
fast C++ XOR Function
Decimal -> Hex
Just iterate through the string, convert every character to an int, and then you can do
printf("%02x", c);
or use sprintf to save the result to another variable.
Hex -> Decimal
Code
printf("%c",16 * hexToInt('F') + hexToInt('0'));
int hexToInt(char c)
{
    if (c >= 'a' && c <= 'z')
        c = c - ('a' - 'A');
    int sum;
    sum = c / 16 - 3;
    sum *= 10;
    sum += c % 16;
    return (sum > 9) ? sum - 1 : sum;
}
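A small driver of my own that uses hexToInt() to decode a whole hex string back to byte values:

#include <stdio.h>
#include <string.h>

int hexToInt(char c); /* defined above */

int main(void)
{
    const char *hex = "F05C1E3A";
    size_t len = strlen(hex);
    for (size_t i = 0; i + 1 < len; i += 2)
        printf("%d ", 16 * hexToInt(hex[i]) + hexToInt(hex[i + 1]));
    printf("\n"); /* 240 92 30 58 */
    return 0;
}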
The articles below compare different methods of converting digits to strings; hex numbers are not covered, but it seems not a big problem to switch from dec to hex:
Integers
Fixed and floating point
EDIT:
Thank you for pointing out that the answer above is not relevant.
A common way with no LUT is to split the integer into nibbles and map them to ASCII:
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <inttypes.h>

#define HI_NIBBLE(b) (((b) >> 4) & 0x0F)
#define LO_NIBBLE(b) ((b) & 0x0F)

void int64_to_char(char carr[], int64_t val){
    memcpy(carr, &val, 8);
}

uint64_t inp = 0xF05C1E3A;
char tmp_st[8];

int main()
{
    int64_to_char(tmp_st, inp);
    printf("Sample: %" PRIx64 "\n", inp);
    printf("Result: 0x");
    int started = 0; /* so that only leading zero bytes are skipped */
    for (unsigned int k = 8; k; k--){
        char tmp_ch = *(tmp_st + k - 1); /* assumes a little-endian layout */
        char hi_nib = HI_NIBBLE(tmp_ch);
        char lo_nib = LO_NIBBLE(tmp_ch);
        if (hi_nib || lo_nib || started){
            started = 1;
            printf("%c%c", hi_nib + ((hi_nib > 9) ? 55 : 48), lo_nib + ((lo_nib > 9) ? 55 : 48));
        }
    }
    printf("\n");
    return 0;
}
Another way is to use Allison's Algorithm. I am a total noob in ASM, so I post the code in the form I googled it.
Variant 1:
ADD AL,90h
DAA
ADC AL,40h
DAA
Variant 2:
CMP AL, 0Ah
SBB AL, 69h
DAS
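I can't vouch for the ASM details myself, but the net effect both variants are meant to achieve is the classic nibble-to-ASCII-hex-digit conversion, which in portable C reads:

#include <stdio.h>

/* What either sequence computes for a nibble value 0-15. */
static char nibbleToHex(unsigned n) {
    return (char)(n < 10 ? '0' + n : 'A' + n - 10);
}

int main(void) {
    for (unsigned n = 0; n < 16; n++)
        putchar(nibbleToHex(n));
    putchar('\n'); /* 0123456789ABCDEF */
    return 0;
}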
This code is supposed to convert an RGB color to hex in the 5:6:5 format: 5 bits for red, 6 bits for green, 5 bits for blue. I have no idea why it is not producing the correct color.
Does anyone know why?
int rgb(unsigned char r, unsigned char g, unsigned char b) {
    if (r < 0 || 255 < r || g < 0 || 255 < g || b < 0 || b > 255)
        return -1;
    int result;
    int red = r * 31 / 255;
    int green = g * 63 / 255;
    int blue = b * 31 / 255;
    //int result = (red << 11) | (green << 5) | blue;
    green = green << 5;
    red = red << 11;
    result = red | green | blue;
    //tests
    printf("\nred: %x", red);
    printf("\ngreen: %x", green);
    printf("blue: %x\n", blue);
    printf("result: %x\n", result);
    return result;
}
After another look at your question I don't really know what you're asking about. Anyway, I'm leaving my answer in case you find it useful.
Your rgb(...) function takes three byte arguments - they have 8 bits each.
Let's take "red" component into account first. If you pass XXXX XXXX (8 bits) and want to convert them into a 5-bit equivalent representation, it's enough to shift the value right by 3 bits, so:
int red = r >> 3;
The value XXXXXXXX will be truncated at the position of the pipe character:
XXXXX|xxx
so that only the bits marked with large Xes will be saved to the red variable.
The same goes for blue; for the green component, you have to shift it right by two (8 - 6 = 2).
You probably want your function to work like this:
int rgb(unsigned char r, unsigned char g, unsigned char b) {
    if (r < 0 || 255 < r || g < 0 || 255 < g || b < 0 || b > 255)
        return -1;
    unsigned char red = r >> 3;
    unsigned char green = g >> 2;
    unsigned char blue = b >> 3;
    int result = (red << (5 + 6)) | (green << 5) | blue;
    //tests
    printf("red: %x\n", red);
    printf("green: %x\n", green);
    printf("blue: %x\n", blue);
    printf("result: %x\n", result);
    return result;
}
Assuming 8-bit char, your unsigned char arguments must already be in the 0-255 range, so you don't need to check that. And the multiplication you're trying to use to scale the color components is probably not a good approach.
A better approach would be to AND each component with a mask to get the upper 5 bits (6 for green), shift them to the proper positions, and OR them together. When shifting, remember to account for the fact that you're using the upper bits... and for the last component, you won't need to AND with a mask because the unneeded bits are shifted out anyway. So this gets you something like this (as the only line in your function):
return ((r & 0xf8) << 8) | ((g & 0xfc) << 3) | (b >> 3);
(r & 0xf8) gets the upper 5 bits of r. These are then left shifted by 8 bits, so they move from positions 3..7 into 11..15.
(g & 0xfc) gets the upper 6 bits of g. Those are then left shifted by 3 bits, from 2..7 into 5..10.
b doesn't need to be masked... it's just shifted right 3 bits. Its upper 5 bits are then moved from 3..7 into 0..4, and its lower 3 bits are discarded when they're shifted out.
All those values are then ORed together to get your RGB 5:6:5 value, and returned.
Alternatively, if you prefer shifts over AND, you can use:
return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3);
You might also consider changing the return type to an unsigned 16-bit type and not worry about returning an error value (there isn't really any kind of error condition to check for here).
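For instance, a quick sanity check of that one-liner (my own test values; the function name rgb565 is hypothetical):

#include <stdio.h>

unsigned rgb565(unsigned char r, unsigned char g, unsigned char b) {
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3);
}

int main(void) {
    printf("%04x\n", rgb565(0xFF, 0xFF, 0xFF)); /* ffff: white */
    printf("%04x\n", rgb565(0xFF, 0x00, 0x00)); /* f800: pure red */
    printf("%04x\n", rgb565(0x00, 0xFF, 0x00)); /* 07e0: pure green */
    return 0;
}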
You need a function that shows you the binary contents, so that you can "count" the bits and find errors more easily. My approach also adds a rounding routine:
#include <stdio.h>

char* sprint_bin (unsigned a, unsigned count, char* bin)
{
    char* p = bin;
    unsigned i;
    unsigned mask = 1u << (count - 1); /* a shift is exact; pow() goes through float */
    unsigned b;
    for (i = 0; i < count; ++i)
    {
        b = (a & mask) ? '1' : '0';
        p += sprintf (p, "%c ", b);
        mask >>= 1;
    }
    return bin;
}

unsigned rgb(unsigned char r, unsigned char g, unsigned char b) {
    char bin[65]; /* 32 bits at two chars each, plus the terminator */
    unsigned result;
    printf("r: %s\n", sprint_bin(r,8,bin));
    printf("g: %s\n", sprint_bin(g,8,bin));
    printf("b: %s\n", sprint_bin(b,8,bin));
    // masks
    unsigned red = (unsigned)(r & 0xF8) << 8;
    unsigned green = (unsigned)(g & 0xFC) << 3;
    unsigned blue = (unsigned)(b >> 3);
    // rounding: round up when the highest dropped bit is set
    if ((r & 4) && (r < 0xF8)) red += 0x0800;
    if ((g & 2) && (g < 0xFC)) green += 0x20;
    if ((b & 4) && (b < 0xF8)) blue++;
    // 5:6:5
    result = red | green | blue;
    // test
    printf("red: %s\n", sprint_bin(red,16,bin));
    printf("green: %s\n", sprint_bin(green,16,bin));
    printf("blue: %s\n", sprint_bin(blue,16,bin));
    printf("result: %s\n", sprint_bin(result,32,bin));
    return result;
}

int main ()
{
    rgb (0x81, 0x87, 0x9F);
    return 0;
}
I have Y, Cb and Cr values, each with a size of 8 bits. What would be a simple C function which can convert these values to R,G,B (each with a size of 8 bits)?
Here is a prototype of the function I am looking for:
void convertYCbCrToRGB(
    unsigned char Y,
    unsigned char Cb,
    unsigned char Cr,
    unsigned char &r,
    unsigned char &g,
    unsigned char &b);
P.S.
I am looking for the correct conversion formula only, since I have found different versions of it everywhere. I am very well-versed in C/C++.
Here is my solution to my question.
This one is a full-range YCbCr-to-RGB conversion routine.
Color GetColorFromYCbCr(int y, int cb, int cr, int a)
{
    double Y = (double) y;
    double Cb = (double) cb;
    double Cr = (double) cr;

    int r = (int) (Y + 1.40200 * (Cr - 0x80));
    int g = (int) (Y - 0.34414 * (Cb - 0x80) - 0.71414 * (Cr - 0x80));
    int b = (int) (Y + 1.77200 * (Cb - 0x80));

    r = Max(0, Min(255, r));
    g = Max(0, Min(255, g));
    b = Max(0, Min(255, b));

    return Color.FromArgb(a, r, g, b);
}
The problem is that nearly everybody confuses YCbCr, YUV, and YPbPr, so the literature you can find is often crappy. First you have to know if you really have YCbCr or if someone is lying to you :-).
YUV coded data comes from analog sources (PAL video decoder, S-Video, ...)
YPbPr coded data also comes from analog sources but produces better color results than YUV (Component Video)
YCbCr coded data comes from digital sources (DVB, HDMI, ...)
YPbPr and YCbCr are related. Here are the right formulae:
https://web.archive.org/web/20180421030430/http://www.equasys.de/colorconversion.html
(the archive.org has been added to fix the old, broken link).
The integer operation of the ITU-R standard for YCbCr is (from Wikipedia):
Cr = Cr - 128;
Cb = Cb - 128;
r = Y + Cr + (Cr >> 2) + (Cr >> 3) + (Cr >> 5);
g = Y - ((Cb >> 2) + (Cb >> 4) + (Cb >> 5)) - ((Cr >> 1) + (Cr >> 3) + (Cr >> 4) + (Cr >> 5));
b = Y + Cb + (Cb >> 1) + (Cb >> 2) + (Cb >> 6);
or, equivalently but more concisely:
Cr = Cr - 128;
Cb = Cb - 128;
r = Y + 45 * Cr / 32;
g = Y - (11 * Cb + 23 * Cr) / 32;
b = Y + 113 * Cb / 64;
Do not forget to clamp the values of r, g and b to [0, 255].
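Wrapped into the prototype from the question, but in plain C (pointers instead of references), a sketch using the concise integer form above could look like:

#include <stdio.h>

void convertYCbCrToRGB(unsigned char Y, unsigned char Cb, unsigned char Cr,
                       unsigned char *r, unsigned char *g, unsigned char *b)
{
    int cb = Cb - 128;
    int cr = Cr - 128;
    int R = Y + 45 * cr / 32;
    int G = Y - (11 * cb + 23 * cr) / 32;
    int B = Y + 113 * cb / 64;
    /* Clamp to [0, 255]. */
    *r = R < 0 ? 0 : R > 255 ? 255 : (unsigned char)R;
    *g = G < 0 ? 0 : G > 255 ? 255 : (unsigned char)G;
    *b = B < 0 ? 0 : B > 255 ? 255 : (unsigned char)B;
}

int main(void) {
    unsigned char r, g, b;
    convertYCbCrToRGB(128, 128, 128, &r, &g, &b); /* mid gray */
    printf("%u %u %u\n", r, g, b);                /* 128 128 128 */
    return 0;
}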
Check out this page. It contains useful information on conversion formulae.
As an aside, you could return an unsigned int with the values for RGBA encoded from the most significant byte to the least significant byte, i.e.
unsigned int YCbCrToRGBA(unsigned char Y, unsigned char Cb, unsigned char Cr) {
    unsigned char R = // conversion;
    unsigned char G = // conversion;
    unsigned char B = // conversion;
    return ((unsigned)R << 24) + ((unsigned)G << 16) + ((unsigned)B << 8) + 255;
}