How to pass parameters? - C

I have been converting the command convert rose.png -sparse-color barycentric '0,0 black 69,0 white' roseModified.png into the MagickWand C API.
double arguments[6];
arguments[0] = 0.0;
arguments[1] = 0.0;
// arguments[2] = "black";
arguments[2] = 69.0;
arguments[3] = 0.0;
// arguments[5] = "white";
MagickSparseColorImage(wand0, BarycentricColorInterpolate, 4,arguments);
MagickWriteImage(wand0,"rose_cylinder_22.png");
I don't know how to pass the colors as double arguments. See the MagickSparseColorImage documentation for the method's definition.
UPDATE:
(Source image)
After I executed convert rose.png -sparse-color barycentric '0,0 black 69,0 white' roseModified.png, I got the image below.
(Result image)
I don't get this output from my C program. The problem might be with how I pass white and black.

For sparse colors, you need to convert the color to doubles for each channel. Depending on how dynamically you need to generate sparse-color points, you may want to build basic stack-management methods.
Here's an example. (Mind that this is a quick example, and can be improved on greatly.)
#include <stdlib.h>
#include <MagickWand/MagickWand.h>

// Let's create a structure to keep track of arguments.
struct arguments {
    size_t count;
    double * values;
};

// Set up the structure, and allocate enough memory for all colors.
void allocate_arguments(struct arguments * stack, size_t size)
{
    stack->count = 0;
    // (2 coords + 3 color channels) * number of colors
    stack->values = malloc(sizeof(double) * (size * 5));
}

// Append a double value to the structure.
void push_double(struct arguments * stack, double value)
{
    stack->values[stack->count++] = value;
}

// Append all channel parts of a color to the structure.
void push_color(struct arguments * stack, PixelWand * color)
{
    push_double(stack, PixelGetRed(color));
    push_double(stack, PixelGetGreen(color));
    push_double(stack, PixelGetBlue(color));
}

#define NUMBER_OF_COLORS 2

int main(int argc, const char * argv[]) {
    MagickWandGenesis();
    MagickWand * wand;
    PixelWand ** colors;
    struct arguments A;
    allocate_arguments(&A, NUMBER_OF_COLORS);
    colors = NewPixelWands(NUMBER_OF_COLORS);
    PixelSetColor(colors[0], "black");
    PixelSetColor(colors[1], "white");
    // 0,0 black
    push_double(&A, 0);
    push_double(&A, 0);
    push_color(&A, colors[0]);
    // 69,0 white
    push_double(&A, 69);
    push_double(&A, 0);
    push_color(&A, colors[1]);
    // convert rose:
    wand = NewMagickWand();
    MagickReadImage(wand, "rose:");
    // -sparse-color barycentric '0,0 black 69,0 white'
    MagickSparseColorImage(wand, BarycentricColorInterpolate, A.count, A.values);
    MagickWriteImage(wand, "/tmp/output.png");
    // Clean up.
    wand = DestroyMagickWand(wand);
    free(A.values);
    MagickWandTerminus();
    return 0;
}


is there a way to convert type char to structTexture in Raylib

I keep getting the error "func.h:23: error: cannot cast 'int' to 'struct Texture'" even though I'm passing text as the tile in renderTiles(). Am I just being dumb here? ¯\_(ツ)_/¯ I'm new to C, so this might just be me. I copied and pasted the basic window example to create this. I know there isn't a char type before tile in "renderTiles(tile)"; I tried that as well and it still did not work.
main.c:
/*******************************************************************************************
*
* raylib [core] example - Basic window
*
* Welcome to raylib!
*
* To test examples, just press F6 and execute raylib_compile_execute script
* Note that compiled executable is placed in the same folder as .c file
*
* You can find all basic examples on C:\raylib\raylib\examples folder or
* raylib official webpage: www.raylib.com
*
* Enjoy using raylib. :)
*
* Example originally created with raylib 1.0, last time updated with raylib 1.0
* Example licensed under an unmodified zlib/libpng license, which is an OSI-certified,
* BSD-like license that allows static linking with closed source software
*
* Copyright (c) 2013-2022 Ramon Santamaria (#raysan5)
*
********************************************************************************************/
#include "raylib.h"
#include "func.h"
//------------------------------------------------------------------------------------
// Program main entry point
//------------------------------------------------------------------------------------
int main(void)
{
    // Initialization
    //--------------------------------------------------------------------------------------
    const int screenWidth = 768;
    const int screenHeight = 576;
    InitWindow(screenWidth, screenHeight, "raylib [core] example - basic window");
    SetTargetFPS(60);               // Set our game to run at 60 frames-per-second
    //--------------------------------------------------------------------------------------
    // textures
    Texture2D grass = LoadTexture("grass.png");
    Texture2D stone = LoadTexture("stone.png");
    Texture2D sand = LoadTexture("sand.png");
    Texture2D stone_oasis = LoadTexture("stone-oasis.png");
    Texture2D sand_oasis = LoadTexture("sand-oasis.png");
    Texture2D UI = LoadTexture("ui.png");
    // Main game loop
    while (!WindowShouldClose())    // Detect window close button or ESC key
    {
        // Update
        //----------------------------------------------------------------------------------
        // TODO: Update your variables here
        //----------------------------------------------------------------------------------
        // Draw
        //----------------------------------------------------------------------------------
        BeginDrawing();
        ClearBackground(RAYWHITE);
        DrawTexture(UI, 0, 0, RAYWHITE);
        renderTiles("grass");
        EndDrawing();
        //----------------------------------------------------------------------------------
    }
    // De-Initialization
    //--------------------------------------------------------------------------------------
    CloseWindow();        // Close window and OpenGL context
    //--------------------------------------------------------------------------------------
}
func.h
#ifndef FUNC
#define FUNC
const int screenWidth = 768;
const int screenHeight = 576;
int centerX(int x)
{
    int ox = x + (screenWidth / 2);
    return (ox);
}
int centerY(int y)
{
    int oy = y + (screenHeight / 2);
    return (oy);
}
void renderTiles(tile)
{
    for (int b = 0; b < 12; b = b + 1)
    {
        for (int a = 0; a < 12; a = a + 1) {
            DrawTexture(tile, (a * 48) + 96, (b * 48), RAYWHITE);
        }
    }
}
#endif
Answering the question title: you have already converted the string to a texture struct with
Texture2D grass = LoadTexture("grass.png");
The raylib cheat sheet shows the function you are calling to be
void DrawTexture(Texture2D texture, int posX, int posY, Color tint);
but you have passed the char* value "grass" to your function, which has an untyped argument, so the compiler assumes it to be type int.
Your function should be
/* void renderTiles(tile) */
void renderTiles(Texture2D tile)
and you should call it with
/* renderTiles("grass"); */
renderTiles(grass);
Also, you should not have executable code in a header file.

GTK+ Draw bitmap as mask using foreground colour

I've made a custom font as a bitmap stored in a XPM-image, and want to be able to draw it with a changeable foreground colour to a GdkDrawable-object. Basically, what I want is to use an image as a font and be able to change the colour. Any suggestions how to do this?
It's not exactly the solution I intended at first, but it's a solution and it will have to do until a better one appears.
/* The XPM-image containing the font consists of 96 (16x6) ASCII characters starting with space. */
#include <string.h> /* for memset() */
#include "font.xpm"
typedef struct Font {
    GdkPixbuf *image[16];     /* Image of font for each colour. */
    const int width;          /* Grid-width. */
    const int height;         /* Grid-height. */
    const int ascent;         /* Font ascent from baseline. */
    const int char_width[96]; /* Width of each character. */
} Font;
typedef enum TextColor { FONT_BLACK,FONT_BROWN,FONT_YELLOW,FONT_CYAN,FONT_RED,FONT_WHITE } TextColor;
typedef enum TextAlign { ALIGN_LEFT,ALIGN_RIGHT,ALIGN_CENTER } TextAlign;
Font font = {
{0},
7,9,9,
{
5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,
5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,
5,5,5,5,5,5,5,5,5,5,5,5,5,6,5,5,
5,5,5,5,5,5,5,6,5,5,5,5,5,5,5,5,
5,5,5,5,5,5,5,5,5,5,5,5,5,6,5,5,
5,5,5,5,5,5,5,6,5,5,5,5,5,5,5,5,
}
};
void load_font(Font *font, const char **font_xpm) {
    const char *colors[] = { /* It's not complicated to adjust for more elaborate colour schemes. */
        ". c #000000", /* Black */
        ". c #3A2613", /* Brown */
        ". c #FFFF00", /* Yellow */
        ". c #00FFFF", /* Cyan */
        ". c #FF0000", /* Red */
        ". c #FFFFFF", /* White */
        NULL};
    int i;
    memset(font->image, 0, sizeof(GdkPixbuf *) * 16);
    for (i = 0; colors[i] != NULL; ++i) {
        font_xpm[2] = colors[i]; /* Second colour is assumed to be the font colour. */
        font->image[i] = gdk_pixbuf_new_from_xpm_data(font_xpm);
    }
}
int draw_string(Font *font, int x, int y, TextAlign align, TextColor color, const char *str) {
    int i, w = 0;
    const char *p;
    /* Measure the string width first so it can be aligned. */
    for (p = str; *p; ++p) {
        i = *p - ' ';
        w += i >= 0 && i < 96 ? font->char_width[i] : 0;
    }
    if (align == ALIGN_RIGHT) x -= w;
    else if (align == ALIGN_CENTER) x -= w / 2;
    for (p = str; *p; ++p) {
        i = *p - ' ';
        if (i >= 0 && i < 96) {
            /* pixmap and gc are assumed to be the target GdkDrawable and GdkGC. */
            gdk_draw_pixbuf(pixmap, gc, font->image[(int)color],
                            (i % 16) * font->width, (i / 16) * font->height,
                            x, y, font->char_width[i], font->height,
                            GDK_RGB_DITHER_NONE, 0, 0);
            x += font->char_width[i];
        }
    }
    return x;
}

How to convert a kCVPixelFormatType_420YpCbCr8BiPlanarFullRange Buffer to YUV420 using libyuv library in ios?

I have captured video using AVFoundation. I have set the video settings and get the kCVPixelFormatType_420YpCbCr8BiPlanarFullRange format in the output sample buffer, but I need the YUV420 format for further processing.
For that I use the libyuv framework.
LIBYUV_API
int NV12ToI420(const uint8* src_y, int src_stride_y,
               const uint8* src_uv, int src_stride_uv,
               uint8* dst_y, int dst_stride_y,
               uint8* dst_u, int dst_stride_u,
               uint8* dst_v, int dst_stride_v,
               int width, int height);

libyuv::NV12ToI420(src_yplane, inWidth,
                   src_uvplane, inWidth,
                   dst_yplane, inWidth,
                   dst_vplane, inWidth / 2,
                   dst_uplane, inWidth / 2,
                   inWidth, inHeight);
But the output buffer I get is all green. Did I make a mistake somewhere in this process? Please help me.
Looks right. Make sure your src_uvplane points to src_yplane + inWidth * inHeight
You need to convert your data to I420. I am processing camera frames too, but on Android; I think it should be similar on iOS. Android raw camera is NV21 or NV16 format. I convert from NV21 or NV16 to YV12; I420 is almost the same as YV12:
/* BYTE is a typedef for unsigned char; m_y2 points at the Y plane of the current frame. */
BYTE m_y[BIG_VIDEO_CX * BIG_VIDEO_CY],
     m_u[(BIG_VIDEO_CX / 2) * (BIG_VIDEO_CY / 2)],
     m_v[(BIG_VIDEO_CX / 2) * (BIG_VIDEO_CY / 2)];
BYTE *m_y2;
void NV21_TO_YV12(BYTE *data)
{
    int width = BIG_VIDEO_CX;
    int height = BIG_VIDEO_CY;
    m_y2 = data;
    data = &data[width * height];
    for (uint32_t i = 0; i < (width / 2) * (height / 2); ++i)
    {
        m_v[i] = *data;
        m_u[i] = *(data + 1);
        data += 2;
    }
}
void NV16_TO_YV12(BYTE *data)
{
    int width = BIG_VIDEO_CX;
    int height = BIG_VIDEO_CY;
    m_y2 = data;
    const BYTE* src_uv = (const BYTE*)&data[width * height];
    BYTE* dst_u = m_u;
    BYTE* dst_v = m_v;
    for (uint32_t y = 0; y < height / 2; ++y)
    {
        const BYTE* src_uv2 = src_uv + width;
        for (uint32_t x = 0; x < width / 2; ++x)
        {
            dst_u[x] = (src_uv[0] + src_uv2[0] + 1) >> 1;
            dst_v[x] = (src_uv[1] + src_uv2[1] + 1) >> 1;
            src_uv += 2;
            src_uv2 += 2;
        }
        src_uv = src_uv2;
        dst_u += width / 2;
        dst_v += width / 2;
    }
}
Android is NV21, which libyuv supports with Arm as well as Intel. It can also rotate by 90, 180 or 270 as part of the conversion if necessary for orientation.
The Arm-optimized version is about 2x faster than C:
C:
NV12ToI420_Opt (782 ms)
NV21ToI420_Opt (764 ms)
Arm (Neon optimized):
NV12ToI420_Opt (398 ms)
NV21ToI420_Opt (381 ms)
Curious you use NV16 on Android. I'd expect NV61 for consistency with NV21. Your code looks correct, but would nicely optimize into Neon using vrhadd.u8. File a libyuv issue if you'd like to see that. https://code.google.com/p/libyuv/issues/list
Here is how I do it on iOS in my captureOutput after I get a raw video frame from AVCaptureSession(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange):
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef videoFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
    CFRetain(sampleBuffer);
    CVPixelBufferLockBaseAddress(videoFrame, 0);
    size_t _width = CVPixelBufferGetWidth(videoFrame);
    size_t _height = CVPixelBufferGetHeight(videoFrame);
    const uint8* plane1 = (uint8*)CVPixelBufferGetBaseAddressOfPlane(videoFrame, 0);
    const uint8* plane2 = (uint8*)CVPixelBufferGetBaseAddressOfPlane(videoFrame, 1);
    size_t plane1_stride = CVPixelBufferGetBytesPerRowOfPlane(videoFrame, 0);
    size_t plane2_stride = CVPixelBufferGetBytesPerRowOfPlane(videoFrame, 1);
    size_t plane1_size = plane1_stride * CVPixelBufferGetHeightOfPlane(videoFrame, 0);
    size_t plane2_size = plane2_stride * CVPixelBufferGetHeightOfPlane(videoFrame, 1);
    size_t frame_size = plane1_size + plane2_size;
    uint8* buffer = new uint8[frame_size];
    uint8* dst_u = buffer + plane1_size;
    uint8* dst_v = dst_u + plane1_size / 4;
    // Let libyuv convert
    libyuv::NV12ToI420(/*const uint8* src_y=*/plane1, /*int src_stride_y=*/plane1_stride,
                       /*const uint8* src_uv=*/plane2, /*int src_stride_uv=*/plane2_stride,
                       /*uint8* dst_y=*/buffer, /*int dst_stride_y=*/plane1_stride,
                       /*uint8* dst_u=*/dst_u, /*int dst_stride_u=*/plane2_stride / 2,
                       /*uint8* dst_v=*/dst_v, /*int dst_stride_v=*/plane2_stride / 2,
                       _width, _height);
    CVPixelBufferUnlockBaseAddress(videoFrame, 0);
    CFRelease(sampleBuffer);
    // TODO: call your method here with 'buffer'. Note that you need to delete[] the buffer after using it.
}
I made the code a bit more descriptive for clarity.

How to call LSD (LineSegmentDetector) from a c language program?

I'm using LSD to detect straight lines in an image. The code that I have downloaded contains a minimal example of calling LSD, but it's static (i.e., it only processes the image built in the main function). I want to apply the code to a video. This is the minimal example that produces static results:
#include <stdio.h>
#include "lsd.h"
int main(void)
{
    image_double image;
    ntuple_list out;
    unsigned int x, y, i, j;
    unsigned int X = 512;  /* x image size */
    unsigned int Y = 512;  /* y image size */
    /* create a simple image: left half black, right half gray */
    image = new_image_double(X, Y);
    for (x = 0; x < X; x++)
        for (y = 0; y < Y; y++)
            image->data[ x + y * image->xsize ] = x < X/2 ? 0.0 : 64.0; /* image(x,y) */
    IplImage* imgInTmp = cvLoadImage("C:\Documents and Settings\Eslam farag\My Documents\Visual Studio 2008\Projects\line\hand.JPEG", 0);
    /* call LSD */
    out = lsd(image);
    /* print output */
    printf("%u line segments found:\n", out->size);
    for (i = 0; i < out->size; i++)
    {
        for (j = 0; j < out->dim; j++)
            printf("%f ", out->values[ i * out->dim + j ]);
        printf("\n");
    }
    /* free memory */
    free_image_double(image);
    free_ntuple_list(out);
    return 0;
}
If anyone can help me apply the code to video, I will be pleased. Thanks.
Best regards,
Since I couldn't find a complete example, I'm sharing some code I wrote that uses OpenCV to load a video file from disk and perform some image processing on it.
The application takes a filename as input (on the command line) and converts each frame of the video to its grayscale equivalent using the OpenCV built-in function cvCvtColor().
I added some comments on the code to help you understand the basic tasks.
read_video.cpp:
#include <stdio.h>
#include <highgui.h>
#include <cv.h>
int main(int argc, char* argv[])
{
    cvNamedWindow("video", CV_WINDOW_AUTOSIZE);
    CvCapture *capture = cvCaptureFromAVI(argv[1]);
    if (!capture)
    {
        printf("!!! cvCaptureFromAVI failed (file not found?)\n");
        return -1;
    }
    IplImage* frame;
    char key = 0;
    while (key != 'q') // Loop for querying video frames. Pressing Q will quit
    {
        frame = cvQueryFrame(capture);
        if (!frame)
        {
            printf("!!! cvQueryFrame failed\n");
            break;
        }
        /* Let's do a grayscale conversion just 4 fun */
        // A grayscale image has only one channel, and most probably the original
        // video works with 3 channels (RGB). So, for the conversion to work, we
        // need to allocate an image with only 1 channel to store the result of
        // this operation.
        IplImage* gray_frame = 0;
        gray_frame = cvCreateImage(cvSize(frame->width, frame->height), frame->depth, 1);
        if (!gray_frame)
        {
            printf("!!! cvCreateImage failed!\n");
            return -1;
        }
        cvCvtColor(frame, gray_frame, CV_RGB2GRAY); // The conversion itself
        // Display processed frame on window
        cvShowImage("video", gray_frame);
        // Release allocated resources
        cvReleaseImage(&gray_frame);
        key = cvWaitKey(33);
    }
    cvReleaseCapture(&capture);
    cvDestroyWindow("video");
    return 0;
}
Compiled with:
g++ read_video.cpp -o read `pkg-config --cflags --libs opencv`
If you want to know how to iterate through the pixels of the frame to do your custom processing, check the following answer, which shows how to do a manual grayscale conversion: OpenCV cvSet2D... what does this do
Here is an example of code using LSD with OpenCV:
#include <vector>   // for std::vector
#include "lsd.h"
// Note: Line and draw_lines are user-defined helpers, not part of LSD or OpenCV.
void Test_LSD(IplImage* img)
{
    IplImage* grey = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
    cvCvtColor(img, grey, CV_BGR2GRAY);
    image_double image;
    ntuple_list out;
    unsigned int x, y, i, j;
    image = new_image_double(img->width, img->height);
    for (x = 0; x < grey->width; x++)
        for (y = 0; y < grey->height; y++)
        {
            CvScalar s = cvGet2D(grey, y, x);
            double pix = s.val[0];
            image->data[ x + y * image->xsize ] = pix; /* image(x,y) */
        }
    /* call LSD */
    out = lsd(image);
    //out = lsd_scale(image, 1);
    /* print output */
    printf("%u line segments found:\n", out->size);
    vector<Line> vec;
    for (i = 0; i < out->size; i++)
    {
        //for (j = 0; j < out->dim; j++)
        {
            //printf("%f ", out->values[ i * out->dim + j ]);
            Line line;
            line.x1 = out->values[ i * out->dim + 0];
            line.y1 = out->values[ i * out->dim + 1];
            line.x2 = out->values[ i * out->dim + 2];
            line.y2 = out->values[ i * out->dim + 3];
            vec.push_back(line);
        }
        //printf("\n");
    }
    IplImage* black = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 3);
    cvZero(black);
    draw_lines(vec, black);
    /*cvNamedWindow("img", 0);
    cvShowImage("img", img);*/
    cvSaveImage("lines_detect.png", black /*img*/);
    /* free memory */
    free_image_double(image);
    free_ntuple_list(out);
}
or this way
IplImage* get_lines(IplImage* img, vector<Line>& vec_lines)
{
    //to grey
    //IplImage* grey = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
    //cvCvtColor(img, grey, CV_BGR2GRAY);
    image_double image;
    ntuple_list out;
    unsigned int x, y, i, j;
    image = new_image_double(img->width, img->height);
    for (x = 0; x < /*grey*/img->width; x++)
        for (y = 0; y < /*grey*/img->height; y++)
        {
            CvScalar s = cvGet2D(/*grey*/img, y, x);
            double pix = s.val[0];
            image->data[ x + y * image->xsize ] = pix;
        }
    /* call LSD */
    out = lsd(image);
    //out = lsd_scale(image, 1);
    /* print output */
    //printf("%u line segments found:\n", out->size);
    //vector<Line> vec;
    for (i = 0; i < out->size; i++)
    {
        //for (j = 0; j < out->dim; j++)
        {
            //printf("%f ", out->values[ i * out->dim + j ]);
            Line line;
            line.x1 = out->values[ i * out->dim + 0];
            line.y1 = out->values[ i * out->dim + 1];
            line.x2 = out->values[ i * out->dim + 2];
            line.y2 = out->values[ i * out->dim + 3];
            /*vec*/vec_lines.push_back(line);
        }
        //printf("\n");
    }
    IplImage* black = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
    cvZero(black);
    for (int i = 0; i < vec_lines.size(); ++i)
    {
        //if (vec[i].x1 == vec[i].x2 || vec[i].y1 == vec[i].y2)
        cvLine(black, cvPoint(vec_lines[i].x1, vec_lines[i].y1), cvPoint(vec_lines[i].x2, vec_lines[i].y2), CV_RGB(255,255,255), 1, CV_AA);
    }
    /*cvNamedWindow("img", 0);
    cvShowImage("img", img);*/
    //cvSaveImage("lines_detect.png", black/*img*/);
    /* free memory */
    //cvReleaseImage(&grey);
    free_image_double(image);
    free_ntuple_list(out);
    return black;
}

C - Convert GIF to JPG

I need to convert a GIF image to a JPEG image using the C programming language. I searched the web, but I didn't find an example which could help me. Any suggestions are appreciated!
EDIT: I want to do this using an cross-platform open-source library like SDL.
Try the GD or ImageMagick libraries
I found libafterimage to be incredibly simple to use.
In this snippet I also scale the image to at most width or at most height, while preserving aspect:
#include <libAfterImage/afterimage.h>
int convert_image_to_jpeg_of_size(const char* infile, const char* outfile, const double max_width, const double max_height)
{
    ASImage* im;
    ASVisual* asv;
    ASImage* scaled_im;
    double height;
    double width;
    double pixelzoom;
    double proportion;
    im = file2ASImage(infile, 0xFFFFFFFF, SCREEN_GAMMA, 0, ".", NULL);
    if (!im) {
        return 1;
    }
    proportion = (double)im->width / (double)im->height;
    asv = create_asvisual(NULL, 0, 0, NULL);
    if (proportion > 1) {
        /* Oblong. */
        width = max_width;
        pixelzoom = max_width / im->width;
        height = (double)im->height * pixelzoom;
    } else {
        height = max_height;
        pixelzoom = max_height / im->height;
        width = (double)im->width * pixelzoom;
    }
    scaled_im = scale_asimage(asv, im, width, height, ASA_ASImage, 0, ASIMAGE_QUALITY_DEFAULT);
    /* write the result into the file */
    ASImage2file(scaled_im, NULL, outfile, ASIT_Jpeg, NULL);
    destroy_asimage(&scaled_im);
    destroy_asimage(&im);
    return 0;
}
Not the easiest to use, but the fastest way is almost surely using libavcodec/libavformat from ffmpeg.
