I'm trying to blend two images with alpha using the Imlib2 library (this code changes the wallpaper).
So far I have this:
#include <stdio.h>
#include <X11/Xlib.h>
#include <Imlib2.h>

int main( int argc, char **argv )
{
    Display *display;
    Pixmap pixmap;
    Window root;
    Screen *screen;
    int width, height, w1, w2, h1, h2;
    Imlib_Image img, img2;

    // load images
    img = imlib_load_image( "/usr/share/wallpapers/wallpaper1.jpg" );
    img2 = imlib_load_image( "/usr/share/wallpapers/wallpaper2.jpg" );
    if( !img || !img2 ) {
        printf( "Unable to load image.\n" );
        return 1;
    }

    // open display, get screen and root window
    display = XOpenDisplay( NULL );
    if( !display ) {
        printf( "Unable to open display.\n" );
        return 1;
    }
    screen = DefaultScreenOfDisplay( display );
    root = DefaultRootWindow( display );
    width = screen->width;
    height = screen->height;

    // create pixmap
    pixmap = XCreatePixmap( display, root, width, height, DefaultDepthOfScreen( screen ) );

    // #S == BLENDING 1 =========================
    imlib_context_set_image( img );
    w1 = imlib_image_get_width();
    h1 = imlib_image_get_height();
    imlib_context_set_image( img2 );
    w2 = imlib_image_get_width();
    h2 = imlib_image_get_height();
    // blend all of img2 (source rect w2 x h2) onto all of img (dest rect w1 x h1)
    imlib_context_set_image( img );
    imlib_blend_image_onto_image( img2, 0, 0, 0, w2, h2, 0, 0, w1, h1 );
    // #E == BLENDING 1 =========================

    // set context
    imlib_context_set_image( img );
    imlib_context_set_display( display );
    imlib_context_set_visual( DefaultVisualOfScreen( screen ) );
    imlib_context_set_colormap( DefaultColormapOfScreen( screen ) );
    imlib_context_set_drawable( pixmap );

    // render image into pixmap, scaled to the screen size
    imlib_render_image_on_drawable_at_size( 0, 0, width, height );

    // set pixmap as background and clear window
    XSetWindowBackgroundPixmap( display, root, pixmap );
    XClearWindow( display, root );
    XFlush( display );

    // free resources
    XFreePixmap( display, pixmap );
    imlib_context_set_image( img );
    imlib_free_image();
    imlib_context_set_image( img2 );
    imlib_free_image();

    // close display
    XCloseDisplay( display );
    return 0;
}
It changes the wallpaper, but I want to blend these two images with a custom alpha; I mean something like this: How to blend two images.
So, how do I set an image's transparency for a blending operation in Imlib2?
Compile with: gcc main.c -lX11 -lImlib2
OK, I got it!
First I need to create a second Imlib image and fill it with a rectangle in a custom color whose alpha is what I want. Then I just copy the alpha channel from that image onto the first picture... That's just... ehh...
Imlib_Image img3;

// create a scratch image the size of img, filled with the desired alpha
imlib_context_set_image( img );
w1 = imlib_image_get_width();
h1 = imlib_image_get_height();
img3 = imlib_create_image( w1, h1 );
imlib_context_set_image( img3 );
imlib_image_set_has_alpha( 1 );
imlib_context_set_color( 0, 0, 0, 150 );   // alpha = 150 out of 255
imlib_image_fill_rectangle( 0, 0, w1, h1 );
// copy the constant alpha channel from img3 onto img
imlib_context_set_image( img );
imlib_image_set_has_alpha( 1 );
imlib_image_copy_alpha_to_image( img3, 0, 0 );
// blend the now-translucent img onto img2, merging alpha
imlib_context_set_image( img2 );
imlib_context_set_operation( IMLIB_OP_COPY );
imlib_blend_image_onto_image( img, 1, 0, 0, w1, h1, 0, 0, w1, h1 );
Here img and img2 are the two loaded pictures. This gives one image blended onto the other with the alpha given in imlib_context_set_color.
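An alternative that avoids the scratch image is Imlib2's color-modifier API: set a constant alpha table, bake it into one image, and then blend with alpha merging. A minimal sketch, assuming the w1/h1/w2/h2 values from the question's code and the same constant alpha of 150:

DATA8 r[256], g[256], b[256], a[256];
Imlib_Color_Modifier cm = imlib_create_color_modifier();
int i;
for ( i = 0; i < 256; i++ ) {
    r[i] = g[i] = b[i] = i;   // leave the RGB channels unchanged
    a[i] = 150;               // map every alpha value to a constant 150
}
imlib_context_set_image( img2 );
imlib_image_set_has_alpha( 1 );
imlib_context_set_color_modifier( cm );
imlib_set_color_modifier_tables( r, g, b, a );
imlib_apply_color_modifier();          // bakes the constant alpha into img2
imlib_free_color_modifier();
imlib_context_set_image( img );        // blend the now-translucent img2 onto img
imlib_blend_image_onto_image( img2, 1, 0, 0, w2, h2, 0, 0, w1, h1 );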
Related
I'm implementing a 3D demo application using the Babylon.js library. I'm importing a 3D model from S3 and adding a texture image on top of the material in React.
But when I add the texture image on top of the material, the rest of the 3D model turns black, and I want to get rid of that. The code works fine in the Babylon playground but fails in the React app.
Here is the source code
var mat = new BABYLON.CustomMaterial("mat", scene);
mat.diffuseTexture = new BABYLON.Texture(textureImage, scene, false, false);
materialedMeshes.forEach(mesh => mesh.material = mat);
mat.emissiveColor = new BABYLON.Color3(1, 1, 1);
// mat.diffuseColor = new BABYLON.Color3(1, 0, 1);
// mat.specularColor = new BABYLON.Color3(0.5, 0.6, 0.87);
// mat.emissiveColor = new BABYLON.Color3(1, 1, 1);
// mat.ambientColor = new BABYLON.Color3(0.23, 0.98, 0.53);
mat.diffuseTexture.uOffset = -0.1000;
mat.diffuseTexture.vOffset = -1.1800;
mat.diffuseTexture.uScale = 1.2200;
mat.diffuseTexture.vScale = 2.2200;
mat.diffuseTexture.uAng = Math.PI;
mat.diffuseTexture.wrapU = BABYLON.Constants.TEXTURE_CLAMP_ADDRESSMODE;
mat.diffuseTexture.wrapV = BABYLON.Constants.TEXTURE_CLAMP_ADDRESSMODE;
mat.Fragment_Custom_Alpha(`
if (baseColor.r == 0. && baseColor.g == 0. && baseColor.b == 0.) {
baseColor.rgb = vec3(0.85, 0.85, 0.85);
}
baseColor.rgb = mix(vec3(0.85, 0.85, 0.85), baseColor.rgb, baseColor.a);
`)
I am currently writing an OpenGL application and am getting GL_INVALID_OPERATION. The GL code is scattered among several files and it's hard to create a minimal example out of it, but I have created an OpenGL trace using apitrace. This is the chunk that created the error:
glMatrixMode(mode = GL_PROJECTION)
glLoadIdentity()
glViewport(x = 0, y = 0, width = 1190, height = 746)
glOrtho(left = 0, right = 1190, bottom = 0, top = 746, zNear = 0, zFar = 128)
glBegin(mode = GL_QUADS)
glColor4f(red = 0.5, green = 0.5, blue = 0.5, alpha = 1)
glVertex3f(x = 1190, y = 746, z = 0)
glColor4f(red = 0.5, green = 0.5, blue = 0.5, alpha = 1)
glVertex3f(x = 0, y = 746, z = 0)
glColor4f(red = 0.5, green = 0.5, blue = 0.5, alpha = 1)
glVertex3f(x = 0, y = 100, z = 0)
glColor4f(red = 0.5, green = 0.5, blue = 0.5, alpha = 1)
glVertex3f(x = 1190, y = 100, z = 0)
glEnd()
glGetError() = GL_INVALID_OPERATION
Does anyone have an idea what is going on here?
GL_QUADS has been deprecated since OpenGL 3.0 and was removed from the core profile in 3.1, so calling glBegin(GL_QUADS) in a core-profile context generates GL_INVALID_OPERATION.
https://www.khronos.org/opengl/wiki/Primitive#Quads
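If you are not sure which profile your context is, you can query it at run time. A minimal sketch, assuming a GL 3.2+ context where GL_CONTEXT_PROFILE_MASK is available:

GLint mask = 0;
glGetIntegerv(GL_CONTEXT_PROFILE_MASK, &mask);
if (mask & GL_CONTEXT_CORE_PROFILE_BIT)
    printf("core profile: glBegin()/GL_QUADS (and the matrix stack) are unavailable\n");

In a compatibility profile the trace above is legal, so an unexpectedly core context is a common cause of this error.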
I have a figure with two sets of graphs (detections and localizations). The first set (localizations) is red, orange, and pink; the second set (detections) is blue, black, and cyan. I created a renderer for each set in order to set the colors. I set tooltips to true, but when I mouse over the second set (detections) I can't see the labels. On mouseover I see labels only for the first set (see picture), not for the other set. Here is my code:
JFreeChart avg_chart = ChartFactory.createTimeSeriesChart(
"Average detections and localizations" ,
"" ,
"" ,
null ,
true , true , false);
avg_chart.setBackgroundPaint(Color.WHITE);
final XYPlot plot = avg_chart.getXYPlot( );
plot.setDataset(0,this.dataset_local);
plot.setDataset(1,this.dataset_detect);
plot.setRangeAxis(0,new NumberAxis("Localizations"));
plot.setRangeAxis(1,new NumberAxis("Detections"));
plot.mapDatasetToRangeAxis(0, 0);
plot.mapDatasetToRangeAxis(1, 1);
plot.setDomainCrosshairVisible(true);
plot.setRangeCrosshairVisible(true);
XYLineAndShapeRenderer renderer1 = (XYLineAndShapeRenderer) plot.getRenderer(0);//localization
renderer1.setSeriesPaint( 0 , Color.RED );
renderer1.setSeriesPaint( 1 , Color.MAGENTA );
renderer1.setSeriesPaint( 2 , Color.orange );
renderer1.setBaseItemLabelsVisible(true);
XYLineAndShapeRenderer renderer2 = new XYLineAndShapeRenderer(true, false); //detection ****************
renderer2.setSeriesPaint( 0 , Color.BLUE);
renderer2.setSeriesPaint( 1 , Color.BLACK );
renderer2.setSeriesPaint( 2 , Color.CYAN );
renderer2.setBaseItemLabelsVisible(true);
plot.setRenderer(0,renderer1);
plot.setRenderer(1,renderer2);
plot.setBackgroundPaint(Color.lightGray);
plot.setDomainGridlinePaint(Color.white);
plot.setRangeGridlinePaint(Color.white);
DateAxis axis = (DateAxis) plot.getDomainAxis();
axis.setDateFormatOverride(new SimpleDateFormat("dd/MM/yyyy"));
return avg_chart;
}
I have tried XYLineAndShapeRenderer renderer2 = (XYLineAndShapeRenderer) plot.getRenderer(1), but it gives me a NullPointerException.
ChartFactory.createTimeSeriesChart() installs only one renderer, which is why plot.getRenderer(1) returns null; when tooltips is true, it also adds an XYToolTipGenerator to that renderer (your renderer1). You probably just need to reuse it with renderer2:
renderer2.setBaseToolTipGenerator(renderer1.getBaseToolTipGenerator());
Or you can add a new one to renderer2:
XYToolTipGenerator toolTipGenerator2 = StandardXYToolTipGenerator.getTimeSeriesInstance();
renderer2.setBaseToolTipGenerator(toolTipGenerator2);
I have been trying to make an SDL program capable of taking a screenshot of the entire screen, and in this case displaying a live feed of my whole monitor. I have succeeded to the extent that I can retrieve an image using GDI functions, but I have no idea how to properly handle the data placed in my buffer after GetDIBits() returns. My image so far has been way off from the expected output: the colors are messed up in all the formats I've tried, which is most of the 32-bit and 24-bit pixel formats available for SDL textures. My Win32 code might also have a bug; I'm not completely sure, since the displayed image is incorrect.
Here is how I get a screenshot :
void WINAPI get_screenshot( app_data * app )
{
HDC desktop = GetDC( NULL );
int width = GetDeviceCaps( desktop, HORZRES );
int height = GetDeviceCaps( desktop, VERTRES );
HDC desktop_copy = CreateCompatibleDC( 0 );
HGDIOBJ old = NULL;
HBITMAP screenshot = CreateCompatibleBitmap( desktop_copy, app->viewport.w, app->viewport.h );
BITMAPINFOHEADER screenshot_header = { 0 };
screenshot_header.biSize = sizeof( BITMAPINFOHEADER );
screenshot_header.biWidth = app->viewport.w;
screenshot_header.biHeight = -app->viewport.h;
screenshot_header.biPlanes = 1;
screenshot_header.biBitCount = 32;
screenshot_header.biCompression = BI_RGB;
if ( !screenshot )
{
ReleaseDC( NULL, desktop );
DeleteDC( desktop_copy );
DeleteObject( screenshot );
free_app_data( app );
win_error( "Creating Bitmap", true );
}
SetStretchBltMode( desktop_copy, HALFTONE );
SetBrushOrgEx( desktop_copy, 0, 0, NULL );
old = SelectObject( desktop_copy, screenshot );
if ( !StretchBlt( desktop_copy, 0, 0, app->viewport.w, app->viewport.h, desktop, 0, 0, width, height, SRCCOPY ) )
{
ReleaseDC( NULL, desktop );
DeleteDC( desktop_copy );
DeleteObject( screenshot );
free_app_data( app );
win_error( "Stretching Screenshot to Window Size", true );
}
if ( !GetDIBits( desktop_copy, screenshot, 0, app->viewport.h, app->pixels, ( BITMAPINFO * )&screenshot_header, DIB_RGB_COLORS ) )
{
ReleaseDC( NULL, desktop );
DeleteDC( desktop_copy );
DeleteObject( screenshot );
free_app_data( app );
win_error( "Getting Window RGB Values", true );
}
SelectObject( desktop_copy, old );
DeleteObject( screenshot );
ReleaseDC( NULL, desktop );
DeleteDC( desktop_copy );
return;
}
I feel most of the code that calls my DLL functions is self-explanatory or isn't critical for this post, but I'll be happy to provide pseudocode or pure Win32 API code if necessary.
The code that creates the SDL texture and buffer is :
app->frame = SDL_CreateTexture(
app->renderer,
SDL_PIXELFORMAT_ABGR8888,
SDL_TEXTUREACCESS_STREAMING,
app->viewport.w,
app->viewport.h
);
if ( !app->frame )
{
free_app_data( app );
SDL_errorexit( "Creating texture", 1, TRUE );
}
app->pixels = ( Uint32 * )create_array( NULL, ( app->viewport.w * app->viewport.h ), sizeof( Uint32 ), zero_array );
if ( !app->pixels )
{
free_app_data( app );
std_error( "Creating pixel buffer", TRUE );
}
Again, I'm using my DLL function create_array() here, but I think you can tell what it does.
The resulting image is this: (screenshot omitted; the colors come out visibly wrong)
Feel free to suggest better methods, or pure SDL methods, of doing this. I have tried GetPixel(), and it returns correct values, but it has too much overhead for repeated calls.
The code for capturing and drawing the screen to your form:
HDC hdcScreen, hdcForm;
hdcScreen = GetDC(NULL);
hdcForm = GetWindowDC(hwnd); //hwnd is form handle
StretchBlt(hdcForm, 0, 0, formW, formH, hdcScreen , 0, 0, screenW, screenH, SRCCOPY );
ReleaseDC(hwnd, hdcForm);
ReleaseDC(NULL, hdcScreen);
It's that simple.
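A likely cause of the garbled colors in the question's code: CreateCompatibleBitmap() is called on the fresh memory DC, which yields a 1-bit monochrome bitmap, and GetDIBits() is called while the bitmap is still selected into that DC, which MSDN forbids. Also, 32-bit BI_RGB DIB data comes back in BGRA byte order, which on a little-endian machine matches SDL_PIXELFORMAT_ARGB8888, not ABGR8888. A minimal sketch of the capture path under those assumptions (w, h, screen_w, screen_h, pixels, and frame are placeholders; error handling omitted):

HDC screen = GetDC( NULL );
HDC mem = CreateCompatibleDC( screen );
// create the bitmap against the screen DC so it gets the screen's depth,
// not the 1-bit default of a fresh memory DC
HBITMAP bmp = CreateCompatibleBitmap( screen, w, h );
HGDIOBJ old = SelectObject( mem, bmp );
StretchBlt( mem, 0, 0, w, h, screen, 0, 0, screen_w, screen_h, SRCCOPY );
SelectObject( mem, old );              // deselect before calling GetDIBits
BITMAPINFOHEADER bih = { 0 };
bih.biSize = sizeof( bih );
bih.biWidth = w;
bih.biHeight = -h;                     // negative height = top-down rows
bih.biPlanes = 1;
bih.biBitCount = 32;
bih.biCompression = BI_RGB;
GetDIBits( mem, bmp, 0, h, pixels, ( BITMAPINFO * )&bih, DIB_RGB_COLORS );
SDL_UpdateTexture( frame, NULL, pixels, w * 4 );   // frame: an ARGB8888 texture
DeleteObject( bmp );
DeleteDC( mem );
ReleaseDC( NULL, screen );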
In the Quake2 source, in the function GL_BeginBuildingLightmaps in gl_rsurf.c, I saw this code:
if ( toupper( gl_monolightmap->string[0] ) == 'A' )
{
gl_lms.internal_format = gl_tex_alpha_format;
}
/*
** try to do hacked colored lighting with a blended texture
*/
else if ( toupper( gl_monolightmap->string[0] ) == 'C' )
{
gl_lms.internal_format = gl_tex_alpha_format;
}
else if ( toupper( gl_monolightmap->string[0] ) == 'I' )
{
gl_lms.internal_format = GL_INTENSITY8;
}
else if ( toupper( gl_monolightmap->string[0] ) == 'L' )
{
gl_lms.internal_format = GL_LUMINANCE8;
}
else
{
gl_lms.internal_format = gl_tex_solid_format;
}
GL_Bind( gl_state.lightmap_textures + 0 );
qglTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
qglTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
qglTexImage2D( GL_TEXTURE_2D,
0,
gl_lms.internal_format,
BLOCK_WIDTH, BLOCK_HEIGHT,
0,
GL_LIGHTMAP_FORMAT,
GL_UNSIGNED_BYTE,
dummy );
qglTexImage2D is the same as glTexImage2D.
The problem: while debugging I saw that the value passed as the third parameter (internalFormat) of qglTexImage2D is gl_tex_solid_format, which is 3. Is 3 a valid value for the internalFormat parameter?
3 is a perfectly legitimate value for internalFormat.
From the glTexImage2D() documentation:
internalFormat: Specifies the number of color components in the texture. Must be 1, 2, 3, or 4, or one of the following symbolic constants: ...
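In other words, legacy (pre-core-profile) GL accepts a bare component count, and the two calls below request the same three-component RGB texture (width, height, and pixels are placeholders). Core profile contexts removed the numeric form, so prefer the symbolic constant in new code:

glTexImage2D( GL_TEXTURE_2D, 0, 3,      width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels );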
Where does the value of gl_tex_solid_format come from? Are you sure you assigned GL_RGBA to it? Maybe you assigned 3 to gl_tex_solid_format instead. (In fact, Quake2's gl_image.c initializes gl_tex_solid_format to 3 and gl_tex_alpha_format to 4.)