Difference between DisplayContext, DisplaySurface & DisplayBuffer?

When working on graphics and display code, we keep encountering terms such as DisplayBuffer, DisplaySurface & DisplayContext. What is the difference between these terms?

It depends on the system; these are general terms and are often used interchangeably. But in general:
A display surface is the surface you perform drawing operations on, i.e. the thing you draw a line, circle, etc. onto. Often it is the physical screen surface you are writing to.
Although you draw on a display surface, in many cases you also have a display buffer: when you draw on the surface, you actually draw into the buffer, so the user doesn't see the drawing happening. When you've finished drawing, you flip the display buffer onto the surface so that the drawing appears instantaneously.
A display context is the description of the physical characteristics of the drawing surface, e.g. width, height, color depth and so on. In Win32, for example, you obtain a device context for a particular piece of hardware - a printer or screen - but you then draw on this device context, so it is also the display surface. Likewise you can obtain a device context for an offscreen bitmap (a display buffer). So the terms can blur a bit.
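In Win32 terms, all three ideas show up in an ordinary double-buffered paint: the window's DC is the display surface, a memory DC over an offscreen bitmap is the display buffer, and both are device contexts. A minimal sketch (error handling omitted, drawing contents illustrative):

```c
#include <windows.h>

/* Double-buffered paint: draw into an offscreen buffer, then "flip"
   it onto the visible surface in one BitBlt. */
void PaintDoubleBuffered(HWND hwnd)
{
    PAINTSTRUCT ps;
    HDC screen = BeginPaint(hwnd, &ps);   /* device context for the visible surface */
    RECT rc;
    GetClientRect(hwnd, &rc);

    /* The display buffer: an offscreen bitmap with its own device context */
    HDC buffer = CreateCompatibleDC(screen);
    HBITMAP bmp = CreateCompatibleBitmap(screen, rc.right, rc.bottom);
    HBITMAP old = (HBITMAP)SelectObject(buffer, bmp);

    /* Draw into the buffer; the user sees none of this happen */
    FillRect(buffer, &rc, (HBRUSH)(COLOR_WINDOW + 1));
    Ellipse(buffer, 10, 10, 200, 120);

    /* Copy the finished buffer onto the surface so it appears at once */
    BitBlt(screen, 0, 0, rc.right, rc.bottom, buffer, 0, 0, SRCCOPY);

    SelectObject(buffer, old);
    DeleteObject(bmp);
    DeleteDC(buffer);
    EndPaint(hwnd, &ps);
}
```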

Related

Create Vulkan surface for only a portion of a window

We have an application which has a window with a horizontal toolbar at the top. The Windows-level handle we pass to Vulkan to create the surface ends up including the area behind the toolbar, i.e. Vulkan is completely unaware of the toolbar and the surface includes the space "behind" it.
My question is, can a surface represent only a portion of this window? We obviously need not process data for the pixels that lie behind the toolbar, and so want to avoid creating a frame buffer, depth buffer etc. bigger than necessary.
I fully understand that I can accomplish this visually using a viewport which, e.g., has an origin offset and height compensation. However, to my understanding the framebuffer still contains information for pixels the full size of the surface (e.g. 800x600 for an 800x600 client-area window) even if I am only rendering to a portion of that window. The framebuffer then gets "mapped", and therefore squished, to the viewport area.
All of this has left me wondering what the purpose of a viewport is. If it simply defines a mapping from your image buffer to an area in the surface, is that not highly inefficient when your framebuffer contains considerably more pixels than the area it is being mapped to? Would it not make sense to section off portions in your application using e.g. different Windows HWNDs first, and then create different surfaces from there onwards?
How can I avoid rendering to an area bigger than necessary?
The way this gets handled for pretty much every application is that the client area of a window (i.e. the stuff that isn't toolbars and the like) is a child window of the main frame window. When the frame is resized, you resize the client window to match the new client area (taking into account the new sizes of the toolbars, etc.).
It is this client window which should have a Vulkan surface created for it.
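A minimal sketch of that arrangement, assuming a Win32 frame window and the VK_KHR_win32_surface extension; the toolbar height and the use of the STATIC class for the child are illustrative:

```c
#define VK_USE_PLATFORM_WIN32_KHR
#include <windows.h>
#include <vulkan/vulkan.h>

#define TOOLBAR_HEIGHT 32  /* assumed toolbar height */

/* Create a child window covering the client area below the toolbar. */
HWND CreateRenderChild(HINSTANCE hinst, HWND frame)
{
    RECT rc;
    GetClientRect(frame, &rc);
    return CreateWindowEx(0, TEXT("STATIC"), NULL,
                          WS_CHILD | WS_VISIBLE,
                          0, TOOLBAR_HEIGHT,
                          rc.right, rc.bottom - TOOLBAR_HEIGHT,
                          frame, NULL, hinst, NULL);
}

/* Create the Vulkan surface from the child's HWND, not the frame's. */
VkResult CreateSurfaceForChild(VkInstance instance, HINSTANCE hinst,
                               HWND child, VkSurfaceKHR *surface)
{
    VkWin32SurfaceCreateInfoKHR ci = {
        .sType = VK_STRUCTURE_TYPE_WIN32_SURFACE_CREATE_INFO_KHR,
        .hinstance = hinst,
        .hwnd = child,
    };
    return vkCreateWin32SurfaceKHR(instance, &ci, NULL, surface);
}
```

When the frame receives WM_SIZE you would MoveWindow() the child to the new client area (minus the toolbar) and recreate the swapchain, as with any resize.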

Is there any event for shapes?

I have an ellipse which is drawn on a window. I want to show a message when the pointer is over it (over the ellipse). How do I do that? Is there any event for shapes, like WM_MOVE or WM_SIZE?
I use TDM-GCC and the C language.
When you draw on a device context, all knowledge of what shape you draw is lost, and the system just retains the pixel by pixel information of that device context. So there is no way for the system to give you any information about the shapes that you draw because it knows nothing of those shapes.
In order to do what you want you need to keep track in your program of the high level logic of where your shapes are. Then when you handle mouse messages you can map them onto your own data structures that represent the shapes.
There are no events for mouse activity over drawings. You are expected to remember where you draw, and then map the mouse coordinates to the drawing coordinates yourself. To help with this, have a look at the PtInRegion() function. Create an elliptical HRGN via CreateEllipticRgn() or CreateEllipticRgnIndirect() that matches your drawing (in fact, you can use the same HRGN to help facilitate the drawing, see the FillRgn() function), and when you want to test if the mouse is currently inside the drawing, such as in a WM_MOUSEMOVE handler, you can use PtInRegion() for that.
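For instance, a sketch of the relevant window-procedure cases; the ellipse bounds are arbitrary, and the same HRGN is used both to draw the shape and to hit-test it:

```c
#include <windows.h>
#include <windowsx.h>   /* GET_X_LPARAM / GET_Y_LPARAM */

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    static HRGN ellipse;

    switch (msg) {
    case WM_CREATE:
        /* Same bounds as the drawing, so hit-testing matches the pixels */
        ellipse = CreateEllipticRgn(50, 50, 200, 150);
        return 0;
    case WM_PAINT: {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);
        FillRgn(hdc, ellipse, (HBRUSH)GetStockObject(GRAY_BRUSH));
        EndPaint(hwnd, &ps);
        return 0;
    }
    case WM_MOUSEMOVE:
        if (PtInRegion(ellipse, GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam)))
            SetWindowText(hwnd, TEXT("Pointer is on the ellipse"));
        else
            SetWindowText(hwnd, TEXT("Pointer is off the ellipse"));
        return 0;
    case WM_DESTROY:
        DeleteObject(ellipse);
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}
```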

SDL Relative Position

I have a theoretical question about SDL's surface "cursor".
If I want to display surface_A on my screen, I'll use a cursor created with SDL_Rect cursor; and pass it to SDL_BlitSurface().
The cursor will contain a position relative to the top-left corner of my window.
But if I want to display surface_B inside surface_A, do I have to indicate a cursor relative to the top-left corner of my window, or to the top-left corner of surface_A?
You may be making some wrong assumptions about the relative positions of your cursors. There is a very good and detailed set of tutorials at the linked location that may clear things up for you...
From HERE...
Using the first tutorial as our base, we'll delve more into the world of SDL surfaces. As I attempted to explain in the last lesson, SDL surfaces are basically images stored in memory. Imagine we have a blank 320x240 pixel surface. Illustrating the SDL coordinate system, we have something like this:
This coordinate system is quite different than the normal one you are familiar with. Notice how the Y coordinate increases going down, and the X coordinate increases going right. Understanding the SDL coordinate system is important in order to properly draw images on the screen.
Some additional terms that may help clarify:
SDL Window: you can think of this as physical pixels, or your monitor.
SDL Renderer: controls the properties/settings of what is created in that window.
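To answer the concrete question: the destination rectangle passed to SDL_BlitSurface() is interpreted in the destination surface's own coordinate system, so blitting surface_B into surface_A uses coordinates relative to surface_A's top-left corner. A minimal sketch (surface names taken from the question; the width/height fields of a destination rect are ignored by the blit):

```c
#include <SDL.h>

void compose(SDL_Surface *screen, SDL_Surface *surface_A, SDL_Surface *surface_B)
{
    SDL_Rect posA = { 100, 50, 0, 0 };  /* relative to the window surface */
    SDL_Rect posB = { 10, 10, 0, 0 };   /* relative to surface_A's top-left */

    SDL_BlitSurface(surface_B, NULL, surface_A, &posB); /* B into A */
    SDL_BlitSurface(surface_A, NULL, screen, &posA);    /* then A onto the screen */
}
```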

How to work with Sprite - Byte Array Assembly x86

In the last few days, while working on a project, I was introduced to the sprite as a byte array.
Unfortunately, I didn't find any information about sprites that could tell me more about what this is and how it works.
I would be really pleased if you could give me some information and examples of sprites.
A sprite is basically an image with a transparent background color or alpha channel which can be positioned on the screen and moved (usually involving redrawing the background over the old position). In the case of an animated sprite, the sprite may consist of several actual images making up the frames of the animation. The format of the image depends entirely on the hardware and/or technology being used to draw or render it. For speed, the dimensions are usually powers of two (8, 16, 32, 64, etc.), but this may not be necessary for modern hardware.
Traditionally (read: back in my day), you might have a 320x200x256 screen resolution and a 16x16x256 sprite with color 0 being transparent. Each refresh of the screen would begin with redrawing the background under the sprites, taking a copy of the background under their new positions, and then redrawing only the visible colors of every sprite in its new position.
With modern hardware, however, it is more efficient to pass data in a format that the driver can handle (hopefully in the graphics accelerator) rather than do everything by hand.
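As a concrete illustration of the byte-array idea (in C rather than assembly; the 320x200x256 mode and 16x16 sprite match the description above): the sprite is just SPRITE_W * SPRITE_H palette indices stored row by row, and drawing it means copying every non-transparent byte into the framebuffer.

```c
#include <stdint.h>

#define SCREEN_W 320
#define SCREEN_H 200
#define SPRITE_W 16
#define SPRITE_H 16

/* Draw a byte-array sprite onto an 8-bit framebuffer, skipping the
   transparent color 0 so the background shows through, and clipping
   against the screen edges. */
void draw_sprite(uint8_t *screen, const uint8_t *sprite, int x, int y)
{
    for (int row = 0; row < SPRITE_H; row++) {
        for (int col = 0; col < SPRITE_W; col++) {
            uint8_t c = sprite[row * SPRITE_W + col];
            int sx = x + col, sy = y + row;
            if (c != 0 && sx >= 0 && sx < SCREEN_W && sy >= 0 && sy < SCREEN_H)
                screen[sy * SCREEN_W + sx] = c;
        }
    }
}
```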

Can pixel shaders be used when rendering to an offscreen surface?

I'm considering integrating some D3D code I have with WPF via the new D3DImage, as described here:
My question is this: do pixel shaders work on offscreen surfaces?
Rendering to an offscreen surface is generally less constrained than rendering directly to a back buffer. The only constraint that comes with using an offscreen surface with D3DImage is that it must be in a 32-bit RGB/ARGB format (depending on your platform). Other than that, all that the hardware has to offer is at your disposal.
In fact, tons of shader effects take advantage of offscreen surfaces for multipass, or full screen post-processing.
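A sketch of that pattern using the Direct3D 9 C interface; an initialized device and a compiled pixel shader are assumed, and error handling is omitted:

```c
#include <d3d9.h>

/* Render one pass into a 32-bit ARGB offscreen render target with a
   pixel shader bound, then restore the back buffer. */
void render_offscreen_pass(IDirect3DDevice9 *device,
                           IDirect3DPixelShader9 *shader,
                           UINT width, UINT height)
{
    IDirect3DTexture9 *tex = NULL;
    IDirect3DSurface9 *rt = NULL, *backbuffer = NULL;

    /* 32-bit ARGB render target - the format D3DImage expects */
    IDirect3DDevice9_CreateTexture(device, width, height, 1,
                                   D3DUSAGE_RENDERTARGET, D3DFMT_A8R8G8B8,
                                   D3DPOOL_DEFAULT, &tex, NULL);
    IDirect3DTexture9_GetSurfaceLevel(tex, 0, &rt);

    IDirect3DDevice9_GetRenderTarget(device, 0, &backbuffer);
    IDirect3DDevice9_SetRenderTarget(device, 0, rt);   /* draw offscreen */
    IDirect3DDevice9_SetPixelShader(device, shader);   /* shaders apply here too */

    /* ... issue draw calls for this pass ... */

    IDirect3DDevice9_SetRenderTarget(device, 0, backbuffer); /* restore */
    IDirect3DSurface9_Release(backbuffer);
    IDirect3DSurface9_Release(rt);
    IDirect3DTexture9_Release(tex);
}
```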
I don't know if there's anything special about it with WPF, but in general yes, pixel shaders work on offscreen surfaces.
For some effects rendering to a different surface is required - glass refraction in front of a shader-rendered scene for example. Pixel shaders cannot access the current screen contents and so the view has to be first rendered to a buffer and then used as a texture in the refraction shader pass so that it can take the background colour from a pixel other than the one being calculated.
