I am looking for a way to create a "buffer" which I can directly copy, or blit, onto a WINDOW* using ncurses. I know there are subwindows, but since the only way to move/resize them is to create a whole new subwindow, they are not a great fit. I'm looking for something like Microsoft's WriteConsoleOutput.
It would be nice if I could also copy regions in a reverse-blit fashion (take a rect of stdscr and store a copy in a buffer).
Windows can in fact be moved or resized without re-creating them:
mvwin
Calling mvwin moves the window so that the upper left-hand corner is at
position (x, y). If the move would cause the window to be off the
screen, it is an error and the window is not moved. Moving subwindows
is allowed, but should be avoided.
wresize
This is an extension to the curses library. It reallocates storage for
an ncurses window to adjust its dimensions to the specified values. If
either dimension is larger than the current values, the window's data
is filled with blanks that have the current background rendition (as
set by wbkgdset) merged into them.
This extension of ncurses was introduced in mid-1995. It was adopted
in NetBSD curses (2001) and PDCurses (2003).
Regarding the question, updates to a window are based on lines (see waddchnstr for instance).
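To make this concrete, here is a minimal, untested sketch of what is described above: moving and resizing an existing window with mvwin/wresize, and doing a line-based copy from stdscr into the window with the chnstr family of calls (bounds checking omitted).

    #include <ncurses.h>

    int main(void)
    {
        initscr();
        noecho();
        curs_set(0);

        /* Put some text on stdscr so there is something to copy. */
        for (int row = 0; row < LINES; ++row)
            mvprintw(row, 0, "stdscr row %d", row);
        refresh();

        WINDOW *win = newwin(5, 20, 2, 2);   /* 5 rows, 20 cols at y=2, x=2 */
        box(win, 0, 0);
        wrefresh(win);

        /* Move and resize without recreating the window. */
        mvwin(win, 4, 10);                   /* new upper-left corner: y=4, x=10 */
        wresize(win, 8, 30);                 /* new size: 8 rows, 30 cols */

        /* "Blit" a region of stdscr into the window one line at a time:
         * read each source row with mvinchnstr(), write it with mvwaddchnstr(). */
        chtype linebuf[30];
        for (int row = 0; row < 8; ++row) {
            mvinchnstr(row, 0, linebuf, 30);         /* grab 30 cells from stdscr */
            mvwaddchnstr(win, row, 0, linebuf, 30);  /* paste them into win */
        }
        wrefresh(win);

        getch();
        delwin(win);
        endwin();
        return 0;
    }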
As I understand it, the function xcb_copy_area essentially copies a region from one xcb_drawable_t to another. I'm not sure, then, why it would also take a graphics context as a parameter, seeing as the source of the copy has presumably already been drawn or rendered. What is the use of this parameter in this case?
It's worth noting that my understanding of graphics contexts is not great, and there aren't many resources explaining them. I'm assuming this is an issue with my mental model of what's going on within xcb.
Relevant docs: https://www.x.org/releases/X11R7.6/doc/xproto/x11protocol.html#requests:CopyArea
The text description contains this (emphasis mine; the original emphasis and a link were lost):
If the dst-drawable is a window with a background other than None, these corresponding destination regions are tiled (with plane-mask of all ones and function Copy) with that background. Regardless of tiling and whether the destination is a window or a pixmap, if graphics-exposures in gc is True, then GraphicsExposure events for all corresponding destination regions are generated.
So, my understanding is: The GC is used to draw the background of the window and this is where most of its properties are used.
The doc says explicitly which GC components are used:
GC components: function, plane-mask, subwindow-mode, graphics-exposures, clip-x-origin, clip-y-origin, clip-mask
I guess that function and plane-mask specify how the source and target are "combined". So, CopyArea can not only copy, but do all the other (weird) things that are possible with a GC.
subwindow-mode says what happens with subwindows. It is possible to clip them out or to draw over them.
graphics-exposures is about events that are generated in response to drawing
clip-x-origin, clip-y-origin, and clip-mask clearly are about clipping the drawing.
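To illustrate, here is a small sketch of how those components come into play when calling CopyArea. It assumes an already-open connection conn, a source pixmap src and a destination window dst; note that the value array must follow the ascending bit order of the mask.

    #include <xcb/xcb.h>

    void copy_region(xcb_connection_t *conn, xcb_pixmap_t src, xcb_window_t dst)
    {
        xcb_gcontext_t gc = xcb_generate_id(conn);

        /* Only these components (plus the clip-* fields) matter to CopyArea. */
        uint32_t mask = XCB_GC_FUNCTION | XCB_GC_SUBWINDOW_MODE |
                        XCB_GC_GRAPHICS_EXPOSURES;
        uint32_t values[] = {
            XCB_GX_COPY,                          /* function: plain copy            */
            XCB_SUBWINDOW_MODE_CLIP_BY_CHILDREN,  /* do not draw over subwindows     */
            1                                     /* graphics-exposures: send events */
        };
        xcb_create_gc(conn, gc, dst, mask, values);

        /* Copy a 100x50 rectangle from (0,0) in src to (10,10) in dst. */
        xcb_copy_area(conn, src, dst, gc, 0, 0, 10, 10, 100, 50);

        xcb_free_gc(conn, gc);
        xcb_flush(conn);
    }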
I'm working on a small text editor in ncurses with the purpose of learning more about the library. One of the first challenges was implementing a proper scrollable text buffer while retaining the editing abilities. I've created a cursor struct containing the screen coordinates and the buffer coordinates. When you move the cursor, the x and y are constrained to the LINES and COLS maximum values. The buffer coordinates, however, are constrained to the limits of the text file (size and line size).
This works well, but I was wondering if there's a better way of doing this. Right now, every cursor movement operation results in modifications to both coordinate systems. Maybe there's a way of converting between coordinates and keeping just one (the buffer one, preferably)?
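For concreteness, here is a rough sketch of the scheme described above; the struct and the clamping rules are illustrative, not taken from the actual editor.

    #include <ncurses.h>

    typedef struct {
        int scr_y, scr_x;   /* position on screen, clamped to LINES/COLS   */
        int buf_y, buf_x;   /* position in the file, clamped to its extent */
    } Cursor;

    /* nlines = number of lines in the buffer, line_len = length of the new line */
    void cursor_down(Cursor *c, int nlines, int line_len)
    {
        if (c->buf_y < nlines - 1)
            c->buf_y++;                  /* buffer coordinate: file limits   */
        if (c->scr_y < LINES - 1)
            c->scr_y++;                  /* screen coordinate: terminal size */
        if (c->buf_x > line_len)         /* keep x within the new line       */
            c->buf_x = line_len;
        if (c->scr_x > c->buf_x)
            c->scr_x = c->buf_x;
        move(c->scr_y, c->scr_x);        /* place the visible cursor         */
    }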
Have you tried using a pad? A window can be no larger than the terminal itself; data that passes over the edge boundary is lost. A pad, created with newpad, allows for larger data: it can be as large as available system memory permits, and it is viewed by displaying a rectangular portion of its contents at a specified location on the screen (or through a subpad).
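A minimal sketch of the pad approach (sizes are arbitrary): the pad holds the whole buffer, and prefresh() maps a window-sized slice of it onto the screen.

    #include <ncurses.h>

    int main(void)
    {
        initscr();
        noecho();

        WINDOW *pad = newpad(1000, COLS);        /* far larger than the screen */
        for (int i = 0; i < 1000; ++i)
            mvwprintw(pad, i, 0, "line %d of the buffer", i);

        int top = 0;                             /* first pad row to display; drive
                                                    this from your scroll offset */
        /* prefresh(pad, pminrow, pmincol, sminrow, smincol, smaxrow, smaxcol) */
        prefresh(pad, top, 0, 0, 0, LINES - 1, COLS - 1);

        getch();
        delwin(pad);
        endwin();
        return 0;
    }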
I've derived my own widget type from GtkWidget in order to use it as a drawing surface for OpenGL. To give OpenGL control over the underlying X11 Window, I need to disable the widget's double buffering; otherwise the whole rendering result will be drawn over by GTK's buffer swap.
However, gtk_widget_set_double_buffered and the "double-buffered" property have been deprecated in the current version of GTK+3 for being too platform-dependent.
Is there a way to disable double buffering on the GDK or X11 level instead?
This is a rather old question, but I'll give it a shot.
In any even slightly more recent context (i.e. with GTK+ >= 3.16, which is over 6 years old by now), I guess the best solution would be to avoid rolling your own OpenGL widget and just use Gtk.GLArea instead: https://docs.gtk.org/gtk3/class.GLArea.html
Alternatively, if you happen to be stuck with an ancient GTK+ version, maybe on an embedded device, then there aren't many options besides gtk_widget_set_double_buffered (see also https://people.gnome.org/~shaunm/girdoc/C/Gtk.Widget.set_double_buffered.html): this does not set any X11/Xorg window flags or similar, but just changes the internal event handling of GTK+ to either send draw calls to an offscreen buffer, or directly to the visible surface.
Note that this offscreen buffer is completely separate from anything involving X or OpenGL.
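For the GtkGLArea route, a minimal skeleton might look like the following; it assumes libepoxy as the GL function loader (which GTK+ 3 itself uses). GTK manages the context and the buffer swap, so there is no double-buffering flag to fight.

    #include <gtk/gtk.h>
    #include <epoxy/gl.h>

    /* "render" is emitted every time the area needs to be redrawn. */
    static gboolean on_render(GtkGLArea *area, GdkGLContext *context, gpointer data)
    {
        glClearColor(0.0f, 0.0f, 0.3f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        return TRUE;   /* we handled the drawing */
    }

    int main(int argc, char *argv[])
    {
        gtk_init(&argc, &argv);

        GtkWidget *window  = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        GtkWidget *gl_area = gtk_gl_area_new();
        gtk_container_add(GTK_CONTAINER(window), gl_area);

        g_signal_connect(gl_area, "render",  G_CALLBACK(on_render), NULL);
        g_signal_connect(window,  "destroy", G_CALLBACK(gtk_main_quit), NULL);

        gtk_widget_show_all(window);
        gtk_main();
        return 0;
    }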
I want to add characters (x/y/z.., not even strings) to a window using OpenGL, WITHOUT using GLUT. I know about glutBitmapString(), but I want to avoid glut. Any suggestions...?
The last time I did this, for a retro-style game, I created a bitmap font and wrote a small routine that would draw a quad with the specific character as a texture on it. Another option is to draw every pixel of the bitmap font as a separate quad.
You can find example code here:
http://svn.berlios.de/wsvn/pong2/trunk/src/Interface.h
http://svn.berlios.de/wsvn/pong2/trunk/src/Interface.cpp
More specifically:
void Interface::createFont() initializes the bitmap font, creating one display list per character
void Interface::drawText(const std::string& text) lets OpenGL call the display lists according to the string's characters
In this specific example, I wanted textured "pixels" within the characters, so each bitmap entry results in its own quad with a stock texture on it. Display lists are nowadays less favored, as newer OpenGL features like FBOs and VBOs replace their functionality; display lists were in fact deprecated in OpenGL 3.0 and removed from the core profile in 3.1.
The bitmap data embedded in createFont() was created with The Gimp's (http://www.gimp.org) export functionality.
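Since the SVN links above may no longer resolve, here is a much-simplified sketch of the same idea using legacy OpenGL display lists: one list per ASCII character, each drawing a textured quad from a 16x16-cell font atlas. The atlas layout and names are assumptions, not the original code, and the technique requires a compatibility (pre-3.1 core) context.

    #include <GL/gl.h>
    #include <string.h>

    static GLuint base;   /* first of 128 display lists */

    /* font_tex: a 16x16-character ASCII atlas already uploaded as a texture
     * (row 0 at the top; adjust the v coordinates if yours is flipped). */
    void create_font(GLuint font_tex)
    {
        base = glGenLists(128);
        for (int c = 0; c < 128; ++c) {
            float u = (c % 16) / 16.0f;          /* atlas cell of this character */
            float v = (c / 16) / 16.0f;
            glNewList(base + c, GL_COMPILE);
            glBindTexture(GL_TEXTURE_2D, font_tex);
            glBegin(GL_QUADS);
            glTexCoord2f(u,            v + 1/16.0f); glVertex2f(0, 0);
            glTexCoord2f(u + 1/16.0f,  v + 1/16.0f); glVertex2f(1, 0);
            glTexCoord2f(u + 1/16.0f,  v);           glVertex2f(1, 1);
            glTexCoord2f(u,            v);           glVertex2f(0, 1);
            glEnd();
            glTranslatef(1.0f, 0.0f, 0.0f);      /* advance to the next cell */
            glEndList();
        }
    }

    void draw_text(const char *text)
    {
        glPushMatrix();
        glListBase(base);                        /* lists are indexed by char code */
        glCallLists((GLsizei)strlen(text), GL_UNSIGNED_BYTE, text);
        glPopMatrix();
    }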
I would suggest using a glyph map, which basically boils down to a bitmap texture with a bunch of letters distributed over it. Load in the texture and draw quads with texture coordinates mapped to the location of the glyph you want in the texture.
There are some drawbacks in a naive implementation that can be partially alleviated. For example, rather than drawing a ton of quads in separate draw calls, you could take a cue from Java and make immutable strings that tie to a GPU buffer and pack all the vertices and uvs you need to draw the word into that buffer. (They don't have to be immutable, just know that if you need to make a word longer or shorter, you'll have to reallocate the buffer or leave extra space to put the new letters).
The site that I used when I was trying to learn how to do this can be found here:
Bitmap Fonts
I have used this method with a WebGL implementation and it has worked quite well. I even wrote a tool to generate the texture from a <canvas> element on the fly.
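As a rough illustration of the single-buffer idea described above, the following sketch packs interleaved position/UV quads for a whole string into one array, ready to be uploaded once with glBufferData(); the 16x16 ASCII atlas layout and the fixed cell size are assumptions.

    #include <stddef.h>

    /* Writes 6 vertices (two triangles) per glyph, 4 floats (x, y, u, v) each.
     * `out` must have room for strlen(text) * 24 floats. Returns floats written. */
    size_t build_text_quads(const char *text, float x, float y,
                            float cell_w, float cell_h, float *out)
    {
        size_t n = 0;
        for (const char *p = text; *p; ++p, x += cell_w) {
            unsigned char c = (unsigned char)*p;
            float u0 = (c % 16) / 16.0f, v0 = (c / 16) / 16.0f;
            float u1 = u0 + 1/16.0f,     v1 = v0 + 1/16.0f;
            float quad[6][4] = {
                { x,          y,          u0, v1 },
                { x + cell_w, y,          u1, v1 },
                { x + cell_w, y + cell_h, u1, v0 },
                { x,          y,          u0, v1 },
                { x + cell_w, y + cell_h, u1, v0 },
                { x,          y + cell_h, u0, v0 },
            };
            for (int i = 0; i < 6; ++i)
                for (int j = 0; j < 4; ++j)
                    out[n++] = quad[i][j];
        }
        return n;
    }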
Using gdk_screen_get_monitor_geometry, I can get the total area in pixels and the relative position of each monitor, even when there are two or more used as a single screen.
However, I want to get the usable area (that is, excluding panels) of each monitor. The only thing I have found is _NET_WORKAREA, but that is one giant area stretching across all monitors. Depending on the resolution and arrangement, there may be panels inside this area.
How can I get the actual usable area of each monitor? Ideally, using only Gtk/Gdk, nothing X11-specific.
The following approach is a bit convoluted, but it is what I'd use. It should be robust even when there is complex interaction between the window manager and GTK+ when a window is mapped -- for example, when some of the panels are automatically hidden.
The basic idea is to create a transparent decorationless maximized window for each screen, obtain its geometry (size and position) when it gets mapped (for example, using a map-event callback), and immediately destroy them. That gets you the usable area within each screen. You can then use your existing gdk_screen_get_monitor_geometry() approach to determine how the usable area is split between monitors, if any.
In detail:
Use gdk_display_get_default() to get the default display, then gdk_display_get_n_screens() to find out how many screens it has.
Create a new window for each screen using gtk_window_new(), moving the windows to their respective screens using gtk_window_set_screen(). Undecorate the windows using gtk_window_set_decorated(,FALSE), maximize them using gtk_window_maximize(), and make them transparent using gtk_window_set_opacity(,0.0). Connect the map-event signal to a callback handler (using g_signal_connect()). Show the windows using gtk_widget_show().
The signal handler needs to call gtk_window_get_position() and/or gtk_window_get_size() to get the position and/or size of the newly-mapped window, and then destroy the window using gtk_widget_destroy().
Note that in principle you only need one window, which you could move from screen to screen in a simple loop. However, I suspect that due to window manager oddities/bugs, it is much more robust to create a new window for each screen rather than move the same window between screens. It turns out to be easier, too, as you can use a single simple callback function to obtain the usable area for each screen.
Like I said, this is quite convoluted. On the other hand, a standard application should not care about the screen sizes; it should simply do what the user or window manager asks. Because of that, I would not be surprised if there are no better facilities to find out this information. Screen size may change at any point, for example if the user rotates their display, or changes the display resolution.
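A compressed, single-screen sketch of the procedure above; error handling and the per-screen loop are omitted, and gtk_widget_set_opacity() is used because gtk_window_set_opacity() has been deprecated since GTK+ 3.8.

    #include <gtk/gtk.h>

    /* Fires once the window manager has actually placed the maximized window. */
    static gboolean on_map_event(GtkWidget *widget, GdkEvent *event, gpointer data)
    {
        gint x, y, width, height;
        gtk_window_get_position(GTK_WINDOW(widget), &x, &y);
        gtk_window_get_size(GTK_WINDOW(widget), &width, &height);
        g_print("usable area: %dx%d at (%d,%d)\n", width, height, x, y);

        gtk_widget_destroy(widget);
        gtk_main_quit();
        return TRUE;
    }

    int main(int argc, char *argv[])
    {
        gtk_init(&argc, &argv);

        GdkScreen *screen = gdk_display_get_default_screen(gdk_display_get_default());

        GtkWidget *probe = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        gtk_window_set_screen(GTK_WINDOW(probe), screen);
        gtk_window_set_decorated(GTK_WINDOW(probe), FALSE);
        gtk_window_maximize(GTK_WINDOW(probe));
        gtk_widget_set_opacity(probe, 0.0);
        g_signal_connect(probe, "map-event", G_CALLBACK(on_map_event), NULL);
        gtk_widget_show(probe);

        gtk_main();
        return 0;
    }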
In the end I used Xlib directly. Various "tricks" like the one suggested above eventually failed in the long run, often with odd corner cases, and never followed the KISS principle.
The solution I used is in the X-Tile code base.