Scrolling data in ncurses - c

I'm working on a small text editor in ncurses with the purpose of learning more about the library. One of the first challenges was implementing a proper scrollable text buffer while retaining the editing abilities. I've created a cursor struct containing the screen coordinates and the buffer coordinates. When you move the cursor, the screen y and x are constrained to the LINES and COLS maximums. The buffer coordinates, however, are constrained to the limits of the text file (size and line size).
This works well, but I was wondering if there's a better way of doing this. Right now, every cursor movement operation results in modifications to both coordinate systems. Maybe there's a way of converting between coordinates and keeping just one (the buffer one, preferably)?
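For concreteness, something like the following is roughly what I have in mind for the single-coordinate version (field names are just placeholders):

    #include <curses.h>

    /* Sketch: keep only buffer coordinates plus a scroll offset,
       and derive the screen position when drawing. */
    typedef struct {
        int buf_y, buf_x;   /* position in the text buffer           */
        int top_line;       /* first buffer line visible on screen   */
        int left_col;       /* first buffer column visible on screen */
    } Cursor;

    static void cursor_to_screen(Cursor *c, int *scr_y, int *scr_x) {
        /* Scroll the view if the buffer position left the visible area. */
        if (c->buf_y < c->top_line)            c->top_line = c->buf_y;
        if (c->buf_y >= c->top_line + LINES)   c->top_line = c->buf_y - LINES + 1;
        if (c->buf_x < c->left_col)            c->left_col = c->buf_x;
        if (c->buf_x >= c->left_col + COLS)    c->left_col = c->buf_x - COLS + 1;
        *scr_y = c->buf_y - c->top_line;
        *scr_x = c->buf_x - c->left_col;
    }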

Have you tried using a pad? A window can be no larger than the terminal itself; data is lost if it passes over the edge. A pad, created with newpad, allows for larger data: it can be as large as available memory allows, and a chosen region of it is displayed on the screen at a specified location (via prefresh, or through a subpad).
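As a rough sketch of that approach (the pad dimensions are placeholders; in a real editor they would come from the file being edited):

    #include <curses.h>

    #define BUF_LINES 1000   /* assumed buffer size, larger than the screen */
    #define BUF_COLS   512

    int main(void) {
        initscr();
        refresh();   /* sync stdscr once so the later getch() doesn't clear the pad output */

        WINDOW *pad = newpad(BUF_LINES, BUF_COLS);
        /* ... fill the pad with the file's contents (waddstr, mvwaddstr, ...) ... */

        int top = 0;   /* first pad line currently shown; change it to scroll */
        /* prefresh(pad, pad_y, pad_x, scr_top, scr_left, scr_bottom, scr_right) */
        prefresh(pad, top, 0, 0, 0, LINES - 1, COLS - 1);

        getch();
        endwin();
        return 0;
    }

Scrolling then just means changing top (or a column offset) and calling prefresh again; the pad holds the whole buffer, and only the requested region is copied to the screen.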

Related

Overlaying 2 or more shapes in a bitmap file created in C?

I was working on a program that, depending on the input, draws shapes of different colors onto a bitmap file. It works fine if I just have to draw one shape, but if I take two or more shapes it just draws over the old picture and the old one gets lost; I need them to overlay to create more complex pictures. Is there a way, when I am writing to a bitmap file, to skip over parts I don't want to write over? I also tried making an array in which I would save all the pixel data, but that doesn't work if I take a bitmap larger than 800x800, depending on the size of the type of the elements of the array. I am open to any suggestion and comment. Thank you in advance.
You need to draw the second shape using a transparent background; how you do that is entirely up to you, as you don't provide any information about what technology you are using.
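In plain C with a raw pixel buffer, a "transparent background" just means touching only the pixels that belong to the new shape and leaving everything else alone. A minimal sketch under that assumption (the Pixel layout matches a 24-bit BMP):

    #include <stdint.h>

    typedef struct { uint8_t b, g, r; } Pixel;   /* 24-bit BMP stores BGR */

    /* Draw a filled rectangle into an existing pixel buffer; pixels not
       covered by the shape keep whatever was drawn there before. */
    static void draw_rect(Pixel *buf, int img_w, int img_h,
                          int x0, int y0, int w, int h, Pixel color) {
        for (int y = y0; y < y0 + h && y < img_h; y++) {
            if (y < 0) continue;
            for (int x = x0; x < x0 + w && x < img_w; x++) {
                if (x < 0) continue;
                buf[y * img_w + x] = color;   /* only the shape's pixels change */
            }
        }
    }

If the in-memory array failed above roughly 800x800, the likely culprit is the stack-size limit for local arrays; allocating the buffer with malloc instead should let you hold the full image, draw every shape into it, and write the file once at the end.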

Displaying pixel data with GDI+

I am writing a simple 3D rendering engine.
The end result of my 3D processing is pixel data. Next I need to display it on the screen with GDI+.
I am using WinForms and Visual Basic. I am drawing directly on form's ClientRectangle.
I have some questions.
- After I process a pixel, should I be writing pixel data to a buffer first, instead of sending each pixel to GDI+ individually?
- If so, how much of a screen should I buffer at one time? Full screen, half, quarter, eighth? I think there may be RAM usage / performance trade-offs here.
- What is the best data structure for the pixel buffer?
- Which GDI+ command do I use to render the pixel buffer (or the individual pixel)? Is it possible to avoid creating the bitmap as an intermediate step and send pixel data directly to screen?
Maximum screen size I anticipate is 1600x1200. RAM could be as low as 1GB.
Thanks.
Hope you can find some of those answers here:
Write the data into a buffer of RGBA structs first. This will make it easy if, for example, you want to render multiple "layers" and then composite them, and it will also make it easy to perform any deferred processing at some point. Once a full (tile?) render is complete, you can flush it to the output bitmap/file; see the sketch after these points.
This depends on what resolutions you allow the user to render to. If you want to render gigapixel images, you will need to tile it at some reasonable size. I would recommend that the tile size be configurable and then you can set it at a reasonable default after testing.
I would recommend starting out with a simple RGBA buffer if you're not looking to perform any deferred shading.
If you are NOT performing tiled rendering (i.e. the images you render fit in memory), you can simply use Bitmap.LockBits and write the data that way. If you are using tiled rendering, you will need to either find a library that lets you write a scanline at a time (and make that a "tile"), or pick a file format such as TGA or PNG and seek/write directly to the file. Dumping the image as a RAW file and then using a command-line tool to convert it would be another option.
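As a sketch of the buffer idea (shown in C only for brevity; in VB/.NET the equivalent is an array of structs or a byte array that you copy into the bitmap via LockBits):

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct { uint8_t r, g, b, a; } Rgba;

    /* One flat buffer per frame (or per layer/tile); the pixel at (x, y)
       lives at index y * width + x. */
    static Rgba *alloc_frame(int width, int height) {
        return calloc((size_t)width * height, sizeof(Rgba));
    }

    static void put_pixel(Rgba *buf, int width, int x, int y, Rgba c) {
        buf[y * width + x] = c;
    }

At 1600x1200 a full RGBA frame is roughly 7.5 MB, so even a whole-screen buffer is negligible next to 1 GB of RAM.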
Hope this helps!

Can I read a specific image row using libjpeg?

Using libjpeg, if possible, I would like to read a row from the middle of a JPEG image without reading all the preceding rows. Can this be done?
The answer is almost certainly "yes you can, but it will take more effort than you want".
A JPEG image is a stream of markers that contain either information global to the whole compressed image, or information related to specific portions of the image. The compression works by breaking the image into color planes, possibly changing color spaces to one where the color information can be down-sampled, and within each plane operating on 8x8 pixel blocks.
For instance, it is possible to rotate a compressed image by 90 degrees if it is sized such that it is made up of only whole blocks by only transposing the basic blocks and the coefficients inside each block; i.e. without uncompressing, rotating the real image, and recompressing.
Given that, your approach would be to parse the marker stream on the way into the library, passing all the markers that are global to the image, modifying any related to image size, and dropping markers containing coefficients that lie outside your cropping rectangle.
You will likely need to further crop the result if the restriction of cropping to complete basic blocks is too coarse.
What isn't clear to me is whether there is any real win over the alternative, which is to crop the result as it comes out of the library. The library is highly configurable, so you can provide an uncompressed-data consumer function that discards all pixels outside your cropping rectangle and only saves the pixels you want to keep.
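That simpler alternative can look roughly like this with plain libjpeg (error handling trimmed; the helper name is made up):

    #include <stdio.h>
    #include <jpeglib.h>

    /* Decode rows 0..wanted_row, discarding everything before the one we want. */
    static void read_one_row(FILE *fp, JDIMENSION wanted_row) {
        struct jpeg_decompress_struct cinfo;
        struct jpeg_error_mgr jerr;

        cinfo.err = jpeg_std_error(&jerr);
        jpeg_create_decompress(&cinfo);
        jpeg_stdio_src(&cinfo, fp);
        jpeg_read_header(&cinfo, TRUE);
        jpeg_start_decompress(&cinfo);

        JSAMPARRAY row = (*cinfo.mem->alloc_sarray)
            ((j_common_ptr)&cinfo, JPOOL_IMAGE,
             cinfo.output_width * cinfo.output_components, 1);

        while (cinfo.output_scanline <= wanted_row &&
               cinfo.output_scanline < cinfo.output_height) {
            jpeg_read_scanlines(&cinfo, row, 1);
        }
        /* row[0] now holds the requested scanline; use it here. */

        jpeg_abort_decompress(&cinfo);   /* we are not reading to the end */
        jpeg_destroy_decompress(&cinfo);
    }

The library still has to decode every row above the one you want, but nothing has to be stored; if you can rely on libjpeg-turbo, its jpeg_skip_scanlines can make the skipping itself cheaper.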

Converting mouse position to world position OpenGL

Hey, I'm working on a map editor for my game, and I'm trying to convert the mouse position to a position in the game world. The view is set up using gluPerspective.
A good place to start would be the function gluUnProject, which takes mouse coordinates and calculates object space coordinates. Take a look at http://nehe.gamedev.net/data/articles/article.asp?article=13 for a basic tutorial.
UPDATE:
You must enable depth buffering for the code in that article to work. The Z value for mouse coordinates is determined based on the value in the depth buffer at that point.
In your initialization code, make sure you do the following:
glEnable(GL_DEPTH_TEST);
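Putting the pieces together, the usual pattern looks roughly like this (no error handling; assumes a legacy fixed-function context like the one in that article):

    #include <GL/glu.h>

    /* Convert a mouse click at window coordinates (mouse_x, mouse_y)
       into world coordinates, using the depth buffer for Z. */
    static void mouse_to_world(int mouse_x, int mouse_y,
                               GLdouble *wx, GLdouble *wy, GLdouble *wz) {
        GLdouble modelview[16], projection[16];
        GLint viewport[4];
        GLfloat depth;

        glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
        glGetDoublev(GL_PROJECTION_MATRIX, projection);
        glGetIntegerv(GL_VIEWPORT, viewport);

        /* Window Y grows downward; OpenGL's grows upward. */
        GLdouble win_y = (GLdouble)(viewport[3] - mouse_y);

        glReadPixels(mouse_x, (GLint)win_y, 1, 1,
                     GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

        gluUnProject((GLdouble)mouse_x, win_y, (GLdouble)depth,
                     modelview, projection, viewport, wx, wy, wz);
    }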
A point on the screen represents an entire line (an infinite set of points) in 3D space.
Most people with questions similar to yours are really trying to select an object by clicking on it. If that's what you're after, OpenGL offers a selection mode that's generally more effective than trying to convert the screen coordinate into real-world coordinates.
Using selection mode is (usually) pretty simple: you start with gluPickMatrix, which you use to specify a small box around the click point. You then draw your scene in selection mode. When you're done, instead of actually drawing anything, it gives you back records of what would have been drawn in the box you specified. If memory serves, those are arranged in Z order, so the first one in the list is what would have displayed front-most (i.e., the one you usually want).
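A rough sketch of that flow (the 5x5 pick box, the perspective parameters, and the drawing step are placeholders):

    #include <GL/glu.h>

    #define PICK_BUFFER_SIZE 64

    /* Returns the number of hit records for a click at window (x, y). */
    static GLint pick(int x, int y, GLuint *buffer) {
        GLint viewport[4];

        glSelectBuffer(PICK_BUFFER_SIZE, buffer);
        glRenderMode(GL_SELECT);

        glMatrixMode(GL_PROJECTION);
        glPushMatrix();
        glLoadIdentity();
        glGetIntegerv(GL_VIEWPORT, viewport);
        /* Restrict the projection to a small box around the click point
           (window Y flipped to OpenGL's bottom-up convention). */
        gluPickMatrix((GLdouble)x, (GLdouble)(viewport[3] - y), 5.0, 5.0, viewport);
        gluPerspective(45.0, (GLdouble)viewport[2] / viewport[3], 0.1, 100.0);

        glInitNames();
        glPushName(0);
        /* draw the scene here, calling glLoadName(id) before each object */

        glMatrixMode(GL_PROJECTION);
        glPopMatrix();
        glMatrixMode(GL_MODELVIEW);

        return glRenderMode(GL_RENDER);   /* number of hits recorded */
    }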

How can I manage a cache texture in OpenGL?

I am writing a text renderer for an OpenGL application. Size, colour, font face, and anti-aliasing can be twiddled at run time (and so multiple font faces can appear on the screen at once). There are too many combinations to allocate one texture to each combination of string and attributes. However, only a small subset of the entire database of strings will be on the screen at any given time.
This leads me into the opportunity to create a cache for the strings that are being printed frame after frame. It has been mandated that I use only one texture for the entire operation, as creating a cache of many textures would incur a texture swapping penalty for every different string printed from the cache.
So I have before me a 2048x2048 texture, into which I can place whatever strings I can fit as they are being requested by the application for caching purposes. I have quickly realized that tracking the free space available in a two dimensional space is not trivial.
I have been looking at things like Best Fit and Next fit, but those seem to be suitable for 1d spaces.
How can I manage this cache texture in OpenGL?
Edit: I have since learned that this is an instance of a "2d packing problem".
What you have is the bin-packing problem.
Bad news first: it's NP-hard, so it's not worth trying to find the optimal solution.
I've done such texture caching for fonts as well. I didn't cache entire words, just the glyph images. That makes things a lot easier because all your images are roughly square-shaped. A simple grid-based approach to keep track of the texture memory worked pretty well.
If I got glyphs larger than one of my grid boxes, I just allocated two or more boxes using a brute-force search (it didn't happen that often). If I couldn't find any suitable block, I just randomly removed some glyphs from the cache to make free space.
That was much easier than keeping a least-recently-used cache and performed nearly as well.
Btw, you will always waste some texture memory on such a cache. Unless you're very tight on memory, that shouldn't be a problem. You should use a small texture format (8-bit alpha works well for fonts).
Also: if you make your grid blocks a multiple of 8 pixels and can drop your antialiasing to 4 bits, you can compress the glyphs into one of the DXT/S3TC formats on the fly. The wasted texture space becomes a non-issue that way.
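A sketch of that grid idea (fixed cell size and random eviction, as described above; CELL_SIZE is an assumption about the maximum glyph size):

    #include <stdlib.h>

    #define TEX_SIZE  2048
    #define CELL_SIZE 64                       /* assumed maximum glyph size */
    #define GRID_DIM  (TEX_SIZE / CELL_SIZE)   /* 32 x 32 cells */

    typedef struct {
        int used[GRID_DIM][GRID_DIM];   /* which cells currently hold a glyph */
    } GlyphGrid;

    /* Return the top-left texel of a free cell, evicting a random cell
       when the grid is full. */
    static void grid_alloc(GlyphGrid *g, int *out_x, int *out_y) {
        for (int row = 0; row < GRID_DIM; row++)
            for (int col = 0; col < GRID_DIM; col++)
                if (!g->used[row][col]) {
                    g->used[row][col] = 1;
                    *out_x = col * CELL_SIZE;
                    *out_y = row * CELL_SIZE;
                    return;
                }
        /* Cache full: reuse a random cell and re-upload the new glyph there. */
        *out_x = (rand() % GRID_DIM) * CELL_SIZE;
        *out_y = (rand() % GRID_DIM) * CELL_SIZE;
    }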
If you are short on texture memory, you could take a look at the "Distance Field" or "Signed Distance Field" font rendering technique. You could use a 512x512 texture per font family and render perfectly antialiased text at any size.
For that algorithm you need to generate a special texture which stores, for each texel, the distance to the nearest edge of the glyph. Take a look at the original paper by the Valve guys: http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf . Some frameworks utilize this; for instance, recent versions of Qt use signed distance fields for text rendering.
I have opted to use a simple approach. Divide the texture into variable height rows. The first texture to be placed in a row decides the height of the row. If a texture can fit into an existing row by height, check to see if there is enough width remaining and place it there. Otherwise start a new row. If a new row cannot be started, do not cache the string.
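That row-based scheme can be sketched roughly like this (the names and the fixed row limit are illustrative):

    #define CACHE_SIZE 2048
    #define MAX_ROWS   128

    typedef struct { int y, height, used_width; } Row;

    static Row rows[MAX_ROWS];
    static int row_count = 0, next_row_y = 0;

    /* Try to reserve a w x h region; returns 1 and its position on success,
       0 if the string should not be cached. */
    static int cache_alloc(int w, int h, int *out_x, int *out_y) {
        for (int i = 0; i < row_count; i++) {
            if (h <= rows[i].height && rows[i].used_width + w <= CACHE_SIZE) {
                *out_x = rows[i].used_width;
                *out_y = rows[i].y;
                rows[i].used_width += w;
                return 1;
            }
        }
        if (row_count < MAX_ROWS && next_row_y + h <= CACHE_SIZE) {
            rows[row_count] = (Row){ next_row_y, h, w };
            *out_x = 0;
            *out_y = next_row_y;
            next_row_y += h;
            row_count++;
            return 1;
        }
        return 0;
    }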
