Hey guys, I am programming for a primitive board using some assembly and C; consider the board to be akin to an old-school black-and-white Game Boy.
I am running into a problem while writing a game: there is no backbuffer. When I clear the screen, it draws directly to the display, so the screen truly is cleared, and anything I draw becomes invisible because it is immediately cleared on the next pass. Instead of replacing a drawn screen with a new drawn screen, it clears the screen and then draws.
I came up with a hackish solution where I limited the rendering to 10 frames per second.
The way I do this is by clearing the screen, drawing the shape, and then burning a loop for however long remains of the 1/10th second. This way whatever is drawn stays on screen longer and is visible longer, allowing the user to see it before it is erased.
i.e.
while (1)
{
    doRender = 1;
    screen_clear();
    draw_circle(x, y, 20, 1);
    while (doRender)
    {
        // an interrupt will set doRender to 0, thus ending the loop
    }
}
This works! Sort of: it creates a flicker, not horrible, but noticeable to be sure. My game does not require incredible framerates; 10 per second will do.
Does anyone have a better solution to my issue?
Your solution is good. Try optimizing it by clearing only the area where the circle was drawn on the previous frame.
You can also use XOR rendering, e.g. you XOR your sprite onto the screen to render it, then on the next frame XOR it again at the same place to remove it, and XOR it at its new place.
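A minimal sketch of the XOR trick on a 1-byte-per-pixel buffer (the buffer layout and sprite format here are illustrative, not from the original post):

```c
#include <string.h>

#define W 8
#define H 8

static unsigned char screen[W * H];

/* XOR-blit a w*h sprite at (x, y); calling it a second time at the same
   position restores whatever was underneath. */
void xor_sprite(const unsigned char *sprite, int w, int h, int x, int y)
{
    for (int row = 0; row < h; row++)
        for (int col = 0; col < w; col++)
            screen[(y + row) * W + (x + col)] ^= sprite[row * w + col];
}
```

Drawing the sprite at its old position removes it without touching the rest of the screen, so no full clear (and hence no flicker) is needed.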
Can you wait for the vsync? If your drawing is fast enough, you may be able to do all of it during the vertical blanking interval, removing any remaining flicker.
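If the board has enough RAM, another option is a software backbuffer: draw the whole frame into an off-screen array, then copy it to display memory in one pass, so the visible screen is never in a half-cleared state. A sketch, assuming a 1-byte-per-pixel linear framebuffer (all names and sizes here are hypothetical; `display` stands in for the real framebuffer address):

```c
#include <string.h>

#define SCREEN_W 160
#define SCREEN_H 144

static unsigned char backbuffer[SCREEN_W * SCREEN_H];
static unsigned char display[SCREEN_W * SCREEN_H]; /* stand-in for real framebuffer */

/* One fast copy per frame; clear/draw only ever touch the backbuffer. */
void present(void)
{
    memcpy(display, backbuffer, sizeof backbuffer);
}

void render_frame(void)
{
    memset(backbuffer, 0, sizeof backbuffer);   /* clear off-screen */
    backbuffer[10 * SCREEN_W + 10] = 1;         /* draw off-screen (placeholder) */
    present();                                  /* flip to the display */
}
```

The visible screen only ever changes during `present()`, which is what eliminates the clear-then-draw flicker.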
Related
I've been reading through a couple of posts with this same question, but most seem to have approached the simpler version of the problem differently than I did, so I'm having trouble finding tips on how to proceed.
Without posting my code (not sure if we can do that here or not), the pseudocode for the main resizing portion was something like this:
for each scanline
{
    for number of times to extend vertically
    {
        for each pixel in infile
        {
            read pixel from infile
            for number of times to extend horizontally
            {
                write pixel
            }
        }
        done extending horizontally, add padding to outfile
        move cursor to front of line
    }
    done extending vertically, move to next line by passing the padding
}
Maybe my nested loops aren't that quick or elegant, but it was the first logic train that came to me and it worked out. Nonetheless, I'm pretty unsure of how to adapt this for floating point numbers and the fact that I need to shrink things now.
I'm aware I'll have to do some sort of rounding. If I have only one pixel and the user tells the program to resize it by .5, I'll still give them one pixel because I can't tear apart the bytes that make it up. Similarly, whenever I upscale, I'll presumably always give full pixels.
But where do I start with this logic? How do I decide what the "scale" is and how will it interact with my loops? I can't loop something 1.5 times, so I imagine I'll have to know what the rounded scale is prior to going into the loops. I can't even begin to think about shrinking something this way... I'd probably have to skip over certain pixels in the loop in order to do that? Seems like a totally different mechanism that could mess up the resulting image.
Anyways, any help is appreciated!
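One way around the "can't loop 1.5 times" problem is to invert the loops: iterate over destination pixels and map each one back to a source pixel (nearest-neighbour). The same code then handles both shrinking and fractional upscaling, because skipping or repeating source pixels falls out of the index math. A sketch for a single-channel image, not tied to any particular file format (all names here are illustrative):

```c
#include <stdlib.h>

/* Nearest-neighbour resize of a w*h single-channel image by `scale`.
   Caller owns the returned buffer; out_w/out_h receive the new size. */
unsigned char *resize(const unsigned char *src, int w, int h, double scale,
                      int *out_w, int *out_h)
{
    int nw = (int)(w * scale + 0.5);    /* round the output size */
    int nh = (int)(h * scale + 0.5);
    if (nw < 1) nw = 1;                 /* never go below one pixel */
    if (nh < 1) nh = 1;

    unsigned char *dst = malloc((size_t)nw * nh);
    for (int y = 0; y < nh; y++) {
        int sy = (int)(y / scale);      /* map destination row back to source */
        if (sy >= h) sy = h - 1;
        for (int x = 0; x < nw; x++) {
            int sx = (int)(x / scale);  /* map destination column back to source */
            if (sx >= w) sx = w - 1;
            dst[y * nw + x] = src[sy * w + sx];
        }
    }
    *out_w = nw;
    *out_h = nh;
    return dst;
}
```

With scale = 2.0 every source pixel is written four times; with scale = 0.5 three out of every four source pixels are simply never read, which is the "skipping pixels" you were imagining, done implicitly.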
Is it possible to have more than 8 sprites in a raster line on a real Commodore 64 (not on an emulator)?
The sprites don't need to be different.
Short answer: yes.
Long answer: yes, but there are some caveats:
The VIC-II (the video chip) reads in 3 bytes of sprite data per raster line for each of the (at most) 8 hardware sprites, and the buffered data is displayed on the next raster line.
If you display a hardware sprite a second time on a given raster line, that buffer will be empty on the next line, so on the following raster line you'll end up with a transparent stripe in the sprite.
Also, the sprite data fetches happen at around the end of the current raster line / start of the next one, so you are pretty much limited to duplicating sprite #0 (because its data is fetched first), and even then the CRT beam is so far to the right of the screen that you'd have to remove the side border to be able to see the duplicate sprite at all.
Yes, it is possible using assembly in interrupts. The interrupt is aligned to a raster position of the video chip. After the first sprite has been rendered by the video chip (using NOPs to wait the necessary time), its position and shape are changed so it reappears further to the right. Then the interrupt waits again until the sprite has been rendered, and resets it to its original place, because the next raster line needs to "see" it there.
Using this technique you can have more than 8 sprites in one raster line. It is similar to the technique for showing sprites in the left/right borders: instead of changing the register that makes the screen narrower, you change the x-position of the sprite.
I've read a bit about delays in WPF drawing. Before I make a video and post some code, I thought I would describe what is, to me, a very unusual experience. I've got some code that allows the user to draw a shape on the screen (which looks like old paper). When the user releases the left mouse button, it automatically connects the last point to the first point, like so:
BUT, when I uncomment the rest of the code (which runs AFTER the call that closes the poly with a line from the last point to the first) and do a modified scanline fill, filling the shape with random tree objects so it looks like this:
There is a real noticeable delay before the last line that closes the poly is drawn and the filled poly is displayed.
This is what's really weird: adding a bunch of bitmap draw calls AFTER the last line call somehow delays the drawing of the earlier line call.
Is this some weird effect of how WPF draws lines and blits bitmaps? By the way, everything is on a Canvas. I've added a bunch of calls to
MainCanvas.InvalidateVisual();
but they had no effect.
Is this just something that has to be lived with when using WPF?
In real-time games, there is always a game loop that runs every few milliseconds, updates the game with new data and repaints the entire screen.
Is this something that is seen in other types of applications, other than games? A 'constant-update-loop'?
For example, imagine an application like MSPaint. The user can draw lines on the screen with the mouse. The line that is being drawn is displayed on the screen as it is being drawn.
Please imagine this line is actually made of a lot of smaller lines, each 2 pixels long. It would make sense to store each of these small lines in a List.
But as I said, the line that is being drawn (the large line, made out of lots of small lines) is displayed as it is being drawn. This means that a repaint of the screen would be necessary to display the new small line that was added the previous moment.
But - please correct me if I'm mistaken - it would be difficult to repaint only the specific part of the screen where the new small line was drawn. If so, a repaint of the entire screen would be necessary.
Thus it would make sense to use an 'update loop' to constantly repaint the entire screen, and constantly iterate over the list of lines and draw these lines over and over again - like in games.
Is this approach existent in non-game applications, and specifically in 'drawing' applications?
Thanks
Essentially you do have a loop in all applications and games; how it is implemented depends on the system and the needs of your application/game. I will loosely base my response on the Windows system, if only because you mention MS Paint.
MS Paint will likely NOT contain a List of lines like you are describing; instead it edits the bitmap image directly, coloring the required pixels immediately and then drawing them. In this situation, drawing a small portion of the image/application is as easy as telling "this part" of the image to draw itself "over there". So as you move the pencil tool around, the pixels turn black and get drawn.
MS Paint and most applications will use a primary loop that WAITS for the next event, meaning, it will allow the operating system (Windows) to not process anything until it has messages/events to process (such as: Mouse Move, Button Press, Redraw, etc).
For a game, it works a little differently. A game (typically) doesn't want the operating system to WAIT for the next message/event before processing continues. Instead you POLL the operating system to check whether there are messages to be handled; if so, you handle them, and if not, you continue with the game producing a single frame (perform an Update and Render/Draw the scene).
MS Paint doesn't need to keep updating and drawing when the user is not interacting, and this is preferred for applications: constantly updating/drawing uses a lot of system resources, and if every application did this, you wouldn't be able to have 30 things running at the same time like you probably do now.
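The difference between the two loop styles can be sketched platform-independently (the event queue below is a toy stand-in for the OS message queue, not a real API):

```c
/* Toy stand-in for the OS message queue. */
static int pending_events = 3;

/* Non-blocking poll: returns 1 if an event was taken, 0 if the queue is empty. */
static int poll_event(void)
{
    if (pending_events > 0) {
        pending_events--;
        return 1;
    }
    return 0;
}

/* Game style: drain pending events, then ALWAYS update and render a frame,
   whether or not any input arrived. Returns the number of frames produced. */
static int game_loop(int frames)
{
    int updates = 0;
    for (int i = 0; i < frames; i++) {
        while (poll_event()) { /* handle input here */ }
        updates++;             /* update + render every iteration */
    }
    return updates;
}
```

An application like MS Paint inverts this: it blocks inside the equivalent of `poll_event()` (e.g. Win32's `GetMessage`) and only redraws when a message such as `WM_PAINT` arrives, so it burns no CPU while idle.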
I've got a paint program I've been working on, and I've started to tackle opacity. I'm at a point now where I compare the background color to the brush color, average the two based on their alphas, and then set a new pixel. I just need to cache a portion of what I'm drawing off to the side so that it doesn't continuously sample what is continuously changing while the mouse is down.
I figured I would fix this by throwing 50 pixels into a stack or queue that starts changing pixels on screen once it's full and empties all its contents onto the screen on mouse up. What I'd like to know is which would be more efficient: two stacks (one of coordinates and one of colors), or one stack of strings that I parse into coordinates and colors.
TL;DR: What's more efficient, two stacks of different data types, or one stack of strings that I parse into two data types?
Your question seems longer and more confusing than it needs to be, but I think what you're asking is:
I'm designing a paint program. If the user is painting 50%-opaque black pixels on a white background, I want them to show up as gray. My problem is that if the user draws a curve that crosses itself, or just leaves the mouse cursor in the same place for a while, the repeated pixels become darker and darker: 50% black, then 75%, then 87.5%... I don't want this. As long as the mouse button is down, no pixel should be "painted" twice, no matter how many times the curve crosses itself.
This question answers itself. The only way to keep pixels from being painted twice is to keep track of which pixels have been painted since the last time the mouse button was up. Replace
image[mouse.x][mouse.y] = alpha_average(image[mouse.x][mouse.y], current_color, current_alpha);
with
if (not already_painted[mouse.x][mouse.y]) {
    image[mouse.x][mouse.y] = alpha_average(image[mouse.x][mouse.y], current_color, current_alpha);
    already_painted[mouse.x][mouse.y] = true;
}

handle(mouse_up) {
    already_painted[*][*] = false;
}
Problem solved, right?
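For completeness, a minimal alpha_average for a single 8-bit channel might look like this (the formula is standard source-over blending; the function name just matches the pseudocode above, and the exact signature is an assumption):

```c
/* Blend an 8-bit brush channel over an 8-bit background channel.
   alpha is the brush opacity in [0, 1]. */
unsigned char alpha_average(unsigned char background, unsigned char brush,
                            double alpha)
{
    /* brush weighted by alpha, background by (1 - alpha), rounded to nearest */
    return (unsigned char)(brush * alpha + background * (1.0 - alpha) + 0.5);
}
```

For a full RGB(A) image you'd apply this per channel; the already_painted check above is what keeps repeated application from darkening the result.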
But to answer another implied question: if you're trying to choose between a bunch of parallel arrays and a bunch of data stuffed into strings, you're probably doing it wrong. All Unity3D languages (C#, UnityScript/JavaScript, Boo) support struct/class/dict types and tuples, any of which would be a better idea than parallel arrays or everything-is-a-string-ism.
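In C-style terms, the "group the data" advice means one stack of small structs rather than parallel stacks or encoded strings (the type and field names here are illustrative):

```c
typedef struct {
    int x, y;            /* pixel coordinate */
    unsigned int color;  /* packed RGBA */
} PendingPixel;

#define STACK_MAX 50

typedef struct {
    PendingPixel items[STACK_MAX];
    int top;
} PixelStack;

/* Returns 0 when full, which is the caller's cue to flush to the screen. */
int push(PixelStack *s, PendingPixel p)
{
    if (s->top >= STACK_MAX) return 0;
    s->items[s->top++] = p;
    return 1;
}

/* Returns 0 when empty. */
int pop(PixelStack *s, PendingPixel *out)
{
    if (s->top == 0) return 0;
    *out = s->items[--s->top];
    return 1;
}
```

Each coordinate stays attached to its color with no parsing step, and the fields sit adjacent in memory, which is both simpler and faster than splitting the data across containers or round-tripping it through strings.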