While working on a project recently, I was introduced to the sprite (as a byte array). Unfortunately, I couldn't find any kind of information about sprites that would tell me more about what they are and how they work.
I'd really appreciate it if you could give me some information and examples of sprites.
A sprite is basically an image with a transparent background color or alpha channel which can be positioned on the screen and moved (usually involving redrawing the background over the old position). In the case of an animated sprite, the sprite may consist of several actual images making up the frames of the animation. The format of the image depends entirely on the hardware and/or technology being used to draw or render it. For speed, the dimensions are usually powers of two (8, 16, 32, 64, etc.), but this may not be necessary for modern hardware.
Traditionally (read: back in my day), you might have a 320x200x256 screen resolution and a 16x16x256 sprite with color 0 being transparent. Each refresh of the screen would begin with redrawing the background under the sprites, taking a copy of the background under their new position and then redrawing only the visible colors of every sprite in their new position.
With modern hardware, however, it is more efficient to pass data in a format that the driver can handle (hopefully in the graphics accelerator) rather than do everything by hand.
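As a rough sketch of the traditional technique described above, here's how drawing a 256-color sprite with color 0 as transparent onto a screen buffer might look in C (all names and dimensions are illustrative, matching the 320x200x256 example):

```c
#include <string.h>

#define SCREEN_W 320
#define SCREEN_H 200
#define SPRITE_W 16
#define SPRITE_H 16

/* Draw a 16x16 256-color sprite at (x, y), treating color 0 as transparent.
   Only the visible (non-zero) pixels overwrite the background. */
static void draw_sprite(unsigned char *screen,
                        const unsigned char *sprite, int x, int y) {
    for (int sy = 0; sy < SPRITE_H; sy++) {
        for (int sx = 0; sx < SPRITE_W; sx++) {
            unsigned char c = sprite[sy * SPRITE_W + sx];
            if (c != 0)  /* skip transparent pixels */
                screen[(y + sy) * SCREEN_W + (x + sx)] = c;
        }
    }
}
```

Each frame you would restore the background under the sprite's old position, then call something like `draw_sprite` at the new position.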
Is it possible to have more than 8 sprites in a rasterline on a real Commodore 64 (not on an emulator)?
The sprites don't need to be different.
Short answer: yes.
Long answer: yes, but there are some caveats:
The VIC-II (the video chip) reads in 3 bytes of sprite data per raster line for each of the maximum 8 hardware sprites, and the buffered data is meant to be displayed on the next raster line.
If you display a hardware sprite a second time on a given raster line, that buffer will be empty, so on the following raster line you'll end up with a transparent stripe in the sprite.
Also, the sprite data fetches happen around the end of the current raster line/start of the next one, so you are pretty much limited to duplicating sprite #0 (because its data is fetched first), and even then the CRT beam is so far to the right of the screen that you'd have to remove the side border to be able to see the duplicate sprite.
Yes, it is possible using assembly in interrupts. The interrupt is aligned to a raster line of the video chip. After the first sprite has been rendered by the video chip (using NOPs to wait for the necessary time), its position and shape are changed further to the right. Then the interrupt waits again until the sprite has been rendered, and resets it to its original place, because the next raster line needs to "see" it there.
Using this technique you can have more than 8 sprites in one raster line. The technique is similar to the one used for showing sprites in the left/right borders: instead of changing the register that makes the screen narrower, you change the x-position of the sprite.
I am working on a game project that features a large amount of assets. The character animations are very detailed, and that requires a lot of frames.
At first, I created large spritesheets containing all the animations for a specific character. This worked well on my PC, but when I tested it on an Android tablet, I noticed it exceeded the maximum texture dimension of its GPU. My solution was to break the big spritesheet down into individual frames (the worst case is 180 frames) and upload them individually to the GPU. Things now seem to work everywhere I need them to.
Right now, the largest animation I have been working with is a character with 180 frames of 407x725 pixels each. However, as I couldn't find any guidance on the web regarding how to properly render 2D animations using OpenGL, I would like to ask if there is a problem with this approach. Is there a maximum number of textures that can be uploaded to the GPU? Can I exceed the amount of RAM on the GPU?
The most efficient method for the GPU is to pass the entire sprite sheet to OpenGL as a single texture, and select which frame you want by adjusting the texture coordinates when you draw. You should also pack the sprites into, ideally, a square texture. Reducing the overall amount of memory used by the GPU is very good for performance, especially on phones and tablets.
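Selecting a frame by texture coordinates amounts to computing the UV rectangle of that frame within the sheet. A minimal sketch in C (the function and struct names are illustrative, not from any particular engine; the sheet is assumed to be a grid of equal-sized frames):

```c
/* UV rectangle of one frame within a sprite sheet, in [0,1] texture space. */
typedef struct { float u0, v0, u1, v1; } UvRect;

/* Compute the UV rect of frame `index` in a sheet laid out as
   `cols` columns by `rows` rows of equal-sized frames. */
static UvRect frame_uv(int index, int cols, int rows) {
    int cx = index % cols;  /* column of the frame */
    int cy = index / cols;  /* row of the frame */
    UvRect r;
    r.u0 = (float)cx / cols;
    r.v0 = (float)cy / rows;
    r.u1 = (float)(cx + 1) / cols;
    r.v1 = (float)(cy + 1) / rows;
    return r;
}
```

At draw time you would feed these four values as the texture coordinates of the sprite's quad, leaving the bound texture unchanged between frames.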
You want to avoid, if possible, frequently changing which texture is bound. Ideally you want to bind a single texture and then render bits and pieces of it to the screen until you don't need it anymore, then bind a different texture and continue.
The reason for this is that the GPU will try hard to optimize the operation of the pipeline it creates to handle the geometry you feed it, and the shaders you select. But when you make big changes to the configuration like changing what texture is bound or what shader is bound, that's necessarily going to be somewhat opaque to optimization. Feeding it more vertices and texture coordinates at a time is better because they basically can all get done in a batch without unloading and reloading resources etc.
However, depending on which cards you are targeting, you should keep in mind that there may be a maximum texture size of 8192x8192 or something like that. So depending on what assets you have, you may be forced to split them up across several textures.
I am using Borland Turbo C and the Borland Graphics Interface.
I have two questions:
I have to process a 256-color bitmap image. It is difficult to process using the EGAVGA driver, so I decided to use the SVGA driver. It works fine, but when I convert the image to grayscale, instead of showing only the image in grayscale, the whole screen goes into grayscale. Is there any method to change the color palette for a specific area using the outp(0x03c8, data) and outp(0x03c9, data) functions?
The mouse functions work fine in EGAVGA mode, but the cursor is not visible in SVGA mode, even though the mouse is functional. How could I create a custom mouse cursor for SVGA mode in 256 colors? I have the code for creating a custom mouse pointer in EGAVGA mode using interrupt 0x10, but it does not work in SVGA mode.
In palettized video modes, palette entries affect the whole screen. If you change any index, all pixels on screen with that index will change, whether they belong to your image or not.
If your image is going to share the screen with other graphics, and you want the image to be the only thing that changes to grayscale, you have to set aside some palette entries for exclusive use by your image, so that changing them won't affect the other graphic elements on your screen.
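For example, to gray out only your image's reserved palette range, you can compute a luminance value for each reserved entry and rewrite just those DAC entries. A sketch in C (the `outp` calls are shown as comments since they are DOS-specific; the weights are the common Rec. 601 luma coefficients, and VGA DAC components are 6-bit, 0-63):

```c
/* Convert palette entries [first, first + count) to grayscale in place.
   rgb holds 6-bit VGA DAC values (0-63) as consecutive r,g,b triplets. */
static void grayscale_range(unsigned char *rgb, int first, int count) {
    for (int i = first; i < first + count; i++) {
        unsigned char *p = &rgb[i * 3];
        /* integer approximation of 0.299*R + 0.587*G + 0.114*B */
        unsigned char gray =
            (unsigned char)((299 * p[0] + 587 * p[1] + 114 * p[2]) / 1000);
        p[0] = p[1] = p[2] = gray;
        /* On real VGA hardware you would then write the entry out:
           outp(0x03C8, i);      select DAC index
           outp(0x03C9, gray);   red
           outp(0x03C9, gray);   green
           outp(0x03C9, gray);   blue */
    }
}
```

Because only the reserved indices are rewritten, pixels using the rest of the palette keep their colors.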
On Windows, and on X-Window if my memory serves well, the entire screen will take on the colours of your palette when your application window has the focus. When it doesn't, the screen will revert to the system palette and your window and its contents will look "weird".
I've got a paint program I've been working on, and I've started to tackle opacity. I'm at a point now where I compare the background color to the brush color, average the two based on their alphas, and then set a new pixel. I just need to cache a portion of what I'm drawing off to the side so that it doesn't continuously sample what is continuously changing while the mouse is down. I figured I would fix this by throwing 50 pixels into a stack or queue that starts changing pixels on screen once it's full and completely empties all its contents onto the screen on mouse up. What I'd like to know is which would be more efficient: two stacks (one of coordinates and one of colors) or one stack of strings that I parse into coordinates and colors.
TLDR: Which is more efficient: two stacks of different data types, or one stack of strings that I parse into two data types?
Your question seems longer and more confusing than it needs to be, but I think what you're asking is:
I'm designing a paint program. If the user is painting 50%-opaque black pixels on a white background, I want them to show up as gray. My problem is that if the user draws a curve that crosses itself, or just leaves the mouse cursor in the same place for a while, the repeated pixels become darker and darker: 50% black, then 75%, then 87.5%... I don't want this. As long as the mouse button is down, no pixel should be "painted" twice, no matter how many times the curve crosses itself.
This question answers itself. The only way to keep pixels from being painted twice is to keep track of which pixels have been painted since the last time the mouse button was up. Replace
image[mouse.x][mouse.y] = alpha_average(image[mouse.x][mouse.y], current_color, current_alpha);
with
if (not already_painted[mouse.x][mouse.y]) {
image[mouse.x][mouse.y] = alpha_average(image[mouse.x][mouse.y], current_color, current_alpha);
already_painted[mouse.x][mouse.y] = true;
}
handle(mouse_up) {
already_painted[*][*] = false;
}
Problem solved, right?
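A minimal runnable version of that idea in C (the canvas size and the fixed 50% blend are illustrative; a real program would use the brush's actual alpha and call the reset on mouse-up):

```c
#include <stdbool.h>
#include <string.h>

#define W 64
#define H 64

static unsigned char image[H][W];          /* grayscale canvas */
static bool already_painted[H][W];         /* per-stroke mask */

/* Blend `color` into the pixel at (x, y), at most once per stroke. */
static void paint_pixel(int x, int y, unsigned char color) {
    if (!already_painted[y][x]) {
        image[y][x] = (unsigned char)((image[y][x] + color) / 2); /* 50% blend */
        already_painted[y][x] = true;
    }
}

/* Called on mouse-up: allow every pixel to be painted again. */
static void end_stroke(void) {
    memset(already_painted, 0, sizeof already_painted);
}
```

However many times the stroke crosses a pixel, it darkens only once; the next stroke (after `end_stroke`) blends again as expected.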
But to answer another implied question: if you're trying to choose between a bunch of parallel arrays and a bunch of data stuffed into strings, you're probably doing it wrong. All the Unity3D scripting languages (C#, UnityScript/JavaScript, Boo) support struct/class/dict types and tuples, any of which would be a better idea than parallel arrays or everything-is-a-string-ism.
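For instance, rather than two parallel stacks (or strings that need parsing), a single stack of small structs keeps each coordinate and its color together. A sketch in C with hypothetical names:

```c
#include <stdbool.h>

/* One queued paint operation: position plus packed color. */
typedef struct {
    int x, y;
    unsigned int color;   /* e.g. 0xAARRGGBB */
} PaintedPixel;

#define STACK_MAX 50      /* matches the 50-pixel buffer in the question */

typedef struct {
    PaintedPixel items[STACK_MAX];
    int count;
} PixelStack;

/* Push one pixel; returns false when the stack is full
   (i.e. time to flush the buffered pixels to the screen). */
static bool push_pixel(PixelStack *s, int x, int y, unsigned int color) {
    if (s->count >= STACK_MAX)
        return false;
    s->items[s->count++] = (PaintedPixel){ x, y, color };
    return true;
}
```

One struct per entry avoids both the bookkeeping of keeping two stacks in sync and the cost of formatting and re-parsing strings.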
We have a Silverlight application that shows text over video. Both the text and video can be considered variables. Sometimes we might have a dark video, sometimes a bright video, sometimes a video that has sections of both.
Think of credits at the end of a movie. We want to ensure the end user can always read the text being shown over the video. The text is always an overlay on top of the video.
The simple solution is to show the text twice: once in white and once in black, with a small offset. This almost works, but it actually looks a little rough and takes away from the user experience.
Ideally we would have the text with a slight semitransparent glow around the edges. So if the text were white, there would be a black glow right around the edges.
Is there a way to do this? Or is there an equal or better work-around?
I've done this with the DropShadow pixel shader effect in Silverlight 3. It works nicely, but since the pixel shaders aren't executed on the hardware, it can have a pretty heavy impact on the performance of the application.
If you wanted to get ambitious, you could write your own pixel shader. Silverlight 3 supports HLSL Shaders.
You could try displaying it with a contrasting outline, rather than just a "drop shadow" like you get if you display it once with a small offset. To do this, display it four times in one color, then a fifth time with a contrasting color, centered over the four previous copies. The four first ones should be offset one pixel up, right, down and left of the center.
The net effect should be an outline. Of course, perhaps this too looks "rough", since it's computer-generated and thus not perfect with respect to issues like kerning, spacing between characters, and so on. But it's quick to try, at least.
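The four-offsets idea can be sketched on a 1-bit glyph mask: OR the mask shifted one pixel up, right, down, and left to get the outline layer, then draw the glyph itself in the contrasting color on top. A toy C version (the flat w*h bitmap layout is illustrative):

```c
/* Build an outline mask by OR-ing the glyph mask shifted one pixel in
   each of the four directions. mask/outline are w*h arrays of 0 or 1. */
static void make_outline(const unsigned char *mask, unsigned char *outline,
                         int w, int h) {
    static const int dx[4] = { 0, 1, 0, -1 };
    static const int dy[4] = { -1, 0, 1, 0 };
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            unsigned char v = 0;
            for (int d = 0; d < 4; d++) {
                int nx = x + dx[d], ny = y + dy[d];
                if (nx >= 0 && nx < w && ny >= 0 && ny < h)
                    v |= mask[ny * w + nx];  /* any shifted copy covers it */
            }
            outline[y * w + x] = v;
        }
    }
}
```

Rendering the outline layer first (say, in black) and the glyph on top (in white) leaves only the one-pixel rim of the outline visible, which is the contrasting edge.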
In general, automatically finding good contrasting colors when the background is video sounds a bit difficult. In the worst case, the video contains text just like the one you want to display. The correct solution in that case is hard to imagine.
This sounds similar to the problem of ensuring subtitles are always readable in films/TV. The most robust, but not necessarily most elegant, solution is to have a coloured background rectangle for the text which is either opaque or has a low transparency value - often grey or black with a good contrasting foreground colour.