Trouble registering screen boundaries for certain inputs (down and right) - c

I am creating a scrolling shooter for the DMG using GBDK, based on some YouTube tutorials and this example. In fact, the linked code is the skeleton of my program. My issue is that the screen boundary conditions aren't working properly for the down and right inputs. For up and left, however, they work correctly, and the code for those is basically identical. I have also compiled the code from the link above, and it works correctly there. Apologies in advance: I have a childish sense of humor, so the game is penis-based.
The main differences between the skeleton code and mine are that I use a meta-sprite for the player and an array for the player's x and y coordinates. I have tried using individual integers for the locations and changing the bounds of the screen, but nothing seems to work.
#include <gb/gb.h>
#include <stdio.h>
#include "gameDicks.c"
#include "DickSprites.c"
#define SCREEN_WIDTH 160
BOOLEAN ishard = TRUE, playing = TRUE;
struct gameDicks flacid;
struct gameDicks hard;
INT8 spritesize = 8, dicklocation[2] = {20, 80};
int i;
void moveGameDicks(struct gameDicks* Dick, UINT8 x, UINT8 y){
move_sprite(Dick->spriteids[0], x, y);
move_sprite(Dick->spriteids[1], x + spritesize, y);
move_sprite(Dick->spriteids[2], x, y + spritesize);
move_sprite(Dick->spriteids[3], x + spritesize, y + spritesize);
}
void setuphard(INT8 dicklocation[2]){
hard.x = dicklocation[0];
hard.y = dicklocation[1];
hard.width = 16;
hard.height = 16;
//load sprites
set_sprite_tile(0,0);
hard.spriteids[0] = 0;
set_sprite_tile(1,1);
hard.spriteids[1] = 1;
set_sprite_tile(2,2);
hard.spriteids[2] = 2;
set_sprite_tile(3,3);
hard.spriteids[3] = 3;
}
void init_screen()
{
SHOW_BKG;
SHOW_SPRITES;
DISPLAY_ON;
}
void init_player()
{
SHOW_SPRITES;
set_sprite_data(0, 8, DickSprites);
setuphard(dicklocation);
}
void input()
{
if (joypad() & J_UP && dicklocation[1])
{
if (dicklocation[1] <= 16){
dicklocation[1] = 16;
}
else{
dicklocation[1]--;
}
}
if (joypad() & J_DOWN && dicklocation[1])
{
if (dicklocation[1] >= 150){
dicklocation[1] = 150;
}
else{
dicklocation[1]++;
}
}
}
void update_sprites()
{
moveGameDicks(&hard, dicklocation[0], dicklocation[1]);
}
int main()
{
init_screen();
init_player();
init_screen();
while(playing)
{
wait_vbl_done();
input();
update_sprites();
}
return 0;
}
What I expect is to be able to move the player up to y = 16 and down to y = 150. When it hits these values, it should stop moving until you move in the other direction. Instead, what happens is that the up direction works as expected, but as soon as the down key is pressed, no matter the y-location, the player is immediately sent to the bottom of the screen. From there, pressing up sends it to the very top. Further, the player can only move from the top position to the bottom, and not scroll in between. I'm baffled by this because the conditions are exactly the same (except for the y-values), so I don't understand why they behave so differently.

Using an unsigned type may help here: a signed 8-bit integer only holds values from -128 to 127, so comparing it against 150 (and assigning 150 to it) pushes it into negative values and the check misbehaves.
You have defined dicklocation as an INT8, when it would be better as a UINT8, or an even wider type if you ever plan on having a screen size larger than 255 pixels.
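A minimal sketch of that change, assuming the rest of the program stays as posted:
// UINT8 covers 0..255, which is enough for every on-screen coordinate of
// the 160x144 DMG display; INT8 tops out at 127, so a value like 150
// ends up negative.
UINT8 spritesize = 8, dicklocation[2] = {20, 80};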

Related

C xtest emitting key presses for every Unicode character

I wanted to make a program to simulate key presses. I think I am mostly done, but I must have done something wrong because it is not doing what I expect. I have made a small example program to illustrate the issue. The main problem is that generating capital letters does not work: with a string like 'zZ' it only generates the lowercase letters 'zz'. Symbols like '! $ & _ >' (which require shift on my German keyboard layout) work fine, and even multi-byte ones like 'πŸ’£' work. What I am doing is this:
preamble:
So basically the main problems with emulating key presses are, first, the layout, which changes from user to user, and most importantly the modifier keys. If you go the naive route and get a keysym with XStringToKeysym(), get a keycode from that keysym with XKeysymToKeycode(), and fire that event, it does not work the way most newcomers (like me) would expect. The problem is that multiple keysyms are mapped to the same keycode. For example, the keysyms for 'a' and 'A' map to the same keycode because they sit on the same physical button on your keyboard, and that button is linked to that keycode. So if you go the route above, you end up with the same keycode even though the keysyms are different, because they are mapped to the same button/keycode. There is usually no way around this, because it is not clear how the 'A' came into existence in the first place: shift+a, caps+a, or a fancy keyboard with separate 'a' and 'A' buttons. The other problem is how to emit key presses for buttons that are not even on the keyboard of the person running the application. For instance, what key is pressed on an English layout if I want to type an 'Γ„' (German umlaut)? This does not work because XKeysymToKeycode() will not return a proper keycode; there is no keysym mapping for it in that layout.
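For reference, here is a minimal standalone sketch of that naive route (not part of my program below; it just shows the keycode collision):
// 'a' and 'A' are different keysyms but resolve to the same keycode,
// so the shifted variant cannot be distinguished this way alone.
#include <stdio.h>
#include <X11/Xlib.h>
int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;
    KeySym lower = XStringToKeysym("a");
    KeySym upper = XStringToKeysym("A");
    printf("keycode for 'a': %u\n", (unsigned)XKeysymToKeycode(dpy, lower));
    printf("keycode for 'A': %u\n", (unsigned)XKeysymToKeycode(dpy, upper)); // same value
    XCloseDisplay(dpy);
    return 0;
}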
my approach:
What I am trying to do to circumvent this is to find a keycode that is not being used. You have 255-8 keycodes at your disposal, but a regular keyboard only has around 110 keys, so there is usually some space left. I try to find one of the keycodes that is unmapped in the current layout and use it to assign my own keysyms to. Then, iterating over my string, I pass each character to XStringToKeysym(), which gives me the appropriate keysym. In the case of 'πŸ’£', that is usually not mapped in any keyboard layout I know of. So I map it to the unused keycode, press it with XTestFakeKeyEvent(), and repeat that for every character in the string. This works great with every fancy glyph one can think of, but it does not work with simple letters, and I really don't know why :( In my debugging sessions the keysyms and keycodes seem to be correct; it's just that XTestFakeKeyEvent() does not do the right thing in that case. It's possible that I messed something up in the keymapping part, but I am not really sure what the problem is, and I hope someone has a good idea and can help me find a working solution.
I am just using this Unicode notation in the strings array because I don't want to deal with the conversion in the example here. Just assume there is code producing this from an arbitrary input string.
Be aware that the code below can ruin your keymapping in such a way that you're no longer able to type or use your keyboard and need to restart your X server/PC. I hope it does not in its current state (it's working fine here); just be aware of this if you fiddle with the code.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <X11/X.h>
#include <X11/Xlib.h>
#include <X11/extensions/XTest.h>
#include <unistd.h>
//gcc -g enigo2.c -lXtst -lX11
int main(int argc, char *argv[])
{
Display *dpy;
dpy = XOpenDisplay(NULL);
//my test string already transformed into unicode
//ready to be consumed by XStringToKeysym
const char *strings[] = {
"U1f4a3",// πŸ’£
"U007A", //z
"U005A", //Z
"U002f", //'/'
"U005D", //]
"U003a", //:
"U002a", //*
"U0020", //' '
"U0079", //y
"U0059", //Y
"U0020", //' '
"U0031", //1
"U0021", //!
"U0020", //' '
"U0036", //6
"U0026", //&
"U0020", //' '
"U0034", //4
"U0024", //$
"U0020", //' '
"U002D", //-
"U005F", //_
"U0020", //' '
"U003C", //<
"U003E", //>
"U0063", //c
"U0043", //C
"U006f", //o
"U004f", //O
"U00e4", //Γ€
"U00c4", //Γ„
"U00fc", //ΓΌ
"U00dc", //Ü
};
KeySym *keysyms = NULL;
int keysyms_per_keycode = 0;
int scratch_keycode = 0; // Scratch space for temporary keycode bindings
int keycode_low, keycode_high;
//get the range of keycodes usually from 8 - 255
XDisplayKeycodes(dpy, &keycode_low, &keycode_high);
//get all the mapped keysyms available
keysyms = XGetKeyboardMapping(
dpy,
keycode_low,
keycode_high - keycode_low,
&keysyms_per_keycode);
//find unused keycode for unmapped keysyms so we can
//hook up our own keycode and map every keysym on it
//so we just need to 'click' our once unmapped keycode
int i;
for (i = keycode_low; i <= keycode_high; i++)
{
int j = 0;
int key_is_empty = 1;
for (j = 0; j < keysyms_per_keycode; j++)
{
int symindex = (i - keycode_low) * keysyms_per_keycode + j;
// debugging aid for looking at those values:
// KeySym sym_at_index = keysyms[symindex];
// char *symname;
// symname = XKeysymToString(keysyms[symindex]);
if(keysyms[symindex] != 0) {
key_is_empty = 0;
} else {
break;
}
}
if(key_is_empty) {
scratch_keycode = i;
break;
}
}
XFree(keysyms);
XFlush(dpy);
usleep(200 * 1000);
int arraysize = 33;
for (int i = 0; i < arraysize; i++)
{
//find the keysym for the given unicode char
//map that keysym to our previous unmapped keycode
//click that keycode/'button' with our keysym on it
KeySym sym = XStringToKeysym(strings[i]);
KeySym keysym_list[] = { sym };
XChangeKeyboardMapping(dpy, scratch_keycode, 1, keysym_list, 1);
KeyCode code = scratch_keycode;
usleep(90 * 1000);
XTestFakeKeyEvent(dpy, code, True, 0);
XFlush(dpy);
usleep(90 * 1000);
XTestFakeKeyEvent(dpy, code, False, 0);
XFlush(dpy);
}
//revert scratch keycode
{
KeySym keysym_list[] = { 0 };
XChangeKeyboardMapping(dpy, scratch_keycode, 1, keysym_list, 1);
}
usleep(100 * 1000);
XCloseDisplay(dpy);
return 0;
}
When you send a single keysym for a given keycode to XChangeKeyboardMapping and it is a letter, it automatically fills in the correct upper- and lower-case equivalents for the shift and caps lock modifiers. That is, after
XChangeKeyboardMapping(dpy, scratch_keycode, 1, &keysym, 1);
the keycode map for scratch_keycode effectively changes (on my machine) to
tolower(keysym), toupper(keysym), tolower(keysym), toupper(keysym), tolower(keysym), toupper(keysym), 0, 0, 0, 0, ...
In order to inhibit this behaviour, send 2 identical keysyms per keycode:
KeySym keysym_list[2] = { sym, sym };
XChangeKeyboardMapping(dpy, scratch_keycode, 2, keysym_list, 1);
This will fill both shifted and unshifted positions with the same keysym.

Print doesn't show in printed array although specified

I'm working on a simple Candy Crush-style game for my year 1 assignment.
I am at the stage where I need to show my self-made simple marker (a box made of '|' and '_') at the center of the board (board[5][5]) once the program is executed.
Here is the current code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
//FUNCTION: Draw the Board
int drawBoard()
{
//Declare array size
int board[9][9];
//initialize variables
int rows, columns, randomNumber, flag;
//random number seed generator
srand(time(NULL));
for ( rows = 0 ; rows < 9 ; rows++ )
{
for ( columns = 0 ; columns < 9 ; columns++ )
{
flag = 0;
do
{
//generate random numbers from 2 - 8
randomNumber = rand() %7 + 2;
board[rows][columns] = randomNumber;
//Checks for 2 adjacent numbers.
if ( board[rows][columns] == board[rows - 1][columns] || board[rows][columns] == board[rows][columns - 1] )
{
flag = 0;
continue;
}
else
{
flag = 1;
printf( " %d ", board[rows][columns] );
}
} while ( flag == 0 );
}//end inner for-loop
printf("\n\n");
}//end outer for-loop
//call FUNCTION marker() to display marker around board[5][5]
marker( board[5][5] );
}//end FUNCTION drawBoard
//FUNCTION: Mark the surrounding of the number with "|" and "_" at board[5][5]
void marker( int a )
{
printf( " _ \n" );
printf( "|%c|\n", a );
printf( " _ \n" );
}
int main()
{
drawBoard();
}
At the end of function drawBoard(), I placed the code marker( board[5][5] ).
This should have printed the marker around the number printed at coordinate board[5][5], but for some reason it displays right after the board has been printed.
So why doesn't it print at that coordinate although I specified it at board[5][5]?
What could be the problem here?
So in your marker function you need to pass the board and the coordinates you want to print at:
void marker( int x, int y, int board[9][9] )
{
board[x][y-1] = '_';
board[x-1][y] = '|';
board[x+1][y] = '|';
board[x][y+1] = '_';
}
Then, after the call to marker(5, 5, board), call drawBoard again.
My code's a bit off, but that's the logic; you also need to handle the case where the marker is at the edge of the board.
In other words, you need to keep the board around, and any time you make a change to it, clear the screen and print the whole board out again.
There is no persistent drawing in the way that you are doing this. You are just printing straight to the shell/command prompt, and that approach will not work: you can't edit something already drawn to the prompt after you have drawn it. You basically need to clear the screen and then draw everything again, this time with your marker included.
I don't know if you are able to use libraries in your assignment, but a very good library that WILL let you do this is ncurses.
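To sketch the idea (illustrative only, not assignment-ready code): ncurses lets you draw a cell and the marker at fixed screen positions and update them in place instead of reprinting everything.
/* minimal ncurses sketch; build with -lncurses */
#include <ncurses.h>
int main(void)
{
    initscr();              /* enter curses mode */
    noecho();
    curs_set(0);
    mvprintw(5, 5, "7");    /* a board value at row 5, column 5 */
    mvprintw(4, 5, "_");    /* marker pieces drawn around it */
    mvprintw(5, 4, "|");
    mvprintw(5, 6, "|");
    mvprintw(6, 5, "_");
    refresh();              /* push the changes to the terminal */
    getch();                /* wait for a key before exiting */
    endwin();               /* restore the terminal */
    return 0;
}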
EDIT Full rewrite of answer
Drawing Things On Top of One Another In CMD
Alright, I had some downtime at work, so I wrote a project to do what you need and I'm going to post code and explain what it does and why you need it along the way.
The first thing you are going to need is basically a render buffer, or render context. When you program against a graphics API such as OpenGL, you don't render straight to the screen; you render each object you have into a buffer that rasterizes your content and turns it into pixels. Once it's in that form, the API shoves the rendered picture onto the screen. We are going to take a similar approach, except that instead of drawing to a pixel buffer on the GPU, we are going to draw to a character buffer. Think of each character as a pixel on the screen.
Here is a pastebin of the complete source:
Complete Source of Project
RenderContext
Our class to do this will be the RenderContext class. It has fields to hold width and height as well as an array of chars and a special char that we fill our buffer with whenever we clear it.
This class simply holds an array and functions to let us render to it. It makes sure that when we draw to it, we are within bounds. It is possible for an object to try to draw outside of the clipping space (off screen). However, whatever is drawn there is discarded.
class RenderContext {
private:
int m_width, m_height; // Width and Height of this canvas
char* m_renderBuffer; // Array to hold "pixels" of canvas
char m_clearChar; // What to clear the array to
public:
RenderContext() : m_width(50), m_height(20), m_clearChar(' ') {
m_renderBuffer = new char[m_width * m_height];
}
RenderContext(int width, int height) : m_width(width), m_height(height), m_clearChar(' ') {
m_renderBuffer = new char[m_width * m_height];
}
~RenderContext();
char getContentAt(int x, int y);
void setContentAt(int x, int y, char val);
void setClearChar(char clearChar);
void render();
void clear();
};
The two most important functions of this class are setContentAt and render
setContentAt is what an object calls to fill in a "pixel" value. To make this a little more flexible, our class uses a pointer to an array of chars rather than a straight array (or even a two dimensional array). This lets us set the size of our canvas at runtime. Because of this, we access elements of this array with x + (y * m_width) which replaces a two dimensional dereference such as arr[i][j]
// Fill a specific "pixel" on the canvas
void RenderContext::setContentAt(int x, int y, char val) {
if (((0 <= x) && (x < m_width)) && ((0 <= y) && (y < m_height))) {
m_renderBuffer[(x + (y * m_width))] = val;
}
}
render is what actually draws to the prompt. All it does is iterate over all the "pixels" in its buffer, place them on screen, and then move to the next line.
// Paint the canvas to the shell
void RenderContext::render() {
int row, column;
for (row = 0; row < m_height; row++) {
for (column = 0; column < m_width; column++) {
printf("%c", getContentAt(column, row));
}
printf("\n");
}
}
I_Drawable
Our next class is an interface that lets objects contract with us that they can draw to our RenderContext. It is pure virtual because we don't actually want to be able to instantiate it; we only want to derive from it. Its only function is draw, which accepts a RenderContext. Derived classes use this call to receive the RenderContext and then use RenderContext's setContentAt to put "pixels" into the buffer.
class I_Drawable {
public:
virtual void draw(RenderContext&) = 0;
};
GameBoard
The first class to implement I_Drawable, and thus be able to render to our RenderContext, is the GameBoard class. This is where the majority of the logic comes in. It has fields for width, height, and an integer array that holds the values of the elements on the board. It also has two fields for spacing: when you draw your board with your code, you have spaces between each element, and we don't need to incorporate that into the underlying structure of the board, we just need to use it when we draw.
class GameBoard : public I_Drawable {
private:
int m_width, m_height; // Width and height of the board
int m_verticalSpacing, m_horizontalSpacing; // Spaces between each element on the board
Marker m_marker; // The cursor that will draw on this board
int* m_board; // Array of elements on this board
void setAtPos(int x, int y, int val);
void generateBoard();
public:
GameBoard() : m_width(10), m_height(10), m_verticalSpacing(5), m_horizontalSpacing(3), m_marker(Marker()) {
m_board = new int[m_width * m_height];
generateBoard();
}
GameBoard(int width, int height) : m_width(width), m_height(height), m_verticalSpacing(5), m_horizontalSpacing(3), m_marker(Marker()) {
m_board = new int[m_width * m_height];
generateBoard();
}
~GameBoard();
int getAtPos(int x, int y);
void draw(RenderContext& renderTarget);
void handleInput(MoveDirection moveDirection);
int getWidth();
int getHeight();
};
Its key functions are generateBoard, handleInput, and the overridden virtual function draw. Do note, however, that in its constructor it creates a new int array and hands it to its pointer; the destructor then frees the allocated memory whenever the board goes away.
generateBoard is what we use to actually create the board and fill it with numbers. It iterates over each location on the board. Each time, it looks at the elements directly to the left and above and stores them. Then it generates random numbers until it gets one that matches neither of the stored elements, and stores that number in the array. I rewrote this to get rid of the flag usage. This function gets called during construction of the class.
// Actually create the board
void GameBoard::generateBoard() {
int row, column, randomNumber, valToLeft, valToTop;
// Iterate over all rows and columns
for (row = 0; row < m_height; row++) {
for (column = 0; column < m_width; column++) {
// Get the previous elements
valToLeft = getAtPos(column - 1, row);
valToTop = getAtPos(column, row - 1);
// Generate random numbers until we have one
// that is not the same as an adjacent element
do {
randomNumber = (2 + (rand() % 7));
} while ((valToLeft == randomNumber) || (valToTop == randomNumber));
setAtPos(column, row, randomNumber);
}
}
}
handleInput deals with moving the cursor around on the board. It's basically a freebie and your next step after getting the cursor to draw over the board; I needed a way to test the drawing. It accepts an enumeration that we switch on to know where to move the cursor next. If you wanted your cursor to wrap around the board whenever it reaches an edge, this is where you would do it.
void GameBoard::handleInput(MoveDirection moveDirection) {
switch (moveDirection) {
case MD_UP:
if (m_marker.getYPos() > 0)
m_marker.setYPos(m_marker.getYPos() - 1);
break;
case MD_DOWN:
if (m_marker.getYPos() < m_height - 1)
m_marker.setYPos(m_marker.getYPos() + 1);
break;
case MD_LEFT:
if (m_marker.getXPos() > 0)
m_marker.setXPos(m_marker.getXPos() - 1);
break;
case MD_RIGHT:
if (m_marker.getXPos() < m_width - 1)
m_marker.setXPos(m_marker.getXPos() + 1);
break;
}
}
draw is very important because it's what gets the numbers into the RenderContext. To summarize, it iterates over every element on the board and draws it at the correct location on the canvas, placing each element under the correct "pixel". This is where we incorporate the spacing. Also take note that we render the cursor in this function.
It's a matter of choice, but you could also store the marker outside of the GameBoard class and render it yourself in the main loop (a good choice because it loosens the coupling between the GameBoard class and the Marker class). However, since they are fairly coupled anyway, I chose to let GameBoard render it. If we used a scene graph, as we probably would with a more complex scene/game, the Marker would probably be a child node of the GameBoard, so it would be similar to this implementation but more generic, by not storing an explicit Marker in the GameBoard class.
// Function to draw to the canvas
void GameBoard::draw(RenderContext& renderTarget) {
int row, column;
char buffer[8];
// Iterate over every element
for (row = 0; row < m_height; row++) {
for (column = 0; column < m_width; column++) {
// Convert the integer to a char
sprintf(buffer, "%d", getAtPos(column, row));
// Set the canvas "pixel" to the char at the
// desired position including the padding
renderTarget.setContentAt(
((column * m_verticalSpacing) + 1),
((row * m_horizontalSpacing) + 1),
buffer[0]);
}
}
// Draw the marker
m_marker.draw(renderTarget);
}
Marker
Speaking of the Marker class, let's look at that now. The Marker class is actually very similar to the GameBoard class. However, it lacks a lot of the logic that GameBoard has since it doesn't need to worry about a bunch of elements on the board. The important thing is the draw function.
class Marker : public I_Drawable {
private:
int m_xPos, m_yPos; // Position of cursor
public:
Marker() : m_xPos(0), m_yPos(0) {
}
Marker(int xPos, int yPos) : m_xPos(xPos), m_yPos(yPos) {
}
void draw(RenderContext& renderTarget);
int getXPos();
int getYPos();
void setXPos(int xPos);
void setYPos(int yPos);
};
draw simply puts four symbols onto the RenderContext to outline the selected element on the board. Take note that Marker has no clue about the GameBoard class: it has no reference to it, and it doesn't know how large the board is or what elements it holds. You should note, though, that I got lazy and didn't take out the hard-coded offsets that depend on the padding the GameBoard uses. You should implement a better solution to this, because if you change the padding in the GameBoard class, your cursor will be off.
Besides that, whenever the symbols get drawn, they overwrite whatever is already in the RenderContext's buffer. This is important because the main point of your question was how to draw the cursor on top of the GameBoard. It also speaks to the importance of draw order. Let's say that whenever we drew our GameBoard, we drew a '=' between each element. If we drew the cursor first and then the board, the GameBoard would draw over the cursor, making it invisible.
If this were a more complex scene, we might have to do something fancy like use a depth buffer that would record the z-index of an element. Then whenever we drew, we would check and see if the z-index of the new element was closer or further away than whatever was already in the RenderContext's buffer. Depending on that, we might skip drawing the "pixel" altogether.
We don't though, so take care to order your draw calls!
// Draw the cursor to the canvas
void Marker::draw(RenderContext& renderTarget) {
// Adjust marker by board spacing
// (This is kind of a hack and should be changed)
int tmpX, tmpY;
tmpX = ((m_xPos * 5) + 1);
tmpY = ((m_yPos * 3) + 1);
// Set surrounding elements
renderTarget.setContentAt(tmpX - 0, tmpY - 1, '-');
renderTarget.setContentAt(tmpX - 1, tmpY - 0, '|');
renderTarget.setContentAt(tmpX - 0, tmpY + 1, '-');
renderTarget.setContentAt(tmpX + 1, tmpY - 0, '|');
}
CmdPromptHelper
The last class I'm going to talk about is the CmdPromptHelper. You don't have anything like this in your original question, but you will need to worry about it soon. This class is only useful on Windows, so if you are on Linux/Unix you will need to deal with drawing to the shell yourself.
class CmdPromptHelper {
private:
DWORD inMode; // Attributes of std::in before we change them
DWORD outMode; // Attributes of std::out before we change them
HANDLE hstdin; // Handle to std::in
HANDLE hstdout; // Handle to std::out
public:
CmdPromptHelper();
void reset();
WORD getKeyPress();
void clearScreen();
};
Each one of these functions is important. The constructor gets handles to the standard input and output of the current command prompt. The getKeyPress function returns which key the user presses down (key-up events are ignored). And the clearScreen function clears the prompt (not really; it actually moves whatever is already in the prompt up).
getKeyPress just makes sure you have a valid handle and then reads what has been typed into the console. It checks that whatever it read is a key event and that the key is being pressed down. Then it returns the key code as a Windows virtual-key code (the constants prefixed with VK_).
// See what key is pressed by the user and return it
WORD CmdPromptHelper::getKeyPress() {
if (hstdin != INVALID_HANDLE_VALUE) {
DWORD count;
INPUT_RECORD inrec;
// Get Key Press
ReadConsoleInput(hstdin, &inrec, 1, &count);
// Flush any remaining input before returning
FlushConsoleInputBuffer(hstdin);
// Return key only if it is key down
if (inrec.Event.KeyEvent.bKeyDown) {
return inrec.Event.KeyEvent.wVirtualKeyCode;
} else {
return 0;
}
} else {
return 0;
}
}
clearScreen is a little deceiving. You would think that it clears out the text in the prompt. As far as I know, it doesn't. I'm pretty sure it actually shifts all the content up and then writes a ton of characters to the prompt to make it look like the screen was cleared.
An important concept this function brings up, though, is buffered rendering. If this were a more robust system, we would want to implement double buffering: rendering to an invisible buffer, waiting until all drawing is finished, and then swapping the invisible buffer with the visible one. This makes for a much cleaner view of the render because we don't see things while they are still being drawn. The way we do things here, we see the rendering process happen right in front of us. It's not a major concern; it just looks ugly sometimes.
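As an aside, the double-buffering idea reduced to plain C and kept independent of the classes here (just a sketch):
/* draw into a hidden "back" buffer, then show the finished frame at once */
#include <stdio.h>
#include <string.h>
#define W 8
#define H 3
int main(void)
{
    char front[H][W + 1];                /* what is currently visible */
    char back[H][W + 1];                 /* what we draw into next */
    memset(back, ' ', sizeof back);
    for (int y = 0; y < H; y++)
        back[y][W] = '\0';               /* terminate each row */
    back[1][3] = '#';                    /* "render" a frame off-screen */
    memcpy(front, back, sizeof front);   /* swap only when drawing is done */
    for (int y = 0; y < H; y++)
        puts(front[y]);                  /* show the finished frame in one go */
    return 0;
}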
// Flood the console with empty space so that we can
// simulate single buffering (I have no idea how to double buffer this)
void CmdPromptHelper::clearScreen() {
if (hstdout != INVALID_HANDLE_VALUE) {
CONSOLE_SCREEN_BUFFER_INFO csbi;
DWORD cellCount; // How many cells to paint
DWORD count; // How many we painted
COORD homeCoord = {0, 0}; // Where to put the cursor to clear
// Get console info
if (!GetConsoleScreenBufferInfo(hstdout, &csbi)) {
return;
}
// Get cell count
cellCount = csbi.dwSize.X * csbi.dwSize.Y;
// Fill the screen with spaces
FillConsoleOutputCharacter(
hstdout,
(TCHAR) ' ',
cellCount,
homeCoord,
&count
);
// Set cursor position
SetConsoleCursorPosition(hstdout, homeCoord);
}
}
main
The very last thing that you need to worry about is how to use all these things. That's where main comes in. You need a game loop. Game loops are probably the most important thing in any game. Any game that you look at will have a game loop.
The idea is:
Show something on screen
Read input
Handle the input
GOTO 1
This program is no different. The first thing it does is create a GameBoard and a RenderContext. It also makes a CmdPromptHelper, which lets us interface with the command prompt. After that, it starts the loop and lets it continue until we hit the exit condition (for us, that's pressing escape). We could have a separate class or function dispatch input, but since we just hand the input off to another input handler, I kept it in the main loop. After you get the input, you send it off to the GameBoard, which alters itself accordingly. The next step is to clear the RenderContext and the screen/prompt, then rerun the loop if escape wasn't pressed.
int main() {
WORD key;
GameBoard gb(5, 5);
RenderContext rc(25, 15);
CmdPromptHelper cph;
do {
gb.draw(rc);
rc.render();
key = cph.getKeyPress();
switch (key) {
case VK_UP:
gb.handleInput(MD_UP);
break;
case VK_DOWN:
gb.handleInput(MD_DOWN);
break;
case VK_LEFT:
gb.handleInput(MD_LEFT);
break;
case VK_RIGHT:
gb.handleInput(MD_RIGHT);
break;
}
rc.clear();
cph.clearScreen();
} while (key != VK_ESCAPE);
}
After you have taken all of these things into consideration, you understand why and where you need to be drawing your cursor. It's not a matter of calling one function after another; you need to composite your draws. You can't just draw the GameBoard and then draw the Marker, at least not with the command prompt. I hope this helps. It definitely alleviated the downtime at work.

Error in color fading function

I found this old color fading function in my snippets folder and would like to use it in one of my projects. It can be used to fade one color into another. It's a very long one-liner:
D3DCOLOR GetFadedColor(D3DCOLOR from, D3DCOLOR to, float factor)
{
return (factor<0.0f)?from:((factor>1.0f)?to:((((from>>24)>(to>>24))?((from>>24)-(D3DCOLOR)(factor*(float)((from>>24)-(to>>24)))):((from>>24)+(D3DCOLOR)(factor*(float)((to>>24)-(from>>24))))<<24)|((((from<<8)>>24)>((to<<8)>>24))?(((from<<8)>>24)-(D3DCOLOR)(factor*(float)(((from<<8)>>24)-((to<<8)>>24)))):(((from<<8)>>24)+(D3DCOLOR)(factor*(float)(((to<<8)>>24)-((from<<8)>>24))))<<16)|((((from<<16)>>24)>((to<<16)>>24))?(((from<<16)>>24)-(D3DCOLOR)(factor*(float)(((from<<16)>>24)-((to<<16)>>24)))):(((from<<16)>>24)+(D3DCOLOR)(factor*(float)(((to<<16)>>24)-((from<<16)>>24))))<<8)|((((from<<24)>>24)>((to<<24)>>24))?(((from<<24)>>24)-(D3DCOLOR)(factor*(float)(((from<<24)>>24)-((to<<24)>>24)))):(((from<<24)>>24)+(D3DCOLOR)(factor*(float)(((to<<24)>>24)-((from<<24)>>24)))))));
}
D3DCOLOR is just a DWORD (unsigned long). A color can, for example, be 0xAARRGGBB (A: alpha, R: red, G: green, B: blue), but it works with other component orders as well.
Obviously it's a total mess, but this is exactly what I need.
The problem is that it doesn't work as intended:
GetFadedColor(0x00000000, 0xff33cccc, 0.3f)
// = 0x4c0f3d3d - working as intended
GetFadedColor(0xff33cccc, 0x00000000, 0.3f)
// = 0x000000bf - pretty wrong
GetFadedColor(0xff00ff00, 0x00ff00ff, 0.3f)
// = 0x004c00ff - second color value is correct, everything else wrong
I actually don't know how it works and don't remember where I have it from, so I'm asking here for help. Either help me find the error or find an alternative function that does exactly this.
What you should do now is first spend maybe five minutes writing down some really basic tests for the cases where you know what to expect. You don't even need a test framework; to get rolling you can just use assert:
// basicTests.c
#include <assert.h>
int getFadedColor_basicTests()
{
assert(GetFadedColor(0x00000000, 0xff33cccc, 0.3f) == 0x4c0f3d3d && "30% from black to light blue should be greenish");
assert(GetFadedColor(0xff33cccc, 0x00000000, 0.3f) == something && "30% from one color to another should be...");
// if you're not sure what the exact value should be, you should write a helper function
// that returns true/false for if each of the four components of the actual color
// are in a sensible expected range
...
}
int main()
{
getFadedColor_basicTests();
return 0;
}
Once you're happy with your test coverage, be it just 3 asserts total or maybe 50 if you feel like it, start reformatting the one-liner: break it across lines and add meaningful indentation and comments. Then start refactoring: extract common expressions and add comments on what they do or should do, all while running the tests between changes and adding new tests as you devise them.
EDIT:
Isn't it just supposed to linearly interpolate each of the components separately?
int fade(int from_, int to_, float factor)
{
unsigned char *from = (unsigned char*)&from_;
unsigned char *to = (unsigned char*)&to_;
int result_;
unsigned char *result = (unsigned char*)&result_;
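// Blend each of the four byte channels of the packed colour independently;
// every byte gets the same linear interpolation, so channel order doesn't matter.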
for (int i = 0 ; i < 4; ++i)
{
result[i] = factor * ((int)to[i] - (int)from[i]) + from[i];
}
return result_;
}
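A quick sanity check against the values from the question (a sketch that assumes the fade function above is in scope; the low bits may differ by one due to float truncation):
#include <stdio.h>
int main(void)
{
    // 30% of the way from black towards 0xff33cccc; should land near the
    // 0x4c0f3d3d value the original function already got right
    printf("%08x\n", (unsigned)fade(0x00000000, 0xff33cccc, 0.3f));
    // the reverse direction should now fade smoothly instead of
    // collapsing to 0x000000bf
    printf("%08x\n", (unsigned)fade(0xff33cccc, 0x00000000, 0.3f));
    return 0;
}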

Convert BMP to pure RGB, no color map?

Using the EasyBMP library (a close adaptation, anyway), I have code to convert a BMP to greyscale.
int monochromeValue (RGBApixel foo)
{
return (foo.Red+foo.Green+foo.Blue)/3;
}
void setToColor (RGBApixel* loc, int newColor)
{
loc->Red = loc->Green = loc->Blue = newColor;
}
void greyscaleImage (BMP* image)
{
int x, y;
for (x = 0; x < image->Width; ++x)
for (y = 0; y < image->Height; ++y)
{
RGBApixel* pixel = elementAt (image, x, y);
setToColor (pixel, monochromeValue (*pixel));
}
}
An RGBA pixel is
typedef unsigned char ebmpBYTE;
typedef struct RGBApixel
{
ebmpBYTE Blue;
ebmpBYTE Green;
ebmpBYTE Red;
ebmpBYTE Alpha;
} RGBApixel;
The code doesn't make it greyscale. One image is more sepia, and another is mostly greyscale but has some colored highlights.
I'm assuming this has something to do with the color map. What can I do to make it so that it just uses RGB, without running it through a palette? (Changing the bit depth is fine, if that'll work.)
TIA.
This page suggests that palettes aren't used on images with bit depths of 16 or higher. So I tried changing the bit depth to 32, and it worked; 24 also worked. So that seems to be the answer: use a higher bit depth, and it won't need a palette; it will use the RGB values as they are.
My guess is you are being bitten by overflow in your monochromeValue(...) function. As you are adding three u8 values together in parentheses, I don't think the compiler will up-convert the additions to a larger integer type. I would try:
int monochromeValue (RGBApixel foo)
{
return ((int)foo.Red+(int)foo.Green+(int)foo.Blue)/3;
}
As a test to be sure though.

Texture management / pointer question

I'm working on a texture management and animation solution for a small side project of mine. Although the project uses Allegro for rendering and input, my question mostly revolves around C and memory management. I wanted to post it here to get thoughts and insight into the approach, as I'm terrible when it comes to pointers.
Essentially what I'm trying to do is load all of my texture resources into a central manager (textureManager) - which is essentially an array of structs containing ALLEGRO_BITMAP objects. The textures stored within the textureManager are mostly full sprite sheets.
From there, I have an anim(ation) struct, which contains animation-specific information (along with a pointer to the corresponding texture within the textureManager).
To give you an idea, here's how I set up and play the player's 'walk' animation:
createAnimation(&player.animations[0], "media/characters/player/walk.png", player.w, player.h);
playAnimation(&player.animations[0], 10);
Rendering the animation's current frame is just a case of blitting a specific region of the sprite sheet stored in the textureManager.
For reference, here's the code for anim.h and anim.c. I'm sure what I'm doing here is probably a terrible approach for a number of reasons. I'd like to hear about them! Am I opening myself up to any pitfalls? Will this work as I'm hoping?
anim.h
#ifndef ANIM_H
#define ANIM_H
#define ANIM_MAX_FRAMES 10
#define MAX_TEXTURES 50
struct texture {
bool active;
ALLEGRO_BITMAP *bmp;
};
struct texture textureManager[MAX_TEXTURES];
typedef struct tAnim {
ALLEGRO_BITMAP **sprite;
int w, h;
int curFrame, numFrames, frameCount;
float delay;
} anim;
void setupTextureManager(void);
int addTextureToManager(char *filename);
int createAnimation(anim *a, char *filename, int w, int h);
void playAnimation(anim *a, float delay);
void updateAnimation(anim *a);
#endif
anim.c
void setupTextureManager() {
int i = 0;
for(i = 0; i < MAX_TEXTURES; i++) {
textureManager[i].active = false;
}
}
int addTextureToManager(char *filename) {
int i = 0;
for(i = 0; i < MAX_TEXTURES; i++) {
if(!textureManager[i].active) {
textureManager[i].bmp = al_load_bitmap(filename);
textureManager[i].active = true;
if(!textureManager[i].bmp) {
printf("Error loading texture: %s", filename);
return -1;
}
return i;
}
}
return -1;
}
int createAnimation(anim *a, char *filename, int w, int h) {
int textureId = addTextureToManager(filename);
if(textureId > -1) {
a->sprite = textureManager[textureId].bmp;
a->w = w;
a->h = h;
a->numFrames = al_get_bitmap_width(a->sprite) / w;
printf("Animation loaded with %i frames, given resource id: %i\n", a->numFrames, textureId);
} else {
printf("Texture manager full\n");
return 1;
}
return 0;
}
void playAnimation(anim *a, float delay) {
a->curFrame = 0;
a->frameCount = 0;
a->delay = delay;
}
void updateAnimation(anim *a) {
a->frameCount ++;
if(a->frameCount >= a->delay) {
a->frameCount = 0;
a->curFrame ++;
if(a->curFrame >= a->numFrames) {
a->curFrame = 0;
}
}
}
You may want to consider a more flexible Animation structure that contains an array of Frame structures. Each Frame structure could contain the frame delay, an x/y hotspot offset, etc. This way, different frames of the same animation could have different sizes and delays. But if you don't need those features, then what you're doing is fine.
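A rough sketch of what that could look like (the names here are made up for illustration and assume the ALLEGRO_BITMAP type and ANIM_MAX_FRAMES from your anim.h are in scope):
struct frame {
    ALLEGRO_BITMAP *bmp;     /* bitmap (or sub-bitmap) for this frame */
    int hotspotX, hotspotY;  /* draw offset relative to the entity position */
    float delay;             /* how long this frame stays on screen */
};
struct flexAnim {
    struct frame frames[ANIM_MAX_FRAMES];
    int numFrames;
    int curFrame;
    float frameCount;
};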
I assume you'll be running the logic at a fixed frame rate (constant # of logical frames per second)? If so, then the delay parameters should work out well.
A quick comment regarding your code:
textureManager[i].active = true;
You probably shouldn't mark it as active until after you've checked if the bitmap loaded.
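For example, the body of your if(!textureManager[i].active) block could be reordered along these lines:
textureManager[i].bmp = al_load_bitmap(filename);
if(!textureManager[i].bmp) {
    printf("Error loading texture: %s", filename);
    return -1;
}
// only claim the slot once the load has succeeded
textureManager[i].active = true;
return i;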
Also note that Allegro 4.9/5.0 is fully backed by OpenGL or D3D textures and, as such, large bitmaps will fail to load on some video cards! This could be a problem if you are generating large sprite sheets. As of the current version, you have to work around it yourself.
You could do something like:
al_set_new_bitmap_flags(ALLEGRO_MEMORY_BITMAP);
ALLEGRO_BITMAP *sprite_sheet = al_load_bitmap("sprites.png");
al_set_new_bitmap_flags(0);
if (!sprite_sheet) return -1; // error
// loop over sprite sheet, creating new video bitmaps for each frame
for (i = 0; i < num_sprites; ++i)
{
animation.frame[i].bmp = al_create_bitmap( ... );
al_set_target_bitmap(animation.frame[i].bmp);
al_draw_bitmap_region( sprite_sheet, ... );
}
al_destroy_bitmap(sprite_sheet);
al_set_target_bitmap(al_get_backbuffer());
To be clear: this is a video card limitation. So a large sprite sheet may work on your computer but fail to load on another. The above approach loads the sprite sheet into a memory bitmap (essentially guaranteed to succeed) and then creates a new, smaller hardware accelerated video bitmap per frame.
Are you sure you need a pointer to a pointer for ALLEGRO_BITMAP **sprite; in anim?
IIRC, Allegro bitmap handles are pointers already, so there is no need to double-reference them, since you only seem to want to store one bitmap per animation.
You ought to use ALLEGRO_BITMAP *sprite; in anim.
I do not see any other problems with your code.

Resources