Initialize a minesweeper in C

I'm currently rewriting a minesweeper program in C using the CSFML library.
I'm having some issues managing the initialization that happens only after the first click, more precisely the part where I'm supposed to make the tiles around the click empty.
I can't find a way to make these tiles empty without risking the removal of some bombs.
Here's my init code block for now:
int current = 0;
temp.bombs = BOMB_EASY;
temp.difficulty = EASY;

/* Allocate (Y_EASY + 1) rows of (X_EASY + 1) tiles each, and zero the
   type of the extra row. */
temp.mapEasy = malloc(sizeof(sTILE *) * (Y_EASY + 1));
for (int i = 0; i < Y_EASY + 1; i++)
{
    temp.mapEasy[i] = malloc(sizeof(sTILE) * (X_EASY + 1));
}
for (int i = 0; i < X_EASY + 1; i++)
{
    temp.mapEasy[Y_EASY][i].type = 0;
}

/* Keep sweeping the grid until BOMB_EASY bombs (type 9) have been placed. */
while (current < BOMB_EASY)
{
    for (int i = 0; i < Y_EASY; i++)
    {
        for (int j = 0; j < X_EASY; j++)
        {
            int isBomb = rand() % 10;
            if (isBomb == 0 && current < BOMB_EASY && temp.mapEasy[i][j].type != 9)
            {
                temp.mapEasy[i][j].type = 9;
                current++;
            }
            else if (temp.mapEasy[i][j].type != 9)
            {
                temp.mapEasy[i][j].type = 0;
            }
        }
    }
}

/* Fill in neighbour counts for non-bomb tiles and reset the per-tile flags. */
for (int i = 0; i < Y_EASY; i++)
{
    for (int j = 0; j < X_EASY; j++)
    {
        if (temp.mapEasy[i][j].type == 0)
        {
            temp.mapEasy[i][j].type = HowManyBombs(temp.mapEasy, i, j, Y_EASY, X_EASY);
        }
        temp.mapEasy[i][j].isRevealed = sfFalse;
        temp.mapEasy[i][j].isFlagged = sfFalse;
    }
}
}
I know my question might seem stupid and has probably been answered before, but I couldn't find the answer, so thanks to anyone who takes the time to reply.

Create an empty matrix.
Fill it with n mines at random locations. Upon generating (x, y) coordinates, check whether they are already taken.
If the coordinates are already taken, place the mine at the next available position.
For example, increase x by 1 and check whether that cell is free; if not, increase x again. Upon reaching the end of the row, increase y instead and start over with x = 0.
If you simply generate a new random number instead, your algorithm could in theory get stuck forever. In practice it will probably work, but I would expect such an algorithm to generate the grid more slowly1) than one that just picks the next free spot.
1) rand() call overhead is the most likely bottleneck in this algorithm. But also, if you pick the next spot rather than calling rand() again, the CPU might be able to speculatively load (parts of) the array into the prefetch data cache. This wouldn't be possible when the memory location is literally random each time you pick it.
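A minimal sketch of that strategy in C, combined with the question's requirement that the first click opens an empty area. The grid is a plain int matrix where 9 marks a bomb (as in the question's code); clickY/clickX are hypothetical names for the first-click coordinates:

#include <stdlib.h>

/* Place `bombs` mines on a rows x cols grid, where 9 marks a bomb.
   Cells within one tile of (clickY, clickX) are kept bomb-free so the
   first click always lands on an empty area. Assumes bombs is small
   enough that free cells remain outside that 3x3 neighbourhood. */
void placeBombs(int rows, int cols, int grid[rows][cols],
                int bombs, int clickY, int clickX)
{
    int placed = 0;
    while (placed < bombs)
    {
        /* Pick a random cell, then walk forward to the next allowed one. */
        int y = rand() % rows;
        int x = rand() % cols;
        while (grid[y][x] == 9 ||
               (abs(y - clickY) <= 1 && abs(x - clickX) <= 1))
        {
            if (++x == cols)
            {
                x = 0;
                if (++y == rows)
                    y = 0;
            }
        }
        grid[y][x] = 9;
        placed++;
    }
}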

Related

Efficiently print every x iterations in for loop

I am writing a program in which a certain for loop gets iterated a great many times.
A single iteration doesn't take too long, but since the program iterates the loop so often, the whole run takes quite some time to compute.
In an effort to get more information on the progress of the program without slowing it down too much, I would like to print the progress every xth step.
Is there a different way to do this than a conditional with a modulo, like so:
for (int i = 0; i < some_large_number; i++) {
    if (i % x == 0)
        printf("%f%%\r", percent);
    //some other code
    .
    .
    .
}
?
Thanks in advance.
This code:
for (int i = 0; i < some_large_number; i++) {
    if (i % x == 0)
        printf("%f%%\r", percent);
    //some other code
    .
    .
    .
}
can be restructured as:
/* Partition the execution into blocks of x iterations, possibly including a
   final fragmentary block. The expression (some_large_number+(x-1))/x
   calculates some_large_number/x with any fraction rounded up.
*/
for (int block = 0, i = 0; block < (some_large_number+(x-1))/x; ++block)
{
    printf("%f%%\r", percent);
    // Set limit to the lesser of the end of the current block or some_large_number.
    int limit = (block+1) * x;
    if (some_large_number < limit) limit = some_large_number;
    // Iterate the original code.
    for (; i < limit; ++i)
    {
        //some other code
    }
}
With the following caveats and properties:
The inner loop has no more work than the original loop (there is no extra variable to count or test), and the i % x == 0 test is removed entirely. This is optimal for the inner loop in the sense that it reduces the nominal amount of work as much as possible, although real-world hardware sometimes has finicky behaviors that can result in more compute time for less actual work.
New identifiers block and limit are introduced but can be changed to avoid any conflicts with uses in the original code.
Other than the above, the inner loop operates identically to the original code: It sees the same values of i in the same order as the original code, so no changes are needed in that code.
some_large_number+(x-1) could overflow int.
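If that is a concern, the block count can be computed with division and remainder so the sum is never formed. A sketch (not part of the original answer), reusing the same variables:

/* Number of x-sized blocks, rounded up, without forming
   some_large_number + (x - 1). */
int blocks = some_large_number / x + (some_large_number % x != 0);

for (int block = 0, i = 0; block < blocks; ++block)
{
    printf("%f%%\r", percent);
    /* Block end computed without risking i + x overflowing int. */
    int limit = (some_large_number - i > x) ? i + x : some_large_number;
    for (; i < limit; ++i)
    {
        //some other code
    }
}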
I would do it like this:
int j = x;
for (int i = 0; i < some_large_number; i++) {
    if (--j == 0) {
        printf("%f%%\r", percent);
        j = x;
    }
    //some other code
    .
    .
    .
}
Divide some_large_number by x. Then loop x times, nest an inner loop over that quotient, and print the percentage after each inner loop finishes (note that any remainder iterations lost to the integer division are not executed). I mean this:
int temp = some_large_number / x;
for (int i = 0; i < x; i++) {
    for (int j = 0; j < temp; j++) {
        //some code
    }
    printf("%f%%\r", percent);
}
The fastest approach regarding your performance concern would be to use a nested loop:
unsigned int x = 6;
unsigned int segments = some_large_number / x;
unsigned int y;

for ( unsigned int i = 0; i < segments; i++ ) {
    printf("%f%%\r", percent);
    for ( unsigned int j = 0; j < x; j++ ) {
        /* some code here */
    }
}

// If some_large_number can't be divided evenly by `x`:
if (( y = (some_large_number % x)) != 0 )
{
    for ( unsigned int i = 0; i < y; i++ ) {
        /* same code as inside of the former inner loop. */
    }
}
Another example would be to use a separate counting variable for the check that triggers the print: compare it to x - 1, and reset it to -1 when it matches so the loop's increment brings it back to 0:
unsigned int x = 6;
unsigned int some_large_number = 100000000;

for ( unsigned int i = 0, j = 0; i < some_large_number; i++, j++ ) {
    if (j == (x - 1))
    {
        printf("%f%%\r", percent);
        j = -1;   /* wraps to 0 after the j++ in the loop header */
    }
    /* some code here */
}

How would I traverse a 2D array to find the local maxima, checking whether all the numbers around each element are smaller than it? [closed]

How would I traverse a 2D array to find the local maxima, checking whether all the numbers around an element are smaller than it? I am really confused about how I would do this in code. I need to get the positions, and I only need local maxima, not the absolute maximum.
void reportMaxima(int rows, int cols, int grid[ rows ][ cols ])
{
}
This should work:
#include <stdbool.h>
#include <string.h>

void report_maxima(int rows, int cols, int arr_in[rows][cols],
                   bool arr_out[rows][cols])
{
    int i, j;
    int k, l;

    memset(arr_out, 0, rows * cols * sizeof(arr_out[0][0]));
    // memset(arr_out, 0, sizeof(arr_out)); I think this doesn't work :(
    // (arr_out decays to a pointer here, so sizeof would give the pointer size)

    for (i = 0; i < rows; i++) {
        for (j = 0; j < cols; j++) {
            for (k = i - 1; k <= (i + 1); k++) {
                if (k < 0)
                    continue;
                if (k >= rows)
                    break;
                for (l = j - 1; l <= (j + 1); l++) {
                    if (l < 0)
                        continue;
                    if (l >= cols)
                        break;
                    if (arr_in[i][j] < arr_in[k][l])
                        goto not_maxima;
                }
            }
            arr_out[i][j] = true;
            continue;
not_maxima:
            ;   /* a label needs a statement to attach to */
        }
    }
}
First you need a bool array in which to store the output info: whether a point is a maximum (true) or not (false).
You need to initialize that array to 0 (false) before storing the points where it is true. The best way to do that is with memset().
Then you obviously need to iterate over the input array (i and j do that).
For each point of the input array, you check all the neighbours (k and l do that).
You need to be sure that the neighbour you are trying to access is inside the array bounds (the if/continue and if/break pairs do that).
Then you check whether all those neighbours are smaller than the point you are on. The first neighbour you find that is greater than your point tells you that you are not at a local maximum, and you should skip to the next point. If, after checking all the neighbours, you haven't found any that is greater than your point, then you are at a local maximum (or at least on a plateau / inflection point).
That last point is important: if you want to be sure, you need to add a lot of extra checking, which would slow down the algorithm considerably. It depends on your needs.
EDIT:
Fixed a bug when using incorrect input to sizeof().
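A small usage sketch, not from the original answer: it assumes report_maxima above is visible, fills a fixed 3x4 grid, and prints the coordinates flagged as maxima.

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
    int grid[3][4] = {
        {1, 4, 2, 0},
        {3, 2, 5, 1},
        {0, 1, 2, 9},
    };
    bool maxima[3][4];

    report_maxima(3, 4, grid, maxima);

    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 4; j++)
            if (maxima[i][j])
                printf("local maximum %d at (%d, %d)\n", grid[i][j], i, j);
    return 0;
}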
Simply run through all of the cells in the array using two for loops:
int i, j;
for (i = 0; i < rows; i++) {
    for (j = 0; j < cols; j++) {
        if (check(i, j, rows, cols, grid)) {
            //do something.
        }
        else {
            //do something else.
        }
    }
}
Then, inside check, you look at all of the numbers around the cell. The key for this task is not to be lazy: just check every cell around it, and make sure you don't try to access memory that is not part of the array.
[i-1][j-1] , [i-1][j] , [i-1][j+1]
[i][j-1] , the cell , [i][j+1]
[i+1][j-1] , [i+1][j] , [i+1][j+1]
So you will need to verify that the +1 indices are smaller than rows and cols (respectively) and that the -1 indices are greater than or equal to 0. After that, check whether the cell in question is smaller than the specific neighbouring cell; if so, return false. At the end of the function, if no neighbouring cell is bigger, return true.
bool check(int i, int j, int rows, int cols, int grid[rows][cols]) {
    if ((i - 1 >= 0) && (j - 1 >= 0) && (grid[i-1][j-1] > grid[i][j]))
        return false;
    if ((i - 1 >= 0) && (grid[i-1][j] > grid[i][j]))
        return false;
    //etc...
    return true;
}
There are more aesthetic ways to do it, but when you are beginning to code, readability should be the most important thing. If you use a helper function, remember to declare it before using it. Good luck!
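For reference, here is one way the //etc... above might be filled in, spelling out all eight neighbour checks in the same style (a sketch; the bounds tests mirror the table of neighbours shown earlier):

#include <stdbool.h>

bool check(int i, int j, int rows, int cols, int grid[rows][cols])
{
    if ((i - 1 >= 0) && (j - 1 >= 0)     && (grid[i-1][j-1] > grid[i][j])) return false;
    if ((i - 1 >= 0)                     && (grid[i-1][j]   > grid[i][j])) return false;
    if ((i - 1 >= 0) && (j + 1 < cols)   && (grid[i-1][j+1] > grid[i][j])) return false;
    if ((j - 1 >= 0)                     && (grid[i][j-1]   > grid[i][j])) return false;
    if ((j + 1 < cols)                   && (grid[i][j+1]   > grid[i][j])) return false;
    if ((i + 1 < rows) && (j - 1 >= 0)   && (grid[i+1][j-1] > grid[i][j])) return false;
    if ((i + 1 < rows)                   && (grid[i+1][j]   > grid[i][j])) return false;
    if ((i + 1 < rows) && (j + 1 < cols) && (grid[i+1][j+1] > grid[i][j])) return false;
    return true;   /* no neighbour is strictly greater */
}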

Using Hash Tables in Lieu of a Multidimensional Array

EDIT: Found a solution! Like the commenters suggested, using memset is an insanely better approach. Replace the entire for loop with
memset(lookup->n, -3, (dimensions*sizeof(signed char)));
where
long int dimensions = box1 * box2 * box3 * box4 * box5 * box6 * box7 * box8 * memvara * memvarb * memvarc * memvard * adirect * tdirect * fs * bs * outputnum;
Intro
Right now, I'm looking at a beast of a for-loop:
for (j = 0; j < box1; j++)
{
  for (k = 0; k < box2; k++)
  {
    for (l = 0; l < box3; l++)
    {
      for (m = 0; m < box4; m++)
      {
        for (x = 0; x < box5; x++)
        {
          for (y = 0; y < box6; y++)
          {
            for (xa = 0; xa < box7; xa++)
            {
              for (xb = 0; xb < box8; xb++)
              {
                for (nb = 0; nb < memvara; nb++)
                {
                  for (na = 0; na < memvarb; na++)
                  {
                    for (nx = 0; nx < memvarc; nx++)
                    {
                      for (nx1 = 0; nx1 < memvard; nx1++)
                      {
                        for (naa = 0; naa < adirect; naa++)
                        {
                          for (nbb = 0; nbb < tdirect; nbb++)
                          {
                            for (ncc = 0; ncc < fs; ncc++)
                            {
                              for (ndd = 0; ndd < bs; ndd++)
                              {
                                for (o = 0; o < outputnum; o++)
                                {
                                  lookup->n[j][k][l][m][x][y][xa][xb][nb][na][nx][nx1][naa][nbb][ncc][ndd][o] = -3; //set to default value
                                }
                              }
                            }
                          }
                        }
                      }
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
The Problem
This loop is called every cycle in the main run to reset values to an initial state. Unfortunately, it is necessary for the structure of the program that this many values are kept in a single data structure.
Here's the kicker: for every 60 seconds of program run time, 57 seconds goes to this function alone.
The Question
My question is this: would hash tables be an appropriate substitute for a linear array? This array has on the order of n^17 entries, whereas hash tables have an ideal lookup cost of O(1).
If so, what hash library would you recommend? This program is in C and has no native hash support.
If not, what would you recommend instead?
Can you provide some pseudo-code on how you think this should be implemented?
Notes
OpenMP was used in an attempt to parallelize this loop. Numerous implementations only resulted in slightly-to-greatly increased run time.
Memory usage is not particularly an issue -- this program is intended to be run on an insanely high-spec'd computer.
We are student researchers, thrust into a heretofore unknown world of optimization and parallelization -- please bear with us, and thank you for any help!
Hash vs Array
As comments have specified, an array should not be a problem here. Lookup into an array with a known offset is O(1).
The Bottleneck
It seems to me that the bulk of the work here (and the reason it is slow) is the number of pointer de-references in the inner-loop.
To explain in a bit more detail, consider myData[x][y][z] in the following code:
for (int x = 0; x < someVal1; x++) {
    for (int y = 0; y < someVal2; y++) {
        for (int z = 0; z < someVal3; z++) {
            myData[x][y][z] = -3; // x and y only change in outer-loops.
        }
    }
}
To compute the location for the -3, we do a lookup and add a value - once for myData[x], then again to get to myData[x][y], and once more finally for myData[x][y][z].
Since this lookup is in the inner-most portion of the loop, we have redundant reads. myData[x] and myData[x][y] are being recomputed, even when only z's value is changing. The lookups were performed during a previous iteration, but the results weren't stored.
For your loop, there are many layers of lookups being computed each iteration, even when only the value of o is changing in that inner-loop.
An Improvement for the Bottleneck
To make only one lookup per loop level, per iteration, simply store the intermediate lookups. Using int* for the indirection (though any type would work here), the sample code above (with myData) would become:
int **a, *b;
for (int x = 0; x < someVal1; x++) {
    a = myData[x]; // Store the lookup.
    for (int y = 0; y < someVal2; y++) {
        b = a[y]; // Indirection based on the stored lookup.
        for (int z = 0; z < someVal3; z++) {
            b[z] = -3; // This can be extrapolated as needed to deeper levels.
        }
    }
}
This is just sample code, small adjustments may be necessary to get it to compile (casts and so forth). Note that there is probably no advantage to using this approach with a 3-dimensional array. However, for a 17-dimensional large data set with simple inner-loop operations (such as assignment), this approach should help quite a bit.
Finally, I'm assuming you aren't actually just assigning the value of -3. You can use memset to accomplish that goal much more efficiently.
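One caveat worth sketching (an addition, not part of the original answer): memset fills individual bytes, so writing -3 this way only works when the elements are a byte-sized type, which is exactly what the question's later edit uses (signed char). For wider types such as int, -3 is not an all-identical-bytes pattern, so a single flat loop over the contiguous storage is the straightforward fallback. Names follow the question:

/* memset comes from <string.h>. Byte-sized elements (signed char), as in
   the question's edit: one call sets every element to -3. */
memset(lookup->n, -3, dimensions * sizeof(signed char));

/* Wider elements such as int: memset cannot produce the value -3, so use
   one flat loop over the same contiguous storage instead. */
int *flat = (int *)lookup->n;
for (long int idx = 0; idx < dimensions; idx++)
    flat[idx] = -3;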

OpenMP and 17 Nested For-Loops

I have a giant nested for-loop, designed to set a large array to its default value. I'm trying to use OpenMP for the first time to parallelize it, and have no idea where to begin. I have been reading tutorials, and am afraid the process will be performed independently on N cores, instead of the N cores dividing the work amongst themselves for a common output. The code is in C, compiled in Visual Studio v14. Any help for this newbie is appreciated -- thanks!
(Attached below is the monster nested for-loop...)
for (j = 0; j < box1; j++)
{
  for (k = 0; k < box2; k++)
  {
    for (l = 0; l < box3; l++)
    {
      for (m = 0; m < box4; m++)
      {
        for (x = 0; x < box5; x++)
        {
          for (y = 0; y < box6; y++)
          {
            for (xa = 0; xa < box7; xa++)
            {
              for (xb = 0; xb < box8; xb++)
              {
                for (nb = 0; nb < memvara; nb++)
                {
                  for (na = 0; na < memvarb; na++)
                  {
                    for (nx = 0; nx < memvarc; nx++)
                    {
                      for (nx1 = 0; nx1 < memvard; nx1++)
                      {
                        for (naa = 0; naa < adirect; naa++)
                        {
                          for (nbb = 0; nbb < tdirect; nbb++)
                          {
                            for (ncc = 0; ncc < fs; ncc++)
                            {
                              for (ndd = 0; ndd < bs; ndd++)
                              {
                                for (o = 0; o < outputnum; o++)
                                {
                                  lookup->n[j][k][l][m][x][y][xa][xb][nb][na][nx][nx1][naa][nbb][ncc][ndd][o] = -3; //set to default value
                                }
                              }
                            }
                          }
                        }
                      }
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
If n is actually a multidimensional array, you can do this:
size_t i;
size_t count = sizeof(lookup->n) / sizeof(int);
int *p = (int*)lookup->n;

for( i = 0; i < count; i++ )
{
    p[i] = -3;
}
Now, that's much easier to parallelize.
Read more on why this works here (applies to C as well): How do I use arrays in C++?
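To actually spread that flattened loop across threads, a minimal OpenMP sketch might look like this (an addition, not part of the original answer; it assumes OpenMP is enabled at compile time and that n really is a multidimensional array of int as above; older OpenMP versions require a signed loop counter, hence the long long form):

size_t count = sizeof(lookup->n) / sizeof(int);
int *p = (int *)lookup->n;

/* Split the single flattened index range across the available threads.
   The pragma is ignored and the loop runs serially if OpenMP is off. */
#pragma omp parallel for
for (long long i = 0; i < (long long)count; i++)
{
    p[i] = -3;
}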
This is more of an extended comment than an answer.
Find the iteration limit (i.e. the variable among box1, box2, etc.) with the largest value. Revise your loop nest so that the outermost loop runs over that one, and simply parallelise the outermost loop. Choosing the largest value means that, in the limit, each thread gets an approximately equal number of inner-loop iterations to run.
Collapsing loops, whether you can use OpenMP's collapse clause or have to do it by hand, is only useful when you have reason to believe that parallelising over only the outermost loop will result in significant load imbalance. That seems very unlikely in this case, so distributing the work (approximately) evenly across the available threads at the outermost level would probably provide reasonably good load balancing.
I believe, based on tertiary research, that the solution might be found in adding #pragma omp parallel for collapse(N) directly above the nested loops. However, this seems to only work in OpenMP v3.0 or later, and the whole project is based on Visual Studio (and therefore OpenMP v2.0) for now...
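On a compiler that does support OpenMP 3.0 or later, the collapse clause would look roughly like this (a sketch; collapse(2) merges the two outermost loops into a single parallel iteration space, and the remaining loops stay exactly as they were):

#pragma omp parallel for collapse(2)
for (int j = 0; j < box1; j++)
{
    for (int k = 0; k < box2; k++)
    {
        /* ...the remaining fifteen nested loops and the
           lookup->n[...] = -3 assignment go here, unchanged... */
    }
}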

Sort function failing for larger arrays

I'm writing a program to implement Prim's Algorithm for minimum spanning trees for a short project for a course. The first step is to sort the edges according to weight; the code I have for this works sometimes, but not always.
Here is the code:
for (int i = 0; i < graph.edges; ++i)
{
    least_remain_edge = i;
    for (int k = i + 1; k < graph.edges; ++k)
    {
        if (graph.edge[k][3] < graph.edge[least_remain_edge][3])
        {
            least_remain_edge = k;
        }
    }
    if (least_remain_edge != i)
    {
        swap_temp = graph.edge[i][0];
        graph.edge[i][0] = graph.edge[least_remain_edge][0];
        graph.edge[least_remain_edge][0] = swap_temp;
    }
}
graph.edge[i][3] is the weight of the ith edge, and [i][0] is the edge's reference/name. It's something like a selection sort, where it finds the smallest element in the remainder of the list and puts it in the ith place. I can't see why this isn't always working!
When you're moving elements around, you're only moving their name/reference, and not the weights or whatever else you're storing. So, maybe do something like this:
for (int k = 0; k < 4; k++) {
    swap_temp = graph.edge[i][k];
    graph.edge[i][k] = graph.edge[least_remain_edge][k];
    graph.edge[least_remain_edge][k] = swap_temp;
}
