Loop through 2 arrays in one for loop? - arrays

Does anyone know how we can loop through two arrays in one for loop?
function setwinner() internal returns(address) {
    for (uint stime = 0 ; stime < squareStartTimeArray.length; stime++ & uint etime = 0; etime = squareEndTimeArray.length etime++) {
        if (winningTime >= stime & winningTime <= etime) {
            winningIndex = stime;
            if (assert(stime == etime) == true) {
                winningAddress = playerArray[stime];
            }
        }
    }
}

To loop through multiple arrays in the same loop, you should first make sure that they both have the same length. Then you can use this:
require(arrayOne.length == arrayTwo.length);
for (uint i = 0; i < arrayOne.length; i++) {
    arrayOne[i] = ....;
    arrayTwo[i] = ....;
}

Related

C, help continue..? in while loop - polynomial ADT

When the coef is 0, I used continue so that term isn't printed, but then only printTerm(a) comes out and the printTerm(b) part does not come out.
When I delete the if/continue statement, both printTerm(a) and printTerm(b) appear, so the problem seems to be with the if/continue statement.
How can I solve this?
int main() {
    a[0].coef = 2;
    a[0].expon = 1000; // 2x^1000
    a[1].coef = 1;
    a[1].expon = 2;    // x^2
    a[2].coef = 1;
    a[2].expon = 0;    // 1
    b[0].coef = 1;
    b[0].expon = 4;    // x^4
    b[1].coef = 10;
    b[1].expon = 3;    // 10x^3
    b[2].coef = 3;
    b[2].expon = 2;    // 3x^2
    b[2].coef = 1;
    b[2].expon = 0;    // 1
    printTerm(a);
    printTerm(b);
    return 0;
}
void printTerm(polynomial *p) {
    int i = 0;
    printf("polynomial : ");
    while (p[i].expon != -1) {
        if (p[i].coef == 0) continue;
        printf("%dx^%d", p[i].coef, p[i].expon);
        i++;
        if (p[i].expon != -1 && p[i].coef > 0) printf(" + ");
    }
    printf("\n");
}
Because you only increment i if p[i].coef is not equal to 0.
If p[i].coef == 0, the continue skips the increment and the function is stuck in an infinite loop, always checking the same array item.
EDIT:
A way to fix this:
Instead of if(p[i].coef == 0) continue; use:
if (p[i].coef == 0)
{
    i++;
    continue;
}
This way the while loop evaluates the next array item instead of being stuck on the same one.
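For reference, this is how the corrected function could look with that fix applied (a sketch only; it keeps everything else as-is and assumes the same polynomial struct and the expon == -1 terminator that printTerm already relies on):
void printTerm(polynomial *p) {
    int i = 0;
    printf("polynomial : ");
    while (p[i].expon != -1) {
        if (p[i].coef == 0) {
            i++;        /* advance past the zero term instead of looping forever */
            continue;
        }
        printf("%dx^%d", p[i].coef, p[i].expon);
        i++;
        if (p[i].expon != -1 && p[i].coef > 0) printf(" + ");
    }
    printf("\n");
}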

OpenMP parallel for loop

void calc_mean(float *left_mean, float *right_mean, const uint8_t* left, const uint8_t* right,
               int32_t block_width, int32_t block_height, int32_t d,
               uint32_t w, uint32_t h, int32_t i, int32_t j)
{
    *left_mean = 0;
    *right_mean = 0;
    int32_t i_b;
    float local_left = 0, local_right = 0;
    for (i_b = -(block_height-1)/2; i_b < (block_height-1)/2; i_b++) {
        #pragma omp parallel for reduction(+:local_left,local_right)
        for ( int32_t j_b = -(block_width-1)/2; j_b < (block_width-1)/2; j_b++) {
            // Borders checking
            if (!(i+i_b >= 0) || !(i+i_b < h) || !(j+j_b >= 0) || !(j+j_b < w) || !(j+j_b-d >= 0) || !(j+j_b-d < w)) {
                continue;
            }
            // Calculating indices of the block within the whole image
            int32_t ind_l = (i+i_b)*w + (j+j_b);
            int32_t ind_r = (i+i_b)*w + (j+j_b-d);
            // Updating the block means
            //*left_mean += *(left+ind_l);
            //*right_mean += *(right+ind_r);
            local_left += left[ind_l];
            local_right += right[ind_r];
        }
    }
    *left_mean = local_left/(block_height * block_width);
    *right_mean = local_right/(block_height * block_width);
}
This now makes the program execution longer than the non-threaded version. I added private(left,right), but that leads to bad memory access for ind_l.
I think this should get you closer to what you want, although I'm not quite sure about one final part.
float local_left = 0, local_right = 0;
for ( int32_t i_b = -(block_height-1)/2; i_b < (block_height-1)/2; i_b++) {
    #pragma omp parallel for schedule(static, CORES) reduction(+:local_left, local_right)
    for ( int32_t j_b = -(block_width-1)/2; j_b < (block_width-1)/2; j_b++) {
        if (your conditions) continue;
        int32_t ind_l = (i+i_b)*w + (j+j_b);
        int32_t ind_r = (i+i_b)*w + (j+j_b-d);
        local_left += *(left+ind_l);
        local_right += *(right+ind_r);
    }
}
*left_mean = local_left/(block_height * block_width);
*right_mean = local_right/(block_height * block_width);
The part I am unsure of is whether you need the schedule() clause and the exact syntax for reducing over two variables. I know that for one reduction you can simply do
reduction(+:left_mean)
EDIT: some reference for the schedule() http://pages.tacc.utexas.edu/~eijkhout/pcse/html/omp-loop.html#Loopschedules
It looks like you do not need it, but using it could produce a better runtime.
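A further note, beyond the answer above: because the #pragma sits on the inner loop, a new parallel region is created for every row of the block, and that overhead can easily outweigh the small amount of work per row. A minimal sketch of an alternative (a suggestion, not something from this thread) hoists the parallel region to the outer loop and uses collapse(2) so both loops form one iteration space; the signature and names follow the question's calc_mean():
#include <stdint.h>

void calc_mean_sketch(float *left_mean, float *right_mean,
                      const uint8_t *left, const uint8_t *right,
                      int32_t block_width, int32_t block_height, int32_t d,
                      uint32_t w, uint32_t h, int32_t i, int32_t j)
{
    float local_left = 0.0f, local_right = 0.0f;
    // One parallel region for the whole block instead of one per row;
    // collapse(2) merges both loops so small blocks still keep all threads busy.
    #pragma omp parallel for collapse(2) reduction(+:local_left, local_right)
    for (int32_t i_b = -(block_height - 1) / 2; i_b < (block_height - 1) / 2; i_b++) {
        for (int32_t j_b = -(block_width - 1) / 2; j_b < (block_width - 1) / 2; j_b++) {
            // Same border checks as in the question, written as a guard
            // so the loop nest stays perfectly nested for collapse(2).
            if (i + i_b >= 0 && i + i_b < (int32_t)h &&
                j + j_b >= 0 && j + j_b < (int32_t)w &&
                j + j_b - d >= 0 && j + j_b - d < (int32_t)w) {
                int32_t ind_l = (i + i_b) * w + (j + j_b);
                int32_t ind_r = (i + i_b) * w + (j + j_b - d);
                local_left  += left[ind_l];
                local_right += right[ind_r];
            }
        }
    }
    *left_mean  = local_left  / (block_height * block_width);
    *right_mean = local_right / (block_height * block_width);
}
Whether this actually beats the serial version still depends on the block size; for very small blocks the per-call overhead of any parallel region may dominate.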

Optimization of Brute-Force algorithm or Alternative?

I have a simple (brute-force) recursive solver algorithm that takes a lot of time for bigger values of the OpxCnt variable. For small values of OpxCnt there is no problem, it works like a charm. The algorithm gets very slow as OpxCnt gets bigger. This is to be expected, but is there any optimization, or a different algorithm?
My final goal is this: I want to read all the true values in the map array by
executing some number of read operations that have the minimum total operation
cost. This is not the same as the minimum number of read operations.
At function completion, there should be no true value left unread.
The map array is populated by some external function; any member may be 1 or 0.
For example:
map[4] = 1;
map[8] = 1;
One read operation with Adr=4, Cnt=5 has the lowest cost (35),
whereas
two read operations with Adr=4, Cnt=1 and Adr=8, Cnt=1 cost 27 + 27 = 54.
#include <string.h>
typedef unsigned int Ui32;
#define cntof(x) (sizeof(x) / sizeof((x)[0]))
#define ZERO(x) do{memset(&(x), 0, sizeof(x));}while(0)
typedef struct _S_MB_oper{
Ui32 Adr;
Ui32 Cnt;
}S_MB_oper;
typedef struct _S_MB_code{
Ui32 OpxCnt;
S_MB_oper OpxLst[20];
Ui32 OpxPay;
}S_MB_code;
char map[65536] = {0};
static int opx_ListOkey(S_MB_code *px_kod, char *pi_map)
{
int cost = 0;
char map[65536];
memcpy(map, pi_map, sizeof(map));
for(Ui32 o = 0; o < px_kod->OpxCnt; o++)
{
for(Ui32 i = 0; i < px_kod->OpxLst[o].Cnt; i++)
{
Ui32 adr = px_kod->OpxLst[o].Adr + i;
// ...
if(adr < cntof(map)){map[adr] = 0x0;}
}
}
for(Ui32 i = 0; i < cntof(map); i++)
{
if(map[i] > 0x0){return -1;}
}
// calculate COST...
for(Ui32 o = 0; o < px_kod->OpxCnt; o++)
{
cost += 12;
cost += 13;
cost += (2 * px_kod->OpxLst[o].Cnt);
}
px_kod->OpxPay = (Ui32)cost; return cost;
}
static int opx_FindNext(char *map, int pi_idx)
{
int i;
if(pi_idx < 0){pi_idx = 0;}
for(i = pi_idx; i < 65536; i++)
{
if(map[i] > 0x0){return i;}
}
return -1;
}
static int opx_FindZero(char *map, int pi_idx)
{
int i;
if(pi_idx < 0){pi_idx = 0;}
for(i = pi_idx; i < 65536; i++)
{
if(map[i] < 0x1){return i;}
}
return -1;
}
static int opx_Resolver(S_MB_code *po_bst, S_MB_code *px_wrk, char *pi_map, Ui32 *px_idx, int _min, int _max)
{
int pay, kmax, kmin = 1;
if(*px_idx >= px_wrk->OpxCnt)
{
return opx_ListOkey(px_wrk, pi_map);
}
_min = opx_FindNext(pi_map, _min);
// ...
if(_min < 0){return -1;}
kmax = (_max - _min) + 1;
// must be less than 127 !
if(kmax > 127){kmax = 127;}
// is this recursion the last one ?
if(*px_idx >= (px_wrk->OpxCnt - 1))
{
kmin = kmax;
}
else
{
int zero = opx_FindZero(pi_map, _min);
// ...
if(zero > 0)
{
kmin = zero - _min;
// enforce kmax limit !?
if(kmin > kmax){kmin = kmax;}
}
}
for(int _cnt = kmin; _cnt <= kmax; _cnt++)
{
px_wrk->OpxLst[*px_idx].Adr = (Ui32)_min;
px_wrk->OpxLst[*px_idx].Cnt = (Ui32)_cnt;
(*px_idx)++;
pay = opx_Resolver(po_bst, px_wrk, pi_map, px_idx, (_min + _cnt), _max);
(*px_idx)--;
if(pay > 0)
{
if((Ui32)pay < po_bst->OpxPay)
{
memcpy(po_bst, px_wrk, sizeof(*po_bst));
}
}
}
return (int)po_bst->OpxPay;
}
int main()
{
int _max = -1, _cnt = 0;
S_MB_code best = {0};
S_MB_code work = {0};
// SOME TEST DATA...
map[ 4] = 1;
map[ 8] = 1;
/*
map[64] = 1;
map[72] = 1;
map[80] = 1;
map[88] = 1;
map[96] = 1;
*/
// SOME TEST DATA...
for(int i = 0; i < cntof(map); i++)
{
if(map[i] > 0)
{
_max = i; _cnt++;
}
}
// num of Opx can be as much as num of individual bit(s).
if(_cnt > cntof(work.OpxLst)){_cnt = cntof(work.OpxLst);}
best.OpxPay = 1000000000L; // invalid great number...
for(int opx_cnt = 1; opx_cnt <= _cnt; opx_cnt++)
{
int rv;
Ui32 x = 0;
ZERO(work); work.OpxCnt = (Ui32)opx_cnt;
rv = opx_Resolver(&best, &work, map, &x, -42, _max);
}
return 0;
}
You can use dynamic programming to calculate the lowest cost that covers the first i true values in map[]. Call this f(i). As I'll explain, you can calculate f(i) by looking at all f(j) for j < i, so this will take time quadratic in the number of true values -- much better than exponential. The final answer you're looking for will be f(n), where n is the number of true values in map[].
A first step is to preprocess map[] into a list of the positions of true values. (It's possible to do DP on the raw map[] array, but this will be slower if true values are sparse, and cannot be faster.)
int pos[65537]; // pos[k] is the position of the kth true value (1-based, to match scores[])
int nTrue = 0;

void getPosList() {
    for (int i = 0; i < 65536; ++i) {
        if (map[i]) pos[++nTrue] = i;
    }
}
When we're looking at the subproblem on just the first i true values, what we know is that the ith true value must be covered by a read that ends at its position, pos(i). This read could start at the position of any true value j <= i; we don't know which, so we have to test all i of them and pick the best. The key property (optimal substructure) that enables DP here is that in any optimal solution to the i-sized subproblem, if the read that covers the ith true value starts at the jth true value, then the preceding j-1 true values must be covered by an optimal solution to the (j-1)-sized subproblem.
So: f(i) = min(f(j) + score(pos(j+1), pos(i))), with the minimum taken over all 0 <= j < i (j = 0 corresponds to a single read covering all of the first i true values). pos(k) refers to the position of the kth true value in map[], and score(x, y) is the score of a read from position x to position y, inclusive.
#include <limits.h> // INT_MAX
#include <stdio.h>  // printf

int score(int i, int j); // defined below

int scores[65537] = {0}; // We effectively start indexing at 1;
                         // scores[0] == 0: covering the first 0 true values costs 0.

// Calculate the minimum score that could allow the first i > 0 true values
// to be read, and store it in scores[i].
// We can assume that all lower values have already been calculated.
void calcF(int i) {
    int bestStart, bestScore = INT_MAX; // bestStart is recorded for clarity; it is not used further here
    for (int j = 0; j < i; ++j) { // Always executes at least once
        int attemptScore = scores[j] + score(pos[j + 1], pos[i]);
        if (attemptScore < bestScore) {
            bestStart = j + 1;
            bestScore = attemptScore;
        }
    }
    scores[i] = bestScore;
}

// Cost of one read covering positions i..j inclusive: 12 + 13 + 2 * Cnt = 25 + 2 * Cnt.
int score(int i, int j) {
    return 25 + 2 * (j + 1 - i);
}
int main(int argc, char **argv) {
    // Set up map[] however you want
    getPosList();
    for (int i = 1; i <= nTrue; ++i) {
        calcF(i);
    }
    printf("Optimal solution has cost %d.\n", scores[nTrue]);
    return 0;
}
Extracting a Solution from Scores
Using this scheme, you can calculate the score of an optimal solution: it's simply f(n), where n is the number of true values in map[]. In order to actually construct the solution, you need to read back through the table of f() scores to infer which choice was made:
void printSolution() {
    int i = nTrue;
    while (i) {
        for (int j = 0; j < i; ++j) {
            if (scores[i] == scores[j] + score(pos[j + 1], pos[i])) {
                // We know that a read can be made from pos[j + 1] to pos[i] in
                // an optimal solution, so let's make it.
                printf("Read from %d to %d for cost %d.\n",
                       pos[j + 1], pos[i], score(pos[j + 1], pos[i]));
                i = j;
                break;
            }
        }
    }
}
There may be several possible choices, but all of them will produce optimal solutions.
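As a quick sanity check against the example in the question (map[4] = 1 and map[8] = 1): nTrue is 2, pos[1] = 4, pos[2] = 8, so f(1) = 0 + score(4, 4) = 27 and f(2) = min(0 + score(4, 8), 27 + score(8, 8)) = min(35, 54) = 35, matching the cost of 35 claimed there. To print the reads as well, call the routine after the loop in main() (with printSolution() declared or defined before main()):
    for (int i = 1; i <= nTrue; ++i) {
        calcF(i);
    }
    printSolution(); // for this example: "Read from 4 to 8 for cost 35."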
Further Speedups
The solution above will work for an arbitrary scoring function. Because your scoring function has a simple structure, it may be that even faster algorithms can be developed.
For example, we can prove that there is a gap width above which it is always beneficial to break a single read into two reads. Suppose we have a read from position x-a to x, and another read from position y to y+b, with y > x. The combined costs of these two separate reads are 25 + 2 * (a + 1) + 25 + 2 * (b + 1) = 54 + 2 * (a + b). A single read stretching from x-a to y+b would cost 25 + 2 * (y + b - x + a + 1) = 27 + 2 * (a + b) + 2 * (y - x). Therefore the single read costs 27 - 2 * (y - x) less. If y - x > 13, this difference goes below zero: in other words, it can never be optimal to include a single read that spans a gap of 13 or more.
To make use of this property, inside calcF(), final reads could be tried in decreasing order of start-position (i.e. in increasing order of width), and the inner loop stopped as soon as any gap width exceeds 12. Because that read and all subsequent wider reads tried would contain this too-large gap and therefore be suboptimal, they need not be tried.
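A sketch of how that early exit could look (my illustration, not part of the original answer; it reuses the pos[], scores[] and score() definitions above):
// Variant of calcF() that tries final reads in increasing order of width and
// stops once the read would span a gap wider than 12 positions.
void calcF_withCutoff(int i) {
    int bestScore = INT_MAX;
    for (int j = i - 1; j >= 0; --j) {          // j = i - 1 is the narrowest read
        // Going from j + 1 to j widens the read so that it now also spans the
        // gap between the (j+1)th and (j+2)th true values; once that gap
        // exceeds 12, this read and every wider one are suboptimal.
        if (j < i - 1 && pos[j + 2] - pos[j + 1] - 1 > 12)
            break;
        int attemptScore = scores[j] + score(pos[j + 1], pos[i]);
        if (attemptScore < bestScore)
            bestScore = attemptScore;
    }
    scores[i] = bestScore;
}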

Help in a DP problem

I'm trying to solve a problem from SPOJ ( link ), which can be briefly described like this: given n intervals, each with an integer beginning and end, and given the end with the maximum time (let's call it max_end), find in how many ways you can choose a set of intervals that covers 1...max_end. Intervals may overlap. I tried a DP: first sort by end time; then dp[i] is a pair, where dp[i].first is the minimum number of intervals needed to cover 1...end[i] using interval i last, and dp[i].second is the number of ways to do it. Here's my main DP loop:
for (int i = 1; i < n; i++) {
    for (int j = 0; j < i; j++) {
        if (!(x[j].end >= x[i].start - 1))
            continue;
        if (dp[j].first + 1 < dp[i].first) {
            dp[i].first = dp[j].first + 1;
            dp[i].second = dp[j].second;
        }
        else if (dp[j].first + 1 == dp[i].first) {
            dp[i].second += dp[j].second;
        }
    }
}
Unfortunately, it didn't work. Can somebody please tell me where I have a mistake? Thanks in advance! :)
I'm not sure I get your solution idea, but I'll describe my AC solution:
I'm using a function with memoization, but you can rewrite it using non-recursive DP.
Let's say we have our intervals in an array
pair<int, int> a[100];
where
a[i].first is the interval begin and a[i].second is the interval end.
Sort this array by begin first (the default behavior of std::sort with the default pair comparator).
Now imagine that we are 'putting' intervals one by one from beginning to end.
Let f(int x, int prev) return the number of ways to finish the filling if the current last interval is x and the previous one is prev.
We'll calculate it as follows:
int f(int x, int prev) {
    // if dp[x][prev] is already calculated, return it; otherwise, calculate it
    if (dp[x][prev] != -1) {
        return dp[x][prev];
    }
    if (a[x].second == m) {
        return dp[x][prev] = 1; // it means x is the last interval of the day
    }
    else {
        dp[x][prev] = 0;
        for (int i = x + 1; i < n; ++i) { // try to select the next interval
            if (a[i].first <= a[x].second &&   // there must be no empty space after interval x
                a[i].second > a[x].second &&   // if this is false, the set won't be minimal - interval i is useless
                a[i].first > a[x].first &&     // if this is false, the set won't be minimal - interval x is useless
                a[prev].second < a[i].first) { // if this is false, the set won't be minimal - interval x is useless
                dp[x][prev] = (dp[x][prev] + f(i, x)) % 100000000;
            }
        }
    }
    return dp[x][prev];
}
After that we need to call this function for every pair of intervals, the first of which starts at 0 and the second of which is connected to the first:
for (int i = 0; i < n; ++i) {
    if (a[i].first == 0) {
        for (int j = i + 1; j < n; ++j) {
            if (a[j].first > 0 &&            // we don't need to start at 0 again - otherwise either i or j would be useless
                a[j].first <= a[i].second && // there must be no space after interval i
                a[j].second > a[i].second) { // in the opposite case j would be useless
                res = (res + f(j, i)) % 100000000;
            }
        }
        // we also need to check the case when we use only one interval:
        if (a[i].second == m) {
            res = (res + 1) % 100000000;
        }
    }
}
After that we only need to print res.
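For completeness, a sketch of the setup the code above assumes (the names a, dp, n, m and res come from the answer; the array sizes and the solve() wrapper are just illustrative):
#include <algorithm>
#include <cstring>
#include <utility>

using std::pair;

pair<int, int> a[100]; // a[i].first = begin, a[i].second = end
int dp[100][100];      // dp[x][prev]; -1 marks a state that has not been computed yet
int n;                 // number of intervals
int m;                 // max_end: the latest end time that has to be covered
int res;               // the answer, modulo 100000000

int f(int x, int prev); // the memoized function shown above

int solve() {
    std::sort(a, a + n);            // sort by begin (default pair comparator)
    std::memset(dp, -1, sizeof dp); // every state starts out as "not computed"
    res = 0;
    // ... the two driver loops from the answer go here ...
    return res;
}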

Identify the index corresponding to the smallest data in a set of arrays

This is a trivial algorithmic question, I believe, but I don't seem to be able to find an efficient and elegant solution.
We have 3 arrays of int (Aa, Ab, Ac) and 3 cursors (Ca, Cb, Cc) that indicate an index in the corresponding array. I want to identify and increment the cursor pointing to the smallest value. If this cursor is already at the end of the array, I will exclude it and increment the cursor pointing to the second smallest value. If there is only 1 cursor that is not at the end of the array, we increment this one.
The only solutions I can come up with are complicated and/or suboptimal. For example, I always end up with a huge if...else...
Does anyone see a neat solution to this problem ?
I am programming in C++ but feel free to discuss it in pseudo-code or any language you like.
Thank you
Pseudo-java code:
int[] values = new int[3];
values[0] = aa[ca];
values[1] = ab[cb];
values[2] = ac[cc];
Arrays.sort(values);
boolean done = false;
for (int i = 0; i < 3 && !done; i++) {
    if (values[i] == aa[ca] && ca + 1 < aa.length) {
        ca++;
        done = true;
    }
    else if (values[i] == ab[cb] && cb + 1 < ab.length) {
        cb++;
        done = true;
    }
    else if (values[i] == ac[cc] && cc + 1 < ac.length) {
        cc++;
        done = true;
    }
}
if (!done) {
    System.out.println("cannot increment any index");
    stop = true;
}
Essentially, it does the following:
initialize an array values with aa[ca], ab[cb] and ac[cc]
sort values
scan values and, if possible (i.e. the corresponding cursor is not already at the end of its array), increment the cursor of the array the value came from
I know, sorting is at best O(n lg n), but I'm only sorting an array of 3 elements.
What about this solution:
if ((Ca != arraySize - 1) AND
    ((Aa[Ca] == min(Aa[Ca], Ab[Cb], Ac[Cc])) OR
     (Aa[Ca] == min(Aa[Ca], Ab[Cb]) And Cc == arraySize - 1) OR
     (Aa[Ca] == min(Aa[Ca], Ac[Cc]) And Cb == arraySize - 1) OR
     (Cc == arraySize - 1 And Cb == arraySize - 1)))
{
    Ca++;
}
else if ((Cb != arraySize - 1) AND
         ((Ab[Cb] == min(Ab[Cb], Ac[Cc])) OR (Cc == arraySize - 1)))
{
    Cb++;
}
else if (Cc != arraySize - 1)
{
    Cc++;
}
Pseudo-code (EDIT: tidied it up a bit):
class CursoredArray
{
public:
    int index = 0;
    std::vector<int> array;

    int val()
    {
        return array[index];
    }
    bool moveNext()
    {
        bool ret = true;
        if( index + 1 < (int)array.size() )
            ++index;
        else
            ret = false;
        return ret;
    }
};

std::vector<CursoredArray> arrays;
std::vector<int> order = { 0, 1, 2 };//have a default order to start with
if( arrays[0].val() > arrays[1].val() )
    std::swap( order[0], order[1] );
if( arrays[2].val() < arrays[order[1]].val() )//if the third is less than the largest of the others
{
    std::swap( order[1], order[2] );
    if( arrays[2].val() < arrays[order[0]].val() )//if the third is less than the smallest of the others
        std::swap( order[0], order[1] );
}
//else third pos of order is already correct

bool end = true;
for( int i = 0; i < 3; ++i )
{
    if( arrays[order[i]].moveNext() )
    {
        end = false;
        break;
    }
}
if( end )//have gone through all the arrays
{
    //nothing can be incremented any further
}
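Not from any of the answers above, but for comparison, a compact sketch of the same cursor-advancing idea (names are illustrative): among the cursors that can still move, advance the one whose current value is smallest.
#include <vector>
#include <cstddef>

struct Cursor {
    const std::vector<int>* array; // the array this cursor walks
    std::size_t index = 0;         // current position in that array

    bool canAdvance() const { return index + 1 < array->size(); }
    int  value()      const { return (*array)[index]; }
};

// Advances the non-exhausted cursor with the smallest current value.
// Returns false when no cursor can be advanced any further.
bool advanceSmallest(std::vector<Cursor>& cursors) {
    Cursor* best = nullptr;
    for (Cursor& c : cursors) {
        if (c.canAdvance() && (best == nullptr || c.value() < best->value()))
            best = &c;
    }
    if (best == nullptr)
        return false;
    ++best->index;
    return true;
}
This keeps the three-way comparison out of the calling code and generalizes to any number of arrays.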
