Distribute elements between equivalent arrays to achieve balanced sums - c

I am given a set of elements, say 10 to 21 (always sequential), and I generate arrays of the same size, where the size is determined at runtime.
Here is an example of 3 generated arrays (the number of arrays is dynamic, as is the number of elements in each array; some elements can be 0s, i.e. not used):
A1 = [10, 11, 12, 13]
A2 = [14, 15, 16, 17]
A3 = [18, 19, 20, 21]
These generated arrays will be given to different processes to do some computations on the elements. My aim is to balance the load for every process that gets an array. What I mean is:
With the given example, there are
A1 = 46
A2 = 62
A3 = 78
potential iterations over the elements given to each process.
I want to rearrange the initial arrays to give an equal amount of work to each process, for example:
A1 = [21, 11, 12, 13] = 57
A2 = [14, 15, 16, 17] = 62
A3 = [18, 19, 20, 10] = 67
(Not an equal distribution, but fairer than the initial one.) The distributions can differ, as long as they approach some optimal distribution and are better than the worst (initial) case of the first and last arrays. As I see it, different distributions can be achieved using different indexing, i.e. where the split of the arrays is made (which can be uneven).
This works fine for the given example, but there may be weird cases.
So, I see this as a reflection problem (for lack of knowledge of the proper term), where the arrays should be seen with a diagonal through them, like:
10|111213
1415|1617
181920|21
And then an obvious substitution can be done.
I tried to implement it like this:
if (rest == 0)
    payload_size = (upper - lower) / (processes - 1);
else
    payload_size = (upper - lower) / (processes - 1) + 1;
//printf("payload size: %d\n", payload_size);
long payload[payload_size];
int m = 0;
int k = payload_size / 2;
int added = 0;  // track what has been added so far (to skip over already added elements)
int added2 = 0; // same as 'added'
int p = 0;
for (i = lower; i <= upper; i = i + payload_size) {
    for (j = i; j < (i + payload_size); j++) {
        if (j <= upper) {
            if ((j - i) > k) {
                if (added2 > j) {
                    added = j;
                    payload[(j - i)] = j;
                    printf("1 adding data: %d at location: %d\n", payload[(j - i)], (j - i));
                } else {
                    printf("else..\n");
                }
            } else {
                if (added < upper - (m + 1)) {
                    payload[(j - i)] = upper - (p * payload_size) - (m++);
                    added2 = payload[(j - i)];
                    printf("2 adding data: %d at location: %d\n", payload[(j - i)], (j - i));
                } else {
                    payload[(j - i)] = j;
                    printf("2.5 adding data: %d at location: %d\n", payload[(j - i)], (j - i));
                }
            }
        } else {
            payload[(j - i)] = '\0';
        }
    }
    p++;
    k = k / 2;
    //printf("send to proc: %d\n", ((i)/payload_size)%(processes-1)+1);
}
...but failed horribly.
You can definitely see the problem in the implementation: it is poorly scalable, incomplete, messy, badly written, and so on, and on, and on...
So, I need help either with the implementation or with an idea of a better approach to do what I want to achieve, given the description.
P.S. I need the solution to be as 'in-liney' as possible (avoiding loop nesting) - that is why I am using a bunch of flags and global indexes.
Surely this can be done with extra loops and unnecessary iterations. I invite people who can do, and who appreciate, the art of indexing when it comes to arrays.
I am sure there is a solution somewhere out there, but I just cannot come up with an appropriate Google query to find it.
Hint? I thought of using index % size_of_my_data to achieve this task.
P.S. Application: described here

Here is an O(n) solution I wrote using a deque (double-ended queue; a deque is not necessary and a simple array could be used, but a deque makes the code clean because of pop and popleft). The code is Python, not pseudocode, but it should be pretty easy to understand (because it's Python):
def balancingSumProblem(seqStart = None, seqStop = None, numberOfArrays = None):
    from random import randint
    from collections import deque
    seq = deque(xrange(seqStart or randint(1, 10),
                       seqStop and seqStop + 1 or randint(11, 30)))
    arrays = [[] for _ in xrange(numberOfArrays or randint(1, 6))]
    print "# of elements: {}".format(len(seq))
    print "# of arrays: {}".format(len(arrays))
    averageNumElements = float(len(seq)) / len(arrays)
    print "average number of elements per array: {}".format(averageNumElements)
    oddIteration = True
    try:
        while seq:
            for array in arrays:
                if len(array) < averageNumElements and oddIteration:
                    array.append(seq.pop()) # pop() is like popright()
                elif len(array) < averageNumElements:
                    array.append(seq.popleft())
            oddIteration = not oddIteration
    except IndexError:
        pass
    print arrays
    print [sum(array) for array in arrays]

balancingSumProblem(10, 21, 3) # Given Example
print "\n---------\n"
balancingSumProblem() # Randomized Test
Basically, from iteration to iteration, it alternates between grabbing large elements and distributing them evenly in the arrays and grabbing small elements and distributing them evenly in the arrays. It goes from out to in (though you could go from in to out) and tries to use what should be the average number of elements per array to balance it out further.
It's not 100 percent accurate with all tests but it does a good job with most randomized tests. You can try running the code here: http://repl.it/cJg

With a simple sequence to assign, you can just iteratively add the min and max elements to each list in turn. There are some termination details to fix up, but that's the general idea. Applied to your example the output would look like:
john-schultzs-macbook-pro:~ jschultz$ ./a.out
10 21 13 18 = 62
11 20 14 17 = 62
12 19 15 16 = 62
A simple reflection assignment like this will be optimal when num_procs evenly divides num_elems. It will be sub-optimal, but still decent, when it doesn't:
#include <stdio.h>

int compute_dist(int lower, int upper, int num_procs)
{
    if (lower > upper || num_procs <= 0)
        return -1;

    int num_elems = upper - lower + 1;
    int num_elems_per_proc_floor = num_elems / num_procs;
    int num_elems_per_proc_ceil = num_elems_per_proc_floor + (num_elems % num_procs != 0);
    int procs[num_procs][num_elems_per_proc_ceil];
    int i, j, sum;

    // assign pairs of (lower, upper) to each process until we can't anymore
    for (i = 0; i + 2 <= num_elems_per_proc_floor; i += 2)
        for (j = 0; j < num_procs; ++j)
        {
            procs[j][i] = lower++;
            procs[j][i+1] = upper--;
        }

    // handle left overs similarly to the above
    // NOTE: actually you could use just this loop alone if you set i = 0 here,
    // but the above loop is more understandable
    for (; i < num_elems_per_proc_ceil; ++i)
        for (j = 0; j < num_procs; ++j)
            if (lower <= upper)
                procs[j][i] = ((0 == i % 2) ? lower++ : upper--);
            else
                procs[j][i] = 0;

    // print assignment results
    for (j = 0; j < num_procs; ++j)
    {
        for (i = 0, sum = 0; i < num_elems_per_proc_ceil; ++i)
        {
            printf("%d ", procs[j][i]);
            sum += procs[j][i];
        }
        printf(" = %d\n", sum);
    }

    return 0;
}

int main()
{
    compute_dist(10, 21, 3);
    return 0;
}

I have used the following implementation, which I mentioned in this report. It works for the cases I've used for testing ((1-15K), (1-30K) and (1-100K) datasets); I am not saying that it will be valid for all cases:
int aFunction(long lower, long upper, int payload_size, int processes)
{
    long result, i, j;
    MPI_Status status;
    long payload[payload_size];
    int m = 0;
    int k = (payload_size/2) + (payload_size%2) + 1;
    int lastAdded1 = 0;
    int lastAdded2 = 0;
    int p = 0;
    int substituted = 0;
    int allowUpdate = 1;
    int s;
    int times = 1;
    int times2 = 0;

    for (i = lower; i <= upper; i = i + payload_size) {
        for (j = i; j < (i + payload_size); j++) {
            if (j <= upper) {
                if (k != 0) {
                    if ((j-i) >= k) {
                        payload[(j-i)] = j - (m);
                        lastAdded2 = payload[(j-i)];
                    } else {
                        payload[(j-i)] = upper - (p*payload_size) - (m++) + (p*payload_size);
                        if (allowUpdate) {
                            lastAdded1 = payload[(j-i)];
                            allowUpdate = 0;
                        }
                    }
                } else {
                    int n;
                    int from = lastAdded1 > lastAdded2 ? lastAdded2 : lastAdded1;
                    from = from + 1;
                    int to = lastAdded1 > lastAdded2 ? lastAdded1 : lastAdded2;
                    int tempFrom = (to-from)/payload_size + ((to-from)%payload_size > 0 ? 1 : 0);
                    for (s = 0; s < tempFrom; s++) {
                        int restIndex = -1;
                        for (n = from; n < from + payload_size; n++) {
                            restIndex = restIndex + 1;
                            payload[restIndex] = '\0';
                            if (n < to && n >= from) {
                                payload[restIndex] = n;
                            } else {
                                payload[restIndex] = '\0';
                            }
                        }
                        from = from + payload_size;
                    }
                    return 0;
                }
            } else {
                payload[(j-i)] = '\0';
            }
        }
        p++;
        k = (k/2) + (k%2) + 1;
        allowUpdate = 1;
    }
    return 0;
}

Related

C - getting prime numbers using this algorithm

I am struggling with a simple problem: I want to get prime numbers.
I will use this algorithm,
and... I finished writing the code like this:
int k = 0, x = 1, n, prim, lim = 1;
int p[100000];
int xCount = 0, limCount = 0, kCount = 0;
p[0] = 2;
scanf("%d", &n);
start = clock();
do
{
    x += 2; xCount++;
    if (sqrt(p[lim]) <= x)
    {
        lim++; limCount++;
    }
    k = 2; prim = true;
    while (prim && k < lim)
    {
        if (x % p[k] == 0)
            prim = false;
        k++; kCount++;
    }
    if (prim == true)
    {
        p[lim] = x;
        printf("prime number : %d\n", p[lim]);
    }
} while (k < n);
I want to check how many times this code repeats (x += 2; lim++; k++;),
so I used the xCount, limCount and kCount variables.
When the input (n) is 10, the results are x: 14, lim: 9, k: 43 - the wrong answer.
The answer should be (14, 3, 13).
Did I write the code incorrectly?
Please tell me where to correct it...
If you want to adapt an algorithm to your needs, it's always a good idea to implement it verbatim first, especially if you have pseudocode that is detailed enough to allow for such a verbatim translation into C code (even more so with Fortran, but I digress).
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

int main (void){
    // type index 1..n
    int index;
    // var
    // x: integer
    int x;
    // i, k, lim: integer
    int i, k, lim;
    // prim: boolean
    bool prim;
    // p: array[index] of integer {p[i] = i'th prime number}
    /*
       We cannot do that directly, we need to know the value of "index" first
    */
    int res;

    res = scanf("%d", &index);
    if (res != 1 || index < 1){
        fprintf(stderr, "Only integral values >= 1, please. Thank you.\n");
        return EXIT_FAILURE;
    }

    /*
       The array from the pseudocode is a one-based array, take care
    */
    int p[index + 1];
    // initialize the whole array with distinguishable values in case of debugging
    for (i = 0; i < index; i++){
        p[i] = -i;
    }

    /*
       Your variables
    */
    int lim_count = 0, k_count = 0;

    // begin
    // p[1] = 2
    p[1] = 2;
    // write(2)
    puts("2");
    // x = 1
    x = 1;
    // lim = 1
    lim = 1;
    // for i:=2 to n do
    for (i = 2; i < index; i++){
        // repeat (until prim)
        do {
            // x = x + 2
            x += 2;
            // if(sqr(p[lim]) <= x) then
            if (p[lim] * p[lim] <= x){
                // lim = lim + 1
                lim++;
                lim_count++;
            }
            // k = 2
            k = 2;
            // prim = true
            prim = true;
            // while (prim and (k < lim)) do
            while (prim && (k < lim)){
                // prim = "x is not divisible by p[k]"
                if ((x % p[k]) == 0){
                    prim = false;
                }
                // k = k + 1
                k++;
                k_count++;
            }
            // (repeat) until prim
        } while (!prim);
        // p[i] := x
        p[i] = x;
        // write(x)
        printf("%d\n", x);
    }
    // end

    printf("x = %d, lim_count = %d, k_count = %d \n", x, lim_count, k_count);
    for (i = 0; i < index; i++){
        printf("%d, ", p[i]);
    }
    putchar('\n');
    return EXIT_SUCCESS;
}
It will print index - 1 primes, starting at 2.
You can easily change it now - for example: print only the primes up to index instead of index - 1 primes.
In your case, the numbers for all six primes up to 13 give
x = 13, lim_count = 2, k_count = 3
which is distinctly different from the result you want.
Your translation looks very sloppy.
for i := 2 to n do begin
must translate to:
for (i = 2; i <= n; i++)
and
repeat
    ...
until prim
must translate to:
do {
    ...
} while (!prim);
The while prim... loop is inside the repeat...until prim loop.
I leave it to you to apply this to your code and to check that all constructs have been properly translated. It doesn't look too difficult to do that correctly.
Note: it looks like the algorithm uses 1-based arrays, whereas C uses 0-based arrays.

Faster algorithm to find how many numbers are not divisible by a given set of numbers

I am trying to solve an online judge problem: http://opc.iarcs.org.in/index.php/problems/LEAFEAT
The problem in short:
If we are given an integer L and a set of N integers s1,s2,s3..sN, we have to find how many numbers there are from 0 to L-1 which are not divisible by any of the 'si's.
For example, if we are given, L = 20 and S = {3,2,5} then there are 6 numbers from 0 to 19 which are not divisible by 3,2 or 5.
L <= 1000000000 and N <= 20.
I used the Inclusion-Exclusion principle to solve this problem:
/* Let 'T' be the number of integers that are divisible by any of the 'si's
   in the given range */
for i in range 1 to N
    for all subsets A of length i
        if i is odd then:
            T += 1 + (L-1)/lcm(all the elements of A)
        else
            T -= 1 + (L-1)/lcm(all the elements of A)
return T
Here is my code to solve this problem
#include <stdio.h>

int N;
long long int L;
int C[30];

typedef struct { int i, key; } subset_e;
subset_e A[30];
int k;

int gcd(int a, int b){
    int t;
    while (b != 0){
        t = a % b;
        a = b;
        b = t;
    }
    return a;
}

long long int lcm(int a, int b){
    return (a * b) / gcd(a, b);
}

long long int getlcm(int n){
    if (n == 1){
        return A[0].key;
    }
    int i;
    long long int rlcm = lcm(A[0].key, A[1].key);
    for (i = 2; i < n; i++){
        rlcm = lcm(rlcm, A[i].key);
    }
    return rlcm;
}

int next_subset(int n){
    if (k == n-1 && A[k].i == N-1){
        if (k == 0){
            return 0;
        }
        k--;
    }
    while (k < n-1 && A[k].i == A[k+1].i-1){
        if (k <= 0){
            return 0;
        }
        k--;
    }
    A[k].key = C[A[k].i+1];
    A[k].i++;
    return 1;
}

int main(){
    int i, j, add;
    long long int sum = 0, g, temp;
    scanf("%lld%d", &L, &N);
    for (i = 0; i < N; i++){
        scanf("%d", &C[i]);
    }
    for (i = 1; i <= N; i++){
        add = i % 2;
        for (j = 0; j < i; j++){
            A[j].key = C[j];
            A[j].i = j;
        }
        temp = getlcm(i);
        g = 1 + (L-1)/temp;
        if (add){
            sum += g;
        } else {
            sum -= g;
        }
        k = i-1;
        while (next_subset(i)){
            temp = getlcm(i);
            g = 1 + (L-1)/temp;
            if (add){
                sum += g;
            } else {
                sum -= g;
            }
        }
    }
    printf("%lld", L - sum);
    return 0;
}
The next_subset(n) generates the next subset of size n in the array A, if there is no subset it returns 0 otherwise it returns 1. It is based on the algorithm described by the accepted answer in this stackoverflow question.
The lcm(a,b) function returns the lcm of a and b.
The get_lcm(n) function returns the lcm of all the elements in A.
It uses the property : LCM(a,b,c) = LCM(LCM(a,b),c)
When I submit the problem to the judge it gives me a 'Time Limit Exceeded'. If we solve this using brute force we get only 50% of the marks.
As there can be up to 2^20 subsets, my algorithm might be slow; hence I need a better algorithm to solve this problem.
EDIT:
After editing my code and changing the function to the Euclidean algorithm, I am getting a wrong answer, but my code runs within the time limit. It gives me a correct answer to the example test but not to any other test cases; here is a link to ideone where I ran my code, the first output is correct but the second is not.
Is my approach to this problem correct? If it is then I have made a mistake in my code, and I'll find it; otherwise can anyone please explain what is wrong?
You could also try changing your lcm function to use the Euclidean algorithm.
int gcd(int a, int b) {
    int t;
    while (b != 0) {
        t = b;
        b = a % t;
        a = t;
    }
    return a;
}

int lcm(int a, int b) {
    return (a * b) / gcd(a, b);
}
At least with Python, the speed differences between the two are pretty large:
>>> %timeit lcm1(103, 2013)
100000 loops, best of 3: 9.21 us per loop
>>> %timeit lcm2(103, 2013)
1000000 loops, best of 3: 1.02 us per loop
Typically, the lowest common multiple of a subset of k of the s_i will exceed L for k much smaller than 20. So you need to stop early.
Probably, just inserting
if (temp >= L) {
    break;
}
after
while(next_subset(i)){
temp = getlcm(i);
will be sufficient.
Also, a shortcut: if there are any 1s among the s_i, all numbers are divisible by 1, so the answer is 0.
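For illustration, a minimal sketch of that shortcut in C, reusing the names C and N from the question's code (contains_one is my own helper name):

int contains_one(const int *s, int n)
{
    /* if any s_i equals 1, every integer is divisible by it,
       so no number in [0, L-1] survives */
    for (int i = 0; i < n; i++)
        if (s[i] == 1)
            return 1;
    return 0;
}

/* in main(), before the subset loops:
   if (contains_one(C, N)) { printf("0"); return 0; } */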
I think the following will be faster:
unsigned gcd(unsigned a, unsigned b) {
    unsigned r;
    while (b) {
        r = a % b;
        a = b;
        b = r;
    }
    return a;
}

unsigned recur(unsigned *arr, unsigned len, unsigned idx, unsigned cumul, unsigned bound) {
    if (idx >= len || bound == 0) {
        return bound;
    }
    unsigned i, g, s = arr[idx], result;
    g = s / gcd(cumul, s);
    result = bound / g;
    for (i = idx+1; i < len; ++i) {
        result -= recur(arr, len, i, cumul*g, bound/g);
    }
    return result;
}

unsigned inex(unsigned *arr, unsigned len, unsigned bound) {
    unsigned i, result = bound;
    for (i = 0; i < len; ++i) {
        result -= recur(arr, len, i, 1, bound);
    }
    return result;
}

call it with
unsigned S[N] = {...};
inex(S, N, L-1);
You need not add the 1 for the 0 anywhere: since 0 is divisible by all numbers, this computes the count of numbers 1 <= k < L which are not divisible by any s_i.
Create an array of flags with L entries. Then mark each touched leaf:
for (each size in list of sizes) {
    length = 0;
    while (length < L) {
        array[length] = TOUCHED;
        length += size;
    }
}
Then find the untouched leaves:
for (length = 0; length < L; length++) {
    if (array[length] != TOUCHED) { /* Untouched leaf! */ }
}
Note that there is no multiplication and no division involved; but you will need up to about 1 GiB of RAM. If RAM is a problem then you can use an array of bits (max. 120 MiB).
This is only a beginning though, as there are repeating patterns that can be copied instead of generated. The first pattern is from 0 to S1*S2, the next is from 0 to S1*S2*S3, the next is from 0 to S1*S2*S3*S4, etc.
Basically, you can set all values touched by S1 and then S2 from 0 to S1*S2; then copy the pattern from 0 to S1*S2 until you get to S1*S2*S3 and set all the S3's between S3 and S1*S2*S3; then copy that pattern until you get to S1*S2*S3*S4 and set all the S4's between S4 and S1*S2*S3*S4 and so on.
Next, if S1*S2*...Sn is smaller than L, you know the pattern will repeat, and you can generate the results for lengths from S1*S2*...Sn to L from the pattern. In this case the size of the array only needs to be S1*S2*...Sn and doesn't need to be L.
Finally, if S1*S2*...Sn is larger than L, then you could generate the pattern for S1*S2*...S(n-1) and use that pattern to create the results from S1*S2*...S(n-1) to S1*S2*...Sn. In this case, if S1*S2*...S(n-1) is smaller than L, then the array doesn't need to be as large as L.
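A minimal sketch of the copying step described above, assuming the flags for the first sizes have already been set on [0, period) (extend_pattern is a name made up for this sketch):

#include <string.h>

/* Duplicate the repeating block [0, period) across the rest of the array,
   doubling the copied region each pass instead of re-marking multiples. */
void extend_pattern(char *array, long period, long total)
{
    long filled = period;
    while (filled < total) {
        long chunk = (filled <= total - filled) ? filled : (total - filled);
        memcpy(array + filled, array, chunk);
        filled += chunk;
    }
}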
I'm afraid your understanding of the problem may not be correct.
You have L. You have a set S of K elements. You must count the sum of the quotients of L / Si. For L = 20, K = 1, S = { 5 }, the answer is simply 16 (20 - 20 / 5). But for K > 1, you must consider the common multiples as well.
Why loop through a list of subsets? It doesn't involve subset calculation, only division and multiples.
You have K distinct integers. Each number could be a prime number. You must consider common multiples. That's all.
EDIT
L = 20 and S = {3,2,5}
Leaves that could be eaten by 3 = 6
Leaves that could be eaten by 2 = 10
Leaves that could be eaten by 5 = 4
Common multiples of S, less than L, not in S = 6, 10, 15
Actually eaten leaves = 20/3 + 20/2 + 20/5 - 20/6 - 20/10 - 20/15 = 14, leaving 20 - 14 = 6 uneaten
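For what it's worth, here is that arithmetic as a minimal C check (the term for lcm(2,3,5) = 30 is omitted because 30 exceeds L, as above):

#include <stdio.h>

int main(void)
{
    int L = 20;
    /* single divisors minus pairwise overlaps (inclusion-exclusion) */
    int eaten = L/3 + L/2 + L/5 - L/6 - L/10 - L/15;
    printf("eaten = %d, uneaten = %d\n", eaten, L - eaten); /* 14 and 6 */
    return 0;
}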
You can keep track of the distance until the next touched leaf for each size. The distance to the next touched leaf will be whichever distance happens to be smallest, and you'd subtract this distance from all the others (and wrap whenever a distance reaches zero).
For example:
int sizes[4] = {2, 5, 7, 9};
int distances[4];
int currentLength = 0;

for (size = 0 to 3) {
    distances[size] = sizes[size];
}
while (currentLength < L) {
    smallest = INT_MAX;
    for (size = 0 to 3) {
        if (distances[size] < smallest) smallest = distances[size];
    }
    for (size = 0 to 3) {
        distances[size] -= smallest;
        if (distances[size] == 0) distances[size] = sizes[size];
    }
    while ((smallest > 1) && (currentLength < L)) {
        currentLength++;
        printf("%d\n", currentLength);
        smallest--;
    }
}
#A.06: you are the one with username linkinmew on opc, right?
Anyway, the answer just requires you to make all possible subsets, and then apply the inclusion-exclusion principle. This will fall well within the time bounds for the given data. For making all possible subsets, you can easily define a recursive function, as sketched below.
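Not the poster's code, just a minimal sketch of such a recursive function combined with inclusion-exclusion (gcd_ll and divisible_count are made-up names; overflow of the running lcm is ignored here, as in the question's code):

long long gcd_ll(long long a, long long b)
{
    while (b) { long long t = a % b; a = b; b = t; }
    return a;
}

/* Pick or skip each element of S; when the end is reached, a non-empty
   subset of odd size adds its multiple count, an even one subtracts it;
   1 + (L-1)/lcm counts the multiples of lcm in [0, L-1]. */
long long divisible_count(const int *S, int N, int idx,
                          int taken, long long curlcm, long long L)
{
    if (idx == N) {
        if (taken == 0)
            return 0;
        long long t = 1 + (L - 1) / curlcm;
        return (taken % 2) ? t : -t;
    }
    long long lc = curlcm / gcd_ll(curlcm, S[idx]) * S[idx];
    return divisible_count(S, N, idx + 1, taken, curlcm, L)  /* skip S[idx] */
         + divisible_count(S, N, idx + 1, taken + 1, lc, L); /* pick S[idx] */
}

/* answer = L - divisible_count(S, N, 0, 0, 1, L) */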
I don't know about programming, but in math there is a single theorem which works on a set that has GCD 1:
L = 20, S = (3,2,5)
(1-1/p)(1-1/q)(1-1/r)... and so on
(1-1/3)(1-1/2)(1-1/5) = (2/3)(1/2)(4/5) = 4/15
4/15 means there are 4 numbers in each block of 15 numbers which are not divisible by any of them; the rest can be counted manually, e.g.
16, 17, 18, 19, 20 (only 17 and 19, meaning there are only 2 numbers that can't be divided by anything in S)
4 + 2 = 6
6/20 means there are only 6 numbers among the first 20 that can't be divided by any s.

Optimization of Brute-Force algorithm or Alternative?

I have a simple (brute-force) recursive solver algorithm that takes a lot of time for bigger values of the OpxCnt variable. For small values of OpxCnt, no problem, it works like a charm. The algorithm gets very slow as the OpxCnt variable gets bigger. This is to be expected, but is there any optimization or a different algorithm?
My final goal is this: I want to read all the True values in the map array by executing some number of read operations that have the minimum operation cost. This is not the same as the minimum number of read operations.
At function completion, there should be no True value unread.
map array is populated by some external function, any member may be 1 or 0.
For example ::
map[4] = 1;
map[8] = 1;
1 read operation having Adr=4,Cnt=5 has the lowest cost (35)
whereas
2 read operations having Adr=4,Cnt=1 & Adr=8,Cnt=1 costs (27+27=54)
#include <string.h>

typedef unsigned int Ui32;

#define cntof(x) (sizeof(x) / sizeof((x)[0]))
#define ZERO(x) do{memset(&(x), 0, sizeof(x));}while(0)

typedef struct _S_MB_oper{
    Ui32 Adr;
    Ui32 Cnt;
}S_MB_oper;

typedef struct _S_MB_code{
    Ui32 OpxCnt;
    S_MB_oper OpxLst[20];
    Ui32 OpxPay;
}S_MB_code;

char map[65536] = {0};

static int opx_ListOkey(S_MB_code *px_kod, char *pi_map)
{
    int cost = 0;
    char map[65536];
    memcpy(map, pi_map, sizeof(map));
    for(Ui32 o = 0; o < px_kod->OpxCnt; o++)
    {
        for(Ui32 i = 0; i < px_kod->OpxLst[o].Cnt; i++)
        {
            Ui32 adr = px_kod->OpxLst[o].Adr + i;
            // ...
            if(adr < cntof(map)){map[adr] = 0x0;}
        }
    }
    for(Ui32 i = 0; i < cntof(map); i++)
    {
        if(map[i] > 0x0){return -1;}
    }
    // calculate COST...
    for(Ui32 o = 0; o < px_kod->OpxCnt; o++)
    {
        cost += 12;
        cost += 13;
        cost += (2 * px_kod->OpxLst[o].Cnt);
    }
    px_kod->OpxPay = (Ui32)cost; return cost;
}

static int opx_FindNext(char *map, int pi_idx)
{
    int i;
    if(pi_idx < 0){pi_idx = 0;}
    for(i = pi_idx; i < 65536; i++)
    {
        if(map[i] > 0x0){return i;}
    }
    return -1;
}

static int opx_FindZero(char *map, int pi_idx)
{
    int i;
    if(pi_idx < 0){pi_idx = 0;}
    for(i = pi_idx; i < 65536; i++)
    {
        if(map[i] < 0x1){return i;}
    }
    return -1;
}

static int opx_Resolver(S_MB_code *po_bst, S_MB_code *px_wrk, char *pi_map, Ui32 *px_idx, int _min, int _max)
{
    int pay, kmax, kmin = 1;
    if(*px_idx >= px_wrk->OpxCnt)
    {
        return opx_ListOkey(px_wrk, pi_map);
    }
    _min = opx_FindNext(pi_map, _min);
    // ...
    if(_min < 0){return -1;}
    kmax = (_max - _min) + 1;
    // must be less than 127 !
    if(kmax > 127){kmax = 127;}
    // is this recursion the last one ?
    if(*px_idx >= (px_wrk->OpxCnt - 1))
    {
        kmin = kmax;
    }
    else
    {
        int zero = opx_FindZero(pi_map, _min);
        // ...
        if(zero > 0)
        {
            kmin = zero - _min;
            // enforce kmax limit !?
            if(kmin > kmax){kmin = kmax;}
        }
    }
    for(int _cnt = kmin; _cnt <= kmax; _cnt++)
    {
        px_wrk->OpxLst[*px_idx].Adr = (Ui32)_min;
        px_wrk->OpxLst[*px_idx].Cnt = (Ui32)_cnt;
        (*px_idx)++;
        pay = opx_Resolver(po_bst, px_wrk, pi_map, px_idx, (_min + _cnt), _max);
        (*px_idx)--;
        if(pay > 0)
        {
            if((Ui32)pay < po_bst->OpxPay)
            {
                memcpy(po_bst, px_wrk, sizeof(*po_bst));
            }
        }
    }
    return (int)po_bst->OpxPay;
}

int main()
{
    int _max = -1, _cnt = 0;
    S_MB_code best = {0};
    S_MB_code work = {0};
    // SOME TEST DATA...
    map[ 4] = 1;
    map[ 8] = 1;
    /*
    map[64] = 1;
    map[72] = 1;
    map[80] = 1;
    map[88] = 1;
    map[96] = 1;
    */
    // SOME TEST DATA...
    for(int i = 0; i < cntof(map); i++)
    {
        if(map[i] > 0)
        {
            _max = i; _cnt++;
        }
    }
    // num of Opx can be as much as num of individual bit(s).
    if(_cnt > cntof(work.OpxLst)){_cnt = cntof(work.OpxLst);}
    best.OpxPay = 1000000000L; // invalid great number...
    for(int opx_cnt = 1; opx_cnt <= _cnt; opx_cnt++)
    {
        int rv;
        Ui32 x = 0;
        ZERO(work); work.OpxCnt = (Ui32)opx_cnt;
        rv = opx_Resolver(&best, &work, map, &x, -42, _max);
    }
    return 0;
}
You can use dynamic programming to calculate the lowest cost that covers the first i true values in map[]. Call this f(i). As I'll explain, you can calculate f(i) by looking at all f(j) for j < i, so this will take time quadratic in the number of true values -- much better than exponential. The final answer you're looking for will be f(n), where n is the number of true values in map[].
A first step is to preprocess map[] into a list of the positions of true values. (It's possible to do DP on the raw map[] array, but this will be slower if true values are sparse, and cannot be faster.)
int pos[65537]; // 1-based positions of true values; every position *could* be true
int nTrue = 0;

void getPosList() {
    for (int i = 0; i < 65536; ++i) {
        if (map[i]) pos[++nTrue] = i; // fill pos[1..nTrue] to match the 1-based f(i)
    }
}
When we're looking at the subproblem on just the first i true values, what we know is that the ith true value must be covered by a read that ends at it. This read could start at any of the true values j <= i; we don't know which, so we have to test all i of them and pick the best. The key property (optimal substructure) that enables DP here is that in any optimal solution to the i-sized subproblem, if the read that covers the ith true value starts at the jth true value, then the preceding j-1 true values must be covered by an optimal solution to the (j-1)-sized subproblem.
So: f(i) = min over all 0 <= j < i of (f(j) + score(pos(j+1), pos(i))). pos(k) refers to the position of the kth true value in map[], and score(x, y) is the score of a read from position x to position y, inclusive.
int scores[65537]; // We effectively start indexing at 1
scores[0] = 0;     // Covering the first 0 true values requires 0 cost

// Calculate the minimum score that could allow the first i > 0 true values
// to be read, and store it in scores[i].
// We can assume that all lower values have already been calculated.
void calcF(int i) {
    int bestStart, bestScore = INT_MAX;
    for (int j = 0; j < i; ++j) {    // Always executes at least once
        int attemptScore = scores[j] + score(pos[j + 1], pos[i]);
        if (attemptScore < bestScore) {
            bestStart = j + 1;
            bestScore = attemptScore;
        }
    }
    scores[i] = bestScore;
}

int score(int i, int j) {
    return 25 + 2 * (j + 1 - i);
}

int main(int argc, char **argv) {
    // Set up map[] however you want
    getPosList();
    for (int i = 1; i <= nTrue; ++i) {
        calcF(i);
    }
    printf("Optimal solution has cost %d.\n", scores[nTrue]);
    return 0;
}
Extracting a Solution from Scores
Using this scheme, you can calculate the score of an optimal solution: it's simply f(n), where n is the number of true values in map[]. In order to actually construct the solution, you need to read back through the table of f() scores to infer which choice was made:
void printSolution() {
    int i = nTrue;
    while (i) {
        for (int j = 0; j < i; ++j) {
            if (scores[i] == scores[j] + score(pos[j + 1], pos[i])) {
                // We know that a read can be made from pos[j + 1] to pos[i] in
                // an optimal solution, so let's make it.
                printf("Read from %d to %d for cost %d.\n",
                       pos[j + 1], pos[i], score(pos[j + 1], pos[i]));
                i = j;
                break;
            }
        }
    }
}
There may be several possible choices, but all of them will produce optimal solutions.
Further Speedups
The solution above will work for an arbitrary scoring function. Because your scoring function has a simple structure, it may be that even faster algorithms can be developed.
For example, we can prove that there is a gap width above which it is always beneficial to break a single read into two reads. Suppose we have a read from position x-a to x, and another read from position y to y+b, with y > x. The combined costs of these two separate reads are 25 + 2 * (a + 1) + 25 + 2 * (b + 1) = 54 + 2 * (a + b). A single read stretching from x-a to y+b would cost 25 + 2 * (y + b - x + a + 1) = 27 + 2 * (a + b) + 2 * (y - x). Therefore the single read costs 27 - 2 * (y - x) less. If y - x > 13, this difference goes below zero: in other words, it can never be optimal to include a single read that spans a gap of more than 12.
To make use of this property, inside calcF(), final reads could be tried in decreasing order of start-position (i.e. in increasing order of width), and the inner loop stopped as soon as any gap width exceeds 12. Because that read and all subsequent wider reads tried would contain this too-large gap and therefore be suboptimal, they need not be tried.
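A minimal sketch of that cut-off inside calcF(), with the same names as above (only the loop direction and the early break are new):

void calcF(int i) {
    int bestScore = INT_MAX;
    // Try final reads from narrowest to widest (decreasing start index j).
    for (int j = i - 1; j >= 0; --j) {
        // Starting the read at pos[j + 1] makes it span the gap before
        // pos[j + 2]; once that gap exceeds 12, this read and every wider
        // one are suboptimal, so stop.
        if (j + 2 <= i && pos[j + 2] - pos[j + 1] - 1 > 12)
            break;
        int attemptScore = scores[j] + score(pos[j + 1], pos[i]);
        if (attemptScore < bestScore)
            bestScore = attemptScore;
    }
    scores[i] = bestScore;
}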

Algorithm to find Lucky Numbers

I came across this question. A number is called lucky if the sum of its digits, as well as the sum of the squares of its digits, is a prime number. How many numbers between A and B are lucky? 1 <= A <= B <= 10^18. I tried this:
First I generated all possible primes between 1 and the largest number that could result from summing the squares of the digits (81 * 18 = 1458).
I read in A and B and find the maximum number that could be generated by summing the digits; if B is a 2-digit number, the max number is 18, generated by 99.
For each prime number between 1 and the max number, I applied an integer partition algorithm.
For each possible partition I checked whether the sum of the squares of its digits forms a prime. If so, the possible permutations of that partition are generated, and those that lie within the range are lucky numbers.
This is the implementation:
#include <stdio.h>
#include <malloc.h>
#include <math.h>
#include <stdlib.h>
#include <string.h>

long long luckynumbers;
int primelist[1500];

int checklucky(long long possible, long long a, long long b){
    int prime = 0;
    while (possible > 0){
        prime += pow((possible % 10), (float)2);
        possible /= 10;
    }
    if (primelist[prime]) return 1;
    else return 0;
}

long long getmax(int numdigits){
    if (numdigits == 0) return 1;
    long long maxnum = 10;
    while (numdigits > 1){
        maxnum = maxnum * 10;
        numdigits -= 1;
    }
    return maxnum;
}

void permuteandcheck(char *topermute, int d, long long a, long long b, int digits){
    if (d == strlen(topermute)){
        long long possible = atoll(topermute);
        if (possible >= getmax(strlen(topermute) - 1)){ // to skip the case of getting already read numbers like 21 and 021 (permuted - 210)
            if (possible >= a && possible <= b){
                luckynumbers++;
            }
        }
    }
    else {
        char lastswap = '\0';
        int i;
        char temp;
        for (i = d; i < strlen(topermute); i++){
            if (lastswap == topermute[i])
                continue;
            else
                lastswap = topermute[i];
            temp = topermute[d];
            topermute[d] = topermute[i];
            topermute[i] = temp;
            permuteandcheck(topermute, d + 1, a, b, digits);
            temp = topermute[d];
            topermute[d] = topermute[i];
            topermute[i] = temp;
        }
    }
}

void findlucky(long long possible, long long a, long long b, int digits){
    int i = 0;
    if (checklucky(possible, a, b)){
        char topermute[18];
        sprintf(topermute, "%lld", possible);
        permuteandcheck(topermute, 0, a, b, digits);
    }
}

void partitiongenerator(int k, int n, int numdigits, long long possible, long long a, long long b, int digits){
    if (k > n || numdigits > digits - 1 || k > 9) return;
    if (k == n){
        possible += (k * getmax(numdigits));
        findlucky(possible, a, b, digits);
        return;
    }
    partitiongenerator(k, n - k, numdigits + 1, (possible + k * getmax(numdigits)), a, b, digits);
    partitiongenerator(k + 1, n, numdigits, possible, a, b, digits);
}

void calcluckynumbers(long long a, long long b){
    int i;
    int numdigits = 0;
    long long temp = b;
    while (temp > 0){
        numdigits++;
        temp /= 10;
    }
    long long maxnum = getmax(numdigits) - 1;
    int maxprime = 0, minprime = 0;
    temp = maxnum;
    while (temp > 0){
        maxprime += (temp % 10);
        temp /= 10;
    }
    int start = 2;
    for (; start <= maxprime; start++){
        if (primelist[start]) {
            partitiongenerator(0, start, 0, 0, a, b, numdigits);
        }
    }
}

void generateprime(){
    int i = 0;
    for (i = 0; i < 1500; i++)
        primelist[i] = 1;
    primelist[0] = 0;
    primelist[1] = 0;
    int candidate = 2;
    int topCandidate = 1499;
    int thisFactor = 2;
    while (thisFactor * thisFactor <= topCandidate){
        int mark = thisFactor + thisFactor;
        while (mark <= topCandidate){
            *(primelist + mark) = 0;
            mark += thisFactor;
        }
        thisFactor++;
        while (thisFactor <= topCandidate && *(primelist + thisFactor) == 0) thisFactor++;
    }
}

int main(){
    char input[100];
    int cases = 0, casedone = 0;
    long long a, b;
    generateprime();
    fscanf(stdin, "%d", &cases);
    while (casedone < cases){
        luckynumbers = 0;
        fscanf(stdin, "%lld %lld", &a, &b);
        int i = 0;
        calcluckynumbers(a, b);
        casedone++;
    }
}
The algorithm is too slow. I think the answer can be found based on the properties of numbers. Kindly share your thoughts. Thank you.
Excellent solution OleGG, but your code is not optimized. I have made the following changes to your code:
It does not need to go through 9*9*i for k in the count_lucky function, because for 10000 cases it would run that many times; instead I have reduced this range through the start and end arrays.
I have used the ans array to store intermediate results. It might not look like much, but over 10000 cases this is the major factor that reduces the time.
I have tested this code and it passed all the test cases. Here is the modified code:
#include <stdio.h>

const int MAX_LENGTH = 18;
const int MAX_SUM = 162;
const int MAX_SQUARE_SUM = 1458;

int primes[1460];
unsigned long long dyn_table[20][164][1460];
// changed here.......1
unsigned long long ans[19][10][164][1460]; // about 45 MB
int start[19][163];
int end[19][163];
// upto here.........1

void gen_primes() {
    for (int i = 0; i <= MAX_SQUARE_SUM; ++i) {
        primes[i] = 1;
    }
    primes[0] = primes[1] = 0;
    for (int i = 2; i * i <= MAX_SQUARE_SUM; ++i) {
        if (!primes[i]) {
            continue;
        }
        for (int j = 2; i * j <= MAX_SQUARE_SUM; ++j) {
            primes[i*j] = 0;
        }
    }
}

void gen_table() {
    for (int i = 0; i <= MAX_LENGTH; ++i) {
        for (int j = 0; j <= MAX_SUM; ++j) {
            for (int k = 0; k <= MAX_SQUARE_SUM; ++k) {
                dyn_table[i][j][k] = 0;
            }
        }
    }
    dyn_table[0][0][0] = 1;
    for (int i = 0; i < MAX_LENGTH; ++i) {
        for (int j = 0; j <= 9 * i; ++j) {
            for (int k = 0; k <= 9 * 9 * i; ++k) {
                for (int l = 0; l < 10; ++l) {
                    dyn_table[i + 1][j + l][k + l*l] += dyn_table[i][j][k];
                }
            }
        }
    }
}

unsigned long long count_lucky (unsigned long long maxp) {
    unsigned long long result = 0;
    int len = 0;
    int split_max[MAX_LENGTH];
    while (maxp) {
        split_max[len] = maxp % 10;
        maxp /= 10;
        ++len;
    }
    int sum = 0;
    int sq_sum = 0;
    unsigned long long step_result;
    unsigned long long step_;
    for (int i = len-1; i >= 0; --i) {
        step_result = 0;
        int x1 = 9*i;
        for (int l = 0; l < split_max[i]; ++l) {
            // changed here........2
            step_ = 0;
            if (ans[i][l][sum][sq_sum] != 0)
            {
                step_result += ans[i][l][sum][sq_sum];
                continue;
            }
            int y = l + sum;
            int x = l*l + sq_sum;
            for (int j = 0; j <= x1; ++j) {
                if (primes[j + y])
                    for (int k = start[i][j]; k <= end[i][j]; ++k) {
                        if (primes[k + x]) {
                            step_result += dyn_table[i][j][k];
                            step_ += dyn_table[i][j][k];
                        }
                    }
            }
            ans[i][l][sum][sq_sum] = step_;
            // upto here...............2
        }
        result += step_result;
        sum += split_max[i];
        sq_sum += split_max[i] * split_max[i];
    }
    if (primes[sum] && primes[sq_sum]) {
        ++result;
    }
    return result;
}

int main(int argc, char** argv) {
    gen_primes();
    gen_table();
    // changed here..........3
    for (int i = 0; i <= 18; i++)
        for (int j = 0; j <= 162; j++)
        {
            for (int k = 0; k <= 1458; k++)
                if (dyn_table[i][j][k] != 0ll)
                {
                    start[i][j] = k;
                    break;
                }
            for (int k = 1459; k >= 0; k--)
                if (dyn_table[i][j][k] != 0ll)
                {
                    end[i][j] = k;
                    break;
                }
        }
    // upto here..........3
    int cases = 0;
    scanf("%d", &cases);
    for (int i = 0; i < cases; ++i) {
        unsigned long long a, b;
        scanf("%lld %lld", &a, &b);
        // changed here......4
        if (b == 1000000000000000000ll)
            b--;
        // upto here.........4
        printf("%lld\n", count_lucky(b) - count_lucky(a-1));
    }
    return 0;
}
Explanation:
gen_primes() and gen_table() are pretty much self-explanatory.
count_lucky() works as follows:
Split the number into split_max[], storing a single digit for the ones, tens, hundreds, etc. positions.
The idea is: suppose split_max[2] = 7, so we need to calculate the result for
1 in the hundreds position and all 00 to 99,
2 in the hundreds position and all 00 to 99,
.
.
7 in the hundreds position and all 00 to 99.
This is actually done (in the l loop) in terms of the sum of digits and the sum of squares of digits, which have been precalculated.
For this example: the sum will vary from 0 to 9*i and the sum of squares will vary from 0 to 9*9*i... this is done in the j and k loops.
This is repeated for all lengths in the i loop.
This was the idea of OleGG.
For optimization the following is considered:
It's useless to run the sum of squares from 0 to 9*9*i, as for a particular sum of digits it would not span the full range. E.g. if i = 3 and the sum equals 5, then the sum of squares would not vary from 0 to 9*9*3. This part is stored in the start[] and end[] arrays using precomputed values.
The value for a particular number of digits, a particular digit at the most significant position, a particular sum and a particular sum of squares is stored for memoization. It's large, but still only about 45 MB.
I believe this could be further optimized.
You should use DP for this task. Here is my solution:
#include <stdio.h>

const int MAX_LENGTH = 18;
const int MAX_SUM = 162;
const int MAX_SQUARE_SUM = 1458;

int primes[1459];
long long dyn_table[19][163][1459];

void gen_primes() {
    for (int i = 0; i <= MAX_SQUARE_SUM; ++i) {
        primes[i] = 1;
    }
    primes[0] = primes[1] = 0;
    for (int i = 2; i * i <= MAX_SQUARE_SUM; ++i) {
        if (!primes[i]) {
            continue;
        }
        for (int j = 2; i * j <= MAX_SQUARE_SUM; ++j) {
            primes[i*j] = 0;
        }
    }
}

void gen_table() {
    for (int i = 0; i <= MAX_LENGTH; ++i) {
        for (int j = 0; j <= MAX_SUM; ++j) {
            for (int k = 0; k <= MAX_SQUARE_SUM; ++k) {
                dyn_table[i][j][k] = 0;
            }
        }
    }
    dyn_table[0][0][0] = 1;
    for (int i = 0; i < MAX_LENGTH; ++i) {
        for (int j = 0; j <= 9 * i; ++j) {
            for (int k = 0; k <= 9 * 9 * i; ++k) {
                for (int l = 0; l < 10; ++l) {
                    dyn_table[i + 1][j + l][k + l*l] += dyn_table[i][j][k];
                }
            }
        }
    }
}

long long count_lucky (long long max) {
    long long result = 0;
    int len = 0;
    int split_max[MAX_LENGTH];
    while (max) {
        split_max[len] = max % 10;
        max /= 10;
        ++len;
    }
    int sum = 0;
    int sq_sum = 0;
    for (int i = len-1; i >= 0; --i) {
        long long step_result = 0;
        for (int l = 0; l < split_max[i]; ++l) {
            for (int j = 0; j <= 9 * i; ++j) {
                for (int k = 0; k <= 9 * 9 * i; ++k) {
                    if (primes[j + l + sum] && primes[k + l*l + sq_sum]) {
                        step_result += dyn_table[i][j][k];
                    }
                }
            }
        }
        result += step_result;
        sum += split_max[i];
        sq_sum += split_max[i] * split_max[i];
    }
    if (primes[sum] && primes[sq_sum]) {
        ++result;
    }
    return result;
}

int main(int argc, char** argv) {
    gen_primes();
    gen_table();
    int cases = 0;
    scanf("%d", &cases);
    for (int i = 0; i < cases; ++i) {
        long long a, b;
        scanf("%lld %lld", &a, &b);
        printf("%lld\n", count_lucky(b) - count_lucky(a-1));
    }
    return 0;
}
Brief explanation:
I'm calculating all primes up to 9 * 9 * MAX_LENGTH using the sieve of Eratosthenes;
Later, using DP, I'm building the table dyn_table, where a value X in dyn_table[i][j][k] means that there are exactly X numbers of length i with sum of digits equal to j and sum of their squares equal to k.
Then we can easily count the amount of lucky numbers from 1 to 999..999 (len nines). For this we just sum up all dyn_table[len][j][k] where both j and k are primes.
To calculate the amount of lucky numbers from 1 to an arbitrary X, we split the interval from 1 to X into intervals with lengths equal to powers of 10 (see the count_lucky function).
And our last step is to subtract count_lucky(a-1) (because we are including a in our interval) from count_lucky(b).
That's all. Precalculation works in O(log(MAX_NUMBER)^3); each step also has this complexity.
I've tested my solution against a linear straightforward one and the results were equal.
Instead of enumerating the space of numbers, enumerate the different "signatures" of numbers that are lucky, and then print all the different combinations of those.
This can be done with trivial backtracking:
#define _GNU_SOURCE
#include <assert.h>
#include <limits.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define bitsizeof(e) (CHAR_BIT * sizeof(e))
#define countof(e) (sizeof(e) / sizeof((e)[0]))
#define BITMASK_NTH(type_t, n) ( ((type_t)1) << ((n) & (bitsizeof(type_t) - 1)))
#define OP_BIT(bits, n, shift, op) \
    ((bits)[(unsigned)(n) / (shift)] op BITMASK_NTH(typeof(*(bits)), n))
#define TST_BIT(bits, n) OP_BIT(bits, n, bitsizeof(*(bits)), & )
#define SET_BIT(bits, n) (void)OP_BIT(bits, n, bitsizeof(*(bits)), |= )

/* fast is_prime {{{ */
static uint32_t primes_below_1M[(1U << 20) / bitsizeof(uint32_t)];

static void compute_primes_below_1M(void)
{
    SET_BIT(primes_below_1M, 0);
    SET_BIT(primes_below_1M, 1);
    for (uint32_t i = 2; i < bitsizeof(primes_below_1M); i++) {
        if (TST_BIT(primes_below_1M, i))
            continue;
        for (uint32_t j = i * 2; j < bitsizeof(primes_below_1M); j += i) {
            SET_BIT(primes_below_1M, j);
        }
    }
}

static bool is_prime(uint64_t n)
{
    assert (n < bitsizeof(primes_below_1M));
    return !TST_BIT(primes_below_1M, n);
}
/* }}} */

static uint32_t prime_checks, found;
static char sig[10];
static uint32_t sum, square_sum;

static void backtrack(int startdigit, int ndigits, int maxdigit)
{
    ndigits++;
    for (int i = startdigit; i <= maxdigit; i++) {
        sig[i]++;
        sum += i;
        square_sum += i * i;
        prime_checks++;
        if (is_prime(sum) && is_prime(square_sum)) {
            found++;
        }
        if (ndigits < 18)
            backtrack(0, ndigits, i);
        sig[i]--;
        sum -= i;
        square_sum -= i * i;
    }
}

int main(void)
{
    compute_primes_below_1M();
    backtrack(1, 0, 9);
    printf("did %d signature checks, found %d lucky signatures\n",
           prime_checks, found);
    return 0;
}
When I run it, it does:
$ time ./lucky
did 13123091 signature checks, found 933553 lucky signatures
./lucky 0.20s user 0.00s system 99% cpu 0.201 total
Instead of found++ you want to generate all the distinct permutations of digits that you can build with that signature. I also precompute the first 1M of primes beforehand.
I've not checked if the code is 100% correct; you may have to debug it a bit. But the rough idea is here, and I'm able to generate all the lucky signatures in under 0.2s (even with any bugs fixed it should not be more than twice as slow).
And of course you want to generate only the permutations that lie within A..B. You may want to skip generating partitions that have more digits than B or fewer than A, too. Anyway, you can improve on my general idea from here.
(Note: The blurb at the start is because I cut and pasted code I wrote for Project Euler, hence the very fast is_prime that works for n <= 1M ;) )
For those who weren't aware already, this is a problem on the website InterviewStreet.com (and in my opinion, the most difficult one there). My approach started off similar to (and was inspired by) OleGG's below. However, after creating the first [19][163][1459] table that he did (which I'll call table1), I went in a slightly different direction. I created a second table of ragged length [19][x][3] (table2), where x is the number of unique sum pairs for the corresponding number of digits. And for the third dimension of the table, with length 3, the 1st element is the quantity of unique "sum pairs" with the sum and squareSum values held by the 2nd and 3rd elements.
For example:
// pseudocode
table2[1] = new long[10][3]
table2[1] = {{1, 0, 0}, {1, 1, 1}, {1, 2, 4},
             {1, 3, 9}, {1, 4, 16}, {1, 5, 25},
             {1, 6, 36}, {1, 7, 49}, {1, 8, 64}, {1, 9, 81}}
table2[2] = new long[55][3]
table2[3] = new long[204][3]
table2[4] = new long[518][3]
.
.
.
table2[17] = new long[15552][3]
table2[18] = new long[17547][3]
The numbers I have for the second dimension length of the array (10, 55, 204, 518, ..., 15552, 17547) can be verified by querying table1, and in a similar fashion table2 can be populated. Now using table2 we can solve large "lucky" queries much faster than OleGG's posted method, although still employing a similar "split" process as he did. For example, if you need to find lucky(00000-54321) (i.e. the lucky numbers between 0 and 54321), it breaks down to the sum of the following 5 lines:
lucky(00000-54321) = {
    lucky(00000-49999) +
    lucky(50000-53999) +
    lucky(54000-54299) +
    lucky(54300-54319) +
    lucky(54320-54321)
}
Which breaks down further:
lucky(00000-49999) = {
    lucky(00000-09999) +
    lucky(10000-19999) +
    lucky(20000-29999) +
    lucky(30000-39999) +
    lucky(40000-49999)
}
.
.
lucky(54000-54299) = {
    lucky(54000-54099) +
    lucky(54100-54199) +
    lucky(54200-54299)
}
.
.
.
etc
Each of these values can be obtained easily by querying table2. For example, lucky(40000-49999) is found by adding 4 and 16 to the 2nd and 3rd elements of the third dimension of table2:
sum = 0
for (i = 0; i < 518; i++)
    if (isPrime[table2[4][i][1] + 4] && isPrime[table2[4][i][2] + 4*4])
        sum += table2[4][i][0]
return sum
Or for lucky(54200-54299):
sum = 0
for (i = 0; i < 55; i++)
    if (isPrime[table2[2][i][1] + (5+4+2)]
        && isPrime[table2[2][i][2] + (5*5+4*4+2*2)])
        sum += table2[2][i][0]
return sum
Now, OleGG's solution performed significantly faster than anything else I'd tried up until then, but with my modifications described above, it performs even better than before (by a factor of roughly 100x for a large test set). However, it is still not nearly fast enough for the blind test cases given on InterviewStreet. Through some clever hack I was able to determine I am currently running about 20x too slow to complete their test set in the allotted time. However, I can find no further optimizations. The biggest time sink here is obviously iterating through the second dimension of table2, and the only way to avoid that would be to tabulate the results of those sums. However, there are too many possibilities to compute them all in the time given (5 seconds) or to store them all in the space given (256MB). For example, the lucky(54200-54299) loop above could be pre-computed and stored as a single value, but if it was, we'd also need to pre-compute lucky(123000200-123000299) and lucky(99999200-99999299), etc etc. I've done the math and it is way too many calculations to pre-compute.
I have just solved this problem.
It's just a dynamic programming problem. Take DP[n](sum-square_sum) as the DP function, where DP[n](sum-square_sum) is the count of all numbers with at most n digits whose digit sum and digit square sum are respectively sum and square_sum. For example:
DP[1](1-1) = 1 # only 1 satisfies the condition
DP[2](1-1) = 2 # both 1 and 10 satisfy the condition
DP[3](1-1) = 3 # 1 10 100
DP[3](2-4) = 3 # 11 110 101
Since we can easily figure out the first DP state DP[1][..][..], it is:
(0-0) => 1 (1-1) => 1 (2-4) => 1 (3-9) => 1 (4-16) => 1
(5-25) => 1 (6-36) => 1 (7-49) => 1 (8-64) => 1 (9-81) => 1
then we can deduce DP[2] from DP[1], and then DP[3] ... DP[18].
The deduction above is based on the fact that every time n increases by 1, for example from DP[1] to DP[2], we get a new digit (0..9), and the set of (sum, square_sum) pairs (i.e. DP[n]) must be updated.
Finally, we can traverse the DP[18] set and count the numbers that are lucky.
Well, what about the time and space complexity of the algorithm above?
As we know sum <= 18*9 = 162 and square_sum <= 18*9*9 = 1458, the set of (sum, square_sum) pairs (i.e. DP[n]) is very small, less than 162*1458 = 236196; in fact it's much smaller than 236196.
The fact is: my Ruby program counting all the lucky numbers between 0 and 10^18 finishes in less than 1s.
ruby lucky_numbers.rb 0.55s user 0.00s system 99% cpu 0.556 total
I tested my program by writing a test function using a brute force algorithm, and it's right for numbers less than 10^7.
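As a concrete illustration of the final traversal (not the poster's Ruby; this sketch reuses primes[] and dyn_table[][][] as defined in OleGG's answer above):

/* Count numbers of up to 18 digits (i.e. 0..10^18-1) whose digit sum and
   squared-digit sum are both prime. */
unsigned long long traverse_dp18(void)
{
    unsigned long long total = 0;
    for (int sum = 2; sum <= 162; ++sum)      /* max digit sum: 18*9 */
        for (int sq = 2; sq <= 1458; ++sq)    /* max square sum: 18*81 */
            if (primes[sum] && primes[sq])
                total += dyn_table[18][sum][sq];
    return total;
}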
Based on the requirements, you can do it in different ways. If I were doing it, I would calculate the prime numbers using the 'Sieve of Eratosthenes' in the required range (A to (9*2)*B.length), cache them (again, depending on your setup, you can use an in-memory or disk cache) and use them for the next run.
I just coded a fast solution (Java), as below (NOTE: integer overflow is not checked. It's just a quick example; also, my code is not optimized):
import java.util.ArrayList;
import java.util.Arrays;

public class LuckyNumbers {

    public static void main(String[] args) {
        int a = 0, b = 1000;
        LuckyNumbers luckyNums = new LuckyNumbers();
        ArrayList<Integer> luckyList = luckyNums.findLuckyNums(a, b);
        System.out.println(luckyList);
    }

    private ArrayList<Integer> findLuckyNums(int a, int b) {
        ArrayList<Integer> luckyList = new ArrayList<Integer>();
        int size = ("" + b).length();
        int maxNum = 81 * 4; // 9*2*b.length() - 9 is used, coz it's the max digit
        System.out.println("Size : " + size + " MaxNum : " + maxNum);
        boolean[] primeArray = sieve(maxNum);
        for (int i = a; i <= b; i++) {
            String num = "" + i;
            int sumDigits = 0;
            int sumSquareDigits = 0;
            for (int j = 0; j < num.length(); j++) {
                int digit = Integer.valueOf("" + num.charAt(j));
                sumDigits += digit;
                sumSquareDigits += Math.pow(digit, 2);
            }
            if (primeArray[sumDigits] && primeArray[sumSquareDigits]) {
                luckyList.add(i);
            }
        }
        return luckyList;
    }

    private boolean[] sieve(int n) {
        boolean[] prime = new boolean[n + 1];
        Arrays.fill(prime, true);
        prime[0] = false;
        prime[1] = false;
        int m = (int) Math.sqrt(n);
        for (int i = 2; i <= m; i++) {
            if (prime[i]) {
                for (int k = i * i; k <= n; k += i) {
                    prime[k] = false;
                }
            }
        }
        return prime;
    }
}
And the output was:
[11, 12, 14, 16, 21, 23, 25, 32, 38, 41, 49, 52, 56, 58, 61, 65, 83, 85, 94, 101, 102, 104, 106, 110, 111, 113, 119, 120, 131, 133, 137, 140, 146, 160, 164, 166, 173, 179, 191, 197, 199, 201, 203, 205, 210, 223, 229, 230, 232, 250, 289, 292, 298, 302, 308, 311, 313, 317, 320, 322, 331, 335, 337, 344, 346, 353, 355, 364, 368, 371, 373, 377, 379, 380, 386, 388, 397, 401, 409, 410, 416, 434, 436, 443, 449, 461, 463, 467, 476, 490, 494, 502, 506, 508, 520, 533, 535, 553, 559, 560, 566, 580, 595, 601, 605, 610, 614, 616, 634, 638, 641, 643, 647, 650, 656, 661, 665, 674, 683, 689, 698, 713, 719, 731, 733, 737, 739, 746, 764, 773, 779, 791, 793, 797, 803, 805, 829, 830, 836, 838, 850, 863, 869, 883, 892, 896, 904, 911, 917, 919, 922, 928, 937, 940, 944, 955, 968, 971, 973, 977, 982, 986, 991]
I haven't carefully analyzed your current solution, but this might improve on it:
Since the order of digits doesn't matter, you should go through all possible combinations of digits 0-9 of lengths 1 to 18, keeping track of the sum of the digits and of their squares, adding one digit at a time and using the result of the previous calculation.
So if you know that for 12 the sum of digits is 3 and the sum of squares is 5, look at the numbers
120, 121, 122, ... etc. and calculate the sums for them trivially from the 3 and 5 for 12. A sketch of this follows below.
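A minimal sketch of that enumeration (not the poster's code; isPrime[] and handle_signature() are assumed/hypothetical names):

/* Walk non-decreasing digit combinations of length <= 18, carrying the
   running digit sum and square sum so each extension costs O(1). */
void combos(int minDigit, int len, int sum, int sqSum)
{
    if (len > 0 && isPrime[sum] && isPrime[sqSum])
        handle_signature(); /* count/permute the numbers for this signature */
    if (len == 18)
        return;
    for (int d = minDigit; d <= 9; ++d)
        combos(d, len + 1, sum + d, sqSum + d * d); /* reuse previous sums */
}

/* start with combos(0, 0, 0, 0); leading zeros give the shorter lengths */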
Sometimes the fastest solution is incredibly simple:
uint8_t precomputedBitField[] = {
    ...
};

bool is_lucky(int number) {
    return precomputedBitField[number >> 3] & (1 << (number & 7));
}
Just modify your existing code to generate "precomputedBitField".
If you're worried about size, to cover all numbers from 0 to 999 it will only cost you 125 bytes, so this method will probably be smaller (and a lot faster) than any other alternative.
I was trying to come up with a solution using Pierre's enumeration method, but never came up with a sufficiently fast way to count the permutations. OleGG's counting method is very clever, and pirate's optimizations are necessary to make it fast enough. I came up with one minor improvement, and one workaround to a serious problem.
First, the improvement: you don't have to step through all the sums and squaresums one by one checking for primes in pirate's j and k loops. You have (or can easily generate) a list of primes. If you use the other variables to figure out which primes are in range, you can just step through the list of suitable primes for sum and squaresum. An array of primes and a lookup table that gives the index of the first prime >= a given number are helpful, as sketched below. However, this is probably only a fairly minor improvement.
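A sketch of what pirate's inner k loop could become under that scheme (primeList, nPrimes and firstPrimeAtLeast are hypothetical helpers; start, end, x and dyn_table follow pirate's code):

/* Jump from prime to prime instead of testing every k in [start, end]. */
for (int idx = firstPrimeAtLeast[start[i][j] + x];
     idx < nPrimes && primeList[idx] - x <= end[i][j]; ++idx) {
    step_result += dyn_table[i][j][primeList[idx] - x];
}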
The big issue is with pirate's ans cache array. It is not 45MB as claimed; with 64 bit entries, it is something like 364MB. This is outside the (current) allowed memory limits for C and Java. It can be reduced to 37MB by getting rid of the "l" dimension, which is unnecessary and hurts cache performance anyway. You're really interested in caching counts for l + sum and l*l + squaresum, not l, sum, and squaresum individually.
First I would like to add that a lucky number can be calculated using a sieve; an explanation of the sieve can be found here: http://en.wikipedia.org/wiki/Lucky_number
So you can improve your solution's speed by using a sieve to determine the numbers.

Find the 2nd largest element in an array with minimum number of comparisons

For an array of size N, what is the number of comparisons required?
The optimal algorithm uses n+log n-2 comparisons. Think of elements as competitors, and a tournament is going to rank them.
First, compare the elements, as in the tree
      |
    /   \
   |     |
  / \   / \
 x   x x   x
this takes n-1 comparisons and each element is involved in comparison at most log n times. You will find the largest element as the winner.
The second largest element must have lost a match to the winner (he can't lose a match to a different element), so he's one of the log n elements the winner has played against. You can find which of them using log n - 1 comparisons.
The optimality is proved via adversary argument. See https://math.stackexchange.com/questions/1601 or http://compgeom.cs.uiuc.edu/~jeffe/teaching/497/02-selection.pdf or http://www.imada.sdu.dk/~jbj/DM19/lb06.pdf or https://www.utdallas.edu/~chandra/documents/6363/lbd.pdf
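For concreteness, here is a minimal sketch of the tournament in C, assuming n is a power of two and the elements are distinct (assumptions of the sketch, not of the answer above):

#include <limits.h>
#include <stdio.h>

#define N 8 /* power of two for this sketch */

int main(void)
{
    int rounds[4][N] = { {5, 1, 9, 3, 7, 8, 2, 6} }; /* rounds[0] = input */
    int width = N, r = 0;

    /* Play the tournament: n - 1 comparisons in total. */
    while (width > 1) {
        for (int i = 0; i < width / 2; ++i) {
            int x = rounds[r][2 * i], y = rounds[r][2 * i + 1];
            rounds[r + 1][i] = x > y ? x : y;
        }
        width /= 2;
        ++r;
    }
    int champion = rounds[r][0];

    /* The runner-up lost directly to the champion, so walk the champion's
       path back down and take the largest of its log n opponents. */
    int second = INT_MIN, idx = 0;
    for (int level = r; level > 0; --level) {
        int left = rounds[level - 1][2 * idx];
        int right = rounds[level - 1][2 * idx + 1];
        int loser = (left == champion) ? right : left;
        if (loser > second)
            second = loser;
        idx = (left == champion) ? 2 * idx : 2 * idx + 1;
    }
    printf("largest = %d, second largest = %d\n", champion, second);
    return 0;
}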
You can find the second largest value with at most 2·(N-1) comparisons and two variables that hold the largest and second largest value:
largest := numbers[0];
secondLargest := null
for i = 1 to numbers.length - 1 do
    number := numbers[i];
    if number > largest then
        secondLargest := largest;
        largest := number;
    else
        if number > secondLargest then
            secondLargest := number;
        end;
    end;
end;
Use the bubble sort or selection sort algorithm, which sorts the array in descending order. Don't sort the array completely - just two passes. The first pass gives the largest element and the second pass will give you the second largest element.
Number of comparisons for the first pass: n-1
Number of comparisons for the second pass: n-2
Total number of comparisons for finding the second largest: 2n-3
Maybe you can generalize this algorithm: if you need the 3rd largest, then you make 3 passes.
With the above strategy you don't need any temporary variables, as bubble sort and selection sort are in-place sorting algorithms. A sketch follows below.
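A minimal sketch of the two-pass idea using selection passes (the function name is made up; it rearranges the array in place):

/* Pass 0 selects the largest into a[0] (n-1 comparisons); pass 1 selects
   the second largest into a[1] (n-2 comparisons): 2n-3 in total. */
int secondLargestTwoPass(int *a, int n)
{
    for (int pass = 0; pass < 2; ++pass) {
        int best = pass;
        for (int i = pass + 1; i < n; ++i)
            if (a[i] > a[best])
                best = i;
        int t = a[pass]; a[pass] = a[best]; a[best] = t;
    }
    return a[1];
}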
Here is some code that might not be optimal but at least actually finds the 2nd largest element:
if (val[0] > val[1])
{
    largest = val[0];
    secondLargest = val[1];
}
else
{
    largest = val[1];
    secondLargest = val[0];
}

for (i = 2; i < N; ++i)
{
    if (val[i] > secondLargest)
    {
        if (val[i] > largest)
        {
            secondLargest = largest;
            largest = val[i];
        }
        else
        {
            secondLargest = val[i];
        }
    }
}
It needs at least N-1 comparisons if the largest 2 elements are at the beginning of the array and at most 2N-3 in the worst case (one of the first 2 elements is the smallest in the array).
case 1-->9 8 7 6 5 4 3 2 1
case 2--> 50 10 8 25 ........
case 3--> 50 50 10 8 25.........
case 4--> 50 50 10 8 50 25.......
public int secondElement(int[] a)
{
    int i, max1, max2;
    max1 = a[0]; max2 = a[1];
    for (i = 1; i < a.length; i++)
    {
        if (a[i] > max1)
        {
            max2 = max1;
            max1 = a[i];
        }
        else if (a[i] > max2 && a[i] != max1)
            max2 = a[i];
        else if (max1 == max2)
            max2 = a[i];
    }
    return max2;
}
Sorry, JS code...
Tested with the two inputs:
a = [55,11,66,77,72];
a = [ 0, 12, 13, 4, 5, 32, 8 ];
var first = Number.MIN_VALUE;
var second = Number.MIN_VALUE;
for (var i = -1, len = a.length; ++i < len;) {
    var dist = a[i];
    // get the largest 2
    if (dist > first) {
        second = first;
        first = dist;
    } else if (dist > second) { // && dist < first) { // this is actually not needed, I believe
        second = dist;
    }
}
console.log('largest, second largest', first, second);
largest, second largest 32 13
This should have a maximum of a.length*2 comparisons and only goes through the list once.
I know this is an old question, but here is my attempt at solving it, making use of the tournament algorithm. It is similar to the solution used by @sdcvvc, but I am using a two-dimensional array to store the elements.
To make things work, there are two assumptions:
1) the number of elements in the array is a power of 2
2) there are no duplicates in the array
The whole process consists of two steps:
1. Building a 2D array by comparing elements two by two. The first row in the 2D array is going to be the entire input array. The next row contains the results of the comparisons of the previous row. We continue comparisons on the newly built array and keep building the 2D array until an array of only one element (the largest one) is reached.
2. We now have a 2D array where the last row contains only one element: the largest one. We continue going from the bottom to the top; in each array we find the element that was "beaten" by the largest and compare it to the current "second largest" value. To find the element beaten by the largest, and to avoid O(n) comparisons, we must store the index of the largest element in the previous row. That way we can easily check the adjacent elements. At any level (above the root level), the adjacent elements are obtained as:
leftAdjacent = rootIndex*2
rightAdjacent = rootIndex*2+1,
where rootIndex is index of the largest(root) element at the previous level.
I know the question asks for C++, but here is my attempt at solving it in Java. (I've used lists instead of arrays, to avoid messy changing of the array size and/or unnecessary array size calculations)
public static Integer findSecondLargest(List<Integer> list) {
    if (list == null) {
        return null;
    }
    if (list.size() == 1) {
        return list.get(0);
    }
    List<List<Integer>> structure = buildUpStructure(list);
    System.out.println(structure);
    return secondLargest(structure);
}

public static List<List<Integer>> buildUpStructure(List<Integer> list) {
    List<List<Integer>> newList = new ArrayList<List<Integer>>();
    List<Integer> tmpList = new ArrayList<Integer>(list);
    newList.add(tmpList);
    int n = list.size();
    while (n > 1) {
        tmpList = new ArrayList<Integer>();
        for (int i = 0; i < n; i = i+2) {
            Integer i1 = list.get(i);
            Integer i2 = list.get(i+1);
            tmpList.add(Math.max(i1, i2));
        }
        n /= 2;
        newList.add(tmpList);
        list = tmpList;
    }
    return newList;
}

public static Integer secondLargest(List<List<Integer>> structure) {
    int n = structure.size();
    int rootIndex = 0;
    Integer largest = structure.get(n-1).get(rootIndex);
    List<Integer> tmpList = structure.get(n-2);
    Integer secondLargest = Integer.MIN_VALUE;
    Integer leftAdjacent = -1;
    Integer rightAdjacent = -1;
    for (int i = n-2; i >= 0; i--) {
        rootIndex *= 2;
        tmpList = structure.get(i);
        leftAdjacent = tmpList.get(rootIndex);
        rightAdjacent = tmpList.get(rootIndex+1);
        if (leftAdjacent.equals(largest)) {
            if (rightAdjacent > secondLargest) {
                secondLargest = rightAdjacent;
            }
        }
        if (rightAdjacent.equals(largest)) {
            if (leftAdjacent > secondLargest) {
                secondLargest = leftAdjacent;
            }
            rootIndex = rootIndex+1;
        }
    }
    return secondLargest;
}
Suppose the provided array is inPutArray = [1, 2, 5, 8, 7, 3]; expected output -> 7 (second largest).
Take a temp array:
temp = [0, 0], dummy = 0
for (no in inPutArray) {
    if (temp[1] < no)
        temp[1] = no
    if (temp[0] < temp[1]) { // swap so temp[0] holds the largest seen so far
        dummy = temp[0]
        temp[0] = temp[1]
        temp[1] = dummy
    }
}
print("Second largest no is %d", temp[1])
PHP version of the Gumbo algorithm: http://sandbox.onlinephpfunctions.com/code/51e1b05dac2e648fd13e0b60f44a2abe1e4a8689
$numbers = [10, 9, 2, 3, 4, 5, 6, 7];
$largest = $numbers[0];
$secondLargest = PHP_INT_MIN; // null would misbehave for all-negative inputs
for ($i=1; $i < count($numbers); $i++) {
$number = $numbers[$i];
if ($number > $largest) {
$secondLargest = $largest;
$largest = $number;
} else if ($number > $secondLargest) {
$secondLargest = $number;
}
}
echo "largest=$largest, secondLargest=$secondLargest";
Assuming space is irrelevant, this is the smallest I could get it. It requires 2*n comparisons in worst case, and n comparisons in best case:
arr = [ 0, 12, 13, 4, 5, 32, 8 ]
max = [ -1, -1 ]
for i in range(len(arr)):
if( arr[i] > max[0] ):
max.insert(0,arr[i])
elif( arr[i] > max[1] ):
max.insert(1,arr[i])
print max[1]
try this.
max1 = a[0].
max2 = negative infinity.
for i = 1, until length:
    if a[i] > max1:
        max2 = max1.
        max1 = a[i].
    else if a[i] > max2:
        max2 = a[i].
    #end IF
#end FOR
return max2.
it should work like a charm. low in complexity.
here is a java code.
int secondLargestValue(int[] secondMax){
    int max1 = secondMax[0]; // assign the first element of the array, no matter what, sorted or not.
    int max2 = Integer.MIN_VALUE; // must start below anything in the array, or values between max2 and max1 are missed.
    for(int n = 1; n < secondMax.length; n++){ // start at one, end when larger than length, grow by 1.
        if(secondMax[n] > max1){ // nth element of the array is larger than max1, if so.
            max2 = max1; // largest is now second largest,
            max1 = secondMax[n]; // and this nth element is now max.
        } else if(secondMax[n] > max2){ // otherwise it may still beat the current second largest.
            max2 = secondMax[n];
        }//end IF
    }//end FOR
    return max2;
}//end secondLargestValue()
Use counting sort and then find the second largest element by scanning down from the top of the counts. That takes at least 1 comparison and at most n-1 (the degenerate case where all remaining elements equal the largest).
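A minimal sketch of that suggestion (my own code, assuming small non-negative integer keys so the count array stays cheap):

#include <vector>

// Counting sort, then scan down from the top bucket. Assumes every
// element lies in [0, maxValue] and at least two elements exist.
int secondLargestByCounting(const std::vector<int>& a, int maxValue) {
    std::vector<int> count(maxValue + 1, 0);
    for (int x : a) ++count[x];
    int largest = maxValue;
    while (count[largest] == 0) --largest;              // highest occupied bucket
    if (count[largest] >= 2) return largest;            // a duplicate of the max counts
    int second = largest - 1;
    while (second >= 0 && count[second] == 0) --second; // next occupied bucket
    return second;
}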
#include <stdio.h>
#include <limits.h>
int main()
{
    int a[5] = {55, 11, 66, 77, 72};
    int max, smax, i;
    max = a[0];
    smax = INT_MIN; /* seeding smax with a[0] would fail when a[0] is the maximum */
    for (i = 1; i <= 4; i++)
    {
        if (a[i] > max)
        {
            smax = max;
            max = a[i];
        }
        if (max > a[i] && smax < a[i])
        {
            smax = a[i];
        }
    }
    printf("the first max element is %d\n", max);
    printf("the second max element is %d\n", smax);
    return 0;
}
The accepted solution by sdcvvc in C++11.
#include <algorithm>
#include <iostream>
#include <vector>
#include <cassert>
#include <climits>
using std::vector;
using std::cout;
using std::endl;
using std::random_shuffle;
using std::min;
using std::max;
vector<int> create_tournament(const vector<int>& input) {
// make sure we have at least two elements, so the problem is interesting
if (input.size() <= 1) {
return input;
}
vector<int> result(2 * input.size() - 1, -1);
int i = 0;
for (const auto& el : input) {
result[input.size() - 1 + i] = el;
++i;
}
for (size_t j = input.size() / 2; j > 0; j >>= 1) {
    for (size_t k = 0; k < 2 * j; k += 2) {
result[j - 1 + k / 2] = min(result[2 * j - 1 + k], result[2 * j + k]);
}
}
return result;
}
int second_smaller(const vector<int>& tournament) {
const auto& minimum = tournament[0];
int second = INT_MAX;
for (size_t j = 0; j < tournament.size() / 2; ) {
if (tournament[2 * j + 1] == minimum) {
second = min(second, tournament[2 * j + 2]);
j = 2 * j + 1;
}
else {
second = min(second, tournament[2 * j + 1]);
j = 2 * j + 2;
}
}
return second;
}
void print_vector(const vector<int>& v) {
for (const auto& el : v) {
cout << el << " ";
}
cout << endl;
}
int main() {
vector<int> a;
for (int i = 1; i <= 2048; ++i)
a.push_back(i);
for (int i = 0; i < 1000; i++) {
random_shuffle(a.begin(), a.end());
const auto& v = create_tournament(a);
assert (second_smaller(v) == 2);
}
return 0;
}
I have gone through all the posts above, and I am convinced that the implementation of the tournament algorithm is the best approach. Let us consider the following algorithm posted by @Gumbo:
largest := numbers[0];
secondLargest := null
for i=1 to numbers.length-1 do
number := numbers[i];
if number > largest then
secondLargest := largest;
largest := number;
else
if number > secondLargest then
secondLargest := number;
end;
end;
end;
It is very good when we only want to find the second largest number in an array, and it takes at most 2(n-1) comparisons. But if you want to calculate the third largest number, or the kth largest number in general, the above algorithm doesn't work; you need another procedure.
So, I believe the tournament algorithm approach is the best, and here is the link for that.
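For what it's worth (my own addition, not from the answer above): the C++ standard library also offers a direct route to the kth largest without writing a tournament, via std::nth_element. A minimal sketch:

#include <algorithm>
#include <functional>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v = {10, 9, 2, 3, 4, 5, 6, 7};
    int k = 3; // which largest we want
    // Partition so that v[k-1] holds the kth largest; O(n) on average.
    std::nth_element(v.begin(), v.begin() + (k - 1), v.end(), std::greater<int>());
    std::cout << "kth largest: " << v[k - 1] << "\n"; // prints 7
    return 0;
}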
The following solution takes 2(N-1) comparisons in the worst case:

arr  # array with 'n' elements
first = arr[0]
second = float('-inf')  # large negative number
i = 1
while i < len(arr):
    if arr[i] > first:
        second = first
        first = arr[i]
    else:
        if arr[i] > second and arr[i] < first:
            second = arr[i]
    i = i + 1
print(second)
It can be done in n + ceil(log n) - 2 comparisons.
Solution:
It takes n-1 comparisons to get the minimum, but instead of scanning linearly we build a tournament in which the elements are paired up, like a tennis tournament, and the winner of each round goes forward.
The height of this tree is ceil(log n), since we halve the field at each round.
The idea for getting the second minimum is that it must have been beaten by the eventual minimum in one of the rounds. So we only need to find the minimum among the potential candidates (those beaten by the minimum).
There are at most ceil(log n) potential candidates, one per level of the tree.
So the number of comparisons to find the minimum using the tournament tree is n-1,
and finding the second minimum takes ceil(log n) - 1 more,
which sums to n + ceil(log n) - 2.
Here is C++ code
#include <iostream>
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <vector>
using namespace std;
typedef pair<int,int> ii;
bool isPowerOfTwo (int x)
{
/* First x in the below expression is for the case when x is 0 */
return x && (!(x&(x-1)));
}
// modified
int log_2(unsigned int n) {
int bits = 0;
if (!isPowerOfTwo(n))
bits++;
if (n > 32767) {
n >>= 16;
bits += 16;
}
if (n > 127) {
n >>= 8;
bits += 8;
}
if (n > 7) {
n >>= 4;
bits += 4;
}
if (n > 1) {
n >>= 2;
bits += 2;
}
if (n > 0) {
bits++;
}
return bits;
}
int second_minima(int a[], unsigned int n) {
// build a tree of size of log2n in the form of 2d array
// 1st row represents all elements which fights for min
// candidate pairwise. winner of each pair moves to 2nd
// row and so on
int log_2n = log_2(n);
long comparison_count = 0;
// pair of ints : first element stores value and second
// stores index of its first row
ii **p = new ii*[log_2n];
int i, j, k;
for (i = 0, j = n; i < log_2n; i++) {
p[i] = new ii[j];
j = j&1 ? j/2+1 : j/2;
}
for (i = 0; i < n; i++)
p[0][i] = make_pair(a[i], i);
// find minima using pair wise fighting
for (i = 1, j = n; i < log_2n; i++) {
// for each pair
for (k = 0; k+1 < j; k += 2) {
// find its winner
if (++comparison_count && p[i-1][k].first < p[i-1][k+1].first) {
p[i][k/2].first = p[i-1][k].first;
p[i][k/2].second = p[i-1][k].second;
}
else {
p[i][k/2].first = p[i-1][k+1].first;
p[i][k/2].second = p[i-1][k+1].second;
}
}
// if no. of elements in row is odd the last element
// directly moves to next round (row)
if (j&1) {
p[i][j/2].first = p[i-1][j-1].first;
p[i][j/2].second = p[i-1][j-1].second;
}
j = j&1 ? j/2+1 : j/2;
}
int minima, second_minima;
int index;
minima = p[log_2n-1][0].first;
// initialize second minima by its final (last 2nd row)
// potential candidate with which its final took place
second_minima = minima == p[log_2n-2][0].first ? p[log_2n-2][1].first : p[log_2n-2][0].first;
// minima original index
index = p[log_2n-1][0].second;
for (i = 0, j = n; i <= log_2n - 3; i++) {
// if its last candidate in any round then there is
// no potential candidate
if (j&1 && index == j-1) {
index /= 2;
j = j/2+1;
continue;
}
// if minima index is odd, then it fought with index - 1
// else its index + 1
// this is a potential candidate for second minima, so check it
if (index&1) {
if (++comparison_count && second_minima > p[i][index-1].first)
second_minima = p[i][index-1].first;
}
else {
if (++comparison_count && second_minima > p[i][index+1].first)
second_minima = p[i][index+1].first;
}
index/=2;
j = j&1 ? j/2+1 : j/2;
}
printf("-------------------------------------------------------------------------------\n");
printf("Minimum : %d\n", minima);
printf("Second Minimum : %d\n", second_minima);
printf("comparison count : %ld\n", comparison_count);
printf("Least No. Of Comparisons (");
printf("n+ceil(log2_n)-2) : %d\n", (int)(n+ceil(log(n)/log(2))-2));
return 0;
}
int main()
{
unsigned int n;
scanf("%u", &n);
int a[n];
int i;
for (i = 0; i < n; i++)
scanf("%d", &a[i]);
second_minima(a,n);
return 0;
}
function findSecondLargeNumber(arr){
var fLargeNum = -Infinity; // starting at 0 would fail for an all-negative array
var sLargeNum = -Infinity;
for(var i=0; i<arr.length; i++){
if(fLargeNum < arr[i]){
sLargeNum = fLargeNum;
fLargeNum = arr[i];
}else if(sLargeNum < arr[i]){
sLargeNum = arr[i];
}
}
return sLargeNum;
}
var myArray = [799, -85, 8, -1, 6, 4, 3, -2, -15, 0, 207, 75, 785, 122, 17];
Ref: http://www.ajaybadgujar.com/finding-second-largest-number-from-array-in-javascript/
A good way would be to use a max-heap: building the heap with heapify is O(n), and then extracting the maximum twice (O(log n) each) gives the answer.
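A minimal sketch of that heap approach with the standard library (my own illustration):

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v = {4, 6, 2, 9, 1, 7, 4, 2, 9, 0};
    std::make_heap(v.begin(), v.end()); // O(n): max-heap, largest at v.front()
    std::pop_heap(v.begin(), v.end());  // O(log n): move the largest to the back
    v.pop_back();                       // discard it
    std::cout << "second largest: " << v.front() << "\n"; // prints 9 (the duplicate)
    return 0;
}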
int[] int_array = {4, 6, 2, 9, 1, 7, 4, 2, 9, 0, 3, 6, 1, 6, 8};
int largest = int_array[0];
int second = Integer.MIN_VALUE; // seeding with int_array[0] fails when the first element is the maximum
for (int i = 1; i < int_array.length; i++) {
    if (int_array[i] > largest) {
        second = largest;
        largest = int_array[i];
    }
    else if (int_array[i] > second && int_array[i] < largest) {
        second = int_array[i];
    }
}
Following the "optimal algorithm uses n + log n - 2 comparisons" idea from above, here is code I came up with that doesn't use a binary tree to store the values.
During each recursive call, the array size is cut in half.
So the number of comparisons to find the champion is:
1st iteration: n/2 comparisons
2nd iteration: n/4 comparisons
3rd iteration: n/8 comparisons
...
up to log n iterations,
hence n/2 + n/4 + ... + 1 = n - 1 comparisons in total; picking the best of the champion's direct victims then costs at most another ceil(log n) - 1.
function findSecondLargestInArray(array) {
    // The second largest element loses only to the champion, so it must be
    // among the elements the champion beat directly; track those per round.
    function tournament(arr) {
        if (arr.length === 1) {
            return [arr[0], []]; // [champion, elements beaten by the champion]
        }
        const half = Math.floor(arr.length / 2);
        const [w1, v1] = tournament(arr.slice(0, half));
        const [w2, v2] = tournament(arr.slice(half));
        if (w1 > w2) {
            v1.push(w2); // w2 lost directly to the current champion
            return [w1, v1];
        }
        v2.push(w1);
        return [w2, v2];
    }
    const [, victims] = tournament(array);
    return Math.max(...victims); // best of the champion's victims
}
The comparison count assumes the array contains 2^n numbers. If there are 6 numbers, then 3 numbers move up to the next level, which doesn't pair evenly.
You want something like 8 numbers => 4 numbers => 2 numbers => 1 number, i.e. 2^n numbers.
package com.array.orderstatistics;
import java.util.Arrays;
import java.util.Collections;
public class SecondLargestElement {
/**
* Total time complexity will be O(n log n).
* @param str
*/
public static void main(String str[]) {
Integer[] integerArr = new Integer[] { 5, 1, 2, 6, 4 };
// Step1 : Time Complexity will be n log(n)
Arrays.sort(integerArr, Collections.reverseOrder());
// Step2 : Array.get Second largestElement
int secondLargestElement = integerArr[1];
System.out.println(secondLargestElement);
}
}
Sort the array into ascending order, then assign a variable to the (n-1)th term (index n-2 with zero-based indexing).
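A minimal sketch of that one-liner approach in C++ (my own illustration):

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> a = {55, 11, 66, 77, 72};
    std::sort(a.begin(), a.end());        // ascending order, O(n log n)
    std::cout << a[a.size() - 2] << "\n"; // the (n-1)th term, index n-2: prints 72
    return 0;
}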
