Suppose that I have an n x n 2D array where the entries are either 0 or 1. For example:
[[0, 1, 1]
[1, 0, 0]
[0, 0, 0]]
Now I want to find the neighbor cells of the 1s in the array, i.e. the cells to the sides of and directly diagonal to the 1s that equal 0. So in the example above, the neighbor cells would be {(0, 0), (1, 1), (1, 2), (2, 0), (2, 1)}. There is the brute-force method of doing this, where I iterate through every entry and, if it is a 1, look at its neighbors and check whether each equals 0. For large n with a high density of 1s, the number of checks made is around 8n^2. However, I feel like I can use the redundancy of this problem to come up with a faster solution. For example, after looking at the first entry at cell (0, 0), I see that it has two neighboring 1s and a neighboring 0. So I know that I don't have to check the cell (1, 1) and its neighbors. I also know that the entries at (0, 1) and (1, 0) are 1, so I can add (0, 0) as a neighbor cell.
What's the fastest implementation of a solution to this problem that someone can come up with? Personally, I've been thinking of using some sort of BFS or DFS implementation, but I'm not sure how I would implement it. I was thinking that instead of taking around 8n^2 checks, it would only take around n^2 checks.
(Also, I don't know if this is a LeetCode problem. It seems suitable to be one, so if anyone knows the name or number of this problem on LeetCode, please let me know!)
Well, I can think of an idea that will lower the 8.
First you sum all the numbers in the matrix, which gives you how many 1s there are. This step takes O(n^2).
Then, if there are fewer 1s than (n * n) / 2, you do the check by the 1s: you go over every item and, if it is a 1, you look for all the 0 positions among its eight neighbors (and add them to your answer).
On the other hand, if there are more 1s than (n * n) / 2, you do the same but this time you do the check by the 0s: you go over every item and, if it is a 0, you look for at least one 1 among its eight neighbors. If there is a 1 neighbor, you add the current 0 position to your answer.
Why do this? Because you check the eight neighbors for at most (n^2)/2 cells, so the worst-case total is: n^2 + n^2 + 8(n^2)/2 = 2n^2 + 4n^2 = 6n^2 checks.
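A rough Python sketch of this density-based switch (my own illustration, not code from the question):

def neighbor_cells(grid):
    """Return the set of 0-cells that touch at least one 1 (8-connectivity)."""
    n = len(grid)
    ones = sum(sum(row) for row in grid)          # O(n^2) counting pass
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    result = set()
    if ones <= n * n // 2:
        # Few 1s: for every 1, collect its 0 neighbors.
        for i in range(n):
            for j in range(n):
                if grid[i][j] == 1:
                    for di, dj in offsets:
                        r, c = i + di, j + dj
                        if 0 <= r < n and 0 <= c < n and grid[r][c] == 0:
                            result.add((r, c))
    else:
        # Many 1s: for every 0, check whether it has a 1 neighbor.
        for i in range(n):
            for j in range(n):
                if grid[i][j] == 0:
                    if any(0 <= i + di < n and 0 <= j + dj < n
                           and grid[i + di][j + dj] == 1
                           for di, dj in offsets):
                        result.add((i, j))
    return result

On the example from the question, neighbor_cells([[0, 1, 1], [1, 0, 0], [0, 0, 0]]) returns {(0, 0), (1, 1), (1, 2), (2, 0), (2, 1)}.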
P.S.: Thanks to @unlut, who pointed out some errors this answer had.
I was thinking that instead of taking around 8n^2 checks, it would only take around n^2 checks.
I think this is impossible. It all depends on the input. For every 1, you must check/overwrite its neighbors. So a minimum of (number of 1s in the input matrix) * 8 checks is required.
Try out some examples
0 0 0 1 1 1 0 1 0 1 0 1
0 1 0 1 1 1 1 1 1 0 0 0
0 0 0 1 1 1 0 1 0 1 0 1
Related
We are given an array of integers. We have to change the minimum number of those integers however we'd like so that, for some fixed parameter k, the sum of any k consecutive items in the array is even.
Example:
N = 8; K = 3;
A = {1,2,3,4,5,6,7,8}
We can change 3 elements (the 4th, 5th, and 6th)
so the array can be {1,2,3,5,6,7,7,8}
then
1+2+3=6 is even
2+3+5=10 is even
3+5+6=14 is even
5+6+7=18 is even
6+7+7=20 is even
7+7+8=22 is even
There's a very nice O(n)-time solution to this problem that, at a high level, works like this:
Recognize that determining which items to flip boils down to determining a pattern that repeats across the array of which items to flip.
Use dynamic programming to determine what that pattern is.
Here's how to arrive at this solution.
First, some observations. Since all we care about here is whether the sums are even or odd, we actually don't care about the numbers' exact values. We just care about whether they're even or odd. So let's begin by replacing each number with either 0 (if the number is even) or 1 (if it's odd). Now, our task is to make each window of k elements have an even number of 1s.
Second, the pattern of 0s and 1s that results after you've transformed the array has a surprising shape: it's simply a repeated copy of the first k elements of the array. For example, suppose k = 5 and we decide that the array should start off as 1 0 1 1 1. What must the sixth array element be? Well, in moving from the first window to the second, we dropped a 1 off the front of the window, changing the parity to odd. We therefore have to have the next array element be a 1, which means that the sixth array element must be a 1, equal to the first array element. The seventh array element then has to be a 0, since in moving from the second window to the third we drop off a zero. This process means that whatever we decide on for the first k elements turns out to determine the entire final sequence of values.
This means that we can reframe the problem in the following way: break the original input array of n items into n/k blocks of size k. We're now asked to pick a sequence of 0s and 1s such that
this sequence differs in as few places as possible from the n/k blocks of k items each, and
the sequence has an even number of 1s.
For example, given the input sequence
0 1 1 0 1 1 1 0 0 1 0 1 1 1 0 1 1 1
and k = 3, we would form the blocks
0 1 1, 0 1 1, 1 0 0, 1 0 1, 1 1 0, 1 1 1
and then try to find a pattern of length three with an even number of 1s in it such that replacing each block with that pattern requires the fewest number of edits.
Let's see how to take that problem on. Let's work one bit at a time. For example, we can ask: what's the cost of making the first bit a 0? What's the cost of making the first bit a 1? The cost of making the first bit a 0 is equal to the number of blocks that have a 1 at the front, and the cost of making the first bit a 1 is equal to the number of blocks that have a 0 at the front. We can work out the cost of setting each bit, individually, to either zero or one. That gives us a matrix like this one:
                     | Bit #0 | Bit #1 | Bit #2 | Bit #3 |  ...   | Bit #k-1
---------------------+--------+--------+--------+--------+--------+---------
Cost of setting to 0 |        |        |        |        |        |
Cost of setting to 1 |        |        |        |        |        |
We now need to choose a value for each column with the goal of minimizing the total cost picked, subject to the constraint that we pick an even number of bits to be equal to 1. And this is a nice dynamic programming exercise. We consider subproblems of the form
What is the lowest cost you can make out of the first m columns from the table, provided your choice has parity p of items chosen from the bottom row?
We can store this in a (k + 1) × 2 table T[m][p], where, for example, T[3][even] is the lowest cost you can achieve using the first three columns with an even number of items set to 1, and T[6][odd] is the lowest cost you can achieve using the first six columns with an odd number of items set to 1. This gives the following recurrence:
T[0][even] = 0 (using zero columns costs nothing)
T[0][odd] = ∞ (you cannot have an odd number of bits set to 1 if you use no columns)
T[m+1][p] = min(T[m][p] + cost of setting this bit to 0, T[m][!p] + cost of setting this bit to 1) (either use a zero and keep the same parity, or use a 1 and flip the parity).
This can be evaluated in time O(k), and the resulting minimum cost is given by T[k][even]. You can use a standard DP table walk to reconstruct the optimal solution from this point.
Overall, here's the final algorithm:
create a table costs[k][2], all initially zero.

/* Populate the costs table. costs[m][0] is the cost of setting bit m
 * to 0; costs[m][1] is the cost of setting bit m to 1. We work this
 * out by breaking the input into blocks of size k, then seeing, for
 * each item within each block, what its parity is. The cost of setting
 * that bit to the other parity then increases by one.
 */
for i = 0 to n - 1:
    parity = array[i] % 2
    costs[i % k][!parity]++    // Cost of changing this entry

/* Do the DP algorithm to find the minimum cost. */
create array T[k + 1][2]
T[0][0] = 0
T[0][1] = infinity
for m from 1 to k:
    for p from 0 to 1:
        T[m][p] = min(T[m - 1][p] + costs[m - 1][0],
                      T[m - 1][!p] + costs[m - 1][1])
return T[k][0]
Overall, we do O(n) work with our initial pass to work out the costs of setting each bit, independently, to 0 or to 1. We then do O(k) work with the DP step at the end. The overall work is therefore O(n + k), and assuming k ≤ n (otherwise the problem is trivial) the cost is O(n).
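For concreteness, here is a runnable Python sketch of the algorithm above (the function and variable names are my own); it returns both the minimum number of changes and the reconstructed length-k parity pattern:

import math

def min_changes(array, k):
    """Minimum number of elements to change so every window of k
    consecutive items has an even sum. Returns (cost, pattern), where
    pattern[m] is the parity forced on position m of each block."""
    n = len(array)

    # costs[m][b] = cost of forcing bit m of the repeating pattern to b.
    costs = [[0, 0] for _ in range(k)]
    for i in range(n):
        parity = array[i] % 2
        costs[i % k][1 - parity] += 1      # changing this entry costs 1

    # T[m][p] = lowest cost using the first m columns with parity p
    # of bits set to 1.
    T = [[math.inf, math.inf] for _ in range(k + 1)]
    T[0][0] = 0
    for m in range(1, k + 1):
        for p in (0, 1):
            T[m][p] = min(T[m - 1][p] + costs[m - 1][0],
                          T[m - 1][1 - p] + costs[m - 1][1])

    # Walk the table backwards to reconstruct the optimal pattern.
    pattern = [0] * k
    p = 0                                   # we need even parity overall
    for m in range(k, 0, -1):
        if T[m][p] == T[m - 1][p] + costs[m - 1][0]:
            pattern[m - 1] = 0
        else:
            pattern[m - 1] = 1
            p = 1 - p
    return T[k][0], pattern

On the example from the question, min_changes([1, 2, 3, 4, 5, 6, 7, 8], 3) reports a minimum of 3 changes, matching the example above.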
Let us assume that we have a two-dimensional array A (n x n). All elements of A are either 0 or 1. We also have a given integer K. Our task is to find the number of all possible "rectangles" in A which contain elements with total sum K.
To give an example, if A =
0 0 1 0
1 0 0 1
1 1 1 1
1 0 0 1 and k=3 ,
0 0 1 0
1 0 0 1 holds the property ,
1 1 1 holds the property ,
1 1 1 holds the property ,
0 0
1 0
1 1 holds the property ,
1 1
1 0 holds the property ,
1 1
0 1 holds the property ,
1
1
1 holds the property
1
1
1 holds the property
So unless I missed something, the answer should be 8 for this example.
In other words, we need to check all possible rectangles in A to see if the sum of their elements is K. Is there a way to do it faster than O(n^2 * k^2) ?
You could do this in O(n^3).
First note that a summed area table allows you to compute the sum of any rectangle in O(1) time given O(n^2) preprocessing time.
In this problem we only need to sum the columns, but the general technique is worth knowing.
Then, for each combination of start row and end row, you can do a linear scan across the matrix to count the solutions, either with a two-pointer approach or simply by storing the previous sums.
Example Python code (finds 14 solutions to your example):
from collections import defaultdict

A = [[0, 0, 1, 0],
     [1, 0, 0, 1],
     [1, 1, 1, 1],
     [1, 0, 0, 1]]
k = 3

h = len(A)
w = len(A[0])

C = [[0] * w for i in range(h + 1)]
for x in range(w):
    for y in range(1, h + 1):
        C[y][x] = C[y - 1][x] + A[y - 1][x]
# C[y][x] contains the sum of all values A[y2][x] with y2 < y

count = 0
for start_row in range(h):
    for end_row in range(start_row, h):
        D = defaultdict(int)  # Key is sum of columns from start to here, value is count
        D[0] = 1
        t = 0  # Sum of all A[y][x] for x <= col, start_row <= y <= end_row
        for x in range(w):
            t += C[end_row + 1][x] - C[start_row][x]
            count += D[t - k]
            D[t] += 1
print(count)
I think it's worse than you calculated. I found a total of 14 rectangles with three 1's. The method I used was to take each {row,column} position in the array as the upper-left corner of a rectangle, and then consider every possible combination of width and height.
Since the width and height are not constrained by k (at least not directly), the search time is O(n^4). Of course, for any given {row,column,width}, the search ends when the height is such that the sum is greater than k. But that doesn't change the worst-case time.
The three starting points in the lower-right need not be considered because it's not possible to construct a rectangle containing k 1's starting from those positions. But again, that doesn't change the time complexity.
Note: I'm aware that this is more of a comment than an answer. However, it doesn't fit in a comment, and I believe it's still useful to the OP. You can't solve a problem until you fully understand it.
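To make that brute force concrete, here is a small Python sketch (my own code, not from the answer above); it fixes a top-left corner and a width, then grows the height while keeping a running sum and stops once the sum exceeds k:

def count_rectangles(A, k):
    """Count axis-aligned sub-rectangles whose entries sum to exactly k."""
    n, m = len(A), len(A[0])
    count = 0
    for top in range(n):
        for left in range(m):
            for right in range(left, m):          # fixes the width
                s = 0
                for bottom in range(top, n):      # grow the height
                    s += sum(A[bottom][left:right + 1])   # add the new row
                    if s == k:
                        count += 1
                    elif s > k:
                        break      # taller rectangles can only add more 1s
    return count

print(count_rectangles([[0, 0, 1, 0],
                        [1, 0, 0, 1],
                        [1, 1, 1, 1],
                        [1, 0, 0, 1]], 3))   # 14, matching the count above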
Let's say I have a vector A = [-1,2];
Each element in A is described by the actual number and its sign, so each element has a 2-dimensional feature set.
I would like to generate a matrix, in this case 2x2, where the columns correspond to the elements and the rows correspond to the presence of a feature. The presence of a feature is described by 1's and 0's. So, if an element is positive, the entry is 1, and if the element is the number 1 (ignoring sign), the entry is 1 as well. In the case above I would get:
                            Element 1   Element 2
Is this a 1?                    1           0
Is this a positive number?      0           1
What is the smartest way to go about accomplishing this? Obviously if statements would work, but I feel that there should be a faster, much smarter way of going about this. I am coding this in MATLAB, by the way, and I would appreciate any help.
@Benoit_11's solution is a fine one. Here's a similar but maybe simpler solution. You could try both and see which is faster if you care about speed.
features = [abs(A) == 1; A > 0];
This assumes A is a row vector in order to get the output in the format you specified.
A simple way is to use ismember for the first condition and a logical operation for the 2nd condition. ismember outputs a logical array which you can plug into the output you need (here called DescribeA), and likewise when you check for values greater than 0 using the > operator.
%// Test array
A = [-1,2,1,-10,5,-3,1]
%// Initialize output
DescribeA = zeros(2,numel(A));
%// 1st condition. Check if values are 1 or -1
DescribeA(1,:) = ismember(A,1)|ismember(A,-1);
%// Check if they are > 0
DescribeA(2,:) = A>0;
Output in Command Window:
A =
-1 2 1 -10 5 -3 1
DescribeA =
1 0 1 0 0 0 1
0 1 1 0 1 0 1
I feel there is a smarter way for the 1st condition but I can't seem to find it.
I was asked this question in a recent Java telephone interview:
You are given an NxN binary (0-1) matrix with following properties:
Each row is sorted (sequence of 0's followed by sequence of 1's)
Every row represents an unsigned integer (by reading the bits)
Each row is unique
Example:
0 1 1
1 1 1
0 0 1
The bit values in each row are sorted, and the rows represent the integers 3, 7 and 1.
Find the row representing the smallest integer. In the example above, the answer is row 3, which represents the integer 1.
I started with a brute-force approach of quadratic complexity. The interviewer replied saying I was not exploiting the sorted property.
After thinking a lot, I used binary search on each row, which came to O(n log n). He asked if I could improve it any further. I thought a lot but failed to improve it.
I would appreciate it if anyone could give any pointers on improving it.
Another example:
0 1 1 1
0 0 0 1
0 0 0 0
1 1 1 1
The answer will be row 3, which represents the integer 0.
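(For reference, the O(n log n) binary-search baseline mentioned above might look roughly like this in Python; this is an illustrative sketch, not the actual interview code:)

from bisect import bisect_left

def smallest_row_binary_search(matrix):
    """Each row is 0s followed by 1s, so binary-search for the first 1;
    the row whose first 1 appears furthest right is the smallest integer."""
    best_row, best_zeros = 0, -1
    for i, row in enumerate(matrix):
        zeros = bisect_left(row, 1)   # index of the first 1 == count of leading 0s
        if zeros > best_zeros:
            best_zeros, best_row = zeros, i
    return best_row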
Start with row 1. Go right until you hit the first 1. Then go down to row 2, but remain in the same column and repeat the process of going right until you hit a 1. Do this repeatedly. The row in which you last stepped right is your answer.
This is an O(N+M) solution (for an NxM matrix, or O(N) for a square NxN matrix as given in the question).
Using your example of:
0 1 1 1
0 0 0 1
0 0 0 0
1 1 1 1
The .'s here represent the path traversed:
. . 1 1
0 . . .
0 0 0 . . Last right step, therefore this is our answer
1 1 1 1 .
This solution works on non-square matrices, retaining a worst-case O(N+M) efficiency for an NxM matrix.
Why does this work? The guarantee that the bits in each row are sorted means every row is a series of 0's followed by a series of 1's. So the magnitude of a row is determined by how far right you can go before hitting a 1. So if a row can ever take you further right by just following 0's, its run of 0's is longer than that of anything we've processed before, and hence its value is smaller.
Python code:
li = [[0, 1, 1, 1],
[0, 0, 0, 1],
[0, 0, 0, 0],
[1, 1, 1, 1]]
ans, j = 0, 0
for i, row in enumerate(li):
while j < len(row) and row[j] == 0:
j += 1
ans = i
print "Row", ans+1, "".join(map(str, li[ans]))
There is also a simpler solution, thanks to the combined constraints of always having a square NxN matrix and distinct rows. Together they mean that the row with the lowest value will be either 0 0 ... 0 1 or 0 0 ... 0 0. This is because the matrix represents N of the N+1 possible numbers, so the "missing" number is either 0 (in which case the smallest value represented is 1) or something else (in which case the smallest value is 0).
With this knowledge, we check the column second from the right for a 0. When we find one, we look to its right and if that contains another 0 we have our answer (there can only be one row ending in a 0). Otherwise, we continue to search the column for another 0. If we don't find another 0, the first one we found was the row we're looking for (there can only be one row ending in 01 and since there was none ending in 00, this is the smallest).
Python code:
li = [[0, 1, 1, 1],
[0, 0, 0, 1],
[0, 0, 0, 0],
[1, 1, 1, 1]]
for i, row in enumerate(li):
if row[-2] == 0:
ans = i
if row[-1] == 0:
break
print "Row", ans+1, "".join(map(str, li[ans]))
That solution answers the question with least difficulty in O(N), but generalising it to handle non-square NxM matrixes or non-distinct numbers will make its worst-case efficiency O(N^2). I personally prefer the first solution.
The lowest number must be 0 or 1 (because there are no duplicates and the rows are sorted). All you have to do is go over the last column: if it contains a 0, the lowest number is 0; otherwise the lowest number is 1.
EDIT - explanation
In N rows with the constraint you stated there can be a maximum of N+1 unique values.
So for sure at least one of 0 and 1 must be in the matrix...
Edit 2 - algorithm
// assuming the matrix is 0-indexed
for i = 0...N-1
    if M[i][N-1] == 0
        return "Value is 0 in row i";

for i = 0...N-1
    if M[i][N-2] == 0
        return "Value is 1 in row i";

// because of the explanation above the flow will never reach here.
Since the numbers are unique and the digits are sorted, it is quite clear that for any value of N, the smallest number must be either of the form [0 (N-1 times) followed by 1] or 0 (N times).
For example, for N=4, the smallest number
can either be 0001 or 0000.
In other words, the second-to-last digit of the number we wish to find HAS to be 0, and the last digit can be either 0 or 1.
This problem then reduces to just finding these patterns in the array, which can be done using a simple for loop:
int rowNum = -1;
for (int i = 0; i < N; i++)
{
    if (arr[i][N-2] == 0)   // Second-to-last digit is 0, so this number could be the minimum.
    {
        rowNum = i;
        if (arr[i][N-1] == 1)   // A number of the form 0001 was found; keep looking for 0000.
        {
            continue;
        }
        else
        {
            // A number of the form 0000 was found; exit.
            // No other number can be smaller than 0000.
            break;
        }
    }
}
return rowNum;
This algorithm has complexity O(N).
You want to find the row with the maximum number of zeros.
Start at arr[0][0].
If it is 0, check the element to the right of it, arr[0][1].
If it's not 0, then skip that row and start checking at the element in the row below the current element.
Keep doing this till you go past the last row/last column, or until you find a row with all zeros.
Algorithm:
i = 0
j = 0
answer = 0

# continue till i is a valid index.
while (i < N)
    # continue till j is a valid index and the element is 0.
    while (j < N AND arr[i][j] == 0)
        # move towards the right.
        j++
        # update answer.
        answer = i
        # found a row with all zeros.
        if (j == N)
            break all loops.
        end-if
    end-while
    # skip current row; continue on the next row.
    i++
end-while

print answer
The complexity of this is O(N+N), which is O(N), i.e. linear.
Java implementation
Related question which makes use of the exact same trick:
How to efficiently search in an ordered matrix?
Start at the top-left.
The first row is the best row so far.
Repeat until you reach the bottom:
    If you're not already over a 1:
        Go right until you find a 1.
        This row is the best row so far.
    Go down one row.
Report the best row that was found.
You never go up or left - you only go down (n-1) times and right no more than (n-1) times, making this O(n). This exploits the sortedness by realizing that you never have to go left to check for a 1 - if there is a 1 somewhere to the left, then there is also a 1 in the current spot (and thus the number in this row is at least as big as the one in the previous row).
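A compact Python sketch of this walk (my own illustration, mirroring the code in the earlier answer):

def smallest_row(matrix):
    """Only ever move right (over 0s) or down; the last row in which we
    moved right has the most leading zeros, i.e. the smallest value."""
    n = len(matrix)
    col = 0
    best = 0
    for row in range(n):
        while col < n and matrix[row][col] == 0:
            col += 1
            best = row    # stepped right, so this row beats all previous ones
    return best

# smallest_row([[0, 1, 1], [1, 1, 1], [0, 0, 1]]) -> 2 (the row representing 1)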
Since the bits in each row are sorted, once you have found a 1 bit, all bits to the right must be 1 too. In other words, the array only stores values of the form 2^m - 1.
So the row with the most zero entries is the smallest.
However, since only values of the form 2^m - 1 can be present, there are N rows, and no two are the same, we can deduce more: for any N there are only N+1 such values. So either 0 or 1 must be present, as we know there are no duplicates.
So look for an all-zero row (which is the only one with a zero in the rightmost column). If you don't find it, the answer is 1; otherwise it's 0.
O(N)
How about looping over each line in reverse order and checking where the 1s end and the zeroes start?
In fact, it is guaranteed that in an NxN matrix the worst case is that 0 won't be there. So you can just check the last 2 entries of each row, which makes it linear.
Since my explanation was not understood, here it is in rough pseudocode:
int lowestRow = -1;
for (k = 0; k < array.length; k++) {
    byte[] row = array[k];
    if (row[row.length - 1] == 0) {
        lowestRow = k;
        break;
    }
    if (row[row.length - 2] == 0) {
        lowestRow = k;
        // possibly look for all-zeroes, so not breaking
    }
}
You have to start with the last column and check whether the sum of its elements is N-1. Once you have found the column with sum N-1, search that column for the row containing the 0, and that is the one you are looking for...
An optimised version of @codaddict's answer:
int best = -1;       // number of leading zeros in the best (smallest) row so far
int answer = 0;
for (int i = 0; i < N; i++) {
    int j = 0;
    // Walk this row's leading zeros and stop at the first 1; if that 1
    // appears at or before column 'best', this row cannot beat the
    // current answer.
    while (j < N && arr[i][j] == 0)
        j++;
    if (j > best) {
        best = j;
        answer = i;
    }
}
The inner loop stops as soon as it determines that this row won't be better than the current answer. This can cut out a lot of searching down rows which are longer than the best answer so far.
Find which row has its first non-zero value in the furthest column. If it is binary, with the MSB on the left and the LSB on the right, the answer is the row that starts with the most zeros.
I would add this as a comment to Jeremy's answer if I could, because his solution is mostly correct, and I like the approach; in many instances it will be MUCH faster than the other answers. There is a possible problem, though, if "Each row is sorted." does not mean that all ones are shifted to the right but has some other implication (I can think of a couple of implications; I would need more from the individual asking the question). One problem: what about 0011 and 0010? "Rows are sorted" could mean that the algorithm you are implementing is already used, and the algorithm specified in his answer cannot distinguish between the two. I would store the index of both answers in an array; if the array length is 1 then you have a solution, otherwise you need to recurse further. Just a thought. If anyone reads this who can post comments on others' posts, please reference this in a comment to his post. It is a serious problem, and it would be upsetting to have a technically incorrect answer get the check. If my comment is added I will delete my answer completely.
The smallest number can be 0, which looks like (0000...0000), or 1, which looks like (0000...0001).
Every larger number looks like (xxxx...xx11). So you should check the next-to-last digit in every row. If it is 0, then check whether the last digit is 0; if it is, this is the smallest number. If not, then remember the row number and continue looking for another row with 0 in the next-to-last digit. If you find one, that row will be the smallest number (it must end in 00); if not, the row you first found is the smallest one.
This is a solution with N+1 steps in the worst-case scenario, which is O(N) complexity.
I don't know if it's allowed, but if it's sorted, don't you just need to convert each row to a decimal number and choose the row with the lowest one?
example:
[0111] -> 7
[0011] -> 3
[0000] -> 0
[0001] -> 1
The solution is the row with value 0, no?
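For what it's worth, a one-liner along these lines in Python (my own illustration) could be:

li = [[0, 1, 1, 1], [0, 0, 1, 1], [0, 0, 0, 0], [0, 0, 0, 1]]
# Convert each row to its decimal value and pick the index of the smallest.
smallest = min(range(len(li)), key=lambda i: int("".join(map(str, li[i])), 2))
print(smallest)   # 2, since the third row is 0000

This reads every bit, so it is O(n^2) overall: simpler, but not asymptotically better than the other answers.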
I have written an O(n) algorithm, similar to what has been stated above; we start from the top-left corner and work downwards:
a = [
    [0, 1, 1, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1]
]

a2 = [
    [0, 0, 0, 0],
    [0, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 1, 1]
]

def search(a):
    n = len(a)
    c = 0
    r = 0
    best_row = 0
    while c < n and r < n:
        if a[r][c] == 0:
            # We can move right in row r, so row r has the most
            # leading zeros seen so far.
            c += 1
            best_row = r
        else:
            r += 1
    print("best row: %d" % best_row)

search(a)
search(a2)
I have a problem with describing an algorithm for finding the maximum rectangular area of binary data where 1 occurs k times more often than 0. The data is always n^2 bits. For example, the data for n = 4 looks like:
1 0 1 0
0 0 1 1
0 1 1 1
1 1 0 1
The value of k can be 1 .. j (k = 1 means that the numbers of 0s and 1s are equal).
For the above example data and for k = 1, the solution is:
1 0 1 0 <- 4 x '0' and 4 x '1'
0 0 1 1
0 1 1 1
1 1 0 1
But in this example:
1 1 1 0
0 1 0 0
0 0 0 0
0 1 1 1
Solution would be:
1 1 1 0
0 1 0 0
0 0 0 0
0 1 1 1
I tried a few brute-force algorithms, but for n > 20 they get too slow. Can you advise me how I should solve this problem?
As RBerteig proposed, the problem can also be described like this: "In a given square bitmap with cells set to 1 or 0 by some arbitrary process, find the largest rectangular area where the 1's and 0's occur in a specified ratio, k."
Brute force should do just fine here for n < 100, if properly implemented: the solution below has O(n^4) time and O(n^2) memory complexity. 10^8 operations should be well under 1 second on a modern PC (especially considering that each operation is very cheap: a few additions and subtractions).
Some observations
There are O(n^4) sub-rectangles to consider, and each of them can be a solution.
If we can find the number of 1's and 0's in each sub-rectangle in O(1) (constant time), we'll solve the problem in O(n^4) time.
If we know the number of 1's in some sub-rectangle, we can find the number of zeroes (through the area).
So, the problem reduces to the following: create a data structure that allows finding the number of 1's in any sub-rectangle in constant time.
Now, imagine we have a sub-rectangle [i0..i1]x[j0..j1], i.e. it occupies rows between i0 and i1 and columns between j0 and j1. Let count_ones be the function that counts the number of 1's in a sub-rectangle.
This is the main observation:
count_ones([i0..i1]x[j0..j1]) = count_ones([0..i1]x[0..j1])
                              - count_ones([0..i0 - 1]x[0..j1])
                              - count_ones([0..i1]x[0..j0 - 1])
                              + count_ones([0..i0 - 1]x[0..j0 - 1])
Same observation with practical example:
AAAABBB
AAAABBB
CCCCDDD
CCCCDDD
CCCCDDD
CCCCDDD
If we need to find number of 1's in D sub-rectangle (3x4), we can do it by taking number of 1's in the whole rectangle (A + B + C + D), subtracting number of 1's in (A + B) rectangle, subtracting number of 1's in (A + C) rectangle, and adding number of 1's in (A) rectangle. (A + B + C + D) - (A + B) - (A + C) + (A) = D
Thus, we need a table sums, containing for each i and j the number of 1's in the sub-rectangle [0..i][0..j].
You can create this table in O(n^2), but even the direct way to fill it (for each i and j, iterate over all elements of the [0..i][0..j] area) is O(n^4), which still fits the overall time bound.
Having this table,
count_ones([i0..i1]x[j0..j1]) = sums[i1][j1] - sums[i0 - 1][j1] - sums[i1][j0 - 1] + sums[i0 - 1][j0 - 1]
Therefore, the O(n^4) time complexity is reached.
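A minimal Python sketch of this approach (my own code; it returns the largest qualifying area, or 0 if none exists):

def largest_rectangle(A, k):
    """Largest area of a sub-rectangle in which 1s are exactly k times
    as frequent as 0s (ones == k * zeros)."""
    n = len(A)

    # sums[i][j] = number of 1s in the sub-rectangle [0..i-1] x [0..j-1]
    sums = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(n):
            sums[i + 1][j + 1] = (A[i][j] + sums[i][j + 1]
                                  + sums[i + 1][j] - sums[i][j])

    best = 0
    for i0 in range(n):
        for i1 in range(i0, n):
            for j0 in range(n):
                for j1 in range(j0, n):
                    area = (i1 - i0 + 1) * (j1 - j0 + 1)
                    ones = (sums[i1 + 1][j1 + 1] - sums[i0][j1 + 1]
                            - sums[i1 + 1][j0] + sums[i0][j0])
                    if ones == k * (area - ones) and area > best:
                        best = area
    return best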
This is still brute force, but something you should note is that you don't have to recompute everything from scratch for a new i*j rectangle. Instead, for each possible rectangle size, you can move the rectangle across the n*n grid one step at a time, decrementing the counts for the bits no longer within the rectangle and incrementing the counts for the bits that newly entered the rectangle. You could potentially combine this with varying the rectangle size, and try to find an optimal pattern for moving and resizing the rectangle.
Just some hints..
You could impose better restrictions on the values. The requirement leads to the condition
N1*(k+1) == S*k, where N1 is the number of ones in an area and S = dx*dy is its surface.
It can be rewritten in a better form:
N1/k == S/(k+1).
Because the greatest common divisor of two consecutive integers k and k+1 is always 1, N1 has to be a multiple of k and dx*dy has to be a multiple of k+1. This greatly reduces the possible solution space; the larger k is, the better (for the dx*dy case you'll need to play with the prime divisors of k+1).
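As a concrete illustration of this filter (my own example): with k = 3, a candidate rectangle must contain a multiple of 3 ones and its area dx*dy must be a multiple of 4, so a 5 x 5 rectangle (area 25) can be discarded without counting anything, while a 4 x 6 rectangle (area 24) is only worth checking if it contains exactly 18 ones.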
Now, because you need just the surface of the largest area with this property, it would be wise to start from the largest areas and move to smaller ones. By trying values of dx*dy from n^2 down to k+1 that satisfy the divisibility and bounding conditions, you'll find the solution quite fast, much faster than O(n^4), for a particular reason: except in cases where the array was specially constructed, if we assume random input, the probability that some of the (n-dx+1)*(n-dy+1) areas of surface S contain exactly N1 ones grows steadily as S decreases (large values of k make this probability smaller, but at the same time they make the filter on the dx, dy pairs stronger).
Also, this problem: http://ioinformatics.org/locations/ioi99/contest/land/land.shtml looks somewhat similar; maybe you'll find some ideas in its solution.