I've always found it awkward in Haskell to have a function or expression that requires the values as well as the indices of a list (or array, it applies just the same).
I wrote validQueens below while experimenting with the N-queens problem here ...
validQueens x =
  and [abs (x!!i - x!!j) /= j-i | i <- [0 .. length x - 2], j <- [i+1 .. length x - 1]]
I didn't care for the use of indexing, all the pluses and minuses, and so on. It feels sloppy. I came up with the following:
enumerate x = zip [0..length x - 1] x
validQueens' :: [Int] -> Bool
validQueens' x = and [abs (snd j - snd i) /= fst j - fst i | i <- l, j <- l, fst j > fst i]
  where l = enumerate x
being inspired by Python's enumerate (not that borrowing imperative concepts is necessarily a great idea). Seems better in concept, but snd and fst all over the place kinda sucks. It's also, at least at first glance, costlier both in time and space. I'm not sure whether or not I like it any better.
So, in short, I am not really satisfied with either:
iterating through by index, bounded by lengths, or even worse, with off-by-one and off-by-two adjustments
index-element tuples
Has anyone found a pattern they find more elegant than either of the above? If not, is there any compelling reason one of the above methods is superior?
Borrowing enumerate is fine and encouraged. However, it can be made a bit lazier by refusing to calculate the length of its argument:
enumerate = zip [0..]
(In fact, it's common to just use zip [0..] without naming it enumerate.) It's not clear to me why you think your second example should be costlier in either time or space. Remember: indexing is O(n), where n is the index. Your complaint about the unwieldiness of fst and snd is justified, and can be remedied with pattern-matching:
validQueens' xs = and [abs (y - x) /= j - i | (i, x) <- l, (j, y) <- l, i < j]
  where l = zip [0..] xs
Now, you might be a bit concerned about the efficiency of this double loop, since the clause (j, y) <- l is going to be running down the entire spine of l, when really we just want it to start where we left off with (i, x) <- l. So, let's write a function that implements that idea:
import Data.List (tails)

pairs :: [a] -> [(a, a)]
pairs xs = [(x, y) | x:ys <- tails xs, y <- ys]
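For example, a quick GHCi check of what this produces (assuming the definition above is in scope):

ghci> pairs [1, 2, 3]
[(1,2),(1,3),(2,3)]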
Having made this function, your function is not too hard to adapt. Pulling out the predicate into its own function, we can use all instead of and:
validSingleQueen ((i, x), (j, y)) = abs (y - x) /= j - i
validQueens' xs = all validSingleQueen (pairs (zip [0..] xs))
Or, if you prefer point-free notation:
validQueens' = all validSingleQueen . pairs . zip [0..]
Index-element tuples are quite a common thing to do in Haskell. Because zip stops as soon as either list runs out, you can write them as
enumerate x = zip [0..] x
which is both more elegant and more efficient (as it doesn't compute length x up front). In fact I wouldn't even bother naming it, as zip [0..] is so short.
This is definitely more efficient than iterating by index for lists, because !! is linear in the second argument due to lists being linked lists.
Another way you can make your program more elegant is to use pattern-matching instead of fst and snd:
validQueens' :: [Int] -> Bool
validQueens' x = and [abs (j2 - i2) /= j1 - i1 | (i1, i2) <- l, (j1, j2) <- l, j1 > i1]
  where l = zip [0..] x
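For instance, taking [1,3,0,2] (a known 4-queens solution) and [0,1] (two queens on touching diagonals) as hypothetical test inputs:

ghci> validQueens' [1,3,0,2]
True
ghci> validQueens' [0,1]
False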
This is my first time asking a question on here, so I hope I have adequately followed the guidelines for asking a proper question.
For some quick context: I am currently trying to implement and verify a recursive version of Quicksort in Dafny. At this point, it seems all that is left to do is to prove one last lemma (i.e., the implementation verifies completely when I remove this lemma's body; if I am not mistaken, this means the implementation verifies completely when assuming this lemma holds).
Specifically, this lemma states that, if a sequence of values is currently properly partitioned around a pivot, then, if one permutes the (sub)sequences left and right of the pivot, the complete sequence is still a valid partition. Eventually, using this lemma, I essentially want to say that, if the subsequences left and right of the pivot get sorted, the complete sequence is still a valid partition; as a result, the complete sequence is sorted.
Now, I have tried to prove this lemma, but I get stuck on the part where I try to show that, if all values in a sequence are less than some value, then all values in a permutation of that sequence are also less than that value. Of course, I also need to show the equivalent property with "less than" replaced by "greater than or equal to", but I suppose that they are nearly identical, so knowing one would be sufficient.
The relevant part of the code is given below:
predicate Permutation(a: seq<int>, b: seq<int>)
  requires 0 <= |a| == |b|
{
  multiset(a) == multiset(b)
}

predicate Partitioned(a: seq<int>, lo: int, hi: int, pivotIndex: int)
  requires 0 <= lo <= pivotIndex < hi <= |a|
{
  (forall k :: lo <= k < pivotIndex ==> a[k] < a[pivotIndex])
  &&
  (forall k :: pivotIndex <= k < hi ==> a[k] >= a[pivotIndex])
}

lemma PermutationPreservesPartition(apre: seq<int>, apost: seq<int>, lo: int, hi: int, pivotIndex: int)
  requires 0 <= lo <= pivotIndex < hi <= |apre| == |apost|
  requires Partitioned(apre, lo, hi, pivotIndex)
  requires Permutation(apre[lo..pivotIndex], apost[lo..pivotIndex])
  requires Permutation(apre[pivotIndex + 1..hi], apost[pivotIndex + 1..hi])
  requires apre[pivotIndex] == apost[pivotIndex]
  ensures Partitioned(apost, lo, hi, pivotIndex)
{
}
I've tried several things, such as:
assert
  Partitioned(apre, lo, hi, pivotIndex) && apre[pivotIndex] == apost[pivotIndex]
  ==>
  (
    (forall k :: lo <= k < pivotIndex ==> apre[k] < apost[pivotIndex])
    &&
    (forall k :: pivotIndex <= k < hi ==> apre[k] >= apost[pivotIndex])
  );

assert
  (forall k :: lo <= k < pivotIndex ==> apre[k] < apost[pivotIndex])
  &&
  (Permutation(apre[lo..pivotIndex], apost[lo..pivotIndex]))
  ==>
  (forall k :: lo <= k < pivotIndex ==> apost[k] < apost[pivotIndex]);
However, here the second assertion already fails to verify.
After this first attempt, I figured that Dafny might not be able to verify this property between the sequences because the "Permutation" predicate uses the corresponding multisets instead of the sequences themselves. So, I tried to make the relation between the sequences more explicit by doing the following:
assert
  Permutation(apre[lo..pivotIndex], apost[lo..pivotIndex])
  ==>
  forall v :: v in multiset(apre[lo..pivotIndex]) <==> v in multiset(apost[lo..pivotIndex]);

assert
  forall v :: v in multiset(apre[lo..pivotIndex]) <==> v in apre[lo..pivotIndex];

assert
  forall v :: v in multiset(apost[lo..pivotIndex]) <==> v in apost[lo..pivotIndex];

assert
  forall v :: v in apre[lo..pivotIndex] <==> v in apost[lo..pivotIndex];

assert
  (
    (forall v :: v in apre[lo..pivotIndex] <==> v in apost[lo..pivotIndex])
    &&
    (forall v :: v in apre[lo..pivotIndex] ==> v < apre[pivotIndex])
  )
  ==>
  (forall v :: v in apost[lo..pivotIndex] ==> v < apre[pivotIndex]);

assert
  (
    (forall v :: v in apost[lo..pivotIndex] ==> v < apre[pivotIndex])
    &&
    apre[pivotIndex] == apost[pivotIndex]
  )
  ==>
  (forall v :: v in apost[lo..pivotIndex] ==> v < apost[pivotIndex]);
This all verifies, which I thought was great, since there seems to be only one step left to connect this to the definition of "Partitioned", viz.:
assert
  (forall v :: v in apost[lo..pivotIndex] ==> v < apost[pivotIndex])
  ==>
  (forall k :: lo <= k < pivotIndex ==> apost[k] < apost[pivotIndex]);
Nevertheless, Dafny then fails to verify this assertion.
So, at this point, I am not sure how to convince Dafny that this lemma holds. I've tried looking at implementations of Quicksort in Dafny from other people, as well as any potentially relevant questions I could find. However, this has, as of yet, been to no avail. I hope someone can help me out here.
My apologies for any potential ignorance regarding Dafny, I am just starting out with the language.
It is difficult to give a usable definition of "permutation". However, to prove the correctness of a sorting algorithm, you only need that the multiset of elements stays the same. For a sequence s, the expression multiset(s) gives you the multiset of elements of s. If you start with an array a, then a[..] gives you a sequence consisting of the elements of the array, so multiset(a[..]) gives you the multiset of elements in the array.
See https://github.com/dafny-lang/dafny/blob/master/Test/dafny3/GenericSort.dfy#L59 for an example.
Dafny's verifier cannot work out all properties of such multisets by itself. However, it generally does understand that the multiset of elements is unchanged when you swap two elements.
I've run into a problem while working on my school project in C.
I am working with two equations:
x = a + u*i;
y = b + v*j;
The values of a, b, u, v are known. I need to multiply u and v by natural numbers i and j until the equality x = y occurs.
(Note: i and j need not have the same value; they may be different numbers when x = y.)
And here comes the problem: I don't know how to solve the equations when i and j differ. I suppose a possible solution is by loops, or Diophantine equations (I don't know how to apply them to the equations above).
I am a beginner and stuck. Can someone help me with a solution in C code, please? Thank you.
EDIT
If I just want to solve the equations with loops,
I think I know what I should do, but I just don't know how to write it in C:
1) at every step of the loop, compute x and y
2) if x is lower than y, increase i by 1; otherwise, increase j by 1
3) repeat until x = y
4) sometimes there is no solution at all, so I have to add a condition so it doesn't run forever.
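A minimal sketch of these four steps (written in Haskell with hypothetical names; the same structure maps directly onto a C while loop; it assumes u > 0 and v > 0 so that x and y both grow):

-- Search for natural i, j with a + u*i == b + v*j, giving up after 'limit' steps.
solve :: Int -> Int -> Int -> Int -> Int -> Maybe (Int, Int)
solve a b u v limit = go 1 1 limit
  where
    go _ _ 0 = Nothing                       -- step 4: give up, assume no solution
    go i j fuel
      | x == y    = Just (i, j)              -- step 3: equality reached
      | x < y     = go (i + 1) j (fuel - 1)  -- step 2: x is lower, bump i
      | otherwise = go i (j + 1) (fuel - 1)  -- step 2: otherwise bump j
      where
        x = a + u * i                        -- step 1: compute x and y
        y = b + v * j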
Rewriting your equations, you get:
a + u i = b + v j
With the solution
i = (b + v j - a) / u
Now you need to find a natural number j, such that i becomes natural. Obviously, b + v j - a has to be a positive multiple of u. So:
b + v j - a = k * u,  k ∈ N
j = (k * u + a - b) / v
Now k * u + a - b must be a multiple of v. This is not always possible. The easiest way to check is to iterate the first few possible ks and see where it gets you. If you get a natural number, you can plug the solution of j into the above equation and you are guaranteed to get a natural i.
This assumes that x, y, a, b, u, and v are real numbers. If they are also integers, you might get a bit farther than that.
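In the integer case, the standard solvability criterion for linear Diophantine equations applies (a textbook fact, stated here for completeness):

a + u·i = b + v·j  <=>  u·i - v·j = b - a,

which has integer solutions (i, j) if and only if gcd(u, v) divides b - a. For u, v > 0, any integer solution can then be shifted by the homogeneous solutions i → i + t·(v/g), j → j + t·(u/g), where g = gcd(u, v), to make both i and j natural numbers.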
I've read that there are ways of coaxing the Clojure compiler into producing code that rivals the performance of similar code in Java, at least for code that already looks a lot like the Java code you want it to turn into. That sounds reasonable to me: idiomatic, high-level Clojure code might have performance in the ballpark of what I'm used to from CPython or MRI, but "ugly" Java-like code runs more or less like Java. This is a tradeoff I appreciate in Haskell, for example. Low-level Haskell code with mutable arrays, loops and whatnot runs under GHC with appropriate compiler flags about as fast as it does in C (and then some high-tech libraries can sometimes squeeze similar performance out of prettier, higher-level code).
I want help learning how to get my Java-like Clojure code to run as fast as in Java. Take this example:
(defn f [x y z n]
  (+ (* 2 (+ (* x y) (+ (* y z) (* x z))))
     (* 4 (+ x y z n -2) (- n 1))))
(defmacro from [[var ini cnd] & body]
  `(loop [~var ~ini]
     (when ~cnd
       ~@body
       (recur (inc ~var)))))
(defn g [n]
  (let [c (long-array (inc n))]
    (from [x 1 (<= (f x x x 1) n)]
      (from [y x (<= (f x y y 1) n)]
        (from [z y (<= (f x y z 1) n)]
          (from [k 1 (<= (f x y z k) n)]
            (let [l (f x y z k)]
              (aset c l (inc (aget c l))))))))
    c))
(defn h [x]
  (loop [n 1000]
    (let [^longs c (g n)]
      (if-let [k (some #(when (= x (aget c %)) %)
                       (range 1 (inc n)))]
        k
        (recur (* 2 n))))))
(time (print (h 1000)))
It takes about 85 seconds using Clojure 1.6 on my (admittedly) slow machine. Equivalent code in Java runs in about 0.4 seconds. I'm not greedy, I just want to get the Clojure code to run in, say, around 2 seconds.
The first thing I did was enable *warn-on-reflection* but sadly, with that lonely type hint there are no further warnings. What am I doing wrong?
This gist contains both the Java and Clojure versions of the code.
Unfortunately *warn-on-reflection* doesn't warn you about primitive boxing - which I think is the main problem here. You want to be using unboxed primitive arithmetic at all times for maximum speed.
The following hints should help you optimise this:
Do a (set! *unchecked-math* true) to get faster primitive numerical operations
Try initialising your loops with (long ~ini). You want to force the use of primitives this way
Try putting a primitive hint ^long n to the function g
Try type-hinting your long array ^longs c - this should hopefully make Clojure use the faster primitive aget.
Type hint f as a primitive function ^long [^long x ^long y ^long z ^long n] or similar. This is very important, otherwise f will return boxed numbers.
If you succeed in eliminating all the boxed numbers, then this kind of code should be nearly as fast as pure Java.
So, I would like to convert a part of C code to Haskell. I wrote this part (it's a simplified example of what I want to do) in C, but being the newbie I am in Haskell, I can't really make it work.
float g(int n, float a, float p, float s)
{
    int c;
    while (n > 0)
    {
        c = n % 2;
        if (!c) s += p;
        else s -= p;
        p *= a;
        n--;
    }
    return s;
}
Anyone got any ideas/solutions?
Lee's translation is already pretty good (well, he confused the odd and even cases(1)), but he fell into a couple of performance traps.
g n a p s =
  if n > 0
    then
      let c  = n `mod` 2
          s' = (if c == 0 then (-) else (+)) s p
          p' = p * a
      in g (n-1) a p' s'
    else s
He used mod instead of rem. The latter maps to machine division, the former performs additional checks to ensure a non-negative result. Thus mod is a bit slower than rem, and if either satisfies the needs - because they yield identical results in the case where both arguments are non-negative; or because the result is only compared to 0 (both conditions are satisfied here) - rem is preferable. Even better, and a bit more idiomatic is to use even (which uses rem for the reasons mentioned above). The difference is not huge, though.
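The difference between the two only shows up for negative operands; a quick GHCi check:

ghci> (-3) `mod` 2
1
ghci> (-3) `rem` 2
-1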
No type signature. That means that the code is (type-class) polymorphic, and thus no strictness analysis is possible, nor any specialisations. If the code is used in the same module at a specific type, GHC can (and usually will, if optimisations are enabled) create a specialised version for that specific type that allows strictness analysis and some other optimisations (inlining of class methods like (+) etc.), in that case, one does not pay the polymorhism penalty. But if the use site is in a different module, that cannot happen. If (type-class) polymorphic code is desired, one should mark it INLINABLE or INLINE (for GHC < 7), so that its unfolding is exposed in the .hi file and the function can be specialised and optimised at the use site.
Since g is recursive, it cannot be inlined [meaning, GHC cannot inline it; in principle it is possible] at use sites, which often would enable more optimisations than a mere specialisation.
One technique that often allows better optimisation for recursive functions is the worker/wrapper transformation. One creates a wrapper that calls a recursive (local) worker, then the non-recursive wrapper can be inlined, and when the worker is called with known arguments, that can enable further optimisations like constant folding or, in the case of function arguments, inlining. In particular the latter often has an enormous impact, when combined with a static-argument-transformation (arguments that never change in the recursive calls are not passed as arguments to the recursive worker).
In this case, we only have one static argument of type Float, so a worker/wrapper transformation with a SAT typically makes no difference. As a rule of thumb, a SAT pays off when
the static argument is a function, or
several non-function arguments are static,
so by this rule we shouldn't expect any benefit from w/w + SAT, and in general there is none. Here, however, there is one special case where w/w + SAT can make a big difference, and that is when the factor a is 1. GHC has {-# RULES #-} that eliminate multiplication by 1 for various types, and with such a short loop body, one multiplication more or less per iteration makes a difference; the running time is reduced by about 40% after points 3 and 4 have been applied. (There are no RULES for multiplication by 0 or by -1 for floating-point types, because 0*x = 0 resp. (-1)*x = -x don't hold for NaNs.) For all other a, the w/w + SATed
{-# INLINABLE g #-}
g n a p s = worker n p s
  where
    worker n p s
      | n <= 0    = s
      | otherwise = let s' = if even n then s + p else s - p
                    in worker (n-1) (p*a) s'
does not perform measurably different from the top-level recursive version with the same optimisations done.
Strictness. GHC's strictness analyser is good, but not perfect. It cannot see far enough through the algorithm to determine that the function is
strict in p if n >= 1 (assuming addition - (+) - is strict in both arguments)
also strict in a if n >= 2 (assuming strictness of (*) in both arguments)
and then produce a worker that is strict in both. Instead you get a worker that uses an unboxed Int# for n and an unboxed Float# for s (I'm using the type Int -> Float -> Float -> Float -> Float here, corresponding to the C), and boxed Floats for a and p. Thus in each iteration you get two unboxings and a re-boxing. That costs (relatively) a lot of time, since besides that it's just a bit of simple arithmetic and tests.
Help GHC along a bit, and make the worker (or g itself, if you don't do the worker/wrapper transform) strict in p (bang pattern for example). That is enough to allow GHC producing a worker using unboxed values throughout.
Using division to test parity (not applicable if the type is Int and the LLVM backend is used).
GHC's optimiser hasn't got down to the low-level bits very much yet, so the native code generator emits a division instruction for
x `rem` 2 == 0
and, when the rest of the loop body is as cheap as it is here, that costs a lot of time. LLVM's optimiser has already been taught to replace that with a bitmasking at type Int, so with ghc -O2 -fllvm you don't need to do that manually. With the native code generator, substituting that with
x .&. 1 == 0
(needs import Data.Bits of course) produces a significant speedup (on normal platforms where a bitwise and is much faster than a division).
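A quick check that the bitmask agrees with the division-based parity test (a GHCi session, with Data.Bits imported):

ghci> [x `rem` 2 == 0 | x <- [4, 7, -2 :: Int]]
[True,False,True]
ghci> [x .&. 1 == 0 | x <- [4, 7, -2 :: Int]]
[True,False,True]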
The final result
{-# INLINABLE g #-}
-- requires {-# LANGUAGE BangPatterns #-} and import Data.Bits ((.&.))
g n a p s = worker n p s
  where
    worker k !ap acc
      | k > 0     = worker (k-1) (ap*a) (if k .&. (1 :: Int) == 0 then acc + ap else acc - ap)
      | otherwise = acc
performs not measurably different (for the tested values) from the result of gcc -O3 -msse2 loop.c, except for a = -1, where gcc replaces the multiplication with a negation (assuming all NaNs equivalent).
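For a rough end-to-end run, a minimal driver one might use (the argument values here are hypothetical; a serious comparison against the C loop would use a benchmarking library such as criterion):

main :: IO ()
main = print (g 100000000 0.9999 1.0 0.0 :: Float)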
(1) He's not alone in that,
c = n % 2;
if (!c) s += p;
else s -= p;
seems to be really tricky; as far as I can see, everybody(2) got it wrong.
(2) With one exception ;)
As a first step, let's simplify your code:
float g(int n, float a, float p, float s) {
    if (n <= 0) return s;
    float s2 = n % 2 == 0 ? s + p : s - p;
    return g(n - 1, a, a*p, s2);
}
We have turned your original function into a recursive one that exhibits a certain structure. It's a sequence! We can turn this into Haskell conveniently:
gs :: Bool -> Float -> Float -> Float -> [Float]
gs nb a p s = s : gs (not nb) a (a*p) (if nb then s - p else s + p)
Finally we just need to index this list:
g :: Int -> Float -> Float -> Float -> Float
g n a p s = gs (odd n) a p s !! n
The code is not tested, but it should work. If not, it's probably just an off-by-one error.
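As a quick hand check: for n = 3, a = 2, p = 1, s = 0 the C loop computes s = -1, then 1, then -3, and the list version above agrees:

ghci> g 3 2 1 0
-3.0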
Here is how I would tackle this problem in Haskell. First, I observe that there are several loops merged into one here: we are
forming a geometric sequence (whose common ratio is a suitably negated version of a, starting from a suitably signed version of p)
taking a prefix of the sequence
summing the result
So my solution follows this structure as well, with a tiny bit of s and p thrown in for good measure because that's what your code does. In a from-scratch version, I'd probably drop those two parameters entirely.
g n a p s = sum (s : take n (iterate (*(-a)) start)) where
  start | odd n     = -p
        | otherwise = p
A fairly direct translation would be:
g n a p s =
  if n > 0
    then
      let c  = n `mod` 2
          s' = (if c == 0 then (-) else (+)) s p
          p' = p * a
      in g (n-1) a p' s'
    else s
Looking at the signature of the g function (i.e., float g(int n, float a, float p, float s)), you know that your Haskell function will receive 4 arguments and return a float, thus:
g :: Integer -> Float -> Float -> Float -> Float
Let us now look at the loop: n <= 0 is the stop case, and n-- is the decreasing step used in the recursive call. Therefore:
g :: Integer -> Float -> Float -> Float -> Float
g n a p s | n <= 0 = s
For n > 0, you have another conditional, if (!(n % 2)) s += p; else s -= p;, inside the loop. If n is odd then you will do s += p, p *= a and n--. In Haskell it will be:
g :: Integer -> Float -> Float -> Float -> Float
g n a p s | n <= 0 = s
          | odd n  = g (n-1) a (p*a) (s+p)
If n is even then you will do s -= p, p *= a and n--. Thus:
g :: Integer -> Float -> Float -> Float -> Float
g n a p s | n <= 0    = s
          | odd n     = g (n-1) a (p*a) (s+p)
          | otherwise = g (n-1) a (p*a) (s-p)
To expand on #Landei's and #MathematicalOrchid's comments below the question: the algorithm proposed to solve the problem at hand is always O(n). However, if you realize that what you're actually doing is computing a partial sum of a geometric series, you can use the well-known summation formula:
g n a p s = s + (-1)^n * p * ((-a)^n - 1) / (-a - 1)
This will be faster as the exponentiation can be done faster than O(n) by repeated squaring or other clever methods, which are likely automatically employed for integer powers by modern compilers.
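For the record, this is just the closed form of the partial geometric sum appearing in the list-based answers: with p0 = (-1)^n * p,

s + p0 * (1 + (-a) + (-a)^2 + ... + (-a)^(n-1)) = s + p0 * ((-a)^n - 1) / (-a - 1)

valid for a /= -1; when a = -1 every term equals p0, so the result is simply s + n*p0 (the closed form above would divide by zero). Note also that GHC's ^ is implemented with repeated squaring, so the right-hand side needs only O(log n) multiplications.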
You can encode loops almost-naturally with the Haskell Prelude function until :: (a -> Bool) -> (a -> a) -> a -> a:
g :: Int -> Float -> Float -> Float -> Float
g n a p s =
  fst . snd $
  until ((<= 0) . fst)
        (\(n, (!s, !p)) -> (n-1, (if even n then s+p else s-p, p*a)))
        (n, (s, p))
The bang-patterns !s and !p mark strictly-calculated intermediate variables, to prevent excessive laziness which would otherwise harm efficiency.
until pred step start repeatedly applies the step function until pred holds for the current value, starting with the initial value start. It can be represented by the pseudocode:
def until(pred, step, start):
    while( true ):
        if pred(start): return(start)
        start := step(start)

// well, actually:

def until(pred, step, start):
    if pred(start): return(start)
    call until(pred, step, step(start))
The first pseudocode is equivalent to the second (which is how until is actually implemented) in the presence of tail call optimization, which is why in many functional languages where TCO is present loops are encoded via recursion.
So in Haskell, until is coded as
until p f x | p x       = x
            | otherwise = until p f (f x)
But it could have been coded differently, making explicit the interim results:
until p f x = last $ go x          -- or, last (go x)
  where go x | p x       = [x]
             | otherwise = x : go (f x)
Using the Haskell standard higher-order functions break and iterate this could be written as a stream-processing code,
until p f x = let (_,(r:_)) = break p (iterate f x) in r
-- or: span (not.p) ....
or just
until p f x = head $ dropWhile (not.p) $ iterate f x -- or, equivalently,
-- head . dropWhile (not.p) . iterate f $ x
If TCO weren't present in a given Haskell implementation, the last version would be the one to use.
Hopefully this makes clearer how the stream-processing code from Daniel Wagner's answer comes about,
g n a p s = s + (sum . take n . iterate (*(-a)) $ if odd n then (-p) else p)
because the predicate involved is about counting down from n, and
fst . snd . head . dropWhile ((> 0) . fst) $
  iterate (\(n, (!s, !p)) -> (n-1, (if even n then s+p else s-p, p*a)))
          (n, (s, p))
===
fst . snd . head . dropWhile ((> 0) . fst) $
  iterate (\(n, (!s, !p)) -> (n-1, (s+p, p*(-a))))
          (n, (s, if odd n then (-p) else p))    -- 0 is even
===
fst . (!! n) $
  iterate (\(!s, !p) -> (s+p, p*(-a)))
          (s, if odd n then (-p) else p)
===
foldl' (+) s . take n . iterate (*(-a)) $ if odd n then (-p) else p
In pure FP, the stream-processing paradigm makes all history of a computation available, as a stream (list) of values.
How can I prove whether this language is regular or not?
L = {a^n b^n : n ≥ 1} ∪ {a^n b^{n+2} : n ≥ 1}
I'll give an approach and a sketch of a proof; there might be some holes in it that I believe you can fill in yourself.
The idea is to use Nerode's theorem: show that there is an infinite number of equivalence classes for R_L, and from the theorem you can derive that the language is not regular.
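For reference, the standard statement being used here: define the relation

x R_L y  iff  for every z in {a,b}*: xz in L <-> yz in L.

L is regular if and only if R_L has finitely many equivalence classes, so exhibiting infinitely many pairwise non-equivalent strings proves non-regularity.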
Define two types of sets:
G_j = {a^n b^k | n - k = j, k ≥ 1} for each j in [-2, -1, 0, 1, ...]
H_j = {a^j} for each j in [0, 1, ...]
G_illegal = {a,b}* \ (G_j U H_j) [the union taken over each j in the specified ranges]
It is easy to see that for each x in G_illegal, and for each z in {a,b}*: xz is not in L.
So, for every x,y in G_illegal and for each z in {a,b}*: xz in L <-> yz in L.
Also, for each z in {a,b}* and for each x, y in some G_j [the same j for both]:
if z contains an a, both xz and yz are not in L
if z = b^j, then xz = a^n b^k b^j, and since k + j = n, xz is in L. The same applies to y, so yz is in L.
if z = b^{j+2}, then xz = a^n b^k b^{j+2}, and since k + j + 2 = n + 2, xz is in L. The same applies to y, so yz is in L.
otherwise, z is b^i with i ≠ j and i ≠ j + 2, and you get that both xz and yz are not in L.
So, for every j and for every x,y in G_j and for each z in {a,b}*: xz in L <-> yz in L.
Prove the same for every H_j using the same approach.
Also, it is easy to show that for each x in G_j U H_j, and for each y in G_illegal: for z = b^j, xz is in L and yz is not in L.
For x in G_j and y in H_i, for z = ab^{i+1}: it is easy to see that xz is not in L (it contains an a after a b), while yz = a^{i+1} b^{i+1} is in L.
It is easy to see that for x, y in G_j and G_i respectively, or x, y in H_j, H_i [with j ≠ i]: for z = b^j, xz is in L while yz is not. (The one exception is i = j - 2, where yz lands in L as well; there, take z = b^{j+2} instead, which puts xz = a^n b^{n+2} in L, while yz, having four more b's than a's, is not in L.)
We just proved that the sets we created are distinct equivalence classes of R_L from Nerode's theorem, and since we have an infinite number of these sets [we have H_j and G_j for every j], we can derive from Nerode's theorem that the language L is not regular.
You could just use the pumping lemma for regular languages. It says that if a language is regular, then there is an integer n > 0 such that every string s in the language with |s| >= n can be partitioned as s = xyz with |xy| <= n and |y| > 0, and every pumped string xy^iz stays in the language. So if you can exhibit a string for which every such partition admits some i with xy^iz not in the language, the language is not regular.
The proof goes like this; it's kind of an adversary game. Suppose someone tells you that this language is regular. Then ask them for a number n > 0. You build a convenient string of length greater than n, and you give it to the adversary. They partition the string into x, y, z in any way they want, as long as |xy| <= n. Then you have to pump y (repeat it i times) until you find a string that is not in the language.
In this case, I tell you: give me n. You fix n. Then I tell you: take the string a^n b^{n+2}, and split it. In any way that you can split this string, you will always have to make y = a^k with k > 0, since you are forced to make |xy| <= n and my string begins with a^n. Here is the trick: you give the adversary a string such that any way they split it, they hand you a part that you can pump. So now we pump y zero times (i = 0), and you get a^m b^{n+2} with m < n, which is not in your language. Done. (Pumping up can also work, but not for every split: with y = aa, taking i = 2 gives a^{n+2} b^{n+2}, which is back in the language, so pumping down is the safe choice here.)
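In symbols, restating the argument above: any split of s = a^n b^{n+2} with |xy| <= n and |y| > 0 forces y = a^k with k ≥ 1, and then

x y^0 z = a^{n-k} b^{n+2}

which is not in L, since n - k equals neither n nor n + 2 when k ≥ 1.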
The proof of this theorem goes roughly like this: if you have a regular language, then you have an automaton with n states for some fixed n. If a string has more than n characters, then it must go through some cycle in that automaton. If we call x the part of the string before entering the cycle, and y the part inside the cycle, it's clear that we can pump y as many times as we want, because we can keep running around the cycle as many times as we want, and the resulting string has to be in the language, because it will be recognized by that automaton. To use the theorem to prove non-regularity, since we don't know what the supposed automaton looks like, we have to leave the choice of n, and of the position of the cycle inside the automaton, to the adversary (there will be no automaton, but you say to the adversary something like: dare to give me an automaton and I will show you it cannot exist).