No. of DFA's that can be constructed - dfa

How many DFAs can be constructed with 5 states over the alphabet {0, 1}, given that the initial state is not fixed?

A DFA can have at most one transition per symbol of the alphabet out of each state, so we can enumerate the DFAs by assigning every valid option to each state.
I will use the following notation: a 5-tuple of 2-tuples giving the destinations of the transitions for the symbols 0 and 1, for instance
((None, 4), (4, 3), (1, None), (None, 0), (3, 3))
This describes an automaton in which state 0 goes to state 4 (on symbol 1), state 4 goes to state 3 (on either symbol 0 or 1), and state 3 goes back to state 0 (on symbol 1). But this automaton is not connected; it does not use all 5 states. If you count all the DFAs of this kind the answer is (6^2)^5 = 60466176, because each symbol can either take you to one of the 5 states or be missing from a given state (the 6 in the formula), there are two symbols (the 2 in the formula), and there are 5 states (the exponent 5 in the formula).
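As a quick sanity check of that formula (my addition, not part of the original answer), the count can be computed directly:
# 6 options per (state, symbol) pair: one of the 5 destinations or "missing";
# with 2 symbols and 5 states that is 6**(2*5) transition tables in total.
print((5 + 1) ** (2 * 5))  # 60466176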
Strictly 5 states
If we want an automaton that is generated with exactly 5 states, with all of the states reachable, then we can generate the automata above and filter those that are connected.
def is_connected(dfa):
    N = len(dfa)
    reached = [False] * N
    nReached = 0
    stack = [0]
    while len(stack):
        s = stack.pop()
        if not reached[s]:
            reached[s] = True
            nReached += 1
            if nReached == N:
                return True
            t, u = dfa[s]
            if t is not None:
                stack.append(t)
            if u is not None:
                stack.append(u)
    return False
import itertools

all_dfas = list(itertools.product(
    list(itertools.product([None, 0, 1, 2, 3, 4],
                           repeat=2)
    ), repeat=5))

num_connected = sum(1 if is_connected(dfa) else 0 for dfa in all_dfas[:])
It gives 15184800.
Isomorphisms
You could also try to find the DFAs that are equivalent up to relabeling states, since relabeling does not change the language defined by the DFA; in general only the transitions matter, not the state labels. But that does not seem to be the case here, since you ask to consider all possible initial states.
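If you did want to group transition tables that are equivalent up to relabeling, one illustrative (and very slow) way is to brute-force all 5! relabelings of a table and keep the lexicographically smallest one as a canonical form. This helper is my own sketch, not part of the original answer:
import itertools

def canonical(dfa):
    # smallest representation of dfa over all permutations of its state labels;
    # -1 stands for a missing transition so tuples stay comparable
    best = None
    for perm in itertools.permutations(range(len(dfa))):
        # old state s gets new label perm[s]; rebuild the table indexed by new labels
        relabeled = tuple(
            tuple(-1 if t is None else perm[t] for t in dfa[old])
            for old in sorted(range(len(dfa)), key=lambda s: perm[s])
        )
        if best is None or relabeled < best:
            best = relabeled
    return best

# counting distinct classes this way over all 15184800 connected tables is
# very slow; it is shown only to illustrate the idea of relabeling
num_classes = len({canonical(dfa) for dfa in all_dfas if is_connected(dfa)})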

Related

Most computationally efficient way to batch alter values in each array of a 2d array, based on conditions for particular values by indices

Say that I have a batch of arrays, and I would like to alter them based on conditions of particular values located by indices.
For example, say that I would like to increase and decrease particular values if the difference between those values is less than two.
For a single 1D array it can be done like this
import numpy as np
single2 = np.array([8, 8, 9, 10])
if abs(single2[1] - single2[2]) < 2:
    single2[1] = single2[1] - 1
    single2[2] = single2[2] + 1
single2
array([ 8, 7, 10, 10])
But I do not know how to do it for batch of arrays. This is my initial attempt
import numpy as np
single1 = np.array([6, 0, 3, 7])
single2 = np.array([8, 8, 9, 10])
single3 = np.array([2, 15, 15, 20])
batch = np.array([
    np.copy(single1),
    np.copy(single2),
    np.copy(single3),
])
if abs(batch[:,1] - batch[:,2]) < 2:
    batch[:,1] = batch[:,1] - 1
    batch[:,2] = batch[:,2] + 1
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
Looking at np.any and np.all, they operate on arrays of boolean values, and I am not sure how they could be used in the code snippet above.
My second attempt uses np.where, using the method described here for comparing particular values of a batch of arrays by creating new versions of the arrays with values added to the front/back of the arrays.
https://stackoverflow.com/a/71297663/3259896
In the case of the example, I am comparing values that are right next to each other, so I created copies that shift the arrays forwards and backwards by 1. I also use only the particular slice of the array that I am comparing, since the other numbers would also be used in the comparison in np.where.
batch_ap = np.concatenate(
    (batch[:, 1:2+1], np.repeat(-999, 3).reshape(3, 1)),
    axis=1
)
batch_pr = np.concatenate(
    (np.repeat(-999, 3).reshape(3, 1), batch[:, 1:2+1]),
    axis=1
)
Finally, I do the comparisons, and adjust the values
batch[:, 1:2+1] = np.where(
    abs(batch_ap[:, 1:] - batch_ap[:, :-1]) < 2,
    batch[:, 1:2+1] - 1,
    batch[:, 1:2+1]
)
batch[:, 1:2+1] = np.where(
    abs(batch_pr[:, 1:] - batch_pr[:, :-1]) < 2,
    batch[:, 1:2+1] + 1,
    batch[:, 1:2+1]
)
print(batch)
[[ 6  0  3  7]
 [ 8  7 10 10]
 [ 2 14 16 20]]
Though I am not sure whether this is the most computationally efficient or the most elegant method for this task. It seems like a lot of operations and code, but I do not have a strong enough mastery of numpy to be certain.
This works
mask = abs(batch[:,1]-batch[:,2])<2
batch[mask,1] -= 1
batch[mask,2] += 1
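For completeness, a quick end-to-end run of that mask approach on the example batch (my own check; the output matches the np.where version above):
import numpy as np

batch = np.array([
    [6, 0, 3, 7],
    [8, 8, 9, 10],
    [2, 15, 15, 20],
])

# boolean mask selecting the rows whose columns 1 and 2 differ by less than 2
mask = abs(batch[:, 1] - batch[:, 2]) < 2
batch[mask, 1] -= 1
batch[mask, 2] += 1
print(batch)
# [[ 6  0  3  7]
#  [ 8  7 10 10]
#  [ 2 14 16 20]]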

Finding max sum with operation limit

As an input I'm given an array of integers (all positive).
Also as an input I'm given a number of "actions". The goal is to find the max possible sum of array elements with the given number of actions.
As an "action" i can either:
Add current element to sum
Move to the next element
We are starting at position 0 in the array. Each element can be added only once.
The limitations are:
2 < array.Length < 20
0 < number of "actions" < 20
It seems to me that these limitations are essentially not important. It's possible to try every combination of "actions", but then the complexity would be about 2^actions, and this is bad...
Examples:
array = [1, 4, 2], 3 actions. Output should be 5. In this case we added the zeroth element, moved to the first element, then added the first element.
array = [7, 8, 9], 2 actions. Output should be 8. In this case we moved to the first element, then added the first element.
Could anyone please explain the algorithm to solve this problem, or at least the direction in which I should try to solve it?
Thanks in advance
Here is another DP solution using memoization. The idea is to represent the state by a pair of integers (current_index, actions_left) and map it to the maximum sum when starting from the current_index, assuming actions_left is the upper bound on actions we are allowed to take:
from functools import lru_cache

def best_sum(arr, num_actions):
    'get best sum from arr given a budget of actions limited to num_actions'
    @lru_cache(None)
    def dp(idx, num_actions_):
        '''return best sum starting at idx (inclusive)
        with number of actions = num_actions_ available'''
        # return zero if out of list elements or actions
        if idx >= len(arr) or num_actions_ <= 0:
            return 0
        # otherwise, decide if we should include current element or not
        return max(
            # if we include element at idx
            # we spend two actions: one to include the element and one to move
            # to the next element
            dp(idx + 1, num_actions_ - 2) + arr[idx],
            # if we do not include element at idx
            # we spend one action to move to the next element
            dp(idx + 1, num_actions_ - 1)
        )
    return dp(0, num_actions)
I am using Python 3.7.12.
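A quick check of best_sum against the examples from the question (my addition, not part of the original answer):
print(best_sum([1, 4, 2], 3))  # 5: add element 0, move, add element 1
print(best_sum([7, 8, 9], 2))  # 8: move to element 1, then add it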
array = [1, 1, 1, 1, 100]
actions = 5
In an example like the one above, you just have to keep moving right and finally pick up the 100. At the beginning of the array we never know what values we are going to see further on, so this can't be greedy.
You have two actions, and you have to try out both because you don't know which to apply when.
Below is Python code. If you are not familiar with it, treat it as pseudocode or feel free to convert it to the language of your choice. We recursively try both actions until we run out of actions or reach the end of the input array.
def getMaxSum(current_index, actions_left, current_sum):
    global max_sum
    if actions_left == 0 or current_index == len(array):
        max_sum = max(max_sum, current_sum)
        return
    if actions_left == 1:
        # Add current element to sum
        getMaxSum(current_index, actions_left - 1, current_sum + array[current_index])
    else:
        # Add current element to sum and move to the next element
        getMaxSum(current_index + 1, actions_left - 2, current_sum + array[current_index])
        # Move to the next element
        getMaxSum(current_index + 1, actions_left - 1, current_sum)

array = [7, 8, 9]
actions = 2
max_sum = 0
getMaxSum(0, actions, 0)
print(max_sum)
You will realize that there can be overlapping sub-problems here, and we can avoid those repetitive computations by memoizing/caching the results of the sub-problems. I leave that task to you as an exercise. Basically, this is a dynamic programming problem.
Hope it helped. Post in comments if any doubts.

Algorithm of Minimum steps to transform a list to the desired array. (Using InsertAt and DeleteAt Only)

Situation
To begin with, you have an array/list A, and you want to transform it into a given expected array/list B. The only actions that you can apply to the array are InsertAt and DeleteAt, which insert and delete an element at a certain index of the list.
note: Array B is always sorted while Array A may not be.
For instance, you have an array A of [1, 4, 3, 6, 7]
and you want it to become [2, 3, 4, 5, 6, 6, 7, 8]
One way of doing that is to let A undergo the following actions:
deleteAt(0); // which will delete element 1, arrayA now [4, 3, 6, 7]
deleteAt(0); // delete element 4 which now at index 0
// array A now [3, 6, 7]
insertAt(0, 2); // Insert value to at index 0 of array A
// array A now [2, 3, 6, 7]
insertAt(2, 4); // array now [2, 3, 4, 6, 7]
insertAt(3, 5); // array Now [2, 3, 4, 5, 6, 7]
insertAt(5, 6); // array Now [2, 3, 4, 5, 6, 6, 7]
insertAt(7, 8); // array Now [2, 3, 4, 5, 6, 6, 7, 8]
In the above example, 7 operations were done on array A to transform it into the array we wanted.
Hence, how do we find what steps transform A into B, as well as the minimum number of steps? Thanks!
btw, the solution of deleting every element of A and then adding everything from B to A is only applicable when A & B have nothing in common.
My thoughts
What I have done so far:
Compare array A and array B, then delete all the elements in array A that can't be found in array B.
Find the longest increasing subsequence from the common list of A and B.
Delete all elements that are not in the longest increasing subsequence.
Compare what is left with B, then add elements accordingly.
However, I'm struggling with implementing that.
Change log
Fixed a typo that missed out element 7; the minimum number of operations is now 7.
Added MY THOUGHTS section.
There was an answer that elaborated on the Levenshtein distance (AKA minimum edit distance); somehow it disappeared. But I found it really useful after reading the git/git levenshtein.c file; it seems to be a faster algorithm than what I already have. However, I'm not sure whether that algorithm can give me the detailed steps, or whether it is only capable of giving the minimum number of steps.
I have a python program that seems to work, but it is not very short
__version__ = '0.2.0'

class Impossible(RuntimeError): pass

deleteAt = 'deleteAt'
insertAt = 'insertAt'
incOffset = 'incOffset'

def remove_all(size):
    return [(deleteAt, i, None) for i in range(size-1, -1, -1)]

def index_not(seq, val):
    for i, x in enumerate(seq):
        if x != val:
            return i
    return len(seq)

def cnt_checked(func):
    """Helper function to check some function's contract"""
    from functools import wraps
    @wraps(func)
    def wrapper(src, dst, maxsteps):
        nsteps, steps = func(src, dst, maxsteps)
        if nsteps > maxsteps:
            raise RuntimeError(('cnt_checked() failed', maxsteps, nsteps))
        return nsteps, steps
    return wrapper

@cnt_checked
def strategy_1(src, dst, maxsteps):
    # get dst's first value from src
    val = dst[0]
    try:
        index = src.index(val)
    except ValueError:
        raise Impossible
    # remove all items in src before val's first occurrence
    left_steps = remove_all(index)
    src = src[index:]
    n = min(index_not(src, val), index_not(dst, val))
    score = len(left_steps)
    assert n > 0
    left_steps.append([incOffset, n, None])
    right_steps = [[incOffset, -n, None]]
    nsteps, steps = rec_find_path(src[n:], dst[n:], maxsteps - score)
    return (score + nsteps, (left_steps + steps + right_steps))

@cnt_checked
def strategy_2(src, dst, maxsteps):
    # do not get dst's first value from src
    val = dst[0]
    left_steps = []
    src = list(src)
    for i in range(len(src)-1, -1, -1):
        if src[i] == val:
            left_steps.append((deleteAt, i, None))
            del src[i]
    n = index_not(dst, val)
    right_steps = [(insertAt, 0, val) for i in range(n)]
    dst = dst[n:]
    score = len(left_steps) + len(right_steps)
    nsteps, steps = rec_find_path(src, dst, maxsteps - score)
    return (score + nsteps, (left_steps + steps + right_steps))

@cnt_checked
def rec_find_path(src, dst, maxsteps):
    if maxsteps <= 0:
        if (maxsteps == 0) and (src == dst):
            return (0, [])
        else:
            raise Impossible
    # if destination is empty, clear source
    if not dst:
        if len(src) > maxsteps:
            raise Impossible
        steps = remove_all(len(src))
        return (len(steps), steps)
    found = False
    try:
        nsteps_1, steps_1 = strategy_1(src, dst, maxsteps)
    except Impossible:
        pass
    else:
        found = True
        maxsteps = nsteps_1 - 1
    try:
        nsteps_2, steps_2 = strategy_2(src, dst, maxsteps)
    except Impossible:
        if found:
            return (nsteps_1, steps_1)
        else:
            raise
    else:
        return (nsteps_2, steps_2)

def find_path(A, B):
    assert B == list(sorted(B))
    maxsteps = len(A) + len(B)
    nsteps, steps = rec_find_path(A, B, maxsteps)
    result = []
    offset = 0
    for a, b, c in steps:
        if a == incOffset:
            offset += b
        else:
            result.append((a, b + offset, c))
    return result

def check(start, target, ops):
    """Helper function to check correctness of solution"""
    L = list(start)
    for a, b, c in ops:
        print(L)
        if a == insertAt:
            L.insert(b, c)
        elif a == deleteAt:
            del L[b]
        else:
            raise RuntimeError(('Unexpected op:', a))
    print(L)
    if L != target:
        raise RuntimeError(('Result check failed, expected', target, 'got:', L))

start = [1, 4, 3, 6, 7]
target = [2, 3, 4, 5, 6, 6, 7, 8]
ops = find_path(start, target)
print(ops)
check(start, target, ops)
After some tests with this code, it is now obvious that the result is a two-phase operation. There is a first phase where items are deleted from the initial list, all but an increasing sequence of items that all belong to the target list (with repetition). Missing items are then added to the list until the target list is built.
The temporary conclusion is that if we find an algorithm to determine the longest subsequence of items from the target list that is initially present in the first list, in the same order but not necessarily contiguous, then it gives the shortest path. This is a new and potentially simpler problem. This is probably what you meant above, but it is much clearer from the program's output.
It seems almost obvious that this problem can be reduced to the problem of the longest increasing subsequence.
We can prove quite easily that the problem reduces to the longest non-decreasing subsequence: if there were a collection of elements in A that did not merit deletion and was greater in number than the longest non-decreasing subsequence in A of elements also in B, then all the elements of this collection would have to exist in the same order in B, which would make it a longer non-decreasing subsequence and contradict our assumption. Additionally, any smaller collection in A that does not merit deletion necessarily exists in B in the same order, and is therefore part of the longest non-decreasing subsequence in A of elements also in B.
The algorithm then reduces to the longest increasing subsequence problem, O(n log n + m) (a small sketch of the counting step follows after the two steps below):
(1) Find the longest non-decreasing subsequence of elements
in A that have at least the same count in B.
(2) All other items in A are to be deleted and the
missing elements from B added.

When / How to stop Ruby loop that creates nested arrays within nested arrays

I'm unsure when to end the loop that runs the map statement; the times call is simply in place as an example of where the loop should be and what code should be contained within it. I would like to run it until the first value of the created multidimensional array is 0 (because it will consistently be the largest value until it becomes 0 itself and creates the last nested array), but I'm completely stumped on how to do so. Any help would be greatly appreciated!
def wonky_coins(n)
  coins = [n]
  if n == 0
    return 1
  end
  i = 1
  n.times do
    coins.map! do |x|
      if x != 0
        i += 2
      else
        next
        o = []
        o << x/2
        o << x/3
        o << x/4
        x = o
        puts x
      end
    end
  end
  return i
end

wonky_coins(6)
Problem:
# Catsylvanian money is a strange thing: they have a coin for every
# denomination (including zero!). A wonky change machine in
# Catsylvania takes any coin of value N and returns 3 new coins,
# valued at N/2, N/3 and N/4 (rounding down).
#
# Write a method `wonky_coins(n)` that returns the number of coins you
# are left with if you take all non-zero coins and keep feeding them
# back into the machine until you are left with only zero-value coins.
#
# Difficulty: 3/5
describe "#wonky_coins" do
  it "handles a coin of value 1" do
    wonky_coins(1).should == 3
  end

  it "handles a coin of value 5" do
    wonky_coins(5).should == 11
    # 11
    # => [2, 1, 1]
    # => [[1, 0, 0], [0, 0, 0], [0, 0, 0]]
    # => [[[0, 0, 0], 0, 0], [0, 0, 0], [0, 0, 0]]
  end

  it "handles a coin of value 6" do
    wonky_coins(6).should == 15
  end

  it "handles being given the zero coin" do
    wonky_coins(0).should == 1
  end
end
First of all, you should not have nested arrays. You want to flatten the array after each pass, so you just have coins; even better, use flat_map to do it in one step. 0 produces just itself: [0]; don't forget it or your code will lose all of your target coins!
Next, there is no logic to doing it n times that I can see. No fixed number of times will do. You want to do it until all coins are zero. You can set a flag at the top (all_zero = true) and flip it when you find a non-zero coin; that should tell your loop whether further iterations are needed.
Also, you don't need to track the number of coins, since the number will be the final size of the array.
And finally, and unrelated to the problem, do get into the habit of using correct indentation. For one thing, bad indentation makes it harder for you to debug and maintain the code; for another, it makes many SO people not want to bother reading your question.
Went back a bit later, now knowing how to use .flatten, and I got it! Thanks to @Amadan for the helpful tips. Feel free to leave any comments concerning my syntax, as I am just starting out and can use all the constructive feedback I can get!
def wonky_coins(n)
  coins = [n]
  return 1 if n == 0
  until coins[0] == 0
    coins.map! { |x|
      next if x == 0
      x = [x/2, x/3, x/4]
    }
    coins.flatten!
  end
  return coins.length
end

wonky_coins(6)

Python looping: idiomatically comparing successive items in a list

I need to loop over a list of objects, comparing them like this: 0 vs. 1, 1 vs. 2, 2 vs. 3, etc. (I'm using pysvn to extract a list of diffs.) I wound up just looping over an index, but I keep wondering if there's some way to do it that is more idiomatic. It's Python; shouldn't I be using iterators in some clever way? Simply looping over the index seems pretty clear, but I wonder if there's a more expressive or concise way to do it.
for revindex in xrange(len(dm_revisions) - 1):
    summary = \
        svn.diff_summarize(svn_path,
                           revision1=dm_revisions[revindex],
                           revision2=dm_revisions[revindex+1])
This is called a sliding window. There's an example in the itertools documentation that does it. Here's the code:
from itertools import islice

def window(seq, n=2):
    "Returns a sliding window (of width n) over data from the iterable"
    " s -> (s0,s1,...s[n-1]), (s1,s2,...,sn), ... "
    it = iter(seq)
    result = tuple(islice(it, n))
    if len(result) == n:
        yield result
    for elem in it:
        result = result[1:] + (elem,)
        yield result
With that, you can say this:
for r1, r2 in window(dm_revisions):
    summary = svn.diff_summarize(svn_path, revision1=r1, revision2=r2)
Of course you only care about the case where n=2, so you can get away with something much simpler:
def adjacent_pairs(seq):
    it = iter(seq)
    a = it.next()
    for b in it:
        yield a, b
        a = b

for r1, r2 in adjacent_pairs(dm_revisions):
    summary = svn.diff_summarize(svn_path, revision1=r1, revision2=r2)
I'd probably do:
import itertools

for rev1, rev2 in zip(dm_revisions, itertools.islice(dm_revisions, 1, None)):
    summary = svn.diff_summarize(svn_path, revision1=rev1, revision2=rev2)
Something similarly clever that does not touch the iterators themselves could probably be done using ...
So many complex solutions posted, why not keep it simple?
myList = range(5)

for idx, item1 in enumerate(myList[:-1]):
    item2 = myList[idx + 1]
    print item1, item2
>>>
0 1
1 2
2 3
3 4
Store the previous value in a variable. Initialize the variable with a value you're not likely to find in the sequence you're handling, so you can know if you're at the first element. Compare the old value to the current value.
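For illustration, a minimal sketch of that previous-value approach (my addition; it reuses the svn, svn_path, and dm_revisions names from the question and uses a sentinel object as the "nothing seen yet" marker):
_UNSET = object()  # sentinel value that cannot appear in dm_revisions

prev = _UNSET
for rev in dm_revisions:
    if prev is not _UNSET:
        # compare the previous revision with the current one
        summary = svn.diff_summarize(svn_path, revision1=prev, revision2=rev)
    prev = rev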
Reduce can be used for this purpose, if you take care to leave a copy of the current item in the result of the reducing function.
def diff_summarize(revisionList, nextRevision):
    '''helper function (adaptor) for using svn.diff_summarize with reduce'''
    if revisionList:
        # remove the previously tacked on item
        r1 = revisionList.pop()
        revisionList.append(svn.diff_summarize(
            svn_path, revision1=r1, revision2=nextRevision))
    # tack the current item onto the end of the list for use in next iteration
    revisionList.append(nextRevision)
    return revisionList

summaries = reduce(diff_summarize, dm_revisions, [])
EDIT: Yes, but nobody said the result of the function in reduce has to be scalar. I changed my example to use a list. Basically, the last element is always the previous revision (except on the first pass), with all preceding elements being the results of the svn.diff_summarize calls. This way, you get a list of results as your final output...
EDIT2: Yep, the code really was broken. I have here a workable dummy:
>>> def compare(lst, nxt):
...     if lst:
...         prev = lst.pop()
...         lst.append((prev, nxt))
...     lst.append(nxt)
...     return lst
...
>>> reduce(compare, "abcdefg", [])
[('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'e'), ('e', 'f'), ('f', 'g'), 'g']
This was tested in the shell, as you can see. You will want to replace (prev, nxt) in the lst.append call of compare to actually append the summary of the call to svn.diff_summarize.
>>> help(reduce)
Help on built-in function reduce in module __builtin__:

reduce(...)
    reduce(function, sequence[, initial]) -> value

    Apply a function of two arguments cumulatively to the items of a sequence,
    from left to right, so as to reduce the sequence to a single value.
    For example, reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) calculates
    ((((1+2)+3)+4)+5). If initial is present, it is placed before the items
    of the sequence in the calculation, and serves as a default when the
    sequence is empty.
