I have this little guessing game. To handle the string input, I used a try/catch block. The try block works fine, but the catch block is outside of the loop and I can't seem to make it work inside, so the program stops after catching an exception. What should I do so that my loop continues after catching an exception?
fun main(args: Array<String>) {
    val rand = java.util.Random()
    val n = 1 + rand.nextInt(100)
    var guess: Int
    var numberOfTries = 0
    println("I guessed a number from 1 to 100. What is it?\n")
    try {
        do {
            guess = readLine()!!.toInt()
            var x = Math.abs(n - guess)
            numberOfTries++
            when (x) {
                in 1..3 -> println("A-a-ah! It's burning!")
                in 4..7 -> println("Really hot!")
                in 8..15 -> println("Warm")
                in 16..31 -> println("A bit warm!")
                in 32..63 -> println("Pretty cold")
                in 64..99 -> println("It's freezing!")
            }
        } while (guess != n)
    } catch (e: NumberFormatException) {
        println("Use digits, please!")
    }
    println("Wow! You only used $numberOfTries tries!")
}
As MFazio23 mentioned, you should use the try inside the while loop. Otherwise, the loop is exited when an exception is thrown.
When an exception is thrown, everything inside the enclosing block is halted, including any further code: if a method throws an exception, no code after it is executed. The try-catch creates an entry point for the exception; your code continues inside the matching catch block (or the program exits if there is none), which means a loop inside the try-catch stops.
However, you actually don't need the try-catch at all. Kotlin has a nice extension function called toIntOrNull, which does exactly what you'd expect: it attempts to convert the string to an Int and returns the number, or null if the conversion failed. So you can do this:
fun main(args: Array<String>) {
    val rand = java.util.Random()
    val n = 1 + rand.nextInt(100)
    var guess: Int?
    var numberOfTries = 0
    println("I guessed a number from 1 to 100. What is it?\n")
    do {
        // Note that this now uses ?. instead of !!. to make the null check useful (throwing an NPE
        // would defeat the point). If the line is null, it now prints the same message as an invalid number.
        guess = readLine()?.toIntOrNull()
        // numberOfTries++ // move this up here if you want to count invalid guesses as a guess
        if (guess == null) {
            println("Only use numbers")
            continue
        }
        val x = Math.abs(n - guess) // I also changed this to a val; it's never reassigned, so it doesn't need to be a var
        numberOfTries++
        when (x) {
            in 1..3 -> println("A-a-ah! It's burning!")
            in 4..7 -> println("Really hot!")
            in 8..15 -> println("Warm")
            in 16..31 -> println("A bit warm!")
            in 32..63 -> println("Pretty cold")
            in 64..99 -> println("It's freezing!")
        }
    } while (guess != n)
    println("Wow! You only used $numberOfTries tries!")
}
You can also optimize it further by using an extension property called absoluteValue (a property declared as an extension, with a getter).
You could also use if-statements, but that is slightly more boilerplate than using this. You cannot call Math.abs with null, because it uses primitives, and primitives in Java can never be null. That means anything you pass to the method cannot be null, which in Kotlin means, for instance, an Int. If it's nullable, it's an Int?, and you can't pass an Int? where an Int is required (the other way around works, but that's not relevant here).
Under the hood, .absoluteValue calls Math.abs(n), but since it's a member call, you can use the null-safe operator (?.) on it:
guess = readLine()?.toIntOrNull()
val x = guess?.minus(n)?.absoluteValue // |guess - n| == |n - guess|; the ?. chain keeps it null-safe
numberOfTries++
when (x) {
    in 1..3 -> println("A-a-ah! It's burning!")
    in 4..7 -> println("Really hot!")
    in 8..15 -> println("Warm")
    in 16..31 -> println("A bit warm!")
    in 32..63 -> println("Pretty cold")
    in 64..99 -> println("It's freezing!")
    null -> println("Please only use numbers")
}
And now that x is nullable, you can add null to the when statement (in response to your comment).
Also, if you only want numberOfTries to increment on valid numbers, add an if (x != null) check before incrementing it.
You should be able to add the try...catch block right in your do...while. The only other change needed would be to initialize guess with a value (since it's not guaranteed to be set before the while block is hit):
val rand = java.util.Random()
val n = 1 + rand.nextInt(100)
var guess = 0
var numberOfTries = 0
println("I guessed a number from 1 to 100. What is it?\n")
do {
    try {
        guess = readLine()!!.toInt()
        val x = Math.abs(n - guess)
        numberOfTries++
        when (x) {
            in 1..3 -> println("A-a-ah! It's burning!")
            in 4..7 -> println("Really hot!")
            in 8..15 -> println("Warm")
            in 16..31 -> println("A bit warm!")
            in 32..63 -> println("Pretty cold")
            in 64..99 -> println("It's freezing!")
        }
    } catch (e: NumberFormatException) {
        println("Use digits, please!")
    }
} while (guess != n)
println("Wow! You only used $numberOfTries tries!")
I have the following code. It contains a getPointAndPos function that needs to be as fast as possible:
struct Point {
    let x: Int
    let y: Int
}

struct PointAndPosition {
    let pnt: Point
    let pos: Int
}

class Elements {
    var points: [Point]

    init(points: [Point]) {
        self.points = points
    }

    func addPoint(x: Int, y: Int) {
        points.append(Point(x: x, y: y))
    }

    func getPointAndPos(pos: Int) -> PointAndPosition? {
        guard pos >= 0 && points.count > pos else {
            return nil
        }
        return PointAndPosition(pnt: points[pos], pos: pos)
    }
}
However, due to Swift memory management it is not fast at all. I used to use a dictionary, but it was even worse. This function is heavily used in the application, so it is the main bottleneck now. Here are the profiling results for the getPointAndPos function:
As you can see, ~4.5 seconds are spent getting items from the array, which is crazy. I tried to follow all the performance optimization techniques that I could find, namely:
Using Array instead of Dictionary
Using simple types as Array elements (struct in my case)
It helped, but it is not enough. Is there a way to optimize it even further, considering that I do not change the elements of the array after they are added?
UPDATE #1:
As suggested, I replaced the [Point] array with a [PointAndPosition] one and removed the optionals, which made the code 6 times faster. Also, as requested, here is the code that uses the getPointAndPos function:
private func findPoint(el: Elements, point: PointAndPosition, curPos: Int, limit: Int, halfLevel: Int, incrementFunc: (Int) -> Int) -> PointAndPosition? {
    guard curPos >= 0 && curPos < el.points.count else {
        return nil
    }

    // get and check point here
    var next = curPos
    while true {
        let pnt = el.getPointAndPos(pos: next)
        if checkPoint(pp: point, pnt: pnt, halfLevel: halfLevel) {
            return pnt
        } else {
            next = incrementFunc(next)
            if (next != limit) {
                continue // then findPoint next limit incrementFunc
            }
            break
        }
    }
    return nil
}
The current implementation is much faster, but ideally I need to make it 30 times faster than it is now. I'm not sure that is even possible. Here is the latest profiling result:
I suspect you're creating a PointAndPosition and then immediately throwing it away. That's the thing that's going to create a lot of memory churn. Or you're creating a lot of duplicate PointAndPosition values.
First make sure that this is being built in Release mode with optimizations. ARC can often remove a lot of unnecessary retains and releases when optimized.
If getPointAndPos has to be as fast as possible, then the data should be stored in the form it wants, which is an array of PointAndPosition:
class Elements {
    var points: [PointAndPosition]

    init(points: [Point]) {
        self.points = points.enumerated().map { PointAndPosition(pnt: $0.element, pos: $0.offset) }
    }

    func addPoint(x: Int, y: Int) {
        points.append(PointAndPosition(pnt: Point(x: x, y: y), pos: points.endIndex))
    }

    func getPointAndPos(pos: Int) -> PointAndPosition? {
        guard pos >= 0 && points.count > pos else {
            return nil
        }
        return points[pos]
    }
}
I'd take this a step further and reduce getPointAndPos to this:
func getPointAndPos(pos: Int) -> PointAndPosition {
    points[pos]
}
If this is performance critical, then bounds checks should already have been done, and you shouldn't need an Optional here.
I'd also be very interested in the code that calls this. That may be more the issue than this code. It's possible you're calling getPointAndPos more often than you need to. (Though getting rid of the struct creation will make that less important.)
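To illustrate that last point, here is a minimal sketch (a hypothetical helper, not the asker's findPoint) of a caller that validates the range once up front and then uses the non-optional getPointAndPos(pos:) shown above, so no Optional is created or unwrapped inside the hot loop:
func firstMatch(in el: Elements, from start: Int, matching predicate: (PointAndPosition) -> Bool) -> PointAndPosition? {
    // Bounds are checked once here instead of on every call inside the loop.
    guard start >= 0 && start < el.points.count else { return nil }
    for pos in start..<el.points.count {
        let pp = el.getPointAndPos(pos: pos) // non-optional fast path from the snippet above
        if predicate(pp) { return pp }
    }
    return nil
}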
I am trying to understand which of the following would be a better approach.
I have an Array of structs
struct A {
    var selectionCount: Int
}

var ayes = [A]()
Should I loop over the items each time I want to know whether any element has been selected?
func selectedCount() -> Int {
    return ayes.map { $0.selectionCount }.reduce(0, +)
}
// OR
Or should I store a var and access it each time I want to know whether any selection has been made?
var totalSelectedElements = 0

func select(at: Int) {
    ayes[at].selectionCount += 1
    totalSelectedElements += 1
}

func deselect(at: Int) {
    ayes[at].selectionCount -= 1
    totalSelectedElements -= 1
}
It is important to distinguish interface from implementation. First design the interface you want, and then you can always change the internal implementation to suit your (performance vs. storage) needs.
I believe the array of A should be protected and you should only allow access via the select(at:) and deselect(at:) methods. This allows you to do the internal implementation either way:
struct Ayes {
    private struct A {
        var selectionCount = 0
    }

    private var ayes = [A](repeating: A(), count: 100)
    private var totalSelectedElements = 0

    mutating func select(at: Int) {
        ayes[at].selectionCount += 1
        totalSelectedElements += 1
    }

    mutating func deselect(at: Int) {
        guard ayes[at].selectionCount > 0 else { return }
        ayes[at].selectionCount -= 1
        totalSelectedElements -= 1
    }

    func selectCount(at: Int) -> Int {
        return ayes[at].selectionCount
    }

    var totalElements: Int {
        return totalSelectedElements
    }
}
It really depends on how often you will be accessing the totalElements whether you want to store it or compute it. By hiding that implementation detail, you are free to change the implementation without affecting the rest of your program.
I like the idea of maintaining the count for quick access, and by protecting access to the internal implementation you can guarantee that the count is accurate.
Example:
var ayes = Ayes()
print(ayes.totalElements) // 0
ayes.select(at: 3)
ayes.select(at: 3)
ayes.select(at: 4)
print(ayes.totalElements) // 3
print(ayes.selectCount(at: 3)) // 2
ayes.deselect(at: 3)
print(ayes.selectCount(at: 3)) // 1
ayes.deselect(at: 3)
print(ayes.selectCount(at: 3)) // 0
ayes.deselect(at: 3)
print(ayes.selectCount(at: 3)) // 0
print(ayes.totalElements) // 1
Alternate Implementation - same interface
This solution combines #RakeshaShastri's suggestion of using a dictionary with your idea of maintaining a count:
struct Ayes {
    private var ayes = [Int: Int]()
    private var totalSelectedElements = 0

    mutating func select(at: Int) {
        ayes[at, default: 0] += 1
        totalSelectedElements += 1
    }

    mutating func deselect(at: Int) {
        guard var count = ayes[at] else { return }
        count -= 1
        totalSelectedElements -= 1
        ayes[at] = count == 0 ? nil : count
    }

    func selectCount(at: Int) -> Int {
        return ayes[at, default: 0]
    }

    var totalElements: Int {
        return totalSelectedElements
    }
}
This avoids the need for a preallocated array but still provides quick access via a dictionary and the internal count.
I tend to vote against storing information which can be derived from already existing data. This approach, however, may be critical to performance. So these two questions arise:
What is the order of magnitude in your array? Are we talking of only a few hundred items? If so, you should be able to safely ignore the added overhead.
How often will it be necessary to access the value in question?
If performance is what's meant by "better approach" then having a value at the ready is, of course, way quicker than going through hundreds if not thousands of elements and getting their properties and then adding them up.
If "better approach" means better API design, then the former is more versatile since from your code any object an call select(at:) or deselect(at:) and so selectionCount may become negative... And your code would be stateful, it would rely on the state of a variable.
I tried executing the Sieve of Eratosthenes algorithm using a large Int array and a large Bool array.
The integer version seems to execute MUCH faster than the boolean one. What is the possible reason for this?
import Foundation

var n: Int = 100000000
var prime = [Bool](repeating: true, count: n+1)
var p = 2

let start = DispatchTime.now()
while((p*p)<=n)
{
    if(prime[p] == true)
    {
        var i = p*2
        while (i<=n)
        {
            prime[i] = false
            i = i + p
        }
    }
    p = p+1
}
let stop = DispatchTime.now()

let time = Double(stop.uptimeNanoseconds - start.uptimeNanoseconds) / 1000000.0
print("Time = \(time) ms")
Boolean array execution time : 78223.342295 ms
import Foundation

var n: Int = 100000000
var prime = [Int](repeating: 1, count: n+1)
var p = 2

let start = DispatchTime.now()
while((p*p)<=n)
{
    if(prime[p] == 1)
    {
        var i = p*2
        while (i<=n)
        {
            prime[i] = 0
            i = i + p
        }
    }
    p = p+1
}
let stop = DispatchTime.now()

let time = Double(stop.uptimeNanoseconds - start.uptimeNanoseconds) / 1000000.0
print("Time = \(time) ms")
Integer array execution time : 8535.54546 ms
TL;DR:
Do not attempt to optimize your code in a Debug build. Always run it through the Profiler. Int was faster than Bool in Debug, but the opposite was true when run through the Profiler.
Heap allocation is expensive. Use your memory judiciously. (This question discusses the complications in C, but it is also applicable to Swift.)
Long answer
First, let's refactor your code for easier execution:
func useBoolArray(n: Int) {
    var prime = [Bool](repeating: true, count: n+1)
    var p = 2
    while((p*p)<=n)
    {
        if(prime[p] == true)
        {
            var i = p*2
            while (i<=n)
            {
                prime[i] = false
                i = i + p
            }
        }
        p = p+1
    }
}

func useIntArray(n: Int) {
    var prime = [Int](repeating: 1, count: n+1)
    var p = 2
    while((p*p)<=n)
    {
        if(prime[p] == 1)
        {
            var i = p*2
            while (i<=n)
            {
                prime[i] = 0
                i = i + p
            }
        }
        p = p+1
    }
}
Now, run it in the Debug build:
let count = 100_000_000
let start = DispatchTime.now()
useBoolArray(n: count)
let boolStop = DispatchTime.now()
useIntArray(n: count)
let intStop = DispatchTime.now()
print("Bool array:", Double(boolStop.uptimeNanoseconds - start.uptimeNanoseconds) / Double(NSEC_PER_SEC))
print("Int array:", Double(intStop.uptimeNanoseconds - boolStop.uptimeNanoseconds) / Double(NSEC_PER_SEC))
// Bool array: 70.097249517
// Int array: 8.439799614
So Bool is a lot slower than Int, right? Let's run it through the Profiler by pressing Cmd + I and choosing the Time Profiler template. (Somehow the Profiler wasn't able to separate these functions, probably because they were inlined, so I had to run only one function per attempt):
let count = 100_000_000
useBoolArray(n: count)
// useIntArray(n: count)
// Bool: 1.15ms
// Int: 2.36ms
Not only are they an order of magnitude faster than in Debug, but the results are reversed: Bool is now faster than Int! The Profiler doesn't tell us why, so we must go on a witch hunt. Let's check the memory allocation by adding an Allocations instrument:
Ha! Now the differences are laid bare. The Bool array uses only one-eighth as much memory as the Int array. The array's storage is allocated on the heap, and heap allocation is expensive.
When you think about it some more: a Bool value only needs a single bit of information, while an Int takes 64 bits on a 64-bit machine. Swift represents a Bool with a single byte, while an Int takes 8 bytes, hence the 1:8 memory ratio. In Debug, this may account for the gap: the unoptimized runtime does all kinds of checks to ensure it's actually dealing with a Bool value, so the Bool array method takes significantly longer.
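If you want to verify the per-element footprint yourself, MemoryLayout reports it directly; a minimal check (the printed values assume a 64-bit platform):
print(MemoryLayout<Bool>.stride) // 1 byte per array element
print(MemoryLayout<Int>.stride)  // 8 bytes per array element
// For n = 100_000_000 that is roughly 100 MB of Bool storage versus 800 MB of Int storage.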
Moral of the lesson: don't optimize your code in Debug mode. It can be misleading!
(A partial answer ...)
As @MartinR mentions in his comments to the question, there is no major difference between the two cases if you build for release mode (with optimizations); the Bool case is slightly faster due to its smaller memory footprint (and equally fast as e.g. UInt8, which has the same footprint).
Running Instruments to profile the (non-optimized) debug build, we clearly see that the array element access & assignment is the culprit for the Bool case (and, as far as my brief testing has shown, for all types except the integer ones: Int, UInt16, and so on).
We can further ascertain that it's not the writing part in particular that causes the overhead, but rather the repeated accessing of the i-th element.
The same explicit read-access tests for an array of integer elements show no such large overhead.
It would almost seem as if random element access is, for some reason, not working as it should (for non-integer types) when compiling with the debug build configuration.
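For completeness, here is a minimal sketch of the UInt8 variant mentioned above; it simply mirrors the useIntArray function from the accepted answer with 1-byte elements, so in an optimized build it should perform about the same as the Bool version:
func useUInt8Array(n: Int) {
    var prime = [UInt8](repeating: 1, count: n + 1)
    var p = 2
    while p * p <= n {
        if prime[p] == 1 {
            var i = p * 2
            while i <= n {
                prime[i] = 0 // mark composite
                i += p
            }
        }
        p += 1
    }
}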
I'm new to Scala and I was playing around with the Array.tabulate method. I am getting a StackOverflowError when executing this simplified code snippet (originally a DP problem).
import Lazy._

class Lazy[A](x: => A) {
  lazy val value = x
}

object Lazy {
  def apply[A](x: => A) = new Lazy(x)
  implicit def fromLazy[A](z: Lazy[A]): A = z.value
  implicit def toLazy[A](x: => A): Lazy[A] = Lazy(x)
}

def tabulatePlay(): Int = {
  lazy val arr: Array[Array[Lazy[Int]]] = Array.tabulate(10, 10) { (i, j) =>
    if (i == 0 && j == 0)
      0 // some number
    else
      arr(0)(0)
  }
  arr(0)(0)
}
While debugging, I noticed that since arr is lazy, when it reaches the arr(0)(0) expression it tries to evaluate it by calling the Array.tabulate method again, over and over, infinitely.
What am I doing wrong? (I updated the code snippet since I was basing it on the solution given in "Dynamic programming in the functional paradigm", in particular Antal S-Z's answer.)
You have effectively caused an infinite recursion. You simply can't reference a lazy val from within its own initialization code. You need to compute arr(0)(0) separately.
I'm not sure why you are trying to access arr before it's built; tabulate is meant to fill the array with a function, so referencing arr from inside it will always result in infinite recursion.
See Rex's example here (and a vote for him), perhaps that will help.
In a multidimensional sequence created with tabulate, is the innermost seq the 1. dimension?
I was able to solve this by wrapping arr(0)(0) in Lazy so it is evaluated as a call-by-name parameter, thereby not evaluating arr in the tabulate method. The code that I referenced was automatically converting it using implicits (the binary + operator), so it wasn't clear cut.
def tabulatePlay(): Int = {
  lazy val arr: Array[Array[Lazy[Int]]] = Array.tabulate(10, 10) { (i, j) =>
    if (i == 0 && j == 0)
      1 // some number
    else
      new Lazy(arr(0)(0))
  }
  arr(0)(0)
}
Thanks all.
I am trying to loop over an array and return a value as shown below. But this gives me an error on the line after the if statement. It says "This expression was expected to have type unit but has type int".
let findMostSignificantBitPosition (inputBits: System.Collections.BitArray) =
    for i = inputBits.Length - 1 to 0 do
        if inputBits.[i] then
            i
    done
How would I do this? I am in the middle of recoding this with a recursive loop, as it seems to be the more accepted way of doing such loops in functional languages, but I still want to know what I was doing wrong above.
for loops are not supposed to return values; they only perform an operation a fixed number of times and then return () (unit). If you want to iterate and finally return something, you may:
keep a reference outside the loop, store the final result in it when you find it, and return the reference's content after the loop
use a recursive function directly
use a higher-order function that will encapsulate the traversal for you, and let you concentrate on the application logic
The higher-order function is nice if your data structure supports it. Simple traversal functions such as fold_left, however, don't support stopping the iteration prematurely. If you wish to support this (and clearly it would be interesting in your use case), you must use a traversal with premature-exit support. For a simple function such as yours, a plain recursive function is probably the simplest.
In F# it should also be possible to write your function in imperative style, using yield to turn it into a generator, then finally forcing the generator to get the result. This could be seen as a counterpart of the OCaml technique of using an exception to jump out of the loop.
Edit: A nice solution to avoid the "premature stop" questions is to use a lazy intermediate data structure, which will only be built up to the first satisfying result. This is elegant and good scripting style, but still less efficient than direct exit support or simple recursion. I guess it depends on your needs; is this function to be used in a critical path?
Edit: the following are some code samples. They're OCaml and the data structures are different (some of them use libraries from Batteries), but the ideas are the same.
(* using a reference as accumulator *)
let most_significant_bit input_bits =
  let result = ref None in
  for i = Array.length input_bits - 1 downto 0 do
    if input_bits.(i) then
      if !result = None then
        result := Some i
  done;
  !result

let most_significant_bit input_bits =
  let result = ref None in
  for i = 0 to Array.length input_bits - 1 do
    if input_bits.(i) then
      (* only the last one will be kept *)
      result := Some i
  done;
  !result

(* simple recursive version *)
let most_significant_bit input_bits =
  let rec loop = function
    | -1 -> None
    | i ->
      if input_bits.(i) then Some i
      else loop (i - 1)
  in
  loop (Array.length input_bits - 1)

(* higher-order traversal *)
open Batteries_uni

let most_significant_bit input_bits =
  Array.fold_lefti
    (fun result i ->
       if input_bits.(i) && result = None then Some i else result)
    None input_bits

(* traversal using an intermediate lazy data structure
   (a --- b) is the decreasing enumeration of integers in [b; a] *)
open Batteries_uni

let most_significant_bit input_bits =
  (Array.length input_bits - 1) --- 0
  |> Enum.Exceptionless.find (fun i -> input_bits.(i))

(* using an exception to break out of the loop; if I understand
   correctly, exceptions are rather discouraged in F# for efficiency
   reasons. I proposed to use `yield` instead and then force the
   generator, but this has no direct OCaml equivalent. *)
exception Result of int

let most_significant_bit input_bits =
  try
    for i = Array.length input_bits - 1 downto 0 do
      if input_bits.(i) then raise (Result i)
    done;
    None
  with Result i -> Some i
Why use a loop when you can use higher-order functions?
I would write:
let findMostSignificantBitPosition (inputBits: System.Collections.BitArray) =
    Seq.cast<bool> inputBits |> Seq.tryFindIndex id
The Seq module contains many functions for manipulating collections and is often a good alternative to imperative loops.
"but I still want to know what I was doing wrong above"
The body of a for loop is an expression of type unit. The only thing you can do there is perform side effects (modifying a mutable value, printing, ...).
In F#, an if/then/else is similar to ?: in C-like languages. The then and else parts must have the same type, otherwise it wouldn't make sense in a language with static typing. When the else is missing, the compiler assumes it is else (). Thus, the then part must have type unit. Putting a value in the body of a for loop doesn't mean return, because everything is a value in F# (including an if/then).
+1 for gasche
Here are some examples in F#. I added one (the second) to show how yield works with for within a sequence expression, as gasche mentioned.
open System.Collections

(* using a mutable variable as accumulator as per gasche's example *)
let findMostSignificantBitPosition (inputBits: BitArray) =
    let mutable ret = 0
    for i = inputBits.Length - 1 downto 0 do
        if inputBits.[i] then ret <- i
    ret
(* transforming to a Seq of integers with a for, then taking the first element *)
let findMostSignificantBitPosition2 (inputBits: BitArray) =
    seq {
        for i = 0 to inputBits.Length - 1 do
            if inputBits.[i] then yield i
    } |> Seq.head

(* casting to a sequence of bools then taking the index of the first "true" *)
let findMostSignificantBitPosition3 (inputBits: BitArray) =
    inputBits |> Seq.cast<bool> |> Seq.findIndex (fun f -> f)
Edit: versions returning an Option
let findMostSignificantBitPosition (inputBits: BitArray) =
    let mutable ret = None
    for i = inputBits.Length - 1 downto 0 do
        if inputBits.[i] then ret <- Some i
    ret

let findMostSignificantBitPosition2 (inputBits: BitArray) =
    seq {
        for i = 0 to inputBits.Length - 1 do
            if inputBits.[i] then yield Some(i)
            else yield None
    } |> Seq.tryPick id

let findMostSignificantBitPosition3 (inputBits: BitArray) =
    inputBits |> Seq.cast<bool> |> Seq.tryFindIndex (fun f -> f)
I would recommend using a higher-order function (as mentioned by Laurent) or writing a recursive function explicitly (which is a general approach to replace loops in F#).
If you want to see a fancy F# solution (which is probably a nicer version of using a temporary lazy data structure), then you can take a look at my article which defines an imperative computation builder for F#. It allows you to write something like:
let findMostSignificantBitPosition (inputBits: BitArray) = imperative {
    for b in Seq.cast<bool> inputBits do
        if b then return true
    return false }
There is some overhead (as with using other temporary lazy data structures), but it looks just like C# :-).
EDIT: I also posted the samples on F# Snippets: http://fssnip.net/40
I think the reason you're having issues writing this code is that you're not handling the failure case of not finding a set bit. Others have posted many ways of finding the bit. Here are a few ways of handling the failure case.
failure case by Option
open System.Collections

let findMostSignificantBitPosition (inputBits: System.Collections.BitArray) =
    let rec loop i =
        if i = -1 then
            None
        elif inputBits.[i] then
            Some i
        else
            loop (i - 1)
    loop (inputBits.Length - 1)

let test = new BitArray(1)

match findMostSignificantBitPosition test with
| Some i -> printf "Most Significant Bit: %i" i
| None -> printf "Most Significant Bit Not Found"
failure case by Exception
let findMostSignificantBitPosition (inputBits: System.Collections.BitArray) =
    let rec loop i =
        if i = -1 then
            failwith "Most Significant Bit Not Found"
        elif inputBits.[i] then
            i
        else
            loop (i - 1)
    loop (inputBits.Length - 1)

let test = new BitArray(1)

try
    let i = findMostSignificantBitPosition test
    printf "Most Significant Bit: %i" i
with
| Failure msg -> printf "%s" msg
failure case by -1
let findMostSignificantBitPosition (inputBits: System.Collections.BitArray) =
    let rec loop i =
        if i = -1 then
            i
        elif inputBits.[i] then
            i
        else
            loop (i - 1)
    loop (inputBits.Length - 1)

let test = new BitArray(1)
let i = findMostSignificantBitPosition test

if i <> -1 then
    printf "Most Significant Bit: %i" i
else
    printf "Most Significant Bit Not Found"
One option is to use a seq and the findIndex method:
let findMostSignificantBitPosition (inputBits: System.Collections.BitArray) =
    seq {
        for i = inputBits.Length - 1 downto 0 do
            yield inputBits.[i]
    } |> Seq.findIndex (fun e -> e)