In quantum computing, is there a preference for little endian or big endian?

I've been learning quantum-computing concepts over the past few months. We've generally used big-endian notation while solving problems on paper.
Recently, on starting to code, I find that little-endian notation is used in a lot of places; I see the same in Microsoft's Quantum Katas and also in Qiskit. On paper, however, thinking in little endian reverses the order of tensor products, etc., so it sometimes gets confusing.
Is there any particular trend toward using little endian in quantum computing software (QDK, Qiskit, etc.), or any reason for it?
Any suggestions on the best way to think about this while developing quantum algorithms and translating them smoothly into code are welcome.

I believe the preference in the user code is mostly dictated by the notation used by two sources: the libraries and the books/papers detailing the topic; and the preference in the libraries is dictated by the notation in the books/papers used to implement the libraries.
For example, quantum Fourier transform as described in Nielsen and Chuang uses big endian notation for input/output registers; so if a library uses this book as a reference (as the first part of the QFT kata does), it is likely to use big endian notation as well.
I don't think there is a quantum-specific reason to prefer little endian over big endian or vice versa; at some level it's an arbitrary choice, informed by the notation preferred by the sources.
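To make the difference between the two conventions concrete, here is a minimal sketch in plain C with made-up qubit values (it is not tied to any library's API). With little-endian ordering (the convention Qiskit uses), qubit 0 is the least significant bit of the basis-state index; with big-endian ordering (the convention typically used on paper), qubit 0 is the most significant bit, which is the same flip that reverses the order of the tensor products.

```c
#include <stdio.h>

/* Illustrative only: how the same measured qubit values map to a
 * basis-state index under the two conventions. The values in q[]
 * are made up for the example. */
int main(void) {
    int q[3] = { 1, 1, 0 };   /* q0 = 1, q1 = 1, q2 = 0 */

    /* Little endian: qubit 0 is the least significant bit,
     * so the ket is written |q2 q1 q0> = |011>. */
    int little = q[0] * 1 + q[1] * 2 + q[2] * 4;   /* = 3 */

    /* Big endian: qubit 0 is the most significant bit,
     * so the ket is written |q0 q1 q2> = |110>. */
    int big = q[0] * 4 + q[1] * 2 + q[2] * 1;      /* = 6 */

    printf("little-endian index: %d\n", little);   /* prints 3 */
    printf("big-endian index:    %d\n", big);      /* prints 6 */
    return 0;
}
```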

Related

Fixed Point FFT Algorithm Needed

I am developing an algorithm for an audio application for mobile platforms. It appears to me that floating-point support on many mobile processors is currently not ubiquitous, so developing in fixed point would be a safer bet.
I have written FFT routines in floating point for some time now with a degree of success; however, writing one in fixed point has turned out to be rather difficult. In particular, I would like to improve the precision and find a way to handle potential overflows. The problem is that, unlike floating-point FFTs, descriptions of fixed-point FFT algorithms are hard to come by on the Internet.
Has anyone had some experience developing such algorithms?
Your first choice should probably be to use a natively optimized FFT. There are processing requirements for fixed-point FFTs that are difficult to express efficiently in portable C (or probably any language): saturation arithmetic is probably the biggest obstacle. Assembly libraries will tend to take advantage of processor-specific instructions for these.
If you still want a portable ANSI C fixed-point FFT, I only know of one choice: kissfft. (Disclaimer: I wrote it.)
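To make the saturation point above concrete, here is a minimal, unoptimized sketch of Q15 saturating arithmetic in portable C. The function names are made up for illustration; a real fixed-point FFT library would typically replace these with processor-specific saturating instructions.

```c
#include <stdint.h>
#include <stdio.h>

/* Q15 stores a value in [-1, 1) as a signed 16-bit integer scaled by 2^15. */

static int16_t sat16(int32_t x) {
    if (x >  32767) return  32767;    /* clamp instead of wrapping around */
    if (x < -32768) return -32768;
    return (int16_t)x;
}

static int16_t q15_add(int16_t a, int16_t b) {
    return sat16((int32_t)a + b);
}

static int16_t q15_mul(int16_t a, int16_t b) {
    /* 16x16 -> 32-bit product, rounded and shifted back to Q15. */
    int32_t p = (int32_t)a * b;
    return sat16((p + (1 << 14)) >> 15);
}

int main(void) {
    int16_t half = 1 << 14;                            /* 0.5 in Q15 */
    printf("0.5 + 0.75 saturates to %d\n", q15_add(half, half + (half >> 1)));
    printf("0.5 * 0.5 = %d (0.25 in Q15)\n", q15_mul(half, half));
    return 0;
}
```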
I have read great things about http://anthonix.com/ffts/index.html; it works well on mobile platforms, and the site contains benchmarks.
I have been working on an automated tool that converts floating-point C code to fixed-point, with a variety of options for tradeoffs between accuracy and execution time. I have had good results with a number of algorithms, including a 2D 8x8 discrete cosine transform. My target platform is typically an ARM Cortex-M processor but similar results should be achievable on other platforms. Would you be interested in letting me take a crack at your FFT?

Which encoding to use for a genetic algorithm?

I want to code a genetic algorithm in C for optimizing a function of 10 variables (x1 to x10). However, I am not able to figure out which encoding I should use. I have mostly seen binary encoding being used in examples, but the variables in my case can take real values. Also, is value encoding a good option for these types of problems?
For real-valued problems I would suggest trying CMA-ES or another ES variant. CMA-ES is certainly the current state of the art for real-valued problems; it is designed to find good solutions in multidimensional problems quickly. There are implementations available on Hansen's page, and there's also a C# implementation in the works for HeuristicLab. Evolution strategies are algorithms that were specifically designed for real-valued optimization problems. They are very similar to genetic algorithms (both were invented around the same time, but in different places). The main distinction is that for ES the main driver is mutation, and it features a clever adaptation of the mutation strength; without this adaptation the (local) optimum cannot be located in time. CMA-ES is easy to configure: all it needs is the initial standard deviation and, optionally, the population size (otherwise there's a formula that estimates it given the problem size).
Genetic algorithms can of course also be applied, but you have to use specific operators that mutate variables only by a very small degree. For example, there's the Breeder Genetic Algorithm from Mühlenbein. In general, however, genetic algorithms are better suited to problems that need the right combination of things, e.g. which items to include in a knapsack, or which functions and terminals to combine into a formula (genetic programming), and less to problems where you need to find the right value for something. There are of course variants of the genetic algorithm for these too; look for real-coded genetic algorithms (RCGA or RGA). A minimal sketch of such a representation is given after this answer.
Another algorithm suited to real-valued problems is Particle Swarm Optimization, but in my opinion it is harder to configure. I'd start with SPSO-2011, the 2011 standard PSO.
If your problem contains integer variables, the choice becomes more difficult. Evolution strategies do not perform as well when variables are discrete, because the adaptation schemes for integer variables are different. A genetic algorithm becomes an interesting first-choice algorithm again.
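Here is the promised minimal sketch of a real-coded representation for the 10-variable problem in the question, in C. The bounds, rates and operators are placeholders chosen for illustration, not recommended settings: each individual is simply an array of doubles, crossover blends the parents, and mutation adds a small perturbation.

```c
#include <stdio.h>
#include <stdlib.h>

#define N_VARS 10

static double frand(void) {               /* uniform in [0, 1) */
    return rand() / (RAND_MAX + 1.0);
}

/* Arithmetic (blend) crossover: child is a random mix of the parents. */
static void crossover(const double *p1, const double *p2, double *child) {
    for (int i = 0; i < N_VARS; i++) {
        double a = frand();
        child[i] = a * p1[i] + (1.0 - a) * p2[i];
    }
}

/* Mutation: perturb each gene with a small probability. */
static void mutate(double *x, double rate, double step) {
    for (int i = 0; i < N_VARS; i++)
        if (frand() < rate)
            x[i] += step * (2.0 * frand() - 1.0);
}

int main(void) {
    double p1[N_VARS], p2[N_VARS], child[N_VARS];
    for (int i = 0; i < N_VARS; i++) {     /* random parents in [-5, 5] */
        p1[i] = 10.0 * frand() - 5.0;
        p2[i] = 10.0 * frand() - 5.0;
    }
    crossover(p1, p2, child);
    mutate(child, 0.1, 0.5);
    for (int i = 0; i < N_VARS; i++)
        printf("x%d = %f\n", i + 1, child[i]);
    return 0;
}
```

Selection, evaluation of the 10-variable objective, and the main generational loop are omitted; they are independent of the encoding.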
A genetic algorithm is best used when two answers that are pretty close to optimal will make something else pretty close to optimal when combined. The problem with a pure binary encoding is that, if you don't check your crossover, you end up with two answers which may not have much to do with the original answers.
That said, this is only really an issue if your number of variables is very small and the amount of data in each variable is large. As for picking an encoding, it's more of an art than a science and it depends on your problem. I would suggest going with an encoding that fits the amount of precision you want. With 10 variables you won't go far wrong however you encode it; an 8-bit ASCII encoding would probably work fine.
Hope that helps.

Is a Turing machine a real device or an imaginary concept?

While studying Turing machines and PDAs, I was thinking that the first computing device was the Turing machine.
Hence, I thought that there existed a practical machine called the Turing machine, whose states could be represented by some special devices (say, flip-flops) and which could accept magnetic tape as input.
So I asked how input strings are represented on magnetic tape, but from the answer and from the details given in my book, I came to understand that a Turing machine is somewhat hypothetical.
My question is: how would a Turing machine be implemented practically? For example, how would it be used to check spelling errors on our current processors?
Are Turing machines outdated? Or are they still being used?
When Turing first devised what are now called Turing machines, he was doing so for purely theoretical reasons (they were used to prove the existence of undecidable problems) and without having actually constructed one in the real world. Fast forward to the present, and with the exception of hobbyists building Turing machines for the fun of doing so, TMs are essentially confined to Theoryland.
Turing machines aren't used in practice for several reasons. For starters, it's impossible to build a true TM, since you'd need infinite resources to construct the infinite tape. (You could imagine building TMs with a limited amount of tape, then adding more tape as necessary, though.) Moreover, Turing machines are inherently slower than other models of computation because of the sequential nature of their data access. A Turing machine cannot, for example, jump into the middle of an array without first walking across all of the elements it wants to skip. On top of that, Turing machines are extremely difficult to design. Try writing a Turing machine to sort a list of 32-bit integers, for example. (Actually, please don't. It's really hard!)
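To give a feel for what programming such a machine looks like (and why it gets tedious fast), here is a toy, bounded-tape machine in C that increments a binary number written on the tape. The tape size and transition table are invented for illustration; a true Turing machine has an unbounded tape.

```c
#include <stdio.h>

/* Toy machine: increment the binary number on the tape ("1011" -> "1100"). */
enum state { MOVE_RIGHT, CARRY, DONE };

int main(void) {
    char tape[16] = "1011";          /* finite tape; a real TM's tape is unbounded */
    int head = 0;
    enum state s = MOVE_RIGHT;

    while (s != DONE) {
        switch (s) {
        case MOVE_RIGHT:             /* scan right to the end of the number */
            if (tape[head] == '0' || tape[head] == '1') { head++; }
            else { head--; s = CARRY; }
            break;
        case CARRY:                  /* add 1, propagating the carry leftwards */
            if (head < 0) { s = DONE; }   /* carry fell off the left edge; a real
                                             machine would shift the tape to make room */
            else if (tape[head] == '1') { tape[head] = '0'; head--; }
            else { tape[head] = '1'; s = DONE; }
            break;
        case DONE:
            break;
        }
    }
    printf("%s\n", tape);            /* prints 1100 */
    return 0;
}
```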
This then begs the question... why study Turing machines at all? Fortunately, there are a huge number of reasons to do this:
To reason about the limits of what could possibly be computed. Because Turing machines are capable of simulating any computer on planet earth (or, according to the Church-Turing thesis, any physically realizable computing device), if we can show the limits of what Turing machines can compute, we can demonstrate the limits of what could ever hope to be accomplished on an actual computer.
To formalize the definition of an algorithm. Why is binary search an algorithm while the statement "guess the answer" is not? In order to answer this question, we have to have a formal model of what a computer is and what an algorithm means. Having the Turing machine as a model of computation allows us to rigorously define what an algorithm is. No one ever actually wants to translate algorithms into this format, but the ability to do so gives the field of algorithms and computability theory a firm mathematical grounding.
To formalize definitions of deterministic and nondeterministic algorithms. Probably the biggest open question in computer science right now is whether P = NP. This question only makes sense if you have formal definitions of P and NP, and these in turn require definitions of deterministic and nondeterministic computation (though technically they could be defined using second-order logic). Having the Turing machine then allows us to talk about important problems in NP, along with giving us a way to find NP-complete problems. For example, the proof that SAT is NP-complete uses the fact that SAT can be used to encode a Turing machine and its execution on an input.
Hope this helps!
It is a conceptual device that is not realizable (due to the requirement of an infinite tape). Some people have built physical realizations of a Turing machine, but they are not true Turing machines due to physical limitations.
Here's a video of one: http://www.youtube.com/watch?v=E3keLeMwfHY
Turing machines are not exactly physical machines; they are essentially conceptual machines. The Turing machine is a hypothetical construct and is very difficult to implement in the real world, since it requires an infinite tape even for small, easy problems.
It's a theoretical machine. The following paragraph is from Wikipedia:
A Turing machine is a theoretical device that manipulates symbols on a strip of tape according to a table of rules. Despite its simplicity, a Turing machine can be adapted to simulate the logic of any computer algorithm, and is particularly useful in explaining the functions of a CPU inside a computer.
This machine, along with other machines like the non-deterministic machine (which doesn't exist in reality), is very useful for reasoning about complexity and for proving that one problem is harder than another, or that a problem is not solvable at all, etc.
The Turing machine (TM) is a mathematical model of computing devices. It is the smallest model that can really compute. In fact, the computer you are using is, in effect, a very big TM. The TM is not outdated; we have other models of computation, but this one was used in building the current computers, and because of that we owe a lot to Alan Turing, who proposed this model in 1936.

How do algorithms differ from design patterns?

I am new to C programming, coming from an OOP PHP background.
I find C to be (no wonder) a much more difficult language. At first I had particular trouble figuring out a couple of things about arrays, like the fact that there is no native associative array.
Now, that part I'm figuring out little by little, but I have a question regarding a conversation I had just yesterday with a C developer. She was explaining the binary search algorithm to me, because I had asked her whether there were libraries for array-related work in C, since that seemed like a smarter solution than always reinventing the wheel.
I would really love to learn more about algorithms in C. In particular, what differences are there between algorithms and the design patterns I'm used to using in PHP?
Taking things in order: the extent of C's support for anything like an associative array would be qsort to sort an array of structures based on a key, and bsearch to find one based on a key. There are, of course, quite a few alternatives -- various other libraries have hash tables, balanced trees, etc. Exactly which will suit your purposes is hard to guess though.
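A minimal sketch of that qsort/bsearch approach follows; the struct and the data in it are invented for the example.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct entry {
    const char *key;
    int         value;
};

/* Comparator used both for sorting and for searching by key. */
static int cmp_entry(const void *a, const void *b) {
    const struct entry *ea = a, *eb = b;
    return strcmp(ea->key, eb->key);
}

int main(void) {
    struct entry table[] = {
        { "quantum", 1 }, { "fft", 2 }, { "turing", 3 }, { "life", 4 },
    };
    size_t n = sizeof table / sizeof table[0];

    /* Sort once by key ... */
    qsort(table, n, sizeof table[0], cmp_entry);

    /* ... then look entries up by key in O(log n). */
    struct entry probe = { "turing", 0 };
    struct entry *hit = bsearch(&probe, table, n, sizeof table[0], cmp_entry);
    if (hit)
        printf("%s -> %d\n", hit->key, hit->value);
    return 0;
}
```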
Offhand, I don't know of many good books covering algorithms that use C as their primary vehicle for demonstration. A few obvious recommendations for books on algorithms in general (mostly language independent) would be:
The Art of Computer Programming by Donald Knuth. This is pretty much the classic algorithms book. It's now (finally) up to four volumes. Knuth originally started on it in 1967, planning to write 7 volumes. Only three volumes were available for a long time; a fourth was added quite recently. At the rate he's going, it's only going to make it to 7 if Knuth lives to be well past 100 years old. Nonetheless, the parts that are there are extremely good -- but (warning!) he analyzes the algorithms in considerable detail; if you don't know at least a little calculus, a fair amount will probably be hard to follow.
Introduction to Algorithms by Cormen, Leiserson, Rivest and Stein. IIRC, there's now a newer edition than mine, which adds yet another author. This is a large book (dropping it on your toes would be quite painful). It uses a fair amount of mathematical notation throughout, but if you're willing to work a little at looking up the notation, it's really pretty understandable. It covers quite a bit of important ground (e.g., graph algorithms) that is scheduled for later volumes of Knuth, but not (at least yet) available there.
Algorithms and Data Structures by Aho, Hopcroft and Ullman. This is (by a pretty fair margin) the smallest and lightest of these, and at least for most people probably the easiest to follow.
Though it's only available used these days, if you can find a copy of Algorithms + Data Structures = Programs by Niklaus Wirth, that's what I'd really suggest. It uses Pascal (no surprise -- Niklaus Wirth invented Pascal), but that's enough like C that it doesn't cause a real problem. It doesn't go into as much depth as Knuth on each algorithm, but still enough to give a good feel for when one is likely to be a good choice versus another. For somebody in your position (some background in programming, but little in this area) it's my top recommendation.
Though I've said it before, I think it bears repeating: IMO, all of Robert Sedgewick's books on algorithms should be avoided. Algorithms in C++ is probably the worst of them, but the others are only marginally better. The code they include (again, especially the C++ version) is truly execrable, and the descriptions of algorithms are often incomplete and/or misleading. The most recent editions have fixed some of the problems, but (IMO) not nearly enough to qualify as something that should ever be recommended. If there was no alternative, you could probably get by with these, but given the number of alternatives that are dramatically superior, the only reason to read these at all is if somebody gives them to you, and you absolutely can't afford anything else.
As far as algorithms versus design patterns go, the line can get blurry in places, but generally an algorithm is much more tightly defined. An algorithm will normally have a specific, tightly defined input which it processes in a specific way to produce an equally specific result/output. A design pattern tends to be more loosely defined, more generic. An algorithm can be generic as well (e.g., a sorting algorithm might require a type that defines a strict weak ordering), but it still has specific requirements on the type.
A design pattern tends to be somewhat more loosely defined. For example, the visitor pattern involves processing groups of objects -- but we don't want to modify the types of those objects when we decide we need to process them in a new and different way. We do that by defining the processes separately from the objects to be processed, along with how we'll traverse the groups of objects, and allow a process to work with each.
To look at it from a rather different direction, you can usually implement an algorithm with a function or a small group of functions. A design pattern tends to be oriented more toward the style in which you write your code, rather than just "here's a function, use it."
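A small sketch of the contrast (all names here are illustrative): binary_search below is an algorithm, a self-contained function with a precisely specified input and output, while the callback-based traversal next to it is closer to a pattern, a way of structuring code so that the operation applied is decoupled from how the elements are visited.

```c
#include <stdio.h>

/* Algorithm: binary search over a sorted int array, returns index or -1. */
static int binary_search(const int *a, int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key) return mid;
        if (a[mid] < key)  lo = mid + 1;
        else               hi = mid - 1;
    }
    return -1;
}

/* Pattern-like structure: traversal is separated from the operation applied. */
static void visit_all(const int *a, int n, void (*visit)(int)) {
    for (int i = 0; i < n; i++)
        visit(a[i]);
}

static void print_item(int x) { printf("%d ", x); }

int main(void) {
    int a[] = { 2, 3, 5, 7, 11, 13 };
    int n = sizeof a / sizeof a[0];
    printf("index of 7: %d\n", binary_search(a, n, 7));
    visit_all(a, n, print_item);
    putchar('\n');
    return 0;
}
```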
"Algorithms in C, Parts 1-5 (Bundle): Fundamentals, Data Structures, Sorting, Searching, and Graph Algorithms (3rd Edition)"
I cannot stress enough how good that series is.

Why can Conway’s Game of Life be classified as a universal machine?

I was recently reading about artificial life and came across the statement, "Conway’s Game of Life demonstrates enough complexity to be classified as a universal machine." I only had a rough understanding of what a universal machine is, and Wikipedia only brought me as close to understanding as Wikipedia ever does. I wonder if anyone could shed some light on this very sexy statement?
Conway's Game of Life seems, to me, to be a lovely distraction with some tremendous implications: I can't make the leap between that and a calculator. Is that even the leap that I should be making?
Paul Rendell implemented a Turing machine in Life. Gliders represent signals, and interactions between them are gates and logic that together can create larger components which implement the Turing machine.
Basically, any automatic machinery that can implement AND, OR, and NOT can be combined together in complex enough ways to be Turing-complete. It's not a useful way to compute, but it meets the criteria.
You can build a Turing machine out of Conway's life - although it would be pretty horrendous.
The key is in gliders (and related patterns) - these move (slowly) along the playing field, so can represent streams of bits (the presence of a glider for a 1 and the absence for a 0). Other patterns can be built to take in two streams of gliders (at right angles) and emit another stream of bits corresponding to the AND/OR/etc of the original two streams.
EDIT: There's more on this on the LogiCell web site.
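For context, everything described above (gliders, guns, the gates built from them) arises from repeatedly applying a single local rule. Here is a minimal sketch of one generation in C on a small fixed grid, seeded with a glider; the grid size and the cut-off boundary are arbitrary simplifications of the infinite plane.

```c
#include <stdio.h>
#include <string.h>

#define W 10
#define H 10

/* Count the live neighbours of cell (y, x); cells off the grid count as dead. */
static int neighbours(const char g[H][W], int y, int x) {
    int n = 0;
    for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++) {
            if (dy == 0 && dx == 0) continue;
            int yy = y + dy, xx = x + dx;
            if (yy >= 0 && yy < H && xx >= 0 && xx < W)
                n += g[yy][xx];
        }
    return n;
}

/* Apply one generation of the B3/S23 rule: a dead cell with exactly 3
 * neighbours is born, a live cell with 2 or 3 neighbours survives. */
static void step(char g[H][W]) {
    char next[H][W] = { 0 };
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            int n = neighbours(g, y, x);
            next[y][x] = g[y][x] ? (n == 2 || n == 3) : (n == 3);
        }
    memcpy(g, next, sizeof next);
}

int main(void) {
    char grid[H][W] = { 0 };
    /* Seed a glider in the top-left corner. */
    grid[0][1] = grid[1][2] = grid[2][0] = grid[2][1] = grid[2][2] = 1;

    for (int gen = 0; gen < 4; gen++)
        step(grid);

    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++)
            putchar(grid[y][x] ? '#' : '.');
        putchar('\n');
    }
    return 0;
}
```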
Conway's "Life" can be taken even further: It's not only possible to build a Life pattern that implements a Universal Turing Machine, but also a Von Neumann "Universal Constructor:" http://conwaylife.com/wiki/Universal_constructor
Since a "Universal Constructor" can be programmed to construct any pattern of cells, including a copy of itself, Coway's "Life" is therefore capable of "self-replication," not just Universal Computation.
I highly recommend the book The Recursive Universe by Poundstone. Out of print, but you can probably find a copy, perhaps in a good library. It's almost all about the power of Conway's Life, and the things that can exist in a universe with that set of natural laws, including self-reproducing entities and IIRC, Darwinian evolution.
And Paul Chapman actually built a universal Turing machine with the Game of Life (http://www.igblan.free-online.co.uk/igblan/ca/) by building a "Universal Minsky Register Machine":
The pattern is constructed on a lattice of 30x30 squares. Lightweight Spaceships (LWSSs) are used to communicate between components, which have P60 logic (except for Registers - see below). A LWSS takes 60 generations to cross a lattice square. Every 60 generations, therefore, any inter-component LWSS (pulse) is in the same position relative to the square it's in, allowing for rotation.
