Polynomial Equations in Q#: E = MC^2

I am trying to learn quantum computing and have started to understand some of the basic gates and other concepts, but I am not able to see how to put it into practice on real-world problems.
Let's say I want to write a function in Q# that returns the value of E in the equation
E = MC^2
Can someone help me write this operation?

To answer the literal question: if M and C are just floating-point numbers, the calculation can be done using purely classical Q# constructs:
// The function that carries out the computation itself
function Energy (m : Double, c : Double) : Double {
    return m * c ^ 2.0;
}

// The operation that you'll invoke to pass the parameters to the function and to print the results
operation PrintEnergy () : Unit {
    let c = 299792458.0;
    let energy1 = Energy(1.0, c);
    Message($"Calculated energy of 1 gram of mass = {energy1}");
    let energy2 = Energy(2.0, c);
    Message($"Calculated energy of 2 grams of mass = {energy2}");
}
The output is:
Calculated energy of 1 gram of mass = 89875517873681760
Calculated energy of 2 grams of mass = 1.7975103574736352E+17
You will notice that this code fragment does not use any qubits or gates, so it's not really a good example of using quantum computing to solve real-world problems, even though it's implemented in a quantum programming language. This problem involves only very simple mathematical computations, which can be done very efficiently on classical computers.
Quantum computers are going to use a co-processor model of computation - we'll use them to do computations that they are well suited to do (such as solving chemistry problems), and use classical computers for the rest of the computations.
To learn to apply quantum computing to solving problems with Q#, you can check out the Quantum Katas - a collection of tutorials and programming exercises. In particular, they show how to translate classical problems such as SAT or graph coloring into a form that can take advantage of quantum computing algorithms. (Disclosure: I'm the maintainer of this project)

Related

Generating a Population for Genetic Algorithm in C

I have just learned the basics (more of an introduction) of genetic algorithms. For an assignment, we are to find the value of x that maximizes f(x) = sin(x*pi/256) in the interval 0 <= x <= 256.
While I understand how to get the fitness of an individual and how to normalize the fitness, I am a little lost on generating the population. In the text, for the purposes of performing crossover and mutation, each individual is represented using 8 bits. Example:
189 = 10111101
35 = 00100011
My questions are:
Using C, what is the best way to generate the population? I have looked it up and all I could find was using uint8_t. I'm thinking of generating it as an array and then finding a way to convert it to its integer representation.
What purpose does normalizing the fitness serve?
As this is my first time writing a program that uses a genetic algorithm, is there any advice I should keep in mind?
Thank you for your time.
The usual way is for the initial population to be random, but if you have some preliminary optimization results you can form the population around them.
It is very common to use hybrid algorithms, in which the GA is mixed with algorithms like PSO (particle swarm optimization), simulated annealing, and so on.
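To make the first question concrete, here is a rough C sketch of generating a random population of 8-bit individuals and evaluating the assignment's fitness function; the population size and helper names are my own choices, not from the text:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>

#define POP_SIZE 20
#define PI 3.14159265358979323846

/* Fitness from the assignment: f(x) = sin(x * pi / 256) on 0 <= x <= 256. */
static double fitness(uint8_t x) {
    return sin((double)x * PI / 256.0);
}

int main(void) {
    uint8_t population[POP_SIZE];

    srand((unsigned)time(NULL));

    /* Each individual is one random 8-bit integer; its bit pattern is the
       chromosome used later for crossover and mutation. */
    for (int i = 0; i < POP_SIZE; i++) {
        population[i] = (uint8_t)(rand() % 256);
    }

    for (int i = 0; i < POP_SIZE; i++) {
        printf("individual %2d: x = %3u, fitness = %f\n",
               i, (unsigned)population[i], fitness(population[i]));
    }
    return 0;
}

Crossover and mutation can then operate directly on the bits of each uint8_t, for example flipping a randomly chosen bit with population[i] ^= 1 << (rand() % 8).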

Gaussian elimination (with no pivoting) in CUDA

I am trying to implement Gaussian elimination with CUDA.
I have an N*N matrix. To get the new elements of this matrix, I use the CPU code below, where C.width = N:
for (int z = 0; z < C.width - 1; z++)
{
    for (int c = z + 1; c < C.width; c++)
    {
        for (int d = z; d < C.width; d++)
        {
            C.elements[c*C.width+d] = C.elements[c*C.width+d] - (B.elements[c*C.width+z] * C.elements[z*C.width+d]);
        }
    }
}
I am trying to implement it with CUDA. For example, for N=512
dim3 dimBlock(16,16,1);
dim3 dimGrid(32,32,1);
MatMulKernel<<<dimGrid, dimBlock>>>(d_A, d_B, d_C);
I think for every iteration I should use (N-i)*N threads to calculate the element updates, that is:
if (idx > 511 || idy > 510)
    return;

for (int i = 1; i < 512; i++)
{
    if (idx >= i-1 && idy >= i-1)
        C.elements[(idy+1)*C.width+idx] = C.elements[(idy+1)*C.width+idx]
            - ((C.elements[(idy+1)*C.width+(i-1)] / C.elements[(i-1)*C.width+(i-1)]) * C.elements[(i-1)*C.width+idx]);
    __syncthreads();
}
The results obtained on GPU and CPU are the same, but the processing time is Time(CPU)=2*Time(GPU)
For N=512: Time(CPU) = 1900 ms; Time(GPU) = 980 ms
For N=1024: Time(CPU) = 14000 ms; Time(GPU) = 7766 ms
...
I think the speed-up should be larger than what I have now. Is there any mistake in my parallel code? Can you help me rewrite it?
Thanks for any help!
Gaussian elimination can be seen as a two-step procedure. The first step aims at transforming the linear system into an upper triangular linear system, and the second consists of solving the resulting upper triangular linear system. The second step is trivial in CUDA and can be efficiently performed with cublasStrsm. The first step, which you are addressing in your post, is the tricky part.
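For reference, and assuming the upper triangular factor and the right-hand side have already been copied to the device (the names d_U and d_b below are placeholders, not from the question), the second step could look roughly like this with cuBLAS:

#include <cublas_v2.h>

// Solve U * x = b for x, where d_U is an N x N upper triangular matrix and
// d_b holds the right-hand side; the solution overwrites d_b.
// cuBLAS assumes column-major storage, so a row-major matrix like the one in
// the question would need transposition or an adjusted call.
void back_substitute(cublasHandle_t handle, int N, const float *d_U, float *d_b)
{
    const float alpha = 1.0f;
    cublasStrsm(handle,
                CUBLAS_SIDE_LEFT,        // the triangular matrix is on the left
                CUBLAS_FILL_MODE_UPPER,  // U is upper triangular
                CUBLAS_OP_N,             // no transpose
                CUBLAS_DIAG_NON_UNIT,    // diagonal entries are not assumed to be 1
                N, 1,                    // b is treated as an N x 1 matrix
                &alpha,
                d_U, N,
                d_b, N);
}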
There are several optimized approaches to solving the first step. I think your approach is somewhat naive, and I recommend studying the literature to achieve decent speedups.
Basically, the transformation of the original system into an upper triangular one can be performed with a tiling approach which, in some respects, resembles the tiling approach used to perform matrix-matrix multiplication in the classical example of the CUDA C Programming Guide.
The tiling approach can be performed either by purposely written kernels or by making massive use of cuBLAS routines.
Last month (November 2013), the following paper
Manuel Carcenac, "From tile algorithm to stripe algorithm: a CUBLAS-based parallel implementation on GPUs of Gauss method for the resolution of extremely large dense linear systems stored on an array of solid state devices", Journal of Supercomputing, DOI 10.1007/s11227-013-1043-3
has proposed a tiling/striping approach based on the use of cuBLAS.
All the above mentioned approaches are summarized in a presentation available at M. Carcenac's webpage entitled Application: linear system resolution with Gauss method.
Furthermore, a downloadable Visual Studio 2010 project implementing all of them with some performance testing is available at the Gaussian elimination with CUDA post. From the available code, you can make your own tests for your architecture of interest and experience the improvements the approach by M. Carcenac is introducing with respect to the others.

How do I implement a set of qubits on my computer?

I would like to get familiar with quantum computing basics.
A good way to get familiar with it would be writing very basic virtual quantum computer machines.
From what I can understand of it, the effort of implementing a single qubit cannot simply be duplicated to implement a two-qubit system. But I don't know how I would implement a single qubit either.
How do I implement a qubit?
How do I implement a set of qubits?
Example Code
If you want to start from something simple but working, you can play around with this basic quantum circuit simulator on jsfiddle (about 2k lines, but most of that is UI stuff [drawing and clicking] and maths stuff [defining complex numbers and matrices]).
State
The state of a quantum computer is a set of complex weights, called amplitudes. There's one amplitude for each possible classical state. In the case of qubits, the classical states are just the various states a normal bit can be in.
For example, if you have three bits, then you need a complex weight for the 000, 001, 010, 011, 100, 101, 110, and 111 states.
var threeQubitState = new Complex[8];
The amplitudes must satisfy a constraint: if you add up their squared magnitudes, the result is 1. Classical states correspond to one amplitude having magnitude 1 while the others are all 0:
threeQubitState[3] = 1; // the system is 100% in the 011 state
Operations
Operations on quantum states let you redistribute the amplitude by flowing it between the classical states, but the flows you choose must preserve the squared-magnitudes-add-up-to-1 property in all cases. More technically, the operation must correspond to some unitary matrix.
var myOperation = state => new[] {
    (state[1] + state[0])/sqrt(2),
    (state[1] - state[0])/sqrt(2),
    state[2],
    state[3],
    state[4],
    state[5],
    state[6],
    state[7]
};
var myNewState = myOperation(threeQubitState);
... and those are the basics. The state is a list of complex numbers with unit 2-norm, the operations are unitary matrices, and the probability of measuring a state is just its squared amplitude.
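If you prefer plain C over the JavaScript-flavoured pseudocode, the same basics fit in a few lines with complex.h; this is just a minimal sketch of the ideas above, not code from the linked simulator:

#include <complex.h>
#include <math.h>
#include <stdio.h>

#define DIM 8  /* 2^3 amplitudes for a 3-qubit state */

int main(void) {
    double complex state[DIM] = {0};
    state[3] = 1.0;              /* 100% in the 011 state, as above */

    /* The same operation as myOperation above: mix amplitudes 0 and 1,
       leave the others untouched. */
    double complex a0 = state[0], a1 = state[1];
    state[0] = (a1 + a0) / sqrt(2);
    state[1] = (a1 - a0) / sqrt(2);

    /* The squared magnitudes still add up to 1. */
    double norm = 0;
    for (int i = 0; i < DIM; i++)
        norm += creal(state[i] * conj(state[i]));
    printf("total probability = %f\n", norm);   /* prints 1.000000 */
    return 0;
}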
Etc
Other things you probably need to consider:
What kinds of operations do you want to include?
A 1-qubit operation is a 2x2 matrix and a 3-qubit operation is an 8x8 matrix. How do you convert a 1-qubit operation into an 8x8 matrix when applying it to a single qubit in a 3-qubit state? (Use the Kronecker product; see the sketch after this list.)
What kinds of tricks can you use to speed up the simulation? For example, if only a few states are non-zero, or if the qubits are not entangled, there's no need to do a full matrix multiplication.
How does the user tell the simulation what to do? How can you represent what's going on for the user? There's an awful lot of numbers flowing around...
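For the Kronecker product question above, here is one way to lift a 2x2 gate to the full 8x8 matrix. lift_gate is my own helper, and it computes the product entry by entry rather than building it from explicit identity factors:

#include <complex.h>
#include <math.h>
#include <stdio.h>

#define N_QUBITS 3
#define DIM (1 << N_QUBITS)

/* Lift a 2x2 single-qubit gate to the full 2^n x 2^n matrix that applies it
   to qubit `target` (here, bit position `target` of the basis-state index)
   and the identity to every other qubit. This is exactly the Kronecker
   product I (x) ... (x) G (x) ... (x) I, computed entry by entry. */
static void lift_gate(const double complex g[2][2], int target,
                      double complex full[DIM][DIM])
{
    for (int r = 0; r < DIM; r++) {
        for (int c = 0; c < DIM; c++) {
            /* All qubits other than `target` must keep their value. */
            int others_r = r & ~(1 << target);
            int others_c = c & ~(1 << target);
            if (others_r != others_c) {
                full[r][c] = 0;
            } else {
                int br = (r >> target) & 1;   /* target bit in the row index */
                int bc = (c >> target) & 1;   /* target bit in the column index */
                full[r][c] = g[br][bc];
            }
        }
    }
}

int main(void) {
    const double s = 1.0 / sqrt(2.0);
    const double complex hadamard[2][2] = {{s, s}, {s, -s}};
    double complex full[DIM][DIM];

    lift_gate(hadamard, 1, full);   /* Hadamard on the middle qubit */

    /* Print the 8x8 matrix (real parts only; the Hadamard is real). */
    for (int r = 0; r < DIM; r++) {
        for (int c = 0; c < DIM; c++)
            printf("% .3f ", creal(full[r][c]));
        printf("\n");
    }
    return 0;
}

Multiplying the resulting 8x8 matrix into the length-8 amplitude vector applies the gate to the chosen qubit while leaving the others alone; for larger systems you would normally avoid materialising the full matrix and apply the 2x2 gate directly to pairs of amplitudes, which ties in with the speed-up tricks mentioned above.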
I don't actually know the answer, but an interesting place to start reading about qubits is this article. It doesn't describe in detail how entangled qubits work, but it hints at the complexity involved:
If this is how complicated things can get with only two qubits, how
complicated will it get for 3 or 4, or 100? It turns out that the
state of an N-qubit quantum computer can only be completely defined
when plotted as a point in a space with (4^N-1) dimensions. That means
we need 4^N good old fashion classical numbers to simulate it.
Note that this is the maximum space complexity, which for example is about 1 billion numbers (2^30=4^15) for 15 qubits. It says nothing about the time complexity of a simulation.
The article that @Qwertie cites is a very good introduction. If you want to implement these on your computer, you can play with the libquantum simulator, which implements sophisticated quantum operations in a C library. You can look at this example to see what using the code is like.
The information is actually stored in the interactions between different qubits, so no, implementing one qubit will not translate directly into using multiple qubits. Another way you could play around is to use existing languages like QCL or Google's QCP (http://qcplayground.withgoogle.com/#/home).

Can a neural network learn a multiplexer pattern?

Let's say you have 3 inputs: A, B, C. Can an artificial neural network (not necessarily feed forward) learn this pattern?
if C > k
    output is A
else
    output is B
Are there certain types of networks that can learn this, or that are particularly well suited to this type of problem?
Yes, that's a relatively easy pattern for a feedforward neural network to learn.
I think you will need at least 3 layers, assuming sigmoid activation functions:
1st layer can test C>k (and possibly also scale A and B down into the linear range of the sigmoid function)
2nd layer can calculate A/0 and 0/B conditional on the 1st layer
3rd (output) layer can perform a weighted sum to give A/B (you may need to make this layer linear rather than sigmoid depending on the scale of values you want)
Having said that, if you genuinely know the structure of your problem and what kind of calculation you want to perform, then neural networks are unlikely to be the most effective solution: they are better in situations where you don't know much about the exact calculations required to model the functions/relationships.
If the inputs can only be zeros and ones, then the network needs just three neurons: y0, y1 and z. Each neuron has a Heaviside step function as its activation function. The neurons y0 and z have bias = 0.5; the neuron y1 has bias = 1.5. (In the original answer the weights are shown above the corresponding connections in a diagram; one consistent assignment is sketched below.) When s = 0, the output z = d0. When s = 1, the output z = d1.
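Since the diagram is not reproduced here, the following C sketch uses one weight assignment that is consistent with the stated biases (0.5 for y0 and z, 1.5 for y1); the weights of +1 and -1 are my assumption:

#include <stdio.h>

/* Heaviside step activation: 1 if the weighted sum exceeds the bias. */
static int step(double weighted_sum, double bias) {
    return weighted_sum > bias ? 1 : 0;
}

/* y0 fires when d0 = 1 and s = 0, y1 fires when d1 = 1 and s = 1,
   and z is simply the OR of y0 and y1, i.e. z = s ? d1 : d0. */
static int multiplexer(int d0, int d1, int s) {
    int y0 = step( 1.0*d0 - 1.0*s, 0.5);   /* bias 0.5 */
    int y1 = step( 1.0*d1 + 1.0*s, 1.5);   /* bias 1.5 */
    return  step( 1.0*y0 + 1.0*y1, 0.5);   /* bias 0.5 */
}

int main(void) {
    for (int s = 0; s <= 1; s++)
        for (int d0 = 0; d0 <= 1; d0++)
            for (int d1 = 0; d1 <= 1; d1++)
                printf("s=%d d0=%d d1=%d -> z=%d\n",
                       s, d0, d1, multiplexer(d0, d1, s));
    return 0;
}

Enumerating all eight input combinations confirms that z = d0 when s = 0 and z = d1 when s = 1.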
If the inputs are continuous, then Sigmoid, tanh or ReLU can be used as the activation functions of the neurons, and the network can be trained with the back-propagation algorithm.

Algorithm for voice comparison

Given two recorded voices in digital format, is there an algorithm to compare the two and return a coefficient of similarity?
I recommend taking a look at the HTK toolkit for speech recognition (http://htk.eng.cam.ac.uk/), especially the part on feature extraction.
Features that I would assume to be good indicators:
Mel-Cepstrum coefficients (general timbre)
LPC (for the harmonics)
Given your clarification I think what you are looking for falls under speech recognition algorithms.
Even though you are only looking for a measure of similarity and not trying to turn speech into text, the concepts are still the same, and I would not be surprised if a large part of the algorithms turned out to be quite useful.
However, you will have to define this coefficient of similarity more formally and precisely to get anywhere.
EDIT:
I believe speech recognition algorithms would be useful because they abstract the sound and compare it to some known forms. Conceptually this might not be that different from taking two recordings, abstracting them, and comparing them.
From the Wikipedia article on HMMs:
"In speech recognition, the hidden
Markov model would output a sequence
of n-dimensional real-valued vectors
(with n being a small integer, such as
10), outputting one of these every 10
milliseconds. The vectors would
consist of cepstral coefficients,
which are obtained by taking a Fourier
transform of a short time window of
speech and decorrelating the spectrum
using a cosine transform, then taking
the first (most significant)
coefficients."
So if you run such an algorithm on both recordings you would end up with coefficients that represent the recordings and it might be far easier to measure and establish similarities between the two.
But again, you now come to the question of defining the 'similarity coefficient', and introducing dogs and horses did not really help.
(Well, it does a bit, but in terms of evaluating algorithms and choosing one over another, you will have to do better.)
There are many different algorithms - the general name for this task is Speaker Identification - start with this Wikipedia page and work from there: http://en.wikipedia.org/wiki/Speaker_recognition
I'm not sure this will work for sound files, but I hope it gives you an idea of how to proceed. It is a basic way to find a pattern (image) in another image.
You first have to calculate the FFT of both sound files and then do a correlation. As a formula it would look like this (pseudocode):
fftSoundFile1 = fft(soundFile1);
fftConjSoundFile2 = conj(fft(soundFile2));
result_corr = real(ifft(fftSoundFile1 .* fftConjSoundFile2));
where fft = fast Fourier transform, ifft = inverse FFT, and conj = complex conjugate.
The fft is performed on the sample values of the sound files.
The peaks in the result_corr vector will then give you the positions of high correlation.
Note that both sound files must in this case be of the same size; otherwise you have to zero-pad the shorter one up to a vector of length max(soundFileLength).
Regards
Edit: .* means (in MATLAB style) a component-wise multiplication; you must not do a vector/matrix multiplication!
Next edit: Note that you have to operate with complex numbers, but there are several Complex classes out there, so I don't think you will have to worry about this.
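To make the pseudocode above concrete, here is a small self-contained C sketch of the same frequency-domain correlation. It uses a naive O(N^2) DFT purely to stay dependency-free (a real application would use an FFT library such as FFTW or KissFFT), and the function names are my own:

#include <complex.h>
#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979323846

/* Naive O(N^2) DFT; an FFT library would replace this for real audio. */
static void dft(int n, const double complex *in, double complex *out, int inverse) {
    double sign = inverse ? 1.0 : -1.0;
    for (int k = 0; k < n; k++) {
        double complex sum = 0;
        for (int t = 0; t < n; t++)
            sum += in[t] * cexp(sign * 2.0 * PI * I * (double)k * t / n);
        out[k] = inverse ? sum / n : sum;
    }
}

/* Circular cross-correlation of two equal-length signals:
   corr = real(ifft(fft(x) .* conj(fft(y)))), as in the pseudocode above. */
static void cross_correlate(int n, const double *x, const double *y, double *corr) {
    double complex cx[n], cy[n], fx[n], fy[n], prod[n], res[n];
    for (int i = 0; i < n; i++) { cx[i] = x[i]; cy[i] = y[i]; }
    dft(n, cx, fx, 0);
    dft(n, cy, fy, 0);
    for (int i = 0; i < n; i++) prod[i] = fx[i] * conj(fy[i]);
    dft(n, prod, res, 1);
    for (int i = 0; i < n; i++) corr[i] = creal(res[i]);
}

int main(void) {
    /* Toy "signals": y is x circularly shifted by 3 samples. */
    double x[8] = {0, 1, 2, 3, 0, 0, 0, 0};
    double y[8] = {3, 0, 0, 0, 0, 0, 1, 2};
    double corr[8];

    cross_correlate(8, x, y, corr);
    for (int i = 0; i < 8; i++)
        printf("lag %d: %f\n", i, corr[i]);   /* peak at lag 3 */
    return 0;
}

The index of the largest entry of corr is the lag at which the two signals line up best; for voice comparison you would more likely correlate feature vectors (such as the cepstral coefficients mentioned above) rather than raw samples.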
