Can you run 64 bit applications on a quantum computer? [closed] - quantum-computing

Since conventional computers use bits, which can either be 1 or 0, can we simulate a 64-bit operating system on a quantum computer by restricting the values of the qubits to either 0 or 1, instead of 0, 1, and everything in between?

Yes, in principle it's possible to simulate any classical computation on a sufficiently large quantum computer.
Any deterministic classical circuit can be implemented as an equivalent quantum circuit using the Toffoli (CCNOT) gate, which can simulate the classical NAND and FANOUT gates; these are universal for classical circuits.
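As a purely classical illustration of that point (a sketch in plain C, not quantum code, and the helper name toffoli is mine): presetting the Toffoli target bit to 1 makes the target output the NAND of the two control bits.

    #include <stdio.h>

    /* Classical truth-table view of the Toffoli (CCNOT) gate:
     * the target bit c is flipped only when both controls a and b are 1. */
    static void toffoli(int a, int b, int *c)
    {
        *c ^= (a & b);
    }

    int main(void)
    {
        /* With the target preset to 1, the target output equals NAND(a, b). */
        for (int a = 0; a <= 1; ++a)
            for (int b = 0; b <= 1; ++b) {
                int c = 1;
                toffoli(a, b, &c);
                printf("a=%d b=%d -> NAND=%d\n", a, b, c);
            }
        return 0;
    }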
A quantum computer can also simulate non-deterministic classical circuits by generating fair coin tosses: a Hadamard gate followed by a measurement gives a 0 or 1 result with 50/50 probability.
So with the proper software you can simulate any classical computation. However, running a classical OS on a quantum computer would be rather pointless, since classical computers do this quite well already.

Related

How to increase performance of sin and cos using neon instructions? [closed]

How can the arm_neon.h header file be used to increase the performance of code that uses the sin and cos functions?
The board used is a Xilinx T1 accelerator card with the ARMv8-A architecture (Cortex-A53).
The language is C.
arm_neon.h contains SIMD intrinsics, which offer a C API to access/invoke individual low-level instructions.
Thus, if you intend to speed up sin/cos with arm_neon.h, the approach is to rewrite those trigonometric functions using vector arithmetic that calculates four values at the same time.
Things you need to consider are:
the code needs to be branchless
you need to define how accurate you need to be
you need to define the input range (is there any need to handle multiples of 2*pi?)
you need to define the input unit (radians vs. degrees vs. fractions of 2^n)
All of this will determine what kind of approximation to use -- polynomial, piecewise linear, or rational polynomial -- and which steps or corner cases can be omitted.
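As a rough sketch of what such a rewrite can look like (plain Taylor coefficients, no range reduction, and the function names vsinq_f32_approx/sin4 are made up here; a real implementation would use minimax coefficients and handle the full input range):

    #include <arm_neon.h>

    /* Branchless sine approximation for four floats at once, valid only on a
     * reduced range around zero (roughly [-pi/2, pi/2]):
     * sin(x) ~= x - x^3/6 + x^5/120 - x^7/5040, evaluated with Horner's scheme. */
    static inline float32x4_t vsinq_f32_approx(float32x4_t x)
    {
        float32x4_t x2 = vmulq_f32(x, x);

        float32x4_t p = vdupq_n_f32(-1.0f / 5040.0f);
        p = vmlaq_f32(vdupq_n_f32( 1.0f / 120.0f), p, x2);  /* p =  1/120 + p*x2 */
        p = vmlaq_f32(vdupq_n_f32(-1.0f /   6.0f), p, x2);  /* p = -1/6   + p*x2 */
        p = vmlaq_f32(vdupq_n_f32( 1.0f),          p, x2);  /* p =  1     + p*x2 */

        return vmulq_f32(x, p);                             /* sin(x) ~= x * p   */
    }

    /* Usage: load four inputs, approximate, store four results. */
    void sin4(const float *in, float *out)
    {
        vst1q_f32(out, vsinq_f32_approx(vld1q_f32(in)));
    }

Cosine works the same way with the even-power series; how many terms you keep and how you reduce the range follow from the points above.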

When does an algorithm become considered artificial intelligence? [closed]

I understand that an algorithm is a set of instructions. AI is essentially the same thing, only more complicated? Let's say I use a minimax algorithm to choose the moves played on a tic-tac-toe board; generally people would consider this AI. But if I implement an algorithm to solve a Rubik's Cube, is that considered AI?
I guess what I'm asking is: is it the complexity of the algorithm, the fact that situations change on the fly, the ignorance of the user/programmer as to how the algorithm works, or all/some of the above? Or am I missing something?
I feel like this field is quite arbitrary, and I imagine for good reason: complexity is complex.
It is indeed quite arbitrary.
If you consult Wikipedia you might find the following definition, which in my personal opinion captures the idea quite accurately:
Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. A more elaborate definition characterizes AI as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation."
To take your Rubik's Cube as an example, there are at least two ways you could write the algorithm to solve the puzzle. First, any cube can be solved by following a hardcoded path or set of instructions from a given start position. Implementing this would not be considered AI in my opinion, as the machine itself is not learning anything; it just follows a well-defined path of instructions to the end.
A second way to implement this would be to have the program start solving the cube randomly, but have the machine remember its moves and learn the most effective path to the solution. When solving the next cube, the machine can build upon this newly learned information to solve it faster, and again learn from that iteration to improve its algorithm.
So in short, as far as I'm concerned, it can be considered AI when a machine is capable of optimizing/extending its own algorithms to become more efficient in its tasks.

Handling decimals in C without float [closed]

I have a problem in C: I am not allowed to use floats, as the microcontroller the code will be flashed to does not support that data type. Now all my values are being rounded off to integers, as expected. How do I handle this case?
Some quick research points to bitwise operations such as left shift and right shift. I know what these operations are, but I do not know how to use them to achieve what I want.
Another possibility is the Q number format.
You will get some results if you google "Q number format" or some variations.
It is often used for DSP-related topics in C. Here is another blog post that explains that number format, and here is an example code implementation of Q numbers in C.
In general you can say that Q numbers represent a value between -1 and 1 without using floating-point arithmetic.
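A minimal sketch of the idea (my own illustration, not the linked implementation), assuming the common Q15 layout where an int16_t holds a value in [-1, 1) scaled by 2^15:

    #include <stdint.h>
    #include <stdio.h>

    typedef int16_t q15_t;                    /* value in [-1, 1), scaled by 2^15 */

    /* The double conversions are only for demonstrating on a host;
     * on the target you work with the raw integers directly. */
    static q15_t  q15_from_double(double x) { return (q15_t)(x * 32768.0); }
    static double q15_to_double(q15_t x)    { return x / 32768.0; }

    /* Multiply two Q15 numbers: widen to 32 bits, then shift right by 15.
     * (Assumes the usual arithmetic right shift for negative values.) */
    static q15_t q15_mul(q15_t a, q15_t b)
    {
        return (q15_t)(((int32_t)a * (int32_t)b) >> 15);
    }

    int main(void)
    {
        q15_t a = q15_from_double(0.5);
        q15_t b = q15_from_double(-0.25);
        printf("0.5 * -0.25 = %f\n", q15_to_double(q15_mul(a, b)));  /* -0.125 */
        return 0;
    }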
Normally a microcontroller doesn't have a floating-point unit; everything works with integers. But it's up to you which unit you use for your integers.
For example:
100 could be 100 cm or 1.00 m
1000 could be 100.0 cm or 1.000 m, and so on.
Please have a look at the description:
electronics.stackexchange
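A tiny sketch of that scaled-integer idea, keeping a length in millimetres so no float is ever needed (the names are just for illustration):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t length_mm = 1234;   /* stored in millimetres, i.e. 1.234 m */

        /* Print as metres with three decimals using only integer arithmetic.
         * (Negative values would need extra handling of the sign.) */
        printf("%ld.%03ld m\n",
               (long)(length_mm / 1000),
               (long)(length_mm % 1000));
        return 0;
    }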

Can Endianess of system be changed using c code [closed]

I know that endianness, little or big, is inherent to the system. Then how is it possible to change it using C code? I have seen code which claims it can change the endianness.
Endianness depends on the CPU hardware, so normally you can't do anything about it.
The code you have seen was most likely just swapping bytes around from one endianness to the other. That said, some CPUs (for example some PowerPC parts) do allow the endianness to be configured by writes to a hardware register.
You can't change the endianness of the system in general (there are bi-endian architectures, though); that would require changing the instruction set. You can change the endianness of the data you use, however. Take a look at this question to see how.
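For the data side of this, a byte swap in C is all it takes; a minimal sketch for a 32-bit value:

    #include <stdint.h>
    #include <stdio.h>

    /* Reverse the byte order of a 32-bit value. The CPU's own endianness is
     * untouched; only the representation of this particular datum changes. */
    static uint32_t swap32(uint32_t x)
    {
        return ((x & 0x000000FFu) << 24) |
               ((x & 0x0000FF00u) <<  8) |
               ((x & 0x00FF0000u) >>  8) |
               ((x & 0xFF000000u) >> 24);
    }

    int main(void)
    {
        uint32_t v = 0x12345678u;
        printf("0x%08X -> 0x%08X\n", (unsigned)v, (unsigned)swap32(v));
        return 0;
    }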

How to give an estimation of the energy consumed by a program on an ARM platform? [closed]

Is there a way to estimate the energy consumed by a program on an ARM CPU? In embedded systems, energy consumption is one of the most important parameters and I was wondering whether it is possible for a programmer to know approximately how much energy is needed to run the program?
For example, since on an ARM CPU division executes over multiple cycles, I imagine that code using divisions would consume more energy than code that doesn't. But this reasoning is quite intuitive; is there a better way to quantify the energy consumed by a CPU when executing code?
I don't think there are any ARM-specific tricks here (and 'ARM' covers umpteen different things anyway). You usually look at the current consumption in the various different power states you use (run, sleep, etc) and then estimate what proportion of time is spent in each state. This lets you calculate average current/power.
It doesn't usually make much sense to say 'this instruction uses a lot of power' - what you might instead care about is 'this sequence of instructions takes a lot of time to run, hence I can't get back to sleep quickly'.
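A back-of-the-envelope example of that duty-cycle estimate (all of the currents, times and the 3.0 V supply below are invented figures, purely for illustration):

    #include <stdio.h>

    int main(void)
    {
        /* Example: the device wakes for 10 ms out of every second. */
        double i_run_ma   = 30.0;    /* current while running, mA  */
        double i_sleep_ma = 0.05;    /* current while sleeping, mA */
        double t_run_s    = 0.010;
        double t_sleep_s  = 0.990;

        double i_avg_ma = (i_run_ma * t_run_s + i_sleep_ma * t_sleep_s)
                          / (t_run_s + t_sleep_s);

        printf("average current: %.4f mA\n", i_avg_ma);
        printf("average power:   %.4f mW at 3.0 V\n", i_avg_ma * 3.0);
        return 0;
    }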
The closest you'll get with off-the-shelf tools is something like http://ds.arm.com/ds-5/optimize/arm-energy-probe/
Generally, battery-powered systems have fuel gauges which are exposed through sysfs entries and can report how much current is being drawn. Think of it like a smartphone's battery/charge indicator. These are generally not that reliable and are hard to correlate with the exact run time of an application, but they may give you a rough estimate.
