I have a truth table that I want a single threshold unit to learn over 100 epochs, where I supply the inputs, learning rate, target, activation, bias and weights, and track the errors. Is there some software which can do this?
If you have the Neural Network Toolbox for MATLAB, you can use it.
You can design your network in it, feed in your inputs, set the target, learning rate, bias and other parameters, and then run training for 100 epochs.
Scilab is open-source software and the best free alternative to MATLAB. You can use its neural network toolbox for your purpose.
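If you mainly want to watch the learning happen, the algorithm is also small enough to code by hand. Here is a minimal sketch in C of the classic perceptron learning rule run for 100 epochs; the AND truth table, learning rate and zero initial weights are placeholder assumptions, so substitute your own values:

#include <stdio.h>

#define EPOCHS 100
#define N 4  /* rows in the truth table */

int main(void) {
    /* Hypothetical 2-input AND truth table; replace with your own. */
    double x[N][2] = {{0,0},{0,1},{1,0},{1,1}};
    double t[N]    = {0, 0, 0, 1};          /* target outputs  */
    double w[2]    = {0, 0}, bias = 0;      /* initial weights */
    double lr      = 0.1;                   /* learning rate   */

    for (int e = 0; e < EPOCHS; ++e) {
        int errors = 0;
        for (int i = 0; i < N; ++i) {
            double net = w[0]*x[i][0] + w[1]*x[i][1] + bias;
            double y   = (net >= 0) ? 1 : 0;    /* step activation */
            double err = t[i] - y;
            if (err != 0) {                     /* misclassified: update */
                w[0] += lr * err * x[i][0];
                w[1] += lr * err * x[i][1];
                bias += lr * err;
                errors++;
            }
        }
        printf("epoch %3d: %d errors\n", e + 1, errors);
    }
    return 0;
}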
I think quantum teleportation can be realized with the Microsoft Quantum Development Kit, but is it possible to demonstrate it experimentally by placing one piece of data (call it A) on a PC that is not connected to the Internet and having it act on the other side (call it B)?
The sample code does not feel like real teleportation to me, since it just compares two variables within one program on one PC.
At the moment, Microsoft's Quantum Development Kit is focused on making it easier to write programs that act on stationary qubits such as topological qubits built using Majorana fermions. In that context, teleportation is a useful procedure for moving information around inside a single quantum device.
That said, you are quite right that teleportation can also be used to transfer quantum states between distinct quantum devices; the qubits used in such procedures, such as the polarization modes of photons, are often called "flying qubits" to distinguish them from stationary qubits. If you're interested in quantum networking protocols such as quantum key distribution, which focus on what you can do by sending flying qubits, I'd recommend looking at the SimulaQron project. Their implementation of teleportation can be run on two distinct classical computers that share a classical network connection to simulate sending flying qubits.
I've done many projects that involve a PC and an Arduino / PLC / some other kind of microcontroller / processor, and in every project we used a different protocol for communication between the PC application and the embedded one. Usually the hardware / controller developer invents a simple protocol, which keeps changing throughout the project, and takes the form of
Barker | Size | Data | Checksum
This time I'm implementing both sides, so I figured: this has been done a million times before. There must be a base protocol for these things, with implementations in C, C#, Java, and the like.
What I'm looking for is a lightweight layer that turns stream-based serial communication into message-based communication.
I've been looking around for one for a while, but I couldn't find anything on my own.
Do you happen to know one?
I had exactly the same requirements for a recent project and I found nothing simple enough for low-end 8-bit microcontrollers. So I designed MIN (Microcontroller Interconnect Network) to do the job (inspired by CAN and LIN).
The code is on github here: https://github.com/min-protocol/min (check out the wiki there).
I defined a layer 0 (the UART settings) and layer 1 (the frame layer, with checksums, etc.) plus a C API.
I'm also working on a higher layer that formally defines how sensor data (temperature, pressure, voltage, etc.) are packed, with a JSON representation and a tool to autogenerate the embedded code to pack/unpack them from frames. The end goal is to create a Wireshark dissector that can be clipped on to the serial line and when fed with the JSON will display the signals in human-readable form.
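To give a feel for the idea, a signal definition and its generated packer might look something like this (an invented illustration of the approach, not MIN's actual JSON schema or codegen output):

#include <stdint.h>

/* Invented signal spec, as it might appear in the JSON:
   { "name": "temperature", "type": "int16", "scale": 0.1, "offset": -40 }
   The generated packer turns the physical value into raw frame bytes. */
void pack_temperature(double celsius, uint8_t out[2]) {
    int16_t raw = (int16_t)((celsius - (-40.0)) / 0.1);  /* apply offset and scale */
    out[0] = (uint8_t)(raw >> 8);    /* big-endian high byte */
    out[1] = (uint8_t)(raw & 0xFF);  /* low byte */
}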
I wrote a blog post showing a Hello World app running on an Arduino board (with an FTDI UART-USB breakout board carrying the data up to my host PC):
https://kentindell.wordpress.com/2015/02/18/micrcontroller-interconnect-network-min-version-1-0/
This serial problem occurs so often that it would be nice if we as a community just nailed it rather than keep re-coding it for every project.
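For a flavor of how small such a layer can be, here is a rough sketch in C of the receiving half: a byte-at-a-time state machine that turns the stream back into messages. The sync byte, length limit and additive checksum are hypothetical choices for illustration, not MIN's actual frame format; the sender emits SYNC, the length, the payload, and finally (uint8_t)-(length + sum of payload), so that a valid frame sums to zero:

#include <stdint.h>

#define SYNC        0xAA
#define MAX_PAYLOAD 64

typedef struct {
    enum { WAIT_SYNC, WAIT_LEN, WAIT_DATA, WAIT_SUM } state;
    uint8_t buf[MAX_PAYLOAD];
    uint8_t len, pos, sum;
} rx_t;

/* Feed one byte from the UART; returns the payload length when a
   complete, checksum-valid frame has arrived, otherwise -1. */
int rx_byte(rx_t *r, uint8_t b) {
    switch (r->state) {
    case WAIT_SYNC:
        if (b == SYNC) r->state = WAIT_LEN;
        break;
    case WAIT_LEN:
        if (b == 0 || b > MAX_PAYLOAD) { r->state = WAIT_SYNC; break; }
        r->len = b; r->pos = 0; r->sum = b;
        r->state = WAIT_DATA;
        break;
    case WAIT_DATA:
        r->buf[r->pos++] = b;
        r->sum += b;
        if (r->pos == r->len) r->state = WAIT_SUM;
        break;
    case WAIT_SUM:
        r->state = WAIT_SYNC;
        if ((uint8_t)(r->sum + b) == 0) return r->len;  /* frame OK */
        break;                                          /* bad checksum */
    }
    return -1;  /* no complete frame yet */
}

A real implementation also needs byte stuffing or re-synchronization so that a SYNC value inside the payload can't derail the receiver; that is exactly the sort of detail a protocol like MIN handles for you.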
Check out Open Source HDLC.
I recently came across MIN, though I've never used it.
Also check this
Simple serial point-to-point communication protocol
Using the X/Y/ZMODEM protocols would be a good choice for your problem. They're easy to implement and ready to use. I use XMODEM in an ISP tool that communicates with our Cortex-M0 powered MCU, and it works pretty well.
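Part of what makes XMODEM so easy to implement is that a classic checksum-mode packet is a fixed 132 bytes. A sketch in C of building one:

#include <stdint.h>
#include <string.h>

#define SOH 0x01  /* start-of-header byte for a 128-byte XMODEM block */

/* Build one classic (checksum-mode) XMODEM packet: SOH, the block
   number, its one's complement, 128 data bytes, and an 8-bit additive
   checksum over the data. out must have room for 132 bytes. */
void xmodem_packet(uint8_t block, const uint8_t data[128], uint8_t out[132]) {
    uint8_t sum = 0;
    out[0] = SOH;
    out[1] = block;
    out[2] = (uint8_t)(255 - block);
    memcpy(&out[3], data, 128);
    for (int i = 0; i < 128; ++i)
        sum += data[i];
    out[131] = sum;
}

The receiver answers each packet with ACK (0x06) or NAK (0x15), which gives you retransmission almost for free; XMODEM-CRC replaces the final checksum byte with a 16-bit CRC for better error detection.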
I'm trying to learn a bit about FPGAs and I'm very new to the subject. I'm more of a software developer and have no real experience programming FPGA devices.
I'm currently building a project on a Linux OS in C, and I would like to know how I might implement such code on an FPGA device. I have a few questions.
Firstly, do I have to translate my code to VHDL, or can I use C? Also, how would one go about installing an OS on an FPGA card, and are there devices that already come with an OS installed?
Sorry for the newbie type questions, and any help would be appreciated!
FPGAs are great at running simple, fixed data flows through parallel processing, while CPUs are optimized for complex and/or dynamic data flows.
The C language is not designed for describing highly parallel systems, as it follows a clearly sequential pattern ("assign a to b, then add c to d"); while compilers introduce some parallelization as an optimization, the focus is on generating code that behaves as if the instructions were executed in sequence.
In an FPGA, on the other hand, you want to break up sequences as far as possible and create parallel circuitry and pipelines, so normally the system is described in the form of interconnected blocks, where each is kept as simple as possible.
For example, where you have (a+b)*(c+d), a CPU based design would probably have a single adder, feed it with a and b first, then with c and d, and finally pass both results to the multiplier.
In an FPGA design, that is rather costly, as you have to create a state machine that keeps track of which of the three computation stages we are at and where the results are kept, so it may be easier to have two dedicated adders hardwired to a and b, and c and d, respectively, and to have their outputs connected to a multiplier block.
At this point, you have basically created a dedicated machine that can compute this single term and nothing else, but its speed is limited only by the speed of the transistors making up the logic gates, and compared to the state machine you get a speed increase of at least a factor of three (because we now have a single state/instruction instead of three), probably more because we can also discard the logic for storing intermediate results.
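To make the three stages concrete, here is the shared-adder schedule written out in C, which naturally expresses the sequential view; each statement below corresponds to one state of the state machine described above, whereas the FPGA version instantiates both adders side by side and lets the whole term settle in a single step:

/* (a+b)*(c+d) as a CPU with one shared adder must schedule it. */
int shared_adder(int a, int b, int c, int d) {
    int t1 = a + b;    /* state 1: the adder computes a+b          */
    int t2 = c + d;    /* state 2: the same adder, reused for c+d  */
    return t1 * t2;    /* state 3: the multiplier combines results */
}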
In order to decide when to create a state machine/processor, and when to hardcode computations, the compiler would have to know more about the program flow and timing requirements than can be expressed in C/C++, so these languages are not a good choice.
The OS as such also looks vastly different. There are no resources to arbitrate dynamically, so this part is omitted, and all that is left are device drivers. As everything is parallel, these take the form of external modules that are simply linked into your design, and interfaced directly.
If you are just starting out, I'd suggest you get a development kit with a few LEDs, and start with the basic functionality:
Make the LED blink
Use a PLL block from the system library to derive a secondary clock, and make the LED blink with a different frequency
Add a simple bus interface, e.g. SPI, and communicate with a simple external device, e.g. a WS2811 based LED strip
After you have a basic grasp of how the system works, try to get a working simulation environment (the equivalent of a Debug build), and begin including more complex peripherals.
It sounds like you could use a tutorial for beginners. I would recommend starting here and reading through an introduction to digital design. Some of your basic questions should be answered by reading through these tutorials. This will put you in a better place to ask more specific questions in the future.
Knowing very little about AI, I'm just a little puzzled by the claim that memristors may finally lead to AI that will equal, and likely surpass, the power of the human brain.
So, what's the difference between memristors (hardware) vs neural network nodes (software)?
It's very possible the two are completely unrelated, but given my understanding that neural networks are used to simulate biological neural networks, it seems to me that memristors are just the silicon version of the biological system that neural networks emulate.
The reason I ask is that if they're very close or the same in concept (meaning they differ only in implementation), I have no idea how one could claim that memristors will close the gap on AI.
Neural networks are able to form new connections; hardware cannot do this.
Memristors are much more useful for creating fast non-volatile memory. In the future there won't be separate RAM and storage, but one unified memory.
It's intended to move away from the von Neumann architecture: a memory node can be a compute node. Yes, a fundamental shift. It also allows other logic operations, which enables AI from a different perspective.
Memristors allow in-memory computing; for example, they can accelerate matrix-vector multiplication (MVM), bitwise operations and search operations. Memristors also form a high-density non-volatile memory.
Since the computation pattern of the convolution layer is MVM and the fully-connected layer is memory-intensive, memristors can accelerate both convolution and fully-connected layers. As such, memristors allow efficient hardware acceleration of convolutional neural networks and other variants of neural networks.
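To see why MVM is the key pattern, here is the kernel in plain C. A CPU steps through the multiply-accumulates one after another; a memristor crossbar that stores W as an array of conductances produces all the output sums at once as analog column currents (this is a conceptual sketch of the computation, not an actual crossbar programming interface):

/* y = W * x, the core operation of convolution and fully-connected
   layers: rows * cols sequential multiply-accumulates on a CPU,
   one parallel analog read-out on a memristor crossbar. */
void mvm(const float *W, const float *x, float *y, int rows, int cols) {
    for (int r = 0; r < rows; ++r) {
        float acc = 0.0f;
        for (int c = 0; c < cols; ++c)
            acc += W[r * cols + c] * x[c];  /* one crossbar cell each */
        y[r] = acc;                         /* one column current     */
    }
}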
However, current implementations of memristors also have several issues, such as poor reliability and hard errors. See my survey paper for more details on the use of memristors for accelerating neural networks and a discussion of the current challenges.
Basically, I'm working on a model of an automated vacuum cleaner, and I have currently made a software simulation of it. How do I figure out which SoC or SDK board to use for the hardware implementation? My code is mostly written in C; will this be compatible with the SDK provided by board manufacturers? How do I know what clock speed, memory, etc. the hardware will need?
I'm a software guy with only basic knowledge of practical hardware implementations. I have some experience programming the 8086 to carry out basic tasks.
You need to perform some kind of analysis of the required performance of your application. I'm certainly no expert in this, but questions that come to mind include:
How much performance do you need? Profile your application, and try to come up with some estimate of its minimum performance requirements, in e.g. MIPS.
Is your application code and/or data going to be large? Do you need a controller with 8 KB of code space and 100 bytes of RAM, or one with 1 MB of code and 128 KB of RAM? Somewhere in between? Where?
Do you need lots (tens) of I/O channels? With what characteristics? Is basic digital I/O on a handful of pins enough, or do you need 20 channels of 10-bit A/D conversion? PWM? Communications peripherals?
Followups:
Manufacturers will of course make sure that their customers can build and run software on their boards. They will either provide free compilers or (since embedded development is an industry and a very large market, after all) sell them as tools.
There are free development environments, often based around GNU's gcc compiler, for many low-end (and of course many medium and high-end, too) architectures.
You could, for instance, look through Atmel's range of 8-bit AVR controllers; they're very popular in the hobbyist world and easy to port C code to. Free compilers are available, and basic development boards are cheap.
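As a taste of how little code a first AVR program takes, here is the classic LED blink using avr-libc. It assumes an LED on pin PB5 (as on an Arduino Uno) and F_CPU defined at compile time with optimization enabled, e.g. avr-gcc -mmcu=atmega328p -DF_CPU=16000000UL -Os:

#include <avr/io.h>
#include <util/delay.h>

int main(void) {
    DDRB |= (1 << DDB5);          /* make PB5 an output */
    for (;;) {
        PORTB ^= (1 << PORTB5);   /* toggle the LED     */
        _delay_ms(500);           /* wait half a second */
    }
}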