How to read keyboard input in real time with C?

I bought a DE1-SoC this summer and programmed a small CPU into the FPGA part. Back then, I had access to a PS/2 keyboard that connected directly to the board, but now I don't, and I need a way of communicating with the CPU. Luckily, the Cyclone V chip contains an ARM Cortex-A9 that I can connect to the FPGA fabric and that can run Linux distributions. The CPU in the FPGA part has a memory register that was updated every clock cycle with the scan code read from the PS/2 keyboard.

My plan is to emulate this functionality by writing some code in C that would likewise read my keystrokes and send them over to the FPGA. That last part I know how to do; it is reading the keys that confuses me. Ideally, I would like a function that, when called, immediately reads whatever input is coming from the keyboard, even if no key is currently pressed. I read some other questions on here that suggested things like getch() and getchar() and some other functions, but they seem to require third-party libraries and I am not sure those are supported by the compiler for the ARM processor.
It need not necessarily be C; it could be any other programming language that I can compile to run on the Cortex-A9, and it can even be assembly if that makes it more straightforward. All I need is to read whatever input is currently being sent by the keyboard, even if there is none. I can use VNC to talk to the ARM chip over the local network, or I can connect to it via UART, whichever simplifies the process.
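One common approach on Linux, sketched below, is to put the controlling terminal (the UART console or SSH/VNC shell) into non-canonical, non-blocking mode with termios and poll stdin. This uses only POSIX calls, so no third-party library is needed. The helper names (kbd_init, kbd_poll) are placeholders for illustration, and note that this yields ASCII bytes from the terminal rather than raw PS/2 scan codes, so a translation table may be needed on one side or the other.

    /* Minimal sketch: non-blocking keyboard polling over the controlling
     * terminal (UART or SSH/VNC shell). POSIX only, no extra libraries. */
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    static struct termios saved;

    static void kbd_init(void)               /* hypothetical helper name */
    {
        struct termios raw;
        tcgetattr(STDIN_FILENO, &saved);
        raw = saved;
        raw.c_lflag &= ~(ICANON | ECHO);     /* no line buffering, no echo */
        raw.c_cc[VMIN]  = 0;                 /* read() returns immediately */
        raw.c_cc[VTIME] = 0;
        tcsetattr(STDIN_FILENO, TCSANOW, &raw);
    }

    static void kbd_restore(void)
    {
        tcsetattr(STDIN_FILENO, TCSANOW, &saved);
    }

    /* Returns the next byte from the keyboard, or -1 if nothing is pending. */
    static int kbd_poll(void)
    {
        unsigned char c;
        if (read(STDIN_FILENO, &c, 1) == 1)
            return c;
        return -1;
    }

    int main(void)
    {
        kbd_init();
        for (;;) {
            int c = kbd_poll();
            if (c == 'q')
                break;
            if (c != -1)
                printf("got 0x%02x\n", c);   /* here: forward it to the FPGA */
            usleep(1000);
        }
        kbd_restore();
        return 0;
    }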

Related

I2C-Bus Implementation on Linux (Raspberry Pi)

I have recently written numerous functions in C for microfluidic pumps which are controlled via I2C on a Raspberry Pi. They work perfectly. I open "/dev/i2c-1" with O_RDWR and read from and write to it.
However, I am still wondering how exactly these file operations are implemented at a low level. How does the code that generates the correct bit sequences to communicate with these pumps actually end up driving the pins? What tells the kernel to change the states of the two I2C pins according to the code?
Thank you!
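For reference, the userspace side described above typically looks something like the sketch below; the slave address (0x48) and the register/value bytes are made-up placeholders, not values for any particular pump.

    /* Minimal sketch of userspace I2C access via i2c-dev. The slave address
     * and payload bytes are illustrative placeholders only. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/i2c-dev.h>

    int main(void)
    {
        int fd = open("/dev/i2c-1", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        if (ioctl(fd, I2C_SLAVE, 0x48) < 0) {   /* select the target device */
            perror("ioctl");
            return 1;
        }

        unsigned char cmd[2] = { 0x01, 0x80 };  /* register + value, device-specific */
        if (write(fd, cmd, 2) != 2)             /* the driver turns this into SCL/SDA waveforms */
            perror("write");

        unsigned char reply;
        if (read(fd, &reply, 1) == 1)
            printf("read back 0x%02x\n", reply);

        close(fd);
        return 0;
    }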
In most cases, software is not directly responsible for generating each individual logic-level change. Instead, a hardware I2C controller generates the bus signals, and a kernel driver manages that controller from software.
In the case of your RPi, you are most likely using this driver:
https://elixir.bootlin.com/linux/v5.19/source/drivers/i2c/busses/i2c-bcm2835.c
The hardware block that this driver is controlling is described in section 3 of this reference manual:
https://www.raspberrypi.org/app/uploads/2012/02/BCM2835-ARM-Peripherals.pdf
(A side note: the kernel does know how to generate all logic changes in software as well, for cases where no dedicated hardware is available. If you want to know more about that, have a look at the i2c-gpio driver)
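To make the contrast concrete, a bit-banged implementation in the spirit of i2c-gpio really does toggle the two lines from software. A start condition, for instance, looks roughly like the sketch below; the gpio/delay functions are hypothetical placeholders for whatever low-level API the platform provides.

    /* Conceptual sketch of a bit-banged I2C start condition, in the spirit
     * of the i2c-gpio driver. The gpio/delay functions are hypothetical
     * placeholders, not a real kernel or library API. */
    extern void gpio_set_sda(int level);    /* hypothetical: drive/release SDA */
    extern void gpio_set_scl(int level);    /* hypothetical: drive/release SCL */
    extern void delay_us(unsigned us);      /* hypothetical: busy-wait delay   */

    void i2c_start(void)
    {
        gpio_set_sda(1);    /* both lines idle high (open-drain, pulled up) */
        gpio_set_scl(1);
        delay_us(5);
        gpio_set_sda(0);    /* SDA falls while SCL is high: start condition */
        delay_us(5);
        gpio_set_scl(0);    /* now clock out the address and data bit by bit */
    }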

Disable/enable USB interrupts in MSDOS

I use a program for MS-DOS that reads and writes data to the parallel port and uses the hardware timer for timing. It doesn't work unless I disable USB support in the BIOS. With USB enabled, it looks like the operation of the program is interrupted hundreds of times at regular intervals. The source is available and compiles with DJGPP.
I'm looking for a way to programmatically disable/enable USB interrupts, either in a standalone program that I might write, or directly inside the program I use. I prefer C and DJGPP, but anything goes. An existing utility that does just this would be fine as well.
You may ask why I don't just use the BIOS for this. My old, parallel-port-equipped PC has non-working PS/2 ports, so a USB keyboard must be used. If I disable USB in the BIOS, I can't operate the computer at all (I could put the command in 'autoexec.bat', but how awkward is that?), whereas this way I would only lose the keyboard once the program has started and I no longer need it (well, almost, but I can live with that).
Thank you for reading. Here's the program (mtap):
http://markus.brenner.de/
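For the "programmatically disable" part, one thing worth trying from DJGPP is masking the offending IRQ line directly at the 8259 PIC mask registers, as sketched below. The IRQ number used here is an assumption; you would have to find out which line the USB controller actually uses, and note that USB legacy keyboard support is often implemented via SMIs rather than normal IRQs, in which case this will not be enough.

    /* Sketch (DJGPP): mask or unmask one IRQ line at the 8259 PICs.
     * The IRQ number (11) is an assumed example, not a known value. */
    #include <pc.h>      /* inportb() / outportb() in DJGPP */

    static void set_irq_masked(int irq, int masked)
    {
        int port = (irq < 8) ? 0x21 : 0xA1;   /* master / slave PIC mask register */
        int bit  = 1 << (irq & 7);
        int mask = inportb(port);

        if (masked)
            mask |= bit;                      /* 1 = interrupt line disabled */
        else
            mask &= ~bit;
        outportb(port, mask);
    }

    int main(void)
    {
        set_irq_masked(11, 1);   /* mask before the timing-critical work */
        /* ... run the timing-critical code here ... */
        set_irq_masked(11, 0);   /* restore afterwards */
        return 0;
    }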

Writing device library C/C++ for STM32 or ARM

I need to develop device libraries (for uBlox modules, IMUs, BLE, etc.) from scratch (almost). Is there any doc or tutorial that can help me?
The question is: how do you write a device library in C/C++ (Arduino-style, if you like) given a datasheet and a platform like STM32 or other ARM parts?
Thanks so much
I've tried reading device libraries from the Arduino libraries and various GitHub repositories, but I would like a guide/template (general rules) to follow for writing proper device libraries from a given datasheet.
I'm not asking for a full, definitive guide, just where to start: docs, methods, approach.
I've found the one below, but it is very basic and quite light for my purposes.
http://blog.atollic.com/device-driver-development-the-ultimate-guide-for-embedded-system-developers
I don't think that you can actually write libraries for STM32 in Arduino style. Most Arduino libraries you can find in the wild promote ease of use rather than performance. For example, a simple library designed for a specific sensor works well if reading the sensor and reporting the results over the serial port is the only thing the firmware has to do. When you work on more complex projects, where the uC has a lot to do and must satisfy real-time constraints, the general Arduino approach doesn't solve your problems.
The problem with STM32 library development is the complex interplay between peripherals, DMA and interrupts. I code at register level without using the Cube framework, and I often find myself digging through the reference manual for the tables that show the connections between DMA channels, or things like timer master-slave relations. Some peripherals (timers mostly) work similarly, but each one has small differences. This makes developing a hardware library that fits all scenarios practically impossible.
The tasks you need to accomplish are also more complex in STM32 projects. For example, in one of my projects, I fool SPI with a dummy/fake DMA transfer triggered by a timer, so that it can generate periodic 8-pulse trains from its clock pin (data pins are unused). No library can provide you this kind of flexibility.
Still, I believe not all is lost. I think it may be possible to build a hardware abstraction layer (HAL, but not The HAL by ST). So, it's possible to create useful libraries if you can abstract them from the hardware. A USB library can be a good example of this approach: the STM32 devices have ~3 different USB peripheral hardware variations, so it makes sense to write a separate HAL for each of them. The upper application layer, however, can be the same.
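As a sketch of what "abstract them from the hardware" can look like in practice: the device library is written against a small hand-written interface, and each board or peripheral variant supplies its own implementation of that interface. All the names below are made up for illustration, not taken from any existing library.

    /* Sketch: separating a device library from the hardware behind a small
     * hand-written interface. All names here are illustrative. */
    #include <stdint.h>
    #include <stddef.h>

    /* The hardware-facing interface the device library is written against. */
    typedef struct {
        int (*write)(void *ctx, const uint8_t *buf, size_t len);
        int (*read)(void *ctx, uint8_t *buf, size_t len);
        void *ctx;                       /* variant-specific state */
    } bus_if_t;

    /* The device library only ever uses the interface, never raw registers. */
    typedef struct {
        bus_if_t bus;
    } sensor_t;

    int sensor_read_id(sensor_t *s, uint8_t *id)
    {
        uint8_t reg = 0x0F;              /* hypothetical WHO_AM_I register */
        if (s->bus.write(s->bus.ctx, &reg, 1) < 0)
            return -1;
        return s->bus.read(s->bus.ctx, id, 1);
    }

    /* A concrete board then provides its own read/write functions (register-
     * level or Cube-based, the library doesn't care) and fills in a bus_if_t. */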
Maybe that was the reason why ST created the Cube framework. But as you know, Cube relies on external code generation tools which are aware of the hardware of each device, so some of the work can be moved out of runtime. You can't achieve the same result when you write your own libraries unless you also design a similar external code generation tool. And the code Cube generates is bloated in most cases: you trade development time for runtime performance and code space.
I assume you will be using a cross toolchain on some platform like Linux, and that the cross toolchain is compatible with some method to load object code on the target CPU. I also assume that you already have a working STM32 board that is documented well enough to figure out how the sensors will connect to the board or to the CPU.
First, you should define what your library is supposed to provide. This part is usually surprisingly difficult. It’s a bit hard to know what it can provide, without knowing a bit about what the hardware sensors are capable of providing. Some iteration on the requirements is expected.
You will need to have access to the documentation for the sensors, usually in the form of the manufacturer’s data sheets. Using the datasheet, and knowing how the device is connected to the target CPU/board, you will need to access the STM32 peripherals that comprise the interface to the sensors. Back to the datasheets, this time for the STM32, to see how to access its peripheral interfaces. That might be simple GPIO bits and bytes, or might be how to use built-in peripherals such as SPI or I2C.
The datasheets for the sensors will detail a bunch of registers, describing the meaning of each, including the meanings of each bit, or group of bits, in certain registers. You will write code in C that accesses the STM32 peripherals, and those peripherals will access the sensors across the electrical interface that is part of the STM32 board.
The workflow usually starts out by writing to a register or three to see if there is some identifiable effect. For example, if you are exercising a digital IO port, you might wire up an LED to see if you can turn it on or off, or a switch to see if you can correctly read its state. This establishes that your code can peek and poke IO using register-level access. There may be existing helper functions to do this work as part of the cross toolchain, or you might have to develop your own, using pointer indirection to access memory-mapped IO. Or there might be special instructions needed that can only be accessed from inline assembler code. This answer is generic, as I don't know the specifics of the STM32 processor or its typical ecosystem.
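The "pointer indirection to access memory-mapped IO" part usually boils down to volatile pointers at fixed addresses, as in the generic sketch below. The address and bit position are placeholders for illustration, not real STM32 register definitions; they would come from the reference manual.

    /* Generic sketch of register-level memory-mapped IO access. The address
     * and bit number are placeholders, not real STM32 values. */
    #include <stdint.h>

    #define PERIPH_REG  (*(volatile uint32_t *)0x40020014u)  /* hypothetical output data register */
    #define LED_PIN     5u                                   /* hypothetical bit position */

    static inline void led_on(void)    { PERIPH_REG |=  (1u << LED_PIN); }
    static inline void led_off(void)   { PERIPH_REG &= ~(1u << LED_PIN); }
    static inline int  led_state(void) { return (PERIPH_REG >> LED_PIN) & 1u; }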
Then you move on to more complex operations that might involve sequences of operations, like cycling a bit or two to effect some communication with the device. Or it might be as simple as finding the proper sequence of registers to access to operate a SPI interface. Often you will find that small chunks of code are complete enough to be re-used by your driver, like how to read or write an individual byte. You can then make that a reusable function to simplify the rest of the work, like accessing certain registers in sequence and printing the contents of the registers you read to see if they make sense. Ultimately, you will have two important pieces of information: an understanding of the low-level register accesses needed to create a formal driver, and an understanding of what components and capabilities make up the hardware (i.e., you know how the device(s) work).
Now, throw away most of what you've done, and develop a formal spec. Use what you now know to develop a spec that includes everything that can be useful and an appropriate interface API that your application code can use. Rewrite the driver, armed with the knowledge of how all the pieces work, and taking advantage of the blank canvas afforded you by the fresh rewrite of the spec. Only reuse code that you are completely confident is optimal and appropriate to the form dictated by the spec. Write test code for all of the modules, and use the test code to actually verify that the code works and conforms to the spec. Re-use the test code every time you modify anything it tests.

How can I write my C program in two functions?

I was wondering how I can restructure my C code (just a single .c file with a couple of different functions) into just two functions with inputs and outputs.
I am looking into this because I am going to put one part of my code on the CPU and leave the other in the FPGA; the two can communicate with each other via the interfaces on a Zynq family board (e.g. ZC706).
In this regard, I need to end up with just one single function that Vivado HLS can translate to e.g. VHDL, while the other function stays on the CPU.
Thanks in advance; I can share my code if needed.
Not at all!
There are no pthreads or functions in an FPGA.
You have to think of the FPGA more like a circuit. There are physical connections, like wires. Internally, especially in the Zynq family, you can communicate through RAM, DMA controllers, or registers.
There is documentation from Xilinx; what you need is AXI/AXI-Stream.
But what you want to do is write arbitrary C code and run it in the FPGA fabric as if it were a processor, and this approach is not promising.
In Vivado HLS you can write "functions" in C/C++/OpenCL/SystemC, but each one is only a block with inputs and outputs that gets translated to a hardware description language (VHDL/Verilog).
You have to export it and add it to your Vivado project to use it.
At this point the IP you created in HLS may do what you expect, but there is still a lot of work to do connecting the ports in the right manner.
My advice is to get familiar with the Zynq family and especially the AXI protocol. Once you feel comfortable with DMA/AXI4/AXI4-Stream and how to access them from the ARM and from the logic, start using HLS. Otherwise you will not have a feel for how to write code that HLS understands.
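To illustrate what such an HLS "function" looks like: it is a plain C function plus interface pragmas, which HLS turns into an IP block with AXI ports that you then wire up in Vivado. The sketch below uses the usual AXI4-Lite pragmas; the function name and operation are invented for illustration, and the exact pragma syntax should be checked against your HLS version.

    /* Sketch of a Vivado HLS top-level function: plain C plus interface
     * pragmas. Names, widths and the operation itself are illustrative. */
    void hls_add_scale(int a, int b, int scale, int *result)
    {
    #pragma HLS INTERFACE s_axilite port=a      bundle=ctrl
    #pragma HLS INTERFACE s_axilite port=b      bundle=ctrl
    #pragma HLS INTERFACE s_axilite port=scale  bundle=ctrl
    #pragma HLS INTERFACE s_axilite port=result bundle=ctrl
    #pragma HLS INTERFACE s_axilite port=return bundle=ctrl

        *result = (a + b) * scale;   /* the "FPGA half" of the split program */
    }

On the ARM side, the matching "CPU half" would then write a, b and scale and read back result through the register addresses exposed by the generated AXI4-Lite interface.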

Any open-source ARM7 emulators suitable for linking with C?

I have an open-source Atari 2600 emulator (Z26), and I'd like to add support for cartridges containing an embedded ARM processor (NXP 21xx family). The idea would be to simulate the 6507 until it tries to read or write a byte of memory (which it will do every 841ns). If the 6507 performs a write, put the address and data on some of the ARM's I/O ports and let the ARM code run 20 cycles, confirm that the ARM is floating its data bus, and let the ARM run for another 38 cycles. If the 6507 performs a read, put the address on the ARM's I/O ports, let the ARM run 38 cycles, grab the data from the ARM's I/O port (hopefully the ARM software will have put it there), and let the ARM run another 20 cycles.
The ARM7 seems pretty straightforward to implement; I don't need to simulate a whole lot of hardware features. Any thoughts?
Edit
What I have in mind would be a routine that takes as a parameter a struct holding the machine state and a pointer to a memory-access routine. When called, the routine would emulate the ARM's instruction engine, generating appropriate reads, writes, and code fetches. I could then write the memory-access routine to treat appropriate areas as flash (with roughly approximated wait states), RAM, I/O ports, and timer registers. Some other areas would be marked as don't-care, and accesses to any other areas would flag an error and stop the emulator.
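As a sketch of the interface described above (every name here is invented for illustration, not taken from QEMU or any other project):

    /* Sketch of the emulator interface described above; all names invented. */
    #include <stdint.h>

    typedef enum { MEM_READ8, MEM_WRITE8, MEM_READ32, MEM_WRITE32, MEM_FETCH } mem_op_t;

    /* Host-side callback: the integrating emulator decides what each address
     * is (flash with wait states, RAM, I/O port, timer, don't-care, error). */
    typedef uint32_t (*mem_access_fn)(void *user, mem_op_t op,
                                      uint32_t addr, uint32_t value);

    typedef struct {
        uint32_t      r[16];        /* R0-R15, R15 = PC */
        uint32_t      cpsr;
        uint64_t      cycles;       /* cycle budget accounting */
        mem_access_fn mem;          /* all loads/stores/fetches go through here */
        void         *user;         /* passed back to the callback */
    } arm_state_t;

    /* Run the core for (at least) n cycles, generating reads/writes/fetches
     * through state->mem as it goes; returns cycles actually consumed. */
    int arm_run(arm_state_t *state, int n);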
Perhaps QEMU uses such a thing internally. Since the ARM emulation would be integrated into an already-existing emulation engine (which I didn't write and don't fully understand--the only parts of Z26 I've patched have been the memory read/write logic) I would need something with a fairly small footprint.
Any idea how QEMU works inside? Any idea what the GPL licence would require if I just use 2% of the code in QEMU--whether I'd have to bundle the code for the whole thing, or just the part that I use, or what?
Try QEMU.
With some work, you can make my emulator do what you want. It was written for ARM920, and the Thumb instruction set isn't done yet. Neither is the MMU/cache interface. Also, it's slow because it is an interpreter. On the bright side, it's all written in C99.
http://code.google.com/p/gp2xemu/
I haven't worked on it for a while (the svn trunk is 2 years old), but if you're going to use the code, I'll be glad to help you out with the missing features. It is licensed under MIT, so it's essentially as permissive as the BSD license.
