Using ARM CMSIS on a PSoC system

I am attempting to use the ARM DSP cores with the PSoC 5LP system from Cypress. I have found examples at
http://www.disca.upv.es/aperles/arm_cortex_m3/curset/CMSIS/Documentation/DSP/html/arm_fft_bin_example_f32_8c-example.html
The FFT example is primarily what I am interested in replicating, but I am slightly confused about how cores work. I have used the PicoBlaze core on a Xilinx Spartan-6 before, but I have never used premade cores, especially for PSoC.
I have looked at the PSoC system reference guide and found information on CMSIS under startup and linking, but it does not fully make sense to me. Could someone please point me in the right direction to get started? Also, will I have to individually download all the files I need, such as arm_math.c for the FFT example (if so, I think that's the only file I need?), or will I just need to download CMSIS version 4.3 from ARM's website?
https://silver.arm.com/browse/CMSIS#
I'm trying to implement spectral flux analysis and autocorrelation using these cores, and I think they are a good place to start.
Thanks in advance,
Scarlson

You need to download the CMSIS package from the ARM website.
Inside the package you will find a "CMSIS" folder, which you have to copy into your project (Step #1).
Next you have to follow these steps:
http://www.cypress.com/knowledge-base-article/including-cortex-microcontroller-software-interface-standard-cmsis-library
Step #5 seems to be obsolete.
You now have to manually include the functions you want to use (Step #6) in the project.
For FFT, these are:
CMSIS\DSP_Lib\Source\CommonTables\arm_common_tables.c (for the twiddle factor table)
CMSIS\DSP_Lib\Source\TransformFunctions\arm_<type>_init_<format>.c
CMSIS\DSP_Lib\Source\TransformFunctions\arm_<type>_<format>.c
(for example, arm_cfft_radix4_init_f32.c and arm_cfft_radix4_f32.c for a radix-4 complex float FFT)
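For orientation, the calling sequence in the linked FFT bin example boils down to a few lines. Here is a minimal sketch assuming a 1024-point complex input; note that arm_cfft_f32 also pulls in arm_const_structs.c and the bit-reversal source from TransformFunctions, and the exact set of source files varies slightly between CMSIS versions:

    #include "arm_math.h"
    #include "arm_const_structs.h"

    #define FFT_SIZE 1024

    /* Interleaved real/imaginary input samples (2 floats per bin). */
    static float32_t input[2 * FFT_SIZE];
    static float32_t mag[FFT_SIZE];

    void compute_spectrum(void)
    {
        float32_t max_value;
        uint32_t  max_index;

        /* In-place forward complex FFT with bit reversal enabled. */
        arm_cfft_f32(&arm_cfft_sR_f32_len1024, input, 0, 1);

        /* Magnitude of each bin, then locate the dominant one. */
        arm_cmplx_mag_f32(input, mag, FFT_SIZE);
        arm_max_f32(mag, FFT_SIZE, &max_value, &max_index);
    }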

Related

Convert existing code from STM32F0 to Atmel SAMD21 (both are ARM-based Cortex-M0)

The existing project runs well on the STM32F0. The code should be converted to the Atmel SAMD21.
The code is written in Visual Studio using the IntelliSense configuration (ARM paths). Both projects should use the I2C interface. The package used is ChibiOS for STM32.
Because I didn't write the code from scratch, which parts should I change or read carefully? In other words, what is common and what is different when programming two different ARMs?
Do I need to write my own bootstrap (I mean the initialization part) and linker script? What else is needed?
Also, is there a configuration file for the ARM definitions that I should change?
Since I have to replace ChibiOS and add the libraries for Atmel, which library or package is best to use for the Atmel SAMD?
Is there any idea or example that would help compare these two ARMs?
Much appreciated in advance for any helpful suggestions.
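To illustrate the split the question is asking about: CMSIS-Core calls are identical on any Cortex-M0, while peripheral access is vendor-specific and must be rewritten. A hedged sketch, assuming the usual CMSIS device headers (the preprocessor macros below are illustrative, not the exact device defines):

    /* Common vs. device-specific code when porting between two Cortex-M0 parts. */
    #if defined(STM32F0)
      #include "stm32f0xx.h"
    #elif defined(SAMD21)
      #include "samd21.h"
    #endif

    void port_example(void)
    {
        /* CMSIS-Core: identical on both parts, ports unchanged. */
        SysTick_Config(SystemCoreClock / 1000u);    /* 1 ms tick */

        /* Peripheral registers: vendor-specific, must be rewritten. */
    #if defined(STM32F0)
        I2C1->CR1 |= I2C_CR1_PE;                    /* enable ST's I2C block */
    #elif defined(SAMD21)
        SERCOM0->I2CM.CTRLA.bit.ENABLE = 1;         /* enable Atmel SERCOM in I2C mode */
    #endif
    }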

Create a shared library for multiple applications for ARM cortex-m4

I'm trying to create a project which contains a drivers library and two separate applications (bootloader + app); now I want to share the drivers library between the two apps in order to save space on the flash...
I saw this tutorial for IAR, but I must use Keil uVision 5 and I didn't find anything helpful online.
Can anyone guide me through this?
Thanks!
Splitting the code into three parts (bootloader, library, application) is most likely too much. I think it is better to combine the bootloader and the drivers in a single binary. When calling the application, the bootloader can provide the information necessary to use the drivers.
A word of caution, though: a solution like this is far trickier than just compiling the drivers into the application. Depending on the driver functions, there may be no real benefit in flash usage. In particular, if many of the drivers are not needed, they will just occupy flash instead of being optimized out.
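To make the "bootloader provides the information necessary to use the drivers" idea concrete, one common pattern is a function-pointer table placed at an address both link scripts agree on. A minimal sketch, with hypothetical driver functions and a made-up table address:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical driver entry points, implemented in the bootloader image. */
    extern int uart_init(uint32_t baud);
    extern int uart_write(const uint8_t *buf, size_t len);

    typedef struct {
        int (*uart_init)(uint32_t baud);
        int (*uart_write)(const uint8_t *buf, size_t len);
    } driver_api_t;

    /* Bootloader side: pin the table to a known flash location.
       (Section placement shown GCC-style; with Keil uVision you would
       place it via a scatter-file execution region instead.) */
    __attribute__((section(".driver_api")))
    const driver_api_t g_driver_api = {
        .uart_init  = uart_init,
        .uart_write = uart_write,
    };

    /* Application side: read the table from the agreed (made-up) address. */
    #define DRIVER_API ((const driver_api_t *)0x08004000u)

    void app_start(void)
    {
        DRIVER_API->uart_init(115200u);
    }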

Is it possible to compile and run the dlib library on embedded devices with ARM Cortex-M7 processors?

I have just started using the amazing dlib library in Visual Studio, and I have been able to compile and run the face detection examples. I was wondering if it would be possible to compile and run the library on an Mbed device, such as this one, with an M7 (or other M-series) processor. In other words, what specifications should I look for to determine whether a microcontroller can run dlib, if at all? Note that Mbed devices run C++ code, so it would be possible to copy and paste the dlib source code and compile it, but I want to know whether this is feasible before I purchase a board. Also, if the RAM and ROM of the board are not enough, I can always attach external RAM/ROM.
Alternatively, if anyone knows of a library that can perform face detection or recognition on an embedded device, I would be happy to hear it.
Thanks.
Although the F769 is a considerably powerful embedded device, there is no chance that dlib will run on it. Machine learning algorithms, even when not run in real time, typically require a vast amount of RAM, especially for online learning (learning on the target). You can take a look at ARM's very own CMSIS-NN library to see what is currently "state of the art" for devices of that size.
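For a sense of scale, a single CMSIS-NN layer call looks like the sketch below, using the fixed-point fully-connected kernel with made-up dimensions:

    #include "arm_nnfunctions.h"

    #define IN_DIM  128
    #define OUT_DIM 10

    /* q7 (8-bit fixed-point) buffers; dimensions are illustrative. */
    static q7_t  input[IN_DIM];
    static q7_t  weights[IN_DIM * OUT_DIM];
    static q7_t  bias[OUT_DIM];
    static q7_t  output[OUT_DIM];
    static q15_t scratch[IN_DIM];               /* working buffer */

    void classify(void)
    {
        /* One fully-connected layer in 8-bit fixed point. */
        arm_fully_connected_q7(input, weights, IN_DIM, OUT_DIM,
                               0 /* bias_shift */, 7 /* out_shift */,
                               bias, output, scratch);
    }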
Take a look at TensorFlow Lite for Microcontrollers. You can run it on embedded devices; wake-word detection and object detection run easily on various boards (Arduino Nano 33, SparkFun Edge), and a build target for Mbed is included.
Microcontrollers are not suitable for video and image recognition even if you attach external RAM. The chip you were suggesting is top of the line in the microcontroller world, but that still means only 2 MB of flash for ALL your software and only 512 KB of on-board RAM. Think of it this way: an image with enough detail to recognize someone would be at least a few MB.
I would suggest that you look at ARM's application processors (the A series) or the NVIDIA Jetson.

How to convert images to video using FFMpeg for embedded applications?

I'm encoding images as video with FFmpeg, using custom C code rather than Linux commands, because I am developing the code for an embedded system.
I am currently working through the first dranger tutorial and the code provided in the following question.
How to encode a video from several images generated in a C++ program without writing the separate frame images to disk?
I have found some "less abstract" code in the following github location.
https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/encode_video.c
And I plan to use it as well.
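The heart of that encode_video.c example is a short send-frame/receive-packet loop; condensed, it looks roughly like this:

    #include <stdio.h>
    #include <libavcodec/avcodec.h>

    /* Drain all packets the encoder produces for one frame
       (pass frame == NULL to flush at end of stream). */
    static void encode(AVCodecContext *ctx, AVFrame *frame,
                       AVPacket *pkt, FILE *out)
    {
        int ret = avcodec_send_frame(ctx, frame);
        if (ret < 0)
            return;

        while (ret >= 0) {
            ret = avcodec_receive_packet(ctx, pkt);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                return;
            fwrite(pkt->data, 1, pkt->size, out);   /* raw bitstream out */
            av_packet_unref(pkt);
        }
    }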
My end goal is simply to save video on an embedded system using embedded C source code, and I am coming up the curve too slowly. So, in summary, my question is: does it seem like I am following the correct path here? I know that my system does not come with hardware for video codec conversion, which means I need to do it in software, but I am unsure whether FFmpeg is even a feasible option for embedded work, because I have yet to compile it.
The biggest red flag for me so far is that FFmpeg uses dynamic memory allocation, and I am unfamiliar with how to assess the amount of dynamic memory it uses. This is very important information to me, so if anyone knows how much memory it uses or how to assess it before compiling, I would greatly appreciate the input.
After further research, it seems that encoding video is often a hardware-intensive task that can use multiple processors and gigabytes of RAM. To avoid this, I am performing a minimal amount of compression by using the AVI format.
I have found that FFmpeg can't readily be used for bare-metal embedded systems, because the initial "make" of the library sets up configuration settings specific to the machine doing the compiling, which conflicts with the need to cross-compile. I can see that there are cross-compilation flags available, but I have not found any documentation describing how to use them. Either way, I want to avoid big heaps and multi-threading, so I moved on.
I decided to look for more basic source code elsewhere. mikekohn.net/file_formats/libkohn_avi.php is a great resource for very basic encoding without any complicated library dependencies or multi-threading. I have yet to implement it, so no guarantees, but best of luck. It is actually one of the only understandable encoding sources I have found for image-to-video applications, other than https://www.jonolick.com/home/mpeg-video-writer. However, Jon Olick's source code uses lossy encoding and a minimum frame rate (inherent to MPEG), both of which I am trying to avoid.

remoteproc-based inter-core communication

I am trying to investigate various inter-core communication mechanisms on my dual-core ARM Cortex processor. One of the cores is running a bare-metal application and the other is running the Linux operating system. I just came across the remoteproc framework (rpmsg), and I could not find much information online. The only information I found was http://lwn.net/Articles/489009/ which is rather little to get started with. Is there anyone who could help me with this?
I came across the same issue. I found some additional resources:
Documentation in the kernel tree, as always:
https://www.kernel.org/doc/Documentation/remoteproc.txt
The OMAP wiki, which gives an overview of the design:
http://omappedia.org/wiki/Design_Overview_-_RPMsg
BTW, thanks for the lwn link. That's quite helpful.
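For a feel of the Linux side, a minimal rpmsg client driver has the shape sketched below. This follows the current kernel's rpmsg_device API (older kernels around the time of the lwn article used struct rpmsg_channel instead), and the channel name is made up:

    #include <linux/module.h>
    #include <linux/rpmsg.h>

    /* Called whenever the remote (bare-metal) core sends us a message. */
    static int demo_cb(struct rpmsg_device *rpdev, void *data, int len,
                       void *priv, u32 src)
    {
        dev_info(&rpdev->dev, "received %d bytes from remote core\n", len);
        return 0;
    }

    /* Called when a channel matching the id table is announced. */
    static int demo_probe(struct rpmsg_device *rpdev)
    {
        const char msg[] = "hello remote";
        return rpmsg_send(rpdev->ept, (void *)msg, sizeof(msg));
    }

    static struct rpmsg_device_id demo_id_table[] = {
        { .name = "rpmsg-demo-channel" },   /* must match the remote side */
        { },
    };

    static struct rpmsg_driver demo_driver = {
        .drv.name = "rpmsg_demo",
        .id_table = demo_id_table,
        .callback = demo_cb,
        .probe    = demo_probe,
    };
    module_rpmsg_driver(demo_driver);
    MODULE_LICENSE("GPL");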
Since Xilinx's Zynq SoC also includes two ARM Cortex-A9 cores, they have published an application note in which a Linux kernel communicates with a FreeRTOS system via remoteproc/rpmsg. You can find the document here: PDF
Although the document is quite specific, you might be able to pull some useful information out of it. You can download the sources from Xilinx (an account is required). The *.bsp file can be renamed to *.tar.gz, which can then be extracted.
If you have any further questions, don't hesitate to ask.
