I'm working on the STM32L476G-DISCO board and I want to try the Artificial Intelligence feature using STM32CubeMX (X-CUBE-AI), but I can't see the output (prediction/decision), nor do I understand the validation step (validation on desktop and validation on target).
I was following the STMicroelectronics demo: https://www.youtube.com/watch?v=szMGedsp9jc&t=314s
Can someone please explain the results of the validation on desktop and the validation on target, and how I can see the decision output?
If I enter custom data of someone 'sitting', for example, how can I see whether my model is working correctly on the STM32?
I think you are asking two questions.
1) For the validation, I think it basically shows how different the results of the original NN (built in Python with Keras) are from the results of the converted C network (run on your desktop and on the microcontroller).
2) If you want to see the network in action and predicting something, I recommend the following example:
Hand written digits recognition on STM32F4
In the code you can see the original NN in Python and then its implementation in C, used to recognize digits.
Pay special attention to the function MX_X_CUBE_AI_Process(in_data, out_data, 1); which is where the prediction occurs.
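To make that call concrete, here is a minimal, hedged sketch of feeding one input frame to the generated network and reading the decision back out. It assumes the generated template exposes MX_X_CUBE_AI_Process(in_data, out_data, batch_size) as in the linked example, and that AI_NETWORK_IN_1_SIZE / AI_NETWORK_OUT_1_SIZE and ai_float come from the headers X-CUBE-AI generates; names vary between versions, so check your own network.h:

#include <stdio.h>     /* printf retargeted to UART on most boards */
#include "network.h"   /* generated by X-CUBE-AI */

static ai_float in_data[AI_NETWORK_IN_1_SIZE];
static ai_float out_data[AI_NETWORK_OUT_1_SIZE];

void run_one_prediction(void)
{
    /* 1. fill in_data[] with one window of sensor samples ... */

    /* 2. run inference with batch size 1 */
    MX_X_CUBE_AI_Process(in_data, out_data, 1);

    /* 3. the "decision" is the output with the highest score */
    int best = 0;
    for (int i = 1; i < AI_NETWORK_OUT_1_SIZE; i++)
        if (out_data[i] > out_data[best])
            best = i;
    printf("predicted class %d (score %f)\r\n", best, out_data[best]);
}

For the 'sitting' test in your question: fill in_data[] with a recorded sensor window for that activity, run the function, and check that the index with the highest score corresponds to the 'sitting' class of your model.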
In addition to what kansaiTobot has said: from CubeMX-AI you have three modes of operation.
1- Validation ==> compares the results of the model implemented in Python against the C model. This mode has two options: you can do the validation of the C code on the host PC, or on the target microcontroller.
2- System Performance ==> measures the CPU load and memory usage required by the NN.
3- Application template ==> here you can supply your input data and get the result from the neural network.
You can find more information in the user manual for AI on STM32:
"Getting started with X-CUBE-AI Expansion Package for Artificial Intelligence (AI)"
I am a student and new to PSpice. I am given the following example circuit and asked to create the following circuit, which I have built as shown.
I think I have my circuit designed correctly. However, I am having trouble finding the correct diode. I have tried different libraries such as diode, ediode, diode_bridge, on_diode and infineon, but the diodes from these libraries don't give a current of 20 mA (typically lower than that). I also tried to find discrete.olb, but it's not in my Capture library. Is there any other library that contains the two-leg LED like the diodes in the example circuit? I think the reason I am not getting 20 mA is the diode?
As can be seen here on the PSPICE website, you should be able to find all of the LED models in the Optoelectronics->LEDs library.
I'm developing a positional tracking system and found references to Madgwick's 2010 paper, which included an open-source method for obtaining Absolute Orientation.
However, upon executing the filter, the gyro data I'm getting seems to be distorted somehow, scaled down by a factor of ~100 (smaller than I would expect).
I did a bit of searching and found this post talking about what seems to be an alternate form of the original Madgwick filter; however, comparing it to the "original 2010 version" showed different values for the gradient-descent algorithm, as well as most folks talking about their yaw values "flipping".
I'm not really sure which version of the Madgwick AHRS I should be using anymore, nor am I certain whether anyone else has experienced the same magnitude/scaling problem I have with this "older version". Have other folks used the 2010 MARG in C? If so, have you had the same issue of "scaling" in gyro heading values?
In an IMU, some registers are usually dedicated to selecting the sensor's full-scale range. Check them; they may be the source of your distortion.
Note that the sensors must be calibrated, especially the magnetometer. Have a look at this page; it's a working implementation of Sebastian Madgwick's AHRS algorithm.
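As a hedged illustration of the range/scaling point above: Madgwick's filter expects gyro rates in rad/s, so a raw register value has to be divided by the part's sensitivity for the selected range and converted from degrees. The numbers below assume a hypothetical MPU-6050-style gyro at its default +/-250 dps range (131 LSB per deg/s); check your own datasheet:

#include <stdint.h>

#define GYRO_SENS_LSB_PER_DPS 131.0f        /* +/-250 dps full-scale */
#define DEG_TO_RAD            0.017453293f  /* pi / 180 */

float gyro_raw_to_rads(int16_t raw)
{
    float dps = (float)raw / GYRO_SENS_LSB_PER_DPS; /* raw LSB -> deg/s */
    return dps * DEG_TO_RAD;                        /* deg/s -> rad/s  */
}

If the range register is set to a wider scale than the conversion factor assumes, every reading comes out too small by a constant factor, which would look like the shrinkage you describe.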
I used a model that connects to a database of emails, processes them, and feeds them as input to a neural network. The output of the NN is great on the console but bad in WPF; in other words, on the console the probability for a tested spam message is above 0.8556546 and the probability for a ham message is below 0.005465465.
When I used the same code in WPF, the results were ambiguous; in other words, the NN gives a different probability for the same message on each training run, e.g. 0.25654654, 0.9999, 0.45654564654, 0.5564654654 for a tested spam message, and likewise for a tested ham message.
In the end I added a console project to the solution to run the code, referenced by the WPF project so the results can be displayed in WPF.
Note: the code is the same for both WPF and console.
Thank you
There is no good reason why this should be, so something else changed when you ran the tests. If it is the same assembly you are using, the differences might be caused by differences between test sets, operating systems, .NET platform versions, or other external influences (in general, the tester or the programmer).
In many embedded applications there is a tradeoff between making the code very efficient or isolating the code from the specific system configuration to be immune to changing requirements.
What kinds of C constructs do you usually employ to achieve the best of both worlds (flexibility and reconfigurability without losing efficiency)?
If you have the time, please read on to see exactly what I am talking about.
When I was developing embedded SW for airbag controllers, we had the problem that we had to change some parts of the code every time the customer changed their mind regarding the specific requirements. For example, the combination of conditions and events that would trigger the airbag deployment changed every couple of weeks during development. We hated to change that piece of code so often.
At that time, I attended the Embedded Systems Conference and heard a brilliant presentation by Stephen Mellor called "Coping with changing requirements". You can read the paper here (they make you sign-up but it's free).
The main idea of this was to implement the core behavior in your code but configure the specific details in the form of data. The data is something you can change easily and it can even be programmable in EEPROM or a different section of flash.
This idea sounded great to solve our problem. I shared this with my colleague and we immediately started reworking some of the SW modules.
When trying to use this idea in our coding, we encountered some difficulty in the actual implementation. Our code constructs got terribly heavy and complex for a constrained embedded system.
To illustrate this I will elaborate on the example I mentioned above. Instead of having a bunch of if-statements to decide whether the combination of inputs was in a state that required an airbag deployment, we changed to a big table of tables. Some of the conditions were not trivial, so we used a lot of function pointers to be able to call lots of little helper functions which somehow resolved some of the conditions. We had several levels of indirection and everything became hard to understand. To make a long story short, we ended up using a lot of memory, runtime and code complexity. Debugging the thing was not straightforward either. The boss made us change some things back because the modules were getting too heavy (and he was maybe right!).
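To give a feel for it, here is a minimal hypothetical sketch of that table-plus-function-pointer style (all names invented for illustration; our real tables were much bigger and partly stored in flash):

#include <stdbool.h>
#include <stddef.h>

typedef bool (*condition_fn)(void);

typedef struct {
    condition_fn *conditions;  /* every condition in the rule must hold */
    size_t        count;
} deploy_rule_t;

static bool front_impact_detected(void) { /* read crash sensor */ return false; }
static bool speed_above_threshold(void) { /* read vehicle speed */ return false; }

static condition_fn frontal_rule_conds[] = {
    front_impact_detected,
    speed_above_threshold,
};

static const deploy_rule_t rules[] = {
    { frontal_rule_conds, 2 },
    /* ... one entry per customer-specific deployment rule ... */
};

bool airbag_should_deploy(void)
{
    for (size_t r = 0; r < sizeof rules / sizeof rules[0]; r++) {
        size_t i;
        for (i = 0; i < rules[r].count; i++)
            if (!rules[r].conditions[i]())
                break;
        if (i == rules[r].count)  /* all conditions of this rule held */
            return true;
    }
    return false;
}

Even this toy version shows the indirection creeping in; multiply it by dozens of rules and helper functions and you get the complexity I'm describing.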
PS: There is a similar question in SO but it looks like the focus is different. Adapting to meet changing business requirements?
As another point of view on changing requirements ... requirements go into building the code. So why not take a meta-approach to this:
Separate out parts of the program that are likely to change
Create a script that will glue parts of source together
This way you are maintaining compatible logic-building blocks in C ... and then sticking those compatible parts together at the end:
/* {conditions_for_airbag_placeholder} */
if (require_deployment)
    trigger_gas_release();
Then maintain independent conditions:
/* VAG Condition */
if (poll_vag_collision_event())
    require_deployment = 1;
and another
/* Ford Conditions */
if (ford_interrupt(FRONT_NEARSIDE_COLLISION))
    require_deployment = 1;
Your build script could look like:
BUILD airbag_deployment_logic.c WITH vag_events
TEST airbag_deployment_blob WITH vag_event_emitter
Thinking out loud, really. This way you get a tight binary blob without reading in config at run time.
This is sort of like using overlays http://en.wikipedia.org/wiki/Overlay_(programming) but doing it at compile-time.
Our system is subdivided into many components, with exposed configuration and test points. There is a configuration file that is read at start-up that actually helps us instantiate components, attach them to each other, and configure their behavior.
It's very OO-like, in C, with the occasional hack to implement something like inheritance.
In the defense/avionics world software upgrades are very strictly controlled, and you can't just upgrade SW to fix issues... however, for some bizarre reason you can update a configuration file without a major fight. So it's been darn useful for us to be able to specify a lot of our implementation in those configuration files.
There is no magic, just good separation of concerns when designing the system and a bit of foresight on the part of the developers.
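For anyone curious what "OO-like in C" can look like, here is a minimal hypothetical sketch of the usual embedding trick; none of these names come from our actual system:

/* A base "component" struct placed first in each concrete type, so a
   pointer to the derived struct can be passed wherever the base is
   expected (the poor man's inheritance mentioned above). */
typedef struct component {
    const char *name;
    void (*configure)(struct component *self, const char *key,
                      const char *value);
} component_t;

typedef struct {
    component_t base;      /* must be the first member: enables the cast */
    int         threshold;
} filter_component_t;

static void filter_configure(component_t *self, const char *key,
                             const char *value)
{
    filter_component_t *f = (filter_component_t *)self;  /* "downcast" */
    /* apply a key/value pair from the start-up configuration file ... */
    (void)f; (void)key; (void)value;
}

At start-up, the configuration file is walked and each key/value pair is pushed through the matching component's configure hook.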
What are you trying to save exactly? Effort of code re-work? The red tape of a software version release?
It's possible that changing the code is reasonably straight-forward, and quite possibly easier than changing data in tables. Moving your often-changing logic from code to data is only helpful if, for some reason, it's less effort to modify data rather than code. That might be true if the changes are better expressed in a data form (e.g. numeric parameters stored in EEPROM). Or it might be true if the customer's requests make it necessary to release a new version of software, and a new software version is a costly procedure to build (lots of paperwork, or perhaps OTP chips burned by the chip maker).
Modularity is a very good principle for this sort of thing, and it sounds as though you're already applying it to some degree. It's good to isolate the often-changing code to as small an area as possible, and to keep the rest of the code ("helper" functions) separate (modular) and as stable as possible.
I don't make the code immune to requirements changes per se, but I always tag a section of code that implements a requirement by putting a unique string in a comment. With the requirements tags in place, I can easily search for that code when the requirement needs a change. This practice also satisfies a CMMI process.
For example, in the requirements document:
The following is a list of requirements related to the RST:
[RST001] Juliet SHALL start the RST with a 5 minute delay when the ignition is turned OFF.
And in the code:
/* Delay for RST when ignition is turned off [RST001] */
#define IGN_OFF_RST_DELAY 5
...snip...
/* Start RST with designated delay [RST001] */
if (IS_ROMEO_ON())
{
rst_set_timer(IGN_OFF_RST_DELAY);
}
I suppose what you could do is to specify several valid behaviors based on a byte or word of data that you could fetch from EEPROM or an I/O port if necessary and then create generic code to handle all possible events described by those bytes.
For instance, if you had a byte that specified the requirements for releasing the airbag it could be something like:
Bit 0: Rear collision
Bit 1: Speed above 55 mph (bonus points for generalizing the speed value!)
Bit 2: Passenger in car
...
etc.
Then you pull in another byte that says what events happened and compare the two. If they're the same, execute your command, if not, don't.
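A hedged sketch of that comparison, with hypothetical event bits (the "requirements byte" could come from EEPROM exactly as described above):

#include <stdbool.h>
#include <stdint.h>

#define EVT_REAR_COLLISION (1u << 0)  /* Bit 0 */
#define EVT_SPEED_OVER_55  (1u << 1)  /* Bit 1 */
#define EVT_PASSENGER_IN   (1u << 2)  /* Bit 2 */

bool should_deploy(uint8_t required_events, uint8_t actual_events)
{
    /* "if they're the same, execute your command" */
    return actual_events == required_events;
    /* a looser variant only demands that every required bit is set:
       return (actual_events & required_events) == required_events; */
}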
For adapting to changing requirements, I would concentrate on making the code modular and easy to change, e.g. by using macros or inline functions for parameters that are likely to change.
W.r.t. a configuration that can be changed independently from the code, I would hope that the reconfigurable parameters are specified in the requirements too, especially for safety-critical stuff like airbag controllers.
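A tiny sketch of what I mean by the macro/inline-function approach; the name and value are made up for illustration:

#include <stdbool.h>

/* A value likely to change lives in exactly one place ... */
#define DEPLOY_SPEED_THRESHOLD_KPH 50u

/* ... behind a small inline helper, so callers never hard-code it. */
static inline bool speed_requires_deployment(unsigned speed_kph)
{
    return speed_kph >= DEPLOY_SPEED_THRESHOLD_KPH;
}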
Hooking in a dynamic language can be a lifesaver, if you've got the memory and processor power for it.
Have the C talk to the hardware, and then pass up a known set of events to a language like Lua. Have the Lua script parse the event and call back to the appropriate C function(s).
Once you've got your C code running well, you won't have to touch it again unless the hardware changes. All of the business logic becomes part of the script, which in my opinion is a lot easier to create, modify and maintain.
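A hedged sketch of that hand-off using the standard Lua C API; the script name and the handle_event function are hypothetical:

#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

/* Load the business-logic script once at start-up. */
lua_State *init_script(void)
{
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);
    luaL_dofile(L, "business_logic.lua");  /* hypothetical script */
    return L;
}

/* C detects a hardware event and passes it up to the script. */
void dispatch_event_to_lua(lua_State *L, const char *event_name)
{
    lua_getglobal(L, "handle_event");  /* function defined in the script */
    lua_pushstring(L, event_name);
    if (lua_pcall(L, 1, 0, 0) != LUA_OK) {
        /* report lua_tostring(L, -1) over your debug channel ... */
        lua_pop(L, 1);
    }
}

The script, in turn, calls back into registered C functions to drive the hardware, so the C layer stays stable while the logic changes in the script.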
I'm trying to implement a fuzzy logic membership function in C for a hobby robotics project but I'm not quite sure how to start.
I have inputs about objects near a point, such as distance or which directions are clear/obstructed, and I want to map how strongly these inputs belong to sets like very near, near, far, very far. Does anyone have a tip on how to start? Thanks.
Disclaimer: I've never implemented a fuzzy controller (I've only ever used PI or PID in real life), and my control class was 10 years ago.
Here's a presentation demonstrating moving towards a target using distance and angle as inputs and power as the output: FuzzyTech's example of positioning a crane.
This just presents the topic and theory, i.e. no code.
The best source is probably one of the robotics groups,
e.g. the Seattle Robotic Society fuzzy logic tutorial. It is technical... and long.
If you can access technical journals, then search Google Scholar for "fuzzy logic" "path planning" robotics.
If you're looking for ideas on how to implement fuzzy logic, then perhaps an application note from one of the microchip manufacturers will get you started, e.g. Microchip's paper on airflow control or servo control. I know it's not Arduino, but Microchip's papers are usually very clearly presented.
And finally, an example in C++; it's probably more complex than you're looking for: Free Fuzzy Logic Library.
Good luck.
I'm no expert with fuzzy logic, but according to my basic understanding, you could start by deciding what distances would constitute near (say 10 cm) and far (say 1 m), then use degrees of membership to fill in the range in between (so 55 cm might be 50% near, 50% far). Then you do something similar for your other properties, and combine the membership values associated with each property; a sketch follows below.
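A minimal sketch of that idea as a C membership function; the 10 cm and 100 cm breakpoints are the example values above, and a real controller would define one such function per set (near, far, very far, ...):

/* Trapezoidal membership for "near": 1.0 below 10 cm, 0.0 above 100 cm,
   linear ramp in between (55 cm -> 0.5, i.e. "50% near"). */
float membership_near(float distance_cm)
{
    if (distance_cm <= 10.0f)  return 1.0f;
    if (distance_cm >= 100.0f) return 0.0f;
    return (100.0f - distance_cm) / (100.0f - 10.0f);
}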
Do you have a good reference for designing fuzzy controls?
I suppose you could start here. I think they at least describe simple fuzzification and defuzzification routines.
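If it helps, here is a hedged sketch of one of the simplest defuzzification routines (a weighted average over singleton outputs); the rule strengths and output levels are hypothetical placeholders:

/* Combine fired rules into one crisp output: each rule i fired with
   strength[i] (0..1) and recommends output_level[i]. */
float defuzzify(const float strength[], const float output_level[], int n)
{
    float num = 0.0f, den = 0.0f;
    for (int i = 0; i < n; i++) {
        num += strength[i] * output_level[i];
        den += strength[i];
    }
    return (den > 0.0f) ? num / den : 0.0f;  /* 0 if no rule fired */
}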
The guys at MakeProto have created an automatic code generator for fuzzy systems that outputs C code from MATLAB fuzzy systems or from a hand-defined fuzzy system.
Might be worth taking a look at.
http://makeproto.com/blog/?p=35
A fuzzy inference system can be implemented in both C and C++. Learn how to frame fuzzy logic in C.