I am looking to do a performance analysis (using the µVision Performance Analyzer tool) to benchmark the runtime of functions in a program written in C and ASM. The program runs as it should, and I'm able to use the Performance Analyzer in debug simulation. However, when I debug the code on the actual environment (that is, NOT in simulation) I am unable to use the Performance Analyzer.
The program works as it should when run on the target environment, confirmed by checking variable values using the Watch windows. Would anyone be able to tell me how I can use the Performance Analyzer when debugging on the actual environment? Or, would anyone be able to direct me to sources relevant to my issue?
NB: The program does not access any peripherals.
Thanks.
In our company we develop bare-metal embedded software for microcontrollers. Until now we have been doing manual unit testing on targets or simulators, especially for Renesas microcontrollers (the RL78 and RX families). We're now planning to move to automated unit tests. The idea is to integrate them into our existing CI system.
At this point we've got a dilemma. Until now we've been running unit tests using the same compiler and target (or simulator) that is later used to deploy the software into production. We'd like to maintain this approach, as the developers (and everybody else) especially appreciate testing and deploying under the same conditions. So the idea would be to take a testing tool/library written in C that allows us to compile and run the tests in an embedded environment using a simulator. (Ex. http://www.throwtheswitch.org/unity)
But, on the other side, we face two upcoming situations that make the dilemma arise:
We're moving more and more to Cortex MCUs, where it's more difficult to get specific simulators that allow automation. (Ex. the Renesas RA family)
Many of the advanced testing tools are developed in C++ and designed for a PC environment using the gcc/g++ compiler on an x86 architecture, which doesn't match the Cortex targets, compiled with arm-none-eabi-gcc, that we foresee using.
So, at this point, we're wondering, and this would be my question: how reliable can unit tests run under gcc be if our final target will be a Cortex MCU and the binaries will ultimately be generated using arm-none-eabi-gcc? Indirectly, I'd be asking about the differences between gcc and arm-none-eabi-gcc when compiling for different targets.
I'd appreciate feedback from someone familiar with gcc internals who may have coped with the same kind of problem.
Thanks in advance,
Ignasi Villagrasa
Generally, simulators are useless, but especially so for production testing. Since it is an embedded system, you want to test software and hardware both - testing software without the intended MCU and hardware in place is just nonsense.
If you insist on using fluffware like simulators or PC "test suites" then realize:
It is an incomplete test which does not test core functionality of your product.
It cannot be used to test drivers/hardware-related code, it can only test abstract algorithms.
It can only be used for development testing, never for production testing.
As for how to correctly test your specific embedded system, it depends on the application and what the product is supposed to do. If you do your projects by the book then you have: Specification, leading to implementation, leading to tests. The sole purpose of a test is to verify that the implementation follows the specification.
So if the specification says that the product should activate 10 relays, you will need to flash the software onto the live MCU on the real PCB and a correctly performed test then verifies that all 10 relays get activated as they should.
This complete and correct product test cannot be done in any other way. So ask yourself if you actually need the incorrect and incomplete simulated test at all. Perhaps your development-related testing should focus on more meaningful things like design reviews, coding standards, static analysis, code reviews etc.
I can't figure out how a desktop environment developer tests his code. Usually, a C or C++ programmer compiles his code and then runs it (I'm not one of those programmers; I'm a web one).
So, you usually build your GUI application on top of some kind of desktop environment (Windows, Mac OS X, GNOME, KDE, Xfce...), so how do they build and test their GUI desktop?
And if this is a silly question: how does a kernel programmer test his code? For example, the Linux kernel? How do you know that what you just wrote works?
Testing is a very broad term; there are many types (partial list):
unit tests - test small pieces of code. test that the code behaves as expected.
system tests - test whole application in real world scenarios.
performance tests - test what is the performance of the application or part of it.
GUI testing - test operation of GUI elements (less common as automated tests)
static analysis - compiler warnings on steroids
dynamic analysis - at a minimum memory checks - check mem allocations and usage
coverage tests - check that all code is executed.
formal verification tests (very advanced) - e.g. check when assertions/assumptions are broken.
Kernel code can be debugged by connecting a 2nd computer (host). Virtual machines use the same principle and simplify the setup, but this can't always work, as the HW might not exist in the guest VM.
The kernel (all OSes) has trace mechanism(s) for printing progress/problems. In Linux the simple trace is shown via the dmesg command (prints a cyclic buffer).
User mode code can easily be stopped and debugged via a debugger.
Desktop Environments
Testing desktop environments in real-world scenarios can be kind of annoying, so the developer has to watch out for every small error he makes; if he doesn't, he will have a hard time developing the DE.
As stated by @egur, there are multiple ways of testing his code. The easiest and most important one (though it cannot be used in some cases, of course) is to test that code in a simplified program.
A Desktop Environment consists of many parts, however, in your case, I suppose you're talking about the session manager (or window manager) which is responsible for almost everything. So, if he were to test that, he would simply exit his current DE and use the new executable. In case of some error, he can always keep a backup of the old executable or fix the faulty code using some commandline text editor (like vim, or nano).
Kernel
It's quite hard to test. Some kernel developers just write some code, make sure it's fine and compiles, then simply let the users test it (by ACK'ing the code, etc.), and then it can be submitted into the kernel tree. The reasoning behind that is that the developer may not have the hardware needed to test the code.
Right now, you can compile and run the kernel in user mode (UML), if you have heard of it, so some developers may go for that. However, some developers may also want to test it themselves (they of course back up the current kernel in case of a screw-up).
The way to test a desktop application is related to how you can control the application unattended or remotely.
The Cross Platform GUI Test Automation project (I don't know if this project has a website) helps you to choose the interfaces/libraries required to solve the problem.
On Linux, LDTP[1] uses the accessibility libraries to control the application; you have Cobra[2] for Windows and PyATOM[3] for macOS, but I don't know what kind of technology is used on those platforms.
http://ldtp.freedesktop.org/wiki/
https://github.com/ldtp/cobra
https://github.com/pyatom/pyatom
I wrote some C-style functions in a WP8 C++ runtime component. Every function takes pointers to const input and output arrays. The Debug version works great, but in Release some functions work incorrectly. The strange part is simple: these functions have the same interface and work with pointers in the same way, yet some functions work correctly and other functions work wrong.
What are the standard problems that exist when switching from Debug to Release with the WP8 SDK in Visual Studio 2012?
The problems are the same as any other C/C++ Debug/Release build configuration - the exact issues will depend on your Debug/Release settings and what your code does.
Typically:
the optimizer will move code and data around and/or remove code.
Release code will also typically run faster due to the optimizer, so you will notice changes due to race-conditions.
You will need to get used to debugging in the Release configuration on a real device. Getting the same code to run reliably on the Emulator will also help you find some race conditions (as the x86 Emulator is faster than the ARM devices).
See "Release /Debug hell, with V-studio C++ project", "Separate 'debug' and 'release' builds?".
I'm working on a project that will have builds for Windows and Linux, 32- and 64-bit.
This project is based on loading strings from a text file, processing them, and writing the results to an SQLite3 database.
On Linux it reaches almost 400k sequences per second, compiled with GCC without any optimization. However, on Windows it gets stuck at 100k sequences per second, compiled with VS2010 without any optimization.
I tried using optimizations in compilers but nothing changed.
Is this right? C code on Windows runs slower?
EDIT:
I think I need to be more clear on some points.
I ran tests with code optimization enabled AND disabled. Performance didn't change, probably because my program's bottleneck is the time spent reading data from the HDD.
This program takes advantage of parallel computing: there is a queue where one thread enqueues processed data and another dequeues it to write to the SQLite database. This way I don't think there is any performance loss from this.
Is this right? C code on Windows runs slower?
No. C doesn't have speed. It's the implementations of C that introduce speed. There are implementations that produce fast behaviour (generally "compilers that produce fast machine code") and implementations that produce slow behaviour for both Windows and Linux.
It isn't just Windows and Linux that are significant here, either. Some compilers optimise for specific processors, and will produce slow machine code for any other processors.
I tried using optimizations in compilers but nothing changed.
Testing speed without optimisations enabled makes no sense. However, this does tend to indicate that something else is slow. Perhaps the implementation that produced the library files for SQLite3 client in Windows is an implementation that produces slow code. I'd start by rebuilding the lot (including the SQLite3 library) with full optimisations enabled. Following that, you could try using a profiler to determine where the difference is and use the results to perform intelligent optimisations to your code.
What is the best way to profile and optimize clutter-box2d application on an arm target?
I have tried using Valgrind to profile the code on x86 before porting, but it doesn't seem to help: the ported application still runs considerably slowly on the ARM target.
I wasn't able to get Valgrind working properly on the ARM target to profile and identify bottlenecks.
I used OProfile a bit, but it gives a system-wide snapshot and doesn't do much good, since it does not produce call graphs.
If all else fails (and you are on a glibc-based system) you can go the traditional route and use gprof to collect profiling data.
http://en.wikipedia.org/wiki/Gprof