Change UVM verbosity during run time of simulation

In the simulation, after doing a VCS Save (for more details: https://blogs.synopsys.com/vip-central/2014/12/30/run-time-save-restore-strategy-with-uvm-vcs/) with low verbosity, I am trying to do a VCS Restore from the saved checkpoint and run the rest of the tests with high verbosity.
Passing +uvm_set_verbosity="*,_ALL_,UVM_HIGH,run" through the command line still resulted in low verbosity, which is the verbosity that was in effect when the checkpoint was created.
Does anyone have a suggestion for logging with high verbosity across a VCS Save-Restore?

Yes, the UVM infrastructure only reads the plusargs at the start of the simulation. When you "restore" a simulation, you are effectively starting in the middle of the simulation. Your new plusargs don't take effect because the infrastructure knows only about the original ones.
If you are going to take this approach, then upon restore you should manually query the UVM_VERBOSITY plusarg (as well as anything else you might want, for example "what file do I load into RAM?"). Then you do what is required with that information; for example, you could call set_report_verbosity_level_hier() on your top-level. (See the code that does this in uvm_root.svh.)
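For illustration, here is a sketch of what that could look like after the restore. The function name is hypothetical; the plusarg handling just mirrors what uvm_root.svh does with +UVM_VERBOSITY:

// Hypothetical post-restore helper: re-read +UVM_VERBOSITY and
// re-apply it to the entire component hierarchy.
function void reapply_verbosity_after_restore();
  string   verb_str;
  int      verbosity = UVM_MEDIUM;   // fallback if no plusarg is given
  uvm_root top = uvm_root::get();
  if ($value$plusargs("UVM_VERBOSITY=%s", verb_str)) begin
    case (verb_str)
      "UVM_NONE"  : verbosity = UVM_NONE;
      "UVM_LOW"   : verbosity = UVM_LOW;
      "UVM_MEDIUM": verbosity = UVM_MEDIUM;
      "UVM_HIGH"  : verbosity = UVM_HIGH;
      "UVM_FULL"  : verbosity = UVM_FULL;
      default     : verbosity = verb_str.atoi();
    endcase
  end
  top.set_report_verbosity_level_hier(verbosity);
endfunction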


Creating a Gmod Lua Timer

I was wondering how to create a timer in Lua that also works on servers with no players on.
Timer.Simple or Timer.Create don't work, they need CurTime().
How could I do it?
Well one option is always to set the convar sv_hibernate_think to 1.
That is also the option provided on the official Wiki as shown here.
Depends on what's available. You likely can't import any extra libs, and Lua's capabilities have probably been nerfed.
If standard clock capabilities still exist, you can do something with
local init, pause = os.clock(), 3
while os.clock() - init < pause do end
I don't know your exact use; it could be made into a function if need be, as in the sketch below. Note that this will eat up clock cycles. If coroutines exist, you might be able to have another script running in the background while occasionally checking on the timer.
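A minimal sketch of both ideas in plain Lua, assuming os.clock and coroutines are still available (all names are illustrative):

-- Busy-wait helper: burns CPU until the delay elapses
local function busyWait(seconds)
    local init = os.clock()
    while os.clock() - init < seconds do end
end

-- Coroutine variant: yields instead of blocking, so the caller can
-- resume it periodically and do other work in between
local function makeTimer(seconds, callback)
    return coroutine.create(function()
        local init = os.clock()
        while os.clock() - init < seconds do
            coroutine.yield()
        end
        callback()
    end)
end

-- Usage: poll the timer from some loop that runs regularly
local t = makeTimer(3, function() print("3 seconds passed") end)
while coroutine.status(t) ~= "dead" do
    coroutine.resume(t)
end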

Determining cause of delay/pause - kernel scheduler etc

System is an embedded Linux/Busybox core on a small embedded board with a web server (Boa) running.
We are seeing some high latency in responses from the web server - sometimes >500ms for no good reason, so I've been digging...
After liberally scattering debug prints throughout the code, it seems to come down to the entire process just... stopping for a bit, in a way I can only assume means the process/thread is being preempted by another process.
Using print statements and clock_gettime() to calculate the time taken to process a request, I can see the code reach the bottom of a while() loop (parsing input), print something like "Time so far: 5ms", and then the next line at the top of the loop will print "Time so far: 350ms" - and all the code does between the bottom of the loop and the first print back at the top is a basic check along the lines of while(position < end); there is nothing complicated that could hold it up.
There's no IO blocking, the data it's parsing has all arrived already, and it's not making any external calls or wandering off into complex functions.
I then looked into whether the kernel scheduler (CFS in our case) might be holding things up. Adding calls to clock() (processor time rather than wall-clock) and again calculating time differences, I can see that the wall-clock delay from one loop iteration to the next may run beyond 300ms, while the reported processor time taken (which seems to have a ~10ms resolution) is more like 50ms.
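A minimal sketch of that measurement technique (illustrative names, not the actual server code): a large wall-clock delta combined with a near-zero CPU-time delta means the process simply wasn't running for most of that interval.

#include <stdio.h>
#include <time.h>

/* wall-clock time in ms */
static double wall_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
}

/* CPU time consumed by this process, in ms (coarse resolution) */
static double cpu_ms(void)
{
    return (double)clock() * 1000.0 / CLOCKS_PER_SEC;
}

int main(void)
{
    double w0 = wall_ms(), c0 = cpu_ms();
    for (int i = 0; i < 100000; i++) {
        /* ... parse one chunk of input here ... */
        double w1 = wall_ms(), c1 = cpu_ms();
        if (w1 - w0 > 100.0)   /* stalled for >100ms of wall time */
            printf("iter %d: wall +%.1fms, cpu +%.1fms\n",
                   i, w1 - w0, c1 - c0);
        w0 = w1; c0 = c1;
    }
    return 0;
}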
So, that suggests the task scheduler is holding the process up for hundreds of milliseconds at a time. I've checked the scheduler granularity and maximum delay, and neither is anywhere near 100ms; scheduler latency is set to 6ms, for example.
Any advice on what I can do now to try and track down the problem - identifying processes which could hog the CPU for >100ms, measuring/tracking what the scheduler is doing, etc.?
First you should try running your program under strace to see if there are any system calls holding things up.
If that is ambiguous or does not help, I would suggest you try profiling the kernel. You could try OProfile.
This will create a call graph that you can analyse to see what is happening.
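For example, attaching to the running server (an illustrative invocation; -tt prints timestamps, -T the time spent in each system call, -f follows child processes):

strace -f -tt -T -p $(pidof boa) -o /tmp/boa.trace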

How to control a windowed watchdog (WWDG) with dynamically scaling CPU frequencies?

I have a project using an ARM Cortex-M4 with CPU frequency scaling dependent on the workload. I would like to use the WWDG because it allows a lot more options, like an interrupt on watchdog expiry. The question is: is there any standard workaround for a CPU tick of variable time length?
There are very different solutions for that. Which to choose depends on your setting and your application (more precisely, on its criticality). If the WD is used only to detect a hang in an uncritical application, i.e., one with no serious danger of harm to humans or animals and no expensive material damage, then a normal WD with relaxed timing is absolutely sufficient. If the application is critical and you expect serious misbehaviour when a lower time limit is underrun, then a WWDG can be used.
So I have two possible solutions in mind, one simple and one complex; which one is best for your use case depends on what you require of your system (I cannot judge, as you didn't say what kind of system you are working on). The first solution would be to configure the WWDG in such a way that its limits are fulfilled with any of the clock settings. The configuration is then quite relaxed, but sufficient for many use cases, and you don't have to account for the dynamic switching of clock frequencies.
The more complex solution is to measure the time between two clock changes and determine the target time until the next WD service using the newly selected frequency. When no further change happens in between, the WD is served at that time. Otherwise you take the interval at the latest frequency into account and calculate the next relative timestamp at which the WD has to be served (see the sketch below). Whether this can be realised depends on the timing you require: if your timing is very tight (e.g., <1ms), then this would not really be a viable option. On the other hand, if the calculation is complex, you obtain a simple challenge-response WD that checks the health of your ALU in addition to the timing.
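Below is a rough sketch of that bookkeeping. All names, the tick source, and the 10ms period are illustrative assumptions, not taken from the question:

#include <stdint.h>

#define WDG_PERIOD_US 10000u  /* assumed nominal service period: 10ms */

extern uint32_t read_tick_counter(void); /* assumed free-running timer */
extern void serve_watchdog(void);        /* assumed WWDG refresh       */

static uint32_t elapsed_us;   /* window time consumed so far, in us   */
static uint32_t last_tick;    /* tick count at the last update        */
static uint32_t ticks_per_us; /* derived from the current CPU clock   */

void wdg_init(uint32_t tpu)
{
    last_tick    = read_tick_counter();
    ticks_per_us = tpu;
    elapsed_us   = 0;
}

/* Call from the clock-switching code, just before changing frequency:
 * book the ticks elapsed at the old frequency as microseconds. */
void wdg_on_clock_change(uint32_t new_ticks_per_us)
{
    uint32_t now = read_tick_counter();
    elapsed_us  += (now - last_tick) / ticks_per_us;
    last_tick    = now;
    ticks_per_us = new_ticks_per_us;
}

/* Call periodically: serve the WD once the recalculated target time
 * for the current frequency has been reached. */
void wdg_poll(void)
{
    uint32_t now   = read_tick_counter();
    uint32_t total = elapsed_us + (now - last_tick) / ticks_per_us;

    if (total >= WDG_PERIOD_US) {
        serve_watchdog();
        elapsed_us = 0;
        last_tick  = now;
    }
}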

U-Boot prompt timeout

I'm accessing U-Boot's console via a serial connection, and when U-Boot prompts me to enter commands, it seems that I have limited time to do so. I want to enter several commands, but I need more time.
Has anyone experienced such an issue, and how can I increase that time (if that is the problem)?
U-Boot's Boot Retry Mechanism, AKA, Preventing Eternally Hung Boot
Having the U-Boot command prompt timeout can actually be desirable behavior, as without this an inadvertent interruption of the boot could leave a system permanently stuck at the U-Boot prompt until the next power cycle.
Given this, in addition to the hardware watchdog possibility mentioned by Tom Rini, it is also possible that your U-Boot build could be set up with the "Boot Retry" feature - and it is not unlikely that others finding this page will (as I was) be seeking a way to intentionally cause such behavior.
If you see the following, you likely have boot retry:
Timeout waiting for command
resetting ...
Three build-time configuration options and one run-time variable govern boot retry:
CONFIG_BOOT_RETRY_TIME is the default number of seconds without a valid command, after which the (still interruptible) auto boot sequence will be automatically re-run.
bootretry is an environment variable containing the current delay in effect. Negative values mean boot retry will not occur. Unfortunately, this value is only sampled on startup - changing it will not prevent boot retry in the current session.
CONFIG_BOOT_RETRY_MIN is a safety limit on the above environment variable; however, it appears that negative or disabling values get a pass through the check. This makes it harder to deduce the intended usage of this setting; if not explicitly set in the config, it is assigned the value of CONFIG_BOOT_RETRY_TIME.
CONFIG_RESET_TO_RETRY is an option which means that instead of directly resuming the autoboot sequence, the processor will reboot. This may in fact be the only supported way of using boot retry; it seems you get a build error asking you to set it if you do not.
Critical note: Except in a few patched forks, these are not KConfig options which you can put in your board_defconfig, but rather #define's which must go in a C header file of the code itself, specifically one applicable to the system configuration which you build.
Disable Boot Retry
If you saw the above timeout message and suspect that boot retry is at fault, there are a few possible ways to stop it.
First, if your u-boot supports saving environment variables persistently, you could
u-boot> setenv bootretry -1
u-boot> saveenv
and then reboot. A few systems may still have an ancient bug which prevents parsing a negative value, in which case you could use a large positive one, such as 3600 seconds (one hour).
But unfortunately, you cannot do this without saving the environment variable, as it is only read on startup. To enable using the environment variable as a temporary override for maintenance, you could do something like this to re-evaluate it each time the timeout is reset by a valid command:
--- a/common/bootretry.c
+++ b/common/bootretry.c
@@ -39,6 +39,7 @@ void bootretry_init_cmd_timeout(void)
  */
 void bootretry_reset_cmd_timeout(void)
 {
+	bootretry_init_cmd_timeout();	// pick up any environment change
 	endtime = endtick(retry_time);
 }
This seems to work, in that you can set the bootretry to -1 for extended manual maintenance. It also seems you can set the bootretry to longer than default, but for reasons not understood, trying to set it shorter does not seem to work.
There does appear to be at least part of a designed-in mechanism where configuring CONFIG_AUTOBOOT_STOP_STR and then entering that string is supposed to stop the boot retry mechanism, but I couldn't get that to work or find any useful hits when searching on it.
To remove the boot retry feature entirely
To remove the boot retry feature entirely, find where it is being defined in code applicable to your board (grep -r CONFIG_BOOT_RETRY * or similar), remove that, rebuild and reflash.
To achieve boot retry as a desired feature
First, put the necessary #define in a header applicable to your specific board. For example, if you had an Allwinner SoC you might do:
--- a/include/configs/sunxi-common.h
+++ b/include/configs/sunxi-common.h
@@ -16,6 +16,8 @@
 #include <asm/arch/cpu.h>
 #include <linux/stringify.h>
+#define CONFIG_BOOT_RETRY_TIME 60	// command prompt will timeout
+#define CONFIG_RESET_TO_RETRY		// required for above on this chip
+
 #ifdef CONFIG_OLD_SUNXI_KERNEL_COMPAT
 /*
  * The U-Boot workarounds bugs in the outdated buggy sunxi-3.4 kernels at the
Then rebuild u-boot, probably something like this:
make CROSS_COMPILE=~/path/to/gcc-xxx-yyy-zzz-/bin/xxx-yyy-zzz- clean
make CROSS_COMPILE=~/path/to/gcc-xxx-yyy-zzz-/bin/xxx-yyy-zzz- your_board_defconfig
make CROSS_COMPILE=~/path/to/gcc-xxx-yyy-zzz-/bin/xxx-yyy-zzz-
Repackage the result appropriately and flash it to your board.
Warning: Always make sure you have a backup means of booting or flashing before over-writing the existing U-Boot!
Depending on your board, that might be something like the ability of the hardware itself to boot from an SD card or USB stick, to accept code pushed via a USB utility, or the ability to start the board via JTAG or similar. In a pinch, some SoCs will release the lines to an SPI flash if you hold them in reset, allowing you to use an external programmer - but others will not release the lines, meaning you have to desolder the flash chip. Loading a bad U-Boot onto a board where you have no other way of injecting code but through U-Boot itself can result in a brick!
Without more details (such as platform, config and version), it's hard to say. Under normal circumstances the only timeout you have is the one to stop the automatic boot. If the board is resetting reliably after N seconds of being on, it is likely that a watchdog is being triggered and U-Boot is not configured to know about it and either disable it or periodically pet it to keep the system from resetting.
I don't understand why these CONFIG options aren't part of Kconfig, so that one could configure them with "make menuconfig" and also save the settings in _defconfig files.
That makes the most sense rather than having to add and customise header files.
It's 2021 now. I wonder if it's worth submitting a patch?

How to Disable/Delay the Watchdog Timer for a certain Task in an embedded system

I'm working on a project for an automotive system where we use the MPC5748 MCU. The application uses an RTOS based on AUTOSAR OS, and this MPC target supports two types of watchdog: software and hardware (they have used the software WDT).
My mission is to fit an algorithm into this application. The development of the algorithm is done; the problem is that the task in which the algorithm runs is a 1ms task, and the algorithm needs much more time than the time dedicated to this function.
I'm a newbie to the embedded world. By the way, in the algorithm's main function the program resets itself, and this seems to be a timeout generated by the expiration of the watchdog.
My questions are:
Can I disable the watchdog timer for this specific function (it must not stay disabled; this is just for testing purposes)? Is it possible to use a longer timeout for the watchdog in that specific function?
Must I develop another task with a big delay in order to run the algorithm? But the problem is that the algorithm needs to be synchronised with the 1ms task, since we are receiving CAN commands.
Can I add a sleep (<1ms) in the desired function in order to wait a little bit without affecting other tasks?
What are other options to try?
NB: This is a general problem with watchdog timers, and any useful information will be very helpful to me. Sorry, I can't share the code.
Can I disable the watchdog timer for this specific function (it must not stay disabled; this is just for testing purposes)? Is it possible to use a longer timeout for the watchdog in that specific function?
Let's forget that one - it is a really bad idea. If it is possible to defeat the watchdog, then it is possible to do so by error, and then the whole point of the watchdog is defeated. Apart from that, it's an XY question - a question about your proposed solution to a different problem - you should ask about the problem directly.
Must I develop another task with a big delay in order to run the algorithm? But the problem is that the algorithm needs to be synchronised with the 1ms task, since we are receiving CAN commands.
Yes, you need another task, but you should not add a "big delay": that is probably unnecessary and certainly a bad design. If the 1ms task needs the result of the algorithm, then the algorithm should run in a service task that is triggered by the 1ms task and runs asynchronously to it; the service task then makes the results available to the 1ms task when they are ready (by shared memory or message passing, perhaps). Alternatively, if the result is not specifically needed by the 1ms task, the service task could take the necessary action independently of the 1ms task.
There are many options, but essentially it seems that your task partitioning is inappropriate; your CAN Rx task should be responsible for receiving CAN messages only, with any action required in response to those messages deferred to one or more other tasks, perhaps fed from a message queue, as in the sketch below.
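A sketch of that partitioning (illustrative only: can_msg_t, result_t, the msg_queue_* and activate_task primitives, and the task names are placeholders, not actual AUTOSAR APIs):

/* 1ms task: receive CAN messages, hand the work off, never block. */
void Task_1ms(void)
{
    can_msg_t msg;
    while (can_rx_available()) {          /* drain the CAN mailboxes  */
        can_read(&msg);
        msg_queue_put(&algo_queue, &msg); /* defer the heavy work     */
    }
    activate_task(TASK_ALGO);             /* trigger the service task */
    /* ... other 1ms work, using the last published result ... */
}

/* Lower-priority service task: runs the long algorithm asynchronously. */
void Task_Algo(void)
{
    can_msg_t msg;
    while (msg_queue_get(&algo_queue, &msg)) {
        result_t r = run_algorithm(&msg); /* may take many milliseconds */
        publish_result(&r);               /* shared memory or a message */
    }
}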
What are other options to try?
Software design should not be a matter of trial and error - get the design right, then implement it. However, you might consider whether 1ms is appropriate: is it possible that the period can be extended to encompass the worst-case execution time without causing deadlines to be missed in general? If the answer is "no", then the algorithm does not belong in this task.
I don't think you can disable/delay the watchdog timer, and even if you could, that's not a good option to go for.
The problem, I think, is that the task you are calling has a 1ms period, which is very little time in which to read CAN messages and then operate on them. The minimum task period, I think, should be 5ms, and the optimal one 10ms.
Can I disable the watchdog timer for this specific function (it must not stay disabled; this is just for testing purposes)? Is it possible to use a longer timeout for the watchdog in that specific function?
You should never disable the watchdog anywhere in your code.
It might not even be possible: on the MPC5x families you typically set up the watchdog once, and then for safety reasons all watchdog registers become read-only.
Must I develop another task with a big delay in order to run the algorithm? But the problem is that the algorithm needs to be synchronised with the 1ms task, since we are receiving CAN commands.
Ideally you should only service the watchdog from one single location in the program. Your CAN peripheral will be FlexCAN, which has a lot of available "mailboxes" for CAN messages. In most cases you shouldn't need to poll it; instead, a flag will be set when the desired message arrives.
So it isn't obvious to me why you would need a delay to wait for them. Simply do:
void the_task (void)
{
    wdog_refresh();

    ... // do other things

    if (can_message_available)
    {
        // do something with the message
    }

    ... // do other things
}
rather than
// BAD:
while (!can_message_available)
    ; // do nothing
Even if you need to use the CAN as FIFO and poll it repeatedly, you would still use the same approach. You'd just have to ensure that the task runs often enough that there will never be an overflow in the FIFO buffer.
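As a rough, illustrative sizing calculation (the numbers are assumptions, not from the question): at 500 kbit/s a minimal CAN frame occupies the bus for on the order of 100 µs, so a 6-entry RX FIFO could fill in roughly 0.6 ms of back-to-back traffic; the polling task would then need a period comfortably below that to guarantee no overflow.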
