I'm writing an audio device driver which needs to process device interrupts in real time. When the CPU enters the C3 state, interrupts are delayed, causing problems for the driver. Is there a way for the driver to tell the OS not to enter idle C-states?
What I've found is that idle C-states can be disabled from user-space:
const DWORD DISABLED = 1;
const DWORD ENABLED = 0;
GUID *scheme;
PowerGetActiveScheme(NULL, &scheme);
PowerWriteACValueIndex(NULL, scheme, &GUID_PROCESSOR_SETTINGS_SUBGROUP, &GUID_PROCESSOR_IDLE_DISABLE, DISABLED);
PowerSetActiveScheme(NULL, scheme);
However, it is a global setting, which can be overwritten by the user or another application (e.g. when the user changes the Power Plan).
What I need is something like PoRegisterSystemState, but for C-states rather than S- and P-states. (ref: https://learn.microsoft.com/en-us/windows-hardware/drivers/kernel/preventing-system-power-state-changes)
Is there any way to achieve this?
=====
It turns out there isn't a supported way to disable idle C-states from kernel space, and there isn't a user-space service that provides a common API to do this.
The way to control C-states is from "Processor Power Management" in "Change advanced power settings" dialog, through registry, or via C API PowerWriteACValueIndex / PowerWriteDCValueIndex.
The original problem was delayed interrupts in all but the C1 idle state, so I needed to disable C2, C3 and deeper idle states. The problem with disabling all idle C-states, including C1 (as shown in the example code PowerWriteACValueIndex(NULL, scheme, &GUID_PROCESSOR_SETTINGS_SUBGROUP, &GUID_PROCESSOR_IDLE_DISABLE, DISABLED)), is that the CPU usage is reported as 100%, and some applications (DAWs) get confused.
The solution for my problem is to disable all but C1 idle state, which can be done by setting the following values in the Processor Power Management:
- Processor idle threshold scaling -> Disable scaling;
- Processor idle promote threshold -> 100%;
- Processor idle demote threshold -> 100%.
Perhaps I'll create a service that does just that, using the PowerWriteACValueIndex / PowerWriteDCValueIndex API.
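For reference, here is a minimal user-mode sketch of applying those three settings with the same PowerWriteACValueIndex pattern as above. I'm assuming the winnt.h GUIDs GUID_PROCESSOR_IDLE_ALLOW_SCALING, GUID_PROCESSOR_IDLE_PROMOTE_THRESHOLD and GUID_PROCESSOR_IDLE_DEMOTE_THRESHOLD correspond to these settings (verify with powercfg /q); the DC values would need PowerWriteDCValueIndex as well.
#include <initguid.h>   // so the power-setting GUIDs are actually defined in this translation unit
#include <windows.h>
#include <powrprof.h>
#pragma comment(lib, "PowrProf.lib")

void KeepOnlyC1(void)
{
    GUID *scheme = NULL;
    if (PowerGetActiveScheme(NULL, &scheme) != ERROR_SUCCESS)
        return;

    // GUID names below are assumed to match the three settings; verify before use.
    // Processor idle threshold scaling -> Disable scaling (0)
    PowerWriteACValueIndex(NULL, scheme, &GUID_PROCESSOR_SETTINGS_SUBGROUP,
                           &GUID_PROCESSOR_IDLE_ALLOW_SCALING, 0);
    // Processor idle promote threshold -> 100%
    PowerWriteACValueIndex(NULL, scheme, &GUID_PROCESSOR_SETTINGS_SUBGROUP,
                           &GUID_PROCESSOR_IDLE_PROMOTE_THRESHOLD, 100);
    // Processor idle demote threshold -> 100%
    PowerWriteACValueIndex(NULL, scheme, &GUID_PROCESSOR_SETTINGS_SUBGROUP,
                           &GUID_PROCESSOR_IDLE_DEMOTE_THRESHOLD, 100);

    PowerSetActiveScheme(NULL, scheme);   // re-apply the scheme so the changes take effect
    LocalFree(scheme);
}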
I would like to trigger a timer with an external signal which occurs every 1 ms. The timer then has to count up to 90 µs at every rising edge of the external signal. The question is: can I do that using a general-purpose timer configured for input capture? I don't understand which callback to use for this purpose.
I'm using the HAL library and the TIM2 peripheral on an STM32F446 microcontroller.
This is how I configured my timer peripheral:
TIM_HandleTypeDef htimer2;   // global timer handle

void TIMER2_Config(void)
{
    TIM_IC_InitTypeDef timer2IC_Config;

    htimer2.Instance = TIM2;
    htimer2.Init.CounterMode = TIM_COUNTERMODE_UP;
    htimer2.Init.Period = 89;       // Fck = 50 MHz, timer period = 90 us
    htimer2.Init.Prescaler = 49;
    if (HAL_TIM_IC_Init(&htimer2) != HAL_OK)
        Error_handler();

    timer2IC_Config.ICPolarity = TIM_ICPOLARITY_RISING;
    timer2IC_Config.ICPrescaler = TIM_ICPSC_DIV1;
    if (HAL_TIM_IC_ConfigChannel(&htimer2, &timer2IC_Config, TIM_CHANNEL_1) != HAL_OK)
        Error_handler();
}
What you are asking for is well within the features of this peripheral, but you must remember that the HAL library is not capable of using the full features of the chip. Sometimes you have to access the registers directly (the LL library is another way to do this).
To have the external signal start the timer you need to use trigger mode, not input capture. Input capture means recording the value of a timer that is already running. You need to set the field TIMx_CCMRx_CCxS to 0b11 (3) to make the input a trigger, then set the field TIMx_SMCR_TS to select the channel you are using, and TIMx_SMCR_SMS to 0b110 (6) to select start-on-trigger mode.
Next set up the prescaler and reload register to count out the 90 microsecond delay that you want, and set TIMx_CR1_OPM to 1 to stop the counter wrapping when it reaches the limit.
Next set TIMx_CR2_MMS to 0b010 to output a trigger on the update event.
Finally you can set the ADCx_CR2_EXTSEL bits to 0b00110 to trigger on TIM2_TRGO trigger output.
This is all a bit complicated, but the reference manual is very thorough; you should read the whole chapter through and check every field in the register description section. I would recommend not mixing the HAL library with direct register access, as the HAL will probably interfere with what you are trying to do.
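In case it helps, here is a rough register-level sketch of the sequence described above, assuming CMSIS device header names for the STM32F446 and that the external 1 ms signal arrives on TIM2 channel 2 (adjust TS and the channel to your pin). The field values follow this answer, so check them against the reference manual before relying on them.
#include "stm32f4xx.h"

void tim2_trigger_90us(void)
{
    RCC->APB1ENR |= RCC_APB1ENR_TIM2EN;      // enable the TIM2 clock

    TIM2->PSC = 49;                          // 50 MHz / 50 = 1 MHz -> 1 us per tick
    TIM2->ARR = 89;                          // 90 ticks = 90 us
    TIM2->EGR = TIM_EGR_UG;                  // latch PSC/ARR before TRGO output is configured

    TIM2->SMCR = (6U << 4)                   // TS  = 110: TI2FP2 is the trigger input
               | (6U << 0);                  // SMS = 110: trigger mode, counter starts on the input edge
    TIM2->CR1 |= TIM_CR1_OPM;                // one-pulse mode: stop at the update event instead of wrapping
    TIM2->CR2 = (TIM2->CR2 & ~(7U << 4))
              | (2U << 4);                   // MMS = 010: update event drives TRGO (e.g. for an ADC)
    // CC2P/CC2NP are left at their reset value 0, so the trigger edge is rising.
}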
I am a newbie to CANopen. I wrote a program that reads the actual position via PDO1 (the default mapping is statusword + actual position).
void canopen_init() {
    // code 1: set up PDO mapping
    nmtPreOperation();
    disablePDO(PDO_TX1_CONFIG_COMM);
    setTransmissionTypePDO(PDO_TX1_CONFIG_COMM, 1);
    setInhibitTimePDO(PDO_TX1_CONFIG_COMM, 0);
    setEventTimePDO(PDO_TX1_CONFIG_COMM, 0);
    enablePDO(PDO_TX1_CONFIG_COMM);
    setCyclePeriod(1000);
    setSyncWindow(100);

    // code 2: enable operation
    readyToSwitchOn();
    switchOn();
    enableOperation();
    motionStart();

    // code 3
    nmtActiveNode();
}
int main(void) {
    canopen_init();
    while (1) {
        delay_ms(1);
        send_sync();
    }
}
If I remove "code 2" (the servo is in the Switch_on_disabled state), I can read the position each time a SYNC is sent. But if I include "code 2", the drive reports a "sync frame timeout" error. I don't know whether the drive or my code has the problem. Does my code have a problem? Thank you!
I don't know what protocol stack this is or how it works, but these:
setCyclePeriod(1000);
setSyncWindow(100);
likely correspond to these OD entries:
Object 1006h: Communication cycle period (CiA 301 7.5.2.6)
Object 1007h: Synchronous window length (CiA 301 7.5.2.7)
They set the SYNC interval and time window for synchronous PDOs respectively. The latter is described by the standard as:
If the synchronous window length expires all synchronous TPDOs may be discarded and an EMCY message may be transmitted; all synchronous RPDOs may be discarded until the next SYNC message is received. Synchronous RPDO processing is resumed with the next SYNC message.
Now if you set this sync time window to 100 us but use a sloppy busy-wait delay like delay_ms(1), that doesn't add up. If you write zero to Object 1007h, you disable the sync window feature; I suppose setSyncWindow(0); might do that. You can try that to see if it's the issue. If so, you should drop your busy-wait in favour of proper hardware timers, one for the SYNC period and one for the PDO timeout (if you must use that feature).
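For illustration, a minimal sketch of that change, reusing the question's own helper names (setCyclePeriod, setSyncWindow, send_sync) together with a hypothetical 1 ms hardware-timer callback; the real timer API depends on your MCU and stack.
void canopen_sync_config(void) {
    setCyclePeriod(1000);   // object 1006h: 1000 us communication cycle period
    setSyncWindow(0);       // object 1007h = 0: disable the synchronous window check
}

// Called from a 1 ms hardware timer interrupt instead of the delay_ms(1) loop,
// so SYNC is produced with timer accuracy rather than busy-wait accuracy.
void on_timer_1ms(void) {
    send_sync();
}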
Problem fixed. Too much EMI from the servo kept my controller from working properly. After isolating it, everything worked very well :)!
I want to change my timer period while the program is running.
I take different measurements that require different timer periods.
After initialization:
TIM_TimeBaseInitStructure.TIM_Period = period - 1;
TIM_TimeBaseInitStructure.TIM_Prescaler = 8399+1;
TIM_TimeBaseInitStructure.TIM_ClockDivision = TIM_CKD_DIV1;
TIM_TimeBaseInitStructure.TIM_CounterMode = TIM_CounterMode_Up;
TIM_TimeBaseInit(TIM3, &TIM_TimeBaseInitStructure);
In the main function I set period = 10000;
Then I receive a new value via UART and try to set another value:
arr3[0] = received_str[11];
arr3[1] = received_str[12];
arr3[2] = received_str[13];
arr3[3] = received_str[14];
arr3[4] = received_str[15];
arr3[5] = '\0';
per = atoi(arr3);
period = per;
But the timer period doesn't change. How can I do it?
This is the problem with HAL libraries. People who use them have no clue what is behind them.
What is the timer period?
It is the combination of the PSC (prescaler) and ARR (auto-reload register). The period is calculated as (ARR + 1) * (PSC + 1) / TimerClockFreq.
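For example, assuming an 84 MHz timer clock, PSC = 8399 and ARR = 9999 give (9999 + 1) * (8399 + 1) / 84 MHz = 1 s.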
When you try to change the period while the timer is running, you need to make sure it is done at a safe moment to prevent glitches. The safest moment is when the update (UG) event happens.
You have two ways of achieving it:
The update (UG) interrupt. In the interrupt routine, if the ARR or PSC value has changed, update the register. Bear in mind that the change may only take effect in the next cycle if the registers are shadowed.
The timer's DMA burst mode. It is more complicated to configure, but the hardware takes care of the register update on the selected event. The change is instant and register shadowing does not affect it. For more details, read the RM chapter about the timer DMA burst mode.
If you want to use the more advanced hardware features, forget about the HAL and program the bare registers directly, keeping full control.
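A rough sketch of the first option at register level (STM32F4 CMSIS names, assuming the TIM3 update interrupt is already enabled through DIER and the NVIC, and that the UART code stores the parsed value in a variable like new_period, both of which are my assumptions):
volatile uint32_t new_period = 0;     // written by the UART receive code (hypothetical name)

void TIM3_IRQHandler(void)
{
    if (TIM3->SR & TIM_SR_UIF) {
        TIM3->SR &= ~TIM_SR_UIF;          // clear the update flag
        if (new_period != 0) {
            TIM3->ARR = new_period - 1;   // with ARR preload enabled this latches at the next update
            new_period = 0;
        }
    }
}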
At run time we can change the timer period by updating the auto-reload register.
I have done this practically.
TIM5->ARR = Value; //This is for timer5
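If you also want the change to be glitch-free, enabling ARR preload makes the new value latch at the next update event; with the SPL used in the question that is roughly (TIM5 here as in the line above, sketch only):
TIM_ARRPreloadConfig(TIM5, ENABLE);    // buffer ARR: the written value is taken over at the next update event
TIM_SetAutoreload(TIM5, Value);        // equivalent to TIM5->ARR = Value, but via the SPL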
After some study, I found that SQLite has two kinds of caches: a "private page cache" and a "shared cache". I tried to use them and test the performance, but I am really confused about how to use them. Here are my questions:
Is the cache enabled by default?
What is the correct way to enable the private page cache and the shared cache?
Can I check the SQLite cache status with SQLITE_DBSTATUS_CACHE_USED and SQLITE_STATUS_PAGECACHE_USED? What is the difference between the two?
My ways to disable the cache / enable the private cache / enable the shared cache are as follows; are they right?
Disable the cache (enabled by default?):
ret = sqlite3_open_v2(db_name, db_handle, SQLITE_OPEN_READONLY, NULL);
Enable the private cache:
ret = sqlite3_open_v2(db_name, db_handle,
                      SQLITE_OPEN_READONLY | SQLITE_OPEN_PRIVATECACHE, NULL);
Enable the shared cache:
ret = sqlite3_open_v2(db_name, db_handle,
                      SQLITE_OPEN_READONLY | SQLITE_OPEN_SHAREDCACHE, NULL);
And I now check SQLite by looking at SQLITE_STATUS_PAGECACHE_USED and SQLITE_DBSTATUS_CACHE_USED.
This is where I am really stuck. There is always a value for SQLITE_DBSTATUS_CACHE_USED even if I didn't enable the cache. And SQLITE_STATUS_PAGECACHE_USED, no matter whether I enable or disable the cache, is only non-zero when I add the following code before initializing SQLite:
sqlite3_config(SQLITE_CONFIG_PAGECACHE, buf, sz, N);
It looks like the sqlite3_open_v2 flags are having no effect, for no apparent reason??
SQLite has a single kind of cache, the page cache, and it is always enabled.
When in shared-cache mode, multiple connections in the same process can share the page cache. So you will not see any difference as long as you are using a single connection.
(Shared-cache mode is intended for multithreaded servers running in a device with restricted memory; it's probably not useful for you.)
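A minimal sketch of where shared-cache mode would make a difference, assuming two connections to the same database in one process ("test.db" is a placeholder path); sqlite3_db_status reports the page cache memory currently used by a connection, in bytes.
#include <sqlite3.h>
#include <stdio.h>

int main(void)
{
    sqlite3 *db1 = NULL, *db2 = NULL;
    int cur = 0, hiwater = 0;

    sqlite3_open_v2("test.db", &db1, SQLITE_OPEN_READONLY | SQLITE_OPEN_SHAREDCACHE, NULL);
    sqlite3_open_v2("test.db", &db2, SQLITE_OPEN_READONLY | SQLITE_OPEN_SHAREDCACHE, NULL);

    // ... run some queries on both connections here ...

    sqlite3_db_status(db1, SQLITE_DBSTATUS_CACHE_USED, &cur, &hiwater, 0);  // per-connection cache usage
    printf("connection 1 page cache used: %d bytes\n", cur);

    sqlite3_close(db1);
    sqlite3_close(db2);
    return 0;
}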
I'm following lazyfoo's tutorial on SDL and have adapted the code in Lesson 03 in a number of ways, but the adaptation of main concern is changing while(SDL_PollEvent(&e) != 0) to if(SDL_WaitEvent(&e)). Originally I decided to use SDL_Delay() to throttle the CPU, but ultimately decided SDL_WaitEvent() in an if statement was best. As you can imagine, the CPU usage is much better. Thinking ahead, I thought about situations in which SDL_PollEvent() would be useful and realized that some kind of timer should be used with SDL_PollEvent(), for example to throttle the FPS.
The following code is in my main thread. While if(SDL_WaitEvent(&e)) significantly reduced the CPU usage, it's not perfect. Particularly, comparing Firefox on my system to this application: Firefox was using 0.2% CPU while this application used around 4.4%. How could this be?
while( !quit ) { // Keep running until quit
if( SDL_WaitEvent( &e ) ) { // Suspend until event received
switch( e.type ) { // Switch on event types
case SDL_QUIT: // User requests quit
printf( "Shutting down...\n" );
quit = true;
break;
}
}
SDL_BlitSurface( gXOut, NULL, gScreenSurface, NULL ); // Apply image to surface
SDL_UpdateWindowSurface( gWindow ); // Update the surface
}
After fiddling with Firefox and this application while monitoring the Activity Monitor (Mac OS X), I found that Firefox immediately picked up CPU once it was active. This is perfectly fine because when Firefox is the active application, the user isn't paying attention to other applications.
However, my application insists on a 4-6% CPU. Why? I think it has to do with the while(!quit) loop. This loop is always active, whether in the foreground or background. So, my question to all of you fabulous people is this: How can I suspend a particular loop, thread, or even entire application once the application is in the background and then resurrect it once it returns to the foreground? View code paste here. The bitmap used in code can be found on lazy foo's tutorial, or you can make your own 640 x 480 bitmap so long as it's referenced correctly in the code. Refer to const char* BMPimage = "x.bmp" at the top of the file.