I'm a newbie playing with an ESP32 and an IR receiver to capture the signal from an AC IR remote.
Currently I'm referring to the following example code for capturing the IR signal:
static void nec_rx_init()
{
    rmt_config_t rmt_rx;
    rmt_rx.channel = RMT_RX_CHANNEL;
    rmt_rx.gpio_num = RMT_RX_GPIO_NUM;
    rmt_rx.clk_div = RMT_CLK_DIV;
    rmt_rx.mem_block_num = 1;                   /* one 64-item RAM block */
    rmt_rx.rmt_mode = RMT_MODE_RX;
    rmt_rx.rx_config.filter_en = true;
    rmt_rx.rx_config.filter_ticks_thresh = 100;
    rmt_rx.rx_config.idle_threshold = rmt_item32_tIMEOUT_US / 10 * (RMT_TICK_10_US);
    rmt_config(&rmt_rx);
    rmt_driver_install(rmt_rx.channel, 3000, 0); /* 3000-byte RX ring buffer */
}
//get RMT RX ringbuffer
RingbufHandle_t rb = NULL;
rmt_get_ringbuf_handle(RMT_RX_CHANNEL, &rb);
// rmt_rx_start(channel, rx_idx_rst) - Set true to reset memory index for receiver
rmt_rx_start(RMT_RX_CHANNEL, 1);
while(rb) {
    uint32_t rx_size = 0;
    // Try to receive data from the ring buffer.
    // The RMT driver pushes everything it receives into this ring buffer;
    // we just need to parse the items and then return the space to the ring buffer.
    rmt_item32_t* item = (rmt_item32_t*) xRingbufferReceive(rb, &rx_size, 1000);
    ...
}
Although the signal emitted by the AC IR remote is about 100 items, rx_size is always only 256 bytes (64 items). That's the problem: how can I capture the complete signal from the AC IR remote? Note that I already increased the ring buffer size from 3000 to 10000.
I'd appreciate any suggestions for dealing with this problem.
I had this same issue this past week. I was receiving a serial packet that was 128 bits, but only ever got the first 64. After digging into it a bit deeper I found that the hardware buffer on the RMT interface defaults to a 64x32-bit RAM block per channel. You can set the channel up to use the memory blocks normally assigned to subsequent channels if you need to receive more data at once.
For my project I used the following function to give 4 RAM blocks to channel 0, increasing the maximum receive size to 256 items (256 bits in my case), which is more than enough for my application. I also had to move the receive to channel 4, since channel 0 was now using the memory blocks of channels 1-3.
rmt_set_mem_block_num((rmt_channel_t) 0, 4);
Documentation for this function can be found here:
https://docs.espressif.com/projects/esp-idf/en/stable/api-reference/peripherals/rmt.html#_CPPv421rmt_set_mem_block_num13rmt_channel_t7uint8_t
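If you prefer to reserve the extra RAM blocks at configuration time instead, the same thing can be expressed through the mem_block_num field of rmt_config_t. A sketch based on the question's own init function (it assumes RMT_RX_CHANNEL is channel 0, or at least low enough that the next three channels' memory blocks are unused):
static void nec_rx_init_4_blocks()
{
    rmt_config_t rmt_rx;
    rmt_rx.channel = RMT_RX_CHANNEL;            /* e.g. channel 0 */
    rmt_rx.gpio_num = RMT_RX_GPIO_NUM;
    rmt_rx.clk_div = RMT_CLK_DIV;
    rmt_rx.mem_block_num = 4;                   /* borrow the RAM blocks of channels 1-3 */
    rmt_rx.rmt_mode = RMT_MODE_RX;
    rmt_rx.rx_config.filter_en = true;
    rmt_rx.rx_config.filter_ticks_thresh = 100;
    rmt_rx.rx_config.idle_threshold = rmt_item32_tIMEOUT_US / 10 * (RMT_TICK_10_US);
    rmt_config(&rmt_rx);
    rmt_driver_install(rmt_rx.channel, 3000, 0);
}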
It's also worth noting that the driver logged an error to the serial monitor while the issue was occurring, which helped track down the cause:
E (33323) rmt: RMT[0] ERR
E (33323) rmt: status: 0x14000100
E (33373) rmt: RMT RX BUFFER FULL
With the default amount of RAM I was getting a status of 0x14000040, and when I increased it to 2 blocks I got a status of 0x13000040. After increasing to 4 blocks of RAM the error message stopped appearing.
I have a USB 3.0 HDMI capture device. It uses the YUY2 format (2 bytes per pixel) at 1920x1080 resolution.
The video capture output pin connects directly to the video renderer's input pin, and everything works well: it shows 1920x1080 without any freezes.
But I need to take a screenshot every second, so this is what I do:
void CaptureInterface::ScreenShoot() {
    HRESULT hr = S_OK;

    IMemInputPin* p_MemoryInputPin = nullptr;
    hr = p_RenderInputPin->QueryInterface(IID_IMemInputPin, (void**)&p_MemoryInputPin);

    IMemAllocator* p_MemoryAllocator = nullptr;
    hr = p_MemoryInputPin->GetAllocator(&p_MemoryAllocator);

    IMediaSample* p_MediaSample = nullptr;
    hr = p_MemoryAllocator->GetBuffer(&p_MediaSample, 0, 0, 0);

    long buff_size = p_MediaSample->GetSize(); // buff_size = 4147200 bytes
    BYTE* buff = nullptr;
    hr = p_MediaSample->GetPointer(&buff);

    // BYTE CaptureInterface::ScreenBuff[1920*1080*2]; defined in the header

    //--------- TOO SLOW (1.5 seconds for 4 MBytes) ----------
    std::memcpy(ScreenBuff, buff, buff_size);
    //--------------------------------------------

    p_MediaSample->Release();
    p_MemoryAllocator->Release();
    p_MemoryInputPin->Release();
    return;
}
Any other operation on this buffer is very slow too.
But if I use memcpy on other data (for example, two arrays of the same 4 MB size in my class), it is very fast: < 0.01 s.
Video memory is (or can be) slow to read back by its nature (e.g. VMR9's IBasicVideo::GetCurrentImage is known to be very slow, and you can find other references). You normally want to grab the data before it actually reaches video memory.
Additionally, the way you read the data is not reliable. You don't know which frame you are actually copying, and you might even read blackness or garbage; or, vice versa, your acquiring access to the buffer may freeze the main video streaming. This is because you are grabbing an unused buffer from the pool of available buffers rather than the buffer that corresponds to a specific video frame. Getting an image out of such a buffer rests on the fragile assumption that it still holds data from a previously streamed frame and has not yet been overwritten by anything else.
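One common way to grab the data upstream is the Sample Grabber filter from qedit.h (deprecated; with newer SDKs you may need the header and GUIDs from an older SDK), inserted between the capture output pin and the renderer input pin. Below is a sketch, assuming the filter has already been created and connected into the graph, and using the question's ScreenBuff as the destination; BufferCB is called on the streaming thread for every frame, so the copy happens from system memory, not video memory:
#include <dshow.h>
#include <qedit.h>   // ISampleGrabber / ISampleGrabberCB (deprecated SDK header)
#include <cstring>

// Minimal callback: copy each frame into a destination buffer as it streams past.
class GrabberCB : public ISampleGrabberCB {
public:
    BYTE* dest = nullptr;  // e.g. CaptureInterface::ScreenBuff
    long  destSize = 0;    // e.g. 1920 * 1080 * 2

    // IUnknown boilerplate for a statically allocated callback object.
    STDMETHODIMP QueryInterface(REFIID riid, void** ppv) {
        if (riid == IID_IUnknown || riid == IID_ISampleGrabberCB) {
            *ppv = this;
            AddRef();
            return S_OK;
        }
        *ppv = nullptr;
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef()  { return 2; }  // no real refcounting needed
    STDMETHODIMP_(ULONG) Release() { return 1; }

    STDMETHODIMP SampleCB(double, IMediaSample*) { return S_OK; }  // unused
    STDMETHODIMP BufferCB(double, BYTE* pBuffer, long bufferLen) {
        std::memcpy(dest, pBuffer, bufferLen < destSize ? bufferLen : destSize);
        return S_OK;
    }
};

// Hook it up once, after the graph is built (p_SampleGrabber is the
// ISampleGrabber interface of the inserted filter, g_grabberCB a GrabberCB):
//   p_SampleGrabber->SetBufferSamples(FALSE);
//   p_SampleGrabber->SetCallback(&g_grabberCB, 1);  // 1 = use BufferCB
Alternatively, SetBufferSamples(TRUE) plus ISampleGrabber::GetCurrentBuffer lets you poll the most recent frame on demand instead of using a callback.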
I'm writing a program that enumerates the hooks created by SetWindowsHookEx(). Here is the process:
Use GetProcAddress() to obtain gSharedInfo, exported by User32.dll. (works, verified)
Read user-mode memory at gSharedInfo + 8; the result is a pointer to the first handle entry. (works, verified)
Read user-mode memory at [gSharedInfo] + 8; the result is the count of handles to enumerate. (works, verified)
Read the data at the address obtained in step 2, repeating count times.
Check whether HANDLEENTRY.bType is 5 (which means it is an HHOOK); if so, print its information.
The problem is that while steps 1-3 only touch user-mode memory, step 4 requires the program to read kernel memory. After some research I found that ZwSystemDebugControl can be used to access kernel memory from user mode, so I wrote the following function:
BOOL GetKernelMemory(PVOID pKernelAddr, PBYTE pBuffer, ULONG uLength)
{
    MEMORY_CHUNKS mc;
    ULONG uReaded = 0;
    mc.Address = (UINT)pKernelAddr; // kernel-memory address - input
    mc.pData = (UINT)pBuffer;       // user-mode buffer - output
    mc.Length = (UINT)uLength;      // length
    ULONG st = -1;
    ZWSYSTEMDEBUGCONTROL ZwSystemDebugControl = (ZWSYSTEMDEBUGCONTROL)GetProcAddress(
        GetModuleHandleA("ntdll.dll"), "NtSystemDebugControl");
    st = ZwSystemDebugControl(SysDbgCopyMemoryChunks_0, &mc, sizeof(MEMORY_CHUNKS), 0, 0, &uReaded);
    return st == 0;
}
But the function above didn't work: uReaded is always 0 and st is always 0xC0000002. How do I resolve this error?
My full program:
http://pastebin.com/xzYfGdC5
MSFT removed the memory-access classes of the NtSystemDebugControl syscall after Windows XP; the 0xC0000002 status you are seeing is STATUS_NOT_IMPLEMENTED.
The Meltdown vulnerability makes it possible to read kernel memory from user mode on most Intel CPUs at a speed of approximately 500 kB/s. This works on most unpatched OSes.
I'm using ALSA for an audio application on Linux. I found great docs explaining how to use it: 1 and this one. However, I have some trouble understanding this part of the setup:
/* Set number of periods. Periods used to be called fragments. */
if (snd_pcm_hw_params_set_periods(pcm_handle, hwparams, periods, 0) < 0) {
    fprintf(stderr, "Error setting periods.\n");
    return(-1);
}
What does it mean to set the number of periods when I'm using PLAYBACK mode?
And:
/* Set buffer size (in frames). The resulting latency is given by */
/* latency = periodsize * periods / (rate * bytes_per_frame)      */
if (snd_pcm_hw_params_set_buffer_size(pcm_handle, hwparams, (periodsize * periods)>>2) < 0) {
    fprintf(stderr, "Error setting buffersize.\n");
    return(-1);
}
And the same question here about the latency: how should I understand it?
I assume you've read and understood this section of the Linux Journal article. You may also find that this blog clarifies things with respect to period-size selection ('fragment', in the blog) in the context of ALSA. To quote:
You shouldn't misuse the fragments logic of sound devices. It's like
this:
The latency is defined by the buffer size.
The wakeup interval is defined by the fragment size.
The buffer fill level will oscillate between 'full buffer' and 'full
buffer minus 1x fragment size minus OS scheduling latency'. Setting
smaller fragment sizes will increase the CPU load and decrease battery
time since you force the CPU to wake up more often. OTOH it increases
drop out safety, since you fill up playback buffer earlier. Choosing
the fragment size is hence something which you should do balancing out
your needs between power consumption and drop-out safety. With modern
processors and a good OS scheduler like the Linux one setting the
fragment size to anything other than half the buffer size does not
make much sense.
...
(Oh, ALSA uses the term 'period' for what I call 'fragment'
above. It's synonymous)
So essentially, you would typically set periods to 2 (as was done in the howto you referenced). Then periodsize * periods is your total buffer size in bytes. Finally, the latency is the delay induced by buffering that many samples, and can be computed by dividing the buffer size by the rate at which samples are played back (i.e., according to the formula latency = periodsize * periods / (rate * bytes_per_frame) in the code comments).
For example, the parameters from the howto:
period = 2
periodsize = 8192 bytes
rate = 44100Hz
16 bits stereo data (4 bytes per frame)
correspond to a total buffer size of periods * periodsize = 2 * 8192 = 16384 bytes, and a latency of 16384 / (44100 * 4) ≈ 0.093 seconds.
Note also that your hardware may have size limitations for the supported period size (see this troubleshooting guide).
When the application tries to write samples into the buffer and the buffer is already full, the process goes to sleep. It gets woken up by the hardware through an interrupt; this interrupt is raised at the end of each period.
There should be at least two periods per buffer; otherwise, the buffer is already empty when a wakeup happens, which results in an underrun.
Increasing the number of periods (i.e., reducing the period size) increases the safety margin against underruns caused by scheduling or processing delays.
The latency is just proportional to the buffer size: when you completely fill the buffer, the last sample written is played by the hardware only after all the other samples have been played.
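To tie the pieces together, here is a minimal sketch of the same two calls with the howto's concrete numbers (it assumes pcm_handle and hwparams are already set up as in the howto, with a 44100 Hz rate and 4-byte frames; period_size here is in frames, whereas the howto's periodsize was in bytes, hence its >>2):
unsigned int periods = 2;              /* wake up twice per buffer */
snd_pcm_uframes_t period_size = 2048;  /* frames: 8192 bytes / 4 bytes per frame */

if (snd_pcm_hw_params_set_periods(pcm_handle, hwparams, periods, 0) < 0) {
    fprintf(stderr, "Error setting periods.\n");
    return(-1);
}

/* total buffer = period_size * periods = 4096 frames (= 16384 bytes) */
if (snd_pcm_hw_params_set_buffer_size(pcm_handle, hwparams, period_size * periods) < 0) {
    fprintf(stderr, "Error setting buffersize.\n");
    return(-1);
}

/* latency = 4096 frames / 44100 frames per second ~ 0.093 s */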
I'm receiving data byte by byte over serial at a baud rate of 115200. How do I calculate the bytes per second I'm receiving, in a C program?
There are only 3 ways to measure bytes actually received per second.
The first way is to keep track of how many bytes you receive in a fixed length of time. For example, each time you receive bytes you might do counter += number_of_bytes, and then every 5 seconds you might do rate = counter/5; counter = 0;.
The second way is to keep track of how much time passed to receive a fixed number of bytes. For example, every time you receive one byte you might do temp = now(); rate = 1/(temp - previous); previous = temp;.
The third way is to combine both of the above. For example, each time you receive bytes you might do temp = now(); rate = number_of_bytes/(temp - previous); previous = temp;.
For all of the above, you end up with individual samples and not an average. To convert the samples into an average you'd need something like average = sum_of_samples / number_of_samples. The best way to do this (e.g. if you want nice, smooth-looking graphs) is to store a lot of samples, replacing the oldest sample with each new one and recalculating the average.
For example:
double sampleData[1024];
int nextSlot = 0;
double average;

void addSample(double value) {
    double sum = 0;
    sampleData[nextSlot] = value;
    nextSlot++;
    if(nextSlot >= 1024) nextSlot = 0;
    for(int i = 0; i < 1024; i++) sum += sampleData[i];
    average = sum/1024;
}
Of course the final thing (collecting the samples using one of the 3 methods, then finding the average) would need some fiddling to get the resolution you want.
Assuming you have fairly continuous input, just count the number of bytes you receive, and after some number of characters have been received, print out the time and the number of characters over that interval; each time you print, reset the count. You'll need a fairly good timestamp. clock() may be one reasonable source, but the "best" option depends on what system you are on, as well as on how portable you want the code to be (serial comms tend not to be very portable anyway); otherwise your error will probably be large.
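As a concrete sketch of that approach, using the POSIX clock_gettime(CLOCK_MONOTONIC) as the timestamp source (read_serial_byte() is a placeholder for your blocking single-byte read):
#include <stdio.h>
#include <time.h>

/* placeholder: your blocking single-byte serial read */
extern int read_serial_byte(void);

static double now_seconds(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

void report_rate(void) {
    long count = 0;
    double start = now_seconds();
    for (;;) {
        (void)read_serial_byte();
        count++;
        double elapsed = now_seconds() - start;
        if (elapsed >= 5.0) {            /* report roughly every 5 seconds */
            printf("%.1f bytes/sec\n", count / elapsed);
            count = 0;
            start = now_seconds();
        }
    }
}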
To correct some odd comments in this thread about the theoretical maximum:
Around the time 14400-baud modems arrived in the pre-web world, the meaning of "baud" (look it up) shifted to match emerging digital technologies such as 64 kbit ISDN; "baud" came to mean bits per second.
For serial data in the 8N1 format (a common shorthand notation), each byte is framed by one start bit, eight data bits, no parity bit, and one stop bit: ten bits on the wire per byte.
So the theoretical maximum for 8N1 serial at 115200 baud (bits/sec) is 115200 / (1 + 8 + 1) = 11520 bytes/sec.
Similar (but not the same) to watching your download speeds, the rough ball-park way to work out bytes/sec from bits/sec, without a calculator, is to divide by 10.
Baud rate is a measure of how many times per second a signal can change. In each of those symbol intervals, depending on the modulation you are using, you can send one or more bits (with no modulation, the bit rate equals the baud rate).
Say you are using QPSK modulation, which carries 2 bits per symbol: if you are receiving data at a 115200 baud rate with 2 bits per symbol, you are receiving data at 115200 * 2 = 230400 bit/s.
How do I calculate network utilization, for both transmit and receive, using either C or a shell script?
My system is embedded Linux. My current method is to record the bytes received (b1), wait 1 second, then record them again (b2). Then, knowing the link speed, I calculate the percentage of the receive bandwidth used:
receive utilization = (((b2 - b1)*8)/link_speed)*100
Is there a better method?
Check out open-source programs that do something similar.
My search turned up a little tool called vnstat.
It tries to query the /proc file system, if available, and uses getifaddrs for systems that do not have it. It then fetches the correct AF_LINK interface, fetches the corresponding if_data struct, and reads out the transmitted and received byte counters, like this:
ifinfo.rx = ifd->ifi_ibytes;
ifinfo.tx = ifd->ifi_obytes;
Also remember that sleep() might sleep longer than exactly 1 second, so you should probably use a high-resolution (wall-clock) timer in your equation; or you could delve into the interface functions and structures to see if you find anything appropriate for your task.
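For completeness, here is a minimal sketch of such a byte-counter reader on Linux, reading the rx_bytes counter from sysfs (the helper name matches the placeholder used in the code below):
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* Sketch: read the kernel's received-bytes counter for an interface. */
static uint64_t read_bytes_received(const char *ifname)
{
    char path[128];
    uint64_t bytes = 0;
    FILE *f;

    snprintf(path, sizeof(path), "/sys/class/net/%s/statistics/rx_bytes", ifname);
    f = fopen(path, "r");
    if (f) {
        if (fscanf(f, "%" SCNu64, &bytes) != 1)
            bytes = 0;
        fclose(f);
    }
    return bytes;
}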
Thanks to 'csl' for pointing me in the direction of vnstat. Using the vnstat example, here is how I calculate network utilization:
#define FP32 4294967295ULL
#define FP64 18446744073709551615ULL
/* counter delta, allowing for 32- or 64-bit counter wrap-around */
#define COUNTERCALC(a,b) ( (b)>=(a) ? (b)-(a) : \
    ( (a) > FP32 ? FP64-(a)+(b) : FP32-(a)+(b) ) )

int sample_time = 2;   /* seconds */
int link_speed = 100;  /* Mbit/s */
uint64_t rx, rx1, rx2;
float rate, percent;

/*
 * Either read:
 *   '/proc/net/dev'
 * or
 *   '/sys/class/net/%s/statistics/rx_bytes'
 * for the bytes-received counter
 */
rx1 = read_bytes_received("eth0");
sleep(sample_time); /* wait */
rx2 = read_bytes_received("eth0");

/* calculate MB first, then convert to Mbit/s */
rx = rintf(COUNTERCALC(rx1, rx2)/(float)1048576);
rate = (rx*8)/(float)sample_time;
percent = (rate/(float)link_speed)*100;
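Following the earlier note about sleep() overshooting, the sample interval can also be measured rather than assumed. A sketch of the same calculation with a monotonic clock, reusing the variables, COUNTERCALC, and read_bytes_received from above (and using decimal units, since link speeds are quoted in decimal megabits):
#include <time.h>

struct timespec t1, t2;
double elapsed, mbits;

clock_gettime(CLOCK_MONOTONIC, &t1);
rx1 = read_bytes_received("eth0");
sleep(sample_time);
rx2 = read_bytes_received("eth0");
clock_gettime(CLOCK_MONOTONIC, &t2);

/* actual elapsed time, not the nominal sample_time */
elapsed = (t2.tv_sec - t1.tv_sec) + (t2.tv_nsec - t1.tv_nsec) / 1e9;
mbits   = COUNTERCALC(rx1, rx2) * 8.0 / 1e6;  /* decimal Mbit */
percent = (float)(mbits / elapsed / link_speed * 100.0);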