I'm using Debian 3.1 (Sarge), kernel 2.4.26, on a TS-7400 board with an ARM9 processor.
I am using the POSIX termios and fcntl interfaces.
I am writing a program to communicate between 2 embedded devices over serial. The program uses the POSIX timeout flag VTIME and works successfully on Ubuntu 10.1, but it does not time out on the board. I need the program to try resending a command if there is no response after a certain time. I know the board is transmitting OK the first time, but then the program locks up waiting for a response. I am running the serial port in delay mode, so read() will wait until at least 1 byte is received or 0.1 s has passed, as defined by VTIME.
What is the problem? Or, if VTIME simply does not work in this kernel, what is another way to accomplish this?
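For reference, a minimal sketch of the VMIN/VTIME configuration described above (set_read_timeout is an illustrative name; fd is the already-opened serial port, and error handling is omitted):

#include <termios.h>

/* read() returns as soon as 1 byte arrives, or after 0.1 s with no data */
void set_read_timeout(int fd)
{
    struct termios tio;

    tcgetattr(fd, &tio);
    tio.c_cc[VMIN]  = 0;   /* no minimum byte count */
    tio.c_cc[VTIME] = 1;   /* timeout in tenths of a second */
    tcsetattr(fd, TCSANOW, &tio);
}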
Investigate the select() system call. It will let you perform a read when there is actually something to read, instead of waiting for 0.1 seconds hoping something will show up. If this is supposed to be a straight port of your code, then this may not be an appropriate thing to do.
It is an alternative....
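For what it's worth, a minimal sketch of that approach (read_with_timeout is an illustrative name), assuming fd is the already-opened serial port:

#include <sys/select.h>
#include <unistd.h>

/* Wait up to 100 ms for data on fd; return 0 on timeout so the
 * caller can resend the command. */
ssize_t read_with_timeout(int fd, void *buf, size_t len)
{
    fd_set readfds;
    struct timeval tv = { .tv_sec = 0, .tv_usec = 100000 };  /* 100 ms */

    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);

    int r = select(fd + 1, &readfds, NULL, NULL, &tv);
    if (r <= 0)
        return r;              /* 0 = timed out, -1 = error */
    return read(fd, buf, len); /* data is available now */
}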
When using snd_pcm_writei() in non-blocking mode, everything works perfectly for a while, but eventually the audio gets choppy. It sounds like the ring buffer pointers are getting out of sync (i.e. sometimes I can tell that the audio is playing out of order). How long it takes for the problem to start is hardware-dependent: on a Gentoo box on real hardware it seldom happens, but on a buildroot system running on QEMU it happens after about 5 minutes. In both cases, draining the pcm stream fixes the problem. I have verified that I'm writing the samples correctly by also writing them to a file and playing them with aplay.
Currently I'm setting avail_min to the period size (1024 frames) and calling snd_pcm_wait() before writing chunks of the period size. I've also tried a number of different variations (different chunk sizes, checking avail myself and using pthread_cond_timedwait() instead of snd_pcm_wait(), etc.). The only thing that works reliably is blocking mode, but I cannot use that.
You can see the current source code here: https://bitbucket.org/frodzdev/mediabox/src/5a6471316c7ae481b329e7e0d4af1bb68a32e71d/src/audio.c?at=staging&fileviewer=file-view-default (it needs a little cleanup since I'm trying all kinds of things). The code that does the actual IO starts at line 375.
Edit:
I think I have a solution, but I don't understand why it seems to work. It seems that it does not matter whether I'm using non-blocking mode; the problem appears when I wait to make sure there's room in the buffer (whether through snd_pcm_wait(), pthread_cond_timedwait(), or usleep()).
The version that seems to work is here: https://bitbucket.org/frodzdev/mediabox/src/c3eb290087d9bbe0d5f37653a33a1ba88ef0628b/src/audio.c?fileviewer=file-view-default. I switched to blocking mode while still waiting before calling snd_pcm_writei() and it didn't make a difference. Then I added a call to snd_pcm_avail() before calling snd_pcm_status() in avbox_audiostream_gettime(). This function is called constantly by another thread to get the stream clock, and it only uses snd_pcm_status() to get the timestamps. Now it seems to work (at least the problem is much less likely to happen), but I don't understand exactly why. I understand that snd_pcm_avail() will synchronize the pointers with the kernel, but I don't really understand when it needs to be called, or the difference between snd_pcm_state() et al. and snd_pcm_status(). Does snd_pcm_status() also synchronize anything? It seems not, because sometimes snd_pcm_status_get_state() will return RUNNING when snd_pcm_avail() returns -EPIPE. The ALSA documentation is really vague. Perhaps understanding these things will help me understand my problem?
Now, when I say that it seems to be working, I mean that I cannot reproduce it on real hardware. It still happens on QEMU, though far less often. But considering that on the next commit I switched to blocking mode without waiting (which I've used in the past without problems on real hardware) and it still happens on QEMU, and also the fact that this is a common issue with QEMU, I'm starting to think that I may have fixed the issue on my end and what remains is just a QEMU problem. Is there any way to determine whether the problem is a bug on my end that is easier to trigger on the emulator, or just an emulator problem?
Edit: I realize that I should fill the buffer before waiting, but at this point my concern is not to prevent underruns but to make sure that my code can handle them when they happen. Besides, the buffer does fill up after a few iterations. I confirmed this by outputting avail, buffer_size, etc. before writing each packet, and the numbers I get don't make perfect sense: they show an error of 1 or 2 periods about every 8th period. Also (and this is the main problem) I'm not detecting any underruns; the audio gets choppy but all writes succeed. In fact, if the problem starts happening and I trigger an underrun by overloading the CPU, it corrects itself when the pcm is reset.
In line 505: you're using time as the argument to malloc().
In line 568: weren't you playing audio? In that case you should wait only after you've written the frames. Let's think...
The audio device generates an interrupt when it finishes processing a period.
|  period A  |  period B  |
             ^            ^
            irq          irq
Before you start the pcm, the audio device doesn't generate any interrupts. Notice that you're waiting even though you haven't started the pcm yet; you only start it when you call snd_pcm_writei().
When you wait for audio data you'll be woken only when the current period has been fully processed -- at your first wait the first period hadn't even been written -- so in a comfortable situation you should write the whole buffer, wait for the first interrupt, then write the just-processed period, and so on.
Initially, the buffer is empty:
|            |            |
write():
|############|############|
wait():
..............
When we wake up:
|            |############|
write():
|############|############|
I think the problem is that you're writing audio just before it is played, so sometimes it arrives in the buffer too late.
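A minimal sketch of that fill-then-wait pattern (get_samples() and the stereo S16 format are assumptions, not your actual code; error handling kept short):

#include <alsa/asoundlib.h>
#include <string.h>

/* Hypothetical sample source; here it just produces silence. */
static void get_samples(int16_t *buf, snd_pcm_uframes_t frames)
{
    memset(buf, 0, frames * 2 * sizeof(*buf));  /* stereo S16 assumed */
}

static void playback_loop(snd_pcm_t *pcm, snd_pcm_uframes_t period_size,
                          unsigned int periods)
{
    int16_t buf[period_size * 2];   /* one period of stereo S16 frames */

    /* Pre-fill the whole ring buffer before the first wait
     * (assuming the default start threshold, writing starts the pcm). */
    for (unsigned int i = 0; i < periods; i++) {
        get_samples(buf, period_size);
        snd_pcm_writei(pcm, buf, period_size);
    }

    for (;;) {
        /* Sleep until the device has consumed at least one period. */
        snd_pcm_wait(pcm, -1);

        get_samples(buf, period_size);
        snd_pcm_sframes_t n = snd_pcm_writei(pcm, buf, period_size);
        if (n == -EPIPE)            /* underrun: recover and keep going */
            snd_pcm_prepare(pcm);
        else if (n < 0)
            break;                  /* unrecoverable error */
    }
}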
I am creating a software switch as a school project. It's implemented in C using libpcap and works fine (despite some bugs) on my Ubuntu machine. However, I have a Mac, and there it does not work as it should.
When a frame is captured using pcap_next_ex(), the count of captured frames is incremented. For some reason, during the first few seconds (5 to 30) the count doesn't increase, as if no frames were received, but I can see those frames in Wireshark. How is this possible?
If you're interested, here is my code:
https://github.com/Horkyze/Software-switch
For some reason during first few seconds (5 to 30) it doesn't increment number of frames, like no frames were received,
Or, rather, like no frames were passed from the capture mechanism to libpcap.
Given that you did not set a timeout, the default timeout is used, which happens to be 0. The behavior with a timeout of 0 is platform-dependent and undefined; for systems that use BPF, such as OS X (and *BSD and Solaris 11), it is "don't pass packets from the capture mechanism to userland until there's no room for the next packet in the kernel packet buffer", which means that the delay between the reception of a frame and its delivery to userland could be arbitrarily long.
Apple's pcap_set_timeout() man page is more emphatic about this (and I'm going to change the standard libpcap man page to say the same thing):
The behavior, if the timeout isn't specified, is undefined. We recommend always setting the timeout to a non-zero value.
Given the "switch" in the name of your application, you probably don't want any timeout at all but instead want "immediate mode". In immediate mode, set with pcap_set_immediate_mode() rather than pcap_set_timeout(), packets are delivered to user mode as soon as they arrive.
This will also work on Ubuntu (including immediate mode, if the version of Ubuntu is new enough to have a libpcap with immediate mode). Note that, on a Linux system where both the kernel is new enough to implement TPACKET_V3 and libpcap is new enough to use it, the behavior can be quite different from versions of Linux where either the kernel or libpcap doesn't do TPACKET_V3, so setting the timeout is a good idea on any OS.
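For illustration, a minimal sketch of opening a handle in immediate mode with the pcap_create()/pcap_activate() API (the device name "en0" is just an example):

#include <pcap/pcap.h>
#include <stdio.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_create("en0", errbuf);   /* example device */
    if (p == NULL) {
        fprintf(stderr, "pcap_create: %s\n", errbuf);
        return 1;
    }
    pcap_set_snaplen(p, 65535);
    pcap_set_promisc(p, 1);
    pcap_set_immediate_mode(p, 1);  /* deliver packets as they arrive */
    if (pcap_activate(p) < 0) {
        fprintf(stderr, "pcap_activate: %s\n", pcap_geterr(p));
        return 1;
    }
    /* ... pcap_next_ex() / pcap_loop() as before ... */
    pcap_close(p);
    return 0;
}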
I'm writing a sniffer using libpcap. My problem is that there's a 7-10 second delay between calling pcap_loop() or pcap_next() and actually getting a packet (the callback function being called). However, if I use Wireshark with the same filter on the same device, there is no such delay after I hit the "start" button. Why is there a delay in my program, and is there a way to fix it?
I'm working with Atheros Wi-Fi chips. The device is set to monitor mode using
airmon-ng start wlan0
I'm sure there's plenty of traffic to listen to, since I can see the packets in Wireshark.
Thank you.
I'm using 10000.
The to_ms argument to pcap_open_live() and pcap_set_timeout() is in milliseconds.
10000 milliseconds is 10 seconds.
Try using 1000, which is the value tcpdump uses - that'll reduce the delay to 1 second - or using 100, which is the value Wireshark uses - that'll reduce the delay to 1/10 second.
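For example (the device name is illustrative):

#include <pcap/pcap.h>

char errbuf[PCAP_ERRBUF_SIZE];
pcap_t *p = pcap_open_live("wlan0mon", 65535, 1, 100 /* to_ms: 100 ms */, errbuf);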
I read in a tutorial about this field: "on at least some platforms, this means that you may wait until a sufficient number of packets arrive before seeing any packets, so you should use a non-zero timeout"
The tutorial in question is the tcpdump.org "How to use libpcap" tutorial, and the passage in question was added in this CVS commit:
revision 1.8
date: 2005/08/27 23:58:39; author: guy; state: Exp; lines: +34 -31
Use a non-zero timeout in pcap_open_live(), so you don't wait for a
bufferful of packets before any are processed.
Correctly explain the difference between pcap_loop() and
pcap_dispatch().
In sniffex.c, don't print the payload if there isn't any.
so I'm familiar with it. :-)
I'd have to spend some time looking at the Linux kernel code (again) to see what effect a timeout value of 0 would have on newer kernels. However, when writing code that uses libpcap/WinPcap to do live captures, you should always act as if you're writing code for such a platform; your code will then be more portable to other platforms and will not break if the behavior of a zero timeout changes.
I'm working on a C program that transmits samples over USB3 for a set period of time (1-10 us), and then receives samples for 100-1000 us. I have a rudimentary pthread implementation where the TX and RX routines are each handled as a thread. The reason for this is that in order to test the actual TX routine, the RX needs to run and sample before the transmitter is activated.
Note that I have very little C experience outside of embedded applications and this is my first time dabbling with pthread.
My question is: since I know exactly how many samples I need to transmit and receive, how can I, for example, start the RX thread once the TX thread is done executing, and vice versa? How can I ensure that the timing stays consistent? Sampling at 10 MHz imposes some harsh timing requirements.
Thanks!
EDIT:
To provide a little more detail, my device is a bladeRF x40 SDR, and communication with the device is handled by an FX3 microcontroller over a USB3 connection. I'm running Xubuntu 14.04. Processing, scheduling, and configuration, however, are handled by a C program running on the PC.
You don't say anything about your platform, except that it supports pthreads.
So, assuming Linux, you're going to have to realize that in general Linux is not a real-time operating system, and what you're doing sure sounds as if it has real-time timing requirements.
There are real-time variants of Linux; I'm not sure how well they'd suit your needs. You might also be able to achieve better performance by doing the work in a kernel driver, but then you won't have access to pthreads, so you're going to have to work at a lower level.
Thought I'd post my solution.
While the next build of the bladeRF firmware and FPGA image will include the option to add metadata (timestamps) to the synchronous interface, until then there is no real way for me to know at which time instants certain events occurred.
What I do know is my sampling rate, and exactly how many samples I need to transmit and receive at which times relative to each other. Therefore, by using condition variables (with pthread), I can signal my receiver to start receiving samples at the desired instant. Since the TX and RX operations happen in a very specific sequence, I can calculate delays by counting samples and dividing by the sampling rate, which has proven to be 95-98% accurate.
This obviously means that since my TX and RX threads are running simultaneously, there are chunks of data within the received set of samples that will be useless, and I have another routine in place to discard those samples.
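A minimal sketch of that condition-variable handshake (names are illustrative; the actual sample I/O is elided):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  rx_start = PTHREAD_COND_INITIALIZER;
static bool tx_done = false;

void *tx_thread(void *arg)
{
    (void)arg;
    /* ... transmit the known number of samples ... */
    pthread_mutex_lock(&lock);
    tx_done = true;
    pthread_cond_signal(&rx_start);   /* wake the receiver */
    pthread_mutex_unlock(&lock);
    return NULL;
}

void *rx_thread(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!tx_done)                  /* guard against spurious wakeups */
        pthread_cond_wait(&rx_start, &lock);
    pthread_mutex_unlock(&lock);
    /* ... receive the known number of samples ... */
    return NULL;
}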
Can we schedule a program to execute every 5 ms or 10 ms, etc.?
I need to generate a pulse through the serial port at 1 kHz and 15 kHz.
But the program should only toggle the pins of the serial port, so the frequency has to be produced by a scheduler. Is this possible in Linux with an RT patch?
I believe a better solution is to put your "generate a pulse" function in a loop, for example:
for (;;) {
    generate_pulse();   /* generate a pulse */
    usleep(5000);       /* 5 ms (10000 for 10 ms); usleep() is in <unistd.h> */
}
Is this possible in Linux with an RT patch?
I suggest going with the RT patch if timing is critical.
Xenomai is an RT patch which I used on a 2.6 kernel a while back.
Here is an example which runs every 1 second:
http://www.xenomai.org/documentation/trunk/html/api/trivial-periodic_8c-example.html
There is the PPS project, which is now part of the mainline Linux kernel (at least a portion of it in the 2.6 branch; in the latest 3.x kernel branch it looks like there is full integration).
There is also an explicit reference to using this PPS implementation with a serial port in the linked text file:
A PPS source can be connected to a serial port (usually to the Data
Carrier Detect pin) or to a parallel port (ACK-pin) or to a special
CPU's GPIOs (this is the common case in embedded systems) but in each
case when a new pulse arrives the system must apply to it a timestamp
and record it for userland.
Good examples, tutorials, and guides are apparently not hard to find; I'm sure you'll find a lot of good resources using a search engine.
The header for the APIs is usually at /usr/include/linux/pps.h.
I have finally found a way to get it done.
The best way is to first create a timer with the required interval, and then run the task (the pulse-generating program) each time the timer expires. The timer program can be run in the background. The timer can be created and armed using timer_create() and timer_settime() respectively. A different program can be launched from within the program using fork() and execl(), and it can be run in the background using daemon().
By using all these things we can create our own scheduler.
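A minimal sketch of that approach (generate_pulse() is a placeholder; error handling kept short; older glibc may need linking with -lrt):

#include <signal.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static void generate_pulse(int sig)
{
    (void)sig;
    /* toggle the serial-port pin here */
}

int main(void)
{
    timer_t timer;
    struct sigevent sev = { 0 };
    struct itimerspec its = { 0 };

    signal(SIGRTMIN, generate_pulse);

    sev.sigev_notify = SIGEV_SIGNAL;
    sev.sigev_signo = SIGRTMIN;
    if (timer_create(CLOCK_MONOTONIC, &sev, &timer) == -1) {
        perror("timer_create");
        return 1;
    }

    its.it_value.tv_nsec = 5 * 1000 * 1000;     /* first expiry: 5 ms */
    its.it_interval.tv_nsec = 5 * 1000 * 1000;  /* period: 5 ms */
    if (timer_settime(timer, 0, &its, NULL) == -1) {
        perror("timer_settime");
        return 1;
    }

    for (;;)
        pause();    /* the handler runs on every expiry */
}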