I have to write a program in C to detect the inter-character time on an RS-232 line on Linux. The inter-character time to detect could be 1 ms, so I need a way to timestamp incoming characters very quickly. By very quickly, I mean in less than 1 ms.
I am not asking for a coding solution, just some initial help on what path to take: is it possible to do this on Linux? Do I have to modify a driver to reach this kind of timing, or can something in user space do it (I don't think so)?
There is no chance of achieving this in user space; as far as I know, there is no serial port configuration that allows you to specify a precise inter-character timeout. Writing a custom driver might bring you closer to the UART interrupts, since that's what you need.
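For illustration, here is a minimal best-effort userspace sketch (the /dev/ttyS0 device path is an assumption): it timestamps each byte with CLOCK_MONOTONIC and prints the gap to the previous byte. Scheduling jitter means the measured gaps can easily be off by more than 1 ms, which is exactly why userspace is not good enough here.

```c
/* Best-effort userspace sketch: timestamp each incoming byte and
 * compute the gap to the previous one. Scheduling latency between the
 * UART interrupt and this read() returning makes the result unreliable
 * at the 1 ms scale. */
#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyS0", O_RDONLY | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);
    tio.c_cc[VMIN] = 1;   /* return as soon as one byte arrives */
    tio.c_cc[VTIME] = 0;
    tcsetattr(fd, TCSANOW, &tio);

    struct timespec prev = {0}, now;
    unsigned char c;
    while (read(fd, &c, 1) == 1) {
        clock_gettime(CLOCK_MONOTONIC, &now);
        if (prev.tv_sec || prev.tv_nsec) {
            long gap_us = (now.tv_sec - prev.tv_sec) * 1000000L
                        + (now.tv_nsec - prev.tv_nsec) / 1000L;
            printf("gap: %ld us\n", gap_us);
        }
        prev = now;
    }
    return 0;
}
```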
However, every time I had to solve a similar task, I ended up creating a tiny hardware module that performs the time-critical task very precisely and only reports the results to the Linux machine. It depends entirely on what you need and how precise your communication gap detection has to be.
I am currently working with the BeagleBone Black running Ubuntu and I am trying to find some direction. I have created a C program that listens for SIGIO and runs a read() to get the data on that line. From my research on the internet and in some books, it appears that this method is not very efficient: looping while listening for a signal is bad because of the large amount of context switching (note that this I/O line will be busy, so the SIGIO will trigger at least four times a second, asynchronously). It was suggested to use hardware interrupts that trigger a response to take the data from the line and place it into a register, preferably accessible from user space via direct memory access. So the question remains: where can I look for more information on how to do this? I find a lot of material on this topic, but most of it just talks about how the OS does interrupts or about using signals, which with a busy line is pretty taxing.
If you are that concerned about timing and latency, you should probably use a real-time system.
Fortunately, the BeagleBone Black has real-time processing cores on its SoC, called PRUs (Programmable Real-time Units).
If you are new to the concept of PRUs, you will probably want to start here; once you have understood the need for and purpose of the PRUs, that same website has some tutorials to get you started.
With the latest software support, such as remoteproc, rpmsg and the Beaglescope project, PRUs can be used quite easily once you understand how they work.
I've got an experimental box of tricks running that, every 100 ms or so, will spit out a 4 microsecond long +5V pulse of electricity on a TTL line. The exact time that this happens is not known ahead of time, but it's important -- so I'd like to use the Red Hat 5.3 computer that essentially runs the experiment to service this TTL, and create a glorified timestamp.
At the moment, what I've done is wire the TTL into pin 13 of the parallel port (STATUS_SELECT, one of the input lines on a parallel port) on the Linux box, spawn a process when the experiment starts, use chrt to change its scheduling priority to 99 -- i.e. high -- and then just poll the parallel port repeatedly in a while loop until the pin goes high. I then create an accurate timestamp and write it to disk in a non-blocking way.
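For reference, the polling loop is roughly like this (a sketch, not my exact code; the 0x378 base address and the SELECT bit mask are assumptions about the hardware, and ioperm() is x86-specific and needs root):

```c
/* Sketch of the polling approach: spin on the parallel port status
 * register until pin 13 (SELECT, bit 4 of base+1) goes high, then
 * timestamp. Assumes base address 0x378. */
#include <stdio.h>
#include <sys/io.h>
#include <time.h>

int main(void)
{
    if (ioperm(0x378, 3, 1) < 0) {     /* gain access to the port */
        perror("ioperm");
        return 1;
    }

    struct timespec ts;
    for (;;) {
        if (inb(0x379) & 0x10) {       /* pin 13 (SELECT) went high */
            clock_gettime(CLOCK_REALTIME, &ts);
            printf("%ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
            while (inb(0x379) & 0x10)  /* wait for the pulse to end */
                ;
        }
    }
}
```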
Obviously, this is inefficient -- sometimes the process is suspended, and a TTL will be missed. As the computer is, itself, busy doing other things (namely acquiring data from my experimental bit of kit -- an MRI scanner!) this happens quite often. Polling is easy, but probably bad.
My question is this: doing something quickly when a TTL occurs seems like the bread-and-butter of computing, but, as far as I can tell, it's only possible to deal with interrupts on linux if you're a kernel module. The parallel port can generate interrupts, and libraries like paraport let you build kernel modules relatively quickly, where you have to supply your own handler.
Is the best way to deal with this problem and create accurate (±25 ms) timestamps for an experiment whenever that TTL comes in -- to write a kernel module that provides a list of recent interrupts to somewhere in /proc, and then read them out with a normal process later? Is that approach not going to work, and be very CPU inefficient -- or open a bag of worms to do with interrupt priority I'm not aware of?
Most importantly, this seems like it should be a solved problem -- is it, and if so do any wise people wish to point me in the right direction? Writing a kernel module seems like, frankly, a lot of hard, risky work for something that feels as if it should perhaps be simple.
The premise that "it's only possible to deal with interrupts on linux if you're a kernel module" dismisses some fairly common and effective strategies.
The simple course of action for responding to interrupts in userspace (especially infrequent ones) is to have a driver which creates a kernel device (or in some cases a sysfs node) where either a read() or perhaps a custom ioctl() from userspace will block until the interrupt occurs. You'd have to check if the default parallel port driver supports this, but it's extremely common with the GPIO drivers on embedded-type boards, and the basic scheme could be borrowed for the parallel port - provided that the hardware supports true interrupts.
If very precise timing is the goal, you might do better to customize the kernel module to record the timestamp there, and implement a mechanism where a read() from userspace blocks until the interrupt occurs, and then obtains the kernel's already recorded timestamp as the read data - thus avoiding the variable latency of waking userspace and calling back into the kernel to get the time.
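As a rough illustration of that scheme (not the real parport driver API -- the IRQ request and the char-device registration/cleanup are omitted, and names are placeholders), the kernel side might look like:

```c
/* Sketch: the interrupt handler records a timestamp, and a blocking
 * read() hands it to userspace, avoiding a round trip to get the time. */
#include <linux/fs.h>
#include <linux/interrupt.h>
#include <linux/ktime.h>
#include <linux/module.h>
#include <linux/uaccess.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(ts_wait);
static u64 last_ts;                 /* nanoseconds since boot */
static int ts_ready;                /* assumes a single reader */

static irqreturn_t ttl_irq(int irq, void *dev_id)
{
    last_ts = ktime_get_ns();       /* timestamp taken in the handler */
    ts_ready = 1;
    wake_up_interruptible(&ts_wait);
    return IRQ_HANDLED;
}

static ssize_t ts_read(struct file *f, char __user *buf,
                       size_t len, loff_t *off)
{
    if (wait_event_interruptible(ts_wait, ts_ready))
        return -ERESTARTSYS;        /* interrupted by a signal */
    ts_ready = 0;
    if (len < sizeof(last_ts) ||
        copy_to_user(buf, &last_ts, sizeof(last_ts)))
        return -EFAULT;
    return sizeof(last_ts);
}

MODULE_LICENSE("GPL");
```

Userspace then just performs a blocking read() on the resulting device node and gets the already-recorded 8-byte timestamp back.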
You might also look at true local-bus serial ports (if present) as an alternative interrupt-capable interface in cases where the available parallel port is some partial or indirect implementation which doesn't support true interrupts.
In situations where your only available interface is something indirect and high-latency such as USB, or where you want a lot of host- and operating-system independence, it may indeed make sense to use an external microcontroller. In that case, you would probably try to set the micro's clock from the host system, and then have it give you timestamp messages every time it sees an event. If your experiment only needs the timestamps to be relative to each other within a given experimental session, this should work well. But if you need to establish an absolute time synchronization across the USB latency, you may have to do some careful roundtrip measurement and then estimate the latency in order to compensate for it (see NTP for an extreme example).
I am a student and I am writing a simple application in the C99 standard. The program should work on Ubuntu.
I have one problem: I don't know how to get some WiFi parameters like bandwidth or delay, and I have no idea how to approach this. Is it possible to do this using standard functions or any Linux API? (I am a Windows user.)
In general, you don't know the bandwidth or delay of a WiFi device.
Bandwidth and delay are properties of a link, and as far as I know no such information is held in the WiFi drivers. The closest link-related information available is the SINR.
To measure bandwidth or delay, you have to write your own code.
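As a trivial sketch of "write your own code", the following estimates link latency by timing a UDP echo round trip (the 192.168.1.1 address and the echo service on port 7 are assumptions; real throughput measurement is considerably more involved):

```c
/* Estimate link latency by timing a UDP round trip to an echo service.
 * recv() blocks indefinitely if nothing echoes the packet back. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in peer = {0};
    peer.sin_family = AF_INET;
    peer.sin_port = htons(7);                 /* UDP echo port */
    inet_pton(AF_INET, "192.168.1.1", &peer.sin_addr);

    char buf[32] = "ping";
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    sendto(s, buf, sizeof(buf), 0, (struct sockaddr *)&peer, sizeof(peer));
    recv(s, buf, sizeof(buf), 0);             /* blocks until the echo */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double rtt_ms = (t1.tv_sec - t0.tv_sec) * 1e3
                  + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("round-trip time: %.3f ms\n", rtt_ms);
    close(s);
    return 0;
}
```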
Maybe you should tell us more about your concrete problem. For now, I assume that you are interested in the throughput and latency of a specific wireless link, i.e. a link between two 802.11 stations. This could be a link between an access point and a client or between two ad-hoc stations.
The short answer is that there is no such API. In fact, it is not trivial even to estimate these two link parameters. They depend on the signal quality, on the data rate used by the sending station, on the interference, on the channel utilization, on the load of the computer systems at both ends, and probably a lot of other factors.
Depending on the wireless driver you are using it may be possible to obtain information about the currently used data rate and some packet loss statistics for the station you are communicating with. Have a look at net/mac80211/sta_info.h in your Linux kernel source tree. If you are using MadWifi, you may find useful information in the files below /proc/net/madwifi/ath0/ and in the output of wlanconfig ath0 list sta.
However, all you can do is to make a prediction. If the link quality changes suddenly, your prediction may be entirely wrong.
Is it possible to generate real-time intervals in a non-real-time Linux application in C/C++?
I'm writing an ADC simulator: an application that generates packets at a certain frequency. It is important that the packet generation frequency correspond as closely as possible to the sampling rate of the ADC. That is why I don't want to use sleep() and usleep() to set the packet generation intervals.
Thanks.
Is it possible to generate real-time intervals in a non-real-time Linux application in C/C++?
No... if it were, it would be a real-time Linux system.
That said, you can probably get very close, so it depends on your intervals and tolerances. Your only serious option for sub-timeslice precision is to nail the sending thread to a core and let it spin, while keeping other processing off that core, but that's very wasteful of hardware....
If you can afford latencies long enough for your sending code to be re-scheduled, then you can look at setting up alarms and signal handlers, but that potentially has massively higher latency, perhaps only on relatively rare occasions where the cores have all been otherwise utilised. To assess how well this works, you've got to take real measurements under realistic system loads.
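A middle ground worth measuring is a loop that sleeps until absolute deadlines with clock_nanosleep(), which at least avoids the cumulative drift of relative sleeps. A sketch, with a 1 ms period assumed:

```c
/* Sleep to absolute deadlines with clock_nanosleep() so errors don't
 * accumulate the way they do with sleep()/usleep(). Residual jitter
 * still depends on scheduling (SCHED_FIFO and CPU pinning help). */
#define _POSIX_C_SOURCE 200112L
#include <time.h>

#define PERIOD_NS 1000000L          /* 1 ms, assumed ADC period */

int main(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int i = 0; i < 1000; i++) {
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        /* sleep until the absolute deadline, not for a relative delay */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        /* send_packet(i); -- placeholder for the real packet send */
    }
    return 0;
}
```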
The packet generator shouldn't be coupled to the packet sender.
If you want the packets to be sent on time, you should create the packets beforehand and hand them to the packet sender.
So you need a thread with a work queue, and a sleep on that thread to send the packets on time (you can look at Boost's sleep()).
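A rough sketch of that design in C (using POSIX threads rather than Boost): a fixed-size queue guarded by a mutex and condition variable, with the packet payload left as a plain int for illustration. The sender thread would combine dequeue() with a timed sleep like the clock_nanosleep() loop above.

```c
/* One thread pre-builds packets into a queue; another dequeues and
 * sends them on schedule. Queue size and payload are illustrative. */
#include <pthread.h>
#include <stdio.h>

#define QSIZE 64

static int queue[QSIZE];
static int q_head, q_tail, q_count;
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t q_nonempty = PTHREAD_COND_INITIALIZER;

static void enqueue(int pkt)          /* generator thread */
{
    pthread_mutex_lock(&q_lock);
    if (q_count < QSIZE) {            /* drop silently if full (sketch) */
        queue[q_tail] = pkt;
        q_tail = (q_tail + 1) % QSIZE;
        q_count++;
        pthread_cond_signal(&q_nonempty);
    }
    pthread_mutex_unlock(&q_lock);
}

static int dequeue(void)              /* sender thread */
{
    pthread_mutex_lock(&q_lock);
    while (q_count == 0)
        pthread_cond_wait(&q_nonempty, &q_lock);
    int pkt = queue[q_head];
    q_head = (q_head + 1) % QSIZE;
    q_count--;
    pthread_mutex_unlock(&q_lock);
    return pkt;
}

static void *generator(void *arg)
{
    (void)arg;
    for (int i = 0; i < 5; i++)
        enqueue(i);                   /* pre-build packets ahead of time */
    return NULL;
}

int main(void)
{
    pthread_t gen;
    pthread_create(&gen, NULL, generator, NULL);
    for (int i = 0; i < 5; i++)
        printf("sending packet %d\n", dequeue());
    pthread_join(gen, NULL);
    return 0;
}
```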
I am a beginning C programmer (though not a beginning programmer) looking to dive into a project to teach myself C. My project is music-based, and because of this I am curious whether there are any 'best practices', per se, when it comes to timing functions.
Just to clarify, my project is pretty much an attempt to build some barebones music notation/composition software (remember, emphasis on barebones). I was originally thinking about using OSX as my platform, but I want to do it in C, not Obj-C (though I know it would probably be easier...CoreAudio looked like a pretty powerful tool for this kind of stuff). So even though I don't have to build OSX apps in Obj-C, I will probably end up building this on a linux system (probably debian...).
Thanks everyone, for your great answers.
There are two accurate methods for timing functions:
Single process execution.
Timer event handler / callback
Single Process Execution
Most modern computers execute more than one program simultaneously. Actually, they execute pieces of many programs, swapping them out based on priorities and other metrics so that it looks as if more than one program is executing at the same time. This overhead affects timing in programs. Either the program gets delayed in reading the time, or the OS gets delayed in setting its own time variables.
The solution in this case is to prevent as many other tasks as possible from running. The ideal environment for best accuracy is to have your program be the sole program running. Some OSes provide APIs for superuser applications to block or kill all other programs.
Timer event handling / callback
Since the OS can't be trusted to execute your program with high precision, most OSes provide timer APIs. Many of these APIs include the ability to call one of your functions when the timer expires; this is known as a callback function. Other OSes may send a message or generate an event when the timer expires; these fall under the class of timer handlers. The callback process has less overhead than the handlers and is thus more accurate.
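On POSIX systems, the callback style looks like this sketch: timer_create() with SIGEV_THREAD runs a function of yours on each expiry (the 10 ms period is an arbitrary example; link with -lrt on older glibc):

```c
/* Periodic callback timer: the OS invokes on_timer() on its own thread
 * each time the timer expires. */
#define _POSIX_C_SOURCE 200112L
#include <signal.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static void on_timer(union sigval sv)
{
    (void)sv;
    printf("timer fired\n");        /* runs on a separate thread */
}

int main(void)
{
    timer_t t;
    struct sigevent sev = {0};
    sev.sigev_notify = SIGEV_THREAD;
    sev.sigev_notify_function = on_timer;
    timer_create(CLOCK_MONOTONIC, &sev, &t);

    struct itimerspec its = {0};
    its.it_value.tv_nsec = 10000000;    /* first expiry after 10 ms */
    its.it_interval.tv_nsec = 10000000; /* then every 10 ms */
    timer_settime(t, 0, &its, NULL);

    sleep(1);                       /* let the callback fire a few times */
    timer_delete(t);
    return 0;
}
```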
Music Hardware
Although you may have your program send music to the speakers, many computers now have separate processors that play music. This frees up the main processor and produces more continuous notes, rather than sounds separated by silent gaps caused by the platform overhead of your program sending the next sounds to the speaker.
A quality music processor has at least these two functions:
Start Playing
End Music Notification
Start Playing
This is the function where you tell the music processor where your data is and the size of the data. The processor will start playing the music.
End Music Notification
You provide the processor with a pointer to a function that it will call when the music data has been processed. Nice processors will call the function early so there will be no gaps in the sounds while reloading.
All of this is platform dependent and may not be standard across platforms.
Hope this helps.
This is quite a vast area, and, depending on exactly what you want to do, potentially very difficult.
You don't give much away by saying your project is "music based".
Is it a musical score typesetting program?
Is it processing audio?
Is it filtering MIDI data?
Is it sequencing MIDI data?
Is it generating audio from MIDI data?
Does it only perform playback?
Does it need to operate in a real time environment?
Your question though hints at real time operation, so in that case...
The general rule when working in a real time environment is don't do anything which may block the real time thread. This includes:
Calling free/malloc/calloc/etc (dynamic memory allocation/deallocation).
Any file I/O.
Waiting on spinlocks/semaphores/mutexes held by other threads.
Calls to GUI code.
Calls to printf.
Bearing these considerations in mind for a real time music application, you're going to have to learn how to do multi-threading in C and how to pass data from the UI/GUI thread to the real time thread WITHOUT breaking ANY of the above restrictions.
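One common way to satisfy those restrictions is a single-producer/single-consumer ring buffer, which passes data between the GUI thread and the real-time thread without locks or allocation. A minimal sketch using C11 atomics (the float payload and 256-entry size are illustrative):

```c
/* Lock-free SPSC ring buffer: the producer only writes head, the
 * consumer only writes tail, so no mutex is needed. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define RING_SIZE 256  /* must be a power of two */

typedef struct {
    float buf[RING_SIZE];
    atomic_size_t head;  /* written only by the producer (GUI thread) */
    atomic_size_t tail;  /* written only by the consumer (RT thread) */
} ring_t;

/* Producer side: returns false if the ring is full (never blocks). */
static bool ring_push(ring_t *r, float v)
{
    size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head - tail == RING_SIZE)
        return false;
    r->buf[head & (RING_SIZE - 1)] = v;
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return true;
}

/* Consumer side (real-time thread): returns false if empty. */
static bool ring_pop(ring_t *r, float *out)
{
    size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (head == tail)
        return false;
    *out = r->buf[tail & (RING_SIZE - 1)];
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return true;
}

int main(void)
{
    static ring_t r;     /* zero-initialized: empty ring */
    float v;
    ring_push(&r, 440.0f);
    if (ring_pop(&r, &v))
        printf("%f\n", v);
    return 0;
}
```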
For an open source real time audio (and MIDI) (routing) server take a look at http://jackaudio.org
gettimeofday() is the best for wall clock time. getrusage() is the best for CPU time, although it may not be portable. clock() is more portable for CPU timing, but it may have integer overflow.
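A quick sketch showing all three calls side by side:

```c
/* Wall-clock time via gettimeofday(), CPU time via getrusage() (not
 * portable everywhere) and clock() (portable, may overflow). */
#include <stdio.h>
#include <sys/resource.h>
#include <sys/time.h>
#include <time.h>

int main(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);                 /* wall-clock time */

    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);             /* CPU time (user portion) */

    clock_t c = clock();                     /* CPU time, portable */

    printf("wall: %ld.%06ld s\n", (long)tv.tv_sec, (long)tv.tv_usec);
    printf("cpu (rusage): %ld.%06ld s\n",
           (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec);
    printf("cpu (clock): %.6f s\n", (double)c / CLOCKS_PER_SEC);
    return 0;
}
```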
This is pretty system-dependent. What OS are you using?
You can take a look at gettimeofday() for fairly high granularity. It should work OK if you just need to read the time once in a while.
SIGALRM/setitimer can be used to receive an interrupt periodically. Additionally, some systems have higher level libraries for dealing with time.
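A small example of the SIGALRM/setitimer approach (the 100 ms interval is chosen arbitrarily):

```c
/* The kernel delivers SIGALRM periodically; the handler just counts
 * ticks, since only async-signal-safe work belongs in a handler. */
#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

static volatile sig_atomic_t ticks;

static void on_alarm(int sig)
{
    (void)sig;
    ticks++;
}

int main(void)
{
    struct sigaction sa = {0};
    sa.sa_handler = on_alarm;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);

    struct itimerval itv = {0};
    itv.it_value.tv_usec = 100000;      /* first tick after 100 ms */
    itv.it_interval.tv_usec = 100000;   /* then every 100 ms */
    setitimer(ITIMER_REAL, &itv, NULL);

    while (ticks < 10)
        pause();                        /* wait for the next signal */
    printf("received %d alarms\n", (int)ticks);
    return 0;
}
```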