scheduling tasks in linux - c

Can we schedule a program to execute every 5 ms or 10 ms, etc.?
I need to generate a pulse through the serial port at 1 kHz and 15 kHz.
The program should only toggle the pins on the serial port, so the frequency has to be produced by a scheduler. Is this possible in Linux with an RT patch?

I believe a better solution is to put your "generate a pulse" function in a loop, for example:
for (;;) {
    generate_pulse();   /* generate a pulse */
    usleep(5000);       /* 5 ms (or 10000 for 10 ms); usleep() is in <unistd.h> */
}

Is this possible in Linux with an RT patch?
I suggest going with an RT patch if timing is critical.
Xenomai is an RT framework that I used on a 2.6 kernel a while back.
Here is an example that runs every second:
http://www.xenomai.org/documentation/trunk/html/api/trivial-periodic_8c-example.html
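If you stay on a stock or PREEMPT_RT kernel instead of Xenomai, a rough sketch of a drift-free periodic loop using clock_nanosleep() with an absolute deadline could look like this (generate_pulse() is assumed to be your pin-toggling function; link with -lrt on older glibc):

#include <time.h>

#define PERIOD_NS (5 * 1000 * 1000L)   /* 5 ms period */

extern void generate_pulse(void);      /* assumed: toggles the serial-port pin */

void run_periodic(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (;;) {
        next.tv_nsec += PERIOD_NS;                 /* absolute deadline for the next tick */
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        generate_pulse();
    }
}

Sleeping until an absolute time rather than for a relative interval keeps the period from drifting when the loop body takes a variable amount of time.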

There is the PPS project, which is now part of the mainline Linux kernel (at least a portion of it in the 2.6 branch; in the latest 3.x kernel branch it looks like there is full integration).
The linked documentation also makes an explicit reference to using this PPS implementation with a serial port:
A PPS source can be connected to a serial port (usually to the Data
Carrier Detect pin) or to a parallel port (ACK-pin) or to a special
CPU's GPIOs (this is the common case in embedded systems) but in each
case when a new pulse arrives the system must apply to it a timestamp
and record it for userland.
Good examples, tutorials, and guides are apparently not hard to find; a quick search will turn up plenty of resources.
The header for the API is usually at /usr/include/linux/pps.h.

I have finally found a way to get it done.
The best way to do it is to first create a timer with the required period, and then call the task (the pulse-generating program) every time the timer expires. The timer program can be run in the background. The timer can be created and armed using timer_create() and timer_settime() respectively. A different program can be launched from one program using fork() and execl(), and the program can be put in the background using daemon().
By using all these things we can create our own scheduler.
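A minimal sketch of that approach, assuming a hypothetical pulse program at /usr/local/bin/pulse (error checking omitted; link with -lrt on older glibc):

#include <signal.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    daemon(0, 0);                                   /* detach and run in the background */

    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGRTMIN);
    sigprocmask(SIG_BLOCK, &set, NULL);             /* take the timer signal via sigwait() */

    timer_t t;
    struct sigevent sev = { .sigev_notify = SIGEV_SIGNAL, .sigev_signo = SIGRTMIN };
    timer_create(CLOCK_MONOTONIC, &sev, &t);

    struct itimerspec its = {
        .it_value    = { .tv_sec = 0, .tv_nsec = 5000000 },   /* first expiry after 5 ms */
        .it_interval = { .tv_sec = 0, .tv_nsec = 5000000 },   /* then every 5 ms */
    };
    timer_settime(t, 0, &its, NULL);

    for (;;) {
        int sig;
        sigwait(&set, &sig);                        /* block until the timer fires */
        if (fork() == 0) {                          /* child: run the pulse-generating task */
            execl("/usr/local/bin/pulse", "pulse", (char *)NULL);
            _exit(1);
        }
        while (waitpid(-1, NULL, WNOHANG) > 0)      /* reap finished children */
            ;
    }
}

Forking a new process every few milliseconds is expensive, so in practice it is usually better to call the pulse function directly from the timer loop; the fork()/execl() step is only shown because the answer describes launching a separate program.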

Related

How to control CS signal in SPIDEV module on Raspberry Pi

I am trying to use the SPIDEV module in Python 2.7 to interface a Raspberry Pi 3 with an ADS1256 ADC over the SPI bus available on the Raspberry Pi.
The project is to communicate with two of those ADCs and sample all the channels (8 channels each) at a 250 Hz sampling rate.
The functions in the SPIDEV module responsible for data transactions are xfer and xfer2. The problem with these functions is that they issue a CS active command (bring CS low), do the transaction, and issue a CS release command (bring CS high). In order to communicate with the ADS1256, a series of commands needs to be sent to the ADC while CS is kept at logic low. This is possible by listing all the commands together and passing them to the xfer/xfer2 function like this:
xfer2([10, 20, 30, 40])
However, this way of sending commands does not give the ADC sufficient time to process each command; in other words, the timing between instructions violates the timing specifications of the ADC. If, on the other hand, one command is sent at a time, then the CS toggle causes the ADC to forget the previous command.
Two other alternatives suggested online that I tried introduce so much delay that I cannot squeeze all the channel reads into the time frame I have between sampling instants. These alternatives are:
WiringPi module: this module has the wiringPiSPIDataRW function, which performs only the data transaction, and CS can be controlled separately with the module's I/O functions. The drawback is that the time between calls to this function, as well as the time between bringing CS low and calling it, is more than 200 microseconds, which in aggregate exceeds 4 milliseconds (my sampling period) when I sample all the channels.
Using a separate pin for CS when using SPIDEV: this option also introduced more than 100 microseconds of delay between function calls.
These are the two suggestions I found while digging through the Raspberry Pi community and Stack Overflow.
The xfer functions in SPIDEV also take an argument called delay; according to the documentation it should control the delay between blocks, but in practice it only controls how long CS is kept low after the transaction is complete. For example, if I issue:
xfer2([12, 23, 34, 46], 1800000, 30)
it will keep CS low for 30 microseconds only at the end, after 46 has been sent. It does not insert 30 microseconds between each byte, i.e. 12, 23, 34, and 46, which is what I actually need. However, if I do
xfer2([12])
xfer2([23])
xfer2([34])
xfer2([46])
then, due to the nature of the Raspberry Pi, the time between each call will be more than 100 microseconds, which I cannot afford.
So something that lets me control the delay between commands would be ideal.
If that is not possible, something that lets me keep the xfer functions from toggling CS, so that I can control the CS pin with I/O functions instead. This would avoid a hardware modification on my board, which uses the Raspberry Pi GPIO header CE pin as CS. It is still a slow solution, but much faster than the functions in the WiringPi module.
In the worst case, I will have to modify my hardware and use a different GPIO pin as CS.
Thanks for reading
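For what it's worth, one direction that may be worth testing (a sketch only, not something verified against the ADS1256): the kernel spidev interface underneath the Python module lets a single SPI_IOC_MESSAGE() ioctl carry several struct spi_ioc_transfer entries, each with its own delay_usecs, and with cs_change left at 0 the chip select stays asserted for the whole message. The device path /dev/spidev0.0 and the 30-microsecond gap below are assumptions for illustration:

#include <fcntl.h>
#include <linux/spi/spidev.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int send_with_gaps(const char *dev, const uint8_t *cmds, int n, uint16_t gap_us)
{
    int fd = open(dev, O_RDWR);
    if (fd < 0)
        return -1;

    struct spi_ioc_transfer xfers[8];
    memset(xfers, 0, sizeof(xfers));

    for (int i = 0; i < n && i < 8; i++) {
        xfers[i].tx_buf      = (unsigned long)&cmds[i];  /* one command byte per transfer */
        xfers[i].len         = 1;
        xfers[i].delay_usecs = gap_us;                   /* pause before the next transfer */
        xfers[i].cs_change   = 0;                        /* keep CS asserted between them  */
    }

    int ret = ioctl(fd, SPI_IOC_MESSAGE(n), xfers);      /* one message, CS low throughout */
    close(fd);
    return ret;
}

/* usage sketch: uint8_t cmds[] = { 10, 20, 30, 40 };
 * send_with_gaps("/dev/spidev0.0", cmds, 4, 30);        30 us between command bytes */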

How to get NTP working with custom I/O Pin?

I have a motherboard with I/O pins, and I have written a C library with functions to set and query the status of these I/O pins. Let's say the name of one of these functions is get_pin(int pin_no), and it returns the logical voltage of that pin. I would like to send a 1 pulse-per-second (PPS) signal to one of my pins and tell Linux's NTPD to calibrate based on this signal.
Is it possible to tell NTPD to use one of these I/O pins as its PPS source? If so, what is the approach: is it via a config file, or does it require modifying NTPD's source code? My early research seems to suggest the latter may be necessary.
Edit: I'm working with ntpd on CentOS.
Does your kernel have PPS support?
$ grep PPS /boot/config-$(uname -r)
# PPS support
CONFIG_PPS=m
# CONFIG_PPS_DEBUG is not set
# PPS clients support
# CONFIG_PPS_CLIENT_KTIMER is not set
CONFIG_PPS_CLIENT_LDISC=m
CONFIG_PPS_CLIENT_PARPORT=m
CONFIG_PPS_CLIENT_GPIO=m
# PPS generators support
Is ldattach installed?
$ which ldattach
/usr/sbin/ldattach
You may not need ldattach. It was mentioned in the LinuxPPS installation instructions. However, it appears that it is only used for PPS sent over a serial line (e.g. RS-232).
Are the pps-tools installed?
$ which ppstest
/usr/bin/ppstest
Is the pps-gpio.ko module installed?
$ modinfo pps-gpio
filename: /lib/modules/4.4.0-38-generic/kernel/drivers/pps/clients/pps-gpio.ko
version: 1.0.0
license: GPL
description: Use GPIO pin as PPS source
author: James Nuss <jamesnuss@nanometrics.ca>
author: Ricardo Martins <rasm@fe.up.pt>
srcversion: D2C22B0A465DA63746EFB59
alias: of:N*T*Cpps-gpio*
depends: pps_core
intree: Y
vermagic: 4.4.0-38-generic SMP mod_unload modversions
You can tell the kernel that a GPIO pin will be used as a PPS signal by enabling the pps-gpio device-tree overlay in your boot configuration; on a Raspberry Pi, for example, you add something like this to /boot/config.txt:
dtoverlay=pps-gpio,gpiopin=18
You will need to change "18" to the GPIO pin you are using.
You will need to add a couple of lines like this to your ntp.conf:
server 127.127.22.1 # ATOM(PPS)
fudge 127.127.22.1 flag3 1 # enable PPS API
References:
http://www.ntp.org/ntpfaq/NTP-s-config-adv.htm
http://linuxpps.org/wiki/index.php/Main_Page
http://rdlazaro.info/compu-Raspberry_Pi-RPi-stratum0.html
http://doc.ntp.org/4.1.1/refclock.htm
http://doc.ntp.org/4.1.1/driver22.htm
A one-pulse-per-second calibration signal also requires reading the input pin at exactly one-second intervals. Polling will not work, since the OS may defer the timer in favor of higher-priority work.
Likewise, hooking an interrupt-on-change for this pin to the calibrate function will also not guarantee that the calibration method runs exactly once per second, because of interrupt-processing delays, e.g. when a higher-priority interrupt occurs.
If I understood the question correctly, you are using something like a Raspberry Pi and wish to synchronize your system with another controller by receiving some sequence of logical 1s that means, for instance, a time tick for your board?
The one thing I do not understand is why you need the NTP daemon for that. Isn't it better to create a static time_t variable that is incremented upon each tick received?
If you wish to synchronize some external devices later and the board acts as a time server, just adjust the system date whenever the difference between your static variable and the time(0) value becomes greater than a defined threshold.
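A very rough sketch of that last idea (the names and the two-second threshold are made up; settimeofday() needs root, and for small corrections adjtime() would usually be preferable):

#include <stdlib.h>
#include <sys/time.h>
#include <time.h>

#define MAX_DRIFT 2            /* seconds of drift tolerated before stepping the clock */

static time_t tick_time;       /* seeded once with the true epoch time, then ++ per tick */

void on_tick(void)             /* called whenever get_pin() sees the 1 PPS edge */
{
    tick_time++;
    if (labs((long)(time(NULL) - tick_time)) > MAX_DRIFT) {
        struct timeval tv = { .tv_sec = tick_time, .tv_usec = 0 };
        settimeofday(&tv, NULL);                   /* step the system clock back in line */
    }
}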

Scheduling routines in C and timing requirements

I'm working on a C program that transmits samples over USB3 for a set period of time (1-10 us), and then receives samples for 100-1000 us. I have a rudimentary pthread implementation where the TX and RX routines are each handled as a thread. The reason for this is that in order to test the actual TX routine, the RX needs to run and sample before the transmitter is activated.
Note that I have very little C experience outside of embedded applications and this is my first time dabbling with pthread.
My question is, since I know exactly how many samples I need to transmit and receive, how can I, for example, start the RX thread once the TX thread is done executing, and vice versa? How can I ensure that the timing stays consistent? Sampling at 10 MHz imposes some harsh timing requirements.
Thanks!
EDIT:
To provide a little more detail, my device is a bladeRF x40 SDR, and communication to the device is handled by a FX3 microcontroller, which occurs over a USB3 connection. I'm running Xubuntu 14.04. Processing, scheduling and configuration however is handled by a C program which runs on the PC.
You don't say anything about your platform, except that it supports pthreads.
So, assuming Linux, you're going to have to realize that in general Linux is not a real-time operating system, and what you're doing sure sounds as if it has real-time timing requirements.
There are real-time variants of Linux; I'm not sure how well they'd suit your needs. You might also be able to achieve better performance by doing the work in a kernel driver, but then you won't have access to pthreads, so you're going to have to be a bit more low-level.
Thought I'd post my solution.
While the next build of the bladeRF firmware and FPGA image will include the option to add metadata (timestamps) to the synchronous interface, until then there's no real way in which I can know at which time instants certain events occurred.
What I do know is my sampling rate, and exactly how many samples I need to transmit and receive at which times relative to each other. Therefore, by using condition variables (with pthread), I can signal my receiver to start receiving samples at the desired instant. Since TX and RX operations happen in a very specific sequence, I can calculate delays by counting the number of samples and multiplying by the sample period (the inverse of the sampling rate), which has proven to be within 95-98% accurate.
This obviously means that since my TX and RX threads are running simultaneously, there are chunks of data within the received set of samples that will be useless, and I have another routine in place to discard those samples.
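For reference, a minimal sketch of that kind of condition-variable hand-off (the flag name and thread bodies are illustrative, not the actual bladeRF code):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock     = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  rx_ready = PTHREAD_COND_INITIALIZER;
static bool            start_rx = false;

static void *tx_thread(void *arg)
{
    (void)arg;
    /* ... transmit the known number of samples ... */

    pthread_mutex_lock(&lock);
    start_rx = true;                       /* mark the hand-off point */
    pthread_cond_signal(&rx_ready);        /* wake the receiver       */
    pthread_mutex_unlock(&lock);
    return NULL;
}

static void *rx_thread(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!start_rx)                      /* guard against spurious wakeups */
        pthread_cond_wait(&rx_ready, &lock);
    pthread_mutex_unlock(&lock);

    /* ... receive the known number of samples ... */
    return NULL;
}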

EPP port watchdog timer: how does it work?

I am working on a project that involves fast data acquisition (a scientific experiment). I will build an MCU-based module that will supply (at its fastest rate) 2 to 4 bytes of data every 10 microseconds. This data will have to be transferred to a PC in real time for further processing. In order to keep the cost of the equipment low I have chosen to use the Enhanced Parallel Port (EPP) of the PC for the connection. Its data rate (500 KB/s to 2 MB/s) should be sufficient.
The control program will be written in C and will run under DOS (I use DJGPP), and the EPP port will be handled by direct I/O port reads/writes for maximum efficiency.
Unfortunately, most of the documents I found on the net about programming the EPP port are badly written and confusing. My first request is actually for a pointer/link to a comprehensive document that would clearly and logically explain the operation of the EPP port.
Anyway, I managed to find out most of the things I needed, but there is one thing that baffles me. The documents mention a 'watchdog timer' in the EPP port that will set bit 0 of the status register if there is no response from the attached device within about 10 usec. One of the docs even suggests monitoring this status bit and resetting it if it goes active. AFAIK that is nonsense: the status port is read-only. So how does this watchdog timer really work? I assume the logical way would be for the LPT controller circuit to reset this bit every time a new read or write operation is initiated. Is this assumption correct? If not, how should I handle this signal?
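Only a hedged sketch rather than an authoritative answer: the Linux parport_pc driver clears this timeout bit with a chipset-dependent sequence, re-reading the status register and then writing the bit both ways, which suggests the bit is special-cased by the chipset even though the rest of the status port behaves as read-only. Translated to DJGPP's inportb()/outportb() (the 0x378 base address is an assumption):

#include <pc.h>                           /* DJGPP port I/O */

#define EPP_BASE   0x378                  /* assumed LPT base address */
#define EPP_STATUS (EPP_BASE + 1)

static void clear_epp_timeout(void)
{
    unsigned char s = inportb(EPP_STATUS);

    if (s & 0x01) {                       /* EPP timeout bit set               */
        inportb(EPP_STATUS);              /* some chips clear it on a re-read  */
        outportb(EPP_STATUS, s | 0x01);   /* others by writing the bit as 1... */
        outportb(EPP_STATUS, s & 0xFE);   /* ...or as 0                        */
    }
}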

Serial Comm in Debian Auto Timeout

I'm using Debian 3.1 Sarge, kernel 2.4.26, on a TS-7400 board with an ARM9 architecture.
I am using the POSIX termios and fcntl interfaces.
I am writing a program to communicate between two embedded devices over serial. The program uses the POSIX timeout setting VTIME and works successfully on Ubuntu 10.1, but it does not time out on the board. I need the program to retry sending a command if there is no response after a certain time. I know the board is transmitting OK the first time, but then the program locks up waiting for a response. I am running the serial port in blocking mode, so read() waits until at least 1 byte is received or 0.1 s has passed, as defined by VTIME.
What is the problem? Or, if VTIME simply does not work in this kernel, what is another way to accomplish this?
Investigate the select() system call. This will let you execute a read when there is actually something to read, instead of waiting for 0.1 seconds hoping something will show up. If this is supposed to be a straight port of your code, then this may not be an appropriate thing to do.
It is an alternative....
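A rough sketch of that idea (the 100 ms timeout mirrors the VTIME value from the question; fd setup and the resend logic are assumed):

#include <sys/select.h>
#include <unistd.h>

/* Returns the number of bytes read, or 0 on timeout so the caller can resend. */
ssize_t read_or_timeout(int fd, char *buf, size_t len)
{
    fd_set rfds;
    struct timeval tv = { .tv_sec = 0, .tv_usec = 100 * 1000 };   /* 100 ms */

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);

    if (select(fd + 1, &rfds, NULL, NULL, &tv) > 0)
        return read(fd, buf, len);        /* data arrived within the timeout */
    return 0;                             /* timed out (or select() failed)  */
}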
