Programmatically getting the time required for a complete system boot-up? - c

I have written a defragmentation application which starts at boot time. Every system boots up faster or slower depending on its configuration. I have hard-coded a delay of 3 seconds at boot time so that Windows registers all ports during that interval. I just want to know: are there any existing APIs which give the exact time at which a Windows system is completely ready after boot, that is, whether all USB ports and other devices have been recognized?

Related

RN4871 with Gattlib C library high latency

I am using an RN4871 with the latest firmware to turn on multiple LEDs via an I2C LED driver. I use the gattlib C/C++ library to communicate ("write without response") with the RN4871 from my Ubuntu (20.04) system, and I measure the communication latency with an oscilloscope connected both to my computer and to the RN4871.
I set the communication parameters on the Bluetooth device to 7.5 ms intervals with zero latency.
The problem is that if I send commands from my computer at 500 ms intervals, my communication latency is below 20 ms, which is just perfect for my application. However, if my commands are 1 second or more apart, the latency rises to 250 ms! In my application I need extremely fast communication with minimal latency (below 40 ms), and of course my commands are sent at variable intervals (they can even be 10 seconds apart). I don't know where this issue comes from. Does it have to do with some sleep mode within the RN4871 that kicks in when there is no data to transfer for more than 500 ms, or with something else?
Thanks!

How to output program logs to SD card without causing damage

I am working on an application that runs on a small Linux computer with an SD card for storage. The application runs automatically on startup, and we want to be able to easily check the logs that it produces. Normally I would just write to a file, since that also seems to be what most normal software does, but I am hesitant to do that because I think continuously writing logs is a bad idea when the only storage is an SD card.
The problem is that sometimes, when we want to check what is happening on the system, say for debugging purposes, we have to stop the application via SSH and then start it again so that we can see the output messages.
So my question is: is there a way to, say, write logs to some kind of circular list that can then be viewed when connecting to the system over SSH? The application is written in C and C++, if that matters.
Is your application on a Raspberry Pi?
The Linux operating system, and everything else running on the device, is probably writing so much to the SD card that your 500 KB/hour would be next to nothing in comparison.
I would personally just have the program log to a file.
If you really do not want this, you have a few other options:
Have the application send the logs via the internet to some service, which you can then monitor
Have your application store the logs in an in-memory buffer, and write them to file when you reach some threshold. Expose an endpoint on localhost which listens for a message and, when one is received, writes the in-memory contents to the file. This allows you to see the current in-memory logs without having to wait; a rough sketch of the buffering part is shown below.
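A minimal sketch of that buffering idea, in C. The names (log_buffer, log_append, log_flush), the 64-entry size, and the log path are made up for illustration; it is not thread-safe and omits the localhost endpoint:

#include <stdio.h>
#include <time.h>

#define LOG_ENTRIES 64
#define LOG_LINE_LEN 256

static char log_buffer[LOG_ENTRIES][LOG_LINE_LEN];
static size_t log_count = 0;

/* Write all buffered lines to the log file, then reset the buffer. */
static void log_flush(const char *path)
{
    FILE *f = fopen(path, "a");
    if (!f)
        return;
    for (size_t i = 0; i < log_count; i++)
        fputs(log_buffer[i], f);
    fclose(f);
    log_count = 0;
}

/* Append one timestamped line; only touch the SD card when the buffer is full. */
static void log_append(const char *path, const char *msg)
{
    if (log_count == LOG_ENTRIES)
        log_flush(path);
    snprintf(log_buffer[log_count++], LOG_LINE_LEN, "%ld: %s\n",
             (long)time(NULL), msg);
}

The endpoint handler (or a signal handler) would simply call log_flush() on demand, so you can read the file over SSH at any time without waiting for the threshold.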
As a first thought, I think the SD driver already takes care of writes, scheduling I/O operations in the way that is safest for the SD card itself (behind the virtual filesystem layer). Beyond that, you can work on your log level to be sure you write only the necessary information and nothing more.
Based on the shared inputs, SD card wear-out might not happen easily; however, there are many ways to handle this scenario, depending on the hardware and software architecture of your system:
Check if you can write the logs to some other storage device within your system (depends on your architecture).
If external communication peripherals are available in your system, check whether logging can be done by redirecting the logs to remote servers or other devices.
Perform selective logging based on a log level, as suits your architecture/framework. You can also write only critical logs to the SD card and redirect the rest elsewhere; this reduces the number of writes. (A small log-level sketch is shown after this list.)
Based on your needs/architecture, check whether the data can be compressed before being logged. This can reduce the number of times the logs are written to the SD card.
To keep the application running and view the logs at the same time:
Based on your architecture/needs, check if you can write to a file periodically or when a threshold is reached, so that you can view the file while the application keeps working.
Send selected logs to an external server/device.
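A minimal sketch of the log-level filtering mentioned above, in C. The LOG macro, the level names and the /var/log/myapp.log path are made up for illustration:

#include <stdio.h>

enum log_level { LOG_ERROR = 0, LOG_WARN = 1, LOG_INFO = 2, LOG_DEBUG = 3 };

/* Only messages at or below this level ever reach the SD card; the rest
 * could be dropped or redirected to a remote server instead. */
static enum log_level sd_log_level = LOG_WARN;

#define LOG(level, ...)                                         \
    do {                                                        \
        if ((level) <= sd_log_level) {                          \
            FILE *f = fopen("/var/log/myapp.log", "a");         \
            if (f) { fprintf(f, __VA_ARGS__); fclose(f); }      \
        }                                                       \
    } while (0)

/* Example usage: only the first call ends up on the SD card.  */
/*   LOG(LOG_ERROR, "sensor read failed: %d\n", err);          */
/*   LOG(LOG_DEBUG, "loop iteration %d\n", i);                  */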

Delay measurement and synchronisation between raspberry pi

I am doing a project with two Raspberry Pis, which work as servers, and a laptop, which is the client.
I have attached a USB microphone to each Raspberry Pi, and using the PortAudio library I capture the audio stream and send it back to the laptop through a TCP/IP connection.
The goal of this project is to locate sound sources, and it works like this. I run a .c file on each Raspberry Pi, both of which are connected to the same LAN as the laptop. When this program is running on both Raspberry Pis I get the message "Waiting connection for a client". The next thing to do is to run the MATLAB file, which starts both Raspberry Pis and records. I have managed to synchronize the Raspberry Pis to start at the same time through a simple condition like
do
{
    usleep(10000);                    /* 10 ms; note sleep(0.01) would truncate to sleep(0) */
    j = read(newsockfd, &start, 1);   /* poll for the start byte from the laptop */
} while (j == 0);
So, right before both Raspberry Pis have to start recording, I pause them so that the initialization commands and so on can finish, and then I just send a character start = 'k' from my MATLAB program.
t1 and t2 are TCP connections:
start = 'k';
fwrite(t1, start);   % send the start byte to the first Raspberry Pi
fwrite(t2, start);   % and to the second one
From this point both Raspberry Pis open the PortAudio stream and call the recordCallBack function. When I run the application and clap, I still get a delay of 0.2 s between them, which causes an error of 60 meters. I have also checked the execution time of the fwrite call, but that might only account for about 0.05 seconds, which still leaves the results far from reality.
This project is based on TDOA measurement, and it is desired to have a delay under 0.01 seconds to get accuracy < 1 m.
I have heard that Linux has some very accurate timers, and I was thinking that maybe I could use them to clock the time inside the functions in the .c file. Anyway, if you have any ideas on how I can measure the delay from the point where I send the character 'k' from MATLAB to the point where the microphone's audio stream is opened, or any way to synchronize the two Linux servers, please help.
PS: both are Raspberry Pi 2s, connected through UTP cables, so the processing and transmission rates should be the same.
It looks like an interesting project, but I think you underestimate the problem a little bit. The first issue is that you need to synchronize the two sensors. Given the speed of sound (roughly 343 m/s, so each millisecond of timing error is about 0.34 m of position error), if you want an accuracy of about 1 m you need to synchronize them to within roughly 1 ms. You could try the Network Time Protocol, but I'm not sure you can reach this accuracy even with a master on the local network. Better synchronization can be achieved with PTP (over Ethernet), or with GPS if you can receive a GPS signal.
Then, if you manage to achieve this, a first step could be to record a few hand claps on both Raspberry Pis, save the timestamp at which recording starts on each, and see if you actually obtain something significant. Maybe you will also need to use a microcontroller and a real-time operating system instead!
There are many ways to synchronise clocks. It can be done at the system level or at the application level.
The system level tends to be easier because there are already tools to do the job. I don't recommend doing PTP at this stage, as mentioned by Emilien, since it is quite complicated to make it work. Instead I would recommend a normal setup using the same NTP network on all machines.
Example of NTP setup:
Query the server with # ntpdate -q 0.rhel.pool.ntp.org
If it is running, set your local clock with # ntpdate 0.rhel.pool.ntp.org 1.rhel.pool.ntp.org
Note: # means the root user (which most likely means that you will need to run the command with sudo), whilst $ means a normal user.
Check all machines' times with $ date +%k:%M:%S.%N, which prints the clock down to nanosecond resolution.
If that doesn't achieve the desired result, then try the PTP approach, or simply synchronise all your devices when they connect to the master, where the master can normalise each independent clock. I will not go into details here.
Then you can send your audio data via TCP/IP (or perhaps UDP/IP to lower the latency), as you mentioned before, but always send the slave machine's timestamp along with each audio frame, obtained with the clock_gettime() function using CLOCK_REALTIME as the clk_id argument.
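A minimal sketch of tagging a frame with such a timestamp before sending it over the socket. The frame_header layout and the sockfd/buf names are made up for illustration; error handling is omitted, and it assumes both ends agree on struct layout and endianness:

#include <stdint.h>
#include <time.h>
#include <unistd.h>

struct frame_header {
    int64_t  sec;      /* seconds since the Unix epoch, from CLOCK_REALTIME */
    int64_t  nsec;     /* nanoseconds within that second */
    uint32_t samples;  /* number of audio samples that follow */
};

/* Send one timestamped frame: header first, then the raw samples. */
static void send_frame(int sockfd, const int16_t *buf, uint32_t nsamples)
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);   /* NTP-disciplined wall clock */

    struct frame_header hdr;
    hdr.sec = ts.tv_sec;
    hdr.nsec = ts.tv_nsec;
    hdr.samples = nsamples;

    write(sockfd, &hdr, sizeof hdr);                  /* header */
    write(sockfd, buf, nsamples * sizeof(int16_t));   /* audio payload */
}

On the laptop side you can then compare the CLOCK_REALTIME timestamps of the two streams; as long as both Pis are disciplined by the same NTP server, that difference is the delay you are trying to measure.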

Does the USB mass-storage class require re-enumeration after a timeout?

This might be a stupid question.
I was debugging a USB storage device on an ARM Cortex-M4 platform (STM32F4 series) which runs embedded Linux. The ARM works as the USB host and tries to communicate with a thumb drive at USB full speed (12 Mb/s).
Now here is the problem. After successful enumeration and several SCSI commands over BULK transfers, the capacity and everything else can be read correctly. However, when I try to send these SCSI commands again after about 15 seconds (under the same conditions), the USB host controller just returns 'Transaction Error'; it looks like the device no longer responds to BULK transfers (it does not ACK) and the host controller times out. The question is: is there any timeout mechanism in the USB mass-storage class or the SCSI layer such that, after a timeout, the device must be re-enumerated or re-probed, and otherwise won't respond anymore?
I understand this might be due to a silly error in my program, or due to some limitation of the specific hardware. However, when I used the usbmon module in Linux on a PC to capture the transfers to the very same thumb drive, I could see that the operating system actually sends a probing sequence (Read-Max-LUN followed by Test Unit Ready) every 5 seconds, which could be the reason why the thumb drive doesn't fail on my PC.
Thanks! I'm looking forward to any replies.
I think you're on the right track with the Test Unit Ready commands. I am in the middle of writing a mass-storage device driver for an embedded device, and when testing on OS X, after the initial SCSI queries my device receives a Test Unit Ready command about once every second when no other activity is occurring. Since your post is quite old, I recommend you post your own solution if you've since solved your problem.
Otherwise, try adding periodic Test Unit Ready commands from the host side when there is no other activity. You could (re)arm a timer whenever USB activity occurs; if the timer fires, send a Test Unit Ready command. Rinse, repeat. A rough sketch of the command wrapper is shown below.
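For reference, in the mass-storage Bulk-Only Transport a Test Unit Ready is just a 31-byte Command Block Wrapper carrying a 6-byte, all-zero CDB (opcode 0x00) with no data stage; the device then answers with a 13-byte Command Status Wrapper on the bulk IN endpoint. A sketch of building that CBW in C (how you submit it to the bulk OUT endpoint depends on your host stack, and the struct assumes a little-endian host such as the STM32):

#include <stdint.h>
#include <string.h>

/* USB mass-storage Bulk-Only Transport Command Block Wrapper (31 bytes). */
struct cbw {
    uint32_t signature;    /* 0x43425355 = "USBC" when stored little-endian */
    uint32_t tag;          /* echoed back in the CSW */
    uint32_t data_length;  /* 0: Test Unit Ready has no data stage */
    uint8_t  flags;        /* direction bit; irrelevant when data_length == 0 */
    uint8_t  lun;
    uint8_t  cb_length;    /* length of the CDB that follows */
    uint8_t  cb[16];       /* the SCSI CDB itself */
} __attribute__((packed));

/* Fill a CBW for SCSI Test Unit Ready (opcode 0x00, 6-byte all-zero CDB). */
static void build_test_unit_ready(struct cbw *c, uint32_t tag, uint8_t lun)
{
    memset(c, 0, sizeof *c);
    c->signature = 0x43425355u;
    c->tag       = tag;
    c->lun       = lun;
    c->cb_length = 6;       /* cb[0] stays 0x00, which is the TUR opcode */
}

After sending the 31 bytes on bulk OUT, read the 13-byte CSW from bulk IN and check its status byte. This matches the periodic probing the usbmon capture showed on the PC, which is presumably what keeps the drive responsive there.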

How are the graphical user interfaces of devices such as digicoders, VCRs and DVD players made, from power-on to the user interface?

I have C/Java knowledge, but I still don't understand how some hardware shows its own screens/graphics from the power-on stage to the user interface (it never shows a Linux/Unix boot screen, nor a Windows boot screen).
My question is: how is the power-on-to-user-interface sequence of a VCR/TV digicoder made? Do they use a regular Linux kernel, or is there some special open-source framework which allows us to develop such a thing?
Thanks
Many embedded systems use U-Boot as a boot loader. U-Boot provides the ability to display a "splash" screen while the Linux kernel is booting.
A device will start the bootloader right after the CPU comes out of reset (usually milliseconds after power-on at most). The bootloader code can initialize the display and show a splash screen if it wants (in the same way most modern non-embedded Linux distributions have a graphical grub splashscreen). The kernel can avoid changing the display configuration, and on an embedded device the kernel can boot pretty quickly to running userspace (at least an initramfs), which can take over the display and show whatever animation, progress bar, etc until the full UI is ready.
Operating systems such as Windows or Linux are large and general-purpose. They have to initialise themselves and the hardware, which includes interrogating all connected devices for "plug & play". The OS does not know in advance which such devices are connected; it has to "discover" the hardware every time it starts. The connected hardware may even have changed since it last booted.
Embedded systems do not usually have large operating systems (and often do not have an operating system at all), and they usually have very specific hardware that is known to the system a priori, so they do not need to test for and determine the correct configuration of such devices. These devices are also often far simpler, frequently being on-chip peripherals.
That said, your PC is capable of instantly displaying a user interface (just not Windows). The BIOS boot process outputs text to the display almost immediately, and the BIOS console is an interactive user interface that starts on request during boot. Also, the last time I booted MS-DOS on a modern PC, it took only a few seconds to start.
Not all embedded systems start "instant-on"; my digital TV PVR even has a progress bar while booting, but being application-specific, it still starts far faster than a general-purpose computer. My Network Attached Storage (NAS) device, which is an embedded system running Linux, on the other hand takes considerable time to boot since, among other things, it has to start the file system, network, USB interfaces, print server, DLNA server, and web server: in fact, many of the things required of a general-purpose computer (but it has no display; its UI is presented via the web server).
Some embedded systems with large operating systems and complex hardware can achieve "instant-on" by never truly switching off, instead going into a low-power mode where the system state is retained in memory while all the high-powered devices such as the screen, WiFi, Bluetooth, etc. are switched off.
