Waiting for all USB drives to load

I'm currently working with an application that has branching logic depending on whether a specific USB drive is inserted into the system. It does this by polling all drive letters looking for a path on the root of each drive.
This works for the majority of machines, but this application often runs on startup with the USB drive already inserted. In addition, some machines are especially slow and take a good minute to bring up the USB drive once Windows boots. On these machines, when the code reaches the check for the drive, it can't find it, and the wrong branch is executed.
It could be possible to wait a minute before checking for drives. I'd prefer to have the application wait for all USB devices (or simply only mass storage devices) to load before checking for the drives, or something even more intelligent.
Unfortunately, I am unfamiliar with the methods needed to wait for all USB devices to finish loading, and with the DDK in general. I can see that it's possible to register a window for device notifications with GUID_DEVINTERFACE_USB_DEVICE, possibly receiving messages like DBT_DEVICEARRIVAL, DBT_DEVNODES_CHANGED, and WM_DEVICECHANGE, the distinctions of which I do not know.
However, the USB drive will likely already be inserted into and detected by the system (but not yet have a drive letter) before the program executes, so registering for all device-change notifications alone would not make sense. If it's possible to identify devices which are inserted but not yet loaded (possibly with SetupDiEnumDeviceInterfaces) and then register for "loaded" notifications on those devices, it might work. I don't have familiarity with any of this, so pointers (or sample code) would be extremely helpful.
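For reference, here is a minimal sketch of the drive-letter poll described above, with a retry loop instead of a fixed one-minute wait; the marker file name and the 90-second window are made-up placeholders, not part of the original setup:

    /* Poll all drive letters for a marker file, retrying for up to ~90 s.
     * "my_marker_file.txt" and the retry window are illustrative only. */
    #include <windows.h>
    #include <stdbool.h>
    #include <stdio.h>

    static bool find_marker_drive(char *out_root)       /* out_root holds "X:\" */
    {
        DWORD mask = GetLogicalDrives();                 /* bit 0 = A:, bit 1 = B:, ... */
        for (int i = 0; i < 26; i++) {
            if (!(mask & (1u << i)))
                continue;
            char marker[MAX_PATH];
            snprintf(marker, sizeof(marker), "%c:\\my_marker_file.txt", 'A' + i);
            if (GetFileAttributesA(marker) != INVALID_FILE_ATTRIBUTES) {
                snprintf(out_root, 4, "%c:\\", 'A' + i);
                return true;
            }
        }
        return false;
    }

    int main(void)
    {
        char root[4];
        for (int attempt = 0; attempt < 90; attempt++) {
            if (find_marker_drive(root)) {
                printf("marker drive found at %s\n", root);
                return 0;        /* take the "drive present" branch */
            }
            Sleep(1000);         /* crude; the answers below suggest notifications instead */
        }
        printf("marker drive not found, taking the other branch\n");
        return 1;
    }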

I don't think you can.
The problem is that you're trying to distinguish two cases which are not actually distinct: plugging in a USB device during boot and plugging it in later.
Bear in mind that USB is a protocol which requires a non-trivial amount of intelligence in the USB slave devices. They exchange multiple messages with the USB host (i.e. your OS), and this exchange isn't instant. For instance, your USB hard drive will need to ask for permission to draw more than 100 mA of power. To answer that, the power drivers of Windows must be up and running. The physical disk can only spin up once the answer arrives.
So, there's a whole message sequence going on, and the drive letter shows up only fairly late. Windows must know how many partitions exist. So during this exchange, new devices are being created all the time.
When you enumerate devices while devices are actively being added, you're really asking for trouble. The SetupDiEnumDeviceInterfaces API doesn't operate on a snapshot (which we know because there's no Close method); you ask for the Nth device until you get a "no more devices" error, and then you know N was too big. But when devices are still actively being added, N changes. And I don't see a guarantee that the list is ordered by age; devices might be added in the middle as well.

I don't think that getting notifications about drivers being installed for newly plugged-in devices would help you much. When you plug in the same device repeatedly, the drivers are usually installed only the first time.
Also, although a USB flash drive physically looks like one compact device, Windows represents it by at least three PnP devices: a USB mass storage device (representing the USB endpoint), a disk device (representing the physical disk inside the flash drive), and one or more volume devices (each representing one volume, i.e. a partition in your case). Drive letters may be assigned to the volume devices.
What you could do is monitor the arrival and removal of volume devices (RegisterDeviceNotification for GUID_DEVINTERFACE_VOLUME) and examine each volume device that arrives (I believe the Setup API allows you to trace its parents back up to the USB stack).
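A minimal sketch of that registration, assuming the volume arrival should simply trigger a re-scan of the drive letters; the hidden window and class name are illustrative, and error handling is omitted:

    /* Register a hidden window for GUID_DEVINTERFACE_VOLUME arrival
     * notifications; on each arrival, re-run the drive-letter check. */
    #include <windows.h>
    #include <dbt.h>
    #include <initguid.h>
    #include <winioctl.h>        /* defines GUID_DEVINTERFACE_VOLUME */

    static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        if (msg == WM_DEVICECHANGE && wParam == DBT_DEVICEARRIVAL) {
            const DEV_BROADCAST_HDR *hdr = (const DEV_BROADCAST_HDR *)lParam;
            if (hdr->dbch_devicetype == DBT_DEVTYP_DEVICEINTERFACE) {
                /* A volume interface appeared: re-scan drive letters here. */
            }
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }

    int main(void)
    {
        WNDCLASSA wc = {0};
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = GetModuleHandle(NULL);
        wc.lpszClassName = "VolumeWatch";                /* illustrative name */
        RegisterClassA(&wc);

        /* Invisible top-level window, just to receive the notifications. */
        HWND hwnd = CreateWindowA("VolumeWatch", "", 0, 0, 0, 0, 0,
                                  NULL, NULL, wc.hInstance, NULL);

        DEV_BROADCAST_DEVICEINTERFACE filter = {0};
        filter.dbcc_size       = sizeof(filter);
        filter.dbcc_devicetype = DBT_DEVTYP_DEVICEINTERFACE;
        filter.dbcc_classguid  = GUID_DEVINTERFACE_VOLUME;
        HDEVNOTIFY note = RegisterDeviceNotification(hwnd, &filter,
                                                     DEVICE_NOTIFY_WINDOW_HANDLE);

        MSG msg;
        while (GetMessage(&msg, NULL, 0, 0) > 0)
            DispatchMessage(&msg);

        UnregisterDeviceNotification(note);
        return 0;
    }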

Related

How do I increase the speed of my USB cdc device?

I am upgrading the processor in an embedded system for work. This is all in C, with no OS. Part of that upgrade includes migrating the processor-PC communications interface from IEEE-488 to USB. I finally got the USB firmware written and have been testing it. It was going great until I tried to push through lots of data, only to discover my USB connection is slower than the old IEEE-488 connection. I have the USB device enumerating as a CDC device with a baud rate of 115200 bps, but it is clear that I am not even reaching that throughput. I thought that number was a dummy value that is a holdover from RS232 days, but I might be wrong. I control every aspect of this, from the front end on the PC to the firmware on the embedded system.
I am assuming my issue is how I write to the USB on the embedded system side. Right now my USB_Write function is run in free time, and is just a while loop that writes one char to the USB port until the write buffer is empty. Is there a more efficient way to do this?
One concern I have is that in the old system we had a board dedicated to communications. The CPU would just write data across a bus to this board, and it would handle the communications, which means the CPU didn't have to waste free time handling the actual communications but could offload them to a "co-processor" (not a CPU, but functionally the same here). Even with this concern, though, I figured I should be getting faster speeds, given that full-speed USB is on the order of MB/s while IEEE-488 is on the order of kB/s.
In short, is this more likely a fundamental system constraint or a software optimization issue?
I thought that number was a dummy value that is a holdover from RS232 days, but I might be wrong.
You are correct: the baud rate is a dummy value. If you were creating a CDC-to-RS232 adapter you would use it to configure your RS232 hardware; in this case it means nothing.
Is there a more efficient way to do this?
Absolutely! You should be writing chunks of data the same size as your USB endpoint for maximum transfer speed. Depending on the device you are using, your stream of single-byte writes may be gathered into a single packet before sending, but from my experience (and your results) this is unlikely.
Depending on your latency requirements, you can stick the data in a circular buffer and only issue it to the USB write routine when you have ENDPOINT_SZ bytes queued (see the sketch below). If this results in excessive latency, or your interface is not always communicating, you may want to implement Nagle's algorithm.
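A rough sketch of that buffering idea for the device side; ENDPOINT_SZ, the ring size, and USB_WritePacket() are placeholders for whatever your vendor stack actually provides, and the single-producer/single-consumer volatile ring is a simplification:

    /* Accumulate bytes in a ring buffer and send them in endpoint-sized
     * packets instead of one byte at a time. */
    #include <stdint.h>

    #define ENDPOINT_SZ 64u            /* full-speed CDC data endpoint size */
    #define RING_SZ     1024u          /* power of two, larger than ENDPOINT_SZ */

    extern void USB_WritePacket(const uint8_t *buf, uint32_t len);  /* hypothetical vendor call */

    static uint8_t  ring[RING_SZ];
    static volatile uint32_t head, tail;   /* head: producer, tail: consumer */

    /* Producer side: called wherever data is generated. */
    void enqueue_byte(uint8_t b)
    {
        uint32_t next = (head + 1u) & (RING_SZ - 1u);
        if (next != tail) {            /* drop data if the ring is full */
            ring[head] = b;
            head = next;
        }
    }

    /* Consumer side: called from "free time" instead of the byte-at-a-time loop.
     * Sends one full endpoint-sized packet per call when enough data is queued. */
    void usb_flush_task(void)
    {
        uint8_t packet[ENDPOINT_SZ];
        uint32_t avail = (head - tail) & (RING_SZ - 1u);

        if (avail < ENDPOINT_SZ)
            return;                    /* wait until a full packet is ready */

        for (uint32_t i = 0; i < ENDPOINT_SZ; i++) {
            packet[i] = ring[tail];
            tail = (tail + 1u) & (RING_SZ - 1u);
        }
        USB_WritePacket(packet, ENDPOINT_SZ);   /* one packet per transfer */
    }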
One concern I have is that in the old system we had a board dedicated to communications.
The NXP part you mentioned in the comments is without a doubt fast enough to saturate a USB full speed connection.
In short, is this more likely a fundamental system constraint or a software optimization issue?
I would consider this a software design issue rather than an optimisation one, but no, it is unlikely you are fundamentally stuck.
Do take care to figure out exactly what sort of USB connection you are using, though: with USB 1.1 you will be limited to 64 KB/s, and with USB 2.0 full speed to 512 KB/s. If you require higher throughput, you should migrate to using a separate bulk endpoint for the data transfer.
I would recommend reading through the USB Made Simple site to get a good overview of the various USB speeds and their capabilities.
One final issue: vendor CDC libraries are not always the best, and implementations of the CDC standard can vary. You can theoretically get more data through a CDC endpoint by using larger endpoints, but I have seen this bring host-side drivers to their knees; if you go this route, create a custom driver using bulk endpoints.
Try testing your device on multiple systems; you may find you get quite different results between Windows and Linux. This will help to point the finger at the host end.
And finally, make sure you are doing big buffered reads on the host side, USB will stop transferring data once the host side buffers are full.

How to unsafely remove a block device driver in Linux

I am writing a block device driver for Linux.
It is crucial to support unsafe removal (like a USB unplug). In other words, I want to be able to shut down the block device without creating memory leaks or crashes, even while applications hold open files or are performing IO on my device, or while it is mounted with a file system.
Unsafe removal may well corrupt the data stored on the device, but that is something the customers are willing to accept.
Here are the basic steps I have taken:
Upon unsafe removal, the block device spawns a "zombie" which automatically fails all new IO requests, ioctls, etc. The zombie substitutes the make_request function and changes other function pointers so the kernel no longer needs the original block device.
The block device waits for all IO which is currently running (and uses my internal resources) to complete.
It calls del_gendisk(); however, this does not really free the kernel resources because they are still in use.
The block device frees itself.
The zombie keeps track of the number of open() and close() calls on the block device, and when the last close() occurs it automatically frees itself.
Result: I am not leaking the block device, request queue, gendisk, etc.
However, this is a very complicated mechanism which requires a lot of code and is extremely prone to race conditions. I am still struggling with corner cases, per-CPU counting of IOs, and occasional crashes.
My question: is there a mechanism in the kernel which already does this? I have searched manuals, literature, and countless source code examples of block device drivers, RAM disks, and USB drivers, but could not find a solution. I am sure I am not the first one to encounter this problem.
Edit:
I learned from the answer below, by Dave S, about the hotplug mechanism, but it does not help me. I need a solution for how to safely shut down the driver, not how to notify the kernel that the driver was shut down.
Example of one problem:
blk_queue_make_request() registers a function through which my block device serves IO. In that function I increment per-CPU counters to track how many IOs are in flight on each CPU. However, there is a race: the function may have been entered but the counter not yet incremented, so my device thinks there are 0 IOs in flight, releases its resources, and then the IO arrives and crashes the system. As far as I understand, hotplug will not assist me with this problem.
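One common way to close that window is to count first and only then check a teardown flag, so any IO that entered before the flag flipped is already visible to the waiter. Below is a user-space C sketch of that ordering using C11 atomics; it is illustrative only and is not the kernel API (a real driver would use per-CPU counters or percpu_ref-style primitives and proper sleeping waits):

    /* Sketch of the "count first, then check a dying flag" ordering. */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <sched.h>

    atomic_long inflight;          /* IOs currently inside the driver        */
    atomic_bool dying;             /* set once teardown has started          */

    /* Call at the very top of the IO entry point (make_request analogue). */
    bool io_tryget(void)
    {
        atomic_fetch_add(&inflight, 1);        /* count ourselves first...      */
        if (atomic_load(&dying)) {             /* ...then check for teardown    */
            atomic_fetch_sub(&inflight, 1);
            return false;                      /* caller fails the IO (-ENODEV) */
        }
        return true;
    }

    void io_put(void)                          /* call when the IO completes    */
    {
        atomic_fetch_sub(&inflight, 1);
    }

    /* Teardown: flip the flag, then wait for the counter to drain. Any IO
     * that raced in before the flag was set is already in the counter. */
    void wait_for_io_drain(void)
    {
        atomic_store(&dying, true);
        while (atomic_load(&inflight) != 0)
            sched_yield();                     /* a driver would sleep/wait     */
    }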
About a decade ago I used hotplugging on a software driver project to safely add/remove an external USB disk drive which interfaced to an embedded Linux-driven set-top box.
For your project you will also need to write a hotplug handler. A hotplug is a program which the kernel uses to notify user-mode software when some significant (usually hardware-related) event takes place, for example when a USB device has just been plugged in or removed.
From Linux 2.6 kernel onwards, hotplugging has been integrated with the driver model core so that any bus or class can report hotplug events when devices are added or removed.
In the kernel tree, /usr/src/linux/Documentation/usb/hotplug.txt has basic information about USB Device Driver API support for hotplugging.
See also the following link, and search the web for further examples and documentation:
http://linux-hotplug.sourceforge.net/
Another very helpful document which discusses hotplugging with block devices can be found here:
https://www.kernel.org/doc/pending/hotplug.txt
This document also gives a good example illustrating hotplug event handling.
Below are the main hotplug event variables you should be aware of. Every hotplug event should provide at least the following:
ACTION
    The current hotplug action: "add" to add the device, "remove" to remove it. The 2.6.22 kernel can also generate "change", "online", "offline", and "move" actions.
DEVPATH
    Path under /sys at which this device's sysfs directory can be found.
SUBSYSTEM
    If this is "block", it's a block device. Any other subsystem is either a char device or does not have an associated device node.
The following variables are also provided for some devices:
MAJOR and MINOR
    If these are present, a device node can be created in /dev for this device. Some devices (such as network cards) don't generate a /dev node.
DRIVER
    If present, a suggested driver (module) for handling this device. No relation to whether or not a driver is currently handling the device.
INTERFACE and IFINDEX
    When SUBSYSTEM=net, these variables indicate the name of the interface and a unique integer for the interface. (Note that "INTERFACE=eth0" could be paired with "IFINDEX=2" because eth0 isn't guaranteed to come before lo and the count doesn't start at 0.)
FIRMWARE
    The system is requesting firmware for the device.
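For illustration, a minimal hotplug helper in C (the kind of program that /proc/sys/kernel/hotplug or a udev rule might invoke) only needs to read those environment variables; the program name and log messages below are made up:

    /* Minimal hotplug helper: reads ACTION, DEVPATH and SUBSYSTEM from the
     * environment and logs block device add/remove events to syslog. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <syslog.h>

    int main(void)
    {
        const char *action    = getenv("ACTION");
        const char *devpath   = getenv("DEVPATH");
        const char *subsystem = getenv("SUBSYSTEM");

        if (!action || !devpath || !subsystem)
            return 0;                       /* not called as a hotplug handler  */

        if (strcmp(subsystem, "block") != 0)
            return 0;                       /* only interested in block devices */

        openlog("myhotplug", LOG_PID, LOG_DAEMON);
        if (strcmp(action, "add") == 0)
            syslog(LOG_INFO, "block device added:   %s", devpath);
        else if (strcmp(action, "remove") == 0)
            syslog(LOG_INFO, "block device removed: %s", devpath);
        closelog();
        return 0;
    }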
If the driver creates a device node, it is possible to delete the device abruptly:
echo 1 > /sys/block/device-name/device/delete, where device-name may be sde, for example,
or
echo 1 > /sys/class/scsi_device/h:c:t:l/device/delete, where h is the HBA number, c is the channel on the HBA, t is the SCSI target ID, and l is the LUN.
In my case, this perfectly simulates scenarios of crashed writes and recovery of data from journaling.
Normally more steps are needed to safely remove a device, so deleting it this way is a pretty drastic event for the data and can be useful for testing :)
Please also consider these:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/online_storage_reconfiguration_guide/removing_devices
http://www.sysadminshare.com/2012/09/add-remove-single-disk-device-in-linux.html

How to output program logs to SD card without causing damage

I am working on an application that runs on a small Linux computer with an SD card for storage. The application runs automatically on startup and we want to be able to easily check the logs it produces. Normally I would just write to a file, since that also seems to be what most normal software does, but I am hesitant to do this because continuously writing logs to the SD card seems like a bad idea (wear).
The problem is that sometimes, when we want to check what is happening on the system, say for debugging purposes, we have to stop the application via SSH and then start it again so that we can see the output messages.
So my question is: is there a way to write logs to some kind of circular list that can then be viewed when connecting to the system over SSH? The application is written in C and C++, if that matters.
Is your application on a Raspberry Pi?
The Linux operating system, and everything else running on it, is probably writing so much to the SD card that your 500 KB/hour would be next to nothing in comparison.
I would personally just have the program log to the file.
If you really do not want this, you have a few other options:
Have the application send the logs over the internet to some service, which you can then monitor.
Have your application store the logs in an in-memory buffer, and then write to file when you reach some threshold (see the sketch below). Expose an endpoint on localhost which listens for a message and, when it receives one, writes the in-memory contents to the file. This lets you see the current in-memory logs without having to wait.
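Here is a rough C sketch of the buffering part of that second option; the line count, flush threshold, and log path are made-up values, and thread safety is ignored:

    /* Keep log lines in memory and flush them to the SD card in batches. */
    #include <stdio.h>
    #include <stdarg.h>

    #define LOG_LINES    256           /* lines kept in memory              */
    #define LOG_LINE_LEN 128
    #define FLUSH_EVERY  64            /* write to the SD card this often   */

    static char     lines[LOG_LINES][LOG_LINE_LEN];
    static unsigned next_line;         /* next slot to write (wraps around) */
    static unsigned unflushed;         /* lines not yet written to disk     */

    void app_log(const char *fmt, ...)
    {
        va_list ap;
        va_start(ap, fmt);
        vsnprintf(lines[next_line], LOG_LINE_LEN, fmt, ap);
        va_end(ap);

        next_line = (next_line + 1) % LOG_LINES;
        if (++unflushed >= FLUSH_EVERY) {
            FILE *f = fopen("/var/log/myapp.log", "a");
            if (f) {
                unsigned start = (next_line + LOG_LINES - unflushed) % LOG_LINES;
                for (unsigned i = 0; i < unflushed; i++)
                    fprintf(f, "%s\n", lines[(start + i) % LOG_LINES]);
                fclose(f);
            }
            unflushed = 0;             /* batching cuts the number of writes */
        }
    }

    int main(void)
    {
        for (int i = 0; i < 200; i++)
            app_log("event %d happened", i);    /* demo usage */
        return 0;
    }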
First of all, I think the SD card driver already takes care of writes and of scheduling I/O operations in a way that is better for the health of the SD card itself (through the virtual filesystem layer). You could also work on your log level to make sure you write only the necessary information and nothing more.
Based on the inputs shared, SD card wear-out might not happen easily; however, there are many ways to handle this scenario depending on the hardware and software architecture of your system:
Check whether you can write the logs to some other storage device within your system (depends on your architecture).
If external communication peripherals are available in your system, check whether logging can be done by redirecting the logs to remote servers or other devices.
Perform selective logging based on a log level as appropriate for your architecture or framework. You can also do only critical logging to the SD card and redirect the other logs elsewhere. This can reduce the number of writes.
Depending on your needs and architecture, check whether the data can be compressed before being logged. This can reduce the number of times the logs are written to the SD card.
To continue working and view the logs at the same time:
Depending on your architecture and needs, check whether you can write to a file periodically or based on a threshold, so that you can view the file regardless of what the application is doing.
Send selective logs to an external server or device.

Does the USB mass-storage class require re-enumeration after a timeout?

This might be a stupid question.
I was debugging a USB storage device on an ARM Cortex-M4 platform (STM32F4 series) which runs embedded Linux. The ARM is acting as the USB host and tries to communicate with a thumb drive at USB full speed (12 Mb/s).
Now here is the problem. After successful enumeration and several SCSI commands over bulk transfers, the capacity and everything else can be read correctly. However, after about 15 seconds, when I try to send these SCSI commands again (under the same conditions), the USB host controller just returns 'Transaction Error'. It looks as if the device no longer responds to bulk transfers (it does not ACK) and the host controller times out. The question is: is there any timeout mechanism in the USB mass-storage class or the SCSI layer such that, after a timeout, the system must be re-enumerated or re-probed, or else it won't respond anymore?
I understand this might be due to a stupid error in my program, or due to some limitation of the specific hardware. However, when I used the usbmon module in Linux on a PC to capture the transfers on the very same thumb drive, I could see that the operating system actually sends a probing command sequence (Read-Max-LUN followed by Test-Unit-Ready) every 5 seconds, which could be the reason why the thumb drive doesn't fail on my PC.
Thanks! I'm looking forward to any replies.
I think you're on the right track with the Test Unit Ready commands. I am in the middle of writing a mass storage device driver for an embedded device, and when testing on OS X, after the initial SCSI queries, my device receives a Test Unit Ready command about once every second when no other activity is occurring. Since your post is quite old, I recommend you post your own solution if you've since solved your problem.
Otherwise, try adding periodic Test Unit Ready commands from the host side when there is no other activity. You could set and re-arm a timer whenever USB activity occurs; if the timer fires, send a Test Unit Ready command. Rinse and repeat (see the sketch below).
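A sketch of that idle-keepalive idea on the host side; get_tick_ms() and scsi_test_unit_ready() stand in for whatever your host stack actually provides, and the one-second interval simply mirrors what OS X appeared to do:

    /* Send Test Unit Ready when the bus has been idle for a while. */
    #include <stdint.h>
    #include <stdbool.h>

    #define IDLE_KEEPALIVE_MS 1000u            /* roughly what OS X appears to use */

    extern uint32_t get_tick_ms(void);          /* hypothetical millisecond tick    */
    extern bool     scsi_test_unit_ready(void); /* hypothetical MSC/SCSI wrapper    */

    static uint32_t last_activity_ms;

    /* Call this after every successful bulk transfer to re-arm the idle timer. */
    void msc_note_activity(void)
    {
        last_activity_ms = get_tick_ms();
    }

    /* Call this from the main loop; sends Test Unit Ready when the bus is idle. */
    void msc_keepalive_poll(void)
    {
        if ((uint32_t)(get_tick_ms() - last_activity_ms) >= IDLE_KEEPALIVE_MS) {
            (void)scsi_test_unit_ready();       /* keep the drive responsive        */
            msc_note_activity();
        }
    }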

How are devices such as digicoders, VCRs, and DVD players made to show their own graphical user interfaces from power-on onwards?

I have C/Java knowledge, but I have never understood how some hardware shows its own screens/graphics from the power-on stage through to the user interface (it never shows a Linux/Unix boot screen, nor a Windows boot screen).
My question is: for VCR/TV digicoders, from power-on to the user interface, how is this done? Do they use a regular Linux kernel, or is there a special open-source framework which allows us to develop such a thing?
Thanks
Many embedded systems use u-boot as a boot loader. U-boot provides the ability to display a "splash" screen while the linux kernel is booting.
A device will start the bootloader right after the CPU comes out of reset (usually milliseconds after power-on at most). The bootloader code can initialize the display and show a splash screen if it wants (in the same way most modern non-embedded Linux distributions have a graphical grub splashscreen). The kernel can avoid changing the display configuration, and on an embedded device the kernel can boot pretty quickly to running userspace (at least an initramfs), which can take over the display and show whatever animation, progress bar, etc until the full UI is ready.
Operating systems such as Windows or Linux are large and general purpose. They have to initialise themselves and the hardware, which includes interrogating all connected devices for plug & play. The OS does not know in advance which devices are connected; it has to "discover" the hardware every time it starts. The connected hardware may even have changed since the last boot.
Embedded systems do not usually have large operating systems (or often do not have an operating system at all), and they usually have very specific hardware known to the system a priori, so do not need to test and determine the correct configuration for such devices. Often also these devices are far simpler, and are often 'on-chip' peripherals.
That said, your PC is capable of instantly displaying a user interface (just not Windows). The BIOS boot process outputs text to the display almost immediately, and the BIOS console is an interactive user interface that starts on request during boot. Also, the last time I booted MS-DOS on a modern PC, it took only a few seconds to start.
Not all embedded systems start "instant-on"; my digital TV PVR even has a progress bar while booting, but being application-specific, it still starts far faster than a general-purpose computer. My Network Attached Storage (NAS) device, which is an embedded system running Linux, on the other hand takes considerable time to boot, since among other things it has to start the file system, network, USB interfaces, print server, DLNA server, and web server; in fact, many of the things required for a general-purpose computer (though it has no display; the UI is presented via the web server).
Some embedded systems with large operating systems and complex hardware can achieve "instant-on" by never truly switching off, but rather going into a low power mode where the system state is retained in memory while all the high powered devices such as a screen, WiFi, Bluetooth etc. are switched off.
