SCSI read capacity (10) reports wrong LBA and sector size

I've been trying to test my AHCI driver for my hobby OS on bare metal. Before that, I tested my driver in QEMU with the parameters:
qemu-system-i386 -boot d -cdrom elfboot.iso -machine q35 -m 2G -hda hda.img -serial stdio
The output looks something like this:
...
PCI: 00:1F.2 [0106] (01): 8086:2922 Mass Storage Controller
...
AHCI: Initialize module...
...
AHCI: Found AHCI device with signature EB140101
AHCI: ahci2: sectors = 204, block size = 2048
DEV: Added block device ahci2
...
The PCI device is the AHCI controller which is used when starting QEMU with the above parameters. Now, I felt confident enough that the same setup would work on real hardware too, so I gave it a try. This is what was printed on the screen:
...
PCI: 00:17.0 [0106] (01): 8086:A102 Mass Storage Controller
...
AHCI: Initialize module...
...
AHCI: Found AHCI device with signature EB140101
AHCI: ahci1: sectors = 4068212736, block size = 0
DEV: Added block device ahci1
...
I also tested it on another machine, which doesn't have a plain AHCI controller, but a RAID controller running in AHCI mode.
On that machine, the output is basically the same as on the other bare-metal setup:
...
PCI: 00:17.0 [0104] (00): 8086:282A Mass Storage Controller
...
AHCI: Initialize module...
...
AHCI: Found AHCI device with signature EB140101
AHCI: ahci1: sectors = 4068212736, block size = 0
DEV: Added block device ahci1
...
According to the SCSI specification, the value 4068212736 (0xF27C0000) has no defined meaning. Also, the block size is reported as zero, which does not match what I see in QEMU. Only the output shown in the QEMU console is correct; on both of my real machines the reported capacity is wrong, with the same values.
What am I supposed to do next? Is this a vendor-related issue? It seems that the READ CAPACITY (10) command is mandatory for all SCSI devices of this type, so in my inexperienced opinion, I don't think that is the problem here.
EDIT: I issued a SCSI REQUEST SENSE after the READ CAPACITY (10) command, since it was stated that a REQUEST SENSE should be issued after every SCSI command to deal with device stalls.
Now the reported number of blocks is 65263, which is still wrong, but also different from 4068212736, so I'm not sure if I'm moving in the right direction. Also, the block size is now correctly reported as 2048 bytes.

The READ CAPACITY (10) response is two big-endian (network byte order) unsigned 32-bit integers: the last LBA followed by the block size. The problems you're having look like they're related to byte ordering. I think it's likely that the highest LBA is really 0x00007CF2. The block size shows as 0 because it sits at the other end of the byte-swapped integer and was truncated.
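To make the fix concrete, here is a minimal sketch of parsing the 8-byte READ CAPACITY (10) data-in buffer byte by byte, so the result is independent of host endianness (the function and variable names are made up for the example):

#include <stdint.h>

/* Parse the 8-byte READ CAPACITY (10) response explicitly, byte by byte,
 * so the result is correct regardless of host endianness. "resp" points
 * at the raw data-in buffer returned by the command. */
static void parse_read_capacity10(const uint8_t *resp,
                                  uint32_t *last_lba, uint32_t *block_size)
{
    *last_lba   = ((uint32_t)resp[0] << 24) | ((uint32_t)resp[1] << 16) |
                  ((uint32_t)resp[2] << 8)  |  (uint32_t)resp[3];
    *block_size = ((uint32_t)resp[4] << 24) | ((uint32_t)resp[5] << 16) |
                  ((uint32_t)resp[6] << 8)  |  (uint32_t)resp[7];
}

Note that reading those same eight bytes as native little-endian 32-bit integers on x86 would turn 00 00 7C F2 into exactly 0xF27C0000 (4068212736), which matches the bogus value in your log.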

Related

Handling PCI read/write to configuration space in a QEMU device

I'm working on implementing a simple PCI device in QEMU and a kernel driver for it, and I have some trouble handling pci_read/write_config_* function calls on the device side.
Unlike simple read/write operations on a memory-mapped BAR, where the MemoryRegionOps callbacks receive the exact offset used by the driver, the config_read/write callbacks implemented as members of the PCIDevice struct receive an address that has gone through some manipulation/mapping that I have a hard time understanding.
Following the code path up to pci_config_host_read/write in the QEMU sources, and likewise on the kernel side for the pci_read/write_config_* functions, didn't provide any clear answers.
Can anyone help me understand how to extract the config offset used by the driver when it calls the PCI config read/write functions?
If you set your PCI device model up to implement the QEMU PCIDevice config_read and config_write methods, the addresses passed to them should be the offsets into PCI config space (i.e. starting with the standard 0 == PCI_VENDOR_ID, 2 == PCI_DEVICE_ID, 4 == PCI_COMMAND, and so on, with any device-specific registers after the 64 bytes of standardized config space).
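For concreteness, here is a minimal sketch of wiring those callbacks up in a device model's class_init; the mydev_* names are placeholders, and the exact include paths and class members can vary between QEMU versions:

#include "qemu/osdep.h"
#include "hw/pci/pci.h"

static uint32_t mydev_config_read(PCIDevice *d, uint32_t address, int len)
{
    /* "address" is the plain byte offset into config space:
     * 0 == PCI_VENDOR_ID, 2 == PCI_DEVICE_ID, 4 == PCI_COMMAND, ... */
    return pci_default_read_config(d, address, len);
}

static void mydev_config_write(PCIDevice *d, uint32_t address,
                               uint32_t val, int len)
{
    /* Intercept interesting offsets here, then defer to the default. */
    pci_default_write_config(d, address, val, len);
}

static void mydev_class_init(ObjectClass *klass, void *data)
{
    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);

    k->config_read  = mydev_config_read;
    k->config_write = mydev_config_write;
}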

How is the major number allocated for a platform device driver?

I wonder how the major number is allocated for a platform device driver.
For example, in the driver code, I don't see any function calls like
alloc_chrdev_region()
or
register_chrdev_region()
Could somebody please help me understand this?
Thank you.
The kernel creates a great many devices attached to various virtual buses (which may or may not represent a physical one). Only some of those devices can be meaningfully accessed directly from user space, and only a subset of those rely on the "device node" interface to do so (as ample other options exist in modern kernels). If a driver doesn't use this particular interface, then there's no need whatsoever to allocate device node numbers.
Inside the kernel, devices are located by their affiliation to particular buses (using internal device names and bus IDs). For example, the mcspi driver registers as a "device" on the "platform bus" and as a "bus master" on the "spi bus". Upon seeing that a bus master has registered, the spi subsystem will trigger a "bus rescan" on the newly connected bus.
The spidev driver is rigged in such a way as to always "match" an imaginary device present on every spi bus, so it will get instantiated for every "bus master" registration. It will create the user-space device node, which can be used for direct communication with its "bus master" (the spi bus controller, mcspi in this particular case).
The controller itself doesn't need to be exposed; hence, no device numbers.
On the other hand, SPI devices do require a MAJOR/MINOR number, defined in spidev.c, which is where the device gets registered. At the top of that same file, there's a macro for the major number defined as:
#define SPIDEV_MAJOR 153 /* assigned */
#define N_SPI_MINORS 32 /* ... up to 256 */
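For illustration, here is a hedged sketch of how a driver with such a statically assigned major number typically registers its char device; the mydev names are hypothetical, not copied from spidev.c:

#include <linux/fs.h>
#include <linux/init.h>
#include <linux/module.h>

#define MYDEV_MAJOR 153  /* statically assigned, as with SPIDEV_MAJOR */

static const struct file_operations mydev_fops = {
    .owner = THIS_MODULE,
    /* .open, .read, .write, ... would go here */
};

static int __init mydev_init(void)
{
    /* With a fixed major there is nothing to allocate dynamically,
     * which is why no alloc_chrdev_region() call appears. */
    return register_chrdev(MYDEV_MAJOR, "mydev", &mydev_fops);
}

static void __exit mydev_exit(void)
{
    unregister_chrdev(MYDEV_MAJOR, "mydev");
}

module_init(mydev_init);
module_exit(mydev_exit);
MODULE_LICENSE("GPL");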

are ALSA hw_params buffer sizes the physical card memory size?

I am trying to come up to speed on the ALSA API and have some questions regarding this extensive API. I am using the "hw" interface not the "plughw" interface.
Question #1: Does the snd_pcm_hw_params_any() routine retrieve the default/current parameters?
Some documentation and example code indicate that this routine fills in the allocated snd_pcm_hw_params_t struct with some values. Are these the currently configured settings for the card:device?
The reason I am confused is that, if that routine retrieves the hw_params values, then I should be able to use any of the snd_pcm_hw_params_get_* routines to actually read those values. This works for snd_pcm_hw_params_get_channels(), but a call to either snd_pcm_hw_params_get_format() or snd_pcm_hw_params_get_buffer_size() will fail. If I first set the format or buffer size using the set routines, I can then call the get routines to retrieve the values without error. Another way to express this question: why do snd_pcm_hw_params_get_format() and snd_pcm_hw_params_get_buffer_size() fail when I use this sequence of calls:
snd_pcm_hw_params_alloca(&params);
snd_pcm_hw_params_any(pcm, params);
snd_pcm_hw_params_get_channels(params, &channels);
snd_pcm_hw_params_get_format(params, &format);
snd_pcm_hw_params_get_buffer_size(params, &frames);
Question #2: How do I determine the actual size of the physical memory on the sound card?
I have noticed that no matter what size I pass to snd_pcm_hw_params_set_buffer_size/min/max(), a subsequent call to snd_pcm_hw_params_get_buffer_size() returns the same size. Again, I am using the "hw" interface and not the "plughw" interface, so it seems reasonable that I cannot set the buffer size because it is actually the amount of physical memory on the card.
Since snd_pcm_hw_params_get_buffer_size() returns the buffer size in frames, the actual size of available physical capture memory on the sound card would be that frame count times the number of channels times the sample word length. For example, for:
int nNumberOfChannels = 2;
snd_pcm_format_t tFormat = SND_PCM_FORMAT_S16; // 2 bytes per sample
snd_pcm_uframes_t tFrames = 16384; // variable name added for clarity
then the actual physical memory on the card would be: 2 * 2 * 16384 = 65536 bytes of physical memory available on the card.
Is this correct or am I confused here?
Thanks,
-Andres
The buffer size returned by snd_pcm_hw_params_get_buffer_size() is not (and never was) the size of the memory resident on the 'soundcard'. In the case of local audio interfaces (i.e. not serial-bus-attached devices such as USB or FireWire), this is the size of a buffer in the system's main memory to or from which DMA of audio samples periodically takes place.
For USB and FireWire audio, the buffer this refers to is an entirely software concept, although it might be DMA'd from in the case of FireWire. Don't expect to be able to set the buffer size below the minimum isochronous transfer unit size.
I don't think you get any guarantees from ALSA that you can actually change these parameters; it rather depends on the constraints of the hardware in the DMA controller servicing the audio interface, and on its device driver. Arrangements where the buffer size is a power of two are not unusual because they're far easier to implement in hardware.
You should take great care to check the return values from snd_pcm_hw_* API calls and not assume that you got what you requested.
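As a minimal sketch of that advice (assuming the device name "hw:0,0"; the point is the error checking, not the particular parameters):

#include <alsa/asoundlib.h>
#include <stdio.h>

int main(void)
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *params;
    snd_pcm_uframes_t frames = 16384;  /* requested buffer size, in frames */
    int err;

    if ((err = snd_pcm_open(&pcm, "hw:0,0", SND_PCM_STREAM_CAPTURE, 0)) < 0) {
        fprintf(stderr, "open failed: %s\n", snd_strerror(err));
        return 1;
    }
    snd_pcm_hw_params_alloca(&params);
    if ((err = snd_pcm_hw_params_any(pcm, params)) < 0)
        fprintf(stderr, "hw_params_any failed: %s\n", snd_strerror(err));

    /* Ask for a size "near" the request, then see what was actually granted. */
    if ((err = snd_pcm_hw_params_set_buffer_size_near(pcm, params, &frames)) < 0)
        fprintf(stderr, "set_buffer_size_near failed: %s\n", snd_strerror(err));
    else
        printf("driver granted %lu frames\n", (unsigned long)frames);

    snd_pcm_close(pcm);
    return 0;
}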

How do I obtain PCI Region size in Windows?

I needed to scan my PCI bus and obtain information for specific devices from specific vendors.
My goal is to find the PCI region size for an AMD graphics card, in order to map that card's PCI memory into user space so I can do I2C transfers and read various sensors.
For scanning the PCI bus I downloaded and compiled pciutils 3.1.7 for Windows x64 around a year ago. It supposedly uses DirectIO.
This is my code.
int scan_pci_bus()
{
    struct pci_access *pci;
    struct pci_dev *dev;
    int i;

    pci = pci_alloc();
    pci_init(pci);
    pci_scan_bus(pci);
    for (dev = pci->devices; dev; dev = dev->next)
    {
        pci_fill_info(dev, PCI_FILL_IDENT | PCI_FILL_CLASS | PCI_FILL_IRQ |
                           PCI_FILL_BASES | PCI_FILL_ROM_BASE | PCI_FILL_SIZES |
                           PCI_FILL_PHYS_SLOT);
        if (dev->vendor_id == 0x1002 && dev->device_id == 0x6899)
        {
            /* Vendor is AMD, device ID is an AMD HD5850 GPU */
            for (i = 0; i < 6; i++)
            {
                printf("Region Size %d %x ID %x\n",
                       dev->size[i], dev->base_addr[i], dev->device_id);
            }
        }
    }
    pci_cleanup(pci);
    return 0;
}
As you can see in my printf line, I try to print some data. I successfully print device_id and base_addr; however, size, which should contain the PCI region size for this device, is always 0. I expected at least one iteration of the loop to display a size > 0.
My code is based on a Linux application which uses the same code, though it uses the pci.h headers that come with Linux (pciutils apparently has the same APIs).
Apparently, Windows (Windows 7 x64 in my case) does not show this information, or at the very least it is not exposed to pciutils.
How do you propose I obtain this information? If there are alternatives to pciutils for Windows that provide this information, I'd be glad to get a link to them.
EDIT: I have still found no solution. If there is a solution to my problem that also works on 32-bit Windows, it would be deeply appreciated.
How this works is pretty complicated. PCI devices use Base Address Registers to let the BIOS and Operating System decide where to locate their memory regions. Each PCI device is allowed to specify several memory or IO regions it wants, and lets the BIOS/OS decide where to put it. Complicating matters, there's only one register that is used both to specify the size AND the address. How does this work?
When the card is first powered up, its 32-bit address register will have something like 0xFFFF0000 in it. Any binary 1 means "the OS can change this"; any binary 0 means "must stay zero". So this is telling the OS that any of the top 16 bits can be set to whatever the OS wants, but the bottom 16 bits have to stay zero. Which also means that this memory region takes up 16 bits of address space, or 64K. Because of this, memory regions have to be aligned to their size: if a card wants 64K of address space, the OS can only put it on memory addresses that are a multiple of 64K. When the OS has decided where it wants to locate this card's 64K memory space, it writes it back into this register, overwriting the initial 0xFFFF0000 that was in there.
In other words, the card tells the OS what size/alignment it needs for the memory, then the OS overwrites that same register/variable with the address for the memory. Once it's done this, you can't get the size back out of the register without resetting the address.
This means there's no portable way of asking a card how big its region is, all you can ask it is WHERE the region is.
So why does this work in Linux? Because it's asking the kernel for this information. The kernel has an API to provide this stuff, the same way that lspci works. I am not a Windows expert, but I'm unaware of any way for an application to ask the Windows kernel this information. There may be an API to do this somehow, or you may need to write something that runs on the kernel side to pass this information back to you. If you look in the libpci source, for Windows it calls the "generic" version of pci_fill_info(), which returns:
return flags & ~PCI_FILL_SIZES;
which basically means "I'm returning everything you asked for except the sizes."
BUT, this may not matter anyway. If all you're doing is wanting to read/write the I2C registers, they're usually (always?) in the first 4K of the control/configuration region. You can probably just map 4K (one page) and ignore the fact that there might be more. Also be warned that you may need to take additional steps to stop the real driver for this card from reading/writing while you are. If you're bit-banging the I2C bus manually and the driver tries to do so at the same time, it's likely to cause a mess on the bus.
There also may be an existing way to ask the radeon driver to do I2C requests for you, which might avoid all of this.
(also note I'm simplifying and glossing over a lot of details with how the BARs work, including 64 bit addresses, I/O space, etc, read PCI documentation if you want to learn more)
Well, whamma gave a very good answer, but there's one thing he was wrong about, which is region sizes. Region sizes are pretty easy to find; here I will show two ways: the first by deciphering the size from the address of the BAR, the second through the Windows user interface.
Let's assume that E2000000 is the address of the Base Address Register. If we convert that to binary we get:
11100010000000000000000000000000
There are 32 bits here in total; you can count them if you must. If you are not familiar with how the bits in a BAR are laid out, look here -> http://wiki.osdev.org/PCI , specifically "Base Address Registers" and more specifically the image that reads "Memory Space BAR Layout". Now let's start reading the bits from the right end to the left end, using the image in the link above as a guide.
The first bit (bit 0) starting from the right is 0, indicating that this is a memory BAR.
Bits 1-2 are 0, indicating that it's a 32-bit (note: this is not the size) memory BAR.
Bit 3 is 0, indicating that it's not prefetchable memory.
Bits 4-31 represent the address.
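The same decoding in a few lines of C, using the example value above:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t bar = 0xE2000000;                        /* example BAR value */

    printf("I/O BAR?      %u\n", bar & 0x1u);         /* bit 0: 0 = memory BAR */
    printf("type:         %u\n", (bar >> 1) & 0x3u);  /* bits 1-2: 0 = 32-bit */
    printf("prefetchable: %u\n", (bar >> 3) & 0x1u);  /* bit 3 */
    printf("base address: 0x%08X\n", bar & ~0xFu);    /* bits 4-31 */
    return 0;
}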
The page documents the PCI-approved process:
To determine the amount of address space needed by a PCI device, you must save the original value of the BAR, write a value of all 1's to the register, then read it back. The amount of memory can then be determined by masking the information bits, performing a bitwise NOT ('~' in C), and incrementing the value by 1. The original value of the BAR should then be restored. The BAR register is naturally aligned and as such you can only modify the bits that are set.
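A sketch of that procedure for a 32-bit memory BAR; pci_cfg_read32()/pci_cfg_write32() are hypothetical config-space accessors, to be replaced by whatever your environment provides (port 0xCF8/0xCFC I/O, an OS API, etc.):

#include <stdint.h>

/* Hypothetical config-space accessors; substitute your own. */
uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off);
void     pci_cfg_write32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off,
                         uint32_t val);

uint32_t bar_region_size(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t bar_off)
{
    uint32_t orig = pci_cfg_read32(bus, dev, fn, bar_off);  /* save original */
    pci_cfg_write32(bus, dev, fn, bar_off, 0xFFFFFFFFu);    /* write all 1's */
    uint32_t readback = pci_cfg_read32(bus, dev, fn, bar_off);
    pci_cfg_write32(bus, dev, fn, bar_off, orig);           /* restore */

    readback &= ~0xFu;      /* mask off the information bits (memory BAR) */
    return ~readback + 1;   /* bitwise NOT, then increment by 1 */
}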
The other way is using Device Manager:
Start -> "Device Manager" -> Display Adapters -> right-click your video card -> Properties -> Resources. Each resource marked "Memory Range" should be a memory BAR, and as you can see it says [start address] to [end address]. For example, say it reads [00000000E2000000 - 00000000E2FFFFFF]; to get the size you subtract [start address] from [end address] and add 1:
00000000E2FFFFFF - 00000000E2000000 + 1 = 1000000 hex = 16777216 bytes = 16 MB.

booting from a disk/cd/usb

How can I boot my small console from a disk/CD/USB, with the following configuration:
The media that I want to use will be completely raw, i.e. no filesystem on it.
When I insert the media into my system (or assume it's already inserted), I want it to boot my own small OS.
The procedure I want is that when my system starts, it boots from the disk/CD/USB and starts my OS. For now, imagine that the OS is going to be a hello-world program. I actually want to see how real-world OSes implement this.
A boot sector must be 512 bytes. No less, no more.
And it must end with the standard PC boot signature: 0xAA55.
Also note that a PC boots in 16-bit mode.
You need that code to load your kernel or second-stage bootloader into memory, and then jump to it (and maybe switch the CPU to 32-bit protected mode).
For instance (nasm):
BITS 16
; Your assembly code here (510 bytes max)...
jmp $
; Fills the remaining space with 0
times 510 - ( $ - $$ ) db 0
; Boot signature
dw 0xAA55
That's the job of a boot loader. The boot loader lives in the first 512 bytes of a hard disk; this location is called the MBR (Master Boot Record).
When the BIOS boots, it checks whether the media contains an MBR by verifying the MBR signature 0xAA55, which must be present as the last 2 bytes of the MBR.
The BIOS then loads the boot loader into RAM at address 0x7C00.
The boot loader is then the one that actually loads the kernel into memory, by reading the filesystem.
Usually you cannot fit all the code in 512 bytes, so there will be a secondary boot loader, which is loaded by your primary boot loader.
The secondary boot loader sets up the IDT and GDT (Interrupt Descriptor Table and Global Descriptor Table) and enables the A20 gate to move into protected mode.
The secondary boot loader then loads the 32-bit kernel from disk into memory and jumps to the kernel code.
For more information you can download Linux kernel v0.01 (the first version) and look at how it is done there. To my surprise, the code to read the filesystem plus the code to move into protected mode fits into 512 bytes.
