I wonder about LBA and cluster numbers.
My questions are these:
Is LBA 0 always cluster 2?
And what are clusters 0 and 1 used for?
Is the only difference between a cluster number and an LBA where they start on the disk?
What is the relation among CHS, LBA, and cluster numbers?
And in the following code, what is add ax, WORD [datasector] for?
;************************************************;
; Convert cluster number to LBA
; LBA = (cluster - 2) * sectors per cluster + data area start
;************************************************;
ClusterLBA:
sub ax, 0x0002 ; zero base cluster number
xor cx, cx
mov cl, BYTE [bpbSectorsPerCluster] ; convert byte to word
mul cx
add ax, WORD [datasector] ; base data sector
ret
There are many sector numbering schemes on disk drives. One of the earliest was CHS (Cylinder-Head-Sector): a sector is selected by specifying the cylinder (track), read/write head, and sector-within-track triplet. This numbering scheme depends on the actual physical characteristics of the disk drive.
The first logical sector resides on cylinder 0, head 0, sector 1. The second is sector 2, and so on. If there are no more sectors on the track (e.g. a 1.44 MB floppy disk has 18 sectors per track), the next head is used, starting again at sector 1, and so on.
You can convert CHS addresses to an absolute (or logical) sector number with a little math:
L = (C * Nh + H) * Ns + S - 1
where C, H and S are the cylinder, head and sector numbers according to CHS addressing, while Nh and Ns are the number of heads and the number of sectors per track (cylinder), respectively. The reverse calculation (converting an LBA back to CHS) is just as simple.
In this numbering scheme, which is called LBA (Logical Block Addressing), each sector can be identified by a single number. The first logical sector is LBA 0, the second is LBA 1, and so on. This scheme is linear and easier to deal with.
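As an illustration, the two conversions might look like this in C (a minimal sketch; the geometry constants are assumptions for a 1.44 MB floppy):

#define NH 2   /* number of heads */
#define NS 18  /* sectors per track */

/* CHS -> LBA: L = (C * Nh + H) * Ns + S - 1 */
unsigned long chs_to_lba(unsigned c, unsigned h, unsigned s)
{
    return ((unsigned long)c * NH + h) * NS + s - 1;
}

/* LBA -> CHS: the reverse calculation */
void lba_to_chs(unsigned long l, unsigned *c, unsigned *h, unsigned *s)
{
    *c = l / (NH * NS);
    *h = (l / NS) % NH;
    *s = (l % NS) + 1;  /* sector numbers are 1-based */
}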
Clusters are simply groups of contiguous sectors on the disk, which are treated as a unit by the operating system and the file system, in order to reduce disk fragmentation and the disk space needed for file system metadata (e.g. to describe in which sectors a specific file can be found on the disk). A cluster may consist of just 1 sector (512 bytes), up to 128 sectors (64 kilobytes) or more, depending on the capacity of the disk.
Again, the logical sector number of the first sector of a cluster can be easily calculated:
L = ((U - Sc) * Nc) + Sd
where U is the cluster number, Nc is the number of sectors in a cluster, Sc is the first valid cluster number, and Sd is the number of the first logical sector available for generic file data. The latter two parameters (Sc and Sd) are completely filesystem and operating system specific values.
Some filesystems (for example FAT16, and the whole FAT family) reserve cluster numbers 0 and 1 for special purposes, which is why the first actual cluster is cluster number 2 (Sc = 2 in this case). Similarly, there may be some reserved sectors at the beginning of the disk, where no data may be written or read. This reserved area can range from a few sectors (e.g. a boot record) to millions of sectors (e.g. a completely different partition which precedes our partition on the hard disk).
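In C, the cluster-to-LBA formula could be sketched like this (with Sc = 2 as in FAT; the data_start parameter stands in for the filesystem-specific Sd):

/* L = ((U - Sc) * Nc) + Sd, with Sc = 2 as in FAT */
unsigned long cluster_to_lba(unsigned long cluster,        /* U  */
                             unsigned sectors_per_cluster, /* Nc */
                             unsigned long data_start)     /* Sd */
{
    return (cluster - 2) * sectors_per_cluster + data_start;
}

This mirrors the assembly routine in the question, with data_start playing the role of [datasector].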
Huh, this was the long answer. After all, the short answers to your questions can be summarized as follows:
No, LBA 0 is not always cluster 2; it's filesystem specific (in the case of FAT, cluster 2 is the first available data cluster, but it does not always start at LBA 0 - see answer 5).
The interpretation of cluster numbers 0 and 1 is also filesystem specific (in the case of FAT, cluster number 0 represents an empty cluster in the File Allocation Table, and cluster number 1 is reserved).
No, the main difference is that a cluster number addresses a group of contiguous sectors, while an LBA addresses a single sector on the disk.
See the formulas (formulae?), and the accompanying description in the long answer above.
It's hard to tell from such a short piece of assembly code, but my best guess would be the number of reserved sectors at the beginning of the partition (denoted by Sd in the formula above).
Related
I'm working on an embedded project on an ARM MCU that has a custom linker file with several different memory spaces:
/* Memory Spaces Definitions */
MEMORY
{
rom (rx) : ORIGIN = 0x00400000, LENGTH = 0x00200000
data_tcm (rw) : ORIGIN = 0x20000000, LENGTH = 0x00008000
prog_tcm (rwx) : ORIGIN = 0x00000000, LENGTH = 0x00008000
ram (rwx) : ORIGIN = 0x20400000, LENGTH = 0x00050000
sdram (rw) : ORIGIN = 0x70000000, LENGTH = 0x00200000
}
Specifically, I have a number of different memory devices with different characteristics (TCM, plain RAM (with a D-Cache in the way), and an external SDRAM), all mapped as part of the same address space.
I'm specifically placing different variables in the different memory spaces, depending on the requirements (am I DMA'ing into it, do I have cache-coherence issues, do I expect to overflow the D-cache, etc.).
If I exceed any one of the sections, I get a linker error. However, unless I do so, the linker only prints the memory usage as bulk percentage:
Program Memory Usage : 33608 bytes 1.6 % Full
Data Memory Usage : 2267792 bytes 91.1 % Full
Given that I have 3 actively used memory spaces, and I know for a fact that I'm using 100% of one of them (the SDRAM), it's kind of a useless output.
Is there any way to make the linker output the percentage of use for each memory space individually? Right now, I have to manually open the .map file, search for the section header, and then manually subtract the size from the total available memory specified in the .ld file.
While this is kind of a minor thing, it'd sure be nice to just have the linker do:
Program Memory Usage : 33608 bytes 1.6 % Full
Data Memory Usage : 2267792 bytes 91.1 % Full
data_tcm : xxx bytes xx % Full
ram : xxx bytes xx % Full
sdram : xxx bytes xx % Full
This is with GCC-ARM, and therefore GCC-LD.
Arrrgh, so of course, I find the answer right after asking the question:
--print-memory-usage
Used as -Wl,--print-memory-usage, you get the following:
Memory region Used Size Region Size %age Used
rom: 31284 B 2 MB 1.49%
data_tcm: 26224 B 32 KB 80.03%
prog_tcm: 0 GB 32 KB 0.00%
ram: 146744 B 320 KB 44.78%
sdram: 2 MB 2 MB 100.00%
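For reference, since the flag is passed through the compiler driver to the linker, an invocation might look like this (the file names here are placeholders):

arm-none-eabi-gcc main.o -T custom.ld -Wl,--print-memory-usage -o firmware.elf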
I am trying to read a FAT16 file system to gain information about it, like the number of sectors, clusters, bytes per sector, etc.
I am trying to read it like this:
FILE *floppy;
unsigned char bootDisk[512];
floppy = fopen(name, "rb"); /* "rb": read the image in binary mode */
fread(bootDisk, 1, 512, floppy); /* the first 512 bytes are the boot sector */
int i;
for (i = 0; i < 80; i++){
    printf("%u,", bootDisk[i]);
}
and it outputs this:
235,60,144,109,107,100,111,115,102,115,0,0,2,1,1,0,2,224,0,64,11,240,9,0,18,0,2,0,0,0,0,0,0,0,0,0,0,0,41,140,41,7,68,32,32,32,32,32,32,32,32,32,32,32,70,65,84,49,50,32,32,32,14,31,190,91,124,172,34,192,116,11,86,180,14,187,7,0,205,16,
What do these numbers represent and what type are they? Bytes?
You are not reading the values properly; most of them are longer than 1 byte. From the spec you can obtain the length and meaning of every attribute in the boot sector:
Offset  Size (bytes)  Description
0000h   3             Code to jump to the bootstrap code.
0003h   8             OEM ID - Name of the formatting OS.
000Bh   2             Bytes per Sector - Usually 512 bytes per sector.
000Dh   1             Sectors per Cluster.
000Eh   2             Reserved sectors from the start of the volume.
0010h   1             Number of FAT copies - Usually 2 copies are used to prevent data loss.
0011h   2             Number of possible root entries - 512 entries are recommended.
0013h   2             Small number of sectors - Used when the volume size is less than 32 MB.
0015h   1             Media Descriptor.
0016h   2             Sectors per FAT.
0018h   2             Sectors per Track.
001Ah   2             Number of Heads.
001Ch   4             Hidden Sectors.
0020h   4             Large number of sectors - Used when the volume size is greater than 32 MB.
0024h   1             Drive Number - Used by some bootstrap code, e.g. MS-DOS.
0025h   1             Reserved - Used by Windows NT to decide whether it shall check disk integrity.
0026h   1             Extended Boot Signature - Indicates that the next three fields are available.
0027h   4             Volume Serial Number.
002Bh   11            Volume Label - Should be the same as in the root directory.
0036h   8             File System Type - The string should be 'FAT16   '.
003Eh   448           Bootstrap code - May shrink in the future.
01FEh   2             Boot sector signature - This is the AA55h signature.
You should probably use a custom struct to read the boot sector.
Like:
typedef struct {
    unsigned char  jmp[3];              /* 0000h: jump to bootstrap code */
    char           oem[8];              /* 0003h: OEM ID */
    unsigned short sector_size;         /* 000Bh: bytes per sector */
    unsigned char  sectors_per_cluster; /* 000Dh: sectors per cluster */
    unsigned short reserved_sectors;    /* 000Eh: reserved sectors */
    unsigned char  number_of_fats;      /* 0010h: number of FAT copies */
    unsigned short root_dir_entries;    /* 0011h: possible root entries */
    [...]
} my_boot_sector;
Keep in mind your endianness and padding rules in your implementation. This struct is an example only.
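Alternatively, you can sidestep struct packing entirely and extract the multi-byte fields straight from the buffer; a minimal sketch, using the bootDisk array from the question and the offsets from the table above (FAT stores multi-byte values little-endian):

unsigned short bytes_per_sector    = bootDisk[11] | (bootDisk[12] << 8);
unsigned char  sectors_per_cluster = bootDisk[13];
unsigned short reserved_sectors    = bootDisk[14] | (bootDisk[15] << 8);
unsigned char  number_of_fats      = bootDisk[16];

With the output in the question, bytes 11 and 12 are 0 and 2, so bytes_per_sector = 0 | (2 << 8) = 512.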
If you need more details this is a thorough example.
I'm using ALSA for an audio application on Linux. I found great docs explaining how to use it: 1 and this one. Although I have some issues understanding this part of the setup:
/* Set number of periods. Periods used to be called fragments. */
if (snd_pcm_hw_params_set_periods(pcm_handle, hwparams, periods, 0) < 0) {
fprintf(stderr, "Error setting periods.\n");
return(-1);
}
What does it mean to set a number of periods when I'm using PLAYBACK mode?
And:
/* Set buffer size (in frames). The resulting latency is given by    */
/* latency = periodsize * periods / (rate * bytes_per_frame)         */
/* (periodsize is in bytes here, so >>2 converts bytes to frames for */
/* 16-bit stereo data, i.e. 4 bytes per frame)                       */
if (snd_pcm_hw_params_set_buffer_size(pcm_handle, hwparams, (periodsize * periods)>>2) < 0) {
  fprintf(stderr, "Error setting buffersize.\n");
  return(-1);
}
And the same question here about the latency: how should I understand it?
I assume you've read and understood this section of linux-journal. You may also find that this blog clarifies things with respect to period size selection (or fragment size, in the blog's terminology) in the context of ALSA. To quote:
You shouldn't misuse the fragments logic of sound devices. It's like
this:
The latency is defined by the buffer size.
The wakeup interval is defined by the fragment size.
The buffer fill level will oscillate between 'full buffer' and 'full
buffer minus 1x fragment size minus OS scheduling latency'. Setting
smaller fragment sizes will increase the CPU load and decrease battery
time since you force the CPU to wake up more often. OTOH it increases
drop out safety, since you fill up playback buffer earlier. Choosing
the fragment size is hence something which you should do balancing out
your needs between power consumption and drop-out safety. With modern
processors and a good OS scheduler like the Linux one setting the
fragment size to anything other than half the buffer size does not
make much sense.
...
(Oh, ALSA uses the term 'period' for what I call 'fragment'
above. It's synonymous)
So essentially, you would typically set periods to 2 (as was done in the howto you referenced). Then periodsize * periods is your total buffer size in bytes. Finally, the latency is the delay induced by buffering that many samples, and can be computed by dividing the buffer size by the rate at which samples are played back (i.e. according to the formula latency = periodsize * periods / (rate * bytes_per_frame) in the code comments).
For example, the parameters from the howto:
period = 2
periodsize = 8192 bytes
rate = 44100Hz
16 bits stereo data (4 bytes per frame)
correspond to a total buffer size of periods * periodsize = 2 * 8192 = 16384 bytes, and a latency of 16384 / (44100 * 4) ≈ 0.093 seconds.
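As a sanity check, the same arithmetic in C (values taken from the howto's example above):

#include <stdio.h>

int main(void)
{
    int periods = 2;
    int periodsize = 8192;    /* bytes */
    int rate = 44100;         /* Hz */
    int bytes_per_frame = 4;  /* 16-bit stereo */

    int buffer_bytes = periods * periodsize;
    double latency = (double)buffer_bytes / (rate * bytes_per_frame);

    printf("buffer = %d bytes, latency = %.3f s\n", buffer_bytes, latency);
    return 0;
}

This prints buffer = 16384 bytes, latency = 0.093 s.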
Note also that your hardware may have some size limitations for the supported period size (see this troubleshooting guide).
When the application tries to write samples into the buffer and the buffer is already full, the process goes to sleep. It gets woken up by the hardware through an interrupt; this interrupt is raised at the end of each period.
There should be at least two periods per buffer; otherwise, the buffer is already empty when a wakeup happens, which results in an underrun.
Increasing the number of periods (i.e., reducing the period size) increases the safety margin against underruns caused by scheduling or processing delays.
The latency is just proportional to the buffer size: when you completely fill the buffer, the last sample written is played by the hardware only after all the other samples have been played.
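If you want to see what the driver actually granted (the hardware may round your requested values), you can query the configured sizes back; a sketch along these lines with alsa-lib, assuming the hwparams and rate variables from the question's setup code:

snd_pcm_uframes_t buffer_frames, period_frames;
int dir;
snd_pcm_hw_params_get_buffer_size(hwparams, &buffer_frames);
snd_pcm_hw_params_get_period_size(hwparams, &period_frames, &dir);
/* latency in seconds: buffer length in frames divided by the sample rate */
printf("buffer = %lu frames, period = %lu frames, latency = %.3f s\n",
       (unsigned long)buffer_frames, (unsigned long)period_frames,
       (double)buffer_frames / rate);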
I have a virtual memory system that consists of:-
• 32-bit virtual address
• 4-kbyte virtual page size
• 32-bit Page Table Entry (PTE)
• 2-Gbyte physical memory
I have been asked to find the number of physical frames available in the system and the size (in bytes) of the page table.
I have found the answer for the number of physical frames, which I think is
physical memory / virtual page size
2^31 / 2^12 = 2^19 = 524,288
First, I want to know if that is correct.
Secondly, I would like to calculate the size of the page table in bytes.
Thanks in advance.
LA (logical address) = 32 bits
=> LAS (logical address space) = 2^32 bytes
PA (physical address) = 31 bits, since 2 GB = 2^31 bytes
=> PAS (physical address space) = 2^31 bytes
We know that page size == frame size.
No. of pages = LAS / page size = 2^(32-12) = 2^20 = 1 M pages
No. of frames = PAS / frame size = 2^(31-12) = 2^19 frames (so yes, your frame count is correct)
Since the number of entries in the page table equals the number of pages in the LAS,
page table size = no. of entries * entry size
=> page table size = 2^20 * 4 bytes = 2^22 bytes = 4 MB.
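A quick sanity check of the same arithmetic in C (the shift counts come straight from the figures above):

#include <stdio.h>

int main(void)
{
    unsigned long long phys_mem   = 1ULL << 31; /* 2 GB physical memory */
    unsigned long long virt_space = 1ULL << 32; /* 32-bit virtual address */
    unsigned long long page_size  = 1ULL << 12; /* 4 KB pages */
    unsigned long long pte_size   = 4;          /* 32-bit PTE */

    printf("frames     = %llu\n", phys_mem / page_size);          /* 2^19 */
    printf("page table = %llu bytes\n",
           (virt_space / page_size) * pte_size);                  /* 2^22 */
    return 0;
}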
I need to know how big a given in-memory buffer will be as an on-disk (USB stick) file before I write it. I know that unless the size falls on a block-size boundary, it's likely to get rounded up, e.g. a 1-byte file takes up 4096 bytes on disk. I'm currently doing this using GetDiskFreeSpace() to work out the disk block size, then using this to calculate the on-disk size like this:
GetDiskFreeSpace(szDrive, &dwSectorsPerCluster,
&dwBytesPerSector, NULL, NULL);
dwBlockSize = dwSectorsPerCluster * dwBytesPerSector;
if (dwInMemorySize % dwBlockSize != 0)
{
dwSizeOnDisk = ((dwInMemorySize / dwBlockSize) * dwBlockSize) + dwBlockSize;
}
else
{
dwSizeOnDisk = dwInMemorySize;
}
Which seems to work fine, BUT GetDiskFreeSpace() only works on disks up to 2GB according to MSDN. GetDiskFreeSpaceEx() doesn't return the same information, so my question is, how else can I calculate this information for drives >2GB? Is there an API call I've missed? Can I assume some hard values depending on the overall disk size?
MSDN only states that the GetDiskFreeSpace() function cannot report volume sizes greater than 2GB. It works fine for retrieving sectors per cluster and bytes per sector, I've used it myself for very similar-looking code ;-)
But if you want disk capacity too, you'll need an additional call to GetDiskFreeSpaceEx().
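Incidentally, the rounding itself can be written more compactly with the usual integer round-up idiom; a sketch using the same variables as the question's code:

/* Round dwInMemorySize up to the next multiple of dwBlockSize. */
dwSizeOnDisk = ((dwInMemorySize + dwBlockSize - 1) / dwBlockSize) * dwBlockSize;

This handles the exact-multiple case without a separate branch.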
The size of a file on disk is a fuzzy concept. In NTFS, a file consists of a set of data elements; you're primarily thinking of the "unnamed data stream". That's an attribute of a file which, if small enough, can be packed with the other attributes in the file's MFT record. Apparently, you can store a data stream of up to 700-800 bytes in the record itself, so your hypothetical 1-byte file would be as big as a 0-byte or 700-byte file.
Another influence is file compression. This will make the on-disk size potentially smaller than the in-memory size.
You should be able to obtain this information using the DeviceIoControl function with IOCTL_DISK_GET_DRIVE_GEOMETRY_EX. It returns a DISK_GEOMETRY_EX structure that contains the information you are looking for, I think:
http://msdn.microsoft.com/en-us/library/aa363216(VS.85).aspx
http://msdn.microsoft.com/en-us/library/ms809010.aspx
In ActionScript!

var size:Number = 19912;
var sizeOnDisk:Number = size;
var remainder:Number = size % (1024 * 4); // assumes a 4 KB cluster size
if (remainder > 0) {
    sizeOnDisk = size + ((1024 * 4) - remainder);
}
trace(size);
trace(sizeOnDisk);