Linux: setting locale at runtime and interprocess dependencies - C

I am stuck on a strange problem.
I have two programs (C executables) running on an ARM Linux machine that mount the same USB device (containing Chinese-character filenames) on two different paths as soon as the device is inserted.
int mount(const char *source, const char *target,
          const char *filesystemtype, unsigned long mountflags,
          const void *data);
In the last parameter,
Script A passes "utf8" and Script B passes 0.
So, as soon as I insert the USB device, the scripts race to mount the device.
If Script A mounts first (passing the utf8 parameter), I get proper filenames. This is the mount command output [notice that even the second mount shows utf8 as a parameter, even though it was not passed. Why?]:
/dev/sdb1 on /home/root/script1 type vfat (ro,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,utf8,errors=remount-ro)
/dev/sdb1 on /home/root/script2 type vfat (ro,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,utf8,errors=remount-ro)
But if Script B mounts first (which passes 0 as the last parameter to mount), I get broken filenames like ?????.mp3 from readdir(). This is the mount command output:
/dev/sdb1 on /home/root/script2 type vfat (ro,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
/dev/sdb1 on /home/root/script1 type vfat (ro,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
EDIT
This is the basic mount code of both test programs (the only difference is the last mount argument). Both are started immediately on reboot by a service.
/* mount the device; mountflags 1 == MS_RDONLY */
ret = mount("/dev/sda1", "/home/root/script1/", "vfat", 1, "utf8");
if (ret == 0) {
    fprintf(stdout, "mount() succeeded.\n");
    sleep(2000);
} else {
    ret = mount("/dev/sdb1", "/home/root/script1/", "vfat", 1, "utf8");
    if (ret == 0) {
        fprintf(stdout, "mount() succeeded\n");
        sleep(2000);
    } else {
        fprintf(stdout, "/dev/sdb1 mount() failed: %d, %s\n", errno, strerror(errno));
        ret = mount("/dev/sdc1", "/home/root/script1/", "vfat", 1, "utf8");
        if (ret == 0) {
            fprintf(stdout, "mount() succeeded\n");
            sleep(2000);
        } else {
            fprintf(stdout, "mount() failed: %d, %s\n", errno, strerror(errno));
        }
    }
}

Generally speaking, you should never mount the same filesystem twice -- if the OS decides to write to the same block twice, you'll get filesystem corruption. Use bind mounts in such cases.
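A minimal sketch of that approach, reusing the paths from the question (MS_RDONLY stands in for the literal 1 passed above): mount the device once with the options you want, then bind the already-mounted tree to the second path.
#include <sys/mount.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

int main(void)
{
    /* The first (real) mount carries the filesystem options. */
    if (mount("/dev/sdb1", "/home/root/script1", "vfat", MS_RDONLY, "utf8") != 0) {
        fprintf(stderr, "mount: %s\n", strerror(errno));
        return 1;
    }
    /* The second path is just a bind of the already-mounted tree;
     * the fstype and data arguments are ignored for MS_BIND. */
    if (mount("/home/root/script1", "/home/root/script2", NULL, MS_BIND, NULL) != 0) {
        fprintf(stderr, "bind mount: %s\n", strerror(errno));
        return 1;
    }
    return 0;
}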
Linux, however, is smart enough to help you with that -- it will reuse the older filesystem's super_block (with all of its mount options) for the second mount location.
I couldn't find this in the documentation, but it is traceable through the kernel source in sget(), which is called by mount_bdev():
hlist_for_each_entry(old, &type->fs_supers, s_instances) {
    if (!test(old, data))
        continue;
    if (!grab_super(old))
        goto retry;
    if (s) {
        up_write(&s->s_umount);
        destroy_super(s);
        s = NULL;
    }
    return old;
}
This loop looks for a previous super_block instance corresponding to the block device, and if one already exists, it simply returns it.
Some practical proof using SystemTap:
# stap -e 'probe kernel.function("sget").return {
    sb = $return;
    active = @cast(sb, "super_block")->s_active->counter;
    fsi = @cast(sb, "super_block")->s_fs_info;
    uid = fsi == 0 ? -1
          : @cast(fsi, "msdos_sb_info", "vfat")->options->fs_uid;
    printf("%p active=%d uid=%d\n", sb, active, uid);
}'
Setting uid in the second mount doesn't alter the options, but it does increase the number of active mounts (as expected):
# mount /dev/sdd1 /tmp/mnt1
0xffff8803ce87e800 active=1 uid=-1
# mount -o uid=1000 /dev/sdd1 /tmp/mnt2
0xffff8803ce87e800 active=2 uid=0
Mounting in reverse order also inherits mount options:
# mount -o uid=1000 /dev/sdd1 /tmp/mnt2
0xffff8803cc609c00 active=1 uid=-1
# mount /dev/sdd1 /tmp/mnt1
0xffff8803cc609c00 active=2 uid=1000
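From user space you can also check which options an existing mount actually ended up with (rather than the ones your program passed) by reading /proc/self/mounts. A minimal sketch, assuming the mount-point path is known:
#include <mntent.h>
#include <stdio.h>
#include <string.h>

/* Print the effective options of the filesystem mounted at `mountpoint`. */
int print_mount_options(const char *mountpoint)
{
    struct mntent *m;
    int rc = -1;
    FILE *f = setmntent("/proc/self/mounts", "r");

    if (!f)
        return -1;
    while ((m = getmntent(f)) != NULL) {
        if (strcmp(m->mnt_dir, mountpoint) == 0) {
            printf("%s on %s: %s\n", m->mnt_fsname, m->mnt_dir, m->mnt_opts);
            rc = 0;
            break;
        }
    }
    endmntent(f);
    return rc;
}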
If you wish to know who is responsible for this behavior, ask Linus; similar code has existed since 0.11:
struct super_block * get_super(int dev)
{
    struct super_block * s;

    if (!dev)
        return NULL;
    s = 0+super_block;
    while (s < NR_SUPER+super_block)
        if (s->s_dev == dev) {
            wait_on_super(s);
            if (s->s_dev == dev)
                return s;
            s = 0+super_block;
        } else
            s++;
    return NULL;
}
(but when this code was in charge, sys_mount() explicitly checked that no other mountpoint existed for that superblock).
You could also try asking about it on LKML.

Related

getpwuid_r and getgrgid_r are slow, how can I cache them?

I am writing a clone of find while learning C. When implementing the -ls option I stumbled upon the problem that the getpwuid_r and getgrgid_r calls are really slow; the same applies to getpwuid and getgrgid. I need them to display the user/group names from the ids provided by stat().
For example, listing the whole filesystem gets 3x slower:
# measurements were made 3 times and the fastest run was recorded
# with getgrgid_r
time ./myfind / -ls > list.txt
real 0m4.618s
user 0m1.848s
sys 0m2.744s
# getgrgid_r replaced with 'return "user";'
time ./myfind / -ls > list.txt
real 0m1.437s
user 0m0.572s
sys 0m0.832s
I wonder how GNU find maintains such a good speed. I've seen the sources, but they are not exactly easy to understand and to apply without special types, macros etc:
time find / -ls > list.txt
real 0m1.544s
user 0m0.884s
sys 0m0.648s
I thought about caching the uid/username and gid/groupname pairs in a data structure. Is that a good idea? How would you implement it?
You can find my complete code here.
UPDATE:
The solution was exactly what I was looking for:
time ./myfind / -ls > list.txt
real 0m1.480s
user 0m0.696s
sys 0m0.736s
Here is a version based on getgrgid (if you don't require thread safety):
char *do_get_group(struct stat attr) {
    struct group *grp;
    static unsigned int cache_gid = UINT_MAX;
    static char *cache_gr_name = NULL;

    /* skip getgrgid if we have the record in cache */
    if (cache_gid == attr.st_gid) {
        return cache_gr_name;
    }
    /* clear the cache */
    cache_gid = UINT_MAX;

    grp = getgrgid(attr.st_gid);
    if (!grp) {
        /*
         * the group is not found or getgrgid failed,
         * return the gid as a string then;
         * an unsigned int needs 10 chars
         * (static, so the buffer survives the return)
         */
        static char group[11];
        if (snprintf(group, sizeof(group), "%u", attr.st_gid) < 0) {
            fprintf(stderr, "%s: snprintf(): %s\n", program, strerror(errno));
            return "";
        }
        return group;
    }
    cache_gid = grp->gr_gid;
    cache_gr_name = grp->gr_name;
    return grp->gr_name;
}
getpwuid:
char *do_get_user(struct stat attr) {
    struct passwd *pwd;
    static unsigned int cache_uid = UINT_MAX;
    static char *cache_pw_name = NULL;

    /* skip getpwuid if we have the record in cache */
    if (cache_uid == attr.st_uid) {
        return cache_pw_name;
    }
    /* clear the cache */
    cache_uid = UINT_MAX;

    pwd = getpwuid(attr.st_uid);
    if (!pwd) {
        /*
         * the user is not found or getpwuid failed,
         * return the uid as a string then;
         * an unsigned int needs 10 chars
         * (static, so the buffer survives the return)
         */
        static char user[11];
        if (snprintf(user, sizeof(user), "%u", attr.st_uid) < 0) {
            fprintf(stderr, "%s: snprintf(): %s\n", program, strerror(errno));
            return "";
        }
        return user;
    }
    cache_uid = pwd->pw_uid;
    cache_pw_name = pwd->pw_name;
    return pwd->pw_name;
}
UPDATE 2:
Changed long to unsigned int.
UPDATE 3:
Added the cache clearing. It is absolutely necessary, because pwd->pw_name may point to a static area; getpwuid can overwrite its contents if it fails, or simply when it is called elsewhere in the program.
Also removed strdup. Since the output of getgrgid and getpwuid should not be freed, there is no need to require a free for our wrapper functions.
The timings indeed cast strong suspicion on these functions.
Looking at your function do_get_group, there are some issues:
You use sysconf(_SC_GETPW_R_SIZE_MAX); for every call to do_get_group and do_get_user. Definitely cache that -- it will not change during the lifetime of your program -- but you will not gain much.
You use attr.st_uid instead of attr.st_gid, which probably causes the lookup to fail for many files, possibly defeating the caching mechanism, if any. Fix this first, this is a bug!
You return values that should not be passed to free() by the caller, such as grp->gr_name and "". You should always allocate the string you return. The same issue is probably present in do_get_user().
Here is a replacement for do_get_group with a one-shot cache. See if this improves the performance:
/*
 * @brief returns the groupname or gid, if group not present on the system
 *
 * @param attr the entry attributes from lstat
 *
 * @returns the groupname if getgrgid() worked, otherwise gid, as a string
 */
char *do_get_group(struct stat attr) {
    char *group;
    struct group grp;
    struct group *result;
    static size_t length = 0;
    static char *buffer = NULL;
    static gid_t cache_gid = -1;
    static char *cache_gr_name = NULL;

    if (!length) {
        /* only allocate the buffer once */
        long sysconf_length = sysconf(_SC_GETPW_R_SIZE_MAX);
        if (sysconf_length == -1) {
            sysconf_length = 16384;
        }
        length = (size_t)sysconf_length;
        buffer = calloc(length, 1);
    }
    if (!buffer) {
        fprintf(stderr, "%s: malloc(): %s\n", program, strerror(errno));
        return strdup("");
    }
    /* check the cache */
    if (cache_gid == attr.st_gid) {
        return strdup(cache_gr_name);
    }
    /* empty the cache */
    cache_gid = -1;
    free(cache_gr_name);
    cache_gr_name = NULL;

    if (getgrgid_r(attr.st_gid, &grp, buffer, length, &result) != 0) {
        fprintf(stderr, "%s: getgrgid_r(): %s\n", program, strerror(errno));
        return strdup("");
    }
    if (result) {
        group = grp.gr_name;
    } else {
        group = buffer;
        if (snprintf(group, length, "%ld", (long)attr.st_gid) < 0) {
            fprintf(stderr, "%s: snprintf(): %s\n", program, strerror(errno));
            return strdup("");
        }
    }
    /* load the cache */
    cache_gid = attr.st_gid;
    cache_gr_name = strdup(group);
    return strdup(group);
}
Whether the getpwuid and getgrgid calls are cached depends on how they are implemented and on how your system is configured. I recently wrote an implementation of ls and ran into a similar problem.
I found that, on all modern systems I tested, the two functions are uncached unless you run the name service caching dæmon (nscd), in which case nscd makes sure that the cache stays up to date. It's easy to understand why: without an nscd, caching the information could lead to outdated output, which would violate the specification.
I don't think you should rely on these functions caching the group and passwd databases, because they often don't. I implemented custom caching code for this purpose. If you don't need up-to-date information when the database contents change during program execution, this is perfectly fine to do.
You can find my implementation of such a cache here. I'm not going to publish it on Stack Overflow as I do not desire to publish the code under the MIT license.
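As an illustration of the idea only -- this is not the implementation linked above -- a minimal fixed-size cache keyed by uid could look something like this:
#include <pwd.h>
#include <stdio.h>
#include <sys/types.h>

#define UCACHE_SIZE 256          /* arbitrary bucket count for this sketch */

struct ucache_entry {
    uid_t uid;
    int   valid;
    char  name[64];
};

static struct ucache_entry ucache[UCACHE_SIZE];

/* Return the user name for uid, or the uid as a string if it is unknown. */
const char *cached_user_name(uid_t uid)
{
    struct ucache_entry *e = &ucache[uid % UCACHE_SIZE];

    if (!e->valid || e->uid != uid) {
        struct passwd *pwd = getpwuid(uid);

        e->uid = uid;
        e->valid = 1;
        if (pwd)
            snprintf(e->name, sizeof(e->name), "%s", pwd->pw_name);
        else
            snprintf(e->name, sizeof(e->name), "%lu", (unsigned long)uid);
    }
    return e->name;
}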

Getting the actual executable path of current process context - Linux kernel

I'm trying to get the actual executable path of a running process through my kernel driver.
I've done the following:
static struct kretprobe do_fork_probe = {
    .entry_handler = (kprobe_opcode_t *) process_entry_callback,
    .handler = (kprobe_opcode_t *) NULL,
    .maxactive = 1000,
    .data_size = 0
};

do_fork_probe.kp.addr = (kprobe_opcode_t *)kallsyms_lookup_name("do_fork");
if ((ret = register_kretprobe(&do_fork_probe)) < 0)
    return -1;

static int process_entry_callback(struct kretprobe_instance *ri, struct pt_regs *regs)
{
    printk("Executable path = %s\n", executable_path(current));
    return 0;
}
The executable_path function:
char *executable_path(struct task_struct *process)
{
#define PATH_MAX 4096
    char *p = NULL, *pathname;
    struct mm_struct *mm = current->mm;

    if (mm) {
        down_read(&mm->mmap_sem);
        if (mm->exe_file) {
            pathname = kmalloc(PATH_MAX, GFP_ATOMIC);
            if (pathname)
                p = d_path(&mm->exe_file->f_path, pathname, PATH_MAX);
        }
        up_read(&mm->mmap_sem);
    }
    return p;
}
The problem is that if I run an executable using bash as follows:
./execname
I'm getting the following output:
Executable path = /bin/bash
While what I really want is execname (actually its full path, but let's start with the name).
Any suggestions?
It is unclear what you are trying to get, so here is a list of options:
execname as SystemTap considers it. A simple process->comm should suffice (see the sketch after this list). This is how the comm field is defined in the kernel:
char comm[TASK_COMM_LEN]; /* executable name excluding path
                             - access with [gs]et_task_comm (which lock
                               it with task_lock())
                             - initialized normally by setup_new_exec */
But if bash is a symlink, then comm should contain the symlink's name, not the real executable's name.
argv[0] -- the first element of the command-line arguments array as seen by the application (and it may be altered by the application). There is a get_cmdline() function in the kernel, but it does not seem to be exported.
The basename of the full path. In this case, do not call d_path; just take the d_name field of the dentry:
strlcpy(pathname, mm->exe_file->f_path.dentry->d_name.name, PATH_MAX);
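For the process->comm option, a minimal sketch (assuming it runs in a context, such as the kretprobe handler above, where current is the task of interest):
#include <linux/sched.h>
#include <linux/kernel.h>

static void print_comm_example(void)
{
    char comm[TASK_COMM_LEN];

    get_task_comm(comm, current);   /* takes the task lock internally */
    printk(KERN_INFO "comm = %s\n", comm);
}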
But this sounds like an XY problem. Are you trying to get executable names for all forking processes? Why not use SystemTap directly?
# stap -v -e 'probe scheduler.process_fork { println(execname()); }'

File IO in the apache portable runtime library

While working through Zed Shaw's Learn C the Hard Way, I encountered the function apr_dir_make_recursive(), which according to the documentation here has the type signature
apr_status_t apr_dir_make_recursive(const char *path, apr_fileperms_t perm, apr_pool_t *pool)
which makes the directory, much like the Unix command mkdir -p.
Why would the IO function need a memory pool in order to operate?
My first thought was that it was perhaps an optional argument to populate the newly made directory; however, the code below uses an initialized but presumably empty memory pool. Does this mean that the IO function itself needs a memory pool that we are passing in for it to use? But that doesn't seem likely either; couldn't the function simply create a local memory pool for its own use, which is then destroyed upon return or error?
So, what use is the memory pool? The documentation linked is unhelpful on this point.
Code shortened and shown below, for the curious.
int DB_init()
{
    apr_pool_t *p = NULL;
    apr_pool_initialize();
    apr_pool_create(&p, NULL);

    if (access(DB_DIR, W_OK | X_OK) == -1) {
        apr_status_t rc = apr_dir_make_recursive(DB_DIR,
                APR_UREAD | APR_UWRITE | APR_UEXECUTE |
                APR_GREAD | APR_GWRITE | APR_GEXECUTE, p);
    }
    if (access(DB_FILE, W_OK) == -1) {
        FILE *db = DB_open(DB_FILE, "w");
        check(db, "Cannot open database: %s", DB_FILE);
        DB_close(db);
    }

    apr_pool_destroy(p);
    return 0;
}
If you pull up the source, you'll see: apr_dir_make_recursive() calls path_remove_last_component():
static char *path_remove_last_component(const char *path, apr_pool_t *pool)
{
    const char *newpath = path_canonicalize(path, pool);
    int i;

    for (i = (strlen(newpath) - 1); i >= 0; i--) {
        if (path[i] == PATH_SEPARATOR)
            break;
    }
    return apr_pstrndup(pool, path, (i < 0) ? 0 : i);
}
This function creates copies of the path with apr_pstrndup(), each representing a smaller component of it.
To answer your question: because of how it was implemented. Would it be possible to do the same without allocating memory? Yes. But in this case everything came out cleaner and more readable by copying the necessary path components.
The implementation of the function (found here) shows that the pool is used to allocate strings representing the individual components of the path.
The reason the function does not create its own local pool is that the pool may be reused across multiple calls to the apr_*() functions. It just so happens that DB_init() has no need to reuse an apr_pool_t.
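For illustration, a minimal sketch of that reuse (the directory paths are invented): a single pool serves several APR calls before it is destroyed.
#include <apr_general.h>
#include <apr_file_io.h>

/* Sketch only: one pool is created, handed to several APR calls,
 * and then released in one go by apr_pool_destroy(). */
int make_dirs(void)
{
    apr_pool_t *p = NULL;
    apr_status_t rc;

    apr_pool_initialize();
    apr_pool_create(&p, NULL);

    rc = apr_dir_make_recursive("/tmp/example/db",
            APR_UREAD | APR_UWRITE | APR_UEXECUTE, p);
    if (rc == APR_SUCCESS)
        rc = apr_dir_make_recursive("/tmp/example/logs",
                APR_UREAD | APR_UWRITE | APR_UEXECUTE, p);

    apr_pool_destroy(p);    /* releases every allocation made from the pool */
    return rc == APR_SUCCESS ? 0 : -1;
}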

tmpfile() on windows 7 x64

Running the following code on Windows 7 x64
#include <stdio.h>
#include <errno.h>

int main() {
    int i;
    FILE *tmp;

    for (i = 0; i < 10000; i++) {
        errno = 0;
        if (!(tmp = tmpfile()))
            printf("Fail %d, err %d\n", i, errno);
        else
            fclose(tmp);    /* only close on success */
    }
    return 0;
}
This gives errno 13 (Permission denied) on the 637th and 1004th calls; it works fine on XP (I haven't tried 7 x86). Am I missing something, or is this a bug?
I've had a similar problem on Windows 8 -- tmpfile() was failing with the Win32 ERROR_ACCESS_DENIED error code -- and yes, if you run the application with administrator privileges, it works fine.
I guess the problem is the one mentioned here:
https://lists.gnu.org/archive/html/bug-gnulib/2007-02/msg00162.html
Under Windows, the tmpfile function is defined to always create
its temporary file in the root directory. Most users don't have
permission to do that, so it will often fail.
I suspect this is a somewhat incomplete Windows port of the function, so it should really be reported to Microsoft as a bug. (Why provide a tmpfile function at all if it's useless?)
But who has time to fight Microsoft windmills?! :-)
I've coded a similar implementation using GetTempPathW / GetModuleFileNameW / _wfopen. The code where I encountered this problem came from libjpeg -- I'm attaching the whole source file here, but you can pick up the relevant code from jpeg_open_backing_store.
jmemwin.cpp:
//
// Windows port for jpeg lib functions.
//
#define JPEG_INTERNALS
#include <Windows.h> // GetTempFileName
#undef FAR // Will be redefined - disable warning
#include "jinclude.h"
#include "jpeglib.h"
extern "C" {
#include "jmemsys.h" // jpeg_ api interface.
//
// Memory allocation and freeing are controlled by the regular library routines malloc() and free().
//
GLOBAL(void *) jpeg_get_small (j_common_ptr cinfo, size_t sizeofobject)
{
return (void *) malloc(sizeofobject);
}
GLOBAL(void) jpeg_free_small (j_common_ptr cinfo, void * object, size_t sizeofobject)
{
free(object);
}
/*
* "Large" objects are treated the same as "small" ones.
* NB: although we include FAR keywords in the routine declarations,
* this file won't actually work in 80x86 small/medium model; at least,
* you probably won't be able to process useful-size images in only 64KB.
*/
GLOBAL(void FAR *) jpeg_get_large (j_common_ptr cinfo, size_t sizeofobject)
{
return (void FAR *) malloc(sizeofobject);
}
GLOBAL(void) jpeg_free_large (j_common_ptr cinfo, void FAR * object, size_t sizeofobject)
{
free(object);
}
//
// Used only by command line applications, not by static library compilation
//
#ifndef DEFAULT_MAX_MEM /* so can override from makefile */
#define DEFAULT_MAX_MEM 1000000L /* default: one megabyte */
#endif
GLOBAL(long) jpeg_mem_available (j_common_ptr cinfo, long min_bytes_needed, long max_bytes_needed, long already_allocated)
{
// jmemansi.c's jpeg_mem_available implementation was insufficient for some of .jpg loads.
MEMORYSTATUSEX status = { 0 };
status.dwLength = sizeof(status);
GlobalMemoryStatusEx(&status);
if( status.ullAvailPhys > LONG_MAX )
// Normally goes here since new PC's have more than 4 Gb of ram.
return LONG_MAX;
return (long) status.ullAvailPhys;
}
/*
Backing store (temporary file) management.
Backing store objects are only used when the value returned by
jpeg_mem_available is less than the total space needed. You can dispense
with these routines if you have plenty of virtual memory; see jmemnobs.c.
*/
METHODDEF(void) read_backing_store (j_common_ptr cinfo, backing_store_ptr info, void FAR * buffer_address, long file_offset, long byte_count)
{
if (fseek(info->temp_file, file_offset, SEEK_SET))
ERREXIT(cinfo, JERR_TFILE_SEEK);
size_t readed = fread( buffer_address, 1, byte_count, info->temp_file);
if (readed != (size_t) byte_count)
ERREXIT(cinfo, JERR_TFILE_READ);
}
METHODDEF(void)
write_backing_store (j_common_ptr cinfo, backing_store_ptr info, void FAR * buffer_address, long file_offset, long byte_count)
{
if (fseek(info->temp_file, file_offset, SEEK_SET))
ERREXIT(cinfo, JERR_TFILE_SEEK);
if (JFWRITE(info->temp_file, buffer_address, byte_count) != (size_t) byte_count)
ERREXIT(cinfo, JERR_TFILE_WRITE);
// E.g. if you need to debug writes.
//if( fflush(info->temp_file) != 0 )
// ERREXIT(cinfo, JERR_TFILE_WRITE);
}
METHODDEF(void)
close_backing_store (j_common_ptr cinfo, backing_store_ptr info)
{
fclose(info->temp_file);
// File is deleted using 'D' flag on open.
}
static HMODULE DllHandle()
{
MEMORY_BASIC_INFORMATION info;
VirtualQuery(DllHandle, &info, sizeof(MEMORY_BASIC_INFORMATION));
return (HMODULE)info.AllocationBase;
}
GLOBAL(void) jpeg_open_backing_store(j_common_ptr cinfo, backing_store_ptr info, long total_bytes_needed)
{
// Generate unique filename.
wchar_t path[ MAX_PATH ] = { 0 };
wchar_t dllPath[ MAX_PATH ] = { 0 };
GetTempPathW( MAX_PATH, path );
// Based on .exe or .dll filename
GetModuleFileNameW( DllHandle(), dllPath, MAX_PATH );
wchar_t* p = wcsrchr( dllPath, L'\\');
wchar_t* ext = wcsrchr( p + 1, L'.');
if( ext ) *ext = 0;
wchar_t* outFile = path + wcslen(path);
static int iTempFileId = 1;
// Based on process id (so processes would not fight with each other)
// Based on some process global id.
wsprintfW(outFile, L"%s_%d_%d.tmp",p + 1, GetCurrentProcessId(), iTempFileId++ );
// 'D' - temporary file.
if ((info->temp_file = _wfopen(path, L"w+bD") ) == NULL)
ERREXITS(cinfo, JERR_TFILE_CREATE, "");
info->read_backing_store = read_backing_store;
info->write_backing_store = write_backing_store;
info->close_backing_store = close_backing_store;
} //jpeg_open_backing_store
/*
* These routines take care of any system-dependent initialization and
* cleanup required.
*/
GLOBAL(long)
jpeg_mem_init (j_common_ptr cinfo)
{
return DEFAULT_MAX_MEM; /* default for max_memory_to_use */
}
GLOBAL(void)
jpeg_mem_term (j_common_ptr cinfo)
{
/* no work */
}
}
I'm intentionally ignoring errors from some of the functions -- have you ever seen GetTempPathW or GetModuleFileNameW fail?
A bit of a refresher from the man page of tmpfile(), which returns a FILE*:
The file will be automatically deleted when it is closed or the program terminates.
My verdict for this issue: Deleting a file on Windows is weird.
When you delete a file on Windows, for as long as something still holds a handle to it, you can't call CreateFile on anything with the same absolute path; it will fail with the NT error code STATUS_DELETE_PENDING, which gets mapped to the Win32 code ERROR_ACCESS_DENIED. This is probably where the EACCES (13) in errno is coming from. You can confirm this with a tool like Sysinternals Process Monitor.
My guess is that CRT somehow wound up creating a file that has the same name as something it's used before. I've sometimes witnessed that deleting files on Windows can appear asynchronous because some other process (sometimes even an antivirus product, in reaction to the fact that you've just closed a delete-on-close handle...) will leave a handle open to the file, so for some timing window you will see a visible file that you can't get a handle to without hitting delete pending/access denied. Or, it could be that tmpfile has simply chosen a filename that some other process is working on.
To avoid this sort of thing you might want to consider another mechanism for temp files... For example, the Win32 function GetTempFileName lets you supply your own prefix, which might make a collision less likely. That function appears to resolve race conditions by retrying if a create fails with "already exists", so be careful about deleting the temp filenames it generates -- deleting the file cancels your right to use it concurrently with other processes/threads.
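A minimal sketch of that alternative (the L"tmp" prefix is arbitrary): GetTempFileName creates a uniquely named file in the user's temp directory, and FILE_FLAG_DELETE_ON_CLOSE makes the OS remove it once the last handle is closed.
#include <windows.h>

HANDLE open_temp_file(void)
{
    wchar_t dir[MAX_PATH];
    wchar_t name[MAX_PATH];

    if (GetTempPathW(MAX_PATH, dir) == 0)
        return INVALID_HANDLE_VALUE;
    if (GetTempFileNameW(dir, L"tmp", 0, name) == 0)   /* 0 => unique name; the file is created */
        return INVALID_HANDLE_VALUE;

    /* Reopen the file so the OS deletes it when the last handle closes. */
    return CreateFileW(name, GENERIC_READ | GENERIC_WRITE,
                       0, NULL, CREATE_ALWAYS,
                       FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE,
                       NULL);
}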

How to get mounted drive's volume name in linux using C?

I'm currently working on a program which must display information about a mounted flash drive. I want to display the total space, free space, file system type and volume name. The problem is that I can't find any API through which I can get the volume name (volume label). Is there an API to do this?
P.S. The total space, free space and file system type I'm already getting via the statfs function.
Assuming that you are working on a recent desktop-like distribution (Fedora, Ubuntu, etc.), you have the HAL daemon running and a D-Bus session.
Within the org.freedesktop.UDisks namespace you can find the object that represents this drive (say /org/freedesktop/UDisks/devices/sdb). It implements the org.freedesktop.UDisks.Device interface. This interface has all the properties you can dream of, including the UUID (IdUuid), the FAT label (IdLabel), all the details about the filesystem, SMART status (if the drive supports that), etc.
How to use the D-Bus API in C is a topic for another question; I assume it has already been discussed in detail -- just search the [dbus] and [c] tags.
Flash drives are generally FAT32, which means the "name" you're looking for is probably the FAT drive label. The most common Linux command to retrieve that information is mlabel from the mtools package.
The command looks like this:
[root@localhost]$ mlabel -i /dev/sde1 -s ::
Volume label is USB-DISK
This program works by reading the raw FAT header of the filesystem and retrieving the label from that data. You can look at the source code for the application to see how to replicate the parsing of FAT data in your own application... or you can simply run the mlabel binary and read the result into your program. The latter sounds simpler to me.
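A minimal sketch of the second approach, assuming mtools is installed: run mlabel via popen() and scrape the "Volume label is" line from its output (the device path is just an example).
#include <stdio.h>
#include <string.h>

/* Fill `label` with the FAT label of `device`, or return -1 on failure. */
int get_fat_label(const char *device, char *label, size_t len)
{
    char cmd[256];
    char line[256];
    int found = -1;
    FILE *p;

    snprintf(cmd, sizeof(cmd), "mlabel -i %s -s ::", device);
    p = popen(cmd, "r");
    if (!p)
        return -1;

    while (fgets(line, sizeof(line), p)) {
        const char *prefix = "Volume label is ";
        char *hit = strstr(line, prefix);
        if (hit) {
            snprintf(label, len, "%s", hit + strlen(prefix));
            label[strcspn(label, "\r\n")] = '\0';   /* trim the newline */
            found = 0;
        }
    }
    pclose(p);
    return found;
}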
To call the methods:
kernResult = self->FindEjectableCDMedia(&mediaIterator);
if (KERN_SUCCESS != kernResult) {
    printf("FindEjectableCDMedia returned 0x%08x\n", kernResult);
}
kernResult = self->GetPath(mediaIterator, bsdPath, sizeof(bsdPath));
if (KERN_SUCCESS != kernResult) {
    printf("GetPath returned 0x%08x\n", kernResult);
}
and the methods:
// Returns an iterator across all DVD media (class IODVDMedia). Caller is responsible for releasing
// the iterator when iteration is complete.
kern_return_t ScanPstEs::FindEjectableCDMedia(io_iterator_t *mediaIterator)
{
    kern_return_t kernResult;
    CFMutableDictionaryRef classesToMatch;

    // CD media are instances of class kIODVDMediaTypeROM
    classesToMatch = IOServiceMatching(kIODVDMediaClass);
    if (classesToMatch == NULL) {
        printf("IOServiceMatching returned a NULL dictionary.\n");
    } else {
        CFDictionarySetValue(classesToMatch, CFSTR(kIODVDMediaClass), kCFBooleanTrue);
    }
    kernResult = IOServiceGetMatchingServices(kIOMasterPortDefault, classesToMatch, mediaIterator);
    return kernResult;
}
// Given an iterator across a set of CD media, return the BSD path to the
// next one. If no CD media was found the path name is set to an empty string.
kern_return_t GetPath(io_iterator_t mediaIterator, char *Path, CFIndex maxPathSize)
{
io_object_t nextMedia;
kern_return_t kernResult = KERN_FAILURE;
DADiskRef disk = NULL;
DASessionRef session = NULL;
CFDictionaryRef props = NULL;
char bsdPath[ MAXPATHLEN ] = { 0 };  /* buffer for the BSD device node name */
*Path = '\0';
nextMedia = IOIteratorNext(mediaIterator);
if (nextMedia) {
CFTypeRef bsdPathAsCFString;
bsdPathAsCFString = IORegistryEntryCreateCFProperty(nextMedia,CFSTR(kIOBSDNameKey),kCFAllocatorDefault,0);
if (bsdPathAsCFString) {
//strlcpy(bsdPath, _PATH_DEV, maxPathSize);
// Add "r" before the BSD node name from the I/O Registry to specify the raw disk
// node. The raw disk nodes receive I/O requests directly and do not go through
// the buffer cache.
//strlcat(bsdPath, "r", maxPathSize);
size_t devPathLength = strlen(bsdPath);
if (CFStringGetCString( (CFStringRef)bsdPathAsCFString , bsdPath + devPathLength,maxPathSize - devPathLength, kCFStringEncodingUTF8)) {
qDebug("BSD path: %s\n", bsdPath);
kernResult = KERN_SUCCESS;
}
session = DASessionCreate(kCFAllocatorDefault);
if(session == NULL) {
qDebug("Can't connect to DiskArb\n");
return -1;
}
disk = DADiskCreateFromBSDName(kCFAllocatorDefault, session, bsdPath);
if(disk == NULL) {
CFRelease(session);
qDebug( "Can't create DADisk for %s\n", bsdPath);
return -1;
}
props = DADiskCopyDescription(disk);
if(props == NULL) {
CFRelease(session);
CFRelease(disk);
qDebug("Can't get properties for %s\n",bsdPath);
return -1;
}
CFStringRef daName = (CFStringRef)CFDictionaryGetValue(props, kDADiskDescriptionVolumeNameKey);
if (daName) {
    /* daName follows the CF Get rule: it is not owned here, so do not CFRelease it */
    CFStringGetCString(daName, Path, maxPathSize, kCFStringEncodingUTF8);
    qDebug("%s", Path);
}
CFRelease(props);
CFRelease(disk);
CFRelease(session);
CFRelease(bsdPathAsCFString);
}
IOObjectRelease(nextMedia);
}
return kernResult;
}
