I have an embedded C application which is developed using the CrossWorks for ARM toolchain.
This project targets a specific processor that is getting old and hard to source, and we are working towards revising our design around a new processor. My plan is to divide the source code into a set of low-level driver code that targets the old processor, and a set of common code that will compile for both processors.
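Roughly, the split I have in mind looks like this (a hedged sketch; the uart names are invented for illustration, not my actual drivers):

/* drivers/uart.h -- hardware-agnostic interface seen by the common code */
#include <stdint.h>
#include <stddef.h>

void   uart_init(uint32_t baud);
size_t uart_write(const uint8_t *buf, size_t len);

Each processor would then get its own uart.c implementing this header, compiled into its own drivers.a, with exactly one of the two archives linked per target.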
I got started making a drivers project which compiles down to a drivers.a file. Currently this file is literally empty; its entire contents are
!<arch>
The problem I have is that merely including this file in the compilation of the common code bloats the compiled size considerably; the resulting binary is about 33% larger...
Below is an example of the size of some of the sections from the map file; the symbols listed are the FatFs functions.
Symbol         Without drivers.a   With drivers.a
f_close                      76              148
f_closedir                   84              136
f_findfirst                  48              108
f_findnext                  116              144
f_getfree                   368              636
f_lseek                     636            1,148
f_mkdir                     488              688
f_mount                     200              256
f_open                    1,096            1,492
f_opendir                   324              472
f_read                      564            1,132
f_readdir                   176              268
f_stat                      156              228
f_sync                      244              440
f_unlink                    380              556
f_write                     668            1,324
So it seems that, because of the additional drivers.a file, the linker is unable to determine that certain parts of the code are unreachable, since the linked-in drivers.a code could conceivably call those routines. This makes sense, I guess, but I need a way around it so that I can divide the code into separately maintainable pieces while still compiling down as efficiently as before.
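My understanding is that, with GCC-based toolchains like CrossWorks, dead-code removal at function granularity normally comes from section splitting plus linker garbage collection, something like the following (standard GCC/GNU-ld flags; CrossWorks exposes equivalents through its project properties):

gcc -c -ffunction-sections -fdata-sections common.c
gcc -o app.elf common.o drivers.a -Wl,--gc-sections

With every function in its own section, the linker should be able to discard anything unreferenced, archive or no archive.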
I had not realized that linking *.a files could have this consequence; I previously had the mental image that *.a files were no different from a bunch of *.o files effectively tar'ed together into a single file. Clearly this isn't the case.
It turns out this has nothing to do with merely linking in the drivers.a file...
The way I had my project set up, the compiler options changed when I included the drivers.a. I believe I was effectively comparing Debugging Level 3 against Debugging Level 2, in which case the added binary size is understandable.
I have a problem where a device isn't booting into Linux.
It just hangs at "Starting kernel ...".
To get a better grip on what goes wrong, I thought it would be nice to get access to the Linux logs.
I can access the userland from U-Boot via "ls":
Zynq> ls mmc 0:2
ostree/deploy/poky/deploy/9d325972b955e6584d3fad0a7ff1bf1a8.0/etc
<DIR> 2048 .
<DIR> 1024 ..
<DIR> 1024 modprobe.d
0 motd
<DIR> 1024 xdg
<DIR> 1024 logrotate.d
58 rpcbind.conf
1633 inputrc
828 mke2fs.conf
15 timestamp
10929 login.defs
324 issue
<DIR> 1024 sudoers.d
etc ...
Now I'm looking for a way to copy files from the userland to another device (a remote PC).
I learned about "tftpput", which is available in U-Boot.
My problem is that "tftpput" expects an address and a size, but I don't know how to obtain that information.
tftpput - TFTP put command, for uploading files to a server
Usage:
tftpput Address Size [[hostIPaddr:]filename]
I was not able to find good documentation on "tftpput". Maybe someone has a link for me, or could provide a small how-to?
Thanks in advance
To answer the specific question: you need a TFTP server on another machine. When you use 'load' to bring a file into memory, you will have that address; $filesize will be set for you (for the size parameter); and the machine you set up the TFTP server on is the final part of the command.
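A hedged sketch of the whole sequence (the partition, file path, and server IP are placeholders, not values from your board):

Zynq> load mmc 0:2 ${loadaddr} /var/log/messages
Zynq> tftpput ${loadaddr} ${filesize} 192.168.1.10:messages

load puts the file at ${loadaddr} and sets ${filesize} as a side effect, so both tftpput parameters come for free.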
That said, if you only see "Starting kernel ..." and nothing else, it is quite likely that the Linux kernel isn't getting to the point where the rootfs is mounted and userland runs, so you're not going to see log files. Without more information it's hard to say what you need to do here, but your bootargs are the first thing to make sure are correct.
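You can check them from the U-Boot prompt; the values below are only illustrative of what a working Zynq setup might use:

Zynq> printenv bootargs
bootargs=console=ttyPS0,115200 root=/dev/mmcblk0p2 rw rootwait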
To analyze why the kernel is not booting you could enable the early console.
For ARM 64bit systems the early console is enabled via the kernel command line parameters. U-Boot takes these from the environment variable bootargs.
The arguments for earlycon depend on your board, e.g. for the Odroid C2:
setenv bootargs earlycon=meson,0xc81004c0
For an early console on a 32-bit ARM system, you will have to compile the kernel with the appropriate configuration options, e.g. for the Banana Pi:
CONFIG_DEBUG_LL=y
CONFIG_DEBUG_SUNXI_UART0=y
CONFIG_EARLY_PRINTK=y
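With a kernel configured this way, the early output is then typically switched on by adding earlyprintk to the kernel command line, again via bootargs (the console argument below is just an example):

setenv bootargs earlyprintk console=ttyS0,115200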
Let's assume that file.txt is 16 bytes in size (0x10 in hex).
First it is necessary to load the file into memory:
fatload mmc 1:1 0x40400000 file.txt
Then you can send it to the TFTP server:
tftpput 0x40400000 10 192.168.7.1:file.txt
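Since fatload also sets the filesize environment variable, you can avoid working out the size by hand; something like this should be equivalent:

tftpput 0x40400000 ${filesize} 192.168.7.1:file.txt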
I am working on Pintos OS project. I get this message:
Page fault at 0xbfffefe0: not present error writing page in user context.
The problem with the Pintos OS project is that it won't simply tell you the line and function that caused the exception.
I know how to use breakpoints/watchpoints etc., but is there any way to get right to it without stepping through the WHOLE flow and ALL the OS files line by line, so that I could jump to the line that caused the exception and put a breakpoint there? I looked at the GDB commands but didn't find anything.
When I debug this project I have to step through the whole program until I find the error/exception, which is very time consuming. There is probably a faster way to do this.
Thanks.
Whole trace:
nestilll#vdebian:~/Class/pintos/proj-3-bhling-nestilll-nsren/src/vm/build$ pintos -v -k -T 60 --qemu --gdb --filesys-size=2 -p tests/vm/pt-grow-pusha -a pt-grow-pusha --swap-size=4 -- -q -f run pt-grow-pusha
Use of literal control characters in variable names is deprecated at /home/nestilll/Class/pintos/src/utils/pintos line 909.
Prototype mismatch: sub main::SIGVTALRM () vs none at /home/nestilll/Class/pintos/src/utils/pintos line 933.
Constant subroutine SIGVTALRM redefined at /home/nestilll/Class/pintos/src/utils/pintos line 925.
warning: disabling timeout with --gdb
Copying tests/vm/pt-grow-pusha to scratch partition...
qemu -hda /tmp/N2JbACdqyV.dsk -m 4 -net none -nographic -s -S
PiLo hda1
Loading............
Kernel command line: -q -f extract run pt-grow-pusha
Pintos booting with 4,088 kB RAM...
382 pages available in kernel pool.
382 pages available in user pool.
Calibrating timer... 419,020,800 loops/s.
hda: 13,104 sectors (6 MB), model "QM00001", serial "QEMU HARDDISK"
hda1: 205 sectors (102 kB), Pintos OS kernel (20)
hda2: 4,096 sectors (2 MB), Pintos file system (21)
hda3: 98 sectors (49 kB), Pintos scratch (22)
hda4: 8,192 sectors (4 MB), Pintos swap (23)
filesys: using hda2
scratch: using hda3
swap: using hda4
Formatting file system...done.
Boot complete.
Extracting ustar archive from scratch device into file system...
Putting 'pt-grow-pusha' into the file system...
Erasing ustar archive...
Executing 'pt-grow-pusha':
(pt-grow-pusha) begin
Page fault at 0xbfffefe0: not present error writing page in user context.
pt-grow-pusha: dying due to interrupt 0x0e (#PF Page-Fault Exception).
Interrupt 0x0e (#PF Page-Fault Exception) at eip=0x804809c
cr2=bfffefe0 error=00000006
eax=bfffff8c ebx=00000000 ecx=0000000e edx=00000027
esi=00000000 edi=00000000 esp=bffff000 ebp=bfffffa8
cs=001b ds=0023 es=0023 ss=0023
pt-grow-pusha: exit(-1)
Execution of 'pt-grow-pusha' complete.
Timer: 71 ticks
Thread: 0 idle ticks, 63 kernel ticks, 8 user ticks
hda2 (filesys): 62 reads, 200 writes
hda3 (scratch): 97 reads, 2 writes
hda4 (swap): 0 reads, 0 writes
Console: 1359 characters output
Keyboard: 0 keys pressed
Exception: 1 page faults
Powering off...
To have the GDB debugger run and stop at the desired location:
gdb filename <--start debug session
br main <--set a breakpoint at the first line of the main() function
r <--run until that breakpoint is reached
br filename.c:linenumber <--set another breakpoint at the desired line of code
c <--continue until the second breakpoint is encountered
The debugger will stop at the desired location in the file, IF it ever actually gets there.
When I debug this project I have to step through the whole program
until I find what caused error/exception which is very time consuming.
There is probably a faster way to do this.
Normally what you would do is set a breakpoint just before the error. Then your program will run at full speed, without your intervention, until it reaches that point.
There are several wrinkles here.
First, sometimes it is difficult to know where to put the breakpoint. In this case I suppose I would look for the code that is printing the message, then work backward from there. Sometimes you have to stop at the failure point, examine the stack, set a new breakpoint further up, and re-run the program.
Then there are the mechanics of setting the breakpoint. One simple way is to break by function name, like break my_function. Another is to use the file name and line number, like break my_file.c:73.
Finally, sometimes a breakpoint can be hit many times before the failure is seen. You can use ignore counts (see help ignore) or conditional breakpoints (like break my_function if variable == 27) to limit the number of stops.
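In Pintos specifically, a natural first breakpoint is the kernel's page-fault handler; assuming the stock source layout (where the handler is page_fault() in userprog/exception.c), a session could look like:

(gdb) break page_fault
(gdb) continue
(gdb) backtrace

When it stops, the backtrace together with the eip in the crash dump (0x804809c above) points you at the faulting code.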
GDB is complaining that my source file is more recent than the executable, and it appears the debugging information is indeed related to an older version of the source file, because gdb is stopping on a blank line:
Program received signal SIGSEGV, Segmentation fault.
0x0000000000000000 in ?? ()
(gdb) up
#1 0x00007ffff7ba2d88 in CBKeyPairGenerate (keyPair=0x602010) at library/src/CBHDKeys.c:246
warning: Source file is more recent than executable.
246
(gdb) list
241 if (versionBytes == CB_HD_KEY_VERSION_TEST_PUBLIC
242 || versionBytes == CB_HD_KEY_VERSION_TEST_PRIVATE)
243 return CB_NETWORK_TEST;
244
245 return CB_NETWORK_UNKNOWN;
246
247 }
248
249 uint8_t * CBHDKeyGetPrivateKey(CBHDKey * key) {
250
But the executable is more recent than the source file, see here:
$ ls -l library/src/CBHDKeys.c
-rw-r--r-- 1 matt matt 9249 Apr 29 22:40 library/src/CBHDKeys.c
$ ls -l bin/noLowerAddressGenerator
-rwxr-xr-x 1 matt matt 17845 Apr 30 15:52 bin/noLowerAddressGenerator
I tried rebuilding after make clean and ccache -C, but the same problem occurs. When I updated the source file I only added whitespace, so the program logic remains the same. I feel that has something to do with it, but since I cleared the ccache and cleaned the build and bin directories with make clean, I'm not sure what is going on.
Versions:
GNU Make 3.81
gcc (Debian 4.8.2-16) 4.8.2
GNU gdb (GDB) 7.6.2 (Debian 7.6.2-1)
ccache version 3.1.9
SolydXK - SMP Debian 3.13.5-1 (2014-03-04)
Perhaps you're not using the most recent compiled version of the code, if it's in a shared library. You could use ldd noLowerAddressGenerator to see the library dependencies of your program; I don't know if it's possible from within GDB to locate the relevant library, but there ought to be a way (please comment or edit if you know how).
If this is indeed the case, you might want to set environment LD_LIBRARY_PATH in GDB prior to running the program, to place your newly-built library ahead of any installed ones. You could look into setting the RPATH ELF variable when linking, but that's likely to be less help.
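A sketch of what that could look like inside GDB (the library path is hypothetical); as a bonus, GDB's info sharedlibrary command lists which shared objects the process actually loaded, which helps confirm the suspicion:

(gdb) set environment LD_LIBRARY_PATH /home/matt/project/library/bin
(gdb) run
(gdb) info sharedlibrary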
Another possibility is to run your debugger on a system where you know the library isn't installed. I've had good results using schroot to keep build/debug/install environments separated.
I have to start by saying that I am very much a programming noob. I do not understand all the compiler options or nuances of the IDE, not by a long shot. But I am trying to teach myself more about native programming languages. (I'm decent with C#, but that is much easier than C, as I am discovering.)
Today, I wrote this small program in C. It is a console/command-line program. I used Visual Studio 2012, and my development machine alternates between Windows 7 and 8, 64-bit. To start, I created a new VC++ project and chose a Blank Project. Then I created a new app.c file. I also created a *.rc file to give the executable some extra properties like "File Version" and "Company Name" when you browse the file properties in Windows Explorer. Then I went to the properties of the project, chose Configuration Properties -> C/C++ -> Code Generation, and changed Runtime Library to "Multi-threaded (/MT)" so that I wouldn't have to distribute the msvcr100.dll file along with my executable.
In the app.c file, I placed the following code:
#include <stdio.h>
#include <string.h>
#include <Windows.h>
#include <WtsApi32.h>
#pragma comment(lib, "WtsApi32.lib")
void main(int argc, char *argv[])
{
char *helpMsg = "blah";
char *hostName, *connState = "";
char *addrFamily = "";
HANDLE hHost = NULL;
...stuff and so forth and so on...
}
Then I built/compiled the program, and the executable works just fine on Windows 7, 8, Server 2008 R2, and Server 2012, all 64-bit. But when I try to run the program on Server 2003 (and I am guessing Windows XP, etc., as well), I am greeted with the Windows dialog box:
"Foo.exe is not a valid Win32 application."
So my question is: is there something obvious/simple I am missing that will allow this executable to also work on earlier XP/2003/32-bit platforms? I do not believe I am using any 64-bit-exclusive features in my program. But I figured that since I chose "Blank Project" instead of "Win32 Console Application", I may have skipped some setting.
Edit: Here is the dumpbin.exe /headers output when run against my exe:
Microsoft (R) COFF/PE Dumper Version 11.00.50727.1
Copyright (C) Microsoft Corporation. All rights reserved.
Dump of file C:\users\me\Release\foo.exe
PE signature found
File Type: EXECUTABLE IMAGE
FILE HEADER VALUES
14C machine (x86)
5 number of sections
50F604BC time date stamp Tue Jan 15 19:39:08 2013
0 file pointer to symbol table
0 number of symbols
E0 size of optional header
102 characteristics
Executable
32 bit word machine
OPTIONAL HEADER VALUES
10B magic # (PE32)
11.00 linker version
7800 size of code
A200 size of initialized data
0 size of uninitialized data
16A7 entry point (004016A7) _mainCRTStartup
1000 base of code
9000 base of data
400000 image base (00400000 to 00414FFF)
1000 section alignment
200 file alignment
6.00 operating system version
0.00 image version
6.00 subsystem version
0 Win32 version
15000 size of image
400 size of headers
0 checksum
3 subsystem (Windows CUI)
8140 DLL characteristics
Dynamic base
NX compatible
Terminal Server Aware
100000 size of stack reserve
1000 size of stack commit
100000 size of heap reserve
1000 size of heap commit
0 loader flags
10 number of directories
0 [ 0] RVA [size] of Export Directory
D374 [ 3C] RVA [size] of Import Directory
11000 [ 538] RVA [size] of Resource Directory
0 [ 0] RVA [size] of Exception Directory
0 [ 0] RVA [size] of Certificates Directory
12000 [ C04] RVA [size] of Base Relocation Directory
9160 [ 38] RVA [size] of Debug Directory
0 [ 0] RVA [size] of Architecture Directory
0 [ 0] RVA [size] of Global Pointer Directory
0 [ 0] RVA [size] of Thread Storage Directory
CF98 [ 40] RVA [size] of Load Configuration Directory
0 [ 0] RVA [size] of Bound Import Directory
9000 [ 118] RVA [size] of Import Address Table Directory
0 [ 0] RVA [size] of Delay Import Directory
0 [ 0] RVA [size] of COM Descriptor Directory
0 [ 0] RVA [size] of Reserved Directory
SECTION HEADER #1
.text name
7670 virtual size
1000 virtual address (00401000 to 0040866F)
7800 size of raw data
400 file pointer to raw data (00000400 to 00007BFF)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
60000020 flags
Code
Execute Read
SECTION HEADER #2
.rdata name
49E2 virtual size
9000 virtual address (00409000 to 0040D9E1)
4A00 size of raw data
7C00 file pointer to raw data (00007C00 to 0000C5FF)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
40000040 flags
Initialized Data
Read Only
Debug Directories
Time Type Size RVA Pointer
-------- ------ -------- -------- --------
50F604BC cv 61 0000CFE0 BBE0 Format: RSDS, {582D0FF2-59C1-4633-AF2A-E4A4AD6BFA2C}, 1, C:\Users\me\Release\users.pdb
50F604BC feat 10 0000D044 BC44 Counts: Pre-VC++ 11.00=0, C/C++=116, /GS=116, /sdl=0
SECTION HEADER #3
.data name
2C04 virtual size
E000 virtual address (0040E000 to 00410C03)
E00 size of raw data
C600 file pointer to raw data (0000C600 to 0000D3FF)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
C0000040 flags
Initialized Data
Read Write
SECTION HEADER #4
.rsrc name
538 virtual size
11000 virtual address (00411000 to 00411537)
600 size of raw data
D400 file pointer to raw data (0000D400 to 0000D9FF)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
40000040 flags
Initialized Data
Read Only
SECTION HEADER #5
.reloc name
235C virtual size
12000 virtual address (00412000 to 0041435B)
2400 size of raw data
DA00 file pointer to raw data (0000DA00 to 0000FDFF)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
42000040 flags
Initialized Data
Discardable
Read Only
Summary
3000 .data
5000 .rdata
3000 .reloc
1000 .rsrc
8000 .text
I have also tried going to Project Properties -> Linker -> System: Minimum Required Version and changing that to 5.00 and 1.00 or whatever, but it has no effect. dumpbin.exe still reports the OS version as 6.00. I have even used editbin.exe /version 5.00 on the exe and no errors were reported... and yet dumpbin.exe still reports 6.00 for the OS version.
VS2012 originally shipped without supporting XP/2003. The updated CRT and runtime support libraries use too many Windows API functions that are not available on those operating systems. This created quite a stir among its customers, to put it mildly, and they re-engineered the libraries to dynamically bind to these functions and limp along if they are missing. This was made available in Update 1; you'll need to use Project + Properties, General, Platform Toolset = v110_xp to build programs that use those libraries.
Note how it changes a linker setting, the important one: Linker > System > Minimum Required Version = "5.01", which ensures that the executable file is marked as compatible with the XP subsystem version. You'll also build against SDK version 7.1, the last one that is still compatible with XP.
When you use the default toolset (v110) then you target sub-system 6.00 and SDK version 8. Version 6.00 was the last major kernel revision, started with Vista.
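A quick way to verify what a given build was actually marked with (the executable name is a placeholder):

dumpbin /headers app.exe | findstr /c:"subsystem version"

A v110_xp build should report 5.01 here rather than 6.00.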
A brief overview of the new API functions being used, to give you a (very rough) idea of what is missing in the XP version:
FlsAlloc, FlsFree, FlsGetValue, FlsSetValue : safe thread-local storage
InitializeCriticalSectionEx, CreateSemaphoreEx : safety
SetThreadStackGuarantee : stability
CreateThreadPoolTimer, SetThreadPoolTimer, WaitForThreadPoolTimerCallbacks, CloseThreadPoolTimer : cheaper timers
CreateThreadPoolWait, SetThreadPoolWait, CloseThreadPoolWait : cheaper waits?
FlushProcessWriteBuffers, GetCurrentProcessorNumber, GetLogicalProcessorInformation : threading
FreeLibraryWhenCallbackReturns : stability?
CreateSymbolicLink : functionality
InitOnceExecuteOnce : unknown
SetDefaultDllDirectories : unknown
EnumLocalesEx, CompareStringEx, GetDateFormatEx, GetLocalInfoEx, GetTimeFormatEx, GetUserDefaultLocaleName, IsValidLocaleName, LCMapStringEx : better locale support
I figured it out myself. (But thank you, Hans, for steering me in the right direction.) For some reason, even with Update 1, and even after setting my toolset to v110_xp and setting the minimum required version to 5.01 in the linker options, the resulting dumpbin app.exe /headers still reports a minimum operating system version of 6.0.
So I simply ran
editbin.exe app.exe /SUBSYSTEM:CONSOLE,5.01 /OSVERSION:5.1
And the executable now runs just fine on older operating systems. I'm thinking there still might be a little bit of a bug somewhere in Visual Studio.
The MSVC Team Blog says that when using MSBuild or DEVENV from the command-line with the v110_xp platform toolset, no other changes are necessary. This information is incorrect/incomplete. The /SUBSYSTEM linker argument and associated "Minimum Required Version" must also be set appropriately.
The MSDN documentation for /ENTRY states that, if the /SUBSYSTEM argument is not specified, the subsystem and entry point are determined automatically. My hunch is that when this happens, the subsystem's "Minimum Required Version" argument is also automatically overridden.
The v110_xp toolset automatically specifies the SUBSYSTEM's MRV ("5.1" (WindowsXP)) but not the SUBSYSTEM. As such, the MRV will be overridden, for example, by the linker to "6.0". Running the application will then cause WindowsXP to show the error message stating that the application "is not a valid Win32 application."
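The upshot seems to be that, when building from the command line, it is safest to hand the linker both pieces explicitly, e.g. for a 32-bit XP console target:

link /SUBSYSTEM:CONSOLE,5.01 ...

which sets the subsystem and its minimum required version in a single flag.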
I'd like to create a sparse file such that all-zero blocks don't take up actual disk space until I write data to them. Is it possible?
There seems to be some confusion as to whether the default Mac OS X filesystem (HFS+) supports holes in files. The following program demonstrates that this is not the case.
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
void create_file_with_hole(void)
{
int fd = open("file.hole", O_WRONLY|O_TRUNC|O_CREAT, 0600);
write(fd, "Hello", 5);
lseek(fd, 99988, SEEK_CUR); // Make a hole
write(fd, "Goodbye", 7);
close(fd);
}
void create_file_without_hole(void)
{
int fd = open("file.nohole", O_WRONLY|O_TRUNC|O_CREAT, 0600);
write(fd, "Hello", 5);
char buf[99988];
memset(buf, 'a', 99988);
write(fd, buf, 99988); // Write lots of bytes
write(fd, "Goodbye", 7);
close(fd);
}
int main()
{
create_file_with_hole();
create_file_without_hole();
return 0;
}
The program creates two files, each 100,000 bytes in length, one of which has a hole of 99,988 bytes.
On Mac OS X 10.5 on an HFS+ partition, both files take up the same number of disk blocks (200):
$ ls -ls
total 400
200 -rw------- 1 user staff 100000 Oct 10 13:48 file.hole
200 -rw------- 1 user staff 100000 Oct 10 13:48 file.nohole
Whereas on CentOS 5, the file without holes consumes 88 more disk blocks than the other:
$ ls -ls
total 136
24 -rw------- 1 user nobody 100000 Oct 10 13:46 file.hole
112 -rw------- 1 user nobody 100000 Oct 10 13:46 file.nohole
As in other Unixes, it's a feature of the filesystem. Either the filesystem supports it for ALL files or it doesn't. Unlike Win32, you don't have to do anything special to make it happen. Also unlike Win32, there is no performance penalty for using a sparse file.
On MacOS, the default filesystem is HFS+ which does not support sparse files.
Update: MacOS used to support UFS volumes with sparse file support, but that has been removed. None of the currently supported filesystems feature sparse file support.
This thread has become a comprehensive source of info about sparse files. Here is the missing part for Win32:
A decent article with examples
A tool that estimates whether it makes sense to make a file sparse
Regards
hdiutil can handle sparse images and files but unfortunately the framework it links against is private.
You could try defining the external symbols exposed by the DiskImages framework (below), but this is most likely not acceptable for production code; plus, since the framework is private, you'd have to reverse-engineer its use cases.
cristi:~ diciu$ otool -L /usr/bin/hdiutil
/usr/bin/hdiutil:
/System/Library/PrivateFrameworks/DiskImages.framework/Versions/A/DiskImages (compatibility version 1.0.8, current version 194.0.0)
[..]
cristi:~ diciu$ nm /System/Library/PrivateFrameworks/DiskImages.framework/Versions/A/DiskImages | awk -F' ' '{print $3}' | c++filt | grep -i sparse
[..]
CSparseFile::sector2Band(long long)
CSparseFile::addIndexNode()
CSparseFile::readIndexNode(long long, SparseFileIndexNode*)
CSparseFile::readHeaderNode(CBackingStore*, SparseFileHeaderNode*, unsigned long)
[... cut for brevity]
Later Edit
You could use hdiutil as an external process and have it create a sparse disk image for you. From the C process you would then create a file in the (mounted) sparse disk image.
If you seek (fseek, ftruncate, ...) past the end, the file size will be increased without allocating blocks until you write to the holes. But there's no way to create a magic file that automatically converts blocks of zeroes to holes. You have to do it yourself.
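A minimal sketch of that approach (the filename and size are arbitrary):

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("big.img", O_WRONLY|O_CREAT|O_TRUNC, 0600);
    ftruncate(fd, 1024L * 1024 * 1024); /* 1 GiB logical size, no blocks written */
    close(fd);
    return 0;
}

On a filesystem with hole support, ls -l reports a gigabyte while ls -ls shows almost no blocks consumed.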
This patch may be helpful to look at (it makes the OpenBSD cp command insert holes instead of writing zeroes).
If you want portability, the last resort is to write your own access functions so that you manage an index and a set of blocks yourself.
In essence you manage a single file the way the OS manages a disk: keeping the chain of blocks that belong to the file, the bitmap of allocated/free blocks, and so on.
Of course this will lead to non-optimized and slower access; I would recommend this approach only if the requirement to save space is absolutely critical and you have enough time to write a robust set of access functions.
And even in that case, I would first investigate whether your problem really needs this solution. Perhaps you should store your data differently?
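As a rough illustration of what such a layer might look like (purely a sketch; the names, the fixed block size, and the in-memory index are invented for the example):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE  4096u
#define MAX_BLOCKS  1024u
#define UNALLOCATED UINT32_MAX   /* marks a hole in the index */

typedef struct {
    FILE    *backing;            /* data file holding only allocated blocks */
    uint32_t map[MAX_BLOCKS];    /* logical block number -> physical slot */
    uint32_t next_slot;          /* next unused physical slot */
} sparse_file;

/* Read one logical block; holes read back as zeroes and cost no space. */
static int sparse_read_block(sparse_file *sf, uint32_t block, void *buf)
{
    if (block >= MAX_BLOCKS) return -1;
    if (sf->map[block] == UNALLOCATED) {
        memset(buf, 0, BLOCK_SIZE);
        return 0;
    }
    fseek(sf->backing, (long)sf->map[block] * BLOCK_SIZE, SEEK_SET);
    return fread(buf, BLOCK_SIZE, 1, sf->backing) == 1 ? 0 : -1;
}

/* Write one logical block, allocating a physical slot on first touch. */
static int sparse_write_block(sparse_file *sf, uint32_t block, const void *buf)
{
    if (block >= MAX_BLOCKS) return -1;
    if (sf->map[block] == UNALLOCATED)
        sf->map[block] = sf->next_slot++;
    fseek(sf->backing, (long)sf->map[block] * BLOCK_SIZE, SEEK_SET);
    return fwrite(buf, BLOCK_SIZE, 1, sf->backing) == 1 ? 0 : -1;
}

Persisting the map alongside the data (and replacing the fixed-size array with something growable) is where most of the real work would be.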
It looks like OS X supports sparse files on UDF volumes. I tried titaniumdecoy's test program on OS X 10.9, and it did generate a sparse file on a UDF disk image. Also, note that UFS is no longer supported in OS X, so if you need sparse files, UDF is the only natively supported file system that supports them.
I also tried the program on SMB shares. When the server is Ubuntu (ext4 filesystem) the program creates a sparse file, but 'ls -ls' through SMB doesn't show that. If you do 'ls -ls' on the Ubuntu host itself it does show the file is sparse. When the server is Windows XP (NTFS filesystem) the program does not generate a sparse file.