Optimising cross-platform build system - C

I'm looking for a cross-platform build system for C which helps to find good compiler flags on a specific machine. It would need some notion of testing for correctness, benchmarking for performance, and multiple versioning of the target, and perhaps even recognising the machine it is running on. For example, in a typical build I'd want to compare 64-bit versus 32-bit executables, with and without OpenMP and fast-math, with different optimisation levels, and builds by entirely different compilers. The ATLAS BLAS libraries are an impressive example here, but are a bit of a pain on Windows due to the shell scripting. Is this something that can be hacked onto systems like SCons or Waf? Any other suggestions?

Other than the one I'm thinking about writing when I'm done procrastinating, Boost Jam (bjam) would probably match your description most closely.

There is also CMake, but I think it would require a scripting layer to automate multi-target building and testing.

Related

Porting Autodesk Animator Pro to be cross platform

A previous relevant question of mine is here: Reverse Engineering old paint programs
I have set up my base of operations here: http://animatorpro.org
Wiki coming soon.
Okay, so now I have a 300,000-line legacy MS-DOS codebase. It's sort of a "be careful what you wish for" situation. I am not an experienced C programmer. I'm not entirely inexperienced either, but for all intents and purposes I'm a noob to the language and in particular the intricacies of its libraries. I am especially ignorant of the vagaries of the differences between C programs written specifically for MS-DOS and programs that are cross-platform. However, I have been studying this codebase for over a year now, and this is what I know about Animator Pro:
Compilers and tools used:
Watcom C compiler
tcmake (make program from Turbo C)
386asm, a specialised assembler for the Phar Lap DOS extender
and of course, the Phar Lap DOS extender itself
a selection of obscure DOS utilities
Much of the compilation seems to be driven by batch files. Though I have obtained copies of all these tools, I have not yet succeeded at compiling it (though I have compiled its older brother, Autodesk Animator Original).
It's got a plugin system that replicated DLLs before DLLs were available, based on REX. The plugin system handles:
Video Drivers (with a plethora of included VESA drivers)
Input drivers (including Wacom tablets and keyboards)
Drawing Tools
Inks (like Photoshop's filters, or blending modes)
Scripting Addons (essentially compiled scripts)
File formats
It's got its own script interpreter named POCO, based on the C language. The scripting language has enough power to do virtually all the things the plugin system can do, just slower.
Given this information, this is my development plan. Please criticise this. The source code is available in the link above, so you can easily, if you are so inclined, assess the situation yourself.
Compile with its original tools.
Switch to using DJGPP, and make the necessary changes to get it to compile with that, plus the original assembler.
Include the Allegro.cc "game" library, and switch over as much functionality to that library as possible, perhaps by simply writing new video and input drivers that use the Allegro API. I'm thinking Allegro rather than SDL because there is a DOS version of Allegro, and, fascinatingly, one of its core functions is the ability to play Animator Pro's native FLIC format.
Hopefully after 3, I will have eliminated most or all of the assembler in the project. I say hopefully because it's in an obscure dialect that doesn't assemble in any modern free assembler without significant modification; I have tried them all. Whatever is left gets converted to assemble under NASM, or to C code if I can work out what the assembler actually does.
Switch the DOS extender from Phar Lap to HX DOS (http://www.japheth.de/HX.html), which promises to replicate as much of the Win32 API as possible. Then make all the necessary code changes for that to work.
Switch to the Win32 version of Allegro, assuming that the Win32 version can run on top of HX DOS. Make any further necessary changes.
Modify the plugin system to use some kind of standard cross-platform plugin library. What this would be, I have no idea. Maybe you can offer some suggestions? I talked to the developer who originally wrote the plugin system, and he said some of the things it does aren't possible on modern OSes because of segmentation restrictions. I'm not sure what this means, but I'm guessing all the plugins will need to be rewritten almost from scratch.
Magically, I get all of the above done, and we can try to make it run on Windows, OS X, and Linux, while dealing with other cross-platform niggles like long file names and things I haven't thought of.
Anyone got a problem with any of this? Is Allegro a good choice? If not, why not? What would you do about the plugin system? What would you do differently? Is this whole thing foolish? Should I just rewrite it from scratch, using the original as inspiration? (It would apparently take the original developer "about a month" to do that.)
One thing I haven't covered above is the text/font system. I'm not sure what to do about that: Animator Pro has its own custom font format, but it is also able to use PostScript Type 1 fonts and some other formats.
My biggest concern with your plan, in a nutshell: Your approach seems to be to attempt to keep the whole enormous thing working at all times, tweaking the environment ever-further away from DOS. During each tweak to the environment, that means you will have approximately a billion subtle assumptions that might have broken at once, none of which you necessarily understand yet. Untangling them all at once will be incredibly painful.
If I were doing the port, my approach would be to disable as much code as possible to get SOMETHING running in a modern environment, and bring the parts back online, one piece at a time. Write a simple test harness program that loads a display driver and draws some stuff, and compile it for DOS to make sure you understand the interface. Then write some C code that implements the same interface, but with Allegro (or SDL or SFML), and make that program work under Windows or Linux. When the output differs, you have a simple test case to work from.
Your entire job on this port is swapping out implementations of various interfaces and functions with completely new ones. This is a job that unit testing excels at. Don't write any new code without a test of some kind that runs on the old code under DOS! Make your potential problems as small and simple as you possibly can. Port assembly code instead of rewriting it only if you're reasonably confident that it will actually make your job easier (i.e., algorithmic stuff that assembles fine with few tweaks under NASM). Don't bite off a bigger piece than you can comfortably fit in your brain at once.
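To make that concrete, here's a rough sketch of the shape such a harness could take. Every name in it (vdriver, set_pixel, and so on) is made up for illustration; the real Animator Pro driver interface will look different, but the idea of driving two implementations through one struct of function pointers and comparing checksums carries over.

/* Hypothetical interface-swapping test harness; none of these names
   come from the real Animator Pro driver API. */
#include <stdio.h>
#include <stdlib.h>

/* The interface both implementations must satisfy. */
typedef struct vdriver {
    int width, height;
    void (*set_pixel)(struct vdriver *self, int x, int y, unsigned char c);
    unsigned char *frame;   /* backing store, used for comparison */
} vdriver;

/* "Old" implementation: a plain memory buffer, standing in for the
   DOS driver.  A second implementation would draw via Allegro/SDL. */
static void mem_set_pixel(vdriver *d, int x, int y, unsigned char c)
{
    d->frame[y * d->width + x] = c;
}

static vdriver *make_mem_driver(int w, int h)
{
    vdriver *d = malloc(sizeof *d);
    d->width = w;
    d->height = h;
    d->frame = calloc((size_t)w * h, 1);
    d->set_pixel = mem_set_pixel;
    return d;
}

/* The test: draw a diagonal line, then checksum the frame.  Run the
   same test against the DOS build and the new build and diff the
   printed checksums. */
static unsigned long draw_and_checksum(vdriver *d)
{
    unsigned long sum = 0;
    int i;
    for (i = 0; i < d->width && i < d->height; i++)
        d->set_pixel(d, i, i, (unsigned char)(i & 0xFF));
    for (i = 0; i < d->width * d->height; i++)
        sum = sum * 31 + d->frame[i];
    return sum;
}

int main(void)
{
    vdriver *d = make_mem_driver(320, 200);
    printf("checksum: %lu\n", draw_and_checksum(d));
    free(d->frame);
    free(d);
    return 0;
}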
I, for one, look forward to seeing your progress! I think what you're attempting to do is great. Thanks for doing it.
Hmmm - I might approach it by writing an OpenGL video "driver" for it. Today's machines are fast enough, with tons of RAM, that you could do all the pixel-specific algorithms on the main CPU into a back buffer and it would work. As the "generic" VGA driver just mapped the video buffer to a pointer, this would be a place to start. There was a zoom mode in the UI, so you can look at the pixels on a high-res display.
It is often very difficult to take an existing non-trivial codebase that wasn't written with portability in mind (and you mention a few such obstacles) and then try to make it portable. There will be a lot of problems along the way. It is probably a better idea to start from scratch and rewrite the code using the existing code as a reference only. If you start from scratch, you can leverage an existing portable UI solution, like Qt, in your new project.

C compiler producing lightweight executables

I'm currently using MSVC for C++, but as I'm switching to C to write a very performance-intensive program (an interpreter), I have to search for a fitting C compiler.
I've looked at some binaries produced by Turbo C, and even though it's old, they seem pretty straightforward and optimized.
Now I don't know what the best compiler for building an interpreter is, but maybe you can help me.
I've considered GCC, but as I don't know much about it, I can't be really sure.
99.9% of a program's performance depends on the code you write and the language you choose.
You can safely ignore the performance of the compiler.
Stick with MSVC... and don't waste time :)
If I were you, I would take the approach of worrying less about the compiler and worrying more about your own code. Write the code for the interpreter in a reasonable way. Then, profile it, and optimize spots based on how much time they take. That is more likely to produce a performance benefit than using a particular compiler.
If you want a lightweight program, it is not the compiler you need to worry about so much as the code you write and the libraries you use. Most compilers will produce similar results from the same source code.
For example, using C++ with MFC, a basic Windows application used to start off at about 900 kB and grow rapidly. Linking with the dynamic MFC DLLs would get you down to a few hundred kB. But by dropping MFC totally - using the Win32 APIs directly - and using a minimal C runtime, it was relatively easy to implement the same thing in an .exe of about 25 kB or less (IIRC - it's been a long time since I did this).
So ditch the libraries and get back to proper low-level C (or even C++ if you don't use too many "clever" features), and you can easily write very compact applications.
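To give a feel for what "Win32 APIs directly" means, here is about the smallest useful Windows program in plain C. Exact executable sizes will vary with compiler and CRT settings, so treat the numbers above as ballpark figures.

/* Minimal Win32 program in plain C: no MFC, no C++ runtime.
   Build with e.g.:  cl tiny.c user32.lib  */
#include <windows.h>

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                   LPSTR lpCmdLine, int nCmdShow)
{
    (void)hInstance; (void)hPrevInstance; (void)lpCmdLine; (void)nCmdShow;
    MessageBoxA(NULL, "Hello from a tiny executable.", "tiny", MB_OK);
    return 0;
}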
edit
I've just realised I was confused by the question title into talking about lightweight applications as opposed to concentrating on performance, which appears to be the real thrust of the question. If you want performance, then there is no specific need to use C, or to move to a painful development environment; just write good, high-performance code. Fundamentally this is about using the correct designs and algorithms and then profiling and optimising the resulting code to eliminate bottlenecks and inefficiencies. Note that these days you may achieve a much bigger bang for your buck by switching to a multithreaded approach than by concentrating on raw code optimisation alone - make sure you utilise the hardware well.
You can use GCC, through MinGW, Eclipse CDT, or one of the other Windows ports. You can optimize for executable size, speed of the resulting executable, or speed of compilation.
C++ was designed to be largely backward compatible with C, so any C++ compiler should be able to compile plain C. You might want to tell it that the source is C and not C++, so the compiler doesn't do name mangling, etc. If the compiler is very good with C++, it should be equally good, or better, with C, because C is much simpler.
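For example, MSVC's /TC switch forces sources to be compiled as C, and the usual way to keep a header usable from both languages is the extern "C" guard (the function name here is hypothetical):

/* mylib.h - callable from both C and C++.
   The extern "C" block disables C++ name mangling so the linker
   sees the same symbol names from either compiler. */
#ifndef MYLIB_H
#define MYLIB_H

#ifdef __cplusplus
extern "C" {
#endif

int mylib_eval(const char *expr);  /* hypothetical interpreter entry point */

#ifdef __cplusplus
}
#endif

#endif /* MYLIB_H */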
I would suggest sticking with MSVC. It's a very decent system. If you are not convinced, compare your options: build the same program with multiple compilers, look at the assembly they produce, measure the resulting executables' performance, and so on.

Detecting CPU capability without assembly

I've been looking at ways to determine the CPU and its capabilities (e.g. SSE, SSE2, etc.).
However, all the ways I've found involve assembly code using the cpuid instruction. Given the differing ways of doing assembly in C/C++ between compilers and even targets (no inline assembly for 64-bit targets under VC), I'd rather avoid that.
Is there some simple library around, or OS functions (for both windows and linux) for getting this information?
Currently I'm only interested in platforms using x86 and x86-64 CPUs, and I definitely need to support at least AMD and Intel.
EDIT: At first, I googled for "libcpuid", remembering a conversation with peer programmers, and the Google search pointed me to this SourceForge page, which looks quite outdated. Then I noticed it was the project page of "libcpu".
Actually, there really is a libcpuid that might suit your needs.
Last time I looked for such a library, I found crisscross. It's C++ though.
There is also geekinfo.
But I don't know a cross-platform library that just does cpuid.
Under Linux, peek in /proc/cpuinfo.
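A quick sketch of that approach in plain C (Linux-only; the flag names are whatever the kernel prints, e.g. "sse2"):

/* Linux-only sketch: scan /proc/cpuinfo for a feature flag. */
#include <stdio.h>
#include <string.h>

static int cpu_has_flag(const char *flag)
{
    char line[4096];
    FILE *f = fopen("/proc/cpuinfo", "r");
    int found = 0;

    if (!f)
        return -1;                      /* not Linux, or /proc missing */
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "flags", 5) != 0)
            continue;
        /* crude substring search: "sse" would also match "ssse3",
           so pass the full flag name you care about */
        if (strstr(line, flag))
            found = 1;
        break;
    }
    fclose(f);
    return found;
}

int main(void)
{
    printf("sse2: %d\n", cpu_has_flag("sse2"));
    return 0;
}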
This is so hardware-dependent (an aspect the OS usually shields you from) that requiring it to be cross-platform is a tad hard.
It would be useful to have, but I fear you may just have to enumerate the cross-product of the CPUs and features you would like to support, times the number of OSes (here, two), and write tests for all. I do not think it has been done - and someone will surely correct me if I overlooked something.
That would be a useful library to have, so good luck with the endeavor!

How would I go about creating my own VM?

I'm wondering how to create a minimal virtual machine that'll be modeled after the Intel 16-bit system. This would be my first actual C project; most of my code is 100 lines or less, but I have the core fundamentals down, have read K&R, and understand how things ought to work, so this is pretty much a test of wits.
Could anyone guide me as far as documentation, tools, tutorials, or plain old tips/pointers on how to go about this? Thus far I understand that I require somewhere to store data, a CPU of sorts, and some sort of mechanism that functions as an interrupt controller.
I'm doing this to learn: systems internals, ASM internals, and C - three facets of computing that I want to learn in a single project.
Please be kind enough not to tell me to do something simpler - that would only be annoying. :)
Thanks for reading, and hopefully writing!
Virtual machines fall into two categories: those that interpret the code one instruction at a time and those that compile the code to native instructions (e.g. "JIT").
The interpretation category is usually built around an instruction execution loop, using a switch statement, computed gotos, or function pointers to determine how to execute each instruction.
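As an illustration of the switch-statement variant, here is a toy stack machine with a made-up four-opcode instruction set; a real VM is the same loop with many more cases:

/* Tiny switch-dispatched interpreter for a made-up stack machine. */
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

static void run(const int *code)
{
    int stack[64];
    int sp = 0;          /* stack pointer */
    int pc = 0;          /* program counter */

    for (;;) {
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = code[pc++];          break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp];  break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]);     break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    /* equivalent to: print(2 + 3) */
    const int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    run(program);
    return 0;
}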
There is a fun platform that is worth studying for its simplicity and fun: Corewars.
Corewars is a programming challenge game where programs written in "Redcode" run on a MARS VM. There are many MARS VMs, typically written in C.
It has also inspired 8086-based versions, where programs written in 8086 assembler battle.
Well, for starters I would pick up a reference book on assembly language for the processor you intend to virtualize, like the 80286 or similar.
For a JIT, you might want to dynamically generate and execute x86 code.
If you want to write a Virtual Machine using the x86 VMM technology you will need quite a bit of things.
There are a few instructions that are critical, such as VM_ENTER/VM_EXIT (the names differ depending on the chip; AMD and Intel use different names, but the functionality is the same). These instructions are privileged, and therefore you will need to write a kernel module to use them.
The first step for your VM is to boot it, and for that you will need a 'BIOS' to be loaded. Then you need to emulate devices, etc. You could even run an old version of MS-DOS in such a VM if you wanted to.
All in all, it clearly isn't trivial and requires a lot of time and effort.
Now, you could do something similar to what VMware used to do before virtualization-ready CPUs appeared.

Writing cross-platform apps in C

What things should be kept most in mind when writing cross-platform applications in C? Targeted platforms: 32-bit Intel-based PC, Mac, and Linux. I'm especially looking for the type of versatility that Jungle Disk has in their USB desktop edition ( http://www.jungledisk.com/desktop/download.aspx )
What are tips and "gotchas" for this type of development?
I maintained for a number of years an ANSI C networking library that was ported to close to 30 different OSes and compilers. The library didn't have any GUI components, which made it easier. We ended up abstracting out into dedicated source files any routine that was not consistent across platforms, and used #defines where appropriate in those source files. This kept the code that was adjusted per platform isolated away from the main business logic of the library. We also made extensive use of typedefs and our own dedicated types, so that we could easily change them per platform if needed. This made the port to 64-bit platforms fairly easy.
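A hypothetical fragment of what such a dedicated-types header can look like (these days <stdint.h> covers most of the integer cases, but the pattern generalises to anything that differs per platform; all names here are made up):

/* porttypes.h - illustrative only: business logic includes this and
   uses only these names; only this header changes per port. */
#ifndef PORTTYPES_H
#define PORTTYPES_H

#if defined(_WIN32)
  typedef __int32          port_int32;
  typedef unsigned __int32 port_uint32;
  #define PORT_PATH_SEP '\\'
#else  /* assume a Unix-like platform with <stdint.h> */
  #include <stdint.h>
  typedef int32_t  port_int32;
  typedef uint32_t port_uint32;
  #define PORT_PATH_SEP '/'
#endif

#endif /* PORTTYPES_H */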
If you are looking to have GUI components, I would suggest looking at GUI toolkits such as wxWidgets (formerly wxWindows) or Qt (which are both C++ libraries).
Try to avoid platform-dependent #ifdefs, as they tend to grow exponentially when you add new platforms. Instead, try to organize your source files as a tree with platform-independent code at the root, and platform-dependent code on the "leaves". There is a nice book on the subject, Multi-Platform Code Management. Sample code in it may look obsolete, but ideas described in the book are still brilliantly vital.
Further to Kyle's answer, I would strongly recommend against trying to use the POSIX subsystem in Windows. It's implemented to an absolute bare minimum level, such that Microsoft can claim "POSIX support" on a feature-sheet tick box. Perhaps somebody out there actually uses it, but I've never encountered it in real life.
One can certainly write cross-platform C code, you just have to be aware of the differences between platforms, and test, test, test. Unit tests and a CI (continuous integration) solution will go a long way toward making sure your program works across all your target platforms.
A good approach is to isolate the system-dependent stuff in one or a few modules at most. Provide a system-independent interface from that module. Then build everything else on top of that module, so it doesn't depend on the system you're compiling for.
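Sketched out, that looks something like this (illustrative names; the build system picks one .c file per platform):

/* sysport.h - the system-independent interface everything else uses. */
#ifndef SYSPORT_H
#define SYSPORT_H
void sys_sleep_ms(unsigned ms);   /* one implementation per platform */
#endif

/* sysport_posix.c - built only on POSIX systems */
#include "sysport.h"
#include <time.h>

void sys_sleep_ms(unsigned ms)
{
    struct timespec ts = { ms / 1000, (long)(ms % 1000) * 1000000L };
    nanosleep(&ts, NULL);
}

/* sysport_win32.c - built only on Windows */
#include "sysport.h"
#include <windows.h>

void sys_sleep_ms(unsigned ms)
{
    Sleep(ms);
}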
XVT has a cross-platform GUI C API which is mature (15+ years) and sits on top of the native windowing toolkits. See www.xvt.com.
They support at least Linux, Windows, and Mac.
Try to write as much as you can with POSIX. Mac and Linux support POSIX natively, and Windows has a system that can run it (as far as I know - I've never actually used it). If your app is graphical, both Mac and Linux support X11 libraries (Linux natively, Mac through X11.app), and there are numerous ways of getting X11 apps to run on Windows.
However, if you're looking for true multi-platform deployment, you should probably switch to a language like Java or Python that's capable of running the same program on multiple systems with little or no change.
Edit: I just downloaded the application and looked at the files. It does appear to have binaries for all 3 platforms in one directory. If your concern is how to write apps that can be moved from machine to machine without losing settings, you should probably write all your configuration to a file in the same directory as the executable, and not touch the Windows registry or create any dot directories in the home folder of the user running the program on Linux or Mac. And as far as creating a cross-distribution Linux binary goes, 32-bit POSIX/X11 would probably be the safest bet. I'm not sure what JungleDisk uses, as I'm currently on a Mac.
There do exist quite a few portable libraries. Just some examples I've worked with in the past:
1) glib and GTK+
2) libcurl
3) libapr
Those cover nearly every platform, so they are extremely useful tools.
POSIX is fine on Unices, but I doubt it's that great on Windows, and besides, it gives us nothing for portable GUIs.
I also second the recommendation to separate code for different platforms into different modules/trees instead of ifdefs.
Also, I recommend checking beforehand what the differences between your platforms are and how you could abstract them. Some of it is OS-related (e.g. the annoying CR/CRLF/LF line endings in text files), some of it hardware-related. E.g., the previously mentioned POSIX compatibility doesn't stop you from writing:
int c;
fread(&c, sizeof(int), 1, file);  /* reads raw bytes in whatever order they sit in the file */
But on different hardware platforms the internal memory layout can be completely different (endianness), forcing you to use conversion functions on some of the target platforms.
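A common fix is to define the file format's byte order once and assemble the value from bytes, which works the same on every host. A minimal sketch, assuming the format is defined as little-endian (as most DOS-era formats were):

/* Portable replacement for the fread above. */
#include <stdint.h>
#include <stdio.h>

int read_le_uint32(FILE *file, uint32_t *out)
{
    unsigned char b[4];
    if (fread(b, 1, 4, file) != 4)
        return -1;                /* short read / EOF */
    *out = (uint32_t)b[0]
         | ((uint32_t)b[1] << 8)
         | ((uint32_t)b[2] << 16)
         | ((uint32_t)b[3] << 24);
    return 0;
}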
You can use NAppGUI for both console and desktop apps. The SDK uses ANSI C, and your code will work on Windows/macOS/Linux.
https://www.nappgui.com
It's free and OpenSource.
