I have a Unix C programming assignment in which I do some advanced C network programming. This is my first step into advanced programming, so I was wondering what the best combination of tools for this is on a Mac. Using an IDE like Eclipse is how I'd normally do it, but I have to write my own makefiles and such, so I would like to learn how it can be done effectively using maybe Emacs or Vim plus other tools. It will be quite a big project, so I am worried mostly about project management, debugging, and productivity. In essence, I want to learn how programmers do it in a professional environment without the bloated IDE part. I am using Snow Leopard. I will also delve into C++ and Python in the future, so maybe something that will be useful for those as well.
I know you're asking how to do it with makefiles/vi/etc., but on the Mac, Xcode is really the way to go, especially for large projects. It's a very effective wrapper that will call gcc and gdb and the linker for you. Especially when moving to a new platform, not having to worry about many of the pesky details will be a big leap in productivity. Its IDE debugger is quite awesome.
Of course you can also use makefiles etc. Many projects (just to take OpenSSL as an example) come with makefiles, and you can compile them on the Mac from the command line just like under the *nix operating systems, i.e. calling ./configure and then make. But setting up stuff like that (e.g. compiler options for universal binaries and such) is tedious, while in the IDE it's just a few options. Also, if you google for specific questions, you will find far more answers on how to do it with Xcode.
If you want to get started with Xcode, it's either on your Mac operating system CD (it just does not pre-install automatically) or you can download it from Apple. When you run it, just open a Mac OS X project of type "Application - Commandline Tool" and you'll have a project with a main.c set up in a minute. You can then run it as-is or run it in the debugger, and adding more source files to it is rather easy.
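The generated main.c is nothing more than the usual hello-world skeleton, something along these lines:

#include <stdio.h>

int main(int argc, const char *argv[])
{
    /* your code goes here */
    printf("Hello, World!\n");
    return 0;
}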
Xcode can be quite a beast for setting up an already large project (we ported a large project with DLLs and dependent exes (overall 250,000 lines of code) to the Mac, and just getting that all set up wasn't what you'd call a piece of cake), but if you start from scratch you'll easily grow into it.
Bottom line is that Xcode certainly is equipped to deal with large projects, and I cannot imagine a more productive way of doing it (I have used hand-written makefiles and such in the past, so I know both worlds).
Xcode is your friend. It's free and is a very nice IDE. When you launch Xcode, just start a new Console application (it'll be ANSI C).
Enjoy.
If learning the rudiments of Unix editors, shell programming, make, etc., is part of the assignment, then you just need to dive in and learn what you need to learn. Some good books will help. Obviously you need K&R. I always liked the O'Reilly books for Unix stuff, usually because they are the thinnest. I hate thick computer books because they never get read. You should also learn how to use the man pages.
Vim vs. Emacs is a religious choice. If you ask any Unix guy which is best, he will invariably tell you the one he learned first, because chances are he never learned the other. In my case, I've been using Vim so long that my escape key is worn out and the commands are hard-wired into my brain. Obviously, I think it's way better than Emacs (which I never learned!). If you are lucky enough to have a Mac as a workstation, install MacVim. It's great.
Make is complicated enough so that you will never really master it. Just learn enough to compile and link your program. You can always learn more if you need it.
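For a small project, "enough" really is just a handful of lines. A minimal makefile sketch might look like this (the program and file names - myprog, main.o, net.o, util.o - are placeholders for your own, and recipe lines must start with a tab):

# minimal makefile sketch -- names are placeholders
CC     = gcc
CFLAGS = -Wall -g
OBJS   = main.o net.o util.o

myprog: $(OBJS)
	$(CC) $(CFLAGS) -o myprog $(OBJS)

clean:
	rm -f myprog $(OBJS)

The built-in implicit rule turns each .c into a .o using CC and CFLAGS, so you don't even have to spell out the compile step.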
Version control is an interesting question... I use RCS for small stuff. Like vi, it is on every Unix machine. For really big projects, I use Subversion, but like editors, most people use whatever they learned first. Git people will say it's the only one to use, etc.
Command line debuggers are a pain, which is a main selling point for Xcode. I've used gdb, but I don't remember it as a pleasant experience. It's been so long since I used it, I can't even remember how to start it up. There must be better debuggers by now. Try Google.
Bottom line, all the things you mentioned are big topics. You need to take realistic bites of each and not get tangled in the weeds. It can take years to master them all.
Finally, I'd stay as far away from C++ as possible! Objective-C is much better. Personal prejudice!
Short version of question: How do I get started with C programming? Note that I am not asking for a tutorial on learning the C language (I can learn that easily enough). I need to set up the environment (I hope I'm asking this question clearly). Here's what I mean:
For my math thesis, I need to write a program in C on Gentoo Linux, using a library called CVODE/SUNDIALS. There is nobody (it seems) in my department who can help me set this up - my professor has left the computer work 100% to me because I have some programming background and he's a math geek. But my experience is with scripting languages (think VBA) and not full, powerful programming languages like C, where you have to deal with the compiler, linker, libraries, etc.
There is no development environment on the Linux cluster - or at least none that's friendly and has a debugger, that I've found. So, what I need to figure out is how to set up a C programming environment with the CVODE library on my PC (Win 7 x64), at little to no cost.
I have found plenty of tutorials on programming in C. I looked up Eclipse, which I have a little bit of experience with, as a development environment, but its instructions say you need to install a compiler, too.
What I would like is someone to tell me, in simple language that I can understand (which might be the most difficult part of this question) the big picture of what I need and what to do (and maybe even links to where I can find what I need) to set up a C environment with CVODE. If the information is Windows/Gentoo Linux cross platform, even better.
Thank you.
P.S. I did search the site and saw lots of "How do I set up" questions, but no C one. Because I know someone will yell at me for that. Also, I don't want to have a conversation about whether to use C#, C++, Java, etc. That just complicates the issue - and I need to get this done.
Edit: I have learned a little more since this question and now realize that I left out a key part of the question. The CVODE library and Linux cluster at school use MPI - parallel programming - which is not available on your average, run-of-the-mill PC. So all development must be done directly on the cluster.
Linux: The simple way is to install gcc or g++.
You can write your code in your plain text editor (nano, vim, gedit, kwrite, etc.).
Save your file with a .c or .cpp extension and type in the terminal:
gcc filename.c
or
g++ filename.cpp
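In practice you'll usually want warnings, debug symbols, and a named output file, and with -g you can then step through the program in gdb (the file and program names here are just examples):

gcc -Wall -g -o myprogram filename.c   # -g adds the debug info gdb needs
gdb ./myprogram

Inside gdb, the handful of commands break main, run, next, print some_variable, and quit will cover most beginner sessions.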
You said that you want to write C code on Gentoo Linux; as I understand it, you're not familiar with Linux. The best choice in this case is to:
Install VirtualBox on your Windows machine (https://www.virtualbox.org/); it's free software that lets you run other systems like Linux on your desktop.
Install Gentoo Linux on VirtualBox; there are a lot of tutorials on the net, for example this video: http://www.youtube.com/watch?v=DUf_1wAPeyA
When you install Gentoo Linux on VirtualBox you have all you need to develop in C (gcc compiler, gdb debugger...).
Now you can download your library and decompress it.
In general, all (good) Linux libraries come with a 'README' file that contains the instructions for installing the library.
I think you need to do this:
./configure --prefix=/DIRECTORY_YOU_WANT_TO_INSTALL_THE_LIBRARY
make
make install
You can now play with C and your new library, like this:
Suppose you create a new file test_lib_CVODE.c; you can compile it like this:
gcc -Wall test_lib_CVODE.c -o test_lib_CVODE -lcvode
I assume that the installed library is named libcvode.so
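A tiny smoke test is enough to confirm that the compiler, the header, and the library paths are all wired up. This is only a sketch: it assumes the header is installed as cvode/cvode.h under your --prefix directory and, as above, that the library is called libcvode.so; it deliberately makes no solver calls.

/* test_lib_CVODE.c -- checks only that the header is found and the program links */
#include <stdio.h>
#include <cvode/cvode.h>   /* assumed install location of the CVODE header */

int main(void)
{
    printf("CVODE header found and executable linked.\n");
    return 0;
}

If you installed under a custom --prefix, add the include and library paths when compiling, e.g. gcc -Wall -I/DIRECTORY_YOU_WANT_TO_INSTALL_THE_LIBRARY/include -L/DIRECTORY_YOU_WANT_TO_INSTALL_THE_LIBRARY/lib test_lib_CVODE.c -o test_lib_CVODE -lcvode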
If you have any questions, you can always get help here.
Regards.
I think you should use Code::Blocks on Linux; it is very similar to the Windows version of Code::Blocks, and it is very easy to debug and do other things.
These were all useful answers. I tried pursuing each of them at least a little bit. However, the only reasonable solution seems to be to use emacs in a terminal window. This is because I'm using MPI - yes, I know I didn't mention that in the OP - which can only be done on a cluster.
I am new to this environment and was not aware of MPI or the effect it would have on my attempt to develop.
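For anyone who lands here later: the MPI side mostly changes how you compile and launch things. The canonical minimal MPI program in C looks like this (the wrapper and launcher names, mpicc and mpirun, depend on the MPI installation on your cluster):

/* hello_mpi.c -- minimal MPI example */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down MPI */
    return 0;
}

On the cluster it would be built and run with something like mpicc hello_mpi.c -o hello_mpi and then mpirun -np 4 ./hello_mpi.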
I believe I can do better than this if I can figure out X/Windows using Cygwin. But I am a long way from that.
Thanks all for your effort and sorry I can't really award a best answer (I guess).
A previous relevant question from me is here: Reverse Engineering old paint programs
I have set up my base of operations here: http://animatorpro.org
wiki coming soon.
Okay, so now I have a 300,000 line legacy MSDOS codebase. It's sort of a "be careful what you wish for" situation. I am not an experienced C programmer. I'm not entirely inexperienced either, but for all intents and purposes I'm a noob to the language and in particular the intricacies of its libraries. I am especially ignorant of the vagaries of the differences between C programs written specifically for MSDOS and programs that are cross platform. However I have been studying this code base for over a year now, and this is what I know about Animator Pro:
Compilers and tools used:
Watcom C compiler
tcmake (make program from Turbo C)
386asm, a specialised assembler for the Phar Lap DOS extender
and of course, the Phar Lap DOS extender itself.
a selection of obscure DOS utilities
Much of the compilation seems to be driven by batch files. Though I have obtained copies of all these tools, I have not yet succeeded at compiling it (though I have compiled its older brother, Autodesk Animator original).
It's got a plugin system that replicates DLLs before DLLs were available, based on REX. The plugin system handles:
Video Drivers (with a plethora of included VESA drivers)
Input drivers (including wacom tablets, and keyboards)
Drawing Tools
Inks (like Photoshop's filters, or blending modes)
Scripting Addons (essentially compiled scripts)
File formats
It's got its own script interpreter named POCO, based on the C language. The scripting language has enough power to do virtually all the things the plugin system can do - just slower.
Given this information, this is my development plan. Please criticise this. The source code is available in the link above, so you can easily, if you are so inclined, assess the situation yourself.
Compile with its original tools.
Switch to using DJGPP, and make the necessary changes to get it to compile with that, plus the original assembler.
Include the Allegro.cc "Game" library, and switch over as much functionality to that library as possible - perhaps by simply writing new video and input drivers that use the Allegro API. I'm thinking Allegro rather than SDL because there is a DOS version of Allegro, and, fascinatingly, one of its core functions is the ability to play Animator Pro's native format, FLIC.
Hopefully after 3, I will have eliminated most or all of the Assembler in the project. I say hopefully, because it's in an obscure dialect that doesn't assemble in any modern free assembler without significant modification. I have tried them all. Whatever is left gets converted to assemble in NASM, or to C code if I can define the assembler's actual function.
Switch the DOS extender from Phar Lap to HX DOS (http://www.japheth.de/HX.html), which promises to replicate as much of the Win32 API as possible. Then make all the necessary code changes for that to work.
Switch to the Win32 version of Allegro.cc, assuming that the Win32 version can run on top of HX DOS. Make any further necessary changes.
Modify the plugin system to use some kind of standard cross platform plugin library. What this would be, I have no idea. Maybe you can offer some suggestions? I talked to the developer who originally wrote the plugin system, and he said some of the things it does aren't possible on modern OS's because of segmentation restrictions. I'm not sure what this means, but I'm guessing it means all the plugins will need to be rewritten almost from scratch.
Magically, I get all the above done, and we can try to make it run in Windows, OS X, and Linux, while dealing with other cross-platform niggles like long file names and things I haven't thought of.
Anyone got a problem with any of this? Is Allegro a good choice? If not, why? What would you do about this plugin system? What would you do differently? Is this whole thing foolish, and should I just rewrite it from scratch, using the original as inspiration? (It would apparently take the original developer "about a month" to do that.)
One thing I haven't covered above is the text/font system. I'm not sure what to do about that: Animator Pro has its own custom font format, but it is also able to use PostScript Type 1 fonts and some other formats.
My biggest concern with your plan, in a nutshell: Your approach seems to be to attempt to keep the whole enormous thing working at all times, tweaking the environment ever-further away from DOS. During each tweak to the environment, that means you will have approximately a billion subtle assumptions that might have broken at once, none of which you necessarily understand yet. Untangling them all at once will be incredibly painful.
If I were doing the port, my approach would be to disable as much code as possible to get SOMETHING running in a modern environment, and bring the parts back online, one piece at a time. Write a simple test harness program that loads a display driver and draws some stuff, and compile it for DOS to make sure you understand the interface. Then write some C code that implements the same interface, but with Allegro (or SDL or SFML), and make that program work under Windows or Linux. When the output differs, you have a simple test case to work from.
Your entire job on this port is swapping out implementations of various interfaces and functions with completely new ones. This is a job that unit testing excels at. Don't write any new code without a test of some kind that runs on the old code under DOS! Make your potential problems as small and simple as you possibly can. Port assembly code instead of rewriting it only if you're reasonably confident that it will actually make your job easier (ie, algorithmic stuff that compiles fine with few tweaks under NASM). Don't bite off a bigger piece than you can comfortably fit in your brain at once.
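To make the "swap implementations behind one interface" idea concrete, here is the kind of shape I mean. This is a deliberately tiny, hypothetical sketch - the struct and function names are all made up, and the real Animator Pro driver interface is far richer - but the pattern is the same: a table of function pointers, a harness that only talks to the table, and one backend per platform.

/* harness.c -- hypothetical test-harness sketch; all names are invented */
#include <stdio.h>

/* the abstract display interface: a table of function pointers */
typedef struct display_driver {
    int  (*open_screen)(int width, int height);
    void (*put_pixel)(int x, int y, unsigned char color);
    void (*close_screen)(void);
} display_driver;

/* trivial stand-in backend so the sketch links and runs; a real port would
   provide one of these backed by the old VESA code and another backed by
   Allegro (or SDL), then compare their behaviour */
static int  stub_open(int w, int h)                  { printf("open %dx%d\n", w, h); return 0; }
static void stub_put(int x, int y, unsigned char c)  { printf("pixel %d,%d = %d\n", x, y, (int)c); }
static void stub_close(void)                         { printf("close\n"); }

static const display_driver stub_driver = { stub_open, stub_put, stub_close };

static const display_driver *get_driver(void) { return &stub_driver; }

int main(void)
{
    const display_driver *drv = get_driver();
    int i;

    if (drv->open_screen(320, 200) != 0) {
        fprintf(stderr, "could not open screen\n");
        return 1;
    }
    for (i = 0; i < 200; i++)      /* draw a diagonal test line */
        drv->put_pixel(i, i, 15);
    drv->close_screen();
    return 0;
}

Compile the same harness against the DOS backend under Watcom or DJGPP and against the Allegro backend under Windows or Linux; any place their behaviour differs is a small, isolated bug to chase.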
I, for one, look forward to seeing your progress! I think what you're attempting to do is great. Thanks for doing it.
Hmmm - I might approach it by writing an OpenGL video "driver" for it. Today's machines are fast enough, with tons of RAM, that you could run all the pixel-specific algorithms on the main CPU into a back buffer and it would work. As the "generic" VGA driver just mapped the video buffer to a pointer, this would be a place to start. There was a zoom mode in the UI, so you can still look at the pixels on a high-res display.
It is often very difficult to take an existing non-trivial code base that wasn't written with portability in mind - and you mention a few signs of that - and then try to make it portable. There will be a lot of problems along the way. It is probably a better idea to start from scratch and rewrite the code using the existing code as a reference only. If you start from scratch, you can leverage an existing portable UI solution like Qt in your new project.
I'm really sorry if this sounds kinda dumb. I just finished reading K&R and I worked on some of the exercises. This summer, for my project, I'm thinking of re-implementing a linux utility to expand my understanding of C further so I downloaded the source for GNU tar and sed as they both seem interesting. However, I'm having trouble understanding where it starts, where's the main implementation, where all the weird macros came from, etc.
I have a lot of time so that's not really an issue. Am I supposed to familiarize myself with the GNU toolchain (ie. make, binutils, ..) first in order to understand the programs? Or maybe I should start with something a bit smaller (if there's such a thing) ?
I have little bit of experience with Java, C++ and python if that matters.
Thanks!
The GNU programs are big and complicated. The size of GNU Hello World shows that even the simplest GNU project needs a lot of code and configuration around it.
The autotools are hard to understand for a beginner, but you don't need to understand them to read the code. Even if you modify the code, most of the time you can simply run make to compile your changes.
To read code, you need a good editor (Vim, Emacs) or IDE (Eclipse) and some tools to navigate through the source. The tar project contains a src directory; that is a good place to start. A program always starts with the main function, so do
grep main *.c
or use your IDE to search for this function. It is in tar.c. Now, skip all the initialization stuff, until
/* Main command execution. */
There, you see a switch for subcommands. If you pass -x it does this, if you pass -c it does that, etc. This is the branching structure for those commands. If you want to know what these macros are, run
grep EXTRACT_SUBCOMMAND *.h
There you can see that they are listed in common.h.
Below EXTRACT_SUBCOMMAND you see something funny:
read_and (extract_archive);
The definition of read_and() (again obtained with grep):
read_and (void (*do_something) (void))
The single parameter is a function pointer like a callback, so read_and will supposedly read something and then call the function extract_archive. Again, grep on it and you will see this:
if (prepare_to_extract (current_stat_info.file_name, typeflag, &fun))
{
if (fun && (*fun) (current_stat_info.file_name, typeflag)
&& backup_option)
undo_last_backup ();
}
else
skip_member ();
Note that the real work happens when calling fun. fun is again a function pointer, which is set in prepare_to_extract. fun may point to extract_file, which does the actual writing.
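If the function-pointer style is new to you, the same pattern in miniature looks like this (this is not tar's actual code, just the idiom it uses):

/* callback_demo.c -- the read_and()-style pattern in miniature */
#include <stdio.h>

static void extract_archive(void) { printf("pretending to extract a member\n"); }
static void list_archive(void)    { printf("pretending to list a member\n");    }

/* takes a pointer to "what to do with each member" and calls it */
static void read_and(void (*do_something)(void))
{
    int member;
    for (member = 0; member < 3; member++)   /* pretend to read 3 members */
        do_something();
}

int main(void)
{
    read_and(extract_archive);   /* like tar -x */
    read_and(list_archive);      /* like tar -t */
    return 0;
}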
I hope this walked you through a good deal of it and showed you how I navigate through source code. Feel free to contact me if you have related questions.
The problem with programs like tar and sed is twofold (this is just my opinion, of course!). First of all, they're both really old. That means they've had multiple people maintain them over the years, with different coding styles and different personalities. For GNU utilities, it's usually pretty good, because they usually enforce a reasonably consistent coding style, but it's still an issue. The other problem is that they're unbelievably portable. Usually "portability" is seen as a good thing, but when taken to extremes, it means your codebase ends up full of little hacks and tricks to work around obscure bugs and corner cases in particular pieces of hardware and systems. And for programs as widely ported as tar and sed, that means there's a lot of corner cases and obscure hardware/compilers/OSes to take into account.
If you want to learn C, then I would say the best place to start is not trying to study code that others have written. Rather, try to write code yourself. If you really want to start with an existing codebase, choose one that's being actively maintained where you can see the changes that other people are making as they make them, follow along in the discussions on the mailing lists and so on.
With well-established programs like tar and sed, you see the result of the discussions that would've happened, but you can't see how software design decisions and changes are being made in real-time. That can only happen with actively-maintained software.
That's just my opinion of course, and you can take it with a grain of salt if you like :)
Why not download the source of the coreutils (http://ftp.gnu.org/gnu/coreutils/) and take a look at tools like yes? Less than 100 lines of C code and a fully functional, useful and really basic piece of GNU software.
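To give a sense of the scale: the core idea of yes fits in a handful of lines (this is not the GNU source, just the essence of what it does):

/* tiny_yes.c -- endlessly repeat a string, like a minimal `yes` */
#include <stdio.h>

int main(int argc, char *argv[])
{
    const char *msg = (argc > 1) ? argv[1] : "y";
    for (;;)
        puts(msg);   /* runs until interrupted or the pipe closes */
}

The real GNU version adds option parsing, error handling, and output buffering tricks, which is where the rest of its lines go.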
GNU Hello is probably the smallest, simplest GNU program and is easy to understand.
I know it's sometimes a mess to navigate through C code, especially if you're not familiar with it. I suggest you use a tool that will help you browse through the functions, symbols, macros, etc. Then look for the main() function.
You need to familiarize yourself with the tools, of course, but you don't need to become an expert.
Learn how to use grep if you don't know it already and use it to search for the main function and everything else that interests you. You might also want to use code browsing tools like ctags or cscope which can also integrate with vim and emacs or use an IDE if you like that better.
I suggest using ctags or cscope for browsing. You can use them with vim/emacs. They are widely used in the open-source world.
They should be in the repositories of every major Linux distribution.
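Building the index files is a one-liner at the top of the source tree (assuming Exuberant Ctags or a compatible ctags):

ctags -R .     # builds a tags file for vim (use ctags -e for an Emacs-style TAGS file)
cscope -Rb     # builds cscope.out (-R recurse into subdirectories, -b build the database only)

In vim, Ctrl-] then jumps to the definition under the cursor and Ctrl-T jumps back.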
Making sense of some code which uses a lot of macros, utility functions, etc., can be hard. To better browse the code of a random C or C++ project, I suggest this approach, which is what I generally use:
Install Qt development tools and Qt Creator
Download the sources you want to inspect, and set them up for compilation (usually just ./configure for GNU stuff).
Run qmake -project in the root of the source directory to generate a Qt .pro file for Qt Creator.
Open the .pro file in Qt Creator (do not use shadow build, when it asks).
Just to be safe, in Qt Creator Projects view, remove the default build steps. The .pro file is just for navigation inside Qt Creator.
Optional: set up custom build and run steps, if you want to build and run/debug under Qt Creator. Not needed for navigation only.
Use Qt Creator to browse the code. Note especially the locator (kb shortcut Ctrl+K) to find stuff by name, and "follow symbol under cursor" (kb shortcut F2), and "find usages" (kb shortcut Ctrl-Shift-U).
I had to take a look at "sed" just to see what the problem was; it shouldn't be that big. I looked, and I see what the issue is, and I feel like Charlton Heston catching first sight of a broken statue on the beach. All of what I'm about to describe for "sed" might also apply to "tar". But I haven't looked at it (yet).
A lot of GNU code got seriously grunged up - to the point of unmaintainable morbid legacy - for reasons I don't know. I don't know exactly when it happened, maybe late 1990's or early 2000's, but it was like someone flipped a switch and suddenly, all the nice modular mostly self-contained code widgets got massively grunged with all sorts of extraneous entanglements having little or no connection to what the application itself was trying to do.
In your case, "sed": an entire library got (needlessly) dragged in with the application. This was the case at least as early as version 4.2 (the last version predating your query), probably before that - I'd have to check.
Another thing that got grunged up was the build system (again) to the point of unmaintainability.
So, you're really talking about legacy rescue here.
My advice ... which is generic for any codebase that's been around a long time ... is to dig as deep as you can and go back to its earliest forms first; and to branch out and look at other "sed"'s - like those in the UNIX archive.
https://www.tuhs.org/Archive/
or in the BSD archive:
https://github.com/freebsd
https://github.com/weiss/original-bsd
(the second one goes deeper into early BSD in its earlier commits.)
Many of the "sed"'s on the GNU page - but not all of them - may be found under "Downloads" as a link "mirrors" on the GNU sed page:
https://www.gnu.org/software/sed/
Version 1.18 is still intact. Version 1.17 is implicitly intact, since there is a 1.17-to-1.18 diff present there. Neither version has all the extra stuff piled on top of it. It's more representative of what GNU software looked like before becoming knotted up with all the entanglements.
It's actually pretty small - only 8863 lines for the *.c and *.h files, in all. Start with that.
For me the process of analysis of any codebase is destructive of the original and always entails a massive amount of refactoring and re-engineering; and simplification coming from just writing it better and more natively, while yet keeping or increasing its functionality. Almost always, it is written by people who only have a few years' experience (by which I mean: less than 20 years, for instance) and have thus not acquired full-fledged native fluency in the language, nor the breadth of background to be able to program well.
For this, if you do the same, it's strongly advised that you have some kind of test suite already in place or added. There's one in the version 4.2 software, for instance, though it may be stress-testing new capabilities added between 1.18 and 4.2. Just be aware of that. (So, it might require reducing the test suite to fit 1.18.) Every change you make has to be validated by whatever tests you have in your suite.
You need to have native fluency in the language ... or else the willingness and ability to acquire it by carrying out the exercise and others like it. If you don't have enough years behind you, you're going to hit a soft wall. The deeper you go, the harder it might be to move forward. That's an indication that you're not experienced enough yet, and that you don't have enough breadth. So, this exercise then becomes part of your learning experience, and you'll just have to plod through.
Because of how early the first versions date from, you will have to do some rewriting anyhow, just to bring it up to standard. Later versions can be used as a guide for this process. At a bare minimum, it should be brought up to C99, as this is virtually mandated as part of POSIX. In other words, you should be at least as far up to date as the present century!
Just the challenge of getting it to be functional will be exercise enough. You'll learn a lot of what's in it, just by doing that. The process of getting it to be operational is establishing a "baseline". Once you do that, you have your own version, and you can start with the "analysis".
Once a baseline is established, then you can proceed full throttle forward with refactoring and re-engineering. The test suite helps to provide cover against stumbles and inserted errors. You should keep all the versions that you have (re)made in a local repository so that you can jump back to earlier ones, in case you need to track down the sudden emergence of test failures or other bugs. Some bugs, you may find, were rooted all the way back in the beginning (thus: the discovery of hidden bugs).
After you have the baseline (re)written to your satisfaction, then you can proceed to layer in the subsequent versions. On GNU's archive, 1.18 jumps straight to 2.05. You'll have to make a "diff" between the two to see where all the changes were, and then graft them into your version of 1.18 to get your version of 2.05. This will help you better understand both the issues that the changes made addressed, and what changes were made.
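Mechanically, the grafting is just diff and patch (the directory names below are simply wherever you unpacked each release and keep your reworked tree):

diff -ruN sed-1.18 sed-2.05 > 1.18-to-2.05.patch   # capture every change between the two releases
patch -d my-sed-1.18 -p1 < 1.18-to-2.05.patch      # apply it to your reworked 1.18 tree

Hunks that no longer apply cleanly to your reworked tree end up in .rej files, and those are exactly the spots you have to merge and understand by hand.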
At some point you're going to hit GNU's Grunge Wall. Version 2.05 jumped straight to 3.01 in GNU's historical archive. Some entanglements started slipping in with version 3.01. So, it's a soft wall we have here. But there's also an early test suite with 3.01, which you should use with 1.18, instead of 4.2's test suite.
When you hit the Grunge Wall, you'll see directly what the entanglements were, and you'll have to decide whether to go along for the ride or cast them aside. I can't tell you which direction is the rabbit hole, except that SED has been perfectly fine for a long time, most or all of it is what is listed in and mandated by the POSIX standard (even the current one), and what's there before version 3 serves that end.
I ran diffs. Between 2.05 and 3.01, the diff file is 5000 lines. Okay. That's (mostly) fine and is natural for code that's in development, but some of that may be coming from the soft Grunge Wall. Running a diff on 3.01 versus 4.2 yields a diff file that is over 60000 lines. You need only ask yourself: how can a program that's under 10000 lines - that abides by an international standard (POSIX) - produce 60000 lines of differences? The answer is: that's what we call bloat. So, between 3.01 and 4.2, you're witnessing a problem that is very common to code bases: the rise of bloat.
So, that pretty much tells you which direction ("go along for the ride" versus "cast it aside") is the rabbit hole. I'd probably just stick with 3.01, and do a cursory review of the differences between 3.01 and 4.2 and of the change logs to get an overview of what the changes were, and just leave it at that, except maybe to find a different way to write in what they thought was necessary to change, if the reason for it was valid.
I've done legacy rescue before, before the term "legacy" was even in most people's vocabulary and am quick to recognize the hallmark signs of it. This is the kind of process one might go through.
We've seen it happen with some large codebases already. In effect, the superseding of X11 by Wayland was a massive exercise in legacy rescue. It's also possible that the ongoing superseding of GNU's gcc by clang may be considered an instance of that.
I'm looking for a make platform. I've read a little about GNU make and that it's got some issues on Windows platforms (from slash/backslash handling to shell detection...), so I would like to hear what my alternatives to it are.
If it matters, I'm doing Fortran development combined with (very) little C on small-sized projects (50k lines max), but I don't think that matters, since most of these tools are language-agnostic.
What are GNU make's drawbacks, and what alternatives do I have, with what advantages?
There are a couple of good tools for continuous integration and building on Windows. The two I have in mind are NAnt, which describes itself as a .NET build tool but could be used to build anything - it's open source and very extensible, although the UI is lacking - and Hudson, which I've recently started to use and which is brilliant; the output is way better than NAnt's, making it much easier to use. I have zero experience using these tools with Fortran, though, so good luck there.
My thought on make and its derivatives is to avoid them based on their age: make was a good tool in its time, but it must be 20 years old now, and tech (even in the build area) has moved on a fair bit since then.
You can have a look at CMake. It's a kind of "meta-make" system: you write a project file for it, which says how your project is structured, what libs and sources it needs, and so on. It can then generate makefiles for GNU make or nmake (I believe), as well as project files for KDevelop and Visual Studio.
KDE has adopted it from KDE4 onwards, and it has since been greatly enhanced: CMake
Another such system is Bakefile which was built to generate make-files and project-files for the wxWidgets GUI toolkit. It can be used for non-wx applications too, and is relatively young and modern (uses XML as its makefile description).
There is also nmake, which is Microsoft's version of make. I would recommend sticking with GNU make, though. My advice is to always use Unix-style forward slashes; they also work on Windows. GNU make is widely used, so you can easily find tutorials and get advice about its use. It is also a better investment, since you can use it in other areas in the future. Finally, it is much richer in functionality.
I use GNU make under Windows and have no problems with it. However, I also use bash as my shell. Both make and bash are available as part of the Cygwin package from www.cygwin.com and I strongly recommend you install bash & all the common command line tools (grep, sed etc.) if you are going to use make from the command line.
Make has stood the test of time, even on Windows, and I use it every day, but there's also MSBuild.
Details, details...
Given your small project, I would just start with MS nmake. Then, if that doesn't suffice, move on to GNU make. Other advice above is also good. Ant and CMake are fine, but you don't need them, and there are so many make users who can help you if you have problems.
For that matter, since you are on Windows, doesn't the MS IDE have build tools built in? Just click and go.
Keep it simple. Plan to throw the first one away; you will anyway.
Wikipedia also has this to say:
http://en.wikipedia.org/wiki/List_of_build_automation_software
I was looking at the Linux From Scratch project a while ago and was sort of disappointed that you needed an existing copy of Linux on your machine to build it. I know that Linux is very easy to obtain, install, etc., but I was hoping to build the LFS project outside of the modern operating systems (Unix/Linux/OS X/Windows/etc.) and in something like DOS.
My question is, how might I build a project, whether it be C, C++, or some other language with a C compiler, without building that project within another operating system? By operating system I mean Unix, Linux, OS X, Windows, and every other GUI-capable 'modern-ish' OS.
So specifically, I'm looking for something that works much like DOS. I'm not above using DOS if that's all that is available; however, I'm thinking of something that has the ability to use all available memory, processing power, etc. I want to start my computer and be welcomed by a "prompt" from which I can build or execute a program (like another operating system).
In order to build a program you need to execute other programs (compiler, linker), access a filesystem both for reading the code and writing out the compiled files, and so on. You need a "real" operating system, even more so if you want to "use all available memory" and processing power. If you don't like the "high level appearance" of GUI-capable OSes, just try one of the many stripped-down Linux distros: for instance, "Damn Small Linux" comes to mind.
I think the closest you're going to come is a Gentoo Linux Stage 1 install. It basically gives you a prompt and then you compile EVERYTHING, including the kernel, from that minimal starting point. It's about as close as you're going to get without keying in the binary for the bootloader by hand ;)
My guess is, it will be lots of work, but this DOS compiler may help: DJGPP. Minix may also be an option, but it does have X Windows. Beyond that, you are going to be hard pressed to find anything.