I'll write a program for Interactive UNIX (http://en.wikipedia.org/wiki/INTERACTIVE_UNIX), but in a year it will be ported to Windows. I'll write it in ANSI C and/or sh script. When it runs on Windows it will be run as a Windows service. How do I make the port as easy as possible for myself?
I want to change as little as possible when I port it, but still end up with good code.
Unfortunately, Interactive UNIX is an old system and the only shell that exists is /bin/sh.
If you are even considering doing this in SH script, then you should give serious consideration to Python which is already portable.
Port early and frequently
Encapsulate non-portable code. (Don't spread too many #ifdefs all over your code; rather, create functions implemented separately for each OS in separate source files - see the sketch after this list.)
Be very strict with data types (use long/short in structs/classes and not int).
That is, switch on the highest warning level and resolve all warnings.
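To illustrate the "separate source files" point, here is a minimal sketch; the file names and the port_sleep_ms() function are made up for the example:

/* os_port.h - the portable interface the rest of the program sees */
#ifndef OS_PORT_H
#define OS_PORT_H
void port_sleep_ms(unsigned int ms);   /* sleep for a number of milliseconds */
#endif

/* os_port_unix.c - compiled only on the UNIX side */
#include <unistd.h>
#include "os_port.h"
void port_sleep_ms(unsigned int ms)
{
    usleep(ms * 1000);                 /* POSIX sleep, in microseconds */
}

/* os_port_win32.c - compiled only on the Windows side */
#include <windows.h>
#include "os_port.h"
void port_sleep_ms(unsigned int ms)
{
    Sleep(ms);                         /* Win32 sleep, in milliseconds */
}

The makefile (or project file) picks the right .c file per platform, and the rest of the code only ever includes os_port.h.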
You can use platform-dependent #ifdef guards around your includes and the strictest types possible. GLib has some nice fixed-width types defined which can be used on nearly every platform or architecture.
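If you can't rely on GLib or C99's <stdint.h> being available on a system as old as Interactive UNIX, you can roll your own fixed-width types; the names and mappings below are assumptions that have to be verified per compiler:

/* port_types.h - hypothetical fixed-width typedefs, one block per platform */
#ifndef PORT_TYPES_H
#define PORT_TYPES_H

#if defined(_WIN32)
typedef __int32           port_int32;    /* MSVC built-in 32-bit type */
typedef unsigned __int32  port_uint32;
#else
typedef int               port_int32;    /* int assumed 32-bit on this UNIX */
typedef unsigned int      port_uint32;
#endif

#endif /* PORT_TYPES_H */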
A shell-script-only solution is not a viable alternative: on Windows platforms there is no Bourne shell, Bash or ksh by default, and unfortunately PowerShell seems to be rare on XP machines. But you can create both a traditional batch file and a Bourne shell script.
But as others said, it's easier if you use a higher level language that's platform independent. And why wouldn't you? :)
I would recommend using ANSI C and Lua (a small embeddable scripting language). Try to combine this with only the basic C functions you really need.
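For reference, embedding Lua from ANSI C is only a few lines; this is a minimal sketch against the Lua 5.1+ C API (link with -llua):

#include <stdio.h>
#include <lua.h>
#include <lualib.h>
#include <lauxlib.h>

int main(void)
{
    lua_State *L = luaL_newstate();        /* create an interpreter state */
    luaL_openlibs(L);                      /* load the standard Lua libraries */

    /* run a script string; a real program would use luaL_dofile() instead */
    if (luaL_dostring(L, "print('hello from Lua')") != 0)
        fprintf(stderr, "Lua error: %s\n", lua_tostring(L, -1));

    lua_close(L);
    return 0;
}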
You need to port and test often. If you work for a year on Unix and only then try to switch, it will be much harder, because often the best porting solution is a different design that is implemented on all platforms.
Windows can't run sh scripts directly; you need Cygwin for that. So if you really want to run on vanilla Windows, you had better use C. Stick to C89 and be careful. If you use any system calls, stick to POSIX ones and you should find them or near equivalents on Windows. Windows also has a pretty comprehensive Berkeley-sockets-like library (Winsock), so you can use that too within reason.
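The one wrinkle with sockets is that Winsock needs explicit initialisation and a different close call; a small sketch (the sock_t, sockets_init and sock_close names are made up here):

#ifdef _WIN32
#include <winsock2.h>              /* link with ws2_32 */
typedef SOCKET sock_t;
#else
#include <sys/socket.h>
#include <unistd.h>
typedef int sock_t;
#endif

static int sockets_init(void)
{
#ifdef _WIN32
    WSADATA wsa;
    return WSAStartup(MAKEWORD(2, 2), &wsa);   /* 0 on success */
#else
    return 0;                                  /* nothing to do on POSIX */
#endif
}

static void sock_close(sock_t s)
{
#ifdef _WIN32
    closesocket(s);
#else
    close(s);
#endif
}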
You're still going to have to do some #ifdefing.
You'll end up compiling it with MinGW if you make it a native Windows program; if you stray too far into the UNIX den, you'll have to make it a Cygwin binary instead, which has some baggage associated with it.
If it is not an option to add something that is inherently portable like Python, Ruby, Perl, Java, etc., then your best option is probably to use ANSI C. One reason for C's initial popularity was its (relatively good) portability. That said, anything that is closely tied to the OS, such as graphics, networking, etc., is much less portable in C than in something like Python. You should strive to make "wrappers" for OS-specific functions and keep those partitioned off from the main code. This way, when it comes time to port it over, you're only rewriting the wrappers, and everything else should compile without many issues.
All that said, it is a LOT easier to write something in Python and have it work everywhere. Plus it is more "fun" to write. So if you can avoid Interactive UNIX in the future, do so.
I need to write some C functions that will be called by a Java program running on a CentOS Linux server, as part of a web application. The server is a hosted dedicated server sitting in another physical location, far away from me.
Do I need to develop the C stuff on the server directly, that is, by tunneling into the server? Or can I develop the C program on a Mac or Windows PC in my office and then, once everything is working fine, put the final result on the server for use? If the latter, does it limit the choices of development environment in any way? That is, which compiler should I use, and are there any settings in the IDE or compiler I need to worry about, since the development environment will be different from the production environment?
If I use Xcode version 3 on a Mac, it uses GCC by default, whereas Xcode version 4 uses LLVM-GCC to compile. Does the choice of compiler matter assuming I'm using C99 standard things? I don't want the code to be dependent on the development environment since I can't guarantee it'll stay the same in the future. Can I switch the compiler manually in Xcode somehow to verify the code works in GCC as well as LLVM?
Ignoring Windows, things are pretty portable across Mac/Linux. You can develop it on the Mac in whatever development environment you want (I personally use TextWrangler and GCC from the command line).
Once you develop your software, it's a simple matter of copying the file to your remote server and compiling it there.
You may or may not need to change a few things. The only portability issues I've run into were Mac's socket() using PF_ instead of AF_ (Mac will still accept AF_, but it doesn't advertise it in its man page, and other systems will not necessarily accept PF_) and sranddev() not being available on some systems; both were very easily resolvable.
If, however, you wanted to write the software directly on the remote box, it's definitely not a hard thing to do. I would just ssh there and take your pick of text editors (usually vi or emacs) and compilers (usually gcc).
In general, for programs that are just traditional Unix command-line things, I tend to avoid Xcode as much as possible because it likes to hide things, and IMO it's a good thing to actually understand what is going on behind the scenes. (Especially if you use other *nix systems.)
Whatever you do, it will need to be recompiled on the server.
You can probably create code that's runnable/testable under both environments, although you may have to #ifdef around compatibility issues. How much of that, if any, depends a lot on what you're actually writing.
Does the choice of compiler matter assuming I'm using C99 standard things?
Yes: Microsoft, AFAIK, still doesn't fully support C99 (though maybe that's changed in the latest MSVC). Also, you have to resist the temptation to use non-standard features just because they are there. OTOH, a local build environment that differs from the server might actually force you to write portable programs.
The choice depends on how your program is going to communicate with the larger system, but developing at least parts locally is probably the most convenient option.
I've run into an issue with writing some code in C. My basic problem is that I am pressed for time, and the code I am dealing with has lots of bugs which I have to eradicate before tomorrow evening.
The bigger part of the problem is that there is no adequate IDE that can do real-time debugging, especially when using threads or spawning processes with fork(). I tried Mono, Eclipse, and finally NetBeans, and have concluded that they are very good, but not for coding in C. Moreover, the learning curve for using the command-line debugger properly is quite steep. (Like I mentioned earlier... I am pressed for time.)
So, since I am a C# developer by profession I was wondering whether I can pull this off in VS2003/VS2005/VS2008/VS2010. If I abstain from using system calls, can I do this?
Of particular interest are the FILE* type and the fread(), fclose(), and fseek() functions. I know they are part of the standard C library, but are they tied to the platform itself? Are the headers the same on Linux vs. Windows? What about fork() or shared memory?
Maybe if I use VS2010 to build parts of the component at a time (by mocking inputs and such), debug those, and then migrate the working code into the overall Linux project, that would prove most useful?
Any input would be greatly appreciated.
The bigger part of the problem is that there is no adequate IDE that can do real time debugging, especially when using threads or spawning processes with fork().
The Eclipse CDT would probably have the best overall support for C/C++ development and integrated debugging.
Note that multithreaded and multiprocess debugging can be difficult at the best of times. Investing in a good logging framework would be advisable at this point, and probably more useful than relying on a debugger. There are many to choose from - have a look at Log4C++ and so on. Even printf in a pinch can be invaluable.
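If a full logging framework is too much for a one-day deadline, even a tiny fprintf-based macro helps; a rough sketch (the LOGF name is made up, and it relies on C99/VS2005+ variadic macros):

#include <stdio.h>

/* Print "file:line: message" to stderr; needs at least a format string. */
#define LOGF(...)                                            \
    do {                                                     \
        fprintf(stderr, "%s:%d: ", __FILE__, __LINE__);      \
        fprintf(stderr, __VA_ARGS__);                        \
        fputc('\n', stderr);                                 \
    } while (0)

/* usage: LOGF("child %d exited with status %d", pid, status); */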
So, since I am a C# developer by profession I was wondering whether I can pull this off in VS2003/VS2005/VS2008/VS2010. If I abstain from using system calls, can I do this?
If you take care to only use portable calls and not Win32-specific APIs, you should be OK. Also, there are many libraries (for C++, libraries such as Boost) that provide a rich set of functionality which works the same on Windows, Linux and others.
Of particular interest are FILE* descriptor and fread(), fclose(), fseek() methods. I know they are part of the standard C library, however are they tied to the platform itself? Are the headers the same in Linux vs Windows? What about fork() or shared memory?
Yes, the file I/O functions you mention are in <stdio.h> and part of the portable standard C library. They work essentially the same on both Windows and Linux, and are not tied to a particular platform.
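For example, this little program is the same source code on both systems (the file name is made up; note the "b" in the mode string, which only matters on Windows but is harmless on Linux):

#include <stdio.h>

int main(void)
{
    char buf[16];
    FILE *fp = fopen("data.bin", "rb");          /* binary mode: "b" matters on Windows */
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }
    if (fseek(fp, 4L, SEEK_SET) == 0)            /* skip a 4-byte header */
        fread(buf, 1, sizeof buf, fp);           /* read up to 16 bytes */
    fclose(fp);
    return 0;
}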
However, fork() and shared memory functions such as shmget() are POSIX functions, available on *nix platforms but not natively on Windows. The Cygwin project provides implementations of these functions in a library for ease of porting.
If you are using C++, Boost will give you portable equivalents for much of this system-level functionality.
Maybe if I use VS2010 to build parts of the component at a time (by mocking inputs and stuff), debug those, and then migrate the working code in the overall Linux project would prove most useful?
You could certainly do that. Just be mindful that Visual Studio has a tendency to lead you down the Win32 path, and you must be vigilant to not start using non-portable functions. Fortunately the library reference on MSDN gives you the compatibility information. In general, using standard C or POSIX calls will be portable. In my experience, it is actually easier to write on *nix and port to Windows, but YMMV.
Looks like I am the first to recommend Emacs here. Here is how Emacs works: when you install it, it is simply a text editor with a lot of extensions (a debugger interface and C font-locking are included by default). As you start using it and install the extensions you miss, it becomes more than just an editor. It soon and easily grows into an IDE, and then into something that can practically replace the OS, all within one frame.
Emacs might take long to learn; in the meantime, you could use Visual SlickEdit if you are not pressed on the cost part. I have used it on both platforms and seen it work well with version control, tags, etc.
Perhaps Code::Blocks? I love it and while it says it's for C++ it is, of course, very good for plain C as well.
I'm building something that installs a high-level stack, and to do that, I need to install the lower-level stuff.
The simplest way to check whether, say, Java is installed is to just shell out to which java in a shell script and check if it can find it. I'm now at the point where I need to check for some libraries without an obvious binary - basically stuff that you #include from within C; libxml, for example.
I'm woefully green at C in general, so this makes things a little tricky for me. :) Ideally I could just make a shell script that calls a little C application that does an #include <xxxx>, where xxxx is the library I'm checking the existence of. If it can't find it, it errors out. Unfortunately, of course, all that happens at compile time rather than at run time, so it's not as dynamic as I'd like.
I'm doing this on a system that probably doesn't have much installed on it (be it high-level languages or package managers or what have you), so I'm looking more for a basic shell-script way of doing things (or maybe some clever C or command-line gcc options). Or maybe just manually searching the include paths that gcc would look in anyway (/usr/local/include, /usr/include, etc.). Any thoughts?
Autotools is really what you need. It's a huge (and bizarre) framework for dealing with this very problem:
http://www.gnu.org/software/autoconf/
You can also use pkg-config, which will work with newer software making use of that mechanism:
http://pkg-config.freedesktop.org/wiki/
This is the purpose of configure (part of automake and autoconf).
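In case it helps to see what configure actually does under the hood: it writes a throwaway test program and checks whether the compiler accepts it. A minimal hand-rolled version for libxml might look like this (the include path and flags in the comment are typical values, not guaranteed):

/* conftest.c - a throwaway probe in the style of an autoconf check.
 * If this compiles and links, the headers (and, with -lxml2, the library
 * itself) are present on the system. */
#include <libxml/parser.h>

int main(void)
{
    /* referencing real symbols makes the link step meaningful */
    xmlInitParser();
    xmlCleanupParser();
    return 0;
}

/* Typical probe, run from a script, then test the exit status:
 *   gcc conftest.c -I/usr/include/libxml2 -lxml2 -o /dev/null
 */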
So, I have a large C project that was built entirely on Unix (SPARC Solaris). Several others and I have begun to revisit it because there was some interest in a Windows build.
None of us has done this with a project of this size, so for starters: has anyone ported something from Unix to Windows who could give me some pointers on how they did it?
Our first step in the plan was to decide on a compiler/dev environment.
It seems that our options are MS Visual Studio, Cygwin, MinGW/gcc, and Windows Services for UNIX (SFU).
We are on a fairly short timetable, so we want to rewrite as little code as possible.
So, deciding on a compiler:
Another issue is that the code does use POSIX threads calls (pthread_*, etc.).
We would prefer to compile natively, not using some sort of layer between the executable and the OS. Unfortunately, with the pthread calls in our code, this may not be possible.
I believe both Cygwin and SFU do just that. Cygwin has a .dll that must be shipped with the compiled code for it to work. I am not sure about SFU; any information about that would be greatly appreciated. It seems like it would be a good option, but it was developed to allow UNIX-compiled software to run on a Windows machine with SFU installed, not on any old Windows box.
MinGW does have the ability to create native .exes, but it lacks POSIX support.
So, can anyone give me any more information, suggestions, or knowledge about any of these compilers in this context, or any experience they have with this sort of thing? It is greatly appreciated.
Short timetable? Cygwin, plain and simple.
Despite your preference to not use a layer, that's going to provide the fastest path and you don't seem to indicate that the timeframe requirement is flexible.
We've ported both command-line and X-based UNIX programs to Windows using Cygwin with minimal hassle.
Cygwin is likely the fastest path to a working executable. However, it will leave you with some interesting distribution choices. Most obviously, cygwin1.dll becomes a dependency. It's licensed under the GPL, unless you pay money to buy commercial-use rights.
Cygwin is not particularly friendly to an ordinary Windows user. Its goal is to provide a full POSIX experience on Windows, supplying a shell, all the familiar *nix utilities, and even a port of X. However, it also remaps the Windows disk drive naming into a POSIX-like file system. I've never attempted to distribute an application built for Cygwin to machines that don't already have a full Cygwin installation. I will note that to my knowledge none of the big well-known open-source applications with Windows ports are based on Cygwin.
If the only hard POSIX dependency you have is pthreads, then that is solvable. There is a pthreads port built on native Windows threads that works well with MinGW. IIRC, it is even distributed along with MinGW, or at least is one of their core supported packages.
If the rest of your handling of file names is largely as opaque strings, you may not even need to care about changing / to \. The Windows API is generally happy to treat either character as a path separator, even mixed in the same name. It is the CMD.EXE and early DOS convention of using / for command line options that prevents the use of / for pathnames at the command prompt, not the underlying Windows API.
For tools that might make porting your build process easier, check out the MSYS component of MinGW. It provides a lightweight fork of the Cygwin environment in which enough *nix utilities are available to generally run ./configure and similar processes.
In addition, the GnuWin32 project has ports of a large number of utilities and libraries that are all built to run as native Windows applications without unusual dependencies.
If the code is (at least mostly) portable and the only major issue is the use of pthreads, you might want to use the Pthreads Win32 library. While incomplete, it's sufficiently complete and accurate to deal with most pthreads code I've tried it with. While normally built as a DLL, it can also be built as a static library to avoid creating an extra dependency in your executable.
That, of course, leaves everything else to port -- but you haven't said enough to even guess whether porting the rest within your timeframe is at all reasonable.
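For what it's worth, pthreads code at the level of this sketch builds unchanged against native pthreads on Solaris/Linux and against the Pthreads Win32 / MinGW port mentioned above (the worker function is just an illustration):

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    printf("hello from thread %d\n", *(int *)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    int id = 1;

    if (pthread_create(&tid, NULL, worker, &id) != 0) {
        perror("pthread_create");
        return 1;
    }
    pthread_join(tid, NULL);   /* wait for the worker to finish */
    return 0;
}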
What things should be kept most in mind when writing cross-platform applications in C? Targeted platforms: 32-bit Intel based PC, Mac, and Linux. I'm especially looking for the type of versatility that Jungle Disk has in their USB desktop edition ( http://www.jungledisk.com/desktop/download.aspx )
What are tips and "gotchas" for this type of development?
I maintained for a number of years an ANSI C networking library that was ported to close to 30 different OS's and compilers. The library didn't have any GUI components, which made it easier. We ended up abstracting out into dedicated source files any routine that was not consistent across platforms, and used #defines where appropriate in those source files. This kept the code that was adjusted per platform isolated away from the main business logic of the library. We also made extensive use of typedefs and our own dedicated types so that we could easily change them per platform if needed. This made the port to 64-bit platforms fairly easy.
If you are looking to have GUI components, I would suggest looking at GUI toolkits such as WxWindows or Qt (which are both C++ libraries).
Try to avoid platform-dependent #ifdefs, as they tend to grow exponentially when you add new platforms. Instead, try to organize your source files as a tree with platform-independent code at the root, and platform-dependent code on the "leaves". There is a nice book on the subject, Multi-Platform Code Management. Sample code in it may look obsolete, but ideas described in the book are still brilliantly vital.
Further to Kyle's answer, I would strongly recommend against trying to use the Posix subsystem in Windows. It's implemented to an absolute bare minimum level such that Microsoft can claim "Posix support" on a feature sheet tick box. Perhaps somebody out there actually uses it, but I've never encountered it in real life.
One can certainly write cross-platform C code, you just have to be aware of the differences between platforms, and test, test, test. Unit tests and a CI (continuous integration) solution will go a long way toward making sure your program works across all your target platforms.
A good approach is to isolate the system-dependent stuff in one or a few modules at most. Provide a system-independent interface from that module. Then build everything else on top of that module, so it doesn't depend on the system you're compiling for.
XVT has a cross-platform GUI C API which has been mature for 15+ years and sits on top of the native windowing toolkits. See WWW.XVT.COM.
They support at least Linux, Windows, and Mac.
Try to write as much as you can with POSIX. Mac and Linux support POSIX natively and Windows has a system that can run it (as far as I know - I've never actually used it). If your app is graphical, both Mac and Linux support X11 libraries (Linux natively, Mac through X11.app) and there are numerous ways of getting X11 apps to run on Windows.
However, if you're looking for true multi-platform deployment, you should probably switch to a language like Java or Python that's capable of running the same program on multiple systems with little or no change.
Edit: I just downloaded the application and looked at the files. It does appear to have binaries for all 3 platforms in one directory. If your concern is in how to write apps that can be moved from machine to machine without losing settings, you should probably write all your configuration to a file in the same directory as the executable and not touch the Windows registry or create any dot directories in the home folder of the user that's running the program on Linux or Mac. And as far as creating a cross-distribution Linux binary, 32-bit POSIX/X11 would probably be the safest bet. I'm not sure what JungleDisk uses as I'm currently on a Mac.
There do exist quite a few portable libraries; just some examples I've worked with in the past:
1) glib and gtk+
2) libcurl
3) libapr
Those cover nearly every platform, so they are extremely useful tools.
POSIX is fine on Unices, but I doubt it's that great on Windows; besides, it doesn't give you anything for portable GUIs.
I also second the recommendation to separate code for different platforms into different modules/trees instead of ifdefs.
Also, I recommend checking beforehand what the differences between your target platforms are and how you can abstract them. Some of it is OS-related stuff (e.g. the annoying CR/CRLF/LF line endings in text files), some of it is hardware stuff. E.g. the previously mentioned POSIX compatibility doesn't stop you from writing
int c;
/* reads sizeof(int) raw bytes - size and byte order depend on the host */
fread(&c, sizeof(int), 1, file);
But on different hardware platforms the internal memory layout can be completely different (endianness), forcing you to use conversion functions on some of the target platforms.
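One common way around that, assuming the file format fixes the byte order (say, little-endian 32-bit values), is to read bytes and assemble the value yourself instead of fread()-ing a raw int; the read_le32 name below is made up:

#include <stdio.h>

/* Read a 32-bit little-endian integer regardless of host byte order.
 * Returns 1 on success, 0 on a short read. */
static int read_le32(FILE *fp, unsigned long *out)
{
    unsigned char b[4];
    if (fread(b, 1, 4, fp) != 4)
        return 0;
    *out = (unsigned long)b[0]
         | ((unsigned long)b[1] << 8)
         | ((unsigned long)b[2] << 16)
         | ((unsigned long)b[3] << 24);
    return 1;
}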
You can use NAppGUI for both console and desktop apps. The SDK uses ANSI C and your code will work on Windows/macOS/Linux.
https://www.nappgui.com
It's free and open source.