I'm interested in studying the 9P FS and have been reading the source available from these implementations: http://9p.cat-v.org/implementations
Is 9P obsolete? Are you using it for some application?
(Also, I've found this performance test comparing 9P and NFS: http://graverobbers.blogspot.com/2007/08/v9fs-performance-versus-nfs.html)
No, 9P isn't obsolete; I don't know of a protocol that does what it does and is clean and well defined enough to be implemented correctly in almost any language that exists.
9P is used in a variety of systems. A couple of recent uses are arm-js (an ARM emulator) and 9webdraw (a GSoC project that implements the Plan 9 /dev/draw). Both are HTML5 JavaScript implementations.
Just to add a bit: both the Linux client implementation and several servers are under active development, so I'd say that's a pretty clear sign that folks still have use for it. One of the areas where it's seen heavy use more recently is virtio-9P (aka virtfs), which is part of QEMU/KVM and can be used for direct guest-to-host file access. It's also been used in several experimental operating systems projects (Libra, PROSE, FusedOS) and incorporated into other operating systems (BSD, Mac OS X, Windows, Linux) and hypervisors (in addition to the KVM instance above, it's also been incorporated in various ways into Xen). 9P is actually being used in supercomputing deployments (both for Plan 9 and Linux; see the diod project on SourceForge).
I think the reason is that the protocol is quite simple, so implementations also tend to be quite simple and easy to integrate elsewhere (there are several applications both inside and outside the Plan 9 world which use 9P as an interface to the application, in much the same way that some web developers use RESTful interfaces).
The protocol has a couple of different variations, including the 9P2000.L variant, which was developed specifically to match the Linux VFS API better. It adds a bit of complexity in the form of additional operations, but removes some of the complexity of mapping the Linux VFS API to 9P and vice versa.
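To give a feel for just how simple the wire format is, here is a small sketch (not taken from any particular implementation) of building the 9P2000 version handshake in C. Every message is size[4] type[1] tag[2] followed by type-specific fields, all little-endian; strings are a 2-byte length plus the bytes.

    #include <stdint.h>
    #include <string.h>

    /* Little-endian field encoders for the 9P wire format. */
    static uint8_t *put_u16(uint8_t *p, uint16_t v) {
        p[0] = (uint8_t)v; p[1] = (uint8_t)(v >> 8);
        return p + 2;
    }
    static uint8_t *put_u32(uint8_t *p, uint32_t v) {
        p[0] = (uint8_t)v;         p[1] = (uint8_t)(v >> 8);
        p[2] = (uint8_t)(v >> 16); p[3] = (uint8_t)(v >> 24);
        return p + 4;
    }

    /* Builds "size[4] Tversion tag[2] msize[4] version[s]" into buf,
       which must hold at least 13 + strlen(version) bytes.
       Returns the total message length. */
    size_t build_tversion(uint8_t *buf, uint32_t msize, const char *version) {
        uint16_t vlen = (uint16_t)strlen(version);
        uint32_t total = 4 + 1 + 2 + 4 + 2 + vlen;
        uint8_t *p = put_u32(buf, total);   /* size includes itself */
        *p++ = 100;                         /* message type: Tversion */
        p = put_u16(p, 0xFFFF);             /* NOTAG, required for version */
        p = put_u32(p, msize);              /* proposed max message size */
        p = put_u16(p, vlen);               /* version[s]: length + bytes */
        memcpy(p, version, vlen);
        return total;
    }

A client sends this with version "9P2000" (or "9P2000.L") as the first message on a fresh connection.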
It is used in Erlang-on-Xen, both as a storage protocol for GooFS, a simple filesystem: http://erlangonxen.org/blog/goofs-simple-filesystem
Erlang-on-Xen instances use it in other ways too; see here:
http://erlangonxen.org/more/9p2000e
Also, it's used by libvirt stuff with QEMU.
http://wiki.qemu.org/Documentation/9psetup
9P, to me, is like the Scheme of network protocols. For the most part it is very simple, but people see a need to extend it to fit their environments. Luckily, this is often done in ways that are backwards compatible.
In addition to everything mentioned in the other answers, Microsoft is using 9P as part of their Windows Subsystem for Linux.
They add a 9P server to each Linux distribution that is running as a guest, so that Windows can mount the Linux filesystem over 9P, and Windows processes can transparently access the files on Linux's ext4 partition.
I'm looking for a library that allows me to authenticate data sent to embedded modules. Due to the hardware constraints, it needs to be of small footprint (both code and memory wise) and yet have security comparable to RSA-1024.
The requirements are as follows
Verification on embedded modules (custom CPUs, with only a C89 compiler available)
Signing and verification in Windows (C/C++ code)
Signing in Java (some data needs to be generated via a webpage, so Java would be a big perk)
I would very much like to not have to implement a PKCS #1 v1.5/PSS-like system myself, but I haven't been able to find any good libraries that match the above requirements. Open source would be nice, but commercial solutions are of equal interest. Note that I need access to the C-code, since it has to be recompiled for the custom CPUs.
NaCl looks promising, but it seems to be in development still.
I've had a look at OpenSSL, but it does a lot more than digital signatures and stripping out just the signature verification code was non-trivial.
Am I looking at it the wrong way?
I tried implementing SHA+RSA first, but I wasn't sure if the padding step was correct (which means that it probably wasn't secure), so I decided to post here instead for help.
EDIT: Clarification: only the verification part has the tough constraints on it. Signing and key generation will run on normal PCs.
Take a look at mbed TLS (formerly known as PolarSSL):
mbed TLS (formerly known as PolarSSL) makes it trivially easy for developers to include cryptographic and SSL/TLS capabilities in their (embedded) products, facilitating this functionality with a minimal coding footprint.
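For the verification side, a sketch along these lines shows how little code it takes. This assumes an RSA key and SHA-256 (i.e., PKCS#1 v1.5 verification); the key, data, and signature buffers are placeholders you supply:

    #include <string.h>
    #include "mbedtls/pk.h"
    #include "mbedtls/md.h"

    /* Hypothetical sketch: verify an RSA/SHA-256 signature with mbed TLS.
       Returns 0 on success, a negative mbed TLS error code otherwise. */
    int verify_sig(const char *key_pem,
                   const unsigned char *data, size_t data_len,
                   const unsigned char *sig, size_t sig_len)
    {
        unsigned char hash[32];
        mbedtls_pk_context pk;
        int ret;

        mbedtls_pk_init(&pk);
        /* For PEM input the length must include the terminating '\0'. */
        ret = mbedtls_pk_parse_public_key(&pk, (const unsigned char *)key_pem,
                                          strlen(key_pem) + 1);
        if (ret == 0)
            ret = mbedtls_md(mbedtls_md_info_from_type(MBEDTLS_MD_SHA256),
                             data, data_len, hash);
        if (ret == 0)   /* PKCS#1 v1.5 verification for RSA keys */
            ret = mbedtls_pk_verify(&pk, MBEDTLS_MD_SHA256,
                                    hash, sizeof(hash), sig, sig_len);
        mbedtls_pk_free(&pk);
        return ret;
    }

The library is modular, so for verification-only targets you can compile in just the pieces you need (bignum, RSA, hash) via its configuration header, which keeps the footprint small.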
How you can implement such a solution depends on the CPU and memory architecture you have available, so you would have to tell us more about your system. One way would be to offload the work to the cloud. Another alternative would be SCL. You can also find some answers at Small RSA or DSA lib without dependencies.
I looked at some other questions on SO and it's not clear whether C is built on top of, under, or alongside the WINAPI. For example, could someone write something in pure C that was able to open a window, or would they need to use the Windows API?
I noticed similarities between the C (library?) version of opening a file (fopen) vs. the Windows API version (CreateFile), which makes me wonder if one is just a wrapper for the other. Does anyone know?
If Windows is running, is a programmer forced to use the Windows API to get something running on it, or can the programmer skip the Windows API entirely and directly access the hardware (i.e., does the Windows operating system protect access to the hardware)?
Which is more portable between different versions of Windows and Windows CE? The documentation I found (which has now changed) used to say that CreateFile only goes back to version 2.0 of Windows CE (here: http://msdn.microsoft.com/en-us/library/ms959950.aspx; notice the note at the very bottom of the linked page showing that the supported-version information has been changed). So what is one supposed to use for Windows CE version 1? In other words, is programming using C functions or the functions labeled WINAPI more likely to work on all versions of Windows CE?
I read the following in a book about programming Windows CE, and it confused me, so all of the above questions can be better understood in the context of making sense of the following:
Windows CE supports most of the same file I/O functions found on Windows NT and Windows 98. The same Win32 API calls, such as CreateFile, ReadFile, WriteFile, and CloseFile, are all supported. A Windows CE programmer must be aware of a few differences, however. First of all, the standard C file I/O functions, such as fopen, fread, and fprintf, aren't supported under Windows CE. Likewise, the old Win16 standards, _lread, _lwrite, and _llseek, aren't supported. This isn't really a huge problem because all of these functions can easily be implemented by wrapping the Windows CE file functions with a small amount of code.
My understanding of wrapping is that you have to have something to wrap; given that Win16 and the C library are apparently not available, is he suggesting wrapping the CreateFile function to make your own C-like version of fopen? (The only other thing I am aware of is assembly, and if that were what he was suggesting to wrap, it wouldn't be written in such a casual manner.)
Given the above, what is the dependency relationship between the C language (syntax, data structures, flow control), the C function library (e.g., fopen), and the Windows API (e.g., CreateFile)?
C existed long before Windows did. The Windows API is a bunch of libraries written in C. It may or may not be possible to duplicate its functionality yourself, depending on what Microsoft has documented or made available through the API. At some level it is likely that fopen() and CreateFile() each call the same or a similar operating system service, but it's unlikely that one is a strict wrapper for the other. It would probably be difficult to bypass the Windows API to access the hardware directly, but anything is possible given enough time and programming effort.
C doesn't know anything about GUIs, and VERY little about operating systems at all. Anything you do graphics-wise in C is through the use of libraries, of which the Win32 API is an example.
The windows API is implemented in the C programming language. Functionality provided by the C standard libraries, such as fopen, is portable because it is compiled down to the appropriate assembly code for different architectures by different compilers. Windows API functions such as CreateFile only work on machines running Windows and are therefore not portable.
In theory it's possible to write C that talks directly to the hardware. Back in the days of MS-DOS (for one example) quite a few of us did on a fairly regular basis (since MS-DOS simply didn't provide what we needed). Edit: On some small embedded systems, it's still quite commonplace, but on typical desktop systems and such this has mostly disappeared.
Two things have changed. First, modern systems such as Linux and Windows are much more complete, so there's a lot less need to deal directly with the hardware. Second, most systems now run in protected mode, so normal user code can't talk directly to the hardware -- it has to go through some sort of device driver.
Yes, most of the C library uses the underlying OS so (for example) on Windows, fopen and fwrite will eventually call CreateFile and WriteFile, but on Linux they'll eventually call open and write instead.
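To make the "wrapper" idea concrete, here is a hypothetical minimal fopen-like shim over CreateFile, similar in spirit to what the Windows CE book means by wrapping. The name ce_fopen is made up, and a real shim would also track modes, buffering, and errno:

    #include <windows.h>

    /* Windows CE is Unicode-only, hence CreateFileW and a WCHAR path.
       Maps just "r" and "w"; everything else is left as an exercise. */
    HANDLE ce_fopen(const WCHAR *path, char mode)
    {
        if (mode == 'r')
            return CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                               OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (mode == 'w')
            return CreateFileW(path, GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        return INVALID_HANDLE_VALUE;
    }

fread/fwrite become thin wrappers around ReadFile/WriteFile in the same way, and fclose becomes CloseHandle; that is all the book means by wrapping.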
I noticed similarities between the C (library?) version of opening a file (fopen) vs. the Windows API version (CreateFile)
Not surprising. They do similar things.
[Is] one just a wrapper for the other? Does anyone know?
You can't find out because the source code is owned and kept as a trade secret.
It doesn't matter which is more "fundamental". You use the Windows API from a Windows program. You use C APIs from C programs.
Notice that it doesn't matter: you can use C APIs and Windows APIs intermixed.
If Windows is running, is someone forced to use the Windows API to get something running on it, or can they bypass Windows entirely and directly access the hardware?
"Directly access the hardware"? What does that mean? If windows is running, then.... well... Windows is running. Windows mediates your access to the hardware.
Use Boot Camp or GRUB or some other bootloader to bypass Windows and have "direct access to the hardware".
If they can, is it possible to damage the hardware if you don't know what you're doing?
What does this mean? Are you asking if you can "damage" some rotating media (i.e., disks) by misusing their drivers? You can corrupt your hard disk no matter what OS you're running or not running. A privileged account and dumb software can write bad data on a disk. Does that count as "damage"?
Which is more portable?
What does that mean? To another Windows computer? To a computer not running Windows? What are you asking about? Please clarify your question to define what you mean by "portable".
between different versions of Windows
Since different Windows versions are mutually incompatible, I generally suggest using only the POSIX standard libraries and avoiding all Windows APIs.
However, some Windows variants (e.g., Windows Mobile for phones vs. Windows "Server") are essentially totally incompatible. There is very little reason for any piece of software to run on both OSes. Portability doesn't much matter. Why try to run a phone app on a server?
Edit
So there's the C language on the bottom (closest to the hardware), then the Windows API next, then the C library on top of the Windows API?
This doesn't make sense. You're mixing up two unrelated things. The "language" and the "libraries" have little to do with each other.
Also, the API is not the operating system. So by using Windows "API" all the time, you're making this more confusing than it needs to be.
Here's a way to look at this.
The Windows Operating System has several API's. There are underlying function libraries that are not part of the application interface. They're "internal".
It has a native Windows API. Callable from C.
It has a POSIX API. Callable from C. The POSIX API generally uses the Windows API underneath.
Most operating systems, including Windows, are written in C (and/or assembler). The C library is then modified for each operating system to do the basic stuff (sockets, files, memory, etc.).
The WINAPI is just a bunch of libraries (written in C and/or Assembler) that allow access to functionality within the OS.
This is not Windows-related. After you changed your question, I think what you are trying to understand is the bootstrapping of an OS (Windows or any other).
The book Operating Systems Design and Implementation discusses the implementation of Minix (which inspired Linux).
The WINAPI provides an interface that C developers can call to use the operating system's functionality. C++ programs can also use it.
Operating systems such as Windows contain WINAPI libraries that provide access to some operating-system functionality and sometimes to the hardware. These libraries are written in C.
Carl Norum pointed out that C existed long before Windows, but don't forget that the Windows API kind of started with the MS-DOS API, which kind of started with the CP/M API. C only existed a short time before CP/M.
Lots of answers seem to imply that the Windows API is built on C, but that seems doubtful too. __stdcall is a synonym for PASCAL, which was a keyword in Microsoft's C compilers because the Windows API was built on Pascal. __cdecl is the default for function calls in C and C++ programs compiled by Visual Studio, but it doesn't work for calls to Windows APIs.
The relationship between C and the Windows API is that they are capable of working with each other.
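A small illustration of that calling-convention split (GetTickCount is just a convenient real API to point at):

    #include <windows.h>
    #include <stdio.h>

    /* Windows API entry points are stdcall (WINAPI expands to __stdcall),
       while plain C functions default to cdecl, so function-pointer types
       must carry the right convention or 32-bit builds corrupt the stack. */
    typedef DWORD (WINAPI *TickFn)(void);   /* stdcall pointer type */

    int main(void)
    {
        TickFn tick = GetTickCount;         /* fine: conventions match */
        printf("%lu\n", (unsigned long)tick());
        return 0;
    }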
As a fun note, you can really get a handle on the 'power' of the Windows API by taking a look at AutoIt http://www.autoitscript.com/autoit3/. AutoIt is a great little scripting language that can create GUIs, run command line apps, manipulate windows and processes, etc. Yes, it does File I/O and networking.
I am new to the parallel computing world. Can you tell me whether it is possible to run C++ code that uses MPI routines on my dual-core laptop, or is there a simulator/emulator for doing that?
Most MPI implementations use shared memory for communication between ranks that are located on the same host. Nothing special is required in terms of setting up the laptop.
Using a dual-core laptop, you can run two ranks, and the OS scheduler will tend to place them on separate cores. The WinXP scheduler tends to enforce some degree of "CPU binding" because, by default, jobs tend to be scheduled on the core where they last ran. However, most MPI implementations also allow for explicit "CPU binding" that will force a rank to be scheduled on one specific core. The syntax for this is non-standard and must be obtained from the specific implementation's documentation.
You should try to use "the same" version and implementation of MPI on your laptop that the university computers are running. That will help to ensure that the MPI runtime flags are the same.
Most MPI implementations ship with some kind of "compiler wrapper" or at least a set of instructions for building an application that will include the MPI library. Either use those wrappers, or follow those instructions.
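As a concrete sanity check, assuming an implementation like Open MPI or MPICH is installed, the classic two-rank test looks like this:

    #include <mpi.h>
    #include <stdio.h>

    /* Minimal MPI sanity check: each rank reports its id. */
    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }

Compile it with the wrapper and launch two ranks on the two cores: mpicc hello.c -o hello && mpirun -np 2 ./hello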
If you are interested in a simulator of MPI applications, you should probably check SMPI.
This open-source simulator (in which I'm involved) can run many MPI C/C++/Fortran applications unmodified, and forecast rather accurately the runtime of the application, provided that you have an accurate description of your hardware platform. Both online and offline studies are possible.
There are many other advantages to using a simulator to study MPI applications:
Reproducibility: several runs lead to the exact same behavior unless you specify otherwise. You won't have any heisenbugs where adding some more tracing changes the application behavior;
What-if analysis: the ability to test on a platform that you don't have access to, or that is not built yet;
Clairvoyance: you can observe every part of the system, even at the network core.
For more information, see this presentation or this article.
The SMPI framework can even formally study the correctness of MPI applications through exhaustive testing, as shown in that presentation.
MPI messages are transported via TCP networking (there are other high-performance possibilities like shared memory, but networking is the default). So it doesn't matter at all where the application runs, as long as the nodes can connect to each other. I guess that you want to test the application on your laptop, so the nodes all run locally and can easily connect to each other via the loopback network.
I am not quite sure if I do understand your question, but a laptop is a computer just like any other. Providing you have set up your MPI libs correctly and set your paths, you can, of course, use MPI routines on your laptop.
As far as I am concerned, I use Debian Linux (http://www.debian.org) for all my parallel stuff. I have written a little article on how to get MPI running on Debian machines. You may want to refer to it.
We have two web servers with load balancing. We need to share some files between those servers. These would be uploaded files, session files, various files that php applications create.
We don't want to use a heavyweight, no-longer-maintained, or commercial solution. We're looking for some lightweight open-source software that would work as a shared file system. It should be really easy to set up, must be highly available, and must be very fast. It should work with Red Hat Linux.
We looked at solutions like DRBD with synchronous replication, but we can't use them because they can't work with an ordinary filesystem like ext3 on top.
OCFS may be up to snuff by now; it's worth checking out at least. It's in the mainline Linux kernel tree, and http://oss.oracle.com/projects/ocfs2/ has some info on it. I've set it up before, and it was pretty easy to get going.
DRBD is good for syncing over a network (use a direct crossover connection if at all possible), but ext3 is not designed to be aware of changes that occur underneath it, at the block-device level. For that reason you need a filesystem designed for such purposes, such as the Global File System (GFS). To the best of my knowledge, Red Hat has support for GFS.
The DRBD manual will give you an overview of how to use GFS with DRBD.
http://www.drbd.org/users-guide/ch-gfs.html
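For orientation, a dual-primary resource of the kind that guide describes looks roughly like this (hostnames, devices, and addresses are placeholders; dual-primary is only safe under a cluster filesystem like GFS):

    resource r0 {
      protocol C;                 # synchronous replication
      net {
        allow-two-primaries;      # both nodes may be primary at once
      }
      on node1 {
        device    /dev/drbd0;
        disk      /dev/sda7;
        address   10.1.1.31:7789;
        meta-disk internal;
      }
      on node2 {
        device    /dev/drbd0;
        disk      /dev/sda7;
        address   10.1.1.32:7789;
        meta-disk internal;
      }
    }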
Don't take this as a final answer - I have not researched or used a multi-master system before, but at least this might give you something to go on.
Ideally, you would only sync the part of the data that's shared between the webservers.
What things should be kept most in mind when writing cross-platform applications in C? Targeted platforms: 32-bit Intel based PC, Mac, and Linux. I'm especially looking for the type of versatility that Jungle Disk has in their USB desktop edition ( http://www.jungledisk.com/desktop/download.aspx )
What are tips and "gotchas" for this type of development?
I maintained for a number of years an ANSI C networking library that was ported to close to 30 different OS's and compilers. The library didn't have any GUI components, which made it easier. We ended up abstracting out into dedicated source files any routine that was not consistent across platforms, and used #defines where appropriate in those source files. This kept the code that was adjusted per platform isolated away from the main business logic of the library. We also made extensive use of typedefs and our own dedicated types so that we could easily change them per platform if needed. This made the port to 64-bit platforms fairly easy.
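As an illustration of that pattern (all names here are hypothetical), the per-platform details live behind one header, with one source file per platform implementing it:

    /* net_platform.h -- sketch of the pattern described above:
       one platform-neutral interface, one implementation per platform. */
    #ifndef NET_PLATFORM_H
    #define NET_PLATFORM_H

    #ifdef _WIN32
      #include <winsock2.h>
      typedef SOCKET net_socket_t;     /* wide enough on Win64, too */
    #else
      typedef int net_socket_t;        /* POSIX file descriptor */
    #endif

    /* Implemented separately in net_win32.c, net_posix.c, ... */
    net_socket_t net_connect(const char *host, unsigned short port);
    void net_close(net_socket_t s);

    #endif /* NET_PLATFORM_H */

The business logic includes only this header, so porting means writing one new implementation file rather than sprinkling #ifdefs everywhere.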
If you are looking to have GUI components, I would suggest looking at GUI toolkits such as wxWidgets (formerly wxWindows) or Qt (both are C++ libraries).
Try to avoid platform-dependent #ifdefs, as they tend to grow exponentially when you add new platforms. Instead, try to organize your source files as a tree with platform-independent code at the root, and platform-dependent code on the "leaves". There is a nice book on the subject, Multi-Platform Code Management. Sample code in it may look obsolete, but ideas described in the book are still brilliantly vital.
Further to Kyle's answer, I would strongly recommend against trying to use the POSIX subsystem in Windows. It's implemented to an absolute bare-minimum level, such that Microsoft can claim "POSIX support" on a feature-sheet tick box. Perhaps somebody out there actually uses it, but I've never encountered it in real life.
One can certainly write cross-platform C code, you just have to be aware of the differences between platforms, and test, test, test. Unit tests and a CI (continuous integration) solution will go a long way toward making sure your program works across all your target platforms.
A good approach is to isolate the system-dependent stuff in one or a few modules at most. Provide a system-independent interface from that module. Then build everything else on top of that module, so it doesn't depend on the system you're compiling for.
XVT has a cross-platform GUI C API that has been mature for 15+ years and sits on top of the native windowing toolkits. See www.xvt.com.
They support at least Linux, Windows, and Mac.
Try to write as much as you can with POSIX. Mac and Linux support POSIX natively and Windows has a system that can run it (as far as I know - I've never actually used it). If your app is graphical, both Mac and Linux support X11 libraries (Linux natively, Mac through X11.app) and there are numerous ways of getting X11 apps to run on Windows.
However, if you're looking for true multi-platform deployment, you should probably switch to a language like Java or Python that's capable of running the same program on multiple systems with little or no change.
Edit: I just downloaded the application and looked at the files. It does appear to have binaries for all 3 platforms in one directory. If your concern is in how to write apps that can be moved from machine to machine without losing settings, you should probably write all your configuration to a file in the same directory as the executable and not touch the Windows registry or create any dot directories in the home folder of the user that's running the program on Linux or Mac. And as far as creating a cross-distribution Linux binary, 32-bit POSIX/X11 would probably be the safest bet. I'm not sure what JungleDisk uses as I'm currently on a Mac.
There do exist quite a few portable libraries. Just some examples I've worked with in the past:
1) glib and gtk+
2) libcurl
3) libapr
Those cover nearly every platform, so they are extremely useful tools.
POSIX is fine on Unices, but I doubt it's that great on Windows; besides, it gives us nothing for portable GUIs.
I also second the recommendation to separate code for different platforms into different modules/trees instead of ifdefs.
Also, I recommend checking beforehand what the differences between your platforms are and how you can abstract them. Some of it is OS-related stuff (e.g., the annoying CR, CRLF, LF line endings in text files), some of it hardware-related. For example, the previously mentioned POSIX compatibility doesn't stop you from writing:
int c;
fread(&c, sizeof(int), 1, file);
But on different hardware platforms the internal memory layout can be completely different (endianness), forcing you to use conversion functions on some of the target platforms.
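A portable alternative is to read the bytes individually and assemble the value yourself, so the file's byte order stays explicit no matter what the host's endianness is. A minimal sketch:

    #include <stdint.h>
    #include <stdio.h>

    /* Reads a 32-bit big-endian value byte by byte; works on any host
       endianness because the byte order is spelled out explicitly. */
    int read_u32_be(FILE *f, uint32_t *out)
    {
        unsigned char b[4];
        if (fread(b, 1, sizeof b, f) != sizeof b)
            return -1;
        *out = ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16)
             | ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
        return 0;
    }

The matching writer does the shifts in reverse; ntohl/htonl do the same job where the on-disk format is big-endian.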
You can use NAppGUI for both console and desktop apps. The SDK uses ANSI-C and your code will work on Windows/macOS/Linux.
https://www.nappgui.com
It's free and open source.