Detect execution within Virtual-Machine, in Linux? - licensing

I need a Linux application to detect whether it is executing in a virtual machine (such as VMware ESX/ESXi, Xen, Oracle VirtualBox, Microsoft Virtual Server, etc.). Based on the outcome of this detection, some software licensing rules need to be enforced.
I am aware that there are some commercial software libraries/frameworks meant for licensing that can perform such detection, but we need to roll our own for several reasons.
What are some of the ways and means to achieve such detection?

Modern hardware-accelerated (VT-x) hypervisors (VMware, VirtualBox, KVM/QEMU) all set the "hypervisor" CPUID bit, which is quite simple to read from Linux.
Open the file /proc/cpuinfo and look for "hypervisor" in the flags line.
It's not 100%, though. Software-based hypervisors (Bochs, etc.) will not set it, and the bit is not enforced, so one could, for instance, modify QEMU to not set the bit. But it might be enough for your uses.
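A minimal sketch of that check in C: it simply scans the flags line of /proc/cpuinfo for the "hypervisor" feature name, and as noted above, the absence of the flag does not prove you are on bare metal.

    /* Report whether the "hypervisor" CPUID flag shows up in /proc/cpuinfo. */
    #include <stdio.h>
    #include <string.h>

    static int hypervisor_flag_present(void)
    {
        FILE *f = fopen("/proc/cpuinfo", "r");
        char line[4096];
        int found = 0;

        if (!f)
            return -1;                      /* could not read cpuinfo */

        while (fgets(line, sizeof(line), f)) {
            /* Feature names are space-separated on the "flags" line. */
            if (strncmp(line, "flags", 5) == 0 && strstr(line, " hypervisor")) {
                found = 1;
                break;
            }
        }
        fclose(f);
        return found;
    }

    int main(void)
    {
        printf(hypervisor_flag_present() == 1 ? "hypervisor flag set\n"
                                              : "hypervisor flag not set\n");
        return 0;
    }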

Related

Windows RDP vs. Various Versions of NX

Can anyone weigh in on the speed, responsiveness, reliability and flexibility of the following two options:
Using RDP to remotely access a Windows machine from a Windows machine
Using NX to remotely access a Linux machine from a Linux machine (or a Windows machine if not much different)
The application I would run on either guest is the same. If the approaches perform about as well as each other, I'd prefer the second for security reasons pertaining to Linux. However if NX is going to be significantly slower, I may reluctantly go with RDP and Windows for the time being.
Please mention the variety or varieties of NX you have experience with (FreeNX, NeatX, x2go, etc.) Thanks!
The short answer is that it really depends on what the application does: NX can be quite efficient with applications that draw using X11 primitives, but much less so for graphics/video-heavy ones.
As for the different NX varieties: FreeNX is unmaintained, as is NeatX, and you forgot winswitch.
When it comes to performance, which implementation you choose makes no difference, since they all rely on the same NX libraries for remoting the display (assuming they do not misconfigure the link parameters). The only thing they change is the way they manage the sessions (the UI) and the client-server protocol used for the GUI app (which has no impact on performance).
xpra is now at least as efficient as RDP and NX, so you may also want to give it a go.

Getting hardware information using java/jna to work on all operating systems

Hi, I am trying to write a Java applet that will get some hardware info: MAC address (which I have done), CPU ID, motherboard serial number and hard drive serial number. I know I need to use JNA to do this. My question is, is there a way in C/C++ to get that information that is not platform dependent? Everything I have seen would work only on Windows, and I need it to work on all platforms. I need this information so I can create a unique ID for that computer. Any help or a pointer in the right direction would be much appreciated.
My question is, is there a way in c/c++ to get that information that is not platform dependent?
Not possible. Heck, even within the PC market, querying e.g. the BIOS version differs from one motherboard manufacturer/OEM to another. And that is if the PC still has a BIOS rather than the newer EFI.
Sun/SPARC machines are a notable exception: their hardware has a relatively unique ID, provisioned mainly for inventory purposes. It is not precisely unique (which brings up another point), as that might infringe on privacy, and Sun had no choice but to make sure it is not globally unique.
In other words, unique identification of the hardware is illegal in many parts of the world, so no reliable (let alone portable) method exists to achieve what you want.
I'd say binding to the MAC address should already be good enough, and that information is rather easy to access on pretty much all platforms. As long as your license check is lenient enough to give the user sufficient time to receive a new license key (in case of hardware replacement), there should be few problems.
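As an illustration, here is a rough Linux-only sketch of reading a MAC address with the SIOCGIFHWADDR ioctl. "eth0" is just a placeholder interface name; real code would enumerate interfaces (e.g. with getifaddrs()) and skip loopback or virtual ones, and other platforms need their own calls (e.g. GetAdaptersInfo on Windows).

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>

    int main(void)
    {
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);      /* any socket works for this ioctl */

        if (fd < 0)
            return 1;

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);  /* placeholder interface name */

        if (ioctl(fd, SIOCGIFHWADDR, &ifr) == 0) {
            const unsigned char *mac = (const unsigned char *)ifr.ifr_hwaddr.sa_data;
            printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
                   mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
        }
        close(fd);
        return 0;
    }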
There's a project called OSHI that aims to do that. It's looking for contributors to write the *nix implementation.

Why no good extN drivers for Windows?

Why are there no good drivers for Windows for reading ext2/3/4 filesystems? Googling around indicates that there are two or three out there, but all of them have problems. Is there some technical inconsistency that makes it difficult to correctly code up something that would enable me to open up My Computer and work with an extN partition just like NTFS or FAT? I thought one of the benefits of open source and standards was that problems like this should be solved fairly quickly.
Driver signing.
Microsoft's driver signing is by its own nature incompatible with the GPL and unsigned drivers don't work anymore.
I haven't used it myself, but a coworker of mine has used Ext2 IFS for Windows without any problems.
One of the benefits of open source and standards is that problems like this can be solved fairly quickly. If no one is sufficiently motivated to work on a problem - whether that motivation comes from money, personal need, fame, whatever - then the problem is unlikely to get solved. (The closed source world is no different.) It probably doesn't help that relatively few open source developers have experience hacking on Windows kernel mode device drivers. Writing device drivers is a specialized skill. There are developers who understand the ext2/3/4 code very well and are very willing to work on it, but odds are that the people experienced enough at hacking on the Linux kernel to work on the ext2/3/4 drivers are probably primarily Linux users (and so don't much care about writing drivers for Windows).
With regards to driver signing: It's my understanding that, starting with Windows Vista, Microsoft doesn't have to sign or certify your drivers for them to be installed without warnings, but you do need a code signing certificate. These are somewhere in the neighborhood of $400 - $500 a year (see Verisign's web site, for example), and most non-commercial developers aren't interested in paying out that kind of money. There are methods for disabling driver signing requirements, but none of them are something the average user is likely to try, which would hinder the acceptance of a non-signed driver.
I don't know how the Ext2 IFS for Windows handles it; either its author got a certificate somehow, or it requires that you disable the driver signing requirements.
So, to summarize, the best ext2/3/4 developers probably don't have much need for Windows, and driver signing discourages would-be open source driver developers for Windows, and the availability of NTFS for Linux means that you can use NTFS instead of ext2/3/4 to share data between Linux and Windows. These three factors work together to remove a lot of the interest in developing ext2/3/4 for Windows.

Cross Platform System Usage Metrics

Is there a cross platform C API that can be used to get system usage metrics?
I've worked with libstatgrab before. It gets you some pretty useful system statistics for the main Unix-like variants and Windows through Cygwin (supposedly - never tried). Different OSes work so differently - especially when it comes to usage metrics - that it may be challenging to get what you want. Even something as simple-sounding as "free memory" can be tricky to act on in a cross-platform way. If you narrow things down a bit, maybe we can find something.
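For a feel of the library, here is a rough sketch against the pre-0.90 libstatgrab interface, where the sg_* getters take no arguments and return a pointer; the 0.90 series changed these signatures, so check the headers of the version you actually link against (typically with -lstatgrab).

    #include <stdio.h>
    #include <statgrab.h>

    int main(void)
    {
        if (sg_init() != 0) {                       /* initialise the library */
            fprintf(stderr, "sg_init failed\n");
            return 1;
        }

        sg_mem_stats *mem = sg_get_mem_stats();     /* memory usage snapshot */
        sg_load_stats *load = sg_get_load_stats();  /* 1/5/15 minute load averages */

        if (mem)
            printf("memory: %llu bytes total, %llu bytes free\n",
                   (unsigned long long)mem->total, (unsigned long long)mem->free);
        if (load)
            printf("load: %.2f %.2f %.2f\n", load->min1, load->min5, load->min15);
        return 0;
    }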
Unfortunately not.
The C standard is pretty much limited to dynamic allocation, string manipulation, math and text I/O. Once you get beyond that, you need OS APIs which by definition are OS specific and not cross platform.
Depending on the metrics you want to collect, you may want to consider looking at PCP (Performance Co-Pilot). This is an open-source performance framework, originally developed at Silicon Graphics, which collects and collates a vast number of possible metrics from a vast number of sources, and lets you monitor them from anywhere.
Basically PCP would involve adding another 'layer' into your system -- for example, you might monitor a distributed cluster of mixed-OS machines, each with PCP installed locally; a set of 'agents' collect the performance data on each machine, and your code could then use libpcp to collect those metrics as required.
It's hard to say without knowing your exact usage scenario (if you're talking about something running seamlessly on end-users' machines, PCP may not suit, but if you want to monitor machines which you control, and are happy to run the PCP service on them, it's an awesome solution).
We use PCP very happily to collect metrics from Windows and Linux boxes, as well as internal metrics from our application, and log them all centrally, report on them, monitor trends, etc.
Good old SNMP provides C libraries for the client and server side. If you prefer something newer, you should try Prometheus. They have Node Exporter ready to use and client libraries for many languages, including C.
Also note that PCP (mentioned in Cowan's answer) has supported the OpenMetrics standard since version 4, so if you decide to use Prometheus as the metric data collector, extending PCP or writing a custom Prometheus client is a better solution than SNMP. SNMP is also supported by Prometheus through the SNMP Exporter, but it is more difficult to set up due to the way it handles MIBs and authentication.

Running MPI code in my laptop

I am new to the parallel computing world. Can you tell me whether it is possible to run C++ code that uses MPI routines on my dual-core laptop, or is there any simulator/emulator for doing that?
Most MPI implementations use shared memory for communication between ranks that are located on the same host. Nothing special is required in terms of setting up the laptop.
Using a dual-core laptop, you can run two ranks and the OS scheduler will tend to place them on separate cores. The WinXP scheduler tends to enforce some degree of "cpu binding" because, by default, jobs tend to be scheduled on the core where they last ran. However, most MPI implementations also allow for an explicit "cpu binding" that will force a rank to be scheduled on one specific core. The syntax for this is non-standard and must be taken from the specific implementation's documentation.
You should try to use "the same" version and implementation of MPI on your laptop that the university computers are running. That will help to ensure that the MPI runtime flags are the same.
Most MPI implementations ship with some kind of "compiler wrapper" or at least a set of instructions for building an application that will include the MPI library. Either use those wrappers, or follow those instructions.
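To sanity-check a local setup, the classic hello-world is enough. This assumes an implementation such as Open MPI or MPICH that provides the usual wrappers: build with mpicc hello.c -o hello, then run mpirun -np 2 ./hello to start two ranks on the laptop.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank = 0, size = 0;

        MPI_Init(&argc, &argv);                 /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }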
If you are interested in a simulator of MPI applications, you should probably check SMPI.
This open-source simulator (in which I'm involved) can run many MPI C/C++/Fortran applications unmodified, and forecast rather accurately the runtime of the application, provided that you have an accurate description of your hardware platform. Both online and offline studies are possible.
There are many other advantages to using a simulator to study MPI applications:
Reproducibility: several runs lead to the exact same behavior unless you specify otherwise. You won't have any heisenbugs where adding some more tracing changes the application behavior;
What-if analysis: the ability to test on a platform that you don't have access to, or that is not built yet;
Clairvoyance: you can observe every part of the system, even at the network core.
For more information, see this presentation or this article.
The SMPI framework can even formally study the correctness of MPI applications through exhaustive testing, as shown in that presentation.
MPI messages are transported via TCP networking (there are other high-performance possibilities like shared memory, but networking is the default). So it doesn't matter at all where the application runs, as long as the nodes can connect to each other. I guess that you want to test the application on your laptop, so the nodes all run locally and can easily connect to each other via the loopback network.
I am not quite sure if I do understand your question, but a laptop is a computer just like any other. Providing you have set up your MPI libs correctly and set your paths, you can, of course, use MPI routines on your laptop.
As far as I am concerned, I use Debian Linux (http://www.debian.org) for all my parallel stuff. I have written a little article dealing with how to get MPI running on Debian machines. You may want to refer to it.

Resources