Windows RDP vs. Various Versions of NX

Can anyone weigh in on the speed, responsiveness, reliability and flexibility of the following two options:
Using RDP to remotely access a Windows machine from a Windows machine
Using NX to remotely access a Linux machine from a Linux machine (or a Windows machine if not much different)
The application I would run on either guest is the same. If the approaches perform about as well as each other, I'd prefer the second for security reasons pertaining to Linux. However if NX is going to be significantly slower, I may reluctantly go with RDP and Windows for the time being.
Please mention the variety or varieties of NX you have experience with (FreeNX, NeatX, x2go, etc.) Thanks!

The short answer is that it really depends on what the application does: NX can be quite efficient with applications that draw using X11 primitives, much less so with graphics- or video-heavy content.
As for the different NX varieties: FreeNX is unmaintained, so is NeatX, and you forgot winswitch.
When it comes to performance, which implementation you choose makes little difference, since they all rely on the same NX libraries for remoting the display (assuming they do not misconfigure the link parameters); the only things they change are the way they manage sessions (the UI) and the client-server protocol used for the GUI app (which has no impact on performance).
You may also want to give xpra a go; it is now at least as efficient as RDP and NX.

Related

Is there any way to reduce the amount of RAM in a system using software?

I am a beginner at coding and working on a web application. I am currently working on a desktop with 16 GB of DDR4 memory. However, potential clients in the future would be using laptops and most likely have only 8 GB of RAM, potentially with a slower clock speed. While I could turn off my computer and take out a stick of memory for testing, I was wondering if there is a software-based solution in Windows that would allow me to temporarily shut off a stick of memory, or limit the amount that is allowed to be used, so that I could run some tests. If anyone knows how to do this I would greatly appreciate any help. If it helps to know, my development environment is VS Code and I am using the MERN stack (React on the front end).
I think it's best to use virtual machine software to create a machine with your desired spec. Using something like this: https://superuser.com/questions/1263090/is-it-possible-to-limit-the-memory-usage-of-a-particular-process-on-windows requires more configuration and control compared to a VM. You can use VirtualBox (https://www.virtualbox.org/); it's free.
Your OS by default takes the RAM it's given when booting up. It's nearly impossible to switch off RAM mid-use, at least in Windows: memory management is not something external software is allowed to modify, only the operating system can, at least to my knowledge. You can use a virtual machine to test your software as an ideal solution, but you can't lower the RAM usage of the OS you're running now.

Should applications (like those developed in C) add container (say Docker) support?

Until now, I was under the impression that container technology (e.g. Docker) provides the required isolation and OS-level virtualisation, and that applications running in the container are restricted by namespaces, cgroups, AppArmor/SELinux and capabilities, with no way to figure out the host environment they are in. But it seems this understanding is not 100% correct.
From Wikipedia's article on OS-level virtualization:
OS-level virtualization is an operating system paradigm in which the kernel allows the existence of multiple isolated user space instances. Such instances, called containers (LXC, Solaris containers, Docker), Zones (Solaris containers), virtual private servers (OpenVZ), partitions, virtual environments (VEs), virtual kernels (DragonFly BSD), or jails (FreeBSD jail or chroot jail), may look like real computers from the point of view of programs running in them. A computer program running on an ordinary operating system can see all resources (connected devices, files and folders, network shares, CPU power, quantifiable hardware capabilities) of that computer. However, programs running inside of a container can only see the container's contents and devices assigned to the container.
From the above quote, it seems containers only add isolation and abstraction, not anything like virtualization.
For instance, the Java team had to add container support to the JVM so that it does not look into the host environment directly but instead limits itself to the isolation/abstraction provided by Docker.
References:
Java (prior to JDK 8 update 131) applications running in Docker container CPU/memory issues? - with an excellent answer explaining JVM support for Linux containers ("Linux container support first appeared in JDK 10 and then was ported to 8u191").
How to prevent Java from exceeding the container memory limits?
Does this mean that a C program running in a container environment has a way to bypass the restrictions and access/read details of the host environment? Of course, when it tries to do anything (i.e. use this information) beyond what the container is allowed to do, the container engine might kill the container's process.
So, if I am developing a C/C++ application which requests/queries host resources like CPU/memory/devices etc., is it my responsibility to make sure the application runs as expected in container environments by adding container support?
Although I doubt this will be a popular answer, my view is that applications that might ever run in a container environment must be provided with ways to specify resource limits explicitly. It's a mistake to rely on information queried from the system.
Containers are not full virtualization environments, and generally do not conceal the underlying platform completely. While containers might be, and generally are, isolated from their host at the network, filesystem, and user level, that doesn't mean they are truly independent. A typical problem I encounter is that a container can't get its own contribution to the system's load average -- only the load average of the host.
The fact that there is no full virtualization does not mean that the host cannot enforce limits -- it generally can, and does. But it means that the container cannot easily find out what they are -- not in a robust, platform-neutral way.
Various heuristics are available to the container. For example, it might be able to parse /proc/self/cgroup. This may, or may not, give useful information, depending on the implementation. This approach is only ever going to give useful information if the host is using control groups -- most current implementations do, but that doesn't mean it's mandatory.
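A related variant of that heuristic is to read the limit files that control groups expose. The following minimal C sketch tries the cgroup v2 memory limit first, then falls back to the v1 file; it assumes the usual /sys/fs/cgroup mount point and a host that actually uses control groups, so on other setups these files may simply not exist.

/* Hedged sketch of the cgroup heuristic: try the cgroup v2 memory limit,
 * then fall back to the cgroup v1 file. Assumes the default mount point
 * /sys/fs/cgroup; other hosts may mount the hierarchy elsewhere. */
#include <stdio.h>
#include <string.h>

/* Returns the limit in bytes, or -1 if no usable limit was found. */
static long long cgroup_memory_limit(void) {
    const char *paths[] = {
        "/sys/fs/cgroup/memory.max",                   /* cgroup v2 */
        "/sys/fs/cgroup/memory/memory.limit_in_bytes"  /* cgroup v1 */
    };
    for (size_t i = 0; i < sizeof(paths) / sizeof(paths[0]); i++) {
        FILE *f = fopen(paths[i], "r");
        if (f == NULL)
            continue;
        char buf[64] = {0};
        char *line = fgets(buf, sizeof(buf), f);
        fclose(f);
        if (line == NULL)
            continue;
        if (strncmp(buf, "max", 3) == 0)   /* v2 writes "max" when unlimited */
            return -1;
        long long limit;
        if (sscanf(buf, "%lld", &limit) == 1)
            return limit;
    }
    return -1;
}

int main(void) {
    long long limit = cgroup_memory_limit();
    if (limit > 0)
        printf("cgroup memory limit: %lld bytes\n", limit);
    else
        printf("no cgroup memory limit found (or unlimited)\n");
    return 0;
}

Note that cgroup v1 reports a very large number rather than "max" when no limit is set, so a sanity check against the host's physical memory is still advisable.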
There are a number of different container frameworks in current use, and that number is likely to increase. It's going to be difficult to predict what methods will have to be used in future, to make an application container-proof. Better, I think, to provide a way for the user to control limits, than to have an ongoing maintenance task for every piece of software you develop.
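As a minimal sketch of the "let the user control the limits" approach, the program below reads its budgets from environment variables and only falls back to host-wide figures when nothing is specified. The names WORKER_THREADS and MEM_LIMIT_MB are invented for illustration, not any kind of standard, and sysconf(_SC_PHYS_PAGES) is a common Linux/glibc extension rather than strict POSIX.

/* Minimal sketch: prefer limits the operator sets explicitly over anything
 * queried from the system. WORKER_THREADS and MEM_LIMIT_MB are hypothetical
 * variable names used only for this example. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static long env_or_default(const char *name, long fallback) {
    const char *value = getenv(name);
    if (value != NULL) {
        char *end = NULL;
        long parsed = strtol(value, &end, 10);
        if (end != value && parsed > 0)
            return parsed;
    }
    return fallback;
}

int main(void) {
    /* Host-wide fallbacks: inside a container these describe the host,
     * which is exactly why an explicit override should win. */
    long host_cpus = sysconf(_SC_NPROCESSORS_ONLN);
    long long host_bytes = (long long)sysconf(_SC_PHYS_PAGES) * sysconf(_SC_PAGESIZE);

    long threads = env_or_default("WORKER_THREADS", host_cpus);
    long mem_mb  = env_or_default("MEM_LIMIT_MB", (long)(host_bytes / (1024 * 1024)));

    printf("using %ld worker threads, budgeting %ld MB of memory\n", threads, mem_mb);
    return 0;
}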
In a container environment, it often works better to build small, interconnected containers that you can run multiple copies of. Then you can size the environment to the workload, instead of sizing the workload to its environment.
An easier example to think about is a worker process that handles asynchronous tasks. In a non-container environment, a typical setup would be to ask the host how many cores it has, then launch that many threads. This doesn't translate well into containers, exactly because of the sorts of issues you cite. Instead, it's usually better to have your worker process be single-threaded, but then launch as many copies of it as you need to do the work.
If in particular you're running Kubernetes in a cloud environment, there are some real advantages to doing this. In a Kubernetes Deployment you can specify the number of replicas: of a container, and dynamically change that, so you're not tied to the hardware configuration at all. You can use a Kubernetes piece called the horizontal pod autoscaler to automatically set the deployment count based on the queue length. You can use a different Kubernetes piece called the cluster autoscaler to automatically request more cloud compute nodes when the workload gets too big for the current cluster. Underlying this is a basic assumption that individual containers (Kubernetes Pods) are small, stateless, and behave the same on any hardware setup.
The JVM memory limit question you cite faces a similar problem. The default JVM behavior is to use 25% of system memory for the heap, but now the question becomes, how do you decide how much memory that is in the face of per-container resource constraints? Most application runtimes don't have a system-dependent hard memory limit like this, though; you talk about C programs, and malloc() will work fine until you hit the kernel-enforced (either physical or cgroup) memory limit.
So, If I am developing an C/C++ application which requests/queries for host resources like CPU/MEM/Devices ...
...it is inappropriate to run this inside an isolation system like Docker. Run it directly on the host.

Is it possible to use a handheld as a main development platform?

After reading this post and some derivative publications (ddotdash.com) I wonder whether it is possible to use a handheld as a main platform for development of web applications for mobile web browsers.
For web development I use a rather common set of tools: a cheap netbook, Ubuntu 9.10, Ruby on Rails, Vim, Git. I think it is possible to use all of these on a Nokia N900, because it runs Maemo, which is based on Debian (all debs can be installed, and you can always compile problematic packages from source).
Nevertheless, I am concerned with 3 problems:
Display size. I have a 1280x800 resolution on my netbook, and it is convenient for me to have Terminator (multiple consoles), Vim, a file browser, Firefox and some PDF books open at the same time. I wonder whether it would be possible to use all these apps at an 800 px horizontal resolution.
Computing power: a VIA Nano (or Atom) processor does not differ dramatically from the one in the Nokia N900 (at least in MHz); however, I wonder whether 256 MB + 768 MB (virtual) of memory on the Nokia will be enough for my work (I have 3 GB on the netbook now).
Keyboard. Frankly, this is not a problem, because I have a Nokia SU-8W Bluetooth keyboard that is comfortable enough for touch typing. However, it would be interesting to read some comments on this point. [Edit]: The Bluetooth keyboard is not so comfortable after all: the handheld has to be propped up near the keyboard, and it is not easy to read the small screen from such a distance (the keyboard can only be placed on a table or on one's knees).
With solutions to the problems mentioned above, I would have the opportunity to exploit all the wonderful advantages of mobile development platforms, such as:
work from anywhere (it is important for me);
develop for the same form factor that is used by both the developer and the intended users;
pocket-size working tool :)
It may well be possible - the question is how much energy you'll expend compensating for all the restrictions. It's like developing in Notepad: possible, but not a pleasant experience.
I develop a bit on my netbook too, and it's okay - but I wouldn't want to do it all day.
It's certainly quite cool to be able to develop on a handheld device, but I don't think it's really practical for significant amounts of code. If this is for your own personal pleasure and you think the benefits outweigh the costs, that's one thing - but I wouldn't do it for commercial apps.

Cross Platform System Usage Metrics

Is there a cross platform C API that can be used to get system usage metrics?
I've worked with libstatgrab before. It gets you some pretty useful system statistics for the main Unix-like variants, and for Windows through Cygwin (supposedly - never tried). Different OSes work so differently - especially when it comes to usage metrics - that it may be challenging to get what you want. Even something as simple-sounding as "free memory" can be tricky to act on in a cross-platform way. Perhaps if you narrow things down a bit, we can find something.
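For what it's worth, a memory-statistics call with libstatgrab looks roughly like the sketch below. It follows the older (pre-0.90) API; the 0.90 series changed several signatures (sg_init() gained an argument and the getters return arrays), so treat the exact calls as an approximation and check the headers of the version you actually have.

/* Rough libstatgrab example (older, pre-0.90 API); build with -lstatgrab. */
#include <stdio.h>
#include <statgrab.h>

int main(void) {
    if (sg_init() != 0) {           /* 0.90+ expects sg_init(1) instead */
        fprintf(stderr, "sg_init failed\n");
        return 1;
    }
    sg_mem_stats *mem = sg_get_mem_stats();
    if (mem == NULL) {
        fprintf(stderr, "sg_get_mem_stats failed\n");
        return 1;
    }
    printf("total: %lld  used: %lld  free: %lld (bytes)\n",
           mem->total, mem->used, mem->free);
    sg_shutdown();
    return 0;
}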
Unfortunately not.
The C standard is pretty much limited to dynamic allocation, string manipulation, math and text I/O. Once you get beyond that, you need OS APIs, which by definition are OS-specific and not cross-platform.
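To illustrate, even something as basic as "free physical memory" sits behind a different OS API on every platform. A sketch covering just Windows and Linux (macOS, the BSDs and others would each need yet another branch) might look like this:

/* "Free physical memory" needs per-OS code: GlobalMemoryStatusEx on
 * Windows, sysinfo() on Linux. Other platforms need yet more branches. */
#include <stdio.h>

#ifdef _WIN32
#include <windows.h>
static unsigned long long free_memory_bytes(void) {
    MEMORYSTATUSEX status;
    status.dwLength = sizeof(status);
    GlobalMemoryStatusEx(&status);
    return status.ullAvailPhys;
}
#else
#include <sys/sysinfo.h>   /* Linux-specific header */
static unsigned long long free_memory_bytes(void) {
    struct sysinfo info;
    sysinfo(&info);
    return (unsigned long long)info.freeram * info.mem_unit;
}
#endif

int main(void) {
    printf("free physical memory: %llu bytes\n", free_memory_bytes());
    return 0;
}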
Depending on the metrics you want to collect, you may want to consider looking at PCP (Performance Co-Pilot). This is an open-source performance framework, originally developed at Silicon Graphics, which collects and collates a vast number of possible metrics from a vast number of sources, and lets you monitor them from anywhere.
Basically PCP would involve adding another 'layer' into your system -- for example, you might monitor a distributed cluster of mixed-OS machines, each with PCP installed locally; a set of 'agents' collect the performance data on each machine, and your code could then use libpcp to collect those metrics as required.
It's hard to say without knowing your exact usage scenario (if you're talking about something running seamlessly on end-users' machines, PCP may not suit; but if you want to monitor machines which you control, and are happy to run the PCP service on them, it's an awesome solution).
We use PCP very happily to collect metrics from Windows and Linux boxes, as well as internal metrics from our application, and log them all centrally, report on them, monitor trends, etc.
Good old SNMP provides C libraries for both the client and the server side. If you prefer something newer, you should try Prometheus: it has the Node Exporter ready to use and client libraries for many languages, including C.
Also note that PCP (mentioned in Cowan's answer) has supported the OpenMetrics standard since version 4, so if you decide to use Prometheus as the metrics collector, extending PCP or writing a custom Prometheus client is a better solution than SNMP, which is also supported by Prometheus through the SNMP Exporter but is more difficult to set up due to the way it handles MIBs and authentication.

Running MPI code in my laptop

I am new to the parallel computing world. Can you tell me whether it is possible to run C++ code that uses MPI routines on my dual-core laptop, or is there any simulator/emulator for doing that?
Most MPI implementations use shared memory for communication between ranks that are located on the same host. Nothing special is required in terms of setting up the laptop.
Using a dual-core laptop, you can run two ranks, and the OS scheduler will tend to place them on separate cores. The WinXP scheduler tends to enforce some degree of "CPU binding", because by default jobs tend to be scheduled on the core where they last ran. However, most MPI implementations also allow for an explicit "CPU binding" that will force a rank to be scheduled on one specific core. The syntax for this is non-standard and must be taken from the specific implementation's documentation.
You should try to use "the same" version and implementation of MPI on your laptop that the university computers are running. That will help to ensure that the MPI runtime flags are the same.
Most MPI implementations ship with some kind of "compiler wrapper" or at least a set of instructions for building an application that will include the MPI library. Either use those wrappers, or follow those instructions.
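As a quick sanity check of the setup, a minimal program like the one below, compiled with the implementation's wrapper (typically mpicc for C, mpicxx or mpic++ for C++) and launched with something like mpirun -np 2 ./hello_mpi, should print one line per rank on a dual-core laptop. The exact wrapper and launcher names vary between implementations.

/* Minimal MPI check: each rank reports its rank and the total rank count.
 * Build: mpicc hello_mpi.c -o hello_mpi   Run: mpirun -np 2 ./hello_mpi
 * (wrapper and launcher names depend on the MPI implementation). */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}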
If you are interested in a simulator of MPI applications, you should probably check SMPI.
This open-source simulator (in which I'm involved) can run many MPI C/C++/Fortran applications unmodified, and forecast rather accurately the runtime of the application, provided that you have an accurate description of your hardware platform. Both online and offline studies are possible.
There are many other advantages to using a simulator to study MPI applications:
Reproducibility: several runs lead to exactly the same behavior unless you specify otherwise. You won't have any heisenbugs where adding some more tracing changes the application's behavior;
What-if analysis: the ability to test on a platform that you don't have access to, or that is not built yet;
Clairvoyance: you can observe every part of the system, even at the network core.
For more information, see this presentation or this article.
The SMPI framework can even formally study the correctness of MPI applications through exhaustive testing, as shown in that presentation.
MPI messages are transported via TCP networking (there are other high-performance possibilities, like shared memory, but networking is the default). So it doesn't matter at all where the application runs, as long as the nodes can connect to each other. I guess that you want to test the application on your laptop, so the nodes all run locally and can easily connect to each other via the loopback network.
I am not quite sure I understand your question, but a laptop is a computer just like any other. Provided you have set up your MPI libraries correctly and set your paths, you can, of course, use MPI routines on your laptop.
For my part, I use Debian Linux (http://www.debian.org) for all my parallel stuff. I have written a little article dealing with how to get MPI running on Debian machines. You may want to refer to it.
