Xamarin Test Cloud - Parallel execution between devices

Does anyone know if Xamarin Test Cloud, when you choose more than one device, executes the tests in parallel? I mean, parallel between devices, not between tests. Or sequentially?
Thanks

It depends on what you mean by running in parallel.
Opt-in to use single-device parallelization
Xamarin Test Cloud has a feature called "parallelization", which you can only opt in to when you select a single device. With this option, Test Cloud runs your tests on multiple identical devices of the same model and OS version so that results come back faster. The maximum number of such devices in a run is still limited by device availability and your account's concurrency.
Automatic concurrent execution on multiple devices
However, with multiple devices selected, that form of "parallelization" is not available. Test Cloud will still automatically run on multiple different devices at the same time, provided those devices are available and you are not at your concurrency limit. On each individual device the tests run sequentially (though order is not guaranteed), but separate devices can run in parallel with each other.
In either case, Test Cloud never guarantees that execution between devices is literally synchronized; it simply runs the tests on as many devices as it can within your license and technical limits, and those test runs may overlap in time (which is the "parallel" aspect).

It depends on your subscription plan. If your plan includes more than one concurrent device, tests will run in parallel across devices. For example, the "Small Startup" plan includes 3 concurrent devices, so your tests will run on 3 devices in parallel.

Related

JMeter script development: IBM Cloudant performance testing, maximum requests/second

I am working on IBM Cloudant performance testing (a NoSQL DB hosted in IBM Cloud).
I am trying to identify the breaking point (max input/sec).
I am triggering this request (POST) with JSON data.
I am unable to work out how to design this test plan and thread group.
I need to determine the breaking point (maximum allowed requests/second).
Please find my JMeter configuration above
The test type you're trying to achieve is a stress test; you should design the workload as follows:
Start with 1 virtual user
Gradually increase the load
Observe the correlation between the increasing number of virtual users and the throughput (number of requests per second), e.g. using the Transaction Throughput vs Threads chart (can be installed via the JMeter Plugins Manager)
Ideally, throughput should increase proportionally to the increasing number of threads (virtual users). However, applications have their limits, so at a certain stage you will hit the situation where the number of virtual users keeps increasing but throughput decreases. The moment just before throughput degradation is called the saturation point, and this is what you're looking for (see the sketch after the P.S. below).
P.S. 20 000 virtual users might be too high a number for a single JMeter engine; you might need to consider switching to Distributed Testing
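To make the saturation point concrete, here is an illustrative Python sketch (not part of JMeter) that finds it from (virtual users, throughput) pairs such as those you could read off the Transaction Throughput vs Threads chart; the numbers are hypothetical:

```python
# Hypothetical (virtual users, requests/second) samples read off the chart.
samples = [
    (100, 95.0),
    (500, 470.0),
    (1000, 910.0),
    (2000, 1750.0),
    (4000, 1690.0),  # throughput starts to degrade here
]

def saturation_point(samples):
    """Return the (users, throughput) pair just before throughput degrades."""
    best = samples[0]
    for users, throughput in samples[1:]:
        if throughput < best[1]:
            return best      # degradation detected: the previous point is the saturation point
        best = (users, throughput)
    return best              # no degradation observed within the tested range

print(saturation_point(samples))  # -> (2000, 1750.0)
```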

Use Google Cloud Functions to speed up GAE app

I have a GAE standard Python app that does some fairly computational processing. I need to complete the processing within the 60 second request time limit, and ideally I'd like to do it faster for a better user experience.
Splitting the work across multiple threads doesn't seem to be a good solution because the threads would likely run on the same CPU and thus wouldn't give a speed-up.
I was wondering if Google Cloud Functions (GCF) could be used in a similar manner as threads. For example, if I create a GCF to do the processing, split my work into 10 chunks, and make 10 GCF calls in parallel, can I expect to get results 10x faster? (aside from latency and GCF startup costs)
Each function invocation runs in its own server instance, and a function will scale up to 1000 instances to handle concurrent requests in parallel. So yes, you can do this, if you are willing to potentially pay the cold start cost of each server instance as it's allocated for its first request.
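If your work can be expressed as independent chunks, a minimal sketch of that fan-out from the GAE app could look like the following; the function URL and payload shape are hypothetical, and the same pattern applies if the endpoint is a separate GAE service instead of a Cloud Function:

```python
# Minimal sketch: fan 10 chunks of work out to an HTTP-triggered function in parallel.
# The URL and payload format are hypothetical; error handling and retries are omitted.
import concurrent.futures
import requests

FUNCTION_URL = "https://REGION-PROJECT.cloudfunctions.net/process_chunk"  # hypothetical

def process_chunk(chunk):
    # Each call is served by its own instance, so chunks are processed concurrently
    # (subject to cold starts on newly allocated instances).
    resp = requests.post(FUNCTION_URL, json={"chunk": chunk}, timeout=55)
    resp.raise_for_status()
    return resp.json()

def process_in_parallel(work_items, num_chunks=10):
    size = max(1, len(work_items) // num_chunks)
    chunks = [work_items[i:i + size] for i in range(0, len(work_items), size)]
    # Threads are fine here: the caller only waits on network I/O.
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(chunks)) as pool:
        return list(pool.map(process_chunk, chunks))
```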
If you're able to split the workload into smaller chunks that you launch in parallel via separate (external) requests, I'd suspect you'd get better performance (and cost) by using GAE itself (maybe in a separate service) instead of CFs:
GAE standard environment instances can have higher CPU speeds - a B8 instance has 4.8 GHz, the max CF CPU speed is 2.4 GHz
you have better control over the GAE scaling configuration and starting time penalties
I suspect networking delays would be at least the same if not better on GAE - not going to another product infra (unsure though)
GAE costs would likely be smaller, since you pay per instance hour (regardless of how many requests the instance handles), not per request/invocation

Why is the parallel execution of an Apache Flink application slower than the sequential execution?

I have an Apache Flink setup with one TaskManager and two processing slots. When I execute an application with parallelism set as 1, the job takes around 33 seconds to execute. When I increase the parallelism to 2, the job takes 45 seconds to complete.
I am using Flink on my Windows machine with a configuration of 10 compute cores (4C + 6G). I want to achieve better results with 2 slots. What can I do?
Distributed systems like Apache Flink are designed to run in data centers on hundreds of machines. They are not designed to parallelize computations on a single computer. Moreover, Flink targets large-scale problems. Jobs that run in seconds on a local machine are not the primary use case for Flink.
Parallelizing an application always causes overhead. Data has to be distributed and shared between processes and threads. Flink distributes data across TaskManager slots by serializing and deserializing it. Moreover, starting and coordinating distributed tasks also does not come for free.
It is not surprising to observe longer execution times when scaling a small-scale problem with a distributed system on a single machine. You could port the application to a thread-parallel application that leverages shared memory.
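As a rough illustration of that suggestion, here is a minimal Python sketch of running the same work locally with a worker pool instead of a distributed runtime. It uses processes rather than threads because CPython threads don't parallelize CPU-bound code; in a JVM language the same pattern works with plain threads over shared memory, and `process_partition` is only a placeholder for whatever your Flink job actually computes:

```python
# Minimal sketch: parallelize a small, CPU-bound job on one machine without a
# distributed runtime's task scheduling and network shuffles. Note that a process
# pool still pickles its inputs, but the coordination overhead is far smaller.
import concurrent.futures

def process_partition(records):
    # Placeholder for the per-record work the Flink job was doing.
    return sum(len(str(r)) for r in records)

def run_locally(records, workers=2):
    size = max(1, len(records) // workers)
    partitions = [records[i:i + size] for i in range(0, len(records), size)]
    with concurrent.futures.ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(process_partition, partitions))

if __name__ == "__main__":  # required guard on Windows for process pools
    print(run_locally(list(range(1_000_000)), workers=2))
```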

Mesos as a giant Linux box

Is it possible to see all the Mesos resources as a giant Linux box without custom code for the framework?
I am wondering, if I want to run a program using 2500 TB of RAM, can Mesos abstract the master/slave architecture away? Do I have to use custom code?
You have to write custom code. Mesos offers resources per agent (slave) basis and it is up to you how to coordinate binaries of your app running on different machines.
Q1: Mesos is a resource manager. Yes, it's a giant pool of resources, although at any given time it will offer you only a subset of all resources, since other users might need some of them (don't worry, there's a way to utilize almost the whole cluster).
Q2: Mesos is designed for commodity hardware (many nodes, not a single giant HPC computer). A framework running on Mesos is given a list of resources (and agents/slaves - worker nodes), and Mesos will execute a task within the bounds of the given resources. This way you can start an MPI job or run a task on top of Apache Spark, which will handle the communication between nodes for you (but Mesos itself will not).
Q3: You haven't specified what kind of task you'd like to compute. Spark comes with quite a few examples; you can run any of those without writing your own code (the short sketch below gives a sense of how little code a Spark job on Mesos needs).
(Image credits: Malte Schwarzkopf, Google talk EuroSys 2013 in Prague)
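Here is a minimal PySpark sketch (a Monte Carlo Pi estimate, analogous to the bundled SparkPi example) pointed at a Mesos master; the master address is hypothetical, and it is Spark, not Mesos, that coordinates the work across the agents:

```python
# Minimal PySpark sketch: Mesos only offers the resources, Spark distributes the work.
import random
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .master("mesos://mesos-master.example.com:5050")  # hypothetical master address
         .appName("pi-on-mesos")
         .getOrCreate())

def inside(_):
    # Sample a random point in the unit square and test whether it lies in the quarter circle.
    x, y = random.random(), random.random()
    return x * x + y * y < 1

n = 1_000_000
count = spark.sparkContext.parallelize(range(n)).filter(inside).count()
print(f"Pi is roughly {4.0 * count / n}")
spark.stop()
```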

What is more expensive for databases, IO or CPU?

I am investigating different structures for our database, which is expected to contain millions of files. I have narrowed it down to two different models, one of which is 4 times faster and uses 3 times less CPU, but does 4 times more IO reads than the other.
So what is more expensive in both money and server bottlenecks, considering we are planning to host it in either Amazon or Azure cloud, IO or CPU?
It totally depends on the type of IO device and the size of the virtualized instance used. In a cloud-hosted environment the real hardware specs are abstracted away into marketing terms like EC2 Compute Unit. The only real way to know is to spin up in all environments and load test. Anything else is just a plain old guess.
Just want to add one more variable - Memory.
High memory instances can dramatically reduce the IOPS / CPU requirements.
For example, a MongoDB instance that has most of its working set in memory hardly does any IO calls.
And I agree with jeremyjjbrown - test, test, test.
Your KPIs would be transactions (R/W) per second and transactions per dollar.
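As a trivial sketch of those KPIs, with hypothetical load-test and billing numbers:

```python
# Hypothetical numbers from a load test and the instance's hourly price.
transactions = 5_400_000        # R/W transactions completed during the test
duration_seconds = 3_600        # test duration
instance_cost_per_hour = 0.68   # hypothetical hourly price of the tested instance

tps = transactions / duration_seconds
cost = instance_cost_per_hour * duration_seconds / 3_600
tx_per_dollar = transactions / cost

print(f"{tps:.0f} transactions/second, {tx_per_dollar:,.0f} transactions/dollar")
```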
