I have a use case with 20-30 frameworks running on a Mesos cluster that has over 200 nodes. A lot of the time, Mesos offers resources to frameworks that do not want any offers at all, while offering too few resources to the frameworks that actually need them.
I know there's a requestResources function that a framework can call to ask for resources. However, I couldn't find a function a framework can use to tell Mesos to stop sending it offers. Is there a way to do that? My frameworks keep getting offers every 100 milliseconds, which is far too often!
When you call declineOffer, you can pass an optional Filters message with refuse_seconds set longer than the default 5 seconds. This means that after you decline an offer from a node, Mesos will not offer those resources back to your framework for refuse_seconds.
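A minimal sketch of this, assuming the Mesos Java bindings (the helper class and the 300-second back-off are just illustrative values, not anything mandated by the API):

```java
import java.util.List;

import org.apache.mesos.Protos;
import org.apache.mesos.SchedulerDriver;

public class OfferDeclineExample {
    // Decline every offer and ask Mesos not to re-offer those resources
    // for 300 seconds (illustrative value) instead of the default 5.
    static void declineAll(SchedulerDriver driver, List<Protos.Offer> offers) {
        Protos.Filters filters = Protos.Filters.newBuilder()
                .setRefuseSeconds(300)
                .build();
        for (Protos.Offer offer : offers) {
            driver.declineOffer(offer.getId(), filters);
        }
    }
}
```

You would typically call something like this from your scheduler's resourceOffers callback for the offers you don't currently need.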
Alternatively, if your framework temporarily doesn't want any offers from any nodes, it can call driver.stop(true); the scheduler will unregister from Mesos, but its tasks will keep running for FrameworkInfo.failover_timeout. Once the framework has work to do again, it can start/run the driver to begin receiving offers once more.
(FYI, requestResources doesn't actually do anything yet.)
Is it possible to see all the Mesos resources as one giant Linux box, without custom code for the framework?
I am wondering: if I want to run a program using 2500 TB of RAM, can Mesos abstract the master-slave architecture away? Do I have to write custom code?
You have to write custom code. Mesos offers resources on a per-agent (slave) basis, and it is up to you to coordinate the binaries of your app running on different machines.
Q1: Mesos is a resource manager. Yes, it's a giant pool of resources, although at any given time it will offer you only a subset of all resources, since there may be other users who need some of them (don't worry, there is a way to utilize almost the whole cluster).
Q2: Mesos is designed for commodity hardware (many nodes, not a single giant HPC computer). A framework running on Mesos is given a list of resources (and slaves, i.e. worker nodes), and Mesos executes tasks within the bounds of the given resources. This way you can start an MPI job or run a task on top of Apache Spark, which will handle the communication between nodes for you (Mesos itself will not).
Q3: You haven't specified what kind of task you'd like to compute. Spark comes with quite a few examples, and you can run any of those without writing your own code.
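For illustration, a minimal sketch of pointing a Spark job at a Mesos master from Java; the mesos://host:5050 URL and the tiny computation are placeholders, not part of the answer above:

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class SparkOnMesosSketch {
    public static void main(String[] args) {
        // "mesos://host:5050" is a placeholder for your Mesos master URL.
        SparkConf conf = new SparkConf()
                .setAppName("mesos-sketch")
                .setMaster("mesos://host:5050");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Trivial distributed computation: Spark (not Mesos) coordinates the nodes.
        JavaRDD<Integer> numbers = sc.parallelize(Arrays.asList(1, 2, 3, 4, 5));
        int sum = numbers.reduce((a, b) -> a + b);
        System.out.println("sum = " + sum);

        sc.stop();
    }
}
```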
(Image credits: Malte Schwarzkopf, Google talk EuroSys 2013 in Prague)
I am aiming to simulate a large number of 'real users' hitting and realistically using our site at the same time, and ensuring they can all get through their use cases. I am looking for a framework that combines some EC2 grid management with a web automation tool (such as GEB/WATIR). Ideal 'pushbutton' operation would do all of this:
1. Start up a configurable number of EC2 instances (using a specified AMI preconfigured with my browser automation framework and test scripts).
2. Start the web automation framework test(s) running on all of them, in parallel. I guess they would have to be headless.
3. Wait for completion.
4. Aggregate results.
5. Shut down the EC2 instances.
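Just to sketch what I mean by the EC2 part (steps 1 and 5), here is roughly what I'd expect the orchestration to look like with the AWS SDK for Java; the AMI ID, instance type, and count are placeholders:

```java
import java.util.ArrayList;
import java.util.List;

import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.Instance;
import com.amazonaws.services.ec2.model.RunInstancesRequest;
import com.amazonaws.services.ec2.model.RunInstancesResult;
import com.amazonaws.services.ec2.model.TerminateInstancesRequest;

public class LoadTestGrid {
    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

        // 1. Start N instances from a pre-baked AMI ("ami-12345678" is a placeholder).
        int count = 10;
        RunInstancesResult result = ec2.runInstances(new RunInstancesRequest()
                .withImageId("ami-12345678")
                .withInstanceType("m3.medium")
                .withMinCount(count)
                .withMaxCount(count));

        List<String> instanceIds = new ArrayList<>();
        for (Instance i : result.getReservation().getInstances()) {
            instanceIds.add(i.getInstanceId());
        }

        // 2-4. Kicking off the headless tests, waiting for completion, and
        //      aggregating results would go here (e.g. via a user-data script or SSH).

        // 5. Shut everything down again.
        ec2.terminateInstances(new TerminateInstancesRequest().withInstanceIds(instanceIds));
    }
}
```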
While not a framework per se, I've been really happy with http://loader.io/
It has an API for your own custom integration, plus reporting and analytics for analysis.
PS. I'm not affiliated with them, just a happy customer.
However, in my experience, you need to do both load testing and actual client testing. Even loader.io will only hit your service from a handful of hosts. And, it skips a major part (the client-side performance from a number of different clients' browsers).
This video has more on that topic:
http://www.youtube.com/watch?v=Il4swGfTOSM&feature=youtu.be
BrowserMob used to offer such a service. It looks like they got acquired.
I developed an application for a client that uses Play framework 1.x and runs on GAE. The app works great, but it is sometimes crazy slow: it can take around 30 seconds to load a simple page, yet at other times it runs much faster, with no code change whatsoever.
Is there any way to identify why it's running slow? I tried to contact support, but I couldn't find any telephone number or email address, and there is no response on the official Google group.
How would you approach this problem? My customer is currently very angry about the slow loading times, but switching to another provider is a last resort at the moment.
Use GAE Appstats to profile your remote procedure calls. All of the RPCs are slow (Google Cloud Storage, Google Cloud SQL, ...), so if you can reduce the number of RPCs or use some caching data structures, do so and your application will be much faster. Appstats will show you which parts are slow and whether they need attention :).
For example, I created a Google Cloud Storage cache for my application and decreased execution time from 2 minutes to under 30 seconds. The RPCs are the bottleneck on GAE.
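As an illustration of the caching idea, a minimal sketch using the GAE memcache service to avoid repeating an expensive RPC; the loadFromCloudStorage helper and the 10-minute TTL are hypothetical, not from the answer above:

```java
import com.google.appengine.api.memcache.Expiration;
import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;

public class RpcCache {
    private final MemcacheService memcache = MemcacheServiceFactory.getMemcacheService();

    // Return the cached value if present; otherwise do the slow RPC once
    // and cache the result for 10 minutes (illustrative TTL).
    public byte[] getObject(String key) {
        byte[] cached = (byte[]) memcache.get(key);
        if (cached != null) {
            return cached;
        }
        byte[] fresh = loadFromCloudStorage(key);   // hypothetical slow RPC
        memcache.put(key, fresh, Expiration.byDeltaSeconds(600));
        return fresh;
    }

    private byte[] loadFromCloudStorage(String key) {
        // placeholder for the expensive Google Cloud Storage call
        return new byte[0];
    }
}
```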
Google does not usually provide contact support for a lot of its services. The slowness described here is probably caused by a cold start: Google App Engine front-end instances sleep after about 15 minutes of inactivity. You could write a cron job that pings the app every 14 minutes to keep an instance up.
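On the Java runtime, that keep-alive cron would be a minimal WEB-INF/cron.xml along these lines; the /keepalive URL is a placeholder for a lightweight handler you would add to the app:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<cronentries>
  <cron>
    <!-- /keepalive is a hypothetical lightweight endpoint in your app -->
    <url>/keepalive</url>
    <description>Ping the app so at least one instance stays warm</description>
    <schedule>every 14 minutes</schedule>
  </cron>
</cronentries>
```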
Combining some answers and adding a few things to check:
Debug using Appstats. Look for "staircase" patterns and RPC calls. Maybe something in your app triggers RPC calls at certain points that don't occur every time your logic runs.
Tweak your instance settings. Add some permanent/resident instances and see if that makes a difference. If you are spinning up new instances, things will be slow, probably for around the time frame (30 seconds or more) you describe, and it will seem random. It's not just how many instances, but what combination of slider settings you are using (you can actually hurt yourself with too few or too many).
Look at your app itself. Are you doing lots of memory allocations in the JVM? Allocating and freeing memory is inherently a slow operation and can cause freezes. Are you sure the freezing is not a JVM issue? Try replicating the problem locally, tweak the JVM -Xmx and -Xms settings, and see if you find similar behavior. Also profile your application locally for memory/performance issues. You can cut down on allocations using pooling, DI containers, etc.
Are you running any sort of cron jobs or other processing on your front-end servers? Try to move as much as you can to background tasks, such as sending emails. The intervals may seem random, but they can be the result of things firing according to your job settings; "9 am every day" may not mean what you think, depending on the cron/task options. A corollary: move work to back-end servers and pull queues.
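To illustrate that last point, a minimal sketch of deferring an email to the task queue with the GAE Java API; the /tasks/send-email worker handler is a hypothetical servlet you would write yourself:

```java
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;

public class EmailTasks {
    // Enqueue the work and return immediately; a worker handler at
    // /tasks/send-email (hypothetical) does the slow part in the background.
    public static void enqueueEmail(String to, String subject) {
        Queue queue = QueueFactory.getDefaultQueue();
        queue.add(TaskOptions.Builder
                .withUrl("/tasks/send-email")
                .param("to", to)
                .param("subject", subject));
    }
}
```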
It's tough to give you a good answer without more information. The best someone here can do is give you a starting point, which pretty much every answer here already has.
By making at least one instance resident (permanent), you get a big improvement on first use. It takes about 15 seconds to load the application into an instance, which is why you experience long request times when nobody has used the application for a while.
There is appengine-mapreduce, which seems to be the official way to do these things on App Engine, but there seems to be no documentation besides some hacked-together wiki pages and lengthy videos. There are statements that the library only supports the map step, but the source indicates that there are also implementations of shuffle.
A version of this appengine-mapreduce library also seems to be included in the SDK, but it is not blessed for public use, so you are basically expected to load the library twice into your runtime.
Then there is appengine-pipeline: "A primary use-case of the API is connecting together various App Engine MapReduces into a computational pipeline." But there also seems to be pipeline-related code in the appengine-mapreduce library.
So where do I start to find out how this all fits together? Which library should I call from my project? Is there any decent documentation on appengine-mapreduce besides parsing change logs?
Which library should I call from my project?
They serve different purposes, and you've provided no details about what you're attempting to do.
The most fundamental layer here is the task queue, which lets you schedule background work that can be highly parallelized. This is fan-out. Let's say you had a list of 1000 websites, and you wanted to check the response time for each one and send an email for any site that takes more than 5 seconds to load. By running these as concurrent tasks, you can complete the work much faster than if you checked all 1000 sites in sequence.
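A rough sketch of that fan-out using the GAE Java task queue; the /tasks/check-site worker URL is hypothetical, and each task would do the actual timing and emailing:

```java
import java.util.List;

import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;

public class SiteCheckFanOut {
    // Enqueue one task per site; the tasks run concurrently, and each worker
    // checks its site's response time and emails if it took more than 5 seconds.
    public static void fanOut(List<String> siteUrls) {
        Queue queue = QueueFactory.getDefaultQueue();
        for (String site : siteUrls) {
            queue.add(TaskOptions.Builder
                    .withUrl("/tasks/check-site")      // hypothetical worker handler
                    .param("site", site));
        }
    }
}
```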
Now let's say you don't want to send an email for every slow site, you just want to check all 1000 sites and send one summary email that says how many took more than 5 seconds and how many took fewer. This is fan-in. It's trickier with the task queue, because you need to know when all tasks have completed, and you need to collect and summarize their results.
Enter the Pipeline API. The Pipeline API abstracts the task queue to make fan-in easier: you write what looks like synchronous, procedural code, but it uses Python futures and is executed (as much as possible) in parallel. The Pipeline API keeps track of task dependencies and collects results to facilitate building distributed workflows.
The MapReduce API wraps the Pipeline API to facilitate a specific type of distributed workflow: mapping the results of a piece of work into a set of key/value pairs, and reducing multiple sets of results to one by combining their values.
So they provide increasing layers of abstraction and convenience around a common system of distributed task execution. The right solution depends on what you're trying to accomplish.
There is official documentation here: https://developers.google.com/appengine/docs/java/dataprocessing/
I'm currently developing a small hobby project (open sourced at https://github.com/grav/mailbum) which quite simply takes images from a Gmail account and puts them in albums on Picasa Web.
Since it's (currently) only dealing with Google-hosted data, I was thinking about hosting it on Google App Engine, but I'm not sure if it's well-suited for GAE:
Will the maximum execution time be a problem? It's currently 10 minutes according to http://googleappengine.blogspot.com/2010/12/happy-holidays-from-app-engine-team-140.html, but I'd think the tasks (i.e. processing a single mail) would be easy to run in parallel. I'm also guessing that dealing with Google-hosted data would be quite efficient on GAE?
Will the fact that it's written in Clojure be an obstacle? I've researched getting Clojure to run on GAE a bit, but I've never tried it. Any pointers?
Thanks for any advice and thoughts on the project!
It seems like your application is doable on GAE. My points of concern would be:
Does your code ever store the images that it is processing to temporary files? If so it will need to be changed to do everything in memory, because GAE applications are sandboxed and not allowed to write to the filesystem (if you need temporary persistent storage, you might be able to work something out where you write your file data to a BLOB field in the GAE datastore).
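As a minimal sketch of that datastore workaround, using the GAE Java low-level datastore API (the "ImageCache" kind and property name are made up, and keep in mind entities are limited to about 1 MB):

```java
import com.google.appengine.api.datastore.Blob;
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.EntityNotFoundException;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.KeyFactory;

public class TempImageStore {
    private final DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();

    // Store image bytes under a made-up "ImageCache" kind instead of a temp file.
    public void save(String name, byte[] bytes) {
        Entity entity = new Entity("ImageCache", name);
        entity.setProperty("data", new Blob(bytes));
        datastore.put(entity);
    }

    public byte[] load(String name) throws EntityNotFoundException {
        Key key = KeyFactory.createKey("ImageCache", name);
        Blob blob = (Blob) datastore.get(key).getProperty("data");
        return blob.getBytes();
    }
}
```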
How do you get the images into Picasa Web? If they provide a simple REST/HTTP API then all is well. If you need something more involved than that (like a raw TCP socket) then it won't work.
The 10-minute execution time limit only applies to background tasks. When actually servicing web requests the time limit is 30 seconds. So if you provide a web-based interface to your app, you need to structure things so that the interface is just scheduling jobs that run in the background (i.e. you can't fire off a job directly as part of servicing a web request).
If none of those sound like show-stoppers to you, then I think your app should work just fine on GAE.
I can't really say whether Clojure will work, though. I have, however, spent time in the past getting third-party libraries to work on App Engine. Generally all I had to do was remove, modify, or disable any parts of the library that accessed features forbidden by the sandbox (for instance, I had to disable the automatic caching to disk to get commons-fileupload to work on GAE). I'm not sure whether the same would apply to Clojure, or even what the scope of a task like that would be.
I have been dabbling with Clojure and App Engine for a while now, and I have to recommend appengine-magic. It abstracts most of the Java stuff away and is very easy to use. As a plus, the project seems to be very active.