I'm currently looking for a cloud PaaS that will allow me to scale an application to handle anything between 1 user and 10 million+ users. I've never worked on anything this big, and the big question I can't seem to get a clear answer to is this: if you develop, let's say, a standard application with a relational database and SOAP web services, will that application scale automatically when deployed on a PaaS solution, or do you still need to build the application with failover, redundancy and all those things in mind?
Let's say I deploy a Spring/Hibernate application to Amazon EC2 and create a single Ubuntu Server instance with Tomcat installed. Will this application just scale indefinitely, or do I need more Ubuntu instances? If more than one instance is needed, does Amazon take care of running the application across both instances, or is that the developer's responsibility? What about database storage: can I install a database on EC2 that will scale as the database grows, or do I need to use one of their APIs instead if I want it to scale indefinitely?
CloudFoundry allows you to build locally and deploy straight to their PaaS, but since it's in beta, there's a limit on the amount of resources you can use, and databases are limited to 128 MB if I remember correctly, so this is a no-go for now. Some have suggested installing CloudFoundry on Amazon EC2; how does it scale, and how is the database layer handled then?
GAE (Google App Engine): will this allow me to just deploy an app and not have to worry about how it scales and implements redundancy? There appear to be some limitations on what you can and can't run on GAE, and their recent price increase upset quite a large number of developers. Is it really that expensive compared to other providers?
So basically, will it scale and what needs to be done to make it scale?
That's a lot of questions for one post. Anyway:
Amazon EC2 does not scale automatically with load. EC2 is basically just a virtual machine. You can achieve scaling of EC2 instances with Auto Scaling and Elastic Load Balancing.
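For illustration, here is a minimal sketch with boto3 of what "scaling EC2 yourself" involves (all names, ARNs and regions below are placeholders, not a definitive setup):

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Keep between 2 and 10 instances running across two availability zones,
    # registered behind a load balancer target group.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="my-asg",
        LaunchTemplate={"LaunchTemplateName": "my-launch-template", "Version": "$Latest"},
        MinSize=2,
        MaxSize=10,
        DesiredCapacity=2,
        AvailabilityZones=["us-east-1a", "us-east-1b"],
        TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/abc"],
    )

    # Add and remove instances automatically to hold average CPU near 60%.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="my-asg",
        PolicyName="cpu-target-60",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
            "TargetValue": 60.0,
        },
    )

Note that your application still has to tolerate instances appearing and disappearing (no local state, shared sessions), which is exactly the "build with redundancy in mind" part of the question.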
SQL databases scale poorly. That's why people started using NoSQL databases in the first place. It's best to see which database your cloud provider offers as a managed service: Datastore on GAE and DynamoDB on Amazon.
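As a sketch of why the managed NoSQL route is attractive (boto3 again; the "users" table and its key are hypothetical), note that there is no server, replication or sharding to manage:

    import boto3

    # Hypothetical table "users" with a string partition key "user_id".
    dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
    users = dynamodb.Table("users")

    # Writes and key-based reads stay fast regardless of table size,
    # because DynamoDB partitions and replicates the data for you.
    users.put_item(Item={"user_id": "42", "name": "Alice"})
    item = users.get_item(Key={"user_id": "42"}).get("Item")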
Installing your own database on EC2 instances is impractical if you rely on the instances' ephemeral storage (it loses all data on "disk" when the instance stops); you would need EBS volumes plus your own replication and backup setup.
GAE Datastore is actually one big database shared by all applications running on it, so it's pretty scalable: your millions of users should not be a problem for it.
http://highscalability.com/blog/2011/1/11/google-megastore-3-billion-writes-and-20-billion-read-transa.html
Yes, App Engine scales automatically, both frontend instances and the database. There is nothing special you need to do to make it scale; just use their APIs.
There are limitations on what you can do with App Engine:
A. No local storage (filesystem): you need to use the Datastore or Blobstore.
B. Comet is only supported via their proprietary Channel API.
C. The Datastore is a NoSQL database: no JOINs, limited queries, limited transactions.
The cost of GAE is not bad. We do 1M requests a day for about 5 dollars a day. The biggest saving comes from the fact that you do not need a system admin on GAE (but you do need one for EC2). Compared to the cost of manpower, GAE is incredibly cheap.
Some hints to save money on (and speed up) GAE:
A. Use get instead of query in the Datastore (this requires carefully crafting natural keys; see the sketch after this list).
B. Use memcache to cache data you got from the Datastore. This can be done automatically with Objectify and its @Cached annotation.
C. Denormalize data, meaning you write data redundantly in various places in order to get to it in as few operations as possible.
D. If you have a lot of REST requests from devices where you do not use cookies, switch off session support (or roll your own, as we did). Sessions use the Datastore under the hood, and for every request they do a get and a put.
E. Read about adjusting app settings. Try different settings (depending on how tolerant your app is to request delay and on your traffic patterns/spikes). We were able to cut down frontend instances by 70%.
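To illustrate hint A: we use Java with Objectify, but the same idea in the legacy Python ndb API (the Account model and key below are hypothetical) looks like this. A get by a crafted natural key is a cheap key lookup, and ndb also covers hint B by caching such gets in memcache automatically:

    from google.appengine.ext import ndb

    class Account(ndb.Model):
        # Hypothetical model; the entity's key name is the natural key,
        # e.g. the user's email address.
        email = ndb.StringProperty()
        balance = ndb.IntegerProperty()

    # Hint A: get by key - fast, strongly consistent, memcache-backed (hint B).
    account = ndb.Key(Account, "alice@example.com").get()

    # The equivalent query - slower and more expensive, since it scans an index.
    account = Account.query(Account.email == "alice@example.com").get()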
Related
I'm running into a performance issue with the Google Cloud Bigtable Python client. I'm working on a Flask API that writes to and reads from a GCP Bigtable instance. The API uses the Python client to communicate with Bigtable and is deployed to the GCP App Engine flexible environment.
Under low traffic, the API works fine. However, during a load test the endpoints that read from and write to Bigtable suffer a huge performance decrease compared to a similar endpoint that doesn't communicate with Bigtable. Also, a large percentage of requests to those endpoints receive a 502 Bad Gateway, even when health checks were turned off in App Engine.
I'm aware that the client is currently in alpha. I wonder if the performance issue is known, or if anyone else has run into the same issue.
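For reference, the read/write path in question looks roughly like this with the Python client (project, instance, table and key names are made up):

    from google.cloud import bigtable

    client = bigtable.Client(project="my-project")
    instance = client.instance("my-instance")
    table = instance.table("my-table")

    # Each commit is a blocking RPC: the handler waits for the full
    # round trip to Bigtable before it can respond.
    row = table.row(b"row-key-1")
    row.set_cell("cf1", b"qualifier", b"value")
    row.commit()

    # Reads are likewise blocking round trips.
    result = table.read_row(b"row-key-1")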
Update
I found a documentation from Google stating:
"There are issues with the network connection. Network issues can reduce throughput and cause reads and writes to take longer than usual. In particular, you'll see issues if your clients are not running in the same zone as your Cloud Bigtable cluster."
In my case, my client was in a different region; moving it to the same region brought a huge increase in performance. However, the performance issue still exists, and the recommendation from the documentation is to put the client in the same zone as the Bigtable cluster.
I also considered using Container Engine or Compute Engine, where it is easier to specify the zone, but I want to stay with App Engine for its autoscaling functionality and managed services.
The Bigtable client takes somewhere between 3 ms and 20 ms to complete each request, and because Python is single-threaded, it simply waits during that time until the response comes back. The best solution we found was, for any writes, to publish the request to Pub/Sub and then use Dataflow to write to Bigtable. This is significantly faster because publishing a message in Python takes well under 1 ms, and because Dataflow can be placed in exactly the same region as Bigtable and parallelizes easily, it can write much faster.
This doesn't, however, solve the scenario where you need frequent reads, or where writes need to be instantaneous.
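A minimal sketch of the publish side (project and topic names are made up; the Dataflow pipeline that drains the topic into Bigtable is not shown):

    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("my-project", "bigtable-writes")

    def enqueue_write(row_key, value):
        # publish() returns a future almost immediately, so the request
        # handler never blocks on the Bigtable round trip itself.
        # data must be bytes; extra keyword attributes must be strings.
        return publisher.publish(topic_path, data=value, row_key=row_key)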
I am aware that Google Cloud has an IaaS offering which allows point-and-click installation of Drupal sites using Bitnami. But I am wondering how this compares with using App Engine or Container Engine (instead of IaaS).
Per OP:
Large is defined as: "up to 1 million daily user visits to the website and growing"
App Engine, Kubernetes Engine (formerly Container Engine), and Compute Engine are tools that solve different problems.
From what you have shared about your use case, you have the following requirements:
Migrate a large (single) Drupal website
Ensure that it scales beyond 1 million user visits per day
All 3 products will support these requirements.
The important factors that you will need to consider when evaluating these products are:
Cost
Maintenance Effort
When evaluating the products against these two factors in the context of your requirements, you might reach a clearer winner.
For the cost ratings below, this is a best guess based on your high-level requirements.
Google App Engine - allows you to easily deploy, scale, and update your application. It takes care of all of the "system administration" tasks for you. Rolling out updates to deployed services is also done quite easily via the command line.
Cost is "medium" when compared to running a small single Compute Engine VM but App Engine only provisions what it believes you need to serve the traffic you are receiving, as Drupal requires a DB (can be managed by Google Cloud SQL), this will need to be provisioned separately and will add to the total cost.
Maintenance Effort is "low" because App Engine handles all of the operations tasks.
Google Kubernetes Engine - is managed Kubernetes, providing "container infrastructure as a service". It is great when you have services of various sizes and want to bin-pack them onto a cluster of machines. Given that you are running a single application - a Drupal website - you won't really be taking advantage of this. Scaling and administration are also a little more complicated, as you will need to scale your Pods as well as the Kubernetes Engine cluster.
Cost is "low" as Kubernetes Engine zonal masters are free and you are only charged for the compute resources of the nodes.
Maintenance Effort is "high" as you will have to configure scaling of your cluster and pods. Operate the Kubernetes Engine cluster and manage your application through the Kubernetes API. A lot more moving parts, when compared to App Engine.
Google Compute Engine - as mentioned, there are Cloud Launcher options to run Drupal. However, once the instance is deployed, you will need to take care of updates yourself. Also, that particular Cloud Launcher version deploys to a single VM. You will probably want to set up Load Balancing, a Managed Instance Group, and Cloud SQL to ensure it is highly available and will scale.
Cost is "low" as you could run your Drupal application and DB on a single small VM, however it may not scale.
Maintenance Effort is "high" as at a minimum, you will have to apply OS patches. To have High Availability and scaling you will have to configure a managed instance group, load balancer, and a startup script/VM image.
In conclusion, if you are an experienced System Administrator, you might prefer to deploy to Compute Engine and manage the system yourself for a potentially lower cost.
If you don't want any operational overhead, App Engine is the way to go.
Kubernetes Engine is too sophisticated a tool for the problem you are solving.
Pardon me if this isn't the place to ask such a question, but I have finished my project and am thinking of deploying it using Amazon Elastic Beanstalk, and I have a huge worry. My worry is that my project's database could become humongous. It's a community website like Reddit where users can create a page on which other people can post text, links, pictures and videos (YouTube). Users also get a profile page and are able to comment. This was my first big project, and I don't want to pay more than $200 in server fees every month.
Should I still deploy this, or just be happy that I proved to myself that I can build it? How much do you think I'll have to pay, assuming I get at most about 100 users?
For starters, you can look at the costs for any AWS service by going to that service's homepage and clicking "Pricing", usually on the left side. I typically get to the pricing page by Googling "AWS <> Pricing" (e.g. "AWS EC2 Pricing").
Whether or not you incur any cost, and what that cost is, really depends on how you deploy your website. Questions like: is your database self-managed (i.e. installed on your own EC2 instance) or are you using RDS? Are you using S3 to store static content? Will you be serving your web content via CloudFront (AWS's CDN)?
Many of the basic services (EC2, S3, RDS, etc.) have free-tiers which will allow you to use them for free, provided you stay within certain (and usually very low) levels of usage.
If your database is going to get VERY large, and cost is your primary concern, it's usually more cost-effective to manage it on your own EC2 server. However, things like updates, security, scaling, and backups then all become your problem to deal with, and they often incur additional cost (e.g. your backups will likely require volume snapshots, which cost money, whereas RDS backups are free).
If you're going to have a significant amount of static content, it will be more cost-effective to host it on your own EC2 server; however, again, all maintenance becomes your responsibility, such as backups and scaling to meet demand (which can incur cost), whereas all of that is taken care of by S3, though you pay each time a file is accessed.
If cost is your primary concern, my suggestion is to start out your development using the AWS services (RDS, S3, maybe Elastic Beanstalk), though that can add complexity to your development efforts (dealing with authentication, additional SDKs, etc.). You can typically, and pretty easily, roll out your own service (MySQL, an EBS filesystem to replace S3, etc.) as a replacement later. Additionally, depending on your roll-out, there can be network traffic costs. Usually this isn't a problem if you're doing things the way Amazon wants you to, but it wouldn't be unheard of.
To get you started:
https://aws.amazon.com/s3/pricing/
https://aws.amazon.com/ec2/pricing/
https://aws.amazon.com/rds/pricing/
Additionally, there is a nifty calculator which can help you estimate your costs. You will need to know your traffic expectations as well as your service requirements, but you can play around with those numbers.
https://calculator.s3.amazonaws.com/index.html
You don't have to worry much about charges, as AWS has a Free Tier which offers most of the services free for 12 months.
https://aws.amazon.com/free/
You can run one t2.micro (1 vCPU + 1 GB RAM) instance for Elastic Beanstalk with auto scaling turned off, and purchase Reserved Instances with a 1- or 3-year all-upfront payment to save more.
https://aws.amazon.com/ec2/purchasing-options/reserved-instances/
You should use the Relational Database Service (RDS) for the database instead of installing a DB on your Elastic Beanstalk instance, and store files on the Simple Storage Service (S3).
RDS storage pricing is about $0.10 per GB-month after the first 12 months, so you don't need to worry much about database size.
After the first 12 months, your monthly bill should be less than USD 50.
In production, we are using 2 t2.micro instances (Windows) with one MS SQL database on RDS, and we only have to pay for the extra EC2 machine, about USD 14 per month.
I did find some relevant information in the Stack Overflow thread below, which talks about the capabilities of t2.micro, t2.small and t2.medium EC2 instances. Do have a look at it.
several t2.micro better than a single t2.small or t2.medium
Context:
We have a web application written in Java EE on Glassfish + MySQL with JDBC for storage + XMPP connection for GCM CCS upstream messaging + Quartz scheduler library. Those are the main components of the app.
We're wrapping up our app's development stage and are trying to figure out the best option for deploying it, considering future scalability, both in number of users and in their geographical location (e.g. multiple VMs on multiple continents, if needed). Currently we're using a single VM instance from DigitalOcean for both the application server and the MySQL server, which we can then scale up with some effort (not much, but probably more than on GAE).
Questions:
1) From what I understood, Cloud SQL is replicated in multiple datacenters across the world, so storage-wise the geographical scalability is assured. So this is a bonus. However, we would need to update the DB interaction code in the app to make use of the Cloud SQL structure, thus being locked in to their platform. Has this been a problem for you? (I haven't looked at their DB interaction code yet, so I might be off on this one.)
2) Do GAE instances work pretty much like a normal VM would? Are there any differences in this regard? I've understood the auto-scaling capability, but how are they geographically scalable? Do you have to select a datacenter location for each individual instance? Or how can you achieve multiple worldwide instance locations, as Cloud SQL does right out of the box?
3) How would the XMPP connection fare with multiple instances? Considering each instance is a separate VM, each instance will have a unique XMPP connection to the GCM CCS server. That would cause multiple problems; e.g., if more than 10 instances are opened, the cap of 10 simultaneous XMPP connections will be hit.
4) How would the Quartz scheduler fare with the instances? Right now we're saving the scheduled jobs in the MySQL database and scheduling them at each server start. If there are multiple instances, the jobs will be scheduled on each of them and thus run multiple times. We wouldn't want that.
So, if problems 3 and 4 are indeed as I described, what would be the solution to them? Having a single instance for the XMPP connection as well as a single instance for the Quartz scheduler?
1) Although Cloud SQL is a managed, replicated RDBMS, you still need to choose a region when creating an instance. That said, you cannot expect latency to be seamlessly great globally; you would still need to design a proper architecture to achieve that.
2) GAE isn't a VM in any sense. GAE is a PaaS and, therefore, a runtime environment. You should expect several differences: you are restricted to Java, PHP, Go and Python, to the mechanisms GAE provides out of the box, and to compatible third-party libraries. You cannot install middleware there, for example. On the other hand, you don't have to deal with anything from the infrastructure standpoint, getting transparent high availability and scalability.
3) XMPP is not my strong suit, but GAE offers an XMPP service; you should take a look at it. More info: https://developers.google.com/appengine/docs/java/xmpp/?hl=pt-br
4) GAE offers a cron service that works pretty well; you wouldn't need to use Quartz. The schedule is declared in a config file and App Engine triggers the handler itself, so jobs don't multiply with the number of instances (see the sketch below).
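Your stack is Java, where the schedule lives in cron.xml; purely as an illustration, a Python handler for the same service would look like this (the path and schedule are made up; the schedule itself goes in cron.yaml, e.g. "schedule: every day 03:00" pointing at /tasks/nightly-cleanup):

    # main.py - a cron-triggered handler sketch using Flask.
    from flask import Flask, request, abort

    app = Flask(__name__)

    @app.route("/tasks/nightly-cleanup")
    def nightly_cleanup():
        # App Engine sets this header on genuine cron requests;
        # rejecting everything else keeps the endpoint private.
        if request.headers.get("X-Appengine-Cron") != "true":
            abort(403)
        # ... do the scheduled work here, once per schedule tick ...
        return "done"

Because App Engine itself fires the request on schedule, the job runs once per tick no matter how many instances are serving traffic.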
My last advice is: if you want to migrate an existing application, your best choice would probably be GCE + CloudSQL, not GAE.
Good luck!
Cheers!
Currently, clouds are mushrooming like crazy and people are starting to deploy everything to the cloud, including CMS systems. But so far I have not seen anyone succeed in deploying a popular CMS system to a load-balanced cluster in the cloud. Some performance hurdles seem to prevent standard open-source CMS systems from being deployed to the cloud like this.
CLOUD: A cloud - or better, a load-balanced cluster - has at least one frontend server, one network-connected(!) database server, and one cloud-storage server. This fits Amazon Elastic Beanstalk and Google App Engine well. (This specifically excludes a CMS on a single computer or a Linux server with MySQL on the same machine.)
To deploy a standard CMS in such a load balanced cluster needs a cloud-ready CMS with the following characteristics:
The CMS must deal with the latency of queries to still be responsive, and must render pages in less than a second so they can be cached (or use a precaching strategy)
The filesystem probably must be connected to remote storage (Amazon S3, Google Cloud Storage, etc.)
Currently I know of Python/Django and WordPress having middleware modules or plugins that can connect to cloud storage instead of a filesystem, but there might be other cloud-ready CMS implementations (Java, PHP, ?) and systems.
I myself have failed to deploy django CMS to the cloud, ultimately due to the query latency of the remote DB. So here is my question:
Did you deploy an open-source CMS that still performs well in rendering pages and in the backend admin? Please post your average page-rendering stats in milliseconds for uncached pages.
IMPORTANT: Please describe your configuration, the problems you encountered, and which modules had to be optimized in the CMS to make it work. Don't post a simple "this works"; contribute your experience and knowledge.
Such a CMS probably has to make fewer than 10 queries per page (if more, the queries must be made in parallel; see the sketch below) and must cope with filesystem access times of ~100 ms for a stat and query delays of ~40 ms.
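As a sketch of the parallel-query point (fetch_fragment is a hypothetical stand-in for one remote DB round trip):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def fetch_fragment(name):
        # Hypothetical stand-in for one ~40 ms round trip to the remote DB.
        time.sleep(0.04)
        return "<!-- fragment: %s -->" % name

    queries = ["menu", "sidebar", "article", "comments", "footer"]

    # Sequentially these 5 queries cost ~200 ms of pure latency per page;
    # issued in parallel, the page pays roughly one round trip (~40 ms).
    with ThreadPoolExecutor(max_workers=len(queries)) as pool:
        fragments = list(pool.map(fetch_fragment, queries))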
Related:
Slow MySQL Remote Connection
Have you tried Umbraco?
It relies on a database, but it keeps layers of cache, so you aren't doing selects on every request.
http://umbraco.com/azure
It works great on azure too!
I have found an excellent performance test of WordPress on App Engine. It appears that Google has spent some time optimizing this system for load-balanced clusters and remote-DB deployment:
http://www.syseleven.de/blog/4118/google-app-engine-php/
Scaling test from the report (numbers as given there; lower is better):

    parallel hits    GAE     1&1     Sys11
    1                 1.5     2.6      8.5
    10                9.8     8.5     69.4
    100              14.9      -     146.1

Conclusion from the report: the system is slower than on traditional hosting but scales much better.
http://developers.google.com/appengine/articles/wordpress
We have managed to deploy Python django CMS (www.django-cms.org) on Google App Engine with Cloud SQL as the DB and Cloud Storage as the filesystem. Cloud Storage was attached by forking and fixing a django.storage module by Christos Kopanos: http://github.com/locandy/django-google-cloud-storage
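Wiring such a backend in is just a Django settings change; a sketch (the dotted path and bucket setting are illustrative - use whatever the forked module actually exposes):

    # settings.py - route Django's file/media API to Cloud Storage
    # instead of the local filesystem.
    DEFAULT_FILE_STORAGE = "django_google_cloud_storage.GoogleCloudStorage"

    # Hypothetical backend-specific setting.
    GOOGLE_CLOUD_STORAGE_BUCKET = "my-cms-bucket"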
After that, a second set of problems came up: we discovered access times of up to 17 s for a single page access. We investigated this and found that easy-thumbnails 1.4 accessed the normal filesystem for mod_time requests while writing results to the store (rendering all thumbnail images on every request). We switched to the development version, where that was already fixed.
Then we worked with SmileyChris to fix the unnecessary reading of mod_times (stat'ing the file) on every request for every image, by tracing the problem and posting issues to http://github.com/SmileyChris/easy-thumbnails
This reduced access times from 12-17 s to 4-6 s per public page on the CMS, basically eliminating all storage/"filesystem" access. Once that was fixed, easy-thumbnails replaced (by design) filesystem accesses with queries to the DB, checking on every request whether a thumbnail's source image had changed.
One thing for the web designer: using an image.width statement in a template forces an ugly, slow read on the "filesystem", because image widths are not cached.
Further investigation led to the conclusion that DB accesses are very costly too, taking about 40 ms per round trip.
Up to now, the deployment has been unsuccessful, mostly due to DB access times in the cloud leading to 4-5 s delays in rendering a page before it is cached.