I am developing an app with Xamarin, using Azure serverless Functions as the backend.
I will be syncing map coordinates from the users in real time with a database in the cloud, i.e. taking coordinates from all users and then pushing the updated coordinates back out to all users continuously, so that every user can see the live location of the others.
So I would have to call an Azure Function in a continuous loop to keep the database in sync, checking the DB every 4-5 seconds. Is that the best way to do this, or will it cause too many function executions and become costly? If there is a better way to sync the DB, please suggest it. Thank you.
You have a mobile app that is making HTTP calls to Azure Functions. Functions is elastic, so scale will likely be fine. As I understand it, you're not asking how to implement the server side of this; the real question here is pricing, right?
Azure Functions can run in two ways:
"serverless", aka "Consumption plan". In this case, Azure Functions is managing the underlying server (and scale out) and you pay only for active usage (per GB*Sec). This is what you get by default when you visit http://functions.Azure.com. See pricing details here: https://azure.microsoft.com/en-us/pricing/details/functions/
"AppService" - In this case, you've bought a VM upfront and you decided how much to scale. You pay a fixed monthly cost. See pricing details here: https://azure.microsoft.com/en-us/pricing/details/app-service/
You can switch between them. I'd recommend starting with the first approach; it will certainly be cheaper in the beginning when you have low traffic. Monitor cost, run your scenarios through the pricing sheets (a rough worked example is below), and consider switching to the second if it ends up being cheaper.
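To get a feel for what a 4-5 second polling loop could cost on the Consumption plan, here is a back-of-the-envelope sketch. The user count, execution time, memory size, rates and free grants below are all illustrative assumptions; plug in the current numbers from the pricing page before drawing conclusions.

    # Back-of-the-envelope Consumption-plan estimate for a polling design.
    # Every number below is an assumption for illustration only - check the
    # Azure Functions pricing page for current rates and free grants.
    POLL_INTERVAL_S = 5                  # each client polls every 5 seconds
    CLIENTS = 100                        # assumed number of concurrent users
    EXEC_TIME_S = 0.2                    # assumed average execution time per call
    MEMORY_GB = 0.125                    # assumed memory footprint (128 MB)

    PRICE_PER_MILLION_EXECS = 0.20       # assumed USD rate
    PRICE_PER_GB_SECOND = 0.000016       # assumed USD rate
    FREE_EXECS = 1_000_000               # assumed monthly free grant
    FREE_GB_SECONDS = 400_000            # assumed monthly free grant

    execs = CLIENTS * (30 * 24 * 3600 / POLL_INTERVAL_S)   # executions per month
    gb_seconds = execs * EXEC_TIME_S * MEMORY_GB

    cost = (max(execs - FREE_EXECS, 0) / 1_000_000 * PRICE_PER_MILLION_EXECS
            + max(gb_seconds - FREE_GB_SECONDS, 0) * PRICE_PER_GB_SECOND)
    print("%.0f executions, %.0f GB-s, ~$%.2f per month" % (execs, gb_seconds, cost))

With these assumptions the 100-user polling design lands around $25 a month, and it grows linearly with both the user count and the polling rate, which is exactly the cost behaviour to watch.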
My team has run into a design conflict. We are working on a project that involves scraping a year of historical data from Yahoo for all stocks so we can run some ML analysis on it. The latency is unbearably slow, and I'm not sure if it's the network or the web scraper. I proposed we use AWS RDS to store the data so we can access it more quickly. However, a team member said that storing the data in the cloud would not solve our latency issue. I rebutted with the fact that the data would be organized and stored in a way that makes accessing it significantly faster. He came back with something else, and this went on. Is it true that a cloud DB won't offer any additional speed compared to a scraper? If so, does AWS have a service that lets us access the stored data faster through another service, almost as if the database were on our own server?
I am not all that familiar with cloud services, but I do understand databases pretty well. So please dumb down the AWS stuff if you wish, and feel free to point me to any duplicates or links that may help me understand this better.
There are lots of good reasons to use RDS as a database, but speeding up your scraping isn't one of them - the database likely isn't your bottleneck.
I have written lots of scrapers over the years, and by far the biggest performance boost comes from having a fast network connection between the scraper machine(s) and the host you are scraping. Even then, using a multi-threaded scraper on each scraping machine will give you another HUGE speed improvement (a minimal sketch follows below).
Most of the time spent scraping is spent waiting on the host to return results to you, not parsing the page and not saving the data to a database.
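As an illustration of the multi-threading point, here is a minimal sketch using Python's requests and concurrent.futures. The URL list, worker count and timeout are placeholder assumptions, not values tuned for Yahoo:

    # Minimal multi-threaded fetch sketch: most wall-clock time is spent waiting
    # on the remote host, so overlapping many requests hides that latency.
    from concurrent.futures import ThreadPoolExecutor, as_completed
    import requests

    urls = ["https://example.com/stock/%d" % i for i in range(100)]  # placeholder URLs

    def fetch(url):
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        return url, resp.text

    with ThreadPoolExecutor(max_workers=20) as pool:
        futures = [pool.submit(fetch, u) for u in urls]
        for fut in as_completed(futures):
            url, html = fut.result()
            # parse and store `html` here
            print(url, len(html))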
A MySQL DB on AWS RDS would be the same as the one that you'd install yourself on some machine. So, it isn't going to be different or slower just because it is in the cloud.
If you scrape some data and process it only once, then there is no point in introducing a DB in between. But if your scraper is slow and you process the scraped data multiple times, then storing it in a DB should improve latencies, because the latency of a DB read will be much lower than that of scraping (assuming you design your DB schema properly, your hosts are in the same availability zones, or at least regions, as your DB, etc.).
For example, if scraping a web page takes ~10 s and you process the scraped data twice, it would take ~20 s without a DB (you scrape twice). With a DB whose reads take ~500 ms, it would take only ~11 s: one ~10 s scrape plus two ~500 ms reads.
Pardon me if this isn't the place to ask such a question, but I have finished my project, I am thinking of deploying it with Amazon Elastic Beanstalk, and I have a huge worry: my project's database could become humongous. It's a community website like Reddit, where users can create a page on which other people can post text, links, pictures and (YouTube) videos. Users also get a profile page and are able to comment. This was my first big project, and I don't want to pay more than $200 in server fees every month.
Should I still deploy this, or should I just be happy that I proved to myself I can build it? How much do you think I'll have to pay, assuming I get about 100 users at most?
For starters, you can look at the costs for any AWS service by going to that service's homepage and clicking "Pricing", usually on the left side. I typically get to the pricing page by Googling "AWS <> Pricing" (e.g. "AWS EC2 Pricing").
Whether you incur any cost, and what that cost is, really depends on how you deploy your website. Questions like: is your database self-managed (i.e. installed on your own EC2 instance) or are you using RDS? Are you using S3 to store static content? Will you be serving your web content via CloudFront (AWS's CDN)?
Many of the basic services (EC2, S3, RDS, etc.) have free-tiers which will allow you to use them for free, provided you stay within certain (and usually very low) levels of usage.
If your database is going to get VERY large and cost is your primary concern, it's usually more cost-effective to manage it on your own EC2 server. However, things like updates, security, scaling and backups then all become your problem to deal with and can often incur additional cost (e.g. your backups will likely require volume snapshots, which cost money, whereas RDS backups are free).
If you're going to have a significant amount of static content, it will be more cost-effective to host it on your own EC2 server, but again, all maintenance, such as backups and scaling to meet demand (which can incur cost), becomes your responsibility, whereas all of that is taken care of by S3, though there you pay each time a file is accessed.
If cost is your primary concern, my suggestion is to start your development using the AWS services (RDS, S3, maybe Elastic Beanstalk), even though that can add complexity to your development efforts (dealing with authentication, additional SDKs, etc.). You can typically and pretty easily roll out your own replacement service later (MySQL, an EBS filesystem to replace S3, etc.). Additionally, depending on your roll-out, there can be network traffic costs. Usually this isn't a problem if you're doing things the way Amazon wants you to, but... it wouldn't be unheard of.
To get you started:
https://aws.amazon.com/s3/pricing/
https://aws.amazon.com/ec2/pricing/
https://aws.amazon.com/rds/pricing/
Additionally, there is a nifty calculator which can help you estimate your costs. You will need to know what your traffic expectations as well as service requirements are, but you can play around with those numbers.
https://calculator.s3.amazonaws.com/index.html
You don't have to worry about charges, as AWS has a Free Tier which offers most of the services free for 12 months.
https://aws.amazon.com/free/
You can have one t2.micro instance (1 vCPU + 1 GB RAM) for Elastic Beanstalk with auto scaling turned off, and purchase Reserved Instances with a 1- or 3-year all-upfront payment to save more.
https://aws.amazon.com/ec2/purchasing-options/reserved-instances/
You must use the Relational Database Service (RDS) for the database, instead of installing a DB on your Elastic Beanstalk instance, and store files on the Simple Storage Service (S3).
RDS storage pricing is $0.10 per GB-month after the first 12 months, so you don't need to worry about the database size.
After the first 12 months, your monthly bill should be less than USD 50.
In production, we are using two t2.micro instances (Windows) with one MS SQL database on RDS, and we only have to pay for the extra EC2 machine, about USD 14 per month.
I did find some relevant information in the Stack Overflow thread below, which talks about the capabilities of t2.micro, t2.small and t2.medium EC2 instances. Do have a look at it.
several t2.micro better than a single t2.small or t2.medium
What's a good strategy to maintain availability with Azure SQL? We've been noticing way too many service interruptions, with error messages that basically kill our application entirely. The SLA is far from 99.9%, and honestly we're not interested in getting a refund, just in reliable availability so our customers don't experience app outages. We're actually getting better uptime with a single IaaS VM running SQL Server than with Azure SQL (which totally blows our minds).
Anyway, leaving all those observations behind: programmatically, what is the most economical approach to getting better than the advertised 99.99% availability (say an order of magnitude better: 99.999%, about 5 minutes of downtime per year) with Azure SQL? Are there any specific data access programming patterns and operating procedures anyone recommends?
EDIT: We're already using the Microsoft EntLib 6.0 Transient Fault Handling Application Block library ... 10 retries with 100 ms between attempts. However, it's not 'transient' when these outages are 5+ hours long ...
Have a look at the passive and active geo-replication features of SQL Database -
http://azure.microsoft.com/blog/2014/07/12/spotlight-on-sql-database-active-geo-replication/
Active geo-replication seamlessly creates additional copies of the database in another data centre that you can fail over to in case of an issue in a particular data centre (a sketch of a failover-aware connection pattern follows below).
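To make the geo-replica useful from the application side, the data access layer has to know about both servers. Here is a rough sketch of that pattern; the server names, connection string details and retry counts are placeholder assumptions, and with a non-readable secondary you would also have to initiate failover (promote it to read-write) before sending writes its way:

    # Rough sketch of failover-aware connection logic to pair with geo-replication.
    # Server names, credentials and retry counts are placeholders.
    import time
    import pyodbc

    SERVERS = [
        "tcp:myserver-primary.database.windows.net,1433",    # placeholder primary
        "tcp:myserver-secondary.database.windows.net,1433",  # placeholder geo-replica
    ]
    CONN_TEMPLATE = ("Driver={ODBC Driver 17 for SQL Server};Server=%s;"
                     "Database=mydb;Uid=appuser;Pwd=<secret>;Encrypt=yes")

    def get_connection(retries_per_server=3, delay_s=0.5):
        last_error = None
        for server in SERVERS:                      # try the primary first
            for attempt in range(retries_per_server):
                try:
                    return pyodbc.connect(CONN_TEMPLATE % server, timeout=5)
                except pyodbc.Error as exc:
                    last_error = exc
                    time.sleep(delay_s * (2 ** attempt))   # exponential backoff
        raise last_error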
We use stovepiped deployments in each data centre with an external load-balancer/failover solution. Health-check URLs are monitored to detect data centre faults and fail traffic over to another region.
In other words, we built our own business continuity solution to guarantee 5x9s (a rough sketch of the health-check idea follows below).
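As a rough illustration of the monitoring-URL idea (the endpoints, regions and timeouts below are placeholder assumptions, and a production setup would also need hysteresis so it doesn't flap between regions):

    # Rough sketch of region health checks: poll a status URL in each region and
    # point clients at the first healthy one. URLs and timeouts are placeholders.
    import requests

    REGION_HEALTH_URLS = {
        "westeurope": "https://we.myapp.example.com/health",   # placeholder
        "northeurope": "https://ne.myapp.example.com/health",  # placeholder
    }

    def pick_active_region():
        for region, url in REGION_HEALTH_URLS.items():
            try:
                if requests.get(url, timeout=2).status_code == 200:
                    return region
            except requests.RequestException:
                continue
        raise RuntimeError("no healthy region available")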
I have a background in web programming, where both the data and the code live on the server. Web hosts with MySQL or the like are plentiful and cheap, so using the application from multiple PCs was never a problem.
However, I'm considering switching to building desktop applications, and the one factor that annoys me is syncing data across the many PCs I use. I was thinking of setting up a light Amazon EC2 instance with PostgreSQL on it and having my desktop applications use that.
I have a few questions:
I'm curious what latency I might expect by running the database on EC2 instead of on the local network; any experience or insight is appreciated.
Are there better/more obvious/cheaper solutions?
I've looked at the pricing and it seems to come down to $24.48 per month with a yearly contract. Whilst not really expensive, it is not exactly cheap either. At what point does it become more interesting to run a local server?
I'm obviously not using my applications for large parts of the day (sleep, work, ...). I was wondering whether I can have the Amazon server go into a sort of "sleep" mode and wake up when poked. An initial delay for the first desktop application is acceptable. The reason behind this behavior would be to save money on the instance if it is only actually needed for 10% of the day.
I welcome any feedback at all on how this problem is best tackled.
This could get ugly. Every single query you do will have latency associated with it. If you have a lot of queries, this can add up very fast. So keep your query count low, and try to pre-fetch and cache data when possible.
Not enough information to answer that question.
Depends on the cost of your local server. Keep in mind that you will need to pay for electricity to keep it on.
You can stop your instance when you are not using it; you won't get billed for instance hours while it's in the stopped state, with the exception of high-utilization reserved instances, where you still pay the full cost. (A minimal start/stop sketch follows below.)
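For the "wake up when poked" behaviour, the desktop app could start the instance on demand and stop it when finished. A minimal sketch with boto3, where the instance ID and region are placeholder assumptions (note that a stopped instance's public IP changes on restart unless you attach an Elastic IP):

    # Minimal "wake the DB server on demand" sketch using boto3.
    # Instance ID and region are placeholders.
    import boto3

    INSTANCE_ID = "i-0123456789abcdef0"                  # placeholder
    ec2 = boto3.client("ec2", region_name="eu-west-1")   # placeholder region

    def ensure_running():
        reservations = ec2.describe_instances(InstanceIds=[INSTANCE_ID])["Reservations"]
        state = reservations[0]["Instances"][0]["State"]["Name"]
        if state != "running":
            ec2.start_instances(InstanceIds=[INSTANCE_ID])
            ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])

    def stop_when_done():
        ec2.stop_instances(InstanceIds=[INSTANCE_ID])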
I'm currently looking for a cloud PaaS that will allow me to scale an application to handle anything between 1 user and 10 million+ users ... I've never worked on anything this big, and the big question I can't seem to get a clear answer to is this: if you develop, let's say, a standard application with a relational database and SOAP web services, will this application scale automatically when deployed on a PaaS solution, or do you still need to build the application with failover, redundancy and all those things in mind?
Let's say I deploy a Spring/Hibernate application to Amazon EC2 and create a single Ubuntu Server instance with Tomcat installed: will this application just scale indefinitely, or do I need more Ubuntu instances? If more than one Ubuntu instance is needed, does Amazon take care of running the application across both instances, or is this the developer's responsibility? What about database storage: can I install a database on EC2 that will scale as the database grows, or do I need to use one of their APIs instead if I want it to scale indefinitely?
CloudFoundry allows you to build locally and just deploy straight to their PaaS, but since it's in beta, there's a limit on the amount of resources you can use and databases are limited to 128 MB, if I remember correctly, so this is a no-go for now. Some have suggested installing CloudFoundry on Amazon EC2; how does it scale, and how is the database layer handled then?
GAE (Google App Engine): will this allow me to just deploy an app and not have to worry about how it scales and implements redundancy? There appear to be some limitations on what you can and can't run on GAE, and their recent price increase upset quite a large number of developers. Is it really that expensive compared to other providers?
So basically, will it scale and what needs to be done to make it scale?
That's a lot of questions for one post. Anyway:
Amazon EC2 does not scale automatically with load. EC2 is basically just a virtual machine. You can achieve scaling of EC2 instances with Auto Scaling and Elastic Load Balancing (a rough sketch follows after these points).
SQL databases are hard to scale horizontally; that's why people started using NoSQL databases in the first place. It's best to see which database your cloud provider offers as a managed service: Datastore on GAE and DynamoDB on Amazon.
Installing your own database on EC2 instances is very impractical, as EC2 instance storage is ephemeral (the data on "disk" is lost when the instance is stopped or fails).
GAE Datastore is actually one big database for all the applications running on it, so it's pretty scalable: your millions of users should not be a problem for it.
http://highscalability.com/blog/2011/1/11/google-megastore-3-billion-writes-and-20-billion-read-transa.html
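Coming back to the EC2 point above: wiring up Auto Scaling and a load balancer is the developer's (or operator's) job, not something EC2 does for you. A rough sketch with boto3, where the group names, AMI, region and the separately created load balancer are all placeholder assumptions (launch templates would be the more modern equivalent of the launch configuration used here):

    # Rough sketch: AWS then launches/terminates instances between MinSize and
    # MaxSize, and the load balancer spreads traffic over them. All names below
    # are placeholders.
    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # placeholder region

    autoscaling.create_launch_configuration(
        LaunchConfigurationName="web-lc",         # placeholder
        ImageId="ami-0123456789abcdef0",          # placeholder AMI with Tomcat baked in
        InstanceType="t2.micro",
    )

    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-asg",           # placeholder
        LaunchConfigurationName="web-lc",
        MinSize=2,
        MaxSize=10,
        AvailabilityZones=["us-east-1a", "us-east-1b"],
        LoadBalancerNames=["web-elb"],            # placeholder classic ELB, created separately
    )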
Yes, App Engine scales automatically, both the frontend instances and the database. There is nothing special you need to do to make it scale; just use their APIs.
There are limitations on what you can do with App Engine:
A. No local storage (filesystem) - you need to use Datastore or Blobstore.
B. Comet is only supported via their proprietary Channel API.
C. Datastore is a NoSQL database: no JOINs, limited queries, limited transactions.
The cost of GAE is not bad. We do 1M requests a day for about 5 dollars a day. The biggest saving comes from the fact that you do not need a system admin on GAE (but you do need one for EC2). Compared to the cost of manpower, GAE is incredibly cheap.
Some hints to save money on (and speed up) GAE:
A. Use get instead of query in Datastore (this requires carefully crafting natural keys).
B. Use memcache to cache data you got from the Datastore. This can be done automatically with Objectify and its @Cached annotation (a small illustration of A and B follows this list).
C. Denormalize data, meaning you write data redundantly in various places in order to get to it in as few operations as possible.
D. If you have a lot of REST requests from devices where you do not use cookies, then switch off session support (or roll your own, as we did). Sessions use the Datastore under the hood, and every request does a get and a put.
E. Read about adjusting app settings. Try different settings (depending on how tolerant your app is to request delay, and on your traffic patterns/spikes). We were able to cut our frontend instance count by 70%.
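Objectify and @Cached are Java; in Python, the equivalent of hints A and B looks roughly like this with the old App Engine Python APIs. The model and key scheme below are placeholder assumptions (and with ndb, memcache caching of key-based gets is largely automatic, much like @Cached):

    # Illustration of hints A and B with the App Engine Python APIs: fetch by
    # key instead of querying, and put memcache in front of the Datastore.
    # Model name and key scheme are placeholders.
    from google.appengine.api import memcache
    from google.appengine.ext import ndb

    class UserProfile(ndb.Model):          # placeholder model
        display_name = ndb.StringProperty()

    def get_profile(user_id):
        cache_key = "profile:%s" % user_id
        profile = memcache.get(cache_key)
        if profile is None:
            # Hint A: key-based get, cheaper and faster than a query.
            profile = ndb.Key(UserProfile, user_id).get()
            if profile is not None:
                memcache.set(cache_key, profile, time=300)   # Hint B
        return profile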