After reading the pricing page for Spanner, Google's new relational database, I see that the cost is based on storage and usage. They charge $0.90 per node per hour.
The question is: if I create the database for development and only use it 6 hours a day, 100 hours a month at most, do I pay only for the hours of active use (receiving queries) or for the whole month? Is the charge similar to App Engine instances?
In the first case, there is no problem spending US$90 to test this new database, but if they charge for the whole month (whether I use it or not), the cost rises to about US$670/month.
Has anyone been using this database who can share the final invoiced cost?
In the tutorial they recommend deleting the database after testing, but for development, deleting the database and recreating it and its data every day is not practical.
Correct, you need to maintain at least 1 node to keep the data, and you need at least 1 node for every 2 TiB of data.
So, if you upload 50 TiB of data, you need to keep 25 nodes at a minimum to maintain the data.
More info - https://cloud.google.com/spanner/docs/limits
You are charged for any resources in your instances (while the nodes are running and storage is being used), even if you aren't actively issuing queries. It's like Compute Engine or Cloud SQL.
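For a rough sense of the bill, a back-of-the-envelope sketch like the one below works. The $0.90/node/hour figure comes from the question; the storage rate and dataset size are assumptions to verify against the pricing page.

```python
import math

# Back-of-the-envelope Spanner estimate.
NODE_HOUR = 0.90                 # per-node hourly rate, from the question
STORAGE_PER_GB_MONTH = 0.30      # assumed storage rate
HOURS_PER_MONTH = 744

data_tib = 1                                       # assumed dataset size
nodes = max(1, math.ceil(data_tib / 2))            # at least 1 node per 2 TiB of data

node_cost = nodes * NODE_HOUR * HOURS_PER_MONTH    # nodes bill for every hour they exist
storage_cost = data_tib * 1024 * STORAGE_PER_GB_MONTH

print(f"{nodes} node(s): ${node_cost:.2f}/month, storage: ${storage_cost:.2f}/month")
# 1 node -> ~$669.60/month in node charges alone, matching the ~$670 in the question
```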
My use case is the following: I run about 60 websockets from 7 data sources in parallel that record stock tickers (so time-series data). Currently, I'm writing the data into a MongoDB instance hosted on a Google Cloud VM, such that every data source has its own collection and all collections live in the same database.
However, the database has grown to 0.6 GB and ~10 million rows after only five days of data. I'm pretty new to such questions, but I have a feeling that this is not a viable long-term solution. I will never need all of the data at once, but I do need to be able to query all of it by date / currency. However, as I understand it, such queries might become impossible once the dataset is bigger than my RAM. Is that true?
Moreover, this is a research project, but unfortunately I'm currently not able to use a university cluster, so I'm hosting the data on a private VM. However, this is subject to a budget constraint, and highly performant machines quickly become very expensive. That's why I'm questioning my design choice. Currently, I'm thinking of either switching to another kind of database (but I fear I'd run into the same issues again) or exporting the database to CSV once per week / month / whatever and wiping it. That would be quite a hassle, though, and I'm also scared of losing data.
So my question is: how can I design this database such that I can subset the data by one of the keys (either datetime or ticker_id) even when the database grows larger than my machine's RAM? Disk space is not an issue.
On top of what Alex Blex already commented about storage and performance:
Query response time (you already have close to 10M rows in 5 days) will worsen as the dataset grows. You can look at sharding to break the collection down into reasonable chunks while still keeping all the data available for queries.
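To keep the per-ticker, per-date queries workable even when the collection outgrows RAM, note that MongoDB only needs the relevant index in memory, not the whole dataset. A minimal pymongo sketch, assuming fields named ticker_id and timestamp (adjust to your actual schema):

```python
from datetime import datetime
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumed connection string
ticks = client["market_data"]["ticks"]              # assumed database/collection names

# A compound index lets MongoDB answer ticker + date-range queries by walking the
# index instead of scanning (and paging in) the whole collection.
ticks.create_index([("ticker_id", ASCENDING), ("timestamp", ASCENDING)])

# Query one ticker for one day; only the matching slice is read.
cursor = ticks.find({
    "ticker_id": "BTC-USD",
    "timestamp": {"$gte": datetime(2021, 1, 1), "$lt": datetime(2021, 1, 2)},
})
for doc in cursor:
    ...  # process documents one at a time instead of materialising everything
```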
We are using Application Insights to track some basic telemetry from a WPF application. While developing the app we have just been using the Basic plan, but we would like to make use of the continuous export feature, which requires Enterprise.
But according to the pricing page, Enterprise is charged at $15/node/month. Will it treat each user's PC as a node? It is not really clear, as Application Insights is really aimed at web servers.
I am happy to pay for 1 node and whatever extra data charges are incurred, but unsurprisingly $15 per user machine per month is not affordable.
It is based on the role instance, so basically the machine name. You should just stick with the Basic plan unless you need the OMS Connector or Continuous Export. If you have this deployed to hundreds of machines and need these features, ping #DaleKoetke on Twitter; I think he might even have his e-mail there.
This is a response I got from Microsoft:
There are a few things we have to understand.
For the purposes of this explanation, assume Node == Client.
Continuous feed to node – we count the number of distinct nodes sending telemetry data each hour. If a node does not send any telemetry during a particular hour, it is not counted. The monthly per-node pricing for Enterprise ($15) assumes a node is sending telemetry every hour of the month, so if there are periods of inactivity for your application during the month, the actual charge will be lower.
So the short answer is yes, we charge per node; however, the charge is proportional to the "activity feed". No activity, no charge, and the Enterprise charge per node declines accordingly.
Calculation:
Let's say you have 744 hours in a month and there is a continuous feed for one node, which is priced at $15.
So about $0.020 per hour is charged per node for a continuous feed.
You will need to estimate the continuous activity feed on your client machines to get an idea of the charges.
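A small sketch of the proration described above; the 744 hours and $15/month figures are from the reply, while the active-hours schedule is an assumed example:

```python
# Enterprise node pricing is prorated by the hours a node actually sends telemetry.
MONTHLY_PER_NODE = 15.00
HOURS_PER_MONTH = 744

per_node_hour = MONTHLY_PER_NODE / HOURS_PER_MONTH     # ~$0.020/hour, as stated above

active_hours = 8 * 22      # assumed: the WPF app runs 8 h/day on 22 working days
print(f"Per node-hour: ${per_node_hour:.3f}")
print(f"Monthly charge for one such node: ${per_node_hour * active_hours:.2f}")  # ~$3.55
```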
I have about 20GB of data in the datastore.
The built-in indexes (which I have no control over) have grown to ~200GB.
I backed up all the data to BigQuery and don't need it anymore.
I am trying to get out of the Datastore, but I can't: the Datastore Admin console can't delete that many entities (the bulk delete option uses MapReduce, which fails on quota within an hour or so), and the cost of deleting each entity programmatically is too high (> $1000, because the many indexes cause many write operations).
Meanwhile Google charges me $50/month for storing data I don't need :(
How do I either close my Datastore project (not the App Engine project, just the Datastore part of it) or just wipe out all the data?
Please help me get out of this!
Wait until July 1, 2016. Then delete all entities.
Starting from July 1, 2016 the pricing is $0.02 per 100,000 deletes regardless of indexed properties.
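If you script the wipe after that date, a sketch along these lines with the standalone Cloud Datastore client should work; the kind name is a placeholder, and the 500-key batches stay within the per-commit mutation limit:

```python
from google.cloud import datastore

client = datastore.Client()   # assumes default project credentials

def delete_all(kind, batch_size=500):
    """Delete every entity of the given kind, up to 500 keys per commit."""
    query = client.query(kind=kind)
    query.keys_only()          # fetch keys only; no need to read entity payloads
    while True:
        keys = [entity.key for entity in query.fetch(limit=batch_size)]
        if not keys:
            break
        client.delete_multi(keys)

# Repeat for each kind you stored; "MyKind" is a placeholder.
delete_all("MyKind")
# At $0.02 per 100,000 deletes, even tens of millions of entities
# come to only a few dollars.
```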
Pardon me if this isn't the place to ask such a question, but I have finished my project and am thinking of deploying it using Amazon Elastic Beanstalk, and I have a big worry: my project's database could become humongous. It's a community website like Reddit, where users can create a page on which other people can post text, links, pictures, and videos (YouTube). Users also get a profile page and are able to comment as well. This was my first big project, and I don't want to pay more than $200 in server fees every month.
Should I still deploy this, or just be happy that I proved to myself I can build it? How much do you think I'll have to pay, assuming I get at most about 100 users?
For starters, you can look at the costs for any AWS service by going to that service's homepage and clicking "Pricing", usually on the left side. I typically get to the pricing page by Googling "AWS <service> Pricing" (e.g. "AWS EC2 Pricing").
Whether or not you incur any cost, and what that cost is, really depends on how you deploy your website. Questions like: is your database self-managed (i.e. installed on your own EC2 instance) or are you using RDS? Are you using S3 to store static content? Will you be serving your web content via CloudFront (AWS' CDN)?
Many of the basic services (EC2, S3, RDS, etc.) have free-tiers which will allow you to use them for free, provided you stay within certain (and usually very low) levels of usage.
If your database is going to get VERY large and cost is your primary concern, it's usually more cost-effective to manage it on your own EC2 server. However, things like updates, security, scaling, and backups then all become your problem to deal with and can often incur additional cost (e.g. your backups will likely require volume snapshots, which cost you money, whereas RDS backups are free).
If you're going to have a significant amount of static content, it will be more cost-effective to host it on your own EC2 server, but again all maintenance becomes your responsibility, such as backups and scaling to meet demand (which can incur cost), whereas S3 takes care of all of that for you, though you pay each time a file is accessed.
If cost is your primary concern, my suggestion is to start your development using the AWS services (RDS, S3, maybe Elastic Beanstalk), though that can add complexity to your development efforts (dealing with authentication, additional SDKs, etc.). You can typically and pretty easily roll out your own replacement service later (MySQL, an EBS filesystem to replace S3, etc.). Additionally, depending on your roll-out, there can be network traffic costs. Usually this isn't a problem if you're doing things the way Amazon wants you to, but it wouldn't be unheard of.
To get you started:
https://aws.amazon.com/s3/pricing/
https://aws.amazon.com/ec2/pricing/
https://aws.amazon.com/rds/pricing/
Additionally, there is a nifty calculator which can help you estimate your costs. You will need to know your traffic expectations as well as your service requirements, but you can play around with the numbers.
https://calculator.s3.amazonaws.com/index.html
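If you just want a back-of-the-envelope number before opening the calculator, something like the sketch below works; every rate and usage figure in it is an assumption to replace with current pricing and your own traffic estimates:

```python
# Illustrative on-demand rates (assumptions; check the AWS pricing pages).
EC2_HOURLY = 0.0116        # e.g. a t2.micro web server, $/hour
RDS_HOURLY = 0.017         # e.g. a db.t2.micro MySQL instance, $/hour
S3_PER_GB_MONTH = 0.023
DATA_OUT_PER_GB = 0.09

HOURS_PER_MONTH = 730

# Assumed usage for roughly 100 users.
s3_storage_gb = 20
data_out_gb = 30

monthly = ((EC2_HOURLY + RDS_HOURLY) * HOURS_PER_MONTH
           + s3_storage_gb * S3_PER_GB_MONTH
           + data_out_gb * DATA_OUT_PER_GB)
print(f"~${monthly:.2f}/month")   # roughly $24 under these assumptions, well below $200
```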
You don't have to worry about charges, as AWS has a Free Tier which offers most of its services free for 12 months.
https://aws.amazon.com/free/
You can have one t2.micro (1 vCPU + 1 GB RAM) instance for Elastic Beanstalk with auto scaling turned off, and you can purchase Reserved Instances with a 1- or 3-year all-upfront payment to save more.
https://aws.amazon.com/ec2/purchasing-options/reserved-instances/
You should use the Relational Database Service (RDS) for your database instead of installing a DB on your Elastic Beanstalk instance, and store files on the Simple Storage Service (S3).
RDS storage pricing is $0.100 per GB-month after the first 12 months, so you don't need to worry about the database size.
After the first 12 months, your monthly bill should be less than USD 50.
In production, we are using 2 t2.micro instances (Windows) with 1 MS SQL database on RDS, and we only have to pay for the extra EC2 machine, about USD 14 per month.
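That ~USD 14 figure is consistent with simple hourly math; the hourly rate below is an assumed Windows t2.micro on-demand price:

```python
# One t2.micro is covered by the free tier; the second one bills by the hour.
assumed_hourly = 0.019      # assumed Windows t2.micro on-demand rate, $/hour
hours_per_month = 730
print(f"${assumed_hourly * hours_per_month:.2f}/month")   # ~$13.87
```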
I found some relevant information in the Stack Overflow thread below, which talks about the capabilities of t2.micro, t2.small and t2.medium EC2 instances. Do have a look at it.
several t2.micro better than a single t2.small or t2.medium
I have never used Amazon EC2 or the RDS service. I am trying to calculate my cost using http://calculator.s3.amazonaws.com/calc5.html
I searched a little but could not locate answers to some basic things. Can you help me out with these:
What does "DB Instance" mean? Is 1 database = 1 instance, or 1 connection = 1 instance?
How do I calculate hours/month of usage? Does it depend on transfer rates or processing time? Is there a way I can get a rough idea of it?
What if I already have my DB ready and want to upload it directly (it would be a few GBs)? How will that be charged?
I am new to Amazon EC2 and searched Stack Overflow and Server Fault before posting this question. I got some ideas, but nothing specific to what I am looking for. Can someone help me out here?
In general, one database = one instance. You spin up instances and do what you like with them. It's definitely possible to have multiple connections to one.
Hours per month is just that: how many hours per month you have the instance active. If you plan to have the instance active 24/7, you may find more cost-effective alternatives with other cloud providers. If you run it less often than that, you save money while it's not active. It's billed hourly to your account at the specified rate.
Uploaded data is counted at the standard transfer rates. A few GBs doesn't cost much, but you will be paying for the service from the moment you spin up the instance.
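To make the hours-per-month point concrete, the bill is simply instance-hours times the hourly rate; the rate and schedules below are assumed examples:

```python
# Hourly billing: you pay for the hours the instance is running, not per query.
assumed_rate = 0.017            # assumed db.t2.micro hourly rate, $/hour

always_on = 24 * 30             # running 24/7
work_hours = 8 * 22             # stopped outside an assumed 8 h x 22 day schedule

print(f"24/7:         ${always_on * assumed_rate:.2f}/month")   # ~$12.24
print(f"8 h weekdays: ${work_hours * assumed_rate:.2f}/month")  # ~$2.99
```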