GAE DataStore vs Google Cloud SQL for Enterprise Management Systems - google-app-engine

I am building an enterprise management system using GAE. I have built several applications using GAE and the Datastore, but never one that will require a high volume of users entering transactions along with the need for administrative and management reporting. My biggest fear is that when I need to create cross-tab and other detailed reports (or business-intelligence reporting and data manipulation) I will be facing a mountain of problems with GAE's Datastore querying and data-pull limits. Is it really just architectural preference, or are there quantitative concerns here?
In the past I have built systems using C++/C#/Java against Oracle/MySQL/MSSQL (with a caching layer sprinkled in for some added performance on complex or frequently accessed DB results).
I keep reading that we are to throw away the old mentality of relational data and move to the new world of the big McHashTable in the sky... but new isn't always better... Any insight or experience on the above would be helpful.

From the Cloud SQL FAQ:
Should I use Google Cloud SQL or the App Engine Datastore?
This depends on the requirements of the application. Datastore provides NoSQL key-value storage that is highly scalable, but does not support the complex queries offered by a SQL database. Cloud SQL supports complex queries and ACID transactions, but this means the database acts as a ‘fixed pipe’ and performance is less scalable. Many applications use both types of storage.
If you need a lot of writes (~XXX per second) to a DB entity with distributed keys, that's where the Google App Engine Datastore really shines.
If you need support for complex and random user crafted queries, that's where Google Cloud SQL is more convenient.
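To make "distributed keys" concrete, here is a minimal sketch of the classic sharded-counter pattern in GAE Python with ndb (the shard count and model name are illustrative): writes are spread over many entities so no single entity group caps the write rate.

    import random
    from google.appengine.ext import ndb

    NUM_SHARDS = 20  # assumption: raise this if the write rate grows

    class CounterShard(ndb.Model):
        """One shard of a counter; the real value is the sum over all shards."""
        count = ndb.IntegerProperty(default=0)

    @ndb.transactional
    def increment(counter_name):
        # Pick a random shard so concurrent writers rarely contend on one entity.
        shard_id = '%s-%d' % (counter_name, random.randint(0, NUM_SHARDS - 1))
        shard = CounterShard.get_by_id(shard_id)
        if shard is None:
            shard = CounterShard(id=shard_id)
        shard.count += 1
        shard.put()

    def total(counter_name):
        keys = [ndb.Key(CounterShard, '%s-%d' % (counter_name, i))
                for i in range(NUM_SHARDS)]
        return sum(s.count for s in ndb.get_multi(keys) if s is not None)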

What scares me more in the GAE Datastore is the limit on the number of indexes. For example, if you need to search by some field or sort by it, you need one more index. In total you can have 200 indexes. If you have an entity with 10 searchable fields and you can sort by any field, there are about 100 combinations, so you need about 100 indexes. I have developed a few small projects for GAE, and those are success stories. But when a big one comes along, GAE is not the right fit.
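To make the limitation concrete, here is a hedged sketch in GAE Python with ndb (the entity and property names are made up): each distinct filter/sort combination like the one below needs its own composite index entry in index.yaml, which is how ten searchable fields with arbitrary sort orders quickly eat into the 200-index quota.

    from google.appengine.ext import ndb

    class Product(ndb.Model):       # hypothetical entity
        category = ndb.StringProperty()
        price = ndb.FloatProperty()
        created = ndb.DateTimeProperty(auto_now_add=True)

    # Filtering on one property while sorting on another requires a composite
    # index on (category, -price). Sorting the same filter by 'created' instead
    # requires a second composite index, and so on for every combination you
    # want to support.
    products = (Product.query(Product.category == 'tools')
                .order(-Product.price)
                .fetch(20))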
About caching: you can do it with GAE, but its distributed cache works very slowly. I prefer to create a single permanent backend instance with a RESTful API that holds cached values in memory. Frontend instances call this API to get/set values.
Maybe it is possible to build a complex system with GAE, but it will end up as a set of small applications/services.
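A minimal sketch of the caching backend described above, assuming the bottle microframework (mentioned elsewhere in this thread); the endpoints and port are illustrative, and on App Engine you would mount the WSGI app on a resident backend/module instance rather than use bottle's built-in server.

    from bottle import Bottle, abort, request, run

    app = Bottle()
    _cache = {}  # lives in the memory of this single resident instance

    @app.get('/cache/<key>')
    def get_value(key):
        if key not in _cache:
            abort(404, 'key not found')
        return {'key': key, 'value': _cache[key]}

    @app.put('/cache/<key>')
    def set_value(key):
        _cache[key] = request.body.read().decode('utf-8')
        return {'key': key, 'stored': True}

    if __name__ == '__main__':
        # Local testing only; frontend instances call these endpoints over
        # HTTP to get/set cached values.
        run(app, host='127.0.0.1', port=8080)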

Related

How are the instances / datastores for different project ids comingled in GCP App Engine?

GCP has put out several articles about how their various services work behind the scenes.
Is there any information out there illustrating how they keep projects and the data for those projects segregated?
Is my data stored on separate machines from other GCP customers, or on the same machines with some kind of multi-tenancy implemented (like the article where they explain how I could implement multi-tenancy within my own Datastore project: https://cloud.google.com/datastore/docs/concepts/multitenancy)?
Datastore is a Non-SQL database, built on Megastore, which in turn builds on Bigtable. Datastore is essentially a layer on top of Bigtable that adds query semantics, transactions, and index management (a DBMS).
Perhaps you will find it interesting to know more about the internals of Google Cloud Datastore. Also, here you can find a further explanation of Megastore, on which most of Datastore is built. The information on those slides can be found in this public paper.
Long story short: no, your data is not stored on machines separate from those of other Google Cloud Platform users, and your own data may itself be spread across multiple physical machines.
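If the multi-tenancy angle is what you are after, here is a small sketch (GAE Python; the kind and helper are hypothetical) of the namespace mechanism from the article you linked. The separation it gives you is logical, attached to every key, rather than physical isolation.

    from google.appengine.api import namespace_manager
    from google.appengine.ext import ndb

    class Invoice(ndb.Model):          # hypothetical kind
        total = ndb.FloatProperty()

    def create_invoice(tenant_id, total):
        previous = namespace_manager.get_namespace()
        try:
            # Every key written while this namespace is set belongs to the tenant.
            namespace_manager.set_namespace(tenant_id)
            Invoice(total=total).put()
        finally:
            namespace_manager.set_namespace(previous)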

Pluggable database interface

I am working on a project where we are scoping out the specs for an interface to the backend systems of multiple wholesalers. Here is what we are working with:
Each wholesaler has multiple products, upwards of 10,000. And each wholesaler has customized prices for their products.
The list of wholesalers being accessed will keep growing in the future, so potentially 1000s of wholesalers could be accessed by the system.
Wholesalers are geographically dispersed.
The interface to this system will allow the user to select the wholesaler they wish and browse their products.
Product price updates should be reflected on the site in real time. So, if the wholesaler updates the price it should immediately be available on the site.
System should be database agnostic.
The system should be easy to setup on the wholesalers end, and be minimally intrusive in their daily activities.
Initially, I thought about creating databases for each wholesaler on our end, but with potentially 1000s of wholesalers in the future, is this the best option as far as performance and storage are concerned?
Would it be better to query the wholesalers' databases directly instead of storing their data locally? Can we do this and still remain database agnostic?
What would be the best technology stack for such an implementation? I need some kind of ORM tool.
Java based frameworks and technologies preferred.
Thanks.
If you want to create software that can switch databases, I would suggest using Hibernate (or NHibernate if you use .NET).
Hibernate is an ORM that is not tied to a specific database, which allows you to switch the DB very easily. It is proven in large applications and well integrated with the Spring framework (but can be used without Spring, too). (Spring.NET is the equivalent if you use .NET.)
Spring is a good technology stack for building large, scalable applications (it provides an IoC container, a database access layer, transaction management, AOP support and much more).
Wikipedia gives you a short overview:
http://en.wikipedia.org/wiki/Hibernate_(Java)
http://en.wikipedia.org/wiki/Spring_Framework
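The question asks for Java, where Hibernate is the natural fit; purely to illustrate the principle compactly, here is an analogous sketch in Python using SQLAlchemy (a comparable ORM; the mapping and connection strings are hypothetical). The mapping code never changes, and switching the database vendor is only a change to the connection URL/dialect; Hibernate works the same way via the dialect and connection settings in its configuration.

    from decimal import Decimal

    from sqlalchemy import Column, Integer, Numeric, String, create_engine
    from sqlalchemy.orm import declarative_base, sessionmaker

    Base = declarative_base()

    class Product(Base):               # hypothetical mapping
        __tablename__ = 'products'
        id = Column(Integer, primary_key=True)
        name = Column(String(200))
        price = Column(Numeric(10, 2))

    # Switching vendors is configuration, not code:
    engine = create_engine('sqlite:///wholesaler.db')           # development
    # engine = create_engine('postgresql://user:pw@host/db')    # production

    Base.metadata.create_all(engine)
    Session = sessionmaker(bind=engine)
    session = Session()
    session.add(Product(name='Widget', price=Decimal('9.99')))
    session.commit()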
Would it be better to query the wholesalers' databases directly instead of storing their data locally?
This depends on the availability and latency of access to the remote data. Databases themselves offer several possibilities for keeping multiple server instances in sync. Ask yourself what should/would happen if a wholesaler's database goes (partly) offline. Maybe not all data needs to be duplicated.
Can we do this and still remain database agnostic?
Yes, see my answer related to the ORM (N)Hibernate.
What would be the best technology stack for such an implementation?
"Best" depends on your requirements. I like Spring. If you go with .NET, the built-in ADO.NET Entity Framework might fit, too.

Geospatial Database Cloud Server

Are there any cloud hosting solutions for geospatial data? I am currently writing a directory style app where businesses can sign up and then users can find nearby ones.
I am considering Google App Engine for this, but from what I can tell the GeoModel code is quite expensive (up to tens of thousands of dollars a year) to run since Google updated the pricing of App Engine. It doesn't seem like App Engine's database is really suited to this kind of query (though the SQL solution may be an answer).
I was hoping to find a service where I could send off an HTTP request to add data (a business's id, name and icon URL) to a database, and another one to find a list of businesses that are near a given point. A service is preferable because this is work done for a client, and we would like the solution to be managed with as little ongoing involvement from us or the client as possible.
EDIT:
I just found cartodb.com, which uses PostgreSQL and is reasonably priced. Are there any other alternatives?
The App Engine Search API (currently in Experimental) supports GeoPoints and geosearch, and is great for exactly the kind of query that you describe.
See the Google Developers Academy (GDA) App Engine Search API classes for a bit more info and an example as well.
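For reference, a minimal sketch of the kind of geo query the Search API supports (GAE Python; the index name, document fields and the 2 km radius are illustrative):

    from google.appengine.api import search

    index = search.Index(name='businesses')   # illustrative index name

    # Add a business with its location stored as a GeoPoint.
    index.put(search.Document(
        doc_id='biz-123',
        fields=[
            search.TextField(name='name', value='Corner Cafe'),
            search.GeoField(name='location',
                            value=search.GeoPoint(51.5074, -0.1278)),
        ]))

    # Find businesses within 2 km of a given point.
    nearby = index.search(
        'distance(location, geopoint(51.5074, -0.1278)) < 2000')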
http://www.iriscouch.com/ is a cloud-based host for CouchDB, and they support the GeoCouch extension for CouchDB to store GeoJSON data and perform spatial queries.
We have decided to go with cartodb.com because it looks like they have a good price-to-ease-of-use ratio.
You mentioned going with CartoDB, which is a good choice with a nice UI.
Just adding: if you are looking for a scalable backend, you could use StormDB. It is a cloud-hosted SQL database with geospatial extensions. Your data is automatically distributed among multiple nodes for write, read, and parallel query scalability.

GoogleApps Datastore Cons and Pros

I've been reading more about Google App Engine and have learned Python in the past couple of weeks, including working with MongoDB. What I need most is a scalable database solution. Before discovering Google App Engine, the only three DB solutions I found useful were DynamoDB, MongoDB and BigCouch.
I found that I really like the Python language, and as someone coming from ASP.NET development, I've decided to switch and develop my app using Python. My first choice was to develop my application using Python + bottle + MongoDB. The problem is that DynamoDB is very expensive, and the lack of easy-to-use backup/restore options made me pass on Amazon's offering.
Google App Engine's Datastore is much more affordable. However, I still can't find information on Google's website regarding some specific questions.
Here are some of the questions I need answers to:
Does Google Datastore support backup/restore within the administration console?
If I want to backup/restore 50 TB of data, how much time does it take to back up/restore the data? Where is it stored? What are the costs?
How much time does it take to back up 1 TB of data, for example?
Does Datastore support caching in the database layer?
Any cons that I should be aware of?
Those are some of the questions I need answers to. MongoDB is an excellent product and developing a web app using Mongo + Python + bottle is fun, fun, fun. However, I prefer a fully hosted DB solution like the one offered by Google. But before I go that route, I need to be sure that I'm not missing anything.
Here are some of the questions I need answers to:
Does Google Datastore support backup/restore within the administration console?
Yes. You can back up and restore data from within the Administration Console by enabling datastore_admin for an application (thanks to Idan Shechter for pointing this out!). More info can be found here: https://developers.google.com/appengine/docs/adminconsole/datastoreadmin
You can also download the data through the command line. See: https://developers.google.com/appengine/docs/python/tools/uploadingdata
If I want to backup/restore 50 TB of data, how much time does it take to back up/restore the data?
It depends on where you back the data up to. Backing up to the Blobstore or Google Cloud Storage will probably take much less time than backing up to your local machine. Transferring 50 TB to your local machine will take a long time and depends on many factors, including network speed.
Where is it stored?
If you use the Datastore Administration, you can backup to the Blobstore or to Google Cloud Storage. If you use the command line tools, it will be stored where you choose to download the data to.
What are the costs?
The Blobstore costs $0.13/GB/Month and gives you 5GB free. Google Cloud Storage is $0.12 per GB/Month up to the first TB. You can see more pricing info for Cloud Storage here:
https://developers.google.com/storage/docs/pricingandterms
Bandwidth costs are $0.12 per GB (The first GB is free). More details on pricing can be seen here:
https://cloud.google.com/pricing/
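As a rough back-of-the-envelope figure at those rates: the 50 TB mentioned in the question is about 50,000 GB, so storing it in Cloud Storage would be on the order of 50,000 × $0.12 ≈ $6,000 per month, and pulling the full dataset out once would add a similar amount in bandwidth charges.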
How much time does it take to back up 1 TB of data, for example?
Again, it depends on where you back up to and your transfer speeds.
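For a rough sense of scale (assuming, say, a sustained 100 Mbit/s connection): 1 TB is about 8 × 10^12 bits, which works out to roughly 80,000 seconds, i.e. around 22 hours; at the same rate, 50 TB would take on the order of six to seven weeks. A faster link, or a server-side backup into the Blobstore or Cloud Storage, shortens this dramatically.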
Does Datastore support caching in the database layer? Any cons that I should be aware of?
No, it does not support database layer caching.

To CouchDB or not to?

Note: I have investigated CouchDB for some time and need some actual experiences.
I have an Oracle database for a fleet tracking service, and some of its stats are:
100 GB DB
A huge number of insertions per second (our received messages)
Reliable replication (via Oracle Streams on 4 servers)
Heavy complex queries.
Now the question: Can CouchDB be used in this case?
Note: Why I thought of CouchDB?
I have read about its ability to scale horizontally very well. That's very important in our case.
Since it's schema-free, we can handle changes more easily; we have a lot of changes across different tables and stored procedures.
Thanks
Edit I:
I need transactions too, but I can tolerate other solutions as well. And if there is a little delay in replication, that is no problem as long as it is guaranteed.
You are enjoying the following features with your database:
Using it in production
The data is naturally relational (related to itself)
Huge insertion rate (no MVCC concerns)
Complex queries
Transactions
These are all reasons not to switch to CouchDB.
Of course, the story is not so simple. I think you have discovered what many people never learn: complex problems require complex solutions. We cannot simply replace our database and take the rest of the month off. Sure, CouchDB (and BigCouch) supports excellent horizontal scaling (and cross-datacenter replication too!) but the cost will be rewriting a production application. That is not right.
So, where can CouchDB benefit you?
I suggest that you begin augmenting your application with CouchDB applications. Deploy CouchDB, import your data into it, and build non mission-critical applications. See where it fits best.
For your project, these are the key CouchDB strengths:
It is a small, simple tool—easy for you to set up on a workstation or server
It is a web server. It integrates very well with your infrastructure and security policies.
For example, if you have a flexible policy, just set it up on your LAN
If you have a strict network and firewall policy, you can set it up behind a VPN, or with your SSL certificates
With that step done, it is very easy to access. Just make HTTP or HTTPS requests. Whether you are importing data from Oracle with a custom tool, or using your web browser, it's all the same.
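For example, importing a record is nothing more than an HTTP request with a JSON body; a rough Python sketch (the database name, document id and fields are made up):

    import json
    import urllib.request

    doc = {'truck_id': 'T-42', 'lat': 35.70, 'lon': 51.40, 'ts': 1335866400}
    req = urllib.request.Request(
        'http://localhost:5984/fleet/T-42-1335866400',   # /<db>/<doc_id>
        data=json.dumps(doc).encode('utf-8'),
        headers={'Content-Type': 'application/json'},
        method='PUT')
    print(json.load(urllib.request.urlopen(req)))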
Yes! CouchDB is an app server too! It has a built-in administrative app to explore data, change the config, etc. (like a built-in phpMyAdmin). But for you, the value will be building admin applications and reports as simple, traditional HTML/JavaScript/CSS applications. You can get as fancy or as simple as you like.
As your project grows and becomes valuable, you are in a great position to grow, using replication
Either expand the core with larger CouchDB clusters
Or, replicate your data and applications into different data centers, or onto individual workstations, or mobile phones, etc. (The strategy will be more obvious when the time comes.)
CouchDB gives you a simple web server and web site. It gives you a built-in web services API to your data. It makes it easy to build web apps. Therefore, CouchDB seems ideal for extending your core application, not replacing it.
I don't agree with this answer.
I think CouchDB suits the fleet-tracking use case especially well, due to its distributed nature. Moreover, the unreliable nature of the GPRS connections used for transmitting position data makes the offline-first paradigm of CouchApps a perfect partner for your application.
For uploading data from a truck, the insertion rate can benefit hugely from CouchDB replication and bulk inserts, especially on SSD-based CouchDB hosting.
For downloading data to a truck, CouchDB provides filtered replication, allowing each truck to download only the data it really needs instead of the whole database.
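A rough sketch of how that filtered replication can be triggered over CouchDB's HTTP API from Python (stdlib only; the database names, design document and truck_id parameter are illustrative, and the filter function itself would live in a design document on the source database):

    import json
    import urllib.request

    COUCH = 'http://localhost:5984'   # assumption: local CouchDB

    def replicate_to_truck(truck_id):
        body = {
            'source': COUCH + '/fleet',
            'target': COUCH + '/truck_%s' % truck_id,
            'create_target': True,
            'filter': 'trucks/by_truck',            # <design doc>/<filter name>
            'query_params': {'truck_id': truck_id},
        }
        req = urllib.request.Request(
            COUCH + '/_replicate',
            data=json.dumps(body).encode('utf-8'),
            headers={'Content-Type': 'application/json'})
        return json.load(urllib.request.urlopen(req))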
Regarding complex queries, NoSQL databases are more flexible and can perform much faster than relational databases; it's only a matter of structuring and querying your data reasonably.
