app engine data store retrieving the deleted entities - google-app-engine

I am trying to delete all the entities of a kind. After deleting them I query the same kind, but the query still returns some 'n' rows. I know this is the HRD behaviour where the indexes have not been updated yet, but I want to know where the query is getting its data from, given that the actual entities have been deleted.
Consider the below code,
class CustomReportInfo {
String reportName;
// Setter and getters
}
Here I am trying to delete the entities of the CustomReportInfo kind. To delete them I do:
List<CustomReportInfo> customReportInfoList = query(CustomReportInfo.class).list();
delete(customReportInfoList);
So, as you can see, I am deleting all the entities of CustomReportInfo. After that I query the same kind again:
List<CustomReportInfo> list = query(CustomReportInfo.class).list();
I still get 'n' rows after the delete. Where is the query getting its results from, even after the actual data has been deleted?

Eventual consistency in Cloud Datastore means that if you run a global query (like the one in your example), it may return results that have already been deleted.
To ensure that the deleted entities don't appear in query results, you could restructure your entities to use ancestors. There's an overview of some of these issues in this article.
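For illustration, here is a minimal sketch of the ancestor approach using Python NDB (the original code appears to use a Java datastore wrapper, and the parent kind ReportGroup and its key name are assumptions, not something from the question):

from google.appengine.ext import ndb

class CustomReportInfo(ndb.Model):
    report_name = ndb.StringProperty()

# Assumption: all report entities share one parent key, i.e. one entity group.
PARENT_KEY = ndb.Key('ReportGroup', 'default')

def create_report(name):
    CustomReportInfo(parent=PARENT_KEY, report_name=name).put()

def delete_all_reports():
    keys = CustomReportInfo.query(ancestor=PARENT_KEY).fetch(keys_only=True)
    ndb.delete_multi(keys)

def list_reports():
    # An ancestor query is strongly consistent, so entities deleted above
    # will not show up here, unlike a global (kind-only) query.
    return CustomReportInfo.query(ancestor=PARENT_KEY).fetch()

The trade-off is that all writes then go through a single entity group, which limits write throughput.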

Related

Entities missing from Google Datastore query results -- composite index stale or missing?

My application runs on Google App Engine and uses Google Cloud Datastore.
I was alerted by one of my users that some entries they are associated with, and had previously been seeing, are no longer appearing to them.
Indeed, when I query the Datastore for these entities (with a single property filter), they are not returned. I was able to find them by querying on a different property, and after writing them back to the datastore the index is updated and they are returned by the original query.
Perhaps technically my query is not guaranteed to return the entities, as it is only weakly consistent, but none of the entities were changed recently, and inconsistent results are usually resolved quite quickly (it has been several days now).
So it seems like the index entries for this property on these entities were lost or damaged somehow. What to do? Wait and hope the index will be regenerated? I can write this user's entities back to the datastore to regenerate the index, but doing that for all my users is not really an option.
The only similar case I can see on SA is this question: Seems the indexes are missing for new entities created since some time late June 1st, 2015 which resulted from this incident: https://status.cloud.google.com/incident/appengine/15015 but no similar incident occurred recently, according to the status dashboard.
Have you changed which properties are indexed?
If you have, previously existing entities will only be updated in the indexes the next time they are put.
That could cause older entities to not show up in a query.
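If that turns out to be the cause, re-putting the affected entities refreshes their index entries. A rough sketch, assuming an NDB model (MyKind is a placeholder, not a kind from the question):

from google.appengine.ext import ndb

class MyKind(ndb.Model):
    pass  # stands in for the kind whose indexed properties changed

def reindex_all(batch_size=200):
    # Re-put entities in batches so newly indexed properties get index entries.
    query = MyKind.query()
    entities, cursor, more = query.fetch_page(batch_size)
    while entities:
        ndb.put_multi(entities)
        if not more:
            break
        entities, cursor, more = query.fetch_page(batch_size, start_cursor=cursor)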
An App Engine engineer kindly pointed me towards the root of the issue - my query filter property was on a UserProperty type.
From the docs at https://cloud.google.com/appengine/docs/legacy/standard/python/users/userobjects :
A User value with an email address that does not represent a Google
account at the time it is created will never match a User value that
represents a real user.
My user's email was not a Google Account but he likely recently created one, causing the user_id() to go from None to an integer id.
This means that from now, when I do a query like this:
u = User('name#non_google_domain.com')
Entity.all().filter('user_property', u)
Internally, u is now actually looked up by the combination of name#non_google_domain.com and the integer id, instead of the combination of name#non_google_domain.com and None, causing my entities not to be returned.
Indeed, examining e._entity['user_property'] of a returned entity, the new ones are of the form User('name#non_google_domain.com', _user_id='12345678912345678900') and the old ones are User('name#non_google_domain.com').
UserProperty is no longer recommended for use, because of issues like this.
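As an illustration of the usual workaround (not taken from the answer above), you can store a stable identifier yourself instead of relying on UserProperty; the property names here are assumptions:

from google.appengine.api import users
from google.appengine.ext import db

class Entity(db.Model):
    # Plain strings instead of a UserProperty, so a lookup does not change
    # meaning when an email address later becomes a Google Account.
    user_email = db.StringProperty()
    user_id = db.StringProperty()  # may be empty for non-Google accounts

def save_for_current_user(entity):
    u = users.get_current_user()
    entity.user_email = u.email()
    entity.user_id = u.user_id() or ''
    entity.put()

# Query by the stored email string, whose meaning never changes:
q = Entity.all().filter('user_email =', 'someone@example.com')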

Datastore sometimes fails to fetch all required entities, but works the second time

I have a datastore entity called lineItems, which consists of individual line items to be invoiced. The users find the line items and attach a purchase order number to them. These are then displayed on the web page where they can create the invoice.
I would display my code for fetching the entities, but I don't think it matters at all, as this also happened a couple of times when I was using managed VMs a few months ago and the code was completely different. (I was using Objectify before; now I am using the Datastore API.) In a nutshell, I am currently just using a StructuredQuery.setFilter(new PropertyFilter.eq("POnum",ponum)).setFilter(new PropertyFilter.eq("Invoiced", false)); (this is pseudo code; you can't chain two .setFilter calls like this. The real code accepts a list of PropertyFilters and builds a composite filter properly.)
What happened this morning was the admin person created the invoice, and all but two of the lines were on the invoice. There were two lines which the code never fetched, and those lines were stuck in the "invoices to create" section.
The admin person simply created the invoice again for the given purchase order number, but the second time it DID pick up the two remaining lines and created a second invoice.
Note that the entities were created/edited almost 24 hours before (when she assigned the purchase order number to them), so they had been sitting in the database for quite a while (I checked my logs). This is not a case where they were just created and then accessed within a short period of time. It is also NOT a case of failing to update the entities - the code creates the invoice in a third-party accounting package, and they simply were not there. Upon success of the invoice creation, all of the entities are then updated with "invoiced = true" and written to the datastore. So the lines which were not on the invoice in the accounting program are the ones that weren't updated in the datastore. (This is not a "smart" check either; it does not check line by line. It simply checks whether the invoice creation was successful or not, and then updates all of the entities that it has in memory.)
As far as I can tell, the datastore simply did not return all of the entities which matched the query the first time but it did the second time.
There are approximately 40,000 lineItem entities.
What are the conditions which can cause a datastore fetch to randomly fail to grab all of the entities which meet the search parameters of a StructuredQuery? (Note that this also happened twice while using Objectify on the now deprecated Managed VM architecture.) How can I stop this from happening, or check to see if it has happened?
You may be seeing eventual consistency because you are not using an ancestor query.
See: https://cloud.google.com/datastore/docs/articles/balancing-strong-and-eventual-consistency-with-google-cloud-datastore/

Google App Engine / NDB - Strongly Consistent Read of Entity List after Put

Using Google App Engine's NDB datastore, how do I ensure a strongly consistent read of a list of entities after creating a new entity?
The example use case is that I have entities of the Employee kind.
Create a new employee entity
Immediately load a list of employees (including the one that was added)
I understand that the approach below will yield an eventually consistent read of the list of employees which may or may not contain the new employee. This leads to a bad experience in the case of the latter.
e = Employee(...)
e.put()
Employee.query().fetch(...)
Now here are a few options I've thought about:
IMPORTANT QUALIFIERS
I only care about a consistent list read for the user who added the new employee. I don't care if other users get an eventually consistent read.
Let's assume I do not want to put all the employees under an Ancestor to enable a strongly consistent ancestor query. In the case of thousands and thousands of employee entities, the 5 writes / second limitation is not worth it.
Let's also assume that I want the write and the list read to be the result of two separate HTTP requests. I could theoretically put both write and read into a single transaction (?) but then that would be a very non-RESTful API endpoint.
Option 1
Create a new employee entity in the datastore
Additionally, write the new employee object to memcache, local browser cookie, local mobile storage.
Query datastore for list of employees (eventually consistent)
If new employee entity is not in this list, add it to the list (in my application code) from memcache / local memory
Render results to user. If user selects the new employee entity, retrieve the entity using key.get() (strongly consistent).
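A rough sketch of what Option 1 could look like in NDB (the memcache key scheme and session_id are illustrative assumptions):

from google.appengine.api import memcache
from google.appengine.ext import ndb

class Employee(ndb.Model):
    name = ndb.StringProperty()

def create_employee(name, session_id):
    key = Employee(name=name).put()
    # Remember the freshly written key for this user so a follow-up
    # (eventually consistent) list query can be patched up.
    memcache.set('recent_employee:%s' % session_id, key.urlsafe(), time=300)
    return key

def list_employees(session_id):
    employees = Employee.query().fetch(100)          # eventually consistent
    recent = memcache.get('recent_employee:%s' % session_id)
    if recent:
        recent_key = ndb.Key(urlsafe=recent)
        if recent_key not in [e.key for e in employees]:
            fresh = recent_key.get()                 # get by key: strongly consistent
            if fresh:
                employees.append(fresh)
    return employees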
Option 2
Create a new employee entity using a transaction
Query datastore for list of employees in a transaction
I'm not sure Option #2 actually works.
Technically, does the previous write transaction get written to all the servers before the read transaction of that entity occurs? Or is this not correct behavior?
Transactions (including XG) have a limit on the number of entity groups, and a list of employees (each is its own entity group) could exceed this limit.
What are the downsides of read-only transactions vs. normal reads?
Thoughts? Option #1 seems like it would work, but it seems like a lot of work to ensure consistency on a follow-on read.
If you do not use an entity group, you can do a keys-only query followed by a get_multi(keys) lookup for entity consistency. For the new employee you have to add the new key to the key list passed to get_multi.
Docs: A combination of the keys-only, global query with a lookup method will read the latest entity values. But it should be noted that a keys-only global query can not exclude the possibility of an index not yet being consistent at the time of the query, which may result in an entity not being retrieved at all. The result of the query could potentially be generated based on filtering out old index values. In summary, a developer may use a keys-only global query followed by lookup by key only when an application requirement allows the index value not yet being consistent at the time of a query.
More info and magic here: Balancing Strong and Eventual Consistency with Google Cloud Datastore
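A minimal sketch of that keys-only-query-plus-lookup approach in NDB (passing in the just-created employee's key is an assumption about how the two requests share context):

from google.appengine.ext import ndb

class Employee(ndb.Model):
    name = ndb.StringProperty()

def list_employees_consistently(new_employee_key=None):
    # Keys-only global query, then a lookup by key, which always returns
    # the latest entity values.
    keys = Employee.query().fetch(500, keys_only=True)
    if new_employee_key and new_employee_key not in keys:
        keys.append(new_employee_key)  # cover the just-created entity
    entities = ndb.get_multi(keys)
    # get_multi returns None for keys whose entities no longer exist.
    return [e for e in entities if e is not None]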
I had the same problem; option #2 doesn't really work: a read using the key will work, but a query might still miss the new employee.
Option #1 could work, but only within the same request. The saved memcache key can disappear at any time; a subsequent query on the same instance, or one on another instance potentially running on another piece of hardware, would still miss the new employee.
The only "solution" that comes to mind for consistent query results is to not attempt to force the new employee into the results and rather let things flow naturally until it shows up. I'd just add a warning that creating the new user will take "a while". If tolerable, maybe keep polling/querying in the original request until it shows up - that is the only place where the employee creation event is known with certainty.
This question is old as I write this. However, it is a good question and will be relevant long term.
Option #2 from the original question will not work.
If the entity creation and the subsequent query are truly independent, with no context linking them, then you are really just stuck - or you don't care. The trick is that there is almost always some relationship or some use case that must be covered. In other words if the query is truly some kind of, essentially, ad hoc query, then you really don't care. In that case, you just quote CAP theorem and remind the client executing the query how great it is that this system scales. However, almost always, if you are worried about the eventual consistency, there is some use case or set of cases that must be handled. For example, if you have a high score list, the highest score must be at the top of the list. The highest score may have just been achieved by the user who is now looking at the list. Another example might be that when an employee is created, that employee must be on the "new employees" list.
So what you usually do is exploit these known cases to balance the throughput needed with consistency. For example, for the high score example, you may be able to afford to keep a secondary index (an entity) that is the list of the high scores. You always get it by key and you can write to it as frequently as needed because high scores are not generated that often presumably. For the new employee example, you might use an approach that you started to suggest by storing the timestamp of the last employee in memcache. Then when you query, you check to make sure your list includes that employee ... or something along those lines.
The price in balancing write throughput and consistency on App Engine and similar systems is always the same. It requires increased model complexity / code complexity to bridge the business needs.
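As an example, here is a sketch of the 'secondary index entity' idea for the high score case (all names are illustrative):

from google.appengine.ext import ndb

class HighScoreBoard(ndb.Model):
    # A single entity that is always read and written by key, so reads are
    # strongly consistent and all writes stay within one entity group.
    scores = ndb.JsonProperty()  # list of {'player': ..., 'score': ...}

BOARD_KEY = ndb.Key(HighScoreBoard, 'global')

@ndb.transactional
def record_score(player, score, keep=10):
    board = BOARD_KEY.get() or HighScoreBoard(key=BOARD_KEY, scores=[])
    scores = (board.scores or []) + [{'player': player, 'score': score}]
    board.scores = sorted(scores, key=lambda s: s['score'], reverse=True)[:keep]
    board.put()

def top_scores():
    board = BOARD_KEY.get()  # lookup by key: always up to date
    return board.scores if board else []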

What to do when a cache and a db index get very different?

I use memcache and datastore indexes with the Google Search API in GAE. A practical problem is how to refresh a datastore index after an entity has been deleted, since it appears that the entity is still in the index even though it has been deleted. And how should I handle the more hypothetical scenario where memcache and the index start to contain very different contents for the "same" data set, i.e. a list of entities that can be displayed from memcache, from the datastore index, or from a datastore roundtrip?
For the first problem, I would recommend using the entity's key as the doc_id for the index; since you then have a reference to the document, you can delete it in a pre_delete_hook. This way you can also keep the data up to date (e.g. with a post_put_hook that creates the corresponding search document), which works because adding a new document with an existing doc_id to the index overwrites the existing one.
For the second, it's probably better to make sure you won't run into that kind of situation in the first place, by keeping them updated as you write, than to try to remedy it afterwards.
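For the first problem above, a rough sketch of the hook approach (assuming NDB and the Search API; the index name and fields are illustrative):

from google.appengine.api import search
from google.appengine.ext import ndb

INDEX = search.Index(name='items')

class Item(ndb.Model):
    title = ndb.StringProperty()

    def _post_put_hook(self, future):
        # Use the entity key as doc_id so re-puts overwrite the same document.
        INDEX.put(search.Document(
            doc_id=self.key.urlsafe(),
            fields=[search.TextField(name='title', value=self.title or '')]))

    @classmethod
    def _pre_delete_hook(cls, key):
        # Remove the search document when the entity is deleted.
        INDEX.delete(key.urlsafe())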

Google App Engine: efficient large deletes (about 90000/day)

I have an application that has only one Model with two StringProperties.
The initial number of entities is around 100 million (I will upload those with the bulk loader).
Every 24 hours I must remove about 70000 entities and add 100000 entities. My question is now: what is the best way of deleting those entities?
Is there anyway to avoid fetching the entity before deleting it? I was unable to find a way of doing something like:
DELETE from xxx WHERE foo1 IN ('bar1', 'bar2', 'bar3', ...)
I realize that App Engine offers an IN clause (albeit with a maximum length of 30, because of the maximum number of individual requests per GQL query 1), but to me that still seems strange because I would have to fetch the x entities and then delete them again (making two RPC calls per entity).
Note: the entity should be ignored if not found.
EDIT: Added info about problem
These entities are simply domains. The first string is the SLD and the second the TLD (no subdomains). The application can be used to perform a request like http://[...]/available/stackoverflow.com . The application will return a True/False JSON object.
Why do I have so many entities? Because the datastore contains all registered domains (.com for now). I cannot perform a whois request in every case because of TOSs and latency. So I initially populate the datastore with an entire zone file and then daily add/remove the domains that have been registered/dropped. The problem is that these are pretty big quantities, and I have to figure out a way to keep costs down while adding/removing 2 x ~100,000 domains per day.
Note: there is hardly any computation going on as an availability request simply checks whether the domain exists in the datastore!
1: ' A maximum of 30 datastore queries are allowed for any single GQL query.' (http://code.google.com/appengine/docs/python/datastore/gqlreference.html)
If you are not doing so already, you should be using key_names for this.
You'll want a model something like:
class UnavailableDomain(db.Model):
    pass
Then you will populate your datastore like:
UnavailableDomain.get_or_insert(key_name='stackoverflow.com')
UnavailableDomain.get_or_insert(key_name='google.com')
Then you will query for available domains with something like:
is_available = UnavailableDomain.get_by_key_name('stackoverflow.com') is None
Then when you need to remove a bunch of domains because they have become available, you can build a big list of keys without having to query the database first like:
free_domains = ['stackoverflow.com', 'monkey.com']
db.delete(db.Key.from_path('UnavailableDomain', name) for name in free_domains)
I would still recommend batching up the deletes into something like 200 per RPC, if your free_domains list is really big
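A sketch of that batching, building the keys without fetching and deleting roughly 200 per call:

from google.appengine.ext import db

def delete_domains(free_domains, batch_size=200):
    keys = [db.Key.from_path('UnavailableDomain', name) for name in free_domains]
    for i in range(0, len(keys), batch_size):
        # db.delete ignores keys whose entities no longer exist, which matches
        # the requirement that not-found domains should simply be skipped.
        db.delete(keys[i:i + batch_size])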
Have you considered the appengine-mapreduce library? It comes with the pipeline library, and you could use both to:
Create a pipeline for the overall task that you will run via cron every 24 hrs
The 'overall' pipeline would start a mapper that filters your entities and yields the delete operations
After the delete mapper completes, the 'overall' pipeline could call an 'import' pipeline to start running your entity creation part
The pipeline API can then send you an email to report on its status
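For reference, the delete-mapper part of such a pipeline typically looks roughly like this (the mapreduce.yaml / pipeline wiring is omitted, and the drop-list set is a hypothetical placeholder):

from mapreduce import operation as op

# Hypothetical: the set of domains dropped today, loaded from the daily zone diff.
DROPPED_DOMAINS = set()

def delete_dropped_domain(entity):
    # Mapper callback run over each UnavailableDomain entity by the
    # DatastoreInputReader; yielding a delete operation lets the framework
    # batch the datastore mutations instead of calling db.delete directly.
    if entity.key().name() in DROPPED_DOMAINS:
        yield op.db.Delete(entity)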
