I am quite new to App Engine and Objectify. I need to fetch a single row from the datastore to read a value from it. I tried to fetch the element with ofy().load().type(Branch.class).filter("parent_branch_id", 0).first(), but the result is FirstRef(null). However, when I run the following loop, the data prints as expected:
for (Branch b : ofy().load().type(Branch.class).list()) {
    System.out.println(b.id + ". " + b.tree_label + " - parent is " + b.parent_branch_id);
}
What am I doing wrong?
[edit]
Of course Branch is a datastore entity; if it matters, parent_branch_id is of type long.
If you want a Branch as the result of your request, I think you are missing a .now():
Branch branch = ofy().load().type(Branch.class).filter("parent_branch_id", 0).first().now();
It sounds like you don't have an @Index annotation on your parent_branch_id property. When you do ofy().load().type(Branch.class).list(), Objectify is effectively doing a batch get by kind (like doing Query("Branch") with the low-level API) so it doesn't need the property indexes. As soon as you add a filter(), it uses a query.
Assuming you are using Objectify 4, properties are not indexed by default. You can index all the properties in your entity by adding an @Index annotation to the class. The annotation reference provides useful info.
Example from the Objectify API reference:
LoadResult<Thing> th = ofy.load().type(Thing.class).filter("foo", foo).first();
Thing th = ofy.load().type(Thing.class).filter("foo", foo).first().now();
So you need to make sure the member "foo" has an @Index annotation, and use now() to fetch the first element. This will return null if no element is found.
Maybe "parent_branch_id" in your case is a long, in which case the filter value must be 0L and not 0.
Is there a way to remove a number from an attribute array in an update? For example, if I want to update all of an alchy's booze stashes if he runs out of a particular type of booze:
Alchy has_many :stashes
Stash.available_booze_types = [] (filled with booze.ids)
Booze is also a class
@booze.id = 7
if @booze.is_all_gone
  @alchy.stashes.update(available_booze_types: "remove @booze.id")
end
update: @booze.id may or may not be present in the available_booze_types array
... so if @booze.id was in any of the Alchy stash instances (in the available_booze_types attribute array), it would be removed.
I think you can do what you want in the following way:
if @booze.is_all_gone
  @alchy.stashes.each do |stash|
    stash.available_booze_types.delete(@booze.id)
  end
end
However, it looks to me like there are better ways to do what you are trying to do. Rails gives you something like that array by using relations. Also, the data in the array will be lost if you reset the app (if, as I understand it, available_booze_types is an attribute that is not stored in a database). If your application is correctly set up (a stash has many boozes), a scope like the following in the Stash class seems to me like the correct approach:
scope :available_boozes, -> { joins(:boozes).where("number > ?", 0) }
You can use it in the following way:
@alchy.stashes.available_boozes
which would only return the ones that are available.
I'm struggling with a KeyProperty query, and can't see what's wrong.
My model is
class MyList(ndb.Model):
    user = ndb.KeyProperty(indexed=True)
    status = ndb.BooleanProperty(default=True)
    items = ndb.StructuredProperty(MyRef, repeated=True, indexed=False)
I create an instance of MyList with the appropriate data and can run the following properly
cls = MyList
lists = cls.query().fetch()
Returns
[MyList(key=Key('MyList', 12), status=True, items=..., user=Key('User', 11))]
But it fails when I try to filter by user, i.e. finding lists where the user equals a particular key, even when using the one I've just used for the insert, or the one from the previous query result.
key = lists[0].user
lists = cls.query(cls.user == key).fetch()
Returns
[]
But it works fine with status == True as the filter, and I can't see what's missing.
I should add that this happens in a unit-testing environment with the following v3 stub:
self.policy = datastore_stub_util.PseudoRandomHRConsistencyPolicy(probability=0)
self.testbed.init_datastore_v3_stub(
    require_indexes=True,
    root_path="%s/../" % (os.path.dirname(__file__)),
    consistency_policy=self.policy
)
user=Key('User', 11) is a key of a different kind: User, not MyList.
Perhaps you meant:
user = ndb.KeyProperty(kind='User', indexed=True)
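For illustration, a minimal sketch of the model with the kind constrained and the same filter applied (items omitted for brevity; names taken from the question):

from google.appengine.ext import ndb

class MyList(ndb.Model):
    # Restrict the KeyProperty to keys of kind 'User'.
    user = ndb.KeyProperty(kind='User', indexed=True)
    status = ndb.BooleanProperty(default=True)

# Filter by a User key, as in the question.
key = ndb.Key('User', 11)
lists = MyList.query(MyList.user == key).fetch()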
Your code looks fine, but I have noticed some data integrity issues when developing locally with NDB. I copied your model and code, and I also got the empty list at first, but then after a few more attempts, the data is there.
Try it a few times?
edit: possibly related?
google app engine ndb: put() and then query(), there is always one less item
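If the empty result only shows up under the testbed, another thing to check is the probability=0 consistency policy in the question's stub: it simulates eventual consistency, so a global (non-ancestor) query will not see a write that was just put. A sketch of a strongly consistent test setup, for comparison:

from google.appengine.datastore import datastore_stub_util
from google.appengine.ext import testbed

tb = testbed.Testbed()
tb.activate()
# probability=1 makes every write immediately visible to global queries.
policy = datastore_stub_util.PseudoRandomHRConsistencyPolicy(probability=1)
tb.init_datastore_v3_stub(consistency_policy=policy)
tb.init_memcache_stub()  # ndb also needs the memcache stub

Alternatively, an ancestor query is strongly consistent even with probability=0.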
I want to scan all records to check whether there are any errors in the data.
How can I disable BadValueError so it does not break the scan when a required field is missing?
Consider that I cannot change the StringProperty to not required, and there can be dozens of such properties in real code, so that workaround is not useful.
class A(db.Model):
    x = db.StringProperty(required=True)

for instance in A.all():
    # check something
    if something(instance):
        instance.delete()
Can I use some function to read the datastore.Entity directly, to avoid such problems with validation I don't need?
The solution I found for this problem was to use a resilient query; it ignores any exception thrown while iterating over a query. You can try this:
def resilient_query(query):
    query_iter = iter(query)
    while True:
        try:
            next_result = query_iter.next()
            # check something
            yield next_result
        except StopIteration:
            return
        except Exception:
            next_result.delete()
query = resilient_query(A.all())
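Note that resilient_query returns a generator, so nothing runs until it is iterated; a sketch of driving the scan:

for entity in query:
    # Only entities that could be read without raising arrive here.
    print entity.key()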
If you use ndb, you can load all your models as an ndb.Expando, then modify the values. This doesn't appear to be possible in db because you cannot specify a kind for a Query in db that differs from your model class.
Even though your model is defined in db, you can still use ndb to fix your entities:
# Set up a new ndb connection with ndb.Expando as the default model.
conn = ndb.make_connection(default_model=ndb.Expando)
# Use this connection in our context.
ndb.set_context(ndb.make_context(conn=conn))

# Query for all A kinds.
for a in ndb.Query(kind='A'):
    if a.x is None:
        a.x = 'A more appropriate value.'
        # Re-put the broken entity.
        a.put()
Also note that this (and other solutions listed) will be subject to whatever time limits you are restricted to (i.e. 60 seconds on an App Engine frontend). If you are dealing with large amounts of data you will most likely want to write a custom map reduce job to do this.
Try setting a default property option to some distinct value that does not exist otherwise.
class A(db.Model):
    x = db.StringProperty(required=True, default=<distinct value>)
Then load properties and check for this value.
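For example, the scan could then look for that sentinel and treat matching entities as broken (the sentinel value below is only illustrative):

from google.appengine.ext import db

# Illustrative sentinel; use any value that cannot occur in real data.
MISSING = '__missing__'

class A(db.Model):
    x = db.StringProperty(required=True, default=MISSING)

for instance in A.all():
    if instance.x == MISSING:
        # The entity was stored without a real value for x.
        instance.delete()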
You can override the _check_initialized(self) method of ndb.Model in your own Model subclass and replace the default logic with your own logic (or skip it altogether as needed).
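A minimal sketch of that idea, assuming the model can live in ndb (skip the check entirely, or substitute your own logic):

from google.appengine.ext import ndb

class LenientModel(ndb.Model):
    def _check_initialized(self):
        # The default implementation raises BadValueError when required
        # properties are uninitialized; do nothing so the scan can proceed.
        pass

class A(LenientModel):
    x = ndb.StringProperty(required=True)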
I'm wondering what the right pattern should be to update an existing datastore object using endpoints-proto-datastore.
For example, given a model like the one from your GDL videos:
class Task(EndpointsModel):
    detail = ndb.StringProperty(required=True)
    owner = ndb.StringProperty()
imagine we'd like to update the 'detail' of a Task.
I considered something like:
@Task.method(name='task.update',
             path='task/{id}',
             request_fields=('id', 'detail'))
def updateTask(self, task):
    pass
However, 'task' would presumably contain the previously-stored version of the object, and I'm not clear on how to access the 'new' detail variable with which to update the object and re-store it.
Put another way, I'd like to write something like this:
def updateTask(self, task_in_datastore, task_from_request):
    task_in_datastore.detail = task_from_request.detail
    task_in_datastore.put()
Is there a pattern for in-place updates of objects with endpoints-proto-datastore?
Thanks!
See the documentation for details on this:
The property id is one of five helper properties provided by default
to help you perform common operations like this (retrieving by ID). In
addition there is an entityKey property which provides a base64
encoded version of a datastore key and can be used in a similar
fashion as id...
This means that if you use the default id property your current object will be retrieved and then any updates from the request will replace those on the current object. Hence doing the most trivial:
@Task.method(name='task.update',
             path='task/{id}',
             request_fields=('id', 'detail'))
def updateTask(self, task):
    task.put()
    return task
will perform exactly what you intended.
Task is your model; you can easily update like this:
@Task.method(name='task.update',
             path='task/{id}',
             request_fields=('id', 'detail'))
def updateTask(self, task):
    existing = Task.get_by_id(task.id)
    existing.detail = task.detail
    existing.put()
    return existing
I'm reading the documentation on the Full Text Search API (Java) in Google App Engine at https://developers.google.com/appengine/docs/java/search/overview. They have an example of getting the index:
public Index getIndex() {
    IndexSpec indexSpec = IndexSpec.newBuilder()
        .setName("myindex")
        .setConsistency(Consistency.PER_DOCUMENT)
        .build();
    return SearchServiceFactory.getSearchService().getIndex(indexSpec);
}
How about creating an index? How do I create one?
Thanks
You just did. You just created one.
public class IndexSpec
Represents information about an index. This class is used to fully specify the index you want to retrieve from the SearchService. To build an instance use the newBuilder() method and set all required parameters, plus optional values different than the defaults.
https://developers.google.com/appengine/docs/java/javadoc/com/google/appengine/api/search/IndexSpec
You can confirm this by looking at the SearchService
SearchService is also responsible for creating new indexes. For example:
SearchService searchService = SearchServiceFactory.getSearchService();
index = searchService.getIndex(IndexSpec.newBuilder().setName("myindex"));
https://developers.google.com/appengine/docs/java/javadoc/com/google/appengine/api/search/SearchService
Anyway, it seems your code will create a new index if it doesn't exist. That's what the docs suggest:
// Get the index. If not yet created, create it.
Index index = searchService.getIndex(
    IndexSpec.newBuilder()
        .setIndexName("indexName")
        .setConsistency(Consistency.PER_DOCUMENT));
https://developers.google.com/appengine/docs/java/javadoc/com/google/appengine/api/search/Index
Now, what happens if you run the code again and change the Consistency? Do you have the same index with a different consistency? Is the index overwritten? I don't know. I would use the SearchService to look up existing indexes instead of using code that might create them, just to avoid getting an index in my code while changing its specs inadvertently.
An Index is implicitly created when a document is written. Consistency is an attribute of the index, i.e. you can't have two indexes of the same name with different consistencies.