I'm using GAE 1.8.1 and I've hit an issue with Objectify and the new scattered ID system. At least I think it's Objectify; I'm not sure (I'm using Objectify 4 RC1). I'm getting this:
Caused by: java.lang.IllegalArgumentException: id cannot be zero
at com.google.appengine.api.datastore.KeyFactory.createKey(KeyFactory.java:52)
at com.google.appengine.api.datastore.KeyFactory.createKey(KeyFactory.java:47)
at com.googlecode.objectify.Key.<init>(Key.java:91)
at com.googlecode.objectify.Key.create(Key.java:39)
at com.googlecode.objectify.impl.cmd.LoadTypeImpl.id(LoadTypeImpl.java:77)
The records get put in the datastore correctly, as they have an ID, but this happens when fetching the data back out.
So I have to roll back to what I had before for the time being. It states here that you can specify legacy ID generation by modifying the auto_id_policy in appengine-web.xml. I've tried adding this to the file:
<auto-id-policy>legacy</auto-id-policy>
But it doesn't work, or at least it might work if the XSD it validates against supported this tag. As it stands, you can't deploy the app because of this.
We are aware of this issue and are working on a fix.
The problem appears to be that you are passing 0 to load().id(). I don't think this has anything to do with scattered IDs or Objectify.
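For what it's worth, here is a minimal sketch of the kind of guard that avoids this, assuming a hypothetical Car entity and a hypothetical parseIdFromRequest() helper; the exact accessor on the load result (get() vs. now()) changed between Objectify 4 release candidates, so check it against your version:

import static com.googlecode.objectify.ObjectifyService.ofy;

// 'incomingId' stands in for whatever value currently ends up in id(),
// e.g. a request parameter that may not have been populated.
long incomingId = parseIdFromRequest();
if (incomingId == 0L) {
    // A datastore key can never have a numeric id of 0, so fail fast here
    // instead of letting KeyFactory.createKey() throw deep inside Objectify.
    throw new IllegalArgumentException("no entity id supplied");
}
Car car = ofy().load().type(Car.class).id(incomingId).get();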
I've been trying my very best not to ask a nosy question here on Stack Overflow, but it has been almost a week since I got stuck on this problem and I couldn't find any solution.
I already have a working website built with CakePHP 3.2. What the website basically does is scrape Twitter for tweets containing a given search term, check whether each tweet is already in my database, and store it if it doesn't yet exist. Twitter's JSON response has a "tweet_id" property, and I've been using that value to decide whether to ignore a specific tweet or append it to my DB. While this might be okay while my database is small, I suspect it's going to slow things down considerably as my tables grow bigger. Hence my need for ElasticSearch.
My ElasticSearch server is running on my Arch Linux install, and I've configured my app to point to that server. I also have my "Type" object named the same way as my "Tweets" table (I followed the documentation up to the overview part: http://book.cakephp.org/3.0/en/elasticsearch.html). This craps out an "Unknown method 'alias'" error, and Google searches led me to creating an alternate pagination class, since that was what some people found to be the cause of the error (https://github.com/lorenzo/audit-stash/issues/4), but that still doesn't fix things.
I'm not sure if I got this right. I installed the ElasticSearch plugin on the assumption that all I have to do is give the Types the same names as my tables, since to me the documentation implies this should be done on top of the Blog Tutorial to "improve query performance".
TL;DR: how is this supposed to work? Is my assumption above right? Do I name the Types differently and index everything myself? I'm not sure if there's just too much automagic or I'm just bad at this sort of thing. And yes, I'm new to frameworks (but not to PHP, among other languages).
Thanks in advance!
I have a weird issue with the BigQuery UI (going to https://bigquery.cloud.google.com/queries/my-project-name). I don't know why, but I see no datasets for my projects, even though I'm fully aware they exist. My code can still hit these datasets and their tables; there is just no way for me to see them.
In the UI itself I can still query them if I type the whole query by hand, but being able to see the structure of my schema would be helpful.
When I check the network tab in Chrome's developer tools, I notice that I receive "Failed to load resource: net::ERR_CACHE_MISS". I then decided to do everything I could to reset my own cache: I cleared my cookies, went incognito, and tried other browsers and even other computers. NOTHING brings back my datasets.
Has anyone encountered this, and does anyone have ideas on how to force my cache to hit?
I had the same problem a while back. When I got the error I struggled with it, and I eventually found a way to reset this. It seems to be something cached server-side that causes the incorrect cache hit. The way to reset the server-side cache is to hit a URL with a project that doesn't exist, so something like https://bigquery.cloud.google.com/queries/bogus-nonexistant-project should reset it all.
Did you recently assign a new string ID to your project that previously only had a numeric ID? If so, this is a known issue that has been reported recently and that I'm still working to resolve.
The issue is that the frontend cache gets stuck with the old numeric ID for the project, and our frontend JS has a bug where it errors out instead of updating the cache to contain the new string ID. LiY's workaround of going to a bogus, uncacheable URL is the suggested way to unstick the cache until this bug is resolved.
(And if you didn't recently assign a new string ID to your project, then I'd love to hear more details about what might have caused this issue so it won't happen to anyone else!)
I'm trying to use some existing code on Google App Engine, and it appears there's a problem with my use of the CachedRowSetImpl class: I don't see it in the JRE Class White List. Does anyone know where to get a CachedRowSet implementation when running on GAE? In Java 7 there is a new class, RowSetProvider, which can be used to obtain a RowSetFactory, from which a cached row set object can be obtained; but RowSetProvider isn't on the white list either! Many thanks for any help!
Update:
I'm not sure of the correct answer to this issue, but in the end I just removed the references to CachedRowSetImpl and went with a plain ResultSet. I wasn't using any functionality specific to the cached row set, so this really wasn't a problem at this time.
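For anyone making the same switch, here is a minimal sketch of the plain-ResultSet approach. The jdbcUrl variable, the customers table, and its name column are hypothetical; substitute whatever your JDBC setup already uses. The point is simply to copy the rows you need while the connection is open, rather than relying on CachedRowSetImpl to hold them afterwards:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

List<String> names = new ArrayList<String>();
Connection conn = DriverManager.getConnection(jdbcUrl);
try {
    PreparedStatement ps = conn.prepareStatement(
            "SELECT name FROM customers WHERE active = ?");
    ps.setBoolean(1, true);
    ResultSet rs = ps.executeQuery();
    // Read everything needed while the connection is still open, instead of
    // detaching the rows the way a CachedRowSet would.
    while (rs.next()) {
        names.add(rs.getString("name"));
    }
    rs.close();
    ps.close();
} finally {
    conn.close();
}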
I'm having a problem getting started with Google BigQuery. I'm certain I have done everything correctly to create and configure the account, but when I go to the web interface, it seems unable to find my project. I cannot create or upload any new data, and I can't even query the sample data set. All the interface returns is:
Not Found: Project [my-project-id]
However, in the same window, the project name and ID are listed in the panel on the left, so it looks like BigQuery is aware of my project in some sense. Screenshot below:
I am at a loss as to how to rectify this. Does anyone have any ideas about what I might be missing in configuration and/or setup?
Best regards,
Dan
Did you recently set the ID on your project (e.g. xs-analytical-park-g)? If so, there may be a dataset that uses the old name (which was the numeric ID of the project), which confuses the UI. We periodically search for changed project names and apply updates, but sometimes this can take a while.
I've just checked, and it looks like our data should be up to date with respect to the project IDs, so please let me know if this problem persists.
Over the last few weeks we have repeatedly failed to complete a backup of the datastore using the Datastore Admin tool. We thought the issues had to do with quota errors we were running into, so we switched our application from a free to a paid app, but we still have problems.
Each time we attempt to back up to the Blobstore, the process never finishes. We see the backup in our Pending Backups list, but it never actually completes. We only have a total of 43 MB of data right now, so we don't see it as a data transfer problem. Looking at our default task queue, we have two pending tasks: one is a call to /_ah/mapreduce/controller_callback and the other is a call to /_ah/mapreduce/worker_callback.
The worker_callback racks up its retry count, and the only error clue we have is that the Previous Run tab shows the last HTTP response code to be 500. There is no error message and nothing shows up in our error logs; it just keeps trying over and over again.
We've been able to narrow the backup problems down to a specific entity kind in a particular namespace, but we can't figure out why that entity kind is failing whereas the others are not. The major difference is that this entity kind has a large number of embedded entities, but if App Engine is able to read/put those entities, we can't understand why it seems to have problems backing them up. The particular namespace in which the error occurs has the most data stored for that entity kind compared to the other namespaces we have set up.
We think that if we could see what error is occurring in the worker_callback, we might be able to figure out why the backup is failing, or what is wrong with our data that's preventing the backup. Is there something we need to set up or enable through settings or configuration files to give us more detailed information on the backup? Or is there some other avenue we should explore to investigate and fix this problem?
I should mention that we are using the Java SDK as well as Objectify V3 to work with the datastore. We are also backing up the data to the Blobstore.
Thank you.
Well, with the App Engine team's help, we figured out what the problem was and worked around the issue. I want to give the details in case anyone else runs into this problem.
In issue 8363 the App Engine team indicated that, from their logs, they could see that the MapReduce failed because of the large number of properties our entity kind had. The specific entity kind causing the failure had a large number of variable properties that generated errors when the MapReduce tried to write out a schema. They indicated that the solution on their end was to ignore entities like this in the backup so that the backup could complete successfully.
What we did to work around the issue and make the backup work was to change how we told Objectify to store our data. The large number of properties was being created by our use of the @Embedded annotation on a HashMap member field. Since @Embedded breaks classes down into their individual components, it was generating a large number of properties. We switched the member field to @Serialized and then ran a conversion process to migrate the data to the new serialized property. This made backup/restore work again.
You can read more about the differences between embedded and serialized on Objectify's website.
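For illustration, here is a rough sketch of the kind of change described above, using a hypothetical Report entity and Metric value class; the imports reflect how Objectify 3 typically wires these annotations, so treat them as an approximation and check them against your version:

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;
import javax.persistence.Id;
import com.googlecode.objectify.annotation.Serialized;

public class Report {
    @Id Long id;

    // Before: the map was annotated @Embedded, so every key became its own
    // datastore property, which is what tripped up the backup's schema step.
    // @Embedded Map<String, Metric> metrics = new HashMap<String, Metric>();

    // After: the whole map is stored as a single serialized blob property.
    // The value type must implement Serializable.
    @Serialized Map<String, Metric> metrics = new HashMap<String, Metric>();

    public static class Metric implements Serializable {
        private static final long serialVersionUID = 1L;
        double value;
    }
}

Existing entities still have to be re-saved (the conversion process mentioned above) so the data actually moves from the old embedded properties to the new serialized one.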
snielson, would you mind opening an issue on our public issue tracker here? Remember to add your Application ID so we can further debug this specific scenario.
Thanks!