App Engine index building stalled/stuck

I am having a problem with indexes building in my App Engine application. There are only about 200 entities in the indexes that are being built, and the process has now been running for over 24 hours.
My application name is romanceapp.
Is there any way that I can re-start or clear the indexes that are being built?

Try redeploying your application to appspot; I had the same issue and this solved it.
Let me know if this helps.
Greetings, eng. Ilian Iliev

To handle "Error" indexes, first remove them from your index.yaml file and run appcfg.py vacuum_indexes. Then, either reformulate the index definition and corresponding queries, or remove the entities that are causing the index to "explode." Finally, add the index back to index.yaml and run appcfg.py update_indexes.
I found it here and it helped me.
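Those steps map to SDK commands roughly as follows (a sketch; /path/to/app is a placeholder for the directory containing your app.yaml and index.yaml):

```shell
# Step 1: after removing the broken definition from index.yaml,
# delete every index that is no longer listed in the file.
appcfg.py vacuum_indexes /path/to/app

# Step 2: after fixing the definition (or the exploding entities)
# and restoring it to index.yaml, rebuild the indexes.
appcfg.py update_indexes /path/to/app
```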

Related

App engine not syncing index after vacuum

I uploaded my application with some indexes (only 5), but they were taking a long time to build; I waited 2 days and they were still in the "Building" state. After googling I found a few solutions, one of which was to vacuum all the indexes and then redeploy them. I vacuumed the indexes (by emptying my index.yaml file and then running the vacuum command) and re-deployed them using the update indexes command, but now the admin console says "You have not created indexes for this application."
I was wondering whether anyone else has faced this problem; is there anything I can do to fix it?
Thanks in advance.
There used to be an issue where building indexes would become stuck and need a Googler to un-stick them.
Submit a production issue with your app-id (choose "production" from the drop down)
http://code.google.com/p/googleappengine/issues/list

HRD migration broke datastore queries & indexes

Google App Engine HRD migration has been a nightmare for me. I migrated my 55GB datastore to HRD yesterday. Since then, many queries and indexes are broken:
Some examples:
Select * from table1 where col1=val1 => query.get() returns empty in Python; however, it works in the Datastore Viewer.
Select * from table1 where col1=val1 => query.count() > 0; however, query.get() is empty.
Select * from table1 where col1=val1 order by col2 desc => almost half of the rows are missing from the response; same behavior in the Datastore Viewer.
How do I get these tables and indexes repaired? Is there any way of getting Google App Engine team support to address this issue? It's a GAE migration tool bug.
I will appreciate any help.
When the migration tool is used, a new app id is assigned, which makes all the keys change.
To recreate the custom indexes:
1. Temporarily empty index.yaml.
2. Vacuum the indexes (check out How can I remove unused indexes in Google Application Engine? for further information).
3. Wait until all the indexes have been deleted.
4. Restore index.yaml.
5. Create the indexes by either redeploying the application or running appcfg.py update_indexes <path> (check out the documentation for further information).
You may also need to manually update all of the other references (e.g. a ListProperty of keys) if you have any.
Edit
The simple, mono-property indexes that are managed automatically by App Engine are created/updated when a property is put.
To regenerate them, I recommend creating and running a simple MapReduce task to put every existing entity. This procedure should rebuild all the indexes (including those defined in index.yaml).
As this is a costly process, first do it manually with a few entities to see if it solves the problem.
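The re-put pass described above can be sketched as a plain cursor loop, independent of the MapReduce library. This is a minimal illustration only; fetch_page and put_batch are hypothetical stand-ins for a cursor-based Query.fetch and db.put that you would wire up inside a real App Engine handler or task:

```python
def reindex_all(fetch_page, put_batch, page_size=100):
    """Re-put every entity page by page; returns the number rewritten.

    fetch_page(cursor, limit) -> (entities, next_cursor_or_None)
    put_batch(entities)       -> writes the batch back unchanged
    """
    total = 0
    cursor = None
    while True:
        entities, cursor = fetch_page(cursor, page_size)
        if not entities:
            break
        put_batch(entities)  # a plain put rewrites the entity's index rows
        total += len(entities)
        if cursor is None:
            break
    return total

# Toy in-memory stand-in to show the control flow:
DATA = list(range(250))

def fetch_page(cursor, limit):
    start = cursor or 0
    page = DATA[start:start + limit]
    nxt = start + limit if start + limit < len(DATA) else None
    return page, nxt

written = []
print(reindex_all(fetch_page, written.extend))  # 250
```

Running the loop page by page keeps memory bounded and mirrors what a MapReduce job would do shard by shard.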
Tables get repaired automatically in about 2-3 days. That's an HRD problem. My problem is now resolved.
Update: it finally fixed itself in 24 hours =)
I have the same problem as you: query.count() > 0, but query.get() or fetch() is empty.
It's strange that some tables work fine while some tables have this problem.
I think it is a Google App Engine problem from migrating a very large table (model).
I hope my tables will recover like yours in 2-3 days too.

How can I speed up the App Engine bulk downloader?

I'm trying to use the App Engine bulkloader to download entities from the datastore (the high-replication one if it matters). It works, but it's quite slow (85KB/s). Are there some magical set of parameters I can pass it to make it faster? I'm receiving about 5MB/minute or 20,000 records/minute, and given that my connection can do 1MB/second (and hopefully App Engine can serve faster than that) there must be a way to do it faster.
Here's my current command. I've tried high numbers, low numbers, and every permutation:
appcfg.py download_data \
  --application=xxx \
  --url=http://xxx.appspot.com/_ah/remote_api \
  --filename=backup.csv \
  --rps_limit=30000 \
  --bandwidth_limit=100000000 \
  --batch_size=500 \
  --http_limit=32 \
  --num_threads=30 \
  --config_file=bulkloader.yaml \
  --kind=foo
I already tried this
App Engine Bulk Loader Performance
and it's no faster than what I already have. The numbers he mentions are on par with what I'm seeing as well.
Thanks in advance.
Did you set an index on the key of the entity you're trying to download?
I don't know if that helps, but check whether you get a warning at the beginning of the download that says something about "using sequential download".
Put this in your index.yaml to create an index on the entity key, upload it, and wait for the index to be built.
- kind: YOUR_ENTITY_TYPE
  properties:
  - name: __key__
    direction: desc

Is there any way to delete app engine's useless datastore indexes

I made a lot of indexes for testing. Is that going to cause any issues? And how do I delete them?
I already deleted them from my datastore-indexes.xml.
You need to use appcfg.py from the App Engine python SDK (yes, even if you're using Java; there's an open issue to correct this oversight) to remove indexes, with the vacuum_indexes option.

Wiping the datastore?

I'm working on an app engine project (java). I'm using the jdo interface. I haven't pushed the application yet (just running at localhost). Is there a way I can totally wipe my datastore after I publish? In eclipse, when working locally, I can just wipe the datastore by deleting the local file:
appengine-generated/local_db.bin
any facility like that once published?
I'm using jdo right now, but I might switch to objectify or slim3, and would want a convenient way to wipe my datastore should I switch over, or otherwise make heavy modifications to my classes.
Otherwise it seems like I have to set up methods to delete instances myself, right?
Thanks
You can delete entities from the admin console if there aren't many stored in your app. Go to http://appengine.google.com and do it manually; that's easy for fewer than 2000-5000 entities.
This question addressed the same topic. There is no one-command way to drop an entire datastore's worth of data. The only suggestion I have, beyond those given in that previous question, would be to try out the new Mapper functionality, which makes it easy to map over an entire set of entities, deleting them as you go.
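A Mapper-style wipe boils down to deleting keys in batches; datastore batch operations have historically been capped at around 500 entities per call, so the key list has to be chunked. A minimal sketch, where delete_batch is a hypothetical stand-in for db.delete:

```python
def delete_in_chunks(keys, delete_batch, chunk_size=500):
    """Delete keys in fixed-size chunks, since batch datastore
    calls are size-limited."""
    for i in range(0, len(keys), chunk_size):
        delete_batch(keys[i:i + chunk_size])

# Toy stand-in showing the chunking: collect the batches instead of deleting.
batches = []
delete_in_chunks(list(range(1200)), batches.append)
print([len(b) for b in batches])  # [500, 500, 200]
```

In a real handler you would fetch the keys with a keys-only query first, so no entity data is pulled over the wire just to be deleted.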

Resources