I have uploaded my application with some indexes (only 5), but they were taking a long time to build; I waited two days and they were still in the "Building" state. After googling I found a few solutions, one of which was to vacuum all the indexes and then redeploy them. I vacuumed the indexes (by emptying my index.yaml file and then running the vacuum command) and re-deployed them using the update indexes command, but now the admin console says "You have not created indexes for this application."
I was wondering, has anyone else faced this problem? Is there anything I can do to fix it?
Thanks in advance.
There used to be an issue where building indexes would become stuck and need a Googler to un-stick them.
Submit a production issue with your app-id (choose "production" from the drop-down):
http://code.google.com/p/googleappengine/issues/list
Related
My team is working on a search application for our websites. We are using Collective Solr in Plone to index our intranet and documentation sites. We recently set up shared blob storage on our test instance of the intranet site because Solr was not indexing our PDF files. This appears to be working; however, each time I run the reindexing script (##solr-maintenance/reindex), it stops after about an hour and a half. I know that it is not indexing our entire site, as there are numerous pages, files, etc. missing when I run a query in the Solr dashboard.
The warning below is the last thing I see in the Solr log before the script stops. I am very new to Solr so I'm not sure what it indicates. When I run the same script on our documentation site, it completes without error.
2017-04-14 18:05:37.259 WARN (qtp1989972246-970) [ ] o.a.s.h.a.LukeRequestHandler Error getting file length for [segments_284]
java.nio.file.NoSuchFileException: /var/solr/data/uvahealthPlone/data/index/segments_284
I'm hoping someone out there might have more experience with Collective Solr for Plone and could recommend some good resources for debugging this issue. I've done a lot of searching lately but haven't found much useful info.
This was a bug fixed some time ago with https://github.com/collective/collective.solr/pull/122
We have standalone Solr servers in a master/slave setup, with a full indexer job that runs nightly. Generally, when the job executes successfully, everything is fine. But in recent days we noticed that the indexer node has a different document count than the searching node, so expected products are not available in our production system. We had to restart the nodes and start replication manually, and then the problem went away. We need to prevent this problem from occurring again. What do you suggest we check, or where should I look? I think the essential error for this issue is: "SEVERE: No files to download for index generation"
Regards
The Google App Engine HRD migration has been a nightmare for me. I migrated my 55 GB datastore to HRD yesterday. Since then, many queries and indexes are broken.
Some examples:
Select * from table1 where col1=val1 => query.get() returns empty in Python. However, it works in the Datastore Viewer.
Select * from table1 where col1=val1 => query.count() > 0. However, query.get() is empty.
Select * from table1 where col1=val1 order by col2 desc => Almost half of the rows are missing from the response. Same behavior in the Datastore Viewer.
How do I get these tables and indexes repaired? Is there any way to get support from the Google App Engine team for addressing this issue? It's a GAE migration tool bug.
Any help would be appreciated.
When the migration tool is used, a new app id is assigned, which makes all the keys change.
To recreate the custom indexes:
Temporarily empty index.yaml.
Vacuum the indexes (check out How can I remove unused indexes in Google Application Engine? for further information).
Wait until all the indexes have been deleted.
Restore index.yaml.
Create indexes by either redeploying the application or running appcfg.py update_indexes <path> (check out the documentation for further information).
You may also need to manually update all of the other references (e.g. a ListProperty of keys) if you have any.
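For reference, this is roughly what a composite index definition in index.yaml looks like, of the kind that gets temporarily removed and later restored in the steps above. The kind `Table1` and the property names here are made-up examples, not from the original question:

```
# Hypothetical composite index definition; kind and property
# names are illustrative only.
indexes:
- kind: Table1
  properties:
  - name: col1
  - name: col2
    direction: desc
```

Emptying index.yaml means removing the entries under `indexes:`; restoring it means putting them back exactly as they were before running update_indexes.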
Edit
The simple, mono-property indexes that are managed automatically by App Engine are created/updated when a property is put.
To regenerate them, I recommend creating and running a simple MapReduce task to put every existing entity. This procedure should rebuild all the indexes (including those defined in index.yaml).
As this is a costly process, first do it manually with a few entities to see if it solves the problem.
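The re-put idea above can be sketched as a plain batching loop. This is not the actual MapReduce API; `fetch_page` and `put_batch` are hypothetical stand-ins for a cursor-based datastore query and `db.put()`, and the in-memory store below only demonstrates the pattern:

```python
def reput_all(fetch_page, put_batch, batch_size=100):
    """Re-put every entity in batches.

    fetch_page(cursor, limit) -> (entities, next_cursor) and
    put_batch(entities) are hypothetical stand-ins for a cursor-based
    datastore query and db.put(); they are not real App Engine APIs.
    Re-writing an entity is what regenerates its automatic
    single-property indexes.
    """
    cursor = None
    total = 0
    while True:
        entities, cursor = fetch_page(cursor, batch_size)
        if not entities:
            break
        put_batch(entities)
        total += len(entities)
    return total


# Tiny in-memory demonstration of the batching pattern.
store = list(range(250))
written = []

def fetch_page(cursor, limit):
    start = cursor or 0
    page = store[start:start + limit]
    return page, start + len(page)

def put_batch(entities):
    written.extend(entities)

print(reput_all(fetch_page, put_batch))  # 250
```

Doing it manually with a few entities first, as suggested above, amounts to calling the same loop with a small, filtered query before running it over the whole kind.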
Tables get repaired automatically in about 2-3 days. That's an HRD problem. My problem is now resolved.
Update: it finally fixed itself in 24 hours =)
I have the same problem as you:
query.count() > 0, but query.get() or fetch() is empty.
It's strange that some tables work fine but some tables have this problem.
I think it is a Google App Engine problem from migrating a very large table (model).
I hope my tables will be recovered like yours in 2-3 days too.
I'm running several Python (2.7) applications and constantly hit one problem: log search (from the dashboard in the admin console) is not reliable. It is fine when I search for recent log entries (they are normally found OK), but after some period (one day, for instance) it's not possible to find the same record with the same search query again; I just get "no results". The admin console shows that I have 1 GB of logs spanning 10-12 days, so old records should be there to find; retention/log size limits are not the reason for this.
Specifically, I have a "cron" request that writes stats to the log every day (that's enough for me), and searching for this request always gives me only the last entry, not an entry per day of the span period as expected.
Is this expected behaviour (I do not see clear statements about log storage behaviour in the docs, for example), or is there something to tune? For example, would it help to log less per request? Or maybe there is some advanced use of the query language?
Please advise.
This is a known issue that has already been reported on the googleappengine issue tracker.
As an alternative, you can consider reading your application logs programmatically using the Log Service API to ingest them into BigQuery, or to build your own search index.
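A minimal sketch of the ingestion side, assuming the log records have already been fetched (e.g. via the Log Service API) and are represented here as plain dicts. The field names and the `to_bq_row` helper are made-up for illustration, not a real App Engine schema:

```python
import json
from datetime import datetime, timezone

def to_bq_row(record):
    """Map a fetched log record (a plain dict standing in for a
    RequestLog) to a flat row suitable for a BigQuery load job.
    Field names are illustrative only."""
    return {
        "timestamp": datetime.fromtimestamp(
            record["end_time"], tz=timezone.utc).isoformat(),
        "method": record["method"],
        "resource": record["resource"],
        "status": record["status"],
        "latency_ms": int(record["latency"] * 1000),
    }

records = [
    {"end_time": 1340000000.0, "method": "GET",
     "resource": "/cron/stats", "status": 200, "latency": 0.042},
]
rows = [to_bq_row(r) for r in records]
# Newline-delimited JSON is one accepted input format for BigQuery loads.
ndjson = "\n".join(json.dumps(r) for r in rows)
print(ndjson)
```

Once the rows are in BigQuery, a SQL query over the `timestamp` and `resource` columns replaces the unreliable admin-console log search.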
Google App Engine Developer Relations delivered a codelab at Google I/O 2012 about App Engine logs ingestion into Big Query.
And Streak released a tool called Mache and a Chrome extension to automate this use case.
I am having a problem with indexes building in my App Engine application. There are only about 200 entities in the indexes that are being built, and the process has now been running for over 24 hours.
My application name is romanceapp.
Is there any way that I can re-start or clear the indexes that are being built?
Try redeploying your application to appspot; I had the same issue and this solved it.
Let me know if this helps.
Greeting, eng.Ilian Iliev
To handle "Error" indexes, first remove them from your index.yaml file and run appcfg.py vacuum_indexes. Then, either reformulate the index definition and corresponding queries or remove the entities that are causing the index to "explode." Finally, add the index back to index.yaml and run appcfg.py update_indexes.
I found it here and it helped me.