At the moment, as I debug my App Engine server, I often start it up with the instruction to clear the datastore and then fire a couple of KB of data at it, in the hope of figuring out why some of the reports I've written aren't generating properly.
However, one thing that's getting in the way of development, and also raising some slight concern, is that the console floods with the following output:
DEBUG 2012-07-13 11:44:34,033 datastore_stub_index.py:181] No need to update index.yaml
DEBUG 2012-07-13 11:44:34,221 datastore_stub_index.py:181] No need to update index.yaml
DEBUG 2012-07-13 11:44:34,406 datastore_stub_index.py:181] No need to update index.yaml
DEBUG 2012-07-13 11:44:34,601 datastore_stub_index.py:181] No need to update index.yaml
I've got two questions: should I be concerned about the flood of messages indicating that index.yaml does not need to be changed, and if not, is there a way to suppress them? If I should be concerned, could someone point me in the right direction?
Thanks,
There's no need for concern; it just indicates that the devserver doesn't need to add new items to the index.yaml file. This is explained in more detail here.
Every datastore query made by an application needs a corresponding
index. Indexes for complex queries must be defined in a configuration
file named index.yaml.
The development web server automatically adds items to this file when
the application tries to execute a query that needs an index that does
not have an appropriate entry in the configuration file.
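For reference, an auto-generated entry in index.yaml looks roughly like this (the kind and property names below are made-up examples, not taken from the question):

indexes:
- kind: Report
  properties:
  - name: owner
  - name: created
    direction: desc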
If I'm not mistaken, this should only be printed when the --debug flag is passed to the devserver, so maybe it is set as an option in the tool you use to invoke it.
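If the tool doesn't expose that option, and assuming the devserver emits these lines through Python's standard logging module (the "DEBUG ... datastore_stub_index.py:181" format suggests it does), a logging filter could drop just these records. This is only a sketch; whether you can install it depends on how your tool starts dev_appserver.py:

import logging

class DropIndexNoise(logging.Filter):
    def filter(self, record):
        # Keep every record except the repetitive index.yaml DEBUG lines.
        return not (record.levelno == logging.DEBUG
                    and record.filename == 'datastore_stub_index.py')

for handler in logging.getLogger().handlers:
    handler.addFilter(DropIndexNoise())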
The debug_kit.sqlite file in the tmp directory grows by approx. 1.5 MB with every request. If I don't remember to delete it, I run out of disk space.
How can I limit its growth? I don't use the history panel, so I don't need the historic data. (Side question: why does it keep all historic requests anyway? Only the last 10 requests are shown in the history panel, so why keep more than 10 requests in the db at all?)
I found out that debug_kit has a garbage collection mechanism. However, it is not effective at reducing disk usage, because SQLite only returns freed space to the filesystem after the database is rebuilt with the VACUUM command. I created a PR to add vacuuming to the garbage collection: https://github.com/cakephp/debug_kit/pull/702
UPDATE: The PR has been accepted. You can solve the problem now by updating debug_kit to 3.20.3 (or higher): https://github.com/cakephp/debug_kit/releases/tag/3.20.3
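Until you are on a release with that fix, you can reclaim the space by hand. A minimal sketch using Python's sqlite3 module (the tmp/debug_kit.sqlite path is an assumption based on the question):

import sqlite3

# VACUUM rebuilds the database file so that space freed by deleted rows is
# actually returned to the filesystem; deleting rows alone does not shrink it.
conn = sqlite3.connect('tmp/debug_kit.sqlite')
conn.execute('VACUUM')
conn.close()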
Well, DebugKit has one main purpose: it provides a debugging toolbar and enhanced debugging tools for CakePHP applications, letting you quickly see configuration data, log messages, SQL queries, and timing data for your application. The simple answer is: it is just for debugging. Even though only 10 requests are shown, you can still query the database for the full history of panels such as:
Cache
Environment
History
Include
Log
Packages
Mail
Request
Session
Sql Logs
Timer
Variables
Deprecations
It's safe to delete debug_kit.sqlite; it will simply be generated again. You can also disable DebugKit so the file isn't created at all. What I did was run a cron job that deletes it every day.
By the way, you should not enable it in staging or production. Hope this helps.
When I run update_indexes on Google Datastore I get the message below. It is telling me to determine which indexes are in error by looking at the GUI, then delete these indexes.
I have 51 erroneous indexes out of 200, and copying them out of the GUI is not feasible.
(Edit: By laboriously removing and adding indexes from the datastore-indexes.xml, we identified the one problematic index.)
Good devops procedure demands that we do this sort of thing automatically.
How does one determine which indexes are in error programmatically? (Python, bash, or even Java are OK.)
Cannot build indexes that are in state ERROR. To vacuum and rebuild your indexes:
1. Create a backup of your index.yaml specification.
2. Determine the indexes in state ERROR from your admin console: https://appengine.google.com/datastore/indexes?&app_id=s~myproject
3. Remove the definitions of the indexes in ERROR from your index.yaml file.
4. Run "appcfg.py vacuum_indexes your_app_dir/"
5. Wait until the ERROR indexes no longer appear in your admin console.
6. Replace the modified version of your index.yaml file with the original.
7. Run "appcfg.py update_indexes your_app_dir/"
Unfortunately Cloud Datastore doesn't have a public API for managing indexes and the current command line tools use an internal API that doesn't have access to that information.
We're aiming to have an index management API sometime next year (already working on designs) and I'll make sure this key use case is something we cover.
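In the meantime, the index.yaml editing in steps 3 and 6 of the quoted procedure can at least be scripted once the ERROR list has been copied out of the console by hand. A rough Python sketch, assuming PyYAML is installed and that dropping every index for an affected kind is acceptable (a simplification, since a kind can have several indexes with only some of them in ERROR):

import yaml  # assumes PyYAML is installed

# Hypothetical kinds copied by hand from the admin console's ERROR list.
error_kinds = {'Report', 'AuditLog'}

with open('index.yaml') as f:
    spec = yaml.safe_load(f)

# Step 3: write a copy of index.yaml without the ERROR indexes, keeping the
# original file around so it can be restored in step 6.
kept = [idx for idx in spec.get('indexes', []) if idx.get('kind') not in error_kinds]
with open('index.vacuum.yaml', 'w') as f:
    yaml.safe_dump({'indexes': kept}, f, default_flow_style=False)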
I'm having a lot of problems running my Solr server. When I have problems committing my CSV files (it's a 500 MB CSV) it throws up errors that I am never able to fix, which is why I try to clean out the entire index using
http://10.96.94.98:8983/solr/gettingstarted/update?stream.body=<delete><query>*:*</query></delete>&commit=true
But sometimes it just doesn't delete, in which case I use
bin/solr stop -all
And then I try again, but it still gives me errors when updating. So I decided to delete all my previous Solr files, re-extract the install tarball, and that works!
I was wondering if there is a shorter way to go about it. I'm sure the index files aren't the only ones that get generated. Is there any "revert to fresh installation" option?
If you are calling the update command against the right collection and you are committing, you should see the content deleted/reset. If that is not happening, I would check that the server/collection you are querying is actually the same one you are executing your delete command against (here, gettingstarted). If that does not work, you may have found a bug, but that is unlikely.
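If you want to sanity-check the request itself, here is the same delete-everything call issued from Python, using the host and collection from the question (the requests package is a third-party assumption):

import requests

# Same delete-by-query as the URL in the question, sent as an XML update body
# with an immediate commit so the deletion is visible right away.
resp = requests.post(
    'http://10.96.94.98:8983/solr/gettingstarted/update',
    params={'commit': 'true'},
    data='<delete><query>*:*</query></delete>',
    headers={'Content-Type': 'text/xml'},
)
print(resp.status_code, resp.text)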
If you really want to delete the collection, you can unload it on the Admin UI's Core page and then delete it from disk. To see where the collection lives, look at the right-hand side of the core's Overview page: you will see an Instance variable with the path to your core's directory, for example .../solr-6.1.0/example/techproducts/solr/techproducts. Deleting that directory after unloading the core will get rid of everything there.
I've been following this tutorial http://www.magentocommerce.com/knowledge-base/entry/tutorial-creating-a-magento-widget-part-1 to create a Magento widget as part of an extension I'm working on.
Whilst the widget was created successfully and worked as I wanted, I changed the code and started getting the following error:
Warning: Invalid argument supplied for foreach() in app/code/core/Mage/Widget/Model/Widget/Instance.php on line 502
When I changed the code back, the error was still present. However, when I copied my module to a fresh Magento install, the error didn't appear.
Although my widget does not explicitly use the database, does anyone know whether installing and uninstalling a Magento widget makes changes to the core database tables and, if it does, which tables are altered?
Thanks
The core_resource table contains a list of all modules, so adding a new module will cause a new row to be created.
If you have anything in your module's sql folder, that code will be run depending on your module's version.
Without knowing exactly what code was run and changed, it's hard to know what your specific problem is.
http://www.magentocommerce.com/knowledge-base/entry/magento-for-dev-part-6-magento-setup-resources
So if you followed the tutorial you linked to, I do not think it changed any database settings.
You can tell if a module will add tables or modify table columns by the following method.
Assume the module is called Foo_Bar and it is installed in the "community" code pool (as opposed to core or local).
Navigate to app/code/community/Foo/Bar. You will at minimum usually see etc and Block directories there.
If you see a sql directory then the module makes db changes. You also need to understand that a module is versioned and may initially create a certain table and then modify it in later versions.
You can go to any core module and look for the same thing. For example, I am running Enterprise 1.12 and went to:
app/code/core/Mage/Sendfriend/sql/sendfriend_setup
I see:
mysql4-upgrade-1.5.9.9-1.6.0.0.php
mysql4-upgrade-0.7.3-0.7.4.php
mysql4-upgrade-0.7.2-0.7.3.php
mysql4-upgrade-0.7.1-0.7.2.php
mysql4-install-0.7.0.php
install-1.6.0.0.php
Note the upgrade x-y nomenclature. That is what core_resource keeps track of.
If you are wondering where your new module's settings are saved, that is actually in core_config_data. Try this:
SELECT * FROM core_config_data where path like '%foo%';
Assuming you have some setting in the admin you named "foo".
Now back to your problem. That is a common PHP warning: you are running a foreach on something that cannot be iterated, so the code right before it is probably not returning an array or a collection.
Ideally you should always wrap the foreach in a check that the value you are iterating over is actually an array (or otherwise iterable) and not empty.
You can also turn off displaying errors, or suppress the warning with the @ operator, but that is bad practice...
Over the last few weeks we have repeatedly failed to complete a backup of the datastore using the Datastore Admin tool. We thought the issues had to do with quota errors we were running into, so we switched our application from a free to a paid app, but we still have problems.
Each time we attempt to back up to the Blobstore, the process never finishes. We see the backup in our Pending Backups list, but it never actually completes. We only have a total of 43 MB of data right now, so we don't see it as a data transfer problem. Looking at our default task queue, we have two pending tasks: one is a call to /_ah/mapreduce/controller_callback and the other is a call to /_ah/mapreduce/worker_callback.
The worker_callback racks up its retry count, and the only clue we have is that the Previous Run tab shows the last HTTP response code as 500. There is no error message and nothing shows up in our error logs; it just keeps retrying over and over again.
We've been able to narrow the backup problems down to a specific entity kind in a particular namespace, but we can't figure out why that entity kind is failing when the others are not. The major difference is that this entity kind has a large number of embedded entities, but if App Engine is able to read/put those entities, we can't understand why it has problems backing them up. The namespace in which the error occurs stores the most data for that entity kind compared to the other namespaces we have set up.
We think that if we can see what error is occurring in the worker_callback, we may be able to figure out why the backup is failing, or what is wrong with our data that's preventing the backup. Is there something we need to set up or enable through settings or configuration files to get more detailed information on the backup? Or is there some other avenue we should explore to investigate and fix this problem?
I should mention we are using the Java SDK as well as Objectify V3 to work with the data store. We are also backing up data to the Blobstore.
Thank you.
Well, with the App Engine team's help we figured out what the problem was and worked around the issue. I want to give details in case anyone else runs into this problem.
In issue 8363 the App Engine team indicated that, from their logs, they could see the MapReduce failed because of the large number of properties our entity kind had. The specific entity kind causing the failure had a large number of variable properties, which generated errors when MapReduce tried to write out a schema. They indicated that the fix on their end was to have the backup ignore entities like this so that it could complete successfully.
What we did to work around the issue and make the backup work was to change how we told Objectify to store our data. The large number of properties was being created by our use of the @Embedded annotation on a HashMap member field. Since @Embedded breaks a class down into its individual components, it was generating a large number of properties. We switched the member field to @Serialized and then ran a conversion process to make it use the new serialized property. This made backup/restore work again.
You can read more about the differences between embedded and serialized on Objectify's website.
snielson, would you mind opening an issue on our public issue tracker here? Remember to add your Application ID so we can further debug this specific scenario.
Thanks!