Clearing the CakePHP tmp/cache solves the issue for only one save call. What can be the reason?

I modified the schema of the MySQL database (added a new table, etc.) and cleared tmp/cache (except the directories).
Now the save into the new table works only the first time I enter the flow (I have multiple save calls in a for loop, and all of them succeed), but it fails every time after that.
I am using CakePHP 1.3.
What else should I check?

Got it.
The cache issue was one part of the problem, and it was fixed by clearing the files in the tmp/cache directory.
The lesson:
If you make schema changes in MySQL (add a new table/column, etc.), either clear the tmp/cache directory or set the debug level to 3, refresh the page, and then set the debug level back to 0 (if on production), so that CakePHP rebuilds its model cache.
I was also getting a save error, "MySQL server has gone away", because the wait_timeout value in the config was 600 seconds and my script was taking longer than that.
So Model->save() was not working.
In my.cnf I raised the timeout to 4800, restarted MySQL, and that fixed the problem.
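For reference, the value can also be checked and raised at runtime (a sketch; a SET GLOBAL change only lasts until the next server restart, so the my.cnf edit is still what makes it permanent):
-- check the current timeout, in seconds
SHOW VARIABLES LIKE 'wait_timeout';
-- raise it for new connections until the server restarts
SET GLOBAL wait_timeout = 4800;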

Related

How can I limit the size of the debug_kit.sqlite file in CakePHP 3.x?

The debug_kit.sqlite file in the tmp directory grows by approximately 1.5 MB with every request. If I don't remember to delete it, I run out of disk space.
How can I limit its growth? I don't use the history panel, so I don't need the historical data. (Side question: why does it keep all historic requests anyway? The history panel only shows the last 10 requests, so why keep more than 10 requests in the database at all?)
I found out that DebugKit has garbage collection. However, it is not effective at reducing disk usage, because SQLite only frees space after the database is rebuilt with the VACUUM command. I created a PR to add vacuuming to the garbage collection: https://github.com/cakephp/debug_kit/pull/702
UPDATE: The PR has been accepted. You can now solve the problem by updating DebugKit to 3.20.3 (or higher): https://github.com/cakephp/debug_kit/releases/tag/3.20.3
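If you cannot upgrade yet, the same shrink can be done by hand against tmp/debug_kit.sqlite (a sketch only; the table names requests and panels are an assumption about DebugKit's default SQLite schema, so check them first with .tables in the sqlite3 shell):
-- clear the stored request history
DELETE FROM panels;
DELETE FROM requests;
-- rebuild the file so the freed pages are actually released
VACUUM;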
Well, there is one main purpose for DebugKit: it provides a debugging toolbar and enhanced debugging tools for CakePHP applications. It lets you quickly see configuration data, log messages, SQL queries, and timing data for your application. The simple answer is: it is just for debugging. Even though only 10 requests are shown, you can still query the file to get the full history for panels such as (a query sketch follows below):
Cache
Environment
History
Include
Log
Packages
Mail
Request
Session
Sql Logs
Timer
Variables
Deprecations
It's safe to delete debug_kit.sqlite (it will simply be regenerated), or you can disable DebugKit so the file is not created again; what I did was run a cron job that deletes it every day.
By the way, you should not enable DebugKit on staging or production. Hope this helps.
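For completeness, querying the file directly might look like this (a sketch under the assumption that DebugKit's default SQLite schema has a requests table and a panels table keyed by request_id; verify the actual table, column, and panel names with .tables before relying on it):
-- list stored requests, newest first
SELECT id, method, url, requested_at FROM requests ORDER BY requested_at DESC;
-- fetch one panel's content for a given request (panel names correspond to the list above)
SELECT content FROM panels WHERE request_id = '<some request id>' AND panel = 'SqlLog';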

How to reset Solr back to first use?

I'm having a lot of problems running my Solr server. When I have problems committing my CSV file (it's a 500 MB CSV), it throws an error that I am never able to fix. That is why I try to clean up the entire index using
http://10.96.94.98:8983/solr/gettingstarted/update?stream.body=<delete><query>*:*</query></delete>&commit=true
But sometimes it just doesn't delete. In those cases, I use
bin/solr stop -all
and then try again, but it still gives me errors when updating. Then I decided to extract the install tarball again, deleting all my previous Solr files, and that worked!
I was wondering if there is a shorter way to go about it. I'm sure the index files aren't the only ones that get generated. Is there any "revert to fresh installation" option?
If you are calling the update command against the right collection and you are committing, you should see the content deleted/reset. If that is not happening, I would check that the server/collection you are querying is actually the same one you are executing your delete command against (here, gettingstarted). If that does not work, you may have found a bug, but that is unlikely.
If you really want to delete the collection, you can unload it on the Admin UI's Core page and then delete it from disk. To see where the collection lives, look at the core's Overview page on the right-hand side. You will see an Instance value with the path to your core's directory; it could be, for example, .../solr-6.1.0/example/techproducts/solr/techproducts. Deleting that directory after unloading the core will get rid of everything there.
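The unload-and-delete can also be done through the CoreAdmin API rather than by hand on disk; a sketch, assuming a standalone core named gettingstarted on the default port (the three delete flags remove the index, the data directory, and the whole instance directory):
http://localhost:8983/solr/admin/cores?action=UNLOAD&core=gettingstarted&deleteIndex=true&deleteDataDir=true&deleteInstanceDir=true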

WSO2 EMM - App Management Database Bug

Running WSO2 EMM 1.1.0, everything has been working just fine except for one big issue.
From the moment I first click on an app in the App Management tab, the WSO2EMM_DB.h2.db file starts to grow steadily for as long as the server is running, even with absolutely no changes. Eventually it gets so big that clicking an app on that tab takes a ridiculously long time to load the list of devices using the app. We're talking 5+ minutes; it becomes completely unusable. I have checked the error logs every time and found no errors at all.
Restarting the server does nothing to correct the issue. Even if I click an app on the App Management tab once, and never again, the database file will continue to grow. Even restarting the server and not logging into the EMM page, it will continue to grow.
The only thing I've found so far that can possibly help is keeping backup copies of the database file and overwriting the current file when it gets too big. Obviously that's not a solution, as I'd need to create a new backup file every time there's a change on the server, and eventually the database file would grow too big from that too.
It's not an issue with the H2 database either. Not only have I tried starting over fresh several times with the same behavior, but here is the only info I could find regarding this issue, and they were having it regardless of whether it was on H2 or MySQL.
I've been trying to find a solution for this for over a month with no success. Any help would be appreciated!
EDIT: It looks like this might be the subject of EMM-826. Unfortunately there seems to be no response to that bug report so far.
EDIT 2: EMM-826 was closed with a message saying the following:
This issue is fixed in the EMM 1.1.0 GA latest pack. Please get all the patches for the product/build the product from the latest source [ https://github.com/wso2/product-emm ] and try again.
Unfortunately, that did not work for me. I'm not sure what exactly I'm doing wrong, so I'll list what I did to try to fix it:
Downloaded the EMM 1.1.0 zip from http://wso2.com/products/enterprise-mobility-manager/.
Downloaded the zip from https://github.com/wso2/product-emm and pasted the files from that into my EMM_HOME directory.
When that didn't work, I searched for patches and found I was only using patches 1-6. In the documentation I found I could download patches 7-12 here. Patches 9 and 10 didn't work right for some reason, leaving me unable to reach the EMM dashboard or publisher; I could only access the Carbon manager. I was able to make patches 7, 8, 11, and 12 work, though, with no change in behavior.
Here are the steps I take to reproduce the issue:
After setting a fresh copy of the EMM up, I log in to the EMM dashboard as Admin, set up a user account, and upload an app through the Publisher.
Register a device to the user account I set up. In this case, an Android device running Android 4.2.2.
From the dashboard, I go to App Management and click the app I uploaded. The list of devices loads, but from that point on the database file starts growing and eventually, after several hours, becomes so large that the device list never loads.
Please help!
Found this happening as well; from a quick look it's the WSO2EMM_DB.notifications table. It seems to keep a history of all notifications over time, and the info for app installs is pulled with non-optimized queries, which degrade as the table grows. You could delete all rows from the table, and it will re-populate as devices check back in and report their info.
But you'd probably want to write a query that keeps only the latest notification of each type for each user (I'll leave that to someone else...), and, as was mentioned, it is apparently fixed in the latest version.
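A minimal sketch of the simpler variant described above (the bare table name is an assumption based on the WSO2EMM_DB.notifications table mentioned here; back up the database before running it):
-- wipe the notification history; devices re-populate it as they check back in
DELETE FROM notifications;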
Issue appears to be resolved in EMM 2.0, which can be found here.

CiviCRM "database looks to have been partially upgraded"

I recently upgraded both Drupal and CiviCRM to the latest versions. Drupal works fine, and so does Civi, except that when I open the Civi menu I get a message that says "Database check failed - the database looks to have been partially upgraded. You may want to reload the database with the backup and try the upgrade process again." This happened earlier, and reloading the most recent backup didn't help. We had to go back quite a ways before we found one that did, and then had to reload a lot of data from .CSV files and by hand. I'd rather not go through that again.
One thing we found when comparing the development site on my WAMP desktop (a new install that works well) with the one on my ISP's server is that the server version contained two MyISAM-format tables, from or generated by CiviCase, where Civi expects InnoDB-format tables. My ISP, far more knowledgeable than I am about MySQL, converted these two tables to InnoDB, but the problem remains. This leaves me with two questions:
Could the MyISAM tables be the source of the "incomplete upgrade"?
Is there some way to reset a flag that tells Civi the database is incomplete, or to run the database check manually?
Thanks for any help. Civi seems to work OK as is, but the error message will be disturbing to my end users.
That message appears when you have begun the CiviCRM database upgrade but it hasn't finished. CiviCRM edits the version number in the civicrm_domain table to flag that you're in the middle of an upgrade, and when the upgrade completes, it should clear that flag.
The simple way to remove the message is to edit that value in the database, but it gets set there for a reason: your database upgrade never completed.
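To see what CiviCRM has recorded (and, only if you are certain the schema really is fully upgraded, to reset it), something along these lines should work (a sketch; the column names can differ between CiviCRM versions, so treat them as assumptions and take a backup first):
-- inspect the version string stored for the installation
SELECT id, name, version FROM civicrm_domain;
-- only if the schema is definitely at the target version:
-- UPDATE civicrm_domain SET version = '<actual schema version>';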
You should restore everything to the last version where it all was working--restore both the code and the database. Play around for a bit and make sure nothing funny is happening.
Run a normal CiviCRM upgrade, replacing the files and running the upgrade script. Take note of anything that seems funny when the upgrade script runs. You might try doing a minor upgrade--just a point release--simply to be sure that any upgrade is working fine.
At this point, you should either have no problems or a much more detailed problem.
Finally, please note that there is now a CiviCRM-specific StackExchange site, which is where you'll find the most CiviCRM experts to answer your questions.

SQL Timeouts in CRM 4.0 - import of customizations fail

When importing customizations into CRM 4.0, the import fails with a "generic SQL error" message. Digging a bit deeper, the real error is that a timeout has occurred. The same error occurs when trying to create a new entity.
I increased the timeout as suggested in the link below, but the timeout occurred anyway; it just took longer to happen.
Increasing the timeout:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSCRM\OLEDBTimeout
This value does not exist by default; if it is missing, the default of 30 seconds is used. To change it, add the registry value (of type DWORD) and set it to the number of seconds you want. Then recycle the CrmAppPool application pool for the change to take effect.
https://community.dynamics.com/product/crm/crmtechnical/b/crmdavidjennaway/archive/2008/09/04/sql-timeouts-in-crm-generic-sql-error.aspx
The SQL profiler shows a set of inserts and updates related to the CRM metadata, followed by a call to the stored procedure exec p_RecreateIndexes.
This call is apparently the culprit and never completes in a timely fashion (30+ minutes now and still not finished). This is an existing test instance of CRM that is used quite extensively and filled with lots of data. Creating new entities has never taken this long before. Just in case, I have run the AsyncOperation cleanup scripts from Microsoft; they had no visible effect.
Is there any way to find the reason for the delay in this procedure, or some other solution I can try?
Try splitting up your import into chunks. For example, import the first 20 entities, then the second 20 entities, and so on until you've imported all of them. Then publish. Then go back and try importing the entire customizations file at the same time and republish. Following this method exactly has been the only way we've found to import some customization files in particularly stubborn environments.
It sounds like the re-index operation is taking some time, which would be expected if the data size is large and the fragmentation is high. It also depends on exactly what that stored procedure does, how many cores/CPUs you have, and how many SQL Server is allowed to use.
Does the app allow you to defer that operation? You could then run it manually yourself through Management Studio, if that doesn't break the application.
You could be cheeky and rename that procedure, replacing it with one of your own that does nothing... and then rename it back and run it. Again, that might break something.
Or just keep increasing the timeout until you get past this issue. Some of my re-index jobs on databases take hours...
Or contact the vendor, if you have support?
If you ran that query through Management Studio, it would complete; doing that would give you the approximate time to put (temporarily) into the timeout setting.
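A sketch of that manual run in Management Studio (assuming, from the trace above, that the procedure takes no arguments; the elapsed time reported tells you roughly what to set OLEDBTimeout to):
-- time the metadata re-index to learn what timeout value is actually needed
SET STATISTICS TIME ON;
EXEC p_RecreateIndexes;
SET STATISTICS TIME OFF;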
I just experienced the exact same problem, and I was only importing 2 entities.
I found your question when I was googling for p_RecreateIndexes after seeing it in the trace files.
I ended up running exec p_RecreateIndexes in SQL Server. After it completed (about 2 minutes), I reran the entity import and it worked.
