From time to time, the number of database connections from our Drupal 6.20 system to our MySQL database reaches 100-150, and after a while the website goes offline. The error message when trying to connect to MySQL manually is "blocked because of many connection errors. Unblock with 'mysqladmin flush-hosts'". Since the database is hosted on Amazon RDS, I don't have permission to issue this command, but I can reboot the database, and once rebooted the website works normally again. Until next time.
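(For reference, I believe the SQL equivalent of that command is FLUSH HOSTS, though it needs the RELOAD privilege and would have to be issued from a client host that isn't already blocked. The blocking threshold is the max_connect_errors server variable, which on RDS can only be changed through a DB parameter group, not with SET GLOBAL. Untested on my instance:)

    -- SQL equivalent of `mysqladmin flush-hosts` (needs the RELOAD privilege)
    FLUSH HOSTS;

    -- The threshold that triggers the per-host block
    SHOW VARIABLES LIKE 'max_connect_errors';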
Prior to going offline, Drupal reports multiple errors of two types:
    Duplicate entry '279890-0-all' for key 'PRIMARY' query:
    node_access_write_grants /* Guest : node_access_write_grants */
    INSERT INTO node_access (nid, realm, gid, grant_view, grant_update, grant_delete)
    VALUES (279890, 'all', 0, 1, 0, 0)
    in /var/www/quadplex/drupal-6.20/modules/node/node.module on line 2267.

    Lock wait timeout exceeded; try restarting transaction query:
    content_write_record /* Guest : content_write_record */
    UPDATE content_field_rating SET vid = 503621, nid = 503621, field_rating_value = 1212
    WHERE vid = 503621
    in /var/www/quadplex/drupal-6.20/sites/all/modules/cck/content.module on line 1213.
The nids in these two queries are always the same and refer to two nodes that are frequently and automatically updated by a custom module. I can see a correlation between these errors and unusually many web requests in the Apache logs. I would understand the website becoming slower because of this. But:
Why do these errors occur, and how can they be solved? It seems to me that several web requests are trying to update the same node at the same time. But surely Drupal should deal with this by locking the tables, etc.? Or should I handle it in some special way?
Despite the higher web load, why does the database lock up completely and require a reboot? Wouldn't it be better if the website still had access to MySQL, so that once the load drops it could serve pages again? Is there some setting for this?
Thank you!
It can be caused by one or all of these three things to check:
Are you out of disk space? From SSH, run the command df -h and make sure you still have disk space.
Are the tables damaged? Repair the tables in phpMyAdmin, or see the CLI instructions here: http://dev.mysql.com/doc/refman/5.1/en/repair-table.html
Have you performance-tuned your MySQL with an /etc/my.cnf? See this for more ideas: http://drupal.org/node/51263
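If it helps, the second check can be run straight from the MySQL prompt; a rough, untested example using the two tables named in the errors above (REPAIR TABLE applies to MyISAM tables; see the linked manual page for other engines):

    -- Check and repair the tables the errors point at
    CHECK TABLE node_access, content_field_rating;
    REPAIR TABLE node_access, content_field_rating;

    -- While the site is slow, see what those 100-150 connections are doing
    SHOW PROCESSLIST;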
We have Microsoft Dynamics 365 (CRM) on premise version.
This instance is used by around 100 users, and there are 15+ custom applications written in .NET which consume the CRM web service to perform CRUD operations.
For fetching data, the custom applications use direct SQL SELECT statements, with no web service involvement. The data size is not very high either, and there are a few plugins and workflows defined in the CRM system. Everything had worked for a long time, but suddenly, in the last 2-3 months, we have started seeing performance issues: end users experience slowness, screens take longer than expected to load controls, or timeout errors occur.
This issue is not constant; it is intermittent, and it happens during business hours (PST/EST).
I wanted to know if there is any way to capture logs about the issue in CRM, any place in CRM where I can go and refer to log information or error traces that will help me get to the bottom of this issue?
I think the old tools/diagnostics/diag.aspx page should still work on-prem.
Just append that path to your Dynamics URL, e.g.: https://myOrg.mydomain.com/tools/diagnostics/diag.aspx
When you click Run it will generate some stats about the network and form performance.
Dynamics also has diagnostics tracing capabilities built in (or at least it used to; I haven't tried recently). This article has instructions on that.
Here's a summary (unconfirmed & untested):

On the CRM server:

1. Open the registry (run regedit).
2. Navigate to HKEY_LOCAL_MACHINE\Software\Microsoft\MSCRM.
3. Add three new values under that key:
   - TraceEnabled (DWORD): 1
   - TraceDirectory (String): C:\CRMTrace
   - TraceRefresh (DWORD): 99
4. Create the folder "CRMTrace" in the C: directory.
5. Reset IIS (run CMD as administrator and execute the "iisreset" command).
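For reference, a rough PowerShell equivalent of those steps (same caveat: unconfirmed & untested; run as administrator on the CRM server):

    # Create the trace registry values under the MSCRM key
    $key = "HKLM:\Software\Microsoft\MSCRM"
    New-ItemProperty -Path $key -Name "TraceEnabled"   -PropertyType DWord  -Value 1 -Force
    New-ItemProperty -Path $key -Name "TraceDirectory" -PropertyType String -Value "C:\CRMTrace" -Force
    New-ItemProperty -Path $key -Name "TraceRefresh"   -PropertyType DWord  -Value 99 -Force

    # Create the trace folder and restart IIS
    New-Item -ItemType Directory -Path "C:\CRMTrace" -Force
    iisreset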
This article has more, including PowerShell instructions.
Back in the day there was a desktop app called the Diagnostics Tool that allowed you to turn the logging on and off.
Also, please note that if you accidentally leave the logging on, it can fill up the C: drive and crash the server!
I'm experiencing an unusual problem with my PHP (7.3) website: it creates a huge number of unwanted session files on the server every minute (around 50 to 100 files), and I noticed all of them have a fixed size of 125K or 0K in cPanel's file manager. The inode count is growing uncontrolled, into the thousands within hours and into the hundreds of thousands in a day, whereas my website really has small traffic of less than 3K a day, with the Google crawler on top of it. I'm denying all bad bots in .htaccess.
I'm able to control the situation with the help of a cron command (shown below) that executes every six hours, cleaning all session files older than 12 hours from /tmp. However, this isn't an ideal solution, as fake session files keep getting created in great numbers, eating all my server resources: RAM, processor, and most importantly storage, which gets bloated, impacting overall site performance.
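The cron entry looks roughly like this (my paths; the 720 minutes correspond to the 12-hour threshold):

    # Every six hours, delete session files in /tmp older than 12 hours
    0 */6 * * * find /tmp -name "sess_*" -type f -mmin +720 -delete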
I opened many such files to examine them but found them not associated with any valid user; I add the user ID, name, and email to the session upon successful authentication. Even assuming a session is created for every visitor (without an account/login), it shouldn't go beyond 3K in a day, but the session count goes as high as 125,000+ in just one day. I couldn't figure out the glitch.
I've gone through relevant posts and made checks like adding the IP & user agent to sessions (see the snippet below) to track suspicious server monitoring, bot crawling, or overwhelming proxy activity, but with no luck! I can also confirm, by watching their timestamps, that no human or crawler activity took place when they were created. I can see files being created every single minute, without any break, throughout the day!
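The IP & user-agent check was added with a small snippet along these lines (a sketch; the key names match the session dumps below):

    <?php
    // Record the client's IP and user agent in the session so that
    // orphaned session files can be traced back to their origin.
    session_start();

    if (!isset($_SESSION['IP'])) {
        $_SESSION['IP'] = $_SERVER['REMOTE_ADDR'] ?? 'unknown';
        $_SESSION['UA'] = $_SERVER['HTTP_USER_AGENT'] ?? 'unknown';
    }
    $_SESSION['LAST_ACTIVITY'] = time();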
I haven't found any clue yet to the root cause of this weird behavior, and I'd highly appreciate any sort of help troubleshooting it! Unfortunately, the server team was unable to help much, but they added the clean-up cron. Pasting below the content of example session files:
0K Sized> favourites|a:0:{}LAST_ACTIVITY|i:1608871384
125K Sized> favourites|a:0:{}LAST_ACTIVITY|i:1608871395;empcontact|s:0:"";encryptedToken|s:40:"b881239480a324f621948029c0c02dc45ab4262a";
Valid Ex.File1> favourites|a:0:{}LAST_ACTIVITY|i:1608870991;applicant_email|s:26:"raju.mallxxxxx#gmail.com";applicant_phone|s:11:"09701300000";applicant|1;applicant_name|s:4:Raju;
Valid Ex.File2> favourites|a:0:{}LAST_ACTIVITY|i:1608919741;applicant_email|s:26:"raju.mallxxxxx#gmail.com";applicant_phone|s:11:"09701300000";IP|s:13:"13.126.144.95";UA|s:92:"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:82.0) Gecko/20100101 Firefox/82.0 X-Middleton/1";applicant|N;applicant_name|N;
We found that the issue was triggered by the hosting server's PHP version change from 5.6 to 7.3. However, we noticed the unwanted, overwhelming session files were not created on PHP 7.0! It's the same code base tested against all three versions. Posting this as it may help others facing a similar issue due to PHP version changes.
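One thing worth comparing between the PHP versions is the session garbage-collection configuration, since these php.ini settings decide whether and when PHP cleans up old session files by itself (the values below are just PHP's stock defaults, not a recommendation):

    ; GC runs on roughly gc_probability/gc_divisor of all requests and
    ; removes session files idle for longer than gc_maxlifetime seconds.
    session.gc_probability = 1
    session.gc_divisor     = 100
    session.gc_maxlifetime = 1440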
The debug_kit.sqlite file in the tmp directory grows with every request by approx. 1.5 MB. If I don't remember to delete it, I run out of disk space.
How can I limit its growth? I don't use the history panel, so I don't need the historic data. (Side question: why does it keep all historic requests anyway? The history panel only shows the last 10 requests, so why keep more than 10 requests in the DB at all?)
I found out that debug_kit has garbage collection. However, it is not effective in reducing disk space, because SQLite needs to rebuild the database with the VACUUM command to actually free the space. I created a PR to implement vacuuming in the garbage collection: https://github.com/cakephp/debug_kit/pull/702
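In the meantime, the space can be reclaimed manually, e.g. with the sqlite3 CLI (path assumed relative to the app root):

    sqlite3 tmp/debug_kit.sqlite "VACUUM;"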
UPDATE: The PR has been accepted. You can solve the problem now by updating debug_kit to 3.20.3 (or higher): https://github.com/cakephp/debug_kit/releases/tag/3.20.3
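If you installed it with Composer, the update is a single command:

    composer update cakephp/debug_kit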
Well, there is one main purpose for DebugKit: it provides a debugging toolbar and enhanced debugging tools for CakePHP applications. It lets you quickly see configuration data, log messages, SQL queries, and timing data for your application. The simple answer is: it's just for debugging. Even though only 10 requests are shown, you can still query the database to get all the history for panels such as:
Cache
Environment
History
Include
Log
Packages
Mail
Request
Session
Sql Logs
Timer
Variables
Deprecations
It's safe to delete debug_kit.sqlite; you can also set debug to false so it isn't generated again, or do what I did and run a cronjob to delete it every day.
By the way, you should not enable it in staging or production (see the sketch below for one way to ensure that). Hope this helps.
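The usual pattern for that is to load the plugin only in debug mode, in your Application class (a sketch against the CakePHP 3.6+/4.x application skeleton; adjust to your bootstrap):

    <?php
    // src/Application.php (fragment): load DebugKit only when debug is on,
    // so no debug_kit.sqlite is ever created in staging or production.
    use Cake\Core\Configure;

    public function bootstrap(): void
    {
        parent::bootstrap();

        if (Configure::read('debug')) {
            $this->addPlugin('DebugKit');
        }
    }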
The Access database just needs to be open, and it will usually crash within the next 20-40 minutes, resulting in the following error message:
Your network access was interrupted. To continue, close the database, and then open it again.
More details:
The database is split, with the back end and front end on a server. The computers are then connected to the server via LAN (ethernet).
Although there are multiple computers connected to the server, the database only has one user at a time.
The database has been fine for almost a year, until this week where this error has started occurring.
We never have connectivity issues with the server.
I have seen several answers saying it is:
the database's fault, as it is starting to corrupt
the server's fault, as it is broken, dropping my connection briefly
Microsoft's fault; they should patch it
I am hoping this is a problem with the database itself, as I am not responsible for the server.
Does anyone have a definitive solution?
I recently experienced the same problem, and it all started when I moved my DB to an external disk. The same DB was working just fine on the local disk, and on the previous external disk. So I am guessing it is just a bug that has to do with the disk letter changing, or something like that.
The problem sounds like an unstable LAN connection OR changes to the LAN setup (e.g. new hardware or changes to admin settings) causing increased latency.
If you have forms in the FE bound to BE tables, the latency can cause the connection to be severed, resulting in the error you see.
I'm not a network admin but the main culprits I've seen are:
Users connecting to the network over a VPN on an unstable connection (cell phones, crappy WiFi, or just bad ISP service).
Network admins capping persistent connections to a share causing disconnects.
Unstable network hardware or bad hardware configuration.
"Switching" between wired and wireless LAN connections.
I don't think the issue is the database itself, other than having forms bound to a BE database, which is more of a fundamental design problem than anything else.
Good luck!
I use Access 2010. I had the same issue but solved it in the following ways.
On the external data ribbon, go to the Import & link group and click on Linked Table Manager.
Click on select all.
Click on Ok to refresh the links.
In cases where the path of the BackEnd database file has been changed, browse to the new location and select the new path. This will also refresh the links. This will solve the problem. It did for me.
You wrote, "The database has been fine for almost a year, until this week where this error has started occurring."
Clearly something has recently changed for this to be happening, and without narrowing the field of possibilities it's anyone's guess. However, in my experience, Jet DB crashes when two or more users are accessing and editing the same record(s) at the same time. So, if you've recently added new users, this is a possibility.
Note: Jet is a file-server DB, not a client-server one, which means the app was probably designed for a specific number of front-end users. Without knowing more, I would start there.
I resolved my issue when I figured out that I had an offline directory setup and the sync was having an issue. I turned off the sync, tested it, and the error went away.
Question for you.
So I have this Access 2007 database that I'm trying to lock down so that it can be deployed. The intent is for multiple users to run the front-end application simultaneously, connecting to the back-end tables over the network. However, I obviously don't want to give them access to the forms, settings, tables, etc.
I already tried using the ChangeProperty function (called roughly as in the sketch after this list) for:
AllowFullMenus
AllowSpecialKeys
AllowBypassKey
AllowShortcutMenus
AllowBuiltInToolbars
AllowToolbarChanges
AllowBreakIntoCode
But whenever anyone without macros explicitly enabled opens the database, everything opens as if none of these settings were set. How can I get around this? I only use about 3 macros in the program, and none of them are related to opening the database or locking it down.
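For reference, the properties were set roughly like this, using the standard ChangeProperty helper from the Microsoft documentation (a sketch; repeat the call for each property in the list above):

    ' Standard DAO helper: sets a database property, creating it first
    ' if it does not exist (error 3270 = "Property not found").
    Function ChangeProperty(strPropName As String, varPropType As Variant, _
                            varPropValue As Variant) As Integer
        Dim dbs As DAO.Database, prp As DAO.Property
        Const conPropNotFoundError = 3270

        Set dbs = CurrentDb
        On Error GoTo Change_Err
        dbs.Properties(strPropName) = varPropValue
        ChangeProperty = True

    Change_Bye:
        Exit Function

    Change_Err:
        If Err = conPropNotFoundError Then    ' Property not found: create it.
            Set prp = dbs.CreateProperty(strPropName, varPropType, varPropValue)
            dbs.Properties.Append prp
            Resume Next
        Else
            ChangeProperty = False
            Resume Change_Bye
        End If
    End Function

    Sub LockDownFrontEnd()
        ChangeProperty "AllowFullMenus", dbBoolean, False
        ChangeProperty "AllowBypassKey", dbBoolean, False
        ' ...and so on for the other properties listed above.
    End Sub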
Suggestions?
Thanks.
You can try distributing your front-end as a locked ACCDE file; this is the equivalent of the old MDE files from Access 2000. Details are available here: http://www.databasedev.co.uk/convert_to_accde_format.html