CakePHP - Simple insert statements taking a long time to execute

We are using the CakePHP framework and have deployed the application to our production environment. We have noticed that insert statements are taking a long time to execute. Sometimes a simple insert statement takes 6 seconds, which is way too much.
We have switched the persistent key to true in database.php and it seems to improve things a lot, but queries still sometimes take 2 to 3 seconds. Is it a good idea to have this switched on?
Any advice on why and how we can improve execution times?
Thanks
Regards
Gabriel

Does it take long on the local dev environment? Set debug to 2 in core.php to get an SQL dump that shows each SQL statement and its execution time. Maybe you have too many joins?
Remember to add the sql dump element to your layout:
<?php echo $this->element('sql_dump'); ?>

Related

Microsoft SQL Server Management Studio and displaying queries as they happen in real time

Two years ago a SQL expert opened SSMS and showed all queries as they were happening in real time. That way he could see which SQL statements were running fast and which took some time to run. I remember the queries being displayed in a "CMD"-lookalike window. I can't remember whether new queries were displayed at the top or at the bottom of the window.
For the past month I have been trying to figure out how he got this working. I looked everywhere in Activity Monitor, but I can't find anything similar to what he showed me then.
The results were similar to the "claymore eth miner window" ...
Can someone point me in the direction of getting this?
You can use sp_whoisactive to get the currently running queries. It is very useful for seeing whether there is blocking, locking, or long-running statements right now.
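For reference, a minimal sketch of calling it (sp_whoisactive is Adam Machanic's free stored procedure and has to be installed separately first):
-- Default call: one row per active session, showing the running
-- statement, wait info, and any blocking session.
EXEC sp_whoisactive;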
In order to get a better picture of what was going on in the past, you can enable the Query Store. It comes with some predefined reports and various statistics, and it is user-friendly.
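A minimal sketch of enabling it (the database name is a placeholder; note that Query Store requires SQL Server 2016 or later):
-- Turn on Query Store for one database; it then starts collecting
-- query texts, plans, and runtime statistics automatically.
ALTER DATABASE MyDb SET QUERY_STORE = ON;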

Query ran in a different DB after returning to check remotely

Please help me on this.
I was running a query remotely on A.db. It's big data, and maybe something was wrong with A.db that day, because it took a long time. I didn't mind how long it took, as long as I got my result.
After hours of running (16 HOURS to be exact), the query failed with some error. I went through the query and couldn't find any mistakes. But after a few more reads, I realized that the query had been executing against B.db instead of A.db, where I had originally run it.
Is there any reason for the query to switch to a different DB by itself? I read through the query and even let my colleague go through it, and nothing in it asks the query to run against a different DB.
Please help me with this; it has been bugging me for more than a week, and I can't focus on other work due to this problem.
Thank you
Hey, a query can't run on a different DB by itself if you ran it on some DB.
There is a chance that you accidentally clicked in Object Explorer and selected a different DB.
A better way is to just add
Use DB_name
at the top of your query; this tells the engine to run the query in the DB you named.
USE A.db
SELECT DISTINCT A.* FROM (
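A related safeguard (a sketch; the three-part names are placeholders and assume SQL Server-style naming) is to fully qualify the objects so the statement no longer depends on the connection's current database:
-- database.schema.table pins each object to one database, whatever
-- database the session happens to be connected to.
SELECT DISTINCT t.* FROM A_db.dbo.SomeTable AS t;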

"Unit" Testing Database

I'm running Oracle 11g SE1.
Just wondering if there are any tools that would allow me to test the data integrity of a (mostly read-only) schema. Essentially, what I want to do is have some queries that run every night or so and check whether they return the expected result. For example:
SELECT COUNT(*) FROM PATIENTS WHERE DISEASE = 'Clone-Killing Nanovirus';
Expected result: 59.
How do people normally do such testing?
I've used SQLUnit and written about it here. I don't believe any new development is being done on it but it should accomplish your goal.
SQL Developer (free, as in beer) also has a unit testing framework. I have installed it, and that's about it. I want to use it more, but I've been working with BI for the past few years, so there's been no external pressure to learn it.
The tests that you want to create sound pretty simple, so either of those should work well for you. The next step would be to have them run on a schedule (cron, Windows Task Scheduler, etc.), or you can go crazy with a continuous integration tool like Atlassian's Bamboo (haven't used it).
Of course you could skip the tools altogether and just write up scripts that are called from the command line. The fancy option would be writing the results to a database table so you can easily skin it; the simple option is piping the results to a text file and reviewing that each day.
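A minimal sketch of such a script, runnable via SQL*Plus from cron (the table, predicate, and expected count are the ones from the question):
-- check_patients.sql: exits non-zero on a mismatch so the scheduler
-- can flag the failure.
WHENEVER SQLERROR EXIT FAILURE
DECLARE
  v_count NUMBER;
BEGIN
  SELECT COUNT(*) INTO v_count
    FROM PATIENTS
   WHERE DISEASE = 'Clone-Killing Nanovirus';
  IF v_count <> 59 THEN
    RAISE_APPLICATION_ERROR(-20001,
      'PATIENTS check failed: expected 59, got ' || v_count);
  END IF;
END;
/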
Hope this helps.
You could batch up your queries and run a simple Perl script using DBI that runs the queries, checks them against an accepted tolerance, and emails you if something doesn't meet the thresholds. I know I have written such DB-checking code before to make sure items were within thresholds. Perl is a good tool for this sort of thing, as the DBI module can connect to your database and then you can run some canned queries and easily send yourself an email using the MIME package. http://www.perl.com/pub/1999/10/DBI.html

How to rollback/tear down/clear the database changes after a system test runs?

I have a test method, using NUnit and Selenium, which opens a browser on our website, which is on the production server, registers a user, and verifies that the registration is successful.
(I know the system tests should ideally run on a separate test server rather than on production, but here they want to test whether the prod system works!)
The problem is how to roll back the database changes that result from this test. For example, the state of my database before and after running the test should be the same.
I thought of 3 possible options, but none is practical:
1) Writing SQL queries to delete from the affected tables before the test starts (SetUp) and after it runs (TearDown); this is my current approach (see the first sketch after this list).
The problem with this approach is that I have to know exactly which tables are involved in each system test that runs, and this can quickly become very complex, as a test may impact more than one table.
2) Writing transactional code.
This is not an option, since the database changes are made by the website, not by the test code I write.
3) Taking a snapshot of the existing database (SQL Server 2008 R2) before each test starts, then restoring the database from the snapshot after the test finishes (see the second sketch after this list).
This idea would sound good if we could run the tests only on the staging environment, but the problem is that the tests have to run on production and may take about 5 minutes in total, so reverting to the snapshot would be a bad idea: any changes made during those 5 minutes would be lost!
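For option 1, a hypothetical teardown sketch; the table names and the test account's email are placeholders, since the real schema isn't shown:
-- Remove whatever the registration test created, child rows first.
DELETE FROM UserRoles
 WHERE UserId = (SELECT Id FROM Users WHERE Email = 'systest@example.com');
DELETE FROM Users
 WHERE Email = 'systest@example.com';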
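For option 3, the mechanics look roughly like this on SQL Server (names and the file path are placeholders; database snapshots also require Enterprise Edition on 2008 R2). As the answers below note, reverting a live production database this way would discard every change real users made in the meantime:
-- Create a database snapshot before the test...
CREATE DATABASE MyDb_Snap ON
  (NAME = MyDb_Data, FILENAME = 'C:\Snapshots\MyDb_Snap.ss')
AS SNAPSHOT OF MyDb;
-- ...and revert to it afterwards.
RESTORE DATABASE MyDb FROM DATABASE_SNAPSHOT = 'MyDb_Snap';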
Please advise: what approach would be the best possible option to resolve this problem? Maybe there is a 4th option?
Thanks,
Option 4: never, ever, ever run tests on a production server. It's a recipe for disaster (see thousands of funny (if you are not the protagonist) stories on the internet about how this can go horribly wrong). The right thing to do would be to configure the test and production servers in the same way.
There is a fifth option. If the website receives a registration for user "WeAreTestingOutSite", it does everything except actually adding the user to the database.
To be honest, as was said, there are better ways to test whether a production site is still in operation than running bots that register a user to make sure it is working (or operational).
I would recommend going with a 4th option: introduce a new feature which allows deleting the user. Probably not for the user himself/herself, but for the system admins (back-office users). That way you can test that a user can be registered, and delete the user afterwards, while not caring that much about the SQL scripts.

SQL Server Management Studio code completion

I've noticed that whenever I add tables / stored procs / functions / whatever to a SQL Server database, it takes a while for the code completion to pick up that they are now part of the database.
This is really annoying since the code completion and syntax highlighting become totally broken in the workflow scenario where you create a table and then start writing queries or whatever that deal with this new object.
Does anyone know how to get the code completion / syntax highlighting engine to update its view of what is in the database, to get rid of all these spurious invalid object name errors?
I understand that it's too late to answer the question but maybe it will help someone.
You can refresh the IntelliSense cache with Ctrl+Shift+R (Edit > IntelliSense > Refresh Local Cache), then wait 5-10 seconds.
A guess: Close and reopen SSMS? Lame and ineffective, and I hope there's a better way.
