Random "SELECT 1" query in all requests on rails - database

I'm profiling my Rails 3.2 app with MiniProfiler, and it shows me a
SELECT 1
query at the beginning of each page load or Ajax call. It only takes 0.4 ms, but it is still a seemingly unnecessary database query.
Anyone know why this query is happening or how to get rid of it?

SELECT 1 is like a ping: the cheapest possible query, used to test whether the connection is alive and kicking. Various clients use it for that purpose. It may be useless in your case ...

For Postgres, you can find it in the Rails PostgreSQL adapter source on GitHub, where the adapter's active? check issues SELECT 1.
Or, if you're using MySQL, you can see a solution in this Groupon engineering blog post.

Related

Reusing queries in JMeter

I am using JMeter to test APIs, and I often use queries to access the DB (over a JDBC connection).
So far so good. However, as I use more and more queries, it seems I am duplicating them.
For instance:
Thread 1:
  HTTP request 1
  Query A
  Query B
  Query C
Thread 2:
  HTTP request 2
  Query D
  Query A
Thread 3:
  HTTP request 3
  Query A
  Query C
As you can see, the same query is often duplicated, and not only within one .jmx file - I have a lot of .jmx files where I use the same queries.
So I am looking for a way to write Query A only once. My idea was to create a new .jmx file, include it, and call into it. Is this a good way to approach this? Also, how do I call Query A from any thread? I would need to pass (and return) parameters.
Help would be appreciated.
It appears you're looking for the Module Controller: you can define a "module" per query
and build your test using the "modules" instead of copying and pasting the real JDBC Request samplers.
If you're going to store the "modules" as external .jmx files, consider using Test Fragments.

Query ran in a different DB after returning to check it remotely

Please help me with this.
I was running a query remotely on A.db. Because the data is big, and maybe something was wrong with A.db that day, it took a long time; I didn't mind spending the time as long as I got my result.
After hours of running (16 hours, to be exact), the execution failed with an error. I went through the query and couldn't find any mistakes. But after a few more reads, I realized that the query had been executing in B.db instead of A.db, where I had originally run it.
Is there any reason for the query to switch to a different DB by itself? I read through the query and even had a colleague go through it; nothing in it asks to run in a different DB.
Please help me with this; it has been bugging me for more than a week, and I can't focus on anything else because of it.
Thank you
Hey, a query can't run on a different DB by itself once you have started it on one.
Chances are that you accidentally clicked in Object Explorer and selected a different DB.
A better approach is to just add
USE DB_name
at the top of your query; this tells the engine to run the query in the DB you named:
USE [A.db]
SELECT DISTINCT A.* FROM (
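As an extra safeguard - a minimal sketch, where [A.db], dbo, and BigTable are placeholder names - you can also qualify objects with three-part names, so the query reads from the intended database no matter which database the session currently has selected:
-- Hypothetical names: [A.db] is the target database, dbo.BigTable a table in it.
-- Three-part naming pins each object to its database even if the session's
-- current database is something else.
USE [A.db];
SELECT DISTINCT t.*
FROM [A.db].dbo.BigTable AS t;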

MongoDB slow in fetching from database

I'm using MongoDB in combination with Meteor + React, and fetching results takes about 5 seconds, even on a small database.
This happens only on the production server (AWS); it works instantly on the local machine.
In order to fetch the results, I'm using the following code.
return { cand: Job.find({ thejob: props.id }).fetch() };
and to see whether the array has been loaded, I use the following code on the frontend side.
if(!this.props.cand){ return(<div>Loading....</div>) }
but the Loading.... always takes about 5 seconds on the server. The database is a small one with fewer than 1000 records.
I have had similar experiences. Performance is pretty good when you run the queries on the local machine. If a query is slower on platforms like AWS but not locally, it's mostly due to network latency.
I suspect there isn't an index on the thejob field.
First, check whether there is an index on the thejob field:
db.job.getIndexes()
If there is none, simply create one:
db.job.createIndex({thejob:1})

Monitor that a website is active from SQL Agent

I want to test a portion of my website to see if it is running by executing a SQL Server Agent job. My site logs every time someone loads the login page. What I would like to do is launch:
https://www.example.com/Main/main_dir.wp1
then, after a few seconds, run:
SELECT * FROM dbo.TR_Weblog where DATEDIFF(MINUTE, date_time, getdate()) < 1
If there are no entries, the site is down.
How do I launch a URL from inside Agent?
IMO, this isn't an appropriate use of SQL Agent; it's not a general-purpose task scheduler.
If you're going use Agent though...
I would advise against doing it the way @TheGameiswar suggests, as it will leave orphaned iexplore.exe processes on your SQL Server box, and there are situations where it won't even start properly - causing the process to stall out.
Instead, make your first step one of type PowerShell, and run the following command from it:
Invoke-RestMethod -Uri YOURURLHERE
However, this will not parse/execute any JavaScript on the page, nor load any images. It'll just pull the raw HTML returned by the page when loaded.
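For the second job step, here is a minimal T-SQL sketch of the check - assuming the dbo.TR_Weblog table and date_time column from the question - written so the step fails when nothing was logged, which lets the job's failure notification alert you:
-- Fail the job step when no hits were logged in the last minute.
-- Comparing date_time against DATEADD (instead of wrapping it in DATEDIFF)
-- keeps the predicate sargable, so an index on date_time can be used.
IF NOT EXISTS (
    SELECT 1
    FROM dbo.TR_Weblog
    WHERE date_time >= DATEADD(MINUTE, -1, GETDATE())
)
    RAISERROR('No hits logged in the last minute; the site may be down.', 16, 1);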
But even this is a bit of a Rube Goldberg method of monitoring your website's availability when there are purpose-built applications/tools and services to do exactly that.
You can just set the step type to CmdExec and then use a command like the one below.
START http://bing.com/
Further, you don't have any control after the launch, so I think the best way is to do a periodic check of the IIS logs using Log Parser and see the status.

Is it possible to get a history of queries made in Postgres

Is it possible to get a history of queries made in Postgres? And is it possible to get the time each query took? I'm currently trying to identify slow queries in the application I'm working on.
I'm using Postgres 8.3.5.
There's no history in the database itself; if you're using psql, you can use "\s" to see your command history there.
You can get future queries or other types of operations into the log files by setting log_statement in the postgresql.conf file. What you probably want instead is log_min_duration_statement, which, if you set it to 0, will log all queries and their durations in the logs. That can be helpful once your app goes live; if you set it to a higher value, you'll only see the long-running queries, which can be helpful for optimization (you can run EXPLAIN ANALYZE on the queries you find there to figure out why they're slow).
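As a hedged sketch (mydb and the 250 ms threshold are placeholders), the same setting can also be applied per-database instead of editing postgresql.conf, though it still requires superuser rights:
-- Log every statement in mydb that runs longer than 250 ms, with its duration.
ALTER DATABASE mydb SET log_min_duration_statement = 250;
-- Or log every statement, regardless of duration:
ALTER DATABASE mydb SET log_min_duration_statement = 0;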
Another handy thing to know in this area is that if you run psql and tell it "\timing", it will show how long every statement after that takes. So if you have a sql file that looks like this:
\timing
select 1;
You can run it with the right flags and see each statement interleaved with how long it took. Here's how and what the result looks like:
$ psql -ef test.sql
Timing is on.
select 1;
 ?column?
----------
        1
(1 row)
Time: 1.196 ms
This is handy because you don't need to be a database superuser to use it, unlike changing the config file, and it's easier to use if you're developing new code and want to test it out.
You can use
\s
to fetch all the command history of the terminal; to export it to a file, use
\s filename
If you want to identify slow queries, then the method is to use the log_min_duration_statement setting (in postgresql.conf, or set per-database with ALTER DATABASE ... SET).
Once you have logged the data, you can then use grep or some specialized tools - like pgFouine or my own analyzer - which lacks proper docs but, despite this, runs quite well.
If the question is about seeing the history of queries executed on the command line, the answer is:
As of PostgreSQL 9.3, try \? on your command line; you will find all possible commands, and in that list search for history:
\s [FILE]   display history or save it to file
On your command line, try \s. This will list the history of queries you have executed in the current session. You can also save it to a file, as shown below.
hms=# \s /tmp/save_queries.sql
Wrote history to file ".//tmp/save_queries.sql".
hms=#
FYI for those using the Navicat UI:
You MUST set your preferences to use a file for storing the history.
If this is blank, your Navicat history will be blank.
PS: I have no affiliation with Navicat or its affiliates. Just looking to help.
There's no history in the database itself, but if you are using the DataGrip data-management tool, you can check the history of what you have run in DataGrip.
pgBadger is another option - it's also listed here: https://github.com/dhamaniasad/awesome-postgres#utilities
It requires some additional setup in advance to capture the necessary data in the Postgres logs, though; see the official website.
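As a rough sketch of that setup - the exact values are illustrative, and ALTER SYSTEM needs PostgreSQL 9.4+ and superuser rights (on older versions, put the equivalent lines in postgresql.conf) - pgBadger's docs call for logging along these lines:
-- Log every statement with its duration, plus enough context for pgBadger to parse.
ALTER SYSTEM SET log_min_duration_statement = 0;
ALTER SYSTEM SET log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h ';
ALTER SYSTEM SET log_checkpoints = on;
ALTER SYSTEM SET log_connections = on;
ALTER SYSTEM SET log_disconnections = on;
ALTER SYSTEM SET log_lock_waits = on;
SELECT pg_reload_conf();  -- apply without a restart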
Not logging, but if you're troubleshooting slow-running queries in real time, you can query the pg_stat_activity view to see which queries are active, the user/connection they came from, when they started, etc. E.g.:
SELECT *
FROM pg_stat_activity
WHERE state = 'active'
See the pg_stat_activity view docs.
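A hedged variation - pg_stat_activity has these columns on PostgreSQL 9.2 and later - that surfaces the longest-running statements first:
-- Show active statements, longest-running first.
SELECT pid,
       usename,
       now() - query_start AS runtime,
       query
FROM pg_stat_activity
WHERE state = 'active'
ORDER BY runtime DESC;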
