PostgreSQL: how to see which queries have run

I have a PostgreSQL DB on my computer and an application that runs queries against it.
How can I see which queries have run on my DB?
I use a Linux computer and pgAdmin.

Turn on the server log:
log_statement = all
This will log every call to the database server.
I would not use log_statement = all on a production server; it produces huge log files.
The manual on the logging parameters:
log_statement (enum)
Controls which SQL statements are logged. Valid values are none (off), ddl, mod, and all (all statements). [...]
Changing the log_statement parameter requires a server reload (SIGHUP). A restart is not necessary. Read the manual on how to set parameters.
Don't confuse the server log with pgAdmin's log. Two different things!
You can also look at the server log files in pgAdmin, if you have access to the files (may not be the case with a remote server) and set it up correctly. In pgAdmin III, have a look at: Tools -> Server Status. That option was removed in pgAdmin 4.
I prefer to read the server log files with vim (or any editor / reader of your choice).
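If you prefer not to edit postgresql.conf by hand, one way to change and apply the setting (a sketch, assuming PostgreSQL 9.4 or later and a superuser connection) is:
-- write the setting to postgresql.auto.conf
ALTER SYSTEM SET log_statement = 'all';
-- reload the configuration without a restart
SELECT pg_reload_conf();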

PostgreSQL is quite advanced when it comes to logging.
Logs are stored in the Installationfolder/data/pg_log folder, while the log settings are placed in the postgresql.conf file.
The log format is usually set to stderr, but the CSV log format is recommended. To enable CSV format, change these settings in postgresql.conf:
log_destination = 'stderr,csvlog'
logging_collector = on
To log all queries (very useful for new installations), set the minimum execution time for a logged query to zero:
log_min_duration_statement = 0
To view the active queries on your database, use:
SELECT * FROM pg_stat_activity
To control which types of statement are logged, set:
log_statement = 'all' # none, ddl, mod, all
For more information on logging queries, see the PostgreSQL logging documentation.
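If you only want the statements that are currently executing rather than every session, a narrower query over the same view can be handier (a sketch using standard pg_stat_activity columns):
-- show only sessions that are actively running a statement
SELECT pid, usename, state, query_start, query
FROM pg_stat_activity
WHERE state = 'active';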

I found the log file at /usr/local/var/log/postgres.log on a Mac installation from Homebrew.

While using Django with postgres 10.6, logging was enabled by default, and I was able to simply do:
tail -f /var/log/postgresql/*
Ubuntu 18.04, Django 2+, Python 3+

You can also look in the pg_log folder, if logging is enabled in postgresql.conf with that log directory name.

Related

Delete all files in /var/opt/mssql/log every day?

I want to delete the files in it regularly, every day. Is there a problem with my database if I delete them every day?
Because if left unchecked it takes up a very large amount of space (I use Ubuntu Server).
Yes, it is safe to delete it.
This directory contains only logs and does not hold the main data of your database.
You can configure the number of error logs kept for your SQL Server instance
(the default behavior in SQL Server on Linux is to keep 128 error logs).
If you want to retain 6 error logs in the LOG folder, configure it as follows:
sudo /opt/mssql/bin/mssql-conf set errorlog.numerrorlogs 6
https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-configure-mssql-conf?view=sql-server-ver15#errorlogdir
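After changing a mssql-conf setting, the SQL Server service generally needs to be restarted for the new value to take effect; on a systemd-based Ubuntu server that would be something like:
# restart SQL Server so the new errorlog setting is picked up
sudo systemctl restart mssql-server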

How to Start SQL Server without TempDB

After scheduled maintenance, when the DBA tried to start SQL Server,
it failed due to a corruption issue with the storage subsystem.
Later on, we identified that the drive on which we had our TempDB data and log files was corrupt, and it was preventing SQL Server from starting successfully.
(The drive was corrupt, so I am unable to read anything from it.)
So basically we did not have a tempdb database on the server,
and we had to start SQL Server without TempDB.
So how do we start SQL Server without TempDB, and how do we fix this?
Before you try anything, make sure you back up your data. If one drive failed, another one might fail and leave you without your data. Drives that are purchased at around the same time tend to fail around the same time too.
You need to do that even if some of the data is stored in a RAID array - RAID isn't the same as a backup. If something happens to the array, your best case scenario is that you'll wait for a few hours to recover the data. Worst case, you could lose it all.
The process is described in the "TempDB location does not exist" section of The SQL Server Instance That Will Not Start, and on other sites such as Start SQL Server without tempdb.
You'll have to start SQL Server with Minimal Configuration. In that state, tempdb isn't used. You can do this with the -f command-line parameter. You can specify this parameter in the service's property page, or by calling sqlservr.exe -f from the command line, e.g.:
sqlservr -f
Another option is to use the -t3608 trace flag which starts only the master database.
sqlservr -t3608
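If you prefer to keep the instance under Windows service control rather than running sqlservr.exe directly, the same startup options can be passed when starting the service (a sketch, assuming the default instance name MSSQLSERVER; use /T3608 instead of /f if you want the trace-flag route):
REM assumes the default instance; starts it in minimal configuration
NET START MSSQLSERVER /f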
After that, you need to connect to the server with the sqlcmd utility, e.g.:
sqlcmd -S myservername -E
to connect using Windows authentication.
Once you do this, you can go to the master database and change the file location of the tempdb files:
USE master;
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLData\tempdb.mdf');
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'F:\SQLLog\templog.ldf');
GO
After that, remove the parameters from the service (if you set them there) and restart the service.
Finally, you may have to reconsider the placement of TempDB. TempDB is used heavily for sorting, for calculating window functions, and in situations where the available RAM isn't enough. Some operations require creating intermediate results, which get stored in TempDB. In general, you should have multiple tempdb files, although the exact number depends on the server's workload.
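As a sketch of what adding a second tempdb data file looks like (the logical name, path, and sizes here are hypothetical and should be adapted to your own layout):
-- hypothetical second tempdb data file; adjust name, path, and sizes
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'E:\SQLData\tempdb2.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);
GO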
How to Start SQL Server without TempDB database?
Step 1: Start the SQL Server in minimal configuration mode.
See "How to start SQL Server in minimal mode using the command prompt".
Step 2: Once SQL Server has started in minimum configuration mode,
connect to the SQL Server instance and move the TempDB data and log files to a new location.
See "Move TempDB data and log files to a new location".
Step 3: Once you have performed the troubleshooting steps, exit the SQLCMD window by typing Quit and pressing Enter.
Step 4: In the initial window, press CTRL+C and enter Y to stop the SQL Server service.
Step 5: Finally, start the SQL Server Database Engine using SQL Server Configuration Manager.
What version of SQL Server is it? One simple solution is to move the tempdb.* files out of that location and restart SQL Server; it will create new tempdb files. If you keep those files in the same location, it will fail to start.
In SQL Server 2016, if you remove the tempdb physical files, on startup it will see that they are missing and rebuild them on the fly in the location recorded in sysdatabases.

Error 21 when trying to delete a SQL Server DB

I had a SQL Server database on an external HDD. I forgot to detach the DB. I do not need it anymore, but I am unable to delete it or take it offline.
When I try to delete the DB or take it offline, I get the following error.
Msg 823, Level 24, State 2, Line 7
The operating system returned
error 21(The device is not ready.) to SQL Server during a read at
offset 0x00000000012000 in file 'E:\Kenya Air\Monet - Paulus.mdf'.
Additional messages in the SQL Server error log and system event log
may provide more detail. This is a severe system-level error condition
that threatens database integrity and must be corrected immediately.
Complete a full database consistency check (DBCC CHECKDB). This error
can be caused by many factors; for more information, see SQL Server
Books Online.
I have tried to run a DBCC CHECK, but I get the same error.
Try taking the database offline and then online.
Alter database DatabaseName set offline
Then bring it back online after a while
Alter database DatabaseName set online
I would try the system stored procedure sp_detach_db in SQL. From the fine manual:
Dropping a database deletes the database from an instance of SQL Server and deletes the physical disk files used by the database. If
the database or any one of its files is offline when it is dropped,
the disk files are not deleted. These files can be deleted manually by
using Windows Explorer. To remove a database from the current server
without deleting the files from the file system, use
sp_detach_db.
The OS is reporting exactly what you said in your question: In dropping the database, SQL Server attempts to remove the file from a device that no longer exists. Thus the database cannot be "dropped", per definition. But perhaps it can be detached, because that affects only the system's internal definition of the list of available databases.
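A minimal sketch of the detach call (the database name is a placeholder for your own):
-- detach the database without touching its files
EXEC sp_detach_db @dbname = N'YourDatabase';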
Do NOT try to set the database offline and back online - this will eventually make things worse.
Stop SQL Server and move the respective database files (data and log file(s)) to a different location. Start SQL Server again; eventually the DB will show up as (restore pending). Now delete the DB from SQL Server, then attach the database files back to the server and you should be OK, unless the files are physically corrupt. I have seen this problem numerous times, especially on virtualized SQL instances where SQL Server is set to autostart and wasn't shut down in a coordinated manner before a system reboot. A momentary connection problem to either the data or log file can cause this problem. In case your system shows this problem more than once, set SQL Server to start manually.
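For the re-attach step, the T-SQL looks roughly like this (the database name and file paths are placeholders):
-- attach the moved files as a database again; adjust names and paths
CREATE DATABASE YourDatabase
ON (FILENAME = 'D:\NewLocation\YourDatabase.mdf'),
   (FILENAME = 'D:\NewLocation\YourDatabase_log.ldf')
FOR ATTACH;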
I had the same problem; even when I wanted to take the database offline, it gave me this error.
But the problem was solved by restarting SQL Server.

DB2 - Tablespace access is not allowed

I am trying to restore a DB2 database to a backup from another database (also DB2). The restore seems to run fine. However, I am receiving the error: Tablespace access is not allowed. I checked the state of the tablespaces and they are stuck in Restore Pending. How do I get them in the correct state? If that's not possible, are there any other suggestions? BTW, I am working in a Windows environment and am using Data Studio for the restore.
So it seems you tried to do a so-called redirected restore (used the REDIRECT option in the restore command), right? This means you get the chance to redefine paths during the restore.
The restore is split up into basically three parts:
1. the restore reads the data and stops
2. it lets you redefine the paths
3. the restore finishes, writing the data to the new locations
During step 2 you will see the tablespaces in Restore Pending, because the restore has started but has not finished yet.
To support you in Step 2 I recommend using
restore... redirect generate script <scriptname>
which will give you a script with all the possible/necessary commands.
Check out
http://www-01.ibm.com/support/knowledgecenter/SSEPGG_10.5.0/com.ibm.db2.luw.admin.ha.doc/doc/c0006249.html?lang=en
Remember to check the database state afterwards, as you might want or need to do a rollforward operation as well.
For tablespace states, check out the following website:
http://www.ibm.com/developerworks/data/library/techarticle/dm-0407melnyk/#rp
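To check the tablespace states from the command line (the database name is a placeholder), something like this should work:
# show tablespace information, including the current state, for the database
db2pd -d yourdb -tablespaces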
Since you are using automatic storage, you'll need to redefine the paths of storage groups that are different in the target. I'm afraid I cannot say how it's done in Data Studio, but there should be a way to accomplish this via the GUI.
In the command line you'll need to do something along these lines. Firstly, determine the storage groups that need redefinition, e.g. by running db2pd -d yourdb -storagegroups. The result would look similar to this:
Storage Group Configuration:
Address SGID Default DataTag Name
0x00007F239319BB20 0 Yes 0 IBMSTOGROUP
Storage Group Statistics:
Address SGID State Numpaths NumDropPen
0x00007F239319BB20 0 0x00000000 1 0
Storage Group Paths:
Address SGID PathID PathState PathName
0x00007F23931C1000 0 0 InUse /export/db2data
Note the names of the storage groups that would have invalid paths on the target system.
Now you can start the restore:
db2 restore db yourdb from <path> redirect
The command will quickly complete. At this point you will be able to redefine the storage groups:
db2 set stogroup paths for <your_stogroup> on '<new_path>'
Once you've done that, continue the restore:
db2 restore db yourdb continue
Finish this off with rollforward if needed.
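If a rollforward is needed, it looks roughly like this (the database name is a placeholder):
# replay the logs and complete the recovery
db2 rollforward db yourdb to end of logs and stop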
Follow the steps below:
Find the last backup image and its path.
Perform a redirected restore:
db2 "RESTORE DATABASE dbname FROM /path TAKEN AT timestamp into NEWDBNAME REDIRECT GENERATE SCRIPT redirectRestore.sql"
Using the vi editor, make changes to the redirectRestore.sql file: change the path names of the stogroup paths.
Run the .sql file
db2 -tvf redirectRestore.sql
Perform Rollforward if required.
db2 rollforward db dbname to end of logs

SQL monitoring tool (Windows) for Postgres database

I would like to know whether there is any SQL monitoring tool for a Postgres database on Windows. The scenario is that I would like to see the SQL statements that come from a custom web application. Any kind of help is appreciated.
PostgreSQL has a very smart log system. I use it to monitor queries and recommend it. It is very easy to operate: one can view the queries with just Notepad, and the log can be organized by date.
Logs are stored in the Installationfolder/data/pg_log folder, while the log settings are placed in the postgresql.conf file.
The log format is usually set to stderr, but the CSV log format is recommended. To enable CSV format, change these settings in postgresql.conf:
log_destination = 'stderr,csvlog'
logging_collector = on
To log all queries (very useful for new installations), set the minimum execution time of a query that will be logged to zero:
log_min_duration_statement = 0
To view the active queries on your database, use:
SELECT * FROM pg_stat_activity
To control which types of statement are logged, set:
log_statement = 'all' # none, ddl, mod, all
You can use pgFouine to examine the PostgreSQL logs and see the commands that are being run.
pgBadger is also a good log analyzer tool that you can use.
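As a rough sketch of typical pgBadger usage (the log path and output file name are assumptions; point it at your own log directory, and make sure your log_line_prefix is one pgBadger understands):
# parse the PostgreSQL logs and write an HTML report
pgbadger /var/log/postgresql/postgresql-*.log -o report.html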
