SQL monitoring tool (Windows) for Postgres database

Is there any PostgreSQL monitoring tool for Windows? The scenario is that I would like to see the SQL statements that come from a custom web application. Any help is appreciated.

PostgreSQL has a very capable logging system. I use it to monitor queries and recommend it: it is easy to operate, the log files can be read with plain Notepad, and the logs can be organised by date.
Logs are stored in the installation folder's data/pg_log directory, while the log settings are in the postgresql.conf file.
The log format is usually set to stderr, but the CSV log format is recommended. To enable CSV output, change:
log_destination = 'stderr,csvlog'
logging_collector = on
To log all queries (very useful for new installations), set the minimum execution time for a statement to be logged to zero:
log_min_duration_statement = 0
To view the currently active queries on your database, use:
SELECT * FROM pg_stat_activity
To control which types of statements are logged, set:
log_statement = 'all' # none, ddl, mod, all
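Building on the pg_stat_activity query above, a filtered version (a sketch; the column names below are those of PostgreSQL 9.2 and later) shows only other sessions' currently running statements and how long each has been running:

```sql
-- Show other sessions' currently executing statements and their runtime.
-- pg_backend_pid() excludes this monitoring session itself.
SELECT pid, usename, datname, state,
       now() - query_start AS runtime,
       query
FROM pg_stat_activity
WHERE state = 'active'
  AND pid <> pg_backend_pid()
ORDER BY runtime DESC;
```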

You can use pgFouine to analyze the PostgreSQL logs and examine the commands that are being run.

pgBadger is also a good log analyzer tool that you can use.

Related

End user initiating SQL commands to create a file from a SQL table?

Using SQL Manager ver 18.4 on 2019 servers.
Is there an easier way to allow an end user with NO access to anything SQL related to fire off some SQL commands that:
1.) create and update a SQL table
2.) then create a file from that table (CSV in my case) that they have access to in a folder share?
Currently I do this using xp_cmdshell with bcp commands in a cloud-hosted environment, hence I am not in control of ANY permissions or access, etc. For example:
declare @bcpCommandIH varchar(200)
set @bcpCommandIH = 'bcp "SELECT * from mydb.dbo.mysqltable order by 1 desc" queryout E:\DATA\SHARE\test\testfile.csv -S MYSERVERNAME -T -c -t, '
exec master..xp_cmdshell @bcpCommandIH
So the way I achieve this now is by letting the end users run a Crystal Report which fires a SQL stored procedure; that runs some code to create and update a SQL table, and then it creates a CSV file that the end user can access. Creating and updating the table is easy. Getting the table into the hands of the end user is nothing but trouble in this hosted environment.
We always end up with permission or other folder-share issues, and it's a complete waste of time. The cloud service admins tell me "this is a huge security issue and you need to start and stop xp_cmdshell with some commands every time you want to generate this file, to be safe".
Well, this is nonsense to me. I don't want to have to touch any of this, and it needs to be AUTOMATED for the end user from start to finish.
Is there some easier way to AUTOMATE a process for an END USER to create and update a SQL table and simply get the contents of that table exported to a CSV file without all the administration trouble?
Are there other, simpler options than xp_cmdshell and bcp to achieve this?
Thanks,
MP
Since the environment allows you to run a Crystal Report, you can use the report to create a table via ODBC Export. There are 3rd-party tools that allow that to happen even if the table already exists (giving you an option to replace or append records to an existing target table).
But it's not clear why you can't get the data directly into the Crystal report and simply export to csv.
There are free/inexpensive tools that allow you to automate/schedule the exporting/emailing/printing of a Crystal Report. See list here.

SQL Server Command Metadata - Custom Text for Auditing

Is there a built-in capability to include custom text as metadata to a SQL command?
Use Case:
A manual deployment occurs to a highly-controlled production environment. The person performing the deployment has been assigned a ticket number to authorize the deployment. Is it possible for the person to include the ticket number somehow directly in the script he/she executes such that it is logged on the server as part of the auditable metadata for that command?
We would like to use an auditing tool monthly to dump all DML statements and if the ticket number could be right there next to the command, it would make our lives a lot easier.
Thanks!
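One commonly used low-tech approach (this assumes your audit tooling captures the raw statement text; it is not a documented built-in feature): embed the ticket number as a comment inside the command itself. Audit mechanisms that record the full statement text will record the comment along with the DML. The ticket number and table below are hypothetical:

```sql
/* CHG0041234 */
UPDATE dbo.Customers
SET CreditLimit = 50000
WHERE CustomerId = 42;
```

Whether the comment survives depends on the audit mechanism: tools that log the statement as submitted will keep it, while tools that log a normalized or parameterized form may strip it, so verify against a sample dump first.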

PostgreSQL how to see which queries have run

I have a PostgreSQL DB at my computer and I have an application that runs queries on it.
How can I see which queries have run on my DB?
I use a Linux computer and pgadmin.
Turn on the server log:
log_statement = all
This will log every call to the database server.
I would not use log_statement = all on a production server, though. It produces huge log files.
The manual about logging-parameters:
log_statement (enum)
Controls which SQL statements are logged. Valid values are none (off), ddl, mod, and all (all statements). [...]
Resetting the log_statement parameter requires a server reload (SIGHUP). A restart is not necessary. Read the manual on how to set parameters.
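Since PostgreSQL 9.4 you can also change this setting without editing postgresql.conf by hand. A minimal sketch using ALTER SYSTEM (requires superuser privileges) followed by a reload:

```sql
-- Persists the setting to postgresql.auto.conf
ALTER SYSTEM SET log_statement = 'all';
-- Apply without a restart (equivalent to sending SIGHUP / pg_ctl reload)
SELECT pg_reload_conf();
```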
Don't confuse the server log with pgAdmin's log. Two different things!
You can also look at the server log files in pgAdmin, if you have access to the files (which may not be the case with a remote server) and have set it up correctly. In pgAdmin III, have a look at: Tools -> Server status. That option was removed in pgAdmin 4.
I prefer to read the server log files with vim (or any editor / reader of your choice).
PostgreSQL is very advanced when it comes to logging techniques.
Logs are stored in the installation folder's data/pg_log directory, while the log settings are in the postgresql.conf file.
The log format is usually set to stderr, but the CSV log format is recommended. To enable CSV output, change:
log_destination = 'stderr,csvlog'
logging_collector = on
To log all queries (very useful for new installations), set the minimum execution time for a statement to be logged to zero:
log_min_duration_statement = 0
To view the currently active queries on your database, use:
SELECT * FROM pg_stat_activity
To control which types of statements are logged, set:
log_statement = 'all' # none, ddl, mod, all
For more information on logging queries, see the PostgreSQL documentation on error reporting and logging.
I found the log file at /usr/local/var/log/postgres.log on a mac installation from brew.
While using Django with postgres 10.6, logging was enabled by default, and I was able to simply do:
tail -f /var/log/postgresql/*
(Ubuntu 18.04, Django 2+, Python 3+)
You can also check the pg_log folder, if logging is enabled in postgresql.conf with that log directory name.
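If you are not sure where your installation writes its logs, you can ask the server itself (note that viewing data_directory may require superuser privileges on recent versions):

```sql
SHOW data_directory;   -- base directory of the cluster
SHOW log_directory;    -- where the logging collector writes, relative to data_directory
SHOW log_filename;     -- the file name pattern, e.g. postgresql-%Y-%m-%d_%H%M%S.log
```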

Access 2007 - accdb; options in setting up a reliable multi-user environment across multiple servers?

I am having trouble sorting through all the information / various options in regards to Access 2007 used in a multi-user environment. Here is a brief description of my current situation. At work there is the "Business LAN" which I can log on and use to monitor two other servers via remote desktop. The business LAN is strictly controlled by our IT department and no one is permitted to install any software or drivers without their consent. I do have administrative privileges on both servers though.
The two servers that I log on to using RD are used for essentially the same task, which is to monitor and control the heat to different process lines. Each server runs a different program to accomplish this task but both programs use SQL Server as a back end.
I created two Access databases (one on each server, because they are currently behind separate firewalls) in order to query information from the back-end SQL side of these programs and combine it with relevant information I have compiled in tables, in order to add more detail to the data the programs are collecting. My program is still in the debug stage, but ultimately this information can then be accessed by field techs / maintenance in order to make their job easier. Maintenance staff can also add even more information based on the status of repairs, etc. Last, I have created reports which can be run by managers / engineers who are looking for an overall status of their area.
Both Access DBs are split so that the back ends are separate from the forms, queries, etc. I use an ODBC data source to import a link to SQL. I am using VBA for user authentication, user logging of record updates, and user / group access control. Everything works the way I intended except for the fact that everyone who logs on to the server will be trying to run the same copy of the front end. For example, I had a co-worker log on to the server via RD to test the program and I logged on from my desk. After logging in, I could see the forms he had open; Access was already running. Without being able to install Access locally (or even the runtime, due to IT restrictions) on each individual's workstation, I'm not sure what approach to take to resolve this.
Additional info, Server 1
One of the servers is considered to be the "master server", with which a number of client stations ("slave servers") communicate. The only way to access folders on the master server is to log on to a client station and run RD.
Server 2
This server is considered to be the "historian". It communicates with a terminal server in which users log on using RD and run applications which use SQL backend which resides on the historian. I have been able to set up shares so that certain folders are visible on the historian from the terminal server.
Can anyone tell me what my best option is?
Thanks in advance.
CTN
It's really crazy the way some IT departments do everything possible to make it hard to do your job well.
You allude to users logging on via Terminal Server. If so, perhaps you can store the front ends in the user profiles of their Terminal Server logons? This assumes they're not just using the two default administrative Terminal Server logons, of course.
The other thing that's not clear to me is why you need a back end at all in Access/Jet/ACE -- why not just link via ODBC to the SQL Server and use that data directly? The only reason to have an independent Jet/ACE file with data tables in it in that scenario is if there is data you're storing for your Access application that is not stored in the SQL Server. You might also have temp tables (e.g., for staging complicated reports, etc.), but those should be in a temp database on a per-user basis, not in a shared back end.
Here is a suggestion for how to implement what David Fenton wrote: write a simple batch script which copies your front end from the installation path to %TEMP% (the temporary folder of the current user session) and runs the front end from there. Something along the lines of:
rem make sure current directory is where the script is
cd /d %~dp0
rem assume frontend.mdb is in the same folder as the script
copy /y frontend.mdb %temp%
start %temp%\frontend.mdb
Tell your users not to run the frontend directly, only via the batch script, then everyone should get his own copy of the frontend. Or, give your frontend a different suffix in the installation path and rename it to "frontend.mdb" when copying to %temp%.

I have a 18MB MySQL table backup. How can I restore such a large SQL file?

I use a WordPress plugin called 'Shopp'. It stores product images in the database rather than on the filesystem as standard; I didn't think anything of this until now.
I have to move server, and so I made a backup, but restoring the backup is proving a horrible task. I need to restore one table called wp_shopp_assets which is 18MB.
Any advice is hugely appreciated.
Thanks,
Henry.
For large operations like this it is better to go to command line. phpMyAdmin gets tricky when lots of data is involved because there are all sorts of timeouts in PHP that can trip it up.
If you can SSH into both servers, then you can do a sequence like the following:
Log in to server1 (your current server) and dump the table to a file using "mysqldump" ---
mysqldump --add-drop-table -uSQLUSER -pPASSWORD -h SQLSERVERDOMAIN DBNAME TABLENAME > BACKUPFILE
Do a secure copy of that file from server1 to server2 using "scp" ---
scp BACKUPFILE USER@SERVER2DOMAIN:FOLDERNAME
Log out of server 1
Log into server 2 (your new server) and import that file into the new DB using "mysql" --- mysql -uSQLUSER -pPASSWORD DBNAME < BACKUPFILE
You will need to replace the UPPERCASE text with your own info. Just ask in the comments if you don't know where to find any of these.
It is worthwhile getting to know some of these command line tricks if you will be doing this sort of admin from time to time.
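The dump / copy / import steps above can also be collapsed into a single pipeline, assuming SSH access from server1 to server2 and the same placeholder names as above (replace the UPPERCASE parts with your own details):

```shell
# Stream the dump straight into the new server's MySQL over SSH,
# skipping the intermediate file and the scp step.
mysqldump --add-drop-table -uSQLUSER -pPASSWORD -h SQLSERVERDOMAIN \
    DBNAME TABLENAME \
  | ssh USER@SERVER2DOMAIN "mysql -uSQLUSER -pPASSWORD DBNAME"
```

The trade-off is that there is no backup file left on either server, so for a one-off migration the explicit three-step sequence is the safer choice.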
try HeidiSQL http://www.heidisql.com/
connect to your server and choose the database
go to menu "import > Load sql file" or simply paste the sql file into the sql tab
execute sql (F9)
HeidiSQL is an easy-to-use interface and a "working-horse" for web-developers using the popular MySQL-Database. It allows you to manage and browse your databases and tables from an intuitive Windows® interface.
EDIT: Just to clarify. This is a desktop application, you will connect to your database server remotely. You won't be limited to php script max runtime, or upload size limit.
Use BigDump.
Create a folder on your server which is not easy to guess, like "BigDump_D09ssS" or whatever.
Download the importer file from http://www.ozerov.de/bigdump.php and add it to that directory after reading the instructions and filling out your config information.
FTP the .sql file to that folder alongside the bigdump script, then go to your browser and navigate to that folder.
Selecting the file you uploaded will start importing the SQL in split chunks, which is a much faster method!
Or, if that is an issue, I recommend the other comment about SSH and the mysql -u -p -n -f method!
Even though this is an old post, I would like to add that it is recommended not to use database storage for images when you have more than about 10 product images.
Instead of exporting and importing such a huge file, it would be better to switch the Shopp installation to file storage for images before transferring.
You can use this free plug-in to help you. Always backup your files and database before performing this action.
What I do is open the file in a code editor, then copy and paste it into a SQL window within phpMyAdmin. Sounds silly, but I swear by it for large files.
