After running YugabyteDB for some time, I see that the logs are never deleted. How can I configure the server to delete old logs?
Currently YugabyteDB doesn't provide an automatic way to delete old logs.
You can achieve this by running a bash script in crontab that deletes old files (see the sketch below the examples).
Examples:
https://askubuntu.com/questions/589210/removing-files-older-than-7-days
Delete files older than specific date in linux
https://askubuntu.com/questions/413529/delete-files-older-than-one-year-on-linux
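For instance, a minimal sketch, assuming the logs live under /home/yugabyte/yugabyte-data/logs (both the path and the 7-day retention are assumptions; adjust them to your setup):

# remove log files older than 7 days
# (the path below is an assumption; point it at your actual log directory)
find /home/yugabyte/yugabyte-data/logs -type f -name "*.log*" -mtime +7 -delete

A matching crontab entry to run the cleanup daily at 02:00:

# crontab -e
0 2 * * * find /home/yugabyte/yugabyte-data/logs -type f -name "*.log*" -mtime +7 -delete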
Related
I want to delete the logs regularly, every day. Is there a problem with my database if I delete them every day? Because if left unchecked they take up very large space (I use Ubuntu Server).
Yes, it is safe to delete them.
This directory contains only logs and does not hold the main data of your database.
You can configure the number of error logs kept for your SQL Server instance (the default behavior in SQL Server on Linux is to keep 128 error logs). If you want to retain 6 error logs in the LOG folder, configure it as follows:
sudo /opt/mssql/bin/mssql-conf set errorlog.numerrorlogs 6
https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-configure-mssql-conf?view=sql-server-ver15#errorlogdir
I have created a database project. I am able to deploy my changes to my SQL Server. Now I have to deploy the same changes to another environment, and I don't want to change my previous data. I don't have access to that SQL Server, so I don't know the connection string.
I have some options, like deploying the .dacpac file or a .sql script, but it first deletes the database and then creates a new one, so I am losing my data.
Please help me. Is there any option?
The options I see for this are:
Ask for a backup (or extract the schema using Tasks --> Generate Scripts in SSMS), restore this somewhere, and use sqlpackage to generate a deployment script you can ask them to run
Ask them to run sqlpackage.exe and either generate a script or run it directly
Ask them for permissions so you can do it
If the database is being deleted, then you have the option "CreateNewDatabase" set to true, which would be bad in a production environment, so remove it or set it to false!
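For option 2, a hedged sketch of generating an upgrade script with sqlpackage.exe instead of deploying directly (the file, server, and database names are placeholders):

:: produce an incremental upgrade script without touching the target;
:: CreateNewDatabase=false keeps the existing database and its data
sqlpackage.exe /Action:Script /SourceFile:"MyProject.dacpac" /TargetServerName:"TheirServer" /TargetDatabaseName:"TheirDb" /OutputPath:"upgrade.sql" /p:CreateNewDatabase=false

They can then review upgrade.sql and run it themselves with the permissions they already have.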
If they run it or you ask for permissions, these are the minimum permissions you need to generate a script (to run the script you will probably need dbo):
https://the.agilesql.club/Blogs/Ed-Elliott/What-Permissions-Do-I-Need-To-Generate-A-Deploy-Script-With-SSDT (my blog)
I feel I'm doing something wrong with the ArangoDB upgrade process. The end result from the upgrade is that my databases exist, my users exist, my collections exist, but there are no documents in my collections. Obviously this is an issue. I've had this problem occur twice, upgrading from 2.3.1 -> 2.3.4, and 2.3.4 -> 2.4 in Windows. I used the same procedure in both cases:
Stopped the ArangoDB service
Made a backup copy of my ArangoDB directory from Program Files
Installed the new version of ArangoDB
Copied the contents of the database folder from the old ArangoDB directory to the new one, excluding the system database (I feel like this is where I go wrong...)
Then I open a command prompt to the bin directory and run arangod --upgrade
The upgrade output seems right to me, it finds the old databases and upgrades them, which is evident by the fact that they exist, along with the collections. But as stated before the collections are all empty. Thankfully this has been in a dev environment, but I worry about upgrading my production environment. Am I doing something wrong or is this a bug?
I've tried to reproduce this with the upgrade from 2.3.5 to 2.4.1 using the x64 ArangoDB packages.
What I did:
First, ran arangod from the shell with its own database directory outside of the program directory:
bin\arangod.exe c:\ee --console
Created a collection, inserted data (like the js/server/tests/aql-optimizer-rule-use-index-for-sort.js setUp()-function does)
then installed the new version, ran
bin\arangod.exe c:\ee --upgrade
then
bin\arangod.exe c:\ee --console
AQL_EXECUTE("for u in UnitTestsAqlOptimizeruse_index_for_sort_XX return u")
Which gave me all 100 documents that I had put into the collection.
Next I tried running the arangod service, with the var\lib folder inside of the Program Files folder. I connected using arangosh, inserted the documents into the collection again, and verified with
db._query("for u in UnitTestsAqlOptimizeruse_index_for_sort_XX return u").toArray();
that all data was there.
Then I stopped the service, installed 2.4.1, stopped the service again, used Explorer to copy over the ArangoDB 2.4.1\var\lib directory, ran arangod --upgrade successfully, restarted the service, and used arangosh to revalidate the collection and its documents again.
So, as this seems similar to what you did, can you try to reproduce this with a minimal set of data and send us your var\lib directory?
As it turns out the problem was related to replication. I would replicate data from the production db to use during development. Then when I would upgrade or stop the Arango service on the dev db all the documents would vanish. BUT when I used arango backup and restore to copy the production DB data, everything worked as expected. The newest version of Arango is supposed to have fixed the issue, but I haven't had any time to test it.
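For reference, a minimal sketch of the dump-and-restore route that worked, using arangodump and arangorestore (the endpoints, database name, and paths are placeholders):

:: dump one database from the production server
arangodump --server.endpoint tcp://prod-host:8529 --server.database mydb --output-directory dump

:: restore it into the dev server
arangorestore --server.endpoint tcp://dev-host:8529 --server.database mydb --input-directory dump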
I have a PostgreSQL DB at my computer and I have an application that runs queries on it.
How can I see which queries have run on my DB?
I use a Linux computer and pgadmin.
Turn on the server log:
log_statement = all
This will log every call to the database server.
I would not use log_statement = all on a production server. It produces huge log files.
The manual about logging-parameters:
log_statement (enum)
Controls which SQL statements are logged. Valid values are none (off), ddl, mod, and all (all statements). [...]
Resetting the log_statement parameter requires a server reload (SIGHUP). A restart is not necessary. Read the manual on how to set parameters.
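For example, a minimal sketch of enabling it and reloading from the shell (ALTER SYSTEM requires PostgreSQL 9.4+ and a superuser; on older versions edit postgresql.conf directly):

# persist the setting, then reload the configuration without a restart
psql -U postgres -c "ALTER SYSTEM SET log_statement = 'all';"
psql -U postgres -c "SELECT pg_reload_conf();"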
Don't confuse the server log with pgAdmin's log. Two different things!
You can also look at the server log files in pgAdmin, if you have access to the files (which may not be the case with a remote server) and have set it up correctly. In pgAdmin III, have a look at: Tools -> Server Status. That option was removed in pgAdmin 4.
I prefer to read the server log files with vim (or any editor / reader of your choice).
PostgreSQL is very advanced when it comes to logging techniques.
Logs are stored in the Installationfolder/data/pg_log folder, while log settings are placed in the postgresql.conf file.
The log format is usually set to stderr, but the CSV log format is recommended. In order to enable the CSV format, change:
log_destination = 'stderr,csvlog'
logging_collector = on
In order to log all queries, which is very useful for new installations, set the minimum execution time for a query to zero:
log_min_duration_statement = 0
In order to view active queries on your database, use:
SELECT * FROM pg_stat_activity
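A slightly filtered variant run from the shell, showing only the sessions that are currently executing something (the pid, state, and query columns exist in PostgreSQL 9.2+):

# list non-idle sessions with the statement they are running
psql -c "SELECT pid, state, query FROM pg_stat_activity WHERE state <> 'idle';"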
To log specific query types, set:
log_statement = 'all' # none, ddl, mod, all
For more information on logging queries, see PostgreSQL Log.
I found the log file at /usr/local/var/log/postgres.log on a Mac installation from Homebrew.
While using Django with Postgres 10.6, logging was enabled by default, and I was able to simply do:
tail -f /var/log/postgresql/*
(Ubuntu 18.04, Django 2+, Python 3+)
You can also look in the pg_log folder, if logging is enabled in postgresql.conf with that log directory name.
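If you are not sure where the log files ended up, you can ask the server itself; a minimal sketch (pg_current_logfile() requires PostgreSQL 10+):

psql -c "SHOW data_directory;"           # base directory of the cluster
psql -c "SHOW log_directory;"            # log location, absolute or relative to it
psql -c "SELECT pg_current_logfile();"   # current log file, PostgreSQL 10+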
I have the following problem and I need to know if there's a way to fix it.
I have a client who was cheap enough to decline buying a backup plan for his PostgreSQL databases on the main system that runs his company, and, as I thought would happen some day, some OS files crashed during a blackout and the OS needs to be reinstalled.
This client didn't have any backups of the databases but I managed to save the PostgreSQL main directory. I read that the databases are stored somehow inside the data directory of the postgres main folder.
My question is: Is there any way to recover the databases from the data folder only? I am working in a windows environment (XP service pack 2) with PostgreSQL 8.2 and I need to reinstall PostgreSQL in a new server. I would need to recreate the databases in the new environment and somehow attach the old files to the new database instances. I know that's possible in SQL Server because of the way that engine stores the databases but I have no clue in postgres.
Any ideas? They would be much appreciated.
If you have the whole data folder, you have everything you need (as long as the architecture is the same). Just try restoring it on another machine before wiping this one out, in case you didn't copy something.
Just save the data directory to disk. When launching Postgres, set the parameter telling it where the data directory is (see: wiki.postgresql.org). Or remove the original data directory of the fresh installation and place the copy in its place.
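A minimal sketch of that on Windows, assuming the saved directory was copied to C:\old_data and the installed PostgreSQL version matches the old one (both paths are placeholders):

:: start the server against the saved data directory
"C:\Program Files\PostgreSQL\8.2\bin\pg_ctl.exe" -D "C:\old_data" start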
This is possible, you just need to copy the "data" folder (inside the Postgres installation folder) from the old computer to the new one, but there are a few things to keep in mind.
First, before you copy the files, you must stop the Postgres server service: go to Control Panel -> Administrative Tools -> Services, find the Postgres service, and stop it. When you're done copying the files and setting permissions, start it again.
Second, you need to set the permissions for the data files. Because the Postgres server actually runs under another user account, it will not be able to access the files if you just copy them into the data folder, since it will not have permissions to do so. So you need to change the ownership of the files to the "postgres" user. I had to use subinacl for this; install it first, and then use it from the command prompt like this (first navigate to the folder where you installed it):
subinacl /subdirectories "C:\Program Files\PostgreSQL\8.2\data\*" /setowner=postgres
(Changing ownership should also be possible to do from the explorer: first you must disable "Use simple file sharing" in Folder options, then a "Security" tab will appear in the folder Properties dialog, and there are options there to set permissions and change ownership, but I wasn't able to do it that way.)
Now, if the server service can't start after you start it manually again, you can usually see the reason in the Event viewer (Administrative tools->Event viewer). Postgres will throw an error event, and inspecting it will give you a clue about what the problem is (sometimes it will complain about a postmaster.pid file, just remove it, etc.).
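On newer Windows versions the built-in icacls tool can make the same ownership change as subinacl; a hedged equivalent of the command above (path as in the example, account name assumed to be postgres):

:: take ownership recursively for the postgres account
icacls "C:\Program Files\PostgreSQL\8.2\data" /setowner postgres /T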
The question is very old, but I want to share an effective method that I found.
If you do not have a backup made with pg_dump and all you have is the old data folder, try the following steps.
In the Postgres database, add a record to the pg_database table, either with a manager program or with INSERT INTO.
Check and adjust the following INSERT query as necessary, then run it.
After the query has worked, the new database has an OID. Create a folder with this number as its name under data\base. Once you have copied your old data into this folder, the database is ready for use.
/*
------------------------------------------
*** Recover From Folder ***
------------------------------------------
Check this table on your own system.
Change the differences below.
*/
INSERT INTO
pg_catalog.pg_database(
datname, datdba, encoding, datcollate, datctype, datistemplate, datallowconn,
datconnlimit, datlastsysoid, datfrozenxid, datminmxid, dattablespace, datacl)
VALUES(
-- Write Your collation
'NewDBname', 10, 6, 'Turkish_Turkey.1254', 'Turkish_Turkey.1254',
False, True, -1, 12400, '536', '1', 1663, Null);
/*
Create a folder in the data directory named after the new OID.
Copy all old backup files from the "data\base\Old OID" directory
into the directory with the new OID number.
The database is now ready for use.
*/
select oid from pg_database a where a.datname = 'NewDBname';
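For example, if the old database's folder was data\base\16384 and the query above returned 24576 (both OIDs are hypothetical), the copy step from the command prompt would look like this:

:: copy the old database files into the folder named after the new OID
xcopy "C:\old_data\base\16384" "C:\Program Files\PostgreSQL\8.2\data\base\24576" /E /I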
As shown in "move database to another hard drive", all we need to do is modify the registry and the file permissions. By modifying the registry entry (shown in the image), the PostgreSQL server knows the new location of the data.
[Image: modified registry entry pointing to the data directory]
If you have issues with permissions, or with tools like icacls when installing over the old data folder, then try my solution from the sister website:
https://superuser.com/a/1611934/1254226
I did the same, but the trickiest part was changing the owner permissions:
go to Services from Administrative Tools
find the Postgres service and double-click on it
on the Log On tab, change the account to Local System
then restart the service
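The same change can be scripted from an elevated command prompt; a sketch, assuming the service is named postgresql (check the real name in services.msc; note the mandatory space after obj=):

:: run the service under the Local System account, then restart it
sc config postgresql obj= LocalSystem
net stop postgresql && net start postgresql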