How to limit infinitely growing pgbouncer logfile? - pgbouncer

My pgbouncer.log grows endlessly. How do I enable filename rollover or limit the logfile size for pgbouncer on Windows?

Related

Script to alert when a resize is necessary for the database to fit into memory?

I want to create a bash script that will alert me that the size of my database is becoming too large to fit into memory. An effective bash script would run daily and only alert me via email when db_size / memory >= 0.5.
For example: You should size up your DigitalOcean Droplet soon for your PostgreSQL server: db_size = 0.6 GB; system_memory = 1.0 GB;
The problem is that I am not sure exactly how to make this comparison accurately. My first stab at this was to use the pg_size_pretty() function via psql and the Linux command free. Is there a good/accurate comparison that I can make between db_size and total memory size (or another field) to alert me when I need to resize my droplet? Or are there any tools already in place that do this type of thing?
Bash Script
#!/usr/bin/env bash
db_size=`psql dbname username -c "SELECT pg_size_pretty( pg_database_size('dbname') );"`
echo $db_size
free
Output
user@postgres-server:~/gitrepo/scripts/cronjobs# bash postgres_db_size_check.sh
pg_size_pretty ---------------- 6976 kB (1 row)
              total        used        free      shared  buff/cache   available
Mem:        1016000       48648      142404       54492      824948      699852
Swap:             0           0           0
What is the correct comparison to use in the output above? In the output above the size of the PostgreSQL DB is only 6.976 MB and the server's total memory is 1,016.00 MB, so right now the database is only about 0.7% of RAM (6.976 / 1,016.00) with < 20 users. As I scale my database I expect its size to increase quickly, and I want to stay ahead of it before RAM becomes an issue.
After doing some digging into PostgreSQL tuning I have gathered that giving PostgreSQL more memory to use is not as straightforward as just increasing the amount of RAM. Several variables likely need to be set to use the additional memory including work_mem, effective_cache_size, and shared_buffers. I say this to note that I realize increasing the size of my server's RAM is only one step in the process.
I asked this question on Database Administrators but no one got back to me over there.

JMeter Ubuntu: java.net.SocketException: Too many open files

When a JMeter script with 5000 values in a CSV file is executed with a Synchronization Timer, the response data in the View Results Tree listener shows the following error:
java.net.SocketException: Too many open files
I have not yet found a satisfactory answer on Google.
Is there any way to resolve this?
Increase the number of open file handles or file descriptors per process.
You can use the command ulimit -a to find out how many open file handles are allowed per process.
$ ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 10
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 2048
virtual memory (kbytes, -v) unlimited
You can see open files (-n) 1024, which means only 1024 open file handles are allowed per process. If your Java program exceeds this limit, it will throw a java.net.SocketException: Too many open files error.
See these threads: I/O exception (java.net.SocketException) and java.net.SocketException: Too many open files.

Difference between Linux errno 23 and Linux errno 24

What is the difference between these two Linux error codes in errno.h: 23 and 24?
I checked two different sites but still can't understand the difference between them.
[EMFILE]
Too many open files.
[ENFILE]
Too many files open in system.
# define ENFILE 23 /* File table overflow */
# define EMFILE 24 /* Too many open files */
Also, I am getting errno 24, and the socket call fails on the 974th attempt (AF_INET UDP datagram socket).
When I run cat /proc/sys/fs/file-max I see a value of 334076.
ulimit -n shows 1024.
Any idea what can be done to increase the limit?
Both error codes are about having too many open files. EMFILE means too many files are open in your process; ENFILE means too many files are open in the entire system.
You can increase the maximum number of open files / file descriptors system-wide:
sysctl -w fs.file-max=100000
Or open
/etc/sysctl.conf
and append/change fs.file-max to the number you need:
fs.file-max = 100000
Then run
sysctl -p
to reload the new settings.
If you don't want to set system-wide FD (file-descriptor) limits, you can set the user-level FD limits.
You need to edit the file
/etc/security/limits.conf
And for user YOUR_USER, add these lines:
YOUR_USER soft nofile 4096
YOUR_USER hard nofile 10240
to set the soft and hard limits for user YOUR_USER.
Save and close the file.
To see the hard and soft limits for user YOUR_USER:
su - YOUR_USER
ulimit -Hn
ulimit -Sn
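To make the per-process limit concrete, here is a minimal C sketch of my own (not part of the original answer) that mirrors the asker's symptom: it keeps creating AF_INET UDP sockets until socket() fails and then prints the errno it hit. With a soft limit of 1024 and a few descriptors already open for other things, the failure lands somewhat below 1024, much like the 974th call reported above.
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    int count;

    /* Keep opening AF_INET UDP sockets until the kernel refuses. */
    for (count = 0; count < 100000; count++) {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) {
            /* On Linux, errno 24 is EMFILE (per-process limit) and
               errno 23 is ENFILE (system-wide fs.file-max limit). */
            printf("socket() failed after %d sockets: errno=%d (%s)\n",
                   count, errno, strerror(errno));
            return 1;
        }
    }
    printf("opened %d sockets without hitting a limit\n", count);
    return 0;
}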

sqlite3 C API cant Open/Close more than 1024 database

Hi,
I have an application that runs continuously and needs to access a sqlite3 database.
The program crashes after about 1022 open/close cycles of the same database.
Example:
int i = 1024;
sqlite3 *db;
while (i) {
    sqlite3_open("database.sqlite", &db);
    // execute prepared statement
    sqlite3_close(db);
    i--;
}
After 1022 iterations I can't open the database; I get the error:
Failed to open database unable to open database fileFailed to prepare database library routine called out of sequence2
I took a look at the SQLite limits documentation but found no mention of such a limit:
http://sqlite.org/limits.html
You're bumping into the max open files per process limit of the operating system itself.
Have a look at ulimit -S -a (mine is shown here as an example):
xenon-lornix:~> ulimit -S -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 29567
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 29567
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Look through the list, see where it says open files? The default limit is 1024, meaning that a user (you in this case!) may only have a maximum of 1024 files open per process at once.
A typical program has 3 files open by default, STDIN, STDOUT, & STDERR... leaving 1021 file descriptors available... so when you go to open the 1022nd database, it refuses, and fails.
Be sure to read the man page covering ulimit; surprisingly, it's NOT man ulimit! The best documentation is in bash, so try man bash, then press slash ('/') and type ulimit to search for it (around line 3383 in my bash man page).
The more detailed programming side of the various ulimits can be found in man getrlimit.
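As a companion to that man page reference, here is a minimal sketch (my addition, not the answerer's code) of the getrlimit/setrlimit calls it documents: it reads the current RLIMIT_NOFILE values and raises the soft limit up to the hard limit, which is all an unprivileged process is allowed to do.
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* Read the current soft and hard limits on open file descriptors. */
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur, (unsigned long long)rl.rlim_max);

    /* An unprivileged process may raise its soft limit up to the hard limit. */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    printf("soft limit raised to %llu\n", (unsigned long long)rl.rlim_cur);
    return 0;
}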
Please remember that ulimit works with both HARD and SOFT limits. A user can change their SOFT limit (via -S) from 0 up to whatever the HARD (-H) limit value is. But a user cannot RAISE their HARD limit, and if a user LOWERS their HARD limit, they can't raise it back up again. Only a super-user (root) may raise a HARD limit.
So to raise your SOFT open files limit, try this:
ulimit -S -n 8192
A quirk... when neither -H nor -S is given, ulimit sets both the soft and hard limits. I have an alias for ulimit so that it only touches the soft limit, like this:
alias ulimit='ulimit -S'
If you happen to add the -H option, it overrides the default soft (-S) option, so all is good.
To see your hard limits:
ulimit -H -a
You probably don't call sqlite3_finalize() on your prepared statements. You can't close a database that still has outstanding prepared statements: sqlite3_close() returns an error and leaves the connection, and its file descriptor, open, so each iteration leaks one descriptor until you hit the per-process limit. Do you check the result code of sqlite3_close?
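To make that concrete, here is a hedged sketch of the loop with proper cleanup (my code, not the answerer's; "some_table" and the query are placeholders): every prepared statement is finalized before the close, and the result of sqlite3_close is checked, since a close that fails with SQLITE_BUSY leaves the connection's file descriptor open.
#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    for (int i = 0; i < 2000; i++) {
        sqlite3 *db = NULL;
        sqlite3_stmt *stmt = NULL;

        if (sqlite3_open("database.sqlite", &db) != SQLITE_OK) {
            fprintf(stderr, "open failed at iteration %d: %s\n", i,
                    db ? sqlite3_errmsg(db) : "out of memory");
            sqlite3_close(db);              /* safe on NULL, frees the handle */
            return 1;
        }

        /* "some_table" is a placeholder; substitute the real statement. */
        if (sqlite3_prepare_v2(db, "SELECT count(*) FROM some_table", -1,
                               &stmt, NULL) == SQLITE_OK) {
            while (sqlite3_step(stmt) == SQLITE_ROW) {
                /* ... read columns here ... */
            }
        }

        /* Finalize every statement before closing; otherwise sqlite3_close()
           returns SQLITE_BUSY and the underlying file descriptor stays open. */
        sqlite3_finalize(stmt);

        if (sqlite3_close(db) != SQLITE_OK) {
            fprintf(stderr, "close failed at iteration %d: %s\n", i,
                    sqlite3_errmsg(db));
            return 1;
        }
    }
    return 0;
}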

fopen does not deal with more than 60 files at the same time

I need to have more than 60 text files open at the same time in my C program. However, it seems that fopen is not able to handle more than 60 files simultaneously. I am programming in a Windows environment.
I use the following fopen statement:
fopen(fileName.c_str(),"wt");
where fileName is the path of my txt file, whose name changes inside a loop over 100 files. Does anybody know a trick to make this work, or an alternative?
If you issue the bash shell command:
ulimit -n
you'll see that 60 is your limit for open file handles. You can change it with:
ulimit -n 256
Note: there are soft (-S) and hard (-H) limits, which you can see with -Sn and -Hn; you can raise your soft limit up to your hard limit.
There are actually two things that constrain how many files you can have open at any time:
The environment limit specified by ulimit -n.
The C runtime library. I know of several that limit you to 256 file handles (Sun, to name one).
Your current limit is probably 63 once you take into account STDIN, STDOUT, and STDERR already being open. I don't know of a system that goes that low, so it's probably your ulimit, but you need to be aware of the other limit as well.
On Windows you can use _setmaxstdio(n), but in the default case you should still be able to open 512 files, so I'm still a little confused as to why you only get 60-odd unless you open each file about 8 times...
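Here is a minimal sketch of that approach (my example, assuming the Microsoft C runtime; the file names are made up for illustration): it raises the CRT's stdio stream limit with _setmaxstdio and then opens a batch of files in a loop, reporting how many succeeded.
#include <stdio.h>

int main(void)
{
    /* Microsoft CRT only: raise the stdio stream limit from its default of 512.
       _setmaxstdio() returns the new maximum, or -1 on failure. */
    if (_setmaxstdio(2048) == -1) {
        fprintf(stderr, "_setmaxstdio failed\n");
        return 1;
    }

    FILE *files[1000];
    int opened = 0;

    for (int i = 0; i < 1000; i++) {
        char name[64];
        snprintf(name, sizeof name, "out_%03d.txt", i);   /* made-up file names */
        files[i] = fopen(name, "wt");
        if (files[i] == NULL) {
            fprintf(stderr, "fopen failed at file %d\n", i);
            break;
        }
        opened++;
    }
    printf("opened %d files simultaneously\n", opened);

    while (opened > 0)
        fclose(files[--opened]);
    return 0;
}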
