How can I increase PostgreSQL's max_stack_depth on Windows (10)?
I tried to increase it in postgresql.conf (the current value is 2MB), but I can't set it higher than 3MB. If I set a higher value, the PostgreSQL service won't start.
From the manual:
Setting max_stack_depth higher than the actual kernel limit will mean
that a runaway recursive function can crash an individual backend
process. On platforms where PostgreSQL can determine the kernel limit,
the server will not allow this variable to be set to an unsafe value.
However, not all platforms provide the information, so caution is
recommended in selecting a value.
So what is this limit on Windows? Once you have that answer, you can make the corresponding configuration change in PostgreSQL.
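As far as I know, Windows has no ulimit -s equivalent: a thread's stack size is fixed when the executable is linked (or when the thread is created), so the ceiling PostgreSQL runs into here presumably comes from the stack reserve its server binary was built with, rather than from a tunable kernel setting. Purely as an illustration (none of this is PostgreSQL-specific), a process built for Windows 8 or later can inspect its own reserved stack region like this:

    /* Illustration only: inspect the stack region Windows reserved for the
     * current thread (requires Windows 8 / Server 2012 or later). */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        ULONG_PTR low = 0, high = 0;
        GetCurrentThreadStackLimits(&low, &high);   /* bounds of the reserved stack */
        printf("reserved stack: %llu KB\n",
               (unsigned long long)(high - low) / 1024);
        return 0;
    }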
I'm trying to do a couple of tests where I need to set the computer time backward or forward depending on some external values. I know that I can do this using clock_settime() in time.h.
I've encountered the problem that when needing to set the time backward, the operation fails.
The documentation for clock_settime states that
Only the CLOCK_REALTIME clock can be set, and only the superuser may do so. If the system securelevel is greater than 1 (see init(8)), the time may only be advanced. This limitation is imposed to prevent a malicious superuser from setting arbitrary time stamps on files. The system time can still be adjusted backwards using the adjtime(2) system call even when the system is secure.
I require nanosecond precision, and adjtime(), as far as I understand, does not offer nanosecond precision. The other problem with adjtime() is that it does not set the clock outright; rather, it slows the clock down until it catches up to the target value.
I've done some reading on init(8), but I'm not sure how to lower the securelevel, and frankly I'd rather not be forced to do this. However, if there's no other way, I'm willing to try it.
Thanks in advance
Update 1
I started looking into altering securelevel and now I'm not even sure if that's something that can be done on Ubuntu. Around the web, I have come across mentions of editing /etc/init/rc-sysinit.conf, /etc/init/rc.conf, or /etc/sysctl.conf and, again, I'm not sure what needs to be added in order to lower the securelevel if, in fact, this is something that can be done. Especially since I could not find an 'rc.securelevel' file.
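For reference, this is roughly what I'm doing (simplified; it assumes the program runs as root, and older glibc needs -lrt for clock_settime):

    /* Simplified sketch: step CLOCK_REALTIME back by one second. */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec ts;

        if (clock_gettime(CLOCK_REALTIME, &ts) != 0) {
            perror("clock_gettime");
            return 1;
        }
        ts.tv_sec -= 1;                      /* move the clock backward */

        if (clock_settime(CLOCK_REALTIME, &ts) != 0) {
            perror("clock_settime");         /* this is where it fails when going backward */
            return 1;
        }
        return 0;
    }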
Presumably there is a library or simple asm blob that can get me the number of the current CPU that I am executing on.
Use sched_getcpu to determine the CPU on which the calling thread is running. See man getcpu (the system call) and man sched_getcpu (a library wrapper). However, note what it says:
The information placed in cpu is only guaranteed to be current at the time of the call: unless the CPU affinity has been fixed using sched_setaffinity(2), the kernel might change the CPU at any time. (Normally this does not happen because the scheduler tries to minimize movements between CPUs to keep caches hot, but it is possible.) The caller must be prepared to handle the situation when cpu and node are no longer the current CPU and node.
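For example (glibc 2.6 or later; _GNU_SOURCE must be defined before including sched.h):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        int cpu = sched_getcpu();       /* may already be stale by the time you use it */
        if (cpu == -1) {
            perror("sched_getcpu");
            return 1;
        }
        printf("running on CPU %d\n", cpu);
        return 0;
    }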
You need to do something like:
Call sched_getaffinity and identify the CPU bits
Iterate over the CPUs, doing sched_setaffinity to each one (I'm not sure if after sched_setaffinity you're guaranteed to be on that CPU, or need to yield explicitly?)
Execute CPUID (the asm instruction)... there is a way of getting a unique per-core ID out of one of its outputs (see the Intel docs). I vaguely recall it's the "APIC ID".
Build a table (a std::map ?) from APIC IDs to a CPU number or affinity mask or something.
If you did this on your main thread, don't forget to set the affinity back to all CPUs with sched_setaffinity!
Now you can CPUID again whenever you need to and lookup which core you're on.
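A rough sketch of those steps (assuming Linux, GCC/Clang and an x86 CPU; the "initial APIC ID" is read from CPUID leaf 1, EBX bits 31..24):

    #define _GNU_SOURCE
    #include <cpuid.h>      /* __get_cpuid (GCC/Clang wrapper around the CPUID instruction) */
    #include <sched.h>
    #include <stdio.h>

    static unsigned apic_id(void)
    {
        unsigned eax, ebx, ecx, edx;
        __get_cpuid(1, &eax, &ebx, &ecx, &edx);   /* leaf 1 is available on any CPUID-capable x86 */
        return ebx >> 24;                         /* initial APIC ID */
    }

    int main(void)
    {
        cpu_set_t original, one;
        int apic_to_cpu[256];
        for (int i = 0; i < 256; i++) apic_to_cpu[i] = -1;

        if (sched_getaffinity(0, sizeof original, &original) != 0)
            return 1;

        for (int cpu = 0; cpu < CPU_SETSIZE; cpu++) {
            if (!CPU_ISSET(cpu, &original))
                continue;
            CPU_ZERO(&one);
            CPU_SET(cpu, &one);
            if (sched_setaffinity(0, sizeof one, &one) != 0)
                continue;
            sched_yield();                        /* make sure we really are on `cpu` */
            apic_to_cpu[apic_id()] = cpu;         /* record APIC ID -> logical CPU */
        }

        /* Restore the original mask so the thread can run anywhere again. */
        sched_setaffinity(0, sizeof original, &original);

        printf("currently on CPU %d\n", apic_to_cpu[apic_id()]);
        return 0;
    }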
But I'd query why you need to do this; normally you want to take control via sched_setaffinity rather than finding out which core you're on (and even that's a pretty rare thing to want/need). (That's why I don't know the crucial detail of what to pull out of CPUID exactly, sorry!)
Update: Just learned about sched_getcpu from litb's response here. Much better! (my Debian/etch libc is too old to have it though).
I don't know of anything to get your current core id. With kernel level task/process migration, you wouldn't be guaranteed that it would remain constant for any length of time, unless you were running in some form of real-time mode.
If you want to be on a specific core, you can use the sched_setaffinity() function or the taskset command to launch your program. I believe that these need elevated permissions to work, though. In your program, you could then run sched_getaffinity() to see the mask that was set earlier and use that as a best guess at the core on which you are executing.
sysconf(_SC_NPROCESSORS_ONLN);
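(That gives the number of CPUs currently online, not which one you are running on; in context it would be used like this:)

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long n = sysconf(_SC_NPROCESSORS_ONLN);   /* CPUs currently online */
        printf("%ld CPUs online\n", n);
        return 0;
    }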
What's the best way to grant a process/thread the right to lower its own nice value, without running it with full privileges? Solution can be external to the process itself (ulimit or setcap for example).
I'm looking for something portable at least across modern Linux and Mac OS X (and this is why I didn't reply myself with ulimit or setcap).
You'll need extra privileges to decrease the nice value (increase the logical priority). In Linux, this means either being run by root or having the CAP_SYS_NICE capability. Both can be set for the binary executable (either setuid root via chown and chmod, or setcap). The former will work on all Unix-like systems (but will require root privileges when installed), but the latter is Linux-specific.
The most acceptable portable way is probably to write a wrapper program that can be installed setuid root. It will be very simple, just a couple of dozen lines of C. It simply calls sched_get_priority_min(), sched_get_priority_max(), sched_setscheduler(), and sched_setparam() to lower the nice value (getting it more CPU time), then calls seteuid(0); setregid(getgid(), getgid()); setreuid(getuid(), getuid()); to drop the extra privileges, and finally execv()s the actual program. Note: you most definitely want to hardcode the path to the actual program at install time. This should work without modifications on all Linux and Unix-like systems.
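A hedged sketch of such a wrapper (the target path is a placeholder to be hardcoded at install time; in this sketch I use setpriority() to lower the nice value itself, and leave out the sched_*() calls, which you would add if you also want a realtime scheduling policy):

    /* Sketch of a setuid-root wrapper: raise priority, drop privileges, exec. */
    #include <stdio.h>
    #include <sys/resource.h>
    #include <unistd.h>

    #define TARGET "/usr/local/bin/real-program"   /* placeholder: the actual program */

    int main(int argc, char *argv[])
    {
        (void)argc;

        /* Lower the nice value (raise priority) while still effectively root. */
        if (setpriority(PRIO_PROCESS, 0, -10) != 0)
            perror("setpriority");                 /* non-fatal in this sketch */

        /* Drop the extra privileges: group first, then user. */
        if (setregid(getgid(), getgid()) != 0 ||
            setreuid(getuid(), getuid()) != 0) {
            perror("dropping privileges");
            return 1;
        }

        execv(TARGET, argv);                       /* pass the caller's argv through */
        perror("execv");
        return 1;
    }

Installed with chown root wrapper and chmod u+s wrapper, any user can then run the target program at the raised priority.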
In your actual program, you simply increase the niceness of the threads that are not so important. In other words, you do not try to lower the niceness of any threads in your program, but increase the niceness of all other threads. The setuid root wrapper program is the portable way to reduce the minimum niceness level. You can obviously check the current niceness and scheduler details first to see if there is enough range to adjust. Perhaps your wrapper program can set command-line parameters or environment variables that tell the actual program which priority levels to use.
Any process can make itself nicer using setpriority() or sched_setscheduler(), and any thread using pthread_setschedparam() and pthread_setschedprio(). All of these are defined in POSIX.1-2001, so they should be available on basically all non-Windows systems. For details on the scheduler types and priorities available, see man 2 sched_setscheduler.
Note that for nice values, a higher number means a nicer process (lower logical priority): the larger the value, the less CPU time it gets. For the realtime policies it is the other way around; to find out the minimum and maximum priority values for a given scheduling policy, use sched_get_priority_min() and sched_get_priority_max().
Normally a process or thread should always be able to lower its priority (make itself nicer) and switch to any scheduling policy that does not make it less nice. However, Linux kernels prior to 2.6.12 did not allow that for normal users, so your program should just try to make itself or some of its threads nicer and not worry too much if that happens not to be allowed on some older systems. Most importantly, your algorithmic design should not rely on scheduling; strive for more robust code than that.
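As a concrete example of the unprivileged direction - a process making itself nicer with setpriority(), no special rights needed:

    #include <errno.h>
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        errno = 0;
        int cur = getpriority(PRIO_PROCESS, 0);       /* -1 is a legal value, so check errno */
        if (cur == -1 && errno != 0) {
            perror("getpriority");
            return 1;
        }

        if (setpriority(PRIO_PROCESS, 0, cur + 5) != 0) {   /* five steps nicer */
            perror("setpriority");
            return 1;
        }
        printf("nice value raised from %d to %d\n", cur, cur + 5);
        return 0;
    }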
For a process, I have set a soft limit of 335544320 and a hard limit of 1610612736 for the resource RLIMIT_AS. Even after setting these values, the address space of the process only grows to a maximum of about 178MB. But I can see the soft and hard limits in /proc/process_number/limits correctly set to the values above.
I wanted to know whether RLIMIT_AS is working in my OS and would also like to know how I can test for the RLIMIT_AS feature.
CentOS 5.5(64 bit) is the operating system that I am using.
Could someone please help me with this? Thank you!
All setrlimit() limits are upper limits. A process is allowed to use as much of a resource as it needs, as long as it stays under the soft limits. From the setrlimit() manual page:
The soft limit is the value that the kernel enforces for the corresponding resource. The hard limit acts as a ceiling for the soft limit: an unprivileged process may only set its soft limit to a value in the range from 0 up to the hard limit, and (irreversibly) lower its hard limit. A privileged process (under Linux: one with the CAP_SYS_RESOURCE capability) may make arbitrary changes to either limit value.
Practically this means that the hard limit is an upper limit for both the soft limit and itself. The kernel only enforces the soft limits during the operation of a process - the hard limits are checked only when a process tries to change the resource limits.
In your case, you specify an upper limit of 320MB for the address space and your process uses about 180MB of that - well within its resource limits. If you want your process to grow, you need to do it in its code.
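If you just want to verify that RLIMIT_AS is enforced on your system, a tiny test like this (the figures are arbitrary) is enough: lower the limit, then try to allocate past it and watch malloc() fail.

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl = { 256UL * 1024 * 1024, 256UL * 1024 * 1024 };  /* 256 MB */
        if (setrlimit(RLIMIT_AS, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }

        void *p = malloc(512UL * 1024 * 1024);       /* well past the limit */
        if (p == NULL)
            printf("allocation refused as expected: %s\n", strerror(errno));
        else
            printf("allocation unexpectedly succeeded\n");
        free(p);
        return 0;
    }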
BTW, resource limits are intended to protect the system - not to tune the behaviour of individual processes. If a process runs into one of those limits, it's often doubtful that it will be able to recover, no matter how good your fault handling is.
If you want to tune the memory usage of your process by e.g. allocating more buffers for increased performance you should do one or both of the following:
ask the user for an appropriate value. This is in my opinion the one thing that should always be possible. The user (or a system administrator) should always be able to control such things, overriding any and all guesswork from your application.
check how much memory is available and try to guess a good amount to allocate.
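For the second option, on Linux/glibc a rough starting point is sysconf() (the _SC_PHYS_PAGES and _SC_AVPHYS_PAGES names are glibc extensions, not strict POSIX):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long page  = sysconf(_SC_PAGESIZE);
        long total = sysconf(_SC_PHYS_PAGES);     /* glibc extension */
        long avail = sysconf(_SC_AVPHYS_PAGES);   /* glibc extension */

        printf("total RAM : %lld MB\n", (long long)total * page / (1024 * 1024));
        printf("available : %lld MB\n", (long long)avail * page / (1024 * 1024));
        return 0;
    }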
As a side note, you can (and should) deal with 32-bit vs 64-bit at compile time. Runtime checks for something like this are error-prone and waste CPU cycles. Keep in mind, however, that the CPU "bitness" has no direct relation to the amount of available memory:
32-bit systems do indeed impose a limit (usually in the 1-3 GB range) on the memory that a process can use. That does not mean that this much memory is actually available.
64-bit systems, being relatively newer, usually have more physical memory. That does not mean that a specific system actually has it, or that your process should use it. For example, many people have built 64-bit home file servers with 1GB of RAM to keep the cost down. And I know quite a few people who would be annoyed if a random process forced their DBMS to swap just because it only thinks of itself.
I would like to cap the memory used by my application (developed in C). Say my application should not exceed 64MB of memory. I also need to avoid using too much CPU. How is this possible?
Regards
Marcel.
Under Unix: ulimit -d 65536 (ulimit -d takes the value in kilobytes, so 65536 KB = 64 MB).
One fairly low-tech way I could ensure of not crossing a maximum threshold of memory in your application would be to define your own special malloc() function which keeps count of how much memory has been allocated, and returns a NULL pointer if the threshold has been exceeded. This would of course rely on you checking the return value of malloc() every time you call it, which is generally considered good practice anyway because there is no guarantee that malloc() will find a contiguous block of memory of the size that you requested.
This wouldn't be foolproof, though, because it probably won't take into account allocator overhead or padding for word alignment, so you'd likely reach the 64MB memory limit long before your function reports that you have.
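Something along these lines, for instance (xmalloc/xfree are names made up for the example; real code would also have to cover calloc/realloc and add locking for threads):

    #include <stddef.h>
    #include <stdlib.h>

    #define MEMORY_LIMIT (64 * 1024 * 1024)   /* 64 MB budget */

    static size_t allocated = 0;              /* running total (not thread-safe) */

    /* Header stored in front of each block; sized/aligned like max_align_t so
     * the pointer handed back keeps malloc's alignment guarantee. */
    union header { size_t size; max_align_t align; };

    void *xmalloc(size_t size)
    {
        if (allocated + size > MEMORY_LIMIT)
            return NULL;                      /* over budget: caller must check */
        union header *h = malloc(sizeof *h + size);
        if (h == NULL)
            return NULL;
        h->size = size;
        allocated += size;
        return h + 1;
    }

    void xfree(void *ptr)
    {
        if (ptr == NULL)
            return;
        union header *h = (union header *)ptr - 1;
        allocated -= h->size;                 /* give the bytes back to the budget */
        free(h);
    }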
Also, assuming you are using Win32, there are probably APIs that you could use to get the current process size and check this within your custom malloc() function. Keep in mind that adding this checking overhead to your code will most likely cause it to use more CPU and run a lot slower than normal, which leads nicely into your next question :)
I also need to avoid using too much CPU.
This is a very general question and there is no easy answer. You could write two different programs which essentially do the same thing, and one could be 100 times more CPU intensive than another one due to the algorithms that have been used. The best technique is to:
Set some performance benchmarks.
Write your program.
Measure to see whether it reaches your benchmarks.
If it doesn't reach your benchmarks, optimise and go to step (3).
You can use profiling programs to help you work out where your algorithms need to be optimised. Rational Quantify is an example of a commercial one, but there are many free profilers out there too.
If you are on a POSIX, System V-, or BSD-derived system, you can use setrlimit() with the resource RLIMIT_DATA - similar to ulimit -d.
Also take a look at the RLIMIT_CPU resource - it's probably what you need (similar to ulimit -t).
Check man setrlimit for details.
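For example (the values are arbitrary):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit data = { 64UL * 1024 * 1024, 64UL * 1024 * 1024 };  /* 64 MB data segment */
        struct rlimit cpu  = { 10, 10 };                                  /* 10 seconds of CPU time */

        if (setrlimit(RLIMIT_DATA, &data) != 0) perror("setrlimit(RLIMIT_DATA)");
        if (setrlimit(RLIMIT_CPU,  &cpu)  != 0) perror("setrlimit(RLIMIT_CPU)");

        /* The rest of the program now runs under these limits; exceeding
         * RLIMIT_CPU delivers SIGXCPU. */
        return 0;
    }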
For CPU, we've had a very low-priority task (lower than everything else) that does nothing but count. Then you can see how often that task gets to run, and you know if the rest of your processes are consuming too much CPU. This approach doesn't work if you want to limit your process to 10% while other processes are running, but if you want to ensure that you have 50% CPU free then it works fine.
For memory limitations you are either stuck implementing your own layer on top of malloc, or taking advantage of your OS in some way. On Unix systems ulimit is your friend. On VxWorks I bet you could probably figure out a way to take advantage of the task control block to see how much memory the application is using... if there isn't already a function for that. On Windows you could probably at least set up a monitor to report if your application does go over 64 MB.
The other question is: what do you do in response? Should your application crash if it exceeds 64MB? Do you want this just as a guide to help you limit yourself? That might make the difference between choosing an "enforcing" approach versus a "monitor and report" approach.
Hmm; good question. I can see how you could do this for memory allocated off the heap, using a custom version of malloc and free, but I don't know about enforcing it on the stack too.
Managing the CPU is harder still...
Interesting.