Get current Payara MaxHeapSize and MetaspaceSize

I have a running Payara 4 instance for which I set MaxHeapSize and MetaspaceSize as described here, to make it production-ready. How can I check that those values were set correctly?

You can check this by running jmap -heap <pid> against the PID of the Payara process; jmap is located in the JDK bin directory.
On JDK 9+ you need to use jhsdb jmap --heap --pid <pid> instead to get the same information.
The output contains the values you are looking for, e.g.:
Heap Configuration:
MinHeapFreeRatio = 0
MaxHeapFreeRatio = 100
MaxHeapSize = 268435456 (256.0MB)
NewSize = 89128960 (85.0MB)
MaxNewSize = 89128960 (85.0MB)
OldSize = 179306496 (171.0MB)
NewRatio = 2
SurvivorRatio = 8
MetaspaceSize = 21807104 (20.796875MB)
CompressedClassSpaceSize = 1073741824 (1024.0MB)
MaxMetaspaceSize = 17592186044415 MB
G1HeapRegionSize = 0 (0.0MB)
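Alternatively, you can verify the limits from inside the running JVM itself via the java.lang.management API, for example from a small app deployed to Payara. A minimal standalone sketch (class name and output format are my own):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class JvmMemoryCheck {
    public static void main(String[] args) {
        // Effective max heap (-Xmx / MaxHeapSize) as seen by this JVM
        long heapMax = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getMax();
        System.out.println("MaxHeapSize = " + heapMax + " (" + heapMax / (1024 * 1024) + "MB)");

        // Metaspace is exposed as a non-heap memory pool; max is -1 when unlimited
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if ("Metaspace".equals(pool.getName())) {
                System.out.println("Metaspace max = " + pool.getUsage().getMax());
            }
        }
    }
}
```

This reports what the JVM actually got, which is useful when you are unsure whether the domain configuration was picked up.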

How does ffmpeg use concat of filter in C?

The following command line works fine for me:
ffmpeg -i test1.mp4 -i test2.mp4 -filter_complex "movie='test1.mp4',scale=640:360[v1];movie='test2.mp4',scale=640:360[v2];[v1][v2]concat" testout.mp4
This is my filter-graph configuration code:
AVFilterInOut* inputs = avfilter_inout_alloc();
AVFilterInOut* outputs = avfilter_inout_alloc();
...
avfilter_graph_parse_ptr(filter->filterGraph,
    "movie='test1.mp4',scale=640:360[v1];movie='test2.mp4',scale=640:360[v2];[v1][v2]concat",
    &inputs, &outputs, NULL);
avfilter_graph_config(filter->filterGraph, NULL);
This reports the following error:
[h264 # 0000026dfecef780] Application has requested 17 threads. Using a thread count greater than 16 is not recommended.
[h264 # 0000026dffae9d00] Application has requested 17 threads. Using a thread count greater than 16 is not recommended.
Output pad "default" with type video of the filter instance "in" of buffer not connected to any destination
How can I configure the filter correctly?

How to add options to ntpd

I'd like to add a new option to ntpd; however, I couldn't find out how to regenerate ntpd/ntpd-opts{.c, .h} after adding some lines to ntpd/ntpdbase-opts.def, e.g.:
$ git diff ntpd/ntpdbase-opts.def
diff --git a/ntpd/ntpdbase-opts.def b/ntpd/ntpdbase-opts.def
index 66b953528..a790cbd51 100644
--- a/ntpd/ntpdbase-opts.def
+++ b/ntpd/ntpdbase-opts.def
@@ -479,3 +479,13 @@ flag = {
the server to be discovered via mDNS client lookup.
_EndOfDoc_;
};
+
+flag = {
+ name = foo;
+ value = F;
+ arg-type = number;
+ descrip = "Some new option";
+ doc = <<- _EndOfDoc_
+ For testing purpose only.
+ _EndOfDoc_;
+};
Do you have any ideas?
how to generate ntpd/ntpd-opts{.c, .h} after adding some lines to ntpd/ntpdbase-opts.def
This is handled by the build scripts. Just build the project normally, as described in https://github.com/ntp-project/ntp/blob/master-no-authorname/INSTALL#L30, and make will pick up the change:
https://github.com/ntp-project/ntp/blob/master-no-authorname/ntpd/Makefile.am#L304
https://github.com/ntp-project/ntp/blob/master-no-authorname/ntpd/Makefile.am#L183
In addition to @KamilCuk's answer, the following steps are needed to add a custom option:
Edit *.def file
Run bootstrap script
Run configure script with --disable-local-libopts option
Run make
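Put together as a shell session (using the scripts named above, run from the top of the ntp source tree):

```shell
# From the top of the ntp source tree, after editing ntpd/ntpdbase-opts.def:
./bootstrap                          # regenerate the autotools files (needs GNU AutoGen)
./configure --disable-local-libopts  # build against the system libopts
make                                 # regenerates ntpd/ntpd-opts.{c,h} and rebuilds ntpd
```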
For example,
$ git diff ntpd/ntpdbase-opts.def
diff --git a/ntpd/ntpdbase-opts.def b/ntpd/ntpdbase-opts.def
index 66b953528..a790cbd51 100644
--- a/ntpd/ntpdbase-opts.def
+++ b/ntpd/ntpdbase-opts.def
@@ -479,3 +479,13 @@ flag = {
the server to be discovered via mDNS client lookup.
_EndOfDoc_;
};
+
+flag = {
+ name = foo;
+ value = F;
+ arg-type = number;
+ descrip = "Some new option";
+ doc = <<- _EndOfDoc_
+ For testing purpose only.
+ _EndOfDoc_;
+};
This change yields:
$ ./ntpd --help
ntpd - NTP daemon program - Ver. 4.2.8p15
Usage: ntpd [ -<flag> [<val>] | --<name>[{=| }<val>] ]... \
[ <server1> ... <serverN> ]
Flg Arg Option-Name Description
-4 no ipv4 Force IPv4 DNS name resolution
- prohibits the option 'ipv6'
...
-F Num foo Some new option
opt version output version information and exit
-? no help display extended usage information and exit
-! no more-help extended usage information passed thru pager
Options are specified by doubled hyphens and their name or by a single
hyphen and the flag character.
...

What are the best values in postgresql.conf for better performance?

I am using PostgreSQL 9.0.1 on Linux Debian 8, on an HP ProLiant ML110 Gen9 server:
Processor: (1) Intel Xeon E5-1603v3 (2.8GHz/4-core/10MB/140W)
RAM: 8GB DDR4
Hard disk: SATA 1TB 7.2K rpm LFF
More specifications here: https://www.hpe.com/us/en/product-catalog/servers/proliant-servers/pip.specifications.hpe-proliant-ml110-gen9-server.7796454.html
Below are the parameters present in postgresql.conf. Which values would you recommend, for example for cpu_index_tuple_cost and the other cpu_* settings, based on this server, so that I do not have to use the default values?
#seq_page_cost = 1.0
#random_page_cost = 4.0
#cpu_tuple_cost = 0.01
#cpu_index_tuple_cost = 0.005
#cpu_operator_cost = 0.0025
max_connections = 100
shared_buffers = 2GB
effective_cache_size = 6GB
work_mem = 52428kB
maintenance_work_mem = 1GB
checkpoint_segments = 128
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 500

APC needs restart to see changes

I have Apache2 with APC.
When I change something, I have to restart Apache to see the effect. I am 100% sure this is because of APC.
What is wrong with my settings? (Thanks for the help!)
extension=apc.so
apc.enabled = On
apc.optimization = 0
apc.shm_segments = 1
apc.shm_size = 2.6G
apc.ttl = 7200
apc.user_ttl = 720
apc.num_files_hint = 102400
apc.mmap_file_mask = /tmp/apc.XXXXXX
apc.enable_cli = 1
apc.cache_by_default = 1
apc.max_file_size = 220M
apc.stat = 0
You have apc.stat set to 0. This means APC will not check whether a file has been modified when it is requested; it will always serve it from the cache after the first compilation.
To fix your problem, either remove apc.stat = 0 or change it back to the default, apc.stat = 1.
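That is, the relevant fragment of the APC configuration becomes (only the changed line shown in context):

```ini
extension=apc.so
apc.enabled = On
; Re-check files for modifications on each request (the default behaviour)
apc.stat = 1
```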

m_tornBits field in SQL Server page

Every page in an .mdf file (SQL Server) has an m_tornBits field in the page header.
Can anybody explain what this value means?
Here is an example of a page header:
PAGE HEADER:
Page #0x1A198000
m_pageId = (1:135) m_headerVersion = 1 m_type = 1
m_typeFlagBits = 0x0 m_level = 0 m_flagBits = 0x2
m_objId = 3 m_indexId = 0 m_prevPage = (1:89)
m_nextPage = (0:0) pminlen = 46 m_slotCnt = 80
m_freeCnt = 2360 m_freeData = 7036 m_reservedCnt = 0
m_lsn = (8:213:7) m_xactReserved = 0 m_xdesId = (0:834)
m_ghostRecCnt = 0 m_tornBits = 822083793
Here the m_tornBits field is 822083793.
What does this mean?
From Technet: SQL Server 2000 I/O Basics
Torn I/O
Torn I/O is often referred to as a torn page in SQL Server documentation. A torn I/O occurs when a partial write takes place, leaving the data in an invalid state. SQL Server 2000/7.0 data pages are 8 KB in size. A torn data page for SQL Server occurs when only a portion of the 8 KB is correctly written to or retrieved from stable media.
m_tornBits contains the TORN or CHECKSUM validation value(s).
When the page is read from disk and PAGE_VERIFY protection is enabled for the database, the torn bits are audited.
You can find your answer here in this document (search for m_tornBits):
http://download.microsoft.com/download/4/7/a/47a548b9-249e-484c-abd7-29f31282b04d/SQLIOBasicsCh2.doc
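To make the idea concrete, here is a deliberately simplified sketch of sector-level torn-write detection. It is illustrative only and does not reproduce SQL Server's actual on-page layout (SQL Server stamps bits per 512-byte sector and keeps the displaced values in m_tornBits in the header; here each sector simply carries a token byte):

```java
public class TornPageDemo {
    static final int PAGE_SIZE = 8192;   // SQL Server data pages are 8 KB
    static final int SECTOR_SIZE = 512;  // typical stable-media sector size

    // Stamp the same token into the last byte of every sector before writing.
    static void stamp(byte[] page, byte token) {
        for (int off = SECTOR_SIZE - 1; off < PAGE_SIZE; off += SECTOR_SIZE)
            page[off] = token;
    }

    // A page is "torn" if its sectors disagree on the token, i.e. only part
    // of the 8 KB write reached stable media.
    static boolean isTorn(byte[] page) {
        byte first = page[SECTOR_SIZE - 1];
        for (int off = SECTOR_SIZE - 1; off < PAGE_SIZE; off += SECTOR_SIZE)
            if (page[off] != first) return true;
        return false;
    }

    public static void main(String[] args) {
        byte[] page = new byte[PAGE_SIZE];
        stamp(page, (byte) 1);
        System.out.println("torn after full write: " + isTorn(page)); // false

        // Simulate a partial write: only the first 4 sectors were updated.
        for (int off = SECTOR_SIZE - 1; off < 4 * SECTOR_SIZE; off += SECTOR_SIZE)
            page[off] = 2;
        System.out.println("torn after partial write: " + isTorn(page)); // true
    }
}
```

The real mechanism additionally has to restore the displaced bits on read, which is why the original values are kept in the page header.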
