I am trying to understand the mapping between the JVMID present in the JSESSIONID cookie and the ipaddr:port of the managed server. A few questions below:
Who generates the JVMID, and how does the Apache plug-in know the JVMID of a given node? Does it get it back in the response from the server (maybe as part of the Dynamic Server List)?
If we send a request to an Apache instance with a JSESSIONID cookie containing a JVMID, and that Apache hasn't handled any requests yet, what would be the behavior?
Assuming that Apache maintains a local mapping between JVMIDs and node addresses, how does this get updated? (Especially in the case of an Apache restart or a managed server restart.)
See more at: http://middlewaremagic.com/weblogic/?p=654#comment-9054
1) The JVMID is generated by each WebLogic server and appended to the JSESSIONID.
The Apache plug-in records each server's hash and maps it to the respective managed server, so it can send the request to the same WebLogic managed server that handled the previous one.
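For example, the session cookie carries the primary and secondary JVMIDs after the session id itself, separated by '!' (the session-id portion below is shortened to a placeholder):
JSESSIONID=<session-id>!-2032354160!-457294087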
Here is an example log from http://www.bea-weblogic.com/weblogic-server-support-pattern-common-diagnostic-process-for-proxy-plug-in-problems.html:
Mon May 10 13:14:40 2004 getpreferredServersFromCookie: -2032354160!-457294087
Mon May 10 13:14:40 2004 GET Primary JVMID1: -2032354160
Mon May 10 13:14:40 2004 GET Secondary JVMID2: -457294087
Mon May 10 13:14:40 2004 [Found Primary]: 172.18.137.50:38625:65535
Mon May 10 13:14:40 2004 list[0].jvmid: -2032354160
Mon May 10 13:14:40 2004 secondary str: -457294087
Mon May 10 13:14:40 2004 list[1].jvmid: -457294087
Mon May 10 13:14:40 2004 secondary str: -457294087
Mon May 10 13:14:40 2004 [Found Secondary]: 172.18.137.54:38625:65535
Mon May 10 13:14:40 2004 Found 2 servers
2) If the plug-in is installed on the new Apache as well, then as soon as Apache starts up it pings all configured WebLogic servers to mark them as live or dead (my terms, not official). During that health check it learns the JVMID of each available WebLogic server, so when it receives its first request carrying a pre-existing JVMID it can route it correctly.
3) There are parameters like DynamicServerList ON: when it is ON, the plug-in keeps polling for healthy WebLogic servers and updates its server list dynamically; when it is OFF, it sends requests only to the hard-coded list. So with ON it is pretty dynamic.
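As an illustration only (host names and file paths below are placeholders, not taken from the original posts), a typical mod_wl fragment in httpd.conf with a dynamic server list looks roughly like this:
<IfModule mod_weblogic.c>
  WebLogicCluster wls-node1.example.com:7001,wls-node2.example.com:7001
  DynamicServerList ON
  Debug ON
  WLLogFile /var/log/httpd/wlproxy.log
</IfModule>
<Location /myapp>
  SetHandler weblogic-handler
</Location>
With Debug ON and WLLogFile set, the plug-in writes the kind of JVMID trace shown in the example log above.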
I am trying to set up a local NTP server without an Internet connection.
Below is my ntp.conf on the server:
# Server
server 127.127.1.0
fudge 127.127.1.0 stratum 5
broadcast 10.108.190.255
Below is my ntp.conf on the clients:
# Clients
server 10.108.190.14
broadcastclient
but my clients are not syncing with the server. The output of ntpq -p on the clients shows that they are not taking time from the server, and the server's IP is shown at stratum 16.
Could anyone please help with this issue?
The server should use its local clock as the source. A better setup for isolated networks is orphan mode, which also gives you fail-over. Check out the documentation:
http://www.eecis.udel.edu/~mills/ntp/html/orphan.html
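A minimal orphan-mode sketch for the server's ntp.conf, reusing the stratum and broadcast address from the question (adjust both to your environment):
tos orphan 5
broadcast 10.108.190.255
The clients can keep pointing at the server (server 10.108.190.14) or stay broadcast clients; when the server has no upstream sources it will serve time at the orphan stratum instead of dropping to stratum 16.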
You need to configure the clients with the prefer keyword. ntpd tries its hardest not to honor local undisciplined clocks, in order to prevent screwups.
server 10.108.190.14 prefer
For more information see: http://www.ntp.org/ntpfaq/NTP-s-config-adv.htm#AEN3658
This all assumes that you have posted your full and entire ntp.conf and did not leave out any bits about restrict lines.
How about using chrony?
Steps
1. Install chrony on both devices:
sudo apt install chrony
2. Let's assume the server's IP address is 192.168.1.87; then the client configuration (/etc/chrony/chrony.conf) is as follows:
server 192.168.1.87 iburst
keyfile /etc/chrony/chrony.keys
driftfile /var/lib/chrony/chrony.drift
log tracking measurements statistics
logdir /var/log/chrony
3. Server configuration (/etc/chrony/chrony.conf), assuming your client's IP is 192.168.1.14:
keyfile /etc/chrony/chrony.keys
driftfile /var/lib/chrony/chrony.drift
log tracking measurements statistics
logdir /var/log/chrony
local stratum 8
manual
allow 192.168.1.0/24
allow 192.168.1.14
4. Restart chrony on both computers:
sudo systemctl stop chrony
sudo systemctl start chrony
5.1 Check the chrony service status on the client side:
sudo systemctl status chrony
Output:
июн 24 13:26:42 op-desktop systemd[1]: Starting chrony, an NTP client/server...
июн 24 13:26:42 op-desktop chronyd[9420]: chronyd version 3.2 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SECHASH +SIGND +ASYNCDNS +IPV6 -DEBUG)
июн 24 13:26:42 op-desktop chronyd[9420]: Frequency -6.446 +/- 1.678 ppm read from /var/lib/chrony/chrony.drift
июн 24 13:26:43 op-desktop systemd[1]: Started chrony, an NTP client/server.
июн 24 13:26:49 op-desktop chronyd[9420]: Selected source 192.168.1.87
5.2 chronyc tracking output:
Reference ID : C0A80157 (192.168.1.87)
Stratum : 9
Ref time (UTC) : Thu Jun 24 10:50:34 2021
System time : 0.000002018 seconds slow of NTP time
Last offset : -0.000000115 seconds
RMS offset : 0.017948076 seconds
Frequency : 5.491 ppm slow
Residual freq : +0.000 ppm
Skew : 0.726 ppm
Root delay : 0.002031475 seconds
Root dispersion : 0.000664742 seconds
Update interval : 65.2 seconds
Leap status : Normal
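Optionally, you can confirm on the server side that the client is actually being served; run this locally on the server (needs root):
sudo chronyc clients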
If I try to set a client variable, by just doing a simple <cfset Client.X = 0 /> in onRequestStart, the request takes over a second (rather than mere milliseconds), and the following is output in coldfusion-out.log:
Aug 22, 2013 15:51:54 PM Information [ajp-bio-8012-exec-9] - [Macromedia][SQLServer JDBC Driver][SQLServer]String or binary data would be truncated.
Aug 22, 2013 15:51:55 PM Information [ajp-bio-8012-exec-9] - client variable JDBC STORE - retry 1
Aug 22, 2013 15:51:55 PM Information [ajp-bio-8012-exec-9] - [Macromedia][SQLServer JDBC Driver][SQLServer]String or binary data would be truncated.
Aug 22, 2013 15:51:55 PM Information [ajp-bio-8012-exec-9] - client variable JDBC STORE - retry 2
Aug 22, 2013 15:51:55 PM Information [ajp-bio-8012-exec-9] - [Macromedia][SQLServer JDBC Driver][SQLServer]String or binary data would be truncated.
Aug 22, 2013 15:51:55 PM Information [ajp-bio-8012-exec-9] - client variable JDBC STORE - retry 3
Aug 22, 2013 15:51:55 PM Information [ajp-bio-8012-exec-9] - [Macromedia][SQLServer JDBC Driver][SQLServer]String or binary data would be truncated.
Aug 22, 2013 15:51:55 PM Warning [ajp-bio-8012-exec-9] - Failed to store CLIENT variables to datasource mydsn - [Macromedia][SQLServer JDBC Driver][SQLServer]String or binary data would be truncated.
As per the message, the variable doesn't get stored - but I'm not sure why it thinks anything would be truncated - it's not exactly a lot of data.
I'm pretty sure this has been working fine previously, but I'm not sure when it might have started playing up (I'd been doing work involving remote http requests, so wouldn't have noticed an extra second during that).
If I remove the client.x line, it responds immediately and doesn't log the warning.
Similarly, if I change the storage mechanism for client variables to either cookie or registry, the issue doesn't occur.
I've tried deleting the CDATA and CGLOBAL tables and re-assigning the datasource to let them be recreated, but this hasn't had an effect.
Just as I was about to post the question I figured out what it was, so to help anyone else experiencing the same problem I'll go ahead and answer too...
The error was referring to the app column.
This is created as a char(64), but for this application the name is generated based on a couple of factors which resulted in it ending up as 95 characters long.
I altered the column in the database to char(255) and the problem went away.
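For anyone who wants the exact change, something along these lines should do it on SQL Server (CDATA and app are the standard ColdFusion client-variable table and column; 255 is simply the width I chose):
ALTER TABLE CDATA ALTER COLUMN app char(255) NOT NULL;
-- adjust NULL/NOT NULL to match your existing column definition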
I am storing a time field in a timestamp without time zone column. The data is stored in UTC time. When I call the data back on Heroku I get something like 2013-07-13T18:06:41.000Z, which is what I want.
However, on my test machine, which is running Windows 8 and Postgres 9.3, I get back Sat Jul 13 2013 18:06:41 GMT-0400 (Eastern Daylight Time). This is sort of the right time: it is the correct time, offset to local time.
How can I get my test DB to return the same or similar results as my production DB?
It sounds like you want to run your test server with TimeZone set to UTC.
You can set this globally in postgresql.conf, or on a per-user, per-database, per-session or per-transaction basis.
See the manual.
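For example (the database and role names below are placeholders, not from the question):
# postgresql.conf - global default
timezone = 'UTC'
-- or in SQL, at narrower scopes:
ALTER DATABASE mytestdb SET timezone TO 'UTC';  -- per-database
ALTER ROLE myuser SET timezone TO 'UTC';        -- per-user
SET timezone TO 'UTC';                          -- current session
SET LOCAL timezone TO 'UTC';                    -- current transaction only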
I've constructed a Nagios remote-host monitoring setup (non-NRPE), and it's functional and useful, except:
Somehow, I found that the Nagios host logs in to various remote hosts, only to log out one second later (if not in that same second), every 3 minutes or so; how often it does this doesn't appear to be deterministic. These logins don't coincide with any check periods I've defined.
From an arbitrary member of my remote host array's auth.log:
Feb 25 10:51:11 MACHINE sshd[3590]: Accepted publickey for nagios from 10.1.2.110 port 54069 ssh2
Feb 25 10:51:11 MACHINE sshd[3590]: pam_unix(sshd:session): session opened for user nagios by (uid=0)
Feb 25 10:51:11 MACHINE sshd[3599]: Received disconnect from 10.1.2.110: 11: disconnected by user
Feb 25 10:51:11 MACHINE sshd[3590]: pam_unix(sshd:session): session closed for user nagios
And then, three minutes later:
Feb 25 10:54:10 MACHINE sshd[3632]: Accepted publickey for nagios from 10.1.2.110 port 54176 ssh2
Feb 25 10:54:10 MACHINE sshd[3632]: pam_unix(sshd:session): session opened for user nagios by (uid=0)
Feb 25 10:54:10 MACHINE sshd[3642]: Received disconnect from 10.1.2.110: 11: disconnected by user
Feb 25 10:54:10 MACHINE sshd[3632]: pam_unix(sshd:session): session closed for user nagios
I can't figure it out. My service follows the generic-service template, which I've modified for a slightly longer check_interval and max_check_attempts. Why is Nagios on this serial login spree?
Have you checked your host definitions? What do you use for the host check_command? If that performs its check remotely over SSH (rather than something like a local check-ping), then the host checks could be logging in as well.
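For instance, a host definition along these lines (host name and address are placeholders) will trigger an SSH login on every host check if its check_command is a check_by_ssh wrapper, whereas a plain check-host-alive ping will not:
define host {
    use             generic-host
    host_name       remote-box           ; placeholder
    address         192.0.2.10           ; placeholder
    check_command   check-host-alive     ; local ICMP ping - no SSH login
    ; check_command check_load_by_ssh    ; hypothetical check_by_ssh command - would log in on every host check
}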
Also, you can check your Nagios log file to see what checks are actually being performed. I usually run 'tail -f nagios.log | grep [IP_ADDRESS_of_target_host]' to narrow the results to a specific machine.
If nothing is showing up there, in a last ditch effort you can enable debugging and check the Nagios debug file - EVERYTHING Nagios does will go into this file. As the debug file tends to roll very quickly (at least in our install - >6.8K checks), you may have to get creative with 'grep' to find what you're looking for.
If the check is returning a CRITICAL/WARNING state, it could be that your retry_interval is set to 3 minutes, which I believe is the default. Double-check your service template in nagios/etc/objects/templates.
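For reference, that directive lives in the service template; the values below are only examples, not the poster's actual settings:
define service {
    name                  generic-service
    max_check_attempts    4
    check_interval        5     ; minutes between normal checks
    retry_interval        3     ; minutes between re-checks while in a soft non-OK state
    register              0
}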
My slave polls every ten minutes, but my master only indexes once a day.
Every time my slave polls but my master has not been updated, it's recorded as a replication failure in replication.properties.
Is this expected behavior? It seems to me that this should not be considered a failure.
#Replication details
#Fri Mar 02 08:23:38 EST 2012
replicationFailedAtList=1330692602649,1330692000359,1330691401390,1330690800157,1330690200096,1330689600014,1330689000012,1330688400011,1330687800013,1330687200013
previousCycleTimeInSeconds=1418
timesFailed=1675
indexReplicatedAtList=1330694618838,1330692602649,1330692000359,1330691401390,1330690800157,1330690200096,1330689600014,1330689000012,1330688400011,1330687800013
indexReplicatedAt=1330694618838
replicationFailedAt=1330692602649
lastCycleBytesDownloaded=14445955753
timesIndexReplicated=1699
EDIT:
Master is set to replicate on Optimize.
The failed replications list stops growing after the successful replication occurs. I'm not sure when they start up again, but perhaps at some point after the master starts its update on the next day?
My understanding of replication is that the slave checks the index version and, if they differ, pulls the new data. Considering the master is set to replicate on optimize, shouldn't the index version only increment when optimization is complete? Is there some other mechanism by which the two Solr instances communicate?
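For context, a master set to replicate after optimize and a slave polling every ten minutes typically look roughly like this in solrconfig.xml (the masterUrl below is a placeholder, not my actual host):
<!-- master solrconfig.xml -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">optimize</str>
  </lst>
</requestHandler>
<!-- slave solrconfig.xml -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/replication</str>
    <str name="pollInterval">00:10:00</str>
  </lst>
</requestHandler>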
EDIT:
I don't have any errors in the log. I do notice that while there are a lot of lines on startup like this:
INFO: Adding component:org.apache.solr.handler.component.SpellCheckComponent#41217e67
Mar 1, 2012 11:53:49 AM org.apache.solr.handler.component.SearchHandler inform
INFO: Adding debug component:org.apache.solr.handler.component.DebugComponent#7df1bd98
Mar 1, 2012 11:53:49 AM org.apache.solr.handler.component.SearchHandler inform
INFO: Adding component:org.apache.solr.handler.component.QueryComponent#259a8416
Mar 1, 2012 11:53:49 AM org.apache.solr.handler.component.SearchHandler inform
INFO: Adding component:org.apache.solr.handler.component.FacetComponent#4355d3a3
Mar 1, 2012 11:53:49 AM org.apache.solr.handler.component.SearchHandler inform
There is nothing about the ReplicationHandler. Still, it does successfully replicate when the master is updated.