Solr excessive logging and autowarming - is it normal?

I am pretty new to Solr, so I apologize if this is a stupid question :)
I have a Solr process running and logging to a file, with the log level set to INFO, I believe. Even so, it logs like crazy even though nothing is really being searched. The logs mostly contain records like these:
INFO: autowarming result for Searcher#7c35a3be main
queryResultCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
May 31, 2012 6:53:45 PM org.apache.solr.search.SolrIndexSearcher warm
INFO: autowarming Searcher#7c35a3be main from Searcher#7dde0950 main
documentCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=6188938,cumulative_hits=2441,cumulative_hitratio=0.00,cumulative_inserts=6186497,cumulative_evictions=4581707}
May 31, 2012 6:53:45 PM org.apache.solr.search.SolrIndexSearcher warm
INFO: autowarming result for Searcher#7c35a3be main
documentCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=6188938,cumulative_hits=2441,cumulative_hitratio=0.00,cumulative_inserts=6186497,cumulative_evictions=4581707}
May 31, 2012 6:53:45 PM org.apache.solr.core.QuerySenderListener newSearcher
INFO: QuerySenderListener sending requests to Searcher#7c35a3be main
May 31, 2012 6:53:45 PM org.apache.solr.core.QuerySenderListener newSearcher
INFO: QuerySenderListener done.
May 31, 2012 6:53:45 PM org.apache.solr.core.SolrCore registerSearcher
INFO: [] Registered new searcher Searcher#7c35a3be main
May 31, 2012 6:53:45 PM org.apache.solr.search.SolrIndexSearcher close
INFO: Closing Searcher#7dde0950 main
fieldValueCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
filterCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
queryResultCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
documentCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=6188938,cumulative_hits=2441,cumulative_hitratio=0.00,cumulative_inserts=6186497,cumulative_evictions=4581707}
May 31, 2012 6:53:45 PM org.apache.solr.update.processor.LogUpdateProcessor finish
Is this normal? It seems to put a pretty hefty load on the system (nothing too dramatic, but still).
I am just trying to understand what exactly it is doing and why.

IMHO, in a production environment you should use the WARNING level until you actually have a problem (as application servers do).
You can configure logging through the Solr admin console (for a local Jetty instance the URL would be http://localhost:8983/solr/admin/logging), and it can be set separately for every package/class.
Logging levels are:
FINEST
FINE
CONFIG
INFO
WARNING
SEVERE
OFF
If you leave it unset, INFO is used.
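If you would rather do it in code than through the admin page, here is a minimal sketch using java.util.logging, which is what the log format above comes from; note this only works from code running inside the same JVM as Solr (a custom component, for example), and the logger name is simply the Solr root package:
import java.util.logging.Level;
import java.util.logging.Logger;

public class QuietSolrLogging {
    // Keep a strong reference: java.util.logging holds loggers weakly,
    // so an unreferenced logger could be collected and lose its level.
    private static final Logger SOLR_LOGGER = Logger.getLogger("org.apache.solr");

    public static void quiet() {
        // Raise the threshold so routine searcher/autowarming INFO messages are dropped
        SOLR_LOGGER.setLevel(Level.WARNING);
    }
}
In a java.util.logging properties file the equivalent line would be org.apache.solr.level = WARNING.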

By default the logging level is INFO.
When Solr loads a core, it reads the core's configuration files and autowarms the caches, and all of that activity is written to the log files.
Configuring logging, as described above, lets you raise the threshold to whatever level you need.
Autowarming itself is configured in the solrconfig.xml file: the queryResultCache, filterCache and documentCache each take an autowarmCount attribute, and the newSearcher/firstSearcher listeners define the warm-up queries that QuerySenderListener fires against a new searcher.

Related

Apache Proxy Plugin handling of JVM ID in JSESSION Cookie

I am trying to understand the mapping between the JVMID present in the JSESSIONID cookie and the ipaddr:port of the managed server. A few questions below:
1) Who generates the JVMID, and how does the Apache plugin know the JVMID of a given node? Does it get it back in the response from the server (maybe as part of the Dynamic Server List)?
2) If we send a request to an Apache instance with a JSESSIONID cookie containing a JVMID, and that Apache hasn't handled any requests yet, what would be the behavior?
3) Assuming that Apache maintains a local mapping between JVMIDs and node addresses, how does this mapping get updated (especially in the case of an Apache restart or a managed server restart)?
See more at: http://middlewaremagic.com/weblogic/?p=654#comment-9054
1) The JVMID is generated by each WebLogic server and appended to the JSESSIONID.
The Apache plugin logs the individual server hash and maps it to the respective managed server, which is how it can send a request to the same WebLogic managed server that handled the previous one.
Here is an example log from http://www.bea-weblogic.com/weblogic-server-support-pattern-common-diagnostic-process-for-proxy-plug-in-problems.html
Mon May 10 13:14:40 2004 getpreferredServersFromCookie: -2032354160!-457294087
Mon May 10 13:14:40 2004 GET Primary JVMID1: -2032354160
Mon May 10 13:14:40 2004 GET Secondary JVMID2: -457294087
Mon May 10 13:14:40 2004 [Found Primary]: 172.18.137.50:38625:65535
Mon May 10 13:14:40 2004 list[0].jvmid: -2032354160
Mon May 10 13:14:40 2004 secondary str: -457294087
Mon May 10 13:14:40 2004 list[1].jvmid: -457294087
Mon May 10 13:14:40 2004 secondary str: -457294087
Mon May 10 13:14:40 2004 [Found Secondary]: 172.18.137.54:38625:65535
Mon May 10 13:14:40 2004 Found 2 servers
2) If the plugin is installed on the new Apache as well, then the moment Apache starts up it pings all available WebLogic servers to mark them as live or dead (my terms, not official). During that health check it gets the JVMID for each available WebLogic server, so when it later receives its first request carrying a pre-existing JVMID it can route it correctly.
3) There are parameters such as DynamicServerList ON: if it is ON, the plugin keeps polling for healthy WebLogic servers; if it is OFF, it only sends requests to the hardcoded list. So with ON the behavior is pretty dynamic.

Why is attempting to set client variables resulting in "String or binary data would be truncated" messages in coldfusion-out.log?

If I try to set a client variable, by just doing a simple <cfset Client.X = 0 /> in onRequestStart, the request takes over a second (rather than mere milliseconds), and the following is output in coldfusion-out.log:
Aug 22, 2013 15:51:54 PM Information [ajp-bio-8012-exec-9] - [Macromedia][SQLServer JDBC Driver][SQLServer]String or binary data would be truncated.
Aug 22, 2013 15:51:55 PM Information [ajp-bio-8012-exec-9] - client variable JDBC STORE - retry 1
Aug 22, 2013 15:51:55 PM Information [ajp-bio-8012-exec-9] - [Macromedia][SQLServer JDBC Driver][SQLServer]String or binary data would be truncated.
Aug 22, 2013 15:51:55 PM Information [ajp-bio-8012-exec-9] - client variable JDBC STORE - retry 2
Aug 22, 2013 15:51:55 PM Information [ajp-bio-8012-exec-9] - [Macromedia][SQLServer JDBC Driver][SQLServer]String or binary data would be truncated.
Aug 22, 2013 15:51:55 PM Information [ajp-bio-8012-exec-9] - client variable JDBC STORE - retry 3
Aug 22, 2013 15:51:55 PM Information [ajp-bio-8012-exec-9] - [Macromedia][SQLServer JDBC Driver][SQLServer]String or binary data would be truncated.
Aug 22, 2013 15:51:55 PM Warning [ajp-bio-8012-exec-9] - Failed to store CLIENT variables to datasource mydsn - [Macromedia][SQLServer JDBC Driver][SQLServer]String or binary data would be truncated.
As per the message, the variable doesn't get stored - but I'm not sure why it thinks anything would be truncated - it's not exactly a lot of data.
I'm pretty sure this has been working fine previously, but I'm not sure when it might have started playing up (I'd been doing work involving remote http requests, so wouldn't have noticed an extra second during that).
If I remove the client.x line, it responds immediately and doesn't log the warning.
Similarly, if I change the storage mechanism for client variables to either cookie or registry, the issue doesn't occur.
I've tried deleting the CDATA and CGLOBAL tables and re-assigning the datasource to let them be recreated, but this hasn't had an effect.
Just as I was about to post the question I figured out what it was, so to help anyone else experiencing the same problem I'll go ahead and answer too...
The error was referring to the app column.
This is created as a char(64), but for this application the name is generated based on a couple of factors, which resulted in it ending up 95 characters long.
I widened the column in the database to char(255) and the problem went away.

Timezone with Postgres on Windows

I am storing a time field in a timestamp without time zone column. The data is stored as UTC time. When I call the data back on Heroku I get something like 2013-07-13T18:06:41.000Z, which is what I want.
However, on my test machine, which is running Windows 8 and Postgres 9.3, I get back Sat Jul 13 2013 18:06:41 GMT-0400 (Eastern Daylight Time). This is sort of the right time: it is the correct instant, but rendered with the offset to local time.
How can I match my production db to return the same or similar results to my test db?
It sounds like you want to run your test server with TimeZone set to UTC.
You can set this globally in postgresql.conf, or on a per-user, per-database, per-session or per-transaction basis.
See the manual.
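For the per-session variant, a minimal sketch using the PostgreSQL JDBC driver might look like the following; the connection URL, database name and credentials are placeholders:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class UtcSessionExample {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details for a local test database
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/testdb", "postgres", "secret");
             Statement st = conn.createStatement()) {
            // Per-session setting: applies until this connection is closed
            st.execute("SET TIME ZONE 'UTC'");
            try (ResultSet rs = st.executeQuery("SHOW TimeZone")) {
                if (rs.next()) {
                    System.out.println("Session time zone: " + rs.getString(1)); // expect UTC
                }
            }
        }
    }
}
The global equivalent is timezone = 'UTC' in postgresql.conf (followed by a config reload), while ALTER DATABASE yourdb SET timezone TO 'UTC' covers just one database (yourdb being a placeholder name).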

solr, solrj: I/O exception (java.net.SocketException) caught when processing request: Connection reset

I have a multi-threaded application in solrj 4. The threads (max 25) share one connection to HttpSolrServer. Each thread is running one query. This worked fine for a while, until it finally crashed with the following messages:
Jan 12, 2013 12:52:15 PM org.apache.http.impl.client.DefaultRequestDirector tryExecute
INFO: Retrying request
Jan 12, 2013 12:52:15 PM org.apache.http.impl.client.DefaultRequestDirector tryExecute
INFO: I/O exception (java.net.SocketException) caught when processing request: Connection reset
I would like to catch this exception and reset the connection to the server. I don't get the whole stack trace with the above message, so I'm not sure where to do this. The only place I reference the server is for making a query with:
server.query( query )
But the only exception this throws is SolrServerException, which I'm currently handling.
Any suggestions would be greatly appreciated.
FYI, this is how I'm setting up the initial server connection:
server = new HttpSolrServer(url);
server.setSoTimeout(0);                      // socket read timeout in ms; 0 means wait indefinitely
server.setDefaultMaxConnectionsPerHost(50);  // connections allowed to a single host
server.setMaxTotalConnections(128);          // connections across all hosts
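One possible shape for the catch-and-reset, sketched against the SolrJ 4 API used above; the ResettingSolrClient wrapper and the single-retry policy are illustrative, not part of SolrJ:
import java.io.IOException;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class ResettingSolrClient {
    private final String url;
    private volatile HttpSolrServer server;

    public ResettingSolrClient(String url) {
        this.url = url;
        this.server = new HttpSolrServer(url);
    }

    public QueryResponse runQuery(SolrQuery query) throws SolrServerException {
        try {
            return server.query(query);
        } catch (SolrServerException e) {
            // SolrServerException wraps the underlying I/O failure; getRootCause()
            // lets us tell a dropped connection apart from a genuinely bad query.
            if (e.getRootCause() instanceof IOException) {
                server = new HttpSolrServer(url);   // rebuild the client and retry once
                return server.query(query);
            }
            throw e;
        }
    }
}
With 25 threads sharing the instance, the reassignment of server needs some coordination (hence the volatile field here), and threads that are mid-request when the client is swapped may still see the exception once before picking up the new instance.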

solr - replication failure - expected behavior?

My slave polls every ten minutes, but my master only indexes once a day.
Every time my slave polls but my master has not been updated, it's recorded as a replication failure in replication.properties.
Is this expected behavior? It seems to me that this should not be considered a failure.
#Replication details
#Fri Mar 02 08:23:38 EST 2012
replicationFailedAtList=1330692602649,1330692000359,1330691401390,1330690800157,1330690200096,1330689600014,1330689000012,1330688400011,1330687800013,1330687200013
previousCycleTimeInSeconds=1418
timesFailed=1675
indexReplicatedAtList=1330694618838,1330692602649,1330692000359,1330691401390,1330690800157,1330690200096,1330689600014,1330689000012,1330688400011,1330687800013
indexReplicatedAt=1330694618838
replicationFailedAt=1330692602649
lastCycleBytesDownloaded=14445955753
timesIndexReplicated=1699
EDIT:
Master is set to replicate on Optimize.
The failed replications list stops growing after the successful replication occurs. I'm not sure when they start up again, but perhaps at some point after the master starts its update on the next day?
My understanding of replication is that the slave checks the master's index version and, if the versions differ, pulls the new data. Given that the master is set to replicate on optimize, shouldn't the index version only change once an optimize completes? Is there some other mechanism by which the two Solr instances communicate?
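One way to sanity-check that mechanism is to ask each instance directly for its index version over HTTP; a rough sketch in plain Java (the master/slave host names are placeholders, and command=indexversion is a standard ReplicationHandler command):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class IndexVersionCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder hosts; both master and slave answer command=indexversion
        String[] hosts = {"http://master:8983/solr", "http://slave:8983/solr"};
        for (String host : hosts) {
            URL url = new URL(host + "/replication?command=indexversion&wt=json");
            try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(host + " -> " + line);  // compare indexversion/generation values
                }
            }
        }
    }
}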
EDIT:
I don't have any errors in the log. I do notice that while there are a lot of lines on startup like this:
INFO: Adding component:org.apache.solr.handler.component.SpellCheckComponent#41217e67
Mar 1, 2012 11:53:49 AM org.apache.solr.handler.component.SearchHandler inform
INFO: Adding debug component:org.apache.solr.handler.component.DebugComponent#7df1bd98
Mar 1, 2012 11:53:49 AM org.apache.solr.handler.component.SearchHandler inform
INFO: Adding component:org.apache.solr.handler.component.QueryComponent#259a8416
Mar 1, 2012 11:53:49 AM org.apache.solr.handler.component.SearchHandler inform
INFO: Adding component:org.apache.solr.handler.component.FacetComponent#4355d3a3
Mar 1, 2012 11:53:49 AM org.apache.solr.handler.component.SearchHandler inform
There is nothing about the ReplicationHandler. Still, it does successfully replicate when the master is updated.
