Jmxtrans monitoring Solr issue

I want to monitor Solr, and I have this jmxtrans config:
{
  "servers": [
    {
      "port": "8099",
      "host": "localhost",
      "queries": [
        {
          "obj": "solr/*:type=/select,id=org.apache.solr.handler.component.SearchHandler",
          "resultAlias": "solr",
          "attr": [
            "requests", "errors", "avgRequestsPerSecond", "avgTimePerRequest", "95thPcRequestTime"
          ],
          "outputWriters": [
            {
              "@class": "com.googlecode.jmxtrans.model.output.KeyOutWriter",
              "settings": {
                "outputFile": "/tmp/jmx.log",
                "maxLogFileSize": "10MB",
                "maxLogBackupFiles": 2,
                "debug": true
              }
            }
          ]
        }
      ]
    }
  ]
}
I have configured the wildcard domain name
"obj":"solr/*:type=/select,id=org.apache.solr.handler.component.SearchHandler"
but I get the following results without the domain name:
localhost_8099.solr.errors 0 1446715240625
localhost_8099.solr.avgRequestsPerSecond 0.00883917964270778 1446715240625
localhost_8099.solr.avgTimePerRequest 1.99831994970047 1446715240625
localhost_8099.solr.95thPcRequestTime 3.8249146499999997 1446715240625
localhost_8099.solr.requests 717419 1446715241205
localhost_8099.solr.errors 0 1446715241205
I tried typeNames (https://code.google.com/p/jmxtrans/wiki/Queries), but it doesn't seem to support the domain.

I found the answer: add the following configuration:
...
"obj":"solr/*:type=/select,id=org.apache.solr.handler.component.SearchHandler",
"useObjDomainAsKey":true,
...
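For reference, here is the query block from above with that flag added; everything else is unchanged from the original config:
"queries": [
  {
    "obj": "solr/*:type=/select,id=org.apache.solr.handler.component.SearchHandler",
    "useObjDomainAsKey": true,
    "resultAlias": "solr",
    "attr": [
      "requests", "errors", "avgRequestsPerSecond", "avgTimePerRequest", "95thPcRequestTime"
    ]
  }
]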

Related

Enable Access Control in MongoDB doesn't work

Everything worked fine when I used mongodb://0.0.0.0:27017/data as the connection string. After enabling access control and switching to the connection string mongodb://user:pwd@0.0.0.0:27017/?authSource=data, it stopped working.
I can connect to MongoDB, but in the application every request shows This request has no response data available. With the connection string mongodb://0.0.0.0:27017/data there was data.
To add authentication I followed the Enable Access Control instructions:
I created admin:
> use admin
switched to db admin
> db.createUser(
... {
... user: 'admin',
... pwd: 'password',
... roles: [ { role: 'root', db: 'admin' } ]
... }
... );
Successfully added user: {
"user" : "admin",
"roles" : [
{
"role" : "root",
"db" : "admin"
}
]
}
> exit;
I changed the mongo config file, mongod.cfg:
security:
  authorization: enabled
I logged in as admin:
> use admin
> db.auth('admin','password');
1
I created a new user:
use data
db.createUser(
{
user: "user",
pwd: "pwd",
roles: [
{ role: "dbAdmin", db: "data" },
]
}
)
db.grantRolesToUser(
"user",
[ "readWrite" , { role: "read", db: "data" } ],
{ w: "majority" , wtimeout: 4000 }
)
And I used mongodb://user:pwd@0.0.0.0:27017/?authSource=data as the connection string, but it is not working. What am I doing wrong?
I solved my problem. The reason was the authSource in the connection string. Before:
"connectionString": "mongodb://user:pwd@0.0.0.0:27017/?authSource=data"
After:
"connectionString": "mongodb://user:pwd@0.0.0.0:27017/data"
Now it's working fine. The likely explanation: the first string has no database in the path, so the application had no default database to read from; with data in the path it becomes both the default database and the authentication database, which is where the user was created.
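As a quick sanity check (my own sketch, not part of the original exchange), the credentials can be verified from the mongo shell against the data database, where the user was created:
use data
db.auth('user', 'pwd')      // returns 1 when the credentials and auth database are correct
db.getCollectionNames()     // should work once the readWrite role granted above is in effect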

logstash not reading multiple files

My Logstash configuration (Kibana 5.6.8 stack) seems to read only one log file.
My logstash.conf in /home/elasticsearch/confLogs is:
input {
  file {
    type => "static"
    path => "/home/elasticsearch/static_logs/**/*Web.log*"
    exclude => "*.zip"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  if [type] == "static" {
    if [message] !~ /(.+)/ {
      drop { }
    }
    grok {
      patterns_dir => "./patterns"
      overwrite => [ "message" ]
      # 2017-08-07 11:47:35,466 INFO [http-bio-10.60.2.19-10267-exec-60] jsch.DeployManagerFileUSImpl (DeployManagerFileUSImpl.java:155) - Deconnexion de l'hote qvizzza3
      # 2017-08-07 11:47:51,775 ERROR [http-bio-10.60.2.19-10267-exec-54] service.BindingsRSImpl (BindingsRSImpl.java:143) - Can't find bindings file deployed on server
      # 2017-08-03 16:01:11,352 WARN [Thread-552] pcf2.AbstractObjetMQDAO (AbstractObjetMQDAO.java:137) - Descripteur de
      match => [ "message", "%{TIMESTAMP_ISO8601:logdate},%{INT} %{LOGLEVEL:logLevel} \[(?<threadname>[^\]]+)\] %{JAVACLASS:package} \(%{JAVAFILE:className}:%{INT:line}\) - %{GREEDYDATA:message}" ]
    }
    # 2017-08-03 16:01:11,352
    date {
      match => [ "logdate", "YYYY-MM-dd hh:mm:ss" ]
      target => "logdate"
    }
  }
}
output {
  elasticsearch { hosts => ["192.168.99.100:9200"] }
}
My logs directory, with load-balanced logrotate files:
static_logs
--prd1
----mlog Web.log
----mlog Web.log.1
----mlog Web.log.2
--prd2
----mlog Web.log
----mlog Web.log.2
Where is my mistake?
My patterns are in /home/elasticsearch/confLogs/patterns/grok-patterns, which includes TIMESTAMP_ISO8601.
Regards
If my log files are larger than 140 MB, the logdate field is indexed as a string field instead of a date field!
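Not an answer from the thread, just a sketch of something to try: the file input's path setting also accepts an array of glob patterns, which makes it easy to rule out the ** wildcard as the culprit:
input {
  file {
    type => "static"
    # hypothetical variant: list the prd directories explicitly instead of using **
    path => [ "/home/elasticsearch/static_logs/prd1/*Web.log*",
              "/home/elasticsearch/static_logs/prd2/*Web.log*" ]
    exclude => "*.zip"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}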

Log to file with gradle 4

Up to Gradle 3, I used this to write Gradle's output to a log file:
def fileLogger = [
onOutput : {
File logfile = new File( 'gradle.log' )
logfile << it
}
] as org.gradle.api.logging.StandardOutputListener
gradle.useLogger( fileLogger )
This does not work with Gradle 4.
Update for Gradle 5:
It works when using logging.addStandardOutputListener instead of gradle.useLogger, and adding it to all tasks:
// logger
def fileLogger = [
onOutput: {
File logfile = new File('gradle.log')
logfile << it
}
] as org.gradle.api.logging.StandardOutputListener
// for configuration phase
logging.addStandardOutputListener(fileLogger)
// for execution phase
gradle.taskGraph.whenReady { taskGraph ->
taskGraph.allTasks.each { Task t ->
t.doFirst {
logging.addStandardOutputListener(fileLogger)
}
}
}
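A small usage note of my own (not from the original post): the snippet can live in build.gradle, or in a separate init script so it applies to any build. A hypothetical invocation, assuming the snippet is saved as fileLogger.gradle (the listener wiring may need adjusting for init-script context):
# apply the logging snippet without editing the project's build.gradle
gradle --init-script fileLogger.gradle build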

Flume solrSink, No error but does not ingest the data to Solr

I am indexing text documents using Flume. I do not see any error or warning messages, but the data is not getting ingested into Solr. The log level for both Solr and Flume is set to TRACE, ALL.
Flume version : 1.5.2.2.3
Solr Version : 5.5
Config files are as below.
Flume config:
agent.sources = SpoolDirSrc
agent.channels = FileChannel
agent.sinks = SolrSink

# Configure Source
agent.sources.SpoolDirSrc.channels = fileChannel
agent.sources.SpoolDirSrc.type = spooldir
agent.sources.SpoolDirSrc.spoolDir = /home/flume/source_emails
agent.sources.SpoolDirSrc.basenameHeader = true
agent.sources.SpoolDirSrc.fileHeader = true
agent.sources.SpoolDirSrc.deserializer = org.apache.flume.sink.solr.morphline.BlobDeserializer$Builder

agent.channels.FileChannel.type = file
agent.channels.FileChannel.capacity = 10000

agent.sinks.SolrSink.type = org.apache.flume.sink.solr.morphline.MorphlineSolrSink
agent.sinks.SolrSink.morphlineFile = /etc/flume/conf/morphline.conf
agent.sinks.SolrSink.batchsize = 1000
agent.sinks.SolrSink.batchDurationMillis = 2500
agent.sinks.SolrSink.channel = fileChannel
agent.sinks.SolrSink.morphlineId = morphline1
agent.sources.SpoolDirSrc.channels = FileChannel
agent.sinks.SolrSink.channel = FileChannel
Morphline Config
solrLocator: {
collection : gsearch
zkHost : "codesolr-as-r3p:21810,codesolr-as-r3p:21811,codesolr-as-r3p:21812"
}
morphlines :
[
{
id : morphline1
importCommands : ["org.kitesdk.**", "org.apache.solr.**"]
commands :
[
{ detectMimeType { includeDefaultMimeTypes : true } }
{
solrCell {
solrLocator : ${solrLocator}
captureAttr : true
lowernames : true
capture : [_attachment_body, _attachment_mimetype, basename, content, content_encoding, content_type, file, meta]
parsers : [ { parser : org.apache.tika.parser.txt.TXTParser } ]
}
}
{ generateUUID { field : id } }
{ sanitizeUnknownSolrFields { solrLocator : ${solrLocator} } }
{ logDebug { format : "output record: {}", args : ["@{}"] } }
{ loadSolr: { solrLocator : ${solrLocator} } }
]
}
]
Please help me figure out what the issue could be.
Regards,
~Sri
Normally in the Flume logs you can see more detail about your error; can you paste the trace?
Maybe the morphlines can't find your Solr configuration; you can add this property in your morphlines.conf:
solrHomeDir : "/your_solr_config_files"
I hope this helps.
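For reference, a sketch of where that property would go in the morphline file (the placement is my assumption, and /your_solr_config_files is a placeholder for the directory holding solrconfig.xml and schema.xml):
solrLocator: {
  collection : gsearch
  zkHost : "codesolr-as-r3p:21810,codesolr-as-r3p:21811,codesolr-as-r3p:21812"
  # assumed location of the local Solr config used when resolving fields
  solrHomeDir : "/your_solr_config_files"
}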

Specifying bigquery table schema that resides on a file in a multipart http request

I have a text file schema.txt in which the schema for the table that I want to create is defined.
I want to include this file in the multipart HTTP request that I'm using to create my table.
How do I specify the schema.txt file in the multipart HTTP request?
Below is what I'm currently doing (not working though):
def loadTable(service, projectId, datasetId, targetTableId, sourceCsv, filenm):
try:
jobCollection = service.jobs()
jobData = {
'projectId': projectId,
'configuration': {
'load': {
'sourceUris': [sourceCsv],
'schema': filenm,
'destinationTable': {
'projectId': projectId,
'datasetId': datasetId,
'tableId': targetTableId
},
'createDisposition': 'CREATE_IF_NEEDED',
'writeDisposition': 'WRITE_TRUNCATE',
'encoding': 'UTF-8'
}
}
}
Where filenm will be 'schema.txt'.
I know I can specify the schema directly as:
'schema': {
'fields': [
{
'name': 'level',
'type': 'STRING',
},
{
'name': 'message',
'type': 'STRING',
}
]
},
But instead I want to specify the file containing the schema.
Hmm, not sure why you need a "multipart HTTP request" unless you are ingesting directly from a file. Here you are specifying a CSV input, indicating a Cloud Storage object.
See here for more info:
https://developers.google.com/bigquery/docs/developers_guide#storageimport
In any case, this is not really a BigQuery question, more of a Python question. Do you mean this?
import json

def loadTable(project_id, dataset_id, target_table, source_csv, filename):
    # read the schema fragment from the file and wrap it in braces to get valid JSON
    with open(filename, 'r') as f:
        schema = f.read()
    schema_json = json.loads('{%s}' % schema)

    job_data = {
        "projectId": project_id,
        "configuration": {
            "load": {
                "sourceUris": [source_csv],
                "schema": schema_json,
                "destinationTable": {
                    "projectId": project_id,
                    "datasetId": dataset_id,
                    "tableId": target_table
                },
                "createDisposition": "CREATE_IF_NEEDED",
                "writeDisposition": "WRITE_TRUNCATE",
                "encoding": "UTF-8"
            }
        }
    }

    print(json.dumps(job_data, indent=2))

loadTable('project_id', 'dataset_id', 'target_table', 'source_csv', '/tmp/schema.txt')
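Given the json.loads('{%s}' % schema) wrapping above, schema.txt would need to contain just the inner fields member; for example (my assumption based on that wrapping, mirroring the inline schema from the question):
"fields": [
  { "name": "level",   "type": "STRING" },
  { "name": "message", "type": "STRING" }
]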
