Logstash file reading interval not working

I am pretty new to Logstash and Elasticsearch.
I have a problem that I am not able to figure out.
My test setup is two Elasticsearch (EL) nodes running on OS X, plus Kibana and Logstash, all on the current stable releases.
With Logstash I am reading a log that contains entries like the following:
64.12.89.186 {"register":"07-015", "tag":["Server1", "Proxy", "Web", "picture"], "comment":"texttext"}
149.174.107.97 {"register":"07-015", "tag":["Server1", "Proxy", "Web", "picture"], "comment":"texttext"}
149.174.110.102 {"register":"07-015", "tag":["Server1", "Proxy", "Web", "picture"], "comment":"texttext"}
and writing them to EL.
My Logstash configuration file looks like this:
input {
  file {
    path => ["/scenario_02/data/ipinfo4_log"]
    stat_interval => 1
    discover_interval => 5
  }
}
filter {
  grok {
    match => { "message" => "%{IP:ip} %{GREEDYDATA:data}" }
  }
  json {
    source => "data"
  }
}
filter {
  geoip {
    source => "ip"
    target => "geoip"
  }
}
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
So far so good. New entries are appended to the end of the log file.
Now to my problem: the file is only read when I start Logstash.
When new entries arrive in the log file while Logstash is running, no new documents are written to EL.
When I stop Logstash and start it again, the new entries from the log file are added.
Did I misunderstand something, and the log file is not checked for new entries automatically at intervals? Or do I always have to restart Logstash to read the file again?
Thanks a lot for your help.
Paris

Try:
start_position => "end" after 'path => ...'. It can help, but "end" is already the default for start_position. By the way, if you are using lumberjack, then files are always checked for new content by default.
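
For reference, a sketch of where that option would sit in the input block from the question (the start_position line is the suggestion above; everything else is copied from the question):

input {
  file {
    path => ["/scenario_02/data/ipinfo4_log"]
    start_position => "end"   # suggested above; "end" is also the default
    stat_interval => 1        # how often (in seconds) watched files are checked for new lines
    discover_interval => 5    # how often the path pattern is expanded to discover new files
  }
}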

Related

Logstash write from file to file

I have the following problem: I am new to Logstash and I am trying to simply transfer information from one .log file to another. My config file looks like this; Logstash is currently running on Windows 10.
input {
  file {
    path => "C:/Downloads/Test/input.log"
    sincedb_path => "NUL"
    start_position => "beginning"
    ignore_older => 0
  }
}
output {
  file {
    path => "C:/Downloads/Test/output.log"
  }
}
The input file looks like this:
INFO 2018-11-12 13:47:22,378 [AGENT] - [KEY] [METHOD] MESSAGE
The next step would be to apply a grok filter so that only the ERROR lines are transferred to the output file.
Can anyone help me please?
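
This question received no answer in the thread, but a minimal sketch of that next step might look like the following (my assumption, not from the thread: grok parses the leading log level, and the file output becomes conditional):

filter {
  grok {
    # The sample line starts with a log level such as INFO or ERROR.
    match => { "message" => "^%{LOGLEVEL:level} %{GREEDYDATA:rest}" }
  }
}
output {
  # Only events whose parsed level is ERROR reach the output file.
  if [level] == "ERROR" {
    file {
      path => "C:/Downloads/Test/output.log"
    }
  }
}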

Use a .db file with MS Chatbot

Here is my situation: I'm developing a chatbot on the Microsoft Azure platform using Node.js. For the moment the bot messages are hard-coded in .json files.
I want to improve it by using calls to a database.
I have a SQLite database file that works fine (I used a browser for SQLite and made my requests). But the problem is:
How can I use my .db file from my project? Is it possible to somehow "read" the database file from my dialogs and then make my request to get what I need from my database?
I know that you can call a database with the chatbot, but the issue here is that I only have the file and nothing deployed to call.
Example of what the result should give :
"Hey chatbot, tell me about Mona Lisa"
This triggers the dialogs that will ask the database: "SELECT info FROM arts WHERE arts.title LIKE '%Mona Lisa%'";
And send the result in session.send(results).
Thanks !
Note: I'm just an intern in my company; the database file is the only thing they gave me, and I have to find a solution with it.
I got the solution after some research:
First you need to install sqlite3 (with npm, for example), then use this at the beginning of your code:
var sqlite3 = require('sqlite3').verbose();
var path = require('path');
// Resolve the .db file relative to the directory of the current script.
var db_path = path.resolve(__dirname, name_Of_Your_DB);
And then work on your file with the request you need :
// Open the database file read-only; the callback reports connection errors.
var db = new sqlite3.Database(db_path, sqlite3.OPEN_READONLY, (err) => {
  if (err) {
    return console.error(err.message);
  }
  //console.log("Stuff that is processed only if no error happened.");
});
var req = "YOUR REQUEST";
// db.get() runs the query and passes the first matching row to the callback.
db.get(req, [your_parameter], (err, row) => {
  if (err) {
    return console.error(err.message);
  }
  // Use `row` here, e.g. send its content back to the user.
});
// close() waits for the queued queries to finish before closing the handle.
db.close((err) => {
  if (err) {
    return console.log(err.message);
  }
});
The documentation about Node.js and sqlite3 is quite complete:
http://www.sqlitetutorial.net/sqlite-nodejs/query/
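
Tying this back to the Mona Lisa example from the question, a sketch could look like this (the arts table, the LIKE query, and session.send come from the question; the parameter binding and the fallback message are my additions):

var req = "SELECT info FROM arts WHERE arts.title LIKE ?";
db.get(req, ['%Mona Lisa%'], (err, row) => {
  if (err) {
    return console.error(err.message);
  }
  // `row` is undefined when no artwork matches the title.
  session.send(row ? row.info : 'Nothing found about that artwork.');
});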

Update permissions to a file not deployed by puppet

I would like to know how to update the permissions of a file that is not deployed by Puppet. The file is actually deployed by an RPM. I have tried the following with no luck:
file { '/some/directory/myfile.conf':
  ensure  => 'file',
  replace => 'no',
  owner   => 'someuser',
  group   => 'somegroup',
  mode    => '0644',
}
This actually removes the content of the file and leaves an empty file; however, it does set the right owner, group, and mode. I would like to keep the content. I am using Puppet 2.7.3.
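
The thread offers no answer here, but one variant that may be worth trying (my assumption, not from the thread) is to drop ensure and replace entirely, so Puppet manages only the metadata and has no reason to touch the content:

# Sketch: manage ownership and mode only; the file's content is left alone.
file { '/some/directory/myfile.conf':
  owner => 'someuser',
  group => 'somegroup',
  mode  => '0644',
}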

Logstash not reading file input

I have a strange problem with Logstash. I am providing a log file as input to Logstash. The configuration is as follows:
input {
  file {
    type => "apache-access"
    path => ["C:\Users\spanguluri\Downloads\logstash\bin\test.log"]
  }
}
output {
  elasticsearch {
    protocol => "http"
    host => "10.35.143.93"
    port => "9200"
    index => "latestindex"
  }
}
I am already running the Elasticsearch server and verifying with curl queries whether the data is being received. The problem is that no data is received when the input is a file. However, if I change the input to stdin { } as follows, it sends all input data smoothly:
input {
  stdin { }
}
output {
  elasticsearch {
    protocol => "http"
    host => "10.35.143.93"
    port => "9200"
    index => "latestindex"
  }
}
I don't get where I am going wrong. Can someone please take a look at this?
You should set start_position under your file section:
start_position => "beginning"
It defaults to end and so won't read any existing lines in your file, only newly added ones:
start_position
Value can be any of: "beginning", "end"
Default value is "end"
Choose where Logstash starts initially reading files: at the beginning
or at the end. The default behavior treats files like live streams and
thus starts at the end. If you have old data you want to import, set
this to ‘beginning’
This option only modifies “first contact” situations where a file is
new and not seen before. If a file has already been seen before, this
option has no effect.
In addition to the provided answer, I had to change the path from c:\my\path to c:/my/path in order for it to read the files.
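
Putting both fixes together, the input section would look something like this (a sketch combining the two answers above; the rest of the config stays as in the question):

input {
  file {
    type => "apache-access"
    path => ["C:/Users/spanguluri/Downloads/logstash/bin/test.log"]
    start_position => "beginning"   # read the existing lines on first contact
  }
}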

Double log statements

I have a weird issue; it might be something silly, but I can't find where the problem is.
I am developing an application on CakePHP 2.x, and when I log data from the controller it appears twice in the log. Something like this:
2013-05-24 11:50:19 Debug: excel file uploaded
2013-05-24 11:50:19 Debug: excel file uploaded
2013-05-24 11:50:19 Debug: fire test
2013-05-24 11:50:19 Debug: fire test
Just to add some fun, it doesn't happen in all functions in that controller, only in two out of six. It annoys me a lot and I don't see which way I should dig to get rid of it.
Any ideas?
EDIT:
OK, I found that this happens when I log to two different files in one method.
When I change the line CakeLog::write('time', ...); to CakeLog::write('debug', ...);, everything works fine. Like in the following method:
function file_upload() {
    if (!$this->request->data) {
    } else {
        CakeLog::write('time', 'start working at: ' . date('m/d/Y', strtotime("now")));
        $data = Sanitize::clean($this->request->data);
        CakeLog::write('debug', 'test statement');
        if ($data['Scrap']['excel_submittedfile']['type'] === 'application/vnd.ms-excel' && $data['Scrap']['csv_submittedfile']['type'] === 'text/csv') {
            $tmp_xls_file = $data['Scrap']['excel_submittedfile']['tmp_name'];
            $xls_file = $data['Scrap']['excel_submittedfile']['name'];
            $tmp_csv_file = $data['Scrap']['csv_submittedfile']['tmp_name'];
            $csv_file = $data['Scrap']['csv_submittedfile']['name'];
            $upload_dir = WWW_ROOT . "/files/";
            if (file_exists($upload_dir) && is_writable($upload_dir)) {
                if (move_uploaded_file($tmp_xls_file, $upload_dir . $xls_file) && move_uploaded_file($tmp_csv_file, $upload_dir . $csv_file)) {
                    CakeLog::write('debug', 'excel file uploaded');
                    $this->redirect(array('action' => 'edit', $xls_file, $csv_file));
                } else {
                    echo 'upload failed';
                }
            } else {
                echo 'Upload directory is not writable, or does not exist.';
            }
        } else {
            echo 'make sure the files are in correct format';
        }
    }
}
I guess it has something to do with the declarations of log files in bootstrap.php. So it's not that big a problem, just annoying.
This happens because your call
CakeLog::write('time', 'start working at: ' . date('m/d/Y', strtotime("now")));
will attempt to write a log of the type "time". Since there is no stream configured to handle that type, CakeLog will create a "default" stream for you to handle this log call.
The problem is that from then on you will have a "default" stream configured that catches all logs and doubles them for debug and error logs.
The solution is to configure the log properly in the bootstrap.php file, like this:
CakeLog::config('time_stream', array(
    'engine' => 'FileLog',
    'types' => array('time'), // <-- here is the log type of 'time'
    'file' => 'time',         // <-- this will go to time.log
));
Of course, if you use other log types you will need to configure streams for those as well; otherwise the default catch-all stream will be configured for you and you will run into the same problem again.
Good luck!
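
To make the effect concrete, here is a short usage sketch (the write call is the one from the question; with the 'time_stream' configuration above it now lands only in time.log):

// Routed to time.log by the 'time_stream' configuration above;
// the debug and error streams no longer see it, so no more double entries.
CakeLog::write('time', 'start working at: ' . date('m/d/Y', strtotime("now")));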
