I have the following problem: I am new to Logstash and I am simply trying to transfer information from one .log file to another. Logstash is currently running on Windows 10, and my config file looks like this:
input {
  file {
    path => "C:/Downloads/Test/input.log"
    sincedb_path => "NUL"
    start_position => "beginning"
    ignore_older => 0
  }
}
output {
  file {
    path => "C:/Downloads/Test/output.log"
  }
}
The input file looks like this:
INFO 2018-11-12 13:47:22,378 [AGENT] - [KEY] [METHOD] MESSAGE
The next step would be to apply a grok filter so that only the ERROR lines are transferred to the output file.
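A rough, untested sketch of what I have in mind, assuming the level keyword is always the first token on each line:

filter {
  grok {
    match => { "message" => "%{LOGLEVEL:level} %{GREEDYDATA:rest}" }
  }
  if [level] != "ERROR" {
    drop { }
  }
}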
Can anyone help me please?
In my project I have a table of projects. For each project there is a column for downloading a PDF file. Now I want to be able to download all the files as a single .rar archive. Here is the code for downloading a single file:
routes.js
app.get('/api/download/archive/:filename', function(req, res){
  res.download("public/uploads/" + req.params.filename, req.params.filename);
});
archive.js
$scope.downloadPdf = function(obj){
  $http.get('api/download/archive/' + obj.Documentation)
    .success(function(data){
      window.open('api/download/archive/' + obj.Documentation);
    });
};
Unfortunately, RAR is closed-source software, so the only way to create an archive is to install the command-line utility called rar and then run its rar a command in a child process to compress the files.
To install rar on Mac I had to run brew install homebrew/cask/rar. You can find the installation instructions for other platforms here.
After you install it you can make use of child_process like this:
const { exec } = require('child_process');
const { promisify } = require('util');
const fs = require('fs');
const path = require('path');

// Promisify `unlink` and `exec`, as by default they accept callbacks
const unlinkAsync = promisify(fs.unlink);
const execAsync = promisify(exec);

(async () => {
  // Generate a different name each time to avoid any possible collisions
  const archiveFileName = `temp-archive-${(new Date()).getTime()}.rar`;
  // The files that are going to be compressed
  const filePattern = `*.jpg`;

  // Run the `rar` utility in a separate process
  await execAsync(`rar a ${archiveFileName} ${filePattern}`);

  // If no error was thrown, the archive has been created
  console.log('Archive has been successfully created');

  // Now we can allow downloading it.
  // Once the archive is no longer needed, uncomment the lines below to delete it:
  // await unlinkAsync(path.join(__dirname, archiveFileName));
  // console.log('Deleted the archive');
})();
To run the example, put some .jpg files into the project directory.
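To then let the client download the result through a route like the one in the question, here is a rough sketch (assuming the same Express app as in routes.js; createArchive is a hypothetical helper wrapping the rar logic above):

const path = require('path');

// Hypothetical route; `createArchive` is assumed to run the `rar a` command
// as above and resolve to the generated archive file name.
app.get('/api/download/archive', async (req, res) => {
  const archiveFileName = await createArchive();
  res.download(path.join(__dirname, archiveFileName), 'documentation.rar');
});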
PS: If you can use a different archive format (like .zip), you could make use of something like archiver instead. It lets you create a zip stream and pipe it to the response directly, so you would not need to create any files on disk at all.
But that's a matter of a different question.
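Still, for completeness, a minimal sketch of that streaming approach, assuming Express and the archiver package (the file names are placeholders):

const express = require('express');
const archiver = require('archiver');
const path = require('path');

const app = express();

app.get('/api/download/archive-zip', (req, res) => {
  // Tell the browser this response is a download
  res.attachment('documentation.zip');
  const archive = archiver('zip');
  archive.on('error', (err) => res.status(500).end(err.message));
  // Pipe the zip stream straight into the HTTP response; nothing is written to disk
  archive.pipe(res);
  // Placeholder file names; add whichever files belong to the project
  archive.file(path.join('public/uploads', 'doc1.pdf'), { name: 'doc1.pdf' });
  archive.file(path.join('public/uploads', 'doc2.pdf'), { name: 'doc2.pdf' });
  archive.finalize();
});

app.listen(3000);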
Try WinrarJs.
The project lets you create a RAR file, read a RAR file, and extract a RAR archive.
Here's the sample from GitHub:
/* Create a RAR file */
// Assumption: the module exposes a `Winrar` constructor; adjust the
// require name to match however the package is published.
var Winrar = require('winrarjs');

var winrar = new Winrar('/path/to/file/test.txt');
// add another file
winrar.addFile('/path/to/file/test2.txt');
// add multiple files
winrar.addFile(['/path/to/file/test3.txt', '/path/to/file/test4.txt']);
// set output file
winrar.setOutput('/path/to/output/output.rar');
// set options
winrar.setConfig({
  password: 'testPassword',
  comment: 'rar comment',
  volumes: '10',      // split volumes in megabytes
  deleteAfter: false, // delete source files after the rar process completes
  level: 0            // compression level 0 - 5
});
// archive the files
winrar.rar().then((result) => {
  console.log(result);
}).catch((err) => {
  console.log(err);
});
Unfortunately Node.js doesn't natively support RAR compression/decompression. I was frustrated with this too, so I created a module called "super-winrar" that makes it super easy to deal with rar files in Node.js :)
Check it out: https://github.com/KiyotakaAyanokojiDev/super-winrar
Example creating a rar file "pdfs.rar" and appending all the PDF files to it:
const Rar = require('super-winrar');
const rar = new Rar('pdfs.rar');
rar.on('error', error => console.log(error.message));
rar.append({files: ['pdf1.pdf', 'pdf2.pdf', 'pdf3.pdf']}, async (err) => {
  if (err) return console.log(err.message);
  console.log('pdf1, pdf2 and pdf3 got successfully put into rar file!');
  rar.close();
});
I would like to know how to update the permissions for a file that is not getting deployed by Puppet. The file is actually getting deployed by an RPM. I have tried the following with no luck:
file { '/some/directory/myfile.conf':
  ensure  => 'file',
  replace => 'no',
  owner   => 'someuser',
  group   => 'somegroup',
  mode    => '0644',
}
This actually removes the content of the file and leaves it empty, although it does set the right owner, group, and mode. I would like to keep the content. I am using Puppet 2.7.3.
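The best workaround I can think of is a minimal version that manages only the metadata; assuming a file resource with no content or source attribute leaves the contents alone, this should keep the file intact (untested on 2.7.3):

file { '/some/directory/myfile.conf':
  owner => 'someuser',
  group => 'somegroup',
  mode  => '0644',
}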
I am pretty new to Logstash and Elasticsearch.
I have a problem that I am not able to figure out.
My test setup is two Elasticsearch nodes running on OS X, plus Kibana and Logstash, all on the current stable releases.
I am reading a log with Logstash that contains entries like the following:
64.12.89.186 {"register":"07-015", "tag":["Server1", "Proxy", "Web", "picture"], "comment":"texttext"}
149.174.107.97 {"register":"07-015", "tag":["Server1", "Proxy", "Web", "picture"], "comment":"texttext"}
149.174.110.102 {"register":"07-015", "tag":["Server1", "Proxy", "Web", "picture"], "comment":"texttext"}
and writing them to Elasticsearch.
The Logstash configuration file is this:
input {
  file {
    path => ["/scenario_02/data/ipinfo4_log"]
    stat_interval => 1
    discover_interval => 5
  }
}
filter {
  grok {
    match => { "message" => "%{IP:ip} %{GREEDYDATA:data}" }
  }
  json {
    source => "data"
  }
}
filter {
  geoip {
    source => "ip"
    target => "geoip"
  }
}
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
So far so good. New entries are appended at the end of the log file.
Now to my problem: the file is only read when I start Logstash.
When new entries arrive in the log file while Logstash is running, no new documents are written to Elasticsearch.
When I stop Logstash and start it again, the new entries from the log file are added.
Did I misunderstand something, or should the log file be checked for new entries automatically at intervals? Or do I always have to restart Logstash to read the file again?
Thanks a lot for your help.
Paris
Try adding start_position => "end" after the path => ... line. It can help, although "end" is already the default for start_position. BTW, if you are using lumberjack, files are always checked for new info by default.
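For reference, an untested version of the input block with that option spelled out:

input {
  file {
    path => ["/scenario_02/data/ipinfo4_log"]
    start_position => "end"   # the default: tail the file for newly appended lines
    stat_interval => 1        # check the file for new content every second
    discover_interval => 5
  }
}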
I have a strange problem with Logstash. I am providing a log file as input to Logstash. The configuration is as follows:
input {
  file {
    type => "apache-access"
    path => ["C:\Users\spanguluri\Downloads\logstash\bin\test.log"]
  }
}
output {
  elasticsearch {
    protocol => "http"
    host => "10.35.143.93"
    port => "9200"
    index => "latestindex"
  }
}
I am already running the Elasticsearch server and verifying with curl queries whether the data is being received. The problem is that no data is received when the input is a file. However, if I change the input to stdin { } as follows, it sends all input data smoothly:
input {
  stdin { }
}
output {
  elasticsearch {
    protocol => "http"
    host => "10.35.143.93"
    port => "9200"
    index => "latestindex"
  }
}
I don't get where I am going wrong. Can someone please take a look at this?
You should set start_position under your file section:
start_position => "beginning"
It defaults to end and so won't read any existing lines in your file, only newly added ones:
start_position
Value can be any of: "beginning", "end"
Default value is "end"
Choose where Logstash starts initially reading files: at the beginning or at the end. The default behavior treats files like live streams and thus starts at the end. If you have old data you want to import, set this to ‘beginning’.
This option only modifies “first contact” situations where a file is new and not seen before. If a file has already been seen before, this option has no effect.
In addition to the provided answer, I had to change the path from c:\my\path to c:/my/path in order for it to read the files.
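Putting both fixes together, an untested version of the input block might look like this (sincedb_path => "NUL" is optional, for when you want Logstash on Windows to forget read positions between runs, as in one of the examples above):

input {
  file {
    type => "apache-access"
    path => ["C:/Users/spanguluri/Downloads/logstash/bin/test.log"]
    start_position => "beginning"  # read existing lines, not just new ones
    sincedb_path => "NUL"          # optional: do not persist read positions (Windows)
  }
}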
I'm trying to configure the rewrite rules for lighttpd so that a specific CakePHP controller action is executed if a file is not found. The controller action will generate the data (a png file), save it for future use, and then serve it to the client. The next time someone tries to access the file, it will be served by lighttpd directly without executing any PHP. In other words, the png files are cached so there is no need to recreate them.
From what I can tell, url.rewrite-if-not-file is ignored if rewrite-once has a match. Thus, the following can serve up my cached files but not my uncached files.
url.rewrite-if-not-file = (
  "^/scan/(.+)\.png" => "/mycontroller/scan/$1"
)
url.rewrite-once = (
  "^/(css|files|img|js)/(.*)" => "/$1/$2",
  "^/favicon.ico" => "/favicon.ico",
  "^/scan/(.+\.png)" => "/scan/$1",
  "^([^\?]*)(\?(.+))?$" => "/index.php?url=$1&$3",
)
The only solution I can think of now is to delete the 3rd rule and modify ^([^\?]*)(\?(.+))?$ so it ignores URLs starting with /scan/.
Any other suggestions?
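A sketch of that idea, untested and assuming your lighttpd build uses PCRE (needed for the negative lookahead):

# Existing files under /scan/ are served directly; missing ones fall through
# to rewrite-if-not-file because the catch-all rule now skips /scan/ URLs.
url.rewrite-if-not-file = (
  "^/scan/(.+)\.png" => "/mycontroller/scan/$1"
)
url.rewrite-once = (
  "^/(css|files|img|js)/(.*)" => "/$1/$2",
  "^/favicon.ico" => "/favicon.ico",
  "^(?!/scan/)([^\?]*)(\?(.+))?$" => "/index.php?url=$1&$3",
)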
I'd do something like this in a controller:
if (file_exists($file_name)) {
    $this->redirect($file_name);
} else {
    $this->redirect(array('action' => 'some_action', 'controller' => 'some_controller'));
}