In the following path, I have these logs:
When the metricbeat file reaches 10MB, writing rolls over to metricbeat.1; when that file exceeds the size limit as well, it rolls to metricbeat.2, and so on.
[root# metricbeat]# ls -lorth
total 4.1M
-rwxrwxrwx 1 nifi 10 Aug 17 11:17 metricbeat.2
-rwxrwxrwx 1 nifi 10 Aug 17 11:17 metricbeat.1
-rwxrwxrwx 1 nifi 4.1M Aug 17 11:47 metricbeat
In NiFi (no cluster) I want to tail all the files stored under the path
/logs/metricbeat/
I am using the TailFile processor with the following configuration:
But the main problem is that I am getting the following error:
'File to Tail' is invalid because There is no file to tail. Files must exist when starting this processor.
If I select "Single file" mode, it picks up the "metricbeat" file.
Could you please tell me what I am doing wrong? Or how can I read all the "metricbeat" files from that path?
"Single File" mode does not require the file to exist before starting the processor, while "Multiple Files" mode does - hence the error you see.
I am creating an FTP server as a school project; most of the commands are working and I have almost nailed PORT (active mode for data transfer).
I connect to my server with the ftp client like so:
ftp localhost 4242 // where 4242 is the port on which my server is listening
And using the command ls after logging in, I receive a working ls output followed by this message:
WARNING! 8 bare linefeeds received in ASCII mode
File may not have transferred correctly.
Please note that when using ls in ftp, it switches automatically to Active Mode before using the LIST command.
What does this error signify?
Full output:
200 Active Mode Enabled.
150 Directory listing.
total 56
drwxrwxr-x 4 kade_c kade_c 4096 mai 12 15:24 .
drwxr-xr-x 38 kade_c kade_c 4096 mai 12 14:58 ..
drwxrwxr-x 8 kade_c kade_c 4096 mai 12 15:17 .git
-rw-rw-r-- 1 kade_c kade_c 1726 mai 11 10:35 Makefile
-rw-rw-r-- 1 kade_c kade_c 161 mai 11 11:43 README.txt
-rwxrwxr-x 1 kade_c kade_c 29368 mai 12 15:24 server
drwxrwxr-x 4 kade_c kade_c 4096 mai 2 18:40 server_src
WARNING! 8 bare linefeeds received in ASCII mode
File may not have transferred correctly.
226 LIST complete.
And finally, here is the part of the code that creates and connects the data socket and runs the ls -la:
server_write(client, "150 Directory listing.\r\n");
if (connect_data(client) == -1) // Creates socket and connects to it
{
server_write(client, "520 Impossible to reach client.\r\n");
return;
}
ofd = xdup(1);
xdup2(client->data.socket, 1);
system("ls -la");
xdup2(ofd, 1);
server_write(client, "226 LIST complete.\r\n");
close_data(client, -1);
This issue appears because you are transferring files in ASCII mode. Switching to binary mode will make the warning disappear.
Once you login to the FTP server, type binary and then start downloading.
ftp> binary
200 Type set to I.
You only have to run this command once per FTP session.
I'd guess that you send LFs to the client, and the client (rightly) expects CRLFs and warns about the missing CRs.
According to the FTP specification, RFC 959, section 3.4 (Transmission Modes), in ASCII mode you must use CRLF exclusively:
For the purpose of standardized transfer, the sending host will
translate its internal end of line or end of record denotation
into the representation prescribed by the transfer mode and file
structure, and the receiving host will perform the inverse
translation to its internal denotation. ... End-of-line in an ASCII file with no
record structure should be indicated by <CRLF>
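A minimal sketch of how the listing could be sent with CRLF line endings, reading the output of ls -la through popen instead of redirecting stdout onto the data socket (client->data.socket comes from the code above; the rest is illustrative):

#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Send "ls -la" output over the data socket, terminating every line with CRLF. */
static void send_listing(int data_socket)
{
    FILE *ls = popen("ls -la", "r");
    char line[1024];

    if (ls == NULL)
        return;
    while (fgets(line, sizeof(line), ls) != NULL)
    {
        size_t len = strcspn(line, "\r\n"); /* drop whatever ending ls produced */
        write(data_socket, line, len);
        write(data_socket, "\r\n", 2);      /* ASCII mode expects CRLF */
    }
    pclose(ls);
}

Called as send_listing(client->data.socket) in place of the xdup/xdup2/system block, this should make the bare-linefeed warning go away.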
I have a script that I run manually every hour in my Laravel project; it lives under this path:
/var/www/name/storage/scripts/getListOfClassesFromSubjects.pl
What I normally do is cd to /scripts/ and manually run:
./getListOfClassesFromSubjects.pl
And the script works fine.
Today, I set up a crontab entry to automate this (obviously).
0,30 * * * * /var/www/name/storage/scripts/getListOfClassesFromSubjects.pl >> /var/www/name/storage/logs/schedulizer.log 2>&1
My log contains this:
DBD::SQLite::db prepare failed: no such table: subject_urls at /var/www/loop/storage/scripts/getListOfClassesFromSubjects.pl line 56.
This is an anomaly, because when I run the script manually it works fine.
These are my database file's permissions:
-rw-r--r-- 1 root root 11750400 Aug 4 12:30 database.sqlite
So I figured this was a permissions issue and changed the DB file to 755:
-rwxr-xr-x 1 root root 11750400 Aug 4 12:30 database.sqlite
Still the same issue
For the path to the database, your code uses a relative path, which assumes the current directory is the directory in which the script resides. Under cron it is not (cron starts the job in a different working directory, typically the user's home).
Instead of
"../database.sqlite"
use
use FindBin qw( $RealBin );
"$RealBin/../database.sqlite"
or
use FindBin qw( $RealBin );
chdir($RealBin);
"../database.sqlite"
In Spring-XD the file source detects new files in an input directory and streams their content through the pipeline.
Is there an analogous sink that creates a separate result file in an output directory for each input (e.g. with the original file names), rather than a single file to which all results are appended? According to http://docs.spring.io/spring-xd/docs/current/reference/html/#file-sink: "The file sink uses the stream name as the default name for the file it creates, and places the file in the /tmp/xd/output/ directory."
Scroll down to the options in that document you referenced.
Use --nameExpression=....
If you are using mode=contents, the original file name is available in the file_name header:
--nameExpression=headers[file_name]
mode=lines doesn't currently capture the file name (it will be fixed in the next release).
If you are using mode=ref, you need to set a header.
Minimal Working Example
In Spring-XD
stream create --name test --definition "file --mode=contents | b:file --binary=true --dirExpression='''/tmp/out''' --nameExpression=headers[file_name]" --deploy
then
echo "1111" > /tmp/xd/input/test/file1.txt
echo "2222" > /tmp/xd/input/test/file2.txt
results in
ll /tmp/out/
-rw-rw-r-- 1 rmv rmv 5 Jul 7 10:19 file1.txt
-rw-rw-r-- 1 rmv rmv 5 Jul 7 10:19 file2.txt
I'm getting an error when I run the command below:
nagios3 -v /etc/nagios3/nagios.cfg
Error in configuration file '/etc/nagios3/nagios.cfg' - Line 469 (Check result path is not a valid directory) Error processing main config file
So I looked at ls -l /var/lib/nagios3/:
drwxr-x--- 3 nagios nagios 1024 Mar 14 21:13 spool
Why am I getting the error in this case? I suspect my /var/lib/nagios3/spool/checkresult/check2JcDx5 file contains a wrong line. When I run the command below, I get this output:
# cat check2JcDx5
file_time=1363378360
host_name=localhost
service_description=HTTP
check_type=0
check_options=0
scheduled_check=1
reschedule_check=1
latency=0.122000
start_time=1363378360.122234
Disable SELinux:
# getenforce
# setenforce 0
Edit /etc/selinux/config. Set SELINUX=disabled.
You may be able to install the nagios-selinux package to add the policy to run nagios in an selinux environment. Better than disabling your existing security.
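For reference, a quick way to confirm SELinux is what is getting in the way before choosing between the two approaches (illustrative commands; the package name and package manager may differ on your distribution):

# getenforce                                          # is SELinux currently Enforcing?
# grep denied /var/log/audit/audit.log | grep nagios  # any AVC denials for nagios?
# setenforce 0                                        # temporarily switch to permissive
# yum install nagios-selinux                          # or keep enforcing and install the policy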
I'm trying to get Nagios to execute a custom Java command, but I always get error 126.
[1360324906] Warning: Return code of 126 for check of service 'Java Process Test' on host 'localhost' was out of bounds.Make sure the plugin you're trying to run is executable.
Now I've checked a few things, but as I'm a newbie here I probably missed something.
Here is some information about the environment:
-rwxr-xr-x. 1 root root 2938 Aug 17 15:39 check_wave
drwxr-xr-x. 2 root root 4096 Jan 13 15:08 eventhandlers
drwxr-xr-x. 2 root root 4096 Feb 7 17:22 jars
-rwxr-xr-x. 1 root root 38696 Aug 17 15:39 negate
-rwxr-xr-x. 1 root root 886 Feb 8 12:47 test_java_plugin.sh
test_java_plugin.sh is my test script and "jars" is the directory where the jar is located.
The script is this:
#!/bin/bash
#This will get the output of process
output=$(/usr/java/latest/bin/java -cp .:/usr/lib64/nagios/plugins/jars/SimpleNagiosPlugin.jar it.nagios.SimpleTest)
#This will catch the result returned by last process that is our java command
java_result=$?
echo "$java_result: $output"
exit $java_result
and it works perfectly when launched manually from the console:
[root@bw plugins]# ./test_java_plugin.sh
0: This is an OK message
I forgot to add the command definition:
# 'test_java_plugin' command definition
define command{
    command_name    test_java_plugin
    command_line    $USER1$/test_java_plugin.sh
}
Also, as requested in the comments, I'm adding the current Java code of my test class:
public static void main(String[] args) {
    System.out.println("This is an OK message");
    System.exit(0);
}
Just launching the command from a shell, I still get 0:
[root@bw plugins]# /usr/java/latest/bin/java -cp .:/usr/lib64/nagios/plugins/jars/SimpleNagiosPlugin.jar it.nagios.SimpleTest
This is an OK message
[root@bw plugins]# echo $?
0
What else should I check to determine what is going wrong here?
I faced a similar issue and found that SELinux was blocking me. You can check for this in /var/log/audit/audit.log.
If you see denied errors for nagios_t/nagios_system_plugin_t, add those domains to SELinux's permissive list with the command below rather than turning SELinux off completely:
semanage permissive -a nagios_t
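For example (the exact denial lines will vary; add whichever types actually show up in your audit log):

# grep denied /var/log/audit/audit.log | grep nagios
# semanage permissive -a nagios_t
# semanage permissive -a nagios_system_plugin_t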
You should try to run test_java_plugin.sh as the nagios user; you can give nagios a shell temporarily. Take into account that the root environment is different from the nagios environment. When running test_java_plugin.sh as nagios, you can add "env > env_log_file" to see what the environment looks like at run time, as in the sketch below.
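A sketch of how that could look (the plugin path comes from the question; the original shell of the nagios user is assumed here to be /sbin/nologin):

# usermod -s /bin/bash nagios             # temporarily give nagios a login shell
# su - nagios
$ /usr/lib64/nagios/plugins/test_java_plugin.sh
$ env > /tmp/nagios_env                   # capture the environment seen at run time
$ exit
# usermod -s /sbin/nologin nagios         # restore the original shell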
Good luck.
Error 126 means that the plugin was found but is not executable.
You can try 2 things.
Try running the plugin as nagios user and check for the error.
or
This worked for one issue I had; try it out and hopefully it helps:
/bin/bash -l -c "/#{path to plugin}/test_java_plugin.sh"