fatal: Unsupported command: Users.Ref "KPLUS" - git-p4

I'm migrating several Perforce projects to Git. One of them, though, is failing at 18% of the process with:
fatal: Unsupported command: Users.Ref "KPLUS"
It looks like git fast-import is trying to execute the text in the file where it should just be stored (I think).
The fast-import crash report shows me
fast-import crash report:
fast-import process: 28327
parent process : 28325
at Fri Sep 11 14:34:26 2015
fatal: Unsupported command: Users.Ref "KPLUS"
Most Recent Commands Before Crash
---------------------------------
....
....
commit refs/remotes/p4/master
committer USERNAME <EMAIL> 1175609377 +0100
data <<EOT
* Users.Ref "KPLUS"
Active Branch LRU
-----------------
active_branches = 1 cur, 5 max
pos clock name
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1) 714 refs/remotes/p4/master
Inactive Branches
-----------------
refs/remotes/p4/master:
status : active loaded
tip commit : 307170cc21264c58ab1943c16f1d2943c1a44f45
old tree : 2f45d5c6d9cbe56e5f335f92b21316ad272f3504
cur tree : 2f45d5c6d9cbe56e5f335f92b21316ad272f3504
commit clock: 714
last pack : 0
Marks
-----
-------------------
END OF CRASH REPORT
The text is in an XML file that doesn't seem to be well formed, but I would assume that shouldn't matter.

Found the cause in the commit messages. There were lines consisting only of "EOT" in a message; git-p4 writes commit messages as fast-import data blocks delimited by EOT (data <<EOT ... EOT), so the stray "EOT" line terminated the block early and every line after it was interpreted as a fast-import command. Changing the git-p4 script to use EOM instead of EOT as the delimiter solved the issue.
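To see the failure mode in isolation, here is a minimal sketch (the repository path, branch, and committer are made up) that feeds git fast-import a data block whose message contains the delimiter line itself:
git init /tmp/fi-demo && cd /tmp/fi-demo
git fast-import <<'STREAM'
commit refs/heads/master
committer Someone <someone@example.com> 1175609377 +0100
data <<EOT
First line of the commit message
EOT
this line was meant to be part of the message
EOT
STREAM
The stray EOT ends the message early, so fast-import dies with fatal: Unsupported command: this line was meant to be part of the message. Note that switching the delimiter to EOM only moves the problem to messages containing a literal EOM line; the fast-import stream format also accepts an exact byte count (data <N>), which avoids delimiter collisions entirely.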

Related

NiFi - TailFile - multiple files

In the path below, I have the following logs:
When the metricbeat file reaches 10 MB, writing rolls over to metricbeat.1; when that file exceeds the limit as well, it rolls to metricbeat.2, and so on.
[root# metricbeat]# ls -lorth
total 4.1M
-rwxrwxrwx 1 nifi 10 Aug 17 11:17 metricbeat.2
-rwxrwxrwx 1 nifi 10 Aug 17 11:17 metricbeat.1
-rwxrwxrwx 1 nifi 4.1M Aug 17 11:47 metricbeat
In NiFi (no cluster) I want to tail all the files that are stored under the path
/logs/metricbeat/
I am using the TailFile processor with the following configuration. But the main problem is that I am getting the following error:
'File to Tail' is invalid because There is no file to tail. Files must exist when starting this processor.
If I select "Single file" mode, it picks up the metricbeat file.
Could you please tell me what I am doing wrong? Or how can I read all the "metricbeat" files from that path?
"Single File" mode does not require the file to exist before starting the processor, while "Multiple Files" mode does - hence the error you see.

How to run multiple Gatling simulations in a sequence (sequence will be provided by us)

When I run Gatling from my command prompt I get a list of simulations like this:
Choose a simulation number: 1,2,3,4
When I type 3, the third simulation runs, but this sequence is auto-generated. Suppose I want to list them in an order of my choosing, like:
3,2,1,4
Is it possible to give a user-defined sequence for the simulations list? If yes, how?
As far as I know, Gatling itself has no way to run simulations in a given sequence, but you can achieve this by writing, for example, a Bash script. For running Gatling tests via Maven it could look like this:
#!/bin/bash
#params
SIMULATION_CLASSES=""

#usage
function usage() {
    echo "usage: $0 options"
    echo "This script runs Gatling load tests"
    echo ""
    echo "OPTIONS:"
    echo "Run options:"
    echo "   -s [*] Simulation classes (comma separated)"
}

#INIT PARAMS
while getopts "s:" OPTION
do
    case $OPTION in
        s) SIMULATION_CLASSES=$OPTARG;;
        ?) usage
           exit 1;;
    esac
done

#checks
if [[ -z "$SIMULATION_CLASSES" ]]; then
    usage
    exit 1
fi

#run scenarios
SIMULATION_CLASSES_ARRAY=($(echo "$SIMULATION_CLASSES" | tr "," "\n"))
for SIMULATION_CLASS in "${SIMULATION_CLASSES_ARRAY[@]}"
do
    echo "Run scenario for $SIMULATION_CLASS"
    mvn gatling:execute -Dgatling.simulationClass="$SIMULATION_CLASS"
done
And sample usage
./campaign.sh -s package.ScenarioClass1,package.ScenarioClass2
If you use the Gatling SBT Plugin (demo project here), you can do, in Bash:
sbt "gatling:testOnly sims.ReadProd02Simulation" "gatling:testOnly sims.ReadProd03Simulation"
This first runs only the scenario ReadProd02Simulation, and then runs ReadProd03Simulation. No Bash script needed.
The output will be first the output from ReadProd02Simulation and then ReadProd03Simulation, like so:
08:01:57 46 ~/dev/ed/gatling-sbt-plugin-demo[master*]$ sbt "gatling:testOnly sims.ReadProd02Simulation" "gatling:testOnly sims.ReadProd03Simulation"
[info] Loading project definition from /home/.../gatling-sbt-plugin-demo/project
[info] Set current project to gatling-sbt-plugin-demo...
Simulation sims.ReadProd02Simulation started...
...
Simulation sims.ReadProd02Simulation completed in 16 seconds
Parsing log file(s)...
Parsing log file(s) done
Generating reports...
======================================================================
- Global Information ----------------------------------------------
> request count 3 (OK=3 KO=0 )
...
...
Reports generated in 0s.
Please open the following file: /home/.../gatling-sbt-plugin-demo/target/gatling/readprod02simulation-1491631335723/index.html
[info] Simulation ReadProd02Simulation successful.
[info] Simulation(s) execution ended.
[success] Total time: 19 s, completed Apr 8, 2017 8:02:33 AM
08:02:36.911 [INFO ] i.g.h.a.HttpEngine - Start warm up
08:02:37.240 [INFO ] i.g.h.a.HttpEngine - Warm up done
Simulation sims.ReadProd03Simulation started...
...
Simulation sims.ReadProd03Simulation completed in 4 seconds
Parsing log file(s)...
Parsing log file(s) done
Generating reports...
======================================================================
---- Global Information ----------------------------------------------
> request count 3 (OK=3 KO=0 )
......
Reports generated in 0s.
Please open the following file: /home/.../gatling-sbt-plugin-demo/target/gatling/readprod03simulation-1491631356198/index.html
[info] Simulation ReadProd03Simulation successful.
[info] Simulation(s) execution ended.
[success] Total time: 9 s, completed Apr 8, 2017 8:02:42 AM
That is, first it runs one sim, then another, and concatenates all output.
But how do you make use of this? Well, you could use Bash to grep the output for exactly two lines matching failed 0 ( 0%) (if you run two simulations), and check the total request counts for both simulations the same way.
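As a rough sketch (assuming two simulations, and that the format of Gatling's console summary lines stays stable across versions):
#!/bin/bash
# run both simulations and capture the combined console output
OUTPUT=$(sbt "gatling:testOnly sims.ReadProd02Simulation" \
             "gatling:testOnly sims.ReadProd03Simulation")
echo "$OUTPUT"
# each simulation summary prints one "failed 0 ( 0%)" line when no
# request failed; expect exactly one such line per simulation
PASS_COUNT=$(printf '%s\n' "$OUTPUT" | grep -c 'failed .*( *0%)')
if [ "$PASS_COUNT" -ne 2 ]; then
    echo "at least one simulation had failing requests" >&2
    exit 1
fi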

Import old apache access logs to webalizer - ignoring records

I installed webalizer on my Apache 2 webserver yesterday and ran into the problem that none of the old access logs are used. The directory listing looks like this:
/var/log/apache2/
access.log
access.log1
access.log.10.gz
access.log.11.gz
...
How can I import all my files at once?
I tried several things, but it kept telling me that the records were ignored.
Hope someone can help. Thanks!
I ran into the same problem. I had just installed webalizer and switched it to incremental mode (here are the relevant entries from my /etc/webalizer/webalizer.conf):
LogFile /var/log/apache2/access.log.1
OutputDir /var/www/htdocs/w
Incremental yes
IncrementalName webalizer.current
Then I ran webalizer by hand, which imported the non-gz files in my logs directory. After that, any attempt to manually import an older gz logfile (by running webalizer /var/log/apache2/access.log.2.gz, for instance) resulted in all of the entries being ignored.
I suspect this is because the entries found in the gz logs were older than the last import, so I had to delete my webalizer.current file (really, I cleared the whole output dir; either way should work). Finally, in reverse order (oldest first), I could import the old gz files one at a time:
bhs128@home:~$ cd /var/log/apache2
bhs128@home:/var/log/apache2$ sudo rm -rf /var/www/htdocs/w/*
bhs128@home:/var/log/apache2$ ls -1t /var/log/apache2/access.log*gz | grep -o '[0-9]*' | tail -n1
52
bhs128@home:/var/log/apache2$ for i in {52..2}; do webalizer /var/log/apache2/access.log.$i.gz; done
I just had the same problem, and I took a look into the webalizer.current file:
$ head -n 2 webalizer.current
# Webalizer V2.21-02 Incremental Data - 11/05/2019 22:29:02
2019 11 5 22 29 2
The second line seems to contain the timestamp of the last run, so I just changed the year to 2018. After that, I was able to import log files older than the last imported ones, without having to delete all the data first.
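If you want to script that edit, a hedged sketch (the output path is taken from the config above; back the file up first, and check which field your webalizer version writes where):
cp /var/www/htdocs/w/webalizer.current /var/www/htdocs/w/webalizer.current.bak
# rewind the year field on line 2 so older logs are no longer skipped
sed -i '2s/^2019 /2018 /' /var/www/htdocs/w/webalizer.current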

nagios Check result path is not a valid directory

I'm getting an error when I run the command below:
nagios3 -v /etc/nagios3/nagios.cfg
Error in configuration file '/etc/nagios3/nagios.cfg' - Line 469 (Check result path is not a valid directory) Error processing main config file
So I looked at ls -l /var/lib/nagios3/:
drwxr-x--- 3 nagios nagios 1024 Mar 14 21:13 spool
Why am I getting the error in this case? I suspect my /var/lib/nagios3/spool/checkresult/check2JcDx5 file contains a wrong line. When I run the command below, I get this output:
# cat check2JcDx5
file_time=1363378360
host_name=localhost
service_description=HTTP
check_type=0
check_options=0
scheduled_check=1
reschedule_check=1
latency=0.122000
start_time=1363378360.122234
Disable SELinux:
# getenforce
# setenforce 0
getenforce shows the current mode; setenforce 0 switches to permissive mode until the next reboot. To make the change persistent, edit /etc/selinux/config and set SELINUX=disabled.
You may be able to install the nagios-selinux package to add the policy to run nagios in an selinux environment. Better than disabling your existing security.
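If you would rather confirm that SELinux is actually the culprit before relaxing anything, a hedged sketch (ausearch assumes auditd is installed and running):
# show the current SELinux mode
getenforce
# inspect the SELinux labels on the spool directory Nagios complains about
ls -Z /var/lib/nagios3/spool
# look for recent AVC denials involving nagios3
ausearch -m avc -c nagios3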

Backtrack 5 r3 "apt-get upgrade" error [NEED HELP]

root@bt:~# apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages have been kept back:
smartphone-pentest-framework
0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
1 not fully installed or removed.
After this operation, 0B of additional disk space will be used.
Do you want to continue [Y/n]? y
Setting up w3af (1.2-bt2) ...
tar: pybloomfiltermmap-0.2.0.tar.gz: Cannot open: No such file or directory
tar: Error is not recoverable: exiting now
tar: Child returned status 2
tar: Exiting with failure status due to previous errors
/var/lib/dpkg/info/w3af.postinst: line 4: cd: pybloomfiltermmap-0.2.0: No such file or directory
python: can't open file 'setup.py': [Errno 2] No such file or directory
svn: Working copy 'w3af' locked
svn: run 'svn cleanup' to remove locks (type 'svn help cleanup' for details)
dpkg: error processing w3af (--configure):
subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
w3af
E: Sub-process /usr/bin/dpkg returned an error code (1)
root@bt:~#
I have copied and pasted above the errors I get while upgrading the package "smartphone-pentest-framework".
What is going wrong, and what is the problem?
As you can see from "/var/lib/dpkg/info/w3af.postinst: line 4: cd: pybloomfiltermmap-0.2.0: No such file or directory", the simple solution is to change the download URL. Their script has a bug: the repository moved to https://svn.code.sf.net/p/w3af/code/trunk
So simply edit the w3af.postinst file (vi /var/lib/dpkg/info/w3af.postinst) and replace the old URL with
https://svn.code.sf.net/p/w3af/code/trunk
Save the file and then run the w3af setup again.
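If you prefer to script the edit, a hedged sketch (the old SourceForge URL shown here is an assumption; grep the file first to see what your copy actually contains):
grep -n 'svn' /var/lib/dpkg/info/w3af.postinst
sed -i 's#https://w3af.svn.sourceforge.net/svnroot/w3af#https://svn.code.sf.net/p/w3af/code#' /var/lib/dpkg/info/w3af.postinst
# re-run the failed configure step
apt-get -f install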
