Daemon to RSYNC data fails after a few successful attempts - C

I have a daemon coded in C that copies a file from local to remote using rsync and updates the same file every 5 seconds.
Everything works fine, but after the while loop has executed, say, 10 to 15 times, rsync fails.
Here is the relevant segment of code:
#define SHELLSCRIPT "\
#! /bin/bash \n\
rsync -azh /my_daemon_test -e 'ssh -p 2222' username@domainname.com:~ \n\
"

syslog(LOG_NOTICE, "Successfully started daemon\n");
while (1) {
    syslog(LOG_NOTICE, "daemon in while loop : %d\n", flag);
    sys_ret = system(SHELLSCRIPT);
    syslog(LOG_NOTICE, "RSYNC Executed %d\n", sys_ret);
    sleep(5);
    flag++;
}
Log
Dec 18 00:07:08 localhost ./mydaemon[26384]: daemon in while loop : 10
Dec 18 00:07:09 localhost ./mydaemon[26384]: RSYNC Executed 0
Dec 18 00:07:14 localhost ./mydaemon[26384]: daemon in while loop : 11
Dec 18 00:07:15 localhost ./mydaemon[26384]: RSYNC Executed 0 <==== Success
Dec 18 00:07:20 localhost ./mydaemon[26384]: daemon in while loop : 12
Dec 18 00:07:20 localhost ./mydaemon[26384]: RSYNC Executed 65280 <==== Failed
Dec 18 00:07:25 localhost ./mydaemon[26384]: daemon in while loop : 13
Dec 18 00:07:26 localhost ./mydaemon[26384]: RSYNC Executed 65280
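For reference, the 65280 that system() returns is an encoded wait status rather than the command's own exit code; a minimal decoding sketch (not part of the original daemon, just reusing the names from the snippet above) would be:
#include <stdlib.h>
#include <syslog.h>
#include <sys/wait.h>

/* Run the rsync command string and log a decoded exit status. */
static void run_and_log(const char *cmd)
{
    int sys_ret = system(cmd);

    if (sys_ret == -1) {
        syslog(LOG_ERR, "system() itself failed\n");
    } else if (WIFEXITED(sys_ret)) {
        /* 65280 == 0xFF00, so WEXITSTATUS() yields 255 here */
        syslog(LOG_NOTICE, "rsync exited with status %d\n", WEXITSTATUS(sys_ret));
    } else if (WIFSIGNALED(sys_ret)) {
        syslog(LOG_NOTICE, "rsync killed by signal %d\n", WTERMSIG(sys_ret));
    }
}
An exit status of 255 often comes from the ssh transport rather than from rsync's own documented error codes, so capturing the command's stderr (for example by redirecting it to a file inside SHELLSCRIPT) would show the underlying error.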
I need help to make this run consistently.
Thanks

Related

Why are open file descriptors not getting reused, and instead keep increasing in value?

I have a simple C HTTP server. I close the file descriptors for disk files and the new connection fds returned by accept(...), but I noticed that I keep getting file descriptor numbers bigger than the previous ones: for example, the descriptor returned by accept starts at 4, then 5, then 4 again, and so on until it reaches the maximum number of open file descriptors on the system.
I have set that limit to 10,000 on my system, but I am not sure why the file descriptor number climbs to the maximum, and I am fairly sure my program is closing its file descriptors.
So I would like to know: if there are not thousands of connections, how come the file descriptor numbers keep increasing, so that after around 24 hours I get the message accept: too many open files? What does this message mean?
Also, does the ulimit -n value get reset automatically without a system reboot?
As mentioned in the answer, the output of ls -la /proc/$$/fd is
dr-x------ 2 fawad fawad 0 Oct 11 11:15 .
dr-xr-xr-x 9 fawad fawad 0 Oct 11 11:15 ..
lrwx------ 1 fawad fawad 64 Oct 11 11:15 0 -> /dev/pts/3
lrwx------ 1 fawad fawad 64 Oct 11 11:15 1 -> /dev/pts/3
lrwx------ 1 fawad fawad 64 Oct 11 11:15 2 -> /dev/pts/3
lrwx------ 1 fawad fawad 64 Oct 11 11:25 255 -> /dev/pts/3
and the output of ps aux | grep lh is
root 49855 0.5 5.4 4930756 322328 ? Sl Oct09 15:58 /usr/share/atom/atom --executed-from=/home/fawad/Desktop/C++-work/lhparse --pid=49844 --no-sandbox
root 80901 0.0 0.0 25360 5952 pts/4 S+ 09:32 0:00 sudo ./lh
root 80902 0.0 0.0 1100852 2812 pts/4 S+ 09:32 0:00 ./lh
fawad 83419 0.0 0.0 19976 916 pts/3 S+ 11:27 0:00 grep --color=auto lh
I'd like to know what the pts/4 etc. column is. Is that the file descriptor number?
It's likely that the socket represented by the file descriptor is in the CLOSE_WAIT or TIME_WAIT state, which means the TCP stack holds the fd open for a bit longer, so you won't be able to reuse it immediately in this instance.
Once the socket is fully finished with and closed, the file descriptor number will then be available for reuse inside your program.
See https://en.m.wikipedia.org/wiki/Transmission_Control_Protocol, under "Protocol operation", specifically the wait states.
To see what files are still open, you can run
ls -la /proc/$$/fd
The output of this will also be of help:
ss -tan | head -5
LISTEN 0 511 *:80 *:*
SYN-RECV 0 0 192.0.2.145:80 203.0.113.5:35449
SYN-RECV 0 0 192.0.2.145:80 203.0.113.27:53599
ESTAB 0 0 192.0.2.145:80 203.0.113.27:33605
TIME-WAIT 0 0 192.0.2.145:80 203.0.113.47:50685
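For comparison (this is my own sketch, not code from the question), an accept loop that closes every connection descriptor and logs any close() failure looks like the following; if descriptor numbers still grow with this pattern, the wait states described above or a leak on the disk-file side are the remaining suspects.
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Accept connections on listen_fd, handle them, and always close the fd. */
static void serve(int listen_fd)
{
    for (;;) {
        int conn_fd = accept(listen_fd, NULL, NULL);
        if (conn_fd == -1) {
            perror("accept");
            continue;
        }

        /* ... read the request and write the response on conn_fd ... */

        if (close(conn_fd) == -1)
            fprintf(stderr, "close(%d): %s\n", conn_fd, strerror(errno));
    }
}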

I can see the file size but am not able to open the file

I can see the file size in MBs, but when I try to open the file I can't; it says it can't find the file.
while read line; do
    echo $line    # or whatever you want to do with the $line variable
    cat $line | grep "PROCEDURE" > result3.txt
    chmod 777 result3.txt
done < xreflist.txt
To be able to find the file size, you need permission to read the directory.
To be able to open the file, you need permission to read the file.
It is perfectly possible to be able to do the one without the other.
$ mkdir junk
$ cd junk
$ echo "Hello World" > no-permission
$ chmod 0 no-permission
$ ls -la
total 8
drwxr-xr-x 3 jonathanleffler staff 96 Dec 29 11:34 .
drwxr-xr-x 18 jonathanleffler staff 576 Dec 29 11:34 ..
---------- 1 jonathanleffler staff 12 Dec 29 11:34 no-permission
$ cat ./no-permission
cat: ./no-permission: Permission denied
$ rm -f ./no-permission
$ cd ..
$ rmdir junk
$
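Applied to the loop in the question (the file names xreflist.txt and result3.txt are taken from there), a sketch along these lines checks readability first and reports which path actually failed; note it appends with >> so earlier matches are not overwritten, which differs from the original loop:
#!/bin/bash
# For each path listed in xreflist.txt, grep it only if it is readable.
while read -r line; do
    if [ -r "$line" ]; then
        grep "PROCEDURE" "$line" >> result3.txt
    else
        echo "cannot read: $line" >&2   # missing file or no read permission
    fi
done < xreflist.txt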

Bash at job not running - Array

I have built a bash script that runs fine when executed from the command line but does not work when run as a batch job (with at). At first I thought it was because of the environment, but while debugging I came to think there is a problem with the arrays I need to create. When run from the command line the log is created and its content is what I expected, but when run with at no log is created at all. Any idea what is causing this issue?
A short script with the piece of code that I suspect is not running is below:
#!/bin/bash
fsol=`date +%Y%m%d`
for dia in 0 1 2
do
    var=$(date -d "$fsol +$dia days" +'%Y-%m-%d')
    orto=`awk -v j=$var 'BEGIN { FS=","} $2 == j { print $3}' hora-sol.dat`
    h_orto=${orto:0:2}
    m_orto=${orto:2:2}
    a_orto+=($h_orto $m_orto)
    echo "dia $dia" $var $h_orto $m_orto >> log1.txt
done
echo ${a_orto[@]} >> log2.txt
Data in hora-sol.dat
32,2016-02-01,0711,1216,1722,10.1885659530428
33,2016-02-02,0710,1216,1723,10.2235441870822
34,2016-02-03,0709,1216,1724,10.2589836910036
35,2016-02-04,0708,1216,1725,10.2948670333624
36,2016-02-05,0707,1216,1727,10.3311771153741
37,2016-02-06,0706,1217,1728,10.3678971831004
38,2016-02-07,0705,1217,1729,10.4050108377139
39,2016-02-08,0704,1217,1730,10.4425020444393
40,2016-02-09,0703,1217,1731,10.4803551390436
41,2016-02-10,0701,1217,1733,10.5185548339287
42,2016-02-11,0700,1217,1734,10.5570862213108
43,2016-02-12,0659,1217,1735,10.5959347763989
44,2016-02-13,0658,1217,1736,10.6350863580571
45,2016-02-14,0657,1217,1737,10.6745272092687
46,2016-02-15,0655,1217,1738,10.7142439549499
47,2016-02-16,0654,1217,1740,10.7542236006922
48,2016-02-17,0653,1217,1741,10.7944535282585
49,2016-02-18,0652,1216,1742,10.8349214920733
50,2016-02-19,0650,1216,1743,10.8756156133281
51,2016-02-20,0649,1216,1744,10.9165243743526
52,2016-02-21,0648,1216,1745,10.9576366115941
53,2016-02-22,0646,1216,1746,10.9989415078031
54,2016-02-23,0645,1216,1747,11.0404285846154
55,2016-02-24,0644,1216,1749,11.0820876932144
56,2016-02-25,0642,1216,1750,11.123909005324
57,2016-02-26,0641,1215,1751,11.1658830035395
58,2016-02-27,0639,1215,1752,11.2080004711946
59,2016-02-28,0638,1215,1753,11.2502524821626
60,2016-02-29,0636,1215,1754,11.2926303895977
Running manually, it generated:
# cat log.txt
dia 0 2016-02-12 0659 1217 1735
dia 1 2016-02-13 0658 1217 1736
dia 2 2016-02-14 0657 1217 1737
06
59
06
58
06
57
Scheduling with at:
# echo "/tmp/horasol/script.sh" | at now +1 minute
warning: commands will be executed using /bin/sh
job 1 at Fri Feb 12 12:11:00 2016
It generated exactly the same:
# cat log.txt
dia 0 2016-02-12 0659 1217 1735
dia 1 2016-02-13 0658 1217 1736
dia 2 2016-02-14 0657 1217 1737
06
59
06
58
06
57
Note the warning informing you that at uses /bin/sh:
warning: commands will be executed using /bin/sh
Tell us how you conclude that it "does not work when run as a batch job (with at)".
Tell us more about your "when debugging" moment.
Perhaps I'm reproducing it here using a different process than you, and due to this difference it works for me.
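One diagnostic worth adding (my own suggestion, not something established in this thread): since hora-sol.dat and the log files are all referenced with relative paths, it helps to pin the working directory and capture the job's output and environment so a silent failure under at becomes visible. The file names at-env.txt and at-run.log below are just placeholders:
# Pin the working directory, record the environment, and capture all output
echo "cd /tmp/horasol && env > at-env.txt && ./script.sh > at-run.log 2>&1" | at now +1 minute

# After the job has run, compare at-env.txt with your interactive environment
# and check at-run.log for errors (for example, awk not finding hora-sol.dat).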

zsh process file script

I'm attempting to have this script process a number of files in a loop so that I don't have to monitor it.
#!/usr/local/bin/zsh
X=${1-20}
for (( N=1; N<=X; N++ )); do
    for p in *.xml; do
        curl -X POST -H "Content-Type:application/xml" -d "@${p}" "https://url /postAPI" > "post_${p}"
        sleep 1
    done
done
When I run ./work.sh 5, this loops forever!
What's causing the infinite loop?
Edit: based on a comment below
/tmp/tmp.KeFYeM9Z % ls -l
total 4
-rwxr-xr-x 1 naes wheel 218 Nov 20 14:42 work.sh
#!/usr/local/bin/zsh
X=${1-20}
for (( N=1; N<=X; N++ )); do
    for p in /tmp/tmp.u6RnKaJ3/*.xml; do
        curl -X POST -H "Content-Type:application/xml" -d "@${p}" "https://url /postAPI"
        sleep 1
    done
done
This still loops forever.
This doesn't:
% cat work1.sh
#!/usr/local/bin/zsh
X=${1-20}
for (( N=1; N<=X; N++ )); do
date
sleep 1
done
% ./work1.sh 5
Thu Nov 20 15:22:27 PST 2014
Thu Nov 20 15:22:28 PST 2014
Thu Nov 20 15:22:29 PST 2014
Thu Nov 20 15:22:30 PST 2014
Thu Nov 20 15:22:31 PST 2014
What in my loop causes the infinite looping?
You are writing to the same directory you are reading from, so while reading the .xml files you are also creating new .xml files (the post_* output), making it essentially loop forever. It's not really an infinite loop, though: it's just a very long one.
Let's say you start with 10 files. The glob is re-expanded on each pass of the outer loop, and every pass adds another generation of post_*.xml files, so the number of files matched keeps growing:
N=1: |p| = 10
N=2: |p| = 20
N=3: |p| = 30
N=4: |p| = 40
N=5: |p| = 50
So it keeps growing, and with a one-second sleep per file each pass takes longer than the last.
This should do the trick:
#!/usr/local/bin/zsh
X=${1-20}
OUTPUT_DIR=/tmp/output/
mkdir -p $OUTPUT_DIR
cd /tmp/tmp.u6RnKaJ3
for (( N=1; N<=X; N++ )); do
    echo "Attempt $N"
    for p in *.xml; do
        curl -X POST -H "Content-Type:application/xml" -d "@${p}" "https://url /postAPI" > "${OUTPUT_DIR}post_${p}"
        sleep 1
    done
done

Why do dots ("." and "..") appear when I print files from directory?

I'm printing files from two directories using C language. Here is my code:
char *list1[30], *list2[30];
int i = 0, x = 0;
struct dirent *ent, *ent1;

/* print all the files and directories within directory */
while ((ent = readdir(dirSource)) != NULL) {
    list1[i] = ent->d_name;
    i++;
}
i = 0;
while ((ent1 = readdir(dirDest)) != NULL) {
    list2[i] = ent1->d_name;
    i++;
}
while (x != i) {
    printf("Daemon - %s\n", list1[x]);
    printf("Daemon1 - %s\n", list2[x]);
    x++;
}
I can print all the files, but every time I print the files in a directory, the output is this:
Daemon - .
Daemon1 - .
Daemon - ..
Daemon1 - ..
Daemon - fich5
Daemon1 - fich4
Daemon - fich3
Daemon1 - fich3
I don't understand why there are dots at the beginning.
Obs.: I don't know if it matters, but I'm running Ubuntu 14.04 from a pen drive, meaning every time I use Ubuntu I use the trial (live) session instead of dual booting my PC.
. and .. are two special entries present in every directory on Linux and other Unix-like systems: . represents the current directory and .. represents the parent directory.
Every directory in Unix has the entry . (meaning the current directory) and .. (the parent directory).
Given that they start with ".", they are treated as hidden files; ls normally does not show them unless you use the -a option.
See:
[:~/tmp/lilla/uff] % ls -l
total 0
-rw-rw-r-- 1 romano romano 0 May 17 18:48 a
-rw-rw-r-- 1 romano romano 0 May 17 18:48 b
[:~/tmp/lilla/uff] % ls -la
total 8
drwxrwxr-x 2 romano romano 4096 May 17 18:48 .
drwxrwxr-x 3 romano romano 4096 May 17 18:47 ..
-rw-rw-r-- 1 romano romano 0 May 17 18:48 a
-rw-rw-r-- 1 romano romano 0 May 17 18:48 b
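If you want to leave them out of your listing, a minimal sketch (my own, not the question's code) is to compare each entry against "." and ".." before storing it; it also strdup()s each name, because d_name points into a buffer that readdir() may reuse:
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Read up to max names from dir, skipping the "." and ".." entries.
 * Each stored name is a strdup'd copy; returns the number stored. */
static int read_names(DIR *dir, char *list[], int max)
{
    struct dirent *ent;
    int n = 0;

    while (n < max && (ent = readdir(dir)) != NULL) {
        if (strcmp(ent->d_name, ".") == 0 || strcmp(ent->d_name, "..") == 0)
            continue;                      /* skip the special entries */
        list[n++] = strdup(ent->d_name);
    }
    return n;
}
Called as read_names(dirSource, list1, 30), it fills the same array the question uses, just without the dot entries.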
