I tried to use File::Copy and plain shell cp; neither worked. Any clue?
https://github.com/songyangster/perltest
$ bin/test_copy.pl from_dir to_dir
Failed to File::Copy cp from_dir/foo1.txt to to_dir
cp: from_dir/foo1.txt: No such file or directory
Failed to cp from_dir/foo1.txt to to_dir
Failed to File::Copy cp from_dir/subdir1/foo2.txt to to_dir
cp: from_dir/subdir1/foo2.txt: No such file or directory
Failed to cp from_dir/subdir1/foo2.txt to to_dir
I am cross-compiling a user package with an OpenWRT SDK. I want to install this package with opkg install and get the following error:
line 4: /etc/init.d/openwrt_stationsdata: Permission denied
I think there is a mistake in my makefile, but I'm not sure.
I am using ash as my shell, so I can't run it as root.
Here is my makefile:
include $(TOPDIR)/rules.mk
PKG_NAME:=openwrt_stationsdata
PKG_VERSION:=0.1
PKG_RELEASE:=1
PKG_MAINTAINER:=Yannic Storck
PKG_LICENSE:=CC0-1.0
include $(INCLUDE_DIR)/package.mk
include $(INCLUDE_DIR)/cmake.mk
define Package/openwrt_stationsdata
# Select package by default
#DEFAULT:=y
DEPENDS:=+curl
endef
define Package/openwrt_stationsdata/description
Package to utilize data from an AP
endef
define Build/Prepare
	mkdir -p $(PKG_BUILD_DIR)
	$(CP) ./src/* $(PKG_BUILD_DIR)/
endef
define Package/openwrt_stationsdata/install
	$(CP) ./files/* $(1)/
	$(INSTALL_DIR) $(1)/usr/bin
	$(INSTALL_BIN) $(PKG_BUILD_DIR)/openwrt_stationsdata $(1)/usr/bin/
endef

$(eval $(call BuildPackage,openwrt_stationsdata))
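A likely culprit, though I can't be sure without seeing the package contents: $(CP) copies ./files/* as-is, so if the init script under ./files/etc/init.d is not executable in the source tree, it won't be executable on the device either, and opkg then hits "Permission denied" when the script is run. OpenWRT packages normally install init scripts with $(INSTALL_BIN), which sets mode 0755. A sketch of an install section doing that (the ./files/openwrt_stationsdata.init path is an assumption about where your script lives):
define Package/openwrt_stationsdata/install
	$(INSTALL_DIR) $(1)/etc/init.d
	# INSTALL_BIN copies the file and marks it executable (mode 0755)
	$(INSTALL_BIN) ./files/openwrt_stationsdata.init $(1)/etc/init.d/openwrt_stationsdata
	$(INSTALL_DIR) $(1)/usr/bin
	$(INSTALL_BIN) $(PKG_BUILD_DIR)/openwrt_stationsdata $(1)/usr/bin/
endef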
I have three directories:
/home/Desktop/1
/home/Desktop/2
/home/Desktop/3
In directories 1 and 2 there are executable C programs, which can be run in the terminal like this: ./tst1 or ./tst2.
In directory 3 I have a bash script, which executes a C program (compiled from tst3.c) in the same directory.
I want to execute the C programs from directories 1 and 2 from my bash script in directory 3, like this:
#!/bin/bash
sudo ./tst3
sleep 1
sudo ./tst1 # from directory 1
sleep 2
sudo ./tst2 # from directory 2
Any ideas?
You have multiple options, including at least:
Set PATH to include the directories where your commands are found. Note that sudo often resets PATH (via env_reset/secure_path in sudoers), in which case the absolute-path options below are more reliable:
#!/bin/bash
export PATH="$PATH:/home/Desktop/1:/home/Desktop/2:/home/Desktop/3"
sudo tst3 # from directory 3
sleep 1
sudo tst1 # from directory 1
sleep 2
sudo tst2 # from directory 2
Use absolute paths to the commands:
#!/bin/bash
sudo /home/Desktop/3/tst3 # from directory 3
sleep 1
sudo /home/Desktop/1/tst1 # from directory 1
sleep 2
sudo /home/Desktop/2/tst2 # from directory 2
Use relative paths to the commands (this assumes the script is run from one of the three directories, so that .. resolves to /home/Desktop):
#!/bin/bash
sudo ../3/tst3 # from directory 3
sleep 1
sudo ../1/tst1 # from directory 1
sleep 2
sudo ../2/tst2 # from directory 2
These treat the directories symmetrically. Another alternative is to place the commands in a directory already on your PATH (like $HOME/bin, perhaps), and then run them without any path. This is what I'd normally do — ensure the commands to be run are in a directory on my PATH.
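A minimal sketch of that last approach, assuming $HOME/bin exists and is already on your PATH:
mkdir -p "$HOME/bin"
cp /home/Desktop/1/tst1 /home/Desktop/2/tst2 /home/Desktop/3/tst3 "$HOME/bin"
After that, the script can invoke tst1, tst2 and tst3 without any path, subject to the same sudo PATH caveat mentioned above.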
If you are simply trying to locate the scripts:
#!/bin/bash
base_dir="$( dirname "$( readlink -e "$0" )" )"/..
sudo "$base_dir/3/tst3"
sleep 1
sudo "$base_dir/1/tst1"
sleep 2
sudo "$base_dir/2/tst2"
or
#!/bin/bash
cd "$( dirname "$( readlink -e "$0" )" )"/..
sudo 3/tst3
sleep 1
sudo 1/tst1
sleep 2
sudo 2/tst2
If you want the CWD to be changed to the directory of each executable before executing it:
#!/bin/bash
cd "$( dirname "$( readlink -e "$0" )" )"
sudo ./tst3
cd ../1
sleep 1
sudo ./tst1
cd ../2
sleep 2
sudo ./tst2
These scripts will work properly even if they're launched from a directory other than the one they're found in. They will even work if they're launched via a symlink!
I am trying to mount a file system using read only, but I am getting this error.
# cd Downloads
# ls
.localized
# mkdir /mnt/temp
mkdir: /mnt/temp: File exists
# mount -o ro,loop -t Ext3 system1 /mnt/temp
mount: exec /Library/Filesystems/Ext3.fs/Contents/Resources/mount_Ext3 for /mnt/temp: No such file or directory
Why isn't the filesystem mounting at /mnt/temp? I am running this with sudo.
ipk packages are the installation packages used by opkg.
I'm trying to extract the contents of one of them and also create my own ipk.
I've read that I should be able to untar them, but that did not work for me.
I've tried:
tar -zxvf mypack.ipk
and I get:
gzip: stdin: not in gzip format
I've also tried:
tar -xvf mypack.ipk
and I get:
tar: This does not look like a tar archive
I've found that most of the information on the internet regarding ipks is inaccurate.
My ipk was generated by bitbake. I'm having a hard time with bitbake and want to avoid using it.
Any ideas on how to extract and how to create ipk files? A simple template with a single package would be useful to have.
I figured it out.
You can extract the outer package with the ar x command, then extract the control.tar.gz with the tar -zxf command.
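For example (assuming an ar-style package; ipks built as plain gzipped tarballs need the tar command shown below instead):
ar x mypack.ipk          # yields debian-binary, control.tar.gz, data.tar.gz
tar -zxf control.tar.gz  # package metadata
tar -zxf data.tar.gz     # the files the package installs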
I tested the "ar x package-name.ipk" command but it didn't help.
I found the command below, which worked perfectly:
tar zxpvf package-name.ipk
This extracts three files:
debian-binary
data.tar.gz
control.tar.gz
Use the same command to extract the data.tar.gz and control.tar.gz files.
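For example, with the same flags applied to the inner archives:
tar zxpvf control.tar.gz
tar zxpvf data.tar.gz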
For more information, refer to:
https://cognito.me.uk/computers/manual-extractioninstallation-of-ipk-packages-on-gargoyleopenwrt/
You need to create a control file, and then do some archiving using tar and ar. In my case I was distributing only Python scripts, so there was no architecture dependency. You should check the control file and the Makefile into version control, and delete all the other intermediate files.
Here are the contents of control:
Package: my-thing-python
Version: 1.0
Description: python scripts for MyCompany
Section: extras
Priority: optional
Maintainer: John
License: CLOSED
Architecture: all
OE: my-thing-python
Homepage: unknown
Depends: python python-distutils python-pyserial python-curses python-mmap python-ctypes
Source: N/A
Here is my Makefile, which sits in the same directory as all my Python scripts:
all: my-thing-python.ipk

my-thing-python.ipk:
	rm -rf ipk
	mkdir -p ipk/opt/my-thing-python
	cp *.py ipk/opt/my-thing-python
	tar czvf control.tar.gz control
	cd ipk; tar czvf ../data.tar.gz .; cd ..
	echo 2.0 > debian-binary
	ar r my-thing-python.ipk control.tar.gz data.tar.gz debian-binary

clean: FORCE
	rm -rf ipk
	rm -f control.tar.gz
	rm -f data.tar.gz
	rm -f my-thing-python.ipk

FORCE:
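To build and sanity-check the result (ar t just lists the archive members):
make
ar t my-thing-python.ipk   # should list control.tar.gz, data.tar.gz, debian-binary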
Extracting with these commands:
Extract the outer archive by running:
ar -xv <.ipk file>
Extract the control.tar.gz file by running:
tar -zxvf control.tar.gz
Untar data.tar.gz by running:
tar -zxvf data.tar.gz
If you want a list of files in an ipk, you can do something like:
#!/bin/sh
# For each ipk named on the command line, stream its data.tar.gz
# member to stdout and list the contents of that inner tarball.
for f
do
    tar -x -z -O -f "$f" ./data.tar.gz | tar tvzf -
done
-O means extract to standard output.
ipk files used to be ar archives (like dpkg's .deb files), but are now gzipped tarballs.
I feel that some dpkg utility ought to cope with ipkg files, but I haven't found the right one.
I am attempting to run a tool known as crest on existing applications. My first target application is Apache. The output (shown below) indicates that some header files are not being found. These are located in other directories in the source tree, one of them being httpd-2.2.14/srclib/apr/include. I'd rather not change the Apache source code, since I will want to run this command against numerous files in Apache and then apply the same technique to several other applications.
My question is:
How can I make the compiler locate these include files in the directories where they actually live? I can define the directories. A friend mentioned something about altering the command-line input, or setting environment variables.
Other thoughts/suggestions would be appreciated.
Thanks all.
The command to run crest is:
../bin/crestc ../../httpd-2.2.14/server/request.c -dfs
I get the following output:
[root@localhost src]# ../bin/crestc ../../httpd-2.2.14/server/request.c -dfs
cp libcrest/libcrest.a ../lib
cp run_crest/run_crest ../bin
cp process_cfg/process_cfg ../bin
cp tools/print_execution ../bin
cp libcrest/crest.h ../include
cp libcrest/libcrest.a ../lib
cp run_crest/run_crest ../bin
cp process_cfg/process_cfg ../bin
cp tools/print_execution ../bin
cp libcrest/crest.h ../include
gcc -D_GNUCC -E -I../bin/../include -DCIL=1 ../../httpd-2.2.14/server/request.c -o ./request.i
../../httpd-2.2.14/server/request.c:28:25: error: apr_strings.h: No such file or directory
../../httpd-2.2.14/server/request.c:29:25: error: apr_file_io.h: No such file or directory
../../httpd-2.2.14/server/request.c:30:25: error: apr_fnmatch.h: No such file or directory
../../httpd-2.2.14/server/request.c:33:22: error: apr_want.h: No such file or directory
../../httpd-2.2.14/server/request.c:36:23: error: ap_config.h: No such file or directory
../../httpd-2.2.14/server/request.c:37:19: error: httpd.h: No such file or directory
../../httpd-2.2.14/server/request.c:38:25: error: http_config.h: No such file or directory
../../httpd-2.2.14/server/request.c:39:26: error: http_request.h: No such file or directory
../../httpd-2.2.14/server/request.c:40:23: error: http_core.h: No such file or directory
../../httpd-2.2.14/server/request.c:41:27: error: http_protocol.h: No such file or directory
../../httpd-2.2.14/server/request.c:42:22: error: http_log.h: No such file or directory
../../httpd-2.2.14/server/request.c:43:23: error: http_main.h: No such file or directory
../../httpd-2.2.14/server/request.c:44:25: error: util_filter.h: No such file or directory
../../httpd-2.2.14/server/request.c:45:26: error: util_charset.h: No such file or directory
../../httpd-2.2.14/server/request.c:46:25: error: util_script.h: No such file or directory
../../httpd-2.2.14/server/request.c:48:22: error: mod_core.h: No such file or directory
BRead 0 branches.
Read 0 nodes.
Wrote 0 branch edges.
[root@localhost src]#
You can set the C_INCLUDE_PATH environment variable (use a : to separate multiple paths):
export C_INCLUDE_PATH=/path/to/include/files:/path/to/more/include/files
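For instance, to point gcc at the Apache header directories from the question (the exact paths are assumptions; adjust them to wherever the headers live in your tree, and prefer absolute paths, since relative entries resolve against the current directory):
export C_INCLUDE_PATH=/path/to/httpd-2.2.14/include:/path/to/httpd-2.2.14/srclib/apr/include:/path/to/httpd-2.2.14/srclib/apr-util/include
../bin/crestc ../../httpd-2.2.14/server/request.c -dfs
gcc's preprocessor searches C_INCLUDE_PATH after any -I directories, so this avoids editing the Apache sources or the crest build scripts.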