How to specify the path which is to be tested using iozone? - benchmarking

I have a FreeBSD install with a UFS filesystem. Inside FreeBSD I have created a zpool in raidz1. Now I want to run an iozone test on ZFS, but I can't work out how to point iozone at the zpool rather than the base filesystem.

Here is what I personally use, which also specifies the files to be used.
./iozone -R -l 16 -u 16 -r 128k -s 200G -F /testpool/g{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16}
If you only want a single file to be used, just use -f instead of -F.
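iozone simply writes to whatever paths you hand it, so the test exercises ZFS as long as those paths live under the pool's mountpoint. A quick sanity check, assuming the pool is named testpool as above:
zfs get mountpoint testpool
df -h /testpool
If the mountpoint is something other than /testpool, point -F (or -f) at files under that path instead.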


How to completely download Anaconda Cloud bz2 files and dependencies for offline package installation?

I want to create a Python environment with the data science libraries NumPy, Pandas, PyTorch, and Hugging Face Transformers. I use Miniconda to create the environment and to download and install the libraries. conda install has a flag, --download-only, to download the required packages without installing them, so they can be installed afterwards from a local directory. Even when conda just downloads the packages without installing them, it also extracts them.
Is it possible to download the packages without extracting them and extract them afterwards before installation?
There is no simple command in the CLI to prevent the extraction step. The extraction is regarded as part of the FETCH operation to populate the package cache before running the LINK operation to transfer the package to the specified environment.
The alternative is to do something manually. Naively, one could search Anaconda Cloud and download by hand; however, it is probably better to go through the solver to ensure package compatibility. All the info for the operations to be run can be viewed by including the --json flag. This can be filtered down to just the tarball URLs, which can then be downloaded directly. Here's a script along these lines (assuming Linux/Unix):
File: conda-download.sh
#!/bin/bash -l
conda create -dn null --json "$@" |\
grep '"url"' | grep -oE 'https[^"]+' |\
xargs wget -c
which can be used as
./conda-download.sh -c conda-forge -c pytorch numpy pandas pytorch transformers
that is, it accepts all arguments conda create would, and will download all the tarballs locally.
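Once the tarballs are on the offline machine they can be installed directly, since conda accepts local tarball paths as package specs. A minimal sketch, assuming the downloads sit in the current directory and using a hypothetical environment name:
conda create -y -n offline-env
conda install -n offline-env --offline ./*.tar.bz2
Note that installing explicit tarballs typically bypasses the solver, so pass all of them in a single command.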
Ignoring Cached Packages
If you already have some packages cached, the above will not re-download them. If you instead want every tarball needed for an environment, use this alternate version, which overrides the package cache with an empty temporary directory:
File: conda-download-all.sh
#!/bin/bash -l
tmp_dir=$(mktemp -d)
CONDA_PKGS_DIRS=$tmp_dir conda create -dn null --json "$@" |\
grep '"url"' | grep -oE 'https[^"]+' |\
xargs wget -c
rm -r "$tmp_dir"
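It is invoked exactly like the first script, e.g.:
./conda-download-all.sh -c conda-forge -c pytorch numpy pandas pytorch transformers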
Do you really want to use conda-pack? That lets you archive a conda environment for reproduction without using the internet or re-solving dependencies. To just prevent re-solving you can also use conda env export --explicit, but that still ties you to the source (internet or a local disk repository).
If you have a static (read-only) environment and want to really reduce Docker image size, you can volume-mount the environment at runtime. You would need to match the file paths (i.e. /opt/anaconda => /opt/anaconda).

Change ownership of dir to user when running program in sudo

I have a program that I need to run with sudo. I create a directory using mkdir, but this directory has owner and group set to root. That makes sense since I am using sudo. I would like to change the owner and group to the normal user, but I'm not sure how to do that. I thought running system("chown $USER:$USER /directory/") would work, but I suppose since I am in sudo it will just set to root. I was looking into using chown, but I wasn't sure how I was supposed to get the owner and group id. Also it would be good for it to be portable, so I don't want to just hardcode a user/group id.
You're mostly on the right path already; chown is the command you're looking for here.
You can string the two commands to make and then own the directory together using a semicolon.
sudo mkdir test ; sudo chown $USER:$USER test
I've tested this on Ubuntu 18.04 and Ubuntu 20.04, as that's your tag. The $USER variable resolves to the user you originally logged in as, not root, because your own shell expands it before sudo runs. Note that you need to call sudo again for the chown portion; the ; ends the first command, including its sudo elevation.
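If the chown must happen from inside the program that is already running under sudo (where $USER may resolve to root), note that sudo exports the invoking user's name in SUDO_USER; a minimal sketch:
# inside a process started via sudo, SUDO_USER holds the original login name
chown "$SUDO_USER:$SUDO_USER" /directory/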
The coreutils package includes a useful little command, install, which you can use instead of mkdir in a sudo context. For example,
sudo install -o USER -g GROUP -m MODE -d DIRECTORY
where USER is the user to own the directory DIRECTORY, GROUP is the group to own the directory, and MODE is the access mode (like chmod) to the directory.
Because system(COMMAND) and popen(COMMAND,...) actually run /bin/sh with -c and COMMAND as parameters, you can use the form
sudo install -o $(id -u) -g $(id -g) -m u=rwx,g=r-x,o=x DIRECTORY
where the shell replaces the user and group names (or rather, numbers, since I'm not using the -n option) before executing sudo. (The id command is also included in coreutils, so you can definitely expect both install and id to be available on all full-blown Linux machines; and even on most embedded systems. It is what all package managers et cetera use to install files, you see.)
Above, I used the mode u=rwx,g=r-x,o=x (equivalently, 0751) as an example; it sets the mode to rwxr-x--x, i.e. everybody may traverse into the directory, the owner user and group may also list its contents, and only the owner user may create new files or directories in it.
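You can verify the result with ls -ld; the path and names below are purely illustrative:
$ ls -ld /srv/mydir
drwxr-x--x 2 youruser yourgroup 4096 Jan  1 00:00 /srv/mydir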

How to disable Linux address space randomization via Dockerfile?

I'm trying to disable randomization via Dockerfile:
RUN sudo echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
but I get
Step 9 : RUN sudo echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
---> Running in 0f69e9ac1b6e
tee: /proc/sys/kernel/randomize_va_space: Read-only file system
Any way to work around this? I see it's saying "read-only file system". If it's something the kernel does, that means it's outside my container's scope; in that case, how am I supposed to work with gdb inside my container? Please note my goal is to work with gdb in a container: I'm experimenting with it, so I wanted a container that encapsulates gcc and gdb, which I'll use for experimentation.
Run this on the host, not in Docker:
sudo echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
Docker has syntax for modifying some sysctls (not via the Dockerfile, though), and kernel.randomize_va_space does not seem to be one of them.
Since you've said you're interested in running gcc/gdb you could disable ASLR only for these binaries with:
setarch `uname -m` -R /path/to/gcc/gdb
Also see other answers in this question.
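Relatedly, if deterministic addresses while debugging are all you need, gdb on Linux already disables ASLR for the process it launches by default; you can confirm or toggle this at the gdb prompt:
(gdb) show disable-randomization
(gdb) set disable-randomization on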
Sounds like you are building a container for development on your own computer. Unlike a production environment, you could (and probably should) opt for a privileged container. In a privileged container sysfs is mounted read-write, so you can control kernel parameters as you would on the host. Here is an example with the Amazon Linux container I use (developing for Amazon Linux on my Debian desktop), which shows the difference:
$ docker run --rm -it amazonlinux
bash-4.2# grep ^sysfs /etc/mtab
sysfs /sys sysfs ro,nosuid,nodev,noexec,relatime 0 0
bash-4.2# exit
$ docker run --rm -it --privileged amazonlinux
bash-4.2# grep ^sysfs /etc/mtab
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
bash-4.2# exit
$
Notice the ro mount in the unprivileged case and rw in the privileged one.
Note that the Dockerfile command
RUN sudo echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
makes no sense. It will be executed (a) at container build time and (b) on the machine where you build the image. You want it to happen (a) at the container's run time and (b) on the machine where you run the container. If you need to change sysctls at image start, write a script that does all the setup and then drops you into the interactive shell, e.g. place a script in /root and set it as the ENTRYPOINT:
#!/bin/sh
sudo sysctl kernel.randomize_va_space=0
exec /bin/bash -l
(Assuming you mount the host working directory into /home/jas; that's a good practice, as bash will read your startup files etc.)
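The Dockerfile wiring for such a script might look like this (file name and location are illustrative, following the /root suggestion above):
COPY entrypoint.sh /root/entrypoint.sh
RUN chmod +x /root/entrypoint.sh
ENTRYPOINT ["/root/entrypoint.sh"]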
You need to make sure you have the same UID and GID inside the container, and that you can do sudo. How you enable sudo depends on the distro: in Debian, members of the sudo group have unrestricted sudo access, while on Amazon Linux (and, IIRC, other RedHat-like systems) the wheel group does. Usually this boils down to an unwieldy run command that you'd rather script than type, like
docker run -it -v $HOME:$HOME -w $HOME -u $(id -u):$(id -g) --group-add wheel amazonlinux-devenv
Since your primary UID and GID match the host, files in mounted host directories won't end up owned by root. An alternative is to create a bona fide user for yourself during image build (i.e., in the Dockerfile), but I find this more error-prone, because I can end up running this devenv image where my username has a different UID, and that will cause problems. The use of id(1) in the startup command guarantees a UID match.

How do I recursively ftp only certain file types from a linux server using the command line?

I want to download only .htm or .html files from my server. I'm trying to use ncftpget and even wget but only with limited success.
With ncftpget I can download the whole tree structure no problem, but I can't seem to specify which files I want; it's either all or nothing.
If I specify the file type like this, it only looks in the top folder:
ncftpget -R -u myuser -p mypass ftp://ftp.myserver.com/public_html/*.htm ./local_folder
If I do this, it downloads the whole site and not just .htm files:
ncftpget -R -u myuser -p mypass ftp://ftp.myserver.com/public_html/ ./local_folder *.htm
Can I use ncftp to do this, or is there another tool I should be using?
You can do it with wget
wget -r -np -A "*.htm*" ftp://site/dir
or:
wget -m -np -A "*.htm*" ftp://user:pass@host/dir
However, as per the "Types of Files" section of the wget manual:
Note that these two options do not affect the downloading of HTML files (as determined by a .htm or .html filename suffix). This behavior may not be desirable for all users, and may be changed for future versions of Wget.
Does ncftpget understand dir globs?
Try
ncftpget -R -u myuser -p mypass ftp://ftp.myserver.com/public_html/**/*.htm ./local_folder
** means any number of directories.
The wget command understands standard unix file globbing syntax.
wget -r -np --ftp-user=username --ftp-password=password "ftp://example.com/path/to/dir/*.htm"
Alternatively, you can use the -A option, which accepts a comma-separated list of file name suffixes or patterns to accept.
wget -r -np -A '*.htm,*.html' "ftp://example.com/path/to/dir/"
The -R option is the opposite of -A, so you can use it to specify patterns NOT to fetch.
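For example, to mirror a directory while skipping images (the patterns here are illustrative):
wget -r -np -R '*.jpg,*.png' "ftp://example.com/path/to/dir/"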
Caveat: Make sure to quote patterns! Otherwise, your shell may expand the glob itself, leading to unexpected results.
Also! See the "Using wget to recursively download whole FTP directories" question on Server Fault.

Mounting NTFS filesystem on CentOS 5.2

I want to mount some internal and external NTFS drives in CentOS 5.2, preferably automatically upon boot-up. Doesn't matter if it's read/write or read-only, but read/write would be preferred, if it's safe.
Edit: Thanks for all answers, I summarized them below =)
First, run
fdisk -l
and note the hard drive partition, e.g. /dev/sda2, then
mount /dev/sda2 /mnt/windows
If this fails, try
yum install ntfs-3g
* Just noted this is not included by default, so you can check out NTFS-3g here and find a suitable package for your system.
To auto-mount this, add a line to /etc/fstab saying
/dev/sda2 /mnt/windows ntfs-3g defaults 0 0
and it should mount automatically on reboot.
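You can test the new fstab entry without rebooting; mount -a attempts to mount everything listed in /etc/fstab:
mount -a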
To answer my own question: PostMan and mgb led me down the right path, but their answers did not contain a complete solution.
Note: A short manual/wiki on this question is here: http://wiki.centos.org/TipsAndTricks/NTFSPartitions
So, I am using a fresh, bare install of CentOS 5.2 with latest updates. First of all, I ran the su command to avoid any permission issues.
I created mount points for a couple of external NTFS drives:
mkdir /mnt/iomega80
mkdir /mnt/iogear250
I had to use the fdisk command, but it wasn't in my system. Here's what installs it:
yum install util-linux
Then I ran /sbin/fdisk -l and found the device names:
Disk /dev/sdc: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *           1       30401   244196001    7  HPFS/NTFS

Disk /dev/sdd: 82.3 GB, 82348278272 bytes
255 heads, 63 sectors/track, 10011 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1   *           1       10011    80413326    7  HPFS/NTFS
For me, they are /dev/sdc1 and /dev/sdd1.
I had to install NTFS-3G, a package that enables NTFS support on CentOS. To install NTFS-3G, I first had to add the RPMforge repository to my YUM repository list.
To do that, I used these instructions: http://rpmrepo.org/RPMforge/Using. For my system, the two commands I had to run were:
wget http://packages.sw.be/rpmforge-release/rpmforge-release-0.3.6-1.el5.rf.i386.rpm
rpm -Uhv rpmforge-release-0.3.6-1.el5.rf.i386.rpm
Finally, I installed NTFS-3G using this YUM command:
yum install fuse fuse-ntfs-3g dkms dkms-fuse
At last, I could use the mount command to mount the filesystems:
mount -t ntfs-3g /dev/sdc1 /mnt/iogear250
mount -t ntfs-3g /dev/sdd1 /mnt/iomega80
By adding these two lines to /etc/fstab, like previous answers suggested, I got the drives to mount upon boot-up:
/dev/sdc1 /mnt/iogear250 ntfs-3g rw,umask=0000,defaults 0 0
/dev/sdd1 /mnt/iomega80 ntfs-3g rw,umask=0000,defaults 0 0
You should already have NTFS available; read-write support is now pretty reliable.
You can test it with mount -t ntfs /dev/sdX1 /mnt/tmp. You need to know which device the external disk is identified as (check dmesg), and you need to make a mount point first.
To mount automatically every time, put a line in /etc/fstab, using one of the existing lines as an example; you will have to be root to do this.
You forgot to mention that you need to do a reboot after installing fuse, etc.
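The reboot is usually just a way to get the fuse kernel module loaded; loading it by hand (as root) should have the same effect:
/sbin/modprobe fuse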
First, enable the EPEL repository:
yum install epel-release
Then install ntfs-3g:
yum install ntfs-3g
