I want to mount some internal and external NTFS drives in CentOS 5.2, preferably automatically upon boot-up. Doesn't matter if it's read/write or read-only, but read/write would be preferred, if it's safe.
Edit: Thanks for all the answers; I've summarized them below =)
First, do a
fdisk -l
to get the hard drive partition, e.g. /dev/sda2, then
mount /dev/sda2 /mnt/windows
If this fails, try a
yum install ntfs-3g
* Just note this is not included by default, so you can check out NTFS-3g here and find a suitable package for your system.
To auto-mount this, add a line to /etc/fstab saying
/dev/sda2 /mnt/windows ntfs defaults 0 0
and it should mount automatically on a reboot.
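To check that the mount actually worked, a quick sanity check (using the mount point from above):
df -hT /mnt/windows
mount | grep /mnt/windows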
To answer my own question: PostMan and mgb led me down the right path, but their answers did not contain a complete solution.
Note: A short manual/wiki on this question is here: http://wiki.centos.org/TipsAndTricks/NTFSPartitions
So, I am using a fresh, bare install of CentOS 5.2 with the latest updates. First of all, I ran the su command to avoid any permission issues.
I created mount points for a couple of external NTFS drives:
mkdir /mnt/iomega80
mkdir /mnt/iogear250
I had to use the fdisk command, but it wasn't on my system. Here's what installs it:
yum install util-linux
Then I ran /sbin/fdisk -l and found the device names:
Disk /dev/sdc: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 * 1 30401 244196001 7 HPFS/NTFS
Disk /dev/sdd: 82.3 GB, 82348278272 bytes
255 heads, 63 sectors/track, 10011 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdd1 * 1 10011 80413326 7 HPFS/NTFS
For me, they are /dev/sdc1 and /dev/sdd1.
I had to install NTFS-3G, a package that enables NTFS support on CentOS. To install NTFS-3G, I first had to add the RPMforge repository to my YUM repository list.
To do that, I used these instructions: http://rpmrepo.org/RPMforge/Using. For my system, the two commands I had to run were:
wget http://packages.sw.be/rpmforge-release/rpmforge-release-0.3.6-1.el5.rf.i386.rpm
rpm -Uhv rpmforge-release-0.3.6-1.el5.rf.i386.rpm
Finally, I installed NTFS-3G using this YUM command:
yum install fuse fuse-ntfs-3g dkms dkms-fuse
At last, I could use the mount command to mount the filesystems:
mount -t ntfs-3g /dev/sdc1 /mnt/iogear250
mount -t ntfs-3g /dev/sdd1 /mnt/iomega80
By adding these two lines to /etc/fstab, as previous answers suggested, I got the drives to mount upon boot-up:
/dev/sdc1 /mnt/iogear250 ntfs-3g rw,umask=0000,defaults 0 0
/dev/sdd1 /mnt/iomega80 ntfs-3g rw,umask=0000,defaults 0 0
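To test the fstab entries without rebooting, you can unmount the drives and let mount pick them up from /etc/fstab again (a standard check, nothing NTFS-specific):
umount /mnt/iogear250 /mnt/iomega80
mount -a
mount | grep /mnt/io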
You should already have NTFS available; read-write support is now pretty reliable.
You can test it with "mount -t ntfs /dev/sdX1 /mnt/tmp". You need to know which device the external disk is identified as (check dmesg), and you need to make a mount point first.
To mount it automatically every time, put a line in /etc/fstab; use one of the existing lines as an example (you will have to be root to do this).
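For example, an entry along these lines (the device name and mount point are placeholders, matching the test command above):
/dev/sdX1 /mnt/tmp ntfs defaults 0 0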
You forgot to mention that you need to do a reboot after installing fuse, etc.
First, enable the EPEL repository:
yum install epel-release
Then install ntfs-3g:
yum install ntfs-3g
Enable the EPEL repository
yum -y install epel-release
Install ntfs-3g
yum -y install ntfs-3g
Update Grub
grub2-mkconfig -o /boot/grub2/grub.cfg
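To confirm the repository and driver are actually in place, you can run these standard yum/rpm checks:
yum repolist enabled | grep -i epel
rpm -q ntfs-3g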
Related
My /home partition is 350 GB and I have a lot of space remaining on it. I have just 16 GB of space remaining on my filesystem root. I need to expand the filesystem root to install more software on my system. Is it possible? If yes, how?
Run the following commands on your machine to see the partition layout clearly:
lsblk
df -HT
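If the output shows the root filesystem on an LVM logical volume (common on default installs), the free space in the volume group determines whether the root volume can simply be grown; a quick check, assuming LVM is in use:
sudo vgs
sudo lvs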
These commands work for me:
sudo apt update && sudo apt upgrade && sudo apt dist-upgrade
I have a running system with Ubuntu 16.04, Apache 2.4.18, PHP 7.3 and 7.4, PHP-FPM, PHP FastCGI, MPM event.
I wanted to upgrade to the latest Apache version (2.4.46-2+ubuntu16.04.1+deb.sury.org+3 amd64 [upgradable from: 2.4.18-2ubuntu3.17]) as follows:
add-apt-repository -y ppa:ondrej/apache2
apt update
apt-get --only-upgrade install apache2
service apache2 restart
This failed with:
Job for apache2.service failed because the control process exited with error code. See "systemctl status apache2.service" and "journalctl -xe" for details.
journalctl -xe shows:
apachectl[9010]: [:crit] [pid 9013] (38)Function not implemented: AH00141: Could not initialize random number generator
I checked, and /dev/random and /dev/urandom both exist.
Kernel: 4.4.0-042stab141.2 and libc6: 2.23-0ubuntu11.2
Happened to me after upgrading apache to version 2.4.46 on Ubuntu as well. I found out it was the kernel version.
I knew I had run apt-get upgrade, so the kernel should have been the latest version. Also, running
sudo update-grub
showed me newer versions, but running uname -r showed a very old kernel.
After a long investigation that took almost all day, trying everything I found online about upgrading the Ubuntu kernel, I found out it was DigitalOcean, not me. Old droplets use an externally managed kernel, so no matter what you do in your environment, it will always boot the external kernel. The solution was here:
https://www.digitalocean.com/docs/droplets/how-to/kernel/grubloader/#switch
If you do see the drop-down and "change" button in your droplet settings in the DigitalOcean control panel, then your kernel is externally managed. In that drop-down, type "grub", choose GrubLoader v0.2, press the "change" button, and that's it!
Now you'll need to shut down and turn your server back on, but before you do so, I suggest running the following commands:
sudo apt-get update
sudo apt-get upgrade
The above upgrade will update the whole system. To update just the kernel, run the above update command followed by:
sudo apt-get upgrade linux-image-generic
Now shut down (sudo poweroff, or power off from the DigitalOcean interface, though doing it from the CLI is preferred). Note that a reboot is not sufficient in this particular case; a complete shutdown is needed (thanks @gauss256 for your comment). Then power it back on from the DigitalOcean interface, and upon startup you should see a new kernel version.
Tip: you might want to delete old kernel files after the reboot; this can be done with:
sudo apt-get purge $( dpkg --list | grep -P -o "linux-image-\d\S+" | grep -v $(uname -r | grep -P -o ".+\d") )
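Before purging, it's worth double-checking which kernel is running and which images are installed, e.g.:
uname -r
dpkg --list | grep linux-image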
I'm writing a paper in RMarkdown, and for better reproducibility I want to containerize all required software in a Singularity container. Unfortunately, when I try to install TinyTeX (which is recommended for RMarkdown, and which I would prefer over TeX Live so as not to inflate the container more than needed), it fails with the following error message (the full build log is pasted here):
Can't locate TeXLive/TLConfig.pm in @INC (you may need to install the TeXLive::TLConfig module) (@INC contains: /~/.TinyTeX/texmf-dist/scripts/texlive /~/.TinyTeX/tlpkg /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.26.1 /usr/local/share/perl/5.26.1 /usr/lib/x86_64-linux-gnu/perl5/5.26 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl/5.26 /usr/share/perl/5.26 /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base) at ~/.TinyTeX/bin/x86_64-linux/tlmgr line 100.
BEGIN failed--compilation aborted at ~/.TinyTeX/bin/x86_64-linux/tlmgr line 100.
This is the build definition file; basically, it uses a very slimmed-down Ubuntu 18.04 and then executes the %post section to install software:
BootStrap: library
From: ubuntu:18.04
%post
# Add universe repository
echo "deb http://us.archive.ubuntu.com/ubuntu bionic universe" >> /etc/apt/sources.list
apt -y update
# Install utilities
apt install -y wget
# Install R
apt install -y r-base-core
## Install RMarkdown and TinyTeX
R --slave -e 'install.packages(c("rmarkdown","tinytex")); tinytex::install_tinytex()'
# Clean
apt-get clean
%environment
export LC_ALL="en_US.UTF-8"
%labels
Author DP
I have also tried tinytex::install_tinytex(dir="/opt/tinytex") but that didn't seem to change anything. Does anyone have an idea what's wrong?
That error message is complaining that your image (or, more likely, your path) is missing the TeXLive::TLConfig perl module.
My guess is that the path contents are not being rehashed with the installed modules after the install. The simplest solution is to break it into two commands:
R --slave -e 'install.packages(c("rmarkdown","tinytex"))'
R --slave -e 'tinytex::install_tinytex()'
Installation succeeds when I try that locally.
A potentially useful alternative, if the image is just for document generation, could be converting a docker image with rmarkdown and tex (e.g. https://hub.docker.com/r/rocker/verse) to a singularity one.
With singularity pull docker://rocker/verse you can do that for the latest version, or for a specific version with verse:version_number.
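For example (the version tag here is only illustrative; check the repository for real tags):
singularity pull docker://rocker/verse
singularity pull docker://rocker/verse:4.1.2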
I'm building a Docker image on my Raspberry Pi, which of course takes some time. The problem here is that even very simple commands in the Dockerfile, like setting an environment variable, running chmod +x on a single file, or exposing port 80, take minutes to complete.
Here is an excerpt of my Dockerfile:
FROM resin/rpi-raspbian
MAINTAINER felixbr <mymail@redacted.com>
RUN export DEBIAN_FRONTEND=noninteractive && apt-get update && apt-get install -y python python-dev python-pip python-numpy python-scipy python-mysqldb mysql-server redis-server nginx dos2unix poppler-utils
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt
COPY . /app
WORKDIR /app
RUN cp /app/nginx-django.cfg /etc/nginx/sites-enabled/default
RUN chmod +x /app/start.sh
ENV DOCKERIZED="true"
CMD ./start.sh
EXPOSE 80
Keep in mind this is using an ARMv6 base image so it can run on a Raspberry Pi, and I'm using Docker 1.5.0 built for the Hypriot Raspberry Pi OS.
Is it copying the built layers for every command, or why else does each of the last few commands take minutes to complete?
Each instruction of the Dockerfile is run in its own container. That means that for each instruction, Docker will do the following (see the example after the list):
Instantiate a container from the image created by the previous step, which will create a new layer (the R/W one)
Do the thing (pip install, etc.)
Commit, which copies the top layer as an image layer (I'm pretty sure it copies the layer)
Remove the container (if the --rm option is specified), thus removing the container's read/write layer
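You can see the layer each instruction produced (and its size) with docker history, e.g. (the image name is a placeholder):
docker history your-image-name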
There are a few I/O operations involved. On an SSD it's really quick, as it is on a good hard drive. When you build on the Raspberry Pi's SD card (or MicroSD), the performance of the SD card is probably not that good. It will depend on the class of your MicroSD, and even then, I don't think it's really good for the card. I tried it with a simple Node project, and it definitely took a few minutes instead of the few seconds it took on my laptop. It is hardware related (mostly I/O for the SD card, maybe a little bit the CPU, but...).
You might want to try using an external hard drive connected to the Raspberry Pi and moving the Docker folders there, to see if the performance is better.
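A minimal sketch of that move, assuming the external drive is mounted at /mnt/usbdrive (the path and the Debian-style /etc/default/docker config are assumptions):
sudo service docker stop
sudo mv /var/lib/docker /mnt/usbdrive/docker
# then point the daemon at the new location, e.g. in /etc/default/docker:
# DOCKER_OPTS="-g /mnt/usbdrive/docker"
sudo service docker start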
This is an old question but for reference, you may have been suffering from the chosen storage driver.
On Ubuntu/Debian, Docker uses the AUFS storage driver by default, which is quite fast.
On other distributions, Docker defaults to the devicemapper storage driver, which is very slow with the default configuration (due to the "loop-lvm" mode it configures by default, which is not recommended for production use).
Check this guide for reference and to see how to configure the devicemapper storage driver for production (without loop mode): https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/
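You can check which storage driver your daemon is currently using with:
docker info | grep -i 'storage driver'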
Another consideration that was not mentioned here is that on ARMv7, most packages that you may want to install with pip or apt-get are not packaged as binaries.
That means that on an amd64 architecture, pip install downloads a binary and merely copies it into the right place, but on ARMv7 it won't find a suitable binary and will instead download the source code and build it from scratch.
When you have a package with lots of dependencies that need to be built from source, it takes a looong time.
You can check what is going on during docker build by using the -v flag on pip:
pip install -v -r requirements.txt
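Relatedly, if you want pip to fail fast instead of silently compiling from source, newer pip versions (7.0+, so this may not apply to an old Raspbian image) support:
pip install --only-binary=:all: -r requirements.txt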
On the ARMv7 architecture, some Python libraries are not yet available as binaries, so build times are very long, since you are building the libraries for ARMv7 yourself as well.
I have a FreeBSD install with a UFS filesystem. Inside FreeBSD, I have created a zpool in raidz1. Now I want to run an iozone test on ZFS, but I can't figure out how to point the iozone test at the zpool rather than the base filesystem.
Here is what I personally use, which also specifies the files to be used:
./iozone -R -l 16 -u 16 -r 128k -s 200G -F /testpool/g{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16}
If you only want a single file to be used, just use -f instead of -F.
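Before running it, you can confirm where the pool is mounted, so the test files land on ZFS rather than the UFS root:
zfs get mountpoint testpool
df -h /testpool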