Commands I executed:
mkfs.ext4 -F /dev/xxx0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
524288 inodes, 2097152 blocks
104857 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2147483648
64 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: done
Writing inode tables: done
Writing superblocks and filesystem accounting information: done
mount -o dax /dev/xxx0 /mnt/ext4-xxx0
mount output
/dev/xxx0 on /mnt/ext4-xxx0 type ext2 (rw,relatime,seclabel,block_validity,barrier,dax,user_xattr,acl)
dmesg output
[ 2917.044866] EXT4-fs (xxx0): DAX enabled. Warning: EXPERIMENTAL, use at your own risk
[ 2917.044869] EXT4-fs (xxx0): mounting ext2 file system using the ext4 subsystem
...
...
[ 2917.045459] EXT4-fs (xxx0): mounted filesystem without journal. Opts: dax
This subsequently causes fallocate to fail, since fallocate is not supported on ext2.
Any suggestions or solutions would be appreciated.
Provide the type explicitly as ext4 while mounting; otherwise mount will use the default type, or the type listed in /etc/fstab.
# mount -o dax -t ext4 /dev/xxx0 /mnt/ext4-xxx0
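If you want to confirm programmatically that the remounted filesystem supports preallocation, here is a minimal hedged C check (the test file path is hypothetical; on the ext2 driver, fallocate(2) fails with EOPNOTSUPP):
#define _GNU_SOURCE
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* hypothetical test path under the freshly mounted filesystem */
    int fd = open("/mnt/ext4-xxx0/falloc_test", O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (fallocate(fd, 0, 0, 1 << 20) < 0)  /* try to preallocate 1 MiB */
        perror("fallocate");               /* EOPNOTSUPP on ext2 */
    else
        puts("fallocate OK");
    close(fd);
    return 0;
}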
Related
SYNOPSIS
mount -o degraded,ro /dev/disk/by-uuid/ec3 /mnt/ec3/ && echo noerror
noerror
DESCRIPTION
Mounting with -t btrfs fails to actually mount the filesystem, yet the command returns success (hence the noerror echo above), and this has only happened since the last reboot.
btrfs check seems clean to me (I am a simple user).
btrfs restore errors out with "We have looped trying to restore files in"...
I had a lingering artifact: btrfs filesystem show reports "*** Some devices missing" on the volume. This meant it would not automount on boot, and I have been mounting it manually (while searching for a resolution to that).
I have previously used rdfind to deduplicate with hard links (as many as 10 per file).
I had just backed up using btrfs send/receive, but I have to check whether I have everything - this was the main RAID1 server.
DETAILS
btrfs-find-root /dev/disk/by-uuid/ec3
Superblock thinks the generation is 103093
Superblock thinks the level is 1
Found tree root at 8049335181312 gen 103093 level 1
btrfs restore -Ds /dev/disk/by-uuid/ec3 restore_ec3
We have looped trying to restore files in
df -h /mnt/ec3/
Filesystem Size Used Avail Use% Mounted on
/dev/dm-0 16G 16G 483M 97% /
mount -o degraded,ro /dev/disk/by-uuid/ec3 /mnt/ec3/ && echo noerror
noerror
df /mnt/ec3/
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/dm-0 16775168 15858996 493956 97% /
btrfs filesystem show /dev/disk/by-uuid/ec3
Label: none uuid: ec3
Total devices 3 FS bytes used 1.94TiB
devid 6 size 2.46TiB used 1.98TiB path /dev/mapper/26d2e367-65ea-47ad-b298-d5c495a33efe
devid 7 size 2.46TiB used 1.98TiB path /dev/mapper/3c193018-5956-4637-9ec2-dd5e49a4a412
*** Some devices missing #### comment: this is an old artifact, unchanged since before the filesystem became unmountable
btrfs check /dev/disk/by-uuid/ec3
Checking filesystem on /dev/disk/by-uuid/ec3
UUID: ec3
checking extents
checking free space cache
checking fs roots
checking csums
checking root refs
found 2132966506496 bytes used err is 0
total csum bytes: 2077127248
total tree bytes: 5988204544
total fs tree bytes: 3492638720
total extent tree bytes: 242151424
btree space waste bytes: 984865976
file data blocks allocated: 3685012271104
referenced 3658835013632
btrfs-progs v4.1.2
Update:
After a reboot (I had to wait for a window to take the system down), the filesystem now mounts manually, but not completely cleanly.
Now asking the question on IRC #btrfs:
! http://pastebin.com/359EtZQX
Hi, I'm scratching my head and have searched in vain for a way to remove "*** Some devices missing". Can anyone give me a clue how to clean this up?
- Is there a good way to 'fix' the artifacts I am seeing? Currently trying: scrub, balance. Still to try: resize, defragment.
- Would I be advised to move to a new clean volume set?
- Would a fix via btrfs send/receive be safe from propagating errors?
- Or (more painfully) rsync to a clean volume? http://pastebin.com/359EtZQX (My first ever day using IRC.)
I'm trying to write one billion files into a single folder using multiple threads, but after my program had written about 20 million files I got "No space left on device". I did not stop the program, because it is still writing the same files.
I don't have an inode problem; only 7% of the inodes are used.
There is no problem with /tmp or /var/tmp either; they are empty.
I increased fs.inotify.max_user_watches to 1048576.
I use Debian with ext4 as the filesystem.
Has anyone else met this problem? Thank you so much for any help.
Running tune2fs -l /path/to/drive gives
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 260276224
Block count: 195197952
Reserved block count: 9759897
Free blocks: 178861356
Free inodes: 260276213
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 1024
Blocks per group: 24576
Fragments per group: 24576
Inodes per group: 32768
Inode blocks per group: 2048
Flex block group size: 16
Filesystem created: ---
Last mount time: ---
Last write time: ---
Mount count: 2
Maximum mount count: -1
Last checked: ---
Check interval: 0 ()
Lifetime writes: 62 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: ---
Directory Hash Seed: ---
Journal backup: inode blocks
Check this question:
How to store one billion files on ext4?
You have fewer blocks than inodes (195,197,952 blocks versus 260,276,224 inodes), which is not going to work: even at one data block per file you could not reach a billion, and I think that is the least of your problems. If you really want to do this (would a database be better?), you may need to look into filesystems other than ext4; ZFS springs to mind as an option that allows 2^48 entries per directory and should do what you want.
If this question https://serverfault.com/questions/506465/is-there-a-hard-limit-to-the-number-of-files-a-directory-can-have is anything to go by, there is a limit on the number of files per directory on ext4, and you are likely hitting it.
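One common workaround (not from the answers above, just a sketch of the approach the linked question discusses) is to fan the files out over a hierarchy of subdirectories so that no single directory grows unboundedly. A minimal C illustration, using a hypothetical data/xx/yy/<name> layout derived from a hash of the file name:
#include <stdio.h>
#include <sys/stat.h>

/* djb2 string hash - any cheap, well-distributed hash will do */
static unsigned long hash_name(const char *s)
{
    unsigned long h = 5381;
    while (*s) h = h * 33 + (unsigned char)*s++;
    return h;
}

int main(void)
{
    const char *name = "file0000001";       /* hypothetical file name */
    unsigned long h = hash_name(name);
    unsigned a = (h >> 8) & 0xff, b = h & 0xff;
    char path[512];

    mkdir("data", 0755);                    /* EEXIST is fine on reruns */
    snprintf(path, sizeof path, "data/%02x", a);
    mkdir(path, 0755);
    snprintf(path, sizeof path, "data/%02x/%02x", a, b);
    mkdir(path, 0755);
    snprintf(path, sizeof path, "data/%02x/%02x/%s", a, b, name);

    FILE *f = fopen(path, "w");
    if (f) fclose(f);
    printf("stored as %s\n", path);
    return 0;
}
With two hex levels you get 65,536 leaf directories, so a billion files works out to roughly 15,000 entries per directory, which ext4's dir_index handles comfortably.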
I have an application which needs to read information from a hard disk, such as the serial number and model.
Now, of course, it matters whether the drive is a SAS, SATA or FC drive.
Is there a reliable way to identify which protocol a connected drive uses? Either via an OS command, by checking some logs, or by inquiring of the device?
I don't want to use the sysfs structure. I want to know how the OS knows whether it's an ATA, SCSI or some other type of disk.
As you have mentioned in comments to user3588161's answer, you have both SATA and SAS disks attached to the same SAS controller, so I'd suggest using the smartctl command!
The smartctl command acts as a control and monitoring utility for SMART disks under Linux and Unix-like operating systems. Type the following command to get information about /dev/sda (a SATA disk):
# smartctl -d ata -a -i /dev/sda
For a SAS disk, use one of the following:
# smartctl -d scsi --all /dev/sgX
# smartctl -d scsi --all /dev/sg1
# smartctl -d scsi --all /dev/sg1 -H
I guess all of this information is somehow related to this location:
/sys/class/scsi_device/?:?:?:?/device/model
I suggest you try this too, to check what output it renders:
cat /sys/class/scsi_device/0\:0\:0\:0/device/{model,vendor}
(The backslashes next to the zeros escape the special character ":".)
Also, I'd suggest visiting these two links for more information and details such as sample output:
Find Out Hard Disk Specs
To Check Disk behind Adaptec RAID Controllers
Checking boot information, it seems the disk type is set in kernel ahci calls. You can check (as root) with dmesg | grep ahci (on sysvinit systems) or with journalctl -k -b -0 -l --no-pager | grep ahci (with systemd). The relevant query/setting looks to be:
kernel: ahci 0000:00:12.0: version 3.0
kernel: ahci 0000:00:12.0: controller can't do 64bit DMA, forcing 32bit
kernel: ahci 0000:00:12.0: AHCI 0001.0100 32 slots 4 ports 3 Gbps 0xf impl SATA mode
kernel: ahci 0000:00:12.0: flags: ncq sntf ilck pm led clo pmp pio slum part ccc
The third line holds the controller/type information you are looking for. This seems to be where the information comes from, but from your question's standpoint it isn't a viable solution.
The question becomes where this information gets recorded or stored within /dev, /proc or /sys. I have looked and cannot find a one-to-one correlation between this initial boot-time determination of disk type and any stored flag. The information may well be part of the coded data in, for example, /sys/class/scsi_disk/0:0:0:0/device or a similar location. Hopefully this will allow you or others to pinpoint whether, and if so where, this information is captured and available on a running system.
Answer rewritten in view of clarification: libATA is what you want. It's what hdparm calls, and it reports the transport too. It's hard to find up-to-date docs on it, though; see http://docs.huihoo.com/linux/kernel/2.6.26/libata/index.html for example.
I have not used libATA (directly) myself, so I can't be more specific as to the API calls needed. Since not many people need to write something like hdparm themselves, your best bet is to consult its sources to see what exactly it calls.
hdparm can report stuff like:
[root@alarmpi ~]# hdparm -I /dev/sdb
/dev/sdb:
ATA device, with non-removable media
Model Number: TOSHIBA DT01ACA200
Serial Number: Z36GKMKGS
Firmware Revision: MX4OABB0
Transport: Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0; Revision: ATA8-AST T13 Project D1697 Revision 0b
If your actual problem is that only sdparm works on your system for SCSI drives (this can happen), then the problem is reduced to figuring out which of hdparm or sdparm to call, isn't it? You could use udevinfo for that. See https://chromium.googlesource.com/chromiumos/third_party/laptop-mode-tools/+/775acea9e819bdee90cca8d2363827c13967a14b/laptop-mode-tools_1.52/usr/share/laptop-mode-tools/modules/hdparm for example.
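For completeness, if you would rather query the identify data from C than shell out to hdparm, here is a minimal sketch using the legacy HDIO_GET_IDENTITY ioctl (it assumes an ATA-class device; on SCSI/SAS disks this ioctl simply fails, which in itself is one crude way to tell the two apart):
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/hdreg.h>

int main(int argc, char **argv)
{
    const char *dev = argc > 1 ? argv[1] : "/dev/sda";
    struct hd_driveid id;
    int fd = open(dev, O_RDONLY | O_NONBLOCK);
    if (fd < 0) { perror("open"); return 1; }
    if (ioctl(fd, HDIO_GET_IDENTITY, &id) < 0) {
        perror("HDIO_GET_IDENTITY");   /* fails on non-ATA devices */
        close(fd);
        return 1;
    }
    /* the identify strings are space-padded, not NUL-terminated */
    printf("model:  %.40s\n", (const char *)id.model);
    printf("serial: %.20s\n", (const char *)id.serial_no);
    printf("fw:     %.8s\n",  (const char *)id.fw_rev);
    close(fd);
    return 0;
}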
I am using Red Hat 5.5 and I am trying to run Sybase ASE 12.5.4.
Yesterday I tried the command "service sybase start", and the console showed Sybase repeatedly trying, and failing, to initialize the main database server.
UPDATE:
I initialized a database at /ims_systemdb/master using the following commands:
dataserver -d /ims_systemdb/master -z 2k -b 51204 -c $SYBASE/ims.cfg -e db_error.log
chmod a=rwx /ims_systemdb/master
ls -al /ims_systemdb/master
And it gives me a nice database at /ims_systemdb/master with a size of 104865792 bytes (2048 x 51204).
But when I run
service sybase start
The error log at /logs/sybase_error.log goes like this:
00:00000:00000:2013/04/26 16:11:45.18 kernel Using config area from primary master device.
00:00000:00000:2013/04/26 16:11:45.19 kernel Detected 1 physical CPU
00:00000:00000:2013/04/26 16:11:45.19 kernel os_create_region: can't allocate 11534336000 bytes
00:00000:00000:2013/04/26 16:11:45.19 kernel kbcreate: couldn't create kernel region.
00:00000:00000:2013/04/26 16:11:45.19 kernel kistartup: could not create shared memory
I read "os_create_region" is normal if you don't set shmmax in sysctl high enough, so I set it to 16000000000000, but I still get this error. And sometimes, when I'm playing around with the .cfg file, I get this error message instead:
00:00000:00000:2013/04/25 14:04:08.28 kernel Using config area from primary master device.
00:00000:00000:2013/04/25 14:04:08.29 kernel Detected 1 physical CPU
00:00000:00000:2013/04/25 14:04:08.85 server The size of each partitioned pool must have atleast 512K. With the '16' partitions we cannot configure this value f
Why do these two errors appear and what can I do about them?
UPDATE:
Currently, I'm seeing the 1st error message (os cannot allocate bytes). The contents of /etc/sysctl.conf are as follows:
kernel.shmmax = 4294967295
kernel.shmall = 1048576
kernel.shmmni = 4096
But the log statements earlier state that
os_create_region: can't allocate 11534336000 bytes
So why is the region it is trying to allocate so big, and where did that get set?
The Solution:
When you get a message like "os_create_region: can't allocate 11534336000 bytes", it means that Sybase's configuration file is asking the kernel to create a shared memory region that exceeds the shmmax limit in /etc/sysctl.conf.
The main thing to do is to edit ims.cfg (or whatever configuration file you are using) and reduce the max memory variable in the physical memory section:
[Physical Memory]
max memory = 64000
additional network memory = 10485760
shared memory starting address = DEFAULT
allocate max shared memory = 1
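As far as I know, ASE expresses its memory settings in 2 KB units, so max memory = 64000 means 64000 x 2 KB = 125 MiB, comfortably below any sane shmmax. By the same arithmetic, the failed 11534336000-byte request corresponds to a max memory of 5632000.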
For your information, my /etc/sysctl.conf file ended with these three lines:
kernel.shmmax = 16000000000
kernel.shmall = 16000000000
kernel.shmmni = 8192
And once this is done, type "showserver" to reveal what processes are running.
For more information, consult the Sybase System Administrator's Guide, volume 2 as well as Michael Gardner's link to Red Hat memory management in the comments earlier.
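As a sanity check, you can also ask the kernel directly whether it will grant a segment of the size Sybase wants. A minimal C sketch (shmget(2) fails with EINVAL when the requested size exceeds kernel.shmmax):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(int argc, char **argv)
{
    /* size to probe for; defaults to the value from the error log.
     * Note: on a 32-bit kernel, size_t itself caps requests at 4 GiB. */
    unsigned long long size = argc > 1 ? strtoull(argv[1], NULL, 10)
                                       : 11534336000ULL;
    int id = shmget(IPC_PRIVATE, (size_t)size, IPC_CREAT | 0600);
    if (id < 0) {
        fprintf(stderr, "shmget(%llu): %s\n", size, strerror(errno));
        return 1;
    }
    printf("segment of %llu bytes created (id %d), removing it\n", size, id);
    shmctl(id, IPC_RMID, NULL);   /* clean up so the segment doesn't linger */
    return 0;
}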
I have a flash drive device (/dev/sda1) mounted at /mnt on an embedded Linux system (kernel 2.6.23). Using C, how do I work out the size of the drive?
On Linux, if you're not worried about portability (C doesn't know about drives, so any such specific code will be unportable), use statfs():
#include <sys/vfs.h>   /* statfs() */
#include <stdio.h>

struct statfs fsb;
if (statfs("/mnt", &fsb) == 0)
    printf("device has %llu blocks, each %lu bytes\n",
           (unsigned long long)fsb.f_blocks, (unsigned long)fsb.f_bsize);
Read and parse the number in the device's sysfs entry. In your case:
Full device (all partitions and partition table): /sys/block/sda/size
Logical partition on this device: /sys/block/sda/sda1/size
The device does not have to be mounted yet.
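A minimal C sketch of that approach; the sysfs size attribute is reported in 512-byte sectors regardless of the device's logical block size:
#include <stdio.h>

int main(void)
{
    unsigned long long sectors;
    FILE *f = fopen("/sys/block/sda/sda1/size", "r");
    if (f == NULL) { perror("fopen"); return 1; }
    if (fscanf(f, "%llu", &sectors) != 1) { fclose(f); return 1; }
    fclose(f);
    /* sysfs reports the size in 512-byte sectors */
    printf("%llu bytes\n", sectors * 512ULL);
    return 0;
}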
If you have no problem using external tools, run this:
df -h | grep -i /dev/sda1
via popen(), and parse the resulting line with strtok().
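A rough sketch of that approach; here I use df -P (POSIX output format) instead of -h, so the size column stays in plain 1K blocks and is easier to parse, and I pass the device directly instead of piping through grep:
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[256];
    FILE *p = popen("df -P /dev/sda1", "r");   /* -P: portable, 1K blocks */
    if (p == NULL) { perror("popen"); return 1; }
    if (fgets(line, sizeof line, p)            /* skip the header line */
        && fgets(line, sizeof line, p)) {      /* read the data line */
        char *fs    = strtok(line, " \t");     /* filesystem name */
        char *kblks = strtok(NULL, " \t");     /* total size in 1K blocks */
        if (fs && kblks)
            printf("%s: %s KiB\n", fs, kblks);
    }
    pclose(p);
    return 0;
}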