Logwatch is too noisy - Ubuntu 18.04

I've been using Logwatch for at least 12 years, but since I moved to Ubuntu 18.04 the daily e-mail has become really annoying: it lists 37 /snap mounts in the filesystem check:
Filesystem Size Used Avail Use% Mounted on
/dev/nvme0n1p2 439G 268G 149G 65% /
/dev/loop0 83M 83M 0 100% /snap/shotcut/119
/dev/loop1 234M 234M 0 100% /snap/gimp/322
/dev/loop3 291M 291M 0 100% /snap/vlc/1620
/dev/loop4 218M 218M 0 100% /snap/gnome-3-34-1804/60
/dev/loop2 256K 256K 0 100% /snap/gtk2-common-themes/13
etc...
I have looked for a solution before without luck, and I've been digging through the Logwatch files, but I couldn't find any setting for this.

I looked in /usr/share/logwatch/scripts/services/zz-disk_space, where the df command is:
df -h -x tmpfs -x devtmpfs -x udf -x iso9660
Filesystem Size Used Avail Use% Mounted on
/dev/nvme0n1p2 439G 268G 150G 65% /
/dev/loop0 83M 83M 0 100% /snap/shotcut/119
/dev/loop1 234M 234M 0 100% /snap/gimp/322
/dev/loop3 291M 291M 0 100% /snap/vlc/1620
etc... (37 of those in total)
By adding '-x squashfs' I get what I want:
df -h -x tmpfs -x devtmpfs -x udf -x iso9660 -x squashfs
Filesystem Size Used Avail Use% Mounted on
/dev/nvme0n1p2 439G 268G 150G 65% /
/dev/sda 3.6T 580G 2.9T 17% /backup
/dev/nvme0n1p1 511M 7.4M 504M 2% /boot/efi
//192.168.0.200/nas-office/backup 1.9T 723G 1.2T 39% /mnt/nas
Excellent!
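Editing the file under /usr/share/logwatch works, but a package upgrade can overwrite it. Logwatch consults /etc/logwatch/scripts/services/ before the shared directory, so one common approach is to copy the script there and patch the df line in the copy (a sketch; verify the override path on your install):

```shell
# Override the stock script so package upgrades don't undo the change
sudo mkdir -p /etc/logwatch/scripts/services
sudo cp /usr/share/logwatch/scripts/services/zz-disk_space \
        /etc/logwatch/scripts/services/
# Append -x squashfs to the existing df exclusions
sudo sed -i 's/-x iso9660/-x iso9660 -x squashfs/' \
        /etc/logwatch/scripts/services/zz-disk_space
```

The local copy shadows the packaged one, so the squashfs exclusion survives upgrades.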

Related

Why is gdb aborting when I try to print a cosine?

Here's my interaction with it. I first start gdb, set a breakpoint, and run the program; gdb halts at the breakpoint. Then:
<code>
(gdb) b 89
Breakpoint 1 at 0x18cc: file parseGaia3DataToSqDeg.c, line 89.
(gdb) r
Starting program: /sixTB/astro/catalogs/gaia3/shSqDeg/fj
Star 0.0281655 -89.857 not found in 0 tries.
Breakpoint 1, main (argc=1, argv=0x7fffffffe5c8) at parseGaia3DataToSqDeg.c:89
89 exit(0); //TEST
(gdb) p cos(.333)
Abort
</code>
Gdb simply quits, and I'm back at my command line.
Data on gdb:
gdb --version
GNU gdb (Debian 10.1-1.7) 10.1.90.20210103-git
My machine:
total used free shared buff/cache available
Mem: 27Gi 3.1Gi 1.2Gi 123Mi 23Gi 23Gi
Swap: 976Mi 3.0Mi 973Mi
CPU family: 25
AMD Ryzen 5 5600G with Radeon Graphics
CPU MHz: 1397.031
CPU max MHz: 5000.6831
CPU min MHz: 1400.0000
BogoMIPS: 7784.71
CPU cache size: 512 KB
No brand USB OPTICAL MOUSE
Microsoft Corp. Microsoft Ergonomic Keyboard
Filesystem Size Used Avail Use% Mounted on
udev 14G 0 14G 0% /dev
tmpfs 2.8G 1.5M 2.8G 1% /run
/dev/nvme0n1p2 233G 22G 199G 10% /
tmpfs 14G 0 14G 0% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
/dev/nvme0n1p1 511M 3.5M 508M 1% /boot/efi
/dev/sdb1 3.6T 93G 3.4T 3% /fourTB
/dev/sda1 5.5T 2.3T 2.9T 45% /sixTB
tmpfs 2.8G 132K 2.8G 1% /run/user/1000
FWIW, in previous versions of gdb, I could always print a cosine or other math function.
OK, the above comment's solution worked once, and then quit working: cos(.333) aborted gdb again. Oh well... I'm wondering whether it's a gdb problem, a Debian problem, or whether my machine's hardware is simply weird. I also neglected to include "install" in the above comment's command. The command should read:
apt-get install gdb gdb-doc build-essential devscripts

Why is my Linux system experiencing the "Log I/O Error Detected. Shutting down filesystem" problem?

system info:
[root@cpe ~]# uname -a
Linux cpe 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@cpe ~]# cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
[root@cpe ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/cl_cpe-root 34721216 34721196 20 100% /
devtmpfs 32843124 0 32843124 0% /dev
tmpfs 32855080 0 32855080 0% /dev/shm
tmpfs 32855080 942644 31912436 3% /run
tmpfs 32855080 0 32855080 0% /sys/fs/cgroup
/dev/mapper/cl_cpe-home 16947200 32944 16914256 1% /home
/dev/sda1 1038336 85484 952852 9% /boot
tmpfs 6571016 0 6571016 0% /run/user/0
[root@cpe ~]# mount | grep root
/dev/mapper/cl_cpe-root on / type xfs (rw,relatime,attr2,inode64,noquota)
problem:
While the system is running, errors like the ones below are logged, and after that no commands can be run.
XFS (dm-0): metadata I/O error: block 0x2128c70 ("xlog_iodone") error 5 numblks 64
XFS (dm-0): Log I/O Error Detected. Shutting down filesystem
XFS (dm-0): Please umount the filesystem and rectify the problem(s)
XFS (dm-0): metadata I/O error: block 0x2128c7f ("xlog_iodone") error 5 numblks 64
XFS (dm-0): metadata I/O error: block 0x2128c82 ("xlog_iodone") error 5 numblks 64
Can anyone help me analyze or locate the problem? Thanks in advance.
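Two things stand out in the output above. First, "error 5" is EIO: the block layer reported a hardware-level I/O error on the device under dm-0, so the kernel log and the disk's SMART status are the usual first places to look (smartctl assumes smartmontools is installed). Second, / is 100% full (only 20 KB available), which causes its own failures. A small awk filter over POSIX `df -P` output flags near-full filesystems (a generic sketch, not specific to XFS):

```shell
# Print any mount at or above 99% use, parsing POSIX `df -P` output
df -P | awk 'NR > 1 && $5 + 0 >= 99 { print $6 " is " $5 " full" }'
```

If SMART reports a failing disk, no filesystem-level fix will hold; if the disk is healthy, freeing space on / and then running xfs_repair on the unmounted filesystem is the conventional next step.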

How to restrict MongoDB's disk usage on a 32-bit architecture system? I am using MongoDB version 2.4

I am using mongodb version 2.4 due to 32 bit system limitation (nanopi m1 plus). I have a debian jessie OS image (Debian 8) with 4.2 GB space available (emmc). I have a total of about 2.2 GB available after loading my application files. However, my flash gets filled up quickly to 100% after I start running my application.
Then I get the error "Unable to get database instance and mongodb stopped working", and my application stops working.
Can someone please help me with this problem. Thanks in advance!
Disk status of my device when it stopped working:
df -h:
Filesystem Size Used Avail Use% Mounted on
overlay 4.2G 4.2G 0 100% /
du -shx /var/lib/mongodb/* | sort -rh | head -n 20
512M /var/lib/mongodb/xyz.6
512M /var/lib/mongodb/xyz.5
257M /var/lib/mongodb/xyz.4
128M /var/lib/mongodb/xyz.3
64M /var/lib/mongodb/xyz.2
32M /var/lib/mongodb/xyz.1
17M /var/lib/mongodb/xyz.ns
17M /var/lib/mongodb/xyz.0
16M /var/lib/mongodb/local.ns
16M /var/lib/mongodb/local.0
4.0K /var/lib/mongodb/journal
0 /var/lib/mongodb/mongodb.lock
du -shx /var/lib/mongodb/journal/* | sort -rh | head -n 20
257M /var/lib/mongodb/journal/prealloc.2
257M /var/lib/mongodb/journal/prealloc.1
257M /var/lib/mongodb/journal/prealloc.0
du -shx /var/lib/mongodb/mongodb.log* | sort -rh | head -n 20
399M /var/lib/mongodb/mongodb.log
353M /var/lib/mongodb/mongodb.log.1
3.3M /var/lib/mongodb/mongodb.log.2.gz
752K /var/lib/mongodb/mongodb.log.1.gz
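Most of the space in the listing above is taken by preallocated data files (the doubling xyz.1 … xyz.6 series) and by the ~770 MB of journal prealloc files, which mongod 2.4 reserves up front. Two classic 2.4-era settings shrink that footprint, at the cost of crash recovery in the nojournal case (a sketch of /etc/mongodb.conf entries; verify against the 2.4 documentation before relying on them):

```ini
# /etc/mongodb.conf (mongod 2.4, INI-style config)
smallfiles = true   # smaller data-file allocation quanta, smaller journal prealloc
nojournal = true    # no journal prealloc files at all (trades off crash recovery)
logpath = /var/log/mongodb/mongodb.log
logappend = true
```

Separately, the ~750 MB of plain-text logs can be reclaimed by rotating them — db.runCommand({ logRotate: 1 }) from the mongo shell — and compressing or deleting the rotated files, ideally via a logrotate entry so it happens automatically.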

Lando db-import is taking very long time to import database

I am using lando (v3.0.0.rc-14) along with Docker CE (latest version) for my Drupal 7 site. I am trying to import a database that is ~4 GB uncompressed (~900 MB compressed) using one of the following commands:
lando db-import db-name db-filename.sql
lando db-import db-name db-filename.sql.gz
But this is not working well: the import usually takes more than 24 hours to complete. Is this a problem with my version or with my settings in the .lando.yml file?
My CPU and Mem usage stats below:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28334 1001 20 0 1435088 568912 15892 S 2.3 3.5 4:27.81 mysqld
3131 usersf 20 0 306928 9320 7172 S 1.7 0.1 0:38.15 gvfs-udisks2-vo
2095 root 20 0 2434900 96804 39620 S 1.3 0.6 0:34.19 dockerd
Try these instructions
Disable the write barrier for the ext4 filesystem:
$ sudo gedit /etc/fstab
Here you’ll see something like this:
UUID=700a2404-f687-4ae2-a2d5-54291553551e / ext4 errors=remount-ro 0 1
So you just need to add barrier=0:
UUID=700a2404-f687-4ae2-a2d5-54291553551e / ext4 errors=remount-ro,barrier=0 0 1
Reboot your system. (Note that disabling write barriers trades crash safety for speed.)
Or try any of the other suggestions.
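Another avenue worth trying, assuming the Drupal recipe exposes lando's mysql tooling command: stream the dump straight into the database service, bypassing db-import's wrapper. The command and database names below are taken from the question and may need adjusting:

```shell
# Hypothetical alternative import path: decompress on the host and pipe
# the SQL directly into the database container's mysql client
zcat db-filename.sql.gz | lando mysql db-name
```

If that is also slow, the bottleneck is likely MySQL itself inside the container rather than lando, and server-side import tuning is the place to look.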

Unit Testing (assert.h) on Beaglebone Black (ARM) with Linux Headers installed on SD Card

Ok so here it goes:
I'm developing a DMA Kernel Driver on the Beaglebone Black (ARM Cortex-A8) - currently my file system looks like this (important for the question):
/dev/mmcblk1p2 1.7G 1.1G 511M 69% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 247M 4.0K 247M 1% /dev
tmpfs 50M 224K 50M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 248M 0 248M 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/mmcblk1p1 71M 20M 52M 28% /boot/uboot
/dev/mmcblk0p1 3.6G 571M 2.8G 17% /media/microsd
rootfs and /boot are sitting on the eMMC NAND flash memory chip
I'm mounting /media/microsd to give myself an extra ~4 GB of space
My driver code base is sitting in __/home/user/__
The Linux headers were too big to install on __rootfs__ (NAND flash), so I wrote a little script that installs them to the __/media/microsd__ filesystem and symbolically links __/lib/modules/3.8.13-bone28/build__ to __/media/microsd/usr/src/linux-3.8.13-bone28__. Then, in my makefile, I run __make -C /lib/modules/3.8.13-bone28/build M=$(PWD) modules__ so that the driver is built where the Linux headers live (/media/microsd ...), and I can include them easily within my code with #include <linux/whatever.h>
Code reference: GitHub - Mighty_DMA
My issue comes when trying to build unit tests using the #include <assert.h> header file, which lives in /usr/include. Since my Makefile uses the -C flag to change straight into the SD-card directory (to access the Linux headers and build there), make looks for assert.h in /media/microsd/usr/include instead of /usr/include/.
What is the best way to build unit tests using either Check (check.h) or assert (assert.h), when I cannot include them in my code because of the split between the filesystems living on NAND flash and the SD card?
I have tried modifying Autotools and Makefiles to include the directory path /usr/include/, but because of the -C flag it becomes relative. I tried giving the direct path, #include </usr/include/assert.h>, but that doesn't solve the problem recursively: it starts erroring about the header files that assert.h itself includes, and so on.
Thank you in advance for your help, I really don't know what the best route to take is here.
<3,
-q
