I created a new zpool on my disk with the zpool create command.
Shortly after that, I read that you should specify pool disks by /dev/disk/by-id/ rather than by a plain device name such as /dev/sda.
I didn't do this for my pool, and now I have a problem:
Because of a new disk, the device names of all existing disks changed.
The pool was on /dev/sdb; that disk is now located at /dev/sdc.
ZFS doesn't realize this and still tries to access the existing pool at /dev/sdb, but it isn't there.
While searching the web, I found a way to import unmounted zpools: zpool import.
But if I try to import the existing, unavailable pool with zpool import dte ("dte" is the name of the pool), I get the following error:
ms#linuxServer:/# sudo zpool import dte
cannot import 'dte': pool may be in use from other system
use '-f' to import anyway
If I try with the -f option, I get the following error:
ms#linuxServer:/# sudo zpool import dte -f
cannot import 'dte': one or more devices is currently unavailable
So it really does try to use /dev/sdb, but that name now belongs to another disk.
If I just use zpool import it shows me the following:
ms#linuxServer:/# sudo zpool import
  pool: dte
    id: 12561099924127384920
 state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: http://zfsonlinux.org/msg/ZFS-8000-5E
config:

        dte                                 FAULTED   corrupted data
          ata-TOSHIBA_DT01ACA300_X3N87RPGS  UNAVAIL   corrupted data
Does anyone know how I can tell the zpool command that the pool "dte" is located at /dev/sdc and not at /dev/sdb?
I haven't found a usable solution yet, just these two links, which didn't really help:
Google Groups
whirlpool.net
Your zpool import command string should be: sudo zpool import -f dte
Following that, you should be able to zpool clear.
Symlinking worked for me, which at least allowed me to "clear" the pool, but it is so frustrating not knowing how to re-assign device paths! All I have found so far is: never use /dev/* identifiers for drives you intend to use with ZFS; use something you can easily edit, as in /etc/fstab.
Why not symlink the old device name to the disk's new location?
ln -s /dev/sdc /dev/sdb
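The more standard fix, for the record, is to let ZFS rescan a directory of stable device links instead of the cached /dev/sdX paths. A minimal sketch, assuming the pool is not currently imported:

# Re-scan devices by their stable IDs and import the pool from there
sudo zpool import -d /dev/disk/by-id dte

# If the pool ever ends up imported under plain sdX names again, you can
# switch it over by exporting and re-importing by ID:
sudo zpool export dte
sudo zpool import -d /dev/disk/by-id dte

After that, zpool status should list the disk by its ata-TOSHIBA_... ID, and future device-name reshuffles won't matter.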
I recently upgraded to Lubuntu 22.04, and it wanted a few things installed from the Snap repository; Firefox was one of them. Currently I'm using Selenium 4.1.3, Python 3.10, and Firefox 99.0.1 with the latest geckodriver, 0.31.0.
I've been using this Python 3 code for my testing for some time, but now it completely fails to start.
First off, it failed to find a profile, so I forced one in there:
from selenium import webdriver
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.select import Select

options = Options()
options.add_argument("-profile /path2temp/")  # create profile
options.set_preference("browser.download.folderList", 2)
options.set_preference("browser.download.manager.showWhenStarting", False)
options.set_preference("browser.download.dir", "./data_export")
options.set_preference(
    "browser.helperApps.neverAsk.saveToDisk",
    "application/vnd.google-earth.kml+xml,application/x-zip-compressed,application/gpx+xml,text/csv"
)
options.set_preference("devtools.debugger.remote-enabled", True)
options.set_preference("devtools.debugger.prompt-connection", False)

browser = webdriver.Firefox(options=options, executable_path=r"/usr/bin/geckodriver")
url = 'https://cnn.com'
browser.get(url)
If Firefox is already open, it fails to communicate with it. In the past it would just open a new tab and start working, but now I get this error:
Firefox is already running, but is not responding. To use Firefox, you
must first close the existing Firefox process, restart your device, or
use a different profile.
If I let it launch the application, it eventually times out with the following output (note: /path2temp/ is a real path to a directory where it has permissions).
1651528082918   geckodriver        INFO   Listening on 127.0.0.1:54985
1651528083062   mozrunner::runner  INFO   Running command: "/snap/bin/firefox" "--marionette" "-profile /path2temp/" "--remote-debugging-port" "47927" "--remote-allow-hosts" "localhost" "-no-remote"
ATTENTION: default value of option mesa_glthread overridden by environment.
ATTENTION: default value of option mesa_glthread overridden by environment.
ATTENTION: default value of option mesa_glthread overridden by environment.
ATTENTION: default value of option mesa_glthread overridden by environment.
DevTools listening on ws://localhost:47927/devtools/browser/19a59834-6a4b-4d75-902c-06c36704d50e
Exiting due to channel error.
Exiting due to channel error.
Exiting due to channel error.
Exiting due to channel error.
Exiting due to channel error.
Any ideas of what I could do to fix this problem?
Edit: I was at least able to get it to work when it launches Firefox, by pointing it at the current user's profile inside the Snap file structure: /home/username/snap/firefox/common/.mozilla/firefox/wnrrbapq.default-release
But this isn't ideal behavior, as I have to close the browser every time for testing.
The Snap version of Firefox cannot write to the /tmp location.
The recommended workaround is to set the TMPDIR environment variable to a location that both geckodriver and Firefox can write to, e.g. $HOME/tmp.
More info is in the geckodriver 0.31.0 release notes and the related GitHub issue.
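In practice the workaround is just a couple of shell lines before launching the tests (the script name below is only an example):

# Give geckodriver and the snapped Firefox a tmp dir they can both write to
mkdir -p "$HOME/tmp"
export TMPDIR="$HOME/tmp"
python3 run_tests.py  # your Selenium script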
One solution is to not set up a profile at all and instead define the file paths for both Firefox and geckodriver.
Since Snap updates constantly (and the versioned paths change with it), I found it helpful to make my file paths dynamic. Here is some sample code:
from selenium import webdriver
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.firefox.service import Service
from subprocess import getoutput

options = Options()
options.binary_location = getoutput("find /snap/firefox -name firefox").split("\n")[-1]
driver = webdriver.Firefox(
    service=Service(executable_path=getoutput("find /snap/firefox -name geckodriver").split("\n")[-1]),
    options=options,
)
url = 'https://cnn.com'
driver.get(url)
You can use getoutput to get the list of matching files within /snap/firefox as one long string. Splitting it on \n and taking the [-1] (last) record gives you the latest version of that file.
It's possible you don't always want the latest version, so you could pick the first record in the list instead if you need consistency. Though it is nice that Snap updates both geckodriver and Firefox together, so they should always work with each other.
There is a mysterious folder being shared on an internal server with Ubuntu 18.04.2 LTS.
There is nothing pointing to that folder in the /etc/samba/smb.conf file. What happened is that this sharing configuration was made before I had access to the server, and the person who made it curiously does not remember how they did it.
How can I discover how that share was created?
# To access your network share
sudo apt-get install smbclient
# List all shares:
smbclient -L //<HOST_IP_OR_NAME> -U <user>
# connect:
smbclient //<HOST_IP_OR_NAME>/<folder_name> -U <user>
To access your network share, use your username (<user_name>) and password via the path smb://<HOST_IP_OR_NAME>/<folder_name>/ (Linux users) or \\<HOST_IP_OR_NAME>\<folder_name> (Windows users). Note that <folder_name> is the share name you declared in square brackets ("[...]") in /etc/samba/smb.conf.
Note: the default Samba workgroup is "WORKGROUP".
I hope this helps.
Might as well put this out there in the event that someone else comes across it. I also had a mysterious samba share that was not listed in the smb.conf.
It turns out I had a symbolic link that was either shared before or pointed to a folder that was shared. Deleting the symlink got rid of the samba share after a smbd service restart. Restoring the symlink from the trash caused the samba share to come back. Very mysterious indeed.
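If you are trying to track down a share like this yourself, a few commands can help narrow it down (the share root below is just an example path):

# Show the shares smbd is actually serving right now
sudo smbstatus --shares

# List user-defined shares created outside smb.conf (e.g. via a file manager)
net usershare list

# Hunt for symlinks inside the exported tree
find /srv/samba -type l -ls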
I can start Neo4j without issues using the default settings.
Now I'm trying to keep my Neo4j data in a folder on an external hard drive, because I don't have enough space on my local machine for an import.
So in my neo4j.conf I have set: dbms.directories.data=/Volumes/INTENSO/richRich/neo4j-data. Then on neo4j start a databases folder gets created, but the database shuts down immediately after starting. This is what seemed useful from the debug.log stack trace:
...
2018-07-22 11:25:55.745+0000 INFO [o.n.k.i.DiagnosticsManager] --- INITIALIZED diagnostics END ---
2018-07-22 11:25:55.912+0000 INFO [o.n.b.BoltKernelExtension] Bolt Server extension loaded.
2018-07-22 11:25:56.069+0000 INFO [o.n.k.i.s.f.RecordFormatSelector] Selected format 'RecordFormat:StandardV3_4[v0.A.9]' for the new store
2018-07-22 11:25:56.118+0000 WARN [o.n.k.NeoStoreDataSource] Exception occurred while setting up store modules. Attempting to close things down. Unable to open store file: /Volumes/INTENSO/richRich/neo4j-data/databases/graph.db/neostore.nodestore.db.labels
org.neo4j.kernel.impl.store.UnderlyingStorageException: Unable to open store file: /Volumes/INTENSO/richRich/neo4j-data/databases/graph.db/neostore.nodestore.db.labels
...
Caused by: org.neo4j.io.pagecache.impl.FileLockException: Already locked: /Volumes/INTENSO/richRich/neo4j-data/databases/graph.db/neostore.nodestore.db.labels
...
I checked /Volumes/INTENSO/richRich/neo4j-data/databases/graph.db/neostore.nodestore.db.labels, and the file is there. I also ran chmod -R 777 on everything to rule out permission issues.
OS: latest macOS
Neo4j: 3.4.4 Community Edition
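One thing worth ruling out first (a debugging sketch, not a confirmed fix: an "Already locked" FileLockException often just means another process still holds the store lock):

# Is a stray Neo4j/Java process still running?
ps aux | grep -i neo4j

# On macOS, which process has the store file open? (path taken from the log)
sudo lsof /Volumes/INTENSO/richRich/neo4j-data/databases/graph.db/neostore.nodestore.db.labels

If nothing holds the lock, the external drive's filesystem would be the next suspect, since file locking can behave differently on non-native filesystems.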
When I write
> show databases;
in Hive, I get the following error:
FAILED: SemanticException org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
Can you please provide a solution for this?
Run this command from the Hive directory:
bin/schematool -initSchema -dbType derby
Also, make sure the Hadoop services are started before you run it:
start-all.sh
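To sanity-check both steps, something like this should work (Derby assumed, as above):

# Verify the metastore schema was initialized
bin/schematool -info -dbType derby

# Confirm the Hadoop daemons are actually up
jps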
It could be that the default warehouse directory /user/hive/warehouse (set in hive-site.xml) was not properly created or granted permissions (please note this is /user, not /usr),
which may be the culprit if you are doing a manual setup!
1) You may first check hive-site.xml (located at $HIVE_HOME/conf, in my case /usr/local/hive/conf) if you want, though it just holds that default value anyway.
2) Check whether the path exists in Hadoop: hadoop fs -ls /user/hive/warehouse
3) If it doesn't exist, create the folder with: hadoop fs -mkdir -p /user/hive/warehouse, and take a look at the access rights using hadoop fs -ls ...
4) Grant the needed rights with: hadoop fs -chmod g+w /user/hive/warehouse
Either the /user-vs-/usr mix-up or the setup of the warehouse could be a common cause.
Reference (from hive-site.xml):
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>
Note: you also have to make sure another Hadoop folder, /tmp, is set up properly in the same way.
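For example, following the same pattern as the warehouse directory:

hadoop fs -mkdir -p /tmp
hadoop fs -chmod g+w /tmp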
I am trying to mount a folder, workspace, from the server to a client over NFS. For this I bind the folder into /export by adding the following to /etc/fstab on the server:
/home /export none bind
Then I add the following lines in my /etc/exports on my server:
/export *(ro,sync,no_subtree_check,insecure,fsid=0)
/export/workspace *(rw,sync,no_subtree_check,insecure,nohide)
I re-export the shares and restart the nfs-kernel-server:
exportfs -vr
service nfs-kernel-server restart
I now go to my client and check which folders are exported:
showmount -e 192.168.145.131
Export list for 192.168.145.131:
/export/workspace *
/export *
But when I try mounting the folder, I get the following error:
sudo mount -t nfs4 192.168.145.131:/workspace nfs/ -v
mount.nfs4: timeout set for Sat Apr 19 19:16:51 2014
mount.nfs4: trying text-based options 'addr=192.168.145.131,clientaddr=192.168.145.128'
mount.nfs4: mount(2): No such device
mount.nfs4: No such device
I have also tried mounting :/export/workspace and :/home/workspace, but that gives me the same error. I tried loading the nfs module with modprobe, but it is already loaded on both client and server.
Any help would be much appreciated. Thanks.
Solved the problem after 3 days!
I tried mounting the NFSv4 server folder from a client with a newer kernel version (3.8), and that worked. So I copied the kernel configuration file /boot/config-3.8-generic to /usr/src/.config and enabled the options Filesystems -> Network File Systems -> NFS3 client and NFS4 client, both as modules.
Then I compiled my kernel again, created the initrd image, updated GRUB, and now I am able to mount the server folder from my 2.6-kernel client as well!
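For anyone repeating this, the rebuild looked roughly like the following (a sketch; config file names, the kernel version string, and make targets vary by distribution):

# Start from the known-good config of the 3.8 kernel
cp /boot/config-3.8-generic /usr/src/.config
cd /usr/src

# Enable File systems -> Network File Systems -> NFS client (v3/v4) as modules
make menuconfig

# Build and install the kernel and its modules
make -j"$(nproc)"
sudo make modules_install install

# Regenerate the initrd and the boot menu (version string is illustrative)
sudo update-initramfs -c -k 2.6-custom
sudo update-grub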