I cannot run the "show databases;" command in the Hive terminal

When I write
> show databases;
in Hive, I get the following error:
FAILED: SemanticException org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
Can you please provide a solution for this?

Run this command from the Hive directory:
bin/schematool -initSchema -dbType derby
Also, make sure the Hadoop services are started by running:
start-all.sh
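Putting the two steps together, a minimal sketch (assuming $HIVE_HOME points at your Hive installation and the Hadoop scripts are on the PATH):
start-all.sh                                 # start the Hadoop services
cd $HIVE_HOME
bin/schematool -initSchema -dbType derby     # initialise the Derby metastore schema
bin/hive -e "show databases;"                # verify that the metastore client can be instantiated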

It could be that the default warehouse location /user/hive/warehouse (set in hive-site.xml) has not been created properly or lacks the required permissions (please note this is /user, not /usr),
which may be the culprit if you are doing a manual setup!
1) You may first check hive-site.xml (located at $HIVE_HOME/conf, which in my case is /usr/local/hive/conf) if you want, but the value below is the default anyway.
2) Check whether the path exists in Hadoop: hadoop fs -ls /user/hive/warehouse
3) If it does not exist, create the folder with: hadoop fs -mkdir /user/hive/warehouse, and take a look at the access rights using hadoop fs -ls ...
4) Use hadoop fs -chmod g+w /user/... to grant the needed rights.
Either the user-vs-usr mix-up or the setup of the warehouse is a common cause (a consolidated command sketch follows the property reference below).
Reference (from hive-site.xml):
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>
Note: you also have to make sure that the HDFS folder /tmp is set up in the same way.
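As a consolidated sketch of steps 2) to 4) plus the /tmp note (assuming the hadoop client is on the PATH and your user may administer HDFS):
hadoop fs -ls /user/hive/warehouse          # check whether the warehouse directory exists
hadoop fs -mkdir -p /user/hive/warehouse    # create it (with parent directories) if it is missing
hadoop fs -chmod g+w /user/hive/warehouse   # grant group write access
hadoop fs -mkdir -p /tmp                    # the HDFS /tmp folder needs the same treatment
hadoop fs -chmod g+w /tmp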

Related

Solr Error: Unable to create core [mycore] Caused by solr.ICUCollationField

I am trying to create a Solr core. I am using DrupalVM with Vagrant and VirtualBox.
When setting up solr with this command:
sudo su - solr -c "/opt/solr/bin/solr create -c m4m -d /tmp/search_api_solr/solr-conf/7.x/"
I am getting this error:
INFO - 2018-11-05 19:21:45.804; org.apache.solr.util.configuration.SSLCredentialProviderFactory; Processing SSL Credential Provider chain: env;sysprop
ERROR: Error CREATEing SolrCore 'mycore': Unable to create core [mycore] Caused by: solr.ICUCollationField
Creating a core without specifying the -d <confdir> option is successful but gives me some really weird errors in the Solr dashboard and the Drupal UI, which research indicates have something to do with a corrupted core.
Any help with why I am getting this error would be much appreciated. Other developers using the same Vagrant installation are running without issue.
If you create the core without the config directory, Solr will use its default configuration.
Which, in turn, will have none of the field definitions Drupal needs, and so forth.
What you need to do, if you know a little bit about Solr's structure and you are on Solr version 7 or later, is:
Go to where your Solr installation is:
cd /PATH_TO_SOLR/server/solr-webapp/webapp/WEB-INF/lib
Copy all jars from the analysis-extras folder to your WEB-INF/lib folder:
cp /PATH_TO_SOLR/contrib/analysis-extras/lib/*.jar ./
Restart Solr the way you normally do, then create the core again, specifying your -d config directory. That's important.
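For example, on the DrupalVM setup from the question the whole sequence might look like this (the service-based restart is an assumption; restart Solr however you normally do):
cp /PATH_TO_SOLR/contrib/analysis-extras/lib/*.jar /PATH_TO_SOLR/server/solr-webapp/webapp/WEB-INF/lib/
sudo service solr restart
sudo su - solr -c "/opt/solr/bin/solr create -c m4m -d /tmp/search_api_solr/solr-conf/7.x/"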
Hope this helps.
OR...
Save yourself the hassle and let the pros handle all this for you with a SaaS such as https://opensolr.com
You can create your Solr index with one click, and you need two more clicks to upload your config files, and you're done.
I needed jars from two directories:
cd /PATH_TO_SOLR
cp solr/contrib/analysis-extras/lib/*.jar solr/server/solr-webapp/webapp/WEB-INF/lib/
cp solr/contrib/analysis-extras/lucene-libs/*.jar solr/server/solr-webapp/webapp/WEB-INF/lib/
see solr/contrib/analysis-extras/README.txt

KNIME Command Line Execution - ClassNotFoundException

I'd like to schedule a KNIME workflow. The workflow does its job very well as long as I start it from the KNIME GUI application. When I execute the same workflow via the command line, Java complains that com.microsoft.sqlserver.jdbc.SQLServerDriver
could not be found (ClassNotFoundException).
I invoke it via:
"D:\Progamme\KNIME\knime.exe" -nosplash -application -consoleLog org.knime.product.KNIME_BATCH_APPLICATION -preferences="absolutepathto\preferences.epf" -workflowDir="absolutepathto\workflow"
Since the error message points to missing content on the Java CLASSPATH, I also tried adding the parameters
-vmargs -classpath .;"absolutepathto/sqljdbc42.jar"
But I still get the same error from Java...
I also tried running the command from within knime.exe's directory, and adding the JAR file to Preferences -> Java -> Build Path -> Classpath Variable / User Libraries (referenced via the -preferences argument). But that had no effect.
Did anybody face the same problems? Maybe with other third party JARs?
It is all about a Database connector that is configured like this:
Does the integrated security maybe force a misleading error?
System spec: KNIME 3.2.2 on Windows Server 2008 R2
Update - extract from preferences file
/configuration/org.eclipse.core.net/org.eclipse.core.net.hasMigrated=true
/configuration/org.eclipse.ui.ide/MAX_RECENT_WORKSPACES=10
/configuration/org.eclipse.ui.ide/RECENT_WORKSPACES=<list of some workspaces>
/configuration/org.eclipse.ui.ide/RECENT_WORKSPACES_PROTOCOL=3
/configuration/org.eclipse.ui.ide/SHOW_RECENT_WORKSPACES=false
/configuration/org.eclipse.ui.ide/SHOW_WORKSPACE_SELECTION_DIALOG=true
Is there maybe a problem because it is a shared KNIME instance among several users, and the command line execution does not know which workspace to choose? Is the workspace needed at all, and why?
Partial Solution:
I finally managed it, but I don't know exactly why it works now. What I did was download a fresh portable version of KNIME and run the same commands, only changing the executable path to the new portable version. Before that I started the portable version once to set the workspace directory and register the database driver in the preferences dialog and the .ini file, nothing else; the configuration is otherwise the same as on the shared KNIME instance. What I am really wondering about is that from now on the commands also work with the shared KNIME instance. I really don't know what caused the change that let KNIME find the driver class.
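For illustration, the invocation against the portable copy was the same as before with only the executable path swapped (the portable path below is a made-up example):
"D:\KNIME-portable\knime.exe" -nosplash -application -consoleLog org.knime.product.KNIME_BATCH_APPLICATION -preferences="absolutepathto\preferences.epf" -workflowDir="absolutepathto\workflow"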
Info
Because I encountered a few more problems with the shared environment in KNIME command line mode, which led to nondeterministic execution results, I wrote a little .NET library. This gives me more flexibility/control over the workflow execution (which return codes and error messages occurred, and so on). You can find it here if you're interested: KnimeNet
I took a very minimal approach:
cd "C:\Program Files\KNIME"
.\knime -nosplash -noexit -consoleLog -reset -application org.knime.product.KNIME_BATCH_APPLICATION -workflowFile="D:\Work\Knime Workflows\Output\CMD_Test.knwf" -preferences="D:\Work\Knime Workflows\Output\CMD_Test.epf"

Setting up an ArangoDB cluster without DCOS

I'm working on setting up an ArangoDB cluster in an Ubuntu machine based on these instructions :
https://docs.arangodb.com/3.0/Manual/Deployment/Distributed.html
I keep getting the error below when I execute the first command from the documentation with sudo. I ensured that all the directories
pointed to in the /etc/arangod.conf file have the required permissions. Can you please let me know if I'm missing something here?
Below is the error I get:
2016-08-23T07:29:52Z [26629] FATAL unable to create database directory: Failed to create directory [agency1] Permission denied
The command passes the database directory on the command line (agency1) and arangodb doesn't seem to have rights to create agency1 in your current working directory.
Either provide a proper working directory on the command line or specify one in the config file.
You need to first change the directory to /var/lib/arangodb3 or whatever data directory you have set and then run the command.
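A minimal sketch of that, assuming the default data directory /var/lib/arangodb3 and that the daemon runs as the arangodb user (adjust both to your setup):
sudo mkdir -p /var/lib/arangodb3
sudo chown -R arangodb:arangodb /var/lib/arangodb3   # give the ArangoDB user write access
cd /var/lib/arangodb3
# now run the agency start command from the documentation with sudo; it can then create agency1 here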

Mesosphere installation PermissionError:/genconf/config.yaml

I got Mesosphere-EE and installed it on a Fedora 23 server (kernel 4.4) with:
$ bash dcos_generate_config.ee.sh --web -v
Then the output was:
Running mesosphere/dcos-genconf docker with BUILD_DIR set to /home/mesos-ee/genconf
Usage of loopback devices is strongly discouraged for production use.Either use `--storage-opt dm.thinpooldev` or use `--storage-opt
dm.no_warn_on_loop_devices=true` to suppress this warning.
07:53:46:: Logger set to DEBUG
07:53:46:: ====> Starting DCOS installer in web mode
07:53:46:: DCOS Installer v1
07:53:46:: Starting server ('0.0.0.0', 9000)
Then I start Firefox through VNC (the VNC session runs as root), and then:
07:53:57:: Root page requested.
07:53:57:: Serving /usr/local/lib/python3.4/site-packages/dcos_installer/templates/index.html
07:53:58:: Request for configuration type made.
07:53:58:: Configuration file not found, /genconf/config.yaml. Writing new one with all defaults.
07:53:58:: Error handling request
PermissionError: [Errno 13] Permission denied: '/genconf/config.yaml'
But I already have a genconf/config.yaml; it looks like:
bootstrap_url: http://<bootstrap_public_ip>:<your_port>
cluster_name: '<cluster-name>'
exhibitor_storage_backend: zookeeper
exhibitor_zk_hosts: <host1>:2181,<host2>:2181,<host3>:2181
exhibitor_zk_path: /dcos
master_discovery: static
master_list:
- <master-private-ip-1>
- <master-private-ip-2>
- <master-private-ip-3>
superuser_username: <username>
superuser_password_hash: <hashed-password>
resolvers:
- 8.8.8.8
- 8.8.4.4
I do not know what's going on. If you have any idea, please let me know. Thank you very much!
Disable SELinux!
Configure SELINUX=disabled in the /etc/selinux/config file and then reboot!
Make sure SELinux is disabled by running the getenforce command:
$ getenforce
Disabled
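One way to make that change (the sed edit is just a shortcut for editing the file by hand; setenforce 0 only switches to permissive mode until the reboot makes it fully disabled):
sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
sudo setenforce 0   # permissive immediately; fully disabled after the reboot
sudo reboot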
zhe.
Correctly installing the enterprise edition depends on the correct system prerequisites. Anyway, I suppose you're still on the bootstrap node, so I will give you a path to succeed in your current task.
Run the script as root, or as a user issuing sudo dcos_generate_config.ee.sh
The script will also generate the config file automatically; if you want to use your own configuration file, then create a folder named genconf and put it inside before running the script. You should replace the values inside <> with your specific configuration. If you need more help for your specific case send me an email to infofs2 at gmail.com
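As a quick sketch, assuming your own config.yaml sits next to the installer script:
mkdir -p genconf
cp config.yaml genconf/config.yaml      # your own config, with the <> placeholders filled in
sudo bash dcos_generate_config.ee.sh --web -v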

No start database manager command was issued. SQLSTATE=57019

I am new to DB2 and I have installed DB2 9.7.
I created an instance which is shown below
[sathish#oc3855733574 ~]$ db2ilist
sathish
The settings in /etc/services are shown below
DB2_sathish 60000/tcp
DB2_sathish_1 60001/tcp
DB2_sathish_2 60002/tcp
DB2_sathish_END 60003/tcp
DB2_TMINST 50000/tcp
But when I start it using 'db2start', it throws the following error:
07/31/2015 10:26:20 0 0 SQL1042C An unexpected system error occurred.
SQL1032N No start database manager command was issued. SQLSTATE=57019
I installed DB2 as 'root' and am starting DB2 from the instance user (sathish in this case).
Any help or URL link will be of great use
Thanks
Sathish Kumar
I had a look into the db2diag.log file and found an unusual hack on one of the websites.
I followed the steps mentioned below and it worked:
a) db2trc on -f db2trace.out
b) db2start
c) db2trc off
This problem generally occurs if you have recently changed the password of the account that owns the DB2 instance. What you need to do is go to Services -> Properties of the DB2 instance service -> and then, in the Log On tab, select the Local System account.
This looks like something is wrong with the installation. There should be some hints on what DB2 ran into in the db2diag.log file (look under ~/sqllib/db2dump/db2diag.log).
What you could do if the db2diag.log does not provide a clue is to verify your installation is correct. DB2 includes a tool for that named "db2val". Here is the link to the documentation of db2val for version 9.7. Just run "db2val" as the instance owner and check the output.
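A quick sketch of both checks, using the instance owner from the question (sathish):
su - sathish                              # switch to the instance owner
db2val                                    # validate the DB2 installation
tail -50 ~/sqllib/db2dump/db2diag.log     # look for hints logged around the failed db2start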
Try
sudo -i -u db2inst1 /database/config/db2inst1/sqllib/adm/db2start
For more information
https://dba.stackexchange.com/questions/49807/sql1641n-error-on-linux-while-running-db2start-using-db2-express-c-on-linux-luw
