I was trying to install Vertica using the /opt/vertica/sbin/install_vertica script with the following command:
/opt/vertica/sbin/install_vertica -s vertica001,vertica002,vertica003 -r /root/packages/vertica-6.0.1-7.x86_64.RHEL5.rpm
I am getting the following error:
Vertica Analytic Database 6.0.1-7 Installation Tool
Starting installation tasks...
Getting system information for cluster (this may take a while)....
'failed to login to 172.16.10.212: EOF ERROR: Could not login with SSH. Here is what SSH said: (publickey,gssapi-keyex,gssapi-with-mic).\r\r\n'
Updating Nodes that are UP
'failed to login to 172.16.10.18: EOF ERROR: Could not login with SSH. Here is what SSH said: (publickey,gssapi-keyex,gssapi-with-mic).\r\r\n'
Updating Nodes that are UP
Removing 172.16.10.212,172.16.10.18 from hosts list
backing up admintools.conf on 172.16.10.52
Info: the package 'pstack' is useful during troubleshooting. Vertica recommends this package is installed.
Checking/fixing OS parameters.....
Error: No JSON object could be decoded
Traceback (most recent call last):
File "/opt/vertica/bin/verticaInstall.py", line 1187, in <module>
if not SSH.check_min_free_kbytes(installerSSH, fix=True):
File "/opt/vertica/oss/python/lib/python2.7/site-packages/vertica/network/SSH.py", line 2388, in check_min_free_kbytes
data =json.loads( ''.join(res[host][1]))
File "/opt/vertica/oss/python/lib/python2.7/json/__init__.py", line 310, in loads
return _default_decoder.decode(s)
File "/opt/vertica/oss/python/lib/python2.7/json/decoder.py", line 346, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/opt/vertica/oss/python/lib/python2.7/json/decoder.py", line 364, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
I have set up passwordless SSH for dbadmin, and it also has sudo access.
Do I need passwordless SSH for the root user here? What am I missing?
Yes. The installer runs on the nodes specified in the hosts parameter (-s). If SSH is not set up between the nodes, how else is the installer supposed to complete the process? See Configuring the Network and Enable Secure Shell (SSH) Logins in the documentation.
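As a minimal sketch, assuming the hostnames from the command above, passwordless root SSH can be set up from the node where the installer runs like this:

# Generate a key for root if one does not exist yet
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa

# Push the public key to every node (including this one) and verify
for host in vertica001 vertica002 vertica003; do
    ssh-copy-id root@$host
    ssh root@$host hostname   # should return without a password prompt
done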
Q: Why are you installing Vertica 6.0? The latest release is 7.2
Related
I am facing a problem logging in to SnowSQL and getting the issue below.
When I run the line below on the command line:
(base) user@MacBook-Air MacOS % snowsql <snowflake ID> -u <user ID>
I get the response below:
We were unable to create or write to the ../snowsql_rt.log_bootstrap. Make sure you have permission to write to the log file's parent folder or to modify the location of the log file specified in the SnowSQL log_file configuration option. See docs: https://docs.snowflake.com/en/user-guide/snowsql-config.html#log-file
Observed error: [Errno 1] Operation not permitted: '/Applications/SnowSQL.app/Contents/snowsql_rt.log_bootstrap'
We were unable to create or write to the ../snowsql_rt.log. Make sure you have permission to write to the log file's parent folder or to modify the location of the log file specified in the SnowSQL log_file configuration option. See docs: https://docs.snowflake.com/en/user-guide/snowsql-config.html#log-file
Observed error: [Errno 1] Operation not permitted: '/Applications/SnowSQL.app/Contents/snowsql_rt.log'
Password:
250001 (n/a): Could not connect to Snowflake backend after 0 attempt(s).Aborting
If the error message is unclear, enable logging using -o log_level=DEBUG and see the log to find out the cause. Contact support for further help.
Goodbye!
Should be able to log in to the Snowflake account.
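The two warnings at the top point at the fix the linked docs describe: the default log file lives inside the non-writable /Applications/SnowSQL.app bundle, so the log_file option needs to point somewhere writable. A minimal sketch, per invocation or permanently:

snowsql <snowflake ID> -u <user ID> -o log_file=~/.snowsql/snowsql_rt.log

# or permanently, in the [options] section of ~/.snowsql/config:
# log_file = ~/.snowsql/snowsql_rt.log

The 250001 connection failure that follows is a separate issue; the output's own suggestion of -o log_level=DEBUG is the way to dig into it.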
I am trying to write a simple script to connect to a router via SSH from Windows. I use subprocess and it seems to work, but I do not know how the script can pass the password.
paramiko always gives me this error:
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\cryptography\x509\general_name.py", line 16, in
ipaddress.IPv4Address,
AttributeError: module 'ipaddress' has no attribute 'IPv4Address'
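For what it's worth, the AttributeError usually means something on the import path is shadowing the standard-library ipaddress module (a stray local file named ipaddress.py, for instance), since the real module does define IPv4Address. Once paramiko imports cleanly, a minimal sketch of passing the password from the script (host and credentials are placeholders) is:

import paramiko

client = paramiko.SSHClient()
# Accept the router's host key on first connect (fine for a lab script)
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# The password is passed directly to connect(); no interactive prompt needed
client.connect("192.168.1.1", username="admin", password="secret", look_for_keys=False)

stdin, stdout, stderr = client.exec_command("show version")
print(stdout.read().decode())
client.close()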
I am getting quite confused about how to approach connecting to SQL Server from a Python Lambda function. At this point, I am just trying to connect to my SQL Server (Azure) instance from the recommended AMI image.
The first and most popular approach seems to be pymssql. Within that, it seems like you can go with the bundled FreeTDS build (export PYMSSQL_BUILD_WITH_BUNDLED_FREETDS=1) or install FreeTDS on your system.
I have tried both and always ended up with the following error:
>>> pymssql.connect('myserver.database.windows.net', 'myuser@myserver.database.windows.net', 'mypass', 'mydb')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pymssql.pyx", line 641, in pymssql.connect (pymssql.c:10824)
pymssql.OperationalError: (20002, 'DB-Lib error message 20002, severity 9:\nAdaptive Server connection failed (myserver.database.windows.net:1433)\n')
Note that if I install FreeTDS on the system and attempt to connect with tsql, I have no problems at all.
I can provide more info on exactly how my installs are set up if someone can help me with this approach. Alternatively, I am happy to go the pyodbc route but would need some help in that direction, too.
Update:
I tried going the pyodbc route and managed to get the drivers installed following this guide. Will update on future progress.
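In case it helps, a minimal pyodbc sketch for Azure SQL could look like the following. The server, database, and credentials are placeholders, the DRIVER name must match whatever the guide registered in odbcinst.ini (FreeTDS is assumed here), and Azure expects the user@servername login form:

import pyodbc

conn = pyodbc.connect(
    "DRIVER={FreeTDS};"                       # must match odbcinst.ini
    "SERVER=myserver.database.windows.net;"
    "PORT=1433;"
    "DATABASE=mydb;"
    "UID=myuser@myserver;"                    # Azure wants user@servername
    "PWD=mypass;"
    "TDS_Version=7.2;"                        # Azure needs TDS 7.1 or newer
)
cursor = conn.cursor()
cursor.execute("SELECT 1")
print(cursor.fetchone())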
I have a Flask application deployed on an Amazon Elastic Beanstalk cluster. On my local machine (macOS), I've added an integration with the Google Cloud API and updated my requirements.txt to include the line google-cloud==0.27.0. When I deploy to Elastic Beanstalk with the updated requirements file, my deployment fails during pip install with the following error:
Running setup.py install for grpcio
Complete output from command /opt/python/run/venv/bin/python3.4 -c "import setuptools, tokenize;__file__='/tmp/pip-build-ve1vz0tx/grpcio/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-aszzosux-record/install-record.txt --single-version-externally-managed --compile --install-headers /opt/python/run/venv/include/site/python3.4/grpcio:
Failed to import the site module
Traceback (most recent call last):
File "/opt/python/run/venv/lib64/python3.4/site.py", line 890, in <module>
main()
File "/opt/python/run/venv/lib64/python3.4/site.py", line 848, in main
virtualenv_search_paths(sys.prefix)
File "/opt/python/run/venv/lib64/python3.4/site.py", line 638, in virtualenv_search_paths
addsitedir(sitedir, known_paths)
File "/opt/python/run/venv/lib64/python3.4/site.py", line 204, in addsitedir
addpackage(sitedir, name, known_paths)
File "/opt/python/run/venv/lib64/python3.4/site.py", line 173, in addpackage
exec(line)
File "<string>", line 1, in <module>
KeyError: 'google'
I am able to install my requirements locally in a virtualenv running Python 3; however, when I create a similar virtualenv on my EC2 instance and install the requirements, I get the same error I get during deployment. One thing I have read is that the EC2 instance might not have the Google Cloud SDK installed; however, I installed it (tested both inside and outside of a virtualenv) using the following commands, as described here:
curl https://sdk.cloud.google.com | bash
exec -l $SHELL
gcloud init
How can I diagnose this error and prevent it from happening going forward?
My current hypotheses are:
- there is still an issue with the way the Google Cloud SDK is installed or operating on the EC2 instance
- there is some conflict between requirements in my requirements.txt file once I add the google-cloud requirement
I've identified and fixed the problem. I had google==1.9.2 as a package in my requirements.txt, and it wasn't playing well with google-cloud==0.27.0. I'm not sure why this occurred, though.
Note: when deploying to Elastic Beanstalk, I had to rebuild the environment for the change to take effect. It seems Elastic Beanstalk reuses the Python virtualenv across deploys, so if a server had ever run a version of my application with google==1.9.2 in the requirements, that previously installed version of google would interfere with future deploys that excluded it.
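As a sketch of the fix, assuming nothing in the codebase imports the standalone google package directly, the requirements.txt simply shrinks to:

google-cloud==0.27.0

Any environment that already holds the stale package can be cleaned with pip uninstall -y google; on Elastic Beanstalk, rebuilding the environment (as above) achieves the same thing.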
I got Mesosphere-EE and installed it on a Fedora 23 server (kernel 4.4) with:
$ bash dcos_generate_config.ee.sh --web -v
The output was:
Running mesosphere/dcos-genconf docker with BUILD_DIR set to /home/mesos-ee/genconf
Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
07:53:46:: Logger set to DEBUG
07:53:46:: ====> Starting DCOS installer in web mode
07:53:46:: DCOS Installer v1
07:53:46:: Starting server ('0.0.0.0', 9000)
Then I started Firefox through VNC (the VNC session runs as root), and saw:
07:53:57:: Root page requested.
07:53:57:: Serving /usr/local/lib/python3.4/site-packages/dcos_installer/templates/index.html
07:53:58:: Request for configuration type made.
07:53:58:: Configuration file not found, /genconf/config.yaml. Writing new one with all defaults.
07:53:58:: Error handling request
PermissionError: [Errno 13] Permission denied: '/genconf/config.yaml'
But I already have a genconf/config.yaml; it looks like this:
bootstrap_url: http://<bootstrap_public_ip>:<your_port>
cluster_name: '<cluster-name>'
exhibitor_storage_backend: zookeeper
exhibitor_zk_hosts: <host1>:2181,<host2>:2181,<host3>:2181
exhibitor_zk_path: /dcos
master_discovery: static
master_list:
- <master-private-ip-1>
- <master-private-ip-2>
- <master-private-ip-3>
superuser_username: <username>
superuser_password_hash: <hashed-password>
resolvers:
- 8.8.8.8
- 8.8.4.4
I do not know what's going on. If you have any idea, please let me know. Thank you very much!
Disable SELinux!
Configure SELINUX=disabled in the /etc/selinux/config file and then reboot!
Make sure SELinux is disabled with the getenforce command:
$ getenforce
Disabled
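As a sketch of the edit described above (run as root; GNU sed is standard on Fedora):

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
reboot

# after the reboot:
getenforce   # should print "Disabled"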
Correctly installing the enterprise edition depends on having the correct system prerequisites. Anyway, I suppose you're still on the bootstrap node, so I will give you a path to succeed in your current task.
Run the script as root, or as a user issuing sudo bash dcos_generate_config.ee.sh.
The script will also generate the config file automatically; if you want to use your own configuration file, create a folder named genconf and put the file inside it before running the script, as sketched below. You should change the values inside <> to your specific configuration. If you need more help with your specific case, send me an email at infofs2 at gmail.com.
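Putting those two steps together, a minimal sketch from the directory holding the installer (config.yaml being the file quoted in the question, with the <> values filled in):

mkdir -p genconf
cp config.yaml genconf/config.yaml
sudo bash dcos_generate_config.ee.sh --web -v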