IBM Cloud Private 2.1.0.2 install failing at "Waiting for Cloudant to Start" - cloudant

The IBM Cloud Private 2.1.0.2 CE install is failing at "Waiting for Cloudant to Start". It is a multinode cluster running CentOS 7.2 with Docker 17.09.
I have one combined boot/master node, one worker node, and one proxy node. I have checked the hardware requirements and assigned more than 151 GB of storage on all nodes. I have also pulled the icp-datastore image locally.
Can anyone please help with suggestions to solve this issue?

You should edit the Cloudant deployment and increase the readiness timeout, so the platform can take its time to rebuild the database. This can be done with kubectl (kubectl edit deploy xxxx) or through the ICP console.
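For example, a minimal sketch, assuming the deployment is named cloudantdb in the kube-system namespace (the actual name and namespace in your install may differ):
kubectl -n kube-system edit deploy cloudantdb
# in the editor, raise the readiness probe limits, e.g.:
#   readinessProbe:
#     initialDelaySeconds: 300
#     timeoutSeconds: 30
#     failureThreshold: 10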

I would recommend looking at the following:
Is the firewall enabled? Always ensure the nodes are in the correct state before an install, even after an uninstall.
Is IPsec enabled? Try installing with IPsec disabled.
Ensure SELinux is disabled every time you run the install (see the commands below for a quick check).
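If it helps, these are the standard CentOS 7 commands for checking and adjusting those states before rerunning the installer (general OS commands, not ICP-specific):
systemctl status firewalld   # is the firewall running?
sudo systemctl stop firewalld   # stop it for the duration of the install if needed
getenforce   # prints Enforcing, Permissive, or Disabled
sudo setenforce 0   # switch SELinux to permissive for the current boot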

Related

Flink standalone gets no metrics to Pushgateway for Grafana

I am new to Flink and have installed YARN and Flink on my MacBook with an M1 Pro chip.
To monitor Flink 1.13, I installed Grafana, Prometheus, and Pushgateway the same way the posts I found online describe, and all the web UIs look fine.
Then I changed the flink-conf.yaml file (as in the screenshot, not shown here), copied flink-metrics-prometheus-1.13.6.jar to the lib folder, and restarted Flink using stop-cluster.sh and start-cluster.sh.
However, the Pushgateway still gets no metrics from Flink.
Can anyone tell me how to fix this? I'm in a hurry. Many thanks!
I solved this problem. I think it's quite tricky: you should use 127.0.0.1 instead of localhost:
metrics.reporter.promgateway.host: 127.0.0.1
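For context, since the flink-conf.yaml screenshot is not visible here, the Pushgateway reporter block in Flink 1.13 looks roughly like this (the port, jobName, and interval values are illustrative, not taken from the question):
metrics.reporter.promgateway.class: org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter
metrics.reporter.promgateway.host: 127.0.0.1
metrics.reporter.promgateway.port: 9091
metrics.reporter.promgateway.jobName: flink-metrics
metrics.reporter.promgateway.randomJobNameSuffix: true
metrics.reporter.promgateway.deleteOnShutdown: false
metrics.reporter.promgateway.interval: 15 SECONDS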

Launch MongoDB in AWS environment after installation

For three days I have been trying to install MongoDB on an AWS EC2 instance. Today I finally managed to install it on Ubuntu, but now I can't launch it in the AWS environment; after numerous attempts to check the status in the AWS environment terminal I get errors.
What I have already tried:
installed MongoDB on Ubuntu 16.04 (Xenial)
launched mongo as a service (Ubuntu)
sudo service mongod start
sudo service mongod status
Then I go to the AWS environment, try to check whether I'm connected to the DB, and get errors:
sudo: mongod: command not found
mongod: unrecognized service
sudo: apt-get: command not found
bash: mongo: command not found
Please help me set up my environment.
I am pretty sure that Шоира is dealing not with Ubuntu but with Amazon Linux or something similar.
So, if she is using the Community Edition, the current docs for every *nix-based OS can be found here (MongoDB Docs).
And if I remember correctly, AWS instances come with Amazon Linux by default, so the documentation should be read for Amazon Linux (here), not Ubuntu.
To confirm she is using Amazon Linux, she can run grep ^NAME /etc/*release in the terminal; the reply should be Amazon Linux or Amazon Linux AMI. (A sketch of the Amazon Linux install follows below.)
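If it does turn out to be Amazon Linux, the install from the linked docs boils down to adding a yum repository and installing mongodb-org (the 4.0 series below is only an example; use whichever version the docs currently recommend):
sudo tee /etc/yum.repos.d/mongodb-org-4.0.repo <<'EOF'
[mongodb-org-4.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/amazon/2/mongodb-org/4.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.0.asc
EOF
sudo yum install -y mongodb-org
sudo service mongod start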
Also, I don't know whether it matters here, but MongoDB Atlas also provides free-tier servers (backed by EC2 instances) in almost every data center on GCP / Azure / AWS, so sometimes it is better to go with the cloud service (which includes Compass and Realm out of the box) than to run the raw Community Edition and write the code and HTTPS API for it yourself later.
I tried to recreate the issue on an EC2 instance with Ubuntu 16.04:
NAME="Ubuntu"
VERSION="16.04.6 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.6 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
I followed the instructions from your link:
Install MongoDB Community Edition on Ubuntu
I had no issues and was able to install MongoDB as described in the link. MongoDB is working fine on my instance:
● mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)
Active: active (running) since Sun 2020-07-19 08:21:41 UTC; 8s ago
Docs: https://docs.mongodb.org/manual
Main PID: 3214 (mongod)
Tasks: 24
Memory: 69.6M
CPU: 746ms
CGroup: /system.slice/mongod.service
└─3214 /usr/bin/mongod --config /etc/mongod.conf
Thus, please double-check that you are following the instructions from the link; the instructions are correct (they are summarized below). Also please make sure you are using Ubuntu 16.04.
This means that you are trying to connect to the mongod process running on the local host, which is bound to the default port of 27017.
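For reference, the steps from that page for Ubuntu 16.04 (Xenial) are roughly the following (the 4.2 series is shown; use whichever series the docs list for xenial):
wget -qO - https://www.mongodb.org/static/pgp/server-4.2.asc | sudo apt-key add -
echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.2.list
sudo apt-get update
sudo apt-get install -y mongodb-org
sudo systemctl start mongod
sudo systemctl status mongod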

How to see console.logs of a running nodeJs application on ubuntu 18 EC2 instance?

I am new to the Node world. I created a Node.js REST API. When I run npm start on my local machine or in the terminal for the first time, I can see console.log() output in my terminal. Now I am running the same application on an AWS EC2 instance with Ubuntu as the OS. I run npm start and serve my app on port 80. I do this via SSH, and after starting my server I close the SSH connection. But when I reconnect via SSH, I want to see those console.log() messages in my terminal again.
I completely understand that logging messages to the terminal is not a good idea and there are many alternatives. I just want to know how to get back to the same terminal output I see when I start my application.
If you are using pm2, you can try pm2 logs.
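A minimal pm2 workflow, assuming the entry point is app.js (adjust the file and process name to your project):
sudo npm install -g pm2
pm2 start app.js --name my-api   # keeps the app running after you close the SSH session
pm2 logs my-api   # streams the app's console.log output
pm2 startup && pm2 save   # optional: generate a startup script so the app survives reboots (pm2 prints the command to run)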
Nodemon won't work well on a production server or in any setup where the app needs to keep running on its own.
Nodemon is a dev tool for restarting your server during development. On a "real" VPS you need to place the process in the background, or it will be killed when the connection times out.
Check out this YouTube series for a solid deployment architecture using pm2 behind NGINX on a Red Hat server; I've personally used it more than once:
https://www.youtube.com/playlist?list=PLQlWzK5tU-gDyxC1JTpyC2avvJlt3hrIh

An error occurred during installation: No such plugin: cloudbees-folder

An error occurred during installation: No such plugin: cloudbees-folder (I get this error while installing the Jenkins suggested plugins on Windows 10.)
This worked for me:
http://localhost:8080/restart
OR
service jenkins restart on my Ubuntu machine
For Windows users, restarting the Jenkins service will resolve the issue.
To do so, open Task Manager (Ctrl+Shift+Esc) -> Services -> right-click the Jenkins service and choose Restart.
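If you prefer the command line, the same restart can usually be done from an elevated prompt, assuming Jenkins was installed as the default Windows service named Jenkins:
net stop Jenkins
net start Jenkins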
If you are using the Jenkins image in Docker and you see this error, just open the URL below:
http://localhost:8080/safeRestart
You will get a prompt asking whether you want to restart; you don't need to click Yes. Just ignore it and start creating your jobs by clicking New. It is only a missing plugin.
For the Docker version:
1. Open the browser.
2. Go to your_ip_address:your_docker_port/restart
Note: you must have the password generated by your Jenkins server.
As a comment suggested: always look for the official images listed at jenkins.io/download.
I faced the same issue when using the Docker image jenkins:2.60.3; it turned out that this image isn't an official one. The official images look like jenkins/jenkins:<something>, and you can find them here: https://hub.docker.com/r/jenkins/jenkins
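For example, pulling and running the official image looks something like this (the tag, port mappings, and volume name are illustrative):
docker pull jenkins/jenkins:lts
docker run -d --name jenkins -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts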
If you are behind a firewall (local or through a VPN), there's a good chance Jenkins is failing to download the plugin files correctly.
Try installing from another location, disable the VPN, or ask your network admin to allow the connections.
When I initially started my server while on a VPN, there were two connection-failure stack traces.
I am running Jenkins on Ubuntu in AWS.
I had this problem right after booting up the box.
I ran the following:
sudo service jenkins restart
Everything is working for me now.
Check the context path you gave Jenkins when deploying it in Tomcat, and then call restart/safeRestart from that path.
For example, if the path is "/jenkins" (as in the screenshot, not shown here), the restart URL will be
http://localhost:8080/jenkins/restart OR
http://localhost:8080/jenkins/safeRestart
I just unchecked all the plugins and clicked the Install button. After that, it successfully took me past the setup screen.
Just restart the machine. It worked for me.

GAE Managed VMs - can't deploy if your project name is too long

Currently the GAE Managed VMs feature is broken for any project with a name longer than 27 characters.
The underlying issue is that Docker restricts image namespace to between 4-30 chars. This has been fixed (https://github.com/docker/docker/issues/10392) but is still awaiting a release at time of writing.
It seems that when deploying a Managed VM to GAE, the namespace is automatically generated from your project name plus an _m_ prefix. This leads to an error when attempting to deploy the VM:
DEBUG: "POST /v1.10/images/gcr.io/_m_<my project name>/<my project name>.default.20150330t140211/push HTTP/1.1" 500 111
INFO: Exception 500 Server Error: Internal Server Error ("Invalid namespace name (_m_<my project name>). Cannot be fewer than 4 or more than 30 characters.") thrown in ProgressHandler. Retrying.
The obvious solution would be for the GAE gcloud tools to respect the underlying limit via some auto-truncation or hashing scheme.
Does anyone know a way around this? Or do I have to wait for Google to fix it, or for Docker to release a new version and Google to pick it up?
We're aware of the issue and we're working on a long-term fix. For now, you can switch to an old version of gcloud. You can do this by setting this variable to point to an old version (0.9.51):
gcloud config set --scope=installation component_manager/fixed_sdk_version 0.9.51
then run "gcloud components update"
Then run "gcloud config set app/hosted_registry false"
and you should be able to deploy. I'll update this answer when we've fixed the naming issue.
UPDATE:
The naming issue has been fixed as of this week's release (0.9.57).
