I need to monitor a CoreOS cluster that is used to host a Kubernetes cluster on top of it. I use Heapster to monitor the Kubernetes cluster itself.
Now I need to monitor the CoreOS minions using Icinga/Nagios. Is there any way to do so?
Thanks
If CoreOS is similar enough to other Linux distributions, it will likely be compatible with NRPE or NCPA.
I would try this first...
NRPE install
cd /tmp
wget http://assets.nagios.com/downloads/nagiosxi/agents/linux-nrpe-agent.tar.gz
tar xzf linux-nrpe-agent.tar.gz
cd linux-nrpe-agent
./fullinstall
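Once the agent is installed, you can check that it responds locally; the plugin path below is the default Nagios location and may differ on your system:
# Should print the NRPE version string if the agent is listening
/usr/local/nagios/libexec/check_nrpe -H localhost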
Is yum, rpm, or any other package manager present on CoreOS? Sorry I'm a bit unfamiliar with the OS...
I was installing PostgreSQL on Ubuntu using Linuxbrew:
brew install postgresql
It seemed to work fine. Then, because I was installing PostgreSQL for the first time, I tried creating a database:
initdb /usr/local/var/postgres -E utf8
but it returned:
initdb: command not found
I tried running the command with sudo, but that didn't help.
Run locate initdb; it should give you a list to choose from, something like:
MacBook-Air:~ vao$ locate initdb
/usr/local/Cellar/postgresql/9.5.3/bin/initdb
/usr/local/Cellar/postgresql/9.5.3/share/doc/postgresql/html/app-initdb.html
/usr/local/Cellar/postgresql/9.5.3/share/man/man1/initdb.1
/usr/local/Cellar/postgresql/9.6.1/bin/initdb
/usr/local/Cellar/postgresql/9.6.1/share/doc/postgresql/html/app-initdb.html
/usr/local/Cellar/postgresql/9.6.1/share/man/man1/initdb.1
/usr/local/bin/initdb
/usr/local/share/man/man1/initdb.1
So in my case I want to run
/usr/local/Cellar/postgresql/9.6.1/bin/initdb
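For example, combining that path with the command from the question:
/usr/local/Cellar/postgresql/9.6.1/bin/initdb /usr/local/var/postgres -E utf8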
If you don't have mlocate installed, either install it or use
sudo find / -name initdb
There's a good answer to a similar question on SuperUser.
In short:
Postgres groups databases into "clusters", each of which is a named collection of databases sharing a configuration and data location, and running on a single server instance with its own TCP port.
If you only want a single instance of Postgres, the installation includes a cluster named "main", so you don't need to run initdb to create one.
If you do need multiple clusters, then the Postgres packages for Debian and Ubuntu provide a different command pg_createcluster to be used instead of initdb, with the latter not included in PATH so as to discourage end users from using it directly.
And if you're just trying to create a database, not a database cluster, use the createdb command instead.
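As a quick sketch of both commands (the version number and names here are just examples):
# Create and start a second cluster called "extra" for PostgreSQL 9.6
sudo pg_createcluster 9.6 extra --start
# Create an ordinary database inside the running cluster
createdb mydatabase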
I had the same problem and found the answer here.
The Ubuntu path is:
/usr/lib/postgresql/9.6/bin/initdb
Edit: Sorry, Ahmed asked about Linuxbrew; I'm talking about Ubuntu.
I hope this answer helps somebody.
I had a similar issue, caused by brew install postgresql not properly linking postgres. The fix for me was to run:
brew link --overwrite postgresql
You can add the directory to your PATH so you can run it from any location.
sudo nano ~/.profile
Inside nano, go to the end of the file and add the following:
# set PATH so it includes user's private bin if it exists
if [ -d "/usr/lib/postgresql/14/bin/" ] ; then
    PATH="/usr/lib/postgresql/14/bin/:$PATH"
fi
and configure the alternative
sudo update-alternatives --install /usr/bin/initdb initdb /usr/lib/postgresql/14/bin/initdb 1
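To pick up the new PATH in your current shell, reload the profile and check that the command now resolves:
source ~/.profile
which initdb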
I'm trying to run Zeppelin on Ubuntu 14 with Hadoop 1.0.3 and Spark 1.4.0.
I've finished building the source code, and all of the packages built successfully. But when I run the daemon, it fails and says that the Zeppelin process has died.
Any ideas where this is going wrong?
It says that it can't find the logs folder and the run folder, which are definitely there.
Joseph,
I suggest that you test your Zeppelin package first:
mvn verify
or check whether your Zeppelin process is alive:
ps -aux | grep zeppelin
Try running Zeppelin with sudo:
sudo bin/zeppelin-daemon.sh start
This worked for me:
ps -ef | grep "zeppelin"
kill -9 <pid>
sudo bin/zeppelin-daemon.sh restart
It could be an error caused by the JDK version; at least that was the case for me.
Try updating the JDK and building it again.
Also, make sure you are building it using the correct command:
mvn clean package -Pspark-1.4 -Dhadoop.version=2.2.0 -Phadoop-2.2 -DskipTests
If you are running Zeppelin from a virtual machine, make sure that you have enough RAM and CPU. I ran into your problem when I was using VirtualBox with the default settings. When I increased the CPUs to 2 and the RAM to 4096 MB, everything worked fine. This is because Zeppelin runs Spark by default, and Spark is very resource-intensive, locally and otherwise.
I had the same issue and tried the proposed answers but none worked for me. Here is what did work for me:
Download the binaries, then download the build requirements:
sudo apt install openjdk-8-jdk npm libfontconfig r-base-dev r-cran-evaluate
sudo apt install maven
Go to the Zeppelin directory and run:
sudo bin/zeppelin-daemon.sh start
Go to localhost:8080 in your browser.
I had the same issue just now, so I checked environment compatibility with my CDH installation and found a Java compatibility issue.
I installed Java 8 with yum install java-1.8.0-openjdk, then started all the Hadoop services along with Spark.
Then I started Zeppelin. I had created the zeppelin folder under root, so I used:
sudo zeppelin/bin/zeppelin-daemon.sh start
Or
zeppelin/bin/zeppelin-daemon.sh start
I took Kangrok Lee's suggestion and ran mvn verify on my system. It told me that I did not have JAVA_HOME set, and that JAVA_HOME must point to a JDK and not a JRE.
The following steps fixed it for me:
Make sure you have a JDK installed on the system where you are trying to run Zeppelin
Make sure the JAVA_HOME environment variable points to your JDK and not a JRE
Once the above steps are done, zeppelin-daemon.sh start / restart should work for you. No need to use sudo.
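For reference, setting JAVA_HOME in your shell profile might look like this (the JDK path below is just an example; adjust it to your installation):
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH="$JAVA_HOME/bin:$PATH"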
I'm building a Docker image on my Raspberry Pi, which of course takes some time. The problem is that even very simple commands in the Dockerfile, like setting an environment variable, running chmod +x on a single file, or exposing port 80, take minutes to complete.
Here is an excerpt of my Dockerfile:
FROM resin/rpi-raspbian
MAINTAINER felixbr <mymail@redacted.com>
RUN export DEBIAN_FRONTEND=noninteractive && apt-get update && apt-get install -y python python-dev python-pip python-numpy python-scipy python-mysqldb mysql-server redis-server nginx dos2unix poppler-utils
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt
COPY . /app
WORKDIR /app
RUN cp /app/nginx-django.cfg /etc/nginx/sites-enabled/default
RUN chmod +x /app/start.sh
ENV DOCKERIZED="true"
CMD ./start.sh
EXPOSE 80
Keep in mind this is using an ARMv6 base image so it can run on a Raspberry Pi, and I'm using Docker 1.5.0 built for the Hypriot Raspberry Pi OS.
Is it copying the built layers for every command, or why does each of the last few commands take minutes to complete?
Each instruction in the Dockerfile is run in its own container. What that means is that for each instruction, Docker will do the following:
Instantiate a container from the image created by the previous step, which creates a new layer (the R/W one)
Do the thing (pip install, etc.)
Commit, which copies the top layer as an image layer (I'm pretty sure it is copying the layer)
Remove the container (if the --rm option is specified), thus removing the container's read/write layer
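You can see the layer each instruction produced with docker history (the image name below is a placeholder):
# Lists every layer of the image with the instruction that created it
docker history your-image-name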
There are a few I/O operations involved. On an SSD it's really quick, as it is on a good hard drive. When you build on the Raspberry Pi, though, if you build on the SD card (or MicroSD), the performance of the SD card is probably not that good. It will depend on the class of your MicroSD, and even then, I don't think it's really good for the card either. I tried it with a simple Node project, and it definitely took a few minutes instead of the few seconds it took on my laptop. It is hardware related (mostly I/O for the SD card, maybe a little bit the CPU, but...).
You might want to try an external hard drive connected to the Raspberry Pi and move the Docker folders there, to see if the performance is better.
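As a sketch of that idea: on Docker versions of that era the daemon's -g flag set the data directory (newer versions use data-root instead), so on a Debian-style setup you might try something like:
# In /etc/default/docker -- /mnt/external/docker is an example mount point
DOCKER_OPTS="-g /mnt/external/docker"
Then restart the Docker daemon so it starts using the new location.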
This is an old question but for reference, you may have been suffering from the chosen storage driver.
On Ubuntu/Debian, Docker uses by default an AUFS storage driver, which is quite fast.
On other distributions, Docker uses by default a devicemapper storage driver, which is very slow with the default configuration (due to a "loop-lvm" mode, configured by default and not recommended for production use).
Check this guide for reference and to see how to configure the devicemapper storage driver in production (without loop mode): https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/
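To see which storage driver your daemon is currently using:
# Prints "Storage Driver: aufs", "Storage Driver: devicemapper", etc.
docker info | grep "Storage Driver"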
Another consideration that was not mentioned here is that on armv7, most packages that you may want to install with pip or apt-get are not packaged as binaries.
That means that on an amd64 architecture, pip install downloads a binary and merely copies it into the right place, but on armv7 it won't find a suitable binary and will instead download the source code and build it from scratch.
When you have a package with lots of dependencies that need to be built from source, it takes a looong time.
You can check what is going on during docker build by using the -v flag on pip:
pip install -v -r requirements.txt
On the ARMv7 architecture, some Python libraries are not yet available as binaries, so build times are long, since you are building those libraries for ARMv7 as well.
I need to monitor the performance of a Raspberry Pi (with Raspbian). I tried to use New Relic, but it doesn't support the ARM architecture, so it's impossible to use.
I even tried Graphdat, but it seems to have the same problem.
Any alternatives to suggest?
Linode Longview does support arm architecture:
https://www.linode.com/longview
The free tier has 12-hour retention, but that may be enough for most cases.
I know this is old, but New Relic has ARM and ARM64 infrastructure agents now:
https://download.newrelic.com/infrastructure_agent/binaries/linux/arm/
I've tested this on a Raspberry Pi 4 (8GB) on Debian (32-bit) and it's been working fine so far.
In case anyone else tries, here's what I did:
Download the Infrastructure Agent:
sudo curl https://download.newrelic.com/infrastructure_agent/binaries/linux/arm/newrelic-infra_linux_1.20.5_arm.tar.gz --output newrelic-infra_linux_1.20.5_arm.tar.gz
Extract the files
sudo tar -xf newrelic-infra_linux_1.20.5_arm.tar.gz
Add license key to the config script:
echo "license_key=\"<YOUR_LICENSE_KEY>\"" | sudo tee -a ~/newrelic-infra/config_defaults.sh
Install the Infrastructure Agent
sudo ~/newrelic-infra/installer.sh
Check service status to make sure it's running:
sudo systemctl status newrelic-infra
By default, process information is not sent to New Relic, so I had to enable it manually:
echo "enable_process_metrics: true" | sudo tee -a /etc/newrelic-infra.yml
Finally, restart the service:
sudo systemctl restart newrelic-infra
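If the status looks wrong, the systemd journal is a reasonable place to look; this is standard systemd tooling rather than anything New Relic specific:
# Follow the agent's log output live
sudo journalctl -u newrelic-infra -f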
I would like to know how to either ignore upgrading certain ports or unmark them as "outdated".
This is motivated by certain ports failing to upgrade while I wish to upgrade all the rest. I know about sudo port install -n, which allows one to install a port without upgrading its dependencies, as in the case of mongodb requiring an older (not the current) version of the Boost libraries, but this is not applicable here.
For example:
$ sudo port list outdated
gdb                  @7.5    devel/gdb
py27-scikits-image   @0.7.1  python/py-scikits-image
As gdb @7.5 fails to update, I would just like to upgrade the others, i.e. py27-scikits-image, without going through the whole sudo port list outdated | awk '{print $1}' | grep -v gdb | xargs sudo port upgrade pipeline.
Much appreciated.
I would advise creating a local portfile for gdb with a lower version number.
Create a local portfile repository: howto
Copy the gdb portfile directory (a directory called "gdb" containing the file "Portfile" and directory "files") into your local portfile repository
Change the version number in the portfile to e.g. 0.0
Run portindex in your local portfile repository
The local portfile overrides the one downloaded from the default port repository. The low version number makes MacPorts think your version of gdb is up to date.
I hope this can help.
BTW: you can do sudo port upgrade outdated and not gdb
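That works because port accepts set expressions combined with and / or / not. A slightly fancier sketch, using the port names from the question (parentheses must be escaped from the shell):
# Upgrade everything outdated except gdb
sudo port upgrade outdated and not gdb
# Exclude more than one port
sudo port upgrade outdated and not \( gdb or py27-scikits-image \)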