Posting here as I didn't hear anything back on the mailing list.
On Ubuntu 20.04.5 LTS, I built the latest version (3.0.0) of klish:
git clone https://src.libcode.org/pkun/faux.git
cd faux
./autogen.sh && ./configure && make && sudo make install
cd ..
git clone https://src.libcode.org/pkun/klish.git
cd klish
./autogen.sh && ./configure && make && sudo make install
Trying to start the server ...
/usr/local/bin/klishd -d -v -f ./klishd.conf
klishd: Parse config file: ../klishd.conf
klishd: opts: Foreground = true
klishd: opts: Verbose = true
klishd: opts: LogFacility = daemon
klishd: opts: PIDPath = /var/run/klishd.pid
klishd: opts: ConfigPath = ../klishd.conf
klishd: opts: UnixSocketPath = /tmp/klish-unix-socket
klishd: opts: DBs = libxml2
klishd: Start daemon.
Scheme errors:
DB "libxml2": kdb-libxml2.so: cannot open shared object file: No such file or directory
DB "libxml2": Can't load DB plugin
klishd: Stop daemon.
Am I doing something wrong here?
Ultimately, I am trying to connect the klish client to the klishd server.
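Things I have been thinking of checking, though I am not sure this is the right direction: whether the plugin shared object was actually installed, and whether the dynamic loader can see /usr/local/lib (where make install puts things by default):
ls /usr/local/lib | grep -i xml        # is the libxml2 DB plugin there at all?
sudo ldconfig                          # refresh the linker cache after make install
export LD_LIBRARY_PATH=/usr/local/lib  # or make the path visible for this shell only
/usr/local/bin/klishd -d -v -f ./klishd.conf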
I want to try out Apache AGE for PostgreSQL, but I got lost reading the documentation.
Is there an easy solution?
Here is a step-by-step guide on how to install PostgreSQL and the AGE extension from source.
Prerequisite:
Ubuntu must be installed, either in a virtual machine or as a dual boot alongside Windows.
You should have enough free disk space on your Ubuntu system.
You should already have Git installed. If not, you can take help from here: Install Git.
Install some Dependencies:
mkdir age_installation
cd age_installation
mkdir pg
cd pg
Remember that the commands below might vary depending on your operating system.
sudo apt-get install build-essential libreadline-dev zlib1g-dev flex bison
Installation of Components from Source:
At the time of writing, AGE only supports PostgreSQL 11 and 12, so download the required version of PostgreSQL.
Download the files into the folder age_installation/pg:
wget https://ftp.postgresql.org/pub/source/v11.18/postgresql-11.18.tar.gz && tar -xvf postgresql-11.18.tar.gz && rm -f postgresql-11.18.tar.gz
Installing PG:
Now we will move on to installing PostgreSQL.
cd postgresql-11.18
Configure by setting flags
./configure --enable-debug --enable-cassert --prefix=$(path) CFLAGS="-ggdb -Og -fno-omit-frame-pointer"
Now install
make install
Go back
cd ../../
In the configure command above, the --prefix flag specifies the path where PostgreSQL will be installed. Replace $(path) with the path you want to use.
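For example, assuming you want PostgreSQL installed under the pg directory created earlier (the exact path here is only an illustration), the configure line would look like this:
./configure --enable-debug --enable-cassert \
  --prefix=$HOME/age_installation/pg/postgresql-11.18 \
  CFLAGS="-ggdb -Og -fno-omit-frame-pointer"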
AGE:
Downloading:
Download AGE from the GitHub repository, i.e. clone it into the age_installation directory:
git clone https://github.com/apache/age.git
Installing:
Configure age with PostgreSQL.
cd age/
sudo make PG_CONFIG=/home/talhastinyasylum/Desktop/age_installation/pg/postgresql-11.18/bin/pg_config install
make PG_CONFIG=/home/talhastinyasylum/Desktop/age_installation/pg/postgresql-11.18/bin/pg_config installcheck
Database initialization:
cd postgresql-11.18/
Initialization:
bin/initdb sample
When you execute the command, a success message will be shown along with the command to start the server.
Start server:
bin/pg_ctl -D sample -l logfile start
The command will return a message saying that the server has started.
Create Database:
The name of the Database is SampleDatabase
bin/createdb SampleDatabase
Start querying Database:
Now that AGE has been added to PostgreSQL successfully, we can start testing it using the psql console.
bin/psql SampleDatabase
CREATE EXTENSION age;
LOAD 'age';
The above commands create and load the extension. We also need to set the search path and other variables:
SET search_path = ag_catalog, "$user", public;
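If you don't want to set the search path in every new session, one option (just a sketch; SampleDatabase is the database created above) is to set it at the database level:
ALTER DATABASE "SampleDatabase" SET search_path = ag_catalog, "$user", public;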
Try the queries below, which use Cypher commands:
SELECT create_graph('demo_graph');
It will create a graph named demo_graph.
SELECT * FROM cypher('demo_graph', $$ CREATE (n:Person {name : "james", bornIn : "US"}) $$) AS (a agtype);
SELECT * FROM cypher('demo_graph', $$ CREATE (n:Person {name : "Talha", bornIn : "Lahore"}) $$) AS (a agtype);
SELECT * FROM cypher('demo_graph', $$ MATCH (n) RETURN n $$) as (a agtype);
The last command will return the rows in the database.
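As one more illustration, using the label and property names from the CREATE statements above, you can filter with a WHERE clause:
SELECT * FROM cypher('demo_graph', $$
    MATCH (n:Person)
    WHERE n.bornIn = 'Lahore'
    RETURN n.name
$$) AS (name agtype);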
Download source PostgreSQL package.
wget https://ftp.postgresql.org/pub/source/v11.18/postgresql-11.18.tar.gz && tar -xvf postgresql-11.18.tar.gz && rm -f postgresql-11.18.tar.gz
Go into the PostgreSQL folder.
cd postgresql-11.18
Configure by setting flags.
./configure --enable-debug --enable-cassert --prefix=$(pwd) CFLAGS="-ggdb -Og -fno-omit-frame-pointer"
Now install.
make install
Go back.
cd ../
*** CLONING AGE ***
git clone https://github.com/apache/age.git
Go into the cloned AGE repo:
cd age/
Install
sudo make PG_CONFIG=/home/postgresql-11.18/bin/pg_config install
Install check
make PG_CONFIG=/home/postgresql-11.18/bin/pg_config installcheck
Go into the PostgreSQL folder:
cd postgresql-11.18/
Initialize a db named demo:
bin/initdb demo
Open the file demo/postgresql.conf:
nano demo/postgresql.conf
In the postgresql.conf file, update:
shared_preload_libraries = 'age'
search_path = 'ag_catalog, "$user", public'
Start the demo db we initialized earlier:
bin/pg_ctl -D demo -l logfile start
bin/createdb demo
AGE is added to PostgreSQL successfully; now we can test it. Open the psql console:
bin/psql demo
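Inside psql you still need to create the extension once (since shared_preload_libraries and search_path are already set in postgresql.conf above, LOAD and SET should not be needed here); after that you can run the same Cypher queries as in the first walkthrough:
CREATE EXTENSION age;
SELECT create_graph('demo_graph');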
PostgreSQL versions 11 and 12 are supported by the AGE extension. After installing PostgreSQL from source, make sure the bin and lib folders are in your environment variables. If not, you can set them as shown below in your shell:
export PATH="$PATH:/home/pg/dist/postgresql-11.18/bin/"
export LD_LIBRARY_PATH="/home/pg/dist/postgresql-11.18/lib/"
export PG_CONFIG="/home/pg/dist/postgresql-11.18/bin/pg_config"
Just replace the path with your installation directory.
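A quick way to sanity-check that the variables took effect (optional):
which pg_config
pg_config --version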
After that, clone AGE on your Ubuntu machine. Go into the directory and run:
sudo make install
From now on, you can use the AGE extension after starting psql as follows:
CREATE EXTENSION age;
LOAD 'age';
SET search_path = ag_catalog, "$user", public;
I started working on GitHub Actions to build some sample containers instead of doing my builds locally: less reliance on my machines, etc. It seems, though, that building and pushing from my Mac works, but creating an Action to do it fails.
When doing tests locally, I made sure I have an updated Dockerfile so that everything is built correctly, but part of me thinks the problem is related to the OS the GitHub Action builds on; I am trying to understand it better.
The error I get is:
Error: failed to solve: executor failed running [/bin/sh -c ( /opt/mssql/bin/sqlservr & ) | grep -q "Service Broker manager has started" && /opt/sqlpackage/sqlpackage /a:Import /tsn:. /tdn:${DBNAME} /tu:sa /tp:$SA_PASSWORD /sf:/tmp/db.bacpac && rm /tmp/db.bacpac && pkill sqlservr]: exit code: 1
My workflow action is:
name: Docker Image CI MSSQL
on:
  schedule:
    - cron: '0 6 * * *'
  push:
    branches: [ master ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Prepare Variables
        id: prepare
        run: |
          DOCKER_IMAGE=fallenreaper/eve-mssql
          VERSION=$(date -u +'%Y%m%d')
          if [ "${{ github.event_name }}" = "schedule" ]; then
            VERSION=nightly
          fi
          TAGS="${DOCKER_IMAGE}:${VERSION}"
          if [[ $VERSION =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
            TAGS="$TAGS --tag ${DOCKER_IMAGE}:latest"
          fi
          echo ::set-output name=docker_image::${DOCKER_IMAGE}
          echo ::set-output name=version::${VERSION}
          echo ::set-output name=tags::${TAGS}
      -
        name: Login to DockerHub
        if: success() && github.event_name != 'pull_request'
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      -
        name: Docker Build & Push
        if: success() && github.event_name != 'pull_request'
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: ${{steps.prepare.outputs.tags}}
          context: mssql/.
So I was thinking this would work. The Dockerfile I am building uses mcr.microsoft.com/mssql/server:2017-latest, which I figured would work.
Dockerfile:
FROM mcr.microsoft.com/mssql/server:2017-latest
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD=Password123!
ENV MSSQL_PID=Developer
ARG DBNAME=evesde
EXPOSE 1433
RUN apt-get update \
&& apt-get install unzip -y
RUN wget -progress=bar:force -q -O sqlpackage.zip https://go.microsoft.com/fwlink/?linkid=2165213 \
&& unzip -qq sqlpackage.zip -d /opt/sqlpackage \
&& chmod +x /opt/sqlpackage/sqlpackage
RUN wget -o /tmp/db.bacpac https://www.fuzzwork.co.uk/dump/mssql-latest.bacpac
RUN ( /opt/mssql/bin/sqlservr & ) | grep -q "Service Broker manager has started" \
&& /opt/sqlpackage/sqlpackage /a:Import /tsn:. /tdn:${DBNAME} /tu:sa /tp:$SA_PASSWORD /sf:/tmp/db.bacpac \
&& rm /tmp/db.bacpac \
&& pkill sqlservr
EDIT: As I keep reading various documents, I am trying to understand and test various methods to see if I can get a build to work. I was thinking that simulating a Mac might be useful, so I also attempted to use runs-on: macos-latest to see if that would solve it, but I haven't seen any gains, as the docker/login-action@v1 step will fail.
Looking through each line item, I ended up with the following Dockerfile.
FROM mcr.microsoft.com/mssql/server:2017-latest
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD=Password123!
ENV MSSQL_PID=Developer
ENV DBNAME=evesde
EXPOSE 1433
RUN apt-get update \
&& apt-get install unzip -y
RUN wget -progress=bar:force -q -O sqlpackage.zip https://go.microsoft.com/fwlink/?linkid=2165213 \
&& unzip -qq sqlpackage.zip -d /opt/sqlpackage \
&& chmod +x /opt/sqlpackage/sqlpackage
RUN wget -progress=bar:force -q -O /mssql-latest.bacpac https://www.fuzzwork.co.uk/dump/mssql-latest.bacpac
RUN ( /opt/mssql/bin/sqlservr & ) | grep -q "Service Broker manager has started" \
&& /opt/sqlpackage/sqlpackage /a:Import /tsn:. /tdn:$DBNAME /tu:sa /tp:$SA_PASSWORD /sf:/mssql-latest.bacpac \
&& pkill sqlservr
#Cleanup of created bulk files no longer needed.
RUN rm mssql-latest.bacpac sqlpackage.zip
The main difference is where the bacpac file is stored. There seemed to be hiccups when creating that file; after adjusting its location and breaking apart the import step, it seemed to work.
Notes: When the file was created in /tmp, it appeared to be only partially created, so the build picked up an existing but corrupt file. Not sure if there were size limits, but that was the observation. Putting it in the / directory of the build gave me a complete, accessible file, so I needed to adjust the /sf reference accordingly.
Lastly, because there were leftover files that were no longer needed, I found it best to do a little cleanup by deleting both the sqlpackage zip and the bacpac file.
Suggesting to investigate the grep -q "Service Broker manager has started" step.
Maybe this fails because you start /opt/mssql/bin/sqlservr and then immediately check that it has started; in most cases it takes a few seconds to start.
To test whether this theory is correct, I suggest inserting a few sleep 10 or timeout commands in strategic places.
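A rough sketch of the kind of change being suggested, written as the shell that would go inside that RUN step (the 30-second wait is an arbitrary guess, not a tested value):
( /opt/mssql/bin/sqlservr & ) \
    && sleep 30 \
    && /opt/sqlpackage/sqlpackage /a:Import /tsn:. /tdn:$DBNAME /tu:sa /tp:$SA_PASSWORD /sf:/mssql-latest.bacpac \
    && pkill sqlservr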
FROM python:3.7
WORKDIR /opt
RUN curl https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb -o /chrome.deb
RUN dpkg -i /chrome.deb || apt-get install -yf
RUN rm /chrome.deb
ENV CHROMEDRIVER_VERSION 89.0.4389.23
ENV CHROMEDRIVER_DIR /chromedriver
RUN mkdir -p $CHROMEDRIVER_DIR
# Download and install Chromedriver
RUN wget -q --continue -P $CHROMEDRIVER_DIR "http://chromedriver.storage.googleapis.com/$CHROMEDRIVER_VERSION/chromedriver_linux64.zip"
RUN unzip $CHROMEDRIVER_DIR/chromedriver* -d $CHROMEDRIVER_DIR
ENV PATH $CHROMEDRIVER_DIR:$PATH
RUN apt-get update
COPY . .
RUN pip3 install -r requirements.txt
CMD ["python3","code.py"]
code.py
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

try:
    chrome_options = Options()
    chrome_options.add_argument("--headless")
    driver = webdriver.Chrome(options=chrome_options)
    driver.get('google.com')
    print(f'{driver.page_source}')
except Exception as e:
    print(f'111111111 Exception: {e}')

try:
    options = webdriver.ChromeOptions()
    options.headless = True
    driver = webdriver.Chrome(options=options)
    driver.get('google.com')
    print(f'{driver.page_source}')
except Exception as e:
    print(f'222222 Exception: {e}')

print(f'h')
requirements.txt
selenium
When I build this image it builds fine, but it throws the following error when running the image:
Service chromedriver unexpectedly exited. Status code was: 127
Any idea where I am going wrong in the Dockerfile? I have tried the other posts available, but nothing worked for me.
My machine OS is macOS Catalina, and I was trying to configure the code for a remote host, but it is not even working on my local system.
Could the issue be that I am on one OS and configuring for another?
I tried this post as well, but nothing worked.
Had the same problem. Fixed it by installing chrome-browser.
Please check this answer: How to install Google chrome in a docker container
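For reference, the approach from that answer boils down to adding the Google Chrome apt repository and installing the browser; in this Dockerfile that would mean RUN steps along these lines (a sketch, details may need adjusting):
wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | apt-key add -
echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google-chrome.list
apt-get update && apt-get install -y google-chrome-stable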
I'd like to set up a Docker container and share it with my co-workers so we all don't have to set up development environments individually.
Note that this does not include saving the container as an image and uploading it to DockerHub (search around for that part). It's also worth noting that this set of instructions would be a good start for creating a Dockerfile that automatically runs all of these commands to build this container on demand; see the rough sketch at the end of this answer.
docker pull ubuntu
docker run --privileged=true -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:ro -v /dev/bus/usb:/dev/bus/usb ubuntu
// As root in the docker container:
apt-get update
apt-get install vim vifm ssh sshd iproute2 iputils-ping sshfs build-essential dos2unix git usbutils
adduser mydevuser
/etc/init.d/ssh start
ip a
// As mydevuser user in the docker container:
- Download and unzip gcc compilers from ARM (gcc-arm-none-eabi-8-2019-q3-update-linux.tar.bz2)
- Download and unzip nRF5SDK (nRF5SDK160098a08e2.zip)
- Download and unzip soft device (s113nrf52701.zip)
- Download and unzip command line tools (nRF-Command-Line-Tools_10_4_1_Linux-amd64.tar.gz)
// Configure for our compiler; here is my updated GNU_INSTALL_ROOT
~/nRF5SDK/components/toolchain/gcc $ head Makefile.posix
GNU_INSTALL_ROOT ?= /home/mydevuser/gcc/gcc-arm-none-eabi-8-2019-q3-update/bin/
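// If you prefer to script that edit instead of changing Makefile.posix by hand, a sed one-liner
// along these lines should work (the toolchain path is the example one above):
sed -i 's|^GNU_INSTALL_ROOT.*|GNU_INSTALL_ROOT ?= /home/mydevuser/gcc/gcc-arm-none-eabi-8-2019-q3-update/bin/|' \
    ~/nRF5SDK/components/toolchain/gcc/Makefile.posix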
// Now let's compile some examples
~/nRF5SDK/external/micro-ecc/nrf52hf_armgcc/armgcc $ make
~/nRF5SDK/examples/dfu/secure_bootloader/pca10100_s113_ble_debug/armgcc $ make
~/nRF5SDK/examples/peripheral/spi/pca10056/blank/armgcc $ make
// As root in the docker container:
mv /home/mydevuser/cli_nrf/mergehex /opt/
mv /home/mydevuser/cli_nrf/nrfjprog/ /opt/
ln -s /opt/nrfjprog/nrfjprog /usr/local/bin/nrfjprog
ln -s /opt/mergehex/mergehex /usr/local/bin/mergehex
cp -pv /home/mydevuser/cli_nrf/JLink_Linux_V650b_x86_64/libjlinkarm* /lib/x86_64-linux-gnu/
// As root in the docker container:
// Load the firmware over USB to the dev board:
nrfjprog -f NRF52 --program nrf52840_xxaa.hex --chiperase --log
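// As mentioned at the top, these steps map fairly directly onto a Dockerfile. A rough, untested
// starting point for the root-level part might look like this (the SDK/toolchain downloads and the
// nrfjprog/mergehex steps would still need to be added):
FROM ubuntu

# Base packages used in the manual steps above
RUN apt-get update && apt-get install -y \
    vim vifm ssh iproute2 iputils-ping sshfs \
    build-essential dos2unix git usbutils

# Unprivileged development user
RUN useradd -m -s /bin/bash mydevuser
USER mydevuser
WORKDIR /home/mydevuser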
I am using Check_MK with Nagios. I read somewhere that Nagios comes with mk-livestatus, but I looked on my server and couldn't find it.
Can this feature (mk-livestatus) be added explicitly? If yes, how can I install it?
Thanks
OMD automatically configures this option correctly in etc/mk-livestatus/nagios.cfg.
If your Nagios is not configured with OMD, you can use the following steps to install mk-livestatus:
- Installing mk-livestatus:
1.- Install dependencies:
# yum install make gcc-c++ wget
2.- Download mk-livestatus:
# cd /tmp && wget http://mathias-kettner.de/download/mk-livestatus-1.1.12p7.tar.gz
3.- Extract package:
# tar -xzvf mk-livestatus-1.1.12p7.tar.gz
4.- Install:
# cd mk-livestatus-1.1.12p7/ && ./configure
# make && make install
5.- Create new directory with correct permissions:
# mkdir /usr/lib/nagios/mk-livestatus && chown nagios:apache /usr/lib/nagios/mk-livestatus
6.- Edit /etc/nagios/nagios.cfg :
broker_module=/usr/local/lib/mk-livestatus/livestatus.o /usr/lib/nagios/mk-livestatus/live
7.- Restart Nagios:
# service nagios restart
8.- Try command line:
# echo 'GET hosts' | unixcat /usr/lib/nagios/mk-livestatus/live
For more information of query syntax:
http://mathias-kettner.de/checkmk_livestatus.html
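For example, a slightly more specific query (standard Livestatus column names), listing the name and state of hosts that are currently up:
# printf 'GET hosts\nColumns: name state\nFilter: state = 0\n' | unixcat /usr/lib/nagios/mk-livestatus/live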
There is a website specifically for mk_livestatus, made by Mathias Kettner, who is the creator of mk_livestatus.
Here is the installation documentation for the agent on Linux (dpkg or rpm):
http://mathias-kettner.com/checkmk_linuxagent.html