Error getting alerts from compliances Your environment may not have any index with Wazuh's alerts - elk

I am new to Elasticsearch. I have to set up Wazuh with an Elasticsearch cluster. I did all of the setup and also installed the Wazuh plugin on Kibana. Once I opened the app and clicked on the agent section, it says:
Error getting alerts from compliances
Your environment may not have any index with Wazuh's alerts
Please help me.

You could try to uninstall and install it again. Here is the official uninstallation guide, Uninstalling Wazuh with Elastic Stack, and afterwards you can install Elastic again with this guide: Unattended installation.
Remember, if you want to preserve your configuration, you can back up these files:
cp -p /var/ossec/etc/ossec.conf /var/ossec_backup/etc/ossec.conf
cp -p /var/ossec/etc/local_internal_options.conf /var/ossec_backup/etc/local_internal_options.conf
cp -p /var/ossec/etc/client.keys /var/ossec_backup/etc/client.keys
cp -p /var/ossec/queue/rids/* /var/ossec_backup/queue/rids/
Here you have more information about that: Migrating OSSEC agent.
After the reinstall, you have to put those files back in their original paths.
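A minimal sketch of that restore step, assuming you used the backup paths above and this host runs the Wazuh manager (use the wazuh-agent service instead on an agent):
cp -p /var/ossec_backup/etc/ossec.conf /var/ossec/etc/ossec.conf
cp -p /var/ossec_backup/etc/local_internal_options.conf /var/ossec/etc/local_internal_options.conf
cp -p /var/ossec_backup/etc/client.keys /var/ossec/etc/client.keys
cp -p /var/ossec_backup/queue/rids/* /var/ossec/queue/rids/
systemctl restart wazuh-manager   # pick up the restored configuration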

Related

cx_Oracle in Azure Databricks

I am unable to establish a connection to my Oracle database from Azure Databricks, although it works in ADF, where I am able to query the table. But ADF takes time to filter the records, so I am still trying to connect from Databricks.
I followed the steps from this Microsoft link, both manually and using an init script, but the error seems to persist.
When I looked into my cluster event log, it says the init-script execution was successful.
Error message when I tried to establish the connection:
DPI-1047: Cannot locate a 64-bit Oracle Client library: "/databricks/driver/oracle_ctl//lib/libclntsh.so: cannot open shared object file: No such file or directory".
When I executed the following command
dbutils.fs.ls("/databricks/driver/")
there was no such directory.
This prompted me to post some questions here:
Does this mean the init script did not perform its job?
Is /databricks/driver/oracle_ctl a hidden directory for dbutils.fs.ls?
The error message points to /databricks/driver/oracle_ctl//lib/libclntsh.so, but when I manually inspected the downloaded Oracle client, there was no folder called lib, although libclntsh.so exists in the main directory. Is Databricks checking the wrong directory for libclntsh.so?
Does this connection still work for others?
Syntax for connection: cx_Oracle.connect(user= user_name, password= password,dsn= IP+':'+Port+'/'+DB_name)
The above syntax works fine when connecting from an on-premises machine.
Try installing the latest major release of cx_Oracle, which has been renamed to python-oracledb; see the release announcement.
This version doesn't need Oracle Instant Client. The API is the same as cx_Oracle, although obviously the name is different.
If I understand the instructions, your init script would do something like:
/databricks/python/bin/pip install oracledb
Application code would be like:
import oracledb
connection = oracledb.connect(user='scott', password=mypw, dsn='yourdbhostname/yourdbservicename')
with connection.cursor() as cursor:
    for row in cursor.execute('select city from locations'):
        print(row)
Resources:
Home page: oracle.github.io/python-oracledb/
Quick start: Quick Start python-oracledb Installation
Documentation: python-oracledb.readthedocs.io/en/latest/index.html
PyPI: pypi.org/project/oracledb/
Source: github.com/oracle/python-oracledb
Upgrading: Upgrading from cx_Oracle 8.3 to python-oracledb
I changed the path from "/databricks/driver/oracle_ctl/" to "/databricks/driver/oracle_ctl/instantclient" in the init script, and that error does not appear anymore.
Please use the following init script instead:
dbutils.fs.put("dbfs:/databricks/<init-script-folder-name>/oracle_ctl.sh","""
#!/bin/bash
sudo apt-get install -y libaio1
wget --quiet -O /tmp/instantclient-basiclite-linuxx64.zip https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip
unzip /tmp/instantclient-basiclite-linuxx64.zip -d /databricks/driver/oracle_ctl/
mv /databricks/driver/oracle_ctl/instantclient* /databricks/driver/oracle_ctl/instantclient
sudo echo 'export LD_LIBRARY_PATH="/databricks/driver/oracle_ctl/instantclient/"' >> /databricks/spark/conf/spark-env.sh
sudo echo 'export ORACLE_HOME="/databricks/driver/oracle_ctl/instantclient/"' >> /databricks/spark/conf/spark-env.sh
""", True)
Notes:
The above init script was advised by a Databricks employee and can be found here.
As mentioned by Christopher Jones in one of the comments, cx_Oracle has recently been renamed to python-oracledb, which comes in a Thin and a Thick mode.
You will get the above error if you don't have the Oracle Instant Client on your cluster.
To resolve the above error in Azure Databricks, please follow this code:
%sh
mkdir -p /opt/oracle
cd /opt/oracle
wget --quiet -O instantclient-basiclite-linuxx64.zip https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip
unzip -o instantclient-basiclite-linuxx64.zip
# the zip extracts into a versioned directory, e.g. instantclient_21_x
export ORACLE_HOME=$(ls -d /opt/oracle/instantclient_*)
export LD_LIBRARY_PATH=$ORACLE_HOME
export PATH=$ORACLE_HOME:$PATH
To create an init script, use the following code, as per the official doc:
dbutils.fs.put("dbfs:/databricks/<init-script-folder>/oracle_ctl.sh","""
#!/bin/bash
wget --quiet -O /tmp/instantclient-basiclite-linuxx64.zip https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip
unzip /tmp/instantclient-basiclite-linuxx64.zip -d /databricks/driver/oracle_ctl/
sudo echo 'export LD_LIBRARY_PATH="/databricks/driver/oracle_ctl/"' >> /databricks/spark/conf/spark-env.sh
sudo echo 'export ORACLE_HOME="/databricks/driver/oracle_ctl/"' >> /databricks/spark/conf/spark-env.sh
""", True)
To read data from an Oracle database in PySpark, follow this article by Emrah Mete.
For more information, refer to this official document:
https://docs.databricks.com/data/data-sources/oracle.html#oracle

AIP backup - using Docker

I am using the cloned dspace 6-x branch and installed it via Docker. Can someone help me with backing up my local database (communities, collections, items) to a remote database?
According to the documentation we need to use the command:
dspace packager -s -t AIP -e eperson -p parent-handle file-path
But it returns an error: dspace is not a command
Could anyone help me transfer my local database to my remote repo?
Thanks!
Moving publications to a new repository will be a more substantial undertaking!
But your current problem seems to be simply that you are either not in the right container or not in the right directory when executing the dspace command; that is why it is "not found". Make sure to execute dspace in the dspace container and to specify the right/complete path. The dspace command is located in
/path/to/your/dspace-deployment-directory/bin.
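For example, a minimal sketch assuming the container is named dspace and DSpace is installed under /dspace inside it (both names are assumptions; adjust them to your docker-compose setup):
docker ps                         # find the actual name of your dspace container
docker exec -it dspace /dspace/bin/dspace packager -s -t AIP -e eperson -p parent-handle file-path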

How to setup Debezium for Kafka running in Docker for MSSQL Server

I am new to Debezium, Kafka, and Docker. I have successfully installed Docker and it is running on my localhost.
I am attempting to go through the Debezium tutorial at: https://github.com/debezium/debezium-examples/blob/master/tutorial/README.md#debezium-tutorial
I went to the section for SQL Server, and the first step says to # Start the topology as defined in https://debezium.io/docs/tutorial/. I successfully ran through that tutorial, but it is for MySQL and not MSSQL Server. Anyway, I went back to the debezium-tutorial, and the first line tells me to run:
export DEBEZIUM_VERSION=1.1
docker-compose -f docker-compose-sqlserver.yaml up
The tutorial does not discuss how to create docker-compose-sqlserver.yaml. I checked Debezium's GitHub site for this file and it is not there. Am I supposed to create this file manually, or am I missing something in the steps?
In order to get Debezium to work, am I supposed to create and run a SQL Server instance in Docker, or can I use the instance that is running on my localhost?
The Docker Compose file is included in the tutorial repository:
git clone https://github.com/debezium/debezium-examples.git
cd debezium-examples/tutorial
export DEBEZIUM_VERSION=1.1
docker-compose -f docker-compose-sqlserver.yaml up
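That Compose file also brings up its own SQL Server container, so you don't need the instance on your localhost for the tutorial. Once the topology is up, the README in the same directory continues roughly like this (a sketch; the register-sqlserver.json connector configuration ships in the tutorial directory, so check your clone for the exact file name):
# Register the SQL Server connector with Kafka Connect
curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" http://localhost:8083/connectors/ -d @register-sqlserver.json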

How to import .bacpac into docker Sqlserver?

I installed SQL Server on my Mac in a Docker container, following the instructions from this article.
I ran the container with Kitematic and managed to connect to the server using Navicat Essentials for SQL Server.
The server has four databases and I can create new ones, but, ideally, I would like to import an existing database as a .bacpac.
The instructions from this answer have been of use to me in the past. Can I run something similar within the container? Or, more generally, is there a way to import a database in the container?
Hi all! We finally have a preview ready for sqlpackage that is built on dotnet core and is cross-platform! Below are the links to download from. They are evergreen links, i.e. each day a new build is uploaded. This way any checked in bug fix is available the next day. Included in the .zip file is the preview EULA.
linux
https://go.microsoft.com/fwlink/?linkid=873926
osx
https://go.microsoft.com/fwlink/?linkid=873927
windows
https://go.microsoft.com/fwlink/?linkid=873928
Release notes:
The /p:CommandTimeout parameter is hardcoded to 120
Build and deployment contributors are not supported
a. Need to move to .NET Core 2.1 where System.ComponentModel.Composition.dll is supported
b. Need to handle case-sensitive paths
SQL CLR UDT types are not supported.
a. This includes SQL Server Types SqlGeography, SqlGeometry, & SqlHierarchyId
Older .dacpac and .bacpac files that use Json serialization are not supported
Referenced .dacpacs (e.g. master.dacpac) may not resolve due to issues with case-sensitive file systems
For lack of a better method, please provide any feedback you have here on this GitHub issue.
Thanks for giving it a try and letting us know how it goes!
https://github.com/Microsoft/mssql-docker/issues/135#issuecomment-389245587
EDIT: I've made you a Docker image for this
https://hub.docker.com/r/samuelmarks/mssql-server-fts-sqlpackage-linux/
Example of setting up a container, creating a database, copying a .bacpac file over, and importing it into aforementioned database:
docker run -d -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<YourStrong!Passw0rd>' -p 1433:1433 --name sqlfts0 samuelmarks/mssql-server-fts-sqlpackage-linux
docker exec -it sqlfts0 /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '<YourStrong!Passw0rd>' -Q 'CREATE DATABASE MyDb0'
docker cp ~/Downloads/foo.bacpac sqlfts0:/opt/downloads/foo.bacpac
docker exec -it sqlfts0 dotnet /opt/sqlpackage/sqlpackage.dll /tsn:localhost /tu:SA /tp:'<YourStrong!Passw0rd>' /A:Import /tdn:MyDb0 /sf:foo.bacpac
It looks like Microsoft has implemented support for this in sqlpackage, with documentation!
You will have to add sqlpackage to your container.
You can download it here. (optionally, direct link to linux package here, hopefully doesn't change)
The following are instructions for running this from a Windows machine; obviously it's the bare minimum to get it working. Please change the passwords, and probably put this in a docker-compose.yml for re-use.
I unzip the above package into a folder 'c:\sqlpackage' (my Windows docker run doesn't allow relative paths), and then mount that into the container along with the bacpac, like so:
docker run -d -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Asdf1234" -v c:\sqlpackage:/opt/sqlpackage -v c:\yourdb.bacpac:/tmp/yourdb.bacpac -p 1433:1433 --name mssql-server-example microsoft/mssql-server-linux:2017-latest
Here is what a *nix user could run instead:
docker run -d -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Asdf1234' -v "$(pwd)/sqlpackage:/opt/sqlpackage" -v "$(pwd)/yourdb.bacpac:/tmp/yourdb.bacpac" -p 1433:1433 --name mssql-server-example microsoft/mssql-server-linux:2017-latest
and finally, attach to your container and run:
/opt/sqlpackage/sqlpackage /a:Import /tsn:. /tdn:targetdbname /tu:sa /tp:Asdf1234 /sf:/tmp/yourdb.bacpac
After this, you should be able to connect with SSMS to localhost, username and password as you provide them above, and see 'targetdbname'! These are mostly notes I wrote for myself but I'm sure others could use them too.
You can use free Azure Data Studio from Microsoft. Once you have it installed, install the extension "Admin Pack for SQL Server" from Microsoft. Then you can import bacpac files with ease.
It seems this is not a supported feature of the Linux implementation.
See this link.

How can I attach a database to an app in Heroku?

I'm using Heroku's Postgres addon, and I created a new production database from the Heroku Postgres addon page.
I didn't add it directly to my app using the Resources page of my app.
Now I want to attach this database to my app so it'll be recognized by the heroku pg command.
I'm able to use the database, by the way, after setting the DATABASE_URL config var of my app to point to it, but the heroku pg command doesn't recognize it yet.
Additional info: the previous database was a Shared one, and the new one is a Production one.
Heroku add-ons may now be attached across applications and multiple times on a single app.
heroku addons:attach ADDON_NAME -a APP_NAME
Source: https://devcenter.heroku.com/changelog-items/646
To find the name of your add-on, run:
heroku addons
Source: https://devcenter.heroku.com/articles/managing-add-ons
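A hypothetical end-to-end example (the add-on name postgresql-curved-12345 and the app name my-app are placeholders for whatever heroku addons reports for your setup):
heroku addons -a my-app                                  # list add-ons and note the database's add-on name
heroku addons:attach postgresql-curved-12345 -a my-app   # attach it to the app
heroku pg:info -a my-app                                 # the attached database should now be listed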
Did you add the database using the app-independent https://postgres.heroku.com/ site? Or did you just create a postgresql database in your Heroku control panel?
If you created your database on https://postgres.heroku.com/, you will not see the database via your heroku pg:info command. What you can do to add your database to your application, however, would be to:
Log into https://postgres.heroku.com/.
Click on the database you want to attach to your application.
Under 'Connection Settings', click the configuration button at the top right.
Then click the 'URL' option.
Copy your database URL; this should be something like "postgres://blah:blah@ec2-23-23-122-88.compute-1.amazonaws.com:5432/omg".
In your application, on the command line, run heroku config:set DATABASE_URL=postgres://blah:blah@ec2-23-23-122-88.compute-1.amazonaws.com:5432/omg
What we did there was assign your database to the DATABASE_URL environment variable in your application. This is the variable that's used by default when you provision databases locally to your application, so theoretically, assigning this value should work just fine for you.
To get a database that you created at https://postgres.heroku.com/ attached to the Heroku app you are working on, you can't use any of the pg backup commands, and as far as I can tell there is no supported Heroku way of attaching such a database to an app.
You can however create a backup of your database using pg_dump and then use pg_restore to populate your new database that is attached to your app:
pg_dump -i -h hostname -p 5432 -U username -F c -b -v -f "backup-filename" database_name
Once that is complete you can populate your new database with:
pg_restore -i -h new_hostname -p 5432 -U new_username -d new_database_name -v "same_backup_filename"
Even if you are upgrading from the "basic plan" to the "crane plan", you still have to do a backup and restore, but since those databases are already attached to your app, you have the advantage of being able to use the heroku backup commands.
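With the current CLI, that backup-and-restore flow for attached databases looks roughly like this (a sketch; my-app, b101, and HEROKU_POSTGRESQL_CRANE_URL are placeholders for your app name, backup ID, and target attachment):
heroku pg:backups:capture -a my-app                                   # snapshot the current database
heroku pg:backups -a my-app                                           # note the backup ID, e.g. b101
heroku pg:backups:restore b101 HEROKU_POSTGRESQL_CRANE_URL -a my-app  # restore into the new database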
