How to import Linux logs into ELK stack for log analysis?

I'm building a log analysis environment for analyzing Linux logs such as /var/log/auth.log, /var/log/cron, /var/log/syslog, etc. The goal is to be able to upload such a log file and analyze it properly with Kibana/Elasticsearch. To do so, I created the .conf file shown below, which includes the patterns to parse auth.log and the information needed in the input and output sections. Unfortunately, when connecting to Kibana I cannot see any data in the "Discover" panel and cannot find the related "index pattern". I tested the grok patterns and they work well.
input {
  file {
    type => "linux-auth"
    path => [ "/home/ubuntu/logs/auth.log" ]
  }
}
filter {
  if [type] == "linux-auth" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:time} %{WORD:method}\[%{POSINT:auth_pid}\]\: %{DATA:message} for %{DATA:user} from %{IPORHOST:IP_address} port %{POSINT:port}" }
    }
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:time} %{WORD:method}\[%{POSINT:auth_pid}\]\:%{DATA:message} for %{GREEDYDATA:username}" }
    }
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
Example of auth.log:
2018-12-02T14:01:00Z sshd[0000001]: Accepted keyboard-interactive/pam for root from 185.118.167.241 port 64965 ssh2
2018-12-02T14:02:00Z sshd[0000002]: Failed keyboard-interactive/pam for invalid user ubuntu from 36.104.140.175 port 57512 ssh2
2018-12-02T14:03:00Z sshd[0000003]: pam_unix(sshd:session): session closed for user root

Here are a few recommendations:
You can run Logstash in debug mode, as shown below, to see the exact error.
bin/logstash --debug -f file_path.conf
Add a stdout output to the output section so the incoming events are printed to the console; that way you can be sure Logstash is reading the file correctly.
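For example, a temporary debugging output might look like the sketch below (my addition, not part of the original config); you can keep it alongside the elasticsearch output or swap it in while testing:
output {
  stdout { codec => rubydebug }
}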
Most importantly, since you want to read system logs and visualize the data, I would recommend using Filebeat with its system module. Filebeat is built specifically for use cases like reading from log files.
The setup is simple: in Filebeat's system module you just specify which system log files to read, point the output at your Elasticsearch endpoint, and run Filebeat.
It will start reading the files and pushing the data to Elasticsearch.
You also don't need to build custom dashboards in Kibana (as you would with Logstash); Filebeat comes with pre-configured dashboards for system logs.
You can read more in the official Filebeat documentation; a minimal configuration sketch follows.
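For illustration, a minimal filebeat.yml for this use case might look like the sketch below. The log path, Elasticsearch host, and Kibana host are assumptions taken from the question and will differ in your environment:
filebeat.modules:
  - module: system
    syslog:
      enabled: true
    auth:
      enabled: true
      var.paths: ["/home/ubuntu/logs/auth.log"]

output.elasticsearch:
  hosts: ["elasticsearch:9200"]

# Load the pre-built Kibana dashboards for the system module (Kibana host is assumed).
setup.dashboards.enabled: true
setup.kibana:
  host: "kibana:5601"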

Related

How to deploy a smart contract on KADENA

Please how do I deploy a smart contract to the testnet or mainnet WITHOUT Chainweaver web UI? I know I need a YAML file for that, but what do I do with it and where exactly do I send it?
Do I need to run a pact server, the chainweb API, or something else? I couldn't find any guide for that.
Step 0: Install the Prerequisites
Install Pact
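One possible route, stated as an assumption rather than the official procedure, is to grab a prebuilt binary from the Kadena releases page and put it on your PATH:
# One possible route (an assumption): download a prebuilt pact binary for your
# platform from https://github.com/kadena-io/pact/releases, unpack it, and put
# it somewhere on your PATH, e.g.:
mv pact /usr/local/bin/
# Running `pact` with no arguments should drop you into the Pact REPL.
pact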
Step 1: Create the Pact Module
We will be deploying the following pact module. For simplicity's sake the pact code we are deploying is not using a transaction's data field (read-keyset is one such pact function that makes use of this field). Otherwise, the accompanying YAML file will have to change. We also assume that this pact code is saved as test.pact.
(namespace 'free)
(module someModuleName AUTONOMOUS
  (defcap AUTONOMOUS ()
    true)
  (defun dummy ()
    (+ 1 2)
  )
)
Step 2: Create the YAML file
The following YAML file will be used along with pact -a to sign and produce the escaped JSON needed to submit a transaction to Testnet.
codeFile: /Users/linda.ortega.cordoves/pact/test.pact
networkId: testnet04
publicMeta:
  chainId: "0"
  gasLimit: 1000
  ttl: 28000
  creationTime: 1585056536
  sender: "testing"
  gasPrice: 0.00001
keyPairs:
  - public: 1d877a7b4524b6724a6ae708cf9ea7396d6ee9d17b10098b7793800177669c1d
    secret: 33fcd94b8a42057bd4e3190f8983e3a73ec96c3f60df95c9e2aa3f13602c714f
nonce: step02
This file makes a couple of assumptions that might change depending on your specific implementation:
The full path of the pact we want to upload is: /Users/linda.ortega.cordoves/pact/test.pact
We want to submit a transaction to Testnet, whose network id is testnet04
We want to submit to the zeroth chain on Testnet, which has a chain id of "0"
That the current creation time in UNIX Epoch time is 1585056536 seconds. This value MUST CHANGE, so calculate it by running date +%s on the command line (or with any UNIX epoch time converter).
That "testing" is the account paying for gas (aka the "sender") on the Testnet network. To create a Testnet account and fund it with some coins, navigate to the Testnet Coin Faucet. You will need to have generated an ED25519 public-private key pair to use the faucet. You can use pact -g to generate this key pair (see the sketch after this list). Make sure to save it somewhere safe.
That the key pair specified in "keyPairs" corresponds to the key pair used to create the gas payer account, which in this example is "testing". This must change from the defaults provided.
That we saved this YAML file as /Users/linda.ortega.cordoves/pact/test.yaml.
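As a hedged illustration of the pact -g step mentioned above, the commands below capture the generated key pair in a file; the file name is an arbitrary assumption, and the exact output format can vary between pact versions:
# Generate an ED25519 public/secret key pair and save it somewhere safe.
pact -g > testnet-keypair.yaml
cat testnet-keypair.yaml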
Step 3: Submit Transaction to Testnet
We will now submit the example pact module we created by hitting the /send endpoint of a Testnet node. In the command line, run the following command:
pact -a /Users/linda.ortega.cordoves/pact/test.yaml | curl -H "Content-Type: application/json" -d @- https://us1.testnet.chainweb.com/chainweb/0.0/testnet04/chain/0/pact/api/v1/send
Some of the assumptions we made when creating the YAML file become important here:
The network id must match the node endpoint we submit to. Since the network id we chose is testnet04, we must submit to /chainweb/0.0/testnet04/. And the node we submit to (in this case us1.testnet.chainweb.com) must have this network id.
The chain id must also match. We chose chain id of "0", so we must submit to /chain/0/.
That we saved the yaml file to /Users/linda.ortega.cordoves/pact/test.yaml.
If we submitted the transaction successfully we will see the following:
{"requestKeys":["Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek"]}
This means that our transaction was successfully added to the blockchain's mempool and is waiting to be mined. Make note of the request key returned from /send as we will use it when polling for the result of the transaction.
It is also possible that our transaction will fail node validation when we attempt to submit it. If this happens, you will receive a validation failure message instead of the request key.
Step 4: Verify the Result of the Transaction
We will now try to get the results of the transaction we submitted to the Testnet network by hitting the /poll endpoint. In the command line, run the following command:
curl -H "Content-Type: application/json" -d '{"requestKeys":["Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek"]}' -X POST https://us1.testnet.chainweb.com/chainweb/0.0/testnet04/chain/0/pact/api/v1/poll
Again, we make a couple of assumptions in this step:
That the Testnet node we want to poll from is us1.testnet.chainweb.com.
That the network id is testnet04. Note that part of the endpoint is /chainweb/0.0/testnet04/.
That the chain id we are polling from is chain "0". Note that part of the endpoint is /chain/0/.
That the request key we are polling for is Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek.
If the transaction was successfully mined and thus added to the blockchain, then /poll will return the following JSON object:
{
  "Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek": {
    "gas": 58,
    "result": {
      "status": "success",
      "data": "Loaded module free.linda-test, hash n0g99JhWnO2F7X7f8o_zcAiSHBAWS_QSAfn4yUaqpps"
    },
    "reqKey": "Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek",
    "logs": "0KzZQDJmEgnAKvPnO20UeGoE7KGCIN22nhjraeyp1aw",
    "metaData": {
      "blockTime": 1585056990071469,
      "prevBlockHash": "dIYmpjBQge9yw0Yzhn0Sau-wJFwsLOFBmGbV3_0xYeE",
      "blockHash": "yULpC5C-7tzRcc9sWm-f1bOC3JDvtxwT61hruW0aXrA",
      "blockHeight": 261712
    },
    "continuation": null,
    "txId": 266084
  }
}
Please note that it is possible that a transaction fails at the pact level, but still gets added to the blockchain and gas gets charged. If this happens the result.status field will be failure.
If the transaction has not been mined yet, /poll will return {}. Keep retrying until you receive the JSON object shown above.
source: https://gist.github.com/LindaOrtega/1c219f887d9782c6745dbd827bdbfb4d

Use a config file in jenkins pipeline

I have a Jenkins pipeline job, which is parametrized. It will take a string input and pass that to a bat script.
Input : https://jazz-test3.com/web/projects/ABC
I want to include a config file; where should it be placed (perhaps locally on the Jenkins machine, or is there a better option)?
Sample config file: server_config.txt
test1 : https://jazz-test1.com
test2 : https://jazz-test2.com
test3 : https://jazz-test3.com
Once the user input is received, the job should access this file and check whether the server in the input link is present in the config file.
If it is present, call the bat script; if not, throw an error saying the server is not supported.
Expected result:
Input 1 : https://jazz-test3.com/web/projects/ABC
Job should run
Input 2 : https://jazz-test4.com/web/projects/ABC
Job should fail with error message
What is the best way of achieving this?
Can we do it directly from the pipeline, or is a separate script required?
Thank you for the help!!
Jenkins supports configuration files stored on the controller via the config-file-provider plugin. See Manage Jenkins | Managed Files for the collection of managed configuration files.
You can add a JSON config file with id "test-servers" that stores your test server configuration:
{
  "test1": "https://jazz-test1.com",
  "test2": "https://jazz-test2.com"
}
Then the job pipeline would do something like:
def servers = [:]

pipeline {
  agent any
  stages {
    stage('Setup') {
      steps {
        configFileProvider([configFile(fileId: 'test-servers', variable: 'test_servers_json')]) {
          script {
            // Read the JSON data from the managed config file.
            echo "Loading configured servers from file '${env.test_servers_json}' ..."
            servers = readJSON(file: env.test_servers_json)
          }
        }
      }
    }
    stage('Test') {
      steps {
        script {
          // Check whether the requested server is supported.
          if (!servers.containsKey(params.server)) {
            error "Unsupported server"
          }
          build job: '...', parameters: [...]
        }
      }
    }
  }
}
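Since the job parameter in the question is a full project link (e.g. https://jazz-test3.com/web/projects/ABC) rather than a bare server URL, you may also need to derive the server part before the lookup. The helper below is a hypothetical sketch, not part of the plugin; the parameter name server and the values-based check are assumptions, and constructing java.net.URI may require script approval in the Jenkins sandbox.
// Hypothetical helper: reduce https://jazz-test3.com/web/projects/ABC to https://jazz-test3.com
def serverOf(String link) {
  def uri = new java.net.URI(link)
  return "${uri.scheme}://${uri.host}"
}
// Inside the Test stage, match against the configured values instead of the keys:
// if (!servers.values().contains(serverOf(params.server))) {
//   error "Unsupported server: ${serverOf(params.server)}"
// }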

Changing CloudWatch log output from a "Kinesis Data Analytics for Apache Flink" app

Does anyone know how to change the CloudWatch log output from a "Kinesis Data Analytics for Apache Flink" app?
There are two things I'd like to change:
The fields in the JSON written to CloudWatch
The contents/format of the "message" field (i.e., format of each "LOG.info", "LOG.warn", etc. - line)
#1 is most important.
The default format written to CloudWatch looks like this:
{
  "locationInformation": "",
  "logger": "",
  "message": "",
  "threadName": "",
  "applicationARN": "arn:aws:kinesisanalytics:eu-west-1:...",
  "applicationVersionId": "23",
  "messageSchemaVersion": "1",
  "messageType": "INFO"
}
Is it somehow possible to change the output, so that each CloudWatch entry becomes this instead:
{
  "EventTime": "20201224T23:59:59.999Z",
  "LogLevel": 5,
  "EventSource": "ApplicationURI/Name",
  "Message": "foobar"
}
Using SLF4J is mentioned here (https://docs.aws.amazon.com/kinesisanalytics/latest/java/cloudwatch-logs-writing.html), although the format mentioned on the same page is the default described above.
The pom.xml file of the Java project includes aws-java-sdk-logs. It also excludes log4j and slf4j.
<artifactSet>
  <excludes>
    <exclude>org.slf4j:*</exclude>
    <exclude>log4j:*</exclude>
  </excludes>
</artifactSet>
I've had a look at this:
https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/java-dg-logging.html
But when testing locally, changing log4j.properties only changes the "message" field of the log entry. That file does not seem to be loaded when running on AWS, despite existing in the root directory of the .jar file. Even if I could make aws-java-sdk-logs pick up changes in log4j.properties, changing that file doesn't seem capable of changing the JSON fields written to CloudWatch (only the "message" format).
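For reference, this is the kind of local log4j.properties change that only affects the rendered "message" text and not the surrounding JSON fields. It is a sketch with an assumed console appender and pattern, not the configuration AWS uses:
# Assumed local-testing configuration (log4j 1.x); only the message line format changes.
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c - %m%n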
When the app starts on AWS, I can see it prints this:
-Dlog4j.configuration=file:/etc/flink/log4j-console.properties
-Dlogback.configurationFile=file:/etc/flink/logback-console.xml
I was hoping to copy them out at startup, change them, and include them while setting the properties to point at the JAR. However, both of these files appear to be empty when read at startup from inside the Flink app code.
Is there some relatively straightforward way to:
Rename/remove/add fields to the JSON written to CloudWatch?
Change the format of the "message" field?

How to log Stackdriver log messages correlated by trace id using stdout Go 1.11

I'm using Google App Engine Standard Environment with the Go 1.11 runtime. The documentation for Go 1.11 says "Write your application logs using stdout for output and stderr for errors". The migration from Go 1.9 guide also suggests not calling the Google Cloud Logging library directly but instead logging via stdout.
https://cloud.google.com/appengine/docs/standard/go111/writing-application-logs
With this in mind, I've written a small HTTP service (code below) to experiment with logging to Stackdriver using JSON output to stdout.
When I print plain text messages they appear as expected in the Logs Viewer panel under textPayload. When I pass a JSON string they appear under jsonPayload. So far, so good.
So, I added a severity field to the output string and Stackdriver Log Viewer successfully categorizes the message according to the levelled logging NOTICE, WARNING etc.
https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry
The docs say to set the trace identifier to correlate log entries with the originating request log. The trace ID is extracted from the X-Cloud-Trace-Context header set by the container.
Simulate it locally using curl -v -H 'X-Cloud-Trace-Context: 1ad1e4f50427b51eadc9b36064d40cc2/8196282844182683029;o=1' http://localhost:8080/
However, this does not cause the messages to be threaded by request, but instead the trace property appears in the jsonPayload object in the logs. (See below).
Notice that severity has been interpreted as expected and does not appear in the jsonPayload. I had expected the same to happen for trace, but instead it appears to be unprocessed.
How can I achieve nested messages within the original request log message? (This must be done using stdout on Go 1.11 as I do not wish to log directly with the Google Cloud logging package).
What exactly is GAE doing to parse the stdout stream from my running process? (In the setup guide for VMs on GCE there is something about installing an agent program to act as a conduit to Stackdriver logging- is this what GAE has installed?)
app.yaml file looks like this:
runtime: go111
handlers:
- url: /.*
  script: auto
The Go program is:
package main
import (
    "fmt"
    "log"
    "net/http"
    "os"
    "strings"
)
var projectID = "glowing-market-234808"
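// parseXCloudTraceContext splits a header of the form "TRACE_ID/SPAN_ID;o=OPTION".
// Note: it assumes the header is present and well formed; with an empty header the
// Index calls return -1 and the slice expressions below would panic.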
func parseXCloudTraceContext(t string) (traceID, spanID, traceTrue string) {
    slash := strings.Index(t, "/")
    semi := strings.Index(t, ";")
    equal := strings.Index(t, "=")
    return t[0:slash], t[slash+1 : semi], t[equal+1:]
}
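// sayHello writes a single JSON log line to stdout, embedding the trace and span
// parsed from the incoming X-Cloud-Trace-Context header, then responds with "Hello".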
func sayHello(w http.ResponseWriter, r *http.Request) {
    xTrace := r.Header.Get("X-Cloud-Trace-Context")
    traceID, spanID, _ := parseXCloudTraceContext(xTrace)
    trace := fmt.Sprintf("projects/%s/traces/%s", projectID, traceID)
    warning := fmt.Sprintf(`{"trace":"%s","spanId":"%s", "severity":"WARNING","name":"Andy","age":45}`, trace, spanID)
    fmt.Fprintf(os.Stdout, "%s\n", warning)
    message := "Hello"
    w.Write([]byte(message))
}
func main() {
    http.HandleFunc("/", sayHello)
    port := os.Getenv("PORT")
    if port == "" {
        port = "8080"
    }
    log.Fatal(http.ListenAndServe(fmt.Sprintf(":%s", port), nil))
}
Output shown in the Log viewer:
...,
jsonPayload: {
age: 45
name: "Andy"
spanId: "13076979521824861986"
trace: "projects/glowing-market-234808/traces/e880a38fb5f726216f94548a76a6e474"
},
severity: "WARNING",
...
I have solved this by adjusting the program to use logging.googleapis.com/trace in place of trace and logging.googleapis.com/spanId in place of spanId.
warning := fmt.Sprintf(`{"logging.googleapis.com/trace":"%s","logging.googleapis.com/spanId":"%s", "severity":"WARNING","name":"Andy","age":45}`, trace, spanID)
fmt.Fprintf(os.Stdout, "%s\n", warning)
It seems that GAE is using the logging agent google-fluentd (a modified version of the fluentd log data collector).
See this link for a full explanation.
https://cloud.google.com/logging/docs/agent/configuration#special-fields
[update] June 25th, 2019: I've written a Logrus plugin that helps thread log entries by HTTP request. It's available on GitHub at https://github.com/andyfusniak/stackdriver-gae-logrus-plugin.
[update] April 3rd, 2020: I've since switched to using Cloud Run, and the Logrus plugin appears to work fine with this platform as well.

Grails - log into files via appender

Hi everyone,
Thank you in advance for any help provided.
I'm running into a problem trying to write my logs to a file, and I do not understand what is wrong with the syntax I am using.
Here are my settings for logging into Config.groovy:
[SNIP]
log4j = {
    // Example of changing the log pattern for the default console
    // appender:
    appenders {
        // console name:'stdout', layout:pattern(conversionPattern: '%c{2} %m%n')
        file name: "scraperServiceLogger",
             file: "target/scraperService.log"
    }
    error 'org.codehaus.groovy.grails.web.servlet',        // controllers
          'org.codehaus.groovy.grails.web.pages',          // GSP
          'org.codehaus.groovy.grails.web.sitemesh',       // layouts
          'org.codehaus.groovy.grails.web.mapping.filter', // URL mapping
          'org.codehaus.groovy.grails.web.mapping',        // URL mapping
          'org.codehaus.groovy.grails.commons',            // core / classloading
          'org.codehaus.groovy.grails.plugins',            // plugins
          'org.codehaus.groovy.grails.orm.hibernate',      // hibernate integration
          'org.springframework',
          'org.hibernate',
          'net.sf.ehcache.hibernate',
          'grails.app.'
    error scraperServiceLogger: "grails.app.service.ScraperService"
    warn 'org.mortbay.log'
    debug 'grails.test.*'
}
[/SNIP]
Here is how I am trying to use it within my ScraperService.groovy:
[SNIP]
log.error "test"
[/SNIP]
The file I want to write to is created correctly, but the logging is only displayed on the console.
Any help greatly appreciated :)
All the best.
Try using the full package path to your service:
error scraperServiceLogger: "grails.app.service.com.whatever.ScraperService"
