Grails - log into files via appender - file

Hi everyone,
Thank you in advance for any help provided.
I'm running into a problem trying to write my logs to a file, and I don't understand what is wrong with the syntax I am using.
Here are my settings for logging into Config.groovy:
[SNIP]
log4j = {
    // Example of changing the log pattern for the default console appender:
    appenders {
        // console name: 'stdout', layout: pattern(conversionPattern: '%c{2} %m%n')
        file name: "scraperServiceLogger",
             file: "target/scraperService.log"
    }

    error 'org.codehaus.groovy.grails.web.servlet',        // controllers
          'org.codehaus.groovy.grails.web.pages',          // GSP
          'org.codehaus.groovy.grails.web.sitemesh',       // layouts
          'org.codehaus.groovy.grails.web.mapping.filter', // URL mapping
          'org.codehaus.groovy.grails.web.mapping',        // URL mapping
          'org.codehaus.groovy.grails.commons',            // core / classloading
          'org.codehaus.groovy.grails.plugins',            // plugins
          'org.codehaus.groovy.grails.orm.hibernate',      // hibernate integration
          'org.springframework',
          'org.hibernate',
          'net.sf.ehcache.hibernate'
          'grails.app.'

    error scraperServiceLogger: "grails.app.service.ScraperService"
    warn  'org.mortbay.log'
    debug 'grails.test.*'
}
[/SNIP]
Here is how I am trying to use it within my ScraperService.groovy:
[SNIP]
log.error "test"
[/SNIP]
The file I want to write to is created correctly, but the log output only shows up on the console.
Any help greatly appreciated :)
All the best.

Try using the full package path to your service:
error scraperServiceLogger: "grails.app.service.com.whatever.ScraperService"
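For reference, Grails builds these logger names as grails.app.<artefactType>.<fully.qualified.ClassName>. In context, the relevant part of Config.groovy would look like this (a sketch; com.whatever stands in for your real package):

log4j = {
    appenders {
        file name: "scraperServiceLogger",
             file: "target/scraperService.log"
    }
    // artefact type "service" + fully qualified class name
    error scraperServiceLogger: "grails.app.service.com.whatever.ScraperService"
}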

Related

How to import Linux logs into ELK stack for log analysis?

I'm building a log analysis environment with the purpose of analyzing Linux logs such as /var/log/auth.log, /var/log/cron, /var/log/syslog, etc. The goal is to be able to upload such a log file and analyze it properly with Kibana/Elasticsearch. To do so, I created a .conf file as seen below, which includes the proper patterns to parse auth.log and the information needed in the input and output sections. Unfortunately, when connecting to Kibana I cannot see any data in the "Discover" panel and cannot find the related index pattern. I tested the grok patterns and they work well.
input {
  file {
    type => "linux-auth"
    path => [ "/home/ubuntu/logs/auth.log" ]
  }
}
filter {
  if [type] == "linux-auth" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:time} %{WORD:method}\[%{POSINT:auth_pid}\]\: %{DATA:message} for %{DATA:user} from %{IPORHOST:IP_address} port %{POSINT:port}" }
    }
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:time} %{WORD:method}\[%{POSINT:auth_pid}\]\:%{DATA:message} for %{GREEDYDATA:username}" }
    }
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
Example of auth.log:
2018-12-02T14:01:00Z sshd[0000001]: Accepted keyboard-interactive/pam for root from 185.118.167.241 port 64965 ssh2
2018-12-02T14:02:00Z sshd[0000002]: Failed keyboard-interactive/pam for invalid user ubuntu from 36.104.140.175 port 57512 ssh2
2018-12-02T14:03:00Z sshd[0000003]: pam_unix(sshd:session): session closed for user root
Here are a few recommendations:
You can run Logstash in debug mode as below to check what the exact error is:
bin/logstash --debug -f file_path.conf
Add a stdout section to the output, which will print the incoming events so you can be sure Logstash is reading the file correctly.
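For example (a sketch; rubydebug is the standard pretty-printing codec):

output {
  stdout { codec => rubydebug }   # print each event to the console for debugging
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}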
Most importantly, since you mention you want to read system logs and visualize the data, I would recommend using Filebeat with its system module. Filebeat is built specifically for use cases like reading from files.
It is a simple setup: under the system module you just specify which system log files to read, point Filebeat at the Elasticsearch endpoint, and run it.
It will start reading and pushing the data to Elasticsearch.
Also, you don't need to build custom dashboards in Kibana (as you would in the Logstash case); Filebeat comes with pre-configured dashboards for system logs. See the sketch below.
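A minimal setup along those lines (a sketch, assuming Filebeat is installed, Elasticsearch is reachable at elasticsearch:9200, and Kibana is available for the dashboard setup):

# filebeat.yml (excerpt): point the output at your cluster
output.elasticsearch:
  hosts: ["elasticsearch:9200"]

Then enable the system module, load the bundled assets, and run:

filebeat modules enable system
filebeat setup      # loads the index template and the bundled Kibana dashboards
filebeat -e         # run in the foreground, logging to stderr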
You can read more in the official Filebeat documentation.

How do we load a WebVTT file in React?

This is my data structure:
WEBVTT
00:00:15.000 --> 00:00:17.951
At the left we can see...
00:00:18.166 --> 00:00:20.083
At the right we can see the...
00:00:20.119 --> 00:00:21.962
...the head-snarlers
I am trying to parse a WebVTT file. For this I wrote:
var data = require('./captions.en.vtt');
Error
client:47 ./src/captions.en.vtt
Module parse failed: C:\Users\sampr\Desktop\MyVIew\WebVtt\src\captions.en.vtt Invalid number (3:0)
You may need an appropriate loader to handle this file type.
Could someone please help me?
You need an appropriate loader for the file type (.vtt).
Install file-loader (npm install --save-dev file-loader) and you can use it inline like this:
var data = require('file-loader!./captions.en.vtt');
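Alternatively, you could register the loader once in your webpack config so the plain require works unchanged (a sketch, assuming you control webpack.config.js):

// webpack.config.js (excerpt)
module.exports = {
  module: {
    rules: [
      // emit .vtt files and resolve require() to their URL
      { test: /\.vtt$/, use: ['file-loader'] }
    ]
  }
};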
Using Next.js, I ended up using this solution:
...add a file called staticdata.js inside the pages/api folder. This will create a serverless function that loads the data from the file and returns it as a response.
Basically, the solution is to return the data from an API endpoint.
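A minimal sketch of that approach for the caption file here (the file name and location are assumptions):

// pages/api/staticdata.js
import fs from 'fs';
import path from 'path';

export default function handler(req, res) {
  // read the WebVTT file from the project root (adjust the path as needed)
  const filePath = path.join(process.cwd(), 'captions.en.vtt');
  const data = fs.readFileSync(filePath, 'utf8');
  res.status(200).send(data);
}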

Can't Get GAE + GWT + Objectify to Work

As the title says, I'm trying to create a GAE + GWT project using Objectify, but I can't even get it off the ground. I'm sure I'm missing something simple, but it doesn't seem to be working.
Here is what I've done so far:
Created a new project and added guava-17.0.jar, guava-gwt-17.0.jar, objectify-5.0.3.jar, and objectify-gwt-1.1.jar to my WEB-INF\lib folder. These are all the latest versions of these jars.
Ran the application. Sent a simple RPC command; the server responds, and the client successfully receives the response (onSuccess() is called).
Added the line <inherits name="com.googlecode.objectify.Objectify" /> to my gwt.xml file per Objectify-GWT's website, which is supposed to enable Objectify in GWT.
Ran the application. The application starts and the same RPC command is sent; the server receives and responds, but the client reports a failure (onFailure() is called).
I am using the boilerplate code that is pre-populated when you first create a new web application. For reference, here is the RPC command:
private void sendNameToServer() {
    // First, we validate the input.
    errorLabel.setText("");
    String textToServer = nameField.getText();
    if (!FieldVerifier.isValidName(textToServer)) {
        errorLabel.setText("Please enter at least four characters");
        return;
    }

    // Then, we send the input to the server.
    sendButton.setEnabled(false);
    textToServerLabel.setText(textToServer);
    serverResponseLabel.setText("");
    greetingService.greetServer(textToServer, new AsyncCallback<String>() {
        public void onFailure(Throwable caught) {
            // Show the RPC error message to the user
            dialogBox.setText("Remote Procedure Call - Failure");
            serverResponseLabel.addStyleName("serverResponseLabelError");
            serverResponseLabel.setHTML(SERVER_ERROR);
            dialogBox.center();
            closeButton.setFocus(true);
        }

        public void onSuccess(String result) {
            dialogBox.setText("Remote Procedure Call");
            serverResponseLabel.removeStyleName("serverResponseLabelError");
            serverResponseLabel.setHTML(result);
            dialogBox.center();
            closeButton.setFocus(true);
        }
    });
}
This is the error I receive after I try to make the RPC call:
[DEBUG] [my_app] - Validating units:
[INFO] [my_app] - Module my_app has been loaded
[ERROR] [my_app] - Errors in 'com/google/gwt/dev/jjs/SourceOrigin.java'
[ERROR] [my_app] - Line 77: The method synchronizedMap(new LinkedHashMap<SourceOrigin,SourceOrigin>(){}) is undefined for the type Collections
[ERROR] [my_app] - Errors in 'com/google/gwt/dev/util/StringInterner.java'
[ERROR] [my_app] - Line 29: No source code is available for type com.google.gwt.thirdparty.guava.common.collect.Interner<E>; did you forget to inherit a required module?
[ERROR] [my_app] - Line 29: No source code is available for type com.google.gwt.thirdparty.guava.common.collect.Interners; did you forget to inherit a required module?
To me it looks like Objectify is interfering with GWT. I know they're supposed to work together, so I'm not sure what I'm doing wrong. Any advice would be appreciated.
Use objectify-gwt 1.2. It's possible that 1.1 has some issues from merging a bad PR.
You can see a sample application that uses objectify-gwt to pass a GeoPt back and forth from the client here: https://github.com/stickfigure/objectify-gwt-test
You should get Objectify working on the server side before trying to do this kind of stuff. Objectify is a server-side persistence technology. Call it in your server code.
Add a try/catch in your service methods and print the stack trace of the exception on your server console. If you receive onFailure() in GWT, that means there is a failure on the server side; you have to find out what that failure is.
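As a sketch, in the boilerplate GreetingServiceImpl this could look like the following (the method body is a placeholder):

// GreetingServiceImpl.java (excerpt)
public String greetServer(String input) throws IllegalArgumentException {
    try {
        // ... server-side work, e.g. the Objectify calls, goes here ...
        return "Hello, " + input;
    } catch (Exception e) {
        // surface the real cause on the server console instead of only
        // seeing onFailure() on the client
        e.printStackTrace();
        throw new IllegalArgumentException(e.getMessage());
    }
}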
Now the second part is a piece of advice:
<inherits name="com.googlecode.objectify.Objectify" />
is a weird line to me. GWT doesn't have to know about your persistence layer.
Unless it's a revolutionary concept, I would recommend against using this type of technology, which takes control of your DB access out of your hands...

How can I get AppEngine to log info level only for my app?

So I've tried configuring AppEngine logging according to this guide, ensuring I've configured the logging.properties file to be used in web.xml. I've configured logging.properties the following way:
.level = WARNING
nilsnett.chinese.backend.level = INFO
The package name of my logging wrapper is nilsnett.chinese.backend. The problem is that even with this configuration, info-level log output from my app is filtered.
I've also tried the following config, which yielded the same result (including the logger class name at the end of the package name):
.level = WARNING
nilsnett.chinese.backend.JavaUtilLogger.level = INFO
To demonstrate that the logging.properties file is actually read, and that I do write info-level logging data in this service call: when I set .level = INFO, my app's info-level output does show up.
So my desired result is to get INFO-level and higher log output from my own packages, while other packages, like org.datanucleus, only show output at WARNING or more severe. In the example above, I want only the two lines marked with the purple star. Am I doing anything wrong?
Change your config to:
.level = WARNING
# Set the default logging level for the datanucleus loggers
DataNucleus.JDO.level=WARNING
DataNucleus.Persistence.level=WARNING
DataNucleus.Cache.level=WARNING
DataNucleus.MetaData.level=WARNING
DataNucleus.General.level=WARNING
DataNucleus.Utility.level=WARNING
DataNucleus.Transaction.level=WARNING
DataNucleus.Datastore.level=WARNING
DataNucleus.ClassLoading.level=WARNING
DataNucleus.Plugin.level=WARNING
DataNucleus.ValueGeneration.level=WARNING
DataNucleus.Enhancer.level=WARNING
DataNucleus.SchemaTool.level=WARNING
# FinalizableReferenceQueue tries to spin up a thread and fails. This
# is inconsequential, so don't scare the user.
com.google.common.base.FinalizableReferenceQueue.level=WARNING
com.google.appengine.repackaged.com.google.common.base.FinalizableReferenceQueue.level=WARNING
These entries come from the logging config template, so to set DataNucleus to WARNING you have to do it as in this template.
https://developers.google.com/appengine/docs/java/#Logging
and then just add your own logging config:
nilsnett.chinese.backend.level = INFO
This should solve it.
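For reference, the wiring that makes App Engine read logging.properties is a system property in appengine-web.xml (a sketch of the standard setup from the guide):

<!-- appengine-web.xml (excerpt) -->
<system-properties>
  <property name="java.util.logging.config.file"
            value="WEB-INF/logging.properties"/>
</system-properties>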

Grails - logging from Quartz job and Filter

I would like to store my logs in files.
Here is how I declare my appenders in Config.groovy:
log4j = {
    appenders {
        // console name: 'stdout', layout: pattern(conversionPattern: '%c{2} %m%n')
        file name: "scraperServiceDetailedLogger",
             file: "target/scraperServiceDetailed.log"
        file name: "scraperServiceLogger",
             file: "target/scraperService.log"
        file name: "filterLogger",
             file: "target/filter.log"
    }

    error 'org.codehaus.groovy.grails.web.servlet',        // controllers
          'org.codehaus.groovy.grails.web.pages',          // GSP
          'org.codehaus.groovy.grails.web.sitemesh',       // layouts
          'org.codehaus.groovy.grails.web.mapping.filter', // URL mapping
          'org.codehaus.groovy.grails.web.mapping',        // URL mapping
          'org.codehaus.groovy.grails.commons',            // core / classloading
          'org.codehaus.groovy.grails.plugins',            // plugins
          'org.codehaus.groovy.grails.orm.hibernate',      // hibernate integration
          'org.springframework',
          'org.hibernate',
          'net.sf.ehcache.hibernate'

    error scraperServiceDetailedLogger: "grails.app.service.personalcreditcomparator.ScraperService"
    info  scraperServiceLogger: "grails.app.jobs.personalcreditcomparator.ScraperJob"
    info  filterLogger: "grails.app.conf.personalcreditcomparator.AdministratorInterfaceProtectorFilters"
}
The three files are created properly, but only scraperServiceDetailedLogger stores the logs as expected; the other two files remain empty.
The logging level is respected when the log calls are made.
What am I missing?
Thank you for any help provided.
For Quartz jobs, try the logger prefix 'grails.app.task':
info scraperServiceLogger: "grails.app.task.personalcreditcomparator.ScraperJob"
And for filters, try the logger prefix 'grails.app.filters':
info filterLogger: "grails.app.filters.personalcreditcomparator.AdministratorInterfaceProtectorFilters"
