How can I get AppEngine to log info level only for my app?

So I've tried configuring AppEngine logging according to this guide, making sure the logging.properties file is referenced from web.xml. I've configured logging.properties as follows:
.level = WARNING
nilsnett.chinese.backend.level = INFO
The package name of my logging wrapper is nilsnett.chinese.backend. The problem is that even with this configuration, info-level log output from my app is filtered. Evidence:
I've also tried the following config, which yielded the same result (including the logger class name at the end of the package name):
.level = WARNING
nilsnett.chinese.backend.JavaUtilLogger.level = INFO
To demonstrate that the logging.properties file is actually read, and that I actually do write info-level logging data to App Engine in this service call, here is what happens when I set .level = INFO:
So my desired result is to get INFO and higher-level log output from my packages, while other packages, like org.datanucleus, only show output at WARNING or more severe. In the example above, I want only the two lines marked with the purple star. Am I doing anything wrong?
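(For reference, on App Engine Java the logging.properties file is wired in via a system property, typically in appengine-web.xml; a minimal sketch of that wiring, assuming the file lives in WEB-INF:)
<system-properties>
  <!-- point java.util.logging at the properties file -->
  <property name="java.util.logging.config.file" value="WEB-INF/logging.properties"/>
</system-properties>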

Change your config to:
.level = WARNING
# Set the default logging level for the datanucleus loggers
DataNucleus.JDO.level=WARNING
DataNucleus.Persistence.level=WARNING
DataNucleus.Cache.level=WARNING
DataNucleus.MetaData.level=WARNING
DataNucleus.General.level=WARNING
DataNucleus.Utility.level=WARNING
DataNucleus.Transaction.level=WARNING
DataNucleus.Datastore.level=WARNING
DataNucleus.ClassLoading.level=WARNING
DataNucleus.Plugin.level=WARNING
DataNucleus.ValueGeneration.level=WARNING
DataNucleus.Enhancer.level=WARNING
DataNucleus.SchemaTool.level=WARNING
# FinalizableReferenceQueue tries to spin up a thread and fails. This
# is inconsequential, so don't scare the user.
com.google.common.base.FinalizableReferenceQueue.level=WARNING
com.google.appengine.repackaged.com.google.common.base.FinalizableReferenceQueue.level=WARNING
These entries come from the logging config template; to set DataNucleus to WARNING you have to list its loggers as in this template:
https://developers.google.com/appengine/docs/java/#Logging
Then just add your own logging config:
nilsnett.chinese.backend.level = INFO
This should solve it.
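To sanity-check the levels, a minimal sketch of a java.util.logging call site in the affected package (class name and messages are illustrative):
import java.util.logging.Logger;

public class LoggingCheck {
    // logger named under the package the config targets
    private static final Logger log =
            Logger.getLogger("nilsnett.chinese.backend.LoggingCheck");

    public static void check() {
        log.info("should appear: package level is INFO");
        log.fine("should be filtered: FINE is below INFO");
    }
}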

Related

Unable to push vespa metrics to cloudwatch

Basically I need to monitor Vespa metrics, and for that I am trying to implement a way to push the metrics to CloudWatch.
This is the document that I am referring to: https://docs.vespa.ai/documentation/monitoring.html
I have added the credentials file and the putMetricData permission to the attached IAM role. The services.xml file that I am using looks like this:
<admin version="2.0">
  <adminserver hostalias="admin0"/>
  <configservers>
    <configserver hostalias="admin0"/>
  </configservers>
  <monitoring>
  </monitoring>
  <metrics>
    <consumer id="my-cloudwatch">
      <metric-set id="vespa" />
      <cloudwatch region="ap-south-1" namespace="vespa">
        <shared-credentials file="~/.aws/credentials" profile="default" />
      </cloudwatch>
    </consumer>
  </metrics>
</admin>
I have deployed the code using vespa-deploy prepare application.zip && vespa-deploy activate, but I am still not seeing any metrics show up in CloudWatch.
Also, I have tried to add:
<monitoring>
  <interval>1</interval>
  <systemname>vespa</systemname>
</monitoring>
But getting this error when deploying:
Request failed. HTTP status code: 400
Invalid application package: default.default: Error loading model: XML error in services.xml: element "interval" not allowed here; expected the element end-tag [9:16], input:
How can I fix this issue, or at least debug what is going wrong?
I suggest using an absolute path to the credentials file, as ~ may not resolve to the directory you intended at runtime.
A couple more things:
I recommend using the default metric set, as the vespa set contains a lot of metrics, which will drive your CloudWatch cost higher. If you need additional metrics, you can add them with the metric tag inside consumer (see the sketch below).
The monitoring element doesn't do anything useful in this context, so you should just drop it.
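Putting those two points together, a minimal sketch of the consumer (the extra metric id is purely illustrative, and the absolute credentials path is an assumption you should adapt):
<metrics>
  <consumer id="my-cloudwatch">
    <metric-set id="default" />
    <!-- illustrative extra metric on top of the default set -->
    <metric id="content.proton.documentdb.documents.total.last" />
    <cloudwatch region="ap-south-1" namespace="vespa">
      <shared-credentials file="/opt/vespa/.aws/credentials" profile="default" />
    </cloudwatch>
  </consumer>
</metrics>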
If you still don't see any metrics, please check for warnings or errors in the vespa log file (use vespa-logfmt) and the Telegraf log file: /opt/vespa/logs/telegraf/telegraf.log. (Vespa uses Telegraf internally to emit metrics to CloudWatch.)

Google AppEngine application log assigned to the wrong request log

When I look at the logs in the Google Log Viewer for my GAE project, I often see that the log entries I write myself in code are assigned to the wrong request. Most of the time the entry is assigned to the request directly after the one that actually produced it.
As every application log in GAE must have a request as its root, this means the wrong request is sometimes marked as an error: one request produces an error, but the log entry is somehow assigned to the request after it.
I don't really do anything special; I use Ktor as my servlet framework and have an interceptor that creates a log entry when an exception occurs, before returning status 500.
I use Java logging via SLF4J with the Google Cloud logging handler, but before that I used Logback via SLF4J and had the same problem.
The content of the logs is also correct: the returned status of the request, the level of the log entry, the message, everything is OK.
I thought it might be because I use Kotlin and switch coroutine contexts during a single request, but in some cases the point where I write the log and the point where I send the response are right next to each other, so I'm not sure Kotlin has anything to do with it.
My logging.properties:
# To use this configuration, add to system properties : -Djava.util.logging.config.file="/path/to/file"
#
.level = INFO
# it is recommended that io.grpc and sun.net logging level is kept at INFO level,
# as both these packages are used by Stackdriver internals and can result in verbose / initialization problems.
io.grpc.netty.level=INFO
sun.net.level=INFO
handlers=com.google.cloud.logging.LoggingHandler
# default : java.log
com.google.cloud.logging.LoggingHandler.log=custom_log
# default : INFO
com.google.cloud.logging.LoggingHandler.level=INFO
# default : ERROR
com.google.cloud.logging.LoggingHandler.flushLevel=WARNING
# default : auto-detected, fallback "global"
#com.google.cloud.logging.LoggingHandler.resourceType=container
# custom formatter
com.google.cloud.logging.LoggingHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.SimpleFormatter.format=%1$tY-%1$tm-%1$td %1$tH:%1$tM:%1$tS %4$-6s %2$s %5$s%6$s%n
#optional enhancers (to add additional fields, labels)
#com.google.cloud.logging.LoggingHandler.enhancers=com.example.logging.jul.enhancers.ExampleEnhancer
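For reference, the SimpleFormatter placeholders in that pattern are %1$ (timestamp), %2$ (source class and method), %4$ (level, padded to six characters), %5$ (message) and %6$ (throwable), so a line renders roughly like this (values invented for illustration):
2020-01-15 10:23:45 INFO   com.example.Api handleRequest Request completed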
My logging relevant dependencies:
implementation "org.slf4j:slf4j-jdk14:1.7.30"
implementation "com.google.cloud:google-cloud-logging:1.100.0"
An example logging call:
exception<Throwable> { e ->
    logger().error("Error", e)
    call.respondText(e.message ?: "", ContentType.Text.Plain, HttpStatusCode.InternalServerError)
}
with logger() being:
import org.slf4j.Logger
import org.slf4j.LoggerFactory
inline fun <reified T : Any> T.logger(): Logger = LoggerFactory.getLogger(T::class.java)
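For illustration, a hypothetical call site (the class name is invented); thanks to the reified type parameter, each class gets a logger named after itself:
class CheckoutService {
    // resolves to LoggerFactory.getLogger(CheckoutService::class.java)
    private val log = logger()

    fun checkout() {
        log.info("starting checkout") // emitted under the CheckoutService logger name
    }
}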
Edit:
An example of the log in the Google Cloud console (screenshot). The first request has the query parameter GAID=cdda802e-fb9c-47ad-0794d394c913, but as you can see, the error log for that request appears in the request below it, marked in red.

Non-interactive auto-refresh stale OAuth Token with Googlesheets package

I'm trying to automatically run an R script to download a private Google Sheet every hour. It always works fine when I'm using R interactively. It also works fine during the first hour after I automate the script with launchd.
It stops working an hour after I start automating it with launchd. I think the problem is that after one hour the access token changes, and the non-interactive version isn't waiting for the auto-refresh of the OAuth token. Here is the error that I get from the error report:
Auto-refreshing stale OAuth token.
Error in gzfile(file, mode) : cannot open the connection
Calls: gs_auth ... -> -> cache_token -> saveRDS -> gzfile
In addition: Warning message:
In gzfile(file, mode) :
cannot open compressed file '.httr-oauth', probable reason 'Permission denied'
Execution halted
I'm using Jenny Bryan's googlesheets package. Here is the code that I initially use to register the sheet and then save the OAuth token:
gToken <- gs_auth() # Run this the first time to get the OAuth information
saveRDS(gToken, "/Users/…/gToken.rds") # Save the OAuth information for non-interactive use
I then use the following script in the file that I automate with launchd:
gs_auth(token = "/Users/…/gToken.rds")
How can I avoid this error when running the script automatically with launchd?
I don't know about launchd, but I had the same problem when I wanted to run an R script automatically from the Windows Task Scheduler. Changing the 'cache' argument to FALSE did the trick for me (screenshot: https://i.stack.imgur.com/pprlC.png).
You can find the solution here: https://github.com/jennybc/googlesheets/issues/262
To authenticate once in the browser in order to get a token file, I did this:
token_file <- gs_auth(new_user = TRUE, cache = FALSE)
saveRDS(token_file, "googlesheets_token.rds")
Automatic login afterwards via:
gs_auth(token = paste0(path_scripts, "googlesheets_token.rds"),
        verbose = TRUE, cache = FALSE)
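Putting the pieces together, a minimal sketch of the whole flow (the path is illustrative; cache = FALSE stops gs_auth from trying to write the .httr-oauth cache file, which is what failed with 'Permission denied' under launchd):
library(googlesheets)

# Run once, interactively: authenticate in the browser and save the token
token <- gs_auth(new_user = TRUE, cache = FALSE)
saveRDS(token, "/Users/me/googlesheets_token.rds")

# In the script run by launchd: reuse the saved token, never touch .httr-oauth
gs_auth(token = "/Users/me/googlesheets_token.rds", cache = FALSE)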

Disable scalatest logging statements when running tests from maven

How can I disable the INFO-level log4j messages that show up when running ScalaTest?
The log4j.properties is as follows:
log4j.rootLogger=INFO,CA,FA
#Console Appender
log4j.appender.CA=org.apache.log4j.ConsoleAppender
log4j.appender.CA.layout=org.apache.log4j.PatternLayout
log4j.appender.CA.layout.ConversionPattern=%d{HH:mm:ss.SSS} %p %c: %m%n
log4j.appender.CA.Threshold = INFO
#File Appender
log4j.appender.FA=org.apache.log4j.FileAppender
log4j.appender.FA.append=false
log4j.appender.FA.file=target/unit-tests.log
log4j.appender.FA.layout=org.apache.log4j.PatternLayout
log4j.appender.FA.layout.ConversionPattern=%d{HH:mm:ss.SSS} %p %c{1}: %m%n
log4j.appender.FA.Threshold = INFO
..
log4j.logger.org.scalatest=WARN
However, we are still seeing INFO-level log4j messages from the ScalaTest run:
2014-11-30 14:25:57,263 INFO [ScalaTest-run-running-DiscoverySuite] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2014-11-30 14:25:57,493 INFO [ScalaTest-run-running-DiscoverySuite] hbase.HBaseCommonTestingUtility (HBaseTestingUtility.java:startMiniCluster(840)) - Starting up minicluster with 1 master(s) and 2 regionserver(s) and 2 datanode(s)
2014-11-30 14:25:57,499 INFO [ScalaTest-run-running-DiscoverySuite] hbase.HBaseCommonTestingUtility (HBaseTestingUtility.java:setupClusterTestDir(390)) - Created new mini-cluster data directory: /shared/hwspark/target/
Alternatively, you can throw this bit of code anywhere in one of your tests,
org.slf4j.LoggerFactory.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME)
  .asInstanceOf[ch.qos.logback.classic.Logger]
  .setLevel(ch.qos.logback.classic.Level.WARN)
which will set all logging to the WARN level.
Those log messages are not actually being printed by ScalaTest, but by something you are using from your ScalaTest tests. The reason "ScalaTest" shows up in them is that ScalaTest renames threads while suites and tests execute, so that if a suite hangs forever and someone takes a thread dump to investigate, it is obvious which test and suite caused the hang. Log4J prints the thread name in square brackets, which gives you a hint as to where these log messages are really coming from.
In my case it was slick.relational. I looked at the class reported in the ScalaTest-run-running-... thread name, found which imported package it belongs to on the classpath, and added that specific package to logback.xml as
<logger name="slick.relational" level="INFO"/>
In your case, search for HBaseTestingUtility or the other classes reported there to find which jar contains them, and work out your logback logger name from the package prefix.
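Applied to the log4j.properties from the question, that means targeting the packages that actually emit the messages rather than org.scalatest (the package names below are inferred from the sample output above):
# silence the Hadoop/HBase test utilities seen in the output
log4j.logger.org.apache.hadoop=WARN
log4j.logger.org.apache.hadoop.hbase=WARN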

Grails - log into files via appender

Hi everyone,
Thank you in advance for any help provided.
I'm running into a problem trying to write my logs to a file, and I don't understand what is wrong with the syntax I am using.
Here are my settings for logging into Config.groovy:
[SNIP]
log4j = {
    // Example of changing the log pattern for the default console
    // appender:
    appenders {
        // console name:'stdout', layout:pattern(conversionPattern: '%c{2} %m%n')
        file name: "scraperServiceLogger",
             file: "target/scraperService.log"
    }
    error 'org.codehaus.groovy.grails.web.servlet',        // controllers
          'org.codehaus.groovy.grails.web.pages',          // GSP
          'org.codehaus.groovy.grails.web.sitemesh',       // layouts
          'org.codehaus.groovy.grails.web.mapping.filter', // URL mapping
          'org.codehaus.groovy.grails.web.mapping',        // URL mapping
          'org.codehaus.groovy.grails.commons',            // core / classloading
          'org.codehaus.groovy.grails.plugins',            // plugins
          'org.codehaus.groovy.grails.orm.hibernate',      // hibernate integration
          'org.springframework',
          'org.hibernate',
          'net.sf.ehcache.hibernate',
          'grails.app.'
    error scraperServiceLogger: "grails.app.service.ScraperService"
    warn 'org.mortbay.log'
    debug 'grails.test.*'
}
[/SNIP]
Here is how I am trying to use it within my ScraperService.groovy:
[SNIP]
log.error "test"
[/SNIP]
The file I want to write to is created correctly, but the logging is only displayed on the console.
Any help greatly appreciated :)
All the best.
Try using the full package path to your service:
error scraperServiceLogger: "grails.app.service.com.whatever.ScraperService"
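Putting it together, a minimal sketch of the relevant part of Config.groovy, assuming the service is declared in package com.whatever (the grails.app.service prefix can vary between Grails versions):
log4j = {
    appenders {
        file name: "scraperServiceLogger",
             file: "target/scraperService.log"
    }
    // route the service's logger to the file appender at error level
    error scraperServiceLogger: "grails.app.service.com.whatever.ScraperService"
}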
