GAE printing same log statement multiple times - google-app-engine

In the GAE log viewer I can see that the same log statement is printed multiple times. How do I control the logger so that each log statement is printed only once?
I am using Log4j for my logging. Do I need to do anything in the logger.properties to control the duplicate log statements?
Also, I am not sure how to silence the Apache HTTP client, socket, and header logs; there are so many entries related to Apache HTTP that it is difficult to find the relevant logs.
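For reference, a common cause of duplicated statements in Log4j is logger additivity: when a named logger and the root logger each have an appender attached, every statement is written once per appender. A minimal logger.properties sketch along those lines, which also quiets the Apache HttpClient categories (the console appender name and the com.example.myapp logger are placeholders for your own):

    # Root logger writes to a single console appender
    log4j.rootLogger=INFO, console
    log4j.appender.console=org.apache.log4j.ConsoleAppender
    log4j.appender.console.layout=org.apache.log4j.PatternLayout
    log4j.appender.console.layout.ConversionPattern=%d %-5p [%c] %m%n

    # If an application logger has its own appender, stop events from also
    # bubbling up to the root appender -- that double delivery is what
    # produces duplicate entries
    log4j.logger.com.example.myapp=INFO, console
    log4j.additivity.com.example.myapp=false

    # Quiet the chatty Apache HttpClient categories
    log4j.logger.org.apache.http=WARN
    log4j.logger.org.apache.http.wire=WARN
    log4j.logger.org.apache.http.headers=WARN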

Related

Google Cloud Pub/Sub - Stopped triggering push to endpoint

It was working fine until yesterday and then suddenly stopped pushing to the endpoint. I checked all the settings, including the endpoint URL, and everything remains unchanged. Can you suggest possible causes?
Not receiving a message on a push endpoint could happen for many reasons. The first thing to do would be to go to Stackdriver and create a graph for the subscription/push_request_count metric. You can break this down by response_code to see how many requests Cloud Pub/Sub is sending to your push endpoint and what response codes it is returning. If there are requests being delivered that are returning errors, this graph will show that.
It might also be worth checking the publish side to ensure messages are still being published as expected. You can look at the topic/send_message_operation_count metric, which can also be broken down by response_code, to make sure the publish requests are all returning success.
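If you would rather pull those metrics programmatically than build the graph in the console, a rough sketch using the Cloud Monitoring client library for Java might look like the following (the project ID and the 24-hour window are placeholders; the metric type corresponds to the subscription/push_request_count metric mentioned above):

    import com.google.cloud.monitoring.v3.MetricServiceClient;
    import com.google.monitoring.v3.ListTimeSeriesRequest;
    import com.google.monitoring.v3.ProjectName;
    import com.google.monitoring.v3.TimeInterval;
    import com.google.monitoring.v3.TimeSeries;
    import com.google.protobuf.util.Timestamps;

    public class PushRequestCount {
      public static void main(String[] args) throws Exception {
        String projectId = "my-project-id"; // placeholder project ID
        long now = System.currentTimeMillis();
        // Look at the last 24 hours of data
        TimeInterval interval = TimeInterval.newBuilder()
            .setStartTime(Timestamps.fromMillis(now - 24L * 60 * 60 * 1000))
            .setEndTime(Timestamps.fromMillis(now))
            .build();
        // response_code is a metric label, so each returned time series
        // corresponds to one response code
        String filter =
            "metric.type=\"pubsub.googleapis.com/subscription/push_request_count\"";
        try (MetricServiceClient client = MetricServiceClient.create()) {
          ListTimeSeriesRequest request = ListTimeSeriesRequest.newBuilder()
              .setName(ProjectName.of(projectId).toString())
              .setFilter(filter)
              .setInterval(interval)
              .setView(ListTimeSeriesRequest.TimeSeriesView.FULL)
              .build();
          for (TimeSeries series : client.listTimeSeries(request).iterateAll()) {
            System.out.println(series.getMetric().getLabelsMap().get("response_code")
                + " -> " + series.getPointsCount() + " data points");
          }
        }
      }
    }

A response code that suddenly dominates the results (for example a 4xx or 5xx coming back from your endpoint) points at where delivery is failing.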
You should also check to ensure the subscription still exists using the Pub/Sub Subscriptions page in the Cloud console. After 30 days of inactivity (including inability to successfully deliver a message to a push endpoint), subscriptions are potentially deleted.
If the issue is still unsolved after those steps, it is best to contact Google Cloud support with your project ID and subscription name so that things can be investigated for your specific case.

Can I use StackDriver Trace PHP application in GKE?

I want to check the RPC latencies of each endpoint of a CakePHP application running in a GKE cluster, every day. From reading the documentation I found this is possible using the PHP Google client or a Zipkin server, but I don't know how easy either would be to introduce into our app; both seem tough to me.
In addition, the GKE cluster configuration has a StackDriver Trace option, which is currently disabled on our cluster. Can we trace spans if we enable it?
Could you give some advice?
I succeeded in sending traces to GCP's Trace API from the PHP client via REST. I can see the traces set by the PHP client parameters, but my endpoint for the Trace API has since stopped working and I don't know why. Maybe it is not yet well supported, since the documentation has many ambiguous passages. In the end I settled on watching server responses in BigQuery (fed by fluentd) together with Data Studio, and it seems the best solution: the data is split automatically into per-day tables by the yyyymmdd table-name suffix, and we can watch arbitrary metrics with custom queries or calculated fields.

Java Google AppEngine Managed VMs: What logs are obtainable through the Logging API?

I like that I can use the Logs API (described here: https://cloud.google.com/appengine/docs/java/logs/) to programmatically access and display app & request logs as I see fit--it's great.
Now that I'm using Managed VMs on App Engine, you can see in the Admin Console Logs Viewer that there are a ton of additional logs--including, in my case, a custom log which I found I could include in the viewer (described here: https://cloud.google.com/appengine/docs/managed-vms/custom-runtimes#logging).
My question is: Is there any way I can use the Logs API (or other pipelines already built?) to access these logs? My Managed VM module includes several components which could produce logs that I want to view:
App logs -- I can get these! No problem here.
Custom log files created by background processes I kick off in _ah/start (like "my_custom_1.log" in the screenshot)
STDERR & STDOUT from my background processes
Relevant Managed VM logs (e.g. for when an instance was restarted due to bad health... other system events like normal restarts?)
Basically I want "the total picture" at the instance level. Anyone tried to tame Managed VMs in this way with success? I'm not looking forward to rolling my own solution. And I wouldn't even know where to start on the problem of capturing STDERR and STDOUT. Any help appreciated.
There is a difference between App Engine logging and Google Cloud logging. Some of the Managed VM logs go to both, but much of it only goes to cloud logging.
Until recently there was not an API to read Cloud logs, only to write them. However, there is a new v2 beta API: https://cloud.google.com/logging/docs/api/introduction_v2
To do things at an instance level, entries in Cloud Logging should have metadata set to denote which VM they came from. Both of these label values seem to vary across logs from my VMs (a sketch of filtering on them follows the list):
compute.googleapis.com/resource_name
compute.googleapis.com/resource_id
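As a rough sketch of what reading per-instance entries through the v2 API could look like with the google-cloud-logging Java client (the exact label key used in the filter is an assumption based on the metadata keys above, and the instance name is a placeholder):

    import com.google.cloud.logging.LogEntry;
    import com.google.cloud.logging.Logging;
    import com.google.cloud.logging.Logging.EntryListOption;
    import com.google.cloud.logging.LoggingOptions;

    public class ListInstanceLogs {
      public static void main(String[] args) throws Exception {
        // Uses the project and credentials from the environment
        try (Logging logging = LoggingOptions.getDefaultInstance().getService()) {
          // Assumption: narrow results to one VM by filtering on the
          // resource_name label mentioned above
          String filter =
              "labels.\"compute.googleapis.com/resource_name\"=\"my-instance-name\"";
          for (LogEntry entry :
              logging.listLogEntries(EntryListOption.filter(filter)).iterateAll()) {
            System.out.println(entry.getTimestamp() + " " + entry.getPayload());
          }
        }
      }
    }

Note that this only surfaces what actually reaches Cloud Logging; stdout/stderr from background processes will only appear there if something on the VM forwards them.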

Read log files on JBoss AS 7

I have an application running on JBoss AS 7 and creating log files in /standalone/log.
For security reasons I am not allowed to browse the JBoss directories.
Is there any built-in application to read these log files from a browser?
NB: I cannot use the admin console either.
No, nothing built in. You can have the admins configure the logging service to put the logs where you can get to them, or you can configure the logger to capture the log events and post them to a database or some other store.
Not yet, but there are some requests for it (one by me, BTW ;-) and it might appear in WildFly 8. Hopefully. (Vote on them if you like.)
WFLY-1048 Allow hooking into logging subsystem through Management API
WFLY-1144 Provide the ability to view server logs through the web interface
WFLY-280 Provide an operation to retrieve the last 10 errors from the log
Until then, I suggest asking the admins to allow access to that one particular log file.
If that doesn't go through, you may declare a dependency of your deployment on the logging service's modules (Dependencies: ... in MANIFEST.MF) and on the JVM's log manager, unless there's some additional obstacle like a security manager; a rough sketch of that kind of workaround follows.
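For example, assuming the deployment itself is allowed to read files under the server's log directory, a tiny servlet can stream standalone/log/server.log back to the browser. This is only a sketch of a last-resort workaround, not a JBoss feature; jboss.server.log.dir is the standard AS 7 system property for the log directory, and you should of course secure the URL before deploying it:

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Streams standalone/log/server.log back to the browser as plain text.
    @WebServlet("/serverlog")
    public class ServerLogServlet extends HttpServlet {
      @Override
      protected void doGet(HttpServletRequest req, HttpServletResponse resp)
          throws IOException {
        // jboss.server.log.dir points at the server's log directory on AS 7
        File log = new File(System.getProperty("jboss.server.log.dir"), "server.log");
        resp.setContentType("text/plain");
        try (InputStream in = new FileInputStream(log);
             OutputStream out = resp.getOutputStream()) {
          byte[] buffer = new byte[8192];
          int read;
          while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
          }
        }
      }
    }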

log4net adonet appender

I am using the AdoNetAppender of log4net for logging to the database. The logging level is set to ERROR. Database logging is configured for two applications running on different servers, writing to the same table in an Oracle database. The columns of the table include loginId and level. The problems I am facing are:
Even though the logging level is set to ERROR, some INFO-level statements also show up in the table, and the corresponding level column shows ERROR.
For some statements, the login ID shown is different from the login ID of the actual user running the application.
So, how do I configure log4net on different servers to behave autonomously?
EDIT: I am facing these issues only when running multiple instances of the application; otherwise log4net logging is fine.
Scenario: I browsed the published version of the application in two browsers with different login IDs and went through a different flow in each browser. The result was that the login IDs got jumbled. In my code I read the login ID from the user session and then store it in log4net.GlobalContext.Properties.
After some research, I found that there are some alternatives to log4net.GlobalContext.Properties, described at http://logging.apache.org/log4net/release/manual/contexts.html. I think ThreadContext.Properties should be used instead of the global context.
I think I am facing these issues because I am storing the value in log4net.GlobalContext.Properties, which is shared by every thread (and therefore every concurrent request) in the process.
Issue 1: I checked the code, and the statements are logger.Info calls, but in the database table they are logged with the ERROR level.
Issue 2: the code that sets the login ID:
// Read the logged-in user from session state and expose the login
// to log4net through a context property (currently the global context)
user = (User)Session["User"];
log4net.GlobalContext.Properties["LOGINID"] = user.Login;
The LOGINID property is then referenced in the log4net configuration in web.config.
If you believe that ThreadContext.Properties can be used instead of GlobalContext.Properties, can you show me how to use it for the login ID?
I started to post this as a comment but I realized that while I don't have the details I need to give you a specific answer, I can point you in the right direction.
Issue 1: If you are getting statements in your database that are INFO messages but are marked as ERROR, this is a problem in your code. You have to tell log4net what level each log statement is; you could mark a "Hello World" statement as FATAL if you wanted to. It sounds like your program is sending messages you want marked as INFO to the log, but they are being written with the error method. Look at where those statements are sent to the log and you should see a log.Error() call; change that to log.Info() and you should be good to go.
Issue 2: The login ID should show who executed the log statement. That means if you execute something under another account (for permissions) or if you use a service account, it will log that user instead of the person clicking the mouse. I can be much more specific in how to potentially fix this if you show us how you are logging the user information.
Issue 3: I'm not sure what you mean here. Log4net does behave autonomously. You can even use the same configuration on multiple servers without issue, if that is what you are alluding to.
If you would like a more complete answer that is more specific to your issues, please post the log4net config file and the relevant code (where you are logging the INFO statements and the method by which you log the user ID would be a good start).
