What is the format of Google App Engine log files, as downloaded by appcfg.sh request_logs?
As far as I've been able to determine, the format of the log file (which resembles Apache's combined log format) is as follows:
CLIENT_IP_ADDRESS - USERNAME [DATE:TIME TIMEZONE] "METHOD URL HTTP/VERSION" RESPONSE_CODE ??? URL USER_AGENT
  LOG_LEVEL:TIMESTAMP MESSAGE
  :
  LOG_LEVEL:TIMESTAMP MESSAGE
  :
  LOG_LEVEL:TIMESTAMP MESSAGE
  :
Each of the indented lines is associated with the non-indented line above it; that's how you determine which request each of them relates to, I think. The solitary colons separate one log message from the next.
The non-indented lines are in reverse chronological order, but the groups of indented lines below them are in chronological order.
A strange client IP address like 0.1.0.2 indicates a within-App-Engine post from your app to your app's task queue. You can also tell that a request originated within App Engine by looking at its user agent.
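For example, a minimal Python sketch that groups each request line with its indented app logs, based on the format above (the field names are my own guesses, and everything after the response code is left unparsed since the "???" field is uncertain):

import re

REQUEST_RE = re.compile(
    r'^(?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<datetime>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) HTTP/(?P<version>[^"]+)" '
    r'(?P<status>\d+)')

def group_requests(lines):
    """Yield each request line together with its indented app-log lines."""
    current = None
    for line in lines:
        if line.strip() == ':':            # solitary colon: message separator
            continue
        if line.startswith((' ', '\t')):   # indented: app log for current request
            if current:
                current['app_logs'].append(line.strip())
        else:                              # non-indented: a new request line
            if current:
                yield current
            match = REQUEST_RE.match(line)
            current = {'request': match.groupdict() if match else line.strip(),
                       'app_logs': []}
    if current:
        yield current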
How do I deploy a smart contract to the testnet or mainnet WITHOUT the Chainweaver web UI? I know I need a YAML file for that, but what do I do with it and where exactly do I send it?
Do I need to run a pact server, the chainweb API, or something else? I couldn't find any guide for that.
Step 0: Install the Prerequisites
Install Pact
Step 1: Create the Pact Module
We will be deploying the following Pact module. For simplicity's sake, the Pact code we are deploying does not use a transaction's data field (read-keyset is one Pact function that makes use of this field); otherwise, the accompanying YAML file would have to change. We also assume that this Pact code is saved as test.pact.
(namespace 'free)
(module someModuleName AUTONOMOUS
  (defcap AUTONOMOUS ()
    true)
  (defun dummy ()
    (+ 1 2)
  )
)
Step 2: Create the YAML file
The following YAML file will be used along with pact -a to sign and produce the escaped JSON needed to submit a transaction to Testnet.
codeFile: /Users/linda.ortega.cordoves/pact/test.pact
networkId: testnet04
publicMeta:
  chainId: "0"
  gasLimit: 1000
  ttl: 28000
  creationTime: 1585056536
  sender: "testing"
  gasPrice: 0.00001
keyPairs:
  - public: 1d877a7b4524b6724a6ae708cf9ea7396d6ee9d17b10098b7793800177669c1d
    secret: 33fcd94b8a42057bd4e3190f8983e3a73ec96c3f60df95c9e2aa3f13602c714f
nonce: step02
This file makes a couple of assumptions that might change depending on your specific implementation:
The full path of the pact we want to upload is: /Users/linda.ortega.cordoves/pact/test.pact
We want to submit a transaction to Testnet, whose network id is testnet04
We want to submit to the zeroth chain on Testnet, which has a chain id of "0"
That the current creation time in UNIX epoch time is 1585056536 seconds. This value MUST CHANGE, so recalculate it by using an online epoch-time converter or by running date +%s on the command line (see the sketch after this list).
That "testing" is the account paying for gas (aka the "sender") on the Testnet network. To create a Testnet account and fund it some coins, navigate to the Testnet Coin Faucet. You will need to have generated an ED22519 public-private key pair to use the faucet. You can use pact -g to generate this key pair. Make sure to save it somewhere save.
That the key pair specified in "keyPairs" corresponds to the key pair used to create the gas-payer account, which in this example is "testing". This must be changed from the defaults provided.
That we saved this YAML file as /Users/linda.ortega.cordoves/pact/test.yaml.
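For example, a quick way to compute a fresh creationTime (it is just the current UNIX epoch time in seconds, equivalent to date +%s):

import time

# Current UNIX epoch time in seconds; paste this into the YAML's creationTime.
print(int(time.time()))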
Step 3: Submit Transaction to Testnet
We will now submit the example pact module we created by hitting the /send endpoint of a Testnet node. In the command line, run the following command:
pact -a /Users/linda.ortega.cordoves/pact/test.yaml | curl -H "Content-Type: application/json" -d @- https://us1.testnet.chainweb.com/chainweb/0.0/testnet04/chain/0/pact/api/v1/send
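If you prefer to script this step, here is a rough Python equivalent of the same pipeline; it assumes pact is on your PATH and reuses the file, node, and chain from above:

import json
import subprocess
import urllib.request

# Sign the transaction with the pact CLI, exactly like `pact -a ...` above.
signed = subprocess.check_output(
    ['pact', '-a', '/Users/linda.ortega.cordoves/pact/test.yaml'])

# POST the escaped JSON to the node's /send endpoint.
request = urllib.request.Request(
    'https://us1.testnet.chainweb.com/chainweb/0.0/testnet04'
    '/chain/0/pact/api/v1/send',
    data=signed,
    headers={'Content-Type': 'application/json'})
print(json.load(urllib.request.urlopen(request)))  # {"requestKeys": [...]}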
Some of the assumptions we made when creating the YAML file become important here:
The network id must match the node endpoint we submit to. Since the network id we chose is testnet04, we must submit to /chainweb/0.0/testnet04/. And the node we submit to (in this case us1.testnet.chainweb.com) must have this network id.
The chain id must also match. We chose chain id of "0", so we must submit to /chain/0/.
That we saved the yaml file to /Users/linda.ortega.cordoves/pact/test.yaml.
If we submitted the transaction successfully we will see the following:
{"requestKeys":["Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek"]}
This means that our transaction was successfully added to the blockchain's mempool and is waiting to be mined. Make note of the request key returned from /send as we will use it when polling for the result of the transaction.
It is also possible that our transaction will fail node validation when we attempt to submit it. If this happens, you will receive a validation failure message instead of the request key.
Step 4: Verify the Result of the Transaction
We will now try to get the results of the transaction we submitted to the Testnet network by hitting the /poll endpoint. In the command line, run the following command:
curl -H "Content-Type: application/json" -d '{"requestKeys":["Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek"]}' -X POST https://us1.testnet.chainweb.com/chainweb/0.0/testnet04/chain/0/pact/api/v1/poll
Again, we make a couple of assumptions in this step:
That the Testnet node we want to poll from is us1.testnet.chainweb.com.
That the network id is testnet04. Note that part of the endpoint is /chainweb/0.0/testnet04/.
That the chain id we are polling from is chain "0". Note that part of the endpoint is /chain/0/.
That the request key we are polling for is Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek.
If the transaction was successfully mined and thus added to the blockchain, then /poll will return the following JSON object:
{
  "Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek": {
    "gas": 58,
    "result": {
      "status": "success",
      "data": "Loaded module free.linda-test, hash n0g99JhWnO2F7X7f8o_zcAiSHBAWS_QSAfn4yUaqpps"
    },
    "reqKey": "Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek",
    "logs": "0KzZQDJmEgnAKvPnO20UeGoE7KGCIN22nhjraeyp1aw",
    "metaData": {
      "blockTime": 1585056990071469,
      "prevBlockHash": "dIYmpjBQge9yw0Yzhn0Sau-wJFwsLOFBmGbV3_0xYeE",
      "blockHash": "yULpC5C-7tzRcc9sWm-f1bOC3JDvtxwT61hruW0aXrA",
      "blockHeight": 261712
    },
    "continuation": null,
    "txId": 266084
  }
}
Please note that it is possible for a transaction to fail at the Pact level but still get added to the blockchain, with gas charged. If this happens, the result.status field will be failure.
If a transaction has not been mined yet, /poll will return {}. Keep retrying until you receive the JSON object shown above.
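A sketch of that retry loop in Python, reusing the node and request key from the examples above:

import json
import time
import urllib.request

POLL_URL = ('https://us1.testnet.chainweb.com/chainweb/0.0/testnet04'
            '/chain/0/pact/api/v1/poll')
payload = json.dumps(
    {'requestKeys': ['Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek']}).encode()

while True:
    request = urllib.request.Request(
        POLL_URL, data=payload, headers={'Content-Type': 'application/json'})
    result = json.load(urllib.request.urlopen(request))
    if result:      # an empty object {} means the transaction is not mined yet
        print(json.dumps(result, indent=2))
        break
    time.sleep(10)  # wait a bit before polling again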
source: https://gist.github.com/LindaOrtega/1c219f887d9782c6745dbd827bdbfb4d
Does anyone know how to change the CloudWatch log output from a "Kinesis Data Analytics for Apache Flink" app?
There are two things I'd like to change:
The fields in the JSON written to CloudWatch
The contents/format of the "message" field (i.e., the format of each LOG.info, LOG.warn, etc. line)
#1 is most important.
The default format written to CloudWatch looks like this:
{
  "locationInformation": "",
  "logger": "",
  "message": "",
  "threadName": "",
  "applicationARN": "arn:aws:kinesisanalytics:eu-west-1:...",
  "applicationVersionId": "23",
  "messageSchemaVersion": "1",
  "messageType": "INFO"
}
Is it somehow possible to change the output, so that each CloudWatch entry becomes this instead:
{
  "EventTime": "20201224T23:59:59.999Z",
  "LogLevel": 5,
  "EventSource": "ApplicationURI/Name",
  "Message": "foobar"
}
Using SLF4J is mentioned here (https://docs.aws.amazon.com/kinesisanalytics/latest/java/cloudwatch-logs-writing.html), although the format mentioned on the same page is the default described above.
The pom.xml file of the Java project includes aws-java-sdk-logs. It also excludes log4j and slf4j.
<artifactSet>
  <excludes>
    <exclude>org.slf4j:*</exclude>
    <exclude>log4j:*</exclude>
  </excludes>
</artifactSet>
I've had a look at this:
https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/java-dg-logging.html
But when testing locally, changing log4j.properties only changes the "message" field of the log entry. That file does not seem to be loaded when running on AWS, despite existing in the root directory of the .jar file. And even if I could make aws-java-sdk-logs pick up changes in log4j.properties, changing this file doesn't seem capable of changing the JSON fields written to CloudWatch (only the "message" format); see the example below.
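For reference, the kind of local change I mean is just a layout-pattern tweak like this (the appender name here is hypothetical), which only ever affects the message text, never the surrounding JSON fields:

# hypothetical appender name; only the ConversionPattern (message text) changes
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %p %c - %m%n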
When the app starts on AWS, I can see it prints this:
-Dlog4j.configuration=file:/etc/flink/log4j-console.properties
-Dlogback.configurationFile=file:/etc/flink/logback-console.xml
I was hoping to copy these files out on startup, change them, and set the properties to point at the modified copies in the JAR. However, both of these files seem empty when read on startup from inside the Flink app code.
Is there some relatively straightforward way to:
Rename/remove/add fields to the JSON written to CloudWatch?
Change the format of the "message" field?
When I look at the logs in the Google Log Viewer for my GAE project, I see that the logs I write myself in the code are often assigned to the wrong request. Most of the time the log is assigned to the request directly after the request that produced the log entry.
Since the root of every application log in GAE must be a request, this means that the wrong request is sometimes marked as an error: one request produced the error, but the log entry is assigned to the request after it.
I don't really do anything special: I use Ktor as my servlet framework and have an interceptor that writes a log entry when an exception occurs, before returning status 500.
I use Java logging via SLF4J with the Google Cloud logging handler, but before that I used Logback via SLF4J and had the same problem.
The content of the logs themselves is also correct: the returned status of the request, the level of the log entry, the message, everything is OK.
I thought it might be because I use Kotlin and switch coroutine contexts during a single request, but in some cases the point where I write the log and the point where I send the response are directly next to each other, so I'm not sure Kotlin has anything to do with it.
My logging.properties:
# To use this configuration, add to system properties : -Djava.util.logging.config.file="/path/to/file"
#
.level = INFO
# it is recommended that io.grpc and sun.net logging level is kept at INFO level,
# as both these packages are used by Stackdriver internals and can result in verbose / initialization problems.
io.grpc.netty.level=INFO
sun.net.level=INFO
handlers=com.google.cloud.logging.LoggingHandler
# default : java.log
com.google.cloud.logging.LoggingHandler.log=custom_log
# default : INFO
com.google.cloud.logging.LoggingHandler.level=INFO
# default : ERROR
com.google.cloud.logging.LoggingHandler.flushLevel=WARNING
# default : auto-detected, fallback "global"
#com.google.cloud.logging.LoggingHandler.resourceType=container
# custom formatter
com.google.cloud.logging.LoggingHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.SimpleFormatter.format=%1$tY-%1$tm-%1$td %1$tH:%1$tM:%1$tS %4$-6s %2$s %5$s%6$s%n
#optional enhancers (to add additional fields, labels)
#com.google.cloud.logging.LoggingHandler.enhancers=com.example.logging.jul.enhancers.ExampleEnhancer
My logging relevant dependencies:
implementation "org.slf4j:slf4j-jdk14:1.7.30"
implementation "com.google.cloud:google-cloud-logging:1.100.0"
An example logging call:
exception<Throwable> { e ->
    logger().error("Error", e)
    call.respondText(e.message ?: "", ContentType.Text.Plain, HttpStatusCode.InternalServerError)
}
with logger() being:
import org.slf4j.Logger
import org.slf4j.LoggerFactory
inline fun <reified T : Any> T.logger(): Logger = LoggerFactory.getLogger(T::class.java)
Edit:
An example of the log in the Google Cloud console. The first request has the query parameter GAID=cdda802e-fb9c-47ad-0794d394c913, but as you can see, the error log for that request appears under the request below it, marked in red.
When I try to synchronize my CalDAV server implementation with Thunderbird 45.4.0 and Lightning 4.7.4 (one particular calendar collection), it doesn't show any data or events in the calendar, although the last call of the sequence provided the data.
In the Thunderbird error log I can see one error:
Timestamp: 07.11.16, 14:21:12
Error: [calCachedCalendar] replay action failed: null,
uri=http://127.0.0.1:8003/sap/sports/webdav/appsvc/webdav/services/server.xsjs/cal/_D043133/,
result=2147500037, op=[xpconnect wrapped calIOperation]
Source file: file:///Users/d043133/Library/Thunderbird/Profiles/hfbvuk9f.default/extensions/%7Be2fda1a4-762b-4020-b5ad-a41df1933103%7D/calendar-js/calCachedCalendar.js
Line: 327
The call sequence is as follows (detailed content via gist links):
Propfind Request - Response
Options Request - Response
Propfind Request - Response
Report Request - Response - Response Raw
The synchronization with other clients like the macOS calendar and the iOS calendar works in principle and shows the data. Does anyone have a clue what is going wrong here?
Not sure whether this is the cause, but I can see two incorrect things:
a) Your <href/> property has trailing spaces:
<d:href>/sap/sports/webdav/appsvc/webdav/services/server.xsjs/cal/_D043133/EVENT%3A070768ba5dd78ff15458f1985cdaabb1.ics
</d:href>
b) Your ORGANIZER property is not a valid URI (it should be a cal-address URI such as mailto:someone@example.com):
ORGANIZER:_D043133
I was able to find the cause of the above issue by debugging Thunderbird, as proposed by Philipp. The REPORT response has HTTP status code 200, but as it is a multistatus response, Thunderbird/Lightning expects status code 207 ;-)
Thanks for the hints!
Here are my email related dev_appserver options:
--smtp_host=smtp.gmail.com --smtp_port=25 --smtp_user=me@mydomain.com --smtp_password="password"
Now, this still doesn't work, and every time Google releases a new dev_appserver I have to edit api/mail_stub.py to get things to work locally, as per this S/O answer.
However, even this workaround has now stopped working. I get the following exception:
SMTPSenderRefused: (555, '5.5.2 Syntax error. mw9sm14633203wib.0 - gsmtp', <email.header.Header instance at 0x10c9c9248>)
Does anyone smarter than me know how to fix it?
UPDATE
I was able to get email to send on dev_appserver by using email addresses (e.g. for sender and recipient) in their 'plain' format of a simple string (name@domain.com) rather than the angle-bracket style (Name <name@domain.com>). This is not a problem in production: recipient and sender email addresses can use angle brackets in the mail.send_mail call. I raised a ticket about this divergent behaviour between dev_appserver and production: https://code.google.com/p/googleappengine/issues/detail?id=10211&thanks=10211&ts=1383140754
Looks like it's because the 'sender' is now stored as an email.header.Header instance in the dev server instead of a string (since SDK 1.8.3, I think).
From my testing, when a 'From' string like "Name <name@domain.com>" is passed into smtplib.SMTP.sendmail, it parses the string to find the part within angle brackets, if any, to use as the SMTP envelope sender, giving "<name@domain.com>". However, if this parameter is an email.header.Header, it just converts it to a string and uses it without further parsing, giving "<Name <name@domain.com>>", thus causing the problem we're seeing.
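A quick Python illustration of the difference, using the placeholder address from above:

from email.header import Header
from email.utils import parseaddr

# A plain string gets parsed down to the bare address (this is what
# smtplib effectively does for the envelope sender) ...
print(parseaddr('Name <name@domain.com>')[1])  # name@domain.com

# ... but a Header instance is only converted to a string, display
# name, angle brackets and all, so the envelope sender ends up invalid.
print(str(Header('Name <name@domain.com>')))   # Name <name@domain.com>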
Here's the patch I just posted on the issue tracker to google/appengine/api/mail_stub.py to convert this parameter back to a string (works for me):
--- google/appengine/api/mail_stub-orig.py	2014-12-12 20:04:53.612070031 +0000
+++ google/appengine/api/mail_stub.py	2014-12-12 20:05:07.532294605 +0000
@@ -215,7 +215,7 @@
     tos = [mime_message[to] for to in ['To', 'Cc', 'Bcc'] if mime_message[to]]
-    smtp.sendmail(mime_message['From'], tos, mime_message.as_string())
+    smtp.sendmail(str(mime_message['From']), tos, mime_message.as_string())
   finally:
     smtp.quit()
Another alternative is to patch the SMTP server that you use for testing the app engine mail functionality in your dev environment (instead of patching mail_stub.py).
For example, I'm using subethasmtp Wiser and was able to work around this issue by patching org.subethamail.smtp.util.EmailUtils.extractEmailAddress to accept nested angle brackets (details posted here).