What's the parsing rule of Google Cloud Error Reporting? - google-app-engine

I log the error to Google Stackdriver Logging, but Google Cloud Error Reporting doesn't recognise it (it works for other errors).
Is my formatting so different that Error Reporting cannot recognise it?
What's the parsing rule of Google Cloud Error Reporting?
The log is:
02:05:12 ERROR application -
! #78in3pjc5 - Internal server error, for (GET) [/api/news/page/1] ->
play.api.UnexpectedException: Unexpected exception[NonNullableColumnRead: SQL `NULL` read at column 5 (JDBC type Array) but mapping is to a non-Option type; use Option here. Note that JDBC column indexing is 1-based.]
at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:247)
at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler.scala:178)
at play.core.server.AkkaHttpServer$$anonfun$1.applyOrElse(AkkaHttpServer.scala:363)
at play.core.server.AkkaHttpServer$$anonfun$1.applyOrElse(AkkaHttpServer.scala:361)
at scala.concurrent.Future.$anonfun$recoverWith$1(Future.scala:413)
at scala.concurrent.impl.Promise.$anonfun$transformWith$1(Promise.scala:37)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
Caused by: doobie.util.invariant$NonNullableColumnRead: SQL `NULL` read at column 5 (JDBC type Array) but mapping is to a non-Option type; use Option here. Note that JDBC column indexing is 1-based.
at doobie.util.meta$Meta.unsafeGetNonNullable(meta.scala:50)
at doobie.util.composite$Composite$$anon$6$$anon$7.$anonfun$get$3(composite.scala:121)
at doobie.util.composite$Composite$$anon$6$$anon$7.$anonfun$get$3$adapted(composite.scala:121)
at doobie.util.kernel$Kernel$$anon$6.$anonfun$get$3(kernel.scala:80)
at doobie.util.kernel$Kernel$$anon$6.$anonfun$get$3$adapted(kernel.scala:80)
at doobie.util.kernel$Kernel$$anon$6.$anonfun$get$3(kernel.scala:80)
at doobie.util.kernel$Kernel$$anon$6.$anonfun$get$3$adapted(kernel.scala:80)
at doobie.util.kernel$Kernel$$anon$6.$anonfun$get$3(kernel.scala:80)
at doobie.util.kernel$Kernel$$anon$6.$anonfun$get$3$adapted(kernel.scala:80)
at doobie.util.kernel$Kernel$$anon$6.$anonfun$get$3(kernel.scala:80)

Could you provide the full log entry (e.g. as it appears when fully expanded in https://console.cloud.google.com/logs)?
The problem doesn't appear to be the format of the stack trace. The error is captured if I:
go to the Cloud Console API explorer
enter "projects/[PROJECT_NAME]" for projectName
copy your content (i.e. starting from 02:05:12)
replace newlines with "\n"
paste that as a message in the Request Body
hit "Execute"

Related

SageMaker ValidationException: Value '[]' at 'subnetIds' failed to satisfy constraint

1 validation error detected
Value '[]' at 'subnetIds' failed to satisfy constraint: Member must have length greater than or equal to 1
I want to create a SageMaker Studio domain in the Ohio region, but I got the above error.
I also confirmed that a VPC exists (there is no default VPC) and that it has one subnet.
How can I fix the error? Please share your knowledge.
On the SageMaker Domain setup screen, if you've been trying the "Quick setup", try the "Standard setup" instead. That got me past this issue.
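For what it's worth, the same constraint shows up in the API: boto3's create_domain requires a non-empty SubnetIds list. A minimal sketch, where every name and ID is a placeholder:

# pip install boto3
import boto3

sm = boto3.client('sagemaker', region_name='us-east-2')  # Ohio

# The validation error above comes from an empty SubnetIds list;
# the API requires at least one subnet.
sm.create_domain(
    DomainName='my-studio-domain',
    AuthMode='IAM',
    DefaultUserSettings={'ExecutionRole': 'arn:aws:iam::123456789012:role/SageMakerExecutionRole'},
    VpcId='vpc-0abc123',
    SubnetIds=['subnet-0abc123'],  # must have length >= 1
)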

Presto query error: Error reading tail from

I'm trying to query data over a Presto connection. The data (Delta format) is in an S3 bucket, and the query fails with this error:
SQL Error [16777232]: Query failed (#20211005_122441_00037_s2r9w): Error reading tail from s3://*/*/*/table/*/part-00015-bc2cc6d2-706d-4859-ab57-5f87d93d81f5-c000.snappy.parquet with length 16384
When I look in the bucket, the file doesn't exist.
It looks like your data has changed but the metadata (I assume you're using AWS Glue as the metastore) hasn't.
You can try CALL system.sync_partition_metadata('<YOUR_SCHEMA>', '<YOUR_TABLE>', 'full'); to get it updated.
Also make sure the schema is consistent across your partitions, if you're using them.
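If you'd rather issue that repair from code, here is a minimal sketch using the presto-python-client package; the host, user, schema, and table names are placeholders, and the catalog is assumed to be the Hive catalog that owns the table:

# pip install presto-python-client
import prestodb

conn = prestodb.dbapi.connect(
    host='presto.example.com',  # hypothetical coordinator host
    port=8080,
    user='etl',
    catalog='hive',
    schema='default',
)
cur = conn.cursor()
cur.execute("CALL system.sync_partition_metadata('my_schema', 'my_table', 'full')")
cur.fetchall()  # fetching forces the (lazily executed) statement to run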

TYPO3 Exception: Could not determine pid

While trying to add a new fe_users record, on save I get
(1/1) Exception
Could not determine pid
It's TYPO3 9.5.20.
We already have a lot of entries in multiple folders which can be edited without problems.
But those records were imported (by EXT:ig_ldap_sso_auth or via the mysql terminal).
These records are only displayed (no login is used).
What configuration is missing or could be wrong?
EDIT:
As @biesior mentioned, the error message does not come from the core but from an extension: EXT:solrfal (in version 7.0.0).
The real error was not in EXT:solrfal; this extension just hides the error behind a misleading message.
The real cause was a wrong database configuration for the table fe_users. Although it is not possible in SQL to give a default value to a column of type text (any value given is ignored), TYPO3 expects a default value if one is configured. As the database does not return one, TYPO3 assumes an error, and EXT:solrfal hooks into the error handling and reports the wrong error.
Hi, I just got the same problem.
The error message was raised in solrfal's ConsistencyAspect::getRecordPageId(), which was called by ConsistencyAspect::getDetectorsForSiteExclusiveRecord(). I remembered that I had added various table names to siteExclusiveRecordTables in the extension settings of solrfal, and indeed one of those tables had no pid. After removing that table from the list, deleting files works again.

Cannot create kinesis analytics application

While creating a Kinesis Analytics application, it successfully discovered my schema based on the data. However, when I hit "Save and continue", I get the following error:
Error updating application: There was an issue updating your application. Error message: 1 validation error detected: Value 'C' at 'input.inputSchema.recordColumns.2.member.name' failed to satisfy constraint: Member must satisfy regular expression pattern: [a-zA-Z][a-zA-Z0-9_]+
My sample record is below:
{"reported": {"timestamp": "1482231365", "C": "40", "id": "D_aa-bb"}}
My bad, I overlooked the error message. I found the solution; hope it might help someone.
The auto-detected schema name was the issue. From the sample record, the auto-detected column name was C, and the regex requires at least two characters. After manually editing the schema to use a longer name, it succeeded.
There was another issue though: the auto-detected column name timestamp is a reserved keyword, which also needs to be changed.
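You can sanity-check the auto-detected names against that pattern before saving. A small sketch, with the column list taken from the sample record above:

import re

# The pattern from the error message: a letter followed by at least one
# more letter, digit, or underscore -- so names must be >= 2 characters.
pattern = re.compile(r'[a-zA-Z][a-zA-Z0-9_]+')

for name in ['timestamp', 'C', 'id']:  # auto-detected from the sample record
    if not pattern.fullmatch(name):
        print(f'{name!r} fails the pattern')  # only 'C' fails: one character
# Note: 'timestamp' passes the regex but is a reserved keyword, so it
# still has to be renamed in the schema editor.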

Access PhysioNet's ptbdb database from MATLAB

I set up the system first with:
% remove any previously installed copy of the toolbox from the path
[old_path]=which('rdsamp');if(~isempty(old_path)) rmpath(old_path(1:end-8)); end
% download and unzip the WFDB App Toolbox
wfdb_url='http://physionet.org/physiotools/matlab/wfdb-app-matlab/wfdb-app-toolbox-0-9-3.zip';
[filestr,status] = urlwrite(wfdb_url,'wfdb-app-toolbox-0-9-3.zip');
unzip('wfdb-app-toolbox-0-9-3.zip');
% add the toolbox's mcode directory to the MATLAB path and save it
cd mcode
addpath(pwd);savepath
I am trying to read databases from PhysioNet.
I have successfully read from one database, mitdb, with
[tm,sig]=rdsamp('mitdb/100',1)
but when I try to read from the database ptbdb with
[tm,sig]=rdsamp('ptbdb/100',1)
I get the error:
Warning: Could not get signal information. Attempting to read signal without buffering.
> In rdsamp at 107
Error: Cannot convert to double:
init: can't open header for record ptbdb/100
Error using rdsamp (line 145)
Java exception occurred:
java.lang.NumberFormatException: Cannot convert
at org.physionet.wfdb.Wfdbexec.execToDoubleArray(Unknown Source)
The first error message refers to these lines in rdsamp.m:
if(isempty(N))
    [siginfo,~]=wfdbdesc(recordName);
    if(~isempty(siginfo))
        N=siginfo(1).LengthSamples;
    else
        warning('Could not get signal information. Attempting to read signal without buffering.')
    end
end
The condition if(~isempty(siginfo)) being false means that siginfo is empty, i.e. there is no signal. Why? No access to the database, I think.
I think the other errors follow from this, so the error must originate in this line:
[siginfo,~]=wfdbdesc(recordName);
What does the tilde (~) mean here in the brackets?
How can you get data from ptbdb with MATLAB?
Does this error mean that a connection to the database cannot be established, or that the data does not exist in the database?
It would be very nice to know how to check whether you have a connection to the database, as in Postgres. It would make debugging much easier.
If you run physionetdb('ptbdb',1) it will download the files to your computer. You will then be able to see the available records in <current-dir>/ptbdb/.
Source: the physionetdb function documentation. You are interested in the DoBatchDownload parameter.
After downloading, I believe every command in the toolbox checks whether you have the files locally before fetching from the server (as long as you give the function the correct path to the local files).
The problem is that the record '100' does not exist in the database ptbdb.
I finally ran this successfully, after waiting 35 minutes on 100 Mb cable broadband:
db_list = physionetdb('ptbdb')
and got incomplete data, only up to patient 54; there should be 294 patients.
'ptbdb/patient001/s0014lre' 'ptbdb/patient001/s0014lre' ... cut ...
The answer of the main developer, Ikaro, helped me to wait that long:
The WFDB Toolbox connects to PhysioNet's file server. The databases accessible through the WFDB Toolbox are not SQL databases; they consist of flat files. The error message that you are getting regarding ptbdb/100 is because you are attempting to get a record that does not exist in the database.
For more information on a particular database or record in PhysioNet, please type:
help physionetdb
and
physionetdb('ptbdb')
This flat-file system is a real bottleneck in the system.
It would be a good time to change to SQL.
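For a quick cross-check outside MATLAB, the same flat files can also be read with the Python wfdb package; a minimal sketch, using a record name taken from the listing above:

# pip install wfdb
import wfdb

# Records live under patient folders (e.g. patient001/s0014lre),
# not under plain numbers like '100'.
signals, fields = wfdb.rdsamp('s0014lre', pn_dir='ptbdb/patient001', channels=[0])
print(fields['sig_name'][0], signals[:5])  # first channel, first 5 samples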
