For six months I have been learning how to code. While following a Udemy lesson on connecting MongoDB and React, two error logs showed up simultaneously. After two days I did solve the bug, but I felt a bit misled by my console.
The errors:
1. POST http://localhost:3000/api/new-meetup 500 (Internal Server Error)
2. Uncaught (in promise) SyntaxError: Unexpected token I in JSON at position 0
The issue was authorization with the MongoDB servers, since changing the URI produced the same two error logs again.
Since the failure happened on the server side, the debugger also logged:
reason: TopologyDescription {
type: 'ReplicaSetNoPrimary',
Aren't those logs a bit misleading?
No. 1 >> a problem with the connection.
No. 2 >> a problem with the data transferred, usually an escaped character or a spelling error.
It isn't the standard error chain you usually see when coding on a platform, where a single spelling error cascades through several layers.
Is it common to have situations like this, with two errors where one is not really "the main issue", or am I missing something?
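For what it's worth, the second message is typically what you see when the client tries to parse a non-JSON response body, such as a plain "Internal Server Error" page, as JSON, so it is mostly a symptom of the first error. As a rough illustration of that failure mode (a sketch in Java using the Jackson library, not the actual React code from the lesson):

import com.fasterxml.jackson.databind.ObjectMapper;

public class NonJsonBodyDemo {
    public static void main(String[] args) {
        // Simulate a client that expects JSON but receives the server's plain-text 500 page.
        String body = "Internal Server Error";
        try {
            new ObjectMapper().readTree(body);
        } catch (Exception e) {
            // The parser fails on the very first token ("I"), which is the same failure mode
            // as the browser's "Unexpected token I in JSON at position 0".
            System.out.println("Parse failed: " + e.getMessage());
        }
    }
}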
What worked: changing the password and reauthorizing my IP address on the MongoDB website.
What didn't work: creating a new firewall rule, playing with the address, try/catch, etc.
The console logged the data as follows:
enteredMeetupData
{title: '1', image: 'https://media.istockphoto.com/photos/circuit-blue-board-background-copy-space-computer', address: '1', description: '1'}
JSON.stringify(enteredMeetupData)
{"title":"1","image":"https://media.istockphoto.com/photos/circuit-blue-board-background-copy-space-computer","address":"1","description":"1"}
Which looks OK to me.
Basically, I am facing an issue when a number of task queues are running on Google Cloud Platform. There is no error in the code or on the server, but execution of the task queues gets terminated due to instance unavailability, which triggers the task queue again and again.
I know a few reasons why this kind of termination takes place.
Reasons:
Instance Unavailable
App Error / AppEngine Error
Memory Exceeded
I want to know the other possible values for the X-AppEngine-TaskRetryReason header.
For example (the headers GAE sends with the task request):
self.request.headers {'Content_Length': '432', 'Content-Length': '432', 'X-Appengine-Current-Namespace': '75f4910a-b925-4945-92f0-b214a459f0be', 'X-Appengine-Taskexecutioncount': '1', 'X-Appengine-Tasketa': '1624452214.545367', 'User-Agent': 'AppEngine-Google; (+http://code.google.com/appengine)', 'X-Appengine-Taskpreviousresponse': '503', 'Host': 'payqa-dot-hw-pay.qa.appspot.com', 'X-Appengine-Taskretrycount': '2', 'Referer': 'http://payqa-dot-hw-pay.qa-.appspot.com/pay/runpayroll', 'Content_Type': 'application/octet-stream', 'X-Cloud-Trace-Context': 'd44fdfd56bc7733afb3169fb354b80ed/6659926505008598367', 'Traceparent': '00-d44fdfd56bc7733afb3169fb354b80ed-5c6ccfded93f0d5f-00', 'X-Appengine-Queuename': 'payroll', 'X-Appengine-Taskname': '21925984910338157231', 'Content-Type': 'application/octet-stream', 'X-Appengine-Country': 'ZZ', **'X-Appengine-Taskretryreason': 'Instance Unavailable'**}
Like I mentioned in the comments, there is no listing in the documentation of the possible values of X-AppEngine-TaskRetryReason; it only states that the header represents:
The reason for retrying the task.
That being said, there are two possible explanations: either the header has no fixed set of values and simply passes along whatever message it receives from the class or component that caused the task execution to fail, or the list of values is not shared because the Google Cloud team did not consider it necessary.
Either way, if you want to know why this happens and what values you can expect, you should open a customer issue in Google's Issue Tracker so you can follow up with their engineering team on why this is not covered in the documentation.
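In the meantime, one practical option is to log the retry-related headers on every task request so you can collect the values you actually receive over time. The question's handler is Python, but the header names are runtime-independent; here is a rough sketch as an App Engine Java servlet (the class name and response handling are made up for illustration):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical task queue handler that records why App Engine retried the task.
public class PayrollTaskServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String retryReason = req.getHeader("X-AppEngine-TaskRetryReason");           // e.g. "Instance Unavailable"
        String retryCount = req.getHeader("X-AppEngine-TaskRetryCount");
        String previousResponse = req.getHeader("X-AppEngine-TaskPreviousResponse"); // e.g. "503"
        System.out.println("retryReason=" + retryReason
                + " retryCount=" + retryCount
                + " previousResponse=" + previousResponse);
        // ... run the actual task work here ...
        resp.setStatus(200);
    }
}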
The following is the error log from the app in production. What is the easiest way to understand the "bad second byte" issue here? Any guidance would be appreciated.
The error gets thrown at different byte positions, sometimes at 2 and sometimes at 19, etc. I'm not able to reproduce the issue on the simulator. It happens rarely, and I'm not sure what is causing it.
[EDT] 0:23:57,929 - Exception: java.lang.RuntimeException - bad second byte at 19
java.lang.RuntimeException
at java_io_DataInputStream.decode:207
at java_io_DataInputStream.decodeUTF:187
at java_io_DataInputStream.decodeUTF:181
at java_io_DataInputStream.readUTF:177
at com_codename1_io_Util.readUTF:1081
at com__server_Activity.internalize:571
at com_codename1_io_Util.readObject:714
at com_codename1_io_Util.readObject:689
at com_codename1_io_Storage.readObject:264
at com_server_ServerImpl.getActivitiesOfflineMode:1898
at com__forms_AppointmentForm.lambda$onShowCompleted$14:636
at com__forms_AppointmentForm__Lambda_9.run:276
at com_codename1_ui_Display.processSerialCalls:1298
at com_codename1_ui_Display.edtLoopImpl:1242
at com_codename1_ui_Display.mainEDTLoop:1130
at com_codename1_ui_RunnableWrapper.run:120
at com_codename1_impl_CodenameOneThread.run:176
at java_lang_Thread.runImpl:153
It looks like the Activity internalize and externalize methods aren't symmetric: you're writing different data than you're reading, which leads to corruption. The mismatch happens before the readUTF line; by the time you reach the readUTF call the data is already corrupt, hence this error.
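The fix is to make the two methods mirror each other exactly: the same fields, in the same order, with the same types. A minimal sketch of a symmetric pair (the fields here are hypothetical, not your actual Activity class):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import com.codename1.io.Externalizable;
import com.codename1.io.Util;

// Hypothetical Activity: internalize reads exactly what externalize wrote,
// in the same order and with the same types.
public class Activity implements Externalizable {
    private String title;
    private int duration;

    public int getVersion() {
        return 1;
    }

    public void externalize(DataOutputStream out) throws IOException {
        Util.writeUTF(title, out); // null-safe string write
        out.writeInt(duration);
    }

    public void internalize(int version, DataInputStream in) throws IOException {
        title = Util.readUTF(in);  // must match the write order above
        duration = in.readInt();
    }

    public String getObjectId() {
        return "Activity";
    }
}

If you ever change which fields are written, bump getVersion() and branch on the version argument in internalize so data stored by older versions can still be read.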
I recently got myself a new PC (Predator Helios 300) and wanted to start using AWS on it, but when I run amplify init I get the error below, even though I already completed all the other steps such as configuration.
× Root stack creation failed
init failed
{ SignatureDoesNotMatch: Signature expired: 20190427T235724Z is now earlier than 20190428T094952Z (20190428T095452Z - 5 min.)
at Request.extractError (C:\Users\sahve\AppData\Roaming\npm\node_modules\#aws-amplify\cli\node_modules\aws-sdk\lib\protocol\query.js:50:29)
at Request.callListeners (C:\Users\sahve\AppData\Roaming\npm\node_modules\#aws-amplify\cli\node_modules\aws-sdk\lib\sequential_executor.js:106:20)
at Request.emit (C:\Users\sahve\AppData\Roaming\npm\node_modules\#aws-amplify\cli\node_modules\aws-sdk\lib\sequential_executor.js:78:10)
at Request.emit (C:\Users\sahve\AppData\Roaming\npm\node_modules\#aws-amplify\cli\node_modules\aws-sdk\lib\request.js:683:14)
at Request.transition (C:\Users\sahve\AppData\Roaming\npm\node_modules\#aws-amplify\cli\node_modules\aws-sdk\lib\request.js:22:10)
at AcceptorStateMachine.runTo (C:\Users\sahve\AppData\Roaming\npm\node_modules\#aws-amplify\cli\node_modules\aws-sdk\lib\state_machine.js:14:12)
at C:\Users\sahve\AppData\Roaming\npm\node_modules\#aws-amplify\cli\node_modules\aws-sdk\lib\state_machine.js:26:10
at Request.<anonymous> (C:\Users\sahve\AppData\Roaming\npm\node_modules\#aws-amplify\cli\node_modules\aws-sdk\lib\request.js:38:9)
at Request.<anonymous> (C:\Users\sahve\AppData\Roaming\npm\node_modules\#aws-amplify\cli\node_modules\aws-sdk\lib\request.js:685:12)
at Request.callListeners (C:\Users\sahve\AppData\Roaming\npm\node_modules\#aws-amplify\cli\node_modules\aws-sdk\lib\sequential_executor.js:116:18)
message:
'Signature expired: 20190427T235724Z is now earlier than 20190428T094952Z (20190428T095452Z - 5 min.)',
code: 'SignatureDoesNotMatch',
time: 2019-04-27T23:57:24.753Z,
requestId: 'ab179ef3-699b-11e9-bfe3-4ddc7ceb66ee',
statusCode: 403,
retryable: true }
After doing some research, it seems to be a verification problem. Does anyone have experience with this or know how to resolve it? Thanks a lot!
Any time you see an error like "is now earlier than" next to numbers that look like timestamps (20190427T235724Z -> 2019-04-27 23:57:24 UTC), that's an indicator that the error is time-related. Time matters for cryptography in order to validate certificates (so that an attacker cannot break a certificate and use it after its expiration, among other reasons) [1]. In this case, either your clock or the remote server's clock is set incorrectly. Since the remote server here is AWS, it is highly unlikely that they have any significant clock drift, leaving you as the probable outlier.
Given that you mentioned a new computer, it is even more likely that this is due to an incorrectly set system clock.
Reset/synchronize your system clock and the error should disappear.
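If you want to confirm the skew before changing any settings, one rough check (a sketch, not an official AWS tool) is to compare your local time with the Date header returned by any reliable HTTPS endpoint:

import java.net.HttpURLConnection;
import java.net.URL;

public class ClockSkewCheck {
    public static void main(String[] args) throws Exception {
        // Ask any well-maintained server for its current time via the Date response header.
        HttpURLConnection conn = (HttpURLConnection) new URL("https://aws.amazon.com").openConnection();
        conn.setRequestMethod("HEAD");
        long serverMillis = conn.getHeaderFieldDate("Date", 0L);
        long skewSeconds = (System.currentTimeMillis() - serverMillis) / 1000;
        System.out.println("Approximate local clock skew: " + skewSeconds + " seconds");
        // Signed AWS requests are typically rejected once the skew exceeds about 5 minutes,
        // which matches the "(... - 5 min.)" hint in the error above.
    }
}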
Reference [1]: https://security.stackexchange.com/q/72866/47422
I want to ask what is the difference between DriveScopes.DRIVE_METADATA_READONLY and https://www.googleapis.com/auth/drive.readonly.metadata? In other words, what is the difference between
these two forms:
https://www.googleapis.com/auth/drive.metadata.readonly //DriveScopes.DRIVE_METADATA_READONLY
https://www.googleapis.com/auth/drive.readonly.metadata
When I was using a service account to work with the Drive API, it took me a long time to figure out why my app was throwing an unauthorized exception:
Uncaught exception from servlet
com.google.api.client.auth.oauth2.TokenResponseException: 403
{
"error" : "access_denied",
"error_description" : "Requested client not authorized."
}
The String constant DriveScopes.DRIVE_METADATA_READONLY was causing the exception. In which context should I use this constant?
That's clearly a mistake in the Java API client.
The API documentation states that the correct scope is:
https://www.googleapis.com/auth/drive.readonly.metadata
Whereas when you look at the latest javadoc (at the time of this answer), you get:
https://www.googleapis.com/auth/drive.metadata.readonly
You should ignore the DriveScopes constant and define your own constant until the Google Drive team fixes this.
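A minimal sketch of that workaround for a service-account setup, assuming the GoogleCredential builder from the google-api-client library (the account email and key file are placeholders, and the scope string is the one this answer recommends; substitute whatever the current documentation lists):

import java.io.File;
import java.util.Collections;
import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.json.jackson2.JacksonFactory;

public class DriveScopeWorkaround {
    // Own constant instead of relying on DriveScopes.DRIVE_METADATA_READONLY.
    private static final String DRIVE_METADATA_READONLY_SCOPE =
            "https://www.googleapis.com/auth/drive.readonly.metadata";

    public static GoogleCredential buildCredential() throws Exception {
        return new GoogleCredential.Builder()
                .setTransport(GoogleNetHttpTransport.newTrustedTransport())
                .setJsonFactory(JacksonFactory.getDefaultInstance())
                .setServiceAccountId("my-service-account@my-project.iam.gserviceaccount.com") // placeholder
                .setServiceAccountPrivateKeyFromP12File(new File("my-key.p12"))               // placeholder
                .setServiceAccountScopes(Collections.singleton(DRIVE_METADATA_READONLY_SCOPE))
                .build();
    }
}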
I am trying to adopt the databasedotcom gem, but I couldn't get past authentication. Here is what I did (after installing the gem):
rails c (or irb then require 'databasedotcom')
client = Databasedotcom::Client.new :client_id => 'foo', :client_secret => 'bar'
client.ca_file = '/Users/tjiang/missioncontrol/tmp/ca-bundle.crt'
client.verify_mode = OpenSSL::SSL::VERIFY_PEER
client.authenticate :username=>'myusername', :password=>'mypassword'
All credentials were copy-and-pasted in the process, so no mistake there; the certificate was downloaded from http://certifie.com/ca-bundle/ca-bundle.crt.txt.
I tried Ruby 1.8.7 and 1.9.3, both inside and outside Rails, repeatedly, but always got this error message:
Databasedotcom::SalesForceError: authentication failure from /Library/Ruby/Gems/1.8/gems/databasedotcom-1.3.0/lib/databasedotcom/client.rb:112:in `authenticate'
I wonder what I have missed here. In particular, I am concerned about the Callback URL I used when creating a Remote Access in Salesforce (I tried 'oob', 'http://localhost:3000', and 'https://www.salesforce.com', but none made any difference).
It turns out this is due to a bug in databasedotcom. When you authenticate with a username and password, the gem puts them into a URL query string WITHOUT encoding and POSTs a request with that URL. As a result, the plus sign in my username was interpreted as a blank space.
Solution: CGI::escape() both your username and password.
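To see what the escaping actually buys you, here is a small illustration of the same percent-encoding in Java (the username is made up; Ruby's CGI.escape produces the equivalent encoding for these characters):

import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class EscapeDemo {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String username = "first+last@example.com"; // made-up username containing a '+'
        // Unencoded, the '+' in a query string is decoded as a space on the server side.
        // Encoded, it becomes %2B and survives the round trip intact.
        System.out.println(URLEncoder.encode(username, "UTF-8"));
        // prints: first%2Blast%40example.com
    }
}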