'amplify init' keeps failing - reactjs

I recently got myself a new PC (Predator Helios 300) and wanted to start using AWS on it, but when I run amplify init I get the error below, even though I have already completed the other steps, such as the configuration.
× Root stack creation failed
init failed
{ SignatureDoesNotMatch: Signature expired: 20190427T235724Z is now earlier than 20190428T094952Z (20190428T095452Z - 5 min.)
at Request.extractError (C:\Users\sahve\AppData\Roaming\npm\node_modules\@aws-amplify\cli\node_modules\aws-sdk\lib\protocol\query.js:50:29)
at Request.callListeners (C:\Users\sahve\AppData\Roaming\npm\node_modules\@aws-amplify\cli\node_modules\aws-sdk\lib\sequential_executor.js:106:20)
at Request.emit (C:\Users\sahve\AppData\Roaming\npm\node_modules\@aws-amplify\cli\node_modules\aws-sdk\lib\sequential_executor.js:78:10)
at Request.emit (C:\Users\sahve\AppData\Roaming\npm\node_modules\@aws-amplify\cli\node_modules\aws-sdk\lib\request.js:683:14)
at Request.transition (C:\Users\sahve\AppData\Roaming\npm\node_modules\@aws-amplify\cli\node_modules\aws-sdk\lib\request.js:22:10)
at AcceptorStateMachine.runTo (C:\Users\sahve\AppData\Roaming\npm\node_modules\@aws-amplify\cli\node_modules\aws-sdk\lib\state_machine.js:14:12)
at C:\Users\sahve\AppData\Roaming\npm\node_modules\@aws-amplify\cli\node_modules\aws-sdk\lib\state_machine.js:26:10
at Request.<anonymous> (C:\Users\sahve\AppData\Roaming\npm\node_modules\@aws-amplify\cli\node_modules\aws-sdk\lib\request.js:38:9)
at Request.<anonymous> (C:\Users\sahve\AppData\Roaming\npm\node_modules\@aws-amplify\cli\node_modules\aws-sdk\lib\request.js:685:12)
at Request.callListeners (C:\Users\sahve\AppData\Roaming\npm\node_modules\@aws-amplify\cli\node_modules\aws-sdk\lib\sequential_executor.js:116:18)
message:
'Signature expired: 20190427T235724Z is now earlier than 20190428T094952Z (20190428T095452Z - 5 min.)',
code: 'SignatureDoesNotMatch',
time: 2019-04-27T23:57:24.753Z,
requestId: 'ab179ef3-699b-11e9-bfe3-4ddc7ceb66ee',
statusCode: 403,
retryable: true }
After doing some research, it seems to be a verification problem. Does anyone have experience with this or know how to resolve the issue? Thanks a lot!

Any time you see an error like "is now earlier than" around numbers that look like timestamps (20190427T235724Z -> 2019-04-27 23:57:24 UTC), that's an indicator that the error is time-related. Time matters for cryptography in order to validate certificates (so that an attacker cannot break a certificate and use it after its expiration, among other reasons) [1]. In this case, either your clock or the remote server's clock is set incorrectly. Since the remote server here is AWS, it is highly unlikely to have any significant clock drift, which leaves your machine as the likely outlier.
Given that you mentioned a new computer, it is even more likely that this is due to an incorrectly set system clock.
Reset/synchronize your system clock and the error should disappear.
Reference [1]: https://security.stackexchange.com/q/72866/47422
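If you want to confirm the diagnosis before changing anything, you can compare the local clock against the Date header returned by any server with a trustworthy clock. A minimal sketch in Java (the endpoint choice and the 5-minute threshold are assumptions taken from the error message above, not something the Amplify CLI exposes):

import java.net.HttpURLConnection;
import java.net.URL;
import java.time.Duration;
import java.time.Instant;

public class ClockSkewCheck {
    public static void main(String[] args) throws Exception {
        // Any HTTPS endpoint works; an AWS endpoint is used here purely for illustration.
        HttpURLConnection conn = (HttpURLConnection) new URL("https://sts.amazonaws.com").openConnection();
        conn.setRequestMethod("HEAD");
        long serverMillis = conn.getHeaderFieldDate("Date", 0L); // server time from the HTTP Date header
        conn.disconnect();

        Duration skew = Duration.between(Instant.ofEpochMilli(serverMillis), Instant.now());
        System.out.println("Local clock differs from the server by " + skew.getSeconds() + " seconds");
        // Signed AWS requests are rejected once the skew exceeds roughly 5 minutes,
        // which matches the "- 5 min." window shown in the error above.
    }
}

A skew of more than a few minutes confirms that synchronizing the system clock (for example via the automatic time setting in Windows) is the fix.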

Related

A misleading "SyntaxError" log + 500

For six months I have been learning how to code. While following a Udemy lesson on connecting MongoDB and React, two error logs showed up simultaneously. After two days I did solve the bug; however, I felt a bit misled by my console.
The errors:
1. POST http://localhost:3000/api/new-meetup 500 (Internal Server Error)
2. Uncaught (in promise) SyntaxError: Unexpected token I in JSON at position 0
The authorization with the MongoDB servers caused the issue, since changing the URI produced the same two error logs again.
Since it was server-side, the debugger also logged:
reason: TopologyDescription {
type: 'ReplicaSetNoPrimary',
Aren't those logs a bit misleading?
No. 1 >> a problem with the connection.
No. 2 >> a problem with the data transferred, usually an escaped character or a spelling error.
It isn't the standard error chain one usually sees when coding with a platform, where a spelling error crashes through many levels.
Is it common to have situations like this with two errors, one of which is not really "the main issue," or am I missing something?
What worked: changing the password and reauthorizing my IP on the MongoDB website.
What didn't work: creating a new firewall rule, playing with the address, try/catch, etc.
The console logged the data as follows:
enteredMeetupData
{title: '1', image: 'https://media.istockphoto.com/photos/circuit-blue-board-background-copy-space-computer', address: '1', description: '1'}
JSON.stringify(enteredMeetupData)
{"title":"1","image":"https://media.istockphoto.com/photos/circuit-blue-board-background-copy-space-computer","address":"1","description":"1"}
Which looks OK to me.

Codenameone: Android Build Failed with strange exception

Since today (28.03.) the build of my app (CN1 build server) throws a build exception which I don't understand. The build worked yesterday without any error. The error from the error log:
Dex: The number of method references in a .dex file cannot exceed 64K.
Learn how to resolve this issue at https://developer.android.com/tools/building/multidex.html
UNEXPECTED TOP-LEVEL EXCEPTION:
com.android.dex.DexIndexOverflowException: method ID not in [0, 0xffff]: 65536
com.android.dex.DexIndexOverflowException: method ID not in [0, 0xffff]: 65536
at com.android.dx.merge.DexMerger$8.updateIndex(DexMerger.java:565)
at com.android.dx.merge.DexMerger$IdMerger.mergeSorted(DexMerger.java:276)
at com.android.dx.merge.DexMerger.mergeMethodIds(DexMerger.java:574)
at com.android.dx.merge.DexMerger.mergeDexes(DexMerger.java:166)
at com.android.dx.merge.DexMerger.merge(DexMerger.java:198)
at com.android.builder.dexing.DexArchiveMergerCallable.call(DexArchiveMergerCallable.java:61)
at com.android.builder.dexing.DexArchiveMergerCallable.call(DexArchiveMergerCallable.java:36)
at java.util.concurrent.ForkJoinTask$AdaptedCallable.exec(ForkJoinTask.java:1424)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1689)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
:transformDexArchiveWithDexMergerForDebug FAILED
:transformDexArchiveWithDexMergerForDebug (Thread[Daemon worker,5,main]) completed. Took 0.334 secs.
FAILURE: Build failed with an exception.
Can anybody help me to understand what went wrong?
The error is:
Dex: The number of method references in a .dex file cannot exceed 64K.
In this case, add the build hint:
android.multidex=true
As written in the developer guide (link):
android.multidex -> Boolean true/false defaults to false. Multidex allows Android binaries to reference more than 65536 methods. This slows builds a bit so we have it off by default but if you get a build error mentioning this limit you should turn this on.
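As a side note, if you keep build hints in the project files rather than in the Codename One Settings UI, the same hint can typically be added to codenameone_settings.properties, where build hints carry a codename1.arg. prefix (the file name and prefix are the standard CN1 project layout, stated here as an assumption about your setup):
codename1.arg.android.multidex=true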

AppEngine RemoteAPI SocketTimeoutException

I'm using RemoteAPI to fetch entities from GAE Datastore, 300 at a time.
I'm doing something along the lines of:
while (!(emails = getEmails()).isEmpty()) {
    // Build an IN filter over the current batch of email addresses
    Filter filter = new FilterPredicate("email", FilterOperator.IN, emails);
    Query query = new Query("MyEntity").setFilter(filter);
    QueryResultIterable<Entity> result = ds.prepare(query).asQueryResultIterable();
    for (Entity entity : result) {
        System.out.println(entity.getProperty("name"));
    }
}
I'm processing something like 50k emails. The first time I ran this code it got maybe 3/4 of the way through, then it threw the following exception. Now it throws after a single loop iteration.
com.google.appengine.tools.remoteapi.RemoteApiException: remote API call: I/O error
at com.google.appengine.tools.remoteapi.RemoteRpc.makeException(RemoteRpc.java:160)
at com.google.appengine.tools.remoteapi.RemoteRpc.callImpl(RemoteRpc.java:104)
at com.google.appengine.tools.remoteapi.RemoteRpc.call(RemoteRpc.java:50)
at com.google.appengine.tools.remoteapi.RemoteDatastore.runQuery(RemoteDatastore.java:156)
at com.google.appengine.tools.remoteapi.RemoteDatastore.handleRunQuery(RemoteDatastore.java:115)
at com.google.appengine.tools.remoteapi.RemoteDatastore.handleDatastoreCall(RemoteDatastore.java:93)
at com.google.appengine.tools.remoteapi.RemoteApiDelegate.makeDefaultSyncCall(RemoteApiDelegate.java:57)
at com.google.appengine.tools.remoteapi.StandaloneRemoteApiDelegate.makeSyncCall(StandaloneRemoteApiDelegate.java:47)
at com.google.appengine.tools.remoteapi.StandaloneRemoteApiDelegate$1.call(StandaloneRemoteApiDelegate.java:58)
at com.google.appengine.tools.remoteapi.StandaloneRemoteApiDelegate$1.call(StandaloneRemoteApiDelegate.java:54)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:442)
at sun.security.ssl.InputRecord.read(InputRecord.java:480)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:934)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:891)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:102)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:690)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1324)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:468)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:338)
at com.google.appengine.repackaged.com.google.api.client.http.javanet.NetHttpResponse.<init>(NetHttpResponse.java:37)
at com.google.appengine.repackaged.com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:94)
at com.google.appengine.repackaged.com.google.api.client.http.HttpRequest.execute(HttpRequest.java:972)
at com.google.appengine.tools.remoteapi.OAuthClient.post(OAuthClient.java:54)
at com.google.appengine.tools.remoteapi.RemoteRpc.callImpl(RemoteRpc.java:102)
... 12 more
I can't figure out what the problem is, but the code seems to be evaluating the for() condition before throwing the exception.
Could this be a quota problem? The quota details screen doesn't show any problems and I can't find any relevant information in the documentation.
For future readers of this question, if you see occurrences of RemoteApiException: remote API call: I/O error which are happening consistently and not intermittently, this could be related to a disruption in network connectivity or possibly a remote issue on the App Engine side.
If the first possibility is ruled out, the best course of action is to report the issue on the Google App Engine issue tracker.
To fix this, first check your Internet connection. Then clean all artifacts and rebuild them (in IntelliJ):
Go to Build => Build Artifacts...
Focus on All Artifacts => Clean
Focus on All Artifacts => Build
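If the timeouts turn out to be transient rather than a quota issue, it can also help to keep each remote call small and retry a failed batch instead of restarting the whole run. A rough sketch under those assumptions, reusing the ds datastore handle and the getEmails() helper from the question (the chunk size and retry budget are arbitrary, and a retried batch may reprint entities that were already processed):

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.FetchOptions;
import com.google.appengine.api.datastore.Query;
import com.google.appengine.api.datastore.Query.Filter;
import com.google.appengine.api.datastore.Query.FilterOperator;
import com.google.appengine.api.datastore.Query.FilterPredicate;

import java.util.List;

public class EmailBatchFetcher {

    private static final int MAX_ATTEMPTS = 3; // arbitrary retry budget for transient I/O errors

    private final DatastoreService ds = DatastoreServiceFactory.getDatastoreService();

    void processBatch(List<String> emails) {
        Filter filter = new FilterPredicate("email", FilterOperator.IN, emails);
        Query query = new Query("MyEntity").setFilter(filter);

        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                // Smaller chunks mean smaller individual RPCs through the remote API.
                Iterable<Entity> result = ds.prepare(query)
                        .asQueryResultIterable(FetchOptions.Builder.withChunkSize(50));
                for (Entity entity : result) {
                    System.out.println(entity.getProperty("name"));
                }
                return; // this batch succeeded
            } catch (RuntimeException e) { // RemoteApiException is unchecked
                if (attempt == MAX_ATTEMPTS) {
                    throw e;
                }
                System.err.println("Attempt " + attempt + " failed, retrying: " + e);
            }
        }
    }
}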

Is there a list of applicationError codes & descriptions for the Google AppEngine ApplicationException?

Hello everyone, and thanks in advance for your time.
I am working on a Java-based GAE web application, and now and then I get ApiProxy.ApplicationExceptions.
In the current case they appear randomly and come with applicationError 108 when I open a write channel to a blob using the (yes, I know, still experimental) FileStore API. Although the API is still experimental, I'd like to handle the thrown exception correctly. Thus my question:
Where can I find a list of possible application errors including their descriptions?
As of right now it is not possible for me to figure out where the problem resides, since the thrown exception does not contain anything like a message, hint, or reason phrase, only the error ID 108:
Caused by: com.google.apphosting.api.ApiProxy$ApplicationException: ApplicationError: 108:
at java.lang.Thread.getStackTrace(Thread.java:1495)
at com.google.apphosting.runtime.ApiProxyImpl.doSyncCall(ApiProxyImpl.java:240)
at com.google.apphosting.runtime.ApiProxyImpl.access$000(ApiProxyImpl.java:66)
at com.google.apphosting.runtime.ApiProxyImpl$1.run(ApiProxyImpl.java:183)
at com.google.apphosting.runtime.ApiProxyImpl$1.run(ApiProxyImpl.java:180)
at java.security.AccessController.doPrivileged(Native Method)
at com.google.apphosting.runtime.ApiProxyImpl.makeSyncCall(ApiProxyImpl.java:180)
at com.google.apphosting.runtime.ApiProxyImpl.makeSyncCall(ApiProxyImpl.java:66)
at com.googlecode.objectify.cache.TriggerFutureHook.makeSyncCall(TriggerFutureHook.java:154)
at com.google.apphosting.api.ApiProxy.makeSyncCall(ApiProxy.java:107)
at com.google.apphosting.api.ApiProxy.makeSyncCall(ApiProxy.java:56)
at com.google.appengine.api.files.FileServiceImpl.makeSyncCall(FileServiceImpl.java:584)
... 65 more
Also, the corresponding Javadoc is quite conservative about giving information: https://developers.google.com/appengine/docs/java/javadoc/com/google/apphosting/api/ApiProxy.ApplicationException
Currently I bluntly cancel these requests with a 500, but since I am not sure what has happened, I should probably do something else or more.
Thanks a lot!
The best information I could find is in the Python source code:
http://code.google.com/p/googleappengine/source/browse/trunk/python/google/appengine/api/files/file_service_pb.py
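Until the error codes are documented on the Java side, one pragmatic option is to log whatever the exception does carry (the numeric code plus the usually empty detail string) and retry, since the errors appear randomly; note that in practice the ApplicationException may arrive wrapped inside an API-specific exception, as the "Caused by" in the stack trace above shows. A small illustrative sketch along those lines (the retry budget and the generic callWithRetry helper are assumptions, not an official mapping of error 108):

import com.google.apphosting.api.ApiProxy;

import java.util.concurrent.Callable;

public class ApplicationErrorHandling {

    private static final int MAX_ATTEMPTS = 3; // arbitrary retry budget for the randomly occurring errors

    // Wrap the experimental Files API call (e.g. opening the write channel) in apiCall.
    static <T> T callWithRetry(Callable<T> apiCall) throws Exception {
        ApiProxy.ApplicationException last = null;
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                return apiCall.call();
            } catch (ApiProxy.ApplicationException e) {
                // The exception at least exposes a numeric code and an (often empty) detail string.
                System.err.println("ApplicationError " + e.getApplicationError()
                        + ", detail='" + e.getErrorDetail() + "', attempt " + attempt);
                last = e;
            }
        }
        throw last; // give up; the caller can still map this to a 500 or a retry-later response
    }
}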

Tomcat cluster fails and generates tons of logs

Periodically, I'm getting problems with my Tomcat 6 cluster (2 nodes). One of the nodes would just go haywire and generate a ton of logs repeating the following:
Aug 25, 2009 11:44:10 AM org.apache.catalina.ha.session.DeltaRequest reset
SEVERE: Unable to remove element
java.util.NoSuchElementException
at java.util.LinkedList.remove(LinkedList.java:788)
at java.util.LinkedList.removeFirst(LinkedList.java:134)
at org.apache.catalina.ha.session.DeltaRequest.reset(DeltaRequest.java:201)
at org.apache.catalina.ha.session.DeltaRequest.execute(DeltaRequest.java:195)
at org.apache.catalina.ha.session.DeltaManager.handleSESSION_DELTA(DeltaManager.java:1364)
at org.apache.catalina.ha.session.DeltaManager.messageReceived(DeltaManager.java:1320)
at org.apache.catalina.ha.session.DeltaManager.messageDataReceived(DeltaManager.java:1083)
at org.apache.catalina.ha.session.ClusterSessionListener.messageReceived(ClusterSessionListener.java:87)
at org.apache.catalina.ha.tcp.SimpleTcpCluster.messageReceived(SimpleTcpCluster.java:916)
at org.apache.catalina.ha.tcp.SimpleTcpCluster.messageReceived(SimpleTcpCluster.java:897)
at org.apache.catalina.tribes.group.GroupChannel.messageReceived(GroupChannel.java:264)
at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:79)
at org.apache.catalina.tribes.group.interceptors.TcpFailureDetector.messageReceived(TcpFailureDetector.java:110)
at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:79)
at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:79)
at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:79)
at org.apache.catalina.tribes.group.ChannelCoordinator.messageReceived(ChannelCoordinator.java:241)
at org.apache.catalina.tribes.transport.ReceiverBase.messageDataReceived(ReceiverBase.java:225)
at org.apache.catalina.tribes.transport.nio.NioReplicationTask.drainChannel(NioReplicationTask.java:188)
at org.apache.catalina.tribes.transport.nio.NioReplicationTask.run(NioReplicationTask.java:91)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
at java.lang.Thread.run(Thread.java:619)
That's the only thing it shows. The other node in the cluster is still active at this time. There's nothing to do but restart. The large volume of logs has caused disk space issues more than a couple of times too.
Does anybody have any idea what's wrong here?
Thanks!
Wong
Appears to be a bug in Tomcat 6. If you look at the source at:
http://www.java2s.com/Open-Source/Java-Document/Sevlet-Container/apache-tomcat-6.0.14/org/apache/catalina/ha/session/DeltaRequest.java.htm (line 225)
you'll see that the reset() method can potentially throw this exception. I suggest that you contact the Tomcat developers regarding this issue.
