I am calling a function from a smart contract:
ContractCallQuery query = new ContractCallQuery()
.setContractId(ContractId.fromString(request.getContractID()))
.setGas(100_000_000)
.setFunction(request.getContractFunctionName(), contractFunctionParameters);
This results in a BUSY response, eventually timing out at attempt #47:
web - 2023-01-30 22:53:28,795 [Test worker] WARN c.h.h.sdk.Query$QueryCostQuery - Problem submitting request to node 0.0.3 for attempt #43, retry with new node: BUSY
web - 2023-01-30 22:53:28,936 [Test worker] WARN c.h.h.sdk.Query$QueryCostQuery - Retrying node 0.0.6 in 8000 ms after failure during attempt #44: BUSY
web - 2023-01-30 22:53:37,032 [Test worker] WARN c.h.h.sdk.Query$QueryCostQuery - Problem submitting request to node 0.0.5 for attempt #45, retry with new node: BUSY
web - 2023-01-30 22:53:37,182 [Test worker] WARN c.h.h.sdk.Query$QueryCostQuery - Problem submitting request to node 0.0.3 for attempt #46, retry with new node: BUSY
web - 2023-01-30 22:53:37,321 [Test worker] WARN c.h.h.sdk.Query$QueryCostQuery - Retrying node 0.0.6 in 8000 ms after failure during attempt #47: BUSY
java.util.concurrent.TimeoutException
This used to work and I haven't changed any code. How do I resolve this?
The BUSY error happens when the network is congested with traffic from other users and the throttle has reached its limit. You'll have to wait until the network is no longer congested to complete any transactions.
If you are on testnet, consider running a local instance to continue testing. It's highly encouraged to use a local node for development.
Here is a tutorial on how to set up your own local node: https://hedera.com/blog/how-to-set-up-your-own-hedera-local-network-using-docker
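If you go the local-node route, the only code change is where the SDK Client points. Here is a minimal sketch, assuming the default endpoints of the hedera-local-node Docker setup from that tutorial (consensus node at 127.0.0.1:50211 as account 0.0.3) and operator credentials supplied via environment variables; adjust these to your own local network:
import com.hedera.hashgraph.sdk.AccountId;
import com.hedera.hashgraph.sdk.Client;
import com.hedera.hashgraph.sdk.PrivateKey;
import java.util.Map;

public final class LocalNodeClient {
    public static Client create() {
        // 127.0.0.1:50211 / account 0.0.3 are the local-node defaults assumed here
        Client client = Client.forNetwork(Map.of("127.0.0.1:50211", new AccountId(3)));
        // Operator account and key are placeholders -- use the credentials your local setup generates
        return client.setOperator(
                AccountId.fromString(System.getenv("OPERATOR_ID")),
                PrivateKey.fromString(System.getenv("OPERATOR_KEY")));
    }
}
The existing ContractCallQuery can then be executed against this client unchanged.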
ABP version 4.4 app with Identity Server, Angular, and EF Core
My Angular client sends the username and password to the Identity Server that is part of my ABP application (i.e. the same URL serves both the API and the Identity Server), and it returns a token. But the very next request, to "api/abp/application-configuration", fails with a CORS error.
The same application works in the QA/local environment with local config.
The same call to "api/abp/application-configuration" works on app start, because at that point no token is attached to the request header.
I checked my app settings: CorsOrigins matches IdentityServerClientCorsOrigins.
There is no whitespace issue; both the Angular front end and the API are on HTTPS, just on different ports, as you can verify in the logs. One strange thing in the logs: no protocol is attached to the host in the failure (domain:1000).
I checked and confirmed that this is an authentication issue: after commenting out the following two lines and the Authorize attribute on the class, my application works fine with no CORS issue:
app.UseAuthentication();
app.UseJwtTokenMiddleware();
System log:
2021-09-04 10:42:24.731 -04:00 [INF] Request starting HTTP/2 OPTIONS https://domain:1000/.well-known/openid-configuration - -
2021-09-04 10:42:24.745 -04:00 [INF] CORS policy execution successful.
2021-09-04 10:42:24.751 -04:00 [INF] Request finished HTTP/2 OPTIONS https://domain:1000/.well-known/openid-configuration - - - 204 - - 19.7919ms
2021-09-04 10:42:24.793 -04:00 [INF] Request starting HTTP/2 GET https://domain:1000/.well-known/openid-configuration - -
2021-09-04 10:42:24.794 -04:00 [INF] CORS policy execution successful.
2021-09-04 10:42:46.050 -04:00 [ERR] Exception occurred while processing message.
System.InvalidOperationException: IDX20803: Unable to obtain configuration from: 'System.String'.
System.IO.IOException: IDX20804: Unable to retrieve document from: 'System.String'.
System.Net.Http.HttpRequestException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. (domain:1000)
System.Net.Sockets.SocketException (10060): A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
It is not a CORS issue; it is because your API (the UseJwtTokenMiddleware part) can't reach your IdentityServer over HTTP(S). I suspect you have a DNS, certificate, or network issue to resolve.
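A quick way to confirm this is to check, from the machine hosting the API, whether the discovery document is reachable at all. Here is a minimal, framework-independent sketch in plain Java (the URL is just the placeholder from your logs):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class DiscoveryProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(10))
                .build();
        HttpRequest request = HttpRequest.newBuilder()
                // Replace with your real authority URL
                .uri(URI.create("https://domain:1000/.well-known/openid-configuration"))
                .GET()
                .build();
        // A connect timeout or SSL failure here points at the DNS/certificate/network
        // problem rather than CORS.
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}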
We are facing the error below in Vespa; it appeared after restarting the cluster.
1600455444.680758 10.10.000.00 1030/1 container Container.com.yahoo.filedistribution.fileacquirer.FileAcquirerImpl info Retrying waitFor for file 'e0ce64d459828eb0': 103 -- Request timed out after 60.0 seconds.
1600455446.819853 10.10.000.00 32752/146 configproxy configproxy.com.yahoo.vespa.filedistribution.FileReferenceDownloader info Request failed. Req: request filedistribution.serveFile(e0ce64d459828eb0,0)\nSpec: tcp/10.10.000.00:19070, error code: 103, set error for connection and use another for next request
This is the second time we have faced this issue; the first time we left it idle and it resolved itself, but this time it is persistent.
Looks like the configproxy is unable to talk to the config server (which should be listening on port 19070 on the same host: Spec: tcp/10.10.000.00:19070). Is the config server really running and listening on port 19070 on this host? Try running the vespa-config-status script to see if all is well with the config system.
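If vespa-config-status is not conclusive, a bare TCP check tells you whether anything is listening on the config server port at all. A minimal sketch in Java (host and port copied from the log line above; substitute your real config server host):
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ConfigServerCheck {
    public static void main(String[] args) {
        String host = "10.10.000.00"; // config server host from the log
        int port = 19070;             // config server port from the log
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 5_000);
            System.out.println("Config server port is reachable.");
        } catch (IOException e) {
            // Connection refused or timeout: nothing is listening there
            System.out.println("Cannot reach " + host + ":" + port + ": " + e.getMessage());
        }
    }
}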
I'm following this doc to configure probes for JobManager and TaskManager on Kubernetes.
JobManager works perfectly, but TaskManager doesn't work. I noticed in the pod log that the liveness probe failed:
Normal Killing 3m36s kubelet, gke-dagang-test-default-pool-494df2ba-vhs5 Killing container with id docker://taskmanager:Container failed liveness probe.. Container will be killed and recreated.
Warning Unhealthy 37s (x8 over 7m37s) kubelet, gke-dagang-test-default-pool-494df2ba-vhs5 Liveness probe failed: dial tcp 10.20.1.54:6122: connect: connection refused
I'm wondering: does the TaskManager actually listen on port 6122?
Flink version: 1.9.0
It turns out it was because I didn't add taskmanager.rpc.port: 6122 in flink-config.yaml; now it works perfectly.
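For reference, these are the two places that have to agree on the port: the fixed RPC port in the Flink configuration and the port the liveness probe dials. A minimal sketch (the probe timings below are assumptions, adjust to your deployment):
# flink-config.yaml: pin the TaskManager RPC port so the probe has a fixed target
taskmanager.rpc.port: 6122

# TaskManager deployment spec: the liveness probe must dial that same port
livenessProbe:
  tcpSocket:
    port: 6122
  initialDelaySeconds: 30
  periodSeconds: 60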
I am using Gatling 3.0.0 as a plugin in SBT. I configured the browser as described under the Configuration heading at https://gatling.io/docs/current/http/recorder/#recorder. Then, when I start the recorder using gatling:startRecorder in SBT and try to hit my website https://www.example.com/, Firefox displays:
Did Not Connect: Potential Security Issue
Firefox detected a potential security threat and did not continue to www.mydomain.com because this website requires a secure connection.
www.mydomain.com has a security policy called HTTP Strict Transport Security (HSTS), which means that Firefox can only connect to it securely. You can’t add an exception to visit this site
Here are the exception logs:
ioEventLoopGroup-2-1] DEBUG io.netty.handler.ssl.util.InsecureTrustManagerFactory - Accepting a server certificate: CN=www.mydomain.com
14:44:55.604 [nioEventLoopGroup-4-2] DEBUG io.gatling.recorder.http.Mitm$ - Open new server channel
14:44:55.607 [nioEventLoopGroup-4-1] WARN io.netty.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:472)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:128)
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:308)
at java.base/sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:279)
at java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:181)
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:164)
at java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:672)
at java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:627)
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:443)
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:422)
at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:634)
at io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:294)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1297)
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1199)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1243)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441)
... 16 common frames omitted
14:44:55.609 [recorder-akka.actor.default-dispatcher-4] DEBUG io.gatling.recorder.http.flows.SecuredNoProxyMitmActor - Server channel 6acf48e4 was closed while in Connected state, closing
14:44:55.622 [recorder-akka.actor.default-dispatcher-2] DEBUG io.gatling.recorder.http.flows.PlainNoProxyMitmActor - serverChannel=8d7b2171 received init request http://detectportal.firefox.com/success.txt, connecting
14:44:55.622 [recorder-akka.actor.default-dispatcher-2] DEBUG io.gatling.recorder.http.flows.PlainNoProxyMitmActor - Connecting to Remote(detectportal.firefox.com,80)
14:44:55.629 [recorder-akka.actor.default-dispatcher-4] INFO akka.actor.RepointableActorRef - Message [io.gatling.recorder.http.flows.MitmMessage$ClientChannelInactive] without sender to Actor[akka://recorder/user/$a#-1754914561] was not delivered. [1] dead letters encountered. If this is not an expected behavior, then [Actor[akka://recorder/user/$a#-1754914561]] may have terminated unexpectedly, This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
14:44:55.655 [nioEventLoopGroup-2-2] DEBUG io.gatling.recorder.http.Mitm$ - Open new client channel
How did you configure the proxy in your Firefox? Do you have a proxy only for HTTP, or also for HTTPS? If you are proxying HTTPS requests as well, then in the recorder settings you need to switch "HTTPS mode" to "Certificate Authority". There will be a button to generate a new certificate authority file, which you need to import into your browser (Preferences / Privacy & Security / Certificates / View Certificates / Import). After that your browser will know that it can trust the Gatling proxy server, and you should be able to proxy SSL requests as well.
We have a blocking issue trying to execute queries via WSO2 DSS.
After a day of usage, query results are no longer retrieved by DSS consistently (the same query might return results on the next try).
Using a network monitoring tool, we could see that the DB responds almost immediately; however, DSS does not acknowledge this response.
The same code in the acceptance environment does not have this problem.
This is what I get in my log:
[2015-11-27 09:45:17,308] WARN - SourceHandler Connection time out after request is read: http-incoming-1366
[2015-11-27 09:45:17,314] WARN - TargetHandler http-outgoing-1267: Connection time out while in state: REQUEST_DONE
[2015-11-27 09:45:17,315] WARN - FaultHandler ERROR_CODE : 101507
[2015-11-27 09:45:17,315] WARN - FaultHandler ERROR_MESSAGE : Error in Sender
[2015-11-27 09:45:17,315] WARN - FaultHandler ERROR_DETAIL : Error in Sender
[2015-11-27 09:45:17,316] WARN - FaultHandler ERROR_EXCEPTION : null
[2015-11-27 09:45:17,316] WARN - FaultHandler FaultHandler : Endpoint [Sample_First]
[2015-11-27 09:45:17,316] WARN - FailoverEndpoint Endpoint [getMyOverviewServiceEndpoint] Detect a Failure in a child endpoint : Endpoint [Sample_First]
Any pointers would be of great help!
Thanks,
Siby Mathew
Update
We found that in production WSO2 initially connects to Google DNS, times out, and then connects to the proper DNS server to resolve the database server.
Could this be the reason for the inconsistent behaviour?
Is there any default code in DSS that connects to Google DNS?