INVALID_REQUEST error in pull taskqueues - google-app-engine

I'm using a pull queue in Appengine for Go and while locally leasing tasks worked just fine, when I deployed my code, the call to taskqueue.Lease gave me this error:
API error 13 (taskqueue: INVALID_REQUEST)
My lease call was:
tasks, err := taskqueue.Lease(ctx, 100, "pullqueue", 60)
And it happens no matter what parameters I pass in, even with a blank queue name. Has anyone else run into this error? Thanks in advance for the help!

Never mind: in my real code I was trying to lease 2,000 tasks per call, but the maximum number of tasks per lease request is 1,000.

Related

Monero wallet-rpc keeps saying 'set max-reorg-depth N' no matter what I do

No matter what I do, my monero-wallet-rpc keeps saying:
2021-06-09 15:58:56.402 E reorg_depth > m_max_reorg_depth. THROW EXCEPTION: error::reorg_depth_error
And:
2021-06-09 15:58:56.430 E Exception at while refreshing, what=reorg exceeds maximum allowed depth, use 'set max-reorg-depth N' to allow it, reorg depth: 744
For the past few days I've been searching around and found no information about this. I am trying to run a full Monero node with monerod as the daemon plus wallet-rpc.
This is how I am starting both monerod and wallet-rpc:
monerod --config-file /root/.bitmonero/monerod.conf --confirm-external-bind
monero-wallet-rpc --rpc-bind-ip EXTERNAL-IP --rpc-bind-port 18082 --log-level 2 --wallet-file /root/.bitmonero/testwallet --confirm-external-bind --daemon-address EXTERNAL-IP:18081 --daemon-login user:pass --password WALLET-PASSWORD
My monerod.conf:
data-dir=/root/.bitmonero
log-file=/root/.bitmonero/monero.log
log-level=0
rpc-bind-ip=PUBLIC-IP
rpc-bind-port=18081
rpc-login=user:pass
This is how I created the wallet:
monero-wallet-cli --trusted-daemon --daemon-address PUBLIC-IP:18081 --daemon-login user:pass
I entered the wallet name and password, didn't set background mining, the refresh completed, and no errors occurred. I exit and start the wallet-rpc, and I get the same error again. I tried set max-reorg-depth 744 in the wallet-cli and started the wallet-rpc again: same error. No matter what I do, the same error occurs. Monero doesn't offer any documentation on "set max-reorg-depth N", which is the sad part.
I tried creating multiple different wallets; the same error happens on every single one.
Yes, my daemon is fully synchronized; I even restarted it a couple of times to make sure everything is fine.
Can anyone brighten my day and explain what exactly I am doing wrong?
Thank you!
Problem fixed. All you have to do is open monero-wallet-cli, load your master wallet, and execute the following command: set max-reorg-depth 2378751 (the current number of blocks).

Creating an account fails with "Failed to get account from validator, error: Waypoint value mismatch"

The use case
I am following the tutorial to create my first transaction: https://developers.diem.com/docs/tutorials/my-first-transaction
I run Ubuntu 20.04
I executed those commands successfully:
git clone https://github.com/diem/diem.git && cd diem
git checkout testnet
./scripts/dev_setup.sh
The error:
I created a first account with this command
libra% account create
Running the command triggered this error:
>> Creating/retrieving next local account from wallet
2020-12-18T21:02:29.644049Z [main] ERROR testsuite/cli/src/client_proxy.rs:1320
Failed to get account from validator, error: Waypoint value mismatch:
waypoint value = 3139c30efe6dbde4228efb9df32c137dc3a2490b97ab6a086897be1d0cb336f0
, given value = 8ce65af8ca7ad5c9da796fbfccdc1bd53f5cdf58616322d5d574c7ca93ddd583
Created/retrieved local account #0 address bda28b9df5b779a854f6f0a035d10484
How can I find out where the waypoint 8ce65a comes from? I have found where the 3139c3 waypoint comes from: https://testnet.libra.org/waypoint.txt
I do see the final message stating that the account was created, though. Is that a safe assumption?
Waypoints are generated on Diem at regular intervals, typically at epoch boundaries.
Initially the account might not appear on the ledger because the client is not yet synced. The account will appear once the sync has completed, so yes, it is a safe assumption.

'amplify init' keeps failing

I recently got myself a new PC (Predator Helios 300) and wanted to start using AWS on it, but when I run amplify init I get the error below, even though I already completed all the other steps such as configuration.
× Root stack creation failed
init failed
{ SignatureDoesNotMatch: Signature expired: 20190427T235724Z is now earlier than 20190428T094952Z (20190428T095452Z - 5 min.)
at Request.extractError (C:\Users\sahve\AppData\Roaming\npm\node_modules\@aws-amplify\cli\node_modules\aws-sdk\lib\protocol\query.js:50:29)
at Request.callListeners (C:\Users\sahve\AppData\Roaming\npm\node_modules\@aws-amplify\cli\node_modules\aws-sdk\lib\sequential_executor.js:106:20)
at Request.emit (C:\Users\sahve\AppData\Roaming\npm\node_modules\@aws-amplify\cli\node_modules\aws-sdk\lib\sequential_executor.js:78:10)
at Request.emit (C:\Users\sahve\AppData\Roaming\npm\node_modules\@aws-amplify\cli\node_modules\aws-sdk\lib\request.js:683:14)
at Request.transition (C:\Users\sahve\AppData\Roaming\npm\node_modules\@aws-amplify\cli\node_modules\aws-sdk\lib\request.js:22:10)
at AcceptorStateMachine.runTo (C:\Users\sahve\AppData\Roaming\npm\node_modules\@aws-amplify\cli\node_modules\aws-sdk\lib\state_machine.js:14:12)
at C:\Users\sahve\AppData\Roaming\npm\node_modules\@aws-amplify\cli\node_modules\aws-sdk\lib\state_machine.js:26:10
at Request.<anonymous> (C:\Users\sahve\AppData\Roaming\npm\node_modules\@aws-amplify\cli\node_modules\aws-sdk\lib\request.js:38:9)
at Request.<anonymous> (C:\Users\sahve\AppData\Roaming\npm\node_modules\@aws-amplify\cli\node_modules\aws-sdk\lib\request.js:685:12)
at Request.callListeners (C:\Users\sahve\AppData\Roaming\npm\node_modules\@aws-amplify\cli\node_modules\aws-sdk\lib\sequential_executor.js:116:18)
message:
'Signature expired: 20190427T235724Z is now earlier than 20190428T094952Z (20190428T095452Z - 5 min.)',
code: 'SignatureDoesNotMatch',
time: 2019-04-27T23:57:24.753Z,
requestId: 'ab179ef3-699b-11e9-bfe3-4ddc7ceb66ee',
statusCode: 403,
retryable: true }
After doing some research, it seems to be a signature verification problem. Does anyone have experience with this or know how to resolve this issue? Thanks a lot!
Any time you see an error like "is now earlier than" around some numbers that look like timestamps (20190427T235724Z -> 2019-04-27 23:57:24 UTC), that's an indicator that the error is time related. Time matters for cryptography in order to validate certificates (so that an attacker cannot break a certificate and use it after its expiration, among other reasons) [1]. In this case, either your clock or the remote server clock is wrongly set. Since the remote server in this case is AWS, it is highly unlikely that they have any significant clock drift, leaving you as the possible outlier.
Given that you mentioned a new computer, it is even more likely that this is due to an incorrectly set system clock.
Reset/synchronize your system clock and the error should disappear.
Reference [1]: https://security.stackexchange.com/q/72866/47422
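If you want to confirm the drift before touching any settings, you can compare the local clock against an NTP server. A minimal sketch in Python using the third-party ntplib package (not related to Amplify; install it with pip install ntplib), with pool.ntp.org and the 5-minute window from the error message as illustrative choices:

import ntplib

# Ask a public NTP server how far off the local clock is, in seconds.
# pool.ntp.org is just an example server, not anything AWS requires.
client = ntplib.NTPClient()
response = client.request('pool.ntp.org', version=3)
print('Local clock offset: %.1f seconds' % response.offset)
if abs(response.offset) > 300:  # the error mentions a 5-minute signature window
    print('Clock drift exceeds the signature window; resynchronize the system clock.')

On Windows, resynchronizing usually means enabling "Set time automatically" in the date and time settings or running w32tm /resync from an elevated prompt.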

How to catch BigQuery loading errors from an AppEngine pipeline

I have built a pipeline on AppEngine that loads data from Cloud Storage into BigQuery. This works fine, until there is an error. How can I catch loading exceptions raised by BigQuery from my AppEngine code?
The code in the pipeline looks like this:
#Run the job
credentials = AppAssertionCredentials(scope=SCOPE)
http = credentials.authorize(httplib2.Http())
bigquery_service = build("bigquery", "v2", http=http)
jobCollection = bigquery_service.jobs()
result = jobCollection.insert(projectId=PROJECT_ID,
                              body=build_job_data(table_name, cloud_storage_files)).execute()
insertResponse = result['jobReference']['jobId']

#Get the status
while (not allDone and not runtime.is_shutting_down()):
    try:
        job = jobCollection.get(projectId=PROJECT_ID,
                                jobId=insertResponse).execute()
        #Do something with job.get('status')
    except:
        exc_type, exc_value, exc_traceback = sys.exc_info()
        logging.error(traceback.format_exception(exc_type, exc_value, exc_traceback))
    time.sleep(30)
This gives me status errors or major connectivity errors, but what I am looking for are the functional errors from BigQuery, like field format conversion errors, schema structure issues, or other issues BigQuery may hit while trying to insert rows into tables.
If any "functional" error happens on BigQuery's side, this code runs successfully and completes normally, but no table is written in BigQuery. Not easy to debug when that happens...
You can use the HTTP error code from the exception. BigQuery is a REST API, so the response codes it returns are standard HTTP status codes.
Here is some code that handles retryable errors (connection, rate limit, etc.) but re-raises any error type it doesn't expect.
# HttpError comes from the API client library (googleapiclient.errors,
# or apiclient.errors in older versions).
except HttpError as err:
    # If the error is a rate limit or connection error, wait and
    # try again.
    # 403: Forbidden: Both access denied and rate limits.
    # 408: Timeout
    # 500: Internal Service Error
    # 503: Service Unavailable
    if err.resp.status in [403, 408, 500, 503]:
        print('%s: Retryable error %s, waiting' % (
            self.thread_id, err.resp.status,))
        time.sleep(5)
    else:
        raise
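Note that retrying only covers transport-level failures. The "functional" load errors you are after (bad field formats, schema mismatches, and so on) are reported in the job resource itself once the load job finishes, under status.errorResult and status.errors, so you have to poll the job and inspect those fields. A minimal sketch, assuming the same bigquery_service and PROJECT_ID as in your code and a job_id variable holding the load job's ID:

import time
import logging

def wait_for_load_job(bigquery_service, project_id, job_id):
    """Poll a BigQuery job until it is DONE and raise if it failed."""
    jobs = bigquery_service.jobs()
    while True:
        job = jobs.get(projectId=project_id, jobId=job_id).execute()
        status = job['status']
        if status['state'] == 'DONE':
            if 'errorResult' in status:
                # errorResult is the fatal error; errors lists every problem
                # (e.g. per-row conversion failures, schema mismatches).
                logging.error('Load job failed: %s', status['errorResult'])
                for e in status.get('errors', []):
                    logging.error('  detail: %s', e)
                raise RuntimeError(status['errorResult'].get('message', 'BigQuery load failed'))
            return job  # success
        time.sleep(10)  # job still PENDING or RUNNING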
If you want even better error handling, check out the BigqueryError class in the bq command-line client (this used to be available on code.google.com, but with the recent switch to gcloud it isn't any more; if you have gcloud installed, the bq.py and bigquery_client.py files should be in the installation).
The key here is this part of the pasted code:
except:
    exc_type, exc_value, exc_traceback = sys.exc_info()
    logging.error(traceback.format_exception(exc_type, exc_value, exc_traceback))
time.sleep(30)
This "except" is catching every exception, logging it, and letting the process continue without any consideration for re-trying.
The question is, what would you like to do instead? At least the intention is there with the "#Do something" comment.
As a suggestion, consider App Engine's task queues for checking the status instead of a loop with a 30-second wait. When a task raises an exception it is automatically retried, and you can tune that behavior.
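A minimal sketch of that approach on the Python App Engine standard runtime: instead of sleeping in a loop, enqueue a push task that checks the job once and re-enqueues itself until the job is done (or raises to trigger the queue's retry policy). The /check_bq_job URL, the handler name, and the 30-second countdown are illustrative choices, not anything App Engine prescribes.

from google.appengine.api import taskqueue
import webapp2

# jobCollection and PROJECT_ID are assumed to be set up at module load,
# exactly as in the question's snippet.

class CheckBigQueryJob(webapp2.RequestHandler):
    def post(self):
        job_id = self.request.get('job_id')
        job = jobCollection.get(projectId=PROJECT_ID, jobId=job_id).execute()
        status = job['status']
        if status['state'] != 'DONE':
            # Not finished yet: enqueue another check in 30 seconds.
            taskqueue.add(url='/check_bq_job', params={'job_id': job_id}, countdown=30)
        elif 'errorResult' in status:
            # Raising here returns a non-200 response, so the push queue
            # retries the task according to the queue's retry settings.
            raise Exception('BigQuery load failed: %s' % status['errorResult'])

app = webapp2.WSGIApplication([('/check_bq_job', CheckBigQueryJob)])

You would kick this off right after the jobs().insert() call, for example with taskqueue.add(url='/check_bq_job', params={'job_id': insertResponse}), and map /check_bq_job to this handler in your app configuration.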

Tomcat cluster fails and generates tons of logs

Periodically, I'm getting problems with my Tomcat 6 cluster (2 nodes). One of the nodes would just go haywire and generate a ton of logs repeating the following:
Aug 25, 2009 11:44:10 AM org.apache.catalina.ha.session.DeltaRequest reset
SEVERE: Unable to remove element
java.util.NoSuchElementException
at java.util.LinkedList.remove(LinkedList.java:788)
at java.util.LinkedList.removeFirst(LinkedList.java:134)
at org.apache.catalina.ha.session.DeltaRequest.reset(DeltaRequest.java:201)
at org.apache.catalina.ha.session.DeltaRequest.execute(DeltaRequest.java:195)
at org.apache.catalina.ha.session.DeltaManager.handleSESSION_DELTA(DeltaManager.java:1364)
at org.apache.catalina.ha.session.DeltaManager.messageReceived(DeltaManager.java:1320)
at org.apache.catalina.ha.session.DeltaManager.messageDataReceived(DeltaManager.java:1083)
at org.apache.catalina.ha.session.ClusterSessionListener.messageReceived(ClusterSessionListener.java:87)
at org.apache.catalina.ha.tcp.SimpleTcpCluster.messageReceived(SimpleTcpCluster.java:916)
at org.apache.catalina.ha.tcp.SimpleTcpCluster.messageReceived(SimpleTcpCluster.java:897)
at org.apache.catalina.tribes.group.GroupChannel.messageReceived(GroupChannel.java:264)
at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:79)
at org.apache.catalina.tribes.group.interceptors.TcpFailureDetector.messageReceived(TcpFailureDetector.java:110)
at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:79)
at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:79)
at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:79)
at org.apache.catalina.tribes.group.ChannelCoordinator.messageReceived(ChannelCoordinator.java:241)
at org.apache.catalina.tribes.transport.ReceiverBase.messageDataReceived(ReceiverBase.java:225)
at org.apache.catalina.tribes.transport.nio.NioReplicationTask.drainChannel(NioReplicationTask.java:188)
at org.apache.catalina.tribes.transport.nio.NioReplicationTask.run(NioReplicationTask.java:91)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
at java.lang.Thread.run(Thread.java:619)
That's the only thing it shows. The other node in the cluster is still active at this time. There's nothing to do but restart. The large volume of logs has caused disk-space issues more than a couple of times, too.
Does anybody have any idea what's wrong here?
Thanks!
Wong
Appears to be a bug in Tomcat 6. If you look at the source at:
http://www.java2s.com/Open-Source/Java-Document/Sevlet-Container/apache-tomcat-6.0.14/org/apache/catalina/ha/session/DeltaRequest.java.htm (line 225)
you'll see that the reset() method can potentially throw this exception. I suggest that you contact the Tomcat developers regarding this issue.
