Realtime API - "Maximum Stack Size Exceeded" - google-drive-realtime-api

About 7-8 hours ago we started seeing instances of the "Maximum Stack Size Exceeded" error on our live site. Unfortunately I don't have much information on the error, as the relevant bits of the stack have been blown away in the report and I am not able to reproduce it myself. All I have is the repeating loop in the stack and the document ID:
Document ID: 0B0m--69Vv_F6c1gwdUU3OG5MNDQ
Stack
...
File https://drive.google.com/otservice/api line 78 col 51 in dl
File https://drive.google.com/otservice/api line 250 col 415 in c.M
File https://drive.google.com/otservice/api line 78 col 51 in dl
File https://drive.google.com/otservice/api line 250 col 415 in c.M
...
Will update with more information if it becomes available. Is this happening for others?

Related

Timing range from request to response in J1939

In J1939, a request is generally sent via 0xEB00 and the response is captured via 0xEC00 (for responses of more than 8 bytes). What is the allowed time range from request to response?
Example:
0.00 - 0xEB00 - EC FE 00
xx.xx - 0xEC00 - xx xx xx xx xx EC FE 00
What can the possible range of xx.xx be?
I have looked into many options but am unable to find the exact range. In some places it is given as 10 - 200 ms (between data packets), and elsewhere as 0 - 1250 ms.
All devices, when required to provide a response, must do so within 0.20s (Tr). All devices expecting a response must wait at least 1.25s (T3) before giving up or retrying. These times assure that any latencies due to bus access or message forwarding across bridges do not cause unwanted timeouts. Different time values can be used for specific applications when required. For instance, for high-speed control messages, a 20 ms response may be expected. Reordering any buffered messages may be necessary to accomplish the faster response. There is no restriction on minimum response time.
The time between packets of a multipacket message directed to a specific destination is 0 to 200 ms. This means that back-to-back messages can occur, and they may contain the same identifier. The CTS mechanism can be used to assure a given time spacing between packets. The required time interval between packets of a Multipacket Broadcast message is 50 to 200 ms. A minimum time of 50 ms assures the responder has time to pull the message from the CAN hardware. The responder shall use a timeout of 250 ms (which provides margin allowing for the maximum spacing of 200 ms).
a. Maximum forward delay time within a bridge is 50 ms. Total number of bridges = 10 (i.e. 1 tractor + 5 trailers + 4 dollies = 10 bridges), so the total network delay is 500 ms in one direction.
b. Number of request retries = 2 (3 requests total); this includes the situation where the CTS is used to request the retransmission of data packet(s).
c. 50 ms margin for timeouts.
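To see how these figures fit together, here is a rough back-of-the-envelope sketch in Python. The constants come straight from the quoted text; the 10-bridge topology is the worked example above, not a general rule, so adjust it for your own network:

TR_MS = 200           # Tr: a device must respond within 200 ms
T3_MS = 1250          # T3: a requester must wait at least 1250 ms before retrying
BRIDGE_DELAY_MS = 50  # maximum forward delay per bridge
NUM_BRIDGES = 10      # e.g. 1 tractor + 5 trailers + 4 dollies

one_way_delay_ms = BRIDGE_DELAY_MS * NUM_BRIDGES   # 500 ms each direction
round_trip_ms = 2 * one_way_delay_ms               # request path + response path

# Worst case for a response that has to cross the whole bridged network:
worst_case_ms = round_trip_ms + TR_MS              # 1000 + 200 = 1200 ms
print(f"worst case {worst_case_ms} ms, within T3 = {T3_MS} ms (50 ms margin)")

So on a simple network the response can arrive almost immediately, while on a heavily bridged network it may take up to about 1.2 s, which is why the requester's give-up time T3 is 1.25 s.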

Is a Power Apps Dataflow from Azure SQL to Dataverse really this slow, and are the error messages really this terrible?

I have a table in an Azure SQL database which contains approximately 10 columns and 1.7 million rows. The data in each cell is mostly null/varchar(30).
When running a dataflow to a new table in Dataverse, I have two issues:
1. It takes around 14 hours (around 100k rows or so per hour).
2. It fails after 14 hours with the great error message (**** is just some entity names I have removed):
Dataflow name,Entity name,Start time,End time,Status,Upsert count,Error count,Status details
****** - *******,,1.5.2021 9:47:20 p.m.,2.5.2021 9:51:27 a.m.,Failed,,,There was a problem refreshing the dataflow. please try again later. (request id: 5edec6c7-3d3c-49df-b6de-e8032999d049).
****** - ,,1.5.2021 9:47:43 p.m.,2.5.2021 9:51:26 a.m.,Aborted,0,0,
Table name,Row id,Request url,Error details
*******,,,Job failed due to timeout : A task was canceled.
Is it really expected that this takes 14 hours? :O
Is there any verbose logging I can enable to get a friendlier error message?

Gmail API error 429 "User-rate limit exceeded"

Using Gmail API, I keep getting hundreds of error 429 "User-rate limit exceeded".
My script sends 100 emails at a time, every 10 minutes.
Shouldn't this be within the API limits?
The error pops up after only 5 or 6 successful sends.
Thanks
Have a look at the Gmail Usage Limits: sending a message costs 100 quota units, and the per-user rate limit is 250 quota units per second. So if you send 100 emails in a very short time, that corresponds to 10,000 units, which is 40 times the allowed per-second quota. While short bursts are allowed, exceeding the quota this significantly can go beyond the scope of the allowed burst.
In this case you should implement exponential backoff as recommended by Google.
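A minimal sketch of that backoff, assuming the google-api-python-client and an already-authorized Gmail service object (send_with_backoff and the message variable are illustrative names, not from the original post):

import random
import time

from googleapiclient.errors import HttpError

def send_with_backoff(service, message, max_retries=5):
    # Send one message, retrying on 429 with exponential backoff plus jitter.
    for attempt in range(max_retries):
        try:
            return service.users().messages().send(userId="me", body=message).execute()
        except HttpError as err:
            if err.resp.status == 429:
                # Wait 2^attempt seconds plus up to 1 s of random jitter,
                # per Google's exponential-backoff guidance.
                time.sleep(2 ** attempt + random.random())
            else:
                raise
    raise RuntimeError("still rate-limited after retries")

Pacing the sends themselves (for example, a short sleep between messages so each second stays under the per-second quota) also keeps bursts within limits, so the backoff path is rarely hit.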

Sagemaker: DeepAR Hyperparameter Tuning Error

Running into a new issue with tuning DeepAR on SageMaker when trying to initialize a hyperparameter tuning job; the same error also occurs with test:mean_wQuantileLoss as the objective metric. I've upgraded the sagemaker package, restarted my instance, and restarted the kernel (using a Jupyter notebook), yet the problem persists.
ClientError: An error occurred (ValidationException) when calling the
CreateHyperParameterTuningJob operation: The objective metric type, [Maximize], that you specified for objective metric, [test:RMSE], isn’t valid for the [156387875391.dkr.ecr.us-west-2.amazonaws.com/forecasting-deepar:1] algorithm. Choose a valid objective metric type.
Code:
my_tuner = HyperparameterTuner(estimator=estimator,
                               objective_metric_name="test:RMSE",
                               hyperparameter_ranges=hyperparams,
                               max_jobs=20,
                               max_parallel_jobs=2)
# Start hyperparameter tuning job
my_tuner.fit(inputs=data_channels)
Stack Trace:
ClientError Traceback (most recent call last)
<ipython-input-66-9d6d8de89536> in <module>()
7
8 # Start hyperparameter tuning job
----> 9 my_tuner.fit(inputs=data_channels)
~/anaconda3/envs/python3/lib/python3.6/site-packages/sagemaker/tuner.py in fit(self, inputs, job_name, include_cls_metadata, **kwargs)
255
256 self._prepare_for_training(job_name=job_name, include_cls_metadata=include_cls_metadata)
--> 257 self.latest_tuning_job = _TuningJob.start_new(self, inputs)
258
259 @classmethod
~/anaconda3/envs/python3/lib/python3.6/site-packages/sagemaker/tuner.py in start_new(cls, tuner, inputs)
525 output_config=(config['output_config']),
526 resource_config=(config['resource_config']),
--> 527 stop_condition=(config['stop_condition']), tags=tuner.tags)
528
529 return cls(tuner.sagemaker_session, tuner._current_job_name)
~/anaconda3/envs/python3/lib/python3.6/site-packages/sagemaker/session.py in tune(self, job_name, strategy, objective_type, objective_metric_name, max_jobs, max_parallel_jobs, parameter_ranges, static_hyperparameters, image, input_mode, metric_definitions, role, input_config, output_config, resource_config, stop_condition, tags)
348 LOGGER.info('Creating hyperparameter tuning job with name: {}'.format(job_name))
349 LOGGER.debug('tune request: {}'.format(json.dumps(tune_request, indent=4)))
--> 350 self.sagemaker_client.create_hyper_parameter_tuning_job(**tune_request)
351
352 def stop_tuning_job(self, name):
~/anaconda3/envs/python3/lib/python3.6/site-packages/botocore/client.py in _api_call(self, *args, **kwargs)
312 "%s() only accepts keyword arguments." % py_operation_name)
313 # The "self" in this scope is referring to the BaseClient.
--> 314 return self._make_api_call(operation_name, kwargs)
315
316 _api_call.__name__ = str(py_operation_name)
~/anaconda3/envs/python3/lib/python3.6/site-packages/botocore/client.py in _make_api_call(self, operation_name, api_params)
610 error_code = parsed_response.get("Error", {}).get("Code")
611 error_class = self.exceptions.from_code(error_code)
--> 612 raise error_class(parsed_response, operation_name)
613 else:
614 return parsed_response
ClientError: An error occurred (ValidationException) when calling the CreateHyperParameterTuningJob operation:
The objective metric type, [Maximize], that you specified for objective metric, [test:RMSE], isn’t valid for the [156387875391.dkr.ecr.us-west-2.amazonaws.com/forecasting-deepar:1] algorithm.
Choose a valid objective metric type.
It looks like you are trying to maximize this metric; test:RMSE can only be minimized by SageMaker hyperparameter tuning.
To achieve this in the SageMaker Python SDK, create your HyperparameterTuner with objective_type='Minimize' (see the signature of the HyperparameterTuner __init__ method in the SDK documentation).
Here is the change you should make to your call to HyperparameterTuner:
my_tuner = HyperparameterTuner(estimator=estimator,
                               objective_metric_name="test:RMSE",
                               objective_type='Minimize',
                               hyperparameter_ranges=hyperparams,
                               max_jobs=20,
                               max_parallel_jobs=2)
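Since the question mentions hitting the same error with test:mean_wQuantileLoss: that DeepAR metric is minimized as well, so the same fix applies. A sketch, reusing the same variable names as above:

my_tuner = HyperparameterTuner(estimator=estimator,
                               objective_metric_name="test:mean_wQuantileLoss",
                               objective_type='Minimize',
                               hyperparameter_ranges=hyperparams,
                               max_jobs=20,
                               max_parallel_jobs=2)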

Datastore Read Operations Calculation

So I am currently performing a test to estimate how much work my Google App Engine app can do without going over quota.
This is my test:
I have in the datastore an entity that, according to my local dashboard, needs 18 write operations. I have 5 entries of this type in a table.
Every 30 seconds, I fetch those 5 entities mentioned above. I DO NOT USE MEMCACHE FOR THESE!!!
That means 5 * 18 = 90 read operations per fetch, right?
In 1 minute that means 180, and in 1 hour that means 10,800 read operations, which is ~20% of the daily limit quota...
However, after 1 hour of my test running, I noticed on my online dashboard that only 2% of the read operations had been used. My question is: why is that? Where is the flaw in my calculations?
Also, where can I see in the online dashboard how many read/write operations an entity needs?
Thanks
A write on your entity may need 18 writes, but a get on your entity will cost you only 1 read.
So if you get 5 entries every 30 seconds for one hour, you'll have about 5 reads * 120 = 600 reads.
That is the case if you do a get on your 5 entries (fetching each entry by its id).
If you run a query to fetch them instead, the cost is "1 read + 1 read per entity retrieved", i.e. 6 reads per fetch, so around 720 reads in one hour.
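A quick back-of-the-envelope sketch of both cost models, with the constants mirroring the scenario in the question:

ENTITIES = 5
FETCHES_PER_HOUR = 3600 // 30                     # one fetch every 30 seconds -> 120

# get() by key: 1 read per entity fetched
get_reads = ENTITIES * FETCHES_PER_HOUR           # 5 * 120 = 600 reads/hour

# query: 1 read for the query itself + 1 read per entity retrieved
query_reads = (1 + ENTITIES) * FETCHES_PER_HOUR   # 6 * 120 = 720 reads/hour

print(get_reads, query_reads)                     # 600 720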
For more detailed information, here is the documentation for estimating costs.
You can't see on the dashboard how many write/read operations an entity needs, but I invite you to check Appstats for that.
