delete_model() error when cleaning up AWS SageMaker

I followed the tutorial at https://aws.amazon.com/getting-started/hands-on/build-train-deploy-machine-learning-model-sagemaker/ and got an error when trying to clean up with the following code:
xgb_predictor.delete_endpoint()
xgb_predictor.delete_model()
ClientError: An error occurred (ValidationException) when calling the DescribeEndpointConfig operation: Could not find the endpoint configuration.
Does it mean I need to delete the model first instead?
I checked on the console and deleted the model manually.

No, you don't need to delete the model prior to deleting the endpoint. From the error logs, it looks like it's not able to find the endpoint configuration. Can you verify that you are setting delete_endpoint_config to True?
xgb_predictor.delete_endpoint(delete_endpoint_config=True)
Additionally, you can verify whether the endpoint config is still available in the AWS console.
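If the endpoint config really is gone, here is a rough sketch of cleaning up directly with boto3 instead of the predictor helper (assuming your AWS credentials and region are configured; the "xgboost" filter and the model name are placeholders for whatever your console shows):
import boto3

sm = boto3.client("sagemaker")

# Check whether the endpoint configuration still exists.
configs = sm.list_endpoint_configs(NameContains="xgboost")["EndpointConfigs"]
print([c["EndpointConfigName"] for c in configs])

# If the predictor helper can't resolve the config, delete the model directly.
sm.delete_model(ModelName="your-model-name")  # placeholder name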

Related

Google Error Reporting does not correlate to parent http request

I'm using Google App Engine Standard with Python 3. When I click on an error in Google Error Reporting and then click "View Logs", I get taken to Google Logs Viewer/Explorer with something like error_group("CObpg_HTfjskb6GA") as a search filter.
I see the individual log line with the stacktrace but not any logs for the parent request for which this occurred.
In the docs, they have a screenshot where it does look like we should be able to see the parent HTTP request in which the error occurred: https://cloud.google.com/error-reporting/docs/viewing-errors#view_associated_log_entries
Right now, when I need to look into an error, I have to do a separate search in Logs Explorer with part of the error message (for the example above I'd search for "KeyError: 'c'") to find a duplicate log entry that has a trace ID set. Then I can use 'show all logs for trace' to finally see all the logs that led up to the error.
This feels related to an earlier issue, where logs in general in Python 3 were not getting correlated like they were in Python 2: How to group related request log entries GAE python 3.7 standard env
Logs now get grouped together via trace, but as far as I can tell, I cannot set trace on error report logs.
I have my logging set up by doing:
import google.cloud.logging

client = google.cloud.logging.Client()
client.setup_logging()
For error reporting, I was just getting error reports from google.cloud.logging's integration with the Python logger:
try:
    ...  # code where an error occurs
except Exception as exc:
    logging.exception(exc)
    raise
I've now started trying to use google-cloud-error-reporting to see if maybe there are some options in there that I can set to get it to correlate, but I seem to only be able to set an HttpContext and a ReportingLocation. There isn't a spot for me to set trace or anything like that.
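What I'm experimenting with now, as a rough sketch: as far as I can tell, newer google-cloud-logging handlers will pick up a trace key passed via the extra dict of a standard logging call, so I'm parsing X-Cloud-Trace-Context myself. The Flask handler and the project ID handling below are just my assumptions for illustration, not something from the docs:
import logging
import os

import google.cloud.logging
from flask import Flask, request

client = google.cloud.logging.Client()
client.setup_logging()

app = Flask(__name__)
PROJECT_ID = os.environ.get("GOOGLE_CLOUD_PROJECT", "my-project")

def current_trace():
    # X-Cloud-Trace-Context looks like "TRACE_ID/SPAN_ID;o=1"
    header = request.headers.get("X-Cloud-Trace-Context", "")
    trace_id = header.split("/")[0] if header else None
    return f"projects/{PROJECT_ID}/traces/{trace_id}" if trace_id else None

@app.route("/boom")
def boom():
    try:
        {}["c"]  # stand-in for the code where the error occurs
    except Exception as exc:
        logging.exception(exc, extra={"trace": current_trace()})
        raise
If the handler does honour that trace field, the error log line should then correlate with its parent request the same way the other request logs do.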

Akeneo import returning 500 on upload

When creating an import profile in Akeneo (e.g. XLSX) and then trying to upload and import a file, Akeneo shows the spinner indefinitely. When I use the inspector window, I see that the POST request to /launch, which is apparently part of the REST API (base URL /job-instance/rest/import/product_variant_import/launch), returns a 500 error, preventing Akeneo from proceeding.
At first I thought it might have something to do with upload permissions, but uploading media works fine. Unfortunately, because of the 500 error, there is nothing in the Apache logs.
I'm using the basic Apache configuration that is suggested in the setup guide ( https://docs.akeneo.com/3.1/install_pim/manual/system_requirements/manual_system_installation_debian9.html ) under Apache.
I can't find anything on this subject online (Akeneo import + 500 error), so hopefully some of you have suggestions on what might be causing this.
Best,
Seb
You should check the logs to get more information and know exactly what is happening. If you can't check them, try using Xdebug to debug Akeneo and set a breakpoint here:
/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/HttpKernel.php
Line 225:
private function handleException(\Exception $e, $request, $type)
There you will see exactly which error you have.
EDIT:
We were talking via Slack, and he sent me the error. He had problems due to permissions.
The error that he sent me:
“Impossible to create the root directory "/tmp/pim/file_storage/13_Product_variant_import_CSV".”

WordPress Migration Issue: 503 error

Recently I revamped a website that was created on a development server. After that I started migrating it onto the main server. Initially I got a Unicode error while uploading the database to the live server. I googled it and found a solution on Stack Overflow itself (#1273 – Unknown collation: ‘utf8mb4_unicode_520_ci’). I used the method suggested by Sabba and it worked. Later, when I changed the config file and loaded that link, it gave me a 503 error. The error is as follows:
Service Unavailable
The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
Additionally, a 503 Service Unavailable error was encountered while trying to use an Error Document to handle the request
Go through the following steps and check:
Enable WP_DEBUG
Since the 503 error often locks you out of your WordPress admin, we shall use the WP_DEBUG, WP_DEBUG_LOG, and WP_DEBUG_DISPLAY constants available to WordPress, together with @ini_set.
To enable debug mode in WordPress and write errors to a log file, follow these steps:
1. Open the wp-config.php file
2. Scroll down to where WP_DEBUG is defined. It looks like this: define('WP_DEBUG', false);. If it is missing, add it just above the line that says /* That's all, stop editing! Happy blogging. */
3. Insert the debug magic codes. Just change the above define('WP_DEBUG', false); line to:
define('WP_DEBUG', true);
define('WP_DEBUG_LOG', true);
define('WP_DEBUG_DISPLAY', false);
@ini_set('display_errors', 0);
4. Save changes
Now, reload your site to provoke the error. Next, locate a file known as debug.log inside your wp-content folder in your WordPress directory.
This file contains all the errors on your website. If your 503 service unavailable error is caused by a custom code snippet, it will show up somewhere with details of the error.
Eliminate/replace the problematic code and reload your site. If the 503 error persists, the problem could lie in your web server.

Moodle Global Search issue

I want Global Search in Moodle. I have configured the Solr server, but I am getting the error message below.
Solr client error: Unsuccessful system request : Response Code 404.
HTTP ERROR 404
Problem accessing /solr/moodle/admin/system/. Reason:
I am new to Moodle and don't know much about it.
First of all, check that you installed the PHP Solr 2 extension; see the doc here:
https://docs.moodle.org/31/en/Global_search#How_to_install_Solr
Also, to understand Moodle errors you need to enable debugging; see the doc here:
https://docs.moodle.org/31/en/Debugging

Getting 403 not authorized when indexing documents on Retrieve and Rank

I am suddenly getting a 403 error when I try to POST an update to the Retrieve and Rank service. This code is under development but it has been working up until yesterday. The failure occurs only when doing a POST to /v1/solr_clusters/{solr_cluster_id}/solr/{collection_name}/update, and it fails the same way whether I do it via my program, the Swagger API documentation, or cURL. All other operations to this service that I've tried work fine when using the same credentials that I'm using with this POST. The error message I'm getting back is
Error: WRRCSH004: Service [1d111267-76b7-417a-98bd-4e9a58072ef9] is not authorized for cluster [sc262b05e8_dcf5_40b4_b662_ae85058ff07f]!. I don't know where the identifier (1d111267-76b7-417a-98bd-4e9a58072ef9) is coming from; that's not the userid I'm sending in.
Looking into your issue, it appears your Bluemix organization has multiple service instances. The 403 you are seeing is because you're trying to access a Solr cluster using credentials from one of your instances against a cluster in the other instance. The 1d111267-76b7-417a-98bd-4e9a58072ef9 identifier represents one of these service instances, but the cluster you're trying to access is not part of that instance. A good way to test this is to use the same credentials that generate the 403 but simply list the Solr clusters you have created by doing a GET against https://gateway.watsonplatform.net/retrieve-and-rank/api/v1/solr_clusters/.
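For example, a quick sketch from Python (assuming the requests library; the username and password are placeholders for the credentials of the service instance in question):
import requests

resp = requests.get(
    "https://gateway.watsonplatform.net/retrieve-and-rank/api/v1/solr_clusters/",
    auth=("SERVICE_USERNAME", "SERVICE_PASSWORD"),  # placeholder credentials
)
resp.raise_for_status()
print(resp.json())  # the cluster you are trying to update should be listed here
If the cluster you're updating doesn't show up in that response, the credentials you're using belong to the other service instance.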
As for the 500 issue, I wasn't able to see anything on our end. If you're still experiencing that I would suggest posting another question and we can look into things again.
Thanks,
-Scott
