How do you resolve an "Access Denied" error when invoking `image_uris.retrieve()` in AWS SageMaker JumpStart? - amazon-sagemaker

I am working in a SageMaker environment that is locked down; for example, my user account is prevented from creating S3 buckets. However, I can successfully run vanilla ML training jobs by passing role=get_execution_role() to an instance of the Estimator class when using an out-of-the-box algorithm such as XGBoost.
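That working pattern looks roughly like this (the container version, instance type, and output bucket below are placeholders for what I actually use):
import sagemaker
from sagemaker import get_execution_role, image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()

# Built-in XGBoost trains fine with the execution role, even though my own
# user account is restricted.
xgb_image = image_uris.retrieve(
    "xgboost", session.boto_region_name, version="1.5-1"
)
estimator = Estimator(
    image_uri=xgb_image,
    role=get_execution_role(),
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-existing-bucket/output",   # placeholder bucket
    sagemaker_session=session,
)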
Now I'm trying to use an algorithm (LightGBM) that is only available via the JumpStart feature in SageMaker, but I can't get it to work. When I try to retrieve an image URI via image_uris.retrieve(), it returns the following error:
ClientError: An error occurred (AccessDenied) when calling the GetObject operation: Access Denied.
This makes some sense to me if my user permissions are being used when creating an object. But what I want to do is specify another role - like the one returned from get_execution_role - to perform these tasks.
Is that possible? Is there another work-around available? How can I see which role is being used?
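For reference, the call I'm making looks roughly like this (the model id and version are illustrative; the real values come from the JumpStart catalog):
from sagemaker import image_uris

# Illustrative JumpStart lookup; "lightgbm-classification-model" and "*" are
# example values. The AccessDenied error is raised from inside this call.
training_image = image_uris.retrieve(
    region=None,                       # None -> use the default boto3 region
    framework=None,                    # not used for JumpStart model lookups
    model_id="lightgbm-classification-model",
    model_version="*",
    image_scope="training",
    instance_type="ml.m5.xlarge",
)
print(training_image)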
Thanks,

When I encountered this issue, it was caused by the permissions on a bucket having changed.
In the SageMaker Python SDK source code, there is a cache located in an AWS-owned bucket, jumpstart-cache-prod-{region}, along with a manifest.json that resolves the ECR path of the image for you.
If you look at the stack trace, it is likely erroring out in the code that looks up that manifest.
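A quick way to check whether the credentials you are running under can read that manifest at all. The bucket name and key below follow the SDK's cache layout, so treat them as assumptions and substitute your own region:
import boto3

# Shows which identity (user or assumed role) is actually making the calls.
print(boto3.client("sts").get_caller_identity()["Arn"])

# Try to read the JumpStart manifest directly; an AccessDenied here
# reproduces the error raised inside image_uris.retrieve().
s3 = boto3.client("s3")
resp = s3.get_object(
    Bucket="jumpstart-cache-prod-us-east-1",   # assumed region suffix
    Key="models_manifest.json",                # assumed manifest key
)
print(resp["ContentLength"])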
One place to look is whether new restrictions have been placed in IAM. At a minimum, the principal making the call needs a policy that lets it read the JumpStart (pretrained) model cache; a sketch follows below.
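Assuming the gap really is S3 read access to the cache bucket, one way to attach such an inline policy with boto3 (the role name and policy name below are placeholders, and your security team may want to scope it differently):
import json
import boto3

# Read-only access to the AWS-owned JumpStart cache buckets (one per region).
jumpstart_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::jumpstart-cache-prod-*",
                "arn:aws:s3:::jumpstart-cache-prod-*/*",
            ],
        }
    ],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="MySageMakerExecutionRole",   # placeholder: whichever principal calls retrieve()
    PolicyName="JumpStartCacheRead",       # placeholder policy name
    PolicyDocument=json.dumps(jumpstart_read_policy),
)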

Related

Upload file to s3 browser through batch script

I am trying to upload a JSON file to S3 via S3 Browser from a batch script using the command:
s3browser-con.exe upload <account_name> <local directory\json file> <s3 bucket name and path>
(I referred to the CLI documentation.) However, I get the error:
:AccountManager::CurrentAccount::get::failed - unable to show the Add New Account dialog.
This runs fine when I run the batch script on its own; however, when I run it through a Command Task in Informatica Cloud, it gives me this error.
I suspect it is trying to create a new account at runtime, but we can only add two accounts since it is the free version. I am not sure, though, as I am new to S3 and batch scripts.
Also, is there any way to avoid specifying the account name, since different users might have a different account name for a particular bucket? Any help and guidance would be appreciated.
EDIT:
Note: this is the detailed error:
Unhandled Exception: System.NullReferenceException: Object reference not set to an instance of an object. at mg.b(String aty) at mk.a(String[] avx) at mg.Main(String[] args)
<account_name> is held in the User Profile of whoever set it up in s3browser-con. So if you are not running the Informatica Secure Agent on the same machine under the same user, it's not going to work.
However, why are you using a third-party tool to upload files to S3 from within Informatica? Why not just use Informatica's built-in capabilities? Unless there is a very specific reason for doing this, your solution appears to be overcomplicated.

System.UnauthorizedAccessException in UWP: access to path is denied

I want to use
DirectoryInfo source = new DirectoryInfo(@"C:\Users\admin\Desktop\Server");
source.GetDirectories();
I got System.UnauthorizedAccessException:
'Access to the path 'C:\Users\admin\Desktop\Server' is denied.'
I have a UWP application. How do I get permission to read and write folders/files from my UWP application?
UWP apps were designed to be safer for users to install: because they run in a sandbox and have very few permissions by default, the user knows the app can't damage their PC or data. This includes access to the file system - by default you may access only a few specific paths, such as the app install location and the app data folder, and you can request access to additional locations like the libraries.
For arbitrary locations you have two options:
Use the FolderPicker (see Docs). The user selects the desired folder and you get a StorageFolder instance through which you can freely access it. You can even persist permission to this folder across app restarts using the FutureAccessList (see Docs), which gives you a token with which you can retrieve the StorageFolder instance in the future.
Declare the broadFileSystemAccess capability. This gives you full access to the file system via the StorageFolder and StorageFile APIs (but not via the classic System.IO API). This is a restricted capability, however, so it will be verified during the Microsoft Store certification process, and your app must have a good reason for actually needing it.

SageMaker memory leak

I deployed a deep learning model in SageMaker and created an endpoint.
Unfortunately, I sent it a large image and the endpoint returned 'RuntimeError: CUDA error: out of memory'.
So I would like to relaunch the endpoint, but there does not seem to be a restart button.
What can I do to restart it?
Thank you
Assuming that by "restart" you mean UpdateEndpoint: you will not be able to update a SageMaker endpoint that is already in 'Failed' status. This is documented in the SageMaker API reference.
If you have already identified the cause of endpoint failure, you can delete the failed endpoint and create a new one with the correct model.
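In code, the delete-and-recreate step could look like this boto3 sketch; the endpoint and endpoint-config names are placeholders, and it assumes you already have an endpoint config that points at the corrected model:
import boto3

sm = boto3.client("sagemaker")

# A failed endpoint cannot be updated in place, so delete it first
# and wait for the deletion to finish.
sm.delete_endpoint(EndpointName="my-endpoint")
sm.get_waiter("endpoint_deleted").wait(EndpointName="my-endpoint")

# Recreate it from an endpoint config that references the fixed model.
sm.create_endpoint(
    EndpointName="my-endpoint",
    EndpointConfigName="my-fixed-endpoint-config",
)
sm.get_waiter("endpoint_in_service").wait(EndpointName="my-endpoint")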

Creating a Message-Hub Bridge for IBM Cloud Object Storage

I'm trying to create a "Bridge" from Message Hub to S3 Object Storage, copying in the information from the credentials that I created, but I always get an error that says "Please try refreshing the page, or logging back into Bluemix."
I have already created an access policy for these credentials and the Bucket I want to use as destination.
I also tried both private and public endpoints.
I wasn't able to find documentation that explains how to accomplish this. Nothing seems to work.
Thanks!
Apologies, this is an internal error caused by the S3 Object Storage bridges capability being made available in the UI but not in the backend.
An update to the Message Hub service will be made this week to correct this.

TransformationError on blob via get_serving_url (app engine)

TransformationError
This error keeps coming up for a specific image.
There are no problems with other images and I'm wondering what the reason for this exception could be.
From Google:
"Error while attempting to transform the image."
Update:
It works fine on the development server; it only fails live.
Thanks
Without more information, I'd say either the image is corrupted or it's in a format that cannot be used with get_serving_url (an animated GIF, for example).
I fought this error forever, and in case anyone else hits the dreaded TransformationError, please note that you need to make sure your app has owner permissions on the files you want to generate a URL for.
It'll look something like this in your IAM tab:
App Engine app default service account
your-project-name-here@appspot.gserviceaccount.com
In IAM, on that member, scroll down to Storage and grant "Storage Object Admin" to that user. That is, as long as you have your storage bucket under the same project... if not, I'm not sure how...
This TransformationError exception also shows up for permissions errors, so it is a bit misleading.
I was getting this error because I had used the Bucket Policy Only permission setting on a bucket in a different project.
However, after changing this back to object-level permissions and giving my App Engine app (from a different project) access, I was able to perform the App Engine Standard Images operation (google.appengine.api.images.get_serving_url) that I was trying to implement.
Make sure that you set your permissions correctly either in the Console UI or via gsutil like so:
gsutil acl ch -u my-project-a@appspot.gserviceaccount.com:OWNER gs://my-project-b
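For context, the call in question looks roughly like this on App Engine Standard (first-generation runtime); the bucket and object names are placeholders:
from google.appengine.api import images
from google.appengine.ext import blobstore

# Build a blobstore key for an object stored in Cloud Storage (placeholder path).
gs_key = blobstore.create_gs_key("/gs/my-project-b/some-image.png")

try:
    url = images.get_serving_url(gs_key, secure_url=True)
except images.TransformationError:
    # Often not an actual transformation problem: check that the App Engine
    # default service account can read (or owns) the object, as described above.
    raise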
