How to save models trained locally in Amazon SageMaker?

I'm trying to use a local training job in SageMaker.
Following this AWS notebook (https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/mxnet_gluon_mnist/mxnet_mnist_with_gluon_local_mode.ipynb) I was able to train and predict locally.
Is there any way to train locally and have the trained model show up in the Amazon SageMaker Training Jobs section?
Otherwise, how can I properly save models that I trained using local mode?

There is no way to have your local mode training jobs appear in the AWS console. The intent of local mode is to allow for faster iteration/debugging before using SageMaker for training your model.
You can create SageMaker Models from local model artifacts. Compress your model artifacts into a .tar.gz file, upload that file to S3, and then create the Model (with the SDK or in the console).
Documentation:
https://sagemaker.readthedocs.io/en/stable/overview.html#using-models-trained-outside-of-amazon-sagemaker
https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateModel.html
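As an illustration, a minimal Python SDK sketch of that flow might look like the following; the local paths, S3 key prefix, container image, and role ARN are placeholders, not values from the question:
import tarfile
import sagemaker
from sagemaker.model import Model

# Compress the locally trained artifacts (paths are illustrative).
with tarfile.open("model.tar.gz", "w:gz") as tar:
    tar.add("model/", arcname=".")

# Upload the archive to the default SageMaker bucket.
session = sagemaker.Session()
model_data = session.upload_data("model.tar.gz", key_prefix="local-models")

# Create the SageMaker Model from the uploaded artifacts.
model = Model(
    image_uri="<inference-container-image>",  # serving container for your framework
    model_data=model_data,
    role="<your-sagemaker-execution-role-arn>",
    sagemaker_session=session,
)
model.deploy(initial_instance_count=1, instance_type="ml.m5.large")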

As @lauren said, just compress it and create your model. Once you've trained it locally, you don't have to save it as a training job, since you already have the artifacts for a model.
A training job is a combination of input_location, output_location, the chosen algorithm, and hyperparameters. That's what a training job stores, not the trained model itself. When a training job completes, it compresses the artifacts and saves your model in Amazon S3 so you can create a Model out of it.
So, since you trained locally (instead of decoupling the training step), create a model from the compressed artifacts, then create an endpoint, and do some inferences.

Related

Train Amazon SageMaker object detection model on local PC

I wonder if it's possible to train an Amazon SageMaker object detection model on a local PC?
You're probably referring to this object detection algorithm, which is part of Amazon SageMaker's built-in algorithms. Built-in algorithms must be trained in the cloud.
If you're bringing your own TensorFlow or PyTorch model, you could use SageMaker training jobs to train either in the cloud or locally, as @kirit noted.
I would also look at SageMaker JumpStart for a wide variety of object detection algorithms that are TF/PT based.
You can use SageMaker Local Mode to run SageMaker training jobs locally on your PC. Here is a list of examples. https://github.com/aws-samples/amazon-sagemaker-local-mode
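For reference, here is a minimal local-mode sketch with the SageMaker Python SDK; the entry point script, role ARN, and framework versions below are placeholder assumptions:
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train.py",                    # your training script
    role="<your-sagemaker-execution-role-arn>",
    framework_version="1.13",
    py_version="py39",
    instance_count=1,
    instance_type="local",                     # runs the job in Docker on your PC
)
# Local input channels can be plain file:// paths.
estimator.fit({"training": "file://./data"})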

Register SageMaker model in MLflow

MLflow can be used to track (hyper)parameters and metrics when training machine learning models. It stores the trained model as an artifact for every experiment. These models then can be directly deployed as SageMaker endpoints.
Is it possible to do it the other way around, too, i.e. to register models trained in SageMaker into MLflow?
Of course! Here is an example: https://aws.amazon.com/blogs/machine-learning/managing-your-machine-learning-lifecycle-with-mlflow-and-amazon-sagemaker/
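As a minimal sketch of the reverse direction, you could log the training job's downloaded model.tar.gz to an MLflow run; the tracking URI and names here are assumptions, and registering the model in the Model Registry additionally requires wrapping the artifacts in MLflow's model format, as the blog post describes:
import mlflow

mlflow.set_tracking_uri("http://localhost:5000")  # your MLflow server (assumed)

with mlflow.start_run(run_name="sagemaker-trained-model"):
    # Attach the SageMaker artifact (already downloaded from S3) to the run.
    mlflow.log_artifact("model.tar.gz", artifact_path="model")
    mlflow.log_param("source", "sagemaker-training-job")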

Is it possible to store an in-memory Jena Dataset as a triple-store?

Warning! This question is a catch-all; I have zero experience with RDF systems, so I couldn't express this as a single question. Feel free to skip the first two paragraphs.
What I'm trying to build, overall
I'm currently building a Spring app that will be the back-end for a system that will gather measurements.
I want to store the info in a triple-store instead of an RDBMS.
So, you may imagine a Spring Boot app with the addition of the Jena library.
The workflow of the system
Here is the methodology I'm planning to follow:
1. Once the app is up and running, it either creates a new triple-store database or connects to an existing one.
2. A POST request reaches an app controller.
3. I use a SPARQL update query to insert the new entry into the triple-store.
4. Other Controller/Service/DAO methods exist to serve GET requests for SELECT queries on the triple-store.
*The only reason I provided such a detailed view of my final goal is to avoid answers that would call my question an XY problem.
The actual problem
1. Does an org.apache.jena.query.Dataset represent an in-memory triple-store, or is this kind of Dataset a completely different data structure?
2. If a Dataset is indeed a triple-store, then how can I store this in-memory Dataset to retrieve it in a later session?
3. If indeed one can store a Dataset, then what are the options? Is the default to store a Dataset as a file with a .tdb extension? If so, what is the method for that, and under which class?
4. If so far I am correct in my guess then would the assemble method be sufficient to "retrieve" the triple-store from the file stored?
5. Do all triple-store databases follow this concept, of being stored in .tdb files?
org.apache.jena.query.Dataset is an interface - there are multiple implementations with different characteristics.
DatasetFactory makes datasets of various kinds. DatasetFactory.createTxnMem is an in-memory, transactional dataset. It can be initialized with the contents of files but updates do not change the files.
An in-memory dataset only exists for the duration of the JVM session.
If you want data and data changes to persist across sessions, you can use TDB for persistent storage. Try TDBFactory or TDB2Factory.
TDB (TDB1 or TDB2) are triplestore databases.
Fuseki is the triple store server. You can send SPARQL requests to Fuseki (query, update, bulk upload, ...)
You can start Fuseki with a TDB database (it will create one if it does not exist):
fuseki-server --tdb2 --loc=DB /myData
".tdb" isn't a file extension Apache Jena uses. Databases are a directory of files.

How can I deploy AWS SageMaker Linear Learner Model in a Local Environment

I have trained an AWS SageMaker model using the built-in Linear Learner algorithm, and I can download the trained model artifacts (model.tar.gz) from S3.
How can I deploy the model in a local environment that is independent of AWS, so I can make inference calls without internet access?
Matx, there is no local mode for built-in algorithms. However, you can programmatically load the MXNet model with its weights and use it to make predictions. Check https://forums.aws.amazon.com/thread.jspa?messageID=827236&#827236 for a code example.
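A hedged sketch of that approach: the exact file names inside model.tar.gz can vary, so extract the archive first and adjust the prefix, epoch, and feature count to match what you find:
import tarfile
import mxnet as mx
import numpy as np

# Unpack the artifacts downloaded from S3.
with tarfile.open("model.tar.gz") as tar:
    tar.extractall("model")

num_features = 10  # set to the number of features the model was trained on

# Assumes the archive yields an MXNet symbol/params pair such as
# "model/mx-mod-symbol.json" and "model/mx-mod-0000.params"; inspect
# the extracted files and adjust the prefix/epoch accordingly.
mod = mx.module.Module.load("model/mx-mod", 0, label_names=None)
mod.bind(data_shapes=[("data", (1, num_features))], for_training=False)

# Run a single (dummy) record through the model locally.
sample = mx.nd.array(np.zeros((1, num_features)))
mod.forward(mx.io.DataBatch([sample]))
print(mod.get_outputs()[0].asnumpy())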

How do I manage multiple training sets using the Watson NLC Toolkit

From what I see, there's no way to upload multiple training sets to the new Watson NLC tooling. I need to manage separate training sets and their associated classifiers. What am I missing here?
Preferred option: Provision an NLC service instance for each set of training data you'd like to work with and separately access the tooling for each.
Workaround: Currently, the flow for managing multiple training sets in one NLC service instance is as follows:
1. (Optional, to start fresh) Go to the training data page and click on the garbage icon to delete all training data.
2. Upload a training set on the training data page using the upload icon.
3. Manipulate the data as necessary. Add texts and classes, tag texts with classes, etc.
4. Create a classifier. When you create a classifier, it is essentially a snapshot of your current training data, since you are able to retrieve it later from the classifiers page.
5. Repeat steps 1-4 as necessary until you have uploaded all of your training data sets and created the corresponding classifiers.
When you want to continue working on a previous training set:
1. Clear your training data (step 1 from above).
2. Go to the classifiers page.
3. Click on the download icon for the classifier which contains the training data you'd like to work with.
4. Return to the training data page and upload the file downloaded in step 3.
The best way to manage multiple training sets is to use a different NLC service instance for each training set.
The current beta NLC tooling is not intended to manage separate training sets within a single service instance. For example, the tool makes suggestions when you add texts without classes; these are based on the most recently trained classifier, which won't make sense if that classifier was based on a completely different training set.
The workaround suggested by @John Bufe will work if you have a hard limit on the number of NLC services you can use for some reason, e.g. you have reached your limit of Bluemix services. Cost is not a factor here, as additional NLC service instances will not increase the overall price: the monthly charge is for trained classifier instances. For example, if you have four service instances with a single classifier in each, you'll see three charged and one free.
If you want to use the NLC beta tooling to manage your training data, I would recommend using separate NLC services for each training set you require.
