Register SageMaker model in MLflow - amazon-sagemaker

MLflow can be used to track (hyper)parameters and metrics when training machine learning models. It stores the trained model as an artifact for every experiment. These models can then be deployed directly as SageMaker endpoints.
Is it possible to do it the other way around, too, i.e. to register models trained in SageMaker into MLflow?

Of course! Here is an example: https://aws.amazon.com/blogs/machine-learning/managing-your-machine-learning-lifecycle-with-mlflow-and-amazon-sagemaker/
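For illustration, here is a minimal sketch of registering a SageMaker-trained model in MLflow, assuming the training job produced a scikit-learn model saved with joblib inside model.tar.gz; the file names and the registered model name are placeholders, not fixed conventions:

```python
import tarfile
import joblib
import mlflow
import mlflow.sklearn

# Hypothetical locations -- adjust to your own training job output.
ARTIFACT = "model.tar.gz"          # downloaded from the job's S3 output path
MODEL_FILE = "model/model.joblib"  # assumes a scikit-learn model saved with joblib

# Unpack the SageMaker artifact and load the model with its original framework.
with tarfile.open(ARTIFACT) as tar:
    tar.extractall(path="model")
model = joblib.load(MODEL_FILE)

# Log the model under an MLflow run and register it in the Model Registry.
with mlflow.start_run():
    mlflow.log_param("trained_in", "sagemaker")
    mlflow.sklearn.log_model(
        sk_model=model,
        artifact_path="model",
        registered_model_name="sagemaker-trained-model",  # creates/updates a registry entry
    )
```

If the model was trained with another framework, the same pattern applies with the matching MLflow flavor (e.g. mlflow.pytorch.log_model) or a custom pyfunc wrapper.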

Related

Difference between SageMaker instance count and Data parallelism

I can't understand the difference between SageMaker instance count and data parallelism, since we already have a way to specify how many instances to train on when writing a training script with the SageMaker SDK.
However, at re:Invent 2021 the SageMaker team launched and demonstrated SageMaker-managed data parallelism, and this feature also provides distributed training.
I've searched a lot of sites but can't find a really clear explanation. Here is a link that comes close to explaining the concept I mentioned: https://godatadriven.com/blog/distributed-training-a-diy-aws-sagemaker-model/
Increasing the instance count will enable SageMaker to launch that many instances and copy the data to them. This only enables parallelization at the infrastructure level. To really carry out distributed training, we need support at the framework/code level, where the code knows how to aggregate and send gradients across all the GPUs/instances in the cluster, and in some cases how to distribute the data as well (usually when using DataLoaders). To achieve this, SageMaker has a Distributed Data Parallelism feature built in. It is similar to other alternatives such as Horovod, PyTorch DDP, etc.
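For illustration, a minimal sketch of enabling SageMaker Distributed Data Parallelism via the SageMaker Python SDK; the role ARN, script name and S3 path are placeholders, and the entry-point script still has to initialize the distributed backend itself:

```python
from sagemaker.pytorch import PyTorch

# The `distribution` argument is what turns a multi-instance job into an actual
# data-parallel job -- instance_count alone only provisions the cluster.
estimator = PyTorch(
    entry_point="train.py",              # placeholder; must set up the distributed backend
    role="arn:aws:iam::111122223333:role/SageMakerRole",  # placeholder role ARN
    framework_version="1.12",
    py_version="py38",
    instance_count=2,                    # infrastructure-level parallelism
    instance_type="ml.p4d.24xlarge",     # SMDDP supports a limited set of instance types
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)
estimator.fit("s3://my-bucket/training-data/")  # placeholder S3 path
```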

Train Amazon SageMaker object detection model on local PC

I wonder if it's possible to train an Amazon SageMaker object detection model on a local PC?
You're probably referring to the object detection algorithm that is part of Amazon SageMaker's built-in algorithms. Built-in algorithms must be trained on the cloud.
If you're bringing your own TensorFlow or PyTorch model, you could use SageMaker training jobs to train either on the cloud or locally, as #kirit noted.
I would also look at SageMaker JumpStart for a wide variety of object detection algorithms which are TF/PT-based.
You can use SageMaker Local Mode to run SageMaker training jobs locally on your PC. Here is a list of examples. https://github.com/aws-samples/amazon-sagemaker-local-mode
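For illustration, a minimal Local Mode sketch with the SageMaker Python SDK; the script name, role ARN and data path are placeholders:

```python
from sagemaker.pytorch import PyTorch

# Setting instance_type to "local" runs the training container on your own
# machine via Docker instead of on SageMaker-managed instances.
estimator = PyTorch(
    entry_point="train_object_detection.py",  # placeholder training script
    role="arn:aws:iam::111122223333:role/SageMakerRole",  # placeholder role ARN
    framework_version="1.12",
    py_version="py38",
    instance_count=1,
    instance_type="local",                # "local_gpu" if you have an NVIDIA GPU + nvidia-docker
)
estimator.fit("file://./data")            # local data is referenced with file:// URIs
```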

How to save models trained locally in Amazon SageMaker?

I'm trying to use a local training job in SageMaker.
Following this AWS notebook (https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/mxnet_gluon_mnist/mxnet_mnist_with_gluon_local_mode.ipynb) I was able to train and predict locally.
Is there any way to train locally and have the trained model appear in the Amazon SageMaker Training Jobs section?
Otherwise, how can I properly save models I trained using local mode?
There is no way to have your local mode training jobs appear in the AWS console. The intent of local mode is to allow for faster iteration/debugging before using SageMaker for training your model.
You can create SageMaker Models from local model artifacts. Compress your model artifacts into a .tar.gz file, upload that file to S3, and then create the Model (with the SDK or in the console).
Documentation:
https://sagemaker.readthedocs.io/en/stable/overview.html#using-models-trained-outside-of-amazon-sagemaker
https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateModel.html
As #lauren said, just compress it and create your Model. Once you've trained it locally, you don't have to save it as a training job, since you already have the artifacts for a model.
Training jobs are a combination of input location, output location, chosen algorithm, and hyperparameters. That's what is saved for a training job, not a trained model. When a training job completes, it actually compresses the artifacts and saves your model in Amazon S3 so you can create a Model out of it.
So, since you trained locally (instead of decoupling the training step), create a Model from the compressed artifacts, then create an endpoint and run some inferences.
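As an illustration, a minimal sketch of turning local artifacts into a SageMaker Model and endpoint with the Python SDK; the bucket, role and the choice of MXNet inference image are assumptions, and framework-specific classes such as MXNetModel may be more convenient if you also need a custom inference script:

```python
import sagemaker
from sagemaker import image_uris
from sagemaker.model import Model

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerRole"  # placeholder role ARN

# 1. Upload the locally produced artifacts (already packed as model.tar.gz) to S3.
model_data = session.upload_data(
    "model.tar.gz",
    bucket=session.default_bucket(),
    key_prefix="local-training/model",
)

# 2. Create the Model; the inference image must match the framework you trained
#    with (MXNet 1.9 shown here purely as an example).
image = image_uris.retrieve(
    framework="mxnet",
    region=session.boto_region_name,
    version="1.9.0",
    py_version="py38",
    image_scope="inference",
    instance_type="ml.m5.large",
)
model = Model(image_uri=image, model_data=model_data, role=role)

# 3. Deploy an endpoint and run inferences against it.
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```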

How can I deploy AWS SageMaker Linear Learner Model in a Local Environment

I have trained an AWS SageMaker model using the built-in Linear Learner algorithm. I can download the trained model artifacts (model.tar.gz) from S3.
How can I deploy the model in a local environment, independent of AWS, so I can make inference calls without internet access?
Matx, there is no local mode for built-in algorithms. However, you can programmatically load the MXNet module with the model weights and use it to make predictions. Check https://forums.aws.amazon.com/thread.jspa?messageID=827236&#827236 for a code example.
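For illustration, a heavily hedged sketch of loading the Linear Learner artifact with MXNet, along the lines of the forum thread above; the internal layout of the artifact (the model_algo-1 archive and the mx-mod-* file names) is undocumented and may change between algorithm versions:

```python
import tarfile
import zipfile
import mxnet as mx
import numpy as np

# Unpack the SageMaker artifact downloaded from S3.
with tarfile.open("model.tar.gz") as tar:
    tar.extractall(path="model")

# Assumption: Linear Learner stores an MXNet module inside 'model_algo-1',
# which is itself a zip archive.
with zipfile.ZipFile("model/model_algo-1") as zf:
    zf.extractall(path="model")

# Rebuild the MXNet module and run a prediction fully offline.
num_features = 10                                   # set to your feature count
sym = mx.sym.load("model/mx-mod-symbol.json")       # assumed file name
mod = mx.mod.Module(symbol=sym, data_names=["data"], label_names=None)
mod.bind(for_training=False, data_shapes=[("data", (1, num_features))])
mod.load_params("model/mx-mod-0000.params")         # assumed file name

sample = mx.io.NDArrayIter(np.zeros((1, num_features), dtype="float32"))
print(mod.predict(sample))
```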

Core understanding of what Salesforce is

Firstly, I apologise if this is a ridiculously simple question to answer, but it has been bothering me for a while.
I am trying to understand what Salesforce actually is, I mean in technical terms. I have read the website's documentation and the Wikipedia page, but I am trying to understand what's behind all this fluffy terminology.
My understanding is that Salesforce is a cloud-based database which stores a very high volume of information, and all Salesforce apps consist of scripts that query this database and model the data in different ways depending on the intended application. Is this correct?
Thanks!
Software as a Service (SaaS)
To get a program, you need to download it, install it, configure it, and so on. If your system has a lot of users, it's very hard to configure and support every single-user installation.
Imagine that you improved the application, with a new release for example. You would need to update every instance.
With the SaaS model you have a shared web application that does the same thing as the old downloadable one, but it's much easier to support, because ideally there is just one instance of it.
Salesforce is a company that provides its own system via the SaaS model, but not only that: it is also a platform for developing new applications.