Brewing up custom ML models on AWS SageMaker - amazon-sagemaker

I am new to SageMaker and I am trying to use my own scikit-learn algorithm. For this I use Docker.
I am trying to do the same task as described in this GitHub notebook: https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb
My question is: should I create the /opt/ml directory manually (I work on Windows)?
Can you explain this to me, please?
Thank you.

You don't need to create /opt/ml; SageMaker will create it for you when it launches your training job.
The contents of the /opt/ml directory are determined by the parameters you pass to the CreateTrainingJob API call. The scikit example notebook you linked to describes this (look at the "Running your container" sections). You can find more information in the Create a Training Job section of the main SageMaker documentation.
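As a rough illustration only (this sketch is not taken from the linked notebook; the file names, CSV layout, and classifier are assumptions), here is what a training script inside your own container can rely on, since SageMaker populates /opt/ml on the training instance for you:

```python
# Sketch of a container-side train script. SageMaker creates these directories,
# downloads the S3 data for the "training" channel into input/data/training/,
# and uploads whatever you write to /opt/ml/model/ back to S3 when training ends.
import os
import pickle

import pandas as pd
from sklearn.tree import DecisionTreeClassifier

PREFIX = "/opt/ml"
TRAINING_DIR = os.path.join(PREFIX, "input", "data", "training")
MODEL_DIR = os.path.join(PREFIX, "model")
# Hyperparameters passed to CreateTrainingJob end up in
# /opt/ml/input/config/hyperparameters.json.

def train():
    # Assumes CSV files with the label in the first column (an assumption,
    # mirroring the scikit_bring_your_own layout).
    files = [os.path.join(TRAINING_DIR, f) for f in os.listdir(TRAINING_DIR)]
    data = pd.concat(pd.read_csv(f, header=None) for f in files)
    y, X = data.iloc[:, 0], data.iloc[:, 1:]

    model = DecisionTreeClassifier().fit(X, y)
    with open(os.path.join(MODEL_DIR, "model.pkl"), "wb") as out:
        pickle.dump(model, out)

if __name__ == "__main__":
    train()
```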

Related

Amazon SageMaker Model Monitor for Batch Transform jobs

Couldn't find the right place to ask this, so doing it here.
Does Model Monitor support monitoring Batch Transform jobs, or only endpoints? The documentation seems to only reference endpoints...
We just launched support for this.
Here is the sample notebook:
https://github.com/aws/amazon-sagemaker-examples/tree/main/sagemaker_model_monitor/model_monitor_batch_transform
Here is the What's New post:
https://aws.amazon.com/about-aws/whats-new/2022/10/amazon-sagemaker-model-monitor-batch-transform-jobs/

Continuous Training in Sagemaker

I am trying out Amazon SageMaker and I haven't figured out how to set up continuous training.
For example, if I have a CSV file in S3, I want to train each time the CSV file is updated.
I know I can go back to the notebook and re-run the whole thing to make this happen.
But I am looking for an automated way, with some Python scripts, or a Lambda function triggered by S3 events, etc.
You can use the boto3 SDK for Python to start training from Lambda; then you need to trigger the Lambda when the CSV is updated.
http://boto3.readthedocs.io/en/latest/reference/services/sagemaker.html
Example Python code:
https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-train-model-create-training-job.html
Addition: you don't need to use Lambda; you can simply run or cron the Python script on any kind of instance that has Python and the AWS SDK installed.
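A minimal sketch of such a Lambda handler, assuming an S3 event trigger on the CSV; the ECR image URI, IAM role ARN, instance type, and bucket paths below are placeholders, not values from the question:

```python
import boto3

sagemaker = boto3.client("sagemaker")

def lambda_handler(event, context):
    # The S3 event tells us which CSV was just updated.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # Placeholder image, role, and paths; swap in your own values.
    sagemaker.create_training_job(
        TrainingJobName=f"retrain-{context.aws_request_id[:8]}",
        AlgorithmSpecification={
            "TrainingImage": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest",
            "TrainingInputMode": "File",
        },
        RoleArn="arn:aws:iam::123456789012:role/MySageMakerRole",
        InputDataConfig=[{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": f"s3://{bucket}/{key}",
                "S3DataDistributionType": "FullyReplicated",
            }},
        }],
        OutputDataConfig={"S3OutputPath": f"s3://{bucket}/output"},
        ResourceConfig={
            "InstanceType": "ml.m5.large",
            "InstanceCount": 1,
            "VolumeSizeInGB": 10,
        },
        StoppingCondition={"MaxRuntimeInSeconds": 3600},
    )
    return {"status": "training job started"}
```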
There are a couple of examples of how to accomplish this in the aws-samples GitHub organization.
The serverless-sagemaker-orchestration example sounds most similar to the use case you are describing. It walks you through continuously training a SageMaker linear regression model for housing price predictions on new CSV data added daily to an S3 bucket, using the built-in LinearLearner algorithm and orchestrated with Amazon CloudWatch Events, AWS Step Functions, and AWS Lambda.
There is also the similar aws-sagemaker-build example, but it might currently be more difficult to follow if you are looking for detailed instructions.
Hope this helps!

Zeppelin: Need to know more about zeppelin

I have recently started learning Zeppelin. I know we can use Angular, PostgreSQL, etc. within it using interpreters. I have gone through its tutorial as well, but it is not as descriptive as I had hoped. I have several doubts which I am asking here, and which may help other beginners as well.
1. How can we create an API for Zeppelin (if possible)? Since most client-side apps use an API, is it possible to create an API for Zeppelin, and in which language? If possible, I am thinking of creating the API in Java or Node.js.
2. Is it possible to integrate Zeppelin graphs into a UI (Angular or HTML)?
3. How can we deploy a Zeppelin-based application in a production environment?
If you have any good tutorial sources, please attach them.
If I have asked unrelated questions, please point that out and I will change them.
Thanks in advance for your help and your precious time!
1. Apache Zeppelin has a broad and well-documented REST API [1]. You can use any language to work with it (see the sketch after the links below).
2. Yes [2]. You can embed a paragraph's result in your website.
3. You can use the binary package or build from source [3].
[4] contains a lot of material in its setup section.
--
[1] http://zeppelin.apache.org/docs/0.8.0/usage/rest_api/notebook.html
[2] http://zeppelin.apache.org/docs/0.8.0/usage/other_features/publishing_paragraphs.html
[3] http://zeppelin.apache.org/docs/0.8.0/quickstart/install.html
[4] http://zeppelin.apache.org/docs/0.8.0/
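For example (a sketch only, assuming Zeppelin runs locally on port 8080 and using a made-up note id; the endpoints come from the notebook REST API docs in [1]), the API can be called from any HTTP client:

```python
import requests

BASE = "http://localhost:8080/api/notebook"
NOTE_ID = "2A94M5J1Z"  # hypothetical note id

# List all notes
print(requests.get(BASE).json()["body"])

# Run every paragraph of one note
requests.post(f"{BASE}/job/{NOTE_ID}")

# Fetch the note afterwards, including paragraph output
note = requests.get(f"{BASE}/{NOTE_ID}").json()
for paragraph in note["body"]["paragraphs"]:
    print(paragraph.get("results"))
```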

Google Cloud Platform Tensorboard - No dashboards are currently active

I was working on the TensorFlow Object Detection API. I managed to train it locally on my computer and got decent results. However, when I tried to replicate the same setup on GCP, I ran into several errors. Basically, I followed the official TensorFlow "running on cloud" documentation.
So this is how the bucket is laid out:
Bucket
weeddetectin-data
Train-packages
This is how I ran the training and evaluation jobs (screenshots: "Running a multi-worker training job" and "Running an evaluation job on cloud").
I then used the following command to monitor on TensorBoard:
tensorboard --logdir=gs://weeddetection --port=8080
I opened the dashboard using the preview feature in the console, but it says no dashboards are active for the current data set (screenshot: "No dashboards are active").
So I checked my activity page to confirm that the training and evaluation jobs were actually submitted (screenshots: "Training Job" and "Evaluation Job").
It seems as if no event files are being written to your bucket.
The root cause could be that the guide you are following refers to an old version of the TensorFlow models repository.
Please try changing
--train_dir=gs:...
to
--model_dir=gs://${YOUR_BUCKET_NAME}/model
Then resubmit the job. Once it is running, check the model_dir in the bucket to see whether event files are being written there; a quick check is sketched below.
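A small sketch of that check, assuming the google-cloud-storage client and that your model_dir is gs://weeddetection/model (both assumptions); it lists the bucket and looks for TensorFlow event files:

```python
from google.cloud import storage

client = storage.Client()
# List everything under the assumed model_dir prefix.
blobs = client.list_blobs("weeddetection", prefix="model/")
event_files = [b.name for b in blobs if "tfevents" in b.name]
print(event_files or "No event files yet, so TensorBoard has nothing to show.")
```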
Check out the gcloud ml-engine jobs documentation for further reading.
Hope it helps!

How to schedule a Cron job on the Google Cloud Platform?

Does anybody know of a barebones example for scheduling a Cron job on the Google Cloud Platform?
I have a simple Python script (developed in a virtualenv in Visual Studio Code) and I'm struggling to follow the examples provided by Google.
The script makes use of the Google Cloud client libraries (like google.cloud.storage and google.cloud.bigquery).
Any help much appreciated.
One way of doing it would be to
create a VM in the Compute Engine - read this
connect to your instance - read this
set up your cron job by modifying the /etc/crontab file: $ nano /etc/crontab
If you are actually interested in starting a process once a new file is uploaded to Storage, you could look into setting up a Cloud Function that does exactly that: https://cloud.google.com/functions/docs/calling/storage
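A sketch of what such a Cloud Function could look like (a Python background function triggered by google.storage.object.finalize; the BigQuery dataset and table names are placeholders, not from the question):

```python
from google.cloud import bigquery

def on_new_csv(event, context):
    """Load a newly uploaded CSV from Cloud Storage into BigQuery."""
    # The Storage trigger event carries the bucket and object name.
    uri = f"gs://{event['bucket']}/{event['name']}"

    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,   # assumes a header row
        autodetect=True,       # infer the schema from the file
    )
    # "my_dataset.my_table" is a placeholder destination.
    load_job = client.load_table_from_uri(uri, "my_dataset.my_table", job_config=job_config)
    load_job.result()  # wait for the load job to finish
    print(f"Loaded {uri} into my_dataset.my_table")
```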
You can now use Google Cloud Scheduler, which is fully managed (and cheap!) and much easier than setting up "proper" cron jobs. You can find it here: https://cloud.google.com/scheduler/
