How to deploy a Hugging Face model via a SageMaker pipeline - amazon-sagemaker

Below is the code to get the model from the Hugging Face Hub and deploy it via SageMaker.
from sagemaker.huggingface import HuggingFaceModel
import sagemaker

role = sagemaker.get_execution_role()

# Hub Model configuration. https://huggingface.co/models
hub = {
    'HF_MODEL_ID': 'siebert/sentiment-roberta-large-english',
    'HF_TASK': 'text-classification'
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    transformers_version='4.17.0',
    pytorch_version='1.10.2',
    py_version='py38',
    env=hub,
    role=role,
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
    initial_instance_count=1,       # number of instances
    instance_type='ml.g4dn.xlarge'  # ec2 instance type
)
How can I deploy this model via a SageMaker pipeline?
How can I include this code in a SageMaker pipeline?

Prerequisites
SageMaker Pipelines offers many different components with lots of functionality. Your question is quite general and needs to be contextualized to a specific problem.
You have to start by defining a pipeline first.
See the complete guide "Defining a pipeline".
Quick response
My suggestion is to follow this official AWS guide, which answers your question exactly:
SageMaker Pipelines: train a Hugging Face model, deploy it with a Lambda step
General explanation
Basically, you need to build your pipeline architecture with the components you need and register the trained model within the Model Registry.
Next, you have two paths you can follow:
Trigger a lambda that automatically deploys the registered model (as the guide does).
Outside the context of the pipeline, perform the deployment automatically by retrieving the ARN of the registered model from the Model Registry. You can get it from register_step.properties.ModelPackageArn or, in an external script, with boto3 (e.g. using list_model_packages); see the sketch below.
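For the second path, here is a minimal sketch; the model package group name, execution role ARN, and endpoint name are hypothetical placeholders, so replace them with the ones your pipeline actually uses:

import boto3
from sagemaker import ModelPackage

sm_client = boto3.client("sagemaker")

# Placeholder group name; use the group your RegisterModel step writes to.
packages = sm_client.list_model_packages(
    ModelPackageGroupName="huggingface-sentiment-models",
    SortBy="CreationTime",
    SortOrder="Descending",
    MaxResults=1,
)
latest_arn = packages["ModelPackageSummaryList"][0]["ModelPackageArn"]

# Deploy the latest registered package outside the pipeline.
model = ModelPackage(
    role="arn:aws:iam::123456789012:role/MySageMakerRole",  # placeholder execution role
    model_package_arn=latest_arn,
)
model.deploy(
    initial_instance_count=1,
    instance_type="ml.g4dn.xlarge",
    endpoint_name="sentiment-roberta-endpoint",  # hypothetical endpoint name
)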

Related

How do I register a multi-container model in Model Registry in AWS SageMaker?

I have created a multi-container model in SageMaker notebook and deployed it through an endpoint. But while attempting to do the same through a SageMaker Studio Project (build, train and deploy model template), I need to register the multi-container model through a 'sagemaker.workflow.step_collections.RegisterModel' step, which I am unable to do.
The boto3 client method create_model() has a parameter 'InferenceExecutionConfig' where 'Mode' is set to 'Direct' for a multi-container model.
InferenceExecutionConfig={
    'Mode': 'Serial'|'Direct'
}
To my understanding, a multi-container model is created through a boto3 API call. I haven't found a way to create a 'sagemaker.model.Model' instance through the SageMaker Python SDK that accepts the above parameter, hence I am not able to register it.
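For reference, a minimal sketch of such a Direct-mode create_model call; the image URIs, model data paths, and role ARN below are hypothetical placeholders:

import boto3

sm_client = boto3.client("sagemaker")

# Placeholder images, artifact locations, and role, shown only to illustrate
# the Direct-mode multi-container configuration.
sm_client.create_model(
    ModelName="my-multi-container-model",
    ExecutionRoleArn="arn:aws:iam::123456789012:role/MySageMakerRole",
    Containers=[
        {
            "ContainerHostname": "model-a",
            "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/model-a:latest",
            "ModelDataUrl": "s3://my-bucket/model-a/model.tar.gz",
        },
        {
            "ContainerHostname": "model-b",
            "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/model-b:latest",
            "ModelDataUrl": "s3://my-bucket/model-b/model.tar.gz",
        },
    ],
    InferenceExecutionConfig={"Mode": "Direct"},
)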
Using 'sagemaker.pipeline.PipelineModel' will result in a serial pipeline whereas I want to invoke each model directly though an endpoint.
Even the boto3 API method create_model_package(), which creates and registers the model to a ModelPackageGroup, does not have the 'InferenceExecutionConfig' parameter that I could use to register the multi-container model.
My aim is to create a SageMaker Studio Project with a multi-container endpoint which can be used to fetch outputs from two models in the project. If I'm missing something or there is some other approach to achieve this, please let me know that too.

GCP: Remove IAM policy from Service Account using Terraform

I'm creating an App Engine app using the following module: google_app_engine_flexible_app_version.
By default, Google creates a Default App Engine Service Account with roles/editor permissions.
I want to reduce the permissions of my AppEngine.
Therefore, I want to remove the roles/editor permission and add my custom role instead.
In order to remove it, I know I can use the gcloud projects remove-iam-policy-binding CLI command.
But I want it to be part of my terraform plan.
If you are using https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/app_engine_flexible_app_version to create your infrastructure, then you must have seen the following line in it:
role = "roles/compute.networkUser"
This role is used when setting up your infra, and you can adjust it after referring to https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/iam_deny_policy
Note: when setting up the role, please ensure valid permissions are in place for your App Engine to work properly.
I. Using the provided Terraform code as a template and tinkering with it
One simple hack I would suggest is to:
(1) first set up your infrastructure with the basic Terraform code you have, then (2) update/tinker with your infra as per your expectations, and (3) run terraform refresh and terraform plan to find the differences required to update your code.
The snippet below is not related to App Engine; it is only an example.
resource "google_dns_record_set" "default" {
name = google_dns_managed_zone.default.dns_name
managed_zone = google_dns_managed_zone.default.name
type = "A"
ttl = 300
rrdatas = [
google_compute_instance.default.network_interface.0.access_config.0.nat_ip
]
}
Above is the code for creating a DNS record using Terraform. After following steps 1, 2 & 3 above, I get the following differences to update my code:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # google_dns_record_set.default will be updated in-place
  ~ resource "google_dns_record_set" "default" {
        id           = "projects/mmterraform03/managedZones/example-zone-googlecloudexample/rrsets/googlecloudexample.com./A"
        name         = "googlecloudexample.com."
      ~ ttl          = 360 -> 300
        # (4 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
II. Using Terraform Import
gcloud (the Google Cloud Platform tool), Terraform, and several other open-source tools available today can read your existing infrastructure and write Terraform code for you.
So you can check terraform import or Google's docs - https://cloud.google.com/docs/terraform/resource-management/import#:~:text=Terraform%20can%20import%20existing%20infrastructure,manage%20your%20deployment%20in%20Terraform.
But to use this method, you have to set up your infrastructure first. You either do it completely manually from the Google Console UI or use Terraform first and then update it.
As option III, you can reach out to or hire a Terraform expert to do this task for you, but options I and II work best for most cases.
On a different note, please see https://stackoverflow.com/help/how-to-ask and
https://stackoverflow.com/help/minimal-reproducible-example. Opinion-based and how/what-to-do questions are usually discouraged on Stack Overflow.
This is one situation where you might consider using google_project_iam_policy.
That could be used to knock out the Editor role, but it will knock out everything else you don't explicitly list in the policy!
Beware: there is a risk of locking yourself out of your project if you are not sure what you are doing.
Another option would be to use a custom service account.
Use terraform to create the account and apply the desired roles.
Use gcloud app deploy --service-account={custom-sa} to deploy a service to app engine that uses the custom account.
But you may still wish to remove the Editor role from the default service account. Given that you already have the gcloud command to do it, gcloud projects remove-iam-policy-binding, you could use the terraform-google-gcloud module to execute the command from Terraform.
See also this feature request.

Flask and connecting to a database confusion

I am currently creating a web application using Flask. My main issue at the moment is understanding the concept of connecting to a database, as the many resources online are confusing me in terms of establishing a solid connection to a database. The syntax of SQL is not a problem, as I have knowledge of that.
I am choosing SQLAlchemy with a dialect of SQLite instead of MySQL, PostgreSQL, etc.
My first question is: is choosing a dialect while using SQLAlchemy necessary? Can we not use SQLAlchemy as it is?
Second Question: I have seen many examples and tutorials online using "phpMyAdmin" or something similar to have a visual and interactive way to deal with their database (relations) in their localhost browser. Is this necessary to set up before creating any type of database connection for any type of project?
Second Question (extension): To set up phpMyAdmin, there are tutorials such as "https://www.youtube.com/watch?v=hVHFPzjp064&t=238s" indicating that you should activate Apache, activate PHP, and download MySQL to use a workbench. As stated in the second question, are these steps mandatory? Many tutorials don't seem to show how to set this up.
Third Question: As my project is slowly growing, I am using the 'separation of concerns' concept. My file tree is the following:
After researching, I believe I should include database-related code in the __init__.py file? Plus, of course, updating the config file with the necessary configurations? What I don't understand is the syntax used to connect to a database. The following shows my code in both files stated above:
__init__.py
# This class will ultimately bring our entire application together.
from flask import Flask
from config import Config
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate

# Creating Flask app.
app = Flask(__name__)

# Creating a database object which represents the database.
# Created a migration object which represents the migration engine.
db = SQLAlchemy(app)
migrate = Migrate(app, db)

# TODO Explain reasons for using this method:
# Using method to determine Flask environment from the following link:
# https://www.youtube.com/watch?v=GW_2O9CrnSU&t=366s
if app.config["ENV"] == "production":
    app.config.from_object("config.ProductionConfig")
elif app.config["ENV"] == "testing":
    app.config.from_object("config.TestingConfig")
else:
    app.config.from_object("config.DevelopmentConfig")

# Importing views file to avoid circular import.
from app import views
from app import admin_views
from app import routes, models
config.py
# This class contains important information regarding the configurations for this application.
# It is good practice to keep configurations of the application in a separate file. This enforces the
# practice of 'separation of concerns'.
# There is a main class "Config" which has subclasses as illustrated below. The configuration settings
# are defined as class variables within the 'Config' class. As the application grows, we can create subclasses.
import os

basedir = os.path.abspath(os.path.dirname(__file__))

# The SECRET_KEY is important as it...
class Config(object):
    DEBUG = False
    TESTING = False
    SECRET_KEY = '\xb6"\xc5\xce\xc2D\xd1*\x0c\x06\x83 \xbc\xdbM\x97\xe2\xf4OZ\xdc\x16Jv'

    # The SQLAlchemy extension is connecting the location of the database from the URI variable.
    # The fallback value if the value is not defined is given below as the URL.
    SQLALCHEMY_DATABASE_URI = os.environ.get('DATABASE_URL') or \
        'sqlite:///' + os.path.join(basedir, 'app.db')

    # The 'modifications' config option is set to false as it prevents a signal from appearing whenever
    # there is a change made within the database.
    SQLALCHEMY_TRACK_MODIFICATIONS = False

class ProductionConfig(Config):
    pass

class DevelopmentConfig(Config):
    DEBUG = True

class TestingConfig(Config):
    TESTING = True
I apologise if my questions seem all over the place. The more I research, the more confused I become about being able to successfully connect to a database.
I would appreciate it if someone could address my concerns in an 'easy to understand' way.
You can avoid (or forestall) some amount of confusion by starting from
flask-sqlalchemy, which provides a convenience layer over SQLAlchemy.
It will arrange for SQLALCHEMY_DATABASE_URI to be turned into an SQLAlchemy "engine" for the database specified in the URI. SQLAlchemy does all of the heavy lifting. There's no need to do the create_engine() yourself when using flask-sqlalchemy.
I'll add that following the organizational scheme from Chapter 15 of the Flask Mega-Tutorial will carry you quite far.
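As a minimal sketch of that pattern (the model, table, and SQLite file name are illustrative assumptions, not taken from your project):

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///app.db"
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False

# flask-sqlalchemy builds the SQLAlchemy engine from the URI; no create_engine() needed.
db = SQLAlchemy(app)

class User(db.Model):
    # Hypothetical model, just to show how tables are declared against db.Model.
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80), unique=True, nullable=False)

with app.app_context():
    db.create_all()  # creates app.db (and the user table) if they don't exist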

Where do I store my model's training data, artifacts, etc?

I'm trying to build and push a custom ML model with Docker to Amazon SageMaker. I know things are supposed to follow the general structure of being in /opt/ml. But there's no such bucket in Amazon S3? Am I supposed to create this directory within my container before I build and push the image to AWS? I just have no idea where to put my training data, etc.
SageMaker automates the deployment of the Docker image with your code using a channel -> local-folder convention. Everything that you define as a channel in your input data configuration will be copied to the local Docker file system under the /opt/ml/ folder, using the name of the channel as the name of the sub-folder. For example, the following input data configuration:
{
    "train": {"ContentType": "trainingContentType",
              "TrainingInputMode": "File",
              "S3DistributionType": "FullyReplicated",
              "RecordWrapperType": "None"},
    "evaluation": {"ContentType": "evalContentType",
                   "TrainingInputMode": "File",
                   "S3DistributionType": "FullyReplicated",
                   "RecordWrapperType": "None"},
    "validation": {"TrainingInputMode": "File",
                   "S3DistributionType": "FullyReplicated",
                   "RecordWrapperType": "None"}
}
maps to:
/opt/ml/input/data/train
/opt/ml/input/data/evaluation
/opt/ml/input/data/validation
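For reference, with the SageMaker Python SDK these channels come from the dictionary you pass to fit(); a minimal sketch, where the image URI, role ARN, and bucket paths are placeholders:

from sagemaker.estimator import Estimator

# Each key of the fit() dict is a channel; SageMaker copies its contents to
# /opt/ml/input/data/<channel-name> inside the training container.
estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training:latest",
    role="arn:aws:iam::123456789012:role/MySageMakerRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/output/",  # trained artifacts (model.tar.gz) land here
)

estimator.fit({
    "train": "s3://my-bucket/data/train/",
    "validation": "s3://my-bucket/data/validation/",
})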
When creating your custom model on AWS SageMaker, you can store your docker container with your inference code on ECR, while keeping your model artifacts just on S3. You can then just specify the S3 path to said artifacts when creating the model (when using Boto3's create_model, for example). This may simplify your solution so you don't have to re-upload your docker container every time you may need to change your artifacts (though you will need to re-create your model on SageMaker).
The same goes for your data sets. SageMaker's Batch Transform feature allows you to feed any of your data sets stored on S3 directly into your model without needing to keep them in your Docker container. This really helps if you want to run your model on many different data sets without needing to re-upload your image.
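As a minimal sketch of the create_model call mentioned above, with a placeholder model name, image URI, role ARN, and S3 path:

import boto3

sm_client = boto3.client("sagemaker")

# The inference image lives in ECR, while the model artifacts stay on S3 and
# are referenced by ModelDataUrl, so changing artifacts does not require a new image.
sm_client.create_model(
    ModelName="my-custom-model",
    ExecutionRoleArn="arn:aws:iam::123456789012:role/MySageMakerRole",
    PrimaryContainer={
        "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference:latest",
        "ModelDataUrl": "s3://my-bucket/artifacts/model.tar.gz",
    },
)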

What's a namespace used for in the App Engine datastore?

In the development admin console, when I look at my data, it says "Select different namespace".
What are namespaces for and how should I use them?
Namespaces allow you to implement segregation of data for multi-tenant applications. The official documentation links to some sample projects to give you an idea how it might be used.
Namespaces are used in Google App Engine to create multitenant applications. In a multitenant application, a single instance of the application runs on a server, serving multiple client organizations (tenants). With this, an application can be designed to virtually partition its data and configuration (business logic), and each client organization works with a customized virtual application instance. You can easily partition data across tenants simply by specifying a unique namespace string for each tenant.
Other uses of namespaces:
Compartmentalizing user information
Separating admin data from application data
Creating separate datastore instances for testing and production
Running multiple apps on a single app engine instance
For more information, visit the links below:
http://www.javacodegeeks.com/2011/12/multitenancy-in-google-appengine-gae.html
https://developers.google.com/appengine/docs/java/multitenancy/
http://java.dzone.com/articles/multitenancy-google-appengine
http://www.sitepoint.com/multitenancy-and-google-app-engine-gae-java/
Looking at this question, it has not been that well reviewed and answered, so I'm trying to give my own answer. When using namespaces, it is good practice to keep keys and values separated per namespace. The following is a good example that shows how namespaces are used in practice.
from google.appengine.api import namespace_manager
from google.appengine.ext import db
from google.appengine.ext import webapp

class Counter(db.Model):
    """Model for containing a count."""
    count = db.IntegerProperty()

def update_counter(name):
    """Increment the named counter by 1."""
    def _update_counter(name):
        counter = Counter.get_by_key_name(name)
        if counter is None:
            counter = Counter(key_name=name)
            counter.count = 1
        else:
            counter.count = counter.count + 1
        counter.put()
    # Update counter in a transaction.
    db.run_in_transaction(_update_counter, name)

class SomeRequest(webapp.RequestHandler):
    """Perform synchronous requests to update counter."""
    def get(self):
        update_counter('SomeRequest')
        # try/finally pattern to temporarily set the namespace.
        # Save the current namespace.
        namespace = namespace_manager.get_namespace()
        try:
            namespace_manager.set_namespace('-global-')
            update_counter('SomeRequest')
        finally:
            # Restore the saved namespace.
            namespace_manager.set_namespace(namespace)
        self.response.out.write('<html><body><p>Updated counters')
        self.response.out.write('</p></body></html>')
