SageMaker batch transform job failed on loading model - amazon-sagemaker

I am trying to run a batch transform job with the HuggingFace class, a fine-tuned model, and a custom inference file.
The job fails on loading the model, but I can load it locally.
I need a custom inference file because I need to keep the input file as-is, so I had to change the key that is read from the input JSON file.
Here is the exception:
PredictionException(str(e), 400)
2022-05-08 16:49:45,499 [INFO ] W-model-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - mms.service.PredictionException: Can't load config for '/.sagemaker/mms/models/model'. Make sure that:
2022-05-08 16:49:45,499 [INFO ] W-model-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle -
2022-05-08 16:49:45,499 [INFO ] W-model-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - - '/.sagemaker/mms/models/model' is a correct model identifier listed on 'https://huggingface.co/models'
2022-05-08 16:49:45,499 [INFO ] W-model-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle -
2022-05-08 16:49:45,500 [INFO ] W-model-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - - or '/.sagemaker/mms/models/model' is the correct path to a directory containing a config.json file
2022-05-08 16:49:45,500 [INFO ] W-model-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle -
2022-05-08 16:49:45,500 [INFO ] W-model-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - : 400
I am running in script mode:
from sagemaker.huggingface.model import HuggingFaceModel

hub = {
    # 'HF_MODEL_ID': 'cardiffnlp/twitter-roberta-base-sentiment',
    'HF_TASK': 'text-classification',
    'INPUT_TEXTS': 'Description'
}

huggingface_model = HuggingFaceModel(
    model_data='../model/model.tar.gz',
    role=role,
    source_dir="../model/pytorch_model/code",
    transformers_version="4.6",
    pytorch_version="1.7",
    py_version="py36",
    entry_point="inference.py",
    env=hub
)

batch_job = huggingface_model.transformer(
    instance_count=1,
    instance_type='ml.p3.2xlarge',
    output_path=output_s3_path,  # we are using the same S3 path for the output as for the input
    strategy='SingleRecord',
    accept='application/json',
    assemble_with='Line'
)

batch_job.transform(
    data=s3_file_uri,
    content_type='application/json',
    split_type='Line',
    # input_filter='$[1:]',
    join_source='Input'
)
Custom inference.py
import json
import os
from transformers import pipeline
import torch

DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'


def model_fn(model_dir):
    model = pipeline(task=os.environ.get('HF_TASK', 'text-classification'), model=model_dir, tokenizer=model_dir)
    return model


def transform_fn(model, input_data, content_type, accept):
    input_data = json.loads(input_data)
    input_text = os.environ.get('INPUT_TEXTS', 'inputs')
    inputs = input_data.pop(input_text, None)
    parameters = input_data.pop("parameters", None)
    # pass inputs with all kwargs in data
    if parameters is not None:
        prediction = model(inputs, **parameters)
    else:
        prediction = model(inputs)
    return json.dumps(
        prediction,
        ensure_ascii=False,
        allow_nan=False,
        indent=None,
        separators=(",", ":"),
    )

I think the issue is with the "model_data" parameter: it should point to an S3 object (model.tar.gz). The transform job will then download the model archive from S3 and load it.
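For example, the archive could be uploaded first and the resulting S3 URI passed as model_data; a minimal sketch (the key prefix is a placeholder, and role/hub are the variables from the snippet above):
import sagemaker
from sagemaker.huggingface.model import HuggingFaceModel

sess = sagemaker.Session()
# Upload the local archive to the session's default bucket and get back its S3 URI.
model_uri = sess.upload_data(path='../model/model.tar.gz', key_prefix='batch-model')

huggingface_model = HuggingFaceModel(
    model_data=model_uri,  # S3 URI instead of a local path
    role=role,
    source_dir='../model/pytorch_model/code',
    entry_point='inference.py',
    transformers_version='4.6',
    pytorch_version='1.7',
    py_version='py36',
    env=hub
)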

The solution is to change the "task" in the pipeline to "sentiment-analysis":
hub = {
    # 'HF_MODEL_ID': 'cardiffnlp/twitter-roberta-base-sentiment',
    'HF_TASK': 'sentiment-analysis',
    'INPUT_TEXTS': 'Description'
}

Related

Training Job in Sagemaker gives error in locating file in S3 to docker image path

I am trying to use the scikit_bring_your_own/container/decision_trees/train script. Running it via the AWS CLI, I had no issues; trying to replicate it by creating a SageMaker training job, I am facing an issue loading data from S3 into the Docker image path.
On the CLI we specified docker run -v $(pwd)/test_dir:/opt/ml --rm ${image} train, which tells the container where the input is to be read from.
In the training job, I specified the S3 bucket location and the output path for model artifacts.
The error raised in the exception in the train script ("container/decision_trees/train") is:
raise ValueError(('There are no files in {}.\n' +
'This usually indicates that the channel ({}) was incorrectly specified,\n' +
'the data specification in S3 was incorrectly specified or the role specified\n' +
'does not have permission to access the data.').format(training_path, channel_name))
Traceback (most recent call last):
File "/opt/program/train", line 55, in train
'does not have permission to access the data.').format(training_path, channel_name))
So I do not understand whether any tweaking is required or whether some access is missing.
Kindly help.
If you set the InputDataConfig in the CreateTrainingJob API like this
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataDistributionType": "FullyReplicated",
"S3DataType": "S3Prefix",
"S3Uri": "s3://<bucket>/a.csv"
}
},
"InputMode": "File",
},
{
"ChannelName": "eval",
"DataSource": {
"S3DataSource": {
"S3DataDistributionType": "FullyReplicated",
"S3DataType": "S3Prefix",
"S3Uri": "s3://<bucket>/b.csv"
}
},
"InputMode": "File",
}
]
SageMaker downloads the data specified above from S3 to the /opt/ml/input/data/channel_name directory in the Docker container. In this case, the algorithm container should be able to find the input data under:
/opt/ml/input/data/train/a.csv
/opt/ml/input/data/eval/b.csv
You can find more details in https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-training-algo.html
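The same two channels can also be wired up from the SageMaker Python SDK (v2) instead of calling CreateTrainingJob directly; a sketch with the image URI, role, and bucket left as placeholders:
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri='<account>.dkr.ecr.<region>.amazonaws.com/decision-trees:latest',  # your container
    role='<execution-role-arn>',
    instance_count=1,
    instance_type='ml.m5.large',
    output_path='s3://<bucket>/output'
)

# Each dict key becomes a channel, and its files are downloaded to
# /opt/ml/input/data/<channel>/ inside the training container.
estimator.fit({
    'train': 's3://<bucket>/a.csv',
    'eval': 's3://<bucket>/b.csv'
})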

Gatling not logging to influxdb?

I've tried following the guide at http://gatling.io/docs/2.2.3/realtime_monitoring/index.html to log my test results to InfluxDB and display the data in a Grafana instance that I previously set up. However, I can't see any of the data that Gatling is supposed to log anywhere in InfluxDB.
I've edited my influxdb.conf file so that it contains the following fields:
[[graphite]]
enabled = true
database = "gatlingdb"
bind-address = ":2003"
protocol = "tcp"
consistency-level = "one"
name-separator = "."
templates = [
"gatling.*.*.*.count measurement.simulation.request.status.field",
"gatling.*.*.*.min measurement.simulation.request.status.field",
"gatling.*.*.*.max measurement.simulation.request.status.field",
"gatling.*.*.*.percentiles50 measurement.simulation.request.status.field",
"gatling.*.*.*.percentiles75 measurement.simulation.request.status.field",
"gatling.*.*.*.percentiles95 measurement.simulation.request.status.field",
"gatling.*.*.*.percentiles99 measurement.simulation.request.status.field"
]
and my gatling.conf file contains the following fields:
data {
    writers = [console, file, graphite] # The list of DataWriters to which Gatling write simulation data (currently supported : console, file, graphite, jdbc)
    console {
        #light = false # When set to true, displays a light version without detailed request stats
    }
    graphite {
        #light = false # only send the all* stats
        host = "127.0.0.1" # The host where the Carbon server is located
        port = 2003 # The port to which the Carbon server listens to (2003 is default for plaintext, 2004 is default for pickle)
        protocol = "tcp" # The protocol used to send data to Carbon (currently supported : "tcp", "udp")
        rootPathPrefix = "gatling" # The common prefix of all metrics sent to Graphite
        #bufferSize = 8192 # GraphiteDataWriter's internal data buffer size, in bytes
        #writeInterval = 1 # GraphiteDataWriter's write interval, in seconds
    }
}
Whenever I run my Gatling tests I see no error messages or anything that indicates something is wrong, but I cannot see anything in the influxd logs indicating that data has been logged to InfluxDB, nor can I see any data in the gatlingdb database. I am using InfluxDB v0.10 and Gatling v2.2.3 on Ubuntu.
Can anyone help me figure out what I am doing wrong?
Updating to InfluxDB v1.1 seemed to resolve the problem on its own.
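If the listener still looks silent on a newer InfluxDB, one quick check is to push a single metric at the Graphite port yourself and then look for it in the gatlingdb database; a rough sketch (the metric path is made up, and port 2003 matches the configs above):
import socket
import time

# Graphite plaintext protocol: "<path> <value> <timestamp>\n"
metric = 'gatling.testsim.testreq.ok.count 1 %d\n' % int(time.time())

sock = socket.create_connection(('127.0.0.1', 2003), timeout=5)
sock.sendall(metric.encode('ascii'))
sock.close()
# Then query InfluxDB (e.g. SHOW MEASUREMENTS on gatlingdb) to see whether it arrived.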

How to load data from the online GAE datastore into the local development server?

I have previously used the approach described in the GAE docs to download backups of my entities from the live datastore.
Currently, I have a CSV file per entity kind, which I got by writing bulkloader.yaml and using this command:
appcfg.py download_data --config_file=bulkloader.yaml --filename=users.csv --kind=Permission --url=http://your_app_id.appspot.com/_ah/remote_api
I also have a sql3 dump file that I got using the command:
appcfg.py download_data --kind=<kind> --url=http://your_app_id.appspot.com/_ah/remote_api --filename=<data-filename>
Now if I try this command:
appcfg.py upload_data --url=http://your_app_id.appspot.com/_ah/remote_api --kind=<kind> --filename=<data-filename>
Replacing the URL with localhost:8080, it asks me for a username/password. Even if I provide a mock username (test@example.com) at http://localhost:8080/_ah/remote_api and check the "admin" checkbox, it always gives me an authentication error.
The other alternative mentioned in the docs is using this:
appcfg.py upload_data --config_file=album_loader.py --filename=album_data.csv --kind=Album --url=http://localhost:8080/_ah/remote_api <app-directory>
I wrote a loader and tried it out; it also asks for a username and password, but it accepts anything here. The output is as follows:
/usr/local/google_appengine/google/appengine/api/search/search.py:232: UserWarning: DocumentOperationResult._code is deprecated. Use OperationResult._code instead.
'Use OperationResult.%s instead.' % (name, name))
/usr/local/google_appengine/google/appengine/api/search/search.py:232: UserWarning: DocumentOperationResult._CODES is deprecated. Use OperationResult._CODES instead.
'Use OperationResult.%s instead.' % (name, name))
Application: knowledgetestgame
Uploading data records.
[INFO ] Logging to bulkloader-log-20121113.210613
[INFO ] Throttling transfers:
[INFO ] Bandwidth: 250000 bytes/second
[INFO ] HTTP connections: 8/second
[INFO ] Entities inserted/fetched/modified: 20/second
[INFO ] Batch Size: 10
[INFO ] Opening database: bulkloader-progress-20121113.210613.sql3
Please enter login credentials for localhost
Email: test#example.com
Password for test#example.com:
[INFO ] Connecting to localhost:8080/_ah/remote_api
[INFO ] Starting import; maximum 10 entities per post
[ERROR ] [WorkerThread-4] WorkerThread:
Traceback (most recent call last):
File "/usr/local/google_appengine/google/appengine/tools/adaptive_thread_pool.py", line 176, in WorkOnItems
status, instruction = item.PerformWork(self.__thread_pool)
File "/usr/local/google_appengine/google/appengine/tools/bulkloader.py", line 764, in PerformWork
transfer_time = self._TransferItem(thread_pool)
File "/usr/local/google_appengine/google/appengine/tools/bulkloader.py", line 933, in _TransferItem
self.content = self.request_manager.EncodeContent(self.rows)
File "/usr/local/google_appengine/google/appengine/tools/bulkloader.py", line 1394, in EncodeContent
entity = loader.create_entity(values, key_name=key, parent=parent)
File "/usr/local/google_appengine/google/appengine/tools/bulkloader.py", line 2728, in create_entity
(len(self.__properties), len(values)))
AssertionError: Expected 17 columns, found 18.
[INFO ] [WorkerThread-5] Backing off due to errors: 1.0 seconds
[INFO ] Unexpected thread death: WorkerThread-4
[INFO ] An error occurred. Shutting down...
[ERROR ] Error in WorkerThread-4: Expected 17 columns, found 18.
[INFO ] 980 entities total, 0 previously transferred
[INFO ] 0 entities (278 bytes) transferred in 5.9 seconds
[INFO ] Some entities not successfully transferred
I have ~4000 entities in total; it says here that 980 were transferred, but when I check the local datastore I find none of them.
Below is the loader I use (I used NDB for the Guess entity):
import datetime
from google.appengine.ext import db
from google.appengine.tools import bulkloader
from google.appengine.ext.ndb import key


class Guess(db.Model):
    pass


class GuessLoader(bulkloader.Loader):
    def __init__(self):
        bulkloader.Loader.__init__(self, 'Guess',
            [('selectedAssociation', lambda x: x.decode('utf-8')),
             ('suggestionsList', lambda x: x.decode('utf-8')),
             ('associationIndexInList', int),
             ('timeEntered',
              lambda x: datetime.datetime.strptime(x, '%m/%d/%Y').date()),
             ('rank', int),
             ('topicName', lambda x: x.decode('utf-8')),
             ('topic', int),
             ('player', int),
             ('game', int),
             ('guessString', lambda x: x.decode('utf-8')),
             ('guessTime',
              lambda x: datetime.datetime.strptime(x, '%m/%d/%Y').date()),
             ('accountType', lambda x: x.decode('utf-8')),
             ('nthGuess', int),
             ('score', float),
             ('cutByRoundEnd', bool),
             ('suggestionsListDelay', int),
             ('occurrences', float)
             ])


loaders = [GuessLoader]
Edit: I just noticed this part of the error message, [ERROR] Error in WorkerThread-0: Expected 17 columns, found 18, even though I just went through the whole CSV file and made sure that every line has 18 columns. I checked the loader and found that I was missing the key column; I gave it type int, but that doesn't work.
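If that extra column really is the datastore key and it is not needed on the dev server, one workaround is to strip it from the CSV before uploading instead of adding it to the loader; a sketch, assuming the key is the first column and using hypothetical file names:
import csv

# Drop the first (key) column from every row so the file matches the 17
# properties declared in GuessLoader.
with open('guess_dump.csv', 'rb') as src, open('guess_17col.csv', 'wb') as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    for row in reader:
        writer.writerow(row[1:])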
If you have problems with the authentication, put the following in your appengine_config.py:
import os

if os.environ.get('SERVER_SOFTWARE', '').startswith('Development'):
    remoteapi_CUSTOM_ENVIRONMENT_AUTHENTICATION = (
        'REMOTE_ADDR', ['127.0.0.1'])
then run
appcfg.py download_data --url=http://APPNAME.appspot.com/_ah/remote_api --filename=dump --kind=EntityName
appcfg.py upload_data --url=http://localhost:8080/_ah/remote_api --filename=dump --application=dev~APPNAME
Try just pressing Enter (no username/password). This seemed to do the trick for me. My command (wrapped in a bash script to prevent import errors that I occasionally received) is:
#!/bin/bash
# Modify path
export PYTHONPATH=$PYTHONPATH:.
# Load data
python /path/to/app/config/appcfg.py upload_data \
--config_file=<my_loader.py> \
--filename=<output.csv> \
--kind=<kind> \
--application=dev~<application_id> \
--url=http://localhost:8088/_ah/remote_api ./
When prompted for the Email, I hit enter and all is uploaded to the dev server. I am not using NDB in this case, although I do not believe that should make a difference.

no such table -- django

I am using South, and I have run syncdb, but alas I get stuck with this database error. Can you help me fix it, and maybe tell me how to fix something like this in the future?
This missing-table error is coming up in a couple of different places dealing with something called salesflow_contact. I have a folder salesflow with a models.py file containing a class Contact (with a capital C).
The error shown is from trying to post some content to the table, but I also get an error with the pagination view browsecontacts.
Environment:
Request Method: POST
Request URL: http://127.0.0.1:8000/addacontact
Django Version: 1.4.2
Python Version: 2.7.3
Installed Applications:
('django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.admin',
'django.contrib.admindocs',
'south',
'sekizai')
Installed Middleware:
('django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware')
Traceback:
File "/home/brian/virt_env/virt_step/local/lib/python2.7/site-packages/Django-1.4.2-py2.7.egg/django/core/handlers/base.py" in get_response
111. response = callback(request, *callback_args, **callback_kwargs)
File "/home/brian/projects/steprider/steprider/views.py" in Add_a_contact
36. new_contact = form.save(request.user)
File "/home/brian/projects/steprider/salesflow/forms.py" in save
23. contact.save()
File "/home/brian/virt_env/virt_step/local/lib/python2.7/site-packages/Django-1.4.2-py2.7.egg/django/db/models/base.py" in save
463. self.save_base(using=using, force_insert=force_insert, force_update=force_update)
File "/home/brian/virt_env/virt_step/local/lib/python2.7/site-packages/Django-1.4.2-py2.7.egg/django/db/models/base.py" in save_base
551. result = manager._insert([self], fields=fields, return_id=update_pk, using=using, raw=raw)
File "/home/brian/virt_env/virt_step/local/lib/python2.7/site-packages/Django-1.4.2-py2.7.egg/django/db/models/manager.py" in _insert
203. return insert_query(self.model, objs, fields, **kwargs)
File "/home/brian/virt_env/virt_step/local/lib/python2.7/site-packages/Django-1.4.2-py2.7.egg/django/db/models/query.py" in insert_query
1593. return query.get_compiler(using=using).execute_sql(return_id)
File "/home/brian/virt_env/virt_step/local/lib/python2.7/site-packages/Django-1.4.2-py2.7.egg/django/db/models/sql/compiler.py" in execute_sql
909. for sql, params in self.as_sql():
File "/home/brian/virt_env/virt_step/local/lib/python2.7/site-packages/Django-1.4.2-py2.7.egg/django/db/models/sql/compiler.py" in as_sql
872. for obj in self.query.objs
File "/home/brian/virt_env/virt_step/local/lib/python2.7/site-packages/django_extensions-0.9-py2.7.egg/django_extensions/db/fields/__init__.py" in pre_save
135. value = unicode(self.create_slug(model_instance, add))
File "/home/brian/virt_env/virt_step/local/lib/python2.7/site-packages/django_extensions-0.9-py2.7.egg/django_extensions/db/fields/__init__.py" in create_slug
122. while not slug or queryset.filter(**kwargs):
File "/home/brian/virt_env/virt_step/local/lib/python2.7/site-packages/Django-1.4.2-py2.7.egg/django/db/models/query.py" in __nonzero__
130. iter(self).next()
File "/home/brian/virt_env/virt_step/local/lib/python2.7/site-packages/Django-1.4.2-py2.7.egg/django/db/models/query.py" in _result_iter
118. self._fill_cache()
File "/home/brian/virt_env/virt_step/local/lib/python2.7/site-packages/Django-1.4.2-py2.7.egg/django/db/models/query.py" in _fill_cache
892. self._result_cache.append(self._iter.next())
File "/home/brian/virt_env/virt_step/local/lib/python2.7/site-packages/Django-1.4.2-py2.7.egg/django/db/models/query.py" in iterator
291. for row in compiler.results_iter():
File "/home/brian/virt_env/virt_step/local/lib/python2.7/site-packages/Django-1.4.2-py2.7.egg/django/db/models/sql/compiler.py" in results_iter
763. for rows in self.execute_sql(MULTI):
File "/home/brian/virt_env/virt_step/local/lib/python2.7/site-packages/Django-1.4.2-py2.7.egg/django/db/models/sql/compiler.py" in execute_sql
818. cursor.execute(sql, params)
File "/home/brian/virt_env/virt_step/local/lib/python2.7/site-packages/Django-1.4.2-py2.7.egg/django/db/backends/util.py" in execute
40. return self.cursor.execute(sql, params)
File "/home/brian/virt_env/virt_step/local/lib/python2.7/site-packages/Django-1.4.2-py2.7.egg/django/db/backends/sqlite3/base.py" in execute
344. return Database.Cursor.execute(self, query, params)
Exception Type: DatabaseError at /addacontact
Exception Value: no such table: salesflow_contact
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# steprider.salesflow.forms.py
#
# Copyright 2012 BRIAN SCOTT CARPENTER <KlanestroTalisman#gmail.com>
from django.forms import ModelForm
from salesflow import models
from django import forms
from django.template.defaultfilters import slugify


class ContactForm(ModelForm):
    description = forms.CharField(widget=forms.Textarea)

    class Meta:
        model = models.Contact
        exclude = ("user", "slug")

    def save(self, user):
        contact = super(ContactForm, self).save(commit=False)
        contact.user = user
        contact.save()
        return contact


class Contact_History_Form:
    class Meta:
        model = models.Contact_History
        exclude = ("user", "slug")
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# views.py
#
# Copyright 2012 BRIAN SCOTT CARPENTER <KlanestroTalisman#gmail.com>
from django.conf import settings
from django.core.urlresolvers import reverse
from django.http import HttpResponseRedirect, HttpResponseServerError
from django.shortcuts import render_to_response, get_object_or_404
from django.template import RequestContext, Context, loader
from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger
from salesflow.forms import *
from salesflow.models import Contact


def home(request):
    """ This is the Home view"""
    return render_to_response('index.html', context_instance=RequestContext(request))


def server_error(request, template_name='templates/500.html'):
    """
    500 error handler.
    Templates: `500.html`
    Context:
        STATIC_URL
            Path of static media (e.g. "media.example.org")
    """
    t = loader.get_template(template_name)
    return HttpResponseServerError(
        t.render(Context({'STATIC_URL': settings.STATIC_URL})))


def Add_a_contact(request):
    """give them a page so they can donate and save it to a database."""
    if request.method == 'POST':
        form = ContactForm(request.POST, request.FILES)
        if form.is_valid():
            new_contact = form.save(request.user)
            return HttpResponseRedirect(reverse(item, args=(new_contact.slug,)))
    else:
        form = ContactForm()
    return render_to_response('salesflow/addacontact.html', {'form': form}, context_instance=RequestContext(request))


def browse_contacts(request):
    contact_list = Contact.objects.all()
    paginator = Paginator(contact_list, 15)
    page = request.GET.get('page')
    try:
        contacts = paginator.page(page)
    except PageNotAnInteger:
        # If page is not an integer, deliver first page.
        contacts = paginator.page(1)
    except EmptyPage:
        # If page is out of range (e.g. 9999), deliver last page of results.
        contacts = paginator.page(paginator.num_pages)
    return render_to_response('salesflow/browsecontacts.html',
                              {"contacts": contacts,
                               "site": settings.SITE_DOMAIN},
                              context_instance=RequestContext(request))
The error is pretty straightforward: you have no table called salesflow_contact.
You'd need to run migrate, assuming you already have the initial table-creation migration produced by schemamigration salesflow --initial.
syncdb will not create tables for South-managed models, which is most likely your problem. You can bypass this with syncdb --all, but if the app is truly South-managed, you should run migrate salesflow and it will create the tables, assuming the schemamigration salesflow --initial migration is in there.
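If you prefer to drive this from Python instead of the command line, the same two steps can be run with call_command; a sketch, assuming South's documented --initial flag maps to the initial keyword as with regular management commands:
from django.core.management import call_command

# Create the initial migration for the app (only needed once)...
call_command('schemamigration', 'salesflow', initial=True)
# ...then apply it, which creates the missing salesflow_contact table.
call_command('migrate', 'salesflow')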

BadRequestError while uploading data using bulk loader

Hello, I have created a sample Greeting application in Google App Engine.
Now I am trying to upload data using the bulk loader.
But it gives a BadRequestError. This is the command and its output:
D:\Study\M.Tech\Summer\Research\My Work\Query Transformation\Experiment\Tools\Bulkloader\bulkloader test>appcfg.py create_bulkloader_config --url=http://bulkex.appspot.com/remote_api --application=bulkex --filename=config.yml
Creating bulkloader configuration.
[INFO ] Logging to bulkloader-log-20111008.175810
[INFO ] Throttling transfers:
[INFO ] Bandwidth: 250000 bytes/second
[INFO ] HTTP connections: 8/second
[INFO ] Entities inserted/fetched/modified: 20/second
[INFO ] Batch Size: 10
[INFO ] Opening database: bulkloader-progress-20111008.175810.sql3
[INFO ] Opening database: bulkloader-results-20111008.175810.sql3
[INFO ] Connecting to bulkex.appspot.com/remote_api
Please enter login credentials for bulkex.appspot.com
Email: shyam.rk22#gmail.com
Password for shyam.rk22#gmail.com:
[INFO ] Downloading kinds: ['__Stat_PropertyType_PropertyName_Kind__']
[ERROR ] [WorkerThread-3] WorkerThread:
Traceback (most recent call last):
File "C:\Program Files\Google\google_appengine\google\appengine\tools\adaptive
_thread_pool.py", line 176, in WorkOnItems
status, instruction = item.PerformWork(self.__thread_pool)
File "C:\Program Files\Google\google_appengine\google\appengine\tools \bulkloader.py",line 764, in PerformWork transfer_time = self._TransferItem(thread_pool)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\bulkload
er.py", line 1170, in _TransferItem
self, retry_parallel=self.first)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\bulkload
er.py", line 1471, in GetEntities
results = self._QueryForPbs(query)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\bulkload
er.py", line 1442, in _QueryForPbs
raise datastore._ToDatastoreError(e)
BadRequestError: app s~bulkex cannot access app bulkex's data
[INFO ] [WorkerThread-0] Backing off due to errors: 1.0 seconds
[INFO ] An error occurred. Shutting down...
[ERROR ] Error in WorkerThread-3: app s~bulkex cannot access app bulkex's data
[INFO ] Have 0 entities, 0 previously transferred
[INFO ] 0 entities (6466 bytes) transferred in 25.6 seconds
Note the warning under --application in http://code.google.com/appengine/docs/python/tools/uploadingdata.html and use --url instead.
I was having the same issue. I removed the --application=APPID parameter from the statement and the code executed and built out the config.yml file with all my Kinds from the Datastore!
