SOLVED
Currently I am working on my first Django project. The DB models the structure of an Abaqus input file. Here is the code:
from django.db import models as m
import django.contrib.postgres as pg

class node(m.Model):
    inputfile = m.CharField(max_length=255)
    source_id = m.IntegerField()
    source_sim = m.CharField(max_length=255)
    coordinates = pg.fields.ArrayField(m.FloatField(), size=3)
When I call manage.py makemigrations (or just python), it gives me this error message:
AttributeError: module 'django.contrib.postgres' has no attribute 'fields'
When I import ArrayField in a test script, it works:
from django.contrib.postgres.fields import ArrayField
from django.db import models as m
a = ArrayField(m.FloatField(), size=3)
print(a)
>>><django.contrib.postgres.fields.array.ArrayField>
I was able to migrate my classes into a TestDB without the ArrayField.
My Python version is 3.7.1, my Django version is 2.1.3.
What's my mistake?
Edit: style & formatting, thanks to the suggestions.
Edit: solved, but I can't find how to flag that.
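The likely fix, sketched below: importing a package (import django.contrib.postgres) does not automatically import its fields submodule, so pg.fields is unresolved when the model class is defined. Importing ArrayField directly, as the working test script above already does, avoids the attribute lookup:

from django.db import models as m
from django.contrib.postgres.fields import ArrayField

class node(m.Model):
    inputfile = m.CharField(max_length=255)
    source_id = m.IntegerField()
    source_sim = m.CharField(max_length=255)
    # ArrayField imported directly instead of via the pg package alias
    coordinates = ArrayField(m.FloatField(), size=3)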
I am trying to deploy a model using my own custom inference container on SageMaker. I am following the documentation here: https://docs.aws.amazon.com/sagemaker/latest/dg/adapt-inference-container.html
I have an entrypoint file:
from sagemaker_inference import model_server
#HANDLER_SERVICE = "/home/model-server/model_handler.py:handle"
HANDLER_SERVICE = "model_handler.py"
model_server.start_model_server(handler_service=HANDLER_SERVICE)
I have a model_handler.py file:
from sagemaker_inference.default_handler_service import DefaultHandlerService
from sagemaker_inference.transformer import Transformer
from CustomHandler import CustomHandler


class ModelHandler(DefaultHandlerService):
    def __init__(self):
        transformer = Transformer(default_inference_handler=CustomHandler())
        super(ModelHandler, self).__init__(transformer=transformer)
And I have my CustomHandler.py file:
import os
import json

import pandas as pd
from joblib import dump, load
from sagemaker_inference import default_inference_handler, decoder, encoder, errors, utils, content_types


class CustomHandler(default_inference_handler.DefaultInferenceHandler):
    def model_fn(self, model_dir: str) -> str:
        clf = load(os.path.join(model_dir, "model.joblib"))
        return clf

    def input_fn(self, request_body: str, content_type: str) -> pd.DataFrame:
        if content_type == "application/json":
            all_item1, all_item2 = [], []
            items = json.loads(request_body)
            for item in items:
                processed_item1 = process_item1(item["item1"])
                processed_item2 = process_item2(item["item2"])
                all_item1 += [processed_item1]
                all_item2 += [processed_item2]
            return pd.DataFrame({"item1": all_item1, "comments": all_item2})

    def predict_fn(self, input_data, model):
        return model.predict(input_data)
Once I deploy the model to an endpoint with these files in the image, I get the following error: ml.mms.wlm.WorkerLifeCycle - ModuleNotFoundError: No module named 'model_handler'.
I am really stuck on what to do here. I wish there were an end-to-end example of how to do this, but I don't think there is. Thanks!
This is because of a path mismatch. The entrypoint is trying to look for "model_handler.py" in the WORKDIR directory of the container.
To avoid this, always specify absolute paths when working with containers.
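For instance, the entrypoint could build the handler path from its own location rather than relying on the working directory (a sketch; the file layout is assumed):

import os

# Resolve model_handler.py relative to this entrypoint file, independent of WORKDIR.
HANDLER_SERVICE = os.path.join(os.path.dirname(os.path.abspath(__file__)), "model_handler.py")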
Moreover, your code looks confusing. Please use this sample code as a reference:
import os
import subprocess
from subprocess import CalledProcessError

import model_handler
from retrying import retry
from sagemaker_inference import model_server


def _retry_if_error(exception):
    return isinstance(exception, (CalledProcessError, OSError))


@retry(stop_max_delay=1000 * 50, retry_on_exception=_retry_if_error)
def _start_mms():
    # by default the number of workers per model is 1, but we can configure it
    # through the environment variable below if desired.
    # os.environ['SAGEMAKER_MODEL_SERVER_WORKERS'] = '2'
    print("Starting MMS -> running ", model_handler.__file__)
    model_server.start_model_server(handler_service=model_handler.__file__ + ":handle")


def main():
    _start_mms()
    # prevent docker exit
    subprocess.call(["tail", "-f", "/dev/null"])


main()
Further, notice this line - model_server.start_model_server(handler_service=model_handler.__file__ + ":handle")
Here we are starting the server and telling it to call the handle() function in model_handler.py to invoke your custom logic for all incoming requests.
Also remember that SageMaker BYOC requires model_handler.py to implement another function, ping().
So your "model_handler.py" should look like this:
import logging

from CustomHandler import CustomHandler

logger = logging.getLogger(__name__)
custom_handler = CustomHandler()


# define your own health check for the model over here
def ping():
    return "healthy"


def handle(request, context):  # context is a necessary input, otherwise SageMaker will throw an exception
    if request is None:
        return "SOME DEFAULT OUTPUT"
    try:
        response = custom_handler.predict_fn(request)
        return [response]  # response must be a list, otherwise SageMaker will throw an exception
    except Exception as e:
        logger.error('Prediction failed for request: {}. \n'.format(request)
                     + 'Error trace :: {} \n'.format(str(e)))
I have trained a model on AWS SageMaker using the built-in algorithm Semantic Segmentation. The trained model, named model.tar.gz, is stored on S3. I want to download this file from S3 and then use it to make inferences on my local PC, without using AWS SageMaker.
Here are the three files:
hyperparams.json: includes the parameters for network architecture, data inputs, and training. Refer to Semantic Segmentation Hyperparameters.
model_algo-1
model_best.params
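For reference, a minimal sketch for unpacking the archive once it has been downloaded from S3 (the local file names are assumptions):

import tarfile

# model.tar.gz is assumed to be the artifact downloaded from S3.
with tarfile.open("model.tar.gz", "r:gz") as tar:
    tar.extractall(path="./model")  # yields hyperparams.json, model_algo-1, model_best.params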
My code:
import mxnet as mx
from mxnet import image
from gluoncv.data.transforms.presets.segmentation import test_transform
import gluoncv

ctx = mx.cpu()  # local context

img = image.imread('./bdd100k/validation/14df900d-c5c145cb.jpg')
img = test_transform(img, ctx)
img = img.astype('float32')
model = gluoncv.model_zoo.PSPNet(2)
# load the trained model
model.load_parameters('./model/model_best.params')
Error:
AssertionError: Parameter 'head.psp.conv1.0.weight' is missing in file './model/model_best.params', which contains parameters: 'layer3.2.bn3.beta', 'layer3.0.conv3.weight', 'conv1.1.running_var', ..., 'layer2.2.bn3.running_mean', 'layer3.4.bn2.running_mean', 'layer4.2.bn3.beta', 'layer3.4.bn3.beta'. Set allow_missing=True to ignore missing parameters.
The following should work after extracting model_algo-1 from the tar.gz file. This will run on local ctx.
import mxnet as mx
import numpy as np
import matplotlib.pyplot as plt

import gluoncv
from gluoncv import model_zoo
from gluoncv.data.transforms.presets.segmentation import test_transform
from mxnet import image

ctx = mx.cpu()  # local context

# The architecture has to match the one that was trained (DeepLabV3 here, not
# PSPNet), otherwise load_parameters fails with missing parameters.
model = model_zoo.DeepLabV3(nclass=2, backbone='resnet50',
                            pretrained_base=False, height=800, width=1280, crop_size=240)
model.load_parameters("model_algo-1")

img = image.imread('./bdd100k/validation/14df900d-c5c145cb.jpg')  # same image as in the question
img = test_transform(img, ctx)
img = img.astype('float32')

output = model.predict(img)
print(output.shape)

max_predict = mx.nd.squeeze(mx.nd.argmax(output, 1)).asnumpy()
print(max_predict.shape)

prob_mask = mx.nd.squeeze(output).asnumpy()

def NormalizeData(data):
    return (data - np.min(data)) / (np.max(data) - np.min(data))

target_cls_id = 1
prob_mat = prob_mask[target_cls_id, :, :]
norm_prob = NormalizeData(prob_mat)

plt.hist(norm_prob.flatten(), bins=50)
plt.show()
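To inspect the hard class mask itself, something like this should work (a small addition, reusing max_predict from above):

plt.imshow(max_predict)  # per-pixel argmax class IDs
plt.title('Predicted class mask')
plt.show()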
I have a really simple web app. All the important stuff happens in index.py:
from google.appengine.api import users
import webapp2
import os
import jinja2

JINJA_ENVIRONMENT = jinja2.Environment(
    loader=jinja2.FileSystemLoader(os.path.dirname(__file__)),
    extensions=['jinja2.ext.autoescape'],
    autoescape=True)


def get_user():
    user = {}
    user['email'] = str(users.get_current_user())
    user['name'], user['domain'] = user['email'].split('@')
    user['logout_link'] = users.create_logout_url('/')
    return user


class BaseHandler(webapp2.RequestHandler):
    def dispatch(self):
        user = get_user()
        template_values = {'user': user}
        if user['domain'] != 'foo.com':
            template_values['page_title'] = 'Access Denied'
            template = '403'
        else:
            template_values['page_title'] = 'Home'
            template = 'index'
        template_engine = JINJA_ENVIRONMENT.get_template('%s.html' % template)
        self.response.write(template_engine.render(template_values))


app = webapp2.WSGIApplication([
    ('/', BaseHandler),
], debug=True)
I'm trying to be a good person and write some local unit tests but - after looking at the documentation - I am totally out of my depth. All I want is a basic framework where I can do something like:
python test_security.py
and simulate two users hitting the domain: one @foo.com, who should get the index template, and one @bar.com, who should get the 403 template.
Here's where I've got so far:
import sys
# I don't want to talk about it, let's just ignore this block
sys.path.append('C:\Program Files (x86)\Google\google_appengine\lib\webapp2-2.5.2')
sys.path.append('C:\Program Files (x86)\Google\google_appengine\lib\webob-1.2.3')
sys.path.append('C:\Program Files (x86)\Google\google_appengine\lib\jinja2-2.6')
sys.path.append('C:\Program Files (x86)\Google\google_appengine\lib\yaml-3.10')
sys.path.append('C:\Program Files (x86)\Google\google_appengine')
sys.path.append('C:\pytest')

# A few proper imports
import unittest
import webapp2
from google.appengine.ext import testbed

# Import the module I'd like to test
import index


class TestHandlers(unittest.TestCase):
    def test_hello(self):
        self.testbed = testbed.Testbed()
        self.testbed.init_user_stub()
        self.testbed.setup_env(USER_EMAIL='test@foo.com', USER_ID='1', USER_IS_ADMIN='0')

        request = webapp2.Request.blank('/')
        response = request.get_response(main.app)

        print "running test"
        self.assertEqual(response.status_int, 200)
        self.assertEqual(response.body, 'Hello, world!')
Predictably, this doesn't work at all. What am I missing? Am I just wildly overestimating how simple this should be?
If you're planning on invoking this with "python test_security.py", the magic words you are looking for are:
if __name__ == '__main__':
    unittest.main()
This will make your unit test run - at the moment all you're doing is defining it.
Note also that you'll need to change your request.get_response from "main.app" to "index.app".
I suspect (primarily based on the function names) that you should call self.testbed.init_user_stub() before calling self.testbed.setup_env(), not after.
Also you seem to be missing a testbed.activate() call after creating the Testbed instance.
You might want to check out this answer: https://stackoverflow.com/a/21139805/4495081
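Putting those suggestions together, a sketch of the two-domain test could look like this (it assumes index.html and 403.html render the page_title value, so the body can be asserted on):

import unittest
import webapp2
from google.appengine.ext import testbed

import index


class TestHandlers(unittest.TestCase):
    def setUp(self):
        self.testbed = testbed.Testbed()
        self.testbed.activate()
        self.testbed.init_user_stub()

    def tearDown(self):
        self.testbed.deactivate()

    def get_home_as(self, email):
        # overwrite=True lets each test set its own user.
        self.testbed.setup_env(USER_EMAIL=email, USER_ID='1',
                               USER_IS_ADMIN='0', overwrite=True)
        request = webapp2.Request.blank('/')
        return request.get_response(index.app)

    def test_foo_user_sees_index(self):
        response = self.get_home_as('test@foo.com')
        self.assertEqual(response.status_int, 200)
        self.assertIn('Home', response.body)

    def test_bar_user_sees_403(self):
        response = self.get_home_as('test@bar.com')
        self.assertIn('Access Denied', response.body)


if __name__ == '__main__':
    unittest.main()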
I have a Django project in Eclipse PyDev.
I have a file views.py which has the line:
from models import ingredient2
In models.py I have:
from django.db import models

class ingredient2(models.Model):
    ingredient = models.CharField(max_length=200)
When I try to run the app I get the following error:
File "C:\Python27\lib\site-packages\django\db\models\base.py", line 54, in __new__
kwargs = {"app_label": model_module.__name__.split('.')[-2]}
IndexError: list index out of range
I did sync the database and start the server.
I went into base.py and added 2 print statements (yes, I probably should not edit Django's files):
if getattr(meta, 'app_label', None) is None:
    # Figure out the app_label by looking one level up.
    # For 'django.contrib.sites.models', this would be 'sites'.
    model_module = sys.modules[new_class.__module__]
    print model_module  # ADDED
    print model_module.__name__  # ADDED
    kwargs = {"app_label": model_module.__name__.split('.')[-2]}
They print out:
<module 'models' from 'C:\Users\Tine\workspace\slangen\slangen2\bolig\models.pyc'>
models
manage.py is contained within the bolig folder. I think the correct app label would be "bolig". The app worked several months ago and now, when I come back to it, something is not right. I have been creating other projects in PyDev.
Add a Meta class with an app_label inside your model class definition:

class Foo(models.Model):
    id = models.BigIntegerField(primary_key=True)

    class Meta:
        app_label = 'foo'
I had something similar. Instead of

from models import ingredient2

try:

from your_app_name.models import ingredient2
Well, not really an answer, but... I ended up creating a new django project and then copying in my code. That fixed the problem.
I was also getting the kwargs = {"app_label": model_module.__name__.split('.')[-2]} error when using PyDev. In my case, the project wasn't refreshed before I tried to run it. As soon as I refreshed it, all was well again.
I ran into this problem using Eclipse, Django and PyDev. I needed to have the application (instead of some .py file for example) selected in the PyDev Package Explorer (left panel) before clicking Run for everything to work properly.
In my case, models.py contains the models. When I import models into another .py file, say views.py, no error is raised when I run views.py, but when I run models.py itself, it raises the same error. So I just don't run models.py directly.
OK, I am having tons of problems getting my working dev setup onto a working production server. I have a task that goes through and requests URLs, collecting and updating data. It takes 30 minutes to run.
I uploaded it to the production server, and when going to the URL with its corresponding .py script (appname.appspot.com/tasks/rrs), after 30 seconds I get a google.appengine.runtime.DeadlineExceededError. Is there any way to get around this? Is this a 30-second deadline per page? The script works fine on the development server: I go to the URL and the associated .py script runs until completion.
import time
import random
import string
import cPickle
from StringIO import StringIO

try:
    import json
except ImportError:
    import simplejson as json

import urllib
import pprint
import datetime
import sys

sys.path.append("C:\Program Files (x86)\Google\google_appengine")
sys.path.append("C:\Program Files (x86)\Google\google_appengine\lib\yaml\lib")
sys.path.append("C:\Program Files (x86)\Google\google_appengine\lib\webob")

from google.appengine.api import users
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app
from google.appengine.ext import db


class SR(db.Model):
    name = db.StringProperty()
    title = db.StringProperty()
    url = db.StringProperty()


## request url and return JSON_data
def overview(page):
    u = urllib.urlopen(page)
    bytes = StringIO(u.read())
    ##print bytes
    u.close()
    try:
        JSON_data = json.load(bytes)
        return JSON_data
    except ValueError, e:
        print e, " Couldn't get .json for %s" % page
        return None


## specific code to parse particular JSON data and append new SR objects to the given url list
def parse_json(JSON_data, lists):
    sr = SR()
    sr.name = ##data gathered
    sr.title = ##data gathered
    sr.url = ##data gathered
    lists.append(sr)
    return lists


## I want to be able to request, let's say, 500 pages without timing out
page = 'someurlpage.com'  ## starting url
url_list = []
for z in range(0, 500):
    page = 'someurlpage.com/%s' % z
    JSON_data = overview(page)  ## get json data for a given url page
    url_list = parse_json(JSON_data, url_list)  ## parse the json data and append class objects to a given list
db.put(url_list)  ## finally add objects to the gae database
Yes, App Engine imposes a 30-second deadline. One way around it might be a try/except on DeadlineExceededError, putting the rest of the work in a task queue.
But you can't make your requests run for a longer period.
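A rough sketch of that idea (the handler URL and parameter names are hypothetical):

from google.appengine.api import taskqueue
from google.appengine.runtime import DeadlineExceededError


def crawl(start=0):
    z = start
    try:
        for z in range(start, 500):
            JSON_data = overview('someurlpage.com/%s' % z)
            db.put(parse_json(JSON_data, []))
    except DeadlineExceededError:
        # Hand the remaining pages to a task; the /tasks/rrs handler would
        # call crawl(start=z) again. Tasks get a longer deadline than
        # interactive requests.
        taskqueue.add(url='/tasks/rrs', params={'start': z})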
You can also try Bulkupdate
Example:
class Todo(db.Model):
    page = db.StringProperty()


class BulkPageParser(bulkupdate.BulkUpdater):
    def get_query(self):
        return Todo.all()

    def handle_entity(self, entity):
        JSON_data = overview(entity.page)
        db.put(parse_json(JSON_data, []))
        entity.delete()


# Put this in your view code:
for i in range(500):
    Todo(page='someurlpage.com/%s' % i).put()
job = BulkPageParser()
job.start()
OK, so if I am dynamically adding links as I parse the pages, I believe I would add to the Todo queue like so:
def handle_entity(self, entity):
    JSON_data = overview(entity.page)
    # as earlier this returns the list of SR objects, and now also a list of new links/pages to visit
    data_gathered, new_links = parse_json(JSON_data, [])
    db.put(data_gathered)
    for link in new_links:
        Todo(page=link).put()
    entity.delete()