TransactionManagementError in test of Django model

In Django 1.6, I am trying to test a unique field.
# model Tag
from django.db import models

class Tag(models.Model):
    name = models.CharField(max_length=30, unique=True, null=True)

    def __unicode__(self):
        return self.name
# test uniqueness of the name field
from django.db import IntegrityError
from django.test import TestCase

class TagTest(TestCase):
    def test_tag_unique(self):
        t1 = Tag(name='music')
        t1.save()
        with self.assertRaises(IntegrityError):
            t2 = Tag(name='music')
            t2.save()
        self.assertEqual(['music'], [t.name for t in Tag.objects.all()])
With the last line I get this message:

TransactionManagementError: An error occurred in the current transaction. You can't execute queries until the end of the 'atomic' block.

Why?
EDIT
I get this with SQLite as the DB (development environment).

If you're using PostgreSQL, then this is why.
Edit:
See this commit. Since the change is in the base backend, all backends now appear to share common behavior: regardless of the backend used, if the transaction needs a rollback, an error is raised.
Tip:
Use Model.objects.create(attr="value") instead of instantiating the model and calling .save().
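A minimal sketch of the usual fix for the test itself: wrap the statement that is expected to fail in its own atomic block, so only that inner block is rolled back and the outer test transaction stays usable for the assertions that follow:

from django.db import IntegrityError, transaction
from django.test import TestCase

class TagTest(TestCase):
    def test_tag_unique(self):
        t1 = Tag(name='music')
        t1.save()
        with self.assertRaises(IntegrityError):
            # Only this inner atomic block is rolled back on the constraint
            # violation; the outer test transaction survives.
            with transaction.atomic():
                Tag(name='music').save()
        self.assertEqual(['music'], [t.name for t in Tag.objects.all()])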

Related

Why my new table cannot be added to my database

I decided to add a new model called Project to my project. When I run python manage.py migrate, it shows me the error below:
from django.contrib.auth.models import User
from django.contrib.postgres.fields import ArrayField
from django.db import models

class Project(models.Model):
    statut_juridique = [
        ('per', 'personne physique'),
        ('sarl', 'SARL'),
        ('sual', 'SUARL'),
        ('anony', 'SA'),
    ]
    type_du_projet = [
        ('ind', 'industrie'),
        ('agr', 'agronome'),
        ('ser', 'service'),
        ('art', 'artisanat'),
        ('com', 'commerce'),
    ]
    name = models.CharField(max_length=50)
    produit = ArrayField(
        ArrayField(
            models.CharField(max_length=20, blank=True),
            size=8,
        ),
        size=8,
    )
    stat_jur = models.CharField(max_length=50, choices=statut_juridique)
    type_projet = models.CharField(max_length=50, choices=type_du_projet)
    Nomination = models.CharField(max_length=50)
    adresse = models.CharField(max_length=200)
    user = models.ForeignKey(User, related_name='projet', on_delete=models.CASCADE)

    def __str__(self):
        return self.name
Operations to perform:
  Apply all migrations: admin, auth, businesplan, contenttypes, sessions
Running migrations:
  Applying contenttypes.0001_initial...Traceback (most recent call last):
  File "/home/abdallah/projectdjango/oasis/venv/lib/python3.8/site-packages/django/db/backends/utils.py", line 87, in _execute
    return self.cursor.execute(sql)
psycopg2.errors.DuplicateTable: relation "django_content_type" already exists
I also can't see the new table in my database. Can you help me, please?
You are probably using a database that already has some tables from earlier migrations.
In that case you can try using a new database, reset your existing database to remove the duplicate tables, or sometimes troubleshoot with --fake-initial as a migrate option:
$ python manage.py migrate --fake-initial
From the Django docs:
Allows Django to skip an app’s initial migration if all database tables with the names of all models created by all CreateModel operations in that migration already exist. This option is intended for use when first running migrations against a database that preexisted the use of migrations. This option does not, however, check for matching database schema beyond matching table names and so is only safe to use if you are confident that your existing schema matches what is recorded in your initial migration.
Source: django-admin#cmdoption-migrate-fake-initial [django-doc]
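Before faking anything, it can help to check which migrations Django already considers applied; showmigrations is a read-only way to do that:

# Lists applied ([X]) vs. unapplied ([ ]) migrations per app.
$ python manage.py showmigrations

If only the initial migrations of pre-existing apps are out of sync, --fake-initial is the likely fix; if the schema has drifted further, a reset is safer.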

DRF created_by updated_by fail during migrations

I want to add created_by and updated_by fields to all my DB objects. I created a common model for this that most other models will inherit from. I have sorted out most obstacles so far, but the makemigrations script ends with an error.
My model:
from django.conf import settings
from django.db import models

class CommonModel(models.Model):
    """Common fields that are shared among all models."""
    created_by = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.PROTECT,
                                   editable=False, related_name="+")
    updated_by = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.PROTECT,
                                   editable=False, related_name="+")
    created_at = models.DateTimeField(auto_now_add=True, editable=False)
    updated_at = models.DateTimeField(auto_now=True, editable=False)

    class Meta:
        abstract = True

class Tag(CommonModel):
    """Tag to be used for device type"""
    name = models.CharField(max_length=255)

    def __str__(self):
        return self.name
The error I get is:

You are trying to add a non-nullable field 'created_by' to devicetype without a default; we can't do that (the database needs something to populate existing rows).
Please select a fix:
 1) Provide a one-off default now (will be set on all existing rows with a null value for this column)
 2) Quit, and let me add a default in models.py
The only "solution" I found searching the Internet was to define default='', run the makemigrations again and then manually edit the files afterwards to remove the default=''.
I cannot believe that this is the proper way to do this and that there is no solution for this yet.
You need to set a default value for created_at and updated_at, since they are not null=True.
The message you get during makemigrations is not an error. If you want to provide a one-off default, select fix 1; it shows the prompt below:
Please enter the default value now, as valid Python
The datetime and django.utils.timezone modules are available, so you can do e.g. timezone.now
Type 'exit' to exit this prompt
>>>
Here you can set the default value using the datetime or django.utils.timezone module (e.g. timezone.now).
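For the created_by/updated_by foreign keys themselves, a common three-step pattern (a sketch, not from the original answer; the app label, model and migration names below are hypothetical) is: add the fields with null=True, backfill existing rows in a data migration, then alter the fields back to non-nullable:

from django.db import migrations

def backfill_audit_users(apps, schema_editor):
    # Assumes the default auth.User; swap in your custom user model if you have one.
    User = apps.get_model('auth', 'User')
    DeviceType = apps.get_model('devices', 'DeviceType')  # hypothetical app/model names
    fallback = User.objects.order_by('pk').first()  # any sensible existing user
    if fallback is not None:
        DeviceType.objects.filter(created_by__isnull=True).update(
            created_by=fallback, updated_by=fallback)

class Migration(migrations.Migration):
    dependencies = [
        ('devices', '0002_add_nullable_audit_fields'),  # hypothetical predecessor
    ]
    operations = [
        migrations.RunPython(backfill_audit_users, migrations.RunPython.noop),
    ]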

Integrate django model with legacy db

I understand that Django models are pretty fussy when it comes to the absence of a primary key.
I have a legacy database (SQL Server) that I am connecting to (it is not the default one), and in it there is a view that I am supposed to access. However, the view does not have a primary key.
How do I enable Django to query that view without being able to modify the schema?
Here is what I did:
I created a ReadOnlyModel and had my other Django models subclass it. This was done because I wanted to bypass Django's need for a PK (evidently it did work, but threw another error; see below):
class ReadOnlyModel(models.Model):
    def save(self, *args, **kwargs):
        pass

    def delete(self, *args, **kwargs):
        pass
I created an ActiveObjectManager so Django knows which database to point to. The reason I used this instead of a router is that a router would work best if I were creating two separate applications in the same repo; however, I am using one application:
class Db2ActiveObjectManager(models.Manager):
    def get_queryset(self):
        qs = super(Db2ActiveObjectManager, self).get_queryset()
        if hasattr(self.model, 'use_db'):
            qs = qs.using(self.model.use_db)
        return qs
Below is a sample model from db2:
class modelA(ReadOnlyModel):
    use_db = 'db2'
    objects = Db2ActiveObjectManager()

    col1 = models.CharField(primary_key=True, max_length=256)
    col2 = models.CharField(primary_key=True, max_length=1024)
    col3 = models.CharField(primary_key=True, max_length=256)
    col4 = models.CharField(primary_key=True, max_length=100)
    col5 = models.CharField(max_length=256)
    col6 = models.CharField(primary_key=True, max_length=50)

    class Meta:
        managed = False
        db_table = 'db2.someTable'

    @classmethod
    def methodA(cls):
        try:
            filter_condition = {}
            return cls.objects.filter(**filter_condition)
        except Exception as e:
            raise ValueError("Invalid input")

    @classmethod
    def methodB(cls):
        try:
            return cls.objects.raw('SQL Query Here')
            # cursor = connection.cursor()
            # cursor.execute('SQL Query Here')
            # field_names = [field[0].lower() for field in cursor.description]
            # nt_result = namedtuple('Result', field_names)
            # return [nt_result(*row) for row in cursor.fetchall()]
        except Exception as e:
            raise
So far my current setup is throwing the following error:
django.db.utils.ProgrammingError: ('42S02', "[42S02] [Microsoft][ODBC Driver 13 for SQL Server][SQL Server]Invalid object name 'api_readonlymodel'. (208) (SQLExecDirectW)")
I have seen many developers on Stack Overflow reiterating that we need to add a PK to the table/view, but I cannot make any changes to that view. I am pretty sure there is a workaround, yet nothing I have tried so far has worked:
either I get the error above, or an error saying that no valid column called id is found, or an error stating that I need a PK.
Thanks for the help.
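One likely contributor to the 'api_readonlymodel' error, as a hedged sketch: ReadOnlyModel above is a concrete model, so Django maps it to its own table (app label plus class name, here apparently api_readonlymodel) and joins subclasses against it via multi-table inheritance. Declaring the base class abstract keeps Django from ever creating or querying that table. Note also that Django does not support composite primary keys, so only one field per model may set primary_key=True:

from django.db import models

class ReadOnlyModel(models.Model):
    """Base class for models backed by legacy views; writes are no-ops."""

    class Meta:
        abstract = True  # no api_readonlymodel table is created or queried

    def save(self, *args, **kwargs):
        pass  # read-only: intentionally never writes

    def delete(self, *args, **kwargs):
        pass  # read-only: intentionally never deletes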

Getting a 'The data types nvarchar(max) and ntext are incompatible in the equal to operator.'

I am trying to populate a table with data using Django's get_or_create method. It enters records into the database, but at a certain record it throws the above error. My call is:
r, created = Response.objects.get_or_create(
    auth_user=auth_user,
    name=surv_name,
    organization=org_id,
    category=category,
    question=question,
    present_order=present_order,
    reference=reference,
    quest_id=quest_id,
    survey_id=survey_id
)
My Response model is:
class Response(models.Model):
    auth_user = models.ForeignKey('AuthUser')
    survey = models.ForeignKey('Survey')
    name = models.CharField(max_length=50)
    organization = models.ForeignKey('Organization')
    tf_question_key = models.CharField(max_length=50)
    category = models.CharField(max_length=25, blank=True, null=True)
    question = models.CharField(max_length=2048)
    quest_id = models.CharField(max_length=25)
    present_order = models.IntegerField()
    reference = models.CharField(max_length=20)
    answer = models.CharField(max_length=2048)
    remediation = models.CharField(max_length=2048, blank=True, null=True)
    dt_started = models.DateTimeField(db_column='DT_Started',
                                      auto_now_add=True)  # Field name made lowercase.
    dt_completed = models.DateTimeField(db_column='DT_COMPLETED',
                                        auto_now_add=True)  # Field name made lowercase.

    class Meta:
        managed = False
        db_table = 'response'
and the local variables at the point of the error are:
organization <Organization: Individual Offices>
r <Response: Response object>
user_id 2
question ('Does your written policy include the follow-up process for significant outstanding checks, including, but not limited to, checks to recording clerk, checks to tax collector, hazard insurance checks, underwriter checks or checks for mortgage payoffs and any other high risk items? ( 2.03 k )')
present_order 21
survey_id 1
reference '2.03 (k)'
quest_id 27
created True
category 'Pillar II'
surv_name 'Compliance Benchmark'
org_id 1
auth_user <AuthUser: AuthUser object>
I can add records to the table by using
r = Response(
    auth_user=auth_user,
    name=surv_name,
    organization=organization,
    category=category,
    question=question,
    present_order=present_order,
    reference=reference,
    quest_id=quest_id,
    survey_id=survey_id
)
r.save()
but I need to use get_or_create to avoid duplicating records. I am not sure why I can add records with .save() but not with get_or_create, or why get_or_create adds records up to a certain one and then fails. The only things that change between records are question, quest_id, present_order, and reference.
I am using Python 3.4, Django 1.8.4 and SQL Server 2014.
Any insight would be greatly appreciated.
I ran into the same issue and turned on logging on SQL Server to see what was happening. It turns out long text values are sent as ntext, which is then compared to the nvarchar column, causing the error.
The error occurs during the SELECT inside get_or_create. Instead of using get_or_create, query for your model with startswith: startswith performs a LIKE comparison, which works. I also added a length check on the field so the values must match exactly instead of merely sharing a prefix.
from django.core.exceptions import ObjectDoesNotExist
from django.db.models.functions import Length

attrs = {
    'auth_user': auth_user,
    'name': surv_name,
    'organization': org_id,
    'category': category,
    'present_order': present_order,
    'reference': reference,
    'quest_id': quest_id,
    'survey_id': survey_id,
}
try:
    r = Response.objects.annotate(
        text_len=Length('question')
    ).get(
        text_len__exact=len(question),
        question__startswith=question,
        **attrs
    )
except ObjectDoesNotExist:
    r = Response.objects.create(
        question=question,
        **attrs
    )

Row level access for google appengine datastore queries

I'm trying to implement row-level access for Google App Engine datastore tables. So far I have a working example for regular NDB put(), get() and delete() operations using the model hook methods (_post_get_hook and friends).
The Acl class is meant to be used by all the other tables; it is attached as a structured property.
class Acl(EndpointsModel):
    UNAUTHORIZED_ERROR = 'Invalid token.'
    FORBIDDEN_ERROR = 'Permission denied.'

    public = ndb.BooleanProperty()
    readers = ndb.UserProperty(repeated=True)
    writers = ndb.UserProperty(repeated=True)
    owners = ndb.UserProperty(repeated=True)

    @classmethod
    def require_user(cls):
        current_user = endpoints.get_current_user()
        if current_user is None:
            raise endpoints.UnauthorizedException(cls.UNAUTHORIZED_ERROR)
        return current_user

    @classmethod
    def require_reader(cls, record):
        if not record:
            raise endpoints.NotFoundException(record.NOT_FOUND_ERROR)
        current_user = cls.require_user()
        # Forbid only when the record is neither public nor lists the user as a reader.
        if record.acl.public is not True and current_user not in record.acl.readers:
            raise endpoints.ForbiddenException(cls.FORBIDDEN_ERROR)
I want to protect access to the Location class, so I added three hooks (_post_get_hook, _pre_put_hook and _pre_delete_hook) to it.
class Location(EndpointsModel):
    QUERY_FIELDS = ('state', 'limit', 'order', 'pageToken')
    NOT_FOUND_ERROR = 'Location not found.'

    description = ndb.TextProperty()
    address = ndb.StringProperty()
    acl = ndb.StructuredProperty(Acl)

    @classmethod
    def _post_get_hook(cls, key, future):
        location = future.get_result()
        Acl.require_reader(location)

    def _pre_put_hook(self):
        if self.key.id() is None:
            current_user = Acl.require_user()
            self.acl = Acl()
            self.acl.readers.append(current_user)
            self.acl.writers.append(current_user)
            self.acl.owners.append(current_user)
        else:
            location = self.key.get()
            Acl.require_writer(location)
This works for all the create, read, update and delete operations, but it does not work for queries.
@Location.query_method(user_required=True,
                       path='location', http_method='GET', name='location.query')
def location_query(self, query):
    """Queries locations."""
    current_user = Acl.require_user()
    query = query.filter(ndb.OR(Location.acl.readers == current_user,
                                Location.acl.public == True))
    return query
When I run a query against all locations, I get the following error message:

BadArgumentError: _MultiQuery with cursors requires __key__ order

Now I have some questions:
How do I fix the _MultiQuery issue?
Once fixed: does this Acl implementation make sense? Are there out-of-the-box alternatives? (I wanted to store the Acl on the record itself to be able to run a direct query, without having to fetch keys first.)
Datastore doesn't support OR filters natively. Instead, what NDB does behind the scenes is run two queries:

query.filter(Location.acl.readers == current_user)
query.filter(Location.acl.public == True)

It then merges the results of these two queries into a single result set. To merge results properly (in particular, to eliminate duplicates when you have repeated properties), the query needs to be ordered by key whenever it is continued from an arbitrary position (using cursors).
In order to run the query successfully, you need to append a key order to the query before running it:
def location_query(self, query):
    """Queries locations."""
    current_user = Acl.require_user()
    query = query.filter(ndb.OR(Location.acl.readers == current_user,
                                Location.acl.public == True)
                         ).order(Location.key)
    return query
Unfortunately, your ACL implementation will not work for queries. In particular, _post_get_hook is not called for query results. There is a bug filed on the issue tracker about this.
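Since the read hook never fires for query results, one workaround (a sketch under that assumption, not from the original answer) is to centralize the reader filter in a helper so every query path applies the ACL:

def readable_locations(user):
    # Rows the user may read: public rows, or rows listing the user as a reader.
    # The key order keeps the OR-backed _MultiQuery usable with cursors.
    return Location.query(ndb.OR(Location.acl.readers == user,
                                 Location.acl.public == True)
                          ).order(Location.key)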
