Django: How to make a unique, blank models.CharField? - django-models

Imagine that I have a model that describes the printers that an office has. They may or may not be ready to work (maybe a printer is in the storage area, or it has been bought but is not yet in the office ...). The model must have a field that represents the physical location of the printer ("Secretary's office", "Reception", ...). There cannot be two repeated locations, and if a printer is not working it should not have a location.
I want to have a list in which all printers appear, each with its location (if it has one). Something like this:
ID | Location
1 | "Secretary's office"
2 |
3 | "Reception"
4 |
With this I can know that two printers are working (1 and 3), and the others are offline (2 and 4).
A first approach for the model would be something like this:
class Printer(models.Model):
    brand = models.CharField( ...
    ...
    location = models.CharField(max_length=100, unique=True, blank=True)
But this doesn't work properly. You can only store one record with a blank location: the location is saved as an empty string in the database, and the unique constraint doesn't let you insert a second one (the database reports that there is already another empty string for that field). Adding the null=True parameter behaves the same way, because instead of inserting a NULL value in the corresponding column, the default value is still an empty string.
Searching the web I found http://www.maniacmartin.com/2010/12/21/unique-nullable-charfields-django/, which tries to solve the problem in different ways. The author says that probably the cleanest is the last one, in which he subclasses CharField and overrides some methods to store different values in the database. Here is the code:
from django.db import models

class NullableCharField(models.CharField):
    description = "CharField that obeys null=True"

    def to_python(self, value):
        if isinstance(value, models.CharField):
            return value
        return value or ""

    def get_db_prep_value(self, value):
        return value or None
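For completeness, here is a usage sketch (mine, not from the linked post) of how the custom field might be declared on the Printer model; note null=True so the column actually stores NULL for blank locations:
class Printer(models.Model):
    brand = models.CharField(max_length=100)
    # hypothetical usage of the NullableCharField defined above
    location = NullableCharField(max_length=100, unique=True, blank=True, null=True)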
This works fine. You can store multiple records with no location, because instead of inserting an empty string it stores a NULL. The problem with this is that it shows the blank locations as None instead of as an empty string.
ID | Location
1 | "Secretary's office"
2 | None
3 | "Reception"
4 | None
I suppose there is a method (or several) in which one must specify how the data is converted between the model and the database, in both directions (database to model and model to database).
Is this the best way to have a unique, blank CharField?
Thanks,

You can use a model method to output the values in a custom way.
Like this (in your model class):
def location_output(self):
    """Returns the location, replacing None values with an empty string."""
    if self.location:
        return self.location
    else:
        return ""
Then you can use it in views like this.
>>> Printer.objects.create(location="Location 1")
<Printer: Printer object>
>>> Printer.objects.create(location=None)
<Printer: Printer object>
>>> Printer.objects.get(id=1).location_output()
u'Location 1'
>>> Printer.objects.get(id=2).location_output()
''
And in your templates, like this.
{{ printer.location_output }}

Check out this thread:
Unique fields that allow nulls in Django

Related

SQL - trying to find a set of data which only has a certain set of values but not anything else in a few columns

Hopefully an easy problem for an experienced SQL person. I have an application which uses SQL Server, and I cannot perform this query from within the application, so I'm hoping to back-door it, but I need help.
I have a table with a large list of emails and all their metadata. I'm trying to find email that is only between parties of this one company and flag it.
What I did was search for companyName.com in To and From and mark a TagField as 1 (I did this through my application's front end).
Now what I need to do is search for any other possible values, ignoring companyName.com, in To and From where I've already flagged the rows as 1 in TagField. From will usually have just one value, but To can have multiple, all formatted differently but separated by a semicolon (I will probably have to apply this same search to the CC and BCC columns, too).
Any thoughts?
Replace the ';' with the empty string, then check whether the length changed; if there's only one email address, there shouldn't be a ';'. You could also use the same technique to replace the company name with the empty string: anything left would be the other companies.
select email_id, to_email
from yourtable
where TagField = 1 and len(to_email) <> len(replace(to_email,';',''))
This solution is based on the following thread
Number of times a particular character appears in a string
So I went an entirely different route: I exported my data to a CSV and used Python to get where I needed. Here's the code I used in case anybody needs it. What it returned for me was a list of DocIDs (unique identifiers in the CSV) wherever the To field contained an email address that wasn't from one specific domain. I also went into the original CSV and made sure all instances of this domain name were lowercase.
import csv
import tkinter as tk
from tkinter import filedialog

root = tk.Tk()
root.withdraw()
file_path = filedialog.askopenfilename()

sub = "domainname"

def findMultipleTo(reader):
    for row in reader:
        if row['To'].find(";") != -1:
            toArray = row['To'].split(';')
            newTo = [s for s in toArray if sub not in s]
            row['To'] = newTo
        else:
            row['To'] = 'empty'
        with open('location\\newCSV-BCCFieldSearch.csv', 'a') as f:
            if row['To'] != "empty" and row['To'] != []:
                print(row['DocID'], row['To'], file=f)
            else:
                pass

with open(file_path) as csvfile:
    reader = csv.DictReader(csvfile)
    findMultipleTo(reader)

VB.NET - PetaPoco\NPoco - Fetch data from table with dynamic and static columns - Performance issue

I have a specific situation to which I haven't found a solution yet.
I have several databases with the same structure, each with a table (let's say Users) which has known columns such as UserID, UserName, UserMail, etc...
In the same table, I have dynamic custom columns which I can only know at runtime, such as customField54, customField75, customField82, etc...
I have a screen where I must show a list of users, and there are thousands of records (it must show ALL the users - no question about it).
The Users table columns in database A look like this:
| UserID | UserName | UserMail | customField54 | customField55 |
and for the example, lets say I have another database B, and the table Users there looks like this:
| UserID | UserName | UserMail | customField109 | customField211 | customField235 | customField302 |
I have a single code base which connects to a different database each time. So it is a single code base -> multiple databases, where the difference between databases is the custom fields of the Users table.
If I work with a DataTable, I can query:
SELECT * FROM Users
And then, dynamically I can retrieve the custom fields values, like this:
Dim customFieldsIDs() As Integer = GetCustomFieldsIDs()
Dim dt As DataTable = GetUserListData() ' All users' data in a DataTable

For Each dr In dt.Rows
    Response.Write(dr.Item("UserID"))
    Response.Write(dr.Item("UserName"))
    Response.Write(dr.Item("UserMail"))
    For Each cfID In customFieldsIDs
        Response.Write(dr.Item("customField" & cfID))
    Next
Next
My intention is not to work with DataTables; I want to work with strongly typed objects. I cannot create a Users POCO with the custom fields inside, because for each database the Users table has different customFields columns, so I can't create an object with strongly typed members for them.
Then, I decided to create a class Users with the known columns inside, and also a dictionary holding the customFields.
In VB.NET, I created a class Users, which looks as follows:
Public Class User
    Public Property UserID As Integer
    Public Property UserName As String
    Public Property UserMail As String
    Public Property customFieldsDictionary As Dictionary(Of Integer, String)
End Class
The class has the static values: UserID, UserName, etc...
Also, it has a dictionary of the customFieldIDs and their values, so I can retrieve the values in a single action (in O(1) complexity)
I use MicroORM PetaPoco\NPoco to populate the values.
The ORM lets me fetch the Users data without having to iterate over it myself, by calling:
Dim userList As List(Of User) = db.Fetch(Of User)("SELECT * FROM Users")
But then the customFields dictionary is not populated.
In order to populate it, I have to iterate over the userList and retrieve the customFields data for each user.
This is a very expensive way to fetch the data and results in very bad performance.
I'd like to know if there is a way to fetch the data into the User class using PetaPoco\NPoco with a single command and manage to populate the known values and the custom fields dictionary for every user without having to iterate through the whole collection.
I hope this is understandable. It is really difficult for me to explain, and a very difficult issue to find a solution for.
You could try fetching everything into a dictionary and then you could map specific keys/values to your User object properties.
EDIT:
I'm not using VB.NET anymore, but I'll try to explain.
Create the indexer similar to this one:
http://www.java2s.com/Tutorial/VB/0120__Class-Module/DefineIndexerforyourownclass.htm
In the indexer you would do something like:
if (index == "FirsName") then
me.FirstName = value
end if
....
if (index.startWith("customField") then
var indexValue = int.Parse(index.Replace("customField",""))
me.customFieldsDictionary[indexValue] = value
end if
NPoco supports materializing the data into a dictionary:
var users = db.Fetch<Dictionary<string, object>>("select * from users");
You should be able to pass your class to NPoco and force it to use the Fetch overload for a dictionary.
I've used this approach before, but I can't find the source at the moment.

How to create autocomplete with GAE?

I use the jQuery UI autocomplete widget. I also have a GAE datastore model:
class Person(db.Model):
    # key_name contains the person id in the format 'lastname-firstname-middlename-counter';
    # the counter and leading dash are omitted if counter=0
    first_name = db.StringProperty()
    last_name = db.StringProperty()
    middle_name = db.StringProperty()
How can I search for the person in the autocomplete widget, when the user can type a surname, first name and/or middle name there?
So, I am getting the user's input string as self.request.get('term'). How should I search for it in my datastore (since I need to look at each field, and probably at the combined value of the 3 fields)? How can I optimize such a query?
I am also not clear on what the reply format should be. The jQuery docs say:
A data source can be:
an Array with local data
a String, specifying a URL
a Callback
The local data can be a simple Array of Strings, or it contains
Objects for each item in the array, with either a label or value
property or both.
There are a few neat tricks here. Consider this augmented model:
class Person(db.Model):
    first_name = db.StringProperty()
    last_name = db.StringProperty()
    middle_name = db.StringProperty()
    names_lower = db.StringListProperty()
You'll need to keep names_lower in sync with the real fields, e.g.:
p.names_lower = [p.first_name.lower(), p.last_name.lower(),
                 p.middle_name.lower()]
You can do this more elegantly with a DerivedProperty.
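If DerivedProperty isn't an option, one simple way (my sketch, not part of the original answer) is to rebuild the list whenever the entity is saved through its own put() method; note this does not run for a bare db.put(person):
class Person(db.Model):
    first_name = db.StringProperty()
    last_name = db.StringProperty()
    middle_name = db.StringProperty()
    names_lower = db.StringListProperty()

    def put(self, **kwargs):
        # Rebuild the lower-cased search list just before saving
        # (assumption: entities are always saved via person.put())
        self.names_lower = [n.lower() for n in
                            (self.first_name, self.last_name, self.middle_name)
                            if n]
        return super(Person, self).put(**kwargs)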
And now, your query:
term = self.request.get('term').lower()
query = Person.all()
query.filter('names_lower >=', term)
query.filter('names_lower <=', unicode(term) + u"\ufffd")
This gives you:
Matching on all 3 properties with one index
Case insensitive matches
Wildcard suffix matches
So a query for "smi" will return any person with any name starting with "smi" in any case.
Copying lower-cased names to a ListProperty enables case-insensitive matching, and allows us to search all 3 fields with one query. u"\ufffd" sorts above any character likely to appear in a name, so it serves as the upper limit for our prefix match. If for some reason you want an exact match, filter for 'names_lower =', term instead.
Edit:
How should I search for the same in my datastore (since I need to look
at each field and probably for combined value of 3 fields)? How to
optimize such query?
This is already accounted for in the original solution. By taking the 3 fields and copying them to a single ListProperty, we're essentially creating a single index with multiple entries per person. If we have a person named Bob J Smith, he'll have 3 hits in our index:
names_lower = bob
names_lower = j
names_lower = smith
This eliminates the need to run distinct queries on each field.
I am also not clear what should be the reply format.
Read the docs carefully. Formatting output for jQuery should be pretty straightforward. Your data source will be a string specifying a URL, and you'll want to format the response as JSON.
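To make the response format concrete, here is a minimal handler sketch (my assumption: the webapp2 framework, the Person model above, and URL routing wired up elsewhere) that returns a JSON array jQuery UI can consume directly:
import json
import webapp2

class AutocompleteHandler(webapp2.RequestHandler):
    def get(self):
        term = self.request.get('term').lower()
        query = Person.all()
        query.filter('names_lower >=', term)
        query.filter('names_lower <=', unicode(term) + u"\ufffd")
        # Return at most 10 suggestions as a plain JSON array of strings
        names = ["%s %s" % (p.first_name, p.last_name) for p in query.fetch(10)]
        self.response.headers['Content-Type'] = 'application/json'
        self.response.out.write(json.dumps(names))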
Basically agreeing with everything Drew wrote, I'll post a link to my blog with a rather elaborate example of auto-completing keywords when searching for information in the datastore.
It's all done in GAE with Python, using YUI3 instead of jQuery (plugging in jQuery or any other library instead would be trivial).
In short, the idea is that the datastore contains a set of documents indexed by keywords (using a Relation Index Entity), and when the user enters words to search for, the system autocompletes them with the keywords from those documents.

best method to keep and update unique data entries google app engine python

For the example model below, what is the best way to give each entry a unique key/name (which I already have) and to overwrite the entry if the same key/name shows up again? From what I've read you are supposed to use key_name, but I am not getting it to overwrite.
class SR(db.Model):
    name = db.StringProperty()
    title = db.StringProperty()
    url = db.StringProperty()

s = SR(key_name="t5-2rain")
s.name = 't5-2rain'
s.title = 'kaja'
s.url = 'okedoke'
db.put(s)
If I enter this again with the same key_name but a different title value, will it create another entry? How do I overwrite an existing value with the same key_name?
Basically, how do I populate a table with unique identifiers and overwrite values if the same unique identifier already exists?
I realize I can search for an existing name or key_name, fetch that object, change its properties and save it again, but I would imagine there has to be a better method than that for overwriting, especially if I am putting a list where some values may be overwrites and some not.
You've got the right idea already.
If 2 SR entities were constructed with the same key_name argument, then they will have the same Key path. Writing one will overwrite any old SR entity which had that key_name argument.
You should be able to observe this by querying the datastore for the entity with its unique key:
s = db.get(db.Key.from_path('SR', 't5-2rain'))
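For illustration only (reusing the SR model and key_name from the question), a short sketch showing that a second put with the same key_name replaces the earlier entity rather than creating a new one:
s2 = SR(key_name="t5-2rain")
s2.name = 't5-2rain'
s2.title = 'a new title'   # changed value
s2.url = 'okedoke'
db.put(s2)                 # overwrites the entity written earlier

updated = db.get(db.Key.from_path('SR', 't5-2rain'))
assert updated.title == 'a new title'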

Best database design (model) for user tables

I'm developing a web application using Google App Engine and Django, but I think my problem is more general.
Users have the possibility to create tables, but these tables are not represented as TABLES in the database. Here is an example:
First form:
Name of the the table: __________
First column name: __________
Second column name: _________
...
The number of columns is not fixed, but there is a maximum (100, for example). The type of every column is the same.
Second form (after choosing a particular table the user can fill the table):
column_name1: _____________
column_name2: _____________
....
I'm using this solution, but it's wrong:
class Table(db.Model):
    name = db.StringProperty(required=True)

class Column(db.Model):
    name = db.StringProperty(required=True)
    number = db.IntegerProperty()
    table = db.ReferenceProperty(Table, collection_name="columns")

class Value(db.Model):
    time = db.TimeProperty()
    column = db.ReferenceProperty(Column, collection_name="values")
When I want to list a table, I take its columns and from every column I take its values:
data = []
for column in table.columns:
    column_data = []
    for value in column.values:
        column_data.append(value.time)
    data.append(column_data)
data = zip(*data)
I think the problem is the order of the values, because it is not guaranteed that the order for one column is the same as for the others. I'm expecting a bug like this (though I haven't seen it yet):
Table as I want:    Table as I might get:
a z c               a e c
d e f               d h f
g h i               g z i
Better solutions? Maybe using ListProperty?
Here's a data model that might do the trick for you:
class Table(db.Model):
    name = db.StringProperty(required=True)
    owner = db.UserProperty()
    column_names = db.StringListProperty()

class Row(db.Model):
    values = db.ListProperty(yourtype)
    table = db.ReferenceProperty(Table, collection_name='rows')
My reasoning:
You don't really need a separate entity to store column names. Since all columns are of the same data type, you only need to store the name, and the fact that they are stored in a list gives you an implicit order number.
By storing the values in a list in the Row entity, you can use an index into the column_names property to find the matching value in the values property.
By storing all of the values for a row together in a single entity, there is no possibility of values appearing out of their correct order.
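To make the lookup in the second point concrete, a tiny hypothetical helper (the function name is mine, assuming the Table/Row model above):
def cell_value(table, row, column_name):
    # The position of the name in column_names matches the position of the value in values
    index = table.column_names.index(column_name)
    return row.values[index]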
Caveat emptor:
This model will not work well if columns can be added to a table after it has been populated with data. To make that possible, every time a column is added, every existing row belonging to that table would have to have a value appended to its values list. If it were possible to efficiently store dictionaries in the datastore this would not be a problem, but a list can really only be appended to.
Alternatively, you could use Expando...
Another possibility is to define the Row model as an Expando, which allows you to dynamically create properties on an entity. You could set column values only for the columns that have values in them, and you could also add columns to the table after it has data in it without breaking anything:
class Row(db.Expando):
    table = db.ReferenceProperty(Table, collection_name='rows')

    @staticmethod
    def __name_for_column_index(index):
        return "column_%d" % index

    def __getitem__(self, key):
        # Allows one to get at the columns of Row entities with
        # subscript syntax:
        #   first_row = Row.get()
        #   col1 = first_row[1]
        #   col12 = first_row[12]
        value = None
        try:
            value = getattr(self, Row.__name_for_column_index(key))
        except AttributeError:
            # The given column is not defined for this Row
            pass
        return value

    def __setitem__(self, key, value):
        # Allows one to set the columns of Row entities with
        # subscript syntax:
        #   first_row = Row.get()
        #   first_row[5] = "New value for column 5"
        setattr(self, Row.__name_for_column_index(key), value)
        # In order to allow efficient multiple-column changes,
        # the put() can go somewhere else.
        self.put()
Why don't you add an IntegerProperty for rowNumber to Value and increment it every time you add a new row of values? Then you can reconstruct the table by sorting by rowNumber.
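A rough sketch of that suggestion (the row_number property is hypothetical, added to the Value model from the question):
class Value(db.Model):
    time = db.TimeProperty()
    row_number = db.IntegerProperty()   # incremented for each new row of values
    column = db.ReferenceProperty(Column, collection_name="values")

# Assuming `column` is a Column entity fetched earlier,
# its values then come back in a stable row order:
ordered_values = column.values.order('row_number')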
You're going to make life very hard for yourself unless your users' 'tables' are actually stored as real tables in a relational database. Find some way of actually creating tables and use the power of an RDBMS, or you're reinventing a very complex and sophisticated wheel.
This is the conceptual idea I would use:
I would create two classes for the datastore:
table: this would serve as a dictionary, storing the structure of the pseudo-tables your app would create. It would have three fields: table_name, column_name and column_order, where column_order gives the position of the column within the table.
data: this would store the actual data in the pseudo-tables. It would have four fields: row_id, table_name, column_name and column_data. row_id would be the same for data pertaining to the same row and would be unique for data across the various pseudo-tables.
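A rough model sketch of these two kinds (class and property names are illustrative only, not from the original answer):
class TableDef(db.Model):
    table_name = db.StringProperty()
    column_name = db.StringProperty()
    column_order = db.IntegerProperty()

class TableData(db.Model):
    row_id = db.IntegerProperty()
    table_name = db.StringProperty()
    column_name = db.StringProperty()
    column_data = db.StringProperty()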
Put the data in a LongBlob.
The power of a database is being able to search and organize data so that you get only the part you want, for performance and simplicity: you don't want the whole database, you just want a part of it, and you want it fast. But from what I understand, when you retrieve a user's data, you retrieve it all and display it. So you don't need to store the data in a normal "database" way.
What I would suggest is to simply format and store all the data from a single user in a single column with a suitable type (LongBlob, for example). The format would be an object containing a list of columns and rows of your value type, and you define the object in whatever language you use to communicate with the database.
The columns in your (real) database would be : User int, TableNo int, Table Longblob.
If user8 has 3 tables, you will have the following rows :
8, 1, objectcontainingtable1;
8, 2, objectcontainingtable2;
8, 3, objectcontainingtable3;
