Using DurationField to store working hours - django-models

I'm working on a small project where users can track the time they spend working on a contract. Each contract has a defined required work time that the user may work per month.
Now several questions arise: How do I store this work time in my Django model? I ended up using the DurationField introduced in Django 1.8, but this comes with its own problems, as described below. Should I switch to an IntegerField instead, store the work time as minutes, and convert it to the correct format in the template? Then I'd need to re-convert it after a form is submitted by the user, to store it in the right form again. How and where (models.py, forms.py, ...?) would I do those two conversions?
When using the DurationField I run into two big problems:
It always assumes a format of "hh:mm:ss", while I don't need seconds for a work time definition. So my JavaScript TimePicker doesn't let me pick seconds and leaves them zeroed, which isn't the most elegant solution.
When specifying a contract with more than 24 hours of work time (say 80 hours/month), Django saves the DurationField value as "3 days, 8 hours", but I want it to show up as "80:00" inside my input field. I know this is the normal Python timedelta behaviour, but is there a way to customize it, at least for the front-end user?
So my two basic questions are: should I stick with DurationField and somehow solve the problems described above, or should I switch to some other field like IntegerField and do the conversions on my own? If the latter, I'm not sure where to start.

After putting this issue aside for quite some time, I came up with a solution after following this blog post: http://charlesleifer.com/blog/writing-custom-field-django/
So far the code works as I want it to. It stores the work time as an integer in the database and displays it as HH.MM to the user.
Still, I'm not sure if I did it correctly, or if something is missing or wrong in some special case. I can't wrap my head around the difference between to_python and from_db_value. I also removed value_to_string from the original code (see the blog post), as it didn't seem to do anything.
from django.core.exceptions import ValidationError
from django.forms.fields import CharField
from django.utils.translation import ugettext_lazy as _


class WorkingHoursFieldForm(CharField):
    """
    Implementation of a CharField to handle validation of data from WorkingHoursField.
    """
    def __init__(self, *args, **kwargs):
        kwargs['max_length'] = 5
        super(WorkingHoursFieldForm, self).__init__(*args, **kwargs)

    def clean(self, value):
        value = super(WorkingHoursFieldForm, self).clean(value)
        # Split the submitted duration on the dot
        hour_value = value.split('.')
        # If the list does not have two values, do some more validation
        if len(hour_value) != 2:
            # In this case the user did not supply the form with the correct format.
            # Therefore we are going to assume that he does not care about
            # the minutes and we will just append those for him!
            if len(hour_value) < 2:
                value = hour_value[0] + ".00"
            # This case should only arise when the format was not correct at all.
            else:
                raise ValidationError(_('Working hours entered must be in format HH.MM'))
        # Check whether the total working hours exceed 80 hours per month
        # (this equals 288,000 seconds)
        hours, minutes = map(int, value.split('.'))
        total_seconds = hours * 3600 + minutes * 60
        if total_seconds > 80 * 3600:
            raise ValidationError(_('Contracts may not be longer than 80 hours!'))
        return value
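The parsing rules in clean() can be exercised outside Django as a plain function (a sketch mirroring the same logic; the function name and the max_hours parameter are my own, not part of the original code):

```python
def clean_working_hours(value, max_hours=80):
    """Normalize 'HH' to 'HH.00' and reject durations over the monthly cap."""
    parts = value.split('.')
    if len(parts) == 1:
        # No minutes supplied; assume the user does not care and append them.
        value = parts[0] + ".00"
    elif len(parts) != 2:
        raise ValueError('Working hours entered must be in format HH.MM')
    hours, minutes = map(int, value.split('.'))
    if hours * 3600 + minutes * 60 > max_hours * 3600:
        raise ValueError('Contracts may not be longer than %d hours!' % max_hours)
    return value

clean_working_hours("12")     # normalized to "12.00"
clean_working_hours("12.15")  # accepted unchanged
```

Note that the normalized "HH" case must also pass through the cap check, otherwise an entry like "81" would slip past validation.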
import datetime

from django.db.models import IntegerField


class WorkingHoursField(IntegerField):
    """
    Creates a custom field so we can store our working hours in contracts.
    Working hours are stored as an integer number of minutes in the database.
    This field accepts input in the format HH.MM and will display it the same way.
    """
    # Get values from the database (minutes) and return them as HH.MM
    def from_db_value(self, value, expression, connection, context):
        if value is None:
            return value
        hours, minutes = divmod(value, 60)
        return "%02d.%02d" % (hours, minutes)

    def to_python(self, value):
        if value is None:
            return value
        if isinstance(value, (int, long)):
            return value
        # Split the HH.MM string and return the duration in minutes!
        if isinstance(value, basestring):
            hours, minutes = map(int, value.split('.'))
            return (hours * 60) + minutes
        # I do not know if this is really relevant here?
        elif not isinstance(value, datetime.timedelta):
            raise ValidationError('Unable to convert %s to timedelta.' % value)
        return value

    def get_db_prep_value(self, value, connection, prepared=False):
        return value

    # This is needed so model forms use the custom form field above;
    # otherwise the form falls back to a plain integer input.
    def formfield(self, form_class=WorkingHoursFieldForm, **kwargs):
        defaults = {'help_text': _('Please specify your working hours in the format HH.MM '
                                   '(e.g. 12.15 - meaning 12 hours and 15 minutes)')}
        defaults.update(kwargs)
        return form_class(**defaults)
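For reference, the two conversions the field performs (integer minutes in the database, "HH.MM" strings towards the user) boil down to a pair of helpers like these (a standalone sketch, not part of the field itself; the helper names are mine):

```python
def minutes_to_display(minutes):
    # from_db_value direction: stored integer minutes -> 'HH.MM' string
    hours, mins = divmod(minutes, 60)
    return "%02d.%02d" % (hours, mins)

def display_to_minutes(value):
    # to_python direction: 'HH.MM' string -> integer minutes for storage
    hours, mins = map(int, value.split('.'))
    return hours * 60 + mins
```

An 80-hour contract round-trips as 4800 minutes in the database and "80.00" in the form, which also sidesteps the timedelta display problem ("3 days, 8 hours") entirely.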

Related

How to get raw date string from date picker?

I'm struggling for hours with this seemingly trivial issue.
I have a antd datepicker on my page.
Whenever I choose a date, instead of giving me the date I chose, it gives me a messy moment object, which I can't figure out how to read.
All I want is that when I choose "2020-01-18", it should give me precisely this string that the user chose, regardless of timezone, preferably in ISO format.
This is not a multi-national website. I just need a plain vanilla date so I can send it to the server, store in db, whatever.
Here are some of my trials, so far no luck:
var fltval = e;
if (isMoment(fltval)) {
    var dat = fltval.toDate();
    //dat.setUTCHours(0)
    fltval = dat.toISOString(); // fltval.toISOString(false)
    var a = dat.toUTCString();
    //var b = dat.toLocaleString()
}
It keeps shifting by a few hours, probably to compensate for some timezone bias.
UPDATE 1:
The date string is data-wise correct, but it's not ISO, so I can't use it directly. I might try to parse it, but I cannot find a way to parse a string to a date with a specific format.
UPDATE 2:
I also tried adding the bias manually, but for some reason the bias is 0:
var dat = pickerval.toDate()
var bias = Date.prototype.getTimezoneOffset() // this is 0...
var bias2 = dat.getTimezoneOffset() // and this too is 0
var d2 = new Date(dat.getTime() + bias)
var mystring = dat.toISOString() // still wrong
Thanks!
JavaScript date functions can be used. Assuming you are getting the date in 2022-01-03T11:19:07.946Z format, then
date.toISOString().slice(0, 10)
gives 2022-01-03.
There are two ways to get the date string:
Use the moment format API:
date.format("YYYY-MM-DD")
Use the date string that is passed to onChange as the second parameter.
Here is a Link.
I am assuming your code snippet is inside the onChange method. This gives you a moment and a date string to work with (the first and second parameters of the function, respectively).
You have a few options. You could set the format prop on the DatePicker to match the format of the string you want. Then just use the date string. Or you can use the moment object as Domino987 described.

Why do I have a problem when I train my model in PyTorch?

I'm new to PyTorch and AI, but I have some trouble when I try to train my model.
I just created my Dataset and DataLoader:
train_dataset = TensorDataset(tensor_train,tensor_label)
train_dataloader = DataLoader(train_dataset,batch_size=32,shuffle=True)
And after this my criterion and optimiser
criterion = nn.CrossEntropyLoss()
optimiser=optim.Adam(net.parameters(),lr=0.2)
And I try to train it with
for epoch in range(10):
    for data in train_dataloader:
        inputs, labels = data
        output = net(torch.Tensor(inputs))
        loss = criterion(output, labels.to(device))
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
But I got this error
d:\py\lib\site-packages\torch\nn\modules\module.py in <lambda>(t)
321 Module: self
322 """
--> 323 return self._apply(lambda t: t.type(dst_type))
324
325 def float(self):
TypeError: dtype must be a type, str, or dtype object
I will be happy if someone finds the problem, thanks.
I see two possible problems:
1) Your DataLoader outputs a tensor, so you don't need to create another tensor. Just do this:
output = net(inputs)
2) Are you sending your model to the device? If yes, you need to send the inputs as well. If not, you don't need to do this with the labels:
loss = criterion(output, labels)
However, I'm not sure the error you're getting is related to these two points. Consider posting the failing line from your code (instead of the library code). Also, consider including more information about tensor_train and tensor_label.
Thanks for the reply, but the problem was coming from something else.
I was creating my model like this:
class Perceptron(nn.Module):
    def __init__(self):
        super(Perceptron, self).__init__()
        self.type = nn.Linear(4, 3)

    def forward(self, x):
        return self.type(x)

net = Perceptron().to(device)
nn.Module already has a type attribute (a method), which is why I was getting this error (I think). I solved it by renaming self.type to anything else.
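The clash can be reproduced without PyTorch: assigning an instance attribute with the same name as an inherited method shadows that method, so any base-class machinery that calls it breaks. Here Base.type is a stand-in for nn.Module.type(), which .to(device) invokes internally; the class names are illustrative only:

```python
class Base(object):
    def type(self, dst_type):
        # stand-in for nn.Module.type(), called internally by .to(device)
        return "cast to %s" % dst_type

class Broken(Base):
    def __init__(self):
        self.type = "linear"  # instance attribute now shadows the inherited method

b = Broken()
try:
    b.type("float32")   # fails: the attribute is a str, not a callable
    shadowed = False
except TypeError:
    shadowed = True
```

This is why renaming the layer attribute (e.g. self.fc instead of self.type) makes the error go away.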

change a db from a certain point in time, when the change doesn't fit the already existing data

I have a model that looks like this:
class Report(models.Model):
    updater = models.CharField(max_length=15)
    pub_date = models.DateTimeField(auto_now_add=True)
    identifier = models.CharField(max_length=100)
    ... and so on...
There are some more fields but they are irrelevant to the question. Now the site has very simple functions - the users can see older reports and their data, and can edit them or add new ones.
However, the identifier field is actually an integer that identifies a log file being reported. Most of the time, each report has one log, but sometimes it has more than one. I made it a CharField because I built the site to replace an older SharePoint 2003 website, where that field was treated as plain text. In my next version, I want it to be the way it should be, i.e. like this:
class Report(models.Model):
    updater = models.CharField(max_length=15)
    pub_date = models.DateTimeField(auto_now_add=True)
    ... and so on...

class Log(models.Model):
    report = models.ForeignKey(Report)
    identifier = models.IntegerField()
The problem is, since that field was a CharField in the old site, people used it however they liked. Meaning, even if they updated various logs in the same report they just did it like this: <logid1>, <logid2>. Sometimes they added some text: <logid1> which is related to <logid2>.
So I want to change this, but I don't want to lose all the old data, and I can't fix all those edge cases by hand (the DB contains around 22 thousand reports). I thought about adding this to Report:
def disp_id(self):
    if self.pub_date < ...:  # the day I'll do the update
        return self.identifier
    else:
        return ', '.join([str(log.identifier) for log in self.log_set.all()])
But then I'm not really getting rid of the old field, am I? I'm just adding a new one and keeping the original, null from a certain date onward.
As far as I know, what I want to do is impossible. I'm only asking because I know that maybe I'm not the first one to deal with this sort of thing and maybe there is a solution that I'm not aware of.
Hope my explanation is clear enough, thanks in advance!
class Report(models.Model):
    updater = models.CharField(max_length=15)
    pub_date = models.DateTimeField(auto_now_add=True)
    identifier = models.CharField(max_length=100, null=True)
    ... and so on...
    logs = models.ManyToManyField('Log', null=True)

class Log(models.Model):
    identifier = models.IntegerField()
Make the above models, and then run a script like the following:
ident_list = []
for report in Report.objects.all():
    for ident in report.identifier.split(','):
        ident = int(ident)
        if ident not in ident_list:
            log = Log.objects.create(identifier=ident)
            ident_list.append(ident)
        else:
            log = Log.objects.get(identifier=ident)
        report.logs.add(log)
Check the data before removing the identifier column from the Report table.
Does this solve your purpose?
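Since the legacy identifier strings sometimes contain free text around the log IDs ("<logid1> which is related to <logid2>"), splitting on commas will choke on those rows. A regex-based extraction is more forgiving (a sketch, assuming log IDs are plain integers; the helper name is mine):

```python
import re

def extract_log_ids(identifier):
    """Pull every integer log ID out of a legacy free-text identifier string."""
    return [int(m) for m in re.findall(r'\d+', identifier)]
```

In the migration script you could then iterate over extract_log_ids(report.identifier) instead of identifier.split(','), and the 22 thousand existing reports would not need any manual cleanup first.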

Temporarily suspend data-binding in AngularJS to update model without firing $scope.$watch

UPDATE 4/19/2012 12PM PST: Gotta eat crow on this one. The problem was not with Angular's databinding but with math errors in how I was calculating the dates. I wasn't properly taking into account the minutes and seconds in my time calculations. I was just subtracting the timestamps from each other expecting the hours to come out nicely.
I've gotten myself into trouble with AngularJS databinding in which my different inputs need to be reciprocally bound to each other.
The form needs to contain the following:
A start date (this is a given, not an input)
An input to add hours on to the start date
Two inputs with the result: 1) a date picker and 2) an hour picker
If you change any of the inputs, it should update the others. So, the following would be desirable results:
(with original date-time as April 19th at 10pm). User enters '1 hour'. The result date becomes April 19th and the result time becomes 11pm.
Building on the above example, the user changes the date input to April 20th. The 'hours' now become 25 hours.
I've set up watchers, using $scope.$watch, on each of these variables:
$scope.$watch('hours', function (newHours, oldHours, scope) {
    if (newHours) {
        var newEndDate = new Date(scope.origDate),
            offHours = newEndDate.getHours();
        newEndDate.setHours(offHours + newHours);
        scope.endDate = newEndDate;
        scope.endHours = newEndDate.getHours();
    }
});
$scope.$watch('endHours', function (newEndHours, oldEndHours, scope) {
    if (newEndHours) {
        var newEndDate = new Date(scope.endDate);
        newEndDate.setHours(newEndHours);
        scope.endDate = newEndDate;
    }
});
$scope.$watch('endDate', function (newEndDate, oldEndDate, scope) {
    if (newEndDate) {
        scope.hours = (newEndDate - scope.origDate) / (1000 * 60 * 60);
    }
});
Each of these works fine on its own, but in tandem they cause a big fat mess. From my understanding of Angular, it seems that they're creating a 'feedback loop.' Edit: To wit, model 'A' will be updated and trigger a change on model 'B'. Model 'B' will then trigger a change on model 'A'.
Now, is it possible to temporarily suspend data-binding so that I can just update the models without firing the watchers? In some other contexts I've worked in (UITableView on Cocoa comes to mind), one can ask to "stop updates," make some changes to the model, and then "resume updates."
Is there anything like this in AngularJS? If not, what am I not getting here, and how could I set up my project to achieve the desired functionality?
Here's a plunkr of my example.

parallel code execution python2.7 ndb

In my app, for one of the handlers, I need to get a bunch of entities and execute a function for each one of them.
I have the keys of all the entities I need. After fetching them I need to execute one or two instance methods on each of them, and this slows my app down quite a bit: doing this for 100 entities takes around 10 seconds, which is way too slow.
I'm trying to find a way to get the entities and execute those functions in parallel to save time, but I'm not really sure which way is best.
I tried the _post_get_hook, but there I have a Future object and need to call get_result() and execute the function inside the hook. That works kind of OK in the SDK, but I get a lot of 'maximum recursion depth exceeded while calling a Python object' errors, and I can't really understand why; the error message is not very elaborate.
Is the Pipeline API or ndb.Tasklets what I'm searching for?
At the moment I'm going by trial and error, but I would be happy if someone could point me in the right direction.
EDIT
My code is something similar to a filesystem: every folder contains other folders and files. The path of a Collection is set on another entity, so to serialize a Collection entity I need to get the referenced entity and read its path. On a Collection the serialized_assets() function gets slower the more entities it contains. If I could execute a serialize function for each contained asset side by side, it would speed things up quite a bit.
class Index(ndb.Model):
    path = ndb.StringProperty()

class Folder(ndb.Model):
    label = ndb.StringProperty()
    index = ndb.KeyProperty()
    # contents is a list of keys of contained Folders and Files
    contents = ndb.KeyProperty(repeated=True)

    def serialized_assets(self):
        assets = ndb.get_multi(self.contents)
        serialized_assets = []
        for a in assets:
            kind = a._get_kind()
            assetdict = a.to_dict()
            if kind == 'Collection':
                assetdict['path'] = a.path
                # other operations ...
            elif kind == 'File':
                assetdict['another_prop'] = a.another_property
                # ...
            serialized_assets.append(assetdict)
        return serialized_assets

    @property
    def path(self):
        return self.index.get().path

class File(ndb.Model):
    filename = ndb.StringProperty()
    # other properties....

    @property
    def another_property(self):
        # compute something here
        return computed_property
EDIT 2:
@ndb.tasklet
def serialized_assets(self, keys=None):
    assets = yield ndb.get_multi_async(keys)
    raise ndb.Return([asset.serialized for asset in assets])
Is this tasklet code OK?
Since most of the execution time of your functions is spent waiting for RPCs, NDB's async and tasklet support is your best bet. That's described in some detail here. The simplest usage for your requirements is probably to use the map function, like this (adapted from the docs):
@ndb.tasklet
def callback(msg):
    acct = yield msg.author.get_async()
    raise ndb.Return('On %s, %s wrote:\n%s' % (msg.when, acct.nick(), msg.body))

qry = Message.query().order(-Message.when)
outputs = qry.map(callback, limit=20)
for output in outputs:
    print output
The callback function is called for each entity returned by the query, and it can do whatever operations it needs (using _async methods and yield to do them asynchronously), returning the result when it's done. Because the callback is a tasklet, and uses yield to make the asynchronous calls, NDB can run multiple instances of it in parallel, and even batch up some operations.
The Pipeline API is overkill for what you want to do. Is there any reason why you couldn't just use a taskqueue?
Use the initial request to get all of the entity keys, and then enqueue a task for each key, having the task execute the two functions per entity. The concurrency will then be based on the number of concurrent requests configured for that taskqueue.
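Outside App Engine, the fan-out idea both answers describe (start all the slow per-entity calls at once, collect results as they finish) can be sketched in plain Python, with a thread pool standing in for NDB's async RPCs. This is illustrative only; on GAE you would use tasklets or the taskqueue as described above, and the serialize stub here is hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def serialize(entity_id):
    # Stand-in for the slow per-entity work (RPCs, computed properties).
    return {"id": entity_id, "path": "/folder/%d" % entity_id}

entity_ids = list(range(100))
# Run up to 8 serializations concurrently instead of one after another.
with ThreadPoolExecutor(max_workers=8) as pool:
    serialized = list(pool.map(serialize, entity_ids))
```

The win comes from overlapping the waits: 100 sequential 100 ms calls take ~10 s, while 8-way overlap brings that down to roughly 1.3 s, which matches the speedup the asker is after.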
