Didn't find the answer on this doc, is the rescue clause called when redefining a feature?
Class A

class A

feature --

   process
      do
         do_stuff
      rescue
         on_fail
      end

end -- class
Class B

class B

inherit
   A
      redefine
         process
      end

feature --

   process
      do
         do_other_stuff -- do I have to call the rescue too?
      end

end -- class
Rescue clauses are not inherited (unlike contracts, which are), so every redefined or new feature body must supply its own rescue clause if it needs one.
While learning more about Ruby, I'm currently running into a wall. I'm trying to extract and sort the value of an instance variable of one class, that's stored in an array inside an instance variable in another class. I seem to only be able to grab the instance of the class itself, rather than the specific instance variables within the class.
Below are the two classes.
product.rb
class Product
  attr_reader :name,
              :price

  def initialize(name, price)
    @name = name
    @price = price
  end
end
catalogue.rb
class Catalogue
  attr_reader :contents

  def initialize
    @contents = []
  end

  def cheapest
    @contents.product.first
  end

  def <<(product)
    @contents << product
  end
end
The following test confirms I'm not properly extracting and sorting @name or @price for the instance of Product that's stored in @contents in the instance of Catalogue.
catalogue_test.rb
gem 'minitest', '~> 5.2'
require 'minitest/autorun'
require 'minitest/pride'
require 'catalogue'
require 'product'
class CatalogueTest < Minitest::Test
  def test_cheapest_of_one
    catalogue = Catalogue.new
    catalogue << Product.new('scissors', 8)
    assert_equal 'scissors', catalogue.cheapest
  end
end
Here is the failure:
Failure:
CatalogueTest#test_cheapest_of_one [catalogue_test.rb:16]:
--- expected
+++ actual
@@ -1 +1 @@
-"scissors"
+[#<Product:0xXXXXXX @name="scissors", @price=8>]
Ideally, I'd like to be able to extract and sort the product based on price across a number of instances. I realize I will need to further extend the existing code to ensure I'm grabbing the cheapest (when there is more than one object in the array), but was just trying to start with a basic function to access the elements in it.
Thus far I've tried some different methods like sort and first; however, I'm unable to go beyond the above output, [#<Product:0xXXXXXX @name="scissors", @price=8>], and drill down into the instance variables.
What would I need to add to catalogue.rb to make test_cheapest_of_one in catalogue_test.rb pass?
I think your test will work with the following method definition:

def cheapest
  @contents.sort_by(&:price).first.name
end
or alternatively

def cheapest
  @contents.sort_by { |product| product.price }.first.name
end
Currently you're calling @contents.product, which actually invokes Array#product, not what you want.
In your question, you ask:
What would I need to add to catalogue.rb to make test_cheapest_of_one in catalogue_test.rb pass?
But, that is the wrong question to ask! As I alluded to in my comment above, your problem is that, in your test, you are testing whether a Product is equal to a String, which can never be true, because a Product will never be equal to a String. A Product will only be equal to a Product and a String will only be equal to a String.
So, the problem is with your test, not with the production code.
If you were to modify your production code, you would have to change the cheapest method to return the name of the cheapest product. But that is wrong. It should return the cheapest product. It doesn't help you if it returns the name, because there's nothing useful you can do with the name. You cannot print out how much the cheapest product costs, for example, because you don't know what the cheapest product is, you only know what it is called.
The correct solution is to modify the test, so that it tests that the correct product is returned, not the name:
def test_cheapest_of_one
  catalogue = Catalogue.new
  scissors = Product.new('scissors', 8)
  catalogue << scissors
  assert_equal scissors, catalogue.cheapest
end
As mentioned on this question, there is no way to define a constant which I can redefine in a descendant.
In many of my cases, I'd like to have a constant which I can redefine. To avoid creating a new object on each access (which doesn't make sense), the alternatives I see would be the following.
That's not doable:
class A

feature -- Access

   Default_value: STRING = "some A value"

end -- Class

class B

inherit
   A
      redefine
         Default_value
      end

feature -- Access

   Default_value: STRING = "some B value"

end -- Class
Alternative with an instance-free once function:
class A

feature -- Access

   Default_value: STRING
      once
         Result := "some A value"
      ensure
         instance_free: class
      end

end -- Class

class B

inherit
   A
      redefine
         Default_value
      end

feature -- Access

   Default_value: STRING
      once
         Result := "some B value"
      ensure
         instance_free: class
      end

end -- Class
As far as I understand, the once function would not be re-evaluated with the B value; the value already computed for class A would be taken instead.
Alternative with attribute
class A

feature -- Access

   Default_value: STRING
      attribute
         Result := "some A value"
      ensure
         instance_free: class
      end

end -- Class

class B

inherit
   A
      redefine
         Default_value
      end

feature -- Access

   Default_value: STRING
      attribute
         Result := "some B value"
      ensure
         instance_free: class
      end

end -- Class
Is this the only way to do it, and is it good practice?
Out of the 3 mentioned possibilities, only instance-free once functions can be used, because:
constants are frozen and, therefore, cannot be redefined;
instance-free attributes are not supported.
One more way is to use regular functions with manifest once strings to guarantee that the result is always the same:
class A

feature

   default_value: STRING
      do
         Result := once "some A value" -- Note the modifier "once".
      ensure
         instance_free: class
         constant: Result = default_value -- Ensure the Result is constant.
      end

end
However, there seems to be no particular benefit compared to instance-free once functions. (I would still keep the postcondition constant to avoid erroneous redeclaration of the feature.)
Edit. Some details for the example above:
At run-time, regular manifest strings of the form "foo" create a new string object every time they are evaluated. Once manifest strings of the form once "foo" create a new string object only for the first time instead. On subsequent evaluation, they produce the same object that was computed earlier.
Postcondition Result = f of a query f (the example uses default_value instead of f) ensures that the second call to f produces the same object as the first call. Indeed, in expression Result = f, Result refers to the object computed by the feature. The call f refers to the object computed by the second call to the feature. So, whenever we call f, it produces the same object. (Ideally, we would explicitly require that the third, fourth, etc. calls to the feature also produce the same object. However, this is beyond expressive power of the language. Formally, the equality of all results produced by f could be proved by induction.)
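The once-string guarantee, "a new object on first evaluation, the same object thereafter", can be mimicked in other languages by caching the result. As a rough analogy only (Python here, not Eiffel), a memoized function returns the very same object on every call:

```python
import functools

@functools.lru_cache(maxsize=None)
def default_value():
    # Evaluated once; every later call returns the same cached object,
    # much like Eiffel's `once "some A value"`.
    return "some A value"

first = default_value()
second = default_value()
print(first is second)  # True: the identical object, not merely an equal string
```

This mirrors the `constant: Result = default_value` postcondition above: calling the feature twice yields one and the same object.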
I'd like to use playhouse migrator to make changes to my db schema.
I'd like to add a column to a database but with AFTER in sql statement so that I may define the column order in the table.
Is this possible with Peewee/Playhouse migrator?
Thanks in advance!
There is no trigger support in Peewee. In 2015 the author stated that
I do not plan on supporting triggers at this time.
However, Peewee has "Signal support".
from peewee import IntegerField
from playhouse.signals import Model, post_save

class MyModel(Model):
    data = IntegerField()

@post_save(sender=MyModel)
def on_save_handler(model_class, instance, created):
    put_data_in_cache(instance.data)
Unfortunately the schema migrator does not support the AFTER clause. You are left with subclassing the relevant migrator class, or using a custom field class and implementing a ddl() method on the field which includes the AFTER portion.
You can extend the field with your own custom one and override the sort key with a large number to ensure the column is always pushed to the end of the table.
This is definitely not the best way, but it works.
class dbCustomDateTime(dbDateTime):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Defining the sort key to ensure that even if this is used in a
        # base class, this column will go to the end of the table.
        self._sort_key = 100, 100
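Whichever route you take (a subclassed migrator or a custom field ddl()), the SQL that ultimately has to be emitted is plain MySQL syntax. A minimal, hypothetical helper (the function name and identifiers are made up for illustration; no identifier escaping is attempted) that builds such a statement:

```python
def add_column_after_sql(table, column, column_ddl, after):
    """Build a MySQL ALTER TABLE statement placing the new column
    directly after an existing one. Identifiers are assumed trusted;
    a real implementation should quote/validate them properly."""
    return (f"ALTER TABLE `{table}` ADD COLUMN `{column}` "
            f"{column_ddl} AFTER `{after}`")

sql = add_column_after_sql("users", "last_login", "DATETIME NULL", "email")
print(sql)
# ALTER TABLE `users` ADD COLUMN `last_login` DATETIME NULL AFTER `email`
```

Note that AFTER is a MySQL extension; SQLite and standard SQL always append new columns at the end.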
I have a class that represents the table of a db-row. Its properties are the columns of the table. I add a new row to the table with the following code:
Public Sub AddRow(oTestRow As TestRow)
    Dim sql As String
    With oTestRow
        sql = String.Format("INSERT INTO TestTable " &
                            "(ArtNr, ArtName, ArtName2, IsVal, CLenght) " &
                            "Values ('{0}', '{1}', '{2}', {3}, {4})",
                            .ArtNr, .ArtName, .ArtName2, .IsVal, .CLenght)
    End With
    Using oConn As New OleDbConnection(m_ConnString)
        oConn.Open()
        Using oInsertCmd As New OleDbCommand(sql, oConn)
            oInsertCmd.ExecuteNonQuery()
        End Using
    End Using
End Sub
That is just an example, but my classes have around 30-40 properties, which leads to a very large and complex SQL string.
Creating, editing and maintaining these SQL strings for many classes could generate errors.
I am wondering whether any compact way or method exists to add a whole object instance (its properties, of course) to the table "TestTable" without writing such a large SQL string.
I created TestRow so that its properties are exactly the columns of the table "TestTable" (with the same names), but I did not find anything in ADO.NET that could be used.
If changing the DB system is an option, you may want to take a look at a document-based NoSQL solution like MongoDB, CouchDB or, especially for .NET, RavenDB, db4o or Eloquera.
Here is a list of some of them.
For starters, anything with inline queries is bad practice (unless the situation demands it, e.g. you have tables defined in the db and don't have access to the db to deploy procedures).
You have a few options: for example, instead of handwriting the classes, use Entity Framework, a better alternative to Linq2Sql.
If you want to stick with the tags in this question, I would design this making the most of OO concepts (this is a rough sketch, but I hope this helps):
public class dbObject
    protected <type> ID // This is important. If this has a value, commit will perform an update; otherwise an insert will be performed.
    public property DBTableName // set the table name
    public property CommitStoredprocedure // the procedure on the database that can do commit work
    public property SelectStoredProcedure // the procedure used to retrieve the instance

    public dbObject constructor (connection string or dbcontext etc)
        set dbConnection here
    end constructor

    public method commit
        reflect on this.properties and prepare your commit string.
        if you are using a stored proc, ensure that you prepare named parameters and that the stored proc is defined with the same property names as your class property names. Also ensure that the stored proc will update when there is an ID value, or insert and return an ID when the ID value is not available.
        create an ADO.NET command and execute it. (this is said easily here, but you need to perfect this method)
    end method
end class

public class employee inherits dbObject
    // employee properties here
    public string name;
end employee

public class another inherits dbObject
    // another properties
    public bool isValid;
end another
usage:

employee e = new employee;
e.name = "John Smith";
e.commit();
console.WriteLine(e.id); // will be the id set by the commit method from the db
If you make the base class correct (well tested), this is automated and you shouldn't see errors.
You will need to extend the base class to retrieve records from the db based on an id (if you want to instantiate objects from the db).
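The reflection idea sketched above is language-agnostic. Here is a minimal illustration in Python (class names and table layout are invented for the example, and SQLite stands in for the real database) of how a base-class commit can build a parameterized INSERT from an object's attributes, avoiding the hand-built SQL string entirely:

```python
import sqlite3

class DbObject:
    table = None  # each subclass names its table

    def commit(self, conn):
        # Reflect over the instance's attributes to build a parameterized
        # INSERT; parameters also remove the quoting problems that come
        # with String.Format-style SQL.
        cols = {k: v for k, v in vars(self).items() if not k.startswith("_")}
        names = ", ".join(cols)
        marks = ", ".join("?" for _ in cols)
        sql = f"INSERT INTO {self.table} ({names}) VALUES ({marks})"
        cur = conn.execute(sql, list(cols.values()))
        return cur.lastrowid  # the id the database assigned

class Employee(DbObject):
    table = "employee"
    def __init__(self, name):
        self.name = name

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT)")
new_id = Employee("John Smith").commit(conn)
print(new_id)  # 1 for the first inserted row
```

A production version would also handle the update path (when the id is already set) and validate column names, as the pseudocode above notes.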
As django model save() methods are not lazy, and as keeping transactions short is a general good practice, should saves be preferably deferred to the end of transaction blocks?
As an example, would the code sample B hold a transaction open for less time than code sample A below?
Code sample A:
from django.db import transaction
from my_app.models import MyModel

@transaction.commit_on_success
def model_altering_method():
    for inst in MyModel.objects.all()[0:5000]:
        inst.name = 'Joel Spolsky'
        # Some model-independent time consuming operations...
        inst.save()
Code sample B:
from django.db import transaction
from my_app.models import MyModel

@transaction.commit_on_success
def model_altering_method():
    instances_to_save = []
    for inst in MyModel.objects.all()[0:5000]:
        inst.name = 'Joel Spolsky'
        # Some model-independent time consuming operations...
        instances_to_save.append(inst)
    for inst in instances_to_save:
        inst.save()
I'm not sure, but here is my theory: I would think that your commit_on_success decorator will begin a new transaction rather than having one spring into existence when you do your first save. So my theory is that code sample B would keep the transaction open longer, since it has to loop through the list of models twice.
Again, that's just a theory - and it could also depend on which DBMS you're using as to when the actual transaction starts (another theory).
Django's default behavior is to run with an open transaction which it commits automatically when any built-in, data-altering model function is called. With the commit_on_success or commit_manually decorators, Django does not commit upon save(), but rather on successful completion of the function or on the transaction.commit() command, respectively.
Therefore, the elegant approach would be to separate the transaction handling code and other time consuming code if possible:
from django.db import transaction
from my_app.models import MyModel

@transaction.commit_on_success
def do_transaction(instances_to_save):
    for inst in instances_to_save:
        inst.save()

def model_altering_method():
    instances_to_save = []
    for inst in MyModel.objects.all()[0:5000]:
        inst.name = 'Joel Spolsky'
        # Some model-independent time consuming operations...
        instances_to_save.append(inst)
    do_transaction(instances_to_save)
If this is impossible design-wise, e.g. you need instance.id information, which for new instances you can only get after the first save(), try breaking up your flow into reasonably sized work units so as not to keep the transaction open for long minutes.
Also notice that having long transactions is not always a bad thing. If your application is the only entity modifying the db, it could actually be OK. You should, however, check your db's specific configuration to see the time limit for transactions (or idle transactions).
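Breaking the flow into work units, as suggested, amounts to committing in chunks. A small helper (plain Python, no Django required) that yields fixed-size batches, each of which could then be passed to something like the do_transaction function above so each batch gets its own short transaction:

```python
def chunked(items, size):
    """Yield successive lists of at most `size` items from any iterable."""
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch  # the final, possibly smaller, batch

# Hypothetical usage, one short transaction per 500 saves:
# for batch in chunked(instances_to_save, 500):
#     do_transaction(batch)

print([len(b) for b in chunked(range(1050), 500)])  # [500, 500, 50]
```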