How to develop a rule engine in Python with the business-rules library, and how does the run_all() function work? - python-3.10

for product in Product_info:
    run_all(rule_list=rule,
            defined_variables=ProductVariables(product),
            defined_actions=ProductActions(product),
            stop_on_first_trigger=True)

How does the run_all() function work?
The library is installed with pip install business-rules; we are currently using it.
Give me an example of "Define your set of variables", e.g. using string_rule_variable():

from business_rules.variables import BaseVariables, string_rule_variable

class ProductVariables(BaseVariables):
    def __init__(self, product):
        self.product = product

    @string_rule_variable()
    def product_name(self):
        # expose the product name as a string variable for rule conditions
        return self.product.name
example of "Define your set of actions"
#rule_action(params={"sale_percentage": FIELD_NUMERIC})
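For reference, a minimal actions class along the lines of the library's README might look like this (a sketch: the put_on_sale action and the product.price / product.save() attributes are assumptions about your Product model):

from business_rules.actions import BaseActions, rule_action
from business_rules.fields import FIELD_NUMERIC

class ProductActions(BaseActions):
    def __init__(self, product):
        self.product = product

    @rule_action(params={"sale_percentage": FIELD_NUMERIC})
    def put_on_sale(self, sale_percentage):
        # reduce the price by the given fraction and persist the change
        self.product.price = (1.0 - sale_percentage) * self.product.price
        self.product.save()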
3. What does "Run your rules" look like?
from business_rules import run_all

rules = _some_function_to_receive_from_client()

for product in Products.objects.all():
    run_all(rule_list=rules,
            defined_variables=ProductVariables(product),
            defined_actions=ProductActions(product),
            stop_on_first_trigger=True)
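As for how run_all() works: it iterates over the rules in rule_list, evaluates each rule's conditions against the defined_variables object, and runs a rule's actions on defined_actions when its conditions match; with stop_on_first_trigger=True it stops after the first rule that fires. A rule list is plain data, for example (a sketch whose variable and action names match the examples above):

rules = [
    {
        "conditions": {
            "all": [
                {"name": "product_name", "operator": "equal_to", "value": "widget"},
            ]
        },
        "actions": [
            {"name": "put_on_sale", "params": {"sale_percentage": 0.25}},
        ],
    },
]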

Related

How to compile and simulate a Modelica model that is part of a package with JModelica?

My question is similar to janpeter's. I am studying the ebook by Tiller and trying to simulate the example 'Architecture Driven Approach' with OpenModelica and JModelica. I tried the minimal example 'BaseSystem' in OpenModelica and it works fine. But with JModelica version 1.14 I get errors in the compilation process and my script fails. My Python script is:
import matplotlib.pyplot as plt
from pymodelica import compile_fmu
from pyfmi import load_fmu
# Variables: modelName, modelFile, extraLibPath
modelName = 'BaseSystem'
modelFile = 'BaseSystem.mo'
extraLibPath = 'C:\Users\Tom\Desktop\Tiller2015a\ModelicaByExample\Architectures'
compilerOption = {'extra_lib_dirs':[extraLibPath]}
# Compile model
fmuName = compile_fmu( modelName, modelFile, compiler_options=compilerOption)
# Load model
model = load_fmu( fmuName)
# Simulate model
res = model.simulate( start_time=0.0, final_time=5.0)
# Extract interesting values
res_w = res['sensor.w']
res_y = res['setpoint.y']
tSim = res['time']
# Visualize results
fig = plt.figure(1)
ax1 = fig.add_subplot(111)
ax2 = ax1.twinx()
ax1.plot(tSim, res_w, 'g-')
ax2.plot(tSim, res_y, 'b-')
ax1.set_xlabel('t (s)')
ax1.set_ylabel('w (???)', color='g')
ax2.set_ylabel('y (???)', color='b')
plt.title('BaseSystem')
plt.legend()
plt.grid(True)
plt.show()
My problem is: how do I compile and simulate a model that is part of a package?
I am not a jModelica user, but I think I see some confusion in your script. You wrote:
modelName = 'BaseSystem'
modelFile = 'BaseSystem.mo'
extraLibPath = 'C:\Users\Tom\Desktop\Tiller2015a\ModelicaByExample\Architectures'
But that implies (to me) that the compiler should open the package stored at C:\Users\Tom\Desktop\Tiller2015a\ModelicaByExample\Architectures. But the top-level package is ModelicaByExample and the model you want is Architectures.BaseSystem. So I think something like this is probably more appropriate:
modelName = 'Architectures.BaseSystem'
modelFile = 'package.mo'
extraLibPath = 'C:\Users\Tom\Desktop\Tiller2015a\ModelicaByExample'
The essential point here is that you should be opening ModelicaByExample (specifically, the package.mo file in the ModelicaByExample directory). That opens the ModelicaByExample package. You need to open this package because it is the top-level package. You can't load just a sub-package (which is what it looked like you were trying to do). Then, once you've got ModelicaByExample loaded, you can ask the compiler to specifically compile Architectures.BaseSystem.
I suspect OpenModelica was "helping" you by opening the top-level package anyway, even if you were asking it to open the sub-package.
But again, I don't know jModelica very well and I have definitely not tested any of these suggestions.
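Putting those suggestions together (untested, per the caveat above; the path is the one from the question), the compile step would look roughly like:

from pymodelica import compile_fmu

# open the top-level package via its package.mo and compile the fully
# qualified model name inside it
extraLibPath = r'C:\Users\Tom\Desktop\Tiller2015a\ModelicaByExample'
fmuName = compile_fmu('Architectures.BaseSystem', 'package.mo',
                      compiler_options={'extra_lib_dirs': [extraLibPath]})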
Thank you Michael Tiller. With your support I found the solution.
First, the modelName has to be fully qualified. Second, as you mentioned, the extraLibPath should end at the top-level directory of the library ModelicaByExample. But then I got errors that JModelica couldn't find components or declarations which are part of the Modelica Standard Library (MSL).
So I added the modelicaLibPath to the MSL, but the error messages remained the same. After many attempts, I launched the command line with administrator privileges and all errors were gone.
Here is the executable python script: BaseSystem.py
### Attention!
# The script and/or the command line must be
# started with administrator privileges
import matplotlib.pyplot as plt
from pymodelica import compile_fmu
from pyfmi import load_fmu
# Variables: modelName, modelFile, extraLibPath
modelName = 'Architectures.SensorComparison.Examples.BaseSystem'
modelFile = ''
extraLibPath = 'C:\Users\Tom\Desktop\Tiller2015a\ModelicaByExample'
modelicaLibPath = 'C:\OpenModelica1.9.2\lib\omlibrary\Modelica 3.2.1'
compileToPath = 'C:\Users\Tom\Desktop\Tiller2015a'
# Set the compiler options
compilerOptions = {'extra_lib_dirs':[modelicaLibPath, extraLibPath]}
# Compile model
fmuName = compile_fmu( modelName, modelFile, compiler_options=compilerOptions, compile_to=compileToPath)
# Load model
model = load_fmu( fmuName)
# Simulate model
res = model.simulate( start_time=0.0, final_time=5.0)
# Extract interesting values
res_w = res['sensor.w']
res_y = res['setpoint.y']
tSim = res['time']
# Visualize results
fig = plt.figure(1)
ax1 = fig.add_subplot(111)
ax2 = ax1.twinx()
ax1.plot(tSim, res_w, 'g-')
ax2.plot(tSim, res_y, 'b-')
ax1.set_xlabel('t (s)')
ax1.set_ylabel('sensor.w (rad/s)', color='g')
ax2.set_ylabel('setpoint.y (rad/s)', color='b')
plt.title('BaseSystem')
plt.legend()
plt.grid(True)
plt.show()

Scala Slick-Extensions SQLServerDriver 2.1.0 usage - can't get it to compile

I am trying to use Slick-Extensions to connect to an SQL Server Database from Scala. I use slick 2.1.0 and slick-extensions 2.1.0.
I can't seem to get the code I wrote to compile. I followed the examples from slick's website and this compiled fine when the driver was H2. Please see below:
package com.example
import com.typesafe.slick.driver.ms.SQLServerDriver.simple._
import scala.slick.direct.AnnotationMapper.column
import scala.slick.lifted.TableQuery
import scala.slick.model.Table
class DestinationMappingsTable(tag: Tag) extends Table[(Long, Int, Int)](tag, "DestinationMappings_tbl") {
def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
def mltDestinationType = column[Int]("mltDestinationType")
def mltDestinationId = column[Int]("mltDestinationId")
def * = (id, mltDestinationType, mltDestinationId)
}
I am getting a wide range of errors: scala.slick.model.Table does not take type parameters, column does not take type parameters, and O is not found.
If the SQLServerDriver does not use the same syntax as slick, where do I find its documentation?
Thank you!
I think your import of scala.slick.model.Table shadows the Table you get from com.typesafe.slick.driver.ms.SQLServerDriver.simple._ (and the scala.slick.direct.AnnotationMapper.column import most likely shadows column in the same way, which would explain the column error).
Try just removing:

import scala.slick.direct.AnnotationMapper.column
import scala.slick.model.Table

How to test a placeholder in a text field

I want to verify the placeholder text present in a text field using robotframework.
I have used different Selenium2Library keywords but none of them do precisely what I want.
Does anyone have an approach to getting this functionality from within my test?
You can run getAttribute(input_field_locator@placeholder). This will return the text you want, and then you can assert on it.
Once you have managed to access the functionality you are looking for from the REPL, you may find that it is not accessible via robotframework. You can expose new keywords by creating a wrapper for Selenium2Library which extends it with extra functionality - for an example see https://github.com/alistair-broomhead/scalable-robotframework-example/blob/master/TestLibraries/Selenium2Custom from a tutorial I am working on.
If you instead import this class, it simply adds two keywords (Get Text and Get HTML, which are useful for verification) on top of those in Selenium2Library:
from Selenium2Library import Selenium2Library

class Selenium2Custom(Selenium2Library):
    """
    Custom wrapper for robotframework Selenium2Library to add extra functionality
    """
    def get_text(self, locator):
        """
        Returns the text of element identified by `locator`.
        See `introduction` for details about locating elements.
        """
        return self._get_text(locator)

    def get_html(self, id=None):
        """
        Get the current document as an XML accessor object.
        """
        from lxml import html
        src = self.get_source().encode('ascii', 'xmlcharrefreplace')
        page = html.fromstring(src)
        element = page.get_element_by_id(id) if id is not None else page
        return html.tostring(element)
As such it would be trivial to do something like this:
from Selenium2Library import Selenium2Library

class Selenium2Custom(Selenium2Library):
    """
    Custom wrapper for robotframework Selenium2Library to add extra functionality
    """
    def get_placeholder(self, locator):
        """
        Returns the placeholder text of element identified by `locator`.
        """
        element = self._element_find(locator, True, False)
        return element.get_attribute("placeholder")
Now, I don't know that this will definitely work for you, but for me it works like so:
Python 2.7.3 (v2.7.3:70274d53c1dd, Apr 9 2012, 20:52:43)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from Selenium2Library import Selenium2Library
>>> def get_placeholder(self, locator):
... element = self._element_find(locator, True, False)
... return element.get_attribute("placeholder")
...
>>> Selenium2Library.get_placeholder = get_placeholder
>>> session = Selenium2Library()
>>> session.open_browser("http://www.wikipedia.org/wiki/Main_Page",
...                      remote_url="http://127.0.0.1:4444/wd/hub")
1
>>> session.get_placeholder("search")
u'Search'
>>>
I also wanted to check placeholder text, and after coming here and reading the comments, I got some clues. The tricky part is how to do this in Robot Framework, and after a bit of experimenting I was able to do it pretty easily. Here is my answer in two lines:
${webelement}=     Get WebElement    locator
${placeholder}=    Call Method       ${webelement}    get_attribute    placeholder
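For comparison, the two Robot Framework lines above map onto plain Python selenium like this (a sketch; the URL, locator, and expected placeholder are assumptions):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("http://www.wikipedia.org")
# read the placeholder attribute straight off the element
element = driver.find_element(By.NAME, "search")
assert element.get_attribute("placeholder") == "Search Wikipedia"
driver.quit()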

Using default options when testing Thor tasks

I would like to test Thor tasks with rspec, but when calling them from rspec I have two requirements:
I would like to have the Thor class instance available
I would like to call tasks with their default options (as they would be called from the command line)
I am not able to achieve both of these at the same time; consider the following code:
require 'thor'
require 'thor/runner'

class App < Thor
  method_option :foo, :default => "foovalue"
  desc "xlist", "list"
  def xlist(search="")
    p options
  end
end

app = App.new
app.xlist
app.invoke(:xlist)
App.start ARGV
Output is:
> ruby contrib/thor_test.rb xlist
{}
{}
{"foo"=>"foovalue"}
In the first two examples, I can call the task through an instance, but default options are not passed to the method (which makes the spec unrealistic).
In the third example, I get default options, but I can't set expectations on the class instance, nor can I stub any methods, which makes it hard to test. That happens because the class instance is created on the fly.
As for how to test a Thor CLI utility: I read through the Thor specs, as suggested in this SO answer. The examples there were particularly helpful and could be used directly.
Spec:
require 'my_thor'

describe MyThor do
  it "should work" do
    args = ["command", "--force"]
    options = MyThor.start(args)
    expect(options).to eq({ "force" => true })
  end
end
Code:
class MyThor < Thor
  desc "command", "do something"
  method_option :force, :type => :boolean, :aliases => '-f'
  def command
    return options
  end
end
I was able to extract the default options for a Thor command by:
1. Getting Thor's internal representation of the options list for that command
2. Building a Thor::Options object from that options list
3. Using that Thor::Options object to parse an empty array of options, which returns a hash of only the defaults
The code is in get_default_options_for_command below. Honestly, I'm hoping there's a better way to do it, but I couldn't find one.
Once you've got a Thor object, you can replace its options to include those defaults plus whatever other options you want to add, then run it with #xlist.
I wrote this all up using your example above:
require 'thor'

def get_default_options_for_command(klass, command_name)
  option_precursors = klass.all_commands[command_name].options
  parser = Thor::Options.new(option_precursors)
  parser.parse([])
end

class App < Thor
  desc "xlist", "list"
  method_option :foo, :default => "foovalue"
  method_option :bar
  def xlist(search="")
    puts "search: #{search}"
    puts "options: #{options}"
  end
end

app = App.new
xlist_default_opts = get_default_options_for_command(App, 'xlist')
new_opts = { :bar => 3 }
app.options = xlist_default_opts.merge(new_opts)
app.xlist('search-term')
The output is:
$ ./test.rb
search: search-term
options: {"foo"=>"foovalue", "bar"=>3}

What is the best way to do AppEngine Model Memcaching?

Currently my application caches models in memcache like this:
memcache.set("somekey", aModel)
But Nick's post at http://blog.notdot.net/2009/9/Efficient-model-memcaching suggests that first converting it to protocol buffers is a lot more efficient. After running some tests, I found that it is indeed smaller in size, but actually slower (~10%).
Do others have the same experience or am I doing something wrong?
Test results: http://1.latest.sofatest.appspot.com/?times=1000
import pickle
import time
import uuid

from google.appengine.ext import webapp
from google.appengine.ext import db
from google.appengine.ext.webapp import util
from google.appengine.datastore import entity_pb
from google.appengine.api import memcache

class Person(db.Model):
    name = db.StringProperty()

times = 10000

class MainHandler(webapp.RequestHandler):
    def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        m = Person(name='Koen Bok')
        t1 = time.time()
        for i in xrange(int(self.request.get('times', 1))):
            key = uuid.uuid4().hex
            memcache.set(key, m)
            r = memcache.get(key)
        self.response.out.write('Pickle took: %.2f' % (time.time() - t1))
        t1 = time.time()
        for i in xrange(int(self.request.get('times', 1))):
            key = uuid.uuid4().hex
            memcache.set(key, db.model_to_protobuf(m).Encode())
            r = db.model_from_protobuf(entity_pb.EntityProto(memcache.get(key)))
        self.response.out.write('Proto took: %.2f' % (time.time() - t1))

def main():
    application = webapp.WSGIApplication([('/', MainHandler)], debug=True)
    util.run_wsgi_app(application)

if __name__ == '__main__':
    main()
The memcache call still pickles the object, with or without protobuf. Pickle is faster with a protobuf object since protobufs have a very simple model.
Plain-pickled objects are larger than protobuf+pickled objects, so the protobuf route saves time on the memcache transfer, but costs more processor time for the protobuf conversion.
Therefore, in general, either method works out about the same... but
The reason you should use protobuf is that it can handle changes between versions of the models, whereas pickle will error. This problem will bite you one day, so best to handle it sooner.
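In code, the protobuf approach boils down to a pair of helpers like these (a sketch using the same calls as the question's benchmark; the helper names are mine):

from google.appengine.ext import db
from google.appengine.datastore import entity_pb
from google.appengine.api import memcache

def cache_model(key, model):
    # encode the entity as a compact protocol-buffer string before caching
    memcache.set(key, db.model_to_protobuf(model).Encode())

def fetch_model(key):
    # decode back into a model instance; None signals a cache miss
    data = memcache.get(key)
    if data is None:
        return None
    return db.model_from_protobuf(entity_pb.EntityProto(data))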
Both pickle and protobufs are slow in App Engine since they're implemented in pure Python. I've found that writing my own, simple serialization code using methods like str.join tends to be faster since most of the work is done in C. But that only works for simple datatypes.
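To illustrate that idea, a toy join/split serializer for an entity with only string properties might look like this (a hedged sketch, not the poster's code; the two-field Person is assumed):

SEP = u'\x1f'  # unit-separator character, assumed never to appear in the values

def person_dumps(p):
    # str.join does the heavy lifting in C, unlike pickle's pure-Python loop
    return SEP.join([p.name or u'', p.email or u''])

def person_loads(s):
    name, email = s.split(SEP)
    return Person(name=name, email=email)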
One way to do it more quickly is to turn your model into a dictionary and use the native eval / repr functions as your (de)serializers -- with caution of course, as always with the evil eval, but it should be safe here given that there is no external step.
Below is an example of a class Fake_entity implementing exactly that.
You first create your dictionary through fake = Fake_entity(entity), then you can simply store your data via memcache.set(key, fake.serialize()). The serialize() is a simple call to the native dictionary method repr, with some additions if you need them (e.g. adding an identifier at the beginning of the string).
To fetch it back, simply use fake = Fake_entity(memcache.get(key)). The Fake_entity object is a simple dictionary whose keys are also accessible as attributes. You can access your entity properties normally, except that ReferenceProperties give keys instead of fetching the object (which is actually quite useful). You can also get() the actual entity with fake.get(), or, more interestingly, change it and then save it with fake.put().
It does not work with lists (if you fetch multiple entities from a query), but could easily be adjusted with join/split functions using an identifier like '### FAKE MODEL ENTITY ###' as the separator. Use with db.Model only; it would need small adjustments for Expando.
class Fake_entity(dict):
    def __init__(self, record):
        # simple case: a string, we eval it to rebuild our fake entity
        if isinstance(record, basestring):
            import datetime  # <----- put all relevant eval imports here
            from google.appengine.api import datastore_types
            # strip the identifier line added by serialize() before eval'ing
            if record.startswith('### FAKE MODEL ENTITY ###'):
                record = record.split('\n', 1)[1]
            self.update(eval(record))  # careful with external sources, eval is evil
            return None
        # serious case: we build the instance from the actual entity
        for prop_name, prop_ref in record.__class__.properties().items():
            self[prop_name] = prop_ref.get_value_for_datastore(record)  # to avoid fetching entities
        self['_cls'] = record.__class__.__module__ + '.' + record.__class__.__name__
        try:
            self['key'] = str(record.key())
        except Exception:  # the key may not exist if the entity has not been stored
            pass

    def __getattr__(self, k):
        return self[k]

    def __setattr__(self, k, v):
        self[k] = v

    def key(self):
        from google.appengine.ext import db
        return db.Key(self['key'])

    def get(self):
        from google.appengine.ext import db
        return db.get(self['key'])

    def put(self):
        _cls = self.pop('_cls')  # gets and removes the class name from the stored keys
        # import xxxxxxx ---> put your model imports here if necessary
        Cls = eval(_cls)  # make sure that your model declarations are in scope here
        real_entity = Cls(**self)  # creates the entity
        real_entity.put()  # self explanatory
        self['_cls'] = _cls  # puts back the class name afterwards
        return real_entity

    def serialize(self):
        # or simply repr, but I use the initial identifier to test for a cached
        # fake entity when getting from memcache
        return '### FAKE MODEL ENTITY ###\n' + repr(self)
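Usage, following the description above (a sketch; key is your memcache key, and the name property is just an assumed example):

# store: snapshot the entity as a dict-repr string
fake = Fake_entity(entity)
memcache.set(key, fake.serialize())

# fetch: rebuild the fake entity from the cached string
fake = Fake_entity(memcache.get(key))
print fake.name    # properties are readable as attributes
real = fake.get()  # or fetch the actual datastore entity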
I would welcome speed tests on this, I would assume this is quite faster than the other approaches. Plus, you do not have any risks if your models have changed somehow in the meantime.
Below is an example of what the serialized fake entity looks like. Take a particular look at datetime (created) as well as reference properties (subdomain):
### FAKE MODEL ENTITY ###
{'status': u'admin', 'session_expiry': None, 'first_name': u'Louis', 'last_name': u'Le Sieur', 'modified_by': None, 'password_hash': u'a9993e364706816aba3e25717000000000000000', 'language': u'fr', 'created': datetime.datetime(2010, 7, 18, 21, 50, 11, 750000), 'modified': None, 'created_by': None, 'email': u'chou@glou.bou', 'key': 'agdqZXJlZ2xlcgwLEgVMb2dpbhjmAQw', 'session_ref': None, '_cls': 'models.Login', 'groups': [], 'email___password_hash': u'chou@glou.bou+a9993e364706816aba3e25717000000000000000', 'subdomain': datastore_types.Key.from_path(u'Subdomain', 229L, _app=u'jeregle'), 'permitted': [], 'permissions': []}
Personally, I also use static variables (faster than memcache) to cache my entities in the short term, and fetch from the datastore when the server has changed or its memory has been flushed for some reason (which happens quite often, in fact).
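A minimal version of that static-variable layer might look like this (a sketch, not the poster's code; the cache is per server instance and vanishes whenever App Engine recycles the process):

from google.appengine.ext import db

_cache = {}  # module-level dict, lives as long as this server instance

def get_entity(key_str):
    # serve from instance memory when possible, else fall back to the datastore
    entity = _cache.get(key_str)
    if entity is None:
        entity = db.get(db.Key(key_str))
        _cache[key_str] = entity
    return entity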
