I want to create a timer in an Odoo view that supports run/pause, something like a soccer match clock, in min:sec format.
I tried the code below, but it raised an error:
from threading import Thread
import time

@api.one
def timer_th(self):
    timer_thread = Thread(target=self.timer)
    timer_thread.start()

def timer(self):
    while self.current_time <= self.duration:
        time.sleep(1)
        self.current_time += 1
It gave me an "AttributeError: environments" error. When I ran the code without the thread it worked, but the GUI wasn't responsive.
If you want to schedule a piece of code, you can use the ir.cron model. It is meant for automating actions, though I don't know if you can do the start/pause thing with it.
More details in the docs:
http://odoo-development.readthedocs.io/en/latest/odoo/models/ir.cron.html
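For illustration, here is a minimal sketch of creating such a scheduled action from code. The field names follow the old (v8-era) API, and the 'my.match' model and '_tick' method are made-up placeholders, so check the ir.cron fields in your Odoo version. Note that ir.cron's smallest interval is a minute, so it cannot drive a per-second clock by itself:

# Minimal sketch, old-API field names; 'my.match' and '_tick' are
# hypothetical placeholders for your own model and method.
self.env['ir.cron'].create({
    'name': 'Match timer tick',
    'interval_number': 1,
    'interval_type': 'minutes',  # smallest interval ir.cron supports
    'numbercall': -1,            # repeat indefinitely
    'model': 'my.match',         # model that owns the timer
    'function': '_tick',         # method called on each run
})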
I have an API that returns a list of ids, names, data, etc. (TestCase name: GET-APIs_OrderdByID_ASC).
I want to transfer those IDs to other TestCases in the same TestSuite or in other TestSuites.
In SoapUI (open-source version), the Property Transfer step only works between TestSteps within the same TestCase, but I need to transfer the property value between different TestCases/TestSuites.
Below is the code I use to extract the ids from one TestCase, along with the names of the TestCases/TestSteps I want to transfer them to.
import com.eviware.soapui.impl.wsdl.teststeps.*
import com.eviware.soapui.support.types.StringToStringMap
import groovy.json.*
def project = context.testCase.testSuite.project
def TestSuite = project.getTestSuiteByName("APIs")
def TestCase = TestSuite.getTestCaseList()
def TestStep = TestCase.testStepList
def request = testRunner.testCase.getTestStepByName("List_of_APIs_OrderByID_ASC")
def response = request.getPropertyValue("Response")
def JsonSlurperResponse = new JsonSlurper().parseText(response)
def Steps = TestStep.drop(3)
log.info JsonSlurperResponse.data.id
def id = JsonSlurperResponse.data.id
Steps.each {
    it.getAt(0).setPropertyValue("apiId", id.toString())
    log.info it.getAt(0).name
}
If I run the above code, the whole id array [1, 2, 10, 11, 12, 13, 14, 15, 16, 17, 18] gets set on each of the following testSteps.
I looked at some other SO questions:
Property transfer in SOAPUI using groovy script
groovy-script-and-property-transfer-in-soapui
Can anyone help me out? :-)
I have done something with datasinks, as Leminou suggests.
Datasinks are a good solution for this. In test A, create a datasink step to persist the values of interest. Then in the target step, use a data source step, which links to the file generated by the datasink earlier.
The data sink can be configured to append after each test or start afresh.
If you're struggling to tease out the values for the datasink, create a groovy step that returns the single value you want, then in the datasink step, invoke the groovy.
Sounds a little convoluted, but it works.
You can use project-level, testSuite-level, or testCase-level properties.
This way you can achieve the same thing you get from a Property Transfer step, just in a different way.
Write a groovy step in the source test case to set a property (i.e., save the values you want to use later):
testRunner.testCase.setPropertyValue("TCaseProp", "TestCase")
testRunner.testCase.testSuite.setPropertyValue("TSuiteProp","TestSuite")
testRunner.testCase.testSuite.project.setPropertyValue("ProjectLevel","ProjectLevelProperty")
"TCaseProp" is the name of the property. you can give any name
"TestCase" is the value you want to store. You can extract this value and use a variable
for example
def val="9000"
testRunner.testCase.setPropertyValue("TCaseProp", val)
You can use that property in other test cases of the same suite. If you want to use it across different suites, define a project-level property.
Use the below syntax in the target testcase request:
${#Project#ProjectLevel}
${#TestCase#TCaseProp}
${#TestSuite#TCaseProp}
<convertCurrency>${#TestSuite#TCaseProp}</convertCurrency>
The system will automatically substitute the property value into the above request.
More detail on working with properties: https://www.soapui.org/scripting-properties/tips-tricks.html
Well, the following script works. Just change the loop to:
Steps.each { s ->
    id.each { i ->
        s.getAt(0).setPropertyValue("apiId", i.toString())
    }
}
Here id is an ArrayList, so we can loop through the list.
PS: We could do the same with a for loop.
I have the datastore as follows,
class Data(db.Model):
    project = db.StringProperty()
    project_languages = db.ListProperty(str, default=[])
When a user inputs a language (input_language), I want to output all the projects that contain that language in their language list (project_languages).
I tried to do it the way shown below, but got an error saying:
BadQueryError: Parse Error: Invalid WHERE Condition
db.GqlQuery("SELECT * FROM Data WHERE input_language IN project_languages")
What should be my query, if I want to get the data in the above mentioned way?
Not sure if you are using Python for the job. If so, I highly recommend the ndb library for datastore queries; the solution is as easy as Data.query(A.IN(B)).
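As a rough sketch, assuming an ndb model that mirrors the db.Model above (with a repeated StringProperty standing in for the ListProperty), the membership query could look like this:

# Minimal sketch; model and variable names mirror the question's example.
from google.appengine.ext import ndb

class Data(ndb.Model):
    project = ndb.StringProperty()
    project_languages = ndb.StringProperty(repeated=True)

# For a repeated property, an equality filter matches entities whose
# list contains the value, so this finds all projects with the language.
results = Data.query(Data.project_languages == input_language).fetch()

# Equivalently, closer to the A.IN(B) form from the answer:
# results = Data.query(Data.project_languages.IN([input_language])).fetch()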
At some point in the future I may need to bulk load migration data (i.e. from a CSV). Has anyone had exceptions raised doing the following? Also, is there any change in behaviour if the ndb.put_multi() function is used?
from google.appengine.ext import ndb

class X(ndb.Model):
    id = ndb.StringProperty()
    name = ndb.StringProperty()

class Y(ndb.Model):
    pass

def read_csv_row(line):
    """returns tuple"""

while True:
    id, name = read_csv_row(readline())
    if not id:
        break
    x = X(parent=ndb.Key('Y', 'static_id'))
    x.id, x.name = id, name
    x.put()
From my research, and thanks to the comments, it seems that the code above (were it made into valid code) would create problems that eventually lead to google.appengine.api.datastore_errors.Timeout exceptions being thrown.
See another question:
Datastore write limit tests - trying to break app engine, but it won´t break ;)
The best suggestion I have so far is to use a Task Queue to rate limit this. More information at:
blog.notdot.net/tag/deferred
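A minimal sketch of that idea, reusing the X and Y models from the question and using the deferred library to spread the writes across task-queue tasks; the batch size and helper names are assumptions, not from the original code:

# Hypothetical sketch: batch the rows and hand each batch to the task
# queue, so writes are rate-limited by queue settings, not one request.
from google.appengine.ext import deferred, ndb

BATCH_SIZE = 100  # assumed; tune to your write throughput

def write_batch(rows):
    entities = [X(parent=ndb.Key('Y', 'static_id'), id=i, name=n)
                for i, n in rows]
    ndb.put_multi(entities)  # one batched RPC per task

def enqueue_csv(rows):
    for start in range(0, len(rows), BATCH_SIZE):
        deferred.defer(write_batch, rows[start:start + BATCH_SIZE])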
I'm trying to implement a console library that reads data from Composite C1 (a global datatype called RSS Feeds) and then, for each RSS feed, retrieves the RSS entries from the "link" attribute and inserts all entries into a global datatype called "RSSItem".
Here is what I've done:
1. Open the website Composite solution
2. Create a new console library project
3. Reference Composite.dll, Composite.generated.dll, ... in my new project
4. Implement the functionality
Here is the problem:
At design time, all the references work perfectly fine and I can write my code with IntelliSense. But when I launch the project (debug | release mode), the reference to Composite no longer works:
"Error 15 The type or namespace name 'Composite' could not be found (are you missing a using directive or an assembly reference?)"
When I do a refresh in the project browser, IntelliSense works again.
Thanks for your help.
Best regards,
Jonathan
PS: Sorry for my English, it is not my native language.
For info: here is a little bit of the code:
List<MCG.RSSItem> rssItemList = new List<MCG.RSSItem>();
for (int i = 0; i < 10; i++)
{
    MCG.RSSItem rssItem = DataConnection.New<MCG.RSSItem>();
    rssItem.Link = rss.Link;
    rssItem.RSSFeed = rssFeed.Id;
    rssItem.Summary = rss.Description;
    rssItem.Title = rss.Title;
    rssItem.PublicationStatus = "published";
    rssItem.Id = Guid.NewGuid();
    rssItemList.Add(rssItem);
}
connection.Add<MCG.RSSItem>(rssItemList);
This problem was discussed here: http://compositec1.codeplex.com/discussions/357939
The problem is that running Composite C1 from a standard Windows application is not supported.
Based on this, a feature request was created: Refactor core parts of C1 to be used in "selfhost".
In my app, one of the handlers needs to get a bunch of entities and execute a function for each one of them.
I have the keys of all the entities I need. After fetching them I have to execute 1 or 2 instance methods on each one, and this slows my app down quite a bit: doing it for 100 entities takes around 10 seconds, which is way too slow.
I'm trying to find a way to fetch the entities and execute those functions in parallel to save time, but I'm not really sure which way is best.
I tried the _post_get_hook, but there I have a Future object and need to call get_result() and execute the function inside the hook. That works kind of OK in the SDK, but raises a lot of 'maximum recursion depth exceeded while calling a Python object' errors; I can't really understand why, and the error message is not very elaborate.
Is the Pipeline API or ndb.Tasklets what I'm searching for?
At the moment I'm going by trial and error, but I'd be happy if someone could point me in the right direction.
EDIT
My code is something similar to a filesystem: every folder contains other folders and files. The path of a Collection is set on another entity, so to serialize a Collection entity I need to get the referenced entity and read its path. On a Collection, the serialized_assets() function gets slower the more entities it contains. If I could execute a serialize function for each contained asset side by side, it would speed things up quite a bit.
class Index(ndb.Model):
    path = ndb.StringProperty()

class Folder(ndb.Model):
    label = ndb.StringProperty()
    index = ndb.KeyProperty()
    # contents is a list of keys of contained Folders and Files
    contents = ndb.KeyProperty(repeated=True)

    def serialized_assets(self):
        assets = ndb.get_multi(self.contents)
        serialized_assets = []
        for a in assets:
            kind = a._get_kind()
            assetdict = a.to_dict()
            if kind == 'Collection':
                assetdict['path'] = a.path
                # other operations ...
            elif kind == 'File':
                assetdict['another_prop'] = a.another_property
                # ...
            serialized_assets.append(assetdict)
        return serialized_assets

    @property
    def path(self):
        return self.index.get().path

class File(ndb.Model):
    filename = ndb.StringProperty()
    # other properties....

    @property
    def another_property(self):
        # compute something here
        return computed_property
EDIT2:
@ndb.tasklet
def serialized_assets(self, keys=None):
    assets = yield ndb.get_multi_async(keys)
    raise ndb.Return([asset.serialized for asset in assets])
Is this tasklet code OK?
Since most of the execution time of your functions is spent waiting for RPCs, NDB's async and tasklet support is your best bet. That's described in some detail here. The simplest usage for your requirements is probably the ndb.map function, like this (from the docs):
@ndb.tasklet
def callback(msg):
    acct = yield msg.author.get_async()
    raise ndb.Return('On %s, %s wrote:\n%s' % (msg.when, acct.nick(), msg.body))

qry = Message.query().order(-Message.when)
outputs = qry.map(callback, limit=20)
for output in outputs:
    print output
The callback function is called for each entity returned by the query, and it can do whatever operations it needs (using _async methods and yield to do them asynchronously), returning the result when it's done. Because the callback is a tasklet, and uses yield to make the asynchronous calls, NDB can run multiple instances of it in parallel, and even batch up some operations.
The Pipeline API is overkill for what you want to do. Is there any reason why you couldn't just use a taskqueue?
Use the initial request to get all of the entity keys, and then enqueue a task for each key, having the task execute the 2 functions per entity. The concurrency will then be governed by the number of concurrent requests configured for that taskqueue.
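A minimal sketch of that fan-out, assuming a push queue and a worker handler; the URL, queue name, and function name are placeholders, not from the original answer:

# Hypothetical sketch: enqueue one push-queue task per entity key.
# '/tasks/process_asset' and the 'assets' queue are assumed names.
from google.appengine.api import taskqueue

def enqueue_all(keys):
    for key in keys:
        taskqueue.add(url='/tasks/process_asset',
                      params={'key': key.urlsafe()},
                      queue_name='assets')

The worker handler would rebuild the key with ndb.Key(urlsafe=...), fetch the entity, and run the two methods; max_concurrent_requests in queue.yaml then caps the parallelism.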