Can I query a NEAR contract for its method signatures?

Is there a way to query what methods are offered by a given NEAR contract? (So that one could do autodiscovery of some standard interface, for instance.) Or do you have to just know the method signatures already before you can interact with a contract?

No, not yet. Currently all contract methods have the same signature, () -> (): no arguments and nothing returned. Each method has a wrapper function that deserializes the input bytes from the host, calls the method, then serializes the return value and passes the bytes back to the host.
This is done with input and value_return; see input.
There are plans to include the actual signatures of the methods in the binary in a special section, which would solve this issue.

NEP-351 was recently approved; it provides a mechanism for contracts to expose all the standards they implement. However, it is up to contract developers to follow this NEP. Once it is integrated into the main SDK, I presume most will.
Alternatively, there is a proposal to create a global registry as a smart contract that provides this information.

Currently, there is not.
You will need to know what contract methods are available in order to interact with a smart contract deployed on NEAR. Hopefully, the ability to query available methods will be added in the near future.

I suppose you could just include a method in your own contracts that returns the other method signatures in some useful format: JSON or whatever.
You would have to make sure it stays current, perhaps by writing some unit tests that use this method to exercise all the others.
This interface (method and unit tests) could be standardized as an NEP in the short term, until our interfaces become discoverable. Any contract that adheres to this NEP would have to include this "tested reflection method" or "documentation method" or whatever it would be called.
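For what it's worth, here is a rough sketch of how a client could call such a "documentation method" once a contract exposes one, using NEAR's standard JSON-RPC query endpoint. The contract account and the get_method_signatures view method below are hypothetical; only the RPC request shape is real, and it assumes the contract returns JSON.

import base64
import json
import requests

RPC_URL = "https://rpc.testnet.near.org"  # public testnet RPC endpoint

def call_view_method(account_id, method_name, args=None):
    # NEAR's JSON-RPC "query" with request_type "call_function" runs a view method.
    payload = {
        "jsonrpc": "2.0",
        "id": "dontcare",
        "method": "query",
        "params": {
            "request_type": "call_function",
            "finality": "final",
            "account_id": account_id,
            "method_name": method_name,
            "args_base64": base64.b64encode(json.dumps(args or {}).encode()).decode(),
        },
    }
    result = requests.post(RPC_URL, json=payload).json()["result"]
    # The view call returns raw bytes; here we assume the contract returns JSON.
    return json.loads(bytes(result["result"]))

# Hypothetical self-describing contract and method name:
print(call_view_method("docs.example.testnet", "get_method_signatures"))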

Related

How to map array type groups parameter to LTI1p0

I have an LTI Tool Consumer (LMS) that is using LTI 1p0, which will send a request to a service that is currently not using LTI. Therefore I'm writing a NodeJS implementation of a wrapper which will:
receive the request from the LTI Tool Consumer,
map it to match the service's API,
send it to the service,
then parse the response from the service into an LTI Tool Provider format,
and finally send it back to the Tool Consumer.
The service has a required field called groups which expects an array of group objects like so:
group: [{
    id: <string>,   // id of the group
    name: <string>, // name of the group
    role: <string>  // role of the user
}]
This parameter doesn't exactly exist in the LTI1p0 implementation guide. So I want to know how to best send array-type (groups in my case) information via LTI.
When looking through the docs, I've come across a few potential parameters I could use:
1. Context parameters
The guide mentions that a 'type of context would be "group"', and there are parameters for context_id, context_type, and context_title. The issue is that this only allows one group per request/user.
2. Custom parameters
I could make a custom parameter and call it custom_groups, which seems simple, but I'm not sure how the value should look for arrays. Just a stringified JSON array, like this?
custom_groups = '[{"id":123,"name":"Group Name","role":"Instructor"},{"id":124,"name":"Group Name 2","role":"Creator"}]'
For the roles parameter, one can send a list of comma-separated strings (i.e. roles=Instructor,Creator,...), but that wouldn't suffice in my case.
I'm still new to LTI, so my apologies if this is blatantly obvious.
Note: Both LTI Consumer (LMS) and the service are external, i.e. I can't change them and only provide the wrapper. I can communicate with the Tool Consumer about possible custom parameters but again not sure which format to request.
Additionally, the service might implement LTI towards the end of the year, so ideally the wrapper could then be removed and the Tool Consumer wouldn't have to change much.
Any help much appreciated!
Groups are notably absent from the LTI spec, so any answer will be part opinion.
I would agree with you that using the context parameter fields, with one LTI launch per group, would be the most correct way as far as the spec goes.
However, I have not seen an LMS that allows LTI launches from a group context, so you may not be able to use the service without a wrapper even if it supported LTI natively.
Alternatively:
LTI 1.0 supports custom parameters, and since you are extending the information already sent (context and roles) you could use the ext_ prefix.
Reference: https://www.imsglobal.org/specs/ltiv1p0/implementation-guide
If a profile wants to extend these fields, they should prefix all fields not described herein with "ext_".
So you could send a custom parameter with that prefix, assuming your LMS lets you send a useful custom parameter. LTI is designed to use basic POST requests, not multidimensional JSON objects, but a stringified JSON value is perfectly valid with an appropriate key.
i.e.:
ext_custom_groups = '[{"id":123,"name":"Group Name","role":"Instructor"},{"id":124,"name":"Group Name 2","role":"Creator"}]'
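For illustration, the value is least error-prone if you serialize a real list rather than hand-building the string. A small Python sketch with made-up group data (the wrapper in the question is NodeJS, where JSON.stringify plays the same role):

import json

# Hypothetical group data gathered by the wrapper from the service's API.
groups = [
    {"id": "123", "name": "Group Name", "role": "Instructor"},
    {"id": "124", "name": "Group Name 2", "role": "Creator"},
]

# A valid, parseable value for the custom launch parameter:
ext_custom_groups = json.dumps(groups)
# The receiving side just does json.loads(ext_custom_groups) to get the list back.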

In C, should Unit Test be written for Header file or C file

I have a header file which contains the declaration of a struct and some methods, and a C file which defines (implements) the struct and the methods. Now, while writing unit test cases, I need to check whether some struct variables (which do not have getter methods) are modified. Since the struct's definition is contained in the C file, should the unit test cases be based on the header file or the C file?
Test the interface of the component available to other parts of the system, the header in your case, and not the implementation details of the interface.
Unit tests make assertions about the behavior of a component but shouldn't depend on how that behavior is implemented. The tests describe what the component does, not how it is done. If you change your implementation but preserve the same behavior your tests should still pass.
If instead your tests depend on a specific implementation they will be brittle. Changing the implementation will require changing the tests. Not only is this extra work but it voids the reassurance the tests should offer. If you could run the existing test against the new implementation you would have some confidence that the new implementation has not changed behavior other components might depend on. Once you have to change the test to be able to run it against a new implementation you must carefully consider if you have changed any of the test's expectations in the process.
It may be important to test behavior not accessible using the public interface of this component. Consider that a good hint that this interface may not be well designed. TDD encourages a "test first" approach and this is one of the reasons why. If you start by defining the assertions you want to make about a component's behavior you must then design an interface which exposes that behavior. That is what makes the process "test driven".
If you must write tests after the component they are testing then at least try to use this opportunity to re-evaluate the design and learn from your test. Either this behavior is an implementation detail and not worth testing or else the interface should be updated to expose it (this may be more complicated than just making it public as it should also be safe and reasonable for other components of the system to access this now public attribute).
I would suggest that all testing, other than ephemeral stuff performed by developers during the development process, should be testing the API rather than the internals. That is, after all, the specification you're required to meet.
So, if you maintain stuff that's not visible to the outside world, that itself is not an aspect that needs direct testing. Instead, assuming something wrong there would affect the outside world, it's that effect you should test.
Encapsulation means being free to totally change the underlying implementation without the outside world being affected and you'll find that, if you code your tests based on the external view, no changes will be needed if you make such a change.
So, for example, if your unit is an address book, the things you should be testing are along the lines of:
can we insert an entry?
can we delete it?
can we retrieve it?
can we insert a hundred entries and query them in random order?
It's not things like:
do the structure data fields get created properly?
is the linked list internally consistent?
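To make the distinction concrete, here is a minimal behaviour-level test sketch. It is shown in Python purely for brevity (the question is about C, where the same structure applies with any C unit-test framework, and the AddressBook class here is a made-up stand-in): only the public insert/retrieve/delete operations are exercised, and the internal data structures are never inspected.

import unittest

class AddressBook:
    # Stand-in for the component under test; only its public interface matters here.
    def __init__(self):
        self._entries = {}
    def insert(self, name, address):
        self._entries[name] = address
    def retrieve(self, name):
        return self._entries.get(name)
    def delete(self, name):
        self._entries.pop(name, None)

class AddressBookBehaviourTest(unittest.TestCase):
    def test_insert_then_retrieve(self):
        book = AddressBook()
        book.insert("Ada", "12 Analytical Way")
        self.assertEqual(book.retrieve("Ada"), "12 Analytical Way")

    def test_delete_removes_entry(self):
        book = AddressBook()
        book.insert("Ada", "12 Analytical Way")
        book.delete("Ada")
        self.assertIsNone(book.retrieve("Ada"))

if __name__ == "__main__":
    unittest.main()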

Global property for DB access rather than passing DB around everywhere? Advice anyone?

Globals are evil, right? At least everything I read says so, because something might alter the state of the global at any point.
However, I have a DB object that's a bit of a tramp in regard to class parameters. The property below is an instance of a wrapper class that automatically works with MS Access or SQL, hence why it's not EF or some other ORM.
Public Property db As New DBI.DBI(DBI.DBI.modeenum.access, String.Format("Provider=Microsoft.ACE.OLEDB.12.0;Data Source={0} ;Persist Security Info=True;Jet OLEDB:Database Password=""lkjhgfds8928""", GetRpcd("c:\cms")))
The code itself does have PostSharp for exception handling, so I'm thinking that I can conditionally handle OleDb errors by logging them and re-initialising the DB if it is null.
Up till now, the solution has been to continually pass the db around as a parameter to every single class that needs it. Most of the data classes have a shared ObservableCollection that is built from structures that individually implement INotifyPropertyChanged. One of these is built asynchronously. The collection property checks whether it's empty before firing off the private async buildCollection sub.
Given that we don't use dependency injection (yet), as I need to learn it: is the global property all that bad? Db is needed everywhere that data is pulled in or saved. The only places I don't need it at all are the View and its code-behind.
It's not a customer-facing project, but it does need to be solid.
Any advice gratefully received!
Passing the DB connection as a parameter into your classes IS using dependency injection, perhaps you just didn't recognize it as such. Hard coding the connection string in the callers is still code that is not free of dependencies, but at least your database accessors themselves are free of the dependency upon a global connection.
Globals aren't just evil because they change without notice - that's just one effect you see resulting from the bad design choice. They're evil because a design using them is brittle. Code that depends upon globals requires invisible stuff to be set correctly before calling it, and that leads to inter-dependencies between unrelated code. The invisible stuff becomes critically important stuff. Reading just the interface of a module that internally uses globals, how would I know that I have to call the SetupGlobalThing() method before calling it? What happens if I call IncrementGlobalThing() and DecrementGlobalThing() and MultiplyGlobalThing() in varying orders, depending on the function the user selects?
Instead, prefer stateless methods where you pass in all the stuff to be changed and used: IncrementThing(Integer thing) doesn't rely on hidden setup steps. It clearly does one thing: it increments the thing passed in.
It may help to think about it from a unit testing viewpoint. If you were to write a unit test to prove a specific module of code works, would you need to pass in a real database connection (hard*), or would you be able to pass in a fake database reference that meets your testing needs easily?
The best way to test your logic is to unit test it. The best way to test your class interfaces and method structure is to write unit tests that call them. If the class is hard to test, it's likely due to dependencies upon external things (globals, singletons, databases, inappropriate member variables, etc.)
The reason I called using a real database "hard" is that a unit test needs to be easy and fast to run. It shouldn't rely on slow or breakable or complex external things. Think about unit testing your software on the bus, with no network connection. Think about how much work it is to create a dummy database: you have to add users, you have to have the right version of schema in it, it has to be installed, it has to be filled with the right kind of testing data, you need network connectivity to it, all those things can make your testing unreliable. Instead, in a unit test you pass in a mock database, which simply returns values that exercise your code being tested.
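As a small illustration of the difference, here is a Python sketch of the idea (not the poster's VB.NET code; the class and method names are made up): the class receives its database as a constructor parameter, so a test can hand it a trivial fake instead of a real connection.

class CustomerRepository:
    def __init__(self, db):
        # The database is injected, not reached for via a global.
        self._db = db
    def names(self):
        return [row[0] for row in self._db.query("SELECT name FROM customers")]

class FakeDb:
    # Test double: returns canned rows, no network or file access needed.
    def query(self, sql):
        return [("Alice",), ("Bob",)]

def test_names():
    repo = CustomerRepository(FakeDb())
    assert repo.names() == ["Alice", "Bob"]

test_names()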

Django Models / SQLAlchemy are bloated! Any truly Pythonic DB models out there?

"Make things as simple as possible, but no simpler."
Can we find the solution/s that fix the Python database world?
Update: A 'lustdb' prototype has been written by Alex Martelli - if you know any somewhat lightweight, high-level database libraries with multiple backends we could wrap in syntax sugar honey, please weigh in!
from someAmazingDB import *
#we imported a smart model class and db object which talk to database adapter/s

class Task(model):
    title = ''
    done = False #native types not a custom object we have to think about!

db.taskList = []
#or
db.taskList = expandableTypeCollection(Task) #not sure what this syntax would be

db['taskList'].append(Task(title='Beat old sql interfaces', done=False))
db.taskList.append(Task('Illustrate different syntax modes', True)) # ok maybe we should just use kwargs
#at this point it should be autosaved to a default db option
#by default we should be able to reload the console and access the default db:

>>> from someAmazingDB import *
>>> print 'Done tasks:'
>>> for task in db.taskList:
...     if task.done:
...         print task.title
'Illustrate different syntax modes'
I'm a fan of Python, webPy and Cherry Py, and KISS in general.
We're talking automatic Python to SQL type translation or NoSQL.
We don't have to totally be SQL compatible! Just a scalable subset or ignore it!
Re:model changes, it's ok to ask the developer when they try to change it or have a set of sensible defaults.
Here is the challenge: The above code should work with very little modification or thinking required. Why must we put up with compromise when we know better?
It's 2010, we should be able to code scalable, simple databases in our sleep.
If you think this is important, please upvote!
What you request cannot be done in Python 2.whatever, for a very specific reason. You want to write:
class Task(model):
    title = ''
    isDone = False
In Python 2.anything, whatever model may possibly be, this cannot ever allow you to predict any "ordering" for the two fields, because the semantics of a class statement are:
execute the body, thus preparing a dict
locate the metaclass and run special methods thereof
Whatever the metaclass may be, step 1 has destroyed any predictability of the fields' order.
Therefore, your desired use of positional parameters, in the snippet:
Task('Illustrate different syntax modes', True)
cannot associate the arguments' values with the model's various fields. (Trying to guess by type association -- hoping no two fields ever have the same type -- would be even more horribly unpythonic than your expressed desire to use db.tasklist and db['tasklist'] indifferently and interchangeably).
One of the backwards-incompatible changes in Python 3 was introduced specifically to deal with situations of this ilk. In Python 3, a custom metaclass can define a __prepare__ function which runs before "step 1" in the above simplified list, and this lets it have more control about the class's body. Specifically, quoting PEP 3115...:
__prepare__ returns a dictionary-like object which is used to store the class member definitions during evaluation of the class body. In other words, the class body is evaluated as a function block (just like it is now), except that the local variables dictionary is replaced by the dictionary returned from __prepare__. This dictionary object can be a regular dictionary or a custom mapping type.
...
An example would be a metaclass that uses information about the ordering of member declarations to create a C struct. The metaclass would provide a custom dictionary that simply keeps a record of the order of insertions.
You don't want to "create a C struct" as in this example, but the order of fields is crucial (to allow the use of positional parameters that you want) and so the custom metaclass (obtained through base model) would have a __prepare__ classmethod returning an ordered dictionary. This removes the specific issue, but, of course, only if you're willing to switch all of your code using this "magic ORM" to Python 3. Would you be?
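A minimal Python 3 sketch of that mechanism, just the field-ordering part rather than a full ORM (the model base class and ModelMeta names here are made up):

from collections import OrderedDict

class ModelMeta(type):
    @classmethod
    def __prepare__(mcls, name, bases):
        # Runs before the class body executes; the body's assignments land in this
        # mapping, so their order is preserved.
        return OrderedDict()

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, dict(namespace))
        # Remember the declared fields, in declaration order.
        cls._fields = [k for k in namespace if not k.startswith('_')]
        return cls

class model(metaclass=ModelMeta):
    pass

class Task(model):
    title = ''
    done = False

print(Task._fields)  # ['title', 'done'] -- order is known, so positional args become possible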
Once that's settled, the issue is, what database operations do you want to perform, and how. Your example, of course, does not clarify this at all. Is the taskList attribute name special, or should any other attribute assigned to the db object be "autosaved" (by name and, what other characteristic[s]?) and "autoretrieved" upon use? Are there to be ways to remove entities, alter them, locate them (otherwise than by having once been listed in the same attribute of the db object)? How does your sample code know what DB service to use and how to authenticate to it (e.g. by userid and password) if it requires authentication?
The specific tasks you list would not be hard to implement (e.g. on top of Google App Engine's storage service, which does not require authentication nor specification of "what DB service to use"). model's metaclass would introspect the class's fields and generate a GAE Model for the class, the db object would use __setattr__ to set an atexit trigger for storing the final value of an attribute (as an entity in a different kind of Model of course), and __getattr__ to fetch that attribute's info back from storage. Of course without some extra database functionality this all would be pretty useless;-).
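Here is a tiny sketch of that mechanism, just the attribute-trapping and persist-at-exit part (pickling to a file under /tmp as an assumption, rather than GAE's storage service; the _Db class and file name are made up):

import atexit, os, pickle

class _Db(object):
    _path = os.path.join('/tmp', 'mini_lustdb.pickle')  # assumes a writable /tmp

    def __init__(self):
        self.__dict__['_data'] = {}
        if os.path.exists(self._path):
            with open(self._path, 'rb') as f:
                self.__dict__['_data'] = pickle.load(f)
        atexit.register(self._save)

    def __setattr__(self, name, value):
        self._data[name] = value          # trap attribute assignment

    def __getattr__(self, name):
        try:
            return self._data[name]       # fetch it back on attribute access
        except KeyError:
            raise AttributeError(name)

    def _save(self):
        # Persist everything assigned to db when the interpreter exits.
        with open(self._path, 'wb') as f:
            pickle.dump(self._data, f)

db = _Db()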
Edit: so I did a little prototype (Python 2.6, and based on sqlite) and put it up on http://www.aleax.it/lustdb.zip -- it's a 3K zipfile including 225-lines lustdb.py (too long to post here) and two small test files roughly equivalent to the OP's originals: test0.py is...:
from lustdb import *

class Task(Model):
    title = ''
    done = False

db.taskList = []
db.taskList.append(Task(title='Beat old sql interfaces', done=False))
db.taskList.append(Task(title='Illustrate different syntax modes', done=True))
and test1.py is...:
from lustdb import *

print 'Done tasks:'
for task in db.taskList:
    if task.done:
        print task
Running test0.py (on a machine with a writable /tmp directory -- i.e., any Unix-y OS, or, on Windows, one on which a mkdir \tmp has been run at any previous time;-) has no output; after that, running test1.py outputs:
Done tasks:
Task(done=True, title=u'Illustrate different syntax modes')
Note that these are vastly less "crazily magical" than the OP's examples, in many ways, such as...:
1. no (expletive delete) redundancy whereby `db.taskList` is a synonym of `db['taskList']`, only the sensible former syntax (attribute-access) is supported
2. no mysterious (and totally crazy) way whereby a `done` attribute magically becomes `isDone` instead midway through the code
3. no mysterious (and utterly batty) way whereby a `print task` arbitrarily (or magically?) picks and prints just one of the attributes of the task
4. no weird gyrations and incantations to allow positional-attributes in lieu of named ones (this one the OP agreed to)
The prototype of course (as prototypes will;-) leaves a lot to be desired in many respects (clarity, documentation, unit tests, optimization, error checking and diagnosis, portability among different back-ends, and especially DB features beyond those implied in the question). The missing DB features are legion (for example, the OP's original examples give no way to identify a "primary key" for a model, or any other kinds of uniqueness constraints, so duplicates can abound; and it only gets worse from there;-). Nevertheless, for 225 lines (190 net of empty lines, comments and docstrings;-), it's not too bad in my biased opinion.
The proper way to continue playing with this project would of course be to initiate a new lustdb open source project on the hosting part of code.google.com (or any other good open source hosting site with issue tracker, wiki, code reviews support, online browsing, DVCS support, etc, etc) - I'd do it myself but I'm close to the limit in terms of number of open source projects I can initiate on code.google.com and don't want to "burn" the last one or two in this way;-).
BTW, the lustdb name for the module is a play on words with the OP's initials (first two letters each of first and last names), in the tradition of awk and friends -- I think it sounds nice (and most other obvious names such as simpledb and dumbdb are taken;-).
I think you should try ZODB. It is an object-oriented database designed for storing Python objects. Its API is quite close to the example you provided in your question; just take a look at the tutorial.
What about using Elixir?
Forget ORM! I like vanilla SQL. The Python wrappers like psycopg2 for PostgreSQL do automatic type conversion, offer pretty good protection against SQL injection, and are nice and simple.
import psycopg2

conn = psycopg2.connect("dbname=mydb")  # connection details are illustrative
cursor = conn.cursor()
sql = "SELECT * FROM table WHERE id=%s"
data = (5,)
cursor.execute(sql, data)  # the driver handles quoting/escaping of the parameter
The more I think on't the more the Smalltalk model of operation seems more relevant. Indeed the OP may not have reached far enough by using the term "database" to describe a thing which should have no need for naming.
A running Python interpreter has a pile of objects that live in memory. Their inter-relationships can be arbitrarily complex, but namespaces and the "tags" that objects are bound to are very flexible. And since pickle can explicitly serialize arbitrary structures for persistence, it doesn't seem that much of a reach to consider each Python interpreter as living in that object space. Why should that object space evaporate when the interpreter closes? Semantically, this could be viewed as an extension of the anydbm tied dictionaries. And since almost everything in Python is dictionary-like, the mechanism is almost already there.
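In fact the standard library already gets surprisingly close to this with shelve, which is exactly the anydbm "tied dictionary" idea with pickling layered on top; a quick sketch:

import shelve

# First session: a dict-like store backed by a dbm file on disk.
store = shelve.open('my_stuff')
store['taskList'] = [{'title': 'Illustrate different syntax modes', 'done': True}]
store.close()

# Sometime next week, in a fresh interpreter: the objects are still there.
store = shelve.open('my_stuff')
print(store['taskList'][0]['title'])
store.close()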
I think this may be the generic model that Alex Martelli was proposing above; it might be nice to say something like:
class Book:
    def __init__(self, attributes):
        self.attributes = attributes
    def __getattr__(....): pass

$ python
>>> import book
>>> my_stuff.library = {'garp':
        Book({'author': 'John Irving', 'title': 'The World According to Garp',
              'isbn': '0-525-23770-4', 'location': 'kitchen table',
              'bookmark': 'page 127'}),
        ...
    }
>>> exit
[sometime next week]
$ python
>>> import my_stuff
>>> print my_stuff.library['garp'].location
'kitchen table'
# or even
>>> for book in my_stuff.library where book.location.contains('kitchen'):
...     print book.title
I don't know that you'd call the resultant language Python, but it seems like it is not that hard to implement and makes backing store equivalent to active store.
There is a natural tension between the inherent structure imposed, and sometimes desired, by RDBMSes and the rather free-form navel-gazing put forth here, but NoSQLy databases are already approaching the content-addressable memory model and probably better approximate how our minds keep track of things. Contrariwise, you wouldn't want to keep all the corporate purchase orders in such a storage system, but perhaps you might.
How about you give an example of how "simple" you want your "dealing with the database" to be, and I then tell you all the stuff that is needed for that "simplicity" to work?
(And of which it will still be YOU that will be required to give the information/config to the database interface engine, somewhere, somehow.)
To name but one example:
If your database management engine is some external machine with which you/your app interfaces over IP or some such, there is no way around the fact that the IP identity of where that database engine is running will have to be provided by your app's database interface client, somewhere, somehow, regardless of whether that gets explicitly exposed in the code or not.
I've been busy; here it is, released under LGPL:
http://github.com/lukestanley/lustdb
It uses JSON as its backend at the moment.
This is not the same codebase Alex Martelli did. I wanted to make the code more readable and reusable with different backends and such.
Elsewhere I have been working on object-oriented HTML elements accessible in Python in similar ways, AND a library for making web.py more minimalist.
I'm thinking of ways of using all 3 elements together with automatic MVC prototype construction or smart mapping.
While old-fashioned text-based template web programming will be around for a while still, because of legacy systems and because it doesn't require any particular library or implementation, I feel soon we'll have a lot more efficient ways of building robust, prototype-friendly web apps.
Please see the mailing list for those interested.
If you like CherryPy, you might like the complementary ORMs I wrote: GeniuSQL (which follows a Table Data gateway model) and Dejavu (which is a complete Data Mapper).
There's far too much in this question and all its subcomments to address completely, but one thing I wanted to point out was that GeniuSQL and Dejavu have a very robust system for mapping native Python types to the types that your particular backend is using. There are very sane defaults, which can be overridden as needed, and even extended if you make a new backend or use types from a backend that isn't yet supported. See http://www.aminus.net/geniusql/chrome/common/doc/trunk/advanced.html#custom for more discussion on that.

Can a Mock framework do this for me?

I am a bit confused.
From the wiki:
"This means that a true mock... performing tests on the data passed into the method calls as arguments."
I have never used unit testing or a mocking framework. I thought unit tests are for automated tests, so what are mock tests for?
What I want is an object replacing the database I might use later, while I still don't know which database or ORM tool I will use.
If I write my program with mocks, could I easily replace them with POCOs later to get Entity Framework, for example, working pretty fast?
Edit: I do not want to use unit testing, but using mocks as a full replacement for entities + database would be nice.
Yes, I think you are a bit confused.
Mock frameworks are for creating "Mock" objects which basically fake part of the functionality of your real objects so you can pass them to methods during tests, without having to go to the trouble of creating the real object for testing.
Let's run through a quick example.
Say you have a 'Save()' method that takes a 'Doc' object and returns a 'boolean' success flag:
public bool Save(Doc docToSave) {...}
Now if you want to write a unit test for this method, you are going to have to first create a document object and populate it with appropriate data before you can test the 'Save()' method. This requires more work than you really want to do.
Instead, it is possible to use a Mocking framework to create a mock 'Doc' object for you.
Syntax varies between frameworks, but in pseudo-code you would write something like this:
CreateMock of type Doc
SetReturnValue for method Doc.data = "some test data"
The mocking framework will create a dummy mock object of type Doc that correctly returns "some test data" when its '.data' property is called.
You can then use this dummy object to test your save method:
public void MyTest()
{
    ...
    bool isSuccess = someClass.Save(dummyDoc);
    ...
}
The mocking framework ensures that when your 'Save()' method accesses the properties on the dummyDoc object, the correct data is returned, and the save can happen naturally.
This is a slightly contrived example, and in such a simple case it would probably be just as easy to create a real Doc object, but often in a complex bit of software it might be much harder to create the object because it has dependencies on other things, or it has requirements for other things to be created first. Mocking removes some of that extra overhead and allows you to test just the specific method that you are trying to test, without worrying about the intricacies of the Doc class as well.
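The pseudo-code above is .NET-flavoured; purely as an illustration of what a mocking framework does for you, here is the same idea sketched with Python's unittest.mock (the Doc class and save function are made up for the example):

from unittest import mock

class Doc(object):
    # The real class might be expensive to construct or have awkward dependencies.
    def __init__(self, data):
        self.data = data

def save(doc):
    # Stand-in for the answer's Save() method: "succeeds" if the doc has data.
    return bool(doc.data)

# Create a mock Doc and tell it what to return for .data
dummy_doc = mock.Mock()
dummy_doc.data = "some test data"

assert save(dummy_doc) is True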
Mock tests are simply unit tests using mocked objects as opposed to real ones. Mocked objects are not really used as part of actual production code.
If you want something that will take the place of your database classes so you can change your mind later, you need to write interfaces or abstract classes to provide the methods you require to match your save/load semantics, then you can fill out several full implementations depending on what storage types you choose.
I think what you're looking for is the Repository Pattern. That link is for NHibernate, but the general pattern should work for Entity Framework as well. Searching for that, I found Implementing Repository Pattern With Entity Framework.
This abstracts the details of the actual O/RM behind an interface (or set of interfaces).
(I'm no expert on repositories, so please post better explanations/links if anyone has them.)
You could then use a mocking (isolation) framework or hand-code fakes/stubs during initial development prior to deciding on an O/RM.
Unit testing is where you'll realize the full benefits. You can then test classes that depend on repository interfaces by supplying mock or stub repositories. You won't need to set up an actual database for these tests, and they will execute very quickly. Tests pay for themselves over and over, and the quality of your code will increase.
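A rough sketch of the shape of that pattern, in Python just to keep it short (in C#/EF it would be an interface, an EF-backed implementation, and a test double); the repository names here are made up:

import abc

class CustomerRepository(abc.ABC):
    # The abstraction the rest of the code depends on.
    @abc.abstractmethod
    def get(self, customer_id): ...
    @abc.abstractmethod
    def add(self, customer): ...

class InMemoryCustomerRepository(CustomerRepository):
    # Stand-in used until the real O/RM-backed repository exists,
    # and reused as a fake in unit tests.
    def __init__(self):
        self._items = {}
    def get(self, customer_id):
        return self._items.get(customer_id)
    def add(self, customer):
        self._items[customer["id"]] = customer

repo = InMemoryCustomerRepository()
repo.add({"id": 1, "name": "Alice"})
assert repo.get(1)["name"] == "Alice"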
