Thor - how to use lambda functions for expressing default values on method_options

Suppose a method_option of the form:
method_option :host_domain, aliases: '-h', default: 'some value'
Can a non-trivial lambda function be used in place of 'some value' to derive the default value at run time?
Something trivial like:
method_option :host_domain, aliases: '-h', default: -> { 'some value'.upcase}.call
works fine.
However, something like:
method_option :host_domain, aliases: '-h', default: -> { Model.method}.call
does not, because it cannot find Model (which is defined in app/lib/models and has already been loaded). It also seems that at the point of the method_option statement, the Thor class has not been initialized. I guess thereby hangs a tale.
Further, in the method def new, which follows the method_option statements, I can reference Model without issue. I also note that the Thor class has been initialized prior to entry of the new method (as you would expect).
Would appreciate any guidance on how to navigate the Thor hierarchy to sort out the problem.

On further exploration, I found that delaying the execution of the lambda function solved the problem.
That is, in the method new, call the lambda function as follows:
Change:
method_option :host_domain, type: :string, default: -> {Model.method}.call
to:
method_option :host_domain, type: :string, default: -> {Model.method}
and in:
def new(url)
  ...
  ...
  options[:host_domain] = options[:host_domain].call
end

Related

Drupal-7 how to get hook_field_[formatter_]prepare_view() invoked without overwriting existing formatter

From my module, I'm looking for a way to change text fields' values during the rendering process, but WITHOUT creating a new formatter, and BEFORE the currently assigned formatter runs.
In other words, I want my changes applied to every text field, as a generic preparatory step, regardless of which formatter runs afterwards.
For this to work:
I first considered using hook_field_formatter_prepare_view().
To get it invoked, I wanted to use hook_field_formatter_info_alter()
to add my module name to each involved formatter found here. But it
appears that the "module" index only accepts a single module name,
not an array.
BTW I'm quite surprised by this limitation: it seems to me it would make sense to allow a sequence of formatters, just as a sequence of filters is allowed!
Then I considered using hook_field_prepare_view(), which seemed to be the best candidate since the doc says it runs before the formatter's own hook_field_formatter_prepare_view(). But that doesn't work either: this hook is invoked only for fields created by the involved module (this issue has been discussed here).
Any idea? Thanks in advance.
I actually found a fairly clean way to do what I was looking for.
The method is quite invasive but works fine and may be re-used for different cases.
1. To be as clear as possible, first I rephrase my question in terms of a general use case:
In the rendering process, how can a module be allowed to change the value of one or more fields (a given field ID, a given field type, ...) before the formatter (if any) does its own job?
2. The problem in accomplishing this:
We can't make the module define a new formatter, because only one may be assigned to a given field at a time.
3. The strategy which led me to the desired result:
use hook_field_formatter_info_alter() to run through existing formatters and "graft" my module inside of those where I wish to intervene
(see detail under 4 below)
use hook_field_formatter_prepare_view() to:
(a) execute the required changes in field values
(the job my module is intended to do: here it may be applied or not, to all fields of a given type or to precisely identified fields and so on, depending on the detailed needs)
(b) again run through the formatters list and, for each involved one, fire its own hook_field_formatter_prepare_view() if it exists
(see detail under 5 below)
do the same job as in (b) above, successively for each of the other possibly involved hooks of any formatter:
hook_field_formatter_view()
hook_field_formatter_settings_form()
hook_field_formatter_settings_summary()
4. Detail about how to graft my module into the process:
With hook_field_formatter_info_alter(&$info) we face the following $info structure:
$info = array(
  'formatter_machine_name' => array(
    'label' => 'Human readable formatter description',
    'field types' => array(
      0 => 'a_field_type',
      1 => 'another_field_type',
      # ...
    ),
    'settings' => array(
      'option A' => 'option A value',
      'option B' => 'option B value',
      # ...
    ),
    'module' => 'formatter_module_name',
  ),
  'another_formatter_machine_name' => array(
    # ...
  ),
  # ...
);
We can easily run through the formatters list and look at the "field types" index to select which ones are relevant to our needs.
Then for each involved one, we can:
substitute our own module name for the formatter's module name in the "module" index
add a new sub-index under "settings" (say "mymodule graft") to record the original formatter module name
So our hook_field_formatter_info_alter() will be something like:
function mymodule_field_formatter_info_alter(&$info) {
  if($info) {
    foreach($info as $name=>$formatter) {
      if(
        empty($formatter['settings']['mymodule graft']) # skip if already grafted
        and
        array_intersect($formatter['field types'],
          array('text','text_long','text_with_summary')) # here it is for text fields only
      ) {
        # substitute mymodule for the original module:
        $info[$name]['settings']['mymodule graft']=$formatter['module'];
        $info[$name]['module']='mymodule';
      }
    }
  }
}
Once the class registry has been flushed, all involved fields have their formatting phase redirected to our own module.
NOTE: installing a new formatter now requires flushing the class registry again, so that our module takes it over as well.
5. Detail about how to make the original formatters work after us:
As stated above, it is now our own module which is notified when a field has to be formatted, rather than the originally assigned formatter.
So we must react in our hook_field_formatter_prepare_view(), which should look like:
function mymodule_field_formatter_prepare_view(
  $entity_type,$entities,$field,$instances,$langcode,&$items,$displays
) {
  # here we do our own job with field values:
  if($items) {
    foreach($items as $nid=>$node_data) {
      # ...
    }
  }
  # then we give original formatter a chance to execute its own hook:
  foreach($displays as $display) {
    $hook=
      $display['settings']['mymodule graft'].'_field_formatter_prepare_view';
    if(function_exists($hook)) {
      $hook(
        $entity_type,$entities,$field,$instances,$langcode,$items,$displays
      );
    }
  }
}
Finally, we must also give the other formatter hooks a chance to execute.
For each one, the wrapper should look like this (replace HOOK and ARGS with the right name and arguments for each hook; the grafted module name has to be read from the formatter settings available among ARGS, e.g. from $display['settings'] in the case of hook_field_formatter_view()):
function mymodule_field_formatter_HOOK(ARGS) {
  $hook=$display['settings']['mymodule graft'].'_field_formatter_HOOK';
  if(function_exists($hook)) {
    return $hook(ARGS);
  }
}
Hope this helps...

The best way of implementing objects factory in AngularJS

I'm searching for the best way to create objects in AngularJS. The goal is to use them like this: model = new Model 'wowwowwow'
So I started with this approach:
app.factory 'Model', ->
  (blah) ->
    Model =
      blah: ''
      foo: []
    Model.blah = blah
    Model
Then I wanted to move the constructor actions into a separate function that would not be accessible from the class instance:
app.factory 'Model', ->
  constructor = (blah) ->
    @blah = blah
    # ...
  (blah) ->
    constructor.call
      blah: ''
      foo: []
Finally I wanted to make use of CoffeeScript's class support and tried to solve my task in a seemingly obvious way:
app.factory 'Model', ->
  class Model
    blah: ''
    foo: []
    constructor: (blah) ->
      @blah = blah
  (blah) ->
    new Model blah
But this last case doesn't work as needed:
var1 = new Model 'test'
var1.foo.push 'blah'
var2 = new Model 'asdasd'
console.log var2.foo
The problem here is that var2, after creation, has the same foo values as var1 (they are in fact linked).
Here is the plunk for this problem.
So the questions are:
What is wrong with my 3rd approach?
How can I change 3rd approach to make it work with coffeescript's OOP features.
Are there any better approaches for my task?
After a discussion started on a Google+ Angular community post, Dean Sofer gave this good explanation of classing and subclassing in Angular: https://gist.github.com/ProLoser/5645860
For your problem, Model should be defined as:
app.factory 'Model', ->
  class Model
    constructor: (blah) ->
      @blah = blah
      @foo = []
  (blah) ->
    new Model blah
Answering comment #4:
@SET This happens because you are assigning the array to the prototype, somewhat similar to using a static field on a Java class.
Each time you push/pop to that array, you're using the shared prototype array instance.
When you do the same with strings, you're not manipulating a shared instance because JavaScript strings are immutable. What you are doing is assigning new strings to your object instances (not to the prototype anymore).
Here's what's really happening behind the curtain:
Your declarations
foo: []
blah: ''
convert to:
Model.prototype.blah = '';
Model.prototype.foo = [];
When you manipulate those members, your generated code is (more or less):
a.blah = 'aaa'; // New string on the a object, not on the Model prototype
b.blah = 'bbb'; // New string on the b object, not on the Model prototype
a.foo.push('aaa'); // Push to the foo shared array
b.foo.push('bbb'); // Push to the foo shared array
You can also get what you're trying to do by doing:
a.foo = ['aaa']; // a.foo is a new independent array
b.foo = ['bbb']; // b.foo is a new independent array
But these foos just shadow the prototype one, which is still there.
Anyway, maybe you should look for more information on JavaScript's prototypal inheritance.
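For reference, here is a minimal plain-JavaScript sketch of the same behaviour, outside Angular and CoffeeScript (the Model name is only illustrative):
function Model(blah) { this.blah = blah; }
Model.prototype.blah = '';
Model.prototype.foo = []; // one array, shared through the prototype
var a = new Model('aaa');
var b = new Model('bbb');
a.foo.push('x');
console.log(b.foo);          // ['x'] -- both instances see the same array
console.log(a.blah, b.blah); // 'aaa' 'bbb' -- own properties set in the constructor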

Typed literal objects in TypeScript

I have a TS definition file for Ext JS containing the function store.add(a : any) (it has many overloads, so I guess this is to simplify the d.ts file). I want to pass it a literal object which implements a Person interface:
interface Person
{
  name: string;
  age: number;
}
store.add(<Person>{ name: "Sam" });
This gives me IntelliSense but unfortunately it just coerces the literal into a Person, without detecting the missing field. This works as I want:
var p : Person = { name: "Sam" }; // detects a missing field
store.add(p);
But is there a way to do this without a separate variable?
I realise this would be solved with 'fixed' definition files, but I think many JavaScript libraries have too many overloads to allow this. I almost need a way to dynamically overload the function definition! Would generics help here?
Yes, generics seem to be the answer. In the definition file, changing:
add?( model:any ): Ext.data.IModel[];
to
add?<T>( model:T ): Ext.data.IModel[];
Allows you to call
store.add<Person>({ name: "sam" });
And it correctly shows an error!

relatedModel does not inherit from Backbone.RelationalModel -- using RequireJS and exports for circular dependency

I've run into a problem that may have to do with my lack of understanding of the use of exports / RequireJS for circular dependencies.
I'm getting the error relatedModel does not inherit from Backbone.RelationalModel.
On to the code (in CoffeeScript; I hope that's alright)...
I have two Backbone Models / RequireJS modules, FooModel and BarModel:
FooModel:
define (require) ->
  Backbone = require 'backbone'
  BarModel = require 'models/bar'
  FooModel = Backbone.RelationalModel.extend
    relations: [
      type: Backbone.HasMany
      key: 'theBars'
      relatedModel: BarModel # <-- this is where the BB Relational error is coming from
    ]
  return FooModel
BarModel:
define (require, exports) ->
  Backbone = require 'backbone'
  FooCollection = require 'collections/foos'
  BarModel = Backbone.RelationalModel.extend
    someFunction: ->
      # uses FooCollection
      # I've tried moving the require in here and getting rid of exports
  exports.BarModel = BarModel
  return BarModel # I've tried with and without this line, but CS just returns the last line anyway so removing it is functionally the same
I have also tried:
Extending FooModel from Backbone.Model instead of Backbone.RelationalModel and creating the theBars collection myself (in parse and in a custom function). (BarModel has a HasOne relation to another model, so I need it to still be a RelationalModel.)
Is this possibly a problem with the way exports works? As far as I understand, exports just provides an object to hang module objects on so the modules are accessible elsewhere. Is the error occurring because the BarModel isn't actually a Backbone Model at the point in the FooModel code where I define relations?
Update
I seem to have solved my issue, although I'm unsure how. Can't say I'm pleased about not understanding why it's working, but I sure am pleased that it is working. Also see my comment about _.memoize below in the BarModel code.
(Before I got the code below to work, I created a workaround whereby I created the associated collection in FooModel's parse function and exported BarModel. However, the response of require 'collections/foos' returned an object like so: {FooCollection: <Backbone.Collection Object>}, i.e. it was unexpectedly wrapped in another object.)
Here's the updated code:
FooModel:
define (require) ->
  Backbone = require 'backbone'
  BarModel = require 'models/bar'
  BarCollection = require 'collections/bars'
  FooModel = Backbone.RelationalModel.extend
    relations: [
      type: Backbone.HasMany
      key: 'theBars'
      relatedModel: BarModel
      collectionType: BarCollection
    ]
  return FooModel
BarModel:
define (require, exports) ->
  Backbone = require 'backbone'
  BarModel = Backbone.RelationalModel.extend
    someFunction: -> # this actually used to use _.memoize (sorry for the incomplete code), so maybe it would have tried to run the function argument immediately?
      # uses FooCollection
      FooCollection = require 'collections/foos'
  return BarModel
Your BarModel requires 'collections/foos', correct? And I'm guessing (since there's no code for FooCollection) that the collection requires 'models/foo', because a collection needs to define its model, right? Finally, I can see from the code above that your foo model requires 'models/bar'.
In other words foos needs foo needs bar needs foos needs foo needs bar needs ...
No matter how Require decides to order that, one of those three has to be loaded before the others, which will give you problems like the one you are having.
The solution is to not load one of those three until after all three modules are loaded. For instance, what if you change:
define (require, exports) ->
  Backbone = require 'backbone'
  FooCollection = require 'collections/foos'
  BarModel = Backbone.RelationalModel.extend
    someFunction: ->
      # uses FooCollection
to:
define (require, exports) ->
  Backbone = require 'backbone'
  BarModel = Backbone.RelationalModel.extend
    someFunction: ->
      FooCollection = require 'collections/foos'
      # uses FooCollection
Now BarModel can load, and while someFunction is defined, it is not actually run yet, so it won't require foos and create a circular dependency. Later on, after everything is loaded and some code invokes someFunction, foos will already have had a chance to load, and the require should work.
Now I say should work because of your comment:
# I've tried moving the require in here and getting rid of exports
Again, I have to guess since I can't see your code, but I'd imagine that what happened is that nothing else depended on foos, so it never got loaded. In order for the require of foos to work synchronously inside someFunction, the foos module has to have previously been loaded.
To fix this you just need to add a dependency on foos ... only this time not in any module that requires foos (or any that require a module that requires foos, or ...).
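For example (a sketch only; the entry-point module name is an assumption, not something from your code), the application's bootstrap could declare that dependency once, from outside the cycle:
// e.g. in an entry-point module such as main.js, which nothing in the
// foo -> bar -> foos cycle depends on:
require(['collections/foos'], function (FooCollection) {
  // nothing to do here: the point is just that 'collections/foos' gets
  // loaded up front, so the later synchronous require('collections/foos')
  // inside someFunction can succeed
});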
Hope that helps.

Model.set() with new and undefined values in Backbone.js

I would like to save calls to my server, so I am currently using Model.save() with the patch option and sending changedAttributes().
I would like to remove an attribute and add a new one. Model.set()/unset() will modify changedAttributes() each time such that I cannot use it with the Model.save()/patch scheme described above.
I think I would like to simply call Model.set() and pass in an object in which the attributes I wish to unset are set to undefined, along with the values I wish to set.
Is there a way that I can unset() and set() in one go to get the changedAttributes()? Or maybe determine the changedAttributes() for a combined set of operations?
// Currently
var m = new Backbone.Model({ "foo": "bar" });
m.unset("foo");
console.log(m.changedAttributes()); // { "foo": undefined }
m.set("baz", "bar");
console.log(m.changedAttributes()); // { "baz": "bar" }
console.log(m.attributes); // { "baz": "bar" }
// At this point, how do I get the combination of changed attributes? something like: { "foo": undefined, "baz": "bar" }?
// Is that even possible? Am I doing something horribly wrong?
//================================================================
// What (I think) I want is for Model.set() to remove attributes with values of undefined, so I only have to make one call and changedAttributes() will be pristine. Maybe with a option or something?
var w = new Backbone.Model({ "foo": "bar" });
w.set({ "foo": undefined, "baz": "bar" });
console.log(w.changedAttributes()); // { "foo": undefined, "baz": "bar" }
console.log(w.attributes); // I would like it to be { "baz": "bar" }, "foo" having been removed in the set() call.
//================================================================
// I was trying to avoid processing the objects by hand. I realize that I can do something like the following.
var h = new Backbone.Model({ "foo": "bar" });
var changes = { "foo": undefined, "baz": "bar" };
_.each(changes, function(val, key) {
  if (_.isUndefined(val)) {
    h.unset(key, { "silent": true });
  } else {
    h.set(key, val, { "silent": true });
  }
});
h.trigger('change'); // Trigger a change event after all the changes have been done.
console.log(changes); // { "foo": undefined, "baz": "bar" }
console.log(h.attributes); // { "baz": "bar" }
Fiddle of above code in action: http://jsfiddle.net/clayzermk1/AmBfh/
There seems to have been some discussion on this topic about a year ago https://github.com/documentcloud/backbone/pull/879. It seems like the functionality I wanted existed at some point.
EDIT: As @dennis-rongo pointed out, I can obviously do this by hand. To restate my question above: "Does Backbone allow setting/deleting of attributes at once?" and if not, what is the rationale behind that decision? Derick Bailey created Backbone.Memento (https://github.com/derickbailey/backbone.memento) to deal with attribute states, and there are several issues on Backbone about model states closely related to this scenario (https://github.com/documentcloud/backbone/pull/2360, somewhat relevant: https://github.com/documentcloud/backbone/issues/2316, highly relevant: https://github.com/documentcloud/backbone/issues/2301).
EDIT 2: I'm not looking for a hand-rolled solution, I can make it do more or less what I want (see sample code above). I'm looking for a justification of the current design with a clean example for this common scenario - set and unset in one go.
UPDATE: There has been some conversation about this subject in https://github.com/documentcloud/backbone/issues/2301. I have submitted a pull request (https://github.com/documentcloud/backbone/pull/2368) to try and encourage discussion of the current implementation.
Thank you to everyone who posted an answer!
There are a lot of ways to skin this one! So I'll focus on the part of your question where you ask:
Is there a way that I can unset() and set() in one go to get the changedAttributes()?
because I think that's the way to go here.
Backbone.Model.unset() is just an alias for Backbone.Model.set(). From the source:
unset: function(attr, options) {
  return this.set(attr, void 0, _.extend({}, options, {unset: true}));
},
So why not just do m.set({"baz": "bar", "foo": void 0});? See this fiddle I forked from yours: http://jsfiddle.net/dimadima/Q8ZuV/. Pasting from there, the result will be
console.log(m.changedAttributes()); // { "baz": "bar", "foo": undefined }
console.log(m.attributes); // {foo: undefined, baz: "bar"} -- unfortunately "foo" is not deleted
So m.attributes is a bit off because the key you've unset hasn't been deleted, but you can test for that.
Anyway, I recommend skimming the source of Backbone.Model.set() to get a sense of what your other options would be. I could elaborate if you'd like.
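If the leftover undefined entries in m.attributes bother you, one option (a sketch using Underscore's _.each/_.clone and Backbone's unset(), not something set() does for you) is to sweep them afterwards:
m.set({ "baz": "bar", "foo": void 0 });
_.each(_.clone(m.attributes), function (val, key) {
  if (val === void 0) m.unset(key, { silent: true }); // actually deletes the key
});
console.log(m.attributes); // { "baz": "bar" }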
Something like this should work.
Basically loop through all the attributes and unset properties that are invalid (falsy and non-boolean values).
/** Create a Model and set attributes */
var President = Backbone.Model.extend({});
var m = new President({first: 'Abraham', last: 'Lincoln', age: 90});
/** Blank out an attribute or two */
m.set({age: ''});
/** Loop through each attribute and unset falsy ones.
    Also pass in silent = true so it doesn't trigger a change event
    (in case you have listeners).
*/
_.each(m.toJSON(), function(val, col){
  if (typeof val !== 'boolean' && !val) {
    m.unset(col, {silent: true});
  }
}, this);
/** Output the new Model */
console.log(m.toJSON());
OR
You can create a new Model that only contains the changed attributes if you'd rather go in that direction.
var President = Backbone.Model.extend({});
var m = new President({
first: 'Abraham', last: 'Lincoln', age: 90, registered: true});
/** Blank out or change an attribute or two */
m.set({first: null});
/** Pass changed attributes to a new Model */
var t = new President();
if (!_.isEmpty(m.changedAttributes())) {
  _.each(m.changedAttributes(), function(val, col) {
    t.set(col, val);
  }, this);
}
The current implementation will allow you to call set() and pass in an object with mixed attributes to set and unset. To effectively unset an attribute, assign it the value of undefined. The key will remain in the attributes hash on the model, but any time the model is serialized to JSON, the undefined values will not be present in the serialization (this is due to the implementation of JSON.stringify()). It will not delete the attributes from the model.
The behavior of JSON.stringify() removing undefined values in serialization is described on MDN - JSON - stringify:
If undefined, a function, or an XML value is encountered during conversion it is either omitted (when it is found in an object) or censored to null (when it is found in an array).
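A quick illustration of that behaviour (plain JavaScript, independent of Backbone):
var attrs = { "foo": undefined, "baz": "bar" };
console.log(JSON.stringify(attrs)); // '{"baz":"bar"}' -- "foo" is omitted
console.log("foo" in attrs);        // true -- the key itself still exists on the object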
I was not using JSON serialization for my specific case (BSON), so I ended up having to hand-code a solution for myself.
I struck up a discussion on GitHub with a pull request; in the end a decision was made to keep the API as it is. For details see this pull request: https://github.com/documentcloud/backbone/pull/2368.
Thank you again to everyone who participated in the discussion, both on SO and GH.
