Forgive my minimal knowledge of AngularJS and valdr...
I have an AngularJS application in which the UI is generated dynamically to edit an object, with meta-data provided to determine how each member of the object should be interpreted. I'm going to add extra meta-data that sets validation rules for each member.
I found valdr and I wondered if it might be possible to add the rules using valdrProvider.addConstraints() called repeatedly for each editable field. Presumably the rule names would have to be made unique?
How could I remove rules from the rule set when data was unloaded?
Is this approach valid, or should I just map the rule meta-data directly using an AngularJS directive or something?
Your approach sounds OK. valdr offers a removeConstraint(constraintName) function that might do what you need. Note, however, that this removes all constraints for a given model type.
Take the example at https://github.com/netceteragroup/valdr#getting-started.
yourApp.config(function(valdrProvider) {
  valdrProvider.addConstraints({
    'Person': {
      'lastName': {
        'size': {
          'min': 2,
          'max': 10,
          'message': 'Last name must be between 2 and 10 characters.'
        },
        'required': {
          'message': 'Last name is required.'
        }
      },
      'firstName': {
        'size': {
          'min': 2,
          'max': 20,
          'message': 'First name must be between 2 and 20 characters.'
        }
      }
    }
  });
});
Calling removeConstraint('Person') would remove all constraints for Person. If you just want to remove the firstName constraints because you removed the first name input field, you can call addConstraints again with an updated constraints definition for Person.
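For example, a minimal sketch based on the model above: re-registering Person with only the lastName constraints effectively drops the firstName rules, since the new definition takes the place of the old one for that model type.

// Re-register 'Person' without 'firstName'; its rules are dropped,
// while the 'lastName' rules stay in force.
valdrProvider.addConstraints({
  'Person': {
    'lastName': {
      'size': {
        'min': 2,
        'max': 10,
        'message': 'Last name must be between 2 and 10 characters.'
      },
      'required': {
        'message': 'Last name is required.'
      }
    }
  }
});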
Final notes:
- valdr doesn't require you to remove constraints when fields are removed (see the discussion at https://github.com/netceteragroup/valdr/issues/46)
- yes, constraint names are unique, because they are bound to model types, which should have unique names; there shouldn't be two Person types with different implementations
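Since the question's UI is generated dynamically, it is also worth noting that the same call is available at runtime on the valdr service, so constraints can be registered as data is loaded rather than only in the config phase. A minimal sketch, where the per-field meta-data shape is an assumption of mine:

yourApp.run(function (valdr) {
  // Hypothetical per-field meta-data, e.g. produced by the dynamic form builder.
  var fieldMetaData = {
    lastName: { required: { message: 'Last name is required.' } }
  };

  // Translate the meta-data into a valdr constraints object and register it.
  var constraints = { 'Person': {} };
  angular.forEach(fieldMetaData, function (rules, fieldName) {
    constraints['Person'][fieldName] = rules;
  });
  valdr.addConstraints(constraints);
});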
When I update a model, Waterline's .update() always returns an array of objects, even when I set a primary key in the criteria.
In my code:
Ad.update({ id: req.param('id') }, {
  // desired attributes to be updated
}).exec(function(err, updatedRecord) {
  // updatedRecord is always an array of objects
});
And in order to use the updatedRecord, I have to index into it like updatedRecord[0], which I consider not very clean. According to the Sails docs on update(), this is a common scenario.
Knowing that, I have two questions:
Wouldn't it be better to return just the updated object, not an array, when only one model is found?
If that is a convention, how could this function be overridden so that it returns just an object instead of an array when .update() has affected only one record?
It is a convention that update() affects all the records that match the find criteria, but since you are probably using a unique validation on the model, it will probably return an array of one or zero records. You need to handle that by hand.
You can override methods in a model by implementing a method with the same name as the Waterline default. But since you would need to completely rewrite the code, that is not viable; neither is changing the underlying Waterline code.
A solution would be to create a new function on your Ad model:
module.exports = {
  attributes: {
    adid: {
      unique: true,
      required: true
    },
    updateMe: {
    }
  },
  updateOne: function(adid, newUpdateMe, cb) {
    Ad.update({ adid: adid }, { updateMe: newUpdateMe })
      .exec(function(err, updatedRecords) {
        if (err) { return cb(err); }
        // updatedRecords is always an array of objects
        if (updatedRecords.length === 1) {
          return cb(null, updatedRecords[0]);
        }
        return cb(null, {}); // could also call back with an error when nothing matched
      });
  }
};
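Called from a controller it might look like this; the action below is a hypothetical sketch, not part of the original answer:

// api/controllers/AdController.js (hypothetical)
module.exports = {
  update: function (req, res) {
    Ad.updateOne(req.param('adid'), req.param('updateMe'), function (err, ad) {
      if (err) { return res.serverError(err); }
      return res.json(ad); // a single object, not an array
    });
  }
};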
Also: avoid using id as a model attribute (use another name), as some databases like MongoDB add this attribute by default, and it may cause conflicts with your model.
I don't think it's possible with Waterline. The update method is a generalized one, and passing a primary key in the where condition is not always the case.
I am trying to make a Meteor app that lets users push a value to the database. It works OK, but there is a small issue: as soon as a certain user has pushed his information, I don't want to let the same user create another entry. Either this must be blocked, or the value the user pushes the second time must overwrite the first. Right now I get multiple entries from the same user.
Here is my code. Hope you can help me here. Thanks in advance.
Estimations.update(userstory._id, {
  $addToSet: {
    estimations: [
      {name: Meteor.user().username, estimation: this.value}
    ]
  }
});
From the MongoDB docs:
The $addToSet operator adds a value to an array unless the value is
already present, in which case $addToSet does nothing to that array.
Since your array elements are objects, the value is the entire object, not just the name key. This means a single user can create multiple name/estimation pairs as long as the estimation value differs.
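To see why, consider this sketch (collection and field names taken from the question): both calls append an element, because the two objects differ in their estimation value and $addToSet compares the whole object.

// Both updates add an element for the same user:
Estimations.update(userstory._id,
  { $addToSet: { estimations: { name: 'alice', estimation: 3 } } });
Estimations.update(userstory._id,
  { $addToSet: { estimations: { name: 'alice', estimation: 5 } } });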
What you can do is remove any value for the user first, then reinsert:
var username = Meteor.user().username;

// If the entry doesn't exist, this is a no-op.
Estimations.update(userstory._id,
  { $pull: { estimations: { name: username } } });

Estimations.update(userstory._id,
  { $push: { estimations: { name: username, estimation: this.value } } });
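A variation on the same idea, assuming this runs where update returns the number of affected documents (e.g. in server code): overwrite an existing entry in place with the positional $ operator, and only push when the user has no entry yet.

var username = Meteor.user().username;

// Try to overwrite an existing estimation for this user in place.
var updated = Estimations.update(
  { _id: userstory._id, 'estimations.name': username },
  { $set: { 'estimations.$.estimation': this.value } }
);

// Nothing matched, so this user has no entry yet: push a new one.
if (updated === 0) {
  Estimations.update(userstory._id,
    { $push: { estimations: { name: username, estimation: this.value } } });
}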
By way of commentary: you've got a collection called Estimations that contains an array called estimations that contains objects with an estimation key. This might confuse future developers on the project ;) Also, if your Estimations collection is 1:1 with UserStorys, then perhaps the array could just be a key inside the UserStory document?
I'm using App Engine's bulkloader to import a CSV file into my datastore. I've got a number of columns that I want to merge into one; for example, they're all URLs, but not all of them are supplied and there is a superseding order, e.g.:
url_main
url_temp
url_test
I want to say: "OK, if url_main exists, use that; otherwise use url_test, and then url_temp."
Is it, therefore, possible to create a custom import transform that references columns and merges them into one based on conditions?
OK, so after reading https://developers.google.com/appengine/docs/python/tools/uploadingdata#Configuring_the_Bulk_Loader I learnt about import_transform and that it can use custom functions.
With that in mind, this pointed me in the right direction:
... a two-argument function with the keyword argument bulkload_state,
which on return contains useful information about the entity:
bulkload_state.current_entity, which is the current entity being
processed; bulkload_state.current_dictionary, the current export
dictionary ...
So, I created a function that takes two arguments: the first is the value of the current entity, and the second is the bulkload_state, which lets me fetch the current row, like so:
def check_url(value, bulkload_state):
    # Grab the whole CSV row for this entity, then return the first
    # URL column that is present, in order of preference.
    row = bulkload_state.current_dictionary
    fields = ['Final URL', 'URL', 'Temporary URL']
    for field in fields:
        if field in row:
            return row[field]
    return None
All this does is grab the current row (bulkload_state.current_dictionary) and then checks which URL fields exist, otherwise it just returns None.
In my bulkloader.yaml I call this function simply by setting:
- property: business_url
  external_name: URL
  import_transform: bulkloader_helper.check_url
Note: the external_name doesn't matter as long as it exists, since I'm not actually using it; I'm making use of multiple columns.
Simples!
Let's say I have the following document schema in a collection called 'users':
{
  name: 'John',
  items: [ {}, {}, {}, ... ]
}
The 'items' array contains objects in the following format:
{
  item_id: "1234",
  name: "some item"
}
Each user can have multiple items embedded in the 'items' array.
Now, I want to be able to fetch an item by an item_id for a given user.
For example, I want to get the item with id "1234" that belongs to the user with name "John".
Can I do this with MongoDB? I'd like to utilize its powerful array indexing, but I'm not sure if you can run queries on embedded arrays and return objects from the array instead of the document that contains it.
I know I can fetch users that have a certain item using { "items.item_id": "1234" } on the users collection. But I want to fetch the actual item from the array, not the user.
Alternatively, is there maybe a better way to organize this data so that I can easily get what I want? I'm still fairly new to mongodb.
Thanks for any help or advice you can provide.
The question is old, but the answer has changed since then. With MongoDB >= 2.2, you can do:
db.users.find( { name: "John"}, { items: { $elemMatch: { item_id: "1234" } } })
You will get:
{
  name: "John",
  items: [
    {
      item_id: "1234",
      name: "some item"
    }
  ]
}
See the documentation of $elemMatch.
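An alternative with the same effect, also available from MongoDB 2.2, is the positional $ projection operator; note that the array field must appear in the query part:

// Returns John's document with items narrowed to the first
// element matching the query condition.
db.users.find(
  { name: "John", "items.item_id": "1234" },
  { "items.$": 1 }
);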
There are a couple of things to note about this:
1) I find that the hardest thing for folks learning MongoDB is UN-learning the relational thinking that they're used to. Your data model looks to be the right one.
2) Normally, what you do with MongoDB is return the entire document to the client program and then search for the portion of the document that you want on the client side, using your client programming language.
In your example, you'd fetch the entire 'user' document and then iterate through the 'items[]' array on the client side (see the sketch after this list).
3) If you want to return just the 'items[]' array, you can do so by using the 'Field Selection' syntax; see http://www.mongodb.org/display/DOCS/Querying#Querying-FieldSelection for details, and the sketch after this list. Unfortunately, it will return the entire 'items[]' array, not just one element of it.
4) There is an existing Jira ticket to add this functionality: SERVER-828 (https://jira.mongodb.org/browse/SERVER-828). It looks like it's been added to the latest 2.1 (development) branch, which means it will be available for production use when release 2.2 ships.
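As a sketch of points 2) and 3) in shell syntax (the variable names are mine):

// Point 2: fetch the whole document, then search it client-side.
var user = db.users.findOne({ name: "John" });
var item = null;
user.items.forEach(function (candidate) {
  if (candidate.item_id === "1234") {
    item = candidate;
  }
});

// Point 3: field selection returns only the items array (plus _id),
// but still the entire array, not a single element.
db.users.find({ name: "John" }, { items: 1 });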
If this is an embedded array, then you can't retrieve its elements directly. The retrieved document will have the form of a user (the root document), although not all fields may be filled (depending on your query).
If you want to retrieve just that element, then you have to store it as a separate document in a separate collection. It will have one additional field, user_id (which can be part of _id). Then it's trivial to do what you want.
A sample document might look like this:
{
  _id: {user_id: ObjectId, item_id: "1234"},
  name: "some item"
}
Note that this structure ensures uniqueness of item_id per user (I'm not sure whether you want this or not).
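With that structure, the lookup becomes a single query on the new collection; a sketch, where the items collection name and the user id value are illustrative:

// _id is a compound value, so match both of its parts with dot notation.
var userId = ObjectId("507f1f77bcf86cd799439011");
db.items.findOne({ "_id.user_id": userId, "_id.item_id": "1234" });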
ExtJS Model fields have a mapping option.
fields: [
    {name: 'brandId', mapping: 'brand.id', type: 'int'},
    {name: 'brandName', mapping: 'brand.name', type: 'string'}
]
The problem is: if the response from the server does not contain some field (the brand field in my example) and a mapping from inner fields is defined, the Ext store silently fails to load any records.
Does anybody else have this problem? Is it some kind of a bug?
UPDATE
To make it clear: suppose I have ten fields in my model. The response from the server has nine fields; one is missing. If there is no nested mapping for this field (mapping: 'x.y.z'), everything is OK: the store loads the record and the field is empty. But if this field has to be loaded from some nested field and has a mapping option, the store fails to load ANYTHING.
UPDATE 2
I have found the code that causes the problem. When Ext tries to load some field from JSON, it performs a check like this:
(source["id"] === undefined) ? __field0.defaultValue : source["id"]
But when the field has a mapping option (mapping: 'brand.id'), the reader does it this way:
(source.brand.id === undefined) ? __field20.defaultValue : source.brand.id
which causes an error if source has no brand field.
In case you have the same problem as I did: you can fix it by overriding Ext.data.reader.Json's createFieldAccessExpression method.
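A minimal sketch of such an override, assuming ExtJS 4.x where this private method builds the accessor expression as a string; the exact signature may differ between versions, so treat this as an illustration rather than a drop-in fix:

Ext.define('MyApp.reader.SafeJson', {
    override: 'Ext.data.reader.Json',

    createFieldAccessExpression: function (field, fieldVarName, dataName) {
        // Only intercept nested string mappings like 'brand.id';
        // delegate everything else to the stock implementation.
        if (typeof field.mapping !== 'string' || field.mapping.indexOf('.') === -1) {
            return this.callParent(arguments);
        }
        // Build e.g.: ((source.brand && source.brand.id !== undefined)
        //               ? source.brand.id : __field20.defaultValue)
        var steps = field.mapping.split('.'),
            path = dataName,
            guards = [],
            i;
        for (i = 0; i < steps.length; i++) {
            path += '.' + steps[i];
            guards.push(path);
        }
        return '((' + guards.join(' && ') + ' !== undefined) ? ' +
               path + ' : ' + fieldVarName + '.defaultValue)';
    }
});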
I agree that Ext should only fail to load that field, not the entire record. One option that isn't great, but should work, is to instead use a mapping function:
{
    name: 'brandId',
    mapping: function(data, record) {
        return data.brand && data.brand.id;
    }
}
I could have the arguments wrong (I figured out that this feature existed by looking at the source code), so maybe put a breakpoint in there to see what's available if it doesn't work like this.
I think you're misinterpreting the mapping and nesting paradigms: they are not interchangeable.
If you define nesting in your data, the result MUST have the corresponding field.