Custom Validation in Parsley.js

Which parsley.js functions do I need to call to custom validate with javascript?
The reason I ask this is because I defined the following Assert Array in the back end and want to reuse this in the front end:
[ new Assert().Length( { min: 33, max: 25 } ), new Assert().NotBlank() ]
Thank you,

You'll find the documentation on how to define your custom validators for Parsley here.
What you are trying to do may look something like this:
<script type="text/javascript">
  window.ParsleyConfig = {
    validators: {
      myvalidator: {
        fn: function (value) {
          return Validator.validate(value, [ new Assert().Length( { min: 33, max: 25 } ), new Assert().NotBlank() ]);
        },
        priority: 32
      }
    },
    i18n: {
      en: {
        myvalidator: 'Your field is invalid, or some different message'
      }
    }
  };
</script>
Note: why define these two asserts? NotBlank() is redundant with the previous one, because a blank field would obviously fail anyway, its length being below the 33-character minimum. I don't really see the point of adding NotBlank() here :)
Best

Related

Formatting data from a database in TypeScript

I am having trouble writing the following method in an Angular class. I don't know how to add values from arrayId to the data array in the series object.
getChartOptions() {
  const arrayId = [];
  const arrayTimestamp = [];
  const arrayData = [];
  const arrayData2 = [];
  var i = 0;
  this.httpClient.get<any>('http://prod.kaisens.fr:811/api/sleep/?deviceid=93debd97-6564-454b-be33-35bd377a2563&startdate=1612310400000&enddate=1614729600000').subscribe(
    reponse => {
      this.sleeps = reponse;
      this.sleeps.forEach(element => {
        arrayId.push(this.sleeps[i]._id), arrayTimestamp.push(this.sleeps[i].timestamp), arrayData.push(this.sleeps[i].data[18]), arrayData2.push(this.sleeps[i].data[39])
        i++;
      });
      console.log(arrayId);
      console.log(arrayTimestamp);
      console.log(arrayData);
      console.log(arrayData2);
    }
  )
  return {
    series: [{
      name: 'Id',
      data: [35, 65, 75, 55, 45, 60, 55]
    }]
  }
}
I have two main pieces of advice for you:
Know the types of the data that you are dealing with.
Get familiar with all of the various array methods.
get<any>() is not a helpful type. If you understand what the response is, then TypeScript can help ensure that you are handling it correctly.
I checked out the URL and it looks like you get an array of objects like this:
{
  "_id": 4,
  "device_id": "93debd97-6564-454b-be33-35bd377a2563",
  "timestamp": 1612310400000.0,
  "data": "{'sleep_quality': 1, 'sleep_duration': 9}"
},
That data property is not properly encoded as an object or as a parseable JSON string. If you control this backend then you will want to fix that.
At first I thought that the data[18] and data[39] in your code were mistakes. Now I see that it is an attempt to extract values from this malformed data. Accessing individual characters by index won't work if these numbers can be 10 or more.
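If you cannot fix the backend, one workaround (a hedged sketch; parseSleepData is a hypothetical helper name, and it assumes the values never contain quotes) is to coerce the single-quoted pseudo-JSON into real JSON on the client:
  // hypothetical helper, not part of the original code: coerce the single-quoted
  // pseudo-JSON string into parseable JSON (assumes no quotes inside the values)
  const parseSleepData = (raw: string) => JSON.parse(raw.replace(/'/g, '"'));

  // parseSleepData("{'sleep_quality': 1, 'sleep_duration': 9}")
  // => { sleep_quality: 1, sleep_duration: 9 }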
The type that you have now is:
interface DataPoint {
_id: number;
device_id: string;
timestamp: number;
data: string;
}
The type that you want is:
interface DataPoint {
_id: number;
device_id: string;
timestamp: number;
data: {
sleep_quality: number;
sleep_duration: number;
}
}
You can type the request as this.httpClient.get<DataPoint[]>( and now you'll get autocomplete on the data.
It looks like what you are trying to do is basically to convert this from one array of rows to a separate array for each column.
You do not need the variable i because the .forEach loop handles the iteration. The element variable in the callback is the row that you want.
this.sleeps.forEach(element => {
arrayId.push(element._id);
arrayTimestamp.push(element.timestamp);
arrayData.push(element.data[18]);
arrayData2.push(element.data[39]);
});
The .forEach loop that you have now is efficient because it only loops through the array once. A .map for each column is technically less efficient because we loop through the array separately for each column, but I think it might make the code easier to read and understand. It also allows TypeScript to infer the types of the arrays, whereas with an empty array you would need to annotate it like const arrayId: number[] = [];.
const mapData = (response: DataPoint[]) => {
  return [{
    name: 'Id',
    data: response.map(element => element._id)
  }, {
    name: 'Timestamp',
    data: response.map(element => element.timestamp)
  }, {
    name: 'Sleep Quality',
    data: response.map(element => parseInt(element.data[18])) // fix this
  }, {
    name: 'Sleep Duration',
    data: response.map(element => parseInt(element.data[39])) // fix this
  }]
}
The HTTP request is asynchronous. If you access your arrays outside of the subscribe callback then they are still empty. I'm not an Angular person so this part I'm unsure of, but I think that you want to be updating a property on your class instead of returning the value.
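A rough sketch of that idea (chartSeries and loadChartOptions are hypothetical names, not from your code; mapData is the helper sketched above):
  // store the mapped series on a component property instead of returning it
  chartSeries = [];

  loadChartOptions() {
    // same URL as in the original getChartOptions()
    this.httpClient
      .get<DataPoint[]>('http://prod.kaisens.fr:811/api/sleep/?deviceid=93debd97-6564-454b-be33-35bd377a2563&startdate=1612310400000&enddate=1614729600000')
      .subscribe(response => {
        // only assign once the async response has actually arrived
        this.chartSeries = mapData(response);
      });
  }
The template (or chart component) would then read chartSeries, which updates when the response comes in.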
Just follow this piece of code:
series: [{
  name: 'Id',
  data: arrayId
}]

MongoDB / Mongoose - problems when updating an array field with fewer elements

I have a collection that looks like:
name: {
  type: String,
  maxlength: 150,
  required: true
},
description: {
  type: String,
  maxlength: 350
},
status: {
  type: String,
  default: 'active'
},
targets: [ {
  type: Schema.Types.ObjectId,
  ref: 'Thing',
  index: true
} ]
});
The problem is with targets. Creating and adding to that array is no problem. However, if I reduce the number of elements in the array, it updates the targets, but does NOT reduce the size of the array, which causes numerous problems.
For example if targets = ["111111111111111111111111", "222222222222222222222222", "333333333333333333333333"]
and I do an update with targets = ["111111111111111111111111", "333333333333333333333333"],
the resulting array is ["111111111111111111111111", "333333333333333333333333", "333333333333333333333333"] since it doesn't reduce the size of the array.
I've looked at numerous things, and can't figure this out. The actual targets in my case can have several hundred elements. Also, doing an $addToSet doesn't seem to work, as it still won't remove the extra elements at the end. I really can't do a $slice, either - at least I haven't figured out a way to do that. When I tried, I got an error saying that I couldn't update the same field twice.
How does one do this?
Here is the update code:
let filter = {
  _id: aRecord._id
};
let update = aRecord;
MyCollection.findOneAndUpdate(filter, update, (err, insertStatus) => {
  if (err) {
    console.error(err);
    return next(err);
  }
  if (1 === insertStatus.ok) {
    res.status(200);
  }
  return res.json(insertStatus);
});
Thanks!
Seems stupid, but this works when reducing the number of elements in an array field:
{ $push:{ targets: { $each: sourceArray, $position: 0, $slice: sourceArray.length } } };
Basically, insert the source array of elements at the front, then truncate the array to the length of the source array.
This assumes the source array holds the entire list of elements. So if, on the front end, the user changes the number of checkboxes in a list, it sends the entire list of checkboxes, not a delta.
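Plugged into the original findOneAndUpdate call, that could look roughly like this (a sketch; it only updates the targets field and assumes aRecord.targets is the full authoritative list sent by the client):
  // full list of target ids coming from the client (not a delta)
  const sourceArray = aRecord.targets;

  const update = {
    $push: {
      targets: {
        $each: sourceArray,          // insert every element...
        $position: 0,                // ...at the front of the stored array
        $slice: sourceArray.length   // ...then keep only the first sourceArray.length elements
      }
    }
  };

  MyCollection.findOneAndUpdate({ _id: aRecord._id }, update, (err, insertStatus) => {
    if (err) { return next(err); }
    return res.json(insertStatus);
  });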

How can I get an item in the redux store by a key?

Suppose I have a reducer defined which returns an array of objects which contain keys like an id or something. What is the Redux way of getting/finding a certain object with a certain id in the array? The array itself can contain several arrays:
{ items:[id:1,...],cases:{...}}
What is the Redux way to find a record/node by id?
The idiomatic Redux way to store such data is to keep it as byId and allIds objects in the reducer.
In your case it would be:
{
  items: {
    byId: {
      item1: {
        id: 'item1',
        details: {}
      },
      item2: {
        id: 'item2',
        details: {}
      }
    },
    allIds: [ 'item1', 'item2' ],
  },
  cases: {
    byId: {
      case1: {
        id: 'case1',
        details: {}
      },
      case2: {
        id: 'case2',
        details: {}
      }
    },
    allIds: [ 'case1', 'case2' ],
  },
}
Ref: http://redux.js.org/docs/recipes/reducers/NormalizingStateShape.html
This helps keep the state normalized, both for maintaining and for using the data.
It makes it easy to iterate over all the items and render them, and if we need to get any object just by its id, it's an O(1) lookup instead of iterating over the complete array every time.
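For example, selectors over that shape could look like this (a small sketch using the items slice from above):
  // O(1) lookup of a single item by id
  const getItemById = (state, id) => state.items.byId[id];

  // ordered iteration over all items, e.g. for rendering a list
  const getAllItems = (state) => state.items.allIds.map(id => state.items.byId[id]);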
I'd use a library like lodash:
var fred = _.find(users, function(user) { return user.id === 1001; });
It might be worth noting that it is seen as good practice to 'prefer objects over arrays' in the store (especially for large state trees); in this case you'd store your items in an object with (say) id as the key:
{
  '1000': { name: 'apple', price: 10 },
  '1001': { name: 'banana', price: 40 },
  '1002': { name: 'pear', price: 50 },
}
This makes selection easier; however, you have to arrange the shape of the state when loading.
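A small sketch of that re-shaping step, assuming the data arrives as an array named items:
  // turn an array of { id, name, price } rows into an object keyed by id
  const byId = items.reduce((acc, item) => {
    acc[item.id] = item;
    return acc;
  }, {});

  // selection is then a direct lookup, e.g. byId['1001']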
There is no special way of doing this with Redux; it is a plain JS task. I suppose you use React as well:
function mapStoreToProps(store) {
  function findMyInterestingThingy(result, key) {
    // assign anything you want to result
    return result;
  }
  return {
    myInterestingThingy: Object.keys(store).reduce(findMyInterestingThingy, {})
    // you don't really need to use reduce. you can have any logic you want
  };
}
export default connect(mapStoreToProps)(MyComponent)
regards

Exclude items from first-level filter when children-levels are empty at Loopback find()

When using Strongloop Loopback, we can make a data request (with relations) to the database in these two ways:
(1) Using lb-service (at front-end)
Model.find({
  filter: {
    where: {id: 1},
    include: {
      relation: 'relationship',
      scope: {where: {id: 2}}
    }
  }
}, function (instances) {
}, function (err) {
});
(2) Using node.js (at server-side)
Model.find({
  where: {id: 1},
  include: {
    relation: 'relationship',
    scope: {where: {id: 2}}
  }
}, function (err, instances) {
});
What I need: exclude items from the first filter when another (included) filter fails.
There is one obvious solution: filtering the response, this way:
instances = instances.filter(function(instance){
  return typeof(instance.relationship) !== "undefined";
});
But... using filter() to eliminate items is not a scalable solution, because it always iterates over the whole array. Doing this at the front end is not good, because the size of the array will slow down performance. Bringing it to the server side could be a solution, but each model has its own particular set of relations... so it does not scale either!
Main question: Is there some way to overcome this situation, excluding items from the first filter when the second (third, or more) filter fails (simultaneously or not)?
Something like this, defined on the filter object:
var filter = {
  where: {id: 1},
  include: {
    relation: {name: 'relationship', required: true}, // required means this filter *needs* to be satisfied
    scope: {where: {id: 2}}
  }
};
Requirements:
(1) SQL query is not an option ;)
(2) I am using MySQL as the database, so things like
{ where: { id: 1, relationship.id: 2 } }
will not work as desired.
I don't know of a way to do this within the filter syntax itself. I think you would have to write a custom remote method to do the filtering yourself after the initial query was complete. Here's what that might look like:
// in /common/models/model.js
Model.filterResults = function filterResults(filter, next) {
  Model.find(filter, function doFilter(err, data) {
    if (err) { return next(err); }
    var filteredData = data.filter(function(model) {
      return model.otherThings && model.otherThings().length;
    });
    next(null, filteredData);
  });
};

Model.remoteMethod(
  'filterResults',
  {
    accepts: { arg: 'filter', type: 'object', http: { source: 'query' } },
    returns: { arg: 'results', type: 'array' },
    http: { verb: 'get', path: '/no-empties' }
  }
);
Now you can hit: .../api/Models/no-empties?filter={"include":"otherThings"} and you will only get back Models that have a related OtherThing. Note that this is for a one-to-many relationship, but hopefully you can see how to change it to fit your needs.

Meteor, MongoDB get part of array through subscription

I have a question about how to just get a certain element of an array using MongoDB and MeteorJS. I have the following schema for the user document:
bankList: [
  {
    id: "34567890987654345678",
    name: "xfgchjbkn",
    type: "credit"
  },
  {
    id: "09876543456789098767",
    name: "65789876t8",
    type: "debit"
  }
]
I first subscribe to only part of the fields in the array; specifically, I gather a list of all the ids. Then I have an edit screen that should subscribe to all of the fields for a specific element in the array with a matching id. I do not want to expose the rest of the array, just the single element. Currently, I use the following to first gather a list of just the ids:
Meteor.users.find({_id: this.userId},
{fields:{'bankList.id': 1}});
And the following publication-subscription method to get just a specific element's information:
Publication:
Meteor.publish("userBankAdvanced", function(bankId){
check(bankId,String);
if(this.userId){
return Meteor.users.find({_id:this.userId,"bankList.id": bankId}, {'bankList.$': 1});
}else{
this.ready();
}
});
Subscription:
this.route('edit_account', {
  path: '/edit/account/',
  waitOn: function(){
    if(Session.get("bankId")){
      return Meteor.subscribe('userBankAdvanced', Session.get("bankId"));
    }
    return null;
  },
  data: function(){
    if(Session.get("bankId")){
      return Meteor.users.findOne();
    }
    return null;
  },
  onBeforeAction: function(){
    beforeHooks.isRevise(Session.get("bankId"));
  }
});
The subscription method returns all of the elements of the array with all of the information.
I want, for example, just this (not the entire list with all of the information):
bankList: [
  {
    id: "34567890987654345678",
    name: "xfgchjbkn",
    type: "credit"
  }
]
It looks like you're just missing the "fields" specifier in your "userBankAdvanced" publish function. I wrote a test in meteorpad using your example and it seems to work fine. The bank id is hardcoded for simplicity there.
So instead of
return Meteor.users.find({_id:this.userId,"bankList.id": bankId}, {'bankList.$': 1});
try using
return Meteor.users.find({_id:this.userId,"bankList.id": bankId}, {fields: {'bankList.$': 1}});
No luck; in Meteor the "fields" option works only one level deep. In other words, there's no built-in way to include/exclude subdocument fields.
But not all is lost. You can always do it manually
Meteor.publish("userBankAdvanced", function (bankId) {
var self = this;
var handle = Meteor.users.find({
_id: self.userId, "bankList.id": bankId
}).observeChanges({
added: function (id, fields) {
self.added("users", id, filter(fields, bankId));
},
changed: function (id, fields) {
self.changed("users", id, filter(fields, bankId));
},
removed: function (id) {
self.removed("users", id);
},
});
self.ready();
self.onStop(function () {
handle.stop();
});
});
function filter(fields, bankId) {
if (_.has(fields, 'bankList') {
fields.bankList = _.filter(fields.bankList, function (bank) {
return bank.id === bankId;
});
}
return fields;
}
EDIT: I updated the above code to match the question requirements. It turns out, though, that Carlos's answer is correct as well, and it's of course much simpler, so I recommend using that one.
