Does uuidv1() always generate a different string? - uuid

I am confused about uuidv1(). The following code uses uuidv1() as a salt to encrypt a password. But I thought uuidv1() generates a different string each time, so I don't see how it can be used to encrypt a password.
Does uuidv1() always generate the same string?
const mongoose = require("mongoose");
const uuidv1 = require("uuid/v1");
const crypto = require("crypto");
const { ObjectId } = mongoose.Schema;

const userSchema = new mongoose.Schema({
  name: {
    type: String,
    trim: true,
    required: true
  },
  email: {
    type: String,
    trim: true,
    required: true
  },
  hashed_password: {
    type: String,
    required: true
  },
  salt: String,
  ...
});
// virtual field
userSchema
  .virtual("password")
  .set(function(password) {
    // create temporary variable called _password
    this._password = password;
    // generate a timestamp
    this.salt = uuidv1();
    // encryptPassword()
    this.hashed_password = this.encryptPassword(password);
  })
  .get(function() {
    return this._password;
  });
// methods
userSchema.methods = {
  authenticate: function(plainText) {
    return this.encryptPassword(plainText) === this.hashed_password;
  },
  encryptPassword: function(password) {
    if (!password) return "";
    try {
      return crypto
        .createHmac("sha1", this.salt)
        .update(password)
        .digest("hex");
    } catch (err) {
      return "";
    }
  }
};

uuidv1() does generate a unique output every time, which is exactly why it is saved as salt in the user model.
uuid generates the salt, which is the extra input mixed into the hash; uuid has several versions (v1, v3, v4, v5), check out their npm docs, they are simple. encryptPassword() in your userSchema then hashes your password using crypto (you imported it in the user model) with that salt, and you store the outcome as hashed_password, which is used for comparison in the future. authenticate() hashes the supplied plaintext with the same saved salt every time and compares the result to the stored hash.
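In other words: the salt is generated once per user and persisted, and HMAC with a fixed salt is deterministic. A minimal standalone sketch (variable names are mine):

const crypto = require("crypto");
const uuidv1 = require("uuid/v1");

// Generated once (e.g. at signup) and stored on the user document.
const salt = uuidv1();

const hash = pw => crypto.createHmac("sha1", salt).update(pw).digest("hex");

// Same salt + same password => same digest, so comparison works.
console.log(hash("secret") === hash("secret")); // true

// uuidv1() itself differs on every call, but it is never regenerated
// when authenticating; the stored salt is reused.
console.log(uuidv1() === salt); // false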

Related

Amplify AppSync doesn't upload S3Object file from client

First, when the docs at https://aws-amplify.github.io/docs/js/api#complex-objects say:
input CreateTodoInput {
  id: ID
  name: String!
  description: String
  file: S3ObjectInput # This input type will be generated for you
}
I get the error Type "S3ObjectInput" not found in document. and I have to add S3ObjectInput manually.
This is my schema (the docs are not very clear on it, so I put it together from similar questions):
type Picture @model {
  id: ID!
  file: S3Object!
  url: String!
  rating: Int
  appearedForRanking: Int
}
type S3Object {
  bucket: String!
  key: String!
  region: String!
}
input CreatePictureInput {
  id: ID
  file: S3ObjectInput!
  url: String!
  rating: Int
  appearedForRanking: Int
}
input S3ObjectInput {
  bucket: String!
  region: String!
  localUri: String
  visibility: Visibility
  key: String
  mimeType: String
}
enum Visibility {
  public
  protected
  private
}
And this is the client code (with React):
class PictureUpload extends Component {
  state = { fileUrl: '', file: '', filename: '' }

  handleChange = e => {
    let file = e.target.files[0]
    let filext = file.name.split('.').pop()
    let filename = uuid() + '.' + filext
    this.setState({
      fileUrl: URL.createObjectURL(file),
      filename: filename
    })
  }

  saveFile = async () => {
    let visibility = 'public'
    let fileObj = {
      bucket: awsConfig.aws_user_files_s3_bucket,
      region: awsConfig.aws_user_files_s3_bucket_region,
      key: visibility + '/' + this.state.filename,
      mimeType: 'image/jpeg',
      localUri: this.state.fileUrl,
      visibility: visibility
    }
    try {
      const picture = await API.graphql(
        graphqlOperation(mutations.createPicture, {
          input: {
            url: this.state.filename,
            file: fileObj
          }
        })
      )
    } catch (err) {
      // ...
    }
  }
  // rest of the component omitted
}
The problem is that the mutation runs without errors, setting the DB records, but the file does not appear in S3. The docs say the SDK uploads the file to Amazon S3 for you, so I don't think I forgot to add anything.
Any idea why the upload doesn't happen?
Automatic upload of the file to S3 happens only when using the aws-appsync package; with aws-amplify you need to upload the file yourself using Storage.put(...).
This GitHub issue explains the differences in more detail.
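For the aws-amplify route, here is a minimal sketch of doing the upload yourself before the mutation. It assumes handleChange also keeps the File object in state (e.g. this.setState({ file })), which the snippet above does not do yet; with the default Storage level of "public" the object lands under public/<filename>:

import { Storage } from 'aws-amplify'

saveFile = async () => {
  try {
    // Upload the raw file first...
    await Storage.put(this.state.filename, this.state.file, {
      contentType: 'image/jpeg'
    })
    // ...then record only the S3 reference via the mutation.
    await API.graphql(
      graphqlOperation(mutations.createPicture, {
        input: {
          url: this.state.filename,
          file: {
            bucket: awsConfig.aws_user_files_s3_bucket,
            region: awsConfig.aws_user_files_s3_bucket_region,
            key: 'public/' + this.state.filename
          }
        }
      })
    )
  } catch (err) {
    console.log(err)
  }
}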
For React Native I've found that you can't simply provide a URI; you have to provide a blob. Try this code instead:
const response = await fetch(uri);
const blob = await response.blob();

let file = {
  bucket,
  key,
  region,
  localUri: blob,
  mimeType,
};
This should get the image data to S3 as long as your authentication is properly configured.

Parsing dynamic CSV through Node and writing schema in Mongo [duplicate]

Currently I need to push a large CSV file into a MongoDB database, and the order of the values needs to determine the keys for each DB entry:
Example CSV file:
9,1557,358,286,Mutantville,4368,2358026,,M,0,0,0,1,0
9,1557,359,147,Wroogny,4853,2356061,,D,0,0,0,1,0
Code to parse it into arrays:
var fs = require("fs");
var csv = require("fast-csv");

fs.createReadStream("rank.txt")
  .pipe(csv())
  .on("data", function(data){
    console.log(data);
  })
  .on("end", function(data){
    console.log("Read Finished");
  });
Code Output:
[ '9',
  '1557',
  '358',
  '286',
  'Mutantville',
  '4368',
  '2358026',
  '',
  'M',
  '0',
  '0',
  '0',
  '1',
  '0' ]
[ '9',
  '1557',
  '359',
  '147',
  'Wroogny',
  '4853',
  '2356061',
  '',
  'D',
  '0',
  '0',
  '0',
  '1',
  '0' ]
How do I insert the arrays into my Mongoose schema so they go into MongoDB?
Schema:
var mongoose = require("mongoose");

var rankSchema = new mongoose.Schema({
  serverid: Number,
  resetid: Number,
  rank: Number,
  number: Number,
  name: String,
  land: Number,
  networth: Number,
  tag: String,
  gov: String,
  gdi: Number,
  protection: Number,
  vacation: Number,
  alive: Number,
  deleted: Number
});

module.exports = mongoose.model("Rank", rankSchema);
The order of the array needs to match the order of the schema; for instance, the first number 9 in the array always needs to be saved as the key "serverid", and so forth. I'm using Node.js.
You can do it with fast-csv by getting the headers from the schema definition, which will return the parsed lines as "objects". You actually have some mismatches, so I've marked them with corrections:
const fs = require('mz/fs');
const csv = require('fast-csv');
const { Schema } = mongoose = require('mongoose');

const uri = 'mongodb://localhost/test';

mongoose.Promise = global.Promise;
mongoose.set('debug', true);

const rankSchema = new Schema({
  serverid: Number,
  resetid: Number,
  rank: Number,
  name: String,
  land: String,         // <-- You have this as Number but it's a string
  networth: Number,
  tag: String,
  stuff: String,        // the empty field in the csv
  gov: String,
  gdi: Number,
  protection: Number,
  vacation: Number,
  alive: Number,
  deleted: Number
});

const Rank = mongoose.model('Rank', rankSchema);

const log = data => console.log(JSON.stringify(data, undefined, 2));

(async function() {

  try {
    const conn = await mongoose.connect(uri);

    await Promise.all(Object.entries(conn.models).map(([k,m]) => m.remove()));

    let headers = Object.keys(Rank.schema.paths)
      .filter(k => ['_id','__v'].indexOf(k) === -1);

    console.log(headers);

    await new Promise((resolve,reject) => {

      let buffer = [],
          counter = 0;

      let stream = fs.createReadStream('input.csv')
        .pipe(csv({ headers }))
        .on("error", reject)
        .on("data", async doc => {
          stream.pause();
          buffer.push(doc);
          counter++;
          log(doc);
          try {
            if ( counter > 10000 ) {
              await Rank.insertMany(buffer);
              buffer = [];
              counter = 0;
            }
          } catch(e) {
            stream.destroy(e);
          }

          stream.resume();
        })
        .on("end", async () => {
          try {
            if ( counter > 0 ) {
              await Rank.insertMany(buffer);
              buffer = [];
              counter = 0;
            }
            // resolve unconditionally, so the promise also settles when
            // the buffer was already flushed by the last batch
            resolve();
          } catch(e) {
            stream.destroy(e);
          }
        });

    });

  } catch(e) {
    console.error(e)
  } finally {
    process.exit()
  }

})()
As long as the schema actually lines up with the provided CSV, then it's okay. These are the corrections I can see, but if you need the actual field names aligned differently then you need to adjust. Basically there was a Number in the position where there is a String, and essentially an extra field, which I'm presuming is the blank one in the CSV.
The general idea is getting the array of field names from the schema and passing that into the options when making the csv parser instance:
let headers = Object.keys(Rank.schema.paths)
  .filter(k => ['_id','__v'].indexOf(k) === -1);

let stream = fs.createReadStream('input.csv')
  .pipe(csv({ headers }))
Once you do that, you get an "Object" back instead of an array:
{
  "serverid": "9",
  "resetid": "1557",
  "rank": "358",
  "name": "286",
  "land": "Mutantville",
  "networth": "4368",
  "tag": "2358026",
  "stuff": "",
  "gov": "M",
  "gdi": "0",
  "protection": "0",
  "vacation": "0",
  "alive": "1",
  "deleted": "0"
}
Don't worry about the "types" because Mongoose will cast the values according to schema.
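A quick sketch of that casting, using the Rank model from above:

const doc = new Rank({ serverid: "9", networth: "4368" });
// Mongoose casts on assignment, per the schema types:
console.log(typeof doc.serverid, typeof doc.networth); // number number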
The rest happens within the handler for the data event. For maximum efficiency we are using insertMany() to only write to the database once every 10,000 lines. How that actually goes to the server and processes depends on the MongoDB version, but 10,000 should be pretty reasonable based on the average number of fields you would import for a single collection in terms of the "trade-off" for memory usage and writing a reasonable network request. Make the number smaller if necessary.
The important parts are to mark these calls as async functions and await the result of the insertMany() before continuing. We also need to pause() the stream and resume() on each item, otherwise we run the risk of overwriting the buffer of documents to insert before they are actually sent. The pause() and resume() are necessary to put "back-pressure" on the pipe; otherwise items just keep "coming out" and firing the data event.
Naturally the control for the 10,000 entries requires we check that both on each iteration and on stream completion in order to empty the buffer and send any remaining documents to the server.
That's really what you want to do, as you certainly don't want to fire off an async request to the server on "every" iteration through the data event, essentially without waiting for each request to complete. You'll get away with not checking that for "very small files", but for any real-world load you're certain to exceed the call stack due to "in flight" async calls which have not yet completed.
FYI, here is the package.json used. The mz is optional, as it's just a modernized, Promise-enabled library of standard node "built-in" libraries that I'm simply used to using. The code is of course completely interchangeable with the fs module.
{
  "description": "",
  "main": "index.js",
  "dependencies": {
    "fast-csv": "^2.4.1",
    "mongoose": "^5.1.1",
    "mz": "^2.7.0"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}
Actually, with Node v8.9.x and above we can even make this much simpler with an implementation of AsyncIterator through the stream-to-iterator module. It's still in Iterator<Promise<T>> mode, but it should do until Node v10.x becomes stable LTS:
const fs = require('mz/fs');
const csv = require('fast-csv');
const streamToIterator = require('stream-to-iterator');
const { Schema } = mongoose = require('mongoose');

const uri = 'mongodb://localhost/test';

mongoose.Promise = global.Promise;
mongoose.set('debug', true);

const rankSchema = new Schema({
  serverid: Number,
  resetid: Number,
  rank: Number,
  name: String,
  land: String,
  networth: Number,
  tag: String,
  stuff: String,        // the empty field
  gov: String,
  gdi: Number,
  protection: Number,
  vacation: Number,
  alive: Number,
  deleted: Number
});

const Rank = mongoose.model('Rank', rankSchema);

const log = data => console.log(JSON.stringify(data, undefined, 2));

(async function() {

  try {
    const conn = await mongoose.connect(uri);

    await Promise.all(Object.entries(conn.models).map(([k,m]) => m.remove()));

    let headers = Object.keys(Rank.schema.paths)
      .filter(k => ['_id','__v'].indexOf(k) === -1);

    //console.log(headers);

    let stream = fs.createReadStream('input.csv')
      .pipe(csv({ headers }));

    const iterator = await streamToIterator(stream).init();

    let buffer = [],
        counter = 0;

    for ( let docPromise of iterator ) {
      let doc = await docPromise;
      buffer.push(doc);
      counter++;

      if ( counter > 10000 ) {
        await Rank.insertMany(buffer);
        buffer = [];
        counter = 0;
      }
    }

    if ( counter > 0 ) {
      await Rank.insertMany(buffer);
      buffer = [];
      counter = 0;
    }

  } catch(e) {
    console.error(e)
  } finally {
    process.exit()
  }

})()
Basically, all of the stream "event" handling and pausing and resuming gets replaced by a simple for loop:
const iterator = await streamToIterator(stream).init();

for ( let docPromise of iterator ) {
  let doc = await docPromise;
  // ... The things in the loop
}
Easy! This gets cleaned up in later Node implementations with for..await..of when it becomes more stable. But the above runs fine from the specified version and above.
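For reference, on Node v10+ readable streams are async-iterable themselves (experimental at first), so the same loop can be sketched without stream-to-iterator at all, reusing the names from the listing above:

// Drop-in replacement for the iterator section of the listing above.
let stream = fs.createReadStream('input.csv')
  .pipe(csv({ headers }));

let buffer = [],
    counter = 0;

for await (const doc of stream) {
  buffer.push(doc);
  counter++;

  if ( counter > 10000 ) {
    await Rank.insertMany(buffer);
    buffer = [];
    counter = 0;
  }
}

if ( counter > 0 ) {
  await Rank.insertMany(buffer);
}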
Unlike @Neil Lunn's answer, this approach needs a header line within the CSV itself.
An example using the csvtojson module:
const csv = require('csvtojson');

const csvArray = [];
csv()
  .fromFile(filePath) // path to the CSV file
  .on('json', (jsonObj) => {
    csvArray.push({ name: jsonObj.name, id: jsonObj.id });
  })
  .on('done', (error) => {
    if (error) {
      return res.status(500).json({ error });
    }
    Model.create(csvArray)
      .then((result) => {
        return res.status(200).json({ result });
      }).catch((err) => {
        return res.status(500).json({ error: err });
      });
  });

Mongoose models' save() won't update empty array

I have an array (bookedby) in a Mongoose model defined like this:
var mongoose = require('mongoose');
var Schema = mongoose.Schema;

var BarSchema = new Schema({
  date: {
    type: Date,
    required: true
  },
  barid: {
    type: String,
    required: true
  },
  bookedby: {
    type: [String],
    required: true
  },
});

module.exports = mongoose.model('Bar', BarSchema);
I update it with the following function, called by a Node.js Express router:
const Bars = require("../../models/bars");
const { getToday } = require('../../utils');

module.exports = function(req, res, next) {
  const { barid } = req.body;
  const { username } = req.user;
  const date = getToday();

  if( !barid ) return res.json({ success: false, error: 'Please specify parameter \'barid\'.'})

  Bars.findOne({ barid, date }, function (err, bar) {
    if (err) return next(err);
    if (!bar || bar.bookedby.indexOf(username) === -1) return res.json({ error: `Bar is not booked yet.` });

    // Someone booked the bar
    const index = bar.bookedby.indexOf(username);
    bar.bookedby.splice(index, 1);
    bar.save(err => {
      if (err) res.json({ error: `Error saving booking.` });
      else res.json({ success: true });
    });
  });
};
Everything works fine, except when I remove the last item from the bookedby array. Then save() doesn't update the database; the last item remains there. I guess it has something to do with MongoDB optimizing empty arrays, but how can I solve this?
According to the Mongoose FAQ:
http://mongoosejs.com/docs/faq.html
For version >= 3.2.0 you should use the array.set() syntax:
doc.array.set(3, 'changed');
doc.save();
If you are running a version less than 3.2.0, you must mark the array modified before saving:
doc.array[3] = 'changed';
doc.markModified('array');
doc.save();
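Applied to the handler above, that means marking the bookedby path as modified before saving; a sketch (alternatively, bar.bookedby.pull(username) lets Mongoose track the removal itself):

const index = bar.bookedby.indexOf(username);
bar.bookedby.splice(index, 1);
// Tell Mongoose the array changed so the now-empty value is persisted.
bar.markModified('bookedby');
bar.save(err => {
  if (err) res.json({ error: `Error saving booking.` });
  else res.json({ success: true });
});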

How to enter n questions, each with n choices, and save them in MongoDB

1. Schema code:
let mongoose = require('mongoose');
let Schema = mongoose.Schema;

let presenterschema = new Schema({
  Name: String,
  Organization: String,
  Email: String,
  phone: Number,
  question: [{
    question: String,
    choice: [{
      choice_a: String,
      choice_b: String,
      choice_c: String,
      choice_d: String,
    }],
  }],
});

module.exports = mongoose.model('presenter', presenterschema);
Email is used as the unique key to find the document already in the DB with that email address and to push the question and choices onto it.
2. Node.js code:
app.post('/get_questio', function (req, res) {
  presenter.findOne({ Email: req.body.Email }, function (err, data) {
    question = req.body.question;
    choice = req.body.choice;
    choice_a = req.body.choice_a;
    choice_b = req.body.choice_b;
    choice_c = req.body.choice_c;
    choice_d = req.body.choice_d;
    data.question.push({ question })
    data.question.choice.push({ choice_a, choice_b, choice_c, choice_d })
    data.save(function (err, data) {
      if (err) {
        res.send("something went wrong " + err)
      } else {
        res.send(data)
      }
    })
  })
})
Actually, I am trying to create a quiz application.
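A minimal sketch of what the schema above implies the push should look like: question is an array of subdocuments, so the choices belong inside the pushed object rather than on data.question.choice (which does not exist). The route name here is made up:

app.post('/add_question', function (req, res) {
  presenter.findOne({ Email: req.body.Email }, function (err, data) {
    if (err || !data) return res.send("presenter not found");
    // Push one question subdocument with its choices embedded.
    data.question.push({
      question: req.body.question,
      choice: [{
        choice_a: req.body.choice_a,
        choice_b: req.body.choice_b,
        choice_c: req.body.choice_c,
        choice_d: req.body.choice_d,
      }],
    });
    data.save(function (err, saved) {
      if (err) res.send("something went wrong " + err);
      else res.send(saved);
    });
  });
});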

How to add auto-increment to a Mongoose model?

I want to add auto-increment to my model class in the MEAN stack. How do I do that for the class below?
var mongoose = require('mongoose');
mongoose.connect('mongodb://localhost/sampleApp');

module.exports = mongoose.model('User', {
  email    : { type: String, required: true, index: { unique: true } },
  password : { type: String }
});
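One common pattern, sketched here under made-up names (Counter, userId), is a separate counters collection that is incremented atomically in a pre-save hook; the mongoose-sequence plugin packages the same idea:

var mongoose = require('mongoose');
mongoose.connect('mongodb://localhost/sampleApp');

var CounterSchema = new mongoose.Schema({
  _id: String,                        // name of the sequence, e.g. 'userId'
  seq: { type: Number, default: 0 }
});
var Counter = mongoose.model('Counter', CounterSchema);

var UserSchema = new mongoose.Schema({
  userId   : { type: Number, index: { unique: true } },
  email    : { type: String, required: true, index: { unique: true } },
  password : { type: String }
});

// Atomically fetch the next sequence number on first save.
UserSchema.pre('save', function(next) {
  var doc = this;
  if (!doc.isNew) return next();
  Counter.findByIdAndUpdate(
    'userId',
    { $inc: { seq: 1 } },
    { new: true, upsert: true },
    function(err, counter) {
      if (err) return next(err);
      doc.userId = counter.seq;
      next();
    }
  );
});

module.exports = mongoose.model('User', UserSchema);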
