How do I select a certain set of rows from a SQL database by the time they were inserted? I don't see any related documentation on how to do this using the mssql module in Node.js... could anyone suggest some reading material or something else? So my question is: how do I create a timestamp column that is populated when data is inserted into the database?
Thank you
The docs seem straightforward on how to achieve it:
const sql = require('mssql')

;(async () => {
  try {
    const pool = await sql.connect('mssql://username:password@localhost/database')
    const result = await sql.query`SELECT * FROM TABLE WHERE DATE BETWEEN '09/16/2010 05:00:00' AND '09/21/2010 09:00:00'`
    console.dir(result)
  } catch (err) {
    // ... error checks
  }
})()
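The second half of the question (how to get a timestamp column populated at insert time) isn't covered by the query above. A common approach, sketched here with hypothetical table and column names, is a DATETIME column with a GETDATE() default, so SQL Server fills it in on every insert:

```javascript
// A hedged sketch (MyTable/CreatedAt are hypothetical names): give the table
// a CreatedAt column that SQL Server populates automatically at insert time.
const createdAtDdl = `
  ALTER TABLE MyTable
  ADD CreatedAt DATETIME NOT NULL
  CONSTRAINT DF_MyTable_CreatedAt DEFAULT GETDATE()
`
// Run once, e.g. await sql.query(createdAtDdl); rows inserted afterwards
// carry their insert time without the app supplying it, so you can filter:
//   SELECT * FROM MyTable WHERE CreatedAt BETWEEN @from AND @to
```

After that, the BETWEEN query shown above works against CreatedAt with no application-side bookkeeping.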
I'm using Sequelize with transactions, and I have to make a lot of inserts every night. My fear is that these inserts/changes are stored in memory until the transaction is committed, which could crash the server and lose them all. Or are these changes stored and handled by the DBMS (in this case Aurora/PostgreSQL), so I don't have to worry about anything?
Help!
I'm using Express 4 and Sequelize 5, and this will probably run as a cron job.
This is an abstract example of my structure:
const db = require('../database/models')

class Controller {
  async test (req, res) {
    let transaction = await db.sequelize.transaction()
    try {
      await this.storeData(req.body, transaction)
      await transaction.commit()
      res.status(200).end()
    } catch (error) {
      if (transaction) await transaction.rollback()
      res.status(400).end()
    }
  }

  async storeData (params, transaction = null) {
    // Calculation of the data to insert
    var records = []
    await Promise.all(records.map(item => db.MyModel.create(item, { transaction })))
  }
}
A transaction in Sequelize is just a wrapper over a DB transaction, and of course a transactional DBMS has a transaction log and stores all ongoing transaction operations there, not in your application's memory.
One edge case would be if you really do take too many objects and insert them all in one operation, so I'd recommend dividing a huge number of rows into smaller batches.
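A hedged sketch of that batching idea (the chunk helper and batch size are my own, not from the question's code): split the records into fixed-size chunks and insert one chunk at a time instead of one create per row.

```javascript
// Split an array into fixed-size batches.
function chunk (items, size) {
  const batches = []
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size))
  }
  return batches
}

// Inside storeData, each batch could then be inserted in one round trip
// with Sequelize's bulkCreate, still under the same transaction:
//   for (const batch of chunk(records, 1000)) {
//     await db.MyModel.bulkCreate(batch, { transaction })
//   }
```

Sequential batches also avoid firing thousands of concurrent INSERTs at the pool, which is what `Promise.all` over per-row `create` calls would do.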
I am executing raw SQL to delete some records that I added for a test. If I run the same query in Management Studio it works fine, but when I run it through EF Core 2.0 it throws the error below:
System.Data.SqlClient.SqlException: 'Conversion failed when converting the nvarchar value '1,2' to data type int.'
Code
var idList = await _context.User.ToListAsync();
var ids = string.Join(",", idList.Select(x => x.Id));
await _context.Database.ExecuteSqlCommandAsync($"Delete from User where Id in ({ids}) and RoleId = {contact.RoleId}");
Query executing
Delete from sale.WatchList where OfferId in (1,2) and UserId = 9
Could anybody please advise what is wrong with the above code?
Thanks
EF Core transforms interpolated strings into parameterized queries, both to create reusable query plans and to protect against SQL injection vulnerabilities. See: Raw SQL Queries - EF Core - Passing Parameters.
So
$"Delete from User where Id in ({ids}) and RoleId = {contact.RoleId}"
is transformed into
Delete from User where Id in (@ids) and RoleId = @RoleId
With SqlParameters bound.
If that's not what you want, just build the SQL Query on a previous line.
This will not work as-is; you have to build a dynamic query. Please try the one below:
var idList = await _dataContext.User.ToListAsync();
var ids = string.Join(",", idList.Select(x => x.Id));
await _dataContext.Database.ExecuteSqlCommandAsync($"execute('Delete from User where Id in ({ids}) and RoleId = {contact.RoleId}')");
Although the accepted answer does work, it creates a lot of warnings, so for now I am using what @Abu Zafor suggested, with a small change/fix:
await _dataContext.Database.ExecuteSqlCommandAsync($"execute('Delete from User where Id in ({ids}) and RoleId = {contact.RoleId}')",ids,contact.RoleId);
In my app I'm using the node mssql module to insert datetimes into a SQL Server database. The problem is that in the database the datetime is always changed: the time is one hour less than the one I input.
async function insertDate (date, logIO) {
  try {
    var d = new Date(date)
    var name = 'zdzmar'
    const pool = await sql.connect(config)
    let result = await pool.request()
      .input('date', TYPES.DateTime, d)
      .input('name', TYPES.VarChar, name)
      .input('logIO', TYPES.TinyInt, logIO)
      .query('insert into clock (date, name, logIO) values (@date, @name, @logIO)')
  } finally {
    sql.close()
  }
}
Where is the problem?
It seems like a bug in the mssql package. I ran into the same problem: even when I formatted the datetime as a string, the date still got converted to a different datetime; in my case 12:04 pm got converted to 8:04 pm, which is not directly related to time zones or UTC.
A workaround is to use sql.VarChar instead of sql.DateTime.
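For that workaround, a minimal sketch of a formatting helper (the helper name is my own): render the JS Date as a 'YYYY-MM-DD HH:mm:ss' string in local time, so no driver-side conversion can occur when it is bound as sql.VarChar:

```javascript
// Format a JS Date as 'YYYY-MM-DD HH:mm:ss' using local time components,
// suitable for binding with sql.VarChar instead of sql.DateTime.
function toSqlDateTimeString (date) {
  const pad = n => String(n).padStart(2, '0')
  return `${date.getFullYear()}-${pad(date.getMonth() + 1)}-${pad(date.getDate())} ` +
         `${pad(date.getHours())}:${pad(date.getMinutes())}:${pad(date.getSeconds())}`
}
```

The bound value then goes through as plain text and SQL Server parses it on its side, e.g. `.input('date', TYPES.VarChar, toSqlDateTimeString(d))`.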
I am trying to get a SQL Server date as a date object using the npm mssql package. Unfortunately it returns type = object (OrderDate is a datetime column in SQL Server).
How can I easily return the date column as an actual Node.js Date object?
const sql = require('mssql')

const ordersSelect = `
  SELECT TOP 20
    OrderDate
  FROM Orders
`
let pool = await sql.connect('mssql://sa:wonton$98@TS1\\SQL2014/Artoo')
const request = pool.request()
const result = await request.query(ordersSelect)
var orders = result.recordset
As far as I know, the Node module mssql cannot be configured to map SQL datetimes to Node Date objects; everything just comes back as strings inside the recordset object.
However, you could write some code to transform the data set after it has been loaded by the module. JavaScript can convert strings in the format yyyy-mm-dd hh:mm:ss.sss (as returned by SQL Server) to its own Date object, so something like
for (var row in orders) {
orders[row].OrderDate = new Date(orders[row].OrderDate)
// or
orders[row]["OrderDate"] = new Date(orders[row]["OrderDate"])
}
added to the end of your code sample would work.
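If you prefer not to mutate the record set in place, a non-mutating variant of the same idea (using a small inline sample array in place of the real recordset) looks like:

```javascript
// Sample rows standing in for the recordset returned by the query;
// each OrderDate arrives as a string in SQL Server's format.
const rows = [{ OrderDate: '2010-09-16 05:00:00' }]

// Build a new array with OrderDate converted to a real Date object.
const converted = rows.map(row => ({ ...row, OrderDate: new Date(row.OrderDate) }))
```

Either form leaves you with genuine Date objects you can compare, format, or serialize as needed.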
I'm working with Express, Node, and the mssql package to create a backend for an application, and I would like to do as much processing on the server as possible before sending data to the client.
I have two queries I need to run, but I need to combine their data in a specific way before sending it to the client.
The first query gathers data with a one-to-one relationship, and the other has a one-to-many relationship. I would like to append the one-to-many results onto the one-to-one results.
First Query:
select updatedInfo.*,
nameInfo.*, nameInfo.updated as nameUpdated, nameInfo.alreadyCorrect as nameWasCorrect,
addressInfo.*, addressInfo.alreadyCorrect as addWasCorrect, addressInfo.updated as addUpdated,
phoneInfo.*, phoneInfo.alreadyCorrect as phoneWasCorrect, phoneInfo.updated as phoneUpdated,
emailInfo.*, emailInfo.alreadyCorrect as emailWasCorrect, emailInfo.updated as emailUpdated
from updatedInfo join nameInfo on updatedInfo.IndivId=nameInfo.nameInfoId
join addressInfo on updatedInfo.IndivId=addressInfo.addressInfoId
join emailInfo on updatedInfo.IndivId=emailInfo.emailInfoId
join phoneInfo on updatedInfo.IndivId=phoneInfo.phoneInfoId
where updatedInfo.correctedInFNV is not null
order by updatedInfo.IndivId
Second Query: ID is a variable passed to the query
select * from positionInfo where IndivId='${id}'
How would I go about appending the second query results to the first on the correct record?
I'm using the mssql package like this:
var sqlConfig = {
  server: 'IP',
  database: 'db',
  user: 'sweeper',
  password: 'pass'
}

const connPool = new mssql.ConnectionPool(sqlConfig, err => {
  if (err) console.error(err)
})

var query = {
  getAllUpdatedPool: () => {
    return connPool.request().query(`----first query----`)
      .then((set) => {
        console.log(set)
        return set
      })
      .catch((err) => {
        console.error(err)
        return err
      })
  },
  getPositionByIdPool: (id) => {
    return connPool.request().query(`----second query----`)
      .then((set) => {
        console.log(set)
        return set
      })
      .catch((err) => {
        console.error(err)
        return err
      })
  }
}
How should I call these so that the results of the second query are added to the results of the first one as an additional property? Callbacks are making this confusing.
It looks like both queries execute on the same server; have you considered using subqueries? (https://learn.microsoft.com/en-us/sql/relational-databases/performance/subqueries?view=sql-server-2017). If you can express what you're trying to do in SQL, it will probably be 1) cleaner and 2) faster to do it with subqueries than to merge record sets manually. If the tables live on different servers, you could use linked servers to achieve the same subquery result.
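If the merge does have to happen in Node, a hedged sketch of the combining step (attachPositions is a hypothetical helper; IndivId and the query functions come from the question): run both queries with async/await, then attach each record's position rows by their shared key.

```javascript
// Attach one-to-many position rows to each one-to-one record,
// matched on the shared IndivId key.
function attachPositions (records, positions) {
  return records.map(record => ({
    ...record,
    positions: positions.filter(p => p.IndivId === record.IndivId)
  }))
}

// With async/await the two queries read top-to-bottom instead of as
// nested callbacks, e.g.:
//   const records = (await query.getAllUpdatedPool()).recordset
//   const positions = (await query.getPositionByIdPool(id)).recordset
//   const combined = attachPositions(records, positions)
```

Since both query functions already return promises, awaiting them avoids the callback nesting entirely.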