Saving time data to database

Y-m-d\TH:i:sP
How should I save time data in this format to the database? Should it be a native date/time type, or VARCHAR?
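As a sketch of the usual approach (assuming SQL Server; the table and column names here are made up): the format `Y-m-d\TH:i:sP` is ISO 8601 with a timezone offset, which maps naturally to a native type such as DATETIMEOFFSET. A native type can be indexed, range-queried and validated by the engine, none of which a VARCHAR column gives you.

```sql
-- Sketch, assuming SQL Server; table/column names are hypothetical.
-- DATETIMEOFFSET preserves the timezone offset that Y-m-d\TH:i:sP carries.
CREATE TABLE events (
    id INT IDENTITY PRIMARY KEY,
    occurred_at DATETIMEOFFSET NOT NULL
);

-- The ISO 8601 string converts directly:
INSERT INTO events (occurred_at) VALUES ('2024-05-01T14:30:00+02:00');
```

Storing the string in VARCHAR would still work for display, but the database could no longer sort, compare or index the values as points in time.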

Related

What should be the database for storing large lookup tables?

I have a requirement where I need to store a very large lookup file (10 million records) in a database.
The file will look like this:
{
"users":["user1","user2","user3","user4"]
}
This file will also be updated from time to time by the admin. What database should I choose here?

What is the best way to update multiple records in a database (800,000 rows) and persist the new data from a CSV file using Spring Batch?

I have a CSV file that contains a large amount of data. Every time the user uploads a new file, the old data is updated or deleted (depending on the file) and the new data is saved.
I am using Spring Batch for this task.
I created a job that contains two steps:
first step: a tasklet that updates the old data
second step: a step that contains a reader, processor and writer to persist the new data in chunks
The problem is that the save and update take very long: 12 minutes for a file that contains 80,000 rows.
Can I optimize the time for this job?
Importing and exporting big data, updating, deleting, updating by joining tables, and searching are all much faster inside the database than in application code. I recommend the following:
Use the SQL Server BULK INSERT command to import the data from the CSV into the database. For example, 10 million records can be imported this way in about 12 seconds.
After importing the data, you can update, delete or insert new data in the database by joining against the imported table.
This is the best way, I think.
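The two steps above can be sketched as follows (assuming SQL Server; the file path, table and column names are all made up for illustration):

```sql
-- Sketch: bulk-load the CSV into a staging table, then apply it set-based.
-- All object names and the file path are hypothetical.
CREATE TABLE staging_users (id INT, name VARCHAR(100), email VARCHAR(200));

BULK INSERT staging_users
FROM 'C:\data\upload.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

-- One MERGE replaces the row-by-row update/insert/delete work:
MERGE users AS t
USING staging_users AS s ON t.id = s.id
WHEN MATCHED THEN
    UPDATE SET t.name = s.name, t.email = s.email
WHEN NOT MATCHED BY TARGET THEN
    INSERT (id, name, email) VALUES (s.id, s.name, s.email)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
```

The point of the staging table is that the update/delete/insert logic becomes a single set-based statement instead of 80,000 individual round trips from the Spring Batch writer.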

How can I change the data in TDengine?

I'm using the TDengine database for collected sensor data. Sometimes there are errors in the collected data, and these erroneous rows are stored in the database as well. The data is inserted by timestamp.
How can I change a row when I find an error in it?
You cannot if your database's UPDATE option is 0, but you can update the data by issuing
insert into dbname.tableName values(timestamp_value, col1_value, ...)
if the UPDATE option is 1 or 2. Just keep in mind that timestamp_value must match the timestamp of the row you want to update.
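A minimal sketch of this insert-as-update pattern (assuming TDengine 2.x syntax; the database, table and column names are made up). The UPDATE option has to be set when the database is created: 0 means rows with duplicate timestamps are discarded, 1 means a duplicate timestamp overwrites the whole row, and 2 means it updates only the non-NULL columns.

```sql
-- Sketch, assuming TDengine 2.x; all names are hypothetical.
CREATE DATABASE sensordb UPDATE 1;
USE sensordb;
CREATE TABLE readings (ts TIMESTAMP, temperature FLOAT, humidity FLOAT);

-- Original (erroneous) reading:
INSERT INTO readings VALUES ('2024-05-01 14:30:00.000', -999.0, 41.5);

-- Correction: inserting with the SAME timestamp overwrites the row
-- because the database was created with UPDATE 1.
INSERT INTO readings VALUES ('2024-05-01 14:30:00.000', 23.7, 41.5);
```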

SQL Server access to old data that has backed up and not available in current version

I have a UI where users can search for student IDs. The current database contains student data for the last 2 years; data older than that has been "FULL" backed up into files whose names contain the date range, like backup_db_2017_01_to_2018_01.
Currently when the user searches for an old student ID:
I search the current database and, if there is no data, the system automatically restores the most recent backup and merges its data with the current database. If the ID is not in that backup, it restores the next one, and so on...
This way, too much data gets merged into the current database and it takes too long. In the worst case, the student ID is in the oldest backup.
I wonder what is the best way to do that?
I assume that you have the space to RESTORE and MERGE all of the old backups?
You could consider merging all of the old data onto a READ-ONLY FILEGROUP so that it is always available but not able to be updated.
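A rough sketch of that approach (assuming SQL Server; the database, logical file and filegroup names are all invented): restore each old backup once, move its rows onto an archive filegroup of the current database, and then mark that filegroup read-only so the merged history is always searchable but can never be modified.

```sql
-- Sketch, assuming SQL Server; all names and paths are hypothetical.
-- 1) Restore an old backup side by side with the current database.
RESTORE DATABASE StudentArchive
FROM DISK = 'D:\backups\backup_db_2017_01_to_2018_01.bak'
WITH MOVE 'StudentDb'     TO 'D:\data\StudentArchive.mdf',
     MOVE 'StudentDb_log' TO 'D:\data\StudentArchive.ldf';

-- 2) Copy the archived rows into tables that live on an ARCHIVE
--    filegroup of the current database (created beforehand), e.g.:
INSERT INTO StudentDb.dbo.students_archive (student_id, name, enrolled)
SELECT student_id, name, enrolled
FROM StudentArchive.dbo.students;

-- 3) Freeze the filegroup so the merged history cannot be updated.
ALTER DATABASE StudentDb MODIFY FILEGROUP ARCHIVE READ_ONLY;
```

Done once per backup file, this removes the restore-on-demand step entirely: every search hits one database, and the old data costs nothing at write time because it is read-only.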

Populate SQL database from textfile on a background thread constantly

Currently, I would like to provide this as an option to the user when storing data to the database:
save the data to a file and use a background thread to read data from the textfile into SQL Server.
Flow of my program:
- A stream of data comes in from a server constantly (100 items per second).
- As another user option, I want to store the data in a textfile and have a background thread constantly copy data from the textfile into the SQL database.
Has this been done before?
Cheers.
Your question is indeed a bit confusing.
I'm guessing you mean that:
100 rows per second come from a certain source or server (e.g. log entries)
One option for the user is textfile caching: the rows are stored in a textfile and periodically an incremental copy of the contents of the textfile into (an) SQL Server table(s) is performed.
Another option for the user is direct insert: the data is stored directly in the database as it comes in, with no textfile in between.
Am I right?
If yes, then you should do something along the lines of:
Create a trigger on INSERT actions on the table.
In that trigger, check which user is inserting. If the user has textfile caching disabled, the insert goes ahead. Otherwise, the data is redirected to a textfile (or a caching table).
Create a stored procedure that checks the caching table or textfile for new data, copies the new data into the real table, and deletes the cached data.
Create a SQL Server Agent job that runs the above stored procedure every minute, hour, day...
Since the interface from T-SQL to textfiles is not very flexible, I would recommend using a caching table instead. Why a textfile?
And for that matter, why cache the data before inserting it into the table? Perhaps we can suggest a better solution, if you explain the context of your question.
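The trigger-plus-caching-table variant described above can be sketched like this (assuming SQL Server; every object name, including the user-preference table, is made up for illustration):

```sql
-- Sketch, assuming SQL Server; all object names are hypothetical.
CREATE TABLE log_cache (
    user_name SYSNAME, logged_at DATETIME2, message NVARCHAR(400));

-- Redirect inserts from users who have caching enabled.
CREATE TRIGGER trg_log_insert ON log_entries
INSTEAD OF INSERT
AS
BEGIN
    -- Rows from caching users go to the cache table...
    INSERT INTO log_cache (user_name, logged_at, message)
    SELECT SUSER_SNAME(), i.logged_at, i.message
    FROM inserted AS i
    WHERE EXISTS (SELECT 1 FROM user_prefs AS p
                  WHERE p.user_name = SUSER_SNAME() AND p.caching = 1);

    -- ...everyone else inserts directly.
    INSERT INTO log_entries (logged_at, message)
    SELECT i.logged_at, i.message
    FROM inserted AS i
    WHERE NOT EXISTS (SELECT 1 FROM user_prefs AS p
                      WHERE p.user_name = SUSER_SNAME() AND p.caching = 1);
END;
GO

-- Flush the cache into the real table; run this from a SQL Server Agent job.
CREATE PROCEDURE flush_log_cache
AS
BEGIN
    BEGIN TRANSACTION;
    INSERT INTO log_entries (logged_at, message)
    SELECT logged_at, message FROM log_cache;
    DELETE FROM log_cache;
    COMMIT;
END;
```

This is only a shape, not production code: a real version would need to handle concurrent writes during the flush and make sure the Agent job's own inserts are not re-cached by the trigger.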
