I want to check yesterday’s data and improve performance in TDengine database - tdengine

The timestamps I save in my TDengine tables are all in UTC+0, but both the server and the client are in the UTC+8 time zone.
I want to query yesterday's data (in local time). How should I write the query to get good performance?
Version: 2.6
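Since the stored timestamps are UTC but "yesterday" is defined by the local UTC+8 day, one approach is to compute the local day's boundaries, convert them to UTC, and filter with a half-open range on the primary timestamp column, which lets TDengine prune data by time. A minimal sketch in Python (the table name `readings` and column `ts` are made up for illustration):

```python
from datetime import datetime, timedelta, timezone

# Local zone is UTC+8; timestamps in the table are stored as UTC.
LOCAL = timezone(timedelta(hours=8))

def yesterday_utc_window(now_local):
    """Half-open [start, end) UTC range covering 'yesterday' in local time."""
    today_local = now_local.astimezone(LOCAL).replace(
        hour=0, minute=0, second=0, microsecond=0)
    start_local = today_local - timedelta(days=1)
    return start_local.astimezone(timezone.utc), today_local.astimezone(timezone.utc)

now = datetime(2024, 1, 15, 10, 30, tzinfo=LOCAL)
start, end = yesterday_utc_window(now)
sql = (f"SELECT * FROM readings WHERE ts >= '{start:%Y-%m-%d %H:%M:%S}' "
       f"AND ts < '{end:%Y-%m-%d %H:%M:%S}'")
```

The half-open range (`>=` start, `<` end) avoids both double-counting midnight rows and relying on server-side time zone conversion at query time.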

Related

Database decision - Cassandra or PostGres [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
I'm trying to decide whether to use Postgres or Cassandra as a data store. Data will initially be inserted into this new store via:
1. A batch process inserting 100 MB per day.
2. At a later point, the existing batch process plus CRUD operations at a rate of 20 operations per second.
From reading articles and various conversations with others, I have determined:
**Cassandra**
- Type: NoSQL
- Read speed: fast
- Write speed: slow
- Storage: distributed cluster
- Scaling: horizontal

**Postgres**
- Type: relational
- Storage: single instance
- Scaling: horizontal
Some resources I have been reading :
https://www.quora.com/How-do-you-compare-Postgres-to-Cassandra-for-big-data-application
https://www.quora.com/How-do-you-approach-choosing-a-database-architecture-and-design
https://www.thegeekstuff.com/2014/01/sql-vs-nosql-db/?utm_source=tuicool
What other considerations should be taken into account before making the decision? Are there other data points I should include in the decision process, such as the expected number of reads, writes, updates, and deletes on the table?
I could use Postgres and then migrate to Cassandra at some later point, but I would prefer to avoid the overhead of a database migration.
I work with Postgres because it is highly customizable for developers, and with features like JSONB it combines NoSQL capabilities with the best parts of a relational database.

How to sync two databases in a microservice architecture using CQRS, with separate ones for read/write [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
I was asked this question in an interview:
How do you sync data between two databases? There will be time delays, etc. How do you handle that?
Background: I had mentioned microservice architecture and using CQRS for performance (a separate read/query database and a separate write/command database).
Now, if a customer enters or modifies data, how will it be replicated/synced into the read database?
I talked about things like Cosmos DB options that prevent dirty reads, and I also mentioned caching. But I am not certain what the various options are for doing the sync. The interviewer specifically asked how I would sync the two databases at the SQL DB level.
CQRS is a pattern which dictates that the responsibility for Command and Query operations be separated.
There are multiple ways you can synchronize the data between databases. You can use a master-slave configuration, oplog replication, or a mechanism specific to your database.
But what's more important here is deciding which strategy to use. Since you are using the CQRS pattern, you have more than one data store (a write store and a read store), and there is a fair chance that these stores will be network-partitioned. In that case you have to decide what matters most to you, consistency or availability, which is generally governed by what the business requires.
So in general, which replication strategy to use depends on whether your business requires consistency or availability.
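The write-to-read synchronization described above can be sketched with an ordered event log that the read side replays. This is a minimal in-memory illustration (the stores, event shape, and function names are invented, not tied to any particular database):

```python
# CQRS sync sketch: the command side appends every change to an ordered
# event log (like an oplog / transaction log); the query side replays
# events it has not yet seen. Until sync() runs, reads are stale but
# the system stays available (eventual consistency).
write_store = {}   # command side: source of truth
read_store = {}    # query side: eventually-consistent projection
event_log = []     # ordered change records

def handle_command(key, value):
    """Apply a write and append the change to the log."""
    write_store[key] = value
    event_log.append(("upsert", key, value))

def sync(cursor):
    """Replay events past `cursor` into the read store; return new cursor."""
    for op, key, value in event_log[cursor:]:
        if op == "upsert":
            read_store[key] = value
    return len(event_log)

handle_command("order-1", {"status": "created"})
handle_command("order-1", {"status": "paid"})
cursor = sync(0)   # read store catches up; before this call it was empty
```

The cursor makes the replay incremental and restartable, which is the same idea real log-based replication uses to resume after delays or outages.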
References:
CAP Theorem: https://en.wikipedia.org/wiki/CAP_theorem
Replication (Driven by CAP Theorem): https://www.brianstorti.com/replication/
There are a couple of options for database syncing in SQL Server:
1. SQL Server Always On (SQL Server 2012 onwards). With this feature you create a primary replica and one or more secondary replicas; once Always On is configured, the secondary replicas are updated automatically from the primary. It also provides an HA/DR capability: if the primary replica goes down, a secondary replica becomes active and takes over the primary role.
https://learn.microsoft.com/en-us/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server?view=sql-server-2017
2. SQL Server Replication: merge replication, transactional replication, etc.
https://learn.microsoft.com/en-us/sql/relational-databases/replication/types-of-replication?view=sql-server-2017

SQL Server replication model snapshot, transactional and merge - which is best [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
I am trying to implement SQL Server database replication between two branch servers and a head office server.
My application is distributed: the main application is hosted at the head office, which controls the master data and final approvals. The branch servers are located in two other countries and are used to enter daily transactions.
Since internet bandwidth is very limited, I plan to run the replication only during off hours (i.e., midnight to 8 AM); during business hours it is difficult to synchronize. All tables are designed to validate and avoid duplication and other errors.
There is also a chance of internet outages lasting a couple of days, possibly up to a week.
There are three types of tables:
1. Bidirectional - needs to sync both ways (HO to branch and branch to HO; approvals)
2. Sync from branch to HO (transactions)
3. Sync from HO to branches (masters)
When I configure replication, I am confused between the different types: snapshot, transactional, and merge replication.
Can anybody suggest which method is best for my model?
I am also facing issues with primary and foreign keys being lost after configuring replication. Any idea why this happens?
Transactional replication is best for one-way sync, and merge replication for bidirectional sync; those would be the best options.
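One reason merge replication is the bidirectional choice is that both sites may change the same row while disconnected, so a conflict policy is required. This toy sketch resolves conflicts with last-writer-wins on a modification timestamp (the row shapes and names are made up; real merge replication offers configurable resolvers):

```python
# Merge two site snapshots shaped as {pk: (value, modified_at)}.
# When both sites changed the same row, the newer change wins.
def merge(ho_rows, branch_rows):
    merged = dict(ho_rows)
    for pk, (value, ts) in branch_rows.items():
        if pk not in merged or ts > merged[pk][1]:
            merged[pk] = (value, ts)
    return merged

# HO approved cust-1 at t=10; the branch edited it earlier, at t=8.
ho = {"cust-1": ("approved", 10), "cust-2": ("draft", 5)}
branch = {"cust-1": ("pending", 8), "cust-3": ("new", 12)}
result = merge(ho, branch)
```

With a week-long outage possible, whichever resolver you choose has to give a deterministic answer for exactly this kind of overlapping edit.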

How to achieve 100 000 users concurrent connectivity to sql server database? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
How can I achieve 100,000 concurrent user connections to a SQL Server database?
I have a scenario where 100,000 users will log in to a website that uses SQL Server as the backend, to perform inserts on the same table. Is this practical? How can it be achieved? How should the database be designed?
SQL Server can handle a very large number of users inserting rows. What happens is: a user connects, their identity is checked to confirm they can connect and have the correct privileges, the insert happens, and SQL Server then drops the connection. There is no reason for a user to maintain a connection once the insert has happened and SQL Server has returned a status code indicating success or failure.
I have had success with a very large number of users doing this against a single audit table, which I made a heap without any keys. That way SQL Server just adds consecutive rows to consecutive pages, and this happened fast enough for the inserts to succeed. There is no guarantee the inserts will occur in this fashion (though in practice they did), and it handled very high volumes of data. You of course have to test whether your installation can handle the volume you anticipate. It does not matter in what order the data is stored, as long as it is saved successfully.
I have never seen an installation that can handle 100,000 active sessions; the number of locks would probably overwhelm any conceivable set of hardware. You may also want to run SELECT @@MAX_CONNECTIONS on the intended machine: the number returned is the maximum number of simultaneous connections the current instance can (theoretically) handle. On both SQL Server 2008 and 2012 Enterprise it returns 32,767.

Database archive technique with 2 databases in SQL Server

I have a database which contains data for events. With every event the database grows by about 1 GB.
I plan to create a separate database for the archive. After each event completes, I plan to move its data into the archive database using a stored procedure.
I have already added many indexes to the database to improve speed.
Is this a good technique, or is there a better way to improve database performance?
Thanks in advance.
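The move-to-archive step can be sketched as a copy followed by a delete inside one transaction, so an interruption never leaves an event half-moved. This illustration uses SQLite's ATTACH to stand in for the second database; in SQL Server the equivalent would be a cross-database INSERT ... SELECT plus DELETE inside a stored procedure (the table and column names here are invented):

```python
import sqlite3

# Main database with a mix of finished and unfinished events.
main = sqlite3.connect(":memory:")
main.execute("CREATE TABLE events (id INTEGER, name TEXT, done INTEGER)")
main.executemany("INSERT INTO events VALUES (?, ?, ?)",
                 [(1, "expo", 1), (2, "summit", 0), (3, "fair", 1)])
main.commit()

# Attach a second (archive) database and mirror the schema there.
main.execute("ATTACH DATABASE ':memory:' AS archive")
main.execute("CREATE TABLE archive.events (id INTEGER, name TEXT, done INTEGER)")

# Copy finished events, then remove them, as a single atomic unit.
with main:
    main.execute("INSERT INTO archive.events SELECT * FROM events WHERE done = 1")
    main.execute("DELETE FROM events WHERE done = 1")

live = main.execute("SELECT COUNT(*) FROM events").fetchone()[0]
archived = main.execute("SELECT COUNT(*) FROM archive.events").fetchone()[0]
```

For very large events, doing the copy/delete in batches keeps each transaction (and the transaction log) small, which matters more than the choice of two databases versus one.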