Is it possible to query across multiple databases in TDengine? For example, I have a database with a super table which has some performance data for wind turbines, and another database with a super table which has maintenance data. Can I select from the performance super table when there is no ongoing maintenance?
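For reference, here is a minimal sketch of the setup with made-up names (perf_db, maint_db, turbine_perf, maintenance_events and all columns are hypothetical). I know a super table can be qualified with its database name, so both databases are reachable from one connection; what I am not sure about is whether the two queries below can be combined into a single statement.

-- Performance data in the first database (db_name.stable_name prefix):
SELECT ts, turbine_id, output_kw
FROM perf_db.turbine_perf
WHERE ts >= NOW - 1d;

-- Ongoing maintenance windows in the second database:
SELECT turbine_id, start_ts
FROM maint_db.maintenance_events
WHERE end_ts IS NULL;

-- Goal: return performance rows only for turbines/intervals with no
-- matching maintenance row above, ideally without merging in the application.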
Related
Is there a way to reset the table SYS.COL_USAGE$? Do the usage counts keep going up forever?
Of course, I can truncate the table or do DML operations, but this is a SYSTEM table and I prefer not to do that.
Background: We have an unusual data warehouse setup with two databases: a warehouse database that the overnight ETL writes to, and a user database which is customer-facing and is cloned from the warehouse database before the start of the day. We gather stats in the warehouse database, and they get copied to the user database as part of the clone.
However, I realized that the contents of SYS.COL_USAGE$, which drive histogram creation, are based entirely on ETL queries and not user queries.
DBMS_STATS.RESET_COL_USAGE is your friend here.
https://docs.oracle.com/en/database/oracle/oracle-database/19/arpls/DBMS_STATS.html#GUID-0ED25A41-8642-46E4-AB5C-AAC08E622A8F
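A minimal example (WAREHOUSE_OWNER and FACT_SALES are placeholder names); if I read the docs correctly, passing NULL for tabname clears the recorded usage for every table in the schema:

-- Reset the recorded column usage for a single table:
EXEC DBMS_STATS.RESET_COL_USAGE(ownname => 'WAREHOUSE_OWNER', tabname => 'FACT_SALES');

-- Reset the recorded column usage for the whole schema:
EXEC DBMS_STATS.RESET_COL_USAGE(ownname => 'WAREHOUSE_OWNER', tabname => NULL);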
I have a table with 200 million records. This table is updated every minute and new records are added to it. I want to query it with GROUP BY and SUM for KPI analysis. What is the best way to query the table without performance drawbacks? Currently, I save the result in a separate table and update that table with a SQL Server trigger, but that isn't a good approach. Is there any other way you can suggest?
If you use SQL Server 2016 or a later version, you can use the Real-Time Operational Analytics approach to overcome this type of issue. Real-Time Operational Analytics lets you run analytics and OLTP workloads on the same database, so you can avoid an ETL process. It could be an option for your issue.
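In practice this usually means adding a nonclustered columnstore index to the OLTP table, so the GROUP BY/SUM queries are served from the columnstore while the per-minute inserts keep hitting the rowstore. A minimal sketch with hypothetical table and column names:

-- Hypothetical OLTP table that receives the per-minute inserts:
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_SensorReadings
    ON dbo.SensorReadings (MeterId, ReadingDate, ReadingValue);

-- The KPI aggregation can then run directly against the same table:
SELECT MeterId, SUM(ReadingValue) AS TotalValue
FROM dbo.SensorReadings
GROUP BY MeterId;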
Using another table is also a good solution if you store the events in that second table. You can aggregate the events by month, week, day, etc. and calculate the analysis from that.
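If you go that route, a scheduled job that upserts a pre-aggregated table is usually easier to keep correct than a trigger. A rough sketch at a daily grain, with hypothetical names:

CREATE TABLE dbo.KpiDaily (
    KpiDate date NOT NULL,
    MeterId int NOT NULL,
    Total decimal(18, 2) NOT NULL,
    CONSTRAINT PK_KpiDaily PRIMARY KEY (KpiDate, MeterId)
);

-- Refresh the current day periodically (e.g. from a SQL Agent job):
MERGE dbo.KpiDaily AS tgt
USING (
    SELECT CAST(ReadingDate AS date) AS KpiDate,
           MeterId,
           SUM(ReadingValue) AS Total
    FROM dbo.SensorReadings
    WHERE ReadingDate >= CAST(GETDATE() AS date)
    GROUP BY CAST(ReadingDate AS date), MeterId
) AS src
    ON tgt.KpiDate = src.KpiDate AND tgt.MeterId = src.MeterId
WHEN MATCHED THEN
    UPDATE SET Total = src.Total
WHEN NOT MATCHED THEN
    INSERT (KpiDate, MeterId, Total) VALUES (src.KpiDate, src.MeterId, src.Total);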
I have a master table and a corresponding configuration table. Each record in the master table can have more than 100,000 configuration records, and the master table can have more than 200 records. Which of the following approaches is best?
Having a separate configuration table for each master record
Having a single configuration table for all master records, with proper indexing and partitioning
You should have a single configuration table for all the master records; creating an individual table for each configuration will be very bad.
If you have individual tables for each configuration, you will end up with a lot of issues, such as:
Low maintainability.
You may have to write dynamic queries to fetch the data, which is not good.
To get the data for multiple configurations, you will have to use UNION, which will hurt performance.
Any new configuration in the system will lead to code changes.
Fetching data from 100,000 × 200 rows should be fine if your table is indexed properly.
For better performance, you can partition the configuration table on MasterId.
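The question does not name a DBMS, so purely as an illustration in SQL Server syntax (all object names and boundary values are made up), the single configuration table could be partitioned on MasterId like this:

CREATE PARTITION FUNCTION pfMasterId (int)
    AS RANGE RIGHT FOR VALUES (50, 100, 150);  -- boundaries chosen arbitrarily

CREATE PARTITION SCHEME psMasterId
    AS PARTITION pfMasterId ALL TO ([PRIMARY]);

CREATE TABLE dbo.Configuration (
    MasterId int NOT NULL,
    ConfigId bigint NOT NULL,
    ConfigKey varchar(100) NOT NULL,
    ConfigValue varchar(400) NULL,
    CONSTRAINT PK_Configuration PRIMARY KEY (MasterId, ConfigId)
) ON psMasterId (MasterId);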
My project database is huge and there are multiple stored procedure jobs running on a schedule. Basically, this is a bottleneck for database performance. I want to distribute the load to different DB servers and merge the data back into the primary DB.
My question is: how do I manage this, and which is the most efficient method?
My plan:
The primary database replicates data to multiple databases.
Specific jobs/SPs will be executed on the replica databases.
Once a job is done, merge the data (using SSIS) back into the primary DB.
Any ideas or suggestions on how to tackle it?
I am working on a high-performance dashboard project where the results are mostly aggregated data mixed with non-aggregated data. The first page is loaded by 8 different complex queries fetching this mixed data. The dashboard is served by a centralized database (Oracle 11g) which receives data from many systems in real time (using a replication tool). The data shown is produced by very complex queries (multiple joins, counts, group bys, and many where conditions).
The issue is that as the data grows, the DB queries are taking more time than defined/agreed. I am thinking of moving the aggregation functionality (all the counts) to a columnar database, say HBase, while the remaining row-level data would still be fetched from Oracle. The two would be merged on a key at the application layer. I need an expert opinion on whether this is the correct approach.
There are a few things which are not clear to me:
1. Will Sqoop be able to load data based on a query/view, or only tables? On a continuous basis or one time?
2. If a record is modified (e.g. its status is changed), how will HBase get to know?
My two cents: HBase is a NoSQL database built for fast lookup queries, not for aggregated, ad hoc queries.
If you are planning to use a Hadoop cluster, you can try Hive with the Parquet storage format. If you need near real-time queries, you can go with an MPP database. A commercial option is Vertica, or maybe Redshift from Amazon. For an open-source solution, you can use Infobright.
These columnar options are going to give you great aggregate query performance.
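For instance, with Hive the aggregate-heavy data could sit in a Parquet-backed table (the table and columns below are hypothetical):

CREATE TABLE dashboard_events (
    event_id BIGINT,
    status STRING,
    amount DOUBLE
)
PARTITIONED BY (event_date STRING)
STORED AS PARQUET;

-- A typical aggregate query served from the columnar copy:
SELECT event_date, status, COUNT(*) AS cnt, SUM(amount) AS total
FROM dashboard_events
GROUP BY event_date, status;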