I am using the TDengine database, but the table creation statement is not displayed completely. How can I solve this?
Version: TDengine 3.0
I have a series of tables in a TDengine database. If I execute "show tables", I can see all of them. However, when I execute some queries on a table, it reports "Fail to get table info".
What may be the problem?
I've created some Hive tables using a JDBC connection in a Python notebook on Databricks. This was in the Data Science and Engineering UI. I'm able to query the tables in a Databricks notebook and use direct SQL with the %sql magic command.
When switching to the Databricks SQL UI, I can still see the tables in the Hive metastore explorer. However, I'm not able to read the data; a very clear message says that only csv, parquet and so on are supported.
I found this surprising: since I can use the data in the Data Science and Engineering UI, why isn't that the case in Databricks SQL? Is there any solution to overcome this?
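For context, the kind of setup described above might look roughly like the sketch below in a Databricks Python notebook. This is a hypothetical illustration only; the URL, credentials, database and table names are placeholders, not from the original question, and spark is the session object that Databricks notebooks provide.

# Hypothetical sketch: registering a JDBC-backed table in the Hive metastore
# from a Databricks Python notebook. All connection details are placeholders.
spark.sql("""
    CREATE TABLE IF NOT EXISTS my_database.orders_jdbc
    USING JDBC
    OPTIONS (
      url      'jdbc:postgresql://example-host:5432/mydb',
      dbtable  'public.orders',
      user     'my_user',
      password 'my_password'
    )
""")

# A table like this shows up in the metastore and can be queried from a
# notebook (e.g. with %sql), but it is not a file-based (csv/parquet/Delta)
# table, which is what the Databricks SQL error message refers to.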
Yes, it's a known limitation that Databricks SQL currently supports only file-based formats. As I remember, it's related to the security model, plus the fact that DBSQL uses Photon under the hood, where JDBC integration might not be very performant. You may reach out to your solution architect or customer success engineer for information on whether it will be supported in the future.
The only current workaround would be to have a job that periodically reads all data from the database via JDBC and dumps it into a Delta table. Querying the Delta table could even be more performant than going through JDBC; the only issue is the freshness of the data.
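A minimal sketch of such a job in PySpark, assuming a scheduled Databricks notebook or job; the JDBC URL, credentials, and table names below are placeholders:

# Minimal sketch of a periodic job that copies a JDBC source into a Delta
# table so Databricks SQL can query it; connection details are placeholders.
source_df = (spark.read
             .format("jdbc")
             .option("url", "jdbc:postgresql://example-host:5432/mydb")
             .option("dbtable", "public.orders")
             .option("user", "my_user")
             .option("password", "my_password")
             .load())

# Overwrite the Delta table on each run; data freshness depends on how often
# the job is scheduled.
(source_df.write
 .format("delta")
 .mode("overwrite")
 .saveAsTable("my_database.orders_delta"))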
You can import a Hive table from cloud storage into Databricks using an external table and query it using Databricks SQL.
Step 1: Show the CREATE TABLE statement
Issue a SHOW CREATE TABLE <tablename> command on your Hive command line to see the statement that created the table.
Refer to the example below:
hive> SHOW CREATE TABLE wikicc;
OK
CREATE TABLE `wikicc`(
  `country` string,
  `count` int)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
STORED AS INPUTFORMAT
  'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
  '/user/hive/warehouse/wikicc'
TBLPROPERTIES (
  'totalSize'='2335',
  'numRows'='240',
  'rawDataSize'='2095',
  'COLUMN_STATS_ACCURATE'='true',
  'numFiles'='1',
  'transient_lastDdlTime'='1418173653')
Step 2: Issue a CREATE EXTERNAL TABLE statement
If the statement that is returned uses a CREATE TABLE command, copy the statement and replace CREATE TABLE with CREATE EXTERNAL TABLE.
EXTERNAL ensures that Spark SQL does not delete your data if you drop the table.
You can omit the TBLPROPERTIES field.
DROP TABLE wikicc;

CREATE EXTERNAL TABLE `wikicc`(
  `country` string,
  `count` int)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
STORED AS INPUTFORMAT
  'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
  '/user/hive/warehouse/wikicc'
Step 3: Issue SQL commands on your data
SELECT * FROM wikicc
Source: https://docs.databricks.com/data/data-sources/hive-tables.html
I'm using two tools, DbVisualizer and pgAdmin, connecting to the same PostgreSQL database, say db. The database db has a table timeRecord with a column whose data type is timestamp. I connect both pgAdmin and DbVisualizer to db and open the table timeRecord. The value in the timestamp column is different in DbVisualizer and pgAdmin, even though I'm connecting to the same database. My Java application uses the value seen in DbVisualizer, and my .NET app uses the value seen in pgAdmin, so there is a data mismatch because of this. Can anyone please help me with this?
Thank you.
I have just installed the IntelliJ IDEA plugin DB Navigator to view a PostgreSQL database, and I am wondering whether I can show the values of the user-defined columns or not. Also, is it possible to update/insert records with the GUI tool instead of writing SQL statements?
You don't need that plugin to achieve what you want. Just use the basic database integration.
Open the Database view: View > Tool Windows > Database (or click on Database on the right ribbon), add your Postgres database, select your table and open the Table Editor (F4). Now you can add, delete and update entries without writing SQL.
I have merge replication (SQL Server 2005) with a filter.
The filter is on table delLog and says WHERE 0 = 1, so all the data is only uploaded, not downloaded.
When I want to download my data and it comes to this table, it says "downloading table delLog" and hangs there for an hour...
I get no errors for this; it just hangs there...
Any ideas how to solve this problem?