I need to use MyBatis to serve data stored in the TDengine time-series database. But I found that the count() function returns nothing at all when there is no data.
taos> select server_version();
server_version() |
===================
2.0.20.12 |
Query OK, 1 row(s) in set (0.000156s)
taos> select count(*) from d_entrance_data;
Query OK, 0 row(s) in set (0.000905s)
I supposed it should at least return a row containing 0, like this; as it is, I can't process the result with MyBatis:
count(*) |
========================
0 |
Query OK, 1 row(s) in set (0.016176s)
It confuses me a lot.
Any idea?
This is a known issue, by design, in TDengine; it will not be fixed in the 2.x series.
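Since TDengine 2.x returns zero rows here rather than a 0, the client has to supply the default itself. A minimal, generic DB-API-style sketch (the helper name is mine, not part of MyBatis or TDengine):

```python
def count_or_zero(cursor):
    """Return the scalar from a COUNT(*) query, or 0 if the engine
    returned no row at all (as TDengine 2.x does on an empty table)."""
    row = cursor.fetchone()
    return row[0] if row is not None else 0
```

With MyBatis specifically, the equivalent move is mapping the result to a nullable Integer and defaulting null to 0 on the Java side.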
After importing a csv data set to sql server I am having problem with filtering.
I used the following query to create the table.
create table AmharicNews(
headline nvarchar(max),
category nvarchar(400),
article nvarchar(max)
)
I see no problem when using SELECT * FROM AmharicNews; it shows all of the data. But when I use a WHERE clause or the DISTINCT keyword, it doesn't work. For example, the following queries return wrong data.
select DISTINCT category from AmharicNews
It returns the following result which is wrong.
category
--------------------------------------------------------------------------------------------
ሀገር አቀፍ ዜና
(1 row affected)
Completion time: 2021-12-08T15:05:28.6984198+03:00
Expected result
category
--------------------------------------------------------------------------------------------
ሀገር አቀፍ ዜና
መዝናኛ
ስፖርት
ቢዝነስ
ፖለቲካ
ዓለም አቀፍ ዜና
(6 rows affected)
Completion time: 2021-12-08T15:18:30.6179762+03:00
And the following returns every row, when it's expected to return only the rows whose category equals ስፖርት:
select * from AmharicNews WHERE category = N'ስፖርት' --ስፖርት is Sport equivalent of Amharic
It's not just the above queries: every statement containing a WHERE clause misbehaves. This includes UPDATE statements, which end up affecting every row.
I am using SQL Server 2019
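These symptoms (DISTINCT collapsing to one row, WHERE matching every row) are characteristic of a column collation that assigns no sort weight to these Ethiopic code points, making all the values compare equal. A toy model of that mechanism in Python; the "collation" here is simulated, not SQL Server's actual algorithm, and for simplicity spaces are also treated as unweighted:

```python
# Toy model of the suspected failure: characters the collation gives no
# weight to are simply ignored during comparison.
def toy_collate(s, weighted=frozenset("abc")):
    # Keep only characters the (hypothetical) collation assigns weight to.
    return "".join(c for c in s if c in weighted)

cats = ["ሀገር አቀፍ ዜና", "መዝናኛ", "ስፖርት", "ቢዝነስ", "ፖለቲካ", "ዓለም አቀፍ ዜና"]

# Every value collapses to the same comparison key...
distinct = {toy_collate(c) for c in cats}   # DISTINCT would return 1 row
# ...and equality matches everything.
matches = [c for c in cats if toy_collate(c) == toy_collate("ስፖርት")]
```

If this is the cause, recreating the columns with a collation that defines weights for these characters (or a binary collation) would be the direction to investigate.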
I am currently trying to integrate a trigger into my SQL code. However, I am facing an issue where adding the trigger causes connection problems and breaks every subsequent query against the sourced database. I am using MariaDB.
This is what I have.
/* TRIGGERS */
DELIMITER |
CREATE TRIGGER max_trials
BEFORE INSERT ON Customer_Trials
FOR EACH ROW
BEGIN
DECLARE dummy INT DEFAULT 0;
IF NOT (SELECT customer_id
FROM Active_Customers
WHERE NEW.customer_id = customer_id)
THEN
SET #dummy = 1;
END IF;
END |
DELIMITER ;
I source a file which contains all of this code.
When the trigger is uncommented and I source the file (the tables do not exist yet), I get this output:
MariaDB [(none)]> SOURCE db.sql;
Query OK, 0 rows affected, **1 warning** (0.000 sec)
Query OK, 1 row affected (0.000 sec)
Database changed
Query OK, 0 rows affected (0.028 sec)
Query OK, 0 rows affected (0.019 sec)
...
...
**ERROR 2013 (HY000) at line 182 in file: 'db.sql': Lost connection to MySQL server during query**
Notice that a warning is produced at the top and an error at the bottom. Now let's look at the warning:
MariaDB [carpets]> SHOW WARNINGS;
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)
ERROR: Can't connect to the server
In the snippets above, both the warning and the error refer to a loss of connection, for a reason I do not understand.
Let's look at the other case.
When I drop the database and reload with the trigger commented out, I receive the following result:
MariaDB [(none)]> SOURCE carpet.sql;
Query OK, 9 rows affected (0.042 sec)
Query OK, 1 row affected (0.000 sec)
Database changed
Query OK, 0 rows affected (0.018 sec)
...
...
Query OK, 0 rows affected (0.003 sec)
I do not run into any issues. From this, it appears that the trigger is causing a problem which prevents the defined functionality. I cannot insert or do much of anything after the error has occurred, because every subsequent query results in a connection error.
Having only just gotten my hands on triggers, would anyone happen to have an idea of what is going on here?
I have a Java application that reads jobs to process from a database table, and I may have multiple instances of this application running on different servers, since each job is independent. Once a job is picked up for processing, its status is updated to "running". What I want to ensure is that the retrieval of to-be-processed jobs by each instance is atomic. How can I achieve this using JDBC?
One approach that would be completely generic*, though perhaps slightly inefficient, would be to use a server-specific identifier to "claim" a job: first update its status to that identifier, then retrieve the job by that value. For example, if you were working with Windows servers on the same network, their server names would uniquely identify them. If your table looked like
JobID JobName Status
----- ------- ---------
1 Job_A Completed
2 Job_B
3 Job_C
where unclaimed jobs have a Status of NULL then your application running on SERVER1 could claim a job by doing setAutoCommit(true) followed by
UPDATE Jobs SET Status='SERVER1'
WHERE JobID IN (
SELECT TOP 1 JobID FROM Jobs
WHERE Status IS NULL
ORDER BY JobID)
If executeUpdate returns 0 then there are no jobs pending. If it returns 1 then you can get the row with
SELECT JobID, ... FROM Jobs WHERE Status='SERVER1'
and then update its Status to 'Running' with a parameterized query like
UPDATE Jobs SET Status='Running' WHERE JobID=?
where you supply the JobID you retrieved from the previous SELECT.
*(i.e., not relying on any specific SQL extensions, explicit locking, or transaction handling)
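The claim-then-fetch sequence above can be sketched end to end. Here it is with Python's sqlite3 standing in for the real server (so SELECT TOP 1 becomes a LIMIT 1 subquery); the table and column names follow the answer:

```python
import sqlite3

def claim_next_job(conn, server_id):
    """Claim the oldest unclaimed job for this server.

    The single UPDATE is atomic, so two servers can never claim
    the same row even if they run this at the same moment.
    """
    cur = conn.execute(
        "UPDATE Jobs SET Status = ? WHERE JobID = "
        "(SELECT JobID FROM Jobs WHERE Status IS NULL ORDER BY JobID LIMIT 1)",
        (server_id,),
    )
    if cur.rowcount == 0:
        return None  # no jobs pending
    # Retrieve the job we just claimed, then mark it Running.
    job_id = conn.execute(
        "SELECT JobID FROM Jobs WHERE Status = ?", (server_id,)
    ).fetchone()[0]
    conn.execute("UPDATE Jobs SET Status = 'Running' WHERE JobID = ?", (job_id,))
    conn.commit()
    return job_id
```

The same three statements map directly onto JDBC `executeUpdate`/`executeQuery` calls against the real server.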
Lock the table using whatever mechanism is supported by your database server.
For example, in Postgres it would be:
LOCK yourtable;
And it's your table for the duration of the transaction.
Other databases will have something similar.
Use a ResultSet with CONCUR_READ_ONLY and TYPE_FORWARD_ONLY. If your database's JDBC driver supports it, this gives you an atomic snapshot as of the time of your SELECT. According to this documentation (Table: Summary of Visibility of Internal and External Changes), a forward-only cursor shows only the results as of read time, and CONCUR_READ_ONLY prevents updates through the ResultSet itself.
When using databases of a transactional nature, one popular practice is to perform ROW-LEVEL LOCKING. Row-level locks prevent multiple transactions from modifying the same row. SELECT for UPDATE is an easy way to achieve this effect. Assuming you have a processes table:
SELECT process_id, status
from processes
for UPDATE of status SKIP LOCKED;
When done processing, issue
update processes set status = 'updated'
where process_id = :process_id; --from before
Issue
commit;
to release the lock.
Here's an actual example
Disclaimer: SELECT FOR UPDATE is a form of pessimistic locking and has its caveats as explained by Burleson. However, it might be a viable solution if the client is not web-based and extremely concurrent.
Problem
Take jobs ready to process and make their status running atomically.
Solution
No additional locks are needed. A single UPDATE statement is already atomic with respect to other statements (see the excerpt from the docs below), so update the jobs table, setting the status to running for the rows that are ready to be processed, and read back the result of that update: those are the jobs you have taken for processing.
Examples:
Postgres
UPDATE jobs SET status = 'running'
WHERE status is NULL
RETURNING id;
In terms of JDBC you can go similar to this:
String sql = "update ... returning ...";
boolean hasResult = statement.execute(sql);
if (hasResult) {
    ResultSet rs = statement.getResultSet();
}
SQL Server
UPDATE jobs SET status = 'running'
OUTPUT INSERTED.id
WHERE status IS NULL;
Excerpt from the Postgres documentation that shows how 2 transactions behave when doing UPDATE on the same table with the same query:
UPDATE will only find target rows that were committed as of the command start
time. However, such a target row might have already been updated (or
deleted or locked) by another concurrent transaction by the time it is
found. In this case, the would-be updater will wait for the first
updating transaction to commit or roll back (if it is still in
progress).
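The update-and-return claim can be exercised as a single statement. A sketch with Python's sqlite3, which also supports RETURNING as of SQLite 3.35 (column names follow the examples above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO jobs (status) VALUES (?)",
                 [(None,), (None,), ("done",)])

# One atomic statement: flip every ready job to 'running' and
# get the ids of exactly the rows this worker claimed.
claimed = conn.execute(
    "UPDATE jobs SET status = 'running' WHERE status IS NULL RETURNING id"
).fetchall()
conn.commit()
```

A second worker issuing the same statement afterwards simply gets back an empty list, so no job is ever handed out twice.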
If you want to ensure correct behavior in a concurrent environment, then in your specific example you can use the server name.
The table will look like:
JobID JobName Server Status
----- ------- ------- ---------
1 Job_A host-1 Completed
2 Job_A host-2 Working
3 Job_B host-3 Working
If you have multiple instances on the same host, add the process ID too:
JobID JobName Server ProcessID Status
----- ------- ------- ---------- ---------
1 Job_A host-1 1000 Completed
2 Job_A host-2 1000 Working
3 Job_A host-2 1001 Working
5 Job_B host-3 1000 Working
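Composing that per-instance identifier in the application is a one-liner; a sketch in Python (the hostname:pid format is arbitrary, any string unique per instance works):

```python
import os
import socket

# Unique per server instance: hostname plus the process id.
worker_id = f"{socket.gethostname()}:{os.getpid()}"
```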
How can I auto-update the database according to the date?
Message ID | Message | StartDate | EndDate | Status
1 | Hello | 07/7/2012 | 08/7/2012 | Expired
2 | Hi | 10/7/2012 | 12/7/2012 | Ongoing
3 | Hi World | 11/7/2012 | 18/7/2012 | Pending
How can the Status be updated automatically according to today's date?
More information : I'm using SQL-Server Management Studio. Sorry for not stating.
I would create a stored procedure that sets the status to "Expired" for all messages whose EndDate is before GETDATE(), and schedule it using a job in SQL Server:
CREATE PROCEDURE UpdateMessages
AS
UPDATE Messages SET Status = 'Expired' WHERE EndDate < GETDATE()
GO
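The date logic the procedure encodes can be sanity-checked outside SQL. A quick sketch in Python using the rows from the question (the function name is mine):

```python
from datetime import date

def message_status(start, end, today):
    """Pending before StartDate, Expired after EndDate, Ongoing in between."""
    if today < start:
        return "Pending"
    if today > end:
        return "Expired"
    return "Ongoing"
```

Running this with today = 2012-07-10 reproduces the Expired/Ongoing/Pending split shown in the sample table.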
The best thing you can do is create a stored procedure that updates the records in your table based on the date and time, then create a SQL Server job and schedule it to run at your desired time.
I don't think there is a way for the table to update itself. You should consider scheduling a Job in SQL Server.
See this MSDN article here
The job would run daily and consider each row and update the status where appropriate.
I run this query on SQL Server and it doesn't work:
SELECT * FROM dbo.marcas
but if I put at least one field in the query, it works.
SELECT code FROM dbo.marcas
I know it must be simple, but I can't find an answer.
Thanks
Most likely, someone else is updating that same table, and thus places certain locks on the table.
When you do a SELECT * ..., those locks cause a conflict and your query won't execute, while a SELECT (list of columns) ... will work (since it's not affected by the locks).
I'm answering my own question because I found the answer myself.
Using EMS SQL Manager 2008 for SQL Server, I executed select * from marcas and got no results, just errors. But when I recreated the table, voilà, it worked fine!
So the problem was the way I had created the tables on the server. After a while, I realized the command that created the table from FoxPro over ODBC was:
oerr = sqlexec(oconn, "ALTER TABLE ["+xtabla+"] ADD ["+borrar.field_name+"] "+tipo_campo(borrar.field_type, borrar.field_len, borrar.field_dec),"")
so changed it to:
oerr = sqlexec(oconn, "ALTER TABLE ["+xtabla+"] ADD ["+alltrim(borrar.field_name)+"] "+tipo_campo(borrar.field_type, borrar.field_len, borrar.field_dec),"")
that is, I just trimmed the extra spaces after the field name.
That's all: "codigo" is not equal to "codigo ".
Thanks to all of you who tried to help me.
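The trailing-space trap is easy to reproduce in any engine that allows quoted identifiers. A sketch with Python's sqlite3 (table name borrowed from the question; the exact failure mode differs per driver, but the core fact is the same: "codigo" and "codigo " are different identifiers):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The bug from the question: the column name carries a trailing space.
conn.execute('CREATE TABLE marcas ("codigo " TEXT)')
conn.execute("INSERT INTO marcas VALUES ('A1')")

# SELECT * never looks names up, so it succeeds.
rows = conn.execute("SELECT * FROM marcas").fetchall()

# Referring to the column without the trailing space fails the lookup.
try:
    conn.execute("SELECT codigo FROM marcas")
    lookup_failed = False
except sqlite3.OperationalError:
    lookup_failed = True
```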
I believe one possibility would be that you have a computed column in the table that generates an error when SQL Server attempts to compute it. Sample code:
create function dbo.Crash ()
returns int
as
begin
return 1/0
end
go
create table dbo.cctest (
Col1 int not null,
Col2 int not null,
CrashCol as dbo.Crash()
)
go
insert into dbo.cctest (Col1,Col2)
select 1,2 union all
select 3,4
go
select Col1 from dbo.cctest
go
select * from dbo.cctest
go
results:
Col1
----
1
3
(2 row(s) affected)
Col1 Col2 CrashCol
--------------------
(2 row(s) affected)
Msg 8134, Level 16, State 1, Line 1
Divide by zero error encountered.
So the first select worked because it didn't access the faulty computed column.
I recommend running the query in a sql client other than EMS, in the hope that you can get an informative error message.
"La operación en varios pasos generó errores. Compruebe los valores de estado." -> "The multi-step operation generated errors. Check the status values."