Why is a select from the sqlx lib taking so long to execute?

I've got a function which does a simple sqlx.Select(...):
func filteredEntities(ex sqlx.Ext, baseStmt string, res interface{}, f entities.UserProjectTimeFilter) error {
    finalStmt, args, err := filteredStatement(baseStmt, f)
    if err != nil {
        return err
    }
    err = sqlx.Select(ex, res, finalStmt, args...)
    return errors.Wrap(err, "db query failed")
}
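filteredStatement is not shown; presumably it expands the slice filters into the IN (?) placeholders seen below, for example with sqlx.In. A minimal, entirely hypothetical sketch (f.UserIDs is an assumed field name):

// Hypothetical sketch: sqlx.In expands a slice bound to "IN (?)" into one
// placeholder per element and flattens the bind arguments to match.
stmt := "SELECT * FROM Productivity WHERE user_id IN (?)"
finalStmt, args, err := sqlx.In(stmt, f.UserIDs) // f.UserIDs is assumed
if err != nil {
    return err
}
finalStmt = ex.Rebind(finalStmt) // adapt placeholders to the driver's style
err = sqlx.Select(ex, res, finalStmt, args...)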
Most queries run fine (10-20 ms), but one of them, like the following, does not:
"\n\tSELECT * FROM Productivity\n\t WHERE user_id IN (?) AND (tracker_id, project_id) IN ((?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?,?),(?...
The query comes straight from the logs (the placeholder list is abbreviated above). It takes more than 10 seconds to execute (sometimes even more than 20). However, against a dump of the database it executes in less than 1 second. The selection returns fewer than 100 rows, and the query has no joins or anything of the sort: just a simple select with conditions.
What might make it slower? All times were measured with the default time.Now() and time.Since().
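Presumably the measurement looked something like this (a minimal sketch, not the asker's actual code):

start := time.Now()
err = sqlx.Select(ex, res, finalStmt, args...)
log.Printf("query took %s", time.Since(start)) // wall-clock time around the call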
The table has these columns:
+---------+-------------+---------+-------------+----------+------------+------------+------------+--------+
| id | planning_id | user_id | activity_id | duration | project_id | tracker_id | created_at | useful |
+---------+-------------+---------+-------------+----------+------------+------------+------------+--------+

Related

How to inner join two windowed tables in Flux query language?

The goal is to join the tables min and max returned by the following query:
data = from(bucket: "my_bucket")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)

min = data
  |> aggregateWindow(
       every: 1d,
       fn: min,
       column: "_value")

max = data
  |> aggregateWindow(
       every: 1d,
       fn: max,
       column: "_value")
The columns of max look like this:
+----------------------------------------+
| Columns                                |
+----------------------------------------+
| table        MAX                       |
| _measurement GROUP    STRING           |
| _field       GROUP    STRING           |
| _value       NO GROUP DOUBLE           |
| _start       GROUP    DATETIME:RFC3339 |
| _stop        GROUP    DATETIME:RFC3339 |
| _time        NO GROUP DATETIME:RFC3339 |
| env          GROUP    STRING           |
| path         GROUP    STRING           |
+----------------------------------------+
The min table looks the same except for the name of the first column. Both tables return data, which can be confirmed by running yield(tables: min) or yield(tables: max). The join should be an inner join on the columns _measurement, _field, _time, env and path, and it should contain both the minimum and the maximum _value of every window.
When I try to run the following within the InfluxDB Data Explorer:
join(tables: {min: min, max: max}, on: ["_time", "_field", "path", "_measurement", "env"], method: "inner")
I get the following error:
Failed to execute Flux query
When I run the job in Bash via influx query --file ./query.flux -r > ./query.csv, I get the following error:
Error: failed to execute query: 504 Gateway Timeout: unable to decode response content type "text/html; charset=utf-8"
No further log output is available to investigate the issue. What's wrong with this join?
According to the documentation, join can only take two tables as parameters. You could try union instead, which can take more than two tables as input; see the docs for more details.
You might just need to modify the script as below:
union(tables: [min, max])

Snowflake JDBC TIME datatype value converted to UTC

We are using the JDBC driver to connect to Snowflake and perform inserts.
While working with the TIME datatype, we provide the time value 10:10:10 with setTime in the insert, and when it is retrieved with getTime we get 02:10:10.
The documentation says: TIME internally stores “wallclock” time, and all operations on TIME values are performed without taking any time zone into consideration.
Why this conversion, and how can we make sure we get back what we inserted?
Sample program:
try (Connection con = getConnection()) {
    try (PreparedStatement ps = con.prepareStatement("insert into Test_Time values (?)")) {
        ps.setTime(1, Time.valueOf("10:10:10"));
        ps.executeUpdate();
    }
    try (Statement statement = con.createStatement()) {
        try (ResultSet resultSet = statement.executeQuery("select * from Test_Time")) {
            while (resultSet.next()) {
                System.out.println(resultSet.getTime(1));
            }
        }
    }
}
We tried both of these:
ps.setTime(1, Time.valueOf("10:10:10"))
ps.setTime(1, "10:10:10")
When something similar is tried from the web portal worksheet, we see the same inserted value, 10:10:10. Since the web portal is able to show the desired result, I am wondering whether something basic is missing in my case.
Thank you for help in advance.
Update:
I have done more testing around this and found there is no difference between setTime(int idx, Time t) and setTime(int idx, Time t, Calendar c). Even if you already know the timezone of your data, there is no way to supply a custom timezone.
The conversion is done by Java. If you bind a string variable instead of a Time variable, you will see that it works as expected:
CREATE TABLE test_time ( t TIME );
When we run the Java block below, it will insert "10:10:10" into the table:
try (Connection con = getConnection()) {
    try (PreparedStatement ps = con.prepareStatement("insert into Test_Time values (?)")) {
        ps.setString(1, "10:10:10");
        ps.executeUpdate();
    }
    try (Statement statement = con.createStatement()) {
        try (ResultSet resultSet = statement.executeQuery("select * from Test_Time")) {
            while (resultSet.next()) {
                System.out.println(resultSet.getString(1));
                System.out.println(resultSet.getTime(1));
            }
        }
    }
}
You will see the following output:
10:10:10
02:10:10
The first line reads the time as a string, while the second uses a Time object, so the value is converted to your timezone. If you query the table in the web UI, you will see that it returns the expected value:
SELECT * FROM test_time;
+----------+
| T |
+----------+
| 10:10:10 |
+----------+
At least, it worked for me :)
From what I've seen searching around (e.g. here), it seems it comes down to the driver you're using rather than the database. I'm also seeing different results from yours: in my script I insert 10:10:10 and get back 10:10:10, but when I look at the Snowflake table it says 09:10:10. This is because I live in London, which is currently on BST (GMT+1), so my driver converts from my local timezone to GMT before inserting into Snowflake (when ps.setTime is run) and again when retrieving from the database (when results.getTime is run).
The example here is in Scala, but it's basically the same code:
val ps: PreparedStatement = connection.get.prepareStatement("insert overwrite into test_db.public.test_table values (?)")
val insertedTime: Time = Time.valueOf("10:10:10")
println(s"Inserting ${insertedTime} into table")
ps.setTime(1, insertedTime)
ps.executeUpdate()
val results: ResultSet = connection.get.createStatement().executeQuery("select time_col from test_db.public.test_table")
while (results.next) {
  val returnedTime = results.getTime(1)
  println(s"Time returned: ${returnedTime}")
}
The above script prints out:
Inserting 10:10:10 into table
Time returned: 10:10:10
In Snowflake the time is:
+----------+
| time_col |
+----------+
| 09:10:10 |
+----------+
To fix this, you can set the JVM's default timezone to GMT and run the same script. Here is the same example as above with the TimeZone changed to GMT:
TimeZone.setDefault(TimeZone.getTimeZone("GMT+00:00")) // <----- ** New bit **
val ps: PreparedStatement = connection.get.prepareStatement("insert overwrite into test_db.public.test_table values (?)")
val insertedTime: Time = Time.valueOf("10:10:10")
println(s"Inserting ${insertedTime} into table")
ps.setTime(1, insertedTime)
ps.executeUpdate()
val results: ResultSet = connection.get.createStatement().executeQuery("select time_col from test_db.public.test_table")
while (results.next) {
  val returnedTime = results.getTime(1)
  println(s"Time returned: ${returnedTime}")
}
Now the script prints the same output:
Inserting 10:10:10 into table
Time returned: 10:10:10
And in Snowflake the time now shows what you would expect:
+----------+
| time_col |
+----------+
| 10:10:10 |
+----------+

SQL: MAX-like implementation of OR on a grouped object [duplicate]

I have a field in a table which contains bitwise flags. Let's say for the sake of example there are three flags: 4 => read, 2 => write, 1 => execute and the table looks like this*:
user_id | file  | permissions
--------+-------+-------------
      1 | a.txt |           6   <-- 6 = 4 + 2 = read + write
      1 | b.txt |           4   <-- 4 = read
      2 | a.txt |           4
      2 | c.exe |           1   <-- 1 = execute
I'm interested in finding all users who have a particular flag set (e.g. write) on ANY record. To do this in one query, I figured that if you OR'd all of a user's permissions together you'd get a single value which is the "sum total" of their permissions:
user_id | all_perms
--------+-----------
      1 |         6   <-- 6 | 4 = 6
      2 |         5   <-- 4 | 1 = 5
*My actual table isn't to do with files or file permissions, 'tis but an example
Is there a way I could perform this in one statement? The way I see it, it's very similar to a normal aggregate function with GROUP BY:
SELECT user_id, SUM(permissions) as all_perms
FROM permissions
GROUP BY user_id
...but obviously, some magical "bitwise-or" function instead of SUM. Anyone know of anything like that?
(And for bonus points, does it work in oracle?)
MySQL:
SELECT user_id, BIT_OR(permissions) as all_perms
FROM permissions
GROUP BY user_id
Ah, another one of those questions where I find the answer 5 minutes after asking... Accepted answer will go to the MySQL implementation though...
Here's how to do it with Oracle, as I discovered on Radino's blog
You create an object...
CREATE OR REPLACE TYPE bitor_impl AS OBJECT
(
  bitor NUMBER,
  STATIC FUNCTION ODCIAggregateInitialize(ctx IN OUT bitor_impl) RETURN NUMBER,
  MEMBER FUNCTION ODCIAggregateIterate(SELF IN OUT bitor_impl,
                                       VALUE IN NUMBER) RETURN NUMBER,
  MEMBER FUNCTION ODCIAggregateMerge(SELF IN OUT bitor_impl,
                                     ctx2 IN bitor_impl) RETURN NUMBER,
  MEMBER FUNCTION ODCIAggregateTerminate(SELF IN OUT bitor_impl,
                                         returnvalue OUT NUMBER,
                                         flags IN NUMBER) RETURN NUMBER
)
/
CREATE OR REPLACE TYPE BODY bitor_impl IS
  STATIC FUNCTION ODCIAggregateInitialize(ctx IN OUT bitor_impl) RETURN NUMBER IS
  BEGIN
    ctx := bitor_impl(0);
    RETURN ODCIConst.Success;
  END ODCIAggregateInitialize;

  MEMBER FUNCTION ODCIAggregateIterate(SELF IN OUT bitor_impl,
                                       VALUE IN NUMBER) RETURN NUMBER IS
  BEGIN
    SELF.bitor := SELF.bitor + VALUE - bitand(SELF.bitor, VALUE);
    RETURN ODCIConst.Success;
  END ODCIAggregateIterate;

  MEMBER FUNCTION ODCIAggregateMerge(SELF IN OUT bitor_impl,
                                     ctx2 IN bitor_impl) RETURN NUMBER IS
  BEGIN
    SELF.bitor := SELF.bitor + ctx2.bitor - bitand(SELF.bitor, ctx2.bitor);
    RETURN ODCIConst.Success;
  END ODCIAggregateMerge;

  MEMBER FUNCTION ODCIAggregateTerminate(SELF IN OUT bitor_impl,
                                         returnvalue OUT NUMBER,
                                         flags IN NUMBER) RETURN NUMBER IS
  BEGIN
    returnvalue := SELF.bitor;
    RETURN ODCIConst.Success;
  END ODCIAggregateTerminate;
END;
/
...and then define your own aggregate function
CREATE OR REPLACE FUNCTION bitoragg(x IN NUMBER) RETURN NUMBER
PARALLEL_ENABLE
AGGREGATE USING bitor_impl;
/
Usage:
SELECT user_id, bitoragg(permissions) FROM perms GROUP BY user_id
And you can do a bitwise or with...
FUNCTION BITOR(x IN NUMBER, y IN NUMBER)
RETURN NUMBER
AS
BEGIN
  RETURN x + y - BITAND(x, y);
END;
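Why the x + y - BITAND(x, y) trick works: for non-negative integers, addition counts the bits set in both operands twice, and subtracting the AND removes the double count, which leaves exactly the bitwise OR. A quick illustrative check (in Go; not part of the original answers):

package main

import "fmt"

func main() {
    // x | y == x + y - (x & y): the sum double-counts the shared bits,
    // and subtracting x & y once removes the duplicates.
    for _, p := range [][2]int{{6, 4}, {4, 1}, {7, 5}} {
        x, y := p[0], p[1]
        fmt.Printf("%d|%d = %d = %d\n", x, y, x|y, x+y-(x&y))
    }
}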
You would need to know the possible permission components (1, 2 and 4) a priori (which makes it harder to maintain), but this is pretty simple and would work:
SELECT user_id,
       MAX(BITAND(permissions, 1)) +
       MAX(BITAND(permissions, 2)) +
       MAX(BITAND(permissions, 4)) all_perms
FROM permissions
GROUP BY user_id
"I'm interested in finding all users who have a particular flag set (e.g. write) on ANY record"
What's wrong with simply
SELECT DISTINCT user_id
FROM permissions
WHERE permissions & 2 = 2
(In Oracle, where the & operator is not available, the equivalent test is BITAND(permissions, 2) = 2.)

gorm.DB can't preload field currencies for model.Currency

I was looking through gorm.DB's docs and sources but can't seem to understand the purpose of Preload.
I thought it was a way to "preload" schema/tables/rows for later use, but I can't manage to use it that way.
For instance, I have the following struct:
package model

type Currency struct {
    ID           uint64 `gorm:"primary_key"`
    CurrencyCode string `gorm:"size:3"`
}
but when I do something like the following to compare Find and Preload:
logger.Info("Preload")
var preloadCurrencies []model.Currency
dbMySQL.Preload("currencies").Find(&preloadCurrencies)
for i, curr := range preloadCurrencies {
// stuff
}
logger.Info("find")
var currencies []model.Currency
dbMySQL.Find(&currencies)
for i, curr := range currencies {
// stuff
}
I get the following output:
INFO[0000] Preload
(/go/src/SOMESOURCES/main.go:28)
[2018-01-21 11:08:47] [1.02ms] SELECT * FROM `currencies`
[168 rows affected or returned ]
(/go/src/SOMESOURCES/main.go:28)
[2018-01-21 11:08:47] can't preload field currencies for model.Currency
INFO[0000] find
(/go/src/SOMESOURCES/main.go:37)
[2018-01-21 11:08:47] [0.90ms] SELECT * FROM `currencies`
[168 rows affected or returned ]
DB schema:
show columns from currencies;
+---------------+---------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------------+---------------------+------+-----+---------+----------------+
| id | bigint(20) unsigned | NO | PRI | NULL | auto_increment |
| currency_code | char(3) | NO | | NULL | |
+---------------+---------------------+------+-----+---------+----------------+
2 rows in set (0.00 sec)
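For what it's worth, gorm's Preload eager-loads associations declared on the model rather than caching a table, which is why Preload("currencies") fails here: Currency has no association field named currencies. A hypothetical sketch of the intended usage (the Country model and its fields are invented for illustration):

// Hypothetical models, invented for illustration only.
type Country struct {
    ID         uint64 `gorm:"primary_key"`
    Name       string
    Currencies []Currency // has-many association; FK CountryID by convention
}

type Currency struct {
    ID           uint64 `gorm:"primary_key"`
    CountryID    uint64
    CurrencyCode string `gorm:"size:3"`
}

var countries []Country
// Preload issues a second query for the association and fills in
// each Country's Currencies slice.
dbMySQL.Preload("Currencies").Find(&countries)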

Why Locate() in TADOTable with date/time value is not working

I'm working on a small subsystem for logging user activity. The system uses MS SQL Server as the database, with Delphi 7 and ADO for building the interface.
The problem I have is that I can't locate a record with a specific datetime value.
Below is a sample reproduction of the problem:
1. Database: MS SQL Server 2005 Express Edition.
-- Table creation
CREATE TABLE [tlog] (
  [USERN]    [numeric](10, 0) NULL,
  [USERDATE] [datetime] NULL,
  [LOGTEXT]  [varchar](250) COLLATE Cyrillic_General_CS_AS NULL
);
-- Insert date/time value
INSERT INTO [tlog] (USERN, USERDATE, LOGTEXT)
VALUES (1, CURRENT_TIMESTAMP, 'Record current activity')
-- Insert date only value
INSERT INTO [tlog] (USERN, USERDATE, LOGTEXT)
VALUES (1, '20180202', 'Record current activity')
-- Table's content
-------------------------------------------------------------
| USERN | USERDATE | LOGTEXT |
-------------------------------------------------------------
| 1 | 26/10/2015 17:13:36.597 | Record current activity |
-------------------------------------------------------------
| 1 | 02/02/2018 00:00:00.000 | Record current activity |
-------------------------------------------------------------
2. Sample code: Delphi 7 and ADO
procedure TfrmMain.btnLocateClick(Sender: TObject);
var
  d: TDateTime;
  tblLog: TADOTable;
begin
  //
  ThousandSeparator := ' ';
  DecimalSeparator := '.';
  DateSeparator := '/';
  ShortDateFormat := 'dd/mm/yyyy';
  LongDateFormat := 'dd/mm/yyyy';
  TimeSeparator := ':';
  ShortTimeFormat := 'hh:mm';
  LongTimeFormat := 'hh:mm';
  TwoDigitYearCenturyWindow := 50;
  ListSeparator := ';';
  //
  tblLog := TADOTable.Create(Application);
  try
    //
    tblLog.ConnectionString :=
      'Provider=SQLOLEDB.1;'+
      'Password=xxxx;'+
      'Persist Security Info=True;'+
      'User ID=xxxxxxxx;'+
      'Initial Catalog=xxxxxxxxx;'+
      'Data Source=127.0.0.1\xxxxxxx,1066';
    tblLog.TableName := '[tlog]';
    tblLog.Open;
    // First try - locate with exact value. NOT WORKING.
    d := StrToDateTime('26/10/2015 17:13:36.597');
    if tblLog.Locate('USERDATE', d, []) then
      ShowMessage('Exact value, no Locate options: Located')
    else
      ShowMessage('Exact value, no Locate options: Not located');
    if tblLog.Locate('USERDATE', d, [loPartialKey]) then
      ShowMessage('Exact value, with Locate options: Located')
    else
      ShowMessage('Exact value, with Locate options: Not located');
    // Second try - locate with value that matches format settings. NOT WORKING.
    d := StrToDateTime('26/10/2015 17:13');
    if tblLog.Locate('USERDATE', d, []) then
      ShowMessage('Hours and minutes, no Locate options: Located')
    else
      ShowMessage('Hours and minutes, no Locate options: Not located');
    if tblLog.Locate('USERDATE', d, [loPartialKey]) then
      ShowMessage('Hours and minutes, with Locate options: Located')
    else
      ShowMessage('Hours and minutes, with Locate options: Not located');
    // Locate with date only value. WORKING.
    d := StrToDateTime('02/02/2018');
    if tblLog.Locate('USERDATE', d, []) then
      ShowMessage('Located')
    else
      ShowMessage('Not located');
  finally
    //
    tblLog.Close;
    tblLog.Free;
  end;
end;
3. Expected result: Locate the record.
4. Actual result: TADOTable.Locate() returns false.
What am I doing wrong, and how should datetime values be passed to the TADOTable.Locate() method?
Thanks in advance!
You have used Locate almost correctly. Almost, because the loPartialKey option you've included is pointless when searching for TDateTime values; in this case you need to search for the exact date-time value. The problem is in your tests.
Your first test uses a wrong date-time value: its millisecond portion is ignored by the string conversion, so you're actually trying to locate 26/10/2015 17:13:36, which is not in your table.
In the second case you're trying to locate 26/10/2015 17:13, which is not in your table either.
I would suggest building the date-time with e.g. the EncodeDateTime function rather than that string conversion, and removing the extra calls with the loPartialKey option.
Use a string to locate, or a WHERE clause, with the date in yyyy-mm-dd format.
For example, in Locate:
if tblLog.Locate('USERDATE', '2015-10-26', []) then ...
and in a WHERE clause:
select * from tbllog where userdate = '2015-10-26'
