I have two columns in a table, Start time and End time, and I want to search on the basis of time. I have saved the start time and end time in the format below
14:57:44
using Convert(varchar(20),Getdate(),108)
Please help me.
select * from your_table
where cast(time_column as time) between '10:00:00' and '11:00:00'
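If both columns need to be filtered, the same cast can be applied to each one. A minimal sketch, assuming the two varchar columns are named StartTime and EndTime (placeholder names; adjust to your actual schema):
select *
from your_table
-- the cast works because the values were stored with style 108, i.e. in hh:mi:ss form
where cast(StartTime as time) >= '10:00:00'
  and cast(EndTime as time) <= '11:00:00'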
I want to use a SQL query with DATE_TRUNC(). I saw this entry: Snowflake date_trunc to remove time from date
I tested it on local Docker containers and it worked fine. Just to be sure, does trunc remove/pop the timestamps? It sounds like truncate :) Thanks for your time.
i.e.:
SELECT
DATE_TRUNC('month',production_timestamp)
AS production_to_month,
COUNT(id) AS count
FROM watch
GROUP BY DATE_TRUNC('month',production_timestamp);
I want to calculate the monthly number of records, without updating any data.
https://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-TRUNC
It truncates a timestamp to the accuracy you specify, returning that new value. It doesn't change any data in tables.
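For example, selecting the raw column next to its truncated form shows that only the returned value changes (this reuses the watch table and production_timestamp column from the query above):
SELECT
    production_timestamp,                                             -- stored value, unchanged
    DATE_TRUNC('month', production_timestamp) AS production_to_month  -- truncated copy returned by the query
FROM watch
LIMIT 5;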
We have an API that queries an Influx database and a report functionality was implemented so the user can query data using a start and end date.
The problem is that when a longer period is chosen (usually more than 8 weeks), we get a timeout from Influx; the query takes around 13 seconds to run. When the query returns a dataset successfully, we store it in the cache.
The most time-consuming part of the query is probably the comparisons and averages we do, something like this:
SELECT mean("value") AS "mean", min("value") AS "min", max("value") AS "max"
FROM $MEASUREMENT
WHERE time >= $startDate AND time < $endDate
AND ("field" = 'myFieldValue' )
GROUP BY "tagname"
What would be the best approach to fix this? I can of course limit the number of weeks the user can choose, but I guess that's not the ideal fix.
How would you approach this? Increase timeout? Batch query? Any database optimization to be able to run this faster?
In cases like this, where you allow the user to select a range of days, I would suggest having another table that stores the result (min, max and avg) of each day as a document. This table can be populated by a job that runs after the end of each day.
You can also consider changing the granularity from one document per day to per week or per month, based on how you plot the values, and you can add more fields, such as tagname and the other fields in your case.
The reason this is superior to using a cache: a cache only stores the result of a query that has already run, so every different combination of parameters still has to be computed in real time. With pre-aggregated results, the answer for any range is derived from a much smaller dataset.
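A rough sketch of what such a rollup could look like if it lived in a relational store (all names here are illustrative; the same idea applies to a document store or a downsampled Influx measurement):
-- populated once per day by a scheduled job: one row per day and tagname
CREATE TABLE daily_rollup (
    day        date,
    tagname    varchar(100),
    mean_value float,
    min_value  float,
    max_value  float
);

-- the report reads the small, pre-aggregated rows instead of scanning the raw series
SELECT day, tagname, mean_value, min_value, max_value
FROM daily_rollup
WHERE day >= $startDate AND day < $endDate   -- same placeholders as the report query above
ORDER BY day, tagname;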
Based on your query, I assume you are using InfluxDB v1.x. You could try Continuous Queries (CQs), which are InfluxQL queries that run automatically and periodically on real-time data and store their results in a specified measurement.
In your case, for each report, you could generate a CQ and let your users query it.
e.g.:
Step 1: create a CQ
CREATE CONTINUOUS QUERY "cq_basic_rp" ON "db"
BEGIN
SELECT mean("value") AS "mean", min("value") AS "min", max("value") AS "max"
INTO "mean_min_max"
FROM $MEASUREMENT
WHERE "field" = 'myFieldValue' // note that the time filter is not here
GROUP BY time(1h), "tagname" // here you can define the job interval
END
Step 2: Query against that CQ
SELECT * FROM "mean_min_max"
WHERE time >= $startDate AND time < $endDate -- here you can pass the user's time filter
Since InfluxDB now runs these aggregates continuously at the specified interval, you are effectively trading space for query time.
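Because the CQ stores one row per hour per tagname, the report can still collapse those rows into a single value per tag for the selected window. A sketch against the mean_min_max measurement created above (note that a mean of hourly means is only an approximation of the raw mean unless every hour contains the same number of points; min and max remain exact):
SELECT mean("mean") AS "mean", min("min") AS "min", max("max") AS "max"
FROM "mean_min_max"
WHERE time >= $startDate AND time < $endDate
GROUP BY "tagname"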
I need to make a report of all patients who had an appointment last week. This report will be added to another Excel file with some lookups and then put into Power BI, because we don't have a way of connecting our SQL Server directly.
I'm trying to reduce the amount of manual work I have to do by adding a dynamic date instead of using date parameters.
I have tried using TODAY and CURRENT_DATE, and they all come back with an error.
I just need it to give me data for the 7 days prior to the current date.
Any help would be greatly appreciated.
This is what the first part looks like:
SELECT
PM.vwApptDetail.Patient_Last_Name
,PM.vwApptDetail.Patient_First_Name
,PM.vwApptDetail.Patient_DOB
,PM.vwApptDetail.Appointment_DateTime
,PM.vwApptDetail.Appt_Type_Desc
,PM.vwApptDetail.Resource_Desc
,PM.vwApptDetail.Status
FROM
PM.vwApptDetail
WHERE
PM.vwApptDetail.Appointment_DateTime >
I ended up using:
WHERE Appointment_DateTime BETWEEN DATEADD(DAY, -7, GETDATE()) AND GETDATE()
and it seems to have worked.
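If "7 days prior to the current date" should mean whole calendar days rather than a rolling 168 hours, a half-open range against the date boundary is a common variant; a sketch that reuses the same view and column (CAST(GETDATE() AS date) strips the time portion):
WHERE
    PM.vwApptDetail.Appointment_DateTime >= DATEADD(DAY, -7, CAST(GETDATE() AS date))
    AND PM.vwApptDetail.Appointment_DateTime < CAST(GETDATE() AS date)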
I have a flat file as the source, which contains two columns named "Event begin time" and "Event end time" that have both date and time in them.
How can I calculate MOU (minutes of usage) for it using Informatica?
Please help me.
Thanks
Vinay
The DATE_DIFF function can be used for calculating the time duration:
DATE_DIFF(Event_End_Time, Event_Begin_Time, 'MI')
First you need Informatica to know that each of the two date fields from the flat file is indeed a date, and the format of the incoming date fields. You do this by passing them to an Expression transformation. For example, if they are in 'DD/MM/YYYY HH24:MI:SS', the expression to turn the event begin time into a date/time in Informatica is TO_DATE(EVENT_BEGIN_TIME, 'DD/MM/YYYY HH24:MI:SS'). You'll have to do the same for the event end time. (I've used names with underscores instead of spaces, as Informatica doesn't allow spaces in port names.)
Then you'll use DATE_DIFF to subtract the begin time from the end time. Let's say you named the two variable ports that hold the above calculations v_BEGIN and v_END; the calculation for minutes is then DATE_DIFF(v_END, v_BEGIN, 'MI').
Simplest way of achieving it:
Consider T1 and T2 as the start time and end time (ensure that both are in DATE format).
In a variable, calculate T2 - T1: this will give you the difference in days.
Multiplying it by (24 * 60) will give you the number of minutes.
So: 24 * 60 * (T2 - T1).
OK, this is pretty simple, but I'm drawing a blank and can't even think of the right combination of words to search for the answer.
I have a T-SQL table with a start time, an end time, and a task, as well as a new/repeat flag.
I want to pull the average duration between start and end, both when the record is new and when it is a repeat. I'll be grouping on the task.
My result would look like Task - NewDurationAverage - RepeatDurationAverage.
Cheers in advance.
Your query should be something like this:
SELECT NewTasks.TaskId, NewDurationAverage, RepeatDurationAverage FROM
(SELECT TaskId, AVG(DATEDIFF(hh, TaskStart, TaskEnd)) AS NewDurationAverage
FROM Task WHERE IsNew = 1 GROUP BY TaskId) NewTasks
LEFT OUTER JOIN
(SELECT TaskId, AVG(DATEDIFF(hh, TaskStart, TaskEnd)) AS RepeatDurationAverage
FROM Task WHERE IsRepeat = 1 GROUP BY TaskId) RepeatTasks
ON NewTasks.TaskId = RepeatTasks.TaskId
You need to follow the steps below:
Find the difference between the start and the end date/time columns, for example using the DATEDIFF function
Perform the AVG on the calculated value
Convert the result in any appropriate format you want
Depending on your needs, you can make DATEDIFF return the time difference in the unit you want (days, minutes, nanoseconds, etc.), so decide how precise the results should be (a smaller unit is more precise).
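Putting those steps together, one way to get both averages in a single pass is conditional aggregation; a sketch that reuses the column names assumed in the query above (swap the MINUTE unit for whatever precision you settle on):
SELECT
    TaskId,
    -- AVG skips the NULLs produced by the CASE, so each average only covers matching rows
    AVG(CASE WHEN IsNew = 1    THEN DATEDIFF(MINUTE, TaskStart, TaskEnd) END) AS NewDurationAverage,
    AVG(CASE WHEN IsRepeat = 1 THEN DATEDIFF(MINUTE, TaskStart, TaskEnd) END) AS RepeatDurationAverage
FROM Task
GROUP BY TaskId;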