I want to store times in a database table but only need to store the hours and minutes.
I know I could just use DATETIME and ignore the other components of the date, but what's the best way to do this without storing more info than I actually need?
You could store it as an integer of the number of minutes past midnight:
eg.
0 = 00:00
60 = 01:00
252 = 04:12
You would however need to write some code to reconstitute the time, but that shouldn't be tricky.
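For example, a minimal T-SQL sketch of that reconstruction (the table and column names here are hypothetical):

-- minutes_past_midnight holds e.g. 252 for 04:12
SELECT RIGHT('0' + CAST(minutes_past_midnight / 60 AS VARCHAR(2)), 2)
     + ':'
     + RIGHT('0' + CAST(minutes_past_midnight % 60 AS VARCHAR(2)), 2) AS readable_time
FROM event_times;
-- 252 -> 04:12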
If you are using SQL Server 2008+, consider the TIME datatype. There is a SQLTeam article with more usage examples.
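For example, a rough sketch of what using TIME might look like (the table and column names are purely illustrative):

CREATE TABLE shift (
    shift_id   INT IDENTITY PRIMARY KEY,
    start_time TIME(0) NOT NULL,   -- TIME(0) keeps just hh:mm:ss
    end_time   TIME(0) NOT NULL
);
INSERT INTO shift (start_time, end_time) VALUES ('04:12', '17:30');
SELECT DATEDIFF(MINUTE, start_time, end_time) AS duration_minutes FROM shift;
-- 798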
DATETIME start DATETIME end
I implore you to use two DATETIME values instead, labelled something like event_start and event_end.
Time is a complex business
Most of the world has now adopted the denary-based metric system for most measurements, rightly or wrongly. This is good overall, because at least we can all agree that a g (of water) is a ml, is a cubic cm, at least approximately. The metric system has many flaws, but at least it's internationally consistently flawed.
With time, however, we have 1000 milliseconds in a second, 60 seconds in a minute, 60 minutes in an hour, 12 hours in each half of a day, and approximately 30 days per month, varying by the month and even the year in question; each country has its own time offset from the others, and the way time is formatted varies from country to country.
It's a lot to digest, and the long and short of it is that it's impossible for such a complex scenario to have a simple solution.
Some corners can be cut, but there are those where it is wiser not to
Although the top answer here, suggesting that you store an integer of minutes past midnight, might seem perfectly reasonable, I have learned the hard way to avoid doing so.
The reasons to implement two DATETIME values are for an increase in accuracy, resolution and feedback.
These are all very handy for when the design produces undesirable results.
Am I storing more data than required?
It might initially appear like more information is being stored than I require, but there is a good reason to take this hit.
Storing this extra information almost always ends up saving me time and effort in the long-run, because I inevitably find that when somebody is told how long something took, they'll additionally want to know when and where the event took place too.
It's a huge planet
In the past, I have been guilty of ignoring that there are other countries on this planet aside from my own. It seemed like a good idea at the time, but this has ALWAYS resulted in problems, headaches and wasted time later on down the line. ALWAYS consider all time zones.
C#
A DateTime renders nicely to a string in C#. The ToString(string Format) method is compact and easy to read.
E.g.
new TimeSpan(EventEnd.Ticks - EventStart.Ticks).ToString("h'h 'm'm 's's'")
SQL Server
Also, if you're reading your database separately from your application interface, then DATETIMEs are pleasant to read at a glance, and performing calculations on them is straightforward.
E.g.
SELECT DATEDIFF(MINUTE, event_start, event_end)
ISO8601 date standard
If you are using SQLite then you don't have this, so instead use a TEXT field and store it in ISO8601 format, e.g.
"2013-01-27T12:30:00+0000"
Notes:
This uses the 24-hour clock.
The time offset (the +0000 part) of the ISO8601 string maps directly to a longitude value of a GPS coordinate (not taking into account daylight saving or country-wide time zones).
E.g.
TimeOffset = (±Longitude × 24) / 360
...where ± refers to east or west direction.
It is therefore worth considering whether to store longitude, latitude and altitude along with the data. This will vary by application.
ISO8601 is an international format.
The Wikipedia article at http://en.wikipedia.org/wiki/ISO_8601 is very good for further details.
The date and time are stored in international (UTC) time, and the offset is recorded depending on where in the world the time was recorded.
In my experience there is always a need to store the full date and time, regardless of whether I think there is when I begin the project. ISO8601 is a very good, futureproof way of doing it.
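A minimal SQLite sketch of the above, assuming hypothetical event_start/event_end TEXT columns (note that SQLite's own date functions want the offset written as +00:00 rather than +0000):

CREATE TABLE event (
    event_id    INTEGER PRIMARY KEY,
    event_start TEXT NOT NULL,   -- ISO8601, e.g. '2013-01-27T12:30:00+00:00'
    event_end   TEXT NOT NULL
);
-- ISO8601 strings sort chronologically as plain text, and the built-in
-- date functions can parse them directly:
SELECT (julianday(event_end) - julianday(event_start)) * 24 * 60 AS duration_minutes
FROM event;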
Additional advice for free
It is also worth grouping events together like a chain. E.g. if recording a race, the whole event could be grouped by racer, race_circuit, circuit_checkpoints and circuit_laps.
In my experience, it is also wise to identify who stored the record, either as a separate table populated via a trigger or as an additional column within the original table.
The more you put in, the more you get out
I completely understand the desire to be as economical with space as possible, but I would rarely do so at the expense of losing information.
A rule of thumb with databases is, as the heading says: a database can only tell you as much as it has data for, and it can be very costly to go back through historical data filling in gaps.
The solution is to get it correct the first time. This is certainly easier said than done, but you should now have a deeper insight into effective database design and subsequently stand a much improved chance of getting it right the first time.
The better your initial design, the less costly the repairs will be later on.
I only say all this, because if I could go back in time then it is what I'd tell myself when I got there.
Just store a regular datetime and ignore everything else. Why spend extra time writing code that loads an int, manipulates it, and converts it into a datetime, when you could just load a datetime?
Since you didn't mention it: if you are on SQL Server 2008 you can use the TIME datatype, otherwise use minutes since midnight.
SQL Server actually stores time as fractions of a day. For example, 1 whole day = value of 1. 12 hours is a value of 0.5.
If you want to store the time value without utilizing a DATETIME type, storing the time in a decimal form would suit that need, while also making conversion to a DATETIME simple.
For example:
SELECT CAST(0.5 AS DATETIME)
--1900-01-01 12:00:00.000
Storing the value as a DECIMAL(9,9) would consume 5 bytes. However, if precision is not of utmost importance, a REAL would consume only 4 bytes. In either case, aggregate calculations (e.g. mean time) are easy to perform on numeric values, but not on date/time types.
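As a sketch of that last point, assuming a hypothetical column holding the fraction-of-a-day value:

CREATE TABLE reading (
    reading_id  INT IDENTITY PRIMARY KEY,
    time_of_day DECIMAL(9,9) NOT NULL   -- fraction of a day, 0.5 = 12:00
);
-- Aggregate on the numeric value, then convert the result for display:
SELECT CAST(AVG(time_of_day) AS DATETIME) AS mean_time
FROM reading;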
I would convert them to an integer (HH*3600 + MM*60), and store it that way. Small storage size, and still easy enough to work with.
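Converting back for display might look something like this in T-SQL (the column name is made up):

-- seconds_past_midnight = HH*3600 + MM*60, e.g. 4*3600 + 12*60 = 15120 for 04:12
SELECT CONVERT(VARCHAR(5), DATEADD(SECOND, seconds_past_midnight, 0), 108) AS readable_time
FROM event_times;
-- 15120 -> 04:12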
If you are using MySQL use a field type of TIME and the associated functionality that comes with TIME.
00:00:00 (hh:mm:ss) is the standard format that MySQL uses for TIME values.
If you ever have to look back and review the tables by hand, integers can be more confusing than an actual time stamp.
Instead of minutes-past-midnight we store the 24-hour clock time as a SMALLINT.
09:12 = 912
14:15 = 1415
When converting back to "human readable form" we just insert a colon ":" two characters from the right, left-padding with zeros if needed. This saves the mathematics each way, uses a few fewer bytes (compared to varchar), and enforces that the value is numeric (rather than alphanumeric).
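In T-SQL the colon trick might look something like this (hhmm is a hypothetical SMALLINT column):

SELECT STUFF(RIGHT('0000' + CAST(hhmm AS VARCHAR(4)), 4), 3, 0, ':') AS readable_time
FROM event_times;
-- 912 -> 09:12, 1415 -> 14:15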
Pretty goofy though ... there should have been a TIME datatype in MS SQL for many a year already IMHO ...
Try smalldatetime. It may not give you what you want but it will help you in your future needs in date/time manipulations.
Are you sure you will only ever need the hours and minutes? If you want to do anything meaningful with it (like for example compute time spans between two such data points) not having information about time zones and DST may give incorrect results. Time zones do maybe not apply in your case, but DST most certainly will.
What I think you're asking for is a variable that will store minutes as a number. This can be done with the varying types of integer variable:
SELECT 9823754987598 AS MinutesInput
Then, in your program you could simply view this in the form you'd like by calculating:
long MinutesInAnHour = 60;
long MinutesInADay = MinutesInAnHour * 24;
long MinutesInAWeek = MinutesInADay * 7;
long MinutesCalc = long.Parse(rdr["MinutesInput"].ToString()); //BigInt converts to long. rdr is an SqlDataReader.
long Weeks = MinutesCalc / MinutesInAWeek;
MinutesCalc -= Weeks * MinutesInAWeek;
long Days = MinutesCalc / MinutesInADay;
MinutesCalc -= Days * MinutesInADay;
long Hours = MinutesCalc / MinutesInAnHour;
MinutesCalc -= Hours * MinutesInAnHour;
long Minutes = MinutesCalc;
An issue arises when you want to be more efficient with storage. But if you're short of time, then just use a nullable BigInt to store your minutes value.
A value of null means that the time hasn't been recorded yet.
Now, I will explain in the form of a round-trip to outer-space.
Unfortunately, a table column will only store a single type. Therefore, you will need to create a new table for each type as it is required.
For example:
If MinutesInput = 0..255 then use TinyInt (convert as described above).
If MinutesInput = 256..65535 then use SmallInt (note: SmallInt's range is -32,768..32,767, so subtract 32768 when storing and add it back when retrieving to utilise the full range before converting as above).
If MinutesInput = 65536..4294967295 then use Int (note: offset by 2147483648 as necessary).
If MinutesInput = 4294967296..18446744073709551615 then use BigInt (note: offset by 9223372036854775808 as necessary).
If MinutesInput > 18446744073709551615 then I'd personally use VARCHAR(X), increasing X as necessary, since a char is a byte. I shall have to revisit this answer at a later date to describe this in full (or maybe a fellow stackoverflowee can finish this answer).
Since each value will undoubtedly require a unique key, the efficiency of the database will only be apparent if the range of the values stored is a good mix between very small (close to 0 minutes) and very high (greater than 4294967295).
Until the values being stored actually reach a number greater than 18446744073709551615, you might as well have a single BigInt column to represent your minutes, as you won't need to waste an Int on a unique identifier and another Int to store the minutes value.
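To illustrate the offset trick above in isolation, here is a small T-SQL sketch (the value is purely illustrative):

DECLARE @minutes INT = 40000;                        -- value we want to keep
DECLARE @stored SMALLINT = @minutes - 32768;         -- 7232, fits in SMALLINT
SELECT CAST(@stored AS INT) + 32768 AS minutes_out;  -- 40000 again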
Saving the time in UTC format, as Kristen suggested, can help.
Make sure that you are using the 24-hour clock, because there is no AM or PM meridiem in UTC.
Example:
4:12 AM - 0412
10:12 AM - 1012
2:28 PM - 1428
11:56 PM - 2356
It's still preferable to use the standard four-digit format.
Store the ticks as a long/BIGINT. In .NET a tick is a very fine-grained unit (currently 100 nanoseconds); the exact ratio can be found by looking at the TimeSpan.TicksPerSecond value.
Most databases have a DateTime type that automatically stores the time as ticks behind the scenes, but in the case of some databases, e.g. SQLite, storing ticks can be a way to store the date.
Most languages allow the easy conversion from Ticks → TimeSpan → Ticks.
Example
In C# the code would be:
long TimeAsTicks = TimeAsTimeSpan.Ticks;
TimeAsTimeSpan = TimeSpan.FromTicks(TimeAsTicks);
Be aware though that SQLite only offers a small number of different types (INT, REAL and VARCHAR), so it may be necessary to store the number of ticks as a string or as two INT cells combined. This is because an INT is a 32-bit signed number, whereas a BIGINT is a 64-bit signed number.
Note
My personal preference however, would be to store the date and time as an ISO8601 string.
IMHO, the best solution depends to some extent on how you store time in the rest of the database (and the rest of your application).
Personally, I have worked with SQLite and try to always use Unix timestamps for storing absolute time, so when dealing with the time of day (like you ask for) I do what Glen Solsberry writes in his answer and store the number of seconds since midnight.
When taking this general approach, people (including me!) reading the code are less confused if I use the same standard everywhere.
Related
We are storing wine data in our database. The vintage of the wine might be a number like 2010 or a string such as Non-Vintage.
2010 means the grapes were harvested in 2010.
Non-Vintage means the grapes were harvested across an unknown time-period.
At first we decided to store the field as a string, since 2010 and Non-Vintage are both potentially strings. However, we need to be able to sort the years or perform some arithmetic (i.e. year > 2010).
We are considering either:
Store the data as a number and "Non-Vintage" would be assigned 0. However, we'd have to provide weird validations everywhere in the app for handling the 0 value.
Store the year as a number and provide a boolean "non_vintage" field for non-vintage wines.
The data pulled from the database will be delivered to an AngularJS front-end via an API. The Javascript code will have to parse through and use the year at various points... i.e. "show me all wines where year > 2010".
Anyone have any thoughts on which is better and why?
Are you using MySQL? You can cast strings as ints in your MySQL statements.
select * from table order by cast(string AS signed) asc;
That would mean you can store as a string.
Option 1
Since you're not really treating the year as a number, nor do you need to, the simplest option you have is to just create a string column and have at it.
This has some advantages and disadvantages right off the bat:
Advantage:
simplest, nothing complex. any database can handle this.
logic is fairly simple, although a side case to handle your "Non-Vintage" is needed.
Disadvantage:
potentially allows values you don't want/expect/etc. :
ie: "NonVintage", "Non-vantage", "Unknown", "Spider-man!" ... O.o
This might be mitigated by whatever your process for putting values in might be (ie if it's mostly automated, this might be a smaller issue) :)
========================================
Option 2
A stricter way would be to use 2 columns: a number and a string.
Store the vintage year in a 4-digit number field, and you know it'll always be a "proper" year. (you could add a check constraint to prevent years < 1000 if you want ;) )
Store the code "NV" in a 2 (or 3?) digit string "code" column. This gives you good flexibility going forward in case other requirements in future start asking for additional types or such.
You could just do a boolean; it would work. However, if things changed in future (they always do), you'd have to redesign. The string code column has no real hard disadvantage compared to the boolean, and gives you simple flexibility going forward ;)
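One possible shape for option 2, as a sketch only (the table, column names and check constraint are just illustrative):

CREATE TABLE wine (
    wine_id      INT PRIMARY KEY,
    name         VARCHAR(100) NOT NULL,
    vintage_year SMALLINT NULL CHECK (vintage_year >= 1000),  -- NULL when not applicable
    vintage_code CHAR(2) NULL                                  -- e.g. 'NV' for non-vintage
);
-- "show me all wines where year > 2010" stays a plain numeric comparison:
SELECT name FROM wine WHERE vintage_year > 2010;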
========================
It would depend on your system and what you know of it, and how the data's coming in ... but I'd probably lean towards option 2 (number + string) myself.
SQL Server's datetime format is stored as 8 bytes, where the first four bytes are the number of days since Jan 1, 1900 and the other four bytes are the number of ticks since midnight, a tick being 1/300 of a second.
I'm wondering why that is. Where did that 1/300 come from? There must be some historic reason for it.
Yes, there is a historical reason: UNIX !
For details, read this excellent article by Joe Celko.
Here is the detail you're looking for:
Temporal data in T-SQL used to be a prisoner of UNIX system clock ticks and could only go to three decimal seconds with rounding errors. The new ANSI/ISO data types can go to seven decimal seconds, have a true DATE and TIME data types. Since they are new, most programmers are not using them yet.
I am creating a pilot logbook software application, and pilots log flight time in various formats. In the US, pilots typically log flight time in tenths of an hour, i.e. 90 minutes of flying would be logged as 1.5. International/European pilots typically log time in HH:MM, so the 90 minutes would be logged as 1:30. Some may also want other formats such as 1 + 30.
I need a universal way of storing this value in SQL Server such that it can be converted to the display formats shown above. I'd hate to have two columns for every field, one for decimal time and the other for total minutes. Or I could somehow convert the decimal time into a total-minutes representation and store it as an INT.
We've also discussed storing TICKS. The problem comes in when aggregating the values later in reports, pivot table systems such as DevExpress's XtraPivotGrid, etc. An even bigger concern is how this data will be handled on mobile devices such as the logbook apps on iPhone, Android, etc. using SQLite. In the past we had issues with Palm OS and the size of the number (integer) and overflow problems. When you add up TICKS from 30 years of a pilot's flying you could end up with a huge number, and then there is doing averages, etc.
For pilots that track flight time in decimal, we use time scales such as 27-33 minutes being a .5, so if you fly for 32 minutes you log a 0.5, and 1 hour and 33 minutes would be logged as a 1.5. We need to apply the time scale when converting the stored value to the display value. For Europeans it would be 1:33 for that last case, which was logged as a 1.5 in the US.
How do you suggest we store a pilot's flight time in the database so it can be converted to tenths, hundredths, or HH:MM presentation? Also consider aggregates, mobile apps, etc.
Thank you.
Use the definition of TIME (or DATETIME as you wish) native to your database's variant of SQL. Transform to/from the external representations as required on output/input.
I am wondering how to represent an end-of-time (positive infinity) value in the database.
When we were using a 32-bit time value, the obvious answer was the actual 32-bit end of time - something near the year 2038.
Now that we're using a 64-bit time value, we can't represent the 64-bit end of time in a DATETIME field, since 64-bit end of time is billions of years from now.
Since SQL Server and Oracle (our two supported platforms) both allow years up to 9999, I was thinking that we could just pick some "big" future date like 1/1/3000.
However, since customers and our QA department will both be looking at the DB values, I want it to be obvious and not appear like someone messed up their date arithmetic.
Do we just pick a date and stick to it?
Use the max collating date, which, depending on your DBMS, is likely going to be 9999-12-31. You want to do this because queries based on date ranges will quickly become miserably complex if you try to take a "purist" approach like using Null, as suggested by some commenters or using a forever flag, as suggested by Marc B.
When you use max collating date to mean "forever" or "until further notice" in your date ranges, it makes for very simple, natural queries. It makes these kind of queries very clear and simple:
Find me records that are in effect as of a given point in time.
... WHERE effective_date <= #PointInTime AND expiry_date >= #PointInTime
Find me records that are in effect over the following time range.
... WHERE effective_date <= #StartOfRange AND expiry_date >= #EndOfRange
Find me records that have overlapping date ranges.
... WHERE A.effective_date <= B.expiry_date AND B.effective_date <= A.expiry_date
Find me records that have no expiry.
... WHERE expiry_date = #MaxCollatingDate
Find me time periods where no record is in effect.
OK, so this one isn't simple, but it's simpler using max collating dates for the end point. See: this question for a good approach.
Using this approach can create a bit of an issue for some users, who might find "9999-12-31" to be confusing in a report or on a screen. If this is going to be a problem for you then drdwicox's suggestion of using a translation to a user-friendly value is good. However, I would suggest that the user interface layer, not the middle tier, is the place to do this, since what may be the most sensible or palatable may differ depending on whether you are talking about a report or a data entry form, and whether the audience is internal or external. For example, in some places what you might want is a simple blank. In others you might want the word "forever". In others you may want an empty text box with a check box that says "Until Further Notice".
In PostgreSQL, the end of time is 'infinity'. It also supports '-infinity'. The value 'infinity' is guaranteed to be later than all other timestamps.
create table infinite_time (
ts timestamp primary key
);
insert into infinite_time values
(current_timestamp),
('infinity');
select *
from infinite_time
order by ts;
2011-11-06 08:16:22.078
infinity
PostgreSQL has supported 'infinity' and '-infinity' since at least version 8.0.
You can mimic this behavior, in part at least, by using the maximum date your dbms supports. But the maximum date might not be the best choice. PostgreSQL's maximum timestamp is some time in the year 294,276, which is sure to surprise some people. (I don't like to surprise users.)
2011-11-06 08:16:21.734
294276-01-01 00:00:00
infinity
A value like this is probably more useful: '9999-12-31 11:59:59.999'.
2011-11-06 08:16:21.734
9999-12-31 11:59:59.999
infinity
That's not quite the maximum value in the year 9999, but the digits align nicely. You can wrap that value in an infinity() function and in a CREATE DOMAIN statement. If you build or maintain your database structure from source code, you can use macro expansion to expand INFINITY to a suitable value.
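A rough PostgreSQL-flavoured sketch of that idea (the function, domain and table names are only illustrative):

CREATE FUNCTION end_of_time() RETURNS timestamp
    LANGUAGE sql IMMUTABLE
    AS $$ SELECT timestamp '9999-12-31 11:59:59.999' $$;

CREATE DOMAIN expiry_timestamp AS timestamp
    DEFAULT end_of_time()
    NOT NULL;

CREATE TABLE price (
    price_id    serial PRIMARY KEY,
    valid_from  timestamp NOT NULL,
    valid_until expiry_timestamp    -- defaults to the "forever" value
);

This keeps the magic value defined in exactly one place, so changing it later means changing one function.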
We sometimes pick a date, then establish a policy that the date must never appear unfiltered. The most common place to enforce that policy is in the middle tier. We just filter the results to change the "magic" end-of-time date to something more palatable.
Representing the notion of "until eternity" or "until further notice" is an iffy proposition.
Relational theory proper says that there is no such thing as null, so you're obliged to have whatever table it is split in two: one part with the rows for which the end date/end time is known, and another for the rows for which the end time is not yet known.
But (like having a null) splitting the tables in two will make a mess of your query writing too. Views can somewhat accommodate the read-only parts, but updates (or writing the INSTEAD OF triggers on your views) will be tough no matter what, and likely to affect performance negatively at that.
Having the null represent "end time not yet known" will make updating a bit "easier", but the read queries get messy with all the CASE ... or COALESCE ... constructs you'll need.
Using the theoretically correct solution mentioned by dportas gets messy in all those cases where you want to "extract" a DATE from a DATETIME. If the DATETIME value at hand is "the end of (representable) time (billions of years from now as you say)", then this is not just a simple case of invoking the DATE extractor function on that DATETIME value, because you'd also want that DATE extractor to produce the "end of representable DATEs" for your case.
Plus, you probably do not want to show "absent end of time" as being a value 9999-12-31 in your user interface. So if you use the "real value" of the end of time in your database, you're facing a bit of work seeing to it that that value won't appear in your UI anywhere.
Sorry for not being able to say that there's a way to stay out of all messes. The only choice you really have is which mess to end up in.
Don't make a date be "special". While it's unlikely your code would be around in 9999 or even in 2^63-1, look at all the fun that using '12/31/1999' caused just a few years ago.
If you need to signal an "endless" or "infinite" time, then add a boolean/bit field to signal that state.
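A minimal sketch of that flag approach (names are illustrative); note that, as the accepted answer warns, range queries then have to consider the flag explicitly:

CREATE TABLE subscription (
    subscription_id INT PRIMARY KEY,
    starts_on       DATETIME NOT NULL,
    ends_on         DATETIME NULL,           -- NULL while open-ended
    is_open_ended   BIT NOT NULL DEFAULT 0   -- 1 = "until further notice"
);
SELECT subscription_id
FROM subscription
WHERE starts_on <= GETDATE()
  AND (is_open_ended = 1 OR ends_on >= GETDATE());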
What can bite me if I store a datetime as a float in the database? I have a very good reason for doing it so don't complain about that :)
Edit: I was thinking about just storing convert(float, #thedate) in a float column.
What can bite you? Well, first, float is not an exact datatype and thus should probably never be used for anything that requires precision. Next, a float will not automatically reject an incorrect date. Finally, if you want to perform any date functions you will first have to convert the data back to a datetime data type, which is a waste of server resources.
You say you have a good reason for wanting to do this, but without a clue as to what that might be, I submit that the dates should be stored in the datatype meant to handle them.
Here is a good article on "Demystifying the SQL Server DATETIME Datatype"
http://www.sql-server-performance.com/articles/dev/datetime_datatype_p1.aspx
From reading that, it looks like datetime is stored as two 4-byte ints, or you could use binary(8).
As others have said, storing as a float causes you to lose some precision.
What Alan said, plus the fundamental problem of maintenance: someone else comes onto the project, sees the float-for-datetime in the DB, and tries to do something wrong with it, tries to refactor it to the proper type, or just spends hours poring over the code to figure out what the heck is going on. The whole problem of maintenance can to some extent be mitigated by extensively documenting what's going on and why you're doing it, which I'd recommend highly.
SQLite uses a time format (one of a few available) with a (64 bit) double float using the integer part for days since an epoch, and the fractional part as fraction of a day. It seems to work well.
See SQLite Date and Time Functions "Format 12 is the Julian day number expressed as a floating point value."
Using Julian Dates, 15 decimal digits gets you millisecond precision for several millennia.
According to this Julian Date Converter, JD 9999999.99999 is CE 22666 December 20 11:59:59.1 UT Thursday
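A small SQLite sketch of storing a Julian day number directly (the table and column names are hypothetical):

CREATE TABLE sample (
    sample_id INTEGER PRIMARY KEY,
    taken_at  REAL NOT NULL              -- Julian day number, e.g. julianday('now')
);
INSERT INTO sample (taken_at) VALUES (julianday('2013-01-27 12:30:00'));
-- Convert back for display with the built-in date functions:
SELECT datetime(taken_at) AS taken_at_text FROM sample;
-- 2013-01-27 12:30:00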
Precision loss for one. Lack of resolution is another.
It's a minor issue, but there are also different floating-point formats: IEEE 754 vs VAX floating point.
You lose some precision. I tested in SQL Server with:
select getdate(), cast(getdate() as float), cast(cast(getdate() as float) as datetime)
You can see if you run this repeatedly that you can lose as much as 4 milliseconds in the conversion. If your database supports a data type like smalldatetime and you only need accuracy to the second, then you can smooth out this difference.
Actually, as far as I know, MSSQL and Oracle actually do internally store datetimes as floats (as day and fractional days)
select cast(0 as datetime), cast(0.5 as datetime)
1900-01-01 00:00:00.000 1900-01-01 12:00:00.000
Regardless of precision, depending on what number you choose as a start point, you may have trouble with comparisons.
If you are unable to create precise floats for each datetime, you may also have two datetimes that compare one way as datetimes and the other way as floats, if they resolve to different float values.
A potential disadvantage of using a float would be that you lose the ability to use the system's built-in date manipulation functions. If you're only doing interval calculations, floating-point may be fine, but anything relating to calendars could require some wheel-reinventing.
BTW, it seems Oracle at least uses fixed fields for its internal date representation:
select
to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS') as Date_Value,
dump(sysdate) as Internals
from dual;
DATE_VALUE INTERNALS
------------------- ------------------------------------------------------------
2009-05-28 09:51:12 Typ=13 Len=8: 217,7,5,28,9,51,12,0
1 row selected
If you're going to store that data as a float, I think you'd be better off storing seconds since epoch as a float rather than a datetime as a float.
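For example, a seconds-since-epoch value converts back with simple date arithmetic; a small T-SQL illustration (the stored value here is made up):

DECLARE @epoch_seconds FLOAT = 1359289800;   -- seconds since 1970-01-01
SELECT DATEADD(SECOND, CAST(@epoch_seconds AS INT), '1970-01-01') AS as_datetime;
-- 2013-01-27 12:30:00.000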