Question about date fields in a database - SQL Server and Access

I need to find data between two dates and times.
I use one field for the date, and one field for the time.
Would it be better to use only one field for both date and time?
I see that it comes in dd/mm/yyyy hh:mm:ss format, which
can contain both date and time.
This question is for Access and for SQL Server.
Thanks in advance.

In nearly all circumstances, date and time are needed together. Both Access and SQL Server have a date/time data type. In Access, even if you specify the format as time, you can show a date. This is because all datetime data is stored as a number, and time is the decimal portion:
Say I store the time 10:31:46. I can type lines in the Immediate window that illustrate how datetime is stored, like so:
?CDec(DlookUp("TimeFormattedField", "Test"))
0.438726851851852
?Year(DlookUp("TimeFormattedField", "Test"))
1899
?Format(DlookUp("TimeFormattedField", "Test"),"dd/mm/yyyy")
30/12/1899
This is because zero (0) is a valid date: a time-only value has a date portion of zero, and day zero is 30 December 1899.
It is very easy to get the different portions of a datetime field, so store date and time as a single field, because that is what you are going to get anyway.
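For the SQL Server side, a quick sketch of pulling the portions back out of a single combined column (assuming SQL Server 2008 or later, with an illustrative dbo.Orders table and an OrderedAt datetime column):

SELECT OrderID,
       CAST(OrderedAt AS date) AS OrderDate,    -- date portion only
       CAST(OrderedAt AS time) AS OrderTime,    -- time portion only
       DATEPART(hour, OrderedAt) AS OrderHour   -- or any individual part
FROM dbo.Orders;

On older versions you can use CONVERT with a style, or DATEADD/DATEDIFF tricks, but the principle is the same.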

I sometimes like to store date and time separately. In general, I almost never need time in my apps; one case where I do store them separately is in some of my logging routines, mostly because I only ever query on dates and never on date+time.
If you need to query on both date and time, then storing them separately is really problematic, because then you have to concatenate two fields for comparison, and that means your criteria won't use any indexes on the two fields. This may not be an issue for a few thousand records, but for anything above that, it can quickly become quite a performance drag. It's also a major issue if you're using a server back end, since all the rows will have to be pulled across the wire, instead of Access/Jet/ACE being able to hand off the selection to the server.
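To make the index point concrete, here is a rough sketch (illustrative table and column names, and assuming the split columns are both stored as datetime values): with a single combined column the criterion can be handed straight to an index, while the split version has to recombine the fields in an expression that no index on either field can serve.

-- Single combined column: the range test can use an index on LogDateTime.
SELECT * FROM tblLog
WHERE LogDateTime >= '2010-01-05T08:00:00'
  AND LogDateTime <  '2010-01-05T17:00:00';

-- Split columns: the criteria must rebuild the value, so indexes on
-- LogDate and LogTime are useless for this predicate.
SELECT * FROM tblLog
WHERE (LogDate + LogTime) >= '2010-01-05T08:00:00'
  AND (LogDate + LogTime) <  '2010-01-05T17:00:00';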

It depends on the requirement. If you are using SQL Server 2008+, then storing them separately is not a problem, and the queries are just as easy to write.
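For example, with the separate date and time types that arrived in SQL Server 2008, a minimal sketch (illustrative names) could be:

CREATE TABLE dbo.Appointments (
    ApptDate date NOT NULL,     -- date portion only
    ApptTime time(0) NOT NULL   -- time portion only
);

SELECT * FROM dbo.Appointments
WHERE ApptDate = '2012-06-01'
  AND ApptTime BETWEEN '08:00' AND '17:00';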

Related

Can't group on this selection using Dates column from SQL in Pivot Table in Excel

I have a pivot table in Excel which I'd like to be able to perform Grouping on using the SaleDate column.
However, when I've created my Pivot Table, right-clicked an element in the field, and chosen Grouping..., I get the error:
Cannot Group on this selection
which I've figured is because either:
1) there are blanks in the column, or
2) the column is not of the Date data type in Excel
I've copied the whole column to Notepad++ and performed a Find what: (a blank space), but that gave nothing in return, i.e. there are no blank spaces in the column.
That leaves option number 2, and since I can't filter the SaleDate column on Year or Month, it seems to in fact be interpreted as text rather than a Date.
I'm using a SQL database as the source, which I have tried to adjust to get around this. My raw data in SQL is of data type numeric, so I first need to convert it to varchar and subsequently to date. Note that these are three different approaches I have used to adjust the date in SQL, and that the table to which I save the data is indeed of data type date in SQL:
left(convert(date,convert(varchar,Rawdate,110),110),7) as SaleDate
convert(date,convert(varchar,Rawdate,112),112) as SaleDate
convert(date,convert(varchar,Rawdate,110),110) as SaleDate
These return, in order, yyyy-mm, yyyy-mm-dd, and yyyy-mm-dd, but none of them works to either group on in the Pivot Table or filter on months or years in Excel.
While I've never worked specifically with Excel workbooks that use SQL Server directly, I know SQL Server has many date & time types, unlike .NET's C#, which has fewer, or Excel, which has only one (1). SQL Server's own types are not that cooperative with each other to begin with (2), so I wouldn't be surprised if issues arise from even the tiniest differences when porting to other technologies; I have run into this a few times.
With that in mind, and given your evidence of a likely failure in date conversion, plus the chained conversions you mentioned, my first suggestion is to feed Excel a different date type; my first choices would be datetime or datetime2, as they are the most popular, the most complete, and the most similar to Excel's lone type (a sketch follows after the notes below).
(1): It's more like zero; a date in Excel is really just a serial number that everything around it gives special treatment, which fails half the time.
(2): Why would int to/from datetime be fine, but int to/from datetime2 is not...
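In practice that could mean converting the numeric Rawdate straight to datetime rather than chaining through date. A sketch, assuming Rawdate holds yyyymmdd-style numbers (which the style-112 conversion above implies):

SELECT CONVERT(datetime, CONVERT(varchar(8), Rawdate), 112) AS SaleDate
FROM dbo.SalesTable;   -- table name is illustrative

If Excel still refuses to group after that, at least the text-vs-date question is narrowed down to the Excel side.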
If you make the field of interest a row field and click the "Filter" triangle in the column heading, then often right at the bottom of the list in that PivotFilter box you'll see the item(s) causing you the problem. Be aware that it might be text that simply looks like a date, or it might be something more obvious.
As per my comment, another way to diagnose what's going on is by taking just a few of the items that you are sure are dates, putting them into a range, making a PivotTable out of them, and seeing if you can group them. If you can, then you know that the problem is indeed likely some text in the data. If you can't, then it's likely you have text that still needs to be converted to dates...but you'll need to post some examples here in order for us to give you suggestions on how to turn it into something Excel recognizes as a date.

Store a specific hour but in datetime type SQL Server

I have a table that has two columns, called HourFrom and HourTo; these hours can be changed. I don't want the type to be nvarchar; the only thing that needs to be stored is, for example, 08:00 or 23:00.
Is there a way to design the table so that it only stores the hour, instead of creating (for example) a stored procedure that saves the datetime using such functions?
The reason behind this is that I have an entity in my backend where these two members are DateTime, as they should be, and I don't want to mix types and do weird casting or use split, indexOf, etc.
You can use time(0); the data may look like 03:06:12 or 08:45:00. I have no idea how you are going to use the data. E.g., if it's in a WHERE clause, such as where HourFrom between '03:00:00' and '04:00:00', does it matter if HourFrom contains minutes and seconds?
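A minimal sketch of that suggestion (table and column names are illustrative):

CREATE TABLE dbo.OpeningHours (
    HourFrom time(0) NOT NULL,   -- e.g. '08:00:00'
    HourTo   time(0) NOT NULL    -- e.g. '23:00:00'
);

INSERT INTO dbo.OpeningHours (HourFrom, HourTo)
VALUES ('08:00', '23:00');

-- Filtering works directly on the time(0) columns.
SELECT * FROM dbo.OpeningHours
WHERE HourFrom BETWEEN '03:00:00' AND '04:00:00';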

Why does Azure lose timezone information in DateTimeOffset fields?

We're working on an iOS app using Microsoft's Azure Mobile Services. The web GUI creates date-time fields as DateTimeOffset, which is fine. But when we have the mobile app put datetimes into the database, then read them back from the database via Entity Framework, we're seeing them adjusted to UTC. (We see the same thing when we view the records in SSMS.)
I've always been frustrated by the lack of timezone support in SQL's standard datetime types, and I'd thought that DateTimeOffset would be better. But if I wanted my times in UTC, I'd have stored them in UTC. If a user enters a time as 3:00 AM CST, I want to know he entered CST. It makes as little sense to me to convert it to UTC and throw away the offset as it did to assume that 3:00 AM CST and 3:00 AM PDT were the same.
Is there some kind of database configuration I can do to keep the Azure database from storing the dates in UTC?
The issue is that at some point in Azure Mobile Services, the property is converted to a JavaScript Date object, which cannot retain the offset.
There are a couple of blog posts describing this issue, and possible workarounds:
https://blogs.msdn.microsoft.com/carlosfigueira/2013/05/13/preserving-date-time-offsets-in-azure-mobile-services/
http://michele-colombo.it/2014/11/azure-mobileservices-how-to-properly-save-datetimeoffset-with-offset/
Essentially, they both take the same approach of splitting out the offset into a separate field. However, looking closely at these, they both make a crucial mistake:
dto.DateTime.ToUniversalTime()
Should actually be:
dto.UtcDateTime
The DateTime of a DateTimeOffset will always have DateTimeKind.Unspecified, and thus ToUniversalTime will assume the source is local, rather than using the offset of the DTO.
There are a few other similar errors I see in the code in these posts, so be careful to test thoroughly. However, the general approach is sound.
We're using a Node.js backend and noticed the same thing: DATETIMEOFFSETs read from our SQL Server database are returned in UTC regardless of the offset. Another alternative is to convert the DATETIMEOFFSET at the query level so that it is output as a string with the timezone information. The following converts a DATETIMEOFFSET(0) field to the ISO 8601 format; however, other possible styles can be used as documented here:
SELECT CONVERT(VARCHAR(33), [StartDate], 126) AS [StartDate] FROM [Products];
The new output is now: "2016-05-26T00:00:00-06:00" instead of "2016-05-26T06:00:00+00:00"
Of course, this means that the client must serialize the string into their respective format. In iOS, the ISO8601 library can be used to read the output as either a NSDateComponents or NSDate.
One benefit of this approach is that any database-level checks or triggers can do date comparisons using the DATETIMEOFFSET instead of trying to take into account a separate offset column with a basic DATETIME.
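For instance, a sketch of such a check (the [EndDate] column and the constraint are hypothetical additions, not part of the question's schema):

-- [EndDate] is assumed to be a second DATETIMEOFFSET(0) column.
-- The comparison is already offset-aware, because DATETIMEOFFSET values
-- are compared on their UTC instant.
ALTER TABLE [Products]
ADD CONSTRAINT CK_Products_StartBeforeEnd CHECK ([StartDate] <= [EndDate]);

With a plain DATETIME plus a separate offset column, the same check would have to rebuild each value with TODATETIMEOFFSET before comparing.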

SQLBulkCopy and Dates (1/1/1753)

I've got an application which has been working fine for quite a while, but there is an annoying item that continues to get in the way on occasion.
Let's say that I use an object such as OracleDataReader or MySQLDataReader to pass the data to the SqlBulkCopy object for insert. Let's assume that all the columns map just fine and, for the most part, it all works well.
Granted, I don't have control over the source application or database (which is either MySQL or Oracle). So some goof goes into a different application and puts in a date on the invoice table of 5/31/0210. He really meant to put in 5/31/2010, but the application he's using is not validating the data very tightly and the Oracle database accepts it. For all intents and purposes, the date of 5/31/0210 is a valid date for the Oracle db. It might be stupid in terms of data entry, but it is what it is at this point.
Now our OracleDataReader comes along and is transferring this invoice table over to SQL Server via SqlBulkCopy. It is passing the data to a perfectly matched table with the right column names and data types. You can see what is going to happen. This date of 05/31/0210 from Oracle is not accepted by the SQL Server db engine, as the DATETIME field only allows dates from 1/1/1753 to 12/31/9999.
When it encounters this record, it simply fails and gives an overflow error. It doesn't skip the record, it kills the feed. So if it happens a thousand records in on a million record table, you don't get the remaining 999,000 records.
Is there any way to get around this issue so that the feed will continue?
Ideally, I'd like to move the receiving SQL Server DB to 2008 and use DATETIME2, which would allow for these goofy dates, but unfortunately not all my clients are ready to move to this version yet, so I'm stuck with DATETIME in SQL 2000/2005/2008.
Any ideas on how to get around this without changing the SQL? Ideally, I wouldn't mind if it just skipped the record. I know that I could do this in the SQL for the datareader, but this would be extremely complicated when you have twenty date fields in a single query. It would be a maintenance nightmare.
Any thoughts would be appreciated.
One option would be to change the datetime column type to varchar, then add a derived column that converts the string to datetime. The trick would be to use a function in the derived column to validate the date and substitute an arbitrary datetime if the conversion would fail. If you do heavy date comparisons, persist the computed column and/or index it.
I say all of this under the impression that SqlBulkCopy is not able to do transforms. Maybe it can; hopefully someone will chime in with a way to do it.
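To illustrate the derived-column idea, a rough sketch (hypothetical dbo.Invoice table; ISDATE returns 0 for anything outside DATETIME's 1753-9999 range, so the goofy dates fall back to an arbitrary sentinel):

-- Receive the raw value as text, then derive a valid DATETIME from it.
ALTER TABLE dbo.Invoice ADD InvoiceDateRaw varchar(30) NULL;
ALTER TABLE dbo.Invoice ADD InvoiceDate AS
    CASE WHEN ISDATE(InvoiceDateRaw) = 1
         THEN CONVERT(datetime, InvoiceDateRaw, 101)   -- assumes mm/dd/yyyy source text
         ELSE CONVERT(datetime, '17530101', 112)       -- arbitrary sentinel for bad dates
    END;

The bulk copy then targets InvoiceDateRaw, so the 05/31/0210-style values no longer kill the feed.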
SSIS would be great in this situation, as you could do the transform and also get the performance benefits of the bulk update lock.

Informix to Oracle: Dealing with Fetching Null Values

A bit of background first. My company is evaluating whether or not we will migrate our Informix database to Oracle 10g. We have several ESQL/C programs. I've run some through the Oracle Migration workbench and have been muddling through some testing. Now I've come to realize a few things.
First, we have dynamic SQL statements that are not handling null values at all. From what I've read, I either have to manually modify the queries to use the NVL() function or implement indicator variables. Can someone confirm whether manual modifications are necessary? The fewer manual changes we have to make to our converted ESQL/C programs, the better.
Second, we have several queries which pull dates from various tables etc., and in Informix dates are treated as type long, the # of days since Dec 31st, 1899.
In Pro*C, what format is a date selected as? I know it's not numeric, because I tried selecting a date field into my long variable and got an Oracle error stating "expected NUMBER but got a DATE". So I'm assuming we'd have to modify how we select date fields: either select a date field in a converted manner so it becomes a long (i.e., the # of days since 12/31/1899), or change the host variable to match what Oracle is returning (what is that, a string?).
Ya. You will need to modify your queries as you described.
long is tripping you up; long has a different meaning in Oracle. There is a specific DATE type. Generally, when selecting, one uses the TO_CHAR function with a format to get the result as a VARCHAR2 in exactly the format you want.
Probably it hasn't hit you yet, but be aware that in Oracle empty VARCHAR2 strings are NULLs. I see no logic behind this (probably because I came from Informix land) - just keep it in mind. I think it is stupid - IMHO an empty string is meaningful and different from NULL.
Either modify all your VARCHAR2 fields to be NOT NULL DEFAULT '-' (or any other arbitrary value), or use indicators in ALL your queries that return VARCHAR2 fields, or always use NVL().
In order to convert the Oracle dates (which are stored in Oracle's internal format) into a long integer, you will need to alter your queries. Use the following formula for your dates:
to_number (to_char (date_column, 'J')) - to_number(to_char(to_date('12/31/1899', 'MM/DD/YYYY'), 'J'))
The Oracle 'J' (Julian date) format is a count of the number of days since January 1, 4712 BC. If you want to count from a later date, you'll need to subtract off the Julian day count of that later date.
One suggestion: instead of altering all of the queries in your programs (which may create problems and introduce bugs), create a set of views in a different schema. These views would be named the same as the tables, with all the same columns, but would include the NVL() and date formulas (like the ones above). Then point your application at the view schema rather than the base table schema. Much less testing, and fewer places to miss something.
So, for example, put all your tables into a schema called "APPS_BASE" (owned by the user "APPS_BASE"). Then create another schema/user called "APPS_VIEWS". In APPS_VIEWS, create a view:
CREATE OR REPLACE VIEW EMP AS
SELECT name, birth_date
FROM APPS_BASE.EMP;
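Folding in the NVL() and Julian-day formulas from above, the same view might look more like this (column names are just for illustration):

CREATE OR REPLACE VIEW EMP AS
SELECT NVL(name, '-') AS name,
       TO_NUMBER(TO_CHAR(birth_date, 'J'))
         - TO_NUMBER(TO_CHAR(TO_DATE('12/31/1899', 'MM/DD/YYYY'), 'J')) AS birth_date
FROM APPS_BASE.EMP;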
