I have been working with Oracle DB for a long time and have been storing dates without bothering about time zones.
But now we have an issue where the client and server are in different time zones, and we need to perform date/time conversions according to time zone. This has opened up a lot of questions:
Should we always store date/time along with its time zone or not?
I am asking this because if the server time zone changes, my data will be corrupted.
If the Oracle DB server is located in a particular region, should it always run on the local time zone? Is there any standard for this?
My second point relates to DR database servers, which are located in different regions but hold the same data as the prod DB. If the time zones are not the same for both DBs, we are in trouble.
To be able to show clients dates in their time zone, you need some offset.
For example, suppose your server runs in US EDT and you have saved times like that for years; all your data is stored that way. You need to create a field where you store an offset for each user, and then apply that offset to each date/time field on select. How you do this, I have no idea, because I have no idea how you use your data. Is it just a select statement, a report, or a website? If this is an application, the client would load user info including the offset; for example, a user in US EDT would have an offset of 0 in your case. Then any date/time field requested from the DB should have the offset applied. Of course, since your application was not designed for this from the beginning, it may take some good effort.
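For illustration, a minimal sketch of applying a stored per-user offset on select; all table and column names here are hypothetical:

```sql
-- Hypothetical schema: Users stores each user's offset in minutes
-- relative to the server's time zone; Orders stores server-local times.
SELECT o.OrderId,
       o.CreatedAt,
       DATEADD(MINUTE, u.OffsetMinutes, o.CreatedAt) AS CreatedAtUserLocal
FROM dbo.Orders AS o
JOIN dbo.Users  AS u ON u.UserId = o.UserId;
```

Note that a single fixed per-user offset ignores daylight saving transitions, so this is only an approximation.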
I am trying to find a way to change SQL Server from using UTC time to local time, because I need to get local time when I pull data using OData via Excel.
Is there a way to configure SQL Server from UTC to local time?
If you have UTC date/time values stored in a datetime, datetime2, or smalldatetime column, you can use AT TIME ZONE to indicate the current value is UTC and to convert the value to the time zone of your choice:
SELECT YourUTCDateTimeColumn AT TIME ZONE 'UTC' AT TIME ZONE 'Pacific Standard Time' AS YourLocalDateTimeColumn
AT TIME ZONE returns a datetimeoffset data type. This can be cast back to the source type if datetimeoffset is problematic for your use case:
SELECT CAST(YourUTCDateTimeColumn AT TIME ZONE 'UTC' AT TIME ZONE 'Pacific Standard Time' AS datetime2(3)) AS YourLocalDateTimeColumn
SQL Server's local time zone is derived from the operating system. If you want to globally affect the time zone that SELECT GETDATE() uses, you can change the time zone of the server.
That, however, is a workaround for a much larger issue, and one that is not often recommended.
Since 2005, the best practice for handling date and time in .NET has been to use DateTimeOffset, and SQL Server gained the matching datetimeoffset type in SQL Server 2008. Storing times in DateTime is an anti-pattern that forces a lot of manipulation but, worse, a lot of assumptions: how do you know whether the value is local time or UTC, and if local, which time zone was used and whether daylight saving time was in effect?
DateTimeOffset provides correct sorting and time-difference calculations for values that may have been entered in different time zones; read more in my blog post: Why was DateTime removed from OData v4.
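A small example of the correct-sorting point, with values chosen purely for illustration:

```sql
-- Two datetimeoffset values entered in different time zones.
DECLARE @a datetimeoffset = '2024-06-01 10:00:00 -04:00';  -- 14:00 UTC
DECLARE @b datetimeoffset = '2024-06-01 09:30:00 -05:00';  -- 14:30 UTC

-- Comparison, sorting, and DATEDIFF all use the underlying UTC instant,
-- so @a correctly sorts before @b despite its later wall-clock time.
SELECT DATEDIFF(MINUTE, @a, @b) AS MinutesBetween;  -- 30
```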
If you are querying through an OData API, then you can implement the time zone conversion in the API logic, or you can manipulate the SQL queries directly, whether via middleware, custom serialization, or other injection techniques. Going into specifics, however, would require you to post your associated code.
The answer from @Dan Guzman shows some examples of using AT TIME ZONE, which was introduced in SQL Server 2016, directly in your SQL queries. You might also be interested in TODATETIMEOFFSET() or SWITCHOFFSET(), but implementing these functions still requires assumptions about the specific time zone to either be hardcoded into the API logic or passed through from the client.
It can be done, but when consuming data in Excel via an OData API from a SQL database, there are multiple points where the conversion logic can be implemented, and therefore multiple plausible solutions.
Don't forget, you could also apply this time zone logic as a transformation step inside Excel after the data has been retrieved. I hope this post inspires you to research a bit further and choose a specific pathway. Then, if you get stuck, please post a more focused question that details your specific attempt. We are here to help ;)
I do not want to pay any more money, so this is not a duplicate.
A request has come down to restart the numbering after 30 days.
I am using Azure SQL Server.
I have an integer field that is populated when a new record is created. In its previous usage, this number was an auto-increment integer, but according to the manager, it is confusing for users to see a number that keeps getting larger as they use the system.
So the manager wants the number to restart at 1 on the first of each month and increment from there until the next 1st, when it should start over at 1 again.
This resetting number is used as part of a concatenated string to generate a unique value, so I am not too worried about the same number repeating.
My question is this:
How in the world would this resetting be done? Azure does not really have an event scheduler 'thing' like a locally installed SQL Server does, and I only need it to run one time: if two records are entered at 12:02 AM, only the first record should reset the number; the second should build from there.
My first guess was to use an insert trigger on the table, but this would require a large effort. My second thought was to have a next-number table, but since this is a reactive scenario, the number cannot be reset until AFTER midnight, on the first request.
Any ideas would be greatly appreciated.
I've just done a lift and shift to Azure (mainly App Services and Azure SQL) where there were more than 30 SQL Agent and SSIS jobs.
The solution was to use Azure Functions with schedule triggers and a set of helpers (there is a small set of common operations that covers most of what was done, like sending an email, generating a CSV, or saving content to blob storage).
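Alternatively, the question's own "next number table" idea can be made reactive entirely in T-SQL, with no scheduler at all: the counter resets itself on the first request after a month boundary. A minimal sketch, with all object names hypothetical:

```sql
-- One row holds the current value and the month it belongs to.
CREATE TABLE dbo.MonthlyCounter (
    Id           int  NOT NULL CONSTRAINT PK_MonthlyCounter PRIMARY KEY,
    CurrentValue int  NOT NULL,
    PeriodStart  date NOT NULL  -- first day of the month the value belongs to
);

INSERT INTO dbo.MonthlyCounter (Id, CurrentValue, PeriodStart)
VALUES (1, 0, DATEFROMPARTS(YEAR(GETDATE()), MONTH(GETDATE()), 1));
GO

CREATE PROCEDURE dbo.GetNextMonthlyNumber
    @NextValue int OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @ThisMonth date = DATEFROMPARTS(YEAR(GETDATE()), MONTH(GETDATE()), 1);

    -- A single atomic UPDATE serializes concurrent callers, so if two
    -- records arrive at 12:02 AM only the first one resets the counter.
    UPDATE dbo.MonthlyCounter
    SET @NextValue = CurrentValue =
            CASE WHEN PeriodStart < @ThisMonth THEN 1 ELSE CurrentValue + 1 END,
        PeriodStart = @ThisMonth
    WHERE Id = 1;
END
```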
In an on-premises SQL Server database, I have a number of tables into which various sales data for a chain of stores is inserted during the day. I would like to "harvest" these data to Azure every 15 minutes or so via Data Factory and an on-premises data management gateway. Clearly, I am not interested in copying all table data every 15 minutes, but only in copying the rows that have been inserted since the last fetch.
As far as I can see, the documentation suggests using data "slices" for this purpose. However, these slices seem to require a timestamp column (e.g. a datetime) to exist on the tables that data is fetched from.
Can I perform a "delta" fetch (i.e. only fetch the rows inserted since last fetch) without having such a timestamp column? Could I use a sequential integer column instead? Or even have no incrementally increasing column at all?
Assume that the last slice fetched had a window from 08:15 to 08:30. Now, if the clock on the database server is a bit behind the Azure clock, it might add some rows with the timestamp set to 08:29 after that slice was fetched, and those rows will not be included when the next slice (08:30 to 08:45) is fetched. Is there a smart way to avoid this problem? Shifting the slice window a few minutes into the past would minimize the risk, but not totally eliminate it.
Take Azure Data Factory out of the equation. How do you arrange for transfer of deltas to a target system? I think you have a few options:
Add date created/changed columns to the source tables and write parameterised queries to pick up only new or modified values. ADF supports this scenario with time slices and system variables. Regarding an identity column, you could do that with a stored procedure (as per here) and a table tracking the last ID sent.
Enable Change Data Capture (CDC) on the source system. This will allow you to access deltas via the CDC functions. Wrap them in a proc and call it with the system variables, similar to the above example.
Always transfer all data, e.g. to staging tables on the target, and use delta code such as EXCEPT and MERGE to work out which records have changed; obviously not ideal for large volumes, but this would work for small volumes.
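A minimal sketch of that last option, assuming staging and target tables with the same shape (all names are hypothetical):

```sql
-- Rows in staging that differ from, or are missing in, the target:
SELECT Id, StoreId, Amount, SoldAt FROM dbo.SalesStaging
EXCEPT
SELECT Id, StoreId, Amount, SoldAt FROM dbo.SalesTarget;

-- Or apply the delta in one statement with MERGE:
MERGE dbo.SalesTarget AS t
USING dbo.SalesStaging AS s
      ON t.Id = s.Id
WHEN MATCHED AND (t.Amount <> s.Amount OR t.SoldAt <> s.SoldAt) THEN
    UPDATE SET Amount = s.Amount, SoldAt = s.SoldAt
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, StoreId, Amount, SoldAt)
    VALUES (s.Id, s.StoreId, s.Amount, s.SoldAt);
```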
HTH
We are planning to add this capability to ADF. It may start with a sequential integer column instead of a timestamp. Could you please let me know whether a sequential integer column would help?
By enabling Change Tracking on SQL Server, you can leverage SYS_CHANGE_VERSION to incrementally load data from an on-premises SQL Server or Azure SQL Database via Azure Data Factory.
https://learn.microsoft.com/en-us/azure/data-factory/tutorial-incremental-copy-change-tracking-feature-portal
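The tutorial above covers the ADF side; on the SQL side, the moving parts look roughly like this (database and table names are examples):

```sql
-- Enable Change Tracking on the database and on the source table.
ALTER DATABASE SalesDb
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.Sales ENABLE CHANGE_TRACKING;

-- Each incremental load fetches rows changed since the previously stored version...
DECLARE @LastVersion bigint = 0;  -- persisted from the previous run

SELECT s.*, ct.SYS_CHANGE_VERSION, ct.SYS_CHANGE_OPERATION
FROM CHANGETABLE(CHANGES dbo.Sales, @LastVersion) AS ct
LEFT JOIN dbo.Sales AS s ON s.Id = ct.Id;

-- ...and stores the current version as the watermark for the next run.
SELECT CHANGE_TRACKING_CURRENT_VERSION() AS NewWatermark;
```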
If you are using SQL Server 2016, see https://msdn.microsoft.com/en-us/library/mt631669.aspx#Enabling-system-versioning-on-a-new-table-for-data-audit. Otherwise, you can implement the same thing using triggers.
And use NTP to synchronize your server time.
We have a large database that receives records concerning several hundred thousand persons per year. For a multitude of reasons I won't get into, when information is entered into the system for a specific person, the individual entering the data will often be unable to verify whether that person is already in the database. Due to legal requirements, we have to strive toward each individual in our database having a unique identifier (and no individual having two or more). Because of data collection issues, it will often be the case that one individual is assigned many different unique identifiers.
We have various automated and manual processes that clean up the database on a set schedule and merge unique identifiers for persons who have had multiple assigned to them.
Where we're having problems is that we are also legally required to generate reports at year end. We have a set of year-end reports we always generate, but every year several dozen ad hoc reports are also requested by decision makers. Things get troublesome because, with the continuous merging of unique identifiers, our data is not static. Any reports generated at year end are based on the data as it existed on the last day of the year; three weeks later, if a decision maker requests a report, whatever we give them can (and will) often conflict directly with our legally required year-end reports. Sometimes we merge up to 30,000 identifiers in a month, which can greatly change the results of any query.
It is understood and accepted that our database is not static, but we are being asked to come up with a method for generating ad hoc reports based on a static snapshot of the database, so that a report requested on 1/25 will be based on the exact same dataset as our year-end reports.
After doing some research, I'm familiar with database snapshots, but we have a SQL Server 2000 database with little ability to get that changed in the short-to-medium term, and database snapshots are a new feature of the 2005 edition. So my question is: what is the best way to create a queryable snapshot of a database in SQL Server 2000?
Can you simply take a backup of the database on 12/31 and restore it under a different name?
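On SQL Server 2000 that would look something like the following; the paths, database names, and logical file names are examples, and the logical names must match what sp_helpfile reports for the source database:

```sql
-- Capture the year-end state.
BACKUP DATABASE Prod
TO DISK = 'D:\Backups\Prod_YearEnd.bak';

-- Restore it side by side under a different name.
RESTORE DATABASE Prod_YearEnd
FROM DISK = 'D:\Backups\Prod_YearEnd.bak'
WITH MOVE 'Prod_Data' TO 'D:\Data\Prod_YearEnd.mdf',
     MOVE 'Prod_Log'  TO 'D:\Data\Prod_YearEnd.ldf';
```

The restored copy is fully queryable and frozen at the moment of the backup.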
You either need to take a snapshot and work off it (in another DB or an external file-based system, like Access or Excel) or, if there's enough date information stored, work from your live copy, using the date values to distinguish previously reported data from new.
You're better off working from a snapshot because the date approach won't always work. Ideally, you'd export your live database at the end of the year somewhere (anywhere, really) else.
I'm looking for a way in SQL Server, without using the .NET Framework, to find out the time in a given time zone, paying attention to daylight saving time. However, this method also needs to account for states (e.g. Arizona) that do NOT observe DST.
There is a workaround if the server is located somewhere that does observe DST: subtract your current time from GMT to get the offset. But I am looking for a more general-purpose solution. I know the .NET Framework has ways to do this, but it would be great to avoid that kind of dependency within this stored procedure.
Thanks!
You could do this in a CLR stored procedure. That still runs as a process within SQL Server, and the dependency is on the server itself, not on any external .NET app that you would have to write.
http://msdn.microsoft.com/en-us/library/ms190790.aspx
I typically consider local-time display to be a view concern and therefore put it in the web/desktop application. In the database, I work entirely in UTC: all of the datetime columns in the DB have UTC stored in them, all of the sprocs for reporting and such take UTC, and so on. The web/desktop app converts any dates going to or from the user into whatever time zone they have selected.
I know that doesn't really answer your question, but maybe it gives you a different approach?
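A sketch of the storage side of that pattern; the final conversion can live in the app, or, on SQL Server 2016 and later (which postdates the original question), in the query itself via AT TIME ZONE. Names here are hypothetical:

```sql
-- Store UTC only.
CREATE TABLE dbo.Events (
    Id            int IDENTITY PRIMARY KEY,
    OccurredAtUtc datetime2(3) NOT NULL DEFAULT SYSUTCDATETIME()
);

-- Convert at the edge. 'US Mountain Standard Time' is the Windows
-- zone covering Arizona and never applies DST, handling that edge case.
SELECT OccurredAtUtc AT TIME ZONE 'UTC'
                     AT TIME ZONE 'US Mountain Standard Time' AS OccurredAtArizona
FROM dbo.Events;
```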