I am relatively new to Grafana, and as a visualization tool it's been fantastic. The major issue I have is that some fairly legacy systems log based on the server's time (which is always Eastern time). From what I have noticed, Grafana always expects the data source time to be UTC.
Is there a way to set the timezone of a Microsoft SQL Server data source to Eastern time? Or is there a plugin that could do the conversion? Or is my only option code like this in every query:
dateadd(hour, datediff(hour, getdate(), getutcdate()), l.date) as date,
There is no way to apply a specific timezone/offset per data source in Grafana.
In Grafana the timezone can be managed as a server/org/user setting; by default it uses your web browser's timezone, which I think works fine in most use cases.
Your workaround works fine, and the only other option I'm aware of is AT TIME ZONE, which still doesn't solve the fact that you will have to put it more or less everywhere.
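For reference, a sketch of the AT TIME ZONE approach (requires SQL Server 2016+; the table and column names here are illustrative, matching the snippet in the question):

```sql
-- The first AT TIME ZONE attaches the known source zone (DST-aware),
-- the second converts the result to UTC for Grafana.
SELECT
    l.date AT TIME ZONE 'Eastern Standard Time' AT TIME ZONE 'UTC' AS date_utc,
    l.value
FROM logs AS l;
```

Note that 'Eastern Standard Time' is the Windows time zone ID; it handles the EST/EDT daylight-saving switch automatically, unlike a fixed-hour offset.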
The only way I see to centralize the timezone conversions is to create a view (or maybe a function) that handles the conversion, thereby standardizing the timezone of those legacy systems.
If possible I'd go for the view as it is the most transparent solution I can think of.
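A minimal sketch of what that view could look like, assuming a hypothetical dbo.logs table and SQL Server 2016+:

```sql
-- Centralize the Eastern-to-UTC conversion in one place;
-- Grafana queries then target dbo.logs_utc instead of dbo.logs.
CREATE VIEW dbo.logs_utc AS
SELECT
    l.id,
    l.value,
    l.date AT TIME ZONE 'Eastern Standard Time' AT TIME ZONE 'UTC' AS date_utc
FROM dbo.logs AS l;
```

Every dashboard query then reads from the view, so the conversion logic lives in exactly one place.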
Related
I have a table in my database which stores time zone information with country code.
Now I want to update this table whenever Daylight saving time is applied for that country.
Any Free API available for this?
Or Any other kind of solution anyone knows.
Since the question has been updated to reflect SQL Server rather than MySQL, I'll point at this answer, which describes in part how to use either the AT TIME ZONE syntax or my SQL Server Time Zone Support project.
Original answer below, when question was tagged as mysql
Don't keep your own tables. MySQL has time zone support built in via the CONVERT_TZ function. Read here about how to keep MySQL's time zone data updated.
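A quick illustration of CONVERT_TZ (this assumes the MySQL time zone tables have been loaded, e.g. with mysql_tzinfo_to_sql; otherwise it returns NULL):

```sql
-- Convert a local New York time to UTC; the named zone handles DST,
-- so a June date correctly uses the EDT (UTC-4) offset.
SELECT CONVERT_TZ('2012-06-07 12:00:00', 'America/New_York', 'UTC');
```

Because you pass a named zone rather than a fixed offset, there is nothing for you to update when DST starts or ends.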
You do not need to update the tables for every regular DST change. You only need to update them if a government changes the rules for DST or standard time. The TZ Database tracks these sorts of changes and makes them available for operating systems, programming languages, databases, etc.
You should also read the timezone tag wiki. Especially the section "Time Zone != Offset". If you have a custom table that you feel like you need to update because of DST, then likely you have a table of offsets, not a table of time zones. This is a bad design, and you should just use the MySQL built-in stuff.
What are the scenarios in which I should use timestamps? I know what a timestamp is, but I'd like to hear how it's helped folks in real-life projects.
Like @Paul McCowat, we also used timestamp for concurrency handling long ago. With our switch to NHibernate (an ORM), the trend is to use a simpler version number rather than a timestamp. The Rails framework uses version numbers instead of timestamps for concurrency as well. We've pulled the timestamps from our database structure as we've migrated to newer ORMs.
Before I used an ORM (LINQ to SQL or Entity Framework in my case), I read an interesting article by Imar Spaanjaars about n-tier design in ASP.NET using ADO.NET. In that example, to handle concurrency, the timestamp column was read when a record was returned (in, say, a GridView) and then checked against the database before an edit was made. By comparing the timestamp in memory with the one in the database, you can see whether a change was made between retrieving the record and submitting the edit, and report that back to the user.
So in summary, I have used the timestamp column to handle concurrency.
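The pattern above can be sketched in T-SQL using a rowversion column (the table and parameter names are illustrative; rowversion is the current name for the older timestamp type):

```sql
-- The RowVer column is maintained automatically by SQL Server
-- and changes on every write to the row.
CREATE TABLE dbo.Orders (
    OrderId INT IDENTITY PRIMARY KEY,
    Status  NVARCHAR(20) NOT NULL,
    RowVer  ROWVERSION
);

-- Optimistic update: succeed only if the row is unchanged since it was read.
-- @OriginalRowVer is the RowVer value fetched along with the record.
UPDATE dbo.Orders
SET    Status  = @NewStatus
WHERE  OrderId = @OrderId
  AND  RowVer  = @OriginalRowVer;

-- If @@ROWCOUNT = 0, someone else modified the row in the meantime,
-- and the application should report a concurrency conflict.
```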
How is this problem generally solved?
To give some context: I have a single database and multiple processes connected to it (a mobile API, a custom content management tool and a custom front-end website), all running on different servers. Sometimes it is useful to get delta changes from the database. For instance, say I update the database with new data using the content management tool (which then sets some metadata on the data that was changed, specifically its updated_at field), and all my mobile apps need a local copy of the database to work. It wouldn't make much sense to re-download the whole DB, so it's useful for the app to send its local last_updated timestamp and retrieve the subset of rows that were updated since then. But the mobile API server's last_updated and the database server's updated_at are not calculated from the same time source; each server has its own clock.
Is using a time based "freshness indicator" just a dirty mess or is there a robust and efficient way to do it? Or is there a better approach? I'm thinking something like incrementing a version number. What's the best practice for this kind of thing?
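To illustrate the version-number idea from the question: one common way to sidestep clock skew entirely is to let the database generate the "freshness" value itself, e.g. with a SQL Server rowversion column (all names here are hypothetical):

```sql
-- The client sends the highest RowVer it has already synced;
-- the database-wide rowversion counter is monotonic, so no clocks are involved.
SELECT c.Id, c.Payload, c.RowVer
FROM   dbo.Content AS c
WHERE  c.RowVer > @LastSyncedRowVer;

-- After applying the rows, the client stores MAX(RowVer) from the
-- result set as its new sync marker for the next request.
```

This is only a sketch; a production sync protocol also needs to handle deletions (e.g. soft-delete flags) so removed rows are propagated too.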
A) Is it possible to have Grandfather-Father-Son archiving? For example we would like to have the following precalculated at all times, and nothing else precalculated:
Daily totals of last week
Weekly totals of previous 5-6 weeks
Monthly totals of all previous months
Note that we don't want daily totals of a day that was 2 months ago for example. We want that daily total to be deleted.
Will indexed views be good enough for this purpose? We want all the fields to be precalculated and stored.
B) We would like to have features like Stack Exchange's (and generally wikis') versioning. Are there ways to archive the older versions somehow on the production environment, and keep the newer versions of stuff more readily available? We have been looking into partitioning, but it doesn't seem to handle such an intricate scenario (we don't want ALL posts prior to date X to be partitioned; rather, we need all versions that are older than the newest version).
What are the best practices on these matters?
A: You are describing the storage of time-series data with a varying retention period/granularity, a feature SQL Server does not provide in any form natively. You can of course build this strategy yourself fairly easily, and luckily you have a few great open source projects to use for guidance:
RRD Tool - time series database and graphing library.
Graphite - a time-series database and graphing system inspired by RRD Tool.
B: It's hard to speak in such general terms, but perhaps you can maintain a Version column on your table, where 0 always reflects the latest version. Each time you promote a new version, you demote all the other versions by incrementing this value. Having versions increase as they get older allows you to create a filtered index on a deterministic value (Version = 0), making it very performant to get the current version.
You can then purge based on the number of versions ago rather than by date: just DELETE FROM yourTable WHERE Version > 5. A partitioning scheme could work with this method as well.
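A sketch of that promote/demote scheme in T-SQL (table and column names are illustrative, assuming a Posts table keyed by PostId plus Version):

```sql
-- Promoting a new version: demote existing versions, then insert the new
-- row as Version = 0, all in one transaction.
BEGIN TRANSACTION;

UPDATE dbo.Posts
SET    Version = Version + 1
WHERE  PostId = @PostId;

INSERT INTO dbo.Posts (PostId, Version, Body)
VALUES (@PostId, 0, @NewBody);

COMMIT;

-- Filtered index: fetching the current version only ever touches Version = 0 rows.
CREATE NONCLUSTERED INDEX IX_Posts_Current
    ON dbo.Posts (PostId)
    WHERE Version = 0;

-- Retention by version count rather than by date.
DELETE FROM dbo.Posts WHERE Version > 5;
```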
Is there a new or different product that people are using?
Are there no new features people can think of?
Is it not being used by many people?
Or, has Microsoft just decided not to invest any more resources into it?
I am trying to assess whether this is still a good enough tool to use, even though it appears it is no longer being supported or developed by Microsoft.
The basic log file format has remained the same through the last four(?) versions of IIS, so I'd say it's probably just the case that nothing else needs to be added.
I would recommend it.
We use it at a client site to parse 50 logs from 30 applications on an hourly basis. It's fast enough, and with the ability to parse custom logs (we parse custom HTTP logs), there is little limitation.
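For anyone who hasn't used it, Log Parser takes SQL-like queries over log files; here is a small example against IIS W3C logs (the log path is hypothetical, and the query is run via LogParser.exe with -i:IISW3C):

```sql
-- Top 10 most-requested URLs across all logs for the site.
SELECT TOP 10 cs-uri-stem, COUNT(*) AS Hits
FROM   C:\inetpub\logs\LogFiles\W3SVC1\*.log
GROUP BY cs-uri-stem
ORDER BY Hits DESC
```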
We are looking into Splunk now though for more real time analysis.