Fixing an old ntp server with no leap second - ntp

I have to take care of an old stratum-1 (GPS source) NTP server. It runs FreeBSD 8.0 with ntp-4.2.6p3 and has an uptime of 1800+ days. It provides time to many other servers in the datacenter.
Comparing this server's time with public NTP servers, it is 1 second ahead (the reported offset is -1 second). The ntp.conf has no leapfile reference, so I think this server has not taken the leap seconds into account.
The question is, can it be fixed? Is there a way to add the leap second now? I have tried adding an updated leap-seconds file to the configuration and restarting the service, but no luck; the offset is always the same (-1 second).
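For reference, this is a minimal sketch of the leapfile directive in ntp.conf (the refclock line and file path are assumptions; adjust them to the actual GPS driver and wherever the leap-seconds file lives on the server):

```
# /etc/ntp.conf (fragment, hypothetical paths)
server 127.127.20.0 mode 0              # example GPS refclock; yours may differ
leapfile "/etc/ntp/leap-seconds.list"   # published leap-seconds list
```

After restarting ntpd, `ntpq -c "rv 0 leap,tai"` should report a TAI offset if the file was read. Note that the leap file only tells ntpd about scheduled leap seconds; whether it corrects an already-accumulated 1-second error depends on the refclock driver, which may be why the offset persists after adding the file.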

Related

time difference between two machines makes problem in service broker communication

I have two machines, each with a SQL instance, and I run Service Broker between them. One machine's clock differs from the other's: machine A reads 9:00 o'clock while the other reads 11:00 o'clock. Because of this difference, messages cannot be received from the other machine; once I sync both clocks, the messages are received.
So my question is: how can I configure Service Broker to tolerate the time difference?
You can't. The security protocols involved dictate a limit on the clock skew between machines.
If the two machines really are in different time zones, use time zone settings to make them read 9:00 vs. 11:00 while keeping the underlying clocks in sync.
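A small sketch of the point above: two machines whose wall clocks read 9:00 and 11:00 can still agree on the underlying instant if the difference comes from time zones rather than real clock skew (the zone offsets here are made up for illustration):

```python
from datetime import datetime, timezone, timedelta

tz_a = timezone(timedelta(hours=2))   # machine A's zone (assumed)
tz_b = timezone(timedelta(hours=4))   # machine B's zone (assumed)

wall_a = datetime(2024, 1, 15, 9, 0, tzinfo=tz_a)   # reads 09:00 locally
wall_b = datetime(2024, 1, 15, 11, 0, tzinfo=tz_b)  # reads 11:00 locally

# Compare the two wall-clock readings on a common (UTC) scale.
skew = (wall_a.astimezone(timezone.utc)
        - wall_b.astimezone(timezone.utc)).total_seconds()
print(skew)  # 0.0 -- no real skew, so the security protocols are satisfied
```

This is why syncing the clocks fixed the messages: the protocols compare the underlying UTC instants, not the local display times.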

Cube Processing fails only in parallel mode

I have a somewhat reproducible error that I am no longer willing to bear, so I hope some of you might know a better workaround.
I have several large cubes (around 10-50 GB) that I process daily. Processing only the partitions (about 20% of them) takes about 1 hour when I use an XMLA script, i.e. processing dimensions and measures in parallel mode.
This works in only 2 out of 5 runs.
So I have a procedure that detects the crash and, if it happened, starts serial processing instead, which runs about 2-5 times slower but at least works every time.
The error codes are very generic and not very helpful:
Operation canceled; HY008
Communication link failure; 08S01; TCP Provider: An existing connection was forcibly closed by the remote host.
And since it works every time in serial mode, I know there are no errors in principle.
I am using
Microsoft SQL Server 2014 - 12.0.2000.8 (X64)
Enterprise Edition: Windows NT 6.3 (Build 9600:)
Please share any ideas on how to solve or work around this.
I am thankful for every new insight or idea you might have about this behaviour.
It looks like a concurrency issue and/or heavy SQL Server workload (e.g. caused by the effectively unlimited number of threads the SSAS server spawns). I suggest setting maximum values for the relevant parameters:
ThreadPool\Process\MaxThreads = 4*cores
Data source maximum number of connections = 2*cores-1 (based on my own practice). You can adjust this right before and after the processing task if you need a large number of connections outside of processing time.
And maybe experiment with affinity masks, but tuning the previous parameters should be enough.
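The rule of thumb above can be sketched as a tiny calculator (the function name is mine, not an official SSAS property; the formulas are the ones stated above):

```python
def ssas_thread_settings(cores):
    """Rule of thumb from the answer above: cap processing threads at
    4x cores and the data source's maximum connections at 2x cores - 1."""
    max_threads = 4 * cores
    max_connections = 2 * cores - 1
    return max_threads, max_connections

print(ssas_thread_settings(8))  # (32, 15) for an 8-core server
```

You would then plug the resulting numbers into the SSAS server properties and the data source settings, respectively.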
This article
http://phoebix.com/2014/07/01/what-you-need-to-know-about-ssas-processor-affinity/
and this book
http://msdn.microsoft.com/en-us/library/hh226085.aspx
describe the whole technique in detail.
UPDATE:
There is also a small possibility of wrong TCP settings, described here: http://blogs.msdn.com/b/cindygross/archive/2009/10/22/sql-server-and-tcp-chimney.aspx
But this may cause fails even in serial processing.

connection timeout

My method executes lots of asynchronous SQL requests, and I constantly get connection timeout exceptions. What else can I do besides increasing the connection timeout value and proper indexing? I mean on the database side, not the code side; I can't change the code. Besides, the application runs fine on other servers; only I experience these timeout exceptions on my PC with a local MS SQL Server 2008 R2 database (which is also on the same PC). So I think this is clearly a performance issue, since the connection timeout is already set to 3 minutes. Maybe there is something I can change on the server? Maybe there is a constraint on the number of simultaneous requests? Each of my requests needs clearly less than 3 minutes, but there are about 26,000 of them running asynchronously, and only I experience these problems on my local PC and local DB.
I've run Process Monitor, and I see that when my code starts, SQL Server eventually consumes 200 MB of RAM and takes up about half of the CPU time. But I still have 1 GB of RAM free, so this is not a memory problem.
I think the number of connections may be the cause. Make sure you close each connection properly, or try to reduce the number of them. You could also use named pipes, which avoid some limitations of ordinary TCP connections.
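One way to reduce the connection pressure, sketched here with asyncio (a hypothetical stand-in, since the asker's code is off limits): instead of firing all 26,000 requests at once, bound the number of in-flight requests with a semaphore so the server and its connection pool are never overwhelmed.

```python
import asyncio

async def run_bounded(tasks, limit=16):
    """Run the given coroutines with at most `limit` in flight at once."""
    sem = asyncio.Semaphore(limit)

    async def guarded(coro):
        async with sem:
            return await coro

    return await asyncio.gather(*(guarded(t) for t in tasks))

async def fake_query(i):
    await asyncio.sleep(0)   # stands in for an actual SQL round trip
    return i * 2

results = asyncio.run(run_bounded([fake_query(i) for i in range(100)]))
print(results[:3])  # [0, 2, 4]
```

With a bound like this, each request still finishes well under the timeout, because it is no longer queued behind thousands of competing connections.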

SQL Server Reporting Services - Fast TimeDataRetrieval - Long TimeProcessing

An application that I support has recently begun experiencing extended periods of time required to execute a report in SQL Server Reporting Services. The reports that are being executed are not terribly complex. There are multiple stored procedures (between 5 and 8) which return anywhere from a handful to 8000 records total. Reports are generally from 2 to 100 pages. One can argue (and I have) the benefit of a 100 page report, but the client is footing the bill.
At any rate, the problem is that even a report with 500 records (11 pages) takes 5 minutes to return to the browser. In the execution log, TimeDataRetrieval is 60 seconds, but TimeProcessing is 235 seconds. It seems bizarre to me that my query runs so quickly, yet Reporting Services takes so long to process the data.
Any suggestions are greatly appreciated.
Kind Regards,
Bernie
Forgot to post an update to this. I found the problem. It was associated with an image with an external source on the report. Recently the report server was disallowed internet access. So when Reporting Services processed the report, it tried to do an HTTP GET to retrieve the image. Since the server was disallowed outbound internet access, the request would eventually time out with a 301 error. Unfortunately this timeout period was very long, and I suspect it happened for each page of the report, because the longer the report, the longer the processing time. At any rate, I was not able to get outbound internet access reopened on the server, so I took a different path. Since the web server where the image was hosted and the reporting server were on the same local network, I was able to modify the HOSTS file on the reporting server with the image host's domain and local IP address. For example:
www.someplacewheremyimageis.com/images/myimage.gif
The reporting server would try to resolve this via its local DNS and no doubt get the external IP address X.X.X.X,
so I modified the HOSTS file on the report server by adding the following line:
192.168.X.X www.someplacewheremyimageis.com
So now when reporting services tries to generate the report it resolves to the above internal IP address and includes the image in the report.
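The HOSTS-file trick above can be illustrated with a small sketch of how such an override is consulted before DNS (the IP and hostname below are illustrative, not the asker's real values):

```python
def hosts_lookup(lines, hostname):
    """Return the IP a hostname resolves to via hosts-file lines,
    or None if it would fall through to normal DNS."""
    for line in lines:
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        ip, *names = line.split()
        if hostname in names:
            return ip
    return None

hosts = ["# local overrides",
         "192.168.0.10  www.someplacewheremyimageis.com"]
print(hosts_lookup(hosts, "www.someplacewheremyimageis.com"))  # 192.168.0.10
```

Because the hosts file wins over DNS, the report server never attempts the blocked external route, so the per-page HTTP timeout disappears.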
The reports are now running snappier than ever.
It's these kinds of problems, the ones you figure out in a flash of inspiration at 4:30 am after hours of beating your head against your keyboard, that make it both wonderful and terrible to be a software developer.
Hope this helps someone.
Thanks,
Bernie

SQL 2008 CALs - how are they enforced?

I am running a server that needs a very limited number of user connections (a max of 5 at a time). So the 5-CAL edition of SQL 2008 Workgroup seems perfect for me. Now, what I need to know is the following: do I actually need to physically install a CAL on each machine/user that will use the server (by "use" I mean read and write), or does the 5-CAL licence simply mean that only 5 machines/users can log on at any one time?
I.e. can 6 or 7 different people use this as long as only 5 of them are connecting simultaneously?
Thanks
Karl
It's all here: SQL Server 2008 Licensing Frequently Asked Questions
However, it is not the easiest to follow.
Basically, a CAL is per user or per device. The difference is explained under "What is the difference between device client access licenses (CALs) and user CALs?" in the link above.
Whichever CAL type you use, it does not matter whether only 5 clients connect at once, or whether one client has 200 connections open.
It is per user or per PC (User CAL or Device CAL), not per connection. In the olden days (SQL 7?) you could license per connection, but it has been some time since I looked at this.
Edit, to actually answer the question: they are not strictly enforced, in the sense that the SQL Server instance does not count them, but exceeding your CAL count means you'll be using the software illegally.
It depends on what licensing model you go for:
Per CPU licensing - no CALs required, but you pay up front for unlimited user and device access. You are limited to the number of physical CPUs (not cores) you can have in the SQL Server box.
Per device CAL - you can assign a CAL to a physical device, and anyone accessing the SQL Server from that device is licensed. So 10 people accessing the SQL Server from 1 shared computer requires you to have 1 device CAL.
Per user CAL - you can assign a CAL to a person, and each person accessing the SQL Server needs one. So 10 people accessing the SQL Server from 1 shared computer requires you to have 10 user CALs, but 1 person accessing the SQL Server from 10 computers only requires you to have 1 user CAL.
CALs are handled logically - there is no hard limit, just don't exceed the number of licenses you have.
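The arithmetic above can be summed up in a tiny helper (a hypothetical illustration paraphrasing the licensing FAQ; this is not legal advice):

```python
def cals_needed(model, users, devices):
    """How many CALs the scenarios above require under each model."""
    if model == "per-cpu":
        return 0          # unlimited access; you license the CPUs instead
    if model == "device":
        return devices    # one CAL per accessing device
    if model == "user":
        return users      # one CAL per accessing person
    raise ValueError(model)

print(cals_needed("device", users=10, devices=1))  # 1
print(cals_needed("user", users=10, devices=1))    # 10
print(cals_needed("user", users=1, devices=10))    # 1
```

So for the original question (6-7 people, never more than 5 at once), the user-CAL model would still require a CAL per person, regardless of concurrency.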
http://www.microsoft.com/Sqlserver/2005/en/us/pricing-licensing-faq.aspx
http://www.microsoft.com/sqlserver/2008/en/us/licensing-faq.aspx