SQL query to get system current time - sql-server

Let's say Machine A is the app server where I am running the SQL query.
Machine B is where SQL Server is set up. They are in different time zones, e.g. Machine A is in IST and Machine B is in CST.
Now I want to run a SQL query from Machine A and get Machine A's current time rather than the SQL Server's current time.
For example, I tried all these functions, but they all return Machine B's (the SQL Server's) current time:
SELECT SYSDATETIME()
,SYSDATETIMEOFFSET()
,CURRENT_TIMESTAMP
,GETDATE()
I can fetch Machine A's current time in Java and pass it to the SQL query, but is there any way to fetch it with the SQL query itself?

No. And it makes sense, because your SQL command is sent to the server as a string and executed there - do you expect SQL Server to then magically reach back to your machine and run code on it?
The general approach is that all local processing (app server level) is finished by the time the SQL gets sent to the server (as a string).
Also, good coding standards would ask why you care (outside of clocks not being in sync): all application-internal timestamps should be in UTC anyway (regardless of the server's time zone), and time-zone conversion is only done in UI-level code.
So, no - what you want cannot be done, and in most applications it is not needed, as they handle the problem outside the database.
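A minimal sketch of the UTC approach described above; SYSUTCDATETIME() and GETUTCDATE() are standard functions, while the parameter name below is only an illustration:

SELECT SYSUTCDATETIME();  -- same instant on Machine A and Machine B, regardless of their time zones

-- If the query really needs Machine A's wall-clock time, pass it in from the application,
-- e.g. as a parameter @appServerTime (hypothetical) populated by the Java code:
-- SELECT ... WHERE CreatedAt <= @appServerTime;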

You have to use the DateTime class tools of other languages.
You don't query the current time of your own machine from a database, because it is not stored data, nor something storable.
There is no solution if you do it in SQL; you really have to do it in a language like Java or C#, because the program runs on the machine itself and therefore has access to its clock.

Related

Queries slow when run by specific Windows account

Running SQL Server 2014 Express on our domain. We use Windows Authentication to log on. All queries are performed in stored procedures.
Now, the system runs fine for all our users - except one. When he logs on (using our software), all queries take around 10 times longer (e.g. 30 ms instead of 2 ms). The queries are identical, the database is the same, the network speed is the same, the operating system is the same, the SQL Server drivers are the same, connection pooling is the same, DNS is the same. Changing computers does not help. The problem seems to be linked to the account being used.
What on Earth may be the cause for this huge performance hit?
Please advise!
I would try rebuilding the SP (by running an ALTER statement that duplicates its existing structure) to force SQL Server to recompile it. I don't know every way SQL Server caches things, but it can definitely create distinct execution plans for different types of connections, so I wouldn't be surprised if your slow user is running a version with an inefficient execution plan.
http://www.sommarskog.se/query-plan-mysteries.html
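A minimal sketch of forcing a fresh plan, assuming the procedure is called dbo.GetOrders (a made-up name); sp_recompile marks it so a new plan is compiled on the next execution:

EXEC sp_recompile N'dbo.GetOrders';  -- invalidate the cached plan(s) for this procedure

-- To compare what each user gets, you could also run it once with a throwaway plan:
-- EXEC dbo.GetOrders @CustomerId = 42 WITH RECOMPILE;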

Different users' Access front end generating different SQL for the backend

Here's my situation:
Both users are using the same Microsoft Access front end.
Both users have the same version of the SQL Server Native Client 10.0 ODBC driver.
Both users have the same version of Microsoft Office 2007 installed.
Both users are opening the same report with the same parameter values.
One takes 2 minutes, the other takes an hour.
The issue travels with the user, it happens regardless of which PC they log into on the network.
I checked the SQL passed to SQL Server and it is different. The one that runs quickly is standard T-SQL. The one that hangs is sending a parameterized query that has a terrible execution plan. I can't figure out what is causing it to send one query for one user and a different query for another user.
Since the issue travels with the user regardless of which PC they are on, this makes me think it is some sort of obscure group policy or Active Directory setting that is part of the user's account and is replicated from PC to PC. That probably means it's in the registry somewhere, but I am having a difficult time tracking down what that registry key is. Does anyone have any ideas?
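For reference, one rough way to see what text each user's front end actually sends is to query the DMVs while the report runs; a sketch, with placeholder account names:

SELECT s.login_name, r.start_time, t.text
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s ON s.session_id = r.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE s.login_name IN (N'DOMAIN\FastUser', N'DOMAIN\SlowUser');  -- placeholder logins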

What mechanism should be used for asynchronous communication between two SQL Servers in this case?

We use a central SQL Server (2008 Standard edition) and several smaller, dedicated SQL Servers (Express editions). We need to implement some mechanism for transferring data asynchronously from the dedicated, decentralized SQL Servers to the central one (bigger volume, see below) and back from the central SQL Server (a few records, basically some notifications for the machines and possibly some optimization hints).
The dedicated SQL Servers are physically located near the technology machines, and they collect, say, (datetime, temperature) rows at regular intervals (think a few seconds between samples). There are about 500 records for one job, but the next job follows immediately (the machine does not know it is a new job - being quite dumb in that sense - and simply collects the temperatures on and on).
The technology machines must be able to work without the central SQL Server, and the central SQL Server must also work when a machine is not accessible (i.e. its dedicated SQL engine cannot be reached, switched off together with the machine). In other words, the solution need not be super fast, but it must be robust in the sense that no collected data is lost.
The basic idea is to move the collected data from the dedicated SQL Server (preprocessed to a normalized format with the ID of the machine) to a well-known table on the central SQL Server. Only the newer data should be sent, to minimize the amount of data. That transfer should be started by the dedicated SQL Server at regular intervals (say one hour) if the connection is OK. If the connection is not OK, the data will be sent after the next hour, etc.
Another well-known table on the central SQL Server will be used to send notifications to the dedicated SQL Server engines. This way the dedicated engine can be told (for example) what data was already processed/archived on the central SQL Server (i.e. a hint about which records may already be deleted from the local database on the dedicated machine), or whatever other information is hinted from the central side (just hints, nothing with real-time requirements). The hints will be collected by the dedicated SQL Server (i.e. also the machine's responsibility). In other words, the central SQL Server only processes the well-known, local tables. It does not try to connect to the dedicated SQL Server machines.
The solution should use only the standard mechanisms -- SQL commands (via stored procedures), no external software. What kind of solution should I focus on?
Thanks,
Petr
[Edited later] The SQL Servers are on the same Local Area Network.
If you are willing to make a mental switch and stop thinking in terms of tables and rows, and instead think in terms of data and messages, then Service Broker can handle all the communication, delivery and message processing. Instead of locally (on the Express machines) doing INSERT INTO LocalTable(datetime, temperature) VALUES (...) you think in terms of:
BEGIN CONVERSATION WITH CentralServer ...;
SEND ON conversation MESSAGE TYPE [Measurement] (<datetime...><temperature ...>)
See Using Service Broker instead of Replication or High Volume Contiguous Real Time ETL
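To make the idea concrete, here is a rough sketch of the sending side; the service, contract and message type names are purely illustrative, and the Service Broker setup (queues, services, routes on both servers) is omitted:

DECLARE @handle UNIQUEIDENTIFIER, @msg XML;

BEGIN DIALOG CONVERSATION @handle
    FROM SERVICE [//Plant/MeasurementSender]        -- local service on the Express machine
    TO SERVICE '//Central/MeasurementCollector'     -- service on the central server
    ON CONTRACT [//Plant/MeasurementContract]
    WITH ENCRYPTION = OFF;

SET @msg = (SELECT GETDATE() AS [datetime], 21.5 AS [temperature]
            FOR XML RAW('Measurement'), TYPE);

SEND ON CONVERSATION @handle MESSAGE TYPE [//Plant/Measurement] (@msg);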
Sounds like a job for merge replication.

Advice needed: cold backup for SQL Server 2008 Express?

What are my options for achieving a cold backup server for a SQL Server Express instance running a single database?
I have an SQL Server 2008 Express instance in production that currently represents a single point of failure for my application. I have a second physical box sitting at the installation that is currently doing nothing. I want to somehow replicate my database in near real time (a little bit of data loss is acceptable) to the second box. The database is very small and resources are utilized very lightly.
In the case that the production server dies, I would manually reconfigure my application to point to the backup server instead.
Although Express doesn't support log shipping, I am thinking that I could manually script a poor man's version of it, where I use batch files to take transaction log backups, copy them across the network, and apply them to the second server at 5-minute intervals.
Does anyone have any advice on whether this is technically achievable, or if there is a better way to do what I am trying to do?
Note that I want to avoid having to pay for the full version of SQL Server and configure mirroring, as I think it is overkill for this application. I understand that other DB platforms may present suitable options (e.g. a MySQL Cluster), but for the purposes of this discussion, let's assume we have to stick to SQL Server.
I would also advise a script-based log shipping. After all, this is how log shipping started. All you need is a time-based agent to schedule the scripts (i.e. Task Scheduler) and a smart(er) file copy (robocopy).
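A simplified sketch of what those scripts could do, with made-up database and path names; real scripts would use timestamped backup file names and apply them in order:

-- On the primary, scheduled via Task Scheduler (e.g. through sqlcmd):
BACKUP LOG MyDb TO DISK = N'C:\LogShip\MyDb_log.trn' WITH INIT;

-- Batch-file copy step to the standby box:
-- robocopy C:\LogShip \\standby\LogShip MyDb_log.trn

-- On the standby, after an initial full restore WITH NORECOVERY:
RESTORE LOG MyDb FROM DISK = N'C:\LogShip\MyDb_log.trn' WITH NORECOVERY;

-- At failover time, bring the standby database online:
-- RESTORE DATABASE MyDb WITH RECOVERY;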

How much is the network - determining network overhead in SQL Server

We have a dev server running C# and talking to SQL server on the same machine.
We have another server running the same code and talking to SQL server on another machine.
A job does 60,000 reads (that is, it calls a stored procedure 60,000 times; each read returns one row).
The job runs in 1/40th of the time on the first server compared to it running on the second server.
We're already looking at the 'internal' differences between the two SQL Servers (fragmentation, tempdb, memory, etc.), but what's a good way to determine how much slower the second config is simply because it has to go over the network?
[Rather confusingly, I found a 'SQL Server Ping' tool, but it doesn't actually attempt any timing measurement, which, as far as I can see, is what we need.]
Open SQL Server Management Studio on the remote machine. Start a new query. Click Query, Include Client Statistics. Run your stored procedure. In the Client Statistics tab of the results, you'll see some basic information about how many packets were sent back and forth over the network. My guess is that for one read, you're not going to see that much overhead.
To get a better idea, I'd try doing a plain select of 60,000 records (since you said it's returning 60,000 records one by one) over the network from your remote machine. Again, that doesn't give you an idea of the stored procedure overhead, but it'll give you a quick seat-of-the-pants idea of the network speed between machines.
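As a rough illustration of that seat-of-the-pants test (the table and column names are placeholders), run this once locally and once from the remote machine and compare the elapsed times:

SET STATISTICS TIME ON;   -- reports CPU time vs. elapsed time for the statement
SELECT TOP (60000) ReadingTime, Temperature
FROM dbo.Readings;        -- placeholder table standing in for the 60,000-row result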
SQL Server ships with the Profiler utility. This will tell you what the execution time of your query is on each of your SQL Server instances. Note any discrepancies. Whatever time (in the ExecutionTime column) cannot be accounted for here is transmission time... or client display time. Perhaps your client machine takes longer to render, or to compute, the results.
What results are you expecting? Running everything on one machine vs. over a network will certainly give you different timings. Your biggest timing difference will be the network throughput, since you need to communicate with the networked server in both directions.
If you can SET NOCOUNT ON, that will also help by reducing network traffic.
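A minimal illustration of the NOCOUNT suggestion inside a stored procedure (the procedure and table names are made up):

ALTER PROCEDURE dbo.GetReading @Id INT
AS
BEGIN
    SET NOCOUNT ON;  -- stop sending the "N rows affected" message for every statement
    SELECT ReadingTime, Temperature FROM dbo.Readings WHERE Id = @Id;
END;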
