We have an MS Access 2010 app using SQL Server 2008 R2 as its database. All workstations were running on Windows 7 with no problems.
Our customer started to upgrade the workstations to Windows 10, and now we see the connection to the server drop at random occasions. It is not related to any specific action, query, report or form.
The application uses an ODBC connection to access the tables.
When this happens, all connections to all tables are dropped, and the app is unusable.
To recover, we need to restart the application; then everything works fine again until the next time.
When this error occurs, opening a table shows #Name in every record.
Please help
What we have tried so far:
Tested the network connection - no problems or errors.
Upgraded SQL Server to the latest Service Pack (we thought it might be related to the TLS version).
In our company we have to support a large legacy system built with Microsoft Access 2010 as the frontend and SQL Server 2008 R2 as the backend. The backend SQL Server runs on Windows Server 2008 R2. Currently our users work in Terminal Server sessions on a Windows Server 2008 R2 machine. A couple of days ago we started to test Windows Server 2019 and notebooks with the latest version of Windows 10. We noticed a big performance difference when running the same Access databases in the different environments.
For instance, creating a report takes 27 seconds in the new environment instead of 7 seconds in the old one. The database (.accdb) is identical, the backend is identical (still Windows Server 2008 R2 with SQL Server 2008 R2 SP2); only the execution environment (Windows) changed.
Does anyone have an idea how to explain this?
In Access 2010 the SQL Server tables are linked using System DSN data sources. In the old environment the "SQL Server" ODBC driver is used (version 6.01.7601.17514).
On the new environment I tested the following drivers:
ODBC Driver 11 for SQL Server (2014.120.5543.11)
ODBC Driver 17 for SQL Server (2017.173.01.01)
SQL Server (10.00.17763.01)
SQL Server Native Client 10.0 (2009.100.4000.00)
SQL Server Native Client 11.0 (2011.110.5058.00)
I created a new System DSN using each of these drivers and updated the linked tables in Access, but in every case the performance is still bad. I also tested the latest version of Access, which comes with Office 2019, but it is slow as well.
Sounds like your terminal sessions are getting throttled. Even though you have a SQL Server back end, Access still does a fair bit of local work on the result sets, so any resource-throttling differences between your Server 2008 and Server 2019 policies could be choking Access on the new server.
I think your answer is going to be found in Windows System Resource Manager. The page says it's not being maintained, but following the "Recommended Version" link leads to a generic Server 2019 page. Here's another article about how WSRM might be throttling sessions: Using WSRM to control RDS Dynamic Fair Share Scheduling.
Compare the Weighted_Remote_Sessions policy on the 2008 and 2019 servers. Either the default settings or behavior have changed, or the 2008 server policy was modified in the past to reach the current performance level.
Ok, a number of things to check.
First thing to check:
Launch the ODBC manager and check whether SQL log tracing is on. I don't know why, but I often see SQL logging turned on.
You NEED to be 100% sure it is turned off.
You MUST launch the ODBC manager from the command line or the Start menu, since the one in Control Panel is the 64-bit version, and you are using 32-bit Access (I assume).
So launch this version:
c:\Windows\SysWOW64\odbcad32.exe
So it is VERY important to launch the 32-bit version. I assume you are using a FILE DSN. Check the two logging options on the last page of the DSN setup ("Save long running queries to the log file" and "Log ODBC driver statistics to the log file") and make sure they are both un-checked.
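If you prefer to double-check from the command line, here is a rough sketch; the registry location below is my assumption for where the 32-bit machine-wide ODBC trace flag lives, so verify it on your system:

rem Query the (assumed) 32-bit ODBC trace flag; a value of 0 means tracing is off
reg query "HKLM\SOFTWARE\WOW6432Node\ODBC\ODBC.INI\ODBC" /v Trace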
Next up?
Link Access using the IP address of the SQL Server.
So, in place of, say:
myServer\SQLEXPRESS
Use:
10.50.10.101\SQLEXPRESS
(Of course, use the IP address of your SQL Server, not the example IP above.)
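For illustration, a File DSN pointing at the server by IP could look roughly like this; it is only a sketch, and the instance name, IP and database name below are placeholders:

[ODBC]
DRIVER=SQL Server
SERVER=10.50.10.101\SQLEXPRESS
DATABASE=MyDatabase
Trusted_Connection=Yes

DRIVER names the ODBC driver, SERVER points at the instance by IP, and Trusted_Connection=Yes requests Windows authentication.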
The above things are quite easy to check.
Still no performance fix?
Then disable the firewall on your new Terminal Server (I have seen this REALLY cause havoc).
And disable Windows Defender on the new TS server, if it is running.
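If you just want to rule the firewall out quickly, something like the following can be run in an elevated prompt on the TS box; treat it as a temporary test only and turn the firewall back on afterwards:

rem Temporarily turn off Windows Firewall for all profiles (test only)
netsh advfirewall set allprofiles state off
rem Turn it back on once the test is done
netsh advfirewall set allprofiles state on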
The above tips should fix your issues.
If the above don't work, the next thing would be to check the priority settings for the TS server (favoring the GUI over background services).
However, I am betting the above checks should restore your performance.
I have an AWS instance on which SQL Server 2014 has been running for more than 3 years.
But a few days ago, SQL Server suddenly stopped running.
I checked the server and tried to start the SQL Server service from Services, from SQL Server Configuration Manager, etc., but I'm not able to start the server and get the following error:
So I checked the Event Viewer entries and found these two errors:
I did some research on the web to overcome this issue and found that I can start the SQL Server service with trace flag T902, using the command below:
net start MSSQL$REVCORD /T902
And the SQL Server service started successfully.
But I want to get SQL Server and its services back to normal, so I can start/stop them normally.
I found on the web that this can be caused by corruption in the master database. I don't have a backup of the master database, so I cannot restore it.
I checked multiple threads on the web and tried several things to overcome this issue, but no luck.
So finally I decided to reinstall/recover SQL Server 2014, but I am getting another error while reinstalling:
Based on my findings on the web, all threads say that I have to uninstall and reinstall SQL Server to make it normal again.
Please help! It's a live server with continuous traffic, so I cannot uninstall/reinstall SQL Server there due to possible data loss.
The first thing:
select * from sys.sysmessages where error = 5833
The message:
The affinity mask specified is greater than the number of CPUs supported or licensed on this edition of SQL Server.
Check your edition and fix the affinity mask so that the number of CPUs does not exceed what your edition supports.
You can fix it in SSMS on the Processors tab of your server properties, or by using sp_configure.
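If you go the sp_configure route, here is a minimal sketch; run it on the instance you started with /T902, and note that an affinity mask of 0 simply lets SQL Server assign CPUs automatically, which is valid on every edition:

-- 'affinity mask' is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- 0 = let SQL Server manage CPU affinity automatically
EXEC sp_configure 'affinity mask', 0;
RECONFIGURE;

After that, try restarting the service normally, without the /T902 trace flag.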
I am trying to run SQL Server Data Quality Services on SQL Server 2014 with 32GB of RAM, plenty of disk space, and the latest updates (Microsoft SQL Server 2014 (SP2-CU2) (KB3188778) - 12.0.5522.0 (X64) Developer Edition (64-bit) on Windows NT 6.3 (Build 9600: ) (Hypervisor))
The data resides on the same server, separate database.
The knowledge base is created and published with three domains over a table of about a million records. And that's where it stops working: creating a data quality project fails after displaying the cheerful message "Analysis of data source has been completed successfully" - clicking the "Next" button leads to message #1 and, after restarting the application (and the server, just in case), to message #2:
SQL Server Data Quality Services server has stopped working
Refresh of client view table for user [domain\user] failed.
These are fairly consistent.
Examining both the server and the client logs reveals nothing (besides a full stack dump for the error), and the only suggestion from the Microsoft forums is to apply the latest service pack; the latest service pack has been applied, but still no cigar.
Any insights/suggestions would be highly appreciated!
thank you,
-al
P.S. Excerpt from the client log:
2/13/2017 9:19:26 AM|[]|1|ERROR|CLIENT|Microsoft.Ssdqs.Studio.ViewModels.Utilities.UIHelper|An error has occurred.
Microsoft.Ssdqs.Infra.Exceptions.EntryPointException: Refresh of client view table for user [domain\user] failed.;
at Microsoft.Ssdqs.Proxy.Database.DBAccessClient.Exec();
After poking around I found a question similar to mine on an MSDN blog: a user complained that his DQS installation had STOPPED working after applying CU2 to his SQL Server 2014 instance... So I downgraded mine from [(SP2-CU2) (KB3188778)] to [(SP2) (KB3171021)] and - drum roll - everything started working!
This is not a solution but an acceptable workaround, and I can continue analyzing my data until I upgrade to SQL Server 2016 and the fun begins anew!
I have a strange (but also common) problem with Crystal Reports.
DB is SQL Server 2014 Express (12.0.2000 or 12.0.2269)
The web app can connect to the DB with no problems. The problem arises when a user wants to run a report.
Now, I have a few production sites: a Windows Server 2012 R2 cloud VM, a couple of Windows Server 2008 R2 machines and one Windows 10 machine.
Reports run fine on the Windows Server 2008 machines, but not on Server 2012 R2 or Win 10. There, I get the dreaded "database logon failed" error. It doesn't even work on my development laptop (Win 10): I can run reports from within Visual Studio, but not after I deploy them to IIS.
The reports themselves mostly use the SQL Native Client (SQLNCLI11) driver to connect to the DB; some of them use OLE DB (SQLOLEDB), but that doesn't seem to be the problem, since I've tried both and they both fail.
Now, I would think maybe some DLL is missing from my app, but that very same app deployed to Win Server 2008 works just fine. So I am thinking it must be environmental. But what?
I am guessing that the client drivers are somehow broken, or something has changed in newer versions of Windows.
So I am asking for ideas to point me in the right direction, if anybody has any.
Here is the error snippet:
[COMException (0x8004100f): Database logon failed.]
CrystalDecisions.ReportAppServer.Controllers.ReportSourceClass.Export(ExportOptions pExportOptions, RequestContext pRequestContext) +0
CrystalDecisions.ReportSource.EromReportSourceBase.ExportToStream(ExportRequestContext reqContext) +644
[LogOnException: Database logon failed.]
CrystalDecisions.ReportAppServer.ConvertDotNetToErom.ThrowDotNetException(Exception e) +263
CrystalDecisions.ReportSource.EromReportSourceBase.ExportToStream(ExportRequestContext reqContext) +1522
CrystalDecisions.CrystalReports.Engine.FormatEngine.ExportToStream(ExportRequestContext reqContext) +704
CrystalDecisions.CrystalReports.Engine.ReportDocument.ExportToStream(ExportOptions options) +115
CrystalDecisions.CrystalReports.Engine.ReportDocument.ExportToStream(ExportFormatType formatType) +96
SYSTEM.Controllers.ReportController.GenerateReport(NameValueCollection Form, String how) in C:\SYSTEM\SYSTEM\Controllers\ReportController.cs:210
SYSTEM.Controllers.ReportController.Index() in C:\SYSTEM\SYSTEM\Controllers\ReportController.cs:467
lambda_method(Closure , ControllerBase , Object[] ) +90
UPDATE:
It seems to be due to Windows 10, but I haven't found a solution.
SAP says to install .NET 3.5, because it's not installed by default on Win 10, but even when I do, the error persists.
They also say you should install CR version 13.0.15, because it's the only one that supports Win 10, but as I said, it doesn't work.
I've tested on three different Win 10 machines, always with the same result.
I've run into issues when using an OLE DB (ADO) data source type along with a Native Client provider. When that Native Client isn't installed on the consuming user's computer, it asks for a database login, and it seems that no login will work. My solution was to use the OLE DB (ADO) data source with the OLEDB provider as well. You can see the provider by looking at the properties of the data source (right-click on it). The preferred provider in this case is SQLOLEDB, whereas the Native Client will be something like SQLNCLI11.
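For what it's worth, the difference also shows up in connection-string form; the server and database names below are just placeholders:

Native Client provider (requires SQLNCLI11 to be installed on the client machine):
Provider=SQLNCLI11;Data Source=myServer;Initial Catalog=MyDatabase;Integrated Security=SSPI;
Plain OLE DB provider (ships with Windows, no separate install needed):
Provider=SQLOLEDB;Data Source=myServer;Initial Catalog=MyDatabase;Integrated Security=SSPI;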
You should use the SQL client from 2005, or version 10 (I guess).
Crystal Reports is really problematic with the most recent drivers.
Try to establish the connection using the SQL 2005 client driver - it will work...
If anyone is interested, let me just share the solution:
It wasn't Crystal Reports, nor Windows 10 - it was me. My reports were built against SQL Server 2008, which uses SQLNCLI10, and SQLNCLI11 is not backwards compatible with it, hence the "database logon failed" error (which is not helpful at all, btw).
Just in case.
I'm trying to figure out how to get back the list of SQL Servers in my Visual Studio (2012, 2015) and even in SQL Server Management Studio... I've been searching for a solution, but I'm lost. Is there any way to get these servers back? Everything works properly and I can type the server name manually, but I'm too lazy to keep asking my colleagues.
The SQL Server Browser service is running. There are no Windows updates to install and the computer has been rebooted many times.
Thank you for any advice.
The SQL Server Browser service is running
Do you mean on your computer? You'll need it running on the machines you are trying to get to appear in the list.
It's a pretty standard dialog - assuming that it uses the same technology as SSMS, according to MSDN:
This dialog is populated by the SQL Server Browser service on the server computers. There are several reasons why the name of an instance might not appear in the list:
The SQL Server Browser service might not be running on the computer running SQL Server.
UDP port 1434 might be blocked by a firewall.
The HideInstance flag might be set.
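If you want to check the first two points from the command line on the machine that hosts SQL Server, here is a rough sketch (run in an elevated prompt; the firewall rule name is arbitrary):

rem Make sure the Browser service is running (its service name is SQLBrowser)
net start SQLBrowser
rem Open the Browser's UDP port in Windows Firewall
netsh advfirewall firewall add rule name="SQL Server Browser (UDP 1434)" dir=in action=allow protocol=UDP localport=1434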