I am using MS Access to connect to SQL Server through a DSN connection. This is a linked table to a SQL Server backend. Here is the connection string:
ODBC;DSN=mydsn;Description=mydesc;Trusted_Connection=Yes;APP=Microsoft Office 2010;DATABASE=mydb;ApplicationIntent=READONLY;;TABLE=dbo.mytable
As you can see, there is an ApplicationIntent=READONLY tag in the connection string. What does this mean? Am I connecting to the database in a read-only fashion? Is it recommended to perform updates and inserts using this connection string?
This means that if you are using Availability Groups in SQL Server 2012, the engine knows that your connections are read-only and can route them to read-only replicas (if any exist). Some information here:
Configure Read-Only Access on an Availability Replica
Availability Group Listeners, Client Connectivity, and Application Failover
If you are not currently using Availability Groups, it may be a good idea to leave that in there for forward compatibility, but it really depends on whether you are intentionally reading only. This should prevent writes, but there are some caveats. These Connect items may be useful or may leave you scratching your head; I'll confess I haven't read them through.
ApplicationIntent=ReadOnly allows updates to a database
ApplicationIntent=ReadOnly does not send the connection to the secondary copy
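For context, here is a rough sketch of the server-side configuration that makes read-intent routing work, assuming a hypothetical AG named MyAG with replicas SQLNODE1 and SQLNODE2; adapt names, ports, and the routing list to your environment:

-- Allow read-intent connections on the secondary replica.
ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON N'SQLNODE2' WITH
    (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));

-- Tell the AG where read-intent connections routed to SQLNODE2 should land.
ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON N'SQLNODE2' WITH
    (SECONDARY_ROLE (READ_ONLY_ROUTING_URL = N'TCP://SQLNODE2.contoso.com:1433'));

-- While SQLNODE1 is primary, route ApplicationIntent=ReadOnly connections
-- to SQLNODE2 first, falling back to SQLNODE1.
ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON N'SQLNODE1' WITH
    (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = (N'SQLNODE2', N'SQLNODE1')));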
Related
I have a client that uses MS SQL Server with availability groups. I develop Java-based software and connect to the server in the following fashion: jdbc:sqlserver://[serverName[:portNumber]]
Every time the DBA does an update on the servers, we lose the connection to the server (we get a Connection Closed error). According to the DBA this is normal behavior in SQL Server and our software should just do a retry.
Is it really normal that the sql server closes all connections in a failover situation? Shouldn't it just redirect all connections to the new instance?
Unfortunately I am no SQL expert and the DBA is anything but helpful; he just claims that our software should simply reconnect after receiving a closed connection. Am I missing something, or is this really the desired experience in SQL Server?
Is it really normal that the sql server closes all connections in a failover situation? Shouldn't it just redirect all connections to the new instance?
Yes. On the assumption that the Availability Group (hereafter abbreviated "AG") is built on top of Windows Failover Clustering†, the AG listener is brought offline, transferred to the new owner of the AG, and brought back online. Moreover, the databases in the AG on both the primary and all secondary replicas are transitioned from online → recovery pending → back online in their new (perhaps same as old) capacity as read/write or read-only.
Either of these phenomena alone would cause your application to lose communication with a database that it had previously established a connection with. But also, two of the fallacies of distributed computing are:
The network is reliable.
Topology doesn't change.
Regardless of the reason for those assumptions being violated, if your application isn't resilient to them, there's going to be a problem.
† - while it's technically possible to build an AG without Windows Clustering (i.e. a basic AG or if the AG is on Linux and is using Pacemaker as the coordinator), I think that the majority of implementations do use Windows Clustering.
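If you want to observe these transitions from the server side, here is a small sketch against the standard AG DMVs (nothing environment-specific); run it before and after a failover and you should see the roles swap:

-- Roles and connection states of each replica in every AG on this instance.
SELECT ar.replica_server_name,
       ars.role_desc,
       ars.operational_state_desc,
       ars.connected_state_desc,
       ars.recovery_health_desc
FROM sys.dm_hadr_availability_replica_states AS ars
JOIN sys.availability_replicas AS ar
    ON ars.replica_id = ar.replica_id;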
We have SQL Server mirroring established between a database on two SQL Server 2012 Standard instances on two different servers. The witness is on a third server running the Express edition. The IP addresses (not hostnames) of the primary, mirror, and witness servers were specified in the mirroring configuration when creating the mirror using SSMS. The problem now is that a change in the IP address of the mirror is required. There is proper reasoning behind this and it can't be avoided. The system is live and an outage should be avoided as much as possible. The new IP is accessible from both the primary and the witness.
When we take the mirror server out, mirroring is affected, but as soon as the server is back with the same old IP, mirroring resumes appropriately. However, how do we change the endpoint's IP address without having to remove and then recreate the mirror? Is that possible, or is there no way to achieve this without a remove/recreate?
If a remove/recreate is absolutely necessary, how do we ensure that we don't have to copy over the complete database and its logs and redo the process from scratch? If all client access to the primary is blocked during that time, to ensure that no transactions take place, would this suffice?
A solution without remove/recreate would be the preferred one.
Thanks.
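For reference, if it does come to a remove/re-establish, the path below is a rough T-SQL sketch that avoids copying the full database, assuming a hypothetical database MyDb, endpoint port 5022, principal at 10.0.0.1, witness at 10.0.0.3, and the mirror's new address 10.0.0.99. It relies on the fact that dropping the partnership leaves the mirror database in the RESTORING state, so only a bridging log backup is needed:

-- On the principal: drop the partnership; the principal stays online and
-- the mirror database remains in RESTORING state.
ALTER DATABASE MyDb SET PARTNER OFF;

-- On the principal: take a log backup to bridge the gap.
BACKUP LOG MyDb TO DISK = N'\\backupshare\MyDb_bridge.trn';

-- On the mirror (now on its new IP): apply the bridging log backup.
RESTORE LOG MyDb FROM DISK = N'\\backupshare\MyDb_bridge.trn' WITH NORECOVERY;

-- On the mirror: point at the principal.
ALTER DATABASE MyDb SET PARTNER = N'TCP://10.0.0.1:5022';

-- On the principal: point at the mirror's new address, then re-add the witness.
ALTER DATABASE MyDb SET PARTNER = N'TCP://10.0.0.99:5022';
ALTER DATABASE MyDb SET WITNESS = N'TCP://10.0.0.3:5022';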
My scenario:
I am trying to develop a service which will query different databases.
To clear the above statement up:
I use the word service in its broadest sense: a software component that will provide some value to the database owner.
These databases will be in no way under my control as they will belong to different companies. They won't be known beforehand and multiple vendors are to be supported: Oracle, MS (SQL Server), MySql, PostgreSQL. Also, OLE DB and ODBC connections will be supported.
The problem: security of database credentials and overall traffic is a big concern but the configuration effort should be reduced at a minimum. Ideally, all the security issues should be addressed programmatically in the service implementation and require no configuration effort for the database owner other than provide a valid connection string.
Usually, database SSL support is done through server certificates which I want to avoid as it is cumbersome for the client (the database owner).
I have been looking into how to do this, to no avail. Hopefully this can be done with OpenSSL, SSPI, a client SSL certificate, or some form of tunneling; or maybe it is just not possible. Some advice would be greatly appreciated.
I am having a bit of difficulty understanding how this service would work without being extremely cumbersome for the database owner even before you try to secure the traffic with the database.
Take Oracle in particular (though I assume there would be similar issues with other databases). In order for your service to access an Oracle database, the owner of the database would have to open up a hole in their firewall to allow your server(s) to access the database on a particular port, so they would need to know the IP addresses of your servers, and there is a good chance that they would need to configure a service that does all of its communication on a single port (by default, the Oracle listener will frequently redirect the client to a different port for the actual interaction with the database). If they are at all security conscious, they would have to install Oracle Connection Manager on a separate machine to proxy the connection between your server and the database rather than exposing the database directly to the internet. That's quite a bit of configuration work that would be required internally, and that's assuming that the database account already exists with appropriate privileges and that everyone signs off on granting database access from outside the firewall.
If you then want to encrypt communication with the database, you'd either need to establish a VPN connection to the database owner's network (which would potentially eliminate some of the firewall issues) or you'd need to use something like Oracle Advanced Security to encrypt the communication between your servers. Creating VPN connections to many different customer networks would require a potentially huge configuration effort and could require that you maintain one server per customer because different customers will have different VPN software requirements that may be mutually incompatible. The Advanced Security option is an extra cost license on top of the enterprise edition Oracle license that the customer would have to go out and purchase (and it would not be cheap). You'd only get to the point of worrying about getting an appropriate SSL certificate once all these other hoops had been jumped through. The SSL certificate exchange would seem like the easiest part of the whole process.
And that's just to support Oracle. Support for other databases will involve a similar series of steps but the exact process will tend to be slightly different.
I would tend to expect that, depending on the business problem you're trying to solve, you'd be better served by creating a product that your customers could install on their own servers inside their network. It would connect to a database and have an interface that either sends data to your central server via something like HTTPS POST calls, or listens for HTTPS requests that are passed to the database, with the results returned over HTTP.
SSL is very important in order to keep a client's database safe, but there is more to it than that. You have to make sure that each database account is locked down: each client must only have access to their own database. Furthermore, every database has other privileges which are nasty. For instance, MySQL has FILE_PRIV, which allows an account to read/write files. MS SQL Server has xp_cmdshell, which allows the user to access cmd.exe from SQL (why would they do this!?). PostgreSQL allows you to write stored procedures in any language, and from there you can call all sorts of nasty functions.
Then there are other problems. A malformed query can cause a buffer overflow, which will give an attacker the keys to the kingdom. You have to make sure all of your databases are up to date, and then pray no one drops a 0-day.
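On the SQL Server side, that lockdown boils down to a per-client login with rights in a single database and the dangerous surface area switched off. A minimal sketch, with hypothetical names:

-- Verify xp_cmdshell is disabled (it is off by default on modern versions).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 0;
RECONFIGURE;

-- One login per client...
USE master;
CREATE LOGIN ClientA_Login WITH PASSWORD = N'<use a strong generated password>';

-- ...mapped to a user in that client's database only, with only the
-- permissions the service actually needs.
USE ClientA_Db;
CREATE USER ClientA_User FOR LOGIN ClientA_Login;
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO ClientA_User;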
I have a security application that stores its data in an Access database. Now I'm required to make a real-time synchronization (replication) between that Access database and a new database in SQL Server 2005. These two databases are the same.
Any suggestions?
I don't know whether to do it using a Windows service or not. I need an exact technical answer.
Mostly, I would suggest you use a Windows service to periodically check the MS Access db and attempt to synchronize it with the SQL Server database.
This will allow you to remove the human factor and have this task run periodically to sync the DBs.
Have a look at:
Creating a Basic Windows Service in C#
Creating a Windows Service in C#
Also:
Connect to Microsoft Access .mdb database using C#
Beginners guide to accessing SQL Server through C#
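The periodic check could alternatively live server-side as a SQL Server Agent job rather than a Windows service. A rough sketch of the pull via a linked server, assuming a hypothetical .mdb path and an Events table with an EventId key (the Jet/ACE OLE DB provider must be available on the SQL Server machine; use Microsoft.ACE.OLEDB.12.0 if that is what's installed):

-- One-time setup: register the Access file as a linked server.
EXEC sp_addlinkedserver
     @server = N'AccessDb',
     @provider = N'Microsoft.Jet.OLEDB.4.0',
     @srvproduct = N'Access',
     @datasrc = N'C:\data\security.mdb';

-- The recurring sync step: insert rows SQL Server hasn't seen yet.
INSERT INTO dbo.Events (EventId, EventTime, Payload)
SELECT a.EventId, a.EventTime, a.Payload
FROM AccessDb...Events AS a
WHERE NOT EXISTS (SELECT 1 FROM dbo.Events AS e WHERE e.EventId = a.EventId);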
SQL Server has built-in replication functionality that you get for free, so you don't need to worry about copying rows and tracking changes yourself. There are several types of SQL replication used in different situations: merge replication, snapshot replication, and transactional replication. The last one, transactional replication, sounds like what you want. Merge replication is used when you have users that might disconnect, go away, and return later to synchronize (like remote users). Transactional replication is used where the subscribers and publisher are reliably connected. Snapshot replication generates a new snapshot each time synchronization occurs and doesn't track individual changes to the data. Read the MSDN documentation to find which of these types is appropriate for your situation.
Using these replication methods will require that you set up your tables in a SQL Server or Express instance; you can use that to synchronize with your SQL Server and keep Access as the front end for everything else. I think you want to follow astander's suggestion and use a Windows service to trigger synchronization. However, you can also set up the Windows Synchronization Manager to automatically try to synchronize at startup, shutdown, when the computer is idle, etc. If you need finer control over triggering the synchronization, then perhaps use a Windows app or service as astander suggested.
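For a flavor of what the transactional publication setup looks like in raw T-SQL (the SSMS wizards generate something similar; names here are hypothetical, a configured Distributor is assumed, and several defaults are elided):

-- Enable the database for publishing.
EXEC sp_replicationdboption
     @dbname = N'MySyncDb', @optname = N'publish', @value = N'true';

-- Create a transactional publication (continuous replication by default)
-- and its Snapshot Agent job.
EXEC sp_addpublication @publication = N'MyPub', @status = N'active';
EXEC sp_addpublication_snapshot @publication = N'MyPub';

-- Publish a table as an article.
EXEC sp_addarticle
     @publication = N'MyPub',
     @article = N'Events',
     @source_owner = N'dbo',
     @source_object = N'Events';

-- Push the publication to a subscriber.
EXEC sp_addsubscription
     @publication = N'MyPub',
     @subscriber = N'SUBSCRIBER1',
     @destination_db = N'MySyncDb',
     @subscription_type = N'Push';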
When I profile my application using SQL Server Profiler, I am seeing lots of Audit Login and Audit Logout messages for connections to the same database. I am wondering, does this indicate that something is wrong with my connection pooling? The reason I ask, is because I found this in the MSDN documentation in regards to connection pooling:
Login and logout events will not be raised on the server when a connection is fetched from or returned to the connection pool. This is because the connection is not actually closed when it is returned to the connection pool. For more information, see Audit Login Event Class and Audit Logout Event Class in SQL Server Books Online.
http://msdn.microsoft.com/en-us/library/8xx3tyca.aspx
Also, does anyone have any tips for determining how effective the connection pooling is for a given SQL server? I have lots of databases on a single server and I know this can have a huge impact, but I am wondering if there is an easy way to obtain metrics on the effectiveness of my connection pooling. Thanks in advance!
While the MSDN article says that the event will only be raised for non-reused connections, the SQL Server documentation contradicts this statement:
"The Audit Login event class indicates that a user has successfully logged in to Microsoft SQL Server. Events in this class are fired by new connections or by connections that are reused from a connection pool."
The best way to measure the effectiveness of pooling is to collect the time spent in connecting with and without pooling. With pooling, you should see that the first connection is slow and the subsequent ones are extremely fast. Without pooling, every connection will take a lot of time.
If you want to track the Audit Login event, you can use the EventSubClass data column to determine whether the login used a new connection or one reused from the pool. The value will be 1 for a nonpooled (new) connection and 2 for a connection reused from the pool.
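If you capture the logins with a server-side trace, a small sketch of splitting them by that column (the trace file path is hypothetical; EventClass 14 is Audit Login):

SELECT t.EventSubClass,      -- 1 = nonpooled (new), 2 = reused from the pool
       COUNT(*) AS login_count
FROM sys.fn_trace_gettable(N'C:\traces\audit_logins.trc', DEFAULT) AS t
WHERE t.EventClass = 14      -- Audit Login
GROUP BY t.EventSubClass;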
Remember that connections are pooled per connection string. If you have many databases and connect using many connection strings, your app will create a new connection when none exists with the correct connection string. Then it will pool that connection and, if the pool is full, bump an existing connection. The default Max Pool Size is 100 connections per pool, so if you're routinely bouncing through more than 100 databases, you'll close and open connections all the time.
It's not ideal, but you can work around the problem by always connecting to a single database (one connection string) and then switching database context with 'USE [DBName]'. There are drawbacks:
You lose the ability to specify a user/pass per connection string (your app user needs permission to all databases).
Your SQL becomes more complex (especially if you're using an out-of-the-box ORM or stored procs).
You could experiment with increasing the Max Pool Size if your database count isn't huge. Otherwise, if some databases are used frequently while others aren't, you could turn pooling off for the infrequently used DBs. Both options are configured via the connection string.
As far as metrics, monitoring the login and logout events on SQL Server is a good start. If your app is pooling nicely you shouldn't see a lot of them.
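As a starting point for those metrics, here is a sketch of a DMV query showing how many connections each host/application currently holds; with healthy pooling the counts should stay small and stable rather than churning:

SELECT s.host_name,
       s.program_name,
       COUNT(*) AS connection_count
FROM sys.dm_exec_sessions AS s
WHERE s.is_user_process = 1
GROUP BY s.host_name, s.program_name
ORDER BY connection_count DESC;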