Linked Server vs. Ad Hoc (OpenRowset / OpenDatasource) Distributed Queries - sql-server

I have an application which needs to grab data from different clients' databases that reside at the client's location. So alongside their normal details such as company name, address etc, I also store the name of their DB server and the name of the database I need to interrogate.
The number of clients is currently zero. However, I expect this to grow to around 200+ in a year's time.
I am a bit confused with which option to go with to run distributed queries:
1. Creating linked servers for every single client (up to 200+!)
2. Using an OpenDataSource() or OpenRowset() ad-hoc query within an SP that feeds in the DB server and DB name dynamically from the client account table
Option 2 sounds the easiest to manage because if a client were to move their server or whatever, they would just need to update their details in their account once and everything should keep working.
But the reason I'm confused is because of this statement on Microsoft's site:
OPENDATASOURCE should be used only to reference OLE DB data sources that are accessed infrequently. For any data sources that will be accessed more than several times, define a linked server.
The external DBs will get accessed quite frequently, with the majority of transactions being SELECT statements.
I'm also unsure about the security implications and which option is tighter security-wise. Does anyone have experience in this area and could give me some tips please?

I have used both. Linked servers are typically used when a database holds lookup data and is connected to often. There is nothing wrong with using a dynamic OpenDataSource query to connect to your clients' machines. Be very security-aware as to where, and how, you store your clients' credentials; you should probably read up on encrypting usernames and passwords.
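For what it's worth, here is a minimal sketch of that dynamic approach, assuming a hypothetical ClientAccount table holding each client's server and database name (the remote Orders table is also just a placeholder). It needs the 'Ad Hoc Distributed Queries' server option enabled, and it uses integrated security; if you store per-client SQL logins instead, that is exactly where the credential-encryption advice applies.

```sql
-- Hedged sketch only: ClientAccount, DbServerName, DbName and Orders are
-- hypothetical names. 'Ad Hoc Distributed Queries' must be enabled
-- (sp_configure) for OPENDATASOURCE/OPENROWSET to work.
CREATE PROCEDURE dbo.GetClientOrders
    @ClientId INT
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @Server SYSNAME, @Db SYSNAME, @Sql NVARCHAR(MAX);

    SELECT @Server = DbServerName, @Db = DbName
    FROM   dbo.ClientAccount
    WHERE  ClientId = @ClientId;

    -- OPENDATASOURCE does not accept variables, so the whole statement is
    -- built as a string. QUOTENAME the database name and validate the server
    -- name, since both end up inside dynamic SQL.
    SET @Sql =
          N'SELECT * FROM OPENDATASOURCE(''SQLNCLI'', ''Data Source='
        + @Server
        + N';Integrated Security=SSPI'').'
        + QUOTENAME(@Db)
        + N'.dbo.Orders;';

    EXEC sys.sp_executesql @Sql;
END;
```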

Related

How to query multiple databases from different SQL Servers

We have approximately 8 SQL Servers used for different purposes, such as inserting data on one server, updating on another, etc. (or connecting only to a particular database based on the user's region).
The problem is that sometimes data needs to be queried from multiple SQL Server databases. Say I have an Id property; based on that Id, data needs to be retrieved from several of these 8 servers (wherever there is an Id match, so essentially all databases are queried).
So basically the server the user is logged into uses "Linked Server" functionality to connect to the other SQL Servers (the server the user is currently on acts as the source SQL Server), and a "UNION" is used to combine all the data.
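For reference, this is roughly what that linked-server UNION approach looks like (server, database, table and column names below are placeholders):

```sql
-- Placeholder names throughout; each remote server is a configured linked server.
DECLARE @Id INT = 12345;

SELECT Id, CustomerName, Region
FROM   dbo.Customers                        -- local (source) server
WHERE  Id = @Id

UNION ALL

SELECT Id, CustomerName, Region
FROM   [Server2].SalesDb.dbo.Customers      -- linked server
WHERE  Id = @Id

UNION ALL

SELECT Id, CustomerName, Region
FROM   [Server3].SalesDb.dbo.Customers
WHERE  Id = @Id;
```

Part of the performance pain with this pattern is that filters on four-part names are not always pushed to the remote server; wrapping each remote branch in OPENQUERY (with the filter embedded in the query text) at least makes the remote server do the filtering.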
As a lot of transactions take place each day, this approach is not feasible performance-wise.
So, any recommendations on a better approach to achieve the same functionality? I read about a concept called "Server Groups" but am not sure about it.
The application is made in .Net Web Forms using Jquery/Ajax/HTML/API and ADO.NET.
If you have a .NET application that sits outside these 8 servers, can't you establish individual connections and pass the ID from the .NET app to each of these servers?
As far as I know, a "Server Group" is a concept in SSMS that helps you group servers so you can run common scripts against them at the same time.

Multiple-user Access database with SQL Server

I have an Access database that is stored on a network drive where all users access the one file. The database is linked to SQL Server tables located on a local on-site server. In my VBA code the connection is to the Access database. My question is: I know it's possible to connect to SQL Server directly from VBA, but all my queries are stored in Access, so will my code be able to run the Access queries if it's connected to SQL Server, or would I need to re-write all the queries?
The problem we are having is that more than one user may have the same record pulled up and they are overwriting each other's changes. Also, a user may need to take the program on their laptop instead of having to remote in to their desktop at the office. I was thinking I could just give them a copy each and that would solve the problem. Does anyone have any answers?
Just re-write the queries in SQL Server. It may be painful now, but it shouldn't be too bad, and down the road a bit, you'll be glad you moved everything to SQL Server (much faster, more stable, you're using a real DB, etc.)
You will want to pull all the queries into VBA and rewrite them with the appropriate parameters.
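To make both answers concrete, here is a hedged sketch of what one rewritten query could look like as a parameterized server-side stored procedure, with a rowversion check added as one common way to address the "users overwriting each other's changes" problem from the question. The table and column names are made up.

```sql
-- Sketch only: dbo.Customers, Phone and RowVer (a rowversion column) are
-- placeholder names. The procedure refuses the update if the row changed
-- since the user read it.
CREATE PROCEDURE dbo.UpdateCustomerPhone
    @CustomerId  INT,
    @Phone       VARCHAR(30),
    @RowVersion  BINARY(8)       -- rowversion value the user originally read
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE dbo.Customers
    SET    Phone = @Phone
    WHERE  CustomerId = @CustomerId
      AND  RowVer = @RowVersion;  -- matches 0 rows if someone else changed it

    IF @@ROWCOUNT = 0
        RAISERROR('The record was changed by another user. Reload and retry.', 16, 1);
END;
```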

Approaches for Database Synchronization

I have been assigned to develop a sync application for my company. We have SQL Server on our database server, which will be synced with the client databases. The client databases are not known in advance; they can be SQLite, MySQL or whatever.
What this sync app does is detect changes that occur on the server and client databases, save these changes, and sync them. If changes occur on the server database they will be synced to the client database, and vice versa.
I did some research on it and found many solutions. One of them is to use the Microsoft Sync Framework, but I could hardly find a good implementation example for syncing with remote databases.
Then I came across Change Data Capture (CDC) in SQL Server 2008. CDC works by detecting changes made to the source table (it reads them from the transaction log) and putting these changes into a separate change table, which is then used for syncing.
Since I cannot use the CDC feature (I don't have sufficient database rights on my machine), I have started to develop my own solution that works in a similar way: I create a separate sync_table for each source table, create triggers to detect data changes, and put this data in the sync_table.
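A minimal sketch of that trigger/shadow-table approach, using a made-up dbo.Customers source table (all names are placeholders):

```sql
-- One sync_table per source table, populated by a trigger on the source.
CREATE TABLE dbo.Customers_sync
(
    SyncId      BIGINT IDENTITY(1,1) PRIMARY KEY,
    CustomerId  INT       NOT NULL,
    Operation   CHAR(1)   NOT NULL,                      -- 'I', 'U' or 'D'
    ChangedAt   DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    Synced      BIT       NOT NULL DEFAULT 0
);
GO

CREATE TRIGGER trg_Customers_Capture
ON dbo.Customers
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Inserted rows (and the new side of updates)
    INSERT INTO dbo.Customers_sync (CustomerId, Operation)
    SELECT i.CustomerId,
           CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 'U' ELSE 'I' END
    FROM inserted AS i;

    -- Pure deletes
    INSERT INTO dbo.Customers_sync (CustomerId, Operation)
    SELECT d.CustomerId, 'D'
    FROM deleted AS d
    WHERE NOT EXISTS (SELECT 1 FROM inserted);
END;
```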
However, I am advised to do some more research on it for choosing the best implementation methodology.
I need to keep the following things in mind:
Databases may/may not be on the same network.
On server side, the user must be able to select which tables will take part in the sync process.
Devices that will sync with the server database need to be registered first. Meaning that all client devices will be registered by the user before they can start syncing.
As usual any help will be appreciated :)
There is an open source project called SymmetricDS with many of the same goals. Take a look at the documentation and data model to see how the problem was solved, and maybe you will get some ideas.
Instead of a separate shadow table for each source table, there is a single sym_data table where all the data is captured in comma-separated value format. The advantage is one place to look for captured data and to retrieve changes that were part of the same transaction. The table is kept small by purging it often after data is transferred successfully.
It uses web protocols (HTTP) for data transfer. The advantage is leveraging existing web servers for performance, administration, and known filtering through firewalls. There is also a registration protocol used before clients are allowed to sync: the server admin "opens registration" for a client ID, which allows the client to connect for the first time.
It supports many different databases, so you'll find examples of how to write triggers and retrieve unique transaction IDs on those systems.
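For illustration only (this is not SymmetricDS's actual schema, just a rough approximation of the idea), the single-capture-table design can be sketched like this:

```sql
-- Every capture trigger writes into one table, with the row values
-- flattened into a CSV string. Column names here are invented.
CREATE TABLE dbo.change_data
(
    DataId      BIGINT IDENTITY(1,1) PRIMARY KEY,
    TableName   SYSNAME       NOT NULL,
    EventType   CHAR(1)       NOT NULL,   -- 'I', 'U', 'D'
    RowData     NVARCHAR(MAX) NULL,       -- comma-separated column values
    TranId      NVARCHAR(50)  NULL,       -- groups changes from one transaction
    CreateTime  DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME()
);
```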

Organising Dbs and tables in SSMS

This is a repost of a question I asked 4 or 5 days ago, with zero response. Hoping for more luck this time...
(Using SQL Server 2008)
Within the next few weeks I plan to introduce SQL Server to an office that is in dire need of a proper data server. Currently there is a heavy reliance on loose Excel and Access files (supplemented with a frighteningly large amount of impenetrable VB code to do data manipulations) strewn all over the internal network.
We need SQL server for two things:
1. For internal databases that will be designed upfront and will be capturing data on an ongoing basis
2. For ad hoc uploads of datasets received from clients, which we then analyse
I am the only person in this office who is familiar with SQL. I will have to train the other 5 or 6 people to use it.
Now, my question is this: how would you guys set up the DBs so that, using Management Studio, it would be easy to visually recognize what is being stored where? To be more precise: if this were a Windows file system it would look something like this:
c:\client work\client 1\piece of work 1 (db with 10 tables)\
c:\client work\client 1\piece of work 2 (db with 8 tables)\
c:\client work\client 1\piece of work 3 (db with 7 tables)\
c:\internal\accounting system\some db with 8 tables\
c:\internal\accounting system\some db with 5 tables\
c:\internal\some other system\some db with 7 tables\
etc.
So briefly, I need to visually split by internal and client work. Client work I need to split by different clients. For each client I need to split out the different distinct sets of work. (Internal work follows a similar pattern).
Solutions that I am aware of:
Run multiple data servers (e.g. one internal, one for client work). Not sure what the cons of this would be though
Assign schemas to tables
I would love to hear your suggestions!
Your organizational tools for managing SQL Server are instances, databases and schemas:
A server can run multiple instances. An instance is basically a completely separate server instance on the same machine.
An instance can manage multiple databases. The database is the standard boundary of integrity - you (usually) back up an entire database, referential integrity is constrained to being between objects in the same database, etc.
Each database can contain multiple schemas, which allow you to organize code.
All these "containers" relate to security in some way.
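For example, one possible layout is a single "ClientWork" database with one schema per client (all names below are placeholders):

```sql
-- Placeholder names throughout; CREATE SCHEMA must start its own batch.
CREATE DATABASE ClientWork;
GO
USE ClientWork;
GO
CREATE SCHEMA Client1;
GO
CREATE SCHEMA Client2;
GO
-- Tables then live under the client (and piece of work) they belong to:
CREATE TABLE Client1.PieceOfWork1_Sales (SaleId INT PRIMARY KEY, Amount DECIMAL(10,2));
CREATE TABLE Client2.PieceOfWork1_Sales (SaleId INT PRIMARY KEY, Amount DECIMAL(10,2));
```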
I recommend that you take an organizational data and process inventory first, so that you understand what data you are dealing with, who uses it and how - with special attention to data that is public or collaborative (used by certain people together) and data that needs compartmentalized access (only used by a particular role). SQL Server is not really a great choice for storing unstructured data - I would not view it as a simple replacement for a file server, for instance.
From there, proceed to define roles for your users. Having roles is a much better strategy than assigning rights to individual users: it documents the semantic meaning of the access (any person performing this role needs this access), whereas granting access to john and kate directly tells you nothing about why they need it. Be certain that the roles are sufficiently fine-grained. A departmental role like AccountsReceivable isn't nearly as useful as PaymentApprover or InvoiceProcessor or AccountsSupervisor. Users can act in multiple roles - this will give you a lot more self-documenting ability in your infrastructure and a lot fewer security holes and headaches.
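A hedged sketch of what that looks like in T-SQL, building on the schema-per-client layout above (role, schema and login names are placeholders; on SQL Server 2008 role membership is added with sp_addrolemember):

```sql
-- Grant access at the schema level to roles, not to individual users.
CREATE ROLE Client1Analyst;
GRANT SELECT, INSERT, UPDATE ON SCHEMA::Client1 TO Client1Analyst;

CREATE ROLE InvoiceProcessor;
GRANT EXECUTE ON SCHEMA::Internal TO InvoiceProcessor;  -- assumes an Internal schema

-- A user can act in multiple roles:
EXEC sp_addrolemember 'Client1Analyst',   'DOMAIN\john';
EXEC sp_addrolemember 'InvoiceProcessor', 'DOMAIN\john';
```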
This should help to define which containers you will need and what access to grant and guide your data infrastructure from there.
As far as giving users direct access, I'm with Randy Minder, SQL Server is only an expert user tool at best. If they are familiar with Access, a good option is to let them use Access against carefully designed and chosen views in SQL Server until they are ready for a more systematic data engineering approach.
IMO, users of your databases should not have to know or care where or how your databases are set up. And they shouldn't be given access to SSMS unless they are well trained in SQL. This is a disaster waiting to happen. You should be creating applications and/or reports that allow the user access to the data they need. That way they don't care where the data sits, and don't need to know.

MS Access Application - Convert data storage from Access to SQL Server

Bear in mind here, I am not an Access guru. I am proficient with SQL Server and .Net framework. Here is my situation:
A very large MS Access 2007 application was built for my company by a contractor.
The application has been split into two tiers BY ACCESS: there is a front-end portion that holds all of the MS Access forms, and a back-end portion (Access tables, queries, etc.) that is stored on a computer on the network.
Well, of course, there is a need to convert the data storage portion to SQL Server 2005 while keeping all of these GUI forms which were built in Ms Access. This is where I come in.
I have read a little, and have found that you can link the forms or maybe even the access tables to SQL Server tables, but I am still very unsure on what exactly can be done and how to do it.
Has anyone done this? Please comment on any capabilities, limitations, considerations about such an undertaking. Thanks!
Do not use the upsizing wizard from Access:
First, it won't work with SQL Server 2008.
Second, there is a much better tool for the job:
SSMA, the SQL Server Migration Assistant for Access which is provided for free by Microsoft.
It will do a lot for you:
move your data from Access to SQL Server
automatically link the tables back into Access
give you lots of information about potential issues due to differences in the two databases
keep track of the changes so you can keep the two synchronised over time until your migration is complete.
I wrote a blog entry about it recently.
You have a couple of options. The upsizing wizard does a decent(ish) job of moving structure and data from Access to SQL Server. You can then set up linked tables so your application 'should' work pretty much as it does now. Unfortunately, the SQL dialect used by Access is different from SQL Server's, so if there are any 'raw SQL' statements in the code they may need to be changed.
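A few typical translations of that kind, purely for illustration (the table names are made up):

```sql
-- Access/Jet:  SELECT IIf(Qty > 10, "Bulk", "Single") AS OrderType, Now() FROM Orders
-- SQL Server:
SELECT CASE WHEN Qty > 10 THEN 'Bulk' ELSE 'Single' END AS OrderType,
       GETDATE() AS CurrentTime
FROM   dbo.Orders;

-- Access/Jet wildcard:  WHERE CompanyName LIKE "Sm*"
-- SQL Server:
SELECT * FROM dbo.Customers WHERE CompanyName LIKE 'Sm%';
```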
Since you've linked to the tables, though, all the other features of Access - the QBE, forms and so on - should work as expected. That's the simplest and probably best approach.
Another way of approaching the issue would be to migrate the data as above and then, rather than using linked tables, make use of ADO from within Access. That approach feels familiar if you're used to other languages/dev environments, but it's the wrong approach here. Access comes with loads of built-in stuff that makes working with data really easy; if you fall back to ADO/SQL you lose many of those benefits.
I suggest starting on a small part of the application - non-essential data - migrating a few tables, and seeing how it goes. Of course, back everything up first.
Good luck
Others have suggested upsizing the Jet back end to SQL Server and linking via ODBC. In an ideal world, the app will work beautifully without needing to change anything.
In the real world, you'll find that some of your front-end objects that were engineered to be efficient and fast with a Jet back end don't actually work very well with a server database. Sometimes Jet guesses wrong and sends something really inefficient to the server. This is particularly the case with mass updates of records - in order not to hog server resources (a good thing), Jet will send a separate UPDATE statement for each record (which is a bad thing for your app, since it's much, much slower than a single set-based UPDATE statement).
What you have to do is evaluate everything in your app after you've upsized it and, where there are performance problems, move some of the logic to the server. This means you may create a few server-side views, or you may use passthrough queries (handing the whole SQL statement off to SQL Server without letting Jet worry about it), or you may need to create stored procedures on the server (especially for update operations).
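As an illustration, a mass update moved into a server-side stored procedure might look like this (all names are placeholders); Access can then call it through a pass-through query so only one statement crosses the wire:

```sql
-- One set-based statement on the server instead of row-by-row updates from Jet.
CREATE PROCEDURE dbo.ApplyPriceIncrease
    @CategoryId INT,
    @Factor     DECIMAL(5, 2)
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE dbo.Products
    SET    UnitPrice = UnitPrice * @Factor
    WHERE  CategoryId = @CategoryId;
END;
GO

-- From Access, via a pass-through query:
-- EXEC dbo.ApplyPriceIncrease @CategoryId = 3, @Factor = 1.05;
```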
But in general, it's actually quite safe to assume that most of it will work fine without change. It likely won't be as fast as the old Access/Jet app, but that's where you can use SQL Profiler to figure out what the holdup is and re-architect things to be more efficient with the SQL Server back end.
If the Access app was already efficiently designed (e.g., forms are never bound to full tables, but instead to recordsources with restrictive WHERE clauses returning only 1 or a few records), then it will likely work pretty well. On the other hand, if it uses a lot of the bad practices seen in the Access sample databases and templates, you could run into huge problems.
It's my opinion that every Access/Jet app should be designed from the beginning with the idea that someday it will be upsized to use a server back end. This means that the Access/Jet app will actually be quite efficient and speedy, but also that when you do upsize, it will cause a minimum of pain.
This is your lowest-cost option. You're going to want to set up an ODBC connection for your Access clients pointing to your SQL Server. You can then use the (I think) "Import" option to "link" a table to SQL Server via the ODBC source. Migrate your data from the Access tables to SQL Server, and you have your data on SQL Server in a form you can manage and back up. Importantly, queries can then be written on SQL Server as views and presented to the Access db as linked tables as well.
Linked Access tables work fine, but I've only used them with ODBC and other databases (Firebird, MySQL, SQLite3). Information on primary or foreign keys wasn't passed through. There were also problems with datatype interpretation: a date in MySQL is not the same thing as a date in Access VBA. I guess these problems aren't nearly as bad when using SQL Server.
Important point: if you link the tables in Access to SQL Server, then EVERY table must have a primary key defined (Contractor? Access? Experience says that probably some tables don't have PKs). If a PK is not defined, the Access forms will not be able to update or insert rows, rendering the tables effectively read-only.
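A quick way to find the offenders before linking, using the standard catalog views:

```sql
-- List user tables with no primary key defined; these would be effectively
-- read-only once linked into Access.
SELECT s.name AS schema_name, t.name AS table_name
FROM   sys.tables  AS t
JOIN   sys.schemas AS s ON s.schema_id = t.schema_id
WHERE  NOT EXISTS (SELECT 1
                   FROM sys.key_constraints AS kc
                   WHERE kc.parent_object_id = t.object_id
                     AND kc.type = 'PK')
ORDER BY s.name, t.name;
```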
Take a look at this Access to SQL Server migration tool. It might be one of the few, if not the ONLY, true peer-to-peer or server-to-server migration tools running as a pure Web Application. It uses mostly ASP 3.0, XML, the File System Object, the Data Dictionary Object, ADO, ADO Extensions (ADOX), the Dictionary Scripting Objects and a few other neat Microsoft techniques and technologies. If you have the Source Access Table on one server and the destination SQL Server on another server or even the same server and you want to run this as a Web Internet solution this is the product for you. This example discusses the VPASP Shopping Cart, but it will work for ANY version of Access and for ANY version of SQL Server from SQL 2000 to SQL 2008.
I am finishing up development for a generic Database Upgrade Conversion process involving the automated conversion of Access Table, View and Index Structures in a VPASP Shopping or any other Access System to their SQL Server 2005/2008 equivalents. It runs right from your server without the need for any outside assistance from external staff or consultants.
After creating a clone of your Access tables, indexes and views in SQL Server this data migration routine will selectively migrate all the data from your Access tables into your new SQL Server 2005/2008 tables without having to give out either your actual Access Database or the Table Contents or your passwords to anyone.
Here is the Reverse Engineering part of the process running against a system with almost 200 tables and almost 300 indexes and Views which is being done as a system acceptance test. Still a work in progress, but the core pieces are in place.
http://www.21stcenturyecommerce.com/SQLDDL/ViewDBTables.asp
I do the automated reverse engineering of the Access Table DDLs (Data Definition Language) and convert them into SQL equivalent DDL Statements, because table structures and even extra tables might be slightly different for every VPASP customer and for every version of VP-ASP out there.
I am finishing the actual data conversion routine which would migrate the data from Access to SQL Server after these new SQL Tables have been created including any views or indexes. It is written entirely in ASP, with VB Scripting, the File System Object (FSO), the Dictionary Object, XML, DHTML, JavaScript right now and runs pretty quickly as you will see against a SQL Server 2008 Database just for the sake of an example.
It takes perhaps 15-20 seconds to reverse engineer almost 500 different database objects. There might be a total of over 2,000 columns involved in this example for the 170 tables and 270 indexes involved.
I have even come up with a way for you to run both VPASP systems in parallel using 2 different database connection files on the same server just to be sure that orders entered on the Access System and the SQL Server system produce the same results before actual cutover to production.
John (a/k/a The SQL Dude)
sales#designersyles.biz
(This is a VP-ASP Demo Site)
Here is a technique I've heard one developer speak on. This is if you really want something like a Client-Server application.
1. Create .mdb/.mde front-end files distributed to each user (you'll see why).
2. For every table they need to perform CRUD on, keep a local copy in the file from #1.
3. The forms stay linked to the local tables.
4. Write VBA code to handle the CRUD from the local tables to the SQL Server database.
5. Reports can be based on temp tables created from the SQL Server data (you won't be able to create temp tables in an .mde file, I don't think).
Once you decide how you want to do this with a single form, it is not too difficult to apply the same technique to the rest. The nice thing about working with the form on a local table is that you can keep a lot of the existing functionality of the existing application (which is why they used, and continue to use, Access, I hope). You just need to address getting data back and forth to the SQL Server.
You can continue to have linked tables, and then gradually phase them out with this technique as time and performance needs dictate.
Since each user has their own local file, they can work on their local copy of the data. Only the minimum required to do their task should ever be copied locally. Example: if they are updating a single record, the table would only have that record. When a user adds a new record, you would notice that the ID field for the record is Null, so an insert statement is needed.
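The server-side half of that "Null ID means insert" decision could be a single stored procedure that the local VBA code calls; a rough sketch, with made-up names:

```sql
-- Sketch only: dbo.Customers with an identity CustomerId is a placeholder.
-- The client passes NULL for a brand-new record.
CREATE PROCEDURE dbo.SaveCustomer
    @CustomerId INT = NULL,
    @Name       NVARCHAR(100),
    @Phone      VARCHAR(30)
AS
BEGIN
    SET NOCOUNT ON;

    IF @CustomerId IS NULL
    BEGIN
        INSERT INTO dbo.Customers (Name, Phone) VALUES (@Name, @Phone);
        SELECT SCOPE_IDENTITY() AS CustomerId;  -- hand the new key back to the client
    END
    ELSE
    BEGIN
        UPDATE dbo.Customers
        SET    Name = @Name, Phone = @Phone
        WHERE  CustomerId = @CustomerId;

        SELECT @CustomerId AS CustomerId;
    END
END;
```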
I guess the local table acts like a dataset in .NET? I'm sure in some way this is an imperfect analogy.
