How do I query the SharePoint database?

I want to retrieve some data. How can I make a query on a SharePoint database?

You shouldn't, for these reasons:
It is unsupported under the EULA you agreed to when you installed SharePoint. (To be precise, it is modifying the database, or calling its triggers directly (with a few exceptions), that is unsupported, not SELECTing – see the clarification below.)
Your queries are not guaranteed to work after applying any patches or service packs to SharePoint, since Microsoft could change the database schema at any time.
Directly querying the database can place extra load on the server and hence cause performance issues.
Direct SELECT statements against the database take shared read locks at the default transaction isolation level, so your custom queries might cause deadlocks and hence stability issues.
Your custom queries might lead to incorrect data being retrieved.
To clarify point 1: it DOES NOT ALLOW you to modify the SharePoint database in any way. SELECTing is permitted, but, as mentioned, it may still lead to the other problems above.
However, if these points don't put you off, you can just use Visual Studio to connect to the existing database, following the same procedure you would use to connect to any other database.
You can also create your own database and store any additional information there.
Access SharePoint data the right way:
Use the SharePoint object model (code can only run on the SharePoint server).
Use SharePoint web services (code can run from anywhere, from any application).
SharePoint 2013 also features a REST API.

I have one thing to add. If you do decide to query SharePoint content databases directly, use the NOLOCK hint to prevent shared locks from being taken out and potentially creating deadlocks in the application.
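For example, a read-only sketch along these lines (assuming a SharePoint 2007-style content database; the database name, the list GUID, and even the exact column names are illustrative and vary between versions):

-- Read a few rows from a content database without taking shared locks.
-- WITH (NOLOCK) means dirty reads are possible, so treat the results as approximate.
SELECT TOP (100) tp_ID, tp_ListId, tp_Modified
FROM [WSS_Content].[dbo].[AllUserData] WITH (NOLOCK)
WHERE tp_ListId = '00000000-0000-0000-0000-000000000000';  -- placeholder list GUID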

If you don't mind using other proprietary Microsoft programs, Access, Excel and Power BI all offer native connectivity to data stored in SharePoint lists, document libraries and metadata.

Related

Best approach to move data between SQL Servers

Ok, let me explain the environment we are facing here:
We have an ASP.NET MVC 4 app that uses a SQL Server database.
This app isolates data into "projects", so when any user connects they can only work on the data of one of these projects.
Sometimes... a group of users have to travel to remote regions for some days to retrieve data for a single project, and quite often they won't be able to have an internet connection (even mobile or satellite solutions are often out of reach).
While the displaced team works on a project, people at the office can still work on the rest of the projects (but not on the one that is abroad).
So... we are pondering the possibility of using a laptop to act as a "mobile server", where users can download the data from a specific project before travelling. While abroad, they can work against this "mobile server", update any data on their project and, when they come back, they could upload their updated data to the main server.
Our idea is to create stored procedures on both servers (main and mobile) that execute the queries needed to move a project's data between them, passing the project identifier as a parameter, probably using linked servers so the main and mobile servers can see each other during update operations.
Our questions here are:
Is this a good approach?
Is there any other, better approach that we're not seeing?
Are there any risks we should pay attention to in this or other approaches?
I've never used bidirectional transactional replication, so if that works for you, problem solved. I do have quite a bit of experience with data migration, including merging large data sets into software-driven systems. And from that experience, replication has hurt us more than it has helped us (from a migration/merge point of view).
The biggest challenge in my opinion is going to be conflict resolution. I know you say that all of the data is in project-specific databases, but is there no shared data at all? What about multiple remote users updating the same data? In that case you're going to need a little more than just replication.
Instead of maintaining two databases at all times (one for mobile, one as the regular in-house db), why not build a system where a job is run on your main system when a project needs to be prepared for "offline mode" (the job could be stored procedures or SSIS packages or straight T-SQL)? Whatever the technology used, this job would copy all of the requested project data to a new database on the remote server/laptop and mark it somehow in the main database as read-only, to prevent users in the office from updating that data.
Once the data is in offline mode on the remote server, the users can update and use the data as much as they want from that remote server. Then when the users get an internet connection or they are back in the office they can kick off another job that syncs the data to the main server, removes offline mode, and deletes/archives the remote database. Almost like a temporary project database.
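As a very rough, plain T-SQL sketch of that "send a project offline" job (the linked server name and every database, table and column name below are made up for illustration):

-- Hypothetical job: copy one project's rows to the mobile laptop and flag the
-- project read-only at the office. [MOBILE] is a linked server to the laptop.
CREATE PROCEDURE dbo.usp_SendProjectOffline
    @ProjectId INT
AS
BEGIN
    SET NOCOUNT ON;

    -- Copy the project's data to the mobile server.
    INSERT INTO [MOBILE].[FieldWorkDb].[dbo].[Measurements] (ProjectId, MeasuredOn, Value)
    SELECT ProjectId, MeasuredOn, Value
    FROM dbo.Measurements
    WHERE ProjectId = @ProjectId;

    -- Flag the project so office users treat it as read-only while it is abroad.
    UPDATE dbo.Projects
    SET IsOffline = 1
    WHERE ProjectId = @ProjectId;
END;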
Seriously, it sounds like a fun project.
Technologies to look at:
SSIS (SQL Server Integration Services) - In my experience, this is extremely fast at moving data and gives you the ability to add logic to handle conflict resolution, error handling, etc. It's free (with certain SQL Server editions) and the community is huge, so supporting it should be easy. SSIS is not as dynamic as some of the specialized solutions out there.
A data migration suite like Pervasive's Data Integrator - I loved this but it's expensive. You could write an entire solution in this product that could handle the processing of your data bidirectionally, and like SSIS it allows for complex programming logic.
T-SQL - With a linked server you could just write straight queries (using stored procedures if you wanted). The problem here is security on the linked server. We don't use them because of this issue. Linked Servers: Good or Bad?
Start using some of Microsoft's built-in change detection technologies right off the bat; they're harder to implement when you're already using the system. Change Data Capture (CDC) will give you a full history of the records updated, while Change Tracking will give you a lightweight summary of your changes. Using either technology will make syncing the data many times easier (a rough Change Tracking sketch follows the links below).
Change Tracking: http://msdn.microsoft.com/en-us/library/bb933874.aspx
Change Data Capture: http://msdn.microsoft.com/en-us/library/cc645937.aspx
SSIS: http://msdn.microsoft.com/en-us/library/ms169917.aspx
SQL Server Agent Jobs: http://msdn.microsoft.com/en-us/library/ms189237.aspx
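For instance, enabling Change Tracking and reading net changes looks roughly like this (database, table and column names are illustrative):

-- Turn on Change Tracking for the database and one table.
ALTER DATABASE FieldWorkDb SET CHANGE_TRACKING = ON
    (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.Measurements ENABLE CHANGE_TRACKING
    WITH (TRACK_COLUMNS_UPDATED = ON);

-- At sync time, fetch everything changed since the version saved after the last sync.
DECLARE @last_sync_version BIGINT = 0;  -- persist this value between syncs

SELECT ct.SYS_CHANGE_OPERATION, ct.MeasurementId, m.MeasuredOn, m.Value
FROM CHANGETABLE(CHANGES dbo.Measurements, @last_sync_version) AS ct
LEFT JOIN dbo.Measurements AS m ON m.MeasurementId = ct.MeasurementId;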

Using Access 2010 as a frontend to a SQL Server database

I am having to develop an application very rapidly. I have chosen SQL Server (2012) as the DB backend, and I will write all my stored procs, triggers etc in the backend.
However, for the UI (logon, reporting etc.), I am using Access 2010 for the frontend. I am new to both Access (I used earlier versions of Access waaay back in the day) and SQL Server (though I'm familiar with other DBs).
The goal is to have the database reside on a server and let clients connect with an instance of Access 2010 running locally on their machine.
I am looking for a quick tutorial that shows me how to use the SQL Server objects from the Access frontend (I believe it's called linking) - any link to a useful resource would be very helpful, as I can't seem to locate anything useful (I may be searching using the wrong keywords).
Assuming you built all the tables and data on SQL Server, it is then a simple matter in Access to link to that database.
And to save development time, you can continue to use Access's simple approach of forms bound to those tables. As long as you launch a form with some kind of criteria (say an invoice number), that bound form will ONLY pull down the one record from SQL Server into that form (so there is no need to write or use stored procedures etc. for that form). And any triggers etc. you have built in SQL Server will run without you having to do anything from the Access side.
So a plain-Jane form built in Access that is bound to a table of, say, 1 million rows does not need any “special” code – just make sure you launch the form with the “where” clause that Access provides, and the form will only pull and load the one record.
So 99% of the normal development process you used in Access in the past will continue to work. Using SQL server for the most part does not change much if anything in regards to building forms that edit such data.
However, for reports and forms that query and search for data, or for VBA code that needs to “process” data, you are free to call stored procedures. You simply create a pass-through query in Access. The VBA code to run that T-SQL then looks like this:
CurrentDb.QueryDefs("MyRawTsqlPassThroughQuery").Execute
Or
With CurrentDb.QueryDefs("MyPass")
    .SQL = "exec sp_myProc"
    .Execute
End With
In the past, for most Access applications, you likely used linked tables – those linked tables can point to an Access file (back end), or Oracle, or SQL Server – and how the actual application works and functions is really much the same in all cases. (So there are not really a lot of “specific” things you need to know from the Access side – if you are comfortable with Oracle or SQL Server, then using Access as a front end works just fine, and the typical development approaches used in Access will remain typical.)
Here is an article that outlines the linking process:
https://support.office.com/en-us/article/Import-or-link-to-SQL-Server-data-A5A3B4EB-57B9-45A0-B732-77BC6089B84E?ui=en-US&rs=en-US&ad=US&fromAR=1
Keep in mind you will see MANY articles that talk about ADP projects - they have been deprecated since Access 2010, and I don't recommend using ADP projects with Access - so be careful, since many articles that talk about Access + SQL server are built around ADP projects which as noted should not be used anymore.
This office.microsoft.com article should give you a good overview.
You can find plenty of more technical information by searching for "query sql server from ms access".
Don't mess about with linked tables. Use an Access ADP (Access Data Project), which is natively connected to SQL Server. Sadly, this type of Access file is being phased out, but it is the optimal solution for an MS Access front end with a SQL Server back end.
Pros and Cons of Access Data Project (MS Access front end with SQL Server Backend)

C# How to Synch Data Between Database Instances Over the Internet

We have an application that requires our customers to have a SQL server instance on site. At their request, the application needs to synchronize the data in their database with a copy in our datacenter.
We're using .Net 3.5 SP1. We need to synchronize the data exactly, including IDENTITY columns.
We'd prefer to use something like LINQ to SQL that would let us make some simple select and insert/update calls against mapped entities. However, the IDENTITY columns seem to be a problem with LINQ and similar approaches.
We can do this all with built-up SQL statements and turn IDENTITY INSERT on / off as needed, but I'd prefer a more elegant solution.
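For reference, the built-up SQL route I mean is roughly this, per table (object names here are just placeholders):

-- Allow explicit values in the IDENTITY column while copying a row across.
SET IDENTITY_INSERT dbo.Orders ON;

INSERT INTO dbo.Orders (OrderId, CustomerId, CreatedOn)  -- identity column listed explicitly
VALUES (@OrderId, @CustomerId, @CreatedOn);

SET IDENTITY_INSERT dbo.Orders OFF;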
Thanks!
** Edit - We DO need to write our own solution, and we do need to use .Net 3.5 SP1 to do it. I won't spend your time explaining all the reasons why, but please limit suggestions to options within the .Net playground.
Microsoft Sync Framework could be your solution. This is the framework description from Microsoft:
Microsoft Sync Framework is a data synchronization platform from Microsoft that can be used to synchronize data across multiple data stores. Sync Framework includes a transport-agnostic architecture, into which data store-specific synchronization providers, modelled on the ADO.NET data provider API, can be plugged in.
Sync Framework is a comprehensive data synchronization solution that enables developers to build solutions that support synchronization of any database, on any data protocol over any network topology. msdn.microsoft.com
For your convenience, here is a link to a good tutorial on the subject.
If it is just a couple of tables that need to be synchronized and there is not a lot of data in the tables (now and future) you could develop some sort of bulk copy from your servers and bulk insert routine on the customer's server.
Since you said you can't use SQL Server replication services or SSIS, then perhaps a backup/restore procedure could be written. You could take a scheduled backup of your database and make it available to calling applications which could then copy the backup, restore it to another instance on the customers server, then pull all data you need via any number of methods and it would exist locally on the customers servers.
Beyond that, I think you may be asking for a maintenance and synchronization nightmare if you can't base your solution on tools that are made to do this sort of thing.

Can I store any custom tables in SharePoint system database?

Can I store any custom tables in SharePoint's own database?
Is this supported behavior or not?
(I mean tables in MS SQL database, not SharePoint lists.)
If I can, how well does this play with backup/restore functionality?
What are possible caveats?
For anyone wondering why I'm asking: there's an app which is bound to SharePoint server and needs to store some purely relational internal information that doesn't make sense apart from that SharePoint instance. I would like to narrow down data storage to one place but I'm not sure if SharePoint likes its database being used for other purposes.
I'm using SharePoint 2007.
Is it possible? Sure. Should you? Nope.
The SharePoint content/configuration databases are subject to change with any update Microsoft releases, and any changes you make will very likely be destroyed, and if your farm depends on them, be left non-functional.
If you want to store purely relational data in a set of tables, just create another database. There's nothing stopping you from using the same SQL Server instance that houses your SharePoint content and/or configuration databases to store other relational databases as well.
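For example, a minimal sketch of such a side-by-side database (all names are illustrative):

CREATE DATABASE SharePointCompanionData;
GO
USE SharePointCompanionData;
GO
-- Purely relational data owned by the custom app, kept out of SharePoint's own databases.
CREATE TABLE dbo.AppSetting
(
    SettingId    INT IDENTITY(1,1) PRIMARY KEY,
    SiteId       UNIQUEIDENTIFIER NOT NULL,   -- ties a row back to a SharePoint site
    SettingName  NVARCHAR(128)    NOT NULL,
    SettingValue NVARCHAR(MAX)    NULL
);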
Not a good idea: Support for changes to the databases used by Windows SharePoint Services
...
Making any modification to the database schema
Adding tables to any of the databases
...
If an unsupported database modification is discovered during a support call, the customer must perform one of the following procedures at a minimum:
Perform a database restoration from the last known good backup that did not include the database modifications
Roll back all the database modifications
It is even worse than the above. It is likely that future upgrades will notice your changes to the content database schema and refuse to upgrade the database, period.

MS Access Application - Convert data storage from Access to SQL Server

Bear in mind here, I am not an Access guru. I am proficient with SQL Server and .Net framework. Here is my situation:
A very large MS Access 2007 application was built for my company by a contractor.
The application has been split into two tiers BY ACCESS; there is a front-end portion that holds all of the MS Access forms, and a back-end portion, consisting of Access tables, queries, etc., that is stored on a computer on the network.
Well, of course, there is now a need to convert the data storage portion to SQL Server 2005 while keeping all of the GUI forms which were built in MS Access. This is where I come in.
I have read a little, and have found that you can link the forms or maybe even the access tables to SQL Server tables, but I am still very unsure on what exactly can be done and how to do it.
Has anyone done this? Please comment on any capabilities, limitations, considerations about such an undertaking. Thanks!
Do not use the upsizing wizard from Access:
First, it won't work with SQL Server 2008.
Second, there is a much better tool for the job:
SSMA, the SQL Server Migration Assistant for Access which is provided for free by Microsoft.
It will do a lot for you:
move your data from Access to SQL Server
automatically link the tables back into Access
give you lots of information about potential issues due to differences in the two databases
keep track of the changes so you can keep the two synchronised over time until your migration is complete.
I wrote a blog entry about it recently.
You have a couple of options. The upsizing wizard does a decent(ish) job of moving structure and data from Access to SQL Server. You can then set up linked tables so your application 'should' work pretty much as it does now. Unfortunately, the SQL dialect used by Access is different from SQL Server's, so if there are any 'raw SQL' statements in the code they may need to be changed.
As you've linked to the tables, though, all the other features of Access – the QBE, forms and so on – should work as expected. That's the simplest and probably best approach.
Another way of approaching the issue would be to migrate the data as above, and then, rather than using linked tables, make use of ADO from within Access. That approach is kind of familiar if you're used to other languages/dev environments, but it's the wrong approach. Access comes with loads of built-in stuff that makes working with data really easy; if you go back to using ADO/SQL you lose many of those benefits.
I suggest starting on a small, non-essential part of the application: migrate a few tables and see how it goes. Of course, back everything up first.
Good luck
Others have suggested upsizing the Jet back end to SQL Server and linking via ODBC. In an ideal world, the app will work beautifully without needing to change anything.
In the real world, you'll find that some of your front-end objects that were engineered to be efficient and fast with a Jet back end don't actually work very well with a server database. Sometimes Jet guesses wrong and sends something really inefficient to the server. This is particularly the case with mass updates of records – in order not to hog server resources (a good thing), Jet will send a single UPDATE statement for each record (which is a bad thing for your app, since it's much, much slower than one set-based UPDATE statement).
What you have to do is evaluate everything in your app after you've upsized it and, where there are performance problems, move some of the logic to the server. This means you may create a few server-side views, or you may use pass-through queries (to hand off the whole SQL statement to SQL Server rather than letting Jet worry about it), or you may need to create stored procedures on the server (especially for update operations).
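As a rough illustration of moving an update to the server, a set-based stored procedure along these lines (object names are made up) replaces what Jet would otherwise send as one UPDATE per row:

CREATE PROCEDURE dbo.usp_MarkBatchInvoicesPaid
    @BatchId INT
AS
BEGIN
    SET NOCOUNT ON;

    -- One set-based statement instead of one round trip per record.
    UPDATE dbo.Invoices
    SET Status = 'Paid',
        PaidOn = GETDATE()
    WHERE BatchId = @BatchId;
END;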
But in general, it's actually quite safe to assume that most of it will work fine without change. It likely won't be as fast as the old Access/Jet app, but that's where you can use SQL Profiler to figure out what the holdup is and re-architect things to be more efficient with the SQL Server back end.
If the Access app was already efficiently designed (e.g., forms are never bound to full tables, but instead to recordsources with restrictive WHERE clauses returning only 1 or a few records), then it will likely work pretty well. On the other hand, if it uses a lot of the bad practices seen in the Access sample databases and templates, you could run into huge problems.
It's my opinion that every Access/Jet app should be designed from the beginning with the idea that someday it will be upsized to use a server back end. This means that the Access/Jet app will actually be quite efficient and speedy, but also that when you do upsize, it will cause a minimum of pain.
This is your lowest-cost option. You're going to want to set up an ODBC connection for your Access clients pointing to your SQL Server. You can then use the (I think) "Import" option to "link" a table to SQL Server via the ODBC source. Migrate your data from the Access tables to SQL Server, and you have your data on SQL Server in a form you can manage and back up. Importantly, queries can then be written on SQL Server as views and presented to the Access db as linked tables as well.
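For instance, a restrictive server-side view (names illustrative) that Access can then link to just like a table:

-- Pre-joined, pre-filtered result set exposed to Access as a single linked object.
CREATE VIEW dbo.vw_OpenOrders
AS
SELECT o.OrderId, o.OrderDate, c.CustomerName
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c ON c.CustomerId = o.CustomerId
WHERE o.Status = 'Open';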
Linked Access tables work fine but I've only used them with ODBC and other databases (Firebird, MySQL, Sqlite3). Information on primary or foreign keys wasn't passing through. There were also problems with datatype interpretation: a date in MySQL is not the same thing as in Access VBA. I guess these problems aren't nearly as bad when using SQL Server.
Important Point: If you link the tables in Access to SQL Server, then EVERY table must have a Primary Key defined (Contractor? Access? Experience says that probably some tables don't have PKs). If a PK is not defined, then the Access forms will not be able to update and insert rows, rendering the tables effectively read-only.
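If a table came over without a key, adding one on the SQL Server side is usually enough (table and column names are illustrative); you will then typically need to refresh the linked table in Access so it picks up the new key:

-- Give a legacy table a primary key so Access can insert and update through the link.
ALTER TABLE dbo.LegacyNotes
ADD CONSTRAINT PK_LegacyNotes PRIMARY KEY (NoteId);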
Take a look at this Access to SQL Server migration tool. It might be one of the few, if not the ONLY, true peer-to-peer or server-to-server migration tools running as a pure Web Application. It uses mostly ASP 3.0, XML, the File System Object, the Data Dictionary Object, ADO, ADO Extensions (ADOX), the Dictionary Scripting Objects and a few other neat Microsoft techniques and technologies. If you have the Source Access Table on one server and the destination SQL Server on another server or even the same server and you want to run this as a Web Internet solution this is the product for you. This example discusses the VPASP Shopping Cart, but it will work for ANY version of Access and for ANY version of SQL Server from SQL 2000 to SQL 2008.
I am finishing up development for a generic Database Upgrade Conversion process involving the automated conversion of Access Table, View and Index Structures in a VPASP Shopping or any other Access System to their SQL Server 2005/2008 equivalents. It runs right from your server without the need for any outside assistance from external staff or consultants.
After creating a clone of your Access tables, indexes and views in SQL Server this data migration routine will selectively migrate all the data from your Access tables into your new SQL Server 2005/2008 tables without having to give out either your actual Access Database or the Table Contents or your passwords to anyone.
Here is the Reverse Engineering part of the process running against a system with almost 200 tables and almost 300 indexes and Views which is being done as a system acceptance test. Still a work in progress, but the core pieces are in place.
http://www.21stcenturyecommerce.com/SQLDDL/ViewDBTables.asp
I do the automated reverse engineering of the Access Table DDLs (Data Definition Language) and convert them into SQL equivalent DDL Statements, because table structures and even extra tables might be slightly different for every VPASP customer and for every version of VP-ASP out there.
I am finishing the actual data conversion routine which would migrate the data from Access to SQL Server after these new SQL Tables have been created including any views or indexes. It is written entirely in ASP, with VB Scripting, the File System Object (FSO), the Dictionary Object, XML, DHTML, JavaScript right now and runs pretty quickly as you will see against a SQL Server 2008 Database just for the sake of an example.
It takes perhaps 15-20 seconds to reverse engineer almost 500 different database objects. There might be a total of over 2,000 columns involved in this example for the 170 tables and 270 indexes involved.
I have even come up with a way for you to run both VPASP systems in parallel using 2 different database connection files on the same server just to be sure that orders entered on the Access System and the SQL Server system produce the same results before actual cutover to production.
John (a/k/a The SQL Dude)
sales#designersyles.biz
(This is a VP-ASP Demo Site)
Here is a technique I've heard one developer speak on. This is if you really want something like a Client-Server application.
Create .mdb/.mde frontend files distributed to each user (You'll see why).
For every table they need to perform CRUD on, keep a local copy in the file from step 1.
The forms stay linked to the local tables.
Write VBA code to handle the CRUD from the local tables to the SQL Server database.
Reports can be based off of temp tables created from SQL Server (you won't be able to create temp tables in an .mde file, I don't think).
Once you decide how you want to do this with a single form, it is not too difficult to apply the same technique to the rest. The nice thing about working with the form on a local table is you can keep a lot of the existing functionality as the existing application (Which is why they used and continue to use Access I hope). You just need to address getting data back and forth to the SQL Server.
You can continue to have linked tables, and then gradually phase them out with this technique as time and performance needs dictate.
Since each user has their own local file, they can work on their local copy of the data. Only the minimum required to do their task should ever be copied locally. Example: if they are updating a single record, the table would only have that record. When a user adds a new record, you would notice that the ID field for the record is Null, so an insert statement is needed.
I guess the local table acts like a dataset in .NET? I'm sure in some way this is an imperfect analogy.
