How do I use EF4 in SQLCLR (SQL Server 2012)?

I've a project that uses EF4 on the client side to talk to a SQL Server db. On that DB there's a bunch of SPs that do operations on records and return subsets of records. Nothing particularly major, about the most intensive bit is generating a few bytes of random data and overwriting part of a byte array with it. The records have to stay on the server for security reasons because the data array's akin to a private key.
In the past I've used ADO for DB access within the SPs, but that seems horribly outdated and clunky compared with EF4. And now that I've updated to SQL Server 2012, which has .NET 4 available for SQLCLR work, I'd like to do the next SP using EF4 and shift to using that in the future.
Problem is, nobody seems to have done it yet, and I'm not sure it's even possible.
I know I could do this by writing a separate server-side app and having that talk to the DB on my behalf, but that means the client side needs a connection both to the database and to the helper service. More things to get working, more things to go wrong, and aesthetically displeasing.
I've created a simple assembly that uses EF4 to give access to a couple of standard tables. No extra references added. On importing that into SQL Server, I get errors that it can't find "System.Data.Entity" and "System.Runtime.Serialization". Ok, I copy them into the same directory and import again. I get a couple of warnings that I'm in uncharted territory, which is fine for experimenting. But then I get a missing assembly "smdiagnostics". And I can't find any file on the drive that has "smdiagnostics" in the name, a full text search through the project for that term gives no hits, and I've not found much elsewhere that's helped me track down where this can be found.
So does anyone know how I can either satisfy or remove the requirement for smdiagnostics?

Ah. Bit of a "doh". Copying individual assemblies into the same directory as my own DLL isn't the way to go; that was just a knee-jerk reaction to seeing an error message about the referenced assemblies not being found even after looking in the same directory.
Assuming a 32-bit server, with your own database selected and TRUSTWORTHY already set on, this registers everything needed in order to add an EF4 project:
CREATE ASSEMBLY [System.Data.Entity]
from 'C:\Windows\Microsoft.NET\Framework\v4.0.30319\System.Data.Entity.dll'
with permission_set = UNSAFE;
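Once System.Data.Entity is registered, importing your own EF4 assembly is just another CREATE ASSEMBLY against your DLL. A minimal sketch, with a made-up assembly name and path:
CREATE ASSEMBLY [MyEf4Procs] -- hypothetical name
from 'C:\Projects\MyEf4Procs\bin\MyEf4Procs.dll' -- hypothetical path
with permission_set = UNSAFE;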

Related

How to eager load entire database with EF

My database consists of 5 tables with ~10000 rows combined. It takes ~1 MB in SQL Server CE, which sits on a shared folder. The database itself is hierarchical: Country-Region-City-Street-Building. I am using Entity Framework 4.
Because the database is small, users are able to explore and edit all 2000 Cities in a WPF ListView. But with every approach I have tried so far, the GUI is sluggish because of the many database round-trips (with dummy data the GUI is lightning-fast). How can I load the entire database into memory with one or a few database round-trips?
I tried multiple Include() calls, but I noticed a large performance penalty, as described here.
Should I write my own ORM-light? I could also use plain ASCII CSV files instead of a database, but that would obviously exclude concurrency.
Honestly, I've done something like this myself, and the answer for me was to copy the whole database locally and work on it.
If you're looking not only to read but also to write, I'd definitely suggest ditching CE and installing one of the Express versions of SQL Server. They are designed for this kind of situation; CE is not*.
*SP1 is better for concurrent access, but over the network it will never be performant for large datasets.
I re-asked this question on a Microsoft forum and they were kind enough to give me some guidance:
Basically my question can be restated as following:
read only once from database on application start
do all subsequent queries from local data, not database (for performance)
write to both the context and the database each time an entity is added or deleted.
With plain EF it is not possible because each query goes to the database. This implies that I must read data fast on start and then cache it.
Implementation details:
The best way seems to be using ESQL to import the data fast and then cache it, for example as entities not connected to the context. From my first experiments it seems to work well.
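A minimal sketch of that approach, assuming an EF4 ObjectContext named GeoEntities with a Cities entity set (both names made up). MergeOption.NoTracking returns detached entities in a single round-trip, ready to be cached:
using System.Collections.Generic;
using System.Data.Objects;
using System.Linq;

List<City> cachedCities;
using (var ctx = new GeoEntities())
{
    // Entity SQL query; NoTracking skips change tracking and
    // returns entities not connected to the context.
    var query = ctx.CreateQuery<City>("SELECT VALUE c FROM GeoEntities.Cities AS c");
    query.MergeOption = MergeOption.NoTracking;
    cachedCities = query.ToList(); // one round-trip
}
// All subsequent reads come from cachedCities, not the database.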

How do you put a large existing database (schema) under source control?

My DBA just lost some development work that he did on our development database. Poor fella. So naturally our manager asked him, at our status meeting, how this could happen and how we could avoid it happening in the future. "Source control could alleviate the problem," I suggested... The DBA's response: "No, we just back up the server more often." Now I would like to help my DBA understand what source control is and how it fits together with a database schema and development on that schema.
Previously I've tried to explain to him that there's nothing special about the source code behind tables and stored procedures, and that it should be in a source control system (TFS in this case). But he just didn't bite. Now, while this mishap is in recent memory, I would like to take another stab at it.
So my question is, do you know of any good advice I could pass on to my DBA and maybe even a couple of resources explaining how you would go about migrating a DB schema to be under source control and find its proper place in the build and deployment processes?
A couple of facts about the environment:
Source Control on a TFS 2008 Server.
Database is an MS SQL Server 2008 with >300 tables and >300 other objects (sprocs, triggers, functions etc.).
Clarification:
We have been using DB Ghost and other change management solutions on other projects, with other DBAs, in the past. We even have the license for VS DB edition! The problem is getting the DBA to even think about this way of developing for the database. He's really old school (i.e. migrating changes manually from environment to environment), and unfortunately he's the only one who knows anything about this particular DB.
See how to version control sql server databases and Do you source control your databases, among many others. Or use the search page. Basically, your approach seems correct. Good luck persuading the DBA...
If you are using Visual Studio Team System, I recommend having a stab at their Database Edition (I think these days it comes with the Developer Edition if you are an MSDN subscriber). What this will allow you to do is script out all your schema, stored procs, views, triggers, etc. and source control these. This should also make the DBA more comfortable, since he will be working with a "Database" version of the tool rather than the "Developer" version (naming can go a long way with people). As you make changes from Visual Studio, you can manage script changes as you work, and source control them.
If your company has an MSDN license, they can use the Visual Studio Database edition. There's a video tutorial of it here.
I have no power of purchase, so I don't know what the cost breakdowns are. But it has the capability of source controlling all the parts of a DB schema, and includes creating change-scripts as well as auto-deploying straight from VS if you want (I wouldn't recommend that).
In general though, it's pretty solid as a database source control option.
Source control for databases can be quite contentious. It's different from using source control for something that produces a binary, because you can't lock the source: a stored proc is a row in a table, and there is no single table you can read to get a table definition.
Also, going from version to version is mostly a set of ALTER statements, while what you script out and add to source control are CREATEs. That mismatch makes source control harder to use in cases like this.
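For what it's worth, the text of programmable objects can be queried from SQL Server's catalog views, which is essentially what the script-out tools read. A minimal sketch:
-- Pull the source text of every stored procedure so it can be
-- written out to .sql files and committed to source control.
SELECT o.name, m.definition
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON o.object_id = m.object_id
WHERE o.type = 'P'; -- 'P' = procedure; views and functions use other type codes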
To me, this is more a procedural error.
Why was the change not done from a script? Forget where the script lives; why was there no reproducible and re-runnable script, perhaps linked to the change tracking number? If the database is reset (loaded from prod), how would the change have been re-applied to prepare for production? And other questions.
I believe in source control and we use it: but it has limits for database work.
First, you are approaching this incorrectly. If the DBA won't bite on source control and he is making errors that affect the system, the person you need to persuade is his boss.
If it helps, I'm from the old school too and I love having our database objects in source control. How nice to be able to revert one table without having to restore the whole database backup to a different location and then move the table. How much faster and simpler. How nice to be able to compare two different versions and see what changed. How nice to deploy a change and know exactly which database changes (say, for instance only twelve of the 23 possible ones) go with the part you are deploying and not some other unfinished project. How nice to know exactly which scripts were involved in a particular change you had to rollback. How nice that nobody is making on-the-fly changes on production since we now require all production changes to be from source control scripts. There are so many fewer errors and issues to worry about.
Yes, it was a change in how we did business, but we did it through a policy change from on high, so there was no argument, and the DBAs went through a couple of times and reverted any objects that differed from source control to the source control version. Now nobody will even think of doing a database change without it being in source control.
As the product manager for SQL Compare I've spoken to many 'traditional' DBAs who are uncomfortable with third party tools mainly because they have a system that works for them and sometimes changing can be difficult. There are many situations where I am convinced that they would benefit from our tools if only they gave them a chance. Frustrating.
One thing you might consider trying is Red Gate's upcoming tool, SQL Source Control. This is designed to build source control into SSMS, in other words it doesn't require DBAs to leave the comfort zone of their management environment. The bad news is that the tool hasn't been released yet. The good news is that we have an Early Access Program. Please visit the following link to find out more about the tool:
http://www.red-gate.com/Products/SQL_Source_Control/index.htm
You can't really put a large database under source control, so your DBA is right.
What you can do practically is put your schema under source control, and maybe a few smallish 'configuration' tables.
One way to source control a database is to store the schema and the data separately.
You can have all the table, procedure and function scripts as SQL files and add them to source control.
Export the database data as INSERT statements into SQL files, each with a fixed size. This is a cumbersome process, as it involves a lot of files that have to be tracked and controlled.
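To illustrate, each such data file is just a run of plain INSERT statements; the table and values here are made up:
INSERT INTO dbo.Country (CountryID, Name) VALUES (1, 'Denmark');
INSERT INTO dbo.Country (CountryID, Name) VALUES (2, 'Sweden');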
I am not sure whether VSS/SVN are able to read and keep a history of changes to dump files created by the database backup options.
It's not clear from your question whether you want to protect the data in the DB or the schemas in the DB. If the latter, you could identify all the important schemas and run a cron job that pulls the schema definitions from the DB and inserts them automatically into a source control system (perhaps even via triggers on the schemas? see the sketch at the end of this answer).
But this still just amounts to backing the system up more often. For what you envision, you would need source control integrated with the DB tools, and I don't know of any product that does that.
(and I shudder to think of VSS integrated into SQL Server Management Studio :-(( )
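On the trigger idea: SQL Server 2005+ does support database-level DDL triggers, so schema changes can at least be captured automatically for later export into source control. A minimal sketch, with a made-up log table:
-- Log every schema change (CREATE/ALTER/DROP) into a table that a
-- scheduled job could then export into source control.
CREATE TABLE dbo.SchemaChangeLog (
    LoggedAt  datetime NOT NULL DEFAULT GETDATE(),
    EventData xml      NOT NULL
);
GO
CREATE TRIGGER trg_LogSchemaChanges ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
    INSERT INTO dbo.SchemaChangeLog (EventData)
    VALUES (EVENTDATA());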
My answer to this same problem was to export all DB objects to text form (more than 136,000 of them) and then create the SourceSafe projects to hold them. Any new or changed objects in the DB now go into the SourceSafe structure, while unchanged ones are left alone.

LINQ to SQL and the DBML file - multiple database development

The way I develop may not be correct, any advice welcome.
At the moment I have a WPF application that uses a SQL 2008 database. I have a copy of the database on a laptop and on my home machine. My application is versioned using SVN, and I am obviously able to go from the work laptop to the home machine and update/commit as required to ensure I am using the latest code for the application.
However, the database is a different story: for any change I make, I create a backup and then transfer the backup to the other machine, etc. This way I get the data and the changes made on each system. To make this work, each database connection uses a different connection string, and I change a setting in my app to pick the right connection based on my location.
I have now started to use LINQ to SQL and DBML files in my application, and, finally getting to the question, I don't know how I can change the connection string it uses in code so the DBML will point at the correct database.
Also, is there a better way to transfer the database so I don't need to do the backups and restores? The only reason I have not versioned the schema is that I am not sure how that would handle my data, which is key to my development, i.e. various environment settings etc. are stored in the DB and brought through at runtime.
Your Statement:
I have now started to use LINQ to SQL and DBML files in my application, and, finally getting to the question, I don't know how I can change the connection string it uses in code so the DBML will point at the correct database.
Yes, it's possible.
MYDataContext mycontext = new MYDataContext("Your Connection String");
There is a constructor overload where you can change the connection string.
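Building on that, a minimal sketch of picking the connection string at runtime based on a location setting; the "AtHome" project setting and the "HomeDb"/"WorkDb" connection string names are made up:
using System.Configuration;

// Choose the connection string for the current location.
string csName = Properties.Settings.Default.AtHome ? "HomeDb" : "WorkDb";
string cs = ConfigurationManager.ConnectionStrings[csName].ConnectionString;

using (var db = new MYDataContext(cs))
{
    // Queries through this context now hit whichever database
    // matches the current location.
}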
This is such a common problem, and I have never found a minimal and clean solution to it. How to keep all the values and variables and databases and source files in sync between machines?
Well SVN works great for the source files.
For the database, I TRY to just use one DB if we can get away with it. All the devs point to one machine that hosts the DB; then we aren't wasting time with DB setup and merging. If that's not possible, then we usually just end up dumping the database when there is a change and distributing the .bak file around. You can try adding this file to SVN, and it works. You can even have the DB dump on a schedule so that SVN is always getting a new copy. But it's still too much work to keep restoring a DB over and over. Perhaps you could hook some scripting into SVN (we use Tortoise for Windows) and have a job that would do that automatically. That'd be nice.
For the config files - I do ASP.NET, so I have web.config, connectionstrings.config, etc. - I do one of two things: either I just manually copy the sections that need to be changed between machines and comment out the part that doesn't need to be used (clunky), or I've at times written ConfigurationSettings helper objects that inspect a config key and decide which setting to use based on the current machine name. E.g.:
Say my current machine is DEV1. The server is SERVER1. I'll have config keys with names like DEV1.connections.sqlserver and SERVER1.connections.sqlserver. In the code I'll use the helper method GetConfig("connections.sqlserver"). GetConfig figures out which key to use based on the current machine name.
Using this method, I don't have to keep remembering to monkey around with the dozen .configs every time I upload to the server or change things. But I DO have to make a duplicate key for every machine that will be running the application, which can get a bit much. For large teams, instead of using machine names, I use group names and have a config key that assigns machine names to a group - with the idea that every machine in the group will have that application set up in an identical fashion - same file paths etc.
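A minimal sketch of a GetConfig helper along those lines (the fallback to an unprefixed key is my own assumption):
using System;
using System.Configuration;

public static class ConfigHelper
{
    public static string GetConfig(string key)
    {
        // On a machine named DEV1, GetConfig("connections.sqlserver")
        // resolves to the "DEV1.connections.sqlserver" appSetting.
        string machineKey = Environment.MachineName + "." + key;
        return ConfigurationManager.AppSettings[machineKey]
            ?? ConfigurationManager.AppSettings[key]; // unprefixed fallback
    }
}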
Now on to your second question about LINQ: when you create a LINQ DBML, it will add a connection string to your config. You just have to make sure that you find this connection string and copy it into your active application's config. E.g.:
I have a solution that has 2 projects:
1 - website
2 - library
I put the DBML into the library project. If I go and look in the App.config of the library project, I'll see the connection string that LINQ wants to use. If I copy this connection string into the website's connectionstrings.config file, then when I reference the library and run the website, LINQ will be able to see the connection string it wants to use.
You can try SQL Server merge replication, using SQL Server Compact 3.5 as your laptop database and your work/home machine database as the master. However, you can only do this with SQL Server Standard Edition.
The other option is the Microsoft Sync Framework, here:
http://msdn.microsoft.com/en-us/sync/default.aspx
You could use Red Gate's SQL Compare and SQL Data Compare to script out changes to the database. You should be in the habit of scripting database changes anyway, as that is what you will need to do when it is time to move changes to prod. I would also make sure all database changes are in SVN; we never make any changes to the database without a script in source control.
I ended up just using multiple connection strings and then manually changing the connection on the dbml file whenever I moved locations. However I also have some code in place to programmatically change it based on the project setting for the location.
I haven't really got a good solution to the transferring of the databases and continue to use the backup and restore method.

Link tables issue for Compiled Access (mde) file

I have an old compiled Access application (MDE file). This application has tables linked to a network shared folder. I upgraded the main database using the Upsizing Wizard and everything went well. But when the application starts, it gives this error message:
Microsoft jet database engine cannot find the input table or query table
I have checked the shared MDB file; it has the exact table names and everything.
Then I called the guy who developed this application. He said I would have to rewrite the application to not use the Jet engine...
What does the Jet engine have to do with linking tables? Do I really have to rewrite the whole application to use ADO?
Many questions:
do you have the source MDB file? I can't recall if creating an MDE fails if the linked tables are not correctly connected. In any event, should you end up needing to alter the app, you're going to need the source MDB file.
the error message you report should give the name of the missing table.
do you know when the error is being reported? There could be any number of places where simply replacing tables linked to a Jet MDB back end with ODBC links to a server will not fix things. For instance, should there be any saved queries or SQL in code that bypasses linked tables and uses a direct connection string, that could produce an error like you see.
in regard to the developer's response that "I have to rewrite the application to not use Jet engine..." either you misunderstood what he said, or your developer is completely incompetent. Or both, I guess. Jet works very well with ODBC linked tables and if you're using an MDB front end, it is impossible to completely eliminate Jet, as the MDB is a Jet data file. The desire to eliminate Jet mostly comes from people who can't be bothered to learn how to use it properly.
It sounds to me as though you're getting an unhandled error but insufficient information on what's producing it. You need the actual MDB to troubleshoot it, as the code isn't there to display in the MDE, so there's no way to figure out what the actual source of the problem is. If your developer won't give you the MDB, then you need to check the contract under which the app was developed -- if you agreed to letting him control the source code, you're basically at his mercy and should fire whoever signed off on that. For what it's worth, when I deliver an MDE to a client, they also get the full MDB. They generally don't do anything with it, but should I no longer be available to do further development work, they've got the source code that they can give to whomever they want.
Last of all, I think it's very unlikely that even if you get your app working, a mere upsizing is going to offer much in terms of performance or stability. It is true that very often, 90% or more of an upsized app will work without alteration, but the other 10% can be very problematic. Often you need to move certain operations server-side to get the efficiency a server back end offers. This means your front end app needs to be re-architected to work better with your upsized back end. The degree to which this is true will differ from app to app, but it's very seldom that absolutely everything works without revision.
Did you change the Access database version?
It is possible that your MDB was linked with an old version of the Jet drivers, and these drivers cannot connect to the newer MDB version.

MS Access Application - Convert data storage from Access to SQL Server

Bear in mind here, I am not an Access guru. I am proficient with SQL Server and the .NET Framework. Here is my situation:
A very large MS Access 2007 application was built for my company by a contractor.
The application has been split into two tiers BY ACCESS: there is a front-end portion that holds all of the MS Access forms, and a back-end portion with the Access tables, queries, etc., which is stored on a computer on the network.
Well, of course, there is a need to convert the data storage portion to SQL Server 2005 while keeping all of the GUI forms which were built in MS Access. This is where I come in.
I have read a little, and have found that you can link the forms or maybe even the access tables to SQL Server tables, but I am still very unsure on what exactly can be done and how to do it.
Has anyone done this? Please comment on any capabilities, limitations, considerations about such an undertaking. Thanks!
Do not use the upsizing wizard from Access:
First, it won't work with SQL Server 2008.
Second, there is a much better tool for the job:
SSMA, the SQL Server Migration Assistant for Access which is provided for free by Microsoft.
It will do a lot for you:
move your data from Access to SQL Server
automatically link the tables back into Access
give you lots of information about potential issues due to differences in the two databases
keep track of the changes so you can keep the two synchronised over time until your migration is complete.
I wrote a blog entry about it recently.
You have a couple of options. The Upsizing Wizard does a decent(ish) job of moving structure and data from Access to SQL Server. You can then set up linked tables so your application 'should' work pretty much as it does now. Unfortunately, the SQL dialect used by Access is different from SQL Server's, so if there are any 'raw SQL' statements in the code they may need to be changed.
As you've linked the tables, though, all the other features of Access - the QBE, forms and so on - should work as expected. That's the simplest and probably best approach.
Another way of approaching the issue would be to migrate the data as above, and then, rather than using linked tables, make use of ADO from within Access. That approach is kind of familiar if you're used to other languages/dev environments, but it's the wrong approach. Access comes with loads of built-in stuff that makes working with data really easy; if you go and use ADO/SQL instead, you lose many of those benefits.
I suggest starting on a small, non-essential part of the application: migrate a few tables and see how it goes. Of course, you back everything up first.
Good luck
Others have suggested upsizing the Jet back end to SQL Server and linking via ODBC. In an ideal world, the app will work beautifully without needing to change anything.
In the real world, you'll find that some of your front-end objects that were engineered to be efficient and fast with a Jet back end don't actually work very well with a server database. Sometimes Jet guesses wrong and sends something really inefficient to the server. This is particularly the case with mass updates of records -- in order not to hog server resources (a good thing), Jet will send a single UPDATE statement for each record (which is a bad thing for your app, since it's much, much slower than one set-based UPDATE statement).
What you have to do is evaluate everything in your app after you've upsized it and where there are performance problems, move some of the logic to the server. This means you may create a few server-side views, or you may use passthrough queries (to hand off the whole SQL statement to SQL Server and not letting Jet worry about it), or you may need to create stored procedures on the server (especially for update operations).
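For instance, a minimal sketch of the stored-procedure case (table, column and procedure names are made up): the whole mass update becomes one set-based statement on the server instead of one UPDATE per record from Jet.
CREATE PROCEDURE dbo.MarkBatchShipped
    @BatchID int
AS
BEGIN
    -- One statement, one round-trip, instead of row-by-row updates.
    UPDATE dbo.Orders
    SET Status = 'Shipped'
    WHERE BatchID = @BatchID;
END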
But in general, it's actually quite safe to assume that most of it will work fine without change. It likely won't be as fast as the old Access/Jet app, but that's where you can use SQL Profiler to figure out what the holdup is and re-architect things to be more efficient with the SQL Server back end.
If the Access app was already efficiently designed (e.g., forms are never bound to full tables, but instead to recordsources with restrictive WHERE clauses returning only 1 or a few records), then it will likely work pretty well. On the other hand, if it uses a lot of the bad practices seen in the Access sample databases and templates, you could run into huge problems.
It's my opinion that every Access/Jet app should be designed from the beginning with the idea that someday it will be upsized to use a server back end. This means that the Access/Jet app will actually be quite efficient and speedy, but also that when you do upsize, it will cause a minimum of pain.
This is your lowest-cost option. You're going to want to set up an ODBC connection for your Access clients pointing to your SQL Server. You can then use the (I think) "Import" option to "link" a table to the SQL Server via the ODBC source. Migrate your data from the Access tables to SQL Server, and you have your data on SQL Server in a form you can manage and back up. Important, queries can then be written on SQL Server as views and presented to the Access db as linked tables as well.
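As a sketch of that last point (names made up), a server-side view that Access then links to just like a table:
-- The filtering stays on SQL Server; Access sees this as a table.
CREATE VIEW dbo.vwActiveCustomers
AS
    SELECT CustomerID, CustomerName
    FROM dbo.Customers
    WHERE IsActive = 1;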
Linked Access tables work fine, but I've only used them with ODBC and other databases (Firebird, MySQL, SQLite3). Information on primary and foreign keys didn't pass through. There were also problems with datatype interpretation: a date in MySQL is not the same thing as a date in Access VBA. I guess these problems aren't nearly as bad when using SQL Server.
Important Point: If you link the tables in Access to SQL Server, then EVERY table must have a Primary Key defined (Contractor? Access? Experience says that probably some tables don't have PKs). If a PK is not defined, then the Access forms will not be able to update and insert rows, rendering the tables effectively read-only.
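A minimal sketch of the fix (table and column names made up), run before linking the table into Access:
-- Without a primary key, the linked table is effectively
-- read-only from Access forms.
ALTER TABLE dbo.Customers
    ADD CONSTRAINT PK_Customers PRIMARY KEY (CustomerID);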
Take a look at this Access to SQL Server migration tool. It might be one of the few, if not the ONLY, true peer-to-peer or server-to-server migration tools running as a pure Web Application. It uses mostly ASP 3.0, XML, the File System Object, the Data Dictionary Object, ADO, ADO Extensions (ADOX), the Dictionary Scripting Objects and a few other neat Microsoft techniques and technologies. If you have the Source Access Table on one server and the destination SQL Server on another server or even the same server and you want to run this as a Web Internet solution this is the product for you. This example discusses the VPASP Shopping Cart, but it will work for ANY version of Access and for ANY version of SQL Server from SQL 2000 to SQL 2008.
I am finishing up development for a generic Database Upgrade Conversion process involving the automated conversion of Access Table, View and Index Structures in a VP-ASP Shopping Cart or any other Access system to their SQL Server 2005/2008 equivalents. It runs right from your server without the need for any outside assistance from external staff or consultants.
After creating a clone of your Access tables, indexes and views in SQL Server this data migration routine will selectively migrate all the data from your Access tables into your new SQL Server 2005/2008 tables without having to give out either your actual Access Database or the Table Contents or your passwords to anyone.
Here is the Reverse Engineering part of the process running against a system with almost 200 tables and almost 300 indexes and Views which is being done as a system acceptance test. Still a work in progress, but the core pieces are in place.
http://www.21stcenturyecommerce.com/SQLDDL/ViewDBTables.asp
I do the automated reverse engineering of the Access Table DDLs (Data Definition Language) and convert them into SQL equivalent DDL Statements, because table structures and even extra tables might be slightly different for every VPASP customer and for every version of VP-ASP out there.
I am finishing the actual data conversion routine, which will migrate the data from Access to SQL Server after these new SQL tables have been created, including any views or indexes. It is written entirely in ASP, with VBScript, the File System Object (FSO), the Dictionary object, XML, DHTML and JavaScript right now, and it runs pretty quickly, as you will see, against a SQL Server 2008 database, just for the sake of an example.
It takes perhaps 15-20 seconds to reverse engineer almost 500 different database objects. There might be a total of over 2,000 columns in this example, across the 170 tables and 270 indexes involved.
I have even come up with a way for you to run both VPASP systems in parallel using 2 different database connection files on the same server just to be sure that orders entered on the Access System and the SQL Server system produce the same results before actual cutover to production.
John (a/k/a The SQL Dude)
sales#designersyles.biz
(This is a VP-ASP Demo Site)
Here is a technique I've heard one developer speak on. This is if you really want something like a Client-Server application.
Create .mdb/.mde frontend files distributed to each user (You'll see why).
For every table they need to perform CRUD on, have a local copy in the file from #1.
The forms stay linked to the local tables.
Write VBA code to handle the CRUD from the local tables to the SQL Server database.
Reports can be based off of temp tables created from the SQL Server (you won't be able to create temp tables in an MDE file, I don't think).
Once you decide how you want to do this with a single form, it is not too difficult to apply the same technique to the rest. The nice thing about working with the form on a local table is you can keep a lot of the existing functionality as the existing application (Which is why they used and continue to use Access I hope). You just need to address getting data back and forth to the SQL Server.
You can continue to have linked tables, and then gradually phase them out with this technique as time and performance needs dictate.
Since each user has their own local file, they can work on their local copy of the data. Only the minimum required to do their task should ever be copied locally. Example: if they are updating a single record, the table would only have that record. When a user adds a new record, you would notice that the ID field for the record is Null, so an insert statement is needed.
I guess the local table acts like a dataset in .NET? I'm sure in some way this is an imperfect analogy.
