redeploying a .NET assembly that contains CLR stored procedures - sql-server

I have written a CLR stored procedure that is in an assembly.
I have a build system that can auto-build and deploy .NET applications from our source control repository.
I want the two things to work together so I can redeploy the assembly that hosts the CLR stored proc.
However it looks like, unlike with IIS, simply replacing the binaries doesn't work: it seems you have to run DROP ASSEMBLY on the database, and in order to do that you need to drop all the objects that reference that assembly.
That seems reasonable in one way -- i.e., from a database integrity point of view -- and unreasonable in another, given the late-bound approach .NET takes in general to resolving dependencies at runtime.
So, is it possible to replace the binary, give SQL Server a kick, and have it figure out that the new assembly satisfies all the requirements (i.e., has the right public namespaces, types, methods, etc. to satisfy the sprocs that are bound to it)?

The CLR assemblies are stored in the database, not on disk, so you cannot simply replace some binary dll. To refresh them you use ALTER ASSEMBLY [assemblyname] FROM 'disklocation'.
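For example (the assembly name and path here are illustrative; note that the path is resolved on the SQL Server machine, not on the client):

ALTER ASSEMBLY [MyClrProcs]
FROM 'C:\Deploy\MyClrProcs.dll';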

The short answer is 'No, it will not work this way'. As Remus pointed out, SQL Server stores assemblies inside your database, not somewhere in the file system, so there is no location monitored by the server where you could place updated binaries.
Uploading updated assemblies to the database should be an integral part of your deployment process, and the only way of doing it is to perform the following actions explicitly:
1. Drop all objects that are defined in the assembly (i.e. all external SPs/UDFs/triggers/types)
2. Drop the assembly (or assemblies)
3. Create the assembly (or assemblies) - with either "FROM 'disklocation'" (as advised by Remus, but note that the path must be local to the SQL Server machine) or "FROM 'binary content'"
4. Create all the [external] objects
Step 1 can actually be implemented in T-SQL in a generic way (so you don't have to list the objects explicitly); a sketch follows below. But there is no such way for step 4 short of a custom tool (which would use reflection to discover the assembly's content and generate the appropriate T-SQL for creating all the objects).
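A minimal sketch of that generic step 1, assuming an assembly named MyClrProcs (CLR user-defined types live in sys.assembly_types and would need separate DROP TYPE handling):

DECLARE @sql NVARCHAR(MAX) = N'';
SELECT @sql = @sql + N'DROP '
    + CASE o.type
        WHEN 'PC' THEN N'PROCEDURE '   -- CLR stored procedure
        WHEN 'FS' THEN N'FUNCTION '    -- CLR scalar function
        WHEN 'FT' THEN N'FUNCTION '    -- CLR table-valued function
        WHEN 'AF' THEN N'AGGREGATE '   -- CLR aggregate
        WHEN 'TA' THEN N'TRIGGER '     -- CLR DML trigger
      END
    + QUOTENAME(SCHEMA_NAME(o.schema_id)) + N'.' + QUOTENAME(o.name) + N'; '
FROM sys.assembly_modules AS am
JOIN sys.objects AS o ON o.object_id = am.object_id
JOIN sys.assemblies AS a ON a.assembly_id = am.assembly_id
WHERE a.name = N'MyClrProcs';

EXEC sys.sp_executesql @sql;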

As Remus's answer states, you can use ALTER ASSEMBLY ... to update an assembly.
From the MSDN page ALTER ASSEMBLY (Transact-SQL) for SQL Server 2008 R2 [emphasis mine]:
If the FROM clause is specified, ALTER ASSEMBLY updates the assembly with respect to the latest copies of the modules provided. Because there might be CLR functions, stored procedures, triggers, data types, and user-defined aggregate functions in the instance of SQL Server that are already defined against the assembly, the ALTER ASSEMBLY statement rebinds them to the latest implementation of the assembly. To accomplish this rebinding, the methods that map to CLR functions, stored procedures, and triggers must still exist in the modified assembly with the same signatures. The classes that implement CLR user-defined types and user-defined aggregate functions must still satisfy the requirements for being a user-defined type or aggregate.
So, if the functions, stored procedures, etc. that reference the assembly haven't changed, you can simply update the assembly. Also, doing so doesn't disrupt currently running sessions; from the same MSDN page as mentioned above:
ALTER ASSEMBLY does not disrupt currently running sessions that are running code in the assembly being modified. Current sessions complete execution by using the unaltered bits of the assembly.
You could also fairly easily re-deploy an assembly and its dependent objects automatically, but to do so in the general case you would need to drop and re-create it. If you do so, you may find it easier to deploy the assembly by 'embedding' it in a script: first convert the bytes of the assembly file to hexadecimal digits, which can then be included in the relevant CREATE ASSEMBLY statement.
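The embedded form looks like this (the assembly name is illustrative and the 0x literal is truncated here; a build step can emit the full hex dump from the file bytes):

CREATE ASSEMBLY [MyClrProcs]
FROM 0x4D5A9000 /* ...the rest of the dll's bytes... */
WITH PERMISSION_SET = SAFE;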

I agree with what AlexS suggested except the last sentence.
First off, reflection will not truly work as the datatypes used in the CLR functions do not necessarily determine the SQL datatypes. For example, you could have SqlString on the CLR side but use NVARCHAR(50) or NVARCHAR(MAX) instead of NVARCHAR(4000) on the SQL side.
However, it is still possible to automate this. You should be using your source control repository to store the stored proc and function definitions that point to the CLR code, just as you would any stored proc or function. Then you can grab all of those definitions and run all of the CREATE PROCEDURE and CREATE FUNCTION statements as step 4.
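Those definitions are ordinary T-SQL wrappers, along these lines (names illustrative; note that the SQL-side types are chosen here, not inferred from the CLR signature):

CREATE PROCEDURE dbo.MyClrProc
    @Name NVARCHAR(50)  -- could equally be NVARCHAR(MAX); SqlString alone doesn't decide
AS EXTERNAL NAME [MyClrProcs].[StoredProcedures].[MyClrProc];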
Also, Steps 1 and 2 can be a single SQL script.
Essentially, this entire process can be automated :).

Related

Is there a way to run Visual FoxPro code from a SQL Server CLR or CMD?

Apologies for the strange question here.
Long story short: we have a Visual FoxPro .prg that executes a query using SQLEXEC(), then uses AFIELDS() to get the field name, field type, field width (length), and decimal places of the result-set columns and store them in a SQL table (even if the query selects from a temp table or dynamic SQL).
What I'm looking for is the ability to run this .prg from a SQL Server CLR, or even via xp_cmdshell, without having to manually open a Visual FoxPro application.
I've already attempted to recreate this functionality in a CLR alone but kept running into issues with nested INSERT INTO statements when using it as a CLR Stored Procedure.
Additionally, I've attempted to create it as a CLR SQL function, but it would attempt to execute our custom hooks and throw "Procedure does not exist" errors, even when surrounded with IF Object_ID() IS NOT NULL.
Also, as some queries may be results of a temp table, I am unable to use OPENQUERY or OPENROWSET.
The end goal here is to move away from our Visual FoxPro client entirely, but the ability to get the column metadata of anything thrown at it is holding us back.
(Not really an answer; writing it here because it would be a mess as a comment.)
If I understood you right, the table in question is a VFP table and has some stored procedure functions (maybe for insert/update/delete triggers, or validation checks).
If that is the case, I am afraid you are out of luck. Your best bet might be having a VFP COM object in between, or a VFP SP using SetResultSet() - though that might fail if there are unsupported VFP commands.
On the other hand, you say it starts with an SQLEXEC(), so it is likely not a VFP table, and then you could likely create a CLR function. Would it be possible for you to share more details, along with the VFP and C# code?
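As an aside: if the core requirement is just capturing the column metadata of an arbitrary query, SQL Server 2012 and later can do this in pure T-SQL via sys.dm_exec_describe_first_result_set, which might remove the need for the VFP step entirely. A sketch, assuming that version is available (note it can also struggle with temp tables created in a different scope):

SELECT name, system_type_name, max_length, [precision], scale
FROM sys.dm_exec_describe_first_result_set(
         N'SELECT * FROM dbo.SomeTable',  -- dbo.SomeTable is illustrative
         NULL,  -- no parameter declaration
         0);    -- no browse-mode information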

CLR Stored Procedure v regular SQL Stored Procedure

I have been writing a CLR stored procedure that moves data from one database to another. I went with the CLR stored procedure because I like the .NET Framework's ability to connect to remote servers better than I like linked servers or OPENROWSET, but I now find that my class is mostly embedded SQL strings. I am considering using the CLR stored procedure just to retrieve the data onto the local SQL Server, and then using a regular SQL stored procedure for the actual inserts and updates.
I'm not worried about pre-compilation of the procedure or performance, and I do like that the CLR procedure allows me to see all of the logic in one place, read from top to bottom.
Are there any reasons I should consider moving to a TSQL solution instead of CLR?
Thanks.
There are multiple reasons why you would stick to a regular stored procedure. I'll try to give you an overview of the ones that I know of:
Performance.
Memory issues. SQL Server manages memory within its own max memory settings; CLR allocations fall outside that bound. This could compromise other applications (and the OS) running on the server.
Updatability. You can update a stored procedure with a simple script; CLR assemblies are more complicated to update (see the sketch after this list).
Security. CLR assemblies often require more security configuration (e.g. permission sets) than regular T-SQL.
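To illustrate the updatability point: the T-SQL change ships entirely in a script, while the CLR change also needs a rebuilt dll (all names here are illustrative):

-- T-SQL: the whole change is the script itself
ALTER PROCEDURE dbo.MoveData
AS
BEGIN
    SET NOCOUNT ON;
    -- updated logic goes here
END;
GO
-- CLR: the script needs a freshly compiled dll (or its hex dump) as well
ALTER ASSEMBLY [MyClrProcs] FROM 'C:\Deploy\MyClrProcs.dll';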
As a general rule you only want to use CLR for:
interaction with the OS, such as reading from a file or dropping a message in MSMQ
performing complex calculations, especially when you already have the code written in a .NET language to do the calculation.

Common function / stored procedures for all databases

We have a database server and it has about 10 databases.
I would like to create some functions / stored procedures which can be used in all databases.
For example, we can use sp_executesql in any database.
We have some requirements like that (getting the current academic year, financial year, etc.).
Is it doable?
As others have suggested, you could put objects into the master database, but Microsoft explicitly recommends that you should not do that. I find that solution to be rather risky anyway, because the master database is 'owned' by the system, not by you, so there are no guarantees that it will continue to behave in the same way in the future.
Instead, I would consider this to be primarily a deployment issue. There are (at least) two strategies you could use:
Deploy the objects to every database
Deploy them to one 'reference' database that is only used for shared objects and create synonyms in the other databases
The second option is perhaps the better one, because if your functions use tables (e.g. a calendar table to get the academic year, which is much easier than calculating it), then with the first option you would have to create the same tables in every database too. By using synonyms, you only have to maintain one set of tables.
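A minimal sketch of the synonym approach (SharedDb and the September-start academic year are assumptions for illustration):

-- In the shared reference database (SharedDb)
CREATE FUNCTION dbo.GetAcademicYear (@AsOf DATE)
RETURNS INT
AS
BEGIN
    -- assuming the academic year starts in September
    RETURN CASE WHEN MONTH(@AsOf) >= 9 THEN YEAR(@AsOf) ELSE YEAR(@AsOf) - 1 END;
END;
GO
-- In each of the other databases
CREATE SYNONYM dbo.GetAcademicYear FOR SharedDb.dbo.GetAcademicYear;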
For the actual deployment, it's straightforward to use scripting to manage the objects, because you just need a list of databases to connect to and each DDL script to run against them. You can do that using batch files and SQLCMD (perhaps with SQLCMD variables in your .sql scripts), or drive it from PowerShell or any other language that you prefer.
Depending upon what the SP actually does, you can create the procedure in master, name it with an sp_ prefix, and mark it as a system procedure:
http://weblogs.sqlteam.com/mladenp/archive/2007/01/18/58287.aspx
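The pattern is roughly this (the procedure body is illustrative, assuming a September academic year start; see the warning about the undocumented flag in the next answer):

USE master;
GO
CREATE PROCEDURE dbo.sp_GetAcademicYear
AS
    SELECT CASE WHEN MONTH(GETDATE()) >= 9
                THEN YEAR(GETDATE())
                ELSE YEAR(GETDATE()) - 1 END AS AcademicYear;
GO
EXEC sys.sp_MS_marksystemobject N'dbo.sp_GetAcademicYear';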
A couple of options:
You can use a system stored procedure as Cade says. I've done this in the past and it works ok. One warning on this is that the sp_MS_marksystemobject procedure is undocumented, which may mean that it could vanish or change without warning in future SQL versions. Thinking back I think there were other problems using this approach with functions though.
Another approach is to use standardized procedures and functions, and roll them out across your databases using sp_MSforeachdb to run code against every database. If you need to run against only your 10 databases, you can copy the code in that procedure and modify it to check that a database matches your schema before running the code (or write your own version that does a similar thing).
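For example (sp_MSforeachdb is itself undocumented; the ? placeholder is replaced with each database name, and the filter shown is a crude illustration):

EXEC sp_MSforeachdb N'
IF DB_ID(N''?'') > 4  -- skip the four system databases
BEGIN
    -- run the shared DDL in that database
    EXEC [?].sys.sp_executesql N''PRINT DB_NAME();'';
END';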

How to execute dynamic .net code in clr stored procedure in SQL Server

Is there a way to change the code of a CLR procedure in SQL Server dynamically?
Suppose you have an assembly with your business logic deployed in MS SQL Server 2008 R2. This assembly (or assemblies) is used constantly (for example, some of its functions are called for each row of a table in multiple concurrent queries), so you cannot just drop the assembly. Is there a way to change my business logic dynamically, or some way to execute external, changeable code?
I've already explored these approaches, but none worked:
Reflection.Emit
Mono.Cecil
Loading external assembly in the assembly deployed in SQL Server
UPDATE:
The question was not about the release process: I want to be able to set some security rules dynamically via a GUI.
For instance, some users should be able to see only clients without their addresses, or only the transactions within the last year, and so on.
The rules are not complicated, but they may change almost every day, and we cannot put them in the code. The rest of the business logic is implemented in T-SQL. CLR was chosen because of a performance issue (dynamic SQL is too slow).
There was another option: generating indexed views (with the rules in the WHERE clause), but it was not quick enough.
Some more details:
Suppose we have some code selecting part of a big table, dbo.Transactions:
select *
from dbo.Transactions
where ... --filters from your business logic
If we want to filter the result to show only the allowed rows, we could generate an indexed view and join it with the result set like this:
select *
from dbo.Transactions t
inner join dbo.vw_Transactions v
on t.id = v.id
where ... --filters from your business logic
But if we check the execution plan, in most cases the optimizer decides not to filter dbo.Transactions first and then join with vw_Transactions, but to join first and filter later (which is absolutely not desirable). Hints like FORCE ORDER don't help.
I'm not a CLR assembly expert, but the obvious options are:
ALTER ASSEMBLY
Drop and re-create the assembly inside a transaction (see the sketch after this list)
Define a maintenance window and deploy it then
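A sketch of the second option, with illustrative names throughout; because DDL is transactional in SQL Server, concurrent callers block on the schema locks during the swap rather than ever seeing the assembly missing:

BEGIN TRANSACTION;
    DROP FUNCTION dbo.CheckRule;
    DROP ASSEMBLY [BusinessRules];
    CREATE ASSEMBLY [BusinessRules] FROM 'C:\Deploy\BusinessRules.dll';
    -- re-create the wrapper against the new bits (CREATE FUNCTION must be
    -- the only statement in its batch, hence the sp_executesql wrapper)
    EXEC sys.sp_executesql N'
        CREATE FUNCTION dbo.CheckRule (@TransactionId INT)
        RETURNS BIT
        AS EXTERNAL NAME [BusinessRules].[Rules].[CheckRule];';
COMMIT;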
gbn's point about release processes is a good one. If your procedures (and therefore your business operations) are really constantly running 24x7 then presumably you already have some form of system redundancy and established maintenance procedures for patching and upgrading applications? If so, just deploy your new code in your usual maintenance window.
There's a good library for dynamically evaluating arithmetic expressions (with parameters) - Flee.
In my case I didn't have to execute any .NET code - just expressions like "Date > '20100101' Or Status = 2" - so Flee covers my needs almost completely. The only issue is that its logical operators don't work with the SqlBoolean type (which is used in SQL expressions), but it's not a big deal to add this feature.
But in the general case, it seems to be impossible to execute dynamic .NET code inside the SQL Server host.

Compatible DDL (CREATE TABLE) across different SQL databases?

I'm working on a desktop application that must support (currently) MS Access and SQL Server back ends. The application is under constant development and changes are frequently being made to the database, mostly the addition of tables and views to support new features (but also some DROPs and ALTER TABLEs to add new columns).
I have a system that compiles the DDL into the executable, checks the database to see if the executable has any new DDL that has to be executed, and executes it. This works fine for a single database.
My immediate problem is that SQL Server and Access support wildly different names for data types so a CREATE TABLE statement that executes against Access will not execute against SQL Server (or worse, will execute but create a table with different datatypes).
Is there a method that can be used to create DDL (especially CREATE TABLE commands) that can be executed through ADO against both of these databases without having to craft separate DDL for each provider?
Since you are already using ADO, you should look into Microsoft ADOX.
This allows you to manipulate structures in a data source using an ADO object model that is independent of the underlying data source type, i.e. without resorting to explicit DDL.
Support for ADOX is not guaranteed by any given ADO Provider, and the level of ADOX support may vary even when it is available. But for MS Access and MS SQL Server I think you will find all the capability you require (and quite possibly more!)
This can be done using DBX in Delphi.
The following is a link to sample code showing how this can be done:
http://cc.embarcadero.com/item/26210
I had the same problem.
I resolved it by applying a C preprocessor to the SQL before executing it. The preprocessor uses macros to handle the differences between the DBs; a sketch follows below.
Stefano
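For illustration, the macro layer of such a preprocessor might look like this (all macro and table names are hypothetical; run the file through cpp with -DMSSQL or -DACCESS before executing the output):

#ifdef MSSQL
#define MEMO_TYPE NVARCHAR(MAX)
#define AUTOINC   INT IDENTITY(1,1)
#else /* Access / Jet */
#define MEMO_TYPE MEMO
#define AUTOINC   COUNTER
#endif

CREATE TABLE Customers (
    Id    AUTOINC PRIMARY KEY,
    Notes MEMO_TYPE
);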
Did you check http://db.apache.org/ddlutils/ or http://publib.boulder.ibm.com/infocenter/wtelecom/v6r1/index.jsp?topic=%2Fcom.ibm.twss.plan.doc%2Fdb_scripts.html ?
