How to execute dynamic .NET code in a CLR stored procedure in SQL Server

Is there a way to change the code of a CLR procedure in SQL Server dynamically?
Suppose you have an assembly with your business logic deployed in MS SQL Server 2008 R2. This assembly (or assemblies) is in constant use (for example, some of its functions are called for each row of a table in multiple concurrent queries), so you cannot just drop the assembly. Is there a way to change my business logic dynamically, or some way to execute external, changeable code?
I've already explored these approaches, but none worked:
Reflection.Emit
Mono.Cecil
Loading external assembly in the assembly deployed in SQL Server
UPDATE:
The question was not about the release process: I want to be able to set some security rules dynamically via a GUI.
For instance, some users should be able to see only clients without their addresses, or only the transactions within the last year, and so on.
The rules are not complicated, but they may change almost every day, and we cannot put them in the code. The rest of the business logic is implemented in T-SQL. CLR was chosen for performance reasons (dynamic SQL is too slow).
There was another option: generating indexed views (with the rules in the WHERE clause), but it was not quick enough.
Some more details:
Suppose we have some code selecting a part of big table dbo.Transactions
select *
from dbo.Transactions
where ... --filters from your business logic
If we want to filter the result to show only allowed rows, we could generate an indexed view and join it with the result set like this:
select *
from dbo.Transactions t
inner join dbo.vw_Transactions v
on t.id = v.id
where ... --filters from your business logic
But if we check the execution plan, in most cases the query optimizer decides not to filter dbo.Transactions first and then join with vw_Transactions, but to join first and filter later (which is absolutely undesirable). Hints like FORCE ORDER don't help.
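For reference, this is the kind of hint attempt described above, as a sketch (the date filter is a hypothetical stand-in for the business-logic filters):
select *
from dbo.Transactions t
inner join dbo.vw_Transactions v
on t.id = v.id
where t.TransactionDate >= '20100101' -- hypothetical business-logic filter
option (force order) -- still joins before filtering in many plans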

I'm not a CLR assembly expert, but the obvious options are:
ALTER ASSEMBLY
Drop and re-create the assembly inside a transaction
Define a maintenance window and deploy it then
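For illustration, a minimal sketch of the first option; the assembly name and file path are hypothetical, and the statement will block while in-flight executions hold locks on the assembly:
-- Swap in new bits without dropping the assembly; the public method
-- signatures must stay the same for ALTER ASSEMBLY to succeed.
ALTER ASSEMBLY BusinessLogic
FROM 'C:\deploy\BusinessLogic.dll'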
gbn's point about release processes is a good one. If your procedures (and therefore your business operations) are really constantly running 24x7 then presumably you already have some form of system redundancy and established maintenance procedures for patching and upgrading applications? If so, just deploy your new code in your usual maintenance window.

There's a good library for dynamically evaluating arithmetic expressions (with parameters): Flee.
In my case I didn't have to execute any .NET code, just expressions like "Date > '20100101' Or Status = 2", so Flee covers my needs almost completely. The only issue is that its logical operators don't work with the SqlBoolean type (which is used in SQL expressions), but it's not a big deal to add this feature.
But in the general case, it seems to be impossible to execute dynamic .NET code inside the SQL Server host.

Related

Visual Studio Integration Services becomes unresponsive

I am developing ETL solutions in Visual Studio, and as soon as I select a view from a SQL Server database, Visual Studio freezes, and clicking anywhere results in the following notification: "Visual Studio is Busy".
It is very frustrating and I cannot finish creating my solution.
Any advice for making it faster and more responsive?
I. What happens when selecting a view as an OLE DB Source?
I created a SQL Server Profiler trace to track all T-SQL commands executed against the AdventureWorks2017 database while selecting the [HumanResources].[vEmployee] view as an OLE DB Source.
The following screenshot shows that the following command is executed twice:
set rowcount 1
select * from [HumanResources].[vEmployee]
This means that the OLE DB source limits the result set of the query to a single row and executes the SELECT * command over the selected view in order to extract the required metadata.
It is worth mentioning that SET ROWCOUNT 1 causes SQL Server to stop processing the query after the specified number of rows is returned, so only one row is requested rather than all the view's data.
II. Possible reasons for the issue
The issue you mentioned mostly happens due to the following reasons:
(1) Third-party extensions installed in Visual Studio
In that case, you should try to start Visual Studio in safe mode to prevent loading third-party extensions. You can use the following command
devenv.exe /safemode
(2) View querying a large amount of data
Visual Studio may freeze if the view returns a huge amount of data or contains inefficient JOINs. You may solve this with a simple workaround: alter the view's SQL to add a condition that returns only a few rows (for example, SELECT TOP 1). Then use this view while designing the package. Once done, remove the added condition.
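A sketch of that workaround against the vEmployee view from the example above (the view body is abbreviated; in practice you keep the view's original column list and only add the TOP clause):
-- Temporarily limit the view so the designer's metadata query returns instantly.
ALTER VIEW [HumanResources].[vEmployee]
AS
SELECT TOP 1 e.BusinessEntityID, p.FirstName, p.LastName -- ...rest of the original column list
FROM HumanResources.Employee e
JOIN Person.Person p ON p.BusinessEntityID = e.BusinessEntityID
-- Design the package, then restore the original view definition.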
(3) Bad database design
Moreover, it is highly important that your views are well designed and that the underlying tables have the appropriate indexes. Besides, check that you don't have any issues related to the database design. For example:
(a) Index fragmentation
Index fragmentation is a per-index percentage value that can be fetched from a SQL Server DMV. You can refer to the following article for more information:
How to identify and resolve SQL Server Index Fragmentation
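As a sketch, the fragmentation percentage can be read from sys.dm_db_index_physical_stats:
-- Average fragmentation per index in the current database.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') ips
JOIN sys.indexes i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC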
(b) Large binary column
Make sure that the view does not include large binary columns, since they can significantly affect query execution.
Best Practices for tables with VARBINARY(MAX)
How Your SQL Server Data Type Choices Can Affect Database Performance
(4) Hardware issues
Even if I do not think this should be the cause in this case, try to check the available resources on your machine. For example:
(a) Drive out of storage
If using Windows, check the C: drive storage (the default system databases directory) and the drive where the databases are stored, and make sure they are not full.
(b) Server is out of memory
Make sure that your machine is not running out of memory. You can simply use the Task Manager to identify the amount of available memory.
(5) Optimizing Visual Studio performance
The last thing to mention is that there are several recommendations to improve the performance of Visual Studio. Feel free to check them:
Optimize Visual Studio performance
This can sometimes happen when you try to validate a select statement against a huge table. Depending on the RDBMS, some data sources do not do a good job of returning just metadata to validate against, and instead run SELECT * FROM table, so validation can take what seems like forever.
To check whether this is actually happening, look at the running queries on the RDBMS when you load up the package.
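One way to check, as a sketch: query the execution DMVs while the package is loading to see what the data source actually runs:
-- Currently executing statements, excluding this session.
SELECT r.session_id, r.status, t.text
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.session_id <> @@SPID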
Otherwise, try copying the package, switching to the XML view, and rebuilding it until you find the issue. Remove the problem from the XML file, save, and redraw in the designer.

syntax difference between .. and . in SQL Server in sysobjects

I've seen something like this in SQL Server (running this query against the master database):
select * from tempdb..sysobjects
which seems to return exactly the same results as:
select * from tempdb.sys.objects
I've seen that the double dot can be used as a way to omit the schema name, but I don't see anything omitted here; going by that logic, tempdb..objects would be valid (which it is not).
tempdb..objects will be interpreted as tempdb.dbo.objects.
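You can see the name resolution at work by comparing the forms (the last one fails because there is no dbo.objects):
select * from tempdb..sysobjects -- schema omitted; resolves to the compatibility view
select * from tempdb.sys.objects -- explicit schema; the current catalog view
-- select * from tempdb..objects -- fails: resolved as tempdb.dbo.objects, which does not exist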
sysobjects and sys.objects are two different system views:
sys.objects
Contains a row for each user-defined, schema-scoped object that is created within a database, including natively compiled scalar user-defined functions.
sys.sysobjects
Contains one row for each object that is created within a database, such as a constraint, default, log, rule, and stored procedure.
Note: This SQL Server 2000 system table is included as a view for backward compatibility. We recommend that you use the current SQL Server system views instead. To find the equivalent system view or views, see Mapping System Tables to System Views (Transact-SQL). This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use this feature.

Compatible DDL (CREATE TABLE) across different SQL databases?

I'm working on a desktop application that must support (currently) MS Access and SQL Server back ends. The application is under constant development and changes are frequently being made to the database, mostly the addition of tables and views to support new features (but also some DROPs and ALTER TABLEs to add new columns).
I have a system that compiles the DDL into the executable, checks the database to see if the executable has any new DDL that has to be executed, and executes it. This works fine for a single database.
My immediate problem is that SQL Server and Access support wildly different names for data types, so a CREATE TABLE statement that executes against Access will not execute against SQL Server (or worse, will execute but create a table with different data types).
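For example, the same logical table needs different type names on each engine (a rough illustration; the Jet DDL variant is shown as a comment):
-- SQL Server flavor
CREATE TABLE Customers (Name VARCHAR(50), Visits INT, Balance MONEY)
-- MS Access (Jet DDL) flavor of the same table:
-- CREATE TABLE Customers (Name TEXT(50), Visits LONG, Balance CURRENCY)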
Is there a method that can be used to create DDL (especially CREATE TABLE commands) that can be executed through ADO against both of these databases without having to craft separate DDL for each provider?
Since you are already using ADO, you should look into Microsoft ADOX
This allows you to manipulate structures in a data source using an ADO object model that is independent of the underlying data source type, i.e. without resorting to explicit DDL.
Support for ADOX is not guaranteed by any given ADO Provider, and the level of ADOX support may vary even when it is available. But for MS Access and MS SQL Server I think you will find all the capability you require (and quite possibly more!)
This can be done using DBX in Delphi.
The following link shows sample code for how this can be done:
http://cc.embarcadero.com/item/26210
I had the same problem.
I resolved it by applying a C preprocessor to the SQL before executing it.
The preprocessor includes macros to handle the different databases.
Stefano
Did you check
http://db.apache.org/ddlutils/
or
http://publib.boulder.ibm.com/infocenter/wtelecom/v6r1/index.jsp?topic=%2Fcom.ibm.twss.plan.doc%2Fdb_scripts.html

how to handle db schema updates when using schemabinding and updating often

I'm using an MS SQL Server db and use plenty of views (for use with an O/R mapper). A little annoyance is that I'd like to
use schema binding
update with scripts (to deploy on servers and put in a source control system)
but I run into the issue that whenever I want to, e.g., add a column to a table, I have to first drop all views that reference that table, update the table, and then recreate the views, even if the views wouldn't otherwise need to be updated. This makes my update scripts a lot longer, and also, when looking at the diffs in the source control system, it is harder to see what the actual relevant change was.
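For instance (table and view names hypothetical), the failure looks like this:
CREATE VIEW dbo.vw_Customers WITH SCHEMABINDING AS
SELECT Id, Name FROM dbo.Customers
GO
-- Fails: cannot ALTER 'dbo.Customers' because it is being referenced by object 'vw_Customers'.
ALTER TABLE dbo.Customers ALTER COLUMN Name nvarchar(200)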
Is there a better way to handle this?
I still need to be able to use simple and source-controllable SQL updates. A code generator like the one included in SQL Server Management Studio would be helpful, but I had issues with it: it tends to create code that does not specify the names for some indices or (default) constraints. I want identical DBs when I run my scripts on different systems, including the names of all constraints etc., so that I don't have to jump through hoops when updating those constraints later.
So perhaps a smarter SQL code generator would be a solution?
My workflow now is:
type the alter table statement in query editor
check if I get an error message like "cannot ALTER 'XXX' because it is being referenced by object 'YYY'."
use SQL Server Management Studio to script the CREATE code for the referenced object
insert a drop statement before the alter statement and a create statement after
check if the drop statement causes an error, and repeat
This annoys me, but perhaps I simply have to live with it if I want to continue using schemabinding and scripted updates...
You can at least eliminate the "check if I get an error" step by querying a few dynamic management functions and system views to find your dependencies. This article gives a decent explanation of how to do that. Beyond that, I think you're right: you can't have your cake and eat it too with schema binding.
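As a sketch (the table name is hypothetical), the dependency check can be scripted like this:
-- Which objects (including schema-bound views) reference the table to be altered?
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities('dbo.Customers', 'OBJECT')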
Also keep in mind that dropping/creating views will cause you to lose any permissions that were granted on those objects, so those permissions should be included in your scripts as well.

Stored Procedures MSSQL2005

If you have a lot of Stored Procedures and you change the name of a column of a table, is there a way to check which Stored Procedures won't work any longer?
Update: I've read some of the answers, and it's clear to me that there is no easy way to do this. Would it be easier to move away from stored procedures?
I'm a big fan of SysComments for this:
SELECT DISTINCT Object_Name(ID)
FROM SysComments
WHERE text LIKE '%Table%'
AND text LIKE '%Column%'
There's a book-style answer to this, and a real-world answer.
First, for the book answer, you can use sp_depends to see which other stored procs reference the table (not the individual column), and then examine those to see whether they reference the column:
http://msdn.microsoft.com/en-us/library/ms189487.aspx
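For example (the table name is hypothetical):
-- List the objects that depend on the table, then inspect them for the renamed column.
EXEC sp_depends @objname = N'dbo.Orders'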
The real-world answer, though, is that it doesn't work in a lot of cases:
Dynamic SQL strings: if you're building strings dynamically, either in a stored proc or in your application code, and then executing that string, SQL Server has no way of knowing what your code is doing. You may have the column name hard-coded in your code, and that'll break.
Embedded T-SQL code: if you've got code in your application (not in SQL Server), then nothing on the SQL Server side will detect it.
Another option is to use SQL Server Profiler to capture a trace of all activity on the server, then search through the captured queries for the field name you want. It's not a good idea on a production server, because the profiler incurs some overhead, but it does work - most of the time. Where it will break is if your application does a "SELECT *" and then expects a specific field name to come back as part of that result set.
You're probably beginning to get the picture that there's no simple, straightforward way to do this.
While this will take the most work, the best way to ensure that everything works is to write integration tests.
Integration tests are just like unit tests, except in this case they would integrate with the database. It would take some effort, but you could easily write tests that exercise each stored procedure to ensure it executes w/o error.
In the simplest case it would just execute the sp and make sure there is no error, without being concerned about the actual results. If your tests just executed sp's without checking results, you could write a lot of this generically.
To do this you would need a database to execute against. While you could setup the database and deploy your stored procs manually, the best way would be to use continuous integration to automatically get the latest code (database DDL, stored procs, tests) from your source control system, build your database, and execute your tests. This would happen every time you committed changes to source control.
Yes it seems like a lot of work. It's a lot of work, but the payoff is also big. The ability to ensure that your changes don't break anything allows you to move your product forward faster with a better quality.
Take a look at NUnit and NDbUnit
I'm sure there are more elegant ways to address this, but if the database isn't too complex, here's a quick and dirty way:
Select all the sprocs and script to a query window.
Search for the old column name.
If you are only interested in finding the column usage in stored procedures, probably the best way is to do a brute-force search for the column name in the definition column of the sys.sql_modules view, which stores the definitions of stored procedures and functions.
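A sketch of that brute-force search (the column name is a placeholder):
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id) AS object_name
FROM sys.sql_modules m
WHERE m.definition LIKE '%MyRenamedColumn%'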
