How do I add a new message type to an existing contract?
CREATE CONTRACT and DROP CONTRACT commands exist, but there is no ALTER CONTRACT command.
ALTER CONTRACT is not missing because of an oversight; it is missing by design. This is exactly like asking to change a COM interface: it is not supported, because the interface is a contract. If one party changes the interface, it breaks the contract and causes the other party to crash when it calls the wrong v-table entry. Exactly the same reasoning was applied in the Service Broker design: one party cannot change (ALTER) the contract and start sending new messages the other party does not expect, because that would break the other party (an error in its message processing procedure). Contracts are immutable.
If you say 'but I can ALTER the other party too' then you are not considering real use cases, where the other party is remote, often under different administrative control, and not willing to change its contract(s). Even when a change is possible, deploying a distributed change that requires many sides to roll out new bits is just asking for (unnecessary!) downtime.
Changes in the communication pattern must be deployed as new contracts. Services can implement multiple contracts, and adding a new contract to a service (via ALTER SERVICE) is supported. Changes in distributed apps are rolled out by deploying new contract(s) while still supporting the old one(s), then retiring the old contract(s) (i.e. overlap).
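A hedged T-SQL sketch of that rollout, using hypothetical message type, contract, and service names (the reply message type is assumed to already exist):

-- New message type and a new contract that includes it
CREATE MESSAGE TYPE [//Acme/RequestV2]
    VALIDATION = WELL_FORMED_XML;

CREATE CONTRACT [//Acme/OrderContractV2]
(
    [//Acme/RequestV2] SENT BY INITIATOR,
    [//Acme/Reply]     SENT BY TARGET   -- assumed to exist already
);

-- The existing target service starts accepting the new contract alongside the old one
ALTER SERVICE [//Acme/OrderService]
(
    ADD CONTRACT [//Acme/OrderContractV2]
);

-- Once every initiator has moved to the new contract, retire the old one
-- ALTER SERVICE [//Acme/OrderService] ( DROP CONTRACT [//Acme/OrderContractV1] );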
From here:
A poor workaround to this would be to version all changes as new contracts, but that would require an additional service and queue, and existing conversations would not be able to benefit from the new message type.
I highly recommend support for an ALTER CONTRACT command, and also adding support for SSDT to issue an ALTER command instead of drop/create.
I highly recommend reconsideration of this request. Or, at a minimum: add a check and error message to be raised from SSDT whenever any conversation exists that uses that service and/or contract before dropping it, similar to the way SSDT checks for existing data in a table before dropping said table. This would at least help raise awareness of this side-effect and would have prevented deployment headaches.
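A hedged sketch of the kind of check being asked for, using the Service Broker catalog views (the contract name below is hypothetical):

-- Any rows returned here are live conversations that would be stranded by dropping the contract
SELECT ce.conversation_handle, ce.state_desc,
       s.name  AS service_name,
       sc.name AS contract_name
FROM sys.conversation_endpoints AS ce
JOIN sys.services AS s
    ON s.service_id = ce.service_id
JOIN sys.service_contracts AS sc
    ON sc.service_contract_id = ce.service_contract_id
WHERE sc.name = N'//Acme/OrderContractV1';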
I want to remove entire tables from Salesforce data that I am no longer using. I also want to prevent Salesforce from writing to these tables moving forward. Where do I start?
If they're your own custom tables, you should be able to delete them. Hit the delete action next to the table name in Object Manager and see what explodes?
The rule of thumb is "if you can create it straight in production without running a deployment process - you should be able to delete it". So if by "prevent Salesforce from writing to these" you mean flows, workflows, process builder... - they'll stop your delete, you'll have to get rid of them first before deleting the table itself - but should be relatively easy if repetitive.
It gets more interesting if you have code (triggers, classes, aura/lwc). You'll need to delete them using a special deployment command; changesets are useless. You can try to craft your own so-called destructiveChanges.xml (it's documented in the Metadata API guide) and try to deploy it even without any developer tools (for example with workbench.developerforce.com), but it might be a better idea to poke a developer to do it. Identify dependencies and edit the code / delete classes as needed.
If it's custom tables but not your own (coming from a managed package), maybe the package has some config options; worst case, uninstall it?
I'm needing to add serial numbers to most of the entities in my application because I'm going to be running a Lucene search index side-by-side.
Rather than having to run an ongoing polling process, or manually running my indexer from my application, I'm thinking of the following:
Add a Created column with a default value of GETUTCDATE().
Add a Modified column with a default value of GETUTCDATE().
Add an ON UPDATE trigger to the table that updates Modified to GETUTCDATE() (can this happen as the UPDATE is executed? i.e. it adds SET [Modified] = GETUTCDATE() to the SQL query instead of updating it individually afterwards?)
The ON UPDATE trigger will call my Lucene indexer to update its index (this would have to be an xp_cmdshell call presumably, but is there a way of sending a message to the process instead of starting a new one?). I heard I could use Named Pipes, but how do you use named pipes from within a sproc or trigger? (Searching for "SQL Server named pipes" gives me irrelevant results, of course.)
Does this sound okay, and how can I solve the small sub-problems?
As I understand it, you have to introduce two columns to your existing tables and have them processed (at least one of them) at 'runtime' and used by an external component.
Your first three points are nothing unusual. There are two types of triggers in SQL Server, according to when the trigger gets processed: the INSTEAD OF trigger (actually processed before the insert happens) and the AFTER trigger. However, inside an INSTEAD OF trigger you have to provide the logic that actually inserts the data into the table, along with whatever other custom processing you require. I usually avoid that if it's not really necessary.
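For the Modified column, a hedged sketch of an AFTER UPDATE trigger (the table and key names are hypothetical). Note that an AFTER trigger runs after the UPDATE rather than rewriting the query, but still inside the same transaction:

-- Assumes a table dbo.Items with an int primary key Id
ALTER TABLE dbo.Items
    ADD Modified datetime NOT NULL
        CONSTRAINT DF_Items_Modified DEFAULT (GETUTCDATE());
GO

CREATE TRIGGER trg_Items_SetModified
ON dbo.Items
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- stamp only the rows touched by the outer UPDATE; with the default
    -- RECURSIVE_TRIGGERS setting (OFF) this does not re-fire itself
    UPDATE i
    SET Modified = GETUTCDATE()
    FROM dbo.Items AS i
    INNER JOIN inserted AS ins ON ins.Id = i.Id;
END;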
Now, about your fourth point: it's tricky, and there are several approaches to solving this in SQL Server, but all of them are at least a bit ugly. Basically you have to either execute an external process or send a message to it. I really don't have any experience with the Lucene indexer, but I guess one of these methods (execute or send a message) would apply.
So, you can do one of the following to directly or indirectly access the external component, meaning access the Lucene indexer directly or via some proxy module:
Implement unsafe CLR trigger; basically you execute .NET code inside the trigger and thus get access to the whole (be careful with that - not entirely true) .NET framework
Implement an unsafe CLR procedure; the only difference from a CLR trigger is that you wouldn't call it immediately after the INSERT, but you would do fine with some database job that runs periodically
Use xp_cmdshell; you already know about this one, but you can combine this approach with the job-wrapping technique from the last point
Call a web service; this technique is usually marked as experimental, and you have to implement the service yourself (if the Lucene indexer doesn't install a web service of its own)
There surely are other methods I can't think of right now...
I would personally go with the third point (job + xp_cmdshell) because of its simplicity, but that's just because I lack any knowledge of how the Lucene indexer works.
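A hedged sketch of that approach (the indexer path and arguments are made up); note that xp_cmdshell is disabled by default and has real security implications:

-- Enable xp_cmdshell (requires sysadmin / appropriate permissions)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;

-- This call would typically be the body of a SQL Server Agent job step that runs periodically
EXEC xp_cmdshell 'C:\tools\lucene-indexer.exe --incremental';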
EDIT (another option):
Use Query Notifications; SQL Server Service Broker allows an external application to connect and monitor interesting changes. You even have several options for how to do that (basically synchronous or asynchronous); the only precondition is that Service Broker is up, running, and available to your application. This is a more sophisticated method of informing an external component that something has changed.
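A hedged sketch of the server-side prerequisites (the database and user names are hypothetical); the subscription itself is created from the application side, for example via SqlDependency in .NET:

-- Service Broker must be enabled in the database
ALTER DATABASE MyAppDb SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;

-- The application's database user needs permission to subscribe to query notifications
GRANT SUBSCRIBE QUERY NOTIFICATIONS TO app_user;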
I'm a developer at a small company where we're struggling for consistent change control. I'm running into issues where non-dev staff are tweaking stored procedures and triggers in production installations. Their changes are being overwritten when we apply upgrades because they've gone outside of the process the dev team uses to verify db changes are incorporated into source control.
How would you recommend approaching this problem from a technical as well as personal perspective?
Edit 1: A little background on our current process might help this along. We're using a continuous integration server (TeamCity) to generate install artifacts and label svn upon check in. I'm using NMigrations to manage schema and sp/trigger changes when we apply fixes. Unfortunately it's beyond my ability to stop unauthorized schema changes so what I would love to find is a design pattern that allows for an overridable trigger/sp definition.
You need to clearly separate:
source control management
release management
Tweaking in prod shouldn't be possible if the release environment is protected through strict ACLs that prevent anyone who isn't duly appointed from deploying and changing stuff.
If that deployment process is automated, then all changes will go through the proper channel, because everyone will know that a simple "push button" process is enough to deploy the hotfix.
But if getting that fix into source control and deploying it is complicated, then a tweak directly in prod is usually the result...
Limit rights to change stored procedures and triggers, especially on production. Go ahead and let them know first so they aren't blindsided, but clearly protect production from all unauthorized changes.
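On the technical side, a hedged sketch of what that lockdown could look like (the role name is hypothetical; apply it to whatever principals the non-dev staff actually use):

-- Members of this role can still read/write data but cannot alter code objects in the dbo schema
DENY ALTER ON SCHEMA::dbo TO non_dev_staff;

-- Also block creating new procedures at the database level
DENY CREATE PROCEDURE TO non_dev_staff;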
We are building a webapp which is shipped to several clients as a Debian package. Each client runs their own server, but updates and support are done by us.
We make regular releases of the product, with a clean version number. Most of the users get an automatic update (by Puppet), some others don't.
We want to keep track of the version of the application (in order to allow the user to check the version in an "about" section, and for our support to help the user more accurately).
We plan to store the version of the code and the version of the database in our database, and to keep the info up to date automatically.
Is that a good idea?
The other alternative we see is a file.
EDIT: The code and database schema are updated together (if we update to version x.y.z, both the code and the database go to x.y.z).
Using a table to track every change to a schema, as described in this post, is a good practice that I'd definitely suggest following.
For the application, if it is shipped independently of the database (which is not clear to me), I'd embed a file in the package (and thus not use the database to store the version of the web application).
If not and thus if both the application and the database versions are maintained in sync, then I'd just use the information stored in the database.
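A hedged, T-SQL-flavored sketch of such a version table (all names are hypothetical; adapt types and functions to your RDBMS):

CREATE TABLE dbo.VersionInfo (
    SchemaVersion varchar(20) NOT NULL,
    AppVersion    varchar(20) NOT NULL,
    AppliedAtUtc  datetime    NOT NULL
        CONSTRAINT DF_VersionInfo_AppliedAtUtc DEFAULT (GETUTCDATE())
);

-- Each deployment inserts a new row, so the upgrade history is preserved
INSERT INTO dbo.VersionInfo (SchemaVersion, AppVersion) VALUES ('1.4.2', '1.4.2');

-- The "about" page reads the most recent row
SELECT TOP (1) SchemaVersion, AppVersion, AppliedAtUtc
FROM dbo.VersionInfo
ORDER BY AppliedAtUtc DESC;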
As a general rule, I would have both the DB version and the application version. The problem here is how "private" the database is. If the database is "private" to the application and the user never modifies the schema, then your initial solution is fine. In my experience, databases which accumulate several years of data stop being private; that is, users add a table or two and access the data using some reporting tool, and from that point on the database is no longer used exclusively by the application.
UPDATE
One more thing to consider is users (the application) not being able to connect to the DB and calling for support. For this case it would be better to have the version, etc., stored on the file system.
Assuming there are no compelling reasons to go with one approach or the other, I think I'd go with keeping them in the database.
I'd put them in both places. Then when running your about function you quickly check that they are both the same, and if they aren't you can display extra information about the version mismatch. If they're the same then you will only need to display one of them.
I've generally found users can do "clever" things like revert databases back to old versions by manually copying directories around "because they can" so defensively dealing with it is always a good idea.
In our current database development environment we have automated build processes that check all the SQL code out of svn, create database scripts, and apply them to the various development/QA databases.
This is all well and good, and is a tremendous improvement over what we did in the past, but we have a problem with rerunning scripts. Obviously this isn't a problem with some scripts, like altering procedures, because you can run them over and over without adversely affecting the system. Right now, to add metadata and run statements like create/alter table statements, we add code to check and see if the objects exist, and if they do, we don't run them.
Our problem is that we really only get one shot to run the script, because once the script has been run, the objects are in the environment and the system won't run the script again. If something needs to change once it's been deployed, we have a difficult process of running update scripts against the update scripts and hoping that everything falls in the correct order and all of the PKs line up between the environments (the databases are, shall we say, "special").
Short of dropping the database and starting the process from scratch (the last most current release), does anyone have a more elegant solution to this?
I'm not sure how best to approach the problem in your specific environment, but I'd suggest reading up on Rails' migrations feature for some inspiration on how to get started.
http://wiki.rubyonrails.org/rails/pages/UnderstandingMigrations
We address this - or at least a similar problem to this - as follows:
The schema has a version number - this is represented by a table which has one row per version which, as well as the version number, carries boring things like a date/time stamp for when that version came into existence.
By having the schema create/modify DDL wrapped in code that performs the changes for us.
In the context above one would build the schema change code as part of the build process then run it and it would only apply schema changes that haven't already been applied.
In our experience (which is bound not to be representative) in most cases the schema changes are sufficiently small/fast that they can safely be run in a transaction which means that if it fails we get a rollback and the db is "safe" - although one would always recommend taking backups before applying schema updates if practicable.
I evolved this out of nasty, painful experience. It's not a perfect system (or an original idea), but as a result of working this way we have a high degree of confidence that if there are two instances of one of our databases with the same version, then the schema for those two databases will be the same in almost all respects, and that we can safely bring any db up to the current schema for that application without ill effects. (That last isn't 100% true, unfortunately - there's always an exception - but it's not too far from the truth!)
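A hedged sketch of what one of those wrapped changes could look like (the version table, version number, and DDL are all hypothetical):

-- Only apply the change if this version has not been recorded yet
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE VersionNumber = 42)
BEGIN
    BEGIN TRANSACTION;

    -- the actual schema change for version 42
    ALTER TABLE dbo.Customers ADD loyalty_tier varchar(20) NULL;

    -- record that the change has been applied
    INSERT INTO dbo.SchemaVersion (VersionNumber, AppliedAtUtc)
    VALUES (42, GETUTCDATE());

    COMMIT TRANSACTION;
END;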
Do you keep your existing data in the database? If not, you may want to look at something similar to what Matt mentioned: a tool for .NET called RikMigrations
http://www.rikware.com/RikMigrations.html
I use that on my projects to update my database on the fly, while keeping track of revisions. Also, it makes it very simple to move database schema to different servers, etc.
If you want re-runnability in your scripts, then you can't treat them as definitions... what I mean by this is that you need to focus on change scripts rather than "here is my table" scripts.
let's say you have a table Customers:
create table Customers (
id int identity(1,1) primary key,
first_name varchar(255) not null,
last_name varchar(255) not null
)
and later you want to add a status column. Don't modify your original table script; that one has already run (and can have an IF NOT EXISTS check to prevent it from causing errors when run again).
Instead, have a new script, called add_customer_status.sql
in this script you'll have something like:
alter table Customers
add status varchar(50) null
go
update Customers set status = 'Silver' where status is null
alter table Customers
alter column status varchar(50) not null
Again, you can wrap this in an IF NOT EXISTS-style check to allow re-running, but here we've leveraged the notion that this is a change script, and we adapt the database accordingly. If there is already data in the Customers table then we're still okay, since we add the column, seed it with data, and then add the NOT NULL constraint.
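A hedged sketch of such a guard (assuming the dbo schema); COL_LENGTH returns NULL when the column does not exist yet:

if col_length('dbo.Customers', 'status') is null
begin
    alter table Customers
    add status varchar(50) null
end
go
-- the update and the alter column ... not null steps above are already safe to re-run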
Both of the migration frameworks mentioned above are good, I've also had excellent experience with MigratorDotNet.
Scott named a couple of other SQL tools that address the problem of change management. But I'm still rolling my own.
I would like to second this question, and add my puzzlement that there is still no free, community-based tool for this problem. Obviously, scripts are not a satisfactory way to maintain a database schema; neither are instances. So, why don't we keep metadata in a separate (and while we're at it, platform-neutral) format?
That's what I'm doing now. My master database schema is a version-controlled XML file, created initially from a simple web service. A simple JavaScript program compares instances against it, and a simple XSL transform yields the CREATE or ALTER statements. It has limits, like RikMigrations; for instance, it doesn't always sequence inter-dependent objects correctly. (But guess what: neither does Microsoft's SQL Server Database Publication tool.) Really, it's too simple. I simply didn't include objects (roles, users, etc.) that I wasn't using.
So, my view is that this problem is indeed inadequately addressed, and that sooner or later we'll have to get together and tackle the devilish details.
We went the 'drop and recreate the schema' route. We had some classes in our JUnit test package which parameterized the scripts to create all the objects in the schema for the developer executing the code. This allowed all the developers to share one test database and everyone could simultaneously create/test/drop their test tables without conflicts.
Did it take a long time to run? Yes. At first we used the setUp method for this, which meant the tables were dropped/created for every test, and that took way too long. Then we created a TestSuite which could be run once before all the tests for a class and then cleaned up when all the class tests were complete. This still meant that the db setup ran many times when we ran our 'AllTests' class, which included all the tests in all our packages.
How I solved it was adding a semaphore to the OracleTestSuite code, so when the first test requested the database to be set up it would do that, but any subsequent call would just increment a counter. As each tearDown() method was called, the counter would be decremented until it reached 0, and the OracleTestSuite code would drop everything.
One issue this leaves is whether the tests assume that the database is empty. It can be convenient to let database tests know the order in which they run so they can take advantage of the state of the database, because it can reduce the duplication of DB setup.
We used the concept of ObjectMothers to solve a similar problem with creating complex domain objects for testing purposes. Mock objects might be a better answer but we hadn't heard about them at the time. After all this time, I'd recommend creating test helper methods that could create standardized datasets for the typical scenarios. Plus that would help document the important edge cases from a data perspective.