In earlier versions of CRM, the default organization could be set from Deployment Manager. That is no longer the case: now every user gets their own default, based on the first organization they ever accessed on the server.
I have strong (and less than favorable) opinions on the subject but it seems that Microsoft cares very little what I think.
So, I'm going to do the following to the DB.
use MSCRM_CONFIG
update SystemUser
set DefaultOrganizationId = 'GUID of the main organization'
--where Id='GUID of a user'
However, I'm concerned that it'll break something and cause an eternity to restore, so I'm verifying by asking the question here.
How can I ensure beyond any possible doubts that I've got the correct GUID for the organization?
Will it work correctly if I comment out the WHERE clause that targets individual users and update all of them in one go?
What other considerations should I have, apart from backing up the whole system prior to the operation?
And if anybody can suggest a smoother, less intrusive way, I'll be jumping for joy.
You can use the script at http://complexitykills.blogspot.com/2009/09/default-organization-for-user-is.html, which is similar to yours but includes a bit more logging. Note that the comments there suggest adding a WHERE clause condition on the Organization table for "IsDeleted = 0", to prevent selecting an organization that has been deleted. If you issue your SQL command inside a SQL transaction, you can run the script, validate that users can still log in to Microsoft CRM, and, if needed, quickly issue a "ROLLBACK TRAN" rather than having to perform a complete restore of the MSCRM_CONFIG database (although that restore should be quick, since MSCRM_CONFIG is never very large as far as SQL Server databases are concerned).
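A minimal sketch of that transaction wrapper, applied to the UPDATE from the question (the GUIDs are placeholders):
BEGIN TRAN;

UPDATE MSCRM_CONFIG..SystemUser
SET DefaultOrganizationId = 'GUID of the main organization';
-- WHERE Id = 'GUID of a user'  -- leave commented out to update every user

-- Verify that users can still log in to Microsoft CRM, then run one of:
-- COMMIT TRAN;
-- ROLLBACK TRAN;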
To get the correct OrganizationID, you can use a SQL Query like this:
DECLARE @DefaultOrganization AS NVARCHAR(100);
DECLARE @DefaultOrganizationId AS UNIQUEIDENTIFIER;

SET @DefaultOrganization = '<organizationname>';

SELECT @DefaultOrganizationId = Id
FROM MSCRM_CONFIG..Organization
WHERE UniqueName = @DefaultOrganization AND IsDeleted = 0;
If you leave the WHERE clause commented out, it will indeed update all of the users to the OrganizationId you supply, and that should work fine (see the query above for an example of how to retrieve the OrganizationId from the MSCRM_CONFIG..Organization table).
This is not a common operation, but I have seen it used at a few organizations to successfully update the default organization associated with a user; in those cases, precautions were taken beforehand to back up the databases, and testing was performed afterwards to ensure everything still worked for those users in Microsoft CRM.
Background
I have a multi-tenant scenario and a single SQL Server project that will be deployed as multiple databases on the same server. There will be one DB for each tenant, plus one "model" DB.
The "model" database serves three purposes:
Forces some "system" data to always be present in each tenant database
Serves as an access point for users with a special permission to edit system data (which is then periodically synced out to all tenants)
When a new tenant is created, it is the database that gets copied and attached under a new name representing the tenant
There are triggers that check whether the modified / deleted data in a tenant db corresponds to "system" data in the "model" db. If it does, an error is raised saying that system data cannot be altered.
Issue
So here's a part of the trigger that checks if deletion can be allowed:
IF DB_NAME() <> 'ModelTenant' AND EXISTS
(
SELECT
[deleted].*
FROM
[deleted]
INNER JOIN [---MODEL DB NAME??? ---].[MySchema].[MyTable] [ModelTable]
ON [deleted].[Guid] = [ModelTable].[Guid]
)
BEGIN
;THROW 50000, 'The DELETE operation on table MyTable cannot be performed. At least one targeted record is reserved by the system and cannot be removed.', 1;
END;
I can't figure out what should take the place of --- MODEL DB NAME??? --- in the above code so that the project compiles properly. When referring to a completely different project, I know what to do: add a reference to that project, represented by a SQLCMD variable. But in this scenario the reference is essentially to the same project, only deployed to a different database, and I can't seem to add a self-reference in that manner.
What can I do? Does SSDT offer some kind of support for such a scenario?
Have you tried setting up a Database Variable? You can read under "Reference aware statements" here. You could then say:
SELECT * FROM [$(MyModelDb)].[MySchema].[MyTable] [ModelTable]
If you don't have a specific project for $(MyModelDb) you can choose the option to "suppress errors by unresolved references...". It's been forever since I've used SSDT projects, but I think that should work.
TIP: If you need to reference one table 100 times, you may find it better to create a SYNONYM that uses the database variable, then point to the SYNONYM in your SPROCs/TRIGGERs (see the sketch below). Why? Because that way you don't need to redeploy your SPROCs/TRIGGERs to get the variable replaced with the actual value, and that can make development smoother.
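For example, a minimal sketch (the synonym name is hypothetical; [$(MyModelDb)] is the same SQLCMD variable as above):
-- Synonym that resolves the model database through the SQLCMD variable.
CREATE SYNONYM [MySchema].[Model_MyTable]
    FOR [$(MyModelDb)].[MySchema].[MyTable];
GO

-- Triggers and procs then reference the synonym instead of the variable:
-- SELECT * FROM [MySchema].[Model_MyTable];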
I'm not quite sure if SSDT is particularly well-suited to projects of any decent amount of complexity. I can think of one or two ways to most likely accomplish this (especially depending on exactly how you do the publishing / deployment), but I think you would actually lose more than you gain. What I mean by that is: you could add steps to get this to work (i.e. win the battle), but you would be creating a more complex system in order to get SSDT to publish a system that is more complex (and slower) than it needs to be (i.e. lose the war).
Before worrying about SSDT, let's look at why you need/want SSDT to do this in the first place. You have system data intermixed with tenant data, and you need to validate UPDATE and DELETE operations to ensure that the system data does not get modified, and the only way to identify data that is "system" data is by matching it to a home-of-record -- ModelDB -- based on GUID PKs.
That theory of identifying which data belongs to the "system" rather than to a tenant is your main problem, not SSDT. You are definitely on the right track for a multi-tenant system by having the "model" database, but using it for data validation is a poor design choice: on top of the performance degradation already incurred by using GUIDs as PKs, you are further slowing down all of these UPDATE and DELETE operations by funneling them through a single point of contention, since all client DBs need to check this common source.
You would be far better off to include a BIT field in each of these tables that mixes system and tenant data, denoting whether the row was "system" or not. Just look at the system catalog views within SQL Server:
sys.objects has an is_ms_shipped column
sys.assemblies went the other direction and has an is_user_defined column.
So, if you were to add an [IsSystemData] BIT NOT NULL column to these tables, your Trigger logic would become:
IF DB_NAME() <> N'ModelTenant' AND EXISTS
(
SELECT del.*
FROM [deleted] del
WHERE del.[IsSystemData] = 1
)
BEGIN
;THROW 50000, 'The DELETE operation on table MyTable cannot be performed. At least one targeted record is reserved by the system and cannot be removed.', 1;
END;
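Adding the flag itself might look something like this (a sketch; the constraint name and the default of 0 are assumptions you would adapt to however your system rows get seeded):
ALTER TABLE [MySchema].[MyTable]
    ADD [IsSystemData] BIT NOT NULL
        CONSTRAINT [DF_MyTable_IsSystemData] DEFAULT (0);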
Benefits:
No more SSDT issue (at least not from this part ;-)
Faster UPDATE and DELETE operations
Less contention on the shared resource (i.e. ModelDB)
Less code complexity
As an alternative to referencing another database project, you can produce a dacpac, then reference the dacpac as a database reference in "same server, different database" mode.
Can I find out when the last INSERT, UPDATE or DELETE statement was performed on a table in an Oracle database and if so, how?
A little background: The Oracle version is 10g. I have a batch application that runs regularly, reads data from a single Oracle table and writes it into a file. I would like to skip this if the data hasn't changed since the last time the job ran.
The application is written in C++ and communicates with Oracle via OCI. It logs into Oracle with a "normal" user, so I can't use any special admin stuff.
Edit: Okay, "special admin stuff" wasn't exactly a good description. What I mean is: I can't do anything besides SELECTing from tables and calling stored procedures. Changing anything about the database itself (like adding triggers) is sadly not an option if I want to get it done before 2010.
I'm really late to this party but here's how I did it:
SELECT SCN_TO_TIMESTAMP(MAX(ora_rowscn)) from myTable;
It's close enough for my purposes.
Since you are on 10g, you could potentially use the ORA_ROWSCN pseudocolumn. That gives you an upper bound of the last SCN (system change number) that caused a change in the row. Since this is an increasing sequence, you could store off the maximum ORA_ROWSCN that you've seen and then look only for data with an SCN greater than that.
By default, ORA_ROWSCN is maintained at the block level, so a change to any row in a block changes the ORA_ROWSCN for all rows in that block. For "normal" data access patterns, that is probably sufficient if the goal is simply to minimize the number of unchanged rows you process more than once. You can rebuild the table with ROWDEPENDENCIES, which causes ORA_ROWSCN to be tracked at the row level and gives you more granular information, but that requires a one-time effort to rebuild the table.
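A rough sketch of that pattern (my_table and the bind variable are placeholders):
-- Remember the highest SCN seen so far...
SELECT MAX(ora_rowscn) AS last_seen_scn FROM my_table;

-- ...and on the next run, only pick up rows changed after that point.
SELECT *
  FROM my_table
 WHERE ora_rowscn > :last_seen_scn;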
Another option would be to configure something like Change Data Capture (CDC) and to make your OCI application a subscriber to changes to the table, but that also requires a one-time effort to configure CDC.
Ask your DBA about auditing. They can start an audit with a simple command like:
AUDIT INSERT ON user.table
Then you can query the table USER_AUDIT_OBJECT to determine if there has been an insert on your table since the last export.
google for Oracle auditing for more info...
SELECT * FROM all_tab_modifications;
Could you run a checksum of some sort on the result and store that locally? Then when your application queries the database, you can compare its checksum and determine if you should import it?
It looks like you may be able to use the ORA_HASH function to accomplish this.
Update: Another good resource: 10g’s ORA_HASH function to determine if two Oracle tables’ data are equal
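A rough sketch of the idea (my_table and its columns are placeholders; any stable concatenation of the relevant columns would do):
-- One aggregate value per run; compare it against the value stored from the previous run.
SELECT SUM(ORA_HASH(id || '|' || some_column)) AS table_checksum
  FROM my_table;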
Oracle can watch tables for changes and, when a change occurs, execute a callback function in PL/SQL or OCI. The callback gets an object containing the collection of tables that changed, each with a collection of the ROWIDs that changed and the type of action (insert, update, delete).
So you don't even go to the table, you sit and wait to be called. You'll only go if there are changes to write.
It's called Database Change Notification. It's much simpler than the CDC approach Justin mentioned, but both require some fancy admin stuff. The good part is that neither requires changes to the APPLICATION.
The caveat is that CDC is fine for high-volume tables; DCN is not.
If table modification monitoring is enabled on the server (it is by default from 10g onwards), simply use:
SELECT *
FROM ALL_TAB_MODIFICATIONS
WHERE TABLE_NAME IN ('<YOUR_TABLE_NAME>');
You would need to add a trigger on insert, update and delete that sets a value in another table to SYSDATE.
When the application runs, it would read that value and save it somewhere, so that the next run has a reference to compare against.
Would you consider that "Special Admin Stuff"?
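A minimal sketch of that idea (table and trigger names are hypothetical):
-- One-row bookkeeping table holding the time of the last change.
CREATE TABLE my_table_last_change (last_change DATE);
INSERT INTO my_table_last_change VALUES (SYSDATE);

-- Statement-level trigger that stamps the time of any DML on the watched table.
CREATE OR REPLACE TRIGGER trg_my_table_last_change
AFTER INSERT OR UPDATE OR DELETE ON my_table
BEGIN
    UPDATE my_table_last_change SET last_change = SYSDATE;
END;
/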
It would be better to describe what you're actually doing so you get clearer answers.
How long does the batch process take to write the file? It may be easiest to let it go ahead and then compare the file against a copy of the file from the previous run to see if they are identical.
If anyone is still looking for an answer, they can use the Oracle Database Change Notification feature that comes with Oracle 10g. It requires the CHANGE NOTIFICATION system privilege. You can register listeners so that a notification is sent back to the application when changes occur.
You can use the statement below:
select * from all_objects ao where ao.OBJECT_TYPE = 'TABLE' and ao.OWNER = 'YOUR_SCHEMA_NAME'
This is a hypothetical question - the problem listed below is entirely fictional, but I believe if anyone has an answer it could prove useful for future reference.
We have a situation wherein multiple systems all populate the same data table on our SQL Server. One of these systems seems to be populating the table incorrectly, albeit in a consistent pattern, leading me to believe the bug is in a single system rather than several. These are predominantly third-party systems and we do not have access to modify or view their source code, nor alter their functionality. We want to file a bug report with the culprit system's developer, but we don't know which one it is, as the systems leave no identifiable trace on the table: those in charge before me, back when the database was new and only occasionally used by a single system, believed that a single timestamp field was an adequate audit, and this has never been reconsidered.
Our solution has to be entirely SQL-based. Our thought was to write a trigger on the table, and somehow pull through the source of the query - ie, where it came from - but we don't know how, or even if that's possible.
There are some obvious solutions to this (for example, contact all the developers and have them update their software to populate a new software_ID field, then use that to identify the faulty system later and save my fictional self similar headaches), but I'm particularly interested to know whether there's anything that could be done purely in-house on SQL Server (or another clever solution) within the restrictions noted.
You can use these functions:
select HOST_NAME(), APP_NAME()
That way you will know the computer and the application that made the changes.
And you can modify the application's connection string to set a custom application name, for example:
"Data Source=SQLServerExpress;Initial Catalog=TestDB;
Integrated Security=True; Application Name=MyProgramm"
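For example, a trigger could log those two values for every insert. This is only a sketch; the audit table and trigger names are hypothetical:
-- Hypothetical audit table capturing where each insert came from.
CREATE TABLE dbo.MyTableInsertAudit
(
    AuditId    INT IDENTITY(1,1) PRIMARY KEY,
    HostName   NVARCHAR(128),
    AppName    NVARCHAR(128),
    InsertedAt DATETIME NOT NULL DEFAULT GETDATE()
);
GO

-- One audit row per INSERT statement against the monitored table.
CREATE TRIGGER dbo.trMyTable_AuditInsert
ON dbo.MyTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.MyTableInsertAudit (HostName, AppName)
    SELECT HOST_NAME(), APP_NAME();
END;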
You could create a copy of the table in question with one additional nvarchar field to hold the identifier.
Then create a trigger for insert (and maybe update) on the table, and in the trigger insert the same rows to the copy, adding in an identifier. The identifier could be for instance the login name on the connection:
insert into tableCopy select SUSER_SNAME(), inserted.* from inserted
or maybe a client IP:
declare @clientIp varchar(255);

SELECT @clientIp = client_net_address
FROM sys.dm_exec_connections
WHERE session_id = @@SPID

insert into tableCopy select @clientIp, inserted.* from inserted
or possibly something else that you could get from the connection context (for lack of a more precise term) that can identify the client application.
Make sure though that inserting into the table copy will under no circumstances cause errors. Primary keys and indexes should probably be dropped from the copy.
Just an idea: create a trigger that saves, in a dedicated table, the info returned by EXEC sp_who2 when suspicious values are stored in the table.
Maybe you can filter the sp_who2 output by the RUNNABLE status.
So, even if multiple users share the same login, you can determine the exact moment at which the command was executed and start your research from there...
So, I'm facing the challenge of having to log the data being changed for each field in a table. I can obviously do that with triggers (which I've never used before, but I imagine they're not that difficult), but I also need to be able to link the log entry to whoever performed the change, which is where the problem lies. The trigger isn't aware of who is performing the change, and I can't pass in a user id either.
So, how can I do what I need to do? If it helps say I have these tables:
Employees {
EmployeeId
}
Jobs {
JobId
}
Cookies {
CookieId
EmployeeId -> Employees.EmployeeId
}
So, as you can see, I have a Cookies table that the application uses to verify sessions, and I can infer the user from it, but again, I can't make the trigger aware of it when I want to make changes to the Jobs table.
Help would be highly appreciated!
We use CONTEXT_INFO to pass along the user making the calls to the DB. Then our application-level security can be enforced all the way into the DB code. It might seem like overhead, but really there is no performance issue for us.
make_db_call() {
Set context_info --some data representing the user----
do sql incantation
}
In the DB:
select @user = dbo.ParseContextInfo()
... audit/log/security etc. can then determine who ...
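A minimal sketch of the mechanism (the 'employee:42' payload is just an assumed example; CONTEXT_INFO is a 128-byte varbinary per session):
-- Caller stamps the session before running any DML.
DECLARE @user VARBINARY(128);
SET @user = CAST('employee:42' AS VARBINARY(128));
SET CONTEXT_INFO @user;

-- A trigger (or any DB code) reads it back; trailing 0x00 padding may need trimming.
DECLARE @who VARCHAR(128);
SET @who = CAST(CONTEXT_INFO() AS VARCHAR(128));
SELECT @who AS acting_user;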
To get the previous values inside the trigger, you select from the 'deleted' pseudo-table, and to get the values you are putting in, you select from the 'inserted' pseudo-table.
Before you issue the LINQ to SQL query, issue a command like this:
context.ExecuteQuery("exec some_sp_to_set_context " + userId)
Or, preferably, use an overloaded DataContext where the above is executed before each query. See here for an example.
We don't use multiple SQL logins, as we rely on connection pooling and also on locking the DB caller down to a limited user.
So, for the second day in a row, someone has wiped out an entire table of data, as opposed to the one row they were trying to delete, because they didn't have a qualified WHERE clause.
I've been all up and down the Management Studio options, but can't find a confirm option. I know other tools for other databases have it.
I'd suggest that you always write a SELECT statement with the WHERE clause first and execute it to actually see which rows your DELETE command will delete. Then just execute the DELETE with the same WHERE clause. The same applies to UPDATEs.
Under Tools > Options > Query Execution > SQL Server > ANSI, you can enable the Implicit Transactions option, which means that you don't need to explicitly include the BEGIN TRANSACTION command.
The obvious downside of this is that you might forget to add a Commit (or Rollback) at the end, or worse still, your colleagues will add Commit at the end of every script by default.
You can lead the horse to water...
You might suggest that they always take an ad-hoc backup before they do anything (depending on the size of your DB) just in case.
Try using a BEGIN TRANSACTION before you run your DELETE statement.
Then you can choose to COMMIT or ROLLBACK it.
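For example (a sketch; the table name and predicate are hypothetical):
BEGIN TRANSACTION;

DELETE FROM dbo.Orders
WHERE OrderId = 12345;   -- check the "rows affected" count here

-- Then, after checking the count, run ONE of:
-- COMMIT TRANSACTION;
-- ROLLBACK TRANSACTION;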
In SSMS 2005, you can enable this option under Tools|Options|Query Execution|SQL Server|ANSI ... check SET IMPLICIT_TRANSACTIONS. That will require a commit to affect update/delete queries for future connections.
For the current query, go to Query|Query Options|Execution|ANSI and check the same box.
This page also has instructions for SSMS 2000, if that is what you're using.
As others have pointed out, this won't address the root cause: it's almost as easy to paste a COMMIT at the end of every new query you create as it is to fire off a query in the first place.
First, this is what audit tables are for. If you know who deleted all the records, you can either restrict their database privileges or deal with them from a job-performance perspective. The last person who did this at my office is currently on probation; if she does it again, she will be let go. You have responsibilities if you have access to production data, and ensuring that you cause no harm is one of them. This is a people problem as much as a technical problem. You will never find a way to prevent people from making dumb mistakes (the database has no way to know whether you meant "delete table a" or "delete table a where id = 100", and a confirm dialog would get clicked through automatically by most people). You can only try to reduce them by making sure the people who run this code are responsible and by putting policies in place to help them remember what to do. Employees who have a pattern of behaving irresponsibly with your business data (particularly after they have been given a warning) should be fired.
Others have suggested the kinds of things we do to prevent this from happening. I always embed a SELECT in a DELETE that I'm running from a query window, to make sure it will delete only the records I intend. All code on production that changes, inserts, or deletes data must be enclosed in a transaction; if it is being run manually, you don't run the ROLLBACK or COMMIT until you see the number of records affected.
Example of a delete with an embedded select (highlight from the commented SELECT through to the end to preview the rows that will be deleted):
delete a
--select a.*
from table1 a
join table2 b on a.id = b.id
where b.somefield = 'test'
But even these techniques can't prevent all human error. A developer who doesn't understand the data may run the select and still not realize that it is deleting too many records. Running in a transaction may mean you have other problems when people forget to commit or roll back and lock up the system. Or people may put it in a transaction and still hit commit without thinking, just as they would hit confirm on a message box if there were one. The best prevention is to have a way to quickly recover from errors like these. Recovery from an audit log table tends to be faster than from backups. Plus, you have the advantage of being able to tell who made the error and exactly which records were affected (maybe you didn't delete the whole table, but your where clause was wrong and you deleted a few wrong records).
For the most part, production data should not be changed on the fly. You should script the change and check it on dev first. Then on prod, all you have to do is run the script with no changes, rather than highlighting and running little pieces one at a time. Now, in the real world this isn't always possible, as sometimes you are fixing something broken only on prod that needs to be fixed right now (for instance, when none of your customers can log in because critical data got deleted). In a case like this, you may not have the luxury of reproducing the problem first on dev and then writing the fix. When you have these types of problems, you may need to fix directly on prod, and only DBAs, database analysts, configuration managers, or others who are normally responsible for data on prod should do the fix, not a developer. Developers in general should not have access to prod.
That is why I believe you should always:
1 Use stored procedures that are tested on a dev database before deploying to production
2 Select the data before deletion
3 Screen developers using an interview and performance evaluation process :)
4 Base performance evaluation on how many database tables they do/do not delete
5 Treat production data as if it were poisonous and be very afraid
So for the second day in a row, someone has wiped out an entire table of data as opposed to the one row they were trying to delete because they didn't have the qualified where clause
Probably the only solution will be to replace that someone with someone else ;). Otherwise they will always find a workaround.
Alternatively, restrict that person's database access, provide them with a stored procedure that takes the WHERE-clause value as a parameter, and grant them permission to execute only that stored procedure.
Put on your best Trogdor and Burninate until they learn to put in the WHERE clause.
The best advice is to get the muckety-mucks that are mucking around in the database to use transactions when testing. It goes a long way towards preventing "whoops" moments. The caveat is that now you have to tell them to COMMIT or ROLLBACK because for sure they're going to lock up your DB at least once.
Lock it down:
REVOKE delete rights on all your tables.
Put in an audit trigger and audit table.
Create parameterized delete SPs and only grant execute rights on an as-needed basis (see the sketch below).
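A minimal sketch of that last point (all object names are hypothetical):
-- Take away direct DELETE rights on the table...
REVOKE DELETE ON dbo.Orders FROM SomeUserOrRole;
GO

-- ...and expose a delete that forces the WHERE clause through its parameter.
CREATE PROCEDURE dbo.DeleteOrder
    @OrderId INT
AS
BEGIN
    SET NOCOUNT ON;
    DELETE FROM dbo.Orders
    WHERE OrderId = @OrderId;
END;
GO

GRANT EXECUTE ON dbo.DeleteOrder TO SomeUserOrRole;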
Isn't there a way to give users the results they need without providing raw access to SQL? If you at least had a separate entry box for "WHERE", you could default it to "WHERE 1 = 0" or something.
I think there must be a way to back these changes out of the transaction log, too, but probably not without rolling everything back and then selectively reapplying whatever came after the fatal mistake.
Another ugly option is to create a trigger that writes all DELETEs (maybe those over some minimum number of records) to a log table.