We are thinking of implementing Rational ClearQuest for Change Management and Defect tracking.
When we integrate Rational ClearQuest and Rational ClearCase, the activities will come from Rational ClearQuest.
Since the Rational ClearQuest implementation will take time because of our process, we are thinking of removing activity creation from the developer side and having the admin create the activities for each developer instead.
I have a few concerns. If the admin creates the activities and changes the activity's owner and group using the protect command, is that enough? Could the activity be used by other developers too? Since an activity represents a unit of work, can it be shared?
I need some clarity on this.
Thanks.
I don't remember ever having to protect UCM activity creation with special privileges when using ClearQuest (we are no longer using it right now).
The IBM article "About creating UCM activities in a project enabled for Rational ClearQuest" summarizes what happens when a user "works on" a (ClearQuest) activity:
An activity object is created in the stream to which the view is attached.
The activity object is linked to the record in the Rational ClearQuest user database whose record type is enabled for UCM.
The name of the Rational ClearCase activity is set to match the Rational ClearQuest record's ID.
You do not create UCM activity objects directly.
Since the creation of ClearCase UCM activities is managed by ClearQuest, you don't need to:
create UCM activities yourself, or
try to protect them with a special owner.
Instead, you should rely on policies such as the WorkOn policy:
This policy is invoked when a developer attempts to set an activity.
The default policy script checks whether the developer’s user name matches the name in the Rational® ClearQuest® record Owner field.
If the names match, the developer can work on the activity. If the names do not match, the WorkOn fails.
The intent of this policy is to ensure that all criteria are met before a developer can start working on an activity. You may want to modify the policy to check for additional criteria.
The article "About Rational ClearCase activities and record types enabled for UCM" details the link between the two notions (UCM activities and ClearQuest record types):
In a project that uses the UCM integration with Rational ClearQuest, records based on a record type enabled for UCM can be linked with Rational ClearCase activity objects.
(Diagram: mapping between ClearQuest UCM-enabled records and PVOB activity objects: http://publib.boulder.ibm.com/infocenter/cchelp/v7r0m0/topic/com.ibm.rational.clearcase.hlp.doc/cc_main/images/cq_pvob_map.gif)
This link enables the Rational ClearQuest client to display information about the Rational ClearCase activity (such as its change set, its stream, and whether it is currently set in any view).
The link also enables policies governing when you can deliver an activity in the Rational ClearCase environment and when you can close an activity in the Rational ClearQuest environment.
Because of the close association between linked UCM-enabled records and Rational ClearCase activities, the UCM documentation usually refers to both entities as activities.
At any point in a project, your Rational ClearQuest user database may contain records that are not linked to a Rational ClearCase activity object, but have a record type that is enabled for UCM.
For example, a newly created record might not be linked to a Rational ClearCase activity. You must explicitly complete an action (for example, by clicking Action > Work On) to link such a record to a UCM activity.
However, each Rational ClearCase activity object in a project enabled for Rational ClearQuest must be linked to a Rational ClearQuest record.
You cannot create a Rational ClearCase activity object without linking it to a record in a Rational ClearQuest user database.
Tip: In a project enabled for Rational ClearQuest, a field is included to describe the activity owner. The Rational ClearQuest owner field and the Rational ClearCase activity creator are two different data points; the former is stored in a Rational ClearQuest user database and the latter in a Rational ClearCase PVOB.
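For reference, a minimal sketch of the protect usage the question asks about (the activity and PVOB names are hypothetical):

cleartool describe activity:fix_crash_42@/vobs/mypvob
cleartool protect -chown jdoe -chgrp devgrp activity:fix_crash_42@/vobs/mypvob

This changes the ClearCase-side owner and group only; the ClearQuest record's Owner field, which the WorkOn policy checks, is a separate data point stored in the ClearQuest user database.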
@kadaba I don't know if you are still looking for an answer, but if you are, this could work: create a pre-op trigger on mkactivity with excluded users (you/admin, etc.), as sketched below.
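A hedged sketch of such a trigger, created in the context of the project VOB (the trigger name and excluded account are hypothetical; -nusers lists the accounts the trigger should NOT fire for, and the nonzero exit blocks the operation for everyone else):

cleartool mktrtype -ucmobject -all -preop mkactivity -nusers vobadmin -execwin "cmd /c exit 1" -execunix "/bin/false" -c "Only the admin may create activities" NO_DEV_MKACTIVITY

With this in place, mkactivity fails for ordinary developers, while the excluded admin account can still create activities.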
On one of our test servers, the view in Settings -> Auditing -> Audit Log Management says there are no deletable audit logs available, even though a select count(*) from AuditBase query against the database returns nearly 90 million records. The user looking at the view has System Administrator privileges. On our other two servers (all using the same version of CRM), the view displays records as expected.
What might we do to get the records to show, or alternatively to clean up the audit table without using the view?
Strange that the Audit Logs are not appearing in the view inside Dynamics, despite records existing in the database.
It might make sense to open a Microsoft ticket about it.
Or, since your system is on-prem, here's a discussion about manually deleting data from the AuditBase table, which is of course highly unsupported.
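If you do go the manual route, here is a heavily hedged T-SQL sketch of the batched-delete idea (the retention cutoff is made up, and again this is unsupported: take a full backup and rehearse on a non-production copy first):

-- UNSUPPORTED: deletes audit rows directly, bypassing CRM's own cleanup.
-- Batching keeps each transaction, and the log growth, small.
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (50000) FROM dbo.AuditBase
    WHERE CreatedOn < '20150101';   -- hypothetical retention cutoff
    SET @rows = @@ROWCOUNT;
END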
I made a serious mistake in my client database: I dropped all the tables and then created new tables with the same names, losing all the client data. I don't have any backup of the client DB. Can you please let me know whether I can recover the data from the old tables?
A few options. All untested, and I am not sure how consistent the database will be:
1. Redgate provides a tool called SQL Log Rescue. It claims to let you view and recover deleted and modified data.
2. The Volume Shadow Copy service.
Some reference on what this means (emphasis mine):
This service allows Windows to take automatic or manual backups, or snapshots, of the current state of the files on a particular volume (drive letter). The important part of this process is that these backups can be taken of files even if they are open. Therefore, this provides a mechanism that backup programs and Windows can use to retain a reliable history of a computer's files
Below is a step-by-step tutorial on how to do this:
https://www.bleepingcomputer.com/tutorials/how-to-recover-files-and-folders-using-shadow-volume-copies/
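Before following the tutorial, it may be worth checking from an elevated command prompt whether any shadow copies exist at all:

vssadmin list shadows

If nothing is listed for the volume that holds the database files, this recovery route is a dead end.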
This is a long question, but please bear with me as I believe it raises important issues about database ownership and access.
I manage and internationally market a "universal" geothermal data management program, written in Delphi, that is a front end to a SQL Server database. The data in the database is derived from many diverse measurements generated and used by the program users over time periods of 30 years or more - i.e. they "own" the data, and the database is primarily a way to efficiently store and manage the data.
Like all databases, the database structure needs to be modified from time to time, including new tables, and this modification is delivered by the release of a new version of the program. The program prompts for a database upgrade, which has to be carried out by a dbo user so that all new tables can be accessed by the other program users. Unfortunately, the program may be used in remote sites and the IT personnel may not be readily available, so that the new version may get installed but the databases are not upgraded. What has frequently happened in such locations is that a program user will upgrade the databases without appropriate SQL Server permissions, and then the other users cannot access the new tables and the program crashes.
One of the program customers has taken another approach: in every database used by the program, they make all program users members of the db_owner role. The program has inbuilt permission levels that can restrict the ability to upgrade databases, so normally only one or two users have this permission. However, with everyone a member of the db_owner role, it doesn't matter who upgrades the database: all tables will be accessible to all program users.
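For reference, a minimal sketch of how that membership is granted, run once per database (the login name is hypothetical):

ALTER ROLE db_owner ADD MEMBER [CONTOSO\geo_user];        -- SQL Server 2012 and later
-- EXEC sp_addrolemember 'db_owner', 'CONTOSO\geo_user';  -- older versions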
The advantages of this approach include the following:
Access permissions can be granted by the group who uses the program, and who has ultimate responsibility for the database.
Knowledge and understanding of the program is passed on within the program users group when staff changes, rather than relying on the IT department as the repository of information on "how it works" (and often they do not know).
Direct data mining and back-door data modification are possible for selected expert users. While the program has extensive data-search and editing tools, sometimes these are not enough and the users need hands-on access.
The program users retain "ownership" of their data.
I would appreciate your comments. I believe that in circumstances such as these, it is important that all the database users are db_owners, and the group of users controls access. Not allowing db_owner roles (a strategy commonly employed by IT departments) fails to recognize the importance of data ownership and data accessibility, and the responsibility of the database users to manage their own data.
The way you've stated your question makes it sound like you've already arrived at a conclusion. The one question that I always ask when someone comes to me (a DBA) with this sort of situation is: if someone accidentally deletes data, am I on the hook to get it back? If the answer is "yes", then they don't get db_owner. If the answer is "no", then the db gets moved to its own server and I get the contract in writing.
The only time I wouldn't bother with access control would be with a simple app running on a local single-user database like SqlExpress. As soon as there are multiple users on a centralised database and important data to protect, restricted access is important. I'm not familiar with your domain (geothermal data management), but surely this data is important to your customers, from integrity, tampering and even a data access point of view (theft of data could be resold to a competitor).
the program may be used in remote sites and the IT personnel may not
be readily available, so that the new version may get installed but
the databases are not upgraded
(i.e. I'm assuming an upgrade script needs to be manually and independently run on the database). It is common nowadays for apps to check the database for schema versioning and even for static data population, e.g. Entity Framework code-first migrations on the .NET stack. The app then has the ability to perform the schema and data upgrade automatically. It should be quite straightforward for you to add the last N versions of your DB upgrade scripts into your app and then do a version check, as sketched below. (Obviously the app itself would need to prompt for dbo access, assuming that even the app should not have dbo access.)
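A minimal sketch of that version check in T-SQL (the table name and numbering scheme are assumptions, not something your app already has):

-- One-row table recording the schema version the database is at.
IF OBJECT_ID('dbo.SchemaVersion') IS NULL
BEGIN
    CREATE TABLE dbo.SchemaVersion (Version INT NOT NULL);
    INSERT INTO dbo.SchemaVersion (Version) VALUES (0);
END

-- At startup the app reads this value; if it is lower than the version the
-- app ships with, it prompts for dbo credentials, runs the missing upgrade
-- scripts in order, and bumps the number after each one.
SELECT Version FROM dbo.SchemaVersion;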
with everyone a member of the db_owner role, it doesn't matter who
upgrades the database
I believe this may place unnecessary responsibility (and power) in the hands of unqualified customer users.
Even the ad-hoc data-mining (SELECT) access should be reconsidered, as a badly formed query can cause performance degradation or block other concurrent writers. If nothing else, providing a few well-formed views will at least ensure decent query plans, for example:
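A sketch under assumed names (the table, columns, and role are all made up for illustration):

-- A curated, read-only view over the raw measurement data.
CREATE VIEW dbo.vw_RecentMeasurements
AS
SELECT SiteId, MeasuredAt, Temperature, Pressure
FROM dbo.Measurements
WHERE MeasuredAt >= DATEADD(YEAR, -1, GETDATE());
GO

-- Expert users query the view; they get no rights on the base tables.
GRANT SELECT ON dbo.vw_RecentMeasurements TO [GeothermalAnalysts];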
My 10c.
When I process a cube in SSMS and script to XMLA, I notice the following element:
<WriteBackTableCreation>UseExisting</WriteBackTableCreation>
What is the writeback table creation feature, and what does it mean for SSAS to UseExisting?
WritebackTableCreation Element (XMLA):
Determines whether a writeback table is created during the Process
operation...
UseExisting: Use the existing writeback table, if one already exists. If one does not exist, an error occurs.
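For context, a minimal sketch of where the element sits inside a Process command (the object IDs are hypothetical; the other documented values are Create and CreateAlways):

<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyOlapDatabase</DatabaseID>
    <CubeID>Sales</CubeID>
  </Object>
  <Type>ProcessFull</Type>
  <WritebackTableCreation>UseExisting</WritebackTableCreation>
</Process>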
Also on Specifying Processing Options:
Writeback Table Options If writeback is enabled in the Analysis
Services project, this setting defines how writeback is handled....
For more details, see Enabling and Securing Data Entry with Analysis Services Writeback:
Why would you want to write data back to Analysis Services rather than
the relational database that provides the raw data? One reason is
latency. When you write data back to a relational database, users have
to wait until the cube is processed before the latest data becomes
available in their reports. However, when you enable writeback, users
can submit data straight into the cube in the current session, making
it instantly visible to other users of the Analysis Services database.
Check out this blog post, which describes writeback and why it would create a table (or use an existing one):
http://bimatters1403.wordpress.com/2008/02/05/ssas-2008-molap-writeback/
Is it possible to guarantee transactional integrity when storing information in a SharePoint list (SP 2010)?
Underneath the covers a single SharePoint operation like adding a list item can involve multiple database operations and they will all be protected by a single database transaction. With that said, the product doesn't expose that transactional capability to you so that you can perform multiple SharePoint operations under the aegis of a single transaction. To be very safe, you'll need to implement very carefully coded error handlers.
According to this, SharePoint 2010 does not offer any transactional support out of the box.
The underlying database does support transactions, so a single insert will probably either succeed or fail, but if an error occurs during a complex routine involving multiple database operations, the data will end up being partially modified.
SharePoint does not offer transaction support out of the box.
Here is a good resource on Building a System.Transactions resource manager for SharePoint
Though I would save the effort and store any critical data directly in a relational database.