Allow everyone to lock/unlock a ClearCase branch - clearcase

In this question I concluded that I should use the lock/unlock mechanism of ClearCase to work efficiently with Git.
Unfortunately, I realized that only a branch owner can perform a lock/unlock operation:
$ ct lock brtype:main-br-foo
cleartool: Error: No permission to perform operation "lock".
cleartool: Error: Must be one of: object owner, VOB owner, member of ClearCase group
cleartool: Error: Unable to lock branch type "main-br-foo".
Is there any solution to allow any member of a VOB to do a lock/unlock?

Another approach would be to use a pre-op trigger on the checkout operation, which would:
prevent the checkout if an attribute (named 'islocked' in the commands below) is set on the file (set using cleartool mkattr as in this question)
allow the checkout to proceed if the attribute is not there.
Removing an attribute can be done by anyone in the same group as the object carrying the attribute, so this model is less constrained than the one using cleartool lock.
You can complete it with a post-op trigger on checkin, which would automatically remove the 'islocked' attribute if it is found on the element.
To set the attribute on a branch, you would do:
cleartool mkattype -nc islocked # Should be done once
cleartool mkattr islocked \"true\" brtype:branch_name
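For illustration, here is a minimal sketch of what the pre-op checkout trigger script could look like. The script name, the branch type name, and the plain-shell implementation are all assumptions; the trigger itself would be registered with something like cleartool mktrtype -element -all -preop checkout -exec <script> <trigger-name>.
#!/bin/sh
# check_lock.sh - hypothetical pre-op checkout trigger script.
# Refuses the checkout when the 'islocked' attribute is present on the branch type.
BRANCH=main-br-foo     # assumption: the branch type guarded by the attribute
# %a prints the attribute list of the object, e.g. (islocked="true")
ATTRS=`cleartool describe -fmt "%a" brtype:$BRANCH`
case "$ATTRS" in
  *islocked*)
    echo "Checkout refused: branch type $BRANCH is locked (islocked attribute set)." >&2
    exit 1             # a non-zero exit from a pre-op trigger aborts the operation
    ;;
esac
exit 0
The matching post-op checkin trigger would essentially run cleartool rmattr islocked brtype:main-br-foo to drop the attribute again.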

Related

In SQLite, do attached databases take on, or inherit, the pragma settings of the main database?

In SQLite, do attached databases take on or inherit the pragma settings of the "main" database? Or must I set the pragmas for each database after an ATTACH command, or take some other action to set the state information for all opened and attached databases?
I examined the SQLite documentation, both for attaching databases and for the pragma command, but could not find an answer:
> https://www.sqlite.org/lang_attach.html
> https://www.sqlite.org/pragma.html
Example:
SQLite documentation on ignore_check_constraints explains "the default setting is off, meaning that CHECK constraints are enforced by default."
> https://www.sqlite.org/pragma.html#pragma_ignore_check_constraints
Here, in this example, I open my main database, turn off the enforcement of check constraints, and attach two databases:
sqlite3 '/path_to_sqlite_databases/main_database.db'
PRAGMA ignore_check_constraints = true;
ATTACH '/path_to_sqlite_databases/database_one.db' AS [attached_first];
ATTACH '/path_to_sqlite_databases/database_two.db' AS [attached_second];
My question:
Are my attached databases, "attached_first" and "attached_second":
also set to ignore check constraints, inheriting the setting from the main database?
or set to the default of enforcing check constraints, so that I must specifically set each attached database to my desired state?
Must I wait to set my desired state until after I open and attach all databases, because the pragma command affects all open and attached databases?
Or must I take some other action to set the state of my databases?
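One way to settle this empirically is to attach a database, create a throwaway table with a CHECK constraint inside it, and try to violate the constraint; the file and table names below are hypothetical:
sqlite3 '/path_to_sqlite_databases/main_database.db'
PRAGMA ignore_check_constraints = true;
ATTACH '/path_to_sqlite_databases/database_one.db' AS [attached_first];
-- a throwaway table with a CHECK constraint in the attached database
CREATE TABLE attached_first.t_check_test (n INTEGER CHECK (n > 0));
-- if this insert succeeds, the pragma is affecting the attached database;
-- if it fails with a CHECK constraint violation, the attached database kept the default
INSERT INTO attached_first.t_check_test (n) VALUES (-1);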

Stop Cassandra Materialized View Build

Is there any way to stop the build of a materialized view in Cassandra (3.7)?
Background: I created two materialized views A and B (full disclosure: I may have attempted to drop them before the build was complete) and those views seem to be perpetually stuck... any attempt to create another view C on the same table seems to hang. Using nodetool:
nodetool viewbuildstatus <keyspace>.<view>
shows a combination of STARTED and UNKNOWN for A and B, and STARTED for C. Using cql:
select * from system.views_builds_in_progress
all views are listed, but generation_number and last_token have not changed in the last 24 hours (generation_number is in fact null for A).
It's not documented, but nodetool stop actually accepts any compaction type, not just the ones listed (and the view build is one of them). So you can simply run:
nodetool stop VIEW_BUILD
Or you can hit JMX directly with the stopCompaction operation of the org.apache.cassandra.db:type=CompactionManager MBean.
All that really does is set a flag telling the view builder to stop on its next loop. If it threw an uncaught exception or otherwise stopped doing anything (worth checking the system/output logs), the stop won't have any effect either. In that case it isn't really hurting anything, so you can ignore it and retry; worst case, restart the node.
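If you do go the JMX route, a sketch with the jmxterm CLI could look like the following; the jar name and the default JMX port 7199 are assumptions about your setup:
java -jar jmxterm-uber.jar -l localhost:7199 -n <<'EOF'
bean org.apache.cassandra.db:type=CompactionManager
run stopCompaction VIEW_BUILD
EOF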

Strategy for rolling back an altered table using liquibase

I want to migrate my database from v1.0 to v1.1, and one of the changes is an update to some of the values in Table1. I know that for an INSERT I can easily include a rollback command that deletes the values I just added, but what about a table alteration? Is there a way to store the current value and use this information for the rollback process (in the future)?
Thanks.
You can specify a <rollback> block (docs) in your changeSet to describe how to roll back the change. Within the rollback tag you can use raw SQL or a <createTable> tag to re-describe what the table looked like before it was altered.
You can also specify the changeSetId and changeSetAuthor in the rollback tag to point to an existing changeSet that will recreate the table. This approach can be easier if there have been no other changes since the object was created, but it doesn't work as well if multiple changeSets have modified the object since it was first created.
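For example, a minimal changeSet of that shape (the table and column names are made up) might look like:
<changeSet id="v1.1-alter-table1" author="example">
    <addColumn tableName="Table1">
        <column name="new_flag" type="boolean"/>
    </addColumn>
    <rollback>
        <dropColumn tableName="Table1" columnName="new_flag"/>
    </rollback>
</changeSet>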
Any DDL operation (ALTER TABLE being one of them) in SQL Server is transactional.
It means that you can open a transaction, make alterations to database objects, and roll back the transaction as if it never happened.
There are some exceptions, mainly actions involving filesystem operations (such as adding a file to a database).
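A quick way to see this in SQL Server (the table and column names are hypothetical):
BEGIN TRANSACTION;
ALTER TABLE Table1 ADD new_flag BIT NULL;
-- the new column exists inside the transaction...
ROLLBACK TRANSACTION;
-- ...and is gone again after the rollback, as if the ALTER never happened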

TFS - Merge relationship - How to exclude ?

We have a case here where a developer created a wrong branch. The branch should have been $\projectA\branch01\pg5Dev, created from $\projectA\main\pg5Dev\, but he created $\projectA\branch01\ from $\projectA\main\pg5Dev.
We deleted the folder and created the branch again, but the merge relationship in the merge wizard remains.
We need to know the database structure of the merge relationships in order to remove $\projectA\branch01\, because every time we do a merge, the wrong branch appears in the combobox of the merge wizard.
Please help us identify the tables in the database that hold this wrong record.
If the incorrect branch isn't needed, then I would recommend destroying it. Once it is destroyed, it will no longer show up in the combobox. You can destroy it by running "tf destroy <itemspec>". Note that a destroy is non-recoverable and it will delete all of the history for that branch.
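For reference, the invocation looks roughly like this; the server path and collection URL are placeholders for your own, and remember the operation cannot be undone:
tf destroy "$/projectA/branch01" /collection:http://your-tfs-server:8080/tfs/DefaultCollection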

Specify trigger's parent schema in trigger body

In DB2 for IBM System i I create this trigger for recording on MYLOGTABLE every insert operation made on MYCHECKEDTABLE:
SET SCHEMA MYSCHEMA;
CREATE TRIGGER MYTRIGGER AFTER INSERT ON MYCHECKEDTABLE
REFERENCING NEW AS ROWREF
FOR EACH ROW BEGIN ATOMIC
INSERT INTO MYLOGTABLE -- after creation becomes MYSCHEMA.MYLOGTABLE
(MMACOD, OPTYPE, OPDATE)
VALUES (ROWREF.ID, 'I', CURRENT TIMESTAMP);
END;
The DBMS stores the trigger body with MYSCHEMA.MYLOGTABLE hardcoded.
Now imagine that we copy the entire schema as a new schema, NEWSCHEMA. When I insert a record into NEWSCHEMA.MYCHECKEDTABLE, a log record will be added to MYSCHEMA.MYLOGTABLE instead of NEWSCHEMA.MYLOGTABLE, i.e. the copy in the schema where the trigger and its table live. This causes big issues, also because many users can copy the schema without my control...
So, is there a way to specify, in the trigger body, the schema where the trigger lives? In this way we'll write the log record in the correct MYLOGTABLE. Something like PARENT SCHEMA... Or is there a workaround?
Many thanks!
External triggers defined in an HLL (high-level language) have access to a trigger buffer that includes the library name of the table that fired the trigger. This could be used to qualify the reference to MYLOGTABLE.
See chapter 11.2 "Trigger program structure" of the IBM Redbook Stored Procedures, Triggers, and User-Defined Functions on DB2 Universal Database for iSeries for more information.
Alternatively you may be able to use the CURRENT SCHEMA special register or the GET DESCRIPTOR statement to find out where the trigger and/or table are currently located.
Unfortunately I realized that the schema where a trigger lives can't be detected from inside the trigger's body.
But there are some workarounds (thanks to @krmilligan too):
Take away the users' authority to execute CPYLIB and make them use a utility.
Create a background agent on the system that periodically runs, looking for triggers that are out of sync.
For the CPYLIB command, set the default for the TRG option to *NO (see the sketch below). That way triggers will never be copied unless the user explicitly specifies it.
I chose the last one because it's the simplest, even though there can be contexts where copying triggers is required. In such cases I'd take the first workaround.
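For the record, a sketch of that third workaround, assuming CPYLIB really exposes the TRG parameter as described above (CHGCMDDFT changes the default value of a command parameter):
CHGCMDDFT CMD(CPYLIB) NEWDFT('TRG(*NO)')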
