TFS - Merge relationship - How to exclude? - database

We have a case here where a developer created a wrong branch. The branch should have been $\projectA\branch01\pg5Dev, created from $\projectA\main\pg5Dev\, but he created $\projectA\branch01\ from $\projectA\main\pg5Dev instead.
We deleted the folder and created the branch again, but the merge relationship in the merge wizard remains.
We need to know the database structure of merge relationships so we can remove $\projectA\branch01\, because every time we perform a merge, the wrong branch appears in the combobox of the merge wizard.
Please, help us identify the tables in database that have this wrong record.

If the incorrect branch isn't needed, then I would recommend destroying it. Once it is destroyed, it will no longer show up in the combobox. You can destroy it by running "tf destroy". Note that a destroy is non-recoverable and it will delete all of the history for that branch.
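A hedged sketch of the command, using the branch path from the question (TFS server paths normally use forward slashes); the /preview switch shows what would be destroyed without actually doing it:
tf destroy /preview "$/projectA/branch01"
tf destroy "$/projectA/branch01"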

Related

Postgresql Database Design Questions (Trigger vs Function)

I am building a database for a CMS system and I am at a point where I am no longer sure which way to go, noting that all of the business logic is in the database layer (we use PostgreSQL 13 and the application is planned to be a SaaS):
1- The application has folders and documents associated with them. If we move a folder (or a group of folders in bulk) from its parent folder to another, then the permissions of the folder, as well as of the underlying documents, must follow the permissions of the new location (an update to a permissions table is issued). Is this better enforced via an AFTER statement trigger, or do we need to force all of the code to call a single method that moves the folder and documents and updates their permissions?
2- Wouldn't it make more sense to have an AFTER statement trigger rather than an AFTER row trigger in all cases, since they do the same thing, but with statement triggers you can process all affected rows in bulk (and thus more efficiently)? If I were to enforce inserting a record into another table whenever an update or an insert takes place, it would perform similarly for a single row, but would be a lot faster for 1,000 rows with a statement-level trigger (since I can easily do INSERT INTO ... SELECT * FROM new_table).
You need a row-level trigger, or a statement-level trigger with transition tables, so that you know which rows were affected by the statement. To avoid repetition, the latter might be the better choice.
Rather than modifying permissions whenever you move an object, you could figure out the permissions when you query the table by recursively following the chain of containment. The question here is whether you prefer to do the extra work when you modify the data or when you query the data.
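A minimal sketch of the transition-table approach for point 2, with hypothetical table and column names (folders, folder_audit); the trigger fires once per statement and logs every affected row in a single bulk INSERT:
CREATE OR REPLACE FUNCTION log_folder_updates() RETURNS trigger AS $$
BEGIN
    -- new_rows is the transition table holding every row touched by the statement
    INSERT INTO folder_audit (folder_id, new_parent_id, changed_at)
    SELECT id, parent_id, now()
    FROM new_rows;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER folders_updated
    AFTER UPDATE ON folders
    REFERENCING NEW TABLE AS new_rows
    FOR EACH STATEMENT
    EXECUTE FUNCTION log_folder_updates();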

Order by creation time in OpenEdge

Is there an automatic way of knowing which rows are the latest to have been added to an OpenEdge table? I am working with a client and have access to their database, but they are not saving ids nor timestamps for the data.
I was wondering if, hopefully, OpenEdge is somehow doing this out of the box. (I doubt it is but it won't hurt to check)
Edit: My Goal
My goal from this is to be able to import only the new data, i.e. the delta, of a specific table. Without knowing which rows are new, I am forced to import everything because I have no clue what was added.
1) Short answer is No - there's no "in the box" way for you to tell which records were added, or the order they were added.
The only way to tell the order of creation is by applying a sequence or by time-stamping the record. Since your application does neither, you're out of luck.
2) If you're looking to detect changes without making schema changes, you can use session or database triggers to capture updates to the db, and save that activity log somewhere.
3) If you're just looking for a "delta" - you can take a periodic backup of the database, and then use queries to compare the current db with the backup db and get the differences that way.
4) Maintain a db on the customer site with the contents of the last table dump. The next time you want to get deltas from the customer, compare that table's contents with the current table, dump the differences, then update the db table to match the current db's table.
5) Personally, I'd talk to the customer and (a) see if they actually require this functionality, and (b) find out what they think about adding some fields and a bit of code to the system to get an activity log. Adding a few fields and some code to update them shouldn't be that big of a deal.
You could use database triggers to meet this need. In order to do so you will need to be able to write and deploy trigger procedures. And you need to keep in mind that the 4GL and SQL-92 engines do not recognize each other's triggers. So if updates are possible via SQL, 4GL triggers will be blind to those updates. And vice-versa. (If you do not use SQL none of this matters.)
You would probably want to use WRITE triggers to catch both insertions and updates to data. Do you care about deletes?
Simple-minded 4GL WRITE trigger:
TRIGGER PROCEDURE FOR WRITE OF Customer.
/* OLD BUFFER oldCustomer. */
/* OLD BUFFER is optional and not needed in this use case ... */
output to "customer.dat" append.
export customer.
output close.
return.

Safe/reliable/standard process for making major changes to a database with existing data?

I would like to take one table that is heavy with flags and fields, and break it into smaller tables. The parent table to be revised/broken down already contains live data that must be handled with care.
Here is my plan of attack, which I'm hoping to execute this weekend while no one is using the system.
Create the new tables that we will need
Rename the existing parent table, ParentTable, to ParentTableOLD
Create a new table called ParentTable with the unneeded fields gone, and new fields added
Run a procedure to copy the entries in ParentTableOLD to the new tables, mapping old data to new tables/fields where applicable
Delete the ParentTableOLD table
The above seems pretty reasonable and simple to me; I'm fairly certain it will work. I'm interested in other techniques to achieve this (the above is the only thing I can think of), as well as any kind of tools to help stay organized. Right now I'm running on pen and paper.
Reason I ask is that several times now, I've been re-inventing the wheel just because I didn't know any better, and someone more experienced came along and saw what I was doing and said, "oh there's a built-in way to help do this," or, "there's a simpler way to do this." I did coding for months and months with Visual Studio before someone stopped by and said "you know about breakpoints to step through the code, yeah?" --- life changing, hah.
I have SQL Server 2008 R2 with SSMS.
A good trick to assist you in creating your '_old' tables is:
SELECT *
INTO mytable_old
FROM mytable
SELECT INTO will copy all of the data and create your table for you in one step.
This said - I would actually retain the current table names and instead copy everything into another schema. This will make adapting queries and reports to run over the old schema (where needed) a lot easier than having to add '_old' to all the names (since instead you can just find/replace the schema names).
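A minimal sketch of that, assuming an archive schema named 'archive' (the name is illustrative):
CREATE SCHEMA archive;
GO

-- same SELECT INTO trick, but the copy keeps its original name in the new schema
SELECT *
INTO archive.ParentTable
FROM dbo.ParentTable;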
If at all possible, I'd be doing this in some sort of test environment first and foremost. If you have external applications that rely on the database, then make sure they all run against your modified structure without any hiccups.
Also do a search on your database objects that might reference the table you are going to rename. For example:
SELECT Name
FROM sys.procedures
WHERE OBJECT_DEFINITION(OBJECT_ID) LIKE '%MyTable%'
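That only covers stored procedures; a broader (hedged) variant that also catches views, functions, and triggers could look like this:
SELECT OBJECT_SCHEMA_NAME(object_id) AS SchemaName,
       OBJECT_NAME(object_id) AS ObjectName
FROM sys.sql_modules
WHERE definition LIKE '%MyTable%'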
Try to ensure some sort of functional equivalence between queries against your new and old schemas. Have some query/queries that can be run against your renamed table, and then have reworked versions referencing your new table structure. This way you can make sure the data returned is the same for both structures. If possible, prepare these all ahead of time so that it is simply a series of checks you can do once you've made your modifications; if there are differences, this can help you decide whether to proceed with the change or back it out.
Lastly, have a plan for how you could revert to the old schema if something catastrophic were to occur. If you'd been working with your new table structure for a period of time and then discovered a major issue, could you revert back to the old table and successfully get the data out of your modified table structure back into the old table? Basically, follow the Boy Scout rule and be prepared.
This isn't really an answer for your overall problem, but a couple of tools that you might find useful for your Step 4 are RedGate's SQL Compare and Data Compare. SQL Compare will perform schema migrations, and Data Compare will help migrate data. You can move data to new columns and new tables, populate default values, and sync from dev to production, among other things.
You can make your changes in a dev environment with production data, and when you're satisfied with the process, do the actual migration in production.
Make a backup of the database (for reference: http://msdn.microsoft.com/en-us/library/ms187510.aspx) and then perform the required steps. If everything goes fine, go ahead; otherwise, restore the old database (for reference: http://msdn.microsoft.com/en-us/library/ms177429.aspx).
You can even automate this backup process to run, say, every week.
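A minimal sketch of the backup and restore commands, with a hypothetical database name and file path:
BACKUP DATABASE MyDb
TO DISK = 'D:\Backups\MyDb_before_migration.bak'
WITH INIT;

-- only if things go wrong:
RESTORE DATABASE MyDb
FROM DISK = 'D:\Backups\MyDb_before_migration.bak'
WITH REPLACE;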

What are good strategies for updating a live database table?

I have a db table that gets entirely re-populated with fresh data periodically. This data needs to be then pushed into a corresponding live db table, overwriting the previous live data.
As the table size increases, the time required to push the data into the live table also increases, and the app would look like it's missing data.
One solution is to push the new data into a live_temp table and then run an SQL RENAME command on this table to rename it as the live table. The rename usually runs in sub-second time. Is this the "right" way to solve this problem?
Are there other strategies or tools to tackle this problem? Thanks.
I don't like messing with schema objects in this way - it can confuse query optimizers and I have no idea what will happen to any transactions that are going on while you execute the rename.
I much prefer to add a version column to the table, and have a separate table to hold the current version.
That way, the client code becomes
select *
from myTable t,
myTable_currentVersion tcv
where t.versionID = tcv.CurrentVersion
This also keeps history around - which may or may not be useful; if it's not, delete old records after setting the CurrentVersion column.
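A hedged sketch of the corresponding load step, reusing the names from the query above (the staging_table source is an assumption): load the new rows under a new version id, then flip the pointer with one small update.
DECLARE @newVersion int = ISNULL((SELECT MAX(versionID) FROM myTable), 0) + 1;

BEGIN TRANSACTION;

INSERT INTO myTable (versionID, col1, col2)
SELECT @newVersion, col1, col2
FROM staging_table;

-- the switch itself is a single-row update, so readers see the new data instantly
UPDATE myTable_currentVersion
SET CurrentVersion = @newVersion;

COMMIT;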
Create a duplicate table - exact copy.
Create a new table that does nothing more than keep track of the "up to date" table.
MostCurrent (table)
id (column) - holds name of table holding the "up to date" data.
When repopulating, populate the older table and update MostCurrent.id to reflect this table.
Now, in your app where you bind the data to the page, bind the newest table.
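A minimal sketch of that pointer table, with illustrative table names (LiveData_A / LiveData_B are assumptions):
CREATE TABLE MostCurrent (id sysname NOT NULL);  -- holds the name of the "up to date" table

-- after repopulating LiveData_B:
UPDATE MostCurrent SET id = 'LiveData_B';

-- the app reads the pointer and binds to whichever table it names
SELECT id FROM MostCurrent;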
Would it be appropriate to only push changes to the live db table? For most applications I have worked with, changes have been minimal. You should be able to apply all the changes in a single transaction. Committing the transaction will make them visible with no outage on the table.
If the data does change entirely, then you could configure the database so that you can replace all the data in a single transaction.
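A hedged sketch of pushing only the changes, assuming a staging table and an id key (the MERGE syntax shown is SQL Server's):
BEGIN TRANSACTION;

MERGE live_table AS t
USING staging_table AS s
    ON t.id = s.id
WHEN MATCHED THEN
    UPDATE SET t.col1 = s.col1, t.col2 = s.col2
WHEN NOT MATCHED BY TARGET THEN
    INSERT (id, col1, col2) VALUES (s.id, s.col1, s.col2)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;

COMMIT;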

Confirm before delete/update in SQL Management Studio?

So for the second day in a row, someone has wiped out an entire table of data as opposed to the one row they were trying to delete because they didn't have the qualified where clause.
I've been all up and down the mgmt studio options, but can't find a confirm option. I know other tools for other databases have it.
I'd suggest that you always write the SELECT statement with the WHERE clause first and execute it to actually see which rows your DELETE command will delete. Then just execute the DELETE with the same WHERE clause. The same applies to UPDATEs.
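For example (table and key are placeholders):
SELECT *
FROM Orders
WHERE OrderID = 12345;

-- once the SELECT returns exactly the rows you expect:
DELETE
FROM Orders
WHERE OrderID = 12345;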
Under Tools>Options>Query Execution>SQL Server>ANSI, you can enable the Implicit Transactions option which means that you don't need to explicitly include the Begin Transaction command.
The obvious downside of this is that you might forget to add a Commit (or Rollback) at the end, or worse still, your colleagues will add Commit at the end of every script by default.
You can lead the horse to water...
You might suggest that they always take an ad-hoc backup before they do anything (depending on the size of your DB) just in case.
Try using a BEGIN TRANSACTION before you run your DELETE statement.
Then you can choose to COMMIT or ROLLBACK same.
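A minimal sketch (Orders / OrderID are placeholders):
BEGIN TRANSACTION;

DELETE FROM Orders
WHERE OrderID = 12345;

-- check the rows-affected count (and/or re-run your SELECT), then either
COMMIT TRANSACTION;
-- or, if it deleted more than you intended:
-- ROLLBACK TRANSACTION;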
In SSMS 2005, you can enable this option under Tools|Options|Query Execution|SQL Server|ANSI ... check SET IMPLICIT_TRANSACTIONS. That will require a commit to affect update/delete queries for future connections.
For the current query, go to Query|Query Options|Execution|ANSI and check the same box.
This page also has instructions for SSMS 2000, if that is what you're using.
As others have pointed out, this won't address the root cause: it's almost as easy to paste a COMMIT at the end of every new query you create as it is to fire off a query in the first place.
First, this is what audit tables are for. If you know who deleted all the records, you can either restrict their database privileges or deal with them from an employee-performance perspective. The last person who did this at my office is currently on probation; if she does it again, she will be let go. You have responsibilities if you have access to production data, and ensuring that you cause no harm is one of them. This is a personnel problem as much as a technical problem.
You will never find a way to prevent people from making dumb mistakes (the database has no way to know if you meant "delete table a" or "delete table a where id = 100", and a confirm will get hit automatically by most people). You can only try to reduce them by making sure the people who run this code are responsible and by putting policies in place to help them remember what to do. Employees who have a pattern of behaving irresponsibly with your business data (particularly after they have been given a warning) should be fired.
Others have suggested the kinds of things we do to prevent this from happening. I always embed a select in a delete that I'm running from a query window to make sure it will delete only the records I intend. All our code on production that changes, inserts or deletes data must be enclosed in a transaction. If it is being run manually, you don't run the rollback or commit until you see the number of records affected.
Example of delete with embedded select
delete a
--select a.* from
from table1 a
join table2 b on a.id = b.id
where b.somefield = 'test'
But even these techniques can't prevent all human error. A developer who doesn't understand the data may run the select and still not understand that it is deleting too many records. Running in a transaction may mean you have other problems when people forget to commit or rollback and lock up the system. Or people may put it in a transaction and still hit commit without thinking just as they would hit confirm on a message box if there was one. The best prevention is to have a way to quickly recover from errors like these. Recovery from an audit log table tends to be faster than from backups. Plus you have the advantage of being able to tell who made the error and exactly which records were affected (maybe you didn't delete the whole table but your where clause was wrong and you deleted a few wrong records.)
For the most part, production data should not be changed on the fly. You should script the change and check it on dev first. Then on prod, all you have to do is run the script with no changes rather than highlighting and running little pieces one at a time. Now, in the real world this isn't always possible, as sometimes you are fixing something broken only on prod that needs to be fixed now (for instance, when none of your customers can log in because critical data got deleted). In a case like this, you may not have the luxury of reproducing the problem first on dev and then writing the fix. When you have these types of problems, you may need to fix directly on prod, and you should have only DBAs, database analysts, configuration managers, or others who are normally responsible for data on prod do the fix, not a developer. Developers in general should not have access to prod.
That is why I believe you should always:
1 Use stored procedures that are tested on a dev database before deploying to production
2 Select the data before deletion
3 Screen developers using an interview and performance evaluation process :)
4 Base performance evaluation on how many database tables they do/do not delete
5 Treat production data as if it were poisonous and be very afraid
So for the second day in a row, someone has wiped out an entire table of data as opposed to the one row they were trying to delete because they didn't have the qualified where clause
Probably the only solution will be to replace someone with someone else ;). Otherwise they will always find their workaround.
Alternatively, restrict the database access for that person, provide them with a stored procedure that takes the parameter used in the where clause, and grant them access to execute that stored procedure.
Put on your best Trogdor and Burninate until they learn to put in the WHERE clause.
The best advice is to get the muckety-mucks that are mucking around in the database to use transactions when testing. It goes a long way towards preventing "whoops" moments. The caveat is that now you have to tell them to COMMIT or ROLLBACK because for sure they're going to lock up your DB at least once.
Lock it down:
REVOKE delete rights on all your tables.
Put in an audit trigger and audit table.
Create parametrized delete SPs and only give rights to execute on an as-needed basis (a sketch of this, together with the REVOKE, follows below).
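A minimal sketch of the first and third points, with illustrative object names (Orders, OrderID, app_users):
REVOKE DELETE ON dbo.Orders FROM app_users;
GO

CREATE PROCEDURE dbo.DeleteOrder
    @OrderID int
AS
    -- the WHERE clause is baked in, so a whole-table delete is impossible here
    DELETE FROM dbo.Orders
    WHERE OrderID = @OrderID;
GO

GRANT EXECUTE ON dbo.DeleteOrder TO app_users;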
Isn't there a way to give users the results they need without providing raw access to SQL? If you at least had a separate entry box for "WHERE", you could default it to "WHERE 1 = 0" or something.
I think there must be a way to back these out of the transaction journaling, too. But probably not without rolling everything back, and then selectively reapplying whatever came after the fatal mistake.
Another ugly option is to create a trigger to write all DELETEs (maybe over some minimum number of records) to a log table.
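A hedged sketch of such a trigger, assuming an Orders table and a matching log table:
CREATE TABLE Orders_DeleteLog (
    OrderID   int,
    DeletedAt datetime DEFAULT GETDATE(),
    DeletedBy sysname  DEFAULT SUSER_SNAME()
);
GO

CREATE TRIGGER trg_Orders_LogDeletes
ON dbo.Orders
AFTER DELETE
AS
    -- only log "bulk" deletes, e.g. statements that remove more than 10 rows
    IF (SELECT COUNT(*) FROM deleted) > 10
        INSERT INTO Orders_DeleteLog (OrderID)
        SELECT OrderID FROM deleted;
GO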
