I'm looking for a way to manually adjust TFS task start dates so that my burndown appears correct.
Essentially the iteration has fixed start/end dates and some user stories did not get filled out until halfway through the iteration.
This made the burndown have a bump in the road so it looks like we are below target.
I have full access to the TFS database and am wondering what queries I would need to write to get my tasks backdated to the start of the iteration.
I have read somewhere that it is System.AuthorizedDate that controls the burndown chart.
Any help appreciated.
J
You are correct on System.AuthorizedDate being used.
You won't be able to change System.AuthorizedDate by means of the public API; it won't let you. And you cannot change System.AuthorizedDate by means of SQL update commands and remain in a supported state. Officially, Microsoft does not allow direct SQL changes while still supporting you, unless those changes were made under their guidance, such as through a support incident.
I doubt a support incident with Microsoft will yield the update query, since this is not a defect and, as I explain below, it could put you in a very bad place. Could you write a series of updates against the appropriate tables to backdate System.AuthorizedDate? Without doubt. It might even work, but I am not certain it would if you dared to try. The reason is that work items receive System.Id numbers sequentially as they are created. I do know that in version control the system expects a higher changeset number to have a later commit date (I can't recall the exact field name) than any lower changeset number. It would not surprise me if there are similar expectations for work items. You might find that changing the field via SQL produces errors or unexpected outcomes in various places - I can imagine a future upgrade or even an update simply failing and being unable to proceed. That's all hypothetical, though, because unless you want your environment in an unsupported state you would not change it via SQL.
Short of creating your own burndown that evaluates the data differently, I am not aware of a way to meet your goal under those conditions.
I have an MS Access front-end linked to a SQL Server database.
If some column is required, then the natural thing to do is to include NOT NULL in that column's definition (at the database level). But that seems to create problems on the Access side. When you bind a form to that table, the field bound to that column ends up being pretty un-user-friendly. If the user erases the text from that field, they will not be able to leave the field until they enter something. Each time they try to leave the field while it's blank, they will get this error:
You tried to assign the Null value to a variable that is not a Variant data type.
That's a really terrible error message - even for a developer, let alone the poor user. Luckily, I can silence it or replace it with a better message with some code like this:
Private Sub Form_Error(DataErr As Integer, Response As Integer)
    If DataErr = 3162 Then
        Response = acDataErrContinue
        ' <check which field is blank>
        MsgBox "<some useful message>"
    End If
End Sub
But that's only a partial fix. Why shouldn't the user be able to leave the field? No decent modern UI restricts focus like that (think web sites, phone apps, desktop programs - anything, really). How can we get around this behavior of Access with regard to required fields?
I will post the two workarounds I have found as an answer, but I am hoping there are better ways that I have overlooked.
Rather than changing backend table definitions or trying to "trick" Access with out-of-sync linked table definitions, instead just change the control(s) for any "NOT NULL" column from a bound to an unbound field (i.e., clear the ControlSource property and change the control name, for example by adding a prefix, to avoid annoying collisions with the underlying field name).
This solution will definitely be less "brittle", but it will require you to manually add binding code to a number of other form events. To provide a consistent experience with other Access controls and forms, I would at least implement Form_AfterInsert(), Form_AfterUpdate(), Form_BeforeInsert(), Form_BeforeUpdate(), Form_Current(), Form_Error(), Form_Undo().
P.S. Although I do not recall seeing such a poorly-worded error message before, the overall behavior described is identical for an Access table column with Required = True, which is the Access UI equivalent of NOT NULL column criteria.
I would suggest, if you can, simply changing all tables on the SQL Server side to allow nulls for those text columns. For bit and number columns, default them to 0 on the SQL Server side. Our industry tends to suggest avoiding nulls, and many a developer ALSO wants to avoid nulls, so they un-check "allow nulls" on the SQL Server side. The problem is you can never run away from nulls anyway. Take a simple query of, say, customers and their last invoice number + invoice total. It is VERY common to include customers that have not bought anything in that list (customers without invoices yet, or any of a gazillion possible cases where the child record(s) don't yet exist). I find about 80% or MORE of my queries in a typical application are LEFT joins. That means any parent record without child records will return ALL of those child columns as null. You are going to work with, see, and HAVE to deal with tons and tons of nulls in an application EVEN if your table designs NEVER allow nulls. You cannot avoid them - you simply cannot run away from those nasty nulls.
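For example, with a hypothetical customer/invoice pair of tables, this very ordinary query returns NULL invoice columns for every customer who has not bought anything yet, no matter how strictly the tables themselves forbid nulls:

    -- Hypothetical tables, just to illustrate why LEFT joins hand you nulls anyway.
    SELECT c.CustomerName, i.InvoiceNumber, i.InvoiceTotal
    FROM dbo.Customers AS c
    LEFT JOIN dbo.Invoices AS i
           ON i.CustomerId = c.CustomerId;
    -- Customers with no invoices come back with InvoiceNumber and InvoiceTotal as NULL.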
Since you will see lots of nulls in code and in any SQL query anyway (those VERY common left joins), by far the best solution is to simply set all text columns to allow nulls. I can also state that if an application designer does not put their foot down and make a strong choice to ALWAYS use nulls, then the creeping in of both NULL and ZLS (zero-length string) data is a much worse issue to deal with.
The problem becomes very nasty and painful if you do not have control over the schema or cannot make this choice.
At the end of the day, Access simply does not work well with SQL Server when you allow ZLS columns.
For a migration to SQL Server (and I have been doing them for 10+ years), going with nulls for all text columns is without question by far the easiest choice here.
So I recommend that you not attempt to code around this issue but simply change all your SQL tables to default to, and allow, nulls for empty columns.
The above may require some minor modifications to the application, but the pain and effort will be far less than attempting to fix or code around Access's poor support (actually non-support) of ZLS columns when working with SQL Server.
I will also note that this is not a great suggestion, but it is simply the best one given the limitations of how Access works with SQL Server. Some database systems (Oracle, for example) treat a zero-length string as null, and thus you don't have to care about a query like this:
SELECT * FROM tblCustomers WHERE (City IS NULL) OR (City = '')
As the above shows, the instant you allow both ZLS and nulls into your application is the SAME instant you create a huge monster mess. The scholarly debate about nulls being undefined is a debate for another day.
If you are developing with Access + SQL Server, then you need to adopt a standard approach - I recommend that all text and date columns be set to allow nulls, and that number and bit columns default to 0.
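A rough T-SQL sketch of that convention (the table and column names here are hypothetical):

    -- Text and date columns: allow NULL.
    ALTER TABLE dbo.Customers ALTER COLUMN City NVARCHAR(50) NULL;
    ALTER TABLE dbo.Customers ALTER COLUMN LastOrderDate DATETIME NULL;
    -- Number and bit columns: keep them NOT NULL but default them to 0.
    ALTER TABLE dbo.Customers ADD CONSTRAINT DF_Customers_IsActive DEFAULT (0) FOR IsActive;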
This comes down to which is less pain and work.
Either attempt some MAJOR modifications to the application, say un-binding text columns (that can be a huge amount of work).
Or
Simply set all text columns to allow nulls. It is the lesser evil in this case, and you have to conform to the bag of tools that has been handed to you.
So I don't have a workaround, only a path to take that will result in the least amount of work and pain. That least-pain road is to go with allowing nulls. This suggestion will only work, of course, if you can make that choice.
The two workarounds I have come up with are:
Don't make the database column NOT NULL and rely exclusively on Access forms for data integrity rather than the database. Readers of that table will be burdened with an ambiguous column that will not contain nulls in practice (as long as the form-validation code is sound) but could contain nulls in theory due to the way the column is defined within the database. Not having that 100% guarantee is bothersome but may be good enough in reality.
Verdict: easy but sloppy - proceed with caution
Abuse the fact that Access' links to external tables have to be refreshed manually. Make the column NULL in SQL Server, refresh the link in Access, and then make the column NOT NULL again in SQL Server - but this time, don't refresh the link in Access.
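Roughly, the SQL Server side of that trick looks like this (table and column names are placeholders):

    -- Step 1: make the column nullable, then refresh the linked table in Access.
    ALTER TABLE dbo.SomeTable ALTER COLUMN RequiredText NVARCHAR(100) NULL;
    -- Step 2: once the link has been refreshed, put the constraint back,
    -- and do NOT refresh the link again.
    ALTER TABLE dbo.SomeTable ALTER COLUMN RequiredText NVARCHAR(100) NOT NULL;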
The result is that Access won't realize the field is NOT NULL and, therefore, will leave the user alone. They can move about the form as desired without getting cryptic error 3162 or having their focus restricted. If they try to save the form while the field is still blank, they will get an ODBC error stemming from the underlying database. Although that's not desirable, it can be avoided by checking for blank fields in Form_BeforeUpdate() and providing the user with an intelligible error message instead.
Verdict: better for data integrity but also more of a pain to maintain, sort of hacky/astonishing, and brittle in that if someone refreshes the table link, the dreaded error / focus restriction will return - then again, that worst-case scenario isn't catastrophic because the consequence is merely user annoyance, not data-integrity problems or the application breaking
I've been reading around the forums and documentation, and I can't seem to find anything related to what I am looking for, which is a huge surprise to me as it would seem to be a common requirement, so I suspect that there is a better way of approaching this.
I have a database, which I want to run a SQL Consumer on, and I want to query only records that have been modified since the last time I queried.
It appears that you cannot parameterise a SQL Consumer query, which would seem to be the first hurdle, and secondly, even if I could parameterise the consumer query, I don't appear to be able to store the result between one query and the next.
My assumption is that I would want to store the highest dateModified value, and subsequently query records where the dateModified value is strictly greater than the stored value.
(I realise that this is not foolproof, as there could be millisecond issues, but I can't think of another way of achieving this without changing the application or database.)
The only way I can see of using a SQL Consumer is to store the highest dateModified in a custom table in the system database (which I would rather not change) and include some sort of
WHERE dateModified > interfaceDataTable.lastDateModified
in the SQL Query, and an
UPDATE interfaceDataTable SET lastDateModified = :#$latestDateModifiedValue
in the onConsume SQL.
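For concreteness, a minimal sketch of that watermark idea (interfaceDataTable is the hypothetical tracking table from above, and the onConsume parameter placeholder is the one used in the question rather than verified Camel syntax):

    -- One-row table holding the high-water mark.
    CREATE TABLE interfaceDataTable (lastDateModified DATETIME NOT NULL);
    INSERT INTO interfaceDataTable (lastDateModified) VALUES ('1900-01-01');

    -- Consumer query: only rows changed since the last run.
    SELECT s.*
    FROM sourceTable AS s
    CROSS JOIN interfaceDataTable AS i
    WHERE s.dateModified > i.lastDateModified;

    -- onConsume: advance the high-water mark.
    UPDATE interfaceDataTable SET lastDateModified = :#$latestDateModifiedValue;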
However, I'd much rather not make any changes to the source database, as that will have further implications for testing etc.
I have the sense I'm barking up the wrong tree here. Is there a better way of approaching this?
Yes, this is currently not supported in camel-sql - the query cannot take dynamic parameters, such as the result of calling a Java bean method, etc.
I have logged a ticket to see if we can implement this: https://issues.apache.org/jira/browse/CAMEL-12734
I have a production SQL-Server DB (reporting) that has many Stored Procedures.
The SPs are publicly exposed to the external world in different ways:
- some users have direct access to the SPs,
- some are exposed via a web service,
- while others are encapsulated as interfaces through a DCOM layer.
The user base is large and we do not know exactly which user-set uses which method of accessing the DB.
We get frequent (about 1 every other month) requests from user-sets for modifying an existing SP by adding one column to the output or a group of columns to the existing output, all else remaining same.
We initially started doing this by modifying the existing SP and adding the newly requested columns to the end of the output. But this broke the custom tools built by some other user bases as their tool had the number of columns hardcoded, so adding a column meant they had to modify their tool as well.
Also, for some columns, complex logic is required to get them into the report, which meant the SP performance degraded, affecting all users - even those who did not need the new column.
We are thinking of various ways to fix this:
1 Default Parameters to control flow
Update the existing SP and control the new functionality by adding a flag as a default parameter to control the code path. With a default parameter, the new functionality is executed only when the parameter is set to true; by default it is false, so existing callers are unaffected.
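A hedged sketch of this approach (the procedure, column, and function names are invented for illustration):

    ALTER PROCEDURE dbo.GetSalesReport
        @IncludeNewColumns BIT = 0   -- existing callers omit this and keep the old output shape
    AS
    BEGIN
        IF @IncludeNewColumns = 0
            SELECT OrderId, OrderDate, Total
            FROM dbo.Orders;
        ELSE
            SELECT OrderId, OrderDate, Total,
                   dbo.SomeComplexCalculation(OrderId) AS NewColumn  -- hypothetical expensive logic
            FROM dbo.Orders;
    END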
Advantages
No new object is required.
Ongoing maintenance is not affected.
Testing overhead remains under control.
Disadvantages
Since an existing SP is modified, it will need testing of the existing functionality as well as the new functionality.
Since we have no inkling of how the client tools are calling the SPs, we can never be sure that we have not broken anything.
It will be difficult to handle if the same report gets modified again with more requests - that will mean more flags, and the code will become unreadable.
2 New Stored procedure
A new stored procedure will be created for any requirement that changes the signature (input/output) of the SP. The new SP will call the original stored procedure for the existing output and add the logic for the new requirement on top of it.
Advantages
The benefit is that there is no impact on the existing procedure, hence no testing is required for the old logic.
Disadvantages
New objects need to be created in the database whenever changes are requested. This adds overhead to database maintenance.
Will the execution plan change when a new parameter is added? If so, this could adversely affect users who did not request the new column.
Considering that an SP is a public interface to the DB and interfaces should be immutable, should we go for option 2?
What is the best practice, or does it depend on the case, and what should be the main driving factors when choosing an option?
Thanks in advance!
Quoting from a disadvantage for your first option:
It will be difficult to handle if the same report gets modified again with more requests - that will mean more flags, and the code will become unreadable.
Personally I feel this is the biggest reason not to modify an existing stored procedure to accommodate the new columns.
When bugs come up with a stored procedure that has several branches, it can become very difficult to debug. Also as you hinted at, the execution plan can change with branching/if statements. (sql using different execution plans when running a query and when running that query inside a stored procedure?)
This is very similar to object oriented coding and your instinct is correct that it's best to extend existing objects instead of modify them.
I would go for approach #2. You will have more objects, but at least when an issue comes up, you will be able to know the affected stored procedure has limited scope/impact.
Over time I've learned to grow objects/data structures horizontally, not vertically. In other words, just make something new, don't keep making existing things bigger and bigger and bigger.
Ok. #2. Definitely. No doubt.
#1 says: "change the existing procedure", causing things to break. No way that's a good thing! Your customers will hate you. Your code just gets more complex meaning it is harder and harder to avoid breaking things leading to more hatred. It will go horribly slowly, and be impossible to tune. And so on.
For #2 you have a stable interface. No hatred. Yay! Seriously, "yay" as in "I still have a job!" as opposed to "boo, I got fired for annoying the hell out of my customers". Seriously. Never ever do #1 for that reason alone. You know this is true. You know it!
Having said that, record what people are doing. Take a user-id as a parameter. Log it. Know your users. Find the ones using old crappy code and ask them nicely to upgrade if necessary.
Your reason given to avoid number 2 is proliferation. But that is only a problem if you don't test stuff. If you do test stuff properly, then proliferation is happening anyway, in your tests. And you can always tune things in #2 if you have to, or at least isolate performance problems.
If the fatter procedure is really great, then retrofit the skinny version with a slimmer version of the fat one. In SQL this is tricky, but copy/paste and cut down your select column list works. Generally I just don't bother to do this. Life is too short. Having really good test code is a much better investment of time, and data schema tend to rarely change in ways that break existing queries.
Okay. Rant over. Serious message. Do #2, or at the very least do NOT do #1 or you will get yourself fired, or hated, or both. I can't think of a better reason than that.
Easier to go with #2. Nullable SP parameters can create some very difficult to locate bugs. Although, I do employ them from time to time.
Especially when you start getting into joins on nulls and ANSI settings. The way you write the query will change the results dramatically. KISS (keep it simple, stupid).
Also, if it's a parameterized search for reporting or displaying, I might consider a super-fast fetch of data into a LINQ-able object. Then you can search an in-memory list rather than re-fetching from the database.
#2 could be a better option than #1, particularly considering bullet 3 of the disadvantages of #1, since requirements keep changing most of the time. I feel this way because the disadvantages dominate the advantages on either side.
I would also vote for #2. I've seen a few stored procedures that take #1 to the extreme: the SP has a parameter @Option and a few parameters @param1, @param2, .... The net effect is a single stored procedure that tries to play the role of many stored procedures.
The main disadvantage to #2 is that there are more stored procedures. It may be more difficult to find the one you're looking for, but I think that is a small price to pay for the other advantages you get.
I want to make sure also that you don't just copy and paste the original stored procedure and add some columns. I've seen too many of those as well. If you are only adding a few columns, you can call the original stored procedure and join in the new columns. This will incur a performance penalty compared to having those columns readily available in a refactored procedure, but you won't have to change your original stored procedure (as refactoring for good performance and no code duplication would require), nor will you have to maintain two copies of the code (as a copy-and-paste-for-performance approach would).
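As a hedged sketch of that join-in pattern (all names are made up, and the temp-table column list must match whatever the original proc actually returns):

    CREATE PROCEDURE dbo.GetSalesReport_WithRegion
    AS
    BEGIN
        -- Capture the original proc's output unchanged.
        CREATE TABLE #Original (OrderId INT, OrderDate DATETIME, Total MONEY);
        INSERT INTO #Original
            EXEC dbo.GetSalesReport;

        -- Join in only the newly requested column(s).
        SELECT o.OrderId, o.OrderDate, o.Total, r.RegionName
        FROM #Original AS o
        JOIN dbo.OrderRegions AS r
          ON r.OrderId = o.OrderId;
    END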
I am going to suggest a couple of other options based on the options you gave.
Alternative option #1: Add another parameter, but instead of making it a default parameter, base it on the customer name. That way Customer A can get his specialized report and Customer B can get his slightly different customized report. This adds a ton of work, as updates to the 'Main' portion would have to be copied into all the specialty customer branches.
You could do this with branching 'if' statements.
Alternative option #2: Add new stored procedures, appending the customer's name to the stored procedure name. Maintenance-wise this might be a little more difficult, but it achieves the same end result: each customer gets his own report type.
Option #2 is the one to choose.
You yourself mentioned (dis)advantages.
While you consider adding new objects to the DB based on requirement changes, add only the necessary objects, so your new SPs don't become bigger and difficult to maintain.
I'm working on a basic syncing algorithm for a user's notes. I've got most of it figured out, but before I start programming it, I want to run it by here to see if it makes sense. Usually I end up not realizing one huge important thing that someone else easily saw that I couldn't. Here's how it works:
I have a table in my database where I insert objects called SyncOperation. A SyncOperation is a sort of metadata on the nature of what every device needs to perform to be up to date. Say a user has 2 registered devices, firstDevice and secondDevice. firstDevice creates a new note and pushes it to the server. Now, a SyncOperation is created with the note's Id, operation type, and processedDeviceList. I create a SyncOperation with type "NewNote", and I add the originating device ID to that SyncOperation's processedDeviceList. So now secondDevice checks in to the server to see if it needs to make any updates. It makes a query to get all SyncOperations where secondDeviceId is not in the processedDeviceList. It finds out its type is NewNote, so it gets the new note and adds itself to the processedDeviceList. Now this device is in sync.
When I delete a note, I find the already created SyncOperation in the table with type "NewNote". I change the type to Delete, remove all devices from processedDevicesList except for the device that deleted the note. So now when new devices call in to see what they need to update, since their deviceId is not in the processedList, they'll have to process that SyncOperation, which tells their device to delete that respective note.
And that's generally how it'd work. Is my solution too complicated? Can it be simplified? Can anyone think of a situation where this wouldn't work? Will this be inefficient on a large scale?
Sounds very complicated - the central database shouldn't be responsible for determining which devices have received which updates. Here's how I'd do it:
The database keeps a table of SyncOperations for each change. Each SyncOperation has a change_id numbered in ascending order (that is, change_id INTEGER PRIMARY KEY AUTOINCREMENT).
Each device keeps a current_change_id number representing what change it last saw.
When a device wants to update, it does SELECT * FROM SyncOperations WHERE change_id > current_change_id. This gets it the list of all changes it needs to be up-to-date. Apply each of them in chronological order.
This has the charming feature that, if you wanted to, you could initialise a new device simply by creating a new client with current_change_id = 0. Then it would pull in all updates.
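A minimal sketch of that scheme in SQL (the AUTOINCREMENT syntax is SQLite-style, following the declaration above; any table and column names beyond those already mentioned are assumptions):

    CREATE TABLE SyncOperations (
        change_id INTEGER PRIMARY KEY AUTOINCREMENT, -- strictly increasing change number
        note_id   INTEGER NOT NULL,                  -- which note the change applies to
        operation TEXT NOT NULL                      -- e.g. 'NewNote', 'Update', 'Delete'
    );

    -- A device that last saw change 42 pulls everything newer, in order:
    SELECT * FROM SyncOperations WHERE change_id > 42 ORDER BY change_id;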
Note that this won't really work if two users can be doing concurrent edits (which edit "wins"?). You can try and merge edits automatically, or you can raise a notification to the user. If you want some inspiration, look at the operation of the git version control system (or Mercurial, or CVS...) for conflicting edits.
You may want to take a look at SyncML for ideas on how to handle sync operations (http://www.openmobilealliance.org/tech/affiliates/syncml/syncml_sync_protocol_v11_20020215.pdf). SyncML has been around for a while, and as a public standard, has had a fair amount of scrutiny and review. There are also open source implementations (Funambol comes to mind) that can also provide some coding clues. You don't have to use the whole spec, but reading it may give you a few "ahah" moments about syncing data - I know it helped to think through what needs to be done.
Mark
P.S. A later version of the protocol - http://www.openmobilealliance.org/technical/release_program/docs/DS/V1_2_1-20070810-A/OMA-TS-DS_Protocol-V1_2_1-20070810-A.pdf
I have seen the basic idea of keeping track of operations in a database elsewhere, so I dare say it can be made to work. You may wish to think about what should happen if different devices are in use at much the same time, and end up submitting conflicting changes - e.g. two different attempts to edit the same note. This may surface as a change to the user interface, to allow them to intervene to resolve such conflicts manually.
I have several horrors of old ASP web applications. Does anyone have any easy ways to find what scripts, pages, and stored procedures are no longer needed? (besides the stuff in "old___code", "delete_this", etc ;-)
Chances are, if the stored proc won't run, it isn't being used, because nobody ever bothered to update it when something else changed. Table columns that are null for every single record are probably not being used.
If you have your SPs and database objects in source control (and if you don't, why don't you?), you might be able to reach back and find what other code it was moved to production with, which should give you a clue as to what might call it. You will also be able to see who touched it last, and that person might know if it is still needed.
I generally approach this by first listing all the procs (you can get this from the system tables) and then marking off the ones I know are being used. Profiler can help you here, as you can see which are commonly being called. (But don't assume that because Profiler didn't show a proc it isn't being used; that just gives you the list of ones to research.) This makes the set that needs to be researched much smaller. Depending on your naming convention, it might be relatively easy to see what part of the code should use them. When researching, don't forget that procs are called in places other than the application, so you will need to check through jobs, DTS or SSIS packages, SSRS reports, other applications, triggers, etc. to be sure something is not being used.
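As a starting point for that list, assuming SQL Server 2008 or later (note the DMV only reflects procs executed since their plans were cached, so a missing entry is a lead to research, not proof of disuse):

    -- All procedures in the current database, with last execution time where a plan is cached.
    SELECT p.name,
           s.last_execution_time,
           s.execution_count
    FROM sys.procedures AS p
    LEFT JOIN sys.dm_exec_procedure_stats AS s
           ON s.object_id = p.object_id
          AND s.database_id = DB_ID()
    ORDER BY s.last_execution_time;  -- procs with no cached stats sort first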
Once you have identified a list of ones you don't think you need, share it with the rest of the development staff and ask if anyone knows whether the proc is needed. You'll probably get a couple more taken off the list this way that are used for something specialized. Then, when you have the list, change the names to some convention that allows you to identify them as candidates for deletion. At the same time, set a deletion date (how far out that date is depends on how often something might be called; if it is called something like AnnualXYZReport, make that date a year out). If no one complains by the deletion date, delete the proc (of course, if it is in source control you can always get it back even then).
Once you have gone through the hell of identifying the bad ones, it is time to realize you need to train people that part of the development process is to identify procs that are no longer being used and get rid of them as part of a change to a section of code. Depending on code reuse, this may mean searching the code base to see if some other part of it uses the proc and then doing the same thing discussed above: let everyone know it will be deleted on a given date, change the name so that any code referencing it will break, and then on the deletion date get rid of it. Or maybe you can have a metadata table where you record candidates for deletion at the time you know you have stopped using something, and send a report around to everyone once a month or so to determine if anyone else needs it.
I can't think of any easy way to do this, it's just a matter of identifying what might not be used and slogging through.
For SQL Server only, 3 options that I can think of:
modify the stored procs to log usage
check if code has no permissions set
run profiler
And of course, remove access or delete it and see who calls...
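For the first option (logging usage), a minimal sketch - the audit table and names here are made up:

    -- One-time setup: a simple usage log.
    CREATE TABLE dbo.ProcUsageLog (
        ProcName SYSNAME  NOT NULL,
        CalledAt DATETIME NOT NULL DEFAULT (GETDATE()),
        CalledBy SYSNAME  NOT NULL DEFAULT (SUSER_SNAME())
    );

    -- Added at the top of each suspect procedure:
    INSERT INTO dbo.ProcUsageLog (ProcName)
    VALUES (OBJECT_NAME(@@PROCID));

After it has run in production for a while, any proc that never appears in dbo.ProcUsageLog becomes a candidate for the rename-then-delete treatment described in the earlier answer.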