I have a hierarchy of stuff I want to display (at the same time) in both an outline view and a custom view. It's sort of analogous to the Buck and Yacktman (Cocoa Design Patterns) example in Chapter 29, but with an outline view instead of a table view. I'll most likely have a detail view available as well.
I've only used NSTreeController with a single outline view before. Now I have found that "arrangedObjects" isn't what one would like it to be. I also found that 'canInsert' and its relatives all have the value NO, for some reason I can't determine (or find with Google). So, so far, it appears that NSTreeController is of little help in coordinating my two views. (By the way, I've always had my add and delete functions work directly on the model in the past.)
It seems to me now that it would be better and simpler to go back to using a data source approach, with an architecture more like Buck and Yacktman's figure 29.4 (page 357) and a handmade mediating controller.
This has been hanging around for quite a while with no takers.
Just to close this out:
I've tried both NSTreeController and data source versions. Currently, I'm sticking with data source, since it seems to give me more flexibility.
-- The program I'm working on has been very much experimental, trying a number of different things. A secondary goal is to make an application I will find useful, and tertiarily :-) maybe make a cleaned-up version for distribution.
I have a generic LiveDoc in Polarion which contains a series of referenced requirements. Recently I started to insert links into the description of some of the requirements to make it easier to navigate from one requirement to another. However, I've discovered that when I baseline the document the links in the description don't get updated to point to the baselined version of the requirement, but the links (to the same requirement) in the Linked Work Items section are updated to include the baseline revision.
Is there a way to get the links in the description to point to the baselined revision like the ones in the Linked Work Items section?
I'm using Polarion 21 R1 if that matters.
Thanks in advance for your help.
Interesting approach, but I doubt you'll get this working 100%. HTML is notoriously hard to parse (completely and correctly), so you should avoid this workflow.
Use Linked Work Items instead, together with the new Collection feature, which most probably does what you need.
Also, while it is possible to link to older / specific revisions (of artefacts) in Polarion, I never found a scenario that was both maintainable and useful at the same time.
Note that revision numbers get big very fast (5-7 digits). Comparing or updating these links is very error-prone, demanding work, full of devastating pitfalls.
We follow the approach of keeping items unchanged after release and creating new items instead of changing existing ones. This means more work items, but Polarion's UI (and most people's heads) can deal with a large number of WIs better than with versioned links.
I'm new to Silverlight, but I'm being dumped right into the fray - a good way to learn, I suppose :o)
Anyway, the webapp I'm working on has a relatively complex database structure that represents various object types that are linked to each other, and I was wondering 2 things:
1- What is the recommended approach when it comes to dataclasses? Have just one big dataclass, or try and separate it into several smaller dataclasses, keeping in mind they will need to reference each other?
2- If the recommended approach is to have several dataclasses, how do you define the inter-dataclasses references?
I'm asking because I did a small test. In my DB (simplified here, real model is more complex but that's not important), I have a table "Orders" and a table "Parameters". "Orders" has a foreign key on "Parameters". What I did is create 2 dataclasses.
The first one, ParamClass, is where I dropped the "Parameters" table only, so I can have a nice "parameter" class. I then created a simple service to add basic SELECT and INSERT functionality.
The second one, OrdersClass, is where I dropped both tables, so that the relation between the tables would automatically create an "EntityRef<parameter>" variable inside the "order" class. I then removed the "parameters" class that was automatically created in the OrdersClass dataclass, since that class has already been declared in the ParamClass dataclass. Again I created a small service to test it.
So far so good; it builds happily. The problem comes when I try to handle things in the application code. I added service references for both dataclasses, but it is not happy doing something like:
OrdersServiceReference.order myOrder = new OrdersServiceReference.order();
myOrder.parameter = new ParamServiceReference.parameter(); //<-PROBLEM IS HERE
It complains that it cannot implicitly convert from type 'MytestDC.ParamServiceReference.parameter' to 'MytestDC.OrdersServiceReference.parameter'.
Do I somehow need to declare some sort of reference to ParamClass from OrdersClass, or how do I "convert" one to the other?
Is this even a recommended and efficient way of doing this?
Since it's a team project, I initially wanted to separate the dataclasses so that they (and their services) can easily be checked out by one member without checking out the whole thing.
Any help appreciated!
PS: using Silverlight 4, in case that's important
Based on the widely accepted Single Responsibility Principle (SRP), a class should always be responsible for one task, and one task only.
That pretty much invalidates your "one big dataclass" approach.
I would always recommend smaller, more manageable bits that can be combined, instead of one humongous class that does everything (except brew coffee for you).
Resources for the SRP:
Wikipedia on SRP
OODesign: Single Responsibility Principle
ObjectMentor: list of articles on good app design - which has a few links to PDF documents, like this one on SRP written by Robert C. Martin - the "guru" on proper OO design
OK, some more research led me to this: it is not simple to separate classes from a relational model using LINQ to SQL. I ended up switching to an Entity Framework approach, which itself doesn't deal with it gracefully (see here and there, for example), but at least it solved another major problem I had with LINQ to SQL.
There are other ORMs out there that are apparently much more capable at this (NHibernate comes up often in recommendations); unfortunately, I don't have time to investigate them now, being under such a tight deadline.
As for the referencing, it was quite simple: change the line to:
myOrder.parameter = new OrdersServiceReference.parameter();
even though I removed the declaration from that dataclass.
Hope this helps someone!
I would like to make a program (preferably in C, but even in Cocoa) that can take data from an external program (such as iTunes or Adium) and use it. For example, I would like to take the data of a listbox, or the text of a chat, so as to manipulate it. I need a place to start. In Windows I think it is possible with some APIs that find the hWnd of a window and then find a pointer to the listbox or textbox. Please give me some info on how to start. Thank you in advance.
It's not clear exactly what you want to do. It's either impossible or severely restricted.
For one thing, different applications use different ways of constructing a “listbox”—Cocoa applications use NSTableView, Carbon applications use DataBrowser, and GTK, Qt, and Java applications use even more different APIs. These do not all go through some common kind of list box thingy; each is an independent implementation.
(You could hope that either NSTableView or DataBrowser would be based on the other, but don't count on it.)
For another, it is impossible to obtain a pointer to that control. You cannot access another application's NSTableView or DataBrowser view or GTK/Qt/Java equivalent unless (and this only works for NSTableView) that application deliberately serves it up to you. It doesn't sound like that's your situation.
The closest you can get to that is Accessibility, which may be pretty close, but is unlikely to work with most applications not based on Cocoa.
Even then, the view may not be showing you all the data. A table view may be lazily populated, and a table view designed in imitation of the iOS UITableView may even never have all the data (because it only has what it can show).
(All of the above applies to every kind of view, not just table views. Collection views, text fields, buttons—same deal for all of them.)
The only way to get at the true, complete copy of the data is to ask the controller that owns it. And, again, that's impossible if the application is not specifically offering it to you. Not to mention, the application might not even have a controller (not object-oriented, not MVC, or just sloppily made).
… so as to manipulate it.
Getting the data in the first place is the easy part. It is nigh-impossible to mess with data in another application—for good reason.
The closest you're going to get to either of these goals is the Accessibility interfaces.
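If you do go the Accessibility route, note that the Accessibility API is plain C, which fits your stated preference. Here's a minimal, hypothetical sketch of reading the value of another application's focused UI element; it assumes you already know the target's process ID, that access for assistive devices is enabled, and that the focused element exposes a string AXValue (error handling kept minimal):

#include <ApplicationServices/ApplicationServices.h>
#include <stdio.h>

int main(void)
{
    pid_t pid = 12345; /* target application's PID -- assumed known */
    AXUIElementRef app = AXUIElementCreateApplication(pid);

    /* Ask the application for its focused UI element. */
    CFTypeRef focused = NULL;
    if (AXUIElementCopyAttributeValue(app, kAXFocusedUIElementAttribute,
                                      &focused) == kAXErrorSuccess) {
        /* Ask that element for its AXValue attribute. */
        CFTypeRef value = NULL;
        if (AXUIElementCopyAttributeValue((AXUIElementRef)focused,
                                          kAXValueAttribute,
                                          &value) == kAXErrorSuccess) {
            if (value && CFGetTypeID(value) == CFStringGetTypeID()) {
                char buf[1024];
                if (CFStringGetCString((CFStringRef)value, buf, sizeof buf,
                                       kCFStringEncodingUTF8))
                    printf("Focused element value: %s\n", buf);
            }
            if (value)
                CFRelease(value);
        }
        CFRelease(focused);
    }
    CFRelease(app);
    return 0;
}

How much of this actually works against iTunes or Adium depends entirely on how well those applications expose themselves through Accessibility.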
I have a code library that makes heavy use of XPathNavigator to parse a specific XML document. The document is cross-referenced, meaning that an element can reference another that has not yet been encountered during parsing:
<ElementA ...>
<DependentElementX id="1234"/>
</ElementA>
<ElementX id="1234" .../>
The document doesn't really look like this, but the point is that 1) there is an XML schema that enforces the overall document structure, 2) elements inside the document can reference each other using some IDs, and 3) there are quite a few such cross-references between different elements in the document.
The document is parsed in two phases. In the first pass I walk through the document
XPathDocument doc = ...;
XPathNavigator nav = doc.CreateNavigator();
nav.MoveToRoot();
nav.MoveToFirstChild()...
and occasionally 'bookmark' the current position (element) in the document using XPathNavigator.Clone() method. This gives me a lightweight instance of an XPathNavigator which I can store somewhere and use later to jump back to a particular place (element) in my document.
Once I have enough information collected in the first pass (for example, I have made sure there is indeed an ElementX with an id='1234'), I jump back to saved bookmarks (using those saved XPathNavigators) and complete the parsing.
Well, now I'm about to use this library in Silverlight 3.0 and to my horror the XPathNavigator is not in the System.Xml assembly.
Questions:
1) Am I missing something obvious (i.e. XPathNavigator does exist in some shape or form, for example in a toolkit or a freeware library)?
2) If I do have to make modifications in the code, what would be the best way to go? Ideally, I would like to make minimal changes, not to rewrite 80% of the code just to be able to use something like XLinq.
To sum up: in case I have to give up XPathNavigator, all I need is a way to bookmark places in my document and get back to them, so that I can continue iterating from where I left off.
Thanks in advance for any help/ideas.
You are not missing anything obvious; there is no implementation of XPathNavigator or XPathDocument in the Silverlight versions of the libraries.
The "best way to go" is highly subjective and would really depend on how many lines of code are really depending on XPathNavigator. However I see a couple of choices.
Go ahead and re-write the code using XDocument, XElement, etc. from the System.Xml.Linq namespace. This may not be as bad a choice as you might think.
Wrap the Xml-to-Linq objects in your own implementation of those properties and methods of XPathNavigator that you are actually using. It shouldn't be too hard to re-create most of the features of XPathNavigator against the Xml-to-Linq objects. You can then run your existing code against your own XPathNavigator.
XPath (xdoc.XPathSelectElements) is available in Silverlight 4: here's an online test tool.
There are tons of ways:
How to deal with XML in C#
You can still use LINQ to XML, just minus the LINQ query syntax, by using the LINQ extension methods.
In many embedded applications there is a tradeoff between making the code very efficient or isolating the code from the specific system configuration to be immune to changing requirements.
What kinds of C constructs do you usually employ to achieve the best of both worlds (flexibility and reconfigurabilty without losing efficiency)?
If you have the time, please read on to see exactly what I am talking about.
When I was developing embedded SW for airbag controllers, we had the problem that we had to change some parts of the code every time the customer changed their mind regarding the specific requirements. For example, the combination of conditions and events that would trigger the airbag deployment changed every couple weeks during development. We hated to change that piece of code so often.
At that time, I attended the Embedded Systems Conference and heard a brilliant presentation by Stephen Mellor called "Coping with changing requirements". You can read the paper here (they make you sign up, but it's free).
The main idea of this was to implement the core behavior in your code but configure the specific details in the form of data. The data is something you can change easily and it can even be programmable in EEPROM or a different section of flash.
This idea sounded great to solve our problem. I shared this with my colleague and we immediately started reworking some of the SW modules.
When trying to use this idea in our coding, we encountered some difficulty in the actual implementation. Our code constructs got terribly heavy and complex for a constrained embedded system.
To illustrate this, I will elaborate on the example I mentioned above. Instead of having a bunch of if-statements to decide whether the combination of inputs was in a state that required an airbag deployment, we changed to a big table of tables. Some of the conditions were not trivial, so we used a lot of function pointers to be able to call lots of little helper functions that somehow resolved some of the conditions. We had several levels of indirection and everything became hard to understand. To make a long story short, we ended up using a lot of memory, runtime and code complexity. Debugging the thing was not straightforward either. The boss made us change some things back because the modules were getting too heavy (and he was maybe right!).
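To make that concrete, here's a stripped-down, invented sketch of the kind of structure I mean (the real thing had more tables and more levels of indirection):

#include <stdbool.h>
#include <stddef.h>

typedef bool (*condition_fn)(void);

/* Little helper predicates, one per input condition (bodies omitted). */
static bool frontal_impact_detected(void) { /* read sensors */ return false; }
static bool speed_above_threshold(void)   { /* read sensors */ return false; }

/* One deployment rule: deploy if every condition in the row holds. */
typedef struct {
    const condition_fn *conditions;
    size_t              count;
} deployment_rule_t;

static const condition_fn rule0_conditions[] = {
    frontal_impact_detected,
    speed_above_threshold,
};

/* The "table of tables": one row per customer-specific rule. */
static const deployment_rule_t deployment_rules[] = {
    { rule0_conditions, sizeof rule0_conditions / sizeof rule0_conditions[0] },
};

static bool should_deploy(void)
{
    for (size_t r = 0; r < sizeof deployment_rules / sizeof deployment_rules[0]; ++r) {
        size_t c = 0;
        while (c < deployment_rules[r].count && deployment_rules[r].conditions[c]())
            ++c;
        if (c == deployment_rules[r].count)
            return true; /* every condition in this rule held */
    }
    return false;
}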
PS: There is a similar question in SO but it looks like the focus is different. Adapting to meet changing business requirements?
As another point of view on changing requirements ... requirements go into building the code. So why not take a meta-approach to this:
Separate out parts of the program that are likely to change
Create a script that will glue parts of source together
This way you are maintaining compatible logic-building blocks in C ... and then sticking those compatible parts together at the end:
/* {conditions_for_airbag_placeholder} */
if (require_deployment)
    trigger_gas_release();
Then maintain independent conditions:
/* VAG Condition */
if (poll_vag_collision_event())
    require_deployment = 1;
and another
/* Ford Conditions */
if (ford_interrupt(FRONT_NEARSIDE_COLLISION))
    require_deployment = 1;
Your build script could look like:
BUILD airbag_deployment_logic.c WITH vag_events
TEST airbag_deployment_blob WITH vag_event_emitter
Thinking out loud, really. This way you get a tight binary blob without reading in config at runtime.
This is sort of like using overlays http://en.wikipedia.org/wiki/Overlay_(programming) but doing it at compile-time.
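For illustration, the file such a build script might emit for the VAG variant could look like this (the function names come from the snippets above; everything else is hypothetical):

/* airbag_deployment_logic.c -- generated by the build script, do not edit by hand */
extern int  poll_vag_collision_event(void);
extern void trigger_gas_release(void);

void airbag_deployment_logic(void)
{
    int require_deployment = 0;

    /* VAG Condition (spliced in from vag_events in place of the placeholder) */
    if (poll_vag_collision_event())
        require_deployment = 1;

    if (require_deployment)
        trigger_gas_release();
}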
Our system is subdivided into many components, with exposed configuration and test points. There is a configuration file that is read at start-up that actually helps us instantiate components, attach them to each other, and configure their behavior.
It's very OO-like, in C, with the occasional hack to implement something like inheritance.
In the defense/avionics world software upgrades are very strictly controlled, and you can't just upgrade SW to fix issues... however, for some bizarre reason you can update a configuration file without a major fight. So it's been darn useful for us to be able to specify a lot of our implementation in those configuration files.
There is no magic, just good separation of concerns when designing the system and a bit of foresight on the part of the developers.
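As a rough sketch of the flavour (all names invented; the real registry and configuration format are more involved): each component exposes a vtable-like struct, and a small factory builds instances from the names found in the configuration file.

#include <stdlib.h>
#include <string.h>

typedef struct component component_t;

/* "Virtual" functions each component implementation provides. */
typedef struct {
    int  (*init)(component_t *self, const char *config_line);
    void (*step)(component_t *self);
} component_ops_t;

struct component {
    const component_ops_t *ops;
    void                  *state; /* component-private data */
};

/* Registry mapping names found in the config file to implementations. */
typedef struct {
    const char            *name;
    const component_ops_t *ops;
} component_entry_t;

extern const component_entry_t g_component_registry[];
extern const size_t            g_component_registry_len;

component_t *component_create(const char *name, const char *config_line)
{
    size_t i;
    for (i = 0; i < g_component_registry_len; ++i) {
        if (strcmp(g_component_registry[i].name, name) == 0) {
            component_t *c = calloc(1, sizeof *c);
            if (c == NULL)
                return NULL;
            c->ops = g_component_registry[i].ops;
            if (c->ops->init(c, config_line) != 0) {
                free(c);
                return NULL; /* component rejected its configuration */
            }
            return c;
        }
    }
    return NULL; /* unknown component name in the config file */
}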
What are you trying to save exactly? Effort of code re-work? The red tape of a software version release?
It's possible that changing the code is reasonably straightforward, and quite possibly easier than changing data in tables. Moving your often-changing logic from code to data is only helpful if, for some reason, it's less effort to modify the data than the code. That might be true if the changes are better expressed in a data form (e.g. numeric parameters stored in EEPROM). Or it might be true if the customer's requests make it necessary to release a new version of software, and a new software version is a costly procedure to build (lots of paperwork, or perhaps OTP chips burned by the chip maker).
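(When the changes really are just numbers, the data form can be as simple as a small calibration block read at start-up; the sketch below uses invented field names and an assumed read_eeprom() driver call.)

#include <stdint.h>

/* Versioned calibration block stored in EEPROM. */
typedef struct {
    uint16_t layout_version;      /* bumped whenever the layout changes      */
    uint16_t deploy_delay_ms;     /* tunable without a new software release  */
    uint16_t speed_threshold_kph;
    uint16_t crc16;               /* integrity check over the block          */
} calibration_t;

extern int read_eeprom(uint32_t addr, void *dst, uint32_t len); /* assumed driver */

static calibration_t g_cal;

int calibration_load(void)
{
    if (read_eeprom(0x0000u, &g_cal, sizeof g_cal) != 0)
        return -1;
    /* A real system would verify g_cal.crc16 and g_cal.layout_version here. */
    return 0;
}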
Modularity is a very good principle for this sort of thing, and it sounds as though you're already applying it to some degree. Aim to isolate the often-changing code to as small an area as possible, and try to keep the rest of the code (the "helper" functions) separate (modular) and as stable as possible.
I don't make the code immune to requirements changes per se, but I always tag a section of code that implements a requirement by putting a unique string in a comment. With the requirements tags in place, I can easily search for that code when the requirement needs a change. This practice also satisfies a CMMI process.
For example, in the requirements document:
The following is a list of requirements related to the RST:
[RST001] Juliet SHALL start the RST with a 5-minute delay when the ignition is turned OFF.
And in the code:
/* Delay for RST when ignition is turned off [RST001] */
#define IGN_OFF_RST_DELAY 5
...snip...
/* Start RST with designated delay [RST001] */
if (IS_ROMEO_ON())
{
rst_set_timer(IGN_OFF_RST_DELAY);
}
I suppose what you could do is to specify several valid behaviors based on a byte or word of data that you could fetch from EEPROM or an I/O port if necessary and then create generic code to handle all possible events described by those bytes.
For instance, if you had a byte that specified the requirements for releasing the airbag it could be something like:
Bit 0: Rear collision
Bit 1: Speed above 55mph (bonus points for generalizing the speed value!)
Bit 2: Passenger in car
...
Etc
Then you pull in another byte that says which events actually happened and compare the two. If they match, execute your command; if not, don't.
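A minimal sketch of that comparison, with invented bit names and the rule byte assumed to come from EEPROM (I've used an "all required bits are set" test, which is usually what you want; use == against the full byte if you really need an exact match):

#include <stdint.h>

#define EVT_REAR_COLLISION   (1u << 0)
#define EVT_SPEED_OVER_55    (1u << 1)
#define EVT_PASSENGER_IN_CAR (1u << 2)

/* Deployment rule fetched from EEPROM or an I/O port at start-up. */
static uint8_t required_events; /* e.g. EVT_REAR_COLLISION | EVT_SPEED_OVER_55 */

static int airbag_should_deploy(uint8_t current_events)
{
    /* Deploy only when every required event bit is present. */
    return (current_events & required_events) == required_events;
}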
For adapting to changing requirements I would concentrate on making the code modular and easy to change, e.g. by using macros or inline functions for parameters which are likely to change.
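For example (invented names and values), keeping the tunable numbers behind macros in one small header means a requirements change touches a single file:

/* tuning.h -- parameters that the customer keeps changing */
#define DEPLOY_SPEED_THRESHOLD_KPH  90u
#define DEPLOY_DELAY_MS             50u

static inline int over_speed_threshold(unsigned speed_kph)
{
    return speed_kph > DEPLOY_SPEED_THRESHOLD_KPH;
}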
W.r.t. a configuration which can be changed independently from the code, I would hope that the parameters which are reconfigurable are specified in the requirements, too. Especially for safety-critical stuff like airbag controllers.
Hooking in a dynamic language can be a lifesaver, if you've got the memory and processor power for it.
Have the C talk to the hardware, and then pass a known set of events up to a language like Lua. Have the Lua script parse the event and call back into the appropriate C function(s).
Once you've got your C code running well, you won't have to touch it again unless the hardware changes. All of the business logic becomes part of the script, which in my opinion is a lot easier to create, modify and maintain.
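A rough sketch of that hand-off, assuming the Lua 5.x headers and libraries are available and that logic.lua (a made-up file) defines a global handle_event(name, value) function:

#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
#include <stdio.h>

static lua_State *L;

int script_init(const char *path)
{
    L = luaL_newstate();
    luaL_openlibs(L);
    if (luaL_dofile(L, path) != 0) { /* load the business-logic script */
        fprintf(stderr, "lua error: %s\n", lua_tostring(L, -1));
        return -1;
    }
    return 0;
}

/* Called from the C/hardware side; returns whatever action the script picked. */
int script_handle_event(const char *name, int value)
{
    lua_getglobal(L, "handle_event");
    lua_pushstring(L, name);
    lua_pushinteger(L, value);
    if (lua_pcall(L, 2, 1, 0) != 0) {
        fprintf(stderr, "lua error: %s\n", lua_tostring(L, -1));
        return -1;
    }
    int action = (int)lua_tointeger(L, -1);
    lua_pop(L, 1);
    return action;
}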