UML representation for tasks - C

I am in the process of designing a system with many tasks and a lot of inter-task messages. The system will basically be developed in C.
In my design I am trying to use UML to show the messages that are passed between tasks, but it is becoming difficult to represent things like decision making.
Is there any predefined method for creating a flow chart for task-based systems that use a lot of messages?
It need not be UML; is there any other standard method that can be used for this design?

For documenting message flow, I have found that state machines and sequence diagrams each have their place. State machines are better at describing the decisions that change the state of a system. Sequence diagrams are better at describing the messages that implement a specific element of a protocol.
Since I like to use Doxygen for internal documentation anyway, and it likes to draw call graphs and other figures with the GraphViz tool dot, I started using dot to document my state machines. Since Doxygen has a syntax for including dot graphs directly in the source code (and even allows hyperlinks from elements in the drawing to other pages of the generated documentation), this has been really convenient. Recently, Doxygen grew explicit support for sequence diagrams expressed with mscgen, allowing both styles of diagram to be used.
Having the figures expressed in a reasonably natural way directly in the source code makes them a lot more likely to be maintained than if they were drawn externally in Visio or some other drawing tool.
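For illustration, here is a minimal sketch (the task, states, and message names are made up, not taken from any real project) of how a state machine and a start-up message exchange might be embedded in a Doxygen comment in the C source, assuming GraphViz (HAVE_DOT) and mscgen are configured in the Doxyfile:

    /**
     * @file motor_task.c
     * @brief Hypothetical motor-control task (names are illustrative only).
     *
     * Task state machine, rendered by Doxygen via the dot tool:
     * @dot
     * digraph motor_states {
     *     rankdir=LR;
     *     Idle    -> Running [label="MSG_START"];
     *     Running -> Idle    [label="MSG_STOP"];
     *     Running -> Fault   [label="MSG_OVERCURRENT"];
     *     Fault   -> Idle    [label="MSG_RESET"];
     * }
     * @enddot
     *
     * Start-up message exchange, rendered by Doxygen via mscgen
     * (the msc { ... } wrapper is added by Doxygen itself):
     * @msc
     *     Controller, MotorTask;
     *     Controller -> MotorTask [label="MSG_START"];
     *     MotorTask  -> Controller [label="MSG_ACK"];
     * @endmsc
     */

Nodes in the dot graph can also be given a URL attribute, which is what enables the hyperlinks to other documentation pages mentioned above.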

Maybe you need state machines or sequence diagrams.

If you are representing your design in UML, try a tool called Umbrello. It gives you a lot of flexibility in representing your design.

Use sequence diagrams or state machine diagrams with MARTE (the UML profile for modeling and analysis of real-time embedded systems) annotations, since I noticed you are working with a real-time operating system.

Related

How to compare the features and performance of JuliaDB and Queryverse? Which is better?

It is observed that there is some feature overlap amongst these packages.
Please guide me in comparing the features and performance of JuliaDB and Queryverse and deciding which is better.
JuliaDB.jl and Queryverse operate on different layers of abstraction.
Queryverse provides tools for manipulation and visualization of various data sources and does not provide a data source layer itself.
JuliaDB.jl, on the other hand, provides a specific data source implementation that is particularly valuable when working with very large data sets that do not fit into RAM and are processed in a distributed manner. The closest alternative to JuliaDB.jl is the DataFrames.jl package. A brief comparison of both is given here, so you can see that each has its uses in different contexts. Queryverse works "on top" of any of these sources.
You might also want to have a look at the Tables.jl package, which defines a low-level API for tabular data. In particular, even a NamedTuple of vectors or a vector of NamedTuples can be considered tabular data.
One thing you should keep in mind when working with Queryverse is that, for type inference reasons, it defines its own notion of missingness in the DataValues.jl package, which is not the same as the Missing type defined in Base.

Consensus algorithm for Node.js

I'm trying to implement a collaborative canvas on which many people can draw freehand or with specific shape tools.
The server has been developed in Node.js and the client with AngularJS 1 (and I am pretty new to both).
I must use a consensus algorithm so that it always shows the same content to all users.
I'm seriously in trouble with it, since I cannot find a proper tutorial on its use. I have been studying Paxos implementations, but it seems like Raft is more widely used in practice.
Any suggestions? I would really appreciate it.
Writing a distributed system is not an easy task [1], so I'd recommend using an existing strongly consistent store instead of implementing one from scratch. The usual suspects are ZooKeeper, Consul, etcd, and Atomix/Copycat. Some of them offer Node.js clients:
https://github.com/alexguan/node-zookeeper-client
https://www.npmjs.com/package/consul
https://github.com/stianeikeland/node-etcd
I've personally never used any of them with nodejs though, so I won't comment on maturity of clients.
If you insist on implementing consensus on your own, then Raft should be easier to understand; the paper is surprisingly accessible: https://raft.github.io/raft.pdf. There are also some Node.js implementations, but again, I haven't used them, so it is hard to recommend any particular one. The Gaggle readme contains an example, and skiff has an integration test which documents its usage.
Taking a step back, I'm not sure distributed consensus is what you need here. It seems like you have multiple clients and a single server, so you can probably use a centralized data store. The problem domain is not really that distributed either: shapes can be overlaid one on top of the other in the order the server receives them, i.e. FIFO (imagine multiple people writing on the same whiteboard, where the last one wins). The challenge is concurrent modification of existing shapes, but maybe you can fall back to last/first change wins or something like that.
Another interesting avenue to explore here would be Conflict-free Replicated Data Types (CRDTs). Folks at GitHub used them to implement collaborative "pair" programming in Atom. See the Atom Teletype blog post; their implementation may also be useful, as collaborative editing seems to be exactly the problem you are trying to solve.
Hope this helps.
[1] Take a look at the Jepsen series (https://jepsen.io/analyses), where Kyle Kingsbury tests various failure conditions of distributed data stores.
Try reading Understanding Paxos. It's geared towards software developers rather than an academic audience. For this particular application you may also be interested in the Multi-Paxos Example Application referenced by the article. It's intended to help illustrate the concepts behind the consensus algorithm, and it sounds like it's almost exactly what you need for this application. Raft and most Multi-Paxos designs tend to get bogged down with an overabundance of accumulated history, which generates a new set of problems to deal with beyond simple consistency. An initial prototype could easily handle sending the full state of the drawing on each update and ignore the history issue entirely, which is what the example application does. Later optimizations could be made to reduce network overhead.

How to generate sequence diagram for my Native (C, C++) code?

I would like to know how to generate a sequence diagram for my Native (C, C++) code. I have written my C code using the vim editor.
Thanks,
Sen
First of all, a sequence diagram is an object-oriented concept. It is meant to convey, at a glance, the message passing between objects in an object-oriented program in a sequential fashion, which is supposed to help in understanding the time-ordered interaction between the objects. As such, it does not make sense to talk about sequence diagrams in the context of a procedural language like C.
When it comes to C++, sequence diagrams are defined in the general sense by the UML specification, which is the same for all object-oriented languages. UML is considered a higher-level concept than source code, one that looks the same for all languages, and the process of converting source code to UML is called code reverse engineering. There are tools that allow you to convert source code in Java, C++ and other languages into UML diagrams that show relationships between classes, like Enterprise Architect, Visual Paradigm and IBM Rational Software Architect.
A sequence diagram, however, is a special kind of a UML diagram and it turns out that reverse engineering a sequence diagram is quite challenging. First, if you wanted to generate a sequence diagram through static analysis, one of the first questions you must answer is whether, given two objects and a message passed between them, a result is ever returned. This means that, given a method, you would have to analyze its algorithm and figure out if it loops forever or it returns. This is known as the halting problem and has been proven to be undecidable in computer science. This means that in order to produce a sequence diagram through static analysis, you would have to sacrifice accuracy. Dynamic analysis works by actually running the code and mapping the interactions between the objects at run time. This presents its own challenges. First, you would have to instrument the code. Then, filtering out the interactions you are interested in from library and system calls and other fluff present in the code would not be doable without user intervention.
This is not to say that creating a tool that would produce usable sequence diagrams is not possible, but the market interest has apparently not been strong enough to justify the effort, and apart from a few research papers on the subject, like CPP2XMI, I'm not aware of any commercially available tools to reverse engineer C++ into sequence diagrams.
Compounding the problem is the fact that C++ is one of the most complex object oriented languages around, so even if somebody devised a good way of reverse engineering sequence diagrams, C++ would be the last language to receive the treatment. Case in point: Visual Paradigm offers rudimentary support for reversing Java code into sequence diagrams, but not for C++.
Even if such a tool existed for C++, the sad truth is that if your C++ code is complex enough that you would rather use a tool to make a sequence diagram for it instead of doing it manually, then it is most likely too complex for the tool to give you anything useful and you would have to fix it up yourself anyways.
You can try CppDepend, which provides a dependency graph and a dependency matrix to explore the dependencies between directories, files and functions.
Have you tried PlantUML? It works really well with Doxygen. I use it at work with the company template, and the syntax is really easy, though you have to write the call sequence yourself. There are plenty of examples on its page. If you are working on Linux you can use your native packaging tool to install it, and the same applies to Doxygen (e.g. sudo apt-get install plantuml). Otherwise, if you are using Windows, you can use the installers from the official pages.
You'll have to do some configuration, but it's pretty straightforward. I'll leave you the links to each tool.
Download pages:
http://plantuml.com/download
http://www.doxygen.nl/download.html
PlantUML examples:
http://plantuml.com/sequence-diagram
You can find the documentation on each page. PlantUML is a Java executable (.jar), so you don't have to install anything; you just need to configure Doxygen to find the executable. You can find out how in the Doxygen documentation:
http://www.doxygen.nl/manual/index.html
If you want to configure it without reading the documentation you could also watch this video:
https://www.youtube.com/watch?v=LZ5E4vEhsKs
I hope this helps, cheers.
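To give a rough idea (the participants and the function are invented for the example), a PlantUML sequence diagram can sit directly in a Doxygen comment once PLANTUML_JAR_PATH is set in the Doxyfile:

    /**
     * @brief Hypothetical request/response exchange (names are illustrative only).
     *
     * @startuml
     * Client -> Server : request(data)
     * activate Server
     * Server --> Client : response(result)
     * deactivate Server
     * @enduml
     */
    void handle_request(void);

Doxygen hands the fragment to the PlantUML jar when generating the documentation, so the diagram stays next to the code it describes.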
You could explore trace2uml, which works with Doxygen.

Tracing Designs - Screen to Database Traceability

This is vaguely related to:
Should I design the application or model (database) first?
Design from the database first through to UI or t’other way round?
But my question is more about modeling and artifacts and less about the right way to do design. I'm trying to figure out what sort of design artifact would best enunciate the link between features (use cases), screens and database elements (tables and columns, most particularly). UML is very code-centric. Database models are very database-centric. And of course screen designs are UI-centric!
Here's the deal... my team is working on the first release of the product. We used use cases, then did screen designs and database design was somewhat isolated from the two. A critical area for bugs was the lack of traceability between the use cases and their accompanying screens and the database. In our product, there's a very high degree of overlap between use cases and database elements. Many use cases touch over 75% of the database infrastructure. So we have high contention over database design areas, and it's easy for a small database change to disrupt the lower levels of business logic.
For our next release, I want the developers and our DBA to have a really clear insight into what parts of the database each feature touches. The use case/screen design approach worked well, so we're keeping it... the trick is linking each use case and screens to the database model so the relationships are really obvious and hard to forget about.
On smaller projects (we're only 10 people, but often I've worked on teams of 3 or fewer), I've created my own custom diagrams to show this part of the design: sort of a fusion of screen, UML and database table, done in Visio with no link to actual code or SQL. I'm not sure it will work for a larger team, as it's highly manual to keep up to date, and it doesn't auto-generate code the way our database modeling tool does.
Any recommendations? Is there a commonly accepted mechanism for this?
FYI - we're pretty waterfall, that isn't going to change any time soon. And we do love artifacts... Saying "switch to agile" is not a viable solution for our group.
I can't tell from your question how detailed your use cases are. I get the impression that they may be high-level use cases, not broken down into detailed use cases (perhaps through include or extend relationships).
In any case, I prefer to start with Requirements, and trace them to use cases. While I'm writing the use cases and the use case diagrams, I'm also creating a domain model (a high-level class diagram). This is mostly to give me something to discuss with stakeholders ("did I get that right?").
When the use cases and domain model are finished, it's possible to begin to work on screen design, and possibly also for an activity model, if there are complex interactions between screens. I would treat the screens as though they were classes with UI - a screen might have a FirstName attribute, which I'd note as being related to the FirstName attribute of the Person entity in my domain model. Yet the FirstName attribute might be represented on that screen as a text box.
At the same time, physical database design can occur. This would produce a class or ER diagram, with traceability back to the domain model. Eventually, you might find that some of your screen attributes or activity modeling refer to things that are part of the physical database model that are not present in the domain model. It's ok to relate a screen "PersonalName" attribute to the computed PersonalName column in the Person table in the People schema.
The tool I use for this sort of thing is Sparx Enterprise Architect. It's a great tool, and can do all of this and more, even in the Professional edition.
I also have to say for the sake of Truth that I mostly model on my own - I haven't yet worked on a project where the model, code and database were being developed by separate people. If someone told me that the above wouldn't work in "real life", I might be forced to believe them.
I am not sure I understand your question clearly, but I'll try to respond based on some reasonable assumptions...
Essentially, my recommendation is the same as what John Saunders already suggested: to consider using UML along with a good UML tool. But I would like to add a few points that might be important in your specific situation.
First and foremost, I don't think that UML is "too code-centric". To the contrary, it can be used to model pretty much any aspect of a software system, almost at any level of abstraction. With a good tool at hand (like Sparx EA), the beauty of UML is that you actually do get a well-defined model under the hood (as opposed to just a set of unconnected drawings/charts). As a result, even if the tool itself does not give you a feature that you are looking for (like traceability from the DB to use cases)... well, at least you have an option to automate (or at least semi-automate) the task yourself: for example, you can export your UML model into XMI (a standard!) and then derive whatever traceability-related info you need from there (e.g. using XSL or any XML-aware library for your favorite programming/scripting language). I am not saying that it would be easy to do (especially if you want traceability at the level of individual DB columns 8-), but it's possible, and it is very likely to beat any manual method if it has to scale with the size of the project.
BTW, speaking of Sparx EA... I don't know all of its capabilities yet, but it has so many that I would not be surprised if it allowed you to select a class (or even an attribute of a class) and show you other model elements that depend on it in some way. You might want to check this out.
Having said all that, I do understand that you may have at least the following two important concerns about UML:
It may appear to require too many modeling details to be in place to get what you want.
As any "universal tool", it may be grossly inferior to specialized modeling tools that you already use.
Regarding issue #1: Again, with a good UML tool at hand, you might be able to take as many shortcuts as you want. For example, instead of building a very detailed and accurate activity model for a use case, you could focus just on the classes involved in the use case (just enough to enable tracing classes back to use cases). The same applies to the UI, of course.
Regarding issue #2: I don't know what exact tools you use now to model use cases, UI, and DB schema. So, theoretically, it is possible that they are all so superior to UML that you wouldn't want to give any of them up in exchange for easier traceability. However, something tells me that your DB modeling tool (with its code-generating capabilities) might be the only one that is truly indispensable. If that's the case, then I would still recommend considering UML: you just do not model down to the DB schema level and "stop" at the level of the domain model (even if you do not have one in your application!). At that point, the UML tool would give you traceability from domain model elements (entities, their attributes, and their relationships) back to use cases and UI elements, and mappings between your domain model and DB schema could be left "in the air" because, in the vast majority of cases, they should be simple enough to track without drawing anything. This might not give you 100% of what you want, but it could give you the 80% that would be sufficient to mitigate most of your problems.
The bottom line: if you are using three different tools/technologies to model three different aspects of your system... well, it's obvious that any traceability between those three aspects remains at the mercy of those three tools: you can automate only as much as those tools allow (which probably means that you are going to be stuck with a lot of laborious manual tasks). As of today, UML appears to be the only well-defined and widely supported "lingua franca" that could help you connect your models and could enable automation of a substantial part of your analytical activities. Just make sure you distinguish UML "just-drawing" tools (like most Visio add-ons and stencils) from true UML modeling tools (like Sparx EA and a bunch of others).
Your use cases are a good place to start from.
Convert your use cases into executable test code. This test code needs to verify that the resultant return values are according to the requirements of your use cases. The smaller the parts of the work you can identify and test, the more robustly you will be able to build your application. This also means that the interaction of your use cases with a large part of your database and the GUI will be simpler to understand.
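As a loose illustration of that first step (everything below is hypothetical: the use case, the functions and the in-memory stand-in for the database are invented, not taken from the project described in the question), a use case such as "registering a customer stores a retrievable record" could be expressed as a small executable test:

    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical in-memory stand-in for the persistence layer; in a real
     * project these calls would go through the business logic to the database. */
    static char customers[16][64];
    static int  customer_count = 0;

    static int register_customer(const char *name)   /* returns new id, or -1 */
    {
        if (customer_count >= 16)
            return -1;
        strncpy(customers[customer_count], name, 63);
        customers[customer_count][63] = '\0';
        return customer_count++;
    }

    static int lookup_customer(int id, char *out, size_t size)  /* 0 on success */
    {
        if (id < 0 || id >= customer_count || size == 0)
            return -1;
        strncpy(out, customers[id], size - 1);
        out[size - 1] = '\0';
        return 0;
    }

    /* Use case "Register customer": the record entered on the screen must be
     * retrievable afterwards with the same name. */
    static void test_register_customer(void)
    {
        char name[64];
        int id = register_customer("Ada Lovelace");

        assert(id >= 0);
        assert(lookup_customer(id, name, sizeof name) == 0);
        assert(strcmp(name, "Ada Lovelace") == 0);
    }

    int main(void)
    {
        test_register_customer();
        puts("use case 'Register customer' verified");
        return 0;
    }

The value is less in the particular assertions than in keeping one small, runnable check per use case, so that a database or GUI change that breaks a use case is caught immediately.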
It's hard to lock down the architecture or the business logic interplay in complex projects with a complete upfront design of the different layers. You only truly learn what will satisfy your requirements once you get to the point of implementing them.
As a developer, find the techniques, tools and processes that help you do your job in the best way possible. Don't judge these on their origin. Judge them on their value in making you the best developer possible.
Some items from the agile world have certainly made a big difference to the quality and productivity of my work. This doesn't require throwing out the apple cart and putting an experienced waterfall team into disarray.
The database should model your Problem Domain. It should model it completely enough so that you can extract solutions -- truths -- from it. Bad design is essentially "lying" to the database (allowing the possibility of invalid data or relationships), and when you "lie" to your database, it'll "lie" to you when you ask it questions.
Simple examples are modeling a many-to-many relationship where the relationship can only be one-to-many, assuming values can't be null, or treating a foreign key as an attribute. Many of these can be avoided by proper normalization, which requires you to explicitly work out what is a key and what isn't.
By "making illegal states unrepresentable" in the model, you avoid having to write "defensive" code to check for the impossible or to validate that a relation is possible, as impossible things are made unrepresentable because of table structures or declarative check constraints.
This lowers the cost of writing your code, as you can concentrate, for the most part, on what it needs to do rather than guarding against the impossible.

Meta-model evolution in the Eclipse Modeling Framework

I am making an attempt to evaluate EMF for use within a project. One of the things I am looking at is some kind of versioning support at the metamodel (M2, or the .ecore model) level.
In terms of metamodel evolution, I have read certain discussions and have come across this paper. However, I wanted to know if there is anything concrete happening in this direction within EMF.
In general, what is the level of support for features involving versioning, such as merge and compare, evolution, migration, co-existence of multiple versions simultaneously, etc.? I realize that the actual versioning itself will be provided by the source control system that one would use to store these meta-models; however, semantic versioning capabilities (such as the ones I have mentioned above) should be provided by EMF itself, right?
I am aware of certain initiatives such as EMF Compare and Temporality, which are meant for EMF models. I am not sure whether these work at the meta-model level.
I am working on metamodel evolution in my PhD thesis. To show the applicability of my ideas, I have developed tool support for metamodel evolution in EMF which is called COPE. On the website, you can access a number of publications about COPE as well as download the tool itself. In addition, I am currently proposing a project to contribute COPE to EMF.
In general, every tool which works with Ecore models will work with Ecore meta-models as well, since the meta-model of Ecore is Ecore. (Take some time to let this sink in, I know I had to...)
I've successfully used EMF Compare with my Ecore meta-model, don't know about the other tools you mentioned.

Resources