I see the term "pure upstream" used a lot describing different software packages/distributions. I get that "upstream" in the context of open source refers to a code base from which a certain software package was forked. But what does it mean to say that a certain software package is "pure upstream"?
I think the "pure" part of "pure upstream" means that the codebase remains unencumbered by custom changes that are needed only for a particular application or environment and aren't applicable to most users. Some projects may want to evolve into more of a Swiss army knife of an application, while others may decide to keep to a small, coherent functional space. How the purity rules apply is a judgement call for the maintainers of the upstream project.
An example of a non-pure fork could be a proprietary extension that could cause vendor lock-in. Take a look at this article about Kubernetes: https://technative.io/kubernetes-must-stay-pure-upstream-open-source/
I have the following questions about embedded unit testing:
1. Can Google's unit testing framework (googletest) be used for embedded C code?
2. Is Google Test compatible with IEC 62304?
I tried to find a document that could answer my questions, but was unable to find one. Even in the official googletest documentation I was not able to find the answer. I also have the following question:
3. How would I know whether an open source unit testing tool is compatible with the IEC 62304 standard?
Please help me with your knowledge of unit testing.
Thanks in advance.
To answer your questions directly:
1. googletest is intended for use with C++. The link in Vertexwahn's answer shows that at least one person has been able to use it for testing C.
2 & 3. IEC 62304 is a software life cycle process standard; it has nothing to say about whether you can or cannot use a particular tool, only about the steps you must go through in your project.
Unit testing is certainly one step that you would go through in your software life cycle. As the engineer responsible, it is your job to decide whether or not a tool is suitable for a particular task. No person outside your project can ever tell you that a tool is suitable for use in developing a particular medical device, because this depends very heavily not only on the design of the particular device, but also on the testing strategy that you are going to adopt.
The testing strategy in turn will depend on the particular risks that you need to mitigate. You will need to follow ISO 14971 for your risk management process.
At every stage of the process you will have to document the reasons for the decisions that you have made according to an ISO 13485 quality management process.
When you come to make a regulatory submission to an approved body they will appoint an auditor who will look through your documentation. In the vast majority of cases the auditor will have absolutely no technical expertise in software. They will check that you have followed the appropriate documentation process but ultimately they will take your word on whether or not a tool is suitable.
It is easy to trick an auditor and use an unsuitable tool by creating a large volume of paperwork which falsely explains why it is suitable. If you do this no one will know until or unless the medical device causes harm to someone and your company (or you personally) gets sued or prosecuted and the documents get examined by technical experts appointed by a court.
What you need to think about when you put your signature on the document that states the tool is suitable is whether you could stand up in court and defend your decision after someone has been harmed.
After all this, having said that no tool is ever either inherently suitable or unsuitable, there are some software suppliers that make claims of suitability or even "pre-approval". What this means is that they have pre-written many of the documents that your regulatory submission will require. These are always very expensive (nothing free like googletest fits into this category). Even if you use these pre-written documents, it is your responsibility to review them and put your signature against them and say that they are correct and more importantly that they are applicable to your specific project. Buying a product like this saves you time, but not liability.
GoogleTest seems to work with C -> https://meekrosoft.wordpress.com/2009/11/09/unit-testing-c-code-with-the-googletest-framework/
Google will not take over the responsibility for your compliance with IEC 62304. You have to make sure that the tools you use do what they should do for the use cases you use them for. For instance, you can come up with an acceptance test for GoogleTest that proves that it works for you as expected.
When doing this, also consider known bugs. Even if a company offers you a unit test framework that claims IEC 62304 compliance, I would ask myself whether that framework has more users and is better tested than gtest.
I don't think something like this exists; it would mean that the open source project takes on the liability for damages resulting from its use by its users.
I'm an engineer currently developing Linux kernel-mode drivers and user-mode drivers. When I came across the 12 Factor App methodology, a strong voice echoed around my brain: "THIS IS THE FUTURE OF DEVELOPMENT!"
And I kept wondering how to apply this method to Linux KMD and UMD design and development, since the methodology is very much web-app based (I'm a part-time open-source web developer).
Current development language: C
Current test automation: custom Python testing framework (progress-based, no unit tests)
Please give me some suggestions on this. Thanks in advance.
As with most development guidelines, there is a gap between the guideline and the enforcement.
For example, in your "12 factor app" methodology, one of the factors is:
Codebase - One codebase tracked in revision control, many deploys
Which sounds great, and would really simplify things. Until you get to the point of utility libraries. You see, when you find you are reusing code across multiple projects, you probably want:
Independent build and release chains for the multiple projects.
This could mean two codebases, but the factor above states one codebase (perhaps one per project, perhaps one per company). Let's assume one per company first, which is easy to see as non-ideal, because you would have commits unrelated to a project in that project's commit history. OK, one per project is more sensible; but what if projects need to share code, like the libraries that format their communications and control the send/receive protocols? Well, we could create a third "protocol library" so that we have revisioning around the protocol; but that violates "one codebase (per project)", because now you have two codebases comprising a single releasable item.
The decisions here are not simple. The other approach is to copy the protocol code into both projects and keep them in sync by some other means.
Dependencies - Explicitly declare and isolate dependencies
It's a great idea, and one that makes development easier in many ways. Again, just to illustrate how a great idea can suffer without clear guidelines on how to implement it: what do you do when you are using a library that doesn't attempt to isolate its own dependencies? Many of the more complex libraries themselves depend on other libraries, and generally they clearly declare their dependencies, as do the libraries used by those libraries; however, sometimes the base, core libraries used by multiple projects (logging, configuration, etc.) wind up being used at different release versions. The isolation occurred on a per-library basis, but not on a per-project basis. You could fix it, provided you wanted to (or could) fork and clone the libraries, restructuring them to properly isolate their dependencies for overall coordination of version numbers; but generally you will lack the time to work on other people's projects.
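To make that concrete, here is a toy sketch (package names and versions are invented) of the kind of per-project check you end up writing yourself: flatten the dependency tree and flag any library that is pulled in at more than one version.

```typescript
// Toy illustration of "isolated per library, not per project":
// walk a flattened dependency list and flag libraries required at different versions.
type Dep = { name: string; version: string; requiredBy: string };

// Invented example data: two libraries each pull in "logging", at different versions.
const transitiveDeps: Dep[] = [
  { name: "logging", version: "2.1.0", requiredBy: "http-client" },
  { name: "logging", version: "1.4.2", requiredBy: "config-loader" },
  { name: "config-loader", version: "3.0.0", requiredBy: "app" },
];

const versionsByLib = new Map<string, Set<string>>();
for (const dep of transitiveDeps) {
  const versions = versionsByLib.get(dep.name) ?? new Set<string>();
  versions.add(dep.version);
  versionsByLib.set(dep.name, versions);
}

for (const [name, versions] of versionsByLib) {
  if (versions.size > 1) {
    // This is the per-project coordination the factor leaves you to enforce yourself.
    console.warn(`Conflict: ${name} is required at versions ${[...versions].join(", ")}`);
  }
}
```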
In general, the advice in the "12 factor app" methodology is good; but it leaves you to do the work of translating the guidelines into development protocols. Enforcement then becomes a matter of interpretation, and the means of enforcement (as well as the interpretation) fall on you to implement.
And some of the guidelines look dangerously over-simplified:
Concurrency - Scale out via the process model
While this is an easier way to go, it's not how any single high performance web server works. They all use threading, thread pools, and other more complex constructs to avoid process switching. These constructs (which are admittedly harder to use) were created specifically due to the limitations of a traditional process model. After all, it's not common to launch a process per web request, nor would you generally "tune a program for better performance" by starting a second copy on the same machine. Certainly, there are architectures where this could work; but, so far these architectures haven't outperformed their competition.
Between machines, I wholeheartedly agree. Process scaling is the only way to go in a distributed environment; but there's not much in this methodology that talks about distributed algorithms, or even distributed computing approaches; so, again, it's another thing left up to the implementor.
Finally, their process commentary seems really out of place for writing a command line tool. The push to daemonize things works really well for microservices; however, you can't microservice away the clients. Eventually you'll have to write something that isn't "managed by systemd", something that starts execution and ends execution without being an always-on service.
So, it's a good framework, which might not work for some things, even if it is excellent for many things; but, in my opinion, the tooling to enforce it would have to be built by the organization using it because the interpretations one organization might make could differ from another organization.
Looking at LaunchDarkly for feature flagging across our enterprise apps.
Two questions:
1) I'm concerned about being able to effectively flag features across our Java back end and React front ends (2 of them). What are some strategies that people use to define features appropriately so that they are easy to manage across multiple applications/platforms?
2) Have you replaced most/all of your git / Bitbucket / ?? branching workflow with feature flags and purely trunk-based development? If not, have you made significant changes to your existing git / Bitbucket branching strategy?
Disclaimer: I work at DevCycle
I'm a few years late, but I really wanted to make sure anyone finding their way to this question has a little more information.
1) While levlaz provided an answer explaining that you should put your flag management as far up the stack as possible, I don't necessarily agree that this is the best approach to consider first.
Consider this: A simple setup of a single feature across multiple platforms
Within DevCycle (and others), when you create a Feature, it is available across all platforms and the API.
You simply request the features for a user on any platform, and if the user qualifies for a feature, you'll receive it. There is no extra setup necessary to enable it on various platforms.
This means that if a feature is meant to be accessed on either your Java backend or React frontend, you can be guaranteed that the feature will be available at the correct times for the correct user/service regardless of where you call it from.
In short: one single Feature is managed across all platforms in one spot, with one toggle (if desired).
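As a rough sketch of what that looks like in code (the `FeatureClient` interface and `variableValue` name below are hypothetical stand-ins, not the actual DevCycle API): one key, defined once, answered consistently wherever it is asked.

```typescript
// Hypothetical client interface standing in for whichever SDK each platform uses.
interface FeatureClient {
  variableValue<T>(user: { userId: string }, key: string, defaultValue: T): Promise<T>;
}

// One feature, one key, shared by the Java backend and both React frontends.
const NEW_DASHBOARD_KEY = "new-reporting-dashboard";

// Any platform asks the same question for the same user and gets a consistent answer;
// there is no per-platform feature definition to keep in sync.
export async function canSeeNewDashboard(
  client: FeatureClient,
  userId: string,
): Promise<boolean> {
  return client.variableValue({ userId }, NEW_DASHBOARD_KEY, false);
}
```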
Another approach: a single feature across multiple platforms with different toggles or use cases.
You could easily create multiple flags, one for each individual platform a feature is meant to be available on, and manage each individually. However, this isn't entirely necessary!
Within a feature setup, you can simply have two separate rules defining different variations being delivered to each platform. For example, you can set up a simple rule which ensures that Java will receive the feature, but React would not.
A unique DevCycle approach: Managing multiple platforms independently.
Here is something DevCycle offers that may handle your use case in a unique way:
Imagine every single time you create a feature, both a Java and React version of that feature are created.
These platforms would be managed separately within each feature, meaning that there is no potential for accidental feature data bleeding between platforms in the event that a feature exists on one platform but not on another.
You can set up each platform as an entirely separate entity, meaning they would use different SDK keys, and all targeting will always be separate.
In the example above, the feature would be entirely disabled and unavailable in any Java SDKs calling out to DevCycle, but it would be available in React.
tl;dr
It's up to you how you want to manage things across platforms. DevCycle makes it easy to do this however you'd like: have all features across all platforms, splitting up your platforms, or just choosing to target differently depending on the feature.
2) Like levlaz said, that is the ideal, but you'll likely never want to achieve fully trunk-based nirvana, as there are a lot of use cases for having various environments and paths for your team to take in various scenarios.
That said, we've seen a lot of folks successfully get REALLY close by using Feature Flags.
I wouldn't suggest removing your build pipelines and CI/CD in favor of feature flags; instead, feature flags enhance them.
For example, with feature flags, you can remove the concept of feature branches and large feature pull requests. Instead, ensure that everything that ever gets put into production is always behind a feature flag. To ensure this happens, you can use workflow tools like GitHub Actions that do these safety checks for you. With these guards in place, you should always be able to simply push through to prod without any concerns and run your deploy scripts on each merge. Then you can just target your internal / QA users for testing, and not worry about things hitting prod users!
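For illustration, the guard itself is tiny (the flag client below is a generic interface, not a specific SDK): new code merges to trunk dark, and only targeted users ever reach it.

```typescript
// Sketch of "everything in production sits behind a flag": the new path is merged
// to trunk but stays dark until the flag targets a user (e.g. internal / QA users).
interface FlagClient {
  isEnabled(flagKey: string, user: { id: string; group?: string }): Promise<boolean>;
}

export async function handleCheckout(flags: FlagClient, user: { id: string; group?: string }) {
  if (await flags.isEnabled("new-checkout-flow", user)) {
    return newCheckoutFlow(user); // unfinished work, safe to deploy, dark for prod users
  }
  return legacyCheckoutFlow(user); // current production behaviour for everyone else
}

function newCheckoutFlow(user: { id: string }) {
  return `new checkout for ${user.id}`;
}
function legacyCheckoutFlow(user: { id: string }) {
  return `legacy checkout for ${user.id}`;
}
```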
You may still want to have some sort of disaster recovery environment and local environments, so never truly hitting a pure trunk, but you can get close!
[Disclaimer: I work at LaunchDarkly]
For your first question, my general recommendation is to put flags as "high up on the stack" as possible. At the end of the day, you are making a decision somewhere. Where you put that decision point is entirely up to you. Within LaunchDarkly the flags are agnostic to the implementation so a single flag can live on the server, mobile, and client-side without any issues. Keep things simple.
For your second question, in practice it is very rare to see teams fully make the switch to trunk-based development. This is the goal of 99% of the teams that I work with, but depending on whether you have a greenfield or a brownfield project, the complexity of making the switch may not be worth the effort.
Lastly, our CTO wrote a book this year called "Effective Feature Management"[1]. If you have not heard of it, I would recommend you take a look. I think you'll find some great insights there.
https://launchdarkly.com/effective-feature-management-ebook/
I'm trying to implement a collaborative canvas on which many people can draw freehand or with specific shape tools.
The server is developed in Node.js and the client in AngularJS (version 1), and I am pretty new to both.
I must use a consensus algorithm so that the canvas always shows the same content to all users.
I'm seriously struggling with it, since I cannot find a proper tutorial on how to use one. I have been looking at and studying Paxos implementations, but it seems like Raft is much more widely used in practice.
Any suggestions? I would really appreciate it.
Writing a distributed system is not an easy task[1], so I'd recommend using an existing strongly consistent store instead of implementing one from scratch. The usual suspects are ZooKeeper, Consul, etcd, and Atomix/Copycat. Some of them offer Node.js clients:
https://github.com/alexguan/node-zookeeper-client
https://www.npmjs.com/package/consul
https://github.com/stianeikeland/node-etcd
I've personally never used any of them with nodejs though, so I won't comment on maturity of clients.
If you insist on implementing consensus on your own, then raft should be easier to understand — the paper is surprisingly accessible https://raft.github.io/raft.pdf. They also have some nodejs implementations, but again, I haven't used them, so it is hard to recommend any particular one. Gaggle readme contains an example and skiff has an integration test which documents its usage.
Taking a step back, I'm not sure that distributed consensus is what you need here. It seems like you have multiple clients and a single server, so you can probably use a centralized data store. The problem domain is not really that distributed either: shapes can be overlaid one on top of the other in the order they are received by the server, i.e. FIFO (imagine multiple people writing on the same whiteboard, where the last one wins). The challenge is with concurrent modifications of existing shapes, but maybe you can fall back to last/first change wins or something like that.
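A minimal sketch of that centralized approach (no particular library assumed): the Node.js server assigns the order simply by appending operations as they arrive, and every client applies them in that same sequence.

```typescript
// The server is the single source of truth: arrival order is the canvas order (FIFO).
type DrawOp = { clientId: string; shape: unknown };
type OrderedOp = DrawOp & { seq: number };

class CanvasLog {
  private ops: OrderedOp[] = [];
  private listeners: Array<(op: OrderedOp) => void> = [];

  // Called whenever any client submits an operation.
  append(op: DrawOp): OrderedOp {
    const ordered: OrderedOp = { ...op, seq: this.ops.length };
    this.ops.push(ordered);
    this.listeners.forEach((notify) => notify(ordered)); // fan out to connected clients
    return ordered;
  }

  // A newly connected client replays the whole log to reach the same state as everyone else.
  snapshot(): OrderedOp[] {
    return [...this.ops];
  }

  onOp(listener: (op: OrderedOp) => void): void {
    this.listeners.push(listener);
  }
}
```

Each client renders snapshot() on connect and then applies every onOp callback in order; "last one wins" falls out of the sequence numbers.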
Another interesting avenue to explore here would be Conflict-free Replicated Data Types (CRDTs). Folks at GitHub used them to implement collaborative "pair" programming in Atom. See the Atom Teletype blog post; their implementation may also be useful, as collaborative editing seems to be exactly the problem you are trying to solve.
Hope this helps.
[1] Take a look at the Jepsen series (https://jepsen.io/analyses), where Kyle Kingsbury tests various failure conditions of distributed data stores.
Try reading Understanding Paxos. It's geared towards software developers rather than an academic audience. For this particular application you may also be interested in the Multi-Paxos Example Application referenced by the article. It's intended to help illustrate the concepts behind the consensus algorithm, and it sounds like it's almost exactly what you need for this application. Raft and most Multi-Paxos designs tend to get bogged down with an overabundance of accumulated history, which generates a new set of problems to deal with beyond simple consistency. An initial prototype could easily handle sending the full state of the drawing on each update and ignore the history issue entirely, which is what the example application does. Later optimizations could be made to reduce network overhead.
I see this time and time again. The UAT test manager wants the new build to be ready to test by Friday. One of the first questions asked in the pre-testing meeting is, "What version will I be testing against?" (which is a fair question to ask). The room goes silent, then someone comes back with, "All the assemblies have their own version, just right-click and look at the properties...".
From the testing manager's point of view, this is no use. They want a version/label/tag across everything that tells them what they are working on. They want this information easily available.
I have seen solutions where the versions of different areas of a system are stored in a datastore, then shown in the main application's about box. The problem is, this needs to be maintained.
What solutions have you seen that get around this ongoing problem?
EDIT: The distributed system covers VB6, Classic ASP, VB.Net, C#, Web Services (across departments, so which version are we using?), and SQL Server 2005.
I think the problem is that you and your testing manager are speaking of two different things. Assembly versions are great for assemblies, but your test manager is speaking of a higher-level version, a "system version", if you will. At least that's my read of your post.
What you have to do in such situations is map all of your different component assemblies into a system version. You say something along the lines of "Version 1.5 of the system is composed of Foo.Bar.dll v1.4.6 and Baz.Qux.dll v2.6.7 and (etc.)". Hell, in a distributed system, you may want different versions for each of your services, which may, in and of themselves, be composed of different versions of .dlls. You might say, for example: "Version 1.5 of the system is composed of the Foo service v1.3, which is composed of Foo.dll v1.9.3 and Bar.dll v1.6.9, and the Bar service v1.9, which is composed of Baz.dll v1.8.2 and Qux.dll v1.5.2 and (etc.)".
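For illustration, the mapping can be nothing more than a build-generated manifest that the about box (or a /version endpoint) reads; the component names and versions below are just the examples from the paragraph above.

```typescript
// A generated "system version" manifest: one number for the test manager,
// with the per-component detail behind it.
type Component = { name: string; version: string };

const manifest: { systemVersion: string; components: Component[] } = {
  systemVersion: "1.5",
  components: [
    { name: "Foo.Bar.dll", version: "1.4.6" },
    { name: "Baz.Qux.dll", version: "2.6.7" },
  ],
};

// The UAT manager only ever quotes systemVersion; the build, not a human,
// regenerates the component list on every release.
console.log(`System version ${manifest.systemVersion}`);
for (const c of manifest.components) {
  console.log(`  ${c.name} ${c.version}`);
}
```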
Doing stuff like this is typically the job of the software architect and/or build manager in your organization.
There are a number of tools that you can use to handle this issue that have nothing to do with your language of choice. My personal favorite is currently Jira, which, in addition to bug tracking, has great product versioning and roadmapping support.
Might want to have a look at this page that explains some ways to integrate consistent versioning into your build process.
There are a number of different things that contribute to the problem. Off of the top of my head, here's one:
One of the benefits of a distributed architecture is that we gain huge potential for re-use by creating services and publishing their interfaces in some form or another. What that then means is that releases of a client application are not necessarily closely synchronized with releases of the underlying services. So, a new version of a business application may be released that uses the same old reliable service it's been using for a year. How shall we then apply a single release tag in this case?
Nevertheless, it's a fair question, but one that requires a non-trivial answer to be meaningful.
Not using build-based version numbering for anything but internal references. When the UAT manager asks the question, you say "Friday's*".
The only trick then is to make sure labelling happens reliably in your source control.
* insert appropriate datestamp/label here
We use .NET and Subversion. All of our application assemblies share a version number, which is derived from manually updated major and minor numbers plus the Subversion revision number (<major>.<minor>.<revision>). We have a prebuild task that updates this version number in a shared AssemblyVersionInfo.vb file. Then when testers ask for the version number, we can give them either the full three-part number or just the Subversion revision. The libraries we consume either aren't changing, or the change is not relevant to the tester.
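Their actual prebuild task in a .NET shop would presumably be MSBuild- or script-based; purely as a sketch of the idea, here is the same composition expressed as a small script (the hand-maintained major/minor values, the svnversion call, and the output path are illustrative).

```typescript
// Sketch: compose <major>.<minor>.<subversion revision> and write it into the
// shared AssemblyVersionInfo.vb that every assembly compiles in.
import { execSync } from "node:child_process";
import { writeFileSync } from "node:fs";

const MAJOR = 2; // bumped by hand
const MINOR = 3; // bumped by hand

// `svnversion` prints the working-copy revision, e.g. "4168", "4160:4168", or "4168M".
const raw = execSync("svnversion -n").toString();
const revision = raw.split(":").pop()!.replace(/\D/g, "");

const version = `${MAJOR}.${MINOR}.${revision}`;
writeFileSync(
  "AssemblyVersionInfo.vb",
  `<Assembly: System.Reflection.AssemblyVersion("${version}")>\n`,
);
console.log(`Stamped build as version ${version}`);
```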