Manage Bugs between Dev, QA and Production - versioning

My software team has just started using Jira to manage bugs, so I am fairly new to the processes available in Jira.
The systems we build are internally facing applications or customer-facing web applications, and we release to these environments on a change-request-by-change-request basis, rather than through a product-style development lifecycle.
The way we plan to use Jira is for each project (which is usually a single CR) to have its own project created in Jira. Our process is then as follows:
Developers code and unit test until the work is ready for integration testing.
Developers start integration testing, and any bugs are raised in Jira under a version named 'Development'.
Once all development bugs are fixed, we move into QA, where we hand the build over to the testing team. A new version, 'QA', is created, and all bugs found in QA are logged against this version.
Once all bugs are closed, the project goes live and is closed in Jira.
From what I have seen of the more agile, product-style uses of Jira, I suspect we are using the version fields in the wrong way, but as I am new to Jira I am not sure whether we are, or whether there is a better way of doing it.
I would appreciate hearing from someone who has used Jira in this type of environment about the right way to use it.

Here's what I'd recommend after using JIRA in a number of different environments, with varying team sizes and types of project.
Use JIRA projects to denote large but discrete functional areas of work that correspond to a subset of team members in your organization, e.g. a new web application or internal customer app.
If the project or the team working on it is big enough to warrant it, use JIRA components to define different functional areas. You can then assign component leads who will automatically be assigned new issues against their components, and you'll be able to track which functional areas have the most bugs and maybe need more attention from the test team.
For versions, you can certainly set up development, live and QA versions as you've described, but these are more traditionally mapped to the JIRA issue status. With the standard JIRA workflow an issue will be Open while a developer is working on it, Resolved after the feature or bug fix is completed, and then Closed if QA verifies the feature or fix, or Open again if QA identifies a problem.
If you have long-lived applications where you get multiple CRs that specify new features for the same app, I would use JIRA versions to define the different releases of the app, based on a feature set and / or time schedule.
With the approach above you'll be able to track the work of each team or individual developer / tester and know when all issues on an app have been addressed, so that you're ready to go into test or deployment. I see you mentioned that you're not using a traditional product lifecycle, but unless your organization is very small and you develop apps that are thrown away after their first version is ready, I think you'll get a lot of benefit from this approach.
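For example, once you map versions to releases, a simple JQL filter can show whether anything is still outstanding for a release. This is only a sketch; the project key WEBAPP and the version name 1.2.0 are made up:

    project = WEBAPP AND fixVersion = "1.2.0" AND status != Closed ORDER BY priority DESC

Save it as a filter and you (or a dashboard gadget) can see at a glance when a release is clear to go to test or deployment.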

What is the problem of library version incompatibility, and how does a monorepo-style setup solve it?

I have started taking an interest in the monorepo approach, and Nx.js in particular. Almost all articles say that a monorepo solves the problem of library version incompatibility, and I don't quite understand how. I have a few questions:
If I understood correctly, the idea of a monorepo (in terms of shared code) is that all shared code is always at the same version and all changes happen in one atomic commit (as the monorepo advertising states). So let's imagine a monorepo with 100 projects, all of which depend on libA in the same repo. If I change something in libA, then I have to check the change against every dependent project. Moreover, I have to wait for all the code owners to review my changes. So what are the pros?
Let's imagine I have a monorepo with the following projects: appA, libC and libD, plus some third-party library, call it third-party-lib. appA depends on libC and libD. At some point appA needs third-party-lib-v3, BUT libC depends on third-party-lib-v1. https://monorepo.tools/#code-generation states: "One version of everything: no need to worry about incompatibilities because of projects depending on conflicting versions of third party libraries." But that is not the case: in the JavaScript world this results in two different versions of third-party-lib in different node_modules folders. Again, what are the pros?
My questions may be very naive because I have never run into problems with libraries, and I have only just started learning about monorepos, so I would be glad if someone could help me work through this.
Having worked with shared code in a non-monorepo environment, I can say that managing internal packages without a monorepo tool like NX requires discipline and can be more time-consuming.
In your example of 100 projects using 1 library, all 100 projects should be tested and deployed with the new version of the code. The difference is when.
In separate repos, you would publish the new version of your package, with all the code reviews and unit testing that go along with it. Next you would update the package version in all your 100 apps, probably one by one. You would test them, get code reviews, and then deploy them.
Now, what if you found an issue with your new changes in one of the apps? Would you roll back to the previous version? If it was in the app then you could fix it in that one app, but if it was in the library, would you roll back the version number in all your apps? What if another change was needed in your library?
You could find yourself in a situation where your apps are using different versions of your library, and you can't push out new versions because you can't get some of your apps working with the previous version. Multiply that across many shared libraries and you have an administrative nightmare.
In a mono-repo, the pain is the same, but it requires less administrative work. With NX, you know what apps your change is affecting and can test all those apps before you deploy your changes, and deploy them all at once. You don't block other changes going into your library because the changes aren't committed until they are tested everywhere they are used.
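As a rough sketch of how that looks in practice, Nx's "affected" commands compare your branch against a base branch and only run tasks for the projects touched by your change. The base branch name main is an assumption here:

    # Lint, test and build only the projects affected by the current change set
    nx affected:lint --base=main
    nx affected:test --base=main
    nx affected:build --base=main

Running these in CI before merge is what gives you confidence that all the dependent projects still work with the new libA.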
It is the same with third party libraries. When you update the version of a library, you test it in all applications that use it before your change is committed. If it doesn't work in one application, you have a choice.
Fix the issue preventing that application from working OR
Don't update the package to the new version
It means that you don't have applications that are 'left behind' and are forced to keep everything up to date. It does mean that sometimes updates can take so much time that they are difficult to prioritise, but that is the same for multi-repo development.
Finally, I would add that when starting to work with NX you may find yourself creating large, frequently changing libraries that are used by all apps, or perhaps putting large amounts of code in the apps themselves. This leads to pain where changes frequently result in deployments of the whole monorepo. I have found that it is better to create app-specific folders that contain libraries used only by that app, and to create shared libraries only when it makes business sense to do so. Examples are:
Services that call APIs and return business domain objects that should not really be changed (changes to these APIs and responses generally result in a V2 of the API and a new NX library could be created to serve that V2 API, leaving the V1 unchanged).
Core, stable atomic UI libraries for each component (again, try not to change the component itself, but create a V2 if it needs to change)
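As a sketch, such a workspace layout might look like the following; the app and library names are made up, but the apps/ and libs/ split is the conventional NX structure:

    apps/
      appA/                  # deployable application
    libs/
      appA/                  # libraries used only by appA
        feature-orders/
      shared/
        api-v1-client/       # stable API client; a breaking API change becomes a new api-v2-client lib
        ui-core/             # stable atomic UI components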
More information on this can be found here: NX applications and libraries.

Change runtime from Python to Go in App Engine standard environment

I have a website on App Engine that is 99% static. It is running on the Python 2.7 runtime. Now the time has come to evolve this webapp, and since I have almost no Python code in it, I'd prefer to write it in Go instead.
Can I change runtime from Python 2.7 to Go, while keeping the project intact? Specifically, I want to keep the same app-ID, the same custom domain attached to it, the same SSL certificate, and so on.
What do I have to do in order to do that? I surely have to change runtime in the app.yaml. Is there anything else?
Bonus question: will such a change happen without downtime?
I'd be grateful for any links to documentation on exactly that (swapping runtime on a live app). I can't find any.
Specify a runtime as well as a new value for version. When deployed you'll have an older version that is Python and a newer version that is Go. There won't be any downtime (same as when deploying a newer version of Python).
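As a minimal sketch, the Go version of the app would get an app.yaml declaring the new runtime (go111 is shown here only as an example; use whichever Go runtime is currently supported):

    runtime: go111

Deploying it as a new, non-default version keeps traffic on the existing Python version until you decide to switch (the version name go2 is made up):

    gcloud app deploy --version go2 --no-promote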
Rather than trusting links/docs (which may be out of date or not match exactly what you're trying to do), why not create a new GAE-Std project for testing purposes and try it yourself? Having a GAE-Std test project is good for testing new functionality (especially by other testers who won't have access to the dev environment on your laptop).
The GAE services offer complete code isolation. So it should be possible to simply deploy a new version of the service, which can be written in a different language or even use a different GAE (standard/flex) environment. Personally I didn't go through a language change, but I did go through a split of a single-service app into a multi-service one, and I see no reason why the same principles wouldn't apply.
Maybe develop the new version as a separate app first, to be able to test it properly without risking an accidental impact on the old version, and only after that bring the code into the old app as a new version. That would be using GAE project isolation. You can, in fact, test the entire version migration as a separate app if you so desire, without even touching the existing app. I am using this technique - a separate app ID - to implement a staging environment for my app, completely isolated from my production app, see How to copy / clone entire Google App Engine Project
Make sure to not switch traffic to the new version at deployment time. This keeps the app working with the old version. Test first that the new version works as expected using Targeted routing. Then maybe use Splitting traffic across multiple versions to perform A/B testing with just a small percentage of the traffic going to the new version. Finally, when happy with the results, switch all traffic to the new version.
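A hedged sketch of that sequence with the gcloud CLI, assuming the default service and made-up version names py27 (old) and go2 (new):

    # Targeted routing: hit the new version directly, no user traffic reaches it
    #   https://go2-dot-YOUR_PROJECT_ID.appspot.com

    # Traffic splitting: send 10% of real traffic to the new version
    gcloud app services set-traffic default --splits py27=0.9,go2=0.1

    # When happy with the results, move everything over
    gcloud app services set-traffic default --splits go2=1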
You need to pay special attention to the app-level configs (dispatch, cron, queue, datastore indexes), shared by all services/versions. They need to be functionally equivalent in the 2 versions. The service isolation doesn't apply to them, only project isolation can ensure no impact to the old version.
There should be no need to make any change to the app ID, custom domain mapping or SSL config. The above mentioned tests should confirm that.
A few potentially interesting posts related to re-working services/modules:
Converting App Engine frontend versions to modules
Google App Engine upgrading part by part
Migrating to app engine modules, test versions first?
Advantages of implementing CI/CD environments at GAE project/app level vs service/module level?

Continuous Delivery with Fastlane

I have recently moved from a web context into a mobile context (building a React Native app). One of the most powerful processes in the web world was Continuous Delivery. I would like to recreate a continuous delivery pipeline, into production, for the React Native mobile context. My understanding is that this is possible so long as only the JavaScript bundle gets updated, rather than the underlying native components.
I have been finding blogs such as:
https://hiddentao.com/archives/2017/02/17/continuous-integration-for-react-native-with-testfairy-testflight-deploy/ and it appears that fastlane is the most common solution for Continuous Integration in the mobile ecosystem, but posts about Continuous Delivery are a little thin on the ground.
Is this because it is impossible? Is the promised land of "just update the JS bundle" a lie? And if it is not impossible, how would I configure fastlane to push directly to production? Or would I use some other tool? Is it generally considered an anti-pattern in this space? If so, why?
It IS possible to update the JavaScript portion of a React Native app.
Fastlane is a great tool for building and deploying mobile apps but it is not itself a continuous delivery tool. However, used in conjunction with some other CI tool (Jenkins etc) it can make it easy to configure app store or beta releases triggered at some set interval or based on a certain trigger.
Fastlane is primarily designed to solve the issues associated with building and deploying native applications and as such it is very useful for building/deploying the native RN app to the app store but is likely not the best tool for managing your JS pushes. There are a few tools that are popular for pushing the JS code:
https://deploy.apphub.io/ and http://microsoft.github.io/code-push/ are two popular mechanisms specifically built for this purpose, and both provide command line tools for deploying updated JavaScript. These could be configured in Jenkins (or another CI server) without necessarily needing to use fastlane.
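For example, with the code-push CLI a JS-only update can be pushed from a CI job roughly like this; the app names and the Production deployment are placeholders for whatever you registered with the CodePush service:

    # Release the current JS bundle for each platform's registered app
    code-push release-react MyApp-iOS ios -d Production
    code-push release-react MyApp-Android android -d Production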
As #john_ryan said, you could use CodePush for application updates. Nevertheless, it is worth taking some caveats of this solution into account:
You can't use hot updates if you need to add native modules.
Hot updates are strongly discouraged after upgrading the React Native version; in most cases the consequences will be unpleasant.
New users will get the outdated built-in bundle on first launch; the current version is only applied on the second run.
Given all of the above, CodePush is best suited for:
Decreasing the time to ship fixes for critical bugs. Of course, you should still release a native update as soon as possible in this case.
Non-major updates. For major updates you usually need to update screenshots, the app description and release notes, and draw the user's attention to the update; a hot update does not fit major changes because of the "second run update" cycle.
Testing the stability of a new update on a small portion of users.
A/B testing.
You should use fastlane in any case. It is really good for custom builds, updating store metadata, screenshots, etc. For beta build delivery I recommend Crashlytics Beta.
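As a minimal sketch of what that looks like, a Fastfile (fastlane's Ruby DSL) might define a beta lane along these lines; the scheme name is a placeholder, and you could swap the last step for the crashlytics action if you prefer Crashlytics Beta:

    # fastlane/Fastfile
    platform :ios do
      lane :beta do
        increment_build_number             # bump the build number
        build_app(scheme: "MyApp")         # alias for gym: builds and signs the app
        upload_to_testflight               # alias for pilot: uploads the build for beta testing
      end
    end

Your CI server can then trigger the whole thing with "fastlane ios beta" on whatever schedule or trigger you choose.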

What is the general practice for an Express and React based application: keeping the server and client code in the same or different projects/folders?

I am from a microsoft background where I always used to keep server and client applications in separate projects.
Now I am writing a client-server application with Express as the back end and React as the front end. Since I am totally new to these two tools, I would like to know:
What is the general practice?
Keeping the Express (server) code base and the React (client) code base as separate projects, or keeping the server and client code bases together in the same project? I could not think of the pros and cons of either approach.
Your recommendations are welcome!
PS: please do not mark this question as opinionated; I believe I have a valid reason to ask for recommendations.
I would prefer keeping the server and client as separate projects because that way we can easily manage their dependencies, dev dependencies and unit test files.
Also if in case we need to move to a different framework for front end at later point we can do that without disturbing the server.
In my opinion, it's probably best to have separate projects here. But you made me think a little about the "why" for something that seems obvious at first glance, but maybe is not.
My expectation is that a project should be mostly organized one-to-one on building a single type of target, whether that be a website, a mobile app, a backend service. Projects are usually an expression of all the dependencies needed to build or otherwise output one functioning, standalone software component. Build and testing tools in the software development ecosystem are organized around this convention, as are industry expectations.
Even if you could make the argument that there are advantages to monolithic projects that generate multiple software components, you are going against people's expectations and that creates the need for more learning and communication. So all things being equal, it's better to go with a more popular choice.
Other common disadvantages of monolithic projects:
greater tendency for design to become tightly coupled and brittle
longer build times (if using one "build everything" script)
takes longer to figure out what the heck all this code in the project is!
It's also quite possible to make macro-projects that work with multiple sub-projects, and in a way have the benefits of both approaches. This is basically just some kind of build script that grabs the output of sub-project builds and does something useful with them in a combination, e.g. deploy to a server environment, run automated tests.
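As a sketch, such a macro build can be as simple as a shell script at the root of the meta-project; the folder names and npm scripts here are assumptions about how the sub-projects are set up:

    # Build and test each sub-project, then do something useful with the outputs
    (cd server && npm ci && npm run build && npm test)
    (cd client && npm ci && npm run build && npm test)
    # e.g. copy the client build output into the server's static folder, or deploy both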
Finally, all devs should be equipped with tools that let them hop between discrete projects easily. If there are pains in doing this, it's best to solve them without resorting to a monolithic project structure.
Some examples of practices that help with developing React/Node-based software that relies on multiple projects:
The IDE easily supports editing multiple projects. And not in some cumbersome "one project loaded at a time" way.
Projects are deployed to a repository that can be easily used by npm or yarn to load in software components as dependencies.
Use "npm link" to work with editable local versions of sub-projects all at once. More generally, don't require a full publish and deploy action to have access to sub-projects you are developing along with your main React-based project.
Use automated build systems like Jenkins to handle macro tasks like building projects together, deploying, or running automated tests.
Use versioning scrupulously in package.json. Let each software component have its own version number and follow the semver convention, which indicates when changes may break compatibility.
If you have a single team (or developer) working on the front-end and back-end software, then set the dependency versions in package.json to always get the latest versions of sub-projects (packages).
If you have separate teams working on the front-end and back-end software, you may want to constrain dependency versions to a single major version using a semver range in package.json. (Basically, you want some protection from breaking changes; see the sketch below.)
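For instance, a hedged package.json sketch (the package names are made up): "*" always pulls the newest published version, while a caret range like "^2.1.0" accepts any 2.x release but refuses a breaking 3.0.0, as semver prescribes:

    {
      "dependencies": {
        "@acme/api-client": "*",
        "@acme/ui-components": "^2.1.0"
      }
    }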

How to version and manage angularjs components for different projects

This is more of a curiosity question than a technical one. In my company we have an MVP with lots of AngularJS components, but now we are offering the MVP to different companies with specific needs.
Here's what it will look like in a real-life scenario:
Company 1
  Module 1
  Module 2
  Module 3
Company 2
  Module 1 (with a specific feature or change)
  Module 3
Company 3
  Module 2
  Module 3
  Module 4 (only for this project)
And we were looking for a versioning system that could fit our future business model, because as we speak, we are using branches for different companies and other branches for specific component features.
You can see the hell this has become. It's really hard to maintain and it's even harder to deploy the different versions of the application.
I'll be glad to share my findings if we come up with a solution for this case, and I'll write a blog post if we do.
Thanks!
Are you looking for management or process guidance, or for tools?
From a tools standpoint you could use npm, either with its private package service or pointed at some private git repo. Bower can do the same.
In the Windows space there's NuGet, which you can host your own repositories for, or there are services out there for that too.
Git has support for submodules and subtrees, but I don't personally recommend them. Making dependencies part of your actual git history is complicated.
The biggest thing from a process perspective is probably just avoid breaking changes. Put the effort into design of shared components up front so you're not having to redesign everything around the shared component when it changes drastically because it didn't work right the way it was built the first time around.
Treat your shared modules as if they're open source projects. Keep good documentation and clean code, and adhere to semantic versioning. Apply version numbers to stable builds (git tag them so they're easy to check out). Put someone in charge of accepting changes to the component so they can keep track of what everyone else is doing with it and guide its development.
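As a sketch, cutting a release of a shared component could look like this; the private registry URL is a placeholder for wherever you host packages:

    npm version minor        # bumps package.json and, in a git repo, creates a tag such as v1.3.0
    git push --follow-tags   # push the release commit together with its tag
    npm publish --registry https://npm.example-corp.internal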
Fork it into a new package if the requirements one project has are wildly different from the others. Maintaining a component with too many different requirements can become a nightmare.
