I started a Dart project and now I need some functionality that is not available in the Dart API Reference. I was advised to use a package from pub.dartlang.org, and I am now browsing through pub.
Previous experience with JavaScript libraries tells me that quality and support can vary wildly between libraries, so I am a bit reluctant to use packages from pub. How would I know which packages are of good quality, and whether a package will be updated when there are breaking changes in Dart?
Therefore I would like to know:
Is there a way to know which packages on pub.dartlang.org are safe to choose for a long-term project?
Some questions related to this:
Will packages authored by the "Dart team" be supported for a long time?
Should I prefer packages whose uploaders have @google.com email addresses?
Is there a list of Google-supported packages? (I suppose Polymer would be on it.)
Is Google currently monitoring the quality of pub packages?
Kind regards,
Hendrik Jan van Meerveld
You are correct that the quality of packages can vary in pub or any other package repository. Here are a few things you can use to evaluate the quality of a package:
Is the package actively maintained?
How many active committers does it have?
How many people have starred or forked it on GitHub?
How much use do you think it is getting? Are there questions about it on Stack Overflow or on mailing lists?
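If the package links to a GitHub repository, most of these signals can be pulled programmatically from the public GitHub REST API. A minimal Python sketch (the field names are real API fields; which repository to query, and how to weigh the numbers, is up to you):

```python
# Sketch: pull package-health signals from the public GitHub REST API.
# The repository is whichever one the pub package links to.
import json
from urllib.request import urlopen

def summarize(repo):
    """Reduce the API response to the signals discussed above."""
    return {
        "stars": repo["stargazers_count"],
        "forks": repo["forks_count"],
        "open_issues": repo["open_issues_count"],
        "last_push": repo["pushed_at"],
    }

def repo_stats(owner, name):
    """Fetch and summarize one repository (makes a network call)."""
    with urlopen(f"https://api.github.com/repos/{owner}/{name}") as resp:
        return summarize(json.load(resp))
```

For example, `repo_stats("bp74", "StageXL")` would return the star and fork counts for the StageXL repository mentioned below (subject to the API's rate limits for unauthenticated requests).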
To answer your specific questions:
You can reasonably expect "Dart team" packages to be supported.
There isn't a list of official Google supported packages. Just look for packages supported by the Dart team if you're looking for packages created by members of the Dart project.
The Dart project doesn't currently have any way of ranking Pub packages.
You can see a list of Dart-team-developed packages on the Dart API page. Any package there whose name is not prefixed with dart: is a library that has been developed and supported by the Dart team. I would definitely prefer a library developed by the Dart team or someone from Google.
If the source repo for the package is publicly available (e.g. on GitHub), you can view the frequency of commits and the responsiveness of the author to issues and pull requests. For instance, you can easily tell that StageXL is a well-maintained library by taking a look at its GitHub repository: 550+ commits, new commits within the last couple of weeks, code accepted from other contributors, and almost 50 closed issues.
Bob Nystrom has talked about a ranking mechanism for pub in the past (he recently posted some ranking results that you can see here). Once a ranking system is in place, you will be able to better choose between two XML libraries for instance.
I have become interested in the monorepo approach, and Nx in particular. Almost all articles say that a monorepo solves the problem of incompatible library versions, and I don't quite understand how. I have a few questions:
If I understood correctly, the idea of a monorepo (in terms of shared code) is that all shared code is always at the same version, and all changes happen in one atomic commit (as monorepo advocates state). So let's imagine a monorepo with 100 projects, all of which depend on libA in the same repo. If I change something in libA, then I have to check the change in all dependent projects. Moreover, I have to wait for all the code owners to review my changes. So what are the pros?
Let's imagine I have a monorepo with the following projects: appA, libC, libD, plus some third-party library, let's call it third-party-lib. appA depends on libC and libD. At some point appA needs third-party-lib-v3, BUT libC depends on third-party-lib-v1. https://monorepo.tools/#code-generation states: "One version of everything. No need to worry about incompatibilities because of projects depending on conflicting versions of third party libraries." But that is not the case. In the JavaScript world it results in 2 different versions of third-party-lib in different node_modules folders. Again, what are the pros?
I may be very naive in my questions, because I have never encountered problems with libraries, and I have only just started learning about monorepos, so I would be glad if someone helped me make sense of this.
Having worked with shared code in a non-monorepo environment, I can say that managing internal packages without a monorepo tool like Nx requires discipline and can be more time-consuming.
In your example of 100 projects using 1 library, all 100 projects should be tested and deployed with the new version of the code. The difference is when.
In separate repos, you would publish the new version of your package, with all the code reviews and unit testing that go along with it. Next you would update the package version in all your 100 apps, probably one by one. You would test them, get code reviews, and then deploy them.
Now, what if you found an issue with your new changes in one of the apps? Would you roll back to the previous version? If it was in the app then you could fix it in that one app, but if it was in the library, would you roll back the version number in all your apps? What if another change was needed in your library?
You could find yourself in a situation where your apps are using different versions of your library, and you can't push out new versions because you can't get some of your apps working with the previous version. Multiply that across many shared libraries and you have an administrative nightmare.
In a monorepo, the pain is the same, but it requires less administrative work. With Nx, you know which apps your change affects, can test all of those apps before you deploy your changes, and can deploy them all at once. You don't block other changes going into your library, because changes aren't committed until they are tested everywhere they are used.
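The "affected" computation described above can be sketched as a reachability walk over the project graph: everything that transitively depends on the changed project must be retested. This is a simplified illustration of the concept, not Nx's actual implementation, and the graph below is made up:

```python
# Hypothetical project graph: which projects each project depends on.
# Nx derives the real graph from imports and project configuration.
DEPENDS_ON = {
    "appA": ["libC", "libD"],
    "appB": ["libD"],
    "libC": ["libD"],
    "libD": [],
}

def affected_by(changed, graph=DEPENDS_ON):
    """Return the changed project plus everything that depends on it,
    directly or transitively."""
    affected = {changed}
    grew = True
    while grew:
        grew = False
        for project, deps in graph.items():
            if project not in affected and any(d in affected for d in deps):
                affected.add(project)
                grew = True
    return sorted(affected)
```

With this graph, a change to libD affects every project, while a change to libC affects only libC and appA; only those need to go through test and deploy.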
It is the same with third party libraries. When you update the version of a library, you test it in all applications that use it before your change is committed. If it doesn't work in one application, you have a choice.
Fix the issue preventing that application from working OR
Don't update the package to the new version
This means you don't have applications that are "left behind": you are forced to keep everything up to date. It does mean that updates can sometimes take so much time that they are difficult to prioritise, but that is the same for multi-repo development.
Finally, I would add that when starting to work with Nx you may find yourself creating large, frequently changing libraries that are used by all apps, or perhaps putting large amounts of code in the apps themselves. This leads to pain, where changes frequently result in deployments of the whole monorepo. I have found it better to create app-specific folders containing libraries that are only used by that app, and to create shared libraries only when it makes business sense to do so. Examples are:
Services that call APIs and return business domain objects that should not really be changed (changes to these APIs and responses generally result in a V2 of the API and a new NX library could be created to serve that V2 API, leaving the V1 unchanged).
Core, stable atomic UI libraries for each component (again, try not to change the component itself, but create a V2 if it needs to change)
More information on this can be found in the Nx documentation on applications and libraries.
I started using a few general purpose utility packages which I integrate into my projects with composer via packagist. A good one I found is jbzoo/utils:
https://github.com/JBZoo/Utils
It has a group of classes with useful methods. I see that some packages have useful methods that others don't, so I created my own package in which I combine the useful classes and methods of various packages and add my own methods. I don't have time to set up tests and meet the requirements to become a contributor on GitHub, but it would be great if I could collaborate with others on building packages like this.
Is there a system where users collaborate on building packages in a Wikipedia-style fashion, where anyone can make improvements and pitch in whatever they want in an open and free way? For example, I could add a few new methods; the next programmer might look at them, spot some potential problems, and improve them; another programmer might spare a bit of time to write tests for the methods and classes written by the previous programmer, and so on.
I realise this comes with some big security risks, but again, using Wikipedia as an analogy, most people who use it do so with the right intentions, and the result is that Wikipedia is a relatively trustworthy source of information. I assume the same principle would apply here.
Is there a system like this? It would of course be ideal if you could install the packages with composer (or whatever package manager for whatever language the package is written in).
I have been searching for a way to obtain a timestamp for when a package was either released for general use or possibly when first loaded on a local repository. Something in either Shell or Python would be ideal, but I'm open to other options at this point. I know packages support a changelog, but it looks like not all packages include a release date.
Thanks!
The answer depends on what exactly you are looking for, and that's not clear from the question. Before reproducible builds were introduced, the date a package was built could be retrieved from the raw ar members, for example:
ar tv pkgname_version_arch.deb
If you are looking for the date the package got accepted/uploaded into a specific repository, then the answer will depend on which repository and the software used to manage it. For Debian you can get the information from UDD, from the debian-devel-changes mailing list for maintainer uploads (but not the buildd uploads), or from the package tracker; other derivatives and distributions might have different interfaces, or none at all. For private repositories there may be publicly accessible logs available.
As you mention, the changelog can be used to find when the source package was prepared, but that might be wildly different from when it got built or even uploaded.
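If the changelog date is good enough for your purposes, it sits in the trailer line of each entry and is easy to parse in Python, as the question suggests. A small sketch (the maintainer line in the comment is made up, but real trailers follow the same " -- name <email>  date" shape defined by Debian policy):

```python
# Sketch: parse the "prepared on" date from a Debian changelog trailer line.
# Trailer lines look like:
#  -- Jane Maintainer <jane@example.org>  Thu, 02 Jan 2014 10:30:00 +0100
# Note this is when the entry was written, which can differ from the
# build or upload date.
import re
from email.utils import parsedate_to_datetime

def changelog_date(trailer_line):
    """Return the entry date as a datetime, or None for non-trailer lines."""
    match = re.search(r">\s{2,}(.+)$", trailer_line)
    return parsedate_to_datetime(match.group(1)) if match else None
```

The date uses the same RFC 2822 format as email headers, which is why the stdlib email.utils parser handles it.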
We have a Node.js/Express application on top of which we have implemented Snowplow analytics, and we are migrating away from Google Analytics. We now want to configure a JS tracker in the Node.js code. We are having difficulty choosing between the two available Node.js trackers.
My question is - what are the differences between the two snowplow-tracker-* npm modules? I understand that snowplow-tracker is a more detailed implementation with more abstraction. But what are the features or level of complexity one should look at when choosing one over the other?
I'm looking at :
Complexity of application
Performance overhead between the two npm packages
Any particular features excluded from snowplow-tracker-core that one might want to use
Thanks!!
I answered this on the user group. My answer:
The core module contains shared functionality used by the client-side JavaScript Tracker, the snowplow-tracker module, and the Segment.io integration. It isn't really intended to be used directly and excludes some fairly important functionality, like methods to actually send events. You should probably use the snowplow-tracker module, also known as the Node.js Tracker.
We are starting a new Node.js project. Our current database is MS SQL. We need to select a module and drivers to use. I'm trying to find a good way to compare all of these different tools without needing to install them individually and test them on our systems. I'll find the occasional blog post that compares a few of them, but I often find these articles terse; they will only say something like "tedious is lightweight". I'd like to know how lightweight it is compared to other drivers. Are there any benchmarks out there?
Aside from drivers, a good module is needed. There are several that interest me, such as sequelize, mssql, and seriate. Some of these can use similar drivers in their configuration. However, the modules themselves need metrics to compare them. Right now the best method seems to be scanning the documentation and the internet, gathering information about these modules piece by piece. It seems like npm should have a page that compares the different modules it offers. Keep in mind I'd like a source with quantifiable comparisons between these modules and drivers.
This question might be closed, as the answer is heavily opinion-based.
Anyway, what you could do is search here: https://nodejsmodules.org and then check how popular a particular module is via its npm download count; this is probably the best quick way to minimise the risk of picking the wrong module.
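Download counts are one of the few quantifiable metrics the asker wants, and they are queryable from the public npm downloads API (api.npmjs.org). A rough sketch; raw popularity is a crude proxy for quality, but it is at least a number you can compare:

```python
# Sketch: rank candidate npm modules by last-month download count using
# the public npm downloads API.
import json
from urllib.request import urlopen

def monthly_downloads(package):
    """Fetch the last-month download count for one package (network call)."""
    url = f"https://api.npmjs.org/downloads/point/last-month/{package}"
    with urlopen(url) as resp:
        return json.load(resp)["downloads"]

def most_popular(counts):
    """counts: {name: downloads} -> the name with the highest count."""
    return max(counts, key=counts.get)
```

For example, fetching counts for sequelize, mssql, and seriate and passing the resulting dict to most_popular would tell you which of the three is most widely used.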