I was recently interviewed by a technical architect, and he mentioned that his group has begun to consolidate all domain-based database calls into packages. He said this had significant advantages for sharing code and for doing TDD (with should.js). Is this a recommended practice? What are the advantages/disadvantages of using packages to encapsulate such resource I/O? Please include links, if available. Thanks.
You can use npm packages to share code between projects, since that is exactly what packages are for. Starting out with a package, however, is probably a bad idea: most code starts life as a module, and you extract a package later if the need arises.
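For illustration, a minimal package.json for such an extracted package might look like this (all names are hypothetical):

{
  "name": "my-company-db-access",
  "version": "1.0.0",
  "description": "Shared domain-based database calls",
  "main": "index.js"
}

Consumers then install it from a private registry or a git URL and require() it like any other dependency.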
I am trying to deploy my own cluster using the DC/OS CLI installation. Mesosphere offers huge support here, as there are many ready-to-install packages provided in the Mesosphere Universe repo (https://github.com/mesosphere/universe).
However, I would like to go one step further. I am trying to install my own applications on my cluster using the DC/OS CLI installation process. To do this, as far as I understand, I need to either (i) make my application recognizable to the system repo (like the other repo packages provided in Universe) or (ii) make a new image that contains all my applications and modify the DC/OS script to make the installation possible.
Unfortunately, my modest knowledge falls short here, and I could not find a clear answer anywhere.
Therefore, I would like to ask:
1) Is it possible to do what I am trying to do?
2) If the answer is YES, how exactly should I do it? My goal is to install my awesome apps for my own purposes, not to publish them. But to add my apps as a repo in Universe, it seems like I have to publish them.
It is possible! :)
Please follow these instructions
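For context: DC/OS lets you add your own Universe-style repo to the cluster, so nothing has to be published to the public Universe. A minimal sketch with the DC/OS CLI (the repo name and URL are placeholders, and subcommands may vary by DC/OS version):

dcos package repo add my-universe https://universe.example.com/repo
dcos package install my-awesome-app

The repo itself is just a server exposing package definitions in the Universe format, so it can stay entirely private.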
I am from a Microsoft background, where I always kept server and client applications in separate projects.
Now I am writing a client-server application with Express as the back end and React as the front end. Since I am totally new to these two tools, I would like to know:
What is the general practice: keeping the Express (server) code base and the React (client) code base as separate projects, or keeping the server and client code bases together in the same project? I could not think of the pros and cons of either approach.
Your recommendations are welcome!
PS: Please do not mark this question as opinionated; I believe I have a valid reason to ask for recommendations.
I would prefer keeping the server and client as separate projects, because that way we can easily manage their dependencies, dev dependencies, and unit test files.
Also, if we need to move to a different front-end framework at a later point, we can do that without disturbing the server.
In my opinion, it's probably best to have separate projects here. But you made me think a little about the "why" for something that seems obvious at first glance, but maybe is not.
My expectation is that a project should mostly be organized one-to-one around building a single type of target, whether that is a website, a mobile app, or a backend service. A project is usually an expression of all the dependencies needed to build or otherwise output one functioning, standalone software component. Build and testing tools in the software development ecosystem are organized around this convention, as are industry expectations.
Even if you could make the argument that there are advantages to monolithic projects that generate multiple software components, you are going against people's expectations and that creates the need for more learning and communication. So all things being equal, it's better to go with a more popular choice.
Other common disadvantages of monolithic projects:
greater tendency for design to become tightly coupled and brittle
longer build times (if using one "build everything" script)
takes longer to figure out what the heck all this code in the project is!
It's also quite possible to make macro-projects that work with multiple sub-projects, and in a way have the benefits of both approaches. This is basically just some kind of build script that grabs the output of sub-project builds and does something useful with them in a combination, e.g. deploy to a server environment, run automated tests.
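As a rough illustration (paths and script names are hypothetical), such a macro build script can start out as simple as this:

#!/bin/sh
# Build each sub-project, then combine the outputs for deployment.
set -e
(cd client && npm install && npm run build)
(cd server && npm install && npm test)
cp -r client/build server/public   # serve the React bundle from Express
(cd server && npm start)

Anything beyond this quickly becomes a job for a CI server, which is where tools like Jenkins (mentioned below) come in.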
Finally, all devs should be equipped with tools that let them hop between discrete projects easily. If doing so is painful, it's best to solve that pain without resorting to a monolithic project structure.
Some examples of practices that help with developing React/Node-based software that relies on multiple projects:
The IDE easily supports editing multiple projects. And not in some cumbersome "one project loaded at a time" way.
Projects are deployed to a repository that can be easily used by npm or yarn to load in software components as dependencies.
Use "npm link" to work with editable local versions of sub-projects all at once. More generally, don't require a full publish and deploy action to have access to sub-projects you are developing along with your main React-based project.
Use automated build systems like Jenkins to handle macro tasks like building projects together, deploying, or running automated tests.
Use versioning scrupulously in package.json. Let each software component have its own version number and follow the semver convention, which indicates when changes may break compatibility.
If you have a single team (or developer) working on the front-end and back-end software, then set the dependency versions in package.json to always pull the latest versions of sub-projects (packages).
If you have separate teams working on the front-end and back-end software, you may want to relax the dependency versions to major version numbers only, using a semver range in package.json. (Basically, you want some protection from breaking changes; see the example after this list.)
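A minimal sketch of the "npm link" workflow (directory and package names are hypothetical):

cd ~/dev/shared-components   # the sub-project you are editing
npm link                     # register it globally as an editable package
cd ~/dev/main-app            # the project that consumes it
npm link shared-components   # symlink it into node_modules

And a semver range in the consuming project's package.json that accepts patch and minor updates but not a new major version:

"dependencies": {
  "shared-components": "^1.0.0"
}

The caret range gives you bug fixes and new features automatically while protecting you from breaking changes.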
We have a Node.js/Express application on top of which we have implemented Snowplow analytics, and we are migrating away from Google Analytics. We now want to configure a JS tracker in the Node.js code. We are having difficulty choosing between the two available Node.js trackers.
My question is - what are the differences between the two snowplow-tracker-* npm modules? I understand that snowplow-tracker is a more detailed implementation with more abstraction. But what are the features or level of complexity one should look at when choosing one over the other?
I'm looking at:
Complexity of application
Performance overhead between the two npm packages
Any particular features excluded from snowplow-tracker-core that one might want to use
Thanks!!
I answered this on the user group. My answer:
The core module contains shared functionality used by the client-side JavaScript Tracker, the snowplow-tracker module, and the Segment.io integration. It isn't really intended to be used directly and excludes some fairly important functionality, like methods to actually send events. You should probably use the snowplow-tracker module, also known as the Node.js Tracker.
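For reference, a minimal sketch of the snowplow-tracker module's documented emitter/tracker pattern (the collector endpoint and names are placeholders; check the README of the version you install, as signatures may differ):

var snowplow = require('snowplow-tracker');

// An emitter batches events and sends them to your collector
var e = snowplow.emitter(
  'collector.example.com', // collector endpoint (placeholder)
  'http',                  // protocol
  8080,                    // port
  'POST',                  // method
  5                        // buffer size: flush once 5 events are queued
);

// A tracker wraps one or more emitters
var t = snowplow.tracker([e], 'myTracker', 'myApp', false);

t.trackPageView('http://www.example.com');

The core module, by contrast, only builds event payloads; it has no emitter, which is why you cannot actually send events with it alone.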
If a Chef recipe (or any of its cookbook dependencies) uses the package resource without specifying a version, then the latest available version of the package is installed. If you want to control and test exactly what you are installing, then you must always supply the package version. What can you do when the cookbooks that you depend on do not take the same precautions?
See, for example, the default recipe in the ark cookbook. If this recipe is used on a production server, it could install packages that have not been tested. This is just one example (with over 5 million downloads), so I am wondering how people get around this problem.
What can you do when the cookbooks that you depend on do not take the same precautions?
I don't think there is a simple answer. This is basically "the Chef way" ...
(Actually, I would suggest that hard-wiring package versions could do more harm than good. One of the good things about using a (good) distribution's package repo is that it regularly releases updates with patches for security issues and bugs. But if you wired fixed package versions into your recipes or roles/nodes, you would prevent any such patches from propagating to your systems.)
However, if it is critical to you that package versions are stable then maybe ...
Clone and hack the cookbooks in question to use specific versions. (Actually, you probably need to do this anyway, to avoid being bitten by unstable cookbooks!)
Use a distro (such as RHEL or its "clones") that values long-term stability, and only pushes out package updates that are "really important".
Create your own private mirror of the distro's package repos with only the "good" versions of the critical packages in it.
Modify Chef so that the package resources pick/install specified versions by default. (I don't imagine this would be easy. But if you did come up with a good solution at this level, it would be a pretty useful addition to Chef! IMO.)
UPDATE
Actually, there is a way to do this for (at least) Debian-based systems; see the apt cookbook and in particular the references to "pinning".
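For instance, here is a sketch using the apt cookbook's apt_preference resource (the package name and version are placeholders):

# Pin nginx to one version family via apt preferences
apt_preference 'nginx' do
  pin 'version 1.10.3-*'
  pin_priority '1001'
end

A pin priority above 1000 even allows apt to downgrade to the pinned version.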
Or with yum, you could "lock" particular versions using "yum versionlock ..." as described here: https://www.zulius.com/how-to/yum-install-specific-package-version/
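In sketch form (the package spec is a placeholder):

yum install -y yum-plugin-versionlock
yum versionlock nginx-1.10.3-1.el7

After this, a plain "yum install nginx" (or an unversioned Chef package resource) will stick to the locked version.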
UPDATE - 2
Another possible trick would be to "inject" a version attribute into the "unsafe" package resources. Something like this:
# first, include_recipe the recipe that declares 'package "foo"' without
# a version attribute
# then look the resource up in the resource collection and set its version ...
r = resources("package[foo]")
r.version "1.2.3"
With a little ingenuity, one could create a "package version lock" recipe that pulled the versions from a data bag and dealt with missing-resource exceptions and version attributes that were actually provided. But I don't know if this is "A Good Idea" (tm).
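A rough sketch of that idea (the data bag name and layout are hypothetical):

# Pin each package named in a data bag item, skipping packages
# that have no matching resource in the resource collection.
versions = data_bag_item('package_versions', node.chef_environment)
versions.to_hash.each do |pkg, ver|
  next if pkg == 'id' # skip the data bag item's id field
  begin
    resources("package[#{pkg}]").version(ver)
  rescue Chef::Exceptions::ResourceNotFound
    Chef::Log.warn("no package[#{pkg}] resource to pin")
  end
end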
Chef's package resource uses the package manager of the node's operating system (apt, yum, etc.). These tools install the most recent version available from the configured repositories, which is why Chef's package resource also installs that version.
What the ark cookbook does is download the source code and then compile it; with that approach you can obviously specify the version to install (through the URL you pass in).
So it depends on your actual needs. If you want to install the version that is available through the distro's (or your own) package repo, then it's totally fine (and that's what most cookbooks do). If you want to compile everything from source (where you usually have the option to specify the version), the number of Chef cookbooks supporting this is lower.
Personally, I'd suggest that you set up your own apt/yum/whatever repo for the software for which you have specific version requirements.
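For completeness, pinning a version in your own recipe is straightforward (the version string below is a placeholder; use one that actually exists in your repo):

# Install one exact, tested version instead of whatever is newest
package 'nginx' do
  version '1.10.3-0ubuntu0.16.04.2'
  action :install
end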
In short: I don't manage this.
In a more complete answer:
Every distro release goes through a validation phase before new packages are published; I'm confident in that process, and it helps me stay in sync with security fixes.
As far as I know, all package managers take care not to upgrade a package in a breaking way if it is a dependency of a package installed manually; again, you have to trust the package maintainers on this.
I.e.: a package resource without a version won't update make or gcc if either is a dependency of a package you installed with a fixed version.
For example, under Ubuntu, if you set the nagios package to manual, apt will never update the libc package across a breaking change, since that could break the installation of other packages whose dependencies would no longer be satisfied.
If you're absolutely concerned about it, you have some choices:
Rewrite each package resource to use a fixed version of the package.
Fork any offending cookbook to fix those issues (you can write a foodcritic rule to help you detect package resources without specific versions).
Have your own repos, stable and testing; move packages into the stable repo once tested, and use the testing repo in your staging/QA environment.
Option 3 is the most conservative, as you choose what goes into the stable repo and it won't change magically. The drawback is that you'll have to manage security fixes yourself.
Hope this helps.
We had the same issue in our cookbook. So we decided to use data bags.
Data bags can be easily changed, for example:
knife data bag from file my_data_bag host1.json
OR
knife data bag edit my_data_bag host1
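The data bag item itself is just JSON; a minimal sketch matching the recipe below (every item needs an "id" field):

{
  "id": "host1",
  "version": "1.2.3"
}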
Your recipe will then be able to read the specified version from the data bag using code like this:
my_bag = data_bag_item('my_data_bag', 'host1')
Chef::Log.info("You have changed the version to: #{my_bag['version']}")

package 'java' do
  version my_bag['version']
  action :install
end
So, finally, you don't need to modify the cookbook or the recipe. All you need to do is put the version in the data bag.
I started a Dart project and now I need some functionality that is not available in the Dart API Reference. I was advised to use a package from pub.dartlang.org, and now I am browsing through pub.
Previous experience with JavaScript libraries tells me that quality and support can vary wildly between libraries. Therefore I am a bit reluctant to use packages from pub. How would I know which packages are of good quality, and whether a package will be updated when there are breaking changes in Dart?
Therefore I would like to know:
Is there a way to know which packages on pub.dartlang.org are safe to choose for a long-term project?
Some questions related to this:
Will packages where "Dart team" is the author be supported for a long time?
Should I prefer packages where the uploaders have @google.com in their email address?
Is there a list of google-supported packages? (I suppose polymer would be on it)
Is google currently monitoring the quality of the pub packages?
Kind regards,
Hendrik Jan van Meerveld
You are correct that the quality of packages can vary on pub or any other package repo. Here are a few things you could use to evaluate the quality of a package:
Is the package actively maintained?
How many active committers does it have?
How many people have starred or forked it on GitHub?
How much use do you think it is getting? Are there questions about it on StackOverflow or other mailing lists?
To answer your specific questions:
You can reasonably expect "Dart team" packages to be supported.
There isn't a list of officially Google-supported packages. Just look for packages supported by the Dart team if you're looking for packages created by members of the Dart project.
The Dart project doesn't currently have any way of ranking Pub packages.
You can see a list of Dart-team-developed packages on the Dart API page. Any package there whose name is not prefixed with "dart:" is a library that has been developed and supported by the Dart team. I would definitely prefer a library developed by the Dart team or someone from Google.
If the source repo for the package is available publicly (e.g. on GitHub), you can view the frequency of commits and the responsiveness of the author to issues/pull requests. For instance, you can easily tell that StageXL is a well-maintained library by taking a look at its GitHub repo: 550+ commits, new commits within the last couple of weeks, code accepted from other contributors, and almost 50 closed issues.
Bob Nystrom has talked about a ranking mechanism for pub in the past (he recently posted some ranking results that you can see here). Once a ranking system is in place, you will be better able to choose between, for instance, two XML libraries.