Comparing Node.js drivers and modules for SQL Server

We are starting a new Node.js project. Our current database is MS SQL Server, and we need to select a module and driver to use. I'm trying to find a good way to compare all of these different tools without installing them individually and testing them on our systems. I'll find the occasional blog post that compares a few of them, but these articles tend to be terse and only say something like "tedious is lightweight". I'd like to know how lightweight it is compared to other drivers. Are there any benchmarks out there?
Aside from a driver, a good module is needed. Several interest me, such as sequelize, mssql, and seriate, and some of them can use the same drivers in their configuration. However, the modules themselves need metrics to compare them. Right now the best method seems to be scanning the documentation and the web, gathering information about these modules piece by piece. It seems like npm should have a page that compares the different modules it offers. Keep in mind I'd like a source with quantifiable comparisons between these modules and drivers.
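Since no published benchmark seems to exist, a practical fallback is to measure on your own hardware. Below is a minimal, hedged sketch of a timing harness: it knows nothing about any particular driver; you plug in each candidate's query call (whatever function tedious, mssql, etc. expose for running a query) as the `op` argument and compare the numbers yourself.

```javascript
// Minimal benchmark harness sketch. `op` is any async function — for a
// driver comparison it would wrap that driver's query call. Timing uses
// process.hrtime.bigint() for monotonic nanosecond resolution.
async function bench(name, op, iterations = 100) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) {
    await op();
  }
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  return { name, iterations, elapsedMs, perOpMs: elapsedMs / iterations };
}
```

For example, `bench('mssql', () => pool.query('SELECT 1'))` — where `pool` and `query` stand in for whatever connection object and query method the driver under test actually provides. Warm the connection first so you measure query latency, not connection setup.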

This question might be closed because the answer is heavily opinion-based.
Anyway, what you could do is search here: https://nodejsmodules.org then check how popular a particular module is via its download count on npm. This is probably the best (quick) way to minimise the risk of picking the wrong module.
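The download-count check can be scripted against npm's public downloads API (api.npmjs.org), which returns JSON like `{ "downloads": ..., "package": ... }` per package. A small sketch, assuming Node 18+ for the built-in fetch:

```javascript
// Sketch: compare monthly npm download counts for candidate modules
// via npm's public downloads API.
const API = 'https://api.npmjs.org/downloads/point/last-month/';

function downloadsUrl(pkg) {
  return API + encodeURIComponent(pkg);
}

// Sort the API's JSON responses by their `downloads` field, descending.
function rankByDownloads(responses) {
  return [...responses]
    .sort((a, b) => b.downloads - a.downloads)
    .map(r => `${r.package}: ${r.downloads}`);
}

async function compare(pkgs) {
  const responses = await Promise.all(
    pkgs.map(p => fetch(downloadsUrl(p)).then(res => res.json()))
  );
  return rankByDownloads(responses);
}

// compare(['tedious', 'mssql', 'sequelize']).then(console.log);
```

Popularity is only a proxy for quality, but it is at least a quantifiable one.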

Related

PHP - Wikipedia style package repository system?

I started using a few general purpose utility packages which I integrate into my projects with composer via packagist. A good one I found is jbzoo/utils:
https://github.com/JBZoo/Utils
It has a group of classes with useful methods. I see that some packages have useful methods that others don't, so I created my own package in which I combine the useful classes and methods from various packages and add my own methods. I don't have time to set up tests and meet the requirements to become a contributor on GitHub, but it would be great if I could collaborate with others on building packages like this.
Is there a system where users collaborate in building packages in a wikipedia style fashion where anyone can go in there and make improvements and pitch in whatever they want in an open and free way? For example, I could add a few new methods, then the next programmer might take a look at the methods, see some potential problems and improve the methods. Another programmer may decide to spare a bit of time to write up tests for the methods and classes written by the previous programmer etc.
I realise this comes with some big security risks, but again, using Wikipedia as an analogy, most people who use it do so with the right intentions, and the result is that Wikipedia is a relatively trustworthy source of information. I assume the same principle would apply here.
Is there a system like this? It would of course be ideal if you could install the packages with composer (or whatever package manager for whatever language the package is written in).

Advice for components for a Multi Device program in Delphi Berlin

I have a Delphi 5 app with too many 3rd-party components to move to Delphi 10.1, so I am starting from scratch and need some advice from the experts out there.
It is basically a database program that used DBISAM with CSV importing, and I used Report Builder for building reports from the data. My goal is to create a multi-device application (Win64 and macOS). I thought Fast Reports would work, but I don't see it as an option for a multi-device program (even after downloading the Fast Reports FMX install from Embarcadero for Berlin). I was going to use IBLite for a small database, but again I don't see it installed. I was told by Embarcadero that these components would work for the multi-device app I had in mind.
Any suggestions on where to start? Thanks.
I do not know for sure, but you may not find a DBISAM driver for mobile platforms. You should also keep in mind that it is not a good idea to load a mobile device with CPU-intensive tasks. I would suggest a multi-tier approach: divide your application into several parts, so that you have a back-end server and a lightweight client implementing the UI for that server.
I also think that you do not need to start from scratch. You can improve the existing application step by step. First of all, you need to understand how to isolate your business logic from the UI. You can do that even in Delphi 5.
Sorry my answer is so general, but your question does not have enough details either.

Is the MEAN stack suitable for production?

I have been looking at the various MEAN stack frameworks out on the net, and whilst I am impressed with what they achieve, I have one serious concern: the number of files used in a typical stack. meanstack.js uses over 15,000 files, whilst the bmean example has a comparatively modest 1,900.
The question I am asking myself is whether I would be happy to put my trust in such a system from a production standpoint. When something goes wrong, how easy is it going to be to find the answer? You can almost bet that when your most important customer logs on, it is going to go haywire. Also, what happens when Angular version 2 comes along? It could require a complete rewrite, but by then the stack you're using has been customised and is difficult to change.
Am I getting over-concerned about the technology? My intended approach is to strip the client-side code out of the bmean example and rewrite it with my own; at least that way I know (and control) what goes on in the client. Do you think this is the correct way to proceed?
With most systems there is a bit of preparation required before going to production. The same is true with mean.io (using multiple CPUs, improved aggregation, caching, etc.).
The large number of files is essentially a product of the way npm handles dependencies. Each module can define independent versions of the same dependencies, creating a bit of bloat but at the same time allowing a lot of flexibility in Node.js code.
We currently have a number of mean.io projects in production phase and have been very happy with performance and the overall experience.
New releases of the project are scheduled every couple of months, upgrading should not be too much of a problem if you use the package system correctly.
Issues with the project are handled and managed through GitHub issues; additional support can be found on our IRC channel (freenode #mean_io) as well as on Facebook.
For commercial support, have a look at the support page.

Software packages to create graphs or charts from a database full of numbers?

I have a device which generates a bunch of statistics once per second. All of the statistics are stored in a PostgreSQL database on an Ubuntu server.
I'd like to create a web interface that prompts the user for a time range and which values to graph. I'm also thinking this kind of thing is common when people have databases full of numbers, so it must already exist. The problem is I don't know what terms to google to find relevant software packages. So far, the only two I've found are php5-rrd and Carbon/Graphite.
The PHP5-RRD solution seems simple enough, though I'm worried I'll be needlessly reinventing the wheel. Can anyone recommend other similar software packages that can help generate a bunch of "live" charts or graphs with a web front end?
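Whatever package ends up drawing the charts, the server-side step is the same: filter the timestamped samples to the user's chosen range and downsample them so the chart stays light. A minimal sketch of that step, assuming samples shaped like `{ t: epochMillis, value: number }` (the shape is an illustration, not any particular tool's format):

```javascript
// Filter samples to [from, to) and average them into fixed-width
// buckets of bucketMs milliseconds, returning one point per bucket.
function downsample(samples, from, to, bucketMs) {
  const buckets = new Map();
  for (const s of samples) {
    if (s.t < from || s.t >= to) continue;
    const key = Math.floor((s.t - from) / bucketMs);
    const b = buckets.get(key) || { sum: 0, count: 0 };
    b.sum += s.value;
    b.count += 1;
    buckets.set(key, b);
  }
  return [...buckets.entries()]
    .sort(([ka], [kb]) => ka - kb)
    .map(([key, b]) => ({ t: from + key * bucketMs, value: b.sum / b.count }));
}
```

Tools like Graphite and RRD do exactly this kind of consolidation for you (RRD's "round robin" archives are pre-aggregated buckets), which is a large part of what you would otherwise reinvent.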
Try this d3.js tutorial. Depending on your needs it might solve your problem with a way simpler solution than whatever you were thinking.
Edit: if you want to learn the very basics of d3.js, I recommend Scott Murray's tutorials.
Depending on your needs, you can try:
BIRT
Saiku
Shiny (RStudio)
Or you can google "charting library" or try something from this article.
Instead of storing things in a heavy PostgreSQL database, I did eventually change my app to use RRD (round robin database). Lots of ways to easily get and store information in RRDs.
# on Ubuntu:
sudo apt-get install rrdtool
Once I had my RRD files, it was trivial to use the usual RRD tools combined with PHP and the free Google Charts to generate the many different graphs I needed. Google Charts by itself is an amazing project worth highlighting: https://developers.google.com/chart/
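The glue between the stored numbers and Google Charts is a small data-shaping step: Google Charts' `arrayToDataTable` takes a header row followed by data rows. A sketch of that conversion, assuming `(timestamp, value)` pairs as produced by an `rrdtool fetch` or a SQL query:

```javascript
// Turn [timestampMillis, value] pairs into the array-of-rows shape
// google.visualization.arrayToDataTable accepts: a header row first,
// then one row per sample, with the timestamp rendered as a string.
function toDataTable(label, rows) {
  return [
    ['Time', label],
    ...rows.map(([ts, value]) => [new Date(ts).toISOString(), value]),
  ];
}
```

The resulting array is what you would hand to the Google Charts loader in the browser; the ISO-string timestamps here are just one choice, since the library also accepts Date objects.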

migrator.net vs fluentmigrator vs migsharp

I am currently investigating possible options of a migration framework/tool. I like the idea of ruby migrations on which the above frameworks are based.
So I am asking for your experience, opinions and maybe a comparison between them. Are you using them in production?
Thanks for the responses. The goal of this question was to get a feeling for which tool is used most in the developer community, but it seems that migrations are not a hot topic here.
Anyway, I have decided to go with MigSharp, as the codebase seems pretty clean, it is quite easy to work with, and it has built-in support for MS SQL CE. The runner-up would have been FluentMigrator, with which I was not able to produce a working example for Compact Edition.
Cheers
I use FluentMigrator in production and am a longtime contributor to FM. I think your question is too general; be more specific. Also, FM has a Google group which is fairly active if you want FM information.
FM is derived from migrator.net, as I recall. It uses a fluent syntax and supports multiple databases. We have taken some inspiration from Rails migrations, but it's definitely not a port. Worth checking out.
One thing I've learned is not to put your migrations in the same assembly as you app code. Separate them into a migration assembly, and use that for migrating your databases.
Also, you should always work on multiple environments to avoid problems with migrations run straight against production. I always have at least a development and production environment, and most of the time there is a testing environment as well.
I use mig#.
It works well, but you will need some guidelines for usage, as migrations can get complicated.
We use a sequence number at the end of our migration names rather than a date-time stamp. This is because we don't know when the timestamp was set (when the developer began the change-set, just before committing, or some time in between), and different developers could use different approaches.
Names such as Migration_0000034.cs give you plenty of space.
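The sequence-number convention is language-agnostic; here is a small JavaScript illustration of it (the question is .NET, but the ordering logic is the same): extract the zero-padded number from each migration filename and sort by it, so execution order is explicit and deterministic regardless of when each file was authored.

```javascript
// Order migration files by the zero-padded sequence number embedded
// in names like Migration_0000034.cs, rather than by timestamp.
function orderMigrations(filenames) {
  const pattern = /^Migration_(\d+)\.\w+$/;
  return filenames
    .map(name => ({ name, seq: Number(name.match(pattern)[1]) }))
    .sort((a, b) => a.seq - b.seq)
    .map(m => m.name);
}
```

Zero-padding matters for humans browsing the directory; the numeric parse above makes the runner itself indifferent to padding width.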
At this point, I would stick with migrator.net. I like the promise of FluentMigrator, but it does not seem to be under any more active development than migrator.net (see the issues and pull requests that have languished on their GitHub site).
There is also no easy way to do an ExecuteScalar(). I'd add it, but I don't want to create my own fork, and I see no reason a pull request would actually land in master. (Execute.WithConnection is an Action, so it will fire on demand rather than when I need it to fire.)
So for me, I'm heading back to migrator.net.
