Is creating a Docker container effectively a guarantee that some program can still be run far in the future (let us say 25 years)? - archive

Let's say I have a program, quite complex, relying on many packages / dependencies, and I want to make sure that I will still be able to run it all 25 years from now.
Is docker (or similar) containerization a way to (realistically) allow this? Or is this outside of the scope of what a docker container offers?
If this sort of containerization is not the way to go, any other suggestion?
Note:
If you believe that this question is outside of the scope of Stackoverflow, please feel free to comment about it and I will remove it. In this case, any suggestion of where I could post it?

Version EVERYTHING: the OS, the Docker image, and every dependency. In theory, yes, you can build something that lasts 25 years if you pin all of the things your Docker image needs in order to work. Disabling automated updates for any of those dependencies might help; it is not something I would normally recommend, but it does serve the goal of keeping the image working for 25 years.
Build REALLY STRONG documentation for your Docker image, otherwise maintenance will be a nightmare for future developers. Best of luck, my friend; honestly, I don't recommend doing this.
A quick example is DOOM: you can run it almost everywhere, so there is certainly a way to make your Docker image work after 25 years as well.
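To make the "version everything" advice concrete, here is a minimal sketch of one way to do it, assuming a Debian base image and a placeholder myapp tag (neither comes from the question): pin the base image by digest and archive the built image itself, so you are not betting on a registry still serving that tag in 25 years.

    # Resolve the digest of the base image, then reference it in the Dockerfile
    # as FROM debian@sha256:<digest> instead of a mutable tag
    docker pull debian:bookworm-slim
    docker inspect --format '{{index .RepoDigests 0}}' debian:bookworm-slim

    # Build with an explicit version tag and archive the binary image itself
    docker build -t myapp:1.0.0 .
    docker save myapp:1.0.0 | gzip > myapp-1.0.0.tar.gz

    # Years later: restore the image without relying on any registry
    docker load < myapp-1.0.0.tar.gz

Keeping the saved image (and ideally the installer for the container runtime itself) alongside the Dockerfile is what buys longevity; the Dockerfile alone assumes every upstream package source still exists.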

Related

How to create Salesforce incremental package.xml automatically?

Has anyone experimented with creating a Salesforce package.xml automatically for continuous integration? If there is any script or idea, please share.
As you know, an incremental package.xml helps to deploy only the modified files, rather than using a complete package.xml that redeploys unmodified files as well, which takes a lot of time.
Thanks in advance!
Tricky. And not really a programming-related problem; consider cross-posting this to https://salesforce.stackexchange.com/ or maybe even https://devops.stackexchange.com/
I don't think there's a clear answer; you'll have to experiment. Especially since you tagged "migration tool" (so the old-school, battle-tested but lower-priority Metadata API; it seems all the focus is now on the SFDX style of deployments). Do you use any version control (ideally Git), or do you hope to somehow compare source and target org, figure out the deltas, and deploy only them?
Remember that SF often gets better at detecting "no changes" with every release (how old is your migration tool's jar file?). For example, when I deploy my current project to an empty sandbox (an exact copy of prod, with no custom objects, code etc. yet), the initial deploy takes ~7 minutes, but any subsequent deploy with the same content or slight changes takes just 3-4. So try to calculate the time lost in the grand scheme of things and decide what gains you want to see and how much time you want to spend on experimenting and tweaking the solution.
You could look into dedicated deployment solutions such as Gearset, Autorabit, or Odaseva (I'm not affiliated with any of them and this list is not exhaustive). They are often capable of running a comparison for you.
There are several projects that try to compose a package.xml based on the Git diff(erence) between two commits. Of course you need to have a repo first and some discipline around it:
https://github.com/cloudsandbox/sfdx-gen-pack - I saw a presentation about it at Cloudforce London 2019
https://github.com/Accenture/sfpowerkit seems to have a "diff" command (disclaimer: I used to work for Accenture but am not affiliated now; I haven't worked on the tool and haven't used it personally)
https://cumulusci.readthedocs.io/en/latest/ this one seems interesting and mature. Built by SF employees; it is not an official tool, but it is used to CI-deploy the non-profit packages they build (maybe you have heard of the Non Profit Starter Pack, especially if you ever considered enabling Person Accounts). I'm not sure if they do delta deployments as such, but there seems to be a command that updates package.xml with the files in the repository, so it's a start? https://cumulusci.readthedocs.io/en/latest/tutorial.html#part-4-running-tasks
I'm not saying CumulusCI will be a silver bullet, but out of these three it seems to be the most actively maintained ;) It does sound like you'd have to get familiar with SFDX, though (if not the whole thing, then at least the commands to convert the project back and forth between the "source" (SFDX) structure and the Metadata API structure).
Answering my own question: I found that git diff master feature/vat | force-dev-tool changeset create vat works!
Thanks to Roman, who answered it in https://salesforce.stackexchange.com/questions/184332/is-there-a-pre-build-solution-for-generating-a-package-xml-from-a-git-repo
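For reference, a minimal sketch of the idea behind that pipeline (the branch names and the src/ directory are placeholders): plain git lists the metadata files that changed, and that delta is what force-dev-tool and the tools above turn into a package.xml.

    # List metadata files that differ between the two branches; this is the
    # delta that force-dev-tool packages into a changeset with its own package.xml
    git diff --name-only master feature/vat -- src/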

Comparing NodeJS drivers and modules for SQL

We are starting a new Node.js project. Our current database is MS SQL. We need to select a module and drivers to use. I'm trying to find a good way to compare all of these different tools without needing to install them individually and test them on our systems. I'll find the occasional blog post that compares a few of them, but I often find these articles to be terse; they will only say something like "tedious is lightweight". I'd like to know how lightweight it is compared to other drivers. Are there any benchmarks out there?
Aside from drivers, a good module is needed. There are several that interest me, such as sequelize, mssql and seriate. Some of these can use similar drivers in their configuration. However, the modules themselves need metrics to compare them. Right now the best method seems to be scanning the documentation and the internet, gathering information about these modules piece by piece. It seems like npm should have some page that compares the different modules it offers. Keep in mind I'd like a source that has quantifiable comparisons between these modules and drivers.
This question might be closed, as the answer is heavily opinion-based.
Anyway, what you could do is search here: https://nodejsmodules.org and then check how popular a particular module is by looking at its download count on npm. This is probably the best (quick) way to minimise the risk of picking the wrong module.
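As a quick sketch of that approach (the package names are just examples), npm's public downloads API can be queried from the command line to compare popularity:

    # Monthly download counts from npm's public downloads API
    curl -s https://api.npmjs.org/downloads/point/last-month/tedious
    curl -s https://api.npmjs.org/downloads/point/last-month/mssql
    curl -s https://api.npmjs.org/downloads/point/last-month/sequelize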

How do I play with Lynda etc. Postgres example files on MAMP without screwing up my current local environment? (beginner here)

So my company installed PostgreSQL on my computer, which I use, rarely and without understanding, for one specific function.
I'm trying to follow Lynda etc. tutorials to understand (Postgres)SQL better, since that's what we use, but all the tutorials ask students to reconfigure certain aspects of their system in order to follow along with example files (which I would really like to do).
Since I've messed up my dev env once already, I'm hesitant to touch anything that will cause issues with the local versions of our project.
I know this is an extremely wide-angle question with no easy answer, but if anyone has any general advice for playing with sample databases in MAMP Pro (or anywhere else) using Postgres without interfering with the servers I'm currently running, it would be a huge help.
I would recommend you use Vagrant and set up an isolated PostgreSQL instance. Here is a great wiki you can follow to do this.
UPDATE: Given your comment, an easy solution is to just back up your data and proceed with trying out the Postgres examples; you can always restore your data after you are done.
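If you go the backup-and-restore route, a minimal sketch looks like this (the user, database, and file names are placeholders):

    # Back up the existing database before experimenting
    pg_dump -U myuser -Fc mydb > mydb_backup.dump

    # Create a separate scratch database for the tutorial files
    createdb -U myuser lynda_playground

    # If something breaks, restore the original database from the backup
    pg_restore -U myuser -d mydb --clean mydb_backup.dump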

Is the Meanstack suitable for production?

I have been looking at the various Meanstack frameworks out on the net, and whilst I am impressed with what they achieve, I have one serious concern: the number of files used in a typical stack. meanstack.js uses over 15000 files, whilst the bmean example has a modest 1900 in comparison.
The question I am asking myself is whether I would be happy to put my trust in such a system from a production viewpoint. What happens when something goes wrong? How easy is it going to be to find the answer? You can almost bet that when your most important customer logs on, it is going to go haywire. Also, what happens when Angular version 2 comes along? It could require a complete rewrite, but by then the stack you're using has been customised and is difficult to change.
Am I getting overly concerned about the technology? My intended approach is to strip the client-side code out of the bmean example and rewrite it with my own; at least that way I know (and control) what goes on in the client. Do you think this is the correct way to proceed?
With most systems there is a bit of preparation required before going to production. The same is true with mean.io (using multiple CPUs, improved aggregation, caching, etc.).
The large number of files is essentially a product of the way npm handles dependencies: each module is able to define independent versions of the same dependencies, which creates a bit of bloat but at the same time allows a lot of flexibility in Node.js code (see the sketch at the end of this answer).
We currently have a number of mean.io projects in production phase and have been very happy with performance and the overall experience.
New releases of the project are scheduled every couple of months, upgrading should not be too much of a problem if you use the package system correctly.
Issues with the project are handled and managed through GitHub issues; additional support can be found on our IRC channel (freenode #mean_io) as well as on Facebook.
For commercial support, have a look at the support page.
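To see for yourself where the file count comes from, a quick sketch to run from a project root (lodash is just an example of a commonly duplicated dependency):

    # Count the files npm pulled in, then list how many copies of one
    # dependency exist across the tree
    find node_modules -type f | wc -l
    npm ls lodash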

migrator.net vs fluentmigrator vs migsharp

I am currently investigating possible options of a migration framework/tool. I like the idea of ruby migrations on which the above frameworks are based.
So I am asking for your experience, opinions and maybe a comparison between them. Are you using them in production?
Thanks for the responses. The goal of this question was to get a feeling for which tool is used most in the developer community, but it seems that migrations are not a hot topic here.
Anyway, I have decided to go with MigSharp, as the codebase seems to be pretty clean, it is quite easy to handle, and it has built-in support for MS SQL CE. The runner-up would have been FluentMigrator, for which I was not able to produce a working example for Compact Edition.
Cheers
I use FluentMigrator in production, and am a longtime contributor to FM. I think your question is too general; be more specific. Also, FM has a Google group which is fairly active if you want FM information.
FM is derived from migrator.net, as I recall. It uses a fluent syntax and supports multiple databases. We have taken some inspiration from Rails migrations, but it's definitely not a port. Worth checking out.
One thing I've learned is not to put your migrations in the same assembly as your app code. Separate them into a migration assembly, and use that for migrating your databases.
Also, you should always work on multiple environments to avoid problems with migrations run straight against production. I always have at least a development and production environment, and most of the time there is a testing environment as well.
I use mig#.
It works well, but you will need to have some guidelines for usage - as migrations can get complicated.
We use a sequence number at the end of our migrations rather than a date-time stamp. This is because we don't know when the date-time stamp was set (when they began the source code change-set, just before committing, or some time in between), and different developers could use different approaches.
Names such as Migration_0000034.cs give you plenty of space.
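A rough sketch of generating the next file name in that convention (the Migrations directory, the seven-digit padding, and a gap-free sequence starting at 1 are all assumptions, not part of the original answer):

    # Count existing migrations and create the next sequence-numbered file
    next=$(( $(ls Migrations/Migration_*.cs 2>/dev/null | wc -l) + 1 ))
    printf -v name 'Migration_%07d.cs' "$next"
    touch "Migrations/$name"

In practice the number would usually be chosen by hand or by the migration tool; the point is only that a zero-padded sequence sorts and reads predictably.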
At this point, I would stick with migrator.net. I like the promise of FluentMigrator, but it does not seem to have any more active development than migrator.net (see the issues and pull requests that have languished on their GitHub site).
There is also no easy way to do an ExecuteScalar(). I'd add it, but I don't want to create my own fork, and I see no reason that a pull request would actually land in master. (Execute.WithConnection is an Action, so it will fire on demand rather than when I need it to fire.)
So for me, I'm heading back to migrator.net.
