How do I run a Docker image on a DigitalOcean droplet?

Caveat that Docker is completely new to me and I may be making glaring errors in the configuration that I'm simply not aware of.
My goal is to have a droplet on DigitalOcean doing two things: pulling the image from a repo when it is modified, and running the container.
The container will need to run a React application. This should also pull from a repository on change.
I currently have a Docker image for my React project, and the questions I'm trying to answer are:
Docker image pull on droplet:
Pull an image from a repo on a regular schedule
Restart the image
React application pull on droplet:
Pull a version from a repo on a regular schedule
Restart the application
It occurs to me that pulling the version from the repo could be achieved with a cron job. It's been a long time, but I could probably figure that out.
I realise this question provides few details. I'm still trying to get my head around many concepts here, and I find that a lot of the documentation doesn't quite provide the answers I need in whole, and where it does in part, the pieces are strewn across many pages. Any help, or a pointer in the right direction, is greatly appreciated.

You can use Watchtower to do exactly this:
Pull an image from a repo on a regular schedule
Restart the image
Full documentation here: Watchtower. Go to the Arguments section to view the scheduling arguments.
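Something like this should do it (a minimal sketch; the containrrr/watchtower image and the schedule below are assumptions to adapt):

    # run Watchtower as a container with access to the Docker socket;
    # it polls the registry and recreates containers when their image changes
    docker run -d \
      --name watchtower \
      -v /var/run/docker.sock:/var/run/docker.sock \
      containrrr/watchtower \
      --schedule "0 0 4 * * *"   # 6-field cron expression: every day at 04:00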
I don't know why you want to pull the project from the repo while running the Docker image, but for that you can use Jenkins for CI/CD on the DigitalOcean server.
You just need some basic tutorials to do this:
Pull a version from a repo on a regular schedule
Restart the application

I think first you need to clarify some basic concepts.
An image is like a template.
A container is an instance of an image.
Images can't be restarted, because they are not running instances of anything; containers can be restarted, because they are running a specific version of an image.
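To illustrate with the Docker CLI (the image and container names here are placeholders), the manual update cycle replaces the container rather than "restarting the image":

    docker pull myrepo/myapp:latest          # fetch the new image version
    docker stop myapp && docker rm myapp     # remove the old container
    docker run -d --name myapp -p 80:80 myrepo/myapp:latest   # new container from the new image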
Also, I think updating your environment from a cron job is a bad approach, because what would happen if you pushed a wrong image by accident? The whole system would fail. So, IMHO, I strongly recommend you don't do that; better yet, do it through a tool like Jenkins, GitHub Actions, GitLab Pipelines, and so on, and follow CI/CD best practices.

Related

AWS CodePipeline: rebuild only the affected monorepo apps

I have an Nx monorepo with 2 React applications and a shared library between them:
-apps
  -app1
  -app2
-libs
  -global files for both apps
I have them both deployed on AWS CodePipeline with an S3 bucket, and they share one monorepo repository. The main issue is that whenever I push some changes to the repo, no matter whether they are in the libs (shared) or in an app itself, the pipeline rebuilds all of the applications I have. My expected result is: if I change something in the libs, rebuild all projects, because the change affects them all; but if I make a change in app1 that doesn't affect app2, AWS should rebuild only app1.
I read a lot of posts and landed on Lambdas and Lerna, but everything looks pretty complicated since I am new to AWS.
I landed on an image showing that I need to use Lambda functions to check which part of the repo changed and determine which pipeline to rebuild. I would be really glad if someone simplified things for me so I can find an easier solution, or if someone who has dealt with this problem could help me find one.
If you use CodePipeline/CodeBuild with a self-created build-server container image that includes Nx, you don't need that logic. In that scenario, Nx inside the build server detects the changes and builds only what is needed. Obviously you have to use EFS etc. for persistence.
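As a rough sketch of what that build step can run (the base ref is an assumption; point it at whatever commit you last built):

    # let Nx compute which projects are affected by the latest changes
    # and build only those, instead of rebuilding both apps every time
    npx nx affected --target=build --base=origin/main~1 --head=HEAD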

Making changes to flask app without db.drop_all(), db.create_all()

I have a Flask app that is deployed on Google's App Engine. I have noticed a minor bug and I would like to fix it, but my database is already populated.
How can I make this minor code change and push/deploy back to my app without losing all my data? (This is probably a basic question, but I'm not finding much: all the tutorials online are focused on creating and deploying the app, not updating it.)
Thus far, I have been dropping and re-creating the tables whenever I redeploy, mostly out of ignorance. Here are the steps I have followed:
1. make the change in my app
2. commit and push changes to Bitbucket source code
3. in Google Cloud SDK: git pull
4. in Google Cloud SDK: gcloud app deploy
These steps result in an empty database because the directory I am pushing from my local computer has an empty database. Is this where I should be using git merge?
Is this a database "migration" or is this a "git merge"? I'm not sure what the right terms are to use to research this further. Thanks.
There are a couple of angles to your question. I'm going to try to give you some information, but let me warn you, this isn't going to be a trivial change to your workflow, you'll have to change some things.
First of all, based on the way you worded your question I get the idea that you commit your database to git along with your code. If I got this right, then this is something that you need to stop doing. The database is not code, so it should not be committed to source control.
You should have a completely independent database on each installation of your application. For example, you will have a database on your own machine to do development. You will also need another database in your gcloud deployment. You may need more databases if you have other uses for your application. A very common third database for many people is one that is used for automated tests, which could also be located in your local development machine, but is not the same database that you use for day to day development.
To make changes to your database schema you will not drop and recreate tables anymore, that is clearly something that you already realized that needs an improvement. A good approach to make these changes is to use a database migration framework. These tools allow you to generate short scripts that make these changes to the database in a more focused way, without destroying and recreating everything, and for that reason, the data is in general not lost. For Flask-SQLAlchemy, the best option for database migrations is Flask-Migrate, which is a lightweight wrapper around the Alembic migration framework. (I might be biased here as I'm the author of the Flask-Migrate extension!).
Documentation for Flask-Migrate: https://flask-migrate.readthedocs.io/en/latest/.
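As a rough sketch of the workflow (assuming Flask-Migrate is wired into the app with Migrate(app, db) and FLASK_APP is set), each schema change becomes a small script that is applied in place instead of dropping the tables:

    pip install flask-migrate
    flask db init                          # one-time setup: creates the migrations/ directory
    flask db migrate -m "describe change"  # auto-generate a migration from the model changes
    flask db upgrade                       # apply it to the database, keeping the data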

How to develop a React app via online development

I'm just curious about this situation creating an app with React JS. Is there any way to build directly on the hosting cPanel, not on localhost, during development? I don't know if this question is right, I'm new to this. But what about when you're done developing locally, then build and upload to the server: if there are small changes to the application, you can't change them directly on the server because the code is bundled and minified. I tried to search on Google and watch tutorials but couldn't find it. I know there's nothing wrong with building locally; however, I like the point that while I'm building I know it works well and can see it live, and then if there is a small change I could change it directly.
Apologies for my curiosity. Thanks in advance for your ideas and for correcting me.
I'm not sure that React requires bundling; it is not so big by itself. One useful way you can do it: build your React app locally, create a git repository and push the build there, then pull it to your server by connecting to the server with SSH.
This way may require some installation on the server side, again over the SSH connection. You can search for the details of the approach I'm suggesting.
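A minimal sketch of that flow (the repo, server user, and paths are placeholders):

    # on your machine: build, commit, and push (the build output dir depends on your setup)
    npm run build
    git add -A && git commit -m "update build"
    git push origin main

    # on the server, over SSH: pull the new build into the web root
    ssh user@your-server "cd /var/www/myapp && git pull"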
Appreciating your curiosity, I can think of two possible (not at all recommended though) solutions.
1. Dump JSX
React applications require a build process primarily because of JSX syntax, which exists for developer convenience. If there is no JSX in your code, there is no need to build. So, this JSX:
    return (
      <h1>Greetings, {this.props.name}!</h1>
    );
should be written as this plain JS:
    return React.createElement('h1', null, 'Greetings, ' + this.props.name + '!');
2. Set up a development environment on the server
This is a risky one; there are possible security issues.
It's like having a centralized code base on the server that anyone with access can modify.
Here, you can edit files and run the build task directly on the server.
Notes:
Today's basic development flow is code -> build -> deploy. Better to stick with it for serious applications.

Deploying AngularJS + Sinatra to AWS

I have an AngularJS site consuming an API written in Sinatra.
I'm simply trying to deploy these 2 components together on an AWS EC2 instance.
How would one go about doing that? What tools do you recommend? What structure do you think is most suitable?
Cheers
This is based upon my experience of utilizing the HashiCorp line of tools.
Manual: Launch an Ubuntu image, gem install sinatra, and deploy your code. Take a snapshot for safekeeping. This one-off approach is good for a development box, to iron out the configuration process. Write down the commands you run and any options you may need.
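As an illustrative sketch of those manual steps on a fresh Ubuntu instance (package names and paths are assumptions):

    sudo apt-get update
    sudo apt-get install -y ruby ruby-dev build-essential git
    sudo gem install sinatra
    git clone https://github.com/you/your-api.git /srv/your-api   # placeholder repo
    cd /srv/your-api
    ruby app.rb -o 0.0.0.0 -p 4567   # bind Sinatra to all interfaces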
Automated: Use the Packer EC2 Builder and Shell Provisioner to automate your commands from the previous manual approach. This will give you a configured AMI that can be launched.
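Once the template encodes those commands, baking the image is a single command (the template name is a placeholder):

    packer build sinatra-ami.json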
You can apply different methods of getting to an AMI using different toolsets. However, in the end, you want a single immutable image that can be deployed repeatedly.

How to launch an app from the web/cloud

I have developed an app with Twilio which I would like to run from the cloud. I tried learning about AWS and Google App Engine but am quite confused at this stage.
I have 2 questions which I hope to get your help on:
1) How can I store my scripts and database in the cloud? Right now, everything is running on my local machine, but I would like to transfer the scripts and db to another server and run my app at a predetermined time of day. What would be the best way to do this?
2) How can I write a batch file to run my app at a predetermined time of day in the cloud?
I understand this does not have code, but I really hope someone can point me in the right direction. I have spent lots of time trying to understand this myself but am still unsure. Thanks in advance.
Update: the application is a Twilio app that makes calls to people; the script simply applies an algorithm to make the calls in a certain fashion, and the database is a MySQL db that provides the details of the people to be called.
It is quite difficult to provide an exact answer without understanding what the application is, what the DB is, or what the script you wish to run does.
I can give you a couple of ideas that might be helpful in such cases.
OpsWorks (http://aws.amazon.com/opsworks/) is a managed service for managing applications. You can define your stack (multiple layers like web, workers, DB...) and which Chef recipes should run at various points in the life of the instances in each layer (startup, shutdown, app deployment or stack modification...). Then you can use the ability to add instances to each layer on specific days and hours to implement the functionality of running at predetermined times, as you requested.
In such a solution you can either have some of your instances (like the DB) always on, or even bootstrap them using the Chef recipes every day, restoring from a snapshot on start and creating a snapshot on shutdown.
Another AWS service that you can use is Data Pipeline (http://aws.amazon.com/datapipeline/). It is designed to move data periodically between data sources, for example from a MySQL database to Amazon Redshift, the data warehouse service. But you can also use it to trigger arbitrary shell scripts (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-shellcommandactivity.html) and schedule them to run under various conditions, like every hour/day or at specific times (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-concepts-schedules.html).
A simple path here would be just to create an EC2 instance in AWS, and put the components needed to run your app there. A thorough walk through is here:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/get-set-up-for-amazon-ec2.html
Essentially you will create an EC2 virtual machine, which you can for most purposes treat just like any other Linux server. You can install MySQL on it, copy your script there, and run it. Of course whatever container or support libraries your code requires will need to be installed as well.
You don't say what OS you are using locally, but if it is Mac or Linux, you should be able to follow almost the same process to get your script running on an EC2 instance that you used on your local machine.
As you get to know AWS, there are sophisticated services you can use for deployment, infrastructure orchestration, database services, and so on. But just to get started running a script from a virtual machine should be pretty straightforward.
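A rough sketch of that path, assuming an Ubuntu instance and a Python script (the key, host, and file names are placeholders):

    # on the EC2 instance: install the database and a runtime for the script
    sudo apt-get update
    sudo apt-get install -y mysql-server python3

    # from your local machine: copy the script up
    scp -i my-key.pem call_script.py ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com:~

    # on the instance: run it at 09:00 every day via cron
    (crontab -l 2>/dev/null; echo "0 9 * * * python3 /home/ubuntu/call_script.py") | crontab -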
I recently developed a Twilio application using Ruby on Rails for the backend and found Heroku extremely simple to set up and launch. While Heroku does cost more than AWS, I found that the time I saved using Heroku more than made up for this. As an early-stage startup, we wanted to spend our time developing important features, not "wasting" time optimizing our AWS cloud.
However, while I believe Heroku is ideal for early-stage websites/startups, I do believe hosting should be reevaluated once a company reaches a certain size. At some point it becomes economically viable to devote resources to optimizing an AWS cloud solution, because it will be cheaper than Heroku in the long run.
