Deploying AngularJS + Sinatra to AWS

I have an AngularJS site consuming an API written in Sinatra.
I'm simply trying to deploy these 2 components together on an AWS EC2 instance.
How would one go about doing that? What tools do you recommend? What structure do you think is most suitable?
Cheers

This is based on my experience using the HashiCorp line of tools.
Manual: Launch an Ubuntu image, gem install sinatra, and deploy your code. Take a snapshot for safekeeping. This one-off approach is good for a development box to iron out the configuration process. Write down the commands you run and any options you need.
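As a rough illustration, the manual pass might look like this (a minimal sketch; the repository URL and paths are hypothetical, and the app is run in the foreground just to verify it works):

    # On a fresh Ubuntu EC2 instance, over SSH
    sudo apt-get update
    sudo apt-get install -y ruby-full build-essential git
    sudo gem install sinatra
    # Fetch the API plus the AngularJS front end (placeholder repo)
    git clone https://github.com/youruser/your-app.git /srv/app
    # Run the Sinatra API; serve the AngularJS assets from ./public
    cd /srv/app && ruby app.rb -o 0.0.0.0 -p 4567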
Automated: Use the Packer EC2 Builder and Shell Provisioner to automate your commands from the previous manual approach. This will give you a configured AMI that can be launched.
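For example, a minimal template in Packer's classic JSON format might look like the following (the region, source AMI, and setup.sh are placeholders; setup.sh would hold the commands recorded during the manual pass):

    cat > sinatra-ami.json <<'EOF'
    {
      "builders": [{
        "type": "amazon-ebs",
        "region": "us-east-1",
        "source_ami": "ami-xxxxxxxx",
        "instance_type": "t2.micro",
        "ssh_username": "ubuntu",
        "ami_name": "angular-sinatra-{{timestamp}}"
      }],
      "provisioners": [{
        "type": "shell",
        "script": "setup.sh"
      }]
    }
    EOF
    packer build sinatra-ami.json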
You can apply different methods of getting to an AMI using different toolsets. However, in the end, you want a single immutable image that can be deployed repeatedly.

Related

How do I run a Docker image on a DigitalOcean droplet?

Caveat: Docker is completely new to me and I may be making glaring errors in the configuration that I'm simply not aware of.
My goal is to have a droplet on DigitalOcean doing two things: pulling the image from a repo when it is modified, and running the container.
The container will need to run a React application, which should also pull from a repository on change.
I currently have a Docker image for my React project, and the questions I'm trying to answer are:
Docker image pull on droplet:
Pull an image from a repo on a regular schedule
Restart the image
React application pull on droplet:
Pull a version from a repo on a regular schedule
Restart the application
It occurs to me that pulling the version from the repo could be achieved with a cron job. It's been a long time but I could probably figure that out.
I realise this question provides few details. I'm still trying to get my head around many concepts here, and I find that a lot of the documentation doesn't quite provide the answers I need in whole; where it does in part, it's small pieces strewn across many pages. Any help, or a pointer in the right direction, is greatly appreciated.
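A minimal sketch of that cron idea, assuming the app lives in a git checkout and runs in a container called my-react-app (all names, paths, and ports here are hypothetical):

    #!/bin/sh
    # /usr/local/bin/redeploy.sh -- pull and redeploy if the repo changed
    cd /srv/my-react-app || exit 1
    git fetch
    # Nothing to do if we are already at the upstream commit
    [ "$(git rev-parse HEAD)" = "$(git rev-parse @{u})" ] && exit 0
    git pull
    docker build -t my-react-app .
    docker rm -f my-react-app
    docker run -d --name my-react-app -p 80:3000 my-react-app

    # crontab entry: check every 15 minutes
    */15 * * * * /usr/local/bin/redeploy.sh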
You can use Watchtower.
Pull an image from a repo on a regular schedule
Restart the image
Full documentation here: Watchtower. Go to the Arguments section to view the scheduling arguments.
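For illustration, a hedged sketch of running Watchtower against a single container (the image name and cron schedule are placeholders; confirm the flags against the documentation for your Watchtower version):

    # Watchtower watches the Docker socket, polls the registry on the
    # given 6-field cron schedule, and recreates the listed container
    # when its image changes (here: daily at 04:00).
    docker run -d --name watchtower \
      -v /var/run/docker.sock:/var/run/docker.sock \
      containrrr/watchtower \
      --schedule "0 0 4 * * *" \
      my-react-app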
I don't know why you want to pull the project from the repo while running the Docker image, but for this you can use Jenkins for CI/CD on the DigitalOcean server.
You just need some basic tutorials to do this:
Pull a version from a repo on a regular schedule
Restart the application
I think first you need to clarify some basic concepts.
An image is like a template.
A container is a running instance of an image.
Images can't be restarted, because they aren't running instances of anything; containers can be restarted, because they are running a specific version of an image.
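In concrete terms (container and image names here are hypothetical):

    # Pulling updates the image (the template):
    docker pull myrepo/my-react-app:latest
    # Restarting acts on the container (the running instance):
    docker restart my-react-app
    # A restarted container keeps running the old image; to pick up
    # a newly pulled image, recreate the container:
    docker rm -f my-react-app
    docker run -d --name my-react-app -p 80:3000 myrepo/my-react-app:latest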
Also, I think updating your environment from a cron job is a bad approach: what would happen if you pushed a wrong image by accident? The whole system would fail. So, IMHO, I strongly recommend you don't do that. Better to drive it through a tool like Jenkins, GitHub Actions, or GitLab Pipelines, and follow CI/CD best practices.

Is PAA a good candidate for automating WCM library deployment and setup in Portal?

I have created a Web Content Management library for use in WebSphere Portal. At the moment I'm using import-wcm-data to import the library, then I need to add some additional properties to 2-3 files on the server under Resource Environment Providers and then restart particular services so those changes are detected.
Can anyone explain the benefits of using a PAA over writing a simple bash (or similar) script to automate this process?
I don't understand whether I get any advantages from using a PAA, or whether a PAA is even capable of updating properties files and restarting services.
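For comparison, the scripted alternative the question describes might look roughly like this (a hedged sketch: the import-wcm-data task name comes from the question, but its exact parameters, the paths, and the wsadmin script are placeholders that vary by Portal version):

    #!/bin/bash
    # Hypothetical automation of the manual steps described above.
    WP_PROFILE=/opt/IBM/WebSphere/wp_profile

    # 1. Import the WCM library (check your version's docs for the
    #    exact import-wcm-data arguments)
    $WP_PROFILE/ConfigEngine/ConfigEngine.sh import-wcm-data \
        -DLibraryName=MyLibrary -DDataLocation=/tmp/mylibrary-export

    # 2. Update the Resource Environment Provider properties via
    #    wsadmin (placeholder script)
    $WP_PROFILE/bin/wsadmin.sh -lang jython -f update_rep_props.py

    # 3. Restart the Portal server so the changes are picked up
    $WP_PROFILE/bin/stopServer.sh WebSphere_Portal
    $WP_PROFILE/bin/startServer.sh WebSphere_Portal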
I have been working intensively with PAA files, and I must say it is a very stable way of deploying an app that requires multiple deployment steps and components.
It does need a startup process, but it is well worth it in a multi-server environment.
You can do all the tasks that you can do in an Ant file, as well as use the wsadmin scripting interface. I only update Resource Environment Provider settings and the like in WAS, and I do not touch any properties files, for exactly that reason: all settings are stored in WAS.
In my experience, a PAA is not a good method if you're merely importing a content library.
I don't understand why you are doing the import manually instead of syndicating, but even if there's a good reason not to syndicate, the PAA process was too involved and required too many precursor actions (deleting libraries, removing the PAA, deploying the PAA, and then activating the portlets) to be a viable option for something as simple as importing a WCM library.
Since activating the portlets I was importing with the PAA was an extra step, I don't believe it can restart applications either.

How to launch app from the web/cloud

I have developed an app in Twilio which I would like to run from the cloud. I tried learning about AWS and Google App Engine but am quite confused at this stage.
I have 2 questions which I hope to get your help on:
1) How can I store my scripts and database in the cloud? Right now, everything is running out of my local machine but I would like to transfer the scripts and db to another server and run my app at a predetermined time of day. What would be the best way to do this?
2) How can I write a batch file to run my app at a predetermined time of day in the cloud?
I understand this does not have code, but I really hope someone can point me in the right direction. I have spent lots of time trying to understand this myself but am still unsure. Thanks in advance.
Update: The application is a Twilio app that makes calls to people. The script simply applies an algorithm to make calls in a certain fashion, and the database is a MySQL DB that provides the details of the people to be called.
It is quite difficult to provide an exact answer without understanding what the application is, what the DB is, or what the script is that you wish to run.
I can give you a couple of ideas that might be helpful in such cases.
OpsWorks (http://aws.amazon.com/opsworks/) is a managed service for deploying and managing applications. You can define your stack (multiple layers like web, workers, DB...) and which Chef recipes should run at various points in the life of the instances in each layer (startup, shutdown, app deployment, or stack modification). Then you can use the ability to add instances to each layer on specific days and hours to implement the functionality of running at predetermined times, as you requested.
In such a solution you can either keep some of your instances (like the DB) always on, or even bootstrap them using the Chef recipes every day, restoring from a snapshot on start and creating a snapshot on shutdown.
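For the time-based part, OpsWorks exposes a weekly per-instance schedule; a hedged sketch with the AWS CLI (the instance ID and hours are placeholders, and hours are in UTC):

    # Run this instance only from 09:00 to 11:00 UTC on Mondays
    aws opsworks set-time-based-auto-scaling \
        --instance-id 12345678-aaaa-bbbb-cccc-1234567890ab \
        --auto-scaling-schedule '{"Monday": {"9": "on", "10": "on"}}'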
Another AWS service that you can use is Data Pipeline (http://aws.amazon.com/datapipeline/). It is designed to move data periodically between data sources, for example from a MySQL database to Amazon Redshift, the data warehouse service. But you can also use it to trigger arbitrary shell scripts (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-shellcommandactivity.html), and schedule them to run under various conditions like every hour/day or at specific times (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-concepts-schedules.html).
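A minimal sketch of such a pipeline definition, assuming the AWS CLI; the ids, start date, and script path are placeholders, and the Data Pipeline docs should be checked for the fields required on your account:

    # Create the pipeline, register a daily ShellCommandActivity,
    # then put the definition.
    aws datapipeline create-pipeline --name call-runner --unique-id call-runner
    cat > pipeline.json <<'EOF'
    {
      "objects": [
        { "id": "DailySchedule", "name": "DailySchedule",
          "type": "Schedule", "period": "1 day",
          "startDateTime": "2015-01-01T09:00:00" },
        { "id": "Runner", "name": "Runner", "type": "Ec2Resource",
          "instanceType": "t1.micro",
          "schedule": { "ref": "DailySchedule" } },
        { "id": "RunApp", "name": "RunApp",
          "type": "ShellCommandActivity",
          "command": "/home/ec2-user/run_app.sh",
          "runsOn": { "ref": "Runner" },
          "schedule": { "ref": "DailySchedule" } }
      ]
    }
    EOF
    aws datapipeline put-pipeline-definition \
        --pipeline-id <pipeline-id-from-create> \
        --pipeline-definition file://pipeline.json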
A simple path here would be just to create an EC2 instance in AWS and put the components needed to run your app there. A thorough walkthrough is here:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/get-set-up-for-amazon-ec2.html
Essentially you will create an EC2 virtual machine, which you can for most purposes treat just like any other Linux server. You can install MySQL on it, copy your script there, and run it. Of course whatever container or support libraries your code requires will need to be installed as well.
You don't say what OS you are using locally, but if it is Mac or Linux, you should be able to follow almost the same process to get your script running on an EC2 instance that you used on your local machine.
As you get to know AWS, there are sophisticated services you can use for deployment, infrastructure orchestration, database services, and so on. But just to get started running a script from a virtual machine should be pretty straightforward.
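Concretely, the first pass might look something like this (a hedged sketch for an Ubuntu instance; the key name, paths, and the 9:00 schedule are placeholders):

    # Install a database and the app on the instance
    sudo apt-get update && sudo apt-get install -y mysql-server
    # From your local machine: copy the script and data over
    scp -i mykey.pem -r ./app ubuntu@<ec2-public-dns>:/home/ubuntu/app
    # Load the people-to-call data into MySQL
    mysql -u root -p < /home/ubuntu/app/people.sql
    # Schedule the app for 09:00 every day with cron
    (crontab -l 2>/dev/null; echo "0 9 * * * /home/ubuntu/app/run_app.sh") | crontab -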
I recently developed a Twilio application using Ruby on Rails for the backend and found Heroku extremely simple to set up and launch. While Heroku does cost more than AWS, I found that the time I saved using Heroku more than made up for it. As an early-stage startup, we wanted to spend our time developing important features, not "wasting" time optimizing our AWS cloud.
However, while I believe Heroku is ideal for early-stage websites/startups, I do believe hosting should be reevaluated once a company reaches a certain size. At some point it becomes economically viable to devote resources to optimizing an AWS cloud solution, because it will be cheaper than Heroku in the long run.

Should I be using Gradle for continuous deployment?

Has anyone had experience with Gradle? I'm thinking of using it for continuous deployment, and I'm considering either my own scripts (Python) or Gradle.
Can anyone say from experience which way is recommended? Note I already use Maven and I don't intend to move away from it for my dependency management and project management.
Thanks
We have implemented Gradle-based deployment and environment management in a big governmental project (100+ servers). But we had to develop a custom set of plugins (which is actually a rather straightforward process in Gradle) to handle tasks like remote SSH command execution through a Groovy DSL, creation of application server domains/clusters (we are using WebLogic), and application/configuration deployment.
We also are thinking of integrating Gradle with Puppet for easier Linux administration.
If you are coming from the Java world, then using Gradle (which is Groovy-based) will be rather simple for you, because you can reuse your Java/Ant/Maven/Groovy knowledge to write scripts. Also, the ability to create DSLs in Groovy may allow you to build interesting abstractions. Gradle has a very clean API which allows building nice dependencies between tasks. It also integrates very well with the Maven infrastructure, and you can reuse all Ant tasks.
Yes, Gradle-based deployment is possible with the gradle-ssh-plugin.
Here is an article with a good usage example.

How to integrate Bugzilla and HP Quality Center?

I'm working on integrating Bugzilla with HP QC. Currently I'm doing this with a Perl script that manipulates the database directly using SQL commands. I want to use Bugzilla's web services instead. I have gone through the Bugzilla WebService API, but that wasn't enough to get started. I'm a beginner and this is the first project of my career. How do I go about this?
Check out the Perl script bz_webservice_demo.pl in Bugzilla's contrib directory, it shows how to talk to Bugzilla via XMLRPC.
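A hedged sketch of invoking it (the exact options vary by Bugzilla version, so run the script with --help to confirm; the URL and credentials are placeholders):

    # Fetch one bug over XMLRPC as a smoke test
    perl contrib/bz_webservice_demo.pl \
        --uri http://bugzilla.example.com/xmlrpc.cgi \
        --login you@example.com --password secret \
        --bug_id 42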
There are a few things you could do:
Export defects from Bugzilla into a spreadsheet and upload it into Quality Center
Use the Open Test Architecture API (OTAClient.dll) to update defects in Quality Center
Use the HP Synchronization Server and build an adapter
Using the HP Synchronizer is probably the only "real" way to do it, though you could build your own sync mechanism, for example using just OTA and a message queue.
There may be an existing adapter available from proficom-ag, based on a presentation I found via a web search.
