I want to build a Libra TestNet with two servers.
I don't know how to use config-builder to configure the program.
This answer might be a bit late, but it might help someone who is looking for a solution.
I was able to set up a local test network with single or multiple nodes based on the following.
For a single node, the libra-swarm package is well documented here: https://developers.libra.org/docs/run-local-network. It defines easy steps to set up your local test network with a defined number of nodes.
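For example, per those docs, launching a local network looks roughly like this (the flag names are taken from the libra-swarm docs at the time and may have changed between releases, so treat this as a sketch):

    # run from a checkout of the Libra source tree;
    # -n sets the number of validator nodes, -s also starts a client
    cargo run -p libra-swarm -- -n 4 -s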
If you are planning to use multiple nodes, you can use the Dockerfiles and shell scripts in Libra's GitHub repo to create Docker images, and use those images with a container-orchestration system like Kubernetes to set up your network. I was able to do this, and my setup is available in this GitHub repository.
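The rough flow is the following (the registry, image name, and manifest file here are placeholders, not the exact scripts from that repo):

    # build a validator image from the Libra source tree and push it
    docker build -t myregistry/libra-validator:latest -f docker/validator/Dockerfile .
    docker push myregistry/libra-validator:latest
    # then point your Kubernetes manifests at that image and apply them
    kubectl apply -f libra-validators.yaml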
Caveat: Docker is completely new to me, and I may be making glaring errors in the configuration that I'm simply not aware of.
My goal is to have a droplet on DigitalOcean doing two things: pulling the image from a repo when it is modified, and running the container.
The container will need to run a React application, which should also be pulled from a repository on change.
I currently have a Docker image for my React project, and the questions I'm trying to answer are:
Docker image pull on droplet:

- Pull an image from a repo on a regular schedule
- Restart the image

React application pull on droplet:

- Pull a version from a repo on a regular schedule
- Restart the application
It occurs to me that pulling the version from the repo could be achieved with a cron job. It's been a long time but I could probably figure that out.
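Something like the following crontab entry is what I have in mind (the image and container names here are just placeholders; crontab -e opens the table for editing):

    # every night at 02:00: pull the latest image, then recreate the container
    0 2 * * * docker pull myrepo/react-app:latest && docker stop react-app && docker rm react-app && docker run -d --name react-app -p 80:80 myrepo/react-app:latest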
I realise this question provides few details. I'm still trying to get my head around many concepts here, and I find that a lot of the documentation doesn't quite provide the answers I need as a whole; where it does in part, the parts are strewn across many pages. Any help, or pointing in a direction, is greatly appreciated.
You can use Watchtower for both of these:

- Pull an image from a repo on a regular schedule
- Restart the image

Full documentation here: Watchtower. Go to the Arguments section to view the scheduling arguments.
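As a minimal sketch, running Watchtower next to your container looks something like this (the --schedule flag takes a six-field cron expression with a leading seconds field; the image name is the current upstream one):

    # watchtower watches running containers and re-pulls/recreates them
    # whenever their image changes in the registry
    docker run -d --name watchtower \
      -v /var/run/docker.sock:/var/run/docker.sock \
      containrrr/watchtower \
      --schedule "0 0 4 * * *"   # check for new images every day at 04:00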
I don't know why you want to pull the project from the repo while running the Docker image, but for this you can use Jenkins for CI/CD on the DigitalOcean server.
You just need some basic tutorials to do this:

- Pull a version from a repo on a regular schedule
- Restart the application
I think first you need to clarify some basic concepts:

- An image is like a template.
- A container is an instance of an image.

Images can't be restarted because they are not running instances of anything, while containers can be restarted because they run a specific version of an image.
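You can see the distinction directly from the CLI (the container name here is hypothetical):

    docker images        # the templates you have pulled or built
    docker ps -a         # the instances (containers) created from them
    docker restart web   # restart acts on a container name/id, never on an image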
Also, I think updating your environment under a cron job is a bad approach, because what would happen if you pushed a wrong image by accident? The whole system would fail. So, IMHO, I strongly recommend you don't do that; better yet, do it through a tool like Jenkins, GitHub Actions, GitLab Pipelines, and so on, and use good CI/CD practices.
I am trying to deploy my own cluster using the DC/OS CLI installation. Mesosphere has huge support, as there are many ready-to-install packages provided in the Mesosphere Universe repo (https://github.com/mesosphere/universe).
However, I would like to go one step further. I am trying to install my own applications on my cluster using the DC/OS CLI installation process. To do this, as far as I understand, I need to either (i) make my application recognizable to the system repo (like the other repo packages provided in Universe) or (ii) make a new image that contains all my applications and modify the DC/OS script to make the installation possible.
Unfortunately, my modest knowledge is flawed here, and I could not find a clear answer to this anywhere.
Therefore, I would like to ask:
1) Is it possible to do what I am trying to do?
2) If the answer is YES, how exactly should I do it? My goal is to install my awesome apps for my own purposes, not to publish them. But to add my apps as a repo in Universe, it seems like I would have to publish them.
It is possible! :)
Please follow these instructions
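For reference, the DC/OS CLI supports adding extra package repositories, so a privately hosted Universe-style repo avoids publishing anything publicly. A sketch, with placeholder names and URL:

    # register your private repo alongside the default Universe
    dcos package repo add my-universe https://my-universe.example.com/repo
    # then install your own package from it
    dcos package install my-awesome-app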
We've been looking into implementing an Oracle Database system using Docker on our server and are considering two different strategies for managing our data:
Storing the database files (.dbf) in a folder on the server's path, and then making that folder available in a container using the -v option. The idea is to have multiple containers accessing multiple folders so we can manage different versions of our data.
Keeping the database files inside the container as if it were a regular installation.
The reason behind this is that we like the idea of being able to change versions on the fly when we need to revert to an old version of our program to fix a bug (for example, if the version in production is older than the one we're currently working on). In that sense, each of our containers should also hold a specific version of our app (it's a webapp).
So my question is the following: which approach would be best in our case? Is there another way of organizing our containers that we missed and that would be better? We're looking for more opinions on the matter. We've been told already that the first method would perform better than keeping everything in containers, but our tests have not shown any improvement.
Thanks!
Go with the first option and forget the second one, because you don't want intensive disk access happening on a Docker layered file system.
Docker containers have a layered file system, which does not deliver great performance. That's why you should keep your data on a volume, which is in the end a mount point on a folder in your Docker host's file system.
If you look at the official Docker images for databases, they all declare volumes for the data. See MySQL and Postgres.
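As a sketch, one container per data version could look like this (the host paths, mount point, and image name are hypothetical):

    # each container mounts its own host folder holding that version's .dbf files
    docker run -d --name oracle-v1 \
      -v /srv/oracle/v1:/opt/oracle/oradata \
      myorg/oracle-db:12.1

Switching versions is then a matter of stopping one container and starting another against a different host folder.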
I have an AngularJS site consuming an API written in Sinatra.
I'm simply trying to deploy these 2 components together on an AWS EC2 instance.
How would one go about doing that? What tools do you recommend? What structure do you think is most suitable?
Cheers
This is based upon my experience utilizing the HashiCorp line of tools.
Manual: Launch an Ubuntu image, gem install sinatra, and deploy your code. Take a snapshot for safekeeping. This one-off approach is good for a development box to iron out the configuration process. Write down the commands you run and any options you may need.
Automated: Use the Packer EC2 Builder and Shell Provisioner to automate your commands from the previous manual approach. This will give you a configured AMI that can be launched.
You can apply different methods of getting to an AMI using different toolsets. However, in the end, you want a single immutable image that can be deployed repeatedly.
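A minimal Packer template might look like the following (the region, source AMI, and installed packages are placeholders, not a tested configuration for your app):

    cat > sinatra-ami.json <<'EOF'
    {
      "builders": [{
        "type": "amazon-ebs",
        "region": "us-east-1",
        "source_ami": "ami-xxxxxxxx",
        "instance_type": "t2.micro",
        "ssh_username": "ubuntu",
        "ami_name": "sinatra-api-{{timestamp}}"
      }],
      "provisioners": [{
        "type": "shell",
        "inline": [
          "sudo apt-get update -y",
          "sudo apt-get install -y ruby ruby-dev build-essential",
          "sudo gem install sinatra"
        ]
      }]
    }
    EOF
    packer build sinatra-ami.json   # outputs the id of the AMI to launch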
I have created a Web Content Management library for use in WebSphere Portal. At the moment I'm using import-wcm-data to import the library; then I need to add some additional properties to 2-3 files on the server under Resource Environment Providers, and then restart particular services so those changes are detected.
Can anyone explain the benefits of using a PAA over writing a simple bash (or similar) script to automate this process?
I don't understand whether I gain any advantages by using a PAA, or whether a PAA is even capable of updating properties files and restarting services.
I have been working intensively with PAA files, and I must say that it is a very stable way of deploying an app requiring multiple deployment steps and components.
It does need a setup process, but it is well worth it in a multi-server environment.
You can do all the tasks that you can do in an Ant file, as well as use the wsadmin scripting interface. I only update resource environment settings and the like in WAS, and I do not touch any properties files for that reason, since all settings are stored in WAS.
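For instance, a custom property on a Resource Environment Provider can be set from a Jython script run through wsadmin, roughly like this (the provider and property names are made up, and the exact attribute paths may differ between WAS versions):

    cat > update-rep.py <<'EOF'
    # look up the provider and add a custom property to its property set
    prov = AdminConfig.getid('/ResourceEnvironmentProvider:MyProvider/')
    propSet = AdminConfig.showAttribute(prov, 'propertySet')
    AdminConfig.create('J2EEResourceProperty', propSet,
                       [['name', 'myKey'], ['value', 'myValue']])
    AdminConfig.save()
    EOF
    wsadmin.sh -lang jython -f update-rep.py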
In my experience, a PAA is not a good method if you're merely importing a content library.
I don't think I understand why you are doing the import manually and not syndicating, but even if there's a good reason not to syndicate, the PAA process was too involved and required too many precursor actions (deleting libraries, removing the PAA, deploying the PAA, and then activating the portlets) to be a viable option for something as simple as importing a WCM library.
Since activating the portlets I was importing with the PAA was an extra step, I don't believe you can restart applications with it either.