I have followed the steps given to run Kiwi TCMS with Docker, and I was able to set it up successfully.
https://kiwitcms.readthedocs.io/en/latest/installing_docker.html
But I'm not sure how to use my own certificate files on this website.
I used my custom domain "testportal.mycompany.com" in the initial docker-compose setup, and I have a certificate for the wildcard domain "*.mycompany.com". I just want to apply it now.
How do I apply the custom certificate to the site, and where are the certificate files located so I can replace them?
So you put your SSL files alongside your docker-compose.yml file and then mount them inside the app docker container. For example see:
https://github.com/kiwitcms/Kiwi/blob/master/docker-compose.yml#L7
This is mounting the local file ./99-charset.sh under /usr/share/container-scripts/mysql/init/99-charset.sh inside the db container.
So you will mount your own certificate files under /Kiwi/ssl/localhost.key and /Kiwi/ssl/localhost.crt
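A minimal sketch of what that could look like in docker-compose.yml (the service name web and the local file names under ./ssl/ are assumptions; match them to your actual compose file and certificate files):

  web:
    volumes:
      - ./ssl/mycompany.com.key:/Kiwi/ssl/localhost.key:Z
      - ./ssl/mycompany.com.crt:/Kiwi/ssl/localhost.crt:Z

The :Z suffix is only needed on SELinux hosts. Restart the containers afterwards (docker-compose down && docker-compose up -d) so the new certificate is picked up.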
Update:
If you are new to Docker, you may want to engage your sysadmin/devops counterpart to help you out. Assuming you are aiming to deploy Kiwi TCMS into production, this isn't something you should do yourself if you don't have any experience in the subject.
Related
I want to ask how I can host my React app. It is a 3D product configurator.
I tried to host it on AWS Amplify, but the 3D models don't load.
If you want to host an application on AWS Amplify, you have to create a build version of your app (assuming it already works without any start issues, meaning you have a functional React app created with the command npx create-react-app).
Usually your React app runs on localhost, which is basically a test/development version of your app. When you take it to AWS, it really wants a build version of your app. The build command will generate everything you need for this. Navigate to your React application folder and
Run the command
npm run build
This will create a build folder that you can send to AWS Amplify.
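In full, the local steps look something like this (the folder name is just an example):

cd my-react-app     # your project folder
npm install         # make sure every dependency is installed
npm run build       # writes the production bundle into ./build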
When you go to the AWS Amplify site, it'll ask if you would like to build a website or host a website.
Select host, and it'll then ask if you would like to push it from a repository like GitHub. For now, let's just skip that and keep the deployment as simple as possible: deploy without Git.
Next, click on drag and drop so that you can manually select the build folder that your npm run build command generated.
Look for the generated build folder and drag it into the AWS drop area. You don't actually have to click the 'choose files' button. Sometimes the box glitches and won't let you drag anything from outside the browser window; if that happens, open your file manager, locate the build folder, and drag it from there to the AWS drop zone at the bottom of the screen.
Give your AWS app a name and env name.
From there you can deploy. Once you deploy, it'll give you a site address. Also, before you make your build, be sure that all of the packages you need are installed. I had an issue where my axios calls were not working because I had not installed the package prior to pushing my build.
So if your project depends on a certain npm package to run your .gltf files, make sure that it is installed in your application. You should see it inside the node_modules folder (in your app's local directory, not the AWS one).
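For instance, if your configurator loads the models through three.js (a common choice for .gltf in React, though your project may use a different library):

npm install three

Then check that node_modules/three exists before running npm run build again.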
I think AWS uses the node_modules folder to generate everything your project needs (but I am not 100% sure of this). In any case, it didn't work until I installed the package and pushed the build folder to AWS again via drag and drop.
There are better ways to do this, but this is what worked for me! Hope this helps to at least get your site up and running, and with any package issues that might have been affecting your 3D models. This is about as far as I can take you. Good luck!
I developed a CI/CD pipeline on Azure DevOps for deploying a React app to Azure Web App Service. Locally I'm using a .env file, and this file is in .gitignore. I want to know how I can set up the .env so it is read in production.
You can check the documentation below:
https://create-react-app.dev/docs/adding-custom-environment-variables/#adding-development-environment-variables-in-env
.env files should be checked into source control (with the exclusion of .env*.local).
If you don't want to check in the .env files to DevOps, you could add the variables in the pipeline variables:
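For example, in a YAML pipeline (the variable name is illustrative; note that create-react-app only reads variables prefixed with REACT_APP_ and inlines them at build time, so they have to be set where npm run build actually runs):

variables:
  REACT_APP_API_URL: 'https://api.example.com'

steps:
  - script: npm run build

Non-secret pipeline variables are automatically exposed as environment variables to script steps, so the build picks them up.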
In addition, you can refer to the following case for more suggestions:
How to use environment variables in React app hosted in Azure
Many of the proposed solutions related to this issue may not work, but I solved it the following way. First, let me explain why the other solutions may not (should not) work (please correct me if I am wrong):
Adding pipeline variables (even though they are environment variables) should not work, since a React app runs on the client side and there is no server-side code that can inject environment variables into the React app at runtime.
Installing an environment-variable task on the classic pipeline should not work for the same reason.
Adding the variables to Application Settings in Azure App Service should not work for the same reason.
Having a .env, .env.development, or .env.production file in a Git repo is not good practice, as it may compromise API keys and other sensitive information.
So here is my solution -
Step 1: Add all those .env files to the Azure DevOps library as secure files. You can download these secure files onto the build machine using a DownloadSecureFile@1 pipeline task (YAML). This way we make sure the correct .env file is present on the build machine before the yarn build --mode development task runs in the pipeline.
Step 2: Add the following tasks to your Azure YAML pipeline in the appropriate place. I have created a GitHub repo https://github.com/mail4hafij/react-yarn-azure-pipeline if you want to see a complete example.
# Download secure file from azure library
- task: DownloadSecureFile@1
  inputs:
    secureFile: '.env.development'

# Copy the .env file
- task: CopyFiles@2
  inputs:
    sourceFolder: '$(Agent.TempDirectory)'
    contents: '**/*.env.development'
    targetFolder: '$(YOUR_DEFINED_PROJECT_ROOT_FOLDER_VARIABLE)'
    cleanTargetFolder: false
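If you would rather not rely on the glob pattern, DownloadSecureFile@1 also exposes the downloaded path as an output variable once you give the task a name (envFile below is an arbitrary label):

- task: DownloadSecureFile@1
  name: envFile
  inputs:
    secureFile: '.env.development'

- script: cp $(envFile.secureFilePath) $(Build.SourcesDirectory)/.env.development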
Note that secure files can't be edited, but you can always re-upload them.
What is the best way to deploy a React web app (e.g. an employee database) on the local area network only? I'm using create-react-app and I'm almost done with the code, but after that, I don't know what to do. It was easier to do this in PHP/MySQL with the help of XAMPP, but I would like to do it using React this time. Thank you.
The React dev server is already reachable on your local LAN IP.
For example, if your local LAN IP is 192.168.1.20 and your React dev server is running on port 3000 (localhost:3000), other PCs, laptops, and mobile devices can already access the server on your PC via
http://192.168.1.20:3000
Just make sure it is allowed through your firewall. Antivirus software often blocks this, so enable it both in your antivirus firewall settings and in Windows Firewall.
If your app is totally static, you can serve it with a web server after building. When you are done with your app, first build it with:
yarn build
After this you will have all your static files in the build directory inside your project. build/static/js and build/static/css contain the bundled JS and CSS files, which are referenced from your index.html. So, when you serve this directory with a web server, like a regular HTML site, you can use your app.
Alternatively:
npm run build
After the build succeeds, install a static file server and serve the output:
npm install -g serve
serve -s build
It will give you a URL; use that with your machine's LAN IP from other devices connected to the same network.
I'm using create-react-app for my projects, with Docker as my dev environment.
Now I would like to know the best practice for deploying my project to AWS (I'll deploy the Docker image).
Maybe my question is a dumb one, but I'm really stuck on it.
My Dockerfile has the command yarn start, which is enough for dev: I don't need to build anything, my bundle runs in memory. But for QA or PROD I would like to build using npm run build, and as I understand it, that creates a new folder with the files that should be used in the prod environment.
That said, my question is: what is the best practice for this kind of situation?
Thanks.
This is what I did:
Use npm run build to build all static files.
Use the _/nginx image to customize an HTTP server which serves those static files (see the Dockerfile sketch after this list).
Upload the customized image to Amazon EC2 Container Registry (ECR).
Load the image in an ECS task, then use ELBv2 to start a load balancer that forwards all outside requests to ECS.
(Optional) Enable HTTPS in ELBv2.
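A minimal sketch of the Dockerfile from step 2 (it assumes npm run build has already produced a local build/ folder):

FROM nginx:latest
# serve the pre-built static files
COPY build/ /usr/share/nginx/html/
# optionally ship a custom nginx config instead of the stock one:
# COPY nginx.conf /etc/nginx/conf.d/default.conf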
One-time things:
Figure out the mechanism of ECS. You need to create at least one host server for ECS. I used the Amazon ECS-Optimized AMI.
Create a Docker repository on ECR so you can upload your customized Docker image.
Create ECS task definition(s) for your service.
Create ECS cluster(s) and add task(s).
Configure ELBv2 so it can forward the traffic to your internal ECS dynamic port.
(Optional) Write a script to automate everyday deployment (sketch below).
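A sketch of what such a script could look like (account, region, and names are placeholders; the commands are the standard AWS CLI ones for pushing an image and redeploying the service):

#!/bin/sh
npm run build
docker build -t my-react-site .
aws ecr get-login-password | docker login --username AWS --password-stdin <account>.dkr.ecr.<region>.amazonaws.com
docker tag my-react-site:latest <account>.dkr.ecr.<region>.amazonaws.com/my-react-site:latest
docker push <account>.dkr.ecr.<region>.amazonaws.com/my-react-site:latest
# force ECS to pull the new image
aws ecs update-service --cluster my-cluster --service my-service --force-new-deployment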
I would get paid if someone wants me to do those things for them. Or you can figure it out by yourself following those clues.
However, if your website is a simple static site, I recommend using GitHub Pages: it's free and simple. My solution is for multiple static + dynamic applications which may involve other services (e.g. Redis, Elasticsearch) and require daily/hourly deployments.
You would have to run npm run build and then copy the resulting files into your container. You could use a separate Dockerfile.build to build the files, extract them, and add them to your final container. Your final container should be able to serve the files; you can base it on nginx or another server. You can also use it as a data volume container for your existing server container.
Recent versions of Docker make this process easier by letting you combine the two Dockerfiles: the build container and the final container can both be defined in the same file.
Here's a simple example for your use case:
FROM node:latest AS builder
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

FROM nginx:latest
COPY --from=builder /usr/src/app/build /usr/share/nginx/html
You'd probably want to include your own nginx configuration file.
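For a single-page app, that configuration usually needs a fallback to index.html so client-side routes don't 404. A minimal sketch (assuming the files live in the default /usr/share/nginx/html):

server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html;
    location / {
        try_files $uri $uri/ /index.html;
    }
}

Copy it into the final image with COPY nginx.conf /etc/nginx/conf.d/default.conf.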
More on multistage builds here:
https://docs.docker.com/engine/userguide/eng-image/multistage-build/
I'm working on an Angular 2 project. I need to use RESTful services that cannot be exposed with CORS (the local setup is a Docker container that emulates the live server), so I need to serve the Angular project under the same domain.
Let's say the local domain is project.dev, which already has a main web site at the root.
I was asked to build an Angular 2 admin section in a subdirectory (say /admin) so that it can be accessed at project.dev/admin.
When the project is done, it will be easy to just copy the build into that subdirectory and have it working.
The problem is while developing it.
I'm using angular-cli with ng serve, but the site is served at localhost:4200, not on the same domain, hence the CORS issues.
I've tried changing that with the --host and --port arguments, but the ones I need are, obviously, already used by the main site.
The partial solution was to use ng build --watch --base-href /admin/ --deploy-url /admin/ and configure outDir in angular-cli.json to build the site in the proper location, but even though --watch correctly rebuilds on every file change, the browser is not refreshed the way it is with ng serve.
Unfortunately, changing the server to support CORS is not an option. It is a Docker container running a Java server that I'm not able to edit. Furthermore, the versioned code is constantly updated by other developers.
I would like to know if it is possible to manage such a situation in a better way, mainly whether it is possible to serve the Angular project under a local domain name while developing.
Thanks
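One approach worth sketching (an assumption-laden example, not something confirmed in this thread): angular-cli's dev server can proxy API calls to the backend, so the browser only ever talks to localhost:4200 and no CORS is involved, while live reload keeps working. Assuming the REST endpoints live under /api on project.dev (adjust the path and target to your setup), create a proxy.conf.json:

{
  "/api": {
    "target": "http://project.dev",
    "secure": false,
    "changeOrigin": true
  }
}

Then start the dev server with ng serve --proxy-config proxy.conf.json and call the API with relative URLs (/api/...) from the Angular code.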