Is it possible to set up/configure Kiwi TCMS' Database and/or the Uploads folder on standalone remote servers?

Expanding on/clarifying the title a bit, my questions are related to managing the Kiwi TCMS data that needs to be persistent. After reading the documentation I did not find any example or configuration steps on how to make Kiwi TCMS work with remote DBs and storage servers, so can you help point me in the right direction on the questions below:
1. Is it possible to have Kiwi TCMS use a remote database (for example, a MariaDB instance on AWS RDS)?
2. Is it possible to have Kiwi TCMS use a remote uploads folder for attachments, pictures, etc. on a remote server or storage system (for example, any remote storage server or a simple AWS S3 bucket)?
3. If either of questions 1 or 2 is possible, can they be configured via the docker-compose.yml file provided with the repo, or would they need to be configured using a different method?
4. If either of questions 1 or 2 is possible (especially the question about the remote DB), would this setup play well when migrating (.../Kiwi/manage.py migrate), or would special steps need to be taken since the DB is running remotely?
Note: the main reason for my questions is that having a standalone remote DB and/or uploads folder would make it easier to back up/update/restore/restart/reset any server or Kubernetes pod that is running Kiwi TCMS without having to worry about the data that needs to be persistent.

Note that both the DB and the file storage volumes are persistent in the default configuration. That's on purpose, so that they survive docker-compose down and persist between upgrades! So your question is really how to put those on a different machine.
For the database configuration everything is controlled via environment variables. https://kiwitcms.readthedocs.io/en/latest/configuration.html lists all config settings and https://kiwitcms.readthedocs.io/en/latest/installing_docker.html#customization tells you how you can override them.
# Database settings
DATABASES = {
    "default": {
        "ENGINE": os.environ.get("KIWI_DB_ENGINE", "django.db.backends.mysql"),
        "NAME": os.environ.get("KIWI_DB_NAME", "kiwi"),
        "USER": os.environ.get("KIWI_DB_USER", "kiwi"),
        "PASSWORD": os.environ.get("KIWI_DB_PASSWORD", "kiwi"),
        "HOST": os.environ.get("KIWI_DB_HOST", ""),
        "PORT": os.environ.get("KIWI_DB_PORT", ""),
        "OPTIONS": {},
    },
}
Since these are environment variables, you can also configure them directly in your docker-compose.yml file, as shown in the upstream file itself:
environment:
  KIWI_DB_HOST: db
  KIWI_DB_PORT: 3306
  KIWI_DB_NAME: kiwi
  KIWI_DB_USER: kiwi
  KIWI_DB_PASSWORD: kiwi
So there's nothing stopping you from pointing the DB connection to a separate host, presumably your database cluster which you use for other apps as well.
From the point of view of the Kiwi TCMS application the database is remote anyway. It is accessed via TCP, and it doesn't matter whether that is another container running alongside the app or a completely different host in a completely different network.
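For example, a minimal sketch of what that override could look like in docker-compose.yml (the service name, image tag, RDS endpoint and credentials below are placeholders/assumptions for illustration, not values taken from the Kiwi TCMS docs):

services:
  web:
    image: kiwitcms/kiwi:latest        # use whatever image/tag you actually deploy
    environment:
      KIWI_DB_ENGINE: django.db.backends.mysql                        # MariaDB uses the MySQL backend
      KIWI_DB_HOST: my-kiwi-db.xxxxxxxx.us-east-1.rds.amazonaws.com   # hypothetical RDS endpoint
      KIWI_DB_PORT: 3306
      KIWI_DB_NAME: kiwi
      KIWI_DB_USER: kiwi
      KIWI_DB_PASSWORD: change-me                                     # placeholder secret

With such a setup /Kiwi/manage.py migrate runs inside the app container exactly as it does against the bundled db container, because the connection goes over TCP either way; just make sure the remote database and user exist and are reachable from the container before running it.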
For the volume storing uploaded files it is a bit different. The volume needs to be mounted inside the app container, which is done via these lines:
volumes:
  - uploads:/Kiwi/uploads:Z
That maps/mounts a persistent volume from the docker host into the running container. Docker allows various setups for volumes; refer to https://docs.docker.com/storage/volumes/, but probably one of the simplest is the following:
Mount your NFS (or other) volume under /mnt/nfs/kiwitcms_uploads on the docker host
Then mount /mnt/nfs/kiwitcms_uploads to /Kiwi/uploads inside the running container.
I don't know what the implications of this are or how stable it is. Refer to the Docker documentation and your DevOps admin for best practices here.
However at the end of the day if you can make a network storage/block device available to the docker host then you can mount that inside the running container and the Kiwi TCMS application will treat it as a regular filesystem.
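For example, here is a minimal sketch of the NFS variant described above, expressed as a docker-compose named volume with NFS driver options instead of a host-side mount (the NFS server address, export path and mount options are made-up placeholders; check the Docker volume documentation for what your storage actually requires):

services:
  web:
    image: kiwitcms/kiwi:latest        # use whatever image/tag you actually deploy
    volumes:
      - uploads:/Kiwi/uploads:Z        # same mount point as in the upstream file

volumes:
  uploads:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.0.2.10,rw,nfsvers=4"      # hypothetical NFS server and mount options
      device: ":/exports/kiwitcms_uploads"   # hypothetical export path

The two-step approach above (mounting the export under /mnt/nfs/kiwitcms_uploads on the docker host first) would instead use a plain bind mount such as /mnt/nfs/kiwitcms_uploads:/Kiwi/uploads:Z.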

Related

How to get database off of localhost and running permanently?

So, not sure if this is stupid to ask, but I'm running a Neo4j database server (using Apollo Server) from my React application. Currently I run it using node in a separate terminal (and I can navigate to it on localhost), then run npm start in a different terminal to get my application going. How can I get the database just up and running always, so that if customers use the product they can always access the database? Or, if this isn't good practice, how can I establish the database connection while I run my client code?
Technologies being used: ReactJS, Neo4j Database, GraphQL + urql
I tried moving the Apollo server code into the App.tsx file of my application to run it from there directly when my app is launched, but this was giving me errors. I'm not sure if this is the proper way to do it, as I think it should be abstracted out of the client code?
If you want to run your server in the cloud so that customers can access your React application, you need two things:
1. A server/service to run your database, e.g. Neo4j AuraDB (Free/Pro) or one of the cloud marketplaces: https://neo4j.com/docs/operations-manual/current/cloud-deployments/
2. A service to run your React application, e.g. Netlify, Vercel or one of the cloud providers (GCP, AWS, Azure), which you then have to configure with the server URL + credentials of your Neo4j server
You can run neo4j-admin dump --to database.dump on your local instance to create a copy of your database content and upload it to the cloud service. For 5.x the syntax is different, I think neo4j-admin database dump --path folder.

How to reference exterior SQL storage with Docker container

Being a newbie to Docker, and thinking about storage for a SQL Server database that is several hundred gigabytes or more, it doesn't seem feasible to me to store that much inside a container. It takes time to load a large file, and the sensible location for a file in the terabyte range would be a mount separate from the container.
After several days of attempting to google this information, it seemed more logical to ask the community. Here's hoping a picture is worth 1000 words.
How can a SQL Server container mount an exterior SQL Server source (.mdf, .ldf, .ndf files), given that these sources are on Fortress (see screenshot) and the Docker container is elsewhere, say somewhere in one of the clouds? Similarly, Fortress could also be a cloud location.
Example:
SQL CONTAINER 192.169.20.101
SQL Database Files 192.168.10.101
Currently, as is, the .mdf and .ldf files are located in the container. They should instead live in another location that is NOT in the container. It would also be great to know how to move that backup file out of /var/opt/mssql/data/xxxx.bak to a location on my Windows machine.
the sensible location for a file in the terabyte range would be to mount it separately from the container
Yes. Also, when you update SQL Server you replace the container.
This updates the SQL Server image for any new containers you create, but it does not update SQL Server in any running containers. To do this, you must create a new container with the latest SQL Server container image and migrate your data to that new container.
Upgrade SQL Server in containers
So read about Docker Volumes, and how to use them with SQL Server.
Open a copy of Visual Studio Code and open the terminal
How to access /var/lib/docker in windows 10 docker desktop? - the link explains how to get to the Linux bash shell from within VSCode
docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -i sh
cd .. until you reach the root using VSCode and do a find command
find / -name "*.mdf"
This lists the file names; in my case the storage location was /var/lib/docker/overlay2/merged/var/opt/mssql/data.
Add a storage location on your Windows machine using the docker-compose.yml file
version: "3.7"
services:
docs:
build:
context: .
dockerfile: Dockerfile
target: dev
ports:
- 8000:8000
volumes:
- ./:/app
- shared-drive-1:/your_directory_within_container
volumes:
shared-drive-1:
driver_opts:
type: cifs
o: "username=574677,password=P#sw0rd"
device: "//192.168.3.126/..."
Copy the source files to the volume in the shared drive (found here at /var/lib/docker/overlay2/volumes/); I needed to go to VSCode again for root access.
Open SSMS against the SQL instance in Docker and change the file locations (you'll detach the databases and then re-attach them from the volume where the files were moved): https://mssqlfun.com/2015/05/18/how-to-move-msdb-model-sql-server-system-databases/
Using VSCode again, go to the root and give the mssql login permission on the data folder under /var/opt/docker/volumes/Fortress (not sure how to do this yet; I am working on it and will update here later if it can be done, otherwise I will remove my answer).
Using SSMS again, with the new permissions, attach the .mdf/.ldf files to the SQL Server running in the Docker container.
Finally, there is a great link here explaining how to pass files back and forth between a container and the Windows machine hosting the container.
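Applied to the SQL Server case from the question, a hedged sketch of the same idea might look like the following (image tag, share address, folder name and passwords are placeholders, and you should test carefully whether a CIFS-backed volume is fast and reliable enough for data files before trusting it in production):

version: "3.7"
services:
  mssql:
    image: mcr.microsoft.com/mssql/server:2019-latest   # example tag, pick the version you actually run
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "Str0ngP@ssw0rd"                     # placeholder, use a real secret
    ports:
      - 1433:1433
    volumes:
      - sqldata:/var/opt/mssql/data                     # database files live on the remote share

volumes:
  sqldata:
    driver_opts:
      type: cifs
      o: "username=574677,password=P#sw0rd"             # placeholder credentials for the share
      device: "//192.168.10.101/sqlfiles"               # hypothetical share on the "Fortress" machine

For getting a .bak file out of the container onto the Windows host (the last point above), docker cp <container>:/var/opt/mssql/data/xxxx.bak . is the usual tool.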

Search in all project files on remote host in PhpStorm

I have lots of files in a project on a remote host and I want to find out from which file another PHP file is called. Is it possible to use Ctrl+Shift+F search on a remote host project?
Is it possible to use Ctrl+Shift+F search on a remote host project?
Currently it's not possible. (2022-06-09: now possible with remote development using JetBrains Gateway, see at the end)
In order to execute a search in file content in a locally run IDE, such a file must be read first. For that the IDE must download it... which can be quite a time- and connection-consuming task for (S)FTP connections (it depends on how far away the server is, how fast your connection is, bandwidth limits, etc.).
Even if the IDE could do it transparently for search, like it does with the Remote Edit functionality (where it downloads a remote file but, instead of placing it in the actual project, stores it in a temp location), it still needs to download it.
If you execute one search (one term) and then need to do another search (a slightly modified term or a completely different search string), the IDE would need to re-download those files again (a waste of time and connection).
Therefore it makes much more sense to download your project (all of it, or the desired files only) locally and then execute such searches on the local files.
If it has to be a purely remote search (where nothing gets downloaded locally)... then you just establish an SSH/RDP/etc. connection to that remote host (BTW: PhpStorm has built-in SSH Console functionality) and execute such a search directly on the remote server with OS-native tools (find/grep and the like) or some remote software (e.g. mc or notepad++).
P.S. (on related note)
Some of the disadvantages when doing Remote Edit: https://stackoverflow.com/a/36850634/783119
EDIT 2022-06-09:
BTW, JetBrains now has JetBrains Gateway for remote development, where you run the IDE core on a remote server and connect to it via SSH using a dedicated local app or a plugin for your IDE (PhpStorm comes bundled with such a plugin since the 2021.3 version).
To check more on JetBrains Gateway:
https://www.jetbrains.com/remote-development/gateway/
https://blog.jetbrains.com/blog/2021/11/29/introducing-remote-development-for-jetbrains-ides/

CI/TFS and database unit testing : app.config based on running environment

We have been trying to set up a CI process for a SQL Server database, and have successfully imported the remote (DEV) DB into our local database as a VS project. So we are able to make changes to the local DB, run unit tests and then commit/push changes to the remote repo. The CI server (TFS) triggers the process on the remote DEV server, making the build and running the unit tests.
Everything is nearly working, apart from the connection string that needs to be set before running the tests. We are not able to differentiate the app.config file based on the running environment. It is fixed and (it seems) can contain only one reference (local or remote), therefore the connection to the database can only work either locally or remotely.
Is there any possible way to adjust the connection string in the app.config on the fly, connecting it to the local DB (on the user's laptop) while working locally and using the remote DEV database when the CI process is triggered by TFS?
Is it good practice to use two different files (i.e. local.app.config and remote.app.config)? If yes, how?
Or would you rather change its content based on the running environment? Or maybe there is an easier way in VS which I don't really know about!
We really appreciate any advice/suggestion.
Thanks!

weblogic managed server autostart

Friends
I have configured a WebLogic cluster with 2 managed servers and set CrashRecoveryEnabled to 'true' in nodemanager.properties so that in case of a server crash the managed servers can start automatically. The Node Manager and admin server are set up as Windows services so that they can start automatically on server reboot. I have 2 questions:
1. How can I make sure that the managed servers will start automatically after a server reboot? (I know adding the managed servers as Windows services is one option.)
2. In nodemanager.properties, do I need to set StartScriptEnabled to true in production environments?
thanks
Setting up a service to have the managed servers start on system reboot is the preferred approach.
I always set StartScriptEnabled=true in production environments. This just uses the start scripts to start up the managed servers.
Provided CrashRecoveryEnabled is set to true and you have started each of your managed servers, they will start again automatically.
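For reference, these end up in nodemanager.properties as plain key=value entries, roughly like this (an excerpt only, with property names as documented by Oracle; your file will contain many other settings):

# nodemanager.properties (excerpt)
CrashRecoveryEnabled=true
StartScriptEnabled=true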
You can use wlst to check if they are running (or start them) through some sort of scheduled task if you wish.
EDIT: From the Oracle Documentation 4.2.4 Configuring Node Manager to Start Managed Servers
If a Managed Server contains other Oracle Fusion Middleware products, such as Oracle SOA Suite, Oracle WebCenter Portal, or Oracle JRF, the Managed Servers environment must be configured to set the correct classpath and parameters. This environment information is provided through the start scripts, such as startWebLogic and setDomainEnv, which are located in the domain directory.
If the Managed Servers are started by Node Manager (as is the case when the servers are started by the Oracle WebLogic Server Administration Console or Fusion Middleware Control), Node Manager must be instructed to use these start scripts so that the server environments are correctly configured. Specifically, Node Manager must be started with the property StartScriptEnabled=true.
There are several ways to ensure that Node Manager starts with this property enabled. As a convenience, Oracle Fusion Middleware provides the following script, which adds the property StartScriptEnabled=true to the nodemanager.properties file:
(UNIX) ORACLE_COMMON_HOME/common/bin/setNMProps.sh.
(Windows) ORACLE_COMMON_HOME\common\bin\setNMProps.cmd
For example, on Linux, execute the setNMProps script and start Node Manager:
ORACLE_COMMON_HOME/common/bin/setNMProps.sh
MW_HOME/wlserver_n/server/bin/startNodeManager.sh
When you start Node Manager, it reads the nodemanager.properties file with the StartScriptEnabled=true property, and uses the start scripts when it subsequently starts Managed Servers. Note that you need to run the setNMProps script only once.
