I need to create some different projects in Kiwi TCMS and set different roles for Alice, Bob, and Vasya:
Alice is a Tester in Project A and Project B but has no access to Project C.
Bob is a Tester in Project A and has no access to Project B and Project C.
Vasya is a PM in Project A, Project B, and Project C. He can set permissions for Alice and Bob and can assign any role to any tester in any project (but only A, B, or C).
How can I do this?
If that is impossible, how can I start 3 or more instances of Kiwi TCMS, each with only one project in it? And how would I update those instances?
How can I do it?
Either run three separate instances of the application via Docker (see https://kiwitcms.readthedocs.io/en/latest/installing_docker.html) or use a multi-tenant installation (see https://github.com/kiwitcms/tenants).
How can I start 3 or more instances of Kiwi TCMS but with only one project in it?
You can start as many instances as you want because they are just separate containers. You can use the same image plus one database server and create multiple instances, or create multiple databases on the same DB server plus multiple web servers; there are multiple possibilities and the configuration is up to you. It all happens in docker-compose.yml, as sketched below.
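For illustration, here is a minimal docker-compose.yml sketch running two fully separate instances from the same image, each with its own database. This assumes the standard kiwitcms/kiwi image and its documented KIWI_DB_* settings; the service names, ports, and passwords are placeholders.

```yaml
version: '2'

services:
  db_a:
    image: mariadb:latest
    environment:
      MYSQL_ROOT_PASSWORD: change-me
      MYSQL_DATABASE: kiwi_a
      MYSQL_USER: kiwi
      MYSQL_PASSWORD: change-me

  web_a:                          # instance dedicated to Project A
    image: kiwitcms/kiwi:latest
    ports:
      - "8443:8443"
    environment:
      KIWI_DB_HOST: db_a
      KIWI_DB_NAME: kiwi_a
      KIWI_DB_USER: kiwi
      KIWI_DB_PASSWORD: change-me

  db_b:
    image: mariadb:latest
    environment:
      MYSQL_ROOT_PASSWORD: change-me
      MYSQL_DATABASE: kiwi_b
      MYSQL_USER: kiwi
      MYSQL_PASSWORD: change-me

  web_b:                          # instance dedicated to Project B
    image: kiwitcms/kiwi:latest
    ports:
      - "9443:8443"               # different host port per instance
    environment:
      KIWI_DB_HOST: db_b
      KIWI_DB_NAME: kiwi_b
      KIWI_DB_USER: kiwi
      KIWI_DB_PASSWORD: change-me
```

Upgrading then means pulling a newer kiwitcms/kiwi image and recreating each web container against its existing database.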
If you are looking for a single instance which provides isolation between multiple data sets, then check out kiwitcms-tenants: https://github.com/kiwitcms/tenants. A tenant can be a single project (with multiple products), a single team, or a mixture of people from multiple teams. With a multi-tenant instance, accounts are kept on the public schema and access can be granted in a many-to-many fashion.
kiwitcms-tenants is an add-on which you can install and configure in your own Docker images. Alternatively, you may opt for the Kiwi TCMS Enterprise image, which has this add-on already included.
Related
I am trying to understand the best way to migrate code when working with Snowflake. There are two scenarios: one where we have only one Snowflake account which houses all environments (dev, test, prod), and the other with two accounts (non-prod, prod). With the second option, I was planning to create a separate script that sets up the correct database name based on the environment (dev_ent_dw, prod_ent_dw), and then I will refer to these as variables when creating objects. Example:
set env = 'dev';
set db = $env || '_ent_dw.';
Right now we're running everything manually, so the DevOps team will run these upfront before running the DDL scripts. We may do something similar in the former scenario, but I am wondering if folks can share best practices for dealing with this, as I am sure it is a common topic at large enterprises.
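For what it's worth, here is a minimal sketch of how such session variables can drive the DDL, using IDENTIFIER() to resolve an object name from a variable; the database and schema names below are illustrative:

```sql
-- Run once per session, before the DDL scripts.
set env = 'dev';
set db  = $env || '_ent_dw';                -- resolves to dev_ent_dw

-- IDENTIFIER() lets a session variable stand in for an object name.
create database if not exists identifier($db);

set schema_name = $db || '.reporting';
create schema if not exists identifier($schema_name);
```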
We have different accounts for each environment (dev, qa, prod). We use Azure DevOps for change management within our team, including the Git repos and Azure Pipelines for deploying scripts via schemachange.
We do NOT append an environment to objects, as that is handled by the account.
Developers write migration scripts and check them into source control. We then create version folders, move/rename the migration scripts for deployment, and run a pipeline to execute the changes. The only thing we need to change is the URL to deploy against, and that is handled within the pipeline itself. The nice thing here is that we do not need to tweak anything when moving between branches in source control.
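For context on those version folders: schemachange picks up versioned scripts purely by filename convention, so a checked-in migration might look like the following (the version number, schema, and table are made up):

```sql
-- File: V1_2_0__create_customer_dim.sql
-- schemachange applies versioned scripts in order of the version embedded
-- in the filename; no environment prefix is needed in the SQL because the
-- target account/URL is supplied by the pipeline.
create table if not exists ent_dw.core.customer_dim (
    customer_id   number        not null,
    customer_name varchar(255),
    loaded_at     timestamp_ntz default current_timestamp()
);
```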
We have not been doing much with creating clones for development work, as we usually only have one developer working on changes to a set of objects at a time. We are exploring ways to improve our process, but what we have works fairly well for our current needs.
We use different service accounts for the different deployment environments, so Dev has Account_A and Prod has Account_B, and any test app using Account_A will not have access to Prod. Or, as another example, Account_A can have read/write permissions in Dev, but only read permissions in Prod.
Up until now there has been no source control on the database definitions, just manual scripts everywhere, and I'd like to create an SSDT solution in Azure DevOps for this. I understand how you can set up releases to handle different database names across environments (Db_Dev vs Db_Prod, for example), but I'm not able to find anything about handling different users and permissions across environments.
Is this possible in SSDT? As far as I can tell, I have 2 options, but I'm hoping there's a better way:
Handle users and permissions outside of source control
Handle them somehow in a post-deployment script.
Caveat: I'm only talking about Windows Authentication users & groups. Passwords will obviously not be going into source control.
I wrote about this a long time ago here: https://schottsql.com/2013/05/14/ssdt-setting-different-permissions-per-environment/
You really are dealing with environment variables and a bunch of post-deploy scripts in order to do this. Your better option is to assign permissions to database roles, so those are consistent everywhere, and then assign your users to those roles in each environment as appropriate, outside of SSDT. In the long run that's a lot less painful than trying to create and maintain logins and users in a series of post-deploy scripts.
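To make that concrete, here is a minimal sketch of the role-based approach: the role and its grants live in the SSDT project so they are identical everywhere, while the membership statements are run once per environment, outside SSDT. The role, domain, and group names are made up.

```sql
-- In the SSDT project: identical across all environments.
CREATE ROLE [app_readwrite];
GO
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::[dbo] TO [app_readwrite];
GO

-- Run manually per environment, outside SSDT: map that environment's
-- Windows group into the role.
CREATE USER [CONTOSO\Dev_AppUsers] FOR LOGIN [CONTOSO\Dev_AppUsers];
ALTER ROLE [app_readwrite] ADD MEMBER [CONTOSO\Dev_AppUsers];
```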
On the Azure DevOps side, an environment is a collection of resources, such as Kubernetes clusters and virtual machines, that can be targeted by deployments from a pipeline. Typical examples of environment names are Dev, Test, QA, Staging, and Production. You can secure environments by specifying which users and pipelines are allowed to target an environment.
You can control who can create, view, use, and manage the environments with user permissions. There are four roles: Creator (scope: all environments), Reader, User, and Administrator. In the specific environment's user permissions panel, you can set the permissions that are inherited, and you can override the roles for each environment.
For more details, check the following link:
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops#security
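As a small illustration, a YAML deployment job targets an environment by name, and the roles above then control who and what may deploy to it; the job and environment names here are made up:

```yaml
# Sketch of an Azure Pipelines deployment job targeting an environment.
jobs:
- deployment: DeployWebApp
  displayName: Deploy to QA
  environment: QA        # access is gated by the QA environment's user permissions
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo "deploying to QA"
```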
Managing users and permissions in SSDT is a bit of a pain in the 🍑 and usually they are not maintained there. However, you still have options:
Create a post-deployment script, as you mentioned (bad choice)
Create separate projects for each environment
What I mean by separate projects is: you create a separate project for each environment, then add a reference to your main project (where all of your objects exist) and set the reference type to "Same database". Then in that project you add all the needed users/permissions/modifications; see the sketch below. These projects will have their own publish profiles as well. One issue you might face is that each environment project must also reference ALL the databases/dacpacs that your main project references.
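For example, the Dev-specific project might contain nothing but security objects like the following, while every table and procedure stays in the referenced main project; the domain and group names are illustrative:

```sql
-- Contents of the Dev environment project, which references the main
-- project with reference type "Same database".
CREATE USER [CONTOSO\Dev_Testers] FOR LOGIN [CONTOSO\Dev_Testers];
GO
ALTER ROLE [db_datareader] ADD MEMBER [CONTOSO\Dev_Testers];
GO
GRANT EXECUTE ON SCHEMA::[dbo] TO [CONTOSO\Dev_Testers];
GO
```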
I am running YugabyteDB using yb-ctl create, with --rf 3 to create a 3-node cluster. How can I make it listen on an external IP address instead of localhost, and run on three different IPs?
yb-ctl only works for local deployments, for quick debugging or testing. To bring up YugabyteDB on three separate hosts, you can follow the instructions at https://docs.yugabyte.com/latest/deploy/manual-deployment/. The commands there are for 4 different hosts, but it should be very similar for 3 hosts.
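As a rough sketch of what that looks like on each host (the IPs, data directory, and flag values below are placeholders; the linked docs have the authoritative flag list):

```sh
# On each master host, bind to that host's IP instead of localhost.
./bin/yb-master \
  --master_addresses 10.0.0.1:7100,10.0.0.2:7100,10.0.0.3:7100 \
  --rpc_bind_addresses 10.0.0.1:7100 \
  --fs_data_dirs "/mnt/d0" \
  --replication_factor 3 >& /mnt/d0/yb-master.out &

# On each tablet server host, point at the masters and bind the local IP.
./bin/yb-tserver \
  --tserver_master_addrs 10.0.0.1:7100,10.0.0.2:7100,10.0.0.3:7100 \
  --rpc_bind_addresses 10.0.0.1:9100 \
  --fs_data_dirs "/mnt/d0" >& /mnt/d0/yb-tserver.out &
```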
Indeed, yb-ctl is for local clusters on a single node and is not meant to be used for multi-node deployments. In addition to the manual install option, there are a number of orchestrated multi-node deployment options available:
Terraform on any cloud
CloudFormation in AWS, Deployment Manager in GCP, and ARM templates in Azure
If Kubernetes is of interest, that's another easy way to deploy, using Operators or Helm charts.
Hi guys, I'm working on an existing Episerver project (my first one).
One of the issues we are having is that we have three environments for our Episerver website: Developer / Staging / Live.
All have separate DBs. At the moment, lots of media items have been added to our live environment via the CMS, and we want to sync these to our staging environment.
However, when we use the export data feature from the live admin section and try to restore it to our staging environment, we end up with missing media, duplicate folders, etc.
Is there a tool/plugin available to manage content/media across multiple environments? Umbraco has something called "Courier" (Umbraco being another CMS I have used in the past); I'm looking for the Episerver equivalent.
Or is the best way to do this to export the live SQL database and overwrite my staging one? We have different user permissions set in these environments; how can we manage that?
How is this generally done in the world of Episerver?
Unfortunately, the most common way to handle this is, as you say, to do it manually: restore the DB, copy the fileshare, and set up the access rights on the stage environment after the restore.
Luc made a nice provider for keeping your local environment in sync. https://devblog.gosso.se/2017/09/downloadifmissingfileblob-provider-version-1-6-for-episerver/
I have tried many modules for deploying changes from development to staging manually, but I haven't found a good way to deploy the changes (either code or database) to the staging server automatically.
Is there anything for Drupal 7 with which I can push my changes from development to staging without any manual work? I want all database-related configuration, code, etc. to be pushed automatically to the live server.
Thanks
There are many ways you can automate your deployment; the one we follow is as below:
Using third-party services like Platform.sh or Pantheon for deployment.
Using hook_update_N along with the Features and Strongarm modules for configuration management (see the sketch after this list).
Using shell scripts to run custom commands after deployment.
Using Jenkins for deployment automation.
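As a minimal sketch of the hook_update_N + Features pattern from the second bullet (the module and component names are hypothetical), an update hook run by drush updb on the target site can revert an exported feature so the configuration in code overwrites the database:

```php
/**
 * Implements hook_update_N() for a (hypothetical) mysite module.
 *
 * Enables and reverts an exported feature so the configuration stored
 * in code overwrites whatever is currently in the staging database.
 */
function mysite_update_7101() {
  // Make sure Strongarm and the exported feature module are enabled.
  module_enable(array('strongarm', 'mysite_settings_feature'));

  // Revert the listed components of the feature back to the code state.
  features_revert(array(
    'mysite_settings_feature' => array('variable', 'field_base', 'field_instance'),
  ));
}
```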
Some other tools/services can be found here: https://www.kelltontech.com/monkey-talk