Vespa application config best practices

What is the best way to dynamically provide configuration to a Vespa application?
It seems that the only method that is talked about is baking configuration values into the application package, but is there any way to provide configuration values outside of that? I.e. are there CLI tools to update individual configuration values at runtime?
Are there any recommendations or best practices for managing configuration across different environments (i.e. production vs development)? At Oath/VMG, is configuration checked into source control or managed outside of that?

Typically all configuration changes are made by deploying an updated application package. As you suggest, this is usually done by a CI/CD setup which builds and deploys the application package from a git repository whenever that changes.
This way it is easy to ensure changes have been reviewed (before merge), track all changes that have been made and roll them back if necessary. It is also easy to verify that the same changes which have been deployed and tested (preferably by automated tests) in a development / test environment are the ones that are deployed to production - because the same application package is deployed through each of those environments in order.
It is, however, also possible to update files in a deployed application package and create a new session from this, which may be useful if your application package has some huge resources. See https://docs.vespa.ai/documentation/cloudconfig/deploy-rest-api-v2.html#use-case-modify
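As a rough illustration of that modify use case, the flow against the deploy REST API is: create a new session from the active application, overwrite the changed file, then prepare and activate the session. The sketch below assumes a config server on localhost:19071 and the default tenant, and uses a placeholder file path; the exact URLs and response fields should be verified against the linked documentation.

    import requests

    CONFIGSERVER = "http://localhost:19071"  # default config server port
    TENANT = CONFIGSERVER + "/application/v2/tenant/default"

    # 1. Create a new session based on the currently active application package.
    active_app = TENANT + "/application/default/environment/prod/region/default/instance/default"
    session = requests.post(TENANT + "/session", params={"from": active_app}).json()

    # 2. Overwrite a single file in the new session; the response's "content" URL
    #    points at the session content. "files/my-config.xml" is only a placeholder.
    with open("my-config.xml", "rb") as f:
        requests.put(session["content"] + "files/my-config.xml", data=f)

    # 3. Prepare the session, then activate it so the change takes effect.
    prepared = requests.put(session["prepared"]).json()
    requests.put(prepared["activate"])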

Related

Best way to do continuous deployment for Drupal 7 websites

I have tried many modules to deploy changes from development to staging manually, but I haven't found a good way to deploy changes (code or database) to the staging server automatically.
Is there anything for Drupal 7 with which I can push my changes from development to staging without any manual work? I want all database-related configuration, code, etc. to be pushed to the live server automatically.
Thanks
There are many ways you can automate your deployment; the approach we follow is as below:
Using third-party services like Platform.sh or Pantheon for deployment.
Using hook_update_N along with the Features and Strongarm modules for configuration management.
Using shell scripts to run custom commands after deployment (see the sketch after this list).
Using Jenkins for deployment automation.
Some other tools / services can be found here: https://www.kelltontech.com/monkey-talk
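For the "custom commands after deployment" step, a minimal sketch might look like the following, assuming drush is available on the target server and a @staging site alias is defined (the alias name is just an illustration):

    import subprocess

    def drush(*args):
        """Run a drush command against the staging site and stop on any error."""
        subprocess.run(["drush", "@staging"] + list(args) + ["-y"], check=True)

    drush("updb")       # run pending hook_update_N() implementations
    drush("fra")        # revert all Features so exported configuration wins
    drush("cc", "all")  # clear caches so new code and config are picked up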

What is the best strategy to externalise database properties in a multi-module maven project?

I have a multi-module Maven-based project which has a number of Spring Boot applications, a couple of which (let's call them A and B) connect to a database. (I have a separate module with the database-related code, on which both applications depend.) I am also using Flyway to maintain the database versioning and structure.
What is the best approach to maintaining the database properties? At the moment I have three places where I am repeating the same thing: the application.yml of module A and the application.yml of module B, since both are separate Spring Boot applications, and then the Flyway plugin configuration, which again needs the properties in the pom.xml to be able to perform its tasks, like clean, repair and migrate.
What is the proper approach to centralise and externalise this information, like the database URL, username and password? I am also facing the issue that each time I pull new code onto the test system I have to update the same data again because it gets overwritten, and the database configuration on the test system is different from my local development environment.
What is the best strategy to manage this?
Externalize your configuration into a configuration module. This, of course, depends on how flexible Flyway / Spring Boot are about using classpath-based properties.
Look at something like Archaius and make your configuration truly externalized, centralized and dynamic by having it backed by, say, an external datastore. More work is involved here, but it gives you additional benefits, like being able to change config in one place and have it dynamically picked up by running applications everywhere.
It's not an easy problem to solve and definitely involves some work to make your tools cooperate by hooking into their lifecycle.
For Flyway you could use the properties-maven-plugin. That way you can externalize the credentials to a properties file. An example is described here.
For the Spring Boot applications I would recommend Spring Cloud Config. With Spring Cloud Config you can externalize your config to a git repository, which can be discovered via a Eureka service, e.g. like here. I would also consider restructuring the modules into independent microservices. The JHipster project provides a good infrastructure for a microservice-based architecture.

Deploying relevant magento backend changes

I'm thinking about a good deployment strategy for Magento. I have already managed to deploy code with git from my local installation to my stage server. (The jump to live is not a problem then.)
Now I'm thinking about how to deploy backend changes like the following:
I'm adding a new attribute set and I want it to be available on my stage and later the live server. Since these settings are in the database, I could just do a mysqldump and restore this dump on my stage/live systems.
But I can't do this, since the database has more data like orders, articles (with current stock availability) and a lot more stuff which I don't want to deploy from my testing system.
How are others handling this deployment "problem"?
After some testing, I chose the extension Mageploy, which is easy to install via modman (I prefer modgit, which relies on the same data for installation) and already captures a lot of important backend settings.
If you need more, it's possible to extend it to cover more backend settings yourself (and then contribute back to the git project; pull requests are considered quickly).

Is PAA a good candidate for automating wcm library deployment and setup in portal?

I have created a Web Content Management library for use in WebSphere Portal. At the moment I'm using import-wcm-data to import the library, then I need to add some additional properties to 2-3 files on the server under Resource Environment Providers and then restart particular services so those changes are detected.
Can anyone explain the benefits of using a PAA over writing a simple bash (or similar) script to automate this process?
I don't understand whether I get any advantages from using a PAA, or whether a PAA is even capable of updating properties files and restarting services.
I have been working intensively with PAA files and I must say that it is a very stable way of deploying an app that requires multiple deployment steps and components.
It does require some setup up front, but it is well worth it in a multi-server environment.
You can do all the tasks that you can do in an Ant file, as well as use the wsadmin scripting interface. I only update resource environment settings and the like in WAS, and do not touch any properties files, for exactly that reason: all settings are stored in WAS.
In my experience, a PAA is not a good method if you're merely importing a content library.
I don't think I understand why you are doing the import manually and not syndicating, but even if there's a good reason not to syndicate, the PAA process was too involved and required too many precursor actions (deleting libraries, removing the PAA, deploying the PAA and then activating the portlets) to be a viable option for something as simple as importing a WCM library.
Since activating the portlets I was importing with the PAA was an extra step, I don't believe you can restart applications either.

Twist to the standard “SQL database change workflow best practices”

Background
ASP.NET/C# Web App
MS SQL
Environments
Production
UAT
Test
Dev
We create patch scripts (XML and SQL) that are source controlled in Mercurial. We have a command line utility that installs patches to the DB (utility.exe install -patch) from a Release folder that the build packages. Patches have metadata that determines when a patch should run, and we log installed patches in a table in the target DB. All of this was covered in the 3-year-old question:
SQL Server database change workflow best practices
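For illustration, the patch-runner mechanism described above (apply each pending patch once and record it in a table in the target DB) might be sketched like this; utility.exe is the real tool, and the table, column and folder names below are only placeholders:

    import glob, os
    import pyodbc  # assumes a SQL Server ODBC driver is installed

    def apply_pending_patches(conn_str, release_dir):
        conn = pyodbc.connect(conn_str)
        cur = conn.cursor()
        # Patches already installed in this environment (dbo.PatchLog is a placeholder name).
        applied = {row.PatchName for row in cur.execute("SELECT PatchName FROM dbo.PatchLog")}
        for path in sorted(glob.glob(os.path.join(release_dir, "*.sql"))):
            name = os.path.basename(path)
            if name in applied:
                continue  # run each patch only once per database
            with open(path) as f:
                cur.execute(f.read())  # assumes each patch is a single batch (no GO separators)
            cur.execute("INSERT INTO dbo.PatchLog (PatchName, AppliedOn) VALUES (?, GETDATE())", name)
            conn.commit()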
Our Problem/Twist
I think this works well for tables, views, functions and stored procedures. We struggle with application configuration data. Here are some touch points on application configurations.
New client. BA performs system study and fit analysis. Out of this comes a configuration Word document of what application configurations need to be set up. Note some of these may also come in phases over time. We need to get these new configurations into the system for the developer and client UAT.
Developer works on feature request or bug fix. A new configuration change comes out of that change. The configuration needs to make it into the system for testing and promotion to UAT and up.
QA finds that the developer missed an associated configuration change. That configuration needs to make it into the system for promotion to UAT and up.
Build goes to UAT. Client performs acceptance testing but finds they really want to change another unassociated configuration and have it promoted with the changes. In other words, they found they want to change a business process via a configuration. The configuration needs to make it into the system for promotion to PRD.
As the client operates in PRD they may tweak application settings. These configurations need to make it into the system for future development and testing.
The general issue is making sure we account for all the configurations and don't accidentally miss any during promotions, which causes grief.
Our Attempts At A Process
a. We have had a member of the QA team write patches (XML and SQL) and check those in. This requires a build to make sure those get into the package. This approach really only took care of item 1 above, and we fell apart on the other items. The nice thing is that, for the items that made it into the patches, it was just an install with the utility.
b. A developer threw together a Config page in the application. All the configurations could be uploaded and downloaded via an XML document, but it requires the app to be running. For item 1, a member of the QA team would manually set up configurations in the application and then download the Config.xml file. This XML file would be used to upload configurations in other environments. We would use a text diff tool to look at differences between config.xml files from different environments. This addressed item 1 and the other items, but had problems: not all configurations made it into the XML document (something that just needs to be fixed by the developer); some of the configurations didn't have a UI in the application, so you still had to go to the database manually for those; comparing the XML documents with a text diff was difficult at times (mostly due to sorting, but I'm sure there are other issues; see the sketch after this list); the XML was not very human readable; and finally the XML document did not allow for deleting existing incorrect or outdated configs.
c. Recently we went with option b, but over time, for a new client, we just started tracking configs and promoting them by hand (UI and DB) through the promotions. Needless to say, lots of human errors.
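As a small sketch of taking the sorting problem out of the Config.xml comparison from option b: normalize both exports into key/value maps and diff those instead of the raw text. The <config key="..." value="..."/> layout below is made up; adapt it to the real export format.

    import sys
    import xml.etree.ElementTree as ET

    def load_configs(path):
        """Return {key: value} for every config entry in an exported Config.xml."""
        root = ET.parse(path).getroot()
        return {e.get("key"): e.get("value") for e in root.iter("config")}

    left, right = (load_configs(p) for p in sys.argv[1:3])
    for key in sorted(left.keys() | right.keys()):
        if left.get(key) != right.get(key):
            print(key, ":", left.get(key, "<missing>"), "->", right.get(key, "<missing>"))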
So we have been looking at solutions. Eventually it would be great to get as much automation in as possible. I'm looking at going with the scripting approach and just focusing on process and documentation, and at using Redgate data compare in addition to what we had been doing with comparing config.xml files. With Redgate we have to create views, though, and there is no way to create update scripts from that approach except to write them manually. It does at least allow a comparison without the app running. I'm also looking at pulling the configs out of our normal patches and making them a system independent of the build (utility.exe -patch -config). When I say focus on process, I mean things like: if we compare and find a config change, whether reported by the client or not, we still script it; it just means we have to have a process in place to quickly revalidate a config install before promoting to the next level. As for documentation, I'm looking at making the original QA document a living document instead of just an upfront document. The goal is to enhance clarity and reduce missed configurations during promotion. Unfortunately it doesn't improve speed of delivery.
Does anyone have any recommendations or best practices to pass along? Thanks.
Can I ask exactly what you mean by application configuration? I'm interpreting that as both:
Config files in the web application
Static reference data inside the database
Full disclosure: I work for Red Gate. You might be interested in taking a look at Deployment Manager; it's a deployment tool that deploys applications, databases and configuration. It's free for up to 5 projects and target servers.
The approach it uses is to package application code and the database state into packages. These packages can be deployed into dev, test, staging and production environments. The same package is deployed to each environment.
Any application configuration that needs to change between environments is handled in one of the ways below:
Variable substitution in web.config. The tool allows you to specify override values for variables in these files, and set these per environment/server (a generic sketch of the idea follows this list).
Substituting the web.config file per environment.
Custom powershell scripts that are run pre/post deploy. You could use these to execute custom SQL based on the environment or server.
Static data within the database, using SQL Source Control's static data feature. I've written a blog post about how to supply different sets of static data to different environments/customers.
This allows you to source control the application configurations and deploy them to different environments.
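As a generic illustration of the per-environment variable substitution idea (not Deployment Manager's actual mechanism), the transform step amounts to filling placeholders in a web.config template from a per-environment table. The server and database names below are made up:

    import re

    OVERRIDES = {
        "test":       {"DbServer": "test-sql01", "DbName": "AppDb_Test"},
        "production": {"DbServer": "prod-sql01", "DbName": "AppDb"},
    }

    def render(template_path, environment):
        """Replace $(Name) placeholders in the template with that environment's values."""
        values = OVERRIDES[environment]
        with open(template_path) as f:
            return re.sub(r"\$\((\w+)\)", lambda m: values[m.group(1)], f.read())

    print(render("web.config.template", "test"))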
