Archer GRC Automated Deployments

I am trying to figure out whether automated deployments are possible for the on-premises version of Archer GRC.
Currently it is deployed manually.

The latest versions of Archer (v6.8, v6.9) provide a limited API for package deployments, but last time I checked it did not allow mapping or partial installs (I could be wrong, so double-check).
The API is there, but its functionality is limited to the point that I don't see how package installation can be automated through it. I hope that future Archer versions will extend it to replicate the functionality available with manual package deployment (mapping, partial installs, and other options).
Technically, if you like complex and time-consuming tasks, you can decode/parse the package installation page and then write an application that replays the HTTP requests a browser sends to the Archer server, simulating a manual package installation, as sketched below.
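For illustration only, a minimal Python sketch of that replay approach, assuming a forms-based login and an install form. Every URL, form field, and credential below is a placeholder; you would have to recover the real ones from a browser network trace of a manual install, since none of this is a documented Archer API:

    # Sketch of replaying a manual package install over HTTP.
    # All URLs and form-field names are placeholders captured from a
    # hypothetical network trace -- they are NOT a documented Archer API.
    import requests

    BASE = "https://archer.example.com"  # placeholder server

    session = requests.Session()

    # 1. Log in the way the browser does, so the session cookie gets set.
    session.post(BASE + "/Default.aspx", data={
        "username": "deploy_svc",  # placeholder credentials
        "password": "secret",
    })

    # 2. Upload the package file the way the install page does.
    with open("MyPackage.zip", "rb") as pkg:
        session.post(BASE + "/PackageInstall.aspx", files={"package": pkg})

    # 3. Submit the install form; a real implementation would also scrape
    #    and resend hidden ASP.NET fields such as __VIEWSTATE.
    resp = session.post(BASE + "/PackageInstall.aspx", data={"action": "install"})
    resp.raise_for_status()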
I'm not aware of any company doing something like this as of today.
If you write a product to implement proper Code/Configuration Version Control for RSA Archer, then you may be able to sell it as well :)
Good luck!

Related

Adobe Experience Manager Workbench Check out/in Issue

In short, I have Windows Server 2012 R2, AEM Forms (6.2), SQL Server (2014), and Workbench (6.2) on the same server. At first, after installing and configuring all of them, I could check my applications out and in from Workbench successfully. However, after my software team executed some scripts against the database, we could no longer check in/out from Workbench. The worst part is that when I click check out, Workbench gives no error and writes no log entry, neither in the event log nor in the server application; it simply does nothing and does not perform my transaction. I have seen on forums that some people have the same issue, but nobody has posted a solution.
If anyone knows the solution, please share it with us. What's wrong with my Workbench, and what should I do to fix this issue?
The query that your software team ran turns off security on every single LiveCycle service and makes them run as the system user. This includes the services used by Workbench and is very bad. Some of the services rely on knowing who is logged in to operate correctly. In particular, how can LiveCycle know who has checked in/out a resource if the service always runs as system?
Your best bet is to restore the LiveCycle database, or at least the tb_sc_service_configuration table, to the state it was in before you ran the script.
If you need to remove security on individual services, you should do it through the admin console, but only do it for your own processes. Never do it for system services unless the Adobe documentation says it is OK.
As JeremyP pointed out, modifying the Adobe database directly is a bad idea. The database should be treated as a black box that is only manipulated by Adobe code (either by doing things in the Adobe tools or making calls to Adobe APIs).
You can either make security changes manually through the adminui (as he indicates, which is the most common way of doing it) or programmatically using the Adobe client APIs. See the following links for sample code that uses the APIs:
Removing Security - http://help.adobe.com/en_US/livecycle/10.0/ProgramLC/WS624e3cba99b79e12e69a9941333732bac8-7f35.html
Setting the runAs user - http://help.adobe.com/en_US/livecycle/10.0/ProgramLC/WS624e3cba99b79e12e69a9941333732bac8-7f38.html
My company, 4Point, offers AEM Forms consulting services. We have an in-house Apache Ant library that wraps the code above to automate this (and other) common tasks that are typically required when deploying (and redeploying) AEM Forms solutions. It can be included as part of a consulting engagement.

SCCM Detection Methods - where are they stored?

At the end of last week, our central IT department introduced SCCM and applied it to a bunch of clients in our division. My colleagues and I work as so-called "IT partners," providing first-level support for a few hundred colleagues. Now we're facing some problems with our new SCCM system (installed packages do not work, etc.), and we'd like to "reset" applications so that the SCCM agent will reinstall them. I've read something about detection methods, but unfortunately I don't really know how they work, nor do I know where those methods are stored. I want to analyze those methods so I know which file to modify or delete so that the agent will reinstall the application.
By the way, how much time does SCCM take between "assigning" a package and applying it to the client?
Assuming you only have the client and no access to the SCCM console, the detection methods can be found using WMI. They are stored in the root\ccm\CIModels namespace, in the class Local_Detect_Synclet.
The format is XML in one property; it is designed so that all kinds of detection methods can be represented in the same style, so it's not very readable, but you should be able to get a basic understanding of the detection method used.
Keep in mind this is only true if the software was deployed in the "new" application format (introduced in SCCM 2012) and not in the "old" package/program format.
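A minimal sketch of reading that class from the client, using the third-party Python "wmi" package (pip install wmi, Windows only). Since I don't want to guess the exact property names, it just dumps every property of every instance; one of them carries the detection logic as the XML blob described above:

    # Dump the detection-method synclets stored on an SCCM client.
    # Requires the third-party "wmi" package; run on the client itself.
    import wmi

    # Connect to the CCM namespace that holds the compiled detection rules.
    conn = wmi.WMI(namespace=r"root\ccm\CIModels")

    for synclet in conn.Local_Detect_Synclet():
        # Print every property; one contains the detection XML.
        for prop in synclet.properties:
            print(prop, "=", getattr(synclet, prop))
        print("-" * 60)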
If you want more detail: I once tried to automate the process of triggering a reinstall for any given application but ultimately failed due to problems with the cache/distribution point. I posted all my findings here.
From an application point of view: when you deploy an app, a detection method is set up in SCCM to determine whether or not the application installed successfully. This detection method can be configured in a variety of ways. For example, it could check whether the MSI product code is installed, check the .exe and compare it against a specific version, or check a registry key for existence. To change or modify these detection methods, you need to be an SCCM admin and be able to log in to the console. From there, you would select the specific application or package you want to analyze and click through the properties of the deployment.

Twist to the standard “SQL database change workflow best practices”

Background
ASP.NET/C# Web App
MS SQL
Environments
Production
UAT
Test
Dev
We create patch scripts (XML and SQL) that are source-controlled in Mercurial. We have a command-line utility that installs patches to the DB (utility.exe install -patch) from a Release folder that the build packages. Patches have metadata that determine when a patch should run, and we log installed patches in a table in the target DB. All of this was covered in the three-year-old question:
SQL Server database change workflow best practices
Our Problem/Twist
I think this works well for tables, views, functions, and stored procedures. We struggle with application configuration data. Here are some touch points on application configurations:
1. New client. A BA performs a system study and fit analysis. Out of this comes a configuration Word document describing which application configurations need to be set up. Note that some of these may also come in phases over time. We need to get these new configurations into the system for the developer and client UAT.
2. A developer works on a feature request or bug fix, and a new configuration change comes out of that work. The configuration needs to make it into the system for testing and promotion to UAT and up.
3. QA finds that the developer missed an associated configuration change. That configuration needs to make it into the system for promotion to UAT and up.
4. The build goes to UAT. The client performs acceptance testing but finds they really want to change another, unassociated configuration and have it promoted with the other changes; in other words, they found they want to change a business process via a configuration. The configuration needs to make it into the system for promotion to PRD.
5. As the client operates in PRD, they may tweak application settings. These configurations need to make it into the system for future development and testing.
The general issue is making sure we account for all the configurations and don't accidentally miss any during promotions, which causes grief.
Our Attempts At A Process
a. We had a member of the QA team write patches (XML and SQL) and check those in. This requires a build to make sure they get into the package. This approach really only took care of item 1 above, and we fell apart on the other items. The nice thing is that for the items that did make it into patches, it was just an install with the utility.
b. A developer threw together a Config page in the application. All the configurations could be uploaded and downloaded via an XML document, but it requires the app to be running. For item 1, a member of the QA team would manually set up configurations in the application and then download the Config.xml file. This XML file would then be used to upload the configurations into other environments. We would use a text diff tool to look at differences between config.xml files from different environments. This addressed item 1 and the other items, but it had problems: not all configurations made it into the XML document (that just needs to be fixed by the developer); some configurations had no UI in the application, so for those you still had to go into the database manually; comparing the XML documents with a text diff was difficult at times (mostly due to sorting, though I'm sure there are other issues); the XML was not very human-readable; and the XML document did not allow for deleting existing incorrect or outdated configs.
c. Recently we went with option b, but over time, for a new client, we just started tracking configs manually and promoting them by hand (UI and DB) through the promotions. Needless to say, lots of human errors.
So we have been looking at solutions; eventually it would be great to get as much automation in as possible. I'm looking at going with the scripting approach, focusing on process and documentation, and using Redgate data compare in addition to what we had been doing with comparisons of config.xml. With Redgate we have to create views, though, and there is no way to generate update scripts from that approach except by updating the scripts manually; it does at least allow a comparison without the app running. I'm also looking at pulling the configs out of our normal patches and making them a system independent of the build (utility.exe -patch -config). When I say focus on process, I mean things like: if we compare and find a config change, whether reported by the client or not, we still script it; it just means we need a process in place to quickly revalidate the config install before promoting to the next level. As for documentation, I'm looking at making the original QA document a living document instead of just an upfront one. The goal is to enhance clarity and reduce missed configurations during promotion. Unfortunately, it doesn't improve speed of delivery.
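To illustrate the sorting problem with the text diffs, here is a minimal Python sketch of the kind of canonicalization that makes config.xml diffs stable across environments. The flat <config name="..." value="..."/> layout is an assumed example, not our real schema:

    # Canonicalize a config XML export so text diffs between environments
    # are stable regardless of element order. The flat <config .../> layout
    # is an assumption for illustration.
    import sys
    import xml.etree.ElementTree as ET

    def canonicalize(path):
        root = ET.parse(path).getroot()
        # Sort entries by name so ordering differences vanish from the diff.
        entries = sorted(
            (cfg.get("name"), cfg.get("value")) for cfg in root.iter("config")
        )
        return "\n".join(f"{name} = {value}" for name, value in entries)

    if __name__ == "__main__":
        # Usage: python canon.py uat-config.xml > uat.txt, then diff the .txt files
        print(canonicalize(sys.argv[1]))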
Does anyone have any recommendations or best practices to pass along? Thanks.
Can I ask exactly what you mean by application configuration? I'm interpreting that as both:
Config files in the web application
Static reference data inside the database
Full disclosure: I work for Red Gate. You might be interested in taking a look at Deployment Manager; it's a deployment tool that deploys applications, databases, and configuration. It's free for up to five projects and target servers.
The approach it uses is to package application code and the database state into packages. These packages can be deployed into dev, test, staging and production environments. The same package is deployed to each environment.
Any application configuration that needs to change between environments is handled in one of the ways below:
Variable substitution in web.config. The tool allows you to specify override values for variables in these files, and to set these per environment/server.
Substituting the web.config file per environment.
Custom PowerShell scripts that are run pre/post deploy. You could use these to execute custom SQL based on the environment or server.
Static data within the database, using SQL Source Control's static data feature. I've written a blog post about how to supply different sets of static data to different environments/customers.
This allows you to source control the application configurations and deploy them to different environments.
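As an illustration of the variable-substitution idea (independent of any particular tool), a minimal Python sketch that stamps per-environment values into a templated config file. The ${...} placeholder syntax, file names, and environment values are all assumptions; real tools such as Deployment Manager or web.config transforms have their own conventions:

    # Per-environment variable substitution into a config template.
    # Placeholder syntax and values are assumed for illustration.
    from string import Template

    ENVIRONMENTS = {
        "test": {"db_server": "sql-test01", "log_level": "Debug"},
        "uat": {"db_server": "sql-uat01", "log_level": "Info"},
        "production": {"db_server": "sql-prod01", "log_level": "Warning"},
    }

    def render(template_path, env, out_path):
        with open(template_path) as f:
            template = Template(f.read())
        # substitute() raises KeyError on any unresolved variable, which
        # catches missing configuration at deploy time rather than runtime.
        rendered = template.substitute(ENVIRONMENTS[env])
        with open(out_path, "w") as f:
            f.write(rendered)

    # Example: render("web.config.template", "uat", "web.config")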

How to "explore" group of servers?

I need to check a group of servers (Unix, Linux) to find out what kinds of services and software (including versions) are running there, check this once in a while, and store the results in a database.
The idea is to always have fresh information about the whole environment, since it's constantly changing. Perhaps you can suggest a solution that already exists?
Currently I am thinking about using Nagios or Cacti plus plugins, but I am not sure whether this solution would be optimal.
Nagios is a very powerful monitoring solution (the best, in my view): open source, compatible with both Linux and Windows, reporting and notifications via email/SMS, a nice interface, and many, many plugins. I've already worked with it and was very satisfied.
Check Nico Largo's forum for installation instructions. If you are not familiar with the Linux command line, search for FAN (Fully Automated Nagios), which is a .iso image with Nagios already included.
If you have any trouble during installation or configuration, post your questions there: https://serverfault.com/
Given that you want to poll for information on the system that can change dynamically, I would look at Check_MK.
It originally started as a plugin for Nagios that would poll a server for running services and generate the necessary configs for monitoring anything it discovered. Since then, it has evolved into a complete monitoring solution that provides its own full UI (still based on the Nagios core), so you are safe running it if you are already familiar with Nagios.
See the website: http://mathias-kettner.com/checkmk_monitoring_system.html
You may need to select that you wish to view the "English" perspective of the site on first visit.
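If you end up rolling your own inventory collector alongside (or before adopting) a monitoring stack, here is a minimal Python sketch of the poll-and-store idea from the question, assuming key-based SSH access and Debian-style package tooling. The hostnames, listing command, and schema are placeholders; on Red Hat-based systems you would swap in something like rpm -qa:

    # Inventory installed packages on a list of hosts over SSH and store a
    # timestamped snapshot in SQLite. Hosts and command are placeholders.
    import sqlite3
    import subprocess
    from datetime import datetime, timezone

    HOSTS = ["web01.example.com", "db01.example.com"]  # placeholder hosts
    LIST_PACKAGES = r"dpkg-query -W -f='${Package} ${Version}\n'"  # Debian/Ubuntu

    db = sqlite3.connect("inventory.db")
    db.execute("""CREATE TABLE IF NOT EXISTS software
                  (host TEXT, name TEXT, version TEXT, seen_at TEXT)""")

    now = datetime.now(timezone.utc).isoformat()
    for host in HOSTS:
        # Assumes key-based SSH auth is already set up for these hosts.
        out = subprocess.run(["ssh", host, LIST_PACKAGES],
                             capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            name, _, version = line.partition(" ")
            db.execute("INSERT INTO software VALUES (?, ?, ?, ?)",
                       (host, name, version, now))
    db.commit()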

How to integrate Bugzilla and HP Quality Center?

I'm working on integrating Bugzilla with HP QC. I'm currently doing this with a Perl script that manipulates the database directly using SQL commands. I want to use Bugzilla's web services instead. I have gone through the Bugzilla WebService API documentation, but that wasn't enough to get started. I'm a beginner and this is the first project of my career. How do I go about this?
Check out the Perl script bz_webservice_demo.pl in Bugzilla's contrib directory; it shows how to talk to Bugzilla via XML-RPC.
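If Perl isn't a hard requirement, the same XML-RPC endpoint can also be exercised from Python's standard library. A minimal sketch, where the server URL and credentials are placeholders, and User.login / Bug.get are methods documented in the Bugzilla WebService API:

    # Talk to Bugzilla's XML-RPC endpoint from Python's standard library.
    # Server URL and credentials are placeholders.
    import xmlrpc.client

    proxy = xmlrpc.client.ServerProxy("https://bugzilla.example.com/xmlrpc.cgi")

    # Log in; newer Bugzilla versions return a token to pass on later calls.
    login = proxy.User.login({"login": "you@example.com", "password": "secret"})
    token = login.get("token")

    # Fetch a bug by id and print a couple of its fields.
    result = proxy.Bug.get({"ids": [1234], "token": token})
    for bug in result["bugs"]:
        print(bug["id"], bug["status"], bug["summary"])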
There are a few things you could do:
Export defects from Bugzilla into a spreadsheet and upload it into Quality Center
Use the Open Test Architecture API (OTAClient.dll) to update defects in Quality Center
Use the HP Synchronization Server and build an adapter
Using the HP Synchronizer is probably the only "real" way to do it, though you could potentially build your own sync mechanism using just OTA and a message queue.
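For the roll-your-own route, a minimal Python sketch of driving OTA through COM with the pywin32 package. The server URL, domain, project, and credentials are placeholders, and you should verify the method and property names against the OTA documentation for your QC version:

    # Create a defect in Quality Center via the OTA COM API (OTAClient.dll),
    # using pywin32. All connection details below are placeholders.
    import win32com.client

    td = win32com.client.Dispatch("TDApiOle80.TDConnection")
    td.InitConnectionEx("https://qc.example.com/qcbin")  # placeholder URL
    td.Login("qc_user", "secret")
    td.Connect("MY_DOMAIN", "MY_PROJECT")

    bug_factory = td.BugFactory
    bug = bug_factory.AddItem(None)      # new, unposted defect
    bug.Summary = "Synced from Bugzilla #1234"
    bug.DetectedBy = "qc_user"
    bug.Status = "New"
    bug.Post()                           # commit the defect to QC

    td.Disconnect()
    td.Logout()
    td.ReleaseConnection()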
There may be an existing adapter available from proficom-ag, based on a presentation I found via a web search.
