Best way to generate Ansible documentation - ansible-2.x

I'd like to generate documentation pages about the hosts I manage with Ansible, and upload them to a web server.
What would be the best, non-hacky way of achieving this?
I want the HTML template to list IP addresses of hosts, and the values of many hostvars.
I noticed that when I run my inventory script, many of these variables are listed under _meta hostvars, but all the Jinja expressions are uninterpolated: i.e. the "{{expressions}}" are not expanded.
I'm finding that because I'm executing the role that generates the documentation on the web server host only, many of these variables are not available in the template.
For example, the IP addresses of other hosts are not available, and the template fails with: 'dict object' has no attribute 'ansible_default_ipv4'

1) Enable fact caching: https://docs.ansible.com/ansible/latest/plugins/cache.html
2) Run the setup module on every host in question.
3) Collect the required information from the fact cache with a script.
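A minimal sketch of step 3, assuming the jsonfile cache plugin is in use (e.g. `fact_caching = jsonfile` and `fact_caching_connection = /tmp/ansible_facts` in ansible.cfg; both path and plugin choice are assumptions). That plugin writes one JSON file per host, named after the host, which a small script can collect for the documentation template:

```python
import json
import os

CACHE_DIR = "/tmp/ansible_facts"  # fact_caching_connection from ansible.cfg (assumed)

def load_host_facts(cache_dir):
    """Read every JSON fact file the jsonfile cache plugin wrote."""
    facts = {}
    for name in os.listdir(cache_dir):
        with open(os.path.join(cache_dir, name)) as fh:
            facts[name] = json.load(fh)
    return facts

def summarize(facts):
    """Build (hostname, ip) rows for the documentation template."""
    rows = []
    for host, data in sorted(facts.items()):
        ipv4 = data.get("ansible_default_ipv4", {}).get("address", "n/a")
        rows.append((host, ipv4))
    return rows
```

The resulting rows can then be fed to a Jinja template rendered locally, outside of any play, so no cross-host variable lookups are needed.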


How to configure Nginx autoindex to edit files?

I have configured Nginx with the autoindex module to enable directory listing. But I want to extend this feature to allow editing and saving files as well.
The thing is, I have some private IPs that need to be monitored. I added those IPs to a file and wrote a script that takes the IPs from the file and monitors them by pinging. Since these IPs sometimes change due to DHCP, and apart from system admins no one is very proficient with the terminal, I wanted to provide a web UI so that the people concerned can change an IP through a webpage whenever needed. I know this is possible with code, but since I am not a developer, I was looking for a way to do it here. Is it possible?
No, it's not possible using nginx alone.
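For what it's worth, the amount of code required is small. Here is a minimal, hypothetical sketch using only Python's standard library; the file path and port are assumptions, and the complete lack of authentication and HTML escaping makes this a sketch, not production advice:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

IP_FILE = "/etc/monitoring/ips.txt"  # hypothetical path to the IP list

def read_ips(path):
    with open(path) as fh:
        return fh.read()

def write_ips(path, text):
    with open(path, "w") as fh:
        fh.write(text)

class EditorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Show the current IP list in an editable form
        body = ('<form method="POST"><textarea name="ips">%s</textarea>'
                '<input type="submit" value="Save"></form>' % read_ips(IP_FILE))
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

    def do_POST(self):
        # Save the submitted list back to the file
        length = int(self.headers["Content-Length"])
        fields = parse_qs(self.rfile.read(length).decode())
        write_ips(IP_FILE, fields.get("ips", [""])[0])
        self.send_response(303)
        self.send_header("Location", "/")
        self.end_headers()

# To run: HTTPServer(("127.0.0.1", 8080), EditorHandler).serve_forever()
```

nginx could then proxy_pass to this small backend while continuing to serve everything else.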

Protect AdWords Scripts

My company is attempting to protect its scripts used in Google AdWords. We want to share them with clients and other agencies, but retain proprietary control, which may be impossible, especially in AdWords.
One idea is to use obfuscation; however, some portions of the scripts cannot be obfuscated and still work in AdWords.
Another idea is to place the entire script in a Google Drive doc and use Google Drive as a gateway. However, this makes the scripts buggy.
We could pull the data out, run the script outside of the Google AdWords interface, and pull it back in, but then we lose access to certain functionality of the AdWords interface.
Any thoughts or suggestions?
The best way is to run the script from an external file. If you store your script in Google Drive and give permission only to the user who authorizes the script, your clients cannot reach the code. If you pre-authorize your script, it should work like this:
// Leaving a commented-out call like this is a common trick so the
// authorization flow still grants the UrlFetchApp permission:
// UrlFetchApp.fetch();
function main() {
  // External file hosting the real script body
  var url = "http://example.com/asdf.js";
  eval(UrlFetchApp.fetch(url).getContentText());
}
Gokaan is not too far off. I use a base loader script (sort of like a base class in code). The base (AKA script runner) is in charge of loading the scripts to run from a Google Drive document. It works best if you have an MCC account, because you give the base script permissions through your MCC login. That way, your client can't get to the true scripts, only to the loader (which is worthless from an IP perspective). And if they drop you, you drop them.
You can read more about it on Russ Savage's site, which is a great resource.
http://www.freeadwordsscripts.com/search/label/generic%20script%20runner
The only issue I have had is when you have many, many accounts trying to WRITE to a shared Google Drive document. Depending on how you write your code, you may get overwrite issues, because you cannot set the exact time a script runs (Google only promises hourly).
Since then, Google has added parallel scripts. That is my next move: migrate the script runner to the MCC level and have the script iterate through the accounts it should apply the scripts to. Much slicker, but it will take some reworking.
Good luck.

Get AD Site from LDAP Property

In a domain with AD Sites and Services configured is it possible to get the Site of a computer from LDAP? Is it stored as an attribute?
Unless this has changed over the last couple of years without my knowledge, it is not. Historically this was never done, as AD site knowledge was ephemeral: the assumption was that computers move around, so storing where they are is silly. Plus, there was no global need for the knowledge.
You could of course add this. By this I mean you could, for example, extend the schema with a new attribute and set a startup script on your domain-joined machines to write the site (if it has changed since they last wrote it) to the directory. Obviously you'll want to test this well to ensure it doesn't create more problems than it solves...
From the Win32 point of view, you've got the DsAddressToSiteNamesEx API. I don't know how to find it using pure LDAP.

How to determine at runtime if I am connected to production database?

OK, so I did the dumb thing and released production code (C#, VS2010) that targeted our development database (SQL Server 2008 R2). Luckily we are not using the production database yet so I didn't have the pain of trying to recover and synchronize everything...
But, I want to prevent this from happening again when it could be much more painful. My idea is to add a table I can query at startup and determine what database I am connected to by the value returned. Production would return "PROD" and dev and test would return other values, for example.
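That table-based check can be sketched like this, using sqlite3 as a stand-in for SQL Server (the table and column names are hypothetical; in the real app the query would of course go through the WCF service):

```python
import sqlite3

EXPECTED_ENV = "DEV"  # baked in per build configuration (assumed convention)

def current_environment(conn):
    """Return the environment marker stored in the database."""
    row = conn.execute("SELECT EnvironmentName FROM EnvironmentInfo").fetchone()
    return row[0]

def assert_environment(conn, expected):
    """Fail fast at startup if the app is pointed at the wrong database."""
    env = current_environment(conn)
    if env != expected:
        raise RuntimeError("connected to %s but built for %s" % (env, expected))

# Demo setup: a dev database carrying its marker row
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EnvironmentInfo (EnvironmentName TEXT)")
conn.execute("INSERT INTO EnvironmentInfo VALUES ('DEV')")
assert_environment(conn, EXPECTED_ENV)  # passes; would raise against PROD
```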
If it makes any difference, the application talks to a WCF service to access the database so I have endpoints in the config file, not actual connection strings.
Does this make sense? How have others addressed this problem?
Thanks,
Dave
The easiest way to solve this is to not have access to the production accounts. Those are stored in the Machine.config file for our .NET applications. In non-.NET applications this is easily duplicated by having a config file in a common location, or (dare I say) a registry entry which holds the account information.
Most of our servers are accessed through aliases too, so no one really needs to change the connection string from environment to environment. Just grab the user from the config, and the server alias in the hosts file points you to the correct server. This also removes the headache of having to update all our config files when we switch DB instances (change hardware, etc.).
So even with the ClickOnce deployment and the endpoints, you can publish a new endpoint URI in a machine config on the end user's desktop (I'm assuming this is an internal application), and then reference that in the code.
If you absolutely can't do this, as it might be a lot of work (the last place I worked had 2,000 call center people, so this push was a lot more difficult, but still possible), you can always set up an automated build server which modifies the app.config file for you as a last step of building the application. You then ALWAYS publish the compiled code from the automated build server. Never make the change to the app.config for something like this a manual step in the developer's process; that will always lead to problems at some point.
Now if none of this works, your final option (done this one too), which I hated, but it worked, is to look up the value off of a mapped drive. Essentially, everyone in the company has a mapped drive, say R:. This is where you have your production configuration files, etc. The prod account people map one drive location with the production values, and the devs etc. map another with the development values. I hate this option compared to the others, but it works, and it can save you in a pinch when the others become tedious and difficult (due to, say, office politics, setting up a build server, etc.).
I'm assuming your production server has a different name than your development server, so you could simply SELECT @@SERVERNAME AS ServerName.
Not sure if this answer helps you in an assumed .NET environment, but within a *nix/PHP environment, this is how I handle the same situation.
OK, so I did the dumb thing and released production code
There are times where some app behavior is environment dependent, as you alluded to. In order to provide the ability to check between development and production environments, I added the following line to the global /etc/profile/profile.d/custom.sh config (CentOS):
SERVICE_ENV=dev
And in code I have a wrapper method which grabs an environment variable by name and localizes its value, making it accessible to my application code. Below is a snippet demonstrating how to check the current environment and react accordingly (in PHP):
public function __call($method, $params)
{
    // Reduce chatter on production envs:
    // only display debug messages if an override told us to
    if (($method === 'debug') &&
        (CoreLib_Api_Environment_Package::getValue(CoreLib_Api_Environment::VAR_LABEL_SERVICE) === CoreLib_Api_Environment::PROD) &&
        (!in_array(CoreLib_Api_Log::DEBUG_ON_PROD_OVERRIDE, $params))) {
        return;
    }
}
Remember, you don't want to pepper your application logic with environment checks, save for a few extreme use cases like the one demonstrated in the snippet. Rather, you should be controlling access to your production databases using DNS. For example, within your development environment the db hostname mydatabase-db would resolve to a local server instead of your actual production server. When you push your code to the production environment, DNS will resolve the hostname correctly, so your code should "just work" without any environment checks.
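Under that DNS approach, a development box simply overrides the hostname locally, e.g. in its hosts file (using the hypothetical hostname from the example above):

```
# /etc/hosts on a development machine: point the production DB hostname at a local server
127.0.0.1   mydatabase-db
```

Production machines carry no override, so the same connection string resolves to the real server there.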
After hours of wading through textbooks and tutorials on MSBuild and app.config manipulation, I stumbled across something called SlowCheetah - XML Transforms http://visualstudiogallery.msdn.microsoft.com/69023d00-a4f9-4a34-a6cd-7e854ba318b5 that did what I needed it to do less than an hour after I first stumbled across it. Definitely recommended! From the article:
This package enables you to transform your app.config or any other XML file based on the build configuration. It also adds additional tooling to help you create XML transforms.
This package is created by Sayed Ibrahim Hashimi, Chuck England and Bill Heibert, the same Hashimi who authored THE book on MSBuild. If you're looking for a simple, ubiquitous way to transform your app.config, web.config or any other XML file based on the build configuration, look no further -- this VS package will do the job.
Yeah I know I answered my own question but I already gave points to the answer that eventually pointed me to the real answer. Now I need to go back and edit the question based on my new understanding of the problem...
Dave
I'm assuming your production server has a different IP address. You can simply use:
SELECT CONNECTIONPROPERTY('local_net_address') AS local_net_address

Ways of specifying "Routes" and "URIGroups" in WebSphere AS

Environment: WebSphere Network Deployment edition v6.1 (on Linux)
We have 2 applications "Main" and "Dynamic" that each run on a server cluster. Each of these applications is set to run from its unique domain name. So www.main.com/ is serviced by the Main application while www.dynamic.com/ is serviced by the Dynamic application.
The configurations required for this have been pretty simple. So no problem so far.
The Dynamic application is related to Main. Going forward, we want to be able to serve the Dynamic application from:
www.dynamic.com/ as well as
www.main.com/d/
In order to achieve this, we have the following configuration specified in plugin-cfg.xml.
<URIGroup Name="MainURIs">
<URI Name="/*" />
</URIGroup>
<URIGroup Name="DynamicURIs">
<URI Name="/*" />
</URIGroup>
<URIGroup Name="Main_DynamicURIs">
<URI Name="/d/*" />
</URIGroup>
Two server clusters, namely "MainCluster" and "DynamicCluster", have already been defined in plugin-cfg.xml. Similarly, we have virtual host groups "MainVH" and "DynamicVH" defined for the virtual host names www.main.com and www.dynamic.com respectively. We have the routing rules specified as follows:
<Route UriGroup="MainURIs" VirtualHostGroup="MainVH" ServerCluster="MainCluster"/>
<Route UriGroup="DynamicURIs" VirtualHostGroup="DynamicVH" ServerCluster="DynamicCluster"/>
<Route UriGroup="Main_DynamicURIs" ServerCluster="DynamicCluster"/>
Please note that we don't specify a virtual host group for the third route rule.
This seems to work fine for our purpose. However, we had to make the above change to plugin-cfg.xml by hand. Every time the plugin-cfg.xml file is regenerated, the changes are lost and we have to make them again. This is frowned upon by our clients, and they don't want that to be the case going forward.
Is there a way we can overcome this problem of hand-editing the plugin-cfg.xml file?
Some vague ways I was thinking of:
1) Somehow making this change using the admin console of WebSphere, so that even if the XML file is regenerated, it would contain the relevant routing rules automatically.
2) Writing a wsadmin JACL/Jython script that could be run each time after the plugin file is regenerated. This script should be able to re-apply the above routing rules to the configuration. I have searched quite a lot but haven't found an encouraging reply about this approach.
Any helpful tips are highly appreciated.
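For option 2, the post-regeneration script does not strictly have to be wsadmin JACL/Jython; any script that re-inserts the missing elements after each regeneration would do. Here is a hedged sketch in plain Python, using only the element and attribute names from the question (the plugin-cfg.xml path is site-specific and assumed):

```python
import xml.etree.ElementTree as ET

def patch_plugin_cfg(tree):
    """Re-insert the hand edits if a regeneration dropped them."""
    root = tree.getroot()
    # Re-add the extra URIGroup for the /d/* prefix
    if not any(g.get("Name") == "Main_DynamicURIs"
               for g in root.iter("URIGroup")):
        group = ET.SubElement(root, "URIGroup", Name="Main_DynamicURIs")
        ET.SubElement(group, "URI", Name="/d/*")
    # Re-add the Route with no VirtualHostGroup, as in the question
    if not any(r.get("UriGroup") == "Main_DynamicURIs"
               for r in root.iter("Route")):
        ET.SubElement(root, "Route", UriGroup="Main_DynamicURIs",
                      ServerCluster="DynamicCluster")
    return tree

# Usage (path assumed):
#   tree = ET.parse("/opt/IBM/WebSphere/Plugins/config/webserver1/plugin-cfg.xml")
#   patch_plugin_cfg(tree).write("plugin-cfg.xml")
```

Hooking such a script into whatever process triggers the regeneration keeps the edit out of anyone's hands, which may satisfy the clients' objection to manual editing.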
