Update permissions on a file not deployed by Puppet

I would like to know how to update the permissions for a file that is not getting deployed by puppet. The file is actually getting deployed by an RPM. I have tried the following with no luck:
file { '/some/directory/myfile.conf':
  ensure  => 'file',
  replace => 'no',
  owner   => 'someuser',
  group   => 'somegroup',
  mode    => '0644',
}
This actually removes the content of the file and leaves it empty. However, it does set the right ownership and mode. I would like to keep the content. I am using Puppet 2.7.3.
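For what it's worth, a minimal sketch that manages only the metadata (no ensure, content, or source); in that case Puppet should only correct ownership and mode and leave the RPM-deployed contents untouched. The behaviour on 2.7.x is an assumption on my part:
file { '/some/directory/myfile.conf':
  owner => 'someuser',
  group => 'somegroup',
  mode  => '0644',
  # no ensure/content/source: only ownership and mode are managed,
  # so the existing file contents are left alone
}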

Related

CakePHP 3 Cache File Permissions

I have a CakePHP 3 app with shells that I run from crontab.
When I run the shells through crontab, they create cache files owned by the user running the crontab, which is not the user that runs Apache...
Sometimes when the cron job runs, the cached models are owned by apache and the shell fails; sometimes when I visit a page, the models are owned by ec2-user and the page fails...
I posted a question on github, https://github.com/cakephp/cakephp/issues/11265#issuecomment-333951638
I was told to modify the chmod option for the cache config. I tried the following, but it didn't work:
/**
 * Configure the cache adapters.
 */
'Cache' => [
    'default' => [
        'className' => 'File',
        'path' => CACHE,
        'url' => env('CACHE_DEFAULT_URL', null),
        'chmod' => 777
    ],
Any ideas on how I can make the default file permissions 777 on the cake cache files?
I would suggest having the cron run as the correct user (a sketch of that follows), or having the cron task change the owner and keep the permissions as set. But if you really want to keep everything as it is and just change the permissions, then you can use the mask option, which I assume is what they meant.
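If you go the cron route, a rough sketch of a system crontab entry that runs the shell as the apache user; the schedule, paths, and shell name here are all hypothetical:
# /etc/cron.d/cakephp-shells -- hypothetical file; adjust schedule, path, and shell name
# m   h  dom mon dow user    command
*/5   *  *   *   *   apache  cd /var/www/myapp && bin/cake my_shell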
Cache Config Options
Set using the following:
'Cache' => [
    'default' => [
        'mask' => 0777,
        // other config options
    ],
]
@KaffineAddict is correct, but make sure you do not wrap the value of mask in quotes, as that can result in incorrect permissions.
'Cache' => [
    'default' => [
        'mask' => 0777,
        // other config options
    ],
]

Does each file need to be copied within a separate resource

I'm still getting to grips with Puppet (it feels like I'm drinking from a hose pipe at times), so I've tried to keep my configuration and environment simple initially. I've started by having Puppet deploy files to my clients. However, I get the feeling that the way I'm deploying the files isn't the most efficient way of doing so. For every file, I'm specifying something like this:
file { "/etc/ntp.conf":
owner => 'root',
group => 'root',
mode => '0444',
source => 'puppet://basxtststinfl01/files/etc/ntp.conf',
}
file { "/etc/snmp/snmpd.conf":
owner => 'root',
group => 'root',
mode => '0644',
source => 'puppet://basxtststinfl01/files/etc/snmpd.conf',
}
I have up to 15 files I'd like to deploy. Is this the correct approach?
Thanks.
"Files in modules" is a good keyword to search for.
Generally, to solve the problem of repetitive resource, you can wrap them in a defined type.
define deployed_file($ensure = 'present',
$owner = 'root',
$group = 'root',
$mode = '644',
$recurse = '') {
if $recurse != '' { File { recurse => $recurse } }
file {
$name:
ensure => $ensure,
owner => $owner,
group => $group,
source => "puppet://basxtststinfl01/files${name}",
}
}
Your resources from above can then be written as:
deployed_file {
  '/etc/ntp.conf':
    mode => '444';
  '/etc/snmp/snmpd.conf':
}
You can add more parameters to make the URL customizable (a sketch follows).
Note that I added the recurse parameter for illustration. file has lots of attributes, and if you need deployed_file to support them, you should add them in this manner, so that they are passed to the wrapped file when specified but ignored otherwise.
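For example, a hedged sketch with a hypothetical $source_server parameter so the fileserver host is no longer hard-coded:
define deployed_file($ensure = 'present',
                     $owner = 'root',
                     $group = 'root',
                     $mode = '644',
                     $source_server = 'basxtststinfl01') {
  file { $name:
    ensure => $ensure,
    owner  => $owner,
    group  => $group,
    mode   => $mode,
    # $source_server is a hypothetical parameter; the default keeps the original host
    source => "puppet://${source_server}/files${name}",
  }
}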
I suppose the question of whether this is the 'correct approach' comes down to what you're doing exactly, but since 'it depends' is sometimes an annoying thing to hear, there are a few general points that can be made...
This is an approach that will work - it would deploy the 15 or so files that you have declared exactly as you specify them.
It does, however, come at the expense of maintaining those files exactly as they are written on basxtststinfl01.
Since these are static files, you might find this restrictive when you come to writing Puppet code to provision many different servers.
So, on to options! The examples you have given can be considered in the context of Puppet modules - reusable code that configures a particular service or logical unit of your system.
In your ntp case, there is a Puppet Labs ntp module which contains the logic to create an ntp.conf file and takes parameters to configure it. This shortens the Puppet declaration and lets you reuse it when provisioning more servers. An example of how to configure it is given in the documentation of the puppetlabs-ntp module.
class { '::ntp':
  servers => [ 'ntp1.corp.com', 'ntp2.corp.com' ],
}
More often than not, someone has already written a module that will provision the part of the system you want; see the Puppet Forge.
Decomposing your system requirements into units and using modules means you can specify your config files dynamically according to variables that might vary from system to system.
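As a rough illustration (the snmp module name, its parameter, and the template path are all hypothetical), a module class might render the config from a template instead of shipping a static file:
class snmp($syscontact = 'root@localhost') {
  file { '/etc/snmp/snmpd.conf':
    ensure  => file,
    owner   => 'root',
    group   => 'root',
    mode    => '0644',
    # The ERB template can use $syscontact, so the rendered file varies
    # per system without maintaining separate static copies.
    content => template('snmp/snmpd.conf.erb'),
  }
}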
The best thing to do is work through the excellent documentation on the Puppet Labs website.
Some resources:
Learning Puppet (which you may already have seen)
Basics of modules

Logstash not reading file input

I have a strange problem with Logstash. I am providing a log file as input to logstash. The configuration is as follows:
input {
  file {
    type => "apache-access"
    path => ["C:\Users\spanguluri\Downloads\logstash\bin\test.log"]
  }
}
output {
  elasticsearch {
    protocol => "http"
    host => "10.35.143.93"
    port => "9200"
    index => "latestindex"
  }
}
I am already running the Elasticsearch server and verifying with curl queries whether the data is being received. The problem is that no data is received when the input is a file. However, if I change the input to stdin { } as follows, it sends all input data smoothly:
input {
  stdin { }
}
output {
  elasticsearch {
    protocol => "http"
    host => "10.35.143.93"
    port => "9200"
    index => "latestindex"
  }
}
I don't get where I am going wrong. Can someone please take a look at this?
You should set start_position under your file section:
start_position => "beginning"
It defaults to end and so won't read any existing lines in your file, only newly added ones:
start_position
Value can be any of: "beginning", "end"
Default value is "end"
Choose where Logstash starts initially reading files: at the beginning
or at the end. The default behavior treats files like live streams and
thus starts at the end. If you have old data you want to import, set
this to ‘beginning’
This option only modifies “first contact” situations where a file is
new and not seen before. If a file has already been seen before, this
option has no effect.
In addition to the provided answer, I had to change the path from c:\my\path to c:/my/path in order for it to read the files.
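Putting the answer and the comment together, the file input would then look roughly like this (a sketch based on the configuration above; the note about already-seen files paraphrases the quoted docs):
input {
  file {
    type => "apache-access"
    # forward slashes, per the comment above
    path => ["C:/Users/spanguluri/Downloads/logstash/bin/test.log"]
    # read existing lines on first contact instead of only new ones
    start_position => "beginning"
  }
}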

Redirect to controller action if file not found

I'm trying to configure the rewrite rules for lighttpd so that a specific cakephp controller-action is executed if a file is not found. The controller-action will generate the data (png file), save it for future use, and then serve it to the client. The next time someone tries to access the file, it will be served by lighttpd directly without executing any php. In other words, the png files are cached so there is no need to recreate them.
From what I can tell, url.rewrite-if-not-file is ignored if rewrite-once has a match. Thus, the following can serve up my cached files but not my uncached files.
url.rewrite-if-not-file = (
    "^/scan/(.+)\.png" => "/mycontroller/scan/$1"
)
url.rewrite-once = (
    "^/(css|files|img|js)/(.*)" => "/$1/$2",
    "^/favicon.ico" => "/favicon.ico",
    "^/scan/(.+\.png)" => "/scan/$1",
    "^([^\?]*)(\?(.+))?$" => "/index.php?url=$1&$3",
)
The only solution I can think of now is to delete the third rule and modify ^([^\?]*)(\?(.+))?$ so it ignores URLs starting with /scan/ (sketched below).
Any other suggestions?
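For reference, an untested sketch of that workaround (it assumes lighttpd's PCRE-style rewrite rules, so a negative lookahead can exclude /scan/ from the catch-all):
url.rewrite-if-not-file = (
    "^/scan/(.+)\.png" => "/mycontroller/scan/$1"
)
# With no /scan/ rule in rewrite-once, existing PNGs fall through to
# rewrite-if-not-file and are served directly; missing ones hit the controller.
url.rewrite-once = (
    "^/(css|files|img|js)/(.*)" => "/$1/$2",
    "^/favicon.ico" => "/favicon.ico",
    "^(?!/scan/)([^\?]*)(\?(.+))?$" => "/index.php?url=$1&$3",
)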
I'd do something like this in a controller:
if (file_exists($file_name)) {
    $this->redirect($file_name);
} else {
    $this->redirect(array('action' => 'some_action', 'controller' => 'some_controller'));
}

Share a configuration file in CakePHP between Config files (routes.php, email.php...)

I have a site running on localhost as a development environment and on a server for production.
There are some differences in some configuration files between the two, and every time I push changes to the server I have to be careful not to overwrite the files that differ.
I would like to be able to simplify this process by creating just one single file with the correct configuration for each environment.
Currently I need to read that file in these Config files:
app/Config/email.php
app/Config/routes.php
And ideally, if possible in:
app/Vendor/Vendor_name/vendor_file.php
Is it possible somehow?
I have tried to use Configure::read and Configure::write, but it seems this cannot be done inside email settings such as public $smtp or in the routes file.
Thanks.
The routes file is simply a php file with calls to the router. You could very simply split it up into multiple files and load them yourself:
app/Config/
    routes.php
    routes_dev.php
    routes_production.php
routes.php would then load the proper routes file.
<?php
if ($env == 'dev') {
    include 'routes_dev.php';
} else {
    include 'routes_production.php';
}
The email config is also just a php file. You could write a function to set the proper default config based on the environment.
class EmailConfig {

    public function __construct() {
        // $env is assumed to hold the current environment, as above
        if ($env == 'dev') {
            $this->default = $this->dev;
        }
    }

    public $default = array(
        'host' => 'mail.example.com',
        'transport' => 'Smtp'
    );

    public $dev = array(
        'host' => 'mail2.example.com',
        'transport' => 'Smtp'
    );
}
As for vendor files, that's a case by case basis.
If you have a deployment system, it might be better to actually have separate files for each environment (maybe even a full config directory) and rename them after the deployment build is complete, making Cake and your code none the wiser.
The way I used to handle this situation was to add environment variables to the apache virtualhost configuration.
SetEnv cake_apps_path /var/www/apps/
SetEnv cake_libs_path /var/www/libs/
This allowed me to pull $_SERVER['cake_apps_path'] and $_SERVER['cake_libs_path']. Each developer can then set their own variables in their own virtualhost config, you add yours to the server's virtualhost config, and you're done: each developer can have their own paths.
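For example, a short sketch (the local variable names are mine) of reading those values in PHP:
<?php
// Read the paths exported by the virtualhost's SetEnv directives.
$appsPath = $_SERVER['cake_apps_path']; // e.g. /var/www/apps/
$libsPath = $_SERVER['cake_libs_path']; // e.g. /var/www/libs/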
