I am getting the error below while copying a file from the Puppet master to the agents - file-copying

1. When I run "puppet agent -t" on the agents, the file is not retrieved from the Puppet server and I get the error below.
2. I placed the code below in the module path /etc/puppetlabs/code/environments/production/modules/mailx/manifests/init.pp and included the class in site.pp as shown.
Error: /Stage[main]/Mailx/File[/etc/mail.rc]: Could not evaluate: Could not retrieve information from environment production source(s) puppet:///modules/mailx/mail.rc
file { '/etc/mail.rc':
  ensure => present,
  mode   => '0644',
  owner  => 'root',
  group  => 'root',
  source => 'puppet:///modules/mailx/mail.rc',
}
node 'default' {
  include mailx
}

Your code will be much easier to read if you present it like this.
#/etc/puppetlabs/code/environments/production/modules/mailx/manifests/init.pp
class mailx {
  file { '/etc/mail.rc':
    ensure => present,
    mode   => '0644',
    owner  => 'root',
    group  => 'root',
    source => 'puppet:///modules/mailx/mail.rc',
  }
}
And then the file
#/etc/puppetlabs/code/environments/production/modules/mailx/files/mail.rc
set smtp=your.smtp.server
set from="from email address"
The module should be in
/etc/puppetlabs/code/environments/production/modules/mailx/manifests/init.pp
Notice the "production" after the environments.
It may have just been a typo when you put the path in here, but you spelt puppetlabs as puppelabc.
And the file should be here
/etc/puppetlabs/code/environments/production/modules/mailx/files/mail.rc
Each directory under the environments directory is an environment, and that error says Puppet is looking in the production environment for that file.
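For reference, the module layout described above looks like this (only the two files discussed here are shown):
/etc/puppetlabs/code/environments/production/modules/mailx/
├── manifests/
│   └── init.pp
└── files/
    └── mail.rc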

Related

How to import Linux logs into ELK stack for log analysis?

I'm building a log analysis environment for analyzing Linux logs such as /var/log/auth.log, /var/log/cron, /var/log/syslog, etc. The goal is to be able to upload such a log file and analyze it properly with Kibana/Elasticsearch. To do so, I created the .conf file seen below, which includes the proper patterns to parse auth.log and the information needed in the input and output sections. Unfortunately, when connecting to Kibana I cannot see any data in the "Discover" panel and cannot find the related index pattern. I tested the grok patterns and they work well.
input {
  file {
    type => "linux-auth"
    path => [ "/home/ubuntu/logs/auth.log" ]
  }
}
filter {
  if [type] == "linux-auth" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:time} %{WORD:method}\[%{POSINT:auth_pid}\]\: %{DATA:message} for %{DATA:user} from %{IPORHOST:IP_address} port %{POSINT:port}" }
    }
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:time} %{WORD:method}\[%{POSINT:auth_pid}\]\:%{DATA:message} for %{GREEDYDATA:username}" }
    }
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
Example of auth.log:
2018-12-02T14:01:00Z sshd[0000001]: Accepted keyboard-interactive/pam for root from 185.118.167.241 port 64965 ssh2
2018-12-02T14:02:00Z sshd[0000002]: Failed keyboard-interactive/pam for invalid user ubuntu from 36.104.140.175 port 57512 ssh2
2018-12-02T14:03:00Z sshd[0000003]: pam_unix(sshd:session): session closed for user root
Here are a few recommendations:
You can run Logstash in debug mode as below to check what the exact error is:
bin/logstash --debug -f file_path.conf
Add stdout to the output section, which will print the incoming events, so you can be sure Logstash is reading the file correctly.
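A minimal sketch of such an output section (the elasticsearch host is taken from your config):
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
  # Print every event to the console while debugging
  stdout {
    codec => rubydebug
  }
}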
Most importantly, since you mention you want to read system logs and visualize the data, I would recommend using Filebeat with its system module. Filebeat is built especially for use cases like reading from files.
It is a simple setup: under the system module in Filebeat you just specify which system log files to read, point the output at the Elasticsearch endpoint, and run Filebeat.
It will start reading the logs and pushing the data to Elasticsearch.
Also, you don't need to build a custom dashboard in Kibana (as you would in the Logstash case); Filebeat comes with preconfigured dashboards for system logs. A sketch of this setup follows.
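A hedged sketch of that setup (the log path and Elasticsearch host are assumptions taken from your config; adjust them to your environment):
filebeat modules enable system

# modules.d/system.yml — point the auth fileset at your file
- module: system
  auth:
    enabled: true
    var.paths: ["/home/ubuntu/logs/auth.log"]

# filebeat.yml — where to send the data
output.elasticsearch:
  hosts: ["elasticsearch:9200"]

# load the index template and prebuilt Kibana dashboards, then run
filebeat setup
filebeat -e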
You can find more details in the official Filebeat documentation.

Configure DDEV with Apache Solr on Drupal 7 for Search API

From #DamienMcKenna in Slack
Having problems creating a Solr instance with a D7 site. I copied the conf files to ~/.ddev/solr/conf, but when Solr starts there is no default instance created. I ran ddev stop --remove-data --omit-snapshot and recreated the instance, but the instance still doesn't exist. When I go to the Solr UI to check the system, it shows "no cores available"; when I try to create one named "dev" it says:
Error CREATEing SolrCore 'dev': Unable to create core [dev] Caused by: Can't find resource 'solrconfig.xml' in classpath or '/opt/solr/server/solr/dev'
I had success on a Drupal 7 project using the example docker-compose.solr.yaml file from version 1.11.0 of DDEV.
Copy https://github.com/drud/ddev/blob/v1.11.0/pkg/servicetest/testdata/services/docker-compose.solr.yaml into your .ddev folder and ensure line 34 matches the Solr version you're going to copy in step 2 below, e.g. solr:6.6.
Copy the files from sites/all/modules/contrib/search_api_solr/solr-conf/6.x/*.* into the .ddev/solr/conf folder.
Download and enable the search_api_override module.
Add the following in settings.local.php:
// For ddev only.
$conf['search_api_override_mode'] = 'load';
$conf['search_api_override_servers']['content'] = array(
  'name' => 'DDEV: Solr Server',
  'options' => array(
    'host' => 'solr',
    'port' => '8983',
    'path' => '/solr/dev',
    'http_user' => '',
    'http_pass' => '',
    'excerpt' => 0,
    'retrieve_data' => 1,
    'highlight_data' => 0,
    'http_method' => 'AUTO',
  ),
);
Also, YMMV. It may be better to only override the individual values you need, à la:
$conf['search_api_override_servers']['content']['options']['host'] = 'solr';
$conf['search_api_override_servers']['content']['options']['port'] = '8983';
$conf['search_api_override_servers']['content']['options']['path'] = '/solr/dev';
You may need to modify the 'content' array index to match whatever you configured in Drupal 7 as your Solr server's machine name.
Start ddev with ddev start.
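Taken together, the steps above look roughly like this (a sketch; it assumes a standard Drupal 7 layout, the drush CLI, and converts the GitHub link to its raw URL):
cd /path/to/project
# step 1: the example compose file from DDEV v1.11.0
curl -o .ddev/docker-compose.solr.yaml https://raw.githubusercontent.com/drud/ddev/v1.11.0/pkg/servicetest/testdata/services/docker-compose.solr.yaml
# step 2: the Solr 6.x config shipped with search_api_solr
mkdir -p .ddev/solr/conf
cp sites/all/modules/contrib/search_api_solr/solr-conf/6.x/* .ddev/solr/conf/
# step 3: the override module
drush dl search_api_override && drush en -y search_api_override
# steps 4-5: add the overrides to settings.local.php, then start
ddev start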
Note: I place the Search API override values in sites/default/settings.local.php instead of what one would think to be the logical place (sites/default/settings.ddev.php), so as not to interfere with DDEV's own auto-generation of the latter file.
It would be cool if DDEV did this automatically in settings.ddev.php, similar to how the DB service settings work, but AFAICT this level of integration is not there and likely never will be for Drupal 7: firstly because you need an additional module (search_api_override) that may or may not be present, and secondly because users can name their Solr server whatever they want, so it would be hard to automate. E.g. $conf['search_api_override_servers']['content'] could just as well be $conf['search_api_override_servers']['foo'].

Database (database/database.sqlite) does not exist. Database works from artisan tinker

I created my database.sqlite file in the database folder. My .env file contents are:
DB_CONNECTION=sqlite
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=absolute\path\to\database\database.sqlite
DB_USERNAME=admin
DB_PASSWORD=
When I run php artisan tinker and DB::table('users')->get(); I get the info from the database.
My DatabaseController is:
class DatabaseController extends Controller
{
    public function main()
    {
        $users = DB::table('users')->get();
        return view('database', compact('users'));
    }
}
Yet when I request the /database path I get the error:
QueryException in Connection.php line 647:
Database (database/database.sqlite) does not exist. (SQL: select * from "users")
UPDATE:
A temporary fix is to change database.php in the config folder:
'connections' => [
    'sqlite' => [
        'driver' => 'sqlite',
        'database' => 'absolute\path\database\database.sqlite',
        'prefix' => '',
    ],
Instead of using env('DB_DATABASE', database_path('database.sqlite')), which returns database/database.sqlite rather than my absolute path.
You need to use the full path, something like:
DB_DATABASE=/home/laravel-project/database/database.sqlite
If you remove DB_DATABASE=... from your .env file and use this in config/database.php:
'database' => env('DB_DATABASE', database_path('database.sqlite')),...
(if your database.sqlite file is in the database/ folder), it will work too.
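To make that concrete, here is a minimal sketch of that combination (assuming database.sqlite sits in the database/ folder):
# .env — keep only the connection type
DB_CONNECTION=sqlite

// config/database.php — falls back to the absolute path from database_path()
'sqlite' => [
    'driver'   => 'sqlite',
    'database' => env('DB_DATABASE', database_path('database.sqlite')),
    'prefix'   => '',
],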
I ran the following commands:
php artisan config:cache
php artisan config:clear
php artisan serve - restarted the server
In the config/database.php file:
'sqlite' => [
    'driver' => 'sqlite',
    'database' => dirname(__DIR__).'/database/database.sqlite',
],
Then run the following commands:
php artisan config:cache
php artisan config:clear
For those who still face this issue: https://laracasts.com/discuss/channels/general-discussion/database-databasedatabasesqlite-does-not-exist-error-on-running?page=1
Since SQLite only requires DB_CONNECTION=sqlite in the .env file, just remove the others:
DB_HOST
DB_PORT
DB_DATABASE
DB_USERNAME
DB_PASSWORD
Then save and run your migrations again (see below). This is how I solved the problem. Cheers!
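That is, after saving .env:
php artisan migrate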
As Chris said in the comments, the main solution is to completely delete DB_DATABASE from the .env file (in fact, only the first line of the following code is needed).
DB_CONNECTION=sqlite
DB_HOST=127.0.0.1
DB_PORT=3306
DB_USERNAME=admin
DB_PASSWORD=
For Windows, you need to set up your path like this
DB_DATABASE="C:\\xampp\\htdocs\\project\\data\\database.db"
I think the problem here was caused by Homestead.
The absolute path to the database.sqlite file on the local machine is not the same as the one inside the virtual machine.
In my case I had to set:
DB_DATABASE=/home/vagrant/code/database/database.sqlite
Or, you can just comment out this line and you are ready to go.
#DB_DATABASE=
Go to the PHP folder in your C: directory and open the file php.ini.
There, find extension=pdo_sqlite and remove the leading ; to enable the extension.
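Concretely, the change in php.ini looks like this:
; before (disabled):
;extension=pdo_sqlite
; after (enabled):
extension=pdo_sqlite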
I faced the same problem.
I just had to create the database.sqlite file in the database directory and run my migrations.
✅ When using sqlite as your db, remove other DB env variables and it should work fine.
I faced the same problem and this solved it perfectly. Keep only:
DB_CONNECTION=sqlite
Remove the others.

How to let Cakephp 3 choose database connection by Apache environment variable

I'm working with CakePHP v3 and want to install the application in two different environments, one for development and one for production use. Both installations should consist of exactly the same files (and file contents), so I can use git or svn to easily deploy the application.
If both environments are hosted on the same machine, I need different database settings (so that the development env uses its own testing DB). I thought of configuring two Datasources in app.php, the 'default' one for production and a 'development' one.
But how can I switch between both sources?
To be more specific: Currently I define the following environment variable in my Apache config for the development environment:
SetEnv CAKEPHP_DEBUG 1
Then I changed the definition of 'debug' in the app.php file like this:
'debug' => (bool)getenv('CAKEPHP_DEBUG'),
This enables DEBUG mode only on the development machine. Now I also want to switch database configuration in the same easy way.
(I already found some solutions for cakephp v2, but all of them are pretty old and I'm not sure what's the best way to do it in cakephp v3.)
The manual says:
"You can define as many connections as you want in your configuration file. You can also define additional connections at runtime using Cake\Datasource\ConnectionManager::config()."
So I guess you can check the value of debug in AppController's beforeFilter and change the default database connection.
AppController.php
use Cake\Core\Configure;
use Cake\Datasource\ConnectionManager;

if (Configure::read('debug') == 1) {
    ConnectionManager::config('default', [
        'className' => 'Cake\Database\Connection',
        'driver' => 'Cake\Database\Driver\Mysql',
        'persistent' => false,
        'host' => 'dev_server',
        'username' => 'dev_username',
        'password' => 'dev_passwd',
        'database' => 'development',
        'encoding' => 'utf8',
        'timezone' => 'UTC',
        'cacheMetadata' => true,
    ]);
}
I think you can do something similar in app.php using the ternary operator
app.php
'Datasources' => [
    'default' => getenv('CAKEPHP_DEBUG') == 1 ? [ /* debug params */ ] : [ /* default params */ ],
    ...
]
But somehow it doesn't seem like the 'clean' way to do it.
I think a cleaner way would be to set both configurations in app.php and then choose in your Table classes which configuration to use.
app.php
'Datasources' => [
    'debug' => [ /* debug params */ ],
    'default' => [ /* default params */ ]
]
Table file
public static function defaultConnectionName() {
    if (Configure::read('debug') == 1) {
        return 'debug';
    }
    return 'default';
}
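For context, a hedged sketch of where that method lives (the table name is illustrative, and Configure must be imported):
// src/Model/Table/UsersTable.php (illustrative name)
namespace App\Model\Table;

use Cake\Core\Configure;
use Cake\ORM\Table;

class UsersTable extends Table
{
    // The ORM calls this to decide which Datasources key the table uses
    public static function defaultConnectionName()
    {
        return Configure::read('debug') ? 'debug' : 'default';
    }
}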

Installing tomcat7 using Puppet on CentOS

My idea was to install tomcat7 using Puppet and then deploy the Solr WAR file as a web app. Here is what I found.
There are many tomcat7 modules on Puppet Forge, but none of them works out of the box; I am not sure whether any of them actually works, and a lot of them treat the code itself as the documentation.
Take puppet module install fhuertas-tomcat7 as a first example:
installs fhuertas-tomcat7 (v0.0.1)
I get an error when I run:
sudo puppet apply --modulepath=/home/qa/puppet_qa/modules/ -e "include tomcat7" --debug
Error: Could not find data item service_path in any Hiera data file and no default supplied
A similar scenario follows for:
puppet module install llehmijo-tomcat7_rhel (no longer maintained)
https://github.com/Spredzy/puppet-tomcat7 (claims to be for CentOS), but it has an Apt prerequisite (apt on CentOS?), and it did not install either.
All I want to do is install tomcat7 via Puppet and then install Solr; it seems like a simple request. Meanwhile I am working on my own to resolve the issue, and I was able to install and run tomcat7, but I am not sure how to install tomcat-users.xml.
Here is a portion of my init.pp:
exec { 'start service':
  command => 'sh "startup.sh"',
  cwd     => '/usr/share/apache-tomcat-7.0.42/bin',
  path    => '/usr/share/apache-tomcat-7.0.42/bin/:/usr/bin:/bin',
  #require => File['/usr/share/apache-tomcat-7.0.42/conf/tomcat-users.xml']
}
If I un-comment the require => File[...] line, I get:
Error: Could not find dependency File[/usr/share/apache-tomcat-7.0.42/conf/tomcat-users.xml] for Exec[start service]
file { "/etc/tomcat7/tomcat-users.xml":
owner => 'root',
require => Package['tomcat'],
notify => Service['tomcat'],
content => template('tomcat/tomcat-users.xml.erb')
}
This works. As for the modules on Puppet Forge and GitHub not working, I think there is no resolution; even when they don't work, these modules can be taken as guidelines or hints.
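The referenced template is not shown above; a minimal illustrative tomcat/templates/tomcat-users.xml.erb could look like this (the users and roles are assumptions; substitute your own):
<?xml version="1.0" encoding="UTF-8"?>
<tomcat-users>
  <!-- illustrative only: replace with your real users and roles -->
  <role rolename="manager-gui"/>
  <user username="admin" password="secret" roles="manager-gui"/>
</tomcat-users>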
I used the supported module https://forge.puppetlabs.com/puppetlabs/tomcat, and following the examples everything was properly installed using packages (at least on Ubuntu). It is also possible to install it from source.
See https://github.com/puppetlabs/puppetlabs-tomcat/tree/master/examples and the example below:
class { 'java': }
class { 'tomcat':
  install_from_source => false,
  user                => 'tomcat7',
  require             => Class['java'],
}
tomcat::instance { 'tomcat7':
  package_name => 'tomcat7',
  require      => Class['tomcat'],
}->
tomcat::instance { 'tomcat7-admin':
  package_name => 'tomcat7-admin',
}->
tomcat::config::server::tomcat_users {
  'tomcat-admin':
    catalina_base => '/var/lib/tomcat7',
    element       => 'user',
    password      => 'test',
    roles         => ['manager-gui','admin'];
  'deployer':
    catalina_base => '/var/lib/tomcat7',
    element       => 'user',
    password      => 'deployer',
    roles         => ['manager-script'];
}->
tomcat::service { 'tomcat7':
  service_ensure => running,
  catalina_base  => '/var/lib/tomcat7',
  require        => Tomcat::Instance['tomcat7'],
}
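If you want to try this, the supported module and a Java module can be pulled from the Forge first (a sketch; the manifest filename is illustrative):
puppet module install puppetlabs-tomcat
puppet module install puppetlabs-java
sudo puppet apply tomcat7.pp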
