I am trying to extract the contents of a file and store it in a variable. My code looks like this.
define ssh_user (
  $name      = undef,
  $role      = undef,
  $password  = undef,
  $usergroup = undef,
) {
  user { $name:
    ensure => present,
    home   => "/home/${name}/",
  } ->
  exec { "generate keys ${name}":
    user    => $name,
    command => '/bin/echo -e "\n\n\n" | ssh-keygen -t rsa',
    path    => '/usr/bin/',
  } ->
  file { "/home/${name}/.ssh/authorized_keys":
    ensure => file,
    mode   => '0700',
    source => "/home/${name}/.ssh/id_rsa.pub",
  }

  @@node_user { $name:
    require => File["/home/${name}/.ssh/authorized_keys"],
    key     => file('/home/${name}/.ssh/id_rsa.pub'), ## line causing error
    role    => "ssh::users::${name}::role",
  }
}
I get the following error:
Error: Could not find any files from /home/${name}/.ssh/id_rsa.pub at /etc/puppet/manifests/sshd_setup.pp:90 on node puppet.colo.seagate.com
I am creating files for a set of users and storing the contents of each file in a variable.
After storing the file contents in a variable, I use it to create an exported resource.
I tried using "require" and "->" to ensure ordering between my resources, but it turns out the error may be caused by the path to my file (it contains a variable name).
You're using single quotes instead of double quotes on that line; I'm guessing that's why the variable isn't interpolated.
I.e. it should be
key => file("/home/${name}/.ssh/id_rsa.pub"), ## line causing error
instead
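For context, a minimal illustration of the quoting rule (not from the original answer): in Puppet, variables are interpolated only inside double-quoted strings, while single-quoted strings are taken literally.

```puppet
$name = 'alice'
# Single quotes: the literal characters '${name}' are kept, so file()
# would look for a path that does not exist
notice('/home/${name}/.ssh/id_rsa.pub')
# Double quotes: interpolated to /home/alice/.ssh/id_rsa.pub
notice("/home/${name}/.ssh/id_rsa.pub")
```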
I have Perl "config files" containing data structures like this:
'xyz' => {
'solaris' => [
"value1",
"valueN",
],
'linux' => [
"valueX",
"valueN",
],
},
I call them with a simple:
%config = do '/path/to/file.conf';
Now I would like to "generate" config files like this (build the data structure directly and print it to a config file).
I can fill the hash of hashes (of arrays or anything) in the normal way, but how do I dump it to a config file afterwards?
Is there a clean and easy way of doing it,
instead of having to do dirty things like:
print $FH "'xyz' => {\n";
print $FH " 'solaris' => [\n";
etc.
i "guess" Data::Dumper could do that..
thanks!
You want:
$Data::Dumper::Terse = 1;
See the documentation.
$Data::Dumper::Terse or $OBJ->Terse([NEWVAL])
When set, Data::Dumper will emit single, non-self-referential values as atoms/terms rather than statements. This means that the $VARn names will be avoided where possible, but be advised that such output may not always be parseable by eval.
Update (to address the comment below):
Data::Dumper will add the correct punctuation in order for you to get back exactly what you give it. If you give it a hash reference, then you will get a string that starts and ends with curly braces.
$ perl -MData::Dumper -E'$Data::Dumper::Terse=1; say Dumper { foo => { bar => "baz" }}'
{
'foo' => {
'bar' => 'baz'
}
}
If you give it an array reference, then you will get back a string that starts and ends with square brackets.
$ perl -MData::Dumper -E'$Data::Dumper::Terse=1; say Dumper [ foo => { bar => "baz" }]'
[
'foo',
{
'bar' => 'baz'
}
]
If, for some reason, you want neither of those, then give it a list of values.
$ perl -MData::Dumper -E'$Data::Dumper::Terse=1; say Dumper ( foo => { bar => "baz" })'
'foo'
{
'bar' => 'baz'
}
If you have a hash reference and you don't want the surrounding braces (which seems like a strange requirement, to be honest) then dereference the reference before passing it to Dumper(). That will convert the hash reference to a hash and the hash will be "unrolled" to a list by being passed to a function.
$ perl -MData::Dumper -E'$Data::Dumper::Terse=1; $ref = { foo => { bar => "baz" }}; say Dumper %$ref'
'foo'
{
'bar' => 'baz'
}
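To tie this back to the original question, here is a minimal round-trip sketch (my own illustration; the path /tmp/file.conf is made up, and Data::Dumper is core Perl) that dumps a structure with Terse set and reads it back with do:

```perl
use strict;
use warnings;
use Data::Dumper;

# build the structure normally
my %config = (
    xyz => {
        solaris => [ "value1", "valueN" ],
        linux   => [ "valueX", "valueN" ],
    },
);

# dump without the $VAR1 prefix so the file can be evaluated back
$Data::Dumper::Terse    = 1;
$Data::Dumper::Sortkeys = 1;    # stable key order between runs

open my $out, '>', '/tmp/file.conf' or die $!;
print $out Dumper(\%config);    # a hashref dumps as { ... }
close $out;

# do returns the last evaluated expression - here a hash reference
my $config = do '/tmp/file.conf';
print $config->{xyz}{solaris}[0], "\n";    # prints "value1"
```

Note that because a reference was dumped, do returns a hash reference rather than the flat list in the question; assign it to a scalar as above, or dereference with %{ do '/tmp/file.conf' } to keep the original %config = do ... style.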
Good evening :-)
I am learning Symfony (3) now, and I would like to write tests for my classes. I've read that unit tests shouldn't use a database and should mock its objects instead.
But despite this, I would like to create a database (e.g. MySQL) in setUp() of a KernelTestCase, load its content from a file, run the tests (simple unit tests), and purge it in tearDown().
Is it possible to do this with a dumped MySQL file?
What is the best (? laziest) way to do it?
I would rather read a prepared (dumped) SQL file than run doctrine:schema:update from the ORM classes. Where should the file go in Symfony 3? How do I read it and create the database content?
Please help me.
I have been using this for years:
Load test data with the DoctrineFixtures bundle
Launch the tests
Repeat! For me this is the most efficient approach, as you load your test data only once, not before every test. Of course, you will have to think harder about your test scenarios.
For example, think about a CRUD test:
Test 1:
Check that the list has 1 item (loaded by fixtures)
Test 2:
Create an item
Check that the list has 2 items
Test 3:
Delete an item
Check that the list has 1 item
So the tests must be executed in exactly this order. If you load fixtures between each test, you don't have to take care of that order, but it will make your tests slow.
I feel loading fixtures once is better because the suite then acts as a user normally would: creating, deleting, updating items... and therefore you can check that there is no side effect between actions.
My load fixtures script:
#!/bin/bash
echo "##########################################################################"
echo "# Refresh data model, reload all reference data, load fixtures, #"
echo "# validate schema for the dev env. #"
echo "##########################################################################"
php bin/console doctrine:database:create --if-not-exists --env=dev
php bin/console doctrine:schema:drop --force --env=dev
php bin/console doctrine:schema:create --env=dev
php bin/console doctrine:schema:validate --env=dev
php bin/console doctrine:fixtures:load -n --env=dev
echo -e " --> DONE\n"
Or, if you want to load the database from SQL files, use:
php bin/console doctrine:database:import db.sqb --env=dev
instead of the fixtures load command.
Then, to launch the tests:
./bin/simple-phpunit --debug --verbose $1
$1 is an argument specifying the test suite to run (main, front, API, backend...), which you can configure in your phpunit.xml.dist file (omit it to run all the tests).
My solution is:
This loads all SQL files into the MySQL test DB defined in 'parameters_test.yml', runs the tests, and drops all DB tables before the next test, and so on for subsequent tests... It could probably be done more concisely and more correctly with the php bin/console doctrine:database:import ... command, as @Tokeeen.com said. Thank you for the help.
// tests/AppBundle/Repository/BotsDbTest.php
<?php
use Symfony\Bundle\FrameworkBundle\Test\KernelTestCase;
use Symfony\Component\Finder\Finder;
use Symfony\Component\Config\FileLocator;
use Symfony\Component\Yaml\Parser;

class BotsDbTest extends KernelTestCase
{
    private $doctr;
    private $db_cred;
    private $db;

    /**
     * {@inheritDoc}
     */
    protected function setUp()
    {
        $kernel = self::bootKernel();
        $this->doctr = $kernel->getContainer()
            ->get('doctrine')
            ->getManager();

        // for tests with loaded content
        $this->db = new \AppBundle\Wiks\BotsDb();

        // https://symfony.com/doc/current/bundles/extension.html
        // get DB credentials from "parameters_test.yml":
        $configDirectories = array( 'app/config' );
        $locator = new FileLocator( $configDirectories );
        $yamlUserFiles = $locator->locate( 'parameters_test.yml', null, false );

        // https://davidegan.me/parse-yaml-in-php-using-symfony-yaml/
        $yaml = new Parser();
        $yaml_array = $yaml->parse( file_get_contents( $yamlUserFiles['0'] ) );

        // the needed DB is the second one in Symfony - named database2 in "parameters_test.yml":
        $prefix_db = 'database2';
        // look for all keys with a matching suffix, e.g. 'database2_host'
        $needed_suffix = [ 'host', 'port', 'name', 'user', 'password' ];
        $this->db_cred = array();
        foreach ( $yaml_array['parameters'] as $key => $value ) {
            if ( strpos( $key, $prefix_db ) !== false ) {
                foreach ( $needed_suffix as $needed_key ) {
                    if ( strpos( $key, $needed_key ) !== false ) {
                        $this->db_cred[ $needed_key ] = $value;
                    }
                }
            }
        }

        if ( count( $this->db_cred ) == count( $needed_suffix ) ) {
            // check that all credentials were found:
            /* Array (
                 [host] => 127.0.0.1
                 [port] =>
                 [name] => db_name
                 [user] => user_name
                 [password] => ***
               ) */
            $finder = new Finder();
            // find all SQL files and pipe each into mysql as prepared test content
            $finder->files()->name('*.sql');
            foreach ( $finder->in( array( 'tests/dbcontent' ) ) as $file ) {
                $shell_command  = 'mysql --user='.$this->db_cred['user'].' --password='.$this->db_cred['password'];
                $shell_command .= ' '.$this->db_cred['name'].' < '.$file->getRealPath();
                shell_exec( $shell_command );
            }
        }
    }

    /**
     * {@inheritDoc}
     */
    protected function tearDown()
    {
        parent::tearDown();

        // remove DB content (all tables):
        $shell_command  = 'mysqldump --user='.$this->db_cred['user'].' --password='.$this->db_cred['password'].' ';
        $shell_command .= '--add-drop-table --no-data '.$this->db_cred['name'].' | ';
        $shell_command .= 'grep -e \'^DROP \| FOREIGN_KEY_CHECKS\' | ';
        $shell_command .= 'mysql --user='.$this->db_cred['user'].' --password='.$this->db_cred['password'].' '.$this->db_cred['name'];
        shell_exec( $shell_command );

        $this->doctr->close();
        $this->doctr = null; // avoid memory leaks
    }

    /** tests, tests, tests...
     *
     */
    public function test_getBots()
    {
        $res = $this->db->getBots( $this->doctr );
        $this->assertEquals( 5, count( $res ) );
    [...]
Helpful links:
How to remove all MySQL tables from the command-line without DROP database permissions?
https://davidegan.me/parse-yaml-in-php-using-symfony-yaml/
https://symfony.com/doc/current/components/finder.html
https://symfony.com/doc/current/bundles/extension.html
NagiosQL creates a file called servicetemplates.cfg.
For easier distribution of selected templates, I'd like to split each service definition into a separate file.
Sample servicetemplates.cfg
define service {
name imap_service
use generic_service
check_command check_service_imap
register 0
}
define service {
name ldapserver_ldap_service
service_description LDAP
use generic_service
check_command check_service_ldap
icon_image ldapserver.png
register 0
}
What I'd like to have is some kind of parser that creates files named after the "name" of the template, e.g. imap_service.cfg and ldapserver_ldap_service.cfg.
Each file has to contain the whole definition (define service { ... }).
The following code is a solution for me; it might not be perfect, as it expects a certain syntax of the Nagios object configuration.
The first parameter has to be the file name of the Nagios configuration file.
<?php
if (empty($argv[1])) {
    echo "First parameter: file of the Nagios object configuration\n";
    exit(1);
}

$starttag = 0;
$outfile  = '';
$outtext  = '';
$inputfile = $argv[1];
$cfgfile = file($inputfile) or exit(2);

foreach ($cfgfile as $line_num => $line) {
    # $outtext is reset whenever "define ... {" is found
    $outtext .= $line;

    # start tag of a new section: "define ... {"
    if (preg_match("/.*define.*{/i", $line)) {
        $starttag = 1;
        $outtext = $line;
    }

    # split the line containing "name"; the parameter after name is the later filename
    if (preg_match("/name[\s]+[\w]+/", $line) && $starttag == 1) {
        $keywords = preg_split("/[\s]+/", $line);
        $outfile  = $keywords[2];
        $outfile .= ".cfg";
        $outfile  = str_replace(' ', '_', $outfile);
    }

    # end tag of a section: "}"
    if (preg_match("/.*}/", $line) && $starttag == 1) {
        $starttag = 0;
        echo "Writing {$outfile}\n";
        file_put_contents($outfile, $outtext);
    }
}
echo "Read {$line_num} lines from {$inputfile}\n";
?>
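A lighter-weight sketch of the same split (my own alternative, not part of the answer above) can be done with awk; it assumes the same layout as the sample config, with a name directive inside every block:

```shell
# create a sample config matching the question's format
cat > servicetemplates.cfg <<'EOF'
define service {
        name                            imap_service
        use                             generic_service
        check_command                   check_service_imap
        register                        0
}
EOF

# reset the buffer on "define", remember the value of "name",
# and write the buffered block to <name>.cfg at the closing brace
awk '/define service/ { buf = "" }
     { buf = buf $0 ORS }
     /^[[:space:]]*name[[:space:]]/ { fname = $2 ".cfg" }
     /^}/ { printf "%s", buf > fname; close(fname) }' servicetemplates.cfg

cat imap_service.cfg
```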
I have an array of hashes (AoH) which looks like this:
$VAR1 = [
          {
            'Unit' => 'M',
            'Size' => '321',
            'User' => 'test'
          },
          {
            'Unit' => 'M',
            'Size' => '0.24',
            'User' => 'test1'
          },
          ...
        ];
How do I write my AoH to a CSV file with separators, to get the following result:
test;321M
test1;0.24M
I've already tried this code:
my $csv = Text::CSV->new ( { sep_char => ';' } );
$csv->print( $fh1, \@homefoldersize );
But I get
HASH(0x....)
in my CSV file.
Pretty fundamentally - CSV is an array based data structure - it's a vaguely enhanced version of join. But the thing you need for this job is print_hr from Text::CSV.
First you need to set your header order:
$csv->column_names (@names); # Set column names for getline_hr ()
Then you can use
$csv -> print_hr ( *STDOUT, $hashref );
E.g.
$csv -> column_names ( qw ( User Size Unit ) );
foreach my $hashref ( @homefoldersize ) {
$csv -> print_hr ( *STDOUT, $hashref );
}
As you want to concatenate a couple of your columns though, that's slightly harder - you'll need to transform the data first, because otherwise Size and Unit are separate columns.
foreach my $hashref ( @homefoldersize ) {
$hashref -> {Size} .= $hashref -> {Unit};
delete $hashref -> {Unit};
}
Additionally - as another poster notes - you'll need to set sep_char to change the delimiter to ;.
As an alternative to that - you could probably use a hash slice:
@values = @hash{@keys};
But print_hr does pretty much the same thing.
All that is necessary is
printf "%s;%s%s\n", #{$_}{qw/ User Size Unit /} for #homefoldersize;
Try using the example for Text::CSV posted here: http://search.cpan.org/~makamaka/Text-CSV-1.33/lib/Text/CSV.pm
You will need to set sep_char => ';' to make it semicolon-separated.
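Putting the pieces together, a minimal self-contained sketch (my own; the array contents are copied from the question, the output filename is illustrative, and it assumes Text::CSV is installed):

```perl
use strict;
use warnings;
use Text::CSV;

my @homefoldersize = (
    { Unit => 'M', Size => '321',  User => 'test'  },
    { Unit => 'M', Size => '0.24', User => 'test1' },
);

# eol makes print() terminate each record with a newline
my $csv = Text::CSV->new({ sep_char => ';', eol => "\n" });

open my $fh, '>', 'homefoldersize.csv' or die $!;
for my $row (@homefoldersize) {
    # join Size and Unit into one field, matching the desired output
    $csv->print($fh, [ $row->{User}, $row->{Size} . $row->{Unit} ]);
}
close $fh;

# homefoldersize.csv now contains:
# test;321M
# test1;0.24M
```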
I've got two arrays, both consisting of a list of filenames. The filenames are identical in both arrays except for the extension.
i.e. filename.dwg and filename.zip
Now, I've assigned each list of files to an array.
i.e. @dwg_files and @zip_files
Ultimately, what I'm trying to do is check the last modification dates of two files of the same name in different arrays, then run a script if one is younger than the other. What I have so far seems to work, except when it compares two files with different names. I need it to compare a file from the first array with the identically named file in the other array.
i.e. asdf1.dwg should be correlated to asdf1.zip
my $counter = 0;
while ( $counter < @dwg_files ) {
    print "$counter\n";
    my $dwg_file = $dwg_files[$counter];
    my $zip_file = $zip_files[$counter];
    # check if zip exists
    if (-e $zip_file) {
        # checks last modification date
        if (-M $dwg_file < $zip_file) {
            *runs script to create zip*
        } else {
            *Print "Does not need update."*
        }
    } else {
        *runs script to create zip*
    }
    $counter++;
}
Doing some research, I figured I'd try to use a hash to correlate the two arrays. I just can't seem to figure out how to correlate them by name.
my %hash;
@hash{@dwg_files} = @zip_files;
I'm a complete Perl noob (I just started working with it last week). I've been stuck on this for days; any help would be much appreciated!
You could take the dwg file name, change the extension to zip, and then proceed with the checks:
for my $dwg_file (@dwg_files) {
    my $zip_file = $dwg_file;
    print "dwg:$dwg_file\n";
    $zip_file =~ s/[.]dwg/.zip/i or next;
    # check if zip exists
    if (-e $zip_file) {
        # checks last modification date
        if (-M $dwg_file < -M $zip_file) {
            # *runs script to create zip*
        } else {
            # *Print "Does not need update."*
        }
    } else {
        # *runs script to create zip*
    }
}
To store all of the filenames in a hash, you could do something like this:
#!/usr/bin/perl
use Data::Dumper;

# grab all dwg and zip files
my @dwg_files = glob("*.dwg");
my @zip_files = glob("*.zip");

sub hashify {
    my ($dwg_files, $zip_files) = @_;
    my %hash;
    # iterate through one of the arrays
    for my $dwg_file ( @$dwg_files ) {
        # parse the base name out of the filename
        my ($name) = $dwg_file =~ /(.*)\.dwg/;
        # store an entry in the hash for both the zip and dwg files,
        # of the form:
        #   "asdf1" => ["asdf1.dwg", "asdf1.zip"]
        $hash{$name} = ["$name.dwg", "$name.zip"];
    }
    # return a reference to the hash
    return \%hash;
}

# \ creates a reference to the arrays
print Dumper( hashify( \@dwg_files, \@zip_files ) );
This is what the resulting hash looks like:
{
'asdf3' => [
'asdf3.dwg',
'asdf3.zip'
],
'asdf5' => [
'asdf5.dwg',
'asdf5.zip'
],
'asdf2' => [
'asdf2.dwg',
'asdf2.zip'
],
'asdf4' => [
'asdf4.dwg',
'asdf4.zip'
],
'asdf1' => [
'asdf1.dwg',
'asdf1.zip'
]
};