Vagrant Chef path for importing a database

I have learned a lot about setting up Vagrant with Chef, but I am hitting a wall since I am new to Ruby, Vagrant, and Chef, and I am not the biggest developer; I am mostly front end, but I am trying to set up a better environment to develop in.
I have searched and found great answers, but I am left with one final question.
I have this code importing the database, but I cannot figure out where to place the dump file it imports from...
# import an SQL dump into the my_database database
execute "import" do
  command "mysql -u root -p\"#{node['mysql']['server_root_password']}\" my_database < /chef/vagrant_db/database-name.mysql"
  action :run
end
So I need to know where the path should start from: the top-level home directory, or the top-level folder where I run vagrant up? Where it is currently, and the few other places I have tried, are not working.
Any ideas would be great. I have searched Google so much that I am almost ready to give up.
Thanks
Tim

I would recommend using Chef::Config[:file_cache_path] for this. Let's say you want to get that SQL file from a remote web server:
db = File.join(Chef::Config[:file_cache_path], 'database.mysql')

remote_file db do
  source 'http://my.web.server/db.mysql'
  action :create_if_missing
  notifies :run, 'execute[import]', :immediately
end

execute "import" do
  command "mysql -u root -p\"#{node['mysql']['server_root_password']}\" my_database < #{db}"
  action :nothing
end
This will:
Add idempotency - meaning it won't try to import the database on each run
Leverage Chef's file_cache_path, which is persisted and guaranteed to be writable on supported Chef systems
Be extensible (you could easily change remote_file to cookbook_file or some custom resource to fetch the database)
Now, getting the file from Vagrant is a different story. By default, Vagrant mounts the directory where the Vagrantfile is located on the host (your local laptop) at /vagrant on the VM (the guest machine). You can mount additional locations (called "shared folders") from anywhere on your local laptop.
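For example, a minimal Vagrantfile sketch that would make the /chef/vagrant_db path from the question resolve (the box name and the host-side data directory here are assumptions for illustration):
# Vagrantfile - lives in the project root on the host
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64" # illustrative; any box works

  # The project root is auto-mounted at /vagrant. This extra shared folder
  # exposes host ./data (next to the Vagrantfile) as /chef/vagrant_db in the VM.
  config.vm.synced_folder "data", "/chef/vagrant_db"
end
With that in place, a dump saved on the host at data/database-name.mysql is readable in the guest at /chef/vagrant_db/database-name.mysql, which is what the execute resource above expects.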
Bonus
If you are running the database on your local machine, you can actually share the socket over a shared folder with Vagrant :). Then you don't even need MySQL on your VM - it will use the one running on your host laptop.
Sources:
I write a lot of Chef :)

Related

I'm trying to import an external Redis database (.RDB file) into a Redis installation on Windows, but the new data is not being loaded

I have been trying for hours to import a .RDB Redis database file into a new installation on my local machine. I have followed all the steps on Stack Overflow that say to basically drop the dump.rdb into the installation folder (i.e. wherever it's configured to read from in the .conf file; see the first screenshot).
I make sure that the Redis server is not running when I place the file; then, when I restart the server, open redis-cli, and do something like KEYS *, it says there's nothing there. What's going on? All of my .conf settings are the default settings.
The following line from your log suggests that the RDB is indeed loaded:
[9480] 07 Jun 10:34:11.290 * DB loaded from disk: 3.540 seconds
And this line, taken from the output of INFO, tells the whole story:
db2:keys=457985,expires=0,avg_ttl=0
Your keys are sitting in the database numbered 2, so to access them you'll need to issue the following command upon connecting to Redis:
SELECT 2
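A quick sanity check from redis-cli might look like this (the key count is the one from your INFO output; the rest is illustrative):
redis-cli
127.0.0.1:6379> SELECT 2
OK
127.0.0.1:6379[2]> DBSIZE
(integer) 457985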
BTW - numbered (a.k.a. "shared") Redis databases are a bad habit that you should stop practicing. If you're looking for the reasons why (besides this little mix-up), read here: https://redislabs.com/blog/benchmark-shared-vs-dedicated-redis-instances

get database from xampp (not via phpmyadmin)

I would like to ask if it is possible to get my database from an offline (not functioning) XAMPP?
You see, I backed up my database earlier, but I am not sure whether it contains all the data I need now, and the DB is pretty big (around 50 tables). I wanted to go for a local installation of Apache, MySQL, and PHP for my web applications, so I have reinstalled MySQL and want to use my own local Apache server instead of XAMPP.
I would like to know where I can find some .sql file or similar stored in XAMPP that would otherwise be accessible via phpMyAdmin? Is that even possible? I have scrolled through the XAMPP folder and tried to figure out where it could be, but didn't find anything.
Thanks for the help.
EDIT
I am on a Mac running Mavericks.
First go to localhost/phpmyadmin and create a database with the same name you had before. Then import your database file through the Browse button.
If your database file is example.sql, then the database you create should be named example; import example.sql into it.
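Equivalently, from the command line (assuming the mysql client is on your PATH; the name example and the file example.sql are placeholders):
mysql -u root -p -e "CREATE DATABASE example"
mysql -u root -p example < example.sql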

Key steps to uploading a Drupal website from Local to live using a hosting firm

I'm a newbie to pushing Drupal websites from local to live via cPanel with a hosting company, and I wondered if there are any key steps I need to follow? I usually end up with 500 Internal Server errors or no themes showing, so not a good start!
The steps I follow are:
Export the database from my local phpMyAdmin
Log into my hosting cPanel and create the database on there
Create a user for the database (with password)
Change the settings.php to match the database settings
Load all Drupal files via FTP
Create a 'tmp' folder in the sites/default/files directory
What am I doing wrong?! Is it something to do with the .htaccess file that causes the error, or that stops my theme from showing?
Any help would be much appreciated! So stressful and frustrating as a newbie! Once I've done one, I'm hoping it'll be plain sailing!!
Thanks!
C
You have the basic steps right. Check the PHP error logs on the server (probably accessible via the control panel if you don't have SSH access); they should give you more information as to what actually caused the 500 errors.
I doubt it is an .htaccess issue unless you are doing something crazy in there.
Can you see the Drupal admin at all? If so, clear the cache and check watchdog for clues as well.
It's easier to download and install Drupal again on the live server than to copy everything via FTP. The settings.php file is where your MySQL information is stored, so this file should not be copied. Follow Drupal's documentation on how to install Drupal at https://drupal.org/documentation/install/download
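For reference, the piece of settings.php you would re-enter on the live server (Drupal 7 syntax; every value below is a placeholder) looks like this:
$databases['default']['default'] = array(
  'driver'   => 'mysql',
  'database' => 'live_db_name',  // the database created in cPanel
  'username' => 'live_db_user',
  'password' => 'secret',
  'host'     => 'localhost',
  'prefix'   => '',
);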
To transfer your database, install and enable the Backup and Migrate module on your local server from https://drupal.org/project/backup_migrate and back up your database locally.
After Drupal is installed on the live server, go ahead and copy your modules, themes, and files from /sites/all and /sites/default/files (or any non-Drupal-core files that you may have created). Enable and use the Backup and Migrate module to restore your database to your live server. You may need to adjust the php.ini file if the database is over 8 MB.
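If you do hit that limit, these are the php.ini directives to raise (the 64M values are only examples):
; php.ini - allow larger uploads for the database restore
upload_max_filesize = 64M
post_max_size = 64M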

Restore PostgreSQL database from mounted volume

My EC2 database server failed, preventing SSH or other access (not sure why ... grrr AWS ... that's another story).
I was able to make a snapshot of the EBS root volume, but I cannot boot a new instance from this volume (I'm guessing the boot partition is corrupt). However, I can attach and mount the volume on a new instance.
Now I need to get PostgreSQL 8.4 on the new machine (Ubuntu 10.04) to load the data from the mounted volume. Is this possible? I've tried:
pg_ctl start -D /<mount_dir>/etc/postgresql/8.4/main/
But no joy ... PostgreSQL just starts with empty tables.
Is /etc/postgresql/8.4/main/ the correct location for PostgreSQL data files?
Is there a way to recover the data from the mounted volume in a way that PostgreSQL can read again?
(You should really specify your distro and version, etc, with this sort of system admin question.)
Running Pg via pg_ctl as shown above should work, assuming the original database was from Pg 8.4 and so are the binaries you're trying to start it with. Perhaps you forgot to stop the instance of PostgreSQL automatically started by the distro? Or you connected on the wrong port, so you got the distro's default instance instead of your DB on another port (or a different Unix socket path, for Unix sockets)?
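One way to rule that out (port 5433 is arbitrary; the data path is the one from the question) is to stop the distro's instance and start yours on a port you know is free:
sudo /etc/init.d/postgresql-8.4 stop    # stop the distro's default instance
sudo -u postgres pg_ctl start -D /<mount_dir>/etc/postgresql/8.4/main/ -o "--port=5433"
psql -p 5433 -U postgres -l             # list the databases of the instance you just started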
Personally, I wouldn't do what you're doing anyway. First, before I did anything else, I'd make a full backup of the entire data directory, because you clearly don't have good backups; otherwise you wouldn't be worrying about this. Take them now, because if you break something while restoring, you're going to hate yourself. As demonstrated by this fault, trusting Amazon's storage (snapshot or otherwise) probably isn't good enough.
Once you've done that: the easiest way to restore your DB will be, on a new instance that you know holds no important data and that has the same major version of PostgreSQL (e.g. "8.4" or "9.0") installed as your original instance did:
/etc/init.d/postgresql-8.4 stop
datadir=/var/lib/postgresql/8.4/main
rm -rf "$datadir"
# note: on Debian/Ubuntu the data files live under var/lib, not etc, on the mounted volume
cp -aR /<mount_dir>/var/lib/postgresql/8.4/main/ "$datadir"
chown -R postgres:postgres "$datadir"
/etc/init.d/postgresql-8.4 start
In other words: take a copy, fix the permissions, start the DB.
You might need to edit /etc/postgresql/8.4/main/postgresql.conf and/or /etc/postgresql/8.4/main/pg_hba.conf, because any edits you made to the originals aren't there any more; they're on your corrupted root FS. Under Debian, the postgresql.conf and pg_hba.conf in the datadir are just symlinks to the ones in etc - something I understand the rationale behind, but don't love.
Once you get it running, do an immediate pg_dumpall and/or just a pg_dump of your important DB, then copy it somewhere safe.
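For example (the output paths and database name are placeholders):
sudo -u postgres pg_dumpall > /safe/location/all_databases.sql   # everything, roles included
sudo -u postgres pg_dump my_important_db > /safe/location/my_important_db.sql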

Running batch file remotely using Hudson

What is the simplest way to schedule a batch file to run on a remote machine using Hudson (latest and greatest version)? I was exploring the master slave setup. I created a dumb slave but I am not sure what the parameters should be so that I can trigger the batch file in the remote slave machine.
Basically, I am trying to run two different batch files on two different remote machines sequentially, triggered from my machine (the master). The step-by-step guide on the Hudson website is a dead link. There are similar questions posted on SO, but they do not quite work for me when I use the parameters they mention.
If anyone has done something similar, please suggest ways to make this work.
(I know how to set up jobs and add a step to run a batch file, etc.; what I am having trouble configuring is doing this on a remote machine using Hudson's built-in features.)
UPDATE
Thank you all for the suggestions. Quick update on this:
What I wanted to get done is partially working; below are the steps I followed to get there -
Created a new node from Manage Nodes -> New Node -> set # of executors to 1, set Remote FS root to '/var/hudson', set Launch method to JNLP, set the slave name, and saved.
Once the slave was set up (from the master machine), I logged into the slave's physical machine, downloaded _slave.jar from http://masterserver:port/jnlpJars/slave.jar, and ran the following from the command line at the download location: java -jar _slave.jar -jnlpUrl http://masterserver:port/computer/slavename/slave-agent.jnlp. The connection was made successfully.
Checked 'Restrict where this project can be run' in the master job configuration, and set the parameter to the slave name.
Used 'Add Build Step' to add my batch job script.
What I am still missing now is a way to connect to two slaves from one job in sequence - is that possible?
It is fairly easy and straightforward. Let's assume you already have a slave running. Then you configure the job as if you were working locally on the target box. The setting for 'Restrict where this project can be run' needs to be the node that you want the job to run on. This is all for the job configuration.
For the slave configuration, read the following pages.
Installing Hudson as a Windows service
Distributed builds
On Windows I prefer to run the slave as a service and let the remote machine manage the start up and shut down of the slave. The only disadvantage with this is that you need to upgrade the client every time you update the server. Just get the new client.jar from the server after the upgrade and put it on the slave; then restart the slave and you are done.
I had trouble using the install-as-a-service option for the slave, even though I did it as a local administrator, so I then used srvany to wrap the jar into a service. Here is a blog about it. The command that you need to wrap, you will get from your Hudson server on the slave page. For all of this to work, you should set up the slave management as JNLP.
If you have an SSH server on your target machine, you can use the SSH slave settings. These work for me like a charm; I use them with my Unix slaves. So far the SSH option with Unix is less of a hassle than the Windows service clients.
I had some similar trouble with slave setup and wrote up this blog post - I was running on Linux rather than Windows, but hopefully this will help.
I don't know how to use built-in Hudson features for this job, but in one of my project builds, I run a batch file that in turn uses PsTools to run the job on a remote server. I found PsTools extremely easy to use - download, unpack, and run the command with the right parameters - hence I opted to use this.
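As an illustration, a PsExec call (PsExec is part of the PsTools suite) that runs a batch file on a remote machine looks like this; the machine name, credentials, and script path are placeholders:
psexec \\remote-server -u DOMAIN\builduser -p secret cmd /c "C:\scripts\nightly-build.bat"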
