Empty my Sqlite3 database in RoR

I am working on a Ruby on Rails 3 web application that uses sqlite3. I have been testing the application by creating and destroying things in the database on the fly, sometimes through the new and edit actions and sometimes through the Rails console.
I would now like to empty the database completely and be left with only the empty tables. How can I achieve this? I am working with a team, so I am interested in two answers:
1) How do I empty the database on my own machine?
2) How can the others empty theirs (if that is possible), given that some of them are using MySQL rather than sqlite3? (We are all working on the same project through an SVN repository.)

To reset your database, you can run:
rake db:schema:load
This recreates your database from your schema.rb file (maintained by your migrations). It also protects you from migrations that may later fail due to code changes.
Your dev database should be specific to your environment; if you need certain data, add it to your db/seeds.rb file. Don't share a dev database, as you'll quickly get into situations where other people's changes make your copy incompatible.
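A minimal sketch of the full reset, assuming a standard Rails 3 layout (db/schema.rb and db/seeds.rb):
rake db:schema:load    # rebuild all tables from db/schema.rb (existing data is lost)
rake db:seed           # reload any baseline data you keep in db/seeds.rb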

Download sqlitebrowser here: http://sqlitebrowser.org/
Install it, run it, click Open Database (top left) and point it to locationOfYourRailsApp/db/development.sqlite3.
Then switch to the Browse Data tab, where you can delete or add data.

I found that deleting the development.sqlite3 file from the db folder and then running rake db:migrate from the command line solves the problem for everyone on my team who is working on sqlite3.
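In other words, roughly (assuming the default development database file name):
rm db/development.sqlite3
rake db:migrate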

As far as I know there is no user/GRANT management in sqlite, so it is difficult to control access; you can only protect the database through file permissions.
If you want an empty database for test purposes, generate it once, copy the file somewhere safe, and use a fresh copy of that file just before each test run.
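A minimal sketch of that copy-a-template approach; the paths are just placeholders:
cp db/empty_template.sqlite3 /tmp/empty_template.sqlite3   # keep a pristine copy of the empty database somewhere safe
cp /tmp/empty_template.sqlite3 db/test.sqlite3             # drop a fresh copy in place just before each test run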

Related

How do I completely delete a MarkLogic database along with its servers and forests?

I have multiple databases on my local machine which I do not need. Can I run a curl script or a REST API command to delete a database, its servers, and all of its forests, so that I can use gradle to just deploy them again?
I have tried to manually delete the server first, then the database and then the forests. This is a lengthy process.
I want a single command to do the whole job for me instead of manually having to delete the components one by one which is possible through the admin interface.
Wagner Michael has a fair point in his comment. If you already used (ml-)Gradle to create servers and databases, why not use its mlUndeploy -Pconfirm=true task to get rid of them? You could potentially even use a fake project, with stub configs to get rid of a fairly random set of databases and servers, though that still takes some manual work.
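For reference, a hedged sketch of that undeploy call, assuming ml-gradle is already configured in the project directory:
./gradlew mlUndeploy -Pconfirm=true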
By far the quickest way to reset your entire MarkLogic installation is to stop it and wipe its data directory. This SO question gives instructions on how to do that, as part of a solution for recovering when you have lost your admin password:
https://stackoverflow.com/a/27803923/918496
HTH!

What's the correct way to deal with databases in Git?

I am hosting a website on Heroku, and using an SQLite database with it.
The problem is that I want to be able to pull the database from the repository (mostly for backups), but whenever I commit & push changes to the repository, the database should never be altered. This is because the database on my local computer will probably have completely different (and irrelevant) data in it; it's a test database.
What's the best way to go about this? I have tried adding my database to the .gitignore file, but that results in the database not being versioned at all, which prevents me from pulling it when I need to.
While git (like most other version control systems) can track binary files such as databases, it really only works well for text files. In other words, you should never use a version control system to track constantly changing binary database files (unless they are created once and almost never change).
One popular way to still track a database in git is to track a text dump of it. For example, an SQLite database can be dumped into a *.sql file using the sqlite3 utility (the .dump command). Even with dumps, though, it is only really appropriate to track template databases that do not change very often, and to create the binary database from such dumps with scripts as part of your standard deployment.
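A rough sketch of that dump approach, assuming the database lives at db/development.sqlite3:
sqlite3 db/development.sqlite3 .dump > db/dump.sql   # text dump that git can diff
git add db/dump.sql
sqlite3 db/fresh.sqlite3 < db/dump.sql               # rebuild a binary database from the dump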
You could add a pre-commit hook to your local repository that will unstage any files you don't want to push.
e.g. add the following to .git/hooks/pre-commit
git reset ./file/to/database.db
When working on your code (and potentially modifying your database), you will at some point end up with:
$ git status --porcelain
M file/to/database.db
M src/foo.cc
$ git add .
$ git commit -m "fixing BUG in foo.cc"
M file/to/database.db
.
[master 12345] fixing BUG in foo.cc
1 file changed, 1 deletion(-)
$ git status --porcelain
M file/to/database.db
This way you can never accidentally commit changes made to your database.db.
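One caveat: git only runs hooks that are marked executable, so you may also need:
chmod +x .git/hooks/pre-commit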
Is it the schema of your database that you're interested in versioning, while making sure you don't version the data within it?
I'd exclude your database from git (using the .gitignore file).
If you're using an ORM and migrations (e.g. Active Record) then your schema is already tracked in your code and can be recreated.
However if you're not then you may want to take a copy of your database, then save out the create statements and version them.
Heroku doesn't recommend using SQLite in production; they recommend their Postgres service instead, which lets you perform many tasks on the remote DB.
If you want to pull the live database from Heroku the instructions for Postgres backups might be helpful.
https://devcenter.heroku.com/articles/pgbackups
https://devcenter.heroku.com/articles/heroku-postgres-import-export
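If you go the Postgres route, a hedged sketch of pulling a backup with the Heroku CLI (command names may vary slightly between CLI versions, and your-app-name is a placeholder):
heroku pg:backups:capture --app your-app-name
heroku pg:backups:download --app your-app-name   # writes latest.dump to the current directory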

SQLite vs. SQLCE Deployment

I am in the process of writing an offline-capable smartclient that will have syncing capability back to the main backend when a connection can be made. As a side note, I considered the Microsoft Sync Framework but since I'm really only going one-way I didn't feel it would buy me enough to justify it.
The question I have is related to SQLite vs. SQLCE and ClickOnce deployments. I've dealt with SQLite before (impressive little tool) and I've dealt with ClickOnce, but never together. If I set up an installer for my app via ClickOnce, how do I ensure that the local database doesn't get wiped out during upgrades? Is it possible to upgrade the database (table structure, etc., if necessary) as part of the installer? Or is it better to use SQLCE for something like this? I definitely don't want to go the route of installing SQL Express or anything, as the overhead would be far too high for what I am doing.
I can't speak about SQLite, having never deployed it, but I do have some info about SQLCE.
First, you don't have to deploy it as a prerequisite. You can just include the DLLs in your project; you can check this article, which explains how. This gives you fine-grained control over what version is being used, and you don't have to deal with installing it per se.
Second, I don't recommend deploying the database as a data file and letting ClickOnce manage it. When you change that file, ClickOnce will publish it again and put it in the data directory, then take the previous one and put it in the \pre subfolder; if you have no code to handle that, your user will lose their data. So if you open the database file just to look at the table structure, you might be unpleasantly surprised to get a phone call from your user about their data being gone.
If you need to retain the data between updates, I recommend you move the database to the [LocalApplicationData] folder the first time the application runs, and reference it there. Then if you need to do any updates to the structure, you can do them programmatically and control when they happen. This article explains how to do this and why.
The other advantage to putting the data in LocalApplicationData is that if the user has a problem and has to uninstall and reinstall the application, his data is retained.
Regardless of the embedded database you choose, your database file (.sqlite or .sdf) will be part of your project, so you can use the "Build Action" and "Copy to Output Directory" properties of that file to control what happens to it during an install or update.
If you choose "Do not copy", the database file will not be copied; if you choose "Copy if newer", it will only be copied when you have a new version of the database file.
You will need to experiment a little, but by using these two properties you can fully control how and when your database file is deployed or updated.

Wordpress database migration

I've looked around the Wordpress forums about this and didn't find anything so I thought I might try here.
If you have a staging/dev Wordpress setup used for testing new plugins and such, how do you go about migrating the data in the staging database back to the production database? Is there a "Wordpress best practices" way to do this, or am I limited to manually migrating tables from one database to the other?
I have a script that mysqldumps a copy of my production Wordpress DB, restores it over my test Wordpress install & then corrects all the "production" settings & urls in the test DB.
Both my production & test databases live on the same server, but you could change the mysqldump settings to dump from a remote mysql server & restore to a local server quite easily.
Here are my scripts:
overwrite_test.coach_db_with_coache_db.sh
#!/bin/bash
dbUser="co*******"
dbPassword="*****"
dbSource="coach_production"
dbDest="coach_test"
tmpDumpFile="/tmp/$dbSource.sql"
mysqldump --add-drop-table --extended-insert --user=$dbUser --password=$dbPassword --routines --result-file=$tmpDumpFile $dbSource
mysql --user=$dbUser --password=$dbPassword $dbDest < $tmpDumpFile
mysql --user=$dbUser --password=$dbPassword $dbDest < /AdminScripts/change_coach_to_test.coach.sql
change_coach_to_test.coach.sql
-- Change all db references from @oldDomain to @newDomain
SET @oldDomain = 'coach.co.za';
SET @newDomain = 'test.coach.co.za';
SET @testUsersPassword = 'password';
UPDATE `wp_1_options` SET `option_value` = REPLACE(`option_value`,@oldDomain,@newDomain) WHERE `option_name` IN ('siteurl','home','fileupload_url');
UPDATE `wp_1_posts` SET `post_content` = REPLACE(`post_content`,@oldDomain,@newDomain);
UPDATE `wp_1_posts` SET `guid` = REPLACE(`guid`,@oldDomain,@newDomain);
UPDATE `wp_blogs` SET `domain` = @newDomain WHERE `domain` = @oldDomain;
UPDATE `wp_users` SET `user_pass` = MD5( @testUsersPassword );
-- Only valid for main wpmu site
UPDATE `wp_site` SET `domain` = @newDomain WHERE `domain` = @oldDomain;
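For the remote-dump variation mentioned above, only the mysqldump line changes; a hedged sketch (the hostname is a placeholder):
mysqldump --host=db.production.example.com --add-drop-table --extended-insert --user=$dbUser --password=$dbPassword --routines --result-file=$tmpDumpFile $dbSource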
Perhaps you are just looking for the wrong thing. Wouldn't a backup plugin handle this with ease? I know they exist for all the big CMS packages...
The two methods would be using the export/import feature under tools or copying the database. I email myself a copy of my production database weekly using the WordPress Database Backup plugin.
The import feature can be problematic for moving a wordpress blog, as you often have to adjust your php.ini file: the default maximum upload size on a hosted PHP installation tends to be too small.
I wanted to pull the database from my production wordpress website into an offline development copy of it on my desktop machine, so I could modify the site and test it with a full set of the existing blog content and history.
This proved to be problematic, as simply making an offline backup of the database and importing it into the local development database did not work.
Overcoming these problems in moving data from the production to the dev database can probably be used to go the other way as well - so I think you can just use these guidelines for what you want to do as well - just start with dev data and move it to prod.
The problems here were:
1) The permalink designations for the blog posts are all stored in the database as they would be for the online version, but my offline copy isn't at the domain address; instead it is in the localhost directory. So when I launch the site locally, although the css formatting and images are all in place (the image links being relative), the actual blog posts don't show up.
2) Many of the links throughout the site link back out to the internet, so if you try to navigate to archives, or comments, or categories, or the main posts, you get sent back out to the internet instead of staying in the database on the local machine.
To make sure I was doing this right, I blew away the wordpress install I had on my local machine and restarted from scratch.
Once I had a clean, new wordpress install and a brand new, freshly created default local database for it, I opened up the database in phpMyAdmin and took a look at the wp_posts table. In there, each record (in other words, each post) has a column titled "guid", which shows the location of that post. For example, the first one in a fresh, default install contains this "guid" value:
http://localhost/wordpress/?p=1
If you look in the wp_posts table of your online version, you'll see instead in this location the url to your site online.
You can't just import the tables wholesale into your local install, because you'll be importing all these outside references. It will make your local version impossible to navigate locally.
So, I created a backup copy of my online site's database and saved it locally as a .sql file. I then opened that file in a text editor (I used notepad++, a great piece of free software, but you could use any text editor). Things I needed to look out for:
1) For whatever reason, the tables on my online site aren't just, for example, "wp_posts"; they are "wp_something_posts"... there are some extra letters in the table names.
2) Any references to http://... that contain my online url instead of localhost/wordpress.
To keep it simple let's just do only the posts. In the backup copy of the .sql you've made of your online database, find the beginning of the wp_posts table. It will look something like this:
--
-- Table structure for table `wp_posts`
--
DROP TABLE IF EXISTS `wp_posts`;
CREATE TABLE `wp_posts` (
...and so on. Highlight everything above that, up to just below the comment marking the beginning of the database at the top of the file (it will say -- Database: 'your database name'), and delete it. Then go to the end of your wp_posts table and delete everything after the end of it, down to the bottom of the file. Now your file contains only your posts and nothing else.
Save this as a separate document. Call it posts.sql or something like that.
Now, in this posts.sql file, you need to do two find/replace actions:
1) Find every instance of the table name wp_something_posts and replace it with wp_posts. You only need to do this if your backup copy of your online database doesn't match your clean local install as far as the table names go. You want whatever the table name is in this file to match what your locally installed wordpress database uses as the table name. If you don't make these names match, you will just end up importing the posts into a new, differently named table, which will be of no use to you at all.
2) Find every instance of http://... (replace the ellipsis with your url) and replace it with http://localhost/wordpress (or whatever the local url to your dev version of the site is).
Now save this file again, to make sure you've got these changes set.
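As an aside, the same two replacements could be scripted instead of done in a text editor. A rough sed sketch (the table prefix and domain below are placeholders), with the caveat that plain text replacement can corrupt PHP-serialized values that embed string lengths:
sed -e 's/wp_something_/wp_/g' -e 's|http://www.example.com|http://localhost/wordpress|g' posts.sql > posts_local.sql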
Now that you've done that, use phpMyAdmin to get into the wordpress database on your local machine, select the "import" tab and navigate the selector to the posts.sql file you just made, and then import it. This will pull all the data in that file into your local wp_posts table.
When that finishes, browse your local wordpress site. You'll see all your posts in there now. Hooray!
You may need to do something similar for a few other tables if you want to bring in your comments, tags, categories, and static pages you've created, etc.
I realize this is a convoluted process. There is probably a tool out there somewhere that makes this activity easier, and if someone knows of one I'd love to find out about it. If someone knows of a better way to do this manually than what I've described, I'd love to know that as well!
Until then, this is the way I figured out how to do it. Hopefully it helps get you going in the right direction.
You need to handle the serialized objects. Here is a client-side HTML5 utility that handles them. Because it is all javascript, it's quite fast.
The alternative would be hooking a bash script into your deployment, so that once the site is deployed, the db is backed up and the serialized values are rewritten with the new domain.
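For what it's worth, WP-CLI (a separate command-line tool, not mentioned above) also has a serialization-aware replace; a hedged sketch, run from the WordPress root, with the domains as placeholders:
wp search-replace 'http://www.production.com' 'http://localhost/wordpress' --skip-columns=guid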
This about sums up the problems with the wordpress core architecture... but I wrote a plugin that solves the problems with domain names and absolute urls being stored in the database:
http://wordpress.org/extend/plugins/root-relative-urls/
This will solve the problems outlined by @oddbill. Though don't worry too much about the url being in the GUID column, as that field is never used for link generation.
@markratledge provides a couple of links to some lengthy documents that basically say this:
# export
mysqldump -u[username] -p[password] [database] > backup.sql
# import
mysql -u[username] -p[password] [database] < backup.sql
You'll want to exclude the comments/comments_meta tables if you push to production from staging, so you don't lose all of your comments and trackbacks (@DavidLaing's approach will wipe those out). And this assumes you only make content changes in your staging environment. If you want to make changes in both production and your staging environment, you'll need to write scripts that sync the data instead of wholesale overwriting it... good luck with that task; may I suggest adding created and modified timestamp columns before you invest too much time in the current schema.
And finally, @RussellStuever's approach is suitable in most circumstances; just be sure you know when you are browsing your hosts-mapped site versus your production site. And really be sure about it, because some browsers cache DNS lookups for days until you physically close them and start a new process. At that point switching hosts may take some time, and switching frequently may get frustrating. And if you need to test with an iPhone, you'll need to publish the site live first, or use a good router that can remap outbound internet requests to local servers, because you cannot modify hosts files on most mobile devices.
My plugin lets you develop and test from http://localhost/ or http://staging.server.local/ or http://www.production.com without any of the usual pitfalls. And then to migrate data, it's as simple as exporting and importing the data, no search & replace step or database setting tweaks necessary.
And don't rely on the import/export tool, it doesn't capture everything in typical wordpress installations, and still requires a needless search & replace step.

What are the best practices for database scripts under code control

We are currently reviewing how we store our database scripts (tables, procs, functions, views, data fixes) in subversion and I was wondering if there is any consensus as to what is the best approach?
Some of the factors we'd need to consider include:
Should we check in 'Create' scripts, or check in incremental changes as 'Alter' scripts?
How do we keep track of the state of the database for a given release
It should be easy to build a database from scratch for any given release version
Should a table exist in the database listing the scripts that have run against it, or the version of the database etc.
Obviously it's a pretty open ended question, so I'm keen to hear what people's experience has taught them.
After a few iterations, the approach we took was roughly like this:
One file per table and per stored procedure. Also separate files for other things like setting up database users, populating look-up tables with their data.
The file for a table starts with the CREATE command and a succession of ALTER commands added as the schema evolves. Each of these commands is bracketed in tests for whether the table or column already exists. This means each script can be run in an up-to-date database and won't change anything. It also means that for any old database, the script updates it to the latest schema. And for an empty database the CREATE script creates the table and the ALTER scripts are all skipped.
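A hedged sketch of that guard-every-statement idea, written here against MySQL; the table, column, and credentials are made up for illustration:
#!/bin/bash
DB_USER=app; DB_PASS=secret; DB_NAME=mydb
# Create the table only if it does not exist yet:
mysql -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" -e "CREATE TABLE IF NOT EXISTS customers (id INT PRIMARY KEY, name VARCHAR(100));"
# Add a column only if it is not already present, so the script stays re-runnable:
COL_COUNT=$(mysql -N -u "$DB_USER" -p"$DB_PASS" -e "SELECT COUNT(*) FROM information_schema.columns WHERE table_schema='$DB_NAME' AND table_name='customers' AND column_name='email';")
if [ "$COL_COUNT" -eq 0 ]; then
  mysql -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" -e "ALTER TABLE customers ADD COLUMN email VARCHAR(255);"
fi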
We also have a program (written in Python) that scans the directory full of scripts and assembles them into one big script. It parses the SQL just enough to deduce dependencies between tables (based on foreign-key references) and orders them appropriately. The result is a monster SQL script that gets the database up to spec in one go. The script-assembling program also calculates the MD5 hash of the input files, and uses that to update a version number that is written into a special table by the last script in the list.
Barring accidents, the result is that the database script for a given version of the source code creates the schema that code was designed to interoperate with. It also means that there is a single (somewhat large) SQL script to give to the customer to build new databases or update existing ones. (This was important in this case because there would be many instances of the database, one for each of their customers.)
There is an interesting article at this link:
https://blog.codinghorror.com/get-your-database-under-version-control/
It advocates a baseline 'create' script followed by checking in 'alter' scripts and keeping a version table in the database.
The upgrade script option
Store each change in the database as a separate sql script. Store each group of changes in a numbered folder. Use a script to apply changes a folder at a time and record in the database which folders have been applied.
Pros:
Fully automated, testable upgrade path
Cons:
Hard to see full history of each individual element
Have to build a new database from scratch, going through all the versions
I tend to check in the initial create script. I then have a DbVersion table in my database and my code uses that to upgrade the database on initial connection if necessary. For example, if my database is at version 1 and my code is at version 3, my code will apply the ALTER statements to bring it to version 2, then to version 3. I use a simple fallthrough switch statement for this.
This has the advantage that when you deploy a new version of your application, it will automatically upgrade old databases and you never have to worry about the database being out of sync with the software. It also maintains a very visible change history.
This isn't a good idea for all software, but variations can be applied.
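A hedged sketch of the same pattern as a shell loop rather than a switch statement; the DbVersion table, the numbered migrations/ directory, and the credentials are assumptions:
#!/bin/bash
DB_USER=app; DB_PASS=secret; DB_NAME=mydb
CURRENT=$(mysql -N -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" -e "SELECT version FROM DbVersion;")
# Apply every migration script whose number is higher than the stored version:
for script in $(ls migrations/*.sql | sort -V); do
  N=$(basename "$script" .sql)              # e.g. migrations/2.sql -> 2
  if [ "$N" -gt "$CURRENT" ]; then
    mysql -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" < "$script"
    mysql -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" -e "UPDATE DbVersion SET version=$N;"
  fi
done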
You could get some hints by reading how this is done with Ruby On Rails' migrations.
The best way to understand this is probably to just try it out yourself, and then inspecting the database manually.
Answers to each of your factors:
Store CREATE scripts. If you want to checkout version x.y.z then it'd be nice to simply run your create script to setup the database immediately. You could add ALTER scripts as well to go from the previous version to the next (e.g., you commit version 3 which contains a version 3 CREATE script and a version 2 → 3 alter script).
See the Rails migration solution. Basically they keep the table version number in the database, so you always know.
Use CREATE scripts.
Using version numbers would probably be the most generic solution — script names and paths can change over time.
My two cents!
We create a branch in Subversion and all of the database changes for the next release are scripted out and checked in. All scripts are repeatable so you can run them multiple times without error.
We also link the change scripts to issue items or bug ids so we can hold back a change set if needed. We then have an automated build process that looks at the issue items we are releasing and pulls the change scripts from Subversion and creates a single SQL script file with all of the changes sorted appropriately.
This single file is then used to promote the changes to the Test, QA and Production environments. The automated build process also creates database entries documenting the version (branch plus build id.) We think this is the best approach with enterprise developers. More details on how we do this can be found HERE
The create script option:
Use create scripts that will build you the latest version of the database from scratch, which is empty except the default lookup data.
Use standard version control techniques to store, branch, and tag versions and to view the history of your objects.
When upgrading a live database (where you don't want to lose data), create a blank second copy of the database at the new version and use a schema-comparison tool (such as one of Red Gate's) to generate and apply the upgrade.
Pros:
Changes to files are tracked in a standard source-code like manner
Cons:
Reliance on manual use of a 3rd party tool to do actual upgrades (no/little automation)
Our company checks them in simply because someone decided to put it in some SOX document that we follow. It makes no sense to me at all, except possibly as a reference document. I can't see a time we'd pull them out and try to use them again, and if we did we'd have to know which one ran first and which one to run after which. Backing up the database is much more important than keeping the Alter scripts.
For every release we need to deliver one update.sql file which contains all the new table scripts, alter statements, new/modified packages, roles, etc. This file is used to upgrade the database from version 1 to version 2.
Whatever we include in that update.sql file also needs to go into the individual per-object files. For example, an alter statement that adds a column should be reflected by modifying the table's create script (rather than appending the alter statement after the create table statement in the file), and the same goes for new tables, roles, and so on.
So whenever a user wants to upgrade, they run the update.sql file.
If they want to build from scratch, they use build.sql, which already contains all of those statements, so the database ends up in sync.
In my case, I built a shell script for this job: https://github.com/reduardo7/db-version-updater
As this is an open question: in my case I am trying to create something simple that is easy for developers to use, and I arrived at the following scheme.
Things I tested:
1) File-based script handling in git using GitLab CI. It does not work: collisions are created, the administration part has to be done by hand in case of disaster, and the development part is too complicated.
2) Use of permissions and access via mysql clients. There is no traceability of changes to the database, and the transition to production is manual.
3) Use of the programs mentioned here. They require uploading the structures and many adaptations, and you usually end up doing change control in Word anyway.
4) Repository usage. I could not control the DRP part or the backups properly, and I don't think it is a good idea to keep the backups on the same server; it also generates high lag for the process.
This is what worked best:
Manage permissions per user and generate traceability of everything that is sent to the database
Multi-platform
Separate development, QA, and production databases
Always take a backup before each modification
Manage an open repository for change control
Multi-server
Deactivate/activate access to the web page or app through endpoints
The initial project is at:
https://hub.docker.com/r/arelis/gitdb
There is an interesting article, now at a new URL: https://blog.codinghorror.com/get-your-database-under-version-control/
It's a bit old, but the concepts are still relevant. Good read!

Resources