Is there a tool for Django to install some static data (data necessary to have the application running) into the database?
./manage.py syncdb will create the database schema in the db; is there some tool for pushing static data into the db?
Thanks
Fixtures.
EDIT: Better example.
Fill your database with the static data you need, then execute this command:
./manage.py dumpdata --indent=4 > initial_data.json
After that, every ./manage.py syncdb will insert the data you entered the first time into the database.
If you are familiar with the JSON format (or XML or YAML), you can edit the initial data by hand. For details, check out the official documentation.
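For reference, a fixture is just a list of serialized objects. A minimal sketch of what a dumped initial_data.json could look like (the app, model, and field names here are invented for illustration):

[
    {
        "model": "myapp.category",
        "pk": 1,
        "fields": {
            "name": "Default category"
        }
    }
]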
My task is to write an assignment in Node.js and Bootstrap with a MongoDB database.
My next task is to share this with the assignee so that they can run the project in their local environment.
I can transfer the code via git, but how do I share the database as well?
You can see the MongoDB documentation for that:
mongodump -d <database_name> -o <directory_backup>
mongorestore -d <database_name> <directory_backup>
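For example, with a database named assignment_db (an illustrative name), the dump ends up in ./backup/assignment_db and can be restored from there:

mongodump -d assignment_db -o ./backup
mongorestore -d assignment_db ./backup/assignment_db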
There is also a video covering the mongoexport command.
You could use a database-as-a-service like mlab so that the database will be the same on both machines.
Or, if you don't want an external database, you could also create a Node.js script that initializes the database the first time the program is run.
Use Studio 3T and choose your collection. Right-click on the collection and export it in your desired format. Your colleague can import it the same way from Studio 3T.
First off, I want to say that I am not a DB expert and I have no experience with the Heroku service.
I want to deploy a Play Framework application to Heroku, and I need a database to do so. So I created a PostgreSQL database with this command, since it's supported by Heroku:
Users-MacBook-Air:~ user$ heroku addons:create heroku-postgresql -a name_of_app
And I got this as the response:
Creating heroku-postgresql on ⬢ benchmarkingsoccerclubs... free
Database has been created and is available
! This database is empty. If upgrading, you can transfer
! data from another database with pg:copy
So the DB now exists but is of course empty. For development I worked with a local H2 database.
Now I want to populate the Heroku DB using a SQL file, since it's quite a lot of data, but I couldn't find out how to do that. Is there a command for the Heroku CLI where I can hand over the SQL file as an argument and have it populate the database? The file basically consists of a few tables which get created and around 10000 INSERT commands.
EDIT: I also have CSV files of all the tables, so if there is a way to populate the Postgres DB with those, that would also be great.
First, run the following to get your database's name:
heroku pg:info --app <name_of_app>
In the output, note the value of "Add-on", which should look something like this:
Add-on: postgresql-angular-12345
Then, issue the following command:
heroku pg:psql <Add-on> --app <name_of_app> < my_sql_file.sql
For example (assuming your sql commands are in file test.sql):
heroku pg:psql postgresql-angular-12345 --app my_cool_app < test.sql
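As for the CSV files from the edit, one option (a sketch; the table and file names are illustrative) is psql's \copy meta-command, which reads a local file and streams it into the remote database. Open a session:

heroku pg:psql postgresql-angular-12345 --app my_cool_app

Then, at the psql prompt:

\copy my_table FROM 'my_table.csv' CSV HEADER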
I use the Doctrine migrations bundle to track changes in my database structure. I would like to ensure that when I'm deploying or adding a new server for my application:
(A) the database schema is up to date (doctrine:migrations:migrate)
(B) the database always contains a pre-defined set of data
For (B) a good example is roles. I want a certain set of roles to always be present. I realize this is possible with database migrations, but I don't like the idea of mixing schema changes with data changes. Also, if I used MySQL migrations I would have to create an equivalent SQLite migration for my test database.
Another option I'm aware of is data fixtures. However, from reading the documentation I get the feeling that fixtures are more for loading test data. Also, if I changed a role name, I don't know how that would be updated using fixtures (since they either delete all data in the database before loading or append to it). If I used append, unique keys would also be a problem.
I'm considering creating some sort of command that takes a set of configuration files and ensures that certain tables are always in a consistent state matching the config files, but if another option exists I'd of course like to use it.
What is the best way to handle loading and managing required data into a database?
If you're using Doctrine Migrations, you can generate an initial migration containing the whole database schema; after that, generate a migration (doctrine:migrations:generate or doctrine:migrations:diff) for every change made to the database structure AND also add queries there that migrate existing data.
Fixtures are designed to pre-populate data (with doctrine:fixtures:load) and, in my opinion, they should be kept up to date with the latest database schema and executed after doctrine:migrations:migrate / doctrine:schema:create.
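For illustration, a role fixture could look something like this (a hedged sketch assuming the DoctrineFixturesBundle; the Role entity and the namespaces are invented):

<?php

namespace AppBundle\DataFixtures\ORM;

use Doctrine\Common\DataFixtures\FixtureInterface;
use Doctrine\Common\Persistence\ObjectManager;
use AppBundle\Entity\Role;

class LoadRoleData implements FixtureInterface
{
    public function load(ObjectManager $manager)
    {
        // Ensure a fixed set of roles exists without purging anything,
        // so the fixture stays safe to run with --append
        foreach (['ROLE_USER', 'ROLE_ADMIN'] as $name) {
            if (!$manager->getRepository(Role::class)->findOneBy(['name' => $name])) {
                $role = new Role();
                $role->setName($name);
                $manager->persist($role);
            }
        }
        $manager->flush();
    }
}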
So finally:
Create a base migration with the initial database schema (instead of executing doctrine:schema:create, just generate a migration file and migrate it)
Create a new migration for each database schema change AND for migrating existing data (such as a role name change)
Keep fixtures up to date with the latest schema (you can use the --append option and only update fixtures instead of deleting all database data first)
Then, when deploying a new instance, you can run doctrine:schema:create, then doctrine:migrations:version --add --all --no-interaction (which marks all migrations as migrated, because you have already created the latest schema), and finally doctrine:fixtures:load, which will populate the data in the database (also the latest version, so the data migrations from the Doctrine migration files are not required).
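Spelled out as console commands (assuming a Symfony 2/3-style app/console, as in the snippet below), that deploy sequence is:

php app/console doctrine:schema:create
php app/console doctrine:migrations:version --add --all --no-interaction
php app/console doctrine:fixtures:load --append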
Note: existing instances should NOT use doctrine:schema:update, only doctrine:migrations:migrate. In our app we even block usage of this command in app/console:
use Symfony\Component\Console\Output\ConsoleOutput;
use Symfony\Component\Console\Helper\FormatterHelper;

// Deny using the doctrine:schema:update command
if (in_array(trim($input->getFirstArgument()), ['doctrine:schema:update'])) {
    $formatter = new FormatterHelper();
    $output = new ConsoleOutput(ConsoleOutput::VERBOSITY_NORMAL, true);
    $formattedBlock = $formatter->formatBlock(
        ['[[ WARNING! ]]', 'You should not use this command! Use doctrine:migrations:migrate instead!'],
        'error',
        true
    );
    $output->writeln($formattedBlock);
    die();
}
This is what I figured out from my experience. Hope you will find it useful :-)
I'm looking for the updated Django 1.5 command that can do the following action.
python manage.py reset <app>
What I basically want to do is DROP the tables and then UPDATE the database structure with a manage.py command.
The thing is that the reset command no longer works, and
manage.py flush
or
manage.py sqlclear <app>
just drop the database/table content.
What's the updated reset version for Django 1.5?
I think what you are looking for is South. South is a 3rd-party tool, which may soon be integrated into Django, that assists you with database migrations and schema changes. As it stands, Django 1.5 does not deal with schema changes very well, if at all. The only way to adjust a schema in Django 1.5 is to add new models, and you wouldn't want to engage in the practice of adding a new model to fulfill a desired table alteration or deletion. Most developers turn to a 3rd-party solution when they need to make schema adjustments.
See http://south.readthedocs.org/en/latest/about.html
See the tutorial: http://south.readthedocs.org/en/latest/tutorial/part1.html#tutorial-part-1
South is database-agnostic and deals with database migrations automatically for you, so if you change your schema it will detect the change in models.py and make the appropriate adjustments. You can include South as an app in your Django project and install it through pip. A typical workflow is sketched below.
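A rough sketch of that workflow ("myapp" is a placeholder app name):

pip install south
# add 'south' to INSTALLED_APPS, then create South's own tables:
python manage.py syncdb
# take over an existing app whose tables already exist:
python manage.py schemamigration myapp --initial
python manage.py migrate myapp --fake
# after each change to models.py:
python manage.py schemamigration myapp --auto
python manage.py migrate myapp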
Hope this helps
As a MySQL-specific alternative, you can create an application.py next to manage.py with the following code.
This will DROP and CREATE your database, then rebuild the tables from your models.py:
#!/usr/bin/python
import MySQLdb
import subprocess

dbname = "mydbname"
db = MySQLdb.connect(host="127.0.0.1", user="username", passwd="superpassword", db=dbname)
cur = db.cursor()

# Drop the whole database, which drops all of its tables with it
cur.execute("DROP DATABASE " + dbname)
# Recreate the now-empty database
cur.execute("CREATE DATABASE " + dbname)
db.close()

# Let manage.py recreate the tables from models.py
subprocess.call(['python', 'manage.py', 'syncdb'])
print "\n\nFinished!"
I've been reading up today on database synchronization in Magento.
One thing I am currently struggling with is what needs to be synced during development and during uploads to production. Assuming that a batch of changes will consist of changes to the DB and code alike, below is my understanding of a model workflow (I do not currently use a 'stage' server, so that is bypassed in this example):
1. Sync dev DB from production DB
2. Checkout a working copy of the code to the dev machine
3. Make changes and test them on the dev server
4. Accept changes and commit them to the svn repository
5. Touch Maintenance.flag on the production server and prepare for upgrades (this altogether eliminates sync issues from users interacting with live data that is about to change, right?)
6. Merge branches to trunk and deploy the repository to the production server
7. Sync dev DB back to production DB and test changes
It's items #1 and #7 that I don't fully understand when working with Magento:
What needs to be synced and what doesn't?
It seems ridiculous to me to sync order and customer info, so I wouldn't do it.
Obviously, though, I would want the product schema and data synced, plus any admin changes, module changes, etc. How should I handle that?
And what about HOW to sync? (MySQL dumps, import/export, etc.)
Currently I'm using Navicat 10 Premium, which has structure and data sync features (I haven't experimented with them yet, but they look like a huge help).
So I don't necessarily need specifics here (but they would help). More or less I want to know what works for you and how happy you are with that system.
If you are using the CE version, then:
ditch svn and use GIT :)
never sync a database; prepare your database upgrades as extension upgrade scripts (see the sketch after this list)
have 3 sites: dev, stage, live
the live database is copied over to stage and dev when needed
make all your admin changes on live and just copy the whole database down the line
This way you never have to sync a database. Plus, if you do all config changes via extension upgrade scripts, you can cold-boot your Magento to a new database structure wherever you want without losing the data structure.
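For example, an extension upgrade script that applies an admin config change could look roughly like this (a sketch for Magento 1; the module name, version numbers, and config path are invented for illustration):

<?php
// app/code/local/My/Module/sql/my_module_setup/upgrade-1.0.0-1.0.1.php
$installer = $this;
$installer->startSetup();

// Record the admin configuration change in the upgrade script,
// so every environment picks it up automatically
$installer->setConfigData('design/head/default_title', 'My Store');

$installer->endSetup();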
I use PHPUnit to build a dev db. I wrote a short script which dumps XML data from the live database, and I used it table by table, munging anything sensitive and deleting what I didn't need. The schema for my dev database never changes and never gets rebuilt; only the data gets dropped and recreated on each PHPUnit run.
This may not be the right solution for everyone, because it's never going to be good for syncing dev up to stage/production, but I don't need to do that.
The main benefit is how little data I need for the dev db. It's about 12000 lines of XML and handles populating maybe 30 different tables. Some small core tables persist, as I don't write to them, and many tables are empty because I do not use them.
The database is a representative sample, and is very small. Small enough to edit as a text file, and only a few seconds to populate each time I run tests.
Here's what it looks like at the top of each PHPUnit test. There's good documentation for PHPUnit and DbUnit:
<?php
require_once dirname(dirname(__FILE__)) . DIRECTORY_SEPARATOR . 'top.php';
require_once "PHPUnit/Extensions/Database/TestCase.php";

class SomeTest extends PHPUnit_Extensions_Database_TestCase
{
    /**
     * @return PHPUnit_Extensions_Database_DB_IDatabaseConnection
     */
    public function getConnection() {
        $database = MY_DB;
        $hostname = MY_HOST;
        $user     = MY_USER;
        $password = MY_PASS;
        $pdo = new PDO("mysql:host=$hostname;dbname=$database", $user, $password);
        return $this->createDefaultDBConnection($pdo, $database);
    }

    /**
     * @return PHPUnit_Extensions_Database_DataSet_IDataSet
     */
    public function getDataSet() {
        return $this->createXMLDataSet(dirname(dirname(__FILE__)) . DIRECTORY_SEPARATOR . 'Tests/_files/seed.xml');
    }
}
So now you just need a seed file that DbUnit reads from to repopulate your database each time the unit tests are invoked.
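For reference, the standard DbUnit XML dataset that createXMLDataSet() in the code above reads looks like this (table and values invented; the more compact flat XML variant, one element per row, is read with createFlatXMLDataSet() instead):

<?xml version="1.0" encoding="UTF-8"?>
<dataset>
    <table name="catalog_product_entity">
        <column>entity_id</column>
        <column>sku</column>
        <row>
            <value>1</value>
            <value>TEST-SKU-1</value>
        </row>
    </table>
</dataset>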
Start by copying your complete database twice. One copy will be your dev database and the second will be your "pristine" database, which you can use to dump XML data from in case you start having key issues.
Then, use something like my XML dumper against the "pristine" database to get your XML dumps and begin building your seed file.
generate_flat_xml.php -tcatalog_product_entity -centity_id,entity_type_id,attribute_set_id,type_id,sku,has_options,required_options -oentity_id >> my_seed_file.xml
Edit the seed file down to only what you need. The small size of the dev db means you can examine differences just by looking at your database versus what's in the text files. Not to mention it is much faster with less data.