How can I avoid evolution script creation and execution in Play framework 2.1?

I have just started with the Play framework (2.1): I copied the sample project (Zentasks) and am customising it. I removed all of the previous view, controller and model classes. When I run the app, my browser shows an evolution script and insists that I run it. But I do not want to create and execute this script, because my database and tables already existed before this app.
In addition, the script still contains DDL statements that create tables I have already deleted.
I removed the evolutions directory again and again, but the files are auto-generated each time, so that did not work.
I want to understand how this works, and how to avoid this annoyance.
Thanks.

There is a commented-out option, evolutionplugin=disabled, in application.conf for exactly this; just uncomment it:
# Evolutions
# ~~~~~
# You can disable evolutions if needed
evolutionplugin=disabled
To make evolutions work again, just comment the line out or set its value to enabled.
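If you would rather keep evolutions but skip the browser prompt, Play can also apply pending evolutions automatically. A minimal sketch for application.conf, assuming the default datasource (this key is my recollection of the Play 2.x setting, not something stated above, so verify it against the Play 2.1 docs):
# apply pending evolutions automatically instead of prompting in the browser
applyEvolutions.default=true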

Related

How do I run occasional tasks that update data in a database?

Occasionally I need to run tasks that update data in a database. I may never need to run them again, perhaps not even on a new server; I have no idea yet. For now I need to run them once, and only in a certain release. They should also stay outside of the git index.
Some tutorials suggest running them as a "custom migration": a second directory for migrations is created, called "custom_migrations", and they are run from there via Ecto.Migrator. But this causes a problem: I run all of the custom_migrations, then delete all of the migration files (because once they have run, I won't need them anywhere else, not even on a new server), then create new ones when a need arises, and then Ecto.Migrator complains about the absence of the migrations I have deleted.
I'm also aware of ./bin/my_app eval MyApp.Tasks.custom_task1, but it's not convenient because I have to call it manually, and passing arguments to a function via the command line is awkward.
What I want is: create several files that will be run once, in the current release. Store them in a certain directory of the application. Deploy the application. They get run automatically, probably on application boot, and then I remove them. Later I may create new ones, and only those new ones will need to be run.
How can I do this? What is the recommended way in Elixir/Phoenix?
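For reference, the "custom migration" approach mentioned in the question boils down to pointing Ecto.Migrator at a second directory. A minimal sketch, with :my_app, MyApp.Repo and the directory name as placeholders, e.g. run from a release task or on boot:
# run everything pending in the custom directory, oldest first
path = Application.app_dir(:my_app, "priv/repo/custom_migrations")
Ecto.Migrator.run(MyApp.Repo, path, :up, all: true)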

Setting up a CakePHP test database by using app/Config/Schema/schema.php

I'm setting up self-hosted continuous integration using buildbox.io. I need to create the tables and columns in my test database.
On my own computer, I've been using public $import = 'MyTable'; in every fixture, to copy the table definitions from the $default database connection in database.php. It works well, since my development database is always up to date with the latest migrations.
Also, it seems like a massive pain to do it the other way, where you'd have to manually keep your database field definitions up to date in your fixtures each time you make changes to your database. This seems especially redundant given that the list of fields is already up to date in app/Config/Schema/schema.php.
On the server, using public $import = 'MyTable'; won't work. Even if I did want to make the staging database my $default config when running tests, the staging database can't be relied upon to be up to date at all times.
So, my question is, how can I do it? Is there a way to tell Cake to use the definitions in schema.php for building its test database from fixtures? Or is the only way for me to manually add field definitions in all my fixtures? (that seems like a massive pain!)
What I do for fixtures is maintain a set of SQL statements that contain test data. The file is named something like schema_MD5HASH.sql.
When my tests run, the suite computes the MD5 hash of my app's schema.php file and uses it to run the matching schema_MD5HASH.sql. If that file doesn't exist, the tests fail, and it's just one extra step to remember to rebuild my fixtures.
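A rough sketch of that lookup, assuming MySQL and made-up paths (the real script will differ):
# hash the schema definition and load the matching fixture dump
HASH=$(md5sum app/Config/Schema/schema.php | awk '{print $1}')
mysql my_test_db < "tests/fixtures/schema_${HASH}.sql"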
Something else I've been playing with is:
annotating the schema with extra data for each field - something akin to a field_type
loading up the schema file in a fixture and reading in the specific table
parsing the field_type and generating fake data according to a library like Faker
This way is sort of weird, and you have to be careful that a schema dump doesn't override your annotations, but it works pretty okay.
OK, I've got a solution I'm happy with, that requires minimal changes to my code.
For the Buildbox CI environment, I was previously using only one database - the test database. I've now added another, so I've got a test and default database (along with $test and $default database configs).
During the Buildbox bootstrap process (where it runs bootstrap.sh - a file that sits on your CI server, outside your app) I call Console/cake schema create, which will populate the default database from schema.php.
Now, I can run my tests as normal, and they will copy the table definitions from the default database, as my $import setting recommends.
So, the default database never contains any data - it only exists so it can be created from schema.php, and can therefore be used to import table definitions during my tests.
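For reference, each fixture can then stay a one-liner. A minimal sketch for a hypothetical MyTable model (CakePHP 2.x):
class MyTableFixture extends CakeTestFixture {
    // copy the table definition from the $default connection,
    // which "cake schema create" has just built from schema.php
    public $import = 'MyTable';
}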
Here are the Cake-specific lines in my Buildbox bootstrap.sh:
echo -e "--- Installing plugins via composer"
buildbox-run "composer install"
echo -e "--- Setting up database"
buildbox-run "cp ./app/Config/database.buildbox.php ./app/Config/database.php"
# create the default database, so that we can use $import as a means of generating fixture data on the test database.
# say yes at all prompts: http://askubuntu.com/questions/338857/automatically-enter-input-in-command-line
buildbox-run "yes | ./app/Console/cake schema create"

When Magento widgets are run for the first time, do they alter the database?

I've been following this tutorial http://www.magentocommerce.com/knowledge-base/entry/tutorial-creating-a-magento-widget-part-1 to create a Magento widget as part of an extension I'm working on.
While the widget was created successfully and worked as I wanted, I changed the code and started getting the following error:
Warning: Invalid argument supplied for foreach() in app/code/core/Mage/Widget/Model/Widget/Instance.php on line 502
When I changed the code back, the error was still present. However when I copied all my module to a fresh Magento install then the error wouldn't appear.
Although my widget does not explicitly use the database, does anyone know whether the act of installing and uninstalling a Magento widget makes changes to the core database tables, and if it does, which tables are altered?
Thanks
The core_resource table contains a list of all modules, so adding a new module will cause a new row to be created.
If you have anything in your module's sql folder, that code will be run depending on your module's version.
Without knowing exactly what code was run and changed, it's hard to know what your specific problem is.
http://www.magentocommerce.com/knowledge-base/entry/magento-for-dev-part-6-magento-setup-resources
So if you followed the tutorial you linked to, I do not think it changed any database settings.
You can tell if a module will add tables or modify table columns by the following method.
Assume the module is called Foo_Bar and it is installed in the "community" code pool (as opposed to core or local).
Navigate to app/code/community/Foo/Bar. You will at minimum usually see etc and Block directories there.
If you see a sql directory then this module makes db changes. You also need to understand that a module is versioned: it may initially create a certain table and later modify it.
By way of example you can go to any core module and look for the same. For example I am running Enterprise 1.12 and went to:
app/code/core/Mage/Sendfriend/sql/sendfriend_setup
I see:
mysql4-upgrade-1.5.9.9-1.6.0.0.php
mysql4-upgrade-0.7.3-0.7.4.php
mysql4-upgrade-0.7.2-0.7.3.php
mysql4-upgrade-0.7.1-0.7.2.php
mysql4-install-0.7.0.php
install-1.6.0.0.php
Note the upgrade x-y nomenclature. That is what core_resource keeps track of.
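You can inspect what core_resource has recorded with a quick query (column names as in a stock Magento 1 schema; double-check on your version):
SELECT code, version, data_version FROM core_resource;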
If you are wondering where your new module's settings are saved, that is actually in core_config_data. Try this:
SELECT * FROM core_config_data where path like '%foo%';
Assuming you have some setting in the admin you named "foo".
Now back to your problem: this is a common PHP error. You are running a foreach over something which cannot be iterated; the code right before that line is probably not returning an array or a collection.
Ideally you should always wrap the foreach in a check that the value you are iterating over is iterable and not empty.
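As a sketch, with $items standing in for whatever the code before the failing foreach actually returns:
// only iterate when we really have something iterable
if (is_array($items) || $items instanceof Traversable) {
    foreach ($items as $item) {
        // ... process $item
    }
}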
You can also turn off displaying errors, or suppress errors with PHP's @ operator, but that is bad practice...

Empty my Sqlite3 database in RoR

I am working on a Ruby on Rails 3 web application using sqlite3. I have been testing my application on the fly, creating and destroying things in the database, sometimes through the new and edit actions and sometimes through the Rails console.
I am interested in emptying my database completely and having only the empty tables left. How can I achieve this? I am working with a team, so I am interested in two answers:
1) How do I empty the database just for myself?
2) How can the others empty it (if possible), some of whom are using MySQL rather than sqlite3? (We are all working on the same project through an SVN repository.)
To reset your database, you can run:
rake db:schema:load
Which will recreate your database from your schema.rb file (maintained by your migrations). This will additionally protect you from migrations that may later fail due to code changes.
Your dev database should be specific to your environment; if you need certain data, add it to your db/seeds.rb file. Don't share a dev database, as you'll quickly get into situations where other people's changes make your version incompatible.
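A minimal sequence for Rails 3, using the standard rake tasks:
rake db:schema:load   # rebuild the empty tables from schema.rb
rake db:seed          # optionally reload required data from db/seeds.rb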
Download sqlitebrowser from http://sqlitebrowser.org/
Install it, run it, click Open Database (top left) and navigate to locationOfYourRailsApp/db/development.sqlite3
Then switch to the Browse Data tab, where you can delete or add data.
I found that deleting the development.sqlite3 file from the db folder and then running rake db:migrate solves the problem for everyone on my team who is working with sqlite3.
As far as I know there is no user/GRANT management in sqlite, so it is difficult to control access.
You can only protect the database through file access.
If you want an empty database for test purposes, generate it once and copy the file somewhere, then use a copy of that file just before testing.
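As a sketch of that copy-based approach (file names are made up):
cp db/development.sqlite3 db/empty_template.sqlite3    # once, right after creating the empty schema
cp db/empty_template.sqlite3 db/development.sqlite3    # before each test run, restore the pristine copy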

What are the best practices for database scripts under code control

We are currently reviewing how we store our database scripts (tables, procs, functions, views, data fixes) in subversion and I was wondering if there is any consensus as to what is the best approach?
Some of the factors we'd need to consider include:
Should we check in 'create' scripts, or check in incremental changes as 'alter' scripts?
How do we keep track of the state of the database for a given release
It should be easy to build a database from scratch for any given release version
Should a table exist in the database listing the scripts that have been run against it, or the version of the database, etc.?
Obviously it's a pretty open ended question, so I'm keen to hear what people's experience has taught them.
After a few iterations, the approach we took was roughly like this:
One file per table and per stored procedure. Also separate files for other things like setting up database users, populating look-up tables with their data.
The file for a table starts with the CREATE command and a succession of ALTER commands added as the schema evolves. Each of these commands is bracketed in tests for whether the table or column already exists. This means each script can be run in an up-to-date database and won't change anything. It also means that for any old database, the script updates it to the latest schema. And for an empty database the CREATE script creates the table and the ALTER scripts are all skipped.
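As a sketch of one such guarded ALTER, in SQL Server syntax with invented names (other engines need their own existence checks):
-- add the column only if it is not already there, so the script is re-runnable
IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
               WHERE TABLE_NAME = 'Orders' AND COLUMN_NAME = 'Status')
    ALTER TABLE Orders ADD Status int NOT NULL DEFAULT 0;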
We also have a program (written in Python) that scans the directory full of scripts and assembles them into one big script. It parses the SQL just enough to deduce dependencies between tables (based on foreign-key references) and orders them appropriately. The result is a monster SQL script that gets the database up to spec in one go. The script-assembling program also calculates the MD5 hash of the input files, and uses that to update a version number that is written into a special table by the last script in the list.
Barring accidents, the result is that the database script for a given version of the source code creates the schema this code was designed to interoperate with. It also means that there is a single (somewhat large) SQL script to give to the customer to build new databases or update existing ones. (This was important in this case because there would be many instances of the database, one for each of their customers.)
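A much-simplified sketch of such an assembler (directory, table and column names are invented, and the real program also parses foreign keys to order the tables):
import hashlib
import pathlib

# gather the per-object scripts; the real tool orders them by FK dependencies
scripts = sorted(pathlib.Path("db_scripts").glob("*.sql"))
body = "\n".join(p.read_text() for p in scripts)

# derive a version stamp from the inputs and append it as the final statement
version = hashlib.md5(body.encode()).hexdigest()
body += "\nUPDATE SchemaVersion SET Version = '%s';\n" % version
pathlib.Path("build_all.sql").write_text(body)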
There is an interesting article at this link:
https://blog.codinghorror.com/get-your-database-under-version-control/
It advocates a baseline 'create' script followed by checking in 'alter' scripts and keeping a version table in the database.
The upgrade script option
Store each change to the database as a separate SQL script. Store each group of changes in a numbered folder. Use a script to apply changes a folder at a time and record in the database which folders have been applied (a sketch of such a driver follows the pros and cons below).
Pros:
Fully automated, testable upgrade path
Cons:
Hard to see full history of each individual element
Have to build a new database from scratch, going through all the versions
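A sketch of the driver, assuming MySQL and an invented schema_version bookkeeping table:
# apply each numbered folder of changes exactly once
for dir in upgrades/*/; do
  v=$(basename "$dir")
  applied=$(mysql -N -e "SELECT COUNT(*) FROM schema_version WHERE folder = '$v'" mydb)
  if [ "$applied" -eq 0 ]; then
    for f in "$dir"*.sql; do mysql mydb < "$f"; done
    mysql -e "INSERT INTO schema_version (folder) VALUES ('$v')" mydb
  fi
done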
I tend to check in the initial create script. I then have a DbVersion table in my database and my code uses that to upgrade the database on initial connection if necessary. For example, if my database is at version 1 and my code is at version 3, my code will apply the ALTER statements to bring it to version 2, then to version 3. I use a simple fallthrough switch statement for this.
This has the advantage that when you deploy a new version of your application, it will automatically upgrade old databases and you never have to worry about the database being out of sync with the software. It also maintains a very visible change history.
This isn't a good idea for all software, but variations can be applied.
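A sketch of that fallthrough switch in Java (connection handling, table names and DDL are invented; the idea is that each case applies one upgrade step and falls through to the next):
class DbUpgrader {
    static void upgrade(java.sql.Connection conn, int dbVersion) throws java.sql.SQLException {
        switch (dbVersion) {
            case 1:
                conn.createStatement().execute("ALTER TABLE orders ADD status INT");        // v1 -> v2
                // deliberate fall-through
            case 2:
                conn.createStatement().execute("ALTER TABLE orders ADD note VARCHAR(255)"); // v2 -> v3
                // deliberate fall-through
        }
        // record that the database is now at the current version
        conn.createStatement().execute("UPDATE DbVersion SET version = 3");
    }
}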
You could get some hints by reading how this is done with Ruby On Rails' migrations.
The best way to understand this is probably to just try it out yourself, and then inspect the database manually.
Answers to each of your factors:
Store CREATE scripts. If you want to checkout version x.y.z then it'd be nice to simply run your create script to setup the database immediately. You could add ALTER scripts as well to go from the previous version to the next (e.g., you commit version 3 which contains a version 3 CREATE script and a version 2 → 3 alter script).
See the Rails migration solution. Basically they keep the table version number in the database, so you always know.
Use CREATE scripts.
Using version numbers would probably be the most generic solution — script names and paths can change over time.
My two cents!
We create a branch in Subversion and all of the database changes for the next release are scripted out and checked in. All scripts are repeatable so you can run them multiple times without error.
We also link the change scripts to issue items or bug ids so we can hold back a change set if needed. We then have an automated build process that looks at the issue items we are releasing and pulls the change scripts from Subversion and creates a single SQL script file with all of the changes sorted appropriately.
This single file is then used to promote the changes to the Test, QA and Production environments. The automated build process also creates database entries documenting the version (branch plus build id). We think this is the best approach for enterprise development. More details on how we do this can be found HERE
The create script option:
Use create scripts that will build the latest version of the database from scratch, empty except for the default lookup data.
Use standard version-control techniques to store, branch and tag versions and to view the history of your objects.
When upgrading a live database (where you don't want to lose data), create a blank second copy of the database at the new version and use a schema-comparison tool such as Red Gate's to generate the upgrade script.
Pros:
Changes to files are tracked in a standard source-code like manner
Cons:
Reliance on manual use of a 3rd party tool to do actual upgrades (no/little automation)
Our company checks them in simply because someone decided to put it in some SOX document we follow. It makes no sense to me at all, except possibly as a reference document. I can't see a time we'd pull them out and try to use them again, and if we did, we'd have to know which one ran first and which to run after which. Backing up the database is much more important than keeping the alter scripts.
For every release we deliver one update.sql file which contains all of the new table scripts, alter statements, new or modified packages, roles, etc. This file is used to upgrade the database from version 1 to version 2.
Whatever goes into the update.sql file also needs to go into the individual files for the respective objects: an ALTER statement that adds a column is folded into the table's own script (the CREATE TABLE script is modified rather than appending the ALTER statement after it), and the same goes for new tables, roles, etc.
So whenever a user wants to upgrade, he uses the update.sql file.
If he wants to build from scratch, he uses build.sql, which already contains all of the above statements and keeps the database in sync.
In my case, I built a shell script for this job: https://github.com/reduardo7/db-version-updater
"How" is an open question.
In my case I am trying to create something simple that is easy for developers to use, and I settled on the following scheme.
Things I tried:
File-based script handling in git using GitLab CI: it did not work; collisions were created, the administration part had to be done by hand in case of disaster, and the development part was too complicated.
Permissions and access via MySQL clients: there is no traceability of changes to the database, and the transition to production is manual.
The programs mentioned here: they require uploading the structures and many adaptations, and you usually end up doing change control in a document anyway.
Repository usage: I could not control the disaster-recovery part, I could not properly control the backups, and I don't think it is a good idea to keep the backups on the same server, since it creates high lag for the process.
This is what worked best:
Manage permissions per user and generate traceability of everything that is sent to the database
Multi-platform
Separate development, production and QA databases
Always take a backup before each modification
Manage an open repository for change control
Multi-server
Deactivate or activate access to the web page or app through endpoints
The initial project is at: https://hub.docker.com/r/arelis/gitdb
