Is there any way to create migrations in beego?

I haven't found anything in the documentation except the "syncdb" command, which creates database tables from scratch. Is there any command to create and run migrations based on an ORM model, like in Django? Add a field, change a type, etc.

No. orm.RunSyncdb(name, force, verbose) and its command line equivalent only do a small subset of what tools like Django's South can do.
Beego's orm can:
Create new tables from scratch
Drop all tables (force = true)
Add new columns as you extend your model
You need to handle dropping columns and any changes to the column parameters used to initially create the table.
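For reference, a minimal sketch of how orm.RunSyncdb is typically wired up (the model, database alias, and DSN here are placeholders, not anything prescribed by beego):

```go
package main

import (
	"github.com/astaxie/beego/orm"
	_ "github.com/go-sql-driver/mysql" // swap in the driver for your database
)

// User is a placeholder model; syncdb creates (or extends) its table.
type User struct {
	Id   int
	Name string `orm:"size(100)"`
}

func init() {
	orm.RegisterDataBase("default", "mysql", "user:pass@/mydb?charset=utf8")
	orm.RegisterModel(new(User))
}

func main() {
	// name = database alias, force = drop tables first, verbose = print the SQL
	if err := orm.RunSyncdb("default", false, true); err != nil {
		panic(err)
	}
}
```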

Sadly, beego doesn't include this feature, but no Go framework (as of today) does.
Instead, they all delegate that to other libraries to handle.
What you can do however is use goose for migrations:
https://bitbucket.org/liamstask/goose
or any other migration library as discussed in the following thread:
http://www.reddit.com/r/golang/comments/2dlbz5/database_migration_handling_in_go/
Remember that, due to the modularity of beego, you can also use any other ORM (like gorm).
Feel free to search for avelino/awesome-go on Google if you want a list of tools/libraries in the Go ecosystem.

Yes, you can create migrations in beego now. For example, if you need to create a new table, you can start by creating a new migration file using the bee tool:
bee generate migration create_user_table
This command will create a file inside the database/migrations folder. The file name contains the date, time, and name of the migration.
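To give an idea of what that file contains, here is a rough sketch of the generated skeleton (the struct name, timestamp, and SQL statements are illustrative; the exact template may differ between bee versions):

```go
package main

import (
	"github.com/astaxie/beego/migration"
)

// CreateUserTable_20171030_120000 is the migration type generated by bee.
type CreateUserTable_20171030_120000 struct {
	migration.Migration
}

func init() {
	m := &CreateUserTable_20171030_120000{}
	m.Created = "20171030_120000"
	migration.Register("CreateUserTable_20171030_120000", m)
}

// Up runs the migration; put your forward schema change here.
func (m *CreateUserTable_20171030_120000) Up() {
	m.SQL("CREATE TABLE user (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, name VARCHAR(255))")
}

// Down reverses the migration.
func (m *CreateUserTable_20171030_120000) Down() {
	m.SQL("DROP TABLE user")
}
```

You fill in Up() and Down() and then apply the migration with bee migrate, passing -driver and -conn for your database.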
For further details, you can check this article: https://ncona.com/2017/10/database-migrations-in-beego

Related

Laravel Dynamic Database Table Names

Is there a way to create database tables dynamically in Laravel? I have one Laravel build which has a database schema for quotes, built using the migrations tool. There will be several customers using the system, each of which needs their own database table.
What I would like to happen is that when a function is called by the customer, it will use the quotes schema to create a new table like 'customer1_quotes' and use this table for that customer in future. Additionally, when migrations are run, they will apply the updates to all tables with the given name structure (*_quotes).
If anyone has details on how to achieve this, or a recommended alternative approach, please message :)
Create a trait used by observers
Create a trait which loops through your customers and creates/updates the tables. You don't have to be in a migration to call DB::.
Use the trait in create/update controllers or better for model create/update observers. You could also create a console command for manual triggering or testing.
This should not be executed during maintenance. Using php artisan down should ensure no jobs are run during migrations.
The migrations for the customer{id}_quotes tables can loop through the available tables by querying table names using LIKE and/or REGEXP. See link below.
Links
Laravel Model Observers
How to dynamically set table name in Eloquent Model
Laravel's table Blueprint docs (5.8)
Get table names using LIKE or REGEXP
Optimization: Chunking results when getting query builder results
Edit: A repeatable migration probably won't work well and is confusing to others. Using a trait, which is flexible enough to be used from an observer, is better for this.

Sqlite3 Adding Columns

I'm working with Django and I added a new model field, meaning that I need another column in my SQLite3 database.
I have heard that I'm supposed to use the sqlite> prompt, but I am really confused when I start to use it. So, if that is part of the solution, can you be very specific about what to do?
thanks
MORE INFO:
my app is called "livestream" and my class is "Stream"
I added the field "channel", which returns:
DatabaseError: table livestream_stream has no column named channel
You can use ALTER TABLE to add a new column in SQLite3, but you cannot rename or drop one. SQLite3 is a very useful database for bootstrapping your app, but sooner or later you will need to change to a more robust/flexible database engine, say MySQL or PostgreSQL.
Every time you add a new column to your models using SQLite, you will need to recreate the schema (as far as I know, when you do migrations with SQLite to add new columns, South complains; see below). An approach I like more is to use MySQL with Django South from the beginning, when I'm not sure about every aspect of my database.
Django South is an app for doing database migrations. It's very useful and the docs are a good starting point for beginners.
Every time you need to make modifications to your database, you should treat them as migrations and use South.
Hope this helps!

Move Django model from one app to another [duplicate]

This question already has answers here:
How do I migrate a model out of one django app and into a new one?
I made the stupid mistake of creating too many models in the same Django app; now I want to split it into three distinct ones. The problem is that there's already data in production on two customers' sites, so I need to carefully plan any schema/data migration (I'm using django-south). I'm unsure how to proceed; any advice would be greatly appreciated.
(I'm using PostgreSQL on a Ubuntu server 12.4 LTS, if that's of any relevance)
I thought about using db.rename_table, but can't figure out how to correctly update the foreign keys to those models (old to new) - irrelevant at the database level (since the table renaming already got that covered), but not so at the ORM level.
Update: after thinking about it, and after asking this question on Programmers.SE, I decided to keep things simple and not worry about migrations between major versions of the product. Short term, I'll just use db.rename_table to match the new name, while also using db_table as Daniel Roseman suggested, all the while keeping the models in the old app. When upgrading to a major version, I'll switch to the new app and ditch all migrations altogether (so fresh installs of the new version will create the database "as-is" instead of going through all the historical migrations).
I don't see why you need any data migration at all.
Just move the models to the new app, and add a db_table setting in the inner Meta classes to point to the old table names.
I did something similar on a smaller scale recently and this was my process:
Create new app and corresponding models
Update views to use new models
Update unit/systems tests to make sure nothing broke (important!)
Write a management command that populates the new models based on the old models
Deploy code
Run migration for new models
Run management command script to update new models
Leave the old app for 1-2 weeks and, when you think it's all good, drop it.
The reasons why I didn't use a data migration:
Not familiar -- felt the task was too important to use a process I wasn't familiar with
More comfortable moving data with Python code than with South magic
Ran into South migration issues with dependencies. Didn't want to further complicate the migrations with a data migration. This could well be an ill-founded assumption due to my unfamiliarity with the mechanics of a data migration
Perhaps as a bias from point 3, I convinced myself that using South purely as a schema management tool is the 'right' way to do it. Creation/updating of data should be done in the Django layer using either fixtures or custom management commands
The simplest solution I could think of:
Create a SchemaMigration changing the type of every foreign key to models in the old app to a primitive type (including ones internal to it);
Create the new apps and their models normally;
Do a data migration from the old tables to the new ones;
Create another SchemaMigration, changing every primitive type to a foreign key again, now pointing to the new tables;
Remove the old app from settings and drop its tables.
Laborious, yes, but would do the trick. I'd hope for a better solution though.

How do you make your model's database portable in CakePHP?

I'm not very familiar with Cake, so here are my questions: we're developing an app on MySQL, but it may eventually need to be deployed to MSSQL or Oracle. How do we make sure that we won't have strange problems with our primary keys? In MySQL they are AUTO_INCREMENT columns, but IIRC in Oracle you need to use sequences... is there a way to make this a transparent change? Am I overthinking it?
Does anyone have experience with switching database vendors on a CakePHP app? Any pointers or things to keep an eye out for?
In the Cake database config file you choose your driver (see http://book.cakephp.org/view/40/Database-Configuration). Then, if you set your PK (which will also be your auto-increment column if using MySQL) with the field name id, Cake will automatically handle the auto_increment insertion. I would presume (NB: haven't tried Cake with something else) that Cake will take care of auto-increment columns in something like Oracle.
Cake uses its own DB abstraction layer -- but the included abstractions cover quite a bit, and it will perform as specified (i.e. it'll handle your auto increment stuff for you).
In short, you're probably overthinking. That said, I would mock up a little cake app, then try switching databases behind it (change your db config and your app should automagically switch over).
HTH,
Travis
The following practices work great for me
I use Cake schemas (I tend to set up one schema file for each group of models, e.g. User, Role, and Profile might all be in one UsersSchema file).
Also take a look at using the debuggable.com FixturesShell - it allows you to import test case fixtures into the live database. Great for setting up that initial group of users and roles from the schema file.
Also, if you set your 'id' field to VARCHAR(36) instead of an INT, Cake will automatically use UUID-style IDs. This means you have a far lower chance of id value collisions if you need to move the data to another application or server.
The fixtures shell also has a command line tool for generating UUIDs (so you can add them to your $records variable in the fixture for insertion, etc.).
In summary: use the CakeSchema shell, the fixtures shell from debuggable.com, and UUID values for your IDs, and you will have a portable structure-creation tool, a portable data-insertion tool, and a portable id field format.
http://github.com/felixge/debuggable-scraps/tree/fd0e5ad625cb21f5ba16e6b186821a5774089ac7/cakephp/shells/fixtures
http://api.cakephp.org/class/schema-shell
You need to be using "cake schema" to manage your DB. This will handle all the DB-specific stuff when you create your database.
http://book.cakephp.org/view/735/Generating-and-using-Schema-files

What are the best practices for database scripts under code control

We are currently reviewing how we store our database scripts (tables, procs, functions, views, data fixes) in subversion and I was wondering if there is any consensus as to what is the best approach?
Some of the factors we'd need to consider include:
Should we check in 'Create' scripts or check in incremental changes with 'Alter' scripts
How do we keep track of the state of the database for a given release
It should be easy to build a database from scratch for any given release version
Should a table exist in the database listing the scripts that have run against it, or the version of the database etc.
Obviously it's a pretty open ended question, so I'm keen to hear what people's experience has taught them.
After a few iterations, the approach we took was roughly like this:
One file per table and per stored procedure. Also separate files for other things like setting up database users, populating look-up tables with their data.
The file for a table starts with the CREATE command and a succession of ALTER commands added as the schema evolves. Each of these commands is bracketed in tests for whether the table or column already exists. This means each script can be run in an up-to-date database and won't change anything. It also means that for any old database, the script updates it to the latest schema. And for an empty database the CREATE script creates the table and the ALTER scripts are all skipped.
We also have a program (written in Python) that scans the directory full of scripts and assembles them in to one big script. It parses the SQL just enough to deduce dependencies between tables (based on foreign-key references) and order them appropriately. The result is a monster SQL script that gets the database up to spec in one go. The script-assembling program also calculates the MD5 hash of the input files, and uses that to update a version number that is written in to a special table in the last script in the list.
Barring accidents, the result is that the database script for a given version of the source code creates the schema that this code was designed to interoperate with. It also means that there is a single (somewhat large) SQL script to give to the customer to build new databases or update existing ones. (This was important in this case because there would be many instances of the database, one for each of their customers.)
There is an interesting article at this link:
https://blog.codinghorror.com/get-your-database-under-version-control/
It advocates a baseline 'create' script followed by checking in 'alter' scripts and keeping a version table in the database.
The upgrade script option
Store each change in the database as a separate SQL script. Store each group of changes in a numbered folder. Use a script to apply changes a folder at a time and record in the database which folders have been applied (a sketch of such a runner follows the list below).
Pros:
Fully automated, testable upgrade path
Cons:
Hard to see full history of each individual element
Have to build a new database from scratch, going through all the versions
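A rough sketch of what such a runner could look like in Go (the migrations/ folder layout, the schema_changes table, and the DSN are assumptions for illustration, not part of any particular tool):

```go
package main

import (
	"database/sql"
	"fmt"
	"os"
	"path/filepath"
	"sort"

	_ "github.com/go-sql-driver/mysql" // placeholder driver
)

func main() {
	db, err := sql.Open("mysql", "user:pass@/mydb") // placeholder DSN
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Find out which numbered folders have already been applied.
	applied := map[string]bool{}
	rows, err := db.Query("SELECT folder FROM schema_changes")
	if err != nil {
		panic(err)
	}
	for rows.Next() {
		var f string
		if err := rows.Scan(&f); err != nil {
			panic(err)
		}
		applied[f] = true
	}
	rows.Close()

	// Apply each unapplied folder in order, then record it.
	folders, err := filepath.Glob("migrations/*")
	if err != nil {
		panic(err)
	}
	sort.Strings(folders)
	for _, dir := range folders {
		name := filepath.Base(dir)
		if applied[name] {
			continue
		}
		scripts, err := filepath.Glob(filepath.Join(dir, "*.sql"))
		if err != nil {
			panic(err)
		}
		sort.Strings(scripts)
		for _, s := range scripts {
			body, err := os.ReadFile(s)
			if err != nil {
				panic(err)
			}
			if _, err := db.Exec(string(body)); err != nil {
				panic(fmt.Errorf("%s: %w", s, err))
			}
		}
		if _, err := db.Exec("INSERT INTO schema_changes (folder) VALUES (?)", name); err != nil {
			panic(err)
		}
	}
}
```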
I tend to check in the initial create script. I then have a DbVersion table in my database and my code uses that to upgrade the database on initial connection if necessary. For example, if my database is at version 1 and my code is at version 3, my code will apply the ALTER statements to bring it to version 2, then to version 3. I use a simple fallthrough switch statement for this.
This has the advantage that when you deploy a new version of your application, it will automatically upgrade old databases and you never have to worry about the database being out of sync with the software. It also maintains a very visible change history.
This isn't a good idea for all software, but variations can be applied.
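Sketched in Go for illustration: the DbVersion table, the version numbers, and the ALTER statements are invented for the example, and Go's explicit fallthrough keyword mirrors the fall-through switch described above.

```go
package schema

import "database/sql"

// upgradeSchema brings an old database up to the version the code expects (3 here).
func upgradeSchema(db *sql.DB) error {
	var v int
	if err := db.QueryRow("SELECT version FROM DbVersion").Scan(&v); err != nil {
		return err
	}
	switch v {
	case 1:
		// version 1 -> 2
		if _, err := db.Exec("ALTER TABLE orders ADD COLUMN status VARCHAR(20)"); err != nil {
			return err
		}
		fallthrough
	case 2:
		// version 2 -> 3
		if _, err := db.Exec("ALTER TABLE orders ADD COLUMN shipped_at DATETIME"); err != nil {
			return err
		}
	default:
		return nil // already at the latest version
	}
	_, err := db.Exec("UPDATE DbVersion SET version = 3")
	return err
}
```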
You could get some hints by reading how this is done with Ruby On Rails' migrations.
The best way to understand this is probably to just try it out yourself, and then inspecting the database manually.
Answers to each of your factors:
Store CREATE scripts. If you want to check out version x.y.z, then it'd be nice to simply run your create script to set up the database immediately. You could add ALTER scripts as well to go from the previous version to the next (e.g., you commit version 3, which contains a version 3 CREATE script and a version 2 → 3 ALTER script).
See the Rails migration solution. Basically they keep the table version number in the database, so you always know.
Use CREATE scripts.
Using version numbers would probably be the most generic solution — script names and paths can change over time.
My two cents!
We create a branch in Subversion and all of the database changes for the next release are scripted out and checked in. All scripts are repeatable so you can run them multiple times without error.
We also link the change scripts to issue items or bug ids so we can hold back a change set if needed. We then have an automated build process that looks at the issue items we are releasing and pulls the change scripts from Subversion and creates a single SQL script file with all of the changes sorted appropriately.
This single file is then used to promote the changes to the Test, QA and Production environments. The automated build process also creates database entries documenting the version (branch plus build id.) We think this is the best approach with enterprise developers. More details on how we do this can be found HERE
The create script option:
Use create scripts that will build you the latest version of the database from scratch, which is empty except the default lookup data.
Use standard version control techniques to store, branch, and tag versions and to view the history of your objects.
When upgrading a live database (where you don't want to lose data), create a blank second copy of the database at the new version and use a schema-comparison tool like Red Gate's to generate the upgrade.
Pros:
Changes to files are tracked in a standard source-code like manner
Cons:
Reliance on manual use of a 3rd party tool to do actual upgrades (no/little automation)
Our company checks them in simply because someone decided to put it in some SOX document that we do. It makes no sense to me at all, except possibly as a reference document. I can't see a time we'd pull them out and try to use them again, and if we did we'd have to know which one ran first and which one to run after which. Backing up the database is much more important than keeping the Alter scripts.
For every release we need to provide one update.sql file which contains all the new table scripts, ALTER statements, new/modified packages, roles, etc. This file is used to upgrade the database from version 1 to version 2.
Whatever we include in the update.sql file above also needs to go into the individual respective files. For example, an ALTER statement that adds a new column has to be folded into the table's script (the CREATE TABLE script is modified, rather than the ALTER statement being appended after it), and the same goes for new tables, roles, etc.
So whenever a user wants to upgrade, they use the update.sql file.
If they want to build from scratch, they use build.sql, which already contains all of the above statements, keeping the database in sync.
In my case, I built a shell script for this work: https://github.com/reduardo7/db-version-updater
How to do this is an open question.
In my case I am trying to create something simple that is easy for developers to use, and I do it under the following scheme.
Things I tested:
File-based script handling in git using GitlabCI
It does not work: collisions are created, the administration part has to be done by hand in case of disaster, and the development part is too complicated.
Use of permissions and access via mysql clients
There is no traceability of changes to the database, and the transition to production is manual.
Use of programs mentioned here
They require uploading the structures and many adaptations, and usually you end up doing change control by hand anyway.
Repository usage
I could not control the DRP part
I could not properly control the backups
I don't think it is a good idea to have the backups on the same server, and it generates high lag for the process
This was what worked best:
Manage permissions per user and generate traceability of everything that is sent to the database
Multi-platform
Use of separate development, QA, and production databases
Always back up before each modification
Manage an open repository for change control
Multi-server
Deactivate/activate access to the web page or app through endpoints
The initial project is at:
https://hub.docker.com/r/arelis/gitdb
There is an interesting article, with an updated URL, at: https://blog.codinghorror.com/get-your-database-under-version-control/
It's a bit old, but the concepts are still there. Good read!
