Occasionally I need to run tasks that update data in a database. I might need to run them again someday, perhaps on a new server, or never; I don't know in advance. For now I need to run them once, and in a certain release only. And they should stay outside the git index.
Some tutorials suggest that I run them as a "custom migration": a second directory for migrations is created, called "custom_migrations", and they'll be run from there via Ecto.Migrator. But this will cause a problem: I run all of the custom_migrations, then delete all of the migration files (because I won't need them anywhere else, not on a new server either, once I've run them), then create new ones when a need arises, and then Ecto.Migrator will complain about the absence of the migrations that I've deleted.
I'm also aware of ./bin/my_app eval MyApp.Tasks.custom_task1, but it's not convenient because I'd have to call it manually, and passing arguments to a function isn't convenient via the command line.
What I want is: create several files that should be run in the current release, once. Store them in a certain directory of the application. Deploy the application. They get run automatically, probably on application boot, and then I remove them. Then, after some time, I may create new ones, and only those new ones will need to get run.
How to do this? What's the recommended way in Elixir/Phoenix?
I have multiple databases on my local machine which I do not need. Can I run a curl script or a REST API command to delete a database, its servers, and all of its forests, so that I can use Gradle to just deploy them again?
I have tried manually deleting the server first, then the database, and then the forests. This is a lengthy process.
I want a single command to do the whole job for me instead of manually having to delete the components one by one which is possible through the admin interface.
Wagner Michael has a fair point in his comment. If you already used (ml-)Gradle to create servers and databases, why not use its mlUndeploy -Pconfirm=true task to get rid of them? You could potentially even use a fake project, with stub configs to get rid of a fairly random set of databases and servers, though that still takes some manual work.
By far the quickest way to reset your entire MarkLogic instance is to stop it and wipe its data directory. This SO question gives instructions on how to do that, as part of a solution for recovering when you've lost your admin password:
https://stackoverflow.com/a/27803923/918496
HTH!
A bit complex to describe, but I'll do my best. Basically we're using the Git workflow, meaning we have the following branches:
production, which is the live branch. Everything in production is running in the live web environment.
integration, in which all new functionality is integrated. This branch is merged to production every week.
one or more feature branches, in which developers or development teams develop new functionality. After this is done, developers merge their feature branch to integration.
So, nothing really complex here. But, since our application is a web application running against a MySQL database, new functionality often requires changes to the database schema. To automate this, we're using dbdeploy, which lets us create alter scripts, each given a number, e.g. 00001.sql, 00002.sql, etc. Upon merging to the integration branch, dbdeploy checks which alter scripts have a higher number than the latest one executed on that specific database, and executes those.
Now assume the following.
- integration has alter scripts up until 00200.sql. All of these are executed on the integration database.
- developer John has a feature branch featureX, which was created when integration still had 00199.sql as the highest alter script.
- John creates 00200.sql because of some required db schema changes.
Now, at some point John will merge his modifications back to the integration branch. John will get a merge conflict and will see that his 00200.sql already exists in integration. This means he needs to open the conflicting file, extract his contents, reset that file back to 'mine' (the original state as in integration) and put his own contents in a new file.
Now, since we're working with ten developers, we run into this situation daily. And while we understand the reasons behind it, it's sometimes very cumbersome. John renames his script, does a merge commit to integration, pushes the changes upstream, only to see that somebody else has already created a 00201.sql, requiring John to go through the process again.
Surely there must be more teams using the Git workflow and using a database change management tool for automating database schema changes?
So, in short, my questions are:
How to automate database schema changes, when working on different feature branches, that operate on different instances of the same db?
How to prevent merge conflicts all the time, while still having the option to have a fixed order in the executed alter scripts? E.g. 00199.sql must be executed before 00200.sql, because 00200.sql might be depending on something done in 00199.sql.
Any other tips are most welcome, of course.
Rails used to do this, with exactly the problems you describe. It changed to the following scheme: the files (Rails calls them migrations) are labelled with a UTC timestamp of when the file was created, e.g.
20140723065701_add_foo_to_bar
(The second part of the name doesn't contribute to the ordering).
Rails records the timestamps of all the migrations that have been run. When you ask it to run pending migrations, it selects all the migration files whose timestamps aren't in the list of already-run migrations and runs them in numerical order.
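A minimal sketch of that selection logic in Python, assuming migrations live in db/migrate and applied timestamps are recorded in a schema_migrations-style table (this mirrors the idea, not Rails' actual internals):

# Run every migration file whose leading timestamp has not been recorded
# yet, in timestamp order, and record each one once applied.
import pathlib

def run_pending(conn, migrations_dir="db/migrate"):
    cur = conn.cursor()
    cur.execute("SELECT version FROM schema_migrations")
    applied = {row[0] for row in cur.fetchall()}
    for path in sorted(pathlib.Path(migrations_dir).glob("*.sql")):
        timestamp = path.name.split("_", 1)[0]  # e.g. "20140723065701"
        if timestamp not in applied:
            cur.execute(path.read_text())
            # '?' placeholder assumes an sqlite3-style DB-API driver.
            cur.execute("INSERT INTO schema_migrations (version) VALUES (?)",
                        (timestamp,))
            conn.commit()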
You'll no longer get merge conflicts unless two people create one at exactly the same point in time.
Files still get executed in the order you wrote them, but possibly interleaved with someone else's work. In theory you can still have problems, e.g. developer A decides to rename a table that I had decided to add a column to. But that is much less common than two developers both making changes to the db at all, and you would have problems even without considering the schema changes: presumably I have just written code that queries a no-longer-existent table. At some point, developers working on related stuff will have to talk to each other!
A few suggestions:
1 - have a look at Liquibase: each version gets a file that references the changes that need to happen, and the change files can be named using a meaningful string rather than by number.
2 - have a central location for getting the next available number, then people use the latest number.
I've used Liquibase in the past, pretty successfully, and we didn't have the problem you describe.
As Frederick Cheung suggested, use timestamps rather than a serial number. Applying schema changes by order of datestamp should work, because schema changes can only depend on changes of a prior date.
In addition, include the name of the developer in the name of the alter script. This will prevent merge conflicts 100%.
Your merge hook should just look for newly added alter scripts (present in the merged branch but not in the upstream branch) and execute them by order of timestamp.
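A rough sketch of such a hook in Python, assuming timestamped filenames and an upstream branch named origin/integration (both names are hypothetical):

# Find alter scripts added on the merged branch (present in HEAD but not
# in the upstream branch) and list them in timestamp order for execution.
import subprocess

def newly_added(upstream="origin/integration", directory="db/alters"):
    out = subprocess.run(
        ["git", "diff", "--name-only", "--diff-filter=A",
         f"{upstream}...HEAD", "--", directory],
        capture_output=True, text=True, check=True).stdout
    # Names like 20140723065701_john_add_index.sql sort by timestamp.
    return sorted(out.split())

for script in newly_added():
    print("applying", script)  # a real hook would run it against the db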
I've used two different approaches to overcome your problem in the past.
The first is to use an ORM which can handle the schema updates.
The other approach is to create a script which incrementally builds the database schema. This way, if a developer needs to add an additional row to a table, he should add the appropriate SQL statement after the table is created. Likewise, if he needs a new table, he should add the SQL statement for that. Then merging becomes a question of making sure things happen in the correct order. This is basically what the database update process in an ORM does. Such a script needs to be coded very defensively, and each statement should check whether its prerequisites exist.
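A minimal sketch of such a defensive script in Python, assuming MySQL and a DB-API connection; the table and column names are just illustrations:

# Every statement checks its prerequisites first, so the script can be
# run against an empty or an up-to-date database without error.
def column_exists(cur, table, column):
    cur.execute(
        "SELECT COUNT(*) FROM information_schema.COLUMNS "
        "WHERE TABLE_SCHEMA = DATABASE() "
        "AND TABLE_NAME = %s AND COLUMN_NAME = %s",
        (table, column))
    return cur.fetchone()[0] > 0

def build_schema(conn):
    cur = conn.cursor()
    # Safe on both an empty database and an existing one.
    cur.execute("CREATE TABLE IF NOT EXISTS book (id INT PRIMARY KEY)")
    if not column_exists(cur, "book", "title"):
        cur.execute("ALTER TABLE book ADD COLUMN title VARCHAR(255)")
    conn.commit()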
For the dbvc commandline tool, I use git log to determine the order of the update scripts.
# List the update scripts in the order they were added to the repository
# (oldest first); the 'A' lines from --name-status are the added files.
git log -c --no-merges --pretty="format:" --name-status -p dev/db/updates/ | \
grep '^A' | awk '{print $2}' | tac
In this case the order of your commits determines the sequence in which the updates are run, which is most likely what you want.
If you run git merge B, the updates from master will be run first and then those from B.
If you run git rebase B, the updates from B will run first and then those from master.
I'm using Django in a development process. It is annoying that every time I change a bit in a model, I need to delete the database and run syncdb. For testing purposes, I want to add some initial data into the database automatically every time I run syncdb. I've tried putting this sort of code inside one app's __init__.py, but it runs before the database is created, and dealing with the exceptions is a bit annoying. Isn't there a neater way to do this?
Once you have initially populated the database, use the dumpdata command to create a fixture (a copy of the data) and save it to a file. Then use the loaddata command to automatically populate the database.
Suppose you have an app called bookstore for which you want to automatically load a series of books, authors, etc.
Once you have added some records in the database:
python manage.py dumpdata bookstore > initial.json
Once you have made some changes or want to recreate the database:
python manage.py loaddata initial.json
South is nice, but it is overkill for this purpose.
If you need to make changes to the database and you want your data to remain safe, then South is the right tool for you.
It manages changes to both the database schema and its data. You won't need to delete the database when you make changes if you have South. It also keeps records of previous states of the database, so you can revert if you want to. I recommend you read its documentation; it will surely help.
I'm looking for ideas on how to automatically track the job that calls the package.
We have some generic packages that are called from different jobs; each job passes in a different file path as a parameter, and therefore processes very differently sized files depending on the path.
In the package I have some custom auditing setup which basically tracks the package start time and end time, and therefore the duration of execution. I want to be able to also track the job that called the package so if the package is running long, I can determine which job called it.
Also note I would prefer this to be automatic, possibly using some sort of system variable, so that human error is not an issue. I also want these auditing tasks built into all of our packages as a template, so I would prefer not to use a user variable either, as different packages may use different variables.
Just looking for some ideas - appreciate any input
We use parent and child packages instead of different jobs calling the same package. You could send the information about which parent called it to the child package, and then have the child package record that data to a table along with the start and end dates.
Our solution has a whole meta database that records all the details, through logging, of each step. The parent tells the child which configuration to use and logs details against that configuration. The jobs call the parent package, never the child package (which doesn't have a configuration in the config table, as it is always configured through variables sent in by the parent package). No human intervention is needed, except during initial development or when researching a failure.
Edit for existing jobs.
Consider that jobs can have multiple steps. Make the first step a SQL script that inserts the auditing information into a table, including the start time of the package, the name of the job that called it, and the name of the SSIS package being called. Make the second step the call to the SSIS package, and make the last step a SQL script that inserts the same data, only with the end datetime.
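For illustration, here is the same three-step pattern as a Python sketch, assuming a hypothetical PackageAudit table; in SQL Agent these would be three separate job steps rather than one script:

# Step 1: record the start; step 2: run the package; step 3: record the end.
import subprocess
import pyodbc

conn = pyodbc.connect("DSN=audit_db")  # assumed DSN
cur = conn.cursor()
job, package = "NightlyLoad", "LoadFiles.dtsx"  # hypothetical names

# Step 1: record the start of the run and which job called it.
cur.execute("INSERT INTO PackageAudit (JobName, PackageName, StartTime) "
            "VALUES (?, ?, GETDATE())", job, package)
conn.commit()

# Step 2: run the package (dtexec is the SSIS command-line runner).
subprocess.run(["dtexec", "/F", package], check=True)

# Step 3: record the end of the run.
cur.execute("UPDATE PackageAudit SET EndTime = GETDATE() "
            "WHERE JobName = ? AND PackageName = ? AND EndTime IS NULL",
            job, package)
conn.commit()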
A simple way to do this is to set up a variable on your SSIS package as a varchar. Set the value of the variable to @[System::ParentContainerGUID] using an expression when the package starts. SQL Agent won't set the value, so when the package is run as an individual job it will be an empty string; but if it is called by another package, it will contain the GUID of the calling package. You can test for that value and use a precedence constraint to control the program logic.
We have packages that run as a part of a big program but sometimes we need to run them individually. Each package has an email on failure task but we only want that to execute when the package is run individually. When it is part of the big run we collect the names of all packages that error and send them as one email from the master package. We don't want individual emails and a summary email going out on the same run.
We are currently reviewing how we store our database scripts (tables, procs, functions, views, data fixes) in subversion and I was wondering if there is any consensus as to what is the best approach?
Some of the factors we'd need to consider include:
Should we check in 'create' scripts, or check in incremental changes as 'alter' scripts?
How do we keep track of the state of the database for a given release
It should be easy to build a database from scratch for any given release version
Should a table exist in the database listing the scripts that have run against it, or the version of the database etc.
Obviously it's a pretty open ended question, so I'm keen to hear what people's experience has taught them.
After a few iterations, the approach we took was roughly like this:
One file per table and per stored procedure. Also separate files for other things like setting up database users, populating look-up tables with their data.
The file for a table starts with the CREATE command and a succession of ALTER commands added as the schema evolves. Each of these commands is bracketed in tests for whether the table or column already exists. This means each script can be run in an up-to-date database and won't change anything. It also means that for any old database, the script updates it to the latest schema. And for an empty database the CREATE script creates the table and the ALTER scripts are all skipped.
We also have a program (written in Python) that scans the directory full of scripts and assembles them into one big script. It parses the SQL just enough to deduce dependencies between tables (based on foreign-key references) and orders them appropriately. The result is a monster SQL script that gets the database up to spec in one go. The script-assembling program also calculates the MD5 hash of the input files, and uses that to update a version number that is written into a special table by the last script in the list.
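A rough sketch of what such an assembler can look like in Python (not the author's actual program); the SchemaInfo version table is an assumption:

# Order per-table scripts by foreign-key references via a topological
# sort, concatenate them, and version the result by MD5.
import hashlib, pathlib, re
from graphlib import TopologicalSorter  # Python 3.9+

scripts = {p.stem: p.read_text() for p in pathlib.Path("scripts").glob("*.sql")}

# Table -> set of tables it REFERENCES (crude regex-based "parsing").
deps = {
    name: set(re.findall(r"REFERENCES\s+(\w+)", sql, re.IGNORECASE)) & scripts.keys()
    for name, sql in scripts.items()
}

ordered = list(TopologicalSorter(deps).static_order())  # dependencies first
combined = "\n".join(scripts[name] for name in ordered)

# Hash the inputs so the version stamp changes whenever any script does;
# SchemaInfo is a hypothetical version table.
version = hashlib.md5(combined.encode()).hexdigest()
combined += f"\nUPDATE SchemaInfo SET Version = '{version}';\n"

pathlib.Path("build_all.sql").write_text(combined)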
Barring accidents, the result is that the database script for a given version of the source code creates the schema this code was designed to interoperate with. It also means that there is a single (somewhat large) SQL script to give to the customer to build new databases or update existing ones. (This was important in this case because there would be many instances of the database, one for each of their customers.)
There is an interesting article at this link:
https://blog.codinghorror.com/get-your-database-under-version-control/
It advocates a baseline 'create' script followed by checking in 'alter' scripts and keeping a version table in the database.
The upgrade script option
Store each change to the database as a separate SQL script. Store each group of changes in a numbered folder. Use a script to apply the changes a folder at a time and record in the database which folders have been applied (a sketch of such a runner follows the pros and cons below).
Pros:
Fully automated, testable upgrade path
Cons:
Hard to see full history of each individual element
Have to build a new database from scratch, going through all the versions
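A minimal sketch of that runner in Python, assuming numbered folders under db/changes/, a hypothetical AppliedFolders tracking table, and a driver that accepts one statement per file:

# Apply each not-yet-applied folder of scripts in order, then record it.
import pathlib

def apply_pending(conn, root="db/changes"):
    cur = conn.cursor()
    cur.execute("SELECT Folder FROM AppliedFolders")
    applied = {row[0] for row in cur.fetchall()}
    for folder in sorted(pathlib.Path(root).iterdir()):
        if not folder.is_dir() or folder.name in applied:
            continue
        for script in sorted(folder.glob("*.sql")):
            cur.execute(script.read_text())  # assumes one statement per file
        cur.execute("INSERT INTO AppliedFolders (Folder) VALUES (?)",
                    (folder.name,))
        conn.commit()  # one commit per folder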
I tend to check in the initial create script. I then have a DbVersion table in my database and my code uses that to upgrade the database on initial connection if necessary. For example, if my database is at version 1 and my code is at version 3, my code will apply the ALTER statements to bring it to version 2, then to version 3. I use a simple fallthrough switch statement for this.
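For illustration, here is what that fallthrough looks like as a Python sketch (Python has no fallthrough switch, so chained ifs stand in); the DbVersion table and the ALTER statements are hypothetical:

# Each block upgrades one version step; a database at version 1 falls
# through both blocks, a database at version 2 only through the second.
def upgrade(conn):
    cur = conn.cursor()
    cur.execute("SELECT Version FROM DbVersion")
    version = cur.fetchone()[0]

    if version < 2:
        cur.execute("ALTER TABLE customer ADD email VARCHAR(255)")
        version = 2
    if version < 3:
        cur.execute("ALTER TABLE customer ADD phone VARCHAR(32)")
        version = 3

    cur.execute("UPDATE DbVersion SET Version = ?", (version,))
    conn.commit()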
This has the advantage that when you deploy a new version of your application, it will automatically upgrade old databases and you never have to worry about the database being out of sync with the software. It also maintains a very visible change history.
This isn't a good idea for all software, but variations can be applied.
You could get some hints by reading how this is done with Ruby On Rails' migrations.
The best way to understand this is probably to just try it out yourself and then inspect the database manually.
Answers to each of your factors:
Store CREATE scripts. If you want to check out version x.y.z, then it'd be nice to simply run your create script to set up the database immediately. You could add ALTER scripts as well to go from the previous version to the next (e.g., you commit version 3, which contains a version 3 CREATE script and a version 2 → 3 ALTER script).
See the Rails migration solution. Basically they keep the table version number in the database, so you always know.
Use CREATE scripts.
Using version numbers would probably be the most generic solution — script names and paths can change over time.
My two cents!
We create a branch in Subversion and all of the database changes for the next release are scripted out and checked in. All scripts are repeatable so you can run them multiple times without error.
We also link the change scripts to issue items or bug ids so we can hold back a change set if needed. We then have an automated build process that looks at the issue items we are releasing and pulls the change scripts from Subversion and creates a single SQL script file with all of the changes sorted appropriately.
This single file is then used to promote the changes to the Test, QA and Production environments. The automated build process also creates database entries documenting the version (branch plus build id). We think this is the best approach for enterprise developers. More details on how we do this can be found HERE
The create script option:
Use create scripts that will build the latest version of the database from scratch, empty except for the default lookup data.
Use standard version control techniques to store, branch, and tag versions and to view the histories of your objects.
When upgrading a live database (where you don't want to lose data), create a blank second copy of the database at the new version and use a comparison tool like Red Gate's to bring the live database in line.
Pros:
Changes to files are tracked in a standard source-code like manner
Cons:
Reliance on manual use of a 3rd party tool to do actual upgrades (no/little automation)
Our company checks them in simply because someone decided to put it in some SOX document that we do. It makes no sense to me at all, except possibly as a reference document. I can't see a time we'd pull them out and try to use them again, and if we did, we'd have to know which one ran first and which to run after which. Backing up the database is much more important than keeping the alter scripts.
For every release we need to deliver one update.sql file, which contains all the new table scripts, ALTER statements, new/modified packages, roles, etc. This file is used to upgrade the database from one version to the next.
Whatever we include in the update.sql file also needs to go into the individual files for the respective objects: an ALTER statement that adds a column is folded into the table's script (the table script is modified, rather than the ALTER statement being appended after the CREATE TABLE statement in the file), and the same goes for new tables, roles, etc.
So whenever a user wants to upgrade, he uses the update.sql file.
If he wants to build from scratch, he uses the build.sql, which already contains all of the above statements; this keeps the database in sync.
In my case, I built a shell script for this work: https://github.com/reduardo7/db-version-updater
How to do it is an open question.
In my case I am trying to create something simple that is easy for developers to use, and I do it under the following scheme.
Things I tested:
- File-based script handling in git using GitLab CI: it does not work; collisions are created, the administration part has to be done by hand in case of disaster, and the development part is too complicated.
- Use of permissions and access via MySQL clients: there is no traceability of changes to the database, and the transition to production is manual.
- Use of the programs mentioned here: they require uploading the structures and many adaptations, and you usually end up doing change control by hand anyway.
- Repository usage: I could not control the DRP part, and I could not properly control the backups. I don't think it is a good idea to have the backups on the same server, and it generates high lags for the process.
This is what worked best:
- Manage permissions per user and generate traceability of everything that is sent to the database
- Multi-platform
- Use of development, production, and QA databases
- Always back up before each modification
- Manage an open repository for change control
- Multi-server
- Deactivate/activate access to the web page or app through endpoints
The initial project is at:
https://hub.docker.com/r/arelis/gitdb
There is an interesting article, with an updated URL, at: https://blog.codinghorror.com/get-your-database-under-version-control/
It's a bit old, but the concepts are still there. A good read!