I'd like to perform a setup step before each unit test run and clear my MongoDB database. How do I do that with Mongoid?
I found some links about it while googling, but nothing seemed to work.
Output of rake -T
rake db:drop # Drops all the collections for the database for the current Rails.env
..
rake db:mongoid:drop # Drops the database for the current Rails.env
rake db:mongoid:purge # Drop all collections except the system collections
..
rake db:purge # Drop all collections except the system collections
This discussion (Delete All Collections in Mongoid 3) in the Mongoid group seems relevant. There are two methods: purge! and truncate!. purge! drops the collections, which also removes their indexes. truncate! only deletes the documents in each collection, so you retain the indexes, but it is slower than purge!.
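For per-test cleanup, you can call one of those methods in a before hook. A minimal sketch for RSpec, assuming Mongoid 3+ where both methods are available:

# spec/spec_helper.rb
RSpec.configure do |config|
  config.before(:each) do
    Mongoid.purge!      # drops every collection, including its indexes
    # Mongoid.truncate! # alternative: keeps indexes, only deletes documents
  end
end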
You might want to take a look at the database_cleaner gem, which abstracts cleaning databases in specs.
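A rough sketch of its usage with RSpec and Mongoid (assuming the classic database_cleaner gem and its truncation strategy):

# spec/spec_helper.rb
require 'database_cleaner'

RSpec.configure do |config|
  config.before(:suite) do
    DatabaseCleaner[:mongoid].strategy = :truncation
  end
  config.before(:each) { DatabaseCleaner[:mongoid].start }
  config.after(:each)  { DatabaseCleaner[:mongoid].clean }
end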
If you're using Rails, you can run rake db:mongoid:purge to drop all collections except the system collections.
Or, run rake db:mongoid:drop to drop the database for the current Rails.env.
When I run the following command:
php bin/console doctrine:schema:update --force
the database gets updated, but afterwards this command:
php bin/console doctrine:schema:validate
keeps saying that the database is not in sync.
What am I missing/doing wrong?
Depending on the database type and OS, the test may give some "false negatives": your DB is already fine, but Doctrine doesn't quite recognize it. This has happened to me in several projects, regardless of Symfony version (2, 3, and 4).
Besides, in Symfony 4 you can use migrations as explained in the docs, that is:
bin/console make:migration
This command will create a migration file inside src/Migrations, but won't touch the db.
To understand what's going on (from Doctrine's point of view), you can have a look at the migration file: it's a PHP class with two methods, up() and down().
The up() method will contain the query/queries needed to align the database with your mapping files.
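For illustration, a generated migration looks roughly like this; the class name and SQL below are hypothetical, and yours will reflect your actual mapping changes:

<?php
// src/Migrations/Version20190101120000.php (auto-generated skeleton)
declare(strict_types=1);

namespace DoctrineMigrations;

use Doctrine\DBAL\Schema\Schema;
use Doctrine\Migrations\AbstractMigration;

final class Version20190101120000 extends AbstractMigration
{
    public function up(Schema $schema) : void
    {
        // the query/queries that align the database with your mapping
        $this->addSql('ALTER TABLE user ADD nickname VARCHAR(32) DEFAULT NULL');
    }

    public function down(Schema $schema) : void
    {
        // reverses whatever up() did
        $this->addSql('ALTER TABLE user DROP nickname');
    }
}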
To apply all the pending migrations, run:
bin/console doctrine:migrations:migrate
We have a Rails app with a PostgreSQL database. We use git for version control.
We're only two developers on the project, so we both have to do a little of everything, and when emergencies arise we often have to drop everything to address them.
We have a main branch (called staging just to be difficult 🌚) which we only use directly for quick fixes, minor copy changes, etc. For bigger features, we work on independent feature branches.
When I work on a feature that requires changes to the database, I naturally have to create migrations that alter the schema. Let's say I'm working on feature-emoji, and I create a migration 20150706101741_add_emoji_to_users.rb. I run rake db:migrate to get on with my work.
Later, I'm informed of some bug I need to address. I switch to staging to start work on it; however, now my app will misbehave because the db schema does not match what the app expects. So before doing git checkout staging, I have to remember to do rake db:rollback. And then later when I switch back to feature-emoji, I have to run rake db:migrate again.
This whole flow is sort of okay-ish when dealing with just two branches, but when git rebases and merges happen, it gets complicated.
Is there no better way to handle versioning of code and db in parallel? Or am I doomed to have to run annoying rake tasks every time I want to change branches?
There is no easy answer to this. You could perhaps set up something like a git post-checkout hook that checks for changes to db/schema.rb and warns you when you switch (git hooks cannot actually abort a checkout); but there are lots of edge cases to cover in such a setup.
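A minimal sketch of that hook in Ruby (the messages and checks are illustrative; save it as .git/hooks/post-checkout and mark it executable):

#!/usr/bin/env ruby
# .git/hooks/post-checkout
# git passes three arguments: previous HEAD, new HEAD, and a flag that
# is "1" for branch checkouts and "0" for file checkouts.
prev_head, new_head, branch_flag = ARGV
exit 0 unless branch_flag == "1"

changed = `git diff --name-only #{prev_head} #{new_head}`.split("\n")
if changed.include?("db/schema.rb")
  warn "db/schema.rb differs between these branches."
  warn "You may need to run `rake db:migrate` (or roll back before switching)."
end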
Ultimately, the responsibility lies with the human developer to restore untracked parts of their environment — e.g. the database — to a clean state before switching branches.
Using Flyway for DB migrations across multiple schemas that share the same lifecycle, how can I ensure they all get cleaned when I run flyway:clean?
Support for this has now been added to the 1.3.1 release!
Set the flyway.schemas property to the list of schemas you wish to manage, and you're good to go!
Example:
flyway.schemas=schema1,schema2,schema3
mvn flyway:clean will now clean all 3 schemas.
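If you'd rather not hard-code the list, the same setting should also work as a system property on the command line (using the example schemas from above):

mvn -Dflyway.schemas=schema1,schema2,schema3 flyway:clean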
I expect the index.yaml file to update with the necessary indices when I run queries in my development environment. It claims that it is updating this file in the dev server log, but the file doesn't actually change. Any idea what might be going on?
Here is the entire index.yaml file:
indexes:
# AUTOGENERATED
# This index.yaml is automatically updated whenever the dev_appserver
# detects that a new type of query is run. If you want to manage the
# index.yaml file manually, remove the above marker line (the line
# saying "# AUTOGENERATED"). If you want to manage some indexes
# manually, move them above the marker line. The index.yaml file is
# automatically uploaded to the admin console when you next deploy
# your application using appcfg.py.
The log has several of these lines at the points where I would expect it to add a new index:
INFO 2010-06-20 18:56:23,957 dev_appserver_index.py:205] Updating C:\photohuntservice\main\index.yaml
Not sure if it's important, but I'm using version 1.3.4 of the AppEngine SDK.
Are you certain you're running queries that need composite indexes to be built? Queries on single properties will be served with the default indexes and won't need index.yaml entries, and queries that use only equality filters on multiple properties will be executed using a merge-join strategy that doesn't require building custom indexes.
Unless you're getting NeedIndexErrors thrown in production (without a message about the existing indexes not allowing the query to run efficiently enough), your empty index.yaml may be perfectly fine.
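For contrast, here is a hypothetical query that does need a composite index; the Photo model and its fields are made up for illustration:

from google.appengine.ext import db

class Photo(db.Model):
    owner = db.StringProperty()
    created = db.DateTimeProperty(auto_now_add=True)

# equality filter on one property plus a sort order on another
photos = Photo.all().filter('owner =', 'alice').order('-created').fetch(10)

Running that against the dev server should append an entry like this below the AUTOGENERATED marker:

indexes:
- kind: Photo
  properties:
  - name: owner
  - name: created
    direction: desc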
There is a known issue where the Python SDK on Linux doesn't regenerate an index.yaml that was created on Windows. It may be related to your case, but it seems that you simply don't have queries that trigger automatic index creation in the SDK.
I've been messing around with Django and the Django ORM at home, and I've got to say, I feel it is one of the best out there in terms of ease of use.
However, I was wondering if it was possible to use it in "reverse".
Basically what I would like to do is generate Django models from an existing database schema (from a project that doesn't use django and is pretty old).
Is this possible?
Update: the database in question is Oracle
Yes, use the inspectdb command:
http://docs.djangoproject.com/en/dev/ref/django-admin/#inspectdb
inspectdb
Introspects the database tables in the database pointed-to by the DATABASE_NAME setting and outputs a Django model module (a models.py file) to standard output.
Use this if you have a legacy database with which you'd like to use Django. The script will inspect the database and create a model for each table within it.
As you might expect, the created models will have an attribute for every field in the table. Note that inspectdb has a few special cases in its field-name output:
[...]
(Django 1.7.1) Simply running python manage.py inspectdb will create classes for all tables in the database and display them on the console.
$ python manage.py inspectdb
Save this as a file by using standard Unix output redirection:
$ python manage.py inspectdb > models.py
(This works for me with MySQL and Django 1.9.)
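To give an idea of the output, a generated model looks something like this; the table and field names here are hypothetical, since inspectdb derives them from your schema:

from django.db import models

class Customer(models.Model):
    name = models.CharField(max_length=100, blank=True, null=True)
    created = models.DateTimeField(blank=True, null=True)

    class Meta:
        managed = False   # tells Django not to create or drop this table
        db_table = 'customer'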
I have made a reusable app based on Django's inspectdb command utility,
Django Inspectdb Refactor.
It breaks the models from an existing database into separate files inside a models folder.
This helps with managing the models when they grow large in number.
You can install it via pip:
pip install django-inspectdb-refactor
Then register the app in settings.py as inspectdb_refactor.
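That is, add it to INSTALLED_APPS:

# settings.py
INSTALLED_APPS = [
    # ...
    'inspectdb_refactor',
]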
After this, you can use it from the command line:
python manage.py inspectdb_refactor --database=your_dbname_defined_in_settings --app=your_app_label
This will create a models folder inside your app, with each table as a separate model file.
More details can be found here.