This is my very first time installing and using CakePHP on localhost. I've completed all the steps required to install CakePHP. After installing, I checked my database and it doesn't contain any tables. Is something wrong, or does CakePHP not insert any tables during installation?
Below is the status CakePHP reports:
Your version of PHP is 5.2.8 or higher.
Your tmp directory is writable.
The FileEngine is being used for core caching. To change the config edit APP/Config/core.php
Your database configuration file is present.
Cake is able to connect to the database.
(The SQL log at the bottom of the default page shows 0 queries for the default datasource.)
No, there is nothing wrong. CakePHP doesn't create any tables for you because it cannot know what tables your application will use.
You can use the Bake console to set up and create the database after the basic installation. See http://book.cakephp.org/2.0/en/console-and-shells/code-generation-with-bake.html
A typical Cake workflow is:
Create your tables following the model and database conventions (see the example below)
Generate the application code using bake
Refine the application
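For example, a hypothetical posts table following the CakePHP 2.x conventions (plural, underscored table name, an id primary key, and created/modified columns that Cake maintains automatically) might look like this:

CREATE TABLE posts (
    id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    title VARCHAR(255) NOT NULL,
    body TEXT,
    created DATETIME DEFAULT NULL,
    modified DATETIME DEFAULT NULL
);

With the table in place, Bake can generate the matching code from your app directory, roughly:

Console/cake bake model Post
Console/cake bake controller Posts
Console/cake bake view Posts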
I am looking for a solution to sync the DB between multiple developers (us at the office).
We use WordPress and MAMP (for now; MAMP/headless WP and NPM/React in the future), and we want to use AppVeyor (or similar) to deploy to a dev server and a live server. We want the DB to be synced everywhere, or at least among us and the dev server, with a separate free-standing DB on the live server.
Can this be done with Liquidbase or is there a better option?
Thanks :)
I don't know a whole lot about WordPress and how it uses the database, but in theory this should be possible as long as you are talking about syncing the schema changes. If you are also trying to sync the data, then Liquibase is not the right tool for the job.
To do this with Liquibase, start by installing it with the installer and working through some of the examples to get an idea of how the tool works. The examples use a local H2 in-memory database, so it is pretty painless to try things and start over if you mess something up.
After getting a feel for things, you will want to use the Liquibase generateChangeLog command to create the initial changelog that contains all the instructions for creating the schema as it exists on the database you are using when you run generateChangeLog. Then test that you can run liquibase update on a separate database and have WordPress use that database successfully.
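For example, with placeholder URLs and credentials (on Liquibase 4.x the flag is spelled --changelog-file):

liquibase --changeLogFile=wordpress-changelog.xml --url="jdbc:mysql://localhost:3306/wordpress_dev" --username=wp --password=secret generateChangeLog
liquibase --changeLogFile=wordpress-changelog.xml --url="jdbc:mysql://localhost:3306/wordpress_test" --username=wp --password=secret update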
Once you have proven that workflow, you can continue by following this pattern:
Before making changes to the WordPress schema, run liquibase snapshot to create a JSON formatted snapshot of the "DEV" schema - the schema you are changing in development mode. You will need additional options to generate the JSON format snapshot.
Make the desired changes to the WordPress "DEV" schema, most likely by using the WordPress app itself.
Use liquibase diffChangeLog to compare the JSON snapshot to the newly-altered "DEV" schema. This will add changesets to the existing changelog file that describe how to alter the schema to create the desired changes.
Use liquibase changelogSync on the "DEV" schema to update the Liquibase tracking tables so that Liquibase knows the changes in the changelog already exist in that database.
Use liquibase update against the "PROD" database to have the new schema changes show up in that environment.
This workflow is described in the Liquibase docs for the snapshot command.
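As a rough command-line sketch of those steps (placeholder URLs, credentials and file names; double-check the exact flag spellings and the --url vs --referenceUrl direction against the docs for your Liquibase version):

liquibase --url="jdbc:mysql://localhost:3306/wordpress_dev" --username=wp --password=secret --outputFile=dev-snapshot.json snapshot --snapshotFormat=json
(make the desired schema changes through WordPress, then:)
liquibase --changeLogFile=wordpress-changelog.xml --url="offline:mysql?snapshot=dev-snapshot.json" --referenceUrl="jdbc:mysql://localhost:3306/wordpress_dev" --referenceUsername=wp --referencePassword=secret diffChangeLog
liquibase --changeLogFile=wordpress-changelog.xml --url="jdbc:mysql://localhost:3306/wordpress_dev" --username=wp --password=secret changelogSync
liquibase --changeLogFile=wordpress-changelog.xml --url="jdbc:mysql://prod-host:3306/wordpress" --username=wp --password=secret update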
ps - there is no d in Liquibase :-)
I am using the Voyager framework for Laravel. Whenever I create a table from the Database Manager, it creates the table but does not create any migration file. That is not good for a team working from a git repository (where the application's database schema is shared): everyone in the group has to create the tables in the backend themselves, which is not good.
It does create the table in the database (visible in phpMyAdmin),
and there is an option to create a Model while creating the table.
Any solution? I need a quick response.
Unfortunately, Laravel Voyager doesn't make migrations for user tables.
There are two workarounds.
Laravel Migrations Generator
Use this dev package to generate the migrations for the given tables. Available on GitHub at: https://github.com/Xethron/migrations-generator. View the documentation to see how to generate the migrations for specific tables.
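For example (check the package README for the exact install and service-provider steps for your Laravel version; the table names are illustrative):

composer require --dev "xethron/migrations-generator"
php artisan migrate:generate
php artisan migrate:generate posts,categories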
However, the collaborators will have to create the BREADs for them.
Copying the database
If you share the database itself, the Laravel Voyager config tables, which hold all the changes and specs, will be available to every collaborator.
Laravel Voyager saves all of its configuration in tables, which removes the need to generate migrations. Porting the whole DB works for most of my projects, since I work on most apps alone.
Is there an easy way to do a "git rebase"-like operation for Grails Database Migration plugin changelog scripts?
I already have several changelog scripts on top of the initial changelog from an old domain model. Now I'm deploying the application to a new environment and there's no need to migrate the database contents.
I could delete the scripts and generate a fresh initial script from the current domain model but then I'd have to install Grails to the old environment and execute dbm-clear-checksums there, right?
Is there an easier way to tell dbm that I don't want to create an old domain and patch it to current level?
Run the dbm-changelog-sync script - it marks everything as having been run.
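For example, from the application directory (assuming the Grails 2.x database-migration plugin), either for the default environment or a specific one:

grails dbm-changelog-sync
grails prod dbm-changelog-sync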
So I'm working on an ASP.NET project for university. We have to upload our code to a server running IIS and SQL Server 2008. I've written my project using MVC Code-First EF. I understand that the Entity Framework system needs permission to create the database to work properly (you can't just give it an empty database and let it fill it with data). This has created a problem for me since I do not have database creation privileges on the shared SQL Server. Is there any way around this?
As you don't have permissions, it sounds like you'd need to get a DBA to create your database on the server you are trying to deploy to. This could be done either from a database creation script or from a backup of the db on your dev machine. You can then instruct EF Code First not to try to create or update the database automatically by adding this line to your Global.asax (or indeed anywhere before you first access the database):
Database.SetInitializer<YourContextType>(null);
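For context, here is a minimal sketch of where that line typically goes; MyDbContext stands in for your own DbContext type:

using System.Data.Entity;
using System.Web;

public class MvcApplication : HttpApplication
{
    protected void Application_Start()
    {
        // Disable EF Code First database creation and schema changes for this context.
        // MyDbContext is a placeholder for your own DbContext class.
        Database.SetInitializer<MyDbContext>(null);

        // ...your usual route/area registration code goes here...
    }
}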
You can use an existing database, rather than let EF create one for you. I have done this myself, but admittedly only when using EF Migrations. Otherwise, you run into trouble with missing table exceptions and what not.
When using migrations, just point your connection string to your empty database, create an initial migration to populate the database with your schema and then update the database itself.
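In the NuGet Package Manager Console that typically amounts to something like this (the migration name is illustrative), with Update-Database run against the connection string that points at the database your DBA created:

Enable-Migrations
Add-Migration InitialSchema
Update-Database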
See this answer: How do I create a migration for an existing database in EntityFramework 4.3?
... which includes this nice link to getting started with EF Migrations: http://thedatafarm.com/blog/data-access/using-ef-migrations-with-an-existing-database/
All this is available through Nuget, and if you have access to Pluralsight content, I can highly recommend Julie Lerman's video on the topic.
If you don't want to use Migrations, you can still use Code First if you just create the database objects manually using SSMS, but obviously you then have the manual work of keeping your model and the database in sync.
Imagine you are developing a Java EE app using Hibernate and JBoss. You have a running server that has some important data on it. You release the next version of the app once in a while (1-2 weeks) and they have a bunch of changes in the persistence layer:
New entities
Removed entities
Attribute type changes
Attribute name changes
Relationship changes
How do you effectively set up a system that updates the database schema and preserves the data? As far as I know (I may be mistaken), Hibernate doesn't perform ALTER COLUMN or DROP/ALTER CONSTRAINT operations.
Thank you,
Artem B.
LiquiBase is your best bet. It has a hibernate integration mode that uses Hibernate's hbm2ddl to compare your database and your hibernate mapping, but rather than updating the database automatically, it outputs a liquibase changelog file which can be inspected before actually running.
While more convenient, any tool that does a comparison of your database and your Hibernate mappings is going to make mistakes. See http://www.liquibase.org/2007/06/the-problem-with-database-diffs.html for examples. With Liquibase you build up a list of database changes as you develop, in a format that can survive branching and merging.
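For reference, with the Liquibase Hibernate extension on the classpath, that diffChangeLog run looks roughly like this (URLs, credentials and file names are placeholders):

liquibase --changeLogFile=db.changelog.xml --url="jdbc:mysql://localhost:3306/myapp" --username=me --password=secret --referenceUrl="hibernate:classic:hibernate.cfg.xml" diffChangeLog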
I personally keep track of all changes in a migration SQL script.
You can use the https://github.com/Devskiller/jpa2ddl tool, which provides Maven and Gradle plugins and is capable of generating automated schema migrations for Flyway based on JPA entities. It also takes all properties, dialects, user types, naming strategies, etc. into account.
For one app I use SchemaUpdate, which is built in to Hibernate, straight from a bootstrap class so the schema is checked every time the app starts up. That takes care of adding new columns or tables which is mostly what happens to a mature app. To handle special cases, like dropping columns, the bootstrap just manually runs the ddl in a try/catch so if it's already been dropped once, it just silently throws an error. I'm not sure I'd do this with mission critical data in a production app, but in several years and hundreds of deployments, I've never had a problem with it.
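As a rough sketch of that bootstrap approach, against the classic Hibernate 3.x/4.x API (SchemaUpdate was reworked in Hibernate 5, so adjust for newer versions):

import org.hibernate.cfg.Configuration;
import org.hibernate.tool.hbm2ddl.SchemaUpdate;

public class SchemaBootstrap {

    public static void main(String[] args) {
        // Load mappings and connection settings from hibernate.cfg.xml on the classpath.
        Configuration cfg = new Configuration().configure();

        // execute(script, doUpdate): print the generated DDL and apply it to the database.
        new SchemaUpdate(cfg).execute(true, true);

        // Special cases such as dropping a column can be run manually here in a
        // try/catch, so repeat runs silently ignore the "already dropped" error.
    }
}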
Following up on what Nathan Voxland said about Liquibase, here's an example of executing the migration under Windows for a MySQL database:
Put the MySQL connector JAR under the lib folder of the Liquibase distribution, for example.
Create a properties file liquibase.properties in the root of the Liquibase distribution and insert these recurring lines:
driver: com.mysql.jdbc.Driver
classpath: lib\\mysql-connector-java-5.1.30.jar
url: jdbc:mysql://localhost:3306/OLDdatabase
username: root
password: pwd
Generate or retrieve an updated database under another name, for example NEWdatabase.
Now extract the differences into a file Migration.xml with the following command line:
liquibase diffChangeLog --referenceUrl="jdbc:mysql://localhost:3306/NEWdatabase" --referenceUsername=root --referencePassword=pwd > C:\Users\ME\Desktop\Migration.xml
Finally, execute the update using the newly generated Migration.xml file:
java -jar liquibase.jar --changeLogFile="C:\Users\ME\Desktop\Migration.xml" update
NB: All these command lines should be executed from the Liquibase home directory, where liquibase.bat/.sh and liquibase.jar are present.
I use the hbm2ddl ant task to generate my ddl. There is an option that will perform alter tables/columns in your database.
Please see the "update" attribute of the hbm2ddl ant task:
http://www.hibernate.org/hib_docs/tools/reference/en/html/ant.html#d0e1137
update (default: false): Try and create an update script representing the "delta" between what is in the database and what the mappings specify. Ignores create/update attributes. (Do not use against production databases, no guarantees at all that the proper delta can be generated nor that the underlying database can actually execute the needed operations)
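For reference, an Ant fragment using that update mode might look roughly like this; the taskdef classpath reference, directories and file names are placeholders, so check them against the Hibernate Tools documentation:

<taskdef name="hibernatetool"
         classname="org.hibernate.tool.ant.HibernateToolTask"
         classpathref="hibernate.tools.classpath"/>

<target name="schema-update">
  <hibernatetool destdir="${build.dir}">
    <configuration configurationfile="hibernate.cfg.xml"/>
    <hbm2ddl update="true" export="true" outputfilename="schema-update.sql"/>
  </hibernatetool>
</target>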
You can also use DBMigrate. It's similar to Liquibase:
Similar to 'rake migrate' for Ruby on Rails, this library lets you manage database upgrades for your Java applications.