CakePHP automated validation generation from MySQL database constraints

I'm looking for an easy way to automatically generate the validation rules in the model from the database constraints in CakePHP, because I don't want to do it all by hand with cake bake. For example, if there is a NOT NULL constraint on a field in the database, it should create a "notEmpty" validation rule for that field.
So is there a tool that can do this sort of thing?

CakePHP does not support this by default, but I like the idea.
But you could implement it yourself: override AppModel::__construct() or add code to the AppModel::beforeValidate() callback, load the schema of the model's table using CakeSchema, loop over the fields it returns, build rules on the fly, and set them on $this->validate.
If you don't want a specific model to do this, you could add a boolean property like autoValidationRules to switch it off. Also check whether a notEmpty rule is already set, and either leave it alone or merge with it, depending on your needs. A minimal sketch of the idea follows.
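Here is a rough sketch of that approach, assuming a CakePHP 2.x AppModel; the property name and the messages are illustrative, and this is not the code from the behavior linked below:

    // app/Model/AppModel.php - illustrative sketch only
    App::uses('Model', 'Model');

    class AppModel extends Model {

        // Set to false in a model to skip automatic rule generation.
        public $autoValidationRules = true;

        public function beforeValidate($options = array()) {
            if ($this->autoValidationRules) {
                // schema() returns the column definitions of the model's table.
                foreach ((array)$this->schema() as $field => $meta) {
                    if ($field === $this->primaryKey) {
                        continue;
                    }
                    // NOT NULL column without an existing rule -> add a notEmpty rule.
                    if (empty($meta['null']) && !isset($this->validate[$field])) {
                        $this->validate[$field] = array(
                            'notEmpty' => array(
                                'rule' => 'notEmpty',
                                'message' => sprintf('%s cannot be empty', $field),
                            ),
                        );
                    }
                }
            }
            return parent::beforeValidate($options);
        }
    }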
Edit: Try this behavior, I just hacked it together because I like the idea. Going to add a unit test later tonight.
https://github.com/burzum/BzUtils/blob/develop/Model/Behavior/AutoValidateBehavior.php

Indeed there is no built-in feature in CakePHP for this.
Otherwise, if you don't want to use the console, you can use an online tool that lets you design your application (models, relations, and validation rules) and then automatically generates a SQL file with the right constraints on the columns, plus your models with the corresponding validation rules for the fields, controllers, and views: Online Cake Bake.
It is not exactly what you asked for, but at least you get to design your database constraints and your validation rules at the same time, which saves a lot of time.

Related

How to update models in ASP.NET with the database-first approach while keeping some previous methods alive

I have a Patients table with a number of columns.
I've created models using the following command.
Scaffold-DbContext "Server=.;Database=Tasks3;Trusted_Connection=True;" Microsoft.EntityFrameworkCore.SqlServer -OutputDir Models\MainModel
(Screenshot of the generated Patient model.)
I have some methods in the context class that make my connection string dynamic: I fetch data from tokens, and after some logic the connection string changes from client to client.
Now the problem: when I make changes in the Patients table (for instance, I renamed the CNIC5 column to CNIC) and run the following command with the -Force switch, it wipes out everything I added to the previous Tasks3Context class (DbContext).
Scaffold-DbContext "Server=.;Database=Tasks3;Trusted_Connection=True;" Microsoft.EntityFrameworkCore.SqlServer -OutputDir Models\MainModel -t <Patient> -f
Is there a way to update the models so that only the specific model and column are changed?
Thanks in advance!
I tried the database update procedure, and when I used -Force the second time I ran into a problem similar to yours. The specifics are as follows:
The code generated by EF Core is your code. Feel free to change it. It will only be regenerated if you reverse engineer the same model again. The scaffolded code represents one model that can be used to access the database, but it's certainly not the only model that can be used.
Customize the entity type classes and DbContext class to fit your needs. For example, you may choose to rename types and properties, introduce inheritance hierarchies, or split a table into multiple entities. You can also remove non-unique indexes, unused sequences and navigation properties, optional scalar properties, and constraint names from the model.
You can also add additional constructors, methods, properties, etc. using another partial class in a separate file. This approach works even when you intend to reverse engineer the model again.
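For example, here is a minimal sketch of that partial-class approach, assuming the scaffolded context is Tasks3Context as in the question; the namespace and the connection-string constructor are illustrative and must be adapted to your own scaffolded code:

    // Tasks3Context.Custom.cs - kept in a separate file so that
    // Scaffold-DbContext -Force never overwrites it.
    using Microsoft.EntityFrameworkCore;

    namespace Tasks3.Models.MainModel   // must match the scaffolded context's namespace
    {
        public partial class Tasks3Context
        {
            // Illustrative: a custom constructor that injects a per-client
            // connection string resolved elsewhere (e.g. from a token).
            public Tasks3Context(string connectionString)
                : this(new DbContextOptionsBuilder<Tasks3Context>()
                      .UseSqlServer(connectionString)
                      .Options)
            {
            }
        }
    }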
After making changes to the database, you may need to update your EF Core model to reflect those changes. If the database changes are simple, it may be easiest just to manually make the changes to your EF Core model. For example, renaming a table or column, removing a column, or updating a column's type are trivial changes to make in code.
More significant changes, however, are not as easy to make manually. One common workflow is to reverse engineer the model from the database again using -Force (PMC) or --force (CLI) to overwrite the existing model with an updated one.
Another commonly requested feature is the ability to update the model from the database while preserving customizations like renames, type hierarchies, etc. There is an issue tracking the progress of this feature.
Warning: If you reverse engineer the model from the database again, any changes you've made to the files will be lost.

Using Liquibase, how to handle an object model that is a subset of a database table

Some days I love my DBAs, and then there is today...
In a Grails app, we use the database-migration plugin (based on Liquibase) to handle migrations etc.
All works lovely.
I have been informed that there is a set of DB administrative metadata that we must support on every table. This information has zero use to the app.
Now, I can easily update my models to accommodate this. But that answer is ugly.
The problem is that now, at each migration, Liquibase (via the database-migration plugin) complains about the schema and the model being out of sync.
Is there any way to tell Liquibase (or GORM) that columns x, y, z are to be ignored?
What I am trying to avoid is changesets like this:
changeSet(author: "cwright (generated)", id: "1333733941347-5") {
    dropColumn(columnName: "BUILD_MONTH", tableName: "ASSIGNMENT")
}
This tries to bring the schema back in line with the model. Being able to annotate those columns as not applying to the model would be a good thing.
Sadly, you're probably better off defining your own mapping block and taking control of the Data Mapper (which is essentially what Hibernate is) yourself at this point. If you need to take control of the way the database-migration plugin handles migrations, you might want to look at the source or raise an issue on its JIRA. Naively, mapping your columns explicitly in the domain model should allow you to bypass unnecessary columns from the DB; a rough sketch of such a mapping block follows.
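A rough sketch of the explicit mapping block the answer has in mind (the domain class, its properties, and the column names other than ASSIGNMENT and BUILD_MONTH are made up for illustration):

    // grails-app/domain/Assignment.groovy - illustrative only
    class Assignment {
        String title
        Date dueDate

        static mapping = {
            table 'ASSIGNMENT'
            // Only the columns the app actually uses are mapped here;
            // DBA-only audit columns such as BUILD_MONTH are never referenced.
            title   column: 'TITLE'
            dueDate column: 'DUE_DATE'
        }
    }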

DbContext with Database changes

I'm using Entity Framework 4.3 DbContext to generate database entities. My question is: when I change the database, how can I ensure that the auto-generated code is updated? Where does EF store database rules, like changing allow-null from No to Yes? When I use the Update Model From Database function on the .EDMX file, it does not seem to update the allow-null rules of the table. How can I handle database changes, and where is the code-behind that stores all these rules?
The error message:
Validation failed for one or more entities. See 'EntityValidationErrors' property for more details.
But when I delete all the EF auto-generated files and re-generate them, the rules are updated. I don't think that is a good way to work during development, though.
When I use the Update Model From Database function on the .EDMX file, it does not seem to update the allow-null rules of the table.
That is the correct behavior. The EDMX file has three parts:
database definition (only visible in model browser)
class definition (that is what you see in designer)
mapping between definitions (that is what you see in mapping details)
When you use Update from database, the designer will completely replace the database definition and add new tables or columns to the mapping and class definitions. It will never try to remove or change anything. The reason is that the class definition is customizable: if you have made changes, you don't want the designer to touch them. Update from database only sees the current state; it doesn't know which changes were made by you and which were caused by modifying the database, so it simply takes the safer route: it modifies nothing and lets you correct the inconsistencies yourself.

How to set up an ExtJS 4 store for CRUD with CouchDB?

How do I configure an ExtJS 4 store for simple CRUD from/to CouchDB?
There is a demo project that was put together for our last Austin Sencha meetup that shows connecting Ext 4 to both Couch and MongoDB:
https://github.com/coreybutler/JSAppStack
Specifically, this class will probably help you get started.
I have developed a library called SenchaCouch to make it easy to use CouchDB as the sole server for hosting both application code and data. Check it out at https://github.com/srfarley/sencha-couch.
I'd like to point out that fully implementing CRUD capabilities with the demo requires some modification. CouchDB requires you to append the revision (_rev) for any update/delete operation. This can also cause some issues with the field attributes in the Ext REST proxy. There is a project called mvcCouch that is worth taking a look at; it references a plugin that should help with full CRUD operations against CouchDB.
You'll find a number of subtleties in ExtJS 4's REST proxy that can slow you down. This brief post summarises the major ones:
In your Model class, you have to either define a hardcoded 'id' property or use 'idProperty' to specify one field as the id (see the model sketch at the end of this list).
Your server-side code needs to return the entire updated record(s) to the browser. CouchDB normally returns only an _id and _rev, so you'll have to find a way to return the entire document on your own.
Be aware that the record in the "data" property must be JSON-formatted.
Be sure to implement at least one validator in your Model class because, in the ExtJS source file AbstractStore.js, you can find the following code, which may not return true for a newly created record in the RowEditing plugin when the store is set to autoSync = true.
filterNew: function(item) {
    // only want phantom records that are valid
    return item.phantom === true && item.isValid();
},
This last item is, in my opinion, a design bug. The isValid() function should by rights return true by default, and rely on the developer to throw an error if problems occur.
The end result is that unless you have a validator for every field, the updates never get sent to CouchDB. You won't see any error thrown; it will just do nothing.
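Putting those points together, here is a minimal sketch of such a model (the class, field, and URL names are made up; the validation entry is there mainly so that filterNew() does not silently discard new records):

    // Illustrative ExtJS 4 model for a CouchDB document.
    Ext.define('App.model.Task', {
        extend: 'Ext.data.Model',
        idProperty: '_id',               // CouchDB's document id
        fields: ['_id', '_rev', 'title'],
        validations: [
            // at least one validator, so isValid() has something to check
            { type: 'presence', field: 'title' }
        ],
        proxy: {
            type: 'rest',
            url: '/db/tasks',
            reader: { type: 'json', root: 'data' },  // records live under "data"
            writer: { type: 'json' }
        }
    });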
I just released two updated CouchDB libs, for ExtJS and Sencha Touch respectively (based on S. Farley's previous work). They support nested data writing and basic CRUD.
https://github.com/rwilliams/sencha-couchdb-extjs
https://github.com/rwilliams/sencha-couchdb-touch

How do you make your model's database portable in CakePHP?

I'm not very familiar with Cake, so here are my questions. We're developing an app on MySQL, but it may eventually need to be deployed to MSSQL or Oracle. How do we make sure that we won't have strange problems with our primary keys? In MySQL they are AUTO_INCREMENT columns, but IIRC in Oracle you need to use sequences. Is there a way to make this a transparent change? Am I overthinking it?
Does anyone have experience with switching database vendors on a CakePHP app? Any pointers or things to keep an eye out for?
In the Cake database config file you choose your driver (see http://book.cakephp.org/view/40/Database-Configuration); a minimal example of that file is shown below. Then, if you set your PK (which will also be your AUTO_INCREMENT column if using MySQL) to the field name id, Cake will automatically handle the auto_increment insertion. I would presume (NB: I haven't tried Cake with anything else) that Cake will take care of the equivalent in something like Oracle.
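For reference, a minimal sketch of that config file (CakePHP 1.x style; the connection values are illustrative), where switching vendors is mostly a matter of changing 'driver':

    // app/config/database.php - illustrative values
    class DATABASE_CONFIG {
        var $default = array(
            'driver'   => 'mysql',      // e.g. 'mssql' or 'oracle' later
            'host'     => 'localhost',
            'login'    => 'app_user',
            'password' => 'secret',
            'database' => 'app_db',
        );
    }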
Cake uses its own DB abstraction layer -- but the included abstractions cover quite a bit, and it will perform as specified (i.e. it'll handle your auto increment stuff for you).
In short, you're probably overthinking. That said, I would mock up a little cake app, then try switching databases behind it (change your db config and your app should automagically switch over).
HTH,
Travis
The following practices work great for me.
I use Cake schemas (I tend to set up one schema file for each group of models, e.g. User, Role, and Profile might all be in one UsersSchema file).
Also take a look at using the debuggable.com FixturesShell - it allows you to import test case fixtures into the live database. Great for setting up that initial group of users and roles from the schema file.
Also, if you set your 'id' field to VARCHAR(36) instead of INT, Cake will automatically use UUID-style ids (see the schema sketch below). This means you have a far lower chance of id value collisions if you need to move the data to another application or server.
The fixtures shell also has a command-line tool for generating UUIDs (so you can add them to your $records variable in the fixture for insertion, etc.).
In summary: use the CakeSchema schema shell, the fixtures shell from debuggable.com, and UUID values for your ids, and you get a portable structure-creation tool, a portable data-insertion tool, and a portable id field format.
http://github.com/felixge/debuggable-scraps/tree/fd0e5ad625cb21f5ba16e6b186821a5774089ac7/cakephp/shells/fixtures
http://api.cakephp.org/class/schema-shell
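For illustration, here is roughly what such a UUID primary key looks like in a CakeSchema file (the table and field names are made up):

    // app/config/schema/schema.php - illustrative excerpt
    class AppSchema extends CakeSchema {
        var $users = array(
            'id'       => array('type' => 'string', 'length' => 36, 'null' => false, 'key' => 'primary'),
            'username' => array('type' => 'string', 'length' => 255, 'null' => false),
            'indexes'  => array('PRIMARY' => array('column' => 'id', 'unique' => 1)),
        );
    }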
You need to be using "cake schema" to manage your DB. This will handle all the DB-specific stuff when you create your database.
http://book.cakephp.org/view/735/Generating-and-using-Schema-files
