How to set up an ExtJS 4 store for CRUD with CouchDB? - extjs

How do I configure an ExtJS 4 store for simple CRUD from/to CouchDB?

There is a demo project that was put together for our last Austin Sencha meetup that shows connecting Ext 4 to both Couch and MongoDB:
https://github.com/coreybutler/JSAppStack
Specifically this class will probably help you get started.
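If it helps to see the moving parts, here is a minimal sketch (not taken from that demo) of an Ext 4 model and store wired to CouchDB through a REST proxy. The '/tasks' database path, the '_all_docs' read URL, and the field names are assumptions for illustration, and you will still run into the revision issues described in the answers below.

Ext.define('App.model.Task', {
    extend: 'Ext.data.Model',
    idProperty: '_id',                        // CouchDB documents are keyed by _id
    fields: ['_id', '_rev', 'title', 'done']
});

var store = Ext.create('Ext.data.Store', {
    model: 'App.model.Task',
    autoLoad: true,
    proxy: {
        type: 'rest',
        url: '/tasks',                                        // writes go to /tasks/{_id}
        api: { read: '/tasks/_all_docs?include_docs=true' },  // reads come from a CouchDB list
        reader: {
            type: 'json',
            root: 'rows',     // CouchDB wraps list results in a "rows" array
            record: 'doc'     // with include_docs=true each row carries the document in "doc"
        },
        writer: { type: 'json' }
    }
});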

I have developed a library called SenchaCouch to make it easy to use CouchDB as the sole server for hosting both application code and data. Check it out at https://github.com/srfarley/sencha-couch.

I'd like to point out that fully implementing CRUD capabilities with the demo requires some modification. CouchDB requires you to supply the document revision (_rev) with any update or delete operation. This can also cause some issues with the field attributes in the Ext REST proxy. There is a project called mvcCouch that would be worth taking a look at; it references a plugin that should help with full CRUD operations against CouchDB.
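For illustration only, one way to get the revision into update and delete requests is to carry _rev as a model field and append it to the request URL in a small Rest proxy subclass. This is a sketch under my own assumptions, not what mvcCouch or its plugin actually does:

Ext.define('App.proxy.Couch', {
    extend: 'Ext.data.proxy.Rest',
    alias: 'proxy.couch',

    buildUrl: function(request) {
        var url    = this.callParent(arguments),
            record = request.records && request.records[0];

        // CouchDB rejects PUT/DELETE without the current revision, so pass it
        // along as the ?rev= query parameter whenever the record carries a _rev.
        if (record && record.get('_rev')) {
            url = Ext.urlAppend(url, 'rev=' + encodeURIComponent(record.get('_rev')));
        }
        return url;
    }
});

A model using this proxy would declare _rev among its fields (as in the sketch above) so each loaded document keeps its revision around for the next write.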

You'll find a number of subtleties in ExtJS 4's REST proxy that can slow you down. This brief post summarises the major ones:
In your Model class, you have to either define a hardcoded 'id' property or use 'idProperty' to designate one field as the id.
Your server-side code needs to return the entire updated record(s) to the browser. CouchDB normally returns only an _id and _rev, so you'll have to find a way to return the entire document on your own.
Be aware that the record in the "data" property must be JSON-formatted.
Be sure to implement at least one validator in your Model class because, in the ExtJS source file AbstractStore.js, you can find the following code, which may not return true for a newly created record in the RowEditing plugin when the store is set to autoSync = true (see the sketch after these notes):
filterNew: function(item) {
    // only want phantom records that are valid
    return item.phantom === true && item.isValid();
},
This last item is, in my opinion, a design bug. The isValid() function should by rights return true by default, and rely on the developer to throw an error if problems occur.
The end result is that unless you have a validator for every field, the updates never get sent to CouchDB. You won't see any error thrown; it will just do nothing.
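Putting those notes together, a minimal model that survives filterNew() might look like the sketch below; the field names and the presence validation are assumptions for illustration only.

Ext.define('App.model.Doc', {
    extend: 'Ext.data.Model',
    idProperty: '_id',                  // designate the id explicitly, as noted above
    fields: ['_id', '_rev', 'title'],
    validations: [
        // at least one validation, so isValid() can return true and autoSync fires
        { type: 'presence', field: 'title' }
    ]
});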

I just released two updated CouchDB libs, for ExtJS and Sencha Touch respectively (based on S. Farley's previous work). They support nested data writing and basic CRUD.
https://github.com/rwilliams/sencha-couchdb-extjs
https://github.com/rwilliams/sencha-couchdb-touch

Related

Best approach to get notifications from database when table record changing in asp.net core web api

I'm working with an ASP.NET Core 2.2 web API project with Angular 7. I want to get notifications when a record is inserted or updated in a specific table. I have read many articles about the SqlDependency class, but none of them gave me a satisfying answer. What I want to know is: is there a better approach, or another way to do this? And if SqlDependency is the best approach, how do I combine it with Entity Framework Core?
You could create an AFTER UPDATE trigger on the table. Depending on how you want to be notified (SQL Mail, for example), you can then code for it.
(On a personal note, I hate triggers, but this is another way to do it.)

How is ElasticSearch supposed to work in CakePHP 3?

I've been trying my very best not to ask any nosy questions here on Stack Overflow, but I have been stuck on this problem for almost a week and I couldn't find any solution.
I already have my working website built with CakePHP 3.2. What the website basically does is scrape Twitter for tweets containing a given search term, check if it's already in my database, and store it if it doesn't yet exist. Twitter's JSON response has this "tweet_id" property, and I've been using that value to check whether I should ignore or append a specific tweet to my DB. While this might be okay when my database is small, I suspect it's going to slow things down considerably when my tables grow bigger. Thus my need for ElasticSearch.
My ElasticSearch server is running on my Arch Linux install, and I've configured my app to point to the said server. Also, I have my "Type" object named the same way as my "Tweets" table (I followed the documentation up to the overview part: http://book.cakephp.org/3.0/en/elasticsearch.html). This craps out an 'Unknown method "alias"' error, and Google searches led me to creating an alternate pagination class, since that was what some found to be the cause of the error (https://github.com/lorenzo/audit-stash/issues/4), but that still doesn't fix things.
I'm not sure if I got this right. I installed the ElasticSearch plugin with the assumption that all I have to do is name the Types the same name as my tables, since to me the documentation "implies" that this should be done on top of the Blog Tutorial they did to "improve query performance".
TL;DR: how is this supposed to work? Is my above assumption right? Do I name the Types differently and index everything myself? I'm not sure if there's just too much automagic, or if I'm just poor at this sort of thing. And yes, I'm new to frameworks (but not to PHP, among other languages).
Thanks in advance!

Create does not work in GAE Datastore viewer

When I try to create some entities I don't see the option to input fields. I just see the SaveEntity button.
However I can view all the existing entities.
What is very strange is that there is another entity called VideoEntity for which create did not work yesterday but works today.
Can somebody help me with this seemingly unpredictable tool?
Regards,
Sathya
I think the console knows what properties each entity has based on existing data, rather than your models, and that data is only updated periodically. When did you upload your app? Maybe waiting a few hours will give the console time to update.
Alternatively, you could use the Remote API to add your entities, or write and upload a small snippet such as ...
VideoStatsEntity(app='home', ip='116.89.52.67', params='tag=20130210').put()
Writing a simple interface to the datastore to allow you to edit/create models is probably the best thing to do in this case. You know what they contain, so you can adjust your interface accordingly, rather than waiting for the admin interface to "catch up", as Gwyn notes.
I believe there are some property types that are impossible to add via the admin interface you are using, so you'll probably get to the point, sooner rather than later, of creating a custom interface.
The admin datastore view is good for quickly checking out the contents of the datastore, but have you ever tried paging through hundreds of entries? Not fun.

Using liquibase, how to handle an object model that is a subset of database table

Some days I love my DBAs, and then there is today...
In a Grails app, we use the database-migration plugin (based on Liquibase) to handle migrations etc.
All works lovely.
I have been informed that there is a set of DB administrative metadata that we must support on every table. This information has zero use to the app.
Now, I can easily update my models to accommodate this. But that answer is ugly.
The problem is that now, at each migration, Liquibase (via the database-migration plugin) complains about the schema and the model being out of sync.
Is there any way to tell Liquibase (or GORM) that columns x, y, z are to be ignored?
What I am trying to avoid is changesets like this:
changeSet(author: "cwright (generated)", id: "1333733941347-5") {
    dropColumn(columnName: "BUILD_MONTH", tableName: "ASSIGNMENT")
}
This tries to bring the schema back in line with the model. Being able to annotate those columns as not applying to the model would be a good thing.
Sadly, you're probably better off defining your own mapping block and taking control of the Data Mapper (which is essentially what Hibernate is) yourself at this point. If you need to take control of the way the database-migration plugin handles migrations, you might want to look at the source or raise an issue on the JIRA. Naively, mapping your columns explicitly in the domain model should allow you to bypass the unnecessary columns from the DB.

Best strategy to initially populate a Grails database backend

I'd like to know your approach/experiences when it's time to initially populate the Grails DB that will hold your app data. Assuming you have CSVs with data, is it "safer" to create a script (with whatever tool fits you) that:
1. Generates the BootStrap commands with the domain classes, runs them in a test or dev environment, and then uses the native DB commands to export the data to prod?
2. Creates the DB insert script, assuming GORM's version = 0 and manually incrementing the soon-to-be auto-generated IDs?
My fear is that the second approach may lead to inconsistencies, since Hibernate will be responsible for generating the IDs, and there may be something else I'm missing.
Thanks in advance.
Take a look at this link. It allows you to run Groovy scripts in the normal Grails context, giving you access to all Grails features, including GORM. I'm currently importing data from a legacy database and have found that writing a Groovy script that uses the Groovy SQL interface to pull out the data, then puts that data into domain objects, appears to be the easiest thing to do. Once you have the data imported, you just use the commands specific to your database system to move that data to the production database.
Update:
Apparently the updated entry referenced from the blog entry I link to no longer exists. I was able to get this working using code at the following link, which is also referenced in the comments.
http://pastie.org/180868
Finally, it seems that the simplest solution is to consider that GORM, as of the current release (1.2), uses a single sequence for all auto-generated IDs. Taking this into account when creating whatever scripts you need (in the language of your preference) should suffice. I understand it is planned for the 1.3 release that every table will have its own sequence.
