Since the parts of a MEAN stack project are separate, it's really hard to refactor the whole project. I'm trying to do the following things:
Modify mongoose schemas
Reorganize server code
Rename some api calls and parameters
Modify Angular code to adapt to the new APIs
Are there any good ways to do these?
There is no tool dedicated to the MEAN stack for any of these. Yeoman has some generators, but the existing ones are only for scaffolding, not for refactoring. You can still create your own Yeoman generator with custom actions, or any other server-side script that looks for patterns and changes names according to a given configuration file. This can also be automated with Gulp or Grunt task runners, but it's really time-consuming ;)
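If you go the script route, the core of it is small. Here's a minimal sketch in Node — the renames.json format, file extensions and directory names are assumptions you'd adapt to your own project:

```js
// rename-apis.js -- walks the source tree and applies the renames listed
// in renames.json, e.g. { "/api/v1/getUsers": "/api/v1/users" }.
// Paths and the config format are illustrative, not a standard.
const fs = require('fs');
const path = require('path');

const renames = JSON.parse(fs.readFileSync('renames.json', 'utf8'));

function walk(dir) {
  for (const entry of fs.readdirSync(dir)) {
    const full = path.join(dir, entry);
    if (fs.statSync(full).isDirectory()) {
      if (entry !== 'node_modules') walk(full);
    } else if (/\.(js|html)$/.test(entry)) {
      let src = fs.readFileSync(full, 'utf8');
      let changed = false;
      for (const [from, to] of Object.entries(renames)) {
        if (src.includes(from)) {
          src = src.split(from).join(to); // plain literal replacement
          changed = true;
        }
      }
      if (changed) {
        fs.writeFileSync(full, src);
        console.log('updated', full);
      }
    }
  }
}

// run it over both server and client code so route definitions and
// $http/$resource calls stay in sync
['server', 'public'].forEach(walk);
```

It won't catch dynamically built URLs, but it covers the mechanical part of renaming API calls on both the server and the Angular side; run it once and review the diff before committing.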
We're designing a system for a client where authenticated users can upload images. We've created an API to upload the files, but the client wants to keep only the latest file and delete all previous ones, so that there is only ever one.
We've looked through the docs and haven't come across a way for ADAM to handle this, in either 2sxc or DNN's file system.
When deleting images we see API calls like the following go to the internal 2sxc API, but we're wondering if this is exposed somewhere in the public API?
https://somedomain.com/api/2sxc/app/auto/data/61393528-b401-411f-a001-f423ea46700a/b7d04e2c-c565-496c-8efb-aa133cf90d33/Photo/delete?subfolder=&isFolder=false&id=189&usePortalRoot=false&appId=3
We could probably use the same endpoint above, but we'd likely run into permission issues or changes to the APIs that could be problematic.
Thank you for any advice you can give! Perhaps #iJungleBoy can provide some thoughts on this.
As a solution from a completely different direction: if you are on a later release of 2sxc (v12.8+, v13+) and comfortable programming in C#, you might consider doing this as a "cleanup" from a Dnn Scheduled Task. This can be done with a relatively easy setup. We have a Gist in place that we use as a starter: you simply put the code in the /App_Code folder, then set up a normal Dnn Scheduled Task. NOTE that you can scroll down to the first comment on the Gist to see a screenshot of a complete working setup.
Accuraty's AccuTasks template on GitHub Gists
There are two more key things to note:
You need to install Dnn's CodeDom 3.6 because the example uses a later C# version's string interpolation - OR remove the few $"ASL2021 - {this.GetType().Name}, Task Scheduled Email" bits, or convert them to string.Format() or something.
Since your task's code is NOT running in a (2sxc) module, if needed, you'll do stuff like this: 2sxc Docs - Use 2sxc Instance or App Data from External C# Code
So, if you are comfortable writing code that "finds and deletes stuff older than NN days" - this might be the way to go.
I'm currently looking through a boilerplate called react-redux-starter-kit by davezuko, and there is a folder called blueprints. Apparently it is for a library called redux-cli, and having no clue what that is, I did some research. (Link to redux-cli)
The problem is, the documentation for redux-cli doesn't really say what it is. I have also read that redux-cli makes it faster to build apps. Could someone please explain what redux-cli is and how it works?
redux-cli is a utility that speeds up development by reducing the work required to create the basic elements of your app, e.g. components. Usually, when you create such a component, you have to create a file for the component and a test suite, and they all start with a certain structure that you usually copy/paste from other, existing components. With redux-cli you just use simple commands in your console to auto-generate those basic versions of a new component, and you can jump straight in and start writing the essence of it.
Blueprints are simply the templates redux-cli uses when running commands (for example, when running the command to create a new component, it will first look at the blueprints directory to see if you have a customized template for how you want your components to be initialized). The documentation is pretty clear, I'd say: https://github.com/SpencerCDixon/redux-cli#creating-blueprints
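To make that concrete, here's roughly the kind of skeleton such a generator stamps out for a new component — the file name and shape here are illustrative, and the real output depends entirely on the blueprint:

```js
// src/components/UserBadge.js -- generated skeleton: imports, an empty
// component and a default export, ready for you to fill in.
// (The name "UserBadge" is made up for illustration.)
import React from 'react';

const UserBadge = () => (
  <div className="user-badge">
    {/* the "essence" you actually write by hand goes here */}
  </div>
);

export default UserBadge;
```

A matching test stub is typically generated alongside it — exactly the copy/paste structure the tool removes.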
Is there any way to auto-generate Swagger documentation (or similar) for a Nancy service?
I found Nancy.Swagger, but there's no information on how to use it and the demo application doesn't seem to demonstrate generating documentation (if it does, it's not obvious).
Any help would be appreciated. Thanks!
In my current project I've been looking into this problem a lot. I used both Nancy.Swagger and Nancy.Swagger.Annotations.
I quickly discarded Nancy.Swagger, because for me personally it doesn't feel right to have to create a pure documentation class for each Nancy module. The attributes solution was a bit "cleaner" - at least the code base and documentation were in one place. But this became unmaintainable very fast. Module code is unreadable because of all the attributes. Nothing is generated automatically: you have to put the path, all parameters, even the HTTP method in attributes. This is huge effort duplication. Problems came very fast; a few examples:
I changed POST to PUT in Nancy and forgot to update the [Method] attribute.
I added a parameter but not the attribute for it.
I changed parameter from path to query and didn't update the attribute.
It's too easy to forget to update the attributes (let alone the separate documentation-module approach), which leads to discrepancies between your documentation and the actual code base. Our UI team is in another country, and they had some trouble using the APIs because the docs just weren't up to date.
My solution? Don't mix code and documentation. Generating docs from code (like Swashbuckle does) IS OK, but actually writing docs in code and trying to duplicate the code in the docs is NOT. It's no better than writing it in a Word document for your clients.
If you want Swagger docu, just do it the Swagger way.
- Spend some time with Swagger.Editor and really author your API in YAML. It looks all-text and hard, but once you get used to it, it's not.
- Spend some time with Swagger.Codegen and adapt it (it already does a fair job of generating Nancy server code, and with a few adjustments to the moustache templates it was just what I needed).
- Automate your process: write a couple of batch scripts to generate your modules and models from the YAML and copy them to your repository (a sketch of such a script follows below).
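For that last step, a thin wrapper around the swagger-codegen CLI is enough. A sketch in Node — the jar location, generator names and output paths are assumptions to adapt:

```js
// generate.js -- regenerate server and client code bases from api.yaml.
// Jar path, generator names and output folders are illustrative.
const { execSync } = require('child_process');

const targets = [
  { lang: 'nancyfx', out: 'generated/server' },            // Nancy modules/models
  { lang: 'typescript-angular', out: 'generated/angular' },
  { lang: 'java', out: 'generated/android' },
  { lang: 'swift', out: 'generated/ios' },
];

for (const { lang, out } of targets) {
  execSync(
    `java -jar swagger-codegen-cli.jar generate -i api.yaml -l ${lang} -o ${out}`,
    { stdio: 'inherit' } // show codegen output in the console
  );
}
// from here, copy the generated modules/models into the real projects
```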
Benefits? Quite a few:
- Your YAML definition is now the single source of truth for your REST contract. If somewhere something is different, it's wrong.
- Nancy server code is auto-generated.
- Client code bases are auto-generated (in our case Android, iOS and Angular).
So whenever I change something in the REST contract, all code bases are regenerated and added to their projects in one batch run. I just have to tell the teams something was updated. They don't have to look through documents and search for changes; they just get their code regenerated, and probably see some compile errors in case of breaking changes.
Do I still use nancy.swagger(.annotations)?
Yes, I do use it in another project, which has just one endpoint with a couple of methods. They don't change often, and it's not worth the effort to set everything up there; the annotations got my Swagger docs up and running fast. But if your project is big, the API is changing, and you have multiple code bases depending on your API, my advice is to invest some time in a real Swagger setup.
I am quoting the author's answer here from https://github.com/khellang/Nancy.Swagger/issues/59:
The installation should be really simple, just pull down the NuGet package, add metadata modules to describe your routes, and hit /api-docs. That should get you the JSON. If you want to add swagger-ui as well, you have to add that manually right now.
No, not in an automated way. https://github.com/yahehe/Nancy.Swagger needs lots of manually created metadata.
There is a nice article here: http://www.c-sharpcorner.com/article/generating-api-document-in-nancy-using-swagger/
Looks like you still have to add swagger-ui separately.
I would like to know your opinion about how you would organize the files/directories in a big web application using MVC (Backbone, for example).
I would do the following ( * ). Please tell me your opinion.
( * )
js
js/models/myModel.js
js/collections/myCollection.js
js/views/myView.js
spec/model/myModel.spec.js
spec/collections/myCollection.spec.js
spec/views/myView.spec.js
This is how I've traditionally organized my files. However, I've found that with larger applications it really becomes a pain to keep everything organized, named uniquely, etc. A 'new' way that I've been going about it is organizing my files by feature rather than type. So, for example:
js/feature1/someView.js
js/feature1/someController.js
js/feature1/someTemplate.html
js/feature1/someModel.js
But, oftentimes there are global "things" that you need, like the "user" or a collection of locations that the user has built. So:
js/application/model/user.js
js/application/collection/location.js
This pattern was suggested to me because you can then work on feature sets, and package and deploy them using RequireJS with relatively little effort. It also reduces the possibility of dependencies between feature sets, so if you want to remove a feature or update it with brand new code, you can just replace a folder of 'stuff' rather than hunting for every file. Also, in IDEs, it just makes the files you're working on easier to find.
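For reference, the RequireJS side of that is mostly its packages config — a rough sketch, with the feature names as placeholders:

```js
// main.js -- declare each feature folder as a RequireJS package so it
// can be loaded, built and swapped out as a unit. Feature names are
// placeholders.
require.config({
  baseUrl: 'js',
  packages: [
    'feature1', // resolves require('feature1') to js/feature1/main.js
    'feature2',
    { name: 'application', location: 'application' } // shared user/location models
  ]
});

require(['feature1'], function (feature1) {
  feature1.start(); // assumed entry point exported by js/feature1/main.js
});
```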
My two cents.
Edit: What about the spec files?
A few thoughts - you'll just have to pick the one that seems most natural to you I think.
You could follow the same 'feature folder' pattern with the spec files. The upside is that all of the specs are in one place. The downside is that, much like what you're currently doing, you have two places for one feature's files.
You could put the specs in a 'spec' folder inside the feature folder. The upside is that you now have actual packages that can be wrapped up in a single zip file with no chance of clobbering other work. It's also easier to find directly related files when writing tests - they're all in the same parent folder. The downside is that your production code and test code are now in the same folder, (possibly) publishing the tests to the world. Granted, you'll probably end up compiling the production JavaScript down to one file at some point, so I'm not sure that's much of an issue.
My suggestion - if this is a large application and you figure a few hands will be touching the files, leave something like a 'package.json/yml/xml' file in each folder. In there, list the production, spec, and any data files you need for testing (you can most likely write a quick shell script to do this for you). Then write a quick script that looks through your source folder for 'package.whateverYouChose' files, gets the test files and builds your unit-testing page with them. So, let's say you add another package: run 'updateSpecRunner', or whatever you name the script, and it'll generate another SpecRunner.html file (or whatever you named the file you're running the specs on). Then you can test it manually in a browser, or automate it using phantomjs/rhino.
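A sketch of that generator in Node — the package-file name, its 'specs' field and the HTML template are all placeholders for whatever convention you pick:

```js
// updateSpecRunner.js -- find every package.json under js/, collect the
// files listed in its "specs" field, and write a SpecRunner.html that
// loads them all. Field names and paths are illustrative.
const fs = require('fs');
const path = require('path');

function findPackages(dir, found = []) {
  for (const entry of fs.readdirSync(dir)) {
    const full = path.join(dir, entry);
    if (fs.statSync(full).isDirectory()) findPackages(full, found);
    else if (entry === 'package.json') found.push(full);
  }
  return found;
}

const specs = [];
for (const pkgFile of findPackages('js')) {
  const pkg = JSON.parse(fs.readFileSync(pkgFile, 'utf8'));
  for (const spec of pkg.specs || []) {
    // spec paths are relative to the package file's folder
    specs.push(path.join(path.dirname(pkgFile), spec));
  }
}

const scriptTags = specs
  .map((s) => `  <script src="${s}"></script>`)
  .join('\n');

fs.writeFileSync('SpecRunner.html', `<!DOCTYPE html>
<html>
<head>
  <title>Specs</title>
  <!-- jasmine boot files would be included here -->
${scriptTags}
</head>
<body></body>
</html>
`);
console.log(`SpecRunner.html written with ${specs.length} spec(s)`);
```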
Does that make sense?
You can find a good example of how to organize your application at this link:
Backbone Jasmine examples
It looks more or less like your implementation.
We currently use an SVN repository to ensure everyone's local environments are kept up-to-date. However, Drupal website development is somewhat trickier in that any custom code you write (for instance, PHP code written for a node body) is stored in the DB and the changes aren't recognized by the SVN working copy.
There are a couple of developers presently working on the same area of a Drupal site, but we're uncertain how best to merge our local Drupal database changes together. Committing patches of database dumps seems clumsy at best, and is most likely inefficient and error-prone for this purpose.
Any suggestions about how to approach this issue are appreciated!
Unfortunately, database deployment/updates are one of Drupal's weak spots. See this question & its answers, as well as this one, for some suggestions on how to deal with it.
As for CCK, you could find some hints here.
As for PHP code in content, I agree with googletorp that you should avoid doing this. However, if for some reason you absolutely have to, you could try to reduce the code to a simple function call. That way you'd have the function itself in a module (and that would be tracked via SVN). But then you are only a small step from removing the need for the inline code anyway ...
If you are putting PHP code into your database, then you are doing it wrong. Some things do live in the database, like Views and CCK fields, plus some settings. But if you put PHP code inside a node body, you are creating a big code-maintenance problem. You should really use the API and hooks instead. Create modules instead of ugly hacks with eval() etc.
All that has been said above is true and good advice. To answer your practical question: there are a number of recent modules that you could use to transport the changes made by the various developers.
The "Features" modules is a cure the the described issue of Drupal often providing nice features, albeit storing lots of configs and structure in the DB. This module enables you to capture a feature and output it as a pseudo-module (qualifies as a module with .info and code-files and all). Here is how it works:
1. Select the functionality/feature to export.
2. The module analyses the modules, files and DB content required to rebuild that feature elsewhere.
3. The module creates a pseudo-module that contains the instructions from step 2 and outputs everything (even SQL to rebuild the stuff in the DB) into a module package (and sets dependencies on other required modules).
4. Install the pseudo-module on your new site and enable it.
5. The pseudo-module replicates the feature you exported, rebuilding DB data and all.
And you can tell your boss you did it all manually with razor focus to avoid even 1 error ;)
I hope this helps - http://drupal.org/project/features
By committing patches of database dumps, do you mean taking an entire extract of the db and committing it after each change?
How about a master copy of the database? Extract all tables, views, stored procedures, etc. into individual files, put them into SVN and do your merge edits on the individual objects?
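If you go that route, the splitting can be scripted — a sketch using mysqldump, assuming a MySQL-backed Drupal site; the database name, credentials and output folder are placeholders:

```js
// dump-tables.js -- dump every table into its own .sql file so SVN
// diffs and merges stay per-object. DB name and output dir are
// placeholders; credentials come from your MySQL config here.
const { execSync } = require('child_process');
const fs = require('fs');

const db = 'drupal';
const outDir = 'db';
fs.mkdirSync(outDir, { recursive: true });

const tables = execSync(`mysql -N -e "SHOW TABLES" ${db}`)
  .toString()
  .trim()
  .split('\n');

for (const table of tables) {
  // --skip-dump-date keeps identical dumps byte-identical between runs
  execSync(`mysqldump --skip-dump-date ${db} ${table} > ${outDir}/${table}.sql`);
}
console.log(`dumped ${tables.length} tables into ${outDir}/`);
```

In practice you'd also skip volatile tables (cache_*, sessions, watchdog) so the diffs stay meaningful.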