Purpose of redux-cli

I'm currently looking through a boilerplate called react-redux-starter-kit by Dave Zuko, and there is a folder called blueprints. Apparently it is for a library called redux-cli, and having no clue what that is, I did some research. (Link to redux-cli)
The problem is, the documentation for redux-cli didn't really say what it is. I have also read that redux-cli makes it faster to build apps. Could someone please explain to me what redux-cli is, and how it works?

redux-cli is a utility that speeds up development by reducing the work required to create the basic elements of your app, e.g. components. Usually, when you create such a component, you have to create a file for the component and a test suite, and they all start with a certain structure that you usually copy/paste from other, existing components. With redux-cli you just use simple commands in your console to auto-generate those basic versions of a new component, and you can jump straight in and start writing the essence of it.
Blueprints are simply templates that redux-cli uses when running commands (for example, when running the command to create a new component, it will look at the blueprints directory first to see if you have a customized template for how you want your components to be initialized). The documentation is pretty clear, I'd say: https://github.com/SpencerCDixon/redux-cli#creating-blueprints
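For example, with blueprints in place you might run commands along these lines in your console (the blueprint and entity names here are only illustrative, and the exact names depend on your project and redux-cli version):

```
# generate a presentational ("dumb") component plus its test file from a blueprint
redux generate dumb SimpleButton

# the same thing using the short alias
redux g dumb SimpleButton
```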

Related

What's a good way to navigate code base and find source for a webpage

I'm new to frontend development and I'm thinking about a good way to find the source code for a webpage in our code base. What I usually do is go to the Elements tab in Chrome dev tools, find a distinctive class name, and search for it in the code base to locate the file. I feel there should be a better way to do this. I tried the Sources tab in dev tools, but it shows tons of files and folders in the navigation column. I also tried the Components tab, since we're using React, but the component names are minified to single letters. So I want to get suggestions from you folks. How do you usually do this? Thanks!
You have the right idea, the problem is that you are looking at the minified (presumably production) version of the website. In general, while developing a website, you run a development server, in which all of the code (mostly) appears as it is written in your IDE/editor. That way you can find component names and inspect the source code through the chrome dev tools.
You should talk to whoever is currently responsible for the code to help you get a development server running on your machine. Then you find the component names and do a "find in files" search through your IDE/editor to see what they are and where they are used in the code base. There may be many results that you have to sift through. That's par for the course in large code bases until you become more familiar with what goes where. And even afterwards.
I will say: even things that appear simple can be fiendishly complex, so it would be useful for you if the owner of the code could give you a rough outline of how things are organised and why, to make navigating the code base easier. But it will always be a bit hard, and depending on how clean the code is, it might be nearly impenetrable. Good luck.
There are many ways to find source code or debug code:
① You can use the Chrome dev tools.
② You can use the debugger in VS.
③ You can debug your code by putting a debugger statement in your JavaScript code (see the snippet below).
④ The browser has good built-in search functionality for finding code.
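For point ③, a minimal sketch of what that looks like (the function and data here are made up purely for illustration):

```javascript
function calculateTotal(items) {
  // When dev tools are open, execution pauses on this statement,
  // letting you inspect `items` and step through the code.
  debugger;
  return items.reduce(function (sum, item) {
    return sum + item.price;
  }, 0);
}

calculateTotal([{ price: 2 }, { price: 3 }]); // pauses inside the function
```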

Advice and experience for testing a CN1 app

I would like to start automating the testing of my app written in CodenameOne, but I find it difficult to visualize how to use the TestRecorder (section "Unit Testing") for "industrial" testing.
If anyone here is already using it, could you share a few tips about how you use it?
E.g. how do you use the different "Asserts" buttons, how do you structure your tests into suites, and how do you chain them together (e.g. so each test case starts in the right context, like where in the navigation structure it is supposed to run)? Do you need to manually edit the tests, ...? And is there anything to be aware of before creating lots of tests interactively, e.g. to avoid having your tests invalidated by some irrelevant change to your UI?
I read in the blog post from May 2017 that the TestRecorder "wasn't picked up by many developers and as such it stagnated". I tried TestRecorder and immediately came across a seemingly basic error in it (a missing test for null) when recording a test case using the Toolbar, which gave me the impression that this is still the case. So, if anyone here is using another approach that is working well for you, I'd love to hear about that.
See the test classes we use to test Codename One itself here: https://github.com/codenameone/CodenameOne/tree/master/tests/core
You can use the test recorder to generate a skeleton but you can do this manually just like any test. The test API lets you invoke the app or just pieces of it and perform assertions on the behaviors within.

Is there any good way to refactor a MEAN stack project?

Since the parts of a MEAN stack project are separated, it's really hard to refactor the whole project. I'm trying to do the following things:
Modify mongoose schemas
Reorganize server code
Rename some API calls and parameters
Modify the Angular code to adapt to the new APIs
Are there any good ways to do these?
There is no tool dedicated to the MEAN stack for any of these. Yeoman has some generators, but the existing ones are only for creating, not for refactoring. You can still create your own Yeoman generator with custom actions, or any other server-side script that looks for patterns and changes names according to a given configuration file. This can also be automated with gulp or grunt task runners, but it's really time-consuming ;)
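A rough sketch of that "rename according to a configuration file" idea as a plain Node script; the folder names and the rename map are assumptions for illustration only:

```javascript
// rename-api.js - walk the source tree and apply configured API renames.
// Run with: node rename-api.js
const fs = require('fs');
const path = require('path');

// Hypothetical rename map: old API path -> new API path.
const renames = {
  '/api/getUsers': '/api/users',
  '/api/getUserById': '/api/users/:id'
};

function walk(dir) {
  for (const entry of fs.readdirSync(dir)) {
    const full = path.join(dir, entry);
    if (fs.statSync(full).isDirectory()) {
      walk(full);
    } else if (full.endsWith('.js') || full.endsWith('.html')) {
      let text = fs.readFileSync(full, 'utf8');
      let changed = false;
      for (const [oldName, newName] of Object.entries(renames)) {
        if (text.includes(oldName)) {
          text = text.split(oldName).join(newName);
          changed = true;
        }
      }
      if (changed) {
        fs.writeFileSync(full, text);
        console.log('updated', full);
      }
    }
  }
}

// Assumes the Express server code and the Angular client live in these folders.
walk('./server');
walk('./public');
```

The same logic can be wrapped in a gulp or grunt task, but the core of it is just a find-and-replace driven by one configuration object.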

Is there an automated way to document Nancy services?

Is there any way to auto-generate Swagger documentation (or similar) for a Nancy service?
I found Nancy.Swagger, but there's no information on how to use it and the demo application doesn't seem to demonstrate generating documentation (if it does, it's not obvious).
Any help would be appreciated. Thanks!
In my current project I've been looking a lot into this problem. I used both nancy.swagger and nancy.swagger.attributes.
I quickly discarded Nancy.Swagger, because for me personally it doesn't feel right that you have to create a pure documentation class for each Nancy module. The attributes solution was a bit "cleaner" - at least the code base and documentation were in one place. But this became unmaintainable very fast. Module code is unreadable because of the many attributes. Nothing is generated automatically: you have to put the path, all parameters, even the HTTP method into attributes. This is a huge duplication of effort. Problems came very fast; a few examples:
I changed POST to PUT in Nancy and forgot to update the [Method] attribute.
I added a parameter but not the attribute for it.
I changed parameter from path to query and didn't update the attribute.
It's too easy to forget to update the attributes (let alone the separate documentation-module solution), which leads to discrepancies between your documentation and the actual code base. Our UI team is in another country and they had some trouble using the APIs because the docs just weren't up to date.
My solution? Don't mix code and documentation. Generating docs from code (like Swashbuckle does) IS ok, but actually writing docs in code and trying to duplicate the code in the docs is NOT. It's not better than writing it in a Word document for your clients.
If you want Swagger docs, just do it the Swagger way.
- Spend some time with Swagger.Editor and really author your API in YAML. It looks all-text and hard, but once you get used to it, it's not.
- Spend some time with Swagger.Codegen and adapt it (it already does a fair job of generating Nancy server code, and with a few adjustments to the mustache templates it was just what I needed).
- Automate your process: write a couple of batch scripts to generate your modules and models from the YAML and copy them into your repository (see the example command after this list).
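As a rough illustration of that last step, a swagger-codegen invocation along these lines regenerates NancyFX server stubs from the YAML contract (the file names and output folder are placeholders, and the generator name may differ between swagger-codegen versions):

```
java -jar swagger-codegen-cli.jar generate \
  -i api.yaml \
  -l nancyfx \
  -o ./generated-server
```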
Benefits? Quite a few:
- Your YAML definition is now the single source of truth for your REST contract. If something is different somewhere, it's wrong.
- Nancy server code is auto-generated.
- Client code bases are auto-generated (in our case Android, iOS and Angular).
So whenever I change something in the REST contract, all code bases are regenerated and added to the projects in one batch. I just have to tell the teams that something was updated. They don't have to look through some documents and search for it. They just have their code regenerated and will probably see some compile errors in case of breaking changes.
Do I still use nancy.swagger(.annotations)?
Yes, I do use it in another project, which has just one endpoint with a couple of methods. They don't change often, and it's not worth the effort to set everything up; I have my Swagger docs up and running fast. But if your project is big, the API is changing, and you have multiple code bases depending on your API, my advice is to invest some time into a real Swagger setup.
I am quoting the author's answer here, from https://github.com/khellang/Nancy.Swagger/issues/59:
The installation should be really simple, just pull down the NuGet package, add metadata modules to describe your routes, and hit /api-docs. That should get you the JSON. If you want to add swagger-ui as well, you have to add that manually right now.
No, not in an automated way. https://github.com/yahehe/Nancy.Swagger needs lots of manually created metadata.
There is a nice article here: http://www.c-sharpcorner.com/article/generating-api-document-in-nancy-using-swagger/
Looks like you still have to add swagger-ui separately.

How should spec files be organised in a JavaScript application using MVC

I would like to know your opinion about how you would organize the files/directories in a big web application using MVC (Backbone for example).
I would make the following ( * ). Please tell me your opinion.
( * )
js
js/models/myModel.js
js/collections/myCollection.js
js/views/myView.js
spec/model/myModel.spec.js
spec/collections/myCollection.spec.js
spec/views/myView.spec.js
This is how I've traditionally organized my files. However, I've found that with larger applications it really becomes a pain to keep everything organized, named uniquely, etc. A 'new' way that I've been going about it is organizing my files by feature rather than type. So, for example:
js/feature1/someView.js
js/feature1/someController.js
js/feature1/someTemplate.html
js/feature1/someModel.js
But, oftentimes there are global "things" that you need, like the "user" or a collection of locations that the user has built. So:
js/application/model/user.js
js/application/collection/location.js
This pattern was suggested to me because you can then work on feature sets and package and deploy them using RequireJS with relatively little effort. It also reduces the possibility of dependencies occurring between feature sets, so if you want to remove a feature or update it with brand-new code, you can just replace a folder of 'stuff' rather than hunting for every file. Also, in IDEs, it just makes the files you're working on easier to find.
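For instance, with RequireJS you can treat each feature folder as a package in the loader config (the folder and module names below are just placeholders):

```javascript
// main.js - RequireJS config that maps each feature folder to a package.
require.config({
  baseUrl: 'js',
  packages: [
    // A package normally needs a main.js entry point;
    // "main" overrides that when the entry module has another name.
    { name: 'feature1', main: 'someView' },
    'application'
  ]
});

// 'feature1' now resolves to js/feature1/someView.js.
require(['feature1'], function (feature1) {
  // bootstrap the feature here
});
```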
My two cents.
Edit: What about the spec files?
A few thoughts - you'll just have to pick the one that seems most natural to you I think.
You could follow the same 'feature folder' pattern with the spec files. The upside is that all of the specs are in one place. The downside is that, much like what you're currently doing, you now have two places for one feature's files.
You could put the specs in a 'spec' folder inside the feature folder. The upside is that you now have actual packages that can be wrapped up in a single zip file with no chance of clobbering other work. It's also easier to find directly related files when writing tests - they're all in the same parent folder. The downside is that your production code and test code are now in the same folder, (possibly) publishing both to the world. Granted, you'll probably end up compiling the production JavaScript down to one file at some point, so I'm not sure that's much of an issue.
My suggestion - if this is a large application and you figure a few hands are going to touch the files, leave something like a 'package.json/yml/xml' file in the folder. In there, list out the production, spec, and any data files you need for testing (you can most likely write a quick shell script to do this for you). Then write a quick script to look through your source folder for 'package.whateverYouChose' files, collect the test files and build your unit-testing page with them. So, let's say you add another package: run 'updateSpecRunner' or whatever you name the script, and it'll generate another SpecRunner.html file (or whatever you named the file you're running the specs in). Then you can test it manually in a browser, or automate it using phantomjs/rhino.
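A rough sketch of that generator script in Node; the per-feature 'package.json' layout with a "specs" array and the bare-bones HTML template are assumptions, not an established convention:

```javascript
// updateSpecRunner.js - collect spec files listed in each feature's package.json
// and write a SpecRunner.html that loads them all.
const fs = require('fs');
const path = require('path');

function findPackages(dir, found) {
  for (const entry of fs.readdirSync(dir)) {
    const full = path.join(dir, entry);
    if (fs.statSync(full).isDirectory()) {
      findPackages(full, found);
    } else if (entry === 'package.json') {
      found.push(full);
    }
  }
  return found;
}

// Assumes each feature's package.json contains a "specs": [...] array of file paths.
const specFiles = findPackages('./js', []).flatMap(function (pkgPath) {
  const pkg = JSON.parse(fs.readFileSync(pkgPath, 'utf8'));
  return (pkg.specs || []).map(function (spec) {
    return path.join(path.dirname(pkgPath), spec);
  });
});

const scriptTags = specFiles
  .map(function (file) { return '  <script src="' + file + '"></script>'; })
  .join('\n');

// Minimal page; a real runner would also include the Jasmine library files.
const html = '<!DOCTYPE html>\n<html>\n<head>\n' + scriptTags + '\n</head>\n<body></body>\n</html>\n';
fs.writeFileSync('SpecRunner.html', html);
console.log('Wrote SpecRunner.html with ' + specFiles.length + ' spec files.');
```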
Does that make sense?
You can find a good example of how to organize your application at this link:
Backbone Jasmine examples
It looks more or less like your implementation.
