Node.js CLI to Webpage Wrapper - AngularJS

My company has a folder called tools... which has about 50-some CLI tools our support agents use for various troubleshooting and reporting...
Company is getting bigger... giving every rep access to our source code just so they can run the tools is not ideal... Plus, things like npm package dependencies have to be dealt with, and that's more maintenance than they want.
Ideally, I would create an internal-only website that simply presents a dropdown of all the tools in the /tools folder. The web server (Express, for example) would execute the scripts and then redirect the standard output to the screen... The kicker is that I need to allow for standard input as well, since the tools are somewhat interactive... users get to select choices.
I'm sure there are all kinds of security issues with this, and I just want to emphasize that this would be for internal use only and run by trusted users.
I've seen various terminal emulators and projects like this, but they looked complicated to adapt to our use case. I really just want to let people run a preset set of commands... I feel like this type of thing should exist and I just haven't stumbled upon it yet.
Alternatively... I've considered refactoring the tools to use something like Swagger, which would present the options for them to fill out, but that isn't ideal either since we have conditional prompts...

You could try xterm.js to create a browser-based terminal that can execute the CLI tools.
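A very rough client-side sketch (assuming the xterm npm package, 4.x, and a plain WebSocket endpoint at ws://localhost:3000/tools; both are placeholders of mine, not something xterm.js prescribes):

import { Terminal } from 'xterm';

// Render a terminal into an existing <div id="terminal"> element.
const term = new Terminal();
term.open(document.getElementById('terminal'));

// Bridge the terminal to the server: keystrokes go up as stdin,
// anything the server sends back is written to the terminal.
const socket = new WebSocket('ws://localhost:3000/tools');
socket.onmessage = (event) => term.write(event.data);
term.onData((data) => socket.send(data));

The server side then only needs to spawn the selected tool and relay its stdio over that socket.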

You could use socket.io and build a Node.js app that exposes only the specific required commands.
socket.io handles the client/server communication from the web page.
Node.js gives you the server side, where you can spawn the commands and pass their input/output through.
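A rough server-side sketch along those lines, assuming socket.io 2.x (the tools/ paths, the whitelist, and the 'run'/'stdin'/'output' event names are my own assumptions for illustration, not a drop-in implementation):

// server.js - Express serves the page, socket.io carries stdin/stdout.
const path = require('path');
const express = require('express');
const { spawn } = require('child_process');

const app = express();
const server = require('http').createServer(app);
const io = require('socket.io')(server);

// Only expose a preset list of tools, never an arbitrary command string.
const TOOLS = {
  'usage-report': path.join(__dirname, 'tools', 'usage-report.js'),
  'reset-password': path.join(__dirname, 'tools', 'reset-password.js'),
};

io.on('connection', (socket) => {
  socket.on('run', (toolName) => {
    const script = TOOLS[toolName];
    if (!script) {
      return socket.emit('output', 'Unknown tool\n');
    }

    const child = spawn('node', [script]);

    // Relay the tool's output (and errors) to the browser...
    child.stdout.on('data', (chunk) => socket.emit('output', chunk.toString()));
    child.stderr.on('data', (chunk) => socket.emit('output', chunk.toString()));
    child.on('close', (code) => socket.emit('output', `\n[exited with code ${code}]\n`));

    // ...and relay whatever the user types back into the tool's stdin,
    // which keeps the interactive prompts working.
    socket.on('stdin', (line) => child.stdin.write(line));
  });
});

server.listen(3000);

On the page, the dropdown just emits 'run' with a tool name and forwards typed lines as 'stdin' events.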

Related

How to develop a React app via online development

I'm just curious about this situation when creating an app with React JS. Is there any way to build directly on the hosting cPanel, not on localhost, during development? I don't know if this question is right (I'm new to this), but what about this: if we're done developing locally, then build and upload to the server, and then there are small changes to the application, you can't change it directly on the server because the code is bundled and minified. I tried to search on Google and watch tutorials but couldn't find anything. I know there's nothing wrong with building locally; however, I like the idea that while I'm building I know it works well and can see it live, and if there are small changes I could change them directly.
Apologies for my curiosity. Thanks in advance for your ideas and for correcting me.
I'm not sure that React strictly requires bundling; it is not that big by itself. One useful way you can do it: build your React app locally, create a Git repository and push to it, and then pull from it onto your server over an SSH connection.
This approach may require some installation on the server side, again over the SSH connection. You can look up the details of the workflow I'm suggesting.
Appreciating your curiosity, I can think of two possible (though not at all recommended) solutions.
1. Dump JSX
React applications require a build process primarily because of JSX syntax, which exists for developer convenience. If there is no JSX in your code, there is no need to build. So this JSX:
return (
<h1>Greetings, {this.props.name}!</h1>
);
should be written as this plain JS:
return React.createElement('h1', null, 'Greetings, ' + this.props.name + '!');
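To make the point concrete, here is a small hypothetical sketch with no build step at all, assuming React and ReactDOM (17 or earlier, so ReactDOM.render is available) are loaded as globals from <script> tags; the Greeting component and file name are only examples:

// app.js - no bundler, no JSX transform; React and ReactDOM come from <script> tags.
function Greeting(props) {
  return React.createElement('h1', null, 'Greetings, ' + props.name + '!');
}

ReactDOM.render(
  React.createElement(Greeting, { name: 'World' }),
  document.getElementById('root')
);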
2. Set up a development environment on the server
This is a risky one; there are possible security issues.
It's like having a centralized code base on the server that anyone with access can modify.
Here, you can edit files and run the build task directly on the server.
Notes:
Today's basic development flow is code -> build -> deploy. Better to stick with it for serious applications.

What is the general practice for an Express and React based application: keeping the server and client code in the same or in different projects/folders?

I am from a Microsoft background where I always used to keep server and client applications in separate projects.
Now I am writing a client-server application with Express as the back end and React as the front end. Since I am totally a newbie to these two tools, I would like to know:
What is the general practice?
Keeping the Express (server) code base and the React (client) code base as separate projects, or keeping the server and client code bases together in the same project? I could not think of the pros and cons of either approach.
Your recommendations are welcome!
PS: Please do not mark this question as opinion-based; I believe I have a valid reason to ask for recommendations.
I would prefer keeping the server and client as separate projects, because that way we can easily manage their dependencies, dev dependencies, and unit test files.
Also, if we need to move to a different front-end framework at a later point, we can do that without disturbing the server.
In my opinion, it's probably best to have separate projects here. But you made me think a little about the "why" for something that seems obvious at first glance, but maybe is not.
My expectation is that a project should be mostly organized one-to-one on building a single type of target, whether that be a website, a mobile app, a backend service. Projects are usually an expression of all the dependencies needed to build or otherwise output one functioning, standalone software component. Build and testing tools in the software development ecosystem are organized around this convention, as are industry expectations.
Even if you could make the argument that there are advantages to monolithic projects that generate multiple software components, you are going against people's expectations and that creates the need for more learning and communication. So all things being equal, it's better to go with a more popular choice.
Other common disadvantages of monolithic projects:
greater tendency for design to become tightly coupled and brittle
longer build times (if using one "build everything" script)
takes longer to figure out what the heck all this code in the project is!
It's also quite possible to make macro-projects that work with multiple sub-projects, and in a way get the benefits of both approaches. This is basically just some kind of build script that grabs the output of the sub-project builds and does something useful with them in combination, e.g. deploy to a server environment or run automated tests.
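A minimal sketch of such root-level orchestration, assuming sub-folders named client and server that each have their own package.json (the names are placeholders):

{
  "name": "acme-workspace",
  "private": true,
  "scripts": {
    "build": "npm --prefix client run build && npm --prefix server run build",
    "test": "npm --prefix client test && npm --prefix server test"
  }
}

Each sub-project keeps its own dependencies and build output; the root package.json only coordinates them.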
Finally, all devs should be equipped with tools that let them hop between discrete projects easily. If there are pains in doing this, it's best to solve them without resorting to a monolithic project structure.
Some examples of practices that help with developing React/Node-based software that relies on multiple projects:
The IDE easily supports editing multiple projects. And not in some cumbersome "one project loaded at a time" way.
Projects are deployed to a repository that can be easily used by npm or yarn to load in software components as dependencies.
Use "npm link" to work with editable local versions of sub-projects all at once. More generally, don't require a full publish and deploy action to have access to sub-projects you are developing along with your main React-based project.
Use automated build systems like Jenkins to handle macro tasks like building projects together, deploying, or running automated tests.
Use versioning scrupulously in package.json. Let each software component have its own version number and follow the semver convention, which indicates when changes may break compatibility.
If you have a single team (developer) working on front and back end software, then set the dependency versions in package.json to always get the latest versions of sub-projects (packages).
If you have separate teams working on the front-end and back-end software, you may want to relax the dependency version to major version numbers only, using a semver range in package.json. (Basically, you want some protection from breaking changes; see the package.json sketch after this list.)
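As a purely illustrative package.json fragment (the acme-* names and version numbers are hypothetical), the front end might declare its back-end client dependency with a caret range, so any 1.x release is accepted but a breaking 2.0.0 is not:

{
  "name": "acme-frontend",
  "version": "2.4.1",
  "dependencies": {
    "react": "^16.13.1",
    "acme-api-client": "^1.0.0"
  }
}

A single team that wants to always pull in the latest sub-project builds could instead use a looser range, or bump an exact version on every publish.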

How to publish AIML embedded with JavaScript?

I've written an AIML file for a chat bot and I'd like to build an interactive web application which allows me to chat with the bot in the web browser.
Is it possible to achieve this with HTML & JavaScript?
There is no short answer on how to write a web application which allows a user to interact with your AIML. Writing such an application from scratch will be much more work than compiling the AIML was.
The easiest option would be to use a pre-built service like PandoraBots, which allows you to upload AIML files and interact with them in the web browser. The explorer part of the website is free to use. They also have paid developer options which generate an API to bridge your AIML script and any applications you might want to build. It can easily be connected to common chat apps like Google Talk, etc.
If you decide to build everything from scratch, you might want to check out the AIML Interpreter library for Node.js.
UPDATE: Here is a Node.js-based interpreter that you might find useful: https://github.com/mrchimp/surly2
I was looking at AIML too and had similar questions. I just found RiveScript, and it looks like it fits your need to run JavaScript based on a match. It is not AIML, but very close. There is also at least one tool to convert from AIML to RiveScript, so I would say this fits your needs within those constraints.
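For what it's worth, a tiny hedged sketch of the rivescript npm package (assuming its promise-based 2.x API; brain.rive is just a placeholder file name):

// bot.js - load a RiveScript "brain", sort its triggers, then ask for a reply.
const RiveScript = require('rivescript');

const bot = new RiveScript();

bot.loadFile('brain.rive')
  .then(() => {
    bot.sortReplies();
    return bot.reply('local-user', 'Hello, bot!');
  })
  .then((reply) => console.log('Bot says:', reply))
  .catch((err) => console.error('Loading error:', err));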

SCCM Detection Methods - where are they stored?

At the end of last week our central IT department introduced SCCM and applied it to a bunch of clients in our division. My colleagues and I work as so-called "IT partners" in first-level support for a few hundred colleagues. Now we're facing some problems with our new SCCM system (installed packages do not work, etc.), and we'd like to "reset" applications so the SCCM agent will reinstall them. I've read something about detection methods, but unfortunately I don't really know how they work, nor do I know where those methods are saved. I want to analyse those methods so I know which file to modify or delete so that the agent will reinstall the application.
By the way, how much time does SCCM take from "assigning" a package to applying it on the client?
Assuming you only have the client and no access to the SCCM console, the detection methods can be found using WMI. They are stored in the namespace root\ccm\CIModels in the class Local_Detect_Synclet.
The format is XML in one column. It is designed so that all kinds of detection methods can basically be represented in the same style, so it's not very readable, but you should be able to get a basic understanding of the detection method used.
Keep in mind this is only true if the software was deployed in the "new" application format (introduced in SCCM 2012) and not in the "old" package/program format.
If you want more detail: I once tried to automate the process of triggering a reinstall for any given application but ultimately failed due to problems with the cache/distribution point. I posted all my findings here.
So, from an application point of view: when you deploy an app, a detection method is set up in SCCM to determine whether or not the application installed successfully. This detection method can be configured in a variety of ways. For example, it could check whether the MSI product code is installed, compare the .exe against a specific version, or even check a registry entry for existence. In order to change or modify these detection methods you need to be an SCCM admin and be able to log in to the console. From there you would select the specific application or package you want to analyze and click through the properties of the deployment.

Is PAA a good candidate for automating WCM library deployment and setup in Portal?

I have created a Web Content Management library for use in WebSphere Portal. At the moment I'm using import-wcm-data to import the library, then I need to add some additional properties to 2-3 files on the server under Resource Environment Providers, and then restart particular services so those changes are detected.
Can anyone explain the benefits of using a PAA over writing a simple bash (or similar) script to automate this process?
I don't understand whether I get any advantages from using a PAA, or whether a PAA is even capable of updating properties files and restarting services.
I have been working intensively with PAA files, and I must say it is a very stable way of deploying an app that requires multiple deployment steps and components.
It does need a startup process, but it is well worth it in a multi-server environment.
You can do all the tasks that you can do in an Ant file, as well as use the wsadmin scripting interface. I only update resource environment settings and the like in WAS, and I do not touch any properties files for that reason, since all the settings are stored in WAS.
In my experience, a PAA is not a good method if you're merely importing a content library.
I don't think I understand why you are doing the import manually and not syndicating, but even if there's a good reason not to syndicate, the PAA process was too involved and required too many precursor actions (deleting libraries, removing the PAA, deploying the PAA, and then activating the portlets) to be a viable option for something as simple as importing a WCM library.
Since activating the portlets I was importing with the PAA was an extra step, I don't believe you can restart applications either.
