How to create a react app without admin credentials in Windows?

I am a newbie to React and I work in a corporation where they do not give you admin credentials on your work PC, but you can call IT support and have them type a few commands in a terminal to install software.
So they have installed Node.js and run a simple npm i -g create-react-app under administrator privileges.
But when I run npx create-react-app my_app without admin credentials, it fails after a long installation.
So is there any way to download a basic React app to get started, or a few command lines I can have IT support run, or is it impossible to develop React without admin credentials?

One option is to use a Linux VM (virtual machine). Oracle's VirtualBox is free, and your company may already allow you to install it from a central software repository. This lets you develop in a sandbox that won't disrupt your company's main network. Ask your IT or security department before installing and using it.

First of all, removing the -g and installing as the local user helped. But we continued to face challenges with all sorts of corporate IT security blockades. In the end we provisioned our own EC2 machine on Amazon and developed using VS Code's Remote SSH, and we have not looked back since. So please consider this if you are on a restricted corporate PC.
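One common workaround, if IT will not run installs for you, is to point npm's prefix at a directory you own, so that global installs no longer need admin rights (paths below are only examples):
npm config set prefix "%USERPROFILE%\npm-global"
rem then add %USERPROFILE%\npm-global to your user PATH via the Environment Variables dialog
npx create-react-app my_app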

Related

ask deploy hangs while deploying to lambda

I have manually deployed a number of Alexa skills using a Lambda backend and understand the manual process; however, I am new to using the ask cli v2.
I believe I have followed all of the steps in the guide as far as having both the ask and aws CLIs set up. I have set my roles in AWS.
I am currently just trying to get used to the process and running
ask new
changing the invocation and then running
ask deploy
Everything seems to run correctly until
Skill code built successfully.
Code for region default built to C:\location\projectName.ask\lambda\build.zip successfully with build flow nodejs-npm.
==================== Deploy Skill Infrastructure ====================
/ Deploy Alexa skill infrastructure for region "default"
→ No IAM role exists. Creating an IAM role...
And then we just wait... forever.
The AWS CLI profile has IAMFullAccess to create roles as needed.
What am I missing?
So it ended up being an issue somewhere between the permissions on my AWS role and the configuration. I changed which role I was using and re-configured ask and aws.
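For reference, re-running the two configuration wizards is just:
ask configure
aws configure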
I am not exactly sure where things were fixed, because I immediately ran into another error that turned out to be a bit of a rabbit hole. I will describe it here because it is common enough and could be seen while troubleshooting my original issue.
The issue I ran into was that when the deploy did happen successfully, I could not test with the code that made it to my Lambda. In CloudWatch it presented as
"Runtime.ImportModuleError: Error: Cannot find module './dispatcher/error/mapper/GenericErrorMapper'"
This ended up being a bug in PowerShell's .zip compression on Windows, where the resulting archive is not unpacked correctly on Linux.
I had to run
Install-Module Microsoft.PowerShell.Archive -MinimumVersion 1.2.3.0 -Repository PSGallery -Force
https://github.com/PowerShell/PowerShell/issues/2140
This fixed my final issue.
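If you want to confirm that the updated module is the one being picked up, a quick check (not from the original post) is:
Get-Module Microsoft.PowerShell.Archive -ListAvailable
It should list version 1.2.3.0 or later.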

Accessing Google App Engine Python App code in production

(Background: I am new to Google App Engine, familiar with other cloud providers' services)
I am looking for access similar to shell access to the production node.
With a Python/Django based Google App Engine App, I would like to view the code in production.
One view I could find is the StackDriver 'Debug' view.
However, apparently the code shown in the Debug view doesn't reflect the updated production code (based on what is showing on the production site; for example, the text on the home page is different).
Does Google App Engine allow me to ssh into the VM where the application/code is running?
If not, how can I check the code that's running in production?
Thanks.
According to the SSH debugging row in the Comparing environments table, SSH access is supported for flexible environment apps but not for standard environment apps.
From Connecting to the instance:
If a VM instance is in debug mode, you can connect to its host by
using SSH in the console or with gcloud.
To connect to an instance in the console:
Visit the Cloud Platform Console instances page for your project:
Go to the instances page
Click SSH in the far right of the row containing the instance you want to access:
This puts the instance into debug mode, and opens an SSH session for the instance in a terminal window.
You can also select different options to start an SSH session from the drop-down list.
At this point you are in the instance host, which has several containers running in it. See Understanding common
containers next for more information about these.
In the terminal window, list the containers running in the instance:
sudo docker ps
The output of the sudo docker ps command lists each container by row; locate the row that contains your project ID: this is the
container running your code. Note the NAME of this container.
Optionally, list logging information for your application by invoking:
sudo docker logs [CONTAINER-NAME]
Start a shell in the container that is running your code:
container_exec [CONTAINER-NAME] /bin/bash
When finished debugging, enter exit to exit the container, then exit again to exit the SSH session.
Disable debugging for your instance to allow it to resume normal operation.
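For reference, the same flow can be driven from a terminal with the gcloud CLI (instance, service, and version names below are placeholders, and the exact command group has varied across SDK releases):
gcloud app instances ssh my-instance --service default --version 20180101t000000
sudo docker ps
sudo docker logs [CONTAINER-NAME]
container_exec [CONTAINER-NAME] /bin/bash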
If you are using the standard environment, the answer is no, you can't really inspect or see the code directly. You've mentioned looking at it via Stackdriver Debugger, which is one way to see a representation of it.
It sounds like if you have a reason to be looking at the code, then someone in your organization should grant you the appropriate level of access to your source code management system. I'd imagine if your deployment practices are mature, then they likely branch the code to map to your deployed versions, and you could inspect it in detail locally.

How to export a project from IBM Bluemix PaaS to anywhere else as a Docker?

I lead a web/mobile project and I still need to know the tools we will be using for development.
We have 6 months of access to IBM Bluemix, and its security check tools, Cloud Foundry support, and other features may turn out to be really useful.
However, we don't want to rely on a solution that would trap our project without any possibility of migration if needed.
I looked up on the internet how to export a project from Bluemix as a Docker container, with elements created by IBM. I didn't find anything relevant (I might be bad at googling, but all I can find is "how to export to Bluemix" / "how to work locally").
Does Bluemix allow exporting the entire project to another host, or does that depend on the services we used in the project?
Thank you in advance.
If you package your application in a container you can run it on any provider that supports Docker. That could be another cloud, in a local datacenter or on your own laptop.
If you are planning to use Bluemix services as part of that application then you will have two options if moving your application off Bluemix.
Keep using the services in Bluemix but connect to them remotely from wherever you're now hosting your application. This will require internet connectivity, and you'll have to hard-code the service credentials into your application (not good practice).
Migrate the services as well as the application. This will only be possible for the non-unique services IBM offers, e.g. Redis, Mongo, Elasticsearch, etc. You'll need to refactor your application to accept the new provider of these services.
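For the first option, one way to avoid hard-coding credentials is to inject them into the application as environment variables wherever it runs (variable name and URL here are invented for illustration):
docker run -e REDIS_URL="redis://user:password@host:6379" my-app-image
The application then reads REDIS_URL from its environment rather than from a constant baked into the image.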
If your service/app is dockerized and hosted as a container on Bluemix, you can pull the container image of your service/app into your own Docker-enabled cloud or local environment. The following steps can be used:
install the bluemix-container CLI package: https://www.ng.bluemix.net/docs/containers/container_cli_ov.html
do cf ic login using your Bluemix credentials
check for your images using the cf ic images command
pull the image into your environment using docker pull <image-registry-url>
run the container with the required parameters using docker run
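Put together, the flow looks something like this (registry host and image name are illustrative, and you may need to authenticate docker against the registry first):
cf ic login
cf ic images
docker pull registry.ng.bluemix.net/my_namespace/my_app:latest
docker run -d -p 8080:8080 registry.ng.bluemix.net/my_namespace/my_app:latest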
Hope it helps. Thanks.

Hosting Angular fullstack project

I started a new Yeoman angular-fullstack project (client: Angular.js, server: Node.js)
(generator: https://github.com/DaftMonk/generator-angular-fullstack)
I have 2 separate directories for client and server.
I want to launch the app, but the deployment doesn't show any index.html file.
The question is: should I make 2 different hosts for the server and the client?
If not, how can I host and use the combined project?
No, there is no need to create 2 different hosts.
The server needs to point to app.js, usually located at server/app.js, as this is the entry point (instead of index.html) of your app. How this is done depends solely on the server you intend to use.
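For example, with this generator the production build and launch typically look something like the following (exact task names and output paths depend on the generator version):
grunt build
NODE_ENV=production node dist/server/app.js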
If you consider using IIS you can take a look at: Installing and Running node.js applications within IIS on Windows
As for the other deployment options, as laggingreflex said, "Heroku is the popular choice to host node.js projects". The angular-fullstack git site has more information on deploying to Heroku or Openshift.
As a side note:
Deploying to IIS requires a bit more attention than the information in the link specified. You need to set file access, create a web.config file, and do a few other things. At least, I had to...
You'll need a host that supports MongoDB, assuming you kept the database the same after generating your application. Heroku is a great option as it allows you to set up plugins like mongolab or mongohq fairly easily. I would also recommend looking into Digital Ocean, as they allow you to set up a droplet/server that has what you need for the application to run.
If you go with Digital Ocean and are a student check out https://education.github.com/pack. You'll actually receive $100 credit towards a new Digital Ocean account which will let you test things out.
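If you go the Heroku route, the basic flow is something like this (add-on names as they existed at the time; my_app is a placeholder):
heroku create my_app
heroku addons:create mongolab:sandbox
git push heroku master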
Good luck!

Unable to access BigQuery from local App Engine development server

This is specifically a question relating to server to server authorisation between a python Google AppEngine app and Google's BigQuery, but could be relevant for other cloud services.
tldr; Is it possible to get the App Engine local development server to authenticate with the remote BigQuery service? Better yet, is there a local BigQuery?
I understand that AppAssertionCredentials does not currently work on the local development server, though that in itself is very frustrating.
The alternative method, which works for standard Python code outside of the local development server sandbox, detailed here, does not work for the local development server because even with PyCrypto enabled the sandbox does not allow some posix modules, e.g. 'pwd'.
I have got AppAssertionCredentials working on the remote server and the SignedJwtAssertionCredentials method working in native python locally, so the service accounts are set up properly.
The imports fail within oauth2client/crypt.py within the try/except blocks - after commenting them out the sandbox whitelist exceptions are easily seen.
I've fiddled around with adding 'pwd' to the whitelist, then another problem crops up, so I scurried back out of that rabbit hole.
I've tried including PyCrypto directly into the project with similar results.
I've also tried with OpenSSL with similar results.
I have looked for a local appengine specific PyCrypto to no avail, have I missed one? I should say this is on Mac OSX - perhaps I should fire up a linux box and give that a go?
A recent release of Google App Engine SDK added support for the AppAssertionCredentials method on the development server. To use this method locally, add the following arguments to dev_appserver.py:
$ dev_appserver.py --help
...
Application Identity:
  --appidentity_email_address APPIDENTITY_EMAIL_ADDRESS
                        email address associated with a service account that
                        has a downloadable key. May be None for no local
                        application identity. (default: None)
  --appidentity_private_key_path APPIDENTITY_PRIVATE_KEY_PATH
                        path to private key file associated with service
                        account (.pem format). Must be set if
                        appidentity_email_address is set. (default: None)
To use these:
In Google Developer Console, select a project then navigate to "API & auth" -> "Credentials" -> "Create new client ID".
Select "Service account" and follow the prompts to download the private key in PKCS12 (.p12) format. Take note of the email address for the service account.
Make sure you add that service account email address to the "Permissions" tab for any project that contains data it needs to access; by default it is added to the project team in which it was created.
Convert the PKCS12 format to PKCS1 format using the following command:
$ cat /path/to/xxxx-privatekey.p12 | openssl pkcs12 -nodes -nocerts -passin pass:notasecret | openssl rsa > /path/to/secret.pem
Start dev_appserver.py as:
$ dev_appserver.py --appidentity_email_address xxxx@developer.gserviceaccount.com --appidentity_private_key_path /path/to/secret.pem ...
Use the app_identity module and AppAssertionCredentials locally in the same manner as you normally would in production.
Please ensure that /path/to/secret.pem is outside of your application source directory so that it is not accidentally deployed as part of your application.
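With the development server started this way, the same AppAssertionCredentials code path works both locally and in production. A minimal sketch, assuming the oauth2client and google-api-python-client libraries of that era:
import httplib2
from apiclient.discovery import build
from oauth2client.appengine import AppAssertionCredentials

# the app (or, locally, the configured service account) asserts its own identity
credentials = AppAssertionCredentials(scope='https://www.googleapis.com/auth/bigquery')
http = credentials.authorize(httplib2.Http())
bigquery = build('bigquery', 'v2', http=http)  # ready for bigquery.jobs().query(...) etc.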
So searching deeper for PyCrypto and the local App Engine sandbox led me to this thread, and this response specifically...
https://code.google.com/p/googleappengine/issues/detail?id=1627#c22
This is fixed in 1.7.4. However, you must use easy_install -Z
(--always-unzip) to install PyCrypto. The default zipfile option in
OSX 10.8 is incompatible with the sandbox emulation in the
dev_appserver.
The solution turns out to be very straightforward...
I used:
sudo easy_install pycrypto
and it should have been:
sudo easy_install -Z pycrypto
as per the thread above. Using pip will work as well:
pip install pycrypto
or a manual download and install of pycrypto will also work. I tested all three.
If you have installed pycrypto with easy_install and without the -Z flag, then you may want to install pip just so you can easily uninstall pycrypto...
easy_install pip
For the record, I built and installed libgmp, as pip and the manual install showed this warning...
warning: GMP or MPIR library not found; Not building
Crypto.PublicKey._fastmath.
Although this gave me fastmath, it was not essential to solve the problem, as the Crypto libs gracefully fall back to slowmath.
Another point that tripped me up for a bit: I had removed pycrypto from app.yaml while testing to see if OpenSSL might give me all I need.
So don't forget to add...
- name: pycrypto
  version: latest
into app.yaml under the libraries: section.
With this missing, the native _counter library was not imported, hence Counter failed, etc.
Also for the record any talk of having to move Crypto into the app folders themselves or out of the default Mac OS X location of /Library/Python/2.7/site-packages/Crypto was only valid in earlier versions of the dev server.
Similarly, there is now no need to edit any _WHITE_LIST_C_MODULES lists (found in sandbox.py in App Engine 1.8 onwards, which also includes the regex that allows Crypto.Util._counter etc.).
The other bit of the puzzle, in case you get here before discovering the key issue, is that the key file you download from the console is PKCS12 and is downloaded as hex text, so I converted that to binary and then to PEM so I could include it in the source code.
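For reference, that conversion can be done with standard tools, something like this (file names are placeholders, and the first step is only needed if your download really is hex text):
xxd -r -p downloaded-key.txt privatekey.p12
openssl pkcs12 -in privatekey.p12 -nodes -nocerts -passin pass:notasecret | openssl rsa > privatekey.pem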
I struggled with this one for a day or two. And I was finally able to get localhost working with server to server authentication, a service account and a .p12 cert.
If it's at all helpful to anyone, here's a simple gist: https://gist.github.com/dandelauro/7836962
I agree with the first post - the localhost/production impedance is a real pain in the a**. AppAssertionCredentials is the right way to go on production and I don't want to have two different code paths between production and localhost. So the development environments need to be adjusted to be able to perform the required authentication without affecting the main code path.
E.g., perhaps a developer could log in with their own Google account using appcfg.py and then that auth would be cached for a period such that AppAssertionCredentials would work out. The developer's Google account could be granted permissions on the appropriate environments (dev and test for us, e.g.)
re: "local BigQuery" - we have some initial stuff in place that uses SQLLite to simulate BigQuery interactions for unit tests and other offline/local testing, but of course, it's not a great simulation. I agree that all the Cloud Platform products need to spend as much time thinking about the development-time experience as App Engine has.
Is it possible to get the App Engine local development server to authenticate with the remote BigQuery service?
I think it's currently impossible to use AppAssertionCredentials as the authentication method between the BigQuery service and your local App Engine server.
Alternatively, I'm using OAuth2 authentication associated with a specific user (this user must be registered in your project in the Google API Console) to access BigQuery from the local App Engine server.
For getting the user's OAuth2 authentication, I use the oauth2client.client module in the app code.
I hope this will be helpful to your problem.
Updated:
This is what I'm doing for getting the user OAuth2 authorization.
Edited:
Added missing import statement.
Thanks mattes!
import os
import webapp2
import httplib2
from oauth2client.client import OAuth2Credentials
from oauth2client.appengine import StorageByKeyName, CredentialsModel, OAuth2DecoratorFromClientSecrets
from google.appengine.api import users

oauth2_decorator = OAuth2DecoratorFromClientSecrets(
    os.path.join(os.path.dirname(__file__), 'client_secrets.json'),
    scope='https://www.googleapis.com/auth/bigquery')
oauth2_decorator._kwargs = {'approval_prompt': 'force'}

class TestPage(webapp2.RequestHandler):
    @oauth2_decorator.oauth_required
    def get(self):
        user_id = users.get_current_user().user_id()
        credentials = StorageByKeyName(CredentialsModel, user_id, 'credentials').locked_get()
        http = credentials.authorize(httplib2.Http())  # now you can use this http object to access the BigQuery service

application = webapp2.WSGIApplication([
    ('/', TestPage),
    (oauth2_decorator.callback_path, oauth2_decorator.callback_handler()),
], debug=True)
