I'm trying to create the most basic React app possible and run it locally. I have followed the getting-started docs: https://reactjs.org/docs/create-a-new-react-app.html
I run the app using both npm start from the docs and yarn start as indicated in the terminal. The result in the terminal looks like this:
My app appears to be running, but the IP address looks wrong. When I copy and paste this into my browser's address bar or try localhost:3000, my page does not load.
Other solutions I tried after referencing other StackOverflow threads:
restarting my computer
changing the port to another port
deleting and rebuilding the node_modules folder with yarn
None of those worked. What can I do to fix this?
I ran the command unset HOST in the terminal just before starting up my app and this worked.
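In practice that just means running both commands in the same terminal session, for example:
unset HOST    # clear the conflicting HOST variable for this shell session
npm start     # then start the dev server from the same session (yarn start works too)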
Also, several other threads have referenced this article: https://choy.medium.com/fixing-create-react-app-when-npm-fails-to-start-because-your-host-environment-variable-is-being-4c8a9fa0b461
To set HOST to localhost via .bash_profile, do the following in the terminal...
Open bash profile in nano
nano .bash_profile
type HOST="localhost" and save this file
back in the terminal, type
source .bash_profile
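For reference, the line added to .bash_profile would look something like this (using export is my assumption, so that the variable is visible to child processes such as npm):
# in ~/.bash_profile
export HOST="localhost"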
Related
I need to run my local React app at banker-dev.mh*b.my on port number 443.
This is to enable the app to call a third party API
I've set my local .env file with the following:
HTTPS=true
HOST=banker-dev.mh*b.my
PORT=443
When I run npm start, I encountered the message:
? Admin permissions are required to run a server on a port below 1024.
Would you like to run the app on another port instead? (Y/n)
So I tried sudo npm start but hit the error sudo: npm: command not found
I then installed npm so it works under sudo by referring to this thread: sudo: npm: command not found
Now I'm able to run sudo npm start
The app has started, as I can see from the code editor.
But it's not opening a new browser window with the app loaded, as it usually does with a normal npm start
And I noticed a weird thing: it's trying to bind to the host dev.mh*b.my instead of banker-dev.mh*b.my
My hosts file contains the following:
127.0.0.1 localhost
127.0.1.1 yogesnsamy-ThinkPad-E490s
127.0.0.1 banker-dev.mh*b.my
127.0.0.1 dev.mh*b.my
I'm doing something wrong, but I'm not sure what. I'm attaching a video reference of what's happening: https://drive.google.com/file/d/1TYcX3wG8yBFjSukvVAkYZDloPJE3JS8T/view?usp=sharing
Sorry that the part where the app starts to load takes a while (0:34 to 1:00). After it does, I try to view the app in the browser but fail.
I am creating a new React.js application using Docker, and I want to create it without installing Node.js on the host system. I have seen many tutorials, but every time the first step is to install Node.js on the host, initialize the app, and then set up Docker. The problem I ran into is that the official Node.js Docker images are designed to run an application rather than to run as a detached container, so I cannot use the container's command line to do the initial install. I was about to create an image based on some Linux distro and install Node.js on my own, but with that approach I lose the advantages of the prepared official Node.js images.
Is there any way to initialize a React app using Docker without installing Node.js on the host system?
Thank You
EDIT: Based on David Maze's answer I decided to use docker-compose: I just mount the project directory into the container and put command: ["sleep", "infinity"] in the docker-compose file. That way I didn't need to install Node.js on the host, and I can manage everything from the container's command line in the project folder as usual. I didn't set up any shared global cache, and I'm not sure it's even needed if I keep multiple Node versions containerized, since npm packages from different versions could conflict. Maybe one day I'll mount it as a volume into the containers from some global place on the host, but disk space is not such a big problem ...
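A minimal sketch of such a docker-compose.yml (the service name and image tag here are just examples, not my exact file):
# docker-compose.yml -- minimal sketch
version: "3"
services:
  node:
    image: node:10
    working_dir: /app
    volumes:
      - ./:/app                      # mount the project directory into the container
    command: ["sleep", "infinity"]   # keep the container alive so it can be used interactively
With that running (docker-compose up -d), a shell inside the container is available via docker-compose exec node bash, and create-react-app / npm commands can be run there as usual.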
You should be able to run something like:
sudo docker run \
--rm \
-it \
-u$(id -u):$(id -g) \
-w/ \
-v"$PWD":/app \
node:10 \
npx create-react-app app
You will have to repeat this litany of Docker options every time you want to do anything to use a Docker-packaged version of Node.
Ultimately this sequence of things starts in the container root directory (-w/) and uses create-react-app to create an app directory; the -v option has that backed by the current directory on the host, and the -u option is needed to make filesystem permissions line up. The -it options make it possible to answer interactive questions, and --rm causes the container to clean up after itself.
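Running the dev server afterwards would follow the same pattern, something like this (a sketch, not tested here; the port mapping is added so the CRA dev server is reachable from the host, and it assumes you are in the same host directory the app was created in):
sudo docker run \
  --rm \
  -it \
  -u$(id -u):$(id -g) \
  -w/app \
  -v"$PWD":/app \
  -p3000:3000 \
  node:10 \
  npm start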
I suspect you will find it much easier to just install Node.
I am new to App Engine and have installed google-cloud-sdk from the AUR (Arch User Repository), along with the google-appengine-go extension, at /opt/google-cloud-sdk.
Thanks to this I am able to run a dev server using
dev_appserver.py app.yaml
But when running goapp serve I got
goapp: command not found
After adding /opt/google-cloud-sdk/platform/google_appengine to my $PATH variable in .zshrc and running goapp serve, I now get the error
zsh: permission denied: goapp
If I run sudo goapp serve, I get
sudo: goapp: command not found
Because of this I am unable to use the updated SDK to run tests with goapp test.
Thank you in advance for your help.
I had the same problem and I think I figured out how it usually works.
You download the google cloud sdk (https://cloud.google.com/sdk/downloads)
After downloading and unzipping it to the folder where you want to use it, you have to execute ./google-cloud-sdk/install.sh.
App Engine is not part of the download.
It can be selected with that install.sh script,
which will then download components like App Engine.
Afterwards you have a folder called
platform/google_appengine
as you mentioned yourself.
You might have to change execution permissions like
chmod 755 platform/google_appengine/go*
Add folder platform/google_appengine to the PATH if not done already.
The command "which" will not show non-executable binaries.
If you did not change permissions it will not show the path, even being within the PATH variable.
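Putting those steps together for the /opt install location from the question, it would look roughly like this:
sudo chmod 755 /opt/google-cloud-sdk/platform/google_appengine/go*   # make the Go tools executable (sudo because the SDK lives under /opt)
export PATH="/opt/google-cloud-sdk/platform/google_appengine:$PATH"  # add the folder to PATH, e.g. in ~/.zshrc
which goapp    # should now print the path
goapp serve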
Trying to run gcloud init to initialize the Google App Engine SDK by typing ./google-cloud-sdk/bin/gcloud init, but it showed: no such file or directory or command not found. Is something wrong with my PATH? My path is:
/Users/AnneLutz/Documents/google-cloud-sdk
If you type ./google-cloud-sdk/bin/gcloud init and you installed the Cloud SDK in /Users/AnneLutz/Documents/google-cloud-sdk, then your current directory needs to be /Users/AnneLutz/Documents for what you type to work.
That said, you should add /Users/AnneLutz/Documents/google-cloud-sdk/bin to your path. To do this, assuming you are using bash, you can run
source /Users/AnneLutz/Documents/google-cloud-sdk/path.bash.inc
To make this happen every time you start your shell, add it to your shell profile. For example, you can add the above source command at the end of your ~/.bash_profile file.
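For example, the end of ~/.bash_profile might then look like this (path taken from the question):
# at the end of ~/.bash_profile
source /Users/AnneLutz/Documents/google-cloud-sdk/path.bash.inc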
It looks like you used the option to download the SDK zip file and are now trying to configure your environment manually. If you aren't comfortable with setting environment variables, you might want to instead try installing using the "interactive" installer, which will automate the steps for making the commands always available on your system.
The directions are here, but for Mac OS users are basically:
Enter the following at a command prompt:
curl https://sdk.cloud.google.com | bash
Restart your shell:
exec -l $SHELL
Run gcloud init to initialize the gcloud environment:
gcloud init
For many, this procedure is easier than getting everything configured manually.
I am on a Windows machine trying to create a Dart server. I had success building an image with my files using ADD and running the container. However, it is painful to build an image every time I want to test my code, so I thought it would be better to mount my files with the -v option, since they are then accessed live from my host machine at runtime.
The problem is that Dart's packages folder at /bin/packages is really a symlink (if it's called a symlink on Windows), and Docker or boot2docker or whatever doesn't seem able to follow it, so I get
Protocol error, errno = 71
I've used Dart with GAE, and the gcloud command somehow created the container, got your files in there, and reacted to changes in your host files. I don't know if it used the -v option (as I am trying) or some auto-builder that created a new image with your files using ADD and then ran it; in any case, that seemed to work.
More Info
I've been using this Dockerfile that I modified from google/dart
FROM google/dart
RUN ln -s /usr/lib/dart /usr/lib/dart/bin/dart-sdk
WORKDIR /app
# ADD pubspec.* /app/
# RUN pub get
# ADD . /app
# RUN pub get --offline
WORKDIR /app/bin
ENTRYPOINT dart
CMD server.dart
As you can see, most of it is commented out because instead of ADD I'd like to use -v. However, note that in this script pub get is run twice, which effectively creates the packages folder inside the container.
Using -v, it can't reach those packages because they are behind host symlinks. However, pub get actually takes a while, as it installs the standard packages plus your added dependencies. Is this the only way?
As far as I know, you need to add the Windows folder as a shared folder in VirtualBox to be able to mount it with -v when using boot2docker.
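For illustration, adding such a shared folder from the Windows host could look roughly like this (the VM name boot2docker-vm is the boot2docker default; the share name and host path are placeholders):
boot2docker stop
VBoxManage sharedfolder add "boot2docker-vm" --name "dart-app" --hostpath "C:\path\to\dart-app" --automount
boot2docker start
Once the share is visible inside the boot2docker VM, it can be passed to docker run with -v as usual.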
gcloud doesn't use -v, it uses these Dockerfiles https://github.com/dart-lang/dart_docker.
See also https://www.dartlang.org/server/google-cloud-platform/app-engine/setup.html, https://www.dartlang.org/server/google-cloud-platform/app-engine/run.html
gcloud monitors the source directory for changes and rebuilds the image.