Why "gcloud init" creates "default/" directory? - google-app-engine

When you run gcloud init, it creates a directory named "default" into which it clones the sources.
Maybe a silly question, but why is it named "default"?
Is there a way to change the name or clone sources in the current directory (without creating a new one)?

The 'gcloud init' command currently clones only a single repo, which is named 'default'. In the future you may be able to host multiple repos, each with its own name.
Also, we may add the ability to import other assets into your project as well, which would not necessarily live in your repo.
So, the primary Google-hosted repository is one asset in your local developer workspace. Since we intend to bring in more assets in the future, it gets its own directory, 'default' (the name of that repo), so that it does not conflict with future assets.
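If you just want the sources in a directory of your own choosing, one workaround is to clone the hosted repo yourself rather than relying on gcloud init. A minimal sketch, assuming a gcloud version that ships the source repos command group (PROJECT_ID and my-sources are placeholders):

gcloud source repos clone default my-sources --project=PROJECT_ID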

Related

How to configure two pages myproj/a and myproj/b in a Firebase project?

I have two webapps - "manager" and "viewer" - coded in separate VSCode projects. These are deployed to a common Firebase project where they share a common database. The "manager" webapp is used to maintain the database and the "viewer" provides public read-only access.
To create the "page" structure I have added a robocopy to React's build script for each VSCode project to produce a structured "mybuild" folder with the page subfolder within it. Firebase.json's "public": setting is then used to deploy from "mybuild".
Individually the two pages work fine, but each deployment overrides the functionality of the other. So, following the deployment of "manager", webapp/viewer returns a 404 (not found) error and vice versa.
To cut a long story short, the only way I've found around this is to manually copy the results of a deployment for one project into the "mybuild" folder of the other and then deploy from this. But this is no way to proceed.
I think I've taken a wrong turn somewhere here. Can anyone suggest the correct "firebase solution" for this requirement? In the longer term I'd like the viewer webapp to be available at the root of some user-friendly "appurl" while the manager is accessed via "appurl/manager", but other arrangements would be acceptable. The main issue right now is finding a simple way of maintaining the arrangement.
I needed to fix this fast, so here's my own answer to my question.
It seems that when you deploy a project, Firebase replaces the current public folder for your URL with the content of whatever folder is specified in your firebase.json. So I decided I had to accept that whenever either of my projects was deployed, it must deploy from a "composite" folder containing the build files for the other project as well as its own.
That being the case, it seemed I was on the right lines with my "manual copy" approach and that what I now needed to do was simply to automate the arrangement.
Each of my projects now contains a script file with the following pattern:
:: Build this project (x), assemble a composite folder containing both
:: projects' builds, then deploy the pair to Firebase Hosting.
npm run build
ROBOCOPY build ./composite/x /E
ROBOCOPY ../y/build ./composite/y /E
firebase deploy --only hosting
In each script, x is the owner project and y is the other. Additionally, firebase.json in each project is edited to deploy from composite.
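For reference, a minimal sketch of that firebase.json edit (the ignore list shown here is just the Firebase CLI default):

{
  "hosting": {
    "public": "composite",
    "ignore": ["firebase.json", "**/.*", "**/node_modules/**"]
  }
}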
When I run the script for a project it first builds a composite folder combining the latest build state for both that project and its partner, and then deploys the pair.
The final twist is to tell the React build process that the result will be deployed from a relative folder, so the build also needs to use relative references. I do this by adding
"homepage": "https://mywebapp/x",
to the package.json for each project. Here, x is the name of the project.
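In context, the relevant package.json fragment for the "manager" project might look like this (the name and URL are placeholders):

{
  "name": "manager",
  "homepage": "https://mywebapp/manager"
}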
I'm not able to convince myself that there's not something wrong with my whole approach to this issue, but at least I have a fix for the immediate problem.

How to add a service worker to an existing, old React project?

I'm working on an old React project that I need to add functionality to. When I deploy the React build to the server, it fails, claiming it cannot find several CSS and JS files, although I published all the files within the build folder. I tried different things:
First, I kept the old service-worker.js in the production folder that IIS uses, but replaced everything else.
Then I tried deleting the service-worker.js as well, since I thought it was optional, and my npm run build didn't create a service-worker.js file.
Then I tried copying the service-worker.js file that existed on production and manually changing it to point to my CSS and JS files in the /static/ folder of my build folder.
All of these solutions have yielded the same result. So I have a few questions:
Is the service worker necessary? If not, could this error relate to something entirely different other than the service worker?
If it is necessary, why could my npm run build command not create the service worker with the rest of the files in the build folder?
If I do need it, how can I manually add it to a project that already exists?
Since the production folder already had a service worker and my build does not produce one, I might assume my React version is newer, but I find that odd, since the computer I use belonged to a former employee at my company and I haven't manually changed anything about this project.

Do I have to add .env file to versioning in react project?

I have seen some Medium blogs and Stack Overflow answers saying that I shouldn't add the .env file to versioning.
I quite understand why that is advised in general.
But how about when dealing with a React project?
React is for the frontend, and all the environment variables are public even when bundled for production; anyone can read them using a web browser.
I have 2 env files: .env.production for production and .env.staging for staging. The environment variable values are baked into the bundle at build time. These files are the same across all team members.
There is actually no secret at all in these files.
The question is:
Should I add these 2 files to versioning, or do I have to distribute them manually to other team members? And why?
No.
The preferred way is to have a file called .env.sample. It contains all the keys, with random or no values. Any developer who clones the repo will then know which env vars are needed to run/build the project.
Within the team, have a secrets-sharing mechanism; there are lots of tools available for this.
The first time after cloning the repo, a developer runs cp .env.sample .env and copies the values in from the secret manager.
Make sure to add .env to .gitignore so no one accidentally pushes a .env containing secrets to the repo.
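As a sketch, such a sample file might look like this (the variable names are hypothetical; Create React App only exposes variables prefixed with REACT_APP_):

# .env.sample - committed to the repo; copy to .env and fill in real values
REACT_APP_API_URL=
REACT_APP_ANALYTICS_ID=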
No, you don't. The .env file should be put on the server separately. Otherwise, whoever can access your source code repository can read or even modify the .env file.

How do I force Google App Deploy to upload a tgz file of my node app (Meteor) instead of 57K individual files?

I have a Meteor application that I'm deploying to a custom flex environment, deploying the same built folder to multiple Google projects. Usually a .tgz file is created in my local temp folder, uploaded to the project's default Google bucket, and extracted from there to create an App Engine version.
That usual behavior isn't happening for me in one of the projects: instead, the gcloud app deploy command is uploading 57K individual files from node_modules. This takes the process from minutes to multiple hours (I ran it overnight and it still wasn't done).
I've tried reinitializing the gcloud configuration, updating the gcloud components, and changing the default bucket, but nothing works. It's doing some sort of check of what's already been uploaded, because it skips uploaded files if I kill it and start again.
One option to achieve that is to use a .gcloudignore file to indicate which files should and should not be uploaded or deployed.
As per the official documentation gcloud topic gcloudignore:
Several commands in gcloud involve uploading the contents of a directory to Google Cloud Platform to host or build. In many cases, you will not want to upload certain files (i.e., "ignore" them).
If there is a file called .gcloudignore in the top-level directory to upload, the files that it specifies will be ignored.
This means you can use this file to decide which files to ignore. You can, for example, ignore one or more whole directories, so those thousands of files of yours don't get uploaded.
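As a sketch, a .gcloudignore for a Node app that skips the offending directory might look like this (assuming your deployment installs dependencies itself rather than uploading them):

.gcloudignore
.git
.gitignore
node_modules/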
The post below has further examples of how to set up this file for your own usage:
How to ignore files when running gcloud app deploy?
Let me know if the information helped you!

Git didn't add x64\SQLite.Interop.dll

I installed SQLite into my WPF project via NuGet, then added the entire project to a remote repo. Then I cloned the project on another machine and had a broken build.
x64\SQLite.Interop.dll was missing.
I'm puzzled why Git didn't include one file from my project. I checked the repo on BitBucket and confirmed it is not there. git status reports "nothing to commit, working directory clean".
It added the x86 version but not the x64 version; I can't imagine why.
(project)\x64\SQLite.Interop.dll Git ignored this file!
(project)\x86\SQLite.Interop.dll
You might want to check the .gitignore file at the root of the repo; if it contains, for example, x64, it would ignore this file (see the check below the list for a way to confirm).
There would be two main possibilities then:
edit this file to fit your need
or force this file to be added, i.e.: git add -f x64/SQLite.Interop.dll
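To confirm whether, and by which rule, the file is being ignored, git can report the matching pattern directly. A quick check from the repo root:

git check-ignore -v x64/SQLite.Interop.dll

This prints the .gitignore file, line number, and pattern that match the path, or nothing if the file is not ignored.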
However, committing binary files is often frowned upon, particularly if you want to keep up to date with the latest package and would therefore be committing new versions of the dlls on a regular basis.
You might rather want to consider the NuGet package restore feature. Basically, the idea is that you commit a config file, and the client automatically downloads the corresponding packages.
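As a usage sketch (the solution file name is a placeholder), a fresh clone would then re-fetch the binaries instead of reading them from the repo:

nuget restore MySolution.sln

Building in Visual Studio with package restore enabled has the same effect.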
