Could not start backup: Request failed with status code 400 (Google Cloud, google-app-engine)

I'm trying to create a backup system for Firestore.
I followed every step of this guide, and when I tried to deploy the code it returned "Request failed with status code 400".
PROJECT-ID@appspot.gserviceaccount.com has the following permissions: Cloud Datastore Import Export Admin, Editor, Storage Admin.
This is the code of app.js
'use strict';

const axios = require('axios');
const dateformat = require('dateformat');
const { google } = require('googleapis');
const express = require('express');
const util = require('util');
const request = require('request');
const admin = require('firebase-admin');
const { Storage } = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

admin.initializeApp({
  credential: admin.credential.applicationDefault()
});
const db = admin.firestore();

const googleMapsClient = require('@google/maps').createClient({
  key: 'AIza*****',
  Promise: Promise
});

const app = express();

// Trigger a backup
app.get('/cloud-firestore-export', async (req, res) => {
  const auth = await google.auth.getClient({
    scopes: ['https://www.googleapis.com/auth/datastore'],
  });
  const accessTokenResponse = await auth.getAccessToken();
  const accessToken = accessTokenResponse.token;
  const headers = {
    'Content-Type': 'application/json',
    Authorization: 'Bearer ' + accessToken,
  };

  const { outputUriPrefix } = req.query;
  if (!outputUriPrefix) {
    // Return so we don't fall through and try to respond twice
    return res.status(500).send('outputUriPrefix required');
  } else if (outputUriPrefix.indexOf('gs://') !== 0) {
    return res.status(500).send(`Malformed outputUriPrefix: ${outputUriPrefix}`);
  }

  // Construct a backup path folder based on the timestamp
  const timestamp = dateformat(Date.now(), 'yyyy-mm-dd-HH-MM-ss');
  let path = outputUriPrefix;
  if (path.endsWith('/')) {
    path += timestamp;
  } else {
    path += '/' + timestamp;
  }

  const body = {
    outputUriPrefix: path,
  };

  // If specified, mark specific collections for backup
  const { collections } = req.query;
  if (collections) {
    body.collectionIds = collections.split(',');
  }

  const projectId = process.env.GOOGLE_CLOUD_PROJECT;
  const url = 'https://firestore.googleapis.com/v1beta1/projects/' + projectId + '/databases/(default):exportDocuments';

  try {
    const response = await axios.post(url, body, { headers });
    res.status(200).send(response.data).end();
  } catch (e) {
    if (e.response) {
      console.warn(e.response.data);
    }
    res.status(500).send('Could not start backup: ' + e.message).end();
  }
});

// Start the server
const PORT = process.env.PORT || 8080;
app.listen(PORT, () => {
  console.log(`App listening on port ${PORT}`);
  console.log('Press Ctrl+C to quit.');
});
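The outputUriPrefix validation and timestamp-path logic above can be factored into a small pure helper, which makes it easy to unit test. This is only a sketch; the function name is my own, not from the guide:

```javascript
// Hypothetical helper mirroring the outputUriPrefix handling in app.js.
function buildExportPath(outputUriPrefix, timestamp) {
  if (!outputUriPrefix) {
    throw new Error('outputUriPrefix required');
  }
  if (outputUriPrefix.indexOf('gs://') !== 0) {
    throw new Error(`Malformed outputUriPrefix: ${outputUriPrefix}`);
  }
  // Append the timestamp folder, avoiding a double slash.
  return outputUriPrefix.endsWith('/')
    ? outputUriPrefix + timestamp
    : outputUriPrefix + '/' + timestamp;
}
```

The route handler would then call this once and pass the result as `body.outputUriPrefix`.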
I have another function that is listening on '/'. Is it possible that this could cause the problem?
package.json:
{
"name": "solution-scheduled-backups",
"version": "1.0.0",
"description": "Scheduled Cloud Firestore backups via AppEngine cron",
"main": "app.js",
"engines": {
"node": "10.x.x"
},
"scripts": {
"deploy": "gcloud app deploy --quiet app.yaml cron.yaml",
"start": "node app.js"
},
"author": "Google, Inc.",
"license": "Apache-2.0",
"dependencies": {
"@google-cloud/storage": "^3.2.1",
"@google/maps": "^0.5.5",
"axios": "^0.19.0",
"dateformat": "^3.0.3",
"express": "^4.17.1",
"firebase-admin": "^8.4.0",
"googleapis": "^42.0.0",
"request": "^2.88.0"
},
"devDependencies": {
"prettier": "^1.18.2"
}
}
I also looked inside the Cron log and there is nothing related to the error; it only returns a 500 error.

The main problem is the bucket location.
When you create a backup bucket you must use a Multi-Region location, or the server will deny the request.
I think this is a bug in Google Cloud.
Solution
Delete the bucket and create a new one with a Multi-Region location.
The error with a single-region location:
'Bucket backup-bucket is in location EUR4. This project can only operate on buckets
spanning location europe-north1 or europe-west1 or eu or europe-west2 or
europe-west3 or europe-west4 or europe-west5 or europe-west6.',

So I've been replicating your issue and I've found the solution.
I kept getting the same error as you and I finally managed to figure it out. If you look into the GAE logs you can see an error saying 'Project "YOUR PROJECT" is not a Cloud Firestore enabled project.'
This worked for me:
1. Make a new project.
2. Go to the API Library in GCP and enable the Cloud Firestore API.
3. Go to Firebase and link your GCP project to a Firebase project.
4. Go to Database and create a Firestore database.
5. Follow the repo with the permissions and the deployment to App Engine.
6. Test the cron and it will be successful.
If you ever had Datastore enabled in your actual project, you will not be able to create a Firestore instance at step 4.
Creating the database will create the needed buckets with the .appspot.com format, which you have to give permissions to.
Go to GAE and create your cron.yaml, app.js and everything else you need. I used this repo for tests.
In the readme.md of the repo you have the exact commands you have to run in order to give permissions to your service account.
Remember to change the bucket as described in cron.yaml.
Follow the steps mentioned in the repo as they are pretty well done.
Let me know if it worked for you!

Related

Vite serving shader file with wrong (none) MIME type

I'm developing a BabylonJS application. The BabylonJS PostProcess class appends .fragment.fx to a given file name and requests that from the server. When my local Vite (version 4.0.4) dev server serves this file, the content-type header is empty. This causes Firefox to interpret it as type xml and fail. Chrome fails through a different, but I think related, mechanism.
How do you configure Vite to serve the *.fragment.fx static files as text/plain? I assume I need to disable the default middleware and write some custom code instead, like this: https://vitejs.dev/config/server-options.html#server-middlewaremode but I wanted to first check there wasn't something else going on / a simpler way to configure / fix this.
The vite dev server is started using vite --host --port 3000 --force and the config in vite.config.js is:
import { defineConfig } from 'vite';
export default defineConfig(({ command, mode }) => {
// if (command === 'serve') {
// return {
// // dev specific config
// }
// } else {
// // command === 'build'
// return {
// // build specific config
// }
// }
return {
resolve: {
alias: {
"babylonjs": mode === "development" ? "babylonjs/babylon.max" : "babylonjs",
}
},
base: "",
// assetsInclude: ['**/*.fx'],
};
});
* edit 1 *
I have seen there's a parameter ?raw that can be added to the URL however I don't control how BabylonJS forms the URL so I can't see how to make this work in this situation.
I followed these instructions and set up a dev server using express. I added this block of code above the call to app.use(vite.middlewares):
app.use("**/*.*.fx", async (req, res, next) => {
  const url = req.originalUrl
  const file_path = path.resolve(__dirname, "." + url)
  const file = fs.readFileSync(file_path, "utf-8")
  res.status(200).set({ "Content-Type": "text/plain" }).end(file)
})
I now start the dev server using the following script line in package.json: "dev": "node server".
I could not find a way to solve this by configuring the default vite dev server.
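For reference, the configureServer hook from the Vite plugin API can also register middleware from inside vite.config.js; middleware added there runs before Vite's internal middlewares. Below is only a sketch (the plugin name and the .fragment.fx pattern are my own assumptions), written as plain JS so it can run standalone:

```javascript
// Sketch of a Vite plugin that forces text/plain for *.fragment.fx
// requests from the dev server. Drop the object into the `plugins`
// array of vite.config.js.
const isShaderRequest = (url) => /\.fragment\.fx$/.test(url.split('?')[0]);

const shaderMimePlugin = {
  name: 'force-shader-mime', // hypothetical plugin name
  configureServer(server) {
    // Middleware registered here runs before Vite's internal middlewares,
    // so the Content-Type set here is the one the browser sees.
    server.middlewares.use((req, res, next) => {
      if (isShaderRequest(req.url)) {
        res.setHeader('Content-Type', 'text/plain');
      }
      next();
    });
  },
};
```

This avoids a separate express server, but I have not verified it against BabylonJS's exact request URLs.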

Error creating StandardAppVersion: googleapi: Error 404: App does not exist

Hi, I am trying to create a simple Node app on Google App Engine standard using this Terraform code. This code used to work before, but today I tried to restart the whole project and re-deploy everything, and I am getting an error.
compute_engine.tf
resource "google_app_engine_standard_app_version" "nodetest" {
  version_id     = "v1"
  service        = "mainApp"
  runtime        = "nodejs10"
  instance_class = "B1"

  basic_scaling {
    max_instances = 1
  }

  entrypoint {
    shell = "node test.js"
  }

  deployment {
    files {
      name       = google_storage_bucket_object.object.name
      source_url = "https://storage.googleapis.com/${google_storage_bucket.bucket.name}/${google_storage_bucket_object.object.name}"
    }
  }

  delete_service_on_destroy = true

  depends_on = [
    google_project_service.appengine_api
  ]
}

resource "google_storage_bucket" "bucket" {
  project  = var.project_id
  name     = var.bucket_name
  location = var.region
}

resource "google_storage_bucket_object" "object" {
  name   = "test.js"
  bucket = google_storage_bucket.bucket.name
  source = "test.js"
}
My test.js is located in the same directory as the .tf files.
test.js
const http = require('http');

const hostname = '127.0.0.1';
const port = 3000;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World');
});

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});
I can see that the files have already been deployed correctly, but I get the error from the title ("Error 404: App does not exist").
I tried changing the URL from
"https://storage.googleapis.com/${google_storage_bucket.bucket.name}/${google_storage_bucket_object.object.name}"
to
"https://storage.cloud.com/${google_storage_bucket.bucket.name}/${google_storage_bucket_object.object.name}"
and also tried changing shell = "node test.js" to shell = "node ./test.js".
I also took a look at GitHub Issue 4974, but it doesn't solve my problem. I did notice that when I run terraform apply the error pops up quite fast, so it seems it's stuck on a very early validation error.
Does the user that runs compute_engine.tf have "appengine.applications.create" and deploy permissions?
Also check that you set the project and region in your google provider.
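On that last point: "App does not exist" usually means no App Engine application has ever been created in the project, which is also what the appengine.applications.create permission governs. A hedged sketch of what that configuration might look like (the values are placeholders, not taken from the question):

```hcl
# Sketch only: provider with explicit project/region, plus the App Engine
# application itself, which must exist before any app version can be created.
provider "google" {
  project = var.project_id
  region  = var.region
}

resource "google_app_engine_application" "app" {
  project     = var.project_id
  location_id = "us-central" # App Engine location IDs differ from bucket regions
}
```

The `google_app_engine_standard_app_version` resource could then depend on `google_app_engine_application.app`.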

Contentful with react expected parameter accessToken error

I created a Contentful blog in a separate project below and would like to add it to the blog.js page in my main project. I set up my environment variables in an .env file, my access token was exposed on GitHub, and I never had this problem with my .gitignore file before. I'm not sure if I have to set environment variables in Windows 10 instead.
I also have "dotenv": "^8.2.0" and "config": "^3.3.2" as dependencies for my mini social network's users and profiles. I'm not sure if I have to add require('dotenv').config(); to client.js.
.gitignore file:
.env
node_modules/
config/default.json
.env.development
client.js file:
import * as contentful from "contentful";

export const client = contentful.createClient({
  space: process.env.REACT_APP_SPACE_ID,
  accessToken: process.env.REACT_APP_SPACE_TOKEN,
});
.env
REACT_APP_SPACE_ID=my access key
REACT_APP_SPACE_TOKEN=my access token
Console error:
createClient
56 | */
57 | function createClient(params) {
58 | if (!params.accessToken) {
> 59 | throw new TypeError('Expected parameter accessToken');
60 | }
61 |
62 | if (!params.space) {
I also have a config file for my MongoDB, and I'm not sure if this interferes with my Contentful accessToken.
config/db.js
const mongoose = require('mongoose');
const config = require('config');
const db = config.get('mongoURI');

const connectDB = async () => {
  try {
    await mongoose.connect(db, {
      useNewUrlParser: true,
      useUnifiedTopology: true,
      useCreateIndex: true,
      useFindAndModify: false
    });
    console.log('MongoDB Connected...');
  } catch (err) {
    console.error(err.message);
    // exit process with failure
    process.exit(1);
  }
};

module.exports = connectDB;
Just added my .env file path to use via the path
server.js
const express = require('express');
const connectDB = require('./config/db');

const app = express();

// Connect Database
connectDB();

// Init Middleware
app.use(express.json({ extended: false }));

app.get('/', (req, res) => res.send('API Running'));

// Define Routes
app.use('/api/users', require('./routes/api/users'));
app.use('/api/auth', require('./routes/api/auth'));
app.use('/api/profile', require('./routes/api/profile'));
app.use('/api/posts', require('./routes/api/posts'));

const PORT = process.env.PORT || 5000;

app.listen(PORT, () => console.log(`Server started on port ${PORT}`));
To solve this problem you have to put the environment variables in the right place; for a Contentful blog you have to put them in .env.development.
.env.development
REACT_APP_SPACE_ID=my access key
REACT_APP_SPACE_TOKEN=my access token
You risk exposing your token if you place your .env.development outside of your project. Place console.log(process.env); in the area you want to test, and be sure to restart the server with npm start in the terminal every time you test the console.log.
client\src\client.js
import * as contentful from "contentful";

console.log(process.env);

export const client = contentful.createClient({
  space: process.env.REACT_APP_SPACE_ID,
  accessToken: process.env.REACT_APP_SPACE_TOKEN,
});
If you place the environment variables in the wrong place, you will see undefined values:
Object
FAST_REFRESH: true
NODE_ENV: "development"
PUBLIC_URL: ""
WDS_SOCKET_HOST: undefined
WDS_SOCKET_PATH: undefined
WDS_SOCKET_PORT: undefined
__proto__: Object
If your .env.development file is in the appropriate place, your access key and access token will be displayed in the console:
Object
FAST_REFRESH: true
NODE_ENV: "development"
PUBLIC_URL: ""
REACT_APP_SPACE_ID: "my access key"
REACT_APP_SPACE_TOKEN: "my access token"
WDS_SOCKET_HOST: undefined
WDS_SOCKET_PATH: undefined
WDS_SOCKET_PORT: undefined
__proto__: Object
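The undefined case above can be made to fail fast with a clearer message than Contentful's "Expected parameter accessToken". A small sketch (the helper name is my own, not part of Contentful or CRA):

```javascript
// Hypothetical guard: read a required env var or throw a descriptive
// error instead of letting contentful.createClient() fail later.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(
      `Missing environment variable ${name}; check .env.development and restart the dev server`
    );
  }
  return value;
}
```

client.js could then call contentful.createClient({ space: requireEnv('REACT_APP_SPACE_ID'), accessToken: requireEnv('REACT_APP_SPACE_TOKEN') }).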

How to run lighthouse for the homepage after login from puppeteer

I added two npm packages, @lhci/cli and puppeteer. After that I added two config files.
lighthouserc.js:
const puppeteer = require('puppeteer'); // needed for executablePath() below

module.exports = {
  ci: {
    upload: {
      target: 'temporary-public-storage'
    },
    collect: {
      puppeteerScript: 'puppeteer-script.js',
      chromePath: puppeteer.executablePath(),
      url: ["https://myWebsite.com/abc"],
      headful: true,
      numberOfRuns: 1,
      disableStorageReset: true,
      settings: {
        disableStorageReset: true
      },
      puppeteerLaunchOptions: {
        slowMo: 20,
        headless: false,
        disableStorageReset: true
      }
    },
    assert: {
      assertions: {
        'categories:performance': ['warn', { minScore: 1 }],
        'categories:accessibility': ['error', { minScore: 0.5 }]
      }
    }
  }
};
puppeteer-script.js
module.exports = async (browser, context) => {
  // A page must be created from the browser before it can be used
  const page = await browser.newPage();
  await page.setDefaultNavigationTimeout(90000);
  await page.goto(context.url);
  await page.type('input[type=text]', 'abc');
  await page.type('input[type=email]', 'abc@abc.com');
  await page.type('input[type=password]', 'abc#100');
  await page.click('[type="button"]');
  await page.waitForNavigation({ waitUntil: "networkidle2" });
  await page.close();
};
and in package.json I added script command as :
"test:lighthouse": "lhci autorun --collect.settings.chromeFlags='--no-sandbox'"
Now login is working fine, but I want to run Lighthouse for the url that I specified in lighthouserc.js (https://myWebsite.com/abc).
After login it tries to access the url, but the login screen comes up again and Lighthouse measures performance for the login page.
Is it possible to run Lighthouse on the url I specified in the config? Please assist me.
https://myWebsite.com/abc is my ReactJS application.
I do not have complete information on the workflow of your site, but as mentioned in the configuration guide, the puppeteer script is run for each url listed in the lhci config file.
After the puppeteer script has run, Lighthouse will open the URL. If your site is opening the login page again, it's most likely an issue with your app or configuration: either your app is not setting the cookie correctly or the login process is failing somehow. You will need to check that.
Also, since the puppeteer script runs for every url in the config, it's a good idea not to re-login if you are already logged in; check out this issue on GitHub.

How can I use firebase's firestore/admin sdk with next.js

I am building an application with firebase and next.js
I am fairly new to this set up, completely new to SSR, and the firebase docs are confusing me.
Currently, I am using Firebase Functions to run Next.js, and that works like a charm. But now I want to use Firestore. According to the docs (if I understand them correctly), I see two ways to use it in my project. The first one is the 'web' solution, which would not be beneficial for me, because I believe it is not SSR, while the whole point of my app is being just that.
The other one is the 'node.js' solution, which runs on Firebase Functions; this makes a lot more sense to me. The part I can't figure out is using it with Next.js.
In my current setup I am building my Next.js application to the functions folder. Inside the functions folder I can reference the database ref object I create with the 'node.js' solution, but how can I reference it before building my Next application, i.e. when I'm not in the functions folder?
Setup:
- src
- utils
- pages
- index.js
- signin.js
- // etc.
- functions
- next // this is the output folder of my 'src' build
- index.js
- // etc.
inside functions/index.js I could do:
const admin = require('firebase-admin');
const functions = require('firebase-functions');
admin.initializeApp(functions.config().firebase);
let db = admin.firestore();
and use db to read and add to firestore, serverside (right?)
But all my code is in src/ before I build it, and I don't think I can use it there. Should I structure my project differently? What should I do to be able to use db? Or is there, of course, another way to have a server-side connection to my Firestore?
Sorry for the rough answer; it's my first time. I was looking for cookie-cutter code and saw that your question wasn't answered.
I don't know the proper jargon, but you have to run your app with a custom server. At least that's what I do to use firebase-admin. Note that my answer is imperfect because I actually interface with my client through socket.io; I only use Firebase for client code and authentication.
In package.json you add a script so you can start it from the command line:
{
  "scripts": {
    "server": "node server.js"
  }
}
that makes it so you can run
$ npm run server
from the command line
~/package.json
{
  "name": "app",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "server": "node server.js",
    "dev": "next dev",
    "build": "next build",
    "start": "next start"
  },
  "dependencies": {
    "next": "9.3.1",
    "react": "16.13.1",
    "react-dom": "16.13.1"
  }
}
In the server.js file you load up express for server-side rendering; you could probably start your own http server instead. However, as seen below, I actually use socket.io, so the file includes those connection details.
The key part is right here, though: nextHandler() passes control of the request to Next. So you can probably start a plain http server and use nextHandler():
app.get('*', (req, res) => {
  return nextHandler(req, res)
})
~/server.js
const fs = require('fs');
const express = require('express');
const app = express();
const server = require('http').Server(app)
const firebaseAdmin = require('./services/dwf.firebase.admin.js');
const secureServer = require('https').createServer({
  key: fs.readFileSync('./key.pem'),
  cert: fs.readFileSync('./cert.pem')
}, app)
const io = require('socket.io')(secureServer, { secure: true })
const User = require('../../users/user.manager.js');
let user = User(io, firebaseAdmin.auth(), firebaseAdmin.database());
const next = require('next')
const dev = process.env.NODE_ENV !== 'production'
const nextApp = next({ dev })
const nextHandler = nextApp.getRequestHandler()
const PORT = process.env.PORT || 3000; // PORT was undefined in the original listing

// socket.io server
io.on('connection', socket => {
  console.log(`Main Socket Opened by:\n ${socket.id}`);
  socket.on('getDb', function (userId, refs, fn) {
    console.log("Getting Data")
    firebaseAdmin.database().ref(refs).once('value', (snapshot) => {
      console.log(snapshot.val());
      fn({ body: snapshot.val() })
      socket.emit('getDb', snapshot.val());
    });
  })
  socket.on('disconnect', () => {
    console.log(`Main Socket Closed by:\n ${socket.id}`);
  });
})

nextApp
  .prepare()
  .then(() => {
    app.get('/data/messages', (req, res) => {
      res.json(messages) // messages is defined elsewhere in the author's project
    })
    app.get('*', (req, res) => {
      return nextHandler(req, res)
    })
    secureServer.listen(PORT, () => console.log(`#> Main Server ready for clients on https://0.0.0.0:${PORT}`));
  })
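One way to address the original question of sharing db between the custom server and code under src/ is to wrap the once-only initialization in a module that both sides import. The sketch below injects the admin SDK so the pattern runs without any Firebase project; the factory name is mine, not from the SDK:

```javascript
// Hypothetical factory: returns a getDb() that initializes firebase-admin
// at most once, no matter how many modules call it.
function makeGetDb(admin) {
  return function getDb() {
    // admin.apps lists already-initialized apps; skip init if one exists.
    if (!admin.apps.length) {
      admin.initializeApp({
        credential: admin.credential.applicationDefault(),
      });
    }
    return admin.firestore();
  };
}

// In a real project: const getDb = makeGetDb(require('firebase-admin'));
```

Both the custom server and any server-side Next.js code could then call getDb() without worrying about double initialization.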
