Contentful with react expected parameter accessToken error - reactjs

I created a Contentful blog in a separate project and would like to add it to the blog.js page of my main project. I set up my environment variables in a .env file, but my access token was exposed on GitHub, and I have never had this problem with my .gitignore file before. I'm not sure if I have to set environment variables in Windows 10 instead.
I also have "dotenv": "^8.2.0" and "config": "^3.3.2" as dependencies for my mini social network (users and profiles). I'm not sure if I have to add require('dotenv').config(); to client.js.
.gitignore file:
.env
node_modules/
config/default.json
.env.development
client.js file:
import * as contentful from "contentful";

export const client = contentful.createClient({
  space: process.env.REACT_APP_SPACE_ID,
  accessToken: process.env.REACT_APP_SPACE_TOKEN,
});
.env
REACT_APP_SPACE_ID=my access key
REACT_APP_SPACE_TOKEN=my access token
Console error:
createClient
56 | */
57 | function createClient(params) {
58 | if (!params.accessToken) {
> 59 | throw new TypeError('Expected parameter accessToken');
60 | }
61 |
62 | if (!params.space) {
I also have a config file for my MongoDB, and I'm not sure if it conflicts with my Contentful access token.
config/db.js
const mongoose = require('mongoose');
const config = require('config');

const db = config.get('mongoURI');

const connectDB = async () => {
  try {
    await mongoose.connect(db, {
      useNewUrlParser: true,
      useUnifiedTopology: true,
      useCreateIndex: true,
      useFindAndModify: false,
    });
    console.log('MongoDB Connected...');
  } catch (err) {
    console.error(err.message);
    // exit process with failure
    process.exit(1);
  }
};

module.exports = connectDB;
I just added my .env file to the project root so it is picked up via the path.
server.js
const express = require('express');
const connectDB = require('./config/db');

const app = express();

// Connect database
connectDB();

// Init middleware
app.use(express.json({ extended: false }));

app.get('/', (req, res) => res.send('API Running'));

// Define routes
app.use('/api/users', require('./routes/api/users'));
app.use('/api/auth', require('./routes/api/auth'));
app.use('/api/profile', require('./routes/api/profile'));
app.use('/api/posts', require('./routes/api/posts'));

const PORT = process.env.PORT || 5000;

app.listen(PORT, () => console.log(`Server started on port ${PORT}`));

To solve this problem you have to put the environment variables in the right file: for a Create React App Contentful blog, place them in .env.development.
.env.development
REACT_APP_SPACE_ID=my access key
REACT_APP_SPACE_TOKEN=my access token
You risk exposing your token if you place .env.development outside of your project. To test, place console.log(process.env); in the area you want to check, and be sure to restart the dev server (npm start in the terminal) every time before reading the console.log output.
client\src\client.js
import * as contentful from "contentful";

console.log(process.env);

export const client = contentful.createClient({
  space: process.env.REACT_APP_SPACE_ID,
  accessToken: process.env.REACT_APP_SPACE_TOKEN,
});
If you place the environment variables in the wrong area, they come back as undefined:
Object
FAST_REFRESH: true
NODE_ENV: "development"
PUBLIC_URL: ""
WDS_SOCKET_HOST: undefined
WDS_SOCKET_PATH: undefined
WDS_SOCKET_PORT: undefined
__proto__: Object
If your .env.development file is in the appropriate place, your access key and access token will be displayed in the console:
Object
FAST_REFRESH: true
NODE_ENV: "development"
PUBLIC_URL: ""
REACT_APP_SPACE_ID: "my access key"
REACT_APP_SPACE_TOKEN: "my access token"
WDS_SOCKET_HOST: undefined
WDS_SOCKET_PATH: undefined
WDS_SOCKET_PORT: undefined
__proto__: Object
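A small guard in client.js makes this failure mode easier to diagnose. This is just a sketch (the helper name is mine, not part of the question): it reads the two CRA variables and throws a descriptive error instead of letting contentful.createClient fail with "Expected parameter accessToken" deep in the stack.

```javascript
// Hypothetical helper: validate the CRA env vars up front so a missing
// .env.development fails with an actionable message.
function getContentfulConfig(env) {
  const space = env.REACT_APP_SPACE_ID;
  const accessToken = env.REACT_APP_SPACE_TOKEN;
  if (!space || !accessToken) {
    throw new Error(
      "Missing REACT_APP_SPACE_ID or REACT_APP_SPACE_TOKEN: " +
        "check .env.development and restart the dev server"
    );
  }
  return { space, accessToken };
}

// In client.js you would then call:
//   export const client = contentful.createClient(getContentfulConfig(process.env));
```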

Related

Vite serving shader file with wrong (none) MIME type

I'm developing a BabylonJS application. The BabylonJS PostProcess class appends .fragment.fx to a given file name and requests that from the server. When my local Vite (version 4.0.4) dev server serves this file, the content-type header is empty. This causes Firefox to interpret it as type xml and fail. Chrome fails through a different, but I think related, mechanism.
How do you configure Vite to serve the *.fragment.fx static files as text/plain? I assume I need to disable the default middleware and write some custom code instead, like this: https://vitejs.dev/config/server-options.html#server-middlewaremode but I wanted to first check there wasn't something else going on / a simpler way to configure / fix this.
The Vite dev server is started using vite --host --port 3000 --force, and the config in vite.config.js is:
import { defineConfig } from 'vite';

export default defineConfig(({ command, mode }) => {
  // if (command === 'serve') {
  //   return {
  //     // dev specific config
  //   }
  // } else {
  //   // command === 'build'
  //   return {
  //     // build specific config
  //   }
  // }
  return {
    resolve: {
      alias: {
        "babylonjs": mode === "development" ? "babylonjs/babylon.max" : "babylonjs",
      },
    },
    base: "",
    // assetsInclude: ['**/*.fx'],
  };
});
* edit 1 *
I have seen there's a ?raw parameter that can be added to the URL, but I don't control how BabylonJS forms the URL, so I can't see how to make this work in this situation.
I followed these instructions and set up a dev server using Express. I added this block of code above the call to app.use(vite.middlewares):
const path = require('path');
const fs = require('fs');

app.use("**/*.*.fx", async (req, res, next) => {
  const url = req.originalUrl;
  const file_path = path.resolve(__dirname, "." + url);
  const file = fs.readFileSync(file_path, "utf-8");
  res.status(200).set({ "Content-Type": "text/plain" }).end(file);
});
I now start the dev server using the following script line in package.json: "dev": "node server".
I could not find a way to solve this by configuring the default vite dev server.
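For completeness, a Vite plugin with a configureServer hook may be a lighter alternative to a separate Express server. This is an untested sketch (the plugin name is mine): configureServer registers a connect-style middleware that sets the Content-Type header for .fx requests before Vite's own static handling responds.

```javascript
// Sketch of a Vite plugin: tag *.fx requests as text/plain.
function fxContentTypePlugin() {
  return {
    name: 'fx-content-type',
    configureServer(server) {
      server.middlewares.use((req, res, next) => {
        if (req.url && req.url.split('?')[0].endsWith('.fx')) {
          res.setHeader('Content-Type', 'text/plain');
        }
        next();
      });
    },
  };
}

// In vite.config.js: export default defineConfig({ plugins: [fxContentTypePlugin()] });
```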

Why is my amplify environment variable appearing for just a moment then disappearing again [duplicate]

So I'm using the Contentful API to get some content from my account and display it in my Next.js app (I'm using Next 9.4.4). Very basic here. Now, to protect my credentials, I'd like to use environment variables (I've never used them before and I'm new to all of this, so I'm a little bit lost).
I'm using the following to create the Contentful Client in my index.js file :
const client = require('contentful').createClient({
  space: 'MYSPACEID',
  accessToken: 'MYACCESSTOKEN',
});
MYSPACEID and MYACCESSTOKEN are hardcoded, so I'd like to put them in a .env file to protect them and not make them public when deploying on Vercel.
I've created a .env file and filled it like this:
CONTENTFUL_SPACE_ID=MYSPACEID
CONTENTFUL_ACCESS_TOKEN=MYACCESSTOKEN
Of course, MYACCESSTOKEN and MYSPACEID contain the right keys.
Then in my index.js file, I do the following:
const client = require('contentful').createClient({
  space: `${process.env.CONTENTFUL_SPACE_ID}`,
  accessToken: `${process.env.CONTENTFUL_ACCESS_TOKEN}`,
});
But it doesn't work when I use yarn dev; I get the following console error:
{
  sys: { type: 'Error', id: 'NotFound' },
  message: 'The resource could not be found.',
  requestId: 'c7340a45-a1ef-4171-93de-c606672b65c3'
}
Here is my homepage and how I retrieve the content from Contentful and pass it as props to my components:
const client = require('contentful').createClient({
  space: 'MYSPACEID',
  accessToken: 'MYACCESSTOKEN',
});

function Home(props) {
  return (
    <div>
      <Head>
        <title>My Page</title>
        <link rel="icon" href="/favicon.ico" />
      </Head>
      <main id="page-home">
        <Modal />
        <NavTwo />
        <Hero item={props.myEntries[0]} />
        <Footer />
      </main>
    </div>
  );
}

Home.getInitialProps = async () => {
  const myEntries = await client.getEntries({
    content_type: 'mycontenttype',
  });
  return {
    myEntries: myEntries.items,
  };
};

export default Home;
Where do you think my error comes from?
Researching my issue, I've also tried to understand how API routes work in Next.js, as I've read it could be better to make API requests in pages/api/, but I don't understand how to get the content and then pass the response into my page components like I did here.
Any help would be much appreciated!
EDIT :
So I've fixed this by adding my env variables to my next.config.js like so:
const withSass = require('@zeit/next-sass');

module.exports = withSass({
  webpack(config, options) {
    const rules = [
      {
        test: /\.scss$/,
        use: [{ loader: 'sass-loader' }],
      },
    ];
    return {
      ...config,
      module: { ...config.module, rules: [...config.module.rules, ...rules] },
    };
  },
  env: {
    CONTENTFUL_SPACE_ID: process.env.CONTENTFUL_SPACE_ID,
    CONTENTFUL_ACCESS_TOKEN: process.env.CONTENTFUL_ACCESS_TOKEN,
  },
});
If you are using a recent version of Next.js (9.4 and above), then follow these steps:
Create a .env.local file in the root of the project.
Add the NEXT_PUBLIC_ prefix to every environment variable that needs to be available in the browser,
e.g. NEXT_PUBLIC_SOMETHING=12345
Then use them in any JS file through process.env with the prefix,
e.g. process.env.NEXT_PUBLIC_SOMETHING
You can't make this kind of request from the client-side without exposing your API credentials. You have to have a backend.
You can use Next.js /pages/api to make a request to Contentful and then pass it to your front-end.
Just create a .env file, add variables and reference it in your API route as following:
process.env.CONTENTFUL_SPACE_ID
Since Next.js 9.4 you don't need next.config.js for that.
By adding the variables to next.config.js you've exposed the secrets to client-side. Anyone can see these secrets.
New Environment Variables Support
Create a Next.js App with Contentful and Deploy It with Vercel
Blog example using Next.js and Contentful
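Sketching the API-route approach from this answer: the handler below keeps the Contentful call on the server, so the token from .env never reaches the browser. The file name, content type, and the injectable-client factory are my own choices, not from the question.

```javascript
// pages/api/entries.js (sketch). The factory takes the client as a
// parameter so the handler can be exercised without real credentials;
// in the real file you would build it with:
//   require('contentful').createClient({
//     space: process.env.CONTENTFUL_SPACE_ID,
//     accessToken: process.env.CONTENTFUL_ACCESS_TOKEN,
//   })
function makeEntriesHandler(client) {
  return async function handler(req, res) {
    try {
      const entries = await client.getEntries({ content_type: 'mycontenttype' });
      res.status(200).json(entries.items);
    } catch (e) {
      res.status(500).json({ error: e.message });
    }
  };
}
```

The front end then fetches /api/entries and never sees the credentials.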
I recommend updating to Next.js 9.4 or above and using this example:
.env.local
NEXT_PUBLIC_SECRET_KEY=i7z7GeS38r10orTRr1i
and in any part of your code you could use:
.js
const SECRET_KEY = process.env.NEXT_PUBLIC_SECRET_KEY
Note that it must be the exact name of the key, "NEXT_PUBLIC_SECRET_KEY", and not just "SECRET_KEY".
And when you run it, make sure the log says:
$ next dev
Loaded env from E:\awesome-project\client\.env.local
ready - started server on http://localhost:3000
...
To read more about environment variables see this link
Don't put sensitive things in next.config.js. However, in my case I have some env variables that aren't sensitive at all, and I need them on the server side as well as the client side; then you can do:
// .env file:
VARIABLE_X=XYZ

// next.config.js
module.exports = {
  env: {
    VARIABLE_X: process.env.VARIABLE_X,
  },
};
You have to make a simple change in next.config.js, like this:
const nextConfig = {
  reactStrictMode: true,
  env: {
    MYACCESSTOKEN: process.env.MYACCESSTOKEN,
    MYSPACEID: process.env.MYSPACEID,
  },
};

module.exports = nextConfig;
Refer to the docs.
You need to add a next.config.js file in your project. Define env variables in that file and those will be available inside your app.
npm i --save dotenv-webpack@2.0.0 // version 3.0.0 has a bug
Create a .env.development.local file in the root and add your environment variables there:
AUTH0_COOKIE_SECRET=eirhg32urrroeroro9344u9832789327432894###
NODE_ENV=development
AUTH0_NAMESPACE=https:ilmrerino.auth0.com
Create next.config.js in the root of your app:
const path = require("path");
const Dotenv = require("dotenv-webpack");

module.exports = {
  webpack: (config) => {
    config.resolve.alias["@"] = path.resolve(__dirname);
    config.plugins.push(new Dotenv({ silent: true }));
    return config;
  },
};
However, those env variables are only going to be accessible on the server. If you want to use any of them on the client side, you have to add one more configuration:
module.exports = {
  webpack: (config) => {
    config.resolve.alias["@"] = path.resolve(__dirname);
    config.plugins.push(new Dotenv({ silent: true }));
    return config;
  },
  env: {
    AUTH0_NAMESPACE: process.env.AUTH0_NAMESPACE,
  },
};
For me, the solution was simply restarting the local server :)
It gave me a headache, and then I fixed it by accident.
It had not occurred to me that env variables are loaded when the server starts.

dotenv not working when deploy React code to S3

I have a .env file with one variable, API_BASE_URL, and I use webpack, dotenv, and webpack.DefinePlugin to build the React app. The code works on my localhost and on my local server when serving dist with http-server. When I deploy to S3 using AWS CodePipeline, the deployed app cannot load the variable, even though I set API_BASE_URL in the AWS CodeBuild settings. Does anyone know what is wrong?
webpack:
const dotenv = require("dotenv")

module.exports = () => {
  const env = dotenv.config().parsed || {}
  const envKeys = Object.keys(env).reduce((prev, next) => {
    prev[`${next}`] = JSON.stringify(env[next])
    return prev
  }, {})
  ...
  plugins: [
    new webpack.DefinePlugin(envKeys),
    new HtmlWebpackPlugin({
      template: "./public/index.html",
    }),
  ]
  ...
.env (it's in the root directory):
API_BASE_URL = xxxxxx
App file
const App = () => {
  console.log("API_BASE_URL", API_BASE_URL)
  return (
  ...
I set the environment variable on AWS, but the variable cannot be found on S3.
If you are using CRA (Create React App), you need to add the REACT_APP_ prefix before your env variable name.
https://create-react-app.dev/docs/adding-custom-environment-variables/
But in any case, to access an environment variable you have to read it from process.env -> process.env.YOUR_VARIABLE_NAME
https://nodejs.org/dist/latest-v8.x/docs/api/process.html#process_process_env
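One pattern that can help here (a sketch rather than a drop-in fix; the helper name is mine): prefer values already present in the real environment, such as those CodeBuild injects, and fall back to the parsed .env file, stringifying each value the way webpack.DefinePlugin expects.

```javascript
// Merge .env-file values with process.env (CI wins), producing the
// { IDENTIFIER: '"value"' } map that webpack.DefinePlugin expects.
function defineValues(fileEnv, processEnv, keys) {
  const out = {};
  for (const key of keys) {
    const value =
      processEnv[key] !== undefined ? processEnv[key] : fileEnv[key];
    if (value !== undefined) {
      out[key] = JSON.stringify(value);
    }
  }
  return out;
}

// In webpack.config.js (sketch):
//   const fileEnv = dotenv.config().parsed || {};
//   new webpack.DefinePlugin(defineValues(fileEnv, process.env, ['API_BASE_URL']))
```

This way the same config works locally (value read from .env) and in CodeBuild (value read from the build environment, where no .env file exists).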

Failed to register service worker error in Next.js file

I'm using workbox-webpack-plugin to generate a service worker for me, and copy-webpack-plugin to move the generated service worker files to the same directory as main.js. My Next.js config file goes like this:
const { join } = require('path');
const { readFile, writeFile } = require('fs').promises;
const CopyWebpackPlugin = require('copy-webpack-plugin');
const { GenerateSW } = require('workbox-webpack-plugin');
// defaultGenerateOpts is defined elsewhere in the config

module.exports = {
  webpack: (config, { isServer, buildId, dev, ...rest }) => {
    if (dev) {
      const devSwSrc = join(__dirname, "register-sw.js");
      config.plugins.push(new CopyWebpackPlugin([devSwSrc]));
      config.plugins.push(new GenerateSW({ ...defaultGenerateOpts }));

      // Register SW
      const originalEntry = config.entry;
      config.entry = async () => {
        const entries = await originalEntry();
        const swCompiledPath = join(__dirname, 'register-sw-compiled.js');
        if (entries['main.js'] && !entries['main.js'].includes(swCompiledPath)) {
          let content = await readFile(require.resolve('./register-sw.js'), 'utf8');
          await writeFile(swCompiledPath, content, 'utf8');
          entries['main.js'].unshift(swCompiledPath);
        }
        return entries;
      };
    }
    return config;
  },
};
I'm trying to copying my service worker to the same dir as main.js which is chunk/static so that when it's fetched it should not return any error. But instead, I'm getting this error.
TypeError: Failed to register a ServiceWorker for scope ('http://localhost:3000/') with the script ('http://localhost:3000/service-worker.js'): A bad HTTP response code (404) was received when fetching the script.
I know this error occurs because the file is not being served from the same directory as main.js, and I need to make some changes to copy-webpack-plugin to achieve that. I'm also trying to avoid a custom server.js file to serve routes like /service-worker.
Any help would be really appreciated. Thanks in advance

Could not start backup: Request failed with status code 400 Google Cloud

I'm trying to create a backup system for Firestore.
I followed every step of this guide, and when I tried to deploy the code it returned Request failed with status code 400.
PROJECT-ID@appspot.gserviceaccount.com permissions: Cloud Datastore Import Export Admin, Editor, Storage Admin
This is the code of app.js
'use strict';

const axios = require('axios');
const dateformat = require('dateformat');
const { google } = require('googleapis');
const express = require('express');
const util = require('util');
const request = require('request');
const admin = require('firebase-admin');
const { Storage } = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

admin.initializeApp({
  credential: admin.credential.applicationDefault()
});

const db = admin.firestore();

const googleMapsClient = require('@google/maps').createClient({
  key: 'AIza*****',
  Promise: Promise
});

const app = express();

// Trigger a backup
app.get('/cloud-firestore-export', async (req, res) => {
  const auth = await google.auth.getClient({
    scopes: ['https://www.googleapis.com/auth/datastore'],
  });

  const accessTokenResponse = await auth.getAccessToken();
  const accessToken = accessTokenResponse.token;

  const headers = {
    'Content-Type': 'application/json',
    Authorization: 'Bearer ' + accessToken,
  };

  const { outputUriPrefix } = req.query;
  if (!outputUriPrefix) {
    return res.status(500).send('outputUriPrefix required');
  } else if (outputUriPrefix.indexOf('gs://') !== 0) {
    return res.status(500).send(`Malformed outputUriPrefix: ${outputUriPrefix}`);
  }

  // Construct a backup path folder based on the timestamp
  const timestamp = dateformat(Date.now(), 'yyyy-mm-dd-HH-MM-ss');
  let path = outputUriPrefix;
  if (path.endsWith('/')) {
    path += timestamp;
  } else {
    path += '/' + timestamp;
  }

  const body = {
    outputUriPrefix: path,
  };

  // If specified, mark specific collections for backup
  const { collections } = req.query;
  if (collections) {
    body.collectionIds = collections.split(',');
  }

  const projectId = process.env.GOOGLE_CLOUD_PROJECT;
  const url = 'https://firestore.googleapis.com/v1beta1/projects/' + projectId + '/databases/(default):exportDocuments';

  try {
    const response = await axios.post(url, body, { headers });
    res.status(200).send(response.data).end();
  } catch (e) {
    if (e.response) {
      console.warn(e.response.data);
    }
    res.status(500).send('Could not start backup: ' + e.message).end();
  }
});

// ...

// Start the server
const PORT = process.env.PORT || 8080;
app.listen(PORT, () => {
  console.log(`App listening on port ${PORT}`);
  console.log('Press Ctrl+C to quit.');
});
I have another function that is listening on '/'. Is it possible that this is causing the problem?
package.json:
{
  "name": "solution-scheduled-backups",
  "version": "1.0.0",
  "description": "Scheduled Cloud Firestore backups via AppEngine cron",
  "main": "app.js",
  "engines": {
    "node": "10.x.x"
  },
  "scripts": {
    "deploy": "gcloud app deploy --quiet app.yaml cron.yaml",
    "start": "node app.js"
  },
  "author": "Google, Inc.",
  "license": "Apache-2.0",
  "dependencies": {
    "@google-cloud/storage": "^3.2.1",
    "@google/maps": "^0.5.5",
    "axios": "^0.19.0",
    "dateformat": "^3.0.3",
    "express": "^4.17.1",
    "firebase-admin": "^8.4.0",
    "googleapis": "^42.0.0",
    "request": "^2.88.0"
  },
  "devDependencies": {
    "prettier": "^1.18.2"
  }
}
I also looked inside the cron log, and there is nothing related to the error; it only returns a 500 error.
The main problem is the bucket location.
When you create a backup bucket, you must use Multi-Region or you will receive a denial from the server. I think this is a bug in Google Cloud.
Solution
Delete the bucket and create a new one with a Multi-Regional location.
The error with a single location:
'Bucket backup-bucket is in location EUR4. This project can only operate on buckets spanning location europe-north1 or europe-west1 or eu or europe-west2 or europe-west3 or europe-west4 or europe-west5 or europe-west6.'
So I've been replicating your issue and I've found the solution.
I kept getting the same error as you, and I finally managed to figure it out. If you look into the GAE logs, you can see the error saying 'Project "YOUR PROJECT" is not a Cloud Firestore enabled project.'
This worked for me:
Make a new project.
Go to the API Library in GCP and enable the Firestore Installation API.
Go to Firebase and link your GCP project to a Firebase project.
Go to Database and create a Firestore database. (If you ever had Datastore enabled in your existing project, you will not be able to create a Firestore instance at this step.) This creates the needed buckets in the .appspot.com format that you have to give permissions to.
Go to GAE and create your cron.yaml, app.js and everything else you need. I used this repo for my tests. In the readme.md of the repo you have the exact commands you have to run in order to give permissions to your service account. Remember to change the bucket as told in cron.yaml.
Follow the repo for the permissions and the deployment to App Engine; the steps there are pretty well done.
Test the cron and it will be successful.
Let me know if it worked for you!
