My Gatsby site sources its content from YAML files rather than Markdown. There are a bunch of categories and subcategories, such that valid paths are like:
mysite.com/movies/drama/war/1980s
mysite.com/video-games/puzzles
etc...
I would like to create an index.js for every subdirectory: /movies , /movies/drama, /movies/drama/war, /video-games, etc.
My strategy is to get a list of all the pages, from which I can extract all the unique paths and createPage an index.js in each. However, when I tried querying
{
  allSitePage {
    nodes {
      path
    }
  }
}
during gatsby-node's createPages, none of the paths had been created yet -- only the root and 404 paths existed (probably because these are the only files in my /pages directory?).
I'm guessing there's a better way to do this anyway. How can I create an index.js in every subdirectory? Thank you.
EDIT: I realized createPage is an asynchronous function, and my query was running before the pages had been created. I adjusted all my code with async/await to ensure the pages were created prior to running the allSitePage query -- but it still comes up empty. Now I'm really confused. (Another note: in the GraphiQL browser, the query of course shows all the pages. They just don't seem to be available until after gatsby-node.js completes.)
2nd EDIT: I was asked to post my createPage() code:
const postsYaml = resultYaml.data.allYaml.edges
// 1-1 mapping between YAML edge and Gatsby page
// use Promise.all() to ensure all pages are created before continuing
await Promise.all(postsYaml.map(async (post, index) => {
  const previous = index === postsYaml.length - 1 ? null : postsYaml[index + 1].node
  const next = index === 0 ? null : postsYaml[index - 1].node
  createPage({
    path: post.node.fields.slug,
    component: blogPageTemplate, // resolved absolute path to the blog-page template component
    context: {
      slug: post.node.fields.slug,
      previous,
      next
    },
  })
}))
So this code runs, creates the pages, then I execute the GraphQL I originally posted -- and the pages aren't there.
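For reference, the direction I'm considering: since the slugs are already in hand from the allYaml query, the unique directory paths could be derived from the slugs themselves, sidestepping allSitePage entirely. A rough sketch of that idea, assuming slugs like /movies/drama/war/1980s and a hypothetical index template at src/templates/category-index.js:

const path = require('path')

// Collect every ancestor directory of every slug, e.g.
// /movies/drama/war/1980s -> /movies, /movies/drama, /movies/drama/war
const indexPaths = new Set()
postsYaml.forEach(({ node }) => {
  const segments = node.fields.slug.split('/').filter(Boolean)
  for (let i = 1; i < segments.length; i++) {
    indexPaths.add('/' + segments.slice(0, i).join('/'))
  }
})

// Create an index page at each unique directory path
indexPaths.forEach((indexPath) => {
  createPage({
    path: indexPath,
    component: path.resolve('./src/templates/category-index.js'), // hypothetical template
    context: { indexPath },
  })
})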
I have a problem: when I click to go to the /analytics page on my site, adblockers block the analytics.json file that Next.js requests, because they think it's an analytics tracker (it's not; it's a page listing analytics products).
Is there a way to rename the route files Next.js uses when navigating to server-side rendered pages on the client-side?
I want to either obfuscate the names so they're not machine readable, or have a way to rename them all.
Any help appreciated.
With thanks to @gaston-flores I've managed to get something working.
In my instance /analytics is a dynamic page for a category, so I moved my pages/[category]/index.tsx file to pages/[category]/category.tsx and added the following rewrite:
// next.config.js
module.exports = {
  async rewrites() {
    return [
      {
        source: "/:category",
        destination: "/:category/category",
      },
    ];
  },
};
This now gets the category.json file rather than analytics.json, which passes the adblockers' checks and renders as expected.
Note that because I had a dynamically named file in the pages/[category] directory (pages/[category]/[product].tsx), I also had to move that to pages/[category]/product/[product].tsx; without this tweak, the /analytics page was redirected to /analytics/category for some reason.
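For clarity, the resulting pages directory should look something like this (reconstructed from the steps above):

pages/
  [category]/
    category.tsx       // moved from pages/[category]/index.tsx
    product/
      [product].tsx    // moved from pages/[category]/[product].tsx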
Here's the Codesandbox example.
I have a list array of songs:
const tracks = [
  {
    name: "Sunny",
    src: "https://www.bensound.com/bensound-music/bensound-sunny.mp3"
  },
  ...
From that list, I would like to know how to automatically start downloading the next song (not the whole list, only the next one) in the background once the current one has completely downloaded.
Why?
Because it is the user's intention to listen to the next song in a selected playlist (we are talking about small files of less than 10 MB).
My question is very similar to this one, except that I'm in React JS and I'm using react-h5-audio-player.
You can insert <link rel="prefetch"> elements into the <head> of the page. This tells the browser to go ahead and download whatever it finds in the href attribute of that element, so that it can be served from the cache if something else on the page (in this case, the audio player) requests it in the future (see docs).
Here's some code that should work:
const alreadyPreloaded = new Set();

export function preloadTrack({ src } = {}) {
  // Make sure not to insert duplicate <link rel="prefetch"> elements
  // in case this function gets called multiple times with the same track.
  if (typeof src === "string" && !alreadyPreloaded.has(src)) {
    const head = document.getElementsByTagName("head")[0];
    const link = document.createElement("link");
    link.rel = "prefetch";
    link.href = src;
    head.appendChild(link);
    alreadyPreloaded.add(src);
  }
}
Then, in your react component, you could call preloadTrack as a side-effect of when the track changes:
useEffect(() => {
  preloadTrack(tracks[trackIndex]); // also preload the current track
  // tracks[trackIndex + 1] might be undefined, but preloadTrack safely handles that
  preloadTrack(tracks[trackIndex + 1]);
}, [trackIndex]);
See this fork of your codesandbox.
You can see that it's working by checking the "network" tab in the dev tools. This is what I see on a fresh page reload after clearing the cache, before clicking on anything.
Another way to be sure that it's working is to hit the "next" button once - you should see track #2 (bensound-tenderness.mp3) being served from the cache with a 206 status code.
So I've already implemented a wildcard path on my gatsby-node.js file:
exports.onCreatePage = async ({ page, actions }) => {
  const { createPage } = actions
  if (page.path.match(/^\/my-path/)) {
    page.matchPath = "/my-path/*"
    createPage(page)
  }
}
and this works fine when I am running the site locally (in development): if I provide /my-path/anything123 or /my-path/asdfasdfasdf, both will lead to a rendering of the component I've created in my Gatsby project under pages/my-path.tsx.
Now we come to my problem. When I deploy my site to Netlify, I don't get the same behavior as in development. Can behavior like this be handled with a redirect or rewrite in Netlify? I don't want to lose the content in what comes after /my-path/, as in reality I'm using it to parse an id, i.e. if the URL in the browser is /my-path/123, I want to be able to see that 123 in window.location.href, and parse it with some logic in my-path.tsx.
I hope this is clear. Appreciate anyone who can help or guide me in the right direction!
Of course, after writing this all up the answer became clear... I tried it, and it works! For the example I was providing, the following redirect works in your netlify.toml file:
[[redirects]]
from = "/my-path/*"
to = "/my-path"
status = 200
force = true
So it essentially has to match 1:1 with the rules you define in gatsby-node.js.
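As a side note, if I remember correctly, gatsby-plugin-netlify can generate this kind of rewrite automatically from the matchPath set in gatsby-node.js, so the manual netlify.toml rule may be avoidable. A sketch, assuming the plugin is installed:

// gatsby-config.js
module.exports = {
  plugins: [
    // on build, writes a _redirects file that includes a rewrite
    // for every page that defines a matchPath
    `gatsby-plugin-netlify`,
  ],
}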
I use Excel data as a data source. I want to create slugs dynamically, and I'm using the Gatsby docs as an example: https://www.gatsbyjs.com/docs/tutorial/part-seven/
But this does not work, because I don't use Markdown files. I changed 'MarkdownRemark' to 'ExcelData'.
exports.onCreateNode = ({ node, getNode }) => {
  if (node.internal.type === `ExcelData`) {
    const fileNode = getNode(node.parent)
    console.log(`\n`, fileNode.relativePath)
  }
}
When you look at the Gatsby docs, the code prints the relative paths of two markdown files to the terminal:
pages/sweet-pandas-eating-sweets.md
pages/pandas-and-bananas.md
My code prints out the same path multiple times, because there is only one Excel file.
I tried to change the code to use data that is in the Excel file:
const fileNode = getNode(_9)
But this does not work, and I get errors like:
"gatsby-node.js" threw an error while running the onCreateNode lifecycle:
_9 is not defined
const fileNode = getNode(node._9)
Cannot read property 'relativePath' of undefined
Is it possible to change (node.parent) or not?
I assume you're using https://www.gatsbyjs.com/plugins/gatsby-transformer-excel/ already?
Gatsby has a new filesystem routing API, the File System Route API, that makes creating routes like this much easier. The link goes to the section on collection routes, which automatically create pages from every node in a collection without needing to create slugs manually in gatsby-node.js.
E.g. your type is ExcelData so you'd just need to create a collection route component at src/pages/{ExcelData.title}.js (assuming your spreadsheet has a field named title) to create pages for all your spreadsheet rows.
This works with any type and any field.
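A minimal sketch of what that collection route component could look like (the title field is the same assumption as above; the File System Route API passes the node's id into the page query automatically):

// src/pages/{ExcelData.title}.js
import * as React from "react"
import { graphql } from "gatsby"

export default function ExcelDataPage({ data }) {
  return <h1>{data.excelData.title}</h1>
}

export const query = graphql`
  query ($id: String!) {
    excelData(id: { eq: $id }) {
      title
    }
  }
`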
I'm confused as to the appropriate way to access a bunch of images stored in Firebase storage with a React Redux Firebase web app. In short, I'd love a walkthrough of: once a photo has been uploaded to Firebase storage, how you'd go about linking it to a Firebase db (like what exactly you'd store from the returned snapshot), how you'd access it (if it's not just <img src={data.downloadURL} />), and how you'd handle (if necessary) updating that link when the photo gets overwritten. If you can answer that, feel free to skip the rest of this...
Two options I came across are either:
store the full URL in my firebase DB, or
store something less, like the path within the bucket, then call getDownloadURL() for every photo (sketched below)... which seems like a lot of unnecessary traffic, no?
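For reference, option 2 would look something like this with the Firebase web SDK (the bucket path here is hypothetical):

firebase.storage()
  .ref('projects/<someProjectId>/photo1.jpg') // hypothetical path stored in the DB
  .getDownloadURL()
  .then((url) => {
    // e.g. render <img src={url} />
  })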
My db structure at the moment is like so:
{
  <someProjectId>: {
    imgs: {
      <someAutoGenId>: {
        "name": "photo1.jpg",
        "url": "https://<bucket, path, etc>token=<token>"
      },
      ...
    },
    <otherProjectDetails>: "",
    ...
  },
  ...
}
Going forward with that structure and the first idea listed, I ran into trouble when a photo was overwritten, so I would need to go through the list of images and remove the db record that matches the name (or find it and update its URL). I could do this (at most, there would be two refs with the old token that I would need to replace), but then I saw people doing it via option 2, though not necessarily with my exact situation.
The last thing I saw a few times was similar questions with generic responses pointing to Cloud Functions, which I will look into right after posting, but I wasn't sure if that was overcomplicating things in my case, so I figured it couldn't hurt too much to ask. I initially saw/read about Cloud Functions and the fact that Firebase's db is "live," but wasn't sure if that played well in a React/Redux environment. Regardless, I'd appreciate any insight, and thank you.
In researching Cloud Functions, I realized that they weren't an entirely separate option, but rather a way to accomplish the first option I listed above (and probably the second as well). I really tried to make this clear, but I'm pretty confident I failed... so my apologies. Here's my (2-part) working solution for syncing references in the Firebase DB to Firebase Storage URLs (in a React Redux web app, though I think Part One should be applicable regardless):
PART ONE
Follow along here https://firebase.google.com/docs/functions/get-started to get cloud functions enabled.
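(That guide boils down to installing the Firebase CLI, initializing functions in your project, and deploying. A sketch of the commands, from memory:)

npm install -g firebase-tools
firebase login
firebase init functions    # creates the functions/ directory
firebase deploy --only functions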
The part of my database with the info I was storing relating to the images was at /projects/detail/{projectKey}/imgs and had this structure:
{
  <autoGenKey1>: {
    name: 'image1.jpg',
    url: <longURLWithToken>
  },
  <moreAutoGenKeys>: {
    ...
  },
  ...
}
My cloud function looked like this:
exports.updateURLToken = functions.database.ref(`/projects/detail/{projectKey}/imgs`)
  .onWrite(event => {
    const projectKey = event.params.projectKey
    // Guard against null snapshots (first write at this ref, or a full delete)
    const newObjectSet = event.data.val() || {}
    const newKeys = Object.keys(newObjectSet)
    const oldObjectSet = event.data.previous.val() || {}
    const oldKeys = Object.keys(oldObjectSet)
    let newObjectKey = null
    // If something was removed, none of this is necessary - return
    if (oldKeys.length > newKeys.length) {
      return null
    }
    // Looking for the new object -> it will be missing from oldObjectSet
    for (let i = 0; i < newKeys.length; ++i) {
      const key = newKeys[i]
      if (oldKeys.indexOf(key) === -1) { // Found new object
        newObjectKey = key
        break
      }
    }
    // Checking if the new object overwrote an existing object (same name)
    if (newObjectKey !== null) {
      const newObject = newObjectSet[newObjectKey]
      let duplicateKey = null
      for (let i = 0; i < oldKeys.length; ++i) {
        const oldObject = oldObjectSet[oldKeys[i]]
        if (newObject.name === oldObject.name) { // Duplicate found
          duplicateKey = oldKeys[i]
          break
        }
      }
      if (duplicateKey !== null) { // Remove duplicate
        return event.data.ref.child(duplicateKey).remove((error) => error ? 'Error removing duplicate project detail image' : true)
      }
    }
    return null
  })
After deploying this function, it would run every time anything changed at that location (projects/detail/{projectKey}/imgs). So I would upload the images and add a new object to my db with the name and url; the function would then find the newly created object, and if it had a duplicate name, the old object with the same name was removed from the db.
PART TWO
So now my database had the correct info, but unless I refreshed the page after every image upload, adding the new object to my database still left me (locally) with all the duplicate refs. This is where the realtime database came into play.
Inside my container, I have:
function mapDispatchToProps (dispatch) {
  syncProjectDetailImages(dispatch) // the relevant line -> imported from api.js
  return bindActionCreators({
    ...projectsContentActionCreators,
    ...themeActionCreators,
    ...userActionCreators,
  }, dispatch)
}
Then my api.js holds that syncProjectDetailImages function:
const SAVING_PROJECT_SUCCESS = 'SAVING_PROJECT_SUCCESS'

export function syncProjectDetailImages (dispatch) {
  ref.child(`projects/detail`).on('child_changed', (snapshot) => {
    dispatch(projectDetailImagesUpdated(snapshot.key, snapshot.val()))
  })
}

function projectDetailImagesUpdated (key, updatedProject) {
  return {
    type: SAVING_PROJECT_SUCCESS,
    group: 'detail',
    key,
    updatedProject
  }
}
And finally, dispatch is figured out in my modules folder (I used the same function I would when saving any part of an updated project with redux - no new code was necessary)
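For completeness, a reducer handling the action above might look something like this (the state shape here is hypothetical, not from my actual code):

function projects (state = { detail: {} }, action) {
  switch (action.type) {
    case SAVING_PROJECT_SUCCESS:
      // Replace the locally cached project with the server's version,
      // dropping any stale duplicate image refs held in local state
      return {
        ...state,
        [action.group]: {
          ...state[action.group],
          [action.key]: action.updatedProject
        }
      }
    default:
      return state
  }
}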