MDX loader only working on top level directory - reactjs

In the "Create Next App" section about rendering markdown they have us make posts.js that uses remark to parse the .md files.
In the guide for adding .mdx support they initialize it in next.config.js but don't address the changes needed in posts.js. I think that's why mdx support is working in pages > my-page.mdx but not posts > my-post.mdx which uses posts.js to render.
Relevant function:

export async function getPostData(id) {
  const fullPath = path.join(postsDirectory, `${id}.mdx`);
  const fileContents = fs.readFileSync(fullPath, 'utf8');

  // Use gray-matter to parse the post metadata section
  const matterResult = matter(fileContents);

  // Use remark to convert markdown into HTML string
  const processedContent = await remark()
    .use(html)
    .process(matterResult.content);
  const contentHtml = processedContent.toString();

  // Combine the data with the id and contentHtml
  return {
    id,
    contentHtml,
    ...matterResult.data,
  };
}
How can I update getPostData to parse MDX?
(MDX page in the docs using next.config.js)
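One way to make getPostData handle .mdx (a sketch, not from the thread, assuming the next-mdx-remote package): skip the remark-to-HTML step and compile the MDX source with serialize() instead.

import fs from 'fs'
import path from 'path'
import matter from 'gray-matter'
import { serialize } from 'next-mdx-remote/serialize'

export async function getPostData(id) {
  const fullPath = path.join(postsDirectory, `${id}.mdx`)
  const fileContents = fs.readFileSync(fullPath, 'utf8')

  // gray-matter still handles the frontmatter
  const matterResult = matter(fileContents)

  // Instead of remark().use(html), compile the MDX body; serialize()
  // returns a plain serializable object, so it can be returned from
  // getStaticProps like contentHtml was
  const mdxSource = await serialize(matterResult.content)

  return {
    id,
    mdxSource,
    ...matterResult.data,
  }
}

The page component would then render <MDXRemote {...post.mdxSource} /> (imported from 'next-mdx-remote') in place of the dangerouslySetInnerHTML call that used contentHtml.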

Related

Uppy/Shrine: How to retrieve presigned url for video after successful upload (using AWS S3)

I'm using Uppy for file uploads in React, with a Rails API using Shrine.
I'm trying to show a preview for an uploaded video before submitting a form. It's important to emphasize that this is specifically for a video upload, not an image, so the 'thumbnail:generated' event does not apply here.
I can't find any Uppy event that returns a cached video preview (the way 'thumbnail:generated' does), or anything that passes back a presigned URL for the uploaded file (less expected, obviously), so the only option I see is constructing the URL manually. Here's what I'm currently trying (irrelevant code removed for brevity):
import React, { useEffect, useState } from 'react'
import AwsS3 from '@uppy/aws-s3'
import Uppy from '@uppy/core'
import axios from 'axios'
import { DragDrop } from '@uppy/react'
import { API_BASE } from '../../../api'

const constructParams = (metadata) => ([
  `?X-Amz-Algorithm=${metadata['x-amz-algorithm']}`,
  `&X-Amz-Credential=${metadata['x-amz-credential']}`,
  `&X-Amz-Date=${metadata['x-amz-date']}`,
  '&X-Amz-Expires=900',
  '&X-Amz-SignedHeaders=host',
  `&X-Amz-Signature=${metadata['x-amz-signature']}`,
].join('').replaceAll('/', '%2F'))

const MediaUploader = () => {
  const [videoSrc, setVideoSrc] = useState('')

  const uppy = new Uppy({
    meta: { type: 'content' },
    restrictions: {
      maxNumberOfFiles: 1
    },
    autoProceed: true,
  })

  const getPresigned = async (id, type) => {
    const response = await axios.get(`${API_BASE}/s3/params?filename=${id}&type=${type}`)
    const { fields, url } = response.data
    const params = constructParams(fields)
    const presignedUrl = `${url}/${fields.key}${params}`
    console.log('presignedUrl from Shrine request data: ', presignedUrl)
    setVideoSrc(presignedUrl)
  }

  useEffect(() => {
    uppy.use(AwsS3, {
      id: `AwsS3:${Math.random()}`,
      companionUrl: API_BASE,
    })

    uppy.on('upload-success', (file, _response) => {
      const { type, meta } = file

      // First attempt to construct presigned URL here
      const url = 'https://my-s3-bucket.s3.us-west-1.amazonaws.com'
      const params = constructParams(meta)
      const presignedUrl = `${url}/${meta.key}${params}`
      console.log('presignedUrl from upload-success data: ', presignedUrl)

      // Second attempt to construct presigned URL here
      const id = meta.key.split(`${process.env.REACT_APP_ENV}/cache/`)[1]
      getPresigned(id, type)
    })
  }, [uppy])

  return (
    <div className="MediaUploader">
      <div className="Uppy__preview__wrapper">
        <video
          src={videoSrc || ''}
          className="Uppy__preview"
          controls
        />
      </div>
      {(!videoSrc || videoSrc === '') && (
        <DragDrop
          uppy={uppy}
          className="UploadForm"
          locale={{
            strings: {
              dropHereOr: 'Drop here or %{browse}',
              browse: 'browse',
            },
          }}
        />
      )}
    </div>
  )
}

export default MediaUploader
Both URLs come back with a SignatureDoesNotMatch error from AWS.
The manual construction of the URL happens mainly in constructParams. I have two implementations of this: the first takes the metadata directly from the uploaded file data in the 'upload-success' event and concatenates a string to build the URL; the second uses getPresigned, which makes a request to my API, pointing to a generated Shrine path that should return data for a presigned URL. API_BASE simply points to my Rails API. More info on the generated Shrine route here.
It's worth noting that everything works perfectly with the upload process that passes through Shrine, and after submitting the form, I'm able to get a presigned url for the video and play it without issue on the site. So I have no reason to believe Shrine is returning incorrectly signed urls.
I've compared the two presigned urls I'm manually generating in the form, with the url returned from Shrine after uploading. All 3 are identical in structure, but have different signatures. Here are those three urls:
presignedUrl from upload-success data:
https://my-s3-bucket.s3.us-west-1.amazonaws.com/development/cache/41b229fb17cbf21925d2cd907a59be25.mp4?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAW63AYCMFA4374OLC%2F20221210%2Fus-west-1%2Fs3%2Faws4_request&X-Amz-Date=20221210T132613Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&X-Amz-Signature=97aefd1ac7f3d42abd2c48fe3ad50b542742ad0717a51528c35f1159bfb15609
presignedUrl from Shrine request data:
https://my-s3-bucket.s3.us-west-1.amazonaws.com/development/cache/023592fb14c63a45f02c1ad89a49e5fd.mp4?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAW63AYCMFA4374OLC%2F20221210%2Fus-west-1%2Fs3%2Faws4_request&X-Amz-Date=20221210T132619Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&X-Amz-Signature=7171ac72f7db2b8871668f76d96d275aa6c53f71b683bcb6766ac972e549c2b3
presigned url displayed on site after form submission:
https://my-s3-bucket.s3.us-west-1.amazonaws.com/development/cache/41b229fb17cbf21925d2cd907a59be25.mp4?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAW63AYCMFA4374OLC%2F20221210%2Fus-west-1%2Fs3%2Faws4_request&X-Amz-Date=20221210T132734Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&X-Amz-Signature=9ecc98501866f9c5bd460369a7c2ce93901f94c19afa28144e0f99137cdc2aaf
The first two urls come back with SignatureDoesNotMatch, while the third url properly plays the video.
I'm aware the first and third URLs have the same file name, while the second does not. I'm not sure what to make of that, but its relevance is secondary to me, since that solution was more of a last-ditch effort anyway.
I'm not at all attached to the current way I'm doing things. It's just the only solution I could come up with, due to lack of options. If there's a better way of going about this, I'm very open to suggestions.
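A note on why the rebuilt URLs fail (not from the thread): a SigV4 signature covers the canonical request, including the HTTP method and the exact encoding and ordering of the query string, so the params Shrine returns for a presigned upload will not validate on a GET playback URL, and hand-concatenating the query string changes the bytes the signature was computed over. For a local preview before submission, though, no presigned URL is needed at all; the browser already holds the file. A minimal sketch under that assumption, using the Blob Uppy keeps on file.data:

uppy.on('upload-success', (file, _response) => {
  // file.data is the original File/Blob that was just uploaded;
  // an object URL lets the <video> element play it straight from memory
  const objectUrl = URL.createObjectURL(file.data)
  setVideoSrc(objectUrl)
})

// Later, when the preview is no longer needed (e.g. on unmount),
// release the object URL to free the underlying memory:
// URL.revokeObjectURL(objectUrl)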

How can I create a React front end that allows me to deploy a Solidity contract?

I am able to build the interface for the React app. I am able to create a deploy script using "@nomiclabs/hardhat-ethers" that compiles a .sol contract and successfully deploys it. What I cannot do is combine the two.
What I want is to click a button in the React app to deploy the contract.
I want to use hardhat's ethers.getContractFactory() method to compile and then deploy.
When I add "@nomiclabs/hardhat-ethers" to my project with
const { ethers } = require("@nomiclabs/hardhat-ethers");
I get an error about Hardhat context being required, so I add
require('hardhat').config()
Then I get
"Module not found: can't resolve 'async_hooks' in /node_modules/undici/lib/api"
Is there another library I can use to achieve what I want, or am I just missing something/doing something wrong?
require('dotenv').config();
const { createAlchemyWeb3 } = require("@alch/alchemy-web3");
const web3 = createAlchemyWeb3(process.env.REACT_APP_ALCHEMY_KEY);
require('hardhat').config();
const { ethers } = require("@nomiclabs/hardhat-ethers");

export const deployAuction = async (options) => {
  const AuctionFactory = new ethers.getContractFactory("Auction");
  // Start deployment, returning a promise that resolves to a contract object
  const auction = await AuctionFactory.deploy(
    options.token,
    options.paymentToken,
    options.bidIncrement,
    options.timeoutPeriod
  );
  console.log("Contract deployed to address:", auction.address);
  return auction.address;
};
Please, any help is appreciated. Thank you!
Once you deploy the contract with Hardhat, you will see an artifacts folder which stores the compiled contracts. Since your contract name is Auction, you will have artifacts/contracts/Auction/Auction.json. This JSON file includes the ABI of your contract; to create a contract instance you need to bring it to your front-end.
To create a contract instance you need 3 arguments:
1- Address of the contract
2- ABI of the contract
3- Provider
// I wrapped this in a function to be able to use "await"
const createContract = async () => {
  // make sure you pass the correct path
  // fetching JSON works in Next.js; I believe React 18 supports it
  const res = await fetch('artifacts/contracts/Auction/Auction.json');
  const artifact = await res.json();
  // the address is inside the JSON file. 5 is Goerli.
  // depending on your config, you might have a different networkId
  const address = artifact.networks[5].address;
  // in React, window.ethereum will be defined; in Next.js you need to
  // wrap this in if (window.ethereum) { ... }
  const provider = new ethers.providers.Web3Provider(window.ethereum);
  const contract = new ethers.Contract(address, artifact.abi, provider);
  return contract;
};
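The snippet above creates an instance of an already-deployed contract. For the original goal of deploying on a button click, @nomiclabs/hardhat-ethers can't help, since it only runs inside the Hardhat runtime (hence the "Hardhat context" and async_hooks errors in the browser). Plain ethers can deploy from the browser using the ABI and bytecode that hardhat compile writes into the artifact. A sketch under those assumptions (ethers v5; the artifact import path depends on your project layout):

import { ethers } from "ethers";
// Hardhat writes compiled output to artifacts/contracts/<File>.sol/<Name>.json;
// copy or import that file into the front-end build
import artifact from "./Auction.json";

export const deployAuction = async (options) => {
  // MetaMask (or another injected wallet) signs the deployment transaction
  const provider = new ethers.providers.Web3Provider(window.ethereum);
  await provider.send("eth_requestAccounts", []);
  const signer = provider.getSigner();

  // ContractFactory only needs the ABI, the bytecode, and a signer
  const factory = new ethers.ContractFactory(artifact.abi, artifact.bytecode, signer);
  const auction = await factory.deploy(
    options.token,
    options.paymentToken,
    options.bidIncrement,
    options.timeoutPeriod
  );
  await auction.deployed();
  return auction.address;
};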

Querying persisted React WYSIWYG data from MongoDB

thanks in advance!
In summary, I am using React's WYSIWYG rich text editor and saving the text written in the editor to MongoDB; the data is sent to a server, which does the insertion. My issue is that, even after following the recommended code, I am unable to retrieve the stored data and display it on my page. This is for a prospective blog post site.
Below I've provided all relevant code:
My Component which sends the data to the server to insert it into MongoDB, (not in order, only relevant code):
<Editor
  editorState={editorState}
  onEditorStateChange={handleEditorChange}
  wrapperClassName="wrapper-class"
  editorClassName="editor-class"
  toolbarClassName="toolbar-class"
/>

const Practice = () => {
  const [editorState, setEditorState] = useState(
    () => EditorState.createEmpty(),
  );
  const [convertedContent, setConvertedContent] = useState(null);

  const handleEditorChange = (state) => {
    setEditorState(state);
    convertContentToRaw();
  }

  const convertContentToRaw = () => {
    const contentState = editorState.getCurrentContent();
    setConvertedContent(convertToRaw(contentState));
  }

  // (submission code, shown out of order as noted above)
  const stateToSend = JSON.stringify(editorState);
  try {
    const response = await axios.post('http://localhost:8080/api/insert', {
      content: stateToSend
    })
  } catch(error) {
  }
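As an aside (an assumption on my part, not from the original question): the usual draft-js save path serializes the raw ContentState rather than the whole EditorState wrapper, since EditorState is an Immutable.js structure that does not survive JSON.stringify cleanly. A sketch against the same endpoint:

// convertToRaw produces a plain, JSON-safe object (blocks + entityMap)
const handleSave = async () => {
  const raw = convertToRaw(editorState.getCurrentContent())
  await axios.post('http://localhost:8080/api/insert', { content: raw })
}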
In MongoDB, I've set up a schema with a single content field for storing the WYSIWYG data, initialized as an empty JS object:

const wysiwygtest = new mongoose.Schema({
  content: {
    type: {}
  }
});
As a result, my data is inserted into MongoDB with everything desired clearly in the stored document (data types such as RGBA etc.). Correct me if I'm wrong, but I believe Mongo uses BSON, a binary-based form of JSON, so retrieval looks doable.
Lastly, the code which is not working correctly: the retrieval. For now I have no interest in placing the data back into the text editor; rather, I'd like to display it on the page like a typical blog post. However, I'm unable even to log it to the console as of yet.
I am parsing the data back to JSON using JSON.parse, converting the JSON to a JS object using convertFromRaw, and using EditorState (even though I don't have the text editor in this component, it seems to be needed to convert the data fully):
useEffect( async () => {
  try {
    const response = await axios.get('http://localhost:8080/api/query', {
      _id: '60da9673b996f54d507dbfc5'
    });
    const content = response;
    if (content) {
      const convertedContent =
        EditorState.createWithContent(convertFromRaw(JSON.parse(content)));
      console.log('convertedContent - ', convertedContent);
    }
    console.log('response - ', content);
  } catch(error) {
    console.log('error!', error);
  }
}, [])
My result for the past day and last night has been the following:
"SyntaxError: Unexpected token o in JSON at position 1", so I'm unsure what I'm doing wrong in the data retrieval, and possibly even the insertion.
Any ideas? Thanks again!
Edit: For more reference, here is what the data looks like when output to the console without JSON.stringify; the full tree of data. I can see all of the relevant data is there, but how do I convert this data and display it in a div or paragraph tag, for example?
More or less figured this out; see my solution below, given the aforementioned implementation:
Firstly, I think my biggest mistake was using JSON.parse(); doing away with it brought success. One reason it fails: axios already parses JSON responses, so response.data is a plain object, and passing an object to JSON.parse coerces it to the string "[object Object]", which is exactly what raises "Unexpected token o ... at position 1". Beyond that, we ultimately need draft-js itself to convert the data from the DB into an object type it understands, in order to subsequently convert it into HTML with all properties intact.
Below is the code with captions/descriptions:
Retrieve the data (in useEffect, when the component mounts):
useEffect( async () => {
  console.log('useeffect');
  try {
    const response = await axios.get('http://localhost:8080/api/query', {
      _id: '60da9673b996f54d507dbfc5' // hard-coded id from DB for testing
    });
    const content = response.data; // get JSON data from MongoDB
    if (content) {
      // convert from JSON to the ContentState understood by DraftJS,
      // for the EditorState object to use
      const rawContent = convertFromRaw(content);
      // create an EditorState from the DB data and set it into component state
      setEditorState(EditorState.createWithContent(rawContent));
      // convert the DraftJS ContentState back into a regular vanilla JS
      // object, then convert THAT into HTML with the "draftToHtml" function;
      // save it into our second piece of state, "convertedContent", to be
      // displayed on the page as the blog post
      let currentContentAsHTML = draftToHtml(convertToRaw(editorState.getCurrentContent()));
      setConvertedContent(currentContentAsHTML);
    }
  } catch(error) {
    console.log('error retrieving!', error);
  }
}, [convertedContent]) // dependency on the convertedContent state; DB/server calls take time...
In the component render, return HTML that sets innerHTML in the DOM, passing the convertedContent state we converted to proper HTML format in step 1.
return (
  <div className="blog-container" dangerouslySetInnerHTML={createMarkup(convertedContent)}></div>
);
In step 2, we called a function entitled "createMarkup"; here is that method. It essentially returns an HTML object using the HTML-converted data originally from our database. Setting HTML in the DOM this way is a bit vulnerable, in terms of malicious users being able to inject HTML into the DOM, so we use the "sanitize" method from DOMPurify, via the "isomorphic-dompurify" library. I'm using this instead of regular DOMPurify because I am using Next.js, which runs on the server side as well, and DOMPurify only expects the client side:
const createMarkup = (html) => {
  return {
    __html: DOMPurify.sanitize(html)
  }
}

How to upload image to image column from SharePoint List using REST API(React)

I have a problem uploading an image to an image column in a SharePoint Online list via PnPjs.
I don't know how to convert the image and upload it to the image column in the SharePoint list.
I tried a lot of ways (converting the image to a blob, to file data) and nothing works.
Keep in mind this is not an attachment on the list; it's a new column of type image in the SharePoint list.
Looks like the image is not stored in the list; JSON is stored. So you can just upload the image to Site Assets (that's what SharePoint does when you set the image manually) and then put the JSON into the field. I would try something like this (assuming you are using PnPjs):
import * as React from "react";
import { sp } from "@pnp/sp/presets/all";

// hello world react component
export const HelloWorld = () => {
  const uploadFile = async (evt) => {
    const file: File = evt.target.files[0];
    // upload to the root folder of site assets in this demo
    const assets = await sp.web.lists.ensureSiteAssetsLibrary();
    const fileItem = await assets.rootFolder.files.add(file.name, file, true);
    // bare minimum; probably you'll want other properties as well
    const img = {
      "serverRelativeUrl": fileItem.data.ServerRelativeUrl,
    };
    // create the item, stringify json for image column
    await sp.web.lists.getByTitle("YourListWithImageColumn").items.add({
      Title: "Hello",
      YourImageColumn: JSON.stringify(img)
    });
  };

  return (<div>
    <input type='file' onChange={uploadFile} />
  </div>);
};
@azarmfa,
The image file is in fact not stored in the image field; the field just references its location. You could first upload the image file to a library (by default it will be Site Assets), then update the item like below:
let list = sp.web.lists.getByTitle("mylinks");
let json = {
  "fileName": "Search_Arrow.jpg",
  "serverUrl": "https://abc.sharepoint.com",
  "serverRelativeUrl": "/sites/s01/Style%20Library/Images/Search_Arrow.jpg"
};
let jsonstr = JSON.stringify(json);
const i = await list.items.getById(3).update({
  Title: "My New Tit",
  img: jsonstr
});
BR

Does Gatsby's createPages API send the path to the template?

I've been following this Gatsby tutorial to create pages dynamically based on my CMS's creation of markdown files: https://www.gatsbyjs.org/docs/adding-markdown-pages/
I don't understand how the GraphQL query in the file blogTemplate.js receives the $path variable. I can see that the createPages function in gatsby-node.js creates the page based on the result of its own GraphQL query, and uses the 'path' frontmatter field to choose the URL for the created page.
However, how does that created page know its own path? The query called pageQuery in the tutorial uses the $path variable to source its own data, but I can't see how it receives that variable. Thank you in advance for any explanation.
While creating the pages, we can pass context.
All context values are made available to a template's GraphQL queries as arguments prefaced with $:
exports.createPages = async function ({ actions, graphql }) {
  const { data } = await graphql(`
    query {
      allMarkdownRemark {
        edges {
          node {
            fields {
              slug
            }
          }
        }
      }
    }
  `)

  data.allMarkdownRemark.edges.forEach(edge => {
    const slug = edge.node.fields.slug
    actions.createPage({
      path: slug,
      component: require.resolve(`./src/templates/blog-post.js`),
      // path is passed through the context parameter; the slug can then
      // be accessed under the $path variable in the template
      context: { path: slug },
    })
  })
}
Using the $ sign, we can then access the path value on the template side:
export const query = graphql`
  query($path: String!) {
    ...
  }
`
Gatsby uses Redux internally, and all information about page creation and paths is kept in Gatsby's Redux store; from there, the GraphQL query is executed for each page.
Now, according to a code comment on Gatsby's createPage (on GitHub):
Data in "context" is passed to GraphQL as potential arguments when running the page query. When arguments for GraphQL are constructed, the context object is combined with the page object, so both the page object and context data are available as arguments. So you don't need to add the page "path" to the context, as it's already available in GraphQL. If a context field duplicates a field already used by the page object, this can break functionality within Gatsby, so it must be avoided.
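Putting the two halves together, the template-side query looks like this (the shape follows the adding-markdown-pages tutorial the question links to); $path is filled in automatically from the page's own path, or from the context value shown above:

// src/templates/blog-post.js
export const pageQuery = graphql`
  query BlogPostByPath($path: String!) {
    markdownRemark(frontmatter: { path: { eq: $path } }) {
      html
      frontmatter {
        title
      }
    }
  }
`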
