Uppy/Shrine: How to retrieve presigned url for video after successful upload (using AWS S3) - reactjs

I'm using Uppy for file uploads in React, with a Rails API using Shrine.
I'm trying to show a preview for an uploaded video before submitting a form. It's important to emphasize that this is specifically for a video upload, not an image, so the 'thumbnail:generated' event does not apply here.
I can't find any Uppy events that return a cached video preview (the way thumbnail:generated does for images), or anything that passes back a presigned URL for the uploaded file (less expected, obviously), so the only option I see is constructing the URL manually. Here's what I'm currently trying for that (irrelevant code removed for brevity):
import React, { useEffect, useState } from 'react'
import AwsS3 from '@uppy/aws-s3'
import Uppy from '@uppy/core'
import axios from 'axios'
import { DragDrop } from '@uppy/react'
import { API_BASE } from '../../../api'

const constructParams = (metadata) => ([
  `?X-Amz-Algorithm=${metadata['x-amz-algorithm']}`,
  `&X-Amz-Credential=${metadata['x-amz-credential']}`,
  `&X-Amz-Date=${metadata['x-amz-date']}`,
  '&X-Amz-Expires=900',
  '&X-Amz-SignedHeaders=host',
  `&X-Amz-Signature=${metadata['x-amz-signature']}`,
].join('').replaceAll('/', '%2F'))
const MediaUploader = () => {
  const [videoSrc, setVideoSrc] = useState('')

  const uppy = new Uppy({
    meta: { type: 'content' },
    restrictions: {
      maxNumberOfFiles: 1
    },
    autoProceed: true,
  })

  const getPresigned = async (id, type) => {
    const response = await axios.get(`${API_BASE}/s3/params?filename=${id}&type=${type}`)
    const { fields, url } = response.data
    const params = constructParams(fields)
    const presignedUrl = `${url}/${fields.key}${params}`
    console.log('presignedUrl from Shrine request data: ', presignedUrl)
    setVideoSrc(presignedUrl)
  }

  useEffect(() => {
    uppy.use(AwsS3, {
      id: `AwsS3:${Math.random()}`,
      companionUrl: API_BASE,
    })

    uppy.on('upload-success', (file, _response) => {
      const { type, meta } = file

      // First attempt to construct presigned URL here
      const url = 'https://my-s3-bucket.s3.us-west-1.amazonaws.com'
      const params = constructParams(meta)
      const presignedUrl = `${url}/${meta.key}${params}`
      console.log('presignedUrl from upload-success data: ', presignedUrl)

      // Second attempt to construct presigned URL here
      const id = meta.key.split(`${process.env.REACT_APP_ENV}/cache/`)[1]
      getPresigned(id, type)
    })
  }, [uppy])
  return (
    <div className="MediaUploader">
      <div className="Uppy__preview__wrapper">
        <video
          src={videoSrc || ''}
          className="Uppy__preview"
          controls
        />
      </div>
      {(!videoSrc || videoSrc === '') && (
        <DragDrop
          uppy={uppy}
          className="UploadForm"
          locale={{
            strings: {
              dropHereOr: 'Drop here or %{browse}',
              browse: 'browse',
            },
          }}
        />
      )}
    </div>
  )
}

export default MediaUploader
Both urls here come back with a SignatureDoesNotMatch error from AWS.
The manual construction of the URL comes mainly from constructParams. I have two different implementations of this: the first takes the metadata directly from the uploaded file data in the 'upload-success' event and just concatenates a string to build the URL. The second uses getPresigned, which makes a request to my API, which in turn points to a generated Shrine path that should return data for a presigned URL. API_BASE simply points to my Rails API. More info on the generated Shrine route here.
It's worth noting that everything works perfectly with the upload process that passes through Shrine, and after submitting the form, I'm able to get a presigned url for the video and play it without issue on the site. So I have no reason to believe Shrine is returning incorrectly signed urls.
I've compared the two presigned urls I'm manually generating in the form, with the url returned from Shrine after uploading. All 3 are identical in structure, but have different signatures. Here are those three urls:
presignedUrl from upload-success data:
https://my-s3-bucket.s3.us-west-1.amazonaws.com/development/cache/41b229fb17cbf21925d2cd907a59be25.mp4?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAW63AYCMFA4374OLC%2F20221210%2Fus-west-1%2Fs3%2Faws4_request&X-Amz-Date=20221210T132613Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&X-Amz-Signature=97aefd1ac7f3d42abd2c48fe3ad50b542742ad0717a51528c35f1159bfb15609
presignedUrl from Shrine request data:
https://my-s3-bucket.s3.us-west-1.amazonaws.com/development/cache/023592fb14c63a45f02c1ad89a49e5fd.mp4?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAW63AYCMFA4374OLC%2F20221210%2Fus-west-1%2Fs3%2Faws4_request&X-Amz-Date=20221210T132619Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&X-Amz-Signature=7171ac72f7db2b8871668f76d96d275aa6c53f71b683bcb6766ac972e549c2b3
presigned url displayed on site after form submission:
https://my-s3-bucket.s3.us-west-1.amazonaws.com/development/cache/41b229fb17cbf21925d2cd907a59be25.mp4?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAW63AYCMFA4374OLC%2F20221210%2Fus-west-1%2Fs3%2Faws4_request&X-Amz-Date=20221210T132734Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&X-Amz-Signature=9ecc98501866f9c5bd460369a7c2ce93901f94c19afa28144e0f99137cdc2aaf
The first two urls come back with SignatureDoesNotMatch, while the third url properly plays the video.
I'm aware the first and third URLs have the same file name, while the second does not. I'm not sure what to make of that, but its relevance is secondary to me, since that second approach was more of a last-ditch effort anyway.
I'm not at all attached to the current way I'm doing things. It's just the only solution I could come up with, due to lack of options. If there's a better way of going about this, I'm very open to suggestions.
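For completeness, one direction I haven't fully explored is skipping the presigned URL entirely and previewing from the local Blob that Uppy already holds. A rough, untested sketch:

uppy.on('file-added', (file) => {
  // file.data is the Blob/File object Uppy keeps locally, so the browser
  // can play it without any request to S3
  const localUrl = URL.createObjectURL(file.data)
  setVideoSrc(localUrl)
  // remember to URL.revokeObjectURL(localUrl) once the preview is discarded
})

If there's an Uppy event I've missed that makes this unnecessary, that would answer the question just as well.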

Related

Accessing Drive thumbnailLink with gapi.client.request gives 404

I'm currently working on a React app that should load a set of (non-public) files from a user's Google Drive and display thumbnails using the CSS background-image: url(...) property. I can load all of the file metadata using the Gapi Files:get method, and an OAuth token is set ahead of time using gapi.client.setToken. I'm hoping to load the thumbnail as follows:
function CollectionArtifact(props){
  const [thumbnail, setThumbnail] = useState(null);

  async function loadThumbnail(){
    try {
      const res = await window.gapi.client.request(props.artifact.thumbnailLink);
      const data = await res.blob();
      const localURL = URL.createObjectURL(data);
      setThumbnail(localURL);
    } catch(e){ console.log('Failed to load thumbnail', e); }
  }

  useEffect(() => {
    if(props.apisLoaded){
      loadThumbnail();
    }
  }, [props.apisLoaded]);

  return (
    ...
    <div style={{backgroundImage: `url(${thumbnail})`}}>
      ...
    </div>
    ...
  );
}
The client is initialized in a wrapper component like so:
window.gapi.load('client:auth2', () => {
  window.gapi.client.init({
    apiKey: developerKey,
    clientId: clientId,
    scope: 'https://www.googleapis.com/auth/drive'
  }).then(() => {
    window.gapi.client.load('drive', 'v3', () => {
      if(props.onGapisLoad) props.onGapisLoad();
      setInitialized(true);
    });
  });
});
However, the window.gapi.client.request call gives Failed to load resource: the server responded with a status of 404, and the URL that's actually giving this error is the proxy request https://docs.google.com/static/proxy.html?usegapi=1&.... Using fetch also doesn't work, as it gives a CORS error. Meanwhile, if I try to access the thumbnailLink in the browser while signed in to the correct Google account (and no other accounts), I can see the image without a problem.
What would be the proper way to load thumbnails through an authorized request? I've seen other answers that recommend generating a PDF and using it to create a public thumbnail, but I'd like to avoid creating extra files in the user's drive or making file data publicly accessible. And since the image can be loaded directly from the browser, I would assume there's a way to access it with an authorized client.
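For reference, the only workaround I can think of so far (untested) is to skip the thumbnail and download the file content itself through the Drive REST endpoint with a Bearer token, since www.googleapis.com does send CORS headers. Here accessToken is the same token I pass to gapi.client.setToken, and props.artifact.id is assumed to hold the file id:

async function loadPreview(){
  try {
    const res = await fetch(
      `https://www.googleapis.com/drive/v3/files/${props.artifact.id}?alt=media`,
      { headers: { Authorization: `Bearer ${accessToken}` } }
    );
    const blob = await res.blob();
    setThumbnail(URL.createObjectURL(blob)); // the full file, not the smaller thumbnail
  } catch(e){ console.log('Failed to load preview', e); }
}

That downloads the whole image instead of the thumbnail, though, which is why I'd still prefer an authorized way to use thumbnailLink.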

Querying persisted React WYSIWYG data from MongoDB

thanks in advance!
In summary, I am using the React WYSIWYG rich text editor and saving the text written in the editor to MongoDB; the data is sent to a server, which does the insertion. My issue is that, even after following the recommended code, I am unable to retrieve the stored data and display it on my page. This is for a prospective blog post site.
Below I've provided all relevant code:
My component which sends the data to the server to insert it into MongoDB (not in order, only relevant code):
<Editor
  editorState={editorState}
  onEditorStateChange={handleEditorChange}
  wrapperClassName="wrapper-class"
  editorClassName="editor-class"
  toolbarClassName="toolbar-class"
/>

const Practice = () => {
  const [editorState, setEditorState] = useState(
    () => EditorState.createEmpty(),
  );
  const [convertedContent, setConvertedContent] = useState(null);

  const handleEditorChange = (state) => {
    setEditorState(state);
    convertContentToRaw();
  }

  const convertContentToRaw = () => {
    const contentState = editorState.getCurrentContent();
    setEditorState(editorState: {convertToRaw(contentState)});
  }

  const stateToSend = JSON.stringify(editorState);

  try {
    const response = await axios.post('http://localhost:8080/api/insert', {
      content: stateToSend
    })
  } catch(error) {
  }
In MongoDB, I've defined a single field for storing the WYSIWYG data, initialized as an empty JS object:
const wysiwygtest = new mongoose.Schema({
  content: {
    type: {}
  }
});
As a result, my data is inserted into MongoDB as shown, with everything desired clearly present (data types such as RGBA values, etc.). Correct me if I'm wrong, but I believe Mongo uses BSON, a binary form of JSON, so retrieval looks doable:
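(For reference, the raw shape that Draft.js's convertToRaw produces, which is roughly what I'd expect to end up in the content field, looks like this; the values are just placeholders:)

{
  "blocks": [
    {
      "key": "abc12",
      "text": "Hello world",
      "type": "unstyled",
      "depth": 0,
      "inlineStyleRanges": [],
      "entityRanges": [],
      "data": {}
    }
  ],
  "entityMap": {}
}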
Lastly, the code which is not working correctly: the retrieval. For this, I have no interest just yet in placing the data back into the text editor. Rather, I'd like to display it on the page like a typical blog post. However, I'm unable to even log it to the console so far.
I am parsing the data back to JSON using JSON.parse, converting the JSON to a ContentState using convertFromRaw, and using EditorState (even though I don't have the text editor in this component, it seems to be needed to convert the data fully):
useEffect( async () => {
  try {
    const response = await axios.get('http://localhost:8080/api/query', {
      _id: '60da9673b996f54d507dbfc5'
    });
    const content = response;
    if(content) {
      const convertedContent =
        EditorState.createWithContent(convertFromRaw(JSON.parse(content)));
      console.log('convertedContent - ', convertedContent);
    }
    console.log('response - ', content);
  } catch(error) {
    console.log('error!', error);
  }
}, [])
My result for the past day and last night has been the following:
"SyntaxError: Unexpected token o in JSON at position 1" and so I'm unsure what I'm doing wrong in the data retrieval, and possibly even the insertion.
Any ideas? Thanks again!
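From what I've read, that error usually means JSON.parse was given something that isn't a string; the value gets coerced to "[object Object]" first, and the "o" at position 1 is what it trips over. For example:

JSON.parse({ some: 'object' })
// SyntaxError: Unexpected token o in JSON at position 1
// (the object is coerced to the string "[object Object]" before parsing)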
Edit: For more reference, here is what the data looks like when output to the console without JSON.stringify; this is the full tree of data. I can see all of the relevant data is there, but how do I convert it and display it in a div or paragraph tag, for example?
I more or less figured this out; see my solution below, given the aforementioned implementation:
Firstly, I think my biggest mistake was using JSON.parse(); removing it did the trick. My guess as to why it doesn't work (even though I inserted the data into MongoDB as JSON) is that we ultimately need draft-js itself to convert the data from the DB into an object type it understands, in order to then convert it into HTML successfully, with all properties.
Below is the code with captions/descriptions:
Retrieve the data (in useEffect, when the component mounts):
useEffect( async () => {
  console.log('useeffect');
  try {
    const response = await axios.get('http://localhost:8080/api/query', {
      _id: '60da9673b996f54d507dbfc5' // hard-coded id from DB for testing
    });
    const content = response.data; // get JSON data from MongoDB
    if(content) {
      const rawContent = convertFromRaw(content); // convert from JSON to the ContentState understood by DraftJS, for the EditorState object to use
      setEditorState(EditorState.createWithContent(rawContent)); // create an EditorState based on the JSON data from the DB and set it into component state
      let currentContentAsHTML = draftToHtml(convertToRaw(editorState.getCurrentContent())); // convert the DraftJS ContentState back into a plain JS object, then convert THAT into HTML with the draftToHtml function. Save it into our 2nd piece of state, convertedContent, to be displayed on the page as the blog post
      setConvertedContent(currentContentAsHTML);
    }
  } catch(error) {
    console.log('error retrieving!', error);
  }
}, [convertedContent]) // ensure a dependency on the convertedContent state, DB/server calls take time...
In the component render, return HTML which sets the innerHTML in the DOM, passing the convertedContent state we converted to proper HTML format in step 1.
return (
  <div className="blog-container" dangerouslySetInnerHTML={createMarkup(convertedContent)}></div>
);
In step 2, we called a function named createMarkup; here it is. It essentially returns an HTML object using the HTML-converted data originally from our database. Setting innerHTML like this is a bit vulnerable in terms of malicious users being able to inject markup into the DOM, so we use the sanitize method from DOMPurify, via the 'isomorphic-dompurify' library. I'm using this instead of regular DOMPurify because I'm on Next.js, which also runs on the server side, and DOMPurify expects a client-side environment:
const createMarkup = (html) => {
  return {
    __html: DOMPurify.sanitize(html)
  }
}
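For completeness, these are the imports the snippets above assume (double-check the package names against the versions you have installed):

import { EditorState, convertToRaw, convertFromRaw } from 'draft-js';
import draftToHtml from 'draftjs-to-html';
import DOMPurify from 'isomorphic-dompurify';
import axios from 'axios';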

undefined in file path when trying to upload a file in React Application using ReactS3Uploader and SignedUrl

I am new to React. I have been trying to upload files (images, JSON files, etc.) to an AWS S3 bucket from a ReactJS application using ReactS3Uploader (version 4.8.0). I am following this example: https://www.npmjs.com/package/react-s3-uploader
I have added the below code into one of my components where I want the file upload functionality:
<ReactS3Uploader
  getSignedUrl={getSignedUrl}
  accept="image/*"
  s3path="/uploads/test/"
  preprocess={this.onUploadStart}
  onSignedUrl={this.onSignedUrl}
  onProgress={this.onUploadProgress}
  onError={this.onUploadError}
  onFinish={this.onUploadFinish}
  signingUrlHeaders={{ }}
  signingUrlQueryParams={{ }}
  signingUrlWithCredentials={ true } // in case when need to pass authentication credentials via CORS
  uploadRequestHeaders={{ 'x-amz-acl': 'public-read' }} // this is the default
  contentDisposition="auto"
  scrubFilename={(filename) => filename.replace(/[^\w\d_\-.]+/ig, '')}
  inputRef={cmp => this.uploadInput = cmp}
  autoUpload={true}
  server="http://cross-origin-server.com"
/>
I have also created another module for getSignedUrl (S3SignedUrl.js) as follows (as described here: https://www.npmjs.com/package/react-s3-uploader):
import React, { Component } from 'react';
import { toast } from 'react-toastify';
import axios from '../../shared/axios';

function getSignedUrl(file, callback) {
  console.log('.........Inside getSignedUrl()>>file.nameeeee.........'+file.name)
  console.log('.........Inside getSignedUrl()>>file.size.........'+file.size)
  const filename = file.name;
  const params = {
    filename: file.name
    //contentType: file.type
  };
  var headers = {
    'Content-Type': 'application/json'
  }
  axios.post(`/api/link/admin/v1/s3/sign?filename=${filename}`, {headers: headers})
    .then(data => {
      console.log('data.data.signedUrl>>>>>>>>>>>'+data.data.signedUrl)
      callback(data);
      return data.data
    })
    .catch(error => {
      console.error(error);
    });
}
export default getSignedUrl;
I have a Groovy-based backend API (a Spring Boot application) which creates the S3 signed URL in the following format:
{
  "signedUrl": "<complete signed url>",
  "uploadPath": "mybucket/apidocs/dev/version/logo/04137a9c-fb60-48dd-ae0f-c53d78e4e379/logo.png",
  "expiresAt": 1552083549794
}
I am successfully able to call my Groovy /s3/sign URL from my React application (through S3SignedUrl.js, which uses Axios), but right after that, when the ReactS3Uploader component tries to upload the file to the AWS S3 bucket, it fails with HTTP 403.
When I look at the network tab (inspecting in Google Chrome), the underlying call being made by the ReactS3Uploader component is
PUT https://localhost:3000/apps/gateway/undefined with Http 403
I am not sure what is undefined here within the URL. Shouldn't the ReactS3Uploader component automatically be doing an HTTP PUT to the signedUrl?
I do see some fixes in react-s3-uploader version 4.6.2 around undefined in file path when not providing s3path property. https://changelogs.md/github/odysseyscience/react-s3-uploader/
But I'm not sure if it has anything to do with the problem I am getting. By the way, I am using version 4.8.0.
Just to confirm, I can successfully upload the file using that signed URL manually through curl.
Any help here would highly be appreciated.
Thanks in advance
I know this is an old post, but I've been searching for something similar and came across this.
You should have callback(data.data);
ReactS3Uploader will redirect to an undefined URL if it's configured to use a getSignedUrl function and the callback is not given an object with a signedUrl field.
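In other words, the end of getSignedUrl should look roughly like this (a sketch based on the response shape shown above):

axios.post(`/api/link/admin/v1/s3/sign?filename=${filename}`, {}, { headers: headers }) // note: headers belong in the third argument of axios.post, not the request body
  .then(data => {
    // data.data is the parsed JSON body: { signedUrl, uploadPath, expiresAt }
    callback(data.data);
  })
  .catch(error => {
    console.error(error);
  });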

Error: User credentials required in Google Cloud Print API

I'm trying to use the Google Cloud Print (GCP) API, but I can't make it work.
Maybe I've misunderstood the workflow, since this is the first time I'm using a Google API; please help me understand how to make it work.
Initial considerations:
I'm trying to implement this in ReactJS, but that shouldn't matter, since the logic to make GCP work is independent of the technology, so you can still help me understand the workflow.
What exactly I want:
For my first test, I'm looking to get all the information about my printer.
What I did:
I created a project in: https://console.developers.google.com
Inside the project created, I created a credential:
create credentials -> OAuth client ID
And I chose Application type: Web, and also configured the source and redirect restrictions to point to my localhost.
Manually, in https://www.google.com/cloudprint, I added my printer and made a test printing a PDF, which was OK.
I created a ReactJS project to get the information for the printer I added.
Component:
Explanation:
I'm using the react-google-login component to easily obtain the user's accessToken: https://github.com/anthonyjgrove/react-google-login
This component only obtains the access token and saves it in localStorage, in a variable called googleToken, and it draws a button that calls a function to obtain the information about the printer.
code:
import React, { Component } from 'react'
import GoogleLogin from 'react-google-login';
import { connect } from 'react-redux'
import { getPrinters } from '../actions/settings'

class Setting extends Component {
  responseGoogle(response) {
    const accessToken = response.accessToken
    localStorage.setItem('googleToken', accessToken)
  }

  render() {
    return (
      <div>
        <GoogleLogin
          clientId="CLIENT_ID_REMOVED_INTENTIONALLY.apps.googleusercontent.com"
          buttonText="Login"
          onSuccess={this.responseGoogle}
          onFailure={this.responseGoogle}
        />
        <button
          onClick={() => {
            this.props.getPrinters()
          }}
        >test printer</button>
      </div>
    )
  }
}

const mapStateToProps = state => {
  return {
    state: state
  }
}

const mapDispatchToProps = dispatch => {
  return {
    getPrinters() {
      dispatch(getPrinters())
    }
  }
}

export default connect(
  mapStateToProps,
  mapDispatchToProps
)(Setting)
Action (function) to get the printer information:
Explanation:
I'm passing the parameter printerid to get information about that printer.
For Authorization, I'm using 'OAuth ...' because the documentation says so (second paragraph): https://developers.google.com/cloud-print/docs/appInterfaces
I added the next two headers because I tried solutions such as:
Google Cloud Print API: User credentials required
Google Cloud Print User credentials required
code:
import axios from 'axios'

axios.defaults.headers.common['Authorization'] = 'OAuth ' + localStorage.getItem('googleToken')
axios.defaults.headers.common['scope'] = 'https://www.googleapis.com/auth/cloudprint'
axios.defaults.headers.common['X-CloudPrint-Proxy'] = 'printingTest'

const getPrinters = () => {
  return () => {
    return axios.get('https://www.google.com/cloudprint/printer', {
      params: {
        printeid: 'PRINTER_ID_REMOVED_INTENTIONALLY'
      }
    })
    .then(response => {
      console.log('response of google cloud print')
      console.log(response)
    })
  }
}

export { getPrinters }
Error:
After all of the above, I got the following error:
User credentials required
Error 403
Note:
I'm using a CORS plugin, as recommended in:
Chrome extensions for silent print?
because initially I had a CORS error.
Any suggestion or recommendation would be very useful, thanks.
I've resolved my problem. My main issue with "User credentials required" was that I was using the wrong access token, and that was because I was obtaining the access token incorrectly.
I'm going to explain my whole solution, because there are few code examples for this API.
Solutions:
The steps described were OK up to the fourth step, where I used the external component react-google-login to try to get the access token; instead, I used the googleapis module: Link Github googleapis
Also, to avoid CORS problems (and not use the CORS Chrome plugin), I moved the requests to the Google API to the server side (Node.js).
I also had a problem in the frontend when I tried to open a popup to ask for printer permission (CORS problems again); my solution was to use this very simple module for authentication: Link Github oauth-open
General scheme:
Explanation:
This assumes I have all the data described in my question post (up to the third step).
Authentication:
The next step is getting a URL and using it so the user can authenticate.
As I said before, I used the oauth-open module in the frontend to generate the popup, and that module only needs the URL. To get the URL, I created the backend endpoint /googleurl, where I use the generateAuthUrl method of the googleapis module to generate it.
After that, in the frontend I take the authentication code (returned by the oauth-open module) and send it to my endpoint /googletoken, where I process the authentication code to generate the access token, refresh token and expiration date with the getToken method of the googleapis module. Finally, these data are stored in the database.
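As a rough sketch of that frontend flow, inside an async function (I'm paraphrasing the popup step, so check the oauth-open README for its exact API):

// 1) ask the backend for the Google authentication URL
const { data } = await axios.get('/googleurl');
const googleURL = data.result.googleURLToken;
// 2) open googleURL in a popup (I used oauth-open for this); the redirect back
//    to the app carries the authorization code as the "code" query parameter
// 3) send that authorization code to the backend so it can exchange it for tokens
await axios.post('/googletoken', { code: authorizationCode }); // authorizationCode: whatever your popup helper returns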
Print:
For printing, from the frontend I send whatever data I need to send to the printer to my endpoint /print.
In the backend endpoint, my logic was the following:
Recover the tokens and expiration date from the database. Using the expiration date, check whether the access token has expired; if it has, get a new one and replace the old access token with it, also saving the new expiration date. To obtain this new data, it's only necessary to call the refreshAccessToken method of the googleapis module. Note: the refresh token never expires.
Once the access token is up to date, use it to send the data to the printer via the Google route (.../submit).
Code:
All of the following code lives in a single file.
Some things such as validation, static variables, error handling, etc., have been removed for easier understanding.
Route to get the authentication URL:
const express = require('express');
const google = require('googleapis');
const router = express.Router();

var OAuth2 = google.auth.OAuth2;
const redirect_url = 'http://localhost:3001/setting'; // Your redirect URL

var oauth2Client = new OAuth2(
  'CLIENT ID',     // Replace it with your client id
  'CLIENT SECRET', // Replace it with your client secret
  redirect_url
);

var url = oauth2Client.generateAuthUrl({
  access_type: 'offline',
  scope: 'https://www.googleapis.com/auth/cloudprint'
});

router.get('/googleurl', (req, res) => {
  return res.status(200).send({
    result: { googleURLToken: url }
  });
});
Get the tokens using the authentication code and save them in the database:
const Setting = require('../models/setting'); // My model (Mongoose)

router.post('/googletoken', (req, res) => {
  oauth2Client.getToken(req.body.code, function (err, tokens) {
    oauth2Client.credentials = tokens;
    // If a refresh token exists, save it,
    // because the refresh token is returned only once! IMPORTANT
    if (tokens.hasOwnProperty('refresh_token')) {
      let setting = new Setting();
      setting.refreshTokenGoogle = tokens.refresh_token;
      setting.expirationTokenGoogle = tokens.expiry_date;
      setting.tokenGoogle = tokens.access_token;
      setting.save()
        .then((settingCreated) => {
          return res.status(200).send({
            message: 'OK'
          });
        })
    }
  });
});
To print
const axios = require('axios');
const moment = require('moment');

router.post('/print', async (req, res) => {
  const tickeProperties = {
    'version': '1.0',
    'print': {
      'vendor_ticket_item': [],
      'color': { 'type': 'STANDARD_MONOCHROME' },
      'copies': { 'copies': 1 }
    }
  };

  const accessToken = await getTokenGoogleUpdated();

  axios.get(
    'https://www.google.com/cloudprint/submit',
    {
      params: {
        printerid: printerID, // Replace by your printer ID
        title: 'title printer',
        ticket: tickeProperties,
        content: 'print this text of example!!!',
        contentType: 'text/plain'
      },
      headers: {
        'Authorization': 'Bearer ' + accessToken
      }
    }
  )
  .then(response => {
    return res.status(200).send({
      result: response.data
    });
  })
});
async function getTokenGoogleUpdated() {
  return await Setting.find({})
    .then(async setting => {
      const refreshTokenGoogle = setting[0].refreshTokenGoogle;
      const expirationTokenGoogle = setting[0].expirationTokenGoogle;
      const tokenGoogle = setting[0].tokenGoogle;
      const dateToday = new Date();
      // 1 minute forward to avoid exact time
      const dateTodayPlus1Minute = moment(dateToday).add(1, 'm').toDate();
      const dateExpiration = new Date(expirationTokenGoogle);
      // Case date expiration, get new token
      if (dateExpiration < dateTodayPlus1Minute) {
        console.log('Updating access token');
        oauth2Client.credentials['refresh_token'] = refreshTokenGoogle;
        return await oauth2Client.refreshAccessToken(async function(err, tokens) {
          // Save new token and new expiration
          setting[0].expirationTokenGoogle = tokens.expiry_date;
          setting[0].tokenGoogle = tokens.access_token;
          await setting[0].save();
          return tokens.access_token;
        });
      } else {
        console.log('Using old access token');
        return tokenGoogle;
      }
    })
    .catch(err => {
      console.log(err);
    });
}
I hope this helps anyone who wants to use Google Cloud Print and saves you the time I wasted.
The important part is the scope https://www.googleapis.com/auth/cloudprint, which is not obvious and took me a day to figure out.

admin-on-rest Using PATCH method

I am a junior Node developer and am trying out admin-on-rest to quickly run up an admin panel for my JSON API. However, all of my update requests use PATCH instead of PUT. I attempted revising the UPDATE case in my restClient, but this seems wrong (the rest of the methods are removed for brevity):
export default (apiUrl, httpClient = fetchJson) => {
  const convertRESTRequestToHTTP = (type, resource, params) => {
    let url = ''
    const options = {}
    switch (type) {
      case UPDATE:
        url = `${apiUrl}/${resource}/${params.id}`
        options.method = 'PATCH'
        options.body = JSON.stringify(params.data)
        break
    }
    return { url, options }
  }
}
To me this makes sense, but when I try to edit an object I get back HTTP/1.1 404 Not Found: Cannot PUT.
I know this wasn't possible with previous versions, but I read this https://marmelab.com/blog/2017/03/10/admin-on-rest-0-9.html#http-patch and was a little confused about how it works. I guess I just don't know where to start with this.
If the problem is still relevant, please check the places I use to set up my customRestClient.
// App.js
import customRestClient from './customRestClient';
In my case I'm using an httpClient to add custom headers:
import httpClient from './httpClient';
below:
const restClient = customRestClient('my_api_url', httpClient);
and finally:
<Admin title="Admin Panel" restClient={restClient}>
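And my httpClient is just the header-injection pattern from the admin-on-rest docs, roughly this (the Authorization header is only an example; adjust to whatever headers you need):

// httpClient.js
import { fetchUtils } from 'admin-on-rest';

const httpClient = (url, options = {}) => {
  if (!options.headers) {
    options.headers = new Headers({ Accept: 'application/json' });
  }
  // add whatever custom headers you need, e.g. an auth token
  options.headers.set('Authorization', `Bearer ${localStorage.getItem('token')}`);
  return fetchUtils.fetchJson(url, options);
};

export default httpClient;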
