Show new image after upload - reactjs

Quick question: using the Baqend SDK in React, I'm saving profile images under a name derived from the id of an object stored in the database.
But in order to get the image to refresh in the user's browser after it is uploaded, I'm updating state and appending updated=true as a query parameter to the file.url returned by Baqend.
The save image code:
uploadLogo(event) {
  event.preventDefault();
  const name = this.props.match.params.id + "logo";
  const file = event.target.files[0];
  const img = new db.File({ name: name, data: file, type: 'blob' });
  img.upload({ force: true }).then((file) => {
    db.Companies.load(this.props.match.params.id).then(company => {
      this.setState({
        logo: file.url + "?updated=true"
      });
      company.logo = file.url;
      return company.update();
    },
    (error) => {
      alert(error);
    });
  });
}
Is this the correct approach with React and the Baqend SDK? Are there going to be any side effects if I'm loading a bunch of images by URLs that look like this: https://remarkable-apple-95.app.baqend.com/v1/file/www/cce9830b-48eb-422e-830d-72ae28571480logo?BCB&updated=true
I would imagine query parameters like this are simply ignored? The only person who will ever load the image with ?updated=true appended is the one who updates the logo, and only immediately after the update.
Also, what is the ?BCB that gets added to file.url doing?

Your example looks good so far.
But you should not add additional query parameters at all, as they cause cache misses in the CDN.
The BCB (Baqend Cache Buster) is actually what you are trying to achieve with the ?updated=true parameter. The SDK adds those cache busters automatically if an image was changed previously.
The BCB ensures that the fresh image is fetched from the server and is only cached with revalidation headers until the old image has expired in the browser cache. Our CDN caches are aware of this special cache buster and rewrite the image request back to the original URL to ensure cache hits in the CDN.
Note that our CDN caches are instantly invalidated if the content is changed.
This staleness information is also propagated to other clients via a Bloom filter, which ensures that other clients won't serve the image from their local cache and therefore see the new image too.
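In other words, the upload handler from the question can simply use file.url as returned by the SDK. A minimal sketch based on the code above, assuming the BCB takes care of cache invalidation as described:
uploadLogo(event) {
  event.preventDefault();
  const name = this.props.match.params.id + "logo";
  const file = event.target.files[0];
  const img = new db.File({ name: name, data: file, type: 'blob' });
  img.upload({ force: true }).then((file) => {
    return db.Companies.load(this.props.match.params.id).then(company => {
      // file.url already carries the SDK's cache buster after an overwrite,
      // so no extra query parameter is needed here.
      this.setState({ logo: file.url });
      company.logo = file.url;
      return company.update();
    });
  }, (error) => {
    alert(error);
  });
}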

Related

Corrupt video uploads when chunking MediaRecorder to Google Cloud platform

I am currently using a React-hook-powered component to record my screen and subsequently upload it to Google Cloud Storage. However, when it finishes, the file created in Google Cloud appears to be corrupt.
This is the gist of the code within my React component, where useMediaRecorder comes from https://github.com/wmik/use-media-recorder:
let {
  error,
  status,
  mediaBlob,
  stopRecording,
  getMediaStream,
  startRecording,
  liveStream,
} = useMediaRecorder({
  onCancelScreenShare: () => {
    stopRecording();
  },
  onDataAvailable: (chunk) => {
    // do the uploading here:
    onChunk(chunk);
  },
  recordScreen: true,
  blobOptions: { type: "video/webm;codecs=vp8,opus" },
  mediaStreamConstraints: { audio: audioEnabled, video: true },
});
As data becomes available through this hook, it calls onChunk(chunk), passing a binary Blob to that method. To perform the upload, I tie in with this section of code:
const onChunk = (binaryData) => {
  var formData = new FormData();
  formData.append("data", binaryData);
  let customerApi = new CustomerVideoApi();
  customerApi.uploadRecording(
    videoUUID,
    formData,
    (res) => {},
    (err) => {}
  );
};
customerApi.uploadRecording looks like this (using axios).
const uploadRecording = (uuid, data, fn, fnErr) => {
  axios
    .post(endpoint + "/stream/upload", data, {
      headers: {
        "Content-Type": "multipart/form-data",
      },
    })
    .then(function (response) {
      fn(response);
    })
    .catch(function (error) {
      fnErr(error.response);
    });
};
The HTTP request succeeds, and all is well with the world. The server-side code for the upload is based on Laravel:
// this is inside the controller.
public function index( Request $request )
{
    // Set file attributes.
    $filepath = '/public/chunks/';
    $file = $request->file('data');
    $filename = $uuid . ".webm";
    // streamupload
    File::streamUpload($filepath, $filename, $file, true);
    return response()->json(['uploaded' => true, 'uuid' => $uuid]);
}
// There's a service provider used to create a new macro on the File:: facade,
// providing the facility for appropriately handling the stream:
public function boot()
{
    File::macro('streamUpload', function ($path, $fileName, $file, $overWrite = true) {
        $resource = fopen($file->getRealPath(), 'r+');
        $storageClient = new StorageClient([
            'projectId' => 'myprjectid',
            'keyFilePath' => '/my/path/to/servicejson.json',
        ]);
        $bucket = $storageClient->bucket('mybucket');
        $adapter = new GoogleStorageAdapter($storageClient, $bucket);
        $filesystem = new Filesystem($adapter);
        return $overWrite
            ? $filesystem->putStream($fileName, $resource)
            : $filesystem->writeStream($fileName, $resource);
    });
}
So to reiterate:
1) React app chunks out blobs,
2) server side determines if it should create or append in Google Cloud Storage,
3) server side succeeds,
4) video inside Google Cloud platform is corrupted.
However, the video file inside the Google Cloud bucket is corrupted and won't play. I'm unsure exactly why it is corrupted, but my guesses so far:
Some sort of dodgy MIME type problem. Different browsers seem to handle the codec / file type from the MediaRecorder differently: Chrome seems to produce x-matroska (.mkv?), Firefox something different again. Ideally I would have a .webm container; notice how I set the file name server side, and it isn't coming from the client. Should it? I'm unsure how to force the MediaRecorder to use a specific mimeType. I thought the blobOptions option should do it, but changing the extension and MIME type seems to have little to no impact on the corruption occurring (a sketch of one way to probe supported types follows this list).
Some sort of problem during upload where an HTTP request doesn't execute and finish in order - e.g.
1 onDataAvailable completes second
2 onDataAvailable completes first
3 onDataAvailable completes third
I've sort of ruled this out because I think the chunks should be small enough.
Some sort of problem with Google Cloud Storage APIs that I'm using, perhaps in the wrong way? Does the cloud platform support streaming, and does this library send the correct params to do so?
Some sort of problem with how I'm uploading - should the axios headers be multipart formdata, or something else?
This is the package I'm using for the Server side: https://github.com/Superbalist/flysystem-google-cloud-storage
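For the MIME type guess, a hypothetical way to probe what the current browser can actually record (MediaRecorder.isTypeSupported is a standard static method; how the chosen type is passed into useMediaRecorder depends on that hook's options):
// Pick the first recording type the browser supports; the candidate list is illustrative.
const candidates = [
  "video/webm;codecs=vp8,opus",
  "video/webm;codecs=vp9,opus",
  "video/x-matroska;codecs=avc1",
];
const supportedType = candidates.find((t) => MediaRecorder.isTypeSupported(t));
// supportedType could then be used for blobOptions and for naming the file
// on the client, instead of hard-coding ".webm" on the server.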
Can anyone shed any light on how to stream up into Google Cloud without the video from the MediaRecorder ending up corrupted? Hopefully there's enough detail here in the question to help figure it out. The problem isn't getting the file as far as Google Cloud, but rather that the resulting file is unplayable in any video player.
Update
I've ordered my chunks client side now and queued them properly before letting them reach the server. It makes no difference to the output. As some have suggested, a single-blob upload request works fine.
I tried using the resumable config param (from reading the source code it seems like chunks need to be a certain size before Google recognises them as a resumable upload):
$filesystem = new Filesystem($adapter, [
    'resumable' => true
]);
I'm not sure how https://cloud.google.com/storage/docs/performing-resumable-uploads is implemented within the libraries I'm using (or within the Google Cloud APIs themselves, if at all). Do I need to implement that myself? Google's documentation is light on this.
Short version:
The first thing you should do is buffer the whole video locally and send a single payload to the server and on to Google Cloud. This will validate that your code is actually correct for a small video. Once you can verify this, you can move on to handling multi-chunk uploads.
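A minimal sketch of that single-payload approach, reusing names from the question (mediaBlob and status come from useMediaRecorder; the exact "stopped" status string and the useEffect import are assumptions to check against the hook's docs and your component):
useEffect(() => {
  // Once recording has stopped, the hook exposes the full recording as one Blob.
  if (status === "stopped" && mediaBlob) {
    const formData = new FormData();
    formData.append("data", mediaBlob);
    new CustomerVideoApi().uploadRecording(videoUUID, formData, (res) => {}, (err) => {});
  }
}, [status, mediaBlob]);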
Longer version:
For starters, you aren't passing the uuid with the request, yet it's being used on the server:
const uploadRecording = (uuid, data, fn, fnErr) => {
  axios
    .post(endpoint + "/stream/upload", data, {
      headers: {
        "Content-Type": "multipart/form-data",
      },
    })
    .then(function (response) {
      fn(response);
    })
    .catch(function (error) {
      fnErr(error.response);
    });
};
Next, you can't trust how chunking will work; I think you verified this behaviour with the out-of-order chunk logging. You need to assume on your server that chunks will arrive out of order and handle them correctly.
Each chunk you get on the server needs to be put in the right place; you can't just "writeStream", you need to write to an explicit byte range. Specifically, specify the byte range on every request (from the Google docs):
curl -i -X PUT --data-binary @CHUNK_LOCATION \
  -H "Content-Length: CHUNK_SIZE" \
  -H "Content-Range: bytes CHUNK_FIRST_BYTE-CHUNK_LAST_BYTE/TOTAL_OBJECT_SIZE" \
  "SESSION_URI"
Where:
CHUNK_LOCATION is the local path to the chunk that you're currently uploading.
CHUNK_SIZE is the number of bytes you're uploading in the current request. For example, 524288.
CHUNK_FIRST_BYTE is the starting byte in the overall object that the chunk you're uploading contains.
CHUNK_LAST_BYTE is the ending byte in the overall object that the chunk you're uploading contains.
TOTAL_OBJECT_SIZE is the total size of the object you are uploading.
SESSION_URI is the value returned in the Location header when you initiated the resumable upload.
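On the client side, one hypothetical way to make that possible is to track a running byte offset and send it with every chunk so the server (or a direct resumable-upload call) knows where each chunk belongs. The extra form fields here are illustrative, not part of the existing API:
let byteOffset = 0; // first byte of the next chunk within the overall recording

const onChunk = (binaryData) => {
  const formData = new FormData();
  formData.append("data", binaryData);
  formData.append("uuid", videoUUID);            // identify the upload session
  formData.append("offset", String(byteOffset)); // CHUNK_FIRST_BYTE for this chunk
  formData.append("size", String(binaryData.size));
  byteOffset += binaryData.size;

  new CustomerVideoApi().uploadRecording(videoUUID, formData, (res) => {}, (err) => {});
};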
Try to eliminate as many variables as possible and pinpoint where exactly the file is getting corrupted.
Since you are using a React (JS) -> Laravel (PHP) -> Google Cloud path, the first thing I would suggest is to test each step separately:
React -> Laravel: save the file on your server and check whether it's corrupted at this point.
Laravel -> Google Cloud: load a file from the server filesystem, upload it to the cloud, and see if it gets corrupted.
I don't have experience with Google Cloud, but I did something very similar with AWS and found that their video uploading service was extremely picky about the requests (including the order of the headers that were sent).
Try to compare the specs of the service you are using with your input, make the smallest possible thing that works, and add variables until you get to the final state.
Also, I don't see any kind of data ordering in your code.
If your chunks are produced close together, which is highly likely with streaming, there is a chance they will arrive in a different order than they were sent. If you just append them to a file without any control over the ordering, the file will indeed get corrupted. I'm not sure whether, for webm, that would just break parts of the video or kill the entire thing.

Browser Cache Storage issue

I am storing 40,000-plus images in Cache Storage using cache.put(). I can see all the images successfully stored in Cache Storage. But when I use my React website offline, some images are displayed and some are not; the browser seems to decide by itself whether to show an image. I am unable to find the reason. Can anyone help me?
I found a solution. We just need a fetch event listener in the service worker: if there is a GET request, it will check the cache first and return the response from there.
self.addEventListener('fetch', event => {
  // Let the browser do its default thing
  // for non-GET requests.
  if (event.request.method !== 'GET') return;

  // Prevent the default, and handle the request ourselves.
  event.respondWith(async function() {
    // Try to get the response from a cache.
    const cache = await caches.open('images');
    const cachedResponse = await cache.match(event.request);

    if (cachedResponse) {
      // If we found a match in the cache, return it, but also
      // update the entry in the cache in the background.
      event.waitUntil(cache.add(event.request));
      return cachedResponse;
    }

    // If we didn't find a match in the cache, use the network.
    return fetch(event.request);
  }());
});
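For completeness, the worker above only runs once it is registered. A hypothetical registration snippet for the React entry point, assuming the worker file is served as /sw.js from the site root:
if ("serviceWorker" in navigator) {
  window.addEventListener("load", () => {
    // Register the service worker that handles the fetch events shown above.
    navigator.serviceWorker.register("/sw.js").catch(console.error);
  });
}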

I'm concerned about memory leak from RecordRTC url object

Using the RecordRTC library, I'm adding webcam video recording, replay, and saving functionality to my React web application. Coming from native application development, I'm always concerned about potential memory leaks, which can often be diagnosed by checking system memory or noticing a lagging UI. In web applications, what diagnostics can you perform to see whether a JS object is being created and deleted properly, without leaks?
My concern appeared when I began integrating the replay functionality shown below. The requestUserMedia method instantiates the webcam stream when the React component mounts; the src state gets assigned the URL of the video stream. Afterwards, any time the stop button is clicked, a new URL, representing a webm file of the recorded video, is created and assigned to the same src state. The streaming and replay functionality works as planned, but I'm concerned that repeatedly recording and replaying video, essentially creating a new URL wrapping a webm file each time, will result in a memory leak unless the browser is refreshed.
Are there any checks at the browser level I could conduct to diagnose this? Or is this something I shouldn't be concerned about at all in the web application world?
requestUserMedia() {
  captureUserMedia((stream) => {
    this.setState({ src: window.URL.createObjectURL(stream) });
  });
}

handleRecord() {
  if (!this.state.record) {
    captureUserMedia((stream) => {
      var recorder = RecordRTC(stream, {
        type: 'video'
      });
      recorder.startRecording();
      this.state.recordVideo = recorder;
    });
  } else {
    var recorder = this.state.recordVideo;
    recorder.stopRecording(() => {
      var blob = recorder.getBlob();
      var url = window.URL.createObjectURL(blob);
      this.setState({ src: url });
    });
  }
  let newRecordState = !this.state.record;
  this.setState({
    record: newRecordState
  });
}
Setting the video's src to a string created with URL.createObjectURL(stream) has been deprecated for exactly that reason. Set video.srcObject = stream instead.
For the second createObjectURL (the recorded blob), use URL.revokeObjectURL to revoke the previous URL before creating a new one.
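A minimal sketch of both suggestions, adapted from the handlers above (videoRef is a hypothetical ref to the <video> element; it is not part of the original component):
requestUserMedia() {
  captureUserMedia((stream) => {
    // Attach the live stream directly; no object URL is created for it.
    this.videoRef.current.srcObject = stream;
  });
}

onStopped(blob) {
  // Release the previous recording's URL before creating a new one.
  if (this.state.src) {
    window.URL.revokeObjectURL(this.state.src);
  }
  this.setState({ src: window.URL.createObjectURL(blob) });
}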

Reactjs data changes, filename stays the same

I have a picture, profile.jpg, on my server. When I upload a new picture, it replaces profile.jpg in data but not in name (in other words, the old profile.jpg gets replaced by the new picture, but the new one is also called profile.jpg). After my promise is returned, I call forceUpdate, but this doesn't do anything unless I change the actual URL (src) of the image. See my code, in which I attempted to chain promises so that React would recognize that the URL is changing (from the correct URL, to "empty", to the correct URL again):
fetch('http://localhost:3000/change_pet_pic/?petID=' + this.props.pet.id + '&userID=' + this.props.pet.ownerID, { method: 'POST', body: form })
  .then(function(res) {
    return res.json();
  }).then(function(json) {
    var pet = $this.props.pet;
    pet.petPicture = "empty";
    $this.props.pet = pet;
    $this.forceUpdate();
    return json.picture_url;
  }).then(function(url) {
    var pet = $this.props.pet;
    pet.petPicture = url;
    $this.props.pet = pet;
    $this.forceUpdate();
  })
Thanks for your tips!
It seems there is nothing wrong with your ReactJS code; instead it's the browser cache causing the issue by returning the old image, because the image URL stays the same.
To get rid of this, you can request the image with a different query string whenever the image changes.
So the first time you can access it as profile.jpg?v=1, the second time as profile.jpg?v=2, and so on.
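A minimal sketch of that idea, plugged into the question's second .then (using a timestamp instead of a counter is just one option):
.then(function(url) {
  var pet = $this.props.pet;
  // Append a cache-busting query parameter so the browser fetches the new image.
  pet.petPicture = url + "?v=" + Date.now();
  $this.props.pet = pet;
  $this.forceUpdate();
})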

My flux store gets re-instantiated on reload

Okay, I'm kind of new to React and I'm having one major issue. I can't really find any solution out there.
I've built an app that renders a list of objects. The list comes from my mock API for now. The list of objects is stored inside a store. The store action to fetch the objects is done by the components.
My issue is with showing these objects. When a user clicks show, it renders a page with details of the object. Store-wise this means firing a getSpecific function that retrieves the object from the store based on an ID.
This is all fine; the store still has the objects. Until I reload the page. That is when the store gets wiped and a new instance is created (this is my guess). The store is now empty, and getting that specific object is now impossible (with my current implementation).
So, I read somewhere that this is by design. Is the solution to:
Save the store in local storage, to keep the data?
Make the API call again and get all the objects once again?
And in case 2, when/where is this supposed to happen?
How should a store make sure it always has the expected data?
Any hints?
Some of the implementation:
//List.js
componentDidMount() {
  //The fetch offers function will trigger a change event
  //which will trigger the listener in componentWillMount
  OfferActions.fetchOffers();
}

componentWillMount() {
  //Listen for changes in the store
  offerStore.addChangeListener(this.retrieveOffers);
}

retrieveOffers() {
  this.setState({
    offers: offerStore.getAll()
  });
}
.
//OfferActions.js
fetchOffers() {
  let url = 'http://localhost:3001/offers';
  axios.get(url).then(function (data) {
    dispatch({
      actionType: OfferConstants.RECIVE_OFFERS,
      payload: data.data
    });
  });
}
.
//OfferStore.js
var _offers = [];

receiveOffers(payload) {
  _offers = payload || [];
  this.emitChange();
}

handleActions(action) {
  switch (action.actionType) {
    case OfferConstants.RECIVE_OFFERS:
      {
        this.receiveOffers(action.payload);
      }
  }
}

getAll() {
  return _offers;
}

getOffer(requested_id) {
  var result = this.getAll().filter(function (offer) {
    return offer.id == requested_id;
  });
  return result;
}
.
//Show.js
componentWillMount() {
  this.state = {
    offer: offerStore.getOffer(this.props.params.id)
  };
}
That is correct: Flux (and Redux) stores, like any other JavaScript objects, do not survive a refresh. During a refresh you are resetting the memory of the browser window.
Both of your approaches would work, however I would suggest the following:
Save to local storage only information that is semi persistent such as authentication token, user first name/last name, ui settings, etc.
During app start (or component load), load any auxiliary information such as sales figures, message feeds, and offers. This information generally changes quickly and it makes little sense to cache it in local storage.
For 1. you can utilize the redux-persist middleware. It lets you save to and retrieve from your browser's local storage during app start. (This is just one of many ways to accomplish this; a minimal setup sketch follows after point 2.)
For 2. your approach makes sense. Load the required data in componentWillMount asynchronously.
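A minimal sketch of that redux-persist setup, assuming a Redux store and the redux-persist v5+ API (the root reducer import and the whitelist contents are illustrative):
import { createStore } from "redux";
import { persistStore, persistReducer } from "redux-persist";
import storage from "redux-persist/lib/storage"; // defaults to localStorage

import rootReducer from "./reducers"; // hypothetical root reducer

// Persist only the semi-persistent slices (auth, UI settings), not fast-changing data.
const persistConfig = { key: "root", storage, whitelist: ["auth", "settings"] };

const store = createStore(persistReducer(persistConfig, rootReducer));
const persistor = persistStore(store); // rehydrates the store on app start

export { store, persistor };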
Furthermore, regarding staying "up to date" with data: this entirely depends on your application's needs. A few ideas to help you get started exploring your problem domain:
With each request to get offers, also send or save a timestamp. Have the application decide when a timestamp is "too old" and request again (see the sketch after this list).
Implement real-time communication, for example socket.io, which pushes data to the client instead of the client requesting it.
Request the data at an interval suitable to your application. You could pass along the last time you requested the information, and the server could decide whether there is new data available or return an empty response, in which case you display the existing data.
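A small sketch of the timestamp idea from the first bullet, reusing OfferActions.fetchOffers from the question (the five-minute threshold is an arbitrary example):
const MAX_AGE_MS = 5 * 60 * 1000; // consider offers stale after five minutes
let lastFetchedAt = 0;

function fetchOffersIfStale() {
  if (Date.now() - lastFetchedAt > MAX_AGE_MS) {
    lastFetchedAt = Date.now();
    OfferActions.fetchOffers(); // action from the question; refreshes the store
  }
}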
