I am making a PWA where users can answer forms. I want it to also work offline, so that when a user fills out a form without an internet connection, the reply is uploaded once they are back online. For this, I want to catch the requests and send them when the connection returns. I wanted to base it on the following tutorial:
https://serviceworke.rs/request-deferrer_service-worker_doc.html
I have managed to implement the localStorage part and the service worker, but it seems the POST requests are not caught correctly.
Here is the core function:
function tryOrFallback(fakeResponse) {
  // Return a handler that...
  return function(req, res) {
    // If offline, enqueue and answer with the fake response.
    if (!navigator.onLine) {
      console.log('No network availability, enqueuing');
      return;
      // return enqueue(req).then(function() {
      //   // As the fake response will be reused but Response objects
      //   // are one use only, we need to clone it each time we use it.
      //   return fakeResponse.clone();
      // });
    }
    console.log("LET'S FLUSH");
    // If online, flush the queue and answer from network.
    console.log('Network available! Flushing queue.');
    return flushQueue().then(function() {
      return fetch(req);
    });
  };
}
I use it with:
worker.post("mypath/add", tryOrFallback(new Response(null, {
status: 212,
body: JSON.stringify({
message: "HELLO"
}),
})));
The path is correct, and the handler fires when the POST actually happens. However, I can't access the actual request (the "req" shown inside tryOrFallback is basically empty), and the response, when inspected, has the custom status but does not contain the message (the body is empty). So I can detect when the POST is happening, but I can't get the actual message.
How can I fix this?
Thank you in advance,
Grzegorz
Regarding your sample code, the way you're constructing your new Response is incorrect; you're supplying null for the response body. If you change it to the following, you're more likely to see what you're expecting:
new Response(JSON.stringify({message: "HELLO"}), {
  status: 212,
});
But, for the use case you describe, I think the best solution would be to use the Background Sync API inside of your service worker. It will automatically take care of retrying your failed POST periodically.
Background Sync is currently only available in Chrome, so if you're concerned about that, or if you would prefer not to write all the code for it by hand, you could use the background sync library provided as part of the Workbox project. It will automatically fall back to explicit retries whenever the real Background Sync API isn't available.
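For illustration, here is a minimal sketch of what that could look like in your service worker with Workbox's background sync plugin; the queue name, route pattern, and retention time are assumptions rather than anything from your code:
// Sketch only: assumes a recent Workbox version with ES module imports (service-worker.js)
import { BackgroundSyncPlugin } from 'workbox-background-sync';
import { registerRoute } from 'workbox-routing';
import { NetworkOnly } from 'workbox-strategies';

// Failed POSTs are queued (in IndexedDB) and replayed when connectivity returns.
const bgSyncPlugin = new BackgroundSyncPlugin('form-replies', {
  maxRetentionTime: 24 * 60 // keep retrying for up to 24 hours (value is in minutes)
});

registerRoute(
  /\/mypath\/add/,
  new NetworkOnly({ plugins: [bgSyncPlugin] }),
  'POST'
);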
Using the RecordRTC library, I'm adding webcam video recording, replay, and saving functionality to my React web application. Coming from native application development, I'm always concerned about potential memory leaks, which can often be diagnosed by checking system memory or noticing a lagging UI. In a web application, what checks can you perform to see whether a JS object is being created and released properly, without leaks?
My concern appeared when I began integrating the replay functionality shown below. The requestUserMedia method instantiates the webcam stream when the React component mounts; the src state gets assigned the URL of the video stream. Afterwards, whenever the stop button is clicked, a new URL representing a webm file of the recorded video is created and assigned to the same src state. Streaming and replaying work as planned, but I'm concerned that repeatedly creating and replaying video, each time creating a new URL wrapping a webm file, will result in a memory leak unless the browser is refreshed.
Are there any browser-level checks I could run to diagnose this? Or is this something I shouldn't be concerned about at all in the web application world?
requestUserMedia() {
  captureUserMedia((stream) => {
    this.setState({ src: window.URL.createObjectURL(stream) });
  });
}

handleRecord() {
  if (!this.state.record) {
    captureUserMedia((stream) => {
      var recorder = RecordRTC(stream, {
        type: 'video'
      });
      recorder.startRecording();
      this.state.recordVideo = recorder;
    });
  } else {
    var recorder = this.state.recordVideo;
    recorder.stopRecording(() => {
      var blob = recorder.getBlob();
      var url = window.URL.createObjectURL(blob);
      this.setState({ src: url });
    });
  }
  let newRecordState = !this.state.record;
  this.setState({
    record: newRecordState
  });
}
Setting the video's src to a string created with URL.createObjectURL has been deprecated for exactly that reason. Set video.srcObject = stream instead.
For the second createObjectURL call (the recorded blob), use URL.revokeObjectURL to revoke the previous URL before assigning a new one.
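A minimal sketch of both changes, assuming a ref to the preview video element on the component (the ref name and surrounding component shape are illustrative, not taken from the question's code):
requestUserMedia() {
  captureUserMedia((stream) => {
    // Attach the live stream directly; no object URL is created for the preview.
    this.videoEl.srcObject = stream;
  });
}

// In the stop handler, release the previous recording's URL before creating a new one.
recorder.stopRecording(() => {
  if (this.state.src) {
    window.URL.revokeObjectURL(this.state.src);
  }
  this.setState({ src: window.URL.createObjectURL(recorder.getBlob()) });
});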
I am a newbie to audio/video recording. The script works well for my webcam and audio recorder. However, I would like to know how to implement it like the available Extension, so that I can record a tab and all the audio involved. Thanks in advance. I am currently using version 5.4.0.
Here is an open-source Chrome extension that supports recording a tab, the screen, or any open app's window:
https://github.com/muaz-khan/Chrome-Extensions/tree/master/screen-recording
You can use the tabCapture API to capture a MediaStream object; you can then record the resulting stream using the MediaRecorder API or RecordRTC.
RecordRTC can record the following kinds of streams:
a stream captured from a webcam or microphone
a stream captured using the tabCapture API
a stream captured using the desktopCapture API
a stream captured from <canvas> or <video> elements using the captureStream API
streams generated by the WebAudio API
e.g.
var capturedStream = videoElement.captureStream();
var recorder = RecordRTC(capturedStream, {
  type: 'video'
});
Or:
var recorder = RecordRTC(tabCaptureStream, {
  type: 'video'
});
Simply make sure that you're getting a MediaStream object from one of the APIs above; then you can use RecordRTC to record that stream.
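For example, a minimal sketch of capturing the current tab from an extension's background page (it assumes the extension already declares the "tabCapture" permission; the options are illustrative):
chrome.tabCapture.capture({ audio: true, video: true }, function(tabCaptureStream) {
  // tabCaptureStream is a MediaStream of the currently active tab
  var recorder = RecordRTC(tabCaptureStream, {
    type: 'video'
  });
  recorder.startRecording();
});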
Regarding "replacing video track with secondary camera track or screen track", you can use addTrack, removeTrack as well as replaceTrack methods. However I'm not sure if MediaRecorder API can record replaced track:
// using Firefox
theStreamYouAreRecording.replaceTrack(screenTrack);

// using Chrome or Firefox
theStreamYouAreRecording.addTrack(screenTrack);
So you must either record camera or screen. Do not replace tracks.
I am developing a web page where a user selects two audio files, and the app joins them into a single output audio file. I am using Node.js on the back end and AngularJS on the client side. How can I achieve this? I went through many libraries, but nothing suits the requirement.
I'm looking at a similar use case at the moment. The libraries aren't great, as most need a large program to be installed in your server environment. Examples are:
sox-audio: should be fine as long as you don't need iterative (variable number of files) concatenation, but it requires SoX to be installed.
audio-concat: a wrapper for ffmpeg, but it also needs ffmpeg to be installed.
Or, if you don't need the output audio to be seekable, you could simply do it with streams. Concept:
var fs = require('fs');

var writeStream = fs.createWriteStream('outputAudio.mp3'); // or whatever you want to call it

// inputFiles should be an array of the paths to the audio files you want to stitch
function recursiveStreamWriter(inputFiles) {
  if (inputFiles.length === 0) {
    console.log('Done!');
    return;
  }
  let nextFile = inputFiles.shift();
  var readStream = fs.createReadStream(nextFile);
  // end: false keeps the write stream open so the next file can be appended
  readStream.pipe(writeStream, { end: false });
  readStream.on('end', () => {
    console.log('Finished streaming an audio file');
    recursiveStreamWriter(inputFiles);
  });
}
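A hypothetical call, with made-up file names, would then be:
recursiveStreamWriter(['intro.mp3', 'verse.mp3', 'outro.mp3']);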
This works and the transitions are fine, but audio players struggle to seek within the resulting audio. A decent full example of the recursive stream method can be found here.
As an example, I am combining an iPhone ringtone with a Nokia ringtone by piping the read streams into a write stream.
// Create two read streams
let readerstream1 = fs.createReadStream('iphone-6.mp3');
let readerstream2 = fs.createReadStream('Nokia.mp3');

// Create a write stream
let writestream = fs.createWriteStream('NewRingtone.mp3');

// Pipe one after the other; keep the write stream open until the second file has finished
readerstream1.pipe(writestream, { end: false });
readerstream1.on('end', () => {
  readerstream2.pipe(writestream);
});
I'm developing a web application using GeoExt and OpenLayers, with my own GeoServer serving various maps. I also want to let the user add other WMS sources if needed, to be able to play around with all desired layers.
Hence my problem with the GetFeatureInfo request. Right now I have a toolbar button attached to GeoExt's map panel,
new GeoExt.Action({
  iconCls: "feature",
  map: map,
  toggleGroup: "tools",
  tooltip: "Feature",
  control: featureControl
})
its control attribute being
var featureControl = new OpenLayers.Control.WMSGetFeatureInfo({
  queryVisible: true,
  drillDown: true,
  infoFormat: "application/vnd.ogc.gml"
});
I've also defined an event listener to do what I really want once I receive the responses, but that is not relevant here. My problem is the following:
If the user clicks on a point where there are two or more visible layers and at least one of them is from a different source, OpenLayers will have to make one AJAX request per source and, from OpenLayers' own documentation,
Triggered when a GetFeatureInfo response is received. The event object has a text property with the body of the response (String), a features property with an array of the parsed features, an xy property with the position of the mouse click or hover event that triggered the request, and a request property with the request itself. If drillDown is set to true and multiple requests were issued to collect feature info from all layers, text and request will only contain the response body and request object of the last request.
so, obviously, it won't work like that right away. Looking at the debugger I can clearly see that, given two layers from different sources, it actually DOES make both requests; it just doesn't wait for the first response before starting the next one (obviously, being asynchronous). I've thought about doing the requests one by one, meaning doing the first one as stated above and, once it's finished and the response saved, going on to the next one. But I'm still getting used to the data structures GeoExt uses.
Is there any API (be it GeoExt or OpenLayers) option/method I'm missing? Any nice workarounds?
Thanks for reading :-)
PS: I'm sorry if I've not been clear enough; English is not my mother tongue. Let me know if something stated above was unclear :)
I hope this helps someone else. I realized that you're right: this control makes the requests asynchronously, but that is fine; the real problem is in how the control handles the requests and triggers the "getfeatureinfo" event. So I overrode two methods of this control and it works. To do this I declare the control first, and then crudely override the methods. Here is the code:
getInfo = new OpenLayers.Control.WMSGetFeatureInfo({ drillDown: true, queryVisible: true, maxFeatures: 100 });

// Then I declare a property that helps me handle more than one request.
getInfo.responses = [];

getInfo.handleResponse = function(xy, request) {
  var doc = request.responseXML;
  if (!doc || !doc.documentElement) {
    doc = request.responseText;
  }
  var features = this.format.read(doc);
  if (this.drillDown === false) {
    this.triggerGetFeatureInfo(request, xy, features);
  } else {
    this._requestCount++;
    this._features = (this._features || []).concat(features);
    if (this._numRequests > 1) {
      // If there is more than one request (i.e. more than one source), keep each
      // request in the array, in case other properties or methods are needed later.
      this.responses.push(request);
    } else {
      this.responses = request;
    }
    if (this._requestCount === this._numRequests) {
      // Here I changed the original line:
      // this.triggerGetFeatureInfo(request, xy, this._features.concat());
      this.triggerGetFeatureInfo(this.responses, xy, this._features.concat());
      delete this._features;
      delete this._requestCount;
      delete this._numRequests;
      // Reset the responses array once all the info has been processed.
      this.responses = [];
    }
  }
};
getInfo.triggerGetFeatureInfo = function(request, xy, features) {
  // Finally, concatenate the responseText of every request.
  if (isArray(request)) {
    text_rq = '';
    for (i in request) {
      text_rq += request[i].responseText;
    }
  } else {
    text_rq = request.responseText;
  }
  this.events.triggerEvent("getfeatureinfo", {
    // text: request.responseText,
    text: text_rq,
    features: features,
    request: request,
    xy: xy
  });
  // Reset the cursor.
  OpenLayers.Element.removeClass(this.map.viewPortDiv, "olCursorWait");
};
Thanks, you pointed me toward the cause of my problem, and this is how I solved it. I hope it can help somebody else.
saheka's answer was almost perfect! Congratulations and thank you, I had the same problem, and with it I finally managed to solve it.
What I would change in your code:
isArray() does not work; change the first line of getInfo.triggerGetFeatureInfo() to: if (request instanceof Array) {...}
To show the results in a popup, this is the way:
My code:
getInfo.addPopup = function(map, text, xy) {
  if (map.popups.length > 0) {
    map.removePopup(map.popups[0]);
  }
  var popup = new OpenLayers.Popup.FramedCloud(
    "anything",
    map.getLonLatFromPixel(xy),
    null,
    text,
    null,
    true
  );
  map.addPopup(popup);
}
and in the getInfo.triggerGetFeatureInfo() function, after the last line, append:
this.addPopup(map, text_rq, xy);
A GetFeatureInfo request is sent as a JavaScript Ajax call to the external server, so the requests are likely blocked for security (same-origin) reasons. You'll have to send the requests to the external servers through a proxy on your own domain.
Then configure this proxy in OpenLayers by setting OpenLayers.ProxyHost to the proper path. For example:
OpenLayers.ProxyHost = "/proxy_script";
See http://trac.osgeo.org/openlayers/wiki/FrequentlyAskedQuestions#ProxyHost for more background information.
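For illustration only, if your own back end happens to be Node, one possible shape for such a proxy is sketched below. It assumes you configure OpenLayers.ProxyHost = "/proxy_script?url=" so that the target WMS URL arrives as an encoded url query parameter; the route name and port are made up:
// Illustrative sketch: a tiny Express proxy that forwards GetFeatureInfo requests.
const express = require('express');
const http = require('http');
const https = require('https');
const app = express();

app.get('/proxy_script', (req, res) => {
  const target = req.query.url; // OpenLayers appends the encoded WMS URL here
  const client = target.startsWith('https') ? https : http;
  client.get(target, (upstream) => {
    res.set('Content-Type', upstream.headers['content-type'] || 'text/plain');
    upstream.pipe(res); // stream the WMS response straight back to the browser
  }).on('error', () => res.sendStatus(502));
});

app.listen(3000);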