Invalid value for transfer with ipcRenderer.postMessage in Electron - reactjs

I'm getting the error Invalid value for transfer when trying to use, for the very first time, the message-ports reply-streams pattern.
In preload.js I defined this api:
contextBridge.exposeInMainWorld(
  "api", {
    electronIpcPostMessage: (channel: string, message: any, transfer?: MessagePort[]) => {
      ipcRenderer.postMessage(channel, message, transfer)
    },
  }
)

declare global {
  interface Window {
    api: {
      electronIpcPostMessage: (channel: string, message: any, transfer?: MessagePort[]) => void;
    }
  }
}
Following the example found here: https://www.electronjs.org/docs/latest/tutorial/message-ports#reply-streams , in the renderer React component I defined the streaming request as follows:
const Layers = () => {
  const componentIsMounted = React.useRef(true)
  React.useEffect(() => {
    const cb = (event, args) => {
      try {
        if (componentIsMounted.current) {
          console.log("react-leaflet-layers-args: ", args)
        }
      } catch (err) {
        console.log("err: ", err)
      }
    }
    const makeStreamingRequest = (element, cb) => {
      // MessageChannels are lightweight--it's cheap to create a new one for each request.
      const { port1, port2 } = new MessageChannel()
      // We send one end of the port to the main process ...
      window.api.electronIpcPostMessage(
        'give-me-a-stream',
        { element, count: 10 },
        [port2]
      )
      // ... and we hang on to the other end.
      // The main process will send messages to its end of the port,
      // and close it when it's finished.
      port1.onmessage = (event) => {
        cb(event.data)
      }
      port1.onclose = () => {
        console.log('stream ended')
      }
    }
    makeStreamingRequest(42, (data) => {
      console.log('got response data:', data)
    })
    // We will see "got response data: 42" 10 times.
    return () => { // clean-up function
      componentIsMounted.current = false
      window.api.electronIpcRemoveListener(
        "give-me-a-stream",
        cb,
      )
    }
  }, [])
As said, when running the Electron-React app, the error I get when accessing the page rendered by that component is: Invalid value for transfer.
From this StackOverflow question: Invalid value for transfer while using ipcRenderer.postMessage of electron, it seems that I'm not the only one stumbling on this type of error, but I haven't found any solution yet.
What am I doing wrong or missing? How can I solve the problem?
My objective is to send, ideally in a streaming fashion, a very big GeoJSON file from the main process to the renderer process. That's why I thought of trying ipcRenderer.postMessage.
By the way, any other working solution that accomplishes this goal is welcome.
Other info:
electron: v. 16
node: v. 16.13.0
O.S: Ubuntu 20.04
Looking forward to hints and help

I also encountered the same problem. In https://www.electronjs.org/docs/latest/api/context-bridge, it is mentioned that the types of the parameters, errors and return values of functions bound with contextBridge are restricted, and MessagePort is one of the types that cannot be transported, so the bridge doesn't recognize the MessagePort you passed in and throws this error.
If you want to use MessageChannel for communication, you can provide some proxy functions through contextBridge in preload.js, call these functions in renderer.js, and pass in copyable parameters.
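For example (a sketch, not from the original answer, reusing the 'give-me-a-stream' channel name for illustration): the renderer can hand the MessagePort to the preload script with plain window.postMessage, whose transfer list does support ports, and the preload script, which has full access to ipcRenderer, forwards the port to the main process:

```javascript
// preload.js -- sketch: the contextBridge cannot transfer a MessagePort,
// but window.postMessage between renderer and preload can. The preload
// script listens for a tagged message and forwards the transferred port(s)
// to the main process with ipcRenderer.postMessage.
const { ipcRenderer } = require('electron')

window.addEventListener('message', (event) => {
  if (event.data && event.data.type === 'give-me-a-stream') {
    // event.ports holds the MessagePorts transferred by the renderer
    ipcRenderer.postMessage('give-me-a-stream', event.data.message, [...event.ports])
  }
})
```

In the renderer, the component would then replace the bridged call with something like window.postMessage({ type: 'give-me-a-stream', message: { element: 42, count: 10 } }, '*', [port2]), keeping port1 for the replies.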
Hope my answer helps you.

Related

UseEffect/useCallback is not triggering on very fast changes

I'm a backend dev with limited knowledge of React, but I still have to fix this problem.
My project uses WebRTC for video calls. For signaling I'm using SignalR on my .NET backend.
On the frontend I have 2 classes:
signalRContext.tsx, which holds an instance of HubConnection and the listeners; onmessage is the relevant one.
const [currentSignal, setCurrentSignal] = useState<TCurrentSignal>(InitialSignalR.currentSignal);
const initializeSignalListeners = (connection: HubConnection): void => {
  console.log('START SIGNAL_R', connection);
  connection.on('master', function (RoundInfo: IRoundInfo) {
    console.log('MASTER', RoundInfo);
    setCurrentSignal({ type: 'master', payload: RoundInfo });
  });
  connection.on('slave', function (RoundInfo: IRoundInfo) {
    console.log('SLAVE', RoundInfo);
    setCurrentSignal({ type: 'slave', payload: RoundInfo });
  });
  connection.on('message', function (message: TSignalRMessage) {
    console.log('MESSAGE', message);
    setCurrentSignal({ type: 'message', payload: message });
  });
  connection.on('endround', (payload) => {
    console.log('END_ROUND');
    setCurrentSignal({ type: 'endround', payload });
  });
};
useRTCPeerConnection.ts, which has all the WebRTC-related logic:
import { useSignalRContext } from '../../../../core/contexts';
const {
signalrRRef,
currentSignal: { type, payload },
} = useSignalRContext();
useCallback(() => { // tried useEffect as well
  if (type === 'message') {
    console.log('PAYLOAD', payload);
    onMessage(payload as TSignalRMessage);
    return;
  }
}, [type, payload]);
My problem starts when WebRTC starts exchanging the ICE candidates and sends them sometimes twice per millisecond (see the last column).
The connection.on('message'... listener seems to be fast enough, I'm seeing all console.log('MESSAGE'... outputs in the console.
My problem is that the useCallback/useEffect logic is not firing on every payload change, like for 20 MESSAGE outputs I'm seeing 4-7 PAYLOAD outputs.
My assumption is that useEffect is simply not designed for such quick changes.
Is there any other concept better suited to this problem, or any improvement I could make here? Thinking in .NET terms, I would just use composition and call the relevant method of the peer-connection class from the event handler in the SignalR class, but I'm not sure how to do that here.
P.S. I've tried to wait until ICE candidates are gathered and sending them at once but the performance becomes not acceptable.
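A pattern that avoids dropping signals (a sketch, not from the original post): since setCurrentSignal stores only the latest value, any message arriving between two renders overwrites the previous one before the effect ever sees it. Appending every signal to a queue with a functional update, and draining the queue in the effect, loses nothing. The queue below is framework-free for clarity; the React equivalents are noted in the comments.

```javascript
// Sketch: an append-only signal queue. No message is lost, because push()
// never replaces state -- it extends it -- and drain() processes a whole
// batch at once.
function makeSignalQueue() {
  let queue = [];
  return {
    push(signal) {
      queue = [...queue, signal]; // React: setQueue(prev => [...prev, signal])
    },
    drain() {
      const batch = queue; // React: process the queue in a useEffect,
      queue = [];          // then clear it with setQueue([])
      return batch;
    },
  };
}
```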

React websockets disappear on refresh

So I have a websocket connection that is open, and I notice that once I refresh the page the websocket messages disappear. I've read online that this is supposed to happen. What is a good way to persist these messages so that they do not disappear? Right now I keep the websocket messages in React state. I've seen some suggest localStorage or cookies, but I don't think that is scalable, as there can be thousands of messages in minutes, which could overload the browser storage. Below I am using the react-use-websocket package; I take the last message and store it inside a state array. That is the wrong approach: I need a longer-term storage solution.
const { lastJsonMessage, lastMessage, sendMessage, readyState, getWebSocket } = useWebSocket(resultUrl, {
//Will attempt to reconnect on all close events, such as server shutting down
shouldReconnect: () => true,
reconnectAttempts: 10,
reconnectInterval: 3000
});
useEffect(() => {
const val = lastJsonMessage ? JSON.parse(lastJsonMessage as unknown as string) : {};
if (val !== null && Object.keys(val).length > 0) {
setMessageHistory((prev) => prev.concat(val));
}
}, [lastJsonMessage, setMessageHistory]);
// Get Video assets after finishing;
useEffect(() => {
messageHistory.forEach((msg) => {
const { type } = msg;
if (type === 'video.live_stream.recording' || type === 'video.live_stream.active') {
const localPlaybackId = msg.data?.playback_ids[0];
setPlaybackId(localPlaybackId);
}
setVideoType(type);
});
}, [messageHistory, videoType]);
const connectionStatus = {
[ReadyState.CONNECTING]: 'Connecting',
[ReadyState.OPEN]: 'Open',
[ReadyState.CLOSING]: 'Closing',
[ReadyState.CLOSED]: 'Closed',
[ReadyState.UNINSTANTIATED]: 'Uninstantiated'
}[readyState];
if (liveStreamId) {
sendMessage(liveStreamId);
}
messageHistory.filter((msg) => {
msg.data.id === liveStreamId;
});
In this case, when the app loads you should usually do a GET request to the server to load the most recent messages. Also, since you are talking about "thousands of messages", you should implement lazy loading, i.e. a paginator or infinite scroll.
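To complement this: whatever you persist server-side, the in-memory history can stay bounded so the browser never holds more than a window of recent messages (a sketch, not from the original answer; the limit of 1000 is illustrative):

```javascript
// Keep at most `max` messages in memory; older ones are dropped locally
// and re-fetched from the server (via the paginated GET) when needed.
function appendBounded(history, msg, max = 1000) {
  const next = history.concat(msg);
  return next.length > max ? next.slice(next.length - max) : next;
}

// In the component: setMessageHistory((prev) => appendBounded(prev, val));
```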

How to monitor number of API calls using Cypress?

I have a use case where I need to ensure that after a filter is changed on a page, there are only 2 API calls - no more, no less.
How can I achieve this using cypress ?
Gleb Bahmutov has a blog post about using intercepts, and one of his concepts is counting the number of times a request is matched. You could use that along with a very broad matcher, something like...
const getAliasCount = (alias) => {
  // implementation details, use at your own risk
  const testRoutes = cy.state('routes')
  const aliasRoute = Cypress._.find(testRoutes, { alias })
  if (!aliasRoute) {
    return
  }
  return Cypress._.keys(aliasRoute.requests || {}).length
}
it('confirms the number of times an intercept was called', () => {
  cy.intercept({ url: /.*/ }).as('myIntercept');
  // test code
  ...
  cy.wait('@myIntercept').then(() => {
    const interceptCount = getAliasCount('myIntercept');
    cy.wrap(interceptCount).should('deep.equal', 2);
  });
});
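An alternative that avoids reaching into Cypress internals (a sketch, not from the blog post; the '/api/**' matcher and the exact assertions are illustrative) is to count matching requests yourself in the intercept's route handler:

```javascript
// Count every request the intercept matches, then assert the exact total
// after the action under test. req.continue() lets each request through
// unchanged.
let apiCallCount = 0;

beforeEach(() => {
  apiCallCount = 0;
  cy.intercept('/api/**', (req) => {
    apiCallCount += 1;
    req.continue();
  }).as('apiCalls');
});

it('makes exactly two API calls after changing the filter', () => {
  // ... change the filter on the page ...
  cy.wait('@apiCalls'); // at least one call has completed
  cy.then(() => expect(apiCallCount).to.eq(2)); // and exactly two in total
});
```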

Multiple video call (n users) using Peerjs in React Native

I have an application in which I am trying to get video chatting to work in React Native.
Used packages like react-native-webrtc and react-native-peerjs.
Created peer js server using Node Js.
One-to-one video calling is working fine with React Native PeerJS. But now I want more than 2 users to be connected, up to n users.
Is it possible to convert a one-to-one video call into a multi-user video call? Kindly let me know how a multi-user video call can be achieved using PeerJS and WebRTC.
Here is my code for one to one video call:
Initialize webrtc and PeerJS:
const initialize = async () => {
const isFrontCamera = true;
const devices = await mediaDevices.enumerateDevices();
const facing = isFrontCamera ? 'front' : 'environment';
const videoSourceId = devices.find(
(device: any) => device.kind === 'videoinput' && device.facing === facing,
);
const facingMode = isFrontCamera ? 'user' : 'environment';
const constraints: MediaStreamConstraints = {
audio: true,
video: {
mandatory: {
minWidth: 1280,
minHeight: 720,
minFrameRate: 30,
},
facingMode,
optional: videoSourceId ? [{ sourceId: videoSourceId }] : [],
},
};
const newStream = await mediaDevices.getUserMedia(constraints);
setLocalStream(newStream as MediaStream);
console.log("************ Started ************");
// const io = socketio(SERVER_URL);
// io.connect();
console.log(SERVER_URL);
const io = socketio.connect(SERVER_URL, {
reconnection: true,
autoConnect: true,
reconnectionDelay: 500,
jsonp: false,
reconnectionAttempts: Infinity,
// transports: ['websocket']
});
io.on('connect', () => {
console.log("----------- Socket Connected -----------");
setSocket(io);
io.emit('register', username);
});
io.on('users-change', (users: User[]) => {
console.log("----------- New User - " + JSON.stringify(users) + " -----------");
setUsers(users);
});
io.on('accepted-call', (user: User) => {
setRemoteUser(user);
});
io.on('rejected-call', (user: User) => {
setRemoteUser(null);
setActiveCall(null);
Alert.alert('Your call request rejected by ' + user?.username);
navigate('Users');
});
io.on('not-available', (username: string) => {
setRemoteUser(null);
setActiveCall(null);
Alert.alert(username + ' is not available right now');
navigate('Users');
});
const peerServer = new Peer(undefined, {
host: PEER_SERVER_HOST,
path: PEER_SERVER_PATH,
secure: false,
port: PEER_SERVER_PORT,
config: {
iceServers: [
{
urls: [
'stun:stun1.l.google.com:19302',
'stun:stun2.l.google.com:19302',
],
},
],
},
});
peerServer.on('error', (err: Error) =>
console.log('Peer server error', err),
);
peerServer.on('open', (peerId: string) => {
setPeerServer(peerServer);
setPeerId(peerId);
io.emit('set-peer-id', peerId);
});
io.on('call', (user: User) => {
peerServer.on('call', (call: any) => {
//Alert.alert("PeerServer Call");
setRemoteUser(user);
Alert.alert(
'New Call',
'You have a new call from ' + user?.username,
[
{
text: 'Reject',
onPress: () => {
io.emit('reject-call', user?.username);
setRemoteUser(null);
setActiveCall(null);
},
style: 'cancel',
},
{
text: 'Accept',
onPress: () => {
io.emit('accept-call', user?.username);
call.answer(newStream);
setActiveCall(call);
navigate('Call');
},
},
],
{ cancelable: false },
);
call.on('stream', (stream: MediaStream) => {
setRemoteStream(stream);
});
call.on('close', () => {
closeCall();
});
call.on('error', () => { });
});
});
};
When a user calls another user:
const call = (user: User) => {
if (!peerServer || !socket) {
Alert.alert('Peer server or socket connection not found');
return;
}
if (!user.peerId) {
Alert.alert('User not connected to peer server');
return;
}
socket.emit('call', user.username);
setRemoteUser(user);
try {
const call = peerServer.call(user.peerId, localStream);
call.on(
'stream',
(stream: MediaStream) => {
setActiveCall(call);
setRemoteStream(stream);
},
(err: Error) => {
console.error('Failed to get call stream', err);
},
);
} catch (error) {
console.log('Calling error', error);
}
};
Now, how should I call multiple users from the code below, and how do multiple streams have to be handled?
const call = peerServer.call(user.peerId, localStream);
Is it possible to convert one to one video call to Multiple video call
It's not possible to "convert" a one-to-one video call to a "multiple" one in a peer-to-peer architecture. In a p2p architecture with n participants, each participant has a separate, one-to-one connection with each of the remaining n-1 participants.
I may possibly be misunderstanding your question, but if you're asking whether it's possible to establish n-1 connections for each participant, then the answer is yes. Here's how I would implement it:
Anytime a new participant joins a session, extract their peer information. This is the peerId provided by the peer.js library.
Next, let the rest of the participants know about the presence of this new user. For this, you'll share this new participant's name, peerID and any other metadata with the rest of the participants in the room. This can be done by the signalling logic that you have implemented using socket.io.
Now going forward, you have 2 options:
The new participant could initiate the one-to-one peer connection with others in the room, OR,
The rest of the participants could initiate a one-on-one connection with the new participant.
Personally I prefer the first. So continuing the process:
Using the same signalling logic via socket.io, the rest of the participants will let the new user know about their presence by providing their own peer information and other metadata.
Once the new participant gets everyone's peer information, it initiates a new peer connection to each of them (listening for the remote media with call.on('stream', callback)) and starts broadcasting its video.
On the recipient side, when a call is received along with the stream, you'll create a new video element in react-native, and bind the received media stream to this element. Which means, each participant will have n-1 video elements for streaming the media of n-1 other participants. The recipient also starts to broadcast their own video to the initiator of the call.
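The fan-out described above can be sketched as follows (a sketch, not from the tutorial: existingPeers, addVideoElement and removeVideoElement are placeholder names; peer.call and the stream/close events are the Peer.js API):

```javascript
// Sketch: the new participant dials each existing peer once; every call
// carries the local stream out and yields one remote stream back, which is
// bound to its own video element (n-1 elements for n participants).
existingPeers.forEach(({ peerId, username }) => {
  const call = peer.call(peerId, localStream); // one connection per peer
  call.on('stream', (remoteStream) => {
    addVideoElement(peerId, username, remoteStream);
  });
  call.on('close', () => removeVideoElement(peerId));
});
```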
Here's a tutorial showing how this can be done using vanilla JavaScript, along with the github repository with source code.
Now, to answer the next question:
Kindly let me know how Multiple video call can be achieved using Peer js and webrtc.
This depends on the number of participants, where they lie geographically, browser/device limits, device computational power, and network bandwidth. So there are multiple factors involved which makes it tricky to give any specific number.
Browsers can place their own upper limits on the maximum number of connections possible, and there might be other values for Android and iOS. On chrome, the max theoretical limit is 500. If you're developing for Android, you may want to check here. But I couldn't manage to find much info on this.
Most practical applications involving WebRTC don't rely on a mesh architecture. Common implementations involve using an SFU, which takes multiple media streams and forwards them. A slightly more sophisticated technique is an MCU architecture, which combines the media streams from multiple participants into a single one and sends that single stream to the rest of the participants.
I discuss this in some detail here:
https://egen.solutions/articles/how-to-build-your-own-clubhouse-part-2/#architectures-scaling-and-costs
Here's a nice article that explains the difference between SFU and MCU.

Perform Asynchronous Decorations in DraftJS?

I'm trying to perform real-time Named Entity Recognition highlighting in a WYSIWYG editor, which requires me to make a request to my back-end in between each keystroke.
After spending about a week on ProseMirror I gave up on it and decided to try DraftJS. I have searched the repository and docs and haven't found any asynchronous examples using Decorations. (There are some examples with Entities, but they seem like a bad fit for my problem.)
Here is the stripped down Codepen of what I'd like to solve.
It boils down to me wanting to do something like this:
const handleStrategy = (contentBlock, callback, contentState) => {
  const text = contentBlock.getText();
  let matchArr, start;
  while ((matchArr = properNouns.exec(text)) !== null) {
    start = matchArr.index;
    setTimeout(() => {
      // THROWS ERROR: Cannot read property '0' of null
      callback(start, start + matchArr[0].length);
    }, 200) // to simulate API request
  }
};
I expected it to asynchronously call the callback once the timeout resolved, but instead matchArr is null, which just confuses me.
Any help is appreciated!
OK, here's one possible solution, as a simple example (it may not be 100% solid):
Write a function that takes the editor's string, sends it to the server, and resolves with the data returned from the server. You need to figure out whether to send the whole editor string or just one word.
getServerResult = data => new Promise((resolve, reject) => {
  ...
  fetch(link, {
    method: 'POST',
    headers: {
      ...
    },
    // figure what to send here
    body: this.state.editorState.getCurrentContent().getPlainText(),
  })
    .then(res => resolve(res))
    .catch(reject);
});
Determine when to call getServerResult (i.e. when to send the string to the server and get the entity data). From what I understand from your comment, when the user hits the spacebar you send the preceding word to the server; this can be done with the Draft.js Key Bindings or a React SyntheticEvent. You will need to handle the case where the user hits the spacebar many times in a row.
function myKeyBindingFn(e: SyntheticKeyboardEvent): string {
  if (e.keyCode === 32) {
    return 'send-server';
  }
  return getDefaultKeyBinding(e);
}

async handleKeyCommand(command: string): DraftHandleValue {
  if (command === 'send-server') {
    // you need to manually add a space char to the editorState
    // and get the result from the server
    ...
    // entity data from the server
    const result = await getServerResult()
    return 'handled';
  }
  return 'not-handled';
}
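For the "spacebar hit many times continuously" case mentioned above, one option (a sketch, not part of the original answer) is to debounce the server call so a burst of triggers collapses into a single request:

```javascript
// Trailing debounce: the wrapped function runs once, delayMs after the
// last call in a burst.
function debounce(fn, delayMs) {
  let timer = null;
  return (...args) => {
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => {
      timer = null;
      fn(...args);
    }, delayMs);
  };
}

// e.g. const debouncedGetServerResult = debounce(getServerResult, 300);
```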
Add the entity data returned from the server to the specific word using ContentState.createEntity():
async handleKeyCommand(command: string): DraftHandleValue {
  if (command === 'send-server') {
    // you need to manually add a space char to the editorState
    // and get the result from the server
    ...
    // entity data from the server
    const result = await getServerResult()
    const contentStateWithEntity = contentState.createEntity(
      'string',     // type
      mutability,   // e.g. 'IMMUTABLE'
      result        // data
    )
    const entityKey = contentStateWithEntity.getLastCreatedEntityKey();
    // you need to figure out the selectionState, i.e. where to apply
    // the entity
    const newContentState = Modifier.applyEntity(
      contentStateWithEntity,
      selectionState,
      entityKey
    );
    // create a new EditorState and use this.setState()
    const newEditorState = EditorState.push(
      ...
      contentState: newContentState
    )
    this.setState({
      editorState: newEditorState
    })
    return 'handled';
  }
  return 'not-handled';
}
Create decorators that find words with specific entity data and return a different style or whatever you need to return:
...
const compositeDecorator = new CompositeDecorator([
  {
    strategy: findSubjStrategy,
    component: HandleSubjSpan,
  },
])

function findSubjStrategy(contentBlock, callback, contentState) {
  // search the whole editor content to find words with Subj entity data;
  // if a word's entity data === 'Subj',
  // pass the start index & end index of the word to callback
  ...
  if (...) {
    ...
    callback(startIndex, endIndex);
  }
}

// this component handles any word that findSubjStrategy() finds with Subj
// entity data
const HandleSubjSpan = (props) => {
  // a word with Subj entity data gets a red font color
  return <span {...props} style={{ color: 'red' }}>{props.children}</span>;
};
