I am using the SMF (Silverlight Media Framework) player, with the following code to get the current volume from the player:
this.item = function () {
    alert(this.player.GetVolume());
};
This works fine, but I also want the current status of the media. What's the property for that? I didn't see it in the API docs.
Thanks
I believe you want the PlayState property, which is of type MediaPluginState and can be one of these values:
Closed
Opening
Buffering
Playing
Paused
Stopped
Individualizing
AcquiringLicense
ClipPlaying
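For example, mirroring the GetVolume call in the question, a minimal sketch of reading the state (this assumes your SMF player exposes the play state through a scriptable getter such as GetPlayState; verify the exact name against your SMF version's API):

this.status = function () {
    // GetPlayState is an assumed getter name; check your SMF scriptable API.
    var state = this.player.GetPlayState();
    // Expected values: Closed, Opening, Buffering, Playing, Paused,
    // Stopped, Individualizing, AcquiringLicense, ClipPlaying.
    alert(state);
};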
I am a newbie in ReactJS and I badly need some help.
So I have a video catalog that only shows the video thumbnails with a label and an overlaid duration. I was using React-Player by Pete Cook before, but I don't want my video player to have share, like, and watch-later buttons, so I decided to drop it and use video-player instead. I just use an Image tag to show the thumbnail, and I pass the YouTube link to the video player when the image is clicked.
Now my problem is that I'm having a hard time getting the video duration. When I was still using React-Player, I could get it after clicking the play button (not the result I want, but at least I was able to get the duration). Any solution for this?
Your problem is that you're providing a YouTube URL to the video-player library's Player component, which clearly doesn't support it, as mentioned here.
I even tried it: you get no duration value, nor does the video play.
However, if you change your YouTube link to, for example, https://www.w3schools.com/html/movie.mp4
you'll find that everything works perfectly and you can read the duration from the Player's state without even playing the video.
const { player } = this.player.getState();
console.log(player.duration);
I added these two lines to a method called changeSource, mentioned here under the Examples title.
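For context, a rough sketch of how that method might look (this assumes a video-react-style Player held in this.player and a source kept in component state; adapt the names to your setup):

changeSource = (src) => {
    // Swap the source, then reload the player.
    this.setState({ source: src }, () => {
        this.player.load();
        // Once the metadata has loaded, duration is populated on the player state.
        const { player } = this.player.getState();
        console.log(player.duration);
    });
};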
Our application has been using SceneKit for a while, and we never had any issues with it until recently. Compared to earlier, the render doesn't show the nodes in their actual colors. Please see the attached images for more detail. If anyone has a solution, I would highly appreciate it.
iOS 12
https://image.ibb.co/i9sVGp/PNG_image.png
iOS 11
https://image.ibb.co/bBRCU9/IMG_0145.png
I had a similar problem in an app that downloads OBJ and MTL files with their texture images and renders them. All image texture materials were just blank white.
In my case, the problem was solved by manually disabling the emission property on the model's materials:
// Reset the emission component so the texture colors render as authored.
for (SCNMaterial *material in self.modelNode.geometry.materials) {
    material.emission.contents = [UIColor blackColor];
}
I have no clue why the emission component was set at all and why this changed with iOS 12/13.
I fixed this by changing the 'Emission' property from white to black under the material settings in Xcode. Save the file as a SceneKit file to avoid having to deal with it again:
Select the model
Go to the panel on the right
I was able to solve my issue for the moment by choosing OpenGL ES as the render mode.
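If you create the view in code rather than picking the rendering API in Interface Builder, a sketch of requesting OpenGL ES might look like this (assuming you construct the SCNView yourself inside a view controller):

import SceneKit

// Ask for OpenGL ES 2 instead of the default (Metal) when creating the view.
let options: [String: Any] = [
    SCNView.Option.preferredRenderingAPI.rawValue: SCNRenderingAPI.openGLES2.rawValue
]
let scnView = SCNView(frame: view.bounds, options: options)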
I was able to solve this issue by setting the following properties:
self.scnView.pointOfView.camera.wantsHDR = true
self.scnView.pointOfView.camera.minimumExposure = -1
self.scnView.pointOfView.camera.maximumExposure = -1
Apart from these properties, you can also set the emission and lightingModel properties of your node's geometry materials, as sketched below.
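A minimal sketch of that (assuming a node called modelNode; which lighting model to use depends on how your assets were authored):

// Reset emission and pick an explicit lighting model on every material.
if let materials = modelNode.geometry?.materials {
    for material in materials {
        material.emission.contents = UIColor.black
        material.lightingModel = .blinn // or .constant / .physicallyBased
    }
}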
Hi, I'm using the AngularJS framework "videogular" and have no problem creating a single player, but I don't know how to create multiple players on one screen, since apparently each player requires a controller.
sample code here:
http://codepen.io/2fdevs/pen/KmDIE
Each player needs a different config, but not a different controller.
Maybe this other answer could be useful:
stop other video that is playing in background on play the new one
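For illustration, a rough sketch of one controller exposing two independent configs (the module and property names follow the videogular examples, but check them against your version):

angular.module('myApp', ['com.2fdevs.videogular'])
    .controller('PlayersController', function ($sce) {
        // One controller, two independent player configs.
        this.config1 = {
            sources: [{ src: $sce.trustAsResourceUrl('video1.mp4'), type: 'video/mp4' }]
        };
        this.config2 = {
            sources: [{ src: $sce.trustAsResourceUrl('video2.mp4'), type: 'video/mp4' }]
        };
    });

Each <videogular> element in the template then binds to its own config (e.g. one player's vg-media uses ctrl.config1.sources, the other ctrl.config2.sources).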
I'm trying to write a WPF app with WebRTC support. Access to the camera works, but the <video> element on the page doesn't display anything. Can anyone help?
You can do something like this:
var cefSettings = new CefSettings();
// Enable camera/microphone capture (getUserMedia) in the embedded browser.
cefSettings.CefCommandLineArgs.Add("enable-media-stream", "enable-media-stream");
Cef.Initialize(cefSettings);
This has the same effect as passing the --enable-media-stream command-line argument.
I assume you want to display video from your camera via WebRTC, so I think it requires a call to .getUserMedia() to get hold of your camera. For that to work you must use a CefSharp build based on Chromium 30 or later. So either:
Use the latest CefSharp.Wpf NuGet. Right now you need the latest -Pre release,
or build from source with the current master branch.
I just did a quick test again using CefSharp.MinimalExample, so here are the steps:
Make sure your MinimalExample uses Chromium 31 or higher - see this PR - unless it already got merged by the time you are reading this.
In MainView.xaml, modify the <cefSharp:WebView Address= /> attribute to "https://simpl.info/getusermedia/sources/index.html" (see the snippet after these steps).
Build, and when running, add the --enable-media-stream command-line flag.
That's it! With your camera connected and a bit of luck you should see your own face - or whatever the camera points to - on the screen.
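For reference, the attribute change in step 2 might look roughly like this (the x:Name is illustrative; keep whatever your MinimalExample already uses):

<cefSharp:WebView x:Name="Browser"
                  Address="https://simpl.info/getusermedia/sources/index.html" />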
Bonus info: Hopefully soon PR #365 can get a bit of extra love to allow for passing flags too and get merged into CefSharp. With that you can set the flag in code instead of having to pass it in as a command line parameter.
The correct code is this:
Dim settings As New CefSettings()
' Configure settings (including CachePath) before initializing CEF.
settings.CachePath = "cache"
settings.CefCommandLineArgs.Add("enable-media-stream", "1")
CefSharp.Cef.Initialize(settings)
I am using the GPUImage framework to record multiple videos one after another in close intervals, with various filters enabled in real time, using GPUImageVideoCamera and GPUImageMovieWriter.
When I record a video, it starts with a jerk (a freeze of about half a second) and ends with one too. I know the reason behind this is the statement in which I pass the movieWriter object to the VideoCamera's audioEncodingTarget.
So in my case, when I record multiple videos one after another (with different GPUImageMovieWriter objects), the preview view freezes at the start and end of each recording.
If I remove the audio encoding target statement, things improve significantly, but of course I get no audio.
Currently I use an AVAudioRecorder while recording to save the audio track, but I believe this is not an ideal workaround.
Is there any way to solve this problem?
-- I looked at the RosyWriter example by Apple; their app works almost the same way, but smoothly, at a nearly constant 30 fps. I tried to use the RosyWriter code (after removing the code that adds the purple effect) to save the required videos while showing the GPUImageVideoCamera's filtered view to the user, but in vain: the unmodified RosyWriter code records just two videos, and the rest fail. I also tried passing the capture session from GPUImageVideoCamera into the RosyWriter code, but I only get videos with black frames and no audio.
Please help with how I can record GPUImage-filtered videos with audio, without this jerkiness. Thanks in advance.
I faced the same issue and here is my workaround.
As you pointed out, this problem happens because the setAudioEncodingTarget method internally calls addAudioInputsAndOutputs to add the audio input/output to the capture session.
To avoid this issue, I created a justSetAudioEncodingTarget method for the VideoCamera, as below
(in GPUImageVideoCamera.m):
// Just set the encoding target, without re-adding audio inputs/outputs.
- (void)justSetAudioEncodingTarget:(GPUImageMovieWriter *)newValue {
    if (newValue == nil) {
        return;
    }
    addedAudioInputsDueToEncodingTarget = YES;
    [super setAudioEncodingTarget:newValue];
}
The following steps describe my scenario, and I verified that it works smoothly:
Call the VideoCamera's addAudioInputsAndOutputs right after the VideoCamera is created.
That is, at creation time, not right before starting the recording. :)
Set the MovieWriter as the encoding target via the justSetAudioEncodingTarget method above, as sketched below.
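Put together, the call sites might look roughly like this (a sketch; the movieWriter setup and filter chain are omitted, and the names are illustrative):

// 1. Create the camera and add audio inputs/outputs once, up front.
GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset1280x720
                                        cameraPosition:AVCaptureDevicePositionBack];
[videoCamera addAudioInputsAndOutputs];

// 2. For each recording, attach the writer without touching the session again.
[videoCamera justSetAudioEncodingTarget:movieWriter];
[movieWriter startRecording];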