React: Detect poor connection or cheap device

I am creating a service that displays videos, but I want to change the video quality before downloading it. I need to detect either the device or the network connection, to see if they're poor or not. What is the most effective way of doing it?

Well, first of all, it's not a super straightforward path, because what you are talking about is a video streaming service. Doing it properly, as in developing your own solution, requires sophisticated engineering and is not something I would suggest a single developer take on.
What you do want is a video streaming CDN, and I suggest you look into CDNs as a whole if you're going to deliver rich media to end users; it will likely be easier, faster and cheaper than whatever solution you could implement yourself.
A good option is CloudFlare - probably the gold standard of CDNs - but there are many out there; see the Video Streaming CDN docs.
This solution automatically detects network capabilities of your client - and transcodes and delivers media in a way that fits. You can read more in the docs.
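If you still want a quick client-side heuristic in React while you evaluate CDNs, a sketch like the one below is possible. It uses the Network Information API (navigator.connection) and navigator.deviceMemory, both of which are non-standard and only exposed by some (mostly Chromium-based) browsers; the thresholds and quality labels are assumptions for illustration, not a definitive implementation.

```typescript
// Hedged sketch: feature-detect navigator.connection / deviceMemory and fall
// back to the higher quality when the APIs are unavailable.
function pickQuality(): '480p' | '1080p' {
  const nav = navigator as Navigator & {
    connection?: { effectiveType?: string; saveData?: boolean };
    deviceMemory?: number;
  };
  const conn = nav.connection;
  const slowNetwork =
    !!conn && (conn.saveData === true || /(^|-)2g$/.test(conn.effectiveType ?? ''));
  const lowEndDevice = (nav.deviceMemory ?? 8) <= 2; // deviceMemory is Chromium-only
  return slowNetwork || lowEndDevice ? '480p' : '1080p';
}

// e.g. in a component: <video src={`/videos/clip-${pickQuality()}.mp4`} />
```

Even then, an adaptive protocol (HLS/DASH) served through a CDN will adjust quality mid-playback far better than a one-off check like this.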
Good luck.

Related

How to build webRTC m:m audio/video live-streams/calls like discord does? client to client via gateway for IP protection

mux.com (and also agora.io and so on) is a great service, but very expensive since it's a server solution. I can't use that.
Discord is a great client-side solution that just uses gateways as a pass-through to hide IP addresses and so on. They described their entire architecture here: https://discord.com/blog/how-discord-handles-two-and-half-million-concurrent-voice-users-using-webrtc Discord isn't the only one with this approach; Instagram has, AFAIK, the same approach too, since it's cheap and does what it does.
I want to use this solution for my social media app (like Instagram) too, but without all the custom-built things they added to increase performance. I am a one-man team and I can't handle that complexity; still, I don't want to use mux because it's way too expensive for me.
I am okay with stock/standard performance. Does anyone know of, or can point me to, a tutorial on where to start building such a WebRTC Elixir gateway solution for m:m audio/video live streams/calls?
Maybe there is already published code that I can just copy-paste.
thanks a lot!!
Edit: I've got an answer on their official forum: https://elixirforum.com/t/how-to-build-webrtc-m-m-audio-video-live-streams-calls-like-discord-does-client-to-client-via-gateway-for-ip-protection/44956
Discord's backend uses an SFU to forward streams between peers in a video room; here is the description from the Discord post:
Discord Voice server contains two components: a signaling component and a media relay component called the selective forwarding unit or SFU. The signaling component fully controls the SFU and is responsible for generating stream identifiers and encryption keys, forwarding speaking indication, etc.
Note that the projects in that answer are written in the Elixir programming language (based on Erlang), which is not commonly used in either live streaming or WebRTC. For example, FFmpeg, x264, libopus, WebRTC, and SRS are all audio/video components written in C/C++, so you'd better think about that.
For a video chat product like discord:
The client app, no doubt, could be built on WebRTC, both H5 and mobile.
For the SFU server, I recommend a C++ server, for example SRS or mediasoup, because the whole audio/video ecosystem is C/C++ based and there is a lot of stuff for an SFU to handle.
The signaling server, also called the video room, could be written in Node.js or Go. Since it depends on your business logic, I highly recommend the language you're most skilled in; there is a lot of work to do in this server.
And not all peers in a video room need to publish a video stream; instead, many only play or consume streams, so it's actually low-latency live streaming. For more information about live streaming and video chat, please read this post.
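To give a feel for how small the signaling piece can be (the SFU/media relay is the genuinely hard part, as noted above), here is a minimal relay sketch in TypeScript using the ws npm package. The room and message shapes are hypothetical; a real server also needs auth, error handling, and the stream-identifier/key generation the Discord post describes.

```typescript
// Minimal signaling relay sketch (assumes the `ws` npm package).
import { WebSocketServer, WebSocket } from 'ws';

type Signal = { room: string; payload: unknown };

const rooms = new Map<string, Set<WebSocket>>();
const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket) => {
  let joined: string | null = null;

  socket.on('message', (data) => {
    const msg = JSON.parse(data.toString()) as Signal;
    // First message joins a room; later messages (SDP offers/answers,
    // ICE candidates) are relayed to every other peer in that room.
    if (!joined) {
      joined = msg.room;
      if (!rooms.has(joined)) rooms.set(joined, new Set());
      rooms.get(joined)!.add(socket);
      return;
    }
    for (const peer of rooms.get(joined) ?? []) {
      if (peer !== socket && peer.readyState === WebSocket.OPEN) {
        peer.send(JSON.stringify(msg));
      }
    }
  });

  socket.on('close', () => {
    if (joined) rooms.get(joined)?.delete(socket);
  });
});
```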

What HTML5 runtime environments (renderer shells) don't generate any background network I/O?

I need to put together a nice interface/UX that will run fullscreen as the primary shell/UI on purpose-configured hardware.
HTML5/CSS/JS will be the absolute easiest design path; I don't have anything that requires rock-solid 60fps framerates or similar high-performance requirements, so the live-reloadable nature of HTML easily makes it a winner for rapid development. (Plus, I'm using Linux, so I know my next (real) alternative is Qt, perhaps with Python. That won't be as fast to iterate with.)
Since this is going to be an embedded/appliance type setup, I don't need the "one-click run" packaged nature of Electron; indeed, I will far prefer the more traditional approach of a local application-/web server running in the background.
So, all I need is a basic headless browser to appear fullscreen for the duration the machine is turned on.
That could be Chrome, but here's the thing. The general Internet will be accessible from the devices in question, but only my own backend processes will use it. I'd prefer the renderer(s) not access the network directly.
I understand Chrom{e,ium} contacts Google for telemetry and metrics tracking, as well as to fetch safebrowsing lists and so forth - and that this behavior cannot be turned off, to ensure that some majority of web users don't end up with what are (in the context of "the whole internet") arguably-insane security defaults.
But for my use case, this behavior is pointless, because I'll only ever be accessing 127.0.0.1. There's no point for me to waste my bandwidth on this I/O; the effort being made to update this data in the background is effectively wasted.
I'm currently wondering what renderer solution to deploy, and weighing up NW.js, QtWebEngine and Electron (in that order). I particularly like the following from http://wiki.qt.io/QtWebEngine:
Auxiliary services that talk to Google platforms are stripped out
If I knew NW.js did the same thing, it would be a shoo-in and my choice would be made, but I'm not sure where to look.
Is it a good idea to make the effort to build a simple QtWebEngine kiosk shell, is NW.js resource efficient, or is there another option I haven't thought of?
I've tried CEF in the past; it seems somewhat clunky, or at least the demo shell does (I'm not up to recompiling it). Perhaps it's exactly what I'm looking for and don't realize?

Server side responsiveness good practices

I have the following scenario: I work with CakePHP and Twitter Bootstrap.
I'm using a lot of responsiveness and writing lots of HTML that changes for each screen size.
I was thinking about detecting the screen size and saving it on the server side, so I can write only the HTML that will really be useful on that page. Since the user rarely changes the window size, it won't hurt to hit F5.
Is it a good practice? What do you suggest?
That job is clearly supposed to be done by the browser, and your task is to make that happen by using the right CSS for the right device. As somebody already mentioned in the comments, your device could rotate, and devices have different DPIs. I'm pretty sure you can't pass the DPI to the server without explicitly reading it via JS (if that's possible at all) and sending it to the server in an AJAX call.
You can't rely on server-side device detection alone, nor is it good to render markup conditionally for that purpose; it just multiplies the amount of maintenance you have to do for each device.
I recommend you read a little more about responsive web design; there are plenty of articles and books about it these days.
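For completeness, here is a hedged sketch of the "read it via JS and send it in an AJAX call" approach mentioned above. The /client-info endpoint is hypothetical, and in most cases plain CSS media queries remain the better tool.

```typescript
// Sketch: report viewport size and device pixel ratio to a hypothetical endpoint.
async function reportViewport(): Promise<void> {
  await fetch('/client-info', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      width: window.innerWidth,
      height: window.innerHeight,
      dpr: window.devicePixelRatio,
    }),
  });
}

window.addEventListener('resize', () => { void reportViewport(); });
void reportViewport();
```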
Not really a good practice, mate. You'll increase your application load and waste unnecessary space just to save a few lines of code. Imagine you're making a website like Facebook: how much server space would it take just to store that piece of information for millions of users? Responsive design is a must these days in CSS, and you should just give general values for ranges of resolutions. There's a good paid tutorial on Treehouse's website, I think, and others are freely available on YouTube etc.

Comparing Flash, HTML, Silverlight, X3D and Unity 3d

I have to prepare a comparison of the following technologies to present to my project manager, but I feel that I'm lost, so if anyone can help I will be thankful.
I want to compare them in the following areas:
the support of online video streaming
the budget of using each one
Learning Time will be needed to learn the technology
Which one is the standard and will target a lot of users
The support if I found any problem
Bugs and security issues
connection to DB, SOA and web services
supporting of multi player
The support of online video streaming
Some of the X3D viewers support video streaming (and some even 3D streaming, for things such as augmented reality).
Which one is the standard and will target a lot of users
X3D is a standardized format (like JPEG), with multiple companies able to manipulate such data, and it is even officially recommended by the HTML5 specs, whereas Unity ties you to a single company. Even if most X3D viewers are plugin-based like Flash, there are also native implementations such as X3DOM that display/interact with X3D files in any browser that supports WebGL.
Connection to DB, SOA and web services
I would usually recommend using a web service for interfacing with a DB, and yes, X3D can interact with web services (XML, JSON). There is even a standard binary encoding that makes large content faster to transfer and parse.
Supporting of multi player
Some X3D-supporting providers offer a multi-user service, such as Bitmanagement's BS Collaborate server, but I've seen people use Darkstar/RedDwarf to build multi-user 3D environments as well.
the support of online video streaming
Unity 3D does not support video streaming, unless done through textures, which will give you a really slow frame rate.
I don't know for sure about X3D, but I would doubt it was really made for such tasks.
Silverlight has good video support; it should be easy to stream with.
HTML only supports streaming video via HTML5, which gives the best user experience when the user's browser supports it.
Flash is the de facto standard for video streaming and is extremely widespread. YouTube uses it, for example.
the budget of using each one
The cheapest of them all is HTML; it is free. Then you can theoretically set up something for free in Flash using the Flex SDK and a streaming server such as Red5 (both open source and free). After that, I believe all the others would probably be on par cost-wise, with Unity3D coming in as the cheapest of the paid alternatives.
Learning Time will be needed to learn the technology
Listed in order of fastest one to learn to slowest (assuming no prior experience in any):
HTML
Flash/Silverlight
Unity3D
X3D
Which one is the standard and will target a lot of users
Flash is the most widespread. Its only competitor would be HTML5, as new browsers tend to support it and it's the only possible option on iOS. On the other hand, if 3D is what you want, then Unity3D is the standard for now, and it might be followed by HTML5 in the future.
The support if I found any problem
Well, Unity3D would offer you good paid support; Flash and Silverlight also (but only when you pay for streaming server licenses). HTML and X3D will not give you any support, but you can find a lot of information on the internet. There is also extensive information about Flash and Silverlight on the internet, but mostly Flash.
Bugs and security issues
All are pretty secure. I'm just not sure about X3D, but all the others are comparable in terms of security and bug issues.
connection to DB, SOA and web services
Easy to do with HTML, Flash and Silverlight. Harder with Unity3D, and hardest with X3D.
supporting of multi player
Multi-player what? If you are making a game, then clearly I would say your real options are Unity3D if the game is to be in 3D, or Flash if it is to be done in 2D. Check out SmartFoxServer for an easy multiplayer server.
My 2 cents:
the support of online video streaming:
Some X3D players do support it. Unity does in some ways: http://unity3d.com/unity/features/audio-and-video
the budget of using each one:
X3D and Unity3d are free. You can pay for Unity licenses for extra features and platforms like iOS and Android. If you need to write plugins for Unity, you'll need the $1500 license. There are no costs for distribution of Unity products.
Learning Time will be needed to learn the technology:
Both X3D and Unity3d have active communities and many online resources and offline books. Unfortunately for X3D, the best content creation tool (Vivaty Studio) is no longer supported officially, but X3D is supported in Maya, Max, Blender, and many other 3D programs. Unity's online docs are excellent and the answers.unity3d.com forum (and other forums) are free and fast.
Which one is the standard and will target a lot of users:
'Standard'? Well, HTML is the broadest standard. X3D (if you include VRML) is the oldest and most widely used 3D standard. HTML you already have; HTML5 is coming 'real soon now' (I'm already turning blue). If you mean 'most readily available', then HTML is #1 and Flash is #2 (everyone has a browser, and most computers come with Flash installed already). Flash needs to be installed. Unity needs to be installed too, but it's at least as fast and easy to install as Flash, and it has gotten millions of downloads, so it's getting pretty pervasive. X3D requires a plugin (this should change sometime 'real soon now' with X3DOM on HTML5), but the many X3D players are all a little different from each other.
The support if I found any problem:
All have a lot of online community support. X3D has a spec committee, but that's not really support per se; you'd have to contact the X3D plugin provider (Bitmanagement, Cortona, Octaga, Exit Reality, Fraunhofer, etc.). Unity has great online community forums, and you can pay for premium support, but I'd only do that if I hit a serious bug or needed a feature that has no work-around.
Bugs and security issues:
X3D's bugs depend on which player you use. Unity has bugs, but the product is pretty solid (I've only crashed it once, and I've used it all day, every day, for over a year). Both have an eye toward security, but neither is totally secure, especially since you can write scripts that are inherently insecure. So you have a hand in how secure YOUR content will be. Some X3D players support encryption. Unity products are compiled.
connection to DB, SOA and web services:
You can use something like AJAX or JSON or whatever in all these platforms, no? So if it's by web service, sure. If by direct local access, I know Unity can do that. Both Unity and Flash require cross-domain policy XML files on the server to allow cross-domain access (in the web player for Unity, anyway).
supporting of multi player:
Unity has excellent multiplayer networking components. X3D (the spec) supports it too, but how well it actually works depends on which X3D player you go with. Worst case, you can use AJAX or JSON or whatever to roll your own.
Which you choose depends mostly on what you want to do with it. Flash is generally the best route right now, unless it's all about 3D, in which case I'd try Unity. But a year from now, HTML5 alternatives will begin to take over. Flash DOES support 3D, and there are different ways it can be done; Vivaty had a full-featured X3D player written in Flash, so it can be done. There are several good 3rd-party 3D plugins for Flash.
I totally agree with wildpeaks : )
Connection to DB, SOA and web services: easy to do with HTML, Flash and Silverlight. Harder with Unity3D, and hardest with X3D.
Reply: I don't think X3D is the hardest.
X3D (X3DOM) can interact with web services (XML) quite easily, as in this example/tutorial.
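As an illustration (this is not the linked tutorial), a hedged sketch of fetching data from a web service and feeding it into an X3DOM scene could look like this; X3DOM exposes the scene graph as ordinary DOM elements, so a transform node can be updated with setAttribute. The endpoint and element id are assumptions.

```typescript
// Hypothetical sketch: fetch a position from a web service and move a box
// in an X3DOM scene by updating its <transform> element in the DOM.
async function moveBox(): Promise<void> {
  const res = await fetch('/api/box-position');           // hypothetical endpoint
  const { x, y, z } = (await res.json()) as { x: number; y: number; z: number };
  const node = document.getElementById('boxTransform');   // a <transform> in the X3D scene
  node?.setAttribute('translation', `${x} ${y} ${z}`);
}
```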
Flash supports hardware-accelerated 3D and comes with 3D support out of the box. In addition, there is the Papervision library for more advanced 3D. Unity3D is also supported as a Flash library.
I would consider Flex a real alternative to Flash. It has the same ActionScript language, but uses a tag-based syntax called MXML, similar to Silverlight. Database remoting is extremely simple: you can access your .NET/Java/PHP objects directly on the front end without having to deal with serialisation issues. All of the Flash libraries are accessible.
There is also the X3D player from instantreality.org, which supports video streaming & decoding and XMLHttpRequest via scripting, and it's free for non-commercial usage.
Flash 3D isn't good 3D for any real-time 3D application; it is 2.5D with some tricks.
X3D is easy to learn for simple things and harder as complexity goes up. It does have the advantage of being VRML with pointy brackets so the free content, examples and toolkits are easily found. I did comparison tests of the various players. BS Contact is the best for the ability to handle the most complex content with the fastest frame rate and rich color palette. Network support is still non-standard although XMLHTTP and database connections are easy to bolt on. As others have said, Instant Reality is coming on fast and supported by people with a deep understanding of the past implementations and future requirements.
The decision comes down to the project type. A simple comparison rating such as the one you are attempting is misleading at best, but thanks for giving it a shot. I've used VRML through all of its incarnations, and now X3D, for world building, and now as a source of 3D models for video work in combination with Sony Vegas. For cost-benefit without the need for very expensive modeling toolkits, it is the best of all the choices.

What are the cons of a web based application

I am going to write a database application for the camp I work for. I am thinking about writing it in C# with a Windows GUI, but using a browser as the application is seeming more and more appealing for various reasons. What I am wondering is why someone would not choose to write an application as a web application. E.g. the back button can cause you some trouble. Are there other things that anyone can think of?
There are plenty of cons:
Speed and responsiveness tend to be significantly worse
Complicated UI widgets (such as tree controls) are harder to do
Rendering graphics of any kind is pretty tricky, 3D graphics is even harder
You have to mess around with logins
A centralised server means clients always need network access
Security restrictions may cause you trouble
Browser incompatibilities can cause a lot of extra work
UI conventions are less well-defined on the web - users may find it harder to use
Client-side storage is limited
The question is.. do enough of those apply to your project to make web the wrong choice?
One thing that was not mentioned here is the level of complexity and knowledge required to build a good web application. The problem is that unless you are doing something very simple, there is no single skill or technology that goes into these applications.
For example, if you were to write an application for some client-server platform, you might develop in Java or C++. For a complex web application, you may need expertise in Java, JavaScript, HTML, Flash, CSS, Ajax, SQL, J2EE, etc. The components of a web-based application are also more numerous: a web application server, HTTP server, database, and browser are typical components, but there could be more, whereas a client-server app is typically just what it says: a client application and a server application. My experience and personal preference is not web-based; web-based is great for many things. But even though I am an IT architect for a leading company that is completely immersed in web apps as the solution for everything, the cons are still many. I do think the technology will evolve and the cons will go away over time, though.
Essentially, the real limitations come only from the platform, namely the browser. If you have to account for all browsers in current use, that can be a pain due to the varying degrees of standards support in each of them.
If you have control over which browser is used (that is, everyone is on computers that you control on site, and say you install Firefox on all of them), you could then leverage the latest JavaScript and CSS standards to their fullest in your content delivery.
[edit] You could also look into something like the Adobe Integrated Runtime ("AIR"), which lets you code the front end with traditional browser-based options like XHTML/CSS/JavaScript or Flash/Flex, have the back end hooked up to your database online, and still get the functionality of a traditional desktop app at the same time.
The biggest difference and drawback I see with web applications is state management. Since the web is, by nature, stateless, everything you want to maintain has to be sent back and forth between client and server with every request and response. Storing and retrieving it efficiently, with respect to page size and performance, is hard to do at times. Also, the fact that there is no real standard for browsers (at least not one everyone adheres to) makes consistency really..........fun.
You need network access to the server that will host the web application (if there are going to be multiple users of the application, which is typically the case).
Actually, there are more pros than cons - if you can give some details about your application, we could help a little more...
It completely depends on the requirements of your project. For the most part, there isn't much web applications cannot do these days. Admittedly, certain applications do belong on the desktop, as browsers (while advancing, and rapidly) are still not quite there yet.
With the advent of applications such as Google Docs and Gmail, there isn't much you -cannot- do on the web. If you're creating a World of Warcraft competitor, however, the web is most certainly not the optimal solution. Again, unfortunately we'd need more insight into the application you're building for the camp. The best part about the web is that anyone with a browser can use your application.
Web applications delegate processing to a remote machine. Depending on the amount of processing, this can be a con. Consider a photo editor that's a web app.
Web applications also can't deal with a whole lot of data going back and forth between server and client. You can watch video online... when it's compressed. It will be a while before we see any web-based video editing software.
Browser compatibility is also a hassle. You can't control the look-and-feel of the application 100%.
Vaibhav has a good point. What's your application?
A major one is downtime for migrations... users will not expect the application to be down, ever, but realistically it will have to be down for major upgrades. When doing this with a desktop application, the user (or end-user systems admin) is in control of when upgrades happen; with an online app, they're not.
For applications which have large data, performance can be a major problem as you're storing a large number of users' data centrally, which means the IO performance will not be as good as it would be if you gave them all a laptop.
In general, scalability is a problem for a server-based app; desktop applications scale really well.
You can do an awful lot with a web-based app, but it is a lot easier to do certain things with a thick client:
Performance: You get simple access to the full power of the client's CPU.
Responsiveness: Interactivity is fast and easy.
Graphics: You can easily use graphics libraries such as DirectX and OpenGL to create fast impressive graphics.
Work with local files
Peer-to-peer
Deciding whether a web application is a good approach depends on what you are trying to achieve. However here are some more general cons of web applications:
Real integration with desktop apps (e.g. Outlook) is impossible
Drag and drop between your app and the desktop / other running apps
With a web application, there are more privacy concerns, since you are storing user data on your servers. You have to make sure that you don't lose or disclose it, and your users have to be comfortable with the idea of their data being stored on your servers.
Apart from that, there are many security problems, like man-in-the-middle attacks, XSS, or SQL injection.
You also need to make sure that you have enough computing power and bandwidth at hand.
"Ex. The back button can cause you some trouble."
You'll have to be specific on this. A lot of people make fundamental mistakes in their web applications and introduce bugs in how they handle transactions. If you do not use "Redirect after Post" (also known as Post-Redirect-Get, PRG design), then you've created a bug which appears as a problem with the back button.
A blanket statement that the back button causes trouble is unlikely to be true. A specific example would clarify your question on this.
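For reference, a minimal Post-Redirect-Get sketch might look like the following. Express is an assumption here (the question doesn't name a framework), and saveOrder is a hypothetical persistence helper; the point is only that the POST handler responds with a redirect, so refresh and the back button never re-submit the form.

```typescript
import express from 'express';

const app = express();
app.use(express.urlencoded({ extended: true }));

app.post('/orders', (req, res) => {
  const id = saveOrder(req.body);        // hypothetical persistence helper
  res.redirect(303, `/orders/${id}`);    // 303 See Other -> the browser issues a GET
});

app.get('/orders/:id', (req, res) => {
  res.send(`Order ${req.params.id} saved.`);
});

function saveOrder(_body: unknown): number {
  return Date.now(); // stand-in for a real insert
}

app.listen(3000);
```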
The back button really is not that much of an issue if you design your application correctly. You can use AJAX to manipulate parts of the current page without adding items to the browser history (since the page itself won't change).
The biggest issue with designing web applications has to do with state, and the challenges that need to be programmed around. With a desktop application, state is easy to handle: you can leave a database connection open, lock the record, and wait for the user to make changes and commit. With a web application, you could lock the record... but then what if the user closes the browser? These things must be overcome in the design of your application.
When designing a web application, make sure that each trip to the server "stands alone" and provides a complete answer. Always re-initialize your variables before performing any work and never assume anything. One of the challenges I ran into once was pulling "pages" of grid data back to the user. In a really busy system, with record additions/modifications happening in real time, the user's navigation from page to page would vary greatly, sometimes even resulting in the same few records being shown again as new additions were inserted in front of the query.
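A small sketch of that AJAX approach, assuming a hypothetical /reports/summary endpoint that returns an HTML fragment and a #summary element on the page:

```typescript
// Fetch an HTML fragment and swap it into the page, so nothing is pushed onto
// the browser history and the back button is unaffected.
async function refreshSummary(): Promise<void> {
  const res = await fetch('/reports/summary');
  const html = await res.text();
  const target = document.querySelector('#summary');
  if (target) target.innerHTML = html;   // update in place; no navigation occurs
}

document.querySelector('#refresh')?.addEventListener('click', () => {
  void refreshSummary();
});
```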
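One way to make each grid request "stand alone" (a swapped-in technique, not something from the original answer) is keyset/cursor pagination: page by a stable key instead of an offset, so rows inserted in front of the query don't shift what the user sees. The table, columns, and db.query helper below are assumptions.

```typescript
// Hedged sketch: fetch the next page of grid data after the last row the
// client has already seen, instead of by page number/offset.
async function fetchNextPage(
  db: { query: (sql: string, params: unknown[]) => Promise<unknown[]> },
  lastSeenId: number,
  pageSize = 25,
): Promise<unknown[]> {
  return db.query(
    'SELECT id, name, updated_at FROM records WHERE id > ? ORDER BY id LIMIT ?',
    [lastSeenId, pageSize],
  );
}
```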
