I am trying to understand how the timestamp in RTP, together with a time-synchronization protocol like NTP, can synchronize media streams. Based on my understanding I have drawn this. Please correct me if I'm wrong.
Here the clocks in these devices are synchronized, and an RTP packet is created with timestamp 10. Due to network transmission delay the packet arrives at 11, but the timestamp is still 10. How is this case handled in RTP for proper synchronization, or is it the application that has to take care of this situation?
When handling an incoming (UDP) RTP stream, received RTP packets are buffered before they are processed. This is to compensate for network jitter and similar effects. The buffering period is typically between 50 and 300 milliseconds, depending on the network topology in use.
If the buffering time is adjustable at runtime, you could use this buffering period to synchronize the two streams by ear: when the two streams are out of sync, adjust the buffering time (delay) of one of them until they sound in sync.
If you don't want to, or can't, adjust the buffering period by ear, you should use RTCP (RFC 3550) to synchronize the two streams. You can't just use the timestamp values in the RTP packets.
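To make the RTCP approach concrete: a Sender Report pairs an NTP wallclock timestamp with the RTP timestamp of the same instant, which lets a receiver translate any packet's RTP timestamp into the sender's wallclock and align two streams against each other. A minimal sketch in C; the struct and function names here are hypothetical, not from any particular library:

    #include <stdint.h>

    /* Last RTCP Sender Report (SR) seen for one stream. Per RFC 3550,
     * an SR pairs an NTP wallclock timestamp with the RTP timestamp
     * of the same instant. */
    typedef struct {
        double   ntp_seconds;   /* SR NTP time, in seconds */
        uint32_t rtp_timestamp; /* RTP timestamp at ntp_seconds */
        uint32_t clock_rate;    /* RTP clock rate, e.g. 90000 for video */
    } sender_report;

    /* Map an RTP packet timestamp onto the sender's wallclock. */
    double rtp_to_wallclock(const sender_report *sr, uint32_t rtp_ts)
    {
        /* Unsigned subtraction handles RTP timestamp wraparound. */
        uint32_t delta = rtp_ts - sr->rtp_timestamp;
        return sr->ntp_seconds + (double)delta / sr->clock_rate;
    }

Packets from two streams whose computed wallclock times match are then presented together, regardless of how different their network delays were.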
I think this website with FAQs on RTP can be helpful.
In embedded systems, we often use a UART to transmit data to a serial console on a PC, these days usually through a USB-to-UART serial converter that shows up as a virtual COM port. Why has UART become the go-to for this instead of other serial interfaces like I2C and SPI?
Because it is simple, it was designed to be used over longer distances (meters, not kilometers :)), it is very standard, and every uC has it.
I2C & SPI are not designed to be used outside the PCB (I know that people use them over longer distances). Those interfaces are meant to connect other ICs to your microcontroller.
The maximum distance of RS-232 can be a few meters; I2C and SPI don't work well over distances longer than about 200 - 500 mm (depending on pull-ups, speed, collector current, ...).
SPI and I2C need a master and slave(s); there is no such distinction between two UART hosts.
You need fewer pins than SPI (when pins like DTR, DSR, RTS are omitted) or a parallel port.
You don't need to worry about where to put a pull-up resistor.
Both hosts can start a transmission asynchronously; with I2C and SPI the master needs to poll the slave before it can transmit data.
A host doesn't need to answer immediately. This can be important on a PC under load, where the reaction time can be very high (50 ms or so). Try to write a program for a PC that can reliably answer in less than 1 ms.
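To illustrate the PC side of this, here is a minimal sketch that opens such a virtual COM port with POSIX termios and sends a few bytes; the device path /dev/ttyUSB0 and the 115200 8N1 settings are assumptions to adjust for your adapter:

    #include <fcntl.h>
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    /* Open a USB-to-UART virtual COM port at 115200 baud, raw mode. */
    int open_uart(const char *path)
    {
        int fd = open(path, O_RDWR | O_NOCTTY);
        if (fd < 0) { perror("open"); return -1; }

        struct termios tio;
        if (tcgetattr(fd, &tio) < 0) { perror("tcgetattr"); close(fd); return -1; }

        cfmakeraw(&tio);                 /* raw bytes, no line editing */
        cfsetispeed(&tio, B115200);
        cfsetospeed(&tio, B115200);
        tio.c_cflag |= CLOCAL | CREAD;   /* ignore modem control lines */

        if (tcsetattr(fd, TCSANOW, &tio) < 0) { perror("tcsetattr"); close(fd); return -1; }
        return fd;
    }

    int main(void)
    {
        int fd = open_uart("/dev/ttyUSB0");   /* example device path */
        if (fd < 0) return 1;
        write(fd, "hello\r\n", 7);
        close(fd);
        return 0;
    }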
I want to transfer some files from one point to another. The files are sensitive, so the transfer has to be reliable, but if I use TCP to transfer the files then the speed gets slow.
How do I create a reliable version of UDP that will transfer files quickly?
What I am doing now is sending an acknowledgement for every received packet, but that is reducing my transfer speed.
Is there a way that avoids sending an acknowledgement for every received packet? Can I somehow keep track of lost packets efficiently and request only those packets?
Note: I am sending a sequence number with every packet.
I guess you could put a count value in each packet; if you receive a packet that skips a value, you know that you've lost one or more and can request a resend.
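A minimal sketch of that gap-detection idea, assuming each datagram carries a 32-bit sequence number as you describe; the printf stands in for sending a real NACK datagram:

    #include <stdint.h>
    #include <stdio.h>

    /* Receiver-side gap detection: when a sequence number is skipped,
     * every missing number in between is requested for resend (a NACK),
     * instead of acknowledging each received packet individually. */
    static uint32_t expected_seq = 0;

    void on_datagram(uint32_t seq)
    {
        if (seq == expected_seq) {
            expected_seq = seq + 1;              /* in-order packet */
        } else if (seq > expected_seq) {         /* gap detected */
            for (uint32_t s = expected_seq; s < seq; s++)
                printf("request resend of %u\n", s);  /* send NACK here */
            expected_seq = seq + 1;
        }
        /* seq < expected_seq: duplicate or late retransmit; use it to
         * fill a previously reported gap. Wraparound handling omitted. */
    }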
However, you're starting to implement the functionality of TCP by coding for packet loss. Is there a reason why you couldn't implement that instead?
Certainly if I was transferring sensitive data I wouldn't choose UDP myself.
According to the book The Linux Programming Interface:
epoll provides a number of advantages over signal-driven I/O.
Can we safely say:
Signal-driven I/O has actually been deprecated by epoll in practice under Linux?
If we assume that "signal driven I/O" is referring to the POSIX aio (asynchronous I/O) facility, using the aio_sigevent notification method, then it is perhaps fair to say that most networking applications desiring asynchronous operation will favor epoll over aio. Deprecation may be a bit strong.
I do want to point out that the aio facility outshines epoll for disk I/O. The aiocb structure allows an offset to be specified for an aio_write or aio_read operation, so multiple file I/O operations can occur in parallel at many different offsets in the file. Traditional file-descriptor I/O with epoll would typically be serialized as stream operations, where the next operation continues where the previously completed one left off.
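As a sketch of that parallel-offset capability, the following issues two POSIX aio reads at different offsets of the same file; the file name is an example, and on glibc this links with -lrt:

    #include <aio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("data.bin", O_RDONLY);  /* example input file */
        if (fd < 0) { perror("open"); return 1; }

        static char buf1[4096], buf2[4096];
        struct aiocb cb1, cb2;
        memset(&cb1, 0, sizeof cb1);
        memset(&cb2, 0, sizeof cb2);

        cb1.aio_fildes = fd; cb1.aio_buf = buf1;
        cb1.aio_nbytes = sizeof buf1; cb1.aio_offset = 0;

        cb2.aio_fildes = fd; cb2.aio_buf = buf2;
        cb2.aio_nbytes = sizeof buf2; cb2.aio_offset = 1 << 20; /* 1 MiB in */

        /* Both reads are in flight at once, at different offsets;
         * epoll-driven read() calls on a file would run back to back. */
        aio_read(&cb1);
        aio_read(&cb2);

        const struct aiocb *list[2] = { &cb1, &cb2 };
        while (aio_error(&cb1) == EINPROGRESS || aio_error(&cb2) == EINPROGRESS)
            aio_suspend(list, 2, NULL);

        printf("read %zd and %zd bytes\n", aio_return(&cb1), aio_return(&cb2));
        close(fd);
        return 0;
    }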
I've been dying to implement a chat server in C and Winsock for a long time now, but I haven't taken the time, partly because I'm still unsure about some of the conceptual ideas of building a server on Windows OSes for something like chat.
Here are some of the issues I've been thinking about:
How will user x connect to my server over a generic LAN without me relying on them to type in a network address (e.g. the address could be invalid, redirected to a different server, etc.)?
If I use broadcasting to solve the above problem, will that be reliable enough for chat?
Will that potentially DoS a LAN, since the packets will be forcibly handled by every machine and may take a lot of bandwidth if enough people join?
What is the difference between multicasting and broadcasting? Is multicasting truly superior?
Etc.
Per request, my definition of reliability would be that most of the data in sent packets arrives consistently. In other words, I don't mind a few dropped packets, but I do mind if the data gets messed up quite a lot along the way.
Currently, I have a lot more questions than answers, but the main point I'm getting at is this :
What is the safest and most reliable way of implementing a chat over a LAN in C and Winsock?
How will user x connect to my server over a generic LAN without me relying on them to type in a network address (e.g. the address could be invalid, redirected to a different server, etc.)?
Use a closed list of known servers, or use some broadcast-based autodiscovery system.
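For the autodiscovery route, here is a minimal Winsock sketch of the client side: it broadcasts a probe on a well-known port and would then wait for the server's unicast reply, which reveals the server's address. The port number and payload are arbitrary examples:

    #include <winsock2.h>
    #include <stdio.h>

    #pragma comment(lib, "ws2_32.lib")

    int main(void)
    {
        WSADATA wsa;
        if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

        SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
        BOOL on = TRUE;
        setsockopt(s, SOL_SOCKET, SO_BROADCAST, (const char *)&on, sizeof on);

        struct sockaddr_in dst = {0};
        dst.sin_family = AF_INET;
        dst.sin_addr.s_addr = htonl(INADDR_BROADCAST);  /* 255.255.255.255 */
        dst.sin_port = htons(5001);                     /* example port */

        /* Every host on the broadcast domain sees this probe; only the
         * chat server answers, revealing its address for later use. */
        const char probe[] = "CHAT_DISCOVER";
        sendto(s, probe, sizeof probe - 1, 0,
               (struct sockaddr *)&dst, sizeof dst);

        /* A recvfrom() here would capture the server's unicast reply. */
        closesocket(s);
        WSACleanup();
        return 0;
    }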
If I use broadcasting to solve the above problem, will that be reliable enough for chat?
Define your requirements for reliability.
Will that potentially DoS a LAN, since the packets will be forcibly handled by every machine and may take a lot of bandwidth if enough people join?
It's a chat... the generated traffic will be comparatively infrequent and small.
What is the difference between multicasting and broadcasting? Is multicasting truly superior?
Search the web. There are lots of resources and information about multicasting, more precisely IP multicasting. In short:
Broadcast delivers to all hosts on a broadcast domain. Multicast delivers to all hosts that have explicitly joined a multicast group, which need not be in the same broadcast domain (see the last point).
Broadcast forces a switch to forward broadcast packets to all its interfaces. Intelligent switches can peek at IGMP packets (IGMP snooping) to learn which interfaces multicast packets have to be forwarded to.
Broadcast cannot pass beyond a broadcast domain. Multicast packets can traverse a router, if it's configured to route multicast (search for MBone).
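To show the receiving-side difference in code, here is a minimal Winsock sketch that joins a multicast group, so the host only receives chat packets it asked for; the group address 239.255.0.1 and the port are arbitrary examples:

    #include <winsock2.h>
    #include <ws2tcpip.h>
    #include <stdio.h>

    #pragma comment(lib, "ws2_32.lib")

    int main(void)
    {
        WSADATA wsa;
        if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

        SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

        struct sockaddr_in local = {0};
        local.sin_family = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);
        local.sin_port = htons(5000);               /* example port */
        bind(s, (struct sockaddr *)&local, sizeof local);

        /* Join the group: unlike broadcast, only hosts that perform
         * this join receive the group's traffic. */
        struct ip_mreq mreq;
        mreq.imr_multiaddr.s_addr = inet_addr("239.255.0.1");
        mreq.imr_interface.s_addr = htonl(INADDR_ANY);
        setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                   (const char *)&mreq, sizeof mreq);

        char buf[512];
        int n = recv(s, buf, sizeof buf, 0);        /* one datagram */
        if (n > 0) printf("got %d bytes\n", n);

        closesocket(s);
        WSACleanup();
        return 0;
    }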
I'd like to make an app to send/receive text messages using a GSM modem. However, I've seen that a modem can only receive/send about 8-10 text messages per minute. So if I receive 200 incoming text messages within a 10-minute span (like I'm at a conference and I ask people to sign up), do they get queued up on the modem? Do I have to deal with it in my application? Are they queued up by AT&T (or some other wireless carrier)? Is there a maximum length to the queue? Any help would be great.
Thanks!
You should not have to worry about it.
You can set up the modem to store the messages internally, as opposed to on the SIM, by setting the preferred storage location to "ME". This gives you whatever memory resources are built into the modem, way larger than what's available on the SIM.
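For reference, that preferred-storage setting is the standard AT+CPMS command from 3GPP TS 27.005. A minimal sketch, assuming fd is an already-open serial connection to the modem:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Select the modem's internal memory ("ME") for reading, writing,
     * and receiving SMS. A real program would parse the "+CPMS: ..."
     * response and wait for the final "OK". */
    void select_modem_storage(int fd)
    {
        const char cmd[] = "AT+CPMS=\"ME\",\"ME\",\"ME\"\r";
        write(fd, cmd, strlen(cmd));

        char resp[128];
        ssize_t n = read(fd, resp, sizeof resp - 1);  /* naive single read */
        if (n > 0) { resp[n] = '\0'; printf("modem: %s\n", resp); }
    }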
If that fills up, the next time the device receives a message it can't handle, it will report that fact back to the SMSC, which will record the failure and add the message to a retry queue.
There is no standard for the retry policy, so the carrier may retry a few minutes later, gradually increasing the interval as failures build up, up to a total retry time equal to the "validity period" set on the sender's handset, or the carrier's own upper limit.