Issue
I have two Debian servers located on the same subnet, connected by a switch. I am aware that UDP is unreliable.
Question 1: I assume the link layer is Ethernet, and the MTU for standard Ethernet is 1500 bytes. However, when I pinged one server from the other, I found that the maximum packet size that can be sent is 65507 bytes. Shouldn't it be 1500 bytes? Can I say that, because there is no router between these two servers, the IP datagram will not be fragmented?
Question 2: Because the two servers are directly connected through a switch, can I assume that all datagrams arrive in order and that none are lost on the path?
Question 3: How can I determine the chance of a datagram being dropped at the server because of receive-buffer overflow? What size should I set the receive buffer to so that datagrams do not overflow it?
Solution
No. UDP is not even reliable between processes on the same machine. If packets are sent to a socket faster than the receiving process reads them, the receive buffer will overflow and packets will be lost.
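Regarding the buffer-size part of Question 3: no setting guarantees zero loss, because a receiver that is slower than the sender will eventually overflow any finite buffer. You can, however, enlarge the default and verify what the kernel actually granted. A minimal sketch, assuming Linux and an already-created UDP socket descriptor; the 4 MB figure is an arbitrary example:

```c
#include <stdio.h>
#include <sys/socket.h>

static void grow_rcvbuf(int sock)
{
    int requested = 4 * 1024 * 1024;   /* example: ask for 4 MB */

    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
                   &requested, sizeof(requested)) < 0)
        perror("setsockopt(SO_RCVBUF)");

    int granted = 0;
    socklen_t len = sizeof(granted);
    if (getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &granted, &len) == 0)
        printf("receive buffer is now %d bytes\n", granted);
}
```

On Linux the granted value is capped by the net.core.rmem_max sysctl (and the kernel reports roughly double the requested value to account for bookkeeping overhead). Kernel-side drops caused by a full buffer are counted as RcvbufErrors in /proc/net/snmp, which is the easiest way to observe whether overflow is actually happening.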
You did your ping test with fragmentation enabled, and ping doesn't use UDP but ICMP, so the result tells you nothing about UDP. The 65507 figure is simply the largest payload that fits in a 65535-byte IP datagram after the 20-byte IP header and the 8-byte ICMP (or UDP) header; the kernel fragmented it into MTU-sized pieces on the wire. UDP packets no larger than the MTU will not be fragmented, but the usable MTU depends on more factors, such as IP options and VLAN headers, so it may be less than 1500 bytes.
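You can check the fragmentation behavior with UDP itself rather than ping by setting the DF bit on the socket, so oversized sends fail with EMSGSIZE instead of being silently fragmented. A sketch, assuming Linux (IP_MTU_DISCOVER is Linux-specific); the destination address and port are placeholders:

```c
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    /* Linux-specific: set DF so the kernel refuses to fragment. */
    int pmtu = IP_PMTUDISC_DO;
    setsockopt(sock, IPPROTO_IP, IP_MTU_DISCOVER, &pmtu, sizeof(pmtu));

    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9999);                     /* placeholder port */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr); /* placeholder peer */

    /* 1473 bytes of UDP payload cannot fit one 1500-byte Ethernet frame:
       1500 - 20 (IP header) - 8 (UDP header) = 1472 bytes maximum. */
    char buf[1473] = {0};
    if (sendto(sock, buf, sizeof(buf), 0,
               (struct sockaddr *)&dst, sizeof(dst)) < 0)
        printf("send failed: %s\n", strerror(errno)); /* expect EMSGSIZE */
    return 0;
}
```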
No. Switches perform buffering, and it is possible for their internal buffers to overflow. Consider a 24-port switch where 23 nodes all transmit as fast as possible to the remaining node: the link to that node clearly cannot carry the aggregate traffic of the other 23 links, so the switch will buffer packets for a while but will eventually drop them.
Besides that, electrical noise can corrupt packets in transit, causing them to be discarded when the checksum fails.
To analyze the chance of buffer overflow, you can use queuing theory to find the probability that a packet arrives while the buffer is full. You will need some assumptions about the probability distributions of packet arrivals and of processing time. The number of packets in the buffer then forms a finite chain, hopefully Markov, which you can solve for the steady-state probability of each state in the chain. Good search keywords to find out more are "queuing theory", "Markov chain", "call capacity", "circuit capacity", and "load factor".
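As a concrete, deliberately simplified starting point, a finite receive buffer fed by random arrivals is often modeled as an M/M/1/K queue, whose steady-state drop probability has a closed form: P_drop = (1-ρ)ρ^K / (1-ρ^(K+1)) with ρ = λ/μ. A sketch with made-up arrival and service rates; real traffic is rarely this well-behaved, so treat the numbers as illustrative only:

```c
#include <math.h>
#include <stdio.h>

/* P(an arriving packet finds all K slots full) for an M/M/1/K queue,
   where lambda is the arrival rate, mu the service (drain) rate and
   K the total capacity in packets. */
static double mm1k_drop_prob(double lambda, double mu, int K)
{
    double rho = lambda / mu;
    if (rho == 1.0)
        return 1.0 / (K + 1);
    return (1.0 - rho) * pow(rho, K) / (1.0 - pow(rho, K + 1));
}

int main(void)
{
    /* Example: 8000 packets/s arriving, 10000 packets/s drained. */
    for (int K = 8; K <= 512; K *= 2)
        printf("K = %4d  ->  drop probability %.3e\n",
               K, mm1k_drop_prob(8000.0, 10000.0, K));
    return 0;
}
```

Doubling K shows how quickly the drop probability falls once the service rate exceeds the arrival rate, which is the quantitative version of "what size should the buffer be".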
EDIT: You changed the title of the question. The answer to the new question is: "You can't prove something that isn't true." If you want to build a reliable application on top of UDP, you have to add your own acknowledgement and loss-handling logic, along the lines of the sketch below.
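A minimal sketch of what that logic can look like: stop-and-wait with one sequence number per datagram, a receive timeout, and bounded retransmission. Everything here (names, packet format, 200 ms timeout, retry limit) is illustrative, not a standard protocol:

```c
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>

/* Send `len` bytes with sequence number `seq`; block until the peer echoes
   the same sequence number back, retransmitting on timeout. Returns 0 on
   acknowledged delivery, -1 on give-up. */
static int send_reliable(int sock, const struct sockaddr_in *peer,
                         uint32_t seq, const char *data, size_t len)
{
    char pkt[1472];                       /* fits one Ethernet frame */
    if (len > sizeof(pkt) - sizeof(seq))
        return -1;

    struct timeval to = { .tv_sec = 0, .tv_usec = 200000 }; /* 200 ms */
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &to, sizeof(to));

    memcpy(pkt, &seq, sizeof(seq));       /* 4-byte header: sequence no. */
    memcpy(pkt + sizeof(seq), data, len);

    for (int attempt = 0; attempt < 5; attempt++) {
        sendto(sock, pkt, sizeof(seq) + len, 0,
               (const struct sockaddr *)peer, sizeof(*peer));

        uint32_t ack;
        if (recv(sock, &ack, sizeof(ack), 0) == (ssize_t)sizeof(ack)
                && ack == seq)
            return 0;                     /* acknowledged */
        /* timeout, or a stale/duplicate ack: fall through and resend */
    }
    return -1;                            /* too lossy, give up */
}
```

The receiver mirrors this: read the datagram, echo the 4-byte sequence number back, and discard duplicates. Real designs add sliding windows, RTT-based timeouts, and congestion control, which is roughly the point at which TCP starts looking attractive.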
Answered By - Ben Voigt