Week 9 & 10: The Final Product

About the Final Product

After 10 weeks, this post will summarize my goals and my accomplishments in this project. This does not necessarily mean that I will be ending the project, but it does mean that progress on this topic will be paused for a long while, or until I find a topic within the same realm that requires it. With that said, here’s a summary of my adventures into Reliable-UDP.

The Final Product

About Reliable-UDP

The goal of this project was to develop a protocol best suited for the fast-paced multiplayer online games that are common today. The protocol I chose to develop is Reliable-UDP, a transport layer protocol that aims to send data in a message-oriented fashion while keeping that data reliable. Compared to the TCP and UDP protocols, it tries to take the best of both and combine it into one protocol to be used in practice.

R-UDP Protocol aims to have the following behavior:

  • Reliable Data Transfer. Reliable for this protocol does not mean guaranteed delivery, but rather a guarantee that the sender knows whether or not the message has been delivered. A delivered message is also guaranteed to be sequenced properly and error-free, regardless of its size.
  • Connection-Based. Since the sender must know whether each message has arrived successfully, a connection-based protocol is needed to achieve that behavior.
  • Message-Oriented. Compared to stream-oriented communication (TCP’s approach), message-oriented communication operates on a per-message basis. Messages do not depend on the state of the messages delivered before them.

With these properties, the R-UDP protocol allows the application to determine the priority of the messages being delivered. Knowing whether a message was delivered successfully, the application can decide to re-submit it if it was high-priority, or skip re-submission and continue if it was low-priority.

Reliable-UDP Implementation

In developing the R-UDP protocol, I worked on creating the following:


  • Established TCP / UDP Communication
    Created wrapper classes for the TCP/UDP implementation in the Winsock API and abstracted the protocols to allow easy switching between them through a config file.
  • Established Reliable-UDP Communication
    Developed Reliable-UDP over a UDP socket with TCP behaviors, such as the Three-Way Handshake and the Sliding Window algorithm for sequencing the packets received.
  • Performance Testing on TCP / UDP / R-UDP
    Researched network emulation for performance testing purposes. This allowed me to test rare cases effectively, such as packet loss, packet errors, and low bandwidth. I used a combination of Clumsy for packet reordering and quick testing on the same computer, and NEWT (Network Emulator for Windows Toolkit) for more accurate testing of packet loss, low bandwidth, etc.
  • Server-Client Visual Simulation
    Using the TCP, UDP, and R-UDP Communication developed, I created a server-client simulation to showcase some of the notable improvements of the different protocols.



Using the Server-Client Simulation, I managed to trigger the states that best represent each protocol’s purpose. The following results were gathered under network conditions of 30% packet loss and 10% packet reordering.

TCP Protocol: Stream-Oriented

TCP is reliable in data transmission, but once the connection is in a poor enough state, the protocol races to keep all the messages it’s receiving reliable. As a result, the application stalls long after the connection is restored to normal.


UDP Protocol: Fast but Unreliable

UDP is blazingly fast. Even under terrible conditions there is no noticeable delay in the retrieval of data. This is great for low-priority data such as movement. However, since there’s no guarantee of delivery and no way to determine a message’s success, important messages (such as the spawning and de-spawning of entities) can be lost in transmission, producing the results seen above. As a result, UDP alone cannot be used in all cases.


R-UDP Protocol: Prioritized Data Delivery

R-UDP performs similarly to TCP. The major difference is seen when transitioning from poor network conditions back to normal. During a poor connection, the application aims to keep the high-priority data (spawns and de-spawns) reliable. If low-priority data (the position of an entity) fails to transmit, it moves on to more relevant messages. As soon as the connection is restored, because the application is only focused on transmitting the high-priority data, it can return to its normal state once that data has been transmitted successfully.


On a Final Note

It’s been debated whether it is worth using protocols such as R-UDP when one can simply use a combination of TCP and UDP. I would argue that using two protocols per client is not wise, as that requires twice the code and twice the sockets and ports needed for communication, and it will not allow for as much flexibility as a protocol designed for your use-case.

Although creating your own protocol might not be an efficient use of your time, there are libraries out there that can be used in place of raw TCP / UDP. One I’ve been meaning to try myself is ENet, which essentially takes the ideals of this protocol but allows for far more flexibility than my implementation currently does. More recently, I’ve caught word of a protocol called QUIC that might pick up as a suitable option over TCP, UDP, and even R-UDP!

I took the time to develop my own protocol solely for learning purposes. Creating R-UDP taught me how to use TCP and UDP, and how TCP works internally to make it what it is. However, if I were to use such a protocol in practice, it would likely come from a more reliable library, and not be of my own design.

With that said, I hope you, the readers, have enjoyed my ventures and discussion of the importance of protocols in your future applications.



Week 8 (cont.): The Socket Architecture

The New Socket Architecture

I promised a discussion of the code for designing the RUDP, and here it is. This post will break down all the components of my current library relating to the RUDP implementation.


Before beginning with the Socket, I want to briefly discuss the SocketException code. It helped shape the structure, since error checking no longer depended on specific return types or Winsock’s C-style error code system (see WSAManager for details). With exceptions, functions became cleaner and simpler to manage. The code comes straight from Practical Socket, with the addition of code that now handles WSA error codes differently.



Before this, I had created a class to handle the initialization and clean-up of the Winsock API. The Winsock API requires that the user call WSAStartup() and WSACleanup() when beginning and ending use of the API. The Unix implementation does not require this, so when developing for both Windows and Unix, it would be strange to make start-up and clean-up calls on the Unix side when none are required. Although I’m not designing for Unix, I wanted to hide this concept from the user of my library anyway. I approached this through a Singleton instance that never has to be touched by the user. This class is the WSAManager.


The way this class hides the call to WSAStartup() is through the magic of static instances. Notice the variable _instance. It is not a pointer to a class instance, and that is important because I want the instance to be created during static construction. Creating an instance during static construction guarantees that the variable will be created before the main() function gets called. Knowing this, I create an instance of the WSAManager (the only instance, since the constructor is private), and when the constructor of the WSAManager runs, it makes its call to WSAStartup(). In the very same fashion, the destructor gets called after main(), allowing the Winsock API to perform its clean-up.

Constructor/Destructor call to hide the Startup/Cleanup

This is what the WSAManager was initially used for, but with the improvements to the code, I decided for the WSAManager to handle another annoying aspect of the API: error codes.

In the Winsock API, whenever a function fails, it returns a value that is recognizably bad (-1 for some functions, sometimes 0 for others). When this happens, the function generates an error code for the respective error, but instead of passing the error code through the function directly, the API takes a more indirect approach and expects a user interested in the type of error to call the function WSAGetLastError().

There are some notable problems with this method. First, and most apparent, is that it returns an integer representing an error code. It’s understandable that this was needed in the days of C, but it becomes tedious to work with, as there are 117 different Winsock error codes (all listed on the WSA Error Codes page). What makes it worse is that they each have their own unique number (not contiguous), so I couldn’t simply index the meaning of their values. To solve this dilemma of unreadable code, I used a hashmap to map each error code to the details of its respective WSA error code type. This error code information was wrapped in a class, fittingly called WSAErrorCodeInfo.


Using the WSA Error Code page as a reference, I designed the class to store all the information that page had to offer. With this class, I was able to initialize the map in the WSAManager with all the information needed for each error code.

Defining 117 different WSA Error codes easily with the help of hash maps and macros.

There is one more annoying quirk of the WSA error code system that became apparent upon implementing this: WSAGetLastError() only worked from the stack frame where the failing function was called. This may have been purposefully designed, but because of this behavior (however it is achieved), the SocketException cannot retrieve the error message through that call. I therefore needed a way to retrieve the error code for the SocketException. This is what the StoreLastErrorCode() function was designed for. Let’s take a look at the code snippet:

static void StoreLastErrorCode(int wsaError = WSAGetLastError());

This function essentially mimics the behavior of the WSAGetLastError() function, but it allows the error code to be retrieved within the same stack frame. Here’s how it’s used:

// Try to connect to the given port
int result = connect(mSocket, (sockaddr *)&destAddr, sizeof(destAddr));
if (result == SOCKET_ERROR)
    WSAManager::StoreLastErrorCode(); // default argument evaluates WSAGetLastError() at this call site

In this example, I’m storing the last error code generated from the connect() call, but without exposing the fact that I’m calling WSAGetLastError(). This is because the default value of the parameter is a call to that very function, and since the parameter is initialized on the caller’s stack frame, the function is able to retrieve the error code generated. Had I moved the call to inside the function, to truly hide its use, it would return zero as the error code (this was tested, and I have yet to learn why and how this happens).

With the error code retrieved, the SocketException class can now retrieve the error code information for that error code and create a very informative message as a result.

Socket abstract class

Socket skeleton code (header file)

This is where the meat of the architecture lies. This abstract class defines how any type of socket behaves. Unlike Practical Socket, we assume all sockets are communicating sockets.

The standard operations all sockets should have are as follows:

  • Know its own local address and port
  • Know the remote client’s address and port (with which the socket is communicating)
  • Know if this socket is open to communication
  • Close the socket from communications

Being an abstract class, it forces the children to define how that type of socket communicates. As such, the following functionality needs to be defined:

  1. Connect with a remote client (given their address and port)
  2. Send data to the remote client
  3. Receive data from the remote client

Depending on the type of socket, these three operations will have their own unique implementations.

In this case, I chose to implement three different types of sockets: TCP, UDP, and R-UDP. They will be explained further in their respective sections on TCPSocket, UDPSocket, and RUDPSocket.


ServerSocket interface


This class is not abstract, but rather an interface. I considered making it inherit the Socket class, but that would force duplicate code, as the programmer would have to re-code Connect / Send / Receive for the ServerSocket as well as the Socket class.

When using this interface, the ServerSocket for a given type inherits both the Socket of that type and this interface to define the ServerSocket for that type.

Any server socket really only needs one function: the Accept() call. It is assumed that any process such as opening the socket, binding, and listening on the port will be handled in the constructor or before the call to Accept(). GetListeningPort() is pure virtual in case the listening port is needed. Even though the Socket class has a GetLocalPort() call (which will likely be the listening port), I make sure to enforce it in case the ServerSocket does not inherit the Socket class of that type (which it should).

TCPSocket and TCPServerSocket classes


TCPSocket is as straightforward as it gets. Connect(), Send(), and Receive() call their respective connect(), send(), and recv() functions in the Winsock API for TCP communication.


In a similar fashion, Accept() calls the accept() function in the Winsock API, but not before setting the listening port on construction.

UDPSocket and UDPServerSocket classes


Because UDP is not connection-oriented, I created a pseudo-connection between the two clients. That means I keep a state on the socket: whether it should be connection-oriented, or whether it should behave like normal UDP communication. If it’s only being used for plain UDP communication, the usual functions are provided in the non-overridden calls SendTo() and ReceiveFrom() (which, of course, call the sendto() and recvfrom() of the Winsock API). A caveat is that ReceiveFrom() does not return the remote client’s address and port through the function; instead, the user simply calls GetRemoteAddress() and GetRemotePort() (functions that the Socket class provides).

If, however, the application wishes to define this UDP socket as a connection, the user simply calls the Connect() function, which changes the behavior of this class to the following:

  • Send() and Receive() will now send to the “connected” client address and port.
  • SendTo() and ReceiveFrom() will behave as normal, except that ReceiveFrom() will no longer set a new remote client address and port; it will only receive the data.
  • GetRemoteAddress() and GetRemotePort() will always return the “connected” client address and port.

Note that using the Send() and Receive() functions before calling Connect() will use the address and port stored from previous calls to SendTo() and ReceiveFrom(); if there were no such calls, the behavior is undefined (undefined only because I’m not willing to look into what actually happens on null sends).


The UDPServerSocket does nothing more than wait for a message to be received from a remote client (using ReceiveFrom()) and establish that client as the connection.


This post talked mainly about the Socket architecture that was used throughout the code. To keep this post short, I will complete this discussion in a later post, where I talk in depth about what it took to design the RUDPSocket class.

Week 8: Implementing a Sequencer, Part 3


Before I discuss the code, I feel the need to discuss a bug which resulted in a majority of the changes (that I will talk about), but which also might show up for others who attempt to work with networking. I had overlooked this problem for a while to make some progress, but as the sequencer came to completion it became apparent that it needed fixing once and for all.

The bug is best described as follows: two clients that communicate with one another will, in certain cases, be unable to receive messages from the client they have sent to. I tested on the same computer with no sign of the bug, and when communicating with the listening socket across different computers everything worked fine. When a new socket is created on the server side, however (to become the socket that communicates with the client), the bug gets triggered.

My temporary fix was to simply use the listening socket as the communicating socket, but of course that needed to change eventually. After numerous attempts at solving the problem, I decided to assume that the way I was using Winsock was incorrect and overhauled my entire way of communicating using reliable source code found online.

I decided to substitute my UDP implementation with the tested UDP implementation from Practical Socket by Michael J. Donahoo and Kenneth L. Calvert. After verifying (in a separate test project) that this code does in fact work, I rewrote my code to see if the problem persisted. Unfortunately the code behaved the same way, still a one-sided connection. However, doing this led me to suspect an outside influence at play. In the end, it was discovered that Windows Firewall had been blocking my program all along. Once it was disabled, the program worked just as it should. Why it didn’t cause issues when I ran it on the same computer (not loopback), I’m still uncertain…

Architectural Changes

Given that most of the code base had been overhauled with this new implementation (at least the networking side of it), I decided to keep working with this source code to avoid any future hassle that would derive from my own attempts at networking. Having dealt enough with the C functions, it felt right to abstract those function calls with their help. The way their code was designed also inspired me to structure mine similarly, for several reasons. Compared to their structure, I did not enjoy the Server-Client-Stream design I had chosen. Initially, I designed it so that the server application had a Server instance to do the listening and accepting of incoming clients, producing a Stream instance used to communicate with the new client. In a similar fashion, the Client class would establish a connection with the server and produce its own Stream instance to communicate with the server. There were two main problems with this, the biggest being that the concept of the socket itself was dealt with by three separate classes instead of one.

The Server and Client classes’ intent was to build a socket and connect, while the Stream acted as a container for the socket to be passed into, calling the respective sending and receiving functions. This idea of passing sockets from class to class did not sit well with me, as it did not make sense to split the behavior of the socket across three classes. The way Practical Socket handled it, only the Socket class dealt with socket operations. Connections were also handled within the Socket class, as they should be. When I finished developing the Client class, the class itself never needed to become an instance. Since it was only needed to create a new connection to the server, the class became a glorified function call; heck, it was basically a Socket Factory in disguise. Libraries such as Practical Socket, on the other hand, build the connection step into the Socket class itself (a function fittingly called Connect()). I didn’t need a separate class to create the socket, nor one to establish a connection when a function call would do the trick. It made sense that a separate Client class wasn’t needed.

Removing the Server class was not an option, though. The server behaves differently in that it needs to listen for connections as well as create sockets for those new connections. The Server class I had designed followed the typical style of servers: creating the socket, binding it to a port, listening on the port, and accepting incoming connections were each their own functions, mimicking the steps of a server. In the end, however, I only ever needed two calls: one function to create a socket, bind, and listen on a port, and another to accept new clients. When revamping the code, the Server class ended up needing only an Accept() function, with the create-bind-listen process done on construction. My old Server class had a further flaw in that it wasn’t reusing code, since it built the listening socket from scratch and then created new sockets separately for each new connection; the new code uses a single function call to create the sockets used for incoming connections. The Server class became a socket of its own, now called the ServerSocket class, so that the creation of the listening socket happens when the server instance is created. Whenever a new socket is needed for a connection, I simply create a new socket instance.

As you can tell, the Practical Socket source code had a major influence on the current design of my RUDP structure. I also took in other concepts, such as throwing exceptions instead of using C-style error codes, and shifting to strings and integers for addresses and ports instead of the clunky struct sockaddr that the Winsock API uses. Looking at the code I have now, porting the Winsock API into modern C++ is what I should have done from the start.


In the end, this venture led to great improvements in my code, which I will take the time to discuss in the coming posts. It seems like every post I promise code, but this architectural change is not something I wish to cram into one post, especially since I’d only hold these updates back if I did. Be patient with me and you’ll get the discussion you’ve been waiting for. Stay tuned!

Week 6 & 7: Implementing a Sequencer, Part 2


Before I continue, I would like to answer what you’re all thinking: yes, the Sequencer did take me three weeks to develop. Most of what I said about the Sequencer in Part 1 still holds true: the sequencer is the driving mechanism that creates the desired protocol. At that point, I knew this wasn’t going to be a week’s task, as the challenges I foresaw then remained true through completing the sequencer. The larger challenge, however, came in assuring that the states between the two clients were communicated effectively. I would like to go further into this and the process I had to undergo to create this sequencer.


In Developing a Sequencer

Part 1: Sending and Receiving Packets in tandem

Before I could even develop a sequencer, I had to make sure that the method by which I communicated between the clients would allow me to both send and receive data / packets. Of course, UDP is built to do just that: send and receive packets. However, when developing this communication prior to the sequencer, managing what data was received was not a concern. My first attempt at a UDP client-server program expected the data to be sent and received on the other side. Because of this, after sending a packet, my program would wait for a message to be returned. This idea of sending then waiting will not work, as UDP is not intended for reliable sends. Any UDP packet that does not make it through will leave the client waiting for a packet it will never receive, and thus make the other client wait for a packet it expects in return. This is essentially a deadlock for networking.

So, to address this problem, I had two options. The first option was to create separate threads for sending and receiving packets. This is the ideal way to handle this; however, I did not want to add complications to this project. The second option, then, was to essentially make the receiving of packets a non-blocking call. This is simple enough, but it means I have to both send and receive packets in the same loop body, while handling the added state of the receive function returning where it would otherwise block. Of course, I ended up choosing this option.

Part 2: Establishing the Packet Data

If you’re creating a “unique” sequencer like I am, and not re-making an existing one, you’ll be returning to this part frequently as you consider the data the sequencer needs to function reliably. In Part 1, I mentioned a few pieces of data essential to the sequencing algorithm: the sequence number, the acknowledgement number (remote sequence number), and the acknowledgement bitfield storing the state of the 32 acknowledgements before it (in its 32 bits). The concept of the sequence number comes straight from TCP’s design, as sequence numbers are used to guarantee order and detect packet loss and duplicates: essentially everything needed to assure reliability (if managed right). One thing not considered in this design, however, was the idea of retrieving the complete data from the transmission, a feature TCP leaves to the application.

In TCP, order is guaranteed and the data received is reliable, but TCP in no way guarantees that a single receive returns the entire message. A single receive has the potential to return multiple messages (or part of one), and the application has to handle it accordingly. UDP does not behave that way: a datagram delivers all of its data, or none of it. Since I’m creating Reliable UDP, I wanted to incorporate this guaranteed completeness into the protocol. That required an extra value, which I called the fragment count, and which I will explain in the next part.

Part 3: Fragmentation

Of all the parts I bring up, this is one of the easier ones to implement. The fragmentation process is important to make sure all packets can pass through the Maximum Transmission Unit (MTU) built into routers. If a packet becomes larger than that specified size, the router will operate on it in order to send it along properly. This could mean the packet gets fragmented by the router (using the IP protocol’s fragmentation method), or the router may simply drop the packet. To avoid the latter, this protocol incorporates its own fragmentation and keeps packet sizes below the expected MTU (around 1200 bytes to be safe).

In doing this, I need an extra value in my protocol packet to inform the receiving end whether it is receiving a whole packet or a fragment of one. This is where the fragment count is used, alerting the receiver to how many fragments to expect. The same value also lets me implement complete data retrieval: if the message does not transmit completely (from packet loss), I don’t need to depend on the other end to tell me what its sequence number will be. With the fragment count, I can add the count to both the sequence number and the remote sequence number, putting those numbers in the state they would be in had all the packets arrived. Knowing that, I can safely sequence the data together, or ignore the data if unsuccessful.

At this point, you can finally work on the sequencer. But wait! There’s a catch…

Part 4: Creating Two Sequencer Algorithms

When developing a sequencer, you must be aware that the sequencer has two sides to work with: the sender and the receiver. The receiver, of course, needs a sequencing algorithm to properly order the packets it receives, but just as important is the sender, which needs to track the acknowledged sequence numbers it gets back to assure that the data was sent reliably. The sender’s sequencer doesn’t need to sequence any data; however, without a system on the sending side to track acknowledgements, the sender would never know whether the receiver successfully received the data. Both systems are needed to assure reliability, and thus both need to be developed to handle these cases.

Not only that, but to know whether what you wrote even works, you’ll need to complete BOTH of them so you can actually test them against each other. It’s either that, or you write your sends and receives in a way that lets you pass in your own packets, and then create unit tests that send packets in a given sequence to cover the varying conditions to expect. Who has time for that?*

*Seriously though, who has time? I didn’t. If you do, though, I would suggest this approach, as it’s a lot more reliable and allows reproducible and diverse test cases. Remember that there is a time aspect involved in the sequencer, which your unit tests will have to reproduce.

Part 5: Maintaining Two Acknowledgement States

In the process of sending and receiving, the algorithm always has to maintain the state of the packets received from, as well as sent to, the other client. It makes sense to keep the acknowledgement state of the packets you send, as received packets return those acknowledgement numbers for exactly that purpose. However, those acknowledgements only serve the sending client, letting it know its packets were received. What about the receiving side? If you don’t record the packets you receive from the other client, you’ll have no way of telling them that you received their data.

Before you go thinking this is easy: if you don’t name your variables right, you can easily confuse the two acknowledgement states. Since these states are used to determine the completion of a message, mixing them up can only end badly. Also recall that 33 acknowledgement states are being maintained, and ONLY 33. Maintaining two sets of 33 acknowledged packets, while handling newer and older acknowledgements, can become a real head-scratcher. This is where testing would really come in handy (now if only that were possible early on…).

Part 6: Determining Packet-Loss, and other Special Conditions

So now you might have a simple loop that sends a packet, receives a packet, maintains the two acknowledgement states, and returns as complete once those two states deem the message complete. Now what if a packet is lost in the middle of this procedure? The two acknowledgement states will never be complete, and the loop will never allow the procedure to escape!

This is where you have to maintain a maximum time-out period: if a packet is not received within a given time, you must deem the packet as lost. With this time-out period, you can now detect the loss and handle it, rather than waiting on an acknowledgement that will never arrive.

Packet loss is not the only special state either. There can be old packets, duplicate packets, packets from a newer message, random packets from the ether, and each type needs to be identified in order to be handled properly.

Part 7: Testing the Sequencer

As stated earlier, there are many states that both sequencing algorithms can be in, and it would be incredibly difficult and time-consuming to reproduce them all reliably. This was the purpose of looking into network emulation: to test the protocol under various network conditions. Unfortunately the Network Emulator for Windows Toolkit (NEWT) that I was going to use for testing did not work for reordering packets, so I went with an alternative program, Clumsy. Although Clumsy does not provide near-accurate results like NEWT, it does provide working packet reordering and has helped me test my program under those conditions. Check it out if you need a way to test your protocol.



I hope that this discussion showcases most, if not all, of the challenges that had to be overcome in developing a proper sequencer. There are a few more aspects I would have liked to touch on, like congestion avoidance, but I want to continue onward, past the innards of Reliable-UDP, and actually showcase what R-UDP is capable of.

Expect to see a Part 3, where I’ll talk more about the code approaches I took to develop the sequencer, and stay tuned!

Week 5: Implementing a Sequencer, Part 1


Today I will be taking the time to start a small discussion on what a sequencer is in a networking protocol and my current progress in integrating the algorithm into Reliable-UDP. Designing a sequencer turned out to be far more interconnected with the entire design of the protocol than I initially conceived, so I will be discussing it in parts.

What is a Sequencer?

A sequencer is the component that retrieves the multiple packets transmitted over the network and organizes them in sequence to reform the entire contiguous stream of data the packets were intended to deliver. The sequencer is what allows large amounts of data to be sent across the network reliably. This also means that, depending on how the sequencing algorithm handles these packets, it can pretty much define what the protocol is. Some implementations require that every packet sent be acknowledged before the next one goes out. TCP introduced a variant called the “sliding window” to improve the throughput of transmitting packets, and it was a major source of the protocol’s popularity. My Reliable-UDP implementation cannot take either of those approaches, since both would define the protocol as a stream, and that is not what R-UDP was designed to become.

R-UDP’s Sequencer

Given time, I have decided that the Reliable-UDP will have a sequencer that behaves as follows:

  1. Constantly sends packets (not prioritizing the receiving of acknowledgement packets)
  2. Every time a packet is sent (say the Nth packet), check the received packets for acknowledgements of the last K sent packets (where K < N)
  3. After sending the Nth packet, if the (N-K-1)th packet has not been acknowledged, treat that packet as lost; otherwise sequence the data accordingly
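Step 3 could be sketched roughly like this, using hypothetical names of my own. The real sequencer does more (acknowledgement bookkeeping, re-sequencing), but this shows the “lost after K more sends” rule in isolation:

```cpp
#include <cstdint>
#include <set>

// Hypothetical sketch of an ack window of K packets: when the Nth packet
// is sent, packet (N - K - 1) must already be acknowledged. If it is
// still pending, it is declared lost rather than waited on.
class LossDetector {
public:
    explicit LossDetector(uint32_t k) : k_(k) {}

    // Call when packet `seq` is sent. Returns the sequence number this
    // send declares lost, or -1 if nothing fell out of the window.
    int64_t on_send(uint32_t seq) {
        pending_.insert(seq);
        if (seq >= k_ + 1) {
            uint32_t old = seq - k_ - 1;
            if (pending_.erase(old)) return old;  // never acked => lost
        }
        return -1;
    }

    // Call when an acknowledgement for packet `seq` arrives.
    void on_ack(uint32_t seq) { pending_.erase(seq); }

private:
    uint32_t k_;
    std::set<uint32_t> pending_;
};
```

The appeal of this shape is that loss detection happens as a side effect of sending, with no timer thread anywhere in sight.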

This approach comes from Glenn’s article Reliability, Ordering and Congestion Avoidance over UDP, and it differs widely from the approaches I researched elsewhere. I was skeptical at first about how and why to implement this, mainly because his approach does not directly consider the time taken for packets to be acknowledged, and time is what most sequencer implementations use to determine packet loss. Glenn’s implementation determines time indirectly: it takes the time needed to send N packets, and declares a packet lost if it has not been acknowledged once that much sending has passed. His article discusses the specific reasons for choosing this implementation (and is worth reading to understand mine), but in summary: if we assume a network can send at best 30 packets per second, then we can safely assume that a packet still unacknowledged after 30 more packets have been sent (the equivalent of one second or longer) is lost.

This approach allows the protocol to constantly send packets while still detecting packet loss and maintaining ordering. Another benefit is avoiding a separate thread to handle time-outs for acknowledgement packets, which would make creating the sequencer increasingly difficult; such is the case with high-end sequencer implementations. This implementation also lets the protocol do exactly what it proposes: send data as fast as possible while maintaining reliability. With the protocol making the application aware of packet loss during sending, the application can choose how to respond, either re-sending the message or sending more recent data instead. This is the core of what Reliable-UDP is all about, and it is achieved thanks to this sequencer algorithm.

Breaking up the Sequencer Implementation

When I initially scheduled the sequencer, I treated it as just another component of the protocol. Knowing more now, it’s fair to say that implementing a sequencer is essentially creating the heart of what the protocol will become. The sequencer plays a major role in everything involving the transmission of packets, making fragmentation, packet-loss detection, and congestion avoidance all part of the process of properly sequencing packets. As a result, this week progressed slower than expected. Where I was hoping to complete the sequencer and discuss it in one post, I now have a small portion of it completed, with the rest of the features to come in upcoming posts.

Next week, I will continue working on the sequencer, which should complete the fragmentation and packet-loss detection. After that, I can truly showcase the approaches I took to implementing it. Until then, I hope you enjoyed this discussion of the R-UDP sequencer, and I hope to report back soon!

Week 4: Creating a “Virtual” Connection

Last week, I programmed UDP communication and created a class to serialize a Reliable-UDP packet into bytes that I can transmit across the network. This week, I’m finally putting these two implementations to use by creating what is referred to as a “virtual” connection. This post is more or less directly inspired by Glenn’s article on Virtual Connection, and future posts will draw from his articles as well. If you want more information about this topic, or about my project in general, I recommend reading his articles; they have informed many of the decisions I have made in developing R-UDP.

In a TCP connection, we treat the two clients as constantly keeping a connection with one another, but the internet does not work like that. Instead, what actually travels across the network are packets, which are basically like pieces of mail: each carries an outgoing address, the data, and the sender’s address for the receiver. If you want to create the concept of a connection on top of a system like that, you need to enforce it in the protocol. TCP determines connectivity through a time-out parameter: how recently a packet must have been received from a client for that client to be considered connected (this is what “connection timed out” messages mean for a networked application).

In Glenn’s article, he discusses a simple implementation to establish a virtual connection between client and server: his implementation takes the first packet received from a client and establishes that as the connection. I took it a step further and approached it the way TCP does. TCP establishes a connection through what is called a three-way handshake, which synchronizes data between the two clients before declaring them connected. The purpose of this is for each client to establish with the other the initial sequence number it will be using. This method also assures that both clients can receive each other’s messages, which cannot be made certain with a two-way handshake.
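As a rough sketch of that message flow (my own simplification, not the project’s actual handshake code): the client sends a SYN carrying its initial sequence number, the server replies with a SYN-ACK carrying its own, and the client finishes with an ACK:

```cpp
#include <cstdint>
#include <optional>

// Hypothetical sketch of the three-way handshake messages used to
// exchange initial sequence numbers, modeled on TCP's SYN / SYN-ACK / ACK.
enum class MsgType { Syn, SynAck, Ack };

struct Msg {
    MsgType  type;
    uint32_t seq;  // sender's sequence number
    uint32_t ack;  // acknowledges the peer's sequence number + 1
};

// Server side: given the client's SYN, reply with a SYN-ACK carrying
// the server's own initial sequence number (ISN).
Msg server_reply(const Msg& syn, uint32_t server_isn) {
    return Msg{MsgType::SynAck, server_isn, syn.seq + 1};
}

// Client side: given the server's SYN-ACK, finish with an ACK.
// A stale or mismatched reply yields no message (no connection).
std::optional<Msg> client_finish(const Msg& synack, uint32_t client_isn) {
    if (synack.type != MsgType::SynAck || synack.ack != client_isn + 1)
        return std::nullopt;
    return Msg{MsgType::Ack, client_isn + 1, synack.seq + 1};
}
```

After the final ACK, both sides know the other’s initial sequence number and that messages can flow in both directions, which is exactly what a two-way exchange cannot guarantee.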

This implementation turned out to be a lot harder than accepting the first connection (as he does in his article). The way I had implemented my UDP application was by calling recvfrom() and waiting for a message to be received (essentially a blocking call). This is incredibly unreliable: if even one packet is lost in the process, the handshake (as well as the program) locks up on BOTH ends, since both will be waiting for a packet that will never reach them. Having encountered this problem, I had to re-write the code so that the calls to recvfrom() were non-blocking.

At first I approached it the way Glenn does in his article Sending and Receiving Packets. His implementation calls ioctlsocket() to make all blocking calls (recv(), recvfrom(), accept(), etc.) non-blocking, and then loops calls to recvfrom() until the function returns successfully (the emphasis is on the fact that it polls). Doing this turned a simple send-receive-send exchange into nested for-loops and check after check. Although the checks were necessary, the nested for-loops made it hard to verify the conditions properly and organize the code. Getting a first-draft working version forced me to copy-paste chunks of code in several places.
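ioctlsocket() with FIONBIO is Winsock-specific; on POSIX systems the same non-blocking behavior comes from fcntl() with O_NONBLOCK, after which recvfrom() returns immediately with EWOULDBLOCK when no packet is waiting instead of hanging. A small self-contained sketch (not the project’s code):

```cpp
#include <cerrno>
#include <fcntl.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// POSIX equivalent of the ioctlsocket(FIONBIO) trick from Glenn's
// article. Returns true if recvfrom() on an empty non-blocking UDP
// socket comes back immediately with EWOULDBLOCK instead of blocking.
bool nonblocking_recv_returns_immediately() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) return false;

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;  // let the OS pick any free port
    if (bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        close(sock);
        return false;
    }

    // Switch the socket to non-blocking mode.
    fcntl(sock, F_SETFL, fcntl(sock, F_GETFL, 0) | O_NONBLOCK);

    char buf[64];
    ssize_t n = recvfrom(sock, buf, sizeof(buf), 0, nullptr, nullptr);
    bool would_block = (n < 0) && (errno == EAGAIN || errno == EWOULDBLOCK);
    close(sock);
    return would_block;
}
```

On Winsock, the same condition is reported as WSAEWOULDBLOCK via WSAGetLastError(); the polling loop shape is otherwise identical.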

Another problem I ran into involved something I had not considered when approaching this project as a client-server console application. Normally, the server binds to a port and listens on it for incoming connections from clients. When a packet is received, the connection is established (through the 3-way handshake). At this point, a separate socket would be created to handle requests from the newly connected client, while the socket used to establish the connection moves on to listening for other connections. Unfortunately, this new socket cannot use the same port the listening socket bound to, simply because all messages received on that port will be read by the listening socket only. Not knowing this led me to believe my code was the problem, and it caused a number of unwanted changes to my implementation, including a slew of debug print statements. In the end, having discovered and learned of this issue, I stuck to getting a single connection to one client and got a working virtual connection established from there.

All these unexpected problems led me to focus most of my time this week on completing the virtual connection. As a result, I decided to take extra care to lay out the logic of my code correctly and concisely, so as not to run into complications in the future. I broke the concept down into a flowchart and restructured my code accordingly.

RUDP Flowchart.png

As you can see, the logic ended up becoming a lot more complicated than just a back and forth between messages. The logic on the client-side is more-or-less solid, but I expect the server-side to go through many iterations in the future, especially since this implementation is only handling one connection (Yikes!).

On a final note, I stumbled upon a method that avoids polling recvfrom(), making the code easier on the CPU and avoiding the nasty for-loops with breaks. The Winsock library offers a function called select(), which checks the readiness of a given set of sockets and returns either when one is ready or after a given timeout. In later versions, I would like to integrate select() to improve the performance of my code, but I’ll leave that polish for later. For this week, I consider the virtual connection integration complete.
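A minimal sketch of that select() usage (shown here with POSIX sockets; Winsock’s select() has nearly the same signature). Instead of spinning on recvfrom(), the caller sleeps until data arrives or the timeout elapses:

```cpp
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

// Wait until `sock` has data to read, or until `timeout_ms` elapses.
// Returns 1 if the socket is readable, 0 on timeout, -1 on error.
// select() blocks for at most the given timeout, so no busy polling.
int wait_readable(int sock, long timeout_ms) {
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(sock, &readfds);

    timeval tv{};
    tv.tv_sec  = timeout_ms / 1000;
    tv.tv_usec = (timeout_ms % 1000) * 1000;

    return select(sock + 1, &readfds, nullptr, nullptr, &tv);
}
```

A short timeout makes this behave like a poll; a longer one lets the CPU sleep until a packet actually arrives, which is exactly the improvement over looping on a non-blocking recvfrom().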

Hope you enjoyed this post! In the next post I’ll be working my way towards a proper sequencer to sequence the randomly ordered packets, and finally show the power of the sequence and acknowledgement numbers in use.

Week 3: Serializing Packets

Welcome! In this post, I will be discussing serializing data for packet transmission.

When I talk about serializing data, the data I’m concerned with is the information about the packet being sent, not the data the packet is holding. When creating Reliable-UDP, I need to make sure that the data being received is reliable. Because I’m implementing R-UDP on top of UDP, I must transform the unreliable nature of UDP packets into a reliable stream of data.

Here’s a summary of the issues in more detail. UDP packets (or datagrams) have a limited number of bytes, so to send large amounts of data, the data must be segmented and submitted packet-by-packet across the network. This makes receiving the entire data challenging, as the receiver has no guarantee that the packets will reach them, or that they will arrive in order. The extra challenge comes in handling lost packets, since the sender does not receive an acknowledgement that the packets have arrived. TCP handles all of this under the hood and returns to the user a reliable stream of data. This is what R-UDP needs to do as well.

In order to take segments of data from multiple packets and combine them into one unified piece of data, we need to transmit more information than just the data itself. To handle the sliced data, every packet needs to hold information about which segment is being transmitted, so the data can be merged correctly. To handle detecting packet losses, the receiver of this data needs to send information letting the sender know that the packet has been received. Information like this is essential to achieve the goal of streaming data, but there’s one caveat. Remember that sending data over the network means different computers will be handling it, which means the same bytes can be read differently on different computers. Different processors can have different endianness, making it challenging to handle data across the network.
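As a small illustration of the endianness concern (my own example, not from the R-UDP code): writing multi-byte fields in one agreed byte order, conventionally big-endian “network byte order”, lets both peers read the same value regardless of their CPU’s native layout:

```cpp
#include <array>
#include <cstdint>

// Write a 32-bit value (e.g. a sequence number) into a packet buffer in
// network byte order (big-endian), so both peers decode the same number
// whatever their native endianness. Real code would typically use
// htonl()/ntohl() or a serialization library instead of doing it by hand.
std::array<uint8_t, 4> write_be32(uint32_t v) {
    return {static_cast<uint8_t>(v >> 24),
            static_cast<uint8_t>(v >> 16),
            static_cast<uint8_t>(v >> 8),
            static_cast<uint8_t>(v)};
}

uint32_t read_be32(const std::array<uint8_t, 4>& b) {
    return (uint32_t(b[0]) << 24) | (uint32_t(b[1]) << 16) |
           (uint32_t(b[2]) << 8)  |  uint32_t(b[3]);
}
```

Doing this by hand for every field in every packet is exactly the tedium a serialization library takes off your plate, which is where the next paragraphs pick up.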

For this project, I didn’t want to concern myself with handling data across different computers. For this reason, I went about looking for a library that can handle serializing data for me, so that I can put more focus into what data to put in a packet rather than how to read / write data to / from a packet.

I skimmed through multiple libraries, looking for one that keeps data in a compact binary format, is cross-platform (of course), and is easy to use. My first candidate was Google’s Protocol Buffers, which stood out to me for its ease of use (code-wise) along with being compact and cross-platform. The set-up was quite simple: design a “protocol”, or structure, for the data type you want compressed; run an application to parse that format and generate a header file; and then use the header file (along with other files) in your application. Unfortunately, the project was hosted as a Visual Studio project, and in order to get the parser application, you had to build the project in Visual Studio to generate the executable. Since the project hadn’t been updated in a while, building it became a hassle, as it was riddled with errors and warnings about deprecated functions and headers (did you know #include <hash_map> is deprecated in favor of <unordered_map>?). There was also a C++ parsing difference from my projects, as apparently my defines needed spaces before the statement, but I digress. The final nail in the coffin, though, was the test applications generated alongside the parser. The documentation says to run these tests and make sure they all pass, and of course they did not. Further research into each of the testing problems led me nowhere (the closest discussion of my specific test problems came from their “issues” section: https://github.com/google/protobuf/issues/547). After spending more hours than I wanted on these issues, I decided to move on to a different library.

In searching for something besides Protocol Buffers, I found a library that is similar (in fact, it is the successor to Protocol Buffers in some ways). FlatBuffers was the next, and last, library that caught my eye: it works very similarly to Protocol Buffers but is better suited to my needs. A Facebook Code blog post discussed how they used FlatBuffers for their networking needs, which told me this library was built for network communication. FlatBuffers has the same set-up, in that a schema needs to be created and then parsed by their application to generate headers; the only difference is that IT WORKED. I ran into none of the trouble I had building the Visual Studio project and generating the executable, so I managed to use this tool in my application.

And the schema was really easy to pick up. This packet schema was made purely from the example provided on their examples page.

Here’s my Packet schema:

table Packet {
  ...
}

root_type Packet;

Yup, that’s it! (This, of course, is subject to modifications in the future)
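For reference, a filled-in schema of that shape might look like the following. The field names here are hypothetical guesses for illustration, not the actual fields of my R-UDP packet:

```
// packet.fbs -- hypothetical sketch, not the post's actual schema
table Packet {
  sequence: uint;      // sequence number of this packet
  ack: uint;           // latest acknowledged sequence number
  ack_bitfield: uint;  // acks for the 32 packets before 'ack'
  data: [ubyte];       // the payload segment being carried
}

root_type Packet;
```

Running the flatc parser over a schema like this generates a header with typed accessors for each field, which is what keeps the read/write code out of the protocol logic.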

In the end, I got the serialization I needed to create the RUDP packets that let me transmit data across the network safely. With this, I also created an RUDP Packet class to wrap the serialization functionality.

Note: The data and schema are not set in stone; more modification may be required in the future to better support R-UDP.


I hope you enjoyed this post! Expect Week 4’s progress soon, and more frequent posts from this site to come, so that I can keep you readers up-to-date on the progress of this project.