2.3 TCP
Transmission Control Protocol: Traditional Transport Protocols, Transport Protocol Design Issues, WSN Middleware Architecture
The transport layer in a network is responsible for end-to-end segment transportation, where messages are broken down into segments at the source and reassembled at the destination. It doesn't concern itself with the underlying delivery protocols or mechanisms. Two commonly used transport layer protocols are the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP).
TCP is a connection-oriented protocol, meaning it operates in three main phases:
Connection Establishment: The sender initiates a request to establish a connection with the destination. If the destination is available and a path exists between the source and destination, a logical link is established between them.
Data Transmission: Once the connection is established, data transmission begins. During this phase, the rate of transmission may be adjusted based on network congestion. TCP includes mechanisms for packet loss detection and recovery.
Disconnection: After the data exchange is completed, the connection is terminated. Unexpected events, such as the receiver becoming unavailable during transmission, can also lead to disconnection.
UDP, on the other hand, is a connectionless protocol. It doesn't require a connection to be established before data transmission. When the source has data to send, it simply forwards it to the destination without any prior setup.
Another way to classify transport protocols is as elastic or nonelastic. TCP is considered elastic because it allows for the adjustment of the data transmission rate by the sender. UDP, being nonelastic, does not provide this feature.
Connection-oriented protocols like TCP typically offer more services compared to connectionless protocols like UDP. They are preferred when reliable and effective transmission services are crucial for the application, especially in situations where the underlying network lacks such reliability.
The transport layer protocol may support various features depending on the application requirements:
Orderly Transmission: In situations where packets may arrive out of order due to multiple paths or network conditions, the transport protocol can reorder them at the destination. This is typically achieved by including a sequential number in the packet headers, allowing the receiver to sort and reassemble the packets correctly.
Flow and Congestion Control: To prevent congestion and ensure efficient transmission, some transport protocols offer flow and congestion control mechanisms. These mechanisms coordinate the transmission rate between sender and receiver, adjusting it based on network conditions and the receiver's ability to handle incoming data. For example, TCP employs additive-increase/multiplicative-decrease (AIMD) congestion control to adjust its transmission rate when congestion is detected.
Loss Recovery: In scenarios where data loss can occur due to network congestion or other factors, the transport protocol may include mechanisms for loss recovery. This involves detecting lost packets and initiating retransmission to ensure all data is successfully delivered. Sequence numbers in packet headers are often used to detect packet loss.
Quality of Service (QoS): For real-time applications requiring high throughput and low latency, the transport protocol can incorporate Quality of Service considerations into flow and congestion control. This ensures that sufficient bandwidth is allocated to meet the requirements of delay-sensitive applications like video streaming or teleconferencing.
These features are typically utilized during the data transmission phase. For connection-oriented protocols, negotiation and determination of these features may occur during the connection establishment phase, allowing the protocol to adapt to the specific requirements of the application and network conditions.
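The orderly-transmission feature above can be illustrated with a small sketch. This is not a real protocol implementation, just a minimal example of how a receiver can use the sequence numbers carried in packet headers to restore the original order:

```python
def reorder_segments(segments):
    """Restore the original byte-stream order from out-of-order segments.

    Each segment is a (sequence_number, payload) pair; the receiver
    sorts on the sequence number before reassembling the message.
    """
    return b"".join(payload for _, payload in sorted(segments))

# Segments arriving out of order, e.g. after taking multiple paths:
arrived = [(2, b"wor"), (0, b"hel"), (3, b"ld!"), (1, b"lo ")]
print(reorder_segments(arrived))  # b'hello world!'
```

A real transport protocol would additionally buffer segments until the gaps are filled and use the same sequence numbers to detect losses, as described above.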
TCP (Transmission Control Protocol) is a widely used connection-oriented transport protocol on the Internet, providing reliable, orderly, controllable, and elastic transmission of data. Its operation consists of three main phases:
Connection Establishment: TCP establishes a logical connection between the sender and receiver using a three-way handshake. This handshake involves negotiating parameters such as initial sequence number, window size, and others, to ensure both parties are ready for data transmission.
Data Transmission: Once the connection is established, TCP ensures reliable and orderly transmission of data between the sender and receiver. It uses ACK (acknowledgment) packets to recover lost segments and maintains sequence numbers for orderly transmission. TCP also supports flow control and congestion control to adjust the transmission rate based on network conditions. Flow control is window-based: the receiver advertises how much data it can accept, while congestion control separately adjusts the sender's congestion window (cwnd) based on received ACKs.
Connection Termination: After completing data transmission, the connection is terminated, and related resources are released.
Flow and congestion control in TCP involve the following phases:
Slow Start: Initially, all transmissions start with slow start. During this phase, the congestion window (cwnd) increases exponentially for each ACK received, allowing for rapid ramp-up of the transmission rate.
Congestion Avoidance: Once cwnd reaches the slow-start threshold (ssthresh), TCP enters the congestion avoidance state. In this state, cwnd is incremented linearly, slowing down the rate of increase to avoid congestion.
Fast Retransmit and Fast Recovery (FRFT): If sporadic segment losses are detected (for example, by duplicate ACKs), TCP retransmits the missing segment and enters the FRFT state. In this state, cwnd is halved rather than reset to one segment, allowing a quicker recovery than restarting slow start.
These mechanisms ensure flexible flow and congestion control in TCP, allowing for efficient data transmission while adapting to network conditions. Factors such as congestion levels, segment losses, round-trip time (RTT), and segment sizes all influence the behavior of TCP and its throughput. Overall, TCP's design aims to achieve high throughput and reliability in varying network environments.
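The phases above can be sketched as a toy congestion-window trace. This is a deliberately simplified model (one cwnd update per event, hypothetical function names), not the real TCP state machine:

```python
def tcp_cwnd_trace(events, ssthresh=8):
    """Trace cwnd (in segments) through slow start, congestion
    avoidance, and fast recovery.

    `events` is a list of "ack" or "loss" strings; each entry stands
    for one round of feedback in this simplified model.
    """
    cwnd = 1
    trace = [cwnd]
    for ev in events:
        if ev == "ack":
            if cwnd < ssthresh:
                cwnd *= 2          # slow start: exponential growth
            else:
                cwnd += 1          # congestion avoidance: linear growth
        elif ev == "loss":
            ssthresh = max(cwnd // 2, 1)
            cwnd = ssthresh        # fast recovery: halve, don't reset to 1
        trace.append(cwnd)
    return trace

print(tcp_cwnd_trace(["ack"] * 5 + ["loss"] + ["ack"] * 2))
# [1, 2, 4, 8, 9, 10, 5, 6, 7]
```

Note how the trace shows the exponential ramp-up, the switch to linear growth at ssthresh, and the multiplicative decrease on loss, which together form the AIMD behavior described earlier.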
UDP (User Datagram Protocol) is a connectionless transport protocol commonly used for applications that prioritize simplicity and minimal overhead over reliability and ordered delivery. Here are some key points about UDP:
Connectionless: Unlike TCP, UDP does not establish a connection before transmitting data. Each datagram (packet) sent over UDP is independent of others, and there is no concept of a connection or session between sender and receiver.
No Sequence Numbers: UDP datagrams do not include sequence numbers, so there is no guarantee of orderly delivery. Packets may arrive out of order or be lost without any mechanism for recovery.
No Congestion or Flow Control: UDP does not provide mechanisms for congestion control or flow control. This means that UDP applications can transmit data at their desired rate without any throttling or adjustment based on network conditions.
Performance: In situations where both TCP and UDP are present, UDP may outperform TCP due to its lack of overhead from connection establishment, sequencing, and congestion control mechanisms. However, this performance advantage comes at the cost of reliability and ordered delivery.
TCP-Friendly Rate Control (TFRC): To address the limitations of UDP and provide a certain level of control, TCP-Friendly Rate Control (TFRC) has been proposed. TFRC aims to achieve throughput levels similar to TCP while maintaining UDP's connectionless nature. It adjusts the transmission rate dynamically based on feedback from the network, aiming to prevent congestion while maximizing throughput.
Overall, UDP is favored for applications where low latency and minimal overhead are more important than guaranteed delivery or congestion control. Examples include real-time multimedia streaming, online gaming, and DNS (Domain Name System) queries. However, for applications requiring reliability and ordered delivery, TCP remains the preferred choice.
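UDP's connectionless nature is visible directly in the socket API: there is no handshake, and each datagram is sent independently. The following minimal example uses Python's standard `socket` module over the loopback interface:

```python
import socket

# Receiver: bind a UDP socket; no listen/accept step is needed
# because UDP has no connection establishment phase.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))           # port 0 = any free port
port = recv_sock.getsockname()[1]

# Sender: each datagram is forwarded immediately, with no handshake,
# no sequence numbers, and no delivery guarantee.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"sensor reading: 21.5C", ("127.0.0.1", port))

data, addr = recv_sock.recvfrom(1024)
print(data)                                # b'sensor reading: 21.5C'
recv_sock.close()
send_sock.close()
```

On the loopback interface this datagram arrives reliably, but over a real network it could be lost or reordered with no recovery, which is exactly the trade-off described above.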
Mobile IP is a protocol designed to enable seamless mobility for devices in an IP network. Here's how Mobile IP works and some of its key features:
Home Agent (HA): The Home Agent is a router located in the mobile node's home network. It maintains the mobile node's home address and is responsible for intercepting packets destined for the mobile node while it is away from home.
Foreign Agent (FA): The Foreign Agent is a router located in the network visited by the mobile node. It assigns a temporary Care-of Address (COA) to the mobile node when it enters the foreign network and forwards packets between the mobile node and its home agent.
Care-of Address (COA): The Care-of Address is a temporary IP address assigned to the mobile node by the foreign agent when it moves to a new network. It allows the mobile node to receive packets while away from its home network.
Registration: When a mobile node enters a new network, it registers with the foreign agent to obtain a COA. The mobile node then informs its home agent about its current COA.
Packet Forwarding: When packets are sent to the mobile node's home address, they are intercepted by the home agent. The home agent forwards these packets to the mobile node's current COA, which is the address assigned by the foreign agent in the current network.
Triangular Routing: Due to the nature of Mobile IP, where packets are forwarded through the home agent to reach the mobile node's current location, there can be asymmetrical routing, known as triangular routing. This can lead to longer paths and potentially lower efficiency.
Mobility and TCP: Mobility events, such as handoffs between different networks, can result in packet loss and TCP timeouts. This can lead to reduced throughput as TCP sender reduces its transmission rate in response to perceived congestion. Even if the physical link offers sufficient bandwidth, mobility-related issues can affect TCP performance.
Mobile IP provides a solution for maintaining connectivity and enabling mobility in IP networks, but it does come with some inherent challenges, such as triangular routing and potential impact on TCP performance during handoff events. As mobile technologies continue to evolve, Mobile IP and similar protocols will likely undergo further enhancements to address these challenges and improve overall performance and user experience.
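The registration and forwarding steps above can be sketched as a small binding table kept by the home agent. All class, method, and address names here are illustrative, not taken from any real Mobile IP implementation:

```python
class HomeAgent:
    """Sketch of Mobile IP packet forwarding.

    The home agent keeps a binding table mapping a mobile node's
    home address to its current care-of address (COA).
    """

    def __init__(self):
        self.bindings = {}                 # home address -> COA

    def register(self, home_addr, care_of_addr):
        """Registration: the mobile node reports its current COA."""
        self.bindings[home_addr] = care_of_addr

    def forward(self, dest_home_addr, packet):
        """Intercept a packet addressed to the home address and tunnel
        it to the COA; if the node has no binding, it is at home and
        the packet is delivered locally."""
        coa = self.bindings.get(dest_home_addr)
        if coa is None:
            return ("deliver_local", dest_home_addr, packet)
        return ("tunnel_to", coa, packet)

ha = HomeAgent()
ha.register("10.0.0.5", "192.168.7.20")    # node moved to a foreign net
print(ha.forward("10.0.0.5", b"hello"))    # ('tunnel_to', '192.168.7.20', b'hello')
```

The extra hop through the home agent in `forward` is precisely the source of the triangular routing problem: the correspondent's packets always travel via the home network even when a shorter direct path to the COA exists.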
Designing transport protocols for wireless sensor networks (WSNs) requires careful consideration of various factors such as energy conservation, congestion control, reliability, and management. Here are some key points to consider in the design process:
Congestion Control and Reliable Data Delivery:
- WSNs may experience congestion, especially around the sink where data from multiple sensors converge.
- Implement mechanisms for packet loss recovery, such as acknowledgments (ACK) and selective acknowledgments (SACK) similar to TCP's, to ensure reliable data delivery.
- Define what reliable delivery means in the context of WSNs. For some applications, receiving packets correctly from a fraction of sensors may suffice.
- Consider using a hop-by-hop approach for congestion control and loss recovery to conserve energy and reduce packet loss.
Streamlined Connection Establishment:
- Simplify the initial connection establishment process or use connectionless protocols to speed up connection setup, improve throughput, and lower transmission delay.
- Recognize that many WSN applications are reactive and generate only a few packets in response to events.
Avoiding Packet Loss:
- Minimize packet loss to conserve energy, as packet loss translates to wasted energy.
- Implement active congestion control (ACC) mechanisms to trigger congestion avoidance before congestion occurs.
- For example, reduce the sending rate when downstream neighbors' buffer sizes exceed a certain threshold.
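The buffer-threshold rule above can be sketched as a simple rate-adjustment function. The threshold, back-off factor, and ramp factor are hypothetical parameters chosen for illustration, not values from any published ACC scheme:

```python
def adjust_rate(current_rate, neighbor_buffer_fill, threshold=0.8,
                backoff=0.5, ramp=1.1, max_rate=100.0):
    """Active congestion control sketch.

    Multiplicatively reduce the sending rate (packets/s) when a
    downstream neighbor's buffer occupancy (0.0-1.0) crosses the
    threshold; otherwise ramp up gently toward a maximum rate.
    """
    if neighbor_buffer_fill > threshold:
        return current_rate * backoff      # back off before loss occurs
    return min(current_rate * ramp, max_rate)

rate = 40.0
rate = adjust_rate(rate, neighbor_buffer_fill=0.9)   # congested: halved to 20.0
rate = adjust_rate(rate, neighbor_buffer_fill=0.3)   # clear: ramps back up
```

Because the node reacts to the neighbor's buffer state rather than to an already-dropped packet, energy is not wasted transmitting packets that would be discarded downstream.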
Fairness and Cross-Layer Optimization:
- Ensure fairness for different types of sensor nodes in the network.
- Design protocols with cross-layer optimization in mind to leverage information from other layers. For example, if a routing algorithm detects route failure, the transport protocol can adjust its behavior accordingly.
In summary, designing transport protocols for WSNs requires addressing the unique challenges of these networks, such as limited energy resources, intermittent connectivity, and varying application requirements. By considering factors like congestion control, reliable data delivery, connection establishment, packet loss avoidance, fairness, and cross-layer optimization, designers can develop efficient and effective transport protocols tailored to the specific needs of WSNs.
- Wireless Sensor Network (WSN) middleware architecture serves as a bridge between the low-level hardware of sensors and the high-level applications that utilize the data collected by these sensors.
- The role of middleware is to gather information from both the application and the network protocols; it decides how to support the application and adjusts the network protocol parameters accordingly.
- The middleware sometimes interfaces directly with the operating system when passing this information.
- Middleware Layer:
- The core layer of the architecture, providing various services and functionalities to manage and process sensor data.
- Key components include:
- Data Aggregation: Aggregates data from multiple sensors to reduce redundancy and conserve energy.
- Data Fusion: Combines data from different sensors to provide a more accurate and comprehensive view of the environment.
- Routing: Determines the optimal paths for data transmission in the network to minimize energy consumption and maximize reliability.
- Security: Implements encryption, authentication, and access control mechanisms to secure data transmission and protect against unauthorized access.
- Localization: Determines the physical locations of sensors within the network.
- Resource Management: Allocates resources such as bandwidth, memory, and energy efficiently among the sensors.
- QoS (Quality of Service) Management: Ensures that application requirements such as latency, reliability, and throughput are met.
- Application Programming Interface (API): Applications can invoke the API to achieve better performance and network utilization.
- Data Compression
- Data Storage
- Application Layer:
- This layer hosts the actual applications and services that utilize the data collected by the sensors.
- Applications can range from simple tasks such as environmental monitoring to more complex ones like smart agriculture, industrial automation, healthcare monitoring, etc.
- Developers interact with this layer to build custom applications tailored to specific use cases.
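As an illustration of the data-aggregation service listed in the middleware layer above, here is a minimal sketch. It collapses per-sensor samples into one summary record per region, reducing the number of packets forwarded toward the sink; the region and field names are illustrative, not from any real middleware:

```python
from statistics import mean

def aggregate_readings(readings):
    """Aggregate (region, value) sensor samples into one summary
    record per region, trading many raw packets for a few summaries."""
    by_region = {}
    for region, value in readings:
        by_region.setdefault(region, []).append(value)
    return {region: {"count": len(vals), "min": min(vals),
                     "max": max(vals), "avg": mean(vals)}
            for region, vals in by_region.items()}

# Three raw readings collapse into two summary records:
raw = [("north", 21.0), ("north", 23.0), ("south", 19.0)]
summary = aggregate_readings(raw)
print(summary["north"]["avg"])   # 22.0
```

In a real deployment this aggregation would run on intermediate nodes along the routing tree, so redundancy is removed before the data ever reaches the sink.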
In wireless sensor networks (WSNs), middleware plays a crucial role in managing data-related functions, including data dissemination, data compression, and data storage. Let's explore each of these functions briefly:
Data Dissemination: WSNs generate a vast amount of data that needs to be transmitted efficiently to a central node or sink for further processing. Data dissemination protocols facilitate effective transmission of sensor data to the sink. These protocols typically involve two phases:
- The initial phase involves triggering data transmission by the sink, which sends out queries to inform sensor nodes about the transmission requirements.
- The data transmission phase involves sensor nodes reporting data to the sink, with protocols indicating whether data transmission occurs via broadcast or unicast modes. Various protocols like Directed Diffusion (DD), Two-Tier Data Dissemination (TTDD), and Sinks Accessing Data from Environments (SAFE) are used, each with its own approach to optimize data transmission.
Data Compression: Given that communication components consume significant energy in WSNs, data compression techniques are employed to reduce the number of packet transmissions and conserve energy. Features like data correlation, tree-like network topologies, and application semantics enable effective data compression. Techniques such as distributed source coding, data aggregation-based compression, and sampling of random processes are utilized to compress sensor data efficiently.
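The data-correlation idea above can be shown with a simple delta-encoding sketch: because consecutive sensor samples are usually close in value, a node can transmit the first sample plus small differences. This is a stand-in for illustration only, far simpler than the distributed source coding or aggregation-based techniques mentioned:

```python
def delta_encode(samples):
    """Exploit temporal correlation: encode the first sample plus
    the difference between each sample and its predecessor."""
    if not samples:
        return []
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(encoded):
    """Rebuild the original samples by accumulating the deltas."""
    out, total = [], 0
    for d in encoded:
        total += d
        out.append(total)
    return out

readings = [100, 101, 101, 102, 104]
packed = delta_encode(readings)        # [100, 1, 0, 1, 2]
assert delta_decode(packed) == readings
```

The deltas fit in far fewer bits than the raw values, so fewer or smaller packets need to be transmitted, which directly saves radio energy.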
Data Storage: Sensor nodes collect both raw data and analyzed results from sensed events, which need to be stored for future use. Different data storage schemes are employed to address various requirements:
- External Storage (ES): Data is transmitted to an external centralized host for storage.
- Local Storage (LS): Data is stored locally within sensor nodes themselves.
- Data-centric Storage (DCS): Event data is stored based on event type at special "home nodes."
- Provenance-aware Data Storage (PADS): Emphasizes the ability to query the provenance of data, with event data stored locally and index/pointers stored at home hosts.
- Multiresolution Storage (MRS): Data is decomposed and classified into different levels, with varying storage durations for each level.
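The data-centric storage (DCS) scheme above can be sketched by hashing an event type to a deterministic "home node". The node identifiers here are illustrative; real DCS schemes such as geographic hash tables map the hash to a geographic location rather than a node id:

```python
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]   # illustrative node ids

def home_node(event_type, nodes=NODES):
    """Hash the event type to pick the home node where all readings
    of that type are stored, so a query for one event type can be
    routed to a single, deterministic location."""
    digest = hashlib.sha256(event_type.encode()).digest()
    return nodes[int.from_bytes(digest[:4], "big") % len(nodes)]

# Every node computes the same mapping, with no coordination needed:
assert home_node("temperature") == home_node("temperature")
print(home_node("temperature"), home_node("humidity"))
```

Because any node can recompute the mapping locally, both the sensor detecting an event and the sink querying for it agree on where the data lives without exchanging any index messages.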