ConEx                                                  T. Moncaster, Ed.
Internet-Draft                             Moncaster Internet Consulting
Intended status: Informational                            J. Leslie, Ed.
Expires: September 15, 2011                                      JLC.net
                                                              B. Briscoe
                                                                      BT
                                                               R. Woundy
                                                                 Comcast
                                                              D. McDysan
                                                                 Verizon
                                                          March 14, 2011


ConEx Concepts and Use Cases
draft-ietf-conex-concepts-uses-01

Abstract

Internet Service Providers (operators) are facing problems where localized congestion prevents full utilization of the path between sender and receiver at today's "broadband" speeds. Operators desire to control this congestion, which often appears to be caused by a small number of users consuming a large amount of bandwidth. Building out more capacity along all of the path to handle this congestion can be expensive and may not result in improvements for all users, so network operators have sought other ways to manage congestion. The current mechanisms all suffer from difficulty measuring the congestion (as distinguished from the total traffic).

The ConEx Working Group is designing a mechanism to make congestion along any path visible at the Internet Layer. This document describes example cases where this mechanism would be useful.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on September 15, 2011.

Copyright Notice

Copyright (c) 2011 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.



Table of Contents

1.  Introduction
2.  Definitions
3.  Congestion Management
    3.1.  Existing Approaches
4.  Exposing Congestion
    4.1.  ECN - a Step in the Right Direction
5.  ConEx Use Cases
    5.1.  ConEx as a basis for traffic management
    5.2.  ConEx to incentivise scavenger transports
    5.3.  Accounting for Congestion Volume
    5.4.  ConEx as a form of differential QoS
    5.5.  Partial vs. Full Deployment
6.  Statistical Multiplexing over Differing Timescales
    6.1.  ConEx Objectives for This Issue
    6.2.  ConEx as a Solution
    6.3.  Additional Support Using other Measures and Mechanisms
7.  Other issues
    7.1.  Congestion as a Commercial Secret
    7.2.  Information Security
8.  Security Considerations
9.  IANA Considerations
10.  Acknowledgments
11.  Informative References





1.  Introduction

The growth of "always on" broadband connections, coupled with the steady increase in access speeds, has caused unforeseen problems for network operators and users alike. Users increasingly see congestion at peak times, and changes in usage patterns (with the growth of real-time streaming) simply exacerbate this. Operators want all their users to see a good service but are unable to see where congestion problems originate. But congestion results from sharing network capacity with others, not merely from using it. In general, today's "DSL" and cable-internet users cannot "cause" congestion in the absence of competing traffic. (Wireless and cellular-internet operators face different tradeoffs, which we will not discuss here.)

Despite its central role in network control and management, congestion is a remarkably hard concept to define. The discussions in [Bauer09] (Bauer, S., Clark, D., and W. Lehr, “The Evolution of Internet Congestion,” 2009.) provide a good academic background. [RFC6077] (Papadimitriou, D., Welzl, M., Scharf, M., and B. Briscoe, “Open Research Issues in Internet Congestion Control,” February 2011.) defines it as "a state or condition that occurs when network resources are overloaded, resulting in impairments for network users as objectively measured by the probability of loss and/or delay." An economist might define it as the condition where the utility of a given user decreases due to an increase in network load. Common to these definitions is the idea that an increase in load results in a reduction of service from the network.

Congestion takes two distinct forms. The first results from the interaction of traffic from one set of users with traffic from other users, causing a reduction in service (a cost) for all of them. The second, often referred to as "self-congestion", occurs when an increase in traffic from a single user causes that user to suffer a worse service (for instance because their traffic is being "shaped" by their ISP, or because they have an excessively large buffer in their home router). ConEx is principally interested in the first form of congestion, since it involves informing those other users of the impact you expect to have on them.

While building out more capacity to handle increased traffic is always good, the expense and lead-time can be prohibitive, especially for network operators that charge flat-rate fees to subscribers and are thus unable to charge heavier users more for causing more congestion [BB‑incentive] (MIT Communications Futures Program (CFP) and Cambridge University Communications Research Network, “The Broadband Incentive Problem,” September 2005.). Operators also face the challenge that network traffic grows according to Moore's Law: increasing capacity may buy only a few months' grace before congestion is again increasing, utility is falling, and customers are demanding a better service. For an operator facing congestion caused by other operators' networks, building out its own capacity is unlikely to solve the congestion problem. Operators are thus under increasing pressure to find effective ways of dealing with the increasing bandwidth demands of all users.

The growth of "scavenger" behaviour (e.g. [LEDBAT] (Shalunov, S., “Low Extra Delay Background Transport (LEDBAT),” March 2010.)) helps to reduce congestion, but can actually make the problem less tractable. These users are trying to make good use of the capacity of the path while minimising their own costs. Thus, users of such services may show very heavy total traffic up until the moment congestion is detected (at the Transport Layer), but will then immediately back off. Monitoring (at the Internet Layer) cannot detect this congestion avoidance if the congestion in question is in a different domain further along the path, and hence such users may get treated as congestion-causing users.

The ConEx working group proposes that Internet Protocol (IP) packets will carry additional ConEx information. The exact protocol details are not described in this document, but the ConEx information will be sufficient to allow any node in the network to see how much congestion is attributable to a given traffic flow. See [ConEx‑Abstract‑Mech] (Briscoe, B., “Congestion Exposure (ConEx) Concepts and Abstract Mechanism,” March 2011.) for further details.

Changes from previous drafts (to be removed by the RFC Editor):

From draft-ietf-conex-concepts-uses-00 to -01:
   Added section on timescales: Section 6 (Statistical Multiplexing over Differing Timescales)
   Revised introduction to clarify congestion definitions
   Changed source for congestion definition in Section 2 (Definitions)
   Other minor changes
From draft-moncaster-conex-concepts-uses-02 to draft-ietf-conex-concepts-uses-00 (per decisions of working group):
   Removed section on DDoS mitigation use case.
   Removed appendix on ConEx Architectural Elements. PLEASE NOTE: Alignment of terminology with the Abstract Mechanism draft has been deferred to the next version.
From draft-moncaster-conex-concepts-uses-01 to draft-moncaster-conex-concepts-uses-02:
   Updated document to take account of the new Abstract Mechanism draft [ConEx‑Abstract‑Mech] (Briscoe, B., “Congestion Exposure (ConEx) Concepts and Abstract Mechanism,” March 2011.).
   Updated the definitions section.
   Removed sections on Requirements and Mechanism.
   Moved section on ConEx Architectural Elements to appendix.
   Minor changes throughout.
From draft-moncaster-conex-concepts-uses-00 to draft-moncaster-conex-concepts-uses-01:
   Changed end of Abstract to better reflect new title
   Created new section describing the architectural elements of ConEx. Added Edge Monitors and Border Monitors (other elements are Ingress, Egress and Border Policers).
   Extensive re-write of Section 5 (ConEx Use Cases) partly in response to suggestions from Dirk Kutscher
   Improved layout of Section 2 (Definitions) and added definitions of Whole Path Congestion, ConEx-Enabled and ECN-Enabled. Re-wrote definition of Congestion Volume. Renamed Ingress and Egress Router to Ingress and Egress Node as these nodes may not actually be routers.
   Improved document structure. Merged sections on Exposing Congestion and ECN.
   Added new section on ConEx requirements with a ConEx Issues subsection. Text for these came from the start of the old ConEx Use Cases section.
   Added a sub-section on Partial vs Full Deployment: Section 5.5 (Partial vs. Full Deployment)
   Added a discussion on ConEx as a Business Secret: Section 7.1 (Congestion as a Commercial Secret)
From draft-conex-mechanism-00 to draft-moncaster-conex-concepts-uses-00:
   Changed filename to draft-moncaster-conex-concepts-uses.
   Changed title to ConEx Concepts and Use Cases.
   Chose uniform capitalisation of ConEx.
   Moved definition of Congestion Volume to list of definitions.
   Clarified mechanism section. Changed section title.
   Modified text relating to conex-aware policing and policers (which are NOT defined terms).
   Re-worded bullet on distinguishing ConEx and non-ConEx traffic in Section 5 (ConEx Use Cases).




2.  Definitions

In this section we define a number of terms that are used throughout the document. The key definition is that of congestion, which has a number of meanings depending on context. The definition we use in this document is based on the definition in [RFC6077] (Papadimitriou, D., Welzl, M., Scharf, M., and B. Briscoe, “Open Research Issues in Internet Congestion Control,” February 2011.). This list of definitions is supplementary to that in [ConEx‑Abstract‑Mech] (Briscoe, B., “Congestion Exposure (ConEx) Concepts and Abstract Mechanism,” March 2011.).

Congestion:
Congestion occurs when any user's traffic suffers increased delay, loss or ECN marking as a result of one or more network resources being overloaded.
Flow:
a series of packets from a single sender to a single receiver that the sender treats as belonging to a single stream for the purposes of congestion control. Note that in general this is not the same as the aggregate of all traffic between the sender and receiver.
Congestion-rate:
For any granularity of traffic (packet, flow, aggregate, etc.), the instantaneous rate of traffic discarded or marked due to congestion. Conceptually, the instantaneous bit-rate of the traffic multiplied by the instantaneous congestion it is experiencing.
Congestion-volume:
For any granularity of traffic (packet, flow, aggregate, etc.), the volume of bytes dropped or marked in a given period of time. Conceptually, congestion-rate multiplied by time.
Upstream Congestion:
the accumulated level of congestion experienced by a traffic flow thus far along its path. In other words, at any point the Upstream Congestion is the accumulated level of congestion the traffic flow has experienced as it travels from the sender to that point. At the receiver this is equivalent to the end-to-end congestion level that (usually) is reported back to the sender.
Downstream Congestion:
the level of congestion a flow of traffic is expected to experience on the remainder of its path. In other words, at any point the Downstream Congestion is the level of congestion the traffic flow is yet to experience as it travels from that point to the receiver.
Ingress:
the first node a packet traverses that is outside the source's own network. In a domestic network that will be the first node downstream from the home access equipment. In an enterprise network this is the provider edge router.
Egress:
the last node a packet traverses before reaching the receiver's network.
ConEx-enabled:
Any piece of equipment (end-system, router, tunnel end-point, firewall, policer, etc) that complies with the core ConEx protocol, which is to be defined by the ConEx working group. By extension a ConEx-enabled network is a network whose edge nodes are all ConEx-enabled.
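The rate and volume definitions above can be illustrated with a small numeric sketch. All figures here are illustrative, not taken from any specification:

```python
def congestion_rate(bit_rate_bps, congestion_fraction):
    """Instantaneous congestion-rate: the traffic's bit-rate multiplied
    by the fraction of it currently being dropped or ECN-marked."""
    return bit_rate_bps * congestion_fraction

def congestion_volume(cong_rate_bps, seconds):
    """Congestion-volume over a period: congestion-rate times time,
    converted to bytes dropped or marked in that period."""
    return cong_rate_bps * seconds / 8

# A 10 Mbit/s flow seeing 0.1% marking for one hour:
rate = congestion_rate(10_000_000, 0.001)   # 10_000 bit/s of marked traffic
volume = congestion_volume(rate, 3600)      # 4_500_000 bytes (4.5 MB)

# Downstream Congestion at any point is whole-path congestion minus
# the Upstream Congestion already accumulated at that point:
whole_path = 0.02    # 2% end-to-end marking probability (illustrative)
upstream = 0.015     # marking accumulated so far at this point
downstream = whole_path - upstream          # 0.5% still to be experienced
```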




3.  Congestion Management

Since 1988 the Internet architecture has made congestion management the responsibility of the end-systems. The network signals congestion to the receiver, the receiver feeds this back to the sender, and the sender is expected to reduce the traffic it sends.

Any network that is persistently highly congested is inefficient. However, the total absence of congestion is equally bad, as it means there is spare capacity in the network that is going unused. The long-standing aim of congestion control has been to find the point where these two things are in balance.

Over recent years, some network operators have come to the view that end-system congestion management is insufficient. Because of the heuristics used by TCP, a relatively small number of end-machines can get a disproportionately high share of network resources. Operators have sought to "correct" this perceived problem by using middleboxes that try to reduce traffic that is causing congestion, or by artificially starving some traffic classes to create stronger congestion signals.




3.1.  Existing Approaches

The authors have chosen not to exhaustively list current approaches to congestion management. Broadly, these approaches can be divided into those that operate at Layer 3 of the OSI model and those that use information gathered from higher layers. In general, all of these approaches attempt to find a "proxy" measure for congestion.

All of these current approaches suffer from some general limitations. First, they introduce performance uncertainty. Flat-rate pricing plans are popular because users appreciate the certainty of having their monthly bill amount remain the same for each billing period, allowing them to plan their costs accordingly. But while flat-rate pricing avoids billing uncertainty, it creates performance uncertainty: users cannot know whether the performance of their connection is being altered or degraded based on how the network operator manages congestion.

Second, none of the approaches is able to make use of what may be the most important factor in managing congestion: the amount that a given endpoint contributes to congestion on the network. This information simply is not available to network nodes, and neither volume nor rate nor application usage is an adequate proxy for congestion volume, because none of these metrics measures a user or network's actual contribution to congestion on the network.

Finally, none of these solutions accounts for inter-network congestion. Mechanisms may exist that allow an operator to identify and mitigate congestion in their own network, but the design of the Internet means that only the end-hosts have full visibility of congestion information along the whole path. ConEx allows this information to be visible to everyone on the path and thus allows operators to make better-informed decisions about controlling traffic.




4.  Exposing Congestion

We argue that current traffic-control mechanisms seek to control the wrong quantity. What matters in the network is neither the volume of traffic nor the rate of traffic: it is the contribution to congestion over time. Congestion means that your traffic impacts other users, and conversely that their traffic impacts you. So if there is no congestion, there need be no restriction on the amount a user can send; restrictions need apply only when others are sending traffic such that there is congestion.

For example, an application intending to transfer large amounts of data could use a congestion control mechanism like [LEDBAT] (Shalunov, S., “Low Extra Delay Background Transport (LEDBAT),” March 2010.) to reduce its transmission rate before any competing TCP flows do, by detecting an increase in end-to-end delay (as a measure of impending congestion). However such techniques rely on voluntary, altruistic action by end users and their application providers. Operators can neither enforce their use nor avoid penalizing them for congestion they avoid.

The Internet was designed so that end-hosts detect and control congestion. We argue that congestion needs to be visible to network nodes as well, not just to the end hosts. More specifically, a network needs to be able to measure how much congestion any particular traffic expects to cause between the monitoring point in the network and the destination ("rest-of-path congestion"). This would be a new capability. Today a network can use Explicit Congestion Notification (ECN) [RFC3168] (Ramakrishnan, K., Floyd, S., and D. Black, “The Addition of Explicit Congestion Notification (ECN) to IP,” September 2001.) to detect how much congestion the traffic has suffered between the source and a monitoring point, but not beyond. This new capability would enable an ISP to give incentives for the use of LEDBAT-like applications that seek to minimise congestion in the network whilst restricting inappropriate uses of traditional TCP and UDP applications.
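The rest-of-path measurement described above can be sketched as simple per-aggregate accounting at a monitoring point. The field names below (a "conex" flag for congestion the sender expects, a "ce" flag for CE-marked packets) are hypothetical; the actual encoding is left to the ConEx protocol work:

```python
def rest_of_path_congestion(packets):
    """At a monitoring point, upstream congestion is the fraction of
    bytes carrying a CE (Congestion Experienced) mark, while whole-path
    congestion is the fraction of bytes the sender has flagged as
    expected congestion.  Rest-of-path congestion is the difference."""
    total = sum(p["bytes"] for p in packets)
    ce_bytes = sum(p["bytes"] for p in packets if p.get("ce"))
    conex_bytes = sum(p["bytes"] for p in packets if p.get("conex"))
    upstream = ce_bytes / total
    whole_path = conex_bytes / total
    return whole_path - upstream

# 100 equal-sized packets: the sender flagged 3% expected congestion,
# and 1% has been CE-marked upstream of the monitor, so 2% of the
# congestion is still expected downstream of this point.
pkts = [{"bytes": 1500, "ce": i < 1, "conex": i < 3} for i in range(100)]
```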

So we propose a new approach which we call Congestion Exposure. We propose that congestion information should be made visible at the IP layer, so that any network node can measure the contribution to congestion of an aggregate of traffic as easily as straight volume can be measured today. Once the information is exposed in this way, it is then possible to use it to measure the true impact of any traffic on the network.

In general, congestion exposure gives operators a principled way to hold their customers accountable for the impact on others of their network usage and reward them for choosing congestion-sensitive applications.




4.1.  ECN - a Step in the Right Direction

Explicit Congestion Notification [RFC3168] (Ramakrishnan, K., Floyd, S., and D. Black, “The Addition of Explicit Congestion Notification (ECN) to IP,” September 2001.) allows routers to explicitly tell end-hosts that they are approaching the point of congestion. ECN builds on Active Queue Management (AQM) mechanisms such as Random Early Detection (RED) [RFC2309] (Braden, B., Clark, D., Crowcroft, J., Davie, B., Deering, S., Estrin, D., Floyd, S., Jacobson, V., Minshall, G., Partridge, C., Peterson, L., Ramakrishnan, K., Shenker, S., Wroclawski, J., and L. Zhang, “Recommendations on Queue Management and Congestion Avoidance in the Internet,” April 1998.) by allowing the router to mark a packet with a Congestion Experienced (CE) codepoint, rather than dropping it. The probability of a packet being marked increases with the length of the queue, and thus the rate of CE marks is a guide to the level of congestion at that queue. This CE codepoint travels forward through the network to the receiver, which then informs the sender that it has seen congestion. The sender is then required to respond as if it had experienced a packet loss. Because the CE codepoint is visible in the IP layer, this approach reveals the upstream congestion level for a packet.
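The marking behaviour described above can be sketched as follows. This is a much-simplified illustration: real RED averages the queue length with an EWMA and spaces marks out over time, both of which are omitted here, and the thresholds are arbitrary illustrative values:

```python
import random

def red_mark_probability(avg_queue, min_th, max_th, max_p):
    """Simplified RED: below min_th, never mark; at or above max_th,
    always mark/drop; in between, the probability rises linearly
    from 0 to max_p with the average queue length."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

def forward(packet, avg_queue, ecn_capable, min_th=5, max_th=15, max_p=0.1):
    """ECN lets the router mark an ECN-capable packet with CE instead
    of dropping it; the receiver echoes the mark back to the sender."""
    p = red_mark_probability(avg_queue, min_th, max_th, max_p)
    if random.random() < p:
        if ecn_capable:
            packet["CE"] = True   # mark rather than drop
            return packet
        return None               # non-ECN traffic loses the packet
    return packet
```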

Alas, this is not enough: ECN gives downstream nodes an idea of the congestion so far for any flow. This can help hold a receiver accountable for the congestion caused by incoming traffic. But a receiver can only indirectly influence incoming congestion, by politely asking the sender to control it. A receiver cannot make a sender install an adaptive codec, or install LEDBAT instead of TCP congestion-control. And a receiver cannot cause an attacker to stop flooding it with traffic.

What is needed is knowledge of the downstream congestion level, for which you need additional information that is still concealed from the network.




5.  ConEx Use Cases

This section sets out some of the use cases for ConEx. These use cases rely on some of the conceptual elements described in [ConEx‑Abstract‑Mech] (Briscoe, B., “Congestion Exposure (ConEx) Concepts and Abstract Mechanism,” March 2011.). The authors don't claim that this is an exhaustive list of use cases, nor that these have equal merit; in most cases ConEx is not the only way to achieve these goals. But these use cases represent a consensus among people who have been working on this approach for some years.




5.1.  ConEx as a basis for traffic management

Currently many operators impose some form of traffic management at peak hours. This is a simple economic necessity: the only reason the Internet works as a commercial concern is that operators are able to rely on statistical multiplexing to share their expensive core network between large numbers of customers. In order to ensure all customers get some chance to access the network, the "heaviest" customers will be subjected to some form of traffic management at peak times (typically a rate cap for certain types of traffic) [Fair‑use] (Broadband Choices, “Truth about 'fair usage' broadband,” 2009.). Often this traffic management is done with expensive flow-aware devices such as DPI boxes or flow-aware routers.

ConEx offers a better approach that will actually target the users that are causing the congestion. By using Ingress or Egress Policers, an ISP can identify which users are causing the greatest Congestion Volume throughout the network. This can then be used as the basis for traffic management decisions. The Ingress Policer described in [Policing‑freedom] (Briscoe, B., Jacquet, A., and T. Moncaster, “Policing Freedom to Use the Internet Resource Pool,” December 2008.) is one interesting approach that gives the user a congestion-volume limit. So long as they stay within that limit, their traffic is unaffected; once they exceed it, their traffic will be blocked temporarily.
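The behaviour of such a congestion-volume policer can be sketched as a token bucket that fills at the user's allowed congestion-rate and drains by the bytes of congestion the user's traffic causes. The class name and the allowance figures are illustrative assumptions, not drawn from [Policing‑freedom]:

```python
class CongestionPolicer:
    """Sketch of a congestion-volume policer: a token bucket filled at
    the user's allowed congestion-rate and drained by the bytes of the
    user's traffic that are dropped or congestion-marked."""

    def __init__(self, fill_rate_bytes_per_s, bucket_depth_bytes):
        self.fill_rate = fill_rate_bytes_per_s
        self.depth = bucket_depth_bytes
        self.tokens = bucket_depth_bytes   # start with a full allowance
        self.blocked = False

    def tick(self, seconds):
        """Refill the allowance as time passes, up to the bucket depth."""
        self.tokens = min(self.depth, self.tokens + self.fill_rate * seconds)
        if self.tokens > 0:
            self.blocked = False           # blocking is only temporary

    def on_congestion_marked(self, nbytes):
        """Charge the allowance for congestion-marked (or dropped) bytes."""
        self.tokens -= nbytes
        if self.tokens <= 0:
            self.blocked = True            # block until the bucket refills

# A user allowed 100 bytes/s of congestion-volume with 1 kB of headroom:
policer = CongestionPolicer(fill_rate_bytes_per_s=100, bucket_depth_bytes=1000)
policer.on_congestion_marked(400)   # within the limit: traffic unaffected
policer.on_congestion_marked(700)   # limit exceeded: blocked temporarily
policer.tick(10)                    # 10 s later the allowance has refilled
```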




5.2.  ConEx to incentivise scavenger transports

Recent work proposes a new approach for QoS where traffic is provided with a less than best effort or "scavenger" quality of service. The idea is that low priority but high volume traffic such as OS updates, P2P file transfers and view-later TV programs should be allowed to use any spare network capacity, but should rapidly get out of the way if a higher priority or interactive application starts up. One solution being actively explored is LEDBAT which proposes a new congestion control algorithm that is less aggressive in seeking out bandwidth than TCP.

At present most operators assume a strong correlation between the volume of a flow and the impact that flow causes in the network. This assumption has been eroded by the growth of interactive streaming which behaves in an inelastic manner and hence can cause high congestion at relatively low data volumes. Currently LEDBAT-like transports get no incentive from the ISP since they still transfer large volumes of data and may reach high transfer speeds if the network is uncongested. Consequently the only current incentive for LEDBAT is that it can reduce self-congestion effects.

If the ISP has deployed a ConEx-aware Ingress Policer then they are able to incentivise the use of LEDBAT, because a user will be policed according to the overall congestion volume their traffic generates, not the rate or data volume. If all background file transfers are only generating a low level of congestion, then the sender has more "congestion budget" to "spend" on their interactive applications. It can be shown [Kelly] (Kelly, F., Maulloo, A., and D. Tan, “Rate control for communication networks: shadow prices, proportional fairness and stability,” 1998.) that this approach improves social welfare: in other words, if you limit the congestion that all users can generate, then everyone benefits from a better service.




5.3.  Accounting for Congestion Volume

Accountability was one of the original design goals for the Internet [Design‑Philosophy] (Clark, D., “The Design Philosophy of the DARPA Internet Protocols,” 1988.). At the time it was ranked low because the network was non-commercial and it was assumed users had the best interests of the network at heart. Nowadays users generally treat the network as a commodity and the Internet has become highly commercialised. This causes problems that operators and others have tried to solve, and it often leads to a tragedy of the commons in which users end up fighting each other for scarce peak capacity.

The most elegant solution would be to introduce an Internet-wide system of accountability where every actor in the network is held to account for the impact they have on others. If Policers are placed at every Network Ingress or Egress, and Border Monitors at every border, then you have the basis for a system of congestion accounting. Simply by controlling the overall Congestion Volume each end-system or stub-network can send, you ensure everyone gets a better service.




5.4.  ConEx as a form of differential QoS

Most QoS approaches require the active participation of routers to control the delay and loss characteristics for the traffic. For real-time interactive traffic it is clear that low delay (and predictable jitter) are critical, and thus these probably always need different treatment at a router. However if low loss is the issue then ConEx offers an alternative approach.

Assuming the ingress ISP has deployed a ConEx Ingress Policer, the only control on a user's traffic depends on the congestion that user has caused. Likewise, if they are receiving traffic through a ConEx Egress Policer, their ISP will impose traffic controls (prioritisation, rate limiting, etc.) based on the congestion they have caused. If an end-user (be they the receiver or sender) wants to prioritise some traffic over other traffic, they can allow that traffic to generate or cause more congestion. The price they pay is a reduction in the congestion that their other traffic can cause.

Streaming video content-delivery is a good candidate for such ConEx-mediated QoS. Such traffic can tolerate moderately high delays, but there are strong economic pressures to maintain a high enough data rate (as that will directly influence the Quality of Experience the end-user receives). This approach removes the need for bandwidth brokers to establish QoS sessions, by removing the need to coordinate requests from multiple sources to pre-allocate bandwidth, as well as to coordinate which allocations to revoke when bandwidth predictions turn out to be wrong. There is also no need to "rate-police" at the boundaries on a per-flow basis, removing the need to keep per-flow state (which in turn makes this approach more scalable).




5.5.  Partial vs. Full Deployment

In a fully-deployed ConEx-enabled internet, [QoS‑Models] (Briscoe, B. and S. Rudkin, “Commercial Models for IP Quality of Service Interconnect,” April 2005.) shows that ISP settlements based on congestion volume can allocate money to where upgrades are needed. Fully-deployed implies that ConEx-marked packets which have not exhausted their expected congestion would go through a congested path in preference to non-ConEx packets, with money changing hands to justify that priority.

In a partial deployment, routers that ignore ConEx markings and let them pass unaltered are no problem unless they become congested and drop packets. Since ConEx incentivises the use of lower-congestion transports, such congestion drops should in any case become rare events. ConEx-unaware routers that do drop ConEx-marked packets would cause a problem, so to minimise this risk ConEx should be designed such that ConEx packets appear valid to any node they traverse. Failing that, it could be possible to bypass such nodes with a tunnel.

If any network is not ConEx-enabled then the sender and receiver have to rely on ECN marking or packet drops to establish the congestion level. If the receiver isn't ConEx-enabled then there needs to be some form of compatibility mode. Even in such partial deployments the end-users and access networks will benefit from ConEx. This will create incentives for ConEx to be more widely adopted, as access networks put pressure on their backhaul providers to use congestion as the basis of their interconnect agreements.

The actual charge per unit of congestion would be specified in an interconnection agreement, with economic pressure driving that charge downward to the cost to upgrade whenever alternative paths are available. That charge would most likely be invisible to the majority of users. Instead such users will have a contractual allowance to cause congestion, and would see packets dropped when that allowance is depleted.

Once an Autonomous System (AS) agrees to pay any congestion charges to any other AS it forwards to, it has an economic incentive to increase congestion-so-far marking for any congestion within its network. Failure to do this quickly becomes a significant cost, giving it an incentive to turn on such marking.

End users (or the writers of the applications they use) will be given an incentive to use a congestion control that backs off more aggressively than TCP for any elastic traffic. Indeed, they will have an incentive to use fully weighted congestion controls that allow traffic to cause congestion in proportion to its priority. Traffic which backs off more aggressively than TCP will see congestion charges remain the same (or even drop) as congestion increases; traffic which backs off less aggressively will see charges rise, but the user may be prepared to accept this if it is high-priority traffic; traffic which does not back off at all will see charges rise dramatically.




6.  Statistical Multiplexing over Differing Timescales

Access networks are usually provisioned assuming statistical multiplexing, where end-users are presumed not all to use their maximum bandwidth simultaneously. Typically, an ISP might design access networks with shared resources (e.g., circuits, ports, schedulers) dimensioned in proportion to the sum of average usage by the customers involved. Generally, ISPs monitor actual usage averaged over some time period (typically stated in minutes) to plan when upgrades to shared resources will be needed.

Almost always, they find that certain busy periods of the day have higher usage; and that actual contention for bandwidth at a shared resource (e.g., circuit, port, scheduler) is limited to those periods. This leads to "economic congestion" as defined in Section 3.4 of [Bauer09] (Bauer, S., Clark, D., and W. Lehr, “The Evolution of Internet Congestion,” 2009.), where traffic by one end-user imposes a "cost" of reduced utility on other users. Sometimes, there is an extended period between economic congestion being first observed and the completion of upgrades. In other cases, a trend of "economic congestion" is used by a service provider before congestion as defined in the abstract mechanism (loss or ECN marking) occurs.

During busy periods, it has been observed that roughly 20% of the end-users are using 80% of the bandwidth [Varian] (Varian, H., “Congestion pricing principles.”). We call this roughly-20% "heavy users", and the others "light users". Left to itself, this situation means that heavy users cause queues to fill at a rate much greater than light users do. (Note that this heavy/light categorization is for illustrative purposes, since there is actually a continuum of "heaviness" across users.) When both heavy and light users pay the same flat rate, ISPs believe heavy users should bear more of the "cost" of reduced utility.

When all users have unlimited access to a shared bottleneck resource, this problem is at its most severe, since the maximum per-user bandwidth is that of the whole shared resource. In order to provide more control over the maximum rate at which individual users may send, many ISPs have deployed "traffic shapers" that limit the bandwidth available to an individual user during all time periods. Note that this limits the per-user maximum bandwidth in the sub-second timeframe of the shaper queue. Currently, these shapers make no distinction between busy periods, when shared resource congestion may occur, and periods when no congestion occurs.

During a period of higher usage, a shared resource becomes the bottleneck and causes a shared queue or individual user shaper queues to fill. Heavy users create much more queuing, and therefore potentially more congestion-volume (see Section 2), than lighter users do. Yet because the queue is shared, heavier and lighter users see comparable congestion (i.e., packet loss or ECN marking) during these periods. Thus, the overall utility (i.e., the probability of a packet not being lost or ECN marked) is reduced for the many lighter users by the few heavier users.

During periods of lighter usage, heavier users will fill their individual shaper queues, potentially creating loss or ECN marking, such that TCP congestion control does what the ISP desires and cuts back the sending rate, giving the user the expected maximum bandwidth.




6.1.  ConEx Objectives for This Issue

ConEx should provide better information for a provider to address the "economic congestion" problem. Specifically, ConEx should help to distinguish which users cause queue-filling over a time interval matching the economic congestion and statistical multiplexing time scales, which can range from seconds to minutes to hours. It is also desirable to distinguish "self-congestion", where there is no contention for shared resource bandwidth (e.g., circuit, port, scheduler), from congestion caused by such contention, referred to below as "inter-user congestion". If this distinction were visible to end-users, they could use an out-of-band mechanism to "go faster" when only "self-congestion" is limiting their throughput.

There are (at least) three approaches for addressing this issue.

  1. Treat "self-congestion" the same as "inter-user congestion" since they both create congestion as perceived by the flow user;
  2. Signal more information to the receiver about the cause of loss since the remedy may differ;
  3. Process (and generate) ConEx information at the same network element which implements the shaper, which has knowledge of the configured maximum bandwidth for the users as well as local shared resource congestion.

For the most part, these approaches don't require any changes to the abstract mechanism; but a subcase of 2), where the traffic shaper might use ConEx to signal that the "congestion" is actually due to traffic shaping rather than shared resource contention, could require additional signaling to be defined in the ConEx protocol.

Note that during busy periods "self-congestion" might not be the limiting factor, but there will inevitably be less-busy periods when "self-congestion" predominates.




6.2.  ConEx as a Solution

Over a time period related to the statistical multiplexing or economic congestion interval (e.g., many seconds to minutes to hours), total up the number of bytes that have been congestion marked and the total number of bytes sent per end-user, then compute the ratio of congestion-marked bytes to total bytes. This gives each user's total volume and average congestion rate.

Quantizing users into classes using one threshold on total volume and another threshold on the ratio results in a grid that identifies four classes of user:



              +------------+-------------+-------------+
              |            |          Volume           |
              |   Ratio    |    Large    |    Small    |
              +------------+-------------+-------------+
              |   High     | Heavy User  | Bursty User |
              +------------+-------------+-------------+
              |    Low     | LEDBAT User | Light User  |
              +------------+-------------+-------------+
   (Where "LEDBAT User" includes other Less-than-Best-Effort algorithms.)
 Figure 1: Four Classes of User 

Note that Bursty and Heavy Users contribute more congestion marking, but a Bursty User contributes less overall congestion marking and may create shorter periods of queue filling than a Heavy User. LEDBAT and Light Users contribute less congestion marking, with LEDBAT Users able to transfer more volume than Light Users since LEDBAT Users back off before congestion marking occurs. An operator might reasonably take this into account in its shaping algorithms.
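The classification of Figure 1 can be sketched as follows; the volume and ratio thresholds are hypothetical values that an operator would tune to its own measurement interval, not values defined by this document.

```python
# Hypothetical thresholds over one measurement interval (assumptions).
VOLUME_THRESHOLD = 10_000_000   # total bytes sent
RATIO_THRESHOLD = 0.01          # congestion-marked bytes / total bytes

def classify(total_bytes, marked_bytes):
    """Map a user's interval totals onto the four classes of Figure 1."""
    ratio = marked_bytes / total_bytes if total_bytes else 0.0
    large_volume = total_bytes >= VOLUME_THRESHOLD
    high_ratio = ratio >= RATIO_THRESHOLD
    if high_ratio and large_volume:
        return "Heavy User"
    if high_ratio:
        return "Bursty User"
    if large_volume:
        return "LEDBAT User"   # includes other less-than-best-effort algorithms
    return "Light User"

print(classify(50_000_000, 1_000_000))  # large volume, high ratio -> Heavy User
print(classify(1_000_000, 100_000))     # small volume, high ratio -> Bursty User
print(classify(50_000_000, 10_000))     # large volume, low ratio  -> LEDBAT User
print(classify(1_000_000, 100))         # small volume, low ratio  -> Light User
```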




6.3.  Additional Support Using other Measures and Mechanisms

An additional measure of burstiness (in addition to "congestion") would allow nodes upstream from the node implementing the shaper to implement traffic management. This measure could be derived from signals in the abstract mechanism, but that would require (a majority of) the heavier senders and receivers to implement ConEx, and would only work if loss or ECN marking occurs. Signaling a measure of burstiness (or something related to it) directly would partially address the scenario where no loss or ECN marking occurs.
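This document does not define such a burstiness measure; one simple candidate, shown here purely as an assumed example, is the coefficient of variation of a user's congestion-marked bytes across sub-intervals of the measurement period. A steady LEDBAT-like flow scores low, while an on/off flow with the same total scores high.

```python
from statistics import mean, pstdev

def burstiness(marked_bytes_per_subinterval):
    """Coefficient of variation (population stdev / mean) of a user's
    congestion-marked bytes per sub-interval; an assumed metric, not
    one defined by the ConEx drafts."""
    m = mean(marked_bytes_per_subinterval)
    if m == 0:
        return 0.0
    return pstdev(marked_bytes_per_subinterval) / m

steady = [100, 100, 100, 100]   # hypothetical steady sender
bursty = [400, 0, 0, 0]         # same total, concentrated in one burst

print(burstiness(steady))   # 0.0
print(burstiness(bursty))   # ~1.732 (sqrt(3))
```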

As an alternative, a "lightweight" TCP proxy might be implemented at the network element containing the shaper, and an upstream network element (e.g., an ingress router) could then form a ConEx control loop between these network elements, providing a better balance between heavier and lighter users during congested intervals. This would be a closed domain in which the signals could be implicitly trusted. The burstiness measure could be communicated between these proxies using TCP extensions.

There is also the aspect of "self-congestion" where a traffic shaper is at the access node. Using the current mechanisms, the receiver cannot tell the difference between "self-congestion" and "inter-user congestion". Adding a signal to the abstract mechanism could enable a receiver to inform the sender about the cause of congestion, enabling the sender to request that the traffic-shaper parameters change so the flow can "go faster".




7.  Other Issues




7.1.  Congestion as a Commercial Secret

Network operators have long viewed the congestion levels in their networks as a business secret. In some ways this harks back to the days of fixed-line telecommunications, where congestion manifested as failed connections or dropped calls. But even in modern data-centric packet networks, congestion is viewed as a secret not to be shared with competitors. It can be debated whether this view is sensible, but it may make operators uneasy about deploying ConEx.

Of course some might say that the idea of keeping congestion secret is silly. After all, end-hosts already have knowledge of the congestion throughout the network, albeit only along specific paths, and operators can work out that there is persistent congestion as their customers will be suffering degraded network performance.




7.2.  Information Security

Potential attacks on ConEx information include attempts to:

  o  make a source believe it has seen more congestion than it has;

  o  hijack a user's identity and make it appear they are dishonest at an egress policer;

  o  clear or otherwise tamper with the ConEx markings;

  o  ...

{ToDo} Write these up properly...




8.  Security Considerations

This document describes a mechanism that builds on Explicit Congestion Notification [RFC3168] and inherits the security issues listed therein. The additional issues from ConEx markings relate to the degree of trust each forwarding point places in the ConEx markings it receives, which is a business decision mostly orthogonal to the markings themselves.

One expected use of exposed congestion information is to hold the end-to-end transport and the network accountable to each other. The network cannot be relied on to report information to the receiver that is against its own interest; the same applies to the information the receiver feeds back to the sender, and to the information the sender declares to the network. Looking at each in turn:

The Network
In general it is not in any network's interest to under-declare congestion since this will have potentially negative consequences for all users of that network. It may be in its interest to over-declare congestion if, for instance, it wishes to force traffic to move away to a different network or simply to reduce the amount of traffic it is carrying. Congestion Exposure itself won't significantly alter the incentives for and against honest declaration of congestion by a network, but we can imagine applications of Congestion Exposure that will change these incentives. There is a perception among network operators that their level of congestion is a business secret. Today, congestion is one of the worst-kept secrets a network has, because end-hosts can see congestion better than network operators can. Congestion Exposure will enable network operators to pinpoint whether congestion is on one side or the other of any border. It is conceivable that forwarders with underprovisioned networks may try to obstruct deployment of Congestion Exposure.
The Receiver
Receivers generally have an incentive to under-declare congestion, since they wish to receive the data from the sender as rapidly as possible. [Savage] explains how a receiver can significantly improve its throughput by failing to declare congestion. This is a problem with or without Congestion Exposure. [KGao] explains one possible technique to encourage receivers to be honest in their declarations of congestion.
The Sender
One proposed mechanism for Congestion Exposure deployment adds a requirement for a sender to advise the network how much congestion it has suffered or caused. Although most senders currently respond to congestion they are informed of, one use of exposed congestion information might be to encourage sources of persistent congestion to back off more aggressively. Then clearly there may be an incentive for the sender to under-declare congestion. This will be a particular problem with sources of flooding attacks. "Policing" mechanisms have been proposed to deal with this.

In addition, there are potential problems from source spoofing: a malicious sender can pretend to be another user by spoofing the source address. Congestion Exposure allows for "Policers" and "Traffic Shapers" that are robust against the injection of false congestion information into the forward path.




9.  IANA Considerations

This document does not require actions by IANA.




10.  Acknowledgments

Bob Briscoe is partly funded by Trilogy, a research project (ICT-216372) supported by the European Community under its Seventh Framework Programme. The views expressed here are those of the author only.

The authors would like to thank the many people that have commented on this document. Bernard Aboba, Mikael Abrahamsson, João Taveira Araújo, Steve Bauer, Caitlin Bestler, Steven Blake, Louise Burness, Alissa Cooper, Philip Eardley, Matthew Ford, Ingemar Johansson, Mirja Kuehlewind, Dirk Kutscher, Zhu Lei, Kevin Mason, Michael Menth, Chris Morrow, Hannes Tschofenig and Stuart Venters. Please accept our apologies if your name has been missed off this list.




11.  Informative References

[BB-incentive] MIT Communications Futures Program (CFP) and Cambridge University Communications Research Network, “The Broadband Incentive Problem,” September 2005.
[Bauer09] Bauer, S., Clark, D., and W. Lehr, “The Evolution of Internet Congestion,” 2009.
[ConEx-Abstract-Mech] Briscoe, B., “Congestion Exposure (ConEx) Concepts and Abstract Mechanism,” draft-ietf-conex-abstract-mech-00 (work in progress), March 2011.
[Design-Philosophy] Clark, D., “The Design Philosophy of the DARPA Internet Protocols,” 1988.
[Fair-use] Broadband Choices, “Truth about 'fair usage' broadband,” 2009.
[Fairer-faster] Briscoe, B., “A Fairer Faster Internet Protocol,” IEEE Spectrum, pp. 38-43, December 2008.
[KGao] Gao, K. and C. Wang, “Incrementally Deployable Prevention to TCP Attack with Misbehaving Receivers,” December 2004.
[Kelly] Kelly, F., Maulloo, A., and D. Tan, “Rate control for communication networks: shadow prices, proportional fairness and stability,” Journal of the Operational Research Society 49(3), pp. 237-252, 1998.
[LEDBAT] Shalunov, S., “Low Extra Delay Background Transport (LEDBAT),” draft-ietf-ledbat-congestion-01 (work in progress), March 2010.
[Malice] Briscoe, B., “Using Self Interest to Prevent Malice; Fixing the Denial of Service Flaw of the Internet,” WESII - Workshop on the Economics of Securing the Information Infrastructure, 2006.
[Padhye] Padhye, J., Firoiu, V., Towsley, D., and J. Kurose, “Modeling TCP Throughput: A Simple Model and its Empirical Validation,” ACM SIGCOMM Computer Communication Review 28(4), pp. 303-314, May 1998.
[Policing-freedom] Briscoe, B., Jacquet, A., and T. Moncaster, “Policing Freedom to Use the Internet Resource Pool,” Re-Arch 2008 hosted at the 2008 CoNEXT conference, December 2008.
[QoS-Models] Briscoe, B. and S. Rudkin, “Commercial Models for IP Quality of Service Interconnect,” BTTJ Special Edition on IP Quality of Service 23(2), April 2005.
[RFC2309] Braden, B., Clark, D., Crowcroft, J., Davie, B., Deering, S., Estrin, D., Floyd, S., Jacobson, V., Minshall, G., Partridge, C., Peterson, L., Ramakrishnan, K., Shenker, S., Wroclawski, J., and L. Zhang, “Recommendations on Queue Management and Congestion Avoidance in the Internet,” RFC 2309, April 1998.
[RFC3168] Ramakrishnan, K., Floyd, S., and D. Black, “The Addition of Explicit Congestion Notification (ECN) to IP,” RFC 3168, September 2001.
[RFC6077] Papadimitriou, D., Welzl, M., Scharf, M., and B. Briscoe, “Open Research Issues in Internet Congestion Control,” RFC 6077, February 2011.
[Re-Feedback] Briscoe, B., Jacquet, A., Di Cairano-Gilfedder, C., Salvatori, A., Soppera, A., and M. Koyabe, “Policing Congestion Response in an Internetwork Using Re-Feedback,” ACM SIGCOMM CCR 35(4), pp. 277-288, August 2005.
[Savage] Savage, S., Wetherall, D., and T. Anderson, “TCP Congestion Control with a Misbehaving Receiver,” ACM SIGCOMM Computer Communication Review, 1999.
[Varian] Varian, H., “Congestion pricing principles,” Technical Plenary, 78th IETF Meeting, July .
[re-ecn-motive] Briscoe, B., Jacquet, A., Moncaster, T., and A. Smith, “Re-ECN: A Framework for adding Congestion Accountability to TCP/IP,” draft-briscoe-tsvwg-re-ecn-tcp-motivation-02 (work in progress), October 2010.



Authors' Addresses

  Toby Moncaster (editor)
  Moncaster Internet Consulting
  Dukes
  Layer Marney
  Colchester CO5 9UZ
  UK
EMail:  toby@moncaster.com
  
  John Leslie (editor)
  JLC.net
  10 Souhegan Street
  Milford, NH 03055
  US
EMail:  john@jlc.net
  
  Bob Briscoe
  BT
  B54/77, Adastral Park
  Martlesham Heath
  Ipswich IP5 3RE
  UK
Phone:  +44 1473 645196
EMail:  bob.briscoe@bt.com
URI:  http://bobbriscoe.net/
  
  Richard Woundy
  Comcast
  Comcast Cable Communications
  27 Industrial Avenue
  Chelmsford, MA 01824
  US
EMail:  richard_woundy@cable.comcast.com
URI:  http://www.comcast.com
  
  Dave McDysan
  Verizon
  22001 Loudoun County Pkwy
  Ashburn, VA 20147
  US
EMail:  dave.mcdysan@verizon.com