congestion exposure (ConEx), re-feedback & re-ECN

a new resource sharing architecture for the Internet
enabling cost fairness (and other forms of fairness)

Overview

Re-feedback is a novel feedback arrangement for connectionless networks that forces packets to truthfully expose the congestion they expect to cause as they share each resource that makes up the Internet. Once exposed, this information considerably simplifies the resource allocation problems listed below.

We have proposed a form of re-feedback called re-ECN (re-feedback of explicit congestion notification) that can be incrementally added to the current Internet by modifying Internet senders or using a proxy. Currently, we are proposing to encode re-ECN using the last available bit in the IPv4 packet header, or an IPv6 extension header.
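
As a rough illustration of the re-feedback idea (not the normative re-ECN encoding), the Python sketch below simulates one flow crossing two network segments. Each segment marks packets with some probability, the receiver feeds the whole-path marking fraction back, and the sender 're-echoes' that fraction into subsequent packets. An observer anywhere on the path can then estimate downstream congestion as the re-echoed fraction minus the marking fraction it has seen so far. The probabilities and variable names are illustrative assumptions.

    import random

    random.seed(1)

    def run(n_packets=100000, p_upstream=0.01, p_downstream=0.02):
        """Toy re-feedback simulation: an observer between the two segments
        estimates downstream congestion as fraction(re-echo) - fraction(marks seen)."""
        feedback_fraction = 0.0            # receiver's running estimate of whole-path marking
        re_count = ce_seen_midpath = 0     # tallies kept by the mid-path observer
        total_marked = 0
        for i in range(1, n_packets + 1):
            # Sender re-echoes the congestion level last reported by the receiver.
            re_flag = random.random() < feedback_fraction
            # Upstream segment may congestion-mark the packet.
            ce = random.random() < p_upstream
            # Mid-path observer tallies what it sees in passing packets.
            re_count += re_flag
            ce_seen_midpath += ce
            # Downstream segment may also mark the packet.
            ce = ce or (random.random() < p_downstream)
            total_marked += ce
            # Receiver feeds back the whole-path marking fraction (idealised feedback).
            feedback_fraction = total_marked / i
        return (re_count - ce_seen_midpath) / n_packets

    print("estimated downstream congestion: %.4f" % run())
    # With the assumed probabilities this prints roughly 0.02, the marking
    # rate of the segment downstream of the observer.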

Applications of re-feedback


Presentations

Each primary paper below links to a full entry later in the page, which includes all supporting material, including presentations.

General Presentations


Primary Documentation


All Documentation & Supporting Material


[ HTML | Unpaginated Text | TXT | XML ] Congestion Exposure (ConEx) Concepts and Abstract Mechanism, Matt Mathis (Google) and Bob Briscoe (BT), IETF Internet-Draft <draft-ietf-conex-abstract-mech-01.txt> (Mar 2011). (15pp, 1 fig, 19 refs) [BibTeX]

Differences between drafts: [ IETF document history | ietf00-mathis00 ]

Presentations: [ IETF-80 | IETF-79 ]

Abstract: This document describes an abstract mechanism by which senders inform the network about the congestion encountered by packets earlier in the same flow. Today, the network may signal congestion to the receiver by ECN markings or by dropping packets, and the receiver passes this information back to the sender in transport-layer feedback. The mechanism to be developed by the ConEx WG will enable the sender to also relay this congestion information back into the network in-band at the IP layer, such that the total level of congestion is visible to all IP devices along the path, from where it could, for example, provide input to traffic management.
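
One function this visibility enables is auditing: a node near the receiver can check that the congestion a sender declares at least covers the congestion its packets actually experience, and flag flows that persistently understate it. The sketch below is only illustrative; the per-flow counters, the tolerance and the deficit test are assumptions, not anything specified in the draft.

    from collections import defaultdict

    class ConexAuditor:
        """Toy per-flow audit: compares declared (ConEx) congestion against
        congestion observed locally and flags flows that run a deficit."""

        def __init__(self, tolerance=0.005):
            self.declared_bytes = defaultdict(int)    # bytes carrying ConEx declarations
            self.congested_bytes = defaultdict(int)   # bytes seen congestion-marked here
            self.total_bytes = defaultdict(int)
            self.tolerance = tolerance                # allowed understatement (assumed)

        def observe(self, flow_id, size, declares_congestion, congestion_marked):
            self.total_bytes[flow_id] += size
            if declares_congestion:
                self.declared_bytes[flow_id] += size
            if congestion_marked:
                self.congested_bytes[flow_id] += size

        def in_deficit(self, flow_id):
            deficit = self.congested_bytes[flow_id] - self.declared_bytes[flow_id]
            return deficit > self.tolerance * self.total_bytes[flow_id]

    # A flow that never declares the congestion it causes is soon flagged.
    audit = ConexAuditor()
    for _ in range(1000):
        audit.observe("flow-A", 1500, declares_congestion=False, congestion_marked=True)
    print(audit.in_deficit("flow-A"))    # True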


[ HTML | Unpaginated Text | TXT | XML ] ConEx Concepts and Use Cases, Toby Moncaster (independent; Ed), John Leslie (JLC.net; Ed), Bob Briscoe (BT), Richard Woundy (Comcast) and David McDysan (Verizon), IETF Internet-Draft <draft-ietf-conex-concepts-uses-01.txt> (Mar 2011). (23pp, 1 fig, 20 refs) [BibTeX]

Differences between drafts: [ IETF document history | moncaster02-01 | 01-00 ]

Presentations: [ IETF-80 | IETF-79 | IETF-78 ]

Abstract: Internet Service Providers (operators) are facing problems where localized congestion prevents full utilization of the path between sender and receiver at today's "broadband" speeds. Operators desire to control this congestion, which often appears to be caused by a small number of users consuming a large amount of bandwidth. Building out more capacity along all of the path to handle this congestion can be expensive and may not result in improvements for all users, so network operators have sought other ways to manage congestion. The current mechanisms all suffer from difficulty measuring the congestion (as distinguished from the total traffic).

The ConEx Working Group is designing a mechanism to make congestion along any path visible at the Internet Layer. This document describes example cases where this mechanism would be useful.

[ PDF ] Re-feedback: Freedom with Accountability for Causing Congestion in a Connectionless Internetwork, Bob Briscoe (BT & UCL), UCL PhD dissertation (May 2009). (256pp, 39 figs, 173 refs) [BibTeX]

Abstract: This dissertation concerns adding resource accountability to a simplex internetwork such as the Internet, with only necessary but sufficient constraint on freedom. That is, both freedom for applications to evolve new innovative behaviours while still responding responsibly to congestion; and freedom for network providers to structure their pricing in any way, including flat pricing.

The big idea on which the research is built is a novel feedback arrangement termed `re-feedback'. A general form is defined, as well as a specific proposal (re-ECN) to alter the Internet protocol so that self-contained datagrams carry a metric of expected downstream congestion. Congestion is chosen because of its central economic role as the marginal cost of network usage. The aim is to ensure Internet resource allocation can be controlled either by local policies or by market selection (or indeed local lack of any control).

The current Internet architecture is designed to only reveal path congestion to end-points, not networks. The collective actions of self-interested consumers and providers should drive Internet resource allocations towards maximisation of total social welfare. But without visibility of a cost-metric, network operators are violating the architecture to improve their customers' experience. The resulting fight against the architecture is destroying the Internet's simplicity and ability to evolve.

Although accountability with freedom is the goal, the focus is the congestion metric, and whether an incentive system is possible that assures its integrity as it is passed between parties around the system, despite proposed attacks motivated by self-interest and malice. This dissertation defines the protocol and canonical examples of accountability mechanisms. Designs are all derived from carefully motivated principles. The resulting system is evaluated by analysis and simulation against the constraints and principles originally set. The mechanisms are proven to be agnostic to specific transport behaviours, but they could not be made flow-ID-oblivious.


[ PDF ] Internet: Fairer is Faster, Bob Briscoe (BT), BT White Paper TR-CXR9-2009-001 (Jun 2009). (7pp, 5 figs) [BibTeX]

Abstract: The Internet is founded on a very simple premise: shared communications links are more efficient than dedicated channels that lie idle much of the time. We share local area networks at work and neighbourhood links from home. Indeed, a multi-gigabit backbone cable is shared among thousands of folks surfing the Web, downloading videos, and talking on Internet phones. But there’s a profound flaw in the protocol that governs how people share the Internet’s capacity. The protocol allows you to seem to be polite, even as you take far more resources than others.

Network providers like Verizon or BT either throw capacity at the problem or patch over it with homebrewed attempts to penalize so-called bandwidth hogs or the software they tend to use. From the start it needs to be crystal clear that those with an appetite for huge volumes of data are not the problem.  There is no need to stop them downloading vast amounts of material, if they can do so without starving others.

But no network provider can solve this on their own. At the Internet standards body, work has started on fixing the deeply entrenched underlying problem. A leading proposal claims to have found a way to deploy a tweak to the Internet protocol itself—the Internet’s ‘genetic material’. The intent is to encourage a profound shift in the incentives that drive capacity sharing.


The following 6pp article for IEEE Spectrum magazine is adapted for an engineering audience from the above 7pp white paper—it has some of the economic motivation and figures edited out.


[ HTML (remote IEEE original) ] A Fairer, Faster Internet Protocol, Bob Briscoe (BT), Illustrations by QuickHoney, IEEE Spectrum, Dec 2008 pp38-43 (2008). (6pp, 3 figs) [BibTeX]

Abstract:  The Internet is founded on a very simple premise: shared communications links are more efficient than dedicated channels that lie idle much of the time. And so we share. We share local area networks at work and neighborhood links from home. And then we share again—at any given time, a terabit backbone cable is shared among thousands of folks surfing the Web, downloading videos, and talking on Internet phones. But there’s a profound flaw in the protocol that governs how people share the Internet’s capacity. The protocol allows you to seem to be polite, even as you elbow others aside, taking far more resources than they do. Network providers like Verizon and BT either throw capacity at the problem or improvise formulas that attempt to penalize so-called bandwidth hogs. Let me speak up for this much-maligned beast right away: bandwidth hogs are not the problem. There is no need to prevent customers from downloading huge amounts of material, so long as they aren’t starving others. Rather than patching over the problem, my colleagues and I at BT (formerly British Telecom) have worked out how to fix the root cause: the Internet’s sharing protocol itself. It turns out that this solution will make the Internet not just simpler but much faster too.


[ PDF ] Flow Rate Fairness: Dismantling a Religion, Bob Briscoe (BT & UCL), ACM Computer Communications Review 37(2) 63--74 (Apr 2007). (10pp, 2 figs, 35 refs) [BibTeX]


[ PDF | HTML | Unpaginated Text | TXT | XML ] Flow Rate Fairness: Dismantling a Religion, Bob Briscoe (BT & UCL), IETF Internet-Draft <draft-briscoe-tsvarea-fair-02.pdf> (Expired) (Jul 2007). (44pp, 2 figs, 62 refs) [BibTeX]

Differences between drafts: [ 02-01 | 01-00 ]

Presentations: [ IETF-69 | IETF-68 | IRTF-E2ERG'0702 | IRTF-ICCRG'0702 | PFLDnet'07 | IETF-67 ]

Abstract: Resource allocation and accountability have been major unresolved problems with the Internet ever since its inception. The reason we never resolve these issues is a broken idea of what the problem is. The applied research and standards communities are using completely unrealistic and impractical fairness criteria. The resulting mechanisms don't even allocate the right thing and they don't allocate it between the right entities. We explain as bluntly as we can that thinking about fairness mechanisms like TCP in terms of sharing out flow rates has no intellectual heritage from any concept of fairness in philosophy or social science, or indeed real life. Comparing flow rates should never again be used for claims of fairness in production networks. Instead, we should judge fairness mechanisms on how they share out the `cost' of each user's actions on others.
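
A concrete way to read the cost metric advocated here is congestion-volume: the bytes each user sends, weighted by the congestion level they are sent into. The arithmetic below is an invented example to show how it differs from comparing volumes or flow rates; the numbers are not from the paper.

    # Illustrative numbers only: the shared link runs at 0.1% marking at peak
    # and 0.01% marking off-peak.
    peak, offpeak = 0.001, 0.0001

    light_volume = 0.5e9    # a light user sends 0.5 GB, all at peak
    heavy_volume = 20e9     # a heavy user sends 20 GB, all off-peak

    light_cost = light_volume * peak       # 0.5 MB of congestion-volume
    heavy_cost = heavy_volume * offpeak    # 2.0 MB of congestion-volume

    print(light_cost, heavy_cost)
    # The heavy user sends 40x more data but imposes only 4x more cost on
    # others; a volume cap or flow-rate comparison cannot see the difference.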


[ PDF ] Policing Congestion Response in an Internetwork using Re-feedback, Bob Briscoe (BT & UCL), Arnaud Jacquet (BT), Carla Di Cairano-Gilfedder (BT), Alessandro Salvatori (Eurécom & BT), Andrea Soppera (BT) and Martin Koyabe (BT), in Proc ACM SIGCOMM'05, Computer Communications Review 35(4) (Sep 2005). (12pp, 21 refs, 8 figs; pre-print) [BibTeX]

Presentation: [ SIGCOMM'05 | CFP_BB'05 | UCL'04 | Cam'04 | ICSI'04 ]

Abstract: This paper introduces a novel feedback arrangement, termed re-feedback. It ensures metrics in data headers such as time to live and congestion notification will arrive at each relay carrying a truthful prediction of the remainder of their path. We propose mechanisms at the network edge that ensure the dominant selfish strategy of both network domains and endpoints will be to set these headers honestly and to respond correctly to path congestion and delay, despite conflicting interests. Although these mechanisms influence incentives, they don't involve tampering with end-user pricing. We describe a TCP rate policer as a specific example of this new capability. We show it can be generalised to police various qualities of service. We also sketch how a limited form of re-feedback could be deployed incrementally around unmodified routers without changing IP.


[ PDF ] Policing Freedom to Use the Internet Resource Pool, Arnaud Jacquet (BT), Bob Briscoe (BT & UCL) & Toby Moncaster (BT), Workshop on Re-Architecting the Internet (ReArch'08) (Dec 2008). (6pp, 6 figs, 14 refs) [BibTeX]

Presentations: [ NGN Interconnection Strategies'08 ]

Abstract:  Ideally, everyone should be free to use as much of the Internet resource pool as they can take. But, whenever too much load meets too little capacity, everyone's freedoms collide. We show that attempts to isolate users from each other have corrosive side-effects - discouraging mutually beneficial ways of sharing the resource pool and harming the Internet's evolvability. We describe an unusual form of traffic policing which only pushes back against those who use their freedom to limit the freedom of others. This offers a vision of how much better the Internet could be. But there are subtle aspects missing from the current Internet architecture that prevent this form of policing being deployed. This paper aims to shift the research agenda onto those issues, and away from earlier attempts to isolate users from each other.
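
In outline, the 'unusual form of traffic policing' is a policer over congestion-volume rather than bit-rate: each user's allowance drains only when their packets are sent into congestion, so uncongested traffic is never limited. The token-bucket sketch below is a minimal illustration under assumed parameter values, not the mechanism as specified in the paper.

    import time

    class CongestionPolicer:
        """Toy per-user policer: a bucket filled at a congestion-volume allowance
        (bytes of congestion per second) and drained by congestion-marked bytes."""

        def __init__(self, fill_rate_bytes_per_s=1000.0, depth_bytes=1e6):
            self.fill_rate = fill_rate_bytes_per_s   # assumed allowance
            self.depth = depth_bytes
            self.tokens = depth_bytes
            self.last = time.monotonic()

        def allow(self, packet_bytes, congestion_marked):
            now = time.monotonic()
            self.tokens = min(self.depth, self.tokens + (now - self.last) * self.fill_rate)
            self.last = now
            if not congestion_marked:
                return True          # traffic causing no congestion is never limited
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes
                return True
            return False             # allowance exhausted: drop or de-prioritise

    policer = CongestionPolicer()
    print(policer.allow(1500, congestion_marked=True))    # True while allowance lasts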



[ HTML | Unpaginated Text | TXT | XML ] Re-ECN: Adding Accountability for Causing Congestion to TCP/IP, Bob Briscoe (BT & UCL), Arnaud Jacquet (BT), Toby Moncaster (independent) and Alan Smith (BT), IETF Internet-Draft <draft-briscoe-tsvwg-re-ecn-tcp-09.txt> (Oct 2010). (51pp, 28 refs, 7 figs) [BibTeX]

Differences between drafts: [ 09-08 | 08-07 | 07-06 | 06-05 | 05-04 | 04-03 | 03-02 | 02-01 ]

Presentations: [ IETF-76 | ECOC-FID'07 | IETF-69 | IETF-68 | CMU'06 | IETF-67 | IETF-66 | IETF-65 | IETF-64 ]

Abstract: This document introduces a new protocol for explicit congestion notification (ECN), termed re-ECN, which can be deployed incrementally around unmodified routers. It enables the upstream party at any trust boundary in the internetwork to be held responsible for the congestion they cause, or allow to be caused. So, networks can introduce straightforward accountability for congestion and policing mechanisms for incoming traffic from end-customers or from neighbouring network domains. The protocol works by arranging an extended ECN field in each packet so that, as it crosses any interface in an internetwork, it will carry a truthful prediction of congestion on the remainder of its path. The purpose of this document is to specify the re-ECN protocol at the IP layer and to give guidelines on any consequent changes required to transport protocols. It includes the changes required to TCP both as an example and as a specification. It also gives examples of mechanisms that can use the protocol to ensure data sources respond correctly to congestion. And it describes example mechanisms that ensure the dominant selfish strategy of both network domains and end-points will be to set the extended ECN field honestly.
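
At the sender, the TCP change largely amounts to bookkeeping: each congestion mark or loss reported by the receiver obliges the sender to 're-echo' it by declaring congestion in the IP header of one subsequent packet. The fragment below sketches only that bookkeeping; the codepoint names and the interface to the stack are invented for illustration, and the real encoding is defined in the draft.

    class ReEcnSenderState:
        """Minimal sketch of sender-side re-ECN bookkeeping (illustrative only)."""

        def __init__(self):
            self.re_echoes_owed = 0    # marks fed back but not yet re-echoed

        def on_feedback(self, newly_reported_marks):
            # Transport feedback (e.g. TCP ECN echo or loss detection) reports marks.
            self.re_echoes_owed += newly_reported_marks

        def codepoint_for_next_packet(self):
            # Declare congestion in the IP header whenever a re-echo is owed.
            if self.re_echoes_owed > 0:
                self.re_echoes_owed -= 1
                return "RE_ECHO"       # placeholder for the draft's positive codepoint
            return "NEUTRAL"           # placeholder for the default codepoint

    s = ReEcnSenderState()
    s.on_feedback(2)
    print([s.codepoint_for_next_packet() for _ in range(4)])
    # ['RE_ECHO', 'RE_ECHO', 'NEUTRAL', 'NEUTRAL']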

[ HTML | Unpaginated Text | TXT | XML ] Re-ECN: The Motivation for Adding Accountability for Causing Congestion to TCP/IP, Bob Briscoe (BT & UCL), Arnaud Jacquet (BT), Toby Moncaster (independent) and Alan Smith (BT), IETF Internet-Draft <draft-briscoe-tsvwg-motivation-02.txt> (Oct 2010). (52pp, 28 refs, 2 figs) [BibTeX]

Differences between drafts:  [ 02-01 | 01-00 ]

Presentations: [ ECOC-FID'07 | IETF-69 | ParisNetNeutrality'07 | IETF-68 | CRN_NetNeutrality'06 | CMU'06 | IETF-67 | IETF-66 | IETF-65 | IETF-64 ]

Abstract: This document describes the motivation for a new protocol for explicit congestion notification (ECN), termed re-ECN, which can be deployed incrementally around unmodified routers. Re-ECN allows accurate congestion monitoring throughout the network, thus enabling the upstream party at any trust boundary in the internetwork to be held responsible for the congestion they cause, or allow to be caused. So, networks can introduce straightforward accountability for congestion and policing mechanisms for incoming traffic from end-customers or from neighbouring network domains. As well as giving the motivation for re-ECN, this document also gives examples of mechanisms that can use the protocol to ensure data sources respond correctly to congestion. And it describes example mechanisms that ensure the dominant selfish strategy of both network domains and end-points will be to use the protocol honestly.


[ TXT | XML ] The Need for Congestion Exposure in the Internet, Toby Moncaster, Louise Krug (BT), Michael Menth (Uni Wuerzburg), João Taveira Araújo (UCL), Steven Blake (Extreme Networks) and Richard Woundy (Comcast; Editor), IETF Internet-Draft <draft-moncaster-conex-problem-00> (Mar 2010). (22pp, 0 figs, 17 refs) [BibTeX]

Presentations: [ Slides in IETF Proceedings of ConEx BoF (Nov'09) ]

Abstract: Today's Internet is a product of its history. TCP is the main transport protocol responsible for sharing out bandwidth and preventing a recurrence of congestion collapse, while packet drop is the primary signal of congestion at bottlenecks. Since packet drop (and increased delay) impacts all their customers negatively, network operators would like to be able to distinguish between overly aggressive congestion control and a confluence of many low-bandwidth, low-impact flows. But they are unable to see the actual congestion signal and thus they have to implement bandwidth and/or usage limits based on the only information they can see or measure (the contents of the packet headers and the rate of the traffic). Such measures don't solve the packet-drop problems effectively and are leading to calls for government regulation (which also won't solve the problem).

We propose congestion exposure as a possible solution.  This allows packets to carry an accurate prediction of the congestion they expect to cause downstream thus allowing it to be visible to ISPs and network operators.  This memo sets out the motivations for congestion exposure and introduces a strawman protocol designed to achieve congestion exposure.

[ TXT | XML ] Congestion Exposure Problem Statement, Hannes Tschofenig (Nokia Siemens Networks) and Alissa Cooper (Center for Democracy & Technology), IETF Internet-Draft <draft-tschofenig-conex-ps-02> (Mar 2010). (17pp, 0 figs, 16 refs) [BibTeX]

Presentations: [ Slides in IETF Proceedings of ConEx BoF (Nov'09) ]

Abstract: The increasingly ubiquitous availability of broadband, together with flat-rate pricing, has made for increasing congestion problems on the network, which are often caused by a small number of users consuming a large amount of bandwidth. In some cases, building out more capacity to handle this new congestion may be infeasible or unwarranted. As a result, network operators have sought other ways to manage congestion, both from their own users and from other networks. These different types of solutions have different strengths and weaknesses, but all of them are limited in a number of key ways.

This document discusses the problems created for operators by high-consuming users and describes the strengths and weaknesses of a number of techniques operators are currently using to cope with high bandwidth usage. The discussion of these solutions ultimately points to a need for a new kind of congestion accounting.



[ HTML | Unpaginated Text | TXT | XML ] Problem Statement: Transport Protocols Don't Have To Do Fairness, Bob Briscoe (BT & UCL), Toby Moncaster and Lou Burness (BT), IETF Internet-Draft <draft-briscoe-tsvwg-relax-fairness-01.txt> (Expired) (Jul 2008). (27pp, 27 refs) [BibTeX]

Differences between drafts: [ 01-00 ]

Presentations: [ IETF-70 ]

Abstract: The Internet is an amazing achievement - any of the thousand million hosts can freely use any of the resources anywhere on the public network. At least that was the original theory. Recently, issues with how these resources are shared among these hosts have come to the fore. Applications are innocently exploring the limits of protocol design to get larger shares of available bandwidth. Increasingly we are seeing ISPs imposing restrictions on heavier usage in order to try to preserve the level of service they can offer to lighter customers. We believe that these are symptoms of an underlying problem: fair resource sharing is an issue that can only be resolved at run-time, but for years attempts have been made to solve it at design time. In this document we show that fairness is not the preserve of transport protocols; rather, the design of such protocols should be such that fairness can be controlled between users and ISPs at run-time.


[ PDF ] Commercial Models for IP Quality of Service Interconnect, Bob Briscoe & Steve Rudkin (BT), in BTTJ Special Edition on IP Quality of Service, 23(2) (Apr 2005). (26pp, 44 refs, 8 figs; pre-print) [BibTeX]

Presentations: [ IP Interconnection Forum | CFP ]


Abstract: Interconnection of IP QoS capabilities between networks releases considerable value. In this paper we show where this value will be realised. We give technical and economic arguments for why QoS will be provided in core and backbone networks as a bulk QoS facility incapable of distinguishing or charging differentially between sessions, while between edge networks a vibrant mix of retail QoS solutions will be possible, including Internet-wide per-flow guarantees.

We outline cutting edge research on how to coordinate QoS between networks, using a session-based overlay between the edges that will extract most surplus value, underpinned by a bulk QoS layer coordinating the whole. We survey today's interconnect tariffs and the current disconnected state of IP QoS. Then we describe a commercial `model of models' that allows incremental evolution towards an interconnected future.

The paper covers intertwined engineering and economic/commercial issues in some depth, but considerable effort has been made to allow both communities to understand the whole paper.



[ HTML | Unpaginated Text | TXT | XML ] Emulating Border Flow Policing using Re-PCN on Bulk Data, Bob Briscoe (BT), IETF Internet-Draft <draft-briscoe-re-pcn-border-cheat-03.txt> (Oct 2009). (59pp, 31 refs, 4 figs) [BibTeX]

Differences between drafts: [ pcn03-pcn02 | pcn02-pcn01 | pcn01-pcn00 | pcn00-tsvwg01 | tsvwg01-tsvwg00 ]

Presentations: [ IETF-66 | IETF-65 ]

Abstract: Scaling per flow admission control to the Internet is a hard problem. The approach of combining Diffserv and pre-congestion notification (PCN) provides a service slightly better than Intserv controlled load that scales to networks of any size without needing Diffserv's usual overprovisioning, but only if domains trust each other to comply with admission control and rate policing.  This memo claims to solve this trust problem without losing scalability.  It provides a sufficient emulation of per-flow policing at borders but with only passive bulk metering rather than per-flow processing.  Measurements are sufficient to apply penalties against cheating neighbour networks.
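
The passive bulk metering can be pictured as a pair of counters per neighbour at the border, with no per-flow state: one counts the congestion declared in traffic arriving from that neighbour, the other the congestion that traffic is measured to cause. The sketch below only illustrates that aggregation; the penalty rules and settlement details in the memo are not reproduced, and the names are assumptions.

    class BorderMeter:
        """Toy bulk meter at an inter-domain border: aggregate congestion-volume
        accounting per neighbour network, with no per-flow processing."""

        def __init__(self):
            self.declared_bytes = 0    # congestion declared by the upstream neighbour
            self.observed_bytes = 0    # congestion measured locally on the same traffic

        def count_packet(self, size, declares_congestion, observed_congestion):
            if declares_congestion:
                self.declared_bytes += size
            if observed_congestion:
                self.observed_bytes += size

        def accounting_period_discrepancy(self):
            # A persistently understating neighbour accumulates a positive
            # discrepancy, which could feed a contractual penalty.
            return self.observed_bytes - self.declared_bytes

    meter = BorderMeter()
    meter.count_packet(1500, declares_congestion=False, observed_congestion=True)
    print(meter.accounting_period_discrepancy())    # 1500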


[ PDF ] Using Self-interest to Prevent Malice; Fixing the Denial of Service Flaw of the Internet, Bob Briscoe (BT & UCL), The Workshop on the Economics of Securing the Information Infrastructure (Oct 2006). (16pp, 34 refs, 5 figs) [BibTeX]

Presentations: [ WESII'06 | CRN_DoS'06 | CRN_DoS_Nov'05 | CRN_DoS_Jan'05 ]

Abstract: This paper describes the economic intent of a proposed change to the Internet protocol. Denial of service is the extreme of a spectrum of anti-social behaviour problems it aims to solve, but without unduly restricting unexpected new uses of the Internet. By internalising externalities and removing information asymmetries it should trigger evolutionary deployment of protections for Internet users. To be worthwhile, architectural change must solve the last stages of the arms race, not just the next. So we work through the competitive process to show the solution will eventually block attacks that other researchers consider unsolvable, and that it creates the right incentives to drive its own deployment, from bootstrap through to completion. It also encourages deployment of complementary solutions, not just our own. Interestingly, small incentives in the lower layer infrastructure market amplify to ensure operators block attacks worth huge sums on the black market in the upper layers.


[ PDF ] Shared Control of Networks using Re-feedback; An Outline, Bob Briscoe, Sébastien Cazalet, Andrea Soppera and Arnaud Jacquet (BT), BT Technical Report TR-CXR9-2004-001 (Sep 2004). (9pp, 16 refs, 5 figs) [BibTeX]

Presentation

Abstract: Properly characterising paths is an important foundation for resource sharing and routing in packet networks. We realign metrics so that fields in packet headers characterise the path downstream of any point, rather than upstream. Then closed loop control is possible for either end-points or network nodes. We show how incentives can be arranged to ensure that honest reporting and responsible behaviour will be the dominant strategies of selfish parties, even for short flows. This opens the way for solutions to a number of problems we encounter in data networking, such as congestion control, routing and denial of service.
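
The time-to-live example can be made concrete: if the receiver feeds back the TTL with which packets arrive, the sender can offset its initial TTL so that packets arrive at an agreed target value, and the TTL seen at any node then predicts the hops remaining downstream. The target value and hop counts below are assumptions for illustration.

    TARGET_TTL_AT_RECEIVER = 16    # assumed convention known to all parties

    def initial_ttl(reported_arrival_ttl, previous_initial_ttl):
        """Sender adjusts its starting TTL so packets arrive at the target value."""
        path_hops = previous_initial_ttl - reported_arrival_ttl
        return TARGET_TTL_AT_RECEIVER + path_hops

    def downstream_hops(ttl_seen_here):
        """Any node can now read the remaining hop count straight from the header."""
        return ttl_seen_here - TARGET_TTL_AT_RECEIVER

    # Example: feedback says packets sent with TTL 64 arrived with TTL 54 (10 hops).
    ttl0 = initial_ttl(54, 64)           # -> 26
    print(downstream_hops(ttl0 - 4))     # a node 4 hops along sees TTL 22 -> 6 hops left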
 

Implementations

Related Work

Assessments of re-ECN/re-feedback 

Background Research

Weighted and/or Proportionally Fair Congestion Controls (incl for background transfers)

Related standards activity

Articles in the technical media

Most of these articles 'get it', but some still tend to bash p2p usage rather than embracing coexistence of heavy and light users.

Contact

The conex@ietf.org mailing list page contains instructions for joining the IETF's congestion exposure (ConEx) mailing list, sending to the list, or reading the archive.

Bob Briscoe <bob.briscoebt.com>

Last modified: 08 Mar 2012
Bob Briscoe