Congestion Exposure (ConEx), Re-feedback & Re-ECN
A new resource sharing architecture for the Internet, enabling cost fairness (and other forms of fairness)
Overview
Re-feedback is a novel feedback arrangement for
connectionless networks that forces packets to truthfully expose the
congestion they expect to cause as they share each resource that makes
up the
Internet. Once exposed, this information considerably simplifies the
resource allocation problems
listed below.
We have proposed a form of re-feedback called re-ECN (re-feedback
of explicit
congestion notification) that can be
incrementally added to the current Internet by modifying Internet
senders or using a proxy. Currently, we are proposing to encode re-ECN
using the last
available bit in the IPv4 packet header, or an IPv6 extension header.
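To make the core invariant concrete, here is a minimal sketch in Python (purely illustrative; real re-ECN spreads this signal one bit at a time across many packets rather than using the real-valued fields shown here). The sender declares the whole-path congestion it learnt from its last feedback; each relay adds its own contribution to an upstream counter; the difference at any point predicts the congestion remaining downstream.

  # Toy model of the re-feedback invariant: declared - upstream = downstream.
  class Packet:
      def __init__(self, declared_whole_path):
          self.declared = declared_whole_path  # sender's re-inserted feedback
          self.upstream = 0.0                  # congestion accumulated so far

  def relay(packet, local_congestion):
      packet.upstream += local_congestion      # ordinary congestion marking
      # every relay can now see a prediction of its *downstream* path:
      return packet.declared - packet.upstream

  p = Packet(declared_whole_path=0.02)         # sender was told: 2% whole path
  print(relay(p, 0.005))                       # ~0.015 congestion still ahead

Honest declaration is what the incentive mechanisms described below are designed to enforce; a sender that understated its declaration would show up with a negative downstream balance by the time its packets left the network.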
Applications of re-feedback
- Controlling
congestion fairly: Networks
can limit the impact any customer can have when they congest resources
that others are using (link bandwidth, spectrum, etc.); a sketch of such
a policer follows this list
- Controlling
costs: The
impact on others has a direct interpretation as a cost to each network
providing the resources, and re-feedback ensures the information in
packets reflects the remaining costs both in the network transmitting
the packet and in further downstream networks
- Tight,
loose or no control: Re-feedback
only provides robust, timely information on congestion costs; it is up
to networks
how they use it, but a powerful range of possibilities is available:
- Networks might offer plenty of leeway for innovative
uses of
the Internet within one overall per-user constraint
- Or they might police the precise rate that each flow
should
adopt in response to varying congestion levels, enforcing
service-specific behaviour instead of the traditional generic service
of the Internet
- Networks can choose not to
limit the ability of their customers to cause costs to others, but
neighbouring networks can still hold such networks accountable for the
costs they allow their users to cause in other networks
- Quality of
service (QoS): Re-feedback
provides the information for a simple, wide-ranging quality of service
mechanism, because networks
can allow different customers to go faster than others in the presence
of congestion, giving some users the equivalent of a lightly loaded
network. Certain flows can be allowed not to alter their rate at all in
response to congestion, which is equivalent to a guaranteed rate
reservation.
- Simplified
inter-domain QoS: Re-feedback
enables QoS to work correctly across multiple domains but with
signalling only between the sender and their access domain.
- Providing
relevant bulk
information at borders: The
information re-feedback puts in packets is designed to correctly
aggregate so that inter-domain SLAs and contracts for all qualities of
service can be monitored and controlled simply by maintaining a couple
of counters at each border interface
- Traffic
engineering (speculative):
Routers
might be able to use the information re-feedback puts in packets to
balance the load on different paths (we have done no detailed research
to justify this claim)
- Distributed
denial of service
mitigation: DDoS attacks create extreme levels of
congestion, to which re-feedback mechanisms would naturally respond
with an extreme penalty, both blunting the attack and providing very
strong incentives for network operators to limit DDoS attacks.
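The first bullet above promised a sketch: one plausible way to limit each customer's congestion impact is a token bucket denominated in congestion-volume (bytes weighted by the congestion they cause) rather than in plain bytes. The class and parameters below are our own invention, not from any specification on this page; note how a DDoS source, causing extreme congestion, exhausts its allowance extremely quickly, which is the mitigation described in the last bullet.

  import time

  class CongestionPolicer:
      """Hypothetical per-customer policer: a token bucket whose tokens
      are congestion-volume allowance, filled at a flat contracted rate."""

      def __init__(self, fill_rate, depth):
          self.fill_rate = fill_rate      # allowance in bytes-of-congestion/s
          self.depth = depth
          self.tokens = depth
          self.last = time.monotonic()

      def allow(self, pkt_bytes, downstream_congestion):
          now = time.monotonic()
          self.tokens = min(self.depth,
                            self.tokens + (now - self.last) * self.fill_rate)
          self.last = now
          cost = pkt_bytes * downstream_congestion  # this packet's impact
          if self.tokens >= cost:
              self.tokens -= cost
              return True                 # forward: within allowance
          return False                    # out of allowance: drop or penalise

Uncongested traffic costs nothing against the allowance, so heavy but considerate usage stays unrestricted, which is exactly the leeway for innovative uses described in the list above.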
Presentations
Each primary paper below links to a full entry later in the page which
includes all supporting material, including presentations.
General Presentations
- re-ECN next steps (includes
repeat of
architectural intent)
- Unofficial IETF-69 Birds of a Feather, Jul '07 (animation
requires Office XP) [.ppt
| .pdf
]
Notes/transcript: [tba | tba ]
- re-ECN architectural intent
- Unofficial IETF-68 Birds of a Feather,
Mar '07 (animation requires Office XP) [.ppt
| .pdf
]
Notes/transcript: [ .txt
| .mp3
]
Primary Documentation
- Internet: Fairer is
Faster (May 2009)
- Audience: a
general
introduction to the technical and economic problem and solution
- How badly wrong the Internet's capacity sharing protocol
is,
and how Internet service providers (ISPs) are overriding it
- Fixing the root cause with a two-part solution that's a
lot
better than patching over one broken approach with another:
- weighted flow control on end systems
- congestion limits enforced by ISPs
- Why being able to limit congestion at a flat price
improves the
economic health of the Internet, and provides incentives to invest in
bandwidth
- Why the current Internet doesn't let ISPs limit
congestion,
because it was designed for end systems alone to see congestion
- The proposed change to IP that reveals congestion to ISPs
using
're-feedback', and the story so far getting it through the Internet
standards body
- Flow Rate
Fairness:
Dismantling a Religion (Jul 2007)
- Audience:
explaining
socio-economic issues for an engineering audience
- Why the approach the Internet community has been taking
to
fairness for the last few decades is broken
- Why re-feedback was developed as an alternative.
- Policing
Freedom to Use the Internet Resource Pool (Dec 2008)
- Audience: Internet
designers,
engineers and computer scientists
- The simplest possible mechanism to create the incentives
for
everyone to optimise total Internet usage
- With minimal constraint on applications, and flat rather than
dynamic pricing
- Why a new chapter will open in the evolution of the
Internet
- How applications would use this new capability
- Congestion
Exposure (ConEx)
Concepts and Use Cases (Mar 2011)
- Audience:
network operators, protocol engineers
- The concept of congestion-volume
- Pre-existing approaches
to traffic management and their limitations
- Usage scenarios for
Congestion Exposure (ConEx)
- Congestion
Exposure (ConEx)
Concepts and Abstract Mechanisms (Mar 2011)
- Audience:
Architects, protocol engineers & security professionals
- An abstraction of the ConEx wire protocol
- The generic components of the ConEx architecture
- The high level security goals of the system
- Policing Congestion
Response in an
Inter-Network
Using Re-Feedback (Sep 2005)
- Audience:
computer scientists
and
economists
- The architectural intent of re-feedback,
explaining why a
spectrum of ways
to fairly police congestion response is necessary and why it also
provides QoS and mitigates DDoS
- The detailed mechanisms here are dated, superseded by those
in the
Re-ECN for TCP/IP memo listed below.
- Re-ECN:
Adding Accountability for Causing Congestion to TCP/IP (Oct 2010)
- Audience:
engineering
- The full specification
of the protocol changes
required to IP (v4
& v6) and TCP as an example transport
- A full description of
example policing
functions that networks could build on top of re-feedback for
end-to-end congestion control
- Incremental deployment
strategies and
incentives
- Architectural intent.
- Problem Statement: Transport
Protocols Don't Have
To Do Fairness (Jul 2008)
- Audience:
Internet Engineering Task Force (IETF) and the engineering research
community
- Nowadays Internet
resource sharing is
largely determined by run-time behaviours, not the IETF's protocol
designs
- The relative shares of bottlenecks that users
get are totally unlike those the IETF expects
- The IETF needs to
understand and recognise
this trend and work on allowing others to control fairness properly at
run-time
- Delegating control of average rate to
others relaxes the problems the IETF is having standardising
transports for apps with demanding dynamics
- Commercial Models for IP
Quality of Service
Interconnect (Apr 2005)
- Audience: explaining
commercial
aspects to engineers and engineering issues to commercial specialists
- The competitive commercial context within which
inter-domain
QoS exists, and how cost-based and value-based charging interact
- How to simplify the provision of assured QoS sessions
across an
internetwork by dividing the problem into edge and inter-connected core
networks and adding re-feedback of downstream congestion information
solely between the core networks (not out to the edges)
- Engineering details are in the memo on "Emulating Border
Flow
Policing..." below
- The structure of the session-level market that buys QoS
from
edge and core networks, and the market between networks, both edge and
core.
- Emulating
Border
Flow Policing using Re-ECN on Bulk Data (Feb 2008)
- Audience:
engineering
- The application of re-feedback in a specific edge-to-edge
QoS
reservation scenario using pre-congestion notification (PCN)
- A full description of
example policing
functions that networks could build on top of re-feedback to prevent
networks cheating each other
- Commercial context is in
the "Commercial
Models for IP QoS Interconnect" paper above.
- Using Self-interest to
Prevent
Malice; Fixing the Denial of Service Flaw of the Internet
(Oct 2006)
- Audience:
Economists, public
policy and security strategists
- The economic intent of re-feedback with respect to
extreme
anti-social behaviour---DDoS attacks
- Covers both mitigation of attacks and incentives for
deployment---bootstrap and completion
All Documentation & Supporting Material
Congestion
Exposure (ConEx) Concepts and Abstract Mechanism,
Matt Mathis (Google) and Bob Briscoe (BT),
IETF Internet-Draft <draft-ietf-conex-abstract-mech-01.txt>
(Mar 2011). (15pp, 1 fig, 19 refs) [BibTeX]
Differences between drafts: [ IETF
document history | ietf00-mathis00
]
Presentations: [ IETF-80
| IETF-79
]
Abstract: This
document describes an abstract mechanism by which senders inform the
network about the congestion encountered by packets earlier in the same
flow. Today, the network may signal congestion to the receiver by ECN
markings or by dropping packets, and the receiver passes this
information back to the sender in transport-layer feedback. The
mechanism to be developed by the ConEx WG will enable the sender to
also relay this congestion information back into the network in-band at
the IP layer, such that the total level of congestion is visible to all
IP devices along the path, from where it could, for example, provide
input to traffic management.
ConEx
Concepts and Use Cases,
Toby Moncaster
(independent; Ed), John Leslie (JLC.net; Ed), Bob Briscoe (BT), Richard
Woundy (Comcast) and David McDysan (Verizon),
IETF Internet-Draft <draft-ietf-conex-concepts-uses-01.txt>
(Mar 2011). (23pp, 1 fig, 20 refs) [BibTeX]
Differences between drafts: [ IETF
document history | moncaster02-01
| 01-00
]
Presentations: [ IETF-80
| IETF-79
| IETF-78
]
Abstract: Internet
Service Providers (operators) are facing problems where localized
congestion prevents full utilization of the path between sender and
receiver at today's "broadband" speeds. Operators desire to control
this congestion, which often appears to be caused by a small number of
users consuming a large amount of bandwidth. Building out more capacity
along all of the path to handle this congestion can be expensive and
may not result in improvements for all users so network operators have
sought other ways to manage congestion. The current mechanisms all
suffer from difficulty measuring the congestion (as distinguished from
the total traffic).
The ConEx Working Group is designing a mechanism to make congestion
along any path visible at the Internet Layer. This document describes
example cases where this mechanism would be useful.
Re-feedback:
Freedom with Accountability for Causing
Congestion in a Connectionless Internetwork,
Bob Briscoe (BT & UCL),
UCL PhD dissertation (May 2009).
(256pp, 39 figs, 173
refs) [BibTeX]
Abstract:
This dissertation concerns
adding resource accountability to a simplex internetwork such as the
Internet, with only necessary but sufficient constraint on freedom.
That is, both freedom for applications to evolve new innovative
behaviours while still responding responsibly to congestion; and
freedom for network providers to structure their pricing in any way,
including flat pricing.
The big idea on which the research is built is a novel feedback
arrangement termed `re-feedback'. A general form is defined, as well as
a specific proposal (re-ECN) to alter the Internet protocol so that
self-contained datagrams carry a metric of expected downstream
congestion. Congestion is chosen because of its central economic role
as the marginal cost of network usage. The aim is to ensure Internet
resource allocation can be controlled either by local policies or by
market selection (or indeed local lack of any control).
The current Internet architecture is designed to only reveal path
congestion to end-points, not networks. The collective actions of
self-interested consumers and providers should drive Internet resource
allocations towards maximisation of total social welfare. But without
visibility of a cost-metric, network operators are violating the
architecture to improve their customers' experience. The resulting
fight against the architecture is destroying the Internet's simplicity
and ability to evolve.
Although accountability with freedom is the goal, the focus is the
congestion metric, and whether an incentive system is possible that
assures its integrity as it is passed between parties around the
system, despite proposed attacks motivated by self-interest and malice.
This dissertation defines the protocol and canonical examples of
accountability mechanisms. Designs are all derived from carefully
motivated principles. The resulting system is evaluated by analysis and
simulation against the constraints and principles originally set. The
mechanisms are proven to be agnostic to specific transport behaviours,
but they could not be made flow-ID-oblivious.
Internet:
Fairer is
Faster,
Bob Briscoe (BT), BT White Paper TR-CXR9-2009-001 (Jun 2009). (7pp, 5
figs) [
BibTeX]
Abstract:
The
Internet is founded on a very simple premise: shared communications
links are
more efficient than dedicated channels that lie idle much of the time.
We share
local area networks at work and neighbourhood links from home. Indeed,
a multi-gigabit
backbone cable is shared among thousands of folks surfing the Web,
downloading
videos, and talking on Internet phones. But
there’s a
profound flaw in the protocol that governs how people share the
Internet’s
capacity. The protocol allows you to seem to be
polite, even as
you take
far more resources than others.
Network providers like Verizon or BT
either throw capacity at the problem or patch over it with homebrewed
attempts to penalize so-called bandwidth hogs or the software they tend
to use. From the start it needs to be crystal clear that those with an
appetite for huge volumes of data are not the problem. There
is
no
need to stop them downloading vast amounts of material, if they can do so without
starving others.
But no network provider can solve this on their own. At the Internet
standards body, work has started on fixing the deeply entrenched
underlying problem. A leading proposal claims to have found a way to
deploy a tweak to the Internet protocol itself—the Internet’s ‘genetic
material’. The intent is to encourage a profound shift in the
incentives that drive capacity sharing.
The
following 6pp article for
IEEE Spectrum magazine is adapted for an engineering audience from the
above 7pp white paper—it has some of the economic motivation and
figures edited out.
(Remote
IEEE original)
A Fairer,
Faster
Internet Protocol,
Bob Briscoe (BT), Illustrations by QuickHoney, IEEE Spectrum, Dec 2008
pp38-43
(2008). (6pp, 3 figs) [
BibTeX]
Abstract:
The Internet is founded on a very simple premise: shared
communications links are more efficient than dedicated channels that
lie idle much of the time.
And so we share. We share local area networks at work and neighborhood
links from home. And then we share again—at any given time, a terabit
backbone cable is shared among thousands of folks surfing the Web,
downloading videos, and talking on Internet phones.
But there’s a profound flaw in the protocol that governs how people
share the Internet’s capacity. The protocol allows you to seem to be
polite, even as you elbow others aside, taking far more resources than
they do.
Network providers like Verizon and BT either throw capacity at the
problem or improvise formulas that attempt to penalize so-called
bandwidth hogs. Let me speak up for this much-maligned beast right
away: bandwidth hogs are not the problem. There is no need to prevent
customers from downloading huge amounts of material, so long as they
aren’t starving others.
Rather than patching over the problem, my colleagues and I
at BT
(formerly British Telecom) have worked out how to fix the root cause:
the Internet’s sharing protocol itself. It turns out that this solution
will make the Internet not just simpler but much faster too.
Flow
Rate
Fairness: Dismantling a Religion, Bob
Briscoe (BT & UCL), ACM Computer Communications Review 37(2)
63--74
(Apr 2007). (10pp, 2 figs, 35
refs) [
BibTeX]
Flow
Rate
Fairness: Dismantling a Religion, Bob
Briscoe (BT & UCL),
IETF Internet-Draft <draft-briscoe-tsvarea-fair-02.pdf>
(Expired) (Jul 2007). (44pp, 2 figs, 62 refs) [BibTeX]
Differences between drafts: [ 02-01
| 01-00
]
Presentations: [ IETF-69
| IETF-68
| IRTF-E2ERG'0702
| IRTF-ICCRG'0702
| PFLDnet'07
| IETF-67
]
Abstract:
Resource allocation and accountability have been major unresolved
problems with the Internet ever since its inception. The reason we
never resolve these issues is a broken idea of what the problem is. The
applied research and standards communities are using completely
unrealistic and impractical fairness criteria. The resulting mechanisms
don't even allocate the right thing and they don't allocate it between
the right entities. We explain as bluntly as we can that thinking about
fairness mechanisms like TCP in terms of sharing out flow rates has no
intellectual heritage from any concept of fairness in philosophy or
social science, or indeed real life. Comparing flow rates should never
again be used for claims of fairness in production networks. Instead,
we should judge fairness mechanisms on how they share out the `cost' of
each user's actions on others.
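A worked example of the cost metric the paper advocates instead (numbers invented for illustration): congestion-volume, i.e. bytes sent weighted by the congestion level they were sent through. It can rank a light peak-time user as costlier than a heavy off-peak one, a distinction flow-rate comparisons cannot make.

  GB = 10**9

  def congestion_volume(volume_bytes, congestion_fraction):
      # bytes x congestion level = bytes of 'cost' imposed on others
      return volume_bytes * congestion_fraction

  heavy_offpeak = congestion_volume(100 * GB, 0.0001)  # 100 GB at 0.01% marking
  light_peak    = congestion_volume(2 * GB, 0.02)      # 2 GB at 2% marking
  print(heavy_offpeak / 10**6)  # 10.0 MB of congestion-volume
  print(light_peak / 10**6)     # 40.0 MB: four times the cost of the 'hog'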
Policing
Congestion
Response in an Internetwork using Re-feedback,
Bob Briscoe (BT & UCL), Arnaud Jacquet (BT), Carla Di
Cairano-Gilfedder (BT),
Alessandro Salvatori (Eurécom & BT), Andrea
Soppera (BT) and Martin Koyabe (BT) in Proc ACM
SIGCOMM'05,
Computer Communications Review 35(4)
(Sep 2005) (12pp, 21 refs, 8 figs; pre-print) [BibTeX]
Presentation: [ SIGCOMM'05
| CFP_BB'05
| UCL'04
| Cam'04 |
ICSI'04
]
Abstract:
This paper introduces a novel feedback arrangement, termed re-feedback.
It ensures metrics in data headers such as time to live and congestion
notification will arrive at each relay carrying a truthful prediction
of the remainder of their path. We propose mechanisms at the network
edge that ensure the dominant selfish strategy of both network domains
and endpoints will be
to set these headers honestly and to respond correctly to path
congestion and delay, despite conflicting interests. Although these
mechanisms influence incentives, they don’t involve tampering with
end-user pricing. We describe a TCP rate policer as a specific example
of this new capability. We show it can be generalised to police various
qualities of service. We also sketch how a limited form of re-feedback
could be deployed incrementally around unmodified routers without
changing IP.
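For a flavour of what such a rate policer might test, here is a sketch built on the standard simplified TCP throughput equation, rate ≈ (MSS/RTT)·√(3/2)/√p. The policer in the paper differs in detail, and the slack factor below is an invented parameter, so treat this as an illustration of the principle only.

  from math import sqrt

  def tcp_compatible_rate(mss_bytes, rtt_s, p):
      # simplified TCP throughput equation (bytes/s) at loss/mark rate p
      return (mss_bytes / rtt_s) * sqrt(1.5) / sqrt(p)

  def compliant(measured_rate, mss_bytes, rtt_s, p, slack=1.5):
      # a rate policer could flag flows persistently above this envelope
      return measured_rate <= slack * tcp_compatible_rate(mss_bytes, rtt_s, p)

  # e.g. 1500 B packets, 100 ms RTT, 1% downstream congestion:
  print(tcp_compatible_rate(1500, 0.1, 0.01) * 8 / 10**6)  # ~1.47 Mb/s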
Policing
Freedom to Use the Internet Resource Pool,
Arnaud Jacquet
(BT), Bob
Briscoe (BT & UCL) & Toby Moncaster (BT), Workshop
on
Re-Architecting the Internet (ReArch'08) (Dec 2008) (6pp, 6
figs,
14
refs) [BibTeX]
Presentations: [ NGN
Interconnection Strategies'08 ]
Abstract:
Ideally, everyone should be free to use as much of
the Internet
resource pool as they can take. But, whenever too much load meets too
little capacity, everyone's freedoms collide. We show that attempts to
isolate users from each other have corrosive side-effects -
discouraging mutually beneficial ways of sharing the resource pool and
harming the Internet's evolvability. We describe an unusual form of
traffic policing which only pushes back against those who use their
freedom to limit the freedom of others. This offers a vision of how
much better the Internet could be. But there are subtle aspects missing
from the current Internet architecture that prevent this form of
policing being deployed. This paper aims to shift the research agenda
onto those issues, and away from earlier attempts to isolate users from
each other.
Re-ECN:
Adding Accountability for Causing Congestion to TCP/IP, Bob
Briscoe (BT & UCL), Arnaud Jacquet (BT), Toby Moncaster
(independent) and Alan
Smith
(BT),
IETF Internet-Draft <draft-briscoe-tsvwg-re-ecn-tcp-09.txt>
(Oct 2010). (51pp, 28 refs, 7 figs) [BibTeX]
Differences between drafts: [ 09-08
| 08-07
| 07-06
| 06-05
| 05-04
| 04-03
| 03-02
| 02-01
]
Presentations: [ IETF-76
| ECOC-FID'07
| IETF-69
| IETF-68
| CMU'06
| IETF-67
| IETF-66
| IETF-65
| IETF-64
]
Abstract:
This document introduces a
new protocol for explicit congestion
notification (ECN), termed re-ECN, which can be deployed incrementally
around unmodified routers. It enables the upstream party at any
trust boundary in the internetwork to be held responsible for the
congestion they cause, or allow to be caused. So, networks can
introduce straightforward accountability for congestion and policing
mechanisms for incoming traffic from end-customers or from neighbouring
network domains. The protocol works by arranging an extended ECN field
in each packet so that, as it crosses any interface in an internetwork,
it will carry a truthful prediction of congestion on the remainder of
its path. The purpose of this document is to specify the re-ECN
protocol at the IP layer and to give guidelines on any consequent
changes required to transport protocols. It includes the changes
required to TCP both as an example and as a specification. It also
gives examples of mechanisms that can use the protocol to ensure data
sources respond correctly to congestion. And it describes example
mechanisms that ensure the dominant selfish strategy of both network
domains and end-points will be to set the extended ECN field honestly.
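A sketch of the estimate this enables at any interface (field names illustrative, not the draft's actual codepoints): over an interval, the fraction of bytes the sender has marked to re-echo congestion, minus the fraction already congestion-marked upstream, approximates the congestion remaining downstream of that interface.

  def downstream_congestion(re_echo_bytes, ce_marked_bytes, total_bytes):
      # whole-path declaration minus congestion already suffered upstream
      if total_bytes == 0:
          return 0.0
      return (re_echo_bytes - ce_marked_bytes) / total_bytes

  # 3% of bytes re-echoed by the sender, 1% congestion-marked so far:
  print(downstream_congestion(30_000, 10_000, 1_000_000))  # -> ~0.02

A balance that goes persistently negative identifies traffic that is understating its congestion, which is what the protocol's policing mechanisms act on.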
Re-ECN:
The Motivation for Adding Accountability for Causing Congestion to
TCP/IP, Bob
Briscoe (BT & UCL), Arnaud Jacquet (BT), Toby Moncaster
(independent) and Alan
Smith
(BT),
IETF Internet-Draft <draft-briscoe-tsvwg-motivation-02.txt>
(Oct 2010). (52pp, 28 refs, 2 figs) [BibTeX]
Differences between drafts: [ 02-01
| 01-00
]
Presentations: [ ECOC-FID'07
| IETF-69
| ParisNetNeutrality'07
| IETF-68
| CRN_NetNeutrality'06
| CMU'06
| IETF-67
| IETF-66
| IETF-65
| IETF-64
]
Abstract: This
document describes
the motivation for a new
protocol for
explicit congestion notification (ECN), termed re-ECN, which can be
deployed incrementally around unmodified routers. Re-ECN
allows
accurate congestion monitoring throughout the network thus enabling the
upstream party at any trust boundary in the internetwork to be held
responsible for the congestion they cause, or allow to be
caused.
So, networks can introduce straightforward accountability for
congestion and policing mechanisms for incoming traffic from
end-customers or from neighbouring network domains. As well as
giving
the motivation for re-ECN this document also gives examples of
mechanisms that can use the protocol to ensure data sources respond
correctly to congestion. And it describes example mechanisms
that
ensure the dominant selfish strategy of both network domains and
end-points will be to use the protocol honestly.
The
Need for Congestion Exposure in the Internet,
Toby Moncaster, Louise Krug (BT),
Michael Menth (Uni Wuerzburg), João Taveira Araújo (UCL),
Steven
Blake (Extreme Networks) and Richard Woundy (Comcast; Editor),
IETF
Internet-Draft <draft-moncaster-conex-problem-00>
(Mar 2010). (22pp, 0 figs, 17 refs) [BibTeX]
Presentations: [ Slides
in IETF Proceedings of ConEx BoF (Nov'09) ]
Abstract:
Today's Internet is a product of its history. TCP is the
main transport protocol responsible for sharing out bandwidth
and
preventing a recurrence of congestion collapse while packet drop is the
primary signal of congestion at bottlenecks. Since packet
drop
(and increased delay) impacts all their customers negatively,
network operators would like to be able to distinguish between overly
aggressive congestion control and a confluence of many low-bandwidth,
low-impact flows. But they are unable to see the actual
congestion signal and thus, they have to implement bandwidth and/or
usage limits based on the only information they can see or measure (the
contents of the packet headers and the rate of the traffic).
Such measures don't solve the packet-drop problems effectively and are
leading to calls for government regulation (which also won't solve the
problem).
We
propose congestion exposure as a possible solution. This
allows
packets to carry an accurate prediction of the congestion they expect
to cause downstream thus allowing it to be visible to ISPs and network
operators. This memo sets out the motivations for congestion
exposure and introduces a strawman protocol designed to achieve
congestion exposure.
Congestion
Exposure Problem Statement,
Hannes Tschofenig (Nokia Siemens Networks) and Alissa Cooper (Center
for Democracy & Technology), IETF
Internet-Draft <draft-tschofenig-conex-ps-02>
(Mar 2010). (17pp, 0 figs, 16 refs) [BibTeX]
Presentations: [ Slides
in IETF Proceedings of ConEx BoF (Nov'09) ]
Abstract:
The increasingly ubiquitous availability of broadband, together with
flat-rate pricing, have made for increasing congestion problems on the
network, which are often caused by a small number of users consuming a
large amount of bandwidth. In some cases, building out more
capacity to handle this new congestion may be infeasible or
unwarranted. As a result, network operators have sought other
ways to manage congestion both from their own users and from other
networks. These different types of solutions have different
strengths and weaknesses, but all of them are limited in a number of
key ways.
This document discusses the problems created for
operators by high-consuming users and describes the strengths and
weaknesses of a number of techniques operators are currently using to
cope with high bandwidth usage. The discussion of these
solutions
ultimately points to a need for a new kind of congestion accounting.
Problem
Statement: Transport Protocols Don't Have To Do Fairness,
Bob
Briscoe (BT & UCL), Toby Moncaster and Lou Burness (BT),
IETF Internet-Draft <draft-briscoe-tsvwg-relax-fairness-01.txt>
(Expired)
(Jul 2008). (27pp, 27 refs) [BibTeX]
Differences between drafts: [ 01-00
]
Presentations: [ IETF-70
]
Abstract:
The
Internet is an amazing achievement - any of the thousand million hosts
can freely use any of the resources anywhere on the public
network. At
least that was the original theory. Recently issues with how
these
resources are shared among these hosts have come to the fore.
Applications are innocently exploring the limits of protocol design to
get larger shares of available bandwidth. Increasingly we are seeing
ISPs imposing restrictions on heavier usage in order to try to preserve
the level of service they can offer to lighter customers. We believe
that these are symptoms of an underlying problem: fair resource sharing
is an issue that can only be resolved at run-time, but for years
attempts have been made to solve it at design time. In this
document
we show that fairness is not the preserve of transport protocols,
rather the design of such protocols should be such that fairness can be
controlled between users and ISPs at run-time.
Commercial
Models for
IP Quality of Service Interconnect, Bob
Briscoe & Steve Rudkin (BT), in BTTJ Special Edition
on IP
Quality of Service, 23(2)
(Apr 2005). (26pp, 44 refs, 8 figs; pre-print) [BibTeX]
Presentations: [ IP
Interconnection Forum | CFP ]
Abstract:
Interconnection of IP QoS capabilities between networks releases
considerable value. In this paper we show where this value will be
realised. We give technical and economic arguments for why QoS will be
provided in core and backbone networks as a bulk QoS facility incapable
of distinguishing or charging differentially between sessions. While
between edge networks a vibrant mix of retail QoS solutions will be
possible, including Internet-wide per flow guarantees.
We outline cutting edge research on how to coordinate QoS between
networks, using a session-based overlay between the edges that will
extract most surplus value, underpinned by a bulk QoS layer
coordinating the whole. We survey today's interconnect tariffs and the
current disconnected state of IP QoS. Then we describe a commercial
`model of models' that allows incremental evolution towards an
interconnected future.
The paper covers intertwined engineering and economic/commercial issues
in some depth, but considerable effort has been made to allow both
communities to understand the whole paper.
Emulating
Border Flow Policing using Re-PCN on Bulk Data, Bob
Briscoe (BT),
IETF Internet-Draft <draft-briscoe-re-pcn-border-cheat-03.txt>
(Oct 2009). (59pp, 31 refs, 4 figs) [BibTeX]
Differences between drafts: [pcn03-pcn02
| pcn02-pcn01
| pcn01-pcn00
| pcn00-tsvwg01
| tsvwg01-tsvwg00]
Presentations:
[ IETF-66
| IETF-65
]
Abstract:
Scaling per flow admission
control to the Internet is a hard problem.
The approach of combining Diffserv and pre-congestion notification
(PCN) provides a service slightly better than Intserv controlled load
that scales to networks of any size without needing Diffserv's usual
overprovisioning, but only if domains trust each other to comply with
admission control and rate policing. This memo claims to
solve
this trust problem without losing scalability. It provides a
sufficient emulation of per-flow policing at borders but with only
passive bulk metering rather than per-flow processing.
Measurements are sufficient to apply penalties against cheating
neighbour networks.
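To illustrate the passive bulk metering this memo relies on, a hypothetical border meter (names ours, not the memo's): two counters per border interface, whose balance over an accounting period measures the congestion-volume one network is exporting to its neighbour, with no per-flow state at all.

  class BorderMeter:
      """Bulk accounting at one border interface: just two counters."""

      def __init__(self):
          self.declared_bytes = 0  # bytes declaring downstream congestion
          self.marked_bytes = 0    # bytes already congestion-marked upstream

      def on_packet(self, size, declares_congestion, already_marked):
          if declares_congestion:
              self.declared_bytes += size
          if already_marked:
              self.marked_bytes += size

      def settlement_balance(self):
          # congestion-volume this neighbour will cause downstream; a basis
          # for penalties against persistently understating networks
          return self.declared_bytes - self.marked_bytes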
Using
Self-interest to Prevent Malice; Fixing the Denial of Service Flaw of
the Internet,
Bob
Briscoe (BT & UCL), The
Workshop on the Economics of Securing the
Information Infrastructure (Oct 2006). (16pp, 34 refs, 5
figs) [BibTeX]
Presentations: [ WESII'06
| CRN_DoS'06
| CRN_DoS_Nov'05
| CRN_DoS_Jan'05
]
Abstract:
This paper describes the
economic intent of a proposed change to the Internet protocol. Denial
of service is the extreme of a spectrum of anti-social behaviour
problems it aims to solve, but without unduly restricting unexpected
new uses of the Internet. By internalising externalities and removing
information asymmetries it should trigger evolutionary deployment of
protections for Internet users. To be worthwhile, architectural change
must solve the last stages of the arms race, not just the next. So we
work through the competitive process to show the solution will
eventually block attacks that other researchers consider unsolvable,
and that it creates the right incentives to drive its own deployment,
from bootstrap through to completion. It also encourages deployment of
complementary solutions, not just our own. Interestingly, small
incentives in the lower layer infrastructure market amplify to ensure
operators block attacks worth huge sums on the black market in the
upper layers.
Shared
Control of Networks using Re-feedback; An Outline, Bob
Briscoe,
Sébastien Cazalet, Andrea Soppera and
Arnaud Jacquet (BT), BT Technical Report TR-CXR9-2004-001 (Sep 2004)
(9pp, 16 refs, 5 figs) [BibTeX]
Presentation
Abstract:
Properly characterising paths is an important foundation for resource
sharing and routing in packet networks. We realign metrics so that
fields in packet headers characterise the path downstream of any point,
rather than upstream. Then closed loop control is possible for either
end-points or network nodes. We show how incentives can be arranged to
ensure that honest reporting and responsible behaviour will be the
dominant strategies of selfish parties, even for short flows. This
opens the way for solutions to a number of problems we encounter in
data networking, such as congestion control, routing and denial of
service.
Implementations
- re-ECN in IP & TCP
- Linux kernel 2.6.27.7, to spec
draft-briscoe-tsvwg-re-ecn-tcp-08, but only approximate byte
accounting.
Available from: <Alan.P.Smith@bt.com>
- Linux kernel 2.6.26, to spec
draft-briscoe-tsvwg-re-ecn-tcp-08, but only packet accounting.
Contact: <Mirja.Kuehlewind@ikr.uni-stuttgart.de>
- ns2 network simulator, to spec
draft-briscoe-tsvwg-re-ecn-tcp-08, but only packet accounting.
By Toby Moncaster, available from: <bob.briscoe@bt.com>
- ns2 network simulator, implemented sufficiently for
internal traffic engineering simulations.
Contact João Taveira Araújo <j.araujo@ee.ucl.ac.uk>
- Congestion-related traffic management
Related Work
Assessments of re-ECN/re-feedback
- Mirja Kühlewind &
Michael Scharf, "Implementation
and Performance Evaluation of the re-ECN Protocol," In:
Proc 3rd
Workshop on Economic Traffic Management (ETM'10) (to appear
September 2010)
- Finding
a Fair Internet Capacity Sharing Solution:
Global Information Infrastructure Commission (GIIC)
assessment of re-ECN (from commercial, public policy &
technical viewpoints) [remote
| local copy]
- See also a summary of this GIIC activity on
the GIIC home page,
with links to related resources.
- Trilogy
Project Resource Control deliverables (2009-2010) [remote
copies]
- Steve Bauer, Peyman Faratin & Rob Beverly, "Assessing
the Assumptions Underlying Mechanism Design for the Internet,"
In: Proc. Workshop on the Economics of Networked Systems (NetEcon06)
MIT (June 2006) [remote
| local copy]
- Net
Neutrality: Beyond
the Hype to Achieve a Balanced Solution,
Communications
Research
Network (CRN) case study (Dec 2006) (2pp) [local copy]
- Alessandro Salvatori, "Closed Loop
Traffic Policing," Masters Thesis
submitted to: Politecnico di Torino and Institut Eurécom (Sep 2005)
Background Research
- Literature Review
extracted from Bob
Briscoe, "Re-feedback: Freedom with Accountability for Causing
Congestion in a Connectionless Internetwork," UCL PhD dissertation (May
2009), focusing primarily on the following:
- Internet Congestion Control
- Jacobson, V. & Karels, M.J., "Congestion
Avoidance and Control,"
Lawrence Berkeley Labs Technical Report (November 1988) (a slightly
modified version of the original published at SIGCOMM in Aug'88)
- Economics of Network Congestion
- Two-part congestion pricing: MacKie-Mason, J.K.
& Varian, H., "Pricing
Congestible Network Resources," IEEE Journal on Selected
Areas in Communications, "Advances in the Fundamentals of Networking"
13(7):1141--1149
(1995)
- Network Utility functions: Shenker, S., "Fundamental
Design Issues for the Future Internet," IEEE Journal on
Selected Areas in Communications 13(7):1176--1188
(1995)
- Weighted Proportional Fairness: Kelly, F.P., Maulloo,
A.K. & Tan, D.K.H., "Rate
control for communication networks: shadow prices, proportional
fairness and stability," Journal of the Operational Research
Society 49(3):237--252
(1998)
- Internetwork Market Structure
- Shenker, S., Clark, D., Estrin, D. & Herzog,
S., "Pricing
in Computer Networks: Reshaping the research agenda," ACM
SIGCOMM Computer Communication Review 26(2) (April 1996)
- Constantiou, I.D. & Courcoubetis, C.A., "Information
Asymmetry Models in the Internet Connectivity Market," In:
Proc. 4th Internet Economics Workshop (May 2001)
- Laskowski, P. & Chuang, J., "Network
Monitors and Contracting Systems: Competition and Innovation,"
Proc. ACM SIGCOMM'06, Computer Communication Review 36(4):183--194 ACM
Press (September 2006)
- Clark, D., Wroclawski, J., Sollins, K. &
Braden, R., "Tussle
in Cyberspace: Defining Tomorrow's Internet," IEEE/ACM
Transactions on Networking 13(3):462--475
(June 2005)
- A
self-managed Internet, Frank Kelly's paper of the same name
and
collection of links to supporting work on:
- smaller buffer delays
- stability with propagation delays
- joint congestion control and routing
- distributed measurement-based admission control
- priority scheduling
- multicast
- marking strategies
- pricing and business models.
- Resource
Pricing and the Evolution of Congestion Control, Richard
Gibbens
& Frank Kelly
- Trilogy
Project -
Architecting the Future Internet
- Market
Managed Multi-service
Internet (M3I) project, 2000-2002
- Future
Wireless Network Architectures, Joint ICS FORTH, BT Research
project
- Microsoft Research, Networking Group (mostly see past
projects) led by Peter Key
- Explicit
congestion
notification (ECN), Sally Floyd's page of collected resources
- Caltech
NetLab, led by Steven Low
- R
Srikant, Internet Congestion Control & Pricing papers
- Congestion
Balancing using re-ECN, early investigation of traffic
engineering using re-ECN from João Taveira Araújo, Miguel Rio and
George Pavlou
- Also see the
related work sections of the relevant papers above.
Weighted and/or Proportionally Fair Congestion Controls (incl. for background transfers)
- Data Centre TCP (DCTCP)
- Relentless Congestion Control
- Equitable Quality Streaming of Video
- Nilsson,
M., Crabtree, B., Mulroy, P. & Appleby, S., "Equitable quality
video streaming for IP networks," International Journal of Internet
Protocol Technology 4(1):65--76
(March 2009) DOI: 10.1504/IJIPT.2009.024171
- Low
Extra Delay BAckground Transport (LEDBAT), by Stanislav
Shalunov: IETF standards track congestion control protocol that is less
aggressive than TCP
- Micro
Transport Protocol or µTP (sometimes also uTP), an
open-source transport protocol developed by BitTorrent that uses a
LEDBAT-like congestion control
- Background
Intelligent Transfer Service (BITS),
Microsoft's transport used for background file transfers (by
Microsoft apps such as Windows update and non-Microsoft
applications) - BITS is not designed to yield to other users sharing
Internet capacity, only to other applications on the same host, or
possibly those using the same Internet gateway device (see
specific details).
- Scalable
TCP by Tom Kelly
- Weighted
window-based congestion control for various packet marking algorithms,
Vasilios Siris
- Weighted
Proportional Fair Sharing TCP (MulTCP), Jon Crowcroft
&
Philippe Oechslin
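To make the 'weighted' idea concrete, here is the AIMD scaling behind MulTCP as we understand it (a toy model, not Crowcroft & Oechslin's implementation): one flow emulates N standard TCPs by growing N times faster and, on congestion, backing off as if only one of its N virtual flows had halved.

  def multcp_on_ack(cwnd, n):
      return cwnd + n / cwnd           # standard TCP adds 1/cwnd per ACK (n = 1)

  def multcp_on_loss(cwnd, n):
      return cwnd * (1 - 1 / (2 * n))  # standard TCP halves (n = 1)

  # A flow with weight n = 4 takes roughly 4x the share of a competing TCP
  cwnd = 10.0
  for _ in range(200):
      cwnd = multcp_on_ack(cwnd, n=4)
  cwnd = multcp_on_loss(cwnd, n=4)
  print(round(cwnd, 1))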
Related standards activity
Articles in the technical media
Most of these articles 'get it', but some tend to still bash p2p usage,
rather than embracing coexistence of heavy & light users.
- heise online, Congestion
Control: Die Idealwelten der Ökonomen und die Netzneutralität
[Congestion control: the economists' ideal worlds and net neutrality],
Monika Ermert (Aug 2010)
- IETF Journal, Congestion
Exposure: We’re All in This Together, Philip Eardley (Jan
2010)
- c't,
Das Fairness Bit [The Fairness Bit], Richard Sietmann (Aug 2009)
- IEEE Spectrum, A
Fairer, Faster Internet Protocol
Bob Briscoe (Dec
2008)
- PC World, Money
for Jams, Clive Akass (May 2008)
- The Guardian, File-sharers
want to have your cake and eat it too, Jack Schofield (Jun
2008)
- The Guardian, Stopping peer-to-peer bandwidth hogs from ripping off the rest of us, Jack
Schofield (Mar 2008)
- ZDnet, Fixing the
unfairness of TCP congestion control, George Ou (Apr 2008)
- The Register, Dismantling
a Religion: The EFF's Faith-Based Internet, Richard Bennett
(Dec 2007)
- Ars Technica, Growth
of P2P leads IETF to debate "fair" bandwidth use, Iljitsch
van Beijnum (Dec 2007)
Contact
The conex@ietf.org
mailing list page contains instructions for joining the
IETF's congestion exposure (ConEx) mailing list, sending to
the
list, or reading the archive.
Bob
Briscoe
<bob.briscoe@bt.com>