{Fragments of text contributing to next draft of
draft-irtf-iccrg-welzl-congestion-control-open-research-00.txt}

3.x Challenge x: Fairness

Recently, how we even reason about fairness has been called into
question [Bri07].  Much of the community has taken fairness to mean
approximate equality between the rates of flows that experience
equivalent path congestion, as with TCP [RFC2581] and TFRC [RFC3448].
But it has always been accepted that this leaves fairness as a rather
amorphous concept [RFC3714].

A parallel tradition has been built on [Kelly98] where, as long as
each user is accountable for the cost their rate causes to others
[MKMV95], the set of rates that everyone chooses is deemed fair (cost
fairness), because with any other set of choices people would lose
more value than they gained overall.

The two traditions are fundamentally at odds.  In comparison, the
debate between max-min, proportional and TCP fairness is about mere
details.  These three all share the assumption that equal flow rates
are desirable; they merely differ in the second-order issue of how to
share out excess capacity in a network of many bottlenecks.  In
contrast, cost fairness should lead to extremely unequal flow rates
by design; equivalently, equal flow rates would typically be
considered extremely unfair.

The two traditions are not protocol options that can each be followed
in different parts of a network; rather, one (cost fairness) comes
with a scientific proof that the other is an incorrect way to even
reason about fairness.  The traditions are so incompatible that they
produce research agendas that are nearly completely orthogonal.

If we assume TCP-friendliness as a goal, with flow rate as the
metric, open issues would be:

o  How should we judge whether flow rate should depend on RTT (as in
   TCP) or whether only flow dynamics should depend on RTT (e.g. as
   in FAST TCP [Jin04])?

o  If an application needs still smoother flows than TFRC, or it
   needs to burst occasionally, or it exhibits any other behaviour,
   how should we judge what is reasonably fair?

o  During brief congestion bursts (e.g. due to new flow arrivals),
   how should we judge at what point it becomes unfair for some flows
   to continue at a smooth rate while others reduce their rate?

o  How should we judge whether a particular flow-start strategy is
   fair?

o  How should we judge whether a particular fast-recovery strategy
   after a reduction in rate due to congestion is fair?

o  Should fairness depend on the packet rate or the bit rate?

o  Mechanisms to enforce approximate flow rate fairness;

o  Preventing gaming strategies such as opening excessively large
   numbers of flows over separate paths (e.g. via an overlay);

o  How can we introduce some degree of fairness that takes account of
   flow duration?

If we assume cost fairness as a goal, with congestion volume as the
metric, open issues would be:

o  Can one application's sensitivity to instantaneous congestion
   really be protected by longer-term accountability of competing
   applications?

o  Protocol mechanisms to give accountability for causing congestion;

o  Policy enforcement by networks;

o  Designing one or two generic transports (akin to TCP, UDP, etc.)
   with the addition of application policy control;

o  Interactions between application policy and network policy
   enforcement;

o  Competition with flows aiming for rate equality (e.g. TCP).
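To make the contrast between the two metrics concrete, the sketch
below (illustrative Python; the traffic figures and marking levels
are invented for this example, not drawn from any measurement)
computes both metrics for two flows.  Congestion volume is taken here
to be the volume of data a flow sends multiplied by the loss or ECN
marking level on its path, i.e. the amount of its traffic that was
dropped or marked.

   # Illustrative comparison of the two fairness metrics; all numbers
   # are invented for the example.

   def flow_rate(bytes_sent, duration_s):
       # Flow-rate metric: average sending rate in bit/s.
       return bytes_sent * 8 / duration_s

   def congestion_volume(bytes_sent, congestion_level):
       # Cost metric: volume of the flow's data that was dropped or
       # ECN-marked, i.e. the congestion cost the flow caused.
       return bytes_sent * congestion_level

   # Two flows send the same volume over the same duration, but flow A
   # crosses a path with 1% marking while flow B crosses a path with
   # 0.01% marking.
   SENT = 12_500_000   # bytes sent by each flow over 100 seconds

   print(flow_rate(SENT, 100), flow_rate(SENT, 100))
   # -> 1 Mbit/s each: equal, hence "fair", by the flow-rate tradition.

   print(congestion_volume(SENT, 0.01), congestion_volume(SENT, 0.0001))
   # -> 125000 vs. 1250 bytes of congestion: a 100:1 disparity, hence
   #    very unfair by the cost-fairness tradition.

Conversely, under cost fairness a flow on a lightly congested path
could legitimately run far faster than one on a heavily congested
path while causing the same congestion cost, which is why cost
fairness leads to unequal flow rates by design.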
So, settling the question of how to reason about fairness is a
prerequisite to agreeing the research agenda.  That question doesn't
require more research in itself; it is merely a debate that needs to
be resolved by studying existing research, by assessing how bad
fairness problems could get if we don't address the issue rigorously,
and then by reaching consensus.

If the more rigorous logic of cost fairness were adopted, the main
implications would be:

o  The research issues in the rate-fairness agenda would mostly
   disappear, given that they mostly concern how to extend the way we
   reason about fairness to cope with different dynamics, while the
   cost-fairness agenda starts out able to reason about fairness
   during dynamics.

o  The constraints on the problem of designing high-speed congestion
   controls would be considerably relaxed, allowing an application
   with a short-term requirement to go fast during congestion to do
   so, as long as the user compensates at another time or on another
   path.

o  The IETF's and IRTF's role would shift from defining protocol
   fairness at design time to designing protocols that give policy
   control of fairness to users and networks at run-time.

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\

3.8 Challenge 8: Misbehaving Senders, Receivers and Networks

In the current Internet architecture, congestion control depends on
parties acting against their own interests.  It is not in a
receiver's interest to honestly return feedback about congestion on
the path, effectively requesting a slower transfer.  It is not in the
sender's interest to reduce its rate in response to congestion if it
can rely on others to do so.  And networks may have strategic reasons
to make other networks appear congested.

Numerous strategies to game congestion control have already been
identified.  The IETF has particularly focused on misbehaving TCP
receivers that could confuse a compliant sender into assigning
excessive network and/or server resources to that receiver (e.g.
[Sav99], [RFC3540]).  But, although such strategies are worryingly
powerful, they do not yet seem common.

A growing proportion of Internet traffic comes from applications
designed not to use congestion control at all or, worse, applications
that add more forward error correction the more losses they
experience.  Some believe the Internet was designed to allow such
freedom, so it can hardly be called misbehaviour.  But others
consider it misbehaviour to abuse this freedom [RFC3714], given that
one person's freedom can constrain the freedom of others (congestion
represents this conflict of interests).  Indeed, leaving such freedom
unchecked could threaten congestion collapse in parts of the
Internet.  Proportionately large volumes of unresponsive voice
traffic could represent such a threat, particularly for countries
with less generous provisioning [RFC3714].  More recently, Internet
video-on-demand services are becoming popular that transfer much
greater data rates without congestion control (e.g. the peer-to-peer
Joost service currently streams media over UDP at about 700 kbit/s
downstream and 220 kbit/s upstream).

Finally, the problem is not just misbehaviour driven by a selfish
desire for more bandwidth.  Misbehaviour may be driven by pure
malice, or malice may in turn be driven by wider selfish interests,
e.g. using distributed denial of service (DDoS) attacks to gain
rewards by extortion [RFC4948].  DDoS attacks are possible both
because of vulnerabilities in operating systems and because the
Internet delivers packets without requiring congestion control.
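The misbehaving-receiver problem above also illustrates what checking
one end for protocol compliance can look like.  The ECN nonce of
[RFC3540] lets a sender detect a receiver that conceals congestion
marks: the sender places a random one-bit nonce in each packet, a
router that marks a packet destroys that packet's nonce, and the
receiver must echo the running one-bit sum of the nonces it received
intact.  A receiver that hides a mark therefore has to guess the
destroyed nonce and is caught with probability 1/2 per concealed
mark.  The sketch below (illustrative Python, heavily simplified from
the real TCP/ECN header mechanics) shows only this core idea.

   import random

   def sender_send(n_packets):
       # The sender picks a random one-bit nonce per packet and keeps
       # the running one-bit (XOR) sum it expects the receiver to echo.
       nonces = [random.randint(0, 1) for _ in range(n_packets)]
       expected, s = [], 0
       for n in nonces:
           s ^= n
           expected.append(s)
       return nonces, expected

   def congested_network(nonces, mark_prob=0.05):
       # A router that marks a packet (CE) destroys its nonce.
       return [None if random.random() < mark_prob else n
               for n in nonces]

   def cheating_receiver(received):
       # A receiver that conceals every mark must guess each destroyed
       # nonce to keep its echoed sum plausible.
       s, echoed = 0, []
       for n in received:
           s ^= n if n is not None else random.randint(0, 1)
           echoed.append(s)
       return echoed

   nonces, expected = sender_send(1000)
   echoed = cheating_receiver(congested_network(nonces))

   # The sender compares each echoed sum with the sum it expects; each
   # concealed mark escapes detection with probability only 1/2.
   mismatch = next((i for i, (e, x) in enumerate(zip(echoed, expected))
                    if e != x), None)
   print("cheating first detected at packet", mismatch)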
The research agenda messages from these considerations are:

o  By design, new congestion control protocols need to enable one end
   to check the other for protocol compliance.

o  We need to provide congestion control primitives that satisfy more
   demanding applications (smoother than TFRC, faster than high-speed
   TCPs), so that application developers won't need to turn off
   congestion control to get what they need.

o  But self-restraint is rapidly disappearing from the Internet, so
   it is no longer sufficient to rely on developers voluntarily
   submitting themselves to congestion control.

o  Consequently, mechanisms to enforce fairness (S.3.x) need to have
   more emphasis within the agenda.

o  Currently, the focus of the research agenda against denial of
   service is on identifying attack packets, attacking machines and
   the networks hosting them, with a particular focus on mitigating
   source address spoofing.  But if mechanisms to enforce congestion
   control fairness were robust to both selfishness and malice
   [Bri06], they would also naturally mitigate denial of service,
   which can be considered a congestion control enforcement problem.

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\

References

{in addition to those already in draft-00}

[Bri06]    Briscoe, B., "Using Self-interest to Prevent Malice;
           Fixing the Denial of Service Flaw of the Internet",
           Workshop on the Economics of Securing the Information
           Infrastructure, October 2006.

[Bri07]    Briscoe, B., "Flow Rate Fairness: Dismantling a Religion",
           ACM SIGCOMM Computer Communication Review 37(2) 63--74,
           April 2007.

[Jin04]    Jin, C., Wei, D.X., and S. Low, "FAST TCP: Motivation,
           Architecture, Algorithms, Performance", in Proc. IEEE
           Conference on Computer Communications (Infocom'04),
           March 2004.

[MKMV95]   MacKie-Mason, J. and H. Varian, "Pricing Congestible
           Network Resources", IEEE Journal on Selected Areas in
           Communications 'Advances in the Fundamentals of
           Networking' 13(7) 1141--1149, 1995.

[RFC3540]  Spring, N., Wetherall, D., and D. Ely, "Robust Explicit
           Congestion Notification (ECN) Signaling with Nonces",
           RFC 3540, June 2003.

[RFC3714]  Floyd, S. and J. Kempf (eds.), "IAB Concerns Regarding
           Congestion Control for Voice Traffic in the Internet",
           RFC 3714, March 2004.

[RFC4948]  Andersson, L., Davies, E., and L. Zhang, "Report from the
           IAB workshop on Unwanted Traffic March 9-10, 2006",
           RFC 4948, August 2007.

[Sav99]    Savage, S., Wetherall, D., and T. Anderson, "TCP
           Congestion Control with a Misbehaving Receiver", ACM
           SIGCOMM Computer Communication Review, 1999.