NeXtworking: COST-NSF workshop on the Future Internet, Apr 2007
===============================================================

Bob Briscoe, UCL and Chief Researcher, BT Group


a) Fundamental research challenges for the Future Internet
   "Why are all architectural problems from 2000 still unsolved?"
==============================================================

As researchers, we have to be careful we are not inventing problems to fit
the research we want to do. A professional researcher should be able to
follow the agenda of other respected researchers. Back in 2000, the DARPA
NewArch project identified problems with the Internet architecture. Why are
none of these problems solved? They divide into two categories, those with:

  (0) few researchers working on them
  (n) many proposed solutions, but no obviously good ones and no consensus

ROUTING, NAMING, ADDRESSING (n)
  * policy controls on inter-provider routing
  * mobility
  * reachability through middleboxes
  * robustness & availability

RESOURCE CONTROL (0)
  * highly time-variable resources
  * capacity allocation
  * extremely long propagation delays

MANAGEMENT (0)
  * policy-driven auto-configuration
  * failure management

SECURITY (n)
  * attack resilience
  * traceability

HETEROGENEITY
  * enabling conflicting socio-economic outcomes (0)
  * enabling a variety of technical outcomes (n)

ROUTING AND MOBILITY: always popular research subjects. Researchers like
fiddling with addresses, paths and policies. It's easier than doing maths.
Perhaps the reason there is no consensus, despite shedloads of similar
ideas, is that no-one knows what a good idea looks like, except their
own---due to the lack of a theoretical framework big enough to model a
naming system as a routing system within a routing system.

NETWORK SECURITY RESEARCH: a lot of it, but mainly about identifiers,
push-back, traceability, paths---there's that obsession with addresses
again. What about DDoS as a resource sharing problem? (A toy sketch below
makes the point concrete.)

NETWORK MANAGEMENT: very little /architectural/ research. The problem isn't
elegant and generic enough for most CS researchers.

RESOURCE CONTROL: hardly any good architectural research. Nowhere near
solved yet, but the field is dead. Is the maths too hard?

ENABLING HETEROGENEITY: lots of proposals, but largely heterogeneity between
/technical/ architectures, usually by virtualisation---it's that obsession
with addresses again. Flexible architecture testbeds provide the
requirement. But let's not kid ourselves that the real Internet will evolve
new architectures like this, so that a thousand flowers bloom. The value of
a fully reachable internetwork is hugely greater than that of many little
ones. How will flows traverse multiple architectures unless we solve
inter-architecture resource control, routing, network management and
robustness? Why create a problem that makes the problems we haven't solved
even harder? Are customers or app developers demanding architectural
diversity at the inter-network layer?

Yes, we do need an architecture that supports diversity---enabling the
heroic social and economic tussles: liberal vs conservative, open vs
closed, community mesh network vs IMS. Yes, we need a way to transition
from old architecture to new. But ten new technical architectures isn't
heroic tussle, it's just pathetic indecisiveness. Why has there been so
little architectural research on supporting social and economic
diversity---tussle in cyberspace? Good multidisciplinary research is
perhaps even harder than hard maths.
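As promised above, a toy sketch of DDoS as a resource sharing problem: one
bottleneck shared by AIMD-like flows that back off when congested and a
couple of flood sources that don't. Every number below is an illustrative
assumption, chosen only to make the effect visible.

# Toy model: a shared bottleneck where responsive (AIMD-like) flows compete
# with unresponsive flood flows.  All parameters are illustrative.
CAPACITY = 100.0          # bottleneck capacity, units per round
ROUNDS = 1000

responsive = [1.0] * 8    # well-behaved flows (e.g. TCP-like)
flood = [40.0] * 2        # unresponsive flows (the "attack")

for _ in range(ROUNDS):
    offered = sum(responsive) + sum(flood)
    if offered > CAPACITY:
        # Congestion: only the responsive flows back off.
        responsive = [rate / 2.0 for rate in responsive]
    else:
        # Additive increase for responsive flows; the flood never adjusts.
        responsive = [rate + 1.0 for rate in responsive]

print(f"{len(responsive)} responsive flows are left with about "
      f"{sum(responsive) / CAPACITY:.0%} of the bottleneck")

Eight well-behaved flows end up squeezed into a small fraction of the
capacity by two sources that simply ignore congestion. Viewed this way, the
interesting question is how the architecture shares capacity under load,
not just which addresses the bad packets came from.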
Summary so far: I'm not impressed by sexy new proposals that deflect from
the well-known problems with the current Internet architecture. We have to
carry on until the problems are solved = deployed and working. Far more
effort is needed on:
  * resource control
  * enabling conflicting socio-economic outcomes
  * (and network management?)
These involve hard (elegant) maths and hard multidisciplinary research (the
postscript at the end of this note gives a taste of the maths). Research
should HURT.


b) Experimental Infrastructure: a vehicle to design the Future Internet
   "And how would we know we had solved socio-economic problems anyway?"
===================================================================

If you're obsessed with routing, addressing and naming, and believe
resource control research is no longer required because it's not
fashionable, overlay testbeds are just fine. But overlay testbeds where
most resources are outside the control of the experiment are completely
unsuited to experiments in highly dynamic, high-speed resource control.

ROUTING is both a controllability problem at scale and an economic problem
of policy conflicts. RESOURCE CONTROL is both a controllability problem at
scale and an economic problem---and we want multiple outcomes to be able to
coexist: 3GPP service-oriented culture interconnected with open ISP culture
and mesh network culture and ... .

To evaluate solutions in the space where the Internet currently has its
main problems, we need a testbed where the suppliers are real, even if the
users are experimenters and the technology is experimental. E.g.:

  * If we want to know whether IMS culture can co-exist with open ISPs, we
    need to test resource control and routing between real organisations
    with these real policies.
  * If we want to test firewall traversal, we need to know how (and
    whether) a real operator or company would deploy and configure our
    idea.

Traditionally, testbeds try to get real users, but the infrastructure is
operated by the experimenters and funded by governments with deep pockets.
For a significant part of future Internet research we need the reverse. The
faulty part of the Internet is the faulty part of our experiments: our
assumptions about operators.
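PS. A taste of the "elegant maths" behind capacity allocation, as one
well-known example (Kelly-style network utility maximisation): maximise
sum_i w_i*log(x_i) subject to sum_i x_i <= C, which on a single link gives
the weighted proportionally fair rates x_i = w_i*C / sum_j w_j. A minimal
sketch in Python, with purely illustrative weights and capacity:

def proportionally_fair(weights, capacity):
    """Weighted proportionally fair allocation of one link's capacity."""
    total_weight = sum(weights)
    return [w * capacity / total_weight for w in weights]

if __name__ == "__main__":
    weights = [1.0, 1.0, 2.0, 4.0]   # e.g. each flow's willingness to pay
    capacity = 100.0                 # link capacity (illustrative units)
    for flow, rate in enumerate(proportionally_fair(weights, capacity)):
        print(f"flow {flow}: rate {rate:.1f}")

The single-link case is the easy bit; the research problem is doing this
across networks of links with highly time-variable resources, and between
parties whose socio-economic interests conflict.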