We adopt an approach where the key used to encrypt sent data is systematically changed for each new unit of application data. The keys are taken from a pseudo-random sequence seeded with a value initially known only to the senders. When a receiver wishes to join, it requires a smartcard certified by a trusted third party. At the end of each receiver's set-up phase, its card is running a key generator seeded with the same value as that of the senders and it contains a policy defining which keys the receiver is entitled to. The smartcard does no decryption; it merely hands out a key whenever a request conforms to the policy. The smartcard can also record a summary of which keys it has given out, which can be used as a non-repudiable 'delivery note' in the case of delivery disputes.
Thus, whenever a receiver is added or removed, there is zero side-effect on other receivers. A special group key change doesn't have to be initiated because systematic changes occur all the time anyway. No keys need sending over the multicast, therefore reliable multicast isn't required. If key managers are delegated to handle requests to set up receiver sessions, the senders can be completely oblivious of any receiver addition or removal. Thus, there is absolutely no coupling back to the senders. For stateless key manager scenarios (e.g. pre-payment with no credit) any amount of key manager replication can be introduced. The key managers just give out session seeds and policies in return for pre-payments. Thus performance is linear with key manager replication and system resilience is independent of key manager resilience.
Thus we focus on a pragmatic scenario where evictions from the multicast group are typically planned at session set-up, but still might occur at arbitrary times. Nonetheless, we do cater for the occasional unplanned eviction, although the scheme doesn't scale if the level of its use becomes high. Our thesis is that there are many applications that only rarely require premature eviction, e.g. pay-TV or pay-per-view. Consequently, our scheme typically requires just one set-up message per receiver session. All further security messaging proceeds between the receiver and its smartcard, which acts as a proxy of the key manager. If the receiver wishes to dispute delivery of certain parts of the stream, another message is required at the end of the session to present the 'delivery note'.
In section 2, we discuss requirements and describe related work on multicast key management, non-repudiable receipting and detection of re-multicast. In section 3 we describe the underlying composition of the systems we have implemented to meet all our requirements. Then, in section 4, we describe the design of specialisations we have implemented to achieve each requirement. Section 5 describes how secure sessions are set up, torn down or modified (e.g. for an unplanned eviction). Section 6 briefly describes our implementation. Finally, limitations of the approach are discussed, followed by conclusions.
We define a 'secure multicast session' as the set of data that a receiver could understand, having passed one access control test. If one key is used for many related multicast groups, they all form one secure session. If a particular receiver leaves a multicast group then re-joins, but could have decrypted the information she missed, the whole transmission is still a single secure session. We envisage very large receiver communities, e.g. ten million viewers for a popular Internet pay-TV channel. Even if just 10% of the audience tuned in or out within a fifteen minute period, this would potentially cause over a thousand secure joins or leaves per second.
We use the term 'application data unit' (ADU) for the minimum useful atom of data from a security or commercial point of view (one second in the above example). ADU size is application dependent. It may be an initialisation frame and its set of associated 'P-frames' in a video sequence, or it may be ten minutes of access to a network game. For performance, an ADU may be only partially encrypted with the remainder in the clear [Kunkel97]. ADU size can vary throughout the duration of a stream depending on the content. ADU size is a primary determinant of system scalability. If a million receivers were to join within fifteen minutes, but the ADU size was also fifteen minutes, this would only require one re-key event.
However, reduction in re-keying requirements isn't the only scalability issue. In the above example, a system that can handle a million requests in fifteen minutes still has to be provided, even if its output is just one re-key request to the senders. With just such scalability problems in mind, many multicast key management architectures introduce a key manager role as a separate concern from the senders. This deals with policy concerns over membership and isolates the senders from much of the messaging traffic needed for access requests.
We now describe the most scalable of the group key management proposals. Ballardie suggested exploiting the same scalability technique used for the underlying multicast tree, by delegating key distribution along the chain of routers in a core based multicast routing tree [IETF_RFC1949]. However, this suffers from a lack of end-to-end security, requiring edge customers to entrust their keys to many intermediate network providers. The Iolus system [Mittra97] sets up a similar distribution hierarchy, but only involving trusted end-systems. However, gateway nodes in the hierarchy decrypt and re-encrypt the stream to isolate sub-group members from key-changes in other sub-groups. This increases latency and set-up complexity and reduces resilience.
An alternative class of approaches involves a single key for the multicast data, but a hierarchy of keys under which to send out a new key over the same multicast channel as the data. These approaches involve a degree of redundant re-keying traffic arriving at every receiver in order for the occasional message to arrive that is decipherable by that receiver. The logical key hierarchy (LKH) [Wallner97] gives each receiver its own key then creates the same number of extra keys, one for each node of a binary tree of keys with each member's key at the leaves. The root of the tree is the group key under which data is encrypted. When a member joins or leaves, all the keys on their branch to the root are replaced in one long message multicast to the whole tree. Perlman has suggested an improvement, termed LKH+, where a one-way function could be used to compute the next key from the existing one [Perlman]. Only the new key would be revealed to the joining member. The one-way function tree (OFT) technique is in the same class of approaches [McGrew98]. Like LKH, all members have their own key, and a binary tree of keys is built over them with the root also being the group key. Because the keys at each intermediate node are a combination of the hashes of the two keys below, rather than being freely generated, Perlman's suggestion cannot be applied. LKH+ is therefore more efficient than OFT in most scenarios.

The standardised approach to pay-TV key management also falls into this class [ITU-R.810]. A set of secondary keys is created and each receiver holds a sub-set of these in tamper-resistant storage. The group key is also unknown outside the tamper-resistant part of the receiver. In case the group key becomes compromised, a new one is regularly generated and broadcast multiple times under different secondary keys to ensure the appropriate receivers can re-key.

All the key hierarchy approaches send new keys over the multicast itself. As 'reliable multicast' is still to some extent a contradiction in terms, either efficiency is reduced through adding redundant messaging or complexity is increased to cope with losses.
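To make the re-keying cost of these tree-based schemes concrete, the sketch below computes which node keys must be replaced when one LKH member leaves: only the nodes on that member's branch to the root, roughly log2(N) of them, rather than one per member. It is purely illustrative; the array-based heap indexing and class names are our assumptions, not taken from any of the cited specifications.

```java
// Illustrative LKH re-key sketch: members sit at the leaves of a complete
// binary tree held in heap order (node i has parent (i-1)/2); the root key
// is the group key under which data is encrypted.
import java.util.ArrayList;
import java.util.List;

public class LkhRekey {
    /** Indices of the tree nodes whose keys change when leaf `leaf` leaves. */
    static List<Integer> keysToReplace(int leaf) {
        List<Integer> path = new ArrayList<>();
        for (int node = leaf; node > 0; node = (node - 1) / 2) {
            path.add((node - 1) / 2);          // the parent key must be replaced
        }
        return path;                           // about log2(N) keys, root last
    }

    public static void main(String[] args) {
        // 8 members occupy leaves 7..14 of a 15-node tree.
        System.out.println(keysToReplace(10)); // -> [4, 1, 0]: only 3 keys change
    }
}
```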
Dillon [Dillon97] falls into the same class of approaches to key management as the current work. Each broadcast document is encrypted using a different key rather than the key only being changed in synchrony with the addition or removal of receiver interest. In the interests of full disclosure, we note that the present work is the subject of a European patent filing [Briscoe97].
The novel aspect of the present work is the ability to prove that individual fragments of an isochronous stream (e.g. video) not only arrived, but arrived in time to be played out, giving suitable perceived quality for a real-time application. Our approach is for the receiving system to request a key to decrypt the stream only if there is sufficient time remaining to decrypt it and still achieve smooth play-out. This is possible because the link between the receiving computer and the smartcard has predictable latency and minimal risk of packet drop, unlike the Internet connection between sender and receiver.
In general, prevention of information copying is considered infeasible; instead most attention focuses on the more tractable problem of copy detection. It is possible to 'watermark' different copies of a copyrighted digital work. If a watermarked copy is later discovered, it can be traced back to its source, thus deterring the holders of original copies from passing on further, illicit copies. Watermarks are typically applied to the least significant bits of a medium to avoid significantly degrading the quality. Such bits are in different locations with different regularity in different media, so there is never likely to be a generic approach [Schneier96]. The most generic scheme discussed to date is Chameleon [Anders97]. In Chameleon, a stream is enciphered by combining a regular stream cipher with a large block of bits. Each receiver is given a long-term copy of the block to decipher the stream. In the concrete example given, four 64-bit words in the 512 kB block are chosen by indexing the block with the output of the regular stream cipher, then all four are XORed with each 64-bit word of the stream. The block given to each receiver is watermarked in a way specific to the medium. For instance, the least significant bit of every 16-bit word of an audio stream might be the only place where a watermark can be stored without degrading the content significantly. Because the block is only used for the XOR operation, the position of any watermarked bits is preserved in the output.
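A minimal sketch of the decryption step just described, with assumed details: four 16-bit indices are taken from one 64-bit word of the regular stream cipher's output (16 bits are enough to address the 65536 64-bit words of the 512 kB block) and the four selected block words are XORed with the corresponding content word. The class and the exact index-selection rule are illustrative, not Chameleon's published specification.

```java
// Chameleon-style combination of a regular stream cipher with a long-term
// key block. A receiver's copy of the block is watermarked in its least
// significant bits, so the watermark survives the XOR.
public class ChameleonSketch {
    private final long[] block;   // 512 kB = 65536 words of 64 bits

    ChameleonSketch(long[] watermarkedBlock) { this.block = watermarkedBlock; }

    /** Deciphers one 64-bit content word using one 64-bit word of keystream. */
    long decryptWord(long cipherWord, long keystreamWord) {
        long p = cipherWord;
        for (int i = 0; i < 4; i++) {
            int index = (int) ((keystreamWord >>> (16 * i)) & 0xFFFF); // 16-bit index
            p ^= block[index];                  // XOR in one selected block word
        }
        return p;
    }
}
```

The sender applies the same XORs using the unwatermarked block, so the low-order perturbations of a particular receiver's block copy re-emerge in that receiver's deciphered output.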
Naor et al. [Naor98] formalise a pragmatic approach to 'traitor tracing' by proposing a parameter that represents the minimum number of group members that need to collude to eliminate a watermark. The elimination criteria are that none of the conspirators are identifiable, and it is assumed that the copyright owner will want to avoid accusing innocent members. For instance, changing at least the square root of the total number of bits that could hold a watermark in the Chameleon scheme would protect against conspiracies of four or fewer members.
Watercasting [Brown99] is a novel, if rather convoluted, way to embed an individual watermark in each receiver's copy of multicast data. Multicast forwarding is modified by including active networking elements at strategic branch points. These elements drop redundant data inserted into the original stream in order to produce a different drop pattern on each forwarded branch. A chain of trusted network providers is required for watercasting, each of which has to be willing to reveal its authenticated tree topology to each sender.
In this paper, for completeness, we report how it is possible to add an audit trail back to the copier of multicast information using watermarking. Our approach is not novel in this respect, simply re-using Chameleon. However, we include it to demonstrate our modular approach to the addition of mechanisms.
So far we have focussed on the scenario where the data is an ordered stream and access is given between some start and some later end point. A more random access approach might be required for non-sequential application name spaces [Fuchs98]. However, with a typical sequence generator we cannot produce the number at position n once we have generated the number at position m, where m > n, unless we either store all numbers in the sequence up to the mth term or regenerate the sequence from the seed. Either option is usually impractical for devices as limited as smartcards. For random access to any point in a sequence, in the longer version of this paper [Briscoed99] we present a fast algorithm to generate any key from a seed as an alternative to keyed hash algorithms.
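For illustration, the sketch below implements the keyed-hash baseline mentioned above (not the fast algorithm of [Briscoed99]): the key for any position n is derived directly from the seed and n, so random access needs no stored history. The HMAC-SHA1 choice and class names are assumptions.

```java
// Random-access key generation: key_n = HMAC(seed, n), computable in O(1)
// for any n, unlike a forward-only chain K_{n+1} = H(K_n).
import java.nio.ByteBuffer;
import java.security.GeneralSecurityException;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class RandomAccessKeyGen {
    private final Mac mac;

    public RandomAccessKeyGen(byte[] seed) throws GeneralSecurityException {
        mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(seed, "HmacSHA1"));
    }

    /** Key for the ADU at position n; no earlier keys need be stored or regenerated. */
    public synchronized byte[] keyFor(long n) {
        return mac.doFinal(ByteBuffer.allocate(8).putLong(n).array());
    }
}
```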
The content provider first divides the video stream into units that potential viewers can use to select what they want to see. The obvious unit here is a 'TV channel' or maybe a 'TV programme'. Each of these units is given an ID termed the session ID. Next, the content provider sets up a video server which has access to the video in a streamable form. As part of the set-up process a seed is generated and a formula chosen; these are used to generate the symmetric encryption keys for encrypting and decrypting the data. The content provider also sets up a key management server to hand out these seeds in return for payment. The sender passes on the programme information, including the seed and formula, to that server. The content provider then advertises the programmes on the channel, perhaps using a web site or email, with the session ID and the key management server together uniquely identifying the channel to the system. When the broadcast time arrives, the video server starts streaming. Each frame of video is given its own ID within the channel and a corresponding key is generated from this ID, the seed and the formula. This key is used to encrypt each frame before it is sent.
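A sketch of this sender-side step under the same assumptions as the key-generation sketch above: a fresh key is derived from the session seed and the frame's ADU ID (standing in for the 'formula') and used to encrypt that frame before it is multicast. The cipher choice and class names are illustrative, not the implementation described in section 6.

```java
// Per-frame encryption at the sender: one fresh key per ADU.
import java.security.GeneralSecurityException;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class SenderSketch {
    private final RandomAccessKeyGen keyGen;   // e.g. the derivation sketched earlier

    public SenderSketch(byte[] seed) throws GeneralSecurityException {
        this.keyGen = new RandomAccessKeyGen(seed);
    }

    /** Encrypts one frame under its own key before it is handed to the multicast layer. */
    public byte[] encryptFrame(long aduId, byte[] frame) throws GeneralSecurityException {
        byte[] key = Arrays.copyOf(keyGen.keyFor(aduId), 16);   // truncate to AES-128
        Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
        // The key is fresh for every ADU, so a fixed IV is tolerable in this sketch.
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
               new IvParameterSpec(new byte[16]));
        return c.doFinal(frame);               // the ADU ID is carried in clear on the wire
    }
}
```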
Now we consider the receiver's side. The user has a computer that is connected to the network and a smartcard reader. They also have a smartcard which contains its own public/private key pair and has been certified by a trusted third party. The private key is unavailable to the user. The user finds a programme they want to watch on a web site and clicks on the "set-up" link for that programme. Following the link downloads a file containing the information needed to join the session, and the browser passes this on to the user's video player software (which has been configured as a browser helper application). The video player passes this information on to a socket factory, the internals of which are outside the scope of this paper - see Flexinet [Hayton98]. The essential point is that a communications stack is built containing a decrypter. When the decrypter is set up it in turn sets up a key generator in the smartcard, which in turn needs a seed and a policy. The decrypter requests these from a key server in return for a payment. They arrive encrypted with the smartcard's public key and are passed to the key generator.
The socket factory then passes a socket reference back to the video application which need not be aware that decryption is taking place beneath it. The video application simply uses this socket to join the multicast. When the TV programme starts, the socket waits until it receives all the data for each frame, then asks the smartcard for the key for that particular frame, decrypts the frame and passes the frame on to the video player application for decompression and display. The smartcard can record the number of keys that were generated per programme and a summary of which keys were passed out.
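The sketch below illustrates the ordering that matters in this step, under assumed interfaces (the SmartcardKeyGenerator interface, the cipher and the timing budget are stand-ins): the play-out deadline test comes before the key request, so a frame that arrives too late never generates a request and therefore never appears in the card's record.

```java
import java.security.GeneralSecurityException;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

/** Hypothetical interface to the on-card key generator. */
interface SmartcardKeyGenerator {
    /** Returns the ADU key if the card's policy allows it, else null; the card logs the request. */
    byte[] requestKey(long aduId) throws GeneralSecurityException;
}

public class ReceiverSketch {
    private final SmartcardKeyGenerator card;
    private final long decryptBudgetMillis;   // worst-case time to decrypt and decode one frame

    public ReceiverSketch(SmartcardKeyGenerator card, long decryptBudgetMillis) {
        this.card = card;
        this.decryptBudgetMillis = decryptBudgetMillis;
    }

    /** Returns the plaintext frame, or null if it arrived too late to be played out. */
    public byte[] handleFrame(long aduId, byte[] cipherFrame, long playoutDeadlineMillis)
            throws GeneralSecurityException {
        if (System.currentTimeMillis() + decryptBudgetMillis > playoutDeadlineMillis) {
            return null;                       // too late for smooth play-out: never ask for the key
        }
        byte[] derived = card.requestKey(aduId);   // the card records that this key was released
        if (derived == null) return null;          // policy refused the key
        Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
        c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(Arrays.copyOf(derived, 16), "AES"),
               new IvParameterSpec(new byte[16]));   // mirrors the sender sketch
        return c.doFinal(cipherFrame);
    }
}
```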
After the programme finishes, there is no need to do anything further unless reception was poor or incomplete. The receiver can ask the smartcard to produce a 'delivery note' for the partially received programme, which the smartcard signs with its private key. This can be forwarded to the payment server to prove the right to a refund.
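As a rough illustration of this final step, the sketch below signs a textual summary of the keys handed out with the card's private key; the summary encoding and the SHA1withRSA choice are assumptions, not the card's actual on-card format.

```java
// Producing a signed 'delivery note' from a summary of the keys the card released.
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.PrivateKey;
import java.security.Signature;

public class DeliveryNote {
    /** Signs e.g. "session=42;lastAdu=1499;missing=17,18,251" with the card's private key. */
    static byte[] sign(PrivateKey cardKey, String summary) throws GeneralSecurityException {
        Signature s = Signature.getInstance("SHA1withRSA");
        s.initSign(cardKey);
        s.update(summary.getBytes(StandardCharsets.UTF_8));
        return s.sign();                       // forwarded to the payment server as evidence
    }
}
```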
The process of sending data is as follows:
The process for receiving data is as follows (note that smartcard security is only added when we discuss the variations later):
In this case we add a key limiter that limits the production of keys, e.g. to a certain number of keys or to a particular range of ADU IDs. The key limiter and the key generator are placed within the tamper-resistant processor. In Figure 3, a key is returned by the key generator only if the ADU ID passes the key limiter's test. The limiter will usually also be required to restrict its output to one response per key. This protects against the same card being shared around multiple receivers as a relatively convenient way to decrypt the same data multiple times rather than passing all the keys around. Enforcing one response per key requires internal receipting capabilities similar to those described in the next section.
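As a concrete illustration of such a policy test, the following sketch (with hypothetical field names rather than the card's actual interface) enforces an entitled range of ADU IDs, a maximum number of keys and one response per key.

```java
// Key limiter policy check: the key generator releases a key only if permit() is true.
import java.util.HashSet;
import java.util.Set;

public class KeyLimiter {
    private final long firstAdu, lastAdu;      // entitled range of ADU IDs
    private final int maxKeys;                 // e.g. what the user has paid for
    private final Set<Long> issued = new HashSet<>();

    public KeyLimiter(long firstAdu, long lastAdu, int maxKeys) {
        this.firstAdu = firstAdu; this.lastAdu = lastAdu; this.maxKeys = maxKeys;
    }

    /** True if the key generator may release the key for this ADU ID. */
    public synchronized boolean permit(long aduId) {
        if (aduId < firstAdu || aduId > lastAdu) return false;   // outside the policy range
        if (issued.contains(aduId)) return false;                // one response per key
        if (issued.size() >= maxKeys) return false;              // quota exhausted
        issued.add(aduId);                                       // doubles as a receipt log
        return true;
    }
}
```

The set of issued IDs kept for the one-response-per-key rule is exactly the kind of internal receipting described in the next section.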
In this case we can create receipts for every ADU decrypted by intercepting every return of a key and recording the ADU ID in a file. For ordered streams, the receipting storage format only needs to be a simple index to the last key given out, plus a list of any exceptions. If random access to any ADU is envisaged, a block of bits, one for each ADU, would be required to record which keys had already been given out. More efficient tree-based variants are possible to reduce storage requirements in most realistic scenarios.
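A minimal sketch of the ordered-stream format suggested above, assuming keys are requested in increasing ADU ID order; field names are illustrative.

```java
// Compact receipting for ordered streams: the highest ADU ID released so far
// plus a list of exceptions (ADU IDs whose keys were never requested).
import java.util.ArrayList;
import java.util.List;

public class OrderedReceipt {
    private long lastIssued = -1;              // highest ADU ID released so far
    private final List<Long> missing = new ArrayList<>();

    /** Records that the key for `aduId` was released (ADU IDs arrive in increasing order). */
    public void record(long aduId) {
        for (long skipped = lastIssued + 1; skipped < aduId; skipped++) {
            missing.add(skipped);              // keys never requested for these ADUs
        }
        lastIssued = Math.max(lastIssued, aduId);
    }

    /** The summary the card signs into the 'delivery note'. */
    public String summary() {
        return "lastAdu=" + lastIssued + ";missing=" + missing;
    }
}
```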
If different types of ADUs in a stream require different treatment with respect to security it is simplest to create a separate secure session for them. For instance, high quality transmission costs for adverts might be refunded only if a delivery note is returned to prove they were at least decrypted if not watched (e.g. a hash of the decrypted frames might be required). These would form a sub-session with a different policy in the smartcard.
Figure 5 Off-card Watermarking
On-card watermarking is only feasible with a fairly powerful tamper-resistant cryptographic co-processor. It is impractical with smartcards due to processing and memory limitations. Off-card watermarking requires only modest card resources. An approach such as Chameleon [Anders97], as described earlier, is preferred as long as there is sufficient memory on the receiver to hold the whole watermarked key block (about 512 kB in the concrete example). The following steps for off-card watermarking assume the sender encrypter unit produces its stream cipher by combining a standard cipher with an unwatermarked version of the long-term key-block, as in Chameleon.
In the following sections this notation is used:
This describes a simple scenario where a single sequence generator can create an unlimited sequence of numbers and a single type of delivery note. More realistically, the session information would include:
Sent in plain:
Another aspect of session set-up is session amendment. The user may pay to receive a certain amount of data and then later on pay for some more. This would ideally be handled by updating the session information (probably just increasing the maximum number of keys to be generated) while the session is active.
This is a probabilistic approach. Every ADU sent contains an encrypted control message and a secure space ID, which must be passed into the secure space along with the key ID to obtain the key. If the secure space ID(s) contained in the encrypted block refer to this particular space, then the secure space checks the flags. If the stop flag is set, the card 'commits suicide': no more keys are passed out. If the contact sender flag is set, the secure space does a remote procedure call to the sender (or the sender's representative) and will not give out more keys until it has a new key generation policy. Alternatively, more general control messages might take the place of these two flags.
If several users need to be evicted from the session, their secure space IDs are rotated through different packets.
ADU format:
If the length of the control message and the number of secure space IDs are variable, an unencrypted field is needed before the flags, stating the total length of the control message and secure space IDs.
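The sketch below shows one possible arrangement of the fields just described and the card-side flag handling; all field names, widths and orderings are assumptions rather than the format used in our implementation.

```java
// Illustrative ADU layout and 'poison pill' handling inside the secure space.
import java.util.List;

public class PoisonPillSketch {
    /** One ADU as seen by the receiver; layout only. The secure space decrypts
     *  encryptedControl to recover the flags and the targeted secure space IDs. */
    static class Adu {
        long aduId;                    // key ID the receiver will request
        int controlLength;             // unencrypted: total length of the control block
        byte[] encryptedControl;       // flags + targeted secure space IDs
        byte[] payload;                // the (partially) encrypted content
    }

    static final int STOP_FLAG = 0x1;           // 'poison pill': stop issuing keys
    static final int CONTACT_SENDER_FLAG = 0x2; // fetch a new key generation policy

    private final long mySpaceId;
    private boolean dead = false;

    PoisonPillSketch(long mySpaceId) { this.mySpaceId = mySpaceId; }

    /** Returns true if the key for this ADU may be released, given the decrypted control block. */
    boolean check(int flags, List<Long> targetedSpaceIds) {
        if (dead) return false;
        if (!targetedSpaceIds.contains(mySpaceId)) return true;      // message not aimed at this card
        if ((flags & STOP_FLAG) != 0) { dead = true; return false; } // card 'commits suicide'
        if ((flags & CONTACT_SENDER_FLAG) != 0) {
            // Would call back to the sender's representative for a new policy
            // before any further keys are released (omitted in this sketch).
            return false;
        }
        return true;
    }
}
```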
The non-repudiation aspect of this work is only useful in a commercial model where there is an incentive for the receiver to volunteer the delivery note to the information provider. Such scenarios are easy to imagine, but this means the capability is not universally useful. For instance, it would not be possible to give away the stream of information and then ask each receiver to volunteer their delivery note to calculate how much they should pay. The beneficial corollary is that it is difficult to get the smartcard to give out thousands of keys off-line in order to break the seed. The smartcard won't give out any keys if it doesn't have a key limiter policy, and if it does have one, it will only give out keys the user has paid for.
This paper contains no formal security analysis of the strength of the schemes employed. A number of questions are left unanswered, such as whether the seed of a pseudo-random sequence becomes easier to predict as more values from the sequence are revealed.
We must also admit to the standard limitations that apply to most other work on copyright protection. A watermark-based audit trail is only robust against small numbers of colluders, and it only helps detection, not prevention. Also, traitor tracing relies on finding the watermarked data in the first place, a problem that this paper and others on the subject invariably leave unresolved.
Regarding further work, we claimed in the abstract that this approach could be applied to other means of bulk data distribution than multicast, such as DVD (digital video/versatile disk). We envisage a scenario where data on the DVD would be encrypted with a stream cipher such that it would be indistinguishable from a multicast stream once it was read from the disk. As long as the initial set up with the smartcard had occurred on-line, the rest of the DVD could be played off-line, only requiring interaction with the smartcard, not the network. Any final 'delivery note' of exactly what had been accessed would then be available to present to the provider of the DVD. In a similar vein, policies and seeds to load into the smart card could be supplied on various media other than over the Internet. All these scenarios and more are introduced in [Briscoe97], but we have done no specific design or implementation work on them.
All this loose coupling is made possible by a simple technique where multicast senders systematically change the group encryption key rather than only changing it whenever there is a change to the group membership. This innovation is driven by the insight that there will always be a minimum application data unit (ADU) granularity, within which there is no commercial advantage to changing the group key. The traditional approach has been to group together membership change events within the timespan of an ADU and then drive key changes depending on whether any events have occurred within each timeslot. Instead, by systematically changing group keys whether or not it is necessary, the whole system can rely on the key changes and not require tight coupling back to the senders. A further advantage of this approach is that there is no need to send control messages over the multicast channel itself. Thus no reliable multicast mechanism is assumed or required and no complexity is involved when messages are dropped. The only exception is the rare need to send a 'poison pill', which merely requires statistical delivery.
In order to distribute the load of key management further, we require each receiver to operate a smartcard, into which the information provider can install a key generator capable of mirroring the systematic key generation of the senders. We prefer generic smartcards certified by trusted third parties, so that any key generator can be installed at session set-up. This mitigates the barrier created by the need for each receiver to obtain a card, as it can be re-used for multiple services. A policy is installed into the smartcard at session set-up to control which keys it will give out to its receiver. The details depend on the specifics of the wider application.
Further, we have shown by implementation that it is even possible to prove timely reception of real-time data units using this arrangement. The smartcard records which keys it has been asked for, and if a packet arrives late, the receiver simply refrains from asking for the key. Thus, the smartcard generates a delivery note that can later be used by the receiver to prove that only a certain number of data units were usefully decrypted.
We have also described how it would be possible to combine the above approach with a key watermarking scheme such as 'Chameleon'. This provides a small but significant deterrent against a receiver giving away or re-selling either the keys or the decrypted data, because both are watermarked in such a way as to trace that receiver.
We believe these mechanisms (combined with sender authentication approaches described elsewhere) provide a soundly engineered basis for a number of very large scale commercial applications built over Internet multicast. We have also briefly described how the same techniques could usefully be applied to other bulk data distribution mechanisms, such as DVDs. The techniques also have application where information needs protecting for security reasons rather than for its commercial value.
[Anders97] Ross Anderson & Charalampos Manifavas (Cambridge Uni), "Chameleon - A New Kind of Stream Cipher" Encryption in Haifa (Jan 1997), <URL:http://www.cl.cam.ac.uk/ftp/users/rja14/chameleon.ps.gz>
[Bagnall99] Pete Bagnall, Bob Briscoe & Alan Poppitt, (BT), "Taxonomy of Communication Requirements for Large-scale Multicast Applications", Internet Draft (work in progress), Internet Engineering Task Force (17 May 1999) <draft-ietf-lsma-requirements-03.txt>
[Balen99] D. Balenson, D. McGrew & A. Sherman (TIS Labs at Network Associates), "Key Management for Large Dynamic Groups: One-Way Function Trees and Amortized Initialization", February 26, 1999, <draft-balenson-groupkeymgmt-oft-00.txt>
[Ballard95] Tony Ballardie and Jon Crowcroft (UCL), "Multicast-specific threats and counter-measures," Proceedings of the Internet Society 1995 Symposium on Network and Distributed System Security 16-17 Feb 1995, San Diego, California, IEEE Computer Society (1995), 2-14. <URL:ftp://cs.ucl.ac.uk/darpa/IDMR/mcast-sec-isoc.ps.Z>
[Boxall98] Sarah Boxall (Cambridge Uni), Report on industrial placement (Sep 1998).
[Briscoe97] Bob Briscoe and Ian Fairman (BT), "Multicast Key Management", European patent publication no. EP98304429.8 (Mar 97).
[Briscoed99] Bob Briscoe and Ian Fairman (BT), "Nark: Receiver-based Multicast Key Management and Non-repudiation", BT Technical Report (Jun 1999), <URL:http://www.labs.bt.com/projects/mware/>
[Brown99] Ian Brown, Colin Perkins & Jon Crowcroft (UCL), "Watercasting: Distributed Watermarking of Multicast Media", submitted to Globecom '99, Rio de Janeiro, (December 1999), <URL:ftp://cs.ucl.ac.uk/darpa/watercast.ps.gz>
[Canetti98] Canetti, R. & B. Pinkas, "A taxonomy of multicast security issues," Internet Draft (work in progress), Internet Engineering Task Force (May 1998), <URL:draft-canetti-secure-multicast-taxonomy-00.txt>
[Cryptix] Cryptix library, <URL:http://www.cryptix.org/>
[Deering91] S. Deering, "Multicast Routing in a Datagram Network," PhD thesis, Dept. of Computer Science, Stanford University (1991).
[Dillon97] Douglas M Dillon (Hughes), "Deferred Billing, Broadcast, Electronic Document Distribution System and Method", International patent publication no. WO 97/26611 (24 July 1997).
[Fuchs98] M. Fuchs, C. Diot, T. Turletti, M. Hoffman, "A Naming Approach for ALF Design", in proceedings of HIPPARCH workshop, London, (June 1998) <URL:ftp://ftp.sprintlabs.com/diot/naming-hipparch.ps.gz>
[Hayton98] Richard Hayton, Andrew Herbert & Douglas Donaldson, (APM), "FlexiNet - A flexible component oriented middleware system", in proceedings of SIGOPS'98 (1998), <URL:http://www.ansa.co.uk/>
[Herzog95] Shai Herzog (IBM), Scott Shenker (Xerox PARC), Deborah Estrin (USC/ISI), "Sharing the cost of Multicast Trees: An Axiomatic Analysis", in Proceedings of ACM/SIGCOMM '95, Cambridge, MA, Aug. 1995, <URL:http://www.research.ibm.com/people/h/herzog/sigton.html>
[Gleick87] James Gleick, "Chaos", 1987, ISBN: 0749386061
[IETF_RFC1949] Tony Ballardie, "Scalable multicast key distribution", Request for Comments (RFC) 1949, Internet Engineering Task Force (May 1996) <URL:rfc1949.txt>
[ITU-R.810] ITU-R Rec. 810, "Conditional-Access Broadcasting Systems", (1992) <URL:http://www.itu.int/itudocs/itu-r/rec/bt/810.pdf>
[Kunkel97] Thomas Kunkelmann, Rolf Reinema & Ralf Steinmetz (Darmstadt Tech Uni), "Evaluation of Different Video Encryption Methods for a Secure Multimedia Conferencing Gateway", 4th COST 237 Workshop, Lisboa, Portugal, Springer Verlag LNCS 1356, ISBN 3-540-63935-7 (Dec 1997), <URL:http://www.ito.tu-darmstadt.de/publs/cost97.ps.gz>
[McGrew98] McGrew, David A., & Alan T. Sherman, "Key establishment in large dynamic groups using one-way function trees," TIS Report No. 0755, TIS Labs at Network Associates, Inc., Glenwood, MD (May 1998). 13 pages.
[Mittra97] Suvo Mittra, "Iolus: A framework for scalable secure multicasting," Proceedings of the ACM SIGCOMM '97, 14-18 Sep 1997 Cannes, France.
[Naor98] Moni Naor & Benny Pinkas (Weizmann Inst of Sci, Rehovot), "Threshold Traitor Tracing", CRYPTO '98. <URL:http://www.wisdom.weizmann.ac.il/~bennyp/PAPERS/ttt.ps>
[Perlman] Radia Perlman, observation concerning LKH [McGrew98] from the conference floor - termed "LKH+"
[Schneier96] Schneier B, "Applied cryptography", 2nd Edition, John Wiley & Sons (1996).
[Shoup96] V. Shoup and A.D. Rubin, "Session Key Distribution Using Smart Cards", in Proc. of Eurocrypt'96 (1996), <URL:http://www.cs.nyu.edu/cgi-bin/cgiwrap/~rubin/keydist.ps>
[Wallner97] Wallner, Debby M., Eric J. Harder, and Ryan C. Agee, "Key management for multicast: Issues and architectures," Internet Draft (work in progress), Internet Engineering Task Force (15 Sep 1998) <draft-wallner-key-arch-01.txt>