The eXpressive Internet Architecture: Architecture and Research Overview

 

Funded by the NSF under awards

CNS-1040801, CNS-1040757, CNS-1040800,

CNS-1345305, CNS-1345307, CNS-1345284, and CNS-1345249

 

 

 

 

 


 

 

 

 

1   XIA Architecture Overview

 

The goal of the XIA project is to radically improve both the evolvability and trustworthiness of the Internet. To meet this goal, the eXpressive Internet Architecture defines a single network that offers inherent support for communication between multiple communicating principals, including hosts, content, and services, while accommodating unknown future entities. Our design of XIA is based on a narrow waist, like today's Internet, but this narrow waist can evolve to accommodate new application usage models and to incorporate improved link, storage, and computing technologies as they emerge. XIA is further designed around the core guiding principle of intrinsic security, in which the integrity and authenticity of communication is guaranteed. We have also defined an initial inter-domain control plane architecture that supports routing protocols for several principal types. Finally, the XIA core architecture includes the design of Scion, a path selection protocol that offers significant security benefits over traditional destination-based forwarding approaches.

 

 

1.1   Vision

 

The vision we outlined for the future Internet in the proposal is that of a single internetwork that, in contrast to today's Internet, will:

Be trustworthy: Security, broadly defined, is the most significant challenge facing the Internet today.

Support long-term evolution of usage models: The primary use of the original Internet was host-based communication. With the introduction of the Web, communication over the past nearly two decades has shifted to be dominated by content retrieval. Future shifts in use may cause an entirely new dominant mode to emerge. The future Internet should not only support communication between today's popular entities (hosts and content), but it must be flexible, so it can be extended to support new entities as use of the Internet evolves.

Support long-term technology evolution: Advances in link technologies as well as storage and compute capabilities at both end-points and network devices have been dramatic. The network architecture must continue to allow easy and efficient integration of new link technologies and evolution in functionality on all end-point and network devices in response to technology improvements and economic realities.

Support explicit interfaces between network actors: The Internet encompasses a diverse set of actors playing different roles and also serving as stakeholders with different goals and incentives. The architecture must support well-defined interfaces that allow these actors to function effectively. This is true both for the interface between users (applications) and the network, and for the interfaces between the providers that will offer services via the future Internet.


 

This vision was the driver for both the original and the current XIA project.

 

 

1.2   Key Principles driving XIA

 

In order to realize the above vision, the XIA architecture follows three driving principles:

 

1. The architecture must support an evolvable  set of first-order principals for communication, exposing the respective network elements that are intended to be bridged by the communication, be they hosts, services, content, users, or something as-yet unexpected.

Performing operations between the appropriate principal types creates opportunities to use the communication techniques that are most appropriate, given the available technology, network scale, destination popularity, etc. Such "late binding" exposes intent and can dramatically reduce overhead and complexity compared with requiring all communication to operate at a lower level (e.g., projecting all desired communications onto host-to-host communication, as in today's Internet), or trying to "shoe-horn" all communication into a higher level (e.g., using HTTP as the new narrow waist of the Internet [96]).

A set of evolvable principal types is, however, not sufficient to support evolution of the network architecture. Experience so far shows that the concept of fallbacks is equally important. Allowing addresses to contain multiple identifiers supports not only incremental deployment of principal types, but also selective deployment by providers and network customization.

 

2. The security of “the network”,  broadly defined, should be as intrinsic as possible—that is, not dependent upon the correctness of external configurations, actions, or databases.

To realize this principle, we propose to extend the system of self-certification proposed in the Accountable Internet Protocol (AIP) [11]. In AIP, hosts are "named" by their public key. As a result, once a correspondent knows the name of the host it wishes to contact, all further cryptographic security can be handled automatically, without external support.

We call the generalization of this concept "intrinsic security". Intuitively, this refers to a key integrity property that is built into protocol interfaces: any malicious perturbation of an intrinsically secure protocol must yield a result that is clearly identifiable as ill-formed or incorrect, under standard cryptographic assumptions. We extend the application and management of self-certifying identifiers into a global framework for integrity such that both control plane messages and content on the data plane of the network can be efficiently and certifiably bound to a well-defined originating principal. Intrinsic security properties can also be used to bootstrap systematic mechanisms for trust management, leading to a more secure Internet in which trust relationships are exposed to users.
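
To make the intrinsic-security idea concrete, the sketch below shows how a self-certifying identifier can be derived and checked; it assumes SHA-1 over a serialized public key, in the spirit of AIP, and is illustrative rather than the exact XIA identifier format.

    import hashlib

    def make_self_certifying_id(public_key_bytes: bytes) -> bytes:
        """Derive an identifier (e.g., an HID or NID) by hashing the entity's
        public key; the identifier is unforgeably bound to the key."""
        return hashlib.sha1(public_key_bytes).digest()   # 160-bit identifier

    def verify_binding(claimed_id: bytes, presented_public_key: bytes) -> bool:
        """Anyone can check, without an external database, that a presented key
        matches the identifier carried in an address."""
        return hashlib.sha1(presented_public_key).digest() == claimed_id

    # Usage: a correspondent that knows HID_a can verify the key and then
    # challenge the host to prove possession of the matching private key
    # (the signature check itself is not shown here).
    host_pubkey = b"...DER-encoded public key bytes..."   # hypothetical placeholder
    hid_a = make_self_certifying_id(host_pubkey)
    assert verify_binding(hid_a, host_pubkey)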

 

3. A pervasive narrow waist for all key functions, including access to principals (e.g., services, hosts, content), interaction among stakeholders (e.g., users, ISPs, content providers), and trust management.

The current Internet has benefited significantly from having a "narrow waist" that identifies the minimal functionality a device needs to be Internet-capable. It is limited to host-based communication, and we propose to widen its scope while retaining the elegance of "narrowness". The narrow waist effectively is an interface that governs the interactions between the stakeholders, so it defines what operations are supported, what controls are available, and what information can be accessed. It plays a key role in the tussle [27] of control between actors by creating explicit control points for negotiation. The architecture must incorporate a narrow waist for each principal type it supports. The narrow waist identifies the API for communication between the principal types, and for the control protocols that support the communication. The architecture must also define a narrow waist that enables a flexible set of mechanisms for trust management, allowing applications and protocols to bridge from a human-readable description to a machine-readable, intrinsically secure identifier.

Our experience so far shows that precise, minimal interfaces are not only important as control points between actors, but also play a key role in the evolvability of the network at large. In the same way that a hard-to-evolve data plane (the format and semantics of the network header) can stifle innovation, so can interfaces in the control plane (e.g., BGP for routing) or for resource management (e.g., AIMD-based congestion control). Well-defined interfaces are critical to evolvability at all levels in the network.

Figure 1: The eXpressive Internet Architecture

 

 

1.3   XIA Data Plane Architecture

 

Building on the above principles, we introduce the "eXpressive Internet Architecture" (or XIA). The core of XIA is the eXpressive Internet Protocol (XIP), which supports communication between various types of principals. It is shown in red in the bottom center of Figure 1. XIP supports various communication and network services (including transport protocols, mobility services, etc.), which in turn support applications on end-hosts. These components are shown in blue in the center of the figure. This protocol stack forms a "bare-bones" network. Additional elements are needed to make the network secure, usable, and economically viable.

First, we need trustworthy network operations (shown on the right in green), i.e., a set of control and management protocols that deliver trustworthy network service to users. We will describe SCION, a path selection protocol developed as part of XIA, as an example later in this section. Other examples include trust management and protocols for billing, monitoring, and diagnostics. Second, we need interfaces so various actors can productively participate in the Internet. For example, users need visibility into (and some level of control over) the operation of the network. We also need interfaces between network providers. The design of these network-network interfaces must consider technical requirements, economic incentives, and the definition of policies that reward efficient and effective operation.

XIP was defined in the FIA-funded XIA project. We include an overview of its design in this section because it is the core of the architecture and forms the basis for the research activities reported in this report. We also give an overview of SCION, a control protocol for path selection, in the next section for the same reason.

 

 

1.3.1  Principals and Intrinsic Security

 

The idea is that XIA supports communication between entities of different types, allowing communicating parties to express the intent of their communication operation. To support evolution in the network, e.g., to adapt to changing usage models, the set of principal types can evolve over time. At any given point in time, support for certain principal types, e.g., autonomous domains, will be mandatory to ensure interoperability, while support for other principal types will be optional. Support for an optional principal type in a specific network will depend on business and other considerations.

IP is notoriously hard to secure, as network security was not a first-order  consideration in its design. XIA aims to build security into the core architecture as much as possible, without  impairing  expressiveness. In particular,  principals  used in XIA source and destination  addresses must be intrinsically secure. We define intrinsic security  as the capability to verify type-specific security properties without relying on external information. XIDs are therefore  typically cryptographically  derived from the associated communicating entities in a principal  type-specific  fashion, allowing communicating entities to ascertain certain security and integrity properties of their communication operations [9].

The specification of a principal  type must define:

 

1. The semantics of communicating with a principal  of that type.

 

2. A unique XID type, a method for allocating XIDs and a definition  of the intrinsic security properties of any communication involving the type. These intrinsically secure XIDs should be globally unique, even if, for scalability,  they are reached using hierarchical  means, and they should be generated in a distributed and collision-resistant way.

 

3. Any principal-specific  per-hop processing and routing of packets that must either be coordinated or kept consistent in a distributed  fashion.

 

These three features together define the principal-specific support for a new principal type. The following paragraphs describe the administrative domain, host, service, and content principals in terms of these features.
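
Viewed from an implementation angle, these three requirements suggest a plug-in interface that every principal type fills in. The sketch below (Python, hypothetical names) is one way to picture it; the actual prototype structures its principal-specific code differently inside Click.

    from abc import ABC, abstractmethod

    class PrincipalType(ABC):
        """Hypothetical interface capturing the three elements a principal-type
        specification must define."""

        xid_type: str    # unique XID type tag carried in packet addresses

        @abstractmethod
        def allocate_xid(self, entity) -> bytes:
            """Create an intrinsically secure XID for an entity of this type,
            e.g., a hash of a public key or of a content chunk."""

        @abstractmethod
        def verify(self, xid: bytes, evidence) -> bool:
            """Check the type-specific intrinsic security property."""

        @abstractmethod
        def forward(self, router_state, packet, xid):
            """Principal-specific per-hop processing: return an output port,
            replicate or divert the packet, or signal 'cannot handle'."""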

Network and host principals  (NIDs and HIDs) represent autonomous routing  domains and hosts that attach to the network. NIDs provide hierarchy or scoping for other principals, that is, they primarily provide control over routing.  Hosts have a single identifier  that is constant regardless of the interface used or network that a host is attached to. NIDs and HIDs are self-certifying:  they are generated by hashing the public key of an autonomous domain or a host, unforgeably binding  the key to the address. The format of NIDs and HIDs and their intrinsic security properties are similar to those of the network  and host identifiers  used in AIP [11].

Services represent an application service running on one or more hosts within the network. Examples range from an SSH daemon running on a host, to a Web server, to Akamai's global content distribution service, to Google's search service. Each service will use its own application protocol, such as HTTP, for its interactions. An SID is the hash of the public key of a service. To interact with a service, an application transmits packets with the SID of the service as the destination address. Any entity communicating with an SID can verify that the service has the private key associated with the SID. This allows the communicating entity to verify the destination and bootstrap further encryption or authentication.

In today's Internet, the true endpoints of communication are typically application processes: other than, e.g., ICMP messages, very few packets are sent to an IP destination without specifying application port numbers at a higher layer. In XIA, this notion of processes as the true destination can be made explicit by specifying an SID associated with the application process (e.g., a socket) as the intent. An NID-HID pair can be used as the "legacy path" to ensure global reachability, in which case the NID forwards the packet to the host, and the host "forwards" it to the appropriate process (SID). In [51], we show an example of how making the true process-level destination explicit facilitates transparent process migration, which is difficult in today's IP networks because the true destination is hidden as state in the receiving end-host.

Lastly, the content principal allows applications to express their intent to retrieve content without regard to its location. Sending a request packet to a CID initiates retrieval of the content from a host, an in-network content cache, or another future source. CIDs are the cryptographic hash (e.g., SHA-1, RIPEMD-160) of the associated content. The self-certifying nature of this identifier allows any network element to verify that the content retrieved matches its content identifier.
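
For example, any cache, router, or client that holds a chunk can check it locally against the CID in the request with a single hash computation (a sketch, using SHA-1 as mentioned above):

    import hashlib

    def cid_of(chunk: bytes) -> bytes:
        """A CID is simply the cryptographic hash of the content."""
        return hashlib.sha1(chunk).digest()

    def validate_chunk(requested_cid: bytes, chunk: bytes) -> bool:
        # Any network element can run this check without contacting the publisher.
        return cid_of(chunk) == requested_cid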

 

 

1.3.2  Addressing Requirements

 

Three key innovative features of the XIA architecture are support for multiple principal types that allow users to express their intent, support for evolution, and the use of intrinsically secure identifiers. These features all depend heavily on the format of addresses that are present in XIA packets, making the XIA addressing format one of the key architectural decisions we have to make. While this may seem like a narrow decision, the choice of packet addressing and the associated packet forwarding mechanisms has significant implications for how the network can evolve, the flexibility given to end hosts, the control mechanisms provided to infrastructure providers, and the scalability of routing/forwarding. We elaborate on the implications of each of these requirements before presenting our design.

The scalability  requirements for XIA are daunting. The number of possible end-points in a network  may be enormous - the network contains millions of end-hosts and services, and billions of content chunks. Since XIA end-points are specified using flat identifiers,  this raises the issue of forwarding  table size that routers must maintain.  To make forwarding  table size more scalable, we want to provide support for scoping by allowing  packets to contain a destination network identifier, in addition to the actual destination identifier.

Evolution of the network requires support for incremental deployment, i.e. a particular principal  type and its associated destination identifiers  may only be recognized by some parts of the network.  Packets that use such a partially  supported principal  type may encounter a router that lacks support for such identifiers. Simply dropping the packet will hurt application performance (e.g. timeouts) or break connectivity, which means that applications will not use principal  types that are not widely supported. This, in turn, makes it hard to justify adding support in routers for unpopular types. To break out of this cycle, the architecture must provide some way to handle these packets correctly  and reasonably efficiently,  even when a router does not support the destination type. XIA allows packets to carry a fallback destination, i.e. an alternative address, a set of addresses or a path, that routers can use when the intended destination type is not supported.

In the Internet architecture, packets are normally processed independently by each router based solely on the destination address. "Upstream" routers and the source have little control over the path or processing that is done as the packet traverses the network. However, various past architectures have given additional control to these nodes, changing the balance between control by the end-point and control by the infrastructure. For example, IP includes support for source routing, and other systems have proposed techniques such as delegation. The flexibility provided by these mechanisms can be beneficial in some contexts, but they raise the challenge of how to ensure that upstream nodes do not abuse these controls to force downstream nodes to consume resources or violate their normal policies. In XIA, we allow packets to carry more explicit path information than just a destination, thus giving the source and upstream routers the control needed to achieve the above tasks.

The above lays out the requirements for addressing in packets, and indirectly  specifies requirements for the forwarding  semantics. We now present our design of an effective and compact addressing format that addresses the evolution  and control  requirements.   Next, we discuss the forwarding  semantics and router processing steps for XIA packets.

 

 

1.3.3  XIA Addressing

 

XIA's addressing scheme is a direct realization of the above requirements. To implement fallback and scoping, XIA uses a restricted directed acyclic graph (DAG) representation of XIDs to specify XIP addresses [51]. A packet contains both the destination DAG and the source DAG to which a reply can be sent.


 

Because of symmetry, we describe only the destination address.

Three basic building blocks are: intent, fallback, and scoping. XIP addresses must have a single intent, which can be of any XID type. The simplest XIP address has only a “dummy” source (•) and the intent (I) as a sink:

    •  →  I

The dummy source (•) appears in all visualizations of XIP addresses to represent the conceptual source of the packet.

A fallback is represented using an additional XID (F) and a “fallback” edge (dotted in the original figures, shown here as ⇢):

    •  →  I
    •  ⇢  F  →  I

The fallback edge can be taken if a direct route to the intent is unavailable. While each node can have multiple outgoing fallback edges, we allow up to four fallbacks to balance between flexibility and efficiency.

Scoping of intent is represented as:

    •  →  S  →  I

This structure means that the packet must first be routed to a scoping XID S, even if the intent is directly routable.

These building  blocks are combined to form more generic DAG addresses that deliver  rich semantics, implementing the high-level requirements in Section 1.3.2. To forward  a packet, routers traverse edges in the address in order and forward  using the next routable XID. Detailed behavior of packet processing is specified in Section 1.3.4.

To illustrate how DAGs provide flexibility, we present three (non-exhaustive) "styles" of how they might be used to achieve important architectural goals.

Supporting evolution: The destination address encodes a service XID as the intent, and an autonomous domain and a host are provided as a fallback path, in case routers do not understand the new principal type.

    •  →  SID1
    •  ⇢  NID1  →  HID1  →  SID1

This scheme provides both fallback and scalable routing. A router outside of NID1 that does not know how to route based on the intent SID1 directly will instead route to NID1. (A code sketch of this DAG appears at the end of this subsection.)

Iterative refinement: In this example, every node includes a direct edge to the intent, with fallback to domain- and host-based routing. This allows iterative, incremental refinement of the intent. If CID1 is unknown, the packet is first forwarded to NID1. If NID1 cannot route to the CID, it forwards the packet to HID1.

    •     →  CID1,   fallback  •     ⇢  NID1
    NID1  →  CID1,   fallback  NID1  ⇢  HID1
    HID1  →  CID1

An example of the flexibility afforded by this addressing is that an on-path content-caching router could directly reply to a CID query without forwarding the query to the content source. We term this on-path interception. Moreover, if technology advances to the point that content IDs become globally routable, the network and applications could benefit directly, without any changes to the applications themselves.

Service binding and more: DAGs also enable application control in various contexts. In the case of legacy HTTP, while the initial packet may go to any host handling the web service, subsequent packets of the same “session” (e.g., HTTP keep-alive) must go to the same host. In XIA, we do so by having the initial packet destined for NID1 → SID1. A router inside NID1 routes the request to a host that provides SID1. The service replies with a source address bound to the host, NID1 → HID1 → SID1, to which subsequent packets can be sent.

Figure 2: Packet Forwarding in an XIP router

More usage models for DAGs are presented in [10]. It is important to view DAGs, and not individual XIDs, as the addresses that are used to reach a destination. As such, DAGs are logically equivalent to IP addresses in today's Internet. This means that creating and advertising the address DAG for a particular destination, e.g., a service or content provider, is the responsibility of that destination. This is important because the destination has a strong incentive to create a DAG that will make it accessible in a robust and efficient way. Address lookup can use DNS or other means.
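
Returning to the "supporting evolution" style above, the sketch below shows one way a destination could construct such a DAG in code: each node lists its outgoing edges in priority order, and the single sink is the intent. The node and address types are illustrative only; the XIP wire format encodes the same information differently.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        xid_type: str                       # e.g., "NID", "HID", "SID", "CID"
        xid: bytes
        edges: List[int] = field(default_factory=list)   # successor node indices,
                                                          # in priority order

    @dataclass
    class DagAddress:
        nodes: List[Node]                   # by convention, nodes[0] is the dummy source

    def evolution_style_address(nid: bytes, hid: bytes, sid: bytes) -> DagAddress:
        """Intent SID reachable directly, with NID -> HID -> SID as the fallback path."""
        src = Node("dummy", b"", edges=[3, 1])   # primary edge to the SID, fallback to the NID
        n   = Node("NID", nid,  edges=[2])       # NID -> HID
        h   = Node("HID", hid,  edges=[3])       # HID -> SID
        s   = Node("SID", sid,  edges=[])        # sink: the intent
        return DagAddress(nodes=[src, n, h, s])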

The implications of using a flexible address format based on DAGs are far-reaching, and evaluating both the benefits and challenges of this approach is an ongoing effort. Clearly, it has the potential of giving end-points more control over how packets travel through the network, as we discuss in more detail in Section 1.8.

 

 

1.3.4  XIA Forwarding and Router Processing

 

DAG-based addressing allows XIA to meet its goals of flexibility and evolvability, but this flexibility must not come at excess cost to efficiency and simplicity of the network core, which impacts the design of our XIA router. In what follows, we describe the processing behavior of an XIA router and how to make it efficient by leveraging parallelism appropriately.

Figure 2 shows a simplified  diagram of how packets are processed in an XIA router.  The edges represent the flow of packets among processing components.  Shaded elements are principal-type  specific, whereas other elements are common to all principals. Our design isolates principal-type  specific logic to make it easier to add support for new principals.

Before we go through the different steps, let us briefly look at three key features that differ from traditional IP packet processing:

 

Processing tasks are separated into XID-independent tasks (white boxes) and XID-specific tasks (colored boxes). The XID-independent functions form the heart of the XIA architecture and mostly focus on how to interpret the DAG.

 

There is a fallback edge that is used when the router cannot handle the XID pointed to by the last-visited XID in the DAG, as explained below.

 

In addition to having per-principal  forwarding  functions based on the destination  address, packet forwarding allows for processing functions specific to the principal type of the source address.

 

We now describe the steps involved in processing a packet in detail. When a packet arrives, a router first performs source XID-specific processing based upon the XID type of the sink node of the source DAG. For example, a source DAG NID1 → HID1 → CID1 would be passed to the CID processing module.


 

 

Figure 3: XIA as a combination  of three ideas

 

By default,  source-specific  processing modules are defined as a no-op since source-specific  processing is often unnecessary. In our prototype, we override this default only to define a processing module  for the content principal type. A CID sink node in the source DAG represents content that is being forwarded to some destination. The prototype CID processing element opportunistically caches content to service future requests for the same CID.

The following stages of processing iteratively examine the outbound edges of the last-visited node (field LN in the header) of the DAG in priority order. We refer to the node pointed to by the edge under consideration as the next destination. To attempt to forward along an adjacency, the router examines the XID type of the next destination. If the router supports that principal type, it invokes a principal-specific component based on the type, and if it can forward the packet using the adjacency, it does so. If the router does not support the principal type or does not have an appropriate forwarding rule, it moves on to the next edge. This enables principal-specific customized forwarding, ranging from simple route lookups to packet replication or diversion. If no outgoing edge of the last-visited node can be used for forwarding, the destination is considered unreachable and an error is generated.
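
A simplified version of this per-packet loop is sketched below, reusing the DAG node structure from the sketch in Section 1.3.3. The handler table stands in for the shaded principal-specific boxes of Figure 2; real XIP forwarding also updates the last-visited field only when the packet actually reaches the corresponding node.

    class Unreachable(Exception):
        pass

    def forward_packet(router, packet):
        """router.handlers maps an XID type name to a principal-specific function
        that returns an output port, or None if it has no forwarding rule."""
        dag = packet.dest_dag
        for nxt in dag.nodes[packet.last_node].edges:      # outbound edges, priority order
            node = dag.nodes[nxt]
            handler = router.handlers.get(node.xid_type)
            if handler is None:                            # principal type not supported here
                continue                                   # fall through to the next edge
            port = handler(node.xid)                       # principal-specific lookup/processing
            if port is not None:
                packet.last_node = nxt                     # simplification: record progress now
                return port
        raise Unreachable("no outgoing edge of the last-visited node is usable")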

We implemented a prototype of the XIP protocol and forwarding engine, which also includes a full protocol stack. The status of the prototype is discussed in more detail in Section 12.1.

 

 

1.4   Deconstructing the XIA Data Plane

 

While we defined XIA as a comprehensive Future Internet Architecture, we also found it useful from a research perspective to view XIA as three ideas that, in combination, form the XIA architecture. This approach helps in comparing different future Internet architectures, in evaluating the architecture, e.g., by being able to evaluate the benefits of individual ideas in different deployment environments, and in establishing collaborations, e.g., by focusing on ideas rather than less critical details such as header formats. Figure 3 shows how XIA can be viewed as a combination of three ideas: support for multiple principal types, intrinsic security, and flexible addressing. XIA also proposes a specific realization for each idea: we have specific proposals for an initial set of principal types and their intrinsic security properties, and for flexible addressing. Other future Internet architecture proposals can differ in the ideas they incorporate and/or how they implement those ideas.

The ideas in Figure 3 are largely orthogonal. For example, it is possible to have a network that supports multiple principal types without having intrinsic security or flexible addressing. Similarly, a network can provide intrinsic security for a single principal type (e.g., as shown in AIP [11], which in part motivated XIA), and it is possible to support flexible addressing, e.g., in the form of alternate paths or destinations, without supporting intrinsic security. While it is possible to build an architecture around just one or two of the ideas, there is significant benefit to combining all three, as we did in XIA, since the ideas leverage each other. This is illustrated by the edges in Figure 3:

 

Combining multiple principal types and flexible addressing makes it possible to evolve the set of principal types, since flexible addressing can be used to realize fallbacks, one of the key mechanisms needed to incrementally deploy new principal types (e.g., 4IDs [89]). As another example, this combination of ideas makes it possible to customize network deployments without affecting interoperability, e.g., by implementing only a subset of the principal types in specific networks. Looking at possible deployments of XIA showed that this is likely to be common: core networks will support only a limited number of XIDs (e.g., NIDs and SIDs), while access networks may also support CIDs, HIDs, and possibly others.

 

Combining multiple principal types with intrinsic security makes it possible to define intrinsic security properties that are specific to the principal type. The benefit is that users get stronger security properties when using the right principal types. For example, when retrieving a document (static content), using a content principal guarantees that the content matches the identifier, while using a host-based principal to contact the server provides a weaker guarantee (only that the content comes from the right server).

 

Combining flexible addressing and intrinsic security makes it possible to modify DAG-based addresses at runtime securely, without needing external key management. For example, when mobile devices move between networks, they can use the private key associated with the SID of each connection end-point to sign a change-of-address notice for each communicating peer. Similarly, service instances of a replicated service can switch to a new service identifier by using a certificate signed with the key of the service's original identifier.

 

Finally, the ideas in Figure 3 directly address the four components of our vision outlined in Section 1.1. Intrinsic security is a key building block of a trustworthy network, in the sense that it guarantees certain security properties without relying on external configuration or databases; these properties can be used as a starting point for more comprehensive solutions. Multiple principal types, combined with flexible addressing, support evolution of the network, for example, to support new usage models or leverage emerging technologies.

 

 

1.5   Using Multiple Principals

 

We use a banking service to illustrate how the XIA features can be used in an application.

 

 

1.5.1  Background and Goals

 

A key feature of XIA is that it represents a single internetwork supporting communication between a flexible, evolvable set of entities. This approach is in sharp contrast with alternative approaches, such as those based on a coexisting set of network architectures, e.g., based on virtualization, or a network based on an alternative single principal type (e.g., content instead of hosts). The premise is that the XIA approach offers maximum flexibility both for applications at a particular point in time, and for evolvability over time. We now use an example of how services interact with applications using service and content principals in the context of an online banking service to illustrate the flexibility of using multiple principals. More examples can be found elsewhere [10, 51, 9].

 

 

1.5.2  Approach and Findings

 

In Figure 4, Bank of the Future (BoF) is providing a secure on-line banking service hosted at a large data center using the XIA Internet. The first step is that BoF registers its name, bof.com, in some out-of-band fashion with a trusted Name Resolution Service (Step 1), binding it to ADBoF : SIDBoF (Step 2). The service ID SIDBoF is bound to a public key for the service. One or more servers in the BoF data center also advertise SIDBoF within BoF's data center network, i.e., administrative domain ADBoF (Step 3).

Figure 4: Bank of the Future example scenario.

The focus of the example is a banking interaction between a BoF client C (HIDC) and the BoF service. The first step is to resolve the name bof.com by contacting the Name Resolution Service using SIDResolv (Step 4), which was obtained from a trusted source, e.g., a service within ADC. The client then connects to the service by sending a packet destined to ADBoF : SIDBoF using the socket API. The source address for the packet is ADC : HIDC : SIDC, where ADC is the AD of the client, HIDC is the HID of the host that is running the client process, and SIDC is the ephemeral SID automatically generated by connect(). The source address will be used by the service to construct a return packet to the client.

The packet is routed to ADBoF, and then to S, one of the servers that serve SIDBoF (Step 5). After the initial exchange, both parties agree on a symmetric key. This means that state specific to the communication between the two processes is created. From then on, the client has to send data specifically to process P running at S, not to any other server that provides the banking service. This is achieved by having the server S bind the service ID to the location of the service, ADBoF : HIDS, after which communication continues between ADBoF : HIDS : SIDBoF and ADC : HIDC : SIDC (Step 6). Content can be retrieved directly from the server, or using content IDs, allowing it to be obtained from anywhere in the network.

The initial binding of the banking service running on process P to HIDS can be changed when the server process migrates to another machine, for example, as part of load balancing. Suppose this process migrates to a server with host ID HIDS2 (Step 7). With appropriate OS support for process migration, a route to SIDBoF is added to the new host's routing table and a redirection entry replaces the routing table entry on HIDS. After migration, the source address in subsequent packets from the service is changed to ADBoF : HIDS2 : SIDBoF. Notification of the binding change propagates to the client via a packet with an SID extension header containing a message authentication code (MAC) signed by SIDBoF that certifies the binding change (Step 8). A client's move to a new AD, ADC2, can be handled the same way (Step 9). The new source address of the client will then be ADC2 : HIDC : SIDC (Step 10). When a new packet arrives at the server, the server will update the client's address. Even though the locations of both the server and the client have changed, the two SID end-points, and therefore their cryptographic verifiability, remain the same.

Figure 5: Multi-party conferencing example

 

 

1.6   Interactive  conferencing systems

 

We use an interactive conferencing system as another use scenario for XIA. This research was performed by Wei Shi as part of his undergraduate honors thesis, advised by PI Peter Steenkiste.

 

 

1.6.1  Background

 

With the availability of the XIA prototype (and its Xsocket API) as a platform for application development, we started to develop various user applications over XIA. Among them, we explored how to best support multi-party interactive conferencing, in a system called Voice-over-XIA (VoXIA). This serves as a good example of how the flexibility of using multiple principal types in XIA can deliver benefits for the conversational communication applications that are important in the Internet today.

 

 

1.6.2  Approach and Findings

 

We compared different design options for the VoXIA application. We implemented a basic application with the following features: (1) each node uses the same (predefined) SIDVoXIA for the VoXIA application; (2) each node registers its own unique VoXIA identifier (VoXIA ID); and (3) each node binds this VoXIA ID to its DAG, e.g., AD : HID : SIDVoXIA. For the voice software stack, we used the Speex codec for compressing audio frames and PortAudio for recording and playing back audio.

In order to examine whether the use of multiple principal types in XIA can offer advantages over traditional host-based communication in this context, we developed two different VoXIA designs. The first method is to send and receive all frames via XDP sockets (UDP style); thus, no in-network caching is supported. This serves as a measurement baseline, representing traditional host-based communication. The second is to use XDP sockets only for exchanging a list of frame pointers (CIDs) and to use XChunkP sockets (chunk transmission) for retrieving the actual voice frames. This method benefits from in-network caching, since XChunkP communication uses the content principal type. VoXIA experiments over GENI (five end-hosts and two routers) show that the chunk-style VoXIA is advantageous when multiple nodes request the same set of frames, since they can effectively utilize in-network caching. Figure 5 shows an example with four nodes that are geographically distributed. After Node 3 retrieves a frame from Node 1, it is likely to be available in the content caches of the routers, where it can be efficiently retrieved by Nodes 2 and 4.
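
The chunk-style design can be pictured roughly as follows. The publish_chunk, send_datagram, recv_datagram, and fetch_chunk parameters stand in for the XChunkP and XDP socket operations; the real Xsocket signatures are not reproduced here, and the CID-list encoding is an ad-hoc placeholder.

    def send_frames(frames, peers, publish_chunk, send_datagram):
        """Chunk-style sender: publish each audio frame as a content chunk, then
        advertise the resulting CIDs to the peers over the signaling (SID) path."""
        cids = [publish_chunk(f) for f in frames]     # frames become cacheable by CID
        manifest = b",".join(cids)                    # placeholder encoding of the CID list
        for peer in peers:
            send_datagram(peer, manifest)

    def receive_frames(sender, recv_datagram, fetch_chunk, play_frame):
        """Chunk-style receiver: learn the CID list, then fetch each chunk by CID.
        An on-path cache may answer instead of the original sender."""
        manifest = recv_datagram(sender)
        for cid in manifest.split(b","):
            play_frame(fetch_chunk(cid))              # content-principal retrieval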


 

 

 

 

Figure 6: Control Plane Architecture (left) and Routing Protocol Design (right).

 

 

The VoXIA experiments showed that XIA's flexibility of using multiple principals can deliver benefits for conversational communication applications. VoXIA can use principal types that match the nature of the communication operation: SIDs for the signaling path and CIDs for the data path. The experiments also showed that it would be useful to allow nodes to push content chunks in a way that lets them be cached along the way (the current XChunkP transport protocol only allows fetching chunks). Finally, a robust conferencing application will have to deal with issues such as pipelining of chunk requests for future data and synchronization among nodes. These issues are similar to those faced by content-centric network architectures such as NDN [60].

 

 

1.7   XIA Control Plane Architecture

 

As a first step towards a control plane architecture, we designed and implemented a flexible routing architecture for XIA. The design is based on the 4D architecture [49, 121], which argues that manageability and evolvability require a clean separation between four dimensions: data, discovery, dissemination, and decision. Software-defined networks have become a popular control plane architecture for individual networks (i.e., intra-domain) and are based on these 4D principles. XIA is exploring how the 4D architecture can be used as the basis for an inter-domain control plane.

Our initial design is shown in Figure 6(left). We assume that each domain is a software-defined network that uses a logically centralized controller. This controller runs separate applications for intra-domain and inter-domain routing. The inter-domain routing application communicates with the routing applications in neighboring domains, creating the control plane topology for inter-domain routing that can be used to implement the discovery and dissemination dimensions. Note that this topology is significantly simpler than the topology used by, for example, BGP, which uses a router-based topology.

For routing in XIA, we have logically separate routing protocols for each XID type supported at the intra- or inter-domain level, each responsible for populating forwarding tables for intra- or inter-domain communication, respectively. These are implemented as applications on the SDN controller (Figure 6(left)). The implementations of the routing applications can potentially share functionality to reduce the implementation effort. Our implementation supports stand-alone inter-domain routing for NIDs, while the protocols supporting Scion (Scion IDs) and anycast for replicated services (SIDs) leverage the universal connectivity supported by NID routing and forwarding.
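
One way to picture this structure is a per-XID-type routing application object registered with the domain controller, all sharing the controller's discovery and dissemination services. This is a hypothetical sketch, not the controller API used in the prototype.

    class RoutingApp:
        """Hypothetical base class for a per-XID-type routing application running
        on a domain's SDN controller."""
        xid_type = None

        def __init__(self, controller):
            self.controller = controller     # shared discovery/dissemination services

        def on_advertisement(self, neighbor_domain, message):
            """Handle an inter-domain advertisement relevant to this XID type."""

        def compute_forwarding_entries(self):
            """Return a mapping from XID to next hop for this XID type."""
            return {}

    class NidRouting(RoutingApp):
        xid_type = "NID"         # stand-alone inter-domain NID routing

    class SidAnycastRouting(RoutingApp):
        xid_type = "SID"         # relies on NID reachability to reach service instances

    def collect_forwarding_tables(controller):
        """Gather per-type tables; the controller would push these to its routers."""
        apps = [NidRouting(controller), SidAnycastRouting(controller)]
        return {app.xid_type: app.compute_forwarding_entries() for app in apps}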

Ongoing research will refine this initial control plane design by providing richer discovery and dissemination dimensions that simplify both the implementation of new control protocols and the extension of existing protocols (evolvability).

As part of the original XIA project, the Scion effort contributed two additional pieces of technology that will be incorporated into the XIA control plane. First, Scion [124, 57] supports path-based communication in which packets are forwarded based on a previously allocated inter-domain path (Figure 7). For each edge domain, the ISPs establish a number of paths connecting that domain to the core of the Internet (the sequences of straight arrows in the figure). These paths adhere to the business policies of the ISPs and may have different properties. Two domains can then communicate by choosing one of their paths each and merging them to form an end-to-end path (e.g., AD1-AD2 and AD3-AD2 in the figure). Scion path-based communication uses a new XID type, Scion IDs, and edge networks can choose between forwarding based on destination NIDs or Scion IDs. The latter are more secure and offer more path control than NIDs.

Figure 7: Scion trust domains, beaconing, and path selection.

Second, Scion introduced several new concepts and mechanisms to securely establish and maintain paths. First, it introduced Trust Domains (TDs) as a way of reducing the Trusted Computing Base for paths. This is achieved by grouping domains into independent routing sub-planes (two in the figure). Second, it uses a beaconing protocol to identify possible paths; beacons are created by the root domain of the TD and forwarded towards the edge networks. Finally, Scion uses a trust domain infrastructure in which domains share certificates to authenticate themselves. While these techniques were designed within the context of Scion, they are independent of the Scion path selection protocol, and we plan to use them to enhance the security of XIA's inter-domain control plane.

 

 

1.8   Exposing and Using Control Points

 

The XIP protocol effectively represents the data plane interface between network users and domains. It represents a control point that allows actors to engage in a tussle "at runtime" to define a communication agreement that meets their policy, economic, legal, and other requirements [27].

Compared with IP, XIP offers significant flexibility in how communication operations are performed (the fourth point of the vision in Section 1.1). First, multiple principal types give users (applications or human users) a choice in how to use the network. This choice affects not only performance but also privacy (CIDs potentially expose more information) and control. Similarly, service providers have a choice in which principal types to support, e.g., based on economic considerations. Second, DAGs are flexible addresses that are much more expressive than today's Internet addresses, giving users much more control over how a packet is handled, through the use of fallbacks, weak source routing, and possibly the specification of multiple paths or multiple sinks [10]. Finally, SCION offers the user a choice over the path that its packets will take through the network, as we explain next.

Experience with XIA so far shows, however, that flexible data plane interfaces are not sufficient to build a network that is evolvable and flexible. We also need to rethink the control plane interfaces for routing for the various principal types and for resource management on a variety of time scales (congestion control, traffic engineering). Appropriate interfaces are needed not only between ISPs but also between ISPs and their customers (residential and corporate networks, service providers, CDNs, etc.). While we have done some initial research in this area, this is a central theme in our current research.


 

 

Figure 8: Supporting mobility: first contact (left) and during a session (right)

 

2   Network Challenges

 

The XIA architecture has been used to address a number of traditional network challenges, including mobility, incremental deployment of architectures, anycast, support for new architecture concepts, in-path services, and in-network caching.

 

 

2.1   Supporting Mobility

 

We designed and implemented protocols to both establish and maintain a communication session with a mobile end-point, leveraging XIA's DAG-based addresses, service identifiers, and intrinsic security. This research was done by Nitin Gupta and Peter Steenkiste from CMU.

Background and Goals - Supporting cross-network mobility in the current Internet is challenging for a simple but fundamental reason: the IP protocol uses IP addresses both to identify an endpoint ("identifier") and as a topological address used to reach the endpoint ("locator"). When a mobile device switches networks it must change the IP address that is used to reach it, but in IP this means that its identifier also changes, so any peers it is communicating with will no longer recognize it as the same device.

There has been a great deal of work on supporting mobility in the Internet, e.g., [122, 48], to overcome this fundamental problem, and two concepts have emerged as the basis for most solutions. The first concept is the use of a rendezvous point that tracks the mobile device's current address, which can be used to reach it (its locator). An example of a rendezvous service is the home agent in the Mobile IP protocol. The second is the observation that, once a connection is established, the mobile host can inform its stationary peer (or its point of contact) of any change in location. This is, for example, done by the foreign agent in Mobile IP. Both of these concepts are fairly easy to implement in XIA, in part because XIA distinguishes between the identifier for an endpoint (HID, SID, or any XID type) and its locator (an address DAG). Note that MobilityFirst uses a similar split with globally unique host identifiers (GUIDs) that are resolved to locators [105]. We now describe our implementation of each concept for host mobility.

Approach and Findings - In our design, when a mobile device leaves its home network, it needs to do two things. First, it needs to make sure that it registers a DAG address with the name service that includes a rendezvous service; we discuss possible DAG formats below. Second, as it connects to different foreign networks, it needs to keep its rendezvous service (marked as Loc Svc in Figure 8(left)) informed of its current location by registering its DAG address (dotted arrow in the figure). This allows the rendezvous service to keep an up-to-date forwarding table with entries for all the mobile devices it is responsible for.


 

To establish a communication session with a mobile device, a corresponding client will first do a name lookup. As shown in Figure 8(left), the address DAG that is returned could include the device's "home" locator (NIDH : HID) as the intent, and the location of a rendezvous service (NIDS : SID) as the fallback. If the device is attached to its home network, the packet is delivered by following dashed Arrow 1 in the DAG. If not, it is automatically delivered to the rendezvous service using the fallback (Arrow 2). The rendezvous service could be located anywhere. For example, it could be hosted by the home network, as in Mobile IP, or it could be a distributed commercial service. When it receives the packet, the rendezvous service looks up the current DAG address for the mobile device and forwards the packet (Arrow 3). The mobile device can then establish the connection with the communicating client. It also provides the corresponding host with its new address as described below, so future packets can be delivered directly to the new address. Clearly, other DAG formats are possible. For example, the mobile device could register the DAG of its rendezvous service with its name service when it leaves its home network, so connection requests are directly forwarded to the rendezvous service.

The above solution is simple but it is not secure. A malicious party can easily intercept a session by registering its own DAG with the rendezvous service, impersonating the mobile device. The solution is to require that address registrations with the rendezvous service are signed by the mobile device. This can be done in a variety of ways. For example, the mobile device can receive a shared key or other credentials when it signs up with the rendezvous service. In XIA, we can instead leverage intrinsic security. The mobile device signs registrations with the key pair associated with the SID in its "home" DAG address, so the rendezvous service can verify the authenticity of a registration without needing a prior agreement with the mobile device. We use the SID to sign the registration since it is typically the "intent" identifier, i.e., it is the true end-point associated with the DAG. In contrast, HIDs and NIDs in the DAG can change over time.
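
A sketch of the registration check: it assumes the SID is the SHA-1 hash of the device's public key and that verify_sig is whatever signature-verification routine matches the key type; the concrete algorithms in the prototype may differ.

    import hashlib

    def register_location(table, sid, new_dag, signature, public_key, verify_sig):
        """Rendezvous service: accept a location update only if (1) the public key
        hashes to the SID being updated and (2) the signature over the new DAG
        verifies. No prior agreement with the device is needed."""
        if hashlib.sha1(public_key).digest() != sid:
            return False                       # this key does not own the SID
        if not verify_sig(public_key, new_dag, signature):
            return False                       # forged or corrupted update
        table[sid] = new_dag                   # forwarding entry for the mobile device
        return True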

The implementation of the rendezvous service raises two design questions. The first is where it should be implemented in the protocol stack. Logically, it acts as a layer 3 forwarding element: it uses a forwarding table to forward packets at layer 3, but that table is directly filled in by endpoints, rather than by a traditional routing protocol. However, for simplicity of code development, we decided not to implement it at layer 3 inside Click. Instead, it is an application-level process that uses the raw Xsocket interface to send and receive layer 3 packets (that include the XIA header). The second design decision is how the rendezvous service forwards packets to the client. Options include dynamic packet encapsulation or address rewriting. We decided to use address rewriting, i.e., replacing the client's old address with its new address, because it does not increase packet length and thus avoids packet fragmentation issues.

To maintain an active session during mobility, the mobile host keeps the stationary host informed of its new address [51], as shown in Figure 8(right). Again, we must decide where to implement this functionality. For network-level mobility, layer 3 would be the natural place in the protocol stack. Unfortunately, since XIP is a connectionless protocol, it does not know what active (layer 4 and higher) sessions exist. In our solution, the XIP protocol prepares a change-of-address record, which it passes on to any software module (transport protocol, applications, etc.) that registered an interest in learning about address changes. These modules are then responsible for forwarding the address change to any relevant parties. In our implementation, the reliable transport protocol inserts the change-of-address record in the data stream, marked as control information. The change-of-address record is signed using the private key of the SID associated with the endpoint of the connection, as explained earlier.

Status and future work - We have implemented  both mechanisms and have shown that they work. Both mechanisms are very responsive, and the current bottleneck in network-level handoff is the old and slow XHCP protocol that is currently  used to join a new network.  Future work will focus on optimizing the protocol  used to join a network  and transport  and session level support to optimize network performance during network mobility.


 

 

Figure 9: Using the 4ID principal type to incrementally deploy XIA

 

2.2   Incremental deployment

 

We explored whether XIA's flexible addressing could be used to help with incremental deployment of XIA nodes and networks in IPv4 networks. This research was performed by CMU Ph.D. student Matt Mukerjee, advised by PIs Seshan and Steenkiste.

Background - One of the biggest hurdles facing a new architecture is deployment in the real world over an existing infrastructure. For example, several past designs, including IP Multicast, different QoS designs, and IPv6, have faced significant deployment challenges over the current IPv4 infrastructure. The most common approach that past systems have taken is the use of an overlay network. Both IPv6 and IP Multicast operate a virtual backbone, the 6bone and MBone, respectively. Users who wish to make use of IPv6 or IP Multicast must create an overlay network link or tunnel that connects them to the backbone. Unfortunately, creating such links is a complicated process, and this has limited the wide adoption of these protocols. Recent efforts [86] have tried to automate this process; however, other weaknesses associated with a tunnel-based scheme remain.

By analyzing the range of IPv6 deployment methods, it quickly becomes clear that any proper deployment method must have certain qualities. A first key requirement is that there should be no explicit tunneling between disjoint clouds, since it is fragile. Also, there should be no address mapping that takes old addresses and puts them into the new address space; allowing this severely limits the usefulness of the new network to the constraints of the old network. Desirable properties include minimal configuration, automatic adaptation to unforeseen changes in the underlying network topology or failures of nodes/links, and graceful degradation.

Our initial design to address these requirements in XIA introduces a new XID type that we call a 4ID (named for IPv4 ID). Consider a scenario with two nodes, A and B, that are located in different XIA-based networks attached to the IPv4 Internet at different locations. Each of the two networks has at least one node that operates as a dual-stack router (i.e., a router that connects to both the XIA local network and the IPv4 network). In order for A to transmit a packet to B, the destination DAG address will contain the IPv4 address of B's network's dual-stack router as a fallback address. This entry is encoded inside an XID whose type is labeled as 4ID. This design takes advantage of the fact that XIA packets can carry multiple addresses and encode relative priorities across them. In addition, unlike past designs, there is no use of a virtual backbone and no need to set up tunnels.

Figure 9 illustrates the example. The dual-stack router in network A first tries to forward the packet based on the intent identifier, ADS, but it does not have a forwarding entry for ADS, so it needs to use the fallback. After encapsulating the packet in an IPv4 packet using the IPv4 address carried in 4IDS, the packet can be delivered over the IPv4 network to ADS. The dual-stack router in network B removes the IPv4 header and forwards the packet to the destination using XIP. Once the IPv4 network is upgraded to XIA, the same DAG can still be used to reach the destination.
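
The dual-stack forwarding decision can be sketched as follows, reusing the DAG node structure from Section 1.3.3; ipv4_send and xip_forward are assumed primitives, and the assumption that the 4ID carries the IPv4 address in its last four bytes is purely illustrative.

    def forward_at_dual_stack_router(packet, xia_routes, ipv4_send, xip_forward):
        """Try native XIP forwarding first; otherwise use a 4ID fallback, i.e.,
        encapsulate the XIP packet in IPv4 addressed to the remote dual-stack router."""
        dag = packet.dest_dag
        for nxt in dag.nodes[packet.last_node].edges:            # edges in priority order
            node = dag.nodes[nxt]
            if node.xid_type != "4ID" and node.xid in xia_routes:
                return xip_forward(packet, xia_routes[node.xid])
            if node.xid_type == "4ID":
                ipv4_addr = node.xid[-4:]                        # assumed 4ID encoding
                return ipv4_send(ipv4_addr, packet.serialize())  # IPv4 encapsulation
        raise RuntimeError("destination unreachable")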

Approach and Findings - We generalized our understanding of incremental deployment techniques by creating a design space based on how different approaches addressed four core questions:

 

How and when to select an egress gateway from  the new network  architecture  (NNA) source network

 

How and when to select an ingress gateway into the destination NNA network

 

How to reach the egress gateway of the source NNA network from the source host

 

How to reach the ingress gateway of the NNA destination network from the source NNA network

 

Based on the above design space, we were able to map all existing designs into two broad classes of solutions. In addition, we were able to identify two new classes of designs. The 4ID-based design represents one of these classes. We created a new design that we called “Smart 4ID” as an example of the second new class. The 4ID mechanism utilizes new data plane technology  to flexibly decide when to encapsulate packets at forwarding  time. In contrast, the Smart 4ID mechanism additionally  adopts an SDN-style control plane to intelligently pick ingress/egress pairs based on a wider  view  of the local network.

To characterize the real-world performance tradeoffs of the four classes (both new and old), we implemented a representative mechanism from each class and performed wide-area experimentation over PlanetLab [23]. We explored how the choices made in each class directly impact host performance (path latency, path stretch, and latency seen by applications) as well as failure semantics (failure detection and recovery time) through a quantitative analysis. We additionally provide a qualitative analysis of the management and complexity overhead of each mechanism. Path latency and stretch provide insight into the quality of the path chosen by each mechanism, whereas application latency shows the path's impact on hosts. Failure semantics and management/complexity overhead present a fuller picture of the effort needed to use these mechanisms, which is often left out in such analyses.

Our results show that the new Smart 4ID-based approach outperforms previous approaches in performance while simultaneously providing better failure semantics. We contend that our mechanism performs better because it leverages innovations in the data plane (flexible addressing) and the control plane (a centralized local controller) rather than relying solely on traditional ideas (routing, naming, etc.).

 

 

2.3   Service anycast routing

 

We designed and implemented an inter-domain  anycast routing protocol for XIA service identifiers (SIDs).

Background - SIDs represent the end-points of communication sessions (i.e., sockets). Similar to the current Internet, SIDs can be "well-known", i.e., advertised by service providers, or ephemeral, e.g., for clients. Since services can be replicated for performance and availability, an initial service request based on an SID has anycast semantics: the network must deliver the request to exactly one service instance. The choice of service instance is highly service-specific and will depend on both the state of the infrastructure (bandwidth, latency, server load, etc.) and provider policies. Once the service request is received by a service instance, the endpoint of the connection is "bound" to that instance by inserting the NID of the service instance into the address DAG. Intrinsic security is used to make this step secure.
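As an illustration of the binding step, the following sketch (with a simplified, hypothetical DAG representation) shows how a session could be bound to the chosen instance by inserting its NID ahead of the SID in the address.

# Sketch of "binding" a service session to a specific instance by refining
# the address DAG; the representation is simplified and hypothetical.

def bind_to_instance(dag_path, nid):
    """Insert the chosen instance's NID before the SID in a linear DAG path.

    dag_path is a list of (type, value) tuples such as [("AD", ...), ("SID", ...)].
    """
    sid_index = next(i for i, (t, _) in enumerate(dag_path) if t == "SID")
    return dag_path[:sid_index] + [("NID", nid)] + dag_path[sid_index:]

# Anycast request address: any instance of the service may answer.
request = [("AD", "ad:1234"), ("SID", "sid:web")]

# After the first response, the session is bound to one service instance.
bound = bind_to_instance(request, "nid:instance-7")
print(bound)  # [('AD', 'ad:1234'), ('NID', 'nid:instance-7'), ('SID', 'sid:web')]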

The requirements for SID anycast can be summarized as follows:

 

Flexibility: Decision making should be flexible enough to meet the requirements of a wide variety of Internet services.

Accuracy: The choice of the service instance should accurately reflect the conditions of the server and the network.

Responsiveness: Decision making should be responsive to changes in system conditions.

Efficiency: Initial request packets should be handled efficiently, since many service sessions are short-lived.

Scalability: The system must have low overhead so it can scale to a potentially large number of services and client domains.

Availability: The system must have high availability, even if parts of the anycast architecture fail.

Figure 10: Architecture for SID anycast.

Related work in this area falls broadly into two categories. The first is exemplified by IP anycast [93], which supports anycast over the current Internet. Its current implementation [2, 66] leverages BGP [101]. This means that instances are selected using BGP-type criteria (AD path length, etc.). While this may be sufficient for DNS [55], it does not meet the requirements of more demanding services.

The second category, which is the most commonly used today, is DNS-based. Specifically, the DNS server of the service provider will select a service instance and return it to the client in the form of an IP address. This approach has the advantage that the service provider is fully in control of the selection process and can apply any policy it wants. There are, however, a number of limitations and drawbacks. First, the DNS server does not have access to any network information; all it knows is an estimated location based on the client's request. Second, it adds a round trip's worth of latency to service access. Finally, this approach introduces a fair bit of overhead, since caching of DNS responses has to be limited to keep the system responsive to failures and changes in load.

Approach and Findings - The key idea underlying our design is to place instance selection in the hands of service providers so that rich policies can be implemented, but instead of relying on DNS, we use routing so we can incorporate more accurate information about network conditions and avoid the round-trip latency associated with a DNS lookup.

The high-level design is shown in Figure 10. The figure shows an SID-enabled domain with an SDN-style control plane, consistent with the XIA control plane architecture (Section 1.7). The SDN controller runs an SID routing application, the service router, which, together with a set of service controllers colocated with each service instance, implements anycast for a set of SIDs. Consistent with the 4D architecture [49], the service routers and controllers together implement discovery, dissemination, and decision functions.

As a first step, service controllers will use beacons to advertise their presence, allowing service routers to identify available service instances and establish communication sessions with their service controllers (step 1). Next, a network measurement application in each SID-enabled domain will collect network performance data (e.g., latency and bandwidth to service instances) and make this information available to the service controllers (step 2); we discuss the placement of SID-enabled domains later. The service controllers for each SID will then jointly decide how service routers should forward SID requests to service instances, using both the measurement information and internal information such as service load and policies (step 3). They then forward these decisions to the service routers (step 4), which will distribute them to routers in their domain as needed (step 5). SID requests from clients will now be forwarded to the appropriate service instances without incurring additional latency (step 6).
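The following sketch ties steps 1 through 5 together in simplified form; the class and method names are invented for illustration and do not mirror our prototype.

# End-to-end sketch of the SID anycast control loop (steps 1-5); classes and
# method names are invented and do not mirror the prototype.
class ServiceController:
    def __init__(self, instance_nid):
        self.instance_nid = instance_nid
        self.load = 0.0                     # internal info, e.g., server load

    def beacon(self):
        # Step 1: advertise this instance to service routers.
        return {"nid": self.instance_nid}

class ServiceRouter:
    def __init__(self):
        self.instances = []                 # learned from beacons
        self.forwarding = {}                # sid -> chosen instance NID

    def learn(self, beacon):
        self.instances.append(beacon["nid"])

    def install(self, sid, nid):
        # Steps 4-5: accept the controllers' decision and install it.
        self.forwarding[sid] = nid

def decide(sid, controllers, latency_ms, router):
    # Step 2: measurements (latency_ms) come from the measurement application.
    # Step 3: controllers jointly pick an instance, combining latency and load.
    best = min(controllers,
               key=lambda c: latency_ms[c.instance_nid] + 10.0 * c.load)
    router.install(sid, best.instance_nid)

router = ServiceRouter()
controllers = [ServiceController("nid:a"), ServiceController("nid:b")]
controllers[1].load = 0.9
for c in controllers:
    router.learn(c.beacon())                                     # step 1
decide("sid:web", controllers, {"nid:a": 20.0, "nid:b": 12.0}, router)  # steps 2-5
print(router.forwarding)   # step 6 uses this entry: {'sid:web': 'nid:a'}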

Let us elaborate on some of the key steps. First, the simplest outcome of the decision step (step 3) would be to create a forwarding entry for an SID that points at a single service instance. Unfortunately, unless the service provider has a presence in a large number of SID-enabled domains, which may not be practical for smaller service providers, this simple approach would limit load-balancing options. For this reason, we are exploring an alternative design where a forwarding entry for an SID can include multiple service instances with a target load distribution. Second, our design assumes that the information collected in step 2 is representative of clients. This requires SID-enabled domains to be at the edge of the Internet, near or even inside client networks; clearly, increasing the number of domains will improve performance. Finally, in order to keep track of changing network and server conditions, routing updates will happen periodically. Service controllers can also push updates at any time, for example to respond to failures or sudden changes in the network or load. This should improve both responsiveness and availability.
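As a sketch of the alternative forwarding entry, the snippet below stores multiple instances with a target load distribution and realizes the split with a weighted random choice per request. This selection policy is one plausible option, not necessarily the one used in our design.

# Sketch of an SID forwarding entry carrying several instances with a target
# load distribution; weighted random choice per request is one plausible way
# to realize the target split.
import random

class WeightedSIDEntry:
    def __init__(self, targets):
        # targets: {instance NID -> target fraction of requests}, summing to ~1.0
        self.nids = list(targets)
        self.weights = [targets[n] for n in self.nids]

    def pick(self):
        return random.choices(self.nids, weights=self.weights, k=1)[0]

entry = WeightedSIDEntry({"nid:a": 0.7, "nid:b": 0.3})
sample = [entry.pick() for _ in range(10000)]
print(sample.count("nid:a") / len(sample))   # roughly 0.7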

This design meets the requirements we identified  above by combining the benefits of the DNS-style approach with those of a routing solution.

 

 

2.4   Pushing the evolvability envelope

 

The XIA group at Boston University has realized a Linux implementation of the core XIA network architecture and has been exploring how other FIA features can be integrated into Linux XIA at the network layer. The goal of this effort is fourfold: to streamline upcoming network deployments of XIA, to provide a concrete demonstration of XIA's capability to evolve and import novel architectural functionality created by others, to make it easier for other network researchers to conduct architectural evaluations enabled by the existence of Linux XIA, and to reach beyond network researchers to other potential early adopters of XIA. This effort has been spearheaded by post-doctoral researcher Michel Machado and PI John Byers. Other team members contributing in this reporting period are PhD students Cody Doucette and Qiaobin Fu, undergraduate Alexander Breen, and Boston Academy High School student Aaron Handelman.

Background - In the FIA-funded XIA project, Boston University implemented an XIA protocol stack natively in Linux and started to use this implementation as a basis for realizing other FIA architectures as principals within the XIA framework. As described in Michel Machado's Ph.D. thesis [82], the team ported the Linux implementation of the Serval service-centric architecture to XIA, and produced white-paper ports of the NDN and MobilityFirst architectures and of the ANTS active network protocol. XIA was able to accommodate these "foreign" architectures by introducing new principal types and using its flexible addresses, as described in previous annual reports. Another interesting finding was that realizing Serval in XIA directly activates intrinsic security, cleanly remediating a key weakness in the original "Serval over IP" design.

Findings - In this reporting period, the Boston team continued to build out functionality on Linux XIA, providing support for multicast using the zFilter framework from the PURSUIT project (a European future Internet architecture) and focusing on interoperability. The goal was to evaluate how effectively and efficiently XIA's architectural principles support networking features, in this case multicast over a legacy network. This research will be presented at ANCS'15 [83].

We built a reliable multicast application that combines three principals to deliver content across a heterogeneous network, demonstrating the value of principal interoperability. This application employs the U4ID principal to cross an IP-only link, the zFilter principal to duplicate packets in the network, the XDP principal to deliver packets to sockets, and erasure codes to make the transmission reliable. Figure 11 illustrates this application in action, and shows how these principals can compose an XIP address.

Figure 11: Example of network evolvability and principal interoperability in Linux XIA.

The three-node destination address depicted at bottom left in Figure 11 can be understood as expressing the following intent: (1) traverse an IP-only part of the network by encapsulating XIA packets in UDP/IP payloads; (2) multicast the content to multiple hosts; and (3) deliver the content to listening datagram sockets. Alternatively, the depicted single-node destination address can be used in tandem with routing redirects in the network to supply the same functionality. In both cases, this allows the TCP/IP, zFilter, and XIA architectures to interoperate by composing their individual strengths, despite the fact that these architectures were never intended to work together.
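A compact way to read that three-node address is as a priority-ordered chain of (principal, identifier) nodes, as in the sketch below; the identifiers shown are placeholders, not actual XIDs.

# Sketch of the three-node destination address from Figure 11 as an ordered
# chain of (principal, identifier) nodes; the identifiers are placeholders.
xip_destination = [
    ("U4ID", "udp://192.0.2.10:8770"),          # 1: tunnel XIA over UDP/IP across the IP-only link
    ("zF",   "zfilter:links-to-both-clients"),  # 2: in-network multicast duplication
    ("XDP",  "xdp:datagram-socket"),            # 3: deliver to listening datagram sockets
]

# Each node hands the packet to the next principal once its own job is done,
# so the address itself encodes the composition of the three architectures.
for hop, (principal, ident) in enumerate(xip_destination, start=1):
    print(f"node {hop}: {principal} -> {ident}")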

Each step in Figure 11 captures a transition in the life of an XIP packet sent from the server to the clients. The XIP packet is created (Step 1), but while routing the packet, XIP discovers that it cannot forward it using the XDP1 identifier, since the link between the dual-stack server and the router is IP-only; XIP therefore transfers control to the U4ID principal, which encapsulates the XIP packet into the payload of a UDP/IP packet (Step 2). The packet is forwarded using IP and arrives at the router, where the U4ID principal decapsulates it and hands control back to XIP (Step 3). XIP then decides to follow the edge zF1, which leads to duplicating the packet (Step 4) and sending the copies toward the two clients. Once the packets arrive at the clients (Step 5), the XDP principal identifies the listening datagram sockets to which the data must be delivered. This application serves as a proof of concept that XIA has a strong notion of evolvability and that Linux XIA offers a practical realization that allows interoperation and collaboration among multiple new principal types.
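The per-principal handoff in that walkthrough can be summarized as in the sketch below, where each handler is a hypothetical stand-in for the corresponding processing in Linux XIA.

# Step-by-step sketch of the packet's traversal in Figure 11; the handlers are
# hypothetical stand-ins for the per-principal processing in Linux XIA.
def u4id_handler(packet, hop):
    if hop == "dual-stack server":
        return f"encapsulate {packet} in UDP/IP"               # Step 2
    return f"decapsulate {packet} and hand control back to XIP"  # Step 3

def zfilter_handler(packet):
    return [f"copy of {packet} -> client1", f"copy of {packet} -> client2"]  # Step 4

def xdp_handler(packet, hop):
    return f"deliver {packet} to datagram socket on {hop}"     # Step 5

print(u4id_handler("xip-pkt", "dual-stack server"))  # Step 2
print(u4id_handler("xip-pkt", "router"))             # Step 3
print(zfilter_handler("xip-pkt"))                    # Step 4
print(xdp_handler("xip-pkt", "client1"))             # Step 5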

The lessons learned from porting various FIA features to XIA show that XIA can be viewed as an interoperable meta network architecture [83]: a framework that nurtures coexistence of clean-slate designs, letting stakeholders experiment with, improve, and choose the designs that best suit their needs. In this architectural framework, future innovation can occur on a level playing field, lowering barriers to entry and thus enabling crowdsourced innovation. Our vision is that Linux XIA makes (formerly intractable) network-layer innovation broadly accessible to networking researchers.

Figure 12: XIA network stack with in-path services and a session layer: (a) application-level service; (b) network-level service.

 

 

2.5   In-Path Network Services

 

In this section, we explore how to effectively support in-path network services. This project is motivated by, and extends, earlier work on Tapa, transport support for communication over heterogeneous edge networks, which is summarized in Section 3.1. This research was performed by David Naylor, in collaboration with Suk-Bok Lee (postdoc) and PI Peter Steenkiste.

 

 

2.5.1  Background

 

Today's Internet does a poor job of handling in-path services, i.e., services that data should pass through on its way to a destination. There is currently no mechanism for including in-path services in a communication session without building an overlay on TCP/IP or physically (and transparently) inserting the service into the data path. These transparent middleboxes often introduce new problems, such as creating new failure modes that violate fate sharing.

The goal of this work is to design and implement support for in-path services in XIA such that: (1) all involved parties (applications, hosts, and ADs) can add services to a communication session; (2) all in-path services are visible to all communicating parties (eliminating hidden failures and allowing policy verification); (3) paths do not have to be symmetric; and (4) applications are required to implement as little of the service management/error-recovery functionality as possible (i.e., the network handles as much as possible).

 

 

2.5.2  Approach and Findings

 

In order to realize the above design goals for in-path services, we proposed adding a session layer to the XIA stack. The session layer manages all in-path services in a session. It solicits input from all involved parties as to which services should be included and sets up the session by initiating transport connections between application-level services. Figure 12 shows the XIA stack with a session layer that supports in-path services, either at the application level (Figure 12(a)) or at the network level (Figure 12(b)). Application-level services sit at the application layer, while the session layer provides what are essentially raw sockets to network-level services, bypassing the transport layer.
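The sketch below illustrates the distinction drawn in Figure 12: the session layer hands ordinary transport sockets to application-level services and essentially raw XIP sockets to network-level services. The class and field names are invented for illustration.

# Sketch (with invented names) of how the session layer might hand different
# socket types to the two kinds of in-path services shown in Figure 12.
class SessionLayer:
    def attach(self, service):
        if service["level"] == "application":
            # Application-level services are transport endpoints: the session
            # layer splices them in over ordinary transport connections.
            return {"service": service["name"], "socket": "transport (e.g., stream)"}
        # Network-level services bypass transport: they receive what is
        # essentially a raw XIP socket from the session layer.
        return {"service": service["name"], "socket": "raw XIP"}

layer = SessionLayer()
print(layer.attach({"name": "virus scanner", "level": "application"}))
print(layer.attach({"name": "firewall", "level": "network"}))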

The initiator of a session specifies its final intent, i.e., the other endpoint, and optionally a set of in-path services. We use the term path to describe the ordered set of services through which data should pass between the initiator and the final intent in one direction, including either the initiator in the forward path or the final intent in the return path. A session, then, is simply two paths: one from the initiator to the final intent and one from the final intent back to the initiator. The two paths may or may not be identical. We call the DAGs describing these paths the session DAGs. Figure 13 shows an example session where the virus scanner is an application-level service (it is a transport endpoint) while the firewall is a network-level service (it is not a transport endpoint).

Figure 13: Example of in-path services in XIA.
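The example session of Figure 13 can be summarized as two (possibly asymmetric) session DAGs, flattened here to ordered paths with placeholder identifiers.

# Sketch of the session in Figure 13 as two (possibly asymmetric) session DAGs,
# flattened to ordered paths; the identifiers are placeholders.
session = {
    # Forward path: initiator -> virus scanner (application-level, a transport
    # endpoint) -> final intent.
    "forward": ["HID:initiator", "SID:virus-scanner", "SID:final-intent"],
    # Return path: final intent -> firewall (network-level, not a transport
    # endpoint) -> initiator. Note that the two paths need not be identical.
    "return":  ["SID:final-intent", "SID:firewall", "HID:initiator"],
}

for direction, path in session.items():
    print(direction, " -> ".join(path))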