MUSE QoS solution

MUSE advocates the introduction of QoS into IP networks, as this allows better resource utilisation while at the same time serving multiple, different applications with the transport quality they actually need.

The solution needs to be able to offer quantitative QoS support for some services and qualitative for others, to support a retail/wholesale split in the QoS business model, to provide upstream QoS, especially across the access link, and to support multiple service edges.

MUSE recommends the use of at least four traffic classes (real-time, streaming, transactional and best-effort) as the way to differentiate traffic while keeping the network scalable.

MUSE recommends a user-centric approach in which classification of traffic into traffic classes is a responsibility of the user, although it is still expected often to be delegated to the providers. Traffic policing is recommended to ensure that the usage of each traffic class does not exceed what has been planned, agreed or contracted.

Architectural Options

There are some basic questions that need answering before defining the MUSE QoS solution:

  • How to differentiate traffic?

MUSE recommends simple traffic differentiation (i.e. DiffServ) as one of the basic QoS techniques. However the differentiation should be done in such a way that it does not impose a heavy burden in terms of complexity and performance. Therefore, MUSE advocates using a limited set of traffic classes (e.g. four classes) inside the access and aggregation networks.
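As a concrete illustration of DiffServ marking with a small class set, the sketch below maps the four MUSE classes to DSCP code points; the particular values chosen (EF, AF41, AF21 and default) are an illustrative assumption, not something mandated by MUSE.

```python
# Hypothetical mapping of the four MUSE traffic classes to DSCP code points.
DSCP = {
    "real-time":     46,  # EF   - inelastic interactive (e.g. voice calls)
    "streaming":     34,  # AF41 - inelastic non-interactive (e.g. IPTV)
    "transactional": 18,  # AF21 - elastic interactive (e.g. e-commerce)
    "best-effort":    0,  # default - elastic non-interactive (e.g. file transfer)
}

def mark(packet: dict) -> dict:
    """Stamp a packet with the DSCP value of its traffic class;
    unknown or unclassified traffic falls back to best-effort."""
    packet["dscp"] = DSCP.get(packet.get("class", "best-effort"), 0)
    return packet

print(mark({"class": "real-time"})["dscp"])  # 46
```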

  • Who is responsible for the classification of the traffic?

The nature and importance of the traffic should be taken into consideration when assigning the responsibility for classifying traffic, and these aspects are best known to the users of the network (residential, corporate, service providers, packagers, etc.). Indeed, classification of traffic into traffic classes should be done before the ingress point of the network, acknowledging the right of the users to decide this; it is also the only way to guarantee QoS on all the network segments.

However, as most users are laymen and either do not know how, or do not want, to make this kind of decision, it is expected that most of them will delegate these decisions to the network as part of their user preferences. Note also that, whereas it is legitimate to offer this capability as an added value so that network users are freed from taking decisions, some users may prefer to keep control; hence this model should not be imposed but offered as a possibility.

  • How is QoS requested?

Once a DiffServ approach has been selected, requesting QoS is implicit in belonging to a given traffic class, as the network will normally define a set of constraints (e.g. a given maximum mean bitrate per traffic class) for using the different available traffic classes on a per-user basis. These constraints basically depend on the physical constraints of the access technology and on the contract negotiated with the user. Verifying that the constraints for using the traffic classes are being fulfilled by the users is part of the policy enforcement, and this should ideally be done at the ingress points of the network. Note that these constraints are expected to be rather static in time, although they may be altered upon prior negotiation between the user and the network. Additionally, explicit guarantees per session can be provided to some services by means of a CAC (Call/Connection Admission Control) system, so that service session requests are accepted or rejected based on the availability of the requested network resources for the traffic class applicable to the session. Such a CAC system will receive explicit requests through the application's own signalling (e.g. in the Session Description Protocol) or from application servers.

  • How are the resources provided?

The goal of CAC is to verify in real time that there are enough resources available to satisfy the QoS guarantees. In this sense, admission control has a complementary role to network dimensioning, which should have provided the necessary resources in advance. Note that QoS should be largely achieved through correct network dimensioning: even with CAC, if the network is heavily underprovisioned, a significant number of session requests will be denied, leading to a poor customer experience. CAC is therefore not a substitute for good network dimensioning, which is the basic means of providing the necessary resources for QoS support.

CAC Options

  • Central CAC:

A central CAC system is one where all CAC decisions are made in the same place. The central CAC system has a complete view of the resources of the appropriate parts of the network, and all call admission requests have to be signalled to it. For each call (signalled) request or (non-signalled) detection, the central CAC system is consulted and, on the basis of resource availability, decides whether to allow or block the requested call. This decision is sent to the boundary node, where enforcement may be done. The main advantage of a central CAC system is that it is simple to manage. However, it requires the exchange of signalling messages, which may compromise scalability and adds delay to the decision process.
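A central CAC decision of this kind can be sketched as simple bookkeeping over per-link residual capacity. The link names, capacities and session identifiers below are illustrative assumptions:

```python
class CentralCAC:
    """Sketch of a central CAC entity: a complete view of residual
    bandwidth per link, consulted for every signalled call request."""

    def __init__(self, link_capacity):
        self.free = dict(link_capacity)  # residual bandwidth per link (Mbit/s)
        self.sessions = {}               # session id -> (path, bandwidth)

    def admit(self, sid, path, bw):
        if any(self.free[link] < bw for link in path):
            return False                 # block: some link lacks resources
        for link in path:
            self.free[link] -= bw        # reserve along the whole path
        self.sessions[sid] = (path, bw)
        return True

    def release(self, sid):
        path, bw = self.sessions.pop(sid)
        for link in path:
            self.free[link] += bw

cac = CentralCAC({"AN1-EN": 100.0, "EN-core": 1000.0})
print(cac.admit("vod-1", ["AN1-EN", "EN-core"], 8.0))   # True
print(cac.admit("vod-2", ["AN1-EN", "EN-core"], 95.0))  # False: AN1-EN lacks capacity
```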

  • Local CAC:

A local CAC system is one where uncoordinated CAC decisions are made at the appropriate nodes on the basis of the state of a local link. For a CAC decision to be made locally and independently of other nodes, each node needs a local view of the network resources available to it. To achieve that, network resources must be partitioned per traffic class and allocated to the different nodes. Resource partitioning naturally reduces the potential multiplexing gains, as unused resources allocated to a given node cannot be used by another node or by another traffic class at the same node. To mitigate this, partitions should be updated on a regular basis by a central authority that has a historic and global view of network resource usage, or whenever a threshold associated with a given allocation is exceeded. Any re-allocation of resources should take into account the commercial agreements (SLAs) between network operators and service providers, which might specify bandwidth on a link-by-link basis.

A local CAC system reduces both the time to make decisions and the exchange of signalling messages. It can take advantage of direct interaction with IGMP messages, hence offering a way of implementing CAC for multicast traffic. However, its implementation is complex and must be carefully done in order to avoid problems such as the lack of information consistency.

MUSE QoS Architecture

A diagram of the MUSE QoS architecture is shown in Figure 1.

Traffic classes, selective CAC and appropriate network dimensioning are the keystones of the solution.

While traffic class differentiation will be used for most of the traffic keeping the solution simple and scalable, per flow differentiation can also be supported for certain types of traffic in the appropriate parts of the network. A small set of traffic classes will be used to deal with most of the traffic, establishing simple scheduling algorithms in the network elements to differentiate them.

Traffic classification will normally be performed by the traffic originator, unless delegated to the network operator. However, the network will ensure that the load of each traffic class stays below the level used for dimensioning, by policing the different traffic classes on a per-subscriber basis. This traffic policing should be done as close to the user as possible, that is, at the Access Nodes (AN), in order to react faster, guarantee enough network resources and protect sensitive traffic. The policies could be statically provisioned for common residential services; however, personalised policies could be applied in association with the authentication process, so that richer and more dynamic services can be supported. Inside the network, nodes must have dedicated queues per traffic class at the output ports, served by a scheduling mechanism using strict priority or weighted mechanisms.

CAC is not necessary for all services, but only for those whose demand is difficult to predict and which require a large amount of resources. This is likely to happen on the access links, because of their relatively high associated cost, and especially for upstream traffic when using asymmetric access technologies. It may also happen on aggregate links which carry a mix of basic Internet traffic and video. Hence, CAC is recommended at least for protecting the traffic of the most sensitive services, that is, real-time or loss-sensitive ones, and especially those that require a relatively large amount of bandwidth (e.g. IPTV and VoD).

Appropriate network dimensioning will reduce the chances of having congestion problems and hence will diminish the need for widespread deployment and usage of resource reservation and admission control mechanisms; these can be difficult to deploy as they add the need for signalling and stateful management at network elements. Of course, an appropriate network dimensioning requires an awareness of possible traffic evolution patterns in the network. Hence, traffic monitoring and reporting per node are required.

Traffic classes

A natural way to group telecommunication services is as a function of the type of traffic they generate. The main differentiators identified in MUSE are:

  • Elasticity level (elastic/inelastic): the elasticity level refers to the degree to which the traffic’s original shape can be modified; not all applications have the same elasticity level. Normally, communication services aspire to keep both data and temporal integrity, and in order to establish the elasticity level of a given service/application it is useful to assess which of the two is more important. Elastic and inelastic applications (or the traffic they generate) can thus be distinguished as a function of which is more relevant. If data integrity is more relevant (e.g. a file transfer), lost or corrupted data have to be retransmitted, and the traffic is considered elastic. If temporal integrity is the main concern (e.g. a voice call), there is normally no chance of retransmitting lost or corrupted data, so this type of service is characterised as having inelastic traffic. For some services both data and temporal integrity are important (e.g. certain interactive games).
  • Interactivity level (interactive/non-interactive): the interactivity level describes the temporal integrity in both directions of the communication. For instance, elastic traffic with a high interactivity level is generated by applications where both data integrity and temporal integrity are very relevant (e.g. e-commerce).
  • Service availability (standard/high): Availability is a very important consideration, and of course must be used as an attribute to identify the different traffic classes. Indeed, it is one of the most important considerations to take into account in core networks (which are not in the scope of MUSE).

The following table provides a mapping between these concepts and the ITU and 3GPP terminology.

Traffic class             | Terminology proposed in MUSE | 3GPP           | ITU
Elastic non-interactive   | Best-effort                  | Background     | Non-critical
Elastic interactive       | Transactional                | Interactive    | Responsive
Inelastic non-interactive | Streaming                    | Streaming      | Timely
Inelastic interactive     | Real Time                    | Conversational | Interactive

It is recommended that all network nodes support at least the 4 classes of services defined in the previous table. Every operator can obviously provide additional classes of services.

Inside the network, nodes will have dedicated queues per traffic class at the output ports, which will be served by Strict Priority (SP) and/or Weighted Fair Queuing (WFQ) scheduling mechanisms.
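A minimal sketch of such a per-class output stage, using weighted round-robin as a simple stand-in for WFQ; the weights are illustrative assumptions:

```python
from collections import deque

class ClassScheduler:
    """Per-traffic-class output queues: Strict Priority for real-time,
    weighted round-robin (a WFQ stand-in) for the remaining classes."""

    WEIGHTS = {"streaming": 3, "transactional": 2, "best-effort": 1}

    def __init__(self):
        self.q = {c: deque() for c in ["real-time", *self.WEIGHTS]}
        self.credit = dict(self.WEIGHTS)   # per-class credit for this round

    def enqueue(self, cls, pkt):
        self.q[cls].append(pkt)

    def dequeue(self):
        if self.q["real-time"]:            # SP: real-time always goes first
            return self.q["real-time"].popleft()
        for _ in range(2):                 # at most one credit refresh
            for cls in self.WEIGHTS:
                if self.q[cls] and self.credit[cls] > 0:
                    self.credit[cls] -= 1
                    return self.q[cls].popleft()
            self.credit = dict(self.WEIGHTS)
        return None                        # all queues empty
```

Over a busy period, streaming, transactional and best-effort packets are served roughly in a 3:2:1 ratio whenever the real-time queue is empty.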

Selective CAC

In large access network domains, there could be a scalability problem when implementing signalled and central CAC for each and every flow. Hence, MUSE recommends the use of central CAC for the (small) subset of services that actually need it, and only in those parts of the network where the network operator has identified a potential dimensioning problem, whereas local CAC or no CAC can be used for the rest of the traffic.

A mechanism is needed to segregate the network resources into:

  • A set of resources that can be used by services that need no CAC, and are policed only on a traffic class basis so that a maximum class bandwidth cannot be exceeded.
  • A set of resources for services that are controlled by a central system on a per call/session basis with explicit signalling.
  • A set of resources for services that are controlled by a local CAC system on a per call/session basis with or without explicit signalling.

The next section describes the mechanism selected by MUSE to segregate the network resources.

Note that a given set of resources can either be completely dedicated to traffic subject to CAC or, to improve network utilisation, may also be shared by traffic not subject to CAC. In the case of sharing, prioritisation mechanisms will be required in addition to CAC. Depending on the network architecture and dimensioning, CAC may only be needed for certain links within an end-to-end path.

In addition to any admission control performed by the network operator, it is the responsibility of the service provider, through a separate service admission control, to check whether the necessary resources are still available on the service platform and within the traffic classes contracted from the network operator (in a wholesale scenario).

Provisioning scenario for co-ordinating central and local CAC

MUSE recommends a “provisioning” scenario where the central CAC, which has a view of all network resource usage, is able to allocate to a local CAC a certain amount of resources that will then be managed locally. A scheme of this provisioning scenario can be seen in Figure 2.

Simpler scenarios could be designed where each CAC system just controls a dedicated set of resources and no interrelation is needed between them. However, the main drawback of such an approach is that optimization of resource usage is difficult to achieve.

Within this approach, the central CAC regularly monitors the usage of local resources at the Access Nodes (AN) and adjusts the resources allocated to the local CAC entities if needed.
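One possible reading of this adjustment step is sketched below; the headroom factor and figures are illustrative assumptions, not values specified by MUSE.

```python
def adjust_local_quota(total, central_used, local_used, headroom=1.2):
    """Grow (or shrink) the bandwidth delegated to a local CAC entity
    towards its observed usage plus some headroom, without eating into
    what centrally admitted traffic is currently using (all in Mbit/s)."""
    wanted = local_used * headroom
    available = total - central_used
    return max(local_used, min(wanted, available))

# 100 Mbit/s link, 60 used by centrally admitted traffic, 20 used locally:
print(adjust_local_quota(100.0, 60.0, 20.0))  # 24.0
```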

The usage of WFQ as the scheduling algorithm is recommended on links where capacity is high enough, as the required bandwidth for each traffic class can be provisioned by appropriately setting the weights. However, SP is recommended for guaranteeing a low queuing delay for real-time traffic on links where capacity is relatively low, with WFQ used to share the remaining bandwidth among the rest of the classes.

This approach is recommended as it gives more flexibility to share resources between central and local CAC, and allows more reactivity if the proportions of central and local traffic evolve significantly. This mechanism could be used to adjust the threshold for local resources over the course of a day, allowing, for instance, attractive prices on VoD when TV bandwidth is not heavily used, or even close-to-real-time adjustments when locally allocated resources are exhausted while global resources are still available, so that resource allocation is kept at an optimum.

More complex and dynamic scenarios can be envisioned in which local CAC entities proactively request additional resources from the central CAC system. However, the added complexity does not justify such approaches in the medium term.

Central CAC implementation

Central CAC is usually considered the simplest approach, and is hence in principle recommended for applications requiring explicit CAC (e.g. VoD) where neither scalability nor setup delay is a special concern.

A central CAC entity has a single but complete view of the availability of all network resources in a network area. This is usually done by listening to link-state routing protocols running on the network, in conjunction with a database containing the installed network elements and the installed bandwidth per interface. The admission control entity receives all the requests for starting a service and, since it has a current view of the status of the network, it decides whether it is possible or not to accept the request. Once this decision has been made, resource reservation can be done.

However, for the admission control of multicast flows, a centralized implementation of the CAC function is not appropriate. A centralized admission control would have to decide if a new requested channel could be delivered or not each time it receives a request to join a multicast channel. There are three problems with this:

  • The first is the sheer volume of requests which arise when people are channel zapping.
  • The second problem is that the multicast protocol (i.e. IGMP) has no mechanism (i.e. no parameter in the message) to convey the required bandwidth, which means that there has to be a local association between bandwidth and channel.
  • Finally, in most cases the stream join will occur (automatically) closer to the end-user than the location of the admission control system, so there is no mechanism for the CAC system to actually prevent the join.

Two of these problems can be solved by a distributed CAC system, but there will still be a need to maintain a mapping of multicast group addresses to channel bandwidth.
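A sketch of such a distributed alternative: the node keeps a locally provisioned map from multicast group address to channel bandwidth (since IGMP carries no bandwidth parameter) and decides joins against its own link budget. Addresses and rates are illustrative assumptions.

```python
# Locally provisioned association between multicast group and channel rate.
CHANNEL_BW = {"239.1.1.1": 4.0, "239.1.1.2": 12.0}   # Mbit/s (e.g. SD, HD)

class MulticastCAC:
    """Local CAC for IGMP joins on a single access link."""

    def __init__(self, link_capacity):
        self.free = link_capacity
        self.joined = set()

    def on_igmp_join(self, group):
        if group in self.joined:            # channel already flowing: free
            return True
        bw = CHANNEL_BW.get(group)
        if bw is None or bw > self.free:
            return False                    # unknown channel or no resources
        self.free -= bw
        self.joined.add(group)
        return True

    def on_igmp_leave(self, group):
        if group in self.joined:
            self.joined.discard(group)
            self.free += CHANNEL_BW[group]

mcac = MulticastCAC(15.0)
print(mcac.on_igmp_join("239.1.1.1"))  # True
print(mcac.on_igmp_join("239.1.1.2"))  # False: only 11 Mbit/s left
```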

Central CAC based on Impact Matrices

Within the MUSE project, a centralised CAC mechanism based on impact matrices has been proposed and analysed.

An impact matrix shows the different types of interference (or impact) that a given network traffic (a pre-computed end-to-end path) causes on the rest, in terms of available bandwidth. Three types of impact are considered:

  • Impact on itself: the interference of the considered traffic on itself.
  • Direct impact: the impact on those traffics whose links are all included in the path of the considered traffic.
  • Indirect impact: the impact on those traffics that share some, but not all, of the links in the path of the considered traffic.

The size of an impact matrix is (number of network nodes) × (number of network nodes), and each matrix position represents a network traffic identifier.

There are several steps for computing the impact matrices:

  • We list all the link identifiers that each traffic passes through to reach its destination.
  • We list the traffic identifiers that traverse each link in the network.
  • We fill in the impact matrix for each traffic in the following way:
    ◦ For the impact on itself, we put a ‘P’ in the matrix position representing the considered traffic.
    ◦ We put a ‘D’ in the matrix positions representing the traffics (if any) that suffer a direct impact from the considered traffic.
    ◦ We put an ‘I’ in the matrix positions representing the traffics (if any) that are indirectly impacted by the considered traffic.
    ◦ Finally, the rest of the matrix positions are filled with zeros, representing that there is no interference on those traffics.

How can these impact matrices be used for admission control?

Firstly, it is necessary to introduce the concept of the Acceptance Matrix: it contains the minimum available bandwidth along the links that compose the path between every pair of network nodes, that is, the bottleneck of each path.

Once this term has been defined, the proposed admission control is carried out according to the available bandwidth that the acceptance matrix indicates for each of the traffics.

The impact matrices, once built in the initialisation phase, tell us how to modify/update each acceptance matrix position, that is, the available bandwidth for supporting future traffic requests.

So, when the central CAC entity receives a traffic request from a network node, it carries out several steps:

  • First of all, it checks whether the available capacity indicated in the acceptance matrix for the requested traffic is enough to serve it. If not, the request is rejected; otherwise, the acceptance matrix is updated in the appropriate way.
  • The acceptance matrix is modified by looking at the impact matrix of the requested traffic:
    ◦ For the matrix position containing a ‘P’, that is, the requested traffic itself, the corresponding position in the acceptance matrix is updated by subtracting the demanded bandwidth.
    ◦ The same applies to the matrix positions containing a ‘D’: the demanded capacity is subtracted from the corresponding positions of the acceptance matrix.
    ◦ For each matrix position containing an ‘I’:
      ▪ Access the impact matrix of the corresponding traffic identifier.
      ▪ Within this matrix, look at the positions containing a ‘D’ and select the minimum available capacity among them.
      ▪ Assign this minimum value to the acceptance matrix position of the traffic marked with an ‘I’ in the impact matrix of the requested traffic.

The acceptance procedure for an incoming traffic request has been described above. The release process for an established connection is equivalent:

  • The acceptance matrix is updated according to the impact matrix of the released traffic:
    ◦ For the matrix position containing a ‘P’, that is, the released traffic itself, the corresponding position in the acceptance matrix is updated by adding the released bandwidth.
    ◦ The same applies to the matrix positions containing a ‘D’: the released capacity is added to the corresponding positions of the acceptance matrix.
    ◦ For each matrix position containing an ‘I’, the steps are similar to those for accepting a traffic request:
      ▪ Access the impact matrix of the corresponding traffic identifier.
      ▪ Within this matrix, look at the positions containing a ‘D’ and select the minimum available capacity among them.
      ▪ Assign this minimum value to the acceptance matrix position of the traffic marked with an ‘I’ in the impact matrix of the released traffic.
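One concrete reading of this bookkeeping is sketched below: residual capacity is tracked per link, the acceptance value of each traffic is the bottleneck along its pre-computed path, and every accept or release implicitly refreshes the acceptance values of the traffics that share links (the ‘D’ and ‘I’ entries of the impact matrices). The topology and rates are illustrative assumptions.

```python
LINK_CAP = {"a": 10.0, "b": 10.0, "c": 10.0}   # installed bandwidth per link
PATHS = {                     # traffic id -> pre-computed end-to-end path
    "t1": ["a", "b"],
    "t2": ["b", "c"],
    "t3": ["a", "b", "c"],
}

free = dict(LINK_CAP)         # residual bandwidth per link

def acceptance(t):
    """Acceptance-matrix entry for t: the bottleneck along its path."""
    return min(free[link] for link in PATHS[t])

def admit(t, bw):
    if acceptance(t) < bw:
        return False          # acceptance matrix says: reject
    for link in PATHS[t]:     # 'P'/'D'/'I' updates follow from the links
        free[link] -= bw
    return True

def release(t, bw):
    for link in PATHS[t]:
        free[link] += bw

admit("t1", 6.0)
print(acceptance("t3"))       # 4.0: t3 is impacted via shared links a and b
print(admit("t3", 5.0))       # False
```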

In further analysis, the four traffic classes proposed by MUSE will be considered.

Local CAC implementation

Local CAC significantly reduces complexity, signalling exchanges and the time required to provide a CAC answer.

By using traffic classes, it is possible to decouple the checking of available resources into two parts: a CAC per traffic class, which consists of verifying that the traffic class involved in the session request has available resources, and a policing of traffic classes.

Note that local CAC can be performed independently in different nodes (i.e. RGW, AN, EN), so that a single reject decision is enough for denying the service request. In this way, local CAC can be deployed at those parts of the network where congestion is more likely to appear.

Local CAC can easily handle multicast traffic (e.g. IPTV), as it is possible to be locally aware of IGMP messages.

This decoupling approach allows a positive local CAC decision made by a single node to be enough for guaranteeing that there will be available resources in the corresponding traffic class, provided that such a traffic class is dimensioned to be used at the maximum rate by every user. This is not a special concern for multicast traffic, as this does not depend on the number of users but on the number of channels that are being simultaneously watched. But this approach could also be followed for unicast traffic. By taking into account the statistical nature of the traffic generated by the services of a given traffic class, the amount of resources needed is reduced and scalability is improved.

In addition, under congestion conditions, the network operator may decide to unilaterally vary the policing of the traffic classes, communicating to the local CAC entities the new limits for each traffic class so that the new conditions for using the traffic classes can be taken into account. Besides, traffic policing could be dynamically varied upon request from the network users provided that there are enough available resources in the network.

Note that local CAC can be applied at the Residential Gateways, independently of other CAC systems running in the network. In this way, services that are not subject to CAC in the network could be delivered with QoS guarantees without the network being aware of such service sessions. However, the Residential Gateway should be aware of those service sessions, either by snooping the application signalling or by using service signatures provided by the service providers. The latter approach is preferable, as it involves the cooperation of the service provider, which can also provide the parameters needed for evaluating the CAC decision, i.e. the effective bandwidth, as well as the signalling needed to inform the service provider and/or the user about the denial of service when the Residential Gateway CAC decides to block the session.

CAC using Local Quotas

Within the MUSE project, an admission control (CAC) technique based on local quotas for accepting incoming traffic requests has been proposed. Although a central system computes these local quotas, the operation of the proposed admission control technique follows a local scheme, since each node in the network with admission control functions manages its assigned quotas locally, without intervention of the central system.

We start by defining what a local quota is: it is the percentage of the total available bandwidth in a node that is assigned to a given traffic in the network. This amount of capacity can be managed locally by the node in order to accept or reject an incoming traffic request, without asking the central system for permission to establish it. This is possible because the quota assigned to each traffic takes into account the minimum available bandwidth for the rest of the traffics. So, if a node has enough capacity to serve a traffic request, it can accept it while guaranteeing that the connection will be established successfully across all the pre-determined links, that is, there will be enough available capacity along the pre-computed path for this traffic.

Local quotas computation

The local quota computation carried out by the central system is as follows:

  • Pre-compute the path (route) for each possible traffic in the network.
  • List all the link identifiers that each traffic passes through to reach its destination.
  • List the traffic identifiers that traverse each link in the network.
  • For each link in the network, divide its total capacity uniformly among the traffics that traverse it.
  • As a result of the previous step, every traffic has been assigned a certain bandwidth on each of its links; the local quota of a traffic is the minimum of these assignments.
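The steps above can be sketched directly; the topology and capacities are illustrative assumptions.

```python
from collections import defaultdict

def local_quotas(link_capacity, paths):
    """Split every link's capacity uniformly among the traffics that
    traverse it, then take each traffic's minimum share as its quota."""
    users = defaultdict(list)             # link id -> traffics using it
    for t, path in paths.items():
        for link in path:
            users[link].append(t)
    share = {link: link_capacity[link] / len(ts) for link, ts in users.items()}
    return {t: min(share[link] for link in path) for t, path in paths.items()}

# Two traffics contend for link "b" (8 Mbit/s), so each gets a 4 Mbit/s quota:
print(local_quotas({"a": 10.0, "b": 8.0, "c": 10.0},
                   {"t1": ["a", "b"], "t2": ["b", "c"]}))
# {'t1': 4.0, 't2': 4.0}
```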

Again, in further analysis, the four traffic classes proposed by MUSE will be taken into account.

Policy enforcement

A policy is the combination of rules and services where rules define the criteria for resource access and usage.

According to the Common Open Policy Service (COPS) terminology, three functional elements are defined for using policies in a network:

  • Policy Repository, which contains the policies that have to be applied in the network.
  • Policy Decision Point (PDP), which evaluates the policies upon a given request and notifies the decision to the corresponding Policy Enforcement Points. A CAC decision is an example of a policy decision.
  • Policy Enforcement Point (PEP), the place in the network where the policy decisions are actually enforced (e.g. access control and traffic policing).

Traffic policing consists of verifying that a given traffic class does not exceed a certain profile. Enforcement of the allowed QoS policy (bandwidth, maximum packet size, etc.) may be required so that misbehaving traffic does not impact the QoS of the other users/classes. Protection can only be provided if such enforcement is done on a per-user and per-traffic-class basis. Note that as a result of the policy enforcement, out-of-profile traffic could be dropped, remarked or delayed until it complies with the profile (i.e. traffic shaping).
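Policing against such a profile is typically realised with a token bucket; the sketch below simply drops out-of-profile packets (remarking or shaping would be the alternatives mentioned above). Rates and sizes are illustrative assumptions.

```python
class TokenBucketPolicer:
    """Token-bucket policer: tokens accumulate at the contracted rate up
    to a burst allowance; a packet passes only if enough tokens remain."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0   # fill rate in bytes per second
        self.burst = burst_bytes     # bucket depth in bytes
        self.tokens = burst_bytes
        self.last = 0.0              # time of the previous packet

    def allow(self, size_bytes, now):
        elapsed = now - self.last
        self.tokens = min(self.burst, self.tokens + elapsed * self.rate)
        self.last = now
        if size_bytes <= self.tokens:
            self.tokens -= size_bytes
            return True
        return False                 # out of profile: drop (or remark/shape)

p = TokenBucketPolicer(rate_bps=8000, burst_bytes=1500)  # 1000 bytes/s fill
print(p.allow(1500, 0.0))  # True:  initial burst allowance
print(p.allow(1500, 0.5))  # False: only 500 bytes refilled
print(p.allow(1000, 1.0))  # True:  500 + 500 bytes available
```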

For the upstream traffic, it is recommended that the policy enforcement be done at the Access Nodes (AN). This will minimise the chances of misbehaving users altering the QoS of other users in the aggregation network. Otherwise, excessive traffic marked as high priority by some users may cause starvation of lower priority traffic of other users.

However, for the downstream traffic, it is recommended that policy enforcement be done at the Edge Nodes (EN) on aggregate of IP flows (i.e. traffic classes) in order to lower the processing power required at these nodes. This will not prevent a single misbehaving user from impacting the aggregate, but will limit any damage to that aggregate.

In addition to the aggregate enforcement at the Edge Nodes for downstream traffic, per-user shaping of downstream traffic at the Access Node is recommended to prevent congestion in the first mile. Shaping is commonly done at the BRAS in current architectures; however, this is no longer viable in a multi-edge architecture, where the only point at which all the traffic for a given line comes together may be the Access Node itself.

Coordination between Home Network – Access Network for QoS

The link between the Residential Gateway and the Access Node (AN) is usually the main bottleneck, especially since its capacity cannot easily be increased. To resolve the contention for bandwidth on this link, the usage of prioritization mechanisms for handling traffic classes is recommended. Additionally, traffic policing can be used to avoid starvation of lower-priority traffic and to enforce the usage limitations per traffic class.

Classification of upstream traffic into the traffic classes, and its prioritization onto the access link will be realized by the RGW according to user preferences (note that these preferences can be delegated to the network operator or to any service provider, and that traffic could have been previously marked by a terminal). This classification can be done by identifying application signatures, by using predefined ports at the RGW, by evaluating the Ethertype, etc. The policing of upstream traffic will be done by the Access Node according to the rules defined above.

Prioritization of downstream traffic per traffic class is performed by the network according to the rules described in the user contracts (i.e. residential and corporate users, service providers, etc.). This is currently done at the Edge Nodes (EN). However, when there are multiple Edge Nodes in the network, there is no single point of control of downstream traffic other than the Access Node.

Conclusions

MUSE advocates a pragmatic and simple way to provide services with QoS which is based mainly on traffic class differentiation, selective CAC and appropriate network dimensioning.

MUSE recommends a user-centric approach where classification of traffic into traffic classes is a responsibility of the user even if it is expected to be normally delegated to the providers. However, the network will ascertain that the usage of each traffic class does not exceed what has been planned, agreed or contracted by means of traffic policing. For the upstream traffic, it is recommended that the policy enforcement be done at the Access Nodes per Traffic Class and per Access Line. However, for the downstream traffic, it is recommended that the policy enforcement be done at the Edge Nodes per Traffic Class. It is also recommended to have per user shaping at the Access Node of downstream traffic to prevent congestion in the first mile.

MUSE recommends the use of central CAC for the small subset of services that actually need it and only in those parts of the network where the network operator has identified a potential dimensioning problem. Local CAC at the Access Nodes or no CAC can be used for the rest of the traffic. Appropriate network dimensioning will help to minimize the risk of congestion or blocking problems.

Central CAC is recommended for applications requiring explicit CAC where neither scalability nor setup delays are special concerns (e.g. VoD). Local CAC is recommended to handle multicast traffic (e.g. IPTV) because of its better scalability and lower reaction time.

MUSE recommends a “provisioning” scenario where the central CAC, which has a view of all network resource usage, is able to allocate to a local CAC a certain amount of resources that will then be managed locally. Within this approach, the central CAC regularly monitors the usage of local resources at the Access Nodes and readjusts the resources allocated to the local CAC entities if necessary.

It is recommended that all network nodes support at least the four traffic classes proposed by MUSE (real-time, streaming, transactional and best-effort). At every link, outgoing traffic will be placed into a different queue according to what traffic class it belongs. These queues will be served by Strict Priority (SP) and/or WFQ scheduling mechanisms. SP is recommended to be used for guaranteeing a low delay for real time traffic in those links where capacity is relatively low, whereas WFQ can be used for the rest of classes. When capacity is high enough, WFQ can help to provision the required bandwidth to each traffic class.


