IEEE/ACM ToN

TOC Alert for Publication #90
Updated: 13 hours 16 mins ago

Table of Contents [Front Cover]

Tue, 08/01/2017 - 00:00
Presents the front cover for this issue of the publication.
Categories: research

IEEE/ACM Transactions on Networking publication information

Tue, 08/01/2017 - 00:00
Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.
Categories: research

F²Tree: Rapid Failure Recovery for Routing in Production Data Center Networks

Tue, 08/01/2017 - 00:00
Failures are not uncommon in production data center networks (DCNs) nowadays. It takes a long time for DCN routing to recover from a failure and find new forwarding paths, significantly impacting real-time and interactive applications at the upper layer. In this paper, we present a fault-tolerant DCN solution, called F2Tree, which is readily deployable in existing DCNs. F2Tree can significantly improve failure recovery time through only a small amount of link rewiring and switch configuration changes. Through testbed and emulation experiments, we show that F2Tree can greatly reduce the routing recovery time after failures (by 78%) and improve the performance of upper-layer applications when routing failures happen (96% fewer deadline-missing requests).
Categories: research

PIAS: Practical Information-Agnostic Flow Scheduling for Commodity Data Centers

Tue, 08/01/2017 - 00:00
Many existing data center network (DCN) flow scheduling schemes that minimize flow completion times (FCT) assume prior knowledge of flows and custom switch functions, making them superior in performance but hard to implement in practice. By contrast, we seek to minimize FCT with no prior knowledge and with existing commodity switch hardware. To this end, we present PIAS, a DCN flow scheduling mechanism that aims to minimize FCT by mimicking shortest job first (SJF) on the premise that flow size is not known a priori. At its heart, PIAS leverages multiple priority queues available in existing commodity switches to implement a multi-level feedback queue, in which a PIAS flow is gradually demoted from higher-priority queues to lower-priority queues based on the number of bytes it has sent. As a result, short flows are likely to finish in the first few high-priority queues and thus be prioritized over long flows in general, which enables PIAS to emulate SJF without knowing flow sizes beforehand. We have implemented a PIAS prototype and evaluated PIAS through both testbed experiments and ns-2 simulations. We show that PIAS is readily deployable with commodity switches and backward compatible with legacy TCP/IP stacks. Our evaluation results show that PIAS significantly outperforms existing information-agnostic schemes; for example, it reduces FCT by up to 50% compared with DCTCP and L2DCT, and it has only a 1.1% performance gap to an ideal information-aware scheme, pFabric, for short flows under a production DCN workload.
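
A minimal sketch of the multi-level feedback queue tagging described above, assuming hypothetical demotion thresholds (PIAS derives its thresholds from traffic statistics, which this sketch does not model):

```python
# Sketch of PIAS-style priority tagging: a flow starts in the highest-priority
# queue and is demoted as its byte count crosses successive thresholds.
# Threshold values here are illustrative placeholders, not the paper's settings.

THRESHOLDS = [100 * 1024, 1 * 1024 * 1024, 10 * 1024 * 1024]  # bytes (hypothetical)

def priority_for(bytes_sent: int) -> int:
    """Return the priority queue index (0 = highest) for a flow."""
    for level, threshold in enumerate(THRESHOLDS):
        if bytes_sent < threshold:
            return level
    return len(THRESHOLDS)  # lowest priority for very large flows

if __name__ == "__main__":
    for sent in (10 * 1024, 500 * 1024, 5 * 1024 * 1024, 50 * 1024 * 1024):
        print(f"{sent:>10} bytes sent -> queue {priority_for(sent)}")
```
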
Categories: research

Adaptive CSMA Under the SINR Model: Efficient Approximation Algorithms for Throughput and Utility Maximization

Tue, 08/01/2017 - 00:00
We consider a carrier sense multiple access (CSMA)-based scheduling algorithm for a single-hop wireless network under a realistic signal-to-interference-plus-noise ratio (SINR) model for the interference. We propose two local optimization-based approximation algorithms to efficiently estimate certain attempt rate parameters of CSMA called fugacities. It is known that adaptive CSMA can achieve throughput optimality by sampling feasible schedules from a Gibbs distribution with appropriate fugacities. Unfortunately, obtaining these optimal fugacities is an NP-hard problem. Furthermore, the existing adaptive CSMA algorithms use a stochastic gradient descent-based method, which usually entails an impractically slow (exponential in the size of the network) convergence to the optimal fugacities. To address this issue, we first propose an algorithm to estimate the fugacities that can support a given set of desired service rates. The convergence rate and the complexity of this algorithm are independent of the network size and depend only on the neighborhood size of a link. Furthermore, we show that the proposed algorithm corresponds exactly to performing the well-known Bethe approximation to the underlying Gibbs distribution. Then, we propose another local algorithm to estimate the optimal fugacities under a utility maximization framework, and characterize its accuracy. Numerical results indicate that the proposed methods have a good degree of accuracy, achieve extremely fast convergence to near-optimal fugacities, and often outperform the convergence rate of stochastic gradient descent by a few orders of magnitude.
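
As a point of reference for the Gibbs-distribution view above, the following sketch enumerates the feasible schedules of a tiny conflict graph and computes each link's service rate as a marginal of the Gibbs distribution induced by given fugacities. It uses a conflict-graph feasibility check rather than the paper's SINR model, and the fugacity values are arbitrary.

```python
# Brute-force illustration of CSMA's Gibbs distribution: the stationary
# probability of a feasible schedule s is proportional to the product of the
# fugacities of the links in s; a link's service rate is its marginal.
from itertools import combinations
from math import prod

links = [0, 1, 2]
conflicts = {(0, 1)}                 # links 0 and 1 interfere (cannot be scheduled together)
fugacity = {0: 2.0, 1: 1.5, 2: 0.5}  # arbitrary illustrative values

def feasible(schedule):
    return all((a, b) not in conflicts and (b, a) not in conflicts
               for a, b in combinations(schedule, 2))

schedules = [s for r in range(len(links) + 1)
             for s in combinations(links, r) if feasible(s)]
weights = {s: prod(fugacity[lk] for lk in s) for s in schedules}  # empty product = 1
Z = sum(weights.values())            # partition function

for link in links:
    rate = sum(w for s, w in weights.items() if link in s) / Z
    print(f"link {link}: service rate = {rate:.3f}")
```
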
Categories: research

On Oblivious Neighbor Discovery in Distributed Wireless Networks With Directional Antennas: Theoretical Foundation and Algorithm Design

Tue, 08/01/2017 - 00:00
Neighbor discovery, one of the most fundamental bootstrapping networking primitives, is particularly challenging in decentralized wireless networks where devices have directional antennas. In this paper, we study the following fundamental problem, which we term oblivious neighbor discovery: how can neighbor nodes with heterogeneous antenna configurations discover each other within a bounded delay in a fully decentralized manner, without any prior coordination or synchronization? We establish a theoretical framework for oblivious neighbor discovery and derive the performance bound of any neighbor discovery algorithm achieving oblivious discovery. Guided by the theoretical results, we then devise an oblivious neighbor discovery algorithm that achieves guaranteed oblivious discovery with order-minimal worst-case discovery delay in asynchronous and heterogeneous environments. We further demonstrate how our algorithm can be configured to achieve a desired tradeoff between average and worst-case performance.
Categories: research

FitLoc: Fine-Grained and Low-Cost Device-Free Localization for Multiple Targets Over Various Areas

Tue, 08/01/2017 - 00:00
Many emerging applications have driven the fast development of device-free localization (DfL) techniques, which do not require the target to carry any wireless devices. Most current DfL approaches have two main drawbacks in practical applications. First, because the pre-calibrated received signal strength (RSS) in each location (i.e., the radio-map) of a specific area cannot be directly applied to new areas, the manual calibration for different areas leads to a high human effort cost. Second, a large number of RSS measurements are needed to accurately localize the targets, which causes a high communication cost, and the variety of areas further exacerbates this problem. This paper proposes FitLoc, a fine-grained and low-cost DfL approach that can localize multiple targets over various areas, especially in outdoor environments and similarly furnished indoor environments. FitLoc unifies the radio-map over various areas through a rigorously designed transfer scheme, thus greatly reducing the human effort cost. Furthermore, benefiting from compressive sensing theory, FitLoc collects only a few RSS measurements and performs fine-grained localization, thus reducing the communication cost. Theoretical analyses validate the effectiveness of the problem formulation, and a bound on the localization error is provided. Extensive experimental results illustrate the effectiveness and robustness of FitLoc.
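
To make the compressive-sensing step concrete, here is a generic sparse-recovery sketch using iterative soft-thresholding (ISTA) on a hypothetical grid of candidate locations; the measurement matrix, sparsity, and parameters are illustrative and not FitLoc's actual estimator.

```python
# Generic compressive-sensing recovery by iterative soft-thresholding (ISTA):
# a sparse location-indicator vector x (targets on a grid) is recovered from a
# few linear RSS measurements y = A @ x.  A, the sparsity level, and the step
# size are illustrative; FitLoc's actual measurement model is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
n_grid, n_meas = 100, 25             # 100 candidate locations, 25 measurements
A = rng.standard_normal((n_meas, n_grid)) / np.sqrt(n_meas)

x_true = np.zeros(n_grid)
x_true[[12, 47, 83]] = 1.0           # three targets occupy three grid cells
y = A @ x_true

def ista(A, y, lam=0.05, steps=500):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(steps):
        x = x + step * A.T @ (y - A @ x)     # gradient step on ||y - Ax||^2 / 2
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold
    return x

x_hat = ista(A, y)
print("estimated target cells:", np.nonzero(x_hat > 0.5)[0])
```
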
Categories: research

An Approach for Service Function Chain Routing and Virtual Function Network Instance Migration in Network Function Virtualization Architectures

Tue, 08/01/2017 - 00:00
Network function virtualization foresees the virtualization of service functions and their execution on virtual machines. Any service is represented by a service function chain (SFC), which is a set of virtual network functions (VNFs) to be executed in a given order. Running VNFs requires the instantiation of VNF instances (VNFIs), which in general are software modules executed on virtual machines. The virtualization challenges include: i) where to instantiate VNFIs; ii) how many resources to allocate to each VNFI; iii) how to route SFC requests to the appropriate VNFIs in the right sequence; and iv) when and how to migrate VNFIs in response to changes in SFC request intensity and location. We develop an approach that uses three algorithms back-to-back, resulting in VNFI placement, SFC routing, and VNFI migration in response to changing workload. The objective is first to minimize the rejection of SFC bandwidth and second to consolidate VNFIs in as few servers as possible so as to reduce the energy consumed. The proposed consolidation algorithm is based on a migration policy for VNFIs that considers the revenue loss due to the QoS degradation a user suffers from information loss during migrations. The objective is to minimize the total cost given by the energy consumption and the revenue loss due to QoS degradation. We evaluate our suite of algorithms on a test network and show the performance gains that can be achieved over alternative naive algorithms.
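
As a toy illustration of the consolidation objective, the sketch below packs VNFIs onto as few servers as possible with first-fit decreasing, assuming each VNFI is described only by a CPU demand; the paper's migration policy and QoS/revenue-loss terms are not modeled.

```python
# First-fit-decreasing consolidation of VNF instances onto as few servers as
# possible.  Demands and capacity are illustrative; the paper's algorithm also
# accounts for migration-induced QoS/revenue loss, which is omitted here.
def consolidate(vnfi_demands, server_capacity):
    servers = []  # each entry: [remaining_capacity, [vnfi indices]]
    order = sorted(range(len(vnfi_demands)), key=lambda i: -vnfi_demands[i])
    for i in order:
        demand = vnfi_demands[i]
        for srv in servers:
            if srv[0] >= demand:          # first server with enough room
                srv[0] -= demand
                srv[1].append(i)
                break
        else:
            servers.append([server_capacity - demand, [i]])  # open a new server
    return servers

if __name__ == "__main__":
    placement = consolidate([0.5, 0.2, 0.7, 0.4, 0.3, 0.1], server_capacity=1.0)
    for k, (free, vnfis) in enumerate(placement):
        print(f"server {k}: VNFIs {vnfis}, spare capacity {free:.1f}")
```
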
Categories: research

Multicast in Multihop CRNs Under Uncertain Spectrum Availability: A Network Coding Approach

Tue, 08/01/2017 - 00:00
The benefits of network coding on multicast in traditional multihop wireless networks have already been extensively demonstrated in previous works. However, most existing approaches cannot be directly applied to multihop cognitive radio networks (CRNs), given the unpredictable primary user occupancy on licensed channels. Specifically, due to the unpredictable occupancy, the channel's available bandwidth is time-varying and uncertain. Accordingly, the capacity of the link using that channel is also uncertain, which can significantly affect the network coding subgraph optimization and may result in severe throughput loss if not properly handled. In this paper, we study the problem of network coding-based multicast in multihop CRNs while considering the uncertain spectrum availability. To capture the uncertainty of spectrum availability, we first formulate our problem as a chance-constrained program. Given the computational intractability of the above-mentioned program, we then transform the original problem into a tractable convex optimization problem, through appropriate Bernstein approximation with relaxation on link scheduling. We further leverage Lagrangian relaxation-based optimization techniques to propose an efficient distributed algorithm for the original problem. Extensive simulation results show that the proposed algorithm achieves higher multicast rates, compared with a state-of-the-art non-network coding algorithm in multihop CRNs, and a conservative robust network coding algorithm that treats the link capacity as a constant value in the optimization.
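
To illustrate what a tractable surrogate of a chance constraint looks like, the sketch below computes a distribution-free safe rate for a single uncertain link via Cantelli's (one-sided Chebyshev) inequality; this is a simpler stand-in for the Bernstein approximation used in the paper, and the numbers are illustrative.

```python
# Distribution-free (Cantelli / one-sided Chebyshev) safe rate for an uncertain
# link: pick the largest rate r such that P(capacity < r) <= eps, given only
# the mean and standard deviation of the available bandwidth.
from math import sqrt

def safe_rate(mean_capacity, std_capacity, eps):
    """Largest r such that Cantelli's inequality guarantees P(C < r) <= eps."""
    return max(0.0, mean_capacity - std_capacity * sqrt((1.0 - eps) / eps))

if __name__ == "__main__":
    # A licensed channel whose available bandwidth averages 10 Mb/s with
    # std 2 Mb/s, and an outage tolerance of 5% (illustrative values):
    print(f"safe rate: {safe_rate(10.0, 2.0, 0.05):.2f} Mb/s")
```
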
Categories: research

CubicRing: Exploiting Network Proximity for Distributed In-Memory Key-Value Store

Tue, 08/01/2017 - 00:00
In-memory storage has the benefits of low I/O latency and high I/O throughput. Fast failure recovery is crucial for large-scale in-memory storage systems, bringing network-related challenges, including false detection due to transient network problems, traffic congestion during the recovery, and top-of-rack switch failures. In order to achieve fast failure recovery, in this paper, we present CubicRing, a distributed structure for cube-based networks, which exploits network proximity to restrict failure detection and recovery within the smallest possible one-hop range. We leverage the CubicRing structure to address the aforementioned challenges and design a network-aware in-memory key-value store called MemCube. In a 64-node 10GbE testbed, MemCube recovers 48 GB of data for a single server failure in 3.1 s. The 14 recovery servers achieve 123.9 Gb/s aggregate recovery throughput, which is 88.5% of the ideal aggregate bandwidth and several times faster than RAMCloud with the same configurations.
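
The recovery figures quoted above are internally consistent, as the quick check below shows: 48 GB in 3.1 s is roughly 124 Gb/s, against an ideal aggregate of 14 x 10 Gb/s = 140 Gb/s.

```python
# Sanity check of the recovery figures reported in the abstract.
data_gb = 48            # GB recovered after a single server failure
recovery_s = 3.1        # seconds
servers = 14            # recovery servers, each with a 10 Gb/s NIC

aggregate_gbps = data_gb * 8 / recovery_s        # ~123.9 Gb/s
ideal_gbps = servers * 10                        # 140 Gb/s
print(f"aggregate: {aggregate_gbps:.1f} Gb/s "
      f"({100 * aggregate_gbps / ideal_gbps:.1f}% of ideal {ideal_gbps} Gb/s)")
```
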
Categories: research

On Hardware Programmable Network Dynamics With a Chemistry-Inspired Abstraction

Tue, 08/01/2017 - 00:00
Chemical algorithms are statistical control algorithms described and represented as chemical reaction networks. They are analytically tractable, they reinforce a deterministic state-to-dynamics relation, they have configurable stability properties, and they are directly implemented in state space using a high-level visual representation. These properties make them attractive solutions for traffic shaping and generally the control of dynamics in computer networks. In this paper, we present a framework for deploying chemical algorithms on field programmable gate arrays. Besides substantial computational acceleration, we introduce a low-overhead approach for hardware-level programmability and re-configurability of these algorithms at runtime, and without service interruption. We believe that this is a promising approach for expanding the control-plane programmability of software defined networks (SDN), to enable programmable network dynamics. To this end, the simple high-level abstractions of chemical algorithms offer an ideal northbound interface to the hardware, aligned with other programming primitives of SDN (e.g., flow rules).
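
For readers unfamiliar with the chemical abstraction, the sketch below integrates the law-of-mass-action dynamics of a minimal reaction network (constant inflow, first-order outflow), whose fixed point behaves like the steady level of a leaky-bucket shaper. It is a generic illustration of chemical-reaction-network dynamics, not the paper's FPGA framework, and the rate constants are arbitrary.

```python
# Law-of-mass-action dynamics of a tiny chemical reaction network:
#   (inflow)  -> X   at constant rate k_in        (e.g., packets arriving)
#   X         ->     at rate k_out * [X]          (e.g., packets served)
# The fixed point [X]* = k_in / k_out acts like a shaper's steady queue level.
# Euler integration; constants are arbitrary illustrative values.
k_in, k_out = 50.0, 5.0     # reactions per second
x, dt = 0.0, 0.001          # initial concentration, time step

for step in range(5000):    # simulate 5 seconds
    dx = k_in - k_out * x   # mass-action rate law
    x += dt * dx
    if step % 1000 == 0:
        print(f"t={step * dt:.1f}s  [X]={x:.2f}")
print(f"analytic fixed point: {k_in / k_out:.2f}")
```
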
Categories: research

Economics of Quality Sponsored Data in Non-Neutral Networks

Tue, 08/01/2017 - 00:00
The growing demand for data has driven service providers (SPs) to provide differential treatment of traffic to generate additional revenue streams from content providers (CPs). While SPs currently provide only best-effort services to their CPs, it is plausible to envision a model in the near future in which CPs are willing to sponsor quality of service for their content in exchange for sharing a portion of their profit with SPs. Such quality sponsoring becomes especially valuable when the available resources are scarce, such as in wireless networks, and can be accommodated in a non-neutral network. In this paper, we consider the problem of quality-sponsored data (QSD) in a non-neutral network. In our model, SPs allow CPs to sponsor a portion of their resources and price this appropriately to maximize their payoff. The payoff of the SP depends on the monetary revenue and the satisfaction of end-users with both the non-sponsored and the sponsored content, while CPs generate revenue through advertisement. Note that in this setting, end-users still pay for the data they use. We analyze the market dynamics and equilibria in two different frameworks, i.e., sequential and bargaining game frameworks, and provide strategies for: 1) SPs, to determine whether and how to price resources; and 2) CPs, to determine whether and what quality to sponsor. The frameworks characterize different sets of equilibrium strategies and market outcomes depending on the parameters of the market.
Categories: research

Situation-Aware Dynamic Service Coordination in an IoT Environment

Tue, 08/01/2017 - 00:00
The Internet of Things (IoT) infrastructure, with its numerous and diverse physical devices, is growing rapidly, which calls for a dynamic service coordination approach that can integrate these heterogeneous physical devices into a context-aware IoT infrastructure. This paper proposes a situation-aware dynamic IoT service coordination approach. First, focusing on the definition of formal situation event patterns with event selection and consumption strategies, an automaton-based situational event detection algorithm is proposed. Second, enhanced event-condition-action (ECA) rules are used to coordinate the IoT services effectively, and a collaboration process decomposition algorithm and rule mismatch detection algorithms are also proposed. Third, typical scenarios of IoT service coordination for a smart surgery process are illustrated, and measurements and analysis of the platform's performance are reported. Finally, conclusions and future work are given.
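
A minimal sketch of the event-condition-action pattern referred to above: a rule fires its action when an incoming event matches its event type and its condition over the event payload holds. The event names, conditions, and actions are invented for illustration and are not taken from the paper.

```python
# Minimal event-condition-action (ECA) rule engine: each rule pairs an event
# type with a condition over the event payload and an action callback.
# Event names and rules here are hypothetical illustrations.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    event_type: str
    condition: Callable[[Dict], bool]
    action: Callable[[Dict], None]

class EcaEngine:
    def __init__(self) -> None:
        self.rules: List[Rule] = []

    def register(self, rule: Rule) -> None:
        self.rules.append(rule)

    def publish(self, event: Dict) -> None:
        for rule in self.rules:
            if event["type"] == rule.event_type and rule.condition(event):
                rule.action(event)

if __name__ == "__main__":
    engine = EcaEngine()
    engine.register(Rule(
        event_type="heart_rate",
        condition=lambda e: e["bpm"] > 120,
        action=lambda e: print(f"ALERT: tachycardia detected ({e['bpm']} bpm)"),
    ))
    engine.publish({"type": "heart_rate", "bpm": 85})    # no action
    engine.publish({"type": "heart_rate", "bpm": 140})   # fires the alert
```
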
Categories: research

Achieving Accurate and Real-Time Link Estimation for Low Power Wireless Sensor Networks

Tue, 08/01/2017 - 00:00
Link estimation is a fundamental component of forwarding protocols in wireless sensor networks. In low-power forwarding, however, the asynchronous nature of the widely adopted duty-cycled radio control brings new challenges to achieving accurate and real-time estimation. First, the repeatedly transmitted frames (called wake-up frames) complicate accurate statistics, especially under bursty channel contention and coexistent interference. Second, frequently updating the status of every link will soon exhaust the limited energy supply. In this paper, we propose Meter, a distributed wake-up frame counter. Meter takes the opportunity of link overhearing to update link status in real time. Furthermore, Meter does not only count the successfully decoded wake-up frames, but also counts the corrupted ones by exploiting the feasibility of ZigBee identification based on the short-term sequence of the received signal strength. We implement Meter in TinyOS and further evaluate its performance through extensive experiments on indoor and outdoor testbeds. The results demonstrate that Meter can significantly improve the performance of the state-of-the-art link estimation scheme.
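
The counting idea above amounts to estimating a link's delivery ratio from both decoded and corrupted overheard wake-up frames; a toy estimator along those lines is sketched below (the smoothing factor and observation format are assumptions, not the paper's design).

```python
# Toy link estimator in the spirit of a wake-up frame counter: the delivery
# ratio is updated from overheard frames, counting corrupted frames (detected,
# e.g., via RSS-sequence identification) as losses.  The EWMA factor and the
# observation format are illustrative.
class LinkEstimator:
    def __init__(self, alpha: float = 0.2) -> None:
        self.alpha = alpha          # EWMA smoothing factor
        self.quality = 1.0          # estimated frame delivery ratio

    def observe(self, decoded: int, corrupted: int) -> float:
        total = decoded + corrupted
        if total:
            sample = decoded / total
            self.quality = (1 - self.alpha) * self.quality + self.alpha * sample
        return self.quality

if __name__ == "__main__":
    est = LinkEstimator()
    for decoded, corrupted in [(10, 0), (7, 3), (4, 6), (9, 1)]:
        q = est.observe(decoded, corrupted)
        print(f"decoded={decoded} corrupted={corrupted} -> quality={q:.2f}")
```
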
Categories: research

Public Wi-Fi Monetization via Advertising

Tue, 08/01/2017 - 00:00
The proliferation of public Wi-Fi hotspots has brought new business potential for Wi-Fi networks, which carry a significant amount of global mobile data traffic today. In this paper, we propose a novel Wi-Fi monetization model for venue owners (VOs) deploying public Wi-Fi hotspots, in which the VOs can generate revenue by providing two different Wi-Fi access schemes for mobile users (MUs): 1) premium access, in which MUs directly pay VOs for their Wi-Fi usage, and 2) advertising-sponsored access, in which MUs watch advertisements in exchange for free usage of Wi-Fi. VOs sell their ad spaces to advertisers (ADs) via an ad platform and share the ADs' payments with the ad platform. We formulate the economic interactions among the ad platform, VOs, MUs, and ADs as a three-stage Stackelberg game. In Stage I, the ad platform announces its advertising revenue sharing policy. In Stage II, VOs determine the Wi-Fi prices (for MUs) and advertising prices (for ADs). In Stage III, MUs make access choices and ADs purchase advertising spaces. We analyze the subgame perfect equilibrium (SPE) of the proposed game systematically, and our analysis yields the following useful observations. First, the ad platform's advertising revenue sharing policy in Stage I affects only the VOs' Wi-Fi prices, not their advertising prices, in Stage II. Second, both the VOs' Wi-Fi prices and advertising prices are non-decreasing in the advertising concentration level and non-increasing in the MU visiting frequency. Numerical results further show that the VOs are capable of generating large revenues mainly through providing one type of Wi-Fi access (premium or advertising-sponsored), depending on their advertising concentration levels and MU visiting frequencies.
Categories: research

T-Chain: A General Incentive Scheme for Cooperative Computing

Tue, 08/01/2017 - 00:00
In this paper, we propose a simple, distributed, but highly efficient fairness-enforcing incentive mechanism for cooperative computing. The proposed mechanism, called triangle chaining (T-Chain), enforces reciprocity to avoid the exploitable aspects of the schemes that allow free-riding. In T-Chain, symmetric key cryptography provides the basis for a lightweight, almost-fair exchange protocol, which is coupled with a pay-it-forward mechanism. This combination increases the opportunity for multi-lateral exchanges and further maximizes the resource utilization of participants, each of whom is assumed to operate solely for his or her own benefit. T-Chain also provides barrier-free entry to newcomers with flexible resource allocation, allowing them to immediately benefit, and, therefore, is suitable for dynamic environments with high churn (i.e., turnover). T-Chain is distributed and simple to implement, as no trusted third party is required to monitor or enforce the scheme, nor is there any reliance on reputation information or tokens.
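
The "lightweight, almost-fair exchange" building block can be pictured as deliver-ciphertext-first, reveal-key-after-reciprocation. The sketch below shows that shape with the Fernet symmetric cipher from the third-party cryptography package; the flow and names are an illustration, not T-Chain's actual protocol.

```python
# Shape of a symmetric-key "almost-fair" exchange: the provider ships the
# encrypted chunk up front and discloses the key only after the receiver has
# reciprocated (e.g., by forwarding a chunk to a third peer, pay-it-forward
# style).  Uses Fernet from the `cryptography` package; this is a simplified
# illustration, not T-Chain's exact protocol.
from cryptography.fernet import Fernet

def provide_chunk(chunk: bytes):
    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(chunk)
    return ciphertext, key          # ciphertext is sent now, key is withheld

def receiver_side(ciphertext: bytes, released_key: bytes) -> bytes:
    return Fernet(released_key).decrypt(ciphertext)

if __name__ == "__main__":
    ciphertext, key = provide_chunk(b"example chunk of the shared computation")
    # ... receiver reciprocates (forwards a chunk to another peer) ...
    reciprocated = True
    if reciprocated:
        print(receiver_side(ciphertext, key))   # key released, chunk usable
```
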
Categories: research

Distributed Spanner Construction With Physical Interference: Constant Stretch and Linear Sparseness

Tue, 08/01/2017 - 00:00
This paper presents the first distributed algorithm to construct a spanner for arbitrary ad hoc networks under the physical signal-to-interference-and-noise-ratio (SINR) interference model. Spanner construction is one of the most important techniques for topology control in wireless networks: it seeks a sparse topology in which only a small number of links need to be maintained, without substantially degrading the path connecting any pair of nodes in the network. Due to the non-local property of interference, constructing a spanner is challenging under the SINR model, especially when a local distributed algorithm is desired. We meet this challenge by proposing an efficient randomized distributed algorithm that constructs a spanner in O(log n log Γ) timeslots with high probability, where n is the total number of nodes and Γ is the ratio of the maximum to the minimum distance between nodes. The constructed spanner concurrently satisfies the two most desirable properties: constant stretch and linear sparseness. Our algorithm employs a novel maximal independent set (MIS) procedure as a subroutine, which is crucial to the time efficiency of the spanner construction. The MIS algorithm improves the best known result of O(log² n) [33] to O(log n) and is of independent interest, as it is also applicable to many other problems. We conduct simulations to verify the proposed spanner construction algorithm, and the results show that our algorithm also performs well in realistic environments.
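
For orientation, a maximal independent set (MIS) is a set of nodes, no two adjacent, to which no further node can be added. The sketch below is a centralized simulation of classic Luby-style randomized MIS rounds on an arbitrary toy graph; it is not the paper's SINR-aware distributed procedure.

```python
# Centralized simulation of Luby-style randomized MIS rounds: in each round,
# every remaining node draws a random value, local minima join the MIS, and
# they and their neighbors are removed.  The graph is an arbitrary toy example.
import random

def luby_mis(adj):
    """adj: dict node -> set of neighbor nodes. Returns a maximal independent set."""
    remaining = set(adj)
    mis = set()
    while remaining:
        value = {v: random.random() for v in remaining}
        winners = {v for v in remaining
                   if all(value[v] < value[u] for u in adj[v] if u in remaining)}
        mis |= winners
        removed = winners | {u for w in winners for u in adj[w]}
        remaining -= removed
    return mis

if __name__ == "__main__":
    graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
    print("MIS:", luby_mis(graph))
```
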
Categories: research

Scaling in Internet Traffic: A 14 Year and 3 Day Longitudinal Study, With Multiscale Analyses and Random Projections

Tue, 08/01/2017 - 00:00
In the mid 1990s, it was shown that the statistics of aggregated time series from Internet traffic departed from those of traditional short-range dependent models and were instead characterized by asymptotic self-similarity. Following this seminal contribution, many studies over the years have investigated the existence and form of scaling in Internet traffic. This contribution first presents a methodology, combining multiscale analysis (wavelets and wavelet leaders) and random projections (or sketches), that permits a precise, efficient, and robust characterization of scaling and is capable of seeing through non-stationary anomalies. Second, we apply the methodology to a data set spanning an unusually long period: 14 years from the MAWI traffic archive, thereby allowing an in-depth longitudinal analysis of the form, nature, and evolution of scaling in Internet traffic, as well as of the network mechanisms producing it. We also study a separate three-day-long trace to obtain complementary insight into intra-day behavior. We find that a biscaling regime (two ranges of independent scaling phenomena) is systematically observed: long-range dependence over the large scales, and multifractal-like scaling over the fine scales. We quantify the actual scaling ranges precisely, verify to high accuracy the expected relationship between the long-range dependence parameter and the heavy-tail parameter of the flow size distribution, and relate fine-scale multifractal scaling to typical IP packet inter-arrival and round-trip time distributions.
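
To make the multiscale analysis concrete, the sketch below implements the textbook wavelet (Abry-Veitch style) scaling estimate: Haar detail energies are computed octave by octave, and the slope of log2 energy versus octave gives the Hurst parameter. It is run on white noise purely to exercise the code and is not the paper's wavelet-leader / random-projection methodology.

```python
# Wavelet-based scaling estimate: compute Haar detail coefficients octave by
# octave and regress log2 of their energy against the octave j; for long-range
# dependent traffic the slope relates to the Hurst parameter via slope = 2H - 1.
# White noise input is used only as a test signal (expected H ~ 0.5).
import numpy as np

def haar_scaling_exponent(x, octaves=8):
    js, log_energy = [], []
    signal = np.asarray(x, dtype=float)
    for j in range(1, octaves + 1):
        n = len(signal) // 2 * 2
        approx = (signal[0:n:2] + signal[1:n:2]) / np.sqrt(2.0)
        detail = (signal[0:n:2] - signal[1:n:2]) / np.sqrt(2.0)
        js.append(j)
        log_energy.append(np.log2(np.mean(detail ** 2)))
        signal = approx
    slope = np.polyfit(js, log_energy, 1)[0]
    return (slope + 1.0) / 2.0          # H from slope = 2H - 1

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.standard_normal(2 ** 16)     # white noise: expect H close to 0.5
    print(f"estimated Hurst parameter: {haar_scaling_exponent(x):.2f}")
```
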
Categories: research

An Online Learning Approach to Improving the Quality of Crowd-Sourcing

Tue, 08/01/2017 - 00:00
We consider a crowd-sourcing problem in which, in the process of labeling massive data sets, multiple labelers with unknown annotation quality must be selected to perform the labeling task for each incoming data sample or task, with the results aggregated using, for example, a simple or weighted majority voting rule. In this paper, we approach this labeler selection problem in an online learning framework, whereby the quality of the labeling outcome by a specific set of labelers is estimated so that the learning algorithm over time learns to use the most effective combinations of labelers. This type of online learning in some sense falls under the family of multi-armed bandit (MAB) problems, but with a distinct feature not commonly seen: since the data is unlabeled to begin with and the labelers' quality is unknown, their labeling outcome (or reward in the MAB context) cannot be readily verified; it can only be estimated against the crowd and known probabilistically. We design an efficient online algorithm, LS_OL, using a simple majority voting rule that can differentiate high- and low-quality labelers over time; it is shown to have a regret (with respect to always using the optimal set of labelers) of O(log² T) uniformly in time under mild assumptions on the collective quality of the crowd, and is thus regret-free in the average sense. We discuss further performance improvement by using a more sophisticated majority voting rule, and show how to detect and filter out “bad” (dishonest, malicious, or very incompetent) labelers to further enhance the quality of crowd-sourcing. An extension to the case where a labeler's quality is task-type dependent is also discussed using techniques from the literature on continuous arms. We establish a lower bound on the order of O(log T · D2(T)), where D2(T) is an arbitrary function such that D2(T) > O(1), and provide a matching upper bound through a minor modification of the algorithm proposed and studied earlier. We present numerical results using both simulations and a set of images labeled by Amazon Mechanical Turk workers.
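
A simplified stand-in for the labeler-selection loop described above: each labeler's quality is estimated by its agreement with the majority vote of the selected set (since true labels are unavailable), and selection is epsilon-greedy. This is an illustration of the idea, not the LS_OL algorithm, and all parameters are invented.

```python
# Simplified online labeler selection: since true labels are unknown, each
# labeler's quality is estimated by its agreement with the majority vote of
# the selected set, and selection is epsilon-greedy over individual labelers.
import random

def run(true_accuracy, n_select=3, n_tasks=2000, eps=0.1, seed=0):
    rng = random.Random(seed)
    n = len(true_accuracy)
    agree, used = [0] * n, [1e-9] * n        # agreement counts / selection counts
    for _ in range(n_tasks):
        truth = rng.random() < 0.5           # hidden binary ground truth
        if rng.random() < eps:               # explore: random labeler set
            chosen = rng.sample(range(n), n_select)
        else:                                # exploit: highest estimated quality
            chosen = sorted(range(n), key=lambda i: agree[i] / used[i])[-n_select:]
        votes = {i: (rng.random() < true_accuracy[i]) == truth for i in chosen}
        majority = sum(votes.values()) * 2 > len(chosen)   # majority label correct?
        for i in chosen:
            used[i] += 1
            agree[i] += votes[i] == majority
    return [agree[i] / used[i] for i in range(n)]

if __name__ == "__main__":
    estimates = run(true_accuracy=[0.9, 0.8, 0.7, 0.55, 0.5])
    print("estimated agreement rates:", [round(e, 2) for e in estimates])
```
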
Categories: research

Data Center Server Provision: Distributed Asynchronous Control for Coupled Renewal Systems

Tue, 08/01/2017 - 00:00
This paper considers a cost minimization problem for data centers with N servers and randomly arriving service requests. A central router decides which server to use for each new request. Each server has three types of states (active, idle, and setup) with different costs and time durations. The servers operate asynchronously over their own states and can choose one of multiple sleep modes when idle. We develop an online distributed control algorithm in which each server makes its own decisions; the request queues are bounded, and the overall time-average cost is near optimal with probability 1. The algorithm does not need probability information for the arrival rate or job sizes. Finally, an improved algorithm that uses a single queue is developed via a “virtualization” technique, which is shown to provide the same (near-optimal) costs. Simulation experiments on a real data center traffic trace demonstrate the efficiency of our algorithm compared with other existing algorithms.
Categories: research