
Sunday 5 August 2012

2012, Java IEEE Project Abstracts - Part 4

JAVA IEEE 2012 PROJECT ABSTRACTS

DOMAIN - SOFTWARE ENGINEERING
QOS ASSURANCE FOR DYNAMIC RECONFIGURATION OF COMPONENT-BASED SOFTWARE SYSTEMS
A major challenge of dynamic reconfiguration is Quality of Service (QoS) assurance, which aims to minimize application disruption during the system’s transformation. However, this problem has not been well studied.
This paper investigates the problem for component-based software systems from three points of view.
First, the whole spectrum of QoS characteristics is defined. Second, the logical and physical requirements for QoS characteristics are analyzed and solutions to achieve them are proposed. Third, prior work is classified by QoS characteristics and then realized by abstract reconfiguration strategies.
On this basis, quantitative evaluation of the QoS assurance abilities of existing work and our own approach is conducted through three steps. First, a proof-of-concept prototype called the reconfigurable component model is implemented to support the representation and testing of the reconfiguration strategies.
Second, a reconfiguration benchmark is proposed to expose the whole spectrum of QoS problems. Third, each reconfiguration strategy is tested against the benchmark and the testing results are evaluated. The most important conclusion from our investigation is that the classified QoS characteristics can be fully achieved under some acceptable constraints.


*------------*------------*------------*------------*------------*------------*

DOMAIN - SOFTWARE ENGINEERING
EXPLOITING DYNAMIC INFORMATION IN IDES IMPROVES SPEED AND CORRECTNESS OF SOFTWARE MAINTENANCE TASKS
Modern IDEs such as Eclipse offer static views of the source code, but such views ignore information about the runtime behavior of software systems. Since typical object-oriented systems make heavy use of polymorphism and dynamic binding, static views will miss key information about the runtime architecture.
In this paper, we present an approach to gather and integrate dynamic information in the Eclipse IDE with the goal of better supporting typical software maintenance activities. By means of a controlled experiment with 30 professional developers, we show that for typical software maintenance tasks, integrating dynamic information into the Eclipse IDE yields a significant 17.5 percent decrease in time spent while significantly increasing the correctness of the solutions by 33.5 percent. We also provide a comprehensive performance evaluation of our approach.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - SOFTWARE ENGINEERING
COMPARING THE DEFECT REDUCTION BENEFITS OF CODE INSPECTION AND TEST-DRIVEN DEVELOPMENT
This study is a quasi-experiment comparing the software defect rates and implementation costs of two methods of software defect reduction: code inspection and test-driven development.
We divided participants, consisting of junior and senior computer science students at a large Southwestern university, into four groups using a two-by-two, between-subjects, factorial design and asked them to complete the same programming assignment using either test-driven development, code inspection, both, or neither.
We compared resulting defect counts and implementation costs across groups. We found that code inspection is more effective than test-driven development at reducing defects, but that code inspection is also more expensive. We also found that test-driven development was no more effective at reducing defects than traditional programming methods.


*------------*------------*------------*------------*------------*------------*

DOMAIN - SOFTWARE ENGINEERING
AN AUTONOMOUS ENGINE FOR SERVICES CONFIGURATION AND DEPLOYMENT
The runtime management of the infrastructure providing service-based systems is a complex task, up to the point where manual operation struggles to be cost effective. As the functionality is provided by a set of dynamically composed distributed services, in order to achieve a management objective multiple operations have to be applied over the distributed elements of the managed infrastructure.
Moreover, the manager must cope with the highly heterogeneous characteristics and management interfaces of the runtime resources. With this in mind, this paper proposes to support the configuration and deployment of services with an automated closed control loop.
The automation is enabled by the definition of a generic information model, which captures all the information relevant to the management of the services with the same abstractions, describing the runtime elements, service dependencies, and business objectives.
On top of that, a technique based on satisfiability is described which automatically diagnoses the state of the managed environment and obtains the required changes for correcting it (e.g., installation, service binding, update, or configuration). The results from a set of case studies extracted from the banking domain are provided to validate the feasibility of this proposal.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - SOFTWARE ENGINEERING
STAKERARE: USING SOCIAL NETWORKS AND COLLABORATIVE FILTERING FOR LARGE-SCALE REQUIREMENTS ELICITATION
Requirements elicitation is the software engineering activity in which stakeholder needs are understood. It involves identifying and prioritizing requirements—a process difficult to scale to large software projects with many stakeholders.
This paper proposes StakeRare, a novel method that uses social networks and collaborative filtering to identify and prioritize requirements in large software projects. StakeRare identifies stakeholders and asks them to recommend other stakeholders and stakeholder roles, builds a social network with stakeholders as nodes and their recommendations as links, and prioritizes stakeholders using a variety of social network measures to determine their project influence.
It then asks the stakeholders to rate an initial list of requirements, recommends other relevant requirements to them using collaborative filtering, and prioritizes their requirements using their ratings weighted by their project influence. StakeRare was evaluated by applying it to a software project for a 30,000-user system, and a substantial empirical study of requirements elicitation was conducted.
Using the data collected from surveying and interviewing 87 stakeholders, the study demonstrated that StakeRare predicts stakeholder needs accurately and arrives at a more complete and accurately prioritized list of requirements compared to the existing method used in the project, taking only a fraction of the time.
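The stakeholder-prioritization step can be sketched as follows. This is a minimal illustration, not the StakeRare implementation: it uses plain degree centrality (the paper applies a variety of social network measures), and the class name, method names, and stakeholder names are all hypothetical.

```java
import java.util.*;

// Hypothetical sketch: rank stakeholders by degree centrality in the
// recommendation network (nodes = stakeholders, edges = recommendations).
public class StakeholderRanker {
    public static Map<String, Integer> degreeCentrality(Map<String, List<String>> recs) {
        Map<String, Integer> degree = new HashMap<>();
        for (Map.Entry<String, List<String>> e : recs.entrySet()) {
            degree.merge(e.getKey(), e.getValue().size(), Integer::sum); // out-degree
            for (String target : e.getValue())
                degree.merge(target, 1, Integer::sum);                   // in-degree
        }
        return degree;
    }

    // Stakeholders sorted by descending degree, i.e., by project influence.
    public static List<String> rank(Map<String, List<String>> recs) {
        Map<String, Integer> degree = degreeCentrality(recs);
        List<String> names = new ArrayList<>(degree.keySet());
        names.sort((a, b) -> degree.get(b) - degree.get(a));
        return names;
    }

    public static void main(String[] args) {
        Map<String, List<String>> recs = new HashMap<>();
        recs.put("alice", Arrays.asList("bob", "carol"));
        recs.put("bob", Arrays.asList("carol"));
        recs.put("dave", Arrays.asList("carol"));
        // "carol" is recommended most often, so she ranks first.
        System.out.println(rank(recs));
    }
}
```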


*------------*------------*------------*------------*------------*------------*

DOMAIN - PARALLEL AND DISTRIBUTED SYSTEMS
ON THE HOP COUNT STATISTICS IN WIRELESS MULTIHOP NETWORKS SUBJECT TO FADING
Consider a wireless multihop network where nodes are randomly distributed in a given area following a homogeneous Poisson process. The hop count statistics, viz. the probabilities related to the number of hops between two nodes, are important for performance analysis of the multihop networks.
In this paper, we provide analytical results on the probability that two nodes separated by a known Euclidean distance are k hops apart in networks subject to both shadowing and small-scale fading. Some interesting results are derived which have generic significance. For example, it is shown that the locations of nodes three or more hops away provide little information in determining the relationship of a node with other nodes in the network.
This observation is useful for the design of distributed routing, localization, and network security algorithms. As an illustration of the application of our results, we derive the effective energy consumption per successfully transmitted packet in end-to-end packet transmissions.
We show that there exists an optimum transmission range which minimizes the effective energy consumption. The results provide useful guidelines on the design of a randomly deployed network in a more realistic radio environment.
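The link model underlying such hop-count analysis can be illustrated with a small Monte Carlo sketch. This is not the paper's analytical derivation: it only estimates the probability of a direct (one-hop) link under log-normal shadowing, and the path-loss exponent, shadowing deviation, and reference range below are illustrative values.

```java
import java.util.Random;

// Hypothetical sketch: Monte Carlo estimate of the probability that two
// nodes separated by distance d have a direct link under log-normal
// shadowing. A link succeeds when path loss plus a Gaussian shadowing
// term (in dB) stays within the link budget.
public class HopProbability {
    public static double directLinkProb(double d, double r0, double alpha,
                                        double sigmaDb, int trials, long seed) {
        Random rng = new Random(seed);
        int connected = 0;
        for (int i = 0; i < trials; i++) {
            // Budget condition: 10*alpha*log10(d/r0) + X <= 0, X ~ N(0, sigmaDb^2).
            double shadowingDb = sigmaDb * rng.nextGaussian();
            if (10 * alpha * Math.log10(d / r0) + shadowingDb <= 0) connected++;
        }
        return (double) connected / trials;
    }

    public static void main(String[] args) {
        // At exactly the reference range the path-loss term is zero, so the
        // symmetric shadowing makes the link succeed about half the time.
        System.out.println(directLinkProb(1.0, 1.0, 3.0, 8.0, 100_000, 42L));
    }
}
```

Note how shadowing makes connectivity probabilistic rather than a fixed disk: nodes beyond the nominal range can still connect occasionally, which is exactly why hop-count statistics need a fading-aware analysis.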


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - PARALLEL AND DISTRIBUTED SYSTEMS
ON MAXIMIZING THE LIFETIME OF WIRELESS SENSOR NETWORKS USING VIRTUAL BACKBONE SCHEDULING
Wireless Sensor Networks (WSNs) are key for various applications that involve long-term and low-cost monitoring and actuating. In these applications, sensor nodes use batteries as the sole energy source.
Therefore, energy efficiency becomes critical. We observe that many WSN applications require redundant sensor nodes to achieve fault tolerance and Quality of Service (QoS) of the sensing.
However, the same redundancy may not be necessary for multihop communication because of the light traffic load and the stable wireless links. In this paper, we present a novel sleep-scheduling technique called Virtual Backbone Scheduling (VBS). VBS is designed for WSNs that have redundant sensor nodes.
VBS forms multiple overlapped backbones which work alternatively to prolong the network lifetime. In VBS, traffic is only forwarded by backbone sensor nodes, and the rest of the sensor nodes turn off their radios to save energy.
The rotation of multiple backbones makes sure that the energy consumption of all sensor nodes is balanced, which fully utilizes the energy and achieves a longer network lifetime compared to the existing techniques.
The scheduling problem of VBS is formulated as the Maximum Lifetime Backbone Scheduling (MLBS) problem. Since the MLBS problem is NP-hard, we propose approximation algorithms based on the Schedule Transition Graph (STG) and the Virtual Scheduling Graph (VSG).
We also present an Iterative Local Replacement (ILR) scheme as a distributed implementation. Theoretical analyses and simulation studies verify that VBS is superior to the existing techniques.
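The backbone-rotation idea can be sketched as follows. This is a minimal illustration only: the real MLBS problem optimizes how long each backbone stays active, whereas this sketch simply rotates precomputed backbones round-robin in equal time slots; the class and node ids are hypothetical.

```java
import java.util.*;

// Hypothetical sketch: several precomputed (possibly overlapping) backbones
// take turns forwarding traffic; a node keeps its radio on only while some
// backbone containing it is the active one, and sleeps otherwise.
public class BackboneRotation {
    private final List<Set<Integer>> backbones; // each set holds node ids

    public BackboneRotation(List<Set<Integer>> backbones) {
        this.backbones = backbones;
    }

    // Which backbone forwards traffic during the given time slot.
    public Set<Integer> activeBackbone(int slot) {
        return backbones.get(slot % backbones.size());
    }

    // Whether the node's radio is on in this slot.
    public boolean isAwake(int nodeId, int slot) {
        return activeBackbone(slot).contains(nodeId);
    }

    public static void main(String[] args) {
        BackboneRotation vbs = new BackboneRotation(Arrays.asList(
                new HashSet<>(Arrays.asList(1, 2, 3)),
                new HashSet<>(Arrays.asList(3, 4, 5))));
        System.out.println(vbs.isAwake(1, 0)); // true: node 1 is in backbone 0
        System.out.println(vbs.isAwake(1, 1)); // false: node 1 sleeps in slot 1
        System.out.println(vbs.isAwake(3, 1)); // true: node 3 overlaps both backbones
    }
}
```

Rotating which backbone is active is what spreads the forwarding load and balances energy consumption across nodes.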


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - PARALLEL AND DISTRIBUTED SYSTEMS
FLASH CROWD IN P2P LIVE STREAMING SYSTEMS: FUNDAMENTAL CHARACTERISTICS AND DESIGN IMPLICATIONS
Peer-to-peer (P2P) live video streaming systems have recently received substantial attention, with commercial deployments gaining increased popularity on the Internet.
It is evident from our practical experiences with real-world systems that it is not uncommon for hundreds of thousands of users to join a program in the first few minutes of a live broadcast.
Such a severe flash crowd phenomenon in live streaming poses significant challenges to system design.
In this paper, for the first time, we develop a mathematical model to: 1) capture the fundamental relationship between time and scale in P2P live streaming systems under a flash crowd, and 2) explore the design principle of population control to alleviate the impact of the flash crowd.
We carry out rigorous analysis that brings forth an in-depth understanding on effects of the gossip protocol and peer dynamics. In particular, we demonstrate that there exists an upper bound on the system scale with respect to a time constraint.
By trading peer startup delays in the initial stage of a flash crowd for system scale, we design a simple and flexible population control framework that can alleviate the flash crowd without requiring otherwise costly server deployment.


*------------*------------*------------*------------*------------*------------*

DOMAIN - PARALLEL AND DISTRIBUTED SYSTEMS
EXPLORING THE OPTIMAL REPLICATION STRATEGY IN P2P-VOD SYSTEMS: CHARACTERIZATION AND EVALUATION
P2P-Video-on-Demand (P2P-VoD) is a popular Internet service which aims to provide a scalable and high-quality service to users. At the same time, content providers of P2P-VoD services also need to make sure that the service is operated with a manageable operating cost.
Given the volume-based charging model by ISPs, P2P-VoD content providers would like to reduce peers’ access to the content server so as to reduce the operating cost.
In this paper, we address an important open problem: what is the “optimal replication ratio” in a P2P-VoD system such that peers receive service from each other and, at the same time, access to the content server is reduced? We address two fundamental issues: 1) what is the optimal replication ratio of a movie if we know its popularity, and 2) how to achieve these optimal ratios in a distributed and dynamic fashion.
We first formally show how movie popularities can impact the server’s workload, and formulate video replication as an optimization problem. We show that the conventional wisdom of using the proportional replication strategy is “suboptimal,” and expand the design space to both a “passive replacement policy” and an “active push policy” to achieve the optimal replication ratios.
We consider practical implementation issues, evaluate the performance of P2P-VoD systems, and show how to greatly reduce the server’s workload and improve streaming quality via our distributed algorithms.
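The proportional-replication baseline that the paper argues is suboptimal can be sketched as follows. This is a hypothetical illustration: it simply assigns each movie a share of the total replica budget proportional to its popularity, and the movie names and numbers are made up.

```java
import java.util.*;

// Hypothetical sketch of proportional replication: replica count for each
// movie is its popularity share times the total replica budget.
public class ProportionalReplication {
    public static Map<String, Integer> allocate(Map<String, Double> popularity,
                                                int totalReplicas) {
        double sum = popularity.values().stream().mapToDouble(Double::doubleValue).sum();
        Map<String, Integer> replicas = new LinkedHashMap<>();
        for (Map.Entry<String, Double> e : popularity.entrySet())
            replicas.put(e.getKey(), (int) Math.round(totalReplicas * e.getValue() / sum));
        return replicas;
    }

    public static void main(String[] args) {
        Map<String, Double> pop = new LinkedHashMap<>();
        pop.put("movieA", 0.6);
        pop.put("movieB", 0.3);
        pop.put("movieC", 0.1);
        System.out.println(allocate(pop, 100)); // {movieA=60, movieB=30, movieC=10}
    }
}
```

The paper's point is that this intuitive allocation does not minimize server load; the optimal ratios deviate from strict proportionality and are reached via the passive replacement and active push policies.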


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - PARALLEL AND DISTRIBUTED SYSTEMS
ENERGY-EFFICIENT TOPOLOGY CONTROL IN COOPERATIVE AD HOC NETWORKS
Cooperative communication (CC) exploits space diversity by allowing multiple nodes to cooperatively relay signals to the receiver so that the combined signal at the receiver can be correctly decoded. Since CC can reduce the transmission power and extend the transmission coverage, it has been considered in topology control protocols.
However, prior research on topology control with CC focuses only on maintaining network connectivity and minimizing the transmission power of each node, while ignoring the energy efficiency of paths in the constructed topologies.
This may cause inefficient routes and hurt the overall network performance in cooperative ad hoc networks. In this paper, to address this problem, we introduce a new topology control problem: the energy-efficient topology control problem with cooperative communication, and propose two topology control algorithms to build cooperative energy spanners in which the energy efficiency of individual paths is guaranteed.
Both proposed algorithms can be performed in a distributed and localized fashion while maintaining globally efficient paths. Simulation results confirm the good performance of both proposed algorithms.


*------------*------------*------------*------------*------------*------------*

DOMAIN - PARALLEL AND DISTRIBUTED SYSTEMS
EMBEDDED TRANSITIVE CLOSURE NETWORK FOR RUNTIME DEADLOCK DETECTION IN NETWORKS-ON-CHIP
Interconnection networks with adaptive routing are susceptible to deadlock, which could lead to performance degradation or system failure. Detecting deadlocks at runtime is challenging because of their highly distributed characteristics.
In this paper, we present a deadlock detection method that utilizes runtime transitive closure (TC) computation to discover the existence of deadlock-equivalence sets, which imply loops of requests in networks-on-chip (NoCs). This detection scheme guarantees the discovery of all true deadlocks without false alarms in contrast with state-of-the-art approximation and heuristic approaches.
A distributed TC-network architecture, which couples with the NoC infrastructure, is also presented to realize the detection mechanism efficiently. Detailed hardware realization architectures and schematics are also discussed.
Our results based on a cycle-accurate simulator demonstrate the effectiveness of the proposed method. It drastically outperforms timing-based deadlock detection mechanisms by eliminating false detections and, thus, reducing energy wastage in retransmission for various traffic scenarios including real-world applications.
We found that timing-based methods may produce two orders of magnitude more deadlock alarms than the TC-network method. Moreover, the implementations presented in this paper demonstrate that the hardware overhead of TC-networks is insignificant.
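The core idea of deadlock-equivalence detection via transitive closure can be sketched in software. This is a hypothetical analogue of the paper's distributed hardware TC-network: it computes the transitive closure of a channel wait-for graph with Warshall's algorithm and reports a deadlock when some channel can reach itself, i.e., the requests form a loop.

```java
// Hypothetical sketch: deadlock exists iff the transitive closure of the
// channel wait-for graph contains a self-reachable node (a request loop).
public class TcDeadlockCheck {
    public static boolean hasDeadlock(boolean[][] waitsFor) {
        int n = waitsFor.length;
        boolean[][] tc = new boolean[n][n];
        for (int i = 0; i < n; i++) tc[i] = waitsFor[i].clone();
        // Warshall's algorithm: tc[i][j] becomes true iff j is reachable from i.
        for (int k = 0; k < n; k++)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    tc[i][j] |= tc[i][k] && tc[k][j];
        for (int i = 0; i < n; i++)
            if (tc[i][i]) return true; // channel i can reach itself: a loop
        return false;
    }

    public static void main(String[] args) {
        boolean[][] loop = {{false, true}, {true, false}};   // 0 -> 1 -> 0
        boolean[][] chain = {{false, true}, {false, false}}; // 0 -> 1 only
        System.out.println(hasDeadlock(loop));  // true
        System.out.println(hasDeadlock(chain)); // false
    }
}
```

Because reachability is exact, this check has no false alarms, which is the property that distinguishes the TC approach from timing-based heuristics.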


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - PARALLEL AND DISTRIBUTED SYSTEMS
EFFICIENT HARDWARE BARRIER SYNCHRONIZATION IN MANY-CORE CMPS
Traditional software-based barrier implementations for shared memory parallel machines tend to produce hotspots in terms of memory and network contention as the number of processors increases. This could limit their applicability to future many-core CMPs in which possibly several dozens of cores would need to be synchronized efficiently.
In this work, we develop GBarrier, a hardware-based barrier mechanism especially aimed at providing efficient barriers in future many-core CMPs.
Our proposal deploys a dedicated G-line-based network to allow for fast and efficient signaling of barrier arrival and departure. Since GBarrier does not have any influence on the memory system, we avoid all coherence activity and barrier-related network traffic that traditional approaches introduce and that restrict scalability.
Through detailed simulations of a 32-core CMP, we compare GBarrier against one of the most efficient software-based barrier implementations for a set of kernels and scientific applications. Evaluation results show average reductions of 54 and 21 percent in execution time, 53 and 18 percent in network traffic, and 76 and 31 percent in the energy-delay² product metric for the full CMP when the kernels and scientific applications, respectively, are considered.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - PARALLEL AND DISTRIBUTED SYSTEMS
DYNAMIC BEACON MOBILITY SCHEDULING FOR SENSOR LOCALIZATION
In mobile-beacon assisted sensor localization, beacon mobility scheduling aims to determine the best beacon trajectory so that each sensor receives sufficient beacon signals and becomes localized with minimum delay.
We propose a novel DeteRministic dynamic bEAcon Mobility Scheduling (DREAMS) algorithm that requires no prior knowledge of the sensory field. In this algorithm, the beacon trajectory is defined as the track of a Depth-First Traversal (DFT) of the network graph and is thus deterministic.
The mobile beacon performs DFT dynamically, under the instruction of nearby sensors on the fly. It moves from sensor to sensor in an intelligent heuristic manner according to Received Signal Strength (RSS)-based distance measurements. We prove that DREAMS guarantees full localization (every sensor is localized) when the measurements are noise-free, and derive the upper bound of beacon total moving distance in this case.
Then, we suggest applying node elimination and a Local Minimum Spanning Tree (LMST) to shorten the beacon tour and reduce delay. Further, we extend DREAMS to multibeacon scenarios, where beacons with different coordinate systems compete for localizing sensors. Loser beacons adopt the winner beacons’ coordinate system and become cooperative in subsequent localization.
All sensors are finally localized in a commonly agreed coordinate system. Through simulation we show that DREAMS guarantees full localization even with noisy distance measurements. We evaluate its performance on localization delay and communication overhead in comparison with a previously proposed static path-based scheduling method.
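The deterministic trajectory idea can be sketched as a depth-first traversal over the connectivity graph. This is a hypothetical illustration only: the real protocol performs the DFT dynamically with neighbor discovery and RSS-based distance estimation, which are omitted here, and the class and node ids are made up.

```java
import java.util.*;

// Hypothetical sketch: the beacon visits sensors in depth-first order over
// the network graph, so the tour is deterministic once the graph is fixed.
public class DftTrajectory {
    public static List<Integer> tour(int start, Map<Integer, List<Integer>> neighbors) {
        List<Integer> visitOrder = new ArrayList<>();
        dfs(start, neighbors, new HashSet<>(), visitOrder);
        return visitOrder;
    }

    private static void dfs(int node, Map<Integer, List<Integer>> neighbors,
                            Set<Integer> visited, List<Integer> order) {
        visited.add(node);
        order.add(node); // the beacon moves to this sensor next
        for (int next : neighbors.getOrDefault(node, Collections.emptyList()))
            if (!visited.contains(next)) dfs(next, neighbors, visited, order);
    }

    public static void main(String[] args) {
        Map<Integer, List<Integer>> g = new HashMap<>();
        g.put(0, Arrays.asList(1, 2));
        g.put(1, Arrays.asList(0));
        g.put(2, Arrays.asList(0));
        System.out.println(tour(0, g)); // [0, 1, 2]
    }
}
```

Because a DFT visits every node of a connected graph, every sensor eventually receives beacon signals, which is the intuition behind the full-localization guarantee.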


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - PARALLEL AND DISTRIBUTED SYSTEMS
DRAGON: DETECTION AND TRACKING OF DYNAMIC AMORPHOUS EVENTS IN WIRELESS SENSOR NETWORKS
Wireless sensor networks may be deployed in many applications to detect and track events of interest. Events can be either point events with an exact location and constant shape, or region events which cover a large area and have dynamic shapes.
While both types of events have received attention, no event detection and tracking protocol in existing wireless sensor network research is able to identify and track region events with dynamic identities, which arise when events are created or destroyed through splitting and merging. In this paper, we propose DRAGON, an event detection and tracking protocol which is able to handle all types of events including region events with dynamic identities.
DRAGON employs two physics metaphors: event center of mass, to give an approximate location to the event; and node momentum, to guide the detection of event merges and splits.
Both detailed theoretical analysis and extensive performance studies of DRAGON’s properties demonstrate that DRAGON’s execution is distributed among the sensor nodes, has low latency, is energy efficient, is able to run on a wide array of physical deployments, and has performance which scales well with event size, speed, and count.
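The center-of-mass metaphor can be sketched directly. This is a hypothetical illustration, not DRAGON's distributed implementation: the event's approximate location is taken as the reading-weighted average of the positions of the detecting sensors, and the coordinates and readings below are made up.

```java
// Hypothetical sketch: an amorphous event's approximate location is the
// reading-weighted centroid of the sensors that currently detect it.
public class EventCenterOfMass {
    public static double[] centerOfMass(double[][] positions, double[] readings) {
        double total = 0, cx = 0, cy = 0;
        for (int i = 0; i < positions.length; i++) {
            total += readings[i];                 // each reading acts as a "mass"
            cx += readings[i] * positions[i][0];
            cy += readings[i] * positions[i][1];
        }
        return new double[] { cx / total, cy / total };
    }

    public static void main(String[] args) {
        double[][] pos = {{0, 0}, {2, 0}, {1, 2}};
        double[] readings = {1, 1, 2};
        double[] c = centerOfMass(pos, readings);
        System.out.printf("(%.2f, %.2f)%n", c[0], c[1]); // (1.00, 1.00)
    }
}
```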


*------------*------------*------------*------------*------------*------------*

DOMAIN - PARALLEL AND DISTRIBUTED SYSTEMS
DISTRIBUTED DIAGNOSIS OF DYNAMIC EVENTS IN PARTITIONABLE ARBITRARY TOPOLOGY NETWORKS
This work introduces the Distributed Network Reachability (DNR) algorithm, a distributed system-level diagnosis algorithm that allows every node of a partitionable arbitrary-topology network to determine which portions of the network are reachable and unreachable.
DNR is the first distributed diagnosis algorithm that works in the presence of network partitions and healings caused by dynamic fault and repair events. Both crash and timing faults are assumed, and a faulty node is indistinguishable from a network partition. Every link is alternately tested by one of its adjacent nodes at subsequent testing intervals. Upon the detection of a new event, the new diagnostic information is disseminated to reachable nodes.
New events can occur before the dissemination completes. Any time a new event is detected or reported, a working node may compute the network reachability using local diagnostic information. The bounded correctness of DNR is proved, including bounded diagnostic latency, bounded startup, and accuracy.
Simulation results are presented for several random and regular topologies, showing the performance of the algorithm under highly dynamic fault situations.
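The reachability computation each working node performs can be sketched as a graph search. This is a hypothetical simplification: a breadth-first search over links currently believed fault-free partitions the network into reachable and unreachable nodes; the topology, node ids, and class name are illustrative.

```java
import java.util.*;

// Hypothetical sketch: BFS from a node over its locally known working links
// yields the set of nodes it currently considers reachable.
public class NetworkReachability {
    public static Set<Integer> reachable(int start, Map<Integer, List<Integer>> workingLinks) {
        Set<Integer> seen = new HashSet<>();
        Deque<Integer> queue = new ArrayDeque<>();
        seen.add(start);
        queue.add(start);
        while (!queue.isEmpty()) {
            int node = queue.poll();
            for (int next : workingLinks.getOrDefault(node, Collections.emptyList()))
                if (seen.add(next)) queue.add(next);
        }
        return seen;
    }

    public static void main(String[] args) {
        Map<Integer, List<Integer>> links = new HashMap<>();
        links.put(0, Arrays.asList(1));
        links.put(1, Arrays.asList(0));
        links.put(2, Arrays.asList(3)); // nodes 2 and 3 form a separate partition
        links.put(3, Arrays.asList(2));
        System.out.println(reachable(0, links)); // only nodes 0 and 1
    }
}
```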


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - PARALLEL AND DISTRIBUTED SYSTEMS
COST-DRIVEN SCHEDULING OF GRID WORKFLOWS USING PARTIAL CRITICAL PATHS
Recently, utility Grids have emerged as a new model of service provisioning in heterogeneous distributed systems. In this model, users negotiate with service providers on their required Quality of Service and on the corresponding price to reach a Service Level Agreement.
One of the most challenging problems in utility Grids is workflow scheduling, i.e., the problem of satisfying the QoS of the users as well as minimizing the cost of workflow execution. In this paper, we propose a new QoS-based workflow scheduling algorithm based on a novel concept called Partial Critical Paths (PCP), which tries to minimize the cost of workflow execution while meeting a user-defined deadline.
The PCP algorithm has two phases: in the deadline distribution phase, it recursively assigns subdeadlines to the tasks on the partial critical paths ending at previously assigned tasks, and in the planning phase, it assigns the cheapest service to each task while meeting its subdeadline. The simulation results show that the performance of the PCP algorithm is very promising.
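The planning phase's per-task decision can be sketched in isolation. This is a hypothetical illustration of only that step: choose the cheapest service whose estimated runtime still meets the task's subdeadline; the deadline-distribution phase that produces the subdeadlines is not shown, and the service names and numbers are made up.

```java
import java.util.*;

// Hypothetical sketch of the planning phase for one task: among services
// whose runtime fits within the subdeadline, pick the cheapest.
public class CheapestServiceUnderDeadline {
    // services maps a service name to {estimated runtime, cost}.
    public static String pick(Map<String, double[]> services, double subdeadline) {
        String best = null;
        double bestCost = Double.POSITIVE_INFINITY;
        for (Map.Entry<String, double[]> e : services.entrySet()) {
            double runtime = e.getValue()[0], cost = e.getValue()[1];
            if (runtime <= subdeadline && cost < bestCost) {
                bestCost = cost;
                best = e.getKey();
            }
        }
        return best; // null if no service meets the subdeadline
    }

    public static void main(String[] args) {
        Map<String, double[]> services = new LinkedHashMap<>();
        services.put("fastExpensive", new double[]{2.0, 10.0});
        services.put("slowCheap", new double[]{8.0, 2.0});
        services.put("midPriced", new double[]{4.0, 5.0});
        System.out.println(pick(services, 5.0)); // midPriced: cheapest that fits
    }
}
```

The subdeadlines produced by the deadline-distribution phase are what make this greedy per-task choice safe with respect to the overall user-defined deadline.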


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - PARALLEL AND DISTRIBUTED SYSTEMS
COMPARISON-BASED SYSTEM-LEVEL FAULT DIAGNOSIS: A NEURAL NETWORK APPROACH
We consider the fault identification problem, also known as the system-level self-diagnosis, in multiprocessor and multicomputer systems using the comparison approach. In this diagnosis model, a set of tasks is assigned to pairs of nodes and their outcomes are compared by neighboring nodes.
Given that comparisons are performed by the nodes themselves, faulty nodes can incorrectly claim that fault-free nodes are faulty or that faulty ones are fault-free. The collections of all agreements and disagreements, i.e., the comparison outcomes, among the nodes are used to identify the set of permanently faulty nodes.
Since the introduction of the comparison model, significant progress has been made in both theory and practice associated with the original model and its offshoots. Nevertheless, the problem of efficiently identifying the set of faulty nodes when not all the comparison outcomes are available to the diagnosis algorithm at the beginning of the diagnosis phase, i.e., partial syndromes, remains an outstanding research issue.
In this paper, we introduce a novel diagnosis approach using neural networks to solve this fault identification problem using partial syndromes. Results from a thorough simulation study demonstrate the effectiveness of the neural-network-based self-diagnosis algorithm for randomly generated diagnosable systems of different sizes and under various fault scenarios.
We have then conducted extensive simulations using partial syndromes and nondiagnosable systems. The simulations showed that the neural-network-based diagnosis approach provides good results, making it a viable addition or alternative to existing diagnosis algorithms.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - PARALLEL AND DISTRIBUTED SYSTEMS
CATCHING PACKET DROPPERS AND MODIFIERS IN WIRELESS SENSOR NETWORKS
Packet dropping and modification are common attacks that can be launched by an adversary to disrupt communication in wireless multihop sensor networks. Many schemes have been proposed to mitigate or tolerate such attacks, but very few can effectively and efficiently identify the intruders.
To address this problem, we propose a simple yet effective scheme which can identify misbehaving forwarders that drop or modify packets. Extensive analysis and simulations have been conducted to verify the effectiveness and efficiency of the scheme.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - PARALLEL AND DISTRIBUTED SYSTEMS
CASHING IN ON THE CACHE IN THE CLOUD
Over the past decades, caching has become the key technology used for bridging the performance gap across memory hierarchies via temporal or spatial localities; in particular, the effect is prominent in disk storage systems. Applications that involve heavy I/O activities, which are common in the cloud, probably benefit the most from caching.
The use of local volatile memory as cache might be a natural alternative, but many well-known restrictions, such as capacity and the utilization of host machines, hinder its effective use. In addition to technical challenges, providing cache services in clouds encounters a major practical issue (quality of service or service level agreement issue) of pricing. Currently, (public) cloud users are limited to a small set of uniform and coarse-grained service offerings, such as High-Memory and High-CPU in Amazon EC2.
In this paper, we present the cache as a service (CaaS) model as an optional service to typical infrastructure service offerings. Specifically, the cloud provider sets aside a large pool of memory that can be dynamically partitioned and allocated to standard infrastructure services as disk cache.
We first investigate the feasibility of providing CaaS with the proof-of-concept elastic cache system (using dedicated remote memory servers) built and validated on the actual system, and practical benefits of CaaS for both users and providers (i.e., performance and profit, respectively) are thoroughly studied with a novel pricing scheme.
Our CaaS model helps to leverage the cloud economy greatly in that 1) the extra user cost for the I/O performance gain is minimal, if it exists at all, and 2) the provider’s profit increases due to improvements in server consolidation resulting from that performance gain. Through extensive experiments with eight resource allocation strategies, we demonstrate that our CaaS model can be a promising cost-efficient solution for both users and providers.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - PARALLEL AND DISTRIBUTED SYSTEMS
AN ONLINE DATA ACCESS PREDICTION AND OPTIMIZATION APPROACH FOR DISTRIBUTED SYSTEMS
Current scientific applications have been producing large amounts of data. The processing, handling and analysis of such data require large-scale computing infrastructures such as clusters and grids.
In this area, studies aim at improving the performance of data-intensive applications by optimizing data accesses. In order to achieve this goal, distributed storage systems have been considering techniques of data replication, migration, distribution, and access parallelism.
However, the main drawback of those studies is that they do not take application behavior into account when performing data access optimization. This limitation motivates this paper, which applies strategies to support the online prediction of application behavior in order to optimize data access operations on distributed systems, without requiring any information on past executions. To accomplish this goal, the approach organizes application behaviors as time series and then analyzes and classifies those series according to their properties.
By knowing properties, the approach selects modeling techniques to represent series and perform predictions, which are, later on, used to optimize data access operations. This new approach was implemented and evaluated using the OptorSim simulator, sponsored by the LHC-CERN project and widely employed by the scientific community.
Experiments confirm that this new approach reduces application execution time by about 50 percent, especially when handling large amounts of data.
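As a concrete illustration of the prediction step, the sketch below implements the simplest model such an approach might select for a well-behaved series: a sliding-window moving average over recent access latencies. This is a hypothetical example, not code from the paper or from OptorSim; the class name and window policy are invented for illustration.

```java
// Minimal sketch (not the paper's implementation): predict the next data-access
// latency from a sliding window of recent observations, the simplest of the
// time-series models such an approach could select.
public class AccessPredictor {
    private final double[] window;
    private int filled = 0, next = 0;

    public AccessPredictor(int windowSize) { window = new double[windowSize]; }

    // Record one observed access latency (e.g., in milliseconds).
    public void observe(double latency) {
        window[next] = latency;
        next = (next + 1) % window.length;
        if (filled < window.length) filled++;
    }

    // Moving-average prediction of the next access latency.
    public double predict() {
        if (filled == 0) return 0.0;
        double sum = 0.0;
        for (int i = 0; i < filled; i++) sum += window[i];
        return sum / filled;
    }

    public static void main(String[] args) {
        AccessPredictor p = new AccessPredictor(3);
        p.observe(10); p.observe(20); p.observe(30);
        System.out.println(p.predict()); // average of the last three samples
    }
}
```

A real system would, as the abstract describes, first classify the series (e.g., stationary vs. trending) and pick a matching model; the moving average stands in for that whole family here.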


*------------*------------*------------*------------*------------*------------*

DOMAIN - PARALLEL AND DISTRIBUTED SYSTEMS
A SURVEY OF PARALLEL PROGRAMMING MODELS AND TOOLS IN THE MULTI AND MANY-CORE ERA
In this work, we present a survey of the different parallel programming models and tools available today with special consideration to their suitability for high-performance computing. Thus, we review the shared and distributed memory approaches, as well as the current heterogeneous parallel programming model.
In addition, we analyze how the partitioned global address space (PGAS) and hybrid parallel programming models are used to combine the advantages of shared and distributed memory systems. The work is completed by considering languages with specific parallel support and the distributed programming paradigm. In all cases, we present characteristics, strengths, and weaknesses.
The study shows that the availability of multi-core CPUs has given new impulse to the shared memory parallel programming approach. In addition, we find that hybrid parallel programming is the current way of harnessing the capabilities of computer clusters with multi-core nodes.
On the other hand, heterogeneous programming is found to be an increasingly popular paradigm, as a consequence of the availability of multi-core CPU+GPU systems. The use of open industry standards such as OpenMP, MPI, or OpenCL, as opposed to proprietary solutions, appears to be the way to standardize and broaden the use of parallel programming models.
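To make the shared-memory model concrete, here is a small, self-contained Java fork/join example. It is a generic illustration of the multi-core programming style the survey reviews, not code drawn from the survey itself.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Illustrative shared-memory parallelism in Java: a fork/join parallel sum.
// Subtasks share the same array; the framework schedules them across cores.
public class ParallelSum extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;
    private final long[] data;
    private final int lo, hi;

    public ParallelSum(long[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {          // small enough: sum sequentially
            long s = 0;
            for (int i = lo; i < hi; i++) s += data[i];
            return s;
        }
        int mid = (lo + hi) >>> 1;           // otherwise split and recurse
        ParallelSum left = new ParallelSum(data, lo, mid);
        ParallelSum right = new ParallelSum(data, mid, hi);
        left.fork();                         // run left half asynchronously
        return right.compute() + left.join();
    }

    public static long sum(long[] data) {
        return new ForkJoinPool().invoke(new ParallelSum(data, 0, data.length));
    }

    public static void main(String[] args) {
        long[] xs = new long[10_000];
        for (int i = 0; i < xs.length; i++) xs[i] = i + 1;
        System.out.println(sum(xs)); // 1 + 2 + ... + 10000 = 50005000
    }
}
```

The hybrid model the survey highlights would combine this kind of intra-node threading with message passing (e.g., MPI) between nodes.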


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - PARALLEL AND DISTRIBUTED SYSTEMS
A SEQUENTIALLY CONSISTENT MULTIPROCESSOR ARCHITECTURE FOR OUT-OF-ORDER RETIREMENT OF INSTRUCTIONS
Out-of-order retirement of instructions has been shown to be an effective technique to increase the number of in-flight instructions. This form of runtime scheduling can reduce pipeline stalls caused by head-of-line blocking effects in the reorder buffer (ROB).
Expanding the width of the instruction window can be highly beneficial to multiprocessors that implement a strict memory model, especially when both loads and stores encounter long latencies due to cache misses, and whose stalls must be overlapped with instruction execution to overcome the memory latencies.
Based on the Validation Buffer (VB) architecture (a previously proposed out-of-order retirement, checkpoint-free architecture for single processors), this paper proposes a cost-effective, scalable, out-of-order retirement multiprocessor, capable of enforcing sequential consistency without impacting the design of the memory hierarchy or interconnect.
Our simulation results indicate that utilizing a VB can speed up both relaxed and sequentially consistent in-order retirement in future multiprocessor systems by between 3 and 20 percent, depending on the ROB size


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - PARALLEL AND DISTRIBUTED SYSTEMS
A SECURE ERASURE CODE-BASED CLOUD STORAGE SYSTEM WITH SECURE DATA FORWARDING
A cloud storage system, consisting of a collection of storage servers, provides long-term storage services over the Internet. Storing data in a third party’s cloud system causes serious concern over data confidentiality.
General encryption schemes protect data confidentiality but also limit the functionality of the storage system, because only a limited set of operations is supported over encrypted data. Constructing a secure storage system that supports multiple functions is challenging when the storage system is distributed and has no central authority.
We propose a threshold proxy re-encryption scheme and integrate it with a decentralized erasure code such that a secure distributed storage system is formulated. The distributed storage system not only supports secure and robust data storage and retrieval, but also lets a user forward his data in the storage servers to another user without retrieving the data back.
The main technical contribution is that the proxy re-encryption scheme supports encoding operations over encrypted messages as well as forwarding operations over encoded and encrypted messages. Our method fully integrates encrypting, encoding, and forwarding.
We analyze and suggest suitable parameters for the number of copies of a message dispatched to storage servers and the number of storage servers queried by a key server. These parameters allow more flexible adjustment between the number of storage servers and robustness
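The redundancy idea behind erasure-coded storage can be illustrated with the simplest possible code: k data blocks plus one XOR parity block, from which any single lost block can be rebuilt. This toy sketch only shows that redundancy idea; the paper's decentralized erasure code and threshold proxy re-encryption are far more general and are not modeled here.

```java
// Toy erasure coding over bytes: store k data blocks plus one XOR parity
// block, so any single lost block can be rebuilt from the survivors.
public class XorParity {
    // Compute the parity block of k equal-length data blocks.
    public static byte[] parity(byte[][] blocks) {
        byte[] p = new byte[blocks[0].length];
        for (byte[] b : blocks)
            for (int i = 0; i < p.length; i++) p[i] ^= b[i];
        return p;
    }

    // Rebuild the block at index `lost` from the survivors and the parity.
    public static byte[] recover(byte[][] blocks, byte[] parity, int lost) {
        byte[] r = parity.clone();
        for (int j = 0; j < blocks.length; j++)
            if (j != lost)
                for (int i = 0; i < r.length; i++) r[i] ^= blocks[j][i];
        return r;
    }

    public static void main(String[] args) {
        byte[][] data = { {1, 2}, {3, 4}, {5, 6} };
        byte[] p = parity(data);
        byte[] rebuilt = recover(data, p, 1);   // pretend block 1 was lost
        System.out.println(rebuilt[0] + "," + rebuilt[1]); // 3,4
    }
}
```

In the paper's setting the blocks would additionally be encrypted, and the proxy re-encryption scheme lets servers re-encode and forward them without ever decrypting.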


*------------*------------*------------*------------*------------*------------*

DOMAIN - PARALLEL AND DISTRIBUTED SYSTEMS
TRUSTWORTHY COORDINATION OF WEB SERVICES ATOMIC TRANSACTIONS
The Web Services Atomic Transactions (WS-AT) specification makes it possible for businesses to engage in standard distributed transaction processing over the Internet using Web Services technology. For such business applications, trustworthy coordination of WS-AT is crucial.
In this paper, we explain how to render WS-AT coordination trustworthy by applying Byzantine Fault Tolerance (BFT) techniques. More specifically, we show how to protect the core services described in the WS-AT specification, namely, the Activation service, the Registration service, the Completion service and the Coordinator service, against Byzantine faults.
The main contribution of this work is that it exploits the semantics of the WS-AT services to minimize the use of Byzantine Agreement (BA), instead of applying BFT techniques naively, which would be prohibitively expensive.
We have incorporated our BFT protocols and mechanisms into an open-source framework that implements the WS-AT specification. The resulting BFT framework for WS-AT is useful for business applications that are based on WS-AT and that require a high degree of dependability, security, and trust
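The replica-count arithmetic that any such BFT deployment relies on is worth stating explicitly: tolerating f Byzantine replicas requires at least 3f + 1 replicas, with agreement quorums of 2f + 1. The tiny helper below just encodes that standard arithmetic; it is not taken from the paper's framework.

```java
// Standard Byzantine fault tolerance sizing: to tolerate f arbitrary
// (Byzantine) replica failures, a service needs n >= 3f + 1 replicas and
// quorums of 2f + 1, so any two quorums intersect in >= f + 1 replicas,
// at least one of which is correct.
public class BftSizing {
    public static int minReplicas(int f) { return 3 * f + 1; }

    public static int quorum(int f) { return 2 * f + 1; }

    public static void main(String[] args) {
        // Tolerating one Byzantine fault:
        System.out.println(minReplicas(1) + " replicas, quorum " + quorum(1));
    }
}
```

Minimizing how often full Byzantine Agreement runs, as the paper does by exploiting WS-AT semantics, matters precisely because every BA round involves message exchange among all these replicas.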


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - PARALLEL AND DISTRIBUTED SYSTEMS
A NETWORK CODING EQUIVALENT CONTENT DISTRIBUTION SCHEME FOR EFFICIENT PEER-TO-PEER INTERACTIVE VOD STREAMING
Although random access operations are desirable for on-demand video streaming in peer-to-peer systems, they are difficult to efficiently achieve due to the asynchronous interactive behaviors of users and the dynamic nature of peers.
In this paper, we propose a network coding equivalent content distribution (NCECD) scheme to efficiently handle interactive video-on-demand (VoD) operations in peer-to-peer systems. In NCECD, videos are divided into segments that are then further divided into blocks. These blocks are encoded into independent blocks that are distributed to different peers for local storage.
With NCECD, a new client only needs to connect to a sufficient number of parent peers to be able to view the whole video and rarely needs to find new parents when performing random access operations. In most existing methods, a new client must search for parent peers containing specific segments; however, NCECD uses the properties of network coding to cache equivalent content in peers, so that one can pick any parent without additional searches.
Experimental results show that the proposed scheme achieves low startup and jump searching delays and requires fewer server resources. In addition, we present the analysis of system parameters to achieve reasonable block loss rates for the proposed scheme
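The "any sufficient set of parents suffices" property comes from linear network coding: a client that collects coded blocks whose coefficient vectors are linearly independent can decode by Gaussian elimination. The sketch below shows this over GF(2), with int payloads and bitmask coefficients for brevity; NCECD's actual coding and segment structure are more elaborate.

```java
// Sketch of the network-coding idea behind NCECD: peers store XOR
// combinations of the original blocks, and a client holding combinations
// with linearly independent coefficients (over GF(2)) decodes them by
// Gaussian elimination. Payloads are ints here for brevity.
public class Gf2Decoder {
    // coeff[i] is a bitmask over the k original blocks; value[i] is the XOR
    // of those blocks. Returns the decoded blocks, or null if rank < k.
    public static int[] decode(int[] coeff, int[] value, int k) {
        int[] c = coeff.clone(), v = value.clone();
        for (int col = 0; col < k; col++) {
            int pivot = -1;
            for (int r = col; r < c.length; r++)
                if ((c[r] & (1 << col)) != 0) { pivot = r; break; }
            if (pivot < 0) return null;               // not full rank
            int tc = c[col]; c[col] = c[pivot]; c[pivot] = tc;
            int tv = v[col]; v[col] = v[pivot]; v[pivot] = tv;
            for (int r = 0; r < c.length; r++)        // clear this column
                if (r != col && (c[r] & (1 << col)) != 0) {
                    c[r] ^= c[col]; v[r] ^= v[col];
                }
        }
        int[] out = new int[k];
        for (int i = 0; i < k; i++) out[i] = v[i];
        return out;
    }

    public static void main(String[] args) {
        // Original blocks b0 = 7, b1 = 9; peers hold b0^b1 and b1.
        int[] coeff = { 0b11, 0b10 };
        int[] value = { 7 ^ 9, 9 };
        int[] blocks = decode(coeff, value, 2);
        System.out.println(blocks[0] + "," + blocks[1]); // 7,9
    }
}
```

Because any full-rank set of combinations decodes, a joining client can pick parents arbitrarily instead of hunting for peers that hold specific segments.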

2012, Java IEEE Project Abstracts - Part 3

JAVA IEEE 2012 PROJECT ABSTRACTS

DOMAIN - MOBILE COMPUTING
APPROXIMATION ALGORITHMS FOR DATA BROADCAST IN WIRELESS NETWORKS
Broadcasting is a fundamental operation in wireless networks and plays an important role in communication protocol design. In multihop wireless networks, however, interference at a node due to simultaneous transmissions from its neighbors makes minimum-latency broadcast nontrivial to achieve; the problem is known to be NP-complete.
We present a simple 12-approximation algorithm for the one-to-all broadcast problem that improves all previously known guarantees for this problem. We then consider the all-to-all broadcast problem where each node sends its own message to all other nodes. For the all-to-all broadcast problem, we present two algorithms with approximation ratios of 20 and 34, improving the best result available in the literature.
Finally, we report experimental evaluation of our algorithms. Our studies indicate that our algorithms perform much better in practice than the worst-case guarantees provided in the theoretical analysis and achieve up to 37 percent performance improvement over existing schemes


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - MOBILE COMPUTING
CHANNEL ESTIMATION FOR OPPORTUNISTIC SPECTRUM ACCESS: UNIFORM AND RANDOM SENSING
The knowledge of channel statistics can be very helpful in making sound opportunistic spectrum access decisions. It is therefore desirable to be able to efficiently and accurately estimate channel statistics. In this paper, we study the problem of optimally placing sensing/sampling times over a time window so as to obtain the best estimate of the parameters of an on-off renewal channel. We are particularly interested in a sparse sensing regime with a small number of samples relative to the time window size.
Using Fisher information as a measure, we analytically derive the best and worst sensing sequences under a sparsity condition. We also present a way to derive the best/worst sequences without this condition using a dynamic programming approach. In both cases, the worst turns out to be the uniform sensing sequence, where sensing times are evenly spaced within the window. Interestingly, the best sequence is also uniform but with a much smaller sensing interval that requires a priori knowledge of the channel parameters.
With these results we argue that without a priori knowledge, a robust sensing strategy should be a randomized strategy. We then compare different random schemes using a family of distributions generated by the circular β ensemble, and propose an adaptive sensing scheme to effectively track time-varying channel parameters. We further discuss the applicability of compressive sensing for this problem


*------------*------------*------------*------------*------------*------------*

DOMAIN - MOBILE COMPUTING
AN MIMO CONFIGURATION MODE AND MCS LEVEL SELECTION SCHEME BY FUZZY Q-LEARNING FOR HSPA+ SYSTEMS
In this paper, we propose a fuzzy Q-learning-based MIMO configuration mode and MCS level (FQL-MOMS) selection scheme for high speed packet access evolution (HSPA+) systems. The FQL-MOMS selection scheme intends to enhance the system throughput while guaranteeing the block error rate (BLER) requirement.
It determines an appropriate MIMO configuration mode and MCS (modulation and coding scheme) level for packet data transmission in HSPA+ systems, under situations where the channel status is varying and the channel quality indication (CQI) report is delayed.
The FQL-MOMS scheme considers not only the reported CQI and the last transmission result but also the BLER performance metric and the transmission efficiency. Moreover, it is effectively configured, where the fuzzy rules and the reinforcement signals for the Q-learning algorithm are sophisticatedly designed.
Simulation results show that the proposed FQL-MOMS scheme increases the system throughput by up to 49.3 and 35.9 percent compared to the conventional adaptive threshold selection (ATS) scheme [12] and the Q-HARQ scheme [14], respectively, while fulfilling the BLER requirement.
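At the core of any Q-learning-based selector is the tabular update rule Q(s,a) ← Q(s,a) + α(r + γ·max Q(s',·) − Q(s,a)). The sketch below shows only that plain update; the paper's contribution of fuzzy rules and carefully designed reinforcement signals is not reproduced here.

```java
// Plain tabular Q-learning update, the core of schemes like FQL-MOMS:
//   Q(s,a) <- Q(s,a) + alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))
// States/actions are just indices here (e.g., CQI level / MIMO mode).
public class QLearning {
    public final double[][] q;   // Q(state, action) table
    final double alpha, gamma;   // learning rate, discount factor

    public QLearning(int states, int actions, double alpha, double gamma) {
        q = new double[states][actions];
        this.alpha = alpha; this.gamma = gamma;
    }

    public void update(int s, int a, double reward, int sNext) {
        double best = q[sNext][0];
        for (double v : q[sNext]) best = Math.max(best, v);
        q[s][a] += alpha * (reward + gamma * best - q[s][a]);
    }

    public static void main(String[] args) {
        QLearning ql = new QLearning(2, 2, 0.5, 0.9);
        ql.update(0, 1, 1.0, 1);       // reward 1 for action 1 in state 0
        System.out.println(ql.q[0][1]); // 0.5 * (1.0 + 0.9*0 - 0) = 0.5
    }
}
```

In the fuzzy variant, several rules fire partially for one observation and the update is weighted by each rule's firing strength; the arithmetic per rule is still this update.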


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - MOBILE COMPUTING
A NOVEL MAC SCHEME FOR MULTICHANNEL COGNITIVE RADIO AD HOC NETWORKS
This paper proposes a novel medium access control (MAC) scheme for multichannel cognitive radio (CR) ad hoc networks, which achieves high throughput of CR system while protecting primary users (PUs) effectively.
In designing the MAC scheme, we consider that the PU signal may cover only a part of the network and that nodes can obtain different sensing results for the same PU, even on the same channel. By allowing the nodes to use the channel on which the PU exists as long as their transmissions do not disturb the PU, the proposed MAC scheme fully utilizes the spectrum access opportunity.
To mitigate the hidden PU problem inherent to multichannel CR networks where the PU signal is detectable only to some nodes, the proposed MAC scheme adjusts the sensing priorities of channels at each node with the PU detection information of other nodes and also limits the transmission power of a CR node to the maximum allowable power for guaranteeing the quality of service requirement of PU.
The performance of the proposed MAC scheme is evaluated by using simulation. The simulation results show that the CR system with the proposed MAC accomplishes good performance in throughput and packet delay, while protecting PUs properly.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - MOBILE COMPUTING
A COST ANALYSIS FRAMEWORK FOR NEMO PREFIX DELEGATION-BASED SCHEMES
Network Mobility (NEMO) efficiently manages the mobility of multiple nodes that move together as a mobile network. A major limitation of the basic protocol in NEMO is the inefficient route between end hosts. A number of prefix delegation-based schemes have been proposed in the literature to solve the route optimization problem in NEMO.
Approaches used by the schemes trade off delivery of packets through the partially optimized route with signaling and other processing overheads. Cost of delivering packets through the partially optimized route along with signaling and processing cost need to be measured to find out the gain from tradeoff. However, cost analysis performed so far on NEMO protocols consider only the cost of signaling.
In this paper, we develop an analytical framework to measure the costs of the basic protocol for NEMO and of four representative prefix delegation-based schemes. Our results show that the cost of packet delivery through the partially optimized route dominates the other costs.
Therefore, optimizing the route completely is preferable to reduction of signaling as far as cost of network mobility is concerned. Our cost analysis framework will help in decision making to select the best route optimization scheme depending on the load imposed by the scheme on the infrastructure.


*------------*------------*------------*------------*------------*------------*

DOMAIN - MOBILE COMPUTING
USING ROTATABLE AND DIRECTIONAL (R&D) SENSORS TO ACHIEVE TEMPORAL COVERAGE OF OBJECTS AND ITS SURVEILLANCE APPLICATION
Due to hardware design or cost considerations, sensors may possess sector-like sensing coverage. Furthermore, with stepper motors, sensors can rotate to cover the objects around them. Sensors of this type are called rotatable and directional (R&D) sensors. Through rotation, R&D sensors provide temporal coverage to objects by “periodically” detecting their existence.
In the paper, we first develop an event-driven surveillance system by R&D sensors, where objects are monitored by the sensors equipped with infrared detectors and cameras. When an object is taken away, the sensor monitoring the object reports a warning message along with detailed snapshots from the surroundings.
Then, motivated by the system, we formulate an R&D sensor deployment problem, which tries to deploy the minimum number of R&D sensors to cover a given set of objects such that each object is covered for a ratio δ (0 < δ ≤ 1) of the time in every frame. We show this problem to be NP-hard and propose two efficient heuristics.
The maximum covering deployment (MCD) heuristic iteratively deploys a sensor to cover more objects, and performs well when objects congregate together. The disk-overlapping deployment (DOD) heuristic deploys sensors to cover the joint sectors of overlapped disks, so it works better when objects are arbitrarily placed in the sensing field.
The paper contributes in defining a new temporal coverage model by R&D sensors, developing a surveillance application for this model, and proposing efficient heuristics to reduce the deployment cost
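The greedy idea behind MCD can be sketched as classic maximum-coverage selection: repeatedly place a sensor at the candidate position covering the most not-yet-covered objects. In this sketch, candidate positions and their coverage sets are given as plain sets; sector geometry, rotation, and the δ time-ratio constraint are abstracted away, so it is an illustration of the heuristic's skeleton, not the paper's algorithm.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Greedy maximum-coverage skeleton of an MCD-style deployment heuristic.
public class GreedyDeploy {
    // covers.get(p) = object indices a sensor at candidate position p covers.
    // Returns the chosen positions in order of selection.
    public static List<Integer> deploy(List<Set<Integer>> covers, int numObjects) {
        Set<Integer> uncovered = new HashSet<>();
        for (int i = 0; i < numObjects; i++) uncovered.add(i);
        List<Integer> chosen = new ArrayList<>();
        while (!uncovered.isEmpty()) {
            int best = -1, bestGain = 0;
            for (int p = 0; p < covers.size(); p++) {
                Set<Integer> gain = new HashSet<>(covers.get(p));
                gain.retainAll(uncovered);            // newly covered objects
                if (gain.size() > bestGain) { bestGain = gain.size(); best = p; }
            }
            if (best < 0) break;        // remaining objects are uncoverable
            uncovered.removeAll(covers.get(best));
            chosen.add(best);
        }
        return chosen;
    }

    public static void main(String[] args) {
        List<Set<Integer>> covers = Arrays.asList(
            new HashSet<>(Arrays.asList(0, 1, 2)),
            new HashSet<>(Arrays.asList(2, 3)),
            new HashSet<>(Arrays.asList(3)));
        System.out.println(deploy(covers, 4)); // picks position 0, then 1
    }
}
```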


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - MOBILE COMPUTING
UNDERSTANDING NODE LOCALIZABILITY OF WIRELESS AD-HOC AND SENSOR NETWORKS
Location awareness is highly critical for wireless ad-hoc and sensor networks. Many efforts have been made to solve the problem of whether or not a network can be localized. Nevertheless, based on data collected from a working sensor network, it is observed that the network is not always entirely localizable.
Theoretical analyses also suggest that, in most cases, it is unlikely that all nodes in a network are localizable, although a (large) portion of the nodes can be uniquely located. Existing studies merely examine whether or not a network is localizable as a whole; yet two fundamental questions remain unaddressed: First, given a network configuration, is a specific node localizable? Second, how many nodes in a network can be located, and which are they? In this study, we analyze the limitations of previous works and propose the novel concept of node localizability.
By deriving the necessary and sufficient conditions for node localizability, for the first time, it is possible to analyze how many nodes one can expect to locate in sparsely or moderately connected networks.
To validate this design, we implement our solution on a real-world system, and the experimental results show that node localizability provides useful guidelines for network deployment and other location-based services.


*------------*------------*------------*------------*------------*------------*

DOMAIN - MOBILE COMPUTING
STORM: A FRAMEWORK FOR INTEGRATED ROUTING, SCHEDULING AND TRAFFIC MANAGEMENT IN AD HOC NETWORKS
A cross-layer framework, called STORM (Scheduling and Traffic Management in Ordered Routing Meshes), is introduced for the effective dissemination of real-time and elastic traffic in multi-hop wireless networks.
Unicast and multicast routes are established in coordination with the scheduling of transmissions and bandwidth reservations, such that bandwidth and delay guarantees can be enforced on a per-hop and end-to-end basis. The routes established in STORM are shown to be loop-free, and real-time packets forwarded along these routes are shown to have bounded end-to-end delays.
Results from detailed simulation experiments show that, compared to a protocol stack consisting of 802.11 DCF for channel access, AODV or OLSR for unicast routing, and ODMRP for multicast routing, STORM attains similar or better performance for elastic traffic, and up to two orders of magnitude improvement in end-to-end delays, with twice the amount of data delivery for real-time traffic while inducing considerably less communication overhead.


*------------*------------*------------*------------*------------*------------*

DOMAIN - MOBILE COMPUTING
THE BOOMERANG PROTOCOL: TYING DATA TO GEOGRAPHIC LOCATIONS IN MOBILE DISCONNECTED NETWORKS
We present the boomerang protocol to efficiently retain information at a particular geographic location in a sparse network of highly mobile nodes without using infrastructure networks. To retain information around a certain physical location, each mobile device passing that location will carry the information for a short while.
This approach can become challenging for remote locations around which only few nodes pass by. To address this challenge, the boomerang protocol, similar to delay-tolerant communication, first allows a mobile node to carry packets away from their location of origin and periodically returns them to the anchor location.
A unique feature of this protocol is that it records the geographical trajectory while moving away from the origin and exploits the recorded trajectory to optimize the return path. Simulations using automotive traffic traces for a southern New Jersey region show that the boomerang protocol improves packet return rate by 70 percent compared to a baseline shortest path routing protocol.
This performance gain can become even more significant when the road map is less connected. Finally, we look at adaptive protocols that can return information within specified time limits


*------------*------------*------------*------------*------------*------------*

DOMAIN - MOBILE COMPUTING
PROSPECT: A PROACTIVE SPECTRUM HANDOFF FRAMEWORK FOR COGNITIVE RADIO AD HOC NETWORKS WITHOUT COMMON CONTROL CHANNEL
Cognitive Radio (CR) technology is a promising solution to enhance the spectrum utilization by enabling unlicensed users to exploit the spectrum in an opportunistic manner. Since unlicensed users are temporary visitors to the licensed spectrum, they are required to vacate the spectrum when a licensed user reclaims it.
Due to the randomness of the appearance of licensed users, disruptions to both licensed and unlicensed communications are often difficult to prevent, which may lead to low throughput of both licensed and unlicensed communications. In this paper, a proactive spectrum handoff framework for CR ad hoc networks, ProSpect, is proposed to address these concerns.
In the proposed framework, Channel-Switching (CW) policies and a proactive spectrum handoff protocol are proposed to let unlicensed users vacate a channel before a licensed user utilizes it to avoid unwanted interference. Network coordination schemes for unlicensed users are also incorporated into the spectrum handoff protocol design.
Moreover, a distributed channel selection scheme to eliminate collisions among unlicensed users in a multiuser spectrum handoff scenario is proposed. In our proposed framework, unlicensed users coordinate with each other without using a Common Control Channel (CCC), which is highly adaptable in a spectrum-varying environment.
We compare our proposed proactive spectrum handoff protocol with a reactive spectrum handoff protocol, under which unlicensed users switch channels after collisions with licensed transmissions occur. Simulation results show that our proactive spectrum handoff outperforms the reactive spectrum handoff approach in terms of higher throughput and fewer collisions to licensed users.
Furthermore, our distributed channel selection can achieve higher packet delivery rate in a multiuser spectrum handoff scenario, compared with existing channel selection schemes


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - MOBILE COMPUTING
ROBUST RELATIVE LOCATION ESTIMATION IN WIRELESS SENSOR NETWORKS WITH INEXACT POSITION PROBLEMS
In this paper, the relative location estimation problem, a prominent issue faced by several applications in wireless sensor networks (WSNs), is considered. Sensors are classified into two categories: location-aware and location-unaware sensors. To estimate the positions of location-unaware sensors, exact positions are often assumed for location-aware sensors. However, in practice, such precise data may not be available.
Therefore, determining the positions of location-unaware sensors in the presence of inexact positions of location-aware sensors is the primary focus of this study. A robust min-max optimization method is proposed for the relative location estimation problem by minimizing the worst-case estimation error.
The corresponding optimization problem is originally nonconvex, but after it is transformed into a convex semidefinite program (SDP), it can be solved by existing numerical techniques. In the presence of inexact positions of location-aware sensors, the robustness of the proposed approach is validated by simulations under different WSN topologies.
Modified maximum-likelihood (ML) estimation and second-order cone programming (SOCP) relaxation methods have been used for localization in comparison with the proposed approach


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - MOBILE COMPUTING
ON THE VULNERABILITIES OF CSI IN MIMO WIRELESS COMMUNICATION SYSTEMS
Multiple-input multiple-output (MIMO) technologies are a popular choice for emerging wireless systems due to their promised gains in throughput and reliability. In order to realize any gains over traditional non-MIMO communication systems, these systems must possess accurate knowledge of the wireless channel.
In this paper, we investigate strategies for disrupting MIMO communications by developing attacks that target the often over-looked, but essential, channel estimation procedure.
Our study focuses on the two most popular and well-known MIMO techniques: the capacity-achieving SVD-based MIMO scheme, and the Alamouti space-time block code (STBC), which spans many protocols including 802.11n, WiMAX, and 3GPP. We augment theoretical and simulation results with real-world experimentation using the USRP/GNU Radio software-defined radio platform.
We also present novel methodology to protect the channel estimation procedure from such attacks by embedding authentication messages into physical layer features of the transmissions


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - MOBILE COMPUTING
NATURE-INSPIRED SELF-ORGANIZATION, CONTROL, AND OPTIMIZATION IN HETEROGENEOUS WIRELESS NETWORKS
In this paper, we present new models and algorithms for control and optimization of a class of next generation communication networks: Hierarchical Heterogeneous Wireless Networks (HHWNs), under real-world physical constraints. Two biology-inspired techniques, a Flocking Algorithm (FA) and a Particle Swarm Optimizer (PSO), are investigated in this context.
Our model is based on the control framework at the physical layer presented previously by the authors. We first develop a nonconvex mathematical model for HHWNs. Second, we propose a new FA for self-organization and control of the backbone nodes in an HHWN by collecting local information from end users.
Third, we employ PSO, a widely used artificial intelligence algorithm, to directly optimize the HHWN by collecting global information from the entire system. A comprehensive evaluation measurement during the optimization process is developed.
In addition, the relationship between HHWN and FA and the comparison of FA and PSO are discussed, respectively. Our novel framework is examined in various dynamic scenarios. Experimental results demonstrate that FA and PSO both outperform current algorithms for the self-organization and optimization of HHWNs while showing different characteristics with respect to convergence speed and quality of solutions
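PSO itself is a generic algorithm, so a minimal version is easy to state. The sketch below minimizes a one-dimensional quadratic rather than an HHWN objective; the parameter values (inertia 0.7, both pull coefficients 1.5) are common textbook defaults, not the ones used in the paper.

```java
import java.util.Random;

// Minimal particle swarm optimizer: each particle is pulled toward its own
// best position (pBest) and the swarm's best (gBest). Here it minimizes
// f(x) = (x - 3)^2 in one dimension as a stand-in for the HHWN objective.
public class Pso {
    static double f(double x) { return (x - 3) * (x - 3); }

    public static double minimize(int particles, int iters, long seed) {
        Random rnd = new Random(seed);
        double[] x = new double[particles], v = new double[particles];
        double[] pBest = new double[particles], pBestVal = new double[particles];
        double gBest = 0, gBestVal = Double.MAX_VALUE;
        for (int i = 0; i < particles; i++) {
            x[i] = rnd.nextDouble() * 10 - 5;        // initialize in [-5, 5]
            pBest[i] = x[i];
            pBestVal[i] = f(x[i]);
            if (pBestVal[i] < gBestVal) { gBestVal = pBestVal[i]; gBest = x[i]; }
        }
        double w = 0.7, c1 = 1.5, c2 = 1.5;          // inertia, pulls
        for (int t = 0; t < iters; t++) {
            for (int i = 0; i < particles; i++) {
                v[i] = w * v[i]
                     + c1 * rnd.nextDouble() * (pBest[i] - x[i])
                     + c2 * rnd.nextDouble() * (gBest - x[i]);
                x[i] += v[i];
                double val = f(x[i]);
                if (val < pBestVal[i]) { pBestVal[i] = val; pBest[i] = x[i]; }
                if (val < gBestVal) { gBestVal = val; gBest = x[i]; }
            }
        }
        return gBest;
    }

    public static void main(String[] args) {
        System.out.println(minimize(20, 200, 42)); // best position found, near 3
    }
}
```

In the paper's setting, a particle would encode backbone node positions and f would score the whole HHWN configuration from global information.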


*------------*------------*------------*------------*------------*------------*

DOMAIN - MOBILE COMPUTING
MODELING AND PERFORMANCE EVALUATION OF BACKOFF MISBEHAVING NODES IN CSMA/CA NETWORKS
Backoff misbehavior, in which a wireless node deliberately manipulates its backoff time, can induce significant network problems, such as severe unfairness and denial-of-service. Although great progress has been made towards the design of countermeasures to backoff misbehavior, little attention has been focused on quantifying the gain of backoff misbehaviors.
In this paper, to assess the gain that misbehaving nodes can obtain, we define and study two general classes of backoff misbehavior: continuous misbehavior, which keeps manipulating the backoff time unless it is disabled by countermeasures, and intermittent misbehavior, which tends to evade the detection of countermeasures by performing misbehavior sporadically. Our approach is to introduce a new performance metric, namely order gain, to characterize the performance benefits of misbehaving nodes in comparison to legitimate nodes in CSMA/CA-based wireless networks.
We derive the order gains of both continuous and intermittent misbehaviors and further investigate the relation between our metric, order gain, and the throughput gain for a misbehaving node. We show that in IEEE 802.11 networks, the throughput ratio of a backoff misbehaving node to a legitimate node is either bounded above or proportional to the number of legitimate nodes.
We use both simulations and experiments to validate our theoretical analysis and to further demonstrate the impact of a wide range of backoff misbehaviors on network performance in CSMA/CA-based wireless networks


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - MOBILE COMPUTING
LOW POWER CONSUMPTION SOLUTIONS FOR MOBILE INSTANT MESSAGING
Instant messaging (IM) services enable real-time text and multimedia exchange and online presence awareness. Users typically log onto instant messaging services persistently to discover available friends and also to be discovered.
However, our analysis shows that the frequent exchange of presence information incurs significant power consumption on mobile devices over cellular or wireless local area networks. Such a power consumption penalty can render persistent instant messaging infeasible for battery-powered mobile devices.
In this paper, we propose several solutions to mitigate the power consumption problem. By reducing the network access and keeping mobile devices in the sleep mode as much as possible, these solutions achieve significant power saving. The power consumption of the proposed solutions is derived analytically in this paper and the proposed solutions are implemented using a Jabber-based architecture.
Actual power measurement results show that the power consumption of the proposed solutions agrees well with our analysis, and significant power saving can be achieved on mobile handsets with our low power consumption solutions implemented.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - MOBILE COMPUTING
GAME-THEORETIC ANALYSIS OF COOPERATION INCENTIVE STRATEGIES IN MOBILE AD HOC NETWORKS
In mobile ad hoc networks (MANETs), tasks are conducted based on the cooperation of nodes in the networks. However, since the nodes are usually constrained by limited computation resources, selfish nodes may refuse to be cooperative. Reputation systems and price-based systems are two main solutions to the node non-cooperation problem.
A reputation system evaluates node behaviors by reputation values and uses a reputation threshold to distinguish trustworthy nodes and untrustworthy nodes. A price-based system uses virtual cash to control the transactions of a packet forwarding service. Although these two kinds of systems have been widely used, very little research has been devoted to investigating the effectiveness of the node cooperation incentives provided by the systems.
In this paper, we use game theory to analyze the cooperation incentives provided by these two systems and by a system with no cooperation incentive strategy. We find that the strategies of using a threshold to determine the trustworthiness of a node in the reputation system and of rewarding cooperative nodes in the price-based system may be manipulated by clever or wealthy but selfish nodes.
Guided by these investigation results, we propose and study an integrated system. Theoretical and simulation results show the superiority of the integrated system over an individual reputation system and a price-based system in terms of the effectiveness of cooperation incentives and selfish node detection.
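The threshold-manipulation finding is easy to reproduce in a toy simulation (our own illustration, not the paper's game model): a "clever" selfish node defects whenever its sliding-window reputation can absorb the hit, hovering exactly at the threshold.

```python
def simulate(strategy, rounds=1000, threshold=0.5, window=10):
    """Reputation = fraction of cooperative acts in a sliding window; a node
    is served in a round only while its reputation is at or above threshold."""
    history = []  # 1 = cooperated, 0 = defected
    served = cooperated = 0
    for _ in range(rounds):
        recent = history[-window:]
        rep = sum(recent) / len(recent) if recent else 1.0
        if rep >= threshold:
            served += 1
        act = strategy(history, threshold, window)
        history.append(act)
        cooperated += act
    return served, cooperated

def honest(history, threshold, window):
    return 1  # always cooperate

def clever_selfish(history, threshold, window):
    # Defect unless one more defection would drop the window below threshold.
    would_be = sum(history[-(window - 1):]) / window  # reputation if we defect
    return 0 if would_be >= threshold else 1

print(simulate(honest))          # served every round, cooperated every round
print(simulate(clever_selfish))  # served every round, cooperated only half the time
```

The clever node ends up served in every round while cooperating only about half the time, which is exactly the manipulation the analysis warns about.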


*------------*------------*------------*------------*------------*------------*

DOMAIN - MOBILE COMPUTING
ERROR RESILIENT ESTIMATION AND ADAPTIVE BINARY SELECTION FOR FAST AND RELIABLE IDENTIFICATION OF RFID TAGS IN ERROR-PRONE CHANNEL
In RFID systems, far-field passive tags send information using backscattering. The signal level is typically very small, so channel errors may occur frequently during transmission. Due to channel errors, the performance of RFID tag identification under an error-prone channel is degraded compared to that under an error-free channel.
In this paper, we propose a novel error resilient estimation and adaptive binary selection to overcome the problem of channel errors. Our proposed error resilient estimation algorithm can estimate the number of tags and the channel state accurately regardless of frame errors.
Our proposed adaptive binary selection also reduces the idle slots caused by frame errors. Performance analysis and simulation results show that the proposed algorithm consumes up to 20 percent fewer time slots than the binary tree protocol and dynamic framed slotted ALOHA (DFSA) under various packet error rate (PER) conditions.
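For background flavor, here is how frame-based tag estimation works in the error-free case, using the classic framed-slotted-ALOHA heuristic (Schoute's rule of thumb: roughly 2.39 tags hide behind each collided slot). This is an illustration only; the paper's contribution is making such estimates robust to frame errors, which this sketch does not attempt.

```python
import random

def read_frame(num_tags, frame_size, rng):
    """Simulate one framed-slotted-ALOHA frame: each tag picks a slot
    uniformly; slots end up empty, singleton (a successful read), or collided."""
    slots = [0] * frame_size
    for _ in range(num_tags):
        slots[rng.randrange(frame_size)] += 1
    empty = sum(1 for x in slots if x == 0)
    success = sum(1 for x in slots if x == 1)
    return empty, success, frame_size - empty - success

def estimate_tags(success, collision):
    # Schoute's rule of thumb: a collided slot hides about 2.39 tags.
    return success + 2.39 * collision

rng = random.Random(7)
e, s, c = read_frame(200, 256, rng)
print(f"slots: {e} empty, {s} read, {c} collided; estimate {estimate_tags(s, c):.0f} (true 200)")
```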


*------------*------------*------------*------------*------------*------------*

DOMAIN - MOBILE COMPUTING
LEVERAGING THE ALGEBRAIC CONNECTIVITY OF A COGNITIVE NETWORK FOR ROUTING DESIGN
In this paper, we consider the implications of spectrum heterogeneity on connectivity and routing in a Cognitive Radio Ad Hoc Network (CRAHN). We study the Laplacian spectrum of the CRAHN graph when the activity of primary users is considered. We introduce the cognitive algebraic connectivity, i.e., the second-smallest eigenvalue of the Laplacian of a graph, in a cognitive scenario.
Through this notion we provide a methodology to evaluate the connectivity of CRAHNs and consequently introduce a utility function that is shown to be effective in capturing key characteristics of CRAHN paths. This model provides a unique metric that captures network connectivity, path length, and the impact of primary users.
Moreover, the proposed metric penalizes paths where spectrum band switchings are highly probable. We design all the components of our routing framework, named Gymkhana, and we present a twofold performance verification: one from a topological perspective to show all the potentialities of the proposed routing approach, and the other considering network traffic to evaluate the performance in terms of end-to-end delay and packet delivery ratio.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - MOBILE COMPUTING
FAST RELEASE/CAPTURE SAMPLING IN LARGE-SCALE SENSOR NETWORKS
Efficient estimation of global information is a common requirement for many wireless sensor network applications. Examples include counting the number of nodes alive in the network and measuring the scale of physically correlated events.
These tasks must be accomplished at extremely low overhead due to the severe resource limitation of sensor nodes, which poses a challenge for large-scale sensor networks. In this paper, we develop a novel protocol FLAKE to efficiently and accurately estimate the global information of large-scale sensor networks based on the sparse sampling theory.
Specifically, FLAKE disseminates a small number of messages called seeds into the network and issues a query about which nodes received a seed. The number of nodes that have the information of interest can be estimated by counting the seeds disseminated, the nodes queried, and the nodes that received a seed. FLAKE can be easily implemented in a distributed manner due to its simplicity.
Moreover, desirable trade-offs can be achieved between the accuracy of estimation and the system overhead. Our simulations show that FLAKE significantly outperforms several existing schemes in accuracy, delay, and message overhead.
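A minimal sketch of the seed-counting idea, using the classic capture-recapture (Lincoln-Petersen) estimator; the real FLAKE protocol is more elaborate, and every parameter value below is made up for illustration.

```python
import random

def flake_style_estimate(population, num_seeds, num_queried, rng):
    """Seed random nodes, query a random subset, and apply the classic
    capture-recapture (Lincoln-Petersen) estimate of the population size."""
    seeded = set(rng.sample(range(population), num_seeds))
    queried = rng.sample(range(population), num_queried)
    recaptured = sum(1 for node in queried if node in seeded)
    if recaptured == 0:
        return float("inf")  # too few seeds for any estimate
    return num_seeds * num_queried / recaptured

rng = random.Random(42)
est = flake_style_estimate(population=5000, num_seeds=200, num_queried=500, rng=rng)
print(f"estimated population: {est:.0f} (true 5000)")
```

More seeds and more queried nodes tighten the estimate at the cost of more messages, which is the accuracy/overhead trade-off the abstract describes.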


*------------*------------*------------*------------*------------*------------*

DOMAIN - MOBILE COMPUTING
EFFICIENT AND FAIR BANDWIDTH ALLOCATION IN MULTI-CHANNEL COGNITIVE RADIO NETWORKS
Cognitive radio improves spectrum efficiency by allowing secondary users (SUs) to dynamically exploit the idle spectrum owned by primary users (PUs).
This paper studies optimal bandwidth allocation of SUs for throughput efficiency. Consider the following tradeoff: an SU increases its instantaneous throughput by accessing more spectrum, but channel access/switching overhead, contention among multiple SUs, and dynamic PU activity create higher liability for larger bandwidths. So how much is too much? In this paper, we study the optimal bandwidth allocation for multiple SUs.
Our approach is twofold. We first study the optimal bandwidth an SU should use to maximize per-SU throughput in the long term. The optimal bandwidth is derived in the context of dynamic PU activity, where we consider both independent and correlated PU channel scenarios while accounting for the effects of channel-switching overhead.
We further consider the case of suboptimal spectrum use by SUs in the short term due to PU activity dynamics. We propose an efficient channel reconfiguration scheme to improve SUs' performance. We use real PU channel activity traces in the simulations to validate our results. The work sheds light on the design of spectrum sharing protocols in cognitive radio networks.


*------------*------------*------------*------------*------------*------------*

DOMAIN - MOBILE COMPUTING
ENERGY-EFFICIENT COOPERATIVE VIDEO DISTRIBUTION WITH STATISTICAL QOS PROVISIONS OVER WIRELESS NETWORKS
For real-time video broadcast where multiple users are interested in the same content, mobile-to-mobile cooperation can be utilized to improve delivery efficiency and reduce network utilization. Under such cooperation, however, real-time video transmission requires end-to-end delay bounds.
Due to the inherently stochastic nature of wireless fading channels, deterministic delay bounds are prohibitively difficult to guarantee. For a scalable video structure, an alternative is to provide statistical guarantees using the concept of effective capacity/bandwidth by deriving quality of service exponents for each video layer.
Using this concept, we formulate the resource allocation problem for general multihop multicast network flows and derive the optimal solution that minimizes the total energy consumption while guaranteeing a statistical end-to-end delay bound on each network path. A method is described to compute the optimal resource allocation at each node in a distributed fashion.
Furthermore, we propose low complexity approximation algorithms for energy-efficient flow selection from the set of directed acyclic graphs forming the candidate network flows. The flow selection and resource allocation process is adapted for each video frame according to the channel conditions on the network links.
Considering different network topologies, results demonstrate that the proposed resource allocation and flow selection algorithms provide notable performance gains with small optimality gaps at a low computational cost.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - MOBILE COMPUTING
DELAY OPTIMAL SCHEDULING FOR COGNITIVE RADIOS WITH COOPERATIVE BEAMFORMING: A STRUCTURED MATRIX-GEOMETRIC METHOD
There have been increasing interests in integrating cooperative diversity into Cognitive Radios (CRs). However, conventional cooperative diversity protocols require at least two randomly available idle timeslots or temporal spectrum holes for one transmission, thus leading to limited throughput and/or large latency.
In this paper, we propose a novel cross-layer approach for efficient scheduling in CR systems with bursty secondary traffic. Specifically, cooperative beamforming is exploited for Secondary Users (SUs) to access busy timeslots or spatial spectrum holes without causing interference to primary users.
We first propose a basic cooperative beaMforming and Automatic repeat request aided oppoRtunistic speCtrum scHeduling (MARCH) scheme to balance available spectrum resources, namely temporal and spatial spectrum holes, between the source and the relays.
To analyze the proposed scheme, we develop a tandem queueing framework, which captures bursty traffic arrival, dynamic availability of spectrum holes, and time-varying channel fading. The stable throughput region and the average delay are characterized using a structured matrix-analytical method.
We then obtain delay optimal scheduling schemes for various scenarios by jointly optimizing the scheduling parameters. Finally, we propose a modified scheme, MARCH-IR, which combines MARCH with incremental relay selection to further improve system performance. Simulation results reveal that the proposed schemes provide significant Quality of Service (QoS) gains over conventional scheduling schemes that access only temporal spectrum holes.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - MOBILE COMPUTING
DETECTION OF SELFISH MANIPULATION OF CARRIER SENSING IN 802.11 NETWORKS
Recently, tuning the clear channel assessment (CCA) threshold in conjunction with power control has been considered for improving the performance of WLANs. However, we show that CCA tuning can be exploited by selfish nodes to obtain an unfair share of the available bandwidth.
Specifically, a selfish entity can manipulate the CCA threshold to ignore ongoing transmissions; this increases the probability of accessing the medium and provides the entity a higher, unfair share of the bandwidth. We experiment on our 802.11 testbed to characterize the effects of CCA tuning on both isolated links and in 802.11 WLAN configurations.
We focus on AP-client(s) configurations, proposing a novel approach to detect this misbehavior. A misbehaving client is unlikely to recognize low power receptions as legitimate packets; by intelligently sending low power probe messages, an AP can efficiently detect a misbehaving node.
Our key contributions are: 1) We are the first to quantify the impact of selfish CCA tuning via extensive experimentation on various 802.11 configurations. 2) We propose a lightweight scheme for detecting selfish nodes that inappropriately increase their CCA thresholds. 3) We extensively evaluate our system on our testbed; its accuracy is 95 percent, while the false positive rate is less than 5 percent.
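The probing idea can be sketched as a toy simulation: a compliant client decodes probes arriving above its normal receive floor, while a client that raised its CCA threshold above the probe power never decodes them, so its ACK rate collapses. The thresholds and channel loss rate below are assumptions, not testbed values.

```python
import random

def probe_response_rate(client_rx_floor_dbm, probe_power_dbm, num_probes, rng):
    """Fraction of low-power probes a client acknowledges. A client that has
    raised its CCA/receive threshold above the probe power never decodes
    them; a compliant client misses probes only through random channel loss."""
    loss_rate = 0.1  # assumed random channel loss
    acks = sum(
        1
        for _ in range(num_probes)
        if probe_power_dbm >= client_rx_floor_dbm and rng.random() > loss_rate
    )
    return acks / num_probes

rng = random.Random(1)
compliant = probe_response_rate(-82, probe_power_dbm=-75, num_probes=200, rng=rng)
misbehaving = probe_response_rate(-60, probe_power_dbm=-75, num_probes=200, rng=rng)
print(f"compliant ACK rate: {compliant:.2f}, misbehaving ACK rate: {misbehaving:.2f}")
```

A detector can then flag any client whose ACK rate falls well below the rate expected from channel loss alone.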


*------------*------------*------------*------------*------------*------------*

DOMAIN - MOBILE COMPUTING
CHARACTERIZING THE SECURITY IMPLICATIONS OF THIRD-PARTY EMERGENCY ALERT SYSTEMS OVER CELLULAR TEXT MESSAGING SERVICES
Cellular text messaging services are increasingly being relied upon to disseminate critical information during emergencies. Accordingly, a wide range of organizations including colleges and universities now partner with third-party providers that promise to improve physical security by rapidly delivering such messages.
Unfortunately, these products do not work as advertised due to limitations of cellular infrastructure and therefore provide a false sense of security to their users. In this paper, we perform the first extensive investigation and characterization of the limitations of an Emergency Alert System (EAS) using text messages as a security incident response mechanism.
We show that emergency alert systems built on text messaging cannot meet the 10-minute delivery requirement mandated by the WARN Act, and can also potentially cause other voice and SMS traffic to be blocked at rates upward of 80 percent. We then show that our results are representative of reality by comparing them to a number of documented but not previously understood failures.
Finally, we analyze a targeted messaging mechanism as a means of efficiently using currently deployed infrastructure and third-party EAS. In so doing, we demonstrate that this increasingly deployed security infrastructure does not achieve its stated requirements for large populations.


*------------*------------*------------*------------*------------*------------*

DOMAIN - MOBILE COMPUTING
CONTROLLED MOBILITY SENSOR NETWORKS FOR TARGET TRACKING USING ANT COLONY OPTIMIZATION
In mobile sensor networks, it is important to manage the mobility of the nodes in order to improve the performances of the network. This paper addresses the problem of single target tracking in controlled mobility sensor networks.
The proposed method consists of estimating the current position of a single target. Estimated positions are then used to predict the next location of the target. Once an area of interest is defined, the proposed approach moves the mobile nodes to cover it in an optimal way. It thus defines a strategy for choosing the set of new sensor locations.
Each node is then assigned one position within the set so as to minimize the total distance traveled by the nodes. While the estimation and prediction phases are performed using interval theory, relocating nodes employs the ant colony optimization algorithm.
Simulation results corroborate the efficiency of the proposed method compared to the target tracking methods considered for networks with static nodes.


*------------*------------*------------*------------*------------*------------*

DOMAIN - NETWORKING
SCALABLE LOOKAHEAD REGULAR EXPRESSION DETECTION SYSTEM FOR DEEP PACKET INSPECTION
Regular expressions (RegExes) are widely used, yet their inherent complexity often limits the total number of RegExes that can be detected using a single chip for a reasonable throughput. This limit on the number of RegExes impairs the scalability of today’s RegEx detection systems.
The scalability of existing schemes is generally limited by the traditional detection paradigm based on per-character-state processing and state transition detection. The main focus of existing schemes is on optimizing the number of states and the required transitions, but not on optimizing the suboptimal character-based detection method.
Furthermore, the potential benefits of allowing out-of-sequence detection, instead of detecting components of a RegEx in the order of appearance, have not been explored. Lastly, the existing schemes do not provide ways to adapt to evolving RegExes. In this paper, we propose Lookahead Finite Automata (LaFA) to perform scalable RegEx detection.
LaFA requires less memory due to these three contributions:
1) providing specialized and optimized detection modules to increase resource utilization;
2) systematically reordering the RegEx detection sequence to reduce the number of concurrent operations;
3) sharing states among automata for different RegExes to reduce resource requirements.
We demonstrate that LaFA requires an order of magnitude less memory than today's state-of-the-art RegEx detection systems.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - NETWORKING
ACCELERATING MULTIPATTERN MATCHING ON COMPRESSED HTTP TRAFFIC
Current security tools that use "signature-based" detection do not handle compressed traffic, whose market share is constantly increasing.
This paper focuses on compressed HTTP traffic. HTTP uses GZIP compression and requires some kind of decompression phase before string matching can be performed. We present a novel algorithm, the Aho-Corasick-based algorithm for Compressed HTTP (ACCH), that takes advantage of information gathered by the decompression phase in order to accelerate the commonly used Aho-Corasick pattern-matching algorithm.
By analyzing real HTTP traffic and real Web application firewall signatures, we show that up to 84% of the data can be skipped in the scan. Surprisingly, we show that it is faster to perform pattern matching on the compressed data, even with the penalty of decompression, than on regular traffic. To the best of our knowledge, this is the first paper to analyze the problem of on-the-fly multipattern matching on compressed HTTP traffic and to suggest a solution.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - NETWORKING
SPATIO-TEMPORAL COMPRESSIVE SENSING AND INTERNET TRAFFIC MATRICES (EXTENDED VERSION)
Despite advances in measurement technology, it is still challenging to reliably compile large-scale network datasets. For example, because of flaws in the measurement systems or difficulties posed by the measurement problem itself, missing, ambiguous, or indirect data are common.
In cases where such data have spatio-temporal structure, it is natural to try to leverage this structure to deal with the challenges posed by the problematic nature of the data. Our work on network datasets draws on ideas from the areas of compressive sensing and matrix completion, where sparsity is exploited in estimating quantities of interest.
However, the standard results on compressive sensing are: 1) reliant on conditions that generally do not hold for network datasets; and 2) do not allow us to exploit all we know about their spatio-temporal structure.
In this paper, we overcome these limitations with an algorithm that has at its heart the same ideas espoused in compressive sensing, but adapted to the problem of network datasets.
We show how this algorithm can be used in a variety of ways, in particular on traffic data, to solve problems such as simple interpolation of missing values, traffic matrix inference from link data, prediction, and anomaly detection.
The elegance of the approach lies in the fact that it unifies all of these tasks and allows them to be performed even when as much as 98% of the data is missing.
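To give a concrete (and much simplified) flavor of the matrix-completion idea: if a traffic matrix were exactly rank-1, its missing entries could be recovered by alternating least squares over the observed entries alone. The real algorithm handles higher ranks and noisy spatio-temporal structure; this is only a toy illustration.

```python
def complete_rank1(M, observed, iters=50):
    """Fill missing entries of a matrix assumed to be rank-1 (M ~ u v^T) by
    alternating least squares over the observed entries only."""
    rows, cols = len(M), len(M[0])
    u, v = [1.0] * rows, [1.0] * cols
    for _ in range(iters):
        for i in range(rows):
            den = sum(v[j] ** 2 for j in range(cols) if (i, j) in observed)
            if den:
                u[i] = sum(M[i][j] * v[j] for j in range(cols) if (i, j) in observed) / den
        for j in range(cols):
            den = sum(u[i] ** 2 for i in range(rows) if (i, j) in observed)
            if den:
                v[j] = sum(M[i][j] * u[i] for i in range(rows) if (i, j) in observed) / den
    return [[u[i] * v[j] for j in range(cols)] for i in range(rows)]

# A tiny rank-1 "traffic matrix" with one entry unobserved.
true = [[r * c for c in (1.0, 2.0, 3.0)] for r in (4.0, 5.0)]
observed = {(i, j) for i in range(2) for j in range(3)} - {(1, 2)}
M = [[true[i][j] if (i, j) in observed else 0.0 for j in range(3)] for i in range(2)]
filled = complete_rank1(M, observed)
print(f"recovered missing entry: {filled[1][2]:.2f} (true {true[1][2]})")
```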


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - NETWORKING
SPARSE WIFI DEPLOYMENT FOR VEHICULAR INTERNET ACCESS WITH BOUNDED INTERCONNECTION GAP
Vehicular Internet access via open WiFi access points (APs) has been demonstrated to be a feasible solution to provide opportunistic data service to moving vehicles. Using an in situ deployment, however, such a solution does not provide performance guarantees due to unpredictable intermittent connectivity.
On the other hand, a solution that tries to cover every point in an entire road network with APs (full coverage) is not practical due to prohibitive deployment and operational costs. In this paper, we introduce a new notion of intermittent coverage for mobile users, called Alpha Coverage, which provides worst-case guarantees on the interconnection gap, i.e., the distance or expected delay between two consecutive mobile-AP contacts for a vehicle, while using significantly fewer APs than needed for full coverage.
We propose efficient algorithms to verify whether a given deployment provides Alpha Coverage. The problem of finding an economic deployment that provides coverage turns out to be NP-hard.
We hence provide both approximation algorithms with provable performance guarantees and efficient heuristics that perform well in practice. The efficiency of our algorithms is demonstrated via simulations using data from real-world road networks.


*------------*------------*------------*------------*------------*------------*

DOMAIN - NETWORKING
RELIABLE COLLECTIVE COMMUNICATIONS WITH WEIGHTED SRLGS IN OPTICAL NETWORKS
In this paper, we study the problem of reliable collective communication (broadcast or gossip) with the objective of maximizing the reliability of the collective communication. The need for collective communication arises in many problems of parallel and distributed computing, including Grid or cloud computing and database management.
We describe the network model, formulate the reliable collective communication problem, prove that the maximum reliable collective communication problem is NP-hard, and provide an integer linear program (ILP) formulation for the problem.
We then provide a greedy approximation algorithm to construct the collective communication (through a spanning tree); its approximation ratio is expressed in terms of the average number of shared risk link groups (SRLGs) per link and the total numbers of vertices and edges of the network.
Simulations demonstrate that our approximation algorithm achieves good performance in both small and large networks and that, in almost 95% of all cases, our algorithm outperforms the modified minimum spanning tree algorithms.
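For intuition: without shared risks (independent link failures), the most reliable spanning tree is simply a maximum spanning tree under log-reliability weights, computable greedily; it is the SRLG coupling between links that makes the problem NP-hard. A Kruskal-style sketch of the easy, SRLG-free case (example reliabilities invented):

```python
def most_reliable_tree(n, links):
    """Kruskal-style greedy: picking links in decreasing reliability maximizes
    the product of link reliabilities (equivalently, a maximum spanning tree
    on log-reliability weights), assuming links fail independently."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree, reliability = [], 1.0
    for r, u, v in sorted(links, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:  # joins two components: keep this link
            parent[ru] = rv
            tree.append((u, v))
            reliability *= r
    return tree, reliability

links = [(0.9, 0, 1), (0.5, 0, 2), (0.8, 1, 2), (0.95, 2, 3)]
tree, rel = most_reliable_tree(4, links)
print(tree, round(rel, 3))  # picks the three most reliable usable links
```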


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - NETWORKING
POLYNOMIAL-TIME ALGORITHMS FOR MULTIRATE ANY PATH ROUTING IN WIRELESS MULTIHOP NETWORKS
In this paper, we present a new routing paradigm that generalizes opportunistic routing for wireless multihop networks. In multirate anypath routing, each node uses both a set of next hops and a selected transmission rate to reach a destination. Using this rate, a packet is broadcast to the nodes in the set, and one of them forwards the packet on to the destination.
To date, there has been no theory capable of jointly optimizing both the set of next hops and the transmission rate used by each node. We solve this by introducing two polynomial-time routing algorithms and provide proofs of their optimality.
The proposed algorithms have roughly the same running time as regular shortest-path algorithms and are therefore suitable for deployment in routing protocols. We conducted measurements in an 802.11b testbed network, and our trace-driven analysis shows that multirate anypath routing is on average 80% better than 11-Mbps anypath routing, with a factor of 6.4 improvement in the best case.
If the rate is fixed at 1 Mbps instead, performance improves by a factor of 5.4 on average.
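The single-rate special case can be sketched compactly: a Dijkstra-like relaxation where each node's cost is the expected number of transmissions until any node in its forwarding set receives the packet, plus the probability-weighted remaining distance (the standard anypath cost formula). The graph and delivery probabilities below are invented for illustration.

```python
import heapq

def shortest_anypath(links, dest):
    """Single-rate anypath distances (expected transmissions) to dest.
    links[u][v] = delivery probability of u's broadcast reaching v. Nodes are
    settled in increasing distance, Dijkstra-style; each unsettled node u is
    re-relaxed through the set of its already-settled neighbors."""
    dist = {dest: 0.0}
    settled = set()
    heap = [(0.0, dest)]
    while heap:
        d, n = heapq.heappop(heap)
        if n in settled:
            continue
        settled.add(n)
        for u in links:
            if u in settled or n not in links[u]:
                continue
            # Candidate forwarding set: settled neighbors, closest first.
            fwd = sorted((dist[v], links[u][v]) for v in links[u] if v in settled)
            miss, weighted = 1.0, 0.0
            for dv, p in fwd:
                weighted += p * miss * dv  # v heard it first, distance dv left
                miss *= 1.0 - p            # nobody in the set heard it
            if miss < 1.0:
                du = (1.0 + weighted) / (1.0 - miss)
                if du < dist.get(u, float("inf")):
                    dist[u] = du
                    heapq.heappush(heap, (du, u))
    return dist

# Two half-reliable relays beat either one alone: 7/3 < 1/0.5 + 1 = 3.
net = {"s": {"a": 0.5, "b": 0.5}, "a": {"d": 1.0}, "b": {"d": 1.0}}
dist = shortest_anypath(net, "d")
print(round(dist["s"], 2))  # -> 2.33
```

Because nodes settle in increasing distance, every prefix of the distance-sorted neighbor list is evaluated at some iteration, so the minimum over iterations finds the best forwarding set.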


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - NETWORKING
ON IDENTIFYING ADDITIVE LINK METRICS USING LINEARLY INDEPENDENT CYCLES AND PATHS
In this paper, we study the problem of identifying constant additive link metrics using linearly independent monitoring cycles and paths. A monitoring cycle starts and ends at the same monitoring station, while a monitoring path starts and ends at distinct monitoring stations.
We show that three-edge connectivity is a necessary and sufficient condition to identify link metrics using one monitoring station and employing monitoring cycles. We develop a polynomial-time algorithm to compute the set of linearly independent cycles.
For networks that are less than three-edge-connected, we show how the minimum number of monitors required and their placement may be computed. For networks with symmetric directed links, we show the relationship between the number of monitors employed, the number of directed links whose metric is known a priori, and the identifiability of the remaining links.
To the best of our knowledge, this is the first work that derives the necessary and sufficient conditions on the network topology for identifying additive link metrics and develops a polynomial-time algorithm to compute linearly independent cycles and paths.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - NETWORKING
INSIGHTS ON MEDIA STREAMING PROGRESS USING BITTORRENT-LIKE PROTOCOLS FOR ON-DEMAND STREAMING
This paper develops analytical models that characterize the behavior of on-demand stored media content delivery using BitTorrent-like protocols. The models capture the effects of different piece selection policies, including Rarest-First, two variants of In-Order, and two probabilistic policies (Portion and Zipf).
Our models provide insight into system behavior and help explain the sluggishness of the system with In-Order streaming. We use the models to compare different retrieval policies across a wide range of system parameters, including peer arrival rate, upload/ download bandwidth, and seed residence time.
We also provide quantitative results on the startup delays and retrieval times for streaming media delivery. Our results provide insights into the design tradeoffs for on-demand media streaming in peer-to-peer networks. Finally, the models are validated using simulations.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - NETWORKING
GREEDY GEOGRAPHIC ROUTING IN LARGE-SCALE SENSOR NETWORKS: A MINIMUM NETWORK DECOMPOSITION APPROACH
In geographic (or geometric) routing, messages are by default routed in a greedy manner: The current node always forwards a message to its neighbor node that is closest to the destination. Despite its simplicity and general efficiency, this strategy alone does not guarantee delivery due to the existence of local minima (or dead ends).
Overcoming local minima requires nodes to maintain extra nonlocal state or to use auxiliary mechanisms. We study how to facilitate greedy forwarding by using a minimum amount of such nonlocal state in topologically complex networks. Specifically, we investigate the problem of decomposing a given network into a minimum number of greedily routable components (GRCs), in which greedy routing is guaranteed to work.
We approach it by considering an approximate version of the problem in a continuous domain, with a central concept called the greedily routable region (GRR). A full characterization of GRRs is given concerning their geometric properties and routing capability. We then develop simple approximation algorithms for the problem. These results lead to a practical routing protocol that has a routing stretch below 7 in a continuous domain, and close to 1 in several realistic network settings.
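Basic greedy geographic forwarding, including the dead-end (local-minimum) failure mode the decomposition is designed to eliminate, fits in a few lines (a toy sketch with made-up coordinates):

```python
import math

def greedy_route(positions, neighbors, src, dst):
    """Greedy geographic forwarding: always hand the packet to the neighbor
    closest to the destination; report failure at a local minimum (dead end)."""
    def d(a, b):
        return math.dist(positions[a], positions[b])
    path, cur = [src], src
    while cur != dst:
        nxt = min(neighbors[cur], key=lambda n: d(n, dst), default=None)
        if nxt is None or d(nxt, dst) >= d(cur, dst):
            return path, False  # no neighbor is closer to dst: dead end
        path.append(nxt)
        cur = nxt
    return path, True

# A chain that greedy forwarding handles...
chain_pos = {"s": (0, 0), "a": (1, 0), "b": (2, 0), "t": (3, 0)}
chain_nbr = {"s": ["a"], "a": ["s", "b"], "b": ["a", "t"], "t": ["b"]}
print(greedy_route(chain_pos, chain_nbr, "s", "t"))  # -> (['s', 'a', 'b', 't'], True)

# ...and a void where it hits a dead end (node "a" has no closer neighbor).
void_pos = {"s": (0, 0), "a": (1, 1), "t": (4, 0)}
void_nbr = {"s": ["a"], "a": ["s"], "t": []}
print(greedy_route(void_pos, void_nbr, "s", "t"))  # -> (['s', 'a'], False)
```

Inside a single greedily routable component the `False` branch never fires, which is exactly what the decomposition guarantees.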


*------------*------------*------------*------------*------------*------------*

DOMAIN - NETWORKING
ESM: EFFICIENT AND SCALABLE DATA CENTER MULTICAST ROUTING
Multicast benefits group communication by saving network traffic and improving application throughput, both of which are important for data center applications. However, the technical trend of data center design poses new challenges for efficient and scalable multicast routing. First, the densely connected networks make traditional receiver-driven multicast routing protocols inefficient in multicast tree formation.
Second, it is quite difficult for the low-end switches widely used in data centers to hold the routing entries of massive numbers of multicast groups. In this paper, we propose ESM, an efficient and scalable multicast routing scheme for data center networks.
ESM addresses the challenges above by exploiting the feature of modern data center networks. Based on the regular topology of data centers, ESM uses a source-to-receiver expansion approach to build efficient multicast trees, excluding many unnecessary intermediate switches used in receiver-driven multicast routing.
For scalable multicast routing, ESM combines in-packet Bloom Filters and in-switch entries to trade off the number of multicast groups supported against the additional bandwidth overhead. Simulations show that ESM saves 40%-50% of network traffic and doubles application throughput compared to receiver-driven multicast routing, and that the combined routing scheme significantly reduces the number of in-switch entries required. We implement ESM on a Linux platform.
The experimental results further demonstrate that ESM supports online tree building well for large-scale groups with churn, and that the overhead of the combined forwarding engine is lightweight.
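The in-packet Bloom filter half of the tradeoff can be sketched as follows: the sender encodes the multicast tree's links into a small filter carried in the packet header, and each switch forwards on exactly those local links that pass the membership test. False positives send the packet on extra links (the bandwidth overhead mentioned above), never drop it. The hash choice and filter size below are arbitrary illustrations.

```python
import hashlib

def bloom_insert(filt, item, k=3, m=64):
    """Set k hash-chosen bits of an m-bit filter for one tree link."""
    for i in range(k):
        bit = int(hashlib.sha256(f"{item}:{i}".encode()).hexdigest(), 16) % m
        filt |= 1 << bit
    return filt

def bloom_member(filt, item, k=3, m=64):
    probe = bloom_insert(0, item, k, m)  # bits this item would set
    return filt & probe == probe

# Sender encodes the multicast tree's directed links into the packet header.
tree_links = [("core", "agg1"), ("agg1", "tor1"), ("agg1", "tor2")]
header_filter = 0
for link in tree_links:
    header_filter = bloom_insert(header_filter, link)

# A switch forwards on each local link whose membership test passes.
print([link for link in tree_links if bloom_member(header_filter, link)])
```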


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - NETWORKING
EFFICIENT SCHEDULING FOR PERIODIC AGGREGATION QUERIES IN MULTIHOP SENSOR NETWORKS
In this paper, we study periodic query scheduling for data aggregation with minimum delay under various wireless interference models. Given a set of periodic aggregation queries, each query has its own period and its own subset of source nodes containing the data.
We first propose a family of efficient and effective real-time scheduling protocols that can answer every job of each query task within a relative delay under resource constraints by addressing the following tightly coupled tasks: routing, transmission plan constructions, node activity scheduling, and packet scheduling. Based on our protocol design, we further propose schedulability test schemes to efficiently and effectively test whether, for a set of queries, each query job can be finished within a finite delay.
Our theoretical analysis shows that our methods achieve at least a constant fraction of the maximum possible total utilization for query tasks, where the constant depends on the wireless interference model. We also conduct extensive simulations to validate the proposed protocol and evaluate its practical performance. The simulations corroborate our theoretical analysis.


*------------*------------*------------*------------*------------*------------*

DOMAIN - NETWORKING
DIFFERENTIATED QUALITY-OF-RECOVERY IN SURVIVABLE OPTICAL MESH NETWORKS USING P-STRUCTURES
This paper investigates design methods of protection schemes in survivable WDM networks that use preconfigured protection structures (p-structures) in order to provide different quality-of-recovery (QoR) classes within 100% resilient single-link protection schemes.
QoR differentiation is a practical and effective approach to strike different balances among protection cost, recovery delay, and management complexity.
Based on the degree of pre-cross-connectivity of the protection structures, we develop three design approaches for shared protection capacity schemes based on the following: 1) fully pre-cross-connected p-structures; 2) partially pre-cross-connected p-structures; and 3) dynamically reconfigured p-structures. In order to identify the optimal combinations of protection structures to meet the requirements of the three QoR classes, we use a column generation (CG) model that we solve using large-scale optimization techniques.
Our CG decomposition approach is based on separating the design and selection processes of the protection structures. In the design process, the shape and protection capability of each p-structure is decided dynamically during the selection process, depending on the network topology and the targeted QoR parameters.
Extensive experiments are carried out on several data instances with different design constraints in order to measure the protection capacity cost and the recovery delay for the three QoR classes.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - NETWORKING
DESIGN OF WIRELESS SENSOR NETWORKS FOR MOBILE TARGET DETECTION
We consider surveillance applications through wireless sensor networks (WSNs) where the areas to be monitored are fully accessible and the WSN topology can be planned a priori to maximize application efficiency.
We propose an optimization framework for selecting the positions of wireless sensors to detect mobile targets traversing a given area. By leveraging the concept of path exposure as a measure of detection quality, we propose two problem versions: the minimization of the sensor installation cost while guaranteeing a minimum exposure, and the maximization of the exposure of the least-exposed path subject to a budget on the sensor installation cost.
We present compact mixed-integer linear programming formulations for these problems that can be solved to optimality for reasonably sized network instances. Moreover, we develop Tabu Search heuristics that provide near-optimal solutions to the same instances in short computing time and can also tackle large-size instances.
The basic versions are extended to account for constraints on wireless connectivity as well as heterogeneous devices and nonuniform sensing. Finally, we analyze an enhanced exposure definition based on mobile target detection probability.
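The path-exposure measure can be sketched numerically. The Python below is a minimal illustration, not the paper's formulation: it assumes a simple inverse-power sensing model (the `1/d^decay` intensity is an assumption) and approximates exposure as sensing intensity integrated along a sampled path, so paths passing near sensors accrue more exposure.

```python
import math

def sensing_intensity(point, sensors, decay=2.0):
    """Aggregate sensing intensity at a point: sum over sensors of
    1 / d**decay (an assumed sensing model, for illustration only)."""
    x, y = point
    total = 0.0
    for sx, sy in sensors:
        d = max(math.hypot(x - sx, y - sy), 1e-6)  # avoid division by zero
        total += 1.0 / d ** decay
    return total

def path_exposure(path, sensors):
    """Discretized exposure: intensity integrated over path segments
    using the trapezoidal rule on the sampled waypoints."""
    exposure = 0.0
    for (x1, y1), (x2, y2) in zip(path, path[1:]):
        seg_len = math.hypot(x2 - x1, y2 - y1)
        i1 = sensing_intensity((x1, y1), sensors)
        i2 = sensing_intensity((x2, y2), sensors)
        exposure += 0.5 * (i1 + i2) * seg_len
    return exposure

# A target crossing close to a sensor accrues more exposure than one
# that skirts the monitored area.
sensors = [(5.0, 5.0)]
near = [(0.0, 5.0), (5.0, 5.5), (10.0, 5.0)]  # passes near the sensor
far = [(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)]   # stays far away
```

The min-cost problem version then seeks sensor positions so that every crossing path has at least a target exposure; the max-exposure version maximizes the least-exposed path under a budget.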


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - NETWORKING
CONGESTION-DEPENDENT PRICING AND FORWARD CONTRACTS FOR COMPLEMENTARY SEGMENTS OF A COMMUNICATION NETWORK
Congestion-dependent pricing is a form of traffic management that ensures the efficient allocation of bandwidth between users and applications. As the unpredictability of congestion prices creates revenue uncertainty for network providers and cost uncertainty for users, it has been suggested that forward contracts could be used to manage these risks.
We develop a novel game-theoretic model of a multiprovider communication network with two complementary segments and investigate whether forward contracts would be adopted by service providers.
Service on the upstream segment is provided by a single Internet service provider (ISP) and priced dynamically to maximize profit, while several smaller ISPs sell connectivity on the downstream network segment, with the advance possibility of entering into forward contracts with their users for some of their capacity.
We show that the equilibrium forward contracting volumes are necessarily asymmetric, with one downstream provider entering into fewer forward contracts than the other competitors, thus ensuring a high subsequent downstream price level.
In practice, network providers will choose the extent of forward contracting strategically based not only on their risk tolerance, but also on the market structure in the interprovider network and their peers’ actions.


*------------*------------*------------*------------*------------*------------*

DOMAIN - NETWORKING
DECLARATIVE POLICY-BASED ADAPTIVE MOBILE AD HOC NETWORKING
This paper presents DAWN, a declarative platform that creates highly adaptive policy-based mobile ad hoc network (MANET) protocols. DAWN leverages declarative networking techniques to achieve extensible routing and forwarding using declarative languages. We make the following contributions.
First, we demonstrate that traditional MANET protocols can be expressed concisely as declarative networks, and that policy-driven adaptation can be specified in the same language to dictate the dynamic selection of different protocols based on various network and traffic conditions.
Second, we propose interprotocol forwarding techniques that ensure packets can seamlessly traverse clusters of nodes running different protocols selected based on their respective policies.
Third, we have developed a full-fledged implementation of DAWN using the RapidNet declarative networking system. We experimentally validate a variety of policy-based adaptive MANETs in various dynamic settings using a combination of ns-3 simulations and deployment on the ORBIT testbed.
Our experimental results demonstrate that hybrid protocols developed using DAWN outperform traditional MANET routing protocols and are able to flexibly and dynamically adapt their routing mechanisms to achieve a good tradeoff between bandwidth utilization and route quality. We further demonstrate DAWN's capabilities to achieve interprotocol forwarding across different protocols.


*------------*------------*------------*------------*------------*------------*

DOMAIN - NETWORKING
CONCISE LOOKUP TABLES FOR IPV4 AND IPV6 LONGEST PREFIX MATCHING IN SCALABLE ROUTERS
We present a distinct longest prefix matching (LPM) lookup scheme able to achieve exceedingly concise lookup tables (CoLT), suitable for scalable routers.
Based on unified hash tables for handling both IPv4 and IPv6 simultaneously, CoLT excels over previous mechanisms in: 1) lower on-chip storage for lookup tables; 2) simpler table formats that enjoy richer prefix aggregation and easier implementation; and 3) most importantly, it is deemed the only design able to accommodate both IPv4 and IPv6 addresses uniformly and effectively.
As its hash tables permit multiple possible buckets to hold each prefix (following a migration rule to avoid false positives altogether), CoLT exhibits the best memory efficiency and can launch parallel searches over tables during every LPM lookup, involving fewer cycles per lookup when on-chip memory is used to implement hash tables.
With 16 (or 32) on-chip SRAM blocks clocked at 500 MHz (achievable in today's 65-nm technology), it takes 2 (or 1.6) cycles on average to complete a lookup, yielding a mean throughput of 250 (or 310) million packets per second (MPPS). Being hash-oriented, CoLT also supports incremental table updates well, in addition to its high table utilization and lookup throughput.
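The general idea of hash-table-based LPM can be sketched in a few lines. The toy Python below keeps one hash table per prefix length and probes them from longest to shortest; it is a generic illustration only and does not model CoLT's unified tables, bucket-migration rule, or on-chip memory layout (in hardware, the per-length probes would run in parallel and the longest hit would win).

```python
import ipaddress

class HashLPM:
    """Toy longest-prefix match over per-length hash tables.
    Illustrative only; not CoLT's actual table design."""

    def __init__(self):
        self.tables = {}  # prefix length -> {masked network int: next hop}

    def insert(self, cidr, next_hop):
        net = ipaddress.ip_network(cidr)
        table = self.tables.setdefault(net.prefixlen, {})
        table[int(net.network_address)] = next_hop

    def lookup(self, addr):
        a = int(ipaddress.ip_address(addr))
        bits = 128 if ':' in addr else 32
        # Probe tables from longest prefix to shortest; the first hit
        # is the longest match.
        for plen in sorted(self.tables, reverse=True):
            key = (a >> (bits - plen)) << (bits - plen)  # mask low bits
            hop = self.tables[plen].get(key)
            if hop is not None:
                return hop
        return None

# Both IPv4 and IPv6 prefixes go into the same set of tables.
fib = HashLPM()
fib.insert("10.0.0.0/8", "A")
fib.insert("10.1.0.0/16", "B")
fib.insert("2001:db8::/32", "C")
```

A lookup for 10.1.2.3 hits the /16 entry before the /8 one, which is exactly the longest-prefix semantics the scheme must preserve.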


*------------*------------*------------*------------*------------*------------*

DOMAIN - NETWORKING
BALANCING RELIABILITY AND UTILIZATION IN DYNAMIC SPECTRUM ACCESS
Future wireless networks will dynamically access spectrum to maximize its utilization. Conventional design of dynamic spectrum access focuses on maximizing spectrum utilization, but faces the problem of degraded reliability due to unregulated demands and access behaviors. Without providing proper reliability guarantee, dynamic spectrum access is unacceptable to many infrastructure networks and services.
In this paper, we propose SPARTA, a new architecture for dynamic spectrum access that balances access reliability and spectrum utilization. SPARTA includes two complementary techniques: proactive admission control, performed by a central entity to determine the set of wireless nodes to be supported using only statistical information of their spectrum demands, and online adaptation, performed by admitted wireless nodes to adjust their instantaneous spectrum usage to time-varying demand.
Using both theoretical analysis and simulation, we show that SPARTA fulfills the reliability requirements while dynamically multiplexing spectrum demands to improve utilization.
Compared to conventional solutions, SPARTA improves spectrum utilization by 80%–200%. Finally, SPARTA also allows service providers to explore the tradeoff between utilization and reliability to make the best use of the spectrum. To the best of our knowledge, our work is the first to identify and address such a tradeoff.
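The proactive-admission idea, admitting nodes from statistical demand information alone, can be illustrated with a small Monte Carlo sketch. Everything below is an assumption for illustration (uniform per-node demand ranges, a greedy admission loop, a 5% outage target); SPARTA's actual admission test is more sophisticated.

```python
import random

_rng = random.Random(1)  # fixed seed for a reproducible sketch

def outage_probability(demands, capacity, trials=5000):
    """Monte Carlo estimate of P(aggregate demand exceeds capacity),
    with each node's demand modeled as a uniform (low, high) range."""
    over = 0
    for _ in range(trials):
        total = sum(_rng.uniform(lo, hi) for lo, hi in demands)
        if total > capacity:
            over += 1
    return over / trials

def admit(candidates, capacity, beta=0.05):
    """Greedy statistical admission: keep adding nodes while the
    estimated outage probability stays within the target beta."""
    admitted = []
    for d in candidates:
        if outage_probability(admitted + [d], capacity) <= beta:
            admitted.append(d)
    return admitted

# Ten nodes, each demanding 1-3 units. Capacity 20 cannot carry the
# worst case (30), but statistical multiplexing admits more nodes
# than the worst-case budget of 20/3 ~ 6 would allow.
nodes = [(1.0, 3.0)] * 10
admitted_set = admit(nodes, 20.0)
```

This is the utilization-versus-reliability tradeoff in miniature: raising `beta` admits more nodes (higher utilization) at the cost of more frequent demand overload.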


*------------*------------*------------*------------*------------*------------*

DOMAIN - NETWORKING
ADAPTIVE SELECTIVE VERIFICATION: AN EFFICIENT ADAPTIVE COUNTERMEASURE TO THWART DOS ATTACKS
Denial-of-service (DoS) attacks are considered within the province of a shared channel model in which attack rates may be large but are bounded and client request rates vary within fixed bounds. In this setting, it is shown that clients can adapt effectively to an attack by increasing their request rate based on timeout windows to estimate attack rates.
The server will be able to process client requests with high probability while pruning out most of the attack by selective random sampling. The protocol introduced here, called Adaptive Selective Verification (ASV), is shown to use bandwidth efficiently and does not require any server state or assumptions about network congestion.
The main results of the paper are a formulation of optimal performance and a proof that ASV is optimal.
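The server-side sampling idea can be sketched simply. The Python below is an assumed illustration of selective random sampling only, not ASV's full protocol: when offered load exceeds capacity, each request is admitted with probability capacity/load, and adaptive clients that retransmit get several independent chances of admission.

```python
import random

_rng = random.Random(42)  # fixed seed for a reproducible sketch

def selective_verification(requests, capacity):
    """Toy selective random sampling: accept each request independently
    with probability capacity/load, so expected admissions match the
    server's capacity even under attack-inflated load."""
    load = len(requests)
    p = min(1.0, capacity / load) if load else 1.0
    return [r for r in requests if _rng.random() < p]

def admit_probability(p, retries):
    """P(at least one of `retries` retransmitted copies is admitted),
    which is how adaptive clients overcome a small per-request p."""
    return 1.0 - (1.0 - p) ** retries

# 10,000 requests hit a server that can verify only 100 per window.
admitted = selective_verification(list(range(10000)), 100)
```

With a per-request admission probability of 1%, a client that retransmits 300 times within its timeout window is admitted with probability about 1 - 0.99**300 ≈ 0.95, which is the intuition behind clients adapting by increasing their request rate.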


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - NETWORKING
A THEORY FOR THE CONNECTIVITY DISCOVERED BY ROUTING PROTOCOLS
Route-vector protocols, such as the Border Gateway Protocol (BGP), have nodes elect and exchange routes in order to discover paths over which to send traffic. We ask the following: What is the minimum number of links whose failure prevents a route-vector protocol from finding such paths?
The answer is not obvious because routing policies prohibit some paths from carrying traffic and because, on top of that, a route-vector protocol may hide paths the routing policies would allow. We develop an algebraic theory to address the above and related questions.
In particular, we characterize a broad class of routing policies for which we can compute in polynomial time the minimum number of links whose failure leaves a route-vector protocol without a communication path from one given node to another.
The theory is applied to a publicly available description of the Internet topology to quantify how much of its intrinsic connectivity is lost due to the traditional customer–provider, peer–peer routing policies and how much can be regained with simple alternative policies.
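The policy-free version of the question, the minimum number of links whose failure disconnects two nodes, reduces by Menger's theorem to a unit-capacity max-flow. The sketch below computes that baseline with Edmonds–Karp; the paper's contribution is the harder analogue in which routing policies (and route hiding) restrict which paths count.

```python
from collections import deque

def min_link_cut(edges, s, t):
    """Minimum number of undirected links whose removal disconnects
    s from t: unit-capacity max-flow (Menger's theorem). This ignores
    routing policies entirely."""
    cap, adj = {}, {}
    for u, v in edges:
        for a, b in ((u, v), (v, u)):  # undirected: capacity 1 each way
            cap[(a, b)] = cap.get((a, b), 0) + 1
            adj.setdefault(a, set()).add(b)
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj.get(u, ()):
                if v not in parent and cap.get((u, v), 0) > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow  # no augmenting path: flow equals the min cut
        v = t
        while parent[v] is not None:  # push one unit along the path
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] = cap.get((v, u), 0) + 1
            v = u
        flow += 1

# A 4-node ring has two link-disjoint paths between opposite nodes;
# adding a chord between them raises the cut to three.
ring = [(1, 2), (2, 3), (3, 4), (4, 1)]
```

Under routing policies, the answer can only be smaller than this baseline, which is exactly the connectivity loss the paper quantifies on the Internet topology.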


*------------*------------*------------*------------*------------*------------*