Search results
(1 - 3 of 3)
- Title
- DATA-DRIVEN OPTIMIZATION OF NEXT GENERATION HIGH-DENSITY WIRELESS NETWORKS
- Creator
- Khairy, Sami
- Date
- 2021
- Description
-
The Internet of Things (IoT) paradigm is poised to advance all aspects of modern society by enabling ubiquitous communications and computations. In the IoT era, an enormous number of devices will be connected wirelessly to the internet to enable advanced data-centric applications. The projected growth in the number of connected wireless devices poses new challenges to the design and optimization of future wireless networks. For a wireless network to support a massive number of devices, advanced physical-layer and channel-access techniques must be designed, and high-dimensional decision variables must be optimized to manage network resources. However, the increased network scale, complexity, and heterogeneity render the network unamenable to traditional closed-form mathematical analysis and optimization, making future high-density wireless networks seem unmanageable. In this thesis, we study the design and data-driven optimization of future high-density wireless networks operating over the unlicensed band, including Radio Frequency (RF)-powered wireless networks, solar-powered Unmanned Aerial Vehicle (UAV)-based wireless networks, and random Non-Orthogonal Multiple Access (NOMA) wireless networks. For each networking scenario, we first analyze network dynamics and identify performance trade-offs. Next, we design adaptive network controllers in the form of high-dimensional multi-objective optimization problems that exploit the heterogeneity in users' wireless propagation channels and energy harvesting to maximize network capacity, manage battery energy resources, and achieve good user capacity fairness.
To solve the high-dimensional optimization problems and learn the optimal network control policy, we propose novel, cross-layer, scalable, model-based and model-free data-driven network optimization and resource management algorithms that integrate domain-specific analyses with advanced machine learning techniques from deep learning, reinforcement learning, and uncertainty quantification. Furthermore, convergence of the proposed algorithms to the optimal solution is theoretically analyzed using mathematical results from metric spaces, convex optimization, and game theory. Finally, extensive simulations have been conducted to demonstrate the efficacy and superiority of our network optimization and resource management techniques compared with existing methods. Our research contributions provide practical insights for the design and data-driven optimization of next generation high-density wireless networks.
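The model-free, data-driven network control described above can be illustrated with a minimal sketch. This toy tabular Q-learning loop over a hypothetical channel-quality/transmit-power MDP is purely illustrative: the states, actions, reward, and dynamics are invented stand-ins, not the thesis's cross-layer deep RL algorithms.

```python
import random

# Toy MDP: states are channel-quality levels, actions are transmit-power levels.
# All quantities here are hypothetical; this only sketches the model-free
# "learn a control policy from interaction" idea.
STATES, ACTIONS = 3, 2
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def reward(state, action):
    # Hypothetical reward: throughput gain minus an energy cost for high power.
    return state * (action + 1) - 0.5 * action

def step(state, action):
    # Hypothetical channel dynamics: a clamped random walk over quality levels.
    return max(0, min(STATES - 1, state + random.choice((-1, 0, 1))))

def q_learning(episodes=2000, seed=0):
    random.seed(seed)
    q = [[0.0] * ACTIONS for _ in range(STATES)]
    state = 0
    for _ in range(episodes):
        # Epsilon-greedy exploration over the action space.
        if random.random() < EPS:
            action = random.randrange(ACTIONS)
        else:
            action = max(range(ACTIONS), key=lambda a: q[state][a])
        nxt = step(state, action)
        # Standard temporal-difference update toward the bootstrapped target.
        target = reward(state, action) + GAMMA * max(q[nxt])
        q[state][action] += ALPHA * (target - q[state][action])
        state = nxt
    return q
```

In a good channel state, the learned values favor the action whose net reward is higher, which is the essence of adapting transmission decisions to propagation conditions.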
- Title
- A SCALABLE SIMULATION AND MODELING FRAMEWORK FOR EVALUATION OF SOFTWARE-DEFINED NETWORKING DESIGN AND SECURITY APPLICATIONS
- Creator
- Yan, Jiaqi
- Date
- 2019
- Description
-
The world today is densely connected by many large-scale computer networks, supporting military applications, social communications, power grid facilities, cloud services, and other critical infrastructures. However, a gap has grown between the complexity of these systems and the increasing need for security and resilience. We believe this gap is now reaching a tipping point, resulting in a dramatic change in the way that networks and applications are architected, developed, monitored, and protected. This trend calls for a scalable, high-fidelity network testing and evaluation platform to facilitate the transformation of in-house research ideas into real-world working solutions. With this objective, we investigate means to build a scalable and high-fidelity network testbed using container-based emulation and parallel simulation; our study focuses on the emerging software-defined networking (SDN) technology. Existing evaluation platforms facilitate the adoption of the SDN architecture and applications in production systems. However, the performance of those platforms depends heavily on the underlying physical hardware resources: insufficient resources lead to undesired results, such as low experimental fidelity or slow execution, especially with large-scale network settings. To improve testbed fidelity, we first develop a lightweight virtual time system for Linux containers and integrate it into a widely used SDN emulator. A key issue with an ordinary container-based emulator is that it uses the system clock across all containers even when a container is not scheduled to run, which hurts both performance and temporal fidelity, especially under high workloads. We investigate virtual time approaches that precisely scale the time of interactions between containers and physical devices. Our evaluation results indicate a definite improvement in fidelity and scalability.
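The virtual-time idea above can be sketched in a few lines. Real virtual time systems intercept clock syscalls at the kernel level; this user-space sketch only illustrates the scaling arithmetic behind a time-dilation factor (TDF), where TDF = 4 means four seconds of wall-clock time appear as one second to the emulated container.

```python
import time

class VirtualClock:
    """Illustrative per-container virtual clock driven by a time-dilation
    factor (TDF). The class name and interface are hypothetical, not the
    thesis's actual kernel-level implementation."""

    def __init__(self, tdf, start_wall=None):
        self.tdf = tdf
        # Wall-clock instant at which this container's virtual time starts.
        self.start_wall = time.time() if start_wall is None else start_wall
        self.virtual_origin = 0.0

    def now(self, wall=None):
        """Virtual time corresponding to a given wall-clock reading."""
        wall = time.time() if wall is None else wall
        # Elapsed wall time is slowed down by the dilation factor, so an
        # overloaded emulator still presents consistent timing to the container.
        return self.virtual_origin + (wall - self.start_wall) / self.tdf
```

Scaling all timestamps a container observes by the same factor preserves the relative timing of its interactions, which is what keeps the emulation temporally faithful under load.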
To improve testbed scalability, we investigate how the centralized paradigm of SDN can be utilized to reduce the simulation workload. We explore a model abstraction technique that effectively transforms the SDN network devices into one virtualized switch model. While significantly reducing the model execution time and enabling real-time simulation capability, our abstracted model also preserves the end-to-end forwarding behavior of the original network. With enhanced fidelity and scalability, it is realistic to utilize our network testbed to perform security evaluations of various SDN applications. We notice that the communication network generates and processes a huge amount of data. The logically centralized SDN control plane, on the one hand, has to process both critical control traffic and potentially big data traffic; on the other hand, it enables many efficient security solutions, such as intrusion detection, mitigation, and prevention. Recently, deep neural networks have achieved state-of-the-art results across a range of hard problem spaces. We study how to utilize big data and deep learning to secure communication networks and host entities. For classifying malicious network traffic, we have performed a feasibility study of offline deep-learning-based intrusion detection by constructing the detection engine with multiple advanced deep learning models. For malware classification on individual hosts, another necessity for securing computer systems, existing machine learning-based methods rely on handcrafted features extracted from raw binary files or disassembled code. The diversity of such features has made it hard to build generic malware classification systems that work effectively across different operational environments.
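The model-abstraction step above, collapsing a chain of SDN switches into one virtualized switch that preserves only end-to-end forwarding, can be sketched as follows. The flow-table layout (a per-switch mapping from ingress port to egress port) is a deliberately simplified, hypothetical stand-in for real OpenFlow tables.

```python
# Sketch of model abstraction: fold a path of per-switch forwarding tables
# into a single table that records only ingress -> egress behavior,
# hiding (and thus not simulating) the internal hops.

def collapse_path(tables, ingress_ports):
    """tables: list of dicts, one per switch along the path, each mapping
    an input port to an output port. Returns one merged table mapping each
    ingress port of the first switch to the egress port of the last switch."""
    merged = {}
    for port in ingress_ports:
        p = port
        for table in tables:
            p = table[p]          # follow the packet hop by hop
        merged[port] = p          # keep only the end-to-end mapping
    return merged
```

Because only the composed mapping is simulated, per-hop events disappear from the event queue, which is where the execution-time savings come from, while end-to-end forwarding behavior is unchanged.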
To strike a balance between generality and performance, we explore new graph convolutional neural network techniques to effectively yet efficiently classify malware programs represented as their control flow graphs.
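A graph convolutional layer over a control flow graph, with basic blocks as nodes, can be sketched with plain NumPy. This follows the common normalized-propagation rule (A-hat X W with ReLU) plus mean pooling into a graph-level embedding; the thesis's exact architecture and features for malware classification may differ.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution layer over a control flow graph.

    adj:    (n, n) adjacency matrix (basic blocks as nodes, edges as jumps)
    feats:  (n, d) node features (e.g. instruction-category histograms)
    weight: (d, k) learnable projection
    """
    a_hat = adj + np.eye(adj.shape[0])             # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(deg ** -0.5)              # symmetric normalization
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    return np.maximum(a_norm @ feats @ weight, 0)  # ReLU activation

def graph_embedding(adj, feats, weights):
    """Stack layers, then mean-pool nodes into one fixed-size vector that a
    downstream classifier can score, regardless of the program's CFG size."""
    h = feats
    for w in weights:
        h = gcn_layer(adj, h, w)
    return h.mean(axis=0)
```

Pooling to a fixed-size vector is what lets one classifier handle programs whose control flow graphs have different numbers of basic blocks, which is the generality the abstract refers to.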
- Title
- Deep Learning Methods For Wireless Networks Optimization
- Creator
- Zhang, Shuai
- Date
- 2022
- Description
-
The resurgence of deep learning techniques has brought forth fundamental changes to how hard problems can be solved. It used to be held that solutions to complex wireless network problems require accurate mathematical modeling of the network operation, but the success of deep learning has shown that data-driven methods can generate powerful and useful representations, allowing such problems to be solved efficiently with surprisingly competent performance. Network researchers have recognized this and started to capitalize on the learning methods' prowess, but most works follow existing black-box learning paradigms without much accommodation to the nature and essence of the underlying network problems. This thesis focuses on a particular classical problem: multi-commodity flow (MCF) scheduling in an interference-limited environment. Though it does not permit efficient exact algorithms due to its NP-hard complexity, we use it as an entry point to demonstrate from three angles how learning-based methods can help improve network performance. In the first part, we leverage graph neural network (GNN) techniques and propose a two-stage topology-aware machine learning framework, which jointly trains a graph embedding unit and a link usage prediction module to discover links that are likely to be used in an optimal schedule. The second part of the thesis attempts to find a learning method with a closer algorithmic affinity to the traditional DCG method. We use reinforcement learning to incrementally generate a better partial solution so that a high-quality solution may be found more efficiently. As the third part of the research, we revisit the MCF problem from a novel viewpoint: instead of relying on neural networks to generate good solutions directly, we use them to associate the current problem instance with historical ones that are similar in structure.
These matched instances’ solutions offer a highly useful starting point to allow efficient discovery of the new instance’s solution.
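The first part's "predict-then-prune" idea, score each link from learned node embeddings and keep only links likely used in an optimal schedule, can be sketched as below. The embedding routine (crude neighbor averaging) and the scorer are illustrative stand-ins, not the thesis's trained GNN modules.

```python
import numpy as np

def message_passing(adj, feats, rounds=2):
    """Crude graph embedding: repeatedly average neighbor features.
    A stand-in for a trained graph embedding unit."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
    h = feats
    for _ in range(rounds):
        h = (adj @ h) / deg
    return h

def prune_links(adj, feats, scorer, keep=0.5):
    """Score every directed link from its endpoint embeddings and keep the
    top fraction; the pruned graph is then handed to a conventional MCF
    scheduler, shrinking its search space."""
    h = message_passing(adj, feats)
    edges = np.argwhere(adj > 0)
    scores = np.array([scorer(h[u], h[v]) for u, v in edges])
    cutoff = np.quantile(scores, 1 - keep)   # keep roughly the top `keep` share
    kept = np.zeros_like(adj)
    for (u, v), s in zip(edges, scores):
        if s >= cutoff:
            kept[u, v] = 1
    return kept
```

The pay-off is that the downstream scheduler only reasons about links the predictor considers promising, trading a small risk of pruning a useful link for a much smaller optimization problem.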