Search results
(1 - 12 of 12)
- Title
- COORDINATED DRIVING IN CONNECTED AND AUTONOMOUS VEHICLE SYSTEM -- OPTIMAL ADVANCE LANE CHANGE ZONES AND COORDINATED PLATOON CAR FOLLOWING CONTROL
- Creator
- Gong, Siyuan
- Date
- 2017, 2017-07
- Description
-
The connected and autonomous vehicle (CAV) system enables countless innovative coordinated driving approaches, such as coordinated lane change and car-following in microscopic CAV control, and coordinated routing and parking in macroscopic traffic flow guidance, which will improve the performance of our transportation system by enhancing traffic mobility, providing a safe driving environment, and reducing fuel consumption. Since lane change and car-following behavior are crucial factors in traffic safety and efficiency, this dissertation focuses on developing coordinated driving schemes for the microscopic control and operation of lane change and car-following maneuvers. In particular, I develop a lane change zone optimization strategy and coordinated platoon car-following control for a pure CAV platoon and a mixed platoon (i.e., one mixing human-driven vehicles and CAVs), respectively.
This dissertation first explores a management strategy for mandatory lane changes (MLC) near a two-lane highway off-ramp by optimizing the location of the advance warning. The proposed approach divides the area downstream of the advance warning into two zones, green and yellow, corresponding to their respective most likely lane change maneuvers. An optimization model is proposed to search for the optimal green and yellow zones. Traffic flow theory, such as the Greenshields model and shock wave analysis, is used to analyze the impacts of the S-MLC and D-MLC maneuvers on traffic delay. Numerical experiments indicate that the proposed optimization model can identify the optimal location for the advance MLC warning near an off-ramp so that the traffic delay resulting from lane change maneuvers is minimized and the corresponding capacity drop and traffic oscillation are efficiently mitigated.
Then, this research develops a novel car-following control scheme for a platoon of connected and autonomous vehicles on a straight highway. The platoon is modeled as an interconnected multi-agent dynamical system subject to physical and safety constraints. A constrained-optimization-based control scheme is proposed to ensure an entire platoon's transient traffic smoothness and asymptotic dynamic performance. This dissertation develops dual-based distributed algorithms to compute optimal solutions with proven convergence. Furthermore, the asymptotic stability of the unconstrained linear closed-loop system is established. These stability analysis results provide a principle for selecting penalty weights in the underlying optimization problem to achieve the desired closed-loop performance for both the transient and the asymptotic dynamics. Motivated by the fact that CAVs and human-driven vehicles will co-exist on the road for a long period in the near future, the third part of this dissertation extends the pure-CAV coordinated platooning control to a mixed-flow environment. Building on the Newell car-following model, a real-time curve matching algorithm is implemented to calibrate the car-following model and anticipate the movement of human-driven vehicles from real-time trajectory data. A constrained MPC is developed for each CAV platoon, considering their movement interactions through the human-driven vehicle platoon. Furthermore, this study provides a modified dual-based distributed algorithm that improves the convergence speed of the primal problem relative to the dual-based distributed algorithm in Chapter 4. Several requirements for penalty weight selection are derived from stability analysis under unconstrained conditions. Numerical experiments based on field data are conducted to illustrate the effectiveness and efficiency of the proposed solution approach and platoon control schemes.
Ph.D. in Civil Engineering, July 2017
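The Newell car-following model mentioned in this abstract states that a follower reproduces its leader's trajectory shifted back by a time lag and a space gap, x_f(t) = x_l(t - tau) - d. A minimal sketch follows; the lag tau, gap d, and leader trajectory are illustrative values, not the dissertation's calibrated parameters.

```python
# Sketch of Newell's simplified car-following model (illustrative parameters).
def newell_follower(leader_positions, dt, tau, d):
    """Given leader positions sampled every dt seconds, return follower
    positions under Newell's model x_f(t) = x_l(t - tau) - d."""
    shift = int(round(tau / dt))  # time lag expressed in samples
    follower = []
    for k in range(len(leader_positions)):
        # Before the lagged sample exists, hold the initial offset.
        src = max(0, k - shift)
        follower.append(leader_positions[src] - d)
    return follower

# Leader cruising at 20 m/s, sampled at 0.1 s; tau = 1 s lag, d = 8 m gap.
dt, tau, d = 0.1, 1.0, 8.0
leader = [20.0 * k * dt for k in range(50)]
follower = newell_follower(leader, dt, tau, d)
```

At sample k = 20 (t = 2 s) the follower sits where the leader was at t = 1 s, minus the 8 m gap.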
- Title
- DUAL-BASED APPROXIMATION ALGORITHMS FOR MULTIPLE NETWORK DESIGN PROBLEMS
- Creator
- Grimmer, Benjamin
- Date
- 2016, 2016-05
- Description
-
We study a variety of NP-complete network connectivity problems. Our primary results come from a novel dual-based approach to approximating network design problems with cut-based linear programming relaxations. This approach gives a 3/2-approximation to Minimum 2-Edge-Connected Spanning Subgraph that is equivalent to a previously proposed algorithm. One well-studied branch of network design models ad hoc networks where each node can operate at either high or low power. If we allow unidirectional links, we can formalize this as the problem Dual Power Assignment (DPA). Our dual-based approach gives a 3/2-approximation to DPA, improving the previous best known approximation of 11/7 ≈ 1.57. Another standard network design problem is Minimum Strongly Connected Spanning Subgraph (MSCS). We propose a new problem generalizing MSCS and DPA called Star Strong Connectivity (SSC). Then we show that our dual-based approach achieves a 1.6-approximation ratio on SSC. As a result of our dual-based approximations, we prove new upper bounds on the integrality gaps of these problems. For completeness, we present a family of instances of MSCS (and thus SSC) with integrality gap approaching 4/3.
M.S. in Computer Science, May 2016
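As background to the 2-edge-connectivity results above: a subgraph is 2-edge-connected exactly when it is connected and contains no bridge edge. A minimal feasibility check via a DFS low-link computation (this verifies candidate solutions; it is not the thesis's approximation algorithm) can be sketched as:

```python
# Check 2-edge-connectivity: connected and bridge-free (DFS low-link).
def is_two_edge_connected(n, edges):
    adj = [[] for _ in range(n)]
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    disc = [-1] * n   # DFS discovery times (-1 = unvisited)
    low = [0] * n     # low-link values
    timer = [0]
    bridge = [False]

    def dfs(u, parent_edge):
        disc[u] = low[u] = timer[0]; timer[0] += 1
        for v, i in adj[u]:
            if i == parent_edge:
                continue
            if disc[v] == -1:
                dfs(v, i)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:
                    bridge[0] = True  # edge (u, v) is a bridge
            else:
                low[u] = min(low[u], disc[v])

    dfs(0, -1)
    return all(t != -1 for t in disc) and not bridge[0]

# A 4-cycle is 2-edge-connected; a 3-node path is not (both its edges are bridges).
print(is_two_edge_connected(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True
print(is_two_edge_connected(3, [(0, 1), (1, 2)]))                  # False
```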
- Title
- DEVELOPING ALGORITHMIC TRADING STRATEGIES AND EMPIRICAL ANALYSIS WITH HIGH FREQUENCY TRADING DATA
- Creator
- Lee, Jeonghoe
- Date
- 2015, 2015-07
- Description
-
This PhD dissertation aims at developing algorithmic trading strategies and demonstrating data analysis skills; for a quantitative analyst as well as an academic scholar in the financial trading area, both professional backgrounds are indispensable. Chapter 1 presents multi-objective optimization and spontaneous optimization of design variables. For instance, while conventional trading systems pursue a single objective function, multi-objective optimization allows us to manage the essential trade-off among profit, standard deviation, and maximum drawdown. In addition, design parameters such as trading volume, the amount of historical data, and the trading gateways of technical indicators are continuously optimized in real time. Chapter 2 presents an algorithmic trading system built on machine learning concepts and demonstrates its various applications; the main purpose of this research is to propose an objective numerical development framework for algorithmic trading. Chapter 3 pursues an understanding of liquidity measures, which are critical for algorithmic traders and investors. Various liquidity measures have been suggested, and they have different sensitivities to the market. This research analyzes liquidity measures and clarifies the relation between market price return and realized volatility on the one hand and liquidity measures on the other. In sum, with these three chapters, this dissertation demonstrates necessary research topics in algorithmic trading.
Ph.D. in Management Science, July 2015
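The trade-off among profit, standard deviation, and drawdown described above amounts to keeping only Pareto-efficient strategies: those that no other strategy beats on every objective. A toy sketch with made-up numbers (not data from the dissertation):

```python
# Pareto filter over (profit, std deviation, max drawdown) triples:
# profit is maximized, the other two objectives are minimized.
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    no_worse = a[0] >= b[0] and a[1] <= b[1] and a[2] <= b[2]
    better = a[0] > b[0] or a[1] < b[1] or a[2] < b[2]
    return no_worse and better

def pareto_front(strategies):
    return [s for s in strategies
            if not any(dominates(t, s) for t in strategies if t is not s)]

# Four hypothetical candidate strategies.
candidates = [(12.0, 3.0, 5.0), (10.0, 2.0, 4.0),
              (9.0, 2.5, 6.0), (12.0, 3.5, 5.0)]
front = pareto_front(candidates)
# (9.0, 2.5, 6.0) is dominated by (10.0, 2.0, 4.0);
# (12.0, 3.5, 5.0) is dominated by (12.0, 3.0, 5.0).
```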
- Title
- INFEASIBILITY OF A POINTWISE TRUNCATION ERROR ESTIMATE TO DRIVE MESH ADAPTATION
- Creator
- Singh, Manpreet
- Date
- 2016, 2016-12
- Description
-
An investigation of Full Approximation Scheme (FAS) multigrid truncation error estimates at grid points, with application to mesh redistribution, is presented. The feasibility of the error estimate as a means to adapt the mesh to a physical problem, by solving elliptic mesh equations derived from minimizing the error estimate under the principle of equidistribution, is examined on 1-D numerical test cases. To keep mesh movement under control, a parabolized version of the mesh equation is also tested, allowing an active comparison of possible improvements in adaptivity and mesh quality. The results reveal smoothness issues indicating the need for a more robust estimator within the adaptive redistribution framework. In particular, the prevalence of poor zonal effects on the mesh points alone points to a lack of information over each cell, rendering the estimate ineffective for adapting the mesh.
M.S. in Mechanical Engineering, December 2016
- Title
- Economic and Computational Methods for the Control of Uncertain Systems
- Creator
- Zhang, Jin
- Date
- 2019
- Description
-
Economic Linear Optimal Control (ELOC) can improve the effective use of economic and dynamic information throughout the traditional optimization and control hierarchy. This dissertation investigates computational procedures for obtaining a global solution to the ELOC problem. The proposed method employs the Generalized Benders Decomposition (GBD) algorithm. Compared to the previous branch-and-bound approach, applying GBD to the ELOC problem greatly improves computational performance. A technological benefit of decomposing the problem into steady-state and dynamic parts is the ability to utilize nonlinear steady-state models, since the relaxed master problem is free of SDP-type constraints and can be solved using any global nonlinear programming algorithm.
To address the issue of model/plant mismatch, the dissertation also investigates how to handle box-type uncertainties in ELOC. We consider two methods: a robust formulation for when the uncertainty is completely unknown, and a Linear Parameter Varying formulation for when the uncertainty can be measured in real time. In both cases, the infinite number of conditions that need to be satisfied is reduced to a finite set of constraints. The resulting problem formulations have a structure similar to the ELOC problem and can be solved globally by employing generalized Benders decomposition.
Despite a high-quality control law, the ultimate performance of a closed-loop system is dictated by the quality and limitations of its hardware elements. Thus, hardware selection is also investigated in the dissertation. The cost-optimal hardware selection problem has been shown to be of the Mixed Integer Convex Programming (MICP) class. While such a formulation provides a route to global optimality, use of the branch-and-bound search procedure has limited its application to fairly small systems. In this dissertation, we illustrate that a simple reformulation of the MICP and subsequent application of the GBD algorithm results in massive reductions in computational effort.
Finally, the problems of value-optimal sensor network design (SND) for steady-state and closed-loop systems are investigated. The value-optimal SND problem has been shown to be of the nonconvex mixed integer programming class. In the dissertation, it is demonstrated that, after transforming it into an equivalent reformulation, application of the GBD algorithm significantly reduces the computational effort.
- Title
- APPLICATION OF MACHINE LEARNING TO ELECTRICAL DATA ANALYSIS
- Creator
- Bao, Zhen
- Date
- 2017, 2017-05
- Description
-
The dissertation is composed of four parts: modeling the demand response capability of internet data centers processing batch computing jobs, cloud-storage-based power consumption management in internet data centers, identifying the hot socket problem in smart meters, and online event detection for non-intrusive load monitoring without known labels. Mathematical models are constructed for each of the four targets, and numerical examples are used to test the effectiveness of the models. The first two parts optimize jobs in the data center to find the best use of the existing computing and storage resources; mixed-integer programming (MIP) is used in the formulation. The purpose of the third part is to identify the hot socket problem in smart meters; machine learning methods are used to locate badly installed smart meters by analyzing their historical data. The fourth part is non-intrusive load monitoring for residential loads; signal processing and deep learning methods are used to identify specific loads from high-frequency signals.
Ph.D. in Electrical Engineering, May 2017
- Title
- HARDWARE ACCELERATION OF HASHING FUNCTIONS FOR CRYPTOCURRENCY MINING USING ZYNQ SOC
- Creator
- Rajagopal, Vignesh
- Date
- 2018, 2018-05
- Description
-
Cryptocurrencies rely on secure hashing algorithms and public-key cryptography to keep transactions secure, disallow double spending, and maintain a decentralized ledger. In a bitcoin's lifetime, the most computationally intensive work is done at conception: new bitcoins are brought into the ecosystem by a process called mining. While mining encompasses various mathematical functions and nuances that are described in this thesis, the key element is a double SHA256 function, which has to be performed many times until a specified criterion is met. The focus of this thesis is to speed up one iteration of this double hash function through optimizations in hardware. The hashing function and its optimizations are implemented on Xilinx's Zedboard, which contains both an FPGA fabric and a microprocessor (ZYNQ7). Using Xilinx's product suite, the double hashing function was taken from a high-abstraction-level language down to low-level hardware deployment, following the design flow and applying various optimization methods, or directives; among those employed were array partitioning, pipelining, and loop unrolling. The primary goal of this thesis is not simply to achieve low latency; rather, it is to reach a compromise between latency, power consumption, and area usage. Techniques are explored and tested both individually and in hybrid solutions, and various practical trade-offs are considered: miners have to take into account the increased hardware and power costs that come with high-area hardware implementations. After all, a miner does not want a setup that costs more to run than the worth of the bitcoins mined. Therefore, the focus of this thesis is to optimize the double SHA256 hashing function for speed, area, and power. The work points towards array partitioning, a technique that brings frequently accessed variables from BRAM into registers, as allowing the best combination of decreased latency with only marginal increases in power and area. Utilized alongside pipelining, it achieved a 2.83x speedup in tests over an auto-optimized hardware implementation while decreasing both power consumption and the usage of look-up tables (LUTs) and flip-flops (FFs). Implementing these optimizations in a real-life mining rig, complete with comparator, incrementing nonce, and live block-header construction, is left as future work, and optimizing the other mathematical functions of the Bitcoin life cycle lies beyond the scope of this thesis. As such, this work serves to reduce the latency of a single iteration of the double SHA256 function as it applies to Bitcoin mining, while taking into account practical considerations for the miner. The result is an IP core that can be instantiated 10-15 times in a modest FPGA chip and used in the development of a full mining rig.
M.S. in Electrical Engineering, May 2018
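The kernel the thesis accelerates has a simple software reference: Bitcoin's proof-of-work hashes the 80-byte block header twice with SHA-256 and interprets the digest as an integer to compare against a target. A sketch (the header bytes and target below are placeholders, not real block data):

```python
# Software reference for the double-SHA256 proof-of-work kernel.
import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def meets_target(header: bytes, target: int) -> bool:
    # Bitcoin treats the final digest as a little-endian integer.
    return int.from_bytes(double_sha256(header), "little") < target

header = bytes(80)  # placeholder 80-byte header, not a real block
digest = double_sha256(header)
assert len(digest) == 32  # SHA-256 always yields 32 bytes
```

A miner repeats this with an incrementing nonce in the header until `meets_target` holds; the hardware IP core accelerates exactly one such iteration.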
- Title
- ECONOMIC MPC-BASED DESIGN AND OPERATION OF GRID SCALE ENERGY STORAGE SYSTEMS
- Creator
- Adeodu, Oluwasanmi
- Date
- 2019
- Description
-
It is generally recognized that higher penetration of renewable power on the electric grid, along with its attendant environmental benefits, is limited by renewables' inherent variability and intermittency. One approach to alleviating this issue is to install grid-scale energy storage as a buffer. However, the economic viability of such an endeavor depends on the optimal sizing and placement (OSP) of storage units, which in turn requires the specification of an appropriate storage management policy. While stochastic programming with recourse is recognized as the standard approach to stage-wise optimal decision-making under uncertainty, Economic Model Predictive Control (EMPC) is put forward as a deterministic simplification of the former and demonstrated to be a viable economic dispatch strategy for networks with a high proportion of renewable energy and storage. Then, a numerical, EMPC-based gradient search strategy is proposed to address the OSP problem. Since both the operating policy and OSP questions are invariably massive optimization problems in real systems, strong emphasis is laid on computational tractability. Therefore, the analytical nature of a surrogate stochastic control policy, Economic Linear Optimal Control (ELOC), is exploited to develop innovative modifications to both algorithms. The end products are (1) an Approximate Infinite Horizon EMPC (AIH-EMPC) strategy, a relatively low-computational-cost variant of EMPC, and (2) a hybrid EMPC-ELOC OSP strategy that essentially sidesteps the inherent combinatorial complexity of the unit location problem.
- Title
- Statistical Experimental Design and Modeling for Complex Data
- Creator
- Huang, Xiao
- Date
- 2018
- Description
-
The ability to handle complex data is essential for new research findings and business success today. With increased complexity, data can be difficult to collect with designed experiments or difficult to analyze with statistical models. Both kinds of difficulty are addressed in this dissertation.
The first part (Chapters 2 and 3) addresses complex data collection through two design-of-experiments problems. In Chapter 2, we consider the Bayesian A-optimal design problem under a hierarchical probabilistic model involving both quantitative and qualitative response variables; the objective function is derived and an efficient optimization algorithm is developed. In Chapter 3, we consider the A/B-testing problem and propose a novel discrepancy-based approach for designing such experiments. As numerical examples show, A/B-testing experiments designed in this way achieve better group balance and parameter estimation results.
In the second part (Chapters 4 and 5), we focus on analyzing complex data with Gaussian process (GP) models, which are widely used for analyzing data with highly nonlinear relationships and for emulating complex systems. In Chapter 4, we apply and extend the GP model to analyze in-cylinder pressure data from experiments on a newly developed dual-fuel engine; the resulting model incorporates different data types and achieves good prediction accuracy. In Chapter 5, a generalized functional ANOVA GP model is proposed to tackle the difficulty of high-dimensional feature spaces, and we develop an efficient algorithm for building such a model from the perspective of multiple kernel learning. The proposed approach outperforms traditional MLE-based GP models in both computational efficiency and prediction accuracy.
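The GP regression at the core of this abstract can be sketched in its standard textbook form with an RBF kernel; the kernel hyperparameters and toy sine data below are illustrative, not the dissertation's calibrated engine model.

```python
# Minimal Gaussian-process regression: posterior mean with an RBF kernel.
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    # k(x, x') = var * exp(-(x - x')^2 / (2 * length^2)) for 1-D inputs
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-6):
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf(x_test, x_train)
    # Posterior mean: K_s (K + noise I)^{-1} y
    return K_s @ np.linalg.solve(K, y_train)

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)                                  # toy noiseless observations
mean = gp_posterior_mean(x, y, np.array([1.5]))  # prediction between points
```

With near-zero noise the posterior mean interpolates the training data exactly, which is why GP models suit emulating deterministic complex systems.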
- Title
- Intelligent Job Scheduling on High Performance Computing Systems
- Creator
- Fan, Yuping
- Date
- 2021
- Description
-
The job scheduler is a crucial component of high-performance computing (HPC) systems. It sorts and allocates jobs according to site policies and resource availability, playing an important role in the efficient use of system resources and in user satisfaction. Existing HPC job schedulers typically leverage simple heuristics to schedule jobs. However, the rapid growth in system infrastructure and the introduction of diverse workloads pose serious challenges to traditional heuristic approaches. First, current approaches concentrate on CPU footprint and ignore the performance of other resources. Second, scheduling policies are manually designed and consider only isolated job information, such as job size and runtime estimate. Such a manual design process prevents schedulers from making informed decisions by extracting the abundant environment information (i.e., system and queue information). Moreover, they can hardly adapt to workload changes, leading to degraded scheduling performance. These challenges call for a new job scheduling framework that can extract useful information from diverse workloads and the increasingly complicated system environment, and make well-informed scheduling decisions in real time.
In this work, we propose an intelligent HPC job scheduling framework to address these emerging challenges. Our research takes advantage of advanced machine learning and optimization methods to extract useful workload- and system-specific information and to teach the framework to make efficient scheduling decisions under various system configurations and diverse workloads. The framework comprises four major efforts. First, we focus on providing more accurate job runtime estimates. Estimated runtime is one of the most important factors affecting scheduling decisions; however, user-provided estimates are highly inaccurate, and existing solutions are prone to underestimation, which causes jobs to be killed. We leverage and enhance a machine learning method, the Tobit model, to improve the accuracy of job runtime estimates while reducing the underestimation rate. More importantly, using TRIP's improved job runtime estimates boosts scheduling performance by up to 45%. Second, we conduct research on multi-resource scheduling. HPC systems have undergone significant changes in recent years: new hardware devices, such as GPUs and burst buffers, have been integrated into production HPC systems, significantly expanding the schedulable resources. Unfortunately, current production schedulers allocate jobs solely based on CPU footprint, which severely hurts system performance. In our work, we propose a framework that takes all schedulable resources into consideration by transforming the problem into a multi-objective optimization (MOO) problem and rapidly solving it via a genetic algorithm.
Next, we leverage reinforcement learning (RL) to automatically learn efficient workload- and system-specific scheduling policies. Existing HPC schedulers either use generalized, simple heuristics or optimization methods that ignore workload and system characteristics. To overcome this issue, we design a new scheduling agent, DRAS, to automatically learn efficient scheduling policies. DRAS leverages advances in deep reinforcement learning and incorporates the key features of HPC scheduling in the form of a hierarchical neural network structure. We develop a three-phase training process to help DRAS effectively learn the scheduling environment (i.e., the system and its workloads) and rapidly converge to an optimal policy. Finally, we explore the problem of scheduling mixed workloads, i.e., rigid, malleable, and on-demand workloads, on a single HPC system. Traditionally, rigid jobs are the main tenants of HPC systems. In recent years, malleable applications, i.e., jobs that can change size before and during execution, have been emerging on HPC systems. In addition, dedicated clusters were the main platforms for running on-demand jobs, i.e., jobs that need to be completed in the shortest time possible. As the sizes of on-demand jobs grow, HPC systems become more cost-efficient platforms for them. However, existing studies do not consider scheduling all three types of workloads. In our work, we propose six mechanisms, combining checkpointing, shrink, and expansion techniques, to schedule the mixed workloads on one HPC system.
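The simple heuristics this framework improves on can be illustrated with a one-step first-fit backfill over a FIFO queue. This is a toy model of my own construction, not the dissertation's scheduler: it allocates by node count only and ignores runtimes, so unlike real EASY backfilling it cannot guarantee the queue head is never delayed.

```python
# One scheduling step: start queued jobs in FIFO order when they fit,
# letting smaller jobs backfill into nodes the blocked head cannot use.
def schedule_step(free_nodes, queue):
    """queue: list of (job_name, nodes_needed). Returns (started, waiting)."""
    started, waiting = [], []
    for name, need in queue:
        if need <= free_nodes:
            started.append(name)      # fits now (in order, or backfilled)
            free_nodes -= need
        else:
            waiting.append((name, need))
    return started, waiting

# 8 free nodes: A (6 nodes) starts, B (8 nodes) must wait,
# C (2 nodes) backfills into the remaining free nodes.
queue = [("A", 6), ("B", 8), ("C", 2)]
started, waiting = schedule_step(8, queue)
```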
- Title
- Optimization methods and machine learning model for improved projection of energy market dynamics
- Creator
- Saafi, Mohamed Ali
- Date
- 2023
- Description
-
Since signing the legally binding Paris Agreement, governments have been striving to fulfill the decarbonization mission. To reduce carbon emissions from the transportation sector, countries around the world have created well-defined new-energy-vehicle development strategies that are further expanding into hydrogen vehicle technologies. In this study, we develop the Transportation Energy Analysis Model (TEAM) to investigate the impact of CO2 emissions policies on the future of the automotive industry. On the demand side, TEAM models consumer choice considering the impacts of technology cost, energy cost, refueling/charging availability, and consumer travel patterns. On the supply side, the module simulates technology supply by the auto industry with the objective of maximizing industry profit under the constraints of government policies. Therefore, we apply different optimization methods to guarantee reaching the optimal automotive industry response each year up to 2050. From developing an upgraded differential evolution algorithm to applying response surface methodology to simplify the objective function, the goal is to enhance optimization performance and efficiency compared to adopting the standard genetic algorithm. Moreover, we investigate TEAM's robustness through a sensitivity analysis that identifies the key parameters of the model. Finally, based on the key sensitive parameters that drive the automotive industry, we develop a neural network that learns the market penetration model and predicts market shares in competitive time by bypassing the total-cost-of-ownership analysis and profit optimization.
The central motivating hypothesis of this thesis is that modern optimization and modeling methods can be applied to obtain a computationally efficient, industry-relevant model to predict optimal market sales shares for light-duty vehicle technologies. Indeed, a robust market penetration model optimized with sophisticated methods is a crucial tool for automotive companies, as it quantifies consumer behavior and delivers the optimal way to maximize profits by highlighting the vehicle technologies they could invest in. In this work, we show that TEAM reaches the global solution when optimizing not only industry profits but also optimized alternative-fuel blends such as synthetic fuels. The time complexity of the model has been substantially improved, decreasing from hours using the genetic algorithm, to minutes using differential evolution, to milliseconds using the neural network.
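The differential evolution idea named above can be sketched in its generic DE/rand/1/bin textbook form. This is not the thesis's upgraded algorithm, and the sphere objective stands in for the (proprietary) industry-profit model:

```python
# Generic DE/rand/1/bin differential evolution, minimizing an objective f
# over box constraints. Parameters F, CR, pop_size are conventional defaults.
import random

def differential_evolution(f, bounds, pop_size=20, F=0.6, CR=0.9,
                           generations=200, seed=0):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct population members other than i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # force at least one mutated coordinate
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)   # clip to the box constraints
                else:
                    v = pop[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fit[i]:                  # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda k: fit[k])
    return pop[best], fit[best]

sphere = lambda x: sum(v * v for v in x)      # stand-in objective
x_best, f_best = differential_evolution(sphere, [(-5, 5)] * 3)
```

The population-based search needs no gradients, which is why it suits the discontinuous profit landscapes produced by discrete technology choices.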
- Title
- Development of Granular Jamming Soft Robots from Boundary Constrained to Interconnected Systems
- Creator
- Tanaka, Koki
- Date
- 2023
- Description
-
This dissertation provides a detailed study of the conceptualization, creation, and optimization of a unique, interconnected soft robot system. It introduces a flexible assembly of locomotive robotic modules interconnected by an envelope capable of granular jamming, and highlights the practical ability of these interconnected modules to adapt and function cohesively as a single robot system.
As a precursor to the primary investigation, the study first presents the development and experimental validation of a boundary-constrained mobile soft robot. This design leverages granular jamming for locomotion and object grasping, laying a robust foundation for the subsequent exploration of complex soft robotic systems.
The cornerstone of this study is the development of an interconnected soft robot system in which locomotive robotic modules, primarily composed of an elastic material, are bound together by a flexible envelope designed for granular jamming. The modules incorporate origami-inspired artificial muscle actuators whose semi-soft characteristics complement the inherent flexibility of the modules and play a significant role in module propulsion. Although the design incorporates a traditional rigid power source, as opposed to a fully soft robot system, integrating a pneumatic power method successfully reduces the mechanical intricacy and unwieldiness typically associated with rigid mechanisms.
This research further probes the diverse applications of the interconnected soft robot system. Its ability to shape-shift and maintain these forms during locomotion exemplifies a robust control strategy for a system that may undergo substantial deformation, proving instrumental in dynamic environments. The study demonstrates a methodology for object manipulation and obstacle avoidance that does not rely heavily on precise control and sensing; instead, it utilizes the inherent compliance of the soft robot system. In a notable departure from previous studies, the system also exhibits a unique capability for ascending and traversing inclined surfaces.
Additionally, the study delves into the optimization of the interconnected robot system via a physics-based simulation and a genetic algorithm. This approach yields an assortment of optimized configurations that excel at grasping objects of various shapes, laying a robust groundwork for the future progression of soft robotics.
In conclusion, this investigation offers new insights into soft robotics through the successful design and optimization of an interconnected soft robot system, with standout performance in deformation, manipulation, and navigation tasks. This work enhances the adaptability and functionality of future robotic systems, representing a significant step toward a future where robots can dynamically adapt to their environments and efficiently accomplish complex tasks.