Search results
(1 - 5 of 5)
- Title
- ADVANCING DESIGN SIZING AND PERFORMANCE OPTIMIZATION METHODS FOR BUILDING INTEGRATED THERMAL AND ELECTRICAL ENERGY GENERATION SYSTEMS
- Creator
- Zakrzewski, Thomas
- Date
- 2017-07
- Description
- Combined electrical and thermal energy systems (i.e., cogeneration systems) will play an integral role in future energy supplies because they can yield higher overall system fuel utilization and efficiency, and thus produce fewer greenhouse gas emissions, than traditional separate systems. However, methods for both design sizing and performance optimization for cogeneration systems and commercial buildings lag behind the tremendous advancements that have been made in building performance simulation methods. Therefore, the overall goal of this research is to develop and apply novel cogeneration system modeling techniques for optimizing the design sizing and dispatch of generation sets to reduce energy use, energy costs, and greenhouse gas emissions. This research is divided into four main research objectives: (1) generalizing the cogeneration performance of lean-burn natural gas spark-ignition reciprocating engines, (2) developing a new Design and Optimization of Combined Heat and Power (DOCHP) systems optimization tool for improving the design sizing of building-integrated and grid-tied CHP systems, (3) demonstrating the utility of the DOCHP tool with several practical applications, and (4) integrating on-site intermittent renewable energy systems into the DOCHP tool to analyze micro-grid applications. This research leverages recent developments in multiple areas of building and system simulation. DOCHP advances design sizing and performance optimization methods for building-integrated thermal and electrical energy generation systems through the application of an evolutionary, artificial-intelligence-based genetic algorithm, which can solve non-linear optimization problems with discrete constraints while considering non-linear part-load generation set performance curves.
Ph.D. in Civil Engineering, July 2017
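The genetic-algorithm sizing-and-dispatch approach this abstract describes can be illustrated with a minimal sketch. Every number below (engine sizes, prices, loads, the part-load efficiency curve) is an invented placeholder, not data or a method from the dissertation: it simply shows a discrete genome of genset sizes evolved against a cost function with a non-linear part-load penalty.

```python
import random

random.seed(42)

# Hypothetical candidate genset sizes (kW); 0 means "no engine in this
# slot". All figures below are invented placeholders, not dissertation data.
SIZES_KW = [0, 100, 200, 400, 800]
HOURLY_LOAD_KW = [120, 300, 520, 640, 410, 180]   # toy electric demand
GRID_PRICE = 0.15   # $/kWh purchased from the grid (assumed)
GAS_PRICE = 0.05    # $/kWh of fuel input (assumed)

def part_load_efficiency(load_fraction):
    """Invented non-linear part-load curve: efficiency drops below full load."""
    return 0.38 * load_fraction ** 0.3

def operating_cost(genome):
    """Fuel plus grid cost of serving the toy load with the chosen gensets."""
    capacity = sum(SIZES_KW[g] for g in genome)
    cost = 0.0
    for load in HOURLY_LOAD_KW:
        chp_output = min(load, capacity)
        if chp_output > 0:
            eff = part_load_efficiency(chp_output / capacity)
            cost += (chp_output / eff) * GAS_PRICE      # fuel for CHP output
        cost += (load - chp_output) * GRID_PRICE        # grid covers the rest
    return cost

def evolve(pop_size=30, generations=60, n_units=3):
    # Seed the population with the all-grid baseline so elitist survival
    # guarantees the result is never worse than buying all power from the grid.
    pop = [[0] * n_units] + [
        [random.randrange(len(SIZES_KW)) for _ in range(n_units)]
        for _ in range(pop_size - 1)
    ]
    for _ in range(generations):
        pop.sort(key=operating_cost)
        survivors = pop[: pop_size // 2]                # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_units)          # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                   # point mutation
                child[random.randrange(n_units)] = random.randrange(len(SIZES_KW))
            children.append(child)
        pop = survivors + children
    return min(pop, key=operating_cost)

best = evolve()
print(best, round(operating_cost(best), 2))
```

The discrete genome (one size index per engine slot) is what makes a genetic algorithm a natural fit here: the part-load curve makes the cost non-linear, and the size choices are inherently integer-constrained.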
- Title
- Mathematics of Civil Infrastructure Network Optimization
- Creator
- Rumpf, Adam Andrew
- Date
- 2020
- Description
- We consider a selection of problems from civil infrastructure network design that are of great importance in modern urban planning but have, until relatively recently, gone largely ignored in the mathematical literature. Each of these problems is approached from the perspective of network optimization-based modeling, with a major focus placed on the development of efficient solution algorithms. We begin with a study of the phenomenon of interdependent civil infrastructure networks, wherein the functionality of one network (such as a telecommunications system) requires the input of resources from another network (such as the electrical power grid). We first consider a linear relaxation of an established binary interdependence minimum-cost network flows model, including its unique modeling applications and its use as part of a randomized rounding approximation algorithm for the mixed-integer model. We also develop a generalized network simplex algorithm for the efficient solution of this generalized minimum-cost network flows problem. We then move on to consider a trilevel network interdiction game for use in planning the fortification of interdependent networks subject to targeted attacks. A variety of solution algorithms are developed for both the binary and the linear interdependence models, and the linear interdependence model is used to develop an approximation algorithm for the more computationally expensive binary model. We then develop a public transit network design model which incorporates a social access objective in addition to traditional operator cost and user cost objectives. The model is meant for use in planning minor modifications to a public transit network capable of improving equity of access to important services while guaranteeing that service levels remain within a specified tolerance of their initial values.
A hybrid tabu search/simulated annealing algorithm is developed to solve this model, which is then applied to a test case based on the Chicago public transit network with the objective of improving equity of primary health care access across the city.
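The hybrid tabu search/simulated annealing idea can be illustrated on a toy 0/1 problem. The weights, cooling schedule, and tabu rule below are invented stand-ins, far simpler than the transit-network objective and constraints the abstract describes; the sketch only shows how a tabu list and a Metropolis acceptance rule combine in one local-search loop.

```python
import math
import random

random.seed(0)

# Toy stand-in objective: pick a 0/1 vector of candidate "network
# modifications" minimizing an invented linear cost.
WEIGHTS = [4, -7, 2, -3, 5, -1, -6, 3]

def cost(x):
    return sum(w for w, xi in zip(WEIGHTS, x) if xi)

def hybrid_search(n=8, iters=500, t0=5.0, cooling=0.99, tabu_len=5):
    x = [random.randint(0, 1) for _ in range(n)]
    best, best_cost = x[:], cost(x)
    tabu = []                        # recently flipped positions are tabu
    t = t0
    for _ in range(iters):
        t *= cooling                 # simulated-annealing cooling schedule
        i = random.randrange(n)
        if i in tabu:
            continue                 # tabu move: skip (no aspiration rule)
        cand = x[:]
        cand[i] ^= 1                 # neighborhood move: flip one bit
        delta = cost(cand) - cost(x)
        # Metropolis rule: accept improvements, sometimes accept worse moves.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand
            tabu = (tabu + [i])[-tabu_len:]
            if cost(x) < best_cost:
                best, best_cost = x[:], cost(x)
    return best, best_cost

best, best_cost = hybrid_search()
print(best, best_cost)
```

The tabu list keeps the annealing walk from undoing its own recent moves, while the temperature schedule still allows occasional uphill steps early on, which is the essential character of such hybrids.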
- Title
- Quantification of Vascular Permeability in the Retina Using Fluorescein Videoangiography Data as a Biomarker for Early Diabetic Retinopathy
- Creator
- Kayaalp Nalbant, Elif
- Date
- 2023
- Description
- Diabetic retinopathy (DR), which is the most common cause of blindness in the working-age population, affects over one-third of those who have had diabetes for over ten years. High blood sugar (hyperglycemia) damages blood vessels and the tight junctions at the blood-retinal barrier (BRB). Chronic inflammation leads to changes in vascular health, and over time blood vessels tend to become damaged and exhibit higher “leakage,” or permeability. In the late stage of DR, hemorrhages can occur, leading to irreversible damage of neuronal tissue in the retina and vision loss. In the clinic, there are biomarkers and imaging modalities used to diagnose DR based on some of its more severe products (e.g., hemorrhage), but there is no non-invasive, highly sensitive method to detect diabetic retinopathy before clinical signs occur, when mitigating therapies could be more effective. In this thesis, indicator dilution theory was explored to model the temporal dynamics of fluorescein in the retina after intravenous injection, with the aim of quantitatively mapping subtle changes in retinal blood flow and vascular permeability that could preempt subsequent irreversible damage. Specifically, a simplified version of indicator dilution theory, namely the “adiabatic approximation in tissue homogeneity” (AATH) model, was used to estimate physiological parameters such as blood flow (F) and the extraction fraction (E, a parameter coupled with vascular permeability) from retinal fluorescein videoangiography data. The AATH fitting protocol was optimized through simulations using a more complex model (the AATH-vascular heterogeneity model, AATH-VH). It was determined that a two-step least-squares fitting method was more sensitive than a single-step least-squares fit of the AATH model to simulated data for evaluating vascular permeability in early diabetic retinopathy.
The optimized data analysis protocol was then evaluated in an initial clinical study comparing healthy control subjects to those with moderate non-proliferative DR (NPDR). Volumetric blood flow and retinal vascular permeability maps were compared between patient groups, with clear increases in extraction fraction observed in the NPDR patients compared to controls. These promising early data are the foundation for an ongoing five-year study tracking 100 diabetic patients with no DR, to see whether early changes in vascular permeability can predict which patients are more likely to progress to DR.
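The two-step fitting strategy can be sketched on a simplified stand-in for the AATH model: a single mono-exponential washout curve fitted by a coarse global grid search followed by a refined local search around the coarse optimum. The curve shape, parameter names, and bounds below are illustrative assumptions, not the thesis's actual model or data.

```python
import math

# Synthetic "fluorescein concentration" curve: a simplified mono-exponential
# washout c(t) = A * exp(-t / tau). The real AATH model has separate vascular
# and extravascular phases; A and tau here are illustrative stand-ins for
# flow- and permeability-related parameters.
TRUE_A, TRUE_TAU = 2.0, 8.0
TIMES = [0.5 * i for i in range(40)]
DATA = [TRUE_A * math.exp(-t / TRUE_TAU) for t in TIMES]

def sse(a, tau):
    """Sum of squared residuals between the model and the measured curve."""
    return sum((a * math.exp(-t / tau) - y) ** 2 for t, y in zip(TIMES, DATA))

def grid_fit(a_range, tau_range, steps=20):
    """Least-squares fit over a rectangular parameter grid."""
    best = None
    for i in range(steps + 1):
        a = a_range[0] + (a_range[1] - a_range[0]) * i / steps
        for j in range(steps + 1):
            tau = tau_range[0] + (tau_range[1] - tau_range[0]) * j / steps
            err = sse(a, tau)
            if best is None or err < best[0]:
                best = (err, a, tau)
    return best

# Step 1: coarse global search over wide (assumed) physiological bounds.
_, a1, tau1 = grid_fit((0.1, 5.0), (1.0, 20.0))
# Step 2: refined local search around the coarse optimum.
_, a2, tau2 = grid_fit((a1 - 0.3, a1 + 0.3), (tau1 - 1.0, tau1 + 1.0))
print(round(a2, 2), round(tau2, 2))
```

The point of the two steps is robustness: the coarse pass avoids committing to a bad local basin, and the fine pass recovers parameter resolution that the coarse grid alone would miss.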
- Title
- Electric Machine Windings with Reduced Space Harmonic Content
- Creator
- Tang, Nanjun
- Date
- 2023
- Description
- The reduction of magnetomotive force (MMF) space harmonic content in electric machine windings can significantly improve a machine's electromagnetic performance. Potential benefits include reduced torque ripple, a more sinusoidal back EMF, and reduced power losses. With the proposal of a uniform mathematical representation that applies to both distributed windings and fractional-slot concentrated windings (FSCWs), closed-form expressions can be derived for harmonic magnitudes, winding factors, and related quantities. These expressions can then be used to formulate the MMF space harmonic suppression problem for windings, which seeks improved windings with certain harmonic orders reduced or even eliminated by varying the slot distribution and coil turns. Different solution techniques are explored to gain additional insight into the solution space. The underlying mathematical relations between different harmonic orders are proved to establish the family phenomenon, which gives a clear picture of the higher-order part of the harmonic spectrum and is the foundation for exact calculation of the total harmonic distortion (THD) of windings. The exact THD calculation further indicates how the minimal THD can be achieved for a winding. Windings can also be analyzed and designed from the viewpoint of subsets to incorporate distribution and excitation phase-shift effects. With reduced or minimal space harmonic content, new winding designs can significantly improve the Pareto front when combined with motor geometry optimization. Design examples, including a 12-slot 2-pole mixed-layer distributed winding, an 18-slot 2-pole mixed-layer distributed winding, and a four-layer 24-slot 22-pole FSCW with excitation phase shift, are presented with finite element analysis (FEA) results to verify the performance improvements.
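The winding-factor and THD quantities discussed in this abstract can be sketched for a simple hypothetical case: a 12-slot, 2-pole, single-layer layout invented for illustration. The slot assignment and the per-harmonic normalization below are assumptions, not designs from the thesis; the sketch only shows how harmonic winding factors follow from a phasor sum over slot conductors, and how MMF-amplitude terms proportional to k_w(nu)/nu feed a THD figure.

```python
import cmath
import math

# Hypothetical 12-slot, 2-pole, single-layer winding: phase A occupies
# slots 0 and 1 with positive conductors and slots 6 and 7 with returns.
SLOTS = 12
CONDUCTORS = {0: +1, 1: +1, 6: -1, 7: -1}   # slot index -> signed turns

def winding_factor(nu):
    """Normalized magnitude of the conductor phasor sum at harmonic order nu."""
    total = sum(n * cmath.exp(1j * nu * 2 * math.pi * s / SLOTS)
                for s, n in CONDUCTORS.items())
    return abs(total) / sum(abs(n) for n in CONDUCTORS.values())

# The MMF space-harmonic amplitude scales as k_w(nu) / nu, so a THD-style
# figure sums (k_nu / nu)^2 over nu > 1 relative to the fundamental.
fundamental = winding_factor(1) / 1
harmonics = [winding_factor(nu) / nu for nu in range(3, 100, 2)]
thd = math.sqrt(sum(h * h for h in harmonics)) / fundamental
print(round(winding_factor(1), 4), round(thd, 4))
```

For this layout the fundamental winding factor comes out to cos(15°) ≈ 0.966, the classic distribution factor for two slots per pole per phase, which is a quick sanity check on the phasor-sum formulation.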
- Title
- Algorithms for Discrete Data in Statistics and Operations Research
- Creator
- Schwartz, William K.
- Date
- 2021
- Description
- This thesis develops mathematical background for the design of algorithms for discrete-data problems, two in statistics and one in operations research. Chapter 1 gives some background on what chapters 2 to 4 have in common and defines some basic terminology that the other chapters use. Chapter 2 offers a general approach to modeling longitudinal network data, including exponential random graph models (ERGMs), that vary according to certain discrete-time Markov chains (the abstract of chapter 2 borrows heavily from the abstract of Schwartz et al., 2021). It connects conditional and Markovian exponential families, permutation-uniform Markov chains, various (temporal) ERGMs, and statistical considerations such as dyadic independence and exchangeability. Markovian exponential families are explored in depth to prove that they, and only they, have exponential-family finite-sample distributions with the same parameter as that of the transition probabilities. Many new statistical and algebraic properties of permutation-uniform Markov chains are derived. We introduce exponential random ?-multigraph models, motivated by our result on replacing ? observations of a permutation-uniform Markov chain of graphs with a single observation of a corresponding multigraph. Our approach simplifies the analysis of some network and autoregressive models from the literature. Removing the models' temporal dependence, but not their interpretability, permitted us to offer closed-form expressions for maximum likelihood estimators that were not previously available. Chapter 3 designs novel, exact, conditional tests of statistical goodness-of-fit for mixed membership stochastic block models (MMSBMs) of networks, both directed and undirected.
The tests employ a χ²-like statistic from which we define p-values for the general null hypothesis that the observed network's distribution is in the MMSBM, as well as for the simple null hypothesis that the distribution is in the MMSBM with specified parameters. For both tests the alternative hypothesis is that the distribution is unconstrained, and both assume we have observed the block assignments. As exact tests that avoid asymptotic arguments, they are suitable for both small and large networks. Further, we provide and analyze a Monte Carlo algorithm to compute the p-value for the simple null hypothesis. In addition to our rigorous results, simulations demonstrate the validity of the test and the convergence of the algorithm. As a conditional test, it requires that the algorithm sample the fiber of a sufficient statistic. In contrast to the Markov chain Monte Carlo samplers common in the literature, our algorithm is an exact simulation, so it is faster, more accurate, and easier to implement. Computing the p-value for the general null hypothesis remains an open problem because it depends on an intractable optimization problem. We discuss the two schools of thought evident in the literature on how to deal with such problems, and we recommend a future research program to bridge the gap between those two schools. Chapter 4 investigates an auctioneer's revenue maximization problem in combinatorial auctions, in which bidders express demand for discrete packages of multiple units of multiple, indivisible goods. The auctioneer's NP-complete winner determination problem (WDP) is to fit these packages together within the available supply so as to maximize the sum of the bids. To shorten the path practitioners traverse from legalese auction rules to computer code, we offer a new WDP formalism that reflects how government auctioneers sell billions of dollars of radio-spectrum licenses in combinatorial auctions today.
It models common tie-breaking rules by maximizing a sum of bid vectors lexicographically. After a novel pre-solving technique based on package bids' marginal values, we develop an algorithm for the WDP. In developing the algorithm's branch-and-bound part, adapted to lexicographic maximization, we discover a partial explanation of why the classical WDP has been solved successfully via the linear programming relaxation: it equals the Lagrangian dual. We adapt the relaxation to lexicographic maximization. The algorithm's dynamic-programming part retrieves already-computed partial solutions from a novel data structure suited specifically to our WDP formalism. Finally, we show that the data structure can “warm start” a popular algorithm for solving for opportunity-cost prices.
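The lexicographic winner-determination idea can be sketched with a brute-force toy auction. The goods, packages, prices, and priority-based tie-break below are invented and much simpler than the formalism in the abstract; the sketch only shows feasibility checking against supply and a key compared lexicographically (revenue first, then a tie-break vector).

```python
from itertools import combinations

# Hypothetical mini-auction: two goods with limited supply, and package
# bids of the form (demand per good, bid price, tie-break priority).
SUPPLY = {"A": 2, "B": 1}
BIDS = [
    ({"A": 1, "B": 1}, 10, 1),
    ({"A": 2}, 9, 2),
    ({"A": 1}, 6, 3),
    ({"B": 1}, 4, 4),
]

def feasible(subset):
    """A set of winning bids must fit within the available supply."""
    for good, cap in SUPPLY.items():
        if sum(pkg.get(good, 0) for pkg, _, _ in subset) > cap:
            return False
    return True

def solve():
    """Enumerate feasible bid subsets; maximize revenue, then break ties
    lexicographically on sorted priorities (a stand-in for the richer
    lexicographic bid-vector rules described in the text)."""
    best_key, best = None, ()
    for r in range(len(BIDS) + 1):
        for subset in combinations(BIDS, r):
            if not feasible(subset):
                continue
            revenue = sum(price for _, price, _ in subset)
            tiebreak = tuple(sorted(-p for _, _, p in subset))
            key = (revenue, tiebreak)      # compared lexicographically
            if best_key is None or key > best_key:
                best_key, best = key, subset
    return best_key[0], [p for _, _, p in best]

revenue, winners = solve()
print(revenue, winners)
```

Brute force is exponential in the number of bids, which is exactly why the chapter develops branch-and-bound with a lexicographic LP relaxation; the toy above only fixes the semantics of the objective being maximized.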