Search results
(21 - 40 of 1,034)
Pages
- Title
- OPTIMIZATION AND MARKET CLEARING IN THE POWER SYSTEMS WITH HIGH-LEVEL RENEWABLES
- Creator
- Ye, Hongxing
- Date
- 2016, 2016-05
- Description
-
The increasing penetration of renewable energy sources (RES), such as wind and solar power generation, in recent years to meet various renewable portfolio standards (RPS) has led to more uncertainties in power systems. The RES penetration level is expected to increase further in order to reduce emissions and fight climate change. The growing uncertainties caused by RES pose new challenges in power and energy systems. Advanced models and technologies are urgently needed to provide secure, affordable and clean energy to customers. The Security-Constrained Unit Commitment (SCUC) problem is one of the most important tools in the modern power system: it determines the optimal short-term generation schedule, and electricity is priced and settled based on its solution. In order to manage the uncertainties caused by renewables, it is urgent to develop new SCUC models and solution approaches, and SCUC formulations that consider uncertainties have become a focus of research in recent years. The proposed optimistic robust SCUC combines the idea of robust optimization with the reserve concept in electricity markets. The merit of robust optimization is that its solution can be immunized against any uncertainty, which exactly meets the first priority, reliability, in power system operation. Robust optimization is attractive in theory. However, a solution is robust if and only if the system can survive the worst-case scenario; hence, the key task is to identify that scenario. Unfortunately, finding the worst-case scenario is in general a non-deterministic polynomial-time hard (NP-hard) problem. This creates an issue in satisfying the timeliness requirement that the optimal schedule must be obtained quickly (e.g., within several hours) in day-ahead electricity markets. This dissertation proposes a fast solution approach to finding the worst-case scenario by exploiting the special structure of the SCUC problem.
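To make the worst-case step concrete: in the simplest illustrative setting (one linear reserve constraint with a budgeted uncertainty set, not the dissertation's structure-exploiting approach, and all names below hypothetical), the worst case reduces to greedily activating the largest deviations.

```python
# Toy illustration only: for ONE linear constraint with a budgeted
# uncertainty set {z : 0 <= z_i <= 1, sum(z_i) <= Gamma}, the worst-case
# scenario maximizes sum(d_i * z_i) and is found greedily. The general
# SCUC worst-case problem, with binary commitments, is NP-hard.

def worst_case_deviation(deviations, budget):
    """Return (z, worst) maximizing sum(d_i * z_i) under the budget."""
    order = sorted(range(len(deviations)), key=lambda i: -deviations[i])
    z = [0.0] * len(deviations)
    remaining = float(budget)
    for i in order:
        take = min(1.0, remaining)
        if take == 0.0:
            break
        z[i] = take
        remaining -= take
    return z, sum(d * zi for d, zi in zip(deviations, z))

# Three wind farms with 50/120/80 MW maximum forecast errors, Gamma = 2:
z, worst = worst_case_deviation([50.0, 120.0, 80.0], budget=2)
```

Here the two largest deviations (120 and 80 MW) are driven to their bounds, giving a worst-case shortfall of 200 MW.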
This dissertation also proposes a new market mechanism, based on robust optimization, for managing the uncertainties caused by high-level RES. A new concept, the Uncertainty Marginal Price (UMP), is proposed to charge uncertainty sources and to credit flexible sources. For the first time, explicit price signals are provided and used to manage uncertainties at any level within a robust optimization framework. The proposed mechanism manages uncertainties from both the source side (uncertainty reduction) and the resource side (uncertainty accommodation). In the short term, it provides incentives for RES operators to improve forecasting accuracy (i.e., to reduce uncertainties) and for existing flexible resources (e.g., storage) to participate in uncertainty accommodation. In the long term, it provides price signals for siting new flexible resources (e.g., energy storage) to accommodate the uncertainties from increasing RES penetration.
Ph.D. in Electrical Engineering, May 2016
- Title
- Relationship of Executive Function Patterns and Academic Achievement Across DSM-IV Subtypes of Attention-Deficit/Hyperactivity Disorder
- Creator
- Young, Crystal
- Date
- 2012-07-18, 2012-07
- Description
-
The aim of this study was to expand our understanding of the impact of executive function skills on academic achievement in children with ADHD, as well as to investigate trends in executive functioning and academic achievement between ADHD subtypes in an adolescent sample. Each participant completed neuropsychological assessments, including a measure of intelligence and academic functioning as well as several measures of executive functioning. Results suggested that children with ADHD – Combined Type do not significantly differ from children with ADHD – Inattentive Type on tasks of academic or executive functioning. Issues regarding outcomes associated with ADHD, as well as diagnostic procedures, are discussed.
Ph.D. in Psychology, July 2012
- Title
- SPATIO-TEMPORAL RECONSTRUCTION FOR GATED CARDIAC SPECT
- Creator
- Niu, Xiaofeng
- Date
- 2011-07, 2011-07
- Description
-
In myocardial perfusion imaging using single photon emission computed tomography (SPECT), gated acquisition is often used to deal with blur caused by cardiac motion in the resulting images. While this can provide useful information about myocardial function, gating also inevitably reduces the signal-to-noise ratio of the acquired data. In this work, we investigate and evaluate image reconstruction methods for improving the quality of reconstructed images in cardiac gated SPECT imaging. First, we propose a spatio-temporal (4D) reconstruction procedure for gated images based on discrete Fourier transform (DFT) basis functions, wherein the image activity at each spatial location is regularized by a Fourier representation along the gate dimension. The gated images are then reconstructed by determining the coefficients of the Fourier representation. We explore two reconstruction algorithms: a penalized least-squares approach and a maximum a posteriori approach. Our simulation results demonstrate that the use of DFT basis functions in gated imaging can improve the accuracy of the reconstruction. While in gated imaging the tracer distribution is traditionally treated as constant, a recent development is gated dynamic imaging, where the goal is to obtain, from a single acquisition, an image sequence that simultaneously shows both cardiac motion and the change in tracer distribution over the course of imaging. In this work, we further develop and demonstrate a fully 5D (3D space plus time plus gate) reconstruction procedure for cardiac gated, dynamic SPECT imaging, where the challenge is even greater without the use of multiple fast camera rotations. We develop and compare two iterative reconstruction algorithms: one based on the modified block sequential regularized EM (BSREM-II) algorithm, and the other based on a B-splines algorithm.
Our simulation results demonstrate that the 5D reconstruction procedure can yield gated dynamic images that show quantitative information for both perfusion-defect detection and cardiac motion. Building on the success of 5D reconstruction, we also study the saliency of 5D images for the detection of perfusion defects. We explore efficient ways to characterize and visualize information pertinent to perfusion defects in a 5D image sequence, and apply various metrics to quantify the degree to which perfusion deficits can be detected. We show that these metrics can be used to produce new types of visualizations, showing wall motion and perfusion information, that may be useful for clinical evaluation. Finally, with the ultimate goal of effective detection of lesion defects for clinical use, we investigate a direct reconstruction approach that determines a sequence of gated, kinetic parameter images from a single acquisition, providing information simultaneously about both tracer kinetics and wall motion. To combat the greatly under-determined nature of the problem, we apply smoothness constraints that exploit the similarity both among the different gates and within the local spatial neighborhood. The parameter images of the different gates are then determined jointly using maximum a posteriori estimation from all the available image data.
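The 4D idea of representing each voxel's gated time-activity curve with a few DFT harmonics can be sketched as follows. This is a minimal stand-in that fits the coefficients to a noisy curve by least squares; the dissertation's penalized least-squares and MAP algorithms instead estimate the coefficients from projection data, and all sizes below are illustrative.

```python
import numpy as np

# Represent a voxel's activity over G cardiac gates with a truncated
# real-valued Fourier basis along the gate dimension, then fit the
# basis coefficients by least squares (illustrative stand-in only).

def dft_basis(n_gates, n_harmonics):
    """Constant term plus cos/sin pairs up to n_harmonics."""
    t = np.arange(n_gates) / n_gates
    cols = [np.ones(n_gates)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(2 * np.pi * k * t))
        cols.append(np.sin(2 * np.pi * k * t))
    return np.column_stack(cols)

G = 16
B = dft_basis(G, n_harmonics=2)           # 16 gates, 5 basis functions
rng = np.random.default_rng(0)
true = 10 + 3 * np.cos(2 * np.pi * np.arange(G) / G)   # smooth gated curve
noisy = true + rng.normal(0, 0.5, G)
coef, *_ = np.linalg.lstsq(B, noisy, rcond=None)
smooth = B @ coef                          # regularized gated curve
```

Restricting the curve to a few harmonics suppresses gate-to-gate noise, which is the motivation for the DFT-basis regularization described above.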
Ph.D. in Electrical Engineering, July 2011
- Title
- EFFECT OF TEMPERATURE, MOISTURE CONTENT AND SOIL SURFACE AREA ON THE BIOLOGICAL DEGRADATION OF BIOSOLIDS ADDED TO SOIL
- Creator
- Wen, Dongqi
- Date
- 2016, 2016-05
- Description
-
The objective is to develop and apply an organic matter degradation model (OMDM) that predicts changes in the biological degradation of organic matter added to soils as a function of soil temperature, moisture content and mineral surface area, using data from field and laboratory measurements. A one-year laboratory study was conducted to determine biosolids microbial degradation rates (21 °C and 20% moisture content) for soils from eight different fields. Biosolids microbial degradation rates were also determined for the eight soils under field conditions (10 °C and 10% moisture content) by applying a degradation rate model (DRM). The total mineral surface areas for 27 soil samples were also determined. Regression analysis revealed that the degradation rate was positively associated with mineral soil surface area (R2 = 0.85) and that the biosolids application rate was not significantly associated with the biosolids degradation rate (p-value = 0.35 > 0.01). The OMDM determines a decomposition rate of 1.04 yr-1 at 30 °C, 20% moisture content, and 10 m2/g soil, and provides correction factors for changes in soil temperature, moisture, and mineral surface area. The model predicted changes in the biological degradation rate of biosolids added to soils with mineral surface areas varying from 1 to 10 m2/g soil under laboratory and field conditions. A comparison of the measured and simulated degradation rates for eight different soils produced R2 = 0.87 for field data. The model was also able to predict the degradation rate of biosolids for 19 additional soils under field conditions (R2 = 0.89). The model was further applied to agricultural plant litter; the estimated degradation rates at 30 °C, 20% moisture content and 10 m2/g are 0.45 yr-1 for corn stover, 0.35 yr-1 for oats, 0.26 yr-1 for wheat, and 0.13 yr-1 for millet.
This study demonstrates that environmental conditions and soil mineral surface area have a significant impact on the degradation rates of organic matter and must be considered when assessing degradation rates for organic matter added to soil. Higher temperatures, higher moisture content and finer soils are associated with larger degradation rates.
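The multiplicative structure described above (a reference rate of 1.04 yr-1 at 30 °C, 20% moisture and 10 m2/g, scaled by correction factors) can be sketched as below. The factor forms used here, a Q10 temperature response and linear moisture and surface-area scaling, are illustrative assumptions only, not the OMDM's fitted functions.

```python
# Hypothetical sketch of the multiplicative structure in the abstract:
# reference rate 1.04 / yr at 30 C, 20% moisture, 10 m2/g soil, scaled
# by correction factors. The factor forms (Q10 temperature response,
# linear moisture and surface-area scaling) are illustrative assumptions,
# NOT the OMDM's fitted functions.

def degradation_rate(temp_c, moisture_pct, surface_m2_g,
                     k_ref=1.04, q10=2.0):
    f_temp = q10 ** ((temp_c - 30.0) / 10.0)    # equals 1 at 30 C
    f_moist = moisture_pct / 20.0               # equals 1 at 20% moisture
    f_area = surface_m2_g / 10.0                # equals 1 at 10 m2/g
    return k_ref * f_temp * f_moist * f_area

k_ref_check = degradation_rate(30.0, 20.0, 10.0)   # all factors equal 1
```

At the reference conditions all correction factors are 1 and the rate reduces to 1.04 yr-1; cooler, drier or coarser conditions reduce the predicted rate, matching the trend reported above.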
Ph.D. in Environmental Engineering, May 2016
- Title
- INSTANTANEOUS DETECTION OF SPATIAL GRADIENT ERRORS IN DIFFERENTIAL GNSS
- Creator
- Jing, Jing
- Date
- 2014, 2014-12
- Description
-
Differential GNSS (Global Navigation Satellite System) is currently being extended using a high-integrity Ground Based Augmentation System (GBAS) to enable civil aircraft precision approach and landing. Differential GPS carrier phase measurements between ground-based GBAS reference antennas can provide the means to detect and isolate certain signal-in-space (SIS) failures and anomalies that are hazardous to aircraft, most notably ionospheric anomalies and ephemeris failures, which can be characterized as spatial range-error gradients. In this research, we develop a monitor capable of instantaneously detecting multi-dimensional spatial gradient faults. The existence of large gradients, while rare, has prompted the development of numerous ground monitors for their detection. One drawback of previously proposed monitors is that their performance for a given ground antenna configuration is dependent on how antennas are paired to form measurement differences. In contrast, in this work a new monitor approach is developed to provide consistent detection performance, regardless of how the antennas are paired, by combining measurements from multiple, spatially separated ground antennas through a null space transformation. It is shown that the "null space" monitor significantly improves detection performance over existing fault detection algorithms and enables GBAS to support Category III precision approach and landing. The instantaneous carrier phase monitor cannot detect all gradients due to the presence of integer cycle ambiguities. These ambiguities cannot be resolved because the gradient magnitude is unknown a priori. Furthermore, it has been shown that the performance of such monitors is highly dependent on the reference antenna topology. The range of detectable gradients for all carrier phase monitors depends on two factors: the number of antennas and their configuration.
One can always expand the detection range by using many antennas, but only at greatly increased operational cost. Optimizing the antenna configuration has been overlooked as a means to improve performance; simple, heuristic arguments typically prevail in the associated siting decisions. For example, given four antennas it is generally assumed that they should be sited in a square arrangement to balance performance in all directions. However, such heuristics do not provide the maximum detectable range of gradients, and exploiting the freedom to choose the antenna topology can dramatically expand the detectable range. Due to the presence of carrier phase cycle ambiguities, the resulting optimization problem is a constrained, mixed-integer nonlinear programming problem. By solving the nonlinear program, the optimal GBAS antenna topology that maximizes the range of detectable gradients can be found for any number of antennas. For ionospheric fronts, the magnitude of the resulting gradient is bounded. However, orbit ephemeris faults can be arbitrarily large, and the monitor must be able to detect all large gradients. To detect gradients beyond the capability of the carrier phase monitor, code phase measurements are integrated into the monitor. Although we show this allows for the detection of all gradients, the number and spacing of ground antennas may not be suitable for all ground stations. Therefore, dual-frequency carrier phase measurements are also considered to reduce the number of antennas required to achieve the desired detection performance. Finally, a search algorithm is developed to find the antenna topology that enables the null space monitor to detect all hazardous gradients using code and dual-frequency carrier phase measurements.
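A minimal sketch of the null-space idea, under simplifying assumptions (a common clock-like bias plus a purely linear horizontal gradient, no noise, no cycle ambiguities; the antenna positions are hypothetical): projecting the measurements onto the left null space of the all-ones column cancels the common bias for any antenna layout, with no pairing choice at all.

```python
import numpy as np

# Each antenna's measurement is a common bias b plus the inner product of
# its position with a horizontal gradient g. Rows of N are orthonormal and
# orthogonal to the all-ones vector, so the transform N @ y removes b for
# ANY antenna layout, independent of how antennas might have been paired.

P = np.array([[0.0, 0.0], [200.0, 0.0], [0.0, 200.0], [200.0, 200.0]])
m = P.shape[0]
N = np.linalg.svd(np.ones((1, m)))[2][1:]   # (m-1) x m rows orthogonal to 1
g_true = np.array([2e-3, -1e-3])            # hypothetical spatial gradient
b = 5.0                                     # common clock-like bias
y = b + P @ g_true                          # simulated measurements
z = N @ y                                   # bias cancels: z = (N @ P) @ g
g_hat, *_ = np.linalg.lstsq(N @ P, z, rcond=None)   # recovered gradient
```

In a real monitor the transformed residuals would feed a detection statistic against a threshold; the point of the sketch is only that the bias elimination is a property of the null-space projection, not of any particular antenna pairing.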
Ph.D. in Mechanical and Aerospace Engineering, December 2014
- Title
- COMPUTATION AND ANALYSIS OF TUMOR GROWTH
- Creator
- Turian, Emma
- Date
- 2016, 2016-05
- Description
-
The ability of tumors to metastasize is preceded by morphological instabilities, such as chains or fingers that invade the host environment. Parameters that control tumor morphology may also contribute to its invasive ability. In this thesis, we investigate tumor growth using a two-phase Stokes model. We first examine the morphological changes using the surface energy of the tumor-host interface and investigate its nonlinear dynamics using a boundary integral method. In an effort to understand interface stiffness, we then model the tumor-host interface as an elastic membrane governed by the Helfrich bending energy. Using an energy variation approach, we derive a modified Young-Laplace condition for the stress jump across the interface, and perform a linear stability analysis to evaluate the effects of viscosity, bending rigidity, and apoptosis on tumor morphology. Results show that increased bending rigidity relative to the mitosis rate contributes to more stable growth; on the other hand, increased tumor viscosity or apoptosis may lead to an invasive fingering morphology. Comparison with experimental data on glioblastoma spheroids shows good agreement, especially for tumors with high adhesion and low proliferation. Next, we evaluate tumor regression during cancer therapy under a combined modality involving chemotherapy and radiotherapy. The goal is to address the complexities of a vascular tumor (e.g., apoptosis and vascularization) during treatment. We introduce an apoptotic time delay and study its impact on tumor regression using numerical and asymptotic techniques. In particular, we implement the linear-quadratic model and identify two extreme sets of parameter data, namely the slow and fast tumor responses to therapy. Numerical simulations for the slow-response set show good agreement with data representing non-small cell lung carcinoma.
Using the evolution equation for tumor radius with time delay, we find that tumors with a shorter time interval to the onset of apoptosis shrink faster.
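For reference, the Helfrich bending energy of the tumor-host interface mentioned above has the standard form (notation here is illustrative and not taken from the thesis):

```latex
E_b \;=\; \frac{\kappa}{2} \int_{\Sigma} \left( 2H - c_0 \right)^2 \, \mathrm{d}A ,
```

where \Sigma is the interface, \kappa the bending rigidity, H the mean curvature, and c_0 the spontaneous curvature; the first variation of such an energy contributes a curvature-dependent bending term to the stress-jump (Young-Laplace) condition across the interface.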
Ph.D. in Applied Mathematics, May 2016
- Title
- Large Scale Integration of Sustainable Energy and Congestion Management in Western Interconnection
- Creator
- Aflaki Khosrosha, Kaveh
- Date
- 2012-07-12, 2012-07
- Description
-
Large-scale integration of sustainable energy, such as wind and solar, into the bulk grid can introduce inevitable challenges to regional transmission and generation systems. The most important challenges for the transmission system are congestion management and planning for transmission expansion to transfer the zero-cost generated electricity. Another big challenge is the competition of current fuel-based generation units in the electricity market with zero-cost sustainable energy. In this dissertation, all these challenges are identified and analyzed for a large-scale grid. This thesis presents a new method for studying transmission congestion in the Western Interconnection of the United States. The process involves a Security-Constrained Unit Commitment (SCUC) formulation, whose results are applied to the analysis of transmission congestion. This thesis also presents results and findings from simulations of system operation in the Western Interconnection of the United States with the inclusion of large-scale wind and solar energy integration for the year 2030. High levels of wind and solar energy, with forecasted wind and solar time-series profiles, were integrated into the Western Interconnection grid, and their impact on the different existing types of generation plants is studied. The sensitivity to fuel prices, wind turbine power output, load volatility and demand-side management, as well as a carbon tax, is analyzed in different possible scenarios. In order to incorporate sustainable energy at large scale into a bulk electricity grid footprint, planned transmission expansion was shown to be necessary. Transmission expansion reduces grid congestion and balances Locational Marginal Prices (LMP). This thesis explores advancements in high-performance computing and visual analytics for economic-based transmission expansion in the Western Electricity Coordinating Council (WECC). This expansion is based on 2018 and 2029 forecasted data.
It identifies transmission congestion and the average LMP for each area, and expands the transmission system while accommodating large-scale wind and solar energy to achieve the Department of Energy’s renewable energy vision for the year 2030. An iterative transmission expansion analysis, based on the average LMP for each area, is used to identify the minimum WECC transmission lines required. All results are visualized on a Geographical Information System (GIS) map of North America.
Ph.D. in Electrical Engineering, July 2012
- Title
- AUTOMATIC SUMMARIZATION OF CLINICAL ABSTRACTS FOR EVIDENCE-BASED MEDICINE
- Creator
- Summerscales, Rodney L.
- Date
- 2013, 2013-12
- Description
-
The practice of evidence-based medicine (EBM) encourages health professionals to make informed treatment decisions based on a careful analysis of current research. However, after caring for their patients, medical practitioners have little time to spend reading even a small fraction of the rapidly growing body of medical research literature. As a result, physicians must often rely on potentially outdated knowledge acquired in medical school. Systematic reviews of the literature exist for specific clinical questions, but these must be manually created and updated as new research is published. Abstracts from well-written clinical research papers contain key information regarding the design and results of clinical trials. Unfortunately, the free-text nature of abstracts makes them difficult for computer systems to use and time-consuming for humans to read. I present a software system that reads abstracts from randomized controlled trials, extracts key clinical entities, computes the effectiveness of the proposed interventions and compiles this information into machine-readable and human-readable summaries. This system uses machine learning and natural language processing techniques to extract the key clinical information describing the trial and its results. It extracts the names and sizes of treatment groups, population demographics, the outcomes measured in the trial and the outcome results for each treatment group. Using the extracted outcome measurements, the system calculates key summary measures used by physicians when evaluating the effectiveness of treatments: it computes absolute risk reduction (ARR) and number needed to treat (NNT) values, complete with confidence intervals. The extracted information and computed statistics are automatically compiled into XML and HTML summaries that describe the details and results of the trial. Extracting the information needed to calculate these measures is not trivial.
While there have been various approaches to generating summaries of medical research, that work has mostly focused on extracting trial characteristics (e.g., population demographics and intervention/outcome information). No one has attempted to extract all of the information needed, nor has anyone attempted to solve many of the tasks required to reliably calculate the summary statistics.
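The summary measures named above have standard definitions, sketched here with a Wald-type confidence interval (the function name and the example counts are illustrative, not taken from the described system):

```python
import math

# Standard definitions: ARR = control event rate - experimental event
# rate; NNT = 1 / ARR. A Wald 95% interval uses the binomial variances
# of the two observed rates.

def arr_nnt(events_ctrl, n_ctrl, events_exp, n_exp, z=1.96):
    cer = events_ctrl / n_ctrl              # control event rate
    eer = events_exp / n_exp                # experimental event rate
    arr = cer - eer                         # absolute risk reduction
    se = math.sqrt(cer * (1 - cer) / n_ctrl + eer * (1 - eer) / n_exp)
    ci = (arr - z * se, arr + z * se)
    nnt = 1.0 / arr if arr != 0 else float("inf")
    return arr, ci, nnt

# 20/100 events under control vs 10/100 under treatment:
arr, ci, nnt = arr_nnt(20, 100, 10, 100)
```

For this example, ARR = 0.10 (95% CI roughly 0.002 to 0.198) and NNT = 10, i.e., about ten patients must be treated to prevent one additional event.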
Ph.D. in Computer Science, December 2013
- Title
- COVERAGE AND CONNECTIVITY IN WIRELESS NETWORKS
- Creator
- Xu, Xiaohua
- Date
- 2012-04-25, 2012-05
- Description
-
The limited energy resources, instability, and lack of central control in wireless networks motivate the study of the connected dominating set (CDS), which serves as a routing backbone to support service discovery, area monitoring and broadcasting. The construction of a CDS involves both coverage and connectivity. We first study several problems related to coverage. Given a set of nodes and targets in a plane, the Minimum Wireless Cover (MWC) problem seeks the fewest nodes to cover the targets. If all nodes are associated with positive prices, the Cheapest Wireless Cover (CWC) problem seeks a cheapest set of nodes to cover the targets. If all nodes have bounded lifetimes, the Max-Life Wireless Cover (MLWC) problem seeks a wireless coverage schedule of maximum lifetime subject to the lifetime constraints of individual nodes. We present a polynomial-time approximation scheme (PTAS) for MWC, and two randomized approximation algorithms for CWC and MLWC, respectively. Given a node-weighted graph, the Minimum-Weighted Dominating Set (MWDS) problem is to find a minimum-weighted vertex subset such that any vertex is either contained in this subset or has a neighbor contained in it. We propose a (4+ε)-approximation algorithm for MWDS in unit disk graphs. Meanwhile, for the connectivity part, given a node-weighted connected graph and a subset of terminals, the Node-Weighted Steiner Tree (NWST) problem seeks a lightest tree connecting the given terminals. We present three approximation algorithms for NWST restricted to UDGs. This dissertation also explores the applications of CDS, and develops efficient algorithms for applications such as real-time aggregation scheduling in wireless networks.
Given a set of periodic aggregation queries, each with its own period and a subset of source nodes Si containing the data, we first propose a family of efficient and effective real-time scheduling protocols that can answer every job of each query task within a relative delay under resource constraints, by addressing the following tightly coupled tasks: routing, transmission plan construction, node activity scheduling, and packet scheduling. Based on our protocol design, we further propose schedulability test schemes to efficiently and effectively test whether, for a set of queries, each query job can be finished within a finite delay. We also conduct extensive simulations to validate the proposed protocol and evaluate its practical performance. The simulations corroborate our theoretical analysis.
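For intuition about the coverage problems, here is the classic greedy set-cover baseline (repeatedly pick the node covering the most uncovered targets). This is only the standard logarithmic-factor approximation, not the PTAS or the randomized algorithms of the dissertation, and the node/target data below is made up.

```python
# Classic greedy set cover: repeatedly choose the node that covers the
# largest number of still-uncovered targets. Gives an O(log n)
# approximation for minimum cover problems like MWC.

def greedy_cover(coverage):
    """coverage: dict mapping node -> set of targets it covers."""
    uncovered = set().union(*coverage.values())
    chosen = []
    while uncovered:
        best = max(coverage, key=lambda v: len(coverage[v] & uncovered))
        if not coverage[best] & uncovered:
            break                      # remaining targets are uncoverable
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

nodes = {'a': {1, 2, 3}, 'b': {3, 4}, 'c': {4, 5}, 'd': {1, 5}}
picked = greedy_cover(nodes)           # picks 'a' then 'c'
```

Here the greedy picks 'a' (three targets) and then 'c' (the remaining two), covering all five targets with two nodes.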
Ph.D. in Computer Science, May 2012
- Title
- AN INTEGRATED DATA ACCESS SYSTEM FOR BIG COMPUTING
- Creator
- Yang, Xi
- Date
- 2016, 2016-07
- Description
-
Big data has entered every corner of science and engineering and has become a part of human society. Scientific research and commercial practice increasingly depend on the combined power of high-performance computing (HPC) and high-performance data analytics. Owing to this importance, several commercial computing environments have been developed in recent years to support big data applications. MapReduce is a popular mainstream paradigm for large-scale data analytics, and MapReduce-based data analytic tools commonly rely on underlying MapReduce file systems (MRFS), such as the Hadoop Distributed File System (HDFS), to manage massive amounts of data. At the same time, conventional scientific applications usually run in HPC environments, such as the Message Passing Interface (MPI), and their data are kept in parallel file systems (PFS), such as Lustre and GPFS, for high-speed computing and data consistency. As scientific applications become data-intensive and big data applications become computing-hungry, there is a surging interest in and need for integrating HPC power and data processing power to support HPC on big data, so-called big computing. A fundamental issue of big computing is the integration of data management and interoperability between the conventional HPC ecosystem and the newly emerged data processing/analytics ecosystem. However, data sharing between PFS and MRFS is currently limited, due to semantic mismatches, a lack of communication middleware, and divergent design philosophies and goals. Challenges also exist in cross-platform task scheduling and parallelism. At the application layer, the data-model mismatch between the raw data kept on file systems and an application's data management software impedes cross-platform data processing as well. To support cross-platform integration, we propose and develop the Integrated Data Access System (IDAS) for big computing.
IDAS extends the accessibility of programming models and integrates the HPC environment with the MapReduce/Hadoop data processing environment. Under IDAS, MPI applications and MapReduce applications can share and exchange data under PFS and MRFS transparently and efficiently. Through this sharing and exchange, MPI and MapReduce applications can collaboratively provide both high-performance computing and data processing power for a given application. IDAS achieves its goal in several steps. First, IDAS enhances MPI-IO so that MPI-based applications can access data stored in HDFS efficiently; here, HDFS itself is enhanced to support MPI-based applications. For instance, we have enhanced HDFS to transparently support N-to-1 file writes for better write concurrency. Second, IDAS enhances the Hadoop framework to enable MapReduce-based applications to process data residing on PFS transparently. We have carefully chosen the term "enhance" here: MPI-based applications can not only access data stored on HDFS but also continue to access data stored on PFS, and the same holds for MapReduce-based applications. Through these enhancements, we achieve seamless data sharing. In addition, we have integrated data access with several application tools; in particular, we have integrated image plotting, querying, and data subsetting within one application for Earth Science data analysis. Many data centers prefer erasure coding to triplication to achieve data durability, which trades data availability for lower storage cost. To this end, we have also investigated performance optimization of the erasure-coded Hadoop system, to enhance the Hadoop system within IDAS.
Ph.D. in Computer Science, July 2016
- Title
- DYNAMIC COHERENT ACCEPTABILITY INDICES AND THEIR APPLICATIONS IN FINANCE
- Creator
- Zhang, Zhao
- Date
- 2011-05-02, 2011-05
- Description
-
This thesis presents a unified framework for studying coherent acceptability indices in a dynamic setup. We study dynamic coherent acceptability indices and dynamic coherent risk measures, and in particular establish a duality between them. We derive representation theorems for both dynamic coherent acceptability indices and dynamic coherent risk measures in terms of so-called dynamically consistent sequences of sets of probability measures. In addition, we present an alternative approach to studying dynamic coherent acceptability indices and the representation theorem. Finally, we provide examples and counterexamples of dynamic coherent acceptability indices, and discuss their applications in portfolio management.
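In the static form that the dynamic theory extends, the duality between a coherent acceptability index \alpha and an increasing family of coherent risk measures (\rho_x) can be written as (notation illustrative, not taken from the thesis):

```latex
\alpha(X) \;=\; \sup\{\, x \in \mathbb{R}_+ \,:\, \rho_x(X) \le 0 \,\},
\qquad
\alpha(X) \;=\; \sup\Bigl\{\, x \,:\, \inf_{\mathbb{Q} \in \mathcal{D}_x} \mathbb{E}^{\mathbb{Q}}[X] \ge 0 \Bigr\},
```

where the sets of probability measures \mathcal{D}_x increase with x; the dynamic representation theorems described above replace the family (\mathcal{D}_x) by dynamically consistent sequences of such sets.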
Ph.D. in Applied Mathematics, May 2011
- Title
- ENVIRONMENTAL PERFORMANCE VS. FINANCIAL PERFORMANCE, MARKET INEFFICIENCY AND INVESTMENTS
- Creator
- He, Chaohua
- Date
- 2014, 2014-07
- Description
-
It is challenging to define corporate environmental performance or corporate financial performance. In this study, a company is considered to have good environmental performance (namely, to be green, environment-friendly or environmentally responsible) if it is among the Top 100 of the 500 greenest US companies ranked by Newsweek, or has environmental strengths and no environmental concern in terms of the KLD ratings. A company is regarded as having good financial performance if it has a high raw return, Sharpe ratio, and excess (or abnormal) return over various benchmarks; preference is given to excess return estimated using the Carhart four-factor model [14]. A previously published longitudinal study, co-authored with my advisor [13], reveals that: 1) environmentally responsible companies tend to experience significantly positive abnormal performance over the long horizon (e.g., from the fourth to the seventh year after being selected); and 2) the value-adding effect and the market's upward price adjustments on undervalued intangible environmental strengths might have resulted in the long-term outperformance. Would environmentally responsible companies still outperform over shorter horizons, such as the event period of an environmental disclosure? Using event study methodologies, this paper investigates market responses to independent Newsweek environmental disclosures by analyzing cross-sectional and time-series abnormal security returns. Results suggest that the Top 100 greenest companies tend to display significant abnormal returns within 4 days after a disclosure, and the significant abnormal returns generally persist for no more than 3 trading days; e.g., the Carhart four-factor abnormal return, with statistical significance, averages 0.50% per day over the four disclosure events.
The findings are robust to different models of normal return, removal of outliers, elimination of confounding effects, controlling for characteristic factors, and adjusting for cross-sectional correlation and volatility shift in the test statistics using the BMP adjustment technique [56]. Significant abnormal returns over the event period may indicate inefficiency of the financial market. Fama-MacBeth regressions further reveal that short-horizon abnormal returns could be explained by a spectrum of characteristic variables, green investing, arbitrage trading, and/or various psychological biases. Complementing the cited longitudinal study, a portfolio-level comparison reveals that an actively managed green portfolio outperforms an actively managed non-green portfolio in terms of raw return and risk-adjusted measures such as the Sharpe ratio, Jensen's alpha, and the Fama-French alpha over the long horizon. The results are robust to different portfolio weighting techniques and to the consideration of turnover costs. In addition, the green portfolio's outperformance is driven by a group of small, aggressive, and relatively inactive stocks that perform better than the market predicts. No evidence shows that ever-increasing demand for green securities drives the green portfolio's outperformance, because green stocks are actually less actively traded. Panel regressions further indicate that long-horizon corporate economic performance correlates positively with historical corporate environmental performance.
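The Carhart four-factor model referenced above has a standard textbook form; the exact regression specification used in the dissertation may differ, but the usual estimating equation is:

```latex
R_{i,t} - R_{f,t} = \alpha_i
  + \beta_i \left( R_{m,t} - R_{f,t} \right)
  + s_i\,\mathrm{SMB}_t
  + h_i\,\mathrm{HML}_t
  + m_i\,\mathrm{MOM}_t
  + \varepsilon_{i,t}
```

Here $R_{i,t}$ is the stock return, $R_{f,t}$ the risk-free rate, $R_{m,t}$ the market return, and SMB, HML, and MOM the size, value, and momentum factors; the intercept $\alpha_i$ is the abnormal (excess) return the abstract refers to.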
Ph.D. in Management Science, July 2014
- Title
- QUALITY-OF-SERVICE AWARE SCHEDULING AND DEFECT TOLERANCE IN REAL-TIME EMBEDDED SYSTEMS
- Creator
- Li, Zheng
- Date
- 2015, 2015-05
- Description
-
For real-time embedded systems, such as control systems used in the medical, automotive, and avionics industries, deployed tasks often have stringent real-time, reliability, and energy-consumption constraints. How to schedule real-time tasks under various QoS constraints is a challenging issue that has drawn attention from the research community for decades. In this thesis, we study task execution strategies that not only minimize system energy consumption but also guarantee task deadlines and reliability satisfaction. We first consider the scenario in which all tasks are of the same criticality. For this case, two task execution strategies, i.e., a checkpointing-based and a task re-execution-based strategy, are developed. Second, considering the scenario in which tasks are of different criticalities, a heuristic-search-based energy minimization strategy is also proposed. When tasks are of different criticalities, a commonly used approach to guaranteeing high-criticality task deadlines is to remove low-criticality tasks whenever the system is overloaded. With such an approach, the QoS provided to low-criticality tasks is rather poor: it can cause low-criticality tasks to have a high deadline miss rate and less accumulated execution time. To overcome this shortcoming, we develop a time-reservation-based scheduling algorithm and a two-step optimization algorithm to meet high-criticality task deadlines while minimizing the low-criticality task deadline miss rate and maximizing their accumulated execution time, respectively. As many-core techniques mature, many real-time embedded systems are built upon many-core platforms. However, many-core platforms have a high wear-out failure rate. Hence, the last issue addressed in the thesis is how to replace defective cores on many-core platforms so that the real-time properties of deployed applications can be maintained.
We develop an offline and an online application-aware system reconfiguration strategy to minimize the impact of the physical layer changes on deployed real-time applications. All the developed approaches are evaluated through extensive simulations. The results indicate that the developed approaches are more effective in addressing the identified problems compared to the existing ones in the literature.
Ph.D. in Computer Science, May 2015
- Title
- UNDERSTANDING REACTION MECHANISMS AND CONTROLLING REACTIVE SURFACE SPECIES DURING ATOMIC LAYER DEPOSITION OF METAL CHALCOGENIDES
- Creator
- Weimer, Matthew S.
- Date
- 2016, 2016-07
- Description
-
Atomic layer deposition (ALD) is a thin film growth technique that relies on self-limiting reactions between vapor precursors and a surface. Significant progress has been made in the scope of materials grown by ALD, enabled by precursor development and investigation of surface reaction mechanisms; further progress is still necessary. This dissertation has two portions. The first is the development of organometallic compounds to enable the growth of new materials by ALD. The second is the development of X-ray absorption spectroscopy (XAS) tools and techniques for observing the metal coordination environment during ALD. One class of materials that can be difficult to make is doped materials, whose composition determines the film properties. One example is vanadium-doped indium sulfide. This material was proposed as an intermediate-bandgap solar absorbing material that can absorb multiple ranges of light. Two new precursors were developed for the growth of indium sulfide and vanadium sulfide. The indium sulfide material grown has shown superior film photocurrent qualities, and doping of vanadium into indium sulfide with fine control over atomic placement was achieved. Films were grown that showed secondary absorptions aligned with theoretical calculations. The flexibility of the V(III) precursor is shown by a short study on the control of the difficult vanadium-oxygen system in the deposition of stoichiometric oxides. To illustrate how XAS can be used in conjunction with other techniques, tin dioxide growth from a stannylene precursor and hydrogen peroxide was studied. From ex situ XAS measurements, the nucleation, growth, and termination reaction mechanisms were discovered. Next, a mobile ALD reactor allowed in situ XAS measurements to be performed on “flat” substrates for the first time.
Specifically, the local coordination environment and surface reactions were followed while erbium was doped into alumina, and the coordination environment of manganese was investigated in the growth of manganese-doped zinc oxide, with the ability to probe in-plane versus out-of-plane bonds. These unique in situ XAS experiments allow for a greater understanding of metal and non-metal precursors as they interact with surfaces. This enhanced understanding enables new precursor development, which leads to better control over surface chemistry and new materials.
Ph.D. in Chemistry, July 2016
- Title
- DISTRIBUTED VIDEO CODING FOR RESOURCE CONSTRAINED VIDEO APPLICATIONS
- Creator
- Liu, Wenhui
- Date
- 2014, 2014-05
- Description
-
Video coding technology has played a key role in the explosion of the current multimedia society, with ever-increasing resolution and quality. This success is largely built on the conventional video coding paradigm, in which motion estimation and compensation are performed at the encoder. This asymmetry in complexity is well suited for applications where the video sequence is encoded once and decoded many times. However, some newly emerging applications, such as wireless video surveillance, wireless PC cameras, and multimedia sensor networks, require low-complexity encoding while possibly affording high-complexity decoding. Therefore, a challenging problem that emerges with this new type of visual communication system is how to achieve low-complexity video compression while maintaining good coding efficiency. Distributed video coding (DVC) provides low-complexity encoding solutions for video communication with limited computational power or energy constraints. In DVC, the source video information is independently encoded at lightweight encoders. At the decoder, the statistical dependencies between the received bitstreams are jointly exploited. In this way, motion estimation and its computational complexity are shifted from the encoder to the decoder. However, DVC also has its own restrictions. Low coding efficiency remains a challenging issue for DVC compared to conventional video coding. Although DVC is robust to channel loss due to its intrinsic feature of independent encoders and a joint decoder, its error resiliency for medium to large transmission errors is weak. In this dissertation, the previously proposed low-complexity DVC (LC-DVC) architecture is first introduced. After that, continued work is presented to further improve the quality of the side information (SI).
The proposed method is spatio-temporal joint bilateral upsampling (STJBU) based SI generation, in which the geometric closeness of pixels and their photometric similarity are exploited to reduce noise while preserving edge information. Moreover, a distributed multiple description coding (DMDC) scheme is proposed by combining multiple description (MD) coding with LC-DVC to improve its error resiliency. All the proposed schemes are described in detail, and rate-distortion analyses are presented in this dissertation. All these features make LC-DVC a strong solution for resource-constrained applications.
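The combination of geometric closeness and photometric similarity mentioned above is the standard bilateral weighting; the dissertation's STJBU variant may differ in detail, but the classic per-pixel-pair weight is:

```latex
w(p, q) = \exp\!\left( -\frac{\lVert p - q \rVert^2}{2\sigma_s^2} \right)
          \exp\!\left( -\frac{\lvert I(p) - I(q) \rvert^2}{2\sigma_r^2} \right)
```

The first (spatial) kernel favors nearby pixels $q$ around $p$, while the second (range) kernel suppresses contributions from pixels with very different intensities $I(\cdot)$, which is what preserves edges during the upsampling.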
PH.D in Electrical Engineering, May 2014
- Title
- POWER OPTIMIZATION IN DEEP SUBMICRON VLSI CIRCUITS: FROM SYSTEM LEVEL TO CIRCUIT LEVEL
- Creator
- Tong, Qiang
- Date
- 2017, 2017-07
- Description
-
As VLSI technology advances into the deep sub-micron regime, power consumption has become a critical concern in VLSI circuits. Therefore, power optimization is now mandatory in VLSI design. To reduce power consumption, many techniques have been proposed at various levels of VLSI circuit design: system level, register-transfer level (RTL), and circuit/transistor level. This dissertation starts with a review of system-level power optimization techniques. Experiments on a computer architecture simulation system have been conducted to compare the impact of different programming styles on power consumption at the system level. The results can serve as intuitive guidance for programmers who intend to implement power-aware systems. The second topic in this dissertation is a clustering-based clock gating technique targeting power reduction at the RT level. Clock gating is an effective and popular method to reduce dynamic power in VLSI circuits; it can be applied at both the RT level and the gate level. The basic idea of clock gating is to disable the clock of one or more sequential logic cells (mainly flip-flops) when the input data of the cells do not change. In this dissertation, a clustering-based clock gating technique is proposed; the technique exploits the activity information of each flip-flop and clusters the flip-flops into groups according to their activity correlations. As leakage power has become a major concern in VLSI design, the proposed clustering method is extended down to the gate level, and a clustering-based hybrid clock gating and power gating technique is proposed. The technique can reduce both dynamic power and leakage power in VLSI circuits. As process technology scales down to the deep submicron regime, bulk CMOS technology has encountered many challenges due to the short channel effect (SCE), which degrades the reliability and feasibility of MOSFET devices.
New technologies such as the FinFET and the carbon nanotube FET (CNFET) are two promising substitute solutions for the coming decade to address the SCE issue. Part of this dissertation presents circuit designs using these new process technologies for low-power VLSI circuits. More specifically, two SRAM cell designs using FinFET and CNFET devices are proposed. The new designs can improve performance while reducing power consumption.
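The activity-correlation clustering described above can be sketched in a few lines. This is a hypothetical illustration, not the dissertation's algorithm: the flip-flop names, traces, threshold, and greedy grouping strategy are all illustrative assumptions.

```python
# Hypothetical sketch: group flip-flops whose recorded activity traces are
# strongly correlated, so a single gated clock could serve each group.

def pearson(a, b):
    """Pearson correlation of two equal-length 0/1 activity traces."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5 if va and vb else 0.0

def cluster_by_activity(traces, threshold=0.8):
    """Greedily place each flip-flop into the first cluster whose members
    all correlate with it above the threshold; otherwise open a new cluster."""
    clusters = []
    for name in traces:
        for group in clusters:
            if all(pearson(traces[name], traces[m]) >= threshold for m in group):
                group.append(name)
                break
        else:
            clusters.append([name])
    return clusters

# Four illustrative flip-flops: two toggle together, two in the opposite phase.
traces = {
    "ff0": [1, 0, 1, 0, 1, 0],
    "ff1": [1, 0, 1, 0, 1, 0],
    "ff2": [0, 1, 0, 1, 0, 1],
    "ff3": [0, 1, 0, 1, 0, 1],
}
groups = cluster_by_activity(traces)  # → [["ff0", "ff1"], ["ff2", "ff3"]]
```

Flip-flops in the same group are idle at the same times, so one gating signal per group captures most of the dynamic-power savings with little added control logic.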
Ph.D. in Electrical Engineering, July 2017
- Title
- BIOPHYSICAL AND BIOCHEMICAL STUDY OF NATIVE AND EDITED DYSTROPHIN ROD REGION
- Creator
- Mangat, Khushdeep
- Date
- 2014, 2014-12
- Description
-
Duchenne Muscular Dystrophy (DMD) is a severe X-linked recessive disease affecting 1 in 3,500 boys that is characterized by the degeneration of muscle function and strength. The cause of this disease lies in gene defects that eliminate expression of the protein dystrophin. Becker Muscular Dystrophy (BMD) is a milder form of the disease that has a later onset and much longer survival (up to the 7th decade of life, compared to a median survival of 25 years for DMD patients) because of the presence of low levels of modified dystrophin protein. BMD is very heterogeneous, however, and many cases are nearly as severe as DMD. A major therapy for DMD involves exon skipping, which produces modified forms of dystrophin very similar to those found in BMD. However, how these edits impact the function of dystrophin, and how they are linked to the severity of BMD or the BMD-like state produced by DMD exon skip therapy, is unknown. We investigated this in two cases involving a panel of BMD defects linked to a major cause of death, dilated cardiomyopathy (DCM). We also investigated the contribution of various exons to the interaction with a signaling partner of dystrophin, neuronal nitric oxide synthase (nNOS).
Ph.D. in Biological and Chemical Sciences, December 2014
- Title
- PHYSICS-PRESERVING FINITE DIFFERENCE SCHEMES FOR THE POISSON-NERNST-PLANCK EQUATIONS
- Creator
- Flavell, Allen
- Date
- 2014, 2014-07
- Description
-
The Poisson-Nernst-Planck equations are a system of nonlinear differential equations that describe the flow of charged particles in solution. This dissertation concerns the design of numerical schemes for solving this system while preserving global properties exhibited by the system. Two major advances are presented. The first is the design of schemes that conserve mass globally when the system is coupled with no-flux boundary conditions. Most notably, a scheme using central differencing and TR-BDF2 is presented that achieves second-order accuracy in both space and time while also conserving global mass. The second is the design of a more general scheme that preserves the time-varying properties of the free energy of the system. One such scheme uses central differencing in space and trapezoidal integration in time to achieve second-order accuracy in both space and time while also preserving the energy dynamics, but at the cost of requiring positivity of the solution. There is also a discussion of solution methods: the classic Newton iteration scheme is compared with a modified Gummel iteration scheme for the purpose of solving the transient equations. The intended application of this work is the modeling of ion channels, and many of the simulations presented use parameters consistent with models of ion channels.
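For reference, the Poisson-Nernst-Planck system has a standard non-dimensional form (the dissertation's exact scaling may differ). For ion species $i$ with concentration $c_i$, valence $z_i$, and diffusion coefficient $D_i$, coupled to the electrostatic potential $\phi$:

```latex
\frac{\partial c_i}{\partial t}
  = \nabla \cdot \Big[ D_i \left( \nabla c_i + z_i c_i \nabla \phi \right) \Big],
\qquad
-\nabla \cdot \left( \epsilon \nabla \phi \right) = \sum_i z_i c_i + \rho_0
```

The no-flux boundary conditions mentioned above set the normal component of each flux $D_i(\nabla c_i + z_i c_i \nabla \phi)$ to zero on the boundary, which is why the total mass $\int c_i \, dx$ of each species is conserved and why a mass-conserving discretization is the natural design target.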
Ph.D. in Applied Mathematics, July 2014
- Title
- HYBRID TO SOCIAL CONDENSER: COMPETING APPROACHES TO MIXED-USE DEVELOPMENT
- Creator
- Zagow, Maged
- Date
- 2016, 2016-12
- Description
-
In the last two decades, mixed use has taken center stage in urban planning and development in the United States. The research frequently cites this development model as one that can address a variety of socioeconomic problems. It has also enjoyed a recent surge in popularity in redeveloping cities by providing more affordable housing opportunities, ensuring safety, reducing auto-dependency, and providing a sense of place and community. However, its affordability, physical design, and outcomes are highly variable. This study is particularly interested in whether and how mixed use affects the socioeconomic configuration of the built environment. This study uses multilevel data, from the county level to the zip code level, representing all US neighborhoods. I use different implementation methods of mixed-use development and different cultural and historical backgrounds to examine the data. The study adopts six mixed-use models that present different methodological interactions between socioeconomic spatial metrics and urban forms. These models represent the realistic constraints of urban geometry and of the socioeconomic structure, which comprises the characteristics of race, income, accessibility, safety, adjacency, environment, and density. This study finds that the built environment produces a rich landscape of information that appears to guide the opportunities for facilities. The analysis shows that mixed-use development may have certain effects on the number of facilities, housing, income, diversity, crime rate, employment, health, and environment. The analysis of this research works in two dimensions. The first is urban models (Hybrid and Social Condenser in general, under the two categories Metropolis and Neighborhood Community).
The second dimension comprises urban characteristics (zoning programming, land use mix, street fabric), socioeconomic variables (population density, occupied housing, median age, diversity of race, income, and employment rates), and location variation (states and cities). The results confirm that mixing facilities in Hybrid communities creates more job opportunities but limits housing affordability, social cohesion, and racial diversity, while Social Condenser models show more racial diversity, safety, and a healthier environment. These results suggest that complexity demands more than mixed-use developments, beyond Jane Jacobs' requirements and beyond the designation of selected mixed-use zones. This study contributes to the study of how mixed-use development models shift under various social and economic conditions. The findings from this study can inform architects, investors, policymakers, economists, and planners about factors that sustain mixed-use neighborhoods in the United States and beyond. Urban designers will be able to see how the seemingly routine act of laying out mixed-use development can affect the socioeconomic structure of a city. Thus, this study is a more useful source for accurate planning ideas than generic abstract theories or slogans.
Ph.D. in Architecture, December 2016
- Title
- STUDIES ON SYNTHETIC APPLICATIONS OF STEREOSELECTIVE AND REGIOSELECTIVE RING OPENING REACTIONS OF AZIRIDINIUM IONS
- Creator
- Chen, Yunwei
- Date
- 2014, 2014-12
- Description
-
Aziridinium ions are valuable reactive intermediates in organic synthesis. Regioselective and stereoselective ring opening reactions of aziridinium ions can provide various useful building blocks, including optically active vicinal amines, amino alcohols, and amino esters. Aziridinium ions are also involved in the biological action of anti-cancer agents. However, aziridinium ions are under-utilized in organic synthesis. In this thesis, we utilize stereoselective and regioselective ring opening reactions of aziridinium ions for the synthesis of enantiomerically enriched compounds. Ring opening reactions of aziridinium ions were utilized in intramolecular Friedel-Crafts (FC) reactions for the stereoselective and regioselective synthesis of 4-substituted tetrahydroisoquinolines. A series of β-haloamines were prepared as precursors of aziridinium ions. The reaction conditions for ring opening of aziridinium ions in the FC reactions, including temperature, catalysts, and solvents, were optimized. Further, the reaction mechanism was studied to prove that aziridinium ions are formed as the key intermediates during the intramolecular FC reaction. Intermolecular nucleophilic ring opening of aziridinium ions was studied as a convenient method of carbon-carbon bond formation. Regioselective and stereoselective nucleophilic substitution reactions of aziridinium ions with indole analogues were carried out for the synthesis of optically active tryptamine analogues. The reactions proceeded smoothly to provide the tryptamine analogues in high yield in the presence of halo-sequestering agents, while the reaction provided the tryptamine products in significantly lower yield in the absence of halo-sequestering agents. Ring opening reactions of aziridinium ions with malonic esters and Grignard reagents were carried out for the respective synthesis of optically active tryptamine analogues, γ-aminobutyric acid (GABA), and α-amine derivatives.
The regiospecific ring opening reactions of aziridinium ions were directly applied to the synthesis of bifunctional ligands with potential use in targeted therapy and imaging of cancers. The novel bifunctional chelates with a shorter alkyl spacer, C-NETA and 2E-C-NETA, as well as the chelate with a longer alkyl spacer, 5p-C-NETA, were prepared. 5p-C-NETA was conjugated to a cyclic peptide, c(RGDyK), as a targeting moiety for use in targeted radiation therapy. In addition, 2E-C-NETA was conjugated to the fluorescent dye Cy5.5 for theranostic applications. The experimental results indicated that the new bifunctional ligands have promising applications in the biomedical field. In summary, stereoselective and regioselective ring opening reactions of aziridinium ions have been successfully applied to the synthesis of optically active compounds such as 4-substituted tetrahydroisoquinolines, tryptamines, γ-aminobutyric acid, α-amine derivatives, and the bifunctional chelators. We demonstrated that ring opening of versatile aziridinium intermediates is a straightforward and convenient method for the synthesis of various optically active compounds.
Ph.D. in Chemistry, December 2014