Search results
(9,101 - 9,120 of 10,083)
Pages
- Title
- Choice-Distinguishing Colorings of Cartesian Products of Graphs
- Creator
- Tomlins, Christian James
- Date
- 2022
- Description
- A coloring $f: V(G)\rightarrow \mathbb N$ of a graph $G$ is said to be \emph{distinguishing} if no non-identity automorphism preserves every vertex color. The distinguishing number $D(G)$ of a graph $G$, introduced by Albertson and Collins in their paper ``Symmetry Breaking in Graphs,'' is the smallest positive integer $k$ such that there exists a distinguishing coloring $f: V(G)\rightarrow [k]$. By restricting which kinds of colorings are considered, many variations of the distinguishing number have been studied. In this paper, we study proper list-colorings of graphs that are also distinguishing and investigate the choice-distinguishing number $\text{ch}_D(G)$ of a graph $G$. Primarily, we focus on the choice-distinguishing number of Cartesian products of graphs. We determine the exact value of $\text{ch}_D(G)$ for lattice graphs and prism graphs and provide an upper bound on the choice-distinguishing number of the Cartesian product of two relatively prime graphs, assuming a sufficient condition is satisfied. We use this result to bound the choice-distinguishing number of toroidal grids and of the Cartesian product of a tree with a clique. We conclude with a discussion of how, depending on the graphs $G$ and $H$, the sufficient condition needed to bound $\text{ch}_D(G\square H)$ may be weakened.
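For the abstract above, a small worked example (not taken from the thesis) may help fix the definition of $D(G)$:

```latex
% Illustrative example, not from the thesis itself.
% Path P_4 on vertices v_1 v_2 v_3 v_4: Aut(P_4) = {identity, reversal}.
% The 2-coloring below is destroyed by the reversal, so it is distinguishing
% and D(P_4) = 2; by contrast, every vertex of K_n must receive its own color,
% since any transposition of two same-colored vertices is an automorphism, so D(K_n) = n.
f(v_1) = 1,\quad f(v_2) = 1,\quad f(v_3) = 1,\quad f(v_4) = 2
\;\Rightarrow\; D(P_4) = 2, \qquad D(K_n) = n.
```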
- Title
- Development of validation guidelines for high pressure processing to inactivate pressure resistant and matrix-adapted Escherichia coli O157:H7, Salmonella spp. and Listeria monocytogenes in treated juices
- Creator
- Rolfe, Catherine
- Date
- 2020
- Description
- The fruit and vegetable juice industry has shown a growing trend in minimally processed juices. A frequent technology used in the functional juice division is cold pressure, which refers to the application of high pressure processing (HPP) at low temperatures as a mild treatment to inactivate foodborne pathogens instead of thermal pasteurization. HPP juice manufacturers are required to demonstrate a 5-log reduction of the pertinent microorganism to comply with FDA Juice HACCP. The effectiveness of HPP for pathogen inactivation depends on processing parameters, juice composition, and packaging application, as well as the bacterial strains included in validation studies. Unlike thermal pasteurization, there is currently no consensus on validation study approaches for bacterial strain selection or preparation and no agreement on which HPP process parameters contribute to overall process efficacy. The purpose of this study was to develop validation guidelines for HPP inactivation and post-HPP recovery of pressure-resistant and matrix-adapted Escherichia coli O157:H7, Salmonella spp., and Listeria monocytogenes in juice systems. Ten strains of each microorganism were prepared in three growth conditions (neutral, cold-adapted, or acid-adapted) and assessed for barotolerance or sensitivity. Pressure-resistant and pressure-sensitive strains from each were used to evaluate HPP inactivation at increasing pressure levels (200–600 MPa) in two juice matrices (apple and orange). A 75-day shelf-life analysis was conducted on HPP-treated juices inoculated with acid-adapted resistant strains for each pathogen and examined for inactivation and recovery. Individual strains of E. coli O157:H7, Salmonella spp., and L. monocytogenes demonstrated significant (p < 0.05) differences in reduction levels in response to pressure treatment in high acid environments. E. coli O157:H7 was the most barotolerant of the three microorganisms in multiple matrices. Bacterial screening resulted in identification of the pressure-resistant strains E. coli O157:H7 TW14359, Salmonella Cubana, and L. monocytogenes MAD328, and the pressure-sensitive strains E. coli O157:H7 SEA13B88, S. Anatum, and L. monocytogenes CDC. HPP inactivation in juice matrices (apple and orange) confirmed acid adaptation as the most advantageous of the growth conditions. Shelf-life analyses reached the required 5-log reduction in HPP-treated juices immediately following pressure treatment, after 24 h in cold storage, and after 4 days of cold storage for L. monocytogenes MAD328, S. Cubana, and E. coli O157:H7 TW14359, respectively. Recovery of L. monocytogenes in orange juice was observed with prolonged cold storage time. These results suggest that the preferred inoculum preparation for HPP validation studies is the use of acid-adapted, pressure-resistant strains. At 586–600 MPa, critical inactivation (5-log reduction) was achieved during post-HPP cold storage, suggesting that sufficient HPP lethality is reached at elevated pressure levels with a subsequent cold holding period.
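For reference, the 5-log reduction criterion in the abstract above reduces to a simple ratio; the starting count below is an arbitrary example, not data from the study:

```latex
\text{log reduction} = \log_{10}\!\left(\frac{N_0}{N}\right), \qquad
\text{5-log} \;\Rightarrow\; \frac{N}{N_0} = 10^{-5},
\quad \text{e.g. } 10^{7}\ \text{CFU/mL} \;\rightarrow\; 10^{2}\ \text{CFU/mL}.
```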
- Title
- Modeling, Analysis and Computation of Tumor Growth
- Creator
- Lu, Min-Jhe
- Date
- 2022
- Description
- In this thesis we investigate the modeling, analysis and computation of tumor growth. The sharp interface model we consider is aimed at understanding how the two key factors of (1) the mechanical interaction between the tumor cells and their surroundings and (2) the biochemical reactions in the microenvironment of tumor cells can influence the dynamics of tumor growth. From this general model we give its energy formulation and solve it numerically using boundary integral methods and the small-scale decomposition under three different scenarios. The first application is the two-phase Stokes model, in which tumor cells and the extracellular matrix are both assumed to behave like viscous fluids. We compared the effect of membrane elasticity on the tumor interface with that of curvature weakening and found that the latter promotes the development of branching patterns. The second application is the two-phase nutrient model under complex far-field geometries, which represent the heterogeneous vascular distribution. Our nonlinear simulations reveal that vascular heterogeneity plays an important role in the development of morphological instabilities that range from fingering and chain-like morphologies to compact, plate-like shapes in two dimensions. The third application concerns the effect of angiogenesis, chemotaxis and the control of necrosis. Our nonlinear simulations reveal the stabilizing effects of angiogenesis and the destabilizing ones of chemotaxis and necrosis in the development of tumor morphological instabilities if the necrotic core is fixed. We also perform a bifurcation analysis for this model. In the end, as future work, we propose new models through the Energetic Variational Approach (EnVarA) to shed light on the modeling issues.
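For orientation, sharp-interface tumor models of the kind discussed above are commonly built from a quasi-steady nutrient field coupled to Darcy-type mechanics; the system below is a representative baseline from this literature, not necessarily the exact model solved in the thesis:

```latex
% Nondimensional nutrient sigma with uptake, Darcy velocity u = -grad p whose
% divergence is set by a sigma-dependent net proliferation rate lambda(sigma),
% Young-Laplace condition on the interface Sigma, and kinematic interface motion:
\Delta \sigma = \sigma \ \ \text{in } \Omega(t), \qquad
\mathbf{u} = -\nabla p, \quad \nabla \cdot \mathbf{u} = \lambda(\sigma) \ \ \text{in } \Omega(t),
\qquad
p\big|_{\Sigma} = \gamma \kappa, \qquad V = \mathbf{u} \cdot \mathbf{n}\big|_{\Sigma}.
```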
- Title
- DO GENERAL EDUCATION HIGH SCHOOL STUDENTS IN A BASIC PHYSICAL SCIENCE COURSE IMPROVE UPON ATTITUDES TOWARD SCIENCE LEARNING AND CONTENT MASTERY FOLLOWING VIRTUAL/REMOTE FLIPPED INSTRUCTION OR VIRTUAL/REMOTE NON-FLIPPED INQUIRY-BASED INSTRUCTION?
- Creator
- Martino, Robert S.
- Date
- 2022
- Description
- As we progress further into the 21st century, high school science is being challenged on how to best deliver instruction to students. Teacher-centered instruction has long been de-emphasized in favor of inquiry-based instruction, although teacher-centered instruction still exists to a noticeable extent. Inquiry-based instruction, while more student-centered in its common practice, still involves the teacher as a guide during classroom direct instruction. Research has been ongoing to identify new and dynamic forms of science concept delivery that serve the needs of diversified science instruction (Keys & Bryan, 2001; Saldanha, 2007). Virtual instruction has become more commonplace, and it was fully implemented during this study. It has become incumbent upon science education researchers to explore and identify the most effective means of virtual instruction, means that are student-centered, engaging, interesting, and that both improve student science content understanding and attitudes toward science. Flipped instruction is a more recently incorporated form of student-centered instruction that has students experiencing classroom routines at home and homework routines in class, which is why this instruction is referred to as being "flipped." Hunley (2016) examined teacher and student perception of flipped instruction in a science classroom, while Howell (2013) explored it in a ninth-grade physical science honors classroom. At the onset of this study, relatively few studies were available about this newer form of instruction within high school science, no studies were available that involved high school general education physical science courses, and certainly no studies were available that compared virtual flipped and non-flipped general education physical science instruction. This study researched the effect of virtually implemented flipped instruction on high school students' understanding of and attitude toward science. Instruction was completely virtual/remote (online), and at home, for all students in this study. In investigating the effect of this type of instruction, this study examined student academic performance and attitudes (and intentions and beliefs) toward science in two units of a high school Integrated Chemistry and Physics (Physical Science) course. Sixty-six students from Southlake High School, a midwestern U.S. high school, took part in the study. Sixty-four of those students took the unit assessments. Half of the students (test group) were instructed via virtual/remote flipped instruction and the other half (control group) were instructed via virtual/remote non-flipped, inquiry-based instruction during the first unit. During the second unit, the test group students who had received virtual/remote flipped instruction switched with the control group and were instructed via virtual/remote non-flipped, inquiry-based instruction, while the control group students who had received virtual/remote non-flipped instruction were instructed via virtual/remote flipped instruction. The students in both groups were surveyed three times, using the Behaviors, Related Attitudes, and Intentions Toward Science (BRAINS; Summers, 2016) survey instrument, for their attitudes (and beliefs and intentions) toward science (once prior to the first unit, once after the first unit, and once following the second unit).
Student test results and survey responses were then analyzed to identify which instructional style was more effective for student learning and whether student attitudes (and intentions and beliefs) favored one instructional style over the other. Student science attitudes (and beliefs and intentions) and academic performance were evaluated throughout the study. There was an increase in control group student science attitudes (and beliefs and intentions) from the pre-study survey to the post-unit 1 survey following their receipt of non-flipped virtual/remote instruction in the first unit. There was a smaller increase in test group student science attitudes (and beliefs and intentions), from lower pre-study attitudes (compared with the control group), following the test group's receipt of flipped virtual/remote instruction in the first unit. Following the second unit, both the control group and the test group again showed increases in attitude (and beliefs and intentions) compared with the pre-study survey results, with the control group again showing greater increases than the test group. Student academic performance favored the control group, which outperformed the test group in both the first unit and the second unit, even when the test group received the virtually delivered flipped instruction in the first unit. The findings of the study showed that virtually implemented flipped instruction resulted in no advantage for the test group in terms of greater improvement in attitudes (or beliefs or intentions) toward science and no advantage for the test group in terms of learning science content in general education Integrated Chemistry and Physics (Physical Science). These results indicate that this form of teaching may not be effective in improving general education Physical Science student learning and student attitudes (and beliefs and intentions) toward science. Therefore, the use of virtually implemented flipped instruction in this general education science course will need to be further studied to determine its effect on student learning and student attitudes (or even beliefs and intentions) toward science.
- Title
- Resilience Enhancement of Critical Cyber-Physical Systems with Advanced Network Control
- Creator
- Liu, Xin
- Date
- 2020
- Description
- Critical infrastructures are the systems whose failures would have a debilitating impact on national security, economics, public health or safety, or any combination of those matters. It is important to improve those systems' resilience, which is the ability to reduce the magnitude and/or duration of disruptive events. However, today's critical infrastructures, such as electrical power systems and transportation systems, are deploying advanced control applications with increasing scale and complexity, which leads to the migration of their underlying communication infrastructures from simple and proprietary networks to off-the-shelf network technologies (e.g., IP-based protocols and standards) to handle the intensive and heterogeneous traffic flows. On one hand, this migration provides an opportunity for both the academic and industry communities to develop novel ideas on top of existing schemes; on the other hand, it exposes more vulnerabilities to cyber-attacks. Moreover, since a large-scale power system may use leased networks from Internet service providers (a critical infrastructure itself), there exists an interdependency relationship between power and communication infrastructures, where power transmission control requires message delivery services while the network devices rely on the power supply. These problems raise research challenges for improving the system resilience of critical cyber-physical systems. In this thesis, we focus on resilience enhancement of critical infrastructures from the communication network's perspective. The application domain includes both power and transportation systems. For power systems, we first apply advanced network control techniques (i.e., software-defined networking (SDN) and the fibbing control scheme) in the transmission grid communication network to improve the grid status restoration process under network failures and cyber-attacks. We develop a unified system model that contains both the transmission grid monitoring system (i.e., the phasor measurement unit (PMU) network) and the communication network, and formalize a mixed-integer linear programming (MILP) problem to minimize the recovery time of system observability under power and communication domain constraints. We evaluate the system performance regarding recovery plan generation and installation using IEEE standard systems. However, the advanced network-based control scheme could also lead to problems, since it requires a power supply for the network devices. Thus, we investigate the interdependency relationship between the power grid and the communication network and its impact on system resilience. We conduct a survey that summarizes existing research along two dimensions: objectives (i.e., failure analysis, vulnerability analysis, failure mitigation, and failure recovery) and methodologies (i.e., analytical solutions, co-simulation, and empirical studies). We also identify the limitations of existing works and propose potential research opportunities in this demanding area. Lastly, based on the review work, we conduct research that focuses on fast power distribution system restoration that involves interdependency constraints. When a natural disaster happens, both power and communication components might be damaged. Furthermore, since they depend on each other's services to function correctly, the failures may propagate to hardware/software that was not initially affected.
In this work, we focus on the recovery stage, where the failed components in the system are already fully detected and isolated. We construct a mathematical model of the co-existing power and communication system and use optimization techniques to produce a crew dispatch plan that restores power as fast as possible by coordinating damage repair, switch operation, and communication supply processes. We evaluate the restoration efficiency on an IEEE standard system using both analytical analysis and discrete-event simulation. For the second application domain, the railway transportation system, we focus on evaluating the resilience of its communication system, which exchanges control and monitoring messages with both the on-board driver cabin and the remote control center. We use advanced discrete-event simulation techniques to achieve a high-fidelity model of the network, which makes the evaluation more concrete and realistic. For the Ethernet-based on-board train communication network (TCN), we develop a parallel simulation platform according to the IEC standard and use it to conduct a case study of a double-tagging VLAN attack on this control network. Another component of the railway communication system is the train-to-ground network, which enables communication between the driving system on the train and the control center that issues commands such as movement authority messages. We customize the NS3 network simulator to model the LTE-based protocol with a real high-speed train trace dataset from public sources. We evaluate the resilience of the cellular network specifically during the handover process, which happens when the train travels from one base station to another. Due to the high speed, the handover success rate is impacted, and many protocol-based solutions have been proposed in this research area. We use the high-fidelity simulation model to evaluate some of them and compare their pros and cons.
- Title
- Efficient and Practical Cluster Scheduling for High Performance Computing
- Creator
- Li, Boyang
- Date
- 2023
- Description
- Cluster scheduling plays a crucial role in the high-performance computing (HPC) area. It is responsible for allocating resources and determining the order in which jobs are executed. Existing HPC job schedulers typically leverage simple heuristics to schedule jobs, but such scheduling policies struggle to keep pace with modern changes and technology trends. The study of this dissertation is motivated by two new trends in the HPC community: the rapid growth of heterogeneous system infrastructure and the emergence of artificial intelligence (AI) technologies. First, existing scheduling policies are solely CPU-centric. In contrast, systems are becoming more complex and heterogeneous, and emerging workloads have diverse resource requirements, such as CPU, burst buffer, power, network bandwidth, and so on. Second, previous heuristic scheduling approaches are manually designed. Such a manual design process prevents adaptive and informative scheduling decisions. A recent trend in HPC is to integrate AI to better leverage the investment in supercomputers. This embrace of AI provides opportunities to design more intelligent scheduling methods. In this dissertation, we propose an efficient and practical cluster scheduling framework for HPC systems. Our framework leverages AI technologies and considers system heterogeneity. The framework comprises four major components. First, shared network systems such as dragonfly-based systems are vulnerable to performance variability due to network sharing. To mitigate workload interference on these shared network systems, we explore a dedicated scheduling policy. Next, emerging workloads in HPC have diverse resource requirements instead of being CPU-centric. To address this, we design an intelligent scheduling agent for multi-resource scheduling in HPC leveraging an advanced multi-objective reinforcement learning (MORL) algorithm. Subsequently, we address the issues with existing state encoding approaches in RL-driven scheduling, which either lack critical scheduling information or suffer from poor scalability. To this end, we present an efficient and scalable encoding model. Lastly, the lack of interpretability of RL methods poses a significant challenge to deploying RL-driven scheduling in production systems. In response, we provide a simple, deterministic, and easily understandable model for interpreting RL-driven scheduling. The proposed models and algorithms are evaluated with real job traces from production supercomputers. Experimental results show our schemes can effectively improve job scheduling in terms of both user satisfaction and system utilization.
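To make the multi-resource scheduling problem above concrete, here is a toy sketch of the kind of feasibility check such a scheduler must perform on every decision; the field names and the longest-waiting-first rule are hypothetical placeholders, not the dissertation's RL-based policy:

```python
# Toy multi-resource fit check: pick the longest-waiting job that fits the
# currently free CPU nodes, burst buffer, and power budget.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    nodes: int          # CPU nodes requested
    burst_buffer: int   # GB of burst buffer requested
    power: float        # estimated kW
    wait_time: float    # seconds spent in the queue

def pick_next(queue, free_nodes, free_bb, power_budget):
    """Return the waiting job that fits all resource limits and has waited longest."""
    feasible = [j for j in queue
                if j.nodes <= free_nodes
                and j.burst_buffer <= free_bb
                and j.power <= power_budget]
    return max(feasible, key=lambda j: j.wait_time, default=None)

queue = [Job("A", 128, 200, 40.0, 3600), Job("B", 64, 50, 25.0, 7200)]
print(pick_next(queue, free_nodes=100, free_bb=100, power_budget=30.0).name)  # -> "B"
```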
- Title
- Data-Driven Modeling for Advancing Near-Optimal Control of Water-Cooled Chillers
- Creator
- Salimian Rizi, Behzad
- Date
- 2023
- Description
- Hydronic heating and cooling systems are among the most common types of heating and cooling systems installed in older existing buildings, especially commercial buildings. The results of this study, based on the Commercial Building Energy Consumption Survey (CBECS), indicate that chillers account for providing cooling in more than half of the commercial office building floorspace in the U.S. Therefore, to address the need to improve the energy efficiency of chiller system operation, research studies have developed different models to investigate different chiller sequencing approaches. Engineering-based models and empirical models are among the popular approaches for developing prediction models. Engineering-based models utilize physical principles to calculate the thermal dynamics and energy behaviors of the systems and require detailed system information, while empirical models deploy machine learning algorithms to develop relationships between input and output data. Compared to the engineering-based approach, empirical models are more practical for a system's energy prediction because of the accessibility of the required data and their advantages in model implementation and prediction accuracy. Moreover, selecting accurate chiller prediction models for chiller sequencing requires considering the importance of each input variable and its contribution to the overall performance of a chiller system, as well as the ease of application and computational time. Among the empirical modeling methods, ensemble learning techniques overcome the instability of the learning algorithm as well as improve prediction accuracy and identify input variable importance. Ensemble models combine multiple individual models, often called base or weak models, to produce a more accurate and robust predictive model. Random Forest (RF) and Extreme Gradient Boosting (XGBoost) models are ensemble models that offer built-in mechanisms for assessing feature importance. These techniques work by measuring how much each feature contributes to the overall predictive performance of the ensemble. In the first objective of this work, the frequency of hydronic cooling systems in the U.S. building stock is explored for applying potential energy efficiency measures (EEMs) to chiller plants. Results show that central chillers inside buildings are responsible for providing cooling for more than 50% of the commercial buildings with areas greater than 9,000 m2 (~100,000 ft2). In addition, hydronic cooling systems contribute to the highest Energy Use Intensity (EUI) among other systems, with an EUI of 410.0 kWh/m2 (130.0 kBtu/ft2). Therefore, the results of this objective support developing accurate prediction models to assess chiller performance parameters as an implication for chiller sequencing control strategies in older existing buildings. The second objective of the dissertation is to evaluate the performance of a chiller sequencing strategy for the existing water-cooled chiller plant in a high-rise commercial building and to develop highly accurate RF chiller models to investigate and determine the input variables of greatest importance to chiller power consumption predictions. The results show that the average values of the mean absolute percentage error (MAPE) and root mean squared error (RMSE) for all three RF chiller models are 5.3% and 30 kW, respectively, for the validation dataset, which confirms good agreement between measured and predicted values.
On the other hand, understanding prediction uncertainty is an important task for confidently reporting smaller savings estimates for different chiller sequencing control strategies. This study aims to quantify prediction uncertainty as a percentile for selecting an appropriate confidence level for the chiller models, which could lead to better prediction of the peak electricity load and more efficient participation in demand response programs. The results show that by increasing the confidence level from 80% to 90%, the upper and lower bounds of the demand charge differ from the actual value by factors of 3.3 and 1.7, respectively. This demonstrates the significance of selecting appropriate confidence levels for implementation of chiller sequencing strategies and demand response programs in commercial buildings. As the third objective of this study, the accuracy of these prediction models was investigated with respect to preprocessing, data selection, noise, and the effect of chiller control system performance on the recorded data. This study therefore investigates the impacts of different data resolutions, noise levels, and data smoothing methods on chiller power consumption and chiller COP prediction based on time-series XGBoost models. The results of applying the smoothing methods indicate that the performance of the chiller COP and chiller power consumption models improved by 2.8% and 4.8%, respectively. Overall, this study can guide the development of data-driven chiller power consumption and chiller COP prediction models in practice.
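A minimal sketch of the kind of Random Forest chiller-power model and error metrics described above, using scikit-learn; the input columns and the synthetic data are hypothetical placeholders, not the dissertation's dataset or tuned hyperparameters:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0.2, 1.0, n),    # part-load ratio
    rng.uniform(5.0, 9.0, n),    # chilled-water supply temperature (C)
    rng.uniform(20.0, 32.0, n),  # condenser-water return temperature (C)
])
y = 300 * X[:, 0] + 8 * X[:, 2] - 5 * X[:, 1] + rng.normal(0, 5, n)  # chiller power, kW (toy)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_va)

print("MAPE %.1f%%" % (100 * mean_absolute_percentage_error(y_va, pred)))
print("RMSE %.1f kW" % np.sqrt(mean_squared_error(y_va, pred)))
print("feature importances:", model.feature_importances_)  # built-in importance measure
```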
- Title
- A Kernel-Free Boundary Integral Method for Two-Dimensional Magnetostatics Analysis
- Creator
- Jin, Zichao
- Date
- 2023
- Description
- Performing magnetostatic analysis accurately and efficiently is crucial for the multi-objective optimization of electromagnetic device designs. Therefore, an accurate and computationally efficient method is essential. The kernel-free boundary integral method (KFBIM) is a numerical method that can accurately and efficiently solve partial differential equations. Unlike traditional boundary integral or boundary element methods, KFBIM does not require an analytical form of Green's function for evaluating integrals via numerical quadrature. Instead, KFBIM computes integrals by solving an equivalent interface problem on a Cartesian mesh. Compared with traditional finite difference methods for solving the governing PDEs directly, KFBIM produces a well-conditioned linear system. Therefore, the numerical solution of KFBIM is not sensitive to computer round-off errors, and KFBIM requires only a fixed number of iterations when an iterative method (e.g., GMRES) is applied to solve the linear system. In this research, KFBIM is introduced for magnetic field computations in a 2D toroidal core geometry. This study is very relevant to designing and optimizing toroidal inductors or transformers used in electrical systems, where lighter weight, higher inductance, higher efficiency, and lower leakage flux are required. The results are then compared with a commercial finite element solver (ANSYS), which shows excellent agreement. It should be noted that, compared with FEM, KFBIM does not require a body-fitted mesh and can achieve high accuracy with a coarse mesh. In particular, the magnetic potential and tangential field intensity calculations on the boundaries are more stable and exhibit almost no oscillations. Furthermore, although KFBIM is accurate and computationally efficient, sharp corners can be a significant problem for KFBIM. Therefore, an inverse discrete Fourier transform (DFT) based geometry reconstruction is explored to overcome this challenge by smoothing sharp corners. A toroidal core with an air gap (C-core) is modeled to show the effectiveness of the proposed approach in addressing the sharp corner problem. A numerical example demonstrates that the method works for variable coefficient PDEs. In addition, magnetostatic analysis for homogeneous and nonhomogeneous materials is presented for the reconstructed geometry, and results obtained from KFBIM are compared with the results of FEM analysis for the original geometry to show the differences and the potential of the proposed method.
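For orientation, the 2D magnetostatic problem such a solver targets is usually posed in terms of the out-of-plane magnetic vector potential; this is the standard textbook formulation, not a statement of the thesis's exact model:

```latex
% Ampere's law with B = mu H and B = curl A reduces, in 2D (A = A_z e_z),
% to a scalar divergence-form equation for the out-of-plane potential A_z:
\nabla \times \mathbf{H} = \mathbf{J}, \qquad \mathbf{B} = \mu \mathbf{H}, \qquad
\mathbf{B} = \nabla \times \mathbf{A}
\;\;\Longrightarrow\;\;
\nabla \cdot \left( \tfrac{1}{\mu}\, \nabla A_z \right) = -\, J_z .
```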
- Title
- Investigation in the Uncertainty of Chassis Dynamometer Testing for the Energy Characterization of Conventional, Electric and Automated Vehicles
- Creator
- Di Russo, Miriam
- Date
- 2023
- Description
- For conventional and electric vehicles tested in a standard chassis dynamometer environment, precise regulations exist for the evaluation of their energy performance. However, the regulations do not include requirements on the confidence value to associate with the results. As vehicles become more and more efficient to meet stricter regulatory mandates on emissions, fuel and energy consumption, traditional testing methods may become insufficient to validate these improvements and may need revision. Without information about the accuracy associated with the results of those procedures, however, adjustments and improvements are not possible, since no frame of reference exists. For connected and automated vehicles, there are no standard testing procedures, and researchers are still in the process of determining whether current evaluation methods can be extended to test intelligent technologies and which metrics best represent their performance. For these vehicles it is even more important to determine the uncertainty associated with these experimental methods and how it propagates to the final results. The work presented in this dissertation focuses on the development of a systematic framework for the evaluation of the uncertainty associated with the energy performance of conventional, electric and automated vehicles. The framework is based on a known statistical method to determine the uncertainty associated with the different stages and processes involved in the experimental testing, and to evaluate how the accuracy of each parameter involved impacts the final results. The results demonstrate that the framework can be successfully applied to existing testing methods, provides a trustworthy value of accuracy to associate with the energy performance results, and can easily be extended to connected and automated vehicle testing to evaluate how novel experimental methods impact the accuracy and confidence of the outputs. The framework can easily be implemented in an existing laboratory environment to incorporate the uncertainty evaluation among the current results analyzed at the end of each test, and provides a reference for researchers to evaluate the actual benefits of new algorithms and optimization methods and understand margins for improvement, and for regulators to assess which parameters to enforce to ensure compliance and deliver the projected benefits.
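A minimal Monte Carlo sketch of the general kind of uncertainty propagation the framework above addresses; the nominal values and instrument uncertainty levels below are made up for illustration, not taken from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Nominal measurements with assumed (hypothetical) 1-sigma instrument uncertainties.
energy_Wh   = rng.normal(3200.0, 3200.0 * 0.005, N)   # DC energy over the cycle, 0.5%
distance_km = rng.normal(17.8, 17.8 * 0.002, N)        # dynamometer distance, 0.2%

consumption = energy_Wh / distance_km                   # Wh/km
mean = consumption.mean()
ci95 = 1.96 * consumption.std(ddof=1)

print(f"energy consumption: {mean:.1f} Wh/km +/- {ci95:.1f} (95% confidence)")
```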
- Title
- CHARACTERIZATION OF COAGULATION AND MUSCLE ATTACHMENT MUTATIONS IN DROSOPHILA MELANOGASTER LARVAE AND THE GENERATION OF A NOVEL TRANSGLUTAMINASE LOSS OF FUNCTION MUTANT LINE
- Creator
- Schubert, Nina H
- Date
- 2014, 2014-05
- Description
- It is known that the protein Fondue (Fon) is involved in clotting in Drosophila melanogaster. Consequently, Fon was studied to characterize its role in the clot. During this study, it was found that Fon and the protein Tiggerin have the same expression pattern. Additionally, mutants for both proteins develop into long, thin pupae (as compared to wild type). Due to this similarity in mutant phenotype and protein localization, in combination with literature citing Tiggerin as being involved in both the clot and muscle attachment, Fon was examined for a muscle attachment phenotype. Furthermore, whilst we do know that Tiggerin is involved in the clot as well as muscle attachment, its specific role in coagulation and its muscle attachment phenotype have not been previously characterized. Due to the pleiotropic roles of Fon and Tiggerin in muscle attachment and clotting, the long, thin pupal phenotype was used as an indicator of potential clotting mutants in a search at the Bloomington stock center. There, 722 lines were found to have long, thin pupae, and from these the 9 lines with the strongest mutant phenotypes were selected to test for coagulation and/or muscle attachment abnormalities. Our collaborator took confocal micrographs of third instar larval fillets obtained from these lines as a means of visualizing actin structure within the muscles. From these images, it was found that the Cchl mutant line had the most severely detached muscles and, consequently, this line was selected as the best candidate for a clotting and muscle attachment mutation and was subjected to further study. The role of a third protein, transglutaminase (TG), which is also involved in clotting, has not been fully characterized due to the lack of a null line. To address this deficiency and fully characterize the role of TG in coagulation in Drosophila, a line with an inactivated TG gene was used as a starting line to generate a null line.
M.S. in Biology, May 2014
- Title
- HIGH GAIN HIGH EFFICIENCY RESONANT DC-DC CONVERTER
- Creator
- Shang, Fei
- Date
- 2016, 2016-12
- Description
- Low voltage power sources such as batteries, solar panels, and fuel cells have played an important role in applications such as automotive systems, renewable energy power generation, and so on. These applications of low voltage power sources require a high gain DC-DC step-up converter. Research in this area shows great improvements in converter topologies. As the power requirements keep increasing, the converter must sustain a very high input current. This high current brings many design challenges to the existing topologies, such as high component current stress and power loss, complex and costly design of magnetic components, high input current ripple, etc. To address these challenges, a new topology of high gain DC-DC step-up converter is needed. Evaluation of current high gain DC-DC converter topologies motivates the idea of a new topology which combines the advantages of different topologies and techniques. The new topology of high gain DC-DC converter suitable for low-voltage, high-current applications is proposed in this dissertation. It consists of an interleaved step-up topology, a resonant circuit, and a high frequency transformer. The topology has many merits, such as high gain capability, high efficiency, low component stress and relaxed transformer requirements, a simple topology with a smaller number of active switching devices, and ease of control. The dissertation carries out theoretical analysis of the proposed topology under different operating modes, and the voltage gain has been deduced for each mode. The high voltage gain capability comes from three parts: the interleaved step-up function, the transformer turns ratio, and the output voltage doubler circuit. Some variants of the topology make it more practical in many applications. In order to realize the design of the proposed converter, the design guidelines for the major circuit components have been well studied in this dissertation. The switching power devices' current stress and power loss are discussed in detail to show the trend of their variation under different operating modes. The selection of the transformer turns ratio, with consideration of its impact on component stress and power loss, has been fully analyzed. The design method of the resonant tank is also well studied, based on the resonant component value selection and its influence on the other components. Input inductor design is related to the current ripple requirement, and this relationship is discussed thoroughly. These guidelines can be used to support the practical design of the proposed converter for different specifications. Effective output voltage regulation is essential for the proposed converter. To design a proper controller for the converter, the system transfer function is needed. The methods of system dynamic modeling have been fully studied in this dissertation. System dynamic state-space models are acquired by using the generalized averaging method, and the results validate the effectiveness of the method. A small signal model of the converter is achieved by linearization of the dynamic model around the operating points, and system transfer functions are available at different operating points. The stability study indicates that the system is stable at all operating points, though several transfer functions at some operating points contain RHP zeros, which can make the system unstable if the closed-loop controller is poorly designed.
The parameter sensitivity study shows that the system transfer function is not greatly affected by variation of the leakage inductance and load resistance. A PI controller design is introduced in the dissertation, and closed-loop control of the converter is implemented to achieve output voltage regulation. Simulations in PSIM and MATLAB Simulink have been carried out to validate the circuit operation and support the design analysis. A 2 kW prototype has been built for experimental testing. The experimental results are in good agreement with the theoretical analysis, and an efficiency of over 95% has been achieved at the nominal operating point.
Ph.D. in Electrical Engineering, December 2016
- Title
- FACTORS AFFECTING ACCEPTANCE OF DISABILITY: A PILOT STUDY AMONG CHINESE INDIVIDUALS WITH SPINAL CORD INJURY
- Creator
- Jiao, Jie
- Date
- 2012-07-07, 2012-07
- Description
- In the rehabilitation literature, acceptance of disability (AD) has been identified as one of the best indicators of positive adjustment following an acquired disability (Elliott, Uswatte, Lewis, & Palmatier, 2000) and has significant implications for vocational rehabilitation and overall community integration (Green, Pratt, & Grigsby, 1984; Melamed, Groswasser, & Stern, 1992; Snead & Davis, 2002). However, the existing literature on acceptance of disability is primarily based on Western samples. The current study focused on people with spinal cord injuries and was the first attempt to apply the construct of acceptance of disability to a mainland Chinese sample. It also examined whether demographic variables (i.e., age, gender, education level), disability-related variables (i.e., functional limitations, pain), and psychosocial variables (i.e., depression, self-esteem, perceived social support, self-efficacy) are significantly related to AD. Hierarchical regression revealed that higher self-esteem and fewer depressive symptoms were significantly associated with better acceptance of disability. The current study also indicated an alarmingly high prevalence of depression among Chinese individuals with spinal cord injury and suggested a mediating effect of depression and self-esteem on social support.
Ph.D. in Psychology, July 2012
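A minimal sketch of a hierarchical (blockwise) regression of the kind reported above, using statsmodels; the variable names and synthetic data are placeholders, not the study's Chinese SCI sample:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "age": rng.normal(40, 10, n),
    "education": rng.integers(6, 17, n),
    "pain": rng.normal(4, 2, n),
    "depression": rng.normal(15, 6, n),
    "self_esteem": rng.normal(28, 5, n),
})
df["acceptance"] = 50 - 0.8 * df["depression"] + 0.9 * df["self_esteem"] + rng.normal(0, 5, n)

blocks = [["age", "education"],                                         # step 1: demographics
          ["age", "education", "pain"],                                 # step 2: + disability-related
          ["age", "education", "pain", "depression", "self_esteem"]]    # step 3: + psychosocial

r2_prev = 0.0
for i, cols in enumerate(blocks, start=1):
    fit = sm.OLS(df["acceptance"], sm.add_constant(df[cols])).fit()
    print(f"step {i}: R2 = {fit.rsquared:.3f}, delta R2 = {fit.rsquared - r2_prev:.3f}")
    r2_prev = fit.rsquared
```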
- Title
- LEARNING THE STRUCTURE OF PROBABILITY NETWORKS WITH DATA UNCERTAINTY
- Creator
- Zhang, Sisi
- Date
- 2014, 2014-07
- Description
- This paper studies how data uncertainties impact structure learning. Learning the structure of a probabilistic network from observational data has traditionally been studied assuming that there are no uncertainties in the data. This paper focuses on the uncertainties that result in “misclassification errors” in the contingency tables on which the independence tests are based. The impact of misclassification errors is investigated through a sensitivity study that focuses on identifying the boundaries of misclassification errors within which the structure learned from erroneous data is identical to the true structure. Mathematical derivations for obtaining this boundary are presented. The analytical results are illustrated by a case study in epidemiology.
M.S. in Applied Mathematics, July 2014
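A small sketch of how misclassification in a contingency table can change the outcome of the independence test that structure learning relies on; the table and the 30% error rate are made up for illustration, not the thesis's derived boundaries:

```python
import numpy as np
from scipy.stats import chi2_contingency

true_table = np.array([[30., 10.],
                       [10., 30.]])          # clear association between X and Y

eps = 0.30                                   # hypothetical misclassification rate on X
M = np.array([[1 - eps, eps],
              [eps, 1 - eps]])
noisy_table = M @ true_table                 # rows of the table get mixed

# At this (made-up) error rate the association is strongly attenuated and the
# test no longer rejects independence at the 0.05 level.
for name, table in [("true", true_table), ("misclassified", noisy_table)]:
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"{name:14s} chi2 = {chi2:5.2f}, p = {p:.4f}")
```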
- Title
- SPECTRUM SHARING OPPORTUNITY FOR LTE AND AIRCRAFT RADAR IN THE 4.2 - 4.4 GHZ BAND
- Creator
- Singh, Rohit
- Date
- 2017, 2017-07
- Description
- The Federal Communications Commission (FCC) states that America is facing a spectrum crunch and there is no easy way to meet this increasing demand; hence, spectrum sensing and sharing have received significant attention in the spectrum community. Spectrum is an increasingly scarce natural resource which needs to be used to the fullest. Using modern techniques, spectrum bands can be reused such that they do not interfere with the current users in a band. There are many bands in the RF spectrum which are underutilized and can be reused in the space-time domain. A number of bands have been recognized as candidates for spectrum sharing. In this dissertation, we consider the 4.2–4.4 GHz band, which is dedicated for use by the radar altimeters fitted on aircraft to measure their elevation above the earth's surface. This spectrum is currently underutilized and, with care, can be shared with other technologies. This thesis examines the current use of this spectrum as a function of time and location and presents a methodology for assessing whether harmful interference is experienced by either the incumbent radar user or a proposed secondary wireless broadband user. However, this band is a potential “safety of life” spectrum which is used by aircraft during landing and takeoff. Improper sharing of this band could cause interference at the radar, which would result in false altitude readings from the radar. Because of its advanced technology, LTE can be a good sharing candidate for this sensitive band. We propose sharing of this band with small cells (perhaps inside buildings) in urban and/or suburban areas, where there is a high demand for LTE and the attenuation from the environment is high enough to cause less interference at the radar altimeters. In this thesis, we propose to detect the aircraft (i.e., the altimeter radars) using the Automatic Dependent Surveillance-Broadcast (ADS-B) data which is broadcast by aircraft. This aircraft detection mechanism helps us to take intelligent sharing approaches with LTE in the space-time domain. Since the performance of the radar altimeter is safety-of-life critical, a deep understanding of co-existence between these systems is necessary to evaluate whether sharing is feasible. Given the availability of historical ADS-B data, what we believe is an appropriate analysis of Chicagoland has been carried out to propose implementation of a mix of exclusion and coordination zones in this area in the space-time domain. The novelty of this work is to develop spectrum sharing opportunities with radars that are highly transient and whose locations are unpredictable due to emergencies, traffic, or weather. This thesis presents a method for evaluating the potential for spectrum sharing between ground-based LTE systems and commercial radar altimeters.
M.S. in Computer Science, July 2017
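A back-of-the-envelope sketch of the kind of interference check a sharing study in the 4.2-4.4 GHz band must perform; the EIRP, approach altitude, and building/clutter loss figures below are hypothetical, not the thesis's coexistence criteria:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB."""
    c = 3.0e8
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

freq = 4.3e9              # middle of the 4.2-4.4 GHz altimeter band
eirp_dbm = 30.0           # assumed indoor small-cell EIRP
altitude_m = 600.0        # aircraft on approach (assumed)
building_loss_db = 20.0   # assumed wall/clutter attenuation

received_dbm = eirp_dbm - fspl_db(altitude_m, freq) - building_loss_db
print(f"path loss: {fspl_db(altitude_m, freq):.1f} dB")
print(f"interference at the altimeter: {received_dbm:.1f} dBm")
```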
- Title
- MULTIPLE-INPUT MULTIPLE-OUTPUT NONLINEAR CONTROL OF SELECTIVE CATALYTIC REDUCTION SYSTEMS
- Creator
- Dong, Siwei
- Date
- 2015, 2015-05
- Description
- Selective catalytic reduction (SCR) is one of the most promising solutions to meet future nitrogen oxides (NOx) emissions regulations for heavy-duty diesel vehicles. However, such vehicles often operate under highly transient conditions in which mobile selective catalytic reduction systems encounter significant efficiency challenges, especially when the engine is under low load. A detailed simulation model of the SCR system was developed in the Gamma Technologies simulation suite, and a baseline model of feedback control on the SCR was constructed. Experimental data for the exhaust gas composition and conditions from a Cummins ISB engine was used to provide the input parameters for the SCR model. The results reveal that in low-load conditions, the efficiency of NOx reduction in the SCR system is very low, and the NOx concentration exiting the vehicle could be over five times the limit set by the US Environmental Protection Agency (EPA). These issues are encountered in part because current SCR controls focus solely on the aftertreatment components and treat the incoming engine output conditions as system disturbances. To address the low NOx conversion problems encountered in low-load conditions, a new integrated engine and aftertreatment control model was designed. This integrated approach improves SCR system efficiency by using available feedback and modulating the upstream air/fuel ratio to provide more favorable SCR inlet conditions. From experimental data analysis, the engine's air/fuel ratio is shown to have a critical impact on exhaust gas temperature and exhaust oxygen fraction, which strongly affect the SCR reactions. In order to integrate the engine and aftertreatment system, a model of the SCR dynamics was created and validated, and a simple model of the relationship between the engine's air/fuel ratio and the resulting exhaust temperature and composition was leveraged. The new model-based control strategy is shown to be effective in improving SCR system performance at low-load operation. With a small shift in the air/fuel ratio, the efficiency of the SCR system can increase from 40% to 85% at low-load operating conditions.
M.S. in Mechanical and Aerospace Engineering, May 2015
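For context, ammonia-based SCR reduces NOx mainly through the standard and fast SCR reactions; this is textbook chemistry, not a result of the thesis:

```latex
4\,\mathrm{NH_3} + 4\,\mathrm{NO} + \mathrm{O_2} \;\longrightarrow\; 4\,\mathrm{N_2} + 6\,\mathrm{H_2O}
\qquad\text{(standard SCR)}
\qquad\quad
2\,\mathrm{NH_3} + \mathrm{NO} + \mathrm{NO_2} \;\longrightarrow\; 2\,\mathrm{N_2} + 3\,\mathrm{H_2O}
\qquad\text{(fast SCR)}
```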
- Title
- SUBSTATION PLANNING FOR RURAL DISTRIBUTION SYSTEMS IN AFRICAN COUNTRIES
- Creator
- Soyoye, Oluwadamilola
- Date
- 2016, 2016-12
- Description
- In sub-Saharan Africa, only 35% of the population is connected to grid electricity [2]. Grid-connected areas face serious transmission and distribution challenges. There is also the challenge of electricity demand being greater than electricity supply. These issues at all levels of the traditional power system (generation, transmission and distribution) have led to a gross inadequacy of electricity supply. This research focuses on capital-intensive Power Distribution Planning (PDP). Most problems in the distribution system affect the consumer directly. Distribution substation planning, a critical part of PDP, particularly addresses the issue of overloaded distribution systems. It is not uncommon for substation transformers in some African communities to become damaged because of overloading. The choice of location, sizing, siting and number of substations is determined by considering load distribution, feeder lengths and sizes, and interruption costs. The research illustrates a framework for substation planning, incorporating possible future load growth over a particular period to forestall unwanted failures in the distribution system. A direct algorithm is used, where the substation capacity is computed manually from the load levels at different points. This algorithm is later combined with a Mixed Integer Linear Programming (MILP) approach solved with the CPLEX solver in MATLAB.
M.S. in Electrical Engineering, December 2016
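A toy MILP sketch in the spirit of the substation siting and sizing problem above, written with PuLP and its bundled CBC solver rather than the thesis's CPLEX-in-MATLAB setup; all costs, capacities, and load points are made up for illustration:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

sites = {"S1": {"cost": 120, "capacity": 8.0},    # candidate substations, capacity in MVA
         "S2": {"cost": 150, "capacity": 10.0},
         "S3": {"cost": 90,  "capacity": 5.0}}
loads = {"L1": 3.0, "L2": 4.0, "L3": 2.5}         # peak demand at each load point, MVA
feeder_cost = {("L1", "S1"): 12, ("L1", "S2"): 20, ("L1", "S3"): 8,
               ("L2", "S1"): 10, ("L2", "S2"): 9,  ("L2", "S3"): 18,
               ("L3", "S1"): 15, ("L3", "S2"): 11, ("L3", "S3"): 7}

prob = LpProblem("substation_planning", LpMinimize)
build = LpVariable.dicts("build", sites, cat=LpBinary)        # build substation s?
serve = LpVariable.dicts("serve", feeder_cost, cat=LpBinary)  # load l served by s?

# Minimize substation build cost plus feeder cost.
prob += lpSum(sites[s]["cost"] * build[s] for s in sites) \
      + lpSum(feeder_cost[l, s] * serve[l, s] for (l, s) in feeder_cost)

for l in loads:   # every load point is served exactly once
    prob += lpSum(serve[l, s] for s in sites) == 1
for s in sites:   # total assigned load respects the capacity of each built substation
    prob += lpSum(loads[l] * serve[l, s] for l in loads) <= sites[s]["capacity"] * build[s]

prob.solve()
print({s: int(value(build[s])) for s in sites})
print([(l, s) for (l, s) in feeder_cost if value(serve[l, s]) > 0.5])
```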
- Title
- STATISTICAL METHODS FOR LARGE-SCALE TRANSPORTATION NETWORK TRAFFIC VOLUME FORECASTING
- Creator
- Meng, Xiao
- Date
- 2012-11-27, 2012-12
- Description
- Forecasting is the procedure of making declarations about future events whose actual outcomes have not yet been observed. Many decisions are made based on predictions of future unknown events. Knowing the essence of forecasting, it is not hard to interpret what traffic volume forecasting is: the process of estimating the number of vehicles that will be on a planned highway in the future. It plays important roles in different aspects of transportation and related fields, such as highway level-of-service analysis, measures of effectiveness, highway improvement and expansion, geometric design, and air quality analysis. A good forecast is needed for decision making in future land use and transportation planning. City and county planners can provide useful information about land use planning and projected developments. County engineers may provide information about future county projects that may cause detours and changes in traffic patterns along a trunk highway. Highway designers need forecasted traffic volumes to ensure proper geometric designs. Since short-term forecasting has been a popular research topic, many statistical methods have been used, such as the mean, the historical moving average, exponential smoothing, and the autoregressive integrated moving average. Among them, the Box-Jenkins method (autoregressive integrated moving average, ARIMA) has been found to be the best model for forecasting time series data with seasonality and trend.
M.S. in Civil Engineering, December 2012
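A minimal Box-Jenkins (seasonal ARIMA) sketch of the kind of traffic-volume forecasting described above, using statsmodels; the synthetic hourly counts and the (1,0,1)x(1,1,1,24) order are illustrative choices, not the thesis's fitted model:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
hours = pd.date_range("2012-01-01", periods=21 * 24, freq="h")
daily_pattern = 400 + 250 * np.sin(2 * np.pi * (hours.hour - 7) / 24)   # toy daily cycle
volume = pd.Series(daily_pattern + rng.normal(0, 30, len(hours)), index=hours)

model = SARIMAX(volume, order=(1, 0, 1), seasonal_order=(1, 1, 1, 24))  # 24-hour seasonality
result = model.fit(disp=False)
print(result.forecast(steps=24).round(0))   # next day's hourly volume forecast
```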
- Title
- COLLABORATIVE CONSUMPTION: PROFITS, CONSUMER BENEFITS, AND ENVIRONMENTAL IMPACTS
- Creator
- Supangkat, Hendrarto Kurniawan
- Date
- 2014, 2014-05
- Description
- With increasingly connected consumers and technological advancement, peer-to-peer sharing is emerging as a consumer-led initiative, which is aimed to exploit slack capacities and lower the cost of consuming private goods. Sharing is praised for its potential benefits of improving consumer access, consumer surplus, and environmental impact. On the other hand, sharing may pose credible threats to producers because of cannibalization and reduced sales quantity. This thesis is composed of three papers on the subject of peer-to-peer sharing of durable goods, e.g., cars, bikes, gadgets, and household appliances. The first paper studies pricing and product design decisions of a single-product monopolist in a market. We identify the conditions under which a firm would accommodate or hinder peer-to-peer sharing by pricing the product appropriately. We find that the firm's profit can be enhanced only when the consumer valuation heterogeneity is neither too high nor too low, and the product's intrinsic value is sufficiently high. In addition, contrary to the conventional wisdom, we show that sharing does not always improve consumer access to products. Furthermore, some consumers may end up being worse off. Finally, we find that social sharing may enhance or impede product innovation, depending on consumer heterogeneity and the size of sharing groups. In the second paper, we study whether social sharing will encourage or discourage product differentiation. We find that the two ways of expanding the market, one consumer-initiated and one firm-initiated, can be strategic complements or substitutes, depending on consumer heterogeneity, group size, product intrinsic value, and cost structure. We characterize such conditions. For example, we show that accommodating sharing provides the firm a higher incentive to introduce a differentiated product when the product intrinsic value and consumer heterogeneity are both low, or are both high. We also extend the study by allowing consumers to endogenously choose their sharing group size, and show that it may enhance or worsen the firm's profit. The third paper focuses on the environmental impact stemming from production and consumption, in the presence of peer-to-peer sharing. The product usage of sharing consumers is modeled as a function of capacity congestion and group size. We show that a "danger" zone exists where sharing is profitable for the firm but is not friendly to the environment. When the firm has an influence on the sharing group size (e.g., by promoting sharing programs in metropolitan areas or college towns), the economic incentive and environmental impact can be aligned. Specifically, we find that stronger congestion effects may induce the producer to promote sharing in larger groups, which in turn results in a more positive environmental impact. Such situations are more likely to occur when the product unit cost is large. Moreover, we characterize conditions under which the firm may prefer heterogeneous networks composed of groups with different sizes or social networks with lower homophily, and meanwhile the environmental impact can be improved.
Ph.D. in Management Science, May 2014
- Title
- NUMERICAL SIMULATIONS OF CURVATURE WEAKENING MODEL OF REACTIVE HELE-SHAW FLOW
- Creator
- Zhao, Meng
- Date
- 2013, 2013-12
- Description
- In this paper, we study a moving interface problem in a Hele-Shaw cell, where two immiscible reactive fluids meet at the interface and initiate chemical reactions. A new gel-like phase is produced at the interface and may modify the elastic bending property there. We model the interface as an elastic membrane with a local curvature-dependent bending rigidity. In the first part of this paper, we review the linear stability analysis of a curvature-weakening model and derive critical flux conditions such that a Hele-Shaw bubble can develop an unstable fingering pattern and self-similar morphology. In the second part of this report, we develop a boundary integral numerical algorithm to perform nonlinear simulations. Preliminary numerical results show that in the nonlinear regime, there also exist stable self-similar solutions.
M.S. in Applied Mathematics, December 2013
- Title
- TWO-LAYERED DEPTH ESTIMATION USING SEMI-GLOBAL MATCHING WITH MUTUAL INFORMATION
- Creator
- Zhang, Chen
- Date
- 2014, 2014-07
- Description
- Depth estimation plays an important role in the three-dimensional computer vision area. Its recent development focuses on real-time applications. To provide depth maps for real-time applications like pedestrian detection and intelligent vehicles, three challenges must be overcome: (1) real-time processing speed; (2) insensitivity to brightness changes; (3) clear boundaries and smooth surfaces. The thesis first describes the major steps of depth estimation, and then many commonly used methods are reviewed. From these and other related real-time methods, we found that iteration-based semi-global matching with mutual information has much potential for improvement. Based on that, the thesis proposes a two-layered method to provide depth maps for pedestrian detection. The low-resolution layer's task is to produce a coarse depth map as quickly as possible, which then helps to produce an accurate mutual information distribution without iterations. The full-resolution layer applies semi-global matching with two optional simplification schemes to speed up processing. The proposed method is implemented on both CPU and GPU. Experimental results and evaluation show that it is highly insensitive to brightness changes and achieves real-time processing speed while maintaining performance comparable with state-of-the-art real-time depth estimation methods.
M.S. in Electrical Engineering, May 2014
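A minimal semi-global matching sketch using OpenCV's StereoSGBM, for orientation only; note that OpenCV's implementation uses a block-based matching cost rather than the mutual-information cost of the original SGM formulation discussed in the thesis, and the file names are placeholders:

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

block = 5
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,        # must be a multiple of 16
    blockSize=block,
    P1=8 * block * block,     # smoothness penalty for small disparity changes
    P2=32 * block * block,    # smoothness penalty for large disparity changes
    uniquenessRatio=10,
)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels
print("disparity range:", disparity.min(), disparity.max())
```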