Search results
(7,861 - 7,880 of 8,144)
- Title
- CARING FOR THE CAREGIVER: INTERPERSONAL FACTORS AND DEPRESSION AS PARALLEL-SERIAL MEDIATORS BETWEEN STIGMA AND SUICIDAL IDEATION
- Creator
- Tsen, Jonathan Y.
- Date
- 2022
- Description
- Background/Objectives: This study applied Joiner's Interpersonal Psychological Theory to a caregiver population by describing relationships among affiliate stigma, thwarted belongingness (TB), perceived burdensomeness (PB), depression, and suicidal ideation (SI). Participants/Setting: 243 adult caregivers participated in this study via Prolific Academic and caregiver-related websites. Design/Main Outcome Measures: This study used a cross-sectional, survey-based design including demographics, the Affiliate Stigma Scale (α=.93), Interpersonal Needs Questionnaire-15 (α=.95), Center for Epidemiologic Studies Depression Scale-10 (α=.90), and Depressive Symptom Inventory—Suicide Subscale (α=.91), administered via Qualtrics. Analyses were run in SPSS v27 with Hayes' PROCESS macro. Results: A parallel-serial mediation analysis found that, after controlling for covariates, the total indirect effect of affiliate stigma on SI through both TB and PB and then through depression was significant, B = .0271, SE = .0062, β = .1659, 95% CI [.0152, .0393]. Conclusions: Findings indicated that affiliate stigma indirectly affected SI through both TB and PB and then through depression. Interventions to improve caregiver wellbeing should target interpersonal functioning and depressive symptoms in tandem to reduce SI risk.
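The serial indirect effect reported above is the product of the path coefficients along X → M1 → M2 → Y, with a bootstrapped confidence interval. A minimal sketch of that estimation pattern in Python, assuming hypothetical column names (stigma, tb, depression, si); the study itself used Hayes' PROCESS macro in SPSS, not this code:

```python
# Bootstrap sketch of a serial indirect effect (X -> M1 -> M2 -> Y), in the
# spirit of Hayes' PROCESS; column names are hypothetical, not the study's.
import numpy as np
import statsmodels.formula.api as smf

def serial_indirect(df):
    a = smf.ols("tb ~ stigma", df).fit().params["stigma"]            # X -> M1
    d = smf.ols("depression ~ tb + stigma", df).fit().params["tb"]   # M1 -> M2
    b = smf.ols("si ~ depression + tb + stigma",
                df).fit().params["depression"]                        # M2 -> Y
    return a * d * b

def bootstrap_ci(df, n_boot=5000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(df)
    est = [serial_indirect(df.iloc[rng.integers(0, n, n)])
           for _ in range(n_boot)]
    return np.percentile(est, [2.5, 97.5])   # 95% percentile bootstrap CI
```

The real analysis also entered covariates into each regression; they are omitted here for brevity.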
- Title
- Towards Understanding the Microstructure and Mechanical Properties of Additively Manufactured Ni-base Superalloys
- Creator
- Tiparti, Dhruv Reddy
- Date
- 2022
- Description
- Nickel-base superalloy components such as turbine discs typically undergo numerous manufacturing steps that contribute to increasing cost and the waste of excess material. With the advent of fusion-based additive manufacturing (AM) techniques, such components with complex geometry can be fabricated with great efficiency. However, due to the characteristically high energy densities, fast cooling rates, and layer-by-layer building process associated with AM, Ni-base superalloys with higher-temperature performance are difficult to fabricate by AM owing to their susceptibility to composition-related defect formation, which is further exacerbated by anisotropic grain structures induced by the large thermal gradients present. Crack-free material can be fabricated but, in most cases, issues such as an anisotropic microstructure will prevail, and the balance of mechanical properties achieved may not be suitable for the desired applications. Several post-processing strategies exist to mitigate these challenges, such as hot isostatic pressing, annealing heat treatments, and the application of grain-refining inoculants. All of these strategies still require further study to understand their effects on the microstructure and mechanical properties of AM Ni-base superalloys. This work aims to evaluate the use of inoculant particles and novel heat treatments on the microstructure and mechanical properties of different superalloys. First, the effect of varying the CoAl2O4 inoculant content from 0 to 2 wt.% on the microstructure evolution of Inconel 718 (IN718) fabricated by selective laser melting (SLM) was evaluated. The findings indicated that additions of CoAl2O4 resulted in only a minor degree of grain refinement with a slight increase in anisotropy; that a CoAl2O4 content above 0.2 wt.% resulted in the formation of agglomerate inclusions; and that to effectively utilize CoAl2O4 as a grain-refining inoculant, process parameters must be further optimized while considering the formation of agglomerates and other defects. Second, the application of CoAl2O4 was extended to the directed energy deposition (DED) of IN718. Here, findings indicated that, due to the modification of the thermophysical properties of the melt pool by the oxide addition, an earlier onset of large columnar grains extending across multiple layers occurred, counteracting the conditions required for equiaxed grain formation; the CoAl2O4 particles were also found to exhibit a potent Zener pinning effect that maintained the as-built grain structure despite application of an extreme heat-treatment condition of 1200°C for 4 h. Third, the tensile and fatigue properties of DED IN718 with CoAl2O4 were evaluated. Here, it was found that the addition of CoAl2O4 led to a minor increase in tensile strength in the as-built condition, attributed primarily to the fine oxide dispersion; a more modest increase in tensile strength in the heat-treated condition, due to the grain refinement induced by retaining the as-built grain structure; and that, despite the increase in tensile strength with CoAl2O4, a corresponding increase in fatigue life did not occur. Lastly, René 65 was processed by laser powder bed fusion (L-PBF) and compared to the conventionally cast-and-wrought material. Here, the effect of the difference in processing route, in conjunction with heat treatments, was evaluated to understand the creep and stress relaxation behavior. It was found that L-PBF of René 65 led to an overall improved resistance to deformation by creep and stress relaxation mechanisms.
- Title
- SYNTHESIS AND CHARACTERIZATION OF MG, NB, TI-DOPED LINIO2 CATHODE MATERIAL FOR LI-ION BATTERIES
- Creator
- Tian, Yiwen
- Date
- 2022
- Description
- In this project, the influence of several metal dopants on the electrochemical properties of LiNiO2 was analyzed. The doping aims to stabilize the layered structure and inhibit nickel-lithium mixing during intercalation/deintercalation by replacing part of the Ni with other metals, thereby improving cycling performance and reversible capacity. The LiNiO2 powder doped with Nb, Ti, and Mg is denoted Li0.96Ni0.9Nb0.06Ti0.04Mg0.02O2 or, in short, metal-doped LiNiO2. The synthesis of the metal-doped LiNiO2 powder consists of mixing the lithium and nickel sources with the various metal oxides, high-energy ball milling for 10 hours, and heating for 20 h in a metallic tube furnace at 680°C under a flowing oxygen atmosphere. Undoped LiNiO2 powder synthesized using the same process and conditions was compared with the doped powder. To understand the doping mechanism, field emission scanning electron microscopy (FESEM), energy dispersive spectroscopy (EDS), and X-ray diffraction (XRD) were used to analyze the morphology, composition, and crystal structure of the final product. Benefiting from the Mg, Nb, and Ti doping, the doped LiNiO2 exhibited a high reversible capacity of 130.56 mAh g-1, higher than that of undoped LiNiO2 (95.02 mAh g-1), at a 0.1C charge/discharge rate in the voltage window between 2.5 and 4.2 V. Further, the doped LiNiO2 retained 86% of its capacity over 100 cycles, versus only 44% for undoped LiNiO2, at a 0.5C charge/discharge rate between 2.5 and 4.2 V.
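For reference, the retention figures quoted above are simple ratios of a later cycle's discharge capacity to the first cycle's. A trivial sketch with illustrative numbers, not measurements from this thesis:

```python
# Capacity retention as a percentage of first-cycle discharge capacity.
# Example values are illustrative only.
def capacity_retention(q_first_mAh_g: float, q_nth_mAh_g: float) -> float:
    return 100.0 * q_nth_mAh_g / q_first_mAh_g

print(round(capacity_retention(130.56, 112.3)))  # ~86, i.e. 86% retention
```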
- Title
- Quality of Life in People with Epilepsy: The Associations of Anti-seizure Medications and Biopsychosocial Variables
- Creator
- Thomas, Julia A.
- Date
- 2022
- Description
- People with epilepsy, on average, experience lower quality of life (QOL) than healthy controls (Taylor et al., 2011). This study examined the associations among specific anti-seizure medications, biopsychosocial factors, and QOL in people with epilepsy. Analysis of covariance revealed that individuals taking three or more anti-seizure medications had significantly lower QOL than those taking levetiracetam. In a hierarchical regression of biopsychosocial factors on QOL, anxiety, depression, and daytime sleepiness were significant predictors; once these factors were entered into the model, the number of medications was no longer significant. The final model explained 59.6% of the variance in QOL. Lastly, a moderation analysis found no significant moderating effect of employment on the association between the number of anti-seizure medications and QOL. Additional exploratory analyses compared individuals who were employed with those who were not. These findings underscore the importance of addressing psychological health and sleep factors within the epilepsy population.
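The hierarchical (blockwise) regression described above is typically evaluated by the change in R² between nested models. A hedged sketch in Python with hypothetical variable names, not the study's data or code:

```python
# Blockwise OLS with an R^2-change F-test between nested models.
# Variable names (qol, n_medications, anxiety, ...) are hypothetical.
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def r2_change(df):
    step1 = smf.ols("qol ~ n_medications", df).fit()
    step2 = smf.ols("qol ~ n_medications + anxiety + depression + sleepiness",
                    df).fit()
    # F-test on the nested models: does the biopsychosocial block add variance?
    return step2.rsquared - step1.rsquared, anova_lm(step1, step2)
```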
- Title
- Child and Family Outcomes Associated with Specific Maryland ASD Waiver Services and Choice and Control as Mediators of These Outcomes
- Creator
- Turchmanovych-Hienkel, Nataliya
- Date
- 2022
- Description
- Autism spectrum disorder (ASD) is a neurodevelopmental condition that affects 1 in 44 children and is characterized by impairments in cognitive, behavioral, and social domains of functioning. The literature suggests that ASD not only impacts the quality of life of the individuals diagnosed with the condition but also has a negative impact on family quality of life (FQoL). Interventions and services offered through the Medicaid 1915(c) Home and Community-Based Services waiver programs can enhance child and family outcomes. The present study examined one specific waiver program, the Maryland ASD waiver: the frequency at which families received different waiver services, and the associations between those service frequencies and child outcomes (i.e., academic performance, independent living skills, social communication and interaction skills, stereotypic and repetitive behavior, and aggressive behavior) and family outcomes (i.e., FQoL). It also explored whether the family's perception of choice and control mediates these outcomes. Results suggest that the frequencies of some waiver services are associated with progress in some child outcomes, but not with FQoL. This study also suggests that the choice and control that families have over services does not mediate the relation between the frequency of waiver services and child and family outcomes. Overall, results suggest that the Maryland ASD waiver program may help improve some domains of child functioning.
- Title
- Choice-Distinguishing Colorings of Cartesian Products of Graphs
- Creator
- Tomlins, Christian James
- Date
- 2022
- Description
- A coloring $f: V(G)\rightarrow \mathbb N$ of a graph $G$ is said to be \emph{distinguishing} if no non-identity automorphism preserves every vertex color. The distinguishing number $D(G)$ of a graph $G$, introduced by Albertson and Collins in their paper ``Symmetry Breaking in Graphs,'' is the smallest positive integer $k$ such that there exists a distinguishing coloring $f: V(G)\rightarrow [k]$. By restricting which kinds of colorings are considered, many variations of the distinguishing number have been studied. In this paper, we study proper list-colorings of graphs that are also distinguishing and investigate the choice-distinguishing number $\text{ch}_D(G)$ of a graph $G$. Primarily, we focus on the choice-distinguishing number of Cartesian products of graphs. We determine the exact value of $\text{ch}_D(G)$ for lattice graphs and prism graphs, and provide an upper bound on the choice-distinguishing number of the Cartesian product of two relatively prime graphs, assuming a sufficient condition is satisfied. We use this result to bound the choice-distinguishing number of toroidal grids and of the Cartesian product of a tree with a clique. We conclude with a discussion of how, depending on the graphs $G$ and $H$, we may weaken the sufficient condition needed to bound $\text{ch}_D(G\square H)$.
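The underlying distinguishing property can be checked directly from the definition by enumerating automorphisms (the choice-distinguishing variant studied in the thesis adds properness and list constraints on top). A brute-force sketch with networkx, fine only for small graphs since automorphism enumeration is expensive in general:

```python
# Brute-force check that a coloring is distinguishing: no non-identity
# automorphism of G preserves every vertex color. Illustrative only.
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def is_distinguishing(G: nx.Graph, coloring: dict) -> bool:
    for auto in GraphMatcher(G, G).isomorphisms_iter():  # automorphisms of G
        nontrivial = any(auto[v] != v for v in G)
        color_preserving = all(coloring[auto[v]] == coloring[v] for v in G)
        if nontrivial and color_preserving:
            return False   # a symmetry survives, so the coloring fails
    return True

C4 = nx.cycle_graph(4)                                   # D(C4) = 3
print(is_distinguishing(C4, {0: 0, 1: 0, 2: 1, 3: 1}))   # False: 2 colors fail
print(is_distinguishing(C4, {0: 0, 1: 0, 2: 1, 3: 2}))   # True: 3 colors work
```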
- Title
- Synthesis and Photophysical Characterization of Novel Organic Triplet Donor–Acceptor Dyads for Light-Harvesting/Modulation Application
- Creator
- Yun, Young Ju
- Date
- 2022
- Description
- Donor–acceptor chromophoric systems (D–A) are important scaffolds for several light-harvesting/initiated processes and devices, including light-emitting diodes, photocatalytic/redox systems, and photovoltaic cells. It has been hypothesized that, for efficient photophysical processes (viz. energy/charge transfer or excited-state interactions), it is ideal to tether the donor and acceptor chromophores into molecular dyads. To this end, I devised and synthesized several dyads by tethering an organic triplet energy donor and various polyaromatic chromophores (e.g., perylene and anthracene derivatives) onto a conjugated or non-conjugated linker (phenylene and triptycene, respectively). During the 4-5 years of my Ph.D., I synthesized a total of five dyads: o–, p–3, and dyads 3–5. These systems were fully characterized using different spectroscopic tools and techniques. The spectroscopic investigations of the dyads allowed me to decipher two important energy transfer pathways: through-bond and through-space with the phenylene linker, and only through-space with the triptycene linker. Furthermore, the investigations led to the discovery that geometrical features such as face-to-face (co-facial) or slip-stacked interactions between the donor and acceptor chromophores might dictate the dynamics/kinetics of light-induced energy transfer in the dyads. Findings from my graduate research project paved the way for molecular engineering studies for light-harvesting/modulation applications. Subsequently, I was able to employ the dyads of interest to achieve intramolecular and intermolecular triplet energy transfer (TEnT) and triplet-triplet annihilation-based photon upconversion (TTA-PUC).
- Title
- DEFAULT RISK AND MOMENTUM PREMIUM
- Creator
- Zhang, Yi
- Date
- 2022
- Description
- Birge and Zhang (2018) reported that combining common factor models with functions of default risk improves the models' ability to explain stock returns. Default risk contains firm-specific information and may help explain the momentum premium that compensates investors for firm-specific risk exposures. In this paper, we confirm that the forward-looking measure of default risk proposed by Birge and Zhang (2018) seems to capture some of the pricing information in the momentum premium. This provides an alternative explanation for the underlying risks associated with the momentum strategy.
- Title
- INTEGRATED DECISION SUPPORT SYSTEM FOR THE SELECTION AND IMPLEMENTATION OF DELAY ANALYSIS IN CONSTRUCTION PROJECTS
- Creator
- Yang, Juneseok
- Date
- 2022
- Description
- The goal of this study is to establish an objective, user-friendly, and reliable decision support system, called the delay analysis selection and implementation system (DASIS), which allows delay analysts and practitioners in the construction industry to select the type of delay analysis most appropriate for given conditions and to perform the selected analysis. DASIS integrates a delay analysis selection system (DASS) module and an implementation module (DAIS) that performs the type of delay analysis selected by DASS in construction projects. The model that operates the DASS module consists of (1) the four delay analysis approaches currently available to practitioners; (2) a set of 26 attributes that affect the selection of a type of delay analysis; (3) a case-base of 3,776 cases described by these 26 attributes and their corresponding output values (i.e., the most appropriate delay analysis approach); (4) a set of seven categories consisting of subsets of attributes; (5) the weights of the attributes and the categories; and (6) a spreadsheet designed in Microsoft Excel that performs the calculations involved in case-based similarity assessment. The implementation module is a computerized analytics and automation platform that performs the type of delay analysis selected by DASS. In developing the DASS module, the 26 attributes that influence the selection of the most appropriate type of delay analysis were identified through a thorough literature review and organized into the seven categories. These attributes were used to evaluate the four types of delay analysis (i.e., static, dynamic, additive, and subtractive analyses). Based on the results of this evaluation, the case-base of 3,776 cases was generated while considering the constraints of each category. The weights of the attributes and categories were determined using several methods. To determine the best fit between a target case (defined by its 26 attributes) and the 3,776 cases stored in the case-base, a case-based similarity assessment was performed to calculate weighted case similarity scores and to find the best-informed solution to the delay-analysis-type selection problem. In developing the DAIS module, the four types of delay analysis were coded in Microsoft Excel using macros programmed in Visual Basic for Applications (VBA). This automated tool performs the delay analysis selected by DASS. The fully integrated DASIS model finds the best-fit match between a target case and the cases stored in the case-base using weighted case similarity scores, thereby identifying the most appropriate type of delay analysis for the target case; it then performs the selected analysis and instantly generates a report of the results for the analyst, allowing the contractual parties to settle issues quickly. This study is the first attempt to establish an objective decision support system (DASS) that assists delay analysts by automating the selection of a type of delay analysis using combinations of well-recognized and reliable attributes and similarity assessment techniques. In addition, DASS is immediately followed by DAIS in an integrated system (DASIS) that not only selects the most appropriate type of delay analysis but also implements it, providing ease of use and high speed. A case study based on fictitious scenarios is presented to demonstrate and validate the research approach. The use of the entropy weight method to calculate the weights of the attributes can be considered a minor limitation of the study. Finally, DASIS can be reformulated as a web-based application that allows analysts to work online using ordinary browsers anywhere and anytime.
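The selection step at the heart of DASS is a weighted nearest-case lookup. A minimal sketch of weighted case-based similarity scoring; the attribute normalization, weights, and matching rule here are illustrative, not DASIS's actual definitions:

```python
# Weighted case-based similarity: score a target case against a case-base
# and return the stored case (and hence delay-analysis type) that fits best.
# Attribute handling is illustrative, not the DASIS formulation.
from typing import Dict, List

def similarity(target: Dict[str, float], case: Dict[str, float],
               weights: Dict[str, float]) -> float:
    """1 minus the weighted mean attribute distance; attributes in [0, 1]."""
    dist = sum(w * abs(target[a] - case[a]) for a, w in weights.items())
    return 1.0 - dist / sum(weights.values())

def best_match(target: Dict[str, float], case_base: List[dict],
               weights: Dict[str, float]) -> dict:
    # Each stored case holds its attribute vector and the delay-analysis
    # type judged most appropriate for it.
    return max(case_base, key=lambda c: similarity(target, c["attrs"], weights))
```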
- Title
- MEASUREMENT OF ELECTRON NEUTRINO AND ANTINEUTRINO APPEARANCE WITH THE NOνA EXPERIMENT
- Creator
- Yu, Shiqi
- Date
- 2020
- Description
- As a long-baseline neutrino oscillation experiment, the NuMI Off-axis $\nu_e$ Appearance (NOvA) experiment aims to study neutrino physics by measuring neutrino oscillation parameters using the neutrino flux from the Neutrinos at the Main Injector (NuMI) beam. It has two functionally identical detectors. The near detector is onsite at Fermi National Accelerator Laboratory. The far detector is 810 km away from the source of neutrinos and antineutrinos, at Ash River, Minnesota. At the near detector, muon neutrinos or antineutrinos are measured before significant oscillations take place and are used to correct the Monte Carlo simulation. At the far detector, the neutrino and antineutrino fluxes after significant oscillations have occurred are measured and analyzed to study neutrino oscillation. The NOvA experiment is sensitive to the values of $\sin^2\theta_{23}$, $\Delta m^2_{32}$, and $\delta_{CP}$. The latest values from the NOvA 2020 analysis are as follows: $\sin^2\theta_{23}=0.57^{+0.03}_{-0.04}$, $\Delta m^2_{32}=(2.41\pm0.07)\times10^{-3}$ eV$^2$/c$^4$, and $\delta_{CP}=0.82\pi$ with a wide 1$\sigma$ interval of uncertainty. My study focuses on the neutrino oscillation analysis with NOvA, including detector light model tuning, particle classification with a convolutional neural network, electron neutrino and antineutrino energy reconstruction, and oscillation background estimation. Most of my studies have been used in the latest NOvA publication and the NOvA 2020 analysis.
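For orientation, the quoted parameters enter the measurement through the oscillation probabilities. In the standard two-flavor vacuum approximation (the full NOvA fit uses three flavors and matter effects), the muon-neutrino survival probability is:

```latex
% Two-flavor vacuum approximation; L in km, E in GeV, \Delta m^2 in eV^2.
P(\nu_\mu \to \nu_\mu) \approx 1 - \sin^2(2\theta_{23})\,
    \sin^2\!\left(\frac{1.267\,\Delta m^2_{32}\,L}{E}\right)
```

With $L = 810$ km and the quoted $\Delta m^2_{32}$, the survival minimum falls near $E \approx 1.6$ GeV, which is why the off-axis NuMI spectrum is narrowly peaked around 2 GeV.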
- Title
- Towards Trustworthy Multiagent and Machine Learning Systems
- Creator
- Xie, Shangyu
- Date
- 2022
- Description
- This dissertation systematically studies "trustworthy" multiagent and machine learning systems in the context of Internet of Things (IoT) systems, with respect to two main aspects: data privacy and robustness. Specifically, data privacy concerns the protection of data in a given system (i.e., data identified as sensitive or private cannot be disclosed directly to others); robustness refers to the ability of a system to defend against or mitigate potential attacks and threats (i.e., to maintain stable and normal operation). Starting from the smart grid, a representative multiagent system in the IoT, I present two works on improving data privacy and robustness for different applications, load balancing and energy trading, that integrate secure multiparty computation (SMC) protocols into the normal computation to ensure data privacy. The schemes can be readily extended to other IoT applications, e.g., connected vehicles and mobile sensing systems. For machine learning, I study two main areas, computer vision and natural language processing, from the robustness and privacy perspectives, respectively. I first present a comprehensive robustness evaluation of DNN-based video recognition systems with two novel attacks in the test and training phases, i.e., adversarial and poisoning attacks, respectively. I also propose adaptive defenses to fully evaluate these two attacks, which can further advance the robustness of the system. I then propose a privacy evaluation for language systems and show how to reveal and address privacy risks in language models. Finally, I demonstrate a private and efficient data computation framework based on cloud computing technology to provide more robust and private IoT systems.
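As one concrete instance of the test-phase attacks mentioned above, the classic fast gradient sign method (FGSM) perturbs an input in the direction that increases the model's loss. A minimal PyTorch sketch for illustration; the dissertation's video-recognition attacks are more elaborate than this:

```python
# FGSM: a one-step, test-phase adversarial attack. Illustrative only;
# not the dissertation's actual attack on video recognition systems.
import torch
import torch.nn.functional as F

def fgsm(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
         eps: float = 8 / 255) -> torch.Tensor:
    """Perturb x by eps in the direction that maximally increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()  # keep pixels valid
```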
- Title
- Deep Learning Methods For Wireless Networks Optimization
- Creator
- Zhang, Shuai
- Date
- 2022
- Description
- The resurgence of deep learning techniques has brought forth fundamental changes to how hard problems can be solved. It used to be held that solutions to complex wireless network problems require accurate mathematical modeling of the network operation, but the success of deep learning has shown that a data-driven method can generate powerful and useful representations such that a problem can be solved efficiently with surprisingly competent performance. Network researchers have recognized this and started to capitalize on the learning methods' prowess, but most works follow existing black-box learning paradigms without much accommodation to the nature and essence of the underlying network problems. This thesis focuses on a particular type of classical problem: multi-commodity flow (MCF) scheduling in an interference-limited environment. Though it does not permit efficient exact algorithms due to its NP-hard complexity, we use it as an entry point to demonstrate, from three angles, how learning-based methods can help improve network performance. In the first part, we leverage graph neural network (GNN) techniques and propose a two-stage topology-aware machine learning framework, which trains a graph embedding unit and a link usage prediction module jointly to discover links that are likely to be used in an optimal schedule. The second part of the thesis is an attempt to find a learning method with a closer algorithmic affinity to the traditional DCG method. We make use of reinforcement learning to incrementally generate a better partial solution such that a high-quality solution may be found more efficiently. As the third part of the research, we revisit the MCF problem from a novel viewpoint: instead of relying on neural networks to directly generate good solutions, we use them to associate the current problem instance with historical ones that are similar in structure. These matched instances' solutions offer a highly useful starting point that allows efficient discovery of the new instance's solution.
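The first-stage idea, a graph embedding followed by per-link usage prediction, can be sketched in plain PyTorch. The toy one-round message-passing layer and link scorer below are illustrative only, not the thesis's architecture:

```python
# Toy GNN link-usage predictor: embed nodes, pass messages once over the
# adjacency, then score each candidate link. Not the thesis's model.
import torch
import torch.nn as nn

class ToyGNNLinkPredictor(nn.Module):
    def __init__(self, in_dim: int, hid: int = 64):
        super().__init__()
        self.embed = nn.Linear(in_dim, hid)
        self.msg = nn.Linear(hid, hid)
        self.score = nn.Sequential(nn.Linear(2 * hid, hid), nn.ReLU(),
                                   nn.Linear(hid, 1))

    def forward(self, x, adj, edges):
        # x: (n, in_dim) node features; adj: (n, n) 0/1; edges: (m, 2) pairs
        h = torch.relu(self.embed(x))
        h = torch.relu(h + adj @ self.msg(h))   # one message-passing round
        pair = torch.cat([h[edges[:, 0]], h[edges[:, 1]]], dim=-1)
        return torch.sigmoid(self.score(pair)).squeeze(-1)  # P(link used)
```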
- Title
- Essays on Clean Energy Finance and Cryptocurrency Market
- Creator
- Xie, Yao
- Date
- 2021
- Description
- This dissertation includes four essays with several empirical investigations in the areas of clean energy finance and cryptocurrencies. In the first essay, I investigate the heterogeneous relationship between various determinants of the clean energy market across all subsectors of the clean energy stock market. My findings reveal that the VIX is the most significant predictor of conditional volatility across all clean energy subsectors. During the COVID-19 stress period, economic uncertainty measures become more significant. The heterogeneity of the clean energy market persists in the out-of-sample results. These results suggest that portfolio diversification across different clean energy subsectors is necessary. In the second essay, I study the safe haven property of several volatility indexes for clean energy subsectors, comparing the COVID-19 stress period with the time before it. The results show that market volatility and commodity volatility are good safe haven assets during the COVID-19 period, but not against the clean energy subsectors before the pandemic. Among all volatility indexes, the gold volatility index is the most effective safe haven asset. In the third essay, I investigate the characteristics of Bitcoin as a financial asset, using a comprehensive set of information variables under five categories: macroeconomics, blockchain technology, other markets, stress level, and investor sentiment. The empirical results show that blockchain technology, stress level, and investor sentiment have strong power to predict Bitcoin returns. In the fourth essay, I study how extreme sentiment measures from Google Trends and Wikipedia pageviews affect both traditional cryptocurrencies, such as Bitcoin, and stablecoins, such as Tether. The results show that Tether's return is not affected by the extreme sentiment measures during the COVID-19 stress period, which suggests that stablecoins can offer price stability.
- Title
- DEEP LEARNING AND COMPUTER VISION FOR INDUSTRIAL APPLICATIONS: CELLULAR MICROSCOPIC IMAGE ANALYSIS AND ULTRASOUND NONDESTRUCTIVE TESTING
- Creator
- Yuan, Yu
- Date
- 2022
- Description
- For decades, researchers have sought to develop artificial intelligence (AI) systems that can help human beings with decision making, data analysis, and pattern recognition in applications where analytical methods are ineffective. In recent years, deep learning (DL) has proven to be an effective AI technique that can outperform other methods in applications such as computer vision, natural language processing, and autonomous driving. Realizing the potential of deep learning techniques, researchers have also started to apply deep learning to other industrial applications. Today, deep learning-based models are used to innovate and accelerate automation, guidance, and decision making in various industries, including the automotive, pharmaceutical, finance, and agriculture sectors. In this research, several important industrial applications of deep learning, in biomedicine and nondestructive testing, are introduced and analyzed. The first biopharmaceutical application focuses on developing a deep learning-based model to automate the visual inspection step of the median tissue culture infectious dose (TCID50) assay, one of the most popular methods for viral quantification. An important step of TCID50 is to visually inspect the sample and decide whether it exhibits a cytopathic effect (CPE). Two novel models were developed to detect CPE in microscopic images of cell culture in 96-well plates. The first model consists of a convolutional neural network (CNN) and a support vector machine (SVM); the second is a fully convolutional network (FCN) followed by morphological post-processing steps. The models were tested on four cell lines and achieved very high accuracy. Another biopharmaceutical application developed for cellular microscopic images is clonal selection, one of the mandatory steps in the cell line development process, which verifies the clonality of the cell culture. Researchers have traditionally verified clonality by visually inspecting the microscopic images. In this work, a novel deep learning-based model and workflow were developed to accelerate the process; the algorithm consists of multiple steps, including image analysis after incubation to detect cell colonies and verification of their clonality in the day-0 image. The results and common misclassification cases are shown in this thesis. Image analysis is not the only advancing technology for cellular image analysis in the biopharmaceutical industry: a new class of instruments currently used in the industry enables further opportunities for image analysis. To make the most of these new instruments, a convolutional neural network-based architecture is used to perform accurate cell counting and cell morphology-based segmentation. This analysis can provide more insight into the cells at a very early stage of the characterization process in cell line development. The architecture and the testing results are presented in this work. The proposed algorithm achieved very high accuracy on both applications, and the cell morphology-based segmentation enables a brand new feature for scientists to predict the potential productivity of the cells. The next part of this dissertation focuses on hardware implementations of ultrasonic nondestructive testing (NDT) methods based on deep learning, which can be highly useful in flaw detection and classification applications. With the help of a smart, mobile NDT device, engineers can accurately detect and locate flaws inside materials without relying on high-performance computation resources. The first NDT application presents a hardware implementation of a deep learning algorithm on a field-programmable gate array (FPGA) for ultrasound flaw detection. The algorithm consists of a wavelet transform followed by a LeNet-inspired convolutional neural network called Ultra-LeNet. This work focuses on implementing the computationally difficult part of the algorithm, Ultra-LeNet, so that it can be used in the field where high-performance computation resources (e.g., AWS) are not accessible. The implementation uses resource partitioning to design two dedicated pipelined accelerators for convolutional layers and fully connected layers, respectively; both accelerators utilize loop unrolling, loop pipelining, and batch processing to maximize throughput. Comparison to other work shows that the implementation achieves higher hardware utilization efficiency. The second NDT application also implements a deep learning-based algorithm for ultrasound flaw detection on an FPGA. Instead of Ultra-LeNet, the deep learning model used is a meta-learning-based Siamese network, which is capable of multi-class classification and can classify a new class even if it does not appear in the training dataset. The hardware implementation is significantly different from the previous algorithm: to improve inference efficiency, the model is compressed with both pruning and quantization, and the FPGA implementation is specifically designed to accelerate the compressed CNN with high efficiency. The CNN model compression method and hardware design are novel methods introduced in this work. A comparison against other compressed CNN accelerators is also presented.
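The first TCID50 model pairs a CNN feature extractor with an SVM classifier, a common pattern that can be sketched as follows. The backbone choice (ResNet-18) and data handling are illustrative assumptions, not the thesis's actual model:

```python
# CNN-as-feature-extractor + SVM classifier sketch for image classification
# (e.g. CPE vs. no-CPE); backbone and pipeline are illustrative assumptions.
import numpy as np
import torch
import torchvision.models as models
from sklearn.svm import SVC

backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()   # drop the ImageNet classification head
backbone.eval()

@torch.no_grad()
def features(images: torch.Tensor) -> np.ndarray:
    """images: (n, 3, 224, 224) normalized batch -> (n, 512) features."""
    return backbone(images).numpy()

# Hypothetical usage, with X_* as image batches and y_train as CPE labels:
# clf = SVC(kernel="rbf").fit(features(X_train), y_train)
# predictions = clf.predict(features(X_test))
```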
- Title
- MARKETABLE LIMIT ORDERS AND NON-MARKETABLE LIMIT ORDERS ON NASDAQ
- Creator
- ZHANG, DAN
- Date
- 2022
- Description
- My research comprises two parts. In the first part, I classify marketable limit orders into three types: large marketable orders to buy (LMOB), large marketable orders to sell (LMOS), and small marketable orders. I use a dummy-variable method to study the effect of the three marketable order types on standardized variance, and find that LMOB and LMOS play a significant role in variance increases. The second part of my research models the time to execution and time to cancellation of non-marketable limit orders (NLOs). I construct variables and model the time to execution for NLOs to buy and the time to cancellation for NLOs to buy and to sell, based on an exponential distribution with an accelerated failure time (AFT) specification. My research shows that the farther a buy limit price is from the best bid, the longer the time to execution; and the farther a buy limit price is from the best bid, or a sell limit price from the best ask, the longer the time to cancellation.
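An exponential accelerated failure time model with right-censoring can be fit by maximum likelihood in a few lines. A hedged sketch with hypothetical covariates, not the dissertation's specification:

```python
# Exponential AFT with right-censoring, fit by maximum likelihood.
# Covariates are hypothetical; rate_i = exp(-X_i @ beta), so E[T_i] =
# exp(X_i @ beta): positive coefficients lengthen expected durations.
import numpy as np
from scipy.optimize import minimize

def neg_loglik(beta, X, t, event):
    """t: durations; event: 1 if executed/cancelled, 0 if censored."""
    lam = np.exp(-X @ beta)
    return -np.sum(event * np.log(lam) - lam * t)

def fit_exp_aft(X, t, event):
    res = minimize(neg_loglik, x0=np.zeros(X.shape[1]), args=(X, t, event))
    return res.x

# e.g. a column of X holding the distance of a buy limit price from the
# best bid: a positive fitted coefficient reproduces the finding that
# farther-away orders take longer to execute.
```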
- Title
- Stochastic dynamical systems with non-Gaussian and singular noises
- Creator
- Zhang, Qi
- Date
- 2022
- Description
- To describe stochastic fluctuations or random potentials arising in science and engineering, non-Gaussian or singular noises are introduced into stochastic dynamical systems. In this thesis we investigate stochastic differential equations with non-Gaussian Lévy noise, and the singular two-dimensional Anderson model equation with spatial white noise potential. The thesis consists of three main parts. In the first part, we establish a linear response theory for stochastic differential equations driven by an α-stable Lévy noise (1 < α < 2). We first prove the ergodic property of the stochastic differential equation and the regularity of the corresponding stationary Fokker-Planck equation, and then establish the linear response theory. This result is a general fluctuation-dissipation relation between the response of the system to external perturbations and the Lévy-type fluctuations at a steady state. In the second part, we study the global well-posedness of the singular nonlinear parabolic Anderson model equation on a two-dimensional torus. This equation can be viewed as the nonlinear heat equation with a random potential. The method is based on paracontrolled distributions and renormalization. After splitting the original nonlinear parabolic Anderson model equation into two simpler equations, we prove global existence via a priori estimates and smooth approximations. Furthermore, we prove uniqueness of the solution by classical energy estimates. This work improves the local well-posedness results of earlier works. In the third part, we investigate the variational problem associated with the elliptic Anderson model equation on a two-dimensional torus in the paracontrolled distribution framework. The energy functional in this variational problem arises from Anderson localization. We obtain the existence of minimizers by a direct method in the calculus of variations, and show that the Euler-Lagrange equation of the energy functional is an elliptic singular stochastic partial differential equation with the Anderson Hamiltonian. We further establish L2 estimates and Schauder estimates for the minimizer as a weak solution of the elliptic singular stochastic partial differential equation. We thereby uncover the natural connection between the variational problem and the singular stochastic partial differential equation in the paracontrolled distribution framework. Finally, we summarize our results and outline some research topics for future investigation.
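The systems in the first part have the form $dX_t = b(X_t)\,dt + dL^\alpha_t$ with an α-stable Lévy process $L^\alpha$. A minimal Euler-Maruyama simulation sketch; the double-well drift is illustrative, not a system from the thesis, and symmetric α-stable increments over a step dt scale as dt^(1/α) rather than sqrt(dt):

```python
# Euler-Maruyama for dX = b(X) dt + dL^alpha with symmetric alpha-stable
# noise; illustrative double-well drift b(x) = x - x^3.
import numpy as np
from scipy.stats import levy_stable

def simulate(alpha=1.5, x0=0.0, T=10.0, n=10_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n
    # alpha-stable increments scale as dt**(1/alpha), not sqrt(dt)
    dL = levy_stable.rvs(alpha, beta=0, scale=dt**(1 / alpha), size=n,
                         random_state=rng)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = x[i] + (x[i] - x[i]**3) * dt + dL[i]
    return x   # a path exhibiting heavy-tailed jumps between wells
```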
- Title
- Expanding the Magic Circle and the Self: Integrating Discursive Topics into Games
- Creator
- da Rosa Faller, Roberto
- Date
- 2020
- Description
- This study focuses on games for self-development and how they communicate ideas, challenge established assumptions, cause reflection, and provoke change. It explores the integration of discursive topics – specifically those perceived as difficult, political, philosophical, taboo, or controversial – into games, and how to manage player exposure to these topics through design, avoiding player disengagement, in order to achieve self-development goals. Using a Research Through Design approach, this study was conducted in two phases. The first, exploratory phase resulted in an analytical framework with four distinct lenses: engaging play experience; the player's emotional investment; the friction points of discursive topics; and controlled exposure to the topic. During the second phase, this framework was used to analyze eight case studies and three prototypes. The resulting insights revealed five categories – topic depiction, emotional climate, emotional anchors, topic delivery, and exposure timing – that form the Discursive Topic Integration Framework for self-development. This framework offers design scholars and practicing designers a new theoretical perspective on how to manipulate the "magic circle" (a safe, temporary space for the act of play) by intentionally designing for discursive topics and their friction points. It contributes strategies about when, how, how frequently, and with what intensity discursive topics may be introduced and abstracted in games. It frames the discursive topic, creates the emotional climate, and anchors the player inside the magic circle of the game so that they feel engaged, motivated, and curious without becoming overwhelmed. This study also generated two additional frameworks: the Self-Development Opportunity Matrix, which can be used to generate or evaluate self-development goals, and the Five Categories of Transitional and Traumatic Experiences, which can assist in the design of games and other experiences that build a person's capacity, self-determination, and commitment to positive change.
- Title
- Development of a novel ultra-nanocrystalline diamond (UNCD) based photocathode and exploration of its emission mechanisms
- Creator
- Chen, Gongxiaohui
- Date
- 2020
- Description
- High-quality electron sources are among the most commonly used probing tools for the study of materials. Photoemission cathodes, capable of producing ultra-short and ultra-high-intensity beams, are a key component of accelerator-based light sources and some microscopy tools. High quantum efficiency (QE), low intrinsic emittance, and long lifetime (or good vacuum tolerance) are three of the most critical features of a photocathode; however, these are difficult to achieve simultaneously, and trade-offs must be made for different applications. In this work, a novel semi-metallic material, nitrogen-incorporated ultrananocrystalline diamond ((N)UNCD), has been studied as a photocathode. (N)UNCD has many of diamond's unique properties, such as low intrinsic as-grown surface roughness (on the order of 10 nm) due to its nanometer-scale crystallite size, a relatively long lifetime in air, high electrical conductivity with nitrogen doping, and potentially high QE due to the high density of grain boundaries, where most of the electron emission occurs. High-contrast interference of incident and reflected radiation within (N)UNCD thin films was observed, a feature that allows fast thickness determination based on an analytical optics methodology. This method has been extended to study and calculate the etching rates of two commonly used O$_2$ and H$_2$ plasmas for use in future (N)UNCD microfabrication processes. The mean transverse energy (MTE) of (N)UNCD was determined over a wide UV range in a DC photogun. Unique MTE behavior was observed: unlike in most metals, it did not scale with photon energy. This behavior is associated with emission from spatially confined states in the graphite regions (with low electron effective mass) between the diamond grains, and it suggests that beam brightness may be increased by the simple mechanism of increasing the photon energy so that the QE increases while the MTE remains constant. Two individual (N)UNCD photocathodes synthesized two years apart were characterized in a realistic RF photogun, measuring both the QE and the intrinsic emittance. The QE of $\sim4.0\times 10^{-4}$ is more than an order of magnitude higher than that of most commonly used metal cathodes (such as Cu and Nb). The intrinsic emittance (0.997 $\mu$m/mm) is comparable to that of photocathodes now deployed in research accelerators. The most impressive feature is the excellent robustness of the (N)UNCD material: there was no evidence of performance degradation, even after years-long atmospheric exposure. The results of this work demonstrate that a cathode made of (N)UNCD material can achieve balanced performance across three of the primary critical photocathode figures of merit.
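Assuming the usual definition of intrinsic emittance per unit rms laser spot size, the two quoted figures are linked by a standard relation:

```latex
% Intrinsic (thermal) emittance per unit rms spot size vs. MTE;
% m_e c^2 = 511 keV.
\frac{\varepsilon_{n}}{\sigma_x} = \sqrt{\frac{\mathrm{MTE}}{m_e c^2}}
```

On this reading, 0.997 $\mu$m/mm corresponds to an MTE of roughly $(0.997\times10^{-3})^2 \times 511\,\mathrm{keV} \approx 0.5$ eV, though the thesis's own definitions should take precedence.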
- Title
- H1 LUBRICANT TRANSFER FROM A HYDRAULIC PISTON FILLER INTO A SEMI-SOLID FOOD SYSTEM
- Creator
- Chao, Pin-Chun
- Date
- 2020
- Description
- The machinery used to prepare and process food products needs grease and oil for the lubrication of machine parts. H1 (food-grade) lubricants commonly used in the food industry are regulated as indirect additives by the FDA because they may become components of food through transfer due to incidental contact between lubricants and foods. The maximum level of H1 lubricants currently permitted in foods is 10 ppm, a limit derived from FDA data gathered over 50 years ago. Although modern equipment has been designed to minimize the transfer of lubricants during processing and packaging, incidental food contact can still occur through leaks in lubrication systems or over-lubrication. However, the FDA lacks the data to evaluate whether safety issues related to chemical contamination should be addressed concerning the use of food-grade lubricants in the production of foods. This research was conducted to determine the transfer of an H1 lubricant (Petrol-Gel) into a semi-solid model food from a hydraulic piston filler under conventional operating conditions at 25°C and 50°C. Xanthan gum solutions with concentrations of 2.3% at 25°C and 1.9% at 50°C were used to simulate the viscosity of ketchup at 50°C (970 cP). Petrol-Gel H1 lubricant with a viscosity grade of 70 cSt at 40°C was selected, and the aluminum (Al) in the lubricant was targeted as a tracer metal. Analytical methods to quantify Al in both Petrol-Gel and xanthan gum solutions were successfully developed and validated using inductively coupled plasma mass spectrometry (ICP-MS) combined with a microwave-assisted acid digestion technique. The concentration of Al in the Petrol-Gel was determined to be 3103 ± 26 μg/g. A total of 1.35 g of Petrol-Gel was applied to four ring gaskets in the filler, and 50 g samples of xanthan gum solution were collected into 100-mL polypropylene tubes (DigiTubes) with low leachable metals over 500 filling cycles (the full capacity of the piston filler hopper). Results showed that the concentrations of Petrol-Gel transferred into the 2.3% xanthan gum solution at 25°C ranged from 1.6 to 63.5 μg/g. A total of 64.47 mg of the applied Petrol-Gel (1.35 g) was transferred into 25 liters of the solution. The average concentration of Petrol-Gel in the 2.3% xanthan gum solution was calculated to be 2.84 μg/g, lower than the current regulatory limit of 10 ppm. In general, the transfer of Petrol-Gel during the first 100 filling cycles was higher at 50°C than at 25°C. The concentration of Petrol-Gel transferred into the 1.9% xanthan gum solution at 50°C during the first 100 filling cycles ranged from 1.6 to 35.06 μg/g and was 6.37 μg/g on average. This research will help the FDA calculate more realistic limits for the H1 lubricants permissible in foods under modern food processing conditions, as well as estimate consumer dietary exposure to these indirect food additives.
- Title
- WASTEWATER COLLECTION SYSTEM MODELING: TOWARDS AN INTEGRATED URBAN WATER AND ENERGY NETWORK
- Creator
- Wang, Xiaolong
- Date
- 2020
- Description
- Wastewater collection systems, among the oldest features of urban infrastructure, are typically dedicated to collecting and transporting wastewater from users to water resource recovery facilities (WRRFs). Since the 1970s, wastewater engineers and scientists have come to understand that wastewater collection systems can bring benefits to urban water and energy networks, including thermal energy recovery and the conversion of pipelines into bioreactors. However, little is known about the temporal and spatial changes of the collection system parameters that are important for these applications. Furthermore, the vast majority of existing studies of these applications have focused on laboratory or extremely small-scale systems; there have been few studies of beneficial applications associated with large-scale systems. The purpose of this study is to increase our understanding of how urban wastewater collection systems can benefit urban water and energy systems. Models describing wastewater hydraulics, temperature, and water quality can provide valuable information to help evaluate the feasibility of thermal energy recovery and wastewater pretreatment. These kinds of models, and supporting data from a case study, were used in this study; the theoretical wastewater collection systems range in size from 2.6 L/s to 52 L/s, and the sample locations of the case study had flows ranging from 2.3 L/s to 24.5 L/s. A cost-benefit analysis of wastewater source heat pumps was used to evaluate the feasibility of thermal energy recovery for different sizes of wastewater collection systems. Results show that large collection systems can support a large-capacity heat pump system with a relatively low unit initial cost, while small collection systems have a slightly lower unit operating cost due to their relatively high wastewater temperature. When the heat pump system capacity was designed based on the average available energy from the collection system, larger systems had lower payback times; the lowest payback time is about 3.5 years. The wastewater quality model was used to describe changes in dissolved oxygen (DO) and organic matter concentrations in the collection system, providing a framework for predicting pretreatment capability. Model results show that DO concentration is the limiting parameter for organic matter removal. Larger collection systems can provide more organic matter removal because they provide relatively longer retention times and offer the potential for greater DO reaeration. The model can also be used to identify environmental conditions in sewer pipelines, providing information for the prediction of potential issues.
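The payback comparison above is a simple-payback calculation: initial capital cost divided by net annual savings. A trivial sketch with placeholder figures, not the study's cost data:

```python
# Simple payback for a wastewater-source heat pump; all numbers are
# illustrative placeholders, not the study's data.
def simple_payback_years(capital_cost: float,
                         annual_energy_savings: float,
                         annual_operating_cost: float = 0.0) -> float:
    return capital_cost / (annual_energy_savings - annual_operating_cost)

# e.g. a larger system: higher capital cost, but much larger net savings
print(simple_payback_years(700_000, 230_000, 30_000))  # -> 3.5 years
```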