Search results
(9,661 - 9,680 of 9,749)
- Title
- Aerial view of the Illinois Institute of Technology campus, Chicago, Illinois, 1957
- Date
- 1957
- Description
-
Aerial photograph of the south portion of the Illinois Institute of Technology campus, looking south. Photographer unknown.
- Collection
- IIT Campus Aerial photographs, 1940-2002
- Title
- Synthesis and Photophysical Characterization of Novel Organic Triplet Donor–Acceptor Dyads for Light-Harvesting/Modulation Application
- Creator
- Yun, Young Ju
- Date
- 2022
- Description
-
Donor–acceptor chromophoric systems (D–A) are important scaffolds for several light-harvesting/initiated processes and devices, including light-emitting diodes, photo-catalytic/redox systems, and photovoltaic cells. It has been hypothesized that for efficient photophysical processes (viz. energy/charge transfer or excited-state interactions), it is ideal to tether the donor and acceptor chromophores into molecular dyads. To this end, I devised and synthesized several dyads by tethering an organic triplet energy donor and various polyaromatic chromophores (e.g., perylene and anthracene derivatives) onto a conjugated or non-conjugated linker (a phenylene or triptycene linker, respectively). During the 4-5 years of my Ph.D., I synthesized a total of five (5) dyads: o–, p–3, and dyads 3–5. These systems were fully characterized using different spectroscopic tools and techniques. The spectroscopic investigations of the dyads allowed me to decipher two important energy transfer pathways: through-bond and through-space with the phenylene linker, and only through-space energy transfer with the triptycene linker. Furthermore, the investigations led to the discovery that geometrical features such as face-to-face (co-facial) or slip-stacked interactions between the donor and acceptor chromophores might dictate the dynamics and kinetics of light-induced energy transfer in the dyads. Findings from my graduate research project paved the way for developing molecular engineering studies for light-harvesting/modulation applications. Subsequently, I was able to employ the dyads of my interest to achieve intramolecular and intermolecular triplet energy transfer (TEnT) and triplet–triplet annihilation-based photon upconversion (TTA-PUC).
- Title
- DEFAULT RISK AND MOMENTUM PREMIUM
- Creator
- Zhang, Yi
- Date
- 2022
- Description
-
Birge and Zhang (2018) reported that combining common factor models with functions of default risk improves the models' ability to explain stock returns. Default risk contains firm-specific information and may help to explain the momentum premium that compensates investors for firm-specific risk exposures. In this paper, we confirmed that the forward-looking measure of default risk proposed by Birge and Zhang (2018) appears to capture some pricing information in the momentum premium. This provides an alternative explanation of the underlying risks associated with the momentum strategy.
- Title
- INTEGRATED DECISION SUPPORT SYSTEM FOR THE SELECTION AND IMPLEMENTATION OF DELAY ANALYSIS IN CONSTRUCTION PROJECTS
- Creator
- Yang, Juneseok
- Date
- 2022
- Description
-
The goal of this study is to establish an objective, user-friendly, and reliable decision support system, called the delay analysis selection and implementation system (DASIS), which allows delay analysts and practitioners in the construction industry to select the type of delay analysis that is most appropriate for given conditions and to perform the selected type of delay analysis. DASIS integrates a delay analysis selection system (DASS) module and an implementation module (DAIS) that performs the type of delay analysis selected by DASS in construction projects. The model that operates the DASS module consists of (1) four different delay analysis approaches currently available to practitioners; (2) a set of 26 attributes that affect the selection of a type of delay analysis; (3) a case-base of 3,776 cases described by these 26 attributes and their corresponding output values (i.e., the most appropriate delay analysis approach); (4) a set of 7 categories consisting of subsets of attributes; (5) the weights of the attributes and the categories; and (6) a spreadsheet designed in Microsoft Excel that performs the calculations involved in case-based similarity assessment. The implementation module is a computerized analytics and automation platform that performs the type of delay analysis selected by DASS. In developing the DASS module, 26 attributes that influence the selection of the most appropriate type of delay analysis were identified based on a thorough literature review and were organized into seven categories. These attributes were used to evaluate the four types of delay analysis (i.e., static, dynamic, additive, and subtractive analyses). Based on the results of this evaluation, a case-base of 3,776 cases was generated while considering the constraints of each category. The weights of the attributes and categories were determined by using several methods. To determine the best fit between a target case (defined by its 26 attributes) and the 3,776 cases stored in the case-base, a case-based similarity assessment is performed to calculate weighted case similarity scores and to find the best-informed solution to the delay analysis type selection problem. In developing the DAIS module, the four types of delay analysis were coded in Microsoft Excel using macros programmed in Visual Basic for Applications (VBA). This automated tool performs the delay analysis selected by DASS. The fully integrated DASIS model finds the best-fit match between a target case and the cases stored in the case-base by means of similarity assessment methods using weighted case similarity scores, thereby identifying the most appropriate type of delay analysis for the target case; it then performs the selected type of delay analysis and instantly generates a report of the results for the analyst, allowing the contractual parties to settle the issues quickly. This study is the first attempt to establish an objective decision support system (DASS) to assist delay analysts by automating the selection of a type of delay analysis using combinations of well-recognized and reliable attributes and similarity assessment techniques. In addition, DASS is immediately followed by DAIS in an integrated system (DASIS) that not only selects the most appropriate type of delay analysis but also implements it, providing ease of use and high speed.
A case study based on fictitious scenarios is presented to demonstrate and validate the research approach. The use of the entropy weight method to calculate the weights of the attributes can be considered a minor limitation of the study. Finally, DASIS can be reformulated as a web-based application that allows analysts to work online using ordinary browsers anywhere and anytime.
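As a rough illustration of the weighted case-based similarity assessment described in this abstract, the sketch below scores a target case against a small hypothetical case-base; the attribute names, weights, and matching rule are placeholders, not the actual DASS attribute set or weighting scheme.

```python
# Minimal sketch of a weighted case-based similarity assessment (illustrative
# only; attributes, weights, and case-base entries are hypothetical).
from typing import Dict, List, Tuple

def case_similarity(target: Dict[str, str],
                    case: Dict[str, str],
                    weights: Dict[str, float]) -> float:
    """Weighted fraction of attributes on which the target and stored case agree."""
    total = sum(weights.values())
    matched = sum(w for attr, w in weights.items()
                  if target.get(attr) == case.get(attr))
    return matched / total if total else 0.0

# Each stored case: (attribute values, most appropriate delay analysis approach).
case_base: List[Tuple[Dict[str, str], str]] = [
    ({"schedule_updates": "available", "concurrent_delays": "yes"}, "dynamic"),
    ({"schedule_updates": "unavailable", "concurrent_delays": "no"}, "static"),
]
weights = {"schedule_updates": 0.6, "concurrent_delays": 0.4}

target = {"schedule_updates": "available", "concurrent_delays": "no"}
best, score = max(((c, case_similarity(target, c[0], weights)) for c in case_base),
                  key=lambda pair: pair[1])
print(f"recommended approach: {best[1]} (weighted similarity {score:.2f})")
```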
- Title
- MEASUREMENT OF ELECTRON NEUTRINO AND ANTINEUTRINO APPEARANCE WITH THE NOνA EXPERIMENT
- Creator
- Yu, Shiqi
- Date
- 2020
- Description
-
As a long-baseline neutrino oscillation experiment, the NuMI Off-axis $\nu_e$ Appearance (NOvA) experiment aims at studying neutrino physics by measuring neutrino oscillation parameters using the neutrino flux from the Main Injector (NuMI) beam. It has two functionally identical detectors. The near detector is onsite at Fermi National Accelerator Laboratory. The far detector is 810 km away from the source of neutrinos and antineutrinos, at Ash River, Minnesota. At the near detector, muon neutrinos or antineutrinos, before significant oscillations take place, are used to correct the Monte Carlo simulation. At the far detector, the neutrino and antineutrino fluxes after significant oscillations have happened are measured and analyzed to study neutrino oscillation. The NOvA experiment is sensitive to the values of $\sin^2\theta_{23}$, $\Delta m^2_{32}$, and $\delta_{CP}$. The latest values from the NOvA 2020 analysis are as follows: $\sin^2\theta_{23}=0.57^{+0.03}_{-0.04}$, $\Delta m^2_{32}=(2.41\pm0.07)\times10^{-3}$ eV$^2$/c$^4$, and $\delta_{CP}=0.82\pi$ with a wide 1$\sigma$ interval of uncertainty. My study focuses on the neutrino oscillation analysis with NOvA, including detector light-model tuning, particle classification with a convolutional neural network, electron neutrino and antineutrino energy reconstruction, and oscillation background estimation. Most of my studies have been used in the latest NOvA publication and the NOvA 2020 analysis.
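For orientation, the parameters reported above enter the standard two-flavor approximation of the muon (anti)neutrino survival probability shown below; this is a textbook form, not a formula quoted from the thesis, and the full NOvA analysis uses the complete three-flavor framework.

```latex
% Two-flavor approximation (illustrative only); L is the 810 km baseline
% and E the neutrino energy. Electron (anti)neutrino appearance depends
% additionally on \theta_{13} and the CP-violating phase \delta_{CP}.
P(\nu_\mu \to \nu_\mu) \approx 1 - \sin^2\!\left(2\theta_{23}\right)\,
    \sin^2\!\left(\frac{\Delta m^2_{32}\, L}{4E}\right)
```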
- Title
- Towards Trustworthy Multiagent and Machine Learning Systems
- Creator
- Xie, Shangyu
- Date
- 2022
- Description
-
This dissertation aims to systematically investigate trustworthy multiagent and machine learning systems in the context of the Internet of Things (IoT), which mainly consists of two aspects: data privacy and robustness. Specifically, data privacy concerns the protection of the data in a given system, i.e., data identified as sensitive or private cannot be disclosed directly to others; robustness refers to the ability of the system to defend against or mitigate potential attacks and threats, i.e., to maintain the stable and normal operation of the system. Starting from the smart grid, a representative multiagent system in the IoT, I present two works on improving data privacy and robustness in different applications, load balancing and energy trading, which integrate secure multiparty computation (SMC) protocols into normal computation to ensure data privacy. More significantly, the schemes can be readily extended to other IoT applications, e.g., connected vehicles and mobile sensing systems. For machine learning, I have studied two main areas, computer vision and natural language processing, with a focus on privacy and robustness. I first present a comprehensive robustness evaluation of DNN-based video recognition systems with two novel attacks in both the test and training phases, i.e., adversarial and poisoning attacks. I also propose adaptive defenses to fully evaluate these two attacks, which can further advance the robustness of the system. I then propose a privacy evaluation for language systems and show how to reveal and address privacy risks in language models. Finally, I demonstrate a private and efficient data computation framework based on cloud computing technology to provide more robust and private IoT systems.
- Title
- Deep Learning Methods For Wireless Networks Optimization
- Creator
- Zhang, Shuai
- Date
- 2022
- Description
-
The resurgence of deep learning techniques has brought forth fundamental changes to how hard problems can be solved. It used to be held that solutions to complex wireless network problems require accurate mathematical modeling of the network operation, but the success of deep learning has shown that a data-driven method can generate powerful and useful representations such that the problem can be solved efficiently with surprisingly competent performance. Network researchers have recognized this and started to capitalize on the learning methods' prowess, but most works follow existing black-box learning paradigms without much accommodation to the nature and essence of the underlying network problems. This thesis focuses on a particular type of classical problem: multiple commodity flow scheduling in an interference-limited environment. Though it does not permit efficient exact algorithms due to its NP-hard complexity, we use it as an entry point to demonstrate, from three angles, how learning-based methods can help improve network performance. In the first part, we leverage graph neural network (GNN) techniques and propose a two-stage topology-aware machine learning framework, which trains a graph embedding unit and a link usage prediction module jointly to discover links that are likely to be used in optimal scheduling. The second part of the thesis is an attempt to find a learning method that has a closer algorithmic affinity to the traditional DCG method. We make use of reinforcement learning to incrementally generate a better partial solution such that a high-quality solution may be found in a more efficient manner. In the third part of the research, we revisit the MCF problem from a novel viewpoint: instead of leaning on neural networks to directly generate good solutions, we use them to associate the current problem instance with historical ones that are similar in structure. These matched instances' solutions offer a highly useful starting point that allows efficient discovery of the new instance's solution.
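A minimal sketch of the two-stage idea named in the first part (a graph embedding stage followed by per-link usage prediction) is given below; the toy graph, feature dimensions, and single message-passing layer are illustrative assumptions, not the framework developed in the thesis.

```python
# Minimal sketch of "embed nodes, then score links" on a toy interference graph.
# All names, shapes, and the single graph-convolution layer are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy adjacency matrix and per-node features (e.g., queue length, channel
# quality) -- hypothetical inputs for illustration only.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))          # 4 nodes, 3 features each

# Stage 1: one graph-convolution-style embedding layer,
#   H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W).
A_hat = A + np.eye(4)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
W = rng.normal(size=(3, 8))          # learnable weights (random here)
H = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# Stage 2: score each link (u, v) with a logistic head on the concatenated
# endpoint embeddings; high scores would flag links likely to appear in an
# optimal schedule.
w_link = rng.normal(size=(16,))

def link_usage_score(u: int, v: int) -> float:
    z = np.concatenate([H[u], H[v]])
    return 1.0 / (1.0 + np.exp(-z @ w_link))

for u, v in [(0, 1), (1, 3), (2, 3)]:
    print(f"link ({u},{v}) predicted usage probability: {link_usage_score(u, v):.3f}")
```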
- Title
- Essays on Clean Energy Finance and Cryptocurrency Market
- Creator
- Xie, Yao
- Date
- 2021
- Description
-
This dissertation includes four essays with several empirical investigations in the areas of clean energy finance and cryptocurrencies. In the first essay, I investigate the heterogeneous relationship between various determinants of the clean energy market across all subsectors of the clean energy stock market. My findings reveal that the VIX is the most significant predictor of conditional volatility across all clean energy subsectors. During the COVID-19 stress period, economic uncertainty measures become more significant. The heterogeneity of the clean energy market persists in the out-of-sample results. These results suggest that portfolio diversification across different clean energy subsectors is necessary. In the second essay, I study the safe haven property of several volatility indexes for clean energy subsectors, comparing the COVID-19 stress period with the time before. The results show that market volatility and commodity volatility are good safe haven assets during the COVID-19 period, but they are not safe haven assets for the clean energy subsectors before the pandemic. Among all volatility indexes, the gold volatility index is the most effective safe haven asset. In the third essay, I investigate the characteristics of Bitcoin as a financial asset using a comprehensive set of information variables under five categories: macroeconomics, blockchain technology, other markets, stress level, and investor sentiment. The empirical results show that blockchain technology, stress level, and investor sentiment have strong predictive power for Bitcoin returns. In the fourth essay, I study how extreme sentiment measures from Google Trends and Wikipedia pageviews affect both a traditional cryptocurrency, Bitcoin, and a stablecoin, Tether. The results show that Tether's return is not affected by the extreme sentiment measures during the COVID-19 stress period, which suggests that stablecoins can offer price stability.
- Title
- DEEP LEARNING AND COMPUTER VISION FOR INDUSTRIAL APPLICATIONS: CELLULAR MICROSCOPIC IMAGE ANALYSIS AND ULTRASOUND NONDESTRUCTIVE TESTING
- Creator
- Yuan, Yu
- Date
- 2022
- Description
-
For decades, researchers have sought to develop artificial intelligence (AI) systems that can help human beings with decision making, data analysis, and pattern recognition in applications where analytical methods are ineffective. In recent years, Deep Learning (DL) has been proven to be an effective AI technique that can outperform other methods in applications such as computer vision, natural language processing, and autonomous driving. Realizing the potential of deep learning techniques, researchers have also started to apply deep learning to other industrial applications. Today, deep learning based models are used to innovate and accelerate automation, guidance, and decision making in various industries, including the automotive industry, the pharmaceutical industry, finance, agriculture, and more. In this research, several important industrial applications (in biomedicine and non-destructive testing) utilizing deep learning algorithms are introduced and analyzed. The first biopharmaceutical application focuses on developing a deep learning based model to automate the visual inspection process in the Median Tissue Culture Infectious Dose (TCID50) assay. TCID50 is one of the most popular methods for viral quantification. An important step of TCID50 is to visually inspect the sample and decide whether or not it exhibits a cytopathic effect (CPE). Two novel models have been developed to detect CPE in microscopic images of cell culture in 96-well plates. The first model consists of a convolutional neural network (CNN) and a support vector machine (SVM). The second model is a fully convolutional network (FCN) followed by morphological post-processing steps. The models are tested on 4 cell lines and achieve very high accuracy. Another biopharmaceutical application developed for cellular microscopic images is clonal selection. Clonal selection is one of the mandatory steps in the cell line development process; it focuses on verifying the clonality of the cell culture. Researchers used to verify clonality by visually inspecting the microscopic images. In this work, a novel deep learning based model and workflow are developed to accelerate the process. The algorithm consists of multiple steps, including image analysis after incubation to detect the cell colonies and verification of clonality in the day-0 image. The results and common misclassification cases are shown in this thesis. Image analysis methods are not the only technology that has been advancing cellular image analysis in the biopharmaceutical industry. A new class of instruments is currently used in the biopharmaceutical industry, enabling more opportunities for image analysis. To make the most of these new instruments, a convolutional neural network based architecture is used to perform accurate cell counting and cell morphology based segmentation. This analysis can provide more insight into the cells at a very early stage of the characterization process in cell line development. The architecture and the testing results are presented in this work. The proposed algorithm has achieved very high accuracy on both applications, and the cell morphology based segmentation enables a brand new feature for scientists to predict the potential productivity of the cells. The next part of this dissertation is focused on hardware implementations of deep learning based Ultrasonic Non-Destructive Testing (NDT) methods, which can be highly useful in flaw detection and classification applications.
With the help of a smart and mobile Non-Destructive Testing device, engineers can accurately detect and locate flaws inside materials without reliance on high-performance computation resources. The first NDT application presents a hardware implementation of a deep learning algorithm on a field-programmable gate array (FPGA) for Ultrasound flaw detection. The Ultrasound flaw detection algorithm consists of a wavelet transform followed by a LeNet-inspired convolutional neural network called Ultra-LeNet. This work is focused on implementing the computationally difficult part of this algorithm, Ultra-LeNet, so that it can be used in the field where high-performance computation resources (e.g., AWS) are not accessible. The implementation uses resource partitioning to design two dedicated pipelined accelerators for convolutional layers and fully connected layers, respectively. Both accelerators utilize loop unrolling, loop pipelining, and batch processing techniques to maximize throughput. Comparison to other work has shown that the implementation achieves higher hardware utilization efficiency. The second NDT application is also focused on implementing a deep learning based algorithm for Ultrasound flaw detection on an FPGA. Instead of Ultra-LeNet, the deep learning model used in this application is a meta-learning based Siamese network, which is capable of multi-class classification and can also classify a new class even if it does not appear in the training dataset, with the help of automated learning features. The hardware implementation is significantly different from that of the previous algorithm. In order to improve inference efficiency, the model is compressed with both pruning and quantization, and the FPGA implementation is specifically designed to accelerate the compressed CNN with high efficiency. The CNN model compression method and hardware design are novel methods introduced in this work. Comparison against other compressed CNN accelerators is also presented.
- Title
- MARKETABLE LIMIT ORDERS AND NON-MARKETABLE LIMIT ORDERS ON NASDAQ
- Creator
- ZHANG, DAN
- Date
- 2022
- Description
-
My research includes two parts. In the first part, I classify marketable limit orders into three types: large marketable orders to buy (LMOB), large marketable orders to sell (LMOS), and small marketable orders. I use a dummy variable method to study the effect of the three marketable order types on standardized variance and find that LMOB and LMOS play a significant role in increasing variance. The second part of my research concerns modelling the time to execution and time to cancellation of non-marketable limit orders (NLOs). I construct variables and model the time to execution for NLOs to buy and the time to cancellation for NLOs to buy and to sell, based on an exponential distribution with an accelerated failure time specification. My research shows that the farther the buy limit price is from the best bid price, the longer the time to execution. Likewise, the farther the buy limit price is from the best bid price, or the sell limit price from the best ask price, the longer the time to cancellation.
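For reference, a generic exponential accelerated failure time specification of the kind mentioned above can be written as follows; the covariate vector x and coefficients β are placeholders rather than the exact variables constructed in this research.

```latex
% Exponential accelerated failure time (AFT) model -- illustrative form only.
% T is the time to execution or cancellation and x the covariates:
\log T = x^{\top}\beta + \varepsilon, \qquad e^{\varepsilon} \sim \mathrm{Exp}(1),
% so the survival function is
S(t \mid x) = \Pr(T > t \mid x) = \exp\!\left(-t\, e^{-x^{\top}\beta}\right).
% A positive coefficient on the distance from the best bid (or ask) price
% lengthens the expected time, consistent with the findings stated above.
```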
- Title
- Stochastic dynamical systems with non-Gaussian and singular noises
- Creator
- Zhang, Qi
- Date
- 2022
- Description
-
In order to describe stochastic fluctuations or random potentials arising in science and engineering, non-Gaussian or singular noises are introduced into stochastic dynamical systems. In this thesis we investigate stochastic differential equations with non-Gaussian Lévy noise, and the singular two-dimensional Anderson model equation with a spatial white noise potential. The thesis consists of three main parts. In the first part, we establish a linear response theory for stochastic differential equations driven by an α-stable Lévy noise (1<α<2). We first prove the ergodic property of the stochastic differential equation and the regularity of the corresponding stationary Fokker-Planck equation, and then establish the linear response theory. This result is a general fluctuation-dissipation relation between the response of the system to external perturbations and the Lévy-type fluctuations at a steady state. In the second part, we study the global well-posedness of the singular nonlinear parabolic Anderson model equation on a two-dimensional torus. This equation can be viewed as the nonlinear heat equation with a random potential. The method is based on paracontrolled distributions and renormalization. After splitting the original nonlinear parabolic Anderson model equation into two simpler equations, we prove global existence by a priori estimates and smooth approximations. Furthermore, we prove uniqueness of the solution by classical energy estimates. This work improves the local well-posedness results of earlier works. In the third part, we investigate the variational problem associated with the elliptic Anderson model equation on a two-dimensional torus in the paracontrolled distribution framework. The energy functional in this variational problem arises from Anderson localization. We obtain the existence of minimizers by a direct method in the calculus of variations, and show that the Euler-Lagrange equation of the energy functional is an elliptic singular stochastic partial differential equation with the Anderson Hamiltonian. We further establish L2 estimates and Schauder estimates for the minimizer as a weak solution of the elliptic singular stochastic partial differential equation. We thereby uncover the natural connection between the variational problem and the singular stochastic partial differential equation in the paracontrolled distribution framework. Finally, we summarize our results and outline some research topics for future investigation.
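As a point of reference, the two classes of equations described above are commonly written in the following generic forms; these are standard textbook shapes, not the precise equations analyzed in the thesis.

```latex
% Illustrative generic forms only.
% SDE driven by an alpha-stable Levy process L^{\alpha}_t with 1 < \alpha < 2:
dX_t = b(X_t)\,dt + dL^{\alpha}_t .
% (Linear) parabolic Anderson model on the torus \mathbb{T}^2 with spatial
% white noise potential \xi; the thesis treats a nonlinear variant:
\partial_t u = \Delta u + u\,\xi, \qquad u(0,\cdot) = u_0 .
```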
- Title
- Expanding the Magic Circle and the Self: Integrating Discursive Topics into Games
- Creator
- da Rosa Faller, Roberto
- Date
- 2020
- Description
-
This study focuses on games for self-development and how they communicate ideas, challenge established assumptions, cause reflection, and provoke change. It explores the integration of discursive topics – specifically those perceived as difficult, political, philosophical, taboo, or controversial – into games, and how to manage player exposure to these topics through design while avoiding player disengagement, in order to achieve self-development goals. Using a Research Through Design approach, this study was conducted in two phases. The first, exploratory phase resulted in an analytical framework with four distinct lenses: engaging play experience; the player's emotional investment; the friction points of discursive topics; and controlled exposure to the topic. During the second phase, this framework was used to analyze eight case studies and three prototypes. The resulting insights revealed five categories – topic depiction, emotional climate, emotional anchors, topic delivery, and exposure timing – that form the Discursive Topic Integration Framework for self-development. This framework offers a new theoretical perspective for design scholars and practicing designers on how to manipulate the "magic circle" (a safe temporary space for the act of play) by intentionally designing for discursive topics and their friction points. It contributes strategies about when, how, how frequently, and with what intensity discursive topics may be introduced and abstracted in games. It frames the discursive topic, creates the emotional climate, and anchors the player inside the magic circle of the game so that they feel engaged, motivated, and curious without becoming overwhelmed. This study also generated two additional frameworks: the Self-Development Opportunity Matrix, which can be used to generate or evaluate self-development goals; and the Five Categories of Transitional and Traumatic Experiences, which can assist in the design of games and other experiences that build a person's capacity, self-determination, and commitment to positive change.
- Title
- Development of a novel ultra-nanocrystalline diamond (UNCD) based photocathode and exploration of its emission mechanisms
- Creator
- Chen, Gongxiaohui
- Date
- 2020
- Description
-
High quality electron sources are among the most commonly used probing tools for the study of materials. Photoemission cathodes, capable of producing ultra-short and ultra-high intensity beams, are a key component of accelerator based light sources and some microscopy tools. High quantum efficiency (QE), low intrinsic emittance, and long lifetime (or good vacuum tolerance) are three of the most critical features for a photocathode; however, these are difficult to achieve simultaneously and trade-offs need to be made for different applications. In this work, a novel semi-metallic material, nitrogen-incorporated ultrananocrystalline diamond ((N)UNCD), has been studied as a photocathode. (N)UNCD has many of the unique diamond properties, such as low intrinsic as-grown surface roughness (on the order of 10 nm) due to its nanometer-scale crystallite size, relatively long lifetime in air, high electrical conductivity with nitrogen doping, and potentially high QE performance due to the high grain boundary density where most of the electron emission occurs. High-contrast interference of incident and reflected radiation within (N)UNCD thin films was observed, and this feature allows fast thickness determination based on an analytical optics methodology. The method has been extended to study and calculate the etching rates of two commonly used plasmas, O$_2$ and H$_2$, for use in future (N)UNCD microfabrication processes. The mean transverse energy (MTE) of (N)UNCD was determined over a wide UV range in a DC photogun. Unique MTE behavior was observed; it did not scale with photon energy, unlike most metals. This behavior is associated with emission from spatially-confined states in the graphite regions (with low electron effective mass) between the diamond grains. Such behavior suggests that beam brightness may be increased by the simple mechanism of increasing the photon energy so that the QE increases while the MTE remains constant. Two individual (N)UNCD photocathodes synthesized two years apart have been characterized in a realistic RF photogun. Both the QE and intrinsic emittance were characterized. It was found that the QE, $\sim4.0\times 10^{-4}$, is more than an order of magnitude higher than that of most commonly used metal cathodes (such as Cu and Nb). The intrinsic emittance (0.997 $\mu$m/mm) is comparable to that of photocathodes now deployed in research accelerators. The most impressive feature is the excellent robustness of the (N)UNCD material; there was no evidence of performance degradation, even after years-long atmospheric exposure. The results of this work demonstrate that a cathode made of (N)UNCD material is able to achieve balanced performance on three of the primary critical photocathode figures of merit.
- Title
- H1 LUBRICANT TRANSFER FROM A HYDRAULIC PISTON FILLER INTO A SEMI-SOLID FOOD SYSTEM
- Creator
- Chao, Pin-Chun
- Date
- 2020
- Description
-
The machinery used to prepare and process food products needs grease and oil for the lubrication of machine parts. H1 (food-grade) lubricants commonly used in the food industry are regulated as indirect additives by the FDA because they may become components of food through transfer due to incidental contact between lubricants and foods. The maximum level of H1 lubricants currently permitted in foods is 10 ppm, which was derived from FDA data gathered over 50 years ago. Although modern equipment has been designed to minimize the transfer of lubricants during processing and packaging, incidental food contact can still occur as a result of leaks in lubrication systems or over-lubrication. However, there is a lack of data for the FDA to evaluate and determine whether chemical-contamination safety issues should be addressed concerning the use of food-grade lubricants in the production of foods. This research was conducted to determine the transfer of an H1 lubricant (Petrol-Gel) into a semi-solid model food from a hydraulic piston filler during conventional operating conditions at 25°C and 50°C. Xanthan gum solutions with concentrations of 2.3% at 25°C and 1.9% at 50°C were used to simulate the viscosity of ketchup at 50°C (970 cP). Petrol-Gel H1 lubricant with a viscosity grade of 70 cSt at 40°C was selected, and the aluminum (Al) in the lubricant was targeted as a tracer metal. Analytical methods to quantify Al in both Petrol-Gel and xanthan gum solutions were successfully developed and validated using inductively coupled plasma–mass spectrometry (ICP-MS) combined with a microwave-assisted acid digestion technique. The concentration of Al in the Petrol-Gel was determined to be 3103 ± 26 μg/g. A total of 1.35 g of Petrol-Gel was applied to four ring gaskets in the filler, and 50 g samples of xanthan gum solution were collected into a 100-mL polypropylene tube (DigiTube) with low leachable metals during 500 filling cycles (the full capacity of the piston filler hopper). Results showed that the concentrations of Petrol-Gel transferred into the 2.3% xanthan gum solution at 25°C ranged from 1.6 to 63.5 μg/g. A total of 64.47 mg of the applied Petrol-Gel (1.35 g) was transferred into 25 liters of the solution. The average concentration of Petrol-Gel in the 2.3% xanthan gum solution was calculated to be 2.84 μg/g, which is lower than the current regulatory limit of 10 ppm. In general, the transfer of Petrol-Gel during the first 100 filling cycles was higher at 50°C than at 25°C. The concentration of Petrol-Gel transferred into the 1.9% xanthan gum solution at 50°C during the first 100 filling cycles ranged from 1.6 to 35.06 μg/g and was 6.37 μg/g on average. This research will help the FDA to calculate more realistic limits for the H1 lubricants permissible in foods under modern food processing conditions, as well as to estimate consumer dietary exposure to these indirect food additives.
- Title
- WASTEWATER COLLECTION SYSTEM MODELING: TOWARDS AN INTEGRATED URBAN WATER AND ENERGY NETWORK
- Creator
- Wang, Xiaolong
- Date
- 2020
- Description
-
Wastewater collection systems, among the oldest features of urban infrastructure, are typically dedicated to collecting and transporting wastewater from users to water resource recovery facilities (WRRFs). Since the 1970s, wastewater engineers and scientists have come to understand that wastewater collection systems can bring benefits to urban water and energy networks, including thermal energy recovery and converting pipelines into bioreactors. However, there is little knowledge about the temporal and spatial changes of collection system parameters that are important for these applications. Furthermore, the vast majority of existing studies of these applications have focused on laboratory or extremely small-scale systems; there have been few studies of beneficial applications associated with large-scale systems. The purpose of this study is to increase our understanding of how urban wastewater collection systems can bring potential benefits to urban water and energy systems. Models describing wastewater hydraulics, temperature, and water quality can provide valuable information to help evaluate thermal energy recovery and wastewater pretreatment feasibility. These kinds of models, and supporting data from a case study, were used in this study; the sizes of the theoretical wastewater collection systems range from 2.6 L/s to 52 L/s, and the sample locations of the case study had flows ranging from 2.3 L/s to 24.5 L/s. A cost-benefit analysis of wastewater source heat pumps was used to evaluate the feasibility of thermal energy recovery for different sizes of wastewater collection systems. Results show that a large collection system can support a large-capacity heat pump system with a relatively low unit initial cost. Small collection systems have a slightly lower unit operating cost due to the relatively high wastewater temperature. When the heat pump system capacity was designed based on the average available energy from the collection system, larger systems had lower payback times; the lowest payback time is about 3.5 years. The wastewater quality model was used to describe changes in dissolved oxygen (DO) and organic matter concentrations in the collection system. The model provides a framework for predicting pretreatment capability. Model results show that DO concentration is the limiting parameter for organic matter removal. Larger collection systems can provide more organic matter removal because they provide relatively longer retention times and offer greater potential for DO reaeration. The model can also be used to identify environmental conditions in sewer pipelines, providing information for predicting potential issues.
- Title
- COMPREHENSIVE ANALYSIS OF EXON SKIPPING EDITS WITHIN DYSTROPHIN D20:24 REGIONS
- Creator
- Niu, Xin
- Date
- 2020
- Description
-
Exon skipping is a disease-modifying therapy that operates at the RNA level. In this strategy, oligonucleotide analog drugs are used to mask specific exons and prevent them from being included in the mature mRNA. Exon skipping can also be used to restore protein expression in cases where a genetic frameshift mutation has occurred, and this is how it is applied to Duchenne muscular dystrophy (DMD). DMD most commonly arises as a result of large exonic deletions that juxtapose flanking exons of incompatible reading frame, which abolishes dystrophin protein expression. This loss leads to the pathology of the disease, which is severe, generally causing death in the second or third decade of life. Here, the primary aim of exon skipping is to restore the reading frame by skipping an exon adjacent to the patient's original deletion. While restoring some protein expression is good, how removing a region from the middle of the protein affects its structure and function is unclear. Complicating this is that the dystrophin gene is very large, containing 79 exons. Many different underlying deletions are known, and exon skipping can be applied in many ways. It has previously been shown that many exon-skip edits result in structural perturbations of varying degrees. Very few studies have focused on the biophysics of the edited protein, and it remains unclear whether and how such editing can be done so as to minimize such perturbations. To provide solid evidence of the significant variation among these cases (especially the clinically relevant ones) and to better understand the general principles of "what makes a good edit", we examine a systematic and comprehensive panel of possible exon edits in a region of the dystrophin protein. The D20:24 domains of the dystrophin rod region were selected because they form a complete unit bounded by a hinge region (mostly random-coil structure), so that the addition of other STRs will not disrupt structural stability. The D20:24 region also lies in hot spot region II (HS2), which accounts for the largest number of DMD patients. In this comprehensive scan, we identify, for the first time, exon edits that appear to maintain structural stability similar to the wild-type protein, as well as the clinically relevant edits. We then identify factors that appear to be correlated with the degree of structural perturbation, such as the number of cooperative protein domains and how the edited exon structure interacts with the protein domain structure. Our study is the first systematic and comprehensive scan of an entire multi-STR domain region. This helps us understand the protein-level nature of various exon skipping edits and provides useful targets for clinical treatment. The knowledge gained may also be applied to produce more sophisticated CRISPR edits in future work.
- Title
- DEVELOPMENT OF FULLY BIOCOMPATIBLE HYDROGEL NANOPARTICLE FORMULATIONS FOR CONTROLLED-RELEASE DELIVERY OF A WIDE VARIETY OF BIOMOLECULES
- Creator
- Borges, Fernando Tancredo Pereira
- Date
- 2020
- Description
-
In recent years, our group has focused on the production of PEGDA-based hydrogel scaffolds and nanoparticles for drug delivery of small molecules. However, with recent advances in modern therapeutic treatments, such as protein and genetic engineering, there is an increasing need for the development of drug delivery devices that are able to encapsulate larger molecules. Therefore, the goal of this thesis work was to develop a systematic way to produce fully biocompatible PEGDA-based hydrogel nanoparticle formulations that can encapsulate molecules of any size, ranging from small ionic molecules, to peptides and proteins, all the way to large nucleic acids, and deliver them in a controlled manner. The first part of this work consisted of developing a stable and reproducible process for the production of hydrogel PPi-NPs. Initial studies were done to assess the influence of phosphate salts on the polymerization system, and it was found that both monophosphate and polyphosphate salts significantly dampen the NVP homo-polymerization kinetics but do not affect the co-polymerization of NVP and PEGDA. Then, emulsion stability studies were done to determine whether phosphate salts affected the stability of the miniemulsion system used in the production of the nanoparticles. Cloud point measurements and droplet size screening measurements showed that by transitioning from a Pi-loaded emulsion system to a PPi-loaded emulsion system, the required HLB of the emulsion shifts by 1.5 points. Upon correction for that shift, a reproducible process for production of PPi-loaded nanoparticles was obtained. A parametric study was then performed to see how the different process parameters affected the properties of the produced particles. The second part of the work consisted of developing a platform for encapsulation of large to very large molecules within these hydrogel systems. A new set of equations was developed for better estimation of the interstitial space, available for encapsulation of molecules, of crosslinked polymers that use very high molecular weight crosslinkers and/or high amounts of crosslinker. Upon development of this new set of equations, hydrogel discs were made via photopolymerization in order to validate the equations. By introducing a third monomer, EGA, and varying the molecular weight and concentration of the crosslinker, hydrogels with a wide range of mesh dimensions from 25 to 700 were achieved. These gels were then used to encapsulate 4 different sample molecules of varying molecular weights and sizes. A new heuristic was developed for encapsulation of non-spherical molecules, in which the aspect ratios of the molecule and of the polymer network are considered. By varying the ratios of the dimensions of the hydrogel network to the dimensions of the molecule, significantly different release profiles of small molecules, peptides, and oligonucleotides were obtained. Finally, in order to explore different administration routes, the process was transitioned to being fully biocompatible. The organic solvent previously used in the emulsion system was replaced by soybean oil, and the surfactants were replaced by a food-grade surfactant, PGPR, to form Bio-Compatible Nanoparticle Emulsions (BCNEs). Qualitative release from the BCNEs was shown, and a new method for quantitative measurement of release from BCNEs was developed. Release from QK-BCNE was observed for up to 46 days, which is unprecedented for sustained release and revolutionary for the field.
A BCNE spreadable ointment formulation was also developed.
- Title
- Development of Human Brain Atlas Resources
- Creator
- Qi, Xiaoxiao
- Date
- 2020
- Description
-
Digital human brain atlases play an increasingly critical role and are widely used in neuroimaging studies, such as developing biomarkers, providing training data for machine learning algorithms, and functional connectivity analysis. A brain atlas typically consists of brain templates of different imaging modalities that are representative of the individual brains under study in a standard atlas space, and semantic labels that delineate brain regions according to the characteristics of the underlying tissue. The IIT Human Brain Atlas project has developed state-of-the-art diffusion tensor imaging (DTI), high angular resolution diffusion imaging (HARDI), and anatomical templates for the young adult brain in a standardized space. Probabilistic maps of gray matter (GM) labels and tissue segmentations were also constructed based on the anatomical information of the atlas. This thesis introduces an enhanced T1-weighted template that was developed by combining information from both diffusion and anatomical data. The GM labels and tissue segmentation maps in the standardized space were also improved. Existing white matter (WM) atlases typically lack specificity in terms of brain connectivity. A new approach named regionconnect was developed in this work based on precalculated average healthy adult brain connectivity information stored in standard space in a fashion that allows fast retrieval and integration. This thesis first generated and evaluated the white matter connectome of the IIT Human Brain Atlas v.5.0. Next, the new white matter connectome was used to develop multi-layer, connectivity-based labels for each white matter voxel of the atlas, consistent with the fact that each voxel may contain axons from multiple connections. The regionconnect algorithm was then developed to rapidly integrate the information contained in the multi-layer labels across voxels of a white matter region and to generate a list of the most probable connections traversing that region. The regionconnect algorithm, as well as the white matter tractogram and connectome, the multi-layer connectivity-based labels, and associated resources developed for the IIT Human Brain Atlas v.5.0 in this work, are available at www.nitrc.org/projects/iit. Furthermore, it is well established that the use of a young adult atlas in studies of older adults is inappropriate due to age-related changes in the brain, resulting in an increasing demand for digital brain atlases of older adults. To fulfill this demand, a fiber orientation distribution function (fODF) template that is representative of older adults was developed in a standardized atlas space for studies of the white matter of older adult human brains, which builds a solid foundation for the development of white matter resources for an older adult human brain atlas.
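The integration step described for regionconnect (combining multi-layer, per-voxel connection labels over a region and ranking the most probable connections) can be pictured with the toy sketch below; the data layout and aggregation rule are illustrative guesses, not the released implementation, which is available at www.nitrc.org/projects/iit.

```python
# Toy illustration of aggregating per-voxel, multi-layer connection labels
# over a white matter region and ranking the most probable connections.
# The voxel data and the simple averaging rule are assumptions for
# illustration, not the actual regionconnect data format or algorithm.
from collections import defaultdict
from typing import Dict, List, Tuple

# Hypothetical region: each voxel carries several (connection, probability) layers.
region_voxels: List[Dict[str, float]] = [
    {"precentral-to-brainstem": 0.6, "precentral-to-thalamus": 0.3},
    {"precentral-to-brainstem": 0.5, "superiorfrontal-to-caudate": 0.2},
    {"precentral-to-thalamus": 0.4, "precentral-to-brainstem": 0.7},
]

def rank_connections(voxels: List[Dict[str, float]],
                     top_k: int = 3) -> List[Tuple[str, float]]:
    """Average each connection's probability over the region and rank them."""
    totals: Dict[str, float] = defaultdict(float)
    for voxel in voxels:
        for connection, prob in voxel.items():
            totals[connection] += prob
    averaged = {c: total / len(voxels) for c, total in totals.items()}
    return sorted(averaged.items(), key=lambda item: item[1], reverse=True)[:top_k]

for connection, score in rank_connections(region_voxels):
    print(f"{connection}: {score:.2f}")
```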
- Title
- AMPLIFICATION AND PURIFICATION OF RECOMBINANT PRO-DEATH BAXΔ2 PROTEINS FOR STRUCTURE ANALYSIS
- Creator
- Zhou, Yi
- Date
- 2020
- Description
-
BaxΔ2 is an isoform of the pro-apoptotic Bax family of proteins and an important anti-cancer protein. BaxΔ2 behaves differently from Baxα in inducing apoptosis. The current computationally predicted model of BaxΔ2 is based on the known Baxα structure, which is considered biased. Therefore, the elucidation of the BaxΔ2 crystal structure is critical. The goal of this project was to obtain a sufficient amount of purified recombinant Bax∆2 protein for crystallization. We cloned full-length BaxΔ2 fused with a poly-histidine tag on either the N-terminus (His-Bax∆2) or the C-terminus (Bax∆2-His) into an inducible bacterial expression vector. We found that His-Bax∆2 proteins were expressed better than Bax∆2-His, which totally inhibited host growth. However, the protein concentration of His-Bax∆2 was still too low to be detected by Coomassie blue staining. To increase His-Bax∆2 expression and avoid cytotoxicity, we further tested different bacterial host cells and applied the chaperone system. However, none of these attempts could overcome Bax∆2 cytotoxicity, and the protein expression levels were not high enough to be feasible for further large-scale purification. The mechanism by which Bax∆2 inhibits bacterial growth is still a mystery, because the eukaryotic targets of Bax∆2 (mitochondria and caspases) do not exist in bacteria. Further experiments are required to explore the mechanism of Bax∆2 cytotoxicity in bacteria, so as to finally optimize and elevate the BaxΔ2 protein yields.
- Title
- Illinois Institute of Technology gymnasium with the Life Sciences Building under construction in background, Chicago, Ill., 1966
- Date
- 1966
- Description
-
Photograph of the gymnasium of the Illinois Institute of Technology, located at 32nd and Dearborn Streets. The gymnasium was constructed in 1947 and demolished in 1966. It was built as part of a 1947 Federal Works Agency project to provide facilities for veterans of World War II. Photographer unknown.
- Collection
- Office of Communications and Marketing photographs, 1905-1999