Search results
(1,261 - 1,280 of 4,623)
Pages
- Title
- ANTI-TUMOR SPLICING: RESTORING THE TUMOR SUPPRESSOR BAX IN MICROSATELLITE UNSTABLE TUMORS
- Creator
- Haferkamp, Bonnie
- Date
- 2011-11, 2011-12
- Description
-
Microsatellite instability (MSI) is a hallmark of many tumors, especially colon, endometrial, gastric, and bladder tumors. Bax, a tumor suppressor and pro-death Bcl-2 family member, is frequently mutated in MSI tumors. A microsatellite mutation produces a frameshift with premature termination, leading to “Bax-negative” tumors. Although low Bax expression in tumors is often associated with poor prognosis, several studies have correlated lack of Bax in MSI tumors with improved prognosis. However, the molecular explanation for this paradox is unknown. Here we show that “Bax-negative” tumors in fact generate a novel family of anti-tumor Bax-MSI isoforms through alternative splicing. The thesis comprises two parts. In Chapter One, we fully characterize one Bax-MSI isoform, BaxΔ2. We show that BaxΔ2 is detrimental to cancer cells, but through a non-conventional death pathway with differential sensitivity to chemotherapeutics. In Chapter Two, we present an entire family of Bax-MSI isoforms and illustrate a potential molecular mechanism behind their production. We show that the frequency of Bax alternative splicing is significantly higher in MSI than in non-MSI tumors, and that the requirements for BaxΔ2 trans-splicing are ubiquitous in human cell lines. The discovery of functional Bax isoforms in Bax-mutated tumors may help explain why the apparent loss of Baxα in tumors is sometimes associated with a better prognosis. In addition, the unique sequences of Bax-MSI isoforms can serve as biomarkers for diagnostic and treatment purposes. Importantly, the identification of Bax-MSI isoforms opens promising opportunities, through genetic approaches or drug design, for the treatment of MSI cancers.
Ph.D. in Biology, December 2011
- Title
- DEPARTURE OF NODES IN A CONTENT ADDRESSABLE NETWORK USING NS3
- Creator
- Velani, Hussain
- Date
- 2012-11-26, 2012-12
- Description
-
The Content Addressable Network (CAN) is a decentralized Distributed Hash Table for peer-to-peer networks that is scalable and fault tolerant. CAN is capable of performing dynamic node operations. CAN assumes a virtual d-dimensional coordinate system in which the entire coordinate space is divided into zones. Each node in CAN is assigned a zone. CAN maps each key-value pair to a point in the virtual coordinate space, and each node maintains the key-value pairs that fall within the coordinates of its zone. Each node also maintains a limited amount of information about its neighbors, which is used for routing. Routing in CAN is based on a Virtual Identifier, provided to a node when it joins the CAN; hence routing is defined in terms of application hops. In this thesis, the concepts behind node joining and departure are explained, the simulation of node departure and its results are discussed, an approach to handling node failures is outlined, and possible future work is highlighted.
M.S. in Electrical Engineering, December 2012
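The zone-and-point scheme described in the CAN abstract above can be sketched compactly. The following toy illustration (a sketch only, not the thesis's NS3 implementation; all names are hypothetical) hashes a key to a point in the unit square and routes greedily through neighbor zones:

```python
import hashlib

def key_to_point(key, d=2):
    """Deterministically hash a key to a point in the unit d-cube,
    as CAN does to decide which zone stores the key."""
    h = hashlib.sha256(key.encode()).digest()
    return tuple(int.from_bytes(h[4 * i:4 * i + 4], "big") / 2**32
                 for i in range(d))

class Zone:
    """An axis-aligned box owned by one node; it stores the key-value
    pairs whose points fall inside it."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.store = {}

    def contains(self, p):
        return all(l <= x < h for l, x, h in zip(self.lo, p, self.hi))

    def center(self):
        return tuple((l + h) / 2 for l, h in zip(self.lo, self.hi))

def greedy_next_hop(neighbor_zones, target):
    """One routing step: forward toward the neighbor whose zone center
    is closest to the target point (one application hop)."""
    return min(neighbor_zones,
               key=lambda z: sum((c - t) ** 2
                                 for c, t in zip(z.center(), target)))
```

Each call to `greedy_next_hop` corresponds to one application hop; in a 2-D CAN the average path length grows on the order of the square root of the number of nodes.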
- Title
- RUMINATION AS A PROCESS THROUGH WHICH COGNITIVE VULNERABILITIES ARE RELATED TO NEGATIVE AFFECT ON BOTH THE TRAIT- AND STATE-LEVEL
- Creator
- Hutman, Paul J.
- Date
- 2017, 2017-07
- Description
-
The purpose of this paper is twofold. First, it reviews the empirical and theoretical literature on rumination, as well as on three cognitive vulnerabilities (intolerance of uncertainty, positive meta-cognitive beliefs about rumination, and thought suppression) associated with negative emotionality. Second, it tests trait- and state-level hypotheses that posit (a) rumination is a mediator that connects the three cognitive vulnerabilities to general negative affect and (b) each cognitive vulnerability has a unique relationship to rumination. Rumination is a transdiagnostic and passive, repetitive thought process known to incite and perpetuate a number of negative emotional states and to hinder instrumental behavior (e.g., treatment adherence and social support). Although mainly studied as a trait-like tendency, research on the act of ruminating has found it to be a ubiquitous process, experienced as self-focused, unpleasant, and uncontrollable. Ruminating has a number of precipitants (cognitive avoidance, experiencing acute stress, and receiving negative feedback), occurs more often in those experiencing depression, and is associated with inciting or exacerbating negative emotional states (regardless of one’s diagnostic status). To test the hypotheses, data were gathered from 77 adults, half reporting a mental health diagnosis, who reported on the intensity with which they experience the cognitive vulnerabilities, rumination, and negative affect at both the trait and state levels. Findings support rumination as a process through which cognitive vulnerabilities are connected to negative emotionality. All cognitive vulnerabilities were uniquely predictive of rumination when measured as a trait, but only thought suppression uniquely predicted the act of ruminating. Emphasis is placed on the theoretical and clinical implications of these findings and on potential directions for future research.
Ph.D. in Psychology, July 2017
- Title
- AMERICA'S NEW HOME
- Creator
- Klinger, Ryan J.
- Date
- 2013-04-30, 2013-05
- Description
-
The role of the American nuclear single-family home has shifted significantly in the past 100 years. It began with the introduction and expanded availability of the automobile by Henry Ford in 1908. Though private ownership of single-family homes steadily increased in the early part of the twentieth century, the majority of the existing housing market remained rental. As a result of the stock market crash in 1929, the American housing market saw its first decline of the century in the growth rate of new properties being built and occupied. It was not until the late 1940s, with the end of World War II and the G.I. Bill’s introduction, that Americans saw an unprecedented increase in private homeownership nationwide. Pre-fabricated building processes, adapted from assembly-line production during World War II, radically transformed standard building methodologies for the production of housing communities. Levittown, Pennsylvania, beginning in 1948, was the primary example of this new fast-paced method of construction. In the two decades following World War II, single-family homeownership surpassed rentals to become the majority of occupied housing units in the United States. The housing market took another hit in the economic decline of the 1970s, when an economic downturn and an oil embargo produced a rising demand for rental properties that weakened the demand for newly built single-family homes. In 1980 the FHA revised its form of financing private homeownership, and the American single-family home became available to an expanded demographic of qualified owners. This resulted in another expansion of units being built and occupied from the late 1980s through the early 2000s. In 2007 America saw this come to an end; the American housing market crashed, forcing homeowners across the country into premature foreclosure.
Private homeownership, once a central aspiration of the American Dream, has now been shattered on a wide scale. As we move into the next phase of the American single-family home, many wonder whether it will survive as a building model and reemerge as a prominent housing typology. The sole purpose of this investigation is to establish the new face of the single-family home in an urban setting, using the Gap Neighborhood on Chicago’s South Side as an example. The investigation’s conclusion will provide the architectural community, as well as the target client population, with a prototypical case study for the emerging single-family dwelling unit. It will serve as a comprehensive model that responds to the issues that remain as a result of the housing crisis.
M.S. in Architecture, May 2013
- Title
- PROTEOLYTIC STABILITY OF FIBRONECTIN CONJUGATED TO POLYETHYLENE GLYCOL: EFFECT OF PEG LENGTH TO CYSTEINE RESIDUES
- Creator
- Hekmatfar, Sogol
- Date
- 2013, 2013-07
- Description
-
Fibronectin (FN) is an essential protein of the extracellular matrix (ECM) needed in wound healing. In chronic wounds, high levels of protease in the wound bed lead to excessive degradation of fibronectin, which delays the healing process. Developing a proteolytically stable and functionally active form of FN is the main purpose of this research. Conjugating proteins to polyethylene glycol (PEG), or PEGylation, has been shown to confer greater proteolytic stability than native proteins exhibit, without perturbing their activity. The goal of this study was to compare the proteolysis of native and PEGylated fibronectin with different PEG lengths. Fibronectin was purified from human blood plasma and conjugated to PEG diacrylate (PEGDA) and other types of PEG to yield PEGylated human plasma fibronectin (PEG-HPFN). α-Chymotrypsin and neutrophil elastase were used as digestion enzymes in the degradation reactions. The proteolysis reaction was stopped at different time points with the protease inhibitor phenylmethanesulfonyl fluoride (PMSF). The samples were analyzed by SDS-PAGE followed by silver staining or immunoblotting with antibodies specific to human fibronectin. Densitometric analyses of the polyacrylamide gels or the blots demonstrated that PEG-HPFN was more stable than native HPFN. The results demonstrate that PEGylation is a robust approach for stabilizing fibronectin. Future studies into the activity of PEGylated proteins, as well as the role of PEGylation factors such as the extent of PEGylation or PEG length on activity, will provide novel strategies for mitigating fibronectin degradation in chronic wounds.
M.S. in Chemical Engineering, July 2013
- Title
- DYNAMIC ANALYSIS OF UNCERTAIN STRUCTURES USING IMPRECISE PROBABILITY
- Creator
- Bergerson, Joshua D.
- Date
- 2014, 2014-05
- Description
-
A new method for dynamic response spectrum analysis of structures with uncertainty in their mechanical properties, utilizing the notion of imprecise probability, is developed. This finite-element-based method is capable of obtaining probabilistic bounds on the dynamic response of a structure whose uncertainty is defined by enveloping p-boxes. The developed method obtains probabilistic bounds on (1) the mode shapes, (2) the modal coordinates, and (3) the modal participation factors, leading to the p-boxes of the modal responses. Finally, the maximum modal responses are combined to obtain the structure’s maximum total response with consideration of uncertainty. Numerical examples demonstrating the developed method are included. Keywords: Structural Dynamics, Uncertainty, Imprecise Probability, P-Box.
Ph.D. in Civil Engineering, May 2014
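To make the p-box notion in the abstract above concrete: a p-box is a pair of bounding CDFs enveloping every distribution consistent with the imprecise inputs. A minimal sketch, assuming for illustration a normally distributed response quantity whose mean is known only to lie in an interval (function names are hypothetical, not from the thesis):

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of a normal distribution, computed via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def pbox_interval_mean(x, mu_lo, mu_hi, sigma):
    """Enveloping p-box at x for a normal variable whose mean lies
    somewhere in [mu_lo, mu_hi]: the largest admissible mean yields
    the lower CDF bound and the smallest mean the upper bound."""
    return normal_cdf(x, mu_hi, sigma), normal_cdf(x, mu_lo, sigma)
```

Every CDF consistent with the interval mean lies between the two returned bounds, which is the spirit in which the method carries imprecise probability through the modal quantities to the total response.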
- Title
- Hydro-electric development on the Snake River at Oxen Bend, Oregon
- Creator
- Baer, W. J., Rosenthal, H., Moyses, H., Godfrey, F. O.
- Date
- 2009, 1910
- Publisher
- Armour Institute of Technology
- Description
-
http://www.archive.org/details/hydroelectricdev00baer
Thesis (B.S.)--Armour Institute of Technology; Bibliography: leaf 1
- Title
- Hydro-electric power station design
- Creator
- Badger, Ralph H., Grant, Roy G., Nichols, Harold W.
- Date
- 2009, 1908
- Publisher
- Armour Institute of Technology
- Description
-
http://www.archive.org/details/hydroelectricpow00badg
Thesis (B.S.)--Armour Institute of Technology
- Title
- PREDICTING CORTICAL RESPONSE DURING TRANSCRANIAL MAGNETIC STIMULATION IN HUMANS
- Creator
- Krieg, Todd
- Date
- 2014, 2014-07
- Description
-
Transcranial magnetic stimulation (TMS) is capable of noninvasively activating neurons in the brain. TMS can induce persistent effects and is being increasingly used in both clinical and research applications. Despite this growing interest, the relationship between TMS-generated electric fields (E-fields) and specific cortical electrophysiological responses is not well understood. Most analytical approaches focus on the applied magnetic field strength in the target region as the primary factor, placing activation on the gyral crowns. However, imaging studies show that cortical targets are typically located in the sulcal banks. To study this causal relationship, we combined subject-specific detailed finite element modeling (FEM), used to calculate induced E-field information, with concurrent TMS-PET data as a measure of cortical response. The research presented in this work is divided into three main parts, each building on the results of the previous. (1) We determined that neocortical surface orientation is a critical determinant of regional activation by studying the locations of activation on the cortical surface during TMS. Results indicated that activations were primarily sulcal for both the TMS and task conditions. This study provided further evidence that a major factor in cortical activation during TMS is the orientation of the cortical surface with respect to the induced E-fields, demonstrated by the fact that the sulcal bank of the primary motor cortex had larger cerebral blood flow (CBF) responses during TMS despite the gyral crown of the cortex being subjected to a larger magnetic field magnitude. (2) We sought to determine the E-field characteristics that lead to cortical activation. We found that decomposing the E-field into orthogonal vector components based on the cortical surface geometry (and hence, cortical neuron directions) led to significant differences between the regions of cortex that were active and non-active.
Specifically, active regions had significantly higher E-field components in the normal orthodromic direction (i.e., parallel to pyramidal neurons in the dendrite-to-axon orientation) and in the tangential direction (i.e., parallel to interneurons) at high spatial gradients. In contrast, non-active regions had higher E-field components in the antidromic normal direction. This provides important new understanding of the factors by which TMS induces cortical activation, understanding necessary for predictive and repeatable use of this noninvasive stimulation modality. (3) Finally, two different but related algorithms were formulated, using different optimization approaches, that provide a means for predicting topographical maps of cortical activation in humans. This is the first study to produce an algorithm for predicting the electrophysiological responses of neurons in the cortex based on both gross and microscopic brain anatomy correlated with relevant experimental recordings. This innovation could provide an invaluable tool for predicting regions of cortical activation and may permit, among other benefits, improved prescriptive TMS protocols to optimize therapeutic response to TMS treatment.
Ph.D. in Biomedical Engineering, July 2014
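The normal/tangential decomposition in part (2) of the abstract above is ordinary vector projection onto the local cortical surface normal. A minimal sketch (assuming NumPy; variable names are hypothetical, and the actual study worked on subject-specific FEM meshes):

```python
import numpy as np

def decompose_efield(E, n_hat):
    """Split an E-field vector into a signed normal component and a
    tangential magnitude.  n_hat points in the dendrite-to-axon
    (orthodromic) direction, so a positive normal component is
    orthodromic and a negative one antidromic."""
    n_hat = n_hat / np.linalg.norm(n_hat)
    E_normal = float(np.dot(E, n_hat))      # signed scalar along the normal
    E_tangential = E - E_normal * n_hat     # remainder lies in the surface plane
    return E_normal, float(np.linalg.norm(E_tangential))
```

The sign convention is what distinguishes the orthodromic from the antidromic normal component that the abstract contrasts between active and non-active regions.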
- Title
- AN AUTOMATED ENGINEERING PROCESS TO VERIFY THE CORRECT COORDINATION OF MULTILAYER RECOVERY
- Creator
- Kroculick, Joseph
- Date
- 2017, 2017-05
- Description
-
Recovery is a critical function in backbone networks. The primary function of recovery is to provide connectivity regardless of which layer recovery operates at. Another function of recovery is for all services traversing a failed link to be restored in a way that is consistent with a service user’s requirements. These requirements can include factors such as (1) the cost of recovery, (2) the amount of traffic restored, and (3) the delay in restoring units of traffic. With more options available to recover traffic, providing an integrated recovery solution is necessary. An important force driving the evolution of network devices that transport services such as IP traffic is the layering of network resources. Layering enables networks to increase capacity by extending legacy SONET networks to interface with optical wavelengths. Inconsistent provisioning can prevent service continuity from being achieved during a failure, and continuity of service has been recognized as a key business goal. Furthermore, since recovery can occur at a different time than when it is provisioned, inconsistent provisioning is discovered after the fact, with services left unrepaired, repaired unnecessarily at extra cost, or not repaired fast enough. A network manager can check whether recovery is consistent with a global perspective on how traffic should be restored by comparing the provisioning at each device against suitable properties of a formal representation. To address this issue, an engineering method was developed to detect errors in provisioning automated recovery processes in multilayer and multiprotocol transport networks. This dependability assessment process (DAP) leverages inference techniques provided by Semantic Web technologies to detect network-device provisioning errors. Provisioning should be accompanied by methodologies, processes, and activities to ensure that it can be trusted to achieve a desired network state.
The DAP takes into account unique constraints in the telecommunications domain including bottom-up evolution of physical layer technologies to provide connectivity, and lack of a universal model of network functionality. This method is applied to assessing the correctness of provisioning decisions for a protection switching application in a transport network in both the spatial and temporal domains.
Ph.D. in Computer Science, May 2017
- Title
- AN ANALYSIS OF ZOO AND AQUARIUM PROVIDED TEACHER PROFESSIONAL DEVELOPMENT
- Creator
- Kubarek-Sandor, Joy
- Date
- 2014, 2014-12
- Description
-
Informal science institutions are a significant provider of science teacher professional development. As pressure continues to mount to critically analyze the work of teachers and their effectiveness in the classroom, it is important to understand how informal science institutions contribute to effective change in teachers’ science content knowledge and pedagogy. This research study analyzed zoo- and aquarium-provided teacher professional development to address the research questions: How do zoos and aquariums determine and assess their goals for teacher professional development? How do these goals align with effective teacher change in science content knowledge and pedagogy? Theoretical frameworks for high-quality teacher professional development, effective evaluation of teacher professional development, and learning in informal science settings guided the research. The sample for the study was AZA-accredited zoos and aquariums providing teacher professional development (N=107). Data collection consisted of an online questionnaire, follow-up interviews, and content analysis of teacher professional development artifacts. Analysis revealed that, by and large, zoos and aquariums are lacking in their provision of science teacher professional development. Most professional development focuses on content or resources, neglecting pedagogy. Assessments are mismatched with the goals and rely heavily on self-report and satisfaction measures. The results demonstrate a marked difference between zoos and aquariums that are larger in capacity and those that are medium to small in size. This may be an area for future research, as may analysis of the education resources produced by zoos and aquariums, since these were emphasized heavily as a way these institutions serve teachers.
Ph.D. in Mathematics and Science Education, December 2014
- Title
- IN VITRO CELLULAR ASSAY FOR ASSESSING THE IMPACT OF PROCESSING ON THE ALLERGENIC POTENTIAL OF EGG ALLERGENS
- Creator
- Wei, Kunni
- Date
- 2013-04-24, 2013-05
- Description
-
An understanding of the impact of processing on food allergenicity is critical for allergen risk assessment. In vitro cell-based assays have been developed to assess the allergenicity of food proteins by measuring the capacity of these proteins to induce degranulation in effector cells. This study evaluated the use of sensitized rat basophilic leukemia (RBL) cells for assessing the allergenic potential of ovalbumin (OVA), ovomucoid (OVM), and NIST whole egg standard reference material (NIST egg) by measuring the proteins’ ability to induce the release of β-hexosaminidase and histamine. RBL-2H3 cells at a concentration of ~10⁵ cells/mL were seeded in six-well plates and incubated for 24 hours. After washing with PBS and resuspension in Eagle’s minimal essential medium, cells were sensitized by incubation for 24 h with various dilutions of anti-OVA, anti-OVM, or anti-egg antisera derived from Brown Norway or Wistar rats. The sensitized cells were then challenged with different concentrations of unheated or boiled (for 10 min) OVA, OVM, or NIST egg. After 45 min, cell degranulation was determined by measuring the increase in β-hexosaminidase and histamine in the culture supernatant. The release of β-hexosaminidase was observed after the cells were challenged with OVA, OVM, or NIST egg. Assay optimization studies using sensitized cells challenged with OVA indicated that the greatest degree of release occurred when RBL-2H3 cells were sensitized with a 1:10 dilution of antisera and challenged with 0.01 mg/mL of OVA. Histamine release was not detected at any serum or OVA concentration. Boiling caused a decrease in the recovery of OVA and NIST egg (70.34±3.61% and 73.69±7.66%, respectively), but did not affect the recovery of OVM (96.28±2.95%). Under the optimal assay conditions, boiled OVA and boiled NIST egg triggered the release of more β-hexosaminidase than the unheated forms of these proteins in most cases.
However, boiling did not significantly affect the ability of OVM to induce the release of β-hexosaminidase. Due to the large variation in % release of β-hexosaminidase obtained from different trials, the RBL mediator release assay may not be an accurate method to evaluate the impact of thermal processing on the allergenic potential of egg allergens.
M.S. in Food Safety and Technology, May 2013
- Title
- MOTION OF BUBBLY FLUID IN A TANK
- Creator
- Langman, Michael
- Date
- 2014, 2014-07
- Description
-
Computational fluid dynamics is the numerical study of the motion of fluids. In this thesis, an introduction to fluid mechanics is presented and the governing equations of fluid mechanics are derived. The open-source computational fluid dynamics library OpenFOAM is then used to simulate fluid dynamics and to model the formation and movement of bubbles in a tank.
M.S. in Applied Mathematics, July 2014
- Title
- ASSESSING THE CULTURAL SENSITIVITY OF THE BOSTON NAMING TEST-2: IS THERE EVIDENCE FOR DIFFERENTIAL ITEM FUNCTIONING BETWEEN AFRICAN AMERICAN AND CAUCASIAN ADULTS AND BETWEEN MEN AND WOMEN?
- Creator
- Benson, Laura M.
- Date
- 2014, 2014-07
- Description
-
The Boston Naming Test – 2 (BNT-2) is one of the most widely used neuropsychological measures for the detection of naming deficits. However, few studies have investigated its item-level psychometric properties, particularly in ethnic minorities, including African Americans. The present study examined the BNT-2 for the presence of ethnicity- and gender-based differential item functioning (DIF) in a cognitively diverse sample of 744 African American and Caucasian adults recruited from a neuropsychology outpatient clinic in Chicago, Illinois. Using a two-parameter Item Response Theory (IRT) framework, all 60 items of the BNT-2 were analyzed, with difficulty and discriminability estimates generated for items 16 through 60 for ethnicity and for items 11 through 60 for gender. Ethnicity-based DIF was detected for 10 items (i.e., “globe,” “beaver,” “dominoes,” “unicorn,” “accordion,” “latch,” “tripod,” “yoke,” “trellis,” and “palette”) and gender-based DIF for 6 items (i.e., “stethoscope,” “pyramid,” “latch,” “sphinx,” “yoke,” and “palette”), indicating that these items do not function equivalently between groups matched on naming ability. Post hoc hierarchical logistic regression analyses examined age, years of completed education, and literacy (measured as word reading ability) as potential explanatory factors for the observed DIF. Results indicated that the three variables provided partial explanations for the DIF detected in the 16 ethnicity and gender items, though none could entirely account for the observed findings. IRT-estimated difficulty parameters also allowed for the examination of item ranking on the BNT-2. For both ethnicity and gender, the IRT-estimated rank ordering of items between groups was similar, as well as comparable (though not identical) to that initially assigned by Kaplan, Goodglass, and Weintraub (2001).
Results further indicated that correlations between BNT-2 scale scores and IRT-based scale scores were very high, suggesting that the overall scoring of the BNT-2 was not substantially biased by ethnicity or gender, despite the identified DIF and small fluctuations in item rankings. Overall, this implies that the BNT-2 is still a valid measure for use with African Americans, although future consideration should be given to revising or omitting the items identified as having DIF.
Ph.D. in Psychology, July 2014
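In the two-parameter IRT framework used in the study above, each item has a discrimination a and a difficulty b, and DIF means those parameters differ between groups even for examinees matched on ability θ. A minimal sketch (illustrative only; the parameter values are hypothetical, not the study's estimates):

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) IRT model: probability that an
    examinee of ability theta correctly names an item with
    discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def dif_gap(theta, a, b_group1, b_group2):
    """Uniform DIF in the difficulty parameter: two examinees with the
    same ability face different success probabilities because the item
    is effectively harder for one group."""
    return p_correct(theta, a, b_group1) - p_correct(theta, a, b_group2)
```

With a = 1 and difficulties 0 versus 1, examinees at θ = 0 succeed with probability 0.50 in one group but only about 0.27 in the other, despite identical ability, which is exactly the inequity DIF analysis is designed to detect.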
- Title
- DEVELOPING ALGORITHMIC TRADING STRATEGIES AND EMPIRICAL ANALYSIS WITH HIGH FREQUENCY TRADING DATA
- Creator
- Lee, Jeonghoe
- Date
- 2015, 2015-07
- Description
-
This PhD dissertation aims at developing algorithmic trading strategies and demonstrating data-analysis skills; for a quantitative analyst as well as an academic scholar in the financial trading area, these two professional backgrounds are indispensable. Chapter 1 presents multi-objective optimization and real-time optimization of design variables. For instance, while conventional trading systems explore a single objective function, multi-objective optimization allows us to manage the essential trade-off among profit, standard deviation, and maximum drop. In addition, design parameters such as trading volume, the amount of historical data, and the trading gateways of technical indicators are continuously optimized in real time. Chapter 2 presents an algorithmic trading system built around machine learning and demonstrates its various applications; the main purpose of this research is to propose an objective numerical development framework for algorithmic trading. Chapter 3 pursues an understanding of liquidity measures, which are critical for algorithmic traders and investors. Various liquidity measures have been suggested, and they have different sensitivities to the market. This research analyzes liquidity measures and clarifies their relation to market price return and realized volatility. In sum, across these three chapters, the dissertation demonstrates necessary research topics in algorithmic trading.
Ph.D. in Management Science, July 2015
- Title
- STABILITY OF AQUEOUS FOAMS: FOAM FILM STRATIFICATION PHENOMENON AND THE EFFECTS OF DISPERSED VERSUS SOLUBILIZED OIL
- Creator
- Lee, Jongju
- Date
- 2016, 2016-05
- Description
-
A foam is a system consisting of a concentrated dispersion of gas bubbles in a liquid. Foams are encountered in many industries, such as food, agriculture, chemicals, petroleum, and paper manufacturing. Aqueous foams are formed by using surfactants or nano-colloidal particles. Thin liquid films containing surfactant micelles or other nano-colloidal particles are considered to be the key structural elements of foams containing gas and liquid. We therefore probed the effects of micellar concentration and film size (area) on the stability of a dry bulk foam by studying the stability of a single foam lamella containing micelles, in order to establish the importance of the micellar structuring phenomenon and of the foam film size (area) in bulk foam stability. The film stratification phenomenon (stepwise film thinning) was observed experimentally by reflected-light microinterferometry. The stepwise, layer-by-layer decrease of film thickness is due to the appearance and growth of a dark spot (of one layer less in film thickness) in the film. We used a two-dimensional diffusion model to describe the dynamics of dark spot expansion in terms of the apparent diffusion coefficient and the film size. Based on this model, we carried out a parametric study depicting the effects of film thickness (i.e., the number of micellar layers) and film area on the rate of dark spot expansion. Many practical applications involving three-phase foams (aqueous foams containing oil) commonly employ surfactants at several times their critical micelle concentration (CMC). We investigated the influence of both dispersed and solubilized oils, and of surfactant concentration (above the CMC), on the stability of an aqueous foaming system.
In foam stability, the relative importance of dispersed oil versus oil solubilized within the micelles depends on the stability of the aqueous asymmetric (i.e., pseudoemulsion) film between the oil and the air-water interface and on the second virial coefficient. The micellar structuring tests using the single foam lamella revealed that the multi-layering structure was well pronounced in the absence of solubilized oil, so the foam lamellae thinned slowly, layer by layer. Oil solubilized in micelles weakened the micellar structure formation; the foam lamellae then thinned faster, making the foam less stable.
Ph.D. in Chemical Engineering, May 2016
- Title
- HYBRID METHODS FOR SIMULATION OF MUON IONIZATION COOLING CHANNELS
- Creator
- Kunz, Josiah D.
- Date
- 2017, 2017-05
- Description
-
COSY Infinity is an arbitrary-order beam dynamics simulation and analysis code. It can determine high-order transfer maps of combinations of particle-optical elements with arbitrary field configurations. These features make it a very attractive code for precision modeling, design, and optimization of next-generation muon beam facilities. New features are being developed for inclusion in COSY to follow the distribution of charged particles through matter. To study in detail some of the properties of muons passing through material, the transfer map approach alone is not sufficient. The interplay of beam optics and atomic processes must be studied by a hybrid transfer map–Monte Carlo approach, in which transfer map methods describe the average behavior of the particles in the accelerator channel, including energy loss, and Monte Carlo methods provide small corrections to the predictions of the transfer map, accounting for the stochastic nature of scattering and straggling. The new approach is very efficient because the vast majority of the dynamics is represented by fast application of the high-order transfer map of an entire element, on top of which accumulated stochastic effects and possible particle decay are applied. The gains in speed shown in this work are expected to simplify the optimization of muon cooling channels, which is usually very computationally demanding due to the need to repeatedly run large numbers of particles through large numbers of configurations. This work describes the development of the required algorithms and their application to the simulation of muon ionization cooling channels. The code is benchmarked against other codes, validated with experimental results, and predicts results for current muon ionization cooling efforts.
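The hybrid scheme can be caricatured in a few lines of Python: a deterministic transfer map step (here just a linear drift, as a stand-in for a high-order map) followed by a stochastic Monte Carlo kick for multiple scattering. The map, the Gaussian scattering model, and all parameter values are illustrative assumptions, not COSY's actual algorithms.

```python
import random

def apply_map(x, px, length):
    """Deterministic step: a linear drift map as a stand-in for a
    high-order transfer map describing the average dynamics."""
    return x + length * px, px

def scatter_kick(px, theta_rms, rng):
    """Stochastic step: a Gaussian angular kick standing in for the
    Monte Carlo treatment of multiple scattering."""
    return px + rng.gauss(0.0, theta_rms)

def track(particles, length, theta_rms, seed=0):
    """Hybrid tracking: fast map application plus a small stochastic
    correction per particle."""
    rng = random.Random(seed)
    out = []
    for x, px in particles:
        x, px = apply_map(x, px, length)       # average behavior
        px = scatter_kick(px, theta_rms, rng)  # stochastic correction
        out.append((x, px))
    return out
```

With theta_rms = 0 the scheme reduces to pure map tracking; the stochastic kick only perturbs the map's prediction, which is what keeps the hybrid fast.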
Ph.D. in Physics, May 2017
- Title
- LAM MUTANT LARVAE WITH MELANOTIC MASSES DO NOT SURVIVE WITH THEIR SIBLINGS THAT DO NOT DEVELOP MASSES
- Creator
- Cui, Yi
- Date
- 2012-04-28, 2012-05
- Description
-
Melanotic masses are black nodules that form inside larvae and can be found in multiple mutants (Minakhina & Steward, 2006). There are two types of melanotic masses: immune-related masses and non-hemocyte-associated masses (Minakhina & Steward, 2006). Immune-related melanotic masses form in lam mutants (Markovic et al., 2009). We now show that all lam mutant larvae survived, and 33% of them developed melanotic masses, when grown on fresh food; however, only 66% survived, and none formed melanotic masses, when raised on frassfood, a mixture of feces-containing medium collected from vials of wild-type larvae. We find that frassfood lethality is not recipe specific, since the results were similar on normal yeast mashed-potato medium and on Nutrifly food. Our results also indicate that it is bacteria in frassfood that kill lam mutant larvae with melanotic masses, because the survival and melanotic mass incidence of lam mutant larvae were similar to those on fresh food when antibiotics were added to the frassfood. However, lam mutant larvae with melanotic masses are not generally hypersensitive to bacteria, because all lam mutant larvae showed sensitivity to S. marcescens. In addition, since no frassfood lethality was observed in Relish mutants, we propose that the humoral response is not affected in the lam mutants. Frassfood lethality of larvae with melanotic masses was observed in neither cactus mutants nor hopTum-l mutants, so the immune defects that cause melanotic masses in these two mutants do not lead to frassfood lethality. Finally, we show that frassfood lethality is not exactly the same in lam-overexpressing and lam C-overexpressing larvae with melanotic masses as in lam/Df larvae with melanotic masses. Thus it is possible that gut immunity is affected in lam mutants with melanotic masses, so that they are killed by the bacteria in frassfood.
M.S. in Biology, May 2012
- Title
- DEPTH MAP ENHANCEMENT FOR REAL-TIME 3D RECONSTRUCTION
- Creator
- Lee, Kitae
- Date
- 2015, 2015-07
- Description
-
In this paper, we present a novel depth map enhancement method for real-time 3D reconstruction with the Microsoft Kinect. The Kinect sensor is relatively affordable and capable of generating high-resolution color images and depth maps of a scene at real-time rates. However, owing to its low cost, it produces several artifacts: the generated depth map contains many holes, i.e., missing information around object boundaries, and is misaligned with the color image. The objective of 3D reconstruction is to recreate a real scene, as accurately as possible, within a virtual three-dimensional space using a computer. 3D reconstruction algorithms depend heavily on the quality of the depth map, and such a poor depth map cannot be used for real-time 3D reconstruction. We present a novel multi-step, upsampling-based anisotropic diffusion algorithm that operates on the depth map and color image generated by the Kinect. This method performs better than existing bilateral filtering and the original anisotropic filtering in terms of filling holes, sharpening object boundaries, and aligning the depth map with the color image. We compare the performance of these filters. It is difficult to do a meaningful comparison of two algorithms using the output of the Kinect sensor directly, because each observation of the same scene yields different sensed values. To circumvent this problem and achieve an accurate comparison, we used a dataset from the Computer Vision Group at the Technical University of Munich (TUM). This dataset, together with scripts for quantitative error metrics, is available at http://vision.in.tum.de/data/datasets/rgbd-dataset. We also parallelized our implementation with GPU computing to satisfy the real-time requirement.
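As an illustration of the edge-preserving diffusion such methods build on, here is a minimal, pure-Python sketch of classic Perona–Malik anisotropic diffusion on a depth grid. It is not the multi-step, upsampling-based algorithm of the thesis (and it ignores the color image); the conductance function and parameter values are standard textbook choices, assumed for illustration.

```python
def diffuse(depth, iters=50, kappa=5.0, lam=0.2):
    """Perona-Malik anisotropic diffusion on a 2D depth grid (list of
    lists). Conductance g = 1/(1 + (grad/kappa)^2) is small across large
    depth gradients, so object boundaries are preserved while noise
    within a surface is smoothed. Border pixels are left fixed."""
    h, w = len(depth), len(depth[0])
    d = [row[:] for row in depth]
    for _ in range(iters):
        nxt = [row[:] for row in d]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                flux = 0.0
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    grad = d[y + dy][x + dx] - d[y][x]
                    g = 1.0 / (1.0 + (grad / kappa) ** 2)
                    flux += g * grad            # edge-stopped diffusion flux
                nxt[y][x] = d[y][x] + lam * flux
        d = nxt
    return d
```

The conductance g falls off where the depth gradient is large, so a noisy value inside a flat surface relaxes toward its neighbors while a depth discontinuity between objects survives the smoothing.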
M.S. in Electrical Engineering, July 2015
- Title
- ACOUSTIC LOCALIZATION OF BREAKDOWN IN RADIO FREQUENCY ACCELERATING CAVITIES
- Creator
- Lane, Peter
- Date
- 2016, 2016-07
- Description
-
Current designs for muon accelerators require high-gradient radio frequency (RF) cavities to be placed in solenoidal magnetic fields. These fields help contain and efficiently reduce the phase space volume of source muons in order to create a usable muon beam for collider and neutrino experiments. In this context and in general, the use of RF cavities in strong magnetic fields has its challenges. It has been found that placing normal-conducting RF cavities in strong magnetic fields reduces the threshold at which RF cavity breakdown occurs. To aid the effort to study RF cavity breakdown in magnetic fields, it would be helpful to have a diagnostic tool that can localize the source of breakdown sparks inside the cavity. These sparks deliver thermal shocks to small regions of the inner cavity wall that can be detected and localized using microphones attached to the outer cavity surface. Details on RF cavity sound sources, as well as the hardware, software, and algorithms used to localize the source of sound emitted by breakdown thermal shocks, are presented. In addition, results from simulations and experiments on three RF cavities, namely the Aluminum Mock Cavity, the High-Pressure Cavity, and the Modular Cavity, are given. These results demonstrate the validity and effectiveness of the described technique for acoustic localization of breakdown.
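A standard way to localize an impulsive source from several microphones is via time differences of arrival (TDOA). The following sketch is illustrative only, not the thesis's algorithm: it assumes a single effective sound speed and a brute-force grid search over candidate source positions; the speed value, geometry, and function names are hypothetical.

```python
import math

SPEED = 3100.0  # m/s; assumed effective sound speed, illustrative only

def tdoa(source, mics, speed=SPEED):
    """Arrival-time differences at each microphone, relative to mic 0."""
    times = [math.dist(source, m) / speed for m in mics]
    return [t - times[0] for t in times]

def localize(mics, measured, candidates, speed=SPEED):
    """Brute-force search: pick the candidate source position whose
    predicted TDOAs best match the measured ones (least squares)."""
    best, best_err = None, float("inf")
    for p in candidates:
        pred = tdoa(p, mics, speed)
        err = sum((a - b) ** 2 for a, b in zip(pred, measured))
        if err < best_err:
            best, best_err = p, err
    return best
```

In practice the arrival times would come from onset detection on the recorded waveforms, the candidate grid would cover the cavity surface, and a nonlinear least-squares solver could replace the grid search.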
Ph.D. in Physics, July 2016