Search results
(7,841 - 7,860 of 10,083)
Pages
- Title
- MODELING GLUCOSE-INSULIN DYNAMICS AND AUTOMATED BLOOD GLUCOSE REGULATION IN PATIENTS WITH TYPE 1 DIABETES
- Creator
- Oruklu, Meriyan
- Date
- 2012-11-06, 2012-12
- Description
-
Estimation of future glucose concentrations is a crucial task for diabetes management. Predicted glucose values can be used for early hypoglycemic/hyperglycemic alarms or for adjustment of insulin amount/rate. In the first part of this thesis, reliable subject-specific glucose concentration prediction models are developed using a patient's continuous glucose monitoring (CGM) data. CGM technologies provide glucose readings at a high frequency and consequently detailed insight into a patient's glucose variation. Time-series analyses are utilized to develop low-order linear models from a patient's own CGM data. Glucose prediction models are integrated with recursive identification and change detection methods, which enable dynamical adaptation of the model to inter-/intra-subject variability and glycemic disturbances. Two separate patient data sets collected under hospitalized (disturbance-free) and normal daily life conditions are used to validate the univariate glucose prediction algorithm developed. Prediction performance is evaluated in terms of prediction error metrics and Clarke error grid analysis (CG-EGA). The long-term complications of diabetes can be reduced by controlling the blood glucose concentrations within normoglycemic limits. In the second part of this thesis, the subject-specific modeling algorithm developed in part one is integrated with a control algorithm for closing the glucose regulation loop for patients with type 1 diabetes. An adaptive control algorithm is developed to keep a patient's glucose concentrations within the normoglycemic range and dynamically respond to glycemic challenges with automated subcutaneous insulin infusion. A model-based control strategy is used to calculate the required insulin infusion rate, while the model parameters are recursively identified at each sampling step. The closed-loop algorithm is designed for the subcutaneous route for both glucose sensing and insulin delivery. It accounts for the slow insulin absorption from the adipose tissue and the time delay between blood and subcutaneous glucose concentrations. The performance of the control algorithm developed is demonstrated on two simulated patient populations to provide effective blood glucose regulation in response to multiple meal challenges with a simultaneous challenge on a patient's insulin sensitivity. Physical activity and emotional stimuli such as stress are known to have a significant effect on a patient's whole-body fuel metabolism. In the third part of this thesis, the univariate time-series models developed from recent glucose concentration history are extended to include additional information on a patient's physical and emotional condition. Physiological measurements from a multi-sensor body monitor are used to supplement a patient's CGM data and develop multivariate glucose prediction models. The prediction performance of the multivariate algorithm developed is evaluated on data collected from patients with type 2 diabetes, and a real-life implementation of the algorithm is demonstrated for early (i.e., 30 min in advance) hypoglycemia detection. Finally, the control algorithm developed in part two is extended to utilize the glucose profiles predicted by the multivariate patient model. The multivariate closed-loop algorithm is tested with two clinical experiments performed on a patient with type 1 diabetes during a high-intensity exercise followed by a carbohydrate-rich meal challenge.
The algorithm acquires the patient's CGM and armband (body monitor) data every 10 min, and accordingly calculates the required basal insulin infusion rate. Insulin is administered in a fully automated manner without any food or activity announcements (e.g., no information on meal/exercise size or time). None of the algorithms developed in this thesis require any patient-specific tailoring or prior experimental data before implementation. They are also designed to function in a fully automated manner and do not require any disturbance announcements or manual inputs. Therefore, they are good candidates for installation on a portable ambulatory device used in a patient's home environment for his/her diabetes management.
Ph.D. in Chemical and Biological Engineering, December 2012
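As a purely illustrative sketch of the kind of low-order, recursively identified time-series predictor described above (not the thesis's actual models), the snippet below fits an autoregressive model to CGM samples with recursive least squares and a forgetting factor, then iterates it for a 30-minute-ahead forecast. The model order, forgetting factor, 5-minute sampling interval, and synthetic data are all assumptions made for the example.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    """One recursive-least-squares step with forgetting factor lam."""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + phi.T @ P @ phi)        # gain vector
    err = y - (phi.T @ theta).item()             # one-step prediction error
    theta = theta + k * err
    P = (P - k @ phi.T @ P) / lam
    return theta, P

def predict_ahead(theta, history, steps):
    """Iterate the AR one-step predictor to obtain a multi-step forecast."""
    hist = list(history)
    for _ in range(steps):
        phi = np.array(hist[-len(theta):][::-1])  # most recent sample first
        hist.append(float(phi @ theta))
    return hist[-1]

# Synthetic CGM-like record (mg/dL), 5-minute sampling assumed.
rng = np.random.default_rng(0)
cgm = 120 + 30 * np.sin(np.linspace(0, 6 * np.pi, 500)) + rng.normal(0, 2, 500)

order = 3
theta, P = np.zeros((order, 1)), 1000.0 * np.eye(order)
for t in range(order, len(cgm)):
    theta, P = rls_update(theta, P, cgm[t - order:t][::-1], cgm[t])

print("30-min-ahead forecast:", predict_ahead(theta.ravel(), cgm[-order:], steps=6))
```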
- Title
- A STUDY OF HIGH FREQUENCY TRADING IN LIMIT ORDER BOOKS
- Creator
- Jiang, Yuan
- Date
- 2013, 2013-12
- Description
-
In this thesis we study high frequency trading and its applications in limit order books. We discuss the basic concepts and review the models in the limit order books. The review section focuses on the queues in limit order books, optimal trading strategies, short-term volatilities, and multi-agent problems in the setting of limit order markets. Discussions of the shortcomings of some prevalent limit order book models are addressed thereafter. For the main results of the thesis, a theoretical model is calibrated to market data to facilitate the comparison between the model and empirical behavior in terms of order flows, price changes, and the diffusion limit of prices.
M.S. in Applied Mathematics, December 2013
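Since the review centers on queue models of limit order books, a tiny generic data-structure sketch may help: a FIFO queue of resting orders at one price level with price-time priority, matched against an incoming market order. This is a textbook-style illustration only, not the calibrated model from the thesis; the order sizes and IDs are made up.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Order:
    order_id: int
    size: int

class PriceLevel:
    """FIFO queue of limit orders resting at a single price (price-time priority)."""
    def __init__(self):
        self.queue = deque()

    def add(self, order: Order) -> None:
        self.queue.append(order)              # new limit order joins the back

    def match(self, size: int) -> int:
        """Consume resting orders against an incoming market order of `size`."""
        filled = 0
        while size > 0 and self.queue:
            head = self.queue[0]
            take = min(size, head.size)
            head.size -= take
            filled += take
            size -= take
            if head.size == 0:                # a fully filled order leaves the queue
                self.queue.popleft()
        return filled

level = PriceLevel()
level.add(Order(1, 300))
level.add(Order(2, 200))
print(level.match(350))                       # 350 filled; order 1 gone, order 2 has 150 left
```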
- Title
- LIGHTLY SUPERVISED MACHINE LEARNING FOR CLASSIFYING ONLINE SOCIAL DATA
- Creator
- Mohammady Ardehaly, Ehsan
- Date
- 2017, 2017-05
- Description
-
Classifying latent attributes of social media users has many applications in public health, politics, and marketing. For example, web-based studies of public health require monthly estimates of the health status and demographics of users based on their public communications. Most existing approaches are based on supervised learning. Supervised learning requires human-annotated labeled data, which can be expensive, and many attributes such as health are hard to annotate at the user level. In this thesis, we investigate classification algorithms that use population statistical constraints such as demographics, names, polls, and social network followers to predict individual user attributes. For example, the racial makeup of counties, obtained from the U.S. Census, is a source of light supervision for training classification models. These statistics are usually easy to obtain, and a large amount of unlabeled data from social media sites (e.g. Twitter) is available. Learning from Label Proportions (LLP) is a lightly supervised approach in which the training data consist of multiple sets of unlabeled samples and only their label distributions are known. Because social media users are not a representative sample of the population and constraints are too noisy, existing LLP models (e.g. linear models, label regularization) are insufficient. We develop several new LLP algorithms to deal with this bias, including bag selection and robust classification models. We also propose a scalable model to infer political sentiment from high-temporal-resolution big data and estimate the daily conditional probability of different attributes as a supplementary method to polls for social scientists. Because constraints are often not available in some domains (e.g. blogs), we propose a self-training algorithm to gradually adapt a classifier trained on social media to a different but similar domain. We also extend our framework to deep learning and provide empirical results for demographic classification using the user profile image. Finally, when both text and a profile image are available for a user, we provide a co-training algorithm to iteratively improve both image and text classification accuracy, and apply an ensemble method to achieve the highest precision.
Ph.D. in Computer Science, May 2017
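As a hedged illustration of the learning-from-label-proportions setting described above, the sketch below trains a logistic model by label regularization: gradient descent pushes each bag's mean predicted probability toward its known proportion. The loss, optimizer, synthetic bags, and hyperparameters are illustrative choices, not the thesis's algorithms, which extend LLP with bag selection and robustness to noisy constraints.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def label_regularization(bags, proportions, dim, lr=0.1, epochs=500, l2=1e-3):
    """Fit weights so each bag's mean predicted probability matches its known proportion."""
    w = np.zeros(dim)
    for _ in range(epochs):
        grad = l2 * w
        for X, p_true in zip(bags, proportions):
            p = sigmoid(X @ w)
            p_hat = p.mean()                                   # bag-level prediction
            # gradient of (p_hat - p_true)^2 with respect to w
            grad += 2.0 * (p_hat - p_true) * (X.T @ (p * (1 - p))) / len(X)
        w -= lr * grad
    return w

# Illustrative bags: unlabeled feature matrices with known class proportions,
# e.g. counties whose demographic makeup is known from census data.
rng = np.random.default_rng(0)
bags = [rng.normal(m, 1.0, size=(200, 2)) for m in (-1.0, 0.0, 1.0)]
proportions = [0.2, 0.5, 0.8]
print("learned weights:", label_regularization(bags, proportions, dim=2))
```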
- Title
- HOUSING 2.0: A COLLABORATIVE PLATFORM FOR THE DESIGN OF MASS HOUSING THROUGH DIGITAL ENVIRONMENTS, NEW MEDIA, AND DESIGN FRAMEWORKS
- Creator
- Pollard, David P.
- Date
- 2011-07-18, 2011-07
- Description
-
The role of the architect as master-builder has been severely eroded, especially in the single-family housing industry. Although other professions have embraced the tremendous advances in technology, the architectural profession has regressed. Historically the architect has been an innovation pioneer. Early in the twentieth century architects had a broad, innovative role. Architects rethought the design process, construction methods, fabrication processes, and structural and engineering systems. Now in the twenty-first century, however, the majority of architects have little to no influence on the latter four, and the design process, the one aspect the architect still controls, has remained stagnant. This thesis examines the available technologies being championed by parallel industries, compares the advantages and disadvantages of innovation implementation in the design field, and proposes a solution for the architect to regain control as master-builder of single-family homes through new media concepts. With an architectural implementation of a web-based, collaborative design and construction alliance, consumers have access to choice, quality, and information when purchasing a new home. It is proposed that affordable architect-designed housing choices be delivered through architectural design systems. These systems allow controlled customization of architectural designs, all while delivering real-time cost data and building simulation. The result of this study is an open design system led by the architect that allows homebuyers access to affordable quality design, transparency in costs, and an alternative choice in purchasing a new home.
M.S. in Architecture, July 2011
- Title
- THE RELATIONSHIP BETWEEN CULTURAL ORIENTATION AND ATTITUDES TOWARDS INTELLECTUAL DISABILITY
- Creator
- Rafajko, Sean I.
- Date
- 2016, 2016-07
- Description
-
Individuals with intellectual disability (ID) face a number of disparities in their daily lives. Many of these disparities are the result of interactions with people in their environment, including the general public. The behaviors of the general public toward people with ID are linked to the attitudes that they hold. Thus, it is essential to understand what influences these attitudes. Although there has been some research examining how factors such as demographics and level of contact with individuals with ID affect attitudes, there has been only very limited research specifically investigating the impact of cultural factors on attitudes toward individuals with ID. The purpose of this study was to examine the unique contribution of cultural orientation variables as predictors of individuals' attitudes toward ID using hierarchical regression analyses. Results revealed that for all examined domains of attitudes, cultural orientation accounted for a significant portion of the variance in attitudes toward ID. More specifically, greater vertical-individualist orientation was associated with more negative attitudes toward ID on all domains, while other cultural orientations (horizontal-collectivist, horizontal-individualist, and vertical-collectivist), when significant, were associated with more positive attitudes toward ID. Findings from this study suggest that culture is a relevant area to explore in future research on attitudes toward ID. Further research is needed to examine how these relationships play out for specific groups, such as caregivers and clinicians, in order to better understand how cultural orientation can more directly affect the lives of individuals with ID.
M.S. in Psychology, July 2016
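For readers unfamiliar with the hierarchical-regression logic used in this study (enter background covariates first, add cultural-orientation scores second, and test the increment in explained variance), a minimal sketch follows. The variable names, simulated data, and two-step ordering are placeholders rather than the study's actual measures; statsmodels is assumed to be available.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
# Placeholder data: step-1 covariates plus one step-2 cultural-orientation score.
df = pd.DataFrame({
    "contact": rng.normal(size=n),
    "age": rng.normal(size=n),
    "vertical_individualism": rng.normal(size=n),
})
df["attitude"] = 0.3 * df["contact"] - 0.5 * df["vertical_individualism"] + rng.normal(size=n)

step1 = sm.OLS(df["attitude"], sm.add_constant(df[["contact", "age"]])).fit()
step2 = sm.OLS(df["attitude"],
               sm.add_constant(df[["contact", "age", "vertical_individualism"]])).fit()

delta_r2 = step2.rsquared - step1.rsquared        # variance uniquely explained at step 2
f_stat, p_value, _ = step2.compare_f_test(step1)  # F test of the R-squared increment
print(f"delta R^2 = {delta_r2:.3f}, F = {f_stat:.2f}, p = {p_value:.4f}")
```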
- Title
- MEDIA AND PROCTORING EFFECTS ON THE MEASUREMENT EQUIVALENCE OF THREE PERSONALITY SCALES
- Creator
- Sawhney, Gargi
- Date
- 2011-05-09, 2011-05
- Description
-
Due to the increased use of technology in administering psychological measures, there has been a growing interest in establishing the measurement equivalence of personality measures across paper-and-pencil and computer-based conditions. The present study examined the measurement equivalence of three personality measures across three administration conditions: paper-and-pencil proctored, computer-based proctored, and computer-based non-proctored. Participants were 415 undergraduate students, who were randomly assigned to the three conditions and completed measures of competitiveness, engagement, and pride in productivity. Adequate fit was found for a three-factor measurement model within each of the three conditions. Results from multi-group confirmatory factor analysis indicated good configural, metric, and scalar equivalence, as well as invariant uniqueness across the three conditions. Practically speaking, observed means can be compared across the paper-and-pencil proctored, computer-based proctored, and computer-based non-proctored conditions. Results of this study are consistent with previous research that showed support for measurement equivalence across paper-and-pencil and computer-based modes of administration. Future research with larger samples and employees should attempt to extend our findings to high-stakes contexts, such as employment settings.
M.S. in Psychology, May 2011
- Title
- AGENT-BASED MODELING OF ANGIOGENESIS WITHIN DEGRADABLE BIOMATERIAL SCAFFOLDS
- Creator
- Mehdizadeh, Hamidreza
- Date
- 2013, 2013-12
- Description
-
The ability to promote and control blood vessel assembly in polymer scaffolds is important for clinical success in tissue engineering. Often, experimental studies are performed to investigate the role of scaffold architecture on vascularized tissue formation. However, experiments are expensive and time-consuming, and synthesis protocols often do not allow for independent investigation of specific scaffold properties. Mathematical and computational representation of the relationship between scaffold properties and neovascularization facilitates studying the fundamental processes involved in vascularization of biomaterials and provides a more profound understanding of the critical factors that affect this process. This understanding is critical for the design of new therapeutic approaches that could bridge the existing gap between current experimental techniques and state-of-the-art practical tissue regeneration approaches. Computational models allow for rapid screening of potential material designs with control over scaffold properties that is difficult in laboratory settings. In this work, a multi-layered, multi-agent framework is developed to model the process of sprouting angiogenesis within porous biodegradable tissue engineering scaffolds. Software agents are designed to represent endothelial cells, interacting together and with their micro-environment, leading to formation of new blood vessels that perfuse the scaffold. A rule base, derived from experimental findings reported in the literature or observed by our collaborators, governs the behavior of individual agents. Two-dimensional and three-dimensional scaffold models with well-defined homogeneous and heterogeneous pore architectures are designed and simulated to investigate the impact of various scaffold design parameters such as pore size, pore size distribution, interconnectivity, and porosity, as well as the degradation behavior of the scaffolds, on vessel invasion and capillary network structure. Model parameters such as the speed of vessel sprouting or cell migration speed are adjusted based on independent results of in vivo vascularization of fibrin gels in the absence of a polymer scaffold. The effects of various characteristics of scaffold degradation are also investigated. Various scenarios are defined and simulation case studies are developed to investigate the effect of scaffold geometrical and structural properties on angiogenesis. The simulation results are compared with available experimental results of scaffold vascularization performed in our group and with relevant published literature data to validate the developed model. These results indicate that, in general, the rate of vascularization increases with larger pore size and higher interconnectivity and porosity. Pores of larger size (160-270 μm) support rapid and extensive angiogenesis; however, vascularizing deeper parts of the scaffolds still remains a challenge that requires more complex scaffold designs. The agent-based model can be used to provide insight into optimal scaffold properties that support vascularization of engineered tissues. The modeling framework developed provides a novel interface for convenient integration of new knowledge into the current computational models, making it possible to gradually increase the level of complexity and accuracy of the models as our knowledge about the underlying biological system advances.
The simulation results help us better understand the complex interactions between the growing blood vessel network and a degrading scaffold structure, and identify the optimal combinations of geometric and degradation characteristics of tissue engineering scaffolds that support scaffold vascularization.
Ph.D. in Chemical Engineering, December 2013
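As a minimal, purely illustrative sketch of the kind of agent rule such a framework composes (not the thesis's multi-layered model or its experimentally derived rule base), the snippet below moves a single tip-cell agent one grid voxel at a time toward higher VEGF while staying inside open pore space. The grid size, VEGF field, and scaffold porosity are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-D domain: True marks open pore space, False marks solid scaffold struts.
scaffold = rng.random((50, 50)) > 0.3
# Illustrative VEGF field that increases with depth into the scaffold.
vegf = np.tile(np.linspace(0.0, 1.0, 50), (50, 1)).T

def step_tip_cell(pos, scaffold, vegf):
    """One agent rule: move to the open neighboring voxel with the highest VEGF,
    or stay put if no open neighbor improves on the current concentration."""
    r, c = pos
    best, best_val = pos, vegf[r, c]
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < scaffold.shape[0] and 0 <= nc < scaffold.shape[1]:
            if scaffold[nr, nc] and vegf[nr, nc] > best_val:
                best, best_val = (nr, nc), vegf[nr, nc]
    return best

pos, path = (0, 25), [(0, 25)]
for _ in range(60):
    pos = step_tip_cell(pos, scaffold, vegf)
    path.append(pos)
print("sprout tip trajectory (first five steps):", path[:5])
```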
- Title
- AN ENERGY EFFICIENT ROUTING PROTOCOL FOR WIRELESS SENSOR NETWORKS
- Creator
- Lara, Aurobinda
- Date
- 2012-04-27, 2012-05
- Description
-
Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. A wireless sensor network consists of nodes that can communicate with each other via wireless links. One way to support efficient communication between sensors is to organize the network into several groups, called clusters, with each cluster electing one node as its cluster head. Energy efficiency is of great importance for the wireless sensor network (WSN). A popular way to save energy is to construct clusters for data aggregation and forwarding. In this thesis a distributed clustering algorithm is studied to improve energy consumption efficiency. Observing that the cluster head has to lie within the transmission range of the base station (sink node) and that the distance between the cluster head and the base station is critical for energy consumption performance, we propose a pseudo-cluster and virtual hierarchical clustering scheme (PC-LEACH), which considers the power level of the non-cluster-head nodes and their residual energy level during the cluster head selection stage. Consequently, the chance of being cluster head is better balanced among all nodes. Simulation results show that the scheme achieves a longer network lifetime than the well-known LEACH protocol.
M.S. in Electrical Engineering, May 2012
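For context, the sketch below shows the standard LEACH cluster-head election threshold together with one illustrative way to weight it by residual energy, which is the general flavor of modification that PC-LEACH-style schemes apply. The weighting rule and parameters are assumptions for the example, not the thesis's exact selection rule, and the bookkeeping that excludes recent cluster heads from re-election is omitted.

```python
import random

def leach_threshold(p, round_no):
    """Standard LEACH election threshold T(n) = p / (1 - p * (r mod 1/p)) for eligible nodes."""
    return p / (1.0 - p * (round_no % int(1.0 / p)))

def elect_cluster_heads(nodes, p, round_no, residual_energy, e_max):
    """Elect cluster heads, scaling the threshold by each node's remaining energy
    (an illustrative energy-aware tweak, not the thesis's PC-LEACH rule)."""
    heads = []
    for n in nodes:
        t = leach_threshold(p, round_no) * (residual_energy[n] / e_max)
        if random.random() < t:
            heads.append(n)
    return heads

random.seed(1)
nodes = list(range(100))
energy = {n: random.uniform(0.2, 1.0) for n in nodes}
print("round-0 cluster heads:", elect_cluster_heads(nodes, 0.05, 0, energy, 1.0))
```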
- Title
- DYNAMIC CONIC FINANCE: NO-ARBITRAGE PRICING AND NO-GOOD-DEAL PRICING FOR DIVIDEND-PAYING SECURITIES IN DISCRETE-TIME MARKETS WITH TRANSACTION COSTS
- Creator
- Rodriguez, Rodrigo
- Date
- 2012-06-27, 2012-07
- Description
-
This thesis studies no-arbitrage pricing and dynamic conic finance for dividend-paying securities in discrete-time markets with transaction costs. The first part investigates no-arbitrage pricing for dividend-paying securities in discrete-time markets with transaction costs. We introduce the value process and the self-financing condition in our context. Then, we prove a version of the First Fundamental Theorem of Asset Pricing. Specifically, we prove that the no-arbitrage condition under the efficient friction assumption is equivalent to the existence of a risk-neutral measure. We formulate an appropriate notion of a consistent pricing system in our set-up, and we prove that if there are no transaction costs on the dividends paid by the securities, then the no-arbitrage condition under the efficient friction assumption is equivalent to the existence of a consistent pricing system. We finish the chapter by deriving dual representations for the superhedging ask price and subhedging bid price of a derivative contract. The second part studies dynamic conic finance in the set-up introduced in the first part. We formulate the no-good-deal condition in terms of a family of dynamic coherent risk measures, and then we prove a version of the Fundamental Theorem of No-Good-Deal Pricing. The Fundamental Theorem of No-Good-Deal Pricing provides a necessary and sufficient condition for the no-good-deal condition to hold. Next, we study the no-good-deal ask and bid prices of a derivative contract. We particularize our results to the dynamic Gain-Loss Ratio, and compute the no-good-deal prices of European-style Asian options in a market with transaction costs.
Ph.D. in Applied Mathematics, July 2012
- Title
- KILOMETER-SPACED GNSS ARRAY FOR IONOSPHERIC IRREGULARITY MONITORING
- Creator
- Su, Yang
- Date
- 2017, 2017-05
- Description
-
This dissertation presents automated, systematic data collection, processing, and analysis methods for studying the spatial-temporal properties of Global Navigation Satellite Systems (GNSS) scintillations produced by ionospheric irregularities at high latitudes, using a closely spaced multi-receiver array deployed in the northern auroral zone. The main contributions include 1) automated scintillation monitoring, 2) estimation of drift and anisotropy of the irregularities, 3) error analysis of the drift estimates, and 4) multi-instrument study of the ionosphere. A radio wave propagating through the ionosphere, consisting of ionized plasma, may suffer from rapid signal amplitude and/or phase fluctuations known as scintillation. Caused by non-uniform structures in the ionosphere, intense scintillation can lead to GNSS navigation and high-frequency (HF) communication failures. With specialized GNSS receivers, scintillation can be studied to better understand the structure and dynamics of the ionospheric irregularities, which can be parameterized by altitude, drift motion, anisotropy of the shape, horizontal spatial extent, and their time evolution. To study the structuring and motion of ionospheric irregularities at the sub-kilometer scale sizes that produce L-band scintillations, a closely spaced GNSS array has been established in the auroral zone at Poker Flat Research Range, Alaska to investigate high-latitude scintillation and irregularities. Routinely collecting low-rate scintillation statistics, the array database also provides 100 Hz power and phase data for each channel at the L1/L2C frequencies. In this work, a survey of the seasonal and hourly dependence of L1 scintillation events over the course of a year is discussed. To efficiently and systematically study scintillation events, an automated low-rate scintillation detection routine is established and performed for each day by screening the phase scintillation index. The spaced-receiver technique is applied to cross-correlated phase and power measurements from the GNSS receivers. Results of horizontal drift velocities and anisotropy ellipses derived from the parameters are shown for several detected events. The results show the possibility of routinely quantifying ionospheric irregularities by drifts and anisotropy. Error analysis on the estimated properties is performed to further evaluate the estimation quality. Uncertainties are quantified by ensemble simulation of noise on the phase signals carried through to the observations of the spaced-receiver linear system. These covariances are then propagated through to uncertainties on drifts. A case study of a single scintillating satellite observed by the array is used to demonstrate the uncertainty estimation process. The distributed array is used in coordination with other measuring techniques such as incoherent scatter radar and optical all-sky imagers. The scintillations are correlated with auroral activity, based on all-sky camera images. Measurements and uncertainty estimates over a 30-minute period are compared to a collocated incoherent scatter radar and show good agreement in horizontal drift speed and direction during periods of scintillation for cases when the characteristic velocity is less than the drift velocity. The methods demonstrated are extensible to other zones and other GNSS arrays of varying size, number, ground distribution, and transmitter frequency.
Ph.D. in Mechanical and Aerospace Engineering, May 2017
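A minimal sketch of the spaced-receiver idea behind the drift estimates: cross-correlate detrended phase records from two receivers and convert the lag at the correlation peak, over a known baseline, into an apparent drift speed. The synthetic 100 Hz data, 150 m baseline, and single-baseline treatment are assumptions for the example; the thesis's estimator solves the full multi-baseline anisotropy problem with uncertainty propagation.

```python
import numpy as np

def apparent_drift_speed(sig_a, sig_b, fs, baseline_m):
    """Apparent drift speed along the A->B baseline from the lag (seconds)
    by which the scintillation pattern at receiver B trails receiver A."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    xcorr = np.correlate(b, a, mode="full")            # c[k] = sum_n b[n + k] * a[n]
    lag = (np.argmax(xcorr) - (len(a) - 1)) / fs       # > 0 when B lags A
    return float("inf") if lag == 0 else baseline_m / lag

# Synthetic test: the same 100 Hz phase record reaches receiver B 0.25 s after A,
# mimicking an irregularity pattern drifting across a 150 m baseline (about 600 m/s).
fs, baseline = 100.0, 150.0
rng = np.random.default_rng(0)
pattern = np.cumsum(rng.normal(size=3000)) / 50.0
delay = int(0.25 * fs)
rx_a, rx_b = pattern[delay:], pattern[:-delay]
print(f"apparent drift speed: {apparent_drift_speed(rx_a, rx_b, fs, baseline):.0f} m/s")
```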
- Title
- AGENT-BASED MODELING OF ANGIOGENESIS: EXPLORATION OF THE EFFECTS OF VEGF DELIVERY STRATEGIES ON PROMOTING ANGIOGENESIS
- Creator
- Xiao, Nan
- Date
- 2015, 2015-05
- Description
-
This is a dissertation about three-dimensional agent-based modeling (ABM) of angiogenesis within porous scaffolds. Tissue engineering technology provides great benefits for humanity in maintaining healthy tissue formation and disease rehabilitation. However, biomedical experiments, especially animal experiments, are very costly, time-consuming, and require a high technological level of equipment. Computational modeling can provide an efficient alternative to biomedical experiments in strategy design and assist clinical research. To simulate the angiogenesis process, an agent-based model was developed using the Java-based Repast toolkit. The purpose of this research is to explore the effects of different Vascular Endothelial Growth Factor (VEGF) delivery methods in promoting angiogenesis. The work here includes four parts: a) model verification by comparing simulation results with experimental results; b) exploration of different VEGF delivery methods by changing total dose and release rate; c) exploration of the effects of prevascularization strategies; d) development of a tissue cell VEGF secretion model. The simulation results showed that angiogenesis can be promoted by increasing the VEGF total dose or decreasing the release rate; prevascularized scaffolds can improve new vascular network formation and result in better invasion depth; and pre-seeded tissue cells in the scaffold can provide a continuous source of VEGF and promote angiogenesis. This ABM can provide a good reference for the design of biomedical applications.
M.S. in Chemical Engineering, May 2015
- Title
Using MITSIMLAB to Generate Dynamic Traffic for NS2 Simulation of VANET
- Creator
- Diao, Zhaoshi
- Date
- 2011-04-25, 2011-05
- Description
-
The vehicular ad hoc network (VANET) has attracted a lot of attention due to its interesting and promising functionalities, including vehicular safety, traffic congestion avoidance, and location-based services. However, using a real VANET for such research is too costly. Simulation of VANETs is useful and can address this problem well. Nevertheless, many VANET simulations are based on simple road networks and relatively simple mobility models. With such road networks and mobility models, the results of VANET simulations can be unrealistic and inaccurate. Therefore, MITSIMLAB, a transportation system simulator developed by the Massachusetts Institute of Technology Intelligent Transportation Systems Program, is introduced. In MITSIMLAB, a real-world road network can be generated. Moreover, the mobility models in MITSIMLAB are more realistic. However, MITSIMLAB is a transportation system simulator and cannot be used to simulate a VANET directly, while NS2 can simulate a VANET properly. NS2 is free, open-source software that is widely used and successfully simulates plenty of situations in the wireless environment. It can simulate the communication protocols and applications of a VANET well, but it cannot generate road networks and mobility models to simulate realistic traffic by itself. As a result, it is important to incorporate MITSIMLAB, with its realistic road networks and mobility models, into NS2. In this thesis, a method to translate the output file of MITSIMLAB into the format of NS2 is proposed. In addition, a road network based on the IIT main campus is generated using MITSIMLAB. After translating it into a format usable by NS2, a VANET based on the map of the IIT main campus with realistic mobility models can be simulated.
M.S. in Electrical Engineering, May 2011
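As a rough illustration of the trace-translation step, the sketch below writes an ns-2 node-movement (Tcl) file consisting of initial `set X_/Y_/Z_` positions and timed `setdest` commands. The input is assumed here to be a simple CSV of (time, vehicle id, x, y) rows and a constant speed is used for every waypoint; both are simplifications, and the real MITSIMLAB output format requires its own parser, so this is not the converter developed in the thesis.

```python
import csv

def write_ns2_mobility(trace_csv, out_tcl, speed_mps=13.9):
    """Translate a vehicle trajectory trace into an ns-2 node-movement Tcl file.
    Assumes CSV rows of (time_s, vehicle_id, x_m, y_m); a real MITSIMLAB trace
    has a different format and would need its own parsing step."""
    initial, moves = {}, []
    with open(trace_csv, newline="") as f:
        for row in csv.reader(f):
            t, vid, x, y = float(row[0]), int(row[1]), float(row[2]), float(row[3])
            if vid not in initial:
                initial[vid] = (x, y)      # first sighting sets the start position
            else:                          # later sightings become timed waypoints
                moves.append(f'$ns_ at {t:.2f} "$node_({vid}) setdest {x:.2f} {y:.2f} {speed_mps:.2f}"\n')
    with open(out_tcl, "w") as f:
        for vid, (x, y) in sorted(initial.items()):
            f.write(f"$node_({vid}) set X_ {x:.2f}\n")
            f.write(f"$node_({vid}) set Y_ {y:.2f}\n")
            f.write(f"$node_({vid}) set Z_ 0.0\n")
        f.writelines(moves)

# Example use (file names are placeholders):
# write_ns2_mobility("mitsim_trace.csv", "vanet_scenario.tcl")
```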
- Title
- DEVELOPMENT OF A VERSATILE WIRELESS NETWORKING TESTBED AND ITS APPLICATIONS
- Creator
- Paladugu, Ravi Kiran
- Date
- 2011-07-26, 2011-07
- Description
-
Wireless networks have been an essential part of communication and a very active research interest in the last century. They represent a truly revolutionary paradigm shift, enabling multimedia communications between people and devices from any location. As wireless networking becomes increasingly ubiquitous, there has been major development in communication hardware and protocol stacks for better quality of service, increased throughput, reduced latency, reduced energy consumption, security, etc. As wireless networks continue to develop, usage has grown day by day and they have become a critical part of home, business, and industrial infrastructure. To meet all these increasing demands of growing wireless networks and standards, researchers are providing new networking technologies. However, these technologies must be tested before they can be released for mainstream use. As academic research in wireless networking relies heavily on simulation, accuracy depends on the fidelity of the models used in simulation. Even the well-known simulators used in academic research, like ns-3 and QualNet, fail to provide accurate behavior of signal propagation and wireless channels. Without high confidence in the accuracy of wireless network simulation tools, it is difficult to make concrete progress in cross-layer protocol optimization research. In this thesis, we built a versatile wireless networking testbed, which can support a variety of applications. This testbed supports Multi-Radio Multi-Channel (MRMC) wireless mesh networks and can implement various multi-radio routing protocols. We have made several modifications to the wireless interface device drivers to improve the performance of multichannel protocols. Furthermore, we discuss the details of our testbed and its implementation.
M.S. in Electrical and Computer Engineering, July 2011
- Title
- APPLICATION OF LONG SPAN TRUSSES AND THEIR INTERACTION WITH ATHLETIC PROGRAM AND OCCUPANT AWARENESS
- Creator
- Marshall, Kristen
- Date
- 2011-11-25, 2011-12
- Description
-
The athletic facility at the Illinois Institute of Technology is currently insufficient not only for the school's varsity athletes, but also for the school's club teams and student body. A larger facility is needed to supplement the Keating Sports Center, which is small and currently the only athletic center for the campus. This new facility will include a lightweight and inventive structural design, in combination with architectural engineering ideals, that challenges the Miesian typology of the rest of the campus. My project will incorporate an original structural design and explore the use of suitable building enclosure materials and their thermal properties in order to make the building as efficient as possible.
M.S. in Architecture, December 2011
- Title
FABRICATION OF POLYMER/CLAY NANOCOMPOSITES AND DEVELOPMENT OF CLAY DIGESTION METHODS
- Creator
- Jin, Zhen
- Date
- 2014, 2014-07
- Description
-
This thesis reports on our preliminary development of methods used to assess the risks that polymer/clay nanocomposite (PCN) food packaging poses to consumers. PCN with 1% - 7% (w/w) montmorillonite (MMT) clay and 3 mass equivalents of maleic anhydride-grafted polyethylene (MAPE) as a compatibilizer dispersed in low-density polyethylene (LDPE) was successfully extruded into thin, free-standing films using a pilot-scale microcompounder with a 65 mm film device. These films had good optical clarity and a reasonably consistent thickness of 35 ± 3 μm. An oxygen permeability analyzer was used to measure the oxygen transmission rate and permeability of these fabricated films to demonstrate that they perform similarly to PCN barrier materials intended for commercial applications; these results showed that the films with the highest amount of added clay had better barrier properties than the neat LDPE films. In preparation for experiments to assess whether clay particles can be released from these materials during intended conditions of use, we also explored effective digestion and trace-metal analysis (Inductively Coupled Plasma-Optical Emission Spectroscopy) methods for both pure clay and MMT/MAPE/LDPE films. This work resulted in an effective digestion protocol to fully digest neat clays and PCN films, as well as an analysis method that provides a 5-orders-of-magnitude linear detection range and single-digit parts-per-billion detection limits for aluminum and magnesium. Silicon was a more challenging element, and efforts to eliminate environmental contamination of samples with this element were unsuccessful. While the work presented in this thesis is largely preliminary and numerous questions remain unanswered, the PCN fabrication and characterization methods developed here will be invaluable in our future efforts to understand the risks that nanocomposite food packaging materials pose to human health.
M.S. in Food Process Engineering, July 2014
- Title
- ELIASHBERG ANALYSIS OF CUPRATE OXIDE SUPERCONDUCTORS
- Creator
- Ahmadi, Omid
- Date
- 2011-11, 2011-12
- Description
-
In this thesis, evidence for antiferromagnetic spin fluctuations as the pairing glue in high temperature superconductors is presented through a modified Eliashberg analysis of experimental tunneling data of Bi2212 over a wide range of doping. In particular, the normalized conductance data of the junctions, from optimal to overdoped, will be fitted at T=0K using d-wave Eliashberg equations where the spectral function is modeled after spin fluctuation spectra seen in experiments. The corresponding real and imaginary diagonal and anomalous self-energy curves are extracted and compare well to photoemission experiments. This is followed by a temperature dependent Eliashberg analysis where the spectral function is now temperature dependent, based on trends seen in inelastic neutron scattering experiments. New results for temperature dependent self-energy curves are also compared to experiment with slight deviations. Finally, the Josephson product is calculated as an independent check of the tunneling matrix used in fitting the data.
Ph.D. in Physics, December 2011
- Title
- SRAM DESIGN BASED ON CNFET: A DISCUSSION ON CIRCUIT, PARAMETER AND DIAMETER VARIATION
- Creator
- Yu, Zhiyuan
- Date
- 2011-05-05, 2011-05
- Description
-
This thesis describes the effort in designing SRAMs based on the Carbon Nanotube Field Effect Transistor (CNFET), and covers several aspects including circuit structure, parameters, layout, and the detection of diameter variation. It aims at providing a primitive reference on the topic of employing CNFETs in realistic SRAM design. In this thesis, we propose a guideline for choosing appropriate transistor ratios with respect to differently selected diameters in a conventional 6-T SRAM. Constraints on the transistor ratios are established, followed by optimization of the ratios with regard to the Static Noise Margin (SNM) and Read Noise Margin (RNM) of the cell. With the optimal parameters, the CNFET cell can achieve 41.41% and 1.26% improvement over traditional CMOS in SNM and RNM, respectively. Then we propose a column-based monitoring circuit which is capable of detecting variation in the diameter of the tubes. It is based on a novel layout of the CNFET 6-T SRAM, which assigns all cells in a column of the SRAM array to the same group of Carbon Nanotubes (CNTs). This monitor outputs a digital signal indicating the impact of diameter variation on the delay of the circuit, and enables further mitigation of the variation. Alternatively, a new 6-T SRAM cell structure is proposed for optimizing the performance of the cell at very low or sub-threshold supply voltages. Compared with a traditional 6-T cell, simulation results show that the reading and writing delays are improved by more than 80% and 75% at a 0.4V supply voltage, respectively. It also achieves power-delay product (PDP) reductions of 70% and 91% for reading and writing operations.
M.S. in Electrical Engineering, May 2011
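Because the transistor-ratio optimization revolves around static noise margin, a small sketch of the usual Seevinck-style SNM extraction is included here: rotate the butterfly curves by 45 degrees and take the smaller lobe's largest embedded square. The tanh inverter curve, supply voltage, and gain below are generic stand-ins, not CNFET device models from the thesis.

```python
import numpy as np

def inverter_vtc(vin, vdd=0.9, vm=0.45, gain=8.0):
    """Smooth stand-in for one cell inverter's transfer curve (illustrative, not a CNFET model)."""
    return vdd / 2.0 * (1.0 - np.tanh(gain * (vin - vm) / vdd))

def static_noise_margin(vtc, vdd=0.9, n=4001):
    """Seevinck-style SNM: rotate the butterfly plot by 45 degrees, take the largest
    vertical gap inside each lobe, and convert that diagonal into the square's side."""
    t = np.linspace(0.0, vdd, n)
    xa, ya = t, vtc(t)          # curve A: (V1, f(V1))
    xb, yb = vtc(t), t          # curve B: (f(V2), V2)
    # rotated coordinates: u along (1, -1), v along (1, 1)
    ua, va = (xa - ya) / np.sqrt(2), (xa + ya) / np.sqrt(2)
    ub, vb = (xb - yb) / np.sqrt(2), (xb + yb) / np.sqrt(2)
    grid = np.linspace(max(ua.min(), ub.min()), min(ua.max(), ub.max()), n)
    va_g = np.interp(grid, ua, va)               # ua increases with t
    vb_g = np.interp(grid, ub[::-1], vb[::-1])   # ub decreases with t, so reverse it
    diff = va_g - vb_g
    lobes = (diff[diff > 0].max(initial=0.0), (-diff[diff < 0]).max(initial=0.0))
    return min(lobes) / np.sqrt(2)               # square side = diagonal / sqrt(2)

print(f"hold SNM of the illustrative cell: {static_noise_margin(inverter_vtc):.3f} V")
```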
- Title
- DEVELOPMENT OF A TESTBED FOR STUDYING SECURITY ISSUES IN VOIP NETWORKS
- Creator
- Olawoye, Oladeji
- Date
- 2011-12-06, 2011-12
- Description
-
VoIP is increasingly becoming an alternative to the traditional PSTN for telephony as a result of certain advantages and services offered by VoIP. VoIP will be considered able to fully replace the PSTN if it can provide the same or better quality of service and security guarantees as the PSTN. Delivering telephony services over a (best-effort and connectionless) IP data network faces two main issues not experienced in the PSTN: security and quality of service. Security, in the sense that telephone calls become susceptible to attacks known from the Internet; quality of service, in the sense that voice packets now have to compete with packets of other traffic for limited bandwidth. Research is ongoing in these two fields to make the experience of VoIP similar to the traditional telephone. The focus of this thesis work is on the security aspect. The SIP protocol has become one of the most popular signaling protocols used for VoIP. The SIP architecture is an open architecture originally developed for communications among trusted partners, and therefore much thought was not given to security. Adopting SIP as a main protocol for VoIP on the Internet, where there are lots of hostile users, calls for more ways to properly secure its use. The thesis work involves setting up a SIP-based VoIP network built on open-source SIP telephony platforms to study various security issues in a SIP-based VoIP network and to experiment with proposed detection mechanisms. The contribution of this work is the development of a graphical user interface in the UNIX environment, using Java, that makes the attack scenarios easy to execute and observe.
M.S. in Electrical Engineering, December 2011
- Title
- THERMAL AND FLUID FLOW FEASIBILITY STUDY OF A CIRCULAR COUETTE FLOW REACTOR VIA PLANAR LASER-INDUCED FLUORESCENCE IMAGING
- Creator
- Bittner, Peter R.
- Date
- 2012-04-28, 2012-05
- Description
-
Liquid fueled microcombustors face many challenges in their development, the most prominent being high temperature gradients and radiative effects. Because the walls of microcombustors are thin, they offer very little resistance to conductive heat transfer, regardless of the materials used. This can cause very high heat losses that lead to large temperature gradients in the gas compared to the nearly uniform temperatures inside conventional combustion chambers. In this investigation a circular Couette flow reactor (CCFR) and planar laser-induced fluorescence (PLIF) are used to examine the feasibility of studying vapor distributions of a monodisperse acetone droplet stream, formed by a vibrating orifice aerosol generator (VOAG), exposed to combinations of varying velocity gradients, temperature gradients, and radiant heating. The acetone droplets are injected through various ports on the CCFR to vary the time for vaporization of the droplets inside the reactor. Initial results of the operating CCFR use acetone droplets seeded into the test section to demonstrate the fluorescence of the liquid and vapor acetone within the test section.
M.S. in Mechanical, Materials, and Aerospace Engineering, May 2012
- Title
- SOFT ERROR TOLERANT LATCH CIRCUIT DESIGN FOR DEEPLY SCALED CMOS TECHNOLOGY
- Creator
- Nan, Haiqing
- Date
- 2012-01-25, 2012-05
- Description
-
As CMOS technology keeps scaling down, circuit designers face a variety of challenges. Due to the scaling of supply voltage and node capacitance, digital circuits are more susceptible to noise and variations, which cause reliability issues such as soft errors. Traditionally, soft-error-aware VLSI design has been limited to applications that require high reliability and operate in high-radiation environments, such as avionics, medical equipment, the space industry, and military applications. However, as CMOS technology scales down to the nanometer region, VLSI circuits can also be affected by soft errors at ground level, where radiation energy is low. In this thesis, a total of five soft-error-tolerant latch designs are proposed: HLR-1, HLR-2, HLR-CG1, HLR-CG2, and HLR-CG3. All the proposed designs protect the internal nodes as well as the output node against soft errors regardless of the radiation energy. The proposed HLR-1 and HLR-2 latch circuits tolerate soft errors for non-CG (non-clock-gated) systems. Since the proposed HLR-1 and HLR-2 designs take advantage of a floating node to tolerate soft errors, these two designs cannot be applied with clock gating techniques, and their minimum clock frequency should be greater than 16 MHz in order to maintain correct logic at the floating node. The power consumption and circuit delay of the proposed HLR-1 and HLR-2 designs are very close. The proposed HLR-1 design achieves a small benefit in power and delay compared with the proposed HLR-2 design, but the proposed HLR-2 circuit reduces area by 3.5% compared to the proposed HLR-1 circuit. The proposed HLR-CG1, HLR-CG2, and HLR-CG3 latch designs fully tolerate soft errors regardless of radiation energy for both CG and non-CG systems. Due to the auto-correction mechanism embedded in the proposed HLR-CG1, HLR-CG2, and HLR-CG3 designs, any soft error at any location will be automatically corrected without generating any floating nodes. The proposed HLR-CG3 features the smallest power consumption and delay, but it has the largest area overhead compared to the HLR-CG1 and HLR-CG2 circuits. The proposed HLR-CG1 design features the smallest area compared with the HLR-CG2 and HLR-CG3 designs. The design cost of the HLR-CG2 design is between those of the proposed HLR-CG1 and HLR-CG3 designs. All the proposed designs achieve faster speed and smaller PDP compared to previous hardening techniques. Compared to the proposed HLR-1 design, previous designs increase power by 3.77% on average, delay by 272.74% on average, and PDP by 300.29% on average, and decrease area by 7.09% on average. Compared to the proposed HLR-2 design, previous designs increase power by 3.77% on average, delay by 272.40% on average, and PDP by 299.89% on average, and decrease area by 3.93% on average. Compared to the proposed HLR-CG1 design, previous designs increase area by 19.65% on average, delay by 213.14% on average, and PDP by 203.78% on average, and decrease power by 5.82% on average. Compared to the proposed HLR-CG2 design, previous designs increase area by 6.49% on average, delay by 193.28% on average, PDP by 223.45% on average, and power by 6.51% on average. Compared to the proposed HLR-CG3 design, previous designs increase delay by 272.18% on average, PDP by 314.38% on average, power by 8.01% on average, and area by 2.93% on average.
Ph.D. in Electrical and Computer Engineering, May 2012