Search results
(1,341 - 1,360 of 4,544)
- Title
- ANALYSIS OF THE APPLICATION OF THE LIAR MACHINE TO THE Q-ARY PATHOLOGICAL LIAR GAME WITH A FOCUS ON LOWER DISCREPANCY BOUNDS
- Creator
- Williamson, James W
- Date
- 2011-12-12, 2011-12
- Description
-
The binary pathological liar game, as described by Ellis and Yan in [Ellis and Yan, 2004], is a variation of the original liar game, as described by Berlekamp, Rényi, and Ulam in [Berlekamp, 1964], [Rényi, 1961], and [Ulam, 1976]. This two-person questioner/responder game is played for n rounds on a set of M messages. The game begins with the responder selecting a message from the set M. Each round the questioner partitions the messages into two distinct subsets. The responder selects one subset, and elements not in the selected subset each accumulate a lie. Elements accumulating more than e lies are eliminated. The questioner wins the original game provided that, after the completion of n rounds, there is at most one surviving message. The questioner wins the pathological game provided there is at least one surviving message. The focus here is to generalize the pathological game from two subsets to q subsets, with an emphasis on providing a winning condition for the questioner. The q-ary variant of the pathological liar game has been studied, with first results in [Ellis and Nyman, 2009]. We let the number of rounds the game is played go to infinity, with e a linear fraction of n, and present an upper bound on the number of messages required by the questioner to win the q-ary pathological liar game. The liar machine and linear machine as discussed by Cooper and Ellis in [Cooper and Ellis, 2010] have been adapted to fit this generalization and are used to track the approximate progression of the game. We provide an upper bound on the initial number of chips by bounding the discrepancy between the actual progression of the game and the approximate progression of the game as described by the linear and liar machines respectively. A similar upper bound can be found in [Tietzer, 2011], with different elements in the argument.
Using methods similar to those found in [Cooper and Ellis, 2010], we provide a partial order argument to show that the winning condition bound for one response strategy by the questioner transfers to all possible response strategies.
M.S. in Applied Mathematics, December 2011
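The round mechanics described in this abstract can be sketched directly. The following is a minimal simulation under illustrative strategies (a random questioner partition and a greedy adversarial responder), not the strategies analyzed in the thesis:

```python
import random

def pathological_liar_game(M, n, e, q, seed=0):
    """One play of the q-ary pathological liar game.

    M -- initial number of messages, each starting with 0 lies
    n -- number of rounds
    e -- maximum number of lies a message may accumulate
    q -- number of subsets in the questioner's partition

    Returns the number of surviving messages; the questioner wins the
    pathological game iff the result is positive.
    """
    rng = random.Random(seed)
    lies = [0] * M                      # lies[i] = lies accumulated by message i
    for _ in range(n):
        # Questioner: random q-way partition (stand-in for a real strategy).
        part = [rng.randrange(q) for _ in lies]

        # Responder: greedily choose the answer leaving the fewest survivors.
        def survivors_if(ans):
            return sum(1 for l, p in zip(lies, part)
                       if (l if p == ans else l + 1) <= e)
        ans = min(range(q), key=survivors_if)

        # Messages outside the chosen subset each accumulate a lie;
        # messages with more than e lies are eliminated.
        lies = [l if p == ans else l + 1 for l, p in zip(lies, part)]
        lies = [l for l in lies if l <= e]
    return len(lies)
```

With e at least n, no message can ever be eliminated, so all M messages survive regardless of strategy; smaller e makes the responder's greedy play lethal quickly.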
- Title
- NETWORK CONGESTION / RESOURCE ALLOCATION GAME
- Creator
- Shin, Junghwan
- Date
- 2013, 2013-12
- Description
-
We first consider the K-user (player) resource allocation problem when the resources or strategies are associated with homogeneous functions. Further, we consider the K-user (player) matroid resource allocation problem satisfying the specified requirements of the users, which are maximal independent sets of a matroid. The objective is to choose strategies so as to minimize the average maximum cost incurred by a user, where the cost of a strategy is the sum of the costs of the elements comprising the strategy. For k-commodity networks with heterogeneous latency functions, we consider the price of anarchy (PoA) in multi-commodity selfish routing problems where the latency function of an edge has a heterogeneous dependency on the flow commodities, i.e., when the delay depends on the flow of individual commodities rather than on the aggregate flow. We then consider the PoA in multi-commodity atomic flows under the same heterogeneous dependency. Lastly, we show improved bounds on the price of anarchy for uniform latency functions, where each edge of the network has the same delay function. We prove bounds on the price of anarchy for the above classes of functions. Our bounds illustrate how the PoA depends on θ and the coefficients g_ij. Finally, we consider security aspects of network routing in a game-theoretic framework in which an attacker has the ability to intrude upon edges of the network, while the goal of the designer is to choose routing paths.
Ph.D. in Computer Science, December 2013
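The price of anarchy compared in this abstract is commonly illustrated with Pigou's classic two-link network (a textbook example, not taken from the thesis), where the ratio of Nash to optimal cost attains the 4/3 bound for affine latencies:

```python
def pigou_price_of_anarchy():
    """Price of anarchy in Pigou's two-link network (illustrative).

    One unit of flow from s to t over two parallel edges:
      edge 1: latency l1(x) = x   (congestion-dependent)
      edge 2: latency l2(x) = 1   (constant)

    Nash equilibrium: all flow takes edge 1 (its latency never exceeds 1),
    total cost = 1.  Social optimum: split to minimize x*x + (1 - x),
    minimized at x = 1/2 with total cost 3/4.  PoA = 1 / (3/4) = 4/3,
    the Roughgarden-Tardos bound for affine latency functions.
    """
    nash_cost = 1.0 * 1.0                 # all flow on edge 1 at latency 1
    x = 0.5                               # socially optimal split
    opt_cost = x * x + (1 - x) * 1.0      # 0.25 + 0.5 = 0.75
    return nash_cost / opt_cost

print(pigou_price_of_anarchy())   # ≈ 1.333 (= 4/3)
```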
- Title
- The Relationship Between Default and Volatility and Its Impact on Counterparty Credit Risk
- Creator
- Yang, Jiarui
- Date
- 2012-07-16, 2012-07
- Description
-
This thesis presents a unified framework for studying the impact of the correlation between interest rate volatility and counterparty default probability on the credit risk of collateralized interest-rate derivative contracts. A defaultable term structure model is proposed in which the default risk is correlated with interest rate volatility. In particular, an existence and uniqueness theorem for this model is proved. The pricing formula for credit derivatives under the proposed model is derived, and the stochastic interest rate model and credit model are calibrated together. Finally, given all the parameters calibrated by the unscented Kalman filter, a sensitivity analysis of the impact of the correlation between interest rate volatility and a counterparty's default probability on the credit risk of collateralized interest-rate derivative contracts is presented.
Ph.D. in Applied Mathematics, July 2012
- Title
- FPGA IMPLEMENTATION OF ULTRASONIC FLAW DETECTION ALGORITHM BASED ON SUPPORT VECTOR MACHINE CLASSIFICATION
- Creator
- Jiang, Yiyue
- Date
- 2016, 2016-12
- Description
-
In this study, a Support Vector Machine (SVM) classification method for analyzing ultrasound signals is implemented on an FPGA based on the Xilinx Zynq SoC. The SVM processor aims at classifying A-scan data obtained by an ultrasonic sensor. To reduce development time, hardware/software co-design tools such as Xilinx System Generator and Vivado have been used. The SVM kernel function is implemented using DSP slices and block RAMs. The Advanced eXtensible Interface (AXI) bridges the ARM core and the FPGA fabric for convenient communication. The main objective of this study is to achieve robust detection of ultrasonic flaw echoes in real time using an SVM algorithm. The implementation shows that the architecture can be realized on a Xilinx ZedBoard. It runs at a 100 MHz clock frequency and can calculate the SVM classification for 1024 feature-space points in under 0.02 ms.
M.S. in Electrical Engineering, December 2016
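The computation mapped onto the DSP slices and block RAMs above is a sum of kernel evaluations against stored support vectors. A reference-level sketch follows, using an RBF kernel for illustration; the thesis's actual kernel choice and fixed-point details are not specified here:

```python
import numpy as np

def svm_decide(x, sv, alpha_y, b, gamma=0.5):
    """Evaluate a trained kernel-SVM classifier at point x.

    x       -- query feature vector, shape (d,)
    sv      -- support vectors, shape (n_sv, d) (the block-RAM contents)
    alpha_y -- alpha_i * y_i for each support vector
    b       -- bias term

    The per-support-vector multiply-accumulate below is what maps
    naturally onto FPGA DSP slices.
    """
    k = np.exp(-gamma * np.sum((sv - x) ** 2, axis=1))  # RBF kernel row
    return np.sign(alpha_y @ k + b)                     # +1 / -1 class label
```

A usage example: with support vectors at 0 and 2 carrying opposite labels, a query near 0 classifies as +1 and a query near 2 as -1.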
- Title
- APPROXIMATION OF STOCHASTIC DIFFERENTIAL EQUATIONS WITH NON-GAUSSIAN NOISE AND APPLICATION TO A VOLATILITY MODEL
- Creator
- Jianhua, Wang
- Date
- 2015, 2015-05
- Description
-
In recent decades, stochastic processes with non-Gaussian noise have been widely utilized in financial models. The α-stable Lévy motion, one type of non-Gaussian noise process, provides robust data fits and event simulations in the financial world. Due to its heavy tails and jump paths, α-stable Lévy motion modeling has become extremely popular among financial decision makers and risk hedgers. The α-stable Lévy motion, however, usually has neither a closed-form probability density function nor higher moments, which raises implementation obstacles. We exhibited distributions of α-stable random variables for different parameter values. In contrast to the Gaussian distribution, the α-stable distribution exhibits heavy tails and skewed shapes for various parameters. We analyzed jump behaviors along with calculating tail probabilities. We exploited a scenario-simulation method to solve stochastic differential equations driven by α-stable Lévy motions. Beyond the Euler scheme, we derived two numerical schemes of strong convergence order 1.0 via the Wagner-Platen expansion. After executing the schemes on the Merton jump-diffusion model, we numerically verified the convergence order of the schemes. We successfully applied the derived schemes to simulate a sophisticated stochastic volatility model with skewed α-stable Lévy motions. With the approximated underlying asset process, we priced a European call option and visualized the implied volatility curve. As a result, we concluded that the logarithm of the underlying asset follows a skewed distribution rather than a symmetric one.
M.S. in Applied Mathematics, May 2015
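The Euler scheme mentioned in this abstract differs from the Brownian case mainly in how the driving increments are drawn and scaled. A minimal sketch, assuming SciPy's stable-law sampler and a generic drift/diffusion SDE (not the thesis's higher-order Wagner-Platen schemes):

```python
import numpy as np
from scipy.stats import levy_stable

def euler_alpha_stable(x0, drift, diffusion, T, n, alpha=1.7, beta=0.0, seed=0):
    """Euler scheme for dX = drift(X) dt + diffusion(X) dL, with L an
    alpha-stable Levy motion (illustrative sketch only).

    Stable increments over a step dt scale like dt**(1/alpha), the
    analogue of sqrt(dt) for Brownian motion (the alpha = 2 case).
    """
    dt = T / n
    # Pre-draw the n alpha-stable driving increments.
    dL = levy_stable.rvs(alpha, beta, size=n, random_state=seed) * dt ** (1 / alpha)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x[k + 1] = x[k] + drift(x[k]) * dt + diffusion(x[k]) * dL[k]
    return x

# Example: a geometric-type dynamic with 5% drift and 20% noise loading.
path = euler_alpha_stable(1.0, lambda x: 0.05 * x, lambda x: 0.2 * x, T=1.0, n=64)
```

Because α < 2 increments have heavy tails, individual steps can be very large; this is the jump behavior the abstract analyzes, and it is why strong-order analysis for these schemes is more delicate than in the Gaussian setting.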
- Title
- THE EVALUATION OF THERMAL INACTIVATION OF COXIELLA BURNETII NINE MILE PHASE II IN SKIM MILK BY INTEGRATED CELL CULTURE-POLYMERASE CHAIN REACTION (ICC-PCR) ASSAY
- Creator
- Zheng, Jiaojie
- Date
- 2014, 2014-07
- Description
-
Coxiella burnetii (C. burnetii) is an obligate intracellular bacterium that replicates exclusively in an acidified, lysosome-like vacuole, which makes C. burnetii more difficult to analyze than bacteria that can grow on standard liquid media. An Integrated Cell Culture-Polymerase Chain Reaction (ICC-PCR) assay has been developed as a potential alternative to animal bioassays for evaluating C. burnetii inactivation in milk. The aim of this thesis research is to demonstrate the usefulness of this assay for evaluating C. burnetii inactivation in skim milk and to compare the results with those found for whole milk by another researcher. Before the thermal studies, the thermal kinetics of heating skim milk in glass vials and the polymerase chain reaction (PCR) detection limit were determined. For thermal treatments, Ultra High Temperature (U.H.T.) skim milk containing C. burnetii at ~7.2 log10 genome equivalents/ml (ge/ml) was treated in submerged vials at 60 °C, 62 °C and 64 °C for various times. After serial dilution of the milk to 10^-6, triplicate Vero cell monolayers were infected at each dilution level for 48 hours, followed by 9 days of incubation after inoculum removal and the addition of fresh RPMI + 1% FBS media. Infected cells were freeze-thawed, followed by deoxyribonucleic acid (DNA) extraction and real-time quantitative PCR (qPCR) for the C. burnetii IS1111a gene. C. burnetii in samples was considered viable if the Day 9 post-infection (p.i.) level increased by ≥0.5 log10 C. burnetii ge/ml over the most concentrated Day 0 p.i. sample. The numbers of positive wells from each dilution were used to calculate the remaining viable C. burnetii/ml by the most probable number (MPN) method. The thermal kinetics profile for heating the skim milk showed that the come-up and cool-down times would not adversely affect the thermal treatment at 60 °C and 62 °C. The qPCR could detect the propagation of C. burnetii in skim milk containing as few as 120 C. burnetii ge/ml.
The ICC-PCR assay demonstrated that the thermal inactivation of C. burnetii in skim milk was faster than in whole milk at 62 °C and 64 °C. For the 62 °C treatment, infectious C. burnetii in skim milk was reduced by 1.3 log10 ge/ml at 10 minutes and was no longer infectious after 20 minutes, whereas C. burnetii in whole milk showed no obvious reduction after 10 minutes, a 3.7 log10 ge/ml reduction after 20 minutes, and was no longer infectious after 26 minutes. After a 6-minute treatment at 64 °C, infectious C. burnetii was reduced by 6.2 log10 ge/ml in skim milk vs. 3.8 log10 ge/ml in whole milk, with complete inactivation after 9 minutes for both milk types. The ICC-PCR assay is a specific and sensitive method to detect the inactivation of C. burnetii in skim milk, allows differentiation of the thermal inactivation kinetics of different types of milk, and may be useful for the evaluation of thermal and novel non-thermal processes for C. burnetii inactivation in milk.
M.S. in Food Safety and Technology, July 2014
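The log-reduction arithmetic reported above is straightforward to reproduce. A small sketch follows; the D-value step assumes first-order (log-linear) inactivation, which is an assumption on our part since the abstract reports raw log reductions rather than D-values:

```python
import math

def log10_reduction(n0, n_t):
    """Log10 reduction in viable count from an initial level n0 to n_t."""
    return math.log10(n0 / n_t)

def d_value(t_minutes, log_red):
    """Decimal reduction time D: minutes per 1-log10 kill, ASSUMING
    log-linear inactivation kinetics over the interval."""
    return t_minutes / log_red

# The reported 6.2-log10 reduction in 6 min at 64 °C (skim milk)
# would correspond to D ≈ 0.97 min under that assumption.
print(round(d_value(6, 6.2), 2))
```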
- Title
- STUDIES ON CONNECTIVE AND NEUROLOGICAL TISSUES IN RELATION TO DISEASE
- Creator
- Madhurapantula, Rama Sashank
- Date
- 2015, 2015-12
- Description
-
The structure of connective tissue is of great importance for the homeostasis of the cells present within it. Pathologies leading to changes in the structure of the extracellular matrix (ECM), in particular collagen, have been shown to play a pivotal role in the progression of various diseases. Similarly, changes in the structure of specific elements in neurological tissues, such as myelin, have been shown to elicit adverse responses to injury. This thesis explores two main aspects: 1) the structural changes brought about by high sugar concentrations, similar to those found in diabetic patients, to the structure of type I collagen, and 2) possible effects of traumatic brain injury (TBI) on the structure of neurons in rat brains. Specific changes in the structure and packing of collagens in various tissues could be potential therapeutic targets to control the progression of related diseases. However, the nature, specificity, and relevance of these changes at a molecular level are largely unknown and have been explored only sparsely. The result of non-enzymatic glycosylation, i.e. glycation, is the formation of sugar-mediated crosslinks within the native structure of type I collagen. The chemistry behind these crosslinks, also known as Advanced Glycation Endproducts (AGEs), has been known for decades. However, the exact locations or regions of high propensity for the formation of these crosslinks within the packing structure of collagen are largely unknown. The results presented in this thesis inform on the location of possible crosslinks using the principle of Multiple Isomorphous Replacement (MIR) and correlate the effects of crosslinks with the structural and functional sites present on the D-periodic arrangement of collagen into fibrils. An extension to this is the study of the effects of povidone-iodine on the packing structure of collagen. Iodine is used as a common disinfectant in surgery and first aid.
Prolonged treatment with iodine is detrimental to the structure of collagen underlying the wound site (surgical or otherwise). This is particularly important in large-surface-area wounds, as seen in open-heart, hip and joint replacement surgeries and amputations. Diabetic patients are more prone to injuries to limb extremities, and a common procedure to stop infections from spreading to the rest of the body is amputation of the limb with constant low-dose iodine treatment immediately following surgery for a certain length of time. The results presented in this thesis demonstrate specific disintegration of collagen fibrils in rat tail tendons following a short iodine treatment. This is detrimental for cellular activity, more so in processes like wound healing. TBI results in the loss of neurological control and/or function of the parts of the body governed by the injured region. The results presented herein inform and support the finding that neuroplasticity, in the hemisphere opposite to that where the injury was delivered, compensates for the functional deficits resulting from TBI. The data presented here can be used to develop rehabilitation regimens for TBI patients on a case-by-case basis to restore most of the observed functional deficits, and also as a predictor of the onset of secondary neurological disorders (for instance, amyloid-related pathologies) at a later stage in life.
Ph.D. in Biology, December 2015
- Title
- THE SINGLE BUILDING AS THE URBAN CATALYST
- Creator
- La Serna, Matias S.
- Date
- 2012-03-28, 2012-05
- Description
-
An identified strip of land in Chicago’s South Side has left an unmistakably large void within the grid of the city. Current city plans call for single-use and low density spaces to eventually fill the enormous void bounded by State Street to the East, and Federal Street to the West. Resisting the current pattern of architectural and urban segregation, this alternative proposes an ambitious plan to fill an entire block with a select and diverse range of program to invigorate a depleted urban area while simultaneously creating an identifiable architectural landmark. The sudden interruption of single-use occupation reclaims the architectural potential of a site burdened by its troubled past and serves as the catalyst to stimulate ambitious and diverse urban growth. Necessarily occupying the entire site for the urban development of the city, the building is faced with the challenge of expanding to fill the tremendous void imposed by the grid with as few program members as possible, all the while preserving the richness of urban overlaps otherwise afforded in tighter urban settings. The result is a single building that is both mindful of the independent needs of its occupants while simultaneously creating and maximizing shared spaces within the overlaps, generating program opportunities and interactions not otherwise afforded in a system of architectural fragmentation.
M.S. in Architecture, May 2012
- Title
- Thermoelectric Power Systems and the Energy-Water Nexus
- Creator
- Walker, Michael Edward
- Date
- 2012-04-26, 2012-05
- Description
-
The goal of this Thesis is the development of a comprehensive methodology to evaluate the total cost of water use in the recirculating cooling loops of thermoelectric power plants. This methodology expands upon the work presented in the literature to improve estimations of the economic impact of condenser fouling. The methods developed in this Thesis are incorporated into a user-friendly Combined Cost Model (CCM) interface that will allow future researchers, students and plant personnel to perform the same comparative analyses presented herein. The objective of this Thesis is the application of the CCM to determine the economic viability of using treated municipal wastewater (MWW) to replace freshwater for cooling in power plants with recirculating cooling systems. To accomplish this objective, a set of case study evaluations is included to (1) evaluate the sensitivity of the economic impact of fouling to condenser design and operation, (2) determine the cost of treated MWW use in pulverized coal power plants, and (3) compare the relative cost of degraded water use in advanced power systems such as IGCC and oxy-combustion. The results of these evaluations show that current freshwater prices do not provide an economic incentive to switch to treated MWW. However, results indicate that the breakeven differential price of freshwater, at which the total costs of using freshwater and treated MWW are equal, is only 0.52 $/1000 gal (USD 2009). In addition, the use of treated MWW for cooling is shown to be a better economic alternative to dry air cooling technology (DACT) for the conservation of freshwater resources. Cost-to-conservation estimates for treated MWW use are 1.1 $/1000 gal, in contrast to 5.6 $/1000 gal for DACT. This Thesis also presents a novel, hybrid coal conversion concept, the dry gasification oxy-combustion (DGOC) power cycle.
This process is similar to oxy-combustion in that it maintains a concentrated CO2 flue stream and does not utilize a complex separation step. However, coal conversion and sulfur removal are performed within a gasification unit. The DGOC cycle is estimated to achieve carbon capture and storage (CCS) goals with a higher efficiency than the leading alternative strategies.
Ph.D. in Chemical Engineering, May 2012
- Title
- METHODOLOGY FOR VEHICLE EMISSION IMPACTS ANALYSIS FROM SIGNAL TIMING OPTIMIZATION OF AN URBAN STREET NETWORK
- Creator
- Lu, Pu
- Date
- 2017, 2017-05
- Description
-
The pace of urban street capacity expansion is much slower than the growth of vehicle travel, leading to severe traffic congestion. Expanding capacity to mitigate congestion is not feasible in many cases due to high cost and space restrictions, so making more efficient use of the available capacity becomes the solution. Traffic signal optimization is one of the most widely used means of efficient capacity utilization. Concurrent with traffic signal optimization, smoother traffic operations, in terms of reasonably higher speeds and reduced traffic delay, will in turn change vehicle emissions. This research aims to quantify the changes in vehicle emissions resulting from traffic signal optimization by introducing a new methodology for quantifying network-wide vehicle emissions, with a real-world application to the Chicago urban network for validation. The proposed methodology considers undersaturated and oversaturated traffic conditions and urban street segments with varying speeds, for different types of vehicles and pollutants, by hour of the day and location within the network. It begins with information collection and a review of existing methods for urban street network vehicle emission estimation, intersection vehicle emission evaluation, and running-vehicle emission modeling. The proposed methodology focuses on three elements: estimation of emissions from vehicles stopped at intersections, estimation of emissions from vehicles cruising along segments, and analysis of network-wide vehicle emissions and changes in overall network vehicle emissions by time of day and by area.
The major steps of the methodology's application included: the use of the Chicago TRANSIMS model implementing optimized signal timing plans to obtain refined traffic volumes at intersections and on segments, increased vehicle operating speeds, changed green splits, and vehicle compositions for all intersections and segments in the urban street network; the application of an intersection vehicle emission model for stopped vehicles and a segment vehicle emission model for cruising vehicles; and the network-wide analysis of vehicle emission changes by vehicle type and pollutant type over a 24-hour period. The proposed methodology for intersection vehicle emission estimation was successfully applied to a dense urban street network in Chicago for each approach per cycle, and then extended across intersections and hours of the day to analyze the impact of traffic changes at intersections on exhaust emissions. To develop the network vehicle emission analysis method, it is essential to evaluate segment vehicle emissions. This is achieved using the concept of vehicle specific power, which estimates the emissions of cruising vehicles from vehicle speeds and speed changes, and hence allows analysis of the changes in segment vehicle emissions caused by the traffic volume changes derived from signal timing optimization. The decreased number of vehicles stopped at intersections after signal timing optimization reduces intersection emissions and hence overall network vehicle emissions. In addition to the reduction at intersections, increased vehicle speeds on segments could further reduce segment emissions.
Ph.D. in Civil Engineering, May 2017
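Vehicle specific power, mentioned in this abstract, is commonly computed with the light-duty formulation below. The coefficients are the widely cited generic light-duty values, not necessarily those used in the thesis:

```python
def vehicle_specific_power(v, a, grade=0.0):
    """Vehicle specific power (kW/tonne) for a light-duty vehicle.

    v     -- speed (m/s)
    a     -- acceleration (m/s^2)
    grade -- road grade as a fraction (small-angle approximation)

    Commonly cited light-duty coefficients: 0.132 for rolling
    resistance, 0.000302 for aerodynamic drag.
    """
    return v * (1.1 * a + 9.81 * grade + 0.132) + 0.000302 * v ** 3

# Cruising at 50 km/h (13.9 m/s) on flat ground with no acceleration:
print(round(vehicle_specific_power(13.9, 0.0), 2))
```

Emission models of this family typically bin VSP into operating modes and assign an emission rate per mode, which is how speed and speed-change effects on segment emissions are captured.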
- Title
- TOWARD THE DEVELOPMENT OF USABILITY GUIDELINES FOR SINGLE-WINDOW WEB INTERFACES
- Creator
- Maciukenas, James
- Date
- 2013, 2013-05
- Description
-
Since the early 1990s, usability research has guided development of web interfaces used to interact with content available on the Internet. Following these guidelines has resulted in web pages that in many characteristics are quite similar and are identified here as Conventional Web Interfaces (CWIs). An emergent genre of web interface, the Single Window Interface (SWI), differs in many ways from CWIs. Most importantly, SWIs differ from CWIs in the type of tasks expected of their users and in the visual strategies used to facilitate these tasks. Namely, SWIs facilitate open-ended discovery tasks by using strong visual cues to convey meta-information to the user and encourage both the exploration and perusal of content. This dissertation will demonstrate that the differences between SWIs and CWIs require revisiting current usability guidelines in order to determine how to guide future development of SWIs. If SWI visual strategies can be shown to be effective in conveying meta-information qualities to users, the groundwork will be prepared for future research investigating the effectiveness of these strategies in facilitating open-ended exploration and discovery within SWIs. These efforts will lead to more useful experiences for users of SWIs and inform the fields of technical communication as well as human-computer interaction and usability research, to name just a few of the affected fields of study.
Ph.D. in Technical Communication, May 2013
- Title
- INVESTIGATION INTO USE OF GEARLESS PMSG-BASED WIND FARM FOR GRID SUPPORT
- Creator
- Cui, Yinan
- Date
- 2011-12-05, 2011-12
- Description
-
Wind energy has become the world’s fastest growing energy source, as environmental concerns have focused attention on the generation of electricity from clean and renewable sources. New capacity from wind turbines has grown quickly since 2004, with installed capacity reaching 196,630 MW worldwide in 2010, and 2011 is expected to see strong growth as well. Reliability and quality of the electrical power supply are of great importance for all grids. A well designed wind-turbine power source can help balance the unpredictable power changes caused by the load side of the grid (due to the meteorological nature of long-lasting wind at sea, offshore wind turbines are more stable in their power production). Along with growing R&D efforts, the PMSG-based direct-drive wind turbine generator has become a trend in the industry; its full-scale back-to-back converters can achieve fast power-factor tuning, which directly offers an option for generating a certain amount of reactive supporting power to solve short-term voltage stability problems in the local grid, as well as a desired quantity of active power for mitigating frequency oscillations in the system. Topics in this thesis include: (1) analysis of the wind turbine generator model equipped with full-scale back-to-back converters, for which control schemes are proposed; (2) a low-voltage ride-through test of a wind farm with the modeled wind turbine generator integrated into a finite 3-bus system, provided for further short-term voltage stability studies; (3) a comparison between an offshore wind farm with no reactive power support and one with automatic reactive power support in its voltage-drop response, in an 8-bus system connected with an HVAC submarine cable; and (4) a study of dynamic active power compensation from the wind farm for improving frequency stability when a large disturbance is introduced.
Keywords: PMSG, full scale converters, reactive power, short term voltage stability, active power, dynamic compensation
M.S. in Electrical Engineering, December 2011
- Title
- DESIGN AND IMPLEMENTATION OF A POWER ASSISTED DRIVETRAIN FOR A WHEELCHAIR
- Creator
- Hou, Ruoyu
- Date
- 2012-04-06, 2012-05
- Description
-
Over the last two decades, the number of people who have difficulty walking and need wheelchairs has been increasing due to an aging population, itself a result of a low birth rate and advances in medical treatment. Based on a recent survey, the power-assisted wheelchair is the latest entry in the commercial wheelchair market. The power-assisted wheelchair offers users an opportunity for physical activity, but it is often too expensive for customers. This has motivated the design of more advanced and economical power-assisted drivetrain systems for wheelchairs. In this thesis, a novel controller has been designed. Instead of using a torque sensor for measuring and amplifying human force, the proposed controller uses two infrared sensors to trigger two motors. Using this information, in addition to information from a motion sensor that detects road angle variation, an appropriate torque command is generated. The drivetrain requires the embedded controller to have not only strong I/O control functions but also high-speed signal processing ability for real-time control. Therefore, a DSP (Digital Signal Processor) that integrates a flexible multi-channel PWM signal generator to drive the two motors, with two Hall sensors for motor position and speed feedback, is considered one of the strongest controller candidates for a power-assisted drivetrain implementation. This thesis has two main contributions: a) it presents a novel power-assisted motor control strategy, including six-step motor control, Environmental Adaptive control and the Push-Go control method; and b) it develops an embedded controller, not only on the test bench but also on the wheelchair, to realize this control strategy. The designed controller is low cost and compact.
M.S. in Electrical Engineering, May 2012
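Six-step motor control, named in contribution (a), drives one phase pair per rotor sector from a position-sensor lookup. A textbook sketch follows; the Hall codes and phase assignments are illustrative, since they depend on the actual motor and sensor wiring, which the abstract does not specify:

```python
# Standard six-step (trapezoidal) commutation: each of the six rotor
# sectors energizes one high-side / low-side phase pair.
SIX_STEP = {
    # hall code (H3,H2,H1): (high-side phase, low-side phase)
    0b001: ("A", "B"),
    0b011: ("A", "C"),
    0b010: ("B", "C"),
    0b110: ("B", "A"),
    0b100: ("C", "A"),
    0b101: ("C", "B"),
}

def commutate(hall_code):
    """Return the phase pair to drive for a given Hall sensor reading."""
    try:
        return SIX_STEP[hall_code]
    except KeyError:
        # 0b000 and 0b111 are physically invalid -- treat as sensor faults.
        raise ValueError(f"invalid Hall state {hall_code:03b}")
```

On a DSP implementation, each table entry corresponds to one fixed PWM gate pattern, so the commutation step reduces to an indexed load that easily meets real-time constraints.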
- Title
- INFILL HOUSE – HANOI, VIETNAM
- Creator
- Luu, Dung Q
- Date
- 2017, 2017-05
- Description
-
In 1986, the ‘Economic Reform’ brought significant economic success to Vietnam. Cities such as Hanoi, Ho Chi Minh City, and Danang expanded enormously, and building activity increased to accommodate population growth and housing demand. Rapidly rising incomes allowed middle-class and upper-class families to pursue their dream of owning a private home. However, most housing projects were built without any city guidelines and lacked thoughtful design. [5] Because of high land prices and valuable frontage for business uses, most new private buildings and houses, 3 to 5 stories tall, were built to maximize their footprint on very long and narrow frontage properties. Many of these infill houses were constructed; however, they had limited daylight and poor natural ventilation. [4] For my thesis, I have studied typologies of the Vietnamese infill house. The study analyzes four types of infill sites based on different site access. In response to the analysis, six house schemes were developed on two of the types of long and narrow infill sites, in the high-density area of Hanoi, Vietnam. The design investigates different site strategies and applies suitable building techniques to create viable living spaces that improve natural daylight and ventilation.
M.S. in Architecture, May 2017
- Title
- EXTENSIONAL RHEOLOGY OF POLYISOBUTYLENE MELTS USING A COUNTER-ROTATING CYLINDERS RHEOMETER
- Creator
- Sun, David
- Date
- 2011-12-05, 2011-12
- Description
-
Extensional rheology plays an immense role in many polymer processing operations, such as blow molding and fiber spinning. Extensional flow is a type of deformation that stretches a material and is sensitive to a polymer's molecular structure. Elongational experiments are important for establishing flow models and verifying constitutive equations. Reliable extensional measurements and an understanding of extensional rheology are vital for both academia and industry because they form the foundation for future theories, models, and applications. This study aims to understand the characteristics of a uniaxial elongation measuring technique and the validity of the data obtained. It focuses on the experimental properties of the Sentmanat Extensional Rheometer (SER), a specific uniaxial elongation rheometer, and tests the rheology of polyisobutylene (PIB) melts of different molecular weights. Through oscillatory shear, storage and loss modulus data are obtained and used to establish linear viscoelastic behavior. Using the SER, polyisobutylene was deformed to generate extensional viscosity data for different sample sizes, which were compared to the linear viscoelastic curve to check for consistency. Visual data analysis was used to examine deviations from ideal deformation. The results of this study were consistent with deviations seen by other researchers using the SER, and established experimental parameters that can improve the performance of the SER. Based on the elongational viscosity data, it is concluded that elongational viscosity does depend on sample size. In addition, this work also quantifies the onset of surface instabilities, a phenomenon commonly seen but not specifically reported. By utilizing the SER and different optical techniques, the development of the surface instability is examined. Analysis of the images demonstrates an appearance of surface striations that is consistent across different experimental parameters.
By accurately capturing surface instabilities, they are found to be closely associated with sample deformation and the onset of sample failure. Comparing visual and scattering images, the Hencky strain at which instability occurs (ε_instability) is consistently seen around 0.6 to 1.0. It is concluded that these striations are independent of strain rate, molecular weight, sample size, and technique. The validation of extensional viscosity's dependence on sample size and of the onset of instability has great significance for uniaxial extension measuring tools, polymer processing, and extensional polymer modeling and simulation.
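The onset window quoted above can be put in concrete terms: in a constant-rate uniaxial stretch, the Hencky strain grows linearly in time, ε_H = (strain rate) × t, and equals ln(L/L0) for a sample stretched from length L0 to L. A minimal sketch, using the 0.6–1.0 window reported in the study (the strain-rate value in the example is hypothetical):

```python
# Minimal sketch: Hencky strain in constant-rate uniaxial extension and the
# time at which the reported instability-onset strain (~0.6) is reached.
import math

def hencky_strain(stretch_ratio: float) -> float:
    """eps_H = ln(L / L0) for a uniaxial stretch ratio L / L0."""
    return math.log(stretch_ratio)

def instability_onset_time(strain_rate: float, eps_onset: float = 0.6) -> float:
    """Time at which eps_H = strain_rate * t reaches the onset strain."""
    return eps_onset / strain_rate

# A sample stretched to twice its length has eps_H = ln(2) ~ 0.69,
# inside the 0.6-1.0 onset window observed in the study.
print(round(hencky_strain(2.0), 2))  # -> 0.69
print(instability_onset_time(1.0))   # at a rate of 1 s^-1, onset near t = 0.6 s
```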
M.S. in Chemical Engineering, December 2011
- Title
- DESIRES OF THE CITY, THE SENSIBLE METROPOLIS
- Creator
- De Sanabria Sales, Lucia Rodriguez
- Date
- 2014, 2014-07
- Description
-
The question about the future of our cities starts with the consideration of what kind of society we want. What role will architecture play in shaping our society and the way people live their lives? Can architecture really be a tool for other objectives? Can it be part of reactivating the economy? Cities of the future, whether as densely populated as the modern metropolis or more sprawled like its surroundings, need to adapt to new technologies and ways of living. We have to be aware that our cities are in constant change and development, and that their future relies on how well they can adapt. In order to adapt, urbanism needs to step back and analyze the existing city structure, to improve it and create a more flexible environment that will adjust to the next century. The objective of this thesis is to propose a strategy that enhances sustainable development and meets the needs of today by opening a path to the future. Sustainable development is the kind of development that meets the needs of the present without compromising the ability of future generations to meet their own needs, a definition coined by the Brundtland Commission. Change is occurring: society is willing to connect with the city, and therefore the city must connect back. It should be the playground of young and not-so-young people. It is in our hands to transform the built environment and create spaces of relation in society. However, what is really needed is to fix, reactivate, remodel, and improve the existing metropolis that we already have. The thesis investigates the urban model of a city and how it can address the present while remaining flexible enough to shape the future. Opposites exist and, by definition, there is a strong connection between them. In this research I will work towards a hybrid condition of society. Why do we need to keep the opposites separated?
Can we bring them together, make them work and interact, while at the same time maintaining their identity? Can nature influence the metropolis without causing it to lose the density that characterizes it? To investigate this and find answers to the continuous evolution of society, the first studio will concentrate on the Retreat: what it is and how it works. This research will provide an abstraction of nature and Retreat in its pure, simple form. I will apply this abstraction to the built environment and use it to analyze and shape the metropolis. I will also observe what changes occur in the basic relationships of society. Many questions arise when an architect takes up the topic of the future city. What kind of people and cities do we want? What will make us experience the difference that we are searching for? How can public spaces be used for people to meet and connect and for culture to grow? What is the difference between metropolis and nature? These questions will guide the projects toward a coherent strategy that could be applied to different metropolitan settings. The first field of study proposed is designing with nature. The environment in Colorado around Camp of Arts Perry Mansfield is mostly wild. Here I will see how design responds to the nature around it. How do we bring an urbanized sense to the landscape in order to enhance the feelings we have in it? The second field of study addresses the urban tissue of Woodlawn (a neighborhood in Chicago), the existing metropolis, how it is moving towards the future, and creating a bond between environment and metropolis. In this section of the thesis, I will introduce nature as a tool for the development of the city. What are architects and urbanists working on to reshape the built environment, and how do we make spatial conditions where we will experience more diverse stages?
I want to create a hybrid stage where natural and urban elements work together, a space for interaction in society, bringing retreat and metropolis into the same place. The 'Desires of the City' will be a strategy that looks into a series of aspects involved in urban development: focusing on what the users need and want, and creating a sense of community among neighbors; controlling energy, with sustainable projects that deal with the communication and infrastructure of the neighborhood; and, on the programmatic side, providing a strategy of one space that hosts different activities, with a variety of cultural and social facilities. I will introduce a designed landscape that will work as the extra layer that is missing in the urbanism of today. Landscape will be the infrastructure of the neighborhoods, introducing the concept of semi-private ownership in the metropolis. Questioning the actual ownership of the ground and transforming vacant land into options and opportunities, one of the fundamental assertions of the projects is the necessity of community action and engagement. In order to make an impact, society has to be involved in the process. This new landscape infrastructure is where action happens; it will spark the beginning of a new urbanism characterized by flexibility, in which future changes in society are an asset and not an issue. This new residential model is sensible and will respond to the desires of its users.
M.S. in Architecture, July 2014
- Title
- Machine hour rate method of distribution of factory indirect expense
- Creator
- Wetzel, G. F.
- Date
- 2009, 1918
- Publisher
- Armour Institute of Technology
- Description
-
http://www.archive.org/details/machinehourratem00wetz
Thesis (B.S.)--Armour Institute of Technology
- Title
- STUDY OF THE STRUCTURE OF THE INDIRECT FLIGHT MUSCLE OF LETHOCERUS INDICUS BY LABELING WITH HEAVY ATOMS
- Creator
- Xie, Luping
- Date
- 2012-04-24, 2012-05
- Description
-
Insect flight muscle (IFM) from Lethocerus indicus is an asynchronous muscle: it can keep oscillating after a neural stimulation as long as the load is mechanically resonant. It also has a high degree of structural order. These characteristics make it an ideal material for studying the structure of IFM in vitro. In this research, the structure of IFM from Lethocerus indicus was studied using X-ray diffraction. Multiple isomorphous replacement (MIR), which uses heavy atoms to alter the structure of biological macromolecules, was used in an attempt to solve the well-known phase problem of crystallography; MIR is less commonly used in non-crystalline systems. Here we showed that, by labeling with two heavy-atom compounds, potassium tetrachloroaurate(III) (KAuCl4) and p-chloromercuribenzoic acid (PCMB), the diffraction patterns from IFM samples changed, in particular the intensities of reflections on the meridian. The positions and intensities of every layer line on the meridian before and after labeling were compared, and the best labeling conditions for the two heavy atoms were discussed. These results indicate that, with further development, this approach may be a feasible way of determining the electron density of this material.
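The comparison step described above — layer-line intensities before versus after heavy-atom labeling — amounts to a per-reflection difference calculation. A minimal sketch follows; the layer-line spacings and intensity values are made-up placeholders, not data from the thesis.

```python
# Illustrative sketch: relative change in meridional layer-line intensity
# after heavy-atom labeling. All numbers below are hypothetical.

def intensity_changes(native: dict, labeled: dict) -> dict:
    """Relative intensity change per layer line, (labeled - native) / native."""
    return {line: (labeled[line] - native[line]) / native[line]
            for line in native if line in labeled}

# Hypothetical meridional intensities (arbitrary units), native vs. labeled:
native = {"14.5nm": 100.0, "7.2nm": 40.0, "3.9nm": 10.0}
labeled = {"14.5nm": 120.0, "7.2nm": 30.0, "3.9nm": 10.5}

changes = intensity_changes(native, labeled)
# Layer lines whose intensity shifted by more than 10% after labeling:
print({k: round(v, 2) for k, v in changes.items() if abs(v) > 0.10})
# -> {'14.5nm': 0.2, '7.2nm': -0.25}
```

In MIR the reflections that change significantly on labeling are the ones that carry phase information about the heavy-atom positions, which is why the meridional differences are the quantity of interest here.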
M.S. in Biology, May 2012
- Title
- The Morkrum system of printing telegraphy
- Creator
- Earle, Ralph H.
- Date
- 2009, 1917
- Publisher
- Armour Institute of Technology
- Description
-
http://www.archive.org/details/morkrumsystemofp00earl
Thesis (B.S.)--Armour Institute of Technology, 1917 B.S. in Electrical Engineering, 1917
- Title
- THE RELATIONSHIP BETWEEN ENACTMENT OF COMMON CORE STATE STANDARDS-MATHEMATICS, STUDENT MISCONCEPTIONS CONCERNING NEGATIVE SIGNS, DISTRIBUTION, AND DIAGRAMS, STUDENT ACHIEVEMENT, AND TEACHER VARIABLES
- Creator
- Morrissey, Glenda
- Date
- 2017, 2017-07
- Description
-
The relationship between enactment of the Common Core State Standards – Mathematics (CCSSM), student misconceptions, student achievement, and teacher variables was investigated. After professional development on CCSSM enactment was provided, observations were conducted to determine the degree of enactment of CCSSM content and the Standards for Mathematical Practice (SMP) in 22 classrooms of nine teachers in an urban charter school network consisting of three high schools. Students were all boys, 98% African American, and predominantly of low socioeconomic status. Data included quarterly assessments, Partnership for Assessment of Readiness for College and Careers (PARCC) test scores, and a teacher survey. Results indicated that experienced teachers with high efficacy who expected students to discuss their work showed higher levels of CCSSM enactment in teacher actions, fewer student misconceptions, and higher test scores. Newer teachers were most concerned about the availability of CCSSM materials and had higher levels of CCSSM enactment in classroom materials. A strong belief in student ability was related to student enactment of the SMP. Implications for teacher education, teacher practice, and future research are discussed.
Ph.D. in Mathematics and Science Education, July 2017