Search results
(8,881 - 8,900 of 10,083)
Pages
- Title
- DESIGNING SMART ARTIFACTS FOR ADAPTIVE MEDIATION OF SOCIAL VISCOSITY: TRIADIC ACTOR-NETWORK ENACTMENTS AS A BASIS FOR INTERACTION DESIGN
- Creator
- Salamanca, Juan
- Date
- 2012-10-10, 2012-12
- Description
-
With the advent of ubiquitous computing, interaction design has broadened its object of inquiry into how smart computational artifacts inconspicuously act in people's everyday lives. Although user-centered design approaches remain useful for exploring how people cope with interactive systems, they cannot explain how this new breed of artifacts participates in people's sociality. User-centered design approaches assume that humans control interactive systems, disregarding the agency of smart artifacts. Based on Actor-Network Theory, this research recognizes that artifacts and humans share the capacity of influencing society and meshing with each other, constituting hybrid social actors. From that standpoint, the research offers a triadic structure of networked social interaction as a methodological basis to investigate how smart devices perceive their social setting and adaptively mediate people's interactions within activities. These triadic units of analysis account for the interactions within and between human-nonhuman collectives in the actor-network. The within interactions are those that hold together humans and smart artifacts inside a collective and put forward the collective's assembled meaning for other actors in the network. The between interactions are those that occur among collectives and characterize the dominant relational model of the actor-network. This triadic approach was modeled and used to analyze the interactions of participants in three empirical studies of social activities with communal goals, each mediated by a smart artifact that enacted – signified – a balanced distribution of obligations and privileges among subjects. Overall, the studies found that actor-networks exhibit a social viscosity that hinders people's interactions, because when people try to collectively accomplish goals, they offer resistance to one another. These design experiments also show that the intervention of smart artifacts can facilitate cooperative and collaborative interaction between actors when the artifacts enact dominant moral principles which prompt the preservation of social balance, enhance the network's information integrity, and are located at the focus of activity. The articulation of Actor-Network Theory principles with interaction design methods opens up the traditional user-artifact dyad towards triadic collective enactments by embracing diverse kinds of participants and practices, thus facilitating the design of enhanced sociality.
Ph.D. in Design, December 2012
- Title
- DIGITAL CONTROL OF 2-QUADRANT AND 4-QUADRANT SWITCHED RELUCTANCE MOTOR DRIVES
- Creator
- Shao, Baiming
- Date
- 2011-04-19, 2011-05
- Description
-
Switched reluctance machines (SRMs) are attractive because of their manufacturing simplicity and high reliability. They do not have any windings or permanent magnets on the rotor, which makes them robust and easy to maintain. On the other hand, SRMs are highly non-linear since they work in saturation, which causes problems such as high torque ripple and system noise. In addition, mutual inductance needs to be considered for high-performance systems such as electric vehicle or aerospace applications; this effect can become critical when more than one phase is conducting, and it also makes SRMs difficult to model and control. Significant research on different SRM control techniques has been done in order to improve controller performance and provide a good solution for industrial applications at a reasonable cost. Conventional control techniques for SRMs include chopped current control (CCC), angular position control (APC), and pulse-width modulation (PWM). Proportional-integral (PI) and other linear controllers are also used in drive systems. However, because of the non-linearity of the machine, classic linear control techniques are not ideal for SRMs, as they face challenging control issues over wide speed ranges. Different methods have been presented to implement non-linear control techniques for SRM drives or to linearize the SRM motor equations. Many SRM controllers use one or more look-up tables, with the controller's behavior adjusted in real time based on the data in those tables; this can increase the cost and complexity of the system. In this Ph.D. dissertation, an advanced digital control concept is presented for SRMs in both motoring and generating modes. By treating the system digitally, the controller switches between two pre-defined states to obtain the desired output. The proposed control technique does not need any look-up tables, is not sensitive to motor parameter variations, is low cost, and has a wide speed range. Simulation and experimental results are presented to verify the proposed digital control approach.
Ph.D. in Electrical and Computer Engineering, May 2011
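As background for the conventional chopped current control (CCC) mentioned in the abstract, commonly implemented as hysteresis-band current chopping, the sketch below regulates the current of a single SRM phase. It is a minimal illustration only: the reference current, hysteresis band, supply voltage, and first-order phase model are assumed placeholder values, not parameters from the dissertation, and it does not represent the proposed two-state digital controller.

```python
# Minimal sketch of conventional chopped (hysteresis) current control for one
# SRM phase. Reference current, band, supply voltage, and the simplistic R-L
# phase model (no saturation, no mutual inductance) are illustrative assumptions.

def ccc_step(i_phase, i_ref, band, switch_on):
    """Return the new switch state for a hysteresis (chopped) current controller."""
    if i_phase >= i_ref + band / 2:
        return False          # current too high: turn the phase switch off
    if i_phase <= i_ref - band / 2:
        return True           # current too low: turn the phase switch on
    return switch_on          # inside the band: keep the previous state

def simulate(i_ref=10.0, band=0.5, v_dc=48.0, r=0.5, l=0.01, dt=1e-5, steps=2000):
    i, on, trace = 0.0, True, []
    for _ in range(steps):
        on = ccc_step(i, i_ref, band, on)
        v = v_dc if on else -v_dc          # asymmetric bridge: apply +Vdc or -Vdc
        i += dt * (v - r * i) / l          # first-order R-L phase current model
        trace.append(i)
    return trace

if __name__ == "__main__":
    currents = simulate()
    print("final phase current (A):", round(currents[-1], 2))
```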
- Title
- THE POLYMORPHIC DIAGRAM: CONCEPTS FOR DESIGN TECHNOLOGY TO MODEL SPATIAL CRITERIA IN ARCHITECTURE DESIGN
- Creator
- Hamadah, Qutaibah
- Date
- 2012-11-03, 2012-12
- Description
-
In architectural design, reasoning about space and its configuration lies at the center of the conceptual design workflow. The process unfolds in a reflective and adaptive modeling methodology, through which architects structure their understanding of the design problem and mediate its responsive and sensitive resolution. Paradoxically, however, modeling and representing spatial information – knowledge about the design problem's spatial requirements and its relational orders – is perhaps the least well-developed feature in modern design systems. Despite its importance in architectural design, existing design technology offers only limited assistance with one of architecture's most critical and difficult workflows: the definition of space, its layout, and its configuration. Moving forward, modern design systems must extend their ability to assist the architect in modeling spatial and relational design criteria. They must support an integrated workflow in which the problem definition and the solution proposition develop in unison. In particular, such systems should pay heed to the architect's cognitive and generative process, which necessarily relies on an adaptive and reflective modeling workflow, one that bridges between the problem definition and its solution proposition using multiple forms of representation. Towards this end, this dissertation presents the Polymorphic Diagram: a concept for a design technology to assist the architect in modeling spatial and relational design criteria using an interactive, graph-based, multi-representational medium.
Ph.D. in Architecture, December 2012
- Title
- THERMAL INACTIVATION OF SALMONELLA AGONA IN LOW-MOISTURE FOOD SYSTEMS AS INFLUENCED BY WATER ACTIVITY
- Creator
- Jin, Yuqiao
- Date
- 2016, 2016-07
- Description
-
Salmonella can survive in low-moisture, high-protein and high-fat foods for several years. Despite nationwide recalls for Salmonella in low-moisture products, information on the survival of Salmonella during high-protein and high-fat food processing is limited. This project evaluated the thermal inactivation kinetics of Salmonella enterica serovar Agona 447967 in a high-protein and a high-fat matrix using a defined matrix composition, varying water activities, and different process conditions. A high-protein matrix, composed of a 60:6:25 weight ratio of flour:oil:protein, and a high-fat matrix, composed of a 60:25:6 weight ratio of flour:oil:protein, were studied. Each matrix was inoculated with Salmonella enterica serovar Agona 447967 at water activities of 0.5 and 0.9. Samples were packed in aluminum test cells and heat treated over a range of temperatures and time intervals. Survival of Salmonella Agona was detected on trypticase soy agar with 0.6% yeast extract. The average z-values for the high-protein matrix at water activity (aw) 0.5 and 0.9 were 9.01°C and 7.51°C, respectively. The average z-values for the high-fat matrix were 11.91°C at aw 0.5 and 7.08°C at aw 0.9. Results showed that the z-value at aw 0.5 was significantly different from the z-value at aw 0.9 (p < 0.05) in both the high-protein and high-fat matrices. Critical process factors associated with pathogen destruction were identified during the thermal treatments in this project. Results indicated that a correlation existed between temperature and water activity, which must be accounted for when predicting inactivation of Salmonella enterica in these model matrices under dynamic process conditions.
M.S. in Food Process Engineering, July 2016
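For readers unfamiliar with the kinetic parameters reported above, the standard log-linear thermal inactivation relations that define the D-value and z-value are shown below. These are general textbook definitions, not expressions taken from the thesis.

```latex
% Log-linear (first-order) thermal inactivation: general definitions.
% N_0, N : initial and surviving populations (CFU/g) after heating for time t
% D_T    : time for a 1-log (10-fold) reduction at temperature T
% z      : temperature change producing a 10-fold change in D
\[
  \log_{10}\!\frac{N}{N_0} \;=\; -\,\frac{t}{D_T},
  \qquad
  z \;=\; \frac{T_2 - T_1}{\log_{10} D_{T_1} - \log_{10} D_{T_2}} .
\]
```

Under these definitions, a z-value of 9.01°C means the D-value falls by a factor of ten for every 9.01°C increase in treatment temperature.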
- Title
- MINIMIZING SALMONELLA CONTAMINATION IN SPROUTS BY CONTROLLING THE IRRIGATION CONDITIONS DURING GERMINATION
- Creator
- Xie, Jing
- Date
- 2014, 2014-07
- Description
-
The objective of this study was to examine whether the proliferation of Salmonella can be minimized during sprouting by controlling the irrigation conditions, using seeds that had been either treated or not treated with 20,000 ppm of calcium hypochlorite, Ca(OCl)2. 200 g of alfalfa seeds spiked with 2 g (1%) of inoculated seeds (containing ~1 log CFU/g of Salmonella) were allowed to germinate in a glass jar or in an automatic sprouter (EasyGreen) for 5 days at room temperature. The sprouts germinated in the automatic sprouters were irrigated with either sterile tap water or chlorinated water (containing 100 ppm of calcium hypochlorite) at various frequencies (once every 1, 2, or 4 h); the sprouts germinated in glass jars were rinsed every 24 h with sterile tap water. The same growth studies were performed on seeds treated with 20,000 ppm Ca(OCl)2 for 15 min prior to sprouting. Sprout samples were taken daily and analyzed for the level of Salmonella using the three-tube most probable number (MPN) method as described in the FDA BAM. Seed treatment with 20,000 ppm Ca(OCl)2 reduced the Salmonella level in seeds to below the detection limit (< -2.5 log MPN/g), and the pathogen was not detected during five days of germination in automatic sprouters or jars. With untreated seeds, the increase in Salmonella ranged from ~7 log MPN/g in sprouts grown in jars and rinsed once every 24 h to ~4 log MPN/g in sprouts grown in the automatic sprouters and irrigated once every hour. Irrigation with chlorinated water inhibited Salmonella regrowth but affected the quality of the sprouts. Overall, seed treatment combined with frequent irrigation with tap water or chlorinated water can keep the level of Salmonella undetectable during sprouting.
M.S. in Food Safety and Technology, July 2014
- Title
- TOWARD A NATURAL GENETIC/EVOLUTIONARY ALGORITHM FOR MULTIOBJECTIVE OPTIMIZATION
- Creator
- Ramasamy, Hariharane
- Date
- 2013, 2013-05
- Description
-
Practical optimization problems often have multiple objectives, which are likely to conflict with each other, and have more than one optimal solution representing the best trade-offs among the competing objectives. Genetic algorithms, which optimize by repeatedly applying genetic operators to a population of possible solutions, have recently been used in multiobjective optimization, but they often converge to a single solution that is not necessarily optimal due to lack of diversity in the population. Current multiobjective genetic and other evolutionary methods prevent this premature convergence by promoting new members that are dissimilar in parameter or objective space. A distance measure, which calculates similarities among the members in either objective or parameter space, is used to degrade the fitness of solutions when they are crowded into a small region. This process forces the algorithm to find new but distinct trade-off points in the objective or parameter space, but it is computationally expensive. As the number of objectives or parameters increases, these methods fail to scale up, and they deviate from the motivating concept of the genetic algorithm: natural evolution. We extend the standard genetic algorithm through two simple yet powerful changes motivated by natural evolution. In the first method, the algorithm, at each step, randomly or sequentially chooses one of the objectives for optimization; hence the method is called the sequential extended genetic algorithm (SEGA). In the second method, a population is maintained for each objective, and crossover is performed by selecting parents from across populations; this method is called the parallel extended genetic algorithm (PEGA). We applied these methods to test problems from the literature and to two well-known problems, protein folding and the multiple knapsack problem. We found that our methods produce better trade-off solutions than current multiobjective methods without increasing the computational complexity of the genetic algorithm.
Ph.D. in Computer Science, May 2013
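The SEGA idea described above (each generation is driven by a single, sequentially or randomly chosen objective) can be illustrated with a minimal sketch. The bit-string encoding, truncation selection, operators, and the two toy conflicting objectives below are illustrative assumptions, not the dissertation's implementation.

```python
import random

# Minimal sketch of the SEGA idea from the abstract: at each generation,
# selection is driven by ONE objective, chosen sequentially here.
# Encoding, operators, and the toy objectives are illustrative only.

def objective_a(x):        # toy objective 1: number of ones
    return sum(x)

def objective_b(x):        # toy objective 2: leading zeros (conflicts with objective_a)
    return next((i for i, bit in enumerate(x) if bit), len(x))

OBJECTIVES = [objective_a, objective_b]

def mutate(x, rate=0.05):
    return [bit ^ (random.random() < rate) for bit in x]

def crossover(p, q):
    cut = random.randrange(1, len(p))
    return p[:cut] + q[cut:]

def sega(pop_size=40, length=30, generations=200):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for gen in range(generations):
        fit = OBJECTIVES[gen % len(OBJECTIVES)]        # sequential choice of objective
        pop.sort(key=fit, reverse=True)
        parents = pop[: pop_size // 2]                 # truncation selection on that objective
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return pop

if __name__ == "__main__":
    final = sega()
    print("best value per objective:", [max(map(f, final)) for f in OBJECTIVES])
```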
- Title
- LOAD RATING OF RAILWAY BRIDGES BY ANALYSIS AND TESTING
- Creator
- Khademi, Faezehossadat
- Date
- 2015, 2015-05
- Description
-
Investigating existing structures in the real world can help us learn more about their characteristics, advantages, and disadvantages. In this thesis project, the investigation was performed on two real bridges, the "Yale" Bridge and the "Valleyfield" Bridge, owned by CN, one of the largest railways in the North American railroad industry. First, field testing of the bridges was performed by CN. Strain gauges (devices for indicating the strain of a material) and displacement transducers (devices for indicating movement) were placed at specific points on the elements of interest, and the strain and displacement data for these points were recorded while trains passed over the bridges. These devices record data as voltages, so calibration constants must be applied to convert the readings to stress and displacement units. The bridges were then modeled in the SAP2000 software, and the results were compared to the data recorded in the field. In reality, bridges of this type behave somewhere between the "Truss Model" and the "Frame Model"; the aim is to determine how they actually behave and which model they are closer to. Results show that although the bridges are classified as truss bridges, both behave more like a frame model than a truss model. In addition, the effect of the collision strut on member L0U1 was investigated. Results show that the collision strut leads to a larger bending moment in L0U1 compared with omitting this member from the bridge. Finally, based on these results, "Adjustment Factors" for three groups of members (diagonal, horizontal, and vertical) were provided in order to improve load rating analysis in the future.
M.S. in Civil Engineering, May 2015
- Title
- MULTI-DISCIPLINARY PERFORMANCE-BASED FORM GENERATION PROCESS: DEVELOPING AN OPTIMIZATION APPROACH FOR LONG SPAN ROOFS
- Creator
- Nicknam, Mahsa
- Date
- 2013, 2013-05
- Description
-
This research is intended to incorporate multiple performances into the architectural form generation process for long span roofs. To this end, it proposes a multidisciplinary performance-based form generation process (MPGP) that uses a Genetic Algorithm (GA) to explore form based on performance criteria. This process leads to a new integrated design approach in architecture. Conceptual design decisions have the greatest impact on building performance. However, in conventional linear approaches, energy and structural issues are typically dealt with after the program, massing, and enclosure decisions are well articulated. This locks in life-cycle performance and leads to costly redesigns when results fail to satisfy requirements. Research has shown how successful buildings emerge from the rapid and systematic generation and multidisciplinary analysis of many alternatives. Until recently, however, Architecture, Engineering and Construction (AEC) design teams were constrained by tools and schedules and were only able to generate a few alternatives and analyze them from just a few perspectives. The rapid emergence of parametric and generative design, building simulation, and design space exploration and optimization tools now makes it possible for a design team to construct and analyze far larger design spaces more quickly and to better understand the influence of design variables on overall building performance. The proposed process moves beyond current form generation approaches by exploiting the dynamic possibilities of simulation tools, so that form generation is driven by performance feedback. The simultaneous integration of multiple performances at the early stage of design minimizes the need to move back and forth later in the design development phase, thereby shortening the overall design cycle. MPGP uses parametric algorithms to generate form and genetic algorithms (GAs) as the search method to find designs satisfying the required performances. The method demonstrates how a flexible 3D model can be parametrically altered toward targeted solutions with the help of near real-time feedback generated by performance-based analysis tools within an optimization framework. Hence, in this approach, design is considered a repeated loop of generation, evaluation, and modification until the targeted objectives are satisfied. The integration of generative tools and performance analysis tools in the early stage of design gives designers great opportunities to enrich the design space and select among different design solutions based on their preferences. As a result, designers develop architectural forms through informed decisions, observing the impact of varying parameters on structural and energy performance. Consequently, this process will greatly benefit engineering by achieving a more collaborative and information-based design environment. Increasing the number of efficient design alternatives, dealing with different levels of complexity in the architectural design process, promoting multi-disciplinary collaboration, and improving overall design understanding are the main benefits of the proposed process.
Ph.D. in Architecture, May 2013
- Title
- CARBON DIOXIDE CAPTURE USING SOLID SORBENTS IN A FLUIDIZED BED WITH REDUCED PRESSURE REGENERATION IN A DOWNER
- Creator
- Kongkitisupchai, Sunti
- Date
- 2012-11-11, 2012-12
- Description
-
The most commonly used commercial technology for post-combustion CO2 capture at existing power plants is the amine solvent scrubber. However, the energy consumed in capturing CO2 from flue gases with amine solvent technology is 15 to 30% of the power plant's output, owing to the steam used for solvent regeneration. Hence, there is a need to develop more efficient methods of removing CO2. The objective of this thesis research is to demonstrate the design of a complete loop system for dry solid sorbent technology, which consumes less energy, as an alternative CO2 capture technology. The design of a complete riser-sorber and downer-regenerator loop system for a dry solid sorbent technology is developed using recently developed kinetic-theory-based multiphase computational fluid dynamics (CFD). The complete dry solid sorbent loop system comprises an atmospheric fluidized-bed riser-sorber and a reduced-pressure downer-regenerator. The proposed dry solid sorbent is a dry sodium carbonate sorbent developed recently at RTI and earlier by Gidaspow and Onischak. The dry solid sorbents capture CO2 and water vapor from flue gases through chemical sorption in the riser-sorber. The captured CO2 is released from the solid sorbent along with water vapor in the reduced-pressure downer-regenerator, where the solid sorbent regeneration occurs. The complete dry solid sorbent loop system demonstrates the possibility of solving three main technical challenges: handling the large volumetric flow rate of the flue gases, the required operating power, and the quantity of CO2 sorption. A newly proposed pressure-equilibrium-based sorption rate model for the dry sodium carbonate sorbents is used in the simulations. The simulations of both the fluidized riser-sorber and the downer-regenerator were performed using the commercial CFD code Fluent. The energy efficiency of the proposed dry solid sorbent loop system was studied using thermodynamic availability analysis, for both the individual vessels and the overall process, to evaluate the minimum energy requirement for CO2 separation. A T-s diagram of the inlet and outlet streams for both the riser-sorber and the downer-regenerator is included in the thermodynamic analysis. The multiphase CFD simulations showed that the heat liberated during CO2 sorption in the riser-sorber can be almost fully recovered in the form of sensible heat in the solid sorbent. The captured heat in the solid sorbents supplies the energy for CO2 desorption during sorbent regeneration inside the reduced-pressure downer-regenerator. Hence, the only parasitic power loss is the energy needed for sorbent circulation, air-lock rotary valves, and the vacuum fan. This drastic energy saving is possible because of the high solids circulation rate between the riser-sorber and the downer-regenerator. Additionally, the simulation results showed that the core-annular flow pattern in the riser-sorber can be almost completely eliminated by using multiple jet inlets and by increasing the solid sorbent particle size from the 75-micron particles manufactured by RTI to 500-micron particles. Furthermore, the larger sorbent particle size allows better solids settling in the downer. The simulations also showed that a core-annular flow pattern occurs inside the downer-regenerator; however, it has no negative effect there.
Ph.D. in Chemical Engineering, December 2012
- Title
- BELIEFS AND CONTEXTUAL MEDIATORS AND MODERATORS OF DISCRETIONARY WORKPLACE BEHAVIOR
- Creator
- Raad, Jason H.
- Date
- 2014, 2014-07
- Description
-
The Theory of Planned Behavior (TPB) has been successfully used to link attitudes, subjective norms, and perceived behavioral control to the enactment of various behaviors in numerous situations; however, the TPB is not frequently used in organizational settings. Similarly, contextual factors may represent important moderating and mediating effects that have not been fully explored in prior TPB research. The current study employs the TPB in a healthcare setting to assess the use of Outcome Measures (OMs) by practicing clinicians. Two contextual mediators and one contextual moderator were added to the standard TPB framework in an attempt to better explain the enactment of discretionary workplace behavior. Results suggest that TPB components are related to the discretionary use of Outcome Measures in clinical practice; however, results also suggest that the hypothesized relationships between TPB factors may diverge significantly from those proposed in the original theory. Implications, limitations, and future directions are also discussed.
Ph.D. in Psychology, July 2014
- Title
- ANALYSIS OF THE APPLICATION OF THE LIAR MACHINE TO THE Q-ARY PATHOLOGICAL LIAR GAME WITH A FOCUS ON LOWER DISCREPANCY BOUNDS
- Creator
- Williamson, James W
- Date
- 2011-12-12, 2011-12
- Description
-
The binary pathological liar game, as described by Ellis and Yan in [Ellis and Yan, 2004], is a variation of the original liar game described by Berlekamp, Rényi, and Ulam in [Berlekamp, 1964], [Rényi, 1961], and [Ulam, 1976]. This two-person, questioner/responder game is played for n rounds over a set of M messages. The game begins with the responder selecting a message from the set. Each round, the questioner partitions the messages into two distinct subsets; the responder selects one subset, and the elements not in the selected subset each accumulate a lie. Elements accumulating more than e lies are eliminated. The questioner wins the original game provided that, after n rounds, at most one message survives; the questioner wins the pathological game provided that at least one message survives. The focus here is to generalize the pathological game from two subsets to q subsets and to provide a winning condition for the questioner. The q-ary variant of the pathological liar game has been studied, with first results in [Ellis and Nyman, 2009]. We let the number of rounds go to infinity, with e a linear fraction of n, and present an upper bound on the number of messages required for the questioner to win the q-ary pathological liar game. The liar machine and linear machine discussed by Cooper and Ellis in [Cooper and Ellis, 2010] have been adapted to fit this generalization and are used to track the approximate progression of the game. We provide an upper bound on the initial number of chips by bounding the discrepancy between the actual progression of the game and its approximate progression as described by the linear and liar machines, respectively. A similar upper bound can be found in [Tietzer, 2011], with different elements in the argument. Using methods similar to those in [Cooper and Ellis, 2010], we provide a partial order argument to show that the winning condition bound for one response strategy transfers to all possible response strategies.
M.S. in Applied Mathematics, December 2011
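The game dynamics summarized above are easy to simulate; the sketch below plays one instance of the q-ary pathological variant, in which every message outside the responder's chosen subset gains a lie and is eliminated once it holds more than e lies. The round-robin partitioning and random responder are hypothetical placeholder strategies, not the strategies analyzed in the thesis.

```python
import random

# One playthrough of the q-ary pathological liar game described in the abstract.
# Messages outside the responder's chosen subset gain a lie; messages with more
# than e lies are eliminated. The questioner wins the pathological variant if at
# least one message survives after all rounds. Partition and responder strategy
# here are simple placeholders, not the strategies analyzed in the thesis.

def play(num_messages=64, q=3, e=3, rounds=8, seed=0):
    rng = random.Random(seed)
    lies = {m: 0 for m in range(num_messages)}   # surviving messages -> lie count
    for _ in range(rounds):
        alive = list(lies)
        # Questioner: partition the surviving messages into q subsets (round-robin).
        parts = [alive[i::q] for i in range(q)]
        # Responder: pick one subset (random here; an adversary would pick to hurt the questioner).
        chosen = set(parts[rng.randrange(q)])
        for m in alive:
            if m not in chosen:
                lies[m] += 1
                if lies[m] > e:
                    del lies[m]                  # eliminated: exceeded e lies
    return lies

if __name__ == "__main__":
    survivors = play()
    print(f"questioner wins: {bool(survivors)} ({len(survivors)} messages survive)")
```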
- Title
- POSTPRANDIAL PLASMA POLYPHENOL PROFILE AND BIOAVAILABILITY OF ANTHOCYANINS IN INSULIN RESISTANT HUMANS AFTER CONSUMING MULTIPLE DOSES OF STRAWBERRIES BEVERAGE WITH A MEAL
- Creator
- Wei, Hequn
- Date
- 2014, 2014-05
- Description
-
Strawberries represent a rich source of polyphenolic compounds that are purported to be important for human health; however, data on their bioavailability are limited. Therefore, the objective of the present study was to assess the absorption and metabolism of strawberry polyphenols in the postprandial phase using LC-MS/MS. Plasma was collected from humans (n = 17) every 30-60 min over 6 h after ingestion of a 650 kcal standard meal accompanied by a beverage containing 0, 10, 20, or 40 g freeze-dried strawberry powder. Pelargonidin-O-glucuronide (PG) was the most abundant strawberry metabolite in plasma. Maximum concentrations (Cmax) of PG were achieved at 188 ± 44 min, and the levels were significantly different among the beverages containing 0, 10, 20, and 40 g strawberry powder: 0, 66.0 ± 4.15, 113.64 ± 10.11, and 202.1 ± 15.18 nmol/L, respectively (P < 0.05). Area under the concentration curve (AUC) over 6 h also increased with increasing dose (P < 0.05); Cmax and AUC of PG, expressed as a percentage of the pelargonidin-3-O-glucoside (P3G) delivered, decreased across the 4 strawberry beverages (P < 0.05). The 3 major anthocyanin-derived compounds found in plasma (PG, P3G, and cyanidin-3-O-glucoside (CG)) increased significantly after consumption of all strawberry-containing beverages compared to placebo (P < 0.05). The bioavailability of PG from P3G for the beverages containing 10, 20, and 40 g strawberry powder was 1.76%, 1.40%, and 1.30%, respectively. While higher concentrations of key strawberry compounds and metabolites were achieved with consumption of more strawberry powder, adjusting for dose suggested possible saturation of the absorptive capacity for pelargonidin-based anthocyanins. These data provide a basis for understanding the relationship between dose, kinetic profile, and efficacy outcomes when making recommendations to deliver the optimal health benefits of strawberries; moreover, they serve as a model of the type of data required for understanding the relationship between dietary phytochemical intake and biological effects.
M.S. in Food Processing Engineering, May 2014
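The Cmax and AUC values above come from standard non-compartmental analysis of the plasma concentration-time curve. The sketch below shows that calculation (peak pick plus linear trapezoidal AUC over the 6-h window) on an invented, illustrative profile, not the study's data.

```python
# Minimal sketch of non-compartmental Cmax / AUC calculation for a plasma
# concentration-time profile, as reported for the PG metabolite in the abstract.
# The sample data points below are invented for illustration only.

def cmax_tmax(times_min, conc_nmol_l):
    """Peak concentration and the time at which it occurs."""
    i = max(range(len(conc_nmol_l)), key=conc_nmol_l.__getitem__)
    return conc_nmol_l[i], times_min[i]

def auc_trapezoid(times_min, conc_nmol_l):
    """Area under the concentration curve by the linear trapezoidal rule."""
    return sum((t2 - t1) * (c1 + c2) / 2
               for t1, t2, c1, c2 in zip(times_min, times_min[1:],
                                         conc_nmol_l, conc_nmol_l[1:]))

if __name__ == "__main__":
    t = [0, 30, 60, 120, 180, 240, 300, 360]   # minutes after the meal
    c = [0, 20, 55, 140, 200, 160, 110, 70]    # hypothetical PG levels, nmol/L
    cmax, tmax = cmax_tmax(t, c)
    print(f"Cmax = {cmax} nmol/L at {tmax} min; AUC(0-6 h) = {auc_trapezoid(t, c):.0f} nmol*min/L")
```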
- Title
- IMPROVING FAULT TOLERANCE FOR EXTREME SCALE SYSTEMS
- Creator
- Berrocal, Eduardo
- Date
- 2017, 2017-05
- Description
-
Mean Time Between Failures (MTBF), now calculated in days or hours, is expected to drop to minutes on exascale machines. In this thesis, a new approach for failure prediction based on the Void Search (VS) algorithm is presented. VS is used primarily in astrophysics for finding areas of space that have a very low density of galaxies. We explore its potential for failure prediction using environmental information and compare it to well-known prediction methods. Another important issue for the HPC community is that next-generation supercomputers are expected to have more components and consume several times less energy per operation. Hence, supercomputer designers are pushing the limits of miniaturization and energy-saving strategies, and consequently the number of soft errors is expected to increase dramatically in the coming years. While mechanisms are in place to correct or at least detect soft errors, a percentage of those errors pass unnoticed by the hardware. Techniques that leverage certain properties of iterative HPC applications (such as the smoothness of the evolution of a particular dataset) can be used to detect silent errors at the application level. Results show that it is possible to detect a large number of corruptions (above 90% in some cases) with less than 100% overhead using these techniques. Nevertheless, these data-analytic solutions are still far from fully protecting applications to a level comparable with more expensive solutions such as full replication. In this thesis, partial replication is explored to overcome this limitation. More specifically, it has been observed that not all processes of an MPI application experience the same level of data variability at exactly the same time. Thus, one can smartly choose and replicate only those processes for which the lightweight data-analytic detectors would perform poorly. Results indicate that this new approach can protect the analyzed MPI applications with 7-70% less overhead (depending on the application) than full duplication, with similar detection recall.
Ph.D. in Computer Science, May 2017
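The application-level silent error detection mentioned above exploits the smoothness of an evolving dataset. The sketch below shows the general idea only: flag a value whose deviation from a simple linear extrapolation of the two previous time steps exceeds a tolerance. The extrapolation rule, tolerance, and data are assumptions for illustration, not the thesis's detectors.

```python
# Minimal sketch of smoothness-based silent data corruption (SDC) detection for
# an iterative application, in the spirit of the data-analytic detectors the
# abstract describes. The extrapolation rule and tolerance are illustrative
# assumptions, not the thesis's actual detector.

def detect_sdc(prev2, prev1, current, tolerance):
    """Flag indices whose value deviates too far from a linear extrapolation."""
    suspects = []
    for i, (a, b, c) in enumerate(zip(prev2, prev1, current)):
        predicted = 2 * b - a               # linear extrapolation from two prior steps
        if abs(c - predicted) > tolerance:
            suspects.append(i)
    return suspects

if __name__ == "__main__":
    step_t0 = [1.00, 1.10, 1.20, 1.30]
    step_t1 = [1.05, 1.15, 1.25, 1.35]
    step_t2 = [1.10, 1.20, 9.99, 1.40]      # index 2 corrupted by a simulated bit flip
    print("suspect indices:", detect_sdc(step_t0, step_t1, step_t2, tolerance=0.05))
```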
- Title
- GREEN FACADES IN ARID CLIMATE: EFFECTS ON BUILDING ENERGY CONSUMPTION IN JEDDAH, SAUDI ARABIA
- Creator
- Binabid, Jamil
- Date
- 2017, 2017-05
- Description
-
In recent decades, the population of Saudi Arabia has increased significantly, reaching thirty-two million in July 2016. This growth, along with substantial economic expansion, has driven the construction of numerous new buildings, particularly residential structures. After 1940, with the introduction of subdivisions and setbacks, more surfaces were exposed to solar radiation, leading to rising surface heat. With the growing use of air conditioning since the early 1970s, electrical energy consumption increased. This was exacerbated by the poor performance of building envelopes, the common use of concrete blocks for construction, and, as reported in 2013 by the Saudi Electricity Company, the fact that 70% of buildings are not thermally insulated, all of which contributes to high cooling loads and increased use of air conditioning to provide building occupants with the desired level of thermal comfort. In response to this trend, the Saudi government established the Saudi Energy Efficiency Center (SEEC) in 2014, requiring that all new construction have insulation. This policy did not, however, address existing buildings or the adoption of appropriate energy-efficiency solutions for them. Green facades present an important and efficacious approach to meeting this need. The following research focuses on green facade design strategies, which, in conjunction with thermal insulation retrofitting, can significantly enhance building envelope performance for existing low-rise (one to three floors) single-family homes in the arid climate of Jeddah, the second largest city in Saudi Arabia, located in the western area of the most populated province, Mekkah. The city was selected as a case study because its residences require cooling and air conditioning almost all year round, due to the low diurnal temperature variation resulting from low elevation and high humidity. The research methods included an experimental approach to measure how much solar radiation is blocked by a green facade. After researching both native and non-native plants, as well as vegetation properties reported in previous literature, such as evapotranspiration and thermal conductivity, Bougainvillea Glabra, Clerodendrum Inerme, Ipomoea Pes-Caprae, Jacquemontia Pentantha, and Pentalinon Luteum were chosen as the optimal plants for use in this study. Data collected from existing green facades in Jeddah during the summer season were analyzed for comparison and evaluation. In addition, energy simulation with EnergyPlus was used to predict potential cooling and air-conditioning energy savings for buildings in Jeddah with respect to the different types of plants and green facade systems used. Finally, recommendations on the best design solutions for the arid climate of Jeddah are formulated and could be incorporated into city policies and regulations from SEEC and the Municipality.
Ph.D. in Architecture, May 2017
- Title
- DESIGN AND PERFORMANCE EVALUATION OF SWITCHED RELUCTANCE MACHINES WITH HIGHER NUMBER OF ROTOR POLES FOR LOW POWER PROPULSION APPLICATIONS
- Creator
- Ray, Aishwarya
- Date
- 2013-04-30, 2013-05
- Description
-
Currently, most of the world, along with the United States, runs on a fossil fuel economy. The United States alone consumes around 25% of global annual oil production, and according to the Institute for Energy Research, around 70% of this oil is used for automotive applications [2]. Due to rising concerns over depleting fossil fuel reserves, global warming, and other environmental issues, along with volatility in the fuel market, alternative energy sources and fuel efficiency have received widespread attention among researchers. Electric machines, which are at the heart of any drive-train mechanism, have garnered particular attention. To date, the vast majority of research in this area has focused on permanent-magnet machine topologies. However, due to concerns regarding rising demand and foreign dependence for the procurement of rare earth materials, coupled with rising costs and the environmentally hazardous excavation of these materials, machine technologies with little or no permanent magnet content have gained significant interest. Switched Reluctance Machines (SRMs) are one of the top contenders in this category. An SRM does not require permanent magnets, is extremely rugged, and is well suited for harsh operating conditions. SRMs have a wide operating speed range, a very simple geometric structure, and allow for fault-tolerant operation. Conventional SRMs were designed with a higher number of stator poles than rotor poles. However, this configuration has many drawbacks, such as high noise, torque ripple, and complexity in modeling. Using the new formula developed at the Illinois Institute of Technology, a new SRM topology has been proposed that has a higher number of rotor poles relative to stator poles (HRSRM). This topology has shown a significant improvement in torque ripple along with a reduction in noise as compared to conventional SRMs. This study evaluates the performance of an SRM with a higher number of rotor poles for low-power automotive applications such as electric bicycles (eBikes), all-terrain vehicles (ATVs), golf carts, and utility task vehicles (UTVs). Using Finite Element Analysis (FEA), the machine has been designed with 6 stator and 10 rotor poles. This thesis presents preliminary results from the iterative machine design process along with detailed results from each stage of development. A closed-loop simulation of the system has been carried out in MATLAB to verify the dynamic performance of the designed machine. Finally, an experimental setup was developed for the prototype machine. The drive consists of an asymmetric bridge converter operating in a closed current loop, with phase detection enabled via a position encoder. This test bed has been used to verify the feasibility of the proposed solution.
M.S. in Electrical Engineering, May 2013
- Title
- MACROSCOPIC QUANTITIES FOR STOCHASTIC DIFFERENTIAL EQUATIONS WITH A LEVY NOISE IN TWO DIMENSIONS
- Creator
- Albert, Hannah
- Date
- 2016, 2016-05
- Description
-
The mean exit time and transition probability density function are macroscopic quantities used to characterize the behavior of stochastic differential equations (SDEs). The integro-differential equations determining these quantities for SDEs with non-Gaussian α-stable Lévy motions involve a nonlocal term consisting of a singular integral, which is a manifestation of the 'flights' or 'jumps' due to the non-Gaussian noise. A two-dimensional SDE with radially symmetric α-stable Lévy motion is considered, and an efficient, second-order accurate numerical scheme is developed for calculating the mean exit time and transition probability density function. The scheme is numerically verified by testing the deterministic integro-differential equations with a known, smooth function u(x) in place of the mean exit time, and by calculating an unknown mean exit time u(x).
M.S. in Applied Mathematics, May 2016
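For orientation, a commonly used formulation of the mean exit time problem for a two-dimensional process with rotationally symmetric α-stable jumps is sketched below; the drift term, noise scaling, and normalization are generic textbook choices and may differ from the equations actually treated in the thesis. The singular integral is the nonlocal term the abstract refers to.

```latex
% Generic mean-exit-time formulation for a 2-D SDE with rotationally symmetric
% alpha-stable Levy noise (a standard form; details may differ from the thesis).
% u(x) is the mean exit time from the domain D; an exterior condition replaces
% the usual boundary condition because of the jumps.
\[
  \mathcal{A}u(x) = -1 \quad \text{for } x \in D,
  \qquad
  u(x) = 0 \quad \text{for } x \in \mathbb{R}^{2}\setminus D,
\]
\[
  \mathcal{A}u(x) = b(x)\cdot\nabla u(x)
  \;+\; \varepsilon\,\mathrm{P.V.}\!\!\int_{\mathbb{R}^{2}\setminus\{0\}}
        \bigl[u(x+y)-u(x)\bigr]\,
        \frac{C_{2,\alpha}}{\lVert y\rVert^{\,2+\alpha}}\;\mathrm{d}y,
  \qquad 0<\alpha<2,
\]
% where b is the drift, epsilon scales the noise intensity, and C_{2,alpha} is
% the normalizing constant of the rotationally symmetric alpha-stable jump measure.
```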
- Title
- CLUSTERING ALGORITHM FOR MASS SPECTROMETRY DATA USING GENERAL-PURPOSE COMPUTING ON GRAPHICS PROCESSING UNITS
- Creator
- Ali, Ansab
- Date
- 2016, 2016-05
- Description
-
Modern mass spectrometers can produce mass spectra at a very high rate. Usually, this data has a significant percentage of redundant spectra that increase the database lookup time when searching for peptides. Therefore, there is a need for data-mining techniques (e.g., clustering) to reduce the complexity of these mass spectra datasets before a database search. Multi-core architectures, specifically Graphics Processing Units (GPUs), have evolved tremendously in recent years and are an ideal option for clustering these large mass spectra datasets. In this thesis, we present an efficient and scalable parallel algorithm for clustering mass spectra using the well-known 'F-set' similarity metric. We describe the algorithmic framework and the various optimizations that serve to vastly improve the algorithm's performance and accuracy. We test the algorithm on a variety of real as well as self-generated mass spectra datasets and show that it achieves highly accurate clustering with a performance gain of around 50 to 100 times compared to serial implementations in the literature. Thus, by clustering mass spectra corresponding to unique peptides together, the algorithm allows faster identification of peptides in a subsequent database search.
M.S. in Electrical Engineering, May 2016
- Title
- A NUMERICAL AND ANALYTICAL STUDY OF THE GROWTH OF SECOND PHASE PARTICLES USING A SHARP INTERFACE APPROACH
- Creator
- Barua, Amlan K.
- Date
- 2012-08-15, 2012-12
- Description
-
Two-phase alloys are quite important in materials science and metallurgy; common examples include the nickel-aluminum and iron-carbon systems. The most important macroscopic properties of these alloys depend on the size, orientation, and concentration of the second-phase precipitates, so it is necessary to understand the details of the formation, growth, and equilibrium conditions of these microstructures for better material production. In this dissertation we investigate the growth of the precipitates within the matrix using a sharp interface approach. We consider the effects of elastic fields on the evolution of the precipitates; the elastic fields can either be applied in the far field or simply arise as a result of crystallographic differences between the matrix and precipitate phases. The precipitates exhibit complicated morphology because of the Mullins-Sekerka instability. Our investigation is based on both analytical and numerical techniques. We use linear analysis to understand the qualitative behavior of the problem, at least for short times. To simulate the long-time dynamics of the problem and to understand the effects of nonlinearity, we use highly accurate boundary integral methods. Our main contribution in this thesis is threefold. First, starting from linear analysis, we focus on the conditions under which stable growth, in the presence of an elastic field, is possible for a single precipitate. Finding such conditions is important in material production, since simple conditions like constant material flux and constant elastic fields produce precipitates with complicated shapes. Second, we propose a space-time rescaling of the original boundary integral equations of the problem. The rescaling enables us to accurately simulate the very long time behavior of a system comprising multiple precipitates growing under different mass fluxes and elasticities, and it helps us understand the long-time interaction of precipitates. Third, we implement an adaptive treecode to reduce the computational complexity of the iterative solver from O(N²) to O(N log N), where N is the dimension of the discrete problem. The efficiency of the treecode is demonstrated through simulations. A parallelization strategy for the treecode is also discussed, and the speed-up from the parallelization is demonstrated using a moderate number of cores.
Ph.D. in Applied Mathematics, December 2012
- Title
- THERMAL INACTIVATION OF POLYPHENOL OXIDASE IN POTATO, AVOCADO AND APPLE
- Creator
- Banstola, Anunaya
- Date
- 2011-06, 2011-07
- Description
-
Polyphenol oxidase (PPO) needs to be inactivated to control enzymatic browning, which is undesirable in the fruit and vegetable industry. Enzymes exhibit substrate- and origin-specificity, so the functionality and inactivation properties of PPO vary depending on the source of the enzyme, the properties of the food matrix, the treatment method applied, and the measurement techniques. Therefore, the result from one food cannot be extrapolated to another, and more research is needed to understand the behavior of PPO under different processing conditions. In this study, thermal inactivation of PPO in potato, avocado, and apple was studied at four different temperatures (50, 60, 70 and 80 °C). The treatment time varied from 1 to 60 min, depending upon the temperature used. The level of inactivation was deduced from the residual enzyme activity, which was assayed by spectrophotometric methods with two different substrates, pyrocatechol and 3,4-dihydroxyphenylalanine (DOPA), using an ELISA plate reader. The degree of PPO inactivation achieved in potato and apple extracts was higher using pyrocatechol as the substrate than using DOPA, whereas it was similar for both substrates in the case of avocado. Inactivation kinetics were studied in terms of the rate constant k, D-values, and activation energy. The inactivation followed first-order kinetics, and a higher dependency on treatment time was observed at higher temperatures. Biphasic inactivation was observed in the case of apple PPO, where activation of the enzyme was observed at low temperature (50 °C). In conclusion, the level of PPO inactivation depended on the severity of the thermal treatment, the source of PPO, and the substrate used for enzyme determination.
M.S. in Food Safety and Technology, July 2011
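The kinetic parameters named above are connected by the standard first-order inactivation and Arrhenius relations, reproduced here as general definitions rather than results from the thesis:

```latex
% First-order enzyme inactivation and its standard kinetic parameters
% (general relations, not results from this thesis):
% A_0, A : enzyme activity before and after heating for time t
% k      : first-order inactivation rate constant at temperature T
% D      : decimal reduction time,  E_a : activation energy (Arrhenius)
\[
  \ln\!\frac{A}{A_0} = -k\,t,
  \qquad
  D = \frac{\ln 10}{k},
  \qquad
  k = k_{0}\,\exp\!\left(-\frac{E_a}{R\,T}\right).
\]
```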
- Title
- APPLICATION SOFTWARE DESIGN WITH THE FEATURE LANGUAGE EXTENSION
- Creator
- Maruyama, Shuichi
- Date
- 2013-04-23, 2013-05
- Description
-
When implemented with existing mainstream programming languages, the code of interacting features inevitably becomes entangled in the same reusable program unit of the language, such as a method. Interacting features are very common in software applications, and program entanglement destroys separation of concerns, making the software difficult to develop, maintain, and reuse. The Feature Language Extensions (FLX) is a set of programming language constructs that allows the programmer to develop interacting features as independently reusable program modules. This thesis addresses two questions: how to design software with FLX, and whether programs that can be written in a procedural language such as Java can also be written in FLX. We illustrate our results with examples from a computer blackjack game implemented using FLX. For the first question, we introduce a set of seven design guidelines. Some of these guidelines promote good programming practices, giving better separation of concerns and making FLX complementary to object-oriented design. Others are developed so that features written following them are reusable and do not need to be changed when they are integrated with other features. A procedural programming language such as Java has constructs that allow a programmer to specify program units to be executed sequentially, conditionally, iteratively, and recursively. Previous papers gave examples of how to implement the first two types of execution flow with FLX; in this thesis, we show how to implement the other two.
M.S. in Computer Science, May 2013