Search results
(8,381 - 8,400 of 10,083)
- Title
- Understanding the effect of temperature, strain and grain size on the stress relaxation behavior of Ni-base and steel alloys
- Creator
- Reny, Julia
- Date
- 2021
- Description
- Stress relaxation research is valuable because it provides information on how materials relieve stress under a constant strain. In applications where parts are subjected to a constant deformation, it is important to know that regions containing notches or stress concentrations will be able to relax enough to avoid the formation of cracks during operation. In this study, the stress relaxation behaviors of the experimental Ni-base superalloy RRHT5 and the Pyrowear 53 steel alloy were investigated. Tests were conducted under different conditions, varying temperature, initial strain, and grain size to explore the influence of each of these factors. The stress relaxation test data were used to calculate apparent activation volumes, which can be correlated with the plastic deformation mechanisms governing the stress relaxation. Understanding these mechanisms and identifying how specific factors influence them is essential for further controlling and optimizing the design and processing of materials to obtain the best properties. Depending on the test conditions, the stress relaxation underwent one or two stages dominated by different mechanisms. These governing mechanisms were considered to be either dislocation-dislocation interactions, dislocation rearrangement, dislocations spreading from localized slip bands, or dislocation-precipitate interactions.
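The apparent activation volume mentioned above is conventionally obtained from the slope of the relaxation transient; a minimal statement of the textbook relation (not taken from this thesis) is:

```latex
% Apparent activation volume from a stress-relaxation transient (textbook form).
% At constant total strain the plastic strain rate is proportional to -d(sigma)/dt,
% so V* follows from the slope of ln(-sigma_dot) versus stress.
V^{*} = k_{B} T \, \frac{\partial \ln \dot{\varepsilon}_{p}}{\partial \sigma}
      \approx k_{B} T \, \frac{\partial \ln\!\left(-\dot{\sigma}\right)}{\partial \sigma}
```

A single relaxation stage then appears as one straight segment in a plot of ln(−σ̇) versus σ, and the two-stage behavior described above appears as two distinct slopes.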
- Title
- High School Mathematics Teachers’ Conceptions of Nature of Mathematics (NOM) and How Prior Learning Environments Affect These Conceptions
- Creator
- Elefteriou, Katherine
- Date
- 2023
- Description
- Literature shows that the Nature of Mathematics Knowledge (NOMK) dates back to the era of Plato and Aristotle (Dossey, 1992). It suggests that mathematics teachers' beliefs, views, conceptions, and preferences about NOM influence the way in which they teach mathematics. It is important to understand how these conceptions are formed; they may evolve consciously or unconsciously from teachers' experiences. Teachers' experiences as students of mathematics, as well as their family, school, cultural, and social experiences, influence their behavior, including their decisions, actions, class organization, learning activities, and students' achievement (Beswick, 2012; Ernest, 2008; Thompson, 1984). Yet there is no NCTM standard on NOM (Gfeller, 1999). The purpose of the present study was to assess high school mathematics teachers' NOMK conceptions and to explore how these conceptions have been influenced by their personal and educational experiences as students of mathematics. Another objective was to explore whether the teachers' years of mathematics teaching experience and their level of education have any influence on their NOMK beliefs. The sample consisted of 52 high school mathematics teachers who were certified to teach secondary mathematics and who had at least three years of mathematics teaching experience. Two instruments were used to collect the data: 1) the VNOM D instrument, to assess the teachers' beliefs regarding the NOMK aspects, and 2) a demographics instrument, to collect information on the teachers' demographics and on their experiences as students of mathematics. Interviews were also used to enrich the findings. Results showed that participants held strong beliefs regarding their NOMK, and that their years of experience and level of education influenced their NOMK beliefs.
- Title
- MICROSTRUCTURE AND MECHANICAL PROPERTIES OF DISCONTINUOUSLY PRECIPITATED NI-CO-AL ALLOYS
- Creator
- Ho, Kathy
- Date
- 2022
- Description
- The study of high temperature structural materials has been of great interest and an area of immense research and development focus in recent years. With the capability of catering to specific needs and applications while being commercially cost-effective, these materials can be synthesized using various methods and materials for a large range of applications. In order to put the advantageous properties of these materials to practical use in service, empirical data on the material and mechanical properties of these high temperature structural alloys must first be obtained. This can be achieved through numerous processing methods; one in particular is precipitation strengthening. Two transformation modes are discontinuous and continuous precipitation. Discontinuous precipitation (DP) nucleates at high-angle, incoherent grain boundaries, grows through grain boundary diffusion, and produces a lamellar structure consisting of alternating layers of γ and γ′ (Ephler, 2004). Continuous precipitation (CP) nucleates within the grain, is controlled by volume diffusion, and results in a homogeneous distribution of equilibrium-composition precipitates with a spherical/cuboidal morphology. Since both modes of transformation possess a chemical driving force resulting from the supersaturation of solute, the coexistence of DP and CP transformation in a material is possible. However, as demonstrated in past studies, a partial DP transformation in structural alloys is undesirable, as detrimental effects on mechanical properties are observed. As a result, numerous studies have focused on suppressing DP altogether. In 1972, Erhard Hornbogen hypothesized that a fully DP-transformed material would yield superior mechanical properties, similar to pearlite formation in steel, since the lamellar structure would increase barriers to dislocation movement (Hornbogen, 1972). Recent studies have therefore redirected their focus toward driving the DP transformation to completion for improved mechanical properties. The purpose of this work was thus to 1) determine the aging conditions under which a complete, 100% DP transformation is achieved in select alloys, 2) determine the conditions at which the optimal precipitate size via CP transformation is obtained, so that precipitation strengthening can be used effectively without the concern of over-aging, 3) experimentally compare the material and mechanical properties of 100% DP aged samples and CP aged samples with the optimal precipitate size, 4) compare the mechanical properties of alloys that have undergone a complete DP transformation to those of commercial alloys currently used in service, and 5) provide empirical data to verify Hornbogen's claim.
The results from this work indicated that 1) a lower aging temperature promotes DP transformation while a higher temperature promotes CP transformation, 2) a smaller grain size prior to aging was more favorable for DP transformation while larger grains were favorable for CP transformation, 3) a complete DP transformation was observed for Alloy 9 and Alloy 10 after aging at 500°C for 4 hours and 550°C for 4 hours, 4) Alloy 1, Alloy 3, and Alloy 5 were potentially undergoing a different type of transformation at lower DP aging temperatures, where β phase was present, 5) the optimal precipitate size for effective precipitation strengthening (CP transformation) was achieved under the CP aging conditions of 700°C for 1 hour for Alloy 10 and 750°C for 1 hour for Alloy 9 and the forged stock bar, 6) a small fraction of DP consistently formed at the grain boundaries of the CP aged samples for all alloys, indicating that nucleation of DP was quick but growth was limited, 7) the mechanical properties of the DP aged samples of Alloy 9, Alloy 10, and the forged bar were superior to those of their corresponding CP aged samples in terms of hardness, UTS, and yield stress, but were less ductile, and 8) the mechanical properties of the DP aged samples of Alloy 9, Alloy 10, and the forged bar were comparable, and at times superior, to those of commercially available alloys. Because little prior research has examined the mechanical properties of DP alloys, this investigation serves as a pioneering effort to experimentally determine whether the mechanical properties of a completely DP-transformed material are superior to those of a CP-transformed material aged to the optimal precipitate size, while collecting empirical data to verify Hornbogen's claim.
- Title
- COMPREHENSIVE ANALYSIS OF EXON SKIPPING EDITS WITHIN DYSTROPHIN D20:24 REGIONS
- Creator
- Niu, Xin
- Date
- 2020
- Description
- Exon skipping is a disease-modifying therapy that operates at the RNA level. In this strategy, oligonucleotide analog drugs are used to mask specific exons and prevent them from being included in the mature mRNA. Exon skipping can also be used to restore protein expression in cases where a genetic frameshift mutation has occurred, and this is how it is applied to Duchenne muscular dystrophy (DMD). DMD most commonly arises as a result of large exonic deletions that juxtapose flanking exons of incompatible reading frame, which abolishes dystrophin protein expression. This loss leads to the pathology of the disease, which is severe, generally causing death in the second or third decade of life. Here, the primary aim of exon skipping is to restore the reading frame by skipping an exon adjacent to the patient's original deletion. While restoring some protein expression is beneficial, how removing a region from the middle of the protein affects its structure and function is unclear. Complicating matters, the dystrophin gene is very large, containing 79 exons; many different underlying deletions are known, and exon skipping can be applied in many ways. It has previously been shown that many exon-skip edits result in structural perturbations of varying degrees. Very few studies have focused on the biophysics of the edited protein, and it remains largely unclear whether and how such editing can be done to minimize these perturbations. To provide solid evidence of the significant variation among these cases (especially the clinically relevant ones) and to better understand the general principles of "what makes a good edit", we examine a systematic and comprehensive panel of possible exon edits in a region of the dystrophin protein. The D20:24 domain of the dystrophin rod region was selected because it forms a complete unit bounded by hinge regions (mostly random-coil structure), so that additional STRs do not disrupt its structural stability. The D20:24 region also lies in hot spot region II (HS2), which accounts for the largest number of DMD patients. In this comprehensive scan, we identify, for the first time, exon edits that appear to maintain structural stability similar to the wild-type protein, as well as the clinically relevant edits. We then identify factors that appear to correlate with the degree of structural perturbation, such as the number of cooperative protein domains and how the edited exon structure interacts with the protein domain structure. Our study is the first systematic and comprehensive scan of an entire multi-STR domain. It should help clarify the protein-level nature of various exon-skipping edits and provide useful targets for clinical treatment. The knowledge gained may also be applied to design more sophisticated CRISPR edits in future work.
- Title
- CONCEPTUAL COST ESTIMATION MODEL FOR BRIDGES WITH RESPECT TO ABC METHODS
- Creator
- Rajeei, Farshad
- Date
- 2020
- Description
- As the need to renovate and repair structurally deficient and functionally obsolete bridges increases, innovative methods that can lead to shorter construction time, better quality, longer durability, and lower life-cycle costs are becoming more popular in transportation agencies. Developing a model that can estimate the total construction cost of accelerated bridge construction (ABC) projects and compare it with the cost of conventional methods will help decision-makers at DOTs understand and assess the benefits and costs of ABC methods at the planning phase of a project and, in turn, will lead to wider use of ABC methods versus conventional ones. This decision-making process is complicated, however, because the number of executed ABC projects, especially those carried out with SIBC and SPMT (two superstructure replacement methods), is limited; as a result, there is a lack of historical knowledge for estimating the associated cost of these methods in future projects. Factors affecting this process include, but are not limited to, construction costs, user costs, quality of work, impact on traffic, the safety of road users and construction workers, and the impact on surrounding communities and businesses. The main aim of this study is to build a model that estimates the additional costs of using the SIBC and SPMT methods and the savings in user costs.
- Title
- Comparison of an Ideal Point and Dominance IRT Model on the Detection of Differential Item Functioning with DFIT
- Creator
- Spizzuco Jr, Daniel
- Date
- 2019
- Description
- Item response theory (IRT) models can assume a variety of forms including, notably, dominance and ideal point-based probability distributions. But researchers have only recently begun to explore issues related to the above distinction. The current study therefore examines whether model-data fit and rates of differential item functioning (DIF) detection remain comparable when data are analyzed via the ideal point-based generalized graded unfolding model (GGUM) vs. the dominance-based graded response model (GRM). To address these issues, item response data were simulated to contain dominance, ideal point and mixed response processes, and DIF and impact scenarios. Results indicated that model-data fit and DIF detection accuracy were not as closely aligned as anticipated. Overall, the GGUM fit data better than the GRM to the extent that any ideal point processes were present, while the GRM was slightly better at fitting dominance-only data. With no impact, however, the GGUM fit all embedded response data types better than the GRM. Results were mixed among impact scenarios. This pattern was found in both no DIF and DIF scenarios. Several points were made with respect to the DIF portion of the study. First, Type 1 error rates were in most cases quite conservative for both models. Second, study-wide, more power emerged with dominance as compared to ideal point data for both models. Moreover, in no impact conditions, slightly more power accrued via the GGUM for dominance and ideal point data. With impact, however, the GRM produced somewhat more power across data types. Third, in terms of DIF patterns/sources, power was high for both models when DIF was embedded on the full set of location/threshold parameters, and lower with fewer differentially functioning (DF) location/threshold parameters. Notably, the GGUM was slightly more powerful in the fewest DF location/threshold scenarios, and the GRM was more powerful in the most DF location/threshold scenarios. Fourth, neither model performed well in the complex within-item cancelling DIF scenarios. These patterns generally occurred in both uniform and non-uniform scenarios. The paper concludes with a presentation of recommendations, study limitations and issues for future research.
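For readers unfamiliar with the two model families being compared, here is a compact sketch of the dominance-based GRM category probabilities (the standard Samejima formulation, written from the published model rather than this study's own code):

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Graded response model: P(X = k | theta) for ordered categories.

    theta : latent trait value
    a     : item discrimination
    b     : ordered threshold parameters, length K-1
    Returns an array of K category probabilities that sum to 1.
    """
    b = np.asarray(b, dtype=float)
    # Cumulative ("boundary") probabilities P(X >= k) for k = 1..K-1
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    # Pad with P(X >= 0) = 1 and P(X >= K) = 0, then take differences
    bounds = np.concatenate(([1.0], p_star, [0.0]))
    return bounds[:-1] - bounds[1:]

# Example: a four-category item evaluated at theta = 0.5
print(grm_category_probs(theta=0.5, a=1.2, b=[-1.0, 0.0, 1.5]))
```

The GGUM differs in using a single-peaked (unfolding) response function centered on the item's location, which is what makes it an ideal point rather than a dominance model; its formula is longer and omitted here.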
- Title
- Influence Of Internal Factors In Construction Organizations On The Implementation Of Integrated Project Delivery Viewed From The Organizational Change Theory
- Creator
- Rashed, Ahmed
- Date
- 2022
- Description
- Integrated Project Delivery (IPD) is an emerging, collaboration-oriented construction project delivery system. It involves the critical participants at an early stage of the project timeline. Recently, IPD has become increasingly common, and many organizations are interested in contributing to the Architecture, Engineering, and Construction (AEC) industry. No research studies have previously observed and studied the effect of IPD implementation through an organizational change theory (OCT) lens. The presented research was designed to explore the role of organizational factors in IPD implementation. The first domain reflects organizational-level factors, including cultural and economic considerations, while the second domain focuses on member-level factors, i.e., employee involvement and readiness to change. Together, these domains influence the organization's intention to change and its adoption of IPD as a project delivery system. This impact is viewed through the lens of OCT, based on the contributions and theories of researchers from a variety of disciplines. A survey was developed to gather quantitative data from the industry, and data were collected from N = 128 construction-industry employees. Data analysis was performed through structural equation modeling using SmartPLS 3. Results showed that communication and integration were significantly associated with IPD implementation. Moreover, involvement and readiness to change also positively predicted the implementation of IPD. The empirical results of the current study validate all constructs of the hypothesized model except the reward system.
- Title
- Toward an Extraordinary Ecotourism Destination on The Shoreline of Aseer Region, Kingdom Of Saudi Arabia
- Creator
- Saleh, Abdulmalik Mohammad S.
- Date
- 2022
- Description
- Since the dawn of the Anthropocene epoch, human activities have been adversely influencing our globe and have become a controversial phenomenon. As a counterforce, however, sustainable green movements worldwide are continually attempting alternative resolutions to preserve nature. As the tourism industry grows, ecotourism is one specific eco-friendly approach that asserts minimizing human impacts, conserving captivating nature, improving the livelihood of local communities, and involving interpretation and education. Half a decade ago, Saudi Arabia's Vision 2030 (the post-oil plan) was launched to diversify the country's GDP and develop public service sectors such as tourism. This thesis investigates the relationship between architecture and the possibilities of ecotourism principles, alongside the governmental program, under multiple tourism indicators along the untouched Aseer shoreline, which has valuable attractions and amenities but faces several issues, including informal planning, limited infrastructure, and a low-income community. We built a proposed project based on a collection of written materials on the area's environmental and culturally diverse aspects and on case studies; architecture-to-ecotourism is thriving, but there is still potential for methodological development. The thesis findings demonstrate that architecture can contribute immensely to sustainable development through the ecotourism concept and can have a tangible impact on the project. At the same time, architecture through ecotourism succeeds by improving the economic conditions of host societies, reducing environmental consequences, and strengthening heritage identity. Further study is needed on the correlation, which remains highly debated, between architecture and ecotourism norms for sustaining nature.
- Title
- DEEP LEARNING AND COMPUTER VISION FOR INDUSTRIAL APPLICATIONS: CELLULAR MICROSCOPIC IMAGE ANALYSIS AND ULTRASOUND NONDESTRUCTIVE TESTING
- Creator
- Yuan, Yu
- Date
- 2022
- Description
- For decades, researchers have sought to develop artificial intelligence (AI) systems that can help human beings with decision making, data analysis, and pattern recognition in applications where analytical methods are ineffective. In recent years, Deep Learning (DL) has proven to be an effective AI technique that can outperform other methods in applications such as computer vision, natural language processing, and autonomous driving. Realizing the potential of deep learning techniques, researchers have also started to apply deep learning to other industrial applications. Today, deep learning based models are used to innovate and accelerate automation, guidance, and decision making in industries including the automotive industry, the pharmaceutical industry, finance, agriculture, and more. In this research, several important industrial applications (in biomedicine and non-destructive testing) utilizing deep learning algorithms are introduced and analyzed. The first biopharmaceutical application focuses on developing a deep learning based model to automate the visual inspection process in the Median Tissue Culture Infectious Dose (TCID50) assay. TCID50 is one of the most popular methods for viral quantification. An important step of TCID50 is to visually inspect the sample and decide whether it exhibits cytopathic effect (CPE). Two novel models have been developed to detect CPE in microscopic images of cell cultures in 96-well plates. The first model consists of a convolutional neural network (CNN) and a support vector machine (SVM). The second model is a fully convolutional network (FCN) followed by morphological post-processing steps. The models were tested on four cell lines and achieve very high accuracy. Another biopharmaceutical application developed for cellular microscopic images is clonal selection, one of the mandatory steps in the cell line development process, which focuses on verifying the clonality of the cell culture. Researchers used to visually inspect the microscopic images to verify clonality; in this work, a novel deep learning based model and workflow are developed to accelerate the process. The algorithm consists of multiple steps, including image analysis after incubation to detect cell colonies and verification of clonality in the day-0 image. The results and common misclassification cases are shown in this thesis. Image analysis is not the only technology that has been advancing for cellular image analysis in the biopharmaceutical industry: a new class of instruments is now used in the industry, enabling more opportunities for image analysis. To make the most of these new instruments, a convolutional neural network based architecture is used to perform accurate cell counting and cell-morphology-based segmentation. This analysis can provide more insight into the cells at a very early stage of the characterization process in cell line development. The architecture and testing results are presented in this work. The proposed algorithm achieved very high accuracy in both applications, and the cell-morphology-based segmentation enables a brand new capability for scientists to predict the potential productivity of the cells. The next part of this dissertation focuses on hardware implementations of ultrasonic non-destructive testing (NDT) methods based on deep learning, which can be highly useful in flaw detection and classification applications.
With the help of a smart, mobile non-destructive testing device, engineers can accurately detect and locate flaws inside materials without relying on high-performance computation resources. The first NDT application presents a hardware implementation of a deep learning algorithm on a field-programmable gate array (FPGA) for ultrasound flaw detection. The ultrasound flaw detection algorithm consists of a wavelet transform followed by a LeNet-inspired convolutional neural network called Ultra-LeNet. This work focuses on implementing the computationally demanding part of this algorithm, Ultra-LeNet, so that it can be used in the field where high-performance computation resources (e.g., AWS) are not accessible. The implementation uses resource partitioning to design two dedicated pipelined accelerators for convolutional layers and fully connected layers, respectively. Both accelerators utilize loop unrolling, loop pipelining, and batch processing techniques to maximize throughput. Comparison to other work has shown that the implementation achieves higher hardware utilization efficiency. The second NDT application also implements a deep learning based algorithm for ultrasound flaw detection on an FPGA. Instead of Ultra-LeNet, the deep learning model used in this application is a meta-learning based Siamese network, which is capable of multi-class classification and can also classify a new class even if it does not appear in the training dataset, with the help of automatically learned features. The hardware implementation is significantly different from the previous algorithm. In order to improve inference efficiency, the model is compressed with both pruning and quantization, and the FPGA implementation is specifically designed to accelerate the compressed CNN with high efficiency. The CNN model compression method and hardware design are novel methods introduced in this work. A comparison against other compressed CNN accelerators is also presented.
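As a rough, hypothetical illustration of the first CPE-detection model described above (a CNN feeding an SVM), the sketch below pairs a generic pretrained backbone with scikit-learn; the thesis's actual network, preprocessing, and data are not public, so every concrete choice here (ResNet-18, RBF kernel, 224-pixel images) is a placeholder.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC

# Frozen CNN backbone used only as a feature extractor (placeholder for the thesis's CNN).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # drop the classification head, keep 512-d features
backbone.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

@torch.no_grad()
def extract_features(pil_images):
    """Turn a list of PIL well images into (N, 512) CNN feature vectors."""
    batch = torch.stack([preprocess(img) for img in pil_images])
    return backbone(batch).numpy()

# Hypothetical usage: train_images / train_labels are wells labeled CPE (1) or no CPE (0).
# X = extract_features(train_images)
# clf = SVC(kernel="rbf").fit(X, train_labels)
# cpe_calls = clf.predict(extract_features(test_images))
```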
- Title
- APPLICATIONS OF INTEGRATED DESIGN METHODOLOGIES: HYBRID AUTOMATION OF DESIGN SEQUENCING AND ITS INFLUENCE ON COMPLEX DESIGN PROJECTS
- Creator
- Elshanshoury, Waleed Farouk Omar
- Date
- 2020
- Description
- Since Ivan Sutherland's early development of Sketchpad at MIT in 1963, the first system to permit drawing geometries parametrically, computation and algorithm-aided design (AAD) have significantly influenced design practice. Computation and AAD are design approaches in which the medium of expression is logic instead of geometry. These approaches opened the door to various utilities, including but not limited to form-finding, automation, optimization, and robotic fabrication. Computational design and algorithm-aided design are becoming fundamental approaches in most design practices because of their capability to solve complex problems. This thesis begins with a timeline presenting the evolution of design drivers and showing how designers have conceived of ideal design throughout history. The timeline starts with architectural approaches in ancient times, when beauty, durability, and function were the first principles for identifying good architecture, and ends with the creation of computational technologies, which affected the design process and its logic. It also investigates relations between software engineering and building design, where both fields intertwine in general methodologies. This research examines how computation can generate integrated design systems (IDS) to approach city planning and architectural design. IDS employs data, forces, and algorithms to construct a design system instead of solid geometries. This system combines the different design processes and chronological phases into interconnected blocks. The approach manages big data and assists in decision-making using automation, optimization, and machine learning technologies. The thesis examines existing precedents, applications, and design projects that utilize IDS, including form-finding, materials, and energy, and establishes how evaluation criteria, simulations, solution optimization, and process automation play a vital role in integrated design systems. IDS is a dynamic workflow centered on principles and consisting of components and aiding tools. This research explores technological aiding tools for these systems that help increase design performance and efficiency using voice commands and automated functions.
- Title
- Extreme Fine-grained Parallelism On Modern Many-Core Architectures
- Creator
- Nookala, Poornima
- Date
- 2022
- Description
- Processors with hundreds of threads of execution and GPUs with thousands of cores are among the state of the art in high-end computing systems. This transition to many-core computing has required the community to develop new algorithms that overcome significant latency bottlenecks through massive concurrency. Implementing efficient parallel runtimes that can scale up to hundreds of threads with extremely fine-grained tasks (less than 100 microseconds) remains a challenge. We propose XQueue, a novel lockless concurrent queueing system that can scale up to hundreds of threads. We integrate XQueue into LLVM OpenMP and implement X-OpenMP, a library for lightweight tasking on modern many-core systems with hundreds of cores. We show that it is possible to implement a parallel execution model using lockless techniques, enabling applications to scale strongly on many-core architectures. While the fork-join model is suitable for on-node parallelism, the use of joins and synchronization induces artificial dependencies which can lead to underutilization of resources. Data-flow based parallelism is crucial for overcoming the limitations of fork-join parallelism by specifying dependencies at a finer granularity. It is also crucial for parallel runtime systems to support heterogeneous platforms to better utilize the hardware resources available in modern-day supercomputers. Existing parallel programming environments that support distributed memory either discover the DAG entirely on all processes, which limits scalability, or introduce explicit communication, which increases the complexity of programming. We implement Template Task Graph (TTG), a novel programming model, and its C++ implementation, marrying the ideas of control-flow and data-flow graph programming. TTG can address the issues of performance portability without sacrificing scalability or programmability by providing higher-level abstractions than conventionally provided by task-centric programming systems, without impeding the ability of these runtimes to manage task creation and execution as well as data and resource management efficiently. The TTG implementation currently supports distributed-memory execution over two different task runtimes, PaRSEC and MADNESS.
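To make the fork-join vs. data-flow contrast concrete, here is a toy dependency-driven scheduler in Python; it only illustrates the idea of firing a task as soon as its own inputs are ready, and is not X-OpenMP, XQueue, or TTG (which are C/C++ runtimes built around lockless queues).

```python
from concurrent.futures import ThreadPoolExecutor

def run_dataflow(tasks, deps, workers=4):
    """Run tasks as soon as their own dependencies finish (toy data-flow scheduler).

    tasks : dict name -> zero-argument callable
    deps  : dict name -> list of prerequisite task names
    """
    futures = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        def run(name):
            # Wait only on this task's inputs, not on a global join barrier.
            for d in deps.get(name, []):
                futures[d].result()
            return tasks[name]()

        pending = list(tasks)
        while pending:
            # Submit a task only after all of its prerequisites have been submitted,
            # so futures[d] always exists before run() looks it up.
            ready = [n for n in pending if all(d in futures for d in deps.get(n, []))]
            for n in ready:
                futures[n] = pool.submit(run, n)
                pending.remove(n)
    return {n: f.result() for n, f in futures.items()}

# Example DAG: c depends on a and b, which may run concurrently.
results = run_dataflow(
    tasks={"a": lambda: 1, "b": lambda: 2, "c": lambda: 3},
    deps={"c": ["a", "b"]},
)
print(results)  # {'a': 1, 'b': 2, 'c': 3}
```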
- Title
- Scalable Indexing and Search in High-End Computing Systems
- Creator
- Orhean, Alexandru Iulian
- Date
- 2023
- Description
- Rapid advances in digital sensors, networks, storage, and computation, coupled with decreasing costs, are leading to the creation of huge collections of data. Increasing data volumes, particularly in science and engineering, have resulted in the widespread adoption of parallel and distributed file systems for storing and accessing data efficiently. However, as file system sizes and the amount of data "owned" by users grow, it is increasingly difficult to discover and locate data amongst the petabytes. While much research effort has focused on methods to efficiently store and process data, there has been relatively little focus on methods to efficiently explore, index, and search data using the same high-performance storage and compute systems. Users of large file systems either invest significant resources to implement specialized data catalogs for accessing and searching data, or resort to software tools that were not designed to exploit modern hardware. While it is now trivial to quickly discover websites from the billions accessible on the Internet, it remains surprisingly difficult for users to search for data on large-scale storage systems. We initially explored the prospect of using existing search engine building blocks (e.g., CLucene) to integrate search into a high-performance distributed file system (e.g., FusionFS) by proposing and building the FusionDex system, a distributed indexing and query model for unstructured data. We found indexing performance to be orders of magnitude slower than the theoretical speeds achievable in raw storage input and output, and sought to investigate a new clean-slate design for high-performance indexing and search. We proposed the SCANNS indexing framework to address the problem of efficiently indexing data in high-end systems characterized by many-core architectures, multiple NUMA nodes, and multiple PCIe NVMe storage devices. We designed SCANNS as a single-node framework that can be used as a building block for implementing high-performance indexed search engines, where the software architecture of the framework is scalable by design. The indexing pipeline is exposed and allows easy modification and tuning, enabling SCANNS to saturate storage, memory, and compute resources on different hardware. The proposed indexing framework uses a novel tokenizer and inverted index design to achieve large performance improvements in both indexing and search latency. Given the large amounts and variety of data found in large-scale scientific file systems, it stands to reason to try to bridge the gap between various data representations and to build a more uniform search space. ScienceSearch is a search infrastructure for scientific data that uses machine learning to automate the creation of metadata tags from different data sources, such as published papers, proposals, images, and file system structure. ScienceSearch is a production system deployed on a container service platform at NERSC that provides search over data obtained from NCEM. We conducted a performance evaluation of the ScienceSearch infrastructure focusing on scalability trends in order to better understand the implications of performing search over an index built from the generated tags. Drawing on the insights gained from SCANNS and the performance evaluation of ScienceSearch, we explored the problem of efficiently building and searching persistent indexes that do not fit into main memory.
The SCIPIS framework builds on top of SCANNS and further optimizes the inverted index design and indexing pipeline by exposing new tuning parameters that allow the user to further adapt the index to the characteristics of the input data. The proposed framework allows the user to quickly build a persistent index and to efficiently run TF-IDF queries over it. We evaluated SCIPIS over three kinds of datasets (logs, scientific data, and file system metadata) and showed that it achieves high indexing and search performance and good scalability across all datasets.
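For readers unfamiliar with the terminology, a toy in-memory inverted index with TF-IDF ranking is sketched below; SCANNS and SCIPIS are persistent, parallel, NUMA-aware designs, so this shows only the conceptual core of what such frameworks index and score.

```python
import math
from collections import defaultdict

class TinyInvertedIndex:
    """Toy in-memory inverted index with TF-IDF ranking (illustrative only)."""

    def __init__(self):
        self.postings = defaultdict(dict)   # term -> {doc_id: term frequency}
        self.doc_count = 0

    def add(self, doc_id, text):
        self.doc_count += 1
        for term in text.lower().split():
            self.postings[term][doc_id] = self.postings[term].get(doc_id, 0) + 1

    def search(self, query, k=10):
        scores = defaultdict(float)
        for term in query.lower().split():
            docs = self.postings.get(term, {})
            if not docs:
                continue
            idf = math.log(self.doc_count / len(docs))   # rarer terms weigh more
            for doc_id, tf in docs.items():
                scores[doc_id] += tf * idf
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

idx = TinyInvertedIndex()
idx.add("log1", "error disk failure on node 3")
idx.add("log2", "checkpoint completed on node 3")
print(idx.search("disk failure"))   # only log1 matches, so it is ranked first
```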
- Title
- Architecture as a Communicator of Values and Identity Spaces for Public Safety and Community Benefits
- Creator
- Waidele Arteaga, Nicolas
- Date
- 2022
- Description
- Urban segregation, violence, and crime are linked to drug trafficking. El Castillo Social Factory is an urban strategy that aims to recover the El Castillo neighborhood and prevent drug trafficking from advancing, understanding that police action is necessary but insufficient. The neighborhood is located on the southern periphery of Santiago, in a commune called La Pintana that aspires to make its neighborhoods "more livable, healthy, and economically viable." This proposal explores how investing in civic commons can make these goals a reality. First, it is essential to increase the presence of the State, strengthening existing services and adding new ones, with a focus on the care of children and young people. The second is to recover vacant lots and public spaces in poor or deteriorated condition through an "urban acupuncture" strategy based on the construction of many small or medium-sized projects. Art and sports are fundamental, allowing us to protect children and young people and offer them horizons of recreation and hope. The public buildings, institutions, land, water bodies, and infrastructure inherited from earlier generations are ready for us to see anew, as a robust network of civic assets ready to be activated for the current needs, desires, and dreams of all the people who share and shape them. El Castillo Social Factory offers a fresh look at our community anchors and the vibrant hubs our public spaces can become when we invest in collective urban life. Its vision focuses on positive transformation at the architectural scale, where personal experience and aspirations meet broad, long-range planning efforts, to spark the imagination and spur us to work together toward realizing the abundant potential of what we hold in common.
- Title
- MECHANICAL PROPERTIES AND MICROSTRUCTURE OF HEAT AFFECTED ZONE IN HPS 100W STEEL
- Creator
- Lu, Junhua
- Date
- 2011-11-21, 2011-12
- Description
- In the present investigation, thermally simulated specimens were used to investigate the effect of the peak temperature during welding on the characteristic fracture toughness and microstructural features of the heat-affected zone (HAZ) of HPS 100W steel. HPS 100W is a new grade of bridge steel included in the ASTM A709 standard. The material was subjected to varying thermal profiles to simulate the different subzones of the HAZ, and the resultant microstructures were observed and mechanical properties determined. HAZ simulation was carried out in a Gleeble 3500 at peak temperatures of 850°C, 1100°C, and 1300°C with a heat input of 3 kJ/mm, using specimens oriented transverse or longitudinal to the rolling direction. Subsequently, impact toughness data were obtained at different temperatures by means of Charpy testing. Finally, the different microstructures were characterized using optical microscopy, SEM, and TEM in order to establish the structure-property relationship of the HAZ of HPS 100W steel. Due to the small heat input and accelerated cooling, the grain size in the HAZ is relatively small and the structures are ductile. The Charpy impact results show that the toughness of samples transverse to the rolling direction, for the lower peak temperatures and the base material, is lower than that of longitudinal samples. Nevertheless, even the lowest Charpy values obtained are well above the minimum values required for transportation structural applications. The HAZ corresponding to lower peak temperatures showed higher absorbed impact energy. Finally, a study of the precipitation and dissolution of NbC and Cu was carried out: Cu is the strengthening element, and NbC has a grain-refining effect at elevated temperatures.
M.S. in Science and Engineering, December 2011
- Title
- DESIRES OF THE CITY, THE SENSIBLE METROPOLIS
- Creator
- De Sanabria Sales, Lucia Rodriguez
- Date
- 2014, 2014-07
- Description
- The question about the future of our cities starts with the consideration of what kind of society we want. What role will architecture play in shaping our society and the way people live their lives? Can architecture really be a tool for other objectives? Can it be part of the reactivation of the economy? The future of cities, whether they are as densely populated as the modern metropolis or more sprawled like its surroundings, needs to adapt to new technologies and ways of living. We have to be aware that our cities are in constant change and development, and that their future relies on how well they can adapt. In order to adapt, urbanism needs to step back and analyze the existing city structure, to improve it and create a more flexible environment that will adjust to the next century. The objective of this thesis is to propose a strategy that enhances sustainable development and meets the needs of today by opening a path to the future. Sustainable development is the kind of development that meets the needs of the present without compromising the ability of future generations to meet their own needs (a definition coined in the Brundtland Commission of 1972). Change is occurring: society is willing to connect with the city, and therefore the city must connect back. It should be the playground of young and not-so-young people. It is in our hands to transform the built environment and create spaces of relation in society. However, what is really needed is to fix, reactivate, remodel, and improve the existing metropolis that we already have. The objective of the thesis is to investigate the urban model of a city and how it can address the present while remaining flexible enough to shape the future. Opposites exist and, by definition, there is a strong connection between them. In this research I will work towards a hybrid condition of society. Why do we need to keep the opposites separated? Can we bring them together, make them work and interact, but at the same time maintain their identity? Can nature influence the metropolis without causing it to lose the density that characterizes it? To investigate this and find answers to the continuous evolution of society, the first studio will concentrate on the Retreat, what it is and how it works. This research will provide an abstraction of nature and Retreat in its pure, simple form. I will use this abstraction by applying it to the built environment and using it to analyze and shape the metropolis. I will also observe what changes occur in the basic relationships of society. Many questions arise when an architect gets into the topic of the future city. What kind of people and cities do we want? What will make us experience the difference that we are searching for? How can public spaces be used for people to meet and connect and for culture to grow? What is the difference between metropolis and nature? These questions will guide the projects to find a coherent strategy that could be applied to different metropolitan settings. The first field of study proposed is designing with nature. The environment in Colorado around Camp of Arts Perry Mansfield is mostly wild. Here I will see how design responds to the nature around it. How do we bring an urbanized sense to the landscape in order to enhance the feelings we have in it?
The second field of study addresses the urban tissue of Woodlawn (a neighborhood in Chicago), the existing metropolis, how it is moving towards the future, and the creation of a bond between environment and metropolis. In this section of the thesis, I will introduce nature as a tool for the development of the city. What are architects and urbanists working on to reshape the built environment, and how do we make spatial conditions where we will experience more diverse stages? I want to create a hybrid stage where natural and urban elements work together, a space for interaction in society, having retreat and metropolis in the same place. The 'Desires of the City' will be a strategy that looks into a series of aspects involved in urban development: focusing on what the users need and want, and creating a sense of community among neighbors; controlling energy, with sustainable projects that deal with the communication and infrastructure of the neighborhood; and, on the programmatic side, providing a strategy of one space that hosts different activities, with a variety of cultural and social facilities. I will introduce a designed landscape that will work as the extra layer that we are missing in the urbanism of today. Landscape will be the infrastructure of the neighborhoods, introducing the concept of semi-private ownership in the metropolis. Questioning the actual ownership of the ground and transforming vacant land into options and opportunities, one of the fundamental assertions of the projects is the necessity of community action and engagement. In order to make an impact, society has to be involved in the process. This new landscape infrastructure is where action happens and will spark the beginning of a new urbanism that is characterized by flexibility and where future changes in society are an asset and not an issue. This new residential model is sensible and will respond to the desires of the users.
M.S. in Architecture, July 2014
- Title
- STUDY OF THE STRUCTURE OF THE INDIRECT FLIGHT MUSCLE OF LETHOCERUS INDICUS BY LABELING WITH HEAVY ATOMS
- Creator
- Xie, Luping
- Date
- 2012-04-24, 2012-05
- Description
- Insect flight muscle (IFM) from Lethocerus indicus is an asynchronous muscle which can keep oscillating after a neural stimulation as long as the load is mechanically resonant. It also has a high degree of structural order. These characteristics make it an ideal material for studying the structure of IFM in vitro. In this research, the structure of IFM from Lethocerus indicus was studied using X-ray diffraction. Multiple isomorphous replacement (MIR), which uses heavy atoms to alter the scattering from biological macromolecules, was used in an attempt to solve the well-known phase problem of crystallography; MIR is less commonly used in non-crystalline systems. Here we showed that, by labeling with two heavy-atom compounds, potassium tetrachloroaurate(III) (KAuCl4) and p-chloromercuribenzoic acid (PCMB), the diffraction patterns from IFM samples changed, in particular the intensities of reflections on the meridian. The positions and intensities of every layer line on the meridian before and after labeling were compared, and the best labeling conditions for the two heavy atoms were discussed. These results indicate that, with further development, this approach may be a feasible way of determining the electron density in this material.
M.S. in Biology, May 2012
- Title
- DISTRIBUTED NOSQL STORAGE FOR EXTREME-SCALE SYSTEM SERVICES IN CLOUDS AND SUPERCOMPUTERS
- Creator
- Li, Tonglin
- Date
- 2015, 2015-12
- Description
- As supercomputers gain more parallelism at exponential rates, storage infrastructure performance is increasing at a significantly lower rate due to relatively centralized management. This implies that data management and the data flow between storage and compute resources are becoming the new bottleneck for large-scale applications. Similarly, cloud-based distributed systems introduce other challenges stemming from the dynamic nature of cloud applications. This dissertation addresses several challenges of storage systems at extreme scales for supercomputers and clouds by designing and implementing a zero-hop distributed NoSQL storage system (ZHT), which has been tuned for the requirements of high-end computing systems. ZHT aims to be a building block for scalable distributed systems. Its goals are to deliver high availability, good fault tolerance, light-weight design, persistence, dynamic joins and leaves, high throughput, and low latencies at extreme scales (millions of nodes). We have evaluated ZHT's performance on a variety of systems, ranging from a 64-node Linux cluster and an Amazon EC2 virtual cluster of up to 96 nodes to an IBM Blue Gene/P supercomputer with 8K nodes. This work also presents several real systems that have adopted ZHT as well as other NoSQL systems, namely ZHT/Q, FusionFS, IStore, MATRIX, Slurm++, Fabriq, FREIDAState, and WaggleDB; all of these systems have been significantly simplified by NoSQL storage systems and have been shown to outperform other leading systems, in some cases by orders of magnitude. Through this work, we have shown how NoSQL storage systems can improve both performance and scalability at large scales in such a variety of environments.
Ph.D. in Computer Science, December 2015
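The "zero-hop" idea can be sketched as follows: every client hashes a key straight to its home node, so a get or put needs no multi-hop routing. This toy assumes a static membership list and is not ZHT's actual hashing, replication, or membership protocol.

```python
import hashlib

def home_node(key, nodes):
    """Zero-hop lookup sketch: hash the key directly to one member node.

    Every client holds the same membership list, so the request can go
    straight to the owning node with no routing hops. (A real system such
    as ZHT also handles replication, persistence, and dynamic membership.)
    """
    digest = hashlib.sha1(key.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

nodes = [f"node-{i:03d}" for i in range(96)]      # e.g., a 96-node virtual cluster
print(home_node("job/1234/stdout", nodes))         # same answer on every client
```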
- Title
- INVESTIGATION OF OXYGEN GENERATION DURING THE OPERATION OF LITHIUM-ION CELLS USING IN-SITU FLUORESCENCE SPECTROSCOPY
- Creator
- Li, Mo
- Date
- 2016, 2016-07
- Description
- An ex-situ fluorescence spectroscopy system was set up and used to study the interaction of fluorescent dyes with an oxygen quencher. The Stern-Volmer relationship was obtained and fitted to correlate the partial pressure of oxygen with the dye fluorescence intensity. The oxygen quenching constant α for 30 μM 9,10-dimethylanthracene (DMA) dissolved in a mixture of ethylene carbonate (EC) and dimethyl carbonate (DMC) (1:1 volume ratio) was 0.69/0.62 at high/low partial pressure of oxygen. Operation of self-made pouch cells with LiCoO2 as the cathode material was examined by charging/discharging at C/10. The discharge capacities were 107 and 104 mAh/g for the pouch cell with and without the optical probe, which indicates that the optical probe did not significantly affect the performance and capacity of the cell. The optical probe was inserted into the pouch cell to measure the fluorescence intensity of the dye dissolved in the electrolyte. Time-series experiments before charging demonstrated that the fluorescence intensity was stable for at least 24 hours. However, the fluorescence intensity decreased abruptly as the voltage of the pouch cell increased during the initial stages of charging, which means that the dye (DMA) could not be employed to detect the oxygen generated in the cell. Both real-time fluorescence spectroscopy and cyclic voltammetry showed that this dye was not suitable for the in-situ fluorescence tests. The electrochemical stability at room temperature of different dyes, such as anthracene, palladium(II) meso-tetrakis(pentafluorophenyl)porphyrin (PTTFPP), and platinum octaethylporphyrin (PtOEP), was examined in the organic solvents used in the electrolytes of Li-ion cells. Cyclic voltammograms of anthracene and PTTFPP showed oxidation peaks at 2 V and reduction peaks at around 1 V, with the possible formation of the radical anion causing spectral changes. The compound 1-hexyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide (HMIM BTI) was electrochemically stable, but its fluorescence intensity was too low (5% of the dye DMA) to be used for the in-situ detection of oxygen. As a result, more work must be performed in the future to find a suitable dye. Keywords: fluorescence spectroscopy, in-situ Li-ion cell operation, oxygen quencher, the Stern-Volmer relationship
M.S. in Chemical Engineering, July 2016
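The Stern-Volmer relationship referred to above has the standard linear form shown below (a generic textbook statement; in the thesis the quencher term is the oxygen partial pressure and the slope is the reported constant α):

```latex
% Stern-Volmer quenching: I_0 is the fluorescence intensity without quencher, I with it.
% With oxygen as the quencher, [Q] is replaced by the O2 partial pressure, and the slope
% is the quenching constant (alpha in the abstract above).
\frac{I_0}{I} = 1 + K_{\mathrm{SV}}\,[Q]
\qquad\Longrightarrow\qquad
\frac{I_0}{I} = 1 + \alpha\, p_{\mathrm{O_2}}
```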
- Title
- License Plate Recognition in Complex Scenes
- Creator
- Wazalwar, Dhawal S.
- Date
- 2011-12-07, 2011-12
- Description
- License plate recognition is considered to be one of the fastest growing technologies in the field of surveillance and control. In this project, we present a new design flow for robust license plate localization and recognition. The algorithm consists of three stages: i) license plate localization, ii) character segmentation, and iii) feature extraction and character recognition. The algorithm uses the Mexican hat operator for edge detection and the Euler number of a binary image for identifying the license plate region. A pre-processing step using a median filter and contrast enhancement is employed to improve character segmentation performance for low-resolution and blurred images. A unique feature vector comprising region properties, projection data, and a reflection symmetry coefficient is proposed. A back-propagation artificial neural network classifier is used to train and test the network on the extracted features. Thorough testing of the algorithm is performed on a database with varying test cases in terms of illumination and plate condition. The results are encouraging, with success rates of 98.10% for license plate localization and 97.05% for character recognition.
M.S. in Electrical Engineering, December 2011
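A minimal OpenCV sketch of the pre-processing and edge-detection front end named in the abstract (median filter, then a Laplacian-of-Gaussian, i.e. Mexican-hat, edge map) is shown below; the kernel sizes are placeholders, and the Euler-number plate test and the neural-network classifier are not reproduced here.

```python
import cv2

def plate_edge_map(image_path):
    """Median filter followed by a Laplacian-of-Gaussian (Mexican hat) edge map.

    A rough analogue of the localization front end described in the abstract;
    kernel sizes and the final scaling are arbitrary placeholder choices.
    """
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    denoised = cv2.medianBlur(gray, 5)                 # suppress noise and blur artifacts
    smoothed = cv2.GaussianBlur(denoised, (5, 5), 0)   # Gaussian part of the LoG
    edges = cv2.Laplacian(smoothed, cv2.CV_16S, ksize=3)
    return cv2.convertScaleAbs(edges)                  # back to 8-bit for thresholding

# Hypothetical usage:
# edge = plate_edge_map("car.jpg")
# cv2.imwrite("edges.png", edge)
```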
- Title
- ACOUSTIC BEAMFORMING USING PHASED MICROPHONE ARRAY: A NOVEL METHOD TO DETECT AIR INFILTRATION
- Creator
- Prakash, Manisha
- Date
- 2013, 2013-12
- Description
- The major issue with air infiltration in buildings is the soaring energy consumption. According to the US Department of Energy, in 2003 it was estimated that 48% of space-conditioning energy consumption was due to air infiltration. This is responsible for 9.19 quadrillion Btu (quads) of energy, which costs half a billion dollars annually. If just the HVAC energy consumption due to infiltration can be taken care of, then 1.2 quads of energy can be saved, i.e., around 60 million dollars per year. The present state-of-the-art technique to measure air leakage rates is the blower door method; to locate air leakage spots in buildings, the tracer gas / smoke stick technique is used. The major drawbacks of these techniques are that the blower door method requires building evacuation and becomes impractical for large spaces, and the tracer gas technique uses chemicals that can be hazardous. An innovative remote air leakage detection technique, namely beamforming using a phased array, is proposed in this thesis. It is fast, compact, and economical compared to methods like the blower door and tracer gas, which are both time consuming and resource intensive. It is important to validate our approach with preliminary experiments and to benchmark these against "real building" experimental data. We perform tests to compare the present state-of-the-art technique, i.e., the blower door test, with our proposed remote detection method of using an acoustic phased array to detect air infiltration. To resolve sources at low frequency and obtain better beamform maps, subwoofers are used. A detailed description of the technique, along with details of the experimental setup and a discussion of results, is presented in this thesis.
M.S. in Mechanical and Aerospace Engineering, December 2013
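Delay-and-sum is the textbook form of phased-array beamforming; the sketch below steers a linear microphone array toward a candidate angle so that sound from that direction adds coherently, which is the conceptual core of building a beamform map. The geometry, sampling rate, and scan grid are placeholders, not the thesis's actual array or setup.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, angle_deg, fs, c=343.0):
    """Time-domain delay-and-sum beamformer for a linear microphone array.

    signals       : (M, N) array, one row of samples per microphone
    mic_positions : (M,) microphone x-coordinates in meters
    angle_deg     : steering angle from broadside, in degrees
    fs            : sampling rate in Hz
    c             : speed of sound in m/s
    """
    M, N = signals.shape
    # Plane-wave arrival delay of each microphone relative to the array origin.
    delays = mic_positions * np.sin(np.radians(angle_deg)) / c
    shifts = np.round(delays * fs).astype(int)       # nearest-sample delays
    out = np.zeros(N)
    for m in range(M):
        out += np.roll(signals[m], -shifts[m])        # align each channel, then sum
    return out / M

# Example: 8-microphone array with 5 cm spacing, scanning angles for the
# direction that maximizes output power (a crude one-axis "leak map").
fs = 48_000
mics = np.arange(8) * 0.05
recordings = np.random.randn(8, fs)                   # stand-in for measured signals
powers = {a: np.mean(delay_and_sum(recordings, mics, a, fs) ** 2)
          for a in range(-60, 61, 10)}
```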