Search results
(1,101 - 1,120 of 2,990)
- Title
- DESIGN AND OPTIMIZATION OF NEXT GENERATION WIRELESS NETWORKS
- Creator
- Shila, Devu Manikantan
- Date
- 2011-04-10, 2011-05
- Description
Multi-hop wireless networks, a novel communication paradigm, have recently emerged as a promising and cost-effective architecture to meet the ever-growing demands and expectations of users. In this class of networks, a collection of wireless nodes dynamically establish and maintain connectivity among themselves, thus enabling users and nodes to seamlessly internetwork in areas with little or no communication infrastructure. Due to their self-organizing and self-configuring nature, these networks are a suitable choice for a variety of applications ranging from broadband home networking and intelligent transport systems (ITS) to smart grid networking. In spite of these advantages, however, research has shown that when nodes are randomly or arbitrarily placed in a two-dimensional region, the amount of information that can be transmitted by each source-destination pair in a multi-hop fashion becomes vanishingly small as the number of nodes grows large. Although several solutions have been designed and developed in the past to improve the efficiency of protocols for multi-hop wireless networks, the overall information-carrying capacity of these networks is still a critical issue in meeting increasing user requirements. Motivated by this issue, in this dissertation we are concerned with the problem of optimizing the capacity of multi-hop wireless networks. First, we propose to use a combination of cooperative communications and multiple channels, which together have great potential to evade various issues that limit the capacity of wireless networks. Further, using the insights of the proposed approach, we design a channel allocation protocol at the MAC layer for wireless networks employing cooperative communications. We also construct an analytical model to optimize the parameters used in the MAC protocol design. Second, we study the performance improvement obtained by coupling a multi-hop wireless network with the coverage and capacity of infrastructure networks, a combination referred to as hybrid wireless networks. In doing so, we point out severe flaws in existing research efforts and design a simple and practical power-aware routing policy for hybrid wireless networks that can adapt to the operating environment. In comparison to existing works, we clearly show the gain in delay as well as capacity that one can obtain by executing our design. Lastly, we propose to use the transmission power of nodes to increase the amount of information sent across each wireless link. While prior solutions rely on minimum transmission power to improve spatial reuse or the lifetime of nodes, we look at the power problem from a different perspective and show that one can obtain a significant gain in capacity by judiciously enhancing the power in a multi-channel multi-hop wireless network. To prove this interesting result, we introduce the novel concept of a co-channel enlarging effect and then quantify the maximum power at which nodes can communicate on a given channel without causing harmful interference to other simultaneously communicating pairs. We conclude this dissertation by identifying open issues that need further investigation.
Ph.D. in Computer Engineering, May 2011
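The vanishing per-node throughput this abstract alludes to is the classical capacity scaling result of Gupta and Kumar for random multi-hop networks; a reference sketch of that known result (not the dissertation's own derivation) is:

```latex
% Gupta-Kumar (2000): with n randomly placed nodes, each capable of
% transmitting W bits/s, per-node throughput under multi-hop relaying
% scales as
\lambda(n) = \Theta\!\left(\frac{W}{\sqrt{n \log n}}\right),
% which vanishes as n \to \infty, motivating the capacity-enhancing
% designs proposed in the dissertation.
```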
- Title
- THE IMPACT OF WORK UNIT LEVEL PERCEPTIONS OF CAREER DEVELOPMENT OPPORTUNITIES, COWORKER SUPPORT, ROLE CLARITY, AND WORKLOAD ON EMPLOYEE ENGAGEMENT
- Creator
- Cama, Mike
- Date
- 2018, 2018-05
- Description
The Job Demands-Resources (JD-R) model has been used to explain influencing factors of engagement and burnout, with resources traditionally having a relationship with engagement, and demands having a relationship with burnout. Recent research has suggested that demands may serve as moderators in the resources-engagement relationship depending on whether the demands are perceived as challenges or hindrances. Additionally, most engagement research that focuses on antecedents of engagement is conducted at the individual level, yet the data are often aggregated at a higher level (e.g., business unit) when consequences of engagement (such as financial metrics) are considered. Furthermore, managers often receive aggregated scores for their work units and almost never receive individual-level data, to protect confidentiality and encourage honest responses. Therefore, this study seeks to investigate how job resources (career development opportunities, role clarity, and coworker support) aggregated at the work unit level impact engagement, and whether the work unit level perception of workload moderates this relationship. Finally, it is expected that engagement will also have a strong positive relationship with intent to stay at the work unit level.
Ph.D. in Psychology, May 2018
- Title
- GROUP-LEVEL META-ANALYSES: AN EXAMINATION OF THE EFFECTS OF CHARACTERISTICS OF GROUP-LEVEL STUDIES ON THE ACCURACY OF PARAMETER ESTIMATES
- Creator
- Burke, Maura Irene
- Date
- 2018, 2018-05
- Description
This dissertation was an empirical investigation of how statistical artifacts and characteristics of group-level studies affect meta-analytic parameter estimates in group-level meta-analyses. Simulation procedures were employed to examine how the proportion of available group-level reliability information, the number of studies in a meta-analysis, and the type of group-level reliability estimate affect the accuracy of estimates of the mean and variance of rho when these population values are known. Archival data were used to identify known population parameter values and create group-level meta-analytic conditions commonly seen in the organizational sciences literature. This study resulted in the following conclusions. When the proportion of available sample-based reliability information is reduced, meta-analyses remain relatively accurate in estimating the magnitude of mean rho. As more studies enter meta-analyses, standard errors of mean rho are substantially reduced and confidence bands become increasingly smaller in width, and this pattern of results holds regardless of the group-level reliability estimate used to individually correct correlations. Further, when meta-analyses involved the use of completely assumed values, the degree of accuracy in mirroring known population parameters depended on how closely the assumed group-level reliability value approximated that of the population. Finally, both ICC(2) values and rCG group-based reliability estimates produced relatively accurate meta-analytic findings relative to their respective known population parameter values. Advantages and limitations of each type of reliability estimate are discussed in detail in the manuscript.
Ph.D. in Psychology, May 2018
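For orientation, individually correcting correlations for unreliability, as this abstract describes, typically follows the standard psychometric meta-analysis procedure; below is a minimal Python sketch under that Hunter-Schmidt-style assumption (variable names and toy numbers are illustrative, not from the dissertation):

```python
import numpy as np

def correct_correlation(r_xy, rel_x, rel_y):
    """Disattenuate an observed correlation for unreliability in both
    measures (classical correction for attenuation)."""
    return r_xy / np.sqrt(rel_x * rel_y)

def meta_analyze(rs, ns, rel_x, rel_y):
    """Sample-size-weighted mean and variance of corrected correlations
    across k studies."""
    rc = np.array([correct_correlation(r, rx, ry)
                   for r, rx, ry in zip(rs, rel_x, rel_y)])
    w = np.asarray(ns, dtype=float)
    mean_rho = np.average(rc, weights=w)
    var_rho = np.average((rc - mean_rho) ** 2, weights=w)
    return mean_rho, var_rho

# Illustrative toy input: 3 studies with group-level reliabilities
print(meta_analyze(rs=[0.30, 0.25, 0.40], ns=[50, 80, 40],
                   rel_x=[0.80, 0.75, 0.85], rel_y=[0.70, 0.72, 0.78]))
```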
- Title
- INDIVIDUAL-BASED RISK MODELS FOR CRIME PREVENTION AND MEDICAL PROGNOSIS
- Creator
- Haro Alonso, David
- Date
- 2018, 2018-05
- Description
Parallel trends are currently taking place in the fields of crime and medicine, in which the focus is shifting from a reactive stance to a proactive one. Both fields have traditionally been reactive, with police responding to 911 calls after a crime has occurred, and patients seeking medical care after symptoms have already appeared. In the field of crime, social-services programs, law-enforcement agencies, sociologists, and criminologists are studying ways to prevent crime, instead of merely reacting to it. A similar trend, known as preventive medicine, is concerned with addressing the causes of disease and not just focusing on treatment of disease that has already emerged. If crime and disease are to be prevented, it is important to understand the early warning signs of risk, to anticipate and treat problems before they occur. This can be accomplished via mathematical risk models that evaluate an individual's risk based on leading indicators. In this thesis I develop such models for two real-world problems in crime prevention and one in preventive medicine. A major focus of this thesis is the accuracy of the ranking of risk in situations where the allocation of resources must be prioritized to the highest-risk individuals. This is especially true in a social-services program designed to reduce crime, where the number of available social workers may be limited. In the first part of the thesis, I describe a novel method of risk modeling based on the probabilistic framework of a conditional random field, in which a machine-learning regressor is embedded. This is applicable in situations where an individual's risk of an adverse outcome is partly dependent on the risk levels of others. We have applied this technique to develop a model that assesses an individual's near-term risk of becoming a victim or arrestee in a shooting or homicide in Chicago. The model was developed as an informational tool for a pilot crime-prevention program that aims to offer social services to at-risk persons, providing opportunities for life changes that may reduce their crime risk. In the second part of the thesis, I describe a new model with a similar goal, namely to identify individuals at risk of involvement in crime, but one that aims to provide information for use in smaller cities that have a more typical array of crime concerns than Chicago. We developed the model as part of a current partnership with the Elgin Police Department, where a social-services intervention program under development will incorporate our model in identifying persons who might benefit from assistance. In the last part of the thesis, I describe a risk assessment algorithm for the medical field, which we developed in partnership with Cedars-Sinai Medical Center, Los Angeles, CA. In this work, we sought to demonstrate to the cardiology field (and the broader medical field) that machine learning can provide a better framework for risk stratification in medicine than traditional statistical methods such as logistic regression, which are the norm in that field. We also showed that, contrary to concerns by medical practitioners, machine learning can provide a solution that is easy to interpret.
Ph.D. in Electrical Engineering, May 2018
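Since the thesis emphasizes accuracy of risk ranking when limited resources go to the highest-risk individuals, one common way to quantify that emphasis is precision at the top of the ranked list; a minimal illustrative sketch (the metric choice here is an editorial example, not necessarily the thesis's own evaluation):

```python
import numpy as np

def precision_at_k(risk_scores, outcomes, k):
    """Fraction of the k highest-risk individuals who actually
    experienced the adverse outcome (1 = outcome occurred)."""
    order = np.argsort(risk_scores)[::-1]        # highest risk first
    top_k = np.asarray(outcomes)[order[:k]]
    return top_k.mean()

# Toy example: 6 individuals, scores from some risk model
scores = [0.9, 0.1, 0.7, 0.4, 0.8, 0.2]
outcomes = [1, 0, 1, 0, 0, 0]
print(precision_at_k(scores, outcomes, k=3))     # 2 of top 3 -> 0.667
```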
- Title
- OPTIMAL SCHEDULING OF ELECTRIC VEHICLE'S CHARGING/DISCHARGING
- Creator
- Guo, Dalong
- Date
- 2018, 2018-05
- Description
The advent of Electric Vehicles (EVs) demonstrates the effort and determination of humans to protect the environment. However, as the number of EVs increases, charging them consumes a large amount of energy, which may put more pressure on the grid. On the other hand, the smart grid enables two-way energy flow, which gives EVs the potential to serve as a distributed storage system that may help mitigate the fluctuations brought by Renewable Energy Sources (RES) and reinforce the stability of power systems. Therefore, establishing an efficient management mechanism to properly schedule EV charging/discharging behavior becomes imperative. In this thesis, we consider that EVs have one charging mode, Grid-to-Vehicle (G2V), and two discharging modes, Vehicle-to-Grid (V2G) and Vehicle-to-Home (V2H). In V2G, EVs send their surplus power back to the grid, while in V2H, EVs supply power to appliances in a house. We aim to design optimal algorithms to schedule the EV's operations. We first consider an individual residential household with a single EV, where the EV can operate in all three modes. When the EV works in G2V mode, the owner pays the utility company based on the real-time price (RTP). When the EV works in V2G mode, the owner earns a reward based on the market price from the utility company. In V2H, the owner uses the EV battery to provide power to appliances in the house rather than purchasing from the utility. We propose a linear optimization algorithm to schedule the EV's operations based on the RTP and market price subject to a set of constraints. The objective is to minimize the total cost. The results show that, in general, the EV chooses G2V when the RTP is low, responding to demand response. When the RTP is high, the EV tends to work in V2H mode to avoid buying from the utility. When the market price is high, the EV performs V2G to obtain more revenue. Noting that it is not practical for a single EV to perform V2G, we further consider a different scenario in which a group of EVs is aggregated and managed by an aggregator. One example is a parking lot for an enterprise. Initially only V2G is considered; that is, EVs work as energy supplies and the aggregator collects the energy from all connected EVs and then transfers the aggregated energy to the grid. Each EV needs to decide how much energy to discharge to the aggregator depending on its battery capacity, remaining energy level, etc. To facilitate the energy collection process, we model it as a virtual energy "trading" process using a hierarchical Stackelberg game approach. We define the utility functions for the aggregator and the EVs. To start the game, the aggregator (leader) announces a set of purchasing prices to the EVs; each EV determines how much energy to sell to the aggregator by maximizing its utility based on the announced price and sends that number to the aggregator. The aggregator then adjusts the purchasing prices by maximizing its utility based on the optimal energy values collected from the EVs, and the game process repeats until it converges to an equilibrium point, where the prices and the amounts of energy become fixed values. The proposed game is an uncoordinated game. We also consider power losses during energy transmission and battery degradation caused by additional charging-discharging cycles. Simulation results show the effectiveness and robustness of our game approach. Finally, we extend the game to include G2V as well for the aggregated EV group scenario. That is, EVs may charge their batteries according to the RTP so that they can sell more to the aggregator to increase their profit when the purchasing price from the aggregator is attractive. We propose an SG-DR algorithm to combine the game model for V2G and demand response (DR) for G2V. Specifically, we adjust the utility function for the EVs and then update the constraints of the game to include the DR. Subject to the duration of the parking period, we solve this optimization problem using our combined SG-DR algorithm and generate the EVs' corresponding hourly charging/discharging patterns. Results show that our algorithm can increase EVs' utility by up to 50% compared with the pure game model. In conclusion, we summarize our work under the different scenarios, analyze the potential risks, and discuss future trends in EV development in the smart grid.
Ph.D. in Electrical Engineering, May 2018
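The leader-follower price iteration the abstract describes can be rendered as a simple best-response loop; the toy sketch below uses made-up quadratic EV utilities and a gradient-style price update (the actual utility functions, constraints, and convergence argument are defined in the thesis), so treat it as an illustration of the Stackelberg structure, not the thesis's implementation:

```python
import numpy as np

def follower_energy(price, capacity, cost_coef):
    """Each EV sells the energy maximizing its quadratic utility
    u(e) = price*e - cost_coef*e**2, clipped to its battery limit."""
    return np.clip(price / (2 * cost_coef), 0, capacity)

def stackelberg(demand=30.0, capacities=(10, 12, 8), cost=0.5,
                lr=0.01, tol=1e-6, max_iter=10_000):
    price = 1.0
    for _ in range(max_iter):
        # Followers best-respond to the announced purchasing price
        supply = sum(follower_energy(price, c, cost) for c in capacities)
        # Leader (aggregator) adjusts the price toward its target demand
        new_price = price + lr * (demand - supply)
        if abs(new_price - price) < tol:   # equilibrium: values fixed
            break
        price = new_price
    return price, supply

price, supply = stackelberg()
print(f"equilibrium price {price:.3f}, aggregated energy {supply:.2f}")
```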
- Title
- RISE OF A SINGLE BUBBLE IN A VERTICAL TUBE FILLED WITH NANOFLUIDS
- Creator
- Cho, Heon Ki
- Date
- 2018, 2018-05
- Description
The motion of air bubbles in tubes filled with nanofluids is of practical interest. This study therefore focuses on the dynamics of air bubbles rising in tubes filled with nanofluids. Many authors have experimentally and analytically characterized the velocity of air bubbles rising in vertical tubes in common liquids when the Capillary number is large. We report here a systematic study of an air bubble rising in a vertical tube filled with nanofluids when the Capillary number is small. The presence of the nanoparticles creates a significant change in the bubble velocity compared with a bubble rising in common liquids. We observed a novel phenomenon of step-wise decreases in the bubble rising velocity vs. bubble length at small Capillary number. This step-wise velocity behavior is attributed to the nanoparticle self-layering phenomenon in the film adjacent to the tube wall. The effects of the nanoparticle volume fraction and the tube diameter are investigated. We also measured the film thickness and calculated the film structural energy isotherm vs. film thickness from the film meniscus contact angle measurement using the reflected-light interferometric method. Based on the experimental measurement of the film thickness and the calculated values of the film structural energy barrier, we estimated the structural film viscosity vs. the number of nanoparticle/micelle layers. Due to the nanoparticle film self-layering phenomenon, we observed a gradual increase in the film viscosity with decreasing film thickness. However, we found a significant increase in the film viscosity, accompanied by a step-wise decrease in the bubble velocity, when the number of nanoparticle/micelle layers decreased from three to two due to the structural transition in the film. Bretherton analyzed the rise of a single long air bubble at very small Capillary number under the effect of gravity in a vertical tube filled with a common liquid with a thick and stable film. The Bretherton equation, however, cannot accurately predict the rate of rise of a slow-moving long bubble in a vertical tube filled with nanofluids, because it is valid only for very thick films and uses the bulk viscosity of the fluid. We demonstrate that the Bretherton equation can indeed be used to predict the rate of rise of a long single bubble through a vertical tube filled with nanofluids by simply replacing the bulk viscosity with the proper structural nanofilm viscosity of the fluid.
Ph.D. in Chemical Engineering, May 2018
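For context, the classical Bretherton (1961) scaling that this work builds on relates the deposited wall-film thickness to the Capillary number; a reference sketch of the standard forms (not the thesis's modified equation):

```latex
% Capillary number and Bretherton's film-thickness law for a long
% bubble in a tube of radius R at small Ca:
\mathrm{Ca} = \frac{\mu U}{\sigma}, \qquad
\frac{h}{R} \simeq 1.34\,\mathrm{Ca}^{2/3}
% The thesis's correction replaces the bulk viscosity \mu with a
% structural nanofilm viscosity once nanoparticle layering sets in.
```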
- Title
- WETTING OF FUEL CELL MATERIALS BY MOLTEN CARBONATE: OBSERVATION OF SPREADING AND PENETRATION
- Creator
- Gao, Liangjuan
- Date
- 2018, 2018-05
- Description
The molten carbonate fuel cell (MCFC) continues to attract significant attention due to its high performance over a lifetime of three to five years. The wetting of fuel cell materials by the molten carbonate is key to this long-term performance. Therefore, the wetting behavior under MCFC operating conditions was studied by means of the sessile drop method using a digitized optical analysis system. Specifically, the spreading of molten carbonate on dense and porous materials was determined, as well as the penetration into porous materials. Observations were made of the melting and spreading of a solid carbonate pellet, placed on top of the dense or porous substrate, upon controlled temperature increase, under either a reducing atmosphere (80% H2 + 20% CO2, humidified at 45 °C), a pure CO2 atmosphere, or an oxidizing atmosphere (1% O2 + 99% N2). To provide a relatively simple base case, an extensive study of the wetting of dense Ni foil was made. It was demonstrated that the water-gas shift reaction occurred at the interface of the Ni surface and the molten carbonate under the reducing atmosphere but not under the pure CO2 and oxidizing atmospheres. The contact angle was affected by the mass of the carbonate pellet under the reducing atmosphere but not under the pure CO2 atmosphere. The molten carbonate spread rapidly under the oxidizing atmosphere due to surface oxidation of the Ni. The wetting of the porous Ni substrate was influenced by the porosity, the amount of carbonate in relation to the empty pore volume available (expressed as degree-of-filling), and the thickness of the substrate. The spreading of molten carbonate on the surface of the porous substrate, as well as its penetration into the pores of the substrate, was observed, and the rates of these two processes were measured as accurately as possible. A linear velocity averaged over a pore was expressed in terms of the absorption rate. A simple model comprising film formation on the pore walls and bulk pore filling was established. The wetting of dense and porous Ni-Al alloy substrates was also investigated, revealing that the wettability of the Ni-Al substrate was improved by increasing the Al content under both pure CO2 and reducing atmospheres. The absorption rate of the porous Ni-Al substrate was significantly larger than that of a porous Ni substrate of comparable porosity. The absorption rate slowed significantly only when the volume of molten carbonate exceeded 1.3 times the volume of empty pores inside the substrate. It was demonstrated that the mechanical strength of α-LiAlO2 matrices is improved by heat-treating at 800 °C under an ambient gas atmosphere. The non-heat-treated and heat-treated samples were completely wetted by molten carbonate and exhibited the same wetting behavior. A non-heat-treated α-LiAlO2 sample cracked during the wetting investigation; the heat-treated α-LiAlO2 matrices, however, did not crack, presumably due to their enhanced mechanical strength.
Ph.D. in Materials Science and Engineering, May 2018
- Title
- CHARACTERIZING GPS PHASE LOCK LOOP PERFORMANCE IN WIDEBAND INTERFERENCE USING THE DISCRIMINATOR OUTPUT DISTRIBUTION
- Creator
- Stevanovic, Stefan
- Date
- 2018, 2018-05
- Description
The use of the Global Positioning System (GPS) has accelerated in recent years. At its inception, GPS was used exclusively by the military for navigation. Today, with the emergence of extremely capable electronics and microprocessors, GPS has been integrated into many aspects of life. It is currently widely used by both the military and various civilian industries for applications that require navigation as well as precise timing. Some applications of GPS include ground vehicle and aircraft navigation, banking, power transmission, and agriculture. As a result, disruptions in GPS availability have the potential to disrupt many services and industries around the globe, and even threaten the safety of life. Reliable operation can be interrupted by radio frequency interference (RFI), which can come from natural and manufactured sources. This work describes new techniques to evaluate the performance of GPS receivers that may be subjected to RFI events. The example application motivating this work is Ground Based Augmentation System (GBAS) reference station receivers subjected to broadband interference, for example, from nearby use of personal privacy devices (PPDs). PPDs most commonly emit broadband interference, and GBAS ground-based reference receivers have experienced tracking discontinuities as a result [Pul12]. These events can cause navigation service interruptions to aircraft on final approach. To ensure continuity of the navigation service, GBAS reference stations must be able to track GPS signals in the presence of wideband interference. The objective of this work is to develop the PLL analysis tools required to design PLLs capable of tracking through RFI events, while reducing the need for time-consuming simulations and experimental validation. Instead, simulation and experimental validation can be reserved for PLL designs that are much more likely to be successful. The techniques described in this work are valid for any GPS application in which the receiver cannot tolerate cycle slips in the phase-lock loop (PLL). The methodology is directly applicable to ground-based reference receivers for differential GPS systems, as well as other ground-based receivers that require high continuity of service. It is also relevant to moving receivers, if the additional dynamic stresses on the PLL are taken into account. The PLL discriminator output (DO) distribution is used to characterize GPS PLL tracking performance, in contrast to the phase jitter metric widely used in prior work and literature. Both the DO variance and the bias on the mean of the DO distribution are shown to be superior to the jitter metric in predicting phase-lock, and the bias in the DO mean is shown to be the most effective measure of cycle slip probability. Studying the discriminator output distribution also provides a means of comparing different techniques for extending PLL averaging time beyond the length of a navigation data bit, without time-consuming direct simulation and experimental validation. Experimental results are presented to validate the theoretical analysis and simulations. The observed tracking results are consistent with the theoretically predicted system performance. The DO bias is superior to the variance metric in its ability to predict loss of phase-lock.
Ph.D. in Mechanical and Aerospace Engineering, May 2018
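The two discriminator-output statistics the abstract compares against phase jitter can be computed directly from tracking-loop samples; a minimal illustrative sketch with simulated discriminator outputs (the noise model, numbers, and pull-in limit assumed here are editorial, not the author's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate discriminator outputs as a residual phase bias plus
# interference-driven noise (illustrative values only).
true_phase_error = 0.15      # rad, residual tracking bias
noise_sigma = 0.4            # rad, wideband-interference jitter
do_samples = true_phase_error + noise_sigma * rng.standard_normal(10_000)

do_mean_bias = do_samples.mean()   # bias on the DO mean
do_variance = do_samples.var()     # DO variance

# A cycle slip becomes likely when samples approach the discriminator
# pull-in limit (pi/2 assumed here for an atan-type Costas discriminator).
slip_fraction = np.mean(np.abs(do_samples) > np.pi / 2)
print(do_mean_bias, do_variance, slip_fraction)
```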
- Title
- CASE-ADAPTIVE PROCESSING FOR IMPROVING ACCURACY IN COMPUTER-AIDED DIAGNOSIS OF BREAST CANCER
- Creator
- Sainz De Cea, Maria Victoria
- Date
- 2018, 2018-05
- Description
Breast cancer is the most commonly diagnosed cancer among women (apart from skin cancer) in the US. If detected early, the five-year survival rate is 99%. Because of this, early detection of breast cancer has been an extensively studied topic over the years, and screening mammography is the gold standard for this purpose. Microcalcifications (MCs) are tiny calcium deposits that appear as bright spots in mammogram images, and they can be an early sign of breast cancer in asymptomatic women. Computer-aided diagnosis (CAD) tools can be used to assist radiologists in detecting MCs and classifying them as benign or malignant. CAD of breast cancer is often hampered by the presence of false positives (FPs) among the detected MCs when a reasonable sensitivity level is achieved. The FPs can be caused by MC-like noise, linear structures, etc. Due to the wide range of factors causing FPs, there is great inter-patient variability, which can degrade the performance of CAD systems. In this work, we aim to reduce the inter-patient variability of CAD systems in order to improve performance in both MC detection (computer-aided detection, or CADe) and classification of MC clusters (computer-aided diagnosis, or CADx). The first part of this thesis focuses on MC detection. We first develop a framework for estimating the accuracy of detection of individual MCs within a lesion region. This framework is general and can be applied to any MC detector. The number of FP detections can vary greatly from patient to patient, so this knowledge is useful for making decisions in both CADe and CADx systems. Second, we present a case-adaptive method for CADe based on Bayes risk, in which a distribution is fit to the FPs from the mammogram under consideration, based on which the optimal detection threshold is determined for each patient. Finally, we present an outlier approach for detection of individual MCs in a lesion region. This approach is based on the fact that individual MCs are usually different from the FPs (brighter, larger in extent), so they can be detected as statistical outliers. The outlier detection is done on a case-by-case basis, which can yield not only a reduction in the number of FPs but also an increase in the uniformity of the detection accuracy among different cases. The second part of the thesis is focused on CADx. We apply the methods developed in the first part to improve the uniformity and performance of the classification of detected lesions as benign or malignant. For this purpose we first present a quality-factor approach for adjusting the contribution of the detected individual MCs to the final feature set. Detections with a higher quality factor have more impact on the final features, thereby mitigating the effect of the FP detections. Finally, we use the estimated detection accuracy to determine the optimal detection operating threshold. This is shown to boost CADx performance.
Ph.D. in Electrical Engineering, May 2018
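A bare-bones rendering of the case-adaptive idea (fit the FP score distribution of each mammogram, then pick a per-case threshold): the Gaussian FP model and target FP rate below are illustrative assumptions, not the thesis's Bayes-risk formulation:

```python
import numpy as np
from scipy import stats

def case_adaptive_threshold(fp_scores, target_fp_rate=0.01):
    """Fit a normal distribution to the false-positive detector scores
    of one mammogram and return the score threshold that keeps the
    expected FP rate at target_fp_rate for that case."""
    mu, sigma = stats.norm.fit(fp_scores)
    return stats.norm.ppf(1 - target_fp_rate, loc=mu, scale=sigma)

# Two cases with different FP score distributions get different
# thresholds instead of one global operating point.
case_a = np.random.default_rng(1).normal(0.2, 0.05, 500)
case_b = np.random.default_rng(2).normal(0.4, 0.10, 500)
print(case_adaptive_threshold(case_a), case_adaptive_threshold(case_b))
```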
- Title
- INCORPORATION OF NATURAL SENSORY FEEDBACK TO BRAIN MACHINE INTERFACE WITH HAND EXOSKELETON
- Creator
- Qian, Kai
- Date
- 2018, 2018-05
- Description
The feasibility of a brain-machine interface (BMI) system that links the brain to an external device has been demonstrated for both non-human and human subjects. However, BMI-controlled robotic arm movements are usually slow, jerky, and imprecise. Many of these problems can be attributed to a lack of somatosensory (tactile, proprioceptive, etc.) feedback. For a large segment of potential users who have motor impairments but intact sensation, use of an exoskeleton as the external device could provide the natural sensory feedback needed to improve BMI control. This is especially true for the hand, with its incredibly rich sensory innervation. Currently, however, there is no hand exoskeleton available for non-human primate BMI research. In this dissertation, a hand exoskeleton platform was developed for an index finger-thumb precision grip task. The system was first employed as a scientific apparatus to explore the sensory responses in primary motor cortex (M1) to sinusoidal inputs of position and force. Neural firing rate patterns were found to be strongly entrained to the sinusoidal stimulus, with a predominance of neurons responding to joint movement rather than fingertip force. The phase-locking patterns to sinusoidal stimuli were also much clearer and more stable for joint movement than for fingertip force. Second, the hand exoskeleton system was validated in a real-time BMI-controlled isometric grip force task. Prompted by cues on a computer screen, the monkey used cortical signals to control the grip force produced by the exoskeleton. The exoskeleton drove either the monkey's own hand (natural sensory feedback condition) or an artificial hand (visual feedback only condition). Although improvements in performance were observed for both conditions over the relatively short training period, it was difficult to differentiate between the efficacy of the two conditions. Interestingly, in both conditions the monkey used a neural strategy for controlling grip force similar to the one used in natural grip behavior, in which a majority of the recorded neurons exhibited a temporal reduction of firing rates during the force production phase. Overall, the hand exoskeleton platform proved to be not only a powerful platform for BMI research but also an important tool for investigating sensory processing. This new platform should facilitate future experiments that will provide further insight into BMI design and the neural mechanisms underlying movement control.
Ph.D. in Biomedical Engineering, May 2018
- Title
- UNEMPLOYMENT AND SUICIDE IN THE FRAMEWORK OF THE INTERPERSONAL THEORY OF SUICIDE
- Creator
- Roubal, Eren A.
- Date
- 2018, 2018-05
- Description
Becoming unemployed is typically considered a risk factor for suicidal ideation (SI) and behavior. This study aimed to examine how unemployment confers risk for suicidal ideation, positing that Perceived Burdensomeness (PB) and Thwarted Belongingness (TB) function as mediators between the length of an individual's unemployment and their level of SI. In terms of the Interpersonal Theory of Suicide, individuals with higher levels of these variables are hypothesized to have an increased desire to be dead. Other issues related to unemployment and suicidal thinking were examined, including whether the preceding variables had a curvilinear relationship to length of unemployment, whether income loss was a predictor of suicidal thinking, and whether veterans of the armed forces experienced higher levels of the preceding variables than non-veterans. PB was found to function as a mediator, but TB did not. There was evidence of a curvilinear relationship, with individuals recently and long-term unemployed reporting lower SI than those unemployed for a moderate duration. Income loss was unrelated to both PB and SI, and veterans were found to exhibit higher PB and SI than non-veterans, but similar levels of TB. These findings begin to shed light on which individuals who lose their job are at greater risk for suicidal thinking; clinical implications for risk assessment are also discussed.
Ph.D. in Psychology, May 2018
- Title
- SYNTHESIS AND EVALUATION OF STABILIZED ALGINATE MICROBEADS
- Creator
- Soma, Sami Isaac
- Date
- 2018, 2018-05
- Description
Alginate hydrogels have been investigated for a broad variety of medical applications. The ability to assemble hydrogels at neutral pH and mild temperatures makes alginate a popular choice for the encapsulation and delivery of cells and proteins. Alginate has been studied extensively for the delivery of islets as a treatment for type 1 diabetes. However, stability of the encapsulation systems after implantation remains a challenge. The broad goal of this work was to develop and investigate methods for enhancing the stability of alginate-based encapsulation systems. First, a method was developed to create dual-crosslinked alginate microbeads. Alginate was modified with 2-aminoethyl methacrylate hydrochloride (AEMA) to introduce groups that can be photoactivated to generate covalent bonds. This enabled formation of a dual-crosslinked structure upon exposure to ultraviolet light following initial ionic crosslinking into bead structures. The degree of methacrylation was varied, and in vitro stability, long-term swelling, and cell viability were examined. At low levels of methacrylation, the beads could be formed by ionic crosslinking followed by exposure to ultraviolet light to generate covalent bonds. The methacrylated alginate resulted in more stable beads, and cells were viable following encapsulation. Alginate microbeads, ionic (unmodified) and dual-crosslinked, were implanted into a rat omentum pouch model. Implantation was performed with a local injection of lipopolysaccharide (LPS) to stimulate a robust inflammatory challenge in vivo. Implants were retrieved at 1 and 3 weeks for analysis. The unmodified alginate microbeads had all failed by week 1, whereas the dual-crosslinked alginate microbeads remained stable through 3 weeks. The modified alginate microbeads may provide a more stable alternative to current alginate-based systems for cell encapsulation. In the next set of studies, multilayered alginate microbeads (Alginate-Poly-L-ornithine-Alginate, APA) were investigated for cell encapsulation. The APA microbeads were generated with a thick outer alginate layer intended to reduce inflammation after implantation. The dual-crosslinking approach was applied to the outer layer of the APA microbeads. Multilayered alginate microbeads with a dual-crosslinked outer layer remained intact in the presence of chelating agents. APA alginate microbeads, ionic (unmodified) and dual-crosslinked, were tested using the omentum pouch model with local injection of LPS. The dual-crosslinked microbeads remained intact up to three weeks without significant change in outer layer size. In conclusion, alginate was modified with methacrylate groups to enhance stability when subjected to an inflammatory challenge. APA microbeads with a methacrylated outer layer hold great potential for cell encapsulation therapies.
Ph.D. in Biomedical Engineering, May 2018
- Title
- A POLYMORPHIC COMPUTING ARCHITECTURE BASED ON A DATAFLOW PROCESSOR CORE
- Creator
- Hentrich, David
- Date
- 2018, 2018-05
- Description
Overall, this work provides an introduction to the subject of polymorphic computing, provides a new innovative polymorphic computer architecture, and studies the architecture's performance with various test programs. The most important innovation of this work is the creation of an instruction set and computer architecture that allows individual instructions in an algorithm to be migrated in a fine-grained manner throughout a fabric of processors without the need for the algorithm to be aware of the underlying computer architecture. Essentially, the algorithms are independent of the underlying processing fabric and can be arbitrarily "draped" over the underlying processor fabric. Logically, this allows the computer architecture of the system to be modified under an algorithm. The intent is to create a system where the underlying computer architecture can be modified to improve the performance of an algorithm during runtime. The specific contributions of this work are:
1. A definition of polymorphic computing,
2. A history of reconfigurable computing (the roots of polymorphic computing),
3. A description of relevant computer architecture concepts,
4. Case studies of current polymorphic computing systems,
5. A new dataflow processor with performance monitoring features at the instruction and microarchitecture levels,
6. A new dataflow instruction set that contributes several advances to the field of dataflow instruction set design,
7. A polymorphic computing architecture based on the dataflow processor that allows programs to be migrated ("draped") across underlying cores in a fine-grained manner (i.e., on an instruction-by-instruction basis),
8. A description of how to write programs for the dataflow processor,
9. A number of programs written in the new instruction set for the dataflow processor/polymorphic computing architecture,
10. A performance evaluation of the ideal performance of the above programs in a single dataflow core,
11. A performance evaluation of a subset of the above programs in several polymorphic compute fabrics that were placed ("draped") in a fine-grained manner using a genetic search algorithm,
12. An iterative, deterministic algorithm for placing ("draping") functions in several polymorphic compute fabrics in a fine-grained manner based on runtime monitoring,
13. A performance evaluation of a subset of the above programs that were placed ("draped") in several polymorphic compute fabrics using the deterministic instruction placement algorithm, and
14. A comparison of the results between the genetic search instruction placement evaluation and the deterministic algorithm instruction placement evaluation.
Ph.D. in Computer Engineering, May 2018
- Title
- MULTI-LAYER AGENT-BASED MODELING FOR BONE TISSUE ENGINEERING
- Creator
- Lu, Chenlin
- Date
- 2018, 2018-05
- Description
Bone tissue engineering (BTE) has emerged over the past few decades as a potential alternative to conventional bone regenerative medicine due to the exceedingly high demand for adequate bone grafts. Regeneration of bone tissue in BTE requires a synergistic combination of biomaterial scaffolds, growth factors, and osteogenic cells. Scaffolds with well-designed architectures and degradation characteristics, provided with appropriate angiogenic and osteogenic factors, are essential for bone tissue regeneration. Taking all of these factors into account simultaneously and optimizing their characteristics is a highly difficult task that cannot be addressed with experimentation alone. Computational models combined with experimental methods provide better understanding of the underlying mechanisms of this complex process. The agent-based modeling (ABM) approach is used here to develop three-dimensional models of vascularization and bone growth. ABM is a powerful modeling and simulation technique that is naturally suitable for complex biological systems, as it simulates the actions and interactions of individual agents in an attempt to re-create and predict the appearance of complex phenomena. In this work, a multi-layered, agent-based computational model is proposed to simulate vascularization and bone tissue regeneration in a porous, biodegradable biomaterial scaffold. This model aims to investigate the interactions between osteogenic cells, signaling molecules, and biomaterial scaffolds in order to enhance scaffold vascularization and bone tissue formation. Our previous works have already investigated the interactions between endothelial cells (ECs) and biodegradable scaffolds, and provided significant insights into the combined effect of scaffold geometrical properties and degradation dynamics on scaffold vascularization. Furthermore, the controlled release of angiogenic growth factors has been studied to investigate its effects on the vascularization process. This work mainly focuses on three aspects: 1) the improvement of the scaffold degradation model; 2) the development of a vascularized bone regeneration agent-based model in Repast High Performance Computing (Repast HPC); and 3) the investigation of an in vitro prevascularization strategy to enhance angiogenesis and overall bone regeneration in BTE applications. The developed model integrates all of these factors and simulates the regeneration of bone tissue in biodegradable scaffolds over time. Simulation results can be used in combination with experimental data to design optimal scaffold constructs for bone tissue engineering. A multi-layer scaffold model is implemented in the degradation ABM. Scaffold vascularization is enhanced by the multi-layer scaffold strategy without losing the necessary mechanical support of the biomaterial scaffold. An integrated vascularized bone tissue regeneration ABM was developed using the Repast HPC platform. The model successfully simulated scaffold vascularization and coupled osteogenic differentiation in a 3D porous scaffold. The study demonstrated that scaffolds with higher porosity and combined angiogenic and osteogenic growth factors resulted in optimal vascularized bone formation. A diffusion ABM was developed to simulate growth factor release in the scaffold; simulation results indicated good agreement between the diffusion ABM and a mathematical model. A prevascularization high-performance ABM was developed to simulate the integrated process of in vitro prevascularization followed by in vivo vascularized bone formation, and to evaluate the potential of the prevascularization strategy to enhance overall scaffold vascularization and bone formation. The results demonstrated that prevascularized scaffolds increase overall defect vascularization and bone formation upon implantation.
Ph.D. in Chemical and Biological Engineering, May 2018
- Title
- COMPREHENSIVE ALWAYS-ON SENSING (CAS) PLATFORM FOR MOBILE CONTEXTUAL AWARENESS
- Creator
- Lautner, Douglas
- Date
- 2018, 2018-05
- Description
From its conception and for decades, the cell phone was used solely for wireless communication purposes [1]. Recently, however, over the nine years since the Android™ and iOS™ operating systems were released to the market, its definition has changed. With increasing capabilities and the growing importance of data generation, data collection, and processing functionalities, the cell phone has evolved into a mobile smart device, e.g., the smartphone. Smart devices are emerging into new roles such as portable computing devices [2], sensor hubs [3], and Internet access terminals [4]. As embedded technologies and systems advance, not only smartphones but all commercial smart devices [5], such as wearables, smartwatches, or head-mounted displays, extend with these capabilities. Among these, the function of contextual sensing is more distinctive on a mobile smart device than on any other commercial computing platform. Mobile smart devices are carried in close proximity to users, traveling with them throughout the day and sensing what the user experiences. Ambient and on-body contexts are shared with the user, and hence the sensing data can reflect an individual's real environment more accurately than any other computing or sensing device. It is likely that most domesticated living beings, i.e., humans, pets, livestock, etc., will be associated with a mobile smart device in the future [6]. As Internet of Things (IoT) wireless capabilities become more cost-effective and are connected to more objects [7][8], pervasive deployment can be realized, and IoT hence becomes an important information source for contextual sensing. More importantly, as IoT wireless items can be equipped with various sensing techniques, such as geofencing data [10] or information acquired from any kind of sensor attached to them, e.g., temperature, force, strain, pressure, etc. [11][12], the sensing result is comprehensive and highly configurable, which is impractical for traditional sensors. The following three challenges are the major causes of limited IoT contextual sensing in smart devices. First, implementing such sensing capability on a traditional smart device platform incurs an intolerably high current drain. Second, prevailing smart device platforms are not able to accommodate all IoT contextual sensors and their requirements. Third, there is no solution for the smart device to schedule sensing tasks from different IoT contextual sensors and pre-process raw sensing data at the system's low layer. To conquer these three problems, the goal of this thesis is to research, design, and implement a novel platform between a smart device's hardware layer and operating system layer to accommodate IoT contextual sensors and conduct always-on sensing tasks.
Ph.D. in Computer Science, May 2018
- Title
- THE PARADOX OF COMMUNICATION TECHNOLOGY IN THE WORK-FAMILY INTERFACE
- Creator
- Ishaya, Nahren M.
- Date
- 2018, 2018-05
- Description
The aim of the present study was to investigate the "double-edged sword" nature of communication technology in its impact on the work-family interface. Communication technology has many wonderful advantages, one of which is the flexibility it provides employees over where and when work is completed. The flip side, though, is that communication technology allows employees to be available and accessible at all times. The purpose of this study was to assess the impact of (1) accessibility and availability through communication technology and (2) flexibility through communication technology on the experience of work-to-family conflict, family-to-work conflict, and work-family balance. The study utilized Conservation of Resources theory and the Job Demand-Control model as its basis for examining how the relationship between work and family demands and work-family interface outcomes is impacted by the two communication technology variables of interest in this study. Qualtrics Panels were used to recruit 405 working adults in the United States across various industries to complete an online survey. To help address single-source bias, employees were asked to invite their spouse/partner to complete a survey assessing the employee's levels of the work-family interface from the spouse/partner perspective. Hierarchical moderated regression analyses were used to test the hypotheses. The results indicated that employees who perceived that communication technology provided greater flexibility in the work and family domains experienced less work-to-family conflict and family-to-work conflict, respectively. Further, employees who perceived greater expectations to be available and accessible to others in their work and family domains experienced greater work-to-family conflict and family-to-work conflict, respectively. Accessibility and availability expectations exacerbated the association between demands and work-to-family conflict in both the work and family domains. Communication technology flexibility was found to buffer the effect of family overload on the experience of family-to-work conflict. The theoretical and practical implications of these findings and potential directions for future research are discussed.
Ph.D. in Psychology, May 2018
- Title
- WIENER-HOPF FACTORIZATION FOR TIME-INHOMOGENEOUS MARKOV CHAINS AND STATISTICAL INFERENCE FOR STOCHASTIC PARTIAL DIFFERENTIAL EQUATIONS
- Creator
- Huang, Yicong
- Date
- 2018, 2018-05
- Description
The thesis consists of two major parts, contributing to two topics in stochastic analysis: Wiener-Hopf factorization (WHf) for Markov chains and statistical inference for Stochastic Partial Differential Equations (SPDEs). The first part deals with Wiener-Hopf factorization for finite-state time-inhomogeneous Markov chains. To the best of our knowledge, this study is the first attempt to investigate the WHf for time-inhomogeneous Markov chains. In this work we only deal with a special class of time-inhomogeneous Markovian generators, namely piecewise constant ones, which allows us to derive the corresponding WHf by using an appropriately tailored randomization technique. Besides the mathematical importance of the WHf methodology, there is also an important computational aspect: it allows for efficient computation of important functionals of Markov chains. In this work, we also provide an efficient algorithm to compute the quantities in the Wiener-Hopf factorization for time-inhomogeneous Markov chains. Finally, we provide a comparison (based on numerical simulations) between our algorithm and brute-force Monte Carlo simulation. The second part is dedicated to statistical inference for SPDEs. First, we study the problem of estimating the drift/viscosity coefficient for a large class of linear, parabolic SPDEs driven by an additive space-time noise. We propose a new class of estimators, called trajectory fitting estimators (TFEs). The estimators are constructed by fitting the observed trajectory with an artificial one, and can be viewed as an analog of the classical least squares estimators from time-series analysis. As in the existing literature on statistical inference for SPDEs, we take a spectral approach and assume that we observe the first N Fourier modes of the solution, and we study the consistency and the asymptotic normality of the TFE as N → ∞. Next we consider a parameter estimation problem for the one-dimensional stochastic heat equation, when the data are sampled discretely in time or in the spatial component. We establish some general results on the derivation of consistent and asymptotically normal estimators based on computation of the p-variations of stochastic processes and their smooth perturbations. We apply these results to the considered SPDEs by using some convenient representations of the solutions. For some equations such representations were readily available, while for other classes of SPDEs we derived the needed representations along with their statistical asymptotic properties. We prove that the real-valued parameter next to the Laplacian, and the positive parameter in front of the noise, can be consistently estimated by observing the solution at a fixed time on a discrete spatial grid, or at a fixed space point at discrete time instances of a finite interval, assuming that the mesh size goes to zero.
Ph.D. in Applied Mathematics, May 2018
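For reference, the p-variation statistics the second part relies on are the standard realized power variations over a discretization grid; a sketch of the usual definition (notation mine, not copied from the thesis):

```latex
% Realized p-variation of a process X over the grid t_i = i/n:
V^{(p)}_n(X) = \sum_{i=1}^{n} \left| X_{t_i} - X_{t_{i-1}} \right|^p
% Estimators for the parameter next to the Laplacian and the noise
% intensity are built from the limits of such sums as the mesh
% size 1/n \to 0.
```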
- Title
- EXPANDING THE HEP FRONTIER WITH BOOSTED B-TAGS AND THE QCD POWER SPECTRUM
- Creator
- Pedersen, Keith
- Date
- 2018, 2018-05
- Description
As particle physics continues to expand into the high-energy and high-luminosity frontiers, it is encountering event topologies with extreme boosts and intense pileup. This creates unique challenges that limit our ability to use QCD jets to find new physics and conduct precision tests of the standard model. In this thesis, I present two tools that greatly expand our ability to use jets for these important purposes: (i) the μx boosted-bottom jet tag, whose O(100) signal-to-background ratio does not falter as jet pT exceeds 1 TeV, and which is robust to pileup due to its foundation in boosted kinematics; and (ii) power jets, the first stage in a larger program to harness the power spectrum of QCD radiation to better utilize the vast amount of information collected about each collider event. Using the full power spectrum of a detected event, the power jets framework not only provides an accurate and precise recovery of jet kinematics, but also naturally facilitates a global fit to pileup intensity (rather than a local subtraction of pileup energy, which inadvertently strips soft QCD that belongs to the hard scatter).
Ph.D. in Physics, May 2018
- Title
- ON THE LIST COLORING PROBLEM AND ITS EQUITABLE VARIANTS
- Creator
- Mudrock, Jeffrey Allen
- Date
- 2018, 2018-05
- Description
In this thesis we study list coloring, which was introduced independently by Vizing and Erdős, Rubin, and Taylor in the 1970s. Suppose we associate with a graph G a list assignment L that assigns a list, L(v), of colors to each v ∈ V(G). A proper L-coloring of G, f, is a proper coloring such that f(v) ∈ L(v) for each v ∈ V(G). The list chromatic number of G, χℓ(G), is the minimum k such that G has a proper L-coloring whenever L is a list assignment satisfying |L(v)| ≥ k for each v ∈ V(G). A graph G is said to be chromatic-choosable if χℓ(G) = χ(G). The list chromatic number of the Cartesian product of graphs is not well understood. The best result is by Borowiecki, Jendroľ, Král, and Miškuf (2006), who proved that the list chromatic number of the Cartesian product of two graphs can be bounded in terms of the list chromatic number and the coloring number of the factors. In Chapter 2, we use the Alon-Tarsi Theorem and an extension of it discovered by Schauz in 2010 to find improved bounds on the list chromatic number and paint number (i.e., online list chromatic number) of the Cartesian product of an odd cycle or complete graph with a traceable graph. We also identify certain Cartesian products as chromatic-choosable. In Chapter 3, we generalize the notion of strong critical graphs, introduced by Stiebitz, Tuza, and Voigt in 2008, to strong k-chromatic-choosable graphs, and we show that this gives a strictly larger family of graphs that includes odd cycles, cliques, the join of a clique and any strongly chromatic-choosable graph, and many more families of graphs. We prove sharp bounds on the list chromatic number of certain Cartesian products where one factor is a strong k-chromatic-choosable graph satisfying an edge bound. Our proofs rely on the notion of unique-choosability as a sufficient condition for list colorability, and on the list color function, which is a list analogue of the chromatic polynomial. In Chapter 4, we study a list analogue of equitable coloring introduced by Kostochka, Pelsmajer, and West in 2003. A graph G is said to be equitably k-choosable if it has a proper L-coloring that uses no color more than ⌈|V(G)|/k⌉ times whenever |L(v)| = k for each v ∈ V(G). Generalizing a conjecture of Fu (1994) on total equitable coloring, we conjecture that for any simple graph G, its total graph, T(G), is equitably k-choosable whenever k ≥ max{χℓ(T(G)), Δ(G) + 2}. We prove this conjecture for all graphs satisfying Δ(G) ≤ 2, while also studying the related question of the equitable choosability of powers of paths and cycles. In Chapter 5, we introduce a new list analogue of equitable coloring: proportional choosability. For this new notion, the number of times a color is used must be proportional to the number of lists in which the color appears. Proportional k-choosability implies both equitable k-choosability and equitable k-colorability. Also, the graph property of being proportionally k-choosable is monotone, and if a graph is proportionally k-choosable, it must be proportionally (k+1)-choosable. We study the proportional choosability of graphs with small order and of disconnected graphs, and we completely characterize proportionally 2-choosable graphs.
Ph.D. in Applied Mathematics, May 2018
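A standard example showing why list coloring is strictly harder than ordinary coloring (due to Erdős, Rubin, and Taylor; included here for orientation, not taken from the thesis):

```latex
% K_{2,4} is bipartite, so \chi(K_{2,4}) = 2, yet it is not 2-choosable:
% give the two vertices of one side the lists \{1,2\} and \{3,4\}, and
% the four vertices of the other side the lists
% \{1,3\}, \{1,4\}, \{2,3\}, \{2,4\}.
% Any choice of colors for the first side, say a \in \{1,2\} and
% b \in \{3,4\}, leaves some opposite vertex with list \{a,b\} and no
% legal color, hence \chi_\ell(K_{2,4}) = 3 > 2 = \chi(K_{2,4}).
```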
- Title
- SINTER BONDING TITANIUM POWDER COMPONENTS: AN UNCONVENTIONAL ADDITIVE MANUFACTURING APPROACH
- Creator
- Montonera, Darrell R
- Date
- 2018, 2018-05
- Description
Titanium and its alloys are desirable for many applications. The cost of producing titanium parts that also have the needed microstructure for a given application limits where titanium is used. Methods of reducing the cost of titanium parts have centered on powder metallurgy processing routes. However, not all powder processing routes are cost-effective: additive manufacturing and powder injection molding processes are costly and require expensive spherical powder. The cheaper Press and Sinter route utilizes cheaper non-spherical powder. Powder titanium components made through Press and Sinter have complexity, size, and geometrical constraints, and have detrimental mechanical properties unless further post-processing is done. To utilize the simple geometries from Press and Sinter, pressed powder components are bonded to examine the possibility of creating higher-complexity parts. To achieve this, the dimensional sintering behavior of the powders was quantified using dilatometry. Grade 5 titanium alloys were created by blending hydride-dehydride (HDH) commercially pure powder with master alloy (MA) at 60/40 wt%. Varying the master alloy content produced a maximum dimensional difference of 0.341% between an alloy with lower MA content and one with higher content during sintering. The HDH+MA powder reached a final shrinkage of 4.59%. The other powders, TiH2, TiH2+MA, and Armstrong pre-alloyed, had final shrinkages of 9.85, 9.64, and 8.31%, respectively. The larger-shrinkage powders were pressed into a peripheral component to be bonded to a HDH+MA core. Samples were sintered under a vacuum of 2×10⁻⁶ torr by heating from room temperature to 1370 °C at 15 °C min⁻¹ and holding at 1370 °C for 90 minutes. Sinter-bonded sample interfaces were examined, showing the best bond to be the Armstrong | HDH+MA combination. This bond was tested using a push-out test, achieving shear stresses of 423 ± 60 MPa with a pre-sintering tolerance between components of 0.065 mm and 444 ± 37 MPa with a pre-sintering tolerance of 0.03 mm. Wrought material tested in the same manner as the sinter-bonded components had a strength of 517 ± 8 MPa; sinter-bonded samples thus achieved on average 82% of the strength of wrought material tested in the same manner. The strong bond strengths led to a fatigue analysis of sinter-bonded samples. Under various applied cyclic compressive stresses, the number of cycles to failure was measured using an applied stress ratio R = 0.1. Fatigue properties were determined by simulating and probing in Abaqus the maximum tensile stresses located at the bottom center of the sample. Simulations produced steady-state tensile stresses measured at the maximum, mean, and minimum applied compressive stresses, and these stresses were used to plot an S-N curve. True stress amplitudes were calculated from the probed maximum and minimum stresses, and the fatigue data were fit to the Basquin empirical relation Δσ/2 = 810.4(2N_f)^(-0.055) for sinter-bonded samples and Δσ/2 = 1290.9(2N_f)^(-0.065) for wrought samples. As a proof of concept, several pressed titanium parts were combined in the green state and successfully sintered into a single component.
Ph.D. in Materials Science and Engineering, May 2018
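The Basquin fit quoted above is a straight-line fit in log-log coordinates; a minimal sketch of how such coefficients are typically obtained (synthetic data generated from the quoted sinter-bonded fit, for illustration only):

```python
import numpy as np

# Basquin relation: stress_amplitude = A * (2*N_f)**b, linear in log-log.
def fit_basquin(reversals, stress_amp):
    """Least-squares fit of log(sigma_a) = log(A) + b*log(2Nf)."""
    b, log_a = np.polyfit(np.log(reversals), np.log(stress_amp), 1)
    return np.exp(log_a), b

# Synthetic S-N data consistent with the sinter-bonded coefficients
two_nf = np.array([1e3, 1e4, 1e5, 1e6])
sigma_a = 810.4 * two_nf ** -0.055
A, b = fit_basquin(two_nf, sigma_a)
print(f"A = {A:.1f} MPa, b = {b:.3f}")   # recovers ~810.4 and -0.055
```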