Search results
(1,021 - 1,040 of 2,990)
Pages
- Title
- Benjamin de Brie Taylor, 1970s
- Date
- 1973-1979
- Description
- Benjamin de Brie Taylor was Director of the Institute of Design at IIT from 1973 to 1975, and remained on the faculty at ID until 1987. Date of photograph is unknown. Date range listed is approximate.
- Title
- Systems and Systematic Design: Tracing the Evolution of Design Methodology at the Institute of Design, 1965-2010: Slides
- Creator
- Owen, Charles L.
- Date
- 2010-10-28
- Collection
- Charles L. Owen presentation, 2010
- Title
- László Moholy-Nagy at the Institute of Design, Chicago, Illinois, 1944
- Date
- 1944
- Description
- Photograph of Laszlo Moholy-Nagy at the Institute of Design (or possibly the School of Design before it was renamed the Institute of Design in 1944). Photographer unknown.
- Collection
- Biographical files collection, 1900-2014
- Title
- Unbreakable wooden springs, ca. 1942
- Creator
- Halbe, Milton
- Date
- 1942
- Description
- Photograph of a design for unbreakable wooden springs, possibly designed by Glenn Foss. Superimposition of two photographs shows the deflection of weight. Date of photograph is unknown. Date listed is approximate.
- Collection
- Institute of Design records, 1937-ca. 1962
- Title
- Economic and Computational Methods for the Control of Uncertain Systems
- Creator
- Zhang, Jin
- Date
- 2019
- Description
- The Economic Linear Optimal Control (ELOC) framework can improve the effective use of economic and dynamic information throughout the traditional optimization and control hierarchy. This dissertation investigates the computational procedures used to obtain a global solution to the ELOC problem. The proposed method employs the Generalized Benders Decomposition (GBD) algorithm. Compared to the previous branch-and-bound approach, the application of GBD to the ELOC problem greatly improves computational performance. A technological benefit of decomposing the problem into steady-state and dynamic parts is the ability to utilize nonlinear steady-state models, since the relaxed master problem is free of SDP-type constraints and can be solved using any global nonlinear programming algorithm.

To address the issue of model/plant mismatch, the dissertation also investigates how to handle box-type uncertainties in ELOC. We consider two methods: a robust formulation for when the uncertainty is completely unknown, and a Linear Parameter Varying formulation for when the uncertainty can be measured in real time. In both cases, the infinite number of conditions that need to be satisfied is reduced to a finite set of constraints. The resulting problem formulations have a structure similar to that of the ELOC and can be solved globally by employing the generalized Benders decomposition.

Despite a high-quality control law, the ultimate performance of closed-loop systems is dictated by the quality and limitations of the hardware elements. Thus, hardware selection is also investigated in the dissertation. The cost-optimal hardware selection problem has been shown to be of the Mixed Integer Convex Programming (MICP) class. While such a formulation provides a route to global optimality, use of the branch-and-bound search procedure has limited its application to fairly small systems. In this dissertation, we illustrate that a simple reformulation of the MICP and subsequent application of the GBD algorithm result in massive reductions in computational effort.

Finally, the problems of value-optimal sensor network design (SND) for steady-state and closed-loop systems are investigated. The value-optimal SND problem has been shown to be of the nonconvex mixed integer programming class. In the dissertation, it is demonstrated that, after transforming the problem into an equivalent reformulation, the application of the GBD algorithm significantly reduces the computational effort.
- Title
- What student responses do middle school mathematics teachers anticipate for contextualized and decontextualized problems about linear relationships?
- Creator
- Rupe, Kathryn Mary
- Date
- 2019
- Description
- The recent transition to the Common Core State Standards for Mathematics is in line with current initiatives to improve mathematics teaching and learning through an emphasis on conceptual understanding and mathematical reasoning. Much research has been done on how to develop conceptual understanding for students in kindergarten through twelfth grade. Stein, Engle, Smith, & Hughes (2008) highlighted the importance of orchestrating productive classroom discussions. They suggest that this goal can be attained through a process of five steps, each depending on the previous one. Teachers must anticipate student responses to a task that will be taught, monitor student thinking as students engage in the task, purposefully select students to present based on their choice of representations, sequence those representations in a purposeful way, and then make connections among the representations so that students are able to understand key concepts. The first step in this process, anticipating student responses (ASR), is an area where little research has been done. The literature suggests that teachers who engaged in professional learning related to the practice did so at varying levels (Empson et al., 2017), and could develop those skills over time with explicit feedback (Popovic, Morrissey, & Kartal, 2018). However, research on typical middle school mathematics teachers, those not enrolled in any professional learning focused on ASR, was absent from the literature. This study aimed to understand middle school mathematics teachers' anticipation of student responses. A sample of 19 eighth-grade math teachers who represented a variety of years of experience and curriculum use (traditional, reform, and teacher-developed) participated in semi-structured interviews and completed four common eighth-grade math problems focused on the content of linear relationships and systems.

Teachers' anticipated student strategies were categorized as showing robust, moderate, limited, or lacking evidence of ASR. Based on the results, all of the teachers fit into one of four categories: those who anticipated student responses at (1) consistently high levels, (2) mixed levels, (3) consistently low levels, and (4) inconsistent levels. This study found that teachers who anticipated student responses at consistently high levels were experienced (over 10 years of experience), had numerous student-centered professional development experiences, considered their role to be that of a facilitator, and had high expectations for students. They differed with respect to the type of curriculum they used, the certification they held, and the level of detail in their planning practices. Several of the teachers inconsistently anticipated student responses, providing robust evidence for at least one problem and limited evidence for another. This speaks to the specialized knowledge that teachers have, what Hill and Charalambous (2012a) refer to as local mathematical knowledge for teaching. Among all of the variables considered, curriculum use did not appear to have an impact on teachers' skills and knowledge related to anticipating student responses, although teachers used their curriculum materials in very different ways. Years of experience, secondary licensure, a student-centered philosophy of teaching, and describing one's role as that of a facilitator were related to evidence of anticipating student responses. Understanding the variables that may impact teachers' abilities to anticipate student responses, the first of the five steps outlined by Stein et al. (2008), is important for supporting teachers as they orchestrate productive classroom discussions around important concepts.
- Title
- Comparing Complex Network and Latent Factor Models of Seasonal Affective Disorder
- Creator
- Smetter, Joseph
- Date
- 2019
- Description
- Research on Seasonal Affective Disorder (SAD) has produced several etiological models of SAD symptomatology, including a common cause model that conceptualizes symptoms as the result of a single underlying disease process, and the Dual Vulnerability Model (Young et al., 1991), which posits that psychological symptoms of depression follow the onset of vegetative symptoms (e.g., hypersomnia, increased appetite) in individuals with a vulnerability to seasonal changes. Studies of the structure of SAD symptomatology have been limited in their ability to evaluate these models. This study used exploratory factor analysis and network analysis to examine baseline winter SAD symptoms (using a modified BDI-II) in 177 adults participating in a randomized controlled trial of light treatment and CBT for SAD (Rohan et al., 2015). The factor analysis supported a four-factor model that included negative cognition/affect, loss of vitality, dysregulation, and increases in weight/appetite. The complex network model of SAD conceptualized the network as a system of interacting symptoms. Results of the network model paralleled those of the factor analysis in producing four communities of inter-correlated symptoms. In addition to the full symptom network, a directed acyclic graph was constructed to model causal relations between symptoms. Results suggest that vegetative symptoms (loss of vitality and appetite/weight) lead ultimately to cognitive symptoms, with intermediate effects of dysregulation symptoms. This partially supports the Dual Vulnerability Model. Findings from the factor analysis and the network analysis are compared, and their implications for the treatment of SAD are discussed.
- Title
- DEEP LEARNING FOR IMAGE PROCESSING WITH APPLICATIONS TO MEDICAL IMAGING
- Creator
- Zarshenas, Amin
- Date
- 2019
- Description
- Deep learning is a subfield of machine learning concerned with algorithms that learn hierarchical data representations. Deep learning has proven extremely successful in many computer vision tasks, including object detection and recognition. In this thesis, we aim to develop and design deep-learning models to better perform image processing and tackle three important problems: natural image denoising, computed tomography (CT) dose reduction, and bone suppression in chest radiography (chest x-rays; CXRs).

As the first contribution of this thesis, we aimed to answer some of the most critical design questions for the task of natural image denoising. To this end, we defined a class of deep learning models, called neural network convolution (NNC), and investigated several design modules for applying NNC to image processing. Based on our analysis, we designed a deep residual NNC (R-NNC) for this task. One of the important challenges in image denoising is the scenario in which images have varying noise levels. Our analysis showed that training a single R-NNC on images at multiple noise levels results in a network that cannot handle very high noise levels and sometimes blurs the high-frequency information in less noisy areas. To address this problem, we designed and developed two new deep-learning structures, namely, a noise-specific NNC (NS-NNC) and a DeepFloat model, for the task of image denoising at varying noise levels. Our models achieved the highest denoising performance compared to state-of-the-art techniques.

As the second contribution of the thesis, we aimed to tackle the task of CT dose reduction by means of our NNC. Studies have shown that high doses of CT radiation can dramatically increase the risk of radiation-induced cancer in patients; therefore, it is very important to reduce the radiation dose as much as possible. For this problem, we introduced a mixture of anatomy-specific (AS) NNC experts. The basic idea is to train multiple NNC models for different anatomic segments with different characteristics, and merge the predictions based on the segmentations. Our phantom and clinical analyses showed that more than 90% dose reduction can be achieved using our AS NNC model.

We exploited our findings from image denoising and CT dose reduction to tackle the challenging task of bone suppression in CXRs. Most lung nodules that are missed by radiologists, as well as by computer-aided detection systems, overlap with bones in CXRs. Our purpose was to develop an imaging system to virtually separate ribs and clavicles from lung nodules and soft tissue in CXRs. To achieve this, we developed a mixture of anatomy-specific, orientation-frequency-specific (ASOFS) expert deep NNC models. While our model was able to decompose the CXRs, to achieve even higher bone suppression performance we employed our deep R-NNC for the bone suppression application. Our model was able to create bone and soft-tissue images from single CXRs, without requiring specialized equipment or increasing the radiation dose.
- Title
- ON THE FLOW AND PERFORMANCE OF MUTUAL FUNDS
- Creator
- Zhang, Jingqi
- Date
- 2019
- Description
- This dissertation consists of three essays on mutual funds. I first discuss the flow of active ETFs, then focus on the performance of mutual funds, and finally evaluate the timing ability of mutual fund investors.

Using a data set from 2000 to 2016, this thesis first studies the behavior of active ETF investors from the perspective of fund flows. The results show that investors chase past returns, as they do for mutual funds. Furthermore, I find that the return-chasing behavior can be influenced by other considerations, such as fee changes. However, the evidence of performance persistence is weak for active ETFs. Therefore, I propose that the return-chasing behavior is not smart, and that the flows of active ETFs instead behave more like “dumb money,” as demonstrated by the data.

I continue by studying the performance of mutual funds. To avoid the bias introduced by pricing models themselves, I introduce a model-independent method to assess mutual fund performance relative to portfolios constructed by ordinary investors, assuming they follow a naive strategy. Using a data set from October 1984 to September 2017, I find that the majority of mutual funds have higher buy-and-hold returns than T-bill returns as well as market returns in the long run. Employing the model-independent measure of performance, I find that the mutual fund industry creates value for individual investors, in that mutual funds on average exceed the performance of the majority of portfolios constructed by investors selecting stocks randomly.

To measure the timing ability of mutual fund investors, I use the difference between the internal rate of return realized by investors and the buy-and-hold return of the funds. Differing from the existing literature, I modify the cash flows used to generate the internal rate of return, in a way that captures the realized return of investors more accurately. I find that investors show timing skill over short horizons. On average, investors in mutual funds have worse timing skills than those in ETFs, and compared with active fund investors, passive fund investors have better timing skills. I also find that investors who simply chase past winners show worse timing skills.
- Title
- Inefficiencies in resource allocation games
- Creator
- Tota, Praneeth
- Date
- 2019
- Description
- This thesis addresses a problem that has been debated by the academic community, the government, and the industry at large: how unfair is a tiered Internet compared to an open Internet? On one hand we have an open Internet, in which all data is treated equally and Internet service providers have no say when it comes to pricing differentiation; on the other hand we have a tiered Internet, in which the ISPs can charge different amounts based on certain constraints, such as the type of data or the content provider. The architecture of the Internet imposes certain constraints that require mechanisms to efficiently allocate resources among all the competing participants, who concern themselves only with their own best interests, without considering the social benefit as a whole. We consider one such mechanism, known as proportional sharing, in which the resource, or bandwidth, is divided among the participants based on their bids. An efficient allocation is one that maximizes the aggregate utility of the users. We consider inelastic demand with price-anticipating participants and ensure market clearing.

We examine a tiered Internet in which the ISPs can partition the bandwidth based on certain constraints and charge a premium for better service. The participants involved are from all economic classes, so they have different amounts of wealth at their disposal. We quantify the relative loss incurred by participants in lower economic classes as compared to those in higher economic classes. We also calculate the loss of efficiency caused by competition among the participants as compared to the socially optimal allocation.
- Title
- Structural Uncertainty Analysis of Nuclear Reactor Core Load Pads
- Creator
- Wozniak, Nicholas
- Date
- 2019
- Description
- In fast spectrum nuclear reactors, reactivity is directly related to the capability of the reactor to sustain a fission chain reaction for power production. Historically, mechanical/structural analysis and design have been driven primarily by deterministic methods. However, reactivity is extremely sensitive to the location of the fuel within the reactor, which is subject to uncertainties. This makes deterministic models unreliable and allows manufacturing errors to contribute to uncertainties in analysis, resulting in potential safety concerns and incorrect reactor lifetime prediction. One potential means to address this challenge is the use of stochastic analysis. A framework is presented which introduces uncertainty analysis through the use of Monte Carlo simulation. Latin Hypercube Sampling is used to reduce the number of sample runs, the computational effort, and the storage space required for the results. Geometric parameters important to the design of nuclear reactors, such as the gaps at the load pad contact points, the location of the above core load pad (ACLP), and even temperature gradient profiles, are varied, and their effects on overall performance are studied through sensitivity analysis. The main focus was to quantify the effects of the variation of these parameters directly on the variation of the contact forces and deformations of the fuel assemblies, which house and control the movement of the fuel. Based on the results of the sensitivity study, the ACLP location has the largest effect on contact forces; as such, any uncertainty in this parameter results in a rather large variation in the intensity of the contact force. Furthermore, specific recommendations are given to help control these variations, as well as for further investigation of other parameters that may be significant to the design of fuel assemblies.
- Title
- CHARACTERIZATION OF DISPERSION AND ULTRAFINE PARTICLE EMISSION FACTORS USING NEAR ROADWAY FIELD MEASUREMENTS
- Creator
- Xiang, Sheng
- Date
- 2019
- Description
- Recent epidemiological evidence suggests that vehicle emissions are major contributors to poor urban air quality. Human exposure to elevated concentrations of traffic emissions has been associated with increased risk factors for a range of negative health outcomes. Evaluation of human exposure to vehicle emissions (e.g., ultrafine particles) mainly relies on dispersion models. Consequently, dispersion models need to comply with constantly increasing requirements to provide predictions of pollutant concentration. The dynamics of the near-roadway dispersion process need to be investigated, since most existing models do not account for traffic condition variability (e.g., vehicle type and mode of operation) in dispersion. A five-year field study was conducted to characterize dispersion near roadways under various vehicle modes of operation and vehicle types. To better understand the near-roadway dispersion process, the impact of different ambient background categories (e.g., remote, lake, urban, industrial) on ultrafine particles (UFPs) was evaluated. Results demonstrate that each category has a different average ambient background concentration (pt cm^-3): remote, 2,700; lake, 6,000; industrial, 12,000; and urban, 11,000. The large variations in ambient background concentration result in significant variations in near-roadway concentrations. Total near-roadway measurements are generally near 20,000 pt cm^-3 and can reach 60,000 pt cm^-3 depending on the background and traffic emissions. The dispersion near the roadway is also investigated in this study. A roadway restricted to light-duty vehicles (LDVs) was selected for near-roadway field measurements. Results indicate that the dispersion induced by vehicles is a two-stage process. Under unsteady-state conditions with a small number of operating vehicles, the rate of dispersion near the roadway increased from 2 m^2 s^-1 to 6 m^2 s^-1 as the number of vehicles increased. Under steady-state conditions, the rate of dispersion was constant near 6 m^2 s^-1 and did not increase with additional vehicles. For a roadway carrying both LDVs and heavy-duty vehicles (HDVs), similar results were found: dispersion increased from 6 to 18 m^2 s^-1 as the total vehicle flow rate increased to 10,000 veh h^-1 and the HDV flow rate increased to 1,000 veh h^-1. Finally, the calculated near-roadway dispersion is used to estimate UFP emission factors. The UFP emission factors ranged from 0.5 × 10^13 to 1.5 × 10^13 pt km^-1 veh^-1 for LDVs and from 7 × 10^14 to 20 × 10^14 pt km^-1 veh^-1 for HDVs. The variations in UFP emission factors are due to changes in vehicle mode of operation.

The results from this study will be critical for parameterization of near-roadway dispersion and provide an important emission inventory for interdisciplinary partnerships among different fields (e.g., air quality, transportation design, and urban planning) in solving transportation air quality problems.
- Title
- Structural Condition Assessment for Wind Turbine Towers
- Creator
- Zahraee, Afshin
- Date
- 2019
- Description
-
Wind-based energy generation has special priority in efforts related to global sustainability. Based on this priority and the desire for...
Show moreWind-based energy generation has special priority in efforts related to global sustainability. Based on this priority and the desire for increase in electricity generation, the size of wind turbines has been tremendously increased in recent years. Moreover, larger wind turbines have access to more stable wind speeds which assists in electricity generation consistency. However, larger wind turbines are more prone to exhibit structural failure due to the increase of size as well as presence of complexities in the structure and wind load interaction. As such, condition monitoring and fault diagnosis of wind turbines are crucial in their sustainable operation. In this work, a new framework for condition assessment of wind turbine towers is developed. This framework enhances the ability to assess the structural condition of in-service wind turbine towers. Using this framework: 1) the wind data for the wind turbine location is collected, 2) a series of numerical modeling and analysis for the wind turbine tower for various wind velocities are performed to obtain the maximum induced stresses and their corresponding critical fatigue components (hot spots), and 3) fatigue analysis is performed leading to prediction for the remaining life of the wind turbine tower. To illustrate the capability of the present method, a case study is performed on an existing wind turbine. The obtained analytical results are compared and verified by the original design parameters. The results obtained for life prediction of the wind turbine tower correlate with life predictions of other existing wind turbine towers. It is anticipated that application of this framework for existing and future wind turbines will enhance their inspection planning as well as offer a more cost-effective process for repair and rehabilitation of wind turbine towers. 
This will ultimately increase the overall safety of wind turbine systems and enhance their reliability of performance.Keywords: Wind Turbine Tower, Condition Assessment, Life Prediction.
Show less
- Title
- Numerical and Experimental Investigation to Improve Radio Frequency Performance of Photonic Band Gap Accelerating Structure
- Creator
- Zhou, Ning
- Date
- 2019
- Description
- In this thesis, the design and experimental work for a Photonic Band Gap (PBG) accelerator cavity with a star-shaped array is presented. Photonic band gap structures (metallic and/or dielectric) have been proposed for accelerator applications. These structures act like filters, allowing electromagnetic waves at some frequencies to propagate through the lattice while rejecting RF fields in some (unwanted) frequency ranges. Additionally, PBG structures are used to support selective field patterns (modes) in a resonator or waveguide via a defect region within the lattice, while damping unwanted higher- or lower-order modes without impacting the supported mode. The unwanted modes affect beam propagation or even distort the beam; thus, suppression of unwanted modes is important. In this thesis work, a star-shaped structure is obtained by removing elements from a PBG structure with a triangular lattice and is integrated with a metallic cavity resonator for accelerator applications. Impedance matching is accomplished by adjusting the positions of some elements in the array. The design was fabricated and measured to have an input return loss of over 30 dB at the targeted frequency of 11.4 GHz. The measured results are in excellent agreement with the computer simulation.
- Title
- Spring Thing tricycle race, Illinois Institute of Technology, Chicago, Illinois, 1970
- Date
- 1970
- Description
- Photograph of the tricycle race during the 1970 Spring Thing. Spring Thing, sponsored by the Union Board, occurred during the fall semester, usually in October. The tricycle race, first held in 1968, was a highlight of the annual festivities. Photographer unknown.
- Collection
- Office of Communications and Marketing photographs, 1905-1999
- Title
- Using Peer Navigators to Address the Integrated Healthcare Needs of African Americans with Serious Mental Illness
- Creator
- Corrigan, Patrick
- Date
- 2017
- Publisher
- American Psychiatric Association
- Description
- Objective: The impact of a peer navigator program (PNP) developed by a community-based participatory research team was examined for African Americans with serious mental illness who were homeless. Methods: Research participants were randomized to PNP or a treatment-as-usual control group for one year. Data on physical and mental health, recovery, and quality of life were collected at baseline and at 4, 8, and 12 months. Results: Findings from group-by-trial ANOVAs of omnibus measures of the four constructs showed significant impact over the one year for participants in PNP compared to control, described by small to moderate effect sizes. These differences emerged even though both groups showed significant improvements in reduced homelessness and insurance coverage. Conclusions: Implications for improving in-the-field health care for this population are discussed. Whether these results occurred because navigators were peers per se needs to be examined in future research.
- Title
- Union Board, Illinois Institute of Technology, Chicago, Illinois, 1980s
- Creator
- Lightfoot, Robert M., III
- Date
- 1980-1989
- Description
-
Photograph of the Union Board, the primary student organization at Illinois Tech responsible for programming events on- and off-campus.
- Collection
- Office of Communications and Marketing photographs, 1905-1999
- Title
- Fraternity Barbecue, Illinois Institute of Technology, Chicago, Illinois, 1981
- Date
- 1981
- Description
-
Photograph of students at a fraternity barbecue on the Illinois Tech campus in 1981. Photographer unknown.
- Collection
- Office of Communications and Marketing photographs, 1905-1999
- Title
- Fraternity Barbecue, Illinois Institute of Technology, Chicago, Illinois, 1981
- Date
- 1981
- Description
-
Photograph of students at a fraternity barbecue on the Illinois Tech campus in 1981. Photographer unknown.
- Collection
- Office of Communications and Marketing photographs, 1905-1999
- Title
- ENHANCED DEGRADATION AND PEPTIDE SPECIFICITY OF MMP-SENSITIVE SCAFFOLDS FOR NEOVASCULARIZATION OF ENGINEERED TISSUES
- Creator
- Sokic, Sonja
- Date
- 2013-07
- Description
-
Biomaterial strategies for engineering tissues of clinically relevant size require the formation of rapid and stable neovascularization. The...
Show moreBiomaterial strategies for engineering tissues of clinically relevant size require the formation of rapid and stable neovascularization. The ability of an engineered scaffold to induce vascularization is highly dependent on its rate of degradation. During the process of material degradation, the scaffold should degrade in a manner allowing for cellular infiltration, lumen formation, and extracellular matrix (ECM) synthesis. Matrix metalloproteinases (MMPs) play a key role in mediating cell-induced proteolytic matrix degradation, remodeling, and controlled neovascularization. Poly (ethylene glycol) PEG hydrogels have been extensively investigated as scaffolds for tissue engineering applications due to their ease of chemical modification allowing for the recapitulation of key aspects of the neovascularization process. The goal of the work described in this thesis was to develop strategies to enhance and control the degradation of MMP-sensitive PEG diacrylate (PEGDA) hydrogels without inducing changes to the bulk physical and mechanical properties of the material and to further study the effect of the cleavage site concentration and MMP-sensitive peptide substrate specificity on the rate of neovascularization and tissue remodeling in vitro and in vivo. In the first part of this study, a detailed investigation was completed to investigate the effects of the mechanical and physical properties of the scaffolds as well as the role of proteolytically mediated hydrogel degradation on 3D fibroblast invasion within MMPsensitive PEGDA hydrogels. Initial studies focused on the use of a modified version of a previously published multistep conjugation method to generate degradable PEGDA macromer conjugates containing variations in the number of MMP-sensitive domains. 
Theoretical and experimental characterization of this multistep conjugation demonstrated that this method leads to the formation of multiple species that directly affect the compressive modulus and degradation rate of the scaffold, making it difficult to control degradation independent of alterations in the bulk physical and mechanical hydrogel properties. After manipulation of multiple polymerization conditions, hydrogels with similar compressive moduli but different degradation rates were synthesized. These initial studies showed that an increase in the incorporation of proteolytically sensitive domains in PEGDA hydrogels of similar modulus led to enhanced degradation and 3D fibroblast invasion. In this study, the role of soluble FGF-1 on fibroblast invasion within these scaffolds was also investigated, and it was demonstrated that the inclusion of FGF-1 in the scaffolds further enhances fibroblast invasion in a dose-dependent fashion. Further studies were necessary to develop a more controllable and robust approach to tuning scaffold degradation independent of alterations in the bulk physical and mechanical properties. To address this, a novel approach was developed to engineer protease-sensitive peptides with multiple proteolytic cleavage sites that could be covalently crosslinked into hydrogels without compromising the physical and mechanical biomaterial properties. This approach avoided the need for a multistep conjugation process, as peptides could be incorporated into the backbone of PEG using a single-step conjugation. Using this approach, hydrogels formed with the engineered peptides showed significantly enhanced degradation and neovascularization in vitro as compared to scaffolds with a single protease-sensitive peptide between crosslinks. In addition, hydrogels with enhanced susceptibility to degradation promoted vascularization over a wider range of matrix properties.
This approach allowed for controlled concentration of the proteolytic cleavage sites within the matrix, and thus tuning of hydrogel degradation for tissue engineering applications. In the final study, MMP-sensitive peptide substrates specific to degradation by MMPs known to be expressed during neovascularization were screened for degradation and for their role in neovascularization. MMP-sensitive PEGDA hydrogels (SSite and TriSite) were synthesized with peptide substrates sensitive to cleavage by MMP-2, MMP-9, MMP-14, and a mixed sequence of MMP-2, 9, and 14, and compared to the peptide substrate used in the previous studies, which is degraded by collagenase enzymes. The hydrogels were evaluated for their sensitivity and specificity to degradation by MMPs, in terms of cleavage site concentration, and for their role in neovascularization and tissue remodeling in vitro and in vivo. The presented approach allows for the incorporation of varying cleavage site concentrations and MMP-sensitive peptide substrates into PEG hydrogels without alterations in the mechanical and physical properties of the hydrogels. Results showed that, even without the incorporation of growth factors in the scaffold, vascularization and tissue invasion were supported in all MMP-sensitive hydrogel groups regardless of the MMP-sensitive peptide substrate embedded in the matrix. In addition, the cleavage site concentration had a profound impact on enhancing vascularization in vitro and tissue invasion in vivo. These techniques can be used to tune the properties of polymer scaffolds for neovascularization and tissue remodeling. In addition, these studies provide insight into the effects of the physical, mechanical, and degradative properties of these systems, and into the roles of cleavage site concentration and MMP substrate specificity, on neovascularization and tissue invasion within proteolytically degradable PEG hydrogel constructs.
Ph.D. in Biomedical Engineering, July 2013