Search results
(1 - 13 of 13)
- Title
- DEVELOPMENTAL CHANGES OF KNOWLEDGE OF MATHEMATICS FOR TEACHING OVER TIME THROUGH PROFESSIONAL DEVELOPMENT
- Creator
- Alahmadi, Reham Abdulrahman
- Date
- 2019
- Description
-
In the past, the knowledge base for effective teaching was measured through presage variables, methods of teaching, process-product research, competency-based teacher education, and professional decision-making. In addition, teachers' effectiveness has been measured indirectly using proxy measures: for example, teachers were assessed based on their performance on certification exams, the mathematics courses they had taken, and various experiences. Teachers were also evaluated based on students' achievement tests, using a pre-test/post-test model that revealed little about their development. This study aims to understand teachers' mathematical knowledge for teaching (MKT), its development over time, and how professional development influences their professional knowledge. Twenty middle school (sixth through eighth grade) teachers participated in content-based (algebra) professional development (PD) in Saudi Arabia; the selection pool targeted mathematics teachers representing a variety of years of teaching experience and PD experience. The results of this study found that teachers positively developed their MKT through the PD program. In particular, the pre-test results showed that teachers had a low level of MKT before they participated in the PD program. Teachers' developmental changes over time were captured during the PD via multiple interviews, which revealed within-teacher themes, cross-teacher themes, and factors that impacted teachers' changes. Furthermore, a paired-sample t-test showed a significant difference between the pre-test and post-test means.
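The pre/post comparison this abstract reports is a paired-sample t-test. A minimal sketch in Python, using made-up scores purely for illustration (these are not the study's data):

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post MKT scores for ten teachers -- illustrative only,
# not the study's data.
pre = np.array([12, 15, 11, 14, 10, 13, 12, 16, 11, 14], dtype=float)
post = np.array([16, 18, 15, 17, 14, 17, 15, 19, 14, 18], dtype=float)

# Paired-sample t-test: tests whether the mean within-teacher
# pre/post difference is zero.
t_stat, p_value = stats.ttest_rel(post, pre)
```

A positive `t_stat` with a small `p_value` indicates post-test scores are significantly higher than pre-test scores for the same participants.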
- Title
- Identity and Self-Efficacy Among Mathematically Successful African American Single Mothers in Urban Community College Contexts
- Creator
- Devi, Shavila
- Date
- 2019
- Description
-
This dissertation is a phenomenological, multi-case study of 13 mathematically successful African American single mothers from two urban community colleges in Chicago. While a number of recent studies have focused on Black girls and women in K-12 and university contexts, the community college context remains understudied despite the presence of large numbers of Black women. Moreover, there has been a tendency in mainstream research to normalize failure and to focus on problematic aspects of being a Black single mother or a Black mathematics learner. Bringing together considerations of identity (racial, mathematics, single mother) and mathematics self-efficacy, this study is the first to focus on mathematically successful African American single mothers in the community college context. The following research questions guided the research for this dissertation:
1. How do African American single mothers, who return to study mathematics at the community college and are successful in their courses, narrate their identities and life experiences around race, gender, mathematics learning, and being a mother?
2. How do these women score on the Mathematics Self-Efficacy Scale (MSES), and what sources of and influences on their self-efficacy do these women report via interviews?
3. What other factors (intrapersonal and beyond) do these women report as being particularly salient in their mathematics success?
Multiple forms of data (semi-structured interviews, pre- and post-responses to a widely used mathematics self-efficacy survey, and mathematics artifacts) were collected to address the research questions. A cross-case analysis of the data revealed four themes that emerged across the 13 participants. Within-case analyses of three participants reveal how the themes play out in depth for these women.
The four themes are (1) strong counter-narratives of being a single mother that resisted dominant and deficit-oriented discourses; (2) education as a key tool and resource to manage and mitigate risks associated with single motherhood; (3) multifaceted stories of resilience to achieve success in mathematics and life; and (4) positive, success-oriented mathematics identities and positive math self-efficacy. This study contributes to an emerging success-oriented literature on Black women and mathematics, and a growing research literature on identity in mathematics education. In surfacing how the participants narrate and negotiate race, gender, and class, this dissertation also contributes to an emerging literature on intersectionality in mathematics education. Results from this study can inform community college administrators and faculty in crafting practice and policy to support African American single mothers in mathematics.
- Title
- A BOUNDARY INTEGRAL METHOD FOR COMPUTING THE FORCES OF MOVING BEADS IN A THREE-DIMENSIONAL LINEAR VISCOELASTIC FLOW
- Creator
- Hernandez, Francisco
- Date
- 2019
- Description
-
Computing the forces acting on particles in fluids is fundamental to understanding particle dynamics and interactions. In this thesis, we study the dynamics of a two-particle system in a three-dimensional linear viscoelastic flow. Using a correspondence principle between unsteady Stokes flow and viscoelastic flow, we reformulate the problem and derive a boundary integral formulation that solves Brinkman's equation in the Fourier domain. We show that the computational cost can be reduced by carefully eliminating the double-layer potential, and that a unique solution can be obtained by desingularizing the equation. We develop a highly accurate numerical integration scheme to evaluate the resulting boundary integrals. We solve the backward problem using this numerical integration scheme together with variable transformations, the generalized minimal residual (GMRES) method, and spherical harmonic interpolation. In particular, spherical harmonic interpolation ensures that the numerical scheme is highly accurate. Our method also has the advantage of working for both unsteady Stokes and linear viscoelastic flow by appropriately adjusting the oscillation frequency. Our numerical results agree with the exact solution for a single-particle system, as well as with the asymptotic solution for large particle separation in the two-particle system. Lastly, we analyze the numerical results for high oscillation frequencies and small particle separations. Our numerical method is shown to depend only on the frequency parameter and the distance between the particles. We find that at high frequencies, the forces on the particles behave differently in unsteady Stokes and linear viscoelastic flows.
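The linear systems arising from discretized boundary integral equations are typically solved iteratively with GMRES, as in this thesis. A generic illustration of the solver on a toy system (the thesis's actual kernel matrix is not reproduced here):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Toy dense system standing in for a discretized boundary-integral
# operator: a small perturbation of the identity, so it is well
# conditioned and GMRES converges quickly.
rng = np.random.default_rng(0)
n = 50
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

# GMRES only needs matrix-vector products, which is what makes it a
# natural fit for boundary-integral discretizations.
op = LinearOperator((n, n), matvec=lambda v: A @ v)
x, info = gmres(op, b)
```

`info == 0` signals convergence to the default relative tolerance; in a real boundary integral code the `matvec` would apply the (possibly fast) integral operator rather than a stored matrix.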
- Title
- Modeling the Aerodynamic Response to Impulsive Active Flow Control
- Creator
- Asztalos, Katherine
- Date
- 2021
- Description
-
In unsteady aerodynamics, the response to external disturbances can depend significantly on the initial condition, and the extent to which this impacts the ability to model the flowfield can vary. In this work, we seek to develop a model that can capture and predict the long-time response to actuation, which we suspect to be sensitive to the instantaneous state. We investigate whether a physical understanding of the short-time response to impulsive actuation can be obtained, with the goal of understanding the physical phenomena present in the immediate response to this type of actuation. We find that the response to impulsive actuation is sensitive to the instantaneous wake, and that the short-time response is directly proportional to the time rate of change of the actuation input. Computational simulations of a stalled NACA 0009 airfoil subject to leading-edge synthetic jet actuation were performed. Full state information, as well as force response measurements, was collected using an immersed boundary method (IBM) numerical code. The numerical simulations sought to characterize the response to actuation by varying the actuation parameters, such as the strength, direction, and phase at which the onset of actuation occurs. It was found that the long-time response to actuation can be sensitive to the instantaneous wake state at the onset of actuation. The ability to extract models that describe the complex behavior of the system provides additional insight into the dominant features governing the response of such systems, and achieves predictive capability for the system's response. The data-driven models, which are identified using variants of dynamic mode decomposition, can capture both the short- and long-time response of the system to actuation. Predictive models are identified using multiple trajectories of data corresponding to varying the phase of vortex shedding at which the onset of actuation occurs.
These models achieve accurate predictions for off-design cases as well. It is also shown that multiple control objectives can be achieved with the same actuator. Classical theory aids in understanding the physics governing unsteady aerodynamic motion and the response to disturbances. Theoretical models are developed using the assumptions of classical unsteady aerodynamic theory, which provide insight into the forms that the data-driven models take. The effect of short-duration momentum-injection actuation is modeled through a combination of source/sink, doublet, and vortex elements. Regardless of the precise elements used in the theoretical model, the lift response comprises a contribution directly proportional to the rate of change of actuation strength, and a contribution that persists after the actuation burst ends, arising from the enforcement of the Kutta condition. Methodologies that retain the physics inherent to the system, by projecting the governing equations of motion onto a well-suited basis, are extremely valuable for gaining physical insight into the dynamics of the flowfield. A new methodology is proposed for extracting spectral content from systems with limited data using projection-based modeling approaches. There are challenges in using modal decomposition-based modeling techniques for systems exhibiting large transient dynamics due to external inputs, as is the case here and for related systems. The methodology presented here shows how the dynamics of this system can be understood through analysis of optimal finite-time-horizon transient energy growth, applied to reduced-order models identified from actuation response data with either data-driven or physics-based models.
A novel methodology is proposed to guide future experimental actuation design to achieve maximal response by considering an optimal forcing mode, identified from considering the optimal perturbation of the full unactuated system, which maximizes a given output.
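The data-driven models here come from the dynamic mode decomposition family. A generic sketch of exact DMD (not the thesis's particular variant): fit a rank-r linear operator to paired snapshot matrices and extract its eigenvalues and modes.

```python
import numpy as np

def dmd(X, Y, r):
    """Exact dynamic mode decomposition: fit a rank-r linear operator A
    with Y ~= A X, where the columns of X and Y are consecutive flow
    snapshots. Returns the DMD eigenvalues and modes."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    # r x r operator in the POD subspace (division scales columns by 1/s)
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s
    eigvals, W = np.linalg.eig(A_tilde)
    # exact DMD modes, lifted back to the full state space
    modes = ((Y @ Vh.conj().T / s) @ W) / eigvals
    return eigvals, modes
```

On snapshots generated by a known linear map, the DMD eigenvalues recover the map's eigenvalues; on flow data they characterize growth rates and frequencies of coherent structures.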
- Title
- Advances in Machine Learning: Theory and Applications in Time Series Prediction
- Creator
- London, Justin J.
- Date
- 2021
- Description
-
A new time series modeling framework for forecasting, prediction, and regime switching with recurrent neural networks (RNNs) is introduced. In this framework, we replace the perceptron with an econometric modeling unit: a cell functionally dedicated to processing the prediction component of the econometric model. These supervised learning methods overcome the parameter estimation and convergence problems of traditional econometric autoregression (AR) models, which rely on maximum likelihood estimation (MLE) and expectation-maximization (EM) methods that are computationally expensive, assume linearity and Gaussian-distributed errors, and suffer from the curse of dimensionality. Consequently, due to these estimation problems and the limited number of lags that can be estimated, AR models are limited in their ability to capture long memory or dependencies. On the other hand, plain RNNs suffer from the vanishing gradient problem, which also limits their ability to retain long memory. We introduce a new class of RNN models, the $\alpha$-RNN and dynamic $\alpha_{t}$-RNN, that do not suffer from these problems by utilizing an exponential smoothing parameter. We also introduce MS-RNNs, MS-LSTMs, and MS-GRUs, novel models that overcome the limitations of Markov-switching autoregressive (MS-AR) models while enabling regime (Markov) switching and detection of structural breaks in the data. These models have long memory, can handle non-linear dynamics, and do not require data stationarity or assume particular error distributions. Thus, they make no assumptions about the data-generating process and can better capture temporal dependencies, leading to better forecasting and prediction accuracy than traditional econometric models and plain RNNs. Yet the partial autocorrelation function and econometric tools, such as the ADF, Ljung-Box, and AIC test statistics, can still be used to determine optimal sequence lag lengths to input into these RNN models and to diagnose serial correlation.
The new framework has the capacity to characterize the non-linear partial autocorrelation of time series and directly capture dynamic effects such as trends and seasonality. The optimal sequence lag order can greatly influence prediction performance on test data. This structure provides more interpretability to ML models, since traditional econometric models are embedded into RNNs. The ability to embed econometric models into RNNs will allow firms to improve prediction accuracy compared to traditional econometric or traditional ML models by creating a hybrid of a well-understood traditional econometric model and an ML model. In theory, the traditional econometric model should focus on the portion of the estimation error that is best managed by a traditional model, and the ML model should focus on the non-linear portion. This combined structure is a step towards explainable AI and lays the framework for econometric AI.
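One way to read the exponential-smoothing idea behind the $\alpha$-RNN is a recurrent cell whose hidden state is blended with its previous value by a smoothing parameter. A schematic forward pass under that reading (an assumption for illustration, not the dissertation's exact cell):

```python
import numpy as np

def alpha_rnn_forward(x_seq, Wx, Wh, b, alpha):
    """Schematic alpha-RNN forward pass: a plain tanh RNN whose hidden
    state is exponentially smoothed with parameter alpha in (0, 1].
    alpha = 1 recovers the plain RNN update; smaller alpha retains more
    of the past state, lengthening the cell's effective memory."""
    h = np.zeros(Wh.shape[0])
    hs = []
    for x in x_seq:
        h_tilde = np.tanh(Wx @ x + Wh @ h + b)   # candidate state
        h = alpha * h_tilde + (1.0 - alpha) * h  # exponential smoothing
        hs.append(h)
    return np.array(hs)
```

The smoothing recursion is the same one used in classical exponential smoothing of a time series, applied here to the hidden state rather than the observations.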
- Title
- IMPLEMENTING ASYNCHRONOUS DISCUSSION AS AN INSTRUCTIONAL STRATEGY IN THE DEVELOPMENTAL MATHEMATICS COURSES TO SUPPORT STUDENT LEARNING
- Creator
- Zenati, Lynda
- Date
- 2020
- Description
-
Remedial coursework, also known as developmental coursework, is designed to get under-prepared students ready for college. Ninety-one percent of colleges offer remedial courses in mathematics and English (Seo, 2014). Evidence suggests that traditional teaching methods do not enable all students to engage with the types of academic literacy constitutive of higher education (Lea and Street, 2006). The popularity of online discussion has been made possible by its availability in most learning management systems (LMSs), which are widely used in higher education (Dahlstrom, Brooks, & Bichsel, 2014). This study examined the use of asynchronous discussion (AD) as an instructional strategy to help alleviate some of the difficulties developmental math students encounter across different topics. Participants were 15 students enrolled in the Summer 2019 semester at a community college. Results showed that performance increased from pretest to posttest for students who participated in AD. A comparison was made with two other sections of the same course at the same college taught by two different instructors. Controlling for prior academic ability, results showed a statistically significant difference in posttest performance in the section that utilized AD but not in the other two sections. Content analysis of students' posts showed that AD at least temporarily corrected students' misconceptions when they were active and consistent; results were mixed for lurkers and passive students. Moreover, correlation analysis showed no relationship between frequency of interaction and performance; however, a significant relationship was found between quality of participation and students' performance as measured by the final exam. Furthermore, no relationship was found between the Community of Inquiry (CoI) presences and students' performance. Students' reflections indicated that they valued the online experience, with benefits related to engagement and collaborative learning.
Obstacles included students' behavior, timing, and the structure of the AD. This may imply that using a structured AD can help in building a community of learners. Instructor presence and facilitation were also necessary to promote deep learning. Future research can build on these findings by replicating the study with a larger sample and a longer period, allowing students to reflect on and discuss any conflicts with their peers.
- Title
- Fast Automatic Bayesian Cubature Using Matching Kernels and Designs
- Creator
- Rathinavel, Jagadeeswaran
- Date
- 2019
- Description
-
Automatic cubatures approximate multidimensional integrals to user-specified error tolerances. In many real-world integration problems, the analytical solution is either unavailable or difficult to compute. To overcome this, one can use numerical algorithms that approximate the value of the integral. For high-dimensional integrals, quasi-Monte Carlo (QMC) methods are very popular. QMC methods are equal-weight quadrature rules where the quadrature points are chosen deterministically, unlike Monte Carlo (MC) methods, where the points are chosen randomly. The families of integration lattice nodes and digital nets are the most popular quadrature points used. These methods treat the integrand as a deterministic function. An alternative approach, called Bayesian cubature, postulates the integrand to be an instance of a Gaussian stochastic process. For high-dimensional problems, it is difficult to adaptively change the sampling pattern, but one can automatically determine the sample size, $n$, given a fixed and reasonable sampling pattern. We take this approach from a Bayesian perspective. We assume a Gaussian process parameterized by a constant mean and a covariance function defined by a scale parameter and a function specifying how the integrand values at two different points in the domain are related. These parameters are estimated from integrand values or are given non-informative priors. This leads to a credible interval for the integral. The sample size, $n$, is chosen to make the credible interval for the Bayesian posterior error no greater than the desired error tolerance. However, the process just outlined typically requires vector-matrix operations with a computational cost of $O(n^3)$. Our innovation is to pair low discrepancy nodes with matching kernels, which lowers the computational cost to $O(n \log n)$.
We begin the thesis by introducing the Bayesian approach to calculating the posterior cubature error and define our automatic Bayesian cubature. Although much of this material is known, it is used to develop the necessary foundations. The major contributions of this thesis include the following: 1) The fast Bayesian transform is introduced; this generalizes the techniques that speed up Bayesian cubature when the kernel matches low discrepancy nodes. 2) The fast Bayesian transform approach is demonstrated using two methods: a) rank-1 lattice sequences with shift-invariant kernels, and b) Sobol' sequences with Walsh kernels. These two methods are implemented as fast automatic Bayesian cubature algorithms in the Guaranteed Automatic Integration Library (GAIL). 3) We develop additional numerical implementation techniques: a) rewriting the covariance kernel to avoid cancellation error, b) gradient descent for hyperparameter search, and c) non-integer kernel order selection. The thesis concludes by applying our fast automatic Bayesian cubature algorithms to three sample integration problems. We show that our algorithms are faster than basic Bayesian cubature and that they provide answers within the error tolerance in most cases. The Bayesian cubatures that we develop are guaranteed for integrands belonging to a cone of functions that reside in the middle of the sample space. The concept of a cone of functions is also explained briefly.
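The $O(n \log n)$ claim rests on a structural fact: with rank-1 lattice nodes and a shift-invariant kernel, the Gram matrix is circulant, so the FFT diagonalizes it. A small sketch using one standard shift-invariant kernel built from the degree-2 Bernoulli polynomial (an illustrative choice; GAIL's kernels are parameterized more generally):

```python
import numpy as np

def rank1_lattice(n, z):
    """Rank-1 lattice nodes x_i = (i * z / n) mod 1 on [0, 1)^d."""
    i = np.arange(n)[:, None]
    return (i * np.asarray(z)[None, :] / n) % 1.0

def kernel(x, y):
    """Shift-invariant product kernel prod_j (1 + B2(|x_j - y_j| mod 1)),
    with B2(t) = t^2 - t + 1/6 the degree-2 Bernoulli polynomial."""
    t = (x - y) % 1.0
    b2 = t * t - t + 1.0 / 6.0
    return np.prod(1.0 + b2, axis=-1)

# With lattice nodes and a shift-invariant kernel, the Gram matrix
# K[i, j] = k(x_i - x_j) depends only on (i - j) mod n, i.e. it is
# circulant and is diagonalized by the FFT: solving K y = f costs
# O(n log n) instead of O(n^3).
n, z = 64, [1, 19]
x = rank1_lattice(n, z)
first_col = kernel(x, x[0])                    # first column determines K
f = np.sin(2 * np.pi * x[:, 0])                # sampled integrand values
lam = np.fft.fft(first_col)                    # eigenvalues of K
y = np.real(np.fft.ifft(np.fft.fft(f) / lam))  # K^{-1} f via two FFTs
```

The generating vector `z = [1, 19]` is an arbitrary small example, not a recommended lattice for practical integration.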
- Title
- Latent Price Model for Market Microstructure: Estimation and Simulation
- Creator
- Yin, Yuan
- Date
- 2023
- Description
-
This thesis focuses on exploring and solving several problems based on partially observed diffusion models. The thesis has two parts. In the first part we present a tractable sufficient condition for the consistency of maximum likelihood estimators (MLEs) in partially observed diffusion models, stated in terms of stationary distributions of the associated test processes, under the assumption that the set of unknown parameter values is finite. We illustrate the tractability of this sufficient condition by verifying it in the context of a latent price model of market microstructure. Finally, we describe an algorithm for computing MLEs in partially observed diffusion models and test it on historical data to estimate the parameters of the latent price model. In the second part we provide a thorough analysis of the particle filtering algorithm for estimating the conditional distribution in partially observed diffusion models. Specifically, we focus on estimating the distribution of unobserved processes using observed data. The algorithm involves several steps and assumptions, which are described in detail. We also examine the convergence of the algorithm and identify sufficient conditions under which it converges. Finally, we derive an explicit upper bound on the convergence rate of the algorithm, which depends on the set of parameters and the choice of time frequency. This bound provides a measure of the algorithm's performance and can be used to optimize its parameters to achieve faster convergence.
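The particle filter analyzed in the second part follows the usual propagate/reweight/resample cycle. A generic bootstrap particle filter for a toy linear-Gaussian state-space model (an illustration of the algorithm family, not the thesis's latent price model):

```python
import numpy as np

def bootstrap_particle_filter(obs, n_particles, a, q, r, rng):
    """Bootstrap particle filter for the toy model
        x_t = a * x_{t-1} + N(0, q),   y_t = x_t + N(0, r).
    Returns the filtered means E[x_t | y_1..t]."""
    particles = rng.normal(0.0, 1.0, n_particles)
    means = []
    for y in obs:
        # propagate particles through the state transition
        particles = a * particles + rng.normal(0.0, np.sqrt(q), n_particles)
        # reweight by the observation likelihood (log-space for stability)
        logw = -0.5 * (y - particles) ** 2 / r
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * particles))
        # multinomial resampling to avoid weight degeneracy
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return np.array(means)
```

For this linear-Gaussian toy model the exact answer is the Kalman filter, which makes a convenient sanity check; the thesis's convergence bounds concern the general diffusion setting where no closed form exists.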
- Title
- Machine Learning On Graphs
- Creator
- He, Jia
- Date
- 2022
- Description
-
Deep learning has revolutionized many machine learning tasks in recent years. Successful applications range from computer vision and natural language processing to speech recognition, among others. The success is partially due to the availability of large amounts of data and fast-growing computing resources (i.e., GPUs and TPUs), and partially due to recent advances in deep learning technology. Neural networks, in particular, have been successfully used to process regular data such as images and videos. However, for many applications with graph-structured data, the irregular structure of graphs means that many powerful operations in deep learning cannot be readily applied. In recent years, there has been growing interest in extending deep learning to graphs. We first propose graph convolutional networks (GCNs) for the task of classification or regression on time-varying graph signals, where the signal at each vertex is given as a time series. An important element of GCN design is filter design. We consider filtering signals in either the vertex (spatial) domain or the frequency (spectral) domain. Two basic architectures are proposed. In the spatial GCN architecture, the GCN uses a graph shift operator as the basic building block to incorporate the underlying graph structure into the convolution layer. The spatial filter directly utilizes the graph connectivity information: it defines the filter to be a polynomial in the graph shift operator to obtain convolved features that aggregate the neighborhood information of each node. In the spectral GCN architecture, a frequency filter is used instead. A graph Fourier transform operator or a graph wavelet transform operator first transforms the raw graph signal to the spectral domain; the spectral GCN then uses the coefficients from the graph Fourier transform or graph wavelet transform to compute the convolved features. The spectral filter is defined using the graph's spectral parameters.
There are additional challenges in processing time-varying graph signals, as the signal value at each vertex changes over time. The GCNs are designed to recognize different spatiotemporal patterns from high-dimensional data defined on a graph. The proposed models have been tested on simulation data and real data for graph signal classification and regression. For the classification problem, we consider the power line outage identification problem using simulation data. The experimental results show that the proposed models can successfully classify abnormal signal patterns and identify the outage location. For the regression problem, we use the New York City bike-sharing demand dataset to predict station-level hourly demand; the prediction accuracy is superior to other models. We next study graph neural network (GNN) models, which have been widely used for learning graph-structured data. Due to the permutation-invariant requirement of graph learning tasks, a basic element in graph neural networks is the invariant and equivariant linear layers. Previous work by Maron et al. (2019) provided a maximal collection of invariant and equivariant linear layers and a simple deep neural network model, called k-IGN, for graph data defined on k-tuples of nodes. It is shown that the expressive power of k-IGN is equivalent to the k-Weisfeiler-Lehman (WL) algorithm in graph isomorphism tests. However, the dimensions of the invariant layer and equivariant layer are the k-th and 2k-th Bell numbers, respectively. Such high complexity makes it computationally infeasible for k-IGNs with k > 3. We show that a much smaller dimension for the linear layers is sufficient to achieve the same expressive power. We provide two sets of orthogonal bases for the linear layers, each with only 3(2k − 1) − k basis elements.
Based on these linear layers, we develop the neural network models GNN-a and GNN-b, and show that for graph data defined on k-tuples of data, GNN-a and GNN-b achieve the expressive power of the k-WL algorithm and the (k + 1)-WL algorithm in graph isomorphism tests, respectively. In molecular prediction tasks on benchmark datasets, we demonstrate that low-order neural network models consisting of the proposed linear layers achieve better performance than other neural network models. In particular, order-2 GNN-b and order-3 GNN-a both have 3-WL expressive power, but use a much smaller basis and hence much less computation time than known neural network models. Finally, we study generative neural network models for graphs. Generative models are often used in semi-supervised or unsupervised learning. We address two types of generative tasks. In the first task, we try to generate a component of a large graph, such as predicting whether a link exists between a pair of selected nodes, or predicting the label of a selected node or edge. The encoder embeds the input graph into a latent vector space via vertex embedding, and the decoder uses the vertex embedding to compute the probability of a link or node label. In the second task, we try to generate an entire graph. The encoder embeds each input graph to a point in the latent space; this is called graph embedding. The generative model then generates a graph from a sampled point in the latent space. Different from previous work, we use the proposed equivariant and invariant layers in the inference model for all tasks. The inference model is used to learn vertex/graph embeddings, and the generative model is used to learn the generative distributions. Experiments on benchmark datasets have been performed for a range of tasks, including link prediction, node classification, and molecule generation.
Experimental results show that the high expressive power of the inference model directly improves the latent space embedding, and hence the generated samples.
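The spectral GCN building block described above rests on the graph Fourier transform: project a graph signal onto the Laplacian eigenbasis, scale each component by a frequency response, and transform back. A minimal fixed-filter sketch (the thesis's layers learn the response rather than fixing it):

```python
import numpy as np

def spectral_graph_filter(A, signal, response):
    """Spectral filtering of a graph signal on a graph with adjacency
    matrix A: transform to the graph Fourier basis (eigenvectors of the
    combinatorial Laplacian), scale by a frequency response evaluated
    at the Laplacian eigenvalues, and transform back."""
    L = np.diag(A.sum(axis=1)) - A           # combinatorial Laplacian
    eigvals, U = np.linalg.eigh(L)           # graph Fourier basis
    s_hat = U.T @ signal                     # graph Fourier transform
    return U @ (response(eigvals) * s_hat)   # filter, then invert
```

On a connected graph, an ideal low-pass response that keeps only the zero eigenvalue projects any signal onto its mean, the graph analogue of keeping only the DC component.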
- Title
- Numerical Analysis and Deep Learning Solver of the Non-local Fokker-Planck Equations
- Creator
- Jiang, Senbao
- Date
- 2022
- Description
-
This thesis is divided into three mutually connected parts. In the first part, we introduce and analyze arbitrarily high-order quadrature rules for evaluating two-dimensional singular integrals of the form \begin{align*} I_{i,j} = \int_{\mathbb{R}^2}\phi(x)\frac{x_ix_j}{|x|^{2+\alpha}} \,dx, \quad 0 < \alpha < 2, \end{align*} where $i,j\in\{1,2\}$ and $\phi\in C_c^N$ for $N\geq 2$. This type of singular integral and its quadrature rule appear in the numerical discretization of the fractional Laplacian in non-local Fokker-Planck equations in 2D. The quadrature rules are trapezoidal rules equipped with correction weights for points around the singularity. We prove that the order of convergence is $2p+4-\alpha$, where $p\in\mathbb{N}_{0}$ is associated with the total number of correction weights, and we present numerical experiments to validate the order of convergence of the proposed modified quadrature rules. In the second part, we propose and analyze a general arbitrarily high-order modified trapezoidal rule for a class of weakly singular integrals of the form $I = \int_{\mathbb{R}^n}\phi(x)s(x)\,dx$ in $n$ dimensions, where $\phi$ and $s$ are the regular and singular parts, respectively. The admissible class requires that $s$ satisfy three hypotheses and is large enough to contain singular kernels of the form $P(x)/|x|^r$, $r > 0$, where $P(x)$ is any monomial with degree strictly less than $r$. The modified trapezoidal rule is the singularity-punctured trapezoidal rule plus correction terms involving correction weights for grid points around the singularity. The correction weights are determined by enforcing that the quadrature rule exactly evaluates certain monomials and solving the corresponding linear systems. A long-standing difficulty of these types of methods is establishing the non-singularity of the linear system, despite strong numerical evidence.
By using an algebraic-combinatorial argument, we show that the non-singularity always holds and prove the general order of convergence of the modified quadrature rule. We present numerical experiments to validate the order of convergence. In the final part, we propose \emph{trapz-PiNN}, a physics-informed neural network incorporating a modified trapezoidal rule, and solve the space-fractional Fokker-Planck equations in 2D and 3D. We verify that the modified trapezoidal rule has second-order accuracy for evaluating the fractional Laplacian. We demonstrate that trapz-PiNNs have high expressive power by predicting solutions with low $\mathcal{L}^2$ relative error on a variety of numerical examples. We also use local metrics, such as point-wise absolute and relative errors, to analyze where the solution could be further improved. We present an effective method for improving the performance of trapz-PiNN on local metrics, provided that physical observations or high-fidelity simulations of the true solution are available. Besides the usual advantages of deep learning solvers, such as adaptivity and mesh-independence, trapz-PiNN is able to solve PDEs with the fractional Laplacian for arbitrary $\alpha\in (0,2)$ and specializes to rectangular domains. It also has the potential to be generalized to higher dimensions.
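The singularity-punctured trapezoidal rule at the core of the abstract above can be sketched in a minimal 1D setting. This illustration is not from the thesis: it omits the correction weights entirely, so convergence is slow, which is exactly the deficiency the corrections repair. For the kernel $|x|^{-\alpha}$ with a Gaussian regular part, the integral has the closed form $\int_{\mathbb{R}}|x|^{-\alpha}e^{-x^2}\,dx=\Gamma\!\big(\tfrac{1-\alpha}{2}\big)$, which serves as the reference value.

```python
import math

def punctured_trapezoid(alpha, h, L=8.0):
    """Singularity-punctured trapezoidal rule for
    I = \\int_R |x|^(-alpha) * exp(-x^2) dx:
    sum h * f(j*h) over j != 0, skipping the singular node at x = 0.
    The Gaussian factor makes truncation at |x| <= L negligible."""
    n = int(L / h)
    total = 0.0
    for j in range(1, n + 1):
        x = j * h
        total += x ** (-alpha) * math.exp(-x * x)
    return 2.0 * h * total  # integrand is even in x

# Reference: Gamma((1 - alpha)/2); for alpha = 1/2 this is Gamma(1/4).
alpha = 0.5
exact = math.gamma((1.0 - alpha) / 2.0)
approx = punctured_trapezoid(alpha, h=0.05)
```

Without correction weights, refining `h` shrinks the error only at the reduced rate caused by the singularity; the thesis's correction weights restore high-order convergence by adding weighted contributions at grid points near the origin.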
- Title
- Modeling, Analysis and Computation of Tumor Growth
- Creator
- Lu, Min-Jhe
- Date
- 2022
- Description
-
In this thesis we investigate the modeling, analysis, and computation of tumor growth. The sharp interface model we consider aims to understand how two key factors influence the dynamics of tumor growth: (1) the mechanical interaction between the tumor cells and their surroundings, and (2) the biochemical reactions in the microenvironment of tumor cells. From this general model we derive its energy formulation and solve it numerically using boundary integral methods and the small-scale decomposition in three different scenarios. The first application is the two-phase Stokes model, in which tumor cells and the extracellular matrix are both assumed to behave like viscous fluids. We compared the effect of membrane elasticity on the tumor interface with that of curvature weakening and found that the latter promotes the development of branching patterns. The second application is the two-phase nutrient model under complex far-field geometries, which represent heterogeneous vascular distributions. Our nonlinear simulations reveal that vascular heterogeneity plays an important role in the development of morphological instabilities, ranging from fingering and chain-like morphologies to compact, plate-like shapes in two dimensions. The third application concerns the effects of angiogenesis, chemotaxis, and the control of necrosis. Our nonlinear simulations reveal the stabilizing effect of angiogenesis and the destabilizing effects of chemotaxis and necrosis on the development of tumor morphological instabilities when the necrotic core is fixed. We also perform a bifurcation analysis for this model. Finally, as future work, we propose new models through the Energetic Variational Approach (EnVarA) to shed light on the modeling issues.
- Title
- Extremal and Enumerative Problems on DP-Coloring of Graphs
- Creator
- Sharma, Gunjan
- Date
- 2024
- Description
-
Graph coloring is the mathematical model for studying problems related to conflict-free allocation of resources. DP-coloring (also known as correspondence coloring) of graphs is a vast generalization of classic graph coloring and of many other coloring concepts studied over the past 150+ years. We study problems in DP-coloring of graphs that combine questions and ideas from the extremal, structural, probabilistic, and enumerative aspects of graph coloring. In particular, we study (i) DP-coloring of Cartesian products of graphs using the DP color function, the DP-coloring counterpart of the chromatic polynomial, and robust criticality, a new notion of graph criticality; (ii) the Shameful Conjecture on the mean number of colors used in a graph coloring, in the context of list coloring and DP-coloring; and (iii) asymptotic bounds on the difference between the chromatic polynomial and the DP color function, as well as the difference between the dual DP color function and the chromatic polynomial, in terms of the cycle structure of a graph. These results respectively give an upper bound and a lower bound on the chromatic polynomial in terms of DP-colorings of a graph.
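For context on the enumerative side of the abstract above: the chromatic polynomial $P(G,m)$ counts the proper $m$-colorings of a graph $G$, and the DP color function is its DP-coloring counterpart. A minimal brute-force sketch (not from the thesis) that evaluates $P(G,m)$ for small graphs:

```python
from itertools import product

def chromatic_count(n, edges, m):
    """Count proper m-colorings of a graph on vertices 0..n-1
    by brute force over all m**n color assignments."""
    return sum(
        all(c[u] != c[v] for u, v in edges)
        for c in product(range(m), repeat=n)
    )

# 4-cycle C4: its chromatic polynomial is P(C4, m) = (m-1)^4 + (m-1).
c4_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
for m in (2, 3, 4):
    assert chromatic_count(4, c4_edges, m) == (m - 1) ** 4 + (m - 1)
```

The thesis's bounds relate exactly such chromatic-polynomial counts to DP color function values, so a brute-force evaluator like this is the natural sanity check on small instances.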