Search results
(1–17 of 17)
 Title
 DEVELOPMENTAL CHANGES OF KNOWLEDGE OF MATHEMATICS FOR TEACHING OVER TIME THROUGH PROFESSIONAL DEVELOPMENT
 Creator
 Alahmadi, Reham Abdulrahman
 Date
 2019
 Description

In the past, the knowledge base for effective teaching was measured through presage variables, methods of teaching, process-product research, competency-based teacher education, and professional decision-making. In addition, teachers’ effectiveness has been measured indirectly using proxy measures. For example, teachers were assessed based on their performance on certification exams, the mathematics courses they had taken, and their various experiences. Furthermore, teachers were measured based on students’ achievement tests, using a pretest and posttest model that offered limited knowledge of their development. This study aims to understand teachers’ mathematical knowledge for teaching (MKT), its development over time, and how professional development influences their professional knowledge. Twenty middle school teachers (sixth through eighth grade) participated in content-based (algebra) professional development (PD) in Saudi Arabia. The selection pool targeted middle school (sixth through eighth grade) mathematics teachers representing a variety of years of teaching experience and PD experience. The results of this study found that teachers positively developed their MKT through the professional development program. In particular, based on the results of the pretest, teachers had a low level of MKT before they participated in the PD program. Teachers’ developmental steps were captured during the PD via multiple interviews. These interviews revealed within-teacher themes, cross-teacher themes, and factors that influenced teachers’ changes. Furthermore, a paired-sample t-test showed a significant difference between the means of the pretest and posttest.
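The pretest/posttest comparison described above is a standard paired-sample t-test. A minimal sketch, using hypothetical pre/post MKT scores rather than the study's data:

```python
import math

def paired_t_test(pre, post):
    """Paired-sample t statistic for pre/post scores."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance of differences
    t = mean / math.sqrt(var / n)
    return t, n - 1  # t statistic and degrees of freedom

# Hypothetical pre/post scores for five teachers
pre = [12, 15, 11, 14, 13]
post = [18, 19, 16, 20, 17]
t, df = paired_t_test(pre, post)
```

The resulting t is compared against the t distribution with n − 1 degrees of freedom to judge significance.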
 Title
 Identity and Self-Efficacy Among Mathematically Successful African American Single Mothers in Urban Community College Contexts
 Creator
 Devi, Shavila
 Date
 2019
 Description

This dissertation is a phenomenological, multi-case study of 13 mathematically successful African American single mothers from two urban community colleges in Chicago. While a number of recent studies have focused on Black girls and women in K-12 and university contexts, the community college context remains understudied despite the presence of large numbers of Black women. Moreover, there has been a tendency in mainstream research to normalize failure and to focus on problematic aspects of being a Black single mother or a Black mathematics learner. Bringing together considerations of identity (racial, mathematics, single mother) and mathematics self-efficacy, this study is the first to focus on mathematically successful African American single mothers in the community college context. The following research questions guided the research for this dissertation:
1. How do African American single mothers, who return to study mathematics at the community college and are successful in their courses, narrate their identities and life experiences around race, gender, mathematics learning, and being a mother?
2. How do these women score on the Mathematics Self-Efficacy Scale (MSES), and what sources of and influences on their self-efficacy do they report via interviews?
3. What other factors (intrapersonal and beyond) do these women report as being particularly salient in their mathematics success?
Multiple forms of data (semi-structured interviews, pre- and post-responses to a widely used mathematics self-efficacy survey, and mathematics artifacts) were collected to address the research questions. A cross-case analysis of the data revealed four themes that emerged across the 13 participants. Within-case analyses of three participants reveal how the themes play out in depth for these women. The four themes are (1) strong counter-narratives of being a single mother that resisted dominant and deficit-oriented discourses; (2) education as a key tool and resource to manage and mitigate risks associated with single motherhood; (3) multifaceted stories of resilience to achieve success in mathematics and life; and (4) positive, success-oriented mathematics identities and positive math self-efficacy. This study contributes to an emerging success-oriented literature on Black women and mathematics, and a growing research literature on identity in mathematics education. In surfacing how the participants narrate and negotiate race, gender, and class, this dissertation also contributes to an emerging literature on intersectionality in mathematics education. Results from this study can inform community college administrators and faculty in crafting practice and policy to support African American single mothers in mathematics.
 Title
 A BOUNDARY INTEGRAL METHOD FOR COMPUTING THE FORCES OF MOVING BEADS IN A THREE-DIMENSIONAL LINEAR VISCOELASTIC FLOW
 Creator
 Hernandez, Francisco
 Date
 2019
 Description

Computing the forces acting on particles in fluids is fundamental to understanding particle dynamics and interactions. In this thesis, we study the dynamics of a two-particle system in a three-dimensional linear viscoelastic flow. Using a correspondence principle between unsteady Stokes flow and viscoelastic flow, we reformulate the problem and derive a boundary integral formulation that solves Brinkman's equation in the Fourier domain. We show that the computational cost can be reduced by carefully eliminating the double-layer potential, and that a unique solution can be obtained by desingularizing the equation. We develop a highly accurate numerical integration scheme to evaluate the resulting boundary integrals. We solve the backward problem using this integration scheme together with variable transformations, the generalized minimum residual (GMRES) method, and spherical harmonic interpolation. In particular, the spherical harmonic interpolation ensures that the scheme is highly accurate. Our method also has the advantage of working for both unsteady Stokes and linear viscoelastic flow by appropriately adjusting the oscillation frequency. Our numerical results agree with the exact solution for a single-particle system, as well as with the asymptotic solution for large particle separation in the two-particle system. Last, we analyze the numerical results for high oscillation frequencies and small particle separations. Our numerical method is shown to depend only on the frequency parameter and the distance between the particles. We find that for high frequencies, the forces on the particles behave differently in unsteady Stokes and linear viscoelastic flows.
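Discretizing a desingularized second-kind boundary integral equation typically yields a dense but well-conditioned linear system, which is why a Krylov method such as GMRES converges in few iterations. A toy sketch of that solve; the geometry and kernel below are illustrative stand-ins, not the thesis's formulation:

```python
import numpy as np
from scipy.sparse.linalg import gmres

# Toy stand-in for a desingularized second-kind system (I + K) q = f,
# where K is a smooth, compact-operator-like kernel matrix sampled at
# quadrature nodes on a closed curve. Illustrative only.
n = 40
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
K = 0.1 * np.cos(theta[:, None] - theta[None, :]) / n  # smooth kernel quadrature
A = np.eye(n) + K
f = np.sin(theta)

q, info = gmres(A, f)                  # Krylov iteration; info == 0 on convergence
residual = np.linalg.norm(A @ q - f)
```

Because A is a small perturbation of the identity, GMRES reaches the default tolerance in a handful of iterations without preconditioning.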
 Title
 Learning Stochastic Governing Laws from Noisy Data Using Normalizing Flows
 Creator
 McClure, William Jacob
 Date
 2021
 Description

With the increasing availability of massive collections of data, researchers in all sciences need tools to synthesize useful and pertinent descriptors of the systems they study. Perhaps the most fundamental knowledge of a dynamical system is its governing laws, which describe its evolution through time and can be leveraged for a number of analyses of its behavior. We present a novel technique for learning the infinitesimal generator of a Markovian stochastic process from large, noisy datasets generated by a stochastic system. Knowledge of the generator in turn allows us to find the governing laws for the process. This technique relies on normalizing flows (neural networks that estimate probability densities) to learn the density of time-dependent stochastic processes. We establish the efficacy of this technique on multiple systems with Brownian noise, and use our learned governing laws to perform analysis on one system by solving for its mean exit time. Our approach also allows us to learn other dynamical behaviors, such as escape probability and most probable pathways in a system. The potential impact of this technique is far-reaching, since most stochastic processes in various fields are assumed to be Markovian, and the only restriction for applying our method is the availability of data from a time near the beginning of an experiment or recording.
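For a system whose governing SDE is known, the mean exit time mentioned above can be checked directly by Monte Carlo simulation. A minimal sketch for pure Brownian motion on an interval, where the generator equation gives the exact answer $E[\tau] = a^2 - x_0^2$ (the SDE and parameters are illustrative, not one of the thesis's test systems):

```python
import numpy as np

def mean_exit_time(drift, sigma, x0, a, dt=1e-3, n_paths=2000, seed=0):
    """Monte Carlo mean first-exit time of dX = drift(X) dt + sigma dW
    from the interval (-a, a), starting at x0."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0, dtype=float)
    t = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    while alive.any():
        dw = rng.normal(0.0, np.sqrt(dt), alive.sum())
        x[alive] += drift(x[alive]) * dt + sigma * dw  # Euler-Maruyama step
        t[alive] += dt
        alive &= np.abs(x) < a                         # freeze exited paths
    return t.mean()

# Pure Brownian motion with a = 1, x0 = 0: exact mean exit time is 1.0
tau = mean_exit_time(lambda x: 0.0 * x, 1.0, 0.0, 1.0)
```

The estimate carries a small positive bias from discrete monitoring of the boundary, shrinking as dt decreases.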
 Title
 Modeling the Aerodynamic Response to Impulsive Active Flow Control
 Creator
 Asztalos, Katherine
 Date
 2021
 Description

In unsteady aerodynamics the response to external disturbances can depend significantly on the initial condition, and the extent to which this impacts the ability to model the flowfield can vary. In this work, we develop a model that can capture and predict the long-time response to actuation, which we suspect to be sensitive to the instantaneous state. We investigate whether a physical understanding of the short-time response to impulsive actuation can be obtained, with the goal of understanding the physical phenomena present in the immediate response to this type of actuation. We find that the response to impulsive actuation is sensitive to the instantaneous wake, and that the short-time response is directly proportional to the time rate of change of the actuation input. Computational simulations of a stalled NACA 0009 airfoil subject to leading-edge synthetic jet actuation were performed. Full state information, as well as force response measurements, was collected using an immersed boundary method (IBM) numerical code. The simulations characterized the response to actuation by varying the actuation parameters, such as the strength, direction, and phase at which the onset of actuation occurs. It was found that the long-time response to actuation can be sensitive to the instantaneous wake state at the onset of actuation. The ability to extract models that describe the complex behavior of the system provides additional insight into the dominant features governing the response of such systems, and achieves predictive capability for the system's response. The data-driven models, which are identified using variants of dynamic mode decomposition, can capture both the short- and long-time response of the system to actuation. Predictive models are identified using multiple trajectories of data corresponding to varying the phase of vortex shedding at which the onset of actuation occurs. These models achieve accurate predictions for off-design cases as well. It is also shown that multiple control objectives can be achieved with the same actuator. Classical theory aids in understanding the physics governing unsteady aerodynamic motion and the response to disturbances. Theoretical models are developed using the assumptions of classical unsteady aerodynamic theory, which provide insight into the forms that the data-driven models take. The effect of short-duration momentum-injection actuation is modeled through a combination of source/sink, doublet, and vortex elements. Regardless of the precise elements used in the theoretical model, the lift response is composed of a contribution directly proportional to the rate of change of actuation strength, and a contribution that persists after the actuation burst ends and that arises from the enforcement of the Kutta condition. Methodologies that retain the physics inherent to the system by projecting the governing equations of motion onto a well-suited basis are extremely valuable for gaining physical insight into the dynamics of the flowfield. A new methodology is proposed for extracting spectral content from systems with limited available data using projection-based modeling approaches. There are challenges associated with using modal-decomposition-based modeling techniques for systems exhibiting large transient dynamics due to external inputs, which applies in this particular instance and for related systems. The methodology presented here shows how the dynamics of this system can be understood through analysis of optimal finite-time-horizon transient energy growth, applied to reduced-order models identified from actuation response data with either data-driven or physics-based models. A novel methodology is proposed to guide future experimental actuation design toward maximal response by considering an optimal forcing mode, identified from the optimal perturbation of the full unactuated system, which maximizes a given output.
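Dynamic mode decomposition, the identification tool named above, fits a best-fit linear operator between successive snapshots and eigendecomposes it. A minimal exact-DMD sketch on synthetic data from a single decaying oscillation (not the airfoil data):

```python
import numpy as np

def dmd(X, Xp, r):
    """Exact DMD: fit Xp ~ A X with a rank-r operator and return its
    eigenvalues and modes."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ Xp @ Vh.conj().T / s  # r x r reduced operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = Xp @ Vh.conj().T / s @ W            # exact DMD modes
    return eigvals, modes

# Snapshots of exp(lam * t) sampled at spacing dt: DMD should recover
# discrete eigenvalues exp(lam * dt).
dt = 0.1
lam = -0.1 + 1.0j
z = np.exp(lam * dt * np.arange(100))
snapshots = np.vstack([z.real, z.imag])         # 2 x 100 real snapshot matrix
eigvals, modes = dmd(snapshots[:, :-1], snapshots[:, 1:], r=2)
```

Taking log(eigvals)/dt recovers the continuous-time growth rate and frequency, which is how DMD spectra are usually reported.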
 Title
 Advances in Machine Learning: Theory and Applications in Time Series Prediction
 Creator
 London, Justin J.
 Date
 2021
 Description

A new time series modeling framework for forecasting, prediction, and regime switching with recurrent neural networks (RNNs) is introduced. In this framework, we replace the perceptron with an econometric modeling unit: a cell functionally dedicated to processing the prediction component of the econometric model. These supervised learning methods overcome the parameter estimation and convergence problems of traditional econometric autoregressive (AR) models that use MLE and expectation-maximization (EM) methods, which are computationally expensive, assume linearity and Gaussian-distributed errors, and suffer from the curse of dimensionality. Consequently, due to these estimation problems and the lower number of lags that can be estimated, AR models are limited in their ability to capture long memory or dependencies. On the other hand, plain RNNs suffer from the vanishing gradient problem, which also limits their ability to retain long memory. We introduce a new class of RNN models, the $\alpha$-RNN and dynamic $\alpha_{t}$-RNN, that does not suffer from these problems, by utilizing an exponential smoothing parameter. We also introduce MS-RNNs, MS-LSTMs, and MS-GRUs, novel models that overcome the limitations of Markov-switching AR (MS-AR) models while still enabling regime (Markov) switching and detection of structural breaks in the data. These models have long memory, can handle nonlinear dynamics, and do not require data stationarity or assume error distributions. Thus, they make no assumptions about the data-generating process and can better capture temporal dependencies, leading to better forecasting and prediction accuracy than traditional econometric models and plain RNNs. Moreover, the partial autocorrelation function and econometric tools, such as the ADF, Ljung-Box, and AIC test statistics, can be used to determine optimal sequence lag lengths to input into these RNN models and to diagnose serial correlation. The new framework has the capacity to characterize the nonlinear partial autocorrelation of time series and directly capture dynamic effects such as trends and seasonality. The optimal sequence lag order can greatly influence prediction performance on test data. This structure provides more interpretability to ML models, since traditional econometric models are embedded into RNNs. The ability to embed econometric models into RNNs will allow firms to improve prediction accuracy compared to traditional econometric or traditional ML models by creating a hybrid that combines a well-understood traditional econometric model with an ML model. In theory, the traditional econometric model should focus on the portion of the estimation error that is best managed by a traditional model, and the ML model should focus on the nonlinear portion. This combined structure is a step towards explainable AI and lays the framework for econometric AI.
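The exponential-smoothing idea behind the $\alpha$-RNN can be sketched as a plain recurrent cell whose hidden state is a convex combination of the new tanh update and the previous state; the weights, dimensions, and update below are an illustrative simplification, not the thesis's exact formulation:

```python
import numpy as np

def alpha_rnn(xs, Wx, Wh, b, alpha):
    """Sketch of an exponentially smoothed RNN cell: the candidate state
    h_tilde is the usual tanh update, and the hidden state is smoothed as
    h_t = alpha * h_tilde + (1 - alpha) * h_{t-1}."""
    h = np.zeros(Wh.shape[0])
    for x in xs:
        h_tilde = np.tanh(Wx @ x + Wh @ h + b)   # plain RNN candidate state
        h = alpha * h_tilde + (1.0 - alpha) * h  # smoothing retains long memory
    return h

# Illustrative random weights and a short scalar input sequence
rng = np.random.default_rng(1)
Wx = rng.normal(size=(4, 1))
Wh = 0.5 * rng.normal(size=(4, 4))
b = rng.normal(size=4)
seq = [rng.normal(size=1) for _ in range(20)]
h = alpha_rnn(seq, Wx, Wh, b, alpha=0.3)
```

With alpha near 0 the state changes slowly (long memory); alpha = 1 recovers the plain RNN update, which is exactly the gradient-flow trade-off the abstract describes.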
 Title
 IMPLEMENTING ASYNCHRONOUS DISCUSSION AS AN INSTRUCTIONAL STRATEGY IN THE DEVELOPMENTAL MATHEMATICS COURSES TO SUPPORT STUDENT LEARNING
 Creator
 Zenati, Lynda
 Date
 2020
 Description

Remedial coursework, also known as developmental coursework, is designed to get underprepared students ready for college. Ninety-one percent of colleges offer remedial courses in mathematics and English (Seo, 2014). Evidence suggests that traditional teaching methods do not enable all students to engage with the types of academic literacy constitutive of higher education (Lea and Street, 2006). The popularity of online discussion has been made possible by its availability in most learning management systems (LMSs), which are widely used in higher education (Dahlstrom, Brooks, & Bichsel, 2014). This study examined the use of asynchronous discussion (AD) as an instructional strategy to help alleviate some of the difficulties developmental math students face across different topics. Participants were 15 students enrolled in the Summer 2019 semester at a community college. Results showed that performance increased from pretest to posttest for students who participated in AD. A comparison was made with two other sections of the same course at the same college taught by two different instructors. Controlling for prior academic ability, results showed a statistically significant difference in students’ posttest performance in the section that utilized AD, but not in the other two sections. Content analysis of students' posts showed that the use of AD at least temporarily corrected students’ misconceptions when they were active and consistent. Results were mixed for the lurkers and the passive students. Moreover, correlation analysis showed no relationship for the frequency of interaction; however, a significant relationship was found between the quality of participation and students’ performance as measured by the final exam. Furthermore, no relationship was found between the Community of Inquiry (CoI) presences and students’ performance. Students’ reflections indicated that they valued the online experience. Benefits were related to student engagement and collaborative learning. Obstacles included student behavior, timing, and the structure of the AD. This may imply that using a structured AD can help in building a community of learners. Also, instructor presence and facilitation were necessary to promote deep learning. Future research can build on these findings by replicating the study with a bigger sample size and a longer period, allowing students to reflect on and discuss any conflicts with their peers.
 Title
 Fast Automatic Bayesian Cubature Using Matching Kernels and Designs
 Creator
 Rathinavel, Jagadeeswaran
 Date
 2019
 Description

Automatic cubatures approximate multidimensional integrals to user-specified error tolerances. In many real-world integration problems, the analytical solution is either unavailable or difficult to compute. To overcome this, one can use numerical algorithms that approximately estimate the value of the integral. For high-dimensional integrals, quasi-Monte Carlo (QMC) methods are very popular. QMC methods are equal-weight quadrature rules where the quadrature points are chosen deterministically, unlike Monte Carlo (MC) methods, where the points are chosen randomly. The families of integration lattice nodes and digital nets are the most popular quadrature points used. These methods consider the integrand to be a deterministic function. An alternative approach, called Bayesian cubature, postulates the integrand to be an instance of a Gaussian stochastic process. For high-dimensional problems, it is difficult to adaptively change the sampling pattern, but one can automatically determine the sample size, $n$, given a fixed and reasonable sampling pattern. We take this approach from a Bayesian perspective. We assume a Gaussian process parameterized by a constant mean and a covariance function defined by a scale parameter and a function specifying how the integrand values at two different points in the domain are related. These parameters are estimated from integrand values or are given noninformative priors. This leads to a credible interval for the integral. The sample size, $n$, is chosen to make the credible interval for the Bayesian posterior error no greater than the desired error tolerance. However, the process just outlined typically requires vector-matrix operations with a computational cost of $O(n^3)$. Our innovation is to pair low discrepancy nodes with matching kernels, which lowers the computational cost to $O(n \log n)$. We begin the thesis by introducing the Bayesian approach to calculating the posterior cubature error and define our automatic Bayesian cubature. Although much of this material is known, it is used to develop the necessary foundations. Some of the major contributions of this thesis include the following: 1) The fast Bayesian transform is introduced. This generalizes the techniques that speed up Bayesian cubature when the kernel matches low discrepancy nodes. 2) The fast Bayesian transform approach is demonstrated using two methods: a) rank-1 lattice sequences with shift-invariant kernels, and b) Sobol' sequences with Walsh kernels. These two methods are implemented as fast automatic Bayesian cubature algorithms in the Guaranteed Automatic Integration Library (GAIL). 3) We develop additional numerical implementation techniques: a) rewriting the covariance kernel to avoid cancellation error, b) gradient descent for hyperparameter search, and c) non-integer kernel order selection. The thesis concludes by applying our fast automatic Bayesian cubature algorithms to three sample integration problems. We show that our algorithms are faster than basic Bayesian cubature and that they provide answers within the error tolerance in most cases. The Bayesian cubatures that we develop are guaranteed for integrands belonging to a cone of functions that reside in the middle of the sample space. The concept of a cone of functions is also explained briefly.
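The $O(n \log n)$ pairing of lattice nodes with shift-invariant kernels works because the resulting Gram matrix is circulant, hence diagonalized by the FFT. A one-dimensional toy illustration; the generating vector and kernel below are illustrative, not GAIL's:

```python
import numpy as np

def rank1_lattice(n, z):
    """Rank-1 lattice nodes x_i = frac(i * z / n) in [0, 1)^d."""
    i = np.arange(n)[:, None]
    return (i * np.asarray(z)[None, :] % n) / n

nodes = rank1_lattice(8, [1, 5])  # 8 nodes in [0,1)^2, illustrative vector

# In 1-D, a shift-invariant kernel k(x - y mod 1) sampled on the lattice
# gives a circulant Gram matrix, so its eigenvalues are the FFT of its
# first row -- this is what collapses O(n^3) linear algebra to O(n log n).
n = 8
x = np.arange(n) / n
k = lambda t: 1.0 + 0.5 * np.cos(2 * np.pi * t)  # toy shift-invariant kernel
K = k((x[:, None] - x[None, :]) % 1.0)
eig_fft = np.fft.fft(K[0]).real
```

With the eigenvalues available from one FFT, the kernel solves and determinants needed for the posterior error estimate become fast diagonal operations.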
 Title
 Latent Price Model for Market Microstructure: Estimation and Simulation
 Creator
 Yin, Yuan
 Date
 2023
 Description

This thesis focuses on exploring and solving several problems based on partially observed diffusion models. The thesis has two parts. In the first part, we present a tractable sufficient condition for the consistency of maximum likelihood estimators (MLEs) in partially observed diffusion models, stated in terms of stationary distributions of the associated test processes, under the assumption that the set of unknown parameter values is finite. We illustrate the tractability of this sufficient condition by verifying it in the context of a latent price model of market microstructure. Finally, we describe an algorithm for computing MLEs in partially observed diffusion models and test it on historical data to estimate the parameters of the latent price model. In the second part, we provide a thorough analysis of the particle filtering algorithm for estimating the conditional distribution in partially observed diffusion models. Specifically, we focus on estimating the distribution of unobserved processes using observed data. The algorithm involves several steps and assumptions, which are described in detail. We also examine the convergence of the algorithm and identify sufficient conditions under which it converges. Finally, we derive an explicit upper bound on the convergence rate of the algorithm, which depends on the set of parameters and the choice of time frequency. This bound provides a measure of the algorithm’s performance and can be used to optimize its parameters to achieve faster convergence.
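In its simplest bootstrap form, the particle filtering algorithm analyzed in the second part alternates propagation, reweighting by the observation likelihood, and resampling. A minimal sketch for a one-dimensional latent random walk observed in Gaussian noise (the model and parameters are illustrative, not the latent price model):

```python
import numpy as np

def bootstrap_filter(ys, n_particles, sig_x, sig_y, seed=0):
    """Bootstrap particle filter for X_t = X_{t-1} + N(0, sig_x^2),
    observed as Y_t = X_t + N(0, sig_y^2). Returns filtering means."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)                # draw from the prior
    means = []
    for y in ys:
        x = x + rng.normal(0.0, sig_x, n_particles)      # propagate particles
        logw = -0.5 * ((y - x) / sig_y) ** 2             # observation likelihood
        w = np.exp(logw - logw.max())                    # stabilized weights
        w /= w.sum()
        means.append(float(w @ x))                       # E[X_t | Y_1..Y_t]
        idx = rng.choice(n_particles, n_particles, p=w)  # multinomial resampling
        x = x[idx]
    return means

means = bootstrap_filter([0.0] * 30, 500, sig_x=0.5, sig_y=0.5)
```

The convergence rate bound in the thesis controls how errors of exactly this kind of estimator scale with the number of particles and the observation frequency.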
 Title
 Machine Learning On Graphs
 Creator
 He, Jia
 Date
 2022
 Description

Deep learning has revolutionized many machine learning tasks in recent years. Successful applications range from computer vision and natural language processing to speech recognition. The success is partially due to the availability of large amounts of data and fast-growing computing resources (i.e., GPUs and TPUs), and partially due to recent advances in deep learning technology. Neural networks, in particular, have been successfully used to process regular data such as images and videos. However, for many applications with graph-structured data, many powerful operations in deep learning cannot be readily applied because of the irregular structure of graphs. In recent years, there has been growing interest in extending deep learning to graphs. We first propose graph convolutional networks (GCNs) for the task of classification or regression on time-varying graph signals, where the signal at each vertex is given as a time series. An important element of GCN design is filter design. We consider filtering signals in either the vertex (spatial) domain or the frequency (spectral) domain. Two basic architectures are proposed. In the spatial GCN architecture, the GCN uses a graph shift operator as the basic building block to incorporate the underlying graph structure into the convolution layer. The spatial filter directly utilizes the graph connectivity information: it defines the filter to be a polynomial in the graph shift operator to obtain convolved features that aggregate the neighborhood information of each node. In the spectral GCN architecture, a frequency filter is used instead. A graph Fourier transform operator or a graph wavelet transform operator first transforms the raw graph signal to the spectral domain; the spectral GCN then uses the coefficients from the graph Fourier transform or graph wavelet transform to compute the convolved features. The spectral filter is defined using the graph's spectral parameters. There are additional challenges in processing time-varying graph signals, as the signal value at each vertex changes over time. The GCNs are designed to recognize different spatio-temporal patterns from high-dimensional data defined on a graph. The proposed models have been tested on simulation data and real data for graph signal classification and regression. For the classification problem, we consider the power line outage identification problem using simulation data. The experimental results show that the proposed models can successfully classify abnormal signal patterns and identify the outage location. For the regression problem, we use the New York City bike-sharing demand dataset to predict station-level hourly demand. The prediction accuracy is superior to other models. We next study graph neural network (GNN) models, which have been widely used for learning graph-structured data. Due to the permutation-invariance requirement of graph learning tasks, a basic element in graph neural networks is the invariant and equivariant linear layers. Previous work by Maron et al. (2019) provided a maximal collection of invariant and equivariant linear layers and a simple deep neural network model, called k-IGN, for graph data defined on k-tuples of nodes. It is shown that the expressive power of k-IGN is equivalent to the k-Weisfeiler-Lehman (WL) algorithm in graph isomorphism tests. However, the dimensions of the invariant layer and equivariant layer are the k-th and 2k-th Bell numbers, respectively. Such high complexity makes it computationally infeasible to use k-IGNs with k > 3. We show that a much smaller dimension for the linear layers is sufficient to achieve the same expressive power. We provide two sets of orthogonal bases for the linear layers, each with only 3(2k - 1) - k basis elements. Based on these linear layers, we develop the neural network models GNN-a and GNN-b, and show that for graph data defined on k-tuples of nodes, GNN-a and GNN-b achieve the expressive power of the k-WL algorithm and the (k+1)-WL algorithm in graph isomorphism tests, respectively. In molecular prediction tasks on benchmark datasets, we demonstrate that low-order neural network models consisting of the proposed linear layers achieve better performance than other neural network models. In particular, order-2 GNN-b and order-3 GNN-a both have 3-WL expressive power, but use a much smaller basis and hence much less computation time than known neural network models. Finally, we study generative neural network models for graphs. Generative models are often used in semi-supervised or unsupervised learning. We address two types of generative tasks. In the first task, we try to generate a component of a large graph, such as predicting whether a link exists between a pair of selected nodes, or predicting the label of a selected node or edge. The encoder embeds the input graph into a latent vector space via vertex embedding, and the decoder uses the vertex embedding to compute the probability of a link or node label. In the second task, we try to generate an entire graph. The encoder embeds each input graph to a point in the latent space; this is called graph embedding. The generative model then generates a graph from a sampled point in the latent space. Different from previous work, we use the proposed equivariant and invariant layers in the inference model for all tasks. The inference model is used to learn vertex/graph embeddings, and the generative model is used to learn the generative distributions. Experiments on benchmark datasets have been performed for a range of tasks, including link prediction, node classification, and molecule generation. The experimental results show that the high expressive power of the inference model directly improves latent space embedding, and hence the generated samples.
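The spatial filter described above, a polynomial in the graph shift operator, is simple to state concretely. A minimal sketch on a 4-node path graph with hypothetical filter taps (not a trained model):

```python
import numpy as np

def poly_graph_filter(S, x, h):
    """Spatial graph convolution: y = sum_k h[k] * S^k x, where S is a
    graph shift operator (e.g. adjacency matrix) and h are filter taps."""
    y = np.zeros_like(x, dtype=float)
    Skx = x.astype(float)
    for hk in h:
        y += hk * Skx
        Skx = S @ Skx  # aggregate one more hop of neighborhood information
    return y

# Path graph on 4 nodes, impulse at node 0; a 2-tap filter reaches 1 hop.
S = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 0.0, 0.0, 0.0])
y = poly_graph_filter(S, x, [1.0, 0.5])
```

A filter with K taps mixes information from nodes up to K − 1 hops away, which is the sense in which the polynomial degree controls the receptive field of a spatial GCN layer.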
 Title
 Numerical Analysis and Deep Learning Solver of the Nonlocal FokkerPlanck Equations
 Creator
 Jiang, Senbao
 Date
 2022
 Description

This thesis is divided into three mutually connected parts. ...
This thesis is divided into three mutually connected parts. In the first part, we introduce and analyze arbitrarily high-order quadrature rules for evaluating two-dimensional singular integrals of the form \begin{align*} I_{i,j} = \int_{\mathbb{R}^2}\phi(x)\frac{x_i x_j}{|x|^{2+\alpha}} \,dx, \quad 0< \alpha < 2 \end{align*} where $i,j\in\{1,2\}$ and $\phi\in C_c^N$ for $N\geq 2$. Singular integrals of this type, and their quadrature rules, appear in the numerical discretization of the fractional Laplacian in nonlocal Fokker-Planck equations in 2D. The quadrature rules are trapezoidal rules equipped with correction weights for points around the singularity. We prove that the order of convergence is $2p+4-\alpha$, where $p\in\mathbb{N}_{0}$ is associated with the total number of correction weights. We present numerical experiments to validate the order of convergence of the proposed modified quadrature rules. In the second part, we propose and analyze a general arbitrarily high-order modified trapezoidal rule for a class of weakly singular integrals of the form $I = \int_{\mathbb{R}^n}\phi(x)s(x)\,dx$ in $n$ dimensions, where $\phi$ and $s$ are the regular and singular parts, respectively. The admissible class requires $s$ to satisfy three hypotheses and is large enough to contain singular kernels of the form $P(x)/|x|^r$, $r > 0$, where $P(x)$ is any monomial with degree strictly less than $r$. The modified trapezoidal rule is the singularity-punctured trapezoidal rule plus correction terms involving the correction weights for grid points around the singularity. Correction weights are determined by enforcing that the quadrature rule exactly evaluates certain monomials and solving the corresponding linear systems. A long-standing difficulty of these types of methods is establishing the nonsingularity of the linear system, despite strong numerical evidence.
By using an algebraic-combinatorial argument, we show that the nonsingularity always holds and prove the general order of convergence of the modified quadrature rule. We present numerical experiments to validate the order of convergence. In the final part, we propose \emph{trapzPiNN}, a physics-informed neural network incorporating a modified trapezoidal rule, and solve the space-fractional Fokker-Planck equations in 2D and 3D. We verify that the modified trapezoidal rule has second-order accuracy for evaluating the fractional Laplacian. We demonstrate that trapzPiNNs have high expressive power by predicting solutions with low $\mathcal{L}^2$ relative error on a variety of numerical examples. We also use local metrics, such as pointwise absolute and relative errors, to analyze where the solution could be further improved. We present an effective method for improving the performance of trapzPiNN on local metrics, provided that physical observations or high-fidelity simulations of the true solution are available. Besides the usual advantages of deep learning solvers, such as adaptivity and mesh-independence, trapzPiNN is able to solve PDEs with the fractional Laplacian for arbitrary $\alpha\in (0,2)$ and specializes to rectangular domains. It also has the potential to be generalized to higher dimensions.
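The correction-weight idea described above can be illustrated in one dimension. The sketch below is a simplified analogue, not the thesis's method: the kernel $|x|^{-1/2}$, the interval $[-1,1]$, the single correction weight (enforcing exactness only on the constant monomial), and the test integrand are all illustrative choices.

```python
import numpy as np

def punctured_trapezoid(f, a, b, n):
    """Trapezoidal rule on [a, b] with n subintervals, skipping the
    grid point at x = 0 (where the integrand is singular)."""
    x = np.linspace(a, b, n + 1)
    w = np.full(n + 1, (b - a) / n)
    w[0] *= 0.5
    w[-1] *= 0.5
    mask = np.abs(x) > 1e-12   # puncture: drop the (near-)zero grid point
    return np.sum(w[mask] * f(x[mask]))

# Weakly singular kernel s(x) = |x|^{-1/2} on [-1, 1];
# exact moment integral: int_{-1}^{1} |x|^{-1/2} dx = 4.
s = lambda x: np.abs(x) ** -0.5
n = 200                        # even, so x = 0 is a grid point
# One correction weight at the puncture, determined by enforcing
# exactness on the constant monomial phi(x) = 1 (1x1 "linear system").
w0 = 4.0 - punctured_trapezoid(s, -1.0, 1.0, n)

# Test integrand phi(x) = 1 - x^2; exact integral of phi*s is 16/5.
phi = lambda x: 1.0 - x ** 2
exact = 16.0 / 5.0
plain = punctured_trapezoid(lambda x: phi(x) * s(x), -1.0, 1.0, n)
corrected = plain + w0 * phi(0.0)
print(abs(plain - exact), abs(corrected - exact))
```

Even this single correction weight repairs the large $O(\sqrt{h})$ error that the plain punctured rule incurs near the singularity; the higher-order rules in the thesis add more weights (and larger linear systems) around the singular point.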
 Title
 Modeling, Analysis and Computation of Tumor Growth
 Creator
 Lu, MinJhe
 Date
 2022
 Description

In this thesis we investigate the modeling, analysis and computation of tumor growth. The sharp interface model we consider is aimed at understanding how two key factors, (1) the mechanical interaction between the tumor cells and their surroundings, and (2) the biochemical reactions in the microenvironment of tumor cells, influence the dynamics of tumor growth. From this general model we derive its energy formulation and solve it numerically using boundary integral methods and the small-scale decomposition under three different scenarios. The first application is the two-phase Stokes model, in which tumor cells and the extracellular matrix are both assumed to behave like viscous fluids. We compared the effect of membrane elasticity on the tumor interface with that of curvature weakening, and found that the latter promotes the development of branching patterns. The second application is the two-phase nutrient model under complex far-field geometries, which represent heterogeneous vascular distributions. Our nonlinear simulations reveal that vascular heterogeneity plays an important role in the development of morphological instabilities that range from fingering and chain-like morphologies to compact, plate-like shapes in two dimensions. The third application concerns the effects of angiogenesis, chemotaxis and the control of necrosis. Our nonlinear simulations reveal the stabilizing effects of angiogenesis and the destabilizing ones of chemotaxis and necrosis in the development of tumor morphological instabilities when the necrotic core is fixed. We also perform a bifurcation analysis for this model. In the end, as future work, we propose new models through the Energetic Variational Approach (EnVarA) to shed light on the modeling issues.
 Title
 Choice-Distinguishing Colorings of Cartesian Products of Graphs
 Creator
 Tomlins, Christian James
 Date
 2022
 Description

A coloring $f: V(G)\rightarrow \mathbb N$ of a graph $G$ is said to be \emph{distinguishing} if no non-identity automorphism preserves every vertex color. The distinguishing number, $D(G)$, of a graph $G$ is the smallest positive integer $k$ such that there exists a distinguishing coloring $f: V(G)\rightarrow [k]$; it was introduced by Albertson and Collins in their paper ``Symmetry Breaking in Graphs.'' By restricting which kinds of colorings are considered, many variations of the distinguishing number have been studied. In this paper, we study proper list-colorings of graphs that are also distinguishing and investigate the choice-distinguishing number $\text{ch}_D(G)$ of a graph $G$. Primarily, we focus on the choice-distinguishing number of Cartesian products of graphs. We determine the exact value of $\text{ch}_D(G)$ for lattice graphs and prism graphs, and provide an upper bound on the choice-distinguishing number of the Cartesian product of two relatively prime graphs, assuming a sufficient condition is satisfied. We use this result to bound the choice-distinguishing number of toroidal grids and of the Cartesian product of a tree with a clique. We conclude with a discussion of how, depending on the graphs $G$ and $H$, the sufficient condition needed to bound $\text{ch}_D(G\square H)$ may be weakened.
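The definition of $D(G)$ can be made concrete by brute force on small graphs. This sketch only illustrates the basic distinguishing number, not the list-coloring (choice-distinguishing) machinery of the thesis; the graphs chosen are illustrative.

```python
from itertools import permutations, product

def automorphisms(n, edges):
    """All vertex permutations of {0..n-1} preserving the edge set."""
    E = {frozenset(e) for e in edges}
    return [p for p in permutations(range(n))
            if {frozenset((p[u], p[v])) for u, v in edges} == E]

def distinguishing_number(n, edges):
    """Smallest k admitting a coloring preserved by no non-identity
    automorphism (the Albertson-Collins distinguishing number D(G))."""
    autos = [p for p in automorphisms(n, edges) if p != tuple(range(n))]
    k = 1
    while True:
        for c in product(range(k), repeat=n):
            # c is distinguishing iff every non-identity automorphism
            # moves at least one vertex to a differently colored vertex.
            if all(any(c[p[v]] != c[v] for v in range(n)) for p in autos):
                return k
        k += 1

path3 = [(0, 1), (1, 2)]                   # P3: one reflection to break
cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]  # C4: dihedral group of order 8
print(distinguishing_number(3, path3))     # → 2
print(distinguishing_number(4, cycle4))    # → 3
```

Coloring the two endpoints of $P_3$ differently already kills its only reflection, while no 2-coloring of $C_4$ breaks all eight symmetries, so $D(C_4)=3$.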
 Title
 Extremal and Enumerative Problems on DP-Coloring of Graphs
 Creator
 Sharma, Gunjan
 Date
 2024
 Description

Graph coloring is the mathematical model for studying problems related to conflict-free allocation of resources. DP-coloring (also known as correspondence coloring) of graphs is a vast generalization of classic graph coloring and of many other coloring concepts studied over the past 150+ years. We study problems in DP-coloring of graphs that combine questions and ideas from the extremal, structural, probabilistic, and enumerative aspects of graph coloring. In particular, we study (i) DP-coloring of Cartesian products of graphs using the DP color function, the DP-coloring counterpart of the chromatic polynomial, and robust criticality, a new notion of graph criticality; (ii) the Shameful Conjecture on the mean number of colors used in a graph coloring, in the context of list coloring and DP-coloring; and (iii) asymptotic bounds on the difference between the chromatic polynomial and the DP color function, as well as the difference between the dual DP color function and the chromatic polynomial, in terms of the cycle structure of a graph. These results respectively give an upper bound and a lower bound on the chromatic polynomial in terms of DP colorings of a graph.
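For background, the chromatic polynomial that the DP color function mirrors can be computed by the classical deletion-contraction recurrence $P(G,k) = P(G-e,k) - P(G/e,k)$. This is a generic sketch of that recurrence, not the thesis's DP-coloring machinery.

```python
def chromatic_polynomial(n, edges, k):
    """Number of proper k-colorings of a graph on n vertices, via
    deletion-contraction: P(G, k) = P(G - e, k) - P(G / e, k)."""
    edges = frozenset(frozenset(e) for e in edges if len(set(e)) == 2)
    if not edges:
        return k ** n            # edgeless graph: color vertices freely
    e = next(iter(edges))
    u, v = tuple(e)
    deleted = edges - {e}
    # Contract e: merge v into u, dropping any loops that appear.
    relabel = lambda w: u if w == v else w
    contracted = {frozenset((relabel(a), relabel(b)))
                  for a, b in (tuple(f) for f in deleted)}
    contracted = {f for f in contracted if len(f) == 2}
    return (chromatic_polynomial(n, deleted, k)
            - chromatic_polynomial(n - 1, contracted, k))

# C4: P(C4, k) = (k-1)^4 + (k-1); at k = 3 this gives 16 + 2 = 18.
cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(chromatic_polynomial(4, cycle4, 3))  # → 18
```

The DP color function studied in the abstract plays the role of $P(G,k)$ when ordinary colorings are replaced by DP-colorings over all covers, which is why comparing the two functions in terms of cycle structure yields the stated bounds.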
 Title
 Independence and Graphical Models for Fitting Real Data
 Creator
 Cho, Jason Y.
 Date
 2023
 Description

Given a real-life dataset whose attributes take on categorical values, with a corresponding r(1) × r(2) × … × r(m) contingency table with no zero rows or zero columns, we test the goodness-of-fit of various independence models to the dataset using a variation of Metropolis-Hastings that uses Markov bases as a tool to obtain a Monte Carlo estimate of the p-value. This variation of Metropolis-Hastings can be found in Algorithm 3.1.1. Next we consider the problem: ``out of all possible undirected graphical models, each associated to some graph with m vertices, which one best fits the dataset?'' Here, the m attributes are labeled as vertices of the graph. We would have to conduct 2^(m choose 2) goodness-of-fit tests, since there are 2^(m choose 2) possible undirected graphs on m vertices. Instead, we consider a backwards-selection likelihood-ratio test algorithm. We first start with the complete graph G = K(m) and call the corresponding undirected graphical model ℳ(G) the parent model. Then for each edge e in E(G), we repeatedly apply the likelihood-ratio test to assess the relative fit of the child model ℳ(G−e) vs. the parent model ℳ(G), where ℳ(G−e) ⊆ ℳ(G). More details on this iterative process can be found in Algorithm 4.1.3. For our dataset, we use the alcohol dataset found at https://www.kaggle.com/datasets/sooyoungher/smokingdrinkingdataset, where the four attributes we use are ``Gender'' (male, female), ``Age'', ``Total cholesterol (mg/dL)'', and ``Drinks alcohol or not?''. After testing the goodness-of-fit of three independence models corresponding to the independence statements ``Gender vs. Drink or not?'', ``Age vs. Drink or not?'', and ``Total cholesterol vs. Drink or not?'', we found that the data are consistent with the two independence models corresponding to ``Age vs. Drink or not?'' and ``Total cholesterol vs. Drink or not?''.
After applying the backwards-selection likelihood-ratio method to the alcohol dataset, we found that the data are consistent with the undirected graphical model associated to the complete graph minus the edge {``Total cholesterol'', ``Drink or not?''}.
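The Markov-basis Metropolis-Hastings idea can be sketched for the simplest case of a 2×2 independence test. This is an illustrative Diaconis-Sturmfels-style walk, not the thesis's Algorithm 3.1.1 (which handles general m-way tables); the table of counts here is invented for illustration.

```python
import random

def chi_sq(table):
    """Pearson chi-square statistic against the independence model."""
    r = [sum(row) for row in table]
    c = [sum(col) for col in zip(*table)]
    N = sum(r)
    return sum((table[i][j] - r[i] * c[j] / N) ** 2 / (r[i] * c[j] / N)
               for i in range(len(r)) for j in range(len(c)))

def mh_pvalue(table, steps=20000, seed=0):
    """Monte Carlo p-value via a Metropolis-Hastings walk over tables
    with the same margins, using the single 2x2 Markov-basis move
    (+1 -1 / -1 +1) and the hypergeometric stationary distribution."""
    rng = random.Random(seed)
    t = [row[:] for row in table]
    observed = chi_sq(table)
    hits = 0
    for _ in range(steps):
        i1, i2 = rng.sample(range(len(t)), 2)
        j1, j2 = rng.sample(range(len(t[0])), 2)
        if t[i1][j2] > 0 and t[i2][j1] > 0:
            # Acceptance ratio of the hypergeometric target 1/prod(t_ij!).
            ratio = (t[i1][j2] * t[i2][j1]) / ((t[i1][j1] + 1) * (t[i2][j2] + 1))
            if rng.random() < ratio:
                t[i1][j1] += 1; t[i2][j2] += 1
                t[i1][j2] -= 1; t[i2][j1] -= 1
        hits += chi_sq(t) >= observed
    return hits / steps

strongly_dependent = [[30, 5], [5, 30]]   # made-up counts
p = mh_pvalue(strongly_dependent)
print(p)
```

Because the row and column sums are invariants of the basis move, the chain stays inside the fiber of tables sharing the observed margins, which is exactly what lets Markov bases drive an exact-conditional goodness-of-fit test.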