Publications
Below is a list of my publications in reverse chronological order. For each publication, you can access the Abstract for a technical summary, BibTeX for citation format, the URL to view the full paper, and In Simple Terms for an accessible explanation of the work and its significance.
Curvilinear coordinates and curvature in radiative transport
Johannes Krotz, Ryan G. McClarren
Journal of Computational and Theoretical Transport (under review)
We derive a general expression for the streaming term in radiative transport equations and other transport problems when formulated in curvilinear coordinates, emphasizing coordinate systems adapted to the geometry of the domain and the directional dependence of particle transport. By parametrizing the angular variable using a local orthonormal frame, we express directional derivatives in terms of curvature-related quantities that reflect the geometry of underlying spatial manifolds. Our formulation highlights how the interaction between coordinate choices and curvature influences the streaming operator, offering geometric interpretations of its components. The resulting framework offers intuitive insight into when and how angular dependence can be simplified and may guide the selection of coordinate systems that balance analytical tractability and computational efficiency.
@misc{krotz2025curvilinearcoordinatescurvatureradiative, title={Curvilinear coordinates and curvature in radiative transport}, author={Johannes Krotz and Ryan G. McClarren}, year={2025}, eprint={2508.20852}, archivePrefix={arXiv}, primaryClass={math.NA}, url={https://arxiv.org/abs/2508.20852}, }
When scientists simulate how light, radiation, or particles move through a medium — whether inside a star, in the Earth's atmosphere, or in a nuclear reactor — they use transport equations. At the heart of these equations is a deceptively simple piece: the streaming term, which tracks how particles move forward in space and direction.
This sounds straightforward, but there's a twist. Particles don't always move through nice, flat, box-shaped domains. Often the geometry is curved — like spherical shells around a planet, cylindrical reactors, or more irregular shapes. When the geometry is curved, the math describing particle directions also bends. Choosing the right coordinate system can make the problem much easier — or much harder.
The Core Idea: Curvature Meets Transport
This work tackles the streaming term when the coordinates themselves are curved — so-called curvilinear coordinates. Instead of sticking with the usual Cartesian grid, we allow coordinates that better match the geometry of the problem (spheres, cylinders, ellipses, translating surfaces, etc.).
The key contribution is a unified way to write the directional derivative (how particles stream) in terms of curvature quantities — intuitive geometric measures of how surfaces bend and how directions twist around each other. In plain terms:
- The math of particle transport isn't just about position and direction.
- It's also about how the shape of space (flat vs. curved) changes the apparent motion.
- By expressing the streaming operator via curvatures, we see how geometry and physics interact; the classic spherical example below makes this concrete.
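To make this concrete, here is the familiar one-dimensional, spherically symmetric case in standard textbook notation (quoted for context, not taken from the paper's general framework). With $\mu = \Omega \cdot \hat{r}$ the cosine between the particle direction and the radial direction, the streaming term reads

$$\mu \frac{\partial \psi}{\partial r} + \frac{1-\mu^2}{r}\frac{\partial \psi}{\partial \mu}.$$

The factor $1/r$ is exactly the curvature of the spherical shell of radius $r$: as a particle moves along a straight line, its direction relative to the curved coordinate surfaces changes, and the curvature term accounts for that apparent rotation.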
Why This Matters
Traditionally, deriving the streaming term in a curved system requires tedious, case-by-case geometry. Each new coordinate choice (spherical, cylindrical, elliptical…) brings its own messy formulas. This framework offers a common language based on curvature, which helps in three ways:
- Simplification: it tells you when certain terms vanish in a given geometry, so the equations get simpler.
- Intuition: instead of pages of indices, you reason with "is the surface flat, curved, or twisting?"
- Practical guidance: it informs coordinate choices that balance tractability and accuracy.
Putting It Into Practice
The paper walks through familiar cases — cylindrical and spherical setups — and also more involved choices like ellipsoids or translating graphs. The general recipe shows the following (a standard cylindrical form is written out after the list for comparison):
- Cylinders: several terms drop out because the relevant surfaces are flat and directions don't twist.
- Spheres: curvature naturally appears with clean, symmetric contributions.
- Ellipsoids / translating surfaces: complexity increases, but the curvature view still identifies which pieces matter and why.
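For comparison, the standard streaming term in cylindrical (r, z) geometry, written here in one common textbook convention rather than the paper's general notation, is

$$\frac{\mu}{r}\frac{\partial (r\psi)}{\partial r} + \xi \frac{\partial \psi}{\partial z} - \frac{1}{r}\frac{\partial (\eta\psi)}{\partial \omega},$$

with axial direction cosine $\xi$, azimuthal angle $\omega$, $\mu = \sqrt{1-\xi^2}\cos\omega$, and $\eta = \sqrt{1-\xi^2}\sin\omega$. The axial term carries no angular redistribution because the cylinder is flat along its axis; only the radial curvature contributes the $1/r$ terms.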
Takeaway
The contribution isn't a new solver; it's a geometric perspective that makes derivations clearer and decisions more informed. By seeing transport through the lens of curvature, researchers can:
- avoid unnecessary terms,
- understand the effect of geometry on streaming, and
- choose coordinate systems that make the physics — and the computation — more manageable.
It's a modest but useful step: connecting particle transport — a cornerstone of many physical simulations — more directly with the language of geometry.
A dynamic likelihood approach to filtering transport processes: advection-diffusion dynamics
Johannes Krotz, Juan M. Restrepo, Jorge Ramirez
Journal of Computational Physics
A Bayesian data assimilation scheme is formulated for advection-dominated advective and diffusive evolutionary problems, based upon the Dynamic Likelihood (DLF) approach to filtering. The DLF was developed specifically for hyperbolic problems –waves–, and in this paper, it is extended via a split step formulation, to handle advection-diffusion problems. In the dynamic likelihood approach, observations and their statistics are used to propagate probabilities along characteristics, evolving the likelihood in time. The estimate posterior thus inherits phase information. For advection-diffusion the advective part of the time evolution is handled on the basis of observations alone, while the diffusive part is informed through the model as well as observations. We expect, and indeed show here, that in advection-dominated problems, the DLF approach produces better estimates than other assimilation approaches, particularly when the observations are sparse and have low uncertainty. The added computational expense of the method is cubic in the total number of observations over time, which is on the same order of magnitude as a standard Kalman filter and can be mitigated by bounding the number of forward propagated observations, discarding the least informative data.
@article{KROTZ_2025_DynamicLikelihoodFilter, title = {A dynamic likelihood approach to filtering transport processes: advection-diffusion dynamics}, journal = {Journal of Computational Physics}, volume = {536}, pages = {114089}, year = {2025}, issn = {0021-9991}, doi = {https://doi.org/10.1016/j.jcp.2025.114089}, url = {https://www.sciencedirect.com/science/article/pii/S0021999125003729}, author = {Johannes Krotz and Juan M. Restrepo and Jorge Ramirez}, keywords = {Data assimilation, Bayesian estimation, Dynamic likelihood filter, Advection-diffusion, Transport, Kalman filter}, abstract = {A Bayesian data assimilation scheme is formulated for advection-dominated advective and diffusive evolutionary problems, based upon the Dynamic Likelihood (DLF) approach to filtering. The DLF was developed specifically for hyperbolic problems –waves–, and in this paper, it is extended via a split step formulation, to handle advection-diffusion problems. In the dynamic likelihood approach, observations and their statistics are used to propagate probabilities along characteristics, evolving the likelihood in time. The estimate posterior thus inherits phase information. For advection-diffusion the advective part of the time evolution is handled on the basis of observations alone, while the diffusive part is informed through the model as well as observations. We expect, and indeed show here, that in advection-dominated problems, the DLF approach produces better estimates than other assimilation approaches, particularly when the observations are sparse and have low uncertainty. The added computational expense of the method is cubic in the total number of observations over time, which is on the same order of magnitude as a standard Kalman filter and can be mitigated by bounding the number of forward propagated observations, discarding the least informative data.}}
Imagine you’re trying to follow the spread of smoke in the air, or how pollutants drift and mix in a river. These processes are driven by two forces: advection (movement along currents, carrying things downstream or with the wind) and diffusion (spreading and blurring out over time). Scientists want to predict these processes as accurately as possible — but the problem is, we never have perfect information. We rely on models (our best guess of the physics) and observations (data from sensors). Blending these two is called data assimilation.
The classic tool for this job is the Kalman Filter, which acts like a smart balance: constantly correcting a model with incoming data. It works well when observations are plentiful and evenly spaced. But in the real world, data is often sparse (only a few sensors, not everywhere all the time) — and this is exactly where traditional methods begin to struggle.
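For readers who want the formula behind that “smart balance”, the textbook Kalman update (standard background material, not specific to this paper) combines a model forecast $x^f$ with an observation $y$ as

$$x^a = x^f + K\,\bigl(y - Hx^f\bigr), \qquad K = P^f H^\top \bigl(H P^f H^\top + R\bigr)^{-1},$$

where $H$ maps the model state to the observed quantities, $P^f$ is the forecast error covariance, and $R$ is the observation error covariance. The gain $K$ weights the data against the model according to their respective uncertainties, which is exactly the balancing act described above.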
The Core Idea: A Dynamic Likelihood
This paper develops the Dynamic Likelihood Filter (DLF) for advection–diffusion problems. The key innovation is that instead of waiting passively for the next measurement, the DLF projects existing observations forward in time along the natural flow of the system. In other words, it doesn’t just use a measurement once and discard it. It “carries” that information along the path particles or pollutants would naturally follow, generating pseudo-observations at new times and places where sensors may not exist.
For the advective part of the process (the bulk flow), these pseudo-observations are especially powerful. For the diffusive part (the spreading and smoothing), the DLF blends in predictions from the model itself. Together, this allows the filter to maintain not only the amount of what is being transported, but also its location and phase — something traditional methods tend to lose quickly as diffusion blurs the signal.
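As a rough illustration of the forward-projection idea, here is a minimal sketch under simple assumptions: pure one-dimensional advection at a constant speed, with a made-up variance-inflation rule. The function name and details are hypothetical; this is not the paper's code.

```python
# Minimal sketch (illustrative only): carry a point observation forward along the
# characteristics of u_t + c * u_x = 0, producing a pseudo-observation at a later time.
def propagate_observation(x_obs, t_obs, value, variance, c, t_new, inflation=0.0):
    """Advect an observation made at (x_obs, t_obs) to time t_new.

    For pure advection the observed value is conserved along the characteristic
    x(t) = x_obs + c * (t - t_obs); its uncertainty is inflated over time to
    reflect accumulated model error (a simple linear rule, chosen for illustration).
    """
    x_new = x_obs + c * (t_new - t_obs)                 # position on the characteristic
    var_new = variance + inflation * (t_new - t_obs)    # inflated uncertainty
    return x_new, value, var_new

# Example: a reading taken at x = 0.2, t = 0 reappears as a pseudo-observation
# at x = 0.7 when t = 1 for advection speed c = 0.5.
print(propagate_observation(0.2, 0.0, 1.3, 1e-4, c=0.5, t_new=1.0, inflation=1e-5))
```

In the split-step picture described above, these advected pseudo-observations feed the likelihood at the new time, while the diffusive half of the step is handled by the model together with the data.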
Why This Matters
The DLF is designed for the kinds of real-world challenges that matter most:
- Sparse but reliable data: a few high-quality sensors instead of dense networks.
- Advection-dominated systems: where currents, winds, or flows move things much more than they diffuse.
- Uncertain models: situations where we know the equations but don’t capture every detail of reality.
These situations arise in meteorology (weather forecasting with limited stations), environmental monitoring (tracking contaminants in rivers or the atmosphere), and engineering (any process where materials or signals are transported through space and time).
What the Results Show
Numerical experiments in the paper show that the DLF consistently outperforms the standard Kalman Filter when advection dominates and data is sparse. It:
- Produces more accurate estimates of both intensity and location.
- Corrects errors in initial conditions more quickly.
- Handles uncertain or slightly wrong models better, staying anchored to reality.
There is a tradeoff: the DLF requires more computation. But the paper demonstrates practical ways to manage this, such as discarding “stale” pseudo-observations once they no longer add value. This keeps the method efficient while retaining its accuracy advantage.
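The pruning idea can be sketched in a few lines. The ranking rule below (keep the pseudo-observations with the smallest variance) is an illustrative stand-in, not necessarily the paper's exact criterion for “least informative”.

```python
# Hedged sketch of the cost-control step: bound the number of forward-propagated
# observations by keeping only the most informative ones, ranked here by variance.
def prune_pseudo_observations(pseudo_obs, max_count):
    """pseudo_obs: list of (position, value, variance) tuples."""
    ranked = sorted(pseudo_obs, key=lambda obs: obs[2])  # smallest variance first
    return ranked[:max_count]

obs = [(0.7, 1.3, 1e-4), (0.4, 0.9, 5e-3), (0.9, 1.1, 2e-4)]
print(prune_pseudo_observations(obs, max_count=2))       # keeps the two best-constrained
```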
The Bigger Picture
Originally, the Dynamic Likelihood idea was built for wave problems. Extending it to advection–diffusion, one of the most common forms of transport in nature and engineering, shows its broader potential. By projecting observations dynamically, the DLF squeezes more information out of every measurement. It makes data assimilation smarter, more resilient to sparse data, and better at tracking where things actually are.
In short: The Dynamic Likelihood Filter lets us do more with less. It turns limited observations into a continuous source of information, improving forecasts in fields from weather and climate science to pollution monitoring and engineering transport problems.
A hybrid Monte Carlo, discontinuous Galerkin method for linear kinetic transport equations
Johannes Krotz, Cory D. Hauck, Ryan G. McClarren
Journal of Computational Physics
We present a hybrid method for time-dependent particle transport problems that combines Monte Carlo (MC) estimation with deterministic solutions based on discrete ordinates. For spatial discretizations, the MC algorithm computes a piecewise constant solution and the discrete ordinates use bilinear discontinuous finite elements. From the hybridization of the problem, the resulting problem solved by Monte Carlo is scattering free, resulting in a simple, efficient solution procedure. Between time steps, we use a projection approach to “relabel” collided particles as uncollided particles. From a series of standard 2-D Cartesian test problems we observe that our hybrid method has improved accuracy and reduction in computational complexity of approximately an order of magnitude relative to standard discrete ordinates solutions.
@article{KROTZ_2024_HybridMCDG, title = {A hybrid Monte Carlo, discontinuous Galerkin method for linear kinetic transport equations}, journal = {Journal of Computational Physics}, volume = {514}, pages = {113253}, year = {2024}, issn = {0021-9991}, doi = {https://doi.org/10.1016/j.jcp.2024.113253}, url = {https://www.sciencedirect.com/science/article/pii/S0021999124005011}, author = {Johannes Krotz and Cory D. Hauck and Ryan G. McClarren}, keywords = {Hybrid stochastic-deterministic method, Monte Carlo, Kinetic equations, Particle transport}, abstract = {We present a hybrid method for time-dependent particle transport problems that combines Monte Carlo (MC) estimation with deterministic solutions based on discrete ordinates. For spatial discretizations, the MC algorithm computes a piecewise constant solution and the discrete ordinates use bilinear discontinuous finite elements. From the hybridization of the problem, the resulting problem solved by Monte Carlo is scattering free, resulting in a simple, efficient solution procedure. Between time steps, we use a projection approach to “relabel” collided particles as uncollided particles. From a series of standard 2-D Cartesian test problems we observe that our hybrid method has improved accuracy and reduction in computational complexity of approximately an order of magnitude relative to standard discrete ordinates solutions.}}
Imagine trying to predict how sunlight travels through Earth’s atmosphere, how radiation moves inside a nuclear reactor, or how heat from a laser interacts with a dense plasma. All of these processes involve particles moving, bouncing, and sometimes getting absorbed as they travel through a medium. Modeling these processes accurately is a monumental challenge.
Traditionally, scientists have relied on two approaches: deterministic methods, which translate physics into large systems of equations, and Monte Carlo methods, which simulate particle paths randomly. Each has strengths and weaknesses. Deterministic methods can be precise but become prohibitively expensive at high resolution. Monte Carlo is flexible and natural for complex geometries, but it’s noisy and often requires an enormous number of samples to be reliable.
The Core Idea: Pre- and Post-Collision Particles
The hybrid method presented in this paper blends the two approaches by recognizing that particles behave differently depending on whether they’ve collided yet; the standard splitting behind this idea is written out in equations after the list below.
- Pre-collision (uncollided): particles stream like rays of light. Their paths are straightforward, and Monte Carlo can handle them efficiently — essentially like ray tracing.
- Post-collision (collided): once particles scatter, their distribution becomes smoother and more diffuse. A deterministic solver is better suited here, capturing the averaged behavior without tracking every random path.
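In equation form, this is the familiar collided/uncollided splitting, written here in generic textbook notation (assuming isotropic scattering for simplicity) rather than the paper's exact discretization. The angular flux is split as $\psi = \psi_u + \psi_c$, with

$$\partial_t \psi_u + \Omega \cdot \nabla \psi_u + \sigma_t \psi_u = Q,$$
$$\partial_t \psi_c + \Omega \cdot \nabla \psi_c + \sigma_t \psi_c = \frac{\sigma_s}{4\pi} \int_{\mathbb{S}^2} \bigl(\psi_u + \psi_c\bigr)(\Omega')\, d\Omega'.$$

The first equation has no scattering source, which is why Monte Carlo can treat it essentially as ray tracing; all of the scattering appears in the second, smoother equation handled by the deterministic solver.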
A clever relabeling step keeps the balance by periodically shifting some particles back into the pre-collision group, ensuring the method stays both accurate and efficient over time.
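Schematically, at the end of each time step the relabeling amounts to $\psi_u \leftarrow \psi_u + P\,\psi_c$ and $\psi_c \leftarrow 0$, where $P$ stands for a projection of the deterministic (collided) solution onto the representation used by the Monte Carlo particles. This notation is a simplified sketch of the step described above, not the paper's exact operator.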
Results & Takeaways
Across standard benchmark problems, the hybrid method consistently delivered greater accuracy at significantly lower cost. Unlike pure deterministic solvers, it avoided artificial streaks (“ray effects”), and unlike pure Monte Carlo, it didn’t require massive sample counts. Overall, it achieved comparable or better accuracy with about an order of magnitude less computational complexity.
This is more than a technical curiosity: it has direct relevance to nuclear engineering, astrophysics, medical physics, and fusion research. Anywhere particles stream and scatter through matter, this method can provide faster, cleaner, and more reliable simulations.
Looking Ahead
While the current work focuses on single-energy problems, the approach naturally extends to more complex cases, such as multi-energy neutron transport or nonlinear radiative transfer in astrophysics. Future versions may even include adaptive strategies to automatically balance the two solvers as the simulation evolves.
In short: By treating particles differently before and after collisions (Monte Carlo for the sharp, ray-like stage, deterministic solvers for the scattered stage), this hybrid method achieves what neither approach can do alone: faster, more accurate, and more practical simulations of particle transport.
Variable resolution Poisson-disk sampling for meshing discrete fracture networks
Johannes Krotz, Matthew R. Sweeney, Carl W. Gable, Jeffrey D. Hyman, Juan M. Restrepo
Journal of Computational and Applied Mathematics
We present the near-Maximal Algorithm for Poisson-disk Sampling (nMAPS) to generate point distributions for variable resolution Delaunay triangular and tetrahedral meshes in two and three-dimensions, respectively. nMAPS consists of two principal stages. In the first stage, an initial point distribution is produced using a cell-based rejection algorithm. In the second stage, holes in the sample are detected using an efficient background grid and filled in to obtain a near-maximal covering. Extensive testing shows that nMAPS generates a variable resolution mesh in linear run time with the number of accepted points. We demonstrate nMAPS capabilities by meshing three-dimensional discrete fracture networks (DFN) and the surrounding volume. The discretized boundaries of the fractures, which are represented as planar polygons, are used as the seed of 2D-nMAPS to produce a conforming Delaunay triangulation. The combined mesh of the DFN is used as the seed for 3D-nMAPS, which produces conforming Delaunay tetrahedra surrounding the network. Under a set of conditions that naturally arise in maximal Poisson-disk samples and are satisfied by nMAPS, the two-dimensional Delaunay triangulations are guaranteed to only have well-behaved triangular faces. While nMAPS does not provide triangulation quality bounds in more than two dimensions, we found that low-quality tetrahedra in 3D are infrequent, can be readily detected and removed, and a high quality balanced mesh is produced.
@article{KROTZ_2022_PoissonDiskDFN, title = {Variable resolution Poisson-disk sampling for meshing discrete fracture networks}, journal = {Journal of Computational and Applied Mathematics}, volume = {407}, pages = {114094}, year = {2022}, issn = {0377-0427}, doi = {https://doi.org/10.1016/j.cam.2022.114094}, url = {https://www.sciencedirect.com/science/article/pii/S0377042722000073}, author = {Johannes Krotz and Matthew R. Sweeney and Carl W. Gable and Jeffrey D. Hyman and Juan M. Restrepo}, keywords = {Discrete fracture network, Maximal Poisson-disk sampling, Mesh generation, Conforming Delaunay triangulation}, abstract = {We present the near-Maximal Algorithm for Poisson-disk Sampling (nMAPS) to generate point distributions for variable resolution Delaunay triangular and tetrahedral meshes in two and three-dimensions, respectively. nMAPS consists of two principal stages. In the first stage, an initial point distribution is produced using a cell-based rejection algorithm. In the second stage, holes in the sample are detected using an efficient background grid and filled in to obtain a near-maximal covering. Extensive testing shows that nMAPS generates a variable resolution mesh in linear run time with the number of accepted points. We demonstrate nMAPS capabilities by meshing three-dimensional discrete fracture networks (DFN) and the surrounding volume. The discretized boundaries of the fractures, which are represented as planar polygons, are used as the seed of 2D-nMAPS to produce a conforming Delaunay triangulation. The combined mesh of the DFN is used as the seed for 3D-nMAPS, which produces conforming Delaunay tetrahedra surrounding the network. Under a set of conditions that naturally arise in maximal Poisson-disk samples and are satisfied by nMAPS, the two-dimensional Delaunay triangulations are guaranteed to only have well-behaved triangular faces. While nMAPS does not provide triangulation quality bounds in more than two dimensions, we found that low-quality tetrahedra in 3D are infrequent, can be readily detected and removed, and a high quality balanced mesh is produced.}}
When engineers and geoscientists model how water, oil, gas, or contaminants move underground, they often need to account for cracks in the rock. These fractures form intricate networks that guide where fluids can flow. To simulate this on a computer, you don’t just draw the fractures—you build a mesh: a set of triangles (in 2D) or tetrahedra (in 3D) that fills the fractures and the surrounding rock.
There’s a balancing act. If the mesh is too coarse, you miss important details. If it’s too fine, the simulation can become prohibitively expensive. And if the mesh has poorly shaped elements, numerical solvers can struggle. This paper introduces nMAPS (near-Maximal Algorithm for Poisson-disk Sampling), a way to generate meshes that aim to be both efficient and well-balanced for discrete fracture networks (DFNs).
The Core Idea: Poisson-Disk Sampling
nMAPS starts by sprinkling points using Poisson-disk sampling. Think of scattering seeds so that no two land too close together (to avoid clusters) and the whole area is covered (to avoid gaps). nMAPS extends this with variable resolution: it places more points where fractures intersect, where geometry and gradients are complex, and fewer points farther away, where things are smoother.
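The rejection idea can be sketched in a few lines. The sizing function, candidate count, and acceptance rule below are illustrative; nMAPS itself uses a cell-based acceleration grid and a second hole-filling pass that are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_radius(p):
    # Illustrative sizing function: fine spacing near the origin (standing in for
    # a fracture intersection), coarser spacing farther away.
    return 0.02 + 0.08 * np.linalg.norm(p)

accepted = []
for _ in range(3000):                                   # candidate "darts" in the unit square
    p = rng.uniform(0.0, 1.0, size=2)
    # Accept only if the candidate keeps its distance from every accepted point.
    if all(np.linalg.norm(p - q) >= min(local_radius(p), local_radius(q)) for q in accepted):
        accepted.append(p)

points = np.array(accepted)
print(points.shape)                                     # more points where the radius is small
```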
After placing points, nMAPS connects them with a Delaunay triangulation to make the mesh. In 2D, this produces triangles that are well shaped for simulation. In 3D, the method includes a practical step to detect and remove “slivers” (thin, poorly shaped tetrahedra) and then re-triangulates, improving overall mesh quality.
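A minimal sketch of the triangulation and sliver-screening step, assuming SciPy is available. The quality measure used here (tetrahedron volume relative to the longest edge cubed) is one common choice and the threshold is illustrative; neither is necessarily what the paper uses.

```python
import numpy as np
from scipy.spatial import Delaunay

def tet_quality(a, b, c, d):
    # Small values indicate flat, sliver-like tetrahedra.
    volume = abs(np.linalg.det(np.stack([b - a, c - a, d - a]))) / 6.0
    longest = max(np.linalg.norm(e) for e in (b - a, c - a, d - a, c - b, d - b, d - c))
    return volume / longest**3

rng = np.random.default_rng(1)
pts = rng.uniform(size=(200, 3))                         # stand-in for an nMAPS point cloud
tri = Delaunay(pts)                                      # Delaunay tetrahedralization

quality = np.array([tet_quality(*pts[simplex]) for simplex in tri.simplices])
slivers = tri.simplices[quality < 1e-3]                  # flag candidates for removal
print(len(tri.simplices), "tets,", len(slivers), "flagged as potential slivers")
```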
Why It Matters
Reliable meshes are essential for trustworthy simulations of fractured rock. By directly generating a variable-resolution point set and then triangulating, nMAPS can reduce unnecessary computational effort while still capturing the features that matter. In tests on synthetic DFNs, the approach produced well-shaped elements in 2D and reduced problematic tetrahedra in 3D. It also compared favorably with existing workflows that refine meshes iteratively, with run times that scale efficiently with the number of points.
This makes nMAPS a practical option for teams modeling groundwater flow, energy extraction, or contaminant transport in fractured media, and it may be useful in other settings that face similar meshing challenges.
In Summary
nMAPS offers a straightforward way to build meshes that balance accuracy, efficiency, and numerical robustness. It doesn’t solve every meshing challenge, but it provides a useful step toward making fracture-network simulations more dependable and accessible.