Publications
Below is a list of my publications in reverse chronological order. For each publication, you can access the Abstract for a technical summary, BibTeX for citation format, the URL to view the full paper, and In Simple Terms for an accessible explanation of the work and its significance.
Curvilinear coordinates and curvature in radiative transport
Johannes Krotz, Ryan G. McClarren
Journal of Computational and Theoretical Transport (under review)
We derive a general expression for the streaming term in radiative transport equations and other transport problems when formulated in curvilinear coordinates, emphasizing coordinate systems adapted to the geometry of the domain and the directional dependence of particle transport. By parametrizing the angular variable using a local orthonormal frame, we express directional derivatives in terms of curvature-related quantities that reflect the geometry of underlying spatial manifolds. Our formulation highlights how the interaction between coordinate choices and curvature influences the streaming operator, offering geometric interpretations of its components. The resulting framework offers intuitive insight into when and how angular dependence can be simplified and may guide the selection of coordinate systems that balance analytical tractability and computational efficiency.
@misc{krotz2025curvilinearcoordinatescurvatureradiative,
title={Curvilinear coordinates and curvature in radiative transport},
author={Johannes Krotz and Ryan G. McClarren},
year={2025},
eprint={2508.20852},
archivePrefix={arXiv},
primaryClass={math.NA},
url={https://arxiv.org/abs/2508.20852},
}
When scientists simulate how light, radiation, or particles move through a medium — whether inside a star, in the Earth's atmosphere, or in a nuclear reactor — they use transport equations. At the heart of these equations is a deceptively simple piece: the streaming term, which tracks how particles move forward in space and direction.
This sounds straightforward, but there's a twist. Particles don't always move through nice, flat, box-shaped domains. Often the geometry is curved — like spherical shells around a planet, cylindrical reactors, or more irregular shapes. When the geometry is curved, the math describing particle directions also bends. Choosing the right coordinate system can make the problem much easier — or much harder.
The Core Idea: Curvature Meets Transport
This work tackles the streaming term when the coordinates themselves are curved — so-called curvilinear coordinates. Instead of sticking with the usual Cartesian grid, we allow coordinates that better match the geometry of the problem (spheres, cylinders, ellipses, translating surfaces, etc.).
The key contribution is a unified way to write the directional derivative (how particles stream) in terms of curvature quantities — intuitive geometric measures of how surfaces bend and how directions twist around each other. In plain terms:
- The math of particle transport isn't just about position and direction.
- It's also about how the shape of space (flat vs. curved) changes the apparent motion.
- By expressing the streaming operator via curvatures, we see how geometry and physics interact.
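To make this concrete with a well-known special case (quoted from standard transport theory rather than taken from the paper): in 1D spherically symmetric geometry, with radial coordinate r and direction cosine μ measured against the radial direction, the streaming term of the transport equation reads

μ ∂ψ/∂r + ((1 − μ²)/r) ∂ψ/∂μ.

The second term, carrying the 1/r factor, is the angular-redistribution piece: particles travel in straight lines, but their direction relative to the curved radial coordinate changes, and the size of that correction is set by the curvature of the spherical coordinate surfaces. The paper's framework expresses terms of this kind, in general coordinates, through such curvature quantities.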
Why This Matters
Traditionally, deriving the streaming term in a curved system requires tedious, case-by-case geometry. Each new coordinate choice (spherical, cylindrical, elliptical…) brings its own messy formulas. This framework offers a common language based on curvature, which helps in three ways:
- Simplification: it tells you when certain terms vanish in a given geometry, so the equations get simpler.
- Intuition: instead of pages of indices, you reason with "is the surface flat, curved, or twisting?"
- Practical guidance: it informs coordinate choices that balance tractability and accuracy.
Putting It Into Practice
The paper walks through familiar cases — cylindrical and spherical setups — and also more involved choices like ellipsoids or translating graphs. The general recipe shows:
- Cylinders: several terms drop out because the relevant surfaces are flat and directions don't twist.
- Spheres: curvature naturally appears with clean, symmetric contributions.
- Ellipsoids / translating surfaces: complexity increases, but the curvature view still identifies which pieces matter and why.
Takeaway
The contribution isn't a new solver; it's a geometric perspective that makes derivations clearer and decisions more informed. By seeing transport through the lens of curvature, researchers can:
- avoid unnecessary terms,
- understand the effect of geometry on streaming, and
- choose coordinate systems that make the physics — and the computation — more manageable.
It's a modest but useful step: connecting particle transport — a cornerstone of many physical simulations — more directly with the language of geometry.
A dynamic likelihood approach to filtering transport processes: advection-diffusion dynamics
Johannes Krotz, Juan M. Restrepo, Jorge Ramirez
Journal of Computational Physics
A Bayesian data assimilation scheme is formulated for advection-dominated advective and diffusive evolutionary problems, based upon the Dynamic Likelihood (DLF) approach to filtering. The DLF was developed specifically for hyperbolic problems –waves–, and in this paper, it is extended via a split step formulation, to handle advection-diffusion problems. In the dynamic likelihood approach, observations and their statistics are used to propagate probabilities along characteristics, evolving the likelihood in time. The estimate posterior thus inherits phase information. For advection-diffusion the advective part of the time evolution is handled on the basis of observations alone, while the diffusive part is informed through the model as well as observations. We expect, and indeed show here, that in advection-dominated problems, the DLF approach produces better estimates than other assimilation approaches, particularly when the observations are sparse and have low uncertainty. The added computational expense of the method is cubic in the total number of observations over time, which is on the same order of magnitude as a standard Kalman filter and can be mitigated by bounding the number of forward propagated observations, discarding the least informative data.
@article{KROTZ_2025_DynamicLikelihoodFilter,
title = {A dynamic likelihood approach to filtering transport processes: advection-diffusion dynamics},
journal = {Journal of Computational Physics},
volume = {536},
pages = {114089},
year = {2025},
issn = {0021-9991},
doi = {https://doi.org/10.1016/j.jcp.2025.114089},
url = {https://www.sciencedirect.com/science/article/pii/S0021999125003729},
author = {Johannes Krotz and Juan M. Restrepo and Jorge Ramirez},
keywords = {Data assimilation, Bayesian estimation, Dynamic likelihood filter,
Advection-diffusion, Transport, Kalman filter},
abstract = {A Bayesian data assimilation scheme is formulated for advection-dominated advective
and diffusive evolutionary problems, based upon the Dynamic Likelihood (DLF) approach to
filtering. The DLF was developed specifically for hyperbolic problems –waves–, and in this
paper, it is extended via a split step formulation, to handle advection-diffusion problems.
In the dynamic likelihood approach, observations and their statistics are used to propagate
probabilities along characteristics, evolving the likelihood in time. The estimate posterior
thus inherits phase information. For advection-diffusion the advective part of the time
evolution is handled on the basis of observations alone, while the diffusive part is informed
through the model as well as observations. We expect, and indeed show here, that in
advection-dominated problems, the DLF approach produces better estimates than other
assimilation approaches, particularly when the observations are sparse and have low
uncertainty. The added computational expense of the method is cubic in the total number of
observations over time, which is on the same order of magnitude as a standard Kalman filter
and can be mitigated by bounding the number of forward propagated observations, discarding
the least informative data.}}
Imagine you’re trying to follow the spread of smoke in the air, or how pollutants drift and mix in a river. These processes are driven by two forces: advection (movement along currents, carrying things downstream or with the wind) and diffusion (spreading and blurring out over time). Scientists want to predict these processes as accurately as possible — but the problem is, we never have perfect information. We rely on models (our best guess of the physics) and observations (data from sensors). Blending these two is called data assimilation.
The classic tool for this job is the Kalman Filter, which acts like a smart balance: constantly correcting a model with incoming data. It works well when observations are plentiful and evenly spaced. But in the real world, data is often sparse (only a few sensors, not everywhere all the time) — and this is exactly where traditional methods begin to struggle.
The Core Idea: A Dynamic Likelihood
This paper develops the Dynamic Likelihood Filter (DLF) for advection–diffusion problems. The key innovation is that instead of waiting passively for the next measurement, the DLF projects existing observations forward in time along the natural flow of the system. In other words, it doesn’t just use a measurement once and discard it. It “carries” that information along the path particles or pollutants would naturally follow, generating pseudo-observations at new times and places where sensors may not exist.
For the advective part of the process (the bulk flow), these pseudo-observations are especially powerful. For the diffusive part (the spreading and smoothing), the DLF blends in predictions from the model itself. Together, this allows the filter to maintain not only the amount of what is being transported, but also its location and phase — something traditional methods tend to lose quickly as diffusion blurs the signal.
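As a rough, self-contained illustration of the pseudo-observation idea (a minimal sketch, not the paper's actual implementation): the snippet below carries a single scalar observation forward along a constant-speed advection characteristic, inflates its uncertainty as it ages, and then uses it in a standard scalar Kalman-style update. The speed c, the inflation rate q, and all variable names are assumptions made for illustration only.

def propagate_observation(y_obs, x_obs, t_obs, var_obs, t_now, c=1.0, q=0.05):
    """Carry an observation along the characteristic x(t) = x_obs + c*(t - t_obs).
    Returns the pseudo-observation location, value, and an inflated variance
    (the longer the observation is propagated, the less certain it becomes)."""
    dt = t_now - t_obs
    x_pseudo = x_obs + c * dt      # where the observed information "lives" now
    var_pseudo = var_obs + q * dt  # simple linear variance inflation (illustrative assumption)
    return x_pseudo, y_obs, var_pseudo

def scalar_update(prior_mean, prior_var, y, obs_var):
    """Textbook scalar Bayesian/Kalman update of a forecast with one (pseudo-)observation."""
    gain = prior_var / (prior_var + obs_var)
    return prior_mean + gain * (y - prior_mean), (1.0 - gain) * prior_var

# A measurement taken at x = 0.2, t = 0.0 still informs the state near x = 0.7 at t = 0.5:
x_p, y_p, v_p = propagate_observation(y_obs=1.3, x_obs=0.2, t_obs=0.0, var_obs=0.01, t_now=0.5)
mean, var = scalar_update(prior_mean=1.0, prior_var=0.1, y=y_p, obs_var=v_p)

In the paper, this forward propagation follows the characteristics of the advective part of the dynamics, while the diffusive part is handled through the model in a split-step fashion.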
Why This Matters
The DLF is designed for the kinds of real-world challenges that matter most:
- Sparse but reliable data: a few high-quality sensors instead of dense networks.
- Advection-dominated systems: where currents, winds, or flows move things much more than they diffuse.
- Uncertain models: situations where we know the equations but don’t capture every detail of reality.
These situations arise in meteorology (weather forecasting with limited stations), environmental monitoring (tracking contaminants in rivers or the atmosphere), and engineering (any process where materials or signals are transported through space and time).
What the Results Show
Numerical experiments in the paper show that the DLF consistently outperforms the standard Kalman Filter when advection dominates and data is sparse. It:
- Produces more accurate estimates of both intensity and location.
- Corrects errors in initial conditions more quickly.
- Handles uncertain or slightly wrong models better, staying anchored to reality.
There is a tradeoff: the DLF requires more computation. But the paper demonstrates practical ways to manage this, such as discarding “stale” pseudo-observations once they no longer add value. This keeps the method efficient while retaining its accuracy advantage.
The Bigger Picture
Originally, the Dynamic Likelihood idea was built for wave problems. Extending it to advection–diffusion, one of the most common forms of transport in nature and engineering, shows its broader potential. By projecting observations dynamically, the DLF squeezes more information out of every measurement. It makes data assimilation smarter, more resilient to sparse data, and better at tracking where things actually are.
In short: The Dynamic Likelihood Filter lets us do more with less. It turns limited observations into a continuous source of information, improving forecasts in fields from weather and climate science to pollution monitoring and engineering transport problems.
A hybrid Monte Carlo, discontinuous Galerkin method for linear kinetic transport equations
Johannes Krotz, Cory D. Hauck, Ryan G. McClarren
Journal of Computational Physics
We present a hybrid method for time-dependent particle transport problems that combines Monte Carlo (MC) estimation with deterministic solutions based on discrete ordinates. For spatial discretizations, the MC algorithm computes a piecewise constant solution and the discrete ordinates use bilinear discontinuous finite elements. From the hybridization of the problem, the resulting problem solved by Monte Carlo is scattering free, resulting in a simple, efficient solution procedure. Between time steps, we use a projection approach to “relabel” collided particles as uncollided particles. From a series of standard 2-D Cartesian test problems we observe that our hybrid method has improved accuracy and reduction in computational complexity of approximately an order of magnitude relative to standard discrete ordinates solutions.
@article{KROTZ_2024_HybridMCDG,
title = {A hybrid Monte Carlo, discontinuous Galerkin method for linear kinetic transport equations},
journal = {Journal of Computational Physics},
volume = {514},
pages = {113253},
year = {2024},
issn = {0021-9991},
doi = {https://doi.org/10.1016/j.jcp.2024.113253},
url = {https://www.sciencedirect.com/science/article/pii/S0021999124005011},
author = {Johannes Krotz and Cory D. Hauck and Ryan G. McClarren},
keywords = {Hybrid stochastic-deterministic method, Monte Carlo, Kinetic equations, Particle transport},
abstract = {We present a hybrid method for time-dependent particle transport problems that combines
Monte Carlo (MC) estimation with deterministic solutions based on discrete ordinates. For
spatial discretizations, the MC algorithm computes a piecewise constant solution and the
discrete ordinates use bilinear discontinuous finite elements. From the hybridization of
the problem, the resulting problem solved by Monte Carlo is scattering free, resulting in
a simple, efficient solution procedure. Between time steps, we use a projection approach to
“relabel” collided particles as uncollided particles. From a series of standard 2-D Cartesian
test problems we observe that our hybrid method has improved accuracy and reduction in
computational complexity of approximately an order of magnitude relative to standard discrete
ordinates solutions.}}
Imagine trying to predict how sunlight travels through Earth’s atmosphere, how radiation moves inside a nuclear reactor, or how heat from a laser interacts with a dense plasma. All of these processes involve particles moving, bouncing, and sometimes getting absorbed as they travel through a medium. Modeling these processes accurately is a monumental challenge.
Traditionally, scientists have relied on two approaches: deterministic methods, which translate physics into large systems of equations, and Monte Carlo methods, which simulate particle paths randomly. Each has strengths and weaknesses. Deterministic methods can be precise but become prohibitively expensive at high resolution. Monte Carlo is flexible and natural for complex geometries, but it’s noisy and often requires an enormous number of samples to be reliable.
The Core Idea: Pre- and Post-Collision Particles
The hybrid method presented in this paper blends the two approaches by recognizing that particles behave differently depending on whether they’ve collided yet.
- Pre-collision (uncollided): particles stream like rays of light. Their paths are straightforward, and Monte Carlo can handle them efficiently — essentially like ray tracing.
- Post-collision (collided): once particles scatter, their distribution becomes smoother and more diffuse. A deterministic solver is better suited here, capturing the averaged behavior without tracking every random path.
A clever relabeling step keeps the balance by periodically shifting some particles back into the pre-collision group, ensuring the method stays both accurate and efficient over time.
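In equation form, the split behind this idea (written here for isotropic scattering as a generic illustration; the notation is mine, not copied from the paper) decomposes the angular flux ψ = ψ_u + ψ_c into uncollided and collided parts:

∂ψ_u/∂t + Ω·∇ψ_u + σ_t ψ_u = Q
∂ψ_c/∂t + Ω·∇ψ_c + σ_t ψ_c = σ_s ⟨ψ_u + ψ_c⟩,

where ⟨·⟩ denotes the average over directions, σ_t and σ_s are the total and scattering cross sections, and Q is the external source. The uncollided equation has no scattering term on its right-hand side, which is why Monte Carlo can solve it cheaply and without the cost of simulating scattering; all the scattering lands in the collided equation, which the discontinuous Galerkin / discrete ordinates solver handles. Adding the two equations recovers the full transport equation, and the relabeling step described above amounts to moving the collided density back into the uncollided population at the start of each new time step.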
Results & Takeaways
Across standard benchmark problems, the hybrid method consistently delivered greater accuracy at significantly lower cost. Unlike pure deterministic solvers, it avoided artificial streaks (“ray effects”), and unlike pure Monte Carlo, it didn’t require massive sample counts. Overall, it achieved comparable or better accuracy with about an order of magnitude less computational complexity.
This is more than a technical curiosity: it has direct relevance to nuclear engineering, astrophysics, medical physics, and fusion research. Anywhere particles stream and scatter through matter, this method can provide faster, cleaner, and more reliable simulations.
Looking Ahead
While the current work focuses on single-energy problems, the approach naturally extends to more complex cases, such as multi-energy neutron transport or nonlinear radiative transfer in astrophysics. Future versions may even include adaptive strategies to automatically balance the two solvers as the simulation evolves.
In short: By treating particles differently before and after collisions (Monte Carlo for the sharp, ray-like stage; deterministic solvers for the scattered stage), this hybrid method achieves what neither approach can do alone: faster, more accurate, and more practical simulations of particle transport.
Variable resolution Poisson-disk sampling for meshing discrete fracture networks
Johannes Krotz, Matthew R. Sweeney, Carl W. Gable, Jeffrey D. Hyman, Juan M. Restrepo
Journal of Computational and Applied Mathematics
We present the near-Maximal Algorithm for Poisson-disk Sampling (nMAPS) to generate point distributions for variable resolution Delaunay triangular and tetrahedral meshes in two and three-dimensions, respectively. nMAPS consists of two principal stages. In the first stage, an initial point distribution is produced using a cell-based rejection algorithm. In the second stage, holes in the sample are detected using an efficient background grid and filled in to obtain a near-maximal covering. Extensive testing shows that nMAPS generates a variable resolution mesh in linear run time with the number of accepted points. We demonstrate nMAPS capabilities by meshing three-dimensional discrete fracture networks (DFN) and the surrounding volume. The discretized boundaries of the fractures, which are represented as planar polygons, are used as the seed of 2D-nMAPS to produce a conforming Delaunay triangulation. The combined mesh of the DFN is used as the seed for 3D-nMAPS, which produces conforming Delaunay tetrahedra surrounding the network. Under a set of conditions that naturally arise in maximal Poisson-disk samples and are satisfied by nMAPS, the two-dimensional Delaunay triangulations are guaranteed to only have well-behaved triangular faces. While nMAPS does not provide triangulation quality bounds in more than two dimensions, we found that low-quality tetrahedra in 3D are infrequent, can be readily detected and removed, and a high quality balanced mesh is produced.
@article{KROTZ_2022_PoissonDiskDFN,
title = {Variable resolution Poisson-disk sampling for meshing discrete fracture networks},
journal = {Journal of Computational and Applied Mathematics},
volume = {407},
pages = {114094},
year = {2022},
issn = {0377-0427},
doi = {https://doi.org/10.1016/j.cam.2022.114094},
url = {https://www.sciencedirect.com/science/article/pii/S0377042722000073},
author = {Johannes Krotz and Matthew R. Sweeney and Carl W. Gable and Jeffrey D. Hyman and Juan M. Restrepo},
keywords = {Discrete fracture network, Maximal Poisson-disk sampling, Mesh generation, Conforming Delaunay triangulation},
abstract = {We present the near-Maximal Algorithm for Poisson-disk Sampling (nMAPS) to generate point
distributions for variable resolution Delaunay triangular and tetrahedral meshes in two and
three-dimensions, respectively. nMAPS consists of two principal stages. In the first stage,
an initial point distribution is produced using a cell-based rejection algorithm. In the
second stage, holes in the sample are detected using an efficient background grid and filled
in to obtain a near-maximal covering. Extensive testing shows that nMAPS generates a variable
resolution mesh in linear run time with the number of accepted points. We demonstrate nMAPS
capabilities by meshing three-dimensional discrete fracture networks (DFN) and the surrounding
volume. The discretized boundaries of the fractures, which are represented as planar polygons, are
used as the seed of 2D-nMAPS to produce a conforming Delaunay triangulation. The combined mesh
of the DFN is used as the seed for 3D-nMAPS, which produces conforming Delaunay tetrahedra
surrounding the network. Under a set of conditions that naturally arise in maximal Poisson-disk
samples and are satisfied by nMAPS, the two-dimensional Delaunay triangulations are guaranteed
to only have well-behaved triangular faces. While nMAPS does not provide triangulation quality
bounds in more than two dimensions, we found that low-quality tetrahedra in 3D are infrequent,
can be readily detected and removed, and a high quality balanced mesh is produced.}}
When engineers and geoscientists model how water, oil, gas, or contaminants move underground, they often need to account for cracks in the rock. These fractures form intricate networks that guide where fluids can flow. To simulate this on a computer, you don’t just draw the fractures—you build a mesh: a set of triangles (in 2D) or tetrahedra (in 3D) that fills the fractures and the surrounding rock.
There’s a balancing act. If the mesh is too coarse, you miss important details. If it’s too fine, the simulation can become prohibitively expensive. And if the mesh has poorly shaped elements, numerical solvers can struggle. This paper introduces nMAPS (near-Maximal Algorithm for Poisson-disk Sampling), a way to generate meshes that aim to be both efficient and well-balanced for discrete fracture networks (DFNs).
The Core Idea: Poisson-Disk Sampling
nMAPS starts by sprinkling points using Poisson-disk sampling. Think of scattering seeds so that no two land too close together (to avoid clusters) and the whole area is covered (to avoid gaps). nMAPS extends this with variable resolution: it places more points where fractures intersect, where geometry and gradients are complex, and fewer points farther away, where things are smoother.
After placing points, nMAPS connects them with a Delaunay triangulation to make the mesh. In 2D, this produces triangles that are well shaped for simulation. In 3D, the method includes a practical step to detect and remove "slivers" (poorly shaped, skinny tetrahedra) and then re-triangulates, improving overall mesh quality.
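For intuition, here is a deliberately naive dart-throwing sketch of variable-radius Poisson-disk sampling on the unit square (nMAPS itself uses a cell-based rejection stage plus a background grid to find and fill holes, which is what makes it near-maximal and linear in run time; the resolution function r and every parameter below are illustrative assumptions):

import math, random

def r(x, y):
    # Hypothetical resolution function: finer spacing near a "feature" at y = 0.5
    return 0.02 + 0.08 * abs(y - 0.5)

def naive_variable_poisson_disk(n_candidates=5000, seed=0):
    random.seed(seed)
    points = []
    for _ in range(n_candidates):
        x, y = random.random(), random.random()
        # Accept the candidate only if it keeps its (variable) exclusion distance
        # from every point accepted so far.
        if all(math.hypot(x - px, y - py) >= min(r(x, y), r(px, py)) for px, py in points):
            points.append((x, y))
    return points

pts = naive_variable_poisson_disk()
print(len(pts), "points accepted")  # these points would then be Delaunay-triangulated

The accepted points respect the local spacing r, but this brute-force version is quadratic in cost and, unlike nMAPS, makes no attempt to detect and fill the gaps left behind by rejection.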
Why It Matters
Reliable meshes are essential for trustworthy simulations of fractured rock. By directly generating a variable-resolution point set and then triangulating, nMAPS can reduce unnecessary computational effort while still capturing the features that matter. In tests on synthetic DFNs, the approach produced well-shaped elements in 2D and reduced problematic tetrahedra in 3D. It also compared favorably with existing workflows that refine meshes iteratively, with run times that scale efficiently with the number of points.
This makes nMAPS a practical option for teams modeling groundwater flow, energy extraction, or contaminant transport in fractured media, and it may be useful in other settings that face similar meshing challenges.
In Summary
nMAPS offers a straightforward way to build meshes that balance accuracy, efficiency, and numerical robustness. It doesn’t solve every meshing challenge, but it provides a useful step toward making fracture-network simulations more dependable and accessible.
Theses and Dissertation
Probabilistic and data-driven methods for numerical PDEs
Johannes Krotz
Dissertation
This dissertation consists of three integral self-contained parts. The first part develops a novel Monte Carlo algorithm, called the near-Maximal Algorithm for Poisson-disk Sampling (nMAPS), to efficiently generate the nodes of a high-quality mesh for the calculation of flow and the associated transport of chemical species in low-permeability fractured rock, such as shale and granite. A good mesh balances accuracy requirements with a reasonable computational cost, i.e., it is generated efficiently, dense where necessary for accuracy, and contains no cells that cause instabilities or blown-up errors. Quality bounds for meshes generated through nMAPS are proven, and its efficiency is demonstrated through numerical experiments (see Variable resolution Poisson-disk sampling paper above for complete details).
In the second part, a hybrid deterministic/Monte Carlo method is presented for time-dependent problems based on the physics of particle transport described through the linear Boltzmann equation. The method splits the system into collided and uncollided particles and treats these sets with different methods. Uncollided particles are handled through high-accuracy Monte Carlo methods, while the density of collided particles is calculated using discontinuous Galerkin methods. Theoretical details of the algorithm are developed and shown to be effective through numerical experiments. The labeling as collided and uncollided leverages the respective strengths of these methods, allowing for solutions that are overall more accurate and computationally efficient than either method on its own (see Hybrid Monte Carlo, discontinuous Galerkin method paper above for complete details).
In the last chapter, an extension of the Dynamic Likelihood Filter (DLF) to advection-diffusion equations is presented. The DLF is a Bayesian estimation method specifically designed for wave-related problems. It improves on traditional methods, such as variants of Kalman filters, by using data not only at its time of observation but also at later times, propagating observations forward through time. This enriches the available data and improves predictions and uncertainty estimates. The theory to include diffusion in the framework of the DLF is developed, and it is shown through numerical experiments that the DLF outperforms traditional data assimilation techniques, especially when observations are precise but sparse in space and time (see Dynamic likelihood approach to filtering paper above for complete details).
@phdthesis{krotz2024dissertation,
author = {Johannes Krotz},
title = {Probabilistic and data-driven methods for numerical PDEs},
school = {University of Tennessee, Knoxville},
year = {2024},
type = {Ph.D. dissertation}
}
This dissertation brings together three innovative computational methods that blend probability with traditional numerical approaches to solve challenging problems in science and engineering. Rather than developing entirely new theory from scratch, it demonstrates how smart combinations of existing techniques can lead to significant improvements in computational performance and reliability.
Three Connected Innovations
The dissertation consists of three main chapters, each corresponding to a published paper:
- Chapter 2: Developing better ways to create computational meshes for fractured rock systems (see the Variable resolution Poisson-disk sampling paper above for details)
- Chapter 3: Creating hybrid solvers that combine random sampling with deterministic methods for radiation transport problems (see the Hybrid Monte Carlo, discontinuous Galerkin method paper above for details)
- Chapter 4: Improving how we combine computer models with real-world data to track pollution and other transported substances (see the Dynamic likelihood approach to filtering paper above for details)
The Common Thread: Probabilistic Meets Deterministic
While these three projects tackle different applications — underground fluid flow, environmental monitoring, and radiation transport — they share a common philosophy: combining the flexibility of probabilistic methods with the precision of deterministic approaches. This hybrid thinking allows each method to capture the best of both worlds while avoiding their individual weaknesses.
Real-World Impact
The dissertation demonstrates that computational science doesn't always need revolutionary breakthroughs. Sometimes, thoughtful integration of existing tools can solve practical problems more effectively. Each method has been tested on realistic problems and shown to outperform traditional approaches, making complex simulations more accessible and reliable for engineers and scientists.
In essence: This work shows how probabilistic thinking can enhance traditional computational methods, leading to faster, more accurate, and more robust simulations across diverse fields in science and engineering.
Computer Simulation of Gel Formation in Colloidal Systems of Sticky Rods
Johannes Krotz
Master's Thesis
We develop and validate a simulation framework for colloidal gelation. We first reproduce the benchmark results of Santos, Campanella, and Carignano for spherical, gel-forming particles, then extend the methodology to more complex systems of "sticky" spherocylindrical rods interacting via a Kihara-like potential. Using comprehensive parameter sweeps documented for reproducibility, we analyze the emergence of porous, percolating networks and conduct a topological characterization of the resulting microstructures. This characterization leverages Early TDA to extract multiscale connectivity features and to define topology-driven metrics for automated comparison between simulations and experiments. Our simulations reveal a clear dependence of network formation on rod aspect ratio and particle density, consistent with established theory and, to our knowledge, not previously demonstrated for spherocylindrical colloids with Kihara-type interactions. Rheological probing of the simulated systems shows signatures characteristic of gels, which supports the structural analysis. We further compare our computational results with experimental data obtained on Bastian Trepka's gels collected by Jacob Steindl. Although these first comparisons indicate that the present model is not yet sufficient to quantitatively describe those specific gelled systems, the agreement in qualitative trends and the robustness of our tools suggest strong potential. Overall, the work demonstrates functional, extensible methods for simulating gelation in rod-based colloids, provides topological data analysis based metrics that can aid automated comparison between experiments and simulations, and outlines several promising directions for future refinement and application.
@mastersthesis{krotz2019master,
author = {Johannes Krotz},
title = {Computer Simulation of Gel Formation in Colloidal Systems of Sticky Rods},
school = {Universität Konstanz},
year = {2019},
type = {Master's thesis}
}
This project marked the real beginning of my journey into computational and statistical physics, the world of simulations, stochastic models, and data-driven approaches. It combined physics, mathematics, and computation to study how complex materials form structure and stability from simple microscopic rules.
The Core Idea
When you stir milk into coffee or mix nanoparticles into a liquid, the particles don't always just float around freely — sometimes they self-organize into networks that trap the liquid, forming a gel. Understanding how and why such gelation happens is a major challenge in soft matter physics and materials science.
This work simulated colloidal particles, tiny "sticky" spheres and rods that diffuse through a liquid and can attach to one another. Using Brownian dynamics simulations, it explored how these particles cluster, form networks, and eventually create gel-like structures that behave partly like solids and partly like fluids.
What the Thesis Explored
- From theory to simulation: Developed and extended a C++ simulation framework to model thousands of interacting particles undergoing Brownian motion, governed by stochastic (random) differential equations derived from the Langevin equation (a minimal sketch of this kind of update step follows this list).
- Sticky rods and spheres: Investigated both spherical and rod-shaped (spherocylindrical) colloids, examining how their shape, density, and stickiness influence the formation of extended, percolating networks.
- Rheology and structure: Simulated how these systems respond to small oscillating deformations, measuring the storage and loss moduli, which reveal how solid- or fluid-like a gel behaves over time. Applied topological and statistical analysis to characterize network formation, pore sizes, and the number of "holes" or connected structures within the gels.
- Validation and comparison: Reproduced well-known results for sticky spheres, then successfully extended them to sticky rods. The simulation results were compared with experimental data on EuO-based nanorod gels grown in the lab.
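The stochastic update mentioned in the first bullet can be sketched in a few lines. Below is a bare-bones overdamped Langevin (Euler-Maruyama) step for non-interacting point particles, just to show the type of update the framework is built on; the thesis's C++ code additionally handles rod geometry, the Kihara-type attraction, and rotational degrees of freedom, and every parameter here is an illustrative assumption.

import numpy as np

def brownian_step(positions, forces, dt, D, kT, rng):
    """Overdamped Langevin update: x <- x + (D/kT) * F * dt + sqrt(2*D*dt) * noise."""
    drift = (D / kT) * forces * dt
    noise = rng.normal(scale=np.sqrt(2.0 * D * dt), size=positions.shape)
    return positions + drift + noise

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, size=(1000, 3))   # 1000 point particles in a 10 x 10 x 10 box (illustrative)
for _ in range(100):                          # free diffusion: interaction forces omitted here
    x = brownian_step(x, np.zeros_like(x), dt=1e-3, D=1.0, kT=1.0, rng=rng)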
Why It Matters
The work demonstrated that computational models, blending stochastic dynamics, deterministic integration, and data analysis, can reproduce complex physical behavior like gelation and network formation from first principles.
It helped establish a framework for:
- linking microscopic particle models with macroscopic material properties,
- using statistical measures (cluster size, topology, correlation functions) to extract physical insight, and
- integrating simulations and experiments through quantitative comparison.
A Personal Note
This project was my first deep dive into computational physics — building and debugging simulation codes, analyzing large datasets, and thinking statistically about noisy, nonlinear systems. It was here that stochastic and deterministic modeling came together for me for the first time. From there, the path toward data-driven physical modeling and machine learning for physical systems unfolded naturally.
Gibt es Rotationssymmetrische Fische? (Are there rotationally symmetric fish?)
Johannes Krotz
Bachelor's Thesis (Mathematics)
This bachelor's thesis investigates the existence of rotationally symmetric, homothetically shrinking "fish"-shaped hypersurface networks evolving by the Gaussian curvature flow. Building on prior work that established one-dimensional, lens/fish-shaped networks under curve-shortening/curvature flows, the thesis formulates the rotationally symmetric problem in R^{n+1} and derives three equivalent ordinary differential equations from the geometric PDE using distinct parameterizations (axial graph, angular, and radial graph). The core result is an alternative proof of existence in the one-dimensional case; the higher-dimensional case remains open, though supporting lemmas and numerical experiments suggest feasibility.
The analysis includes (i) equivalence of the derived ODEs and preservation of geometric quantities under reparametrization, (ii) local existence and regularity near the rotation axis, (iii) comparison and barrier arguments to control curvature and intersection behavior with rays from the origin, and (iv) a shooting/mirror construction that enforces the 120 degree junction and orthogonality conditions required at the triple point of the fish network. Numerical solutions illustrate the shape and support the analytical construction. Overall, the work advances understanding of self-similar shrinkers for Gaussian curvature flow in rotational symmetry, fully resolving n=1 and outlining a pathway for n>1.
@bachelorsthesis{krotz2018bachelor,
author = {Johannes Krotz},
title = {Gibt es Rotationssymmetrische Fische?},
school = {Universität Konstanz},
year = {2018},
type = {Bachelor's thesis},
note = {Mathematics}
}
When shapes evolve under geometric rules — like how a soap bubble rounds itself out or a surface shrinks smoothly over time — mathematicians use something called curvature flows to describe this motion. These flows are powerful but can lead to surprising and sometimes whimsical shapes.
This thesis explored a playful question:
Can there exist "fish-shaped" surfaces that shrink smoothly under the Gaussian curvature flow while keeping perfect rotational symmetry?
The "fish" here isn't biological — it's a curve network (or surface) that looks vaguely like a fish outline, made of three smooth arcs meeting at 120° angles. Previous work had shown such one-dimensional (curve) "fish" can exist and shrink homothetically (i.e., maintain shape while uniformly scaling down). This project asked: does that idea extend to higher dimensions?
The Core Idea
The goal was to understand whether rotationally symmetric, self-similarly shrinking surfaces (the higher-dimensional analogs of those fish-shaped curves) could exist under curvature flow. The work involved translating the geometric problem into ordinary differential equations and analyzing these equations for possible solutions.
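To give a flavor of what "translating into ordinary differential equations" means here (a generic scaling argument, not the thesis's specific derivation): for a hypersurface in R^{n+1} moving with normal speed equal to its Gauss curvature, ∂_t X = −K ν, a homothetically shrinking solution X(p, t) = λ(t) Y(p) is only consistent if the fixed profile Y satisfies the time-independent equation

K_Y = c ⟨Y, ν⟩,  with  λ(t) = (1 − (n+1) c t)^{1/(n+1)},

because the Gauss curvature scales like λ^{−n} while the support function ⟨Y, ν⟩ scales like λ. Imposing rotational symmetry reduces this to an ordinary differential equation for a profile curve, and the 120° junction and orthogonality conditions at the triple point become boundary conditions for that ODE.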
What the Thesis Found
- For 1D (curve) cases: the known results were recovered and re-proved with an alternative method.
- For higher dimensions: the equations became more complex. Numerical experiments suggested such shapes might exist, but a full proof remained open.
- Along the way: the work derived several geometric relationships, showing how the curvature flow equations simplify when symmetry is present.
Why It Matters
While the project had a light-hearted title, it trained fundamental skills:
- translating geometric intuition into equations,
- analyzing nonlinear differential systems, and
- understanding the interplay between shape, symmetry, and curvature.
In essence: it was a geometry-for-math's-sake project, a fun, rigorous exercise that honed analytical techniques while exploring how even abstract flows can generate charming, "fish-like" patterns.
Quantum Transport in Topological Insulators - Superconductor Heterostructures
Johannes Krotz
Bachelor's Thesis (Physics)
This thesis investigates quantum transport in two-dimensional time-reversal-invariant topological insulators, focusing on HgTe/CdTe quantum wells described by the Bernevig--Hughes--Zhang (BHZ) model. After reproducing the helical edge states protected against elastic backscattering, I derive an effective one-dimensional edge Hamiltonian and analyze how a localized magnetic (Zeeman) barrier breaks time-reversal symmetry, opens a gap in the edge spectrum, and enables controllable backscattering. In the low-energy limit, the resulting transmission and reflection yield a characteristic conductance suppression set by barrier strength and width.
Placing an s-wave superconductor in proximity to the edge, I project the full Bogoliubov--de Gennes description onto the edge subspace and show that subgap transport is dominated by perfect Andreev reflection at an NS interface and by phase-dependent Andreev bound states in an SNS junction with energies E = |α| cos(φ/2). The corresponding Josephson current exhibits the expected current--phase relation and, under a perpendicular magnetic field, distinct interference patterns that differentiate edge-dominated from bulk-dominated transport.
Finally, I propose transport protocols to extract microscopically the subband g-factors (g_E, g_H) and superconducting order parameters (Δ_E, Δ_H) from their effective edge quantities ĝ and α. By exploiting their linear dependence on a mixing parameter γ (set by BHZ parameters and sample thickness), measurements across devices with varied well thickness can determine these band-resolved couplings.
@bachelorsthesis{krotz2015bachelor,
author = {Johannes Krotz},
title = {Quantum Transport in Topological Insulators - Superconductor Heterostructures},
school = {Universität Konstanz},
year = {2015},
type = {Bachelor's thesis},
note = {Physics}
}
In 2016, the Nobel Prize in Physics was awarded for discoveries related to topological phases of matter, exotic materials whose properties are protected by geometry rather than by ordinary symmetries. A year earlier, this bachelor thesis explored one such system on a smaller, more theoretical scale: topological insulators and their interface with superconductors.
The Core Idea
Topological insulators are materials that act as electrical insulators inside but have perfectly conducting surfaces or edges. Electrons moving along these edges are "locked" to their spin direction, a phenomenon called spin–momentum locking, which protects them from backscattering and disorder. This makes them fascinating for potential applications in spintronics and quantum computing, especially when coupled to superconductors, where Majorana-like quasiparticles may emerge.
The thesis focused on a two-dimensional quantum well made of HgTe/CdTe, the first experimentally realized topological insulator. Using the Bernevig–Hughes–Zhang (BHZ) model, it studied how magnetic barriers and superconducting regions affect the quantum transport along the material's edges.
What the Thesis Explored
- Edge States: It reproduced the characteristic helical edge states of a 2D topological insulator, current-carrying modes that run along the edges but not through the bulk.
- Magnetic Barriers: It showed how applying a magnetic field (which breaks time-reversal symmetry) opens a small energy gap and suppresses the edge conductance.
- Superconducting Proximity Effect: It then added superconductivity to the picture, where electron pairs "leak" from a nearby superconductor into the insulator, and analyzed the Andreev reflection and Josephson effects that arise at such junctions (a textbook version of the resulting current–phase relation is sketched after this list).
- Combined Effects: Finally, it proposed ways to extract physical parameters (like the effective g-factors and superconducting order parameters) experimentally by studying how conductance changes with magnetic field strength and geometry.
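For the superconducting bullet above, the current–phase relation mentioned in the abstract follows directly from the Andreev bound-state energy; this is the standard textbook relation, stated here as an illustration rather than as a new result of the thesis. At zero temperature only the negative-energy branch E_−(φ) = −|α| cos(φ/2) is occupied, and the Josephson current it carries is (up to sign convention)

I(φ) = (2e/ħ) dE_−/dφ = (e|α|/ħ) sin(φ/2),

which is 4π-periodic in φ if the occupation of the bound state is conserved, and reduces to the familiar 2π-periodic relation when quasiparticles can relax between branches near φ = π.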
Why It Matters
While modest in scope, the project explored ideas that later became central to topological quantum materials, an area recognized by the Nobel Prize only a year later. The work connected quantum transport, superconductivity, and topology in one unified framework — themes that continue to influence condensed matter physics and quantum technology.
A Personal Note
This project was carried out at Reykjavík University, Iceland, in collaboration with both Reykjavík University and the University of Konstanz (Germany). Beyond the physics, it was a formative experience — an introduction to research in an inspiring international setting, full of late-night equations and early morning drinks (or vice versa), northern lights, and the humbling beauty of Iceland.