Tianyu Huang (Tsinghua University, Beijing, China; huang-ty21@mails.tsinghua.edu.cn), Jingwang Ling (Tsinghua University, Beijing, China; lingjw20@mails.tsinghua.edu.cn), Shuang Zhao (University of California, Irvine, Irvine, USA; shz@ics.uci.edu), and Feng Xu (Tsinghua University, Beijing, China; xufeng2003@gmail.com)

(2025)

###### Abstract.

In recent years, Monte Carlo PDE solvers have garnered increasing attention in computer graphics, demonstrating value across a wide range of applications. Despite offering clear advantages over traditional methods—such as avoiding discretization and enabling local evaluations—Monte Carlo PDE solvers face challenges due to their stochastic nature, including high variance and slow convergence rates. To mitigate the variance issue, we draw inspiration from Monte Carlo path tracing and apply the path guiding technique to the Walk on Stars estimator. Specifically, we examine the target sampling distribution at each step of the Walk on Stars estimator, parameterize it, and introduce neural implicit representations to model the spatially-varying guiding distribution. This path guiding approach is implemented in a *wavefront*-style PDE solver, and experimental results demonstrate that it effectively reduces variance in Monte Carlo PDE solvers.

copyright: acmlicensed
journalyear: 2025
doi: XXXXXXX.XXXXXXX
isbn: 978-1-4503-XXXX-X/18/06
ccs: Mathematics of computing → Partial differential equations
ccs: Computing methodologies → Rendering
ccs: Mathematics of computing → Probabilistic algorithms

## 1. Introduction

Monte Carlo PDE solvers have gained significant attention in graphics research over recent years. Initial investigations by Sawhney and Crane (2020) applied the Walk on Spheres algorithm (Muller, 1956) to the Poisson equation. Subsequently, this approach was extended to the Walk on Stars algorithm (Sawhney et al., 2023; Simonov, 2008; Miller et al., 2024b). Monte Carlo PDE solvers have demonstrated significant value across a wide range of applications, such as approximate volume rendering (Qi et al., 2022), fluid simulation (Rioux-Lavoie et al., 2022; Jain et al., 2024), heat simulation (de Lambilly et al., 2023), robotics (Muchacho and Pokorny, 2024), machine learning (Nam et al., 2024), shape modeling (de Goes and Desbrun, 2024), and infrared rendering (Bati et al., 2023).

Unlike traditional grid-based methods, Monte Carlo PDE solvers operate independently of discretized grids, effectively circumventing issues related to quantization errors, limited geometry processing speeds, and poorly constructed grids. This flexibility enables these solvers to operate on nearly any geometric representation. Moreover, Monte Carlo PDE solvers allow for local evaluation, which can significantly reduce computational cost compared to traditional methods that require a global solve. However, due to their stochastic nature, Monte Carlo PDE solvers face challenges, especially slow convergence and the need for a large sample count to achieve accurate solutions. In contrast, Monte Carlo path tracing, which has been widely studied for simulating light transport, has accumulated a range of effective variance reduction techniques, including bidirectional methods (Lafortune and Willems, 1993; Veach and Guibas, 1995a), radiance caching (Müller et al., 2021; Krivanek et al., 2005), and, more recently, path guiding. Path guiding, in particular, reduces the variance of Monte Carlo sampling by learning and importance sampling the target distribution at shading points. Unlike bidirectional methods and radiance caching, path guiding is conceptually straightforward and can be easily integrated into existing integrators, introducing no additional sampling bias. Traditional path guiding methods (Müller et al., 2017; Vorba et al., 2014; Herholz et al., 2019; Reibold et al., 2018; Rath et al., 2020) use discrete data structures to store distributions. In the era of deep learning, modern approaches increasingly leverage neural networks to model the distribution. Notably, methods using neural networks to represent spatially-varying parametric distributions (Dong et al., 2023; Huang et al., 2024) have demonstrated exceptional performance in path guiding tasks.

Inspired by path guiding techniques from Monte Carlo path tracing in the rendering domain, we introduce the path guiding method into the Walk on Stars estimator, the estimator primarily used in current Monte Carlo PDE solvers. We examine the target distribution necessary for importance sampling the next step in the Walk on Stars estimator. Based on this analysis, we fit the target distribution using a von Mises-Fisher mixture distribution and employ implicit neural representations to represent the guiding distribution in space. We also explore how to make our method more robust by employing multiple importance sampling (Veach and Guibas, 1995b) alongside the original uniform sampling. Finally, drawing from *wavefront*-style designs (Laine et al., 2013) in renderers, we implement our approach in a *wavefront*-style Monte Carlo PDE solver for experimental validation. Experimental results show that our method outperforms the original Walk on Stars algorithm in both qualitative and quantitative metrics. We discuss the unique advantages of our method compared to existing variance reduction techniques for Monte Carlo PDE solvers, as well as the potential for alternative implementation strategies.

## 2. Related Work

### 2.1. Monte Carlo PDE Solvers

The recent exploration of Monte Carlo methods for solving partial differential equations (PDEs) has garnered significant attention in the graphics community. Compared to traditional methods such as the finite element method (FEM) and the finite difference method (FDM), Monte Carlo PDE solvers circumvent the challenges of mesh generation, offering both generality and performance advantages. The pioneering work, Monte Carlo Geometry Processing (Sawhney and Crane, 2020), revisited the Walk on Spheres (WoS) algorithm (Muller, 1956) for solving linear elliptic equations with Dirichlet boundary conditions, which was later extended to the Walk on Stars (WoSt) algorithm (Sawhney et al., 2023; Simonov, 2008; Ermakov and Sipin, 2009) to handle Neumann and Robin (Miller et al., 2024b) boundary conditions. Under the WoS and WoSt frameworks, the methods were further generalized to address problems with spatially varying coefficients (Sawhney et al., 2022), surface PDEs (Sugimoto et al., 2024b), and infinite domains (Nabizadeh et al., 2021). Monte Carlo PDE solvers have demonstrated broad applicability in both forward (Jain et al., 2024; Rioux-Lavoie et al., 2022; de Lambilly et al., 2023; Muchacho and Pokorny, 2024; Nam et al., 2024; de Goes and Desbrun, 2024; Bati et al., 2023) and inverse (Yu et al., 2024; Miller et al., 2024a; Yilmazer et al., 2024) problems. In parallel with WoS and WoSt, the Walk on Boundary method (Sugimoto et al., 2023), another Monte Carlo solver for PDEs, has been revisited and applied to fluid simulations (Sugimoto et al., 2024a). Similar to Monte Carlo path tracing, Monte Carlo PDE solvers face the challenges of slow convergence and high variance. Various methods, including neural caches (Li et al., 2023), bidirectional formulations (Qi et al., 2022), and boundary value caching (Miller et al., 2023), have been proposed to address these issues.
Our proposed path guiding method is orthogonal to these existing approaches, offering a solution that can be used independently or potentially in conjunction with them to effectively reduce variance.

### 2.2. Path Guiding in Rendering

In rendering, path guiding is a variance reduction technique employed in Monte Carlo path tracing through the use of importance sampling. In each step of path tracing, a new ray is generated in a certain direction, and the sampling distribution for generating the next ray is a directional distribution defined on the 2-sphere $\mathbb{S}^{2}$. Local path guiding reduces variance by learning the radiance distribution over this 2-sphere in an online manner, directing the sampler to perform importance sampling based on this learned distribution. Research in path guiding primarily focuses on how to model the distribution and how to store the spatially-varying distribution across space. Early attempts in this domain include constructing spatially cached histograms(Jensen, 1995), cones(Hey and Purgathofer, 2002) or Gaussian mixtures(Vorba etal., 2014). A well-known recent work is Müller etal. (2017)’s Practical Path Guiding, which utilizes SD-trees to implement path guiding suitable for production environments. Subsequent works have considered volume rendering(Herholz etal., 2019), caustics(Li etal., 2022; Fan etal., 2023), path space(Reibold etal., 2018), and physics-based differentiable rendering(Fan etal., 2024). In the deep learning era, path guiding methods based on neural networks have also been explored, such as employing convolutional neural networks to reconstruct radiance fields(Huo etal., 2020; Zhu etal., 2021) or using neural networks to model complex distributions(Müller etal., 2019). Recently, methods utilizing neural implicit representations to model parameterized guiding distribution in space(Huang etal., 2024; Dong etal., 2023) have emerged as the state-of-the-art in the field of path guiding. These methods ensure strong real-time performance while easily avoiding the parallax issues inherent in traditional discrete spatial storage structures. 
Notably, radiance distributions in Monte Carlo path tracing have predominantly been studied within the context of 3D space. In our work, we model the distribution on the $(d-1)$-sphere $\mathbb{S}^{d-1}$ in $\mathbb{R}^{d}$, thereby offering a path guiding solution for Monte Carlo PDE solvers in $d$-dimensional space.

## 3. Preliminary

### 3.1. Linear Elliptic Equations

Monte Carlo PDE solvers primarily target linear elliptic equations, which encompass a wide variety of forms. However, the structure of the corresponding Walk on Stars estimator remains largely consistent across different formulations. Extended walking behaviors in certain scenarios(Sawhney etal., 2022) do not require modifications to our method, as we are solely concerned with selecting the *optimal sampling direction* at each step. The strategy for choosing the sampling direction remains consistent across all linear elliptic equations. For simplicity, in our method, we focus on the most typical case—the Poisson equation with Dirichlet and Neumann boundary conditions:

(1)
$$\begin{aligned}\Delta u(x)&=f(x)&&\text{on }\Omega,\\ u(x)&=g(x)&&\text{on }\partial\Omega_{\text{D}},\\ \frac{\partial u(x)}{\partial n_{x}}&=h(x)&&\text{on }\partial\Omega_{\text{N}}.\end{aligned}$$

$\Omega\subset\mathbb{R}^{d}~{}(d=2,3,...)$ denotes the domain of interest, $u:\Omega\rightarrow\mathbb{R}$ is the unknown solution to be determined, and $f:\Omega\rightarrow\mathbb{R}$ represents the source term. The boundary of $\Omega$ is divided into $\partial\Omega_{D}$ and $\partial\Omega_{N}$, corresponding to the Dirichlet and Neumann boundary conditions, respectively.

A recent study (Miller et al., 2024b) has extended the Walk on Stars estimator to handle Robin boundary conditions with a new equation:

(2) $\frac{\partial u(x)}{\partial n_{x}}-\mu(x)u(x)=r(x)\quad\text{on }\partial\Omega_{\text{R}}.$

However, the only modification required to solve the Poisson equation under Robin boundary conditions with the Walk on Stars estimator is the introduction of a *contraction* of the star-shaped region. This contraction is not related to the sampling process but is instead computed via deterministic geometric queries, thus not affecting our method. For simplicity, we consider only the case described in eq.1.

### 3.2. Walk on Stars Estimator

The solution $u(x_{k})$ at any point $x_{k}$ for eq.1 can be obtained using the following one-sample Walk on Stars estimator:

(3)
$$\langle u(x_{k})\rangle=\frac{P^{B}(x_{k},x_{k+1})\,\langle u(x_{k+1})\rangle}{\alpha(x_{k})\,p^{\partial\text{St}(x_{k},r)}(x_{k+1})}-\frac{G^{B}(x_{k},z_{k+1})\,h(z_{k+1})}{\alpha(x_{k})\,p^{\partial\text{StN}(x_{k},r)}(z_{k+1})}+\frac{G^{B}(x_{k},y_{k+1})\,f(y_{k+1})}{\alpha(x_{k})\,p^{\text{St}(x_{k},r)}(y_{k+1})}.$$

Here, $B$ is a spherical region centered at $x_{k}$ with radius $r$, $\mathrm{St}$ represents the star-shaped region obtained as the intersection of $B$ and $\Omega$, and $\partial\mathrm{StN}$ denotes the Neumann boundary of $\mathrm{St}$. Sawhney et al. (2023) propose an algorithm to select $r$ such that the region $\mathrm{St}$ remains star-shaped: the radius $r$ equals the smaller of the distance from $x_{k}$ to the nearest Dirichlet boundary and the distance from $x_{k}$ to the nearest Neumann silhouette, so that any ray originating from $x_{k}$ intersects either the Neumann boundary or the Dirichlet boundary *at most once*. The parameter $\alpha(x_{k})$ is set to $1$ if $x_{k}$ lies within $\mathrm{St}$, $1/2$ if it lies on the boundary of $\mathrm{St}$, and $0$ if it lies outside $\mathrm{St}$. $G^{B}$ denotes the Green's function defined over the sphere $B$, while $P^{B}$, the Poisson kernel on $B$, is defined as $P^{B}=\partial G^{B}/\partial n$. The symbol $p$ represents the sampling probability of the Monte Carlo estimator. An illustration of this one-sample Monte Carlo estimator can be found in fig.1.

In a complete *walk*, the estimator begins from an arbitrary probe point within $\Omega$; at each *step*, the estimator performs up to three sampling operations:

1. Generating the next-step sample point $x_{k+1}$.
2. Generating the source sample point $y_{k+1}$, if a source term exists in the star-shaped region.
3. Generating the sample point $z_{k+1}$ on the Neumann boundary $\partial\mathrm{StN}$, if a Neumann boundary exists in the spherical region.

This process continues until the walk reaches the $\epsilon$-shell of the Dirichlet boundary $\partial\Omega_{\text{D}}$. It is easy to observe that the process of sampling the next step $x_{k+1}$ is recursive, much like the Monte Carlo estimator in path tracing. In this work, we employ path guiding at this step to achieve variance reduction.
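The recursive walk above can be sketched concretely in the Dirichlet-only special case, where the star-shaped region degenerates to the largest empty ball and Walk on Stars reduces to the classic Walk on Spheres. Below is a minimal pure-Python sketch on the unit disk; the domain, the boundary function `g`, and the $\epsilon$ value are illustrative assumptions, not the paper's experimental setup:

```python
import math, random

def wos_laplace_2d(p, g, eps=1e-3, rng=random):
    """One Walk on Spheres run for the Laplace equation on the unit disk
    (Dirichlet boundary only).  Each step jumps uniformly to the boundary of
    the largest empty ball, mirroring the uniform next-step sampling of the
    baseline Walk on Stars estimator."""
    x, y = p
    while True:
        r = 1.0 - math.hypot(x, y)       # distance to the unit-circle boundary
        if r < eps:                      # inside the epsilon-shell: read g
            n = math.hypot(x, y)
            return g(x / n, y / n)
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)
        y += r * math.sin(theta)

def estimate(p, g, walks=5000, seed=0):
    rng = random.Random(seed)
    return sum(wos_laplace_2d(p, g, rng=rng) for _ in range(walks)) / walks

# g(x, y) = x is harmonic, so the true solution inside the disk is u(x, y) = x.
print(estimate((0.3, 0.0), lambda bx, by: bx))   # ≈ 0.3
```

The estimator is unbiased, so averaging many walks converges to the harmonic solution; the high per-walk variance of this uniform sampler is exactly what the guided next-step distribution targets.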

### 3.3. von Mises-Fisher Mixtures

To fit the target distribution (see section4.1) for importance sampling in local path guiding, we adopt the von Mises-Fisher (vMF) mixture distribution as the parametric distribution model. Unlike the parametric models often used on 2-spheres $\mathbb{S}^{2}$ in path tracing, Monte Carlo PDE solvers frequently need to operate in various dimensions. Therefore, we adopt the generalized form of the vMF distribution to accommodate different dimensional spaces. The vMF distribution on the $(d-1)$-sphere $\mathbb{S}^{d-1}$ in $\mathbb{R}^{d}~{}(d=2,3,...)$ is defined as:

(4) $v(\omega\mid\mu,\kappa)=\frac{\kappa^{d/2-1}}{(2\pi)^{d/2}I_{d/2-1}(\kappa)}\exp(\kappa\mu^{\mathrm{T}}\omega),$

where $\kappa>0$ and $\left|\mu\right|=1$ define the concentration and direction of the vMF distribution respectively, and $I_{k}$ denotes the modified Bessel function of the first kind at order $k$. The vMF mixture distribution is thus a convex combination of $K$ vMF components:

(5) $\mathcal{V}(\omega\mid\Theta)=\sum_{i=1}^{K}\lambda_{i}\cdot v(\omega\mid\mu_{i},\kappa_{i}),$

where $\lambda_{i}$ are the mixture weights. In addition to being applicable across various dimensions, the vMF mixture distribution is naturally defined on the spherical surface, which aligns perfectly with the directional nature of next-step sampling. The vMF mixture distribution has already been used in the path guiding work within the path tracing domain(Dong etal., 2023; Herholz etal., 2019).
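For intuition, the density in eq.4 can be evaluated directly in the $d=2$ case, where $\kappa^{d/2-1}=1$ and the normalizer reduces to $2\pi I_{0}(\kappa)$. The following self-contained sketch (the series-based Bessel evaluation and the example mixture are our own illustrative choices) verifies numerically that a vMF mixture on $\mathbb{S}^{1}$ integrates to unity:

```python
import math

def bessel_i0(x, terms=40):
    # Power series for the modified Bessel function I_0 (adequate for moderate x).
    s, t = 1.0, 1.0
    for k in range(1, terms):
        t *= (x / 2.0) ** 2 / (k * k)
        s += t
    return s

def vmf_mixture_pdf_2d(theta, comps):
    # Eq. 4 with d = 2: kappa^{d/2-1} = 1 and the normalizer is 2*pi*I_0(kappa).
    # comps is a list of (weight, mean_angle, kappa) with weights summing to 1.
    return sum(w * math.exp(k * math.cos(theta - m)) / (2.0 * math.pi * bessel_i0(k))
               for (w, m, k) in comps)

comps = [(0.7, 0.0, 4.0), (0.3, math.pi, 1.5)]
n = 4096
mass = sum(vmf_mixture_pdf_2d(2.0 * math.pi * i / n, comps)
           for i in range(n)) * (2.0 * math.pi / n)
print(round(mass, 4))  # 1.0 -- the mixture integrates to unity over S^1
```

The same formula generalizes to higher $d$ by swapping in $I_{d/2-1}$ and the $(2\pi)^{d/2}$ normalizer.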

## 4. Path Guiding for Walk on Stars Estimator

### 4.1. Target Distribution for Local Path Guiding

In the original Walk on Stars estimator(Sawhney etal., 2023), the next-step sample point $x_{k+1}$ is obtained through *uniform directional sampling*, meaning a direction is chosen uniformly, with the corresponding probability $p_{\text{u,walk}}=1/\left|\mathbb{S}^{d-1}\right|~{}(d=2,3,...)$. However, this method does not account for the anisotropy of the solution $u$ on the boundary of the star-shaped region, as illustrated in fig.2. To reduce variance, we perform importance sampling at each step to determine the next-step position. This involves constructing a directional *guiding distribution* at point $x_{k}$, which should be approximately proportional to the *target distribution*, i.e., the absolute value of the solution $u(x_{k+1})$ on the boundary of the star-shaped region:

(6) $p_{\text{g,walk}}(\omega\mid x_{k})\propto\left|u(x_{k+1})\right|,\quad x_{k+1}\in\partial\text{St}(x_{k}),$

where $\omega=\frac{x_{k+1}-x_{k}}{\left|x_{k+1}-x_{k}\right|}$ represents direction. $p_{\text{g,walk}}(\omega\mid x_{k})$ emphasizes that the guiding distribution is a function of the direction, with the spatial position as its parameter. The target distribution is the absolute value of $u(x_{k+1})$ because the solution $u$ to the PDE problem may take negative values. Later in section4.2 we will discuss how to store the guiding distribution.

#### Sampling source

The core of source sampling lies in generating the sample point $y_{k+1}$. In the original Walk on Stars algorithm, the direction for the source sample point is first generated uniformly, followed by importance sampling along the 1D segment in that direction. Since both the source sampling and the next-step sampling are uniform in the original algorithm, a *sample reuse* strategy is introduced, allowing both to share the result of the uniform direction sampling. However, these two events are independent in nature, so we disable this strategy in our approach. Furthermore, the directional distribution of the source could also be modeled and importance sampled, but for simplicity, we do not adopt this approach in our method.

### 4.2. Representing the Guiding Distribution in Space

We adopt the von Mises-Fisher mixture distribution (eq.5) as the parametric guiding distribution at any point $x$:

(7) $p_{\text{g,walk}}(\omega\mid x)=\mathcal{V}(\omega\mid\Theta(x)).$

To represent the spatially-varying parameter $\Theta(x)$, prior work in rendering has explored both discrete storage structures (Müller et al., 2017; Lafortune and Willems, 1995) and implicit neural representations (Dong et al., 2023; Huang et al., 2024). In our work, we follow the practice of Dong et al. (2023): we use implicit neural representations to encode the distribution parameters and employ a neural network $\mathbf{NN}(x\mid\Phi)$ with trainable parameters $\Phi$ to decode the spatially-varying parameter $\Theta(x)$:

(8) $\mathbf{NN}(x\mid\Phi)=\Theta(x).$

$\mathbf{NN}$ consists of a multi-resolution feature grid and a lightweight multi-layer perceptron (MLP). The inference and training procedure of the network is illustrated in fig.3. The query position $x$ is first processed through the multi-resolution feature grid with learnable parameters; the resulting feature vector is then passed into the MLP with 3 layers, each containing 64 neurons. The output of the MLP is a tensor of dimension $(2+d)\times K$, containing the unnormalized components of the $K$-component von Mises-Fisher mixture distribution: the weights $\lambda^{\prime}_{i}$, concentrations $\kappa^{\prime}_{i}$, and means $\mu^{\prime}_{i}$, where $\mu^{\prime}_{i}$ is represented by a $d$-dimensional unnormalized vector. Since the MLP cannot ensure that each output component satisfies the validity requirements of a von Mises-Fisher parametric distribution, we introduce a normalization process, as shown in table1. The normalized parameters correspond one-to-one to a valid von Mises-Fisher mixture distribution, guiding the Monte Carlo sampler to perform importance sampling.

| Parameter | Mapping |
|---|---|
| $\mu_{i}\in\mathbb{S}^{d-1}~(d=2,3,...)$ | $\mu_{i}=\mu^{\prime}_{i}/\left|\mu^{\prime}_{i}\right|$ |
| $\kappa_{i}\in[0,+\infty)$ | $\kappa_{i}=\exp(\kappa^{\prime}_{i})$ |
| $\lambda_{i}\in(0,1)$ | $\lambda_{i}=\exp(\lambda^{\prime}_{i})/\sum_{j=1}^{K}\exp(\lambda^{\prime}_{j})$ |
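The normalization step above is a softmax over the weights, an exponential over the concentrations, and a length normalization over the means. A minimal sketch (function and variable names are ours, not from the paper's implementation):

```python
import math

def normalize_vmf_params(raw):
    """Apply the normalization mappings of table 1 to unconstrained MLP
    outputs.  raw is a list of (lambda_raw, kappa_raw, mu_raw) tuples, one
    per mixture component; mu_raw is a d-dimensional list of floats."""
    z = sum(math.exp(l) for (l, _, _) in raw)     # softmax denominator
    out = []
    for lam_raw, kappa_raw, mu_raw in raw:
        lam = math.exp(lam_raw) / z               # weights in (0, 1), summing to 1
        kappa = math.exp(kappa_raw)               # positive concentration
        norm = math.sqrt(sum(c * c for c in mu_raw))
        mu = [c / norm for c in mu_raw]           # unit-length mean direction
        out.append((lam, kappa, mu))
    return out

params = normalize_vmf_params([(0.2, 1.0, [3.0, 4.0]), (-0.5, 0.0, [0.0, -2.0])])
print(round(sum(p[0] for p in params), 6))  # 1.0 -- the weights sum to one
```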

To fit the von Mises-Fisher mixture distribution $\mathcal{V}$ to the target distribution $\mathcal{D}$ at $x$, the Kullback-Leibler (KL) divergence is introduced:

(9) $D_{\mathrm{KL}}(\mathcal{D}\parallel\mathcal{V};\Theta)=\int_{\mathbb{S}^{d-1}}\mathcal{D}(\omega)\log\frac{\mathcal{D}(\omega)}{\mathcal{V}(\omega\mid\hat{\Theta})}\,\mathrm{d}\omega,$

where $\mathcal{D}\propto\left|u\right|$ (cf. eq.6). The one-sample Monte Carlo estimator of eq.9 is:

(10) $\langle D_{\mathrm{KL}}(\mathcal{D}\parallel\mathcal{V};\Theta)\rangle=\frac{\mathcal{D}(\omega)}{\tilde{p}(\omega\mid\hat{\Theta})}\log\frac{\mathcal{D}(\omega)}{\mathcal{V}(\omega\mid\hat{\Theta})},$

where $\tilde{p}$ represents the distribution of the sampler. In our context, this distribution is provided by the multiple importance sampling (section4.3) method. The expected solution for network parameter $\Phi$ is thus:

(11) $\Phi^{*}=\mathop{\text{argmin}}_{\Phi}\,\mathbb{E}_{x}\Big[D_{\text{KL}}\big(\mathcal{D}(x)\parallel\mathcal{V};\Theta(x)\big)\Big].$

We back-propagate the loss function along the path indicated by the purple arrows in Figure 3, optimizing the parameters $\Phi$ of both the MLP and the multi-resolution feature grid. This allows us to learn the spatially-varying parametric distribution in space.
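The one-sample estimator in eq.10 is straightforward to evaluate. The sketch below (names are illustrative, not from the paper's code) computes the per-sample loss value that would be back-propagated, and shows that it vanishes when the model matches the target pointwise:

```python
import math

def kl_loss_sample(d_val, sample_pdf, model_pdf):
    """One-sample estimate of KL(D || V) per eq. 10: d_val is the target
    density D(omega) at the sampled direction, sample_pdf is the density
    p~ of the sampler that produced the sample (in the paper, the MIS
    mixture), and model_pdf is the current model density V(omega)."""
    return (d_val / sample_pdf) * math.log(d_val / model_pdf)

# If the model matches the target pointwise, every one-sample estimate is 0.
print(kl_loss_sample(0.25, 0.5, 0.25))  # 0.0
```

In training, only the $-\log\mathcal{V}$ term carries gradients with respect to $\Phi$, since $\mathcal{D}$ does not depend on the network parameters.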

### 4.3. Multiple Importance Sampling

Relying solely on the learned distribution for sampling is not an optimal strategy, as it may introduce variance or even bias(Owen and Zhou, 2000). Therefore, we combine our method with uniform sampling from the original algorithm using multiple importance sampling (MIS). Specifically, we employ the balance heuristic(Veach and Guibas, 1995b). To avoid branching in the Monte Carlo estimator, we adopt a one-sample MIS approach to mix the two distributions:

(12) $\langle u(x_{k})\rangle=\frac{P^{B}(x_{k},x_{k+1})\,\langle u(x_{k+1})\rangle}{\alpha(x_{k})\left(\beta\,p_{\text{u,walk}}(\omega)+(1-\beta)\,p_{\text{g,walk}}(\omega)\right)},$

where $\beta$ represents the selection probability of the one-sample MIS, and we refer to section4.1 for the definitions of $p_{\text{u,walk}}$ and $p_{\text{g,walk}}$. For simplicity, we set selection probability $\beta=0.5$ throughout the entire scene in this work. In theory, other MIS heuristics could be introduced, or different values of $\beta$ could be chosen for specific spatial locations and directions(Vorba etal., 2019; Sbert etal., 2016; Havran and Sbert, 2014), i.e. $\beta=\beta(x_{k},\omega)$, to achieve higher estimator efficiency, but we will not delve into these aspects in this discussion.
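The one-sample MIS combination in eq.12 can be illustrated on the circle $\mathbb{S}^{1}$. In the sketch below, the "guided" density $(1+\cos\theta)/(2\pi)$ and its rejection sampler are toy stand-ins for the learned vMF mixture, and $\beta=0.5$ matches the setting above; regardless of which technique fires, dividing by the full mixture density keeps the estimator unbiased:

```python
import math, random

def sample_uniform_angle(rng):
    return rng.uniform(0.0, 2.0 * math.pi)

def pdf_uniform(theta):
    return 1.0 / (2.0 * math.pi)

def sample_bump(rng):
    # Rejection-sample the toy "guided" density (1 + cos(theta)) / (2*pi).
    while True:
        t = rng.uniform(0.0, 2.0 * math.pi)
        if rng.random() < (1.0 + math.cos(t)) / 2.0:
            return t

def pdf_bump(theta):
    return (1.0 + math.cos(theta)) / (2.0 * math.pi)

def one_sample_mis(beta, rng):
    # Pick one technique with probability beta, but evaluate the full
    # mixture pdf in the denominator, as in eq. 12.
    t = sample_uniform_angle(rng) if rng.random() < beta else sample_bump(rng)
    return t, beta * pdf_uniform(t) + (1.0 - beta) * pdf_bump(t)

rng = random.Random(1)
n = 20000
total = 0.0
for _ in range(n):
    t, p = one_sample_mis(0.5, rng)
    total += 1.0 / p          # integrand f = 1: estimates the measure of S^1
print(round(total / n, 2))    # ≈ 2*pi ≈ 6.28
```

Because the uniform component keeps the mixture density bounded away from zero, the estimator stays robust even where the guided density vanishes.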

## 5. Implementation Details

### 5.1. *Wavefront*-style PDE Solver

Modern neural network inference and training require batched input to achieve optimal performance. In the rendering domain, a common approach to efficiently batch per-sample data is to implement the renderer using a wavefront-style architecture(Laine etal., 2013), enabling the generation and processing of samples in batches. Drawing inspiration from this approach in rendering, our work implements a wavefront-style PDE solver on GPU.

Our proposed wavefront-style PDE solver generates a batch of sample points at the beginning of each walk-per-pixel (wpp) iteration and stores them in GPU memory using a Structure-of-Arrays (SoA) layout. Subsequently, a batched distance query is performed, and the sample points are divided into two groups. The first group consists of points that fall within the $\epsilon$-shell around the Dirichlet boundary, which are directly processed for Dirichlet boundary evaluation. The remaining sample points undergo next-step random sampling, source sampling, and Neumann boundary sampling. The newly generated next-step sample points are fed back into the distance query step.

As shown in fig.4, in this wavefront-style PDE solver, we insert the neural network inference process before the next-step sampling step and integrate the neural network training process at the end of each wpp iteration. This approach maximizes the batch size during both inference and training, effectively avoiding the inefficiencies associated with single-sample inference or training.
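The batched loop structure can be sketched in pure Python on the Dirichlet-only unit-disk problem (an illustrative stand-in: a real implementation batches these stages on the GPU, and the guiding inference/training stages would slot in before next-step sampling and after the loop body):

```python
import math, random

def wavefront_wos(points, g, eps=1e-3, seed=0):
    """Wavefront-style batched Walk on Spheres on the unit disk (sketch).
    Positions live in struct-of-arrays form; each iteration performs one
    batched distance query, retires points inside the epsilon-shell, and
    advances the rest."""
    rng = random.Random(seed)
    xs = [p[0] for p in points]              # SoA layout: one array per field
    ys = [p[1] for p in points]
    out = [0.0] * len(points)
    active = list(range(len(points)))
    while active:
        dist = [1.0 - math.hypot(xs[i], ys[i]) for i in active]  # batched query
        still_active = []
        for i, r in zip(active, dist):
            if r < eps:                      # epsilon-shell: Dirichlet evaluation
                n = math.hypot(xs[i], ys[i])
                out[i] = g(xs[i] / n, ys[i] / n)
            else:                            # next-step sampling (uniform baseline)
                t = rng.uniform(0.0, 2.0 * math.pi)
                xs[i] += r * math.cos(t)
                ys[i] += r * math.sin(t)
                still_active.append(i)
        active = still_active
    return out

vals = wavefront_wos([(0.3, 0.0)] * 2000, lambda bx, by: bx)
print(round(sum(vals) / len(vals), 2))  # ≈ 0.3
```

Keeping all in-flight walks in flat arrays is what lets the guiding network see one large batch per iteration instead of one query per walk.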

### 5.2. Network Design and Implementation

#### Multi-resolution spatial embedding

To effectively capture the high-frequency spatial variation of the guiding distribution, we employ a learnable spatial embedding to implicitly encode the parametric mixtures. This aligns with recent practices in NeRF-like applications. In a $d$-dimensional space, we define $L$ uniform grids $G_{l}$, where each grid covers the entire scene with a spatial resolution of $D_{l}^{d}$. Here, $G_{l}$ represents the $l$-th embedding grid, and the resolution $D_{l}$ increases exponentially, resulting in a multi-resolution representation. At each lattice point of $G_{l}$, we associate a learnable embedding vector $v\in\mathbb{R}^{F}$. To retrieve the spatial embedding for any point $x$, we perform multi-linear interpolation (bilinear in 2D) of the features at neighboring lattice points for each resolution. The resulting features are concatenated to form the final embedding $G(x)$:

(13) $G(x\mid\Phi)=\mathop{\oplus}_{l=1}^{L}\text{bilinear}\left(x,V_{l}[x]\right),\quad G:\mathbb{R}^{d}\to\mathbb{R}^{L\times F}~(d=2,3,...),$

where $V_{l}[x]$ is the set of features at the $2^{d}$ corners of the cell enclosing $x$ within $G_{l}$. We thus formulate the desired mapping as a two-step procedure:

(14) $\mathbf{MLP}\Big(G(x\mid\Phi_{\text{E}})\mid\Phi_{\text{M}}\Big)=\hat{\Theta}(x),$

where the parameters of the spatial embedding $\Phi_{\text{E}}$ and the parameters of the MLP $\Phi_{\text{M}}$ collectively constitute the trainable parameters $\Phi$ of our $\mathbf{NN}$ (eq.8). The multi-resolution structure efficiently captures varying levels of detail, allowing us to naturally model the spatial variations of the guiding distribution. This design alleviates the need for a single, monolithic MLP to serve as the implicit representation, enabling the MLP to primarily focus on decoding the embedding into the parametric models $\Theta$. As a result, this approach significantly accelerates both training and inference, with a reduced memory footprint.
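A minimal 2D sketch of the multi-resolution lookup in eq.13 (the dictionary-based grids and integer lattice features are illustrative stand-ins for the learnable embedding vectors):

```python
def bilerp_feature(grid, res, x, y):
    """Bilinear interpolation of per-lattice feature vectors on a res x res
    grid over [0, 1]^2.  grid maps lattice coordinates (i, j) to a list of
    F floats."""
    fx, fy = x * (res - 1), y * (res - 1)
    i0, j0 = min(int(fx), res - 2), min(int(fy), res - 2)
    tx, ty = fx - i0, fy - j0
    F = len(grid[(0, 0)])
    return [(1 - tx) * (1 - ty) * grid[(i0, j0)][c]
            + tx * (1 - ty) * grid[(i0 + 1, j0)][c]
            + (1 - tx) * ty * grid[(i0, j0 + 1)][c]
            + tx * ty * grid[(i0 + 1, j0 + 1)][c] for c in range(F)]

def multires_embedding(grids, x, y):
    # Concatenate interpolated features across all resolution levels (eq. 13).
    out = []
    for res, grid in grids:
        out.extend(bilerp_feature(grid, res, x, y))
    return out

def make_grid(res):
    # Toy "learned" features: the lattice coordinates themselves (F = 2).
    return {(i, j): [float(i), float(j)] for i in range(res) for j in range(res)}

grids = [(2, make_grid(2)), (4, make_grid(4))]
print(multires_embedding(grids, 1.0, 1.0))  # [1.0, 1.0, 3.0, 3.0]
```

The concatenated vector (length $L\times F$) is what the decoding MLP consumes; in $d$ dimensions the interpolation generalizes to the $2^{d}$ cell corners.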

#### Network implementation

We implement the neural network using *tiny-cuda-nn*(Müller, 2021). For the input to the neural network, we adopt DenseGrid as the encoding method, utilizing linear interpolation. We employ ReLU as the activation function of the MLP, while at the output, we apply the previously mentioned normalization mapping functions in table1.

#### Online training scheme

Similar to path guiding works in Monte Carlo path tracing, a training step is performed after the completion of all walks for each batch of pixels. This online learning mechanism ensures that the neural network used for inference is updated with each sampling iteration. Once the number of walks per pixel (wpp) reaches a certain threshold, the learned guiding distribution at each point tends to stabilize. At this stage, we can terminate the training process and rely solely on the neural network for inference, which further enhances performance.

### 5.3. Geometric Queries

The experiments conducted in this technical report utilize the fcpw library (Sawhney, 2021) to perform geometric queries. The GPU query portion of fcpw is implemented in Slang and runs on the Vulkan backend on Unix platforms. We have not yet found publicly available detailed documentation on interoperation between Slang's Vulkan backend and CUDA, so for the time being, we have implemented a less efficient solution that uses the CPU as an intermediary for memory transfers. In future work, we will re-implement a high-performance geometric query system to address this issue. Since both the baseline and our method operate on this system, the comparison remains fair.

## 6. Experiments and Results

### 6.1. Baseline, Configurations, and Metrics

For the selection of the baseline, we note that although there are some variance reduction strategies in the Monte Carlo PDE domain, our approach is the first to specifically address variance reduction at the sampling level within Monte Carlo PDE solvers. Other strategies are fundamentally different from our approach and are orthogonal, which will be discussed in section7. Therefore, we use the original unidirectional Walk on Stars estimator as our baseline.

All experiments are conducted at a resolution of 800$\times$800, running 1024 walks per pixel (wpp). In line with path guiding work in the rendering domain, we disable all existing primary variance reduction strategies (control variates, etc.) but retain importance sampling of the source in both methods. We disable Russian roulette to ensure fairness. We designate the first 256 steps as the training phase, during which the neural network also participates in inference, producing guiding distributions for the sampler. Once the training phase concludes, the neural network is solely used for inference. The sampling results from the training phase are retained.

Typically, Monte Carlo PDE solvers operate in 2D or 3D spaces; therefore, we have designed 2D and 3D examples. Details of these examples can be found in section6.2.

We report the variation of relMSE as a function of the number of samples. Due to the performance issues discussed in section5.3, we consider comparing relMSE over time to be impractical at this stage; therefore, we only present the former. We conduct all experiments on an Intel Core i7-9700K CPU and an NVIDIA RTX 2080 Ti GPU.
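For reference, below is a common definition of relMSE used in rendering comparisons (an assumption on our part; the report does not spell out its exact formula): the per-pixel squared error is divided by the squared reference value, with a small epsilon guarding against division by zero.

```python
def rel_mse(estimate, reference, eps=1e-4):
    """Relative mean squared error over flat per-pixel lists; a hypothetical
    but widely used definition, not necessarily the report's exact one."""
    n = len(estimate)
    return sum((e - r) ** 2 / (r * r + eps)
               for e, r in zip(estimate, reference)) / n

print(rel_mse([1.1, 2.0], [1.0, 2.0]))  # ≈ 0.005: only the first pixel errs
```

Normalizing by the reference keeps dark and bright regions of the solution comparable in the metric.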

### 6.2. Results and Comparison

The qualitative and quantitative results of the experiments are shown in fig.5, while the variation of relMSE with the number of samples is illustrated in fig.6. In this technical report, we present three representative cases. Case Sparse Dirichlets is a 2D Laplace equation problem, where the distribution of high-value Dirichlet boundaries is narrow and scattered, introducing significant variation in the sampling space. Our method demonstrates a clear advantage quantitatively and qualitatively in this scenario. Case Sparse Sources is a 3D Poisson equation problem, where the source is spatially dispersed. Our method also shows distinct qualitative and quantitative superiority. Case Mixed Boundaries is a 2D Laplace equation problem with mixed boundary conditions. The domain is enclosed by a square Neumann boundary and an internal Dirichlet boundary, showing that our method performs well in cases with Neumann boundaries. In the relMSE over number of samples plots for all three examples, our method consistently outperforms the baseline methods.

## 7. Discussion

#### Other variance reduction methods

The existing *neural-cache-based* variance reduction method (Li et al., 2023) combines the unbiased but high-variance Walk on Spheres (WoS) estimator with a biased but zero-variance neural field, using the WoS estimate as supervision to replace the self-supervised loss commonly used in Physics-Informed Neural Networks (PINNs). While this approach can achieve excellent variance reduction (since the neural field's output has zero variance), it does not address the inherent limitations of bias in neural field representations, and the walk-length threshold for accessing the cache, as a hyperparameter, can be difficult to tune. The experimental section of their paper shows that after a relatively small number ($\sim 10^{3}$) of walks per pixel, the baseline WoS MSE outperforms their result. Thus, the cache-based method is mainly applicable to scenarios with extremely limited computational resources for fast approximations. Existing *bidirectional methods* (Qi et al., 2022) are an efficient two-pass approach; however, they do not thoroughly investigate sampling strategies for generating the next step in the forward and backward passes. Combining our method with the bidirectional approach offers the *potential* for even better variance reduction. *Boundary Value Caching* (Miller et al., 2023) is also a competitive variance reduction method, but its mathematical formulation differs from ours, and it does not address local importance sampling.

#### Locality

Compared to traditional global solvers such as the Finite Difference Method (FDM) and the Finite Element Method (FEM), a notable advantage of Monte Carlo PDE solvers is their ability to perform local evaluations. Although our method uses implicit neural representations to store a spatially varying parametric distribution, this does not make it a global approach: the guiding process can be selectively applied to high-variance regions of the solution space, or even restricted to a single point. The choice of guiding regions is entirely flexible, since storing the parametric distribution does not require any global information.
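The locality argument is easy to see in code: a walk-based estimate touches only the query point and the geometry along its walk, with no global discretization. A minimal sketch of a plain Walk on Spheres estimator for the Laplace equation on the unit disk (an illustrative baseline estimator, not our guided Walk on Stars solver; the boundary data is chosen so the exact solution is known):

```python
import numpy as np

def wos_laplace_disk(p, g, n_walks=20000, eps=1e-4, rng=None):
    """Estimate u(p) for Laplace's equation on the unit disk with u = g
    on the boundary.

    Each walk repeatedly jumps to a uniform point on the largest circle
    centered at the current position that stays inside the domain, and
    terminates once the walker is within `eps` of the boundary.
    """
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for _ in range(n_walks):
        x = np.array(p, dtype=np.float64)
        while True:
            r = 1.0 - np.linalg.norm(x)  # distance to the boundary
            if r < eps:
                # Project onto the boundary and evaluate the data there.
                total += g(x / np.linalg.norm(x))
                break
            theta = rng.uniform(0.0, 2.0 * np.pi)
            x = x + r * np.array([np.cos(theta), np.sin(theta)])
    return total / n_walks

# The boundary data g(x, y) = x is harmonic, so the exact solution is
# u(p) = p_x, which lets the estimator be checked pointwise.
```

Note that estimating `u` at one point requires no information about the solution anywhere else, which is the property the paragraph above exploits when restricting guiding to selected regions.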

#### Alternative implementations

Although we implement our system using implicit neural representations, they are not a core part of our method; traditional discrete spatial storage structures should also work well with our algorithm. Likewise, the vMF mixture used to model the sampling distribution could be replaced with other distributions. Finally, our method models only the distribution of the mean of the solution $u$; modeling the second moment of $u$ to achieve variance-aware path guiding (Rath et al., 2020) might yield even better results.
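As an illustration of the directional model, sampling a next-step direction from a parametric mixture looks like the following in 2D, where the von Mises distribution plays the role of the vMF lobe on the circle (a sketch only: the component weights, means, and concentrations are placeholders, not learned values from our system):

```python
import numpy as np

def sample_vm_mixture(weights, mus, kappas, n, rng=None):
    """Draw n angles from a mixture of von Mises lobes on the circle.

    `weights` are mixture probabilities, `mus` mean directions (radians),
    and `kappas` concentrations; a higher kappa gives a narrower lobe.
    """
    rng = rng or np.random.default_rng(0)
    weights = np.asarray(weights) / np.sum(weights)
    comps = rng.choice(len(weights), size=n, p=weights)  # pick a lobe
    return np.array([rng.vonmises(mus[c], kappas[c]) for c in comps])

def vm_mixture_pdf(theta, weights, mus, kappas):
    """Mixture density, needed for the importance-sampling weight 1/pdf."""
    weights = np.asarray(weights) / np.sum(weights)
    dens = [w * np.exp(k * np.cos(theta - m)) / (2.0 * np.pi * np.i0(k))
            for w, m, k in zip(weights, mus, kappas)]
    return np.sum(dens, axis=0)
```

Swapping in a different directional family would only change these two functions: the guided estimator needs nothing beyond a sampler and an evaluable density.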

## 8. Conclusion, Limitations and Future Work

We present Path Guiding for Monte Carlo PDE Solvers, inspired by path guiding techniques in physically based rendering. We introduce path guiding based on neural parametric mixtures into the mainstream Walk on Stars estimator to reduce the variance of the Monte Carlo estimate. We analyze the next-step distribution of the Walk on Stars estimator across dimensions and make targeted improvements to the path guiding approach based on the characteristics of Monte Carlo PDE solvers. Experimental results demonstrate that our method significantly reduces variance, both qualitatively and quantitatively.

Our path guiding method operates within a localized sampling space. Although the working space of Monte Carlo PDE solvers differs significantly from the path space of Monte Carlo path tracing, we anticipate the possibility of a generalized path integral formulation (Veach and Guibas, 1995a, 1997), analogous to that of path tracing, which could better guide variance reduction research in this field. In path tracing, the Metropolis Light Transport (MLT) algorithm (Veach and Guibas, 1997) is commonly used to sample specific optical phenomena such as caustics. While the Poisson equation is smooth in nature and unlikely to produce singular high-energy distributions, whether MCMC methods such as MLT can match or exceed path guiding and other existing variance reduction techniques remains an open question. In recent years, several extensions of Monte Carlo PDE solvers have been proposed, such as the extension to problems with spatially varying coefficients (Sawhney et al., 2022); whether more advanced, problem-specific path guiding strategies can be employed in these settings remains an area of active exploration. Outside the solvers themselves, it may also be possible to design denoisers (Kalantari et al., 2015) tailored to Monte Carlo PDE solvers; how such a denoiser should be designed, and what information from the solver’s output it would need, are open questions that warrant further investigation. Finally, the goal of our research is a Monte Carlo estimator for forward solving of PDEs; for the inverse setting, experience from differentiable path tracing suggests that different path guiding strategies will be required (Fan et al., 2024).

To date, research on Monte Carlo PDE solvers has primarily focused on generalizing the method to broaden its applicability; variance reduction remains a vast and largely unexplored area. We look forward to further work on variance reduction for Monte Carlo PDE solvers and hope that many advances from the rendering domain will find broader application in other areas.

## References

- Bati et al. (2023) Mégane Bati, Stéphane Blanco, Christophe Coustet, Vincent Eymet, Vincent Forest, Richard Fournier, Jacques Gautrais, Nicolas Mellado, Mathias Paulin, and Benjamin Piaud. 2023. Coupling Conduction, Convection and Radiative Transfer in a Single Path-Space: Application to Infrared Rendering. *ACM Trans. Graph.* 42, 4, Article 79 (July 2023), 20 pages. https://doi.org/10.1145/3592121
- de Goes and Desbrun (2024) Fernando de Goes and Mathieu Desbrun. 2024. Stochastic Computation of Barycentric Coordinates. *ACM Trans. Graph.* 43, 4, Article 42 (July 2024), 13 pages. https://doi.org/10.1145/3658131
- De Lambilly et al. (2023) Auguste De Lambilly, Gabriel Benedetti, Nour Rizk, Chen Hanqi, Siyuan Huang, Junnan Qiu, David Louapre, Raphael Granier De Cassagnac, and Damien Rohmer. 2023. Heat Simulation on Meshless Crafted-Made Shapes. In *Proceedings of the 16th ACM SIGGRAPH Conference on Motion, Interaction and Games* (Rennes, France) *(MIG ’23)*. Association for Computing Machinery, New York, NY, USA, Article 9, 7 pages. https://doi.org/10.1145/3623264.3624457
- Dong et al. (2023) Honghao Dong, Guoping Wang, and Sheng Li. 2023. Neural Parametric Mixtures for Path Guiding. In *ACM SIGGRAPH 2023 Conference Proceedings* (Los Angeles, CA, USA) *(SIGGRAPH ’23)*. Association for Computing Machinery, New York, NY, USA, Article 29, 10 pages. https://doi.org/10.1145/3588432.3591533
- Ermakov and Sipin (2009) S. Ermakov and A. Sipin. 2009. The “walk in hemispheres” process and its applications to solving boundary value problems. *Vestnik St. Petersburg University: Mathematics* 42 (09 2009), 155–163. https://doi.org/10.3103/S1063454109030029
- Fan et al. (2023) Zhimin Fan, Pengpei Hong, Jie Guo, Changqing Zou, Yanwen Guo, and Ling-Qi Yan. 2023. Manifold Path Guiding for Importance Sampling Specular Chains. *ACM Trans. Graph.* 42, 6, Article 257 (Dec. 2023), 14 pages. https://doi.org/10.1145/3618360
- Fan et al. (2024) Zhimin Fan, Pengcheng Shi, Mufan Guo, Ruoyu Fu, Yanwen Guo, and Jie Guo. 2024. Conditional Mixture Path Guiding for Differentiable Rendering. *ACM Trans. Graph.* 43, 4, Article 48 (July 2024), 11 pages. https://doi.org/10.1145/3658133
- Havran and Sbert (2014) Vlastimil Havran and Mateu Sbert. 2014. Optimal combination of techniques in multiple importance sampling. In *Proceedings of the 13th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry* (Shenzhen, China) *(VRCAI ’14)*. Association for Computing Machinery, New York, NY, USA, 141–150. https://doi.org/10.1145/2670473.2670496
- Herholz et al. (2019) Sebastian Herholz, Yangyang Zhao, Oskar Elek, Derek Nowrouzezahrai, Hendrik P. A. Lensch, and Jaroslav Křivánek. 2019. Volume Path Guiding Based on Zero-Variance Random Walk Theory. *ACM Trans. Graph.* 38, 3, Article 25 (June 2019), 19 pages. https://doi.org/10.1145/3230635
- Hey and Purgathofer (2002) Heinrich Hey and Werner Purgathofer. 2002. Importance sampling with hemispherical particle footprints. In *Proceedings of the 18th Spring Conference on Computer Graphics* (Budmerice, Slovakia) *(SCCG ’02)*. Association for Computing Machinery, New York, NY, USA, 107–114. https://doi.org/10.1145/584458.584476
- Huang et al. (2024) Jiawei Huang, Akito Iizuka, Hajime Tanaka, Taku Komura, and Yoshifumi Kitamura. 2024. Online Neural Path Guiding with Normalized Anisotropic Spherical Gaussians. *ACM Trans. Graph.* 43, 3, Article 26 (April 2024), 18 pages. https://doi.org/10.1145/3649310
- Huo et al. (2020) Yuchi Huo, Rui Wang, Ruzhang Zheng, Hualin Xu, Hujun Bao, and Sung-Eui Yoon. 2020. Adaptive Incident Radiance Field Sampling and Reconstruction Using Deep Reinforcement Learning. *ACM Trans. Graph.* 39, 1, Article 6 (Jan. 2020), 17 pages. https://doi.org/10.1145/3368313
- Jain et al. (2024) Pranav Jain, Ziyin Qu, Peter Yichen Chen, and Oded Stein. 2024. Neural Monte Carlo Fluid Simulation. In *ACM SIGGRAPH 2024 Conference Papers* (Denver, CO, USA) *(SIGGRAPH ’24)*. Association for Computing Machinery, New York, NY, USA, Article 9, 11 pages. https://doi.org/10.1145/3641519.3657438
- Jensen (1995) Henrik Wann Jensen. 1995. Importance Driven Path Tracing using the Photon Map. In *Rendering Techniques*. https://api.semanticscholar.org/CorpusID:9344202
- Kalantari et al. (2015) Nima Khademi Kalantari, Steve Bako, and Pradeep Sen. 2015. A machine learning approach for filtering Monte Carlo noise. *ACM Trans. Graph.* 34, 4, Article 122 (July 2015), 12 pages. https://doi.org/10.1145/2766977
- Krivanek et al. (2005) J. Krivanek, P. Gautron, S. Pattanaik, and K. Bouatouch. 2005. Radiance caching for efficient global illumination computation. *IEEE Transactions on Visualization and Computer Graphics* 11, 5 (2005), 550–561. https://doi.org/10.1109/TVCG.2005.83
- Lafortune and Willems (1993) Eric P. Lafortune and Yves D. Willems. 1993. Bi-directional path tracing. In *Proceedings of Third International Conference on Computational Graphics and Visualization Techniques (Compugraphics ’93)*. Alvor, Portugal, 145–153.
- Lafortune and Willems (1995) Eric P. Lafortune and Yves D. Willems. 1995. A 5D Tree to Reduce the Variance of Monte Carlo Ray Tracing. In *Rendering Techniques*. https://api.semanticscholar.org/CorpusID:7506343
- Laine et al. (2013) Samuli Laine, Tero Karras, and Timo Aila. 2013. Megakernels considered harmful: wavefront path tracing on GPUs. In *Proceedings of the 5th High-Performance Graphics Conference* (Anaheim, California) *(HPG ’13)*. Association for Computing Machinery, New York, NY, USA, 137–143. https://doi.org/10.1145/2492045.2492060
- Li et al. (2022) He Li, Beibei Wang, Changhe Tu, Kun Xu, Nicolas Holzschuch, and Ling-Qi Yan. 2022. Unbiased Caustics Rendering Guided by Representative Specular Paths. In *Proceedings of SIGGRAPH Asia 2022*.
- Li et al. (2023) Zilu Li, Guandao Yang, Xi Deng, Christopher De Sa, Bharath Hariharan, and Steve Marschner. 2023. Neural Caches for Monte Carlo Partial Differential Equation Solvers. In *SIGGRAPH Asia 2023 Conference Papers* (Sydney, NSW, Australia) *(SA ’23)*. Association for Computing Machinery, New York, NY, USA, Article 34, 10 pages. https://doi.org/10.1145/3610548.3618141
- Miller et al. (2023) Bailey Miller, Rohan Sawhney, Keenan Crane, and Ioannis Gkioulekas. 2023. Boundary Value Caching for Walk on Spheres. *ACM Trans. Graph.* 42, 4 (2023).
- Miller et al. (2024a) Bailey Miller, Rohan Sawhney, Keenan Crane, and Ioannis Gkioulekas. 2024a. Differential Walk on Spheres. *ACM Trans. Graph.* 43, 6 (2024).
- Miller et al. (2024b) Bailey Miller, Rohan Sawhney, Keenan Crane, and Ioannis Gkioulekas. 2024b. Walkin’ Robin: Walk on Stars with Robin Boundary Conditions. *ACM Trans. Graph.* 43, 4 (2024).
- Muchacho and Pokorny (2024) Rafael I. Cabral Muchacho and Florian T. Pokorny. 2024. Walk on Spheres for PDE-based Path Planning. arXiv:2406.01713 [cs.RO] https://arxiv.org/abs/2406.01713
- Muller (1956) Mervin E. Muller. 1956. Some Continuous Monte Carlo Methods for the Dirichlet Problem. *The Annals of Mathematical Statistics* 27, 3 (1956), 569–589. https://doi.org/10.1214/aoms/1177728169
- Müller (2021) Thomas Müller. 2021. *tiny-cuda-nn*. https://github.com/NVlabs/tiny-cuda-nn
- Müller et al. (2017) Thomas Müller, Markus Gross, and Jan Novák. 2017. Practical Path Guiding for Efficient Light-Transport Simulation. *Computer Graphics Forum (Proceedings of EGSR)* 36, 4 (June 2017), 91–100. https://doi.org/10.1111/cgf.13227
- Müller et al. (2019) Thomas Müller, Brian McWilliams, Fabrice Rousselle, Markus Gross, and Jan Novák. 2019. Neural Importance Sampling. *ACM Trans. Graph.* 38, 5, Article 145 (Oct. 2019), 19 pages. https://doi.org/10.1145/3341156
- Müller et al. (2021) Thomas Müller, Fabrice Rousselle, Jan Novák, and Alexander Keller. 2021. Real-time neural radiance caching for path tracing. *ACM Trans. Graph.* 40, 4, Article 36 (July 2021), 16 pages. https://doi.org/10.1145/3450626.3459812
- Nabizadeh et al. (2021) Mohammad Sina Nabizadeh, Ravi Ramamoorthi, and Albert Chern. 2021. Kelvin transformations for simulations on infinite domains. *ACM Trans. Graph.* 40, 4 (2021), 97:1–97:15.
- Nam et al. (2024) Hong Chul Nam, Julius Berner, and Anima Anandkumar. 2024. Solving Poisson Equations Using Neural Walk-on-Spheres. In *Forty-first International Conference on Machine Learning*.
- Owen and Zhou (2000) Art Owen and Yi Zhou. 2000. Safe and Effective Importance Sampling. *J. Amer. Statist. Assoc.* 95, 449 (2000), 135–143. http://www.jstor.org/stable/2669533
- Qi et al. (2022) Yang Qi, Dario Seyb, Benedikt Bitterli, and Wojciech Jarosz. 2022. A bidirectional formulation for Walk on Spheres. *Computer Graphics Forum (Proceedings of EGSR)* 41, 4 (July 2022). https://doi.org/10/jgzr
- Rath et al. (2020) Alexander Rath, Pascal Grittmann, Sebastian Herholz, Petr Vévoda, Philipp Slusallek, and Jaroslav Křivánek. 2020. Variance-aware path guiding. *ACM Trans. Graph.* 39, 4, Article 151 (Aug. 2020), 12 pages. https://doi.org/10.1145/3386569.3392441
- Reibold et al. (2018) Florian Reibold, Johannes Hanika, Alisa Jung, and Carsten Dachsbacher. 2018. Selective guided sampling with complete light transport paths. *ACM Trans. Graph.* 37, 6, Article 223 (Dec. 2018), 14 pages. https://doi.org/10.1145/3272127.3275030
- Rioux-Lavoie et al. (2022) Damien Rioux-Lavoie, Ryusuke Sugimoto, Tümay Özdemir, Naoharu H. Shimada, Christopher Batty, Derek Nowrouzezahrai, and Toshiya Hachisuka. 2022. A Monte Carlo Method for Fluid Simulation. *ACM Trans. Graph.* 41, 6 (Dec. 2022). https://doi.org/10.1145/3550454.3555450
- Sawhney (2021) Rohan Sawhney. 2021. *FCPW: Fastest Closest Points in the West*.
- Sawhney and Crane (2020) Rohan Sawhney and Keenan Crane. 2020. Monte Carlo Geometry Processing: A Grid-Free Approach to PDE-Based Methods on Volumetric Domains. *ACM Trans. Graph.* 39, 4 (2020).
- Sawhney et al. (2023) Rohan Sawhney, Bailey Miller, Ioannis Gkioulekas, and Keenan Crane. 2023. Walk on Stars: A Grid-Free Monte Carlo Method for PDEs with Neumann Boundary Conditions. *ACM Trans. Graph.* 42, 4 (2023).
- Sawhney et al. (2022) Rohan Sawhney, Dario Seyb, Wojciech Jarosz, and Keenan Crane. 2022. Grid-Free Monte Carlo for PDEs with Spatially Varying Coefficients. *ACM Trans. Graph.* XX, X (2022).
- Sbert et al. (2016) Mateu Sbert, Vlastimil Havran, and Laszlo Szirmay-Kalos. 2016. Variance Analysis of Multi-sample and One-sample Multiple Importance Sampling. *Computer Graphics Forum* (2016). https://doi.org/10.1111/cgf.13042
- Simonov (2008) Nikolai A. Simonov. 2008. Walk-on-Spheres Algorithm for Solving Boundary-Value Problems with Continuity Flux Conditions. https://api.semanticscholar.org/CorpusID:117970575
- Sugimoto et al. (2024a) Ryusuke Sugimoto, Christopher Batty, and Toshiya Hachisuka. 2024a. Velocity-Based Monte Carlo Fluids. In *ACM SIGGRAPH 2024 Conference Papers* (Denver, CO, USA) *(SIGGRAPH ’24)*. Association for Computing Machinery, New York, NY, USA, Article 8, 11 pages. https://doi.org/10.1145/3641519.3657405
- Sugimoto et al. (2023) Ryusuke Sugimoto, Terry Chen, Yiti Jiang, Christopher Batty, and Toshiya Hachisuka. 2023. A Practical Walk-on-Boundary Method for Boundary Value Problems. *ACM Trans. Graph.* 42, 4, Article 81 (July 2023), 16 pages. https://doi.org/10.1145/3592109
- Sugimoto et al. (2024b) Ryusuke Sugimoto, Nathan King, Toshiya Hachisuka, and Christopher Batty. 2024b. Projected Walk on Spheres: A Monte Carlo Closest Point Method for Surface PDEs. In *ACM SIGGRAPH Asia 2024 Conference Papers* (Tokyo, Japan) *(SIGGRAPH Asia ’24)*. Association for Computing Machinery, New York, NY, USA, 10 pages. https://doi.org/10.1145/3680528.3687599
- Veach and Guibas (1995a) Eric Veach and Leonidas Guibas. 1995a. Bidirectional Estimators for Light Transport. In *Photorealistic Rendering Techniques*, Georgios Sakas, Stefan Müller, and Peter Shirley (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 145–167.
- Veach and Guibas (1995b) Eric Veach and Leonidas J. Guibas. 1995b. Optimally combining sampling techniques for Monte Carlo rendering. In *Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques* *(SIGGRAPH ’95)*. Association for Computing Machinery, New York, NY, USA, 419–428. https://doi.org/10.1145/218380.218498
- Veach and Guibas (1997) Eric Veach and Leonidas J. Guibas. 1997. Metropolis light transport. In *Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques* *(SIGGRAPH ’97)*. ACM Press/Addison-Wesley Publishing Co., USA, 65–76. https://doi.org/10.1145/258734.258775
- Vorba et al. (2019) Jiří Vorba, Johannes Hanika, Sebastian Herholz, Thomas Müller, Jaroslav Křivánek, and Alexander Keller. 2019. Path guiding in production. In *ACM SIGGRAPH 2019 Courses* (Los Angeles, California) *(SIGGRAPH ’19)*. Association for Computing Machinery, New York, NY, USA, Article 18, 77 pages. https://doi.org/10.1145/3305366.3328091
- Vorba et al. (2014) Jiří Vorba, Ondřej Karlík, Martin Šik, Tobias Ritschel, and Jaroslav Křivánek. 2014. On-line learning of parametric mixture models for light transport simulation. *ACM Trans. Graph.* 33, 4, Article 101 (July 2014), 11 pages. https://doi.org/10.1145/2601097.2601203
- Yilmazer et al. (2024) Ekrem Fatih Yilmazer, Delio Vicini, and Wenzel Jakob. 2024. Solving Inverse PDE Problems using Monte Carlo Estimators. *Transactions on Graphics (Proceedings of SIGGRAPH Asia)* 43 (Dec. 2024). https://doi.org/10.1145/3687990
- Yu et al. (2024) Z. Yu, L. Wu, Z. Zhou, and S. Zhao. 2024. A Differential Monte Carlo Solver For the Poisson Equation. In *ACM SIGGRAPH 2024 Conference Proceedings*.
- Zhu et al. (2021) Shilin Zhu, Zexiang Xu, Tiancheng Sun, Alexandr Kuznetsov, Mark Meyer, Henrik Wann Jensen, Hao Su, and Ravi Ramamoorthi. 2021. Hierarchical neural reconstruction for path guiding using hybrid path and photon samples. *ACM Trans. Graph.* 40, 4, Article 35 (July 2021), 16 pages. https://doi.org/10.1145/3450626.3459810