Sparse grid

Sparse Grids Matlab Kit

The Sparse Grids Matlab Kit is a collection of Matlab functions for high-dimensional quadrature and interpolation, based on the combination technique version of sparse grids.

It is lightweight, high-level, and (hopefully) easy to use, well suited for quick prototyping and teaching. It comes with extensive documentation and examples (7000 lines of code, 3600 lines of comments).

It is somewhat geared towards Uncertainty Quantification (UQ), but it is flexible enough for other purposes.

The Sparse Grids Matlab Kit is distributed under a BSD 2-Clause license. Its main features include:

  • Sparse-grid-based quadrature and interpolation (Gauss-Legendre, Leja, Clenshaw-Curtis, Gauss-Hermite, Kronrod-Patterson-normal, Gaussian-Leja points supported)

  • Dimension-adaptive sparse grid algorithm

  • Conversion of a sparse-grid interpolant to a Polynomial Chaos Representation (Legendre, Chebyshev, Hermite polynomials supported)

  • Sparse-grid-based global and local sensitivity analysis (by computation of Sobol Indices and gradients of a sparse grid interpolant)

  • Export of sparse grid collocation points and weights to ASCII file

  • Visualization functions (plot of sparse grid points and sparse grid interpolant)


The toolbox has been used, for example, in the following works:

  • Chiara Piazzola, Lorenzo Tamellini, Raúl Tempone. A note on tools for prediction under uncertainty and identifiability of SIR-like dynamical systems for epidemiology, arXiv:2008.01400. Matlab code available here

  • Jesús Martínez-Frutos, Francisco Periago Esparza. Optimal Control of PDEs under Uncertainty - An Introduction with Application to Optimal Shape Design of Structures. Springer International Publishing, 2018. Book available here. Matlab code available here

Please cite our toolbox by mentioning the webpage containing the package and adding the following reference to your work:

J. Bäck, F. Nobile, L. Tamellini, and R. Tempone. Stochastic spectral Galerkin and collocation methods for PDEs with random coefficients: a numerical comparison. In J.S. Hesthaven and E.M. Ronquist, editors, Spectral and High Order Methods for Partial Differential Equations, volume 76 of Lecture Notes in Computational Science and Engineering, pages 43–62. Springer, 2011.


@incollection{BackNobileTamelliniTempone2011,
  author    = {B\"ack, J. and Nobile, F. and Tamellini, L. and Tempone, R.},
  title     = {Stochastic spectral {G}alerkin and collocation methods for {PDE}s with random coefficients: a numerical comparison},
  booktitle = {Spectral and High Order Methods for Partial Differential Equations},
  pages     = {43--62},
  publisher = {Springer},
  year      = {2011},
  volume    = {76},
  series    = {Lecture Notes in Computational Science and Engineering},
  editor    = {Hesthaven, J.S. and Ronquist, E.M.},
  note      = {Selected papers from the ICOSAHOM '09 conference, June 22-26, Trondheim, Norway}
}


For any questions or to report a bug, send an email to tamellini AT imati DOT cnr DOT it .

Send us your email if you want to be notified when a new version is released online.


Adaptive sparse grid approximation for high dimensional interval field construction


This paper introduces a novel approach to model interval fields in high-dimensional Finite Element models containing thousands of degrees of freedom. Typically, to simulate with interval fields in such high-dimensional model spaces, a non-negligible computational cost has to be dedicated to the calculation of a combinatorial number of distances in order to determine the interval field basis functions (e.g., via Inverse Distance Weighting interpolation). It is proposed to alleviate this computational cost by applying so-called sparse grid interpolants to construct the interval field basis functions over the high-dimensional model domain, rather than computing the distances over the underlying finite element grid. First, a theoretical framework for this is presented to compute interval field basis functions via Smolyak's algorithm, including rates of error convergence. Then, a case study on an L-shaped beam model with holes is performed, which shows that a highly accurate representation of the interval field basis can be obtained at strongly reduced computational expense compared to the full combinatorial calculation.


Dr. CU: Detailed Routing by Sparse Grid Graph and Minimum-Area-Captured Path Search

Abstract: Different from global routing, detailed routing takes care of many detailed design rules and is performed on a significantly larger routing grid graph. In advanced technology nodes, it becomes the most complicated and time-consuming stage in the very large-scale integration physical design flow. We propose Dr. CU, an efficient and effective detailed router, to tackle the challenges. To handle a 3-D detailed routing grid graph of enormous size, a set of two-level sparse data structures is designed for runtime and memory efficiency. For handling the minimum-area constraint, an optimal correct-by-construction path search algorithm is proposed. Besides, an efficient bulk synchronous parallel scheme is adopted to further reduce the runtime usage. Compared with the other state-of-the-art academic detailed routers, Dr. CU reduces the number of design rule violations by one or two orders of magnitude. At the same time, it uses shorter wire length, fewer vias, and significantly less runtime. The source code of Dr. CU is available at

Published in: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems ( Volume: 39 , Issue: 9 , Sept. 2020 )


This page slowly walks you through an introduction of sparse grids. Apart from mathematical knowledge at a level similar to high school, no other prerequisites are required.


What Are Sparse Grids?

Sparse grids are a numerical discretization technique that can be used to accelerate the solution of a wide range of computational problems: interpolation, regression, classification, density estimation, quadrature, uncertainty quantification, partial differential equations, and so on.

Often, these problems are solved on full grids (also called “regular grids”). While this is a feasible approach if the dimensionality $d$ of the problem is low ($d = 2$, $d = 3$), full grids become computationally very expensive for higher dimensionalities $d > 4$. This is due to the curse of dimensionality, which states that the complexity of full grids grows exponentially with $d$.

If the dimensionality cannot be reduced by dimension-reducing methods (e.g., principal component analysis), then different approaches have to be used in order to solve these problems. This is where sparse grids come into play.

Sparse grids defeat the curse of dimensionality and allow the solution of the problems with much smaller effort, at the cost of slightly deteriorated errors.

Setting and Prerequisites

The basic setting is as follows: We have a scalar-valued function $\objfun$ which maps some input parameter $\*x$ to some output value $\objfun(\*x)$.

The requirements of $\objfun$ are as follows:

  • $\objfun$ is defined on the unit hyper-cube $\clint{0, 1}^d$. This means that $\objfun(\*x)$ can be computed for all $\*x \in \clint{0, 1}^d$. If every parameter $x_t$ independently “lives” on some interval $\clint{a_t, b_t}$, then the $x_t$ can be easily transformed to be in $\clint{0, 1}$.
  • $\objfun$ can be evaluated at arbitrary points in $\clint{0, 1}^d$, so $\objfun$ is not given by a “point cloud” of values at predetermined points.
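The affine rescaling mentioned in the first requirement is a one-liner. A minimal Python sketch (the function names are ours, for illustration):

```python
def to_unit(x, a, b):
    """Map x in [a, b] affinely to [0, 1]."""
    return (x - a) / (b - a)

def from_unit(u, a, b):
    """Inverse map: u in [0, 1] back to [a, b]."""
    return a + u * (b - a)

print(to_unit(3.0, 2.0, 6.0))     # 0.25
print(from_unit(0.25, 2.0, 6.0))  # 3.0
```

Applying `to_unit` coordinate-wise transforms any box $\clint{a_1, b_1} \times \dotsb \times \clint{a_d, b_d}$ into the unit hyper-cube.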

Furthermore, usually we assume that $\objfun$ is a “computationally expensive” function, which means that evaluations $\objfun(\*x)$ might take a long time.

To study $\objfun$ and to solve problems involving $\objfun$ (e.g., function optimization), we want to replace $\objfun$ with another function $\surfun\colon \clint{0, 1}^d \to \real$ that approximates $\objfun$ well, but is much cheaper to evaluate. We could then discard the original function $\objfun$ and study $\surfun$ instead.

In our setting, we construct $\surfun$ by interpolation of $\objfun$: We evaluate $\objfun$ at a (hopefully small) number of points $\*x_k$ and use the values $\objfun(\*x_k)$ to define $\surfun$. These points $\*x_k$ will later be our sparse grid.

Full Grids in One Dimension

The most common method is to use full grids for the points $\*x_k$.

To explain full grids, let’s start with one dimension $d = 1$, i.e., our domain is the unit interval $\clint{0, 1}$. Full grids subdivide this interval with equidistant points into intervals of equal size. Choosing the number of intervals as a power $2^n$ of two will later make it easier for us.

Hence, our one-dimensional (full) grid is given by

$$\fgset{n} := \{\gp{n,i} \mid i = 0, \dotsc, 2^n\},\quad \gp{n,i} := i \gs{n},\quad \gs{n} := 2^{-n},$$

where $n \ge 0$ is the level of the grid, $\gp{n,i}$ are the grid points of index $i$, and $\gs{n}$ is the grid spacing. For example, the 1D grid of level two is given by $\fgset{2} = \{0, 0.25, 0.5, 0.75, 1\}$, since $\gs{2} = 0.25$.
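The grid definition above translates directly into code. A minimal Python sketch (the name `full_grid_1d` is ours, for illustration):

```python
def full_grid_1d(n):
    """1D full grid of level n: points x_{n,i} = i * h_n, i = 0, ..., 2^n."""
    h = 2.0 ** (-n)                      # grid spacing h_n = 2^{-n}
    return [i * h for i in range(2 ** n + 1)]

print(full_grid_1d(2))  # [0.0, 0.25, 0.5, 0.75, 1.0]
```

The level-2 output matches the example $\fgset{2}$ above; a grid of level $n$ always has $2^n + 1$ points.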

With this grid, we now construct our interpolant function $\surfun$ as a linear combination of basis functions, one for each grid point:

$$\surfun(x) := \sum_{i=0}^{2^n} \coeff{n,i} \basis{n,i}(x),\quad \coeff{n,i} \in \real.$$

The simplest, reasonably good basis is given by piecewise linear functions. Basically, we evaluate the original function $\objfun$ at the grid points $\gp{n,i}$ and connect the resulting dots in the plot by drawing straight lines between them.

Formally, piecewise linear basis functions are defined as

$$\basis{n,i}(x) := \max(1 - |2^n x - i|, 0).$$

This means that $\basis{n,i}$ is zero everywhere except between the two neighbors $\gp{n,i-1}$ and $\gp{n,i+1}$, where it increases linearly from $0$ to $1$ on $\clint{\gp{n,i-1}, \gp{n,i}}$ and then decreases back again from $1$ to $0$ on $\clint{\gp{n,i}, \gp{n,i+1}}$. The shape of $\basis{n,i}$ is why these piecewise linear functions are also called hat functions.

Since every basis function $\basis{n,i}$ vanishes at all grid points but $\gp{n,i}$, the coefficients $\coeff{n,i}$ are simply equal to the value $\objfun(\gp{n,i})$ of the original function at the grid points. This is the reason why this basis is called a nodal basis.
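Putting the hat basis and the nodal coefficients together yields a complete 1D interpolation routine. A Python sketch (function names are ours, for illustration):

```python
def hat(n, i, x):
    """Piecewise linear (hat) basis function of level n and index i."""
    return max(1.0 - abs(2.0 ** n * x - i), 0.0)

def interpolate_nodal(f, n):
    """Nodal interpolant of f on the 1D full grid of level n."""
    coeffs = [f(i * 2.0 ** (-n)) for i in range(2 ** n + 1)]  # c_{n,i} = f(x_{n,i})
    return lambda x: sum(c * hat(n, i, x) for i, c in enumerate(coeffs))

f = lambda x: x * (1.0 - x)
g = interpolate_nodal(f, 3)
print(abs(g(0.375) - f(0.375)))  # 0.0: exact at grid points
```

Between grid points, `g` linearly interpolates the two neighboring values, exactly as described above.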

Full Grids in Higher Dimensions

The one-dimensional grid is generalized to higher dimensions $d > 1$ by using the so-called tensor product approach, which uses Cartesian products to construct the higher-dimensional grids and tensor products to construct higher-dimensional functions.

The level of the grid is now a vector $\*n = (n_1, \dotsc, n_d)$ of 1D levels $n_t$ ($t = 1, \dotsc, d$). The full grid of level $\*n$ is the Cartesian product

$$\fgset{\*n} := \fgset{n_1} \times \dotsb \times \fgset{n_d}$$

of the one-dimensional grids $\fgset{n_t}$. This means that $\fgset{\*n}$ consists of all possible combinations of the 1D grid points. The $d$-dimensional grid points (the elements of $\fgset{\*n}$) are defined as $\vgp{\*n,\*i} := (\gp{n_1,i_1}, \dotsc, \gp{n_d,i_d})$, where the index $\*i = (i_1, \dotsc, i_d)$ is now a vector, too.

As an example, we want to determine the 2D grid of level $\*n = (2, 1)$. The 1D grid of level $n_1 = 2$ is $\fgset{2} = \{0, 0.25, 0.5, 0.75, 1\}$ and the 1D grid of level $n_2 = 1$ is $\fgset{1} = \{0, 0.5, 1\}$. The 2D grid is now the Cartesian product with all $5 \cdot 3 = 15$ possible combinations:

$$\begin{aligned}\fgset{(2,1)} = \fgset{2} \times \fgset{1} = \{&(0, 0), (0.25, 0), (0.5, 0), (0.75, 0), (1, 0),\\
&(0, 0.5), (0.25, 0.5), (0.5, 0.5), (0.75, 0.5), (1, 0.5),\\
&(0, 1), (0.25, 1), (0.5, 1), (0.75, 1), (1, 1)\}.\end{aligned}$$
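This Cartesian-product construction is a one-liner with `itertools.product`. A sketch reproducing the 15-point example (function names are ours):

```python
from itertools import product

def full_grid_1d(n):
    """1D full grid of level n."""
    return [i * 2.0 ** (-n) for i in range(2 ** n + 1)]

def full_grid(levels):
    """Full grid of level vector `levels`: Cartesian product of 1D grids."""
    return list(product(*(full_grid_1d(n) for n in levels)))

grid = full_grid((2, 1))
print(len(grid))            # 15 = 5 * 3
print((0.25, 0.5) in grid)  # True
```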

Again, each grid point $\vgp{\*n,\*i}$ corresponds to one basis function $\basis{\*n,\*i}$. This multivariate basis function is constructed from the univariate functions $\basis{n_t,i_t}$ as a tensor product:

$$\basis{\*n,\*i}(\*x) := \basis{n_1,i_1}(x_1) \;\cdot\; \dotsm \;\cdot\; \basis{n_d,i_d}(x_d).$$

In other words, to evaluate $\basis{\*n,\*i}(\*x)$ at some point $\*x = (x_1, \dotsc, x_d) \in \clint{0, 1}^d$, we iterate over all dimensions $t = 1, \dotsc, d$, evaluate the 1D basis function of level $n_t$ and index $i_t$ at point $x_t$, and multiply the $d$ resulting values to obtain $\basis{\*n,\*i}(\*x)$.
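Evaluating a multivariate basis function is exactly this loop-and-multiply. A sketch (names are ours, for illustration):

```python
def hat(n, i, x):
    """1D piecewise linear basis function of level n and index i."""
    return max(1.0 - abs(2.0 ** n * x - i), 0.0)

def tensor_basis(levels, indices, x):
    """phi_{n,i}(x): product of the 1D hat functions over all dimensions."""
    value = 1.0
    for n_t, i_t, x_t in zip(levels, indices, x):
        value *= hat(n_t, i_t, x_t)
    return value

# The basis function of level (2, 1) and index (1, 1) equals 1 at its own
# grid point (0.25, 0.5) and vanishes at every other grid point:
print(tensor_basis((2, 1), (1, 1), (0.25, 0.5)))  # 1.0
print(tensor_basis((2, 1), (1, 1), (0.5, 0.5)))   # 0.0
```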

The corresponding interpolant function is simply

$$\surfun(\*x) := \sum_{i_1=0}^{2^{n_1}} \dotsb \sum_{i_d=0}^{2^{n_d}} \coeff{\*n,\*i} \basis{\*n,\*i}(\*x),\quad \coeff{\*n,\*i} \in \real,$$

where $\coeff{\*n,\*i} = \objfun(\vgp{\*n,\*i})$ for the piecewise linear basis (called piecewise $d$-linear in higher dimensions).

The Limits of Full Grids

Obviously, the size of $\fgset{\*n}$ will rapidly grow in higher dimensions. This is because $\fgset{\*n}$ has $(2^{n_1} + 1) \dotsm (2^{n_d} + 1)$ grid points. If $\*n$ is chosen as the same number $n$ in all dimensions, this equals $(2^n + 1)^d \approx 2^{nd}$ grid points, which grows exponentially in $d$ (curse of dimensionality).

This is a problem, since for interpolation we have to know the values $\objfun(\vgp{\*n,\*i})$ of the objective function at each grid point $\vgp{\*n,\*i}$, and evaluations of $\objfun$ are by assumption computationally expensive. Even if evaluations of $\objfun$ are cheap, the sheer number of grid points quickly exhausts all available memory even on large computers already for moderate dimensionalities $d$ and levels $\*n$.

We therefore try to reduce the size of the grid by removing “unimportant” basis functions and their corresponding grid points. Unfortunately, in the nodal basis as introduced above, all basis functions are equally important, because they all look the same.

Thus, it is necessary to perform a change of basis, so that the basis functions and their grid points have different levels of importance, but the same functions (continuous, piecewise linear functions) can be represented by the basis. It suffices to perform this change of basis for the 1D case, since the multivariate case will be treated by the same tensor product approach as used above.

Hierarchy in One Dimension

To this end, we note that the 1D grids are nested in the sense that $\fgset{n-1}$ is contained in $\fgset{n}$. For instance, the grid $\fgset{0} = \{0, 1\}$ of level zero is contained in the grid $\fgset{1} = \{0, 0.5, 1\}$ of level one, which is in turn contained in the grid $\fgset{2} = \{0, 0.25, 0.5, 0.75, 1\}$ of level two, and so on. For each level, only the grid points with odd index $i$ (e.g., $0.25$ and $0.75$ with index $1$ and $3$) are not already contained in the grid of the previous level.

This means that the grid $\fgset{n}$ actually decomposes into $n+1$ incremental grids $\hsset{\ell}$ ($\ell = 0, \dotsc, n$):

$$\fgset{n} = \bigdcup_{\ell=0}^n \,\hsset{\ell}\, = \,\hsset{0} \,\dcup\, \hsset{1} \,\dcup\, \dotsb \,\dcup\, \hsset{n},\quad \hsset{\ell}\, := \{\gp{\ell,i} \mid i \in \hiset{\ell}\},$$

where $\hiset{\ell}$ only contains the odd indices from $\fgset{\ell}$ (except for level zero as a special case):

$$\hiset{\ell} := \begin{cases}\{1, 3, 5, \dotsc, 2^\ell - 1\}&\text{for }\ell \ge 1,\\\{0, 1\}&\text{for }\ell = 0.\end{cases}$$

Here, the symbol “$\dcup$” denotes the disjoint union of sets. It is the same as “$\cup$” with the additional hint that the sets being joined are pairwise disjoint (they do not have points in common).

In short, this can be written as

$$\fgset{n} = \{\gp{\ell,i} \mid 0 \le \ell \le n,\, i \in \hiset{\ell}\}.$$
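The decomposition into incremental grids can be checked directly in code. A sketch for level $n = 2$ (function names are ours):

```python
def hier_indices(l):
    """Index set I_l: odd indices for l >= 1; both endpoints for l = 0."""
    return [0, 1] if l == 0 else list(range(1, 2 ** l, 2))

def incremental_grid(l):
    """Incremental grid H_l: only the points new at level l."""
    return {i * 2.0 ** (-l) for i in hier_indices(l)}

full = {i * 0.25 for i in range(5)}      # 1D full grid of level 2
union = set()
for l in range(3):
    part = incremental_grid(l)
    assert union.isdisjoint(part)        # the pieces are pairwise disjoint
    union |= part
print(union == full)  # True
```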

From a data-structure point of view, this hierarchy of grid points with different levels can be represented by a tree, which is binary from level $\ell = 1$ onward. (For level zero, again a special case has to be employed.) The two children of a grid point $\gp{\ell,i}$ ($\ell \ge 1$) are given by $\gp{\ell+1,2i\pm 1}$.

This grid hierarchy can be applied to the basis functions as well, if we just take the corresponding basis function for each grid point:

$$\surfun(x) := \sum_{\ell=0}^n \sum_{i \in \hiset{\ell}} \surplus{\ell,i} \basis{\ell,i}(x),\quad \surplus{\ell,i} \in \real.$$

If specific conditions are fulfilled, then the involved functions $\{\basis{\ell,i} \mid 0 \le \ell \le n,\, i \in \hiset{\ell}\}$ are also a basis (called hierarchical basis) and span the same function space as the nodal basis. One can show that these conditions are fulfilled for the piecewise linear basis. Of course, the coefficients $\surplus{\ell,i}$ (usually called hierarchical surpluses) are different from the nodal coefficients $\coeff{n,i}$ from above, but the resulting function $\surfun$ is exactly the same.
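The hierarchical surpluses can be computed level by level: each surplus is the value of $\objfun$ at the new grid point minus the contribution of all coarser basis functions there. A 1D Python sketch (names are ours; a naive $O(N^2)$ loop, not the usual linear-time hierarchization):

```python
def hat(l, i, x):
    """1D hat basis function of level l and index i."""
    return max(1.0 - abs(2.0 ** l * x - i), 0.0)

def hierarchize_1d(f, n):
    """Surpluses w_{l,i}: residual of f after all coarser contributions."""
    def hier_indices(l):
        return [0, 1] if l == 0 else list(range(1, 2 ** l, 2))
    w = {}                                # (l, i) -> w_{l,i}
    for l in range(n + 1):
        for i in hier_indices(l):
            x = i * 2.0 ** (-l)
            coarse = sum(wv * hat(lj, ij, x) for (lj, ij), wv in w.items())
            w[(l, i)] = f(x) - coarse
    return w

f = lambda x: x * (1.0 - x)
w = hierarchize_1d(f, 2)
u = lambda x: sum(wv * hat(l, i, x) for (l, i), wv in w.items())
# The hierarchical interpolant reproduces f at all grid points of level 2:
print(all(abs(u(i / 4) - f(i / 4)) < 1e-12 for i in range(5)))  # True
```

Note that the surpluses differ from the nodal coefficients (here $\surplus{1,1} = 0.25$ while $\surplus{2,1} = 0.0625$), yet the interpolant $\surfun$ is the same function.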

Hierarchy in Higher Dimensions

The tensor product approach from the nodal case can also be applied to the hierarchical representation of the one-dimensional grid and basis.

Similarly to the 1D case, the $d$-dimensional full grid $\fgset{\*n}$ decomposes into $(n_1 + 1) \dotsm (n_d + 1)$ incremental grids $\hsset{\*\ell}$:

$$\fgset{\*n} = \bigdcup_{\ell_1=0}^{n_1} \dotsb \bigdcup_{\ell_d=0}^{n_d} \,\hsset{\*\ell},\quad \hsset{\*\ell}\, := \,\hsset{\ell_1}\, \times \dotsb \times \,\hsset{\ell_d}.$$

Again, the incremental grids only contain the grid points with odd indices, i.e., $\hsset{\*\ell}$ can also be formulated as follows:

$$\hsset{\*\ell}\, = \{\vgp{\*\ell,\*i} \mid \*i \in \hiset{\*\ell}\},\quad \hiset{\*\ell} := \hiset{\ell_1} \times \dotsb \times \hiset{\ell_d}.$$

As an example, the grid $\fgset{(2,1)}$ from above, i.e.,

$$\begin{aligned}\fgset{(2,1)} = \{&(0, 0), (0.25, 0), (0.5, 0), (0.75, 0), (1, 0),\\
&(0, 0.5), (0.25, 0.5), (0.5, 0.5), (0.75, 0.5), (1, 0.5),\\
&(0, 1), (0.25, 1), (0.5, 1), (0.75, 1), (1, 1)\},\end{aligned}$$

decomposes as follows:

$$\begin{aligned}\fgset{(2,1)} &= \,\hsset{(0,0)}\, \dcup \,\hsset{(1,0)}\, \dcup \,\hsset{(2,0)}\, \dcup \,\hsset{(0,1)}\, \dcup \,\hsset{(1,1)}\, \dcup \,\hsset{(2,1)},\text{ where}\\
\hsset{(0,0)}\, &= \{(0, 0), (0, 1), (1, 0), (1, 1)\},\\
\hsset{(1,0)}\, &= \{(0.5, 0), (0.5, 1)\},\\
\hsset{(2,0)}\, &= \{(0.25, 0), (0.25, 1), (0.75, 0), (0.75, 1)\},\\
\hsset{(0,1)}\, &= \{(0, 0.5), (1, 0.5)\},\\
\hsset{(1,1)}\, &= \{(0.5, 0.5)\},\\
\hsset{(2,1)}\, &= \{(0.25, 0.5), (0.75, 0.5)\}.\end{aligned}$$
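This 2D decomposition can be verified programmatically. A sketch (names are ours):

```python
from itertools import product

def hier_indices(l):
    """Odd indices for l >= 1; both endpoints for l = 0."""
    return [0, 1] if l == 0 else list(range(1, 2 ** l, 2))

def incremental_grid_2d(l1, l2):
    """H_(l1,l2) = H_l1 x H_l2: Cartesian product of 1D incremental grids."""
    pts1 = [i * 2.0 ** (-l1) for i in hier_indices(l1)]
    pts2 = [i * 2.0 ** (-l2) for i in hier_indices(l2)]
    return set(product(pts1, pts2))

print(incremental_grid_2d(1, 1))  # {(0.5, 0.5)}
print(incremental_grid_2d(2, 1) == {(0.25, 0.5), (0.75, 0.5)})  # True
# The union over all levels (l1, l2) <= (2, 1) recovers the 15-point grid:
union = set().union(*(incremental_grid_2d(l1, l2)
                      for l1 in range(3) for l2 in range(2)))
print(len(union))  # 15
```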

Data-structure-wise, the binary tree from the 1D case becomes a directed acyclic graph (DAG) in the multivariate case, as grid points have multiple direct ancestors; every grid point $\vgp{\*\ell,\*i}$ has $2d$ children, two for each dimension.

The interpolant function is given by

$$\surfun(\*x) := \sum_{\*\ell = \*0}^{\*n} \sum_{\*i \in \hiset{\*\ell}} \surplus{\*\ell,\*i} \basis{\*\ell,\*i}(\*x),\quad \surplus{\*\ell,\*i} \in \real,$$

where the first sum is shorthand for $\sum_{\ell_1=0}^{n_1} \dotsb \sum_{\ell_d=0}^{n_d}$. The hierarchical surpluses $\surplus{\*\ell,\*i}$ are again different from the coefficients $\coeff{\*\ell,\*i}$ for the nodal basis, but the resulting interpolant function $\surfun$ is the same.

Sparse Grids

This hierarchical representation now enables us to assess the contribution/importance of each basis function $\basis{\*\ell,\*i}$ to the resulting $\surfun$.

To this end, we consider again the piecewise linear basis in 1D. The key observation is that the size of the support (basically the set where the function is non-zero) of $\basis{\ell,i}$ is smaller in higher levels: The size of the support of $\basis{\ell,i}$ is $\gp{\ell,i+1} - \gp{\ell,i-1} = 2 \gs{\ell} = 2^{-\ell+1}$, i.e., it halves in size for each level. Hence, basis functions of a very high level (say $\ell \ge 10$) have a very small support, which means that these functions only have little influence on the resulting linear combination $\surfun$, no matter what their coefficient $\surplus{\ell,i}$ is.

Conversely, basis functions of a low level $\ell$ have a large support. The three basis functions of level $\ell \le 1$ even have global support, which means that they influence $\surfun$ globally on the whole domain $\clint{0, 1}$.

In $d$ dimensions, the volume of the support of $\basis{\*\ell,\*i}$ is

$$(2\gs{\ell_1}) \dotsm (2\gs{\ell_d}) = 2^{-\normone{\*\ell} + d},\quad \normone{\*\ell} := \ell_1 + \dotsb + \ell_d,$$

where $\normone{\*\ell}$ is the $1$-norm of $\*\ell$, i.e., the volume decreases with increasing $\normone{\*\ell}$.

Thus, it may be supposed that basis functions with high $\normone{\*\ell}$ contribute only little to the interpolant and can therefore be omitted. This is exactly what sparse grids do: for the definition of the interpolant function, we only use the levels $\*\ell$ whose $1$-norm does not exceed a specific threshold $n \ge 0$:

$$\surfun(\*x) := \sum_{\normone{\*\ell} \le n} \sum_{\*i \in \hiset{\*\ell}} \surplus{\*\ell,\*i} \basis{\*\ell,\*i}(\*x),\quad \surplus{\*\ell,\*i} \in \real.$$

The corresponding grid points form the regular sparse grid $\sgset{n}{d}$ of level $n$:

$$\sgset{n}{d} := \{\vgp{\*\ell,\*i} \mid \normone{\*\ell} \le n,\, \*i \in \hiset{\*\ell}\}.$$
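A regular sparse grid can be assembled from the incremental grids whose level sum does not exceed $n$. A Python sketch (names are ours), together with a size comparison against the full grid in 2D:

```python
from itertools import product

def hier_indices(l):
    """Odd indices for l >= 1; both endpoints for l = 0."""
    return [0, 1] if l == 0 else list(range(1, 2 ** l, 2))

def sparse_grid(n, d):
    """Regular sparse grid of level n in d dimensions: keep |l|_1 <= n."""
    points = set()
    for levels in product(range(n + 1), repeat=d):
        if sum(levels) > n:
            continue                     # discard levels with |l|_1 > n
        grids_1d = ([i * 2.0 ** (-l) for i in hier_indices(l)] for l in levels)
        points |= set(product(*grids_1d))
    return points

for n in (2, 3, 4):
    full = (2 ** n + 1) ** 2             # full grid size in 2D
    print(n, len(sparse_grid(n, 2)), full)  # 17 vs 25, 37 vs 81, 81 vs 289
```

Already in 2D the gap widens quickly with the level; in higher dimensions it becomes dramatic.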

Comparison to Full Grids

For the piecewise linear basis, homogeneous boundary conditions, the full grid of level $n$ (same level in all dimensions), and the corresponding regular sparse grid of level $n$, the following estimates hold for the number of grid points and the $\Ltwo$ interpolation error $\normLtwo{\objfun – \surfun}$:

              Number of grid points     $\Ltwo$ interpolation error
Full grid     $\landauO{2^{nd}}$        $\landauO{2^{-2n}}$
Sparse grid   $\landauO{2^n n^{d-1}}$   $\landauO{2^{-2n} n^{d-1}}$

This means that while the number of grid points is massively reduced as the curse of dimensionality (exponential dependency on $d$) is removed, the $\Ltwo$ interpolation error is only slightly increased by a factor of $n^{d-1}$.

Compared to full grids, sparse grids enable the construction of function interpolants with far fewer (expensive) grid points, at the expense of a slightly increased error.

Continue reading on the next page to learn about adaptive sparse grids.

[1] J. Valentin, “B-Splines for Sparse Grids: Algorithms and Application to Higher-Dimensional Optimization,” PhD thesis, University of Stuttgart, 2019.
