lapy.shapedna

Functions for computing and comparing Laplace spectra.

Includes code for solving the anisotropic Laplace-Beltrami eigenvalue problem as well as functions for normalization and comparison of Laplace spectra (shapeDNA descriptors).

The shapeDNA is a descriptor based on the eigenvalues and eigenvectors of the Laplace-Beltrami operator and can be used for shape analysis and comparison.

lapy.shapedna.compute_distance(ev1, ev2, dist='euc')

Compute the shape dissimilarity between two shapeDNA descriptors.

Computes a distance metric between two sets of eigenvalues to quantify the dissimilarity between two shapes.

Parameters:
ev1 : np.ndarray

First set of sorted eigenvalues, shape (k,).

ev2 : np.ndarray

Second set of sorted eigenvalues, shape (k,).

dist : {‘euc’}, default=’euc’

Distance measure. Currently only ‘euc’ (Euclidean) is implemented.

Returns:
float

Distance between the eigenvalue arrays.

Raises:
ValueError

If dist is not ‘euc’ (other distance metrics not yet implemented).

Notes

The eigenvalue arrays should have the same length and be normalized and reweighted in the same way for meaningful comparison.
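
Examples

A minimal end-to-end sketch comparing a unit cube surface with a uniformly scaled copy. The vertex and triangle arrays below are illustrative; both spectra are normalized and reweighted identically before the distance is taken, so the result should be close to zero:

>>> import numpy as np
>>> from lapy import TriaMesh, shapedna
>>> points = np.array([[0, 0, 0], [0, 1, 0], [1, 1, 0], [1, 0, 0],
...                    [0, 0, 1], [0, 1, 1], [1, 1, 1], [1, 0, 1]], dtype=float)
>>> trias = np.array([[0, 1, 2], [2, 3, 0], [4, 5, 6], [6, 7, 4],
...                   [0, 4, 7], [7, 3, 0], [0, 4, 5], [5, 1, 0],
...                   [1, 5, 6], [6, 2, 1], [3, 7, 6], [6, 2, 3]])
>>> mesh_a = TriaMesh(points, trias)
>>> mesh_b = TriaMesh(2.0 * points, trias)   # same shape, uniformly scaled
>>> ev_a = shapedna.compute_shapedna(mesh_a, k=5)['Eigenvalues']
>>> ev_b = shapedna.compute_shapedna(mesh_b, k=5)['Eigenvalues']
>>> ev_a = shapedna.reweight_ev(shapedna.normalize_ev(mesh_a, ev_a))
>>> ev_b = shapedna.reweight_ev(shapedna.normalize_ev(mesh_b, ev_b))
>>> d = shapedna.compute_distance(ev_a, ev_b)   # near zero after area normalization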

lapy.shapedna.compute_shapedna(geom, k=50, lump=False, aniso=None, aniso_smooth=10, use_cholmod=False)

Compute the shapeDNA descriptor for triangle or tetrahedral meshes.

The shapeDNA descriptor consists of the eigenvalues and eigenvectors of the Laplace-Beltrami operator and can be used for shape analysis and comparison.

Parameters:
geom : TriaMesh | TetMesh

Mesh geometry.

k : int, default=50

Number of eigenvalues/eigenvectors to compute.

lump : bool, default=False

If True, lump the mass matrix (diagonal). See lapy.Solver class.

aniso : float | tuple of shape (2,) | None, default=None

Anisotropy for curvature-based anisotropic Laplace. See lapy.Solver class.

aniso_smooth : int, default=10

Number of smoothing iterations for curvature computation on vertices. See lapy.Solver class.

use_cholmod : bool, default=False

If True, attempts to use the Cholesky decomposition for improved execution speed. Requires the scikit-sparse library; if it cannot be found, an error is raised. If False, the slower LU decomposition is used.

Returns:
dict

A dictionary with the following keys:

  • ‘Refine’ : int - Refinement level (0)

  • ‘Degree’ : int - Polynomial degree (1)

  • ‘Dimension’ : int - Mesh dimension (2 for TriaMesh, 3 for TetMesh)

  • ‘Elements’ : int - Number of mesh elements

  • ‘DoF’ : int - Degrees of freedom (number of vertices)

  • ‘NumEW’ : int - Number of eigenvalues computed

  • ‘Eigenvalues’ : np.ndarray - Array of eigenvalues, shape (k,)

  • ‘Eigenvectors’ : np.ndarray - Array of eigenvectors, shape (n_vertices, k)
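
Examples

A minimal sketch on a small cube surface mesh. The vertex and triangle arrays are illustrative; any TriaMesh or TetMesh can be passed instead:

>>> import numpy as np
>>> from lapy import TriaMesh, shapedna
>>> points = np.array([[0, 0, 0], [0, 1, 0], [1, 1, 0], [1, 0, 0],
...                    [0, 0, 1], [0, 1, 1], [1, 1, 1], [1, 0, 1]], dtype=float)
>>> trias = np.array([[0, 1, 2], [2, 3, 0], [4, 5, 6], [6, 7, 4],
...                   [0, 4, 7], [7, 3, 0], [0, 4, 5], [5, 1, 0],
...                   [1, 5, 6], [6, 2, 1], [3, 7, 6], [6, 2, 3]])
>>> mesh = TriaMesh(points, trias)
>>> ev = shapedna.compute_shapedna(mesh, k=3)
>>> evals = ev['Eigenvalues']       # shape (3,)
>>> evecs = ev['Eigenvectors']      # shape (n_vertices, 3), here (8, 3)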

lapy.shapedna.normalize_ev(geom, evals, method='geometry')

Normalize eigenvalues for a surface or a volume.

Normalizes eigenvalues to account for different mesh sizes and dimensions, enabling meaningful comparison between different shapes.

Parameters:
geom : TriaMesh or TetMesh

Mesh geometry.

evals : np.ndarray

Set of sorted eigenvalues, shape (k,).

method : {‘surface’, ‘volume’, ‘geometry’}, default=’geometry’

Normalization method:

  • ‘surface’: Normalize by surface area (for 2D surfaces)

  • ‘volume’: Normalize by enclosed volume (for 3D objects)

  • ‘geometry’: Automatically choose surface for TriaMesh, volume for TetMesh

Returns:
np.ndarray

Vector of re-weighted eigenvalues, shape (k,).

Raises:
ValueError

If method is not one of ‘surface’, ‘volume’, or ‘geometry’. If geometry type is unsupported for the chosen normalization. If the measure (area/volume) is not positive.
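
Examples

A minimal sketch using a single right triangle of area 0.5 and made-up eigenvalues; for a TriaMesh the default ‘geometry’ method falls back to surface-area normalization:

>>> import numpy as np
>>> from lapy import TriaMesh, shapedna
>>> tria = TriaMesh(np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]),
...                 np.array([[0, 1, 2]]))
>>> evals = np.array([0.0, 1.5, 3.0])     # illustrative sorted eigenvalues
>>> evals_n = shapedna.normalize_ev(tria, evals, method='geometry')   # shape (3,)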

lapy.shapedna.reweight_ev(evals)

Apply linear re-weighting to eigenvalues.

Divides each eigenvalue by its 1-based index to reduce the influence of higher eigenvalues, which tend to be less stable.

Parameters:
evals : np.ndarray

Set of sorted eigenvalues, shape (k,).

Returns:
np.ndarray

Vector of re-weighted eigenvalues, shape (k,). Each eigenvalue is divided by its 1-based index: evals[i] / (i+1).

Notes

This reweighting scheme gives less importance to higher eigenvalues, which are typically more sensitive to discretization and numerical errors.
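
Examples

A minimal sketch with made-up eigenvalues, showing the division by the 1-based index described above:

>>> import numpy as np
>>> from lapy import shapedna
>>> evals = np.array([0.0, 2.0, 6.0, 12.0])   # illustrative sorted eigenvalues
>>> evals_rw = shapedna.reweight_ev(evals)    # [0.0, 1.0, 2.0, 3.0]
>>> # equivalent to evals / np.arange(1, evals.size + 1)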