We show how a hierarchical tree called a cover tree can be turned into an asymptotically more efficient one known as a net-tree in linear time. We also introduce two linear-time operations to manipulate cover trees called coarsening and refining.
These operations make a trade-off between tree height and node degree. In this talk, I address a new take on adaptive sampling with respect to the local feature size, i.e., the distance to the medial axis. We recently proved that such samples can be viewed as uniform samples with respect to an alternative metric on the Euclidean space.
The idea of adaptive metrics gives a general way to adapt the critical point theory of distance functions to locally adaptive sampling conditions. In this talk, I address two new ideas in sampling geometric objects. The first is a new take on adaptive sampling with respect to the local feature size, i.e., the distance to the medial axis.
The second is a generalization of Voronoi refinement sampling. There, one also achieves an adaptive sample while simultaneously "discovering" the underlying sizing function. This talk addresses some upper and lower bound techniques for bounding the distortion of mappings between Euclidean metric spaces, including circles, spheres, pairs of lines, triples of planes, and the union of a hyperplane and a point. Nested dissection exploits the underlying topology to do matrix reductions, while persistent homology exploits matrix reductions to reveal the underlying topology.
It seems natural that one should be able to combine these techniques to beat the currently best bound of matrix multiplication time for computing persistent homology. However, nested dissection works by fixing a reduction order, whereas persistent homology generally constrains the ordering according to an input filtration.
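For context, the baseline that both techniques manipulate is the standard left-to-right column reduction of the boundary matrix. A minimal sketch over Z/2, with each column stored as its set of nonzero row indices (the filtration order fixes the column order, which is exactly the constraint discussed above):

```python
def reduce_boundary(columns):
    """Left-to-right column reduction of a Z/2 boundary matrix.

    columns[j] is the set of nonzero row indices of column j, with the
    simplices listed in filtration order.  Returns the reduced columns
    and the birth-death pairs."""
    cols = [set(c) for c in columns]
    low_to_col = {}          # lowest nonzero row -> column owning it
    pairs = {}               # birth simplex -> death simplex
    for j, col in enumerate(cols):
        while col and max(col) in low_to_col:
            col ^= cols[low_to_col[max(col)]]   # add earlier column mod 2
        if col:
            low_to_col[max(col)] = j
            pairs[max(col)] = j  # feature born at max(col) dies at j
    return cols, pairs

# Filtration of a filled triangle: vertices 0,1,2; edges 3=(0,1),
# 4=(1,2), 5=(0,2); then the 2-simplex 6 with boundary {3,4,5}.
boundary = [set(), set(), set(), {0, 1}, {1, 2}, {0, 2}, {3, 4, 5}]
cols, pairs = reduce_boundary(boundary)
```

The third edge reduces to zero (it completes a cycle), and that cycle is then killed by the triangle; reordering the columns freely, as nested dissection would like, is what the filtration forbids.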
Despite this obstruction, we show that it is possible to combine these two theories. This shows that one can improve the computation of persistent homology if the underlying space has some additional structure.
We give reasonable geometric conditions under which one can beat the matrix multiplication bound for persistent homology. In their seminal work on homological sensor networks, de Silva and Ghrist showed the surprising fact that it's possible to certify the coverage of a coordinate-free sensor network even with very minimal knowledge of the space to be covered.
We give a new, simpler proof of the de Silva-Ghrist Topological Coverage Criterion that eliminates any assumptions about the smoothness of the boundary of the underlying space, allowing the results to be applied to much more general problems. The new proof factors the geometric, topological, and combinatorial aspects of this approach. This factoring reveals an interesting new connection between the topological coverage condition and the notion of weak feature size in geometric sampling theory.
We then apply this connection to the problem of showing that for a given scale, if one knows the number of connected components and the distance to the boundary, one can also certify the higher Betti numbers or provide strong evidence that more samples are needed.
This is in contrast to previous work, which merely assumed a good sample and gave no guarantees if the sampling condition was not met. Nested dissection exploits underlying topology to do matrix reductions, while persistent homology exploits matrix reductions to reveal underlying topology. However, nested dissection works by fixing a reduction order, whereas persistent homology generally constrains the ordering.
This shows that one can improve the computation of persistent homology of a filtration by exploiting information about the underlying space. It gives reasonable geometric conditions under which one can beat the matrix multiplication bound for persistent homology.
This is a purely theoretical result, but it's a valuable one. We also give a topological view of nested dissection, which should be interesting to the wider theory community. For example, separators are covers, sparsification is simplification, asymmetric reduction is Morse reduction, and graph separators can be extended to complex separators.
In this talk, I give a gentle introduction to geometric and topological data analysis and then segue into some natural questions that arise when one combines the topological view with the perhaps more well-studied linear algebraic view.
These separators provide a natural way to do divide and conquer in geometric settings. A particularly nice geometric separator algorithm, originally introduced by Miller and Thurston, has three steps and is both simple and elegant. We show that a change of perspective can literally make this algorithm even simpler by eliminating the entire middle step. By computing the centerpoint of the points lifted onto a paraboloid rather than using the stereographic map as in the original method, one can sample the desired sphere directly, without computing the conformal transformation.
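A minimal sketch of the paraboloid version, with two loudly flagged simplifications: a coordinate-wise median stands in for a true centerpoint, and a fixed horizontal cutting plane stands in for the random sphere the real algorithm samples:

```python
def lift(p):
    # Lift a planar point onto the paraboloid z = x^2 + y^2.
    x, y = p
    return (x, y, x * x + y * y)

def crude_center(pts):
    # Coordinate-wise median: a cheap stand-in for a true centerpoint.
    n = len(pts)
    return tuple(sorted(c)[n // 2] for c in zip(*pts))

def sphere_separator(points, normal=(0.0, 0.0, 1.0)):
    """Split points by a plane through the (crude) centerpoint of their
    paraboloid lifts.  The plane meets the paraboloid in a curve whose
    projection is a separating circle; the real algorithm would draw a
    random normal instead of the fixed horizontal plane used here."""
    lifted = [lift(p) for p in points]
    c = crude_center(lifted)
    def side(q):
        return sum(nk * (qk - ck) for nk, qk, ck in zip(normal, q, c))
    inside = [p for p, q in zip(points, lifted) if side(q) <= 0]
    outside = [p for p, q in zip(points, lifted) if side(q) > 0]
    return inside, outside

grid = [(i / 10, j / 10) for i in range(5) for j in range(5)]
inside, outside = sphere_separator(grid)
```

With the horizontal plane, the separating circle is simply x^2 + y^2 = c_z, which makes the "sample the sphere directly" step visible in two lines of arithmetic.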
Despite the simplicity of the algorithm and its analysis, it improves on the state of the art for all inputs with polynomial spread and near-linear output size. The key idea is to first build the Voronoi diagram of a superset of the input points using ideas from Voronoi refinement mesh generation.
Then, the extra points are removed in a straightforward way that allows the total work to be bounded in terms of the output complexity, yielding the output-sensitive bound. The removal only involves local flips and is inspired by kinetic data structures. The word optimal is used in different ways in mesh generation. It could mean that the output is, in some sense, "the best mesh" or that the algorithm is, by some measure, "the best algorithm".
One might hope that the best algorithm also produces the best mesh, but maybe some tradeoffs are necessary. In this talk, I will survey several different notions of optimality in mesh generation and explore the tradeoffs between them.
Note that I have made some corrections from the version presented at SoCG. Specifically, I had presented the analysis of why the feature size integral counts the vertices in a quality mesh. This was meant to be simple, but it ignores the fact that one must first prove the algorithm produces vertices that are spaced according to the feature size. As this may have been confusing, I updated the slides and added a note.
Voronoi diagrams and their duals, Delaunay triangulations, are used in many areas of computing and the sciences. Starting in 3 dimensions, there is a substantial gap between the worst-case size of these structures and the size seen on typical inputs. This motivates the search for algorithms that are output-sensitive rather than relying only on worst-case guarantees. For a wide range of inputs, this is the best known algorithm.
The algorithm is novel in that it turns the classic Delaunay refinement algorithm for mesh generation on its head, working backwards from a quality mesh to the Delaunay triangulation of the input. Along the way, we will see instances of several other classic problems for which no higher-dimensional results are known, including kinetic convex hulls and splitting Delaunay triangulations.
In this talk, I show how mesh generation and topological data analysis are a natural fit. Watch the video here. Moreover, we relate this result to the Vietoris-Rips complex to get an approximation in terms of the persistent homology. The theory of optimal size meshing gives a method for analyzing the output size (number of simplices) of a Delaunay refinement mesh in terms of the integral of a sizing function over the input domain.
The input points define a maximal such sizing function called the feature size. This paper presents a way to bound the feature size integral in terms of an easy-to-compute property of a suitable ordering of the point set. The main idea is to consider the pacing of an ordered point set, a measure of the rate of change in the feature size as points are added one at a time.
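As a toy illustration of the idea, one can track how the nearest-predecessor distance changes as points are added one at a time; the ratio below is a rough proxy, not the paper's definition of pacing:

```python
import math

def insertion_radii(ordered_points):
    # Distance from each point to its nearest predecessor in the order.
    return [min(math.dist(ordered_points[i], q)
                for q in ordered_points[:i])
            for i in range(1, len(ordered_points))]

def pacing_proxy(ordered_points):
    # Largest ratio between consecutive insertion radii; a well-paced
    # ordering keeps this small.  (A rough proxy, not the paper's
    # definition of pacing.)
    r = insertion_radii(ordered_points)
    return max(max(a / b, b / a) for a, b in zip(r, r[1:]))

# An ordering that halves the local scale at each step.
order = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.0), (0.25, 0.0)]
```

An ordering that suddenly drops a point into a region at a much finer scale than the previous insertions would show a large ratio, signaling a poorly paced ordering.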
In previous work, Miller et al. The new analysis tightens bounds from several previous results and provides matching lower bounds. This talk explores upper and lower bounds on sampling rates for determining the homology of a manifold from noisy random samples. The Vietoris-Rips filtration is a versatile tool in topological data analysis. Unfortunately, it is often too large to construct in full.
The constants depend only on the doubling dimension of the metric space and the desired tightness of the approximation. For the first time, this makes it computationally tractable to approximate the persistence diagram of the Vietoris-Rips filtration across all scales for large data sets.
Our approach uses a hierarchical net-tree to sparsify the filtration. We can either sparsify the data by throwing out points at larger scales to give a zigzag filtration, or sparsify the underlying graph by throwing out edges at larger scales to give a standard filtration.
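A sketch of the edge-throwing variant, using a greedy (farthest-point) permutation to give each point an insertion radius; the cutoff rule below, dropping an edge once either endpoint's radius is too small for the scale, is a simplification of the actual sparse Rips condition:

```python
import math

def greedy_permutation(points):
    """Reorder points so each one is as far as possible from all of its
    predecessors, recording each point's insertion radius (its distance
    to the already-chosen prefix)."""
    pts = list(points)
    perm, radii = [pts.pop(0)], [math.inf]
    d = [math.dist(p, perm[0]) for p in pts]
    while pts:
        i = max(range(len(pts)), key=d.__getitem__)
        perm.append(pts[i])
        radii.append(d[i])
        q = pts.pop(i)
        d.pop(i)
        d = [min(dk, math.dist(p, q)) for dk, p in zip(d, pts)]
    return perm, radii

def sparse_edges(perm, radii, eps):
    """Keep the edge (i, j) only while both endpoints are still active:
    a point stops contributing edges beyond scale radii/eps.  This
    cutoff is a simplification of the actual sparse Rips condition."""
    return [(i, j)
            for i in range(len(perm)) for j in range(i + 1, len(perm))
            if math.dist(perm[i], perm[j]) <= min(radii[i], radii[j]) / eps]

points = [(0.0, 0.0), (4.0, 0.0), (2.0, 0.0), (1.0, 0.0), (3.0, 0.0)]
perm, radii = greedy_permutation(points)
edges = sparse_edges(perm, radii, 0.5)
```

Points inserted early have large radii and keep their long edges; points inserted late only ever contribute short edges, which is what keeps the total size near-linear.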
Both methods yield the same guarantees. We present NetMesh, a new algorithm that produces a conforming Delaunay mesh for point sets in any fixed dimension with guaranteed optimal mesh size and quality. The previous best results in the comparison model depended on the log of the spread of the input, the ratio of the largest to smallest pairwise distance among input points.
We generate a hierarchy of well-spaced meshes and use these to show that the complexity of the Voronoi diagram stays linear in the number of points throughout the construction. In this talk, I argue that mesh generation is a good tool to preprocess point clouds for geometric inference and discuss how to compute such meshes.
What is the difference between a mesh and a net? What is the difference between a metric space epsilon-net and a range space epsilon-net? What is the difference between geometric divide-and-conquer and combinatorial divide-and-conquer? In this talk, I will answer these questions and discuss how these different ideas come together to finally settle the question of how to compute conforming point set meshes in optimal time. The meshing problem is to discretize space into as few pieces as possible and yet still capture the underlying density of the input points.
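To make the metric-space notion concrete, a single greedy pass produces a set that is simultaneously an epsilon-covering and an epsilon-packing (a sketch; the range-space notion instead asks for a sample that hits every sufficiently heavy range):

```python
import math

def greedy_net(points, eps):
    """Greedy metric epsilon-net: the result is an eps-covering (every
    input point lies within eps of some net point) and an eps-packing
    (net points are pairwise more than eps apart)."""
    net = []
    for p in points:
        if all(math.dist(p, q) > eps for q in net):
            net.append(p)
    return net

sample = [(0.0, 0.0), (0.5, 0.0), (2.0, 0.0), (2.2, 0.0)]
net = greedy_net(sample, 1.0)
```

The covering property holds because any point not added must have been within eps of an existing net point; the packing property holds by the admission test itself.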
Meshes are fundamental in scientific computing, graphics, and more recently, topological data analysis. This is joint work with Gary Miller and Todd Phillips. Starting from a toy problem, I will show how just thinking about a naive greedy approach leads to a simple derivation of several of the most important theoretical results in the field of mesh generation.
We'll prove classic upper and lower bounds on both the number of balls and the complexity of their interrelationships. Then, we'll relate this problem to a similar one called the Fat Voronoi Problem, in which we try to find point sets such that every Voronoi cell is fat: the ratio of the radii of the largest contained ball to the smallest containing ball is bounded.
This problem has tremendous promise in the future of mesh generation, as it can circumvent the classic lower bounds presented in the first half of the talk.
Unfortunately, the simple approach no longer works. In the end, we will show that the number of neighbors of any cell in a Fat Voronoi Diagram in the plane is bounded by a constant (if you think that's obvious, spend a minute trying to prove it). We'll also talk a little about the higher-dimensional version of the problem and its wide range of applications.
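A minimal sketch of checking fatness for a single convex cell, using balls centered at the cell's site as a crude proxy (the true measure allows the contained and containing balls to be centered anywhere):

```python
import math

def point_segment_dist(p, a, b):
    # Distance from point p to the segment from a to b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy)
                          / (dx * dx + dy * dy)))
    return math.dist(p, (ax + t * dx, ay + t * dy))

def fatness_proxy(site, cell):
    """Ratio of the largest ball around the site inside the cell to the
    smallest ball around the site containing it.  Centering both balls
    at the site is a simplification of the true fatness measure."""
    n = len(cell)
    inner = min(point_segment_dist(site, cell[i], cell[(i + 1) % n])
                for i in range(n))
    outer = max(math.dist(site, v) for v in cell)
    return inner / outer

# A square cell with its site at the center: ratio 1/sqrt(2).
square = [(-1.0, -1.0), (1.0, -1.0), (1.0, 1.0), (-1.0, 1.0)]
```

A long, skinny cell drives the ratio toward zero, which is exactly the degeneracy the Fat Voronoi condition rules out.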
In topological inference, the goal is to extract information about a shape, given only a sample of points from it. There are many approaches to this problem, but the one we focus on is persistent homology. We get a view of the data at different scales by imagining the points are balls and considering different radii. The shape information we want comes in the form of a persistence diagram, which describes the components, cycles, bubbles, etc. in the space that persist over a range of different scales.
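For the zero-dimensional part of the diagram, the computation is especially simple: as the balls grow, components merge exactly along minimum spanning tree edges, so Kruskal's algorithm with union-find recovers every finite bar. A minimal sketch:

```python
import math
from itertools import combinations

def zeroth_persistence(points):
    """Death times of the 0-dimensional features of the Vietoris-Rips
    filtration: components merge along minimum spanning tree edges, so
    Kruskal's algorithm with union-find finds every finite bar (one
    component never dies)."""
    parent = list(range(len(points)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    deaths = []
    for d, i, j in sorted((math.dist(p, q), i, j)
                          for (i, p), (j, q)
                          in combinations(enumerate(points), 2)):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)   # two components merge: one bar dies
    return deaths
```

Every point is born at scale zero, so the bars here are (0, d) for each minimum spanning tree edge length d, plus one infinite bar for the surviving component.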
We reduce this complexity to O(n) (hiding some large constants depending on d) by using ideas from mesh generation. This talk will not assume any knowledge of topology. In this light talk, I give a high-level view of some of my recent research in using ideas from mesh generation to lower the complexity of computing persistent homology in geometric settings. Because this talk is for a general audience, I will focus on three related applications (where related is interpreted loosely) that I think have the widest appeal.
There are many depth measures on point sets that yield centerpoint theorems.
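In one dimension, for example, the centerpoint guarantee of Tukey depth at least n/(d+1) is realized by the median. A tiny sketch:

```python
def tukey_depth_1d(x, data):
    # Tukey depth in 1D: the smaller of the two closed-halfline counts.
    return min(sum(1 for v in data if v <= x),
               sum(1 for v in data if v >= x))

data = [5, 1, 9, 3, 7]
median = sorted(data)[len(data) // 2]
depth = tukey_depth_1d(median, data)
```

With d = 1 the centerpoint theorem promises depth at least n/2, and the median achieves it; different depth measures change what "deep" means but yield analogous guarantees.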