
Sample abundance

Signal-processing paradigm that trades precision for volume of measurements

From Wikipedia, the free encyclopedia


Sample abundance is a signal processing paradigm in which very large numbers of low-precision measurements—often one-bit samples produced by comparators with time-varying thresholds—are leveraged to recover signals or parameters with high fidelity and reduced computational cost.[1] Instead of enforcing difficult constraints (e.g., positive-semidefiniteness or low rank) during reconstruction, many problems under sample abundance are reformulated as overdetermined linear feasibility tasks defined by half-space inequalities. With sufficiently many binary measurements, these inequalities confine the solution to a small polyhedral region around the ground truth, making formerly essential constraints unnecessary. Beyond a critical number of samples, the algorithmic load collapses suddenly—a phenomenon that may be referred to as the sample abundance singularity.[2]

[Figure: Conceptual illustration of a one-bit polyhedron (black lines) formed by half-spaces from binary comparisons. As samples increase, the feasible region shrinks around the true solution (purple) and can lie entirely inside a broader constraint set (orange).]

Background

One-bit and few-bit analog-to-digital converters (ADCs) are attractive in applications such as massive MIMO and radar because comparators are inexpensive, fast, and power-efficient. Introducing dither or time-varying thresholds allows binary signs to retain sufficient statistical information for estimation, including covariance and spectrum recovery via generalized arcsine-law results.[3][4] Hybrid architectures such as Unlimited One-Bit Sampling (UNO) combine modulo folding with one-bit thresholds to further increase dynamic range while retaining low hardware cost.[5] Related efforts span low-resolution MIMO channel estimation and radar processing with binary data.[6][7]
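As a minimal numerical sketch of why time-varying thresholds preserve amplitude information, the snippet below (an illustrative setup, not taken from the cited works) estimates a scalar value from one-bit comparisons against uniformly random thresholds, using the standard identity E[sgn(y − τ)] = y/T when τ is uniform on [−T, T]:

```python
import numpy as np

rng = np.random.default_rng(0)
y = 0.7          # true sample value (illustrative)
T = 2.0          # dither range; requires |y| <= T
m = 200_000      # number of one-bit comparisons

tau = rng.uniform(-T, T, size=m)   # time-varying thresholds
signs = np.sign(y - tau)           # one-bit samples in {-1, +1}

# for uniform dither on [-T, T], E[sign(y - tau)] = y / T, so:
y_hat = T * signs.mean()
print(abs(y_hat - y))              # estimation error shrinks like 1/sqrt(m)
```

Without dither (a fixed threshold), the signs would reveal only whether y exceeds the threshold; randomizing the threshold is what lets the average of many cheap binary samples recover the amplitude itself.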


Definition

Let a one-bit sample c = sgn(y − τ) be obtained by comparing a measurement y with a threshold τ. Each observation yields the linear inequality c·(y − τ) ≥ 0. Stacking many samples and writing y = Ax for linear sensing gives the one-bit polyhedron:

P = { x : c_i (a_iᵀx − τ_i) ≥ 0, i = 1, …, m },

where the matrix with rows c_i a_iᵀ collects signed rows of A and τ stacks the threshold terms. Under sample abundance (many more inequalities than unknowns), P typically has finite volume near the ground truth and shrinks as more samples are added.[1]
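The construction above can be sketched numerically; the Gaussian sensing matrix, Gaussian thresholds, and dimensions below are illustrative assumptions rather than choices from the cited work:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 2000
x_true = rng.standard_normal(n)

A = rng.standard_normal((m, n))    # Gaussian sensing matrix (assumed)
tau = rng.standard_normal(m)       # time-varying thresholds (assumed)
c = np.sign(A @ x_true - tau)      # one-bit samples

# the ground truth satisfies every half-space inequality c_i (a_i^T x - tau_i) >= 0
assert np.all(c * (A @ x_true - tau) >= 0)

# a nearby but distinct point violates some inequalities once m is large,
# which is how the polyhedron localizes the solution
x_off = x_true + 0.1 * rng.standard_normal(n)
violations = int(np.sum(c * (A @ x_off - tau) < 0))
print(violations)
```

Each violated inequality cuts the perturbed point out of the feasible set; with abundant samples, essentially every point except a small neighborhood of the truth is cut.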


Sample abundance singularity

The sample abundance singularity refers to the observed regime change in which, after a problem-dependent measurement threshold is exceeded, computational requirements collapse from non-convex or constrained programs (e.g., semidefinite or rank-constrained formulations) to simple projections onto linear half-spaces. In this regime, enforcing positive semidefiniteness, rank, or sparsity may become unnecessary because the polyhedral feasible set already localizes the solution to within the desired accuracy.[8][1]

Mathematical formulation and examples

Phase retrieval

With quadratic measurements y_i = |a_iᴴx|² known only through one-bit comparisons to thresholds τ_i, each binary sample c_i = sgn(y_i − τ_i) imposes an inequality in the lifted variable X = xxᴴ:

c_i (tr(a_i a_iᴴ X) − τ_i) ≥ 0.

Beyond ample sampling, explicit PSD and rank-one constraints used by semidefinite programs (e.g., PhaseLift) can be omitted in practice.[8][9]
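A small numerical check of the lifting step (with arbitrary random complex vectors, purely for illustration) confirms that the quadratic measurement |aᴴx|² is linear in X = xxᴴ, so each one-bit comparison is indeed a half-space in the lifted space:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)
tau = 1.0                                 # threshold (illustrative)

y = np.abs(np.vdot(a, x)) ** 2            # quadratic measurement |a^H x|^2
X = np.outer(x, x.conj())                 # lifted variable X = x x^H
y_lifted = np.real(np.trace(np.outer(a, a.conj()) @ X))
assert np.isclose(y, y_lifted)            # the measurement is linear in X

c = np.sign(y - tau)                      # one-bit sample
assert c * (y_lifted - tau) >= 0          # half-space inequality in lifted space
```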

Low-rank matrix sensing

For linear measurements y_i = tr(A_iᵀX), one-bit signs c_i = sgn(y_i − τ_i) define a polyhedron in the matrix space; with abundant samples, nuclear-norm or rank constraints can become optional to enforce.[10]
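A sketch of the matrix setting, with an assumed rank-one ground truth and Gaussian sensing matrices (both illustrative choices), showing that a different rank-one matrix already violates some half-space inequalities without any rank penalty being enforced:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 6, 500
u = rng.standard_normal(n)
X_true = np.outer(u, u)                  # rank-one ground truth (assumed)

A = rng.standard_normal((m, n, n))       # sensing matrices A_k (assumed)
tau = rng.standard_normal(m)
y = np.einsum('kij,ij->k', A, X_true)    # y_k = tr(A_k^T X_true)
c = np.sign(y - tau)                     # one-bit signs

# a different matrix is cut out by the signed inequalities alone
v = rng.standard_normal(n)
X_off = np.outer(v, v)
y_off = np.einsum('kij,ij->k', A, X_off)
violations = int(np.sum(c * (y_off - tau) < 0))
print(violations)
```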

Compressed sensing

Given y = Ax and signs c = sgn(y − τ), the feasible set

P = { x : c_i (a_iᵀx − τ_i) ≥ 0, i = 1, …, m }

can localize a sparse vector without explicit ℓ₁-norm minimization when many binary comparisons are available.[11][12]


Algorithms

Because sample abundance yields overdetermined systems of linear inequalities, projection methods are natural. The Randomized Kaczmarz Algorithm (RKA) selects a row at random and projects the iterate; for consistent systems it converges linearly in expectation at a rate governed by a scaled condition number.[13][14] The Sampling Kaczmarz–Motzkin (SKM) method draws a mini-batch of rows, chooses the most violated constraint, and projects, often accelerating convergence on large systems.[15] Unrolled and plug-and-play variants tailored to one-bit data have also been reported for speed and robustness.[1]
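A compact sketch of the SKM idea applied to a one-bit compressed-sensing polyhedron; the dimensions, sparsity level, batch size, and iteration count below are illustrative choices, not parameters from the cited papers. Starting from zero, the iterate is repeatedly projected onto the most violated half-space in a random mini-batch:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 20, 8000
x_true = np.zeros(n)
x_true[:3] = [1.0, -2.0, 0.5]            # sparse ground truth (illustrative)

A = rng.standard_normal((m, n))
tau = rng.standard_normal(m)
c = np.sign(A @ x_true - tau)            # one-bit samples
B = c[:, None] * A                       # signed rows: feasibility is B x >= b
b = c * tau

x = np.zeros(n)
for _ in range(20000):
    idx = rng.integers(0, m, size=50)    # draw a mini-batch of rows
    resid = B[idx] @ x - b[idx]
    j = idx[np.argmin(resid)]            # most violated constraint in batch
    gap = b[j] - B[j] @ x
    if gap > 0:                          # project only onto violated half-spaces
        x += (gap / (B[j] @ B[j])) * B[j]

err = np.linalg.norm(x - x_true)
print(err)
```

Note that no ℓ₁ penalty or sparsity constraint appears anywhere: with this many binary comparisons, simple half-space projections alone drive the iterate close to the sparse truth.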


Finite volume property

The Finite Volume Property (FVP) provides sample-complexity bounds ensuring that the polyhedron formed by one-bit inequalities has small volume (e.g., lies inside an ε-ball around the truth). For isotropic measurements, one set of results implies that on the order of

m ≳ n/ε

samples yield error at most ε, with improved scaling when the signal belongs to structured sets (e.g., sparse vectors or low-rank matrices) whose Kolmogorov entropies are smaller.[1][16] These guarantees help explain why explicit PSD, rank, or sparsity constraints can become redundant once the number of binary comparisons exceeds a problem-dependent threshold.[1]
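The shrinking feasible volume can be illustrated empirically with a crude Monte-Carlo proxy (Gaussian measurements and thresholds are assumed for illustration): sample points near the truth and count what fraction survives all one-bit inequalities as m grows.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
x_true = rng.standard_normal(n)

def feasible_fraction(m, trials=2000, spread=0.1):
    """Fraction of random points near x_true that satisfy all m
    one-bit half-space inequalities (a Monte-Carlo volume proxy)."""
    A = rng.standard_normal((m, n))
    tau = rng.standard_normal(m)
    c = np.sign(A @ x_true - tau)
    pts = x_true + spread * rng.standard_normal((trials, n))
    return np.all(c * (pts @ A.T - tau) >= 0, axis=1).mean()

f50, f5000 = feasible_fraction(50), feasible_fraction(5000)
print(f50, f5000)    # the surviving fraction drops sharply as m grows
```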


Applications

  • Low-resolution receivers in massive MIMO communications and detection.[17]
  • Radar and automotive sensing, including covariance-based DOA and one-bit Hankel matrix completion.[18][19]
  • Unlimited/one-bit sampling for bandlimited and finite-rate-of-innovation signals, and HDR imaging with sweeping thresholds.[20]
