As spacecraft stream toward the Moon and back, the space between them remains a dimly lit frontier where faint travelers can slip by unseen and critical clues vanish into noise. That mismatch between activity and awareness has sharpened a new urgency: the region from geosynchronous orbit out to lunar distance is vast, complex, and poorly watched, yet it is becoming central to national missions and commercial logistics. Into that gap steps a Defense Advanced Research Projects Agency effort with an unusually candid proposition: better algorithms, not bigger telescopes, can unlock a persistent picture of the Earth–Moon system. The initiative, dubbed “Track at Big Distances with Track-Before-Detect,” pursues a space-based way to spot and follow meter-class objects up to roughly 2 gigameters (about 1.2 million miles) away, refresh the view in hours rather than days, and do it all within tight power, mass, and aperture constraints. It treats cislunar surveillance as a systems problem spanning geometry, sensing, computation, and operations, one in which the decisive variable is how skillfully faint motion can be extracted from noisy data at the edge of detection.
What TBD2 Is Trying to Achieve
The stated ambition is straightforward but audacious: continuous, space-based detection and tracking of small resident space objects across most of cislunar space at cadences relevant to navigation safety and security. The envisioned payload would detect targets about a meter across at million-mile ranges, maintain a rolling catalog of tracks, and refresh a wide-area picture in under 12 hours. That refresh cycle matters because transit windows are tight, handoffs between orbits are time-sensitive, and an object’s apparent motion can change quickly with geometry. Rather than treat each image as a standalone event, the concept looks for coherent motion first, then elevates a candidate to a confirmed detection. In practice, this pivot reverses the normal order of operations: estimate trajectory hypotheses across many frames, align the data along those paths, and only then declare what is real.
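To see what that goal means radiometrically, a standard asteroid-photometry relation gives a rough brightness for a meter-class object at that range; the albedo and zero-phase geometry in the sketch below are illustrative assumptions, not program values.

```python
import math

# Rough brightness of a 1 m object seen at 2 Gm, using the standard
# asteroid absolute-magnitude relation H = 5*log10(1329 / (D*sqrt(p))),
# with D in km and p the geometric albedo. Albedo and phase are assumptions.
diameter_km = 0.001          # 1 m object
albedo = 0.2                 # assumed geometric albedo
range_au = 2e9 / 1.496e11    # 2 gigameters, in astronomical units
sun_dist_au = 1.0            # assume the target is ~1 AU from the Sun

H = 5 * math.log10(1329 / (diameter_km * math.sqrt(albedo)))
# Apparent magnitude at zero phase angle (fully illuminated, optimistic)
m = H + 5 * math.log10(sun_dist_au * range_au)
print(f"H = {H:.1f}, apparent magnitude ~ {m:.1f}")
# ~magnitude 23: far too faint for single-frame detection with a small
# aperture, which is why coherent multi-frame methods sit on the critical path.
```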
That reversal enables sensitivity beyond single-frame thresholds, tapping signal that never rises above background in any one exposure. It also supports a tiered architecture: a primary sensor at a deep-space vantage scanning the bulk of the volume, complemented by nearer assets that specialize in fast movers and smaller targets. The program sets unusually crisp performance lines—probability of detection above 95% with false alarms under 1%—because a surveillance picture that cries wolf is as unworkable as one that misses real objects. Those numbers frame every subsequent design choice, from optical throughput and detector readout to how much onboard compute can be budgeted for hypothesis testing. The aim is not a demo; it is a blueprint for operational awareness that is both persistent and trustworthy under the harsh realities of deep space.
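A toy single-pixel model makes the coupling between those two numbers concrete. Assuming textbook Gaussian noise statistics (a simplification; real clutter is richer) and reading the 1% line as a per-test false-alarm probability, the required signal-to-noise ratio follows directly, as does the square-root-of-N payoff from integrating many frames:

```python
from scipy.stats import norm

# Matched-filter detection in Gaussian noise (an illustrative assumption):
# Pd = Q(Q^-1(Pfa) - SNR)  =>  SNR = Q^-1(Pfa) - Q^-1(Pd), with Q^-1 = norm.isf.
p_detect = 0.95
p_false_alarm = 0.01   # reading "under 1%" as a per-test probability

snr_required = norm.isf(p_false_alarm) - norm.isf(p_detect)
print(f"single-look SNR needed: {snr_required:.2f}")   # ~3.97

# Coherent integration of N frames grows SNR by sqrt(N), so the per-frame
# SNR can sit well below the single-look threshold -- the premise of
# track-before-detect.
for n_frames in (16, 64, 256):
    per_frame = snr_required / n_frames**0.5
    print(f"{n_frames:4d} frames -> per-frame SNR ~ {per_frame:.2f}")
```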
Why Cislunar Surveillance Is Hard and Urgent
The sheer scale is the first obstacle. The volume from GEO out to lunar distance is roughly 1,200 times the volume enclosed by GEO, and objects within it do not move like satellites on near-circular Earth orbits. They follow trajectories shaped by three-body dynamics, solar radiation pressure, and frequent maneuvers, producing apparent motions that vary with viewing geometry in ways that foil simplistic tracking. Ground telescopes contribute indispensable coverage, but they live under weather, daylight, and the atmosphere's blurring hand, which together carve duty-cycle gaps and limit sensitivity for very faint targets seen at high phase angles. Even when the skies cooperate, observation geometry from Earth is often suboptimal for objects near the Moon or out along the Earth–Moon line.
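That 1,200-fold figure is consistent with a surveillance sphere reaching a bit past the Moon toward the outer Earth–Moon Lagrange region; a quick cube-law check, with the outer radius as an assumption about the figure's basis:

```python
# Back-of-envelope check on the volume comparison. Volume scales with the
# cube of radius, so the outer radius choice dominates; extending somewhat
# past the Moon's ~384,400 km mean distance (an assumption about how the
# figure was computed) reproduces the ~1,200x number.
GEO_RADIUS_KM = 42_164
OUTER_RADIUS_KM = 450_000

ratio = (OUTER_RADIUS_KM / GEO_RADIUS_KM) ** 3
print(f"cislunar/GEO volume ratio ~ {ratio:,.0f}")   # ~1,216
```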
The urgency grows as the number of actors increases. Lunar relays, landers, sample-return craft, and commercial transit vehicles are shifting operational risk into cislunar space, where collision probabilities and navigational uncertainties scale with the unknown. At the same time, strategic ambiguity becomes easier to exploit when assets can maneuver far from persistent eyes. Small spacecraft loitering near Earth–Moon Lagrange regions or transiting through poorly watched corridors could approach high-value systems without early warning. The result is a multi-layered requirement: reduce safety risk by removing blind spots, reduce ambiguity by cataloging faint movers, and reduce friction by standardizing a shared picture that multiple stakeholders can trust. That is the field into which a space-based, algorithm-forward approach promises practical gains.
The SEL1 Vantage and Its Limits
The proposal’s anchor point is the Sun–Earth Lagrange Point 1, nearly 930,000 miles sunward of Earth. From SEL1, a sensor can keep the Sun behind it, mitigating stray light, while maintaining an unobstructed view of most of the Earth–Moon system. This geometry enables long, stable looks across transit lanes and lunar orbital regions without the intermittent visibility and atmospheric penalties that bind ground observatories. Duty cycle becomes a function of onboard scheduling and power, not cloud cover or day-night cycles. Equally important, SEL1 offers a coherent reference frame for motion analysis: objects sweep across the field with consistent parallax and phase behavior, which algorithmic pipelines can exploit to sharpen track estimates.
Yet SEL1 is not a cure-all. The far side of the Moon is a physical occlusion, parts of the Earth–Moon corridor can fall into less favorable phase geometries, and some zones, especially near lunar orbits, feature apparent motions so fast that a distant vantage becomes a handicap. Regions near the Earth–Moon Lagrange points EML1 and EML2 can host loitering spacecraft whose low relative motion blends into background clutter from the SEL1 perspective, creating opportunities for confusion unless complemented by other views. These limits are not disqualifying; rather, they establish where a single distant sensor's strengths taper and where auxiliary assets would add the most value. A realistic architecture acknowledges those blind spots up front and designs a coverage plan that treats SEL1 as a hub, not as a solitary sentinel.
A Layered Architecture to Close Blind Spots
The architecture that follows from those realities looks layered by design. A common payload and computing stack would be deployed first at SEL1 for wide-area surveillance and then replicated in nearer cislunar orbits—roughly 124,000 to 248,000 miles from Earth or the Moon—where it can hunt smaller, faster movers and fill in occluded regions. Nearer nodes gain sensitivity to objects in the four- to eight-inch class at shorter ranges and can revisit dynamic zones at higher cadence, tightening custody on objects whose tracks are otherwise intermittent. The choice to keep the hardware and software common across layers is pragmatic: it eases production, simplifies updates, and allows data fusion without wrangling incompatible formats or behaviors.
In operation, the SEL1 asset would generate a rolling catalog of candidates and tracks, prioritizing sectors where geometry shifts are most informative, while the nearer nodes would focus on corridors and lunar orbits that need high-cadence attention. Cross-cueing becomes central: a hint from SEL1 can steer a nearer sensor into a small search box, cutting computation and exposure time, while a short-range detection can seed a track that SEL1 then sustains over long arcs. This choreography turns a set of modest sensors into a surveillance mesh whose aggregate output is better than any single node could deliver. It also builds resilience: if one vantage is compromised, the others can preserve continuity in the traffic picture and prevent gaps from widening into unknowns.
Track-Before-Detect and the Compute Gap
The heart of the approach is track-before-detect, a family of methods that collects faint signal across many frames along hypothesized motion vectors before issuing a detection. Synthetic tracking variants have shown that even sub-threshold targets can be lifted above noise by coherent integration if the assumed motion is close to truth. In cislunar space, however, the search space is large: velocities, accelerations, and apparent directions span wide domains over multi-hour windows, and the field may host multiple faint movers at once. Naively enumerating all possibilities would bury a spacecraft in computation, swamping a processor budget that in deep space tends to be counted in tens or hundreds of gigaFLOPs at best.
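A minimal shift-and-stack sketch shows both the power and the cost of the approach: every candidate velocity requires re-aligning and summing the entire frame stack before a threshold can be applied. The frame sizes, velocity grid, and injected target below are illustrative, not program parameters.

```python
import numpy as np

def shift_and_stack(frames, vx, vy, dt=1.0):
    """Sum a stack of frames along one hypothesized image-plane velocity.

    frames: (T, H, W) array of sky-subtracted exposures.
    vx, vy: hypothesized motion in pixels per unit time.
    A target moving at (vx, vy) adds coherently; noise grows only as sqrt(T).
    """
    T, H, W = frames.shape
    stack = np.zeros((H, W))
    for t in range(T):
        # np.roll is a crude integer-pixel alignment; real pipelines
        # interpolate sub-pixel shifts.
        dy, dx = round(vy * t * dt), round(vx * t * dt)
        stack += np.roll(frames[t], shift=(-dy, -dx), axis=(0, 1))
    return stack / np.sqrt(T)   # keep noise at unit scale for thresholding

def track_before_detect(frames, velocity_grid, threshold=5.0):
    """Brute-force TBD: test every velocity hypothesis, then threshold."""
    detections = []
    for vx, vy in velocity_grid:
        stack = shift_and_stack(frames, vx, vy)
        ys, xs = np.where(stack > threshold)
        detections += [(vx, vy, y, x, stack[y, x]) for y, x in zip(ys, xs)]
    return detections

# Illustrative run: a mover at SNR ~1 per frame, invisible in any single
# exposure, pops out at ~8 sigma once 64 frames are stacked along its track.
rng = np.random.default_rng(0)
T, H, W = 64, 128, 128
frames = rng.normal(0.0, 1.0, (T, H, W))
for t in range(T):
    # true motion (vx, vy) = (0.5, 0.25) pixels per frame
    frames[t, 40 + round(0.25 * t), 30 + round(0.5 * t)] += 1.0

grid = [(vx / 4, vy / 4) for vx in range(-8, 9) for vy in range(-8, 9)]
hits = track_before_detect(frames, grid)
print(f"{len(hits)} candidate detections from {len(grid)} hypotheses")
```

Even this toy makes the scaling visible: the work grows as the product of pixels, frames, and hypotheses, and a realistic cislunar hypothesis grid is vastly denser than the 289 velocities tested here.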
Estimates peg the brute-force demand near 300 teraFLOPs for the desired scope and cadence, a figure out of reach for radiation-hardened computers that deliver under 4 gigaFLOPs and even for radiation-tolerant boards or specialized accelerators that top out in the tens to roughly 150 gigaFLOPs. TBD2 leans hard into algorithmic efficiency to close that gap by orders of magnitude. The strategy is to prune the hypothesis space with strong geometric priors, exploit sparsity in the motion domain, fuse temporal windows adaptively, and minimize data movement—often the hidden tax—through carefully staged memory hierarchies. Architectures that co-design algorithms and hardware, shaping kernels to match vector units and on-chip buffers found in space-qualified processors, promise further gains. The prize is not a marginal speedup but a transformation: turning an intractable search into a tractable set of structured inferences that fit within a few hundred watts.
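Putting the stated figures side by side makes the burden explicit:

```python
# The gap in the program's own numbers: brute-force demand vs. flight compute.
brute_force_flops = 300e12      # ~300 teraFLOPs, the stated brute-force estimate
rad_hard_flops = 4e9            # radiation-hardened processors, under 4 gigaFLOPs
rad_tolerant_flops = 150e9      # best rad-tolerant boards and accelerators

print(f"vs rad-hard:     {brute_force_flops / rad_hard_flops:,.0f}x short")      # 75,000x
print(f"vs rad-tolerant: {brute_force_flops / rad_tolerant_flops:,.0f}x short")  # 2,000x
# Even a 100x algorithmic speedup from pruning and sparsity would leave a
# ~20x gap at the rad-tolerant end -- hence co-design, not just kernel tuning.
```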
Hardware, Performance, and Coverage by Design
The payload envelope forces discipline. A 0.5-meter aperture cap sets optical throughput, which drives exposure times and detector choices; roughly 300 watts of routine power, with surges to about 600 watts, caps how aggressive the onboard processing can be; and an aggressive mass budget pushes toward compact optics, efficient thermal control, and minimalistic data paths. These realities shape the entire stack, from low-noise sensor readouts that tolerate longer integrations without blooming, to data compression that preserves faint streaks, to compute pipelines that avoid costly global memory traffic. The program’s contention that an 80-fold mass reduction over legacy concepts is feasible signals a bias for elegant solutions that do more with less rather than brute-force hardware.
Performance metrics anchor the engineering. Detection probability must exceed 95% while false alarms stay below 1% in cluttered, low-SNR scenes, and the system must refresh a wide-area picture in less than 12 hours from SEL1. Meeting those numbers forces careful allocation of time across the sky: dwell too long and cadence suffers; sweep too fast and sensitivity collapses. The layered design helps resolve that tension. SEL1 devotes longer integrations to faint, distant traffic, while nearer nodes handle fast apparent motions and small targets with shorter exposures and tighter revisit cycles. Together, the network composes a coherent picture across GEO, transit lanes, and lunar orbital regimes, and it does so within power and mass limits that make deep-space deployment practical.
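That allocation problem can be written as a one-line budget: the field of view, the area to be tiled, and the refresh deadline jointly cap the dwell per pointing. All values below are assumptions chosen only to show the shape of the trade.

```python
import math

# Illustrative survey budget from SEL1: how the 12-hour refresh caps dwell.
# Field of view, survey area, and overheads are assumptions, not design values.
fov_deg = 3.0                  # assumed square field of view
survey_area_deg2 = 4000.0      # assumed cap covering the Earth-Moon system
refresh_hours = 12.0           # program requirement
overhead_frac = 0.2            # assumed slew and readout overhead

pointings = math.ceil(survey_area_deg2 / fov_deg**2)
dwell_s = refresh_hours * 3600 * (1 - overhead_frac) / pointings
print(f"{pointings} pointings -> ~{dwell_s:.0f} s dwell each")
# ~78 s per pointing: every extra second of integration (sensitivity)
# must be bought back with a smaller survey area or a slower refresh.
```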
From Blueprint to Operations
The program laid out a 15-month sprint with three deliverables that pointed directly at transition rather than laboratory novelty. Teams were expected to produce a low-complexity, high-performance tracking algorithm that met the detection and false-alarm thresholds at realistic compute budgets; a complete SEL1 payload design covering optics, sensor, compute, power, and spacecraft integration; and a variant tuned for nearer cislunar orbits where apparent motion is faster and range shorter. The intent was to exit the period with designs mature enough for a transition partner to push into system development, creating a pathway from prototype to an operational mesh that could scale in steps rather than all at once.
Downstream, the operations concept followed logically from the designs. The SEL1 node would serve as the wide-area hub, publishing a rolling catalog and cross-cues; nearer nodes would maintain custody in hotspots, resolve fast movers, and sweep occluded regions; and ground segments would perform fusion, attribution, and dissemination for both safety and security users. Because the payloads shared a common architecture, upgrades to algorithms or data handling could propagate across the fleet with minimal friction. If funding and partners aligned, initial deployments could have begun with one SEL1 asset and a small number of nearer sensors, then expanded as coverage needs and commercial traffic grew. The path emphasized incremental value at each step while preserving the end-state of near-continuous, layered cislunar awareness.
