
Edited by: Lorenzo Iorio, Ministry of Education, Universities and Research, Italy

Reviewed by: J. Allyn Smith, Austin Peay State University, United States; Akos Bazso, Universität Wien, Austria

This article was submitted to Fundamental Astronomy, a section of the journal Frontiers in Astronomy and Space Sciences

†Present Address: Cory Shankman, City of Toronto, Toronto, ON, Canada

This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

All surveys include observational biases, which makes it impossible to directly compare properties of discovered trans-Neptunian Objects (TNOs) with dynamical models. However, by carefully keeping track of survey pointings on the sky, detection limits, tracking fractions, and rate cuts, the biases from a survey can be modeled in Survey Simulator software. A Survey Simulator takes an intrinsic orbital model (from, for example, the output of a dynamical Kuiper belt emplacement simulation) and applies the survey biases, so that the biased simulated objects can be directly compared with real discoveries. This methodology has been used with great success in the Outer Solar System Origins Survey (OSSOS) and its predecessor surveys. In this chapter, we give four examples of ways to use the OSSOS Survey Simulator to gain knowledge about the true structure of the Kuiper Belt. We demonstrate how to statistically compare different dynamical model outputs with real TNO discoveries, how to quantify detection biases within a TNO population, how to measure intrinsic population sizes, and how to use upper limits from non-detections. We hope this will provide a framework for dynamical modelers to statistically test the validity of their models.

The orbital structure, size-frequency distribution, and total mass of the trans-Neptunian region of the Solar System pose an enigmatic puzzle. Fernandez (; Jewitt and Luu,

Major puzzles in the Solar System's history can be explored if one has accurate knowledge of the distribution of material in this zone. Examples include: the orbital evolution of Neptune (e.g., Malhotra,

The presence of large-scale biases in the detected sample of TNOs has been apparent since the initial discoveries in the Kuiper belt, and multiple approaches have been used to account for these biases. Jewitt and Luu (

Carefully measuring the true, unbiased structure of the Kuiper belt provides constraints on exactly how Neptune migrated through the Kuiper belt. Two main models of Neptune's migration have been proposed and modeled extensively. Pluto's eccentric, resonant orbit was first explained by a smooth migration model for Neptune (Malhotra,

The level of detail that must be included in Neptune migration scenarios is increasing with the number of discovered TNOs with well-measured orbits; some recent examples of literature comparisons between detailed dynamical models and TNO orbital distributions are summarized here. Batygin et al. (

In this chapter, we discuss what it means for a survey to be “well-characterized” (section 2) and explain the structure and function of a Survey Simulator (section 2.1). In section 3, we then give four explicit examples of how to use a Survey Simulator, with actual dynamical model output and real TNO data. We hope this chapter provides an outline for others to follow.

A well-characterized survey is one in which the survey field pointings, depths, and tracking fractions at different magnitudes and on-sky rates of motion have been carefully measured. The largest well-characterized TNO survey to date is the Outer Solar System Origins Survey (OSSOS; Bannister et al.,

These well-characterized surveys are all arranged into observing “blocks”: many individual camera fields tiled together into a continuous block on the sky (the observing blocks for OSSOS are each ~10–20 square degrees in size). The full observing block is covered during each dark run (when the Moon is closest to new) for the 2 months before, 2 months after, and during the dark run closest to opposition for the observing block. The observing cadence is important for discovering and tracking TNOs, which change position against the background stars on short timescales. The time separation between imaging successive camera fields inside each observing block must be long enough that significant motion against background stars has occurred for TNOs, but not so long that the TNOs have moved too far to be easily recovered by eye or by software. OSSOS used triplets of images for each camera field, taken over the course of 2 h. The exposure times are chosen as a careful compromise between photometric depth and limiting trailing^{1}
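To illustrate the trailing side of this compromise, the apparent sky rate of a distant object at opposition is dominated by Earth's reflex motion, and the trail length in one exposure follows directly. The sketch below is a rough approximation (circular orbits and opposition geometry assumed), not the survey's actual rate model:

```python
import math

AU_KM = 1.495978707e8  # kilometers per astronomical unit
V_EARTH = 29.785       # Earth's mean orbital speed, km/s

def opposition_rate_arcsec_per_hr(d_au):
    """Approximate retrograde sky rate at opposition for a TNO on a circular
    orbit at heliocentric distance d_au: the Earth-TNO velocity difference
    seen across the geocentric distance (d_au - 1)."""
    v_rel = V_EARTH * (1.0 - 1.0 / math.sqrt(d_au))   # km/s (v_circ ~ 1/sqrt(d))
    omega = v_rel / ((d_au - 1.0) * AU_KM)            # rad/s
    return math.degrees(omega) * 3600.0 * 3600.0      # deg/s -> arcsec/s -> arcsec/hr

def trail_arcsec(d_au, exposure_s):
    """Length of the trail a TNO leaves during one exposure."""
    return opposition_rate_arcsec_per_hr(d_au) * exposure_s / 3600.0
```

At 40 AU this gives roughly 3 arcsec/hr, so a ~5-min exposure trails a source by only ~0.3 arcsec, comparable to typical seeing; closer objects trail proportionally more, which is what drives the rate cuts and exposure-time choices.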

The tracking of the discovered sample provides another opportunity for biases to enter and the process must be closely monitored. A survey done in blocks of fields that are repeated ~monthly removes the need to make orbit predictions based on only a few hours of arc from a single night's discovery observations. Such short-arc orbit predictions are notoriously imprecise, and dependence on them ensures that assumptions made regarding orbit distribution will find their way into the detected sample as biases. For example, a common assumption for short-arc orbits is a circular orbit. If follow-up observations based on this circular orbital prediction are attempted with only a small area of sky coverage, then those orbits whose Keplerian elements match the input assumptions will be preferentially recovered, while those that do not will be preferentially lost, resulting in a discovery bias against non-circular orbits. Correcting for this type of ephemeris bias is impossible. Several of the large-sample TNO surveys had short arcs on a high fraction of the detections; this introduces unknown tracking biases into the sample that cannot be reproduced in a Survey Simulator because the systematic reasons for object loss (ephemeris bias, Jones et al., ^{2}

Schwamb et al. (

A well-characterized survey will have flux limits in each observing block from measurements of implanted artificial objects, equal sensitivity to a wide range of orbits, and a known spatial coverage on the sky. A Survey Simulator can now be configured to precisely mimic the observing process for this survey.

A Survey Simulator allows models of intrinsic Kuiper Belt distributions to be forward-biased to replicate the biases inherent in a given well-characterized survey. These forward-biased simulated distributions can then be directly compared with real TNO detections, and a statement can be made about whether or not a given model is statistically consistent with the known TNOs. One particular strength of this approach is that the effect of non-detection of certain orbits can be included in the analysis. Methods that rely on the inversion of orbital distributions are, by their design, not sensitive to a particular survey's blind spots.

Directly comparing a model with a list of detected TNOs (for example, from the Minor Planet Center database), with a multitude of unknown detection biases, can lead to inaccurate and possibly false conclusions. Using a Survey Simulator avoids this problem completely, with the only downside being that comparisons can only be made using TNOs from well-characterized surveys. Fortunately, the single OSSOS survey contains over 800 TNOs, and the ensemble of well-characterized affiliated surveys contains over 1100 TNOs with extremely precisely measured orbits (Bannister et al.,

At its most basic, a Survey Simulator must produce a list of instantaneous on-sky positions, rates of motion, and apparent magnitudes. These are computed by assigning absolute magnitudes to simulated objects with a given distribution of orbits. These apparent magnitudes, positions, and rates of motion are then evaluated to determine the likelihood of detection by the survey, and a simulated detected distribution of objects is produced. The OSSOS Survey Simulator follows this basic model, but takes into account more realities of survey limitations. It is the result of refinement of this Survey Simulator software through several different well-characterized surveys: initially the CFEPS pre-survey (Jones et al.,

While the methodology presented here is specific to the OSSOS Survey Simulator, by measuring on sky pointings, magnitude limits, and tracking fractions, a Survey Simulator can be built for any survey. The Survey Simulator for the OSSOS ensemble of well-characterized surveys is available as a package^{3}
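Schematically, the core loop shared by any such simulator can be sketched as follows. The function names, the smooth efficiency rollover, and all numerical values here are illustrative assumptions, not the actual OSSOS characterization:

```python
import math
import random

def apparent_magnitude(H, r_helio, delta):
    """m = H + 5 log10(r * Delta); the small phase-angle correction is ignored."""
    return H + 5.0 * math.log10(r_helio * delta)

def detection_efficiency(m, m_limit=24.5, eff_max=0.9, width=0.15):
    """Hypothetical smooth rollover from eff_max to zero near the flux limit."""
    return eff_max / (1.0 + math.exp((m - m_limit) / width))

def simulate(draw_object, in_survey_fields, n_wanted, seed=1):
    """draw_object() -> (H, r_helio, delta, ra, dec, sky_rate) from the model;
    in_survey_fields(ra, dec, sky_rate) -> True if the position lands in a
    characterized field and the rate is inside the survey's rate cuts."""
    rng = random.Random(seed)
    detected, drawn = [], 0
    while len(detected) < n_wanted:
        drawn += 1
        H, r, delta, ra, dec, rate = draw_object()
        if not in_survey_fields(ra, dec, rate):
            continue  # outside the surveyed sky or rate cuts: never detectable
        m = apparent_magnitude(H, r, delta)
        if rng.random() < detection_efficiency(m):
            detected.append((H, r, m))
    return detected, drawn  # `drawn` is one realization of the intrinsic population
```

The key design point is that the model is only ever forward-biased: simulated objects are pushed through the survey's characterization, never "debiased" backwards.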

The user must supply the Survey Simulator with a routine that generates an object with orbital elements and an absolute magnitude.

The orbital elements of an object can be determined in a variety of ways. The Survey Simulator can choose an orbit and a random position within that orbit, either from a list of orbits (as would be produced by a dynamical model) or from a parametric distribution set by the user. Orbits from a list can also be easily “smeared,” that is, varied within a fraction of the model orbital elements, in order to smooth a distribution or produce additional similar orbits (however, one must be careful that the distribution remains dominated by the original list of orbits, and not by the specifics of the smearing procedure). To determine the likely observed magnitude of the source, an absolute magnitude must also be assigned to each simulated object.
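A minimal sketch of these two ingredients follows; the smearing fraction, the slope α, and the H range are illustrative assumptions, not the values used in any published analysis:

```python
import math
import random

def draw_smeared_orbit(model_orbits, smear=0.05, rng=random.Random(42)):
    """Pick one (a, e, inc) from a dynamical-model list and perturb a and e
    by up to +/- smear (fractional); the draw stays dominated by the list."""
    a, e, inc = rng.choice(model_orbits)
    a *= 1.0 + rng.uniform(-smear, smear)
    e = min(max(e * (1.0 + rng.uniform(-smear, smear)), 0.0), 0.99)
    M = rng.uniform(0.0, 360.0)  # random mean anomaly: position within the orbit
    return a, e, inc, M

def draw_H(alpha=0.9, h_bright=6.0, h_faint=10.0, rng=random.Random(42)):
    """Draw an absolute magnitude from an exponential ('single slope')
    distribution N(<H) proportional to 10**(alpha*H), via the inverse CDF."""
    u = rng.random()
    span = 10.0 ** (alpha * (h_faint - h_bright)) - 1.0
    return h_bright + math.log10(1.0 + u * span) / alpha
```

Because the exponential distribution is steep, most drawn objects are faint; it is the Survey Simulator, not the modeler, that then decides which of them would actually be bright and well-placed enough to detect.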

The process of drawing simulated objects and determining if they would have been detected by the given surveys is repeated until the desired number of simulated tracked or detected objects is produced by the Survey Simulator. The desired number of simulated detected objects may be the same as the number of real detected TNOs in a survey in order to measure an intrinsic population size (as demonstrated in section 3.3), or an upper limit on a non-detection of a particular subpopulation (section 3.4), or it may be a large number in order to test the rejectability of an underlying theoretical distribution (as demonstrated in sections 3.1) or to quantify survey biases in a given subpopulation (section 3.2).

Here we present four examples of different ways to use the Survey Simulator to gain statistically valuable information about TNO populations. In section 3.1, we demonstrate how to use the Survey Simulator to forward-bias the output of dynamical simulations and then statistically compare the biased simulation with a distribution of real TNOs. In section 3.2, we use the Survey Simulator to build a parametric intrinsic distribution and then bias this distribution by our surveys, examining survey biases for a particular TNO subpopulation in detail. The Survey Simulator can also be used to measure the size of the intrinsic population required to produce a given number of detections in a survey; this is demonstrated in section 3.3. And finally, in section 3.4, we demonstrate a handy aspect of the Survey Simulator: using non-detections from a survey to set statistical upper limits on TNO subpopulations. We hope that these examples will prove useful for dynamical modelers who want a statistically powerful way to test their models.

This example expands on the analysis of Lawler et al. (

The four giant planets and including the effects of Galactic tides and stellar flybys (simulation from Kaib et al.,

The four giant planets with an additional 10 Earth mass planet having

The four giant planets with an additional 10 Earth mass planet having

The four giant planets and an additional 2 Earth mass rogue planet that started with

The papers which have recently proposed the presence of a distant undiscovered massive planet (popularly referred to as “Planet 9”; Trujillo and Sheppard, ^{4}

Here we demonstrate how to use the results of well-characterized surveys to compare real TNO detections to the output of a dynamical model, comprising a list of orbits. We visualize this output with the set of cumulative distributions in Figure 1, in six parameters including semimajor axis, inclination, absolute magnitude H_r, and pericenter distance q. The outputs from the four emplacement models are shown by different colors in the plot, with the intrinsic distributions shown as dotted lines, and the forward-biased simulated detection distributions as solid lines. These intrinsic distributions have all been cut to only include the pericenter and semimajor axis range predicted by Batygin and Brown (

Cumulative distributions of TNOs in six different parameters, including semimajor axis, inclination, absolute magnitude H_r, and pericenter distance q. The results of four different emplacement models are presented: the known Solar System (orange), the Solar System plus a circular-orbit Planet 9 (blue), or plus an eccentric-orbit Planet 9 (gray), and a “rogue” planet simulation (purple). The intrinsic model distributions are shown with dotted lines, and the resulting simulated detections with solid lines. Black circles show non-resonant TNOs discovered by the OSSOS ensemble having

We cannot directly compare the output from these dynamical simulations covering such a huge range of

The solid lines in Figure

These models have not been explicitly constructed in an attempt to produce the orbital and magnitude distribution of the elements in the detection range, so the following exercise is pedagogic rather than diagnostic. All these models fail dramatically, in a statistical sense, to produce several of the distributions shown here (discussed in detail later in this section), but how they fail allows one to understand what changes to the model may be required. All the models generically produce a slight imbalance in both the absolute (lower right panel of Figure H_r-magnitude distribution from Lawler et al. (

The orbital inclination distribution (upper center panel of Figure

The comparison of the models in the semimajor axis distribution gives very clear trends (upper left panel of Figure

In order to determine whether any of these biased distributions provide a statistically acceptable match to the real TNOs, we use a bootstrapped Anderson-Darling statistic^{5}
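A sketch of such a bootstrapped test is shown below, using SciPy's k-sample Anderson-Darling statistic; the bootstrap count and the particular choice of `anderson_ksamp` are our illustrative assumptions:

```python
import numpy as np
from scipy import stats

def bootstrap_ad_pvalue(model, data, n_boot=1000, seed=0):
    """Fraction of random same-size draws from `model` whose AD statistic
    (vs. the model) is at least as large as that of the real data: the
    bootstrapped probability that `data` is consistent with `model`."""
    rng = np.random.default_rng(seed)
    ad_data = stats.anderson_ksamp([model, data]).statistic
    worse = 0
    for _ in range(n_boot):
        sample = rng.choice(model, size=len(data), replace=True)
        if stats.anderson_ksamp([model, sample]).statistic >= ad_data:
            worse += 1
    return worse / n_boot
```

A small p-value means a random subsample of the (biased) model essentially never looks as discrepant as the real detections do, so the model can be rejected at that confidence level; here `model` must be the forward-biased simulated detections, never the intrinsic distribution.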

When we calculate the bootstrapped AD statistics for each of the simulated distributions as compared with the real TNOs in Figure

We note that none of the four dynamical emplacement models analyzed here include the effects of Neptune's migration, which is well-known to have an important influence on the structure of the distant Kuiper belt. Recent detailed migration simulations (Kaib and Sheppard,

We reiterate that the point of this section has been to provide a walk-through of how to compare the output of a dynamical model to real TNO detections in a statistically powerful way. The preceding discussion of the shortcomings of the specific dynamical models presented here highlights that a holistic approach to dynamical simulations of Kuiper belt emplacement is necessary. For example, Nesvorný (

In this section we show how to use the Survey Simulator with a resonant TNO population in order to demonstrate two important and useful points: how to build a simulated population from a parametric distribution, and how the Survey Simulator handles longitude biases in non-uniform populations. For this demonstration, we build a toy model of the 2:1 mean motion resonance with Neptune, using a parametric distribution built within the Survey Simulator software (though we note that this parametric distribution could just as easily be created with a separate script and later utilized by the unedited Survey Simulator). The parametric distribution here is roughly based on that used in Gladman et al. (^{6} TNOs in this resonance can librate in three different islands: leading asymmetric, trailing asymmetric, and symmetric. The libration center 〈φ_{21}〉 is chosen to populate these three islands equally in this toy model. For the purposes of our toy model, we define these angles as: leading asymmetric 〈φ_{21}〉 between 0° and 180°, trailing asymmetric 〈φ_{21}〉 between 180° and 360°, and symmetric 〈φ_{21}〉 = 180°. The value of φ_{21} in the snapshot is chosen sinusoidally within the libration amplitude. The ascending node Ω and mean anomaly M are chosen randomly, and the resonant angle is φ_{21} = 2λ_{TNO}−λ_{N}−ϖ_{TNO}, where mean longitude λ = Ω + ω + M.
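The heart of such a toy model is sampling the resonant angle and back-solving for the longitude of pericenter; in the sketch below, the island centers and libration-amplitude range are illustrative assumptions, not the values of any published model:

```python
import numpy as np

def draw_2to1_resonators(n, lambda_neptune=0.0, seed=7):
    """Sample toy 2:1 resonators: pick a libration island, draw the resonant
    angle phi21 sinusoidally within the libration amplitude, pick a random
    mean longitude, then solve phi21 = 2*lam - lam_N - varpi for varpi."""
    rng = np.random.default_rng(seed)
    # assumed island centers (deg): leading, symmetric, trailing -- equal weights
    centers = np.deg2rad(rng.choice([80.0, 180.0, 280.0], size=n))
    amps = np.deg2rad(rng.uniform(10.0, 40.0, size=n))  # assumed amplitudes
    phi21 = centers + amps * np.sin(rng.uniform(0.0, 2.0 * np.pi, size=n))
    lam = rng.uniform(0.0, 2.0 * np.pi, size=n)  # mean longitude of each TNO
    varpi = (2.0 * lam - lambda_neptune - phi21) % (2.0 * np.pi)
    return phi21, lam, varpi
```

Because the pericenter longitude is slaved to the resonant angle, the on-sky pericenter locations cluster at fixed longitudes relative to Neptune, which is exactly what makes this population's detectability depend on where a survey points.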

Figure 2 shows the on-sky positions of the TNOs in a snapshot of this toy model, along with the resulting simulated detections.^{7}^{8}

Black points show the positions of TNOs in a snapshot from this parametric toy model of TNOs in the 2:1 mean-motion resonance. The position of Neptune is shown by a blue circle, and dotted circles show distances from the Sun. Red points show simulated detections after running this model through the Survey Simulator. One third of the TNOs are in each libration island in the intrinsic model, but 2/3 of the detections are in the leading asymmetric island (having pericenters in the upper right quadrant of this plot). This bias is simply due to the longitudinal direction of pointings within the OSSOS ensemble and pericenter locations in this toy model of the 2:1 resonance.

The red points in Figure

The relative fraction of

The Survey Simulator can easily be used to determine intrinsic population sizes. As described in section 2.1, the Survey Simulator keeps track of the number of “drawn” simulated objects that are needed for the requested number of simulated tracked objects. By asking the Survey Simulator to produce the same number of tracked simulated objects as were tracked in a given survey, the number of drawn simulated objects is a realization of the intrinsic population required for the survey to have detected the actual number of TNOs that were found by the survey. By repeating this many times, with different random number seeds, different orbits and instantaneous positions are chosen from the model and slightly different numbers of simulated drawn objects are required each time. This allows us to measure the range of intrinsic population sizes needed to produce a given number of tracked, detected TNOs in a survey.
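The counting logic can be sketched with a toy per-object detection probability standing in for the full simulator; the probability value below is purely illustrative:

```python
import numpy as np

def intrinsic_population_estimates(n_tracked, detect_prob, n_trials=1000, seed=0):
    """Toy version of the 'drawn file' bookkeeping: with a (hypothetical)
    per-object detection probability, count how many intrinsic objects must
    be drawn before n_tracked are detected; each trial is one realization
    of the intrinsic population size."""
    rng = np.random.default_rng(seed)
    sizes = []
    for _ in range(n_trials):
        detected = drawn = 0
        while detected < n_tracked:
            drawn += 1
            if rng.random() < detect_prob:
                detected += 1
        sizes.append(drawn)
    return np.array(sizes)
```

The spread of the returned sizes across trials is what provides the confidence interval on the intrinsic population, exactly as described above; in the real Survey Simulator the fixed `detect_prob` is replaced by the full orbit-, magnitude-, and pointing-dependent detection calculation.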

Here we measure the intrinsic population required to produce the 17 Centaurs that are detected in the OSSOS ensemble. Once the parameters of the orbital distribution and the H_r distribution (extending to H_r ≃ 14) are chosen, the Survey Simulator is run to this

Using the properties of simulated objects in the drawn file, we measure the intrinsic population size to H_r < 12 (right panel, Figure ).

The range of intrinsic Centaur population sizes required for the Survey Simulator to produce the same number of Centaur detections (17) as were discovered by the OSSOS ensemble, with H_r < 8.66 intrinsic populations shown in red and H_r < 12 shown in orange. H_r < 8.66 Centaurs and H_r < 12 Centaurs with 95% confidence.

To ease comparison with other statistically produced intrinsic TNO population estimates (e.g., Petit et al., ), we also compute the population with H_r < 8.66, which corresponds to ^{9} H_r magnitudes this small, this estimate is valid because it is calculated from our Survey Simulator-based population estimate and a measured size distribution. For H_r < 8.66, the statistically tested 95% confidence limit on the intrinsic Centaur population is
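For reference, the standard conversion between absolute magnitude and approximate diameter, D = (1329 km/√p_V) 10^(−H/5), can be sketched as below; the geometric albedo used is an assumed value, not a measurement:

```python
import math

def diameter_km(H, albedo=0.04):
    """D = (1329 km / sqrt(albedo)) * 10**(-H/5): the standard conversion
    from absolute magnitude to diameter (the albedo here is an assumption)."""
    return 1329.0 / math.sqrt(albedo) * 10.0 ** (-H / 5.0)
```

Under this assumed albedo, H ≃ 8.66 corresponds to a diameter of order a hundred kilometers; brighter (smaller H) objects are larger, and the inferred size scales as albedo^(-1/2).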

Here we show a perhaps unintuitive aspect of the Survey Simulator: non-detections can be just as powerful as detections for constraining Kuiper Belt populations. Non-detections can only be used if the full pointing list from a survey is published along with the detected TNOs. An examination of the orbital distribution of TNOs in the MPC database makes it clear that there is a sharp dropoff in the density of TNO detections at

Randomly drawing zero objects when you expect three has a probability of 5%, assuming Poisson statistics; thus, the simulated population required to produce three detections is the 95% confidence upper limit for a population that produced zero detections. As our example for non-detection upper limits, we create an artificial population in the distant, low-eccentricity Kuiper belt, where OSSOS has zero detections. Figure ^{10} H_r ≲ 7, corresponding to H_g < 8.5, gives ~700 objects in this artificial population with H_r < 8.
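The 5% figure quoted above follows directly from the Poisson probability of zero events:

```python
import math

def p_zero(mu):
    """Poisson probability of drawing zero objects when mu are expected."""
    return math.exp(-mu)

# The 95% confidence upper limit on the expectation, given zero detections,
# is the mu for which P(0; mu) = 0.05:
mu_95 = -math.log(0.05)  # ~3.0 expected objects
```

So the intrinsic population that would, after forward-biasing, yield three expected detections is the 95% confidence upper limit on a population with zero detections.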

Colored points show the relative populations and semimajor axis-eccentricity distributions from the CFEPS L7 model of the Kuiper belt, where absolute population estimates have been produced for each subpopulation in the well-characterized CFEPS survey (Petit et al., ), plotted to H_g < 8.5 and color-coded by dynamical class. Resonances included in this model (those with >1 TNO detected by CFEPS) are labeled. A low-eccentricity artificial distant population is also shown (H_r < 8).

This is a very small population size. We note that this specific analysis is only applicable to dynamically cold TNOs of relatively large size (H_r < 7). A steep size distribution could allow many smaller TNOs to remain undiscovered on similar orbits. The point of this exercise is to show statistically tested constraints on populations with no survey detections. For comparison, at this absolute magnitude limit (H_r < 7), the scattering disk is estimated to have an intrinsic population of ~4000 (Lawler et al., H_r < 7 TNOs in the classical belt. The 3:1 mean-motion resonance, which is located at a similar semimajor axis (H_r < 7 population of ~200 (Gladman et al.,

Using TNO discoveries from well-characterized surveys and only analyzing the goodness of fit between models and TNO discoveries after forward-biasing the models gives a statistically powerful framework within which to validate dynamical models of Kuiper belt formation. Understanding the effects that various aspects of Neptune's migration have on the detailed structure of the Kuiper belt not only provides constraints on the formation of Neptune and Kuiper belt planetesimals, but also provides useful comparison to extrasolar planetesimal belts (Matthews and Kavelaars,

One such mystery is explaining the very large inclinations in the scattering disk (Shankman et al.,

Explaining the population of high pericenter TNOs (Sheppard and Trujillo,

Here we have highlighted only a small number of the inconsistencies between models and real TNO orbital data. The level of detail that must be included in Neptune migration simulations has increased dramatically with the release of the full OSSOS dataset, containing hundreds of TNOs with the most precise orbits ever measured. The use of the Survey Simulator will be vital for testing future highly detailed dynamical emplacement simulations, and for solving the lingering mysteries in the observed structure of the Kuiper belt.

SL ran the simulations, created the figures, and wrote the majority of the text. JK wrote part of the text and helped plan simulations. BG, JK, and J-MP designed the OSSOS survey and proposed for telescope time. MB, JK, MA, BG, and J-MP took observations and reduced data for OSSOS. J-MP, JK, BG, CS, MA, MB, and SL helped write and test the Survey Simulator software and moving object pipeline.

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The authors acknowledge the sacred nature of Maunakea, and appreciate the opportunity to observe from the mountain. CFHT is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii, with OSSOS receiving additional access due to contributions from the Institute of Astronomy and Astrophysics, Academia Sinica, Taiwan. Data are produced and hosted at the Canadian Astronomy Data Centre; processing and analysis were performed using computing and storage capacity provided by the Canadian Advanced Network For Astronomy Research (CANFAR).


^{1}We note that clever algorithms can be used to obtain accurate photometry from trailed sources (e.g., Fraser et al.,

^{2}The two objects that were not tracked are

^{3}

^{4}Brown (

^{5}The Anderson-Darling test is similar to the better-known Kolmogorov-Smirnov test, but with higher sensitivity to the tails of the distributions being compared.

^{6}The inclinations are actually set to a very small value close to zero to avoid ambiguities in the other orbital angles.

^{7}We note that in reality the symmetric island generally has very large libration amplitudes and the pericenter positions of symmetric librators overlap with each of the asymmetric islands (e.g., Volk et al.,

^{8}While this is obviously an exaggerated example, pericenters in

^{9}The approximate H_r magnitude that corresponds to

^{10}Model available at