Catalogue Search | MBRL
Explore the vast range of titles available.
28 result(s) for "McCauliff, Sean"
Kepler Presearch Data Conditioning I - Architecture and Algorithms for Error Correction in Kepler Light Curves
by Girouard, Forrest R.; Twicken, Joseph D.; Smith, Jeffrey C.
in Algorithms, Architecture, Astronomy
2012
Kepler provides light curves of 156,000 stars with unprecedented precision. However, the raw data as they come from the spacecraft contain significant systematic and stochastic errors. These errors, which include discontinuities, systematic trends, and outliers, obscure the astrophysical signals in the light curves. Correcting these errors is the task of the Presearch Data Conditioning (PDC) module of the Kepler data analysis pipeline. The original version of PDC in Kepler did not meet the extremely high performance requirements for the detection of minuscule planet transits or highly accurate analysis of stellar activity and rotation. One particular deficiency was that astrophysical features were often removed as a side effect of the removal of errors. In this article we introduce the completely new and significantly improved version of PDC which was implemented in Kepler SOC version 8.0. This new PDC version, which utilizes a Bayesian approach for removal of systematics, reliably corrects errors in the light curves while at the same time preserving planet transits and other astrophysically interesting signals. We describe the architecture and the algorithms of this new PDC module, show typical errors encountered in Kepler data, and illustrate the corrections using real light curve examples.
Journal Article
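The PDC abstract above lists discontinuities, systematic trends, and outliers as the error types to be corrected. Purely as an illustration of the outlier case, and not the pipeline's actual algorithm, here is a minimal sigma-clipping sketch against a running median, using synthetic data and made-up parameter values:

```python
import numpy as np
from scipy.signal import medfilt

def clip_outliers(flux, window=25, n_sigma=4.0):
    """Flag samples that deviate from a running median by more than
    n_sigma robust standard deviations (illustrative only)."""
    trend = medfilt(flux, kernel_size=window)      # window must be odd
    resid = flux - trend
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # MAD-based sigma
    mask = np.abs(resid) > n_sigma * sigma
    cleaned = flux.copy()
    cleaned[mask] = trend[mask]                    # replace outliers with the local trend
    return cleaned, mask

# usage with synthetic data
rng = np.random.default_rng(0)
flux = 1.0 + 1e-4 * rng.standard_normal(1000)
flux[500] += 0.01                                  # inject one obvious outlier
cleaned, mask = clip_outliers(flux)
print(int(mask.sum()), "sample(s) flagged")
```

Replacing flagged samples with the local trend is a simplification; the paper's approach is considerably more involved.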
Kepler Data Validation I - Architecture, Diagnostic Tests, and Data Products for Vetting Transiting Planet Candidates
by Girouard, Forrest; Seader, Shawn E.; Bryson, Stephen T.
in binaries: eclipsing, Candidates, Data processing
2018
The Kepler Mission was designed to identify and characterize transiting planets in the Kepler Field of View and to determine their occurrence rates. Emphasis was placed on identification of Earth-size planets orbiting in the Habitable Zone of their host stars. Science data were acquired for a period of four years. Long-cadence data with 29.4 min sampling were obtained for ∼200,000 individual stellar targets in at least one observing quarter in the primary Kepler Mission. Light curves for target stars are extracted in the Kepler Science Data Processing Pipeline, and are searched for transiting planet signatures. A Threshold Crossing Event is generated in the transit search for targets where the transit detection threshold is exceeded and transit consistency checks are satisfied. These targets are subjected to further scrutiny in the Data Validation (DV) component of the Pipeline. Transiting planet candidates are characterized in DV, and light curves are searched for additional planets after transit signatures are modeled and removed. A suite of diagnostic tests is performed on all candidates to aid in discrimination between genuine transiting planets and instrumental or astrophysical false positives. Data products are generated per target and planet candidate to document and display transiting planet model fit and diagnostic test results. These products are exported to the Exoplanet Archive at the NASA Exoplanet Science Institute, and are available to the community. We describe the DV architecture and diagnostic tests, and provide a brief overview of the data products. Transiting planet modeling and the search for multiple planets on individual targets are described in a companion paper. The final revision of the Kepler Pipeline code base is available to the general public through GitHub. The Kepler Pipeline has also been modified to support the Transiting Exoplanet Survey Satellite (TESS) Mission which is expected to commence in 2018.
Journal Article
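The Data Validation abstract above describes promoting a detection to a Threshold Crossing Event when the detection statistic exceeds a threshold and consistency checks pass. A hypothetical sketch of that gating step, with placeholder field names and an illustrative 7.1-sigma threshold (the value commonly cited for Kepler, but treat it as an assumption here):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    target_id: int
    max_mes: float          # maximum multiple-event statistic (assumed metric name)
    consistency_ok: bool    # outcome of transit consistency checks (assumed)

def threshold_crossing_event(det: Detection, threshold: float = 7.1) -> bool:
    """Return True if the detection should be promoted to a TCE.
    The 7.1-sigma default mirrors the commonly cited Kepler threshold,
    but is an illustrative constant here."""
    return det.max_mes >= threshold and det.consistency_ok

# usage with made-up detections
candidates = [Detection(1, 9.3, True), Detection(2, 6.8, True), Detection(3, 12.0, False)]
tces = [d for d in candidates if threshold_crossing_event(d)]
print([d.target_id for d in tces])   # -> [1]
```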
Kepler Presearch Data Conditioning II - A Bayesian Approach to Systematic Error Correction
by Girouard, Forrest R.; Twicken, Joseph D.; Smith, Jeffrey C.
in Acoustic waves, Astronomy, Data analysis
2012
With the unprecedented photometric precision of the Kepler spacecraft, significant systematic and stochastic errors on transit signal levels are observable in the Kepler photometric data. These errors, which include discontinuities, outliers, systematic trends, and other instrumental signatures, obscure astrophysical signals. The presearch data conditioning (PDC) module of the Kepler data analysis pipeline tries to remove these errors while preserving planet transits and other astrophysically interesting signals. The completely new noise and stellar variability regime observed in Kepler data poses a significant problem to standard cotrending methods. Variable stars are often of particular astrophysical interest, so the preservation of their signals is of significant importance to the astrophysical community. We present a Bayesian maximum a posteriori (MAP) approach, where a subset of highly correlated and quiet stars is used to generate a cotrending basis vector set, which is in turn used to establish a range of "reasonable" robust fit parameters. These robust fit parameters are then used to generate a Bayesian prior and a Bayesian posterior probability distribution function (PDF) which, when maximized, finds the best fit that simultaneously removes systematic effects while reducing the signal distortion and noise injection that commonly afflicts simple least-squares (LS) fitting. A numerical and empirical approach is taken where the Bayesian prior PDFs are generated from fits to the light-curve distributions themselves.
Journal Article
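The MAP approach summarized above fits each light curve against a set of cotrending basis vectors while a prior keeps the coefficients near values seen for quiet, well-behaved stars. Under Gaussian noise and a Gaussian prior, the MAP coefficients have a closed-form regularized least-squares solution; the sketch below uses made-up basis vectors and prior parameters and is not the pipeline implementation:

```python
import numpy as np

def map_cotrend(flux, basis, prior_mean, prior_var, noise_var):
    """MAP coefficients for flux ~ basis @ theta with a diagonal Gaussian
    prior N(prior_mean, prior_var) on theta; closed form for Gaussian noise."""
    U = basis                                       # shape (n_samples, n_vectors)
    A = U.T @ U / noise_var + np.diag(1.0 / prior_var)
    b = U.T @ flux / noise_var + prior_mean / prior_var
    theta = np.linalg.solve(A, b)
    return theta, flux - U @ theta                  # coefficients and corrected flux

# toy example: two basis vectors, prior centered on "typical" coefficients
n = 500
t = np.linspace(0, 1, n)
basis = np.column_stack([t, np.sin(2 * np.pi * 3 * t)])
rng = np.random.default_rng(1)
flux = basis @ np.array([0.002, 0.001]) + 1e-4 * rng.standard_normal(n)
theta, corrected = map_cotrend(flux, basis,
                               prior_mean=np.array([0.0015, 0.0]),
                               prior_var=np.array([1e-6, 1e-6]),
                               noise_var=1e-8)
print(theta)
```

Shrinking the prior variance pulls the fit toward the prior mean, which is the mechanism the abstract credits with reducing signal distortion relative to unconstrained least squares.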
Kepler Presearch Data Conditioning II - A Bayesian Approach to Systematic Error Correction
2012
With the unprecedented photometric precision of the Kepler spacecraft, significant systematic and stochastic errors on transit signal levels are observable in the Kepler photometric data. These errors, which include discontinuities, outliers, systematic trends, and other instrumental signatures, obscure astrophysical signals. The presearch data conditioning (PDC) module of the Kepler data analysis pipeline tries to remove these errors while preserving planet transits and other astrophysically interesting signals. The completely new noise and stellar variability regime observed in Kepler data poses a significant problem to standard cotrending methods. Variable stars are often of particular astrophysical interest, so the preservation of their signals is of significant importance to the astrophysical community. We present a Bayesian maximum a posteriori (MAP) approach, where a subset of highly correlated and quiet stars is used to generate a cotrending basis vector set, which is in turn used to establish a range of “reasonable” robust fit parameters. These robust fit parameters are then used to generate a Bayesian prior and a Bayesian posterior probability distribution function (PDF) which, when maximized, finds the best fit that simultaneously removes systematic effects while reducing the signal distortion and noise injection that commonly afflicts simple least-squares (LS) fitting. A numerical and empirical approach is taken where the Bayesian prior PDFs are generated from fits to the light-curve distributions themselves.
Journal Article
Kepler Data Validation I—Architecture, Diagnostic Tests, and Data Products for Vetting Transiting Planet Candidates
by Girouard, Forrest; Seader, Shawn E.; Bryson, Stephen T.
in Astronomical Software, Data Analysis, and Techniques
2018
The Kepler Mission was designed to identify and characterize transiting planets in the Kepler Field of View and to determine their occurrence rates. Emphasis was placed on identification of Earth-size planets orbiting in the Habitable Zone of their host stars. Science data were acquired for a period of four years. Long-cadence data with 29.4 min sampling were obtained for ∼200,000 individual stellar targets in at least one observing quarter in the primary Kepler Mission. Light curves for target stars are extracted in the Kepler Science Data Processing Pipeline, and are searched for transiting planet signatures. A Threshold Crossing Event is generated in the transit search for targets where the transit detection threshold is exceeded and transit consistency checks are satisfied. These targets are subjected to further scrutiny in the Data Validation (DV) component of the Pipeline. Transiting planet candidates are characterized in DV, and light curves are searched for additional planets after transit signatures are modeled and removed. A suite of diagnostic tests is performed on all candidates to aid in discrimination between genuine transiting planets and instrumental or astrophysical false positives. Data products are generated per target and planet candidate to document and display transiting planet model fit and diagnostic test results. These products are exported to the Exoplanet Archive at the NASA Exoplanet Science Institute, and are available to the community. We describe the DV architecture and diagnostic tests, and provide a brief overview of the data products. Transiting planet modeling and the search for multiple planets on individual targets are described in a companion paper. The final revision of the Kepler Pipeline code base is available to the general public through GitHub. The Kepler Pipeline has also been modified to support the Transiting Exoplanet Survey Satellite (TESS) Mission which is expected to commence in 2018.
Journal Article
Auto-Vetting Transiting Planet Candidates Identified by the Kepler Pipeline
by Burke, Christopher; Seader, Shawn; McCauliff, Sean
in Astronomy, Contributed Papers, Diagnostic systems
2012
The Kepler Mission simultaneously measures the brightness of more than 150,000 stars every 29.4 minutes primarily for the purpose of transit photometry. Over the course of its 3.5-year primary mission Kepler has observed over 190,000 distinct stars, announcing 2,321 planet candidates, 2,165 eclipsing binaries, and 105 confirmed planets. As Kepler moves into its 4-year extended mission, the total number of transit-like features identified in the light curves has increased to as many as ~18,000. This number of signals has become intractable for human beings to inspect by eye in a thorough and timely fashion. To mitigate this problem we are developing machine learning approaches to perform the task of reviewing the diagnostics for each transit signal candidate to establish a preliminary list of planetary candidates ranked from most credible to least credible. Our preliminary results indicate that random forests can classify potential transiting planet signatures with an accuracy of more than 98.6% as measured by the area under a receiver operating characteristic (ROC) curve.
Journal Article
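The abstract above evaluates a random forest vetter by the area under the ROC curve. A minimal scikit-learn sketch of that kind of setup, with synthetic stand-ins for the transit diagnostic attributes (the features and labels below are fabricated for illustration, not Kepler data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# synthetic stand-in for per-signal diagnostic attributes
rng = np.random.default_rng(42)
n = 2000
X = rng.standard_normal((n, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)
proba = clf.predict_proba(X_test)[:, 1]      # score each signal, then rank by credibility
print("ROC AUC:", round(roc_auc_score(y_test, proba), 3))
```

Sorting the test signals by the predicted probability gives the "most credible to least credible" ranking the abstract describes.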
An Ultra-Hot Neptune in the Neptune desert
by Walker, Simon R.; Jerome Pitogo de Leon; Pollacco, Don
in Atmospheric models, Extrasolar planets, Gas giant planets
2020
About one out of 200 Sun-like stars has a planet with an orbital period shorter than one day: an ultra-short-period planet (Sanchis-Ojeda et al. 2014; Winn et al. 2018). All of the previously known ultra-short-period planets are either hot Jupiters, with sizes above 10 Earth radii (Re), or apparently rocky planets smaller than 2 Re. Such a lack of planets of intermediate size (the "hot Neptune desert") has been interpreted as the inability of low-mass planets to retain any hydrogen/helium (H/He) envelope in the face of strong stellar irradiation. Here, we report the discovery of an ultra-short-period planet with a radius of 4.6 Re and a mass of 29 Me, firmly in the hot Neptune desert. Data from the Transiting Exoplanet Survey Satellite (Ricker et al. 2015) revealed transits of the bright Sun-like star \starname, every 0.79 days. The planet's mean density is similar to that of Neptune, and according to thermal evolution models, it has a H/He-rich envelope constituting 9.0^(+2.7)_(-2.9)% of the total mass. With an equilibrium temperature around 2000 K, it is unclear how this "ultra-hot Neptune" managed to retain such an envelope. Follow-up observations of the planet's atmosphere to better understand its origin and physical nature will be facilitated by the star's brightness (Vmag=9.8).
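The quoted radius of 4.6 Earth radii and mass of 29 Earth masses can be checked against the claim that the planet's mean density is similar to Neptune's (about 1.64 g/cm^3). A quick worked calculation with rounded constants:

```python
import math

R_EARTH = 6.371e6       # m
M_EARTH = 5.972e24      # kg

radius = 4.6 * R_EARTH
mass = 29 * M_EARTH
density = mass / ((4.0 / 3.0) * math.pi * radius**3)   # kg/m^3
print(f"{density / 1000:.2f} g/cm^3")                  # ~1.64 g/cm^3, close to Neptune
```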
Kepler Data Validation I - Architecture, Diagnostic Tests, and Data Products for Vetting Transiting Planet Candidates
by Girouard, Forrest; Seader, Shawn E.; Henze, Christopher E.
in Architecture, Candidates, Circumstellar habitable zone
2018
The Kepler Mission was designed to identify and characterize transiting planets in the Kepler Field of View and to determine their occurrence rates. Emphasis was placed on identification of Earth-size planets orbiting in the Habitable Zone of their host stars. Science data were acquired for a period of four years. Long-cadence data with 29.4 min sampling were obtained for ~200,000 individual stellar targets in at least one observing quarter in the primary Kepler Mission. Light curves for target stars are extracted in the Kepler Science Data Processing Pipeline, and are searched for transiting planet signatures. A Threshold Crossing Event is generated in the transit search for targets where the transit detection threshold is exceeded and transit consistency checks are satisfied. These targets are subjected to further scrutiny in the Data Validation (DV) component of the Pipeline. Transiting planet candidates are characterized in DV, and light curves are searched for additional planets after transit signatures are modeled and removed. A suite of diagnostic tests is performed on all candidates to aid in discrimination between genuine transiting planets and instrumental or astrophysical false positives. Data products are generated per target and planet candidate to document and display transiting planet model fit and diagnostic test results. These products are exported to the Exoplanet Archive at the NASA Exoplanet Science Institute, and are available to the community. We describe the DV architecture and diagnostic tests, and provide a brief overview of the data products. Transiting planet modeling and the search for multiple planets on individual targets are described in a companion paper. The final revision of the Kepler Pipeline code base is available to the general public through GitHub. The Kepler Pipeline has also been modified to support the TESS Mission which will commence in 2018.
Automatic Classification of Kepler Planetary Transit Candidates
2015
In the first three years of operation the Kepler mission found 3,697 planet candidates from a set of 18,406 transit-like features detected on over 200,000 distinct stars. Vetting candidate signals manually by inspecting light curves and other diagnostic information is a labor-intensive effort. Additionally, this classification methodology does not yield any information about the quality of planet candidates; all candidates are as credible as any other candidate. The torrent of exoplanet discoveries will continue after Kepler as there will be a number of exoplanet surveys that have an even broader search area. This paper presents the application of machine-learning techniques to the classification of exoplanet transit-like signals present in the Kepler light curve data. Transit-like detections are transformed into a uniform set of real-numbered attributes, the most important of which are described in this paper. Each of the known transit-like detections is assigned a class of planet candidate; astrophysical false positive; or systematic, instrumental noise. We use a random forest algorithm to learn the mapping from attributes to classes on this training set. The random forest algorithm has been used previously to classify variable stars; this is the first time it has been used for exoplanet classification. We are able to achieve an overall error rate of 5.85% and an error rate for classifying exoplanet candidates of 2.81%.
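The paper above reports an overall error rate and a separate error rate for the planet-candidate class across three classes (planet candidate, astrophysical false positive, systematic noise). A small sketch of how those two figures follow from a confusion matrix; the matrix values below are invented, not the paper's results:

```python
import numpy as np

# rows = true class, columns = predicted class
# classes: 0 = planet candidate, 1 = astrophysical false positive, 2 = systematic noise
cm = np.array([[950,  20,  10],
               [ 30, 700,  40],
               [ 15,  35, 600]])

overall_error = 1.0 - np.trace(cm) / cm.sum()      # off-diagonal fraction of all signals
pc_error = 1.0 - cm[0, 0] / cm[0].sum()            # misclassified true planet candidates
print(f"overall error: {overall_error:.2%}, planet-candidate error: {pc_error:.2%}")
```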
The TESS Science Processing Operations Center
by Jenkins, Jon; Morgan, Edward; Girouard, Forrest
in Extrasolar planets, Planet detection, Satellites
2016
The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth’s closest cousins starting in late 2017. TESS will discover approximately 1,000 small planets and measure the masses of at least 50 of these small worlds. The Science Processing Operations Center (SPOC) is being developed based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NAS Pleiades supercomputer. The SPOC will search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes.
Conference Proceeding