Daily Papers

by AK and the research community

Oct 31

Unsupervised Monocular Depth Perception: Focusing on Moving Objects

As a flexible passive 3D sensing means, unsupervised learning of depth from monocular videos is becoming an important research topic. It utilizes the photometric errors between the target view and the views synthesized from its adjacent source views as the loss, instead of the difference from the ground truth. Despite significant recent progress, occlusion and scene dynamics in real-world scenes still adversely affect the learning. In this paper, we show that deliberately manipulating photometric errors can deal with these difficulties more effectively. We first propose an outlier masking technique that treats the occluded or dynamic pixels as statistical outliers in the photometric error map. With the outlier masking, the network learns the depth of objects that move in the opposite direction to the camera more accurately. To the best of our knowledge, such cases have not been seriously considered in previous works, even though they pose a high risk in applications like autonomous driving. We also propose an efficient weighted multi-scale scheme to reduce the artifacts in the predicted depth maps. Extensive experiments on the KITTI dataset and additional experiments on the Cityscapes dataset have verified the proposed approach's effectiveness for depth and ego-motion estimation. Furthermore, for the first time, we evaluate the predicted depth on the regions of dynamic objects and the static background separately, for both supervised and unsupervised methods. The evaluation further verifies the effectiveness of the proposed approach and provides some interesting observations that might inspire future research in this direction.
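
As a rough illustration of the outlier-masking idea above (treating the largest photometric errors as statistical outliers and excluding them from the loss), here is a minimal NumPy sketch; the percentile threshold and the L1 error are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def photometric_error(target, synthesized):
    """Per-pixel L1 photometric error between a target view and a view
    synthesized from an adjacent source frame (H x W x 3 arrays in [0, 1])."""
    return np.abs(target - synthesized).mean(axis=-1)

def outlier_mask(error_map, upper_q=95.0):
    """Treat the largest photometric errors as statistical outliers
    (hypothetical percentile threshold; occluded or dynamic pixels tend
    to concentrate in this tail) and exclude them from the loss."""
    threshold = np.percentile(error_map, upper_q)
    return error_map <= threshold  # True where the pixel is kept

def masked_photometric_loss(target, synthesized, upper_q=95.0):
    err = photometric_error(target, synthesized)
    mask = outlier_mask(err, upper_q)
    return err[mask].mean()

# toy usage with random "images"
rng = np.random.default_rng(0)
tgt = rng.random((4, 6, 3))
syn = np.clip(tgt + 0.05 * rng.standard_normal((4, 6, 3)), 0, 1)
print(masked_photometric_loss(tgt, syn))
```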

  • 4 authors
·
Aug 30, 2021

ForestSplats: Deformable transient field for Gaussian Splatting in the Wild

Recently, 3D Gaussian Splatting (3D-GS) has emerged, showing real-time rendering speeds and high-quality results in static scenes. Although 3D-GS is effective in static scenes, its performance significantly degrades in real-world environments due to transient objects, lighting variations, and diverse levels of occlusion. To tackle this, existing methods estimate occluders or transient elements by leveraging pre-trained models or integrating additional transient field pipelines. However, these methods still suffer from two defects: 1) using semantic features from a Vision Foundation Model (VFM) incurs additional computational costs, and 2) the transient field requires significant memory to handle transient elements with per-view Gaussians and struggles to define clear boundaries for occluders when relying solely on photometric errors. To address these problems, we propose ForestSplats, a novel approach that leverages a deformable transient field and a superpixel-aware mask to efficiently represent transient elements in the 2D scene across unconstrained image collections and effectively decompose static scenes from transient distractors without a VFM. We designed the transient field to be deformable, capturing per-view transient elements. Furthermore, we introduce a superpixel-aware mask that clearly defines the boundaries of occluders by considering photometric errors and superpixels. Additionally, we propose uncertainty-aware densification to avoid generating Gaussians within the boundaries of occluders during densification. Through extensive experiments across several benchmark datasets, we demonstrate that ForestSplats outperforms existing methods without a VFM and shows significant memory efficiency in representing transient elements.

  • 5 authors
·
Mar 8

First Light And Reionisation Epoch Simulations (FLARES) VI: The colour evolution of galaxies z=5-15

With its exquisite sensitivity, wavelength coverage, and spatial and spectral resolution, the James Webb Space Telescope is poised to revolutionise our view of the distant, high-redshift (z>5) Universe. While Webb's spectroscopic observations will be transformative for the field, photometric observations play a key role in identifying distant objects and providing more comprehensive samples than accessible to spectroscopy alone. In addition to identifying objects, photometric observations can also be used to infer physical properties and thus be used to constrain galaxy formation models. However, inferred physical properties from broadband photometric observations, particularly in the absence of spectroscopic redshifts, often have large uncertainties. With the development of new tools for forward modelling simulations it is now routinely possible to predict observational quantities, enabling a direct comparison with observations. With this in mind, in this work, we make predictions for the colour evolution of galaxies at z=5-15 using the FLARES: First Light And Reionisation Epoch Simulations cosmological hydrodynamical simulation suite. We predict a complex evolution, driven predominantly by strong nebular line emission passing through individual bands. These predictions are in good agreement with existing constraints from Hubble and Spitzer as well as some of the first results from Webb. We also contrast our predictions with other models in the literature: while the general trends are similar we find key differences, particularly in the strength of features associated with strong nebular line emission. This suggests photometric observations alone should provide useful discriminating power between different models.

  • 9 authors
·
Jul 22, 2022

CfA3: 185 Type Ia Supernova Light Curves from the CfA

We present multi-band photometry of 185 type-Ia supernovae (SN Ia), with over 11500 observations. These were acquired between 2001 and 2008 at the F. L. Whipple Observatory of the Harvard-Smithsonian Center for Astrophysics (CfA). This sample contains the largest number of homogeneously-observed and reduced nearby SN Ia (z < 0.08) published to date. It more than doubles the nearby sample, bringing SN Ia cosmology to the point where systematic uncertainties dominate. Our natural system photometry has a precision of 0.02 mag or better in BVRIr'i' and roughly 0.04 mag in U for points brighter than 17.5 mag. We also estimate a systematic uncertainty of 0.03 mag in our SN Ia standard system BVRIr'i' photometry and 0.07 mag for U. Comparisons of our standard system photometry with published SN Ia light curves and comparison stars, where available for the same SN, reveal agreement at the level of a few hundredths mag in most cases. We find that 1991bg-like SN Ia are sufficiently distinct from other SN Ia in their color and light-curve-shape/luminosity relation that they should be treated separately in light-curve/distance fitter training samples. The CfA3 sample will contribute to the development of better light-curve/distance fitters, particularly in the few dozen cases where near-infrared photometry has been obtained and, together, can help disentangle host-galaxy reddening from intrinsic supernova color, reducing the systematic uncertainty in SN Ia distances due to dust.

  • 8 authors
·
Jan 29, 2009

Estimation of Classical Cepheid's Physical Parameters from NIR Light Curves

Recent space-borne and ground-based observations provide photometric measurements as time series. The effect of interstellar dust extinction in the near-infrared range is only 10% of that measured in the V band. However, the sensitivity of the light curve shape to the physical parameters is much lower in the near-infrared. Interpreting these types of data sets therefore requires new approaches, much like the various large-scale surveys, which create similar big-data problems. Using a selected data set, we provide a method that applies routines implemented in R to extract most of the information in the measurements and determine physical parameters; it can also be used in automatic classification schemes and pipeline processing. We performed a multivariate classification of 131 Cepheid light curves (LCs) in the J, H, and K colors, where all the LCs were represented in a 20-dimensional parameter space in each of these colors separately. Performing a Principal Component Analysis (PCA), we obtained an orthogonal coordinate system and squared Euclidean distances between the LCs, with 6 significant eigenvalues, reducing the dimensionality from 20 to 6. We also estimated the optimal number of partitions of similar objects and found it to be 7 in each color; the dependence of these groups on period, absolute magnitude, amplitude, and metallicity is also discussed. We computed Spearman rank correlations, showing that periods and absolute magnitudes correlate significantly with the first three PCs. The first two PCs are also found to be related to the amplitude, but the metallicity effects are only marginal. The method shown can be generalized and implemented in unsupervised classification schemes and in the analysis of mixed and biased samples. The analysis of our Classical Cepheid near-infrared LC sample showed that the J, H, and K curves are insufficient for determining stellar metallicity, with mass being the key factor shaping them.
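
The PCA step described above can be sketched in a few lines; the 20-dimensional descriptors and the period array below are synthetic placeholders, and only the shape of the computation follows the abstract:

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import spearmanr

# Hypothetical input: each row is one Cepheid light curve represented by a
# 20-dimensional descriptor in a single band (as described in the abstract).
rng = np.random.default_rng(1)
lc_descriptors = rng.standard_normal((131, 20))
periods = rng.uniform(1.0, 50.0, size=131)

pca = PCA(n_components=6)           # keep the 6 significant components
scores = pca.fit_transform(lc_descriptors)
print(pca.explained_variance_ratio_)

# Rank correlation of the first PC with the period (cf. the Spearman tests above)
rho, p = spearmanr(scores[:, 0], periods)
print(rho, p)
```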

  • 2 authors
·
Dec 9, 2024

Non-Uniform Spatial Alignment Errors in sUAS Imagery From Wide-Area Disasters

This work presents the first quantitative study of alignment errors between small uncrewed aerial systems (sUAS) geospatial imagery and a priori building polygons and finds that the alignment errors are non-uniform and irregular. The work also introduces a publicly available dataset of imagery, building polygons, and human-generated and curated adjustments that can be used to evaluate existing strategies for aligning building polygons with sUAS imagery. No prior efforts have aligned pre-existing spatial data with sUAS imagery, and thus there is no clear state of practice. However, this effort and analysis show that translational alignment errors are present in this type of data, averaging 82 px with an intersection over union of 0.65, and would induce further errors and biases in downstream machine learning systems unless addressed. This study identifies and analyzes the translational alignment errors of 21,619 building polygons in fifty-one orthomosaic images, covering 16,787.2 acres (26.23 square miles), constructed from sUAS raw imagery from nine wide-area disasters (Hurricane Ian, Hurricane Harvey, Hurricane Michael, Hurricane Ida, Hurricane Idalia, Hurricane Laura, the Mayfield Tornado, the Musset Bayou Fire, and the Kilauea Eruption). The analysis finds no uniformity among the angle and distance metrics of the building polygon alignments, as they present an average degree variance of 0.4 and an average pixel distance variance of 0.45. This work alerts the sUAS community to the problem of spatial alignment and shows that a simple linear transform, often used to align satellite imagery, will not be sufficient to align spatial data in sUAS orthomosaic imagery.
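
For intuition on how a purely translational misalignment degrades the intersection over union, here is a small sketch with shapely; the footprint size and the ~82 px offset are illustrative choices, not values taken from the dataset itself:

```python
from shapely.geometry import Polygon
from shapely.affinity import translate

def translational_iou(polygon: Polygon, dx_px: float, dy_px: float) -> float:
    """Intersection-over-union between a building polygon and a copy of it
    shifted by a translational alignment error of (dx_px, dy_px) pixels."""
    shifted = translate(polygon, xoff=dx_px, yoff=dy_px)
    inter = polygon.intersection(shifted).area
    union = polygon.union(shifted).area
    return inter / union if union > 0 else 0.0

# toy example: a 100 x 100 px footprint misaligned by roughly 82 px diagonally
footprint = Polygon([(0, 0), (100, 0), (100, 100), (0, 100)])
print(translational_iou(footprint, 58, 58))
```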

  • 6 authors
·
May 10, 2024

Lessons Learned from the 1st ARIEL Machine Learning Challenge: Correcting Transiting Exoplanet Light Curves for Stellar Spots

The last decade has witnessed a rapid growth of the field of exoplanet discovery and characterisation. However, several big challenges remain, many of which could be addressed using machine learning methodology. For instance, the most prolific method for detecting exoplanets and inferring several of their characteristics, transit photometry, is very sensitive to the presence of stellar spots. The current practice in the literature is to identify the effects of spots visually and correct for them manually or discard the affected data. This paper explores a first step towards fully automating the efficient and precise derivation of transit depths from transit light curves in the presence of stellar spots. The methods and results we present were obtained in the context of the 1st Machine Learning Challenge organized for the European Space Agency's upcoming Ariel mission. We first present the problem and the simulated Ariel-like data, and outline the Challenge, while identifying best practices for organizing similar challenges in the future. Finally, we present the solutions obtained by the top-5 winning teams, provide their code and discuss their implications. Successful solutions either construct highly non-linear (w.r.t. the raw data) models with minimal preprocessing, such as deep neural networks and ensemble methods, or extract meaningful statistics from the light curves and build linear models on them, which yields comparably good predictive performance.

  • 23 authors
·
Oct 29, 2020

Revision of the Phenomenological Characteristics of the Algol-Type Stars Using the NAV Algorithm

Phenomenological characteristics of a sample of Algol-type stars are revised using the recently developed NAV ("New Algol Variable") algorithm (2012Ap.....55..536A, 2012arXiv1212.6707A) and compared to those obtained using the common methods of a Trigonometric Polynomial (TP) fit or a local Algebraic Polynomial (A) fit of a fixed or (alternately) statistically optimal degree (1994OAP.....7...49A, 2003ASPC..292..391A). The computer program NAV is introduced, which allows one to determine the best fit with 7 "linear" and 5 "non-linear" parameters and their error estimates. The number of parameters is much smaller than for the TP fit, which is typically 20-40, depending on the width of the eclipse, and 5-20 for the W UMa and beta Lyrae-type stars. This yields a smoother approximation that takes into account the reflection and ellipsoidal effects (TP2) and the generally different shapes of the primary and secondary eclipses. An application of the method to two-color CCD photometry of the recently discovered eclipsing variable 2MASS J18024395+4003309 = VSX J180243.9+400331 (2015JASS...32..101A) allowed us to estimate the physical parameters of the binary system based on the phenomenological parameters of the light curve. The phenomenological parameters of the light curves were determined for a sample of newly discovered EA and EW-type stars (VSX J223429.3+552903, VSX J223421.4+553013, VSX J223416.2+553424, USNO-B1.0 1347-0483658, UCAC3 191-085589, VSX J180755.6+074711 = UCAC3 196-166827). Although we used the original observations published by the discoverers, the accuracy estimates of the period obtained with the NAV method are typically better than the original ones.

  • 3 authors
·
Nov 30, 2015

Phenomenological Modeling of Eclipsing Binary Stars

We review the NAV (New Algol Variable) method, first introduced in 2012Ap.....55..536A, which uses locally-dependent shapes of the eclipses in addition to a second-order trigonometric polynomial (which typically describes the "out-of-eclipse" part of the light curve with the effects of reflection, ellipticity and the O'Connell effect). Eclipsing binary stars are believed to show distinct eclipses only if they belong to the EA type. With decreasing eclipse width, the statistically optimal order s of the trigonometric polynomial (2003ASPC..292..391A) drastically increases, from ~2 for elliptic (EL) variables without eclipses, to ~6-8 for EW, and up to ~30-50 for some EA with narrow eclipses. In this case of a large number of parameters, the smoothing curve becomes very noisy and apparent waves (the Gibbs phenomenon) may be seen. The NAV set of parameters may be used for classification in the GCVS, VSX and similar catalogs. The maximal number of parameters is m=12, which corresponds to s=5, if both the period and the initial epoch are corrected. We have applied the method to a few stars, also in the case of multi-color photometry (2015JASS...32..127A), where it is possible to use the phenomenological parameters from the NAV fit to estimate physical parameters using statistical dependencies. We conclude that the NAV approximation is better than the TP one even for EW-type stars with much wider eclipses. It may also be used to determine timings (see 2005ASPC..335...37A for a review of methods) or to determine parameters in the case of a variable period, using a complete light curve to model the phase variations. The method is illustrated on 2MASS J11080447-6143290 (EA-type), USNO-B1.0 1265-0306001 and USNO-B1.0 1266-0313413 (EW-type) and compared to various other methods from the literature.

  • 3 authors
·
Feb 12, 2016

Phenomenological Modelling of a Group of Eclipsing Binary Stars

Phenomenological modeling of variable stars allows determination of a set of parameters that are needed for classification in the "General Catalogue of Variable Stars" and similar catalogs. We apply the recent NAV ("New Algol Variable") method to eclipsing binary stars of different types. Although all periodic functions may be represented as Fourier series with an infinite number of coefficients, this is impossible for a finite number of observations. Thus one may use a restricted Fourier series, i.e. a trigonometric polynomial (TP) of order s, either for fitting the light curve or for periodogram analysis. However, the number of parameters needed drastically increases with decreasing width of the minimum. In the NAV algorithm, a special shape of the minimum is used, so the number of parameters is limited to 10 (if the period and initial epoch are fixed) or 12 (if not fixed). We illustrate the NAV method by applying it to the recently discovered Algol-type eclipsing variable 2MASS J11080308-6145589 (in the field of the previously known variable star RS Car) and compare the results to those obtained using TP fits. For this system, the statistically optimal number of TP parameters is 44, but the fit is still worse than the NAV fit. Application to the system GSC 3692-00624 argues that the NAV fit is better than the TP one even for EW-type stars with much wider eclipses. Model parameters are listed.
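
For reference, the trigonometric polynomial (TP) fit that NAV is compared against is just a truncated Fourier series fitted by least squares; here is a minimal NumPy sketch, with a synthetic EW-like curve standing in for real photometry:

```python
import numpy as np

def tp_design_matrix(phase, s):
    """Design matrix of a trigonometric polynomial of order s evaluated at
    the orbital phases (phase in [0, 1))."""
    cols = [np.ones_like(phase)]
    for k in range(1, s + 1):
        cols.append(np.cos(2 * np.pi * k * phase))
        cols.append(np.sin(2 * np.pi * k * phase))
    return np.column_stack(cols)

def fit_tp(phase, mag, s):
    """Least-squares TP fit; returns the 2s+1 coefficients and the smoothed curve."""
    A = tp_design_matrix(phase, s)
    coeff, *_ = np.linalg.lstsq(A, mag, rcond=None)
    return coeff, A @ coeff

# toy usage on a synthetic EW-like double-wave curve
rng = np.random.default_rng(2)
phase = rng.uniform(0, 1, 300)
mag = 12.0 + 0.3 * np.cos(4 * np.pi * phase) + 0.01 * rng.standard_normal(300)
coeff, model = fit_tp(phase, mag, s=2)
print(coeff[:3])
```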

  • 3 authors
·
Sep 17, 2015

Cosmic Evolution Early Release Science (CEERS) survey: The colour evolution of galaxies in the distant Universe

The wavelength-coverage and sensitivity of JWST now enables us to probe the rest-frame UV - optical spectral energy distributions (SEDs) of galaxies at high-redshift (z>4). From these SEDs it is, in principle, through SED fitting possible to infer key physical properties, including stellar masses, star formation rates, and dust attenuation. These in turn can be compared with the predictions of galaxy formation simulations allowing us to validate and refine the incorporated physics. However, the inference of physical properties, particularly from photometry alone, can lead to large uncertainties and potential biases. Instead, it is now possible, and common, for simulations to be forward-modelled to yield synthetic observations that can be compared directly to real observations. In this work, we measure the JWST broadband fluxes and colours of a robust sample of 5<z<10 galaxies using the Cosmic Evolution Early Release Science (CEERS) Survey. We then analyse predictions from a variety of models using the same methodology and compare the NIRCam/F277W magnitude distribution and NIRCam colours with observations. We find that the predicted and observed magnitude distributions are similar, at least at 5<z<8. At z>8 the distributions differ somewhat, though our observed sample size is small and thus susceptible to statistical fluctuations. Likewise, the predicted and observed colour evolution show broad agreement, at least at 5<z<8. There is however some disagreement between the observed and modelled strength of the strong line contribution. In particular all the models fails to reproduce the F410M-F444W colour at z>8, though, again, the sample size is small here.

  • 23 authors
·
Nov 14, 2023

Testing the extended corona model with the optical/UV reverberation mapping of the accretion disk

The illumination of accretion disks is frequently studied assuming that the incident X-ray flux comes from a point-like source, an approach referred to as the lamppost model. The most recent computations of the X-ray reprocessing by the disk take into account the departure from simple lamppost models. However, in computations of the incident flux thermalization and subsequent re-emission in the optical-UV band, the lamppost approximation is most frequently assumed. We test whether UV-optical reverberation mapping and time delay measurements are sensitive to this assumption. We assume that the incident radiation originates from a region extended along the symmetry axis. To model this, we adopt a simple setup that represents the emission as two lamps irradiating the disk simultaneously from two different heights. We then compare the resulting predictions with those obtained for a single lamppost located at an intermediate height. We show, on the basis of the transfer function, that the wavelength-dependent delay curve deviates by at most 20% from that of a single lamppost, assuming a black hole mass of 10^8 M_⊙, an Eddington ratio of 1, and lamps located at 5 and 100 r_g. The maximum deviation occurs for a lamp luminosity ratio of ~3. When simulating light curves for a two-lamp setup and a standard lamppost with the same black hole mass and a sampling rate of 0.1 days, we find no measurable differences in the ICCF profiles between the two setups. A larger black hole mass and a considerably lower Eddington ratio would allow larger differences between a single-lamppost and a two-lamppost model to be seen. UV/optical reverberation mapping is therefore not very sensitive to the vertical extension of the corona.

  • 2 authors
·
Jan 1

A 2.4% Determination of the Local Value of the Hubble Constant

We use the Wide Field Camera 3 (WFC3) on the Hubble Space Telescope (HST) to reduce the uncertainty in the local value of the Hubble constant (H_0) from 3.3% to 2.4%. Improvements come from new, near-infrared observations of Cepheid variables in 11 new hosts of recent SNe Ia, more than doubling the sample of SNe Ia having a Cepheid-calibrated distance for a total of 19; these leverage the magnitude-z relation based on 300 SNe Ia at z<0.15. All 19 hosts and the megamaser system NGC4258 were observed with WFC3, thus nullifying cross-instrument zeropoint errors. Other improvements include a 33% reduction in the systematic uncertainty in the maser distance to NGC4258, more Cepheids and a more robust distance to the LMC from late-type DEBs, HST observations of Cepheids in M31, and new HST-based trigonometric parallaxes for Milky Way (MW) Cepheids. We consider four geometric distance calibrations of Cepheids: (i) megamasers in NGC4258, (ii) 8 DEBs in the LMC, (iii) 15 MW Cepheids with parallaxes, and (iv) 2 DEBs in M31. H_0 from each is 72.25+/-2.51, 72.04+/-2.67, 76.18+/-2.37, and 74.50+/-3.27 km/sec/Mpc, respectively. Our best estimate of 73.24+/-1.74 km/sec/Mpc combines the anchors NGC4258, MW, and LMC, and includes systematic errors for a final uncertainty of 2.4%. This value is 3.4σ higher than the 66.93+/-0.62 km/sec/Mpc predicted by ΛCDM with 3 neutrinos with mass 0.06 eV and the Planck data, but reduces to 2.1σ relative to the prediction of 69.3+/-0.7 km/sec/Mpc with the combination of WMAP+ACT+SPT+BAO, suggesting systematic uncertainties in CMB measurements may play a role in the tension. If we take the conflict between Planck and H_0 at face value, one plausible explanation could involve an additional source of dark radiation in the early Universe in the range of ΔN_eff = 0.4-1. We anticipate significant improvements in H_0 from upcoming parallax measurements.

  • 15 authors
·
Apr 5, 2016

The DESI PRObabilistic Value-Added Bright Galaxy Survey (PROVABGS) Mock Challenge

The PRObabilistic Value-Added Bright Galaxy Survey (PROVABGS) catalog will provide measurements of galaxy properties, such as stellar mass (M_*), star formation rate (SFR), stellar metallicity (Z_MW), and stellar age (t_age,MW), for >10 million galaxies of the DESI Bright Galaxy Survey. Full posterior distributions of the galaxy properties will be inferred using state-of-the-art Bayesian spectral energy distribution (SED) modeling of DESI spectroscopy and Legacy Surveys photometry. In this work, we present the SED model, Bayesian inference framework, and methodology of PROVABGS. Furthermore, we apply the PROVABGS SED modeling on realistic synthetic DESI spectra and photometry, constructed using the L-GALAXIES semi-analytic model. We compare the inferred galaxy properties to the true galaxy properties of the simulation using a hierarchical Bayesian framework to quantify accuracy and precision. Overall, we accurately infer the true M_*, SFR, Z_MW, and t_age,MW of the simulated galaxies. However, the priors on galaxy properties induced by the SED model have a significant impact on the posteriors. They impose a lower bound of SFR > 10^{-1} M_⊙/yr on the SFR, a ~0.3 dex bias on log Z_MW for galaxies with low spectral signal-to-noise, and an upper bound of t_age,MW < 8 Gyr on the stellar age. This work also demonstrates that a joint analysis of spectra and photometry significantly improves the constraints on galaxy properties over photometry alone and is necessary to mitigate the impact of the priors. With the methodology presented and validated in this work, PROVABGS will maximize information extracted from DESI observations and provide a probabilistic value-added galaxy catalog that will extend current galaxy studies to new regimes and unlock cutting-edge probabilistic analyses.

  • 19 authors
·
Feb 3, 2022

Understanding of the properties of neural network approaches for transient light curve approximations

Modern-day time-domain photometric surveys collect a lot of observations of various astronomical objects and the coming era of large-scale surveys will provide even more information on their properties. Spectroscopic follow-ups are especially crucial for transients such as supernovae and most of these objects have not been subject to such studies. Flux time series are actively used as an affordable alternative for photometric classification and characterization, for instance, peak identifications and luminosity decline estimations. However, the collected time series are multidimensional and irregularly sampled, while also containing outliers and without any well-defined systematic uncertainties. This paper presents a search for the best-performing methods to approximate the observed light curves over time and wavelength for the purpose of generating time series with regular time steps in each passband. We examined several light curve approximation methods based on neural networks such as multilayer perceptrons, Bayesian neural networks, and normalizing flows to approximate observations of a single light curve. Test datasets include simulated PLAsTiCC and real Zwicky Transient Facility Bright Transient Survey light curves of transients. The tests demonstrate that even just a few observations are enough to fit the networks and improve the quality of approximation, compared to state-of-the-art models. The methods described in this work have a low computational complexity and are significantly faster than Gaussian processes. Additionally, we analyzed the performance of the approximation techniques from the perspective of further peak identification and transients classification. The study results have been released in an open and user-friendly Fulu Python library available on GitHub for the scientific community.
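
The basic approximation setup (regress flux on time and passband, then resample on a regular grid) can be sketched with a plain scikit-learn MLP; the toy light curve and network sizes below are assumptions and do not reproduce the Fulu models:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical irregularly sampled multi-band light curve: features are
# (time, passband index), target is flux.
rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 60, 120))
band = rng.integers(0, 3, size=t.size)
flux = np.exp(-0.5 * ((t - 30) / 8.0) ** 2) * (1.0 + 0.1 * band) \
       + 0.02 * rng.standard_normal(t.size)

X = np.column_stack([t, band])
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
model.fit(X, flux)

# Regular grid in each passband, as needed by downstream classifiers
t_grid = np.linspace(0, 60, 61)
for b in range(3):
    flux_grid = model.predict(np.column_stack([t_grid, np.full_like(t_grid, b)]))
    print(b, round(float(flux_grid.max()), 3))
```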

  • 7 authors
·
Sep 15, 2022

Cosmological Distance Measurement of 12 Nearby Supernovae IIP with ROTSE-IIIB

We present a cosmological analysis of 12 nearby (z<0.06) Type IIP supernovae (SNe IIP) observed with the ROTSE-IIIb telescope. To achieve precise photometry, we present a new image differencing technique that is implemented for the first time in the ROTSE SN photometry pipeline. With this method, we find up to a 20% increase in the detection efficiency and a significant reduction in the residual RMS scatter of the SN lightcurves when compared to the previous pipeline performance. We use the published optical spectra and broadband photometry of well-studied SNe IIP to establish temporal models for the ejecta velocity and photospheric temperature evolution of our SNe IIP population. This study yields measurements that are competitive with other methods even when the data are limited to a single epoch during the photospheric phase of SNe IIP. Using the fully reduced ROTSE photometry and optical spectra, we apply these models to the respective photometric epochs for each SN in the ROTSE IIP sample. This facilitates the use of the Expanding Photosphere Method (EPM) to obtain distance estimates to their respective host galaxies. We then perform cosmological parameter fitting using these EPM distances, from which we measure the Hubble constant to be 72.9^{+5.7}_{-4.3} km s^{-1} Mpc^{-1}, which is consistent with the standard ΛCDM model values derived using other independent techniques.
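
A minimal sketch of the EPM distance relation used above, assuming free expansion from the explosion time and given a photospheric velocity and angular radius; the numbers are purely illustrative:

```python
# Expanding Photosphere Method (EPM), simplified:
#   R_phot = v_phot * (t - t0)   and   theta = R_phot / D   =>   D = v_phot * (t - t0) / theta
# (in practice theta comes from the broadband photometry and a dilute-blackbody fit)

KM_PER_MPC = 3.0857e19
SECONDS_PER_DAY = 86400.0

def epm_distance_mpc(v_phot_km_s, days_since_explosion, theta_rad):
    """Distance in Mpc from photospheric velocity, time since explosion,
    and the angular radius of the photosphere (all hypothetical inputs)."""
    r_phot_km = v_phot_km_s * days_since_explosion * SECONDS_PER_DAY
    return r_phot_km / theta_rad / KM_PER_MPC

# toy numbers, for illustration only
print(epm_distance_mpc(v_phot_km_s=8000.0, days_since_explosion=30.0, theta_rad=2.3e-11))
```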

  • 17 authors
·
Aug 1, 2023

TDCOSMO XVII. New time delays in 22 lensed quasars from optical monitoring with the ESO-VST 2.6m and MPG 2.2m telescopes

We present new time delays, the main ingredient of time delay cosmography, for 22 lensed quasars resulting from high-cadence r-band monitoring on the 2.6 m ESO VLT Survey Telescope and Max-Planck-Gesellschaft 2.2 m telescope. Each lensed quasar was typically monitored for one to four seasons, often shared between the two telescopes to mitigate the interruptions forced by the COVID-19 pandemic. The sample of targets consists of 19 quadruply and 3 doubly imaged quasars, which received a total of 1,918 hours of on-sky time split into 21,581 wide-field frames, each 320 seconds long. In a given field, the 5σ depth of the combined exposures typically reaches the 27th magnitude, while that of single visits is 24.5 mag - similar to the expected depth of the upcoming Vera-Rubin LSST. The fluxes of the different lensed images of the targets were reliably de-blended, providing not only light curves with photometric precision down to the photon noise limit, but also high-resolution models of the targets whose features and astrometry were systematically confirmed in Hubble Space Telescope imaging. This was made possible thanks to a new photometric pipeline, lightcurver, and the forward modelling method STARRED. Finally, the time delays between pairs of curves and their uncertainties were estimated, taking into account the degeneracy due to microlensing, and for the first time the full covariance matrices of the delay pairs are provided. Of note, this survey, with 13 square degrees, has applications beyond that of time delays, such as the study of the structure function of the multiple high-redshift quasars present in the footprint at a new high in terms of both depth and frequency. The reduced images will be available through the European Southern Observatory Science Portal.

  • 32 authors
·
Apr 3

Revisiting the Classics: On the Optical Colours of Novae as Standard Crayons

We present a systematic study of the BVRI colours of novae over the course of their eruptions. Where possible, interstellar reddening was measured using the equivalent widths of Diffuse Interstellar Bands (DIBs). Some novae lack spectra with sufficient resolution and signal-to-noise ratios; therefore, we supplement as necessary with 3D and 2D dust maps. Utilising only novae with DIB- or 3D-map-based E(B-V), we find an average intrinsic (B-V)_0 colour of novae at V-band light curve peak of 0.18 with a standard deviation of 0.31, based on a sample of 23 novae. When the light curve has declined by 2 magnitudes (t_2), we find an average (B-V)_0 = -0.02 with a standard deviation of 0.19. These average colours are consistent with previous findings, although the spreads are larger than previously found due to more accurate reddening estimates. We also examined the intrinsic (R-I)_0 and (V-R)_0 colours across our sample. These colours behave similarly to (B-V)_0, except that the (V-R)_0 colour gets redder after peak, likely due to the contributions of emission line flux. We searched for correlations between nova colours and t_2, peak V-band absolute magnitude, and GeV gamma-ray luminosity, but find no statistically significant correlations. Nova colours can therefore be used as standard "crayons" to estimate interstellar reddening from photometry alone, with 0.2--0.3 mag uncertainty. We present a novel Bayesian strategy for estimating distances to Galactic novae based on these E(B-V) measurements, independent of assumptions about luminosity, built using 3D dust maps and a stellar mass model of the Milky Way.

  • 12 authors
·
Dec 19, 2024

JAGB 2.0: Improved Constraints on the J-region Asymptotic Giant Branch-based Hubble Constant from an Expanded Sample of JWST Observations

The J-region Asymptotic Giant Branch (JAGB) is an overdensity of stars in the near-infrared, attributed to carbon-rich asymptotic giant branch stars, and recently used as a standard candle for measuring extragalactic distances and the Hubble constant. Using JWST in Cycle 2, we extend JAGB measurements to 6 hosts of 9 Type Ia supernovae (SNe Ia) (NGC 2525, NGC 3147, NGC 3370, NGC 3447, NGC 5468, and NGC 5861), with two at D ~ 40 Mpc, all calibrated by the maser host NGC 4258. We investigate the effects of incompleteness and find that we are unable to recover a robust JAGB measurement in one of the two most distant hosts at D ~ 40 Mpc, NGC 3147. We compile all JWST JAGB observations in SNe Ia hosts, 15 galaxies hosting 18 SNe Ia, from the SH0ES and CCHP programs and employ all literature measures (mode, mean, median, model). We find no significant mean difference between these distances and those from HST Cepheids, -0.03 ± 0.02 (stat) ± 0.05 (sys) mag. We find a difference of 0.11 ± 0.02 mag between JAGB mode measurements in the CCHP analyses of two fields in NGC 4258, a feature also seen in two SH0ES fields (see field-to-field variations in Li et al. 2024a), indicating significant field-to-field variation of JAGB measurements in NGC 4258 which produce a large absolute calibration uncertainty. Variations are also seen in the shape of the JAGB LF across galaxies so that different measures produce different values of the Hubble constant. We look for but do not (yet) find a standardizing relation between JAGB LF skew or color dependence and the apparent variation. Using the middle result of all JAGB measures to calibrate SNe Ia yields a Hubble constant of H_0 = 73.3 ± 1.4 (stat) ± 2.0 (sys) km/s/Mpc, with the systematic dominated by apparent differences across the NGC 4258 calibrating fields or their measures.

  • 5 authors
·
Feb 7

The Foundation Supernova Survey: Measuring Cosmological Parameters with Supernovae from a Single Telescope

Measurements of the dark energy equation-of-state parameter, w, have been limited by uncertainty in the selection effects and photometric calibration of z<0.1 Type Ia supernovae (SNe Ia). The Foundation Supernova Survey is designed to lower these uncertainties by creating a new sample of z<0.1 SNe Ia observed on the Pan-STARRS system. Here, we combine the Foundation sample with SNe from the Pan-STARRS Medium Deep Survey and measure cosmological parameters with 1,338 SNe from a single telescope and a single, well-calibrated photometric system. For the first time, both the low-z and high-z data are predominantly discovered by surveys that do not target pre-selected galaxies, reducing selection bias uncertainties. The z>0.1 data include 875 SNe without spectroscopic classifications and we show that we can robustly marginalize over CC SN contamination. We measure Foundation Hubble residuals to be fainter than the pre-existing low-z Hubble residuals by 0.046 ± 0.027 mag (stat+sys). By combining the SN Ia data with cosmic microwave background constraints, we find w = -0.938 ± 0.053, consistent with ΛCDM. With 463 spectroscopically classified SNe Ia alone, we measure w = -0.933 ± 0.061. Using the more homogeneous and better-characterized Foundation sample gives a 55% reduction in the systematic uncertainty attributed to SN Ia sample selection biases. Although use of just a single photometric system at low and high redshift increases the impact of photometric calibration uncertainties in this analysis, previous low-z samples may have correlated calibration uncertainties that were neglected in past studies. The full Foundation sample will observe up to 800 SNe to anchor the LSST and WFIRST Hubble diagrams.

  • 30 authors
·
Nov 22, 2018

VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information

Errors in understanding visual information in images (i.e., visual perception errors) remain a major source of mistakes in Large Vision Language Models (LVLMs). While further analysis is essential, there is a deficiency in datasets for evaluating the visual perception of LVLMs. In this work, we introduce VisOnlyQA, a new dataset designed to directly evaluate the visual perception capabilities of LVLMs on questions about geometric and numerical information in scientific figures. Our dataset enables us to analyze the visual perception of LVLMs for fine-grained visual information, independent of other capabilities such as reasoning. The evaluation set of VisOnlyQA includes 1,200 multiple-choice questions in 12 tasks on four categories of figures. We also provide synthetic training data consisting of 70k instances. Our experiments on VisOnlyQA highlight the following findings: (i) 20 LVLMs we evaluate, including GPT-4o and Gemini 1.5 Pro, work poorly on the visual perception tasks in VisOnlyQA, while human performance is nearly perfect. (ii) Fine-tuning on synthetic training data demonstrates the potential for enhancing the visual perception of LVLMs, but observed improvements are limited to certain tasks and specific models. (iii) Stronger language models improve the visual perception of LVLMs. In summary, our experiments suggest that both training data and model architectures should be improved to enhance the visual perception capabilities of LVLMs. The datasets, code, and model responses are provided at https://github.com/psunlpgroup/VisOnlyQA.

  • 5 authors
·
Dec 1, 2024

Correspondences of the Third Kind: Camera Pose Estimation from Object Reflection

Computer vision has long relied on two kinds of correspondences: pixel correspondences in images and 3D correspondences on object surfaces. Is there another kind, and if there is, what can they do for us? In this paper, we introduce correspondences of the third kind we call reflection correspondences and show that they can help estimate camera pose by just looking at objects without relying on the background. Reflection correspondences are point correspondences in the reflected world, i.e., the scene reflected by the object surface. The object geometry and reflectance alter the scene geometrically and radiometrically, respectively, causing incorrect pixel correspondences. Geometry recovered from each image is also hampered by distortions, namely the generalized bas-relief ambiguity, leading to erroneous 3D correspondences. We show that reflection correspondences can resolve the ambiguities arising from these distortions. We introduce a neural correspondence estimator and a RANSAC algorithm that fully leverages all three kinds of correspondences for robust and accurate joint camera pose and object shape estimation just from the object appearance. The method expands the horizon of numerous downstream tasks, including camera pose estimation for appearance modeling (e.g., NeRF) and motion estimation of reflective objects (e.g., cars on the road), to name a few, as it relieves the requirement of overlapping background.

  • 3 authors
·
Dec 7, 2023

Synthetic Light Curves and Spectra for the Photospheric Phase of a 3D Stripped-Envelope Supernova Explosion Model

We present synthetic light curves and spectra from three-dimensional (3D) Monte Carlo radiative transfer simulations based on a 3D core-collapse supernova explosion model of an ultra-stripped 3.5 M_⊙ progenitor. Our calculations predict a fast and faint transient with Δm_15 ~ 1-2 mag and peak bolometric magnitude between -15.3 mag and -16.4 mag. Due to a large-scale unipolar asymmetry in the distribution of ^56Ni, there is a pronounced viewing-angle dependence with about 1 mag difference between the directions of highest and lowest luminosity. The predicted spectra for this rare class of explosions do not yet match any observed counterpart. They are dominated by prominent Mg II lines, but features from O, C, Si, and Ca are also found. In particular, the O I line at 7774 Å appears as a blended feature together with Mg II emission. Our model is not only faster and fainter than the observed Ib/c supernova population, but also shows a correlation between higher peak luminosity and larger Δm_15 that is not present in observational samples. A possible explanation is that the unusually small ejecta mass of our model accentuates the viewing-angle dependence of the photometry. We suggest that the viewing-angle dependence of the photometry may be used to constrain asymmetries in explosion models of more typical stripped-envelope supernova progenitors in the future.

  • 5 authors
·
Oct 28, 2024

Parameter estimation from the core-bounce phase of rotating core collapse supernovae in real interferometer noise

In this work we propose an analytical model that reproduces the core-bounce phase of gravitational waves (GWs) from rapidly rotating (RR) core-collapse supernovae (CCSNe) as a function of three parameters: the arrival time τ, the ratio of kinetic to potential energy β, and a phenomenological parameter α related to rotation and the equation of state (EOS). To validate the model we use 126 waveforms from the Richers catalog (Richers 2017), selected to explore a range of rotation profiles and EOS. To quantify the accuracy of the proposed model, with a particular focus on the rotation parameter β, we show that the average Fitting Factor (FF) between the simulated waveforms and the templates is 94.4%. To estimate the parameters we propose a frequentist matched-filtering approach in real interferometric noise which does not require assigning any priors. We use the Matched Filter (MF) technique, where we inject a bank of templates into both simulated colored Gaussian noise and the real noise of O3L1. For example, for A300w6.00_BHBLP at 10 kpc we obtain a standard deviation of σ = 3.34 × 10^{-3} for simulated colored Gaussian noise and σ = 1.46 × 10^{-2} for real noise. On the other hand, from the asymptotic expansion of the variance we obtain the theoretical minimum error for β at 10 kpc and optimal orientation. The estimation error in this case runs from 10^{-2} to 10^{-3} as β increases. We show that the estimation error of β in the 3-parameter space (3D) is consistent with that in the single-parameter space (1D), which allows us to conclude that β is decoupled from the other two parameters.
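
A minimal matched-filter sketch under a white-noise simplification; the damped-sinusoid template and all numbers are placeholders, whereas the paper works with template banks in coloured and real O3L1 noise:

```python
import numpy as np

def matched_filter_snr(data, template, sigma_noise):
    """Matched-filter SNR time series for a known template in white Gaussian
    noise of standard deviation sigma_noise (a simplification of the coloured,
    real-interferometer case described above)."""
    # correlate the data stream against the template (the matched filter)
    corr = np.correlate(data, template, mode="valid")
    norm = sigma_noise * np.sqrt(np.sum(template ** 2))
    return corr / norm

# toy usage: inject a damped-sinusoid "bounce-like" template into white noise
rng = np.random.default_rng(4)
n, sigma = 4096, 1.0
t = np.arange(256) / 4096.0
template = np.exp(-t / 0.01) * np.sin(2 * np.pi * 800 * t)
data = sigma * rng.standard_normal(n)
data[1000:1000 + template.size] += 5.0 * template
snr = matched_filter_snr(data, template, sigma)
print(int(np.argmax(snr)), float(snr.max()))  # recovered injection time and peak SNR
```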

  • 5 authors
·
Apr 3, 2023

Flying Triangulation - towards the 3D movie camera

Flying Triangulation sensors enable free-hand and motion-robust 3D data acquisition of complex shaped objects. The measurement principle is based on a multi-line light-sectioning approach and uses sophisticated algorithms for real-time registration (S. Ettl et al., Appl. Opt. 51 (2012) 281-289). As a "single-shot principle", light sectioning offers the option of getting surface data from a single camera exposure. But there is a drawback: a pixel-dense measurement is not possible because of fundamental information-theoretical reasons. By "pixel-dense" we understand that each pixel displays individually measured distance information, neither interpolated from its neighbour pixels nor using lateral context information. Hence, for monomodal single-shot principles, the 3D data generated from one 2D raw image display a significantly lower space-bandwidth than the camera permits. This is the price one must pay for motion robustness. Currently, our sensors project about 10 lines (each with 1000 pixels), reaching a considerably lower data efficiency than theoretically possible for a single-shot sensor. Our aim is to push Flying Triangulation to its information-theoretical limits. Therefore, the line density as well as the measurement depth needs to be significantly increased. This causes serious indexing ambiguities. On the road to a single-shot 3D movie camera, we are working on solutions to overcome the problem of false line indexing by utilizing yet unexploited information. We will present several approaches and will discuss profound information-theoretical questions about the information efficiency of 3D sensors.

  • 4 authors
·
May 17, 2013

Interpretable structural model error discovery from sparse assimilation increments using spectral bias-reduced neural networks: A quasi-geostrophic turbulence test case

Earth system models suffer from various structural and parametric errors in their representation of nonlinear, multi-scale processes, leading to uncertainties in their long-term projections. The effects of many of these errors (particularly those due to fast physics) can be quantified in short-term simulations, e.g., as differences between the predicted and observed states (analysis increments). With the increase in the availability of high-quality observations and simulations, learning nudging from these increments to correct model errors has become an active research area. However, most studies focus on using neural networks, which, while powerful, are hard to interpret, are data-hungry, and generalize poorly out-of-distribution. Here, we show the capabilities of Model Error Discovery with Interpretability and Data Assimilation (MEDIDA), a general, data-efficient framework that uses sparsity-promoting equation-discovery techniques to learn model errors from analysis increments. Using two-layer quasi-geostrophic turbulence as the test case, MEDIDA is shown to successfully discover various linear and nonlinear structural/parametric errors when full observations are available. Discovery from spatially sparse observations is found to require highly accurate interpolation schemes. While NNs have shown success as interpolators in recent studies, here they are found to be inadequate due to their inability to accurately represent small scales, a phenomenon known as spectral bias. We show that a general remedy, adding a random Fourier feature layer to the NN, resolves this issue, enabling MEDIDA to successfully discover model errors from sparse observations. These promising results suggest that with further development, MEDIDA could be scaled up to models of the Earth system and real observations.
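
The random Fourier feature remedy mentioned above can be sketched generically as a fixed feature map applied before the interpolating network; the feature count and length scale below are assumptions, not the MEDIDA configuration:

```python
import numpy as np

class RandomFourierFeatures:
    """Random Fourier feature map phi(x) = sqrt(2/D) * cos(x @ W + b), which
    injects high-frequency structure and mitigates the spectral bias of a plain
    MLP interpolator (a generic sketch, not the exact MEDIDA setup)."""

    def __init__(self, in_dim, n_features=256, length_scale=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((in_dim, n_features)) / length_scale
        self.b = rng.uniform(0.0, 2.0 * np.pi, n_features)

    def transform(self, x):
        return np.sqrt(2.0 / self.W.shape[1]) * np.cos(x @ self.W + self.b)

# toy usage: featurize 2-D grid coordinates before feeding them to a regressor
xy = np.stack(np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32)), axis=-1).reshape(-1, 2)
phi = RandomFourierFeatures(in_dim=2, n_features=128, length_scale=0.1).transform(xy)
print(phi.shape)  # (1024, 128)
```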

  • 3 authors
·
Sep 22, 2023

Mantis Shrimp: Exploring Photometric Band Utilization in Computer Vision Networks for Photometric Redshift Estimation

We present Mantis Shrimp, a multi-survey deep learning model for photometric redshift estimation that fuses ultra-violet (GALEX), optical (PanSTARRS), and infrared (UnWISE) imagery. Machine learning is now an established approach for photometric redshift estimation, with generally acknowledged higher performance than template-based methods in areas with a high density of spectroscopically identified galaxies. Multiple works have shown that image-based convolutional neural networks can outperform tabular-based color/magnitude models. In comparison to tabular models, image models have additional design complexities: it is largely unknown how to fuse inputs from different instruments which have different resolutions or noise properties. The Mantis Shrimp model estimates the conditional density of redshift using cutout images. The density estimates are well calibrated and the point estimates perform well in the distribution of available spectroscopically confirmed galaxies, with a bias of 1e-2, a scatter (NMAD) of 2.44e-2, and a catastrophic outlier rate of η = 17.53%. We find that early fusion approaches (e.g., resampling and stacking images from different instruments) match the performance of late fusion approaches (e.g., concatenating latent space representations), so the design choice is ultimately left to the user. Finally, we study how the models learn to use information across bands, finding evidence that our models successfully incorporate information from all surveys. The applicability of our model to the analysis of large populations of galaxies is limited by the speed of downloading cutouts from external servers; however, our model could be useful in smaller studies such as generating priors over redshift for stellar population synthesis.
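
A minimal PyTorch sketch of the late-fusion design (per-survey encoders whose latent vectors are concatenated before a redshift-bin density head); channel counts, layer sizes, and bin count are hypothetical, not the actual Mantis Shrimp architecture:

```python
import torch
import torch.nn as nn

class LateFusionPhotoZ(nn.Module):
    """Late fusion: one small CNN encoder per survey cutout, latent vectors
    concatenated and mapped to a probability density over redshift bins."""

    def __init__(self, n_bins=300):
        super().__init__()
        def encoder(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(16 * 4 * 4, 64), nn.ReLU(),
            )
        self.galex = encoder(2)      # FUV, NUV
        self.panstarrs = encoder(5)  # grizy
        self.unwise = encoder(2)     # W1, W2
        self.head = nn.Sequential(
            nn.Linear(3 * 64, 128), nn.ReLU(), nn.Linear(128, n_bins),
        )

    def forward(self, x_uv, x_opt, x_ir):
        z = torch.cat([self.galex(x_uv), self.panstarrs(x_opt), self.unwise(x_ir)], dim=1)
        return torch.softmax(self.head(z), dim=1)  # conditional density over redshift bins

model = LateFusionPhotoZ()
p_z = model(torch.randn(8, 2, 32, 32), torch.randn(8, 5, 32, 32), torch.randn(8, 2, 32, 32))
print(p_z.shape)  # torch.Size([8, 300])
```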

  • 6 authors
·
Jan 15

Gaia Data Release 3: Summary of the content and survey properties

We present the third data release of the European Space Agency's Gaia mission, GDR3. The GDR3 catalogue is the outcome of the processing of raw data collected with the Gaia instruments during the first 34 months of the mission by the Gaia Data Processing and Analysis Consortium. The GDR3 catalogue contains the same source list, celestial positions, proper motions, parallaxes, and broad band photometry in the G, G_BP, and G_RP pass-bands already present in the Early Third Data Release. GDR3 introduces an impressive wealth of new data products. More than 33 million objects in the ranges G_rvs < 14 and 3100 < T_eff < 14500 K have new determinations of their mean radial velocities based on data collected by Gaia. We provide G_rvs magnitudes for most sources with radial velocities, and a line broadening parameter is listed for a subset of these. Mean Gaia spectra are made available to the community. The GDR3 catalogue includes about 1 million mean spectra from the radial velocity spectrometer, and about 220 million low-resolution blue and red prism photometer (BP/RP) mean spectra. The results of the analysis of epoch photometry are provided for some 10 million sources across 24 variability types. GDR3 includes astrophysical parameters and source class probabilities for about 470 million and 1500 million sources, respectively, including stars, galaxies, and quasars. Orbital elements and trend parameters are provided for some 800,000 astrometric, spectroscopic and eclipsing binaries. More than 150,000 Solar System objects, including new discoveries, with preliminary orbital solutions and individual epoch observations are part of this release. Reflectance spectra derived from the epoch BP/RP spectral data are published for about 60,000 asteroids. Finally, an additional data set is provided, namely the Gaia Andromeda Photometric Survey. (abridged)

  • 456 authors
·
Jul 30, 2022

Beyond the Pixel: a Photometrically Calibrated HDR Dataset for Luminance and Color Prediction

Light plays an important role in human well-being. However, most computer vision tasks treat pixels without considering their relationship to physical luminance. To address this shortcoming, we introduce the Laval Photometric Indoor HDR Dataset, the first large-scale photometrically calibrated dataset of high dynamic range 360° panoramas. Our key contribution is the calibration of an existing, uncalibrated HDR Dataset. We do so by accurately capturing RAW bracketed exposures simultaneously with a professional photometric measurement device (chroma meter) for multiple scenes across a variety of lighting conditions. Using the resulting measurements, we establish the calibration coefficients to be applied to the HDR images. The resulting dataset is a rich representation of indoor scenes which displays a wide range of illuminance and color, and varied types of light sources. We exploit the dataset to introduce three novel tasks, where: per-pixel luminance, per-pixel color and planar illuminance can be predicted from a single input image. Finally, we also capture another smaller photometric dataset with a commercial 360° camera, to experiment on generalization across cameras. We are optimistic that the release of our datasets and associated code will spark interest in physically accurate light estimation within the community. Dataset and code are available at https://lvsn.github.io/beyondthepixel/.

  • 5 authors
·
Apr 24, 2023

Optical night sky brightness measurements from the stratosphere

This paper presents optical night sky brightness measurements from the stratosphere using CCD images taken with the Super-pressure Balloon-borne Imaging Telescope (SuperBIT). The data used for estimating the backgrounds were obtained during three commissioning flights in 2016, 2018, and 2019 at altitudes ranging from 28 km to 34 km above sea level. For a valid comparison of the brightness measurements from the stratosphere with measurements from mountain-top ground-based observatories (taken at zenith on the darkest moonless night at high Galactic and high ecliptic latitudes), the stratospheric brightness levels were zodiacal light and diffuse Galactic light subtracted, and the airglow brightness was projected to zenith. The stratospheric brightness was measured around 5.5 hours, 3 hours, and 2 hours before the local sunrise time in 2016, 2018, and 2019 respectively. The B, V, R, and I brightness levels in 2016 were 2.7, 1.0, 1.1, and 0.6 mag arcsec^{-2} darker than the darkest ground-based measurements. The B, V, and R brightness levels in 2018 were 1.3, 1.0, and 1.3 mag arcsec^{-2} darker than the darkest ground-based measurements. The U and I brightness levels in 2019 were 0.1 mag arcsec^{-2} brighter than the darkest ground-based measurements, whereas the B and V brightness levels were 0.8 and 0.6 mag arcsec^{-2} darker than the darkest ground-based measurements. The lower sky brightness levels, stable photometry, and lower atmospheric absorption make stratospheric observations from a balloon-borne platform a unique tool for astronomy. We plan to continue this work in a future mid-latitude long duration balloon flight with SuperBIT.

  • 30 authors
·
Oct 10, 2020

Stereophotoclinometry Revisited

Image-based surface reconstruction and characterization is crucial for missions to small celestial bodies, as it informs mission planning, navigation, and scientific analysis. However, current state-of-the-practice methods, such as stereophotoclinometry (SPC), rely heavily on human-in-the-loop verification and high-fidelity a priori information. This paper proposes Photoclinometry-from-Motion (PhoMo), a novel framework that incorporates photoclinometry techniques into a keypoint-based structure-from-motion (SfM) system to estimate the surface normal and albedo at detected landmarks to improve autonomous surface and shape characterization of small celestial bodies from in-situ imagery. In contrast to SPC, we forego the expensive maplet estimation step and instead use dense keypoint measurements and correspondences from an autonomous keypoint detection and matching method based on deep learning. Moreover, we develop a factor graph-based approach allowing for simultaneous optimization of the spacecraft's pose, landmark positions, Sun-relative direction, and surface normals and albedos via fusion of Sun vector measurements and image keypoint measurements. The proposed framework is validated on real imagery taken by the Dawn mission to the asteroid 4 Vesta and the minor planet 1 Ceres and compared against an SPC reconstruction, where we demonstrate superior rendering performance compared to an SPC solution and precise alignment to a stereophotogrammetry (SPG) solution without relying on any a priori camera pose and topography information or humans-in-the-loop.

  • 6 authors
·
Apr 11

Paying Attention to Astronomical Transients: Introducing the Time-series Transformer for Photometric Classification

Future surveys such as the Legacy Survey of Space and Time (LSST) of the Vera C. Rubin Observatory will observe an order of magnitude more astrophysical transient events than any previous survey. With this deluge of photometric data, it will be impossible for all such events to be classified by humans alone. Recent efforts have sought to leverage machine learning methods to tackle the challenge of astronomical transient classification, with ever improving success. Transformers are a recently developed deep learning architecture, first proposed for natural language processing, that have shown a great deal of recent success. In this work we develop a new transformer architecture, which uses multi-head self-attention at its core, for general multi-variate time-series data. Furthermore, the proposed time-series transformer architecture supports the inclusion of an arbitrary number of additional features, while also offering interpretability. We apply the time-series transformer to the task of photometric classification, minimising the reliance on expert domain knowledge for feature selection, while achieving results comparable to state-of-the-art photometric classification methods. We achieve a logarithmic-loss of 0.507 on imbalanced data in a representative setting using data from the Photometric LSST Astronomical Time-Series Classification Challenge (PLAsTiCC). Moreover, we achieve a micro-averaged receiver operating characteristic area under curve of 0.98 and micro-averaged precision-recall area under curve of 0.87.
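
A minimal PyTorch sketch of a multi-head self-attention classifier for photometric time series; the per-observation feature layout, model sizes, and mean pooling are illustrative assumptions rather than the paper's exact architecture:

```python
import torch
import torch.nn as nn

class TimeSeriesTransformer(nn.Module):
    """Multi-head self-attention classifier for irregular photometric time
    series; each observation is embedded from a feature vector such as
    (time, flux, flux_err, one-hot passband)."""

    def __init__(self, n_features=9, d_model=64, n_heads=4, n_layers=2, n_classes=14):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.classify = nn.Linear(d_model, n_classes)

    def forward(self, x, padding_mask=None):
        h = self.encoder(self.embed(x), src_key_padding_mask=padding_mask)
        return self.classify(h.mean(dim=1))          # pool over the sequence

model = TimeSeriesTransformer()
logits = model(torch.randn(16, 100, 9))              # batch of 16 light curves, 100 observations each
print(logits.shape)                                  # torch.Size([16, 14])
```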

  • 2 authors
·
May 13, 2021

Segmentation with Noisy Labels via Spatially Correlated Distributions

In semantic segmentation, the accuracy of models heavily depends on high-quality annotations. However, in many practical scenarios such as medical imaging and remote sensing, obtaining true annotations is not straightforward and usually requires significant human labor. Relying on human labor often introduces annotation errors, including mislabeling, omissions, and inconsistency between annotators. In the case of remote sensing, differences in procurement time can lead to misaligned ground truth annotations. These label errors are not independently distributed, and instead usually appear in spatially connected regions where adjacent pixels are more likely to share the same errors. To address these issues, we propose an approximate Bayesian estimation based on a probabilistic model that assumes training data includes label errors, incorporating the tendency for these errors to occur with spatial correlations between adjacent pixels. Bayesian inference requires computing the posterior distribution of label errors, which becomes intractable when spatial correlations are present. We represent the correlation of label errors between adjacent pixels through a Gaussian distribution whose covariance is structured by a Kac-Murdock-Szegő (KMS) matrix, solving the computational challenges. Through experiments on multiple segmentation tasks, we confirm that leveraging the spatial correlation of label errors significantly improves performance. Notably, in specific tasks such as lung segmentation, the proposed method achieves performance comparable to training with clean labels under moderate noise levels. Code is available at https://github.com/pfnet-research/Bayesian_SpatialCorr.
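The Kac-Murdock-Szegő structure mentioned above is simple to write down. The toy sketch below (our own illustration, not the paper's code) constructs K[i, j] = rho^|i - j| and shows that its inverse is tridiagonal, a property commonly exploited for computational tractability.

```python
import numpy as np

def kms_matrix(n: int, rho: float) -> np.ndarray:
    """Kac-Murdock-Szego matrix: K[i, j] = rho**|i - j| for |rho| < 1."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

K = kms_matrix(5, 0.7)
print(np.round(K, 3))
print(np.round(np.linalg.inv(K), 3))   # tridiagonal up to numerical noise
```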

  • 3 authors
·
Apr 20

BLADE: Single-view Body Mesh Learning through Accurate Depth Estimation

Single-image human mesh recovery is a challenging task due to the ill-posed nature of simultaneous body shape, pose, and camera estimation. Existing estimators work well on images taken from afar, but they break down as the person moves close to the camera. Moreover, current methods fail to achieve both accurate 3D pose and 2D alignment at the same time. Error is mainly introduced by inaccurate perspective projection heuristically derived from orthographic parameters. To resolve this long-standing challenge, we present our method BLADE which accurately recovers perspective parameters from a single image without heuristic assumptions. We start from the inverse relationship between perspective distortion and the person's Z-translation Tz, and we show that Tz can be reliably estimated from the image. We then discuss the important role of Tz in accurate human mesh recovery from close-range images. Finally, we show that, once Tz and the 3D human mesh are estimated, one can accurately recover the focal length and full 3D translation. Extensive experiments on standard benchmarks and real-world close-range images show that our method is the first to accurately recover projection parameters from a single image, and consequently attain state-of-the-art accuracy on 3D pose estimation and 2D alignment for a wide range of images. https://research.nvidia.com/labs/amri/projects/blade/

  • 8 authors
·
Dec 11, 2024

TokenHMR: Advancing Human Mesh Recovery with a Tokenized Pose Representation

We address the problem of regressing 3D human pose and shape from a single image, with a focus on 3D accuracy. The current best methods leverage large datasets of 3D pseudo-ground-truth (p-GT) and 2D keypoints, leading to robust performance. With such methods, we observe a paradoxical decline in 3D pose accuracy with increasing 2D accuracy. This is caused by biases in the p-GT and the use of an approximate camera projection model. We quantify the error induced by current camera models and show that fitting 2D keypoints and p-GT accurately causes incorrect 3D poses. Our analysis defines the invalid distances within which minimizing 2D and p-GT losses is detrimental. We use this to formulate a new loss Threshold-Adaptive Loss Scaling (TALS) that penalizes gross 2D and p-GT losses but not smaller ones. With such a loss, there are many 3D poses that could equally explain the 2D evidence. To reduce this ambiguity we need a prior over valid human poses but such priors can introduce unwanted bias. To address this, we exploit a tokenized representation of human pose and reformulate the problem as token prediction. This restricts the estimated poses to the space of valid poses, effectively providing a uniform prior. Extensive experiments on the EMDB and 3DPW datasets show that our reformulated keypoint loss and tokenization allow us to train on in-the-wild data while improving 3D accuracy over the state-of-the-art. Our models and code are available for research at https://tokenhmr.is.tue.mpg.de.
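To make the idea of penalising gross errors but not small ones concrete, here is a hedged sketch of a threshold-adaptive keypoint loss. The threshold tau, the down-weighting rule, and the tensor shapes are our own assumptions and may differ from the paper's TALS formulation.

```python
import torch

# Hedged illustration only: errors below a threshold are down-weighted so that
# small 2D/p-GT residuals are not forced to zero, while gross errors are still
# penalised. `tau` and `small_weight` are invented for this sketch.
def thresholded_keypoint_loss(pred, target, tau=0.05, small_weight=0.1):
    err = (pred - target).pow(2).sum(dim=-1).sqrt()        # per-keypoint error
    weight = torch.where(err > tau,
                         torch.ones_like(err),
                         torch.full_like(err, small_weight))
    return (weight * err).mean()

pred, target = torch.rand(16, 24, 2), torch.rand(16, 24, 2)  # (batch, joints, xy)
print(thresholded_keypoint_loss(pred, target).item())
```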

  • 5 authors
·
Apr 25, 2024

AstroM^3: A self-supervised multimodal model for astronomy

While machine-learned models are now routinely employed to facilitate astronomical inquiry, model inputs tend to be limited to a primary data source (namely images or time series) and, in the more advanced approaches, some metadata. Yet with the growing use of wide-field, multiplexed observational resources, individual sources of interest often have a broad range of observational modes available. Here we construct an astronomical multimodal dataset and propose AstroM^3, a self-supervised pre-training approach that enables a model to learn from multiple modalities simultaneously. Specifically, we extend the CLIP (Contrastive Language-Image Pretraining) model to a trimodal setting, allowing the integration of time-series photometry data, spectra, and astrophysical metadata. In a fine-tuning supervised setting, our results demonstrate that CLIP pre-training improves classification performance for time-series photometry, where accuracy increases from 84.6% to 91.5%. Furthermore, CLIP boosts classification accuracy by up to 12.6% when the availability of labeled data is limited, showing the effectiveness of leveraging larger corpora of unlabeled data. In addition to fine-tuned classification, we can use the trained model in other downstream tasks that are not explicitly contemplated during the construction of the self-supervised model. In particular, we show the efficacy of using the learned embeddings for misclassification identification, similarity search, and anomaly detection. One surprising highlight is the "rediscovery" of Mira subtypes and two Rotational variable subclasses using manifold learning and dimension reduction algorithms. To our knowledge this is the first construction of an n>2 mode model in astronomy. Extensions to n>3 modes are naturally anticipated with this approach.
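A CLIP-style trimodal objective can be sketched as the average of pairwise contrastive losses. The snippet below is an illustrative assumption of that setup (the per-modality embedding networks are omitted), not the AstroM^3 implementation.

```python
import torch
import torch.nn.functional as F

# Symmetric InfoNCE loss between two batches of embeddings, as in CLIP.
def clip_loss(a, b, temperature=0.07):
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.T / temperature
    targets = torch.arange(a.shape[0])
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

# Placeholder embeddings for photometry, spectra, and metadata (assumed names).
photo, spec, meta = (torch.randn(32, 128) for _ in range(3))
loss = (clip_loss(photo, spec) + clip_loss(photo, meta) + clip_loss(spec, meta)) / 3
print(loss.item())
```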

  • 2 authors
·
Nov 13, 2024

CalibFormer: A Transformer-based Automatic LiDAR-Camera Calibration Network

The fusion of LiDARs and cameras has been increasingly adopted in autonomous driving for perception tasks. The performance of such fusion-based algorithms largely depends on the accuracy of sensor calibration, which is challenging due to the difficulty of identifying common features across different data modalities. Previously, many calibration methods involved specific targets and/or manual intervention, which has proven to be cumbersome and costly. Learning-based online calibration methods have been proposed, but their performance is barely satisfactory in most cases. These methods usually suffer from issues such as sparse feature maps, unreliable cross-modality association, inaccurate calibration parameter regression, etc. In this paper, to address these issues, we propose CalibFormer, an end-to-end network for automatic LiDAR-camera calibration. We aggregate multiple layers of camera and LiDAR image features to achieve high-resolution representations. A multi-head correlation module is utilized to identify correlations between features more accurately. Lastly, we employ transformer architectures to estimate accurate calibration parameters from the correlation information. Our method achieved a mean translation error of 0.8751 cm and a mean rotation error of 0.0562 ^{circ} on the KITTI dataset, surpassing existing state-of-the-art methods and demonstrating strong robustness, accuracy, and generalization capabilities.

  • 5 authors
·
Nov 26, 2023

The Stellar Populations and Rest-Frame Colors of Star-Forming Galaxies at z approx 8: Exploring the Impact of Filter Choice and Star Formation History Assumption with JADES

Our understanding of the physical properties of star-forming galaxies during the Epoch of Reionization (EoR, at z > 6) suffers from degeneracies among the apparent properties of the stars, the nebular gas, and the dust. These degeneracies are most prominent with photometry, which has insufficient (1) spectral resolution and (2) rest-frame spectral coverage. We explore ways to break these degeneracies with a sample of N = 22 high-redshift star-forming galaxies at 7 < z_{phot} leq 9, using some of the deepest existing imaging from JWST/NIRCam and JWST/MIRI with JADES. Key to this study is the imaging from JWST/MIRI at 7.7 mum, which provides coverage of the rest-frame I-band at the observed redshifts. We infer stellar population properties and rest-frame colors using a variety of filter sets and star formation history assumptions to explore the impact of these choices. Evaluating these quantities both with and without the 7.7 mum data point shows that dense spectral coverage with JWST/NIRCam (eight or more filters, including at least one medium-band) can compensate for lacking the rest-frame I-band coverage for the vast majority (approx 80%) of our sample. Furthermore, these galaxy properties are most consistently determined by assuming the delayed-tau star formation history, which provides the smallest offsets and scatters around these offsets when including JWST/MIRI. Within extragalactic surveys like JADES and CEERS, our findings suggest that robust characterization of the stellar population properties and rest-frame colors for high-redshift star-forming galaxies is possible with JWST/NIRCam alone at z approx 8.

  • 33 authors
·
Jun 2

First Light And Reionisation Epoch Simulations (FLARES) XVI: Size Evolution of Massive Dusty Galaxies at Cosmic Dawn from UV to IR

We use the First Light And Reionisation Epoch Simulations (FLARES) to study the evolution of the rest-frame ultraviolet (UV) and far-infrared (FIR) sizes for a statistical sample of massive (gtrsim10^{9}M_{odot}) high-redshift galaxies (z in [5,10]). Galaxies are post-processed using the SKIRT radiative transfer code, to self-consistently obtain the full spectral energy distribution and surface brightness distribution. We create mock observations of the galaxies for the Near Infrared Camera (NIRCam) to study the rest-frame UV 1500 Å morphology. We also generate mock rest-frame FIR (50 mum) photometry and mock ALMA (158 mum) (0.01"-0.03" and approx0.3" angular resolution) observations to study the dust-continuum. We find the effect of dust on observed sizes reduces with increasing wavelength from the UV to optical (sim0.6 times the UV at 0.4mum), with no evolution in FIR sizes. Observed sizes vary within 0.4-1.2 times the intrinsic sizes at different signal-to-noise ratios (SNR = 5-20) across redshifts. The effect of PSF and noise makes bright structures prominent, whereas fainter regions blend with noise, leading to an underestimation (factor of 0.4-0.8) of sizes at SNR=5. At SNR=15-20, the underestimation reduces (factor of 0.6-0.9) at z=5-8 but due to PSF, at z=9-10, bright cores are dominant, resulting in an overestimation (factor of 1.0-1.2). For ALMA, low-resolution sizes are affected by noise, which acts as extended emission. The size evolution in UV broadly agrees with current observational samples and other simulations. This work is one of the first to analyse the panchromatic sizes of a statistically significant sample of simulated high-redshift galaxies, complementing a growing body of research highlighting the importance of conducting an equivalent comparison between observed galaxies and their simulated counterparts in the early Universe.

  • 12 authors
·
Aug 20, 2024

Euclid. II. The VIS Instrument

This paper presents the specification, design, and development of the Visible Camera (VIS) on the ESA Euclid mission. VIS is a large optical-band imager with a field of view of 0.54 deg^2 sampled at 0.1" with an array of 609 Megapixels and spatial resolution of 0.18". It will be used to survey approximately 14,000 deg^2 of extragalactic sky to measure the distortion of galaxies in the redshift range z=0.1-1.5 resulting from weak gravitational lensing, one of the two principal cosmology probes of Euclid. With photometric redshifts, the distribution of dark matter can be mapped in three dimensions, and, from how this has changed with look-back time, the nature of dark energy and theories of gravity can be constrained. The entire VIS focal plane will be transmitted to provide the largest images of the Universe from space to date, reaching m_AB>24.5 with S/N >10 in a single broad I_E~(r+i+z) band over a six year survey. The particularly challenging aspects of the instrument are the control and calibration of observational biases, which lead to stringent performance requirements and calibration regimes. With its combination of spatial resolution, calibration knowledge, depth, and area covering most of the extra-Galactic sky, VIS will also provide a legacy data set for many other fields. This paper discusses the rationale behind the VIS concept and describes the instrument design and development before reporting the pre-launch performance derived from ground calibrations and brief results from the in-orbit commissioning. VIS should reach fainter than m_AB=25 with S/N>10 for galaxies of full-width half-maximum of 0.3" in a 1.3" diameter aperture over the Wide Survey, and m_AB>26.4 for a Deep Survey that will cover more than 50 deg^2. The paper also describes how VIS works with the other Euclid components of survey, telescope, and science data processing to extract the cosmological information.

  • 435 authors
·
May 22, 2024

Pre-perihelion Development of Interstellar Comet 3I/ATLAS

We describe pre-perihelion optical observations of interstellar comet 3I/ATLAS taken during July - September 2025 using the Nordic Optical Telescope. Fixed aperture photometry of the comet is well described by a power law function of heliocentric distance, rH, with the exponent (``index") n = 3.8+/-0.3 across the 4.6 au to 1.8 au distance range (phase function 0.04+/-0.02 magnitude/degree assumed). This indicates that the dust production rates vary in proportion to rH**(-1.8+/-0.3). An rH**(-2) variation is expected of a strongly volatile material, and consistent with independent spectroscopic observations showing that carbon dioxide is the primary driver of activity. The measured heliocentric index is unremarkable in the context of solar system comets, for which n is widely dispersed, and provides no basis on which to describe 3I as either dynamically old (thermally processed) or new (pristine). The morphology of the comet changes from a Sun-facing dust fan in the early 2025 July observations, to one dominated by an antisolar dust tail at later dates. We attribute the delayed emergence of the tail to the large size (effective radius 0.1 mm) and slow ejection (5 m/s) of the optically dominant dust particles, and their consequently sluggish response to solar radiation pressure. Small (micron-sized) particles may be present but not in numbers sufficient to dominate the scattering cross-section. Their relative depletion possibly reflects interparticle cohesion, which binds small particles more effectively than large ones. A similar preponderance of 0.1 mm grains was reported in 2I/Borisov. However, 2I differed from 3I in having a much smaller (asteroid-like) heliocentric index, n = 1.9+/-0.1. Dust production rates in 3I are 180 kg/s at 2 au, compared with 70 kg/s in 2I/Borisov at the same distance.
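The heliocentric index n quoted above can be estimated as the slope of a log-log fit of aperture flux against heliocentric distance. The sketch below uses invented flux values purely to illustrate the fit, not the paper's photometry.

```python
import numpy as np

# Toy log-log fit for the heliocentric brightness index n, where the
# fixed-aperture flux scales roughly as F ∝ rH**(-n) after distance and
# phase corrections. All values below are made up for illustration.
rng = np.random.default_rng(0)
rH = np.array([4.6, 4.0, 3.4, 2.8, 2.2, 1.8])                        # au
flux = 2e-14 * rH ** -3.8 * (1 + 0.05 * rng.standard_normal(rH.size))

slope, intercept = np.polyfit(np.log10(rH), np.log10(flux), 1)
print(f"heliocentric index n ≈ {-slope:.2f}")                        # close to the injected 3.8
```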

  • 2 authors
·
Oct 21

Euclid Quick Data Release (Q1): From images to multiwavelength catalogues: the Euclid MERge Processing Function

The Euclid satellite is an ESA mission that was launched in July 2023. Euclid is working in its regular observing mode with the target of observing an area of 14,000 deg^2 with two instruments, the Visible Camera (VIS) and the Near IR Spectrometer and Photometer (NISP) down to I_{rm E} = 24.5 mag (10 sigma) in the Euclid Wide Survey. Ground-based imaging data in the ugriz bands complement the Euclid data to enable photo-z determination and VIS PSF modeling for weak lensing analysis. Euclid investigates the distance-redshift relation and the evolution of cosmic structures by measuring shapes and redshifts of galaxies and clusters of galaxies out to z sim 2. Generating the multi-wavelength catalogues from Euclid and ground-based data is an essential part of the Euclid data processing system. In the framework of the Euclid Science Ground Segment (SGS), the aim of the MER Processing Function (PF) pipeline is to detect objects in the Euclid imaging data, measure their properties, and MERge them into a single multi-wavelength catalogue. The MER PF pipeline performs source detection on both visible (VIS) and near-infrared (NIR) images and offers four different photometric measurements: Kron total flux, aperture photometry on PSF-matched images, template fitting photometry, and Sérsic fitting photometry. Furthermore, the MER PF pipeline measures a set of ancillary quantities, spanning from morphology to quality flags, to better characterise all detected sources. In this paper, we show how the MER PF pipeline is designed, detailing its main steps, and we show that the pipeline products meet the tight requirements that Euclid aims to achieve on photometric accuracy. We also present the other measurements (e.g. morphology) that are included in the OU-MER output catalogues and we list all output products coming out of the MER PF pipeline.
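As a rough illustration of one of the four measurement types listed above, the snippet below performs fixed-aperture photometry with photutils on a synthetic image. The positions, aperture radius, and background handling are assumptions and do not reproduce the MER pipeline's PSF matching or measurement details.

```python
import numpy as np
from photutils.aperture import CircularAperture, aperture_photometry

# Hedged sketch of aperture photometry on a synthetic image; the fake source,
# aperture radius, and crude background subtraction are assumptions.
image = np.random.default_rng(0).normal(100.0, 5.0, size=(200, 200))
image[80:83, 120:123] += 500.0                        # inject a fake source

positions = [(121.0, 81.0), (40.0, 40.0)]             # (x, y) pixel coordinates
apertures = CircularAperture(positions, r=5.0)
table = aperture_photometry(image - np.median(image), apertures)
print(table["aperture_sum"])                          # first aperture captures the source
```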

  • 348 authors
·
Mar 19

Exploring the Current Star Formation Rate and Nebula Ratio of Star-Formation Galaxies at z < 0.4 with FADO

The star formation rate (SFR) is a crucial astrophysical tracer for understanding the formation and evolution of galaxies, determining the interaction between interstellar medium properties and star formation, and thereby inferring the evolution of the cosmic star formation history and cosmic energy density. The mainstream approach to studying the stellar properties of galaxies relies on pure stellar population synthesis models. However, these methods fail to account for the contamination of the SFR caused by nebular gas radiation. Recent studies have indicated that nebular radiation contamination is non-negligible in galaxies with intense star-forming activity and at relatively high redshifts, potentially leading to overestimated stellar masses. However, targeted research is currently limited, particularly for galaxies at redshifts z < 0.4. In this work, 6,511 star-forming galaxies are selected from SDSS-DR18 and their spectra are fitted with FADO, a tool that can exclude nebular radiation contributions in the spectral fitting. A tentative study is carried out to explore the SFR of these galaxies. The results indicate that the median H_alpha flux obtained from FADO fitting differs from that obtained using the pure stellar population synthesis tool qsofitmore by approximately 0.034 dex. Preliminary evidence suggests that the average nebula ratio increases with redshift. Additionally, we investigated the impact of stellar mass on the nebula ratio at low to moderate redshifts. By comparing the two spectral fitting software packages, we found that although the contribution of nebular emission is minimal, it generally shows an increasing trend with redshift. We anticipate that by combining optical and near-infrared spectral data, the influence of nebulae may become more prominent in star-forming galaxies at higher redshifts (e.g., up to z sim 2).

  • 5 authors
·
Apr 11, 2024

Size and Shape Constraints of (486958) Arrokoth from Stellar Occultations

We present the results from four stellar occultations by (486958) Arrokoth, the flyby target of the New Horizons extended mission. Three of the four efforts led to positive detections of the body, and all constrained the presence of rings and other debris, finding none. Twenty-five mobile stations were deployed for 2017 June 3 and augmented by fixed telescopes. There were no positive detections from this effort. The event on 2017 July 10 was observed by SOFIA with one very short chord. Twenty-four deployed stations on 2017 July 17 resulted in five chords that clearly showed a complicated shape consistent with a contact binary with rough dimensions of 20 by 30 km for the overall outline. A visible albedo of 10% was derived from these data. Twenty-two systems were deployed for the fourth event on 2018 Aug 4 and resulted in two chords. The combination of the occultation data and the flyby results provides a significant refinement of the rotation period, now estimated to be 15.9380 pm 0.0005 hours. The occultation data also provided high-precision astrometric constraints on the position of the object that were crucial for supporting the navigation for the New Horizons flyby. This work demonstrates an effective method for obtaining detailed size and shape information and probing for rings and dust on distant Kuiper Belt objects as well as being an important source of positional data that can aid in spacecraft navigation that is particularly useful for small and distant bodies.

  • 133 authors
·
Dec 31, 2019

Colors and Dynamics of a Near-Sun Orbital Asteroid Family: 2021 PH27 and 2025 GN1

We observed the dynamically similar near-Sun asteroids 2021 PH27 and 2025 GN1 for their optical colors. These objects have the lowest known semi-major axes of any asteroids. 2021 PH27 has the largest general relativistic effects of any known solar system object. The small semi-major axis and very close passage to the Sun suggest the extreme thermal and gravitational environment should highly modify these asteroids' surfaces. From g', r', i' and z'-band imaging, we find the colors of 2021 PH27 to be between the two major asteroid types, the S and C classes (g'-r'= 0.58 +- 0.02, r'-i'=0.12 +- 0.02 and i'-z'=-0.08 +- 0.05 mags). With a spectral slope of 6.8 +-0.03 percent per 100nm, 2021 PH27 is an X-type asteroid and requires albedo or spectral features to further identify its composition. We find the dynamically similar 2025 GN1 also has very similar colors (g'-r'=0.55 +-0.06 and r'-i'=0.14 +-0.04) to 2021 PH27, suggesting these objects are fragments from a once larger parent asteroid or 2021 PH27 is shedding material. The colors are not blue like some other near-Sun asteroids such as 3200 Phaethon that have been interpreted to be from the loss of reddening substances from the extreme temperatures. There is no evidence of activity or a large amplitude period for 2021 PH27, whereas 2025 GN1 might have a more significant rotational light curve. 2025 GN1 may have a very close encounter or hit Venus in about 2155 years and likely separated from 2021 PH27 in about the last 10 kyrs.

  • 9 authors
·
Apr 22

Sparse-view Pose Estimation and Reconstruction via Analysis by Generative Synthesis

Inferring the 3D structure underlying a set of multi-view images typically requires solving two co-dependent tasks -- accurate 3D reconstruction requires precise camera poses, and predicting camera poses relies on (implicitly or explicitly) modeling the underlying 3D. The classical framework of analysis by synthesis casts this inference as a joint optimization seeking to explain the observed pixels, and recent instantiations learn expressive 3D representations (e.g., Neural Fields) with gradient-descent-based pose refinement of initial pose estimates. However, given a sparse set of observed views, the observations may not provide sufficient direct evidence to obtain complete and accurate 3D. Moreover, large errors in pose estimation may not be easily corrected and can further degrade the inferred 3D. To allow robust 3D reconstruction and pose estimation in this challenging setup, we propose SparseAGS, a method that adapts this analysis-by-synthesis approach by: a) including novel-view-synthesis-based generative priors in conjunction with photometric objectives to improve the quality of the inferred 3D, and b) explicitly reasoning about outliers and using a discrete search with a continuous optimization-based strategy to correct them. We validate our framework across real-world and synthetic datasets in combination with several off-the-shelf pose estimation systems as initialization. We find that it significantly improves the base systems' pose accuracy while yielding high-quality 3D reconstructions that outperform the results from current multi-view reconstruction baselines.

  • 2 authors
·
Dec 4, 2024

Analyzing Data Quality and Decay in Mega-Constellations: A Physics-Informed Machine Learning Approach

In the era of mega-constellations, the need for accurate and publicly available information has become fundamental for satellite operators to guarantee the safety of spacecraft and the Low Earth Orbit (LEO) space environment. This study critically evaluates the accuracy and reliability of publicly available ephemeris data for a LEO mega-constellation - Starlink. The goal of this work is twofold: (i) compare and analyze the quality of the data against high-precision numerical propagation. (ii) Leverage Physics-Informed Machine Learning to extract relevant satellite quantities, such as non-conservative forces, during the decay process. By analyzing two months of real orbital data for approximately 1500 Starlink satellites, we identify discrepancies between high-precision numerical algorithms and the published ephemerides, recognizing the use of simplified dynamics at fixed thresholds, planned maneuvers, and limitations in uncertainty propagations. Furthermore, we compare data obtained from multiple sources to track and analyze deorbiting satellites over the same period. Empirically, we extract the acceleration profile of satellites during deorbiting and provide insights relating to the effects of non-conservative forces during reentry. For non-deorbiting satellites, the position Root Mean Square Error (RMSE) was approximately 300 m, while for deorbiting satellites it increased to about 600 m. Through this in-depth analysis, we highlight potential limitations in publicly available data for accurate and robust Space Situational Awareness (SSA), and importantly, we propose a data-driven model of satellite decay in mega-constellations.

  • 3 authors
·
Oct 13

Quantifying the Poor Purity and Completeness of Morphological Samples Selected by Galaxy Colour

The galaxy population is strongly bimodal in both colour and morphology, and the two measures correlate strongly, with most blue galaxies being late-types (spirals) and most early-types, typically ellipticals, being red. This observation has led to the use of colour as a convenient selection criterion to make samples which are then labelled by morphology. Such use of colour as a proxy for morphology results in necessarily impure and incomplete samples. In this paper, we make use of the morphological labels produced by Galaxy Zoo to measure how incomplete and impure such samples are, considering optical (ugriz), NUV and NIR (JHK) bands. The best single-colour optical selection is found using a threshold of g-r = 0.742, but this still results in a sample where only 56% of red galaxies are smooth and 56% of smooth galaxies are red. Use of the NUV gives some improvement over purely optical bands, particularly for late-types, but still results in low purity/completeness for early-types. No significant improvement is found by adding NIR bands. With any two bands, including NUV, a sample of early-types with greater than two-thirds purity cannot be constructed. Advances in quantitative galaxy morphologies have made colour-morphology proxy selections largely unnecessary going forward; where such assumptions are still required, we recommend studies carefully consider the implications of sample incompleteness/impurity.
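Purity and completeness for a colour-selected morphological sample follow directly from the confusion counts. The sketch below uses made-up galaxies and the g-r = 0.742 threshold quoted above purely to show the bookkeeping.

```python
import numpy as np

# Toy purity/completeness calculation for a colour-selected "early-type"
# sample; colours and morphology labels below are invented.
g_r    = np.array([0.9, 0.8, 0.6, 0.75, 0.5, 0.85, 0.7, 0.95])
smooth = np.array([1,   1,   0,   0,    0,   1,    1,   0], dtype=bool)

selected = g_r > 0.742                                       # "red" selection
purity       = (selected & smooth).sum() / selected.sum()    # fraction of red that is smooth
completeness = (selected & smooth).sum() / smooth.sum()      # fraction of smooth that is red
print(f"purity = {purity:.2f}, completeness = {completeness:.2f}")
```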

  • 10 authors
·
Dec 8, 2021

The Low Mass Ratio Overcontact Binary GV Leonis and Its Circumbinary Companion

Photometric and spectroscopic observations of GV Leo were performed from 2017 to 2024. The light curves show a flat bottom at the primary eclipse and the conventional O'Connell effect. The echelle spectra reveal that the effective temperature and rotation velocity of the more massive secondary are T_{rm eff,2} = 5220pm120 K and v_2 sin i = 223pm40 km s^{-1}, respectively. Our binary modeling indicates that the program target is a W-subclass contact binary with a mass ratio of q = 5.48, an inclination angle of i = 81^circ.68, a temperature difference of (T_{rm eff,1}-T_{rm eff,2}) = 154 K, and a filling factor of f = 36 \%. The light asymmetries were reasonably modeled by a dark starspot on the secondary's photosphere. Including our 26 minimum epochs, 84 times of minimum light were used to investigate the orbital period of the system. We found that the eclipse times of GV Leo have varied by a sinusoid with a period of 14.9 years and a semi-amplitude of 0.0076 days superimposed on a downward parabola. The periodic modulation is interpreted as a light time effect produced by an unseen outer tertiary with a minimum mass of 0.26 M_odot, while the parabolic component is thought to be a combination of mass transfer (secondary to primary) and angular momentum loss driven by magnetic braking. The circumbinary tertiary would have caused the eclipsing pair of GV Leo to evolve into its current short-period contact state by removing angular momentum from the primordial widish binary.
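The eclipse-timing analysis described above amounts to fitting a parabola plus a sinusoid to the O-C residuals. The hedged sketch below fits such a model to synthetic timings; all parameter names and values are invented, not the GV Leo measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch of an O-C (eclipse-timing) model: a parabola for the secular
# period change plus a sinusoid for the light-time effect from a tertiary.
def o_minus_c(E, dT0, dP, q, amp, P_mod, phi):
    return dT0 + dP * E + q * E**2 + amp * np.sin(2 * np.pi * E / P_mod + phi)

rng = np.random.default_rng(0)
E = np.linspace(0, 30000, 84)                                  # cycle numbers (synthetic)
oc = -1e-10 * E**2 + 0.0076 * np.sin(2 * np.pi * E / 20000) + rng.normal(0, 1e-4, E.size)

popt, _ = curve_fit(o_minus_c, E, oc, p0=(0, 0, -1e-10, 0.005, 20000, 0))
print(f"light-time-effect semi-amplitude ≈ {abs(popt[3]):.4f} d")
```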

  • 5 authors
·
Apr 13

LighthouseGS: Indoor Structure-aware 3D Gaussian Splatting for Panorama-Style Mobile Captures

Recent advances in 3D Gaussian Splatting (3DGS) have enabled real-time novel view synthesis (NVS) with impressive quality in indoor scenes. However, achieving high-fidelity rendering requires meticulously captured images covering the entire scene, limiting accessibility for general users. We aim to develop a practical 3DGS-based NVS framework using simple panorama-style motion with a handheld camera (e.g., mobile device). While convenient, this rotation-dominant motion and narrow baseline make accurate camera pose and 3D point estimation challenging, especially in textureless indoor scenes. To address these challenges, we propose LighthouseGS, a novel framework inspired by the lighthouse-like sweeping motion of panoramic views. LighthouseGS leverages rough geometric priors, such as mobile device camera poses and monocular depth estimation, and utilizes the planar structures often found in indoor environments. We present a new initialization method called plane scaffold assembly to generate consistent 3D points on these structures, followed by a stable pruning strategy to enhance geometry and optimization stability. Additionally, we introduce geometric and photometric corrections to resolve inconsistencies from motion drift and auto-exposure in mobile devices. Tested on collected real and synthetic indoor scenes, LighthouseGS delivers photorealistic rendering, surpassing state-of-the-art methods and demonstrating the potential for panoramic view synthesis and object placement.

  • 7 authors
·
Jul 8

MERLiN: Single-Shot Material Estimation and Relighting for Photometric Stereo

Photometric stereo typically demands intricate data acquisition setups involving multiple light sources to recover surface normals accurately. In this paper, we propose MERLiN, an attention-based hourglass network that integrates single image-based inverse rendering and relighting within a single unified framework. We evaluate the performance of photometric stereo methods using these relit images and demonstrate how they can circumvent the underlying challenge of complex data acquisition. Our physically-based model is trained on a large synthetic dataset containing complex shapes with spatially varying BRDF and is designed to handle indirect illumination effects to improve material reconstruction and relighting. Through extensive qualitative and quantitative evaluation, we demonstrate that the proposed framework generalizes well to real-world images, achieving high-quality shape, material estimation, and relighting. We assess these synthetically relit images over photometric stereo benchmark methods for their physical correctness and resulting normal estimation accuracy, paving the way towards single-shot photometric stereo through physically-based relighting. This work allows us to address the single image-based inverse rendering problem holistically, applying well to both synthetic and real data and taking a step towards mitigating the challenge of data acquisition in photometric stereo.

  • 3 authors
·
Sep 1, 2024

Euclid Quick Data Release (Q1). Active galactic nuclei identification using diffusion-based inpainting of Euclid VIS images

Light emission from galaxies exhibits diverse brightness profiles, influenced by factors such as galaxy type, structural features and interactions with other galaxies. Elliptical galaxies feature more uniform light distributions, while spiral and irregular galaxies have complex, varied light profiles due to their structural heterogeneity and star-forming activity. In addition, galaxies with an active galactic nucleus (AGN) feature intense, concentrated emission from gas accretion around supermassive black holes, superimposed on regular galactic light, while quasi-stellar objects (QSO) are the extreme case of the AGN emission dominating the galaxy. The challenge of identifying AGN and QSO has been discussed many times in the literature, often requiring multi-wavelength observations. This paper introduces a novel approach to identify AGN and QSO from a single image. Diffusion models have been recently developed in the machine-learning literature to generate realistic-looking images of everyday objects. Utilising the spatial resolving power of the Euclid VIS images, we created a diffusion model trained on one million sources, without using any source pre-selection or labels. The model learns to reconstruct light distributions of normal galaxies, since the population is dominated by them. We condition the prediction of the central light distribution by masking the central few pixels of each source and reconstruct the light according to the diffusion model. We further use this prediction to identify sources that deviate from this profile by examining the reconstruction error of the few central pixels regenerated in each source's core. Our approach, solely using VIS imaging, features high completeness compared to traditional methods of AGN and QSO selection, including optical, near-infrared, mid-infrared, and X-rays.
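The core of the method is a reconstruction-error score on the masked central pixels. In the sketch below a median filter stands in for the trained diffusion model purely so the snippet runs, and the cutouts are synthetic; none of this reproduces the paper's model.

```python
import numpy as np
from scipy.ndimage import median_filter

# Sketch of the scoring step: mask the central pixels of a cutout, reconstruct
# them with a model, and use the reconstruction error of the core as an
# AGN/QSO indicator. The median filter is only a stand-in "inpainter".
def core_reconstruction_error(cutout, half=2):
    c = cutout.shape[0] // 2
    core = (slice(c - half, c + half + 1),) * 2
    masked = cutout.copy()
    masked[core] = 0.0                                 # hide the central pixels
    recon = median_filter(masked, size=5)              # stand-in reconstruction
    return float(np.mean((recon[core] - cutout[core]) ** 2))

r2 = np.sum((np.indices((31, 31)) - 15) ** 2, axis=0)
galaxy = np.exp(-0.05 * r2)                            # smooth "normal" profile
agn = galaxy.copy(); agn[15, 15] += 5.0                # bright unresolved core
print(core_reconstruction_error(galaxy), core_reconstruction_error(agn))
```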

  • 274 authors
·
Mar 19

Cross-modal feature fusion for robust point cloud registration with ambiguous geometry

Point cloud registration has seen significant advancements with the application of deep learning techniques. However, existing approaches often overlook the potential of integrating radiometric information from RGB images. This limitation reduces their effectiveness in aligning point cloud pairs, especially in regions where geometric data alone is insufficient. When used effectively, radiometric information can enhance the registration process by providing context that is missing from purely geometric data. In this paper, we propose CoFF, a novel Cross-modal Feature Fusion method that utilizes both point cloud geometry and RGB images for pairwise point cloud registration. Assuming that the co-registration between point clouds and RGB images is available, CoFF explicitly addresses the challenges where geometric information alone is unclear, such as in regions with symmetric similarity or planar structures, through a two-stage fusion of 3D point cloud features and 2D image features. It incorporates a cross-modal feature fusion module that assigns pixel-wise image features to 3D input point clouds to enhance learned 3D point features, and integrates patch-wise image features with superpoint features to improve the quality of coarse matching. This is followed by a coarse-to-fine matching module that accurately establishes correspondences using the fused features. We extensively evaluate CoFF on four common datasets: 3DMatch, 3DLoMatch, IndoorLRS, and the recently released ScanNet++ datasets. In addition, we assess CoFF on specific subset datasets containing geometrically ambiguous cases. Our experimental results demonstrate that CoFF achieves state-of-the-art registration performance across all benchmarks, including remarkable registration recalls of 95.9% and 81.6% on the widely-used 3DMatch and 3DLoMatch datasets, respectively...(Truncated to fit arXiv abstract length)

  • 6 authors
·
May 19

YOCO: You Only Calibrate Once for Accurate Extrinsic Parameter in LiDAR-Camera Systems

In a multi-sensor fusion system composed of cameras and LiDAR, precise extrinsic calibration contributes to the system's long-term stability and accurate perception of the environment. However, methods based on extracting and registering corresponding points still face challenges in terms of automation and precision. This paper proposes a novel fully automatic extrinsic calibration method for LiDAR-camera systems that circumvents the need for corresponding point registration. In our approach, a novel algorithm to extract the required LiDAR correspondence points is proposed. This method can effectively filter out irrelevant points by computing the orientation of plane point clouds and extracting points by applying distance- and density-based thresholds. We avoid the need for corresponding point registration by introducing extrinsic parameters between the LiDAR and camera into the projection of extracted points and constructing co-planar constraints. These parameters are then optimized to solve for the extrinsics. We validated our method across multiple sets of LiDAR-camera systems. In synthetic experiments, our method demonstrates superior performance compared to current calibration techniques. Real-world data experiments further confirm the precision and robustness of the proposed algorithm, with average rotation and translation calibration errors between LiDAR and camera of less than 0.05 degrees and 0.015 m, respectively. This method enables automatic and accurate extrinsic calibration in a single step, emphasizing the potential of calibration algorithms beyond using corresponding point registration to enhance the automation and precision of LiDAR-camera system calibration.

  • 4 authors
·
Jul 25, 2024

TVG-SLAM: Robust Gaussian Splatting SLAM with Tri-view Geometric Constraints

Recent advances in 3D Gaussian Splatting (3DGS) have enabled RGB-only SLAM systems to achieve high-fidelity scene representation. However, the heavy reliance of existing systems on photometric rendering loss for camera tracking undermines their robustness, especially in unbounded outdoor environments with severe viewpoint and illumination changes. To address these challenges, we propose TVG-SLAM, a robust RGB-only 3DGS SLAM system that leverages a novel tri-view geometry paradigm to ensure consistent tracking and high-quality mapping. We introduce a dense tri-view matching module that aggregates reliable pairwise correspondences into consistent tri-view matches, forming robust geometric constraints across frames. For tracking, we propose Hybrid Geometric Constraints, which leverage tri-view matches to construct complementary geometric cues alongside photometric loss, ensuring accurate and stable pose estimation even under drastic viewpoint shifts and lighting variations. For mapping, we propose a new probabilistic initialization strategy that encodes geometric uncertainty from tri-view correspondences into newly initialized Gaussians. Additionally, we design a Dynamic Attenuation of Rendering Trust mechanism to mitigate tracking drift caused by mapping latency. Experiments on multiple public outdoor datasets show that our TVG-SLAM outperforms prior RGB-only 3DGS-based SLAM systems. Notably, in the most challenging dataset, our method improves tracking robustness, reducing the average Absolute Trajectory Error (ATE) by 69.0\% while achieving state-of-the-art rendering quality. The implementation of our method will be released as open-source.

  • 7 authors
·
Jun 29

AstroMLab 1: Who Wins Astronomy Jeopardy!?

We present a comprehensive evaluation of proprietary and open-weights large language models using the first astronomy-specific benchmarking dataset. This dataset comprises 4,425 multiple-choice questions curated from the Annual Review of Astronomy and Astrophysics, covering a broad range of astrophysical topics. Our analysis examines model performance across various astronomical subfields and assesses response calibration, crucial for potential deployment in research environments. Claude-3.5-Sonnet outperforms competitors by up to 4.6 percentage points, achieving 85.0% accuracy. For proprietary models, we observed a universal reduction in cost every 3-to-12 months to achieve a similar score in this particular astronomy benchmark. Open-source models have rapidly improved, with LLaMA-3-70b (80.6%) and Qwen-2-72b (77.7%) now competing with some of the best proprietary models. We identify performance variations across topics, with non-English-focused models generally struggling more in exoplanet-related fields, stellar astrophysics, and instrumentation-related questions. These challenges likely stem from less abundant training data, limited historical context, and rapid recent developments in these areas. This pattern is observed across both open-weights and proprietary models, with regional dependencies evident, highlighting the impact of training data diversity on model performance in specialized scientific domains. Top-performing models demonstrate well-calibrated confidence, with correlations above 0.9 between confidence and correctness, though they tend to be slightly underconfident. The development of fast, low-cost inference for open-weights models presents new opportunities for affordable deployment in astronomy. The rapid progress observed suggests that LLM-driven research in astronomy may become feasible in the near future.

  • 11 authors
·
Jul 15, 2024

Flashlights: An Off-Caustic Lensed Star at Redshift z = 1.26 in Abell 370

We report the discovery of a transient seen in a strongly lensed arc at redshift z_{rm s}=1.2567 in Hubble Space Telescope imaging of the Abell 370 galaxy cluster. The transient is detected at 29.51pm0.14 AB mag in a WFC3/UVIS F200LP difference image made using observations from two different epochs, obtained in the framework of the Flashlights program, and is also visible in the F350LP band (m_{rm F350LP} approx 30.53pm0.76 AB mag). The transient is observed on the negative-parity side of the critical curve at a distance of sim 0.6" from it, greater than previous examples of lensed stars. The large distance from the critical curve yields a significantly smaller macromagnification, but our simulations show that bright, O/B-type supergiants can reach sufficiently high magnifications to be seen at the observed position and magnitude. In addition, the observed transient image is a trailing image with an observer-frame time delay of sim+0.8 days from its expected counterpart, so that any transient lasting for longer than that should have also been seen on the minima side and is thus excluded. This, together with the blue colour we measure for the transient (m_{rm F200LP} - m_{rm F350LP} approx [-0.3,-1.6] AB), rules out most other transient candidates such as (kilo)novae, for example, and makes a lensed star the prime candidate. Assuming the transient is indeed a lensed star as suggested, many more such events should be detected in the near future in cluster surveys with the Hubble Space Telescope and James Webb Space Telescope.

  • 13 authors
·
Nov 2, 2022

KIC 4150611: A quadruply eclipsing heptuple star system with a g-mode period-spacing pattern Asteroseismic modelling of the g-mode period-spacing pattern

In this work, we aim to estimate the stellar parameters of the primary (Aa) by performing asteroseismic analysis on its period-spacing pattern. We use the C-3PO neural network to perform asteroseismic modelling of the g-mode period-spacing pattern of Aa, discussing the interplay of this information with external constraints from spectroscopy (T_{rm eff} and log(g)) and eclipse modelling (R). To estimate the level of uncertainty due to different frequency extraction and pattern identification processes, we consider four different variations on the period-spacing patterns. To better understand the correlations between and the uncertainty structure of our parameter estimates, we also employed a classical, parameter-based MCMC grid search on four different stellar grids. The best-fitting, externally constrained model to the period-spacing pattern arrives at estimates of the stellar properties for Aa of: M=1.51 pm 0.05 M_odot, X_c =0.43 pm 0.04, R=1.66 pm 0.1 R_odot, f_{rm ov}=0.010, Omega_c=1.58 pm 0.01 d^{-1} with rigid rotation to within the measurement errors, log(T_{rm eff})=3.856 pm 0.008 dex, log(g)=4.18 pm 0.04 dex, and log(L)=0.809 pm 0.005 dex, which agree well with previous measurements from eclipse modelling, spectroscopy, and the Gaia DR3 luminosity. We find that the near-core properties of the best-fitting asteroseismic models are consistent with external constraints from eclipse modelling and spectroscopy. Aa appears to be a typical example of a gamma Dor star, fitting well within existing populations. We find that Aa is quasi-rigidly rotating to within the uncertainties, and note that the asteroseismic age estimate for Aa (1100 pm 100 Myr) is considerably older than the young (35 Myr) age implied by previous isochrone fits to the B binary in the literature. Our MCMC parameter-based grid-search agrees well with our pattern-modelling approach.

  • 10 authors
·
Nov 27, 2024

Solar System Elemental Abundances from the Solar Photosphere and CI-Chondrites

Solar photospheric abundances and CI-chondrite compositions are reviewed and updated to obtain representative solar system abundances of the elements and their isotopes. The new photospheric abundances obtained here lead to higher solar metallicity. Full 3D NLTE photospheric analyses are only available for 11 elements. A quality index for analyses is introduced. For several elements, uncertainties remain large. Protosolar mass fractions are H (X = 0.7060), He (Y = 0.2753), and for metals Li to U (Z = 0.0187). The protosolar (C+N)/H agrees within 13% with the ratio for the solar core from the Borexino experiment. Elemental abundances in CI-chondrites were screened by analytical methods, sample sizes, and evaluated using concentration frequency distributions. Aqueously mobile elements (e.g., alkalis, alkaline earths, etc.) often deviate from normal distributions indicating mobilization and/or sequestration into carbonates, phosphates, and sulfates. Revised CI-chondrite abundances of non-volatile elements are similar to earlier estimates. The moderately volatile elements F and Sb are higher than before, as are C, Br and I, whereas the CI-abundances of Hg and N are now significantly lower. The solar system nuclide distribution curves of s-process elements agree within 4% with s-process predictions of Galactic chemical evolution models. P-process nuclide distributions are assessed. No obvious correlation of CI-chondritic to solar elemental abundance ratios with condensation temperatures is observed, nor is there one for ratios of CI-chondrites/solar wind abundances.

  • 3 authors
·
Feb 14

First Light And Reionisation Epoch Simulations (FLARES) XII: The consequences of star-dust geometry on galaxies in the EoR

Using the First Light And Reionisation Epoch Simulations (FLARES), a suite of hydrodynamical simulations, we explore the consequences of a realistic model for star-dust geometry on the observed properties of galaxies. We find that the UV attenuation declines rapidly from the central regions of galaxies, and bright galaxies have spatially extended star formation that suffers less obscuration than their fainter counterparts, demonstrating a non-linear relationship between the UV luminosity and the UV attenuation, giving a double power-law shape to the UVLF. Spatially distinct stellar populations within galaxies experience a wide range of dust attenuation due to variations in the dust optical depth along their line-of-sight, which can range from completely dust-obscured to fully unobscured. The overall attenuation curve of a galaxy is then a complex combination of various lines-of-sight within the galaxy. We explore the manifestation of this effect to study the reliability of line ratios to infer galaxy properties, in particular the Balmer decrement and the BPT diagram. We find the Balmer-decrement-predicted Balmer line attenuation to be higher (factor of 1 to gtrsim10) than expected from commonly used attenuation curves. The observed BPT line ratios deviate from their intrinsic values (median difference of 0.08 and standard deviation of 0.2 for log_{10}([NII] lambda 6585 / H_alpha); 0.02 and 0.05 for log_{10}([OIII] lambda 5008 / H_beta)). Finally, we explore the variation in observed properties (UV attenuation, UV slope and Balmer decrement) with viewing angle, finding average differences of sim0.3 magnitudes in the UV attenuation.
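Converting a Balmer decrement into a nebular attenuation is a one-line calculation once an intrinsic ratio and extinction coefficients are assumed. The sketch below uses Case B (2.86) and Cardelli-like coefficients as assumptions; it is not the attenuation model used in the simulations above.

```python
import numpy as np

# Illustrative conversion of a Balmer decrement into nebular attenuation,
# assuming the Case B intrinsic ratio Ha/Hb = 2.86 and Cardelli-like
# extinction coefficients k(Hb) = 3.61, k(Ha) = 2.53 (assumptions).
def balmer_attenuation(ha_hb_observed, k_ha=2.53, k_hb=3.61, intrinsic=2.86):
    ebv = 2.5 / (k_hb - k_ha) * np.log10(ha_hb_observed / intrinsic)
    return k_ha * ebv                                   # A(H-alpha) in magnitudes

print(f"A(Ha) = {balmer_attenuation(4.5):.2f} mag")     # ~1.15 mag for this toy ratio
```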

  • 8 authors
·
Mar 7, 2023

New Radio Observations of the Supernova Remnant CTA 1

We present new radio images of the supernova remnant (SNR) CTA 1 at 1420 and 408 MHz, and in the 21 cm line of H I observed with the Dominion Radio Astrophysical Observatory Synthesis Telescope and at 1420 MHz observed with the Effelsberg 100 m telescope. We confirm previously described continuum features and elaborate further on filamentary features identified using the high-resolution (1') maps from these new observations. We investigate the abrupt change in sign of rotation measure (RM) across the SNR, using the linear polarization observations in the four bands around 1420 MHz. Following X. H. Sun et al.'s (2011) investigation, we both confirm that the distribution of signs of the RMs for extragalactic sources in the area appears to match that of the shell, and combine the data from the four bands to estimate the relative depolarization and the intrinsic rotation measure of the SNR. We do not conclusively reject X. H. Sun et al.'s (2011) claim of a Faraday screen in the foreground causing the distribution of RMs that we observe; however, we do suggest an alternative explanation of a swept-up stellar wind from the progenitor star with a toroidal magnetic field. Finally, we expand on the analysis of the H I observations by applying the Rolling Hough Transform to isolate filamentary structure and better identify H I emission with the SNR. Further constraining the H I velocity channels associated with CTA 1, we use more recent Galactic rotation curves to calculate an updated kinematic distance of 1.09 +/- 0.2 kpc.

  • 6 authors
·
Dec 19, 2024

HiMo: High-Speed Objects Motion Compensation in Point Clouds

LiDAR point clouds often contain motion-induced distortions, degrading the accuracy of object appearances in the captured data. In this paper, we first characterize the underlying reasons for the point cloud distortion and show that this is present in public datasets. We find that this distortion is more pronounced in high-speed environments such as highways, as well as in multi-LiDAR configurations, a common setup for heavy vehicles. Previous work has dealt with point cloud distortion from the ego-motion but fails to consider distortion from the motion of other objects. We therefore introduce a novel undistortion pipeline, HiMo, that leverages scene flow estimation for object motion compensation, correcting the depiction of dynamic objects. We further propose an extension of a state-of-the-art self-supervised scene flow method. Due to the lack of well-established motion distortion metrics in the literature, we also propose two metrics for compensation performance evaluation: compensation accuracy at a point level and shape similarity on objects. To demonstrate the efficacy of our method, we conduct extensive experiments on the Argoverse 2 dataset and a new real-world dataset. Our new dataset is collected from heavy vehicles equipped with multi-LiDARs and on highways as opposed to mostly urban settings in the existing datasets. The source code, including all methods and the evaluation data, will be provided upon publication. See https://kin-zhang.github.io/HiMo for more details.

  • 7 authors
·
Mar 2

Pz Cats: Photometric redshift catalogs based on DES Y3 BAO sample

Photometric redshift (photo-z) estimation has been developed over the years with various methods. In this work, we analyse four different photo-z estimators using the Dark Energy Survey Y3 BAO Sample: ANNz2, BPZ, ENF, and DNF. Unlike what is usually found in the literature, we investigate the possibility of selecting the best galaxies according to their redshift Probability Distribution Function (PDF). We selected 25,760 galaxies from four different spectroscopic surveys and cross-matched them with the photo-z sample. These galaxies were used to characterise the redshift bias and its 68th-percentile width sigma_{68}. We found that all the estimators we analysed reach their lowest sigma_{68} within the range 0.79 < z_p < 0.85. DNF has the largest absolute bias, while ENF, ANNz2, and BPZ lose precision below redshift 0.7 and above 0.9. If one picks the best galaxies by removing the bins with the worst bias, ANNz2 is the most robust algorithm for all chosen criteria. When selecting the best PDFs, BPZ retains the most objects in the resulting sub-samples; ANNz2 shows better precision, while ENF has the worst selection of Gaussian PDFs, with very few galaxies left for a large-scale structure (LSS) study. We also showed that even though the PDFs are smooth, there are catastrophic redshift results. Lastly, DNF is the worst in precision but retains sufficient galaxies for cosmological analysis. We also selected galaxies whose PDFs have only secondary peaks no larger than 30% of the main peak height, called Small Peaks. For these sub-samples, ANNz2 outperformed the other algorithms. We will make all catalogs publicly available through the package Pz Cats.
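The bias and sigma_{68} statistics are typically computed from the normalised residual (z_phot - z_spec)/(1 + z_spec). The sketch below shows one common convention on synthetic redshifts; the paper's exact definitions may differ.

```python
import numpy as np

# Minimal sketch of common photo-z quality metrics on synthetic redshifts.
rng = np.random.default_rng(0)
z_spec = rng.uniform(0.6, 1.0, 25_760)
z_phot = z_spec + 0.02 * (1 + z_spec) * rng.standard_normal(z_spec.size)

dz = (z_phot - z_spec) / (1 + z_spec)
bias = np.median(dz)
sigma_68 = np.percentile(np.abs(dz - bias), 68)        # 68th-percentile width
print(f"bias = {bias:+.4f}, sigma_68 = {sigma_68:.4f}")
```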

  • 2 authors
·
Jan 7

Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks

We identify label errors in the test sets of 10 of the most commonly-used computer vision, natural language, and audio datasets, and subsequently study the potential for these label errors to affect benchmark results. Errors in test sets are numerous and widespread: we estimate an average of at least 3.3% errors across the 10 datasets, where for example label errors comprise at least 6% of the ImageNet validation set. Putative label errors are identified using confident learning algorithms and then human-validated via crowdsourcing (51% of the algorithmically-flagged candidates are indeed erroneously labeled, on average across the datasets). Traditionally, machine learning practitioners choose which model to deploy based on test accuracy - our findings advise caution here, proposing that judging models over correctly labeled test sets may be more useful, especially for noisy real-world datasets. Surprisingly, we find that lower capacity models may be practically more useful than higher capacity models in real-world datasets with high proportions of erroneously labeled data. For example, on ImageNet with corrected labels: ResNet-18 outperforms ResNet-50 if the prevalence of originally mislabeled test examples increases by just 6%. On CIFAR-10 with corrected labels: VGG-11 outperforms VGG-19 if the prevalence of originally mislabeled test examples increases by just 5%. Test set errors across the 10 datasets can be viewed at https://labelerrors.com and all label errors can be reproduced by https://github.com/cleanlab/label-errors.
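The confident-learning step can be reproduced with the cleanlab package linked in the repository above. The toy example below (made-up labels and probabilities) shows the basic call, assuming the cleanlab 2.x find_label_issues API.

```python
import numpy as np
from cleanlab.filter import find_label_issues   # pip install cleanlab

# Toy confident-learning check: given out-of-sample predicted probabilities and
# the observed (possibly noisy) labels, rank the examples most likely to be
# mislabeled. Inputs below are invented for illustration.
labels = np.array([0, 0, 1, 1, 1, 0])
pred_probs = np.array([[0.9, 0.1],
                       [0.2, 0.8],   # labeled 0 but the model favours class 1
                       [0.1, 0.9],
                       [0.8, 0.2],   # labeled 1 but the model favours class 0
                       [0.3, 0.7],
                       [0.7, 0.3]])

issues = find_label_issues(labels=labels, pred_probs=pred_probs,
                           return_indices_ranked_by="self_confidence")
print(issues)   # indices of the most suspicious examples, e.g. [1 3]
```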

  • 3 authors
·
Mar 26, 2021

Machine learning-driven Anomaly Detection and Forecasting for Euclid Space Telescope Operations

State-of-the-art space science missions increasingly rely on automation due to spacecraft complexity and the costs of human oversight. The high volume of data, including scientific and telemetry data, makes manual inspection challenging. Machine learning offers significant potential to meet these demands. The Euclid space telescope, in its survey phase since February 2024, exemplifies this shift. Euclid's success depends on accurate monitoring and interpretation of housekeeping telemetry and science-derived data. Thousands of telemetry parameters, monitored as time series, may or may not impact the quality of scientific data. These parameters have complex interdependencies, often due to physical relationships (e.g., proximity of temperature sensors). Optimising science operations requires careful anomaly detection and identification of hidden parameter states. Moreover, understanding the interactions between known anomalies and physical quantities is crucial yet complex, as related parameters may display anomalies with varied timing and intensity. We address these challenges by analysing temperature anomalies in Euclid's telemetry from February to August 2024, focusing on eleven temperature parameters and 35 covariates. We use a predictive XGBoost model to forecast temperatures based on historical values, detecting anomalies as deviations from predictions. A second XGBoost model predicts anomalies from covariates, capturing their relationships to temperature anomalies. We identify the top three anomalies per parameter and analyse their interactions with covariates using SHAP (Shapley Additive Explanations), enabling rapid, automated analysis of complex parameter relationships. Our method demonstrates how machine learning can enhance telemetry monitoring, offering scalable solutions for other missions with similar data challenges.
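A minimal version of the forecast-then-flag scheme described above is sketched below: train a gradient-boosted regressor on lagged temperatures and flag large residuals. The lag length, threshold, and injected anomaly are assumptions, and the SHAP attribution step is omitted.

```python
import numpy as np
import xgboost as xgb

# Hedged sketch of residual-based anomaly flagging on a synthetic telemetry
# series; feature construction and thresholds are assumptions.
rng = np.random.default_rng(0)
temp = 20 + np.sin(np.arange(2000) / 50) + 0.05 * rng.standard_normal(2000)
temp[1500:1510] += 1.5                                   # injected anomaly

lags = 24
X = np.stack([temp[i:i + lags] for i in range(len(temp) - lags)])
y = temp[lags:]

model = xgb.XGBRegressor(n_estimators=200, max_depth=4)
model.fit(X[:1200], y[:1200])                            # train on the clean early part
residual = np.abs(y[1200:] - model.predict(X[1200:]))
threshold = 5 * np.median(residual)
print("anomalous time steps:", np.where(residual > threshold)[0] + 1200 + lags)
```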

  • 6 authors
·
Nov 8, 2024
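
A hedged sketch of the two-stage setup described above: forecast a temperature from its own history with XGBoost, flag large residuals as anomalies, then explain the anomalies from covariates with SHAP. The column names, lag window, and 3-sigma threshold are illustrative assumptions, not the mission configuration:

    # Sketch: forecast one telemetry temperature from its lagged values, flag
    # anomalies as large residuals, and attribute them to covariates via SHAP.
    # Column names ("temp", "cov_*") and the 3-sigma rule are assumptions.
    import numpy as np
    import pandas as pd
    import shap
    from xgboost import XGBRegressor

    df = pd.read_csv("telemetry.csv")                        # placeholder file
    lags = pd.concat({f"lag{k}": df["temp"].shift(k) for k in range(1, 25)}, axis=1)
    X_hist = lags.dropna()
    y = df["temp"].loc[X_hist.index]

    forecaster = XGBRegressor(n_estimators=300, max_depth=4).fit(X_hist, y)
    residual = y - forecaster.predict(X_hist)
    anomaly = np.abs(residual) > 3 * residual.std()          # simple 3-sigma rule

    # Second model: relate residuals to covariates, then explain the anomalies.
    covariates = df.filter(like="cov_").loc[X_hist.index]
    explainer_model = XGBRegressor(n_estimators=300, max_depth=4).fit(covariates, residual)
    shap_values = shap.TreeExplainer(explainer_model).shap_values(covariates[anomaly])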

Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking

Kalman filter (KF) based methods for multi-object tracking (MOT) assume that objects move linearly. While this assumption is acceptable for very short periods of occlusion, linear estimates of motion over prolonged periods can be highly inaccurate. Moreover, when there is no measurement available to update Kalman filter parameters, the standard convention is to trust the a priori state estimation for the a posteriori update. This leads to the accumulation of errors during a period of occlusion, and the accumulated error causes significant motion-direction variance in practice. In this work, we show that a basic Kalman filter can still obtain state-of-the-art tracking performance if proper care is taken to fix the noise accumulated during occlusion. Instead of relying only on the linear state estimate (i.e., the estimation-centric approach), we use object observations (i.e., the measurements by the object detector) to compute a virtual trajectory over the occlusion period to fix the error accumulation of filter parameters during the occlusion period. This allows more time steps to correct errors accumulated during occlusion. We name our method Observation-Centric SORT (OC-SORT). It remains Simple, Online, and Real-Time but improves robustness during occlusion and non-linear motion. Given off-the-shelf detections as input, OC-SORT runs at 700+ FPS on a single CPU. It achieves state-of-the-art on multiple datasets, including MOT17, MOT20, KITTI, head tracking, and especially DanceTrack where the object motion is highly non-linear. The code and models are available at https://github.com/noahcao/OC_SORT.

  • 5 authors
·
Mar 27, 2022
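
The central observation-centric idea, re-updating the filter along a virtual trajectory interpolated between the detections that bracket an occlusion, can be sketched as below. This is a simplified illustration, not the released OC-SORT implementation; `kf` is assumed to be any Kalman filter object exposing predict() and update():

    # Simplified illustration of observation-centric re-update: when a track is
    # re-detected after an occlusion gap, interpolate a virtual trajectory between
    # the last observation before the gap and the new one, and run the Kalman
    # update along it to correct error accumulated while only predicting.
    import numpy as np

    def virtual_trajectory(z_before, z_after, gap):
        """Linearly interpolate `gap` virtual observations between two real ones."""
        steps = np.linspace(0.0, 1.0, gap + 2)[1:-1]
        return [z_before + s * (z_after - z_before) for s in steps]

    def re_update(kf, z_before, z_after, gap):
        """Replay predict/update over the occlusion using virtual observations."""
        for z in virtual_trajectory(z_before, z_after, gap):
            kf.predict()
            kf.update(z)          # any standard KF with predict()/update() methods
        kf.predict()
        kf.update(z_after)
        return kf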

RUBIES: a complete census of the bright and red distant Universe with JWST/NIRSpec

We present the Red Unknowns: Bright Infrared Extragalactic Survey (RUBIES), providing JWST/NIRSpec spectroscopy of red sources selected across ~150 arcmin^2 from public JWST/NIRCam imaging in the UDS and EGS fields. RUBIES' novel observing strategy offers a well-quantified selection function: the survey is optimised to reach high (>70%) completeness for bright and red (F150W-F444W>2) sources that are very rare. To place these rare sources in context, we simultaneously observe a reference sample of the 2<z<7 galaxy population, sampling sources at a rate that is inversely proportional to their number density in the 3D space of F444W magnitude, F150W-F444W colour, and photometric redshift. In total, RUBIES observes ~3000 targets across 1 < z_phot < 10 with both the PRISM and G395M dispersers, and ~1500 targets at z_phot > 3 using only the G395M disperser. The RUBIES data reveal a highly diverse population of red sources that span a broad redshift range (z_spec ≈ 1–9), with a photometric redshift scatter and outlier fraction that are 3 times higher than for similarly bright sources that are less red. This diversity is not apparent from the photometric SEDs. Only spectroscopy reveals that the SEDs encompass a mixture of galaxies with dust-obscured star formation, extreme line emission, a lack of star formation indicating early quenching, and luminous active galactic nuclei. As a first demonstration of our broader selection function we compare the stellar masses and rest-frame U-V colours of the red sources and our reference sample: red sources are typically more massive (M_* ≈ 10^(10–11.5) M☉) across all redshifts. However, we find that the most massive systems span a wide range in U-V colour. We describe our data reduction procedure and data quality, and publicly release the reduced RUBIES data and vetted spectroscopic redshifts of the first half of the survey through the DJA.

  • 28 authors
·
Sep 9, 2024
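
The inverse-density target sampling described above can be sketched with a simple 3D histogram density estimate; the bin count, inputs, and sample size are illustrative assumptions, not the survey's actual selection code:

    # Sketch of inverse-density target sampling: estimate the number density of
    # sources in the 3D space of magnitude, colour, and photometric redshift with
    # a histogram, then draw targets with probability proportional to 1/density.
    import numpy as np

    def inverse_density_sample(mag, colour, zphot, n_targets, bins=10, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        sample = np.column_stack([mag, colour, zphot])
        density, edges = np.histogramdd(sample, bins=bins)
        idx = [np.clip(np.digitize(sample[:, i], edges[i][1:-1]), 0, bins - 1)
               for i in range(3)]
        weights = 1.0 / density[idx[0], idx[1], idx[2]]   # rare cells get high weight
        weights /= weights.sum()
        return rng.choice(len(mag), size=n_targets, replace=False, p=weights)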

Transformation Decoupling Strategy based on Screw Theory for Deterministic Point Cloud Registration with Gravity Prior

Point cloud registration is challenging in the presence of heavy outlier correspondences. This paper focuses on addressing the robust correspondence-based registration problem with gravity prior that often arises in practice. The gravity directions are typically obtained by inertial measurement units (IMUs) and can reduce the degree of freedom (DOF) of rotation from 3 to 1. We propose a novel transformation decoupling strategy by leveraging screw theory. This strategy decomposes the original 4-DOF problem into three sub-problems with 1-DOF, 2-DOF, and 1-DOF, respectively, thereby enhancing the computation efficiency. Specifically, the first 1-DOF represents the translation along the rotation axis and we propose an interval stabbing-based method to solve it. The second 2-DOF represents the pole which is an auxiliary variable in screw theory and we utilize a branch-and-bound method to solve it. The last 1-DOF represents the rotation angle and we propose a global voting method for its estimation. The proposed method sequentially solves three consensus maximization sub-problems, leading to efficient and deterministic registration. In particular, it can even handle the correspondence-free registration problem due to its significant robustness. Extensive experiments on both synthetic and real-world datasets demonstrate that our method is more efficient and robust than state-of-the-art methods, even when dealing with outlier rates exceeding 99%.

  • 7 authors
·
Nov 2, 2023
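
The interval stabbing used for the first 1-DOF sub-problem is a standard consensus-maximization primitive: each correspondence contributes an interval of feasible translations along the rotation axis, and the translation stabbing the most intervals is chosen. A generic sketch of that primitive (not the authors' implementation):

    # Generic interval stabbing for 1-DOF consensus maximization: each
    # correspondence contributes an interval [lo, hi] of parameter values it is
    # consistent with; the value covered by the most intervals maximizes consensus.
    def max_stabbing(intervals):
        """Return (best_value, count) maximizing the number of stabbed intervals."""
        events = []
        for lo, hi in intervals:
            events.append((lo, +1))   # interval opens
            events.append((hi, -1))   # interval closes
        # Process openings before closings at equal coordinates so touching
        # intervals are both counted.
        events.sort(key=lambda e: (e[0], -e[1]))
        best_value, best_count, running = None, 0, 0
        for x, delta in events:
            running += delta
            if running > best_count:
                best_count, best_value = running, x
        return best_value, best_count

    # Example: three noisy interval estimates of the translation along the axis.
    print(max_stabbing([(0.8, 1.2), (0.9, 1.4), (2.0, 2.1)]))  # -> (0.9, 2)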

Radii, masses, and transit-timing variations of the three-planet system orbiting the naked-eye star TOI-396

TOI-396 is an F6V star (V ≈ 6.4) orbited by three transiting planets. The orbital periods of the two innermost planets are close to the 5:3 commensurability (P_b ≈ 3.6 d and P_c ≈ 6.0 d). To measure the masses of the three planets, refine their radii, and investigate whether planets b and c are in MMR, we carried out HARPS RV observations and retrieved photometric data from TESS. We extracted the RVs via a skew-normal fit onto the HARPS CCFs and performed an MCMC joint analysis of the Doppler measurements and transit photometry, while employing the breakpoint method to remove stellar activity from the RV time series. We also performed a thorough TTV dynamical analysis of the system. Our analysis confirms that the three planets have similar sizes: R_b = 2.004 +0.045/−0.047 R⊕; R_c = 1.979 +0.054/−0.051 R⊕; R_d = 2.001 +0.063/−0.064 R⊕. For the first time, we have determined the RV masses for TOI-396b and d: M_b = 3.55 +0.94/−0.96 M⊕ (ρ_b = 2.44 +0.69/−0.68 g cm^-3) and M_d = 7.1 ± 1.6 M⊕ (ρ_d = 4.9 +1.2/−1.1 g cm^-3). Our results suggest a quite unusual system architecture, with the outermost planet being the densest. The Doppler reflex motion induced by TOI-396c remains undetected in our RV time series, likely due to the proximity of P_c to the star's rotation period (P_rot = 6.7 ± 1.3 d). We also discovered that TOI-396b and c display significant TTVs. While the TTV dynamical analysis returns a formally precise mass for TOI-396c (M_c,dyn = 2.24 +0.13/−0.67 M⊕), the result might not be accurate owing to the poor sampling of the TTV phase. We also conclude that TOI-396b and c are close to but out of the 5:3 MMR. Our numerical simulation suggests TTV semi-amplitudes of up to 5 hours over a temporal baseline of ~5.2 years.

  • 41 authors
·
Nov 22, 2024

AstronomicAL: An interactive dashboard for visualisation, integration and classification of data using Active Learning

AstronomicAL is a human-in-the-loop interactive labelling and training dashboard that allows users to create reliable datasets and robust classifiers using active learning. This technique prioritises data that offer high information gain, leading to improved performance using substantially less data. The system allows users to visualise and integrate data from different sources and deal with incorrect or missing labels and imbalanced class sizes. AstronomicAL enables experts to visualise domain-specific plots and key information relating both to broader context and details of a point of interest drawn from a variety of data sources, ensuring reliable labels. In addition, AstronomicAL provides functionality to explore all aspects of the training process, including custom models and query strategies. This makes the software a tool for experimenting with both domain-specific classifications and more general-purpose machine learning strategies. We illustrate the use of the system with an astronomical dataset, given the field's immediate need; however, AstronomicAL has been designed for datasets from any discipline. Finally, by exporting a simple configuration file, entire layouts, models, and assigned labels can be shared with the community. This allows for complete transparency and ensures that the process of reproducing results is effortless.

  • 4 authors
·
Sep 11, 2021
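
The active-learning loop behind such a dashboard can be sketched in a few lines; the synthetic data, random-forest classifier, and margin-based query strategy here are illustrative assumptions, since AstronomicAL lets users plug in their own models and query strategies:

    # Minimal active-learning loop with margin-based uncertainty sampling:
    # repeatedly query the unlabeled points the current classifier is least sure
    # about, obtain their labels from the expert, and retrain. Data is synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X_pool = rng.normal(size=(2000, 8))
    y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)   # stand-in for expert labels

    labeled = list(rng.choice(len(X_pool), size=20, replace=False))
    for _ in range(10):                                       # 10 query rounds
        clf = RandomForestClassifier(n_estimators=200).fit(X_pool[labeled], y_pool[labeled])
        proba = clf.predict_proba(X_pool)
        sorted_p = np.sort(proba, axis=1)
        margin = sorted_p[:, -1] - sorted_p[:, -2]            # small margin = ambiguous
        margin[labeled] = np.inf                              # never re-query labeled points
        query = int(np.argmin(margin))
        labeled.append(query)                                 # expert labels it (simulated)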

PS-GS: Gaussian Splatting for Multi-View Photometric Stereo

Integrating inverse rendering with multi-view photometric stereo (MVPS) yields more accurate 3D reconstructions than inverse rendering approaches that rely on fixed environment illumination. However, efficient inverse rendering with MVPS remains challenging. To fill this gap, we introduce Gaussian Splatting for Multi-view Photometric Stereo (PS-GS), which efficiently and jointly estimates the geometry, materials, and lighting of an object illuminated by diverse directional lights (multi-light). Our method first reconstructs a standard 2D Gaussian splatting model as the initial geometry. Based on this initialization, it then performs deferred inverse rendering using the full rendering equation, with lighting computed by a multi-layer perceptron. Throughout the optimization, we regularize the rendered normal maps with normals estimated by uncalibrated photometric stereo. We also propose 2D Gaussian ray tracing for a single directional light to refine the incident lighting. The regularizations and the use of multi-view and multi-light images mitigate the ill-posed problem of inverse rendering. After optimization, the reconstructed object can be used for novel-view synthesis, relighting, and material and shape editing. Experiments on both synthetic and real datasets demonstrate that our method outperforms prior works in terms of reconstruction accuracy and computational efficiency.

  • 6 authors
·
Jul 24

Making Images Real Again: A Comprehensive Survey on Deep Image Composition

As a common image editing operation, image composition (object insertion) aims to combine the foreground from one image with another background image, resulting in a composite image. However, there are many issues that could make the composite images unrealistic. These issues can be summarized as the inconsistency between foreground and background, which includes appearance inconsistency (e.g., incompatible illumination), geometry inconsistency (e.g., unreasonable size), and semantic inconsistency (e.g., mismatched semantic context). The image composition task can be decomposed into multiple sub-tasks, each targeting one or more of these issues. Specifically, object placement aims to find a reasonable scale, location, and shape for the foreground. Image blending aims to address the unnatural boundary between foreground and background. Image harmonization aims to adjust the illumination statistics of the foreground. Shadow (resp., reflection) generation aims to generate a plausible shadow (resp., reflection) for the foreground. These sub-tasks can be executed sequentially or in parallel to acquire realistic composite images. To the best of our knowledge, there is no previous survey on image composition (object insertion). In this paper, we conduct a comprehensive survey of the sub-tasks and the combined task of image composition (object insertion). For each one, we summarize the existing methods, available datasets, and common evaluation metrics. We have also contributed the first image composition toolbox, libcom, which assembles 10+ image composition related functions (e.g., image blending, image harmonization, object placement, shadow generation, generative composition). The ultimate goal of this toolbox is to solve all problems related to image composition with a simple `import libcom`.

  • 7 authors
·
Jun 28, 2021

Kineo: Calibration-Free Metric Motion Capture From Sparse RGB Cameras

Markerless multiview motion capture is often constrained by the need for precise camera calibration, limiting accessibility for non-experts and in-the-wild captures. Existing calibration-free approaches mitigate this requirement but suffer from high computational cost and reduced reconstruction accuracy. We present Kineo, a fully automatic, calibration-free pipeline for markerless motion capture from videos captured by unsynchronized, uncalibrated, consumer-grade RGB cameras. Kineo leverages 2D keypoints from off-the-shelf detectors to simultaneously calibrate cameras, including Brown-Conrady distortion coefficients, and reconstruct 3D keypoints and dense scene point maps at metric scale. A confidence-driven spatio-temporal keypoint sampling strategy, combined with graph-based global optimization, ensures robust calibration at a fixed computational cost independent of sequence length. We further introduce a pairwise reprojection consensus score to quantify 3D reconstruction reliability for downstream tasks. Evaluations on EgoHumans and Human3.6M demonstrate substantial improvements over prior calibration-free methods. Compared to previous state-of-the-art approaches, Kineo reduces camera translation error by approximately 83-85%, camera angular error by 86-92%, and world mean-per-joint error (W-MPJPE) by 83-91%. Kineo is also efficient in real-world scenarios, processing multi-view sequences faster than their duration in specific configuration (e.g., 36min to process 1h20min of footage). The full pipeline and evaluation code are openly released to promote reproducibility and practical adoption at https://liris-xr.github.io/kineo/.

  • 3 authors
·
Oct 28

Promise and Peril: Stellar Contamination and Strict Limits on the Atmosphere Composition of TRAPPIST-1c from JWST NIRISS Transmission Spectra

Attempts to probe the atmospheres of rocky planets around M dwarfs present both promise and peril. While their favorable planet-to-star radius ratios enable searches for even thin secondary atmospheres, their high activity levels and high-energy outputs threaten atmosphere survival. Here, we present the 0.6–2.85 μm transmission spectrum of the 1.1 R⊕, ~340 K rocky planet TRAPPIST-1 c obtained over two JWST NIRISS/SOSS transit observations. Each of the two spectra displays 100–500 ppm signatures of stellar contamination. Despite being separated by 367 days, the retrieved spot and faculae properties are consistent between the two visits, resulting in nearly identical transmission spectra. Jointly retrieving for stellar contamination and a planetary atmosphere reveals that our spectrum can rule out hydrogen-dominated, ≲300× solar metallicity atmospheres with effective surface pressures down to 10 mbar at the 3σ level. For high-mean-molecular-weight atmospheres, where O_2 or N_2 is the background gas, our spectrum disfavors partial pressures of more than ~10 mbar for H_2O, CO, NH_3 and CH_4 at the 2σ level. Similarly, under the assumption of a 100% H_2O, NH_3, CO, or CH_4 atmosphere, our spectrum disfavors thick, >1 bar atmospheres at the 2σ level. These non-detections of spectral features are in line with predictions that even heavier, CO_2-rich atmospheres would be efficiently lost on TRAPPIST-1 c given the cumulative high-energy irradiation experienced by the planet. Our results further stress the importance of robustly accounting for stellar contamination when analyzing JWST observations of exo-Earths around M dwarfs, as well as the need for high-fidelity stellar models to search for the potential signals of thin secondary atmospheres.

  • 12 authors
·
Sep 28, 2024

Instant Uncertainty Calibration of NeRFs Using a Meta-Calibrator

Although Neural Radiance Fields (NeRFs) have markedly improved novel view synthesis, accurate uncertainty quantification in their image predictions remains an open problem. The prevailing methods for estimating uncertainty, including the state-of-the-art Density-aware NeRF Ensembles (DANE) [29], quantify uncertainty without calibration. This frequently leads to over- or under-confidence in image predictions, which can undermine their real-world applications. In this paper, we propose a method which, for the first time, achieves calibrated uncertainties for NeRFs. To accomplish this, we overcome a significant challenge in adapting existing calibration techniques to NeRFs: a need to hold out ground truth images from the target scene, reducing the number of images left to train the NeRF. This issue is particularly problematic in sparse-view settings, where we can operate with as few as three images. To address this, we introduce the concept of a meta-calibrator that performs uncertainty calibration for NeRFs with a single forward pass without the need for holding out any images from the target scene. Our meta-calibrator is a neural network that takes as input the NeRF images and uncalibrated uncertainty maps and outputs a scene-specific calibration curve that corrects the NeRF's uncalibrated uncertainties. We show that the meta-calibrator can generalize on unseen scenes and achieves well-calibrated and state-of-the-art uncertainty for NeRFs, significantly beating DANE and other approaches. This opens opportunities to improve applications that rely on accurate NeRF uncertainty estimates such as next-best view planning and potentially more trustworthy image reconstruction for medical diagnosis. The code is available at https://niki-amini-naieni.github.io/instantcalibration.github.io/.

  • 4 authors
·
Dec 4, 2023
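
For contrast with the meta-calibrator, a conventional post-hoc calibration curve would be fitted on held-out pixels from the target scene, which is exactly the data cost the paper avoids. A generic sketch of that baseline using isotonic regression (an assumption; the paper's calibration-curve parameterization may differ):

    # Generic post-hoc calibration-curve sketch: fit a monotone map from predicted
    # uncertainty to observed absolute error on held-out pixels, then apply it to
    # new uncertainty maps. The paper's meta-calibrator instead predicts this
    # curve from the rendered images, avoiding any held-out views.
    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    def fit_calibration_curve(pred_sigma, abs_error):
        """pred_sigma, abs_error: per-pixel arrays from held-out renders."""
        curve = IsotonicRegression(increasing=True, out_of_bounds="clip")
        curve.fit(pred_sigma.ravel(), abs_error.ravel())
        return curve

    def calibrate(curve, sigma_map):
        """Map an uncalibrated uncertainty map through the fitted curve."""
        return curve.predict(sigma_map.ravel()).reshape(sigma_map.shape)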

Selection Function of Clusters in Dark Energy Survey Year 3 Data from Cross-Matching with South Pole Telescope Detections

Galaxy clusters selected based on overdensities of galaxies in photometric surveys provide the largest cluster samples. Yet modeling the selection function of such samples is complicated by non-cluster members projected along the line of sight (projection effects) and the potential detection of unvirialized objects (contamination). We empirically constrain the magnitude of these effects by cross-matching galaxy clusters selected from Dark Energy Survey data with the redMaPPer algorithm against significant detections in three South Pole Telescope surveys (SZ, pol-ECS, pol-500d). For matched clusters, we augment the redMaPPer catalog with the SPT detection significance. For unmatched objects we use the SPT detection threshold as an upper limit on the SZE signature. Using a Bayesian population model applied to the collected multi-wavelength data, we explore various physically motivated models to describe the relationship between observed richness and halo mass. Our analysis reveals the limitations of a simple lognormal scatter model in describing the data. We rule out significant contamination by unvirialized objects at the high-richness end of the sample. While dedicated simulations offer a well-fitting calibration of projection effects, our findings suggest the presence of redshift-dependent trends that these simulations may not have captured. Our findings highlight that modeling the selection function of optically detected clusters remains a complicated challenge, requiring a combination of simulation and data-driven approaches.

  • 55 authors
·
Feb 18

PoSh: Using Scene Graphs To Guide LLMs-as-a-Judge For Detailed Image Descriptions

While vision-language models (VLMs) have advanced into detailed image description, evaluation remains a challenge. Standard metrics (e.g. CIDEr, SPICE) were designed for short texts and tuned to recognize errors that are now uncommon, such as object misidentification. In contrast, long texts require sensitivity to attribute and relation attachments and scores that localize errors to particular text spans. In this work, we introduce PoSh, a metric for detailed image description that uses scene graphs as structured rubrics to guide LLMs-as-a-Judge, producing aggregate scores grounded in fine-grained errors (e.g. mistakes in compositional understanding). PoSh is replicable, interpretable and a better proxy for human raters than existing metrics (including GPT4o-as-a-Judge). To validate PoSh, we introduce a challenging new dataset, DOCENT. This novel benchmark contains artwork, paired with expert-written references, and model-generated descriptions, augmented with granular and coarse judgments of their quality from art history students. Thus, DOCENT enables evaluating both detailed image description metrics and detailed image description itself in a challenging new domain. We show that PoSh achieves stronger correlations (+0.05 Spearman rho) with the human judgments in DOCENT than the best open-weight alternatives, is robust to image type (using CapArena, an existing dataset of web imagery) and is a capable reward function, outperforming standard supervised fine-tuning. Then, using PoSh, we characterize the performance of open and closed models in describing the paintings, sketches and statues in DOCENT and find that foundation models struggle to achieve full, error-free coverage of images with rich scene dynamics, establishing a demanding new task to gauge VLM progress. Through both PoSh and DOCENT, we hope to enable advances in important areas such as assistive text generation.

Quantifying spectroscopic Ca II exocomet transit occurrence in two decades of HARPS data

The field of exocomets has been built around the unmatched number of detections made in the circumstellar disc of the archetypal star Beta Pictoris. An exocomet detection in spectroscopy is identified by variable atomic absorption features in a stellar spectrum, associated with transiting gas in and trailing an exocomet coma. This paper presents the largest spectroscopic search for exocomet transits to date, which overcomes the limitations of biased samples of stars with debris discs, and instead looks through the ~7,500 stars in the HARPS archive for signs of exocomets in the Ca II doublet (H: 396.847 nm and K: 393.366 nm). The search resulted in 155 candidate stars, which after filtering for false positives (e.g. binaries, stellar activity, etc.), were cut down to 22 stars. These 22 stars are classified into Tier 1, 2, and 3 exocomet candidates, reflecting the confidence level of their exocomet detection. Our two best candidates (Tier 1: Beta Pictoris, HD172555) and four lower confidence candidates (Tier 2: Gl1, HIP5158, HD94771, HR1996) are discussed, yielding a detection rate of 0.03% (Tier 1 only) and 0.1% (Tier 1 & 2) in the HARPS sample. Both Tier 1 stars are known exocomet host stars. These two young A-type stars correspond to 0.4% of all A-types in the sample, suggesting that detecting signs of exocomet transits using Ca II is more likely around young A-type stars. Reanalysing a past HARPS study, we found no evidence to support the previously claimed four exocomet detections, indicating either that those detections are not robust or that we are only sensitive to the strongest signals.

  • 4 authors
·
Dec 17, 2024

TESS Discovers a Second System of Transiting Exocomets in the Extreme Debris Disk of RZ Psc

We present the TESS discovery of only the second system of transiting exocomets with a sufficient number of events to measure the size distribution in the RZ Psc system, enabling comparisons with the beta Pictoris and Solar System size distributions. Twenty-four transits with absorption depths (AD) of 1–20% were observed across three TESS sectors of the 20–50 Myr K0V star, detected as part of our TESS survey of extreme debris disks identified by their IR excess. We discover that the ADs (and hence exocomet radii) follow a broken power-law cumulative frequency distribution not previously seen in extrasolar contexts but similar to that observed in Solar System Kuiper Belt Object sizes, with power-law slopes above and below the break of γ_(AD>break) = 2.32 ± 0.12 and γ_(AD<break) = 0.11 ± 0.04, respectively. We derive size distributions of 1–7 km from two independent lines of evidence. We use the RZ Psc exocomet rate to predict exocomet yields for the Early eVolution Explorer (EVE) NASA astrophysics Small Explorer (SMEX) mission concept to obtain simultaneous photometry of 10^4 young stars in NUV, optical, and NIR bands. Assuming occurrence rates scaled from RZ Psc, EVE would detect 590 exocomets from ≈70 young systems in the optical band, with ≈120 simultaneous 5σ detections in all three bands. These data would enable grain sizes of 200–700 nm and graphite–olivine compositions of dozens of events to be distinguished at 2.5–3σ, as well as a 4σ determination of the accuracy of the Herschel-derived M-debris disk fraction.

  • 12 authors
·
Oct 10
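
The broken power-law cumulative frequency distribution quoted above can be written down directly; a small sketch using the reported slopes, with the break depth and normalization as placeholders since their exact values are not quoted here:

    # Broken power-law cumulative frequency of absorption depths, N(>AD) ∝ AD^-γ,
    # with the reported slopes above/below the break; ad_break and n_break are
    # placeholders, not values taken from the paper.
    import numpy as np

    def cumulative_rate(ad, ad_break=0.05, gamma_above=2.32, gamma_below=0.11, n_break=1.0):
        ad = np.asarray(ad, dtype=float)
        above = n_break * (ad / ad_break) ** (-gamma_above)
        below = n_break * (ad / ad_break) ** (-gamma_below)
        return np.where(ad >= ad_break, above, below)   # continuous at the break

    print(cumulative_rate([0.01, 0.05, 0.20]))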

Adaptive Detection of Fast Moving Celestial Objects Using a Mixture of Experts and Physical-Inspired Neural Network

Fast moving celestial objects are characterized by velocities across the celestial sphere that significantly differ from the motions of background stars. In observational images, these objects exhibit distinct shapes, contrasting with the typical appearances of stars. Depending on the observational method employed, these celestial entities may be designated as near-Earth objects or asteroids. Historically, fast moving celestial objects have been observed using ground-based telescopes, where the relative stability of stars and Earth facilitated effective image differencing techniques alongside traditional fast moving celestial object detection and classification algorithms. However, the growing prevalence of space-based telescopes, along with their diverse observational modes, produces images with different properties, rendering conventional methods less effective. This paper presents a novel algorithm for detecting fast moving celestial objects within star fields. Our approach enhances state-of-the-art fast moving celestial object detection neural networks by transforming them into physical-inspired neural networks. These neural networks leverage the point spread function of the telescope and the specific observational mode as prior information; they can directly identify fast moving celestial objects within star fields without requiring additional training, thereby addressing the limitations of traditional techniques. Additionally, all neural networks are integrated using the mixture of experts technique, forming a comprehensive fast moving celestial object detection algorithm. We have evaluated our algorithm using simulated data that mimics various space-based telescope observation scenarios, as well as real observational images. Results demonstrate that our method effectively detects fast moving celestial objects across different observational modes.

  • 5 authors
·
Apr 10
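
The mixture-of-experts combination can be illustrated generically: each expert detector produces a per-pixel score map and a gate weights them via a softmax. The expert maps and gating logits below are placeholders, not the paper's PSF-aware networks:

    # Generic mixture-of-experts fusion sketch: several expert detectors each
    # produce a per-pixel score map for moving-object candidates; a softmax gate
    # weights their outputs before thresholding.
    import numpy as np

    def moe_score(expert_maps, gate_logits):
        """expert_maps: (E, H, W) score maps; gate_logits: (E,) gating logits."""
        weights = np.exp(gate_logits - gate_logits.max())
        weights /= weights.sum()                               # softmax gating weights
        return np.tensordot(weights, expert_maps, axes=1)      # weighted (H, W) map

    # Example: three experts, with the gate favouring the second one.
    maps = np.random.rand(3, 64, 64)
    fused = moe_score(maps, np.array([0.1, 2.0, -1.0]))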

Astrometric Effects of a Stochastic Gravitational Wave Background

A stochastic gravitational wave background causes the apparent positions of distant sources to fluctuate, with angular deflections of order the characteristic strain amplitude of the gravitational waves. These fluctuations may be detectable with high precision astrometry, as first suggested by Braginsky et al. in 1990. Several researchers have made order of magnitude estimates of the upper limits obtainable on the gravitational wave spectrum Ω_gw(f), at frequencies of order f ~ 1 yr^-1, both for the future space-based optical interferometry missions GAIA and SIM, and for VLBI interferometry in radio wavelengths with the SKA. For GAIA, tracking N ~ 10^6 quasars over a time of T ~ 1 yr with an angular accuracy of Δθ ~ 10 μas would yield a sensitivity level of Ω_gw ~ (Δθ)^2/(N T^2 H_0^2) ~ 10^-6, which would be comparable with pulsar timing. In this paper we take a first step toward firming up these estimates by computing in detail the statistical properties of the angular deflections caused by a stochastic background. We compute analytically the two point correlation function of the deflections on the sphere, and the spectrum as a function of frequency and angular scale. The fluctuations are concentrated at low frequencies (for a scale invariant stochastic background), and at large angular scales, starting with the quadrupole. The magnetic-type and electric-type pieces of the fluctuations have equal amounts of power.

  • 2 authors
·
Sep 21, 2010
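
The quoted GAIA-style sensitivity estimate can be checked with quick back-of-the-envelope arithmetic; H_0 ≈ 70 km/s/Mpc is assumed here:

    # Back-of-the-envelope check of the quoted sensitivity estimate
    # Omega_gw ~ (dtheta)^2 / (N T^2 H0^2), assuming H0 ~ 70 km/s/Mpc.
    import numpy as np

    dtheta = 10e-6 / 3600 * np.pi / 180      # 10 microarcseconds in radians
    N = 1e6                                  # number of tracked quasars
    T = 3.156e7                              # 1 year in seconds
    H0 = 70 * 1e3 / 3.086e22                 # Hubble constant in 1/s
    print(dtheta**2 / (N * (T * H0)**2))     # ≈ 4.6e-7, of order 10^-6 as quoted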

Probing the shape of the Milky Way dark matter halo with hypervelocity stars: a new method

We propose a new method to determine the shape of the gravitational potential of the dark matter (DM) halo of the Milky Way (MW) with the galactocentric tangential velocities of a sample of hypervelocity stars (HVSs). We compute the trajectories of different samples of HVSs in a MW where the baryon distribution is axisymmetric and the DM potential either is spherical or is spheroidal or triaxial with radial-dependent axis ratios. We determine the shape of the DM potential with the distribution of the latitudinal velocity |v_ϑ| in axisymmetric Galactic potentials, or with the distribution of |v_ϑ| and of a function v̄_φ of the azimuthal velocity in non-axisymmetric Galactic potentials. We recover the correct shape of the DM potential by comparing the distribution of |v_ϑ| and v̄_φ against the corresponding distributions of mock samples of HVSs that traveled in DM halos of different shapes. We use the largest possible sample of ~800 HVSs of 4 M☉ ejected via the Hills mechanism at a rate of ~10^-4 yr^-1, currently outgoing, and located at more than 10 kpc from the Galactic center. In our ideal case of galactocentric velocities with null uncertainties and no observational limitations, our method recovers the correct shape of the DM potential with a success rate S ≳ 89% in axisymmetric Galactic potentials, and S > 96% in the explored non-axisymmetric cases. The unsuccessful cases yield axis ratios of the DM potential that are off by ±0.1. The success rate decreases with decreasing sample size: for example, for a spherical DM halo, S drops from ~98% to ~38% when the sample size decreases from ~800 to ~40 HVSs. A robust determination of the shape of the DM potential thus requires the measure of the galactocentric velocity of a few hundred genuine HVSs.

  • 5 authors
·
Nov 18, 2021

Quasi-periodic pulsations in extreme-ultraviolet brightenings

Context. Extreme-ultraviolet (EUV) observations have revealed small-scale transient brightenings that may share common physical mechanisms with larger-scale solar flares. A notable feature of solar and stellar flares is the presence of quasi-periodic pulsations (QPPs), which are considered a common and potentially intrinsic characteristic. Aims. We investigate the properties of QPPs detected in EUV brightenings, which are considered small-scale flares, and compare their statistical properties with those observed in solar and stellar flares. Methods. We extracted integrated light curves of 22,623 EUV brightenings in two quiet Sun regions observed by the Solar Orbiter/Extreme Ultraviolet Imager and identified QPPs in their light curves using Fourier analysis. Results. Approximately 2.7 % of the EUV brightenings exhibited stationary QPPs. The QPP occurrence rate increased with the surface area, lifetime, and peak brightness of the EUV brightenings. The detected QPP periods ranged from approximately 15 to 260 seconds, which is comparable to the periods observed in solar and stellar flares. Consistent with observations of QPPs in solar and stellar flares, no correlation was found between the QPP period and peak brightness. However, unlike the trend observed in solar flares, no correlation was found between the QPP period and lifetime/length scale. Conclusions. The presence of QPPs in EUV brightenings supports the interpretation that these events may be small-scale manifestations of flares, and the absence of period scaling with loop length further suggests that standing waves may not be the primary driver of QPPs in these events.

  • 8 authors
·
Apr 21
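
Detecting a stationary QPP in a brightening light curve with Fourier analysis can be sketched as follows; the quadratic detrending, cadence, and 5× mean-power threshold are illustrative assumptions, not the paper's exact significance test:

    # Minimal sketch: detrend an EUV-brightening light curve, compute its Fourier
    # power spectrum, and flag a peak well above the mean power as a candidate QPP.
    import numpy as np

    def detect_qpp(flux, cadence_s=5.0, threshold=5.0):
        flux = np.asarray(flux, dtype=float)
        t = np.arange(flux.size)
        flux = flux - np.polyval(np.polyfit(t, flux, 2), t)   # remove slow trend
        power = np.abs(np.fft.rfft(flux))**2
        freq = np.fft.rfftfreq(flux.size, d=cadence_s)
        power, freq = power[1:], freq[1:]                     # drop the zero frequency
        peak = np.argmax(power)
        if power[peak] > threshold * power.mean():
            return 1.0 / freq[peak]                           # candidate QPP period in seconds
        return None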

A New Dataset and Performance Benchmark for Real-time Spacecraft Segmentation in Onboard Flight Computers

Spacecraft deployed in outer space are routinely subjected to various forms of damage due to exposure to hazardous environments. In addition, there are significant risks to the subsequent process of in-space repairs through human extravehicular activity or robotic manipulation, incurring substantial operational costs. Recent developments in image segmentation could enable the development of reliable and cost-effective autonomous inspection systems. While these models often require large amounts of training data to achieve satisfactory results, publicly available annotated spacecraft segmentation data are very scarce. Here, we present a new dataset of nearly 64k annotated spacecraft images that was created using real spacecraft models, superimposed on a mixture of real and synthetic backgrounds generated using NASA's TTALOS pipeline. To mimic camera distortions and noise in real-world image acquisition, we also added different types of noise and distortion to the images. Finally, we finetuned YOLOv8 and YOLOv11 segmentation models to generate performance benchmarks for the dataset under well-defined hardware and inference time constraints to mimic real-world image segmentation challenges for real-time onboard applications in space on NASA's inspector spacecraft. The resulting models, when tested under these constraints, achieved a Dice score of 0.92, Hausdorff distance of 0.69, and an inference time of about 0.5 second. The dataset and models for performance benchmark are available at https://github.com/RiceD2KLab/SWiM.

  • 9 authors
·
Jul 14
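
Fine-tuning a YOLO segmentation model as in the benchmark above follows the standard ultralytics training loop; the dataset YAML, sample image, and hyperparameters below are placeholders, not the released configuration:

    # Sketch of fine-tuning a YOLO segmentation model on a custom spacecraft
    # dataset with the ultralytics API; paths and hyperparameters are placeholders.
    from ultralytics import YOLO

    model = YOLO("yolov8n-seg.pt")                    # pretrained segmentation weights
    model.train(data="spacecraft_seg.yaml",           # dataset config (placeholder path)
                epochs=50, imgsz=640, batch=16)
    metrics = model.val()                             # mask metrics on the validation split
    results = model.predict("sample_spacecraft.png")  # inference on a single image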