We are pleased to invite you to participate in the SKACH Winter Meeting hosted by our collaborators at the International Space Science Institute (ISSI) Bern and the University of Bern (UniBe). This event will focus on consortium-wide updates and updates from the five SKACH programs: Comms & Outreach, Instrumentation, Computing Platforms & Infrastructure, Data Science & Simulations, and Science (AstroSignals).
SKACH is primarily funded by the State Secretariat for Education, Research and Innovation (SERI), and is responsible for managing the Swiss contribution to the SKAO. SKACH also leverages funding from external grants such as AstroSignals SINERGIA, SNF Bilateral, PASC, InnoSuisse, and Horizon Europe.
This endeavor is led by a strong contingent of Swiss institutions, including Universität Basel (UniBas), Fachhochschule Nordwestschweiz (FHNW), Universität Zürich (UZH), Eidgenössische Technische Hochschule Zürich (ETHZ), École Polytechnique Fédérale de Lausanne (EPFL), Zürcher Hochschule für Angewandte Wissenschaften (ZHAW), Université de Genève (UniGE), Haute École spécialisée de Suisse Occidentale (HES-SO), Centro Svizzero di Calcolo Scientifico (CSCS) and the International Space Science Institute in Bern (ISSI Bern).
As SKACH moves into the next four years of funding, we will look ahead to structural changes and goals.
This session will present updates on the communications and outreach work of SKACH, including a section on Radio Cosmos, which will launch during the Winter Meeting.
The Computing Platforms and Infrastructure work package is responsible for enabling the science use cases of the SKACH consortium by providing an HPC platform at CSCS and contributing technical expertise in HPC technologies and software development.
In this presentation, we will report the final status of the work package activities.
The SKA Data Challenge 3 is divided into two related yet independent challenges, 'Foreground' and 'Inference', or SDC3a and SDC3b, respectively. The former ended last year, and the SKACH team participated, obtaining remarkable results. As a continuation of this effort, the SKACH team, in collaboration with the Science working group from Sweden, is participating in the second part (SDC3b), which asks participants to infer the cosmological and astrophysical reionisation properties from power spectra of the 21-cm hydrogen signal from the Epoch of Reionisation in different redshift ranges.
These challenges are designed to enable the community's future efforts in addressing scientific questions pertinent to the Epoch of Reionisation using the SKA-Low telescope.
In this presentation, I will illustrate the challenge goal and the SKACH team's current results, which comprise both standard inference techniques and machine learning approaches.
In this short presentation, we will cover features newly added since the last SKA meeting.
Scoop developed an easy-to-use benchmark monitoring solution that gathers hardware, software, execution, and tracing metrics for benchmark runs. This enables clear, concise, and reproducible benchmark executions.
SRCNet recently reached a significant milestone: its first release, branded as v0.1. We will showcase the capabilities of v0.1 on the Swiss SRC node.
I will discuss a new paradigm for cosmological likelihood-based inference, leveraging recent developments in machine learning and its underlying technology to accelerate Bayesian inference in high-dimensional settings. This paradigm combines (i) emulation, where a machine learning model is trained to mimic cosmological observables, e.g. CosmoPower-JAX; (ii) differentiable and probabilistic programming, e.g. JAX and NumPyro, respectively; (iii) scalable Markov chain Monte Carlo (MCMC) sampling techniques that exploit gradients, e.g. Hamiltonian Monte Carlo; and (iv) decoupled and scalable Bayesian model selection techniques that compute the Bayesian evidence purely from posterior samples, e.g. the learned harmonic mean implemented in harmonic. This paradigm makes it possible to carry out a complete Bayesian analysis, including both parameter estimation and model selection, in a fraction of the time of traditional approaches. For instance, for a joint analysis of three simulated next-generation surveys, each performing a 3x2pt analysis and together involving more than 150 parameters, standard nested sampling techniques are simply not feasible, requiring a projected 12 years of compute time on 48 CPU cores; the proposed approach, by contrast, requires only 8 days of compute time on 24 GPUs. I will also discuss recent developments in simulation-based inference, which promise to overcome the limitations of likelihood-based approaches, especially in light of the SKAO and its precursors such as the MWA, and which will be crucial for the SKA Science Data Challenge 3b.
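As a minimal illustration of how these ingredients fit together, the sketch below sets up a NumPyro model in which a stand-in differentiable emulator (a toy placeholder for a trained CosmoPower-JAX-style network) feeds a Gaussian likelihood sampled with NUTS/HMC. The parameter names, data vector, and noise level are illustrative assumptions, not taken from the analysis described above.

```python
# Minimal sketch: gradient-based Bayesian inference with a differentiable emulator.
import jax
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def emulator(omega_m, sigma8):
    # Placeholder for a neural emulator of a cosmological observable;
    # here just a smooth, differentiable toy function.
    return jnp.stack([omega_m * sigma8, omega_m + sigma8])

data = jnp.array([0.25, 1.1])   # mock data vector (illustrative)
sigma = 0.05                    # assumed Gaussian noise level (illustrative)

def model():
    omega_m = numpyro.sample("omega_m", dist.Uniform(0.1, 0.5))
    sigma8 = numpyro.sample("sigma8", dist.Uniform(0.6, 1.0))
    theory = emulator(omega_m, sigma8)
    # Gaussian likelihood; gradients flow through the emulator, enabling HMC/NUTS.
    numpyro.sample("obs", dist.Normal(theory, sigma), obs=data)

mcmc = MCMC(NUTS(model), num_warmup=500, num_samples=1000)
mcmc.run(jax.random.PRNGKey(0))
mcmc.print_summary()
```

The resulting posterior samples could then be passed to an evidence estimator such as the learned harmonic mean for model selection.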
Latent Diffusion Models (LDMs) have achieved remarkable success in various domains of generative modeling, particularly due to their ability to efficiently learn and represent complex data distributions in an abstract latent space. This success is exemplified by their use in Stable Diffusion and FLUX.1, state-of-the-art text-to-image synthesis techniques that leverage the strengths of LDMs to generate detailed natural images from noise, guided by text prompts.
Here, we present our ongoing work on LDMs for prior matching of lensing galaxies. Strong gravitational lensing represents an ill-posed inverse problem, which is why it is crucial to have tight physical constraints on a given observation to narrow down the solution space. This is often the sole focus of parametric lens modelling studies, but such models lack physical grounding and ignore knowledge about galaxy formation and evolution. We propose a novel data-driven approach which uses generative deep learning to match the prior distribution. By training LDMs on galaxies from (magneto-)hydrodynamical large-scale simulations, we can guide the lens inference process, leading to more accurate, robust, and, most importantly, physical free-form lens reconstructions. Finally, we demonstrate our approach on the recent lens discovery J1721+8842, the first Einstein zig-zag lens.
The SMART pulsar survey produced petabytes of data, only a small fraction of which has been processed for pulsar searching. Processing was done with a coarse dedispersion plan and a simple periodicity search. Improved software is required to search a larger fraction of the dataset and to conduct finer searches. We share our contributions to GPU-based implementations of these two algorithms and compare them with the commonly used PRESTO library, which relies only on OpenMP/MPI parallelism.
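For context, the sketch below shows a brute-force, CPU-side version of the incoherent dedispersion step in pure NumPy; it is not the GPU implementation discussed in the talk, and the band, sampling time, and trial DM are illustrative placeholders.

```python
# Minimal sketch of incoherent dedispersion: shift each frequency channel by
# the cold-plasma dispersion delay and sum the channels.
import numpy as np

def dedisperse(dynamic_spectrum, freqs_mhz, dt_s, dm):
    """dynamic_spectrum: (n_chan, n_samp) filterbank data;
    freqs_mhz: channel centre frequencies in MHz; dt_s: sample time in s;
    dm: trial dispersion measure in pc cm^-3."""
    f_ref = freqs_mhz.max()
    # Dispersion delay relative to the highest-frequency channel, in seconds.
    delays = 4.148808e3 * dm * (freqs_mhz**-2 - f_ref**-2)
    shifts = np.round(delays / dt_s).astype(int)
    out = np.zeros(dynamic_spectrum.shape[1])
    for chan, shift in zip(dynamic_spectrum, shifts):
        out += np.roll(chan, -shift)   # align each channel before summing
    return out

# Example: one trial DM over random stand-in data.
rng = np.random.default_rng(0)
data = rng.normal(size=(256, 4096))
freqs = np.linspace(140.0, 170.0, 256)   # MHz, low-frequency band (illustrative)
timeseries = dedisperse(data, freqs, dt_s=1e-3, dm=50.0)
```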
In this talk we present an update on BIPP, the HPC implementation of the Bluebild algorithm developed at EPFL. We give a short overview of the algorithm and present some new science use cases being examined in the group.
The growth of radio astronomy data rates challenges current data processing techniques. With the Square Kilometre Array (SKA) expected to produce exabytes of data every year, many more radio sources can be expected to be discovered, at higher redshift and better resolution. Due to storage constraints, analysis of these data will need to be done without manual manipulation of visibilities. An early-stage analysis can be performed using interpretable machine learning (ML). However, classifier performance has proven highly data-dependent and unstable on small, unlabelled datasets. In this talk I will propose generative modelling, based on the scattering transform, to artificially augment the data used to train classifiers. I will demonstrate that stability, interpretability and training efficiency are benefits of using the scattering transform, and show how classifier performance is affected by artificial augmentation.
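As a rough illustration of how scattering coefficients can serve as features for a small-sample classifier, the sketch below assumes the kymatio package and uses random stand-in cutouts and labels; it is not the generative-augmentation pipeline proposed in the talk.

```python
# Minimal sketch: scattering-transform coefficients as features for a classifier.
import numpy as np
from kymatio.numpy import Scattering2D
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
images = rng.normal(size=(200, 64, 64)).astype(np.float32)  # stand-in cutouts
labels = rng.integers(0, 2, size=200)                        # stand-in classes

scattering = Scattering2D(J=3, shape=(64, 64))
coeffs = scattering(images)                  # per-image scattering coefficients
features = coeffs.reshape(len(images), -1)   # flatten to one feature vector each

clf = LogisticRegression(max_iter=1000).fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```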
Radio Frequency Interference (RFI) is a growing problem in radio astronomy. As of October 2024, more than 13,000 satellites are orbiting Earth, and the number is steadily increasing due to the deployment of large-scale satellite constellations such as Starlink. Methods to address RFI are therefore essential. In novel work, we have successfully performed trajectory-based RFI subtraction on real radio astronomy data contaminated by a Starlink satellite. The method used, known as TABASCAL (Trajectory Based RFI Subtraction and Calibration), has been in development for several years and has demonstrated its RFI-subtraction capabilities on simulated data. Now, for the first time, TABASCAL has been applied to real data from the EDA2 telescope, an SKA-Low analogue. The results show the effective removal of the Starlink satellite and a correct prediction of the astronomical sources. This first real-world demonstration marks a significant milestone in mitigating RFI.
The Hydrogen Intensity and Real-time Analysis eXperiment (HIRAX) radio interferometer array aims to observe neutral hydrogen (HI) through intensity mapping (IM) in the redshift range 0.775-2.55. It is currently being built at the South African Radio Astronomy Observatory (SARAO) Square Kilometre Array (SKA) site in South Africa. HI IM makes it possible to tomographically probe large cosmological volumes, enabling constraints on, for example, the dark energy equation of state. In this talk, we will present an overview of the HIRAX instrument and its science goals. Systematics are a significant concern in deriving cosmological constraints from HI IM due to the presence of strong foreground signals, so systematics and calibration need to be carefully controlled. We will discuss the design, instrument characterisation, and analysis challenges that this presents, focusing on the commissioning of the test array as well as the main dish production.
We will present updates concerning VIRUP (the VIrtual Reality Universe Project).
octreegen, a tool that converts cosmological simulation data to a visualization-optimized file format, will be presented.
MeerKAT and the upcoming SKA will drastically increase our horizon for direct measurements of neutral hydrogen (HI) in the Universe, providing new insights into the baryonic content of galaxies across cosmic time. We recently acquired deep U-band MeerKAT observations of the COSMOS field. With these data and the use of 21 cm stacking techniques, we will measure the HI mass of large samples of COSMOS MS galaxies in two redshift bins, z=0.4-0.8 and z=0.8-1.5. Our HI stacking results, in combination with the MIGHTEE-HI (L-band) survey, will yield the first observational constraint on the fHI evolution from z=0 to z~1.5 in the COSMOS field, allowing a direct comparison with cosmological halo mass models. Importantly, we will also obtain a measurement of fHI at z=0.8-1.5, previously obtained with the GMRT, that is independent because it comes from another telescope and another field. Moreover, the new MeerKAT observations will deliver an unprecedentedly deep continuum map at 580-1015 MHz, which will lead to the detection of about 10,000 individual galaxies over a large range of redshifts.
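For readers unfamiliar with the technique, the sketch below illustrates the basic idea of 21 cm spectral stacking in pure NumPy: spectra extracted at the positions of galaxies with known redshifts are shifted to a common rest frame and averaged. Frequencies, array shapes, and redshifts are illustrative stand-ins, not the actual COSMOS data.

```python
# Minimal sketch of 21 cm spectral stacking.
import numpy as np

F_HI = 1420.405751  # rest frequency of the 21 cm line, MHz

def stack_spectra(spectra, freqs_mhz, redshifts, n_out=201):
    """spectra: (n_gal, n_chan) flux densities; freqs_mhz: observed channel
    frequencies; redshifts: spectroscopic redshift of each galaxy."""
    rest_grid = np.linspace(F_HI - 2.0, F_HI + 2.0, n_out)  # common rest-frame grid
    stack = np.zeros(n_out)
    for spec, z in zip(spectra, redshifts):
        rest_freqs = freqs_mhz * (1.0 + z)   # shift channels to the galaxy rest frame
        stack += np.interp(rest_grid, rest_freqs, spec)
    return rest_grid, stack / len(spectra)

# Example with noise-only stand-in spectra.
rng = np.random.default_rng(2)
freqs = np.linspace(580.0, 1015.0, 4096)      # MHz, UHF-like band (illustrative)
spectra = rng.normal(size=(100, freqs.size))
zs = rng.uniform(0.45, 0.8, size=100)          # stand-in redshifts in the lower bin
grid, stacked = stack_spectra(spectra, freqs, zs)
```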
Galaxy physical properties are not randomly scattered throughout a multi-dimensional parameter space, but instead follow distinct correlations ("scaling relations") that reflect their formation paths and the associated astrophysical processes.
In this presentation I will describe two examples - both relevant to science with SKA-MID - of how these interrelations between different galaxy properties can be constrained via empirical population modelling, starting from the galaxy stellar mass distribution: (i) the link between galaxy star-formation rate and radio continuum luminosity, and (ii) the relation between the brightness of emission from the molecular gas tracer carbon monoxide (CO) and the galaxy gas-phase metallicity.
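As a schematic illustration of the empirical population-modelling approach, the toy sketch below draws galaxies from an assumed stellar mass distribution and propagates them through parametrised scaling relations; the distributions and coefficients are placeholders, not the relations constrained in the talk.

```python
# Toy sketch of empirical population modelling starting from a stellar mass distribution.
import numpy as np

rng = np.random.default_rng(3)

# Step 1: mock stellar mass distribution (log10 M*/Msun), a placeholder lognormal.
log_mstar = rng.normal(loc=10.0, scale=0.5, size=10_000)

# Step 2: toy "main sequence" linking stellar mass to star-formation rate, with scatter.
def log_sfr(log_m, slope=0.8, norm=-7.5, scatter=0.3):
    return slope * log_m + norm + rng.normal(0.0, scatter, size=log_m.shape)

# Step 3: toy SFR-to-radio-continuum mapping (placeholder coefficients).
def log_l_radio(log_sfr_vals, slope=1.0, norm=21.3):
    return slope * log_sfr_vals + norm   # log10 of a radio luminosity, arbitrary units

sfr = log_sfr(log_mstar)
l_radio = log_l_radio(sfr)

# The mock population can then be compared with observed luminosity functions or
# number counts to constrain the parameters of the assumed scaling relations.
print("median log10 L_radio of the mock population:", np.median(l_radio))
```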
Numerical simulations of tidal disruption events (TDEs) with SPH-EXA
Vision foundation models, which have demonstrated significant potential in many multimedia applications, are often underutilized in the natural sciences. This is primarily due to mismatches between the nature of domain-specific scientific data and the typical training data used for foundation models, leading to distribution shifts. Scientific data often differ substantially in structure and characteristics; researchers frequently face the challenge of optimizing model performance with limited labeled data, often only a few hundred or a few thousand images. Adapting foundation models effectively requires customized approaches to preprocessing, data augmentation, and training. Additionally, each vision foundation model exhibits unique strengths and limitations, influenced by differences in architecture, training procedures, and the datasets used for training. In this work, we evaluate the application of various vision foundation models to astrophysics data, specifically images from optical and radio astronomy. Our results show that using features extracted by specific foundation models improves the classification accuracy of optical galaxy images compared to conventional supervised training. Similarly, these models achieve equivalent or better performance in object detection tasks with radio images. However, their performance in classifying radio galaxy images is generally poor and often inferior to traditional supervised training results. These findings suggest that selecting suitable vision foundation models for astrophysics applications requires careful consideration of the model characteristics and alignment with the specific requirements of the downstream tasks.
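The "frozen features plus lightweight classifier" setup evaluated here can be sketched as follows, using a generic torchvision backbone as a stand-in for the vision foundation models under study; the images and labels are random placeholders.

```python
# Minimal sketch: frozen pretrained backbone as feature extractor + linear classifier.
import torch
from torchvision.models import resnet50, ResNet50_Weights
from sklearn.linear_model import LogisticRegression

weights = ResNet50_Weights.IMAGENET1K_V2
backbone = resnet50(weights=weights)
backbone.fc = torch.nn.Identity()        # expose the 2048-d pooled features
backbone.eval()
preprocess = weights.transforms()

@torch.no_grad()
def extract_features(images):
    """images: (N, 3, H, W) float tensor in [0, 1]."""
    return backbone(preprocess(images)).numpy()

# Stand-in "galaxy cutouts" and labels, a few hundred as in the small-sample regime.
images = torch.rand(200, 3, 224, 224)
labels = torch.randint(0, 3, (200,)).numpy()

features = extract_features(images)
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```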
Physics-Informed Neural Networks (PINNs) have emerged as a powerful tool for solving differential equations by integrating physical laws into the learning process. This work leverages PINNs to simulate gravitational collapse, a critical phenomenon in astrophysics and cosmology. We introduce the Schrödinger-Poisson Informed Neural Network (SPINN), which solves the nonlinear Schrödinger-Poisson (SP) equations to simulate the gravitational collapse of Fuzzy Dark Matter (FDM) in both 1D and 3D settings. Results demonstrate accurate predictions of key metrics such as mass distribution, density profiles, and soliton formation, validated against known analytical or numerical benchmarks. This work highlights the potential of PINNs for efficient, scalable modeling of FDM and other astrophysical systems, overcoming the challenges faced by traditional numerical solvers due to the non-linearity of the involved equations and the need to resolve multi-scale phenomena, in particular the fine wave features of FDM on cosmological scales.
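As a heavily simplified illustration of the PINN ingredient, the JAX sketch below trains a small network so that its automatic-differentiation derivatives satisfy a 1D Poisson residual with Dirichlet boundaries; it is not the SPINN solver for the full Schrödinger-Poisson system, and all names and values are illustrative.

```python
# Minimal PINN sketch: fit a network phi(x) so that phi''(x) = rho(x) on [-5, 5].
import jax
import jax.numpy as jnp

def init_params(key, widths=(1, 32, 32, 1)):
    params = []
    for n_in, n_out in zip(widths[:-1], widths[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (n_in, n_out)) / jnp.sqrt(n_in),
                       jnp.zeros(n_out)))
    return params

def phi(params, x):
    h = jnp.atleast_1d(x)
    for w, b in params[:-1]:
        h = jnp.tanh(h @ w + b)
    w, b = params[-1]
    return (h @ w + b)[0]

def rho(x):
    return jnp.exp(-x**2)            # stand-in source term (density)

def residual(params, x):
    d2phi = jax.grad(jax.grad(phi, argnums=1), argnums=1)(params, x)
    return d2phi - rho(x)

def loss(params, xs):
    pde = jnp.mean(jax.vmap(residual, in_axes=(None, 0))(params, xs) ** 2)
    bc = phi(params, -5.0) ** 2 + phi(params, 5.0) ** 2   # Dirichlet boundaries
    return pde + bc

xs = jnp.linspace(-5.0, 5.0, 256)
params = init_params(jax.random.PRNGKey(0))
lr = 1e-3
grad_loss = jax.jit(jax.grad(loss))
for step in range(2000):
    grads = grad_loss(params, xs)
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
```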