List of past RPMs — 2017
Fast radio bursts (FRBs) are exciting, recently discovered astrophysical transients whose origins are unknown.
Currently, these bursts are believed to come from cosmological distances, potentially allowing us to probe the electron content of the universe on cosmological length scales. Even though precise localization is crucial for determining their origin, radio interferometers have not been extensively employed in searching for them due to computational limitations.
I will briefly present the Fast Dispersion Measure Transform (FDMT) algorithm, which reduces the operation count of blind incoherent dedispersion by two to three orders of magnitude.
In addition, FDMT makes it possible to probe the unexplored domain of sub-microsecond astrophysical pulses.
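For context, here is a minimal sketch of the brute-force incoherent dedispersion that FDMT accelerates (not FDMT itself): every trial dispersion measure requires shifting and summing all frequency channels, giving the O(N_f · N_t · N_DM) cost that FDMT reduces. The array sizes, band edges, and injected burst below are illustrative assumptions.

```python
import numpy as np

K_DM = 4.148808e3  # dispersion constant, s MHz^2 cm^3 / pc

def dedisperse(intensity, freqs_mhz, dt_s, dm_grid):
    """intensity: (N_f, N_t) dynamic spectrum; returns an (N_DM, N_t) DM-time map."""
    n_f, n_t = intensity.shape
    f_ref = freqs_mhz.max()
    out = np.zeros((len(dm_grid), n_t))
    for i, dm in enumerate(dm_grid):
        for j, f in enumerate(freqs_mhz):
            # cold-plasma dispersion delay relative to the highest channel
            delay = K_DM * dm * (f**-2 - f_ref**-2)
            out[i] += np.roll(intensity[j], -int(round(delay / dt_s)))
    return out

# Toy usage: inject a fake burst at DM = 300 pc/cm^3 into a 1.2-1.5 GHz band.
rng = np.random.default_rng(1)
freqs = np.linspace(1200.0, 1500.0, 64)
dt, t0 = 1e-3, 1.0
spec = rng.normal(0, 1, (64, 4096))
for j, f in enumerate(freqs):
    k = int(round((t0 + K_DM * 300 * (f**-2 - freqs.max()**-2)) / dt))
    spec[j, k] += 10.0
dmt = dedisperse(spec, freqs, dt, np.arange(0, 600, 5.0))
print(np.unravel_index(dmt.argmax(), dmt.shape))  # peaks near (60, 1000), i.e. DM 300
```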
Pulsars in tight binary systems are among the most important astrophysical objects, as they provide our best tests of general relativity in the strong-field regime.
I will provide a preview of a novel algorithm that enables the detection of pulsars in short binary systems using observation times longer than an orbital period.
Current pulsar search programs limit their integration times to less than a few percent of the orbital period.
Until now, searching for pulsars in binary systems using observation times longer than an orbital period was considered impossible, as one has to blindly enumerate all options for the Keplerian parameters, the pulsar rotation period, and the unknown DM.
Using current state-of-the-art pulsar search techniques and all the computers on Earth, such an enumeration would take longer than a Hubble time. I will demonstrate that, using the new algorithm (called Pruning), it is possible to conduct such an enumeration on a laptop using real data on the double pulsar, PSR J0737-3039.
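To make the scale of the brute-force enumeration concrete, a back-of-envelope sketch follows; every grid size and hardware figure in it is an invented illustration, not a number from the talk, but the multiplicative structure is the point.

```python
# Back-of-envelope template count for a blind binary-pulsar search.
# All grid sizes below are purely illustrative assumptions: the search cost
# is the product of the trial counts over every unknown parameter.
grids = {
    "spin period":           1e8,
    "DM":                    1e3,
    "orbital period":        1e5,
    "projected semi-axis":   1e5,
    "eccentricity":          1e3,
    "periastron angle":      1e3,
    "periastron time":       1e4,
}
n_templates = 1.0
for n in grids.values():
    n_templates *= n

ops_per_template = 1e10   # assumed cost to fold/score one template
flops_available = 1e21    # generous assumed estimate for "all computers on Earth"
seconds = n_templates * ops_per_template / flops_available
print(f"{n_templates:.1e} templates, ~{seconds / 3.15e7:.1e} years of compute")
# With these assumed grids: ~1e31 templates and ~3e12 years, i.e. hundreds of
# Hubble times (~1.4e10 years), which is why a pruned tree search rather than
# brute-force enumeration is needed.
```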
Among the other (astronomical) applications of the Pruning technique are:
1) Searching for all pulsars on all sky positions in gamma ray observations of the Fermi LAT satellite.
2) Blind searching for continuous gravitational waves emitted by pulsars with a non-axisymmetric matter distribution.
3) Blind searching for planets in the outskirts of the solar system (a.k.a. “Planet 9”), both in imaging data and in Gaia data (through astrometric deflection of background stars).
4) Blind searching for asteroids and Kuiper belt objects in imaging data.
5) Searching for stars in close orbits around the super massive black hole in the galactic center.
The Cosmic Microwave Background (CMB) temperature and polarization anisotropy measurements from the Planck mission have significantly improved previous constraints on the neutrino masses, as well as the bounds on extended models with massive sterile neutrino states or extra particles such as thermal axions. In this talk I will first show the recent cosmological constraints on the thermal axion mass and the neutrino sector, considering several combinations of datasets and scenarios. In particular, I will show how the inclusion of additional low-redshift priors is mandatory in order to sharpen the CMB neutrino bounds, and that we are close to testing the neutrino mass hierarchy with existing cosmological probes. Secondly, I will discuss how these constraints change if the primordial power spectrum is allowed a more general shape than the usual power-law description. Finally, I will present cosmological constraints in a significantly extended scenario, varying up to 12 cosmological parameters simultaneously, looking for a new concordance model that would simultaneously resolve the current tensions between the Planck data, the new direct measurements of the Hubble constant by Riess et al. (2016), and the parameters from weak lensing surveys such as CFHTLenS and KiDS-450.
Galaxy redshift surveys deliver increasingly precise tests of gravity on cosmological scales and shed light on the uncertain nature of Dark Energy. I will present the VIPERS (http://vipers.inaf.it) census of the galaxy distribution at redshift 0.8 and describe its consistency with the expansion history and rate of gravitational collapse predicted by General Relativity and a Planck (2015) cosmology. This is facilitated by the anisotropy of the observed clustering, which is sensitive to both the coherent infall of galaxies towards clusters and the assumption of an expansion history differing from the true one.
I will then present the results of including a simple density transform prior to this conventional analysis, which suppresses the most massive structures and extends the validity of the simplest models. Moreover, this has been shown to amplify signatures of modified gravity in ‘shielded’ theories and contains information beyond that available to the power spectrum. To do so requires correcting for many systematics that are characteristic of high-redshift surveys. I will describe the properties common to VIPERS, eBOSS and DESI and the potential of a density-weighted analysis with these next-generation surveys.
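To illustrate the kind of local density transform described, a toy sketch follows; the lognormal field, the ln(1 + δ) mapping, and the grid parameters are all assumptions, since the talk does not specify its exact transform.

```python
import numpy as np

# Toy sketch of a local density transform: map the overdensity field
# delta -> ln(1 + delta) before measuring the power spectrum, which
# down-weights the most massive (highest-density) structures. The lognormal
# toy field, grid size, and box size are illustrative assumptions.
rng = np.random.default_rng(42)
n, box = 64, 500.0                            # cells per side, Mpc/h (toy)
g = rng.normal(0.0, 1.0, (n, n, n))
delta = np.exp(0.6 * g - 0.18) - 1.0          # lognormal toy field, mean ~0

def power_spectrum(field, box, nbins=15):
    """Isotropically binned P(k) of a cubic field (arbitrary normalization)."""
    n = field.shape[0]
    p3 = np.abs(np.fft.rfftn(field)) ** 2
    kx = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
    kz = 2 * np.pi * np.fft.rfftfreq(n, d=box / n)
    KX, KY, KZ = np.meshgrid(kx, kx, kz, indexing="ij")
    k = np.sqrt(KX**2 + KY**2 + KZ**2).ravel()
    edges = np.linspace(k[k > 0].min(), k.max() / 2, nbins + 1)
    idx = np.digitize(k, edges)
    pk = [p3.ravel()[idx == i].mean() for i in range(1, nbins + 1)]
    return 0.5 * (edges[1:] + edges[:-1]), np.array(pk)

transformed = np.log(1.0 + delta)             # the density transform itself
k, p_raw = power_spectrum(delta - delta.mean(), box)
k, p_log = power_spectrum(transformed - transformed.mean(), box)
print("high-density tail suppressed:", delta.max(), "->", transformed.max())
```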
Finally, tests of gravity have predominantly focused on the large-scale velocities of galaxies to date, but that of clusters is imprinted on the Cosmic Microwave Background by the kinetic Sunyaev-Zel’dovich effect. The Simons Observatory and CMB-S4 experiments represent ideal test-beds for exploring the latter. I will discuss this and other future avenues for revealing the properties of Dark Energy with large-scale structure.
The ATLAS experiment at the Large Hadron Collider (LHC) searches for experimental evidence of new physics beyond the Standard Model at the TeV scale. As we collect more data at the LHC we continue to extend our sensitivity to these new phenomena, in particular probing increasingly massive new particles. Despite this progress there are still regions of parameter space where constraints remain weak. One common cause of this lack of sensitivity is a very small mass splitting between the new particle and its decay products. The particle then has little energy left over to give momenta to its decay products, and the low-momentum decay products are difficult to detect experimentally. These regions of small mass splitting are called compressed regions. We are able to gain sensitivity to these difficult regions by searching for new particles produced in conjunction with strong initial state radiation (ISR), which boosts the new particle's decay products and gives them momentum.
In this seminar, I will cover in detail the search for the supersymmetric partner of the top quark (the stop) in the region where the stop and its decay products are nearly degenerate in mass. No search prior to 2016 was sensitive to this region. We were able to exclude stops up to a mass of 425 GeV in this region with the 2015 and summer 2016 ATLAS dataset. I will demonstrate a new and more accurate technique for identifying whole initial-state-radiation systems instead of a single ISR jet. As the LHC provides more data and traditional search methods rule out parameter space at higher masses, it becomes increasingly important that we also gain sensitivity to these compressed regions, which are still unconstrained at low masses. I will show that this ISR identification technique is completely generalizable and useful for many other searches that target small mass splittings.
The precise measurements of the W-boson, Higgs-boson and top-quark masses enable a test of the consistency of the Standard Model. Constraints on physics beyond the Standard Model are currently limited by the precision of the W-boson mass measurement. In this talk a measurement of the W-boson mass is presented, using data collected in 2011 at a centre-of-mass energy of 7 TeV with the ATLAS detector, corresponding to an integrated luminosity of 4.6 fb⁻¹. The measurement is based on about 8 and 6 million W candidates in the muon and electron channels, respectively. The W-boson mass is extracted from template fits to the transverse momentum of the charged lepton and to the transverse mass of the W boson. This measurement yields a W-boson mass of:
mW = 80370 ± 7 (stat.) ± 11 (exp. syst.) ± 14 (mod. syst.) MeV = 80370 ± 19 MeV,
where the first uncertainty is statistical, the second corresponds to the experimental systematic uncertainty, and the third to the physics-modelling systematic uncertainty.
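As a hedged illustration of the template-fit method (not the ATLAS analysis itself), the sketch below scans mass-hypothesis templates against a toy dataset by a chi-squared comparison, and first verifies the quadrature combination of the quoted uncertainties; the Gaussian stand-in for the true transverse-mass shape and all numbers other than the quoted uncertainties are assumptions.

```python
import numpy as np

# Check the quadrature combination of the quoted uncertainties (MeV):
print(round(np.sqrt(7**2 + 11**2 + 14**2)))   # -> 19, the quoted total

# Toy chi-squared template fit. A Gaussian stands in for the true W
# transverse-mass shape; masses in GeV. Everything here is illustrative.
rng = np.random.default_rng(0)
true_m, width = 80.370, 8.0
edges = np.linspace(50, 110, 61)
data, _ = np.histogram(rng.normal(true_m, width, 100_000), bins=edges)

def chi2(m_hyp):
    t, _ = np.histogram(rng.normal(m_hyp, width, 1_000_000), bins=edges)
    t = t * data.sum() / t.sum()              # normalize template to data
    return np.sum((data - t) ** 2 / np.maximum(t, 1.0))

scan = np.linspace(80.2, 80.5, 31)
best = scan[np.argmin([chi2(m) for m in scan])]
print(f"best-fit toy mass: {best:.3f} GeV")   # recovers ~80.37
```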
The BICEP/Keck Array cosmic microwave background (CMB) polarization experiments located at the South Pole are a series of small-aperture refracting telescopes focused on the degree-scale B-mode signature of inflationary gravitational waves. I will present our latest results which have produced the most stringent constraints on the tensor-to-scalar ratio to date: sigma(r) = 0.024 and r < 0.09 from B-modes alone (r < 0.07 in combination with other datasets). These constraints will rapidly improve with upcoming measurements at the multiple frequencies needed to separate Galactic foregrounds from the CMB, and in combination with higher-resolution experiments to remove B-modes induced by gravitational lensing. I will provide an update on our expanded frequency coverage and plans for future receivers.
Next-generation CMB experiments with hundreds of thousands of detectors will require exquisite control of instrumental systematics. I will review key aspects of the BICEP/Keck instrument design which maximize polarization sensitivity and reduce systematics at large angular scales, including the ability to measure beams in the far field with high precision. Finally, I will discuss the prospects for dealing with temperature-to-polarization leakage in future experiments, and how the beams systematics levels we achieve with current instrument and analysis technology will scale with detector count.
I will discuss the new (old) idea that dark matter may reside in a hidden sector whose interactions with the standard model are mediated by a dark photon. Possibilities for direct detection of such particles will be discussed. This is particularly timely given the recent DOE call for a small scale direct detection experiment that can search new parameter space.
20170209_RPM slides
"Stage-II" CMB experiments, which started to deploy in the early 2010s, contain ~1,000 millimeter-wave, polarization-sensitive detectors; they have discovered B-mode polarization due to weak gravitational lensing and set limits on B-modes due to inflationary gravitational waves. "Stage-III" experiments have begun to deploy this year and contain ~10,000 detectors, for an order-of-magnitude improvement in sensitivity.
Looking to the future, the CMB community has begun studying "CMB-S4", a "Stage-IV" experiment that will contain ~500,000 detectors, a factor of ~100 increase over experiments currently in the field. The goal of CMB-S4 is to make a definitive measurement of CMB polarization from the ground in order to explore inflationary scenarios, constrain the sum of the neutrino masses, and search for new physics within the early universe.
During this presentation, I will discuss the exciting science objectives of modern CMB experiments, the tremendous technological challenge of fielding large numbers of highly-sensitive detector arrays, and the advancement in technologies we are developing to overcome these challenges to conduct the ultimate CMB polarization measurement.
Precision measurements of electroweak observables offer a viable option for finding indications of new physics and also guidance for the next big discovery. In this talk, I will focus on the latest electroweak precision measurements at the LHC, with an emphasis on the W-boson mass measurement recently published by ATLAS. The evaluation of the experimental systematic uncertainties of the ATLAS W-boson mass measurement, as well as the uncertainties due to the modeling of vector-boson production and decay, will be discussed. In addition, the LHC results will be put into the context of previous measurements at the Tevatron and LEP colliders.
The low noise, high spatial resolution and reliable performance of charge-coupled devices (CCDs) and complementary metal-oxide-semiconductor (CMOS) active-pixel sensors have made them detectors of choice for digital imaging. Although the slow time response of these devices has limited their application in high-energy particle physics, for the case of rare-event searches, where the particle interaction rate is extremely low, their properties can be fully exploited to build detectors that outperform in many aspects the traditional technologies of the field. I will present recent results from the DAMIC experiment, a low-mass dark matter search consisting of low-noise CCDs deployed in the SNOLAB laboratory. I will show how the exquisite spatial resolution of the detector allows for particle identification, and provides the unique capability to reject sequences of radioactive decay with utmost efficiency. These techniques can be extended to the search for neutrinoless double beta decay. I will present a recent proposal where we argue that a large array of amorphous Se-82 imagers based on CMOS technology could achieve the background levels necessary to test if neutrinos are Majorana fermions even in the case of a normal hierarchy of neutrino masses.
The discovery of neutrino oscillation, which implies neutrinos have non-zero masses, is the first instance of a conflict with the Standard Model of particle physics. The fruitful results from neutrino experiments in the past two decades have opened a window into a new territory, where the unanswered questions in the current theory, such as the observed matter-antimatter asymmetry, may be addressed by the upcoming precision measurements.
In this talk, I will introduce the core topics of neutrino physics and the requirements of neutrino experiments, focusing on the technology of liquid argon time projection chamber (LArTPC). The outstanding spatial and energy resolution of LArTPC provides us with a promising apparatus for the required precision. In particular, I will discuss detection of neutrinos from supernova explosions, and searches for other weakly interacting particles as well as rare physics processes. I will talk about the MicroBooNE experiment, the first large LArTPC in the U.S., its recent results, and the future LArTPC experiments.
Although the Standard Model (SM) of particle physics is a very successful theory, it fails to explain the origin of particles' mass. If the Higgs mechanism, developed in the 1960s as a solution to this puzzle, is the correct theory of Nature, a new fundamental particle, the Higgs boson, must exist. In 2012, at the LHC, a particle consistent with the Higgs boson was discovered. The theory prescribes the strength of the Higgs boson's interactions with SM particles, but physics beyond the Standard Model could modify it. In this talk I will present the evidence for direct couplings of the Higgs boson to fermions and compare them with the predictions. I will also discuss the ATLAS+CMS combined measurement of the Higgs boson production and decay rates, and how it constrains the Higgs boson's couplings to SM particles and the existence of new physics.
We report a correlation between the radial acceleration traced by rotation curves and that predicted by the observed distribution of baryons. The same relation is followed by 2693 points in 153 galaxies with very different morphologies, masses, sizes, and gas fractions. The correlation persists even when dark matter dominates. Consequently, the dark matter contribution is fully specified by that of the baryons. The observed scatter is small and largely dominated by observational uncertainties. This radial acceleration relation is tantamount to a natural law for rotating galaxies.
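For reference, the fitting function reported for this relation (McGaugh, Lelli & Schombert 2016) can be written down directly; the sketch below implements it with its single fitted acceleration scale.

```python
import numpy as np

# The radial acceleration relation fitting function of McGaugh, Lelli &
# Schombert (2016): a one-parameter mapping from the baryonic acceleration
# g_bar to the observed acceleration g_obs.
G_DAGGER = 1.2e-10  # m s^-2, the single fitted acceleration scale

def g_obs(g_bar):
    return g_bar / (1.0 - np.exp(-np.sqrt(g_bar / G_DAGGER)))

g_bar = np.logspace(-12, -8, 5)   # m s^-2
print(g_obs(g_bar) / g_bar)
# At high accelerations g_obs -> g_bar (baryons suffice); at low accelerations
# g_obs -> sqrt(g_bar * G_DAGGER), i.e. the dark matter contribution follows
# directly from that of the baryons, as the abstract states.
```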
For over a decade, the BICEP collaboration has been deploying small-aperture telescopes to the South Pole Station to map B-mode polarization in the Cosmic Microwave Background and search for evidence of primordial gravitational waves. If detected, these would allow physicists to constrain physics at energy scales of ~10^16 GeV. To date, this program places the tightest constraints on the tensor-to-scalar ratio, r < 0.07 (95% confidence). We are at an important point in our program as we transition into "Stage-3" instruments with the BICEP Array, which will search to a level of r > 0.01. This rapid progress is fueled by the antenna-coupled superconducting bolometers that we have developed at NASA's Jet Propulsion Laboratory. In my talk, I'll summarize the science case for these experiments, describe the instruments themselves with a focus on the detectors, discuss the recently collected data, and comment on the future of our program. In particular, thermal KIDs, an emerging bolometer technology, will be critical for this next phase. Finally, I'll comment on how LBL and DOE can and will play a future role in this exciting frontier of fundamental physics.
High-resolution imaging spectroscopy in the soft x-ray waveband (0.1-10 keV) is an essential tool for probing the physics of the x-ray universe. Unique line diagnostics available in this waveband allow transformative scientific observations of a wide array of sources. For example, measurements of turbulence in the intra-cluster medium of galaxy clusters can be used to calibrate hydrodynamic simulations used in cosmology; and measurements of outflow processes from supermassive black holes may identify the key mechanism that regulates the co-evolution of host galaxies and their central black holes. I will introduce the microcalorimeter, a low-temperature detector capable of x-ray photon counting with high spectral resolution, and discuss observations of the Perseus Cluster made using our microcalorimeter instrument that launched aboard the Japanese-led Hitomi (Astro-H) mission in 2016. I will discuss our recent advances using transition-edge-sensor (TES) microcalorimeters and identify areas in detector, readout, and instrument development that are needed for next-generation instrumentation for space- and laboratory-based experiments. Techniques and challenges will be compared to those of envisaged cryogenic CMB and direct dark matter detection experiments.
The search for physics beyond the Standard Model (SM) has been one of the most important goals of the physics program at CERN's Large Hadron Collider (LHC). Among all the final states, the multijet final state has long been considered a challenging one for the search for physics beyond the SM due to its large background. However, exciting new physics phenomena, such as the production of black holes as well as of massive supersymmetric (SUSY) particles, may well result in signals in the multijet final state. I present searches for physics beyond the SM using multijet events from 13 TeV collision data taken in 2015 and the first half of 2016 by the ATLAS experiment at the LHC. I focus on a search for black hole production and a search for massive supersymmetric particles decaying to many jets via R-parity-violating (RPV) couplings. The two examples represent searches targeting physics beyond the SM at different mass scales, and therefore different analysis strategies are employed. These searches have greatly improved the sensitivity of the LHC to black hole production and RPV SUSY scenarios, and they are complementary to searches using events with leptons, photons and missing transverse energy.
The dark matter problem, known since the 1930s, has only grown in importance during the current era of precision cosmology. We remain unable to answer the question: what is the matter that makes up 5/6 of the universe's matter density? Yet we are also in an era of unparalleled theoretical creativity and experimental opportunity. Theorists have vastly expanded the parameter space for weakly interacting massive particle dark matter in the last decade. New experimental constraints and candidates have emerged from the LHC, other accelerator experiments, and direct and indirect detection dark matter searches. I will summarize the current state of experimental searches for particle dark matter and focus on the upcoming search at low mass with the SuperCDMS SNOLAB experiment.
Dark matter makes up 85% of the matter in our Universe, but we have yet to learn its identity. A broad array of search strategies are needed to probe for non-gravitational interactions between dark matter and ordinary matter. While most searches focus on Weakly Interacting Massive Particles (WIMPs) with masses between 1 GeV and 1 TeV, it is imperative to also consider other motivated dark matter candidates. In this talk, I will discuss dark matter with MeV-to-GeV masses, which is a theoretically and phenomenologically appealing possibility and presents a new frontier in the search for dark matter. I will highlight novel dark matter direct-detection strategies that can probe this under-explored mass range. I will review how XENON10 data already probes dark matter with masses as low as a few MeV, and present recent constraints using XENON100 data. I will then discuss improvements expected from near-future experiments, focusing on xenon, semiconductor, and scintillator targets. This includes, for example, SENSEI, which is a new ultra-low-threshold silicon CCD detector that is poised to probe vast new regions of parameter space in the next few years. I will also present a few simple benchmark models of MeV-to-GeV dark matter, and contrast direct-detection probes with searches at colliders and fixed-target experiments.
Experiments at the Large Hadron Collider are characterizing the top quark and its interactions to unexpected precision. The combination of direct searches for new particles and indirect constraints from precision measurements provides powerful limits on extensions of the Standard Model. I start the seminar with a brief overview of top physics today and of the prospects for precision top physics of future LHC runs. This sets a high bar for future colliders that are to continue the programme initiated by Tevatron and LHC. In the second half of the seminar I discuss the potential of the future facilities envisaged in Asia and Europe. Powerful electron-positron machines have exquisite potential for key measurements such as the top quark mass and electro-weak couplings. Hadron colliders with a center-of-mass energy that significantly exceeds that of the LHC can probe the QCD interactions of the top quark an order of magnitude deeper and can provide a precise measurement of the top Yukawa coupling.
One of the most enticing and popular additions to the Standard Model of particle physics, Supersymmetry, is the target of a wide range of searches at the Large Hadron Collider at CERN. The ATLAS and CMS experiments have produced an incredible variety of searches, but Supersymmetry remains elusive. We will discuss the results of some of the recent searches that demonstrate the dexterity of both experiments, and point out which assumptions might be playing an important role in the interpretation of those results. There are still many ways Supersymmetry could have escaped detection up to now, and we will also point towards some of the next directions for searches for Supersymmetry at the LHC. Along the way, we will discuss some of the software and computing challenges that come with the world-wide analysis of tens of billions of events.
Despite the lack of a sparticle discovery at the Large Hadron Collider, and despite the fact that most of the discovery potential has shriveled to an upsetting size, supersymmetry (SUSY) remains the most motivated solution to the Standard Model’s inadequacies. Because of this (and a large dose of stubbornness), we are forced to consider ways the nominal search strategy may have missed a discovery. Must SUSY present itself with lots of high-momentum objects and large missing transverse energy just because it’d be easy to find? This talk will discuss recent SUSY searches at ATLAS targeting signatures without large missing transverse energy as well as those sensitive to long-lived sparticles.
ABSTRACT:
The LBNL xTalk ("cross-talk") series provides a forum for the exchange of ideas between groups and across lab divisions. Each seminar is focused on a theme which is confronted from two perspectives. In particular, the talks are given by two presenters from different domains. The format is informal (chalk talk!) and is designed to be entertaining and engaging. Come prepared to learn and ask about details with the experts in order to stimulate collaborations between groups. We hope that this will strengthen the program at the lab and spread new ideas.
For the first xTalk, we will have Mateusz Ploskon from Nuclear Science and Benjamin Nachman from the Physics Division to discuss how jet substructure is used in ALICE/STAR and in ATLAS. Jets are collimated sprays of particles resulting from quarks and gluons produced at high energy. The radiation pattern within jets encodes a wealth of information about the parton that initiated it. Jet substructure is a hot topic in pp for tagging boosted W/Z/H bosons and also recently for precision probes of Quantum Chromodynamics (QCD); in heavy ion collisions, jet substructure is an exciting new probe of medium modifications. Please join us for this comparative discussion of how jet substructure is used in two very different environments!
Cosmological observations have provided us with answers to age-old questions, involving the age, geometry, and composition of the universe. However, there are profound questions that still remain unanswered. I will describe ongoing efforts to shed light on some of these questions.
In the first part of this talk, I will explain how we can use measurements of the CMB and the large-scale structure of the universe to reconstruct the detailed physics of much earlier epochs, when the universe was only a tiny fraction of a second old. In particular, I will show how we can probe the shape of the inflationary potential, extra degrees of freedom during inflation, and the signature of possible particles with mass and spin during this period.
In the last part of the talk, I will discuss how we can use observations at large scales and sub-galactic scales (through strong gravitational lensing) to improve our understanding of another open question in fundamental physics: the particle nature of dark matter.
I will discuss analyses leading to two recent candidate detections of photons from dark matter. Specifically, these are: first, gamma rays in a continuum “bump” at a few GeV which can be due to WIMP-like dark matter annihilation in the Galactic Center; and, second, X-rays from clusters of galaxies and Andromeda consistent with monoenergetic 3.55 keV photons from dark matter decay such as that predicted from sterile neutrino dark matter. Commensurately, there are also stringent constraints on these signals. I will discuss the particle and cosmological model implications of both.
Elucidating the nature of dark matter is one of the central challenges in fundamental physics. Dark matter originating as a thermal relic from the early Universe is arguably one of the most compelling paradigms, and WIMP searches have been the main focus of past experimental efforts. A less extensively explored possibility, light (sub-GeV) thermal dark matter, could arise naturally if it is part of a dark sector neutral under all Standard Model forces. The "Light Dark Matter eXperiment" (LDMX) proposes to explore light thermal dark matter using an electron fixed-target missing-momentum approach with a low-current, high-repetition beam. The expected sensitivity would surpass by orders of magnitude the reach of any previous or currently envisioned experiment, and decisively test many sub-GeV thermal dark matter scenarios.
Strong HI absorbers are essentially the largest foreground contamination for Lyman alpha forest surveys, so a better understanding of them is necessary for achieving the goals of future Lyman alpha cosmology surveys. I will talk about a new automated technique for generating a probabilistic catalogue of strong absorbers for an entire survey, allowing more robust cleaning of the foreground. Since no technique can entirely remove strong absorbers, I will discuss new templates for characterising their effect on the flux power spectrum. A secondary systematic is induced by interpolation error between theoretical models, and I will discuss techniques to mitigate this error by refining Gaussian process emulators. Lastly I will discuss the interesting possibility that the surprisingly common mergers of ~30 solar mass black holes observed by LIGO could be primordial black hole dark matter, which is intriguingly (still) not convincingly ruled out.
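As a hedged illustration of such emulator refinement (not the speaker's actual pipeline), the sketch below trains a Gaussian-process emulator on a toy one-dimensional "simulator" standing in for an expensive flux-power-spectrum calculation, then adds training points where the emulator's own predictive uncertainty is largest; the simulator, kernel, and grid are all assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(theta):
    """Placeholder for an expensive model evaluation (illustrative only)."""
    return np.sin(3 * theta) + 0.5 * theta

rng = np.random.default_rng(0)
X = rng.uniform(0, 2, size=(5, 1))           # initial sparse training design
y = simulator(X[:, 0])
grid = np.linspace(0, 2, 200).reshape(-1, 1)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
for _ in range(10):                          # refinement loop
    gp.fit(X, y)
    mu, std = gp.predict(grid, return_std=True)
    x_new = grid[np.argmax(std)]             # worst-emulated parameter value
    X = np.vstack([X, [x_new]])
    y = np.append(y, simulator(x_new[0]))

print(f"max emulator std after refinement: {std.max():.3g}")
```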
The most common machine learning algorithms operate on finite-dimensional vectorial feature representations. In many applications, however, the natural representation of the data consists of more complex objects, for example functions, distributions, and sets, rather than finite-dimensional vectors. In this talk we will discuss machine learning algorithms that can operate directly on these complex objects. For this purpose, we use nonparametric statistical methods that can consistently estimate the inner product, distance, and certain kernel functions between distributions, sets, and other objects. We will discuss applications in various scientific areas including cosmology (e.g. estimating the mass of galaxy clusters, finding anomalous galaxy clusters, estimating the cosmological parameters of our Universe, accelerating cosmological simulations), fluid dynamics (finding anomalous events in turbulence data), neuroimaging, and agriculture.
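As a hedged illustration of one such nonparametric quantity, the sketch below estimates the (biased) maximum mean discrepancy, a kernel distance between two distributions computed directly from sample sets; the RBF kernel and bandwidth are arbitrary choices, not necessarily those used by the speaker.

```python
import numpy as np

def rbf(a, b, sigma):
    """RBF kernel matrix between point sets a (n,d) and b (m,d)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of the squared maximum mean discrepancy."""
    return (rbf(x, x, sigma).mean() + rbf(y, y, sigma).mean()
            - 2 * rbf(x, y, sigma).mean())

rng = np.random.default_rng(0)
same = mmd2(rng.normal(0, 1, (500, 2)), rng.normal(0, 1, (500, 2)))
diff = mmd2(rng.normal(0, 1, (500, 2)), rng.normal(1, 1, (500, 2)))
print(f"same distribution: {same:.4f}, shifted distribution: {diff:.4f}")
# The second value is much larger: the sets are compared as distributions,
# without ever building a finite-dimensional feature vector for each set.
```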
In this talk, I'd like to discuss the intertwined importance of, and connections between, the three principles of data science in the title for data-driven decisions.
With prediction as its central task and computation as its core, machine learning has enabled wide-ranging data-driven successes. Prediction is a useful way to check with reality. Good prediction implicitly assumes stability between past and future. Stability (relative to data and model perturbations) is also a minimum requirement for interpretability and reproducibility of data-driven results (cf. Yu, 2013), and it is closely related to uncertainty assessment. Obviously, neither the prediction nor the stability principle can be employed without feasible computational algorithms, hence the importance of computability.
The three principles will be demonstrated in the context of two neuroscience projects and through analytical connections. In particular, the first project adds stability to the predictive modeling used for reconstruction of movies from fMRI brain signals, to gain interpretability of the predictive model. The second project uses predictive transfer learning that combines AlexNet, GoogLeNet and VGG with single V4 neuron data for state-of-the-art prediction performance. It provides a stable functional characterization of neurons via (manifold) deep dream images from the predictive models in the difficult primate visual cortex V4. Our V4 results lend support, to a certain extent, to the resemblance of these CNNs to a primate brain.
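As a hedged illustration of the stability principle (not the fMRI project itself), the sketch below refits a sparse predictive model on bootstrap perturbations of synthetic data and retains only features selected consistently across fits; the data, model, and threshold are all assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Stability check: refit a sparse model on bootstrap resamples and keep
# only features selected consistently. Data here are synthetic.
rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=n)  # 2 true features

counts = np.zeros(p)
B = 100
for _ in range(B):
    idx = rng.integers(0, n, n)                 # bootstrap resample
    model = Lasso(alpha=0.1).fit(X[idx], y[idx])
    counts += model.coef_ != 0

stable = np.where(counts / B > 0.9)[0]          # selected in >90% of fits
print("stably selected features:", stable)      # expect [0, 3]
```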
The discovery of the Higgs boson has completed the set of particles predicted by the Standard Model (SM). It has been established by the CMS and ATLAS Collaborations at the Large Hadron Collider that the discovered boson is consistent with J^PC=0^++. The width of this boson is also consistent with the predicted value from the SM, but the constraints using events at the resonance peak are orders of magnitude looser than the prediction. In this talk, we will be exploring techniques developed to probe small anomalous couplings of the Higgs boson. Emphasis will be given to the recent studies of Higgs-diboson (HVV) couplings from the CMS Collaboration, where information from associated production becomes even more important than the kinematics of Higgs decay products. We will also look at joint mass-width measurements using either events at the resonance peak or at the off-shell tail of Higgs boson production. We will see that even small anomalous couplings show enhancement at the off-shell tail and discuss briefly how joint constraints can be studied.
As the universe cooled immediately following the Big Bang, the laws of physics underwent a dramatic phase transition. Underlying symmetries were broken and particles acquired mass as the Higgs field moved to a new ground state.
In this talk I will discuss why we should care about the precise nature of the electroweak phase transition (EWPT) and how the potential which generated the EWPT may be measured at the LHC.
Bio: Patrick received his B.S. in physics and mathematics at the University of Oregon in 2013 and has since been working towards his Ph.D. in experimental particle physics at the University of Chicago as a member of the ATLAS collaboration. He recently returned to Chicago after a year and a half at CERN in Geneva, Switzerland to write his thesis on hardware based track reconstruction and the search for Higgs pair production in the four b-jet final state.
During Run 1 of the LHC, the ATLAS and CMS collaborations firmly established the existence of a Higgs boson, but detailed measurements were limited by statistical precision. With the larger Run 2 dataset, we have measured the couplings and production cross-section of the Higgs boson using the H->ZZ->4l decay channel. The results improve upon the previous ones by more than a factor of 2. For the first time, we also measure the differential cross-section within production modes and use it to place constraints on Beyond Standard Model scenarios.
ATLAS has an extensive detector upgrade plan to allow it to collect and exploit the data delivered during the HL-LHC. The upgrade of the current inner tracker with the all-silicon ITk is the largest project. At the University of Toronto, we have led the Canadian effort to establish ITk strip module assembly. Collaborating with Celestica, we have also established ASIC gluing and wire-bonding processes in industry. Additionally, we are probing the impact of radiation damage to estimate the end-of-life performance of the ITk.
Bio:
Having done my undergraduate training in Engineering Science, I have been interested in bringing this perspective to physics research. As such I have been involved in many research projects, ranging from condensed matter to accelerator development.
Since 2013, I have focused on measurements of Higgs boson properties using the ATLAS detector and on ITk detector development. I have led and been involved in numerous analyses using the H->ZZ->4l channel and analyses combining results from various other decay channels. Additionally, I have led the development and investigation of ITk modules at the University of Toronto. In the future, I intend to continue pursuing both physics analyses and detector hardware.
Dark matter represents one of the most sought after discoveries in physics. Leading theories predict that extremely sensitive detectors could probe nuclear recoils from dark matter interactions. The PICO collaboration uses bubble chambers to look for the energy deposition from such an interaction. The biggest challenge in searching for dark matter is the mitigation and understanding of the numerous other sources of events that could look like dark matter, namely radioactive backgrounds. I will discuss how a long effort to understand backgrounds in the PICO bubble chambers led to the recent background-free result of PICO-60 and how discoveries made along the way might affect other rare event searches.
Bio:
Daniel Baxter is a graduating 5th year PhD student at Northwestern University and active member of the international PICO collaboration searching for dark matter using bubble chamber detectors. His research has largely focused on dark matter detector calibration, specifically understanding differences in detector response between nuclear and electron recoils. He has applied this expertise to the PICO-60 detector as run coordinator of its first physics run, which became the first background-free run of a bubble chamber at the 40L scale.
Despite the huge success of the Standard Model (SM), there is overwhelming evidence for physics beyond the SM and many remaining questions, such as the nature of dark matter (DM). The current leading candidates for DM are Weakly Interacting Massive Particles (WIMPs), which could be produced directly at colliders.
In this talk I will discuss complementary methods for exploring new phenomena at the LHC with the ATLAS experiment: firstly, through precision measurements of the Higgs boson properties, and secondly through the production of missing transverse energy in association with pairs of top quarks.
Finally, I will discuss the High Luminosity LHC (HL-LHC) project and the ATLAS inner tracker (ITk) upgrade, focusing on characterising the end-of-lifetime performance and validating material-budget estimates.
Abstract
During Long Shutdown 3 (2024/2025) of the Large Hadron Collider, the ATLAS Inner Detector will be replaced with a new all-silicon tracker, composed of a pixel and a strip tracker. The strip tracker will consist of 18,000 detector modules, each comprising a silicon strip sensor with readout electronics glued onto its surface.
Extensive quality assurance and quality control programmes have been established for module components as well as combined structures in order to ensure high reliability and efficiency. Among different methods to test components and modules, measurements in particle beams (testbeams) provide operating conditions similar to the LHC and thus are critical for understanding the detector performance.
This talk presents measurements performed using an electron beam at the DESY-II testbeam facility and X-ray photons at the Diamond Light Source. The results show the influence of different sensor architectures on the module performance, how testbeams can improve the understanding of material distribution in the detector and how these measurements can benefit the future ATLAS detector.
BIO:
I have studied at the Humboldt-Universität zu Berlin, where I worked on ATLAS analysis (b-tagging efficiencies) for my bachelor thesis. I joined the ITk strip tracker project in 2012, when I started to work at DESY for my master thesis and later for my PhD. Since 2015, I have participated in all ITk testbeam activities at DESY and organised nine testbeams myself.