Chaos Emerges with Exceptional Points in Reset-Driven Floquet Dynamics Authors Jia-jin Feng, Quntao Zhuang Published: 05.12.2026 Updated: 05.12.2026 Summary We investigate the spectral structure of reset-driven Floquet quantum channels generated by the Hamiltonian evolution of a many-body system followed by periodic resetting of a bath. By tuning a chaos-controlling parameter in the underlying Hamiltonian, we uncover an exceptional-point-induced spectral transition from a symmetry-constrained ergodic regime to a fully chaotic regime. Across this transition, increasing the chaos parameter causes the real eigenvalues of the channel to drift, coalesce at exceptional points, and bifurcate into complex-conjugate pairs, signaling the progressive breaking of symmetry constraints in operator space. We further show that the channel spectrum sharply distinguishes chaotic, ergodic, many-body localized, and scarred dynamical regimes. Finally, we connect the leading channel eigenvalues to experimentally accessible probes based on quantum mutual information, establishing a link between the spectral organization of reset-driven quantum channels and observable relaxation dynamics. Source arXiv: 2605.11751v1
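The eigenvalue behavior described above (real eigenvalues drifting, coalescing at an exceptional point, then bifurcating into a complex-conjugate pair) can be illustrated with a minimal toy matrix; this is an illustrative sketch of the generic exceptional-point mechanism, not the paper's many-body channel, and the parametrization is ours.

```python
import numpy as np

# Toy 2x2 non-normal block with eigenvalues +/- sqrt(1 - g^2):
# real for g < 1, coalescing at the exceptional point g = 1,
# and a complex-conjugate pair for g > 1.
def eigvals(g):
    m = np.array([[0.0, 1.0],
                  [1.0 - g**2, 0.0]])
    return np.linalg.eigvals(m)

for g in (0.5, 1.0, 1.5):
    print(g, eigvals(g))
```

At g = 1 the matrix becomes a Jordan block, the hallmark of an exceptional point: eigenvalues and eigenvectors coalesce simultaneously.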
Optical signatures of antiferromagnetic correlations in a strongly interacting quantum Hall MoSe2 monolayer Authors Jiho Sung, Pavel A. Volkov, Ilya Esterlis, Jue Wang, Luke N. Holtzmann, Takashi Taniguchi, Kenji Watanabe, Katayun Barmak, James Hone, Mikhail D. Lukin, Philip Kim, Hongkun Park Published: 05.11.2026 Updated: 05.11.2026 Summary Strong magnetic fields quench the kinetic energy of electrons, leading to the formation of flat energy bands, known as Landau levels (LLs). In this situation, even weak interactions can drive the emergence of various ordered phases. The simplest of such phases is a quantum Hall ferromagnet, where a spontaneous spin polarization emerges when LLs with opposite spins cross. The presence of strong electron-electron interaction at zero field changes this picture and makes the resulting states much harder to predict. Here we use magneto-optical spectroscopy to reveal quantum Hall states with unconventional correlations favouring an unpolarized state in the strongly correlated electron liquid in a MoSe2 monolayer. The oscillations of the exciton polaron energies as a function of perpendicular magnetic field and electron density demonstrate the emergence of LLs in a correlated electron liquid and density-dependent crossings between LLs of opposite valleys. On lowering the LL filling factor, where interactions within LLs are stronger, the crossings systematically broaden, indicating an increase in the Zeeman energy required to fully polarize the valley-degenerate LLs. These observations are shown to be consistent with antiferromagnetic interactions between LL electrons, favouring a ground state with zero valley polarization, and are therefore inconsistent with conventional quantum Hall ferromagnetism. 
This discovery demonstrates a qualitatively distinct form of quantum Hall magnetism in a strongly correlated electron liquid, establishing an anchoring point for understanding spin-unpolarized fractional and ordered states of correlated electrons driven by magnetic field. Source arXiv: 2605.11249v1
Rank Is Not Capacity: Spectral Occupancy for Latent Graph Models Authors Nikolaos Nakis, Panagiotis Promponas, Konstantinos Tsirkas, Katerina Mamali, Eftychia Makri, Leandros Tassiulas, Nicholas A. Christakis Published: 05.11.2026 Updated: 05.11.2026 Summary Graph representation learning has become a standard approach for analyzing networked data, with latent embeddings widely used for link prediction, community detection, and related tasks. Yet a basic design choice, the latent dimension, is still treated as a brittle hyperparameter, fixed before training and tuned by held-out performance. Learned factors are also identifiable only up to rotation and rescaling, so the nominal rank rarely coincides with the quantity that governs model behavior. We propose Spectral Prefix Extraction and Capacity-Targeted Representation Analysis (Spectra), which replaces rank as the unit of analysis with the spectrum of a learned positive semidefinite kernel, trace-normalized so that spectra are comparable across fits. The normalized eigenvalues form a distribution on the simplex, and their Shannon effective rank acts both as a summary of learned capacity and as a controllable training-time coordinate: a single scalar shapes this realized dimension during training, and bisection targets any desired value within the rank cap. To support this theoretically, we show local regularity and monotonicity of the realized-dimension profile. Across collaboration, social, biological, and infrastructure networks, Spectra traces performance–capacity frontiers that make the trade-off between predictive accuracy and realized dimension visible. It performs competitively with strong link-prediction baselines, yields aligned lower-capacity views of the same fitted model through spectral prefixes, and provides a principled handle on capacity in the overparameterized regime. Capacity thus becomes a property of the fitted model rather than a hyperparameter of training. Source arXiv: 2605.11142v1
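The Shannon effective rank of a trace-normalized spectrum is a standard quantity and easy to sketch; the kernel below is a made-up example (our construction, not a fitted Spectra model) showing how the realized dimension can sit well below the nominal rank cap.

```python
import numpy as np

# Shannon effective rank exp(H(p)) of the trace-normalized spectrum
# of a PSD kernel K: p lives on the simplex, and exp(entropy) counts
# the effectively occupied spectral directions.
def effective_rank(K):
    lam = np.clip(np.linalg.eigvalsh(K), 0.0, None)
    p = lam / lam.sum()          # trace-normalized spectrum on the simplex
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))

# Rank cap 10, but only ~3 directions actually carry weight:
rng = np.random.default_rng(0)
B = rng.normal(size=(10, 3))
K = B @ B.T
print(effective_rank(K))         # below 3, far below the nominal cap of 10
```

For the identity kernel the effective rank equals the full dimension, so the measure interpolates between a one-dimensional and a maximally spread spectrum.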
Passive optical superresolution at the quantum limit Authors A. I. Lvovsky, Michael R. Grace, Saikat Guha, Mankei Tsang, Gerardo Adesso, Nicolas Treps Published: 05.11.2026 Updated: 05.11.2026 Summary For more than a century, the diffraction limit has defined the resolution achievable by passive optical imaging systems. Although some resolution improvement can be gained through classical data processing of the image, it is limited by the noise arising from the quantum nature of light. Minimizing the effect of this noise requires a quantum treatment of optical imaging. By reformulating imaging as a problem of quantum measurement and estimation, it becomes possible to identify optimal detection strategies that recover spatial information previously thought inaccessible. This review summarizes the theoretical framework that underpins this development, from the formulation of quantum Cramér-Rao bounds and Chernoff bounds to the construction of receivers that attain them, such as those based on spatial-mode demultiplexing. We show how these methods can beat conventional imaging in the classification, localization, and imaging of sub-Rayleigh incoherent sources. We then discuss extensions to multiparameter and partially coherent scenarios, and highlight the unifying connections between estimation and discrimination tasks. Finally, we survey recent experimental demonstrations that approach quantum-limited resolution and outline emerging applications in microscopy, astronomy, and optical sensing. Source arXiv: 2605.10767v1
Raman suppression in nanophotonics enabled by multimode spectral filtering Authors Yunxiang Song, Jinsheng Lu, Xinrui Zhu, Danxian Liu, Zongda Li, Pawan Ratra, Norman Lippok, Miro Erkintalo, Federico Capasso, Marko Loncar Published: 05.09.2026 Updated: 05.09.2026 Summary Miniaturized photonic cavities generating nonlinear optical states of light are central to telecommunications and metrology applications. The emergence of such states is primarily underpinned by the ubiquitous Kerr nonlinearity that is present in all media. However, stimulated Raman scattering (SRS), an additional process inherent to many materials, has been shown to critically hinder the states’ formation, imposing fundamental constraints on the choice of photonic platforms. Here, we introduce a novel strategy for the suppression of SRS in nanophotonic devices, adaptable to diverse Raman spectral responses. This is achieved by controlling the coupling and loss among multiple transverse spatial modes of the system, tailored across ultrabroad spectral bandwidths. Specifically, we combine nanometrically-corrugated Bragg gratings and tapered waveguides that, together, enable co-directional multimode coupling and mode-selective filtering. We use lithium niobate as an exemplary Raman-active material to realize the concept, and we demonstrate the robust generation of two distinct Kerr nonlinear states (corresponding to coherent optical frequency combs) using the fabricated devices. The simplicity and generality of the concept suggest wide applicability to classical and quantum light generation on many technologically-relevant platforms nominally plagued by SRS (e.g., silicon and diamond photonics). More broadly, our multimode spectral shaping and filtering concept opens a path forward for highly-structured, wavelength-specific losses in nanophotonic waveguides and cavities, with potential applications in ultrafast and nonlinear integrated photonics. Source arXiv: 2605.09219v1
Covert Signaling for Communication and Sensing over the Bosonic Channels Authors Tianrui Tan, Evan J. D. Anderson, Michael S. Bullock, Boulat A. Bash Published: 05.08.2026 Updated: 05.08.2026 Summary Preventing signal detection in communication and active sensing requires careful control of transmission power. In fact, the square-root laws (SRL) for covert classical and quantum communication and sensing prescribe that the average output power per channel use scales as $1/\sqrt{n}$ for $n$ channel uses. Two strategies for achieving this are diffuse and sparse signaling. The former transmits signals with power decaying as $1/\sqrt{n}$ on all $n$ channel uses, which is convenient for mathematical analysis. The latter transmits constant-power signals rarely, on approximately $\sqrt{n}$ out of $n$ channel uses, while remaining silent on the others. This offers significant practical advantages in compatibility with modern digital transmitters. Here, we study sparse signaling over lossy thermal-noise bosonic channels, which provide the quantum-mechanical description of many practical channels (including optical, microwave, and radio-frequency). We characterize the input signal state that minimizes detectability. We find an unintuitive optimal quantum state structure: a mixture of just two consecutive photon-number states. In particular, in the low-brightness regime, the optimal signal state is a mixture of vacuum and a single photon. Since these states are generally suboptimal for both communication and active sensing, we explore the resulting trade-off and identify input-power thresholds for transitions between optimizing for covertness vs. performance in communication and sensing tasks. Source arXiv: 2605.08066v1
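The diffuse-versus-sparse bookkeeping behind the square-root law is simple enough to sketch: both schemes spend the same total energy, approximately $c\sqrt{n}$, over $n$ uses, so the average power per use scales as $c/\sqrt{n}$. The constants and the unit "on" power below are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

# Average output power per channel use for the two SRL-compliant schemes.
def average_power(n, c=1.0, sparse=False):
    if sparse:
        # constant unit power on ~c*sqrt(n) uses, silence on the rest
        active = int(c * np.sqrt(n))
        powers = np.zeros(n)
        powers[:active] = 1.0
    else:
        # power c/sqrt(n) on every one of the n uses
        powers = np.full(n, c / np.sqrt(n))
    return powers.mean()

for n in (10_000, 1_000_000):
    print(n, average_power(n), average_power(n, sparse=True))
```

Both columns agree and shrink as $1/\sqrt{n}$, which is exactly why sparse signaling can meet the covertness budget while keeping hardware-friendly constant-power pulses.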
Realistic Simulation of Quantum Repeater with Encoding and Classical Error Correction Authors Sagar Patange, Caitao Zhan, Bikun Li, Joaquin Chung, Allen Zang, Liang Jiang, Rajkumar Kettimuthu Published: 05.07.2026 Updated: 05.07.2026 Summary Quantum repeaters are essential for scalable long-distance quantum networking. As quantum information processing moves toward fault-tolerant and error-corrected operations, it becomes increasingly important to study quantum repeaters that also move beyond raw physical entanglement and towards logical entanglement. In this paper, we implement and simulate the quantum repeater with encoding and classical error correction (QRE-CEC) protocol in SeQUeNCe, a discrete-event simulator of quantum networks. The protocol distributes logical Bell pairs, performs encoded entanglement swapping, and uses classical error correction for the decoding of entanglement swapping measurement outcomes to determine Pauli-frame corrections. For this study, we extend SeQUeNCe with a stabilizer-based backend, add support for CSS code-based encoded operations, and integrate gate, measurement, idle decoherence, and state-initialization noise models. Our simulation results show that QRE-CEC suppresses all modeled errors to the second order. Also, QRE-CEC can distribute logical Bell pairs with 0.91 fidelity over a distance of 2000 km under the parameter regimes we study. Beyond protocol-level performance evaluation, our implementation exposes practical simulator and control-plane challenges that are typically abstracted away in theoretical studies. Source arXiv: 2605.06928v1
CLAD: A Clustered Label-Agnostic Federated Learning Framework for Joint Anomaly Detection and Attack Classification Authors Iason Ofeidis, Nikos Papadis, Randeep Bhatia, Leandros Tassiulas, TV Lakshman Published: 05.07.2026 Updated: 05.07.2026 Summary The rapid expansion of the Internet of Things (IoT) and Industrial IoT (IIoT) has created a massive, heterogeneous attack surface that challenges traditional network security mechanisms. While Federated Learning (FL) offers a privacy-preserving alternative to centralized Intrusion Detection Systems (IDS), standard approaches struggle to generalize across diverse device behaviors and typically fail to utilize the vast amounts of unlabeled data present in realistic edge environments. To bridge these gaps, we propose CLAD, a holistic framework that seamlessly incorporates Clustered Federated Learning (CFL) with a novel Dual-Mode Micro-Architecture ($\text{DM}^2\text{A}$). This unified approach simultaneously tackles the two primary bottlenecks of IoT security: device heterogeneity and label scarcity. The $\text{DM}^2\text{A}$ component features a shared encoder followed by two branches, enabling joint unsupervised anomaly detection and supervised attack classification; this allows the framework to harvest intelligence from both labeled and unlabeled clients. Concurrently, the clustering component dynamically groups devices with congruent traffic patterns, preventing global model divergence. By carefully combining these elements, CLAD ensures that no data is discarded and distinct operational patterns are preserved. Extensive evaluations demonstrate that this integrated approach significantly outperforms state-of-the-art baselines, achieving a 30% relative improvement in detection performance in scenarios with 80% unlabeled clients, with only half the communication cost. Source arXiv: 2605.06571v1
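The shared-encoder-plus-two-branches idea is easy to sketch in a few lines; the tiny forward pass below uses made-up shapes and random weights (our illustration, not the paper's architecture or training procedure): one branch scores anomalies via reconstruction error on unlabeled traffic, the other produces class probabilities for labeled attacks.

```python
import numpy as np

# Minimal dual-branch forward pass: shared encoder -> (decoder, classifier).
rng = np.random.default_rng(1)
d_in, d_hid, n_classes = 16, 8, 4
W_enc = rng.normal(scale=0.1, size=(d_in, d_hid))
W_dec = rng.normal(scale=0.1, size=(d_hid, d_in))
W_cls = rng.normal(scale=0.1, size=(d_hid, n_classes))

def forward(x):
    h = np.tanh(x @ W_enc)                      # shared encoder
    recon_err = np.mean((x - h @ W_dec) ** 2)   # unsupervised anomaly score
    logits = h @ W_cls                          # supervised attack head
    probs = np.exp(logits) / np.exp(logits).sum()
    return recon_err, probs

x = rng.normal(size=d_in)
err, probs = forward(x)
print(err, probs.argmax())
```

Because both heads share one encoder, unlabeled clients can still contribute gradients through the reconstruction branch, which is the mechanism that lets such a framework avoid discarding unlabeled data.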
Non-Markovian delay-assisted sensing with waveguide-coupled quantum emitters Authors Prajit Dhara, Isack Padilla, Saikat Guha, Annyun Das, Kanu Sinha Published: 05.06.2026 Updated: 05.06.2026 Summary We show that in a minimal setup of two waveguide-coupled quantum emitters, separated by long distances and subject to an external field, time-delayed feedback can be a resource for sensing field gradients. While the field gradient induces a detuning between the emitters, the large interatomic separations render the system dynamics non-Markovian. We show that the quantum Fisher information (QFI) for estimating the detuning parameter, and thereby the field gradient, is enhanced in the presence of non-Markovian delay. Such an enhancement can be attributed to the formation of atom-photon quasi-bound states that enable the field to interact with the emitters for longer times, thereby gaining more information about their relative detunings. Additionally, in the presence of delay, the interaction between the emitters is mediated via multiple spectral modes of the field, further enhancing the sensing capabilities of the system. Our results establish non-Markovian time-delayed feedback and multimode reservoirs as a resource for distributed quantum sensing with waveguide-coupled quantum emitters. Source arXiv: 2605.05434v1
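For pure states the QFI quoted above reduces to the standard formula $F = 4(\langle\partial_\theta\psi|\partial_\theta\psi\rangle - |\langle\psi|\partial_\theta\psi\rangle|^2)$, which can be checked numerically; the qubit-phase toy model below is our own illustration (not the paper's emitter setup) and has $F = 1$ exactly.

```python
import numpy as np

# QFI of a pure state via a central finite difference of |psi(theta)>.
def qfi_pure(psi_of_theta, theta, eps=1e-6):
    psi = psi_of_theta(theta)
    dpsi = (psi_of_theta(theta + eps) - psi_of_theta(theta - eps)) / (2 * eps)
    return float(4 * (np.vdot(dpsi, dpsi).real - abs(np.vdot(psi, dpsi)) ** 2))

# Toy encoding: |+> acquiring a relative phase theta, for which QFI = 1.
def plus_with_phase(theta):
    return np.array([1.0, np.exp(1j * theta)]) / np.sqrt(2)

print(qfi_pure(plus_with_phase, 0.3))
```

In the delayed-feedback setting the state is mixed and the QFI must be computed from the symmetric logarithmic derivative instead, but the pure-state formula is the baseline the enhancement is measured against.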
GHZ is All You Need: Quantum Sensing with VISTA Authors Oskar Novak, Christos N. Gagatsos, Narayanan Rengaswamy Published: 05.05.2026 Updated: 05.05.2026 Summary Quantum metrology holds the potential to enhance magnetic field sensing beyond current limits. However, in the presence of realistic noise, this advantage degrades to the Standard Quantum Limit. While recent algorithmic and variational techniques attempt to recover this scaling, they are hindered by stringent control requirements on the probe state that are infeasible in the near term, or by barren plateaus and interpretability issues inherent to black-box variational quantum circuits. Here, we introduce Variational Inference and Sensing with Twin Ansätze (VISTA), a closed-loop protocol that combines passive sensing, in which the probe state is left to evolve without any active control, with physics-informed variational optimization. In the VISTA framework, a probe state evolves under a Lindbladian master equation, and is compared, via the Swap test, to a parameterized “quantum twin”, a shallow quantum circuit designed to mimic the underlying pure-state or Lindbladian master-equation dynamics. By restricting the optimization space to the physical parameters of interest, VISTA circumvents barren plateaus. We demonstrate that by coupling the protocol with a classical optimizer and high shot counts, VISTA can temporarily achieve near-Heisenberg scaling for moderately noisy qubits over a finite range of system sizes. Furthermore, we introduce a Quasi-Normalization technique that sharpens the loss gradients, enabling simultaneous extraction of both the coherent signal $\theta$ and the environmental noise rate $\gamma$ with low absolute error. Finally, we extend VISTA to the multi-parameter vector metrology regime, enabling simultaneous parameter extraction from a transverse-magnetic-field Hamiltonian. 
By eliminating the need for complex, open-loop control and processing, VISTA offers a highly practical, resource-efficient framework for near- to intermediate-term quantum sensors. Source arXiv: 2605.04203v1
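The Swap test at the heart of the probe-versus-twin comparison reports the ancilla outcome 0 with probability $(1 + |\langle\psi|\phi\rangle|^2)/2$, so the overlap estimator is $2\hat{P}_0 - 1$. The statistics sketch below uses our own toy states and a binomial shot model, not the paper's circuits.

```python
import numpy as np

rng = np.random.default_rng(2)

# Estimate |<psi|phi>|^2 from simulated Swap-test outcomes.
def swap_test_overlap(psi, phi, shots=100_000):
    p0 = 0.5 * (1.0 + abs(np.vdot(psi, phi)) ** 2)   # ancilla-0 probability
    zeros = rng.binomial(shots, p0)                  # shot-noise model
    return 2.0 * zeros / shots - 1.0

psi = np.array([1.0, 0.0])
phi = np.array([np.cos(0.2), np.sin(0.2)])
print(swap_test_overlap(psi, phi))   # near cos(0.2)^2 ~ 0.961
```

The shot count controls the estimator variance, which is why the abstract ties near-Heisenberg performance to high shot counts: the overlap-based loss must be resolved finely enough for the classical optimizer to follow its gradients.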
Edge-Based Anisotropic Decoding for Generalized Bicycle Codes Authors Dimitris Chytas, Paul N. Fessatidis, Boulat A. Bash, Bane Vasić Published: 05.04.2026 Updated: 05.04.2026 Summary Quantum low-density parity-check (QLDPC) codes provide non-vanishing rates, distance scaling with the blocklength of the code, and facilitate fast iterative decoding because of their sparsity. However, in practice iterative decoding fails to exploit the distance of the code, because it cannot resolve the symmetries imposed by degeneracy. In this work, we provide a graph-theoretic characterization of degeneracy for the family of generalized bicycle (GB) codes. This viewpoint shows that harmful degenerate error patterns persist whenever they remain related by automorphisms preserved by the decoder. Motivated by symmetry breaking via graph coloring, we compare three coloring approaches: no coloring, block-coloring, and edge-coloring. For GB codes, we show that edge-coloring can eliminate all automorphisms in low-weight stabilizer-induced subgraphs. We practically realize the coloring schemes as isotropic, block-anisotropic, and edge-anisotropic min-sum (MS) decoding. Experimental results show that edge-anisotropic min-sum decoding obtains improved performance over isotropic and block-anisotropic decoding for several GB codes in a small number of iterations. Source arXiv: 2605.03218v1
Simulation-guided design of an integrated photonic cavity for frequency-multiplexed Spontaneous Parametric Down Conversion Authors Benjamin Szamosfalvi, Michael Raymer, CJ Xin, Leticia Magalhaes, Jarrett Nelson, Marko Lončar, Ryan M. Camacho Published: 05.04.2026 Updated: 05.04.2026 Summary Frequency-multiplexed entangled photon pair sources with narrow bandwidths and high pair generation efficiency are a key enabling technology for quantum networking. We present a simulation-based design study of an integrated photonic racetrack resonator source for spontaneous parametric down-conversion (SPDC) that simultaneously achieves all three properties. The central result is a simulated set of 90 doubly resonant signal/idler frequency-mode pairs with an effective Schmidt number of 89.62, average bandwidths of 1.08 GHz, a mean free spectral range of 51.9 GHz, and a total internal pair-generation-rate efficiency of 1.16 GHz/mW. Under deterministic wavelength-based splitting, the accessible frequency-state Schmidt number is reduced to 44.93. To support these predictions, we derive a closed-form analytical connection between classical cavity parameters (resonant frequencies, decay rates, coupling coefficients) and the quantum joint spectral amplitude and pair generation rate, extending the dispersive-medium quantization formalism of Raymer to the nonlinear optical cavity case. We demonstrate how classical electromagnetic field simulations can be combined with this analytical framework to predict quantum figures of merit for an integrated photonic source prior to fabrication. Fabrication and experimental validation are left for future work. Source arXiv: 2605.03121v1
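The effective Schmidt numbers quoted above follow from the singular values of the joint spectral amplitude via the standard formula $K = 1/\sum_i \lambda_i^2$ with normalized Schmidt weights $\lambda_i$; the idealized diagonal JSA below is our own check case, where equal weight on $N$ frequency-bin pairs gives $K = N$.

```python
import numpy as np

# Schmidt number of a discretized joint spectral amplitude (JSA).
def schmidt_number(jsa):
    s = np.linalg.svd(jsa, compute_uv=False)
    lam = s**2 / np.sum(s**2)        # normalized Schmidt weights
    return 1.0 / np.sum(lam**2)

jsa = np.eye(90)                      # ideal 90 doubly resonant mode pairs
print(schmidt_number(jsa))            # 90.0 for equal-weight bins
```

A real cavity JSA has unequal weights across the signal/idler pairs, which is why the reported value of 89.62 sits slightly below the mode count of 90.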
Designing a Satellite Serviced Quantum Network Backbone for Concurrent Global Connectivity Authors Prateek Mantri, Stav Haldar, Albert Williams, Don Towsley Published: 05.04.2026 Updated: 05.04.2026 Summary Satellite-serviced quantum networks pose an architectural problem distinct from classical satellite networking: because entanglement cannot be copied, and long-lived buffering is technologically constrained for near-term devices, useful end-to-end service requires fixed optical ground infrastructure and simultaneous multi-hop path availability. We investigate the design of a satellite-serviced quantum backbone aimed at supporting concurrent global connectivity across a traffic matrix of major population and financial centers under finite waiting-time constraints. Using a discrete-time simulator, we evaluate performance using two architecture-level metrics: (i) time-to-connectivity, and (ii) latency-conditioned average active-link strength. Across a broad parameter sweep, we identify three dominant architectural effects. First, anisotropic ground-station lattices reduce time-to-connectivity relative to longitudinally collapsed and isotropic baselines by aligning ground infrastructure with latitude-dependent satellite access. Second, multi-inclination LEO constellations reduce waiting times for strong connectivity compared to single-inclination constellations at fixed satellite budgets by providing additional visibility for a diverse latitude set. Third, multi-party satellite service policies alleviate per-satellite concurrency bottlenecks and substantially reduce time-to-connectivity at stringent traffic-matrix thresholds. We further show that satellite altitude is the dominant physical lever shaping the visibility–loss trade-off, strongly affecting both connectivity latency and achievable link strength, while orbital plane count and satellite packing provide secondary refinements at fixed altitude. 
Together, these results delineate the architectural conditions required for scalable, concurrent entanglement connectivity in satellite-serviced quantum networks. Source arXiv: 2605.02164v1
Hierarchical Federated Learning for Networked AI: From Communication Saving to Architecture-Aware Design Authors Seyed Mohammad Azimi-Abarghouyi, Mehdi Bennis, Leandros Tassiulas Published: 05.01.2026 Updated: 05.01.2026 Summary Federated learning (FL) is fundamentally a distributed optimization problem executed by communicating agents with local data, local computation, and partial system visibility. Once FL is viewed through that lens, hierarchy is not merely a scalability mechanism. It becomes the natural place to rethink how distributed optimization should be organized over real multi-tier networks. This article argues that hierarchical federated learning (HFL) should move beyond its common framing as a communication-saving protocol and instead be viewed as an architecture-aware design framework for networked AI. The framework is organized around three coupled design axes: architectural parameters, layer-wise optimization decomposition, and layer-wise communication realization. The first axis determines the coordination geometry of learning through hierarchy depth, layer asymmetry, and layered connectivity. The second determines how the global FL objective is decomposed across layers and highlights modular multi-layer optimization as a major opportunity beyond one dominant method everywhere. The third determines how the distributed optimization is physically realized under heterogeneous communication regimes, from interference-limited lower tiers to reliable upper tiers. A central message is that, in HFL, convergence becomes architecture-dependent: it is directly shaped by the chosen hierarchy, the assigned optimization roles, and the communication mechanisms that connect them. We develop this viewpoint using large-scale wireless edge intelligence as a flagship networked AI setting, then provide a comparative perspective on flat FL, two-tier HFL, and deep HFL together with a regime-oriented design map. 
The resulting perspective positions HFL as a practical methodology for designing future networked AI systems. Source arXiv: 2605.00931v1
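The layer-wise decomposition discussed above can be sketched as nested weighted averaging: clients aggregate into edge servers, edge servers into the cloud, each level weighted by sample counts. This minimal two-tier model (names and numbers are ours) also makes a useful sanity check explicit: with sample-count weights, hierarchical aggregation reproduces flat FedAvg exactly.

```python
import numpy as np

def weighted_avg(models, weights):
    return np.average(np.stack(models), axis=0, weights=np.asarray(weights, float))

# clusters: per edge server, a list of (client_model_vector, n_samples)
def hierarchical_fedavg(clusters):
    edge_models, edge_sizes = [], []
    for cluster in clusters:
        models, sizes = zip(*cluster)
        edge_models.append(weighted_avg(models, sizes))   # edge-tier aggregation
        edge_sizes.append(sum(sizes))
    return weighted_avg(edge_models, edge_sizes)          # cloud-tier aggregation

clusters = [[(np.array([1.0, 0.0]), 10), (np.array([3.0, 2.0]), 30)],
            [(np.array([0.0, 4.0]), 60)]]
print(hierarchical_fedavg(clusters))
```

The architecture-aware design space opens up precisely when the tiers deviate from this baseline, e.g. running different optimizers per layer or aggregating at different frequencies under heterogeneous communication constraints.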
High-fidelity entangling gates and nonlocal circuits with neutral atoms Authors Simon J. Evered, Muqing Xu, Sophie H. Li, Alexandra A. Geim, J. Pablo Bonilla Ataides, Marcin Kalinowski, Dolev Bluvstein, Nishad Maskara, Christian Kokail, Markus Greiner, Vladan Vuletić, Mikhail D. Lukin Published: 04.28.2026 Updated: 04.28.2026 Summary Creation and manipulation of entanglement with low error is essential in quantum information systems. In practice, two-qubit entangling gates constitute a dominant error source, limiting circuit depths and performance in fault-tolerant architectures. Using a neutral-atom quantum processor, we realize entangling CZ gates with a high Rabi frequency smooth-amplitude pulse, employing state-selective readout and qubit reuse for fast calibration, and achieve state-of-the-art fidelities of 99.854(4)% which improve to 99.941(3)% upon loss postselection, with stable performance for 10 hours. We then use these low-error gates in quantum circuits with coherent atom rearrangement. We first benchmark performance by creating and disentangling cluster states, and subsequently implement scrambling circuits featuring longer-range connectivity to study non-locally entangled states generated through chaotic dynamics. These results pave the way towards deep-circuit, efficient fault-tolerant quantum computation. Source arXiv: 2604.25987v1
Quantum-enhanced Network Tomography Authors Yufei Zheng, Zihao Gong, Saikat Guha, Don Towsley Published: 04.28.2026 Updated: 04.28.2026 Summary Network tomography is the use of end-to-end probes and inference techniques to estimate internal network states. Quantum probes, implemented by sending blocks of $n$ coherent-state pulses augmented with continuous-variable (CV) squeezing ($n=1$) or weak temporal-mode entanglement ($n>1$) over a lossy channel to a receiver with homodyne detection capabilities, are known to carry information about the channel transmissivity. Assuming a subset of nodes in an optical network is capable of sending and receiving such probes through intermediate nodes with all-optical switching capabilities, we leverage these quantum probes to estimate link transmissivities. To determine how to route the probes in a network, we propose a probe construction algorithm that guarantees link identifiability, while maximizing the number of information-orthogonal sets of transmissivities. A set of probes induces a Fisher information matrix (FIM). We then derive two metrics, the determinant of the FIM and the trace of its inverse, to evaluate the performance of the probes. In particular, our results can be used to characterize the quantum improvement in estimating link transmissivities in a general optical network. Source arXiv: 2604.25194v1
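The two FIM-based metrics are classical design criteria (D- and A-optimality) and can be sketched directly; the Jacobian matrices below are made-up probe designs under a unit-noise assumption, not values from the paper.

```python
import numpy as np

# For a probe sensitivity (Jacobian) matrix J w.r.t. link transmissivities,
# the unit-noise FIM is F = J^T J. det(F) rewards overall information;
# trace(F^{-1}) bounds the total estimation variance (Cramer-Rao).
def fim_metrics(J):
    F = J.T @ J
    return np.linalg.det(F), np.trace(np.linalg.inv(F))

# Two designs over 2 links: orthogonal probes vs. nearly redundant probes.
J_good = np.array([[1.0, 0.0], [0.0, 1.0]])
J_bad = np.array([[1.0, 0.9], [1.0, 1.0]])
print(fim_metrics(J_good))
print(fim_metrics(J_bad))
```

Redundant probes make the FIM nearly singular, inflating the trace of its inverse, which is exactly the failure mode an identifiability-guaranteeing probe construction is designed to avoid.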
Stabilizers for Compiling Logical Circuits under Hardware Constraints Authors Jack Weinberg, Narayanan Rengaswamy Published: 04.27.2026 Updated: 04.27.2026 Summary To implement quantum algorithms on a quantum computer, we must overcome the twin problems of fault-tolerance — how can we realize a relatively noiseless computation by cleverly combining noisy components? — and compilation — how can we realize an arbitrary quantum algorithm given the basic operations available on the quantum device at hand? We show how treating the former problem via error-correcting codes enables greater flexibility in resolving the latter. Specifically, we explicitly leverage the fact that error-correcting codes introduce redundancy which renders physically distinct operators logically indistinguishable. In terms of computation, it suffices to implement any operator logically equivalent to some target, yet from a compilation perspective, certain choices may be preferable to others. Our novel contribution is making this intuition precise in the general setting of the special unitary group. In particular, we describe how to reduce the problem of making a compilation-ideal choice to a least squares problem and provide a closed form solution thereof. Using our framework, it is possible to circumvent inserting costly swaps to adhere to hardware connectivity; instead, we could realize the logical target through a distinct physical Hamiltonian that is natively accessible. We elucidate our approach using the $[[4,2,2]]$ code. We discuss connections to compressed sensing that may pave the way to efficient compilation leveraging physical degrees of freedom. Source arXiv: 2604.25042v1
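The stated reduction to least squares has the familiar closed form $x = (A^\top A)^{-1} A^\top b$ after vectorizing. The generic sketch below finds the best approximation of a target within the span of a basis of directions; the target and basis matrices are made-up illustrations, not the paper's logical operators or the $[[4,2,2]]$ code.

```python
import numpy as np

# Least-squares projection of a vectorized target matrix onto the span
# of basis matrices {B_i}: minimize ||target - sum_i x_i B_i||_F.
def best_coefficients(target, basis):
    A = np.stack([B.ravel() for B in basis], axis=1)
    x, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
    residual = target - sum(xi * B for xi, B in zip(x, basis))
    return x, residual

target = np.array([[1.0, 2.0], [3.0, 4.0]])
basis = [np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]])]
x, r = best_coefficients(target, basis)
print(x, r)
```

In the compilation setting, the same machinery picks, among physically distinct but logically indistinguishable operators, the representative closest to what the hardware natively implements.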
Networked Realization of Quantum LDPC Codes Authors Swayangprabha Shaw, Narayanan Rengaswamy Published: 04.27.2026 Updated: 04.27.2026 Summary Quantum low-density parity-check (QLDPC) codes with good parameters are promising candidates for low-overhead fault-tolerant quantum computing, but their non-local stabilizers require long-range connectivity and frequent qubit movement, introducing practical challenges. Prior work has studied the networked implementation of topological codes, where each node only holds one or a few qubits of the entire code, and demonstrated competitive performance under practical constraints such as the quality of network-provided entanglement. However, since these codes are already geometrically local, such a networked setting might not be essential. In this work, we propose and study the networked implementation of better QLDPC codes, specifically bivariate bicycle codes due to their similarity to surface codes and the controlled amount of long-range connections in their stabilizers. We begin by recreating networked surface codes in Stim, with one code qubit per node, and provide additional insights into their circuit-level noise performance. We then extend this approach to bipartitions of bivariate bicycle codes, using balanced min-cut partitioning on their combined X-Z Tanner graph to identify optimal qubit splits. For stabilizers spanning nodes, we implement teleported CNOTs and vary the Bell pair fidelity enabling these gates. Through circuit-level noise simulations with BP-OSD decoding, we provide the first insights into networked realizations of these codes and compare their performance with monolithic implementations. We conclude by outlining advantages, limitations, and future directions. Source arXiv: 2604.25026v1
Optimum-Transmission Free-Space Optical Communications Authors Prajit Dhara, Babak N. Saif, Jeffrey H. Shapiro, Saikat Guha Published: 04.25.2026 Updated: 04.25.2026 Summary Slepian developed the Prolate Spheroidal Wavefunction (PSW) spatial-mode basis, which forms the normal modes of the Fresnel-propagation kernel of a free-space optical communications channel bookended by hard circular apertures. The zeroth-order PSW mode has the highest power-transfer eigenvalue, so exciting it at the transmitter maximizes the transmissivity for single-spatial-mode communications. We show that the transmissivity performance of this fundamental PSW mode can be obtained by an aperture-truncated Gaussian beam of an optimized beam waist, despite the two mode shapes deviating from one another in the near-field regime. Source arXiv: 2604.23417v1
Reconfigurable Superconducting Logic for On-Chip Photon Coincidence Detection Authors Gabriel Le Guay, Matteo Castellani, Reed Foster, Francesca Incalza, Alejandro Simon, Owen Medeiros, Phillip D. Keathley, Karl K. Berggren Published: 04.23.2026 Updated: 04.27.2026 Summary Scaling photonic quantum-information platforms requires arrays of superconducting nanowire single-photon detectors (SNSPDs) for feedforward control, in which optical operations are conditioned on preceding Bell-state measurements that typically rely on photon coincidence detections. On-chip superconducting cryotron electronics, performing logic directly on detector outputs and subsequently driving optical modulators, could substantially reduce latency and room-temperature interconnect complexity for feedforward schemes. To date, no cryotron logic gates specifically designed to process SNSPD outputs for quantum applications have been demonstrated. We demonstrate a bias-programmable logic gate based on three nanocryotrons (nTrons), fabricated using the same thin-film technology as SNSPDs. The circuit implements selectable AND (coincidence), XOR (odd-parity), and OR functions on two externally generated electrical pulses at 4.2 K, with bit-error rates below $10^{-3}$, bias margins up to $\pm 24\%$, and operation extending to 25 MHz over narrower bias windows. Moreover, it performs coincidence and odd-parity detection on two co-fabricated SNSPDs’ outputs with bit-error rates below $3.2 \times 10^{-2}$. As a proof-of-concept, we show that nTrons can drive capacitive loads up to 1.15 V, potentially enabling compatibility with electro-optic modulators in feedforward schemes. Source arXiv: 2604.22101v2
Enhanced Mid-Infrared Single-Photon Detection with Antenna-Coupled Superconducting Nanowires Authors Dip Joti Paul, Stewart Koppell, Gregor G. Taylor, Boris Korzh, Sahil R. Patel, Andrew D. Beyer, Emma E. Wollman, Matthew D. Shaw, Phillip D. Keathley, Karl K. Berggren Published: 04.20.2026 Updated: 04.20.2026 Summary Scaling the photon-detection area of superconducting nanowire single-photon detectors (SNSPDs) has traditionally been achieved by nanowire meandering. However, material inhomogeneities and fabrication-induced defects, such as line-edge roughness, increase with nanowire length, leading to reduced internal photon-detection efficiency and elevated dark-count rates. This trade-off becomes increasingly pronounced as nanowires are scaled to sub-100 nm widths and sub-5 nm thicknesses required for mid- to far-infrared sensitivity. Here, we demonstrate an antenna-coupled SNSPD architecture that enhances the effective photon-detection area without increasing nanowire length. A crossed bowtie antenna integrated with an 80 nm-wide, 3 nm-thick WSi nanowire yields a 15.7$\times$ increase in effective detection area at 7.4 $\mu$m compared to a bare nanowire of identical geometric footprint, while maintaining the same internal detection efficiency and dark-count rate. Antenna coupling improves noise-equivalent power and provides a more scalable route to increasing photon-detection area than conventional meander geometries, offering performance benefits for applications in astronomy, biological imaging, and molecular spectroscopy. Source arXiv: 2604.18155v1
Resource-Efficient Quantum-Enhanced Compressive Imaging via Quantum Classical co-Design Authors Haowei Shi, Visuttha Manthamkarn, Christopher M. Jones, Zheshen Zhang, Quntao Zhuang Published: 04.17.2026 Updated: 04.17.2026 Summary Quantum sensing can enhance imaging performance by reducing measurement noise below the classical limit, thereby improving the signal-to-noise ratio (SNR) of acquired data. In conventional quantum imaging schemes, squeezing is applied independently to each pixel or spatial mode, leading to a quantum resource cost that scales linearly with image dimension. This approach implicitly separates quantum enhancement from classical post-processing, treating them as independent layers. In this work, we demonstrate that integrating quantum resource allocation with guidance from classical compressive imaging, via co-design between the quantum hardware layer and the classical software layer, substantially reduces the required quantum resources. We employ principal component analysis (PCA) to identify a low-dimensional principal component subspace for measurement and apply squeezing selectively to the most informative spatial modes corresponding to these principal components. Our numerical experiments show that high-accuracy image classification and high-fidelity image reconstruction can be achieved with significantly fewer squeezed modes compared to pixel-wise squeezing. Our results establish a joint quantum-classical co-design framework for resource-efficient quantum-enhanced imaging. Source arXiv: 2604.16662v1
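The mode-selection step described above (identifying a principal-component subspace and squeezing only the most informative spatial modes) can be sketched numerically. The snippet below is a minimal illustration on synthetic data, not the authors' pipeline; the `principal_modes` helper and the toy image ensemble are assumptions for demonstration.

```python
import numpy as np

def principal_modes(images, k):
    """Return the top-k principal-component modes of a set of images.

    images: (n_samples, n_pixels) array; k: number of modes to squeeze.
    In the co-design picture, squeezing would then be allocated only
    along these k spatial modes instead of every pixel.
    """
    X = images - images.mean(axis=0)            # center the data
    # SVD of the centered data: rows of Vt are principal-component modes
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    explained = s**2 / np.sum(s**2)             # variance fraction per mode
    return Vt[:k], explained[:k]

# Toy example: 100 "images" that mostly vary along 3 hidden patterns.
rng = np.random.default_rng(0)
patterns = rng.normal(size=(3, 64))
coeffs = rng.normal(size=(100, 3))
images = coeffs @ patterns + 0.01 * rng.normal(size=(100, 64))

modes, var = principal_modes(images, 3)
print(modes.shape)       # (3, 64): three spatial modes to squeeze
print(var.sum() > 0.99)  # True: 3 modes capture nearly all the variance
```

With 3 squeezed modes instead of 64 pixel-wise squeezers, the quantum resource count drops by an order of magnitude while the measured subspace retains essentially all of the image variance.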
Noise factor of Brillouin amplifiers Authors John H. Dallyn, Nils T. Otterstrom, Matt Eichenfield, Peter T. Rakich, Ryan O. Behunin Published: 04.14.2026 Updated: 04.14.2026 Summary Stimulated Brillouin scattering (SBS), an optical nonlinearity arising from photon-phonon interactions, has formed the basis for a large class of optical signal processing devices, including Brillouin amplifiers. A limiting factor of such amplifiers is the noise due to thermal-mechanical fluctuations that the phonons imprint on the optical signal. Prior work has either inferred or experimentally observed a noise factor ($F$) that depends only on the thermal occupation of the phonons ($F \approx 1+n_{th}$). We show that this noise factor results naturally from a Hamiltonian-based spatio-temporal coupled mode treatment in the limit of large Brillouin amplification and when phonon propagation is neglected. Moreover, this theoretical framework allows us to extend our treatment to a much larger and more representative parameter space for emerging SBS systems; specifically, this analysis accounts for the forward or backward nature of the scattering process and the effects of phonon propagation, optical loss, and small Brillouin gains. Our results demonstrate that the noise factor can deviate radically from $F \approx 1+n_{th}$ for a host of modern SBS devices, especially those in which phonon propagation significantly changes the coupled mode dynamics. Source arXiv: 2604.12906v1
Three-body interactions in Rydberg lattices Authors Rhine Samajdar, Mikhail D. Lukin, Valentin Walther Published: 04.13.2026 Updated: 04.13.2026 Summary Programmable arrays of neutral Rydberg atoms are one of the leading platforms today for scalable quantum simulation and computation. In these systems, the dipole-dipole interactions between the individual atoms, or qubits, typically result in binary — i.e., two-body — couplings. In this work, we develop an experimentally accessible scheme for engineering three-body interactions in Rydberg lattices. Such strong three-body couplings can fundamentally modify the underlying physics compared to systems with only two-body interactions: we demonstrate this, in particular, by systematically investigating the effective many-body Hamiltonian and its emergent quantum phases. This capability paves the way for the quantum simulation of a broader class of correlated models of condensed matter and high-energy physics. Source arXiv: 2604.11870v1
Scalable Neural Decoders for Practical Fault-Tolerant Quantum Computation Authors Andi Gu, J. Pablo Bonilla Ataides, Mikhail D. Lukin, Susanne F. Yelin Published: 04.09.2026 Updated: 04.09.2026 Summary Quantum error correction (QEC) is essential for scalable quantum computing. However, it requires classical decoders that are fast and accurate enough to keep pace with quantum hardware. While quantum low-density parity-check codes have recently emerged as a promising route to efficient fault tolerance, current decoding algorithms do not allow one to realize the full potential of these codes in practical settings. Here, we introduce a convolutional neural network decoder that exploits the geometric structure of QEC codes, and use it to probe a novel “waterfall” regime of error suppression, demonstrating that the logical error rates required for large-scale fault-tolerant algorithms are attainable with modest code sizes at current physical error rates, and with latencies within the real-time budgets of several leading hardware platforms. For example, for the $[[144,12,12]]$ Gross code, the decoder achieves logical error rates up to $\sim 17\times$ below existing decoders, reaching logical error rates of $\sim 10^{-10}$ at a physical error rate $p = 0.1\%$, with 3-5 orders of magnitude higher throughput. This decoder also produces well-calibrated confidence estimates that can significantly reduce the time overhead of repeat-until-success protocols. Taken together, these results suggest that the space-time costs associated with fault-tolerant quantum computation may be significantly lower than previously anticipated. Source arXiv: 2604.08358v1
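The decoding task described above, mapping measured syndromes back to likely error patterns, can be illustrated on a toy code. The lookup-table decoder below is a stand-in for the paper's convolutional network, and the 3-bit repetition-code parity-check matrix is an assumption chosen for brevity.

```python
import numpy as np

def syndrome(H, error):
    """Syndrome of a binary error pattern under parity-check matrix H."""
    return (H @ error) % 2

# Toy 3-bit repetition code: the decoder's job (here an exhaustive
# lookup, in the paper a neural network) is to invert the syndrome map.
H = np.array([[1, 1, 0],
              [0, 1, 1]])
table = {tuple(syndrome(H, e)): e
         for e in map(np.array, [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)])}

err = np.array([0, 1, 0])              # a single bit-flip on qubit 2
decoded = table[tuple(syndrome(H, err))]
print(np.array_equal(decoded, err))    # True: the error is recovered
```

For a code like the Gross code the syndrome space is far too large to tabulate, which is why learned decoders that exploit the code's geometric structure become attractive.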
A Family of Open Time-Series Foundation Models for the Radio Access Network Authors Ioannis Panitsas, Leandros Tassiulas Published: 04.05.2026 Updated: 04.05.2026 Summary The Radio Access Network (RAN) is evolving into a programmable and disaggregated infrastructure that increasingly relies on AI-native algorithms for optimization and closed-loop control. However, current RAN intelligence is still largely built from task-specific models tailored to individual functions, resulting in model fragmentation, limited knowledge sharing across tasks, poor generalization, and increased system complexity. To address these limitations, we introduce TimeRAN, a unified multi-task learning framework for time-series modeling in the RAN. TimeRAN leverages a lightweight time-series foundation model with few task-specific heads to learn transferable representations that can be efficiently adapted across diverse tasks with limited supervision. To enable large-scale pretraining, we further curate and open-source TimeRAN DataPile, the largest time-series corpus for RAN analytics to date, comprising over 355K time series and 0.56B measurements across diverse telemetry sources, protocol layers, and deployment scenarios. We evaluate TimeRAN across a comprehensive set of RAN analytics tasks, including anomaly detection, classification, forecasting, and imputation, and show that it achieves state-of-the-art performance with minimal or no task-specific fine-tuning. Finally, we integrate TimeRAN into a proof-of-concept 5G testbed and demonstrate that it operates efficiently with limited resource requirements in real-world scenarios. Source arXiv: 2604.04271v1
Understanding Intrinsic Loss in Thin-Film Lithium Niobate Ring Resonators via Adiabatic Coupling Authors Xinrui Zhu, Hana K. Warner, Yunxiang Song, Donald Witt, Marko Loncar Published: 04.02.2026 Updated: 04.02.2026 Summary Thin-film lithium niobate (TFLN) has emerged as a versatile integrated photonics platform, combining strong electro-optic and nonlinear effects. Among TFLN devices, ring resonators play a central role in filtering, modulation, and nonlinear optical processes. However, intrinsic loss, which ultimately limits ring performance, is most often summarized by single-valued metrics, and its statistical variability across resonances has received limited attention. Here, we show that intrinsic loss rates in monolithic TFLN ring resonators follow a statistical distribution, comprising a baseline loss and a tail arising from discrete loss events. This behavior is revealed by characterizing 2233 resonances, using an adiabatic waveguide-ring coupling architecture that selectively excites the fundamental mode and yields clean spectra in the ultra-high-$Q_i$ regime. We find the most probable intrinsic loss rate to be $\kappa_i = 2\pi \times 10.4$ MHz, indicating operation in a low-loss regime comparable to state-of-the-art thick silicon nitride platforms. Source arXiv: 2604.01922v1
Unlocking Open-Player-Modeling-enhanced Game-Based Learning: The Open Player Socially Analytical Intelligence Architecture Authors Zhiyu Lin, Boyd Fox, Devon Mckee, Sai Siddartha Maram, Jiahong Li, Tyler Sorensen, Brian K. Smith, Roger Azevedo, Jichen Zhu, Magy Seif El-Nasr Published: 03.27.2026 Updated: 03.27.2026 Summary Game-Based Learning (GBL) is a learner-engaging pedagogical methodology, yet adapting games to heterogeneous learners requires transparent, real-time Open Player Models (OPMs). We contribute to the community the Open Player Socially Analytical Intelligence (OPSAI) architecture, which implements OPMs beyond conceptual frameworks and is validated in a GBL application. It decouples gameplay telemetry and analysis from the game engine and automatically derives pedagogically actionable insights, supporting the transparency of computational player models while making them accessible to players. OPSAI comprises three logical layers: a Frontend that both provides the GBL experience and collects information needed for analytics; a stateless Backend that hosts transparent analytics services producing reflective prompts, recommendations, and visualization guides; and a two-tier Log Storage that balances heavy raw gameplay data with lightweight reference indices for low-latency queries. By feeding analytics outputs back into the game interface, OPSAI closes the feedback loop between play and learning, empowering teachers, researchers, and learners alike. We further showcase OPSAI with a full deployment on the Parallel GBL environment, featuring live play traces, peer comparisons, and personalized suggestions, demonstrating a reusable blueprint for future educational games. Source arXiv: 2603.26915v1
T Count as a Numerically Solvable Minimization Problem Authors Marc Grau Davis, Ed Younis, Mathias Weiden, Hyeongrak Choi, Dirk Englund Published: 03.26.2026 Updated: 03.26.2026 Summary We present a formulation of the problem of finding the smallest T-Count circuit that implements a given unitary as a binary search over a sequence of continuous minimization problems, and demonstrate that these problems are numerically solvable in practice. We reproduce best-known results for synthesis of circuits with a small number of qubits, and push the bounds of the largest circuits that can be solved in this way. Additionally, we show that circuit partitioning can be used to adapt this technique to optimize the T-Count of circuits with large numbers of qubits by breaking the circuit into a series of smaller sub-circuits that can be optimized independently. Source arXiv: 2603.25101v1
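The binary-search outer loop described above can be sketched as follows. The inner continuous minimization is replaced by a hypothetical `cost_at` callback, since the actual optimizer is the paper's contribution; the sketch only assumes the achievable cost is non-increasing in the allowed T count.

```python
def min_t_count(cost_at, t_max, tol=1e-8):
    """Binary-search the smallest t with cost_at(t) <= tol.

    cost_at(t) stands in for the continuous minimization: it should
    return the best achievable distance to the target unitary when the
    circuit ansatz is allowed t T gates. Assumes that cost is
    monotonically non-increasing in t, as in the paper's formulation.
    """
    lo, hi = 0, t_max
    while lo < hi:
        mid = (lo + hi) // 2
        if cost_at(mid) <= tol:   # mid T gates already suffice
            hi = mid
        else:                     # need more T gates
            lo = mid + 1
    return lo

# Hypothetical stand-in: suppose the optimizer reaches the target
# exactly when t >= 7 and plateaus at a positive distance below that.
mock_cost = lambda t: 0.0 if t >= 7 else 0.3
print(min_t_count(mock_cost, 64))  # 7
```

The search evaluates only O(log t_max) minimization problems, which is what makes the formulation practical despite each inner solve being expensive.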
Visible Spectral-Domain Optical Coherence Tomography for Photonic Integrated Circuits Characterization Authors Yin Min Goh, Chao Li, Yunchan Hwang, Helaman Flores, Mahmoud Jalali Mehrabad, James G. Fujimoto, Dirk R. Englund Published: 03.25.2026 Updated: 03.25.2026 Summary Visible photonic integrated circuits underpin applications ranging from AR/VR to quantum control, yet lack a high-resolution, nondestructive diagnostic comparable to the optical frequency-domain reflectometry used in infrared silicon photonics. Here we adapt spectral-domain optical coherence tomography to measure guided-mode back-reflections in visible PICs. Broadband visible light injected into a circuit generates back-reflections that interfere with a depth-referencing local oscillator, and the resulting spectral fringes are recorded on a spectrometer. We validate the approach by resolving multiple round-trip echoes in a waveguide-coupled ring resonator using only single-port access. We then extend it to circuits integrated with diamond quantum micro-chiplets (QMCs), clearly resolving input and output facets as well as PIC–QMC transition regions. The system achieves shot-noise-limited sensitivity, 50 dB dynamic range, 8 $\mu$m axial resolution in silicon nitride, and a 2 mm imaging depth at 6 dB roll-off. SD-OCT therefore provides a practical, high-resolution diagnostic for visible PICs that uses a broadband probe source and requires only single-port optical access, enabling rapid characterization of propagation loss, backscattering, and dispersion. Source arXiv: 2603.23815v1
Rethinking Quantum Networking with Advances in Fiber Technology Authors Prateek Mantri, Michael S. Bullock, Aditya Tripathi, Robert Kwolek, Rajveer Nehra, Don Towsley Published: 03.24.2026 Updated: 03.24.2026 Summary Recent comparisons of quantum repeater protocols have highlighted the strong near-term potential of multiplexed two-way architectures for long-distance quantum communication. At the same time, advances in hollow-core fiber (HCF) technology motivate a re-examination of the physical transmission medium as an architectural lever in quantum network design. In this work, we compare emerging anti-resonant HCFs against conventional silica single-mode fibers (SMFs) in multiplexed two-way quantum repeater networks. We evaluate their performance under both telecom and memory-native transmission, accounting for frequency-conversion overheads, coupling efficiencies, memory decoherence, and operational noise. We find that HCF significantly outperforms SMF across a wide range of regimes. With memory-native transmission, HCF yields up to an order of magnitude improvement in secret-key rate per channel use under realistic conversion efficiencies. Even at telecom wavelengths, HCF enables larger optimal repeater spacing, improving rate–cost tradeoffs and reducing repeater requirements. We further quantify the role of memory quality, hardware efficiency, detector and conversion losses, and two-qubit gate noise in shaping these gains. These results show that recent advances in HCF materially expand the design space of practical terrestrial quantum repeater networks. Source arXiv: 2603.23718v1
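A back-of-the-envelope version of the fiber comparison above: per-use transmissivity falls exponentially with attenuation, and the repeaterless PLOB bound $-\log_2(1-\eta)$ then caps the secret-key rate. The loss figures used below (0.2 dB/km for telecom SMF, 0.1 dB/km for HCF) are illustrative assumptions, not values from the paper, which additionally folds in conversion and coupling efficiencies omitted here.

```python
import math

def transmissivity(alpha_db_per_km, length_km):
    """Power transmissivity of a fiber span with loss alpha (dB/km)."""
    return 10 ** (-alpha_db_per_km * length_km / 10)

def plob_bound(eta):
    """Repeaterless secret-key capacity bound, -log2(1 - eta) bits/use."""
    return -math.log2(1 - eta)

# Assumed loss figures: ~0.2 dB/km for telecom SMF, ~0.1 dB/km for a
# low-loss anti-resonant HCF (hypothetical illustrative numbers).
L = 100  # km span
eta_smf = transmissivity(0.2, L)
eta_hcf = transmissivity(0.1, L)
print(eta_hcf / eta_smf)                          # 10x higher transmissivity
print(plob_bound(eta_hcf) > plob_bound(eta_smf))  # True: higher key rate cap
```

Because the advantage compounds exponentially with span length, even a modest dB/km improvement shifts the optimal repeater spacing, which is the rate-cost lever the paper quantifies.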
Observation of microscopic domain effects in the metal-insulator transition of thin-film NdNiO$_3$ Authors Lucy S. Nathwani, Anne Ruperto, Ashvini Vallipuram, Abigail Y. Jiang, Grace A. Pan, Dan Ferenc Segedin, Ari B. Turkiewicz, Charles M. Brooks, Jarad A. Mason, Qichen Song, Julia A. Mundy Published: 03.22.2026 Updated: 03.22.2026 Summary Perovskite oxides display correlated electrical, magnetic, and thermal properties that can be further tuned in the thin-film limit, making them contenders for next-generation electronics. Measuring thermal transport in thin films is challenging, because traditional techniques are dominated by the substrate. Here, frequency-domain thermoreflectance (FDTR) of an epitaxial NdNiO$_3$ thin film reveals a sharp change in out-of-plane thermal conductivity across the metal-insulator transition. Complementary frequency-domain photoreflectance (FDPR) reveals a large change in ambipolar diffusivity of photoexcited carriers. While the in-plane electrical resistance shows large hysteresis, out-of-plane thermal and charge transport shows negligible hysteresis. We attribute this discrepancy to anisotropy in the percolation of nanoscale domains across the transition as the film thickness approaches the domain length scale. We establish FDTR and FDPR as sensitive probes of quantum material phase transitions and highlight NdNiO$_3$ for thermal control and memory applications. Source arXiv: 2603.21405v1
Fast Real-Axis Eliashberg Calculations: Full-bandwidth solutions beyond the constant density of states approximation Authors Alejandro Simon, James Shi, Dominik Spath, Eva Kogler, Reed Foster, Emma Batson, Pedro N. Ferreira, Mihir Sahoo, Phillip D. Keathley, Warren E. Pickett, Rohit Prasankumar, Karl K. Berggren, Christoph Heil Published: 03.18.2026 Updated: 03.18.2026 Summary Experimentally relevant signatures of superconductivity require access to real-frequency quantities, such as the spectral functions, optical response, and transport properties, yet Migdal-Eliashberg calculations are commonly performed on the imaginary axis and then analytically continued, a step that is numerically delicate and can obscure physically relevant spectral features. Here we present a practical route to solving the finite-temperature Migdal-Eliashberg equations directly on the real-frequency axis, while retaining the effects from the full-bandwidth electronic structure. Our formulation accounts for particle-hole asymmetry through an energy-dependent electronic density of states, avoiding the constant density of states approximation often used in real-axis calculations, and includes a static screened Coulomb contribution. We introduce an efficient numerical technique to solve the Migdal-Eliashberg integrals whose computational cost scales linearly with the real-frequency grid, making high-resolution, full-bandwidth real-axis calculations feasible and providing direct access to the interacting Green’s function and derived observables without analytic continuation. As an illustration, we apply the method to H$_{3}$S, where a van Hove singularity near the Fermi level produces strong particle-hole asymmetry. The full-bandwidth solution yields noticeably different spectra than the constant density of states approximation and brings the superconducting gap and lineshapes into closer agreement with experiment, highlighting when band-structure details are essential. 
Furthermore, the methods presented here open the door to time-dependent, nonequilibrium simulations within Eliashberg theory. Source arXiv: 2603.18199v1
Ultrafast dynamics and light-induced superconductivity from first principles Authors Alejandro Simon, James Shi, Eva Kogler, Reed Foster, Dominik Spath, Emma Batson, Pedro N. Ferreira, Mihir Sahoo, Rohit Prasankumar, Phillip D. Keathley, Karl K. Berggren, Christoph Heil Published: 03.18.2026 Updated: 03.18.2026 Summary Experiments on superconducting materials have unveiled unique emergent properties when they are driven far from equilibrium. However, a quantitative first-principles treatment that describes experimental observations is lacking. In this work, we develop an ab-initio model for the nonequilibrium response of optically irradiated superconducting films within the framework of conventional electron-phonon-mediated superconductivity, leveraging new numerical techniques to solve the Migdal-Eliashberg equations directly on the real-frequency axis. This enables us to quantitatively reproduce the optical response of superconducting films in pump-probe experiments and validate our approach on measurements of the differential reflectance of Pb and LaH$_{10}$ in response to a pump excitation. Similar calculations performed on the alkali-doped fulleride K$_3$C$_{60}$ reveal that a photo-induced superconducting state is generated after irradiation by an ultrafast mid-infrared pulse of sufficient intensity, as reported in prior experimental work. The enhancement in this framework is attributed to the excitation of quasiparticles to energies resonant with the strongest electron-phonon coupling in K$_3$C$_{60}$, in close analogy to the mechanism for enhancement of superconductivity under microwave irradiation, explaining the nature of the photo-induced superconducting state and elucidating the subsequent quasiparticle and phonon dynamics. Our results suggest that photo-induced superconductivity is accessible in more materials than previously recognized. 
We demonstrate this by performing calculations on calcium-intercalated graphite, CaC$_6$, and predict a similar photo-induced superconducting gap. Source arXiv: 2603.18182v1
Boosted linear-optical measurements on single-rail qubits with unentangled ancillas Authors Aqil Sajjad, Isack Padilla, Saikat Guha Published: 03.17.2026 Updated: 03.17.2026 Summary Any quantum state of the radiation field, sliced in small non-overlapping space-time bins is a collection of single-rail qubits, each spanning the vacuum and single-photon Fock state of a mode. Quantum logic on these qubits would enable arbitrary measurements on information-bearing light, but is hard due to the lack of strong nonlinearities. With unentangled ancilla single-rail qubits, an $8$-port interferometer and photon detection, we show any single-rail qubit measurement in the $XY$ Bloch plane is realizable with success probability $147/256$, which beats the prior-known $1/2$ limit. Source arXiv: 2603.16795v1
Programmable pixel-mode linear interferometers using multi-plane light conversion Authors Mushkan Sureka, Itay Ozer, Wenhua He, Michael R. Grace, Chaohan Cui, Saikat Guha Published: 03.16.2026 Updated: 03.16.2026 Summary Programmable linear optical interferometers are a core primitive in optical signal processing, quantum information processing, and photonic computing. Existing photonic-integrated implementations realize arbitrary $M$-mode unitaries using Mach–Zehnder-interferometer meshes whose footprint and accumulated loss scale with $O(M^2)$ optical components. Here we analyze and experimentally demonstrate a programmable architecture for implementing linear optical transformations directly on spatially tiled free-space pixel modes using multi-plane light conversion (MPLC). In this architecture, $M$ spatial modes arranged on a transverse lattice undergo a unitary transformation and are mapped to $M$ output modes of identical geometry through a sequence of programmable phase masks separated by free-space propagation segments. Numerical simulations show that arbitrary $M$-mode unitaries can be compiled to a desired high fidelity using a number of phase planes that scales approximately linearly with $M$. Using a spatial-light-modulator-based MPLC, we experimentally demonstrate programmable interferometers acting on up to $16$ spatial pixel modes, including tunable beamsplitters, Hadamard unitaries, spatial permutations, boosted-Bell-measurement unitaries, and partial unitaries on select subsets of modes. These results establish MPLC-based pixel-mode interferometers as a promising architecture for programmable linear optics with applications in classical and quantum optical interconnects, photonic switching, and quantum information processing. Source arXiv: 2603.15836v1
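The MPLC forward model described above, alternating programmable per-pixel phase masks with a fixed propagation kernel, can be sketched in a few lines. Here a unitary DFT stands in for the free-space Fresnel segment; that substitution is an assumption for brevity, and `mplc_unitary` is an illustrative helper, not the authors' compiler.

```python
import numpy as np

def mplc_unitary(phase_masks):
    """Transfer matrix of an MPLC stack acting on M pixel modes.

    Each plane applies a programmable per-pixel phase (a diagonal
    unitary); between planes we use a unitary DFT as a simple stand-in
    for the fixed free-space propagation kernel (an assumption; the
    real kernel is a Fresnel transform).
    """
    M = len(phase_masks[0])
    F = np.fft.fft(np.eye(M), norm="ortho")   # fixed unitary mixing layer
    U = np.eye(M, dtype=complex)
    for phases in phase_masks:
        U = F @ (np.exp(1j * phases)[:, None] * U)
    return U

rng = np.random.default_rng(1)
M, planes = 16, 8
masks = rng.uniform(0, 2 * np.pi, size=(planes, M))
U = mplc_unitary(masks)
# The stack is unitary for any mask settings; compiling a target
# unitary then amounts to optimizing the planes x M phase values.
print(np.allclose(U.conj().T @ U, np.eye(M)))  # True
```

Since every configuration of the phase planes yields a unitary by construction, compilation reduces to an unconstrained optimization over real phases, consistent with the roughly linear-in-$M$ plane count reported in the summary.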
Noise and dynamics in acoustoelectric waveguides Authors Ryan O. Behunin, Andrew Shepherd, Ruoyu Yuan, Taylor Ray, Matthew J. Storey, Peter T. Rakich, Nils T. Otterstrom, Matt Eichenfield Published: 03.16.2026 Updated: 03.16.2026 Summary We present a quantum field theoretic formulation of acoustoelectric interactions in waveguide-like systems of arbitrary cross-section. Building on an open quantum systems approach, we derive a unified description of plasmon-phonon coupling that incorporates dissipation, noise, and the influence of drift currents. Our analysis captures both bulk and surface plasmon modes, highlighting how drift currents Doppler-shift plasmonic resonances and reshape the phonon noise spectrum. The resulting Heisenberg-Langevin equations yield closed-form expressions for frequency shifts, gain, and noise power spectra, enabling direct evaluation of performance metrics such as the noise factor in acoustoelectric amplifiers and oscillators. In the appropriate limits, this framework reproduces known results while extending them to complex geometries. Source arXiv: 2603.15482v1
Microwave spin resonance in epitaxial thin films of spin liquid candidate TbInO3 Authors Sandesh S. Kalantre, Johanna Nordlander, Margaret A. Anderson, Julia A. Mundy, David Goldhaber-Gordon Published: 03.15.2026 Updated: 03.15.2026 Summary Minimizing the energy of a many-body system tends to favor order, but classical frustration and quantum fluctuations destabilize that order. The tension between these effects can produce exotic quantum states of matter. Quantum spin liquid (QSL) states emerge in models of localized magnetic moments where the crystal lattice connectivity frustrates ordering, and the exchange interaction of neighboring spins strengthens quantum fluctuations. Experimentally identifying a QSL in a real material is challenging due to the lack of an order parameter. Piecing together evidence from varied techniques is necessary for diagnosing the nature of the ground state — QSL or otherwise — of a frustrated spin system. In this work, we use coplanar superconducting resonators to probe magnetic excitations in epitaxially grown thin films of the spin liquid candidate TbInO3. Adapting microwave techniques from the field of circuit quantum electrodynamics, we measure responses of these thin films, whose volume is too small for conventional bulk techniques. In-plane susceptibility extracted from the spin resonance signal indicates extreme frustration of magnetic order down to 20 mK, over two orders of magnitude lower than the Curie-Weiss energy scale. Through a crystal field analysis, we identify the doublet eigenstates comprising the ground state. As a consequence of improper ferroelectricity, Tb moments split into two flavors with distinct g-factors reflecting the local crystal field environment of each site. Spin-orbit coupling, crystal fields, magnetic frustration and improper ferroelectricity distinctively combine to shape the magnetic ground state of TbInO3. 
This work establishes a measurement technique using superconducting resonators to probe thin films of frustrated magnets, and applies this technique towards building a coherent understanding of the magnetic properties of TbInO3. Source arXiv: 2603.14545v1
Digital dissipative state preparation for frustration-free gapless quantum systems Authors Johannes Feldmeier, Yu-Jie Liu, Mikhail D. Lukin, Soonwon Choi Published: 03.10.2026 Updated: 03.10.2026 Summary Preparing algebraically correlated ground states of quantum many-body systems is an important, yet challenging task for quantum simulation. We introduce a protocol that employs local projective measurements and unitary feedback for frustration-free gapless systems. Our approach prepares a priori unknown ground states in time that scales polynomially with system size. We analytically demonstrate the performance of our protocol for the dynamics of a single particle; we argue that the same mechanism generalizes to many-body systems based on the physics of quasiparticles. Our theory predicts that a transient cooling dynamics directly reveals the system’s universal critical properties. In particular, the state preparation time is linear in the inverse of the finite-size gap (up to a logarithmic correction) when the system’s dynamical critical exponent is greater than or equal to the effective spatial dimension explored by the quasiparticles. We verify these predictions in numerical simulations of ferromagnetic Heisenberg models in one and two dimensions, a Fredkin spin chain, and a two-dimensional model of resonating valence bond states. Our protocol stabilizes gapless many-body ground states fully digitally without requiring analog rotations, enabling access to high-fidelity states beyond conventional adiabatic approaches in near-term experiments. Source arXiv: 2603.10119v1
Heterogeneously Integrated Diamond-on-Lithium Niobate Quantum Photonic Platform Authors Sophie W. Ding, Chang Jin, Zixi Li, Nicholas Achuthan, Kazuhiro Kuruma, Xinghan Guo, Brandon Grinkemeyer, David D. Awschalom, Nazar Delegan, F. Joseph Heremans, Alexander A. High, Marko Loncar Published: 03.09.2026 Updated: 03.09.2026 Summary Diamond photonics has enabled efficient interfaces for quantum memories and is predicted to be a critical component of quantum networks. However, scalable network architectures require spatial, temporal, and spectral control of photons, which relies on nonlinear and electro-optic functionalities that diamond alone cannot provide. Here, we demonstrate heterogeneous integration of a thin-film lithium niobate (TFLN) platform, which has strong $\chi^{(2)}$ nonlinearity and electro-optic effects, with thin diamond films. We demonstrate high-Q diamond photonic crystal cavities (Q factors exceeding $5\times10^{4}$ at 735 nm) that are lithographically aligned with the TFLN photonic backbone and critically coupled to it. This allows us to realize low-loss diamond-TFLN “escalators” (loss ~1 dB/coupler) that support efficient light transfer between them. At cryogenic temperatures (5 K), we can collect photons emitted from silicon vacancies (SiVs) embedded within the diamond structure via the TFLN photonic circuit. This approach establishes a scalable route toward integrated photonic circuits for practical quantum networking and other technologies. Source arXiv: 2603.08609v1
Benchmark for Assessing Olfactory Perception of Large Language Models Authors Eftychia Makri, Nikolaos Nakis, Laura Sisson, Gigi Minsky, Leandros Tassiulas, Vahid Satarifard, Nicholas A. Christakis Published: 03.08.2026 Updated: 03.08.2026 Summary Here we introduce the Olfactory Perception (OP) benchmark, designed to assess the capability of large language models (LLMs) to reason about smell. The benchmark contains 1,010 questions across eight task categories spanning odor classification, odor primary descriptor identification, intensity and pleasantness judgments, multi-descriptor prediction, mixture similarity, olfactory receptor activation, and smell identification from real-world odor sources. Each question is presented in two prompt formats, compound names and isomeric SMILES, to evaluate the effect of molecular representations. Evaluating 21 model configurations across major model families, we find that compound-name prompts consistently outperform isomeric SMILES, with gains ranging from +2.4 to +18.9 percentage points (mean approx +7 points), suggesting current LLMs access olfactory knowledge primarily through lexical associations rather than structural molecular reasoning. The best-performing model reaches 64.4% overall accuracy, which highlights both emerging capabilities and substantial remaining gaps in olfactory reasoning. We further evaluate a subset of the OP across 21 languages and find that aggregating predictions across languages improves olfactory prediction, with AUROC = 0.86 for the best performing language ensemble model. These results suggest that LLMs should be equipped to handle olfactory information, not just visual or aural information. Source arXiv: 2604.00002v1
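The AUROC figure reported for the language-ensemble experiment is the standard ranking metric; a minimal pure-Python version (the labels and scores below are made up purely for illustration) looks like:

```python
# AUROC via the Mann-Whitney formulation: the probability that a randomly
# chosen positive example outscores a randomly chosen negative one, with
# ties counted as one half. The example data are fabricated for illustration.

def auroc(labels, scores):
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

print(auroc([1, 1, 0, 0], [0.9, 0.8, 0.8, 0.1]))  # -> 0.875
```

A value of 0.86, as quoted for the best language ensemble, means a positive instance outranks a negative one 86% of the time.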
Quantum Hamlets: Distributed Compilation of Large Algorithmic Graph States Authors Anthony Micciche, Naphan Benchasattabuse, Andrew McGregor, Michal Hajdušek, Rodney Van Meter, Stefan Krastanov Published: 03.06.2026 Updated: 05.03.2026 Summary We investigate the problem of compiling the generation of graph states to arbitrarily many distributed homogeneous quantum processing units (QPUs), providing a scalable partitioning algorithm and graph state generation protocol to minimize the number of Bell pairs required. Current approaches focus on the naive metric of cut edges to estimate the quantum communication cost. We show that the problem of balanced k graph partitioning, with the objective of minimizing the sizes of the maximum matchings between the partitions, leads to lower entanglement requirements across partitions. Our heuristic algorithm, BURY, partitions graph states to require fewer Bell pairs for generation than state-of-the-art k partition algorithms. Furthermore, we show that BURY reduces the cut-rank of the partitions, demonstrating that the partitioning found by our algorithm is likely to minimize the Bell pair utilization of any future improved distributed graph state generation protocol. We also discuss how our methods apply to the dynamic case where the graph state generation and measurement are performed concurrently. Our compilation approach provides a scalable foundation for reducing quantum network overhead for distributed measurement-based quantum computation (MBQC), as well as any scheme where distributed graph state generation is desired. Source arXiv: 2603.06387v2
Quantum Hamlets: Distributed Compilation of Large Algorithmic Graph States Authors Anthony Micciche, Naphan Benchasattabuse, Andrew McGregor, Michal Hajdušek, Rodney Van Meter, Stefan Krastanov Published: 03.06.2026 Updated: 03.06.2026 Summary We investigate the problem of compiling the generation of graph states to arbitrarily many distributed homogeneous quantum processing units (QPUs), providing a scalable partitioning algorithm and graph state generation protocol to minimize the number of Bell pairs required. To this goal, we consider the problem of balanced k graph partitioning with the objective of minimizing the sizes of the maximum matchings between partitions, a more natural measure of entanglement compared to the naive but common metric of cut edges. We show that our heuristic algorithm, BURY, partitions graph states to require fewer Bell pairs for generation than state-of-the-art k partition algorithms. Furthermore, we show that BURY reduces the cut-rank of the partitions, demonstrating that the partitioning found by our algorithm is likely to minimize the Bell pair utilization of any future improved distributed graph state generation protocol. Additionally, we discuss how one could straightforwardly apply our methods to the dynamic case where the graph state generation and measurement are performed concurrently. Our study of the balanced minimum maximum matching k partition problem and the heuristic algorithm we design provides a scalable foundation for reducing quantum network overhead for distributed measurement-based quantum computation (MBQC), as well as any scheme where distributed graph state generation is desired. Source arXiv: 2603.06387v1
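The cut-edge versus maximum-matching distinction in this abstract can be illustrated with a toy sketch (a generic bipartite-matching computation, not the paper's BURY heuristic): for a star graph whose hub is cut away from its leaves, the cut has many edges, yet the maximum matching across the cut, the quantity the authors tie to Bell-pair cost, has size one.

```python
# Toy comparison of two partition-cost metrics for a 2-partition of a graph
# state: raw cut-edge count vs. the size of a maximum matching across the cut.
# Generic illustration only; this is not the paper's BURY algorithm.

def max_matching_size(cut_edges, left_nodes):
    """Maximum bipartite matching via Kuhn's augmenting-path algorithm."""
    adj = {}
    for u, v in cut_edges:
        adj.setdefault(u, []).append(v)
    match = {}  # right-side node -> matched left-side node

    def try_augment(u, seen):
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                if v not in match or try_augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in left_nodes)

# Star graph: hub 0 on one QPU, leaves 1..5 on another.
cut = [(0, leaf) for leaf in range(1, 6)]
print(len(cut))                     # cut-edge metric: 5
print(max_matching_size(cut, [0]))  # matching metric: 1
```

Here the naive metric suggests five Bell pairs while the matching metric suggests one, consistent with the abstract's claim that matchings are the more natural measure of entanglement needed across partitions.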
Challenges in Synchronous & Remote Collaboration Around Visualization Authors Matthew Brehmer, Maxime Cordeil, Christophe Hurter, Takayuki Itoh, Wolfgang Büschel, Mahmood Jasim, Arnaud Prouzeau, David Saffo, Lyn Bartram, Sheelagh Carpendale, Chen Zhu-Tian, Andrew Cunningham, Tim Dwyer, Samuel Huron, Masahiko Itoh, Alark Joshi, Kiyoshi Kiyokawa, Hideaki Kuzuoka, Bongshin Lee, Gabriela Molina León, Harald Reiterer, Bektur Ryskeldiev, Jonathan Schwabish, Brian A. Smith, Yasuyuki Sumi, Ryo Suzuki, Anthony Tang, Yalong Yang, Jian Zhao Published: 03.06.2026 Updated: 03.06.2026 Summary We characterize 16 challenges faced by those investigating and developing remote and synchronous collaborative experiences around visualization. Our work reflects the perspectives and prior research efforts of an international group of 29 experts from across human-computer interaction and visualization sub-communities. The challenges are anchored around five collaborative activities that exhibit a centrality of visualization and multimodal communication. These activities include exploratory data analysis, creative ideation, visualization-rich presentations, joint decision making grounded in data, and real-time data monitoring. The challenges also reflect the changing dynamics of these activities in the face of recent advances in extended reality (XR) and artificial intelligence (AI). As an organizing scheme for future research at the intersection of visualization and computer-supported cooperative work, we align the challenges with a sequence of four sets of research and development activities: technological choices, social factors, AI assistance, and evaluation. Source arXiv: 2603.05871v1
An Optimization Framework for Monitor Placement in Quantum Network Tomography Authors Athira Kalavampara Raghunadhan, Matheus Guedes De Andrade, Don Towsley, Indrakshi Dey, Daniel Kilper, Nicola Marchetti Published: 03.06.2026 Updated: 03.06.2026 Summary Quantum Network Tomography (QNT) offers a framework for end-to-end quantum channel characterization by strategically placing monitor nodes within the network. Building upon prior work on single-monitor placement, we study optimal monitor placement and measurement assignments for channel parameter estimation in arbitrary quantum networks. Using an n-node star network as a baseline, we analyze multi-monitor configurations and show that distributing monitors across end nodes can achieve estimation performance comparable to a monitor placed at the hub. Estimation precision is quantified using the Quantum Fisher Information Matrix (QFIM), with channel parameters inferred via Maximum Likelihood Estimation (MLE) and benchmarked against the Quantum Cramér-Rao Bound (QCRB). To generalize, we develop two Integer Linear Program (ILP) formulations: one maximizing estimation accuracy (QF), and another jointly optimizing accuracy and monitoring overhead (QMF). Unlike QF, QMF prevents monitor overloading, enabling scalability and parallelism. We prove optimality for star networks and analyze applicability to tree-structured quantum networks. Source arXiv: 2603.05777v1
The Evolution of Magnetism in a Thin Film Pyrochlore Ferromagnetic Insulator Authors Margaret A. Anderson, Megan E. Goh, Yang Zhang, Kyeong-Yoon Baek, Michael Schulze, Mario Brutzam, Christoph Liebald, Chris Lygouras, Dan Ferenc Segedin, Aaron M. Day, Zubia Hasan, Donald A. Walko, Hua Zhou, Peter Bencok, Alpha T. N'Diaye, Charles M. Brooks, Ismail El Baggari, John T. Heron, S. M. Koopayeh, Daniel Rytz, Christo Guguschev, Julia A. Mundy Published: 03.05.2026 Updated: 03.05.2026 Summary The pyrochlore vanadates are compelling candidates for next-generation dissipationless devices. Lu2V2O7 and Y2V2O7 are ferromagnetic insulators (Tc ~ 70 K) that are believed to exhibit the magnon Hall effect and are expected to host topological magnons. Their completely dissipationless magnon edge states could be harnessed to realize low-power information transport in spintronic or magnonic devices. As a crucial step in the realization of devices, we synthesize the first thin films of pyrochlore Y2V2O7 on isostructural Y2Ti2O7 substrates and explore the evolution of their magnetic properties down to the ultrathin limit. All films are insulating ferromagnets with transition temperatures of up to the bulk value (Tc ~ 68 K) that decrease with thickness according to finite-size effects. Our films also exhibit a change in anisotropy from in-plane to out-of-plane easy axis coincident with the development of partial strain relaxation and nonzero magnetic hysteresis in an applied field. This evolution demonstrates the impact of strain on magnetic anisotropy and paves the way to tunable magnon topology. Source arXiv: 2603.05717v1
Quantum advantages for syndrome-aware noisy logical observable estimation Authors Kento Tsubouchi, Hyukgun Kwon, Liang Jiang, Nobuyuki Yoshioka Published: 03.05.2026 Updated: 03.05.2026 Summary Recent progress in fault-tolerant quantum computing suggests that leveraging error-syndrome information at the logical layer can substantially improve performance, including the estimation of logical observables from noisy states. In this work, based on quantum estimation theory, we develop an information-theoretic framework to quantify the utility of error syndromes for noisy logical observable estimation. We distinguish two operational regimes of such syndrome-aware protocols: classical protocols, in which the logical measurement basis is fixed and syndrome information is used only in classical post-processing, and quantum protocols, in which the logical quantum control can be tailored to depend on the observed error syndrome. For classical syndrome-aware protocols, we prove a universal limitation: on average, syndrome information can improve the effective logical error rate by at most a factor of two, implying at most a quadratic reduction in sampling overhead. In contrast, once syndrome-conditioned quantum control is permitted, we exhibit settings in which the effective logical error rate decays exponentially with the number of logical qubits. These findings provide fundamental guidance for designing future fault-tolerant architectures that actively exploit syndrome records rather than discarding them after decoding. Source arXiv: 2603.05145v1
Variational Quantum Transduction Authors Pengcheng Liao, Haowei Shi, Quntao Zhuang Published: 03.04.2026 Updated: 03.04.2026 Summary Quantum transducers are critical for quantum interconnects, enabling coherent signal transfer across disparate frequency domains. Beyond material and device advances, protocol design has become a powerful means to improve transduction. We introduce a variational quantum transduction (VQT) framework that employs variational tools from near-term quantum computing to systematically optimize protocol performance. As a variational quantum circuit framework, VQT is not plagued by known training issues such as barren plateaus, because a small-scale problem is sufficient for substantial advantage and training only needs to be done once to configure a VQT system. Maximizing the quantum information rate within this framework yields protocols that surpass all known schemes in their respective classes. For non-adaptive protocols, VQT exceeds the performance envelopes of Gottesman-Kitaev-Preskill (GKP)-based and entanglement-assisted approaches. In the adaptive setting, VQT provides only a marginal improvement over Gaussian feedforward strategies, indicating that Gaussian adaptive transduction is already close to optimal. With increasingly universal quantum control, VQT provides a systematic path toward optimal quantum transduction. Source arXiv: 2603.03642v1
Rate-Fidelity Tradeoffs in All-Photonic and Memory-Equipped Quantum Switches Authors Panagiotis Promponas, Leonardo Bacciottini, Paul Polakos, Gayane Vardoyan, Don Towsley, Leandros Tassiulas Published: 03.03.2026 Updated: 03.03.2026 Summary Quantum entanglement switches are a key building block for early quantum networks, and a central design question is whether near-term devices should use only flying photons or also incorporate quantum memories. We compare two architectures: an all-photonic entanglement generation switch (EGS) that repeatedly attempts Bell-state measurements (BSM) without storing qubits, and a quantum memory-equipped switch that buffers entanglement and triggers measurements only when heralded connectivity is available (herald-then-swap control). These two designs trade off simple, memoryless operation that avoids decoherence and memory-induced latency against heralding-based control that buffers entanglement to use BSMs more efficiently. We formalize both models under a common hardware abstraction and characterize their achievable rate-fidelity regions, yielding a benchmarking methodology that translates hardware and protocol parameters into network-level performance. Numerical evaluation quantifies the rate-fidelity tradeoffs of both models, identifies operating regions in which each architecture dominates, and shows how hardware and protocol knobs can be tuned to meet application-specific targets. Source arXiv: 2603.02610v1
Ultra-low loss piezo-optomechanical low-confinement silicon nitride platform for visible wavelength quantum photonic circuits Authors Mayank Mishra, Gwangho Choi, Wenhua He, Gina M. Talcott, Katherine Kearney, Michael Gehl, Andrew Leenheer, Daniel Dominguez, Nils T. Otterstrom, Matt Eichenfield Published: 03.03.2026 Updated: 03.04.2026 Summary The stringent demands of photonic quantum computing protocols motivate photonic integrated circuit (PIC) platforms with passive optical properties such as extremely low losses and correspondingly large circuit depths, as well as active optical properties such as high reconfiguration rates, low power dissipation, and minimal crosstalk. At the same time, many quantum photonic resource state generators, such as single-photon sources and quantum memories, require operation in the visible wavelength range. These requirements make the passive optical properties of CMOS-fabricated, ultralow-loss, low-confinement silicon nitride waveguides especially attractive. However, the conventional active properties of these systems based on thermo-optic modulation are plagued by high levels of crosstalk, slow modulation rates, and high power dissipation. Although there have been recent demonstrations of CMOS-fabricated, visible wavelength, piezo-optomechanical PICs that solve the above challenges associated with implementing active functionality, these have made use of high-confinement waveguides with currently demonstrated losses of order $0.3$-$1~\mathrm{dB/cm}$, precluding circuit depths required for scalable quantum algorithms. Here, we demonstrate that combining piezo-optomechanical actuation with a low-confinement, ultra-low loss silicon nitride platform addresses the scalability challenge while enabling high-performance active functionality at visible wavelengths. 
This platform achieves a propagation loss of $0.026~\mathrm{dB/cm}$ at $780~\mathrm{nm}$, modulation bandwidths in the MHz range, a phase shifter voltage-length product ($V_πL$) of approximately $2.8~\mathrm{V\cdot m}$, and negligible hysteresis. We further demonstrate reconfigurable Mach-Zehnder interferometers based on spiral phase shifters with 0.63 dB loss per phase shifter. Source arXiv: 2603.02584v2
Ultra-low loss piezo-optomechanical low-confinement silicon nitride platform for visible wavelength quantum photonic circuits Authors Mayank Mishra, Gwangho Choi, Wenhua He, Gina M. Talcott, Katherine Kearney, Michael Gehl, Andrew Leenheer, Daniel Dominguez, Nils T. Otterstrom, Matt Eichenfield Published: 03.03.2026 Updated: 03.08.2026 Summary The stringent demands of photonic quantum computing protocols motivate photonic integrated circuit (PIC) platforms with passive optical properties such as extremely low losses and correspondingly large circuit depths, as well as active optical properties such as high reconfiguration rates, low power dissipation, and minimal crosstalk. At the same time, many quantum photonic resource state generators, such as single-photon sources and quantum memories, require operation in the visible wavelength range. These requirements make the passive optical properties of CMOS-fabricated, ultralow-loss, low-confinement silicon nitride waveguides especially attractive. However, the conventional active properties of these systems based on thermo-optic modulation are plagued by high levels of crosstalk, slow modulation rates, and high power dissipation. Although there have been recent demonstrations of CMOS-fabricated, visible wavelength, piezo-optomechanical PICs that solve the above challenges associated with implementing active functionality, these have made use of high-confinement waveguides with currently demonstrated losses of order $0.3$-$1~\mathrm{dB/cm}$, precluding circuit depths required for scalable quantum algorithms. Here, we demonstrate that combining piezo-optomechanical actuation with a low-confinement, ultra-low loss silicon nitride platform addresses the scalability challenge while enabling high-performance active functionality at visible wavelengths. 
This platform achieves a propagation loss of $0.026~\mathrm{dB/cm}$ at $780~\mathrm{nm}$, modulation bandwidths in the MHz range, a phase shifter voltage-length product ($V_πL$) of approximately $2.8~\mathrm{V\cdot m}$, and negligible hysteresis. We further demonstrate reconfigurable Mach-Zehnder interferometers based on spiral phase shifters with 0.63 dB loss per phase shifter. Source arXiv: 2603.02584v3
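A back-of-envelope reading of the quoted figures of merit (the 10 V drive voltage below is an illustrative assumption, not stated in the paper): the waveguide length needed for a π phase shift is L = V_πL / V, and the quoted propagation loss then sets the phase shifter's insertion loss.

```python
# Back-of-envelope use of the quoted figures of merit. The drive voltage is an
# illustrative assumption; the voltage-length product and propagation loss are
# the values quoted in the summary.

v_pi_l_vm = 2.8          # V*m, quoted phase-shifter voltage-length product
loss_db_per_cm = 0.026   # dB/cm, quoted propagation loss at 780 nm
drive_v = 10.0           # V, assumed drive (hypothetical)

# Length needed for a pi phase shift at this drive, and its insertion loss.
length_cm = v_pi_l_vm / drive_v * 100.0
insertion_loss_db = length_cm * loss_db_per_cm
print(round(length_cm, 1), round(insertion_loss_db, 2))  # 28.0 0.73
```

At this assumed drive, a 28 cm spiral accrues roughly 0.7 dB, the same order as the 0.63 dB per phase shifter reported.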
Optimizing Orbital Parameters of Satellites for a Global Quantum Network Authors Athul Ashok, Owen DePoint, Jackson MacDonald, Albert Williams, Don Towsley Published: 03.03.2026 Updated: 03.03.2026 Summary Due to fundamental limitations on terrestrial quantum links, satellites have received considerable attention for their potential as entanglement generation sources in a global quantum internet. In this work, we focus on the problem of designing a constellation of satellites for such a quantum network. We find satellite inclination angles and satellite cluster allocations to achieve maximal entanglement generation rates to fixed sets of globally distributed ground stations. We explore two black-box optimization frameworks, a Bayesian Optimization (BO) approach and a Genetic Algorithm (GA) approach, and find comparable results, indicating their effectiveness for this optimization task. While GA and BO perform remarkably similarly, BO typically converges more efficiently, whereas the continued late-stage improvement observed in the GA indicates less susceptibility to local maxima. In either case, they offer substantial improvements over naive approaches that maximize coverage with respect to ground station placement. Source arXiv: 2603.02480v1
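The GA side of the search can be sketched with a minimal toy (the objective below is a made-up stand-in peaked at 53 degrees per orbital plane, not the paper's entanglement-rate model):

```python
import random

# Minimal genetic-algorithm sketch in the spirit of the constellation search:
# evolve a vector of inclination angles to maximize a black-box objective.
# toy_rate is a hypothetical stand-in, not the paper's rate model.
random.seed(0)

N_PLANES = 3

def toy_rate(angles):
    return -sum((a - 53.0) ** 2 for a in angles)

def evolve(pop_size=20, generations=40):
    popn = [[random.uniform(0.0, 90.0) for _ in range(N_PLANES)]
            for _ in range(pop_size)]
    for _ in range(generations):
        popn.sort(key=toy_rate, reverse=True)
        parents = popn[: pop_size // 2]          # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_PLANES)  # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(N_PLANES)       # mutate one inclination
            child[i] = min(90.0, max(0.0, child[i] + random.gauss(0.0, 2.0)))
            children.append(child)
        popn = parents + children
    return max(popn, key=toy_rate)

best = evolve()
print([round(a, 1) for a in best])
```

Because the objective is only queried, never differentiated, the same loop applies to an expensive orbital-simulation objective; BO differs mainly in spending each query via a surrogate model instead of a population.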
Quantum squeezing in an all-resonant periodically poled lithium niobate microresonator Authors Xinyi Ren, Reshma Kopparapu, Tushar Sanjay Karnik, Chun-Ho Lee, Kiwon Kwon, Clayton Cheung, Yue Yu, Shi-Yuan Ma, Bo-Han Wu, Ran Yin, Lian Zhou, Quntao Zhuang, Dirk Englund, Zaijun Chen, Mengjie Yu Published: 02.26.2026 Updated: 02.26.2026 Summary Quantum noise limits the sensitivity of optical measurements, but squeezed states of light enable quantum-enhanced metrology, sensing, and information processing. Most on-chip squeezed-light sources rely on Kerr ($χ^{(3)}$) nonlinearities and remain limited by pump power and excess loss constraints. Quadratic ($χ^{(2)}$) platforms instead provide stronger parametric interactions, lower pump power requirements, and greater spectral engineering flexibility. Here, we demonstrate strong, broadband squeezed-light generation on a thin-film lithium niobate (TFLN) photonic chip using a dual-resonant optical parametric amplifier implemented in a single periodically poled LN (PPLN) microresonator. Near-full-depth domain inversion is achieved simultaneously with highly over-coupled resonances, exhibiting escape efficiencies exceeding 90% and intrinsic quality factors above 2.5 million in a 0.6 mm$^2$ X-cut TF-PPLN resonator, enabling efficient squeezing at 1587 nm when pumped at 793.5 nm. Operating in the continuous-wave regime, we directly measure -0.81 dB of squeezing below the shot-noise limit with a pump power of 27 mW, together with +4.29 dB of anti-squeezing. From these measurements, we infer an on-chip squeezing level of -7.52 dB $\pm$ 0.22 dB (95% confidence interval: [-7.96,-7.10] dB), and an on-chip anti-squeezing level of +9.62 dB $\pm$ 0.25 dB. We demonstrate single-mode squeezing at degeneracy with a squeezed-light spectrum exceeding 10.3 THz. 
This work reports the highest squeezing ratio among integrated $χ^{(2)}$ cavity platforms and the first quasi-phase matched, fully resonant $χ^{(2)}$ cavity squeezer on chip, establishing a scalable route to fully integrated power-efficient squeezed-light sources for quantum-enhanced sensing and metrology. Source arXiv: 2602.22693v1
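The gap between the directly measured and inferred on-chip squeezing levels follows the standard pure-loss detection model (a generic textbook relation, not the paper's full calibration; the ~20% total efficiency used below is an illustrative assumption):

```python
import math

# Standard loss model for detecting squeezing: a quadrature variance V
# (shot noise = 1) observed through total efficiency eta becomes
# eta * V + (1 - eta). The efficiency value here is an illustrative
# assumption, not the paper's calibrated number.

def db_to_var(db):
    return 10.0 ** (db / 10.0)

def var_to_db(var):
    return 10.0 * math.log10(var)

def detected_db(on_chip_db, eta):
    return var_to_db(eta * db_to_var(on_chip_db) + (1.0 - eta))

# Roughly -7.5 dB on chip viewed through ~20% total efficiency appears
# as only about -0.8 dB at the detector.
print(round(detected_db(-7.5, 0.2), 2))  # -> -0.78
```

This is why modest directly measured squeezing can be consistent with a much larger inferred on-chip level once escape and detection losses are accounted for.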
Passive Environment-Assisted Quantum Communication Authors Evelyn Voss, Bikun Li, Zhaoyou Wang, Liang Jiang Published: 02.25.2026 Updated: 02.25.2026 Summary As quantum information systems mature, efficient and coherent transfer of quantum information through noisy channels becomes increasingly important. We examine how passive environment-assisted quantum communication enhances direct quantum information transfer efficiency. A bosonic pure-loss channel, modeled as transmission through a beam splitter with a vacuum input state at the dark port, has zero quantum capacity when transmissivity is below 50%. Quantum communication through the channel can be enhanced by passive environment assistance, achieved via the selection of an appropriate input state for the ancilla port. Although ideal Gottesman-Kitaev-Preskill (GKP) states enable perfect quantum information transmission at arbitrarily small transmissivity, they are challenging to realize experimentally. We therefore explore more experimentally accessible non-Gaussian ancilla states, such as Fock, cat, and squeezed cat states, and numerically determine the optimal encoding and decoding strategies. We also construct analytical schemes that yield high-fidelity transmission and good information rates. Source arXiv: 2602.21549v1
Passive Environment-Assisted Quantum Communication Authors Evelyn Voss, Bikun Li, Zhaoyou Wang, Liang Jiang Published: 02.25.2026 Updated: 02.27.2026 Summary As quantum information systems mature, efficient and coherent transfer of quantum information through noisy channels becomes increasingly important. We examine how passive environment-assisted quantum communication enhances direct quantum information transfer efficiency. A bosonic pure-loss channel, modeled as transmission through a beam splitter with a vacuum input state at the dark port, has zero quantum capacity when transmissivity is below 50%. Quantum communication through the channel can be enhanced by passive environment assistance, achieved via the selection of an appropriate input state for the ancilla port. Although ideal Gottesman-Kitaev-Preskill (GKP) states enable perfect quantum information transmission at arbitrarily small transmissivity, they are challenging to realize experimentally. We therefore explore more experimentally accessible non-Gaussian ancilla states, such as Fock, cat, and squeezed cat states, and numerically determine the optimal encoding and decoding strategies. We also construct analytical schemes that yield high-fidelity transmission and good information rates. Source arXiv: 2602.21549v2
Universal Sample Complexity Bounds in Quantum Learning Theory via Fisher Information Matrix Authors Hyukgun Kwon, Seok Hyung Lie, Liang Jiang Published: 02.25.2026 Updated: 02.25.2026 Summary In this work, we show that the sample complexity (equivalently, the number of measurements) required in quantum learning theory within a general parametric framework is fundamentally governed by the inverse Fisher information matrix. More specifically, we derive upper and lower bounds on the number of samples required to estimate the parameters of a quantum system within a prescribed small additive error and with high success probability under maximum likelihood estimation. The upper bound is governed by the supremum of the largest diagonal entry of the inverse Fisher information matrix, while the lower bound is characterized by any diagonal element evaluated at arbitrary parameter values. We then apply the general bounds to Pauli channel learning and to the estimation of Pauli expectation values in the asymptotic small-error regime, and recover the previously established sample complexity through considerably streamlined derivations. Furthermore, we identify the structural origin of exponential sample complexity in Pauli channel learning without entanglement and in Pauli expectation value estimation without quantum memory. We then extend the analysis to an error criterion based on the Euclidean distance between the true parameter values and their estimators. We derive the corresponding upper and lower bounds on the sample complexity, which are likewise characterized by the inverse Fisher information matrix. As an application, we consider Pauli expectation estimation with entangled probes. Finally, we highlight two fundamental contributions to quantum learning theory. First, we establish a systematic framework that determines the task-independent sample complexity under maximum-likelihood estimation. 
Second, we show that, in the small-error regime, learning sample complexity is governed by the inverse Fisher information matrix. Source arXiv: 2602.21510v1
Universal Sample Complexity Bounds in Quantum Learning Theory via Fisher Information Matrix Authors Hyukgun Kwon, Seok Hyung Lie, Liang Jiang Published: 02.25.2026 Updated: 03.09.2026 Summary In this work, we show that the sample complexity required in quantum learning theory within a general parametric framework is fundamentally governed by the inverse Fisher information matrix. More specifically, we derive upper and lower bounds on the number of samples required to estimate the parameters of a quantum system within a prescribed small additive error and with high success probability under maximum likelihood estimation. The upper bound is governed by the supremum of the largest diagonal entry of the inverse Fisher information matrix, while the lower bound is characterized by any diagonal element evaluated at arbitrary parameter values. We then apply the general bounds to Pauli channel learning and to Pauli expectation-value learning in the asymptotic small-error regime, and recover the previously established sample complexity through considerably streamlined derivations. Furthermore, we identify the structural origin of exponential sample complexity in Pauli channel learning without entanglement and in Pauli expectation-value learning without quantum memory. We then extend the analysis to an error criterion based on the Euclidean distance between the true parameter values and their estimators. We derive the corresponding upper and lower bounds on the sample complexity, which are likewise characterized by the inverse Fisher information matrix. As an application, we consider Pauli channel learning with entangled probes. Finally, we highlight two fundamental contributions to quantum learning theory. First, we establish a systematic framework that determines the task-independent sample complexity under maximum-likelihood estimation. 
Second, we show that, in the small-error regime, the learning sample complexity is determined by the inverse Fisher information matrix, the central quantity in quantum metrology that determines the ultimate achievable mean squared error. Source arXiv: 2602.21510v2
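The inverse-Fisher-information scaling invoked above can be sanity-checked numerically in a classical toy model (a Bernoulli source, not the quantum setting): the MLE's variance approaches the Cramér-Rao bound 1/(n·I(p)), so the samples needed for error ε scale like 1/(I(p)·ε²).

```python
import random

# Classical toy check of inverse-Fisher-information scaling: for a
# Bernoulli(p) source, the Fisher information is I(p) = 1 / (p (1 - p)),
# the MLE is the sample mean, and its variance over many repetitions
# should approach the Cramer-Rao bound 1 / (n * I(p)).
random.seed(1)

p, n, trials = 0.3, 500, 2000
fisher = 1.0 / (p * (1.0 - p))
crb = 1.0 / (n * fisher)  # = p (1 - p) / n

estimates = [sum(random.random() < p for _ in range(n)) / n
             for _ in range(trials)]
mean = sum(estimates) / trials
var = sum((e - mean) ** 2 for e in estimates) / trials
print(round(crb, 6), round(var, 6))
```

The empirical MLE variance lands within a few percent of the bound, illustrating why inverting the Fisher information matrix is the natural route to sample-complexity statements of the kind derived in the paper.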
Agentic AI for Scalable and Robust Optical Systems Control Authors Zehao Wang, Mingzhe Han, Wei Cheng, Yue-Kai Huang, Philip Ji, Denton Wu, Mahdi Safari, Flemming Holtorf, Kenaish AlQubaisi, Norbert M. Linke, Danyang Zhuo, Yiran Chen, Ting Wang, Dirk Englund, Tingjun Chen Published: 02.23.2026 Updated: 02.23.2026 Summary We present AgentOptics, an agentic AI framework for high-fidelity, autonomous optical system control built on the Model Context Protocol (MCP). AgentOptics interprets natural language tasks and executes protocol-compliant actions on heterogeneous optical devices through a structured tool abstraction layer. We implement 64 standardized MCP tools across 8 representative optical devices and construct a 410-task benchmark to evaluate request understanding, role-aware responses, multi-step coordination, robustness to linguistic variation, and error handling. We assess two deployment configurations–commercial online LLMs and locally hosted open-source LLMs–and compare them with LLM-based code generation baselines. AgentOptics achieves 87.7%–99.0% average task success rates, significantly outperforming code-generation approaches, which reach up to 50% success. We further demonstrate broader applicability through five case studies extending beyond device-level control to system orchestration, monitoring, and closed-loop optimization. These include DWDM link provisioning and coordinated monitoring of coherent 400 GbE and analog radio-over-fiber (ARoF) channels; autonomous characterization and bias optimization of a wideband ARoF link carrying 5G fronthaul traffic; multi-span channel provisioning with launch power optimization; closed-loop fiber polarization stabilization; and distributed acoustic sensing (DAS)-based fiber monitoring with LLM-assisted event detection. These results establish AgentOptics as a scalable, robust paradigm for autonomous control and orchestration of heterogeneous optical systems. Source arXiv: 2602.20144v1
Exact Solutions to Acoustoelectric Interactions in Arbitrary Geometries Authors William W. Roberts, Matt Eichenfield Published: 02.23.2026 Updated: 02.23.2026 Summary Acoustoelectric interactions occur when free carriers in a semiconductor interact with the fields of an acoustic wave in a piezoelectric medium. These interactions can amplify acoustic waves, as well as give rise to extremely large phononic nonlinearities and strong non-reciprocal effects. Despite the tremendous progress in the last ten years, the field is entirely dependent on analytical and perturbative solutions for the two simplest arrangements of piezoelectric-semiconductor materials. While these models have allowed the field to advance substantially, new geometries are arising that do not satisfy assumptions integral to these canonical models. These models rely on simplifying assumptions that remove the tensorial nature of the materials, restricting analysis to plane wave and perturbative solutions. Such restrictions fail to capture the non-perturbative nature of the acoustoelectric effect, illustrating the need for more advanced computational methods to analyze acoustoelectric systems. We develop, for the first time, a finite element method (FEM) model to solve for acoustoelectric interactions in arbitrary geometries. We use the model to verify existing results for amplification, dispersion, and non-reciprocity obtained from the canonical models. We then examine the acoustoelectric effect in two geometries not covered by the canonical models: a thin piezoelectric film placed above a semiconductor substrate and a fully 2D waveguide under a thin semiconductor layer. This work lays the foundation for accurate modeling of arbitrary acoustoelectric geometries such as those currently being developed for all-acoustic radio frequency (RF) signal processing, acoustoelectrically enhanced photonic devices, and quantum acoustoelectric devices. Source arXiv: 2602.19482v1
Exact Solutions to Acoustoelectric Interactions in Arbitrary Geometries Authors William W. Roberts, Matt Eichenfield Published: 02.23.2026 Updated: 02.25.2026 Summary Acoustoelectric interactions occur when free carriers in a semiconductor interact with the fields of an acoustic wave in a piezoelectric medium. These interactions can amplify acoustic waves, as well as give rise to extremely large phononic nonlinearities and strong non-reciprocal effects. The field of acoustoelectric devices is currently dependent on analytical and perturbative solutions for the two simplest arrangements of piezoelectric-semiconductor materials. While these canonical models have allowed the field to advance substantially, new geometries are arising that do not satisfy assumptions integral to these models. These assumptions include the treatment of the interactions between the acoustic fields and free carriers as weak, the neglect of the tensorial nature of the material properties, the omission of the spatial variations in the phonons’ electric field profiles, and the disregard of elastic coupling across material boundaries, among others. We develop, for the first time, a finite element method (FEM) model to solve for acoustoelectric interactions in arbitrary geometries that avoids making the assumptions of the canonical models. We verify the FEM model using results for amplification, dispersion, and non-reciprocity obtained from the canonical models in their regime of validity. We then examine the acoustoelectric effect in two geometries not covered by the canonical models: a thin piezoelectric film placed on a semiconductor substrate and a fully 2D waveguide under a thin semiconductor layer. This work lays the foundation for accurate modeling of arbitrary acoustoelectric geometries such as those currently being developed for all-acoustic radio frequency (RF) signal processing, acoustoelectrically enhanced photonic devices, and quantum acoustoelectric devices. 
Source arXiv: 2602.19482v2
Superconducting phase diagram of multi-layer square-planar nickelates Authors Grace A. Pan, Dan Ferenc Segedin, Sophia F. R. TenHuisen, Lopa Bhatt, Harrison LaBollita, Abigail Y. Jiang, Qi Song, Ari B. Turkiewicz, Denitsa R. Baykusheva, Abhishek Nag, Stefano Agrestini, Ke-Jin Zhou, Jonathan Pelliciari, Valentina Bisogni, Hua Zhou, Mark P. M. Dean, Hanjong Paik, David A. Muller, Lena F. Kourkoutis, Charles M. Brooks, Matteo Mitrano, Antia S. Botana, Berit H. Goodge, Julia A. Mundy Published: 02.22.2026 Updated: 02.22.2026 Summary The discovery of superconductivity in square-planar nickelates has offered a rich materials platform to explore the origins of cuprate-like superconductivity. Experimental investigations however have largely been limited to the infinite-layer $R$NiO$_2$ ($R$=rare-earth) nickelates. Here, we construct a phase diagram of multi-layer square-planar Nd$_{n+1}$Ni$_n$O$_{2n+2}$ compounds and discover signatures of superconductivity for $n$ = 4 – 8. Upon decreasing the dimensionality $n$, the superconducting anisotropy evolves due to 4$f$ electron effects, and electronic structure characteristics approach cuprate-like behavior. Magnetic fluctuations persist from within the superconducting regime and into the over-doped, non-superconducting regime. Remarkably, the superconducting regime overlaps with that of chemically-doped infinite-layer nickelates, demonstrating underlying commonalities and distinct differences across varying structural realizations of square-planar nickelates. Our work establishes this layered template for creating new nickel-based superconductors. Source arXiv: 2602.19093v1
Engineering quantum criticality and dynamics on an analog-digital simulator Authors Alexandra A. Geim, Nazli Ugur Koyluoglu, Simon J. Evered, Rahul Sahay, Sophie H. Li, Muqing Xu, Dolev Bluvstein, Nik O. Gjonbalaj, Nishad Maskara, Marcin Kalinowski, Tom Manovitz, Ruben Verresen, Susanne F. Yelin, Johannes Feldmeier, Markus Greiner, Vladan Vuletic, Mikhail D. Lukin Published: 02.20.2026 Updated: 02.20.2026 Summary Understanding emergent phenomena in out-of-equilibrium interacting many-body systems is an exciting frontier in physical science. While quantum simulators represent a promising approach to this long-standing problem, in practice it can be challenging to directly realize the required interactions, measure arbitrary observables, and mitigate errors. Here we use coherent mapping between the Rydberg and hyperfine qubits in a neutral atom array simulator to engineer and probe complex quantum dynamics. We combine efficient analog dynamics with fully programmable state preparation and measurement, leverage non-destructive readout for loss information and atomic qubit reuse, and use an atom reservoir for replacing lost atoms. With this analog-digital approach, we first demonstrate dynamical engineering of ring-exchange and particle hopping dynamics via Floquet driving and measure the spectral function of single excitations by evolving initial superposition states. Extending these techniques to a 271-site kagome lattice, we employ closed-loop optimization to target an out-of-equilibrium critical quantum spin liquid of the Rokhsar-Kivelson type. We observe the key features of such a state, including the absence of local order, many-body coherences between nearly equal-amplitude dimer configurations over up to 18 sites, and universal correlations consistent with predictions from field theory. Together, these results pave the way for using dynamical control in analog-digital quantum simulators to study complex quantum many-body systems. Source arXiv: 2602.18555v1
Quantum superresolution and noise spectroscopy with quantum computing Authors James W. Gardner, Federico Belliardo, Gideon Lee, Tuvia Gefen, Liang Jiang Published: 02.19.2026 Updated: 02.19.2026 Summary Quantum metrology of an incoherent signal is a canonical sensing problem related to superresolution and noise spectroscopy. We show that quantum computing can accelerate searches for a weak incoherent signal when the signal and noise are not precisely known. In particular, we consider weak Schur sampling, density matrix exponentiation, and quantum signal processing for testing the rank, purity, and spectral gap of the unknown quantum state to detect the incoherent signal. We show that these algorithms are faster than full-state tomography, which scales with the dimension of the Hilbert space. We apply our results to detecting exoplanets, stochastic gravitational waves, ultralight dark matter, geontropic quantum gravity, and Pauli noise. Source arXiv: 2602.17862v1
Approaching the Limit in Multiparameter AC Magnetometry with Quantum Control Authors Takuya Isogawa, Zhiyao Hu, Ayumi Kanamoto, Nutdech Phadetsuwannukun, Shilin Wang, Shunsuke Nishimura, Boning Li, Liang Jiang, Zain H. Saleem, Guoqing Wang, Haidong Yuan, Paola Cappellaro Published: 02.19.2026 Updated: 02.19.2026 Summary Simultaneously estimating multiple parameters at the ultimate limit is a central challenge in quantum metrology, often hindered by inherent incompatibilities in optimal estimation strategies. At its most extreme, this incompatibility culminates in a fundamental impossibility when the quantum Fisher information matrix (QFIM) becomes singular, rendering joint estimation unattainable. This is the case for a canonical problem: estimating the amplitude and frequency of an AC magnetic field, where the generators are parallel to each other. Here, we introduce a quantum control protocol that resolves this singularity. Our control protocol strategically engineers the sensor’s time evolution so the generators for the two parameters become orthogonal. It not only removes the singularity but also restores the optimal scaling of precision with interrogation time for both parameters simultaneously. We experimentally validate this protocol using a nitrogen-vacancy center in diamond at room temperature, demonstrating the concurrent achievement of the optimal scaling for both parameters under realistic conditions. Source arXiv: 2602.17648v1
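The singular-information problem described above has a simple classical analogue: if the accumulated phase is $\phi = A \int s(t) \sin(\omega t)\,dt$ under a control sequence $s(t)$, then repeating one fixed sequence yields parallel sensitivity vectors for $(A, \omega)$, while adding a differently modulated sequence makes them linearly independent. A toy numerical check of this rank picture (all parameters illustrative, not the paper's NV protocol):

```python
import numpy as np

T, A, w = 1.0, 1.0, 5.0                  # illustrative duration, amplitude, frequency
t = np.linspace(0.0, T, 20001)
dt = t[1] - t[0]

def sensitivities(s):
    """Gradient of the accumulated phase phi(A, w) = A * sum(s * sin(w*t)) * dt
    with respect to (A, w), for control sequence s(t)."""
    dphi_dA = np.sum(s * np.sin(w * t)) * dt
    dphi_dw = A * np.sum(s * t * np.cos(w * t)) * dt
    return np.array([dphi_dA, dphi_dw])

s_const = np.ones_like(t)                # free evolution: no control
s_flip = np.sign(np.sin(w * t))          # pi-pulse-like sign modulation

# Two measurements with the same sequence give parallel rows (a singular
# information matrix); adding the modulated sequence restores rank 2.
J_same = np.stack([sensitivities(s_const), sensitivities(s_const)])
J_ctrl = np.stack([sensitivities(s_const), sensitivities(s_flip)])
```

The rank-deficient `J_same` is the classical shadow of the singular QFIM; the engineered sequence plays the role of the control protocol that orthogonalizes the generators.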
Recirculating Quantum Photonic Networks for Fast Deterministic Quantum Information Processing Authors Emil Grovn, Matias Bundgaard-Nielsen, Jesper Mørk, Dirk Englund, Mikkel Heuck Published: 02.11.2026 Updated: 02.11.2026 Summary A fundamental challenge in photonics-based deterministic quantum information processing is to realize key transformations on time scales shorter than those of detrimental decoherence and loss mechanisms. This challenge has been addressed through device-focused approaches that aim to increase nonlinear interactions relative to decoherence rates. In this work, we adopt a complementary architecture-focused approach by proposing a recirculating quantum photonic network (RQPN) that minimizes the duration of quantum information processing tasks, thereby reducing the requirements on nonlinear interaction rates. The RQPN consists of a network of all-to-all connected nonlinear cavities with dynamically controlled waveguide couplings, and it processes information by capturing a photonic input state, recirculating photons between the cavities, and releasing a photonic output state. We demonstrate the RQPN’s architectural advantage through two examples: first, we show that processing all qubits simultaneously yields faster operations than single- and two-qubit decompositions of the three-qubit Toffoli gate. Second, we demonstrate implementations of a measurement-free correction for single-photon loss, achieving up to seven-fold speedups and significantly improved hardware efficiency relative to state-of-the-art architecture proposals. Our work shows that a single hardware-efficient recirculating architecture substantially reduces the temporal overhead of multi-qubit gates and quantum error correction, thereby lowering the barrier to experimental realizations of deterministic photonic quantum information processing. Source arXiv: 2602.11033v1
LitBench: A Graph-Centric Large Language Model Benchmarking Tool For Literature Tasks Authors Andreas Varvarigos, Ali Maatouk, Jiasheng Zhang, Ngoc Bui, Jialin Chen, Leandros Tassiulas, Rex Ying Published: 02.10.2026 Updated: 02.10.2026 Summary While large language models (LLMs) have become the de facto framework for literature-related tasks, they still struggle to function as domain-specific literature agents due to their inability to connect pieces of knowledge and reason across domain-specific contexts, terminologies, and nomenclatures. This challenge underscores the need for a tool that facilitates such domain-specific adaptation and enables rigorous benchmarking across literature tasks. To that end, we introduce LitBench, a benchmarking tool designed to enable the development and evaluation of domain-specific LLMs tailored to literature-related tasks. At its core, LitBench uses a data curation process that generates domain-specific literature sub-graphs and constructs training and evaluation datasets based on the textual attributes of the resulting nodes and edges. The tool is designed for flexibility, supporting the curation of literature graphs across any domain chosen by the user, whether high-level fields or specialized interdisciplinary areas. In addition to dataset curation, LitBench defines a comprehensive suite of literature tasks, ranging from node and edge level analyses to advanced applications such as related work generation. These tasks enable LLMs to internalize domain-specific knowledge and relationships embedded in the curated graph during training, while also supporting rigorous evaluation of model performance. Our results show that small domain-specific LLMs trained and evaluated on LitBench datasets achieve competitive performance compared to state-of-the-art models like GPT-4o and DeepSeek-R1. 
To enhance accessibility and ease of use, we open-source the tool along with an AI agent tool that streamlines data curation, model training, and evaluation. Source arXiv: 2603.00051v1
Quantization-aware Photonic Homodyne computing for Accelerated Artificial Intelligence and Scientific Simulation Authors Lian Zhou, Kaiwen Xue, Amirhossein Fallah, Lijin Liu, Chun-Ho Lee, Kiwon Kwon, Clayton Cheung, Yuan Li, Yue Yu, Yun-Jhu Lee, Songlin Zhao, Ryan Hamerly, Edo Waks, Dirk Englund, Constantine Sideris, Mengjie Yu, Zaijun Chen Published: 02.09.2026 Updated: 02.09.2026 Summary Modern problems in high-performance computing, ranging from training and inferencing deep learning models in computer vision and language models to simulating complex physical systems with nonlinearly-coupled equations, require exponential growth of computational resources. Photonic analog systems are emerging with solutions of intrinsic parallelism, high bandwidth, and low propagation loss. However, their application has been hindered by low analog accuracy caused by electro-optic distortion, material nonlinearities, and limited signal-to-noise ratios. Here we overcome this barrier with a quantization-aware digital-photonic mixed-precision framework across chiplets for accelerated AI processing and physical simulation. Using Lithium Niobate photonics with channel equalization techniques, we demonstrate linear multiplication (9-bit amplitude-phase decoupling) in homodyne optical logics with 6-bit precision at the clock rate of 128 giga-symbol-per-second (128 GS/s), enabling AI processing with 6 ns latency. Co-designed hardware-algorithm methods, including iterative solvers, sparse-dense quantization, and bit-sliced matrix multiplication, exploit photonic amplitude and phase coherence for complex-valued, physics-inspired computation. In electromagnetic problems, our approach yields 12-bit solutions for partial differential equations (PDEs) in scattering problems that would conventionally require up to 32-bit and often even 64-bit precision.
These results preserve digital-level fidelity while leveraging the high-speed low-energy photonic hardware, establishing a pathway toward general-purpose optical acceleration for generative artificial intelligence, real-time robotics, and accurate simulation for climate challenges and biological discoveries. Source arXiv: 2602.08269v1
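One of the codesign techniques named above, bit-sliced matrix multiplication, can be illustrated classically: an integer operand is decomposed into 0/1 bit planes so that each partial product needs only low-precision accumulation (the analog-friendly part), and full precision is recovered by a digitally weighted sum over planes. A minimal sketch, with the function name and 6-bit width chosen for illustration:

```python
import numpy as np

def bit_sliced_matmul(A, B, bits=6):
    """Multiply integer matrices by slicing A into 0/1 bit planes.

    Each plane-times-B product needs only low-precision accumulation;
    the exact result is recovered by the digitally weighted sum over
    planes. Name and bit width are illustrative.
    """
    A = np.asarray(A, dtype=np.int64)
    B = np.asarray(B, dtype=np.int64)
    out = np.zeros((A.shape[0], B.shape[1]), dtype=np.int64)
    for b in range(bits):
        plane = (A >> b) & 1           # b-th bit plane of A, entries in {0, 1}
        out += (plane @ B) << b        # low-precision product, rescaled by 2^b
    return out

A = np.array([[3, 5], [2, 7]])
B = np.array([[1, 4], [6, 2]])
C = bit_sliced_matmul(A, B)
```

For operands below $2^{\text{bits}}$ the decomposition is exact, which is why the recombination can preserve digital-level fidelity even when each analog partial product carries few bits.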
HypRAG: Hyperbolic Dense Retrieval for Retrieval Augmented Generation Authors Hiren Madhu, Ngoc Bui, Ali Maatouk, Leandros Tassiulas, Smita Krishnaswamy, Menglin Yang, Sukanta Ganguly, Kiran Srinivasan, Rex Ying Published: 02.08.2026 Updated: 02.08.2026 Summary Embedding geometry plays a fundamental role in retrieval quality, yet dense retrievers for retrieval-augmented generation (RAG) remain largely confined to Euclidean space. However, natural language exhibits hierarchical structure from broad topics to specific entities that Euclidean embeddings fail to preserve, causing semantically distant documents to appear spuriously similar and increasing hallucination risk. To address these limitations, we introduce hyperbolic dense retrieval, developing two model variants in the Lorentz model of hyperbolic space: HyTE-FH, a fully hyperbolic transformer, and HyTE-H, a hybrid architecture projecting pre-trained Euclidean embeddings into hyperbolic space. To prevent representational collapse during sequence aggregation, we introduce the Outward Einstein Midpoint, a geometry-aware pooling operator that provably preserves hierarchical structure. On MTEB, HyTE-FH outperforms equivalent Euclidean baselines, while on RAGBench, HyTE-H achieves up to 29% gains over Euclidean baselines in context relevance and answer relevance using substantially smaller models than current state-of-the-art retrievers. Our analysis also reveals that hyperbolic representations encode document specificity through norm-based separation, with over 20% radial increase from general to specific concepts, a property absent in Euclidean embeddings, underscoring the critical role of geometric inductive bias in faithful RAG systems. Source arXiv: 2602.07739v1
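The Lorentz-model machinery referenced above can be sketched with the standard constructions: lifting Euclidean vectors onto the hyperboloid, the geodesic distance, and Einstein-midpoint pooling computed in the Klein model. Note this is the generic Einstein midpoint, not the paper's Outward variant, and all function names are illustrative:

```python
import numpy as np

def lift_to_lorentz(v):
    """Lift a Euclidean vector onto the hyperboloid -x0^2 + |x|^2 = -1."""
    v = np.asarray(v, dtype=float)
    return np.concatenate(([np.sqrt(1.0 + v @ v)], v))

def lorentz_distance(x, y):
    """Geodesic distance d(x, y) = arccosh(-<x, y>_L)."""
    inner = -x[0] * y[0] + x[1:] @ y[1:]
    return np.arccosh(np.clip(-inner, 1.0, None))

def klein_to_lorentz(v):
    g = 1.0 / np.sqrt(1.0 - v @ v)       # Lorentz factor of a Klein point
    return np.concatenate(([g], g * v))

def einstein_midpoint(points):
    """Pool Lorentz points via the Einstein midpoint, computed in the
    Klein model with Lorentz-factor weights (generic version, not the
    paper's Outward variant)."""
    points = np.asarray(points, dtype=float)
    klein = points[:, 1:] / points[:, :1]            # Lorentz -> Klein
    gamma = 1.0 / np.sqrt(1.0 - np.sum(klein**2, axis=1))
    mid = (gamma[:, None] * klein).sum(axis=0) / gamma.sum()
    return klein_to_lorentz(mid)

x = lift_to_lorentz([0.3, -0.1])
y = lift_to_lorentz([-0.2, 0.4])
m = einstein_midpoint([x, y])
```

The midpoint stays on the hyperboloid, which is the collapse-avoidance property a geometry-aware pooling operator needs: naive Euclidean averaging of Lorentz points leaves the manifold.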
Fin-RATE: A Real-world Financial Analytics and Tracking Evaluation Benchmark for LLMs on SEC Filings Authors Yidong Jiang, Junrong Chen, Eftychia Makri, Jialin Chen, Peiwen Li, Ali Maatouk, Leandros Tassiulas, Eliot Brenner, Bing Xiang, Rex Ying Published: 02.07.2026 Updated: 02.07.2026 Summary With the increasing deployment of Large Language Models (LLMs) in the finance domain, LLMs are increasingly expected to parse complex regulatory disclosures. However, existing benchmarks often focus on isolated details, failing to reflect the complexity of professional analysis that requires synthesizing information across multiple documents, reporting periods, and corporate entities. They do not distinguish whether errors stem from retrieval failures, generation flaws, finance-specific reasoning mistakes, or misunderstanding of the query or context. This makes it difficult to pinpoint performance bottlenecks. To bridge these gaps, we introduce Fin-RATE, a benchmark built on U.S. Securities and Exchange Commission (SEC) filings and mirroring financial analyst workflows through three pathways: detail-oriented reasoning within individual disclosures, cross-entity comparison under shared topics, and longitudinal tracking of the same firm across reporting periods. We benchmark 17 leading LLMs, spanning open-source, closed-source, and finance-specialized models, under both ground-truth context and retrieval-augmented settings. Results show substantial performance degradation, with accuracy dropping by 18.60% and 14.35% as tasks shift from single-document reasoning to longitudinal and cross-entity analysis. This is driven by rising comparison hallucinations, time and entity mismatches, and mirrored by declines in reasoning and factuality–limitations that prior benchmarks have yet to formally categorize or quantify. Source arXiv: 2602.07294v1
Fin-RATE: A Real-world Financial Analytics and Tracking Evaluation Benchmark for LLMs on SEC Filings Authors Yidong Jiang, Junrong Chen, Eftychia Makri, Jialin Chen, Peiwen Li, Ali Maatouk, Leandros Tassiulas, Eliot Brenner, Bing Xiang, Rex Ying Published: 02.07.2026 Updated: 02.12.2026 Summary With the increasing deployment of Large Language Models (LLMs) in the finance domain, LLMs are increasingly expected to parse complex regulatory disclosures. However, existing benchmarks often focus on isolated details, failing to reflect the complexity of professional analysis that requires synthesizing information across multiple documents, reporting periods, and corporate entities. Furthermore, these benchmarks do not disentangle whether errors arise from retrieval failures, generation inaccuracies, domain-specific reasoning mistakes, or misinterpretation of the query or context, making it difficult to precisely diagnose performance bottlenecks. To bridge these gaps, we introduce Fin-RATE, a benchmark built on U.S. Securities and Exchange Commission (SEC) filings and mirroring financial analyst workflows through three pathways: detail-oriented reasoning within individual disclosures, cross-entity comparison under shared topics, and longitudinal tracking of the same firm across reporting periods. We benchmark 17 leading LLMs, spanning open-source, closed-source, and finance-specialized models, under both ground-truth context and retrieval-augmented settings. Results show substantial performance degradation, with accuracy dropping by 18.60% and 14.35% as tasks shift from single-document reasoning to longitudinal and cross-entity analysis. This degradation is driven by increased comparison hallucinations, temporal and entity mismatches, and is further reflected in declines in reasoning quality and factual consistency–limitations that existing benchmarks have yet to formally categorize or quantify. Source arXiv: 2602.07294v2
InftyThink+: Effective and Efficient Infinite-Horizon Reasoning via Reinforcement Learning Authors Yuchen Yan, Liang Jiang, Jin Jiang, Shuaicheng Li, Zujie Wen, Zhiqiang Zhang, Jun Zhou, Jian Shao, Yueting Zhuang, Yongliang Shen Published: 02.06.2026 Updated: 02.09.2026 Summary Large reasoning models achieve strong performance by scaling inference-time chain-of-thought, but this paradigm suffers from quadratic cost, context length limits, and degraded reasoning due to lost-in-the-middle effects. Iterative reasoning mitigates these issues by periodically summarizing intermediate thoughts, yet existing methods rely on supervised learning or fixed heuristics and fail to optimize when to summarize, what to preserve, and how to resume reasoning. We propose InftyThink+, an end-to-end reinforcement learning framework that optimizes the entire iterative reasoning trajectory, building on model-controlled iteration boundaries and explicit summarization. InftyThink+ adopts a two-stage training scheme with supervised cold-start followed by trajectory-level reinforcement learning, enabling the model to learn strategic summarization and continuation decisions. Experiments on DeepSeek-R1-Distill-Qwen-1.5B show that InftyThink+ improves accuracy by 21% on AIME24 and outperforms conventional long chain-of-thought reinforcement learning by a clear margin, while also generalizing better to out-of-distribution benchmarks. Moreover, InftyThink+ significantly reduces inference latency and accelerates reinforcement learning training, demonstrating improved reasoning efficiency alongside stronger performance. Source arXiv: 2602.06960v2
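The iterative-reasoning loop that InftyThink+ optimizes can be caricatured as a simple control loop: reason until a boundary, summarize, and resume from the compressed context. The sketch below uses a hypothetical `model` callable and hard-coded boundary logic; in the paper, both the boundary and summarization decisions are learned via reinforcement learning:

```python
def iterative_reason(model, question, max_rounds=8):
    """Sketch of iterative reasoning with bounded context.

    `model` is a hypothetical callable returning (text, is_final).
    Instead of growing one long chain-of-thought, each round carries
    only a summary forward, keeping the context length bounded.
    """
    context = question
    text = ""
    for _ in range(max_rounds):
        text, is_final = model(context)
        if is_final:
            return text
        # Replace the accumulated chain-of-thought with a short summary.
        context = question + "\nSummary so far: " + text
    return text

# Stub model for demonstration: finishes on the second round.
calls = {"n": 0}
def stub(ctx):
    calls["n"] += 1
    return ("42", calls["n"] >= 2)

answer = iterative_reason(stub, "What is 6*7?")
```

Because the context is rebuilt from a summary each round, per-round cost stays roughly constant rather than growing quadratically with total reasoning length.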
Efficient learning of logical noise from syndrome data Authors Han Zheng, Chia-Tung Chu, Senrui Chen, Argyris Giannisis Manes, Su-un Lee, Sisi Zhou, Liang Jiang Published: 01.29.2026 Updated: 01.29.2026 Summary Characterizing errors in quantum circuits is essential for device calibration, yet detecting rare error events requires a large number of samples. This challenge is particularly severe in calibrating fault-tolerant, error-corrected circuits, where logical error probabilities are suppressed to higher order relative to physical noise and are therefore difficult to calibrate through direct logical measurements. Recently, Wagner et al. [PRL 130, 200601 (2023)] showed that, for phenomenological Pauli noise models, the logical channel can instead be inferred from syndrome measurement data generated during error correction. Here, we extend this framework to realistic circuit-level noise models. From a unified code-theoretic perspective and spacetime code formalism, we derive necessary and sufficient conditions for learning the logical channel from syndrome data alone and explicitly characterize the learnable degrees of freedom of circuit-level Pauli faults. Using Fourier analysis and compressed sensing, we develop efficient estimators with provable guarantees on sample complexity and computational cost. We further present an end-to-end protocol and demonstrate its performance on several syndrome-extraction circuits, achieving orders-of-magnitude sample-complexity savings over direct logical benchmarking. Our results establish syndrome-based learning as a practical approach to characterizing the logical channel in fault-tolerant quantum devices. Source arXiv: 2601.22286v1
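As a toy classical analogue of inferring noise from syndrome data, consider i.i.d. bit-flip errors checked by a parity-check matrix $H$: each syndrome bit satisfies $\mathbb{E}[(-1)^{s_j}] = \prod_i (1 - 2p_i)$ over the support of check $j$, so the logarithms of the estimated expectations are linear in $\log(1 - 2p_i)$. The rates and check matrix below are illustrative, and this is far simpler than the circuit-level Pauli setting of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
p_true = np.array([0.05, 0.10, 0.02])    # assumed per-bit flip rates
H = np.array([[1, 1, 0],                 # illustrative parity checks
              [0, 1, 1],
              [1, 0, 1]])

# Simulate syndromes s = H e (mod 2) for i.i.d. bit-flip errors e.
shots = 200_000
errors = (rng.random((shots, 3)) < p_true).astype(int)
syndromes = errors @ H.T % 2

# Fourier view: E[(-1)^s_j] = prod over check j of (1 - 2 p_i), so the
# log of each estimated expectation is linear in log(1 - 2 p_i).
expect = 1.0 - 2.0 * syndromes.mean(axis=0)
log_q = np.linalg.solve(H.astype(float), np.log(expect))
p_est = (1.0 - np.exp(log_q)) / 2.0
```

No error is ever observed directly; the rates are recovered purely from syndrome statistics, which is the spirit of the syndrome-based learning framework.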
Hierarchy of discriminative power and complexity in learning quantum ensembles Authors Jian Yao, Pengtao Li, Xiaohui Chen, Quntao Zhuang Published: 01.29.2026 Updated: 01.29.2026 Summary Distance metrics are central to machine learning, yet distances between ensembles of quantum states remain poorly understood due to fundamental quantum measurement constraints. We introduce a hierarchy of integral probability metrics, termed MMD-$k$, which generalizes the maximum mean discrepancy to quantum ensembles and exhibits a strict trade-off between discriminative power and statistical efficiency as the moment order $k$ increases. For pure-state ensembles of size $N$, estimating MMD-$k$ using experimentally feasible SWAP-test-based estimators requires $\Theta(N^{2-2/k})$ samples for constant $k$, and $\Theta(N^3)$ samples to achieve full discriminative power at $k = N$. In contrast, the quantum Wasserstein distance attains full discriminative power with $\Theta(N^2 \log N)$ samples. These results provide principled guidance for the design of loss functions in quantum machine learning, which we illustrate in the training of quantum denoising diffusion probabilistic models. Source arXiv: 2601.22005v1
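A classical sketch of an MMD between pure-state ensembles uses the overlap kernel $|\langle\psi|\phi\rangle|^{2m}$, where the squared overlap is exactly the quantity a SWAP test estimates. The moment order $m$ below is only a stand-in for the paper's $k$, whose precise definition may differ:

```python
import numpy as np

def overlap_kernel(psis, phis, m=1):
    """K[a, b] = |<psi_a|phi_b>|^(2m); the squared overlap |<psi|phi>|^2
    is what a SWAP test estimates, so K is experimentally accessible."""
    return np.abs(psis.conj() @ phis.T) ** (2 * m)

def mmd2(ens_a, ens_b, m=1):
    """Biased (V-statistic) MMD^2 between two pure-state ensembles,
    given as rows of normalized complex amplitude vectors."""
    return (overlap_kernel(ens_a, ens_a, m).mean()
            + overlap_kernel(ens_b, ens_b, m).mean()
            - 2.0 * overlap_kernel(ens_a, ens_b, m).mean())

def random_states(n, dim, rng):
    v = rng.normal(size=(n, dim)) + 1j * rng.normal(size=(n, dim))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

rng = np.random.default_rng(1)
A = random_states(50, 4, rng)
B = random_states(30, 4, rng)
d_ab = mmd2(A, B)
```

The overlap kernel is positive semidefinite (it is a Gram matrix of vectorized projectors, and entrywise powers preserve this), so the biased estimator is nonnegative and vanishes for identical ensembles.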
Multi-Modal Time Series Prediction via Mixture of Modulated Experts Authors Lige Zhang, Ali Maatouk, Jialin Chen, Leandros Tassiulas, Rex Ying Published: 01.29.2026 Updated: 01.29.2026 Summary Real-world time series exhibit complex and evolving dynamics, making accurate forecasting extremely challenging. Recent multi-modal forecasting methods leverage textual information such as news reports to improve prediction, but most rely on token-level fusion that mixes temporal patches with language tokens in a shared embedding space. However, such fusion can be ill-suited when high-quality time-text pairs are scarce and when time series exhibit substantial variation in scale and characteristics, thus complicating cross-modal alignment. In parallel, Mixture-of-Experts (MoE) architectures have proven effective for both time series modeling and multi-modal learning, yet many existing MoE-based modality integration methods still depend on token-level fusion. To address this, we propose Expert Modulation, a new paradigm for multi-modal time series prediction that conditions both routing and expert computation on textual signals, enabling direct and efficient cross-modal control over expert behavior. Through comprehensive theoretical analysis and experiments, our proposed method demonstrates substantial improvements in multi-modal time series prediction. The current code is available at https://github.com/BruceZhangReve/MoME Source arXiv: 2601.21547v1
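The summary does not spell out Expert Modulation's architecture, so the following is only a generic guess at the idea: a router whose logits depend on both the time-series hidden state and a text embedding, with a FiLM-style scale/shift derived from the text applied to every expert's output. All shapes and parameter names are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_txt, n_exp = 8, 16, 4               # hidden size, text-embedding size, experts

# Invented parameters: routing conditioned on state + text, plus a
# FiLM-style scale/shift derived from the text embedding.
W_route = 0.1 * rng.normal(size=(n_exp, d))
U_route = 0.1 * rng.normal(size=(n_exp, d_txt))
W_exp = 0.1 * rng.normal(size=(n_exp, d, d))
W_film = 0.1 * rng.normal(size=(2 * d, d_txt))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def moe_step(h, text_emb):
    """One mixture-of-modulated-experts step: the text embedding shifts
    the routing logits and rescales/offsets every expert's output."""
    gate = softmax(W_route @ h + U_route @ text_emb)
    scale, shift = np.split(W_film @ text_emb, 2)
    outs = np.stack([(1.0 + scale) * (W_exp[i] @ h) + shift
                     for i in range(n_exp)])
    return gate @ outs                    # gate-weighted combination

y = moe_step(rng.normal(size=d), rng.normal(size=d_txt))
```

The point of conditioning at the gating and modulation level, rather than mixing language tokens into the temporal sequence, is that no token-level cross-modal alignment is required.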
Transversal gates for quantum CSS codes Authors Eduardo Camps-Moreno, Hiram H. López, Gretchen L. Matthews, Narayanan Rengaswamy, Rodrigo San-José Published: 01.29.2026 Updated: 01.29.2026 Summary In this paper, we focus on the problem of computing the set of diagonal transversal gates fixing a CSS code. We determine the logical actions of the gates as well as the groups of transversal gates that induce non-trivial logical gates and logical identities. We explicitly declare the set of equations defining the groups, a key advantage and differentiator of our approach. We compute the complete set of transversal stabilizers and transversal gates for any CSS code arising from monomial codes, a family that includes decreasing monomial codes and polar codes. As a consequence, we recover and extend some results in the literature on CSS-T codes, triorthogonal codes, and divisible codes. Source arXiv: 2601.21514v1
In-situ benchmarking of fault-tolerant quantum circuits. I. Clifford circuits Authors Xiao Xiao, Dominik Hangleiter, Dolev Bluvstein, Mikhail D. Lukin, Michael J. Gullans Published: 01.29.2026 Updated: 01.29.2026 Summary Benchmarking physical devices and verifying logical algorithms are important tasks for scalable fault-tolerant quantum computing. Numerous protocols exist for benchmarking devices before running actual algorithms. In this work, we show that both physical and logical errors of fault-tolerant circuits can even be characterized in-situ using syndrome data. To achieve this, we map general fault-tolerant Clifford circuits to subsystem codes using the spacetime code formalism and develop a scheme for estimating Pauli noise in Clifford circuits using syndrome data. We give necessary and sufficient conditions for the learnability of physical and logical noise from given syndrome data, and show that we can accurately predict logical fidelities from the same data. Importantly, our approach requires only a polynomial sample size, even when the logical error rate is exponentially suppressed by the code distance, and thus gives an exponential advantage against methods that use only logical data such as direct fidelity estimation. We demonstrate the practical applicability of our methods in various scenarios using synthetic data as well as the experimental data from a recent demonstration of fault-tolerant circuits by Bluvstein et al. [Nature 626, 7997 (2024)]. Our methods provide an efficient, in-situ way of characterizing a fault-tolerant quantum computer to help gate calibration, improve decoding accuracy, and verify logical circuits. Source arXiv: 2601.21472v1
In-situ benchmarking of fault-tolerant quantum circuits. I. Clifford circuits Authors Xiao Xiao, Dominik Hangleiter, Dolev Bluvstein, Mikhail D. Lukin, Michael J. Gullans Published: 01.29.2026 Updated: 02.04.2026 Summary Benchmarking physical devices and verifying logical algorithms are important tasks for scalable fault-tolerant quantum computing. Numerous protocols exist for benchmarking devices before running actual algorithms. In this work, we show that both physical and logical errors of fault-tolerant circuits can even be characterized in-situ using syndrome data. To achieve this, we map general fault-tolerant Clifford circuits to subsystem codes using the spacetime code formalism and develop a scheme for estimating Pauli noise in Clifford circuits using syndrome data. We give necessary and sufficient conditions for the learnability of physical and logical noise from given syndrome data, and show that we can accurately predict logical fidelities from the same data. Importantly, our approach requires only a polynomial sample size, even when the logical error rate is exponentially suppressed by the code distance, and thus gives an exponential advantage against methods that use only logical data such as direct fidelity estimation. We demonstrate the practical applicability of our methods in various scenarios using synthetic data as well as the experimental data from a recent demonstration of fault-tolerant circuits by Bluvstein et al. [Nature 626, 7997 (2024)]. Our methods provide an efficient, in-situ way of characterizing a fault-tolerant quantum computer to help gate calibration, improve decoding accuracy, and verify logical circuits. Source arXiv: 2601.21472v2
Entangling logical qubits without physical operations Authors Jin Ming Koh, Anqi Gong, Andrei C. Diaconu, Daniel Bochen Tan, Alexandra A. Geim, Michael J. Gullans, Norman Y. Yao, Mikhail D. Lukin, Shayan Majidy Published: 01.28.2026 Updated: 01.28.2026 Summary Fault-tolerant logical entangling gates are essential for scalable quantum computing, but are limited by the error rates and overheads of physical two-qubit gates and measurements. To address this limitation, we introduce phantom codes: quantum error-correcting codes that realize entangling gates between all logical qubits in a code block purely through relabelling of physical qubits during compilation, yielding perfect fidelity with no spatial or temporal overhead. We present a systematic study of such codes. First, we identify phantom codes using complementary numerical and analytical approaches. We exhaustively enumerate all $2.71 \times 10^{10}$ inequivalent CSS codes up to $n=14$ and identify additional instances up to $n=21$ via SAT-based methods. We then construct higher-distance phantom-code families using quantum Reed-Muller codes and the binarization of qudit codes. Across all identified codes, we characterize other supported fault-tolerant logical Clifford and non-Clifford operations. Second, through end-to-end noisy simulations with state preparation, full QEC cycles, and realistic physical error rates, we demonstrate scalable advantages of phantom codes over the surface code across multiple tasks. We observe a one-to-two order-of-magnitude reduction in logical infidelity at comparable qubit overhead for GHZ-state preparation and Trotterized many-body simulation tasks, given a modest preselection acceptance rate.
Our work establishes phantom codes as a viable architectural route to fault-tolerant quantum computation with scalable benefits for workloads with dense local entangling structure, and introduces general tools for systematically exploring the broader landscape of quantum error-correcting codes. Source arXiv: 2601.20927v1
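The core mechanism, a logical gate implemented purely by relabelling physical qubits, can be illustrated on the well-known [[4,2,2]] code, where swapping two physical qubits enacts a logical SWAP. This is only a toy analogue: SWAP is not entangling, whereas phantom codes realize entangling gates this way, and the generator choice below is one standard convention rather than anything taken from the paper.

```python
# Stabilizers and logical operators of the [[4,2,2]] code (a standard choice).
stabilizers = {"XXXX", "ZZZZ"}
logicals = {"X1": "XXII", "Z1": "ZIZI", "X2": "XIXI", "Z2": "ZZII"}

def relabel(pauli, perm):
    # Apply a physical-qubit permutation to a Pauli string.
    return "".join(pauli[perm[i]] for i in range(len(pauli)))

perm = [0, 2, 1, 3]   # swap physical qubits 2 and 3 (0-indexed 1 and 2)

# The stabilizer group is preserved, so the relabelling is a valid logical op.
assert {relabel(s, perm) for s in stabilizers} == stabilizers

# The relabelling exchanges the two logical qubits: a logical SWAP for free.
mapped = {k: relabel(p, perm) for k, p in logicals.items()}
assert mapped["X1"] == logicals["X2"] and mapped["Z1"] == logicals["Z2"]
```

Because the "gate" is pure bookkeeping at compile time, it costs no physical operations and introduces no error, which is the property phantom codes extend to entangling logical gates.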
Foundry-Enabled Patterning of Diamond Quantum Microchiplets for Scalable Quantum Photonics Authors Jawaher Almutlaq, Alessandro Buzzi, Anders Khaykin, Linsen Li, William Yzaguirre, Maxim Sirotin, Gerald Gilbert, Genevieve Clark, Dirk Englund Published: 01.27.2026 Updated: 01.27.2026 Summary Quantum technologies promise secure communication networks and powerful new forms of information processing, but building these systems at scale remains a major challenge. Diamond is an especially attractive material for quantum devices because it can host atomic-scale defects that emit single photons and store quantum information with exceptional stability. However, fabricating the optical structures needed to control light in diamond typically relies on slow, bespoke processes that are difficult to scale. In this work, we introduce a manufacturing approach that brings diamond quantum photonics closer to industrial production. Instead of sequentially defining each device by lithography written directly on diamond, we fabricate high-precision silicon masks using commercial semiconductor foundries and transfer them onto diamond via microtransfer printing. These masks define large arrays of nanoscale optical structures, shifting the most demanding pattern-definition steps away from the diamond substrate, improving uniformity, yield, and throughput. Using this method, we demonstrate hundreds of diamond “quantum microchiplets” with improved optical performance and controlled interaction with quantum emitters. The chiplet format allows defective devices to be replaced and enables integration with existing photonic and electronic circuits. Our results show that high-quality diamond quantum devices can be produced using scalable, foundry-compatible techniques. This approach provides a practical pathway toward large-scale quantum photonic systems and hybrid quantum-classical technologies built on established semiconductor manufacturing infrastructure. Source arXiv: 2601.20025v1
Quickest Change Detection in Discrete-Time in Presence of a Covert Adversary Authors Amir Reza Ramtin, Philippe Nain, Don Towsley Published: 01.27.2026 Updated: 02.16.2026 Summary We study the problem of covert quickest change detection in a discrete-time setting, where a sequence of observations undergoes a distributional change at an unknown time. Unlike classical formulations, we consider a covert adversary who has knowledge of the detector’s false alarm constraint parameter $γ$ and selects a stationary post-change distribution that depends on it, seeking to remain undetected for as long as possible. Building on the theoretical foundations of the CuSum procedure, we rigorously characterize the asymptotic behavior of the average detection delay (ADD) and the average time to false alarm (AT2FA) when the post-change distribution converges to the pre-change distribution as $γ \to \infty$. Our analysis establishes exact asymptotic expressions for these quantities, extending and refining classical results that no longer hold in this regime. We identify the critical scaling laws governing covert behavior and derive explicit conditions under which an adversary can maintain covertness, defined by ADD = $Θ(γ)$, whereas in the classical setting, ADD grows only as $\mathcal{O}(\log γ)$. In particular, for Gaussian and Exponential models under adversarial perturbations of their respective parameters, we asymptotically characterize ADD as a function of the Kullback–Leibler divergence between the pre- and post-change distributions and $γ$. Source arXiv: 2601.20022v2
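The classical baseline the paper contrasts against, ADD growing like b/KL with CuSum threshold b = log γ, is easy to reproduce in a toy Gaussian mean-shift simulation. This is a sketch under standard assumptions: θ and γ are illustrative, and the adversary's γ-dependent post-change distribution is not modeled.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, gamma = 1.0, 1000.0          # illustrative post-change mean, AT2FA target
b = np.log(gamma)                   # CuSum threshold for false-alarm level γ
kl = theta ** 2 / 2                 # KL divergence D(N(θ,1) || N(0,1))

delays = []
for _ in range(2000):
    w, t = 0.0, 0
    while w < b:                    # CuSum recursion on log-likelihood ratios
        x = rng.normal(theta, 1.0)  # change happens at time 0 in this toy run
        w = max(0.0, w + theta * x - theta ** 2 / 2)
        t += 1
    delays.append(t)

add = sum(delays) / len(delays)     # average detection delay ≈ b / KL
```

With θ = 1 and γ = 1000 the simulated delay lands near b/KL ≈ 13.8 attempts (plus a small overshoot term), i.e. logarithmic in γ; the covert adversary's achievement in the paper is forcing this up to Θ(γ).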
Multivariate Multicycle Codes for Complete Single-Shot Decoding Authors Feroz Ahmed Mian, Owen Gwilliam, Stefan Krastanov Published: 01.26.2026 Updated: 01.26.2026 Summary We introduce multivariate multicycle (MM) codes, a new family of quantum error correcting codes that unifies and generalizes bivariate bicycle codes, multivariate bicycle codes, abelian two-block group algebra codes, generalized bicycle codes, trivariate tricycle codes, and n-dimensional toric codes. MM codes are Calderbank-Shor-Steane (CSS) codes defined from length-t chain complexes with $t \ge 4$. The chief advantage of these codes is that they possess metachecks and high confinement that permit complete single-shot decoding, while also having additional algebraic structure that might enable logical non-Clifford gates. We offer a framework that facilitates the construction of long-length chain complexes through the use of the Koszul complex. In particular, obtaining explicit boundary maps (parity check and metacheck matrices) is particularly straightforward in our approach. This simple but very general parameterization of codes permitted us to efficiently perform a numerical search, where we identify several MM code candidates that demonstrate these capabilities at high rates and high code distances. Examples of new codes with parameters $[[n,k,d]]$ include $[[96, 12, 8]]$, $[[96, 44, 4]]$, $[[144, 40, 4]]$, $[[216, 12, 12]]$, $[[360, 30, 6]]$, $[[384, 80, 4]]$, $[[486, 24, 12]]$, $[[486, 66, 9]]$ and $[[648, 60, 9]]$. Notably, our codes achieve confinement profiles that surpass all known single-shot decodable quantum CSS codes of practical blocksize. Source arXiv: 2601.18879v1
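One way to see how long chain complexes furnish metachecks is to build them by iterated tensor products of short complexes over F_2, in the spirit of the Koszul construction. The sketch below assembles a length-4 complex from four copies of a 3-bit cyclic repetition complex and checks d_k d_{k+1} = 0, which simultaneously gives the CSS commutation condition and metachecks on both sides. This is an illustrative construction, not the paper's specific codes, though the middle space here happens to have dimension 486.

```python
import numpy as np
from functools import reduce

def circulant(row):
    # Binary circulant matrix from its first row.
    return np.array([np.roll(row, k) for k in range(len(row))]) % 2

def dims(cx):
    # Space dimensions [dim C_0, ..., dim C_t] of a complex given as [d_1, ..., d_t].
    return [cx[0].shape[0]] + [d.shape[1] for d in cx]

def tensor(A, B):
    # Total complex of the tensor product over F_2 (no signs needed mod 2).
    dA, dB = dims(A), dims(B)
    la, lb = len(A), len(B)
    out = []
    for k in range(1, la + lb + 1):
        src = [(i, k - i) for i in range(k + 1) if i <= la and 0 <= k - i <= lb]
        dst = [(i, k - 1 - i) for i in range(k) if i <= la and 0 <= k - 1 - i <= lb]
        coff = np.concatenate(([0], np.cumsum([dA[i] * dB[j] for i, j in src])))
        roff = np.concatenate(([0], np.cumsum([dA[i] * dB[j] for i, j in dst])))
        pos = {p: t for t, p in enumerate(dst)}
        M = np.zeros((roff[-1], coff[-1]), dtype=int)
        for s, (i, j) in enumerate(src):
            if (i - 1, j) in pos:               # d_A (x) id component
                t = pos[(i - 1, j)]
                M[roff[t]:roff[t + 1], coff[s]:coff[s + 1]] += np.kron(
                    A[i - 1], np.eye(dB[j], dtype=int))
            if (i, j - 1) in pos:               # id (x) d_B component
                t = pos[(i, j - 1)]
                M[roff[t]:roff[t + 1], coff[s]:coff[s + 1]] += np.kron(
                    np.eye(dA[i], dtype=int), B[j - 1])
        out.append(M % 2)
    return out

rep = [circulant([1, 1, 0])]                # length-1 repetition-code complex
C4 = reduce(tensor, [rep, rep, rep, rep])   # length-4 complex, qubits at degree 2

# d_k d_{k+1} = 0: CSS commutation plus metachecks for both check types.
assert all(((C4[k] @ C4[k + 1]) % 2 == 0).all() for k in range(3))
```

Placing qubits at degree 2, H_X = d_2 with metachecks M_X = d_1 (so M_X H_X = 0), and H_Z = d_3^T with metachecks M_Z = d_4^T, which is the structure that permits complete single-shot decoding.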
Upper bounds on the purity of Wigner positive quantum states that verify the Wigner entropy conjecture Authors Qipeng Qian, Christos Gagatsos Published: 01.23.2026 Updated: 01.23.2026 Summary We present analytical results toward the Wigner entropy conjecture, which posits that among all physical Wigner non-negative states the Wigner entropy is minimized by pure Gaussian states for which it attains the value $1+\ln π$. Working under a minimal set of constraints on the Wigner function, namely, non-negativity, normalization, and the pointwise bound $π W \le 1$, we construct an explicit hierarchy of lower bounds $B_n$ on $S[W]$ by combining a truncated series lower bound for $-\ln x$ with moment identities of the Wigner function. This yields closed-form purity-based sufficient conditions ensuring $S[W] \ge 1+\ln π$. In particular, we first prove that all Wigner non-negative states with $μ \le 4-2\sqrt{3}$ satisfy the Wigner entropy conjecture. We further obtain a systematic purity-only relaxation of the hierarchy, yielding the simple sufficient condition $μ \le 2/e$. Beyond these results, our analysis clarifies why additional physicality constraints are necessary for purity-based approaches that aim to approach the extremal case $μ \leq 1$. Source arXiv: 2601.16898v1
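The conjectured extremal case can be checked numerically: for the vacuum Wigner function W(x,p) = e^{-(x²+p²)}/π (ħ = 1 conventions assumed), the Wigner entropy evaluates to exactly 1 + ln π and the purity to μ = 1.

```python
import numpy as np

L, N = 8.0, 1601
x = np.linspace(-L, L, N)
dA = (x[1] - x[0]) ** 2
X, P = np.meshgrid(x, x)

W = np.exp(-(X ** 2 + P ** 2)) / np.pi        # vacuum Wigner function

norm = float(W.sum() * dA)                    # normalization: ∫ W dx dp = 1
S = float(-(W * np.log(W)).sum() * dA)        # Wigner entropy S[W] = 1 + ln π
mu = float(2 * np.pi * (W ** 2).sum() * dA)   # purity μ = 2π ∫ W² dx dp = 1
```

Note that the vacuum's purity μ = 1 lies well above both sufficient thresholds quoted in the abstract (4 - 2√3 ≈ 0.536 and 2/e ≈ 0.736), consistent with those conditions being sufficient rather than necessary.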
Integrated Photonic Quantum Computing: From Silicon to Lithium Niobate Authors Hui Zhang, Yiming Ma, Di Zhu, Yuancheng Zhan, Yuzhi Shi, Zhanshan Wang, Leong Chuan Kwek, Anthony Laing, Ai Qun Liu, Marko Loncar, Xinbin Cheng Published: 01.23.2026 Updated: 01.23.2026 Summary Quantum technologies have surpassed classical systems by leveraging the unique properties of superposition and entanglement in photons and matter. Recent advancements in integrated quantum photonics, especially in silicon-based and lithium niobate platforms, are pushing the technology toward greater scalability and functionality. Silicon circuits have progressed from centimeter-scale, dual-photon systems to millimeter-scale, high-density devices that integrate thousands of components, enabling sophisticated programmable manipulation of multi-photon states. Meanwhile, lithium niobate, thanks to its wide optical transmission window, outstanding nonlinear and electro-optic coefficients, and chemical stability, has emerged as an optimal substrate for fully integrated photonic quantum chips. Devices made from this material exhibit high efficiency in generating, manipulating, converting, storing, and detecting photon states, thereby establishing a basis for deterministic multi-photon generation and single-photon quantum interactions, as well as comprehensive frequency-state control. This review explores the development of integrated photonic quantum technologies based on both silicon and lithium niobate, highlighting invaluable insights gained from silicon-based systems that can assist the scaling of lithium niobate technologies. It examines the functional integration mechanisms of lithium niobate in electro-optic tuning and nonlinear energy conversion, showcasing its transformative impact throughout the photonic quantum computing process.
Looking ahead, we speculate on the developmental pathways for lithium niobate platforms and their potential to revolutionize areas such as quantum communication, complex system simulation, quantum sampling, and optical quantum computing paradigms. Source arXiv: 2601.16484v1
Heterogeneous Transfer of Thin Film BaTiO$_3$ onto Silicon for Device Fabrication Authors Temazulu S. Zulu, Larissa B. Little, Aaron M. Day, Chaoshen Zhang, Keith Powell, Kyeong-Yoon Baek, Benazir Fazlioglu-Yalcin, Neil Sinclair, Charles M. Brooks, David R. Barton, Marko Loncar, Julia A. Mundy Published: 01.21.2026 Updated: 01.21.2026 Summary Thin film BaTiO$_3$ has one of the highest known Pockels coefficients (>1200 pm/V), making it an attractive material for use in electro-optic devices. It is advantageous to integrate BaTiO$_3$ on silicon to enable complementary metal-oxide-semiconductor (CMOS) compatible processing. However, synthesis of high-quality BaTiO$_3$ directly on silicon remains a challenge. Here, we synthesize BaTiO$_3$ using hybrid metal-organic molecular beam epitaxy (hMBE) and demonstrate its transfer onto silicon using thermocompression bonding and chemical lift-off. Hybrid metal-organic MBE enables self-regulated synthesis of highly stoichiometric thin films at high growth rates (>100 nm/hr). Our transfer method results in millimeter-scale areas of atomically flat, crack-free BaTiO$_3$, making it a potentially scalable method. Finally, we demonstrate the applicability of our process to device fabrication through characterization of lithographically-patterned and etch-transferred sub-micron features. Source arXiv: 2601.14551v1
Inverse Quantum Simulation for Quantum Material Design Authors Christian Kokail, Pavel E. Dolgirev, Rick van Bijnen, Daniel Gonzalez-Cuadra, Mikhail D. Lukin, Peter Zoller Published: 01.18.2026 Updated: 01.18.2026 Summary Quantum simulation provides a powerful route for exploring many-body phenomena beyond the capabilities of classical computation. Existing approaches typically proceed in the forward direction: a model Hamiltonian is specified, implemented on a programmable quantum platform, and its phase diagram and properties are explored. Here we present a quantum algorithmic framework for inverse quantum simulation, enabling quantum material design with desired properties. Target material characteristics are encoded as a cost function, which is minimized on quantum hardware to prepare a many-body state with the desired properties in quantum memory. Hamiltonian learning is then used to reconstruct a low-energy Hamiltonian for which this state is an approximate ground state, yielding a physically interpretable model that can guide experimental synthesis. As illustrative applications, we outline how the method can be used to search for high-temperature superconductors within the fermionic Hubbard model, enhancing $d$-wave correlations over a broad range of dopings and temperatures, design quantum phases by stabilizing a topological order through continuous Hamiltonian modifications, and optimize dynamical properties relevant for photochemistry and frequency- and momentum-resolved condensed-matter data. These results extend the scope of quantum simulators from exploring quantum many-body systems to designing and discovering new quantum materials. Source arXiv: 2601.12239v1
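The Hamiltonian-learning step admits a small classical caricature: given a target state "in memory", scan a parameterized Hamiltonian family for the member on which the state's energy variance vanishes, since zero variance means the state is an exact eigenstate. The two-qubit transverse-field-Ising family and couplings below are invented for illustration; the paper's algorithm runs this loop on quantum hardware at many-body scale.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
ZZ = np.kron(Z, Z)
Xsum = np.kron(X, I2) + np.kron(I2, X)

def ground_state(H):
    return np.linalg.eigh(H)[1][:, 0]

# The "state in quantum memory": here the ground state of a hidden model
# with coupling ratio h/J = 0.5.
psi = ground_state(1.0 * ZZ + 0.5 * Xsum)

def energy_variance(phi):
    # Zero variance <=> psi is an exact eigenstate of H(phi).
    H = np.cos(phi) * ZZ + np.sin(phi) * Xsum
    e = psi @ H @ psi
    return psi @ H @ H @ psi - e ** 2

phis = np.linspace(0.01, np.pi / 2 - 0.01, 4000)
best = phis[np.argmin([energy_variance(p) for p in phis])]
ratio = np.tan(best)      # recovers the hidden coupling ratio h/J ≈ 0.5
```

The grid scan stands in for the variational Hamiltonian-learning optimization; the recovered model is the "physically interpretable" output that would guide synthesis.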
Controlling Rydberg atom-polariton interactions: from exceptional points to fast readout Authors Tamara Šumarac, Emily H. Qiu, Shai Tsesses, Peiran Niu, Adrian J. Menssen, Wenchao Xu, Valentin Walther, Uroš Delić, Soonwon Choi, Mikhail D. Lukin, Vladan Vuletić Published: 01.09.2026 Updated: 01.09.2026 Summary Rydberg atoms represent a platform underpinning many recent developments in quantum computation, simulation, sensing, and metrology. They further facilitate optical nonlinearity at the single-photon level when coupled to photons propagating in atomic clouds, which form collective atomic excitations called Rydberg polaritons, strongly interacting with each other. Here, we experimentally explore interactions between a Rydberg polariton in an atomic ensemble and a single, adjacent, Rydberg atom. We discover three different regimes of quantum dynamics corresponding to polariton blockade, coherent exchange, and probabilistic hopping, which are defined by their distinct transmission characteristics, with a transition through an exceptional point occurring between blockade and coherent exchange. We investigate the applications of such interactions for fast, non-destructive detection of Rydberg atoms and present proof-of-principle demonstrations for their potential application in nonlinear photonic networks. Source arXiv: 2601.06345v1
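The transition through an exceptional point has a minimal two-level caricature: a lossless mode coupled with strength g to a lossy one with decay rate κ. The eigenvalues coalesce at g = κ/4; below it both modes share a vanishing real part (blockade-like, purely decaying splitting), above it the decay rates lock and the real parts split (coherent exchange). The model and parameters are illustrative, not the experiment's Hamiltonian.

```python
import numpy as np

kappa = 1.0   # illustrative decay rate of the lossy mode

def eigvals(g):
    # Minimal non-Hermitian two-mode model: coupling g, one lossy mode.
    H = np.array([[0.0, g], [g, -0.5j * kappa]])
    return np.sort_complex(np.linalg.eigvals(H))

g_ep = kappa / 4                    # exceptional point
lo = eigvals(0.1)                   # below EP: distinct decay rates, Re ≈ 0
ep = eigvals(g_ep)                  # at EP: eigenvalues (and eigenvectors) coalesce
hi = eigvals(0.5)                   # above EP: equal decay, split real parts
```

Analytically the eigenvalues are λ± = -iκ/4 ± sqrt(g² - κ²/16), so the square root changes from imaginary to real exactly at g = κ/4, which is the coalescence the transmission measurements track.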
Benchmarking Quantum Data Center Architectures: A Performance and Scalability Perspective Authors Shahrooz Pouryousef, Eneet Kaur, Hassan Shapourian, Don Towsley, Ramana Kompella, Reza Nejabati Published: 01.04.2026 Updated: 01.10.2026 Summary Scalable distributed quantum computing (DQC) has motivated the design of multiple quantum data-center (QDC) architectures that overcome the limitations of single quantum processors through modular interconnection. While these architectures adopt fundamentally different design philosophies, their relative performance under realistic quantum hardware constraints remains poorly understood. In this paper, we present a systematic benchmarking study of four representative QDC architectures (QFly, BCube, Clos, and Fat-Tree), quantifying their impact on distributed quantum circuit execution latency, resource contention, and scalability. Focusing on quantum-specific effects absent from classical data-center evaluations, we analyze how optical-loss-induced Einstein-Podolsky-Rosen (EPR) pair generation delays, coherence-limited entanglement retry windows, and contention from teleportation-based non-local gates shape end-to-end execution performance. Across diverse circuit workloads, we evaluate how architectural properties such as path diversity and path length, and shared BSM (Bell State Measurement) resources interact with optical-switch insertion loss and reconfiguration delay. Our results show that distributed quantum performance is jointly shaped by topology, scheduling policies, and physical-layer parameters, and that these factors interact in nontrivial ways. Together, these insights provide quantitative guidance for the design of scalable and high-performance quantum data-center architectures for DQC. Source arXiv: 2601.01353v2
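The optical-loss-induced EPR generation delay has a simple quantitative core: if each attempt heralds with probability p, attenuated by end-to-end loss in dB, the number of attempts to first success is geometric with mean 1/p, so every 3 dB of switch insertion loss roughly doubles latency. The sketch below uses invented values for the base success probability and losses.

```python
import numpy as np

rng = np.random.default_rng(1)

def p_success(loss_db, p0=0.5):
    # Per-attempt EPR heralding probability; p0 and the losses are illustrative.
    return p0 * 10 ** (-loss_db / 10)

def mean_attempts(loss_db, trials=200_000):
    # Attempts to first success are geometrically distributed, mean 1/p.
    return rng.geometric(p_success(loss_db), size=trials).mean()

# Each 3 dB of optical-switch insertion loss halves p, doubling latency.
results = {loss: mean_attempts(loss) for loss in (0.0, 3.0, 6.0)}

# Coherence-limited retry window: probability of heralding within T attempts.
within_window = 1 - (1 - p_success(6.0)) ** 50
```

The coherence-limited retry window then caps how many of these attempts are usable, which is why topology (path length and loss) and scheduling interact so strongly in the benchmarks.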
Generalized model of anisotropic thermo-optic response on thin-film lithium niobate platform Authors Joonsup Shim, Seonghun Kim, Shengyuan Lu, Jiayu Yang, Seongjin Jeon, Sanghyeon Kim, Marko Lončar, Young-Ik Sohn Published: 01.01.2026 Updated: 01.01.2026 Summary Thermo-optic (TO) control is crucial for thin-film lithium niobate (TFLN) photonic integrated circuits (PICs), offering a simple and practical method for low-frequency and DC tuning while remaining compatible with high-frequency electro-optic (EO) modulation. In x-cut TFLN, the TO response is inherently anisotropic, depending on both waveguide propagation angle and polarization due to the mode-specific overlap of the electric field with the ordinary and extraordinary refractive index axes of the crystal. Despite its significance, a systematic and quantitative analysis of this anisotropy has remained elusive. Here, we present the first generalized analytical model that describes the anisotropic TO response as a function of polarization and arbitrary waveguide orientation, and rigorously validate it through numerical simulations and experiments. This study provides foundational insight into anisotropic thermal tuning and enables new opportunities for engineering energy-efficient and scalable photonic design in next-generation TFLN PICs. Source arXiv: 2601.00174v1
Fast-Recovery Epitaxial NbN Superconducting Nanowire Single-Photon Detectors with Saturated Efficiency at 1550 nm in Liquid Helium Authors Francesca Incalza, Matteo Castellani, Dip Joti Paul, Alejandro Simon, Emma Batson, Davide Mondin, Owen Medeiros, Karl K. Berggren Published: 12.19.2025 Updated: 12.19.2025 Summary Achieving both high internal efficiency and fast reset times at elevated temperatures remains challenging due to limited understanding of how film properties govern SNSPD performance. We demonstrate that epitaxial NbN films on sapphire enable simultaneous high efficiency and rapid response. We fabricate and characterize SNSPDs based on these films deposited via DC magnetron sputtering on c-cut sapphire. High-quality epitaxial growth preserves a low electron diffusion coefficient and promotes strong electron-phonon coupling, yielding a high critical temperature and efficient hotspot formation in the dirty limit. X-ray diffraction and transmission electron microscopy confirm epitaxial alignment and lattice order. Nanowires of 20 nm width exhibit saturated internal efficiency at 1550 nm wavelength and short reset times at 4.2 K, enabled by lattice matching and high thermal conductance of the sapphire interface. Ab initio modeling reproduces photon count rates, linking device performance quantitatively to film properties such as diffusivity and electron-phonon coupling. Source arXiv: 2512.18063v1
Zero-added-loss entanglement multiplexing using time-bin spectral shearing Authors Joseph C. Chapman, Muneer Alshowkan, Jack Postlewaite, Saikat Guha, Nageswara Rao Published: 12.19.2025 Updated: 05.03.2026 Summary High-quality quantum communications that enable important capabilities, such as distributed quantum computing and sensing, will require quantum repeaters for providing high-quality entanglement. To realize high-rate heralded entanglement for quantum repeaters, Chen et al. [Phys. Rev. Appl. 19, 054209 (2023)] proposed a scheme for heralded-multiplexed generation of quasi-deterministic entangled photon pairs, called zero-added-loss multiplexing (ZALM). Here, we propose a design of ZALM source using time-bin entanglement and spectral shearing. Additionally, we provide an analysis of experimentally relevant spectral-shearing parameters to optimize the spectral multiplexing. Moreover, we experimentally verify the compatibility of time-bin pulses and spectral shearing, as supported by observation of no appreciable phase shift when the same shearing is applied to both time bins. These results expand the benefits of applying a ZALM source to time-bin entanglement use cases. Moreover, more fully demonstrating time-bin and spectral shearing compatibility clears a path towards a broader use of spectral shearing that provides a deterministic frequency shift of high utility. Source arXiv: 2512.17148v2
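The experimental check, no appreciable phase shift when the same shearing is applied to both time bins, has a one-line idealized model: a phase common to both bins is a global phase and drops out of the time-bin qubit. This is a simplified amplitude picture that ignores the spectral mode structure of the pulses.

```python
import numpy as np

# Time-bin qubit: amplitudes of the (early, late) bins.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)

# Idealized spectral shearing: the same phase imprinted on both bins.
sheared = np.exp(1j * 0.7) * psi

rel_before = np.angle(psi[1] / psi[0])          # relative bin phase before
rel_after = np.angle(sheared[1] / sheared[0])   # ... and after: unchanged
fidelity = abs(np.vdot(psi, sheared)) ** 2      # overlap up to global phase
```

Only a phase difference between the two bins would corrupt the encoded qubit, which is why the observed absence of a relative shift certifies compatibility of shearing with time-bin encoding.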
Comparing Homodyne and Heterodyne Tomography of Quantum States of Light Authors Rhea P. Fernandes, Andrew J. Pizzimenti, Christos N. Gagatsos, Joseph M. Lukens Published: 12.18.2025 Updated: 12.18.2025 Summary Non-Gaussian quantum states are critical resources in photonic quantum information processing, rendering their generation and characterization of increasing importance in quantum optics. In this work, we theoretically and numerically analyze the relative efficiency of homodyne versus heterodyne measurements for reconstructing non-Gaussian states, a major outstanding question in continuous-variable tomography. Combining a Fisher information-based formalism with simulated experiments, we find homodyne tomography to outperform heterodyne measurements for all non-Gaussian states tested, although the separation between the two modalities proves significantly narrower than suggested by the asymptotic Cramér-Rao lower bound. Our results should find use for optimizing measurement strategies in practical continuous-variable quantum systems. Source arXiv: 2512.17031v1
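The Gaussian baseline behind the tradeoff is that heterodyne detection measures both quadratures simultaneously at the cost of one extra vacuum unit of noise per quadrature, doubling the variance relative to homodyne. A toy coherent-state simulation shows this (conventions with vacuum quadrature variance 1/2 and the amplitude α are illustrative; the paper's Fisher-information analysis of non-Gaussian states is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400_000
alpha = 1.2 + 0.8j                  # coherent-state amplitude (illustrative)
mean_x = np.sqrt(2) * alpha.real    # x-quadrature mean in these conventions

# Homodyne: one quadrature, vacuum-limited variance 1/2.
x_hom = rng.normal(mean_x, np.sqrt(0.5), n)

# Heterodyne: both quadratures at once, one added vacuum unit per quadrature.
x_het = rng.normal(mean_x, np.sqrt(1.0), n)

var_hom, var_het = x_hom.var(), x_het.var()   # ≈ 0.5 vs ≈ 1.0
```

The per-quadrature noise penalty is partly offset by heterodyne's access to both quadratures in every shot, which is why the actual gap for non-Gaussian state reconstruction requires the careful analysis the paper performs.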
QuantumSavory: Write Symbolically, Run on Any Backend — A Unified Simulation Toolkit for Quantum Computing and Networking Authors Hana KimLee, Leonardo Bacciottini, Abhishek Bhatt, Andrew Kille, Stefan Krastanov Published: 12.18.2025 Updated: 12.18.2025 Summary Progress in quantum computing and networking depends on codesign across abstraction layers: device-level noise and heterogeneous hardware, algorithmic structure, and distributed classical control. We present QuantumSavory, an open-source toolkit built to make such end-to-end studies practical by cleanly separating a symbolic computer-algebra frontend from interchangeable numerical simulation backends. States, operations, measurements, and protocol logic are expressed in a backend-agnostic symbolic language; the same model can be executed across multiple backends (e.g., stabilizer, wavefunction, phase-space), enabling rapid exploration of accuracy-performance tradeoffs without rewriting the model. Furthermore, new custom backends can be added via a small, well-defined interface that immediately reuses existing models and protocols. QuantumSavory also addresses the classical-quantum interaction inherent to LOCC protocols via discrete-event execution and a tag/query system for coordination. Tags attach structured classical metadata to quantum registers and message buffers, and queries retrieve, filter, or wait on matching metadata by wildcards or arbitrary predicates. This yields a data-driven control plane where protocol components coordinate by publishing and consuming semantic facts (e.g., resource availability, pairing relationships, protocol outcomes) rather than by maintaining rigid object graphs or bespoke message plumbing, improving composability and reuse as models grow. Our toolkit is also not limited to qubits and Bell pairs; rather, any networking dynamics of any quantum system under any type of multipartite entanglement can be tackled. 
Lastly, QuantumSavory ships reusable libraries of standard states, circuits, and protocol building blocks with consistent interfaces, enabling full-stack examples to be assembled, modified, and compared with minimal glue code. Source arXiv: 2512.16752v1
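The frontend/backend separation described above can be illustrated with a skeletal pattern: one backend-agnostic instruction list consumed by interchangeable numerical backends that must agree on observable predictions. All names and the mini-DSL below are invented for illustration and are not QuantumSavory's actual API (which is in Julia).

```python
import numpy as np

# Symbolic circuit: a backend-agnostic instruction list (hypothetical mini-DSL).
CIRCUIT = [("H", 0), ("CNOT", 0, 1)]   # prepare a Bell pair on two qubits

H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
P0 = np.kron(np.diag([1.0, 0.0]), np.eye(2))   # projector: qubit 0 in |0>

def unitary(gate):
    # Minimal interpreter: H acts on qubit 0, CNOT on the fixed pair (0, 1).
    return np.kron(H1, np.eye(2)) if gate[0] == "H" else CNOT

class StatevectorBackend:
    def prob0(self, circuit):
        psi = np.array([1.0, 0.0, 0.0, 0.0])
        for g in circuit:
            psi = unitary(g) @ psi
        return float(psi @ P0 @ psi)

class DensityMatrixBackend:
    def prob0(self, circuit):
        rho = np.zeros((4, 4)); rho[0, 0] = 1.0
        for g in circuit:
            U = unitary(g)
            rho = U @ rho @ U.conj().T
        return float(np.trace(P0 @ rho).real)

# The same symbolic model runs on either backend with identical predictions.
for backend in (StatevectorBackend(), DensityMatrixBackend()):
    assert abs(backend.prob0(CIRCUIT) - 0.5) < 1e-12
```

The design payoff, as in the toolkit, is that accuracy/performance tradeoffs (or a new backend entirely) can be explored by swapping the executor while the model and protocol logic stay fixed.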
Noncooperative Quantum Networks Authors Yanxuan Shao, Jannik L. Wyss, Don Towsley, Adilson E. Motter Published: 12.17.2025 Updated: 12.17.2025 Summary Existing protocols for quantum communication networks usually assume an initial allocation of quantum entanglement resources, which are then manipulated through local operations and classical communication (LOCC) to establish high-fidelity entanglement between distant parties. It is generally held that the resulting fidelity would increase monotonically with the entanglement budget. Here, we show that for noncooperative LOCC protocols, the resulting fidelity may decrease as more entanglement is added to a network with non-pure states. This effect results from a quantum analog of selfish routing and constitutes a potential obstacle to the optimal use of resources in large quantum networks. Source arXiv: 2512.15884v1