Publications

    2017

  1. Monitoring a quantum observable continuously in time produces a stochastic measurement record that noisily tracks the observable. For a classical process such noise may be reduced to recover an average signal by minimizing the mean squared error between the noisy record and a smooth dynamical estimate. We show that for a monitored qubit this usual procedure returns unusual results. While the record seems centered on the expectation value of the observable during causal generation, examining the collected past record reveals that it better approximates a moving-mean Gaussian stochastic process centered at a distinct (smoothed) observable estimate. We show that this shifted mean converges to the real part of a generalized weak value in the time-continuous limit without additional postselection. We verify that this smoothed estimate minimizes the mean squared error even for individual measurement realizations. We go on to show that if a second observable is weakly monitored concurrently, then that second record is consistent with the smoothed estimate of the second observable based solely on the information contained in the first observable record. Moreover, we show that such a smoothed estimate made from incomplete information can still outperform estimates made using full knowledge of the causal quantum state.
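
     For reference, the smoothed estimate described above has the structure of a generalized weak value built from both the past-conditioned state ρ(t) and the future-conditioned effect matrix E(t) recovered from the later record; schematically, for a monitored observable σ_z,

        \[ z_{\mathrm{sm}}(t) = \mathrm{Re}\, \frac{\mathrm{Tr}[E(t)\,\sigma_z\,\rho(t)]}{\mathrm{Tr}[E(t)\,\rho(t)]}, \]

     which reduces to the usual expectation value Tr[σ_z ρ(t)] when the future record is ignored (E ∝ 1).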

  2. The quantum Zeno effect is the suppression of Hamiltonian evolution by repeated observation, resulting in the pinning of the state to an eigenstate of the measurement observable. Using measurement only, control of the state can be achieved if the observable is slowly varied such that the state tracks the now time-dependent eigenstate. We demonstrate this using a circuit-QED readout technique that couples to a dynamically controllable observable of a qubit. Continuous monitoring of the measurement record allows us to detect an escape from the eigenstate, thus serving as a built-in form of error detection. We show this by post-selecting on realizations with arbitrarily high fidelity with respect to the target state. Our dynamical measurement operator technique offers a new tool for numerous forms of quantum feedback protocols, including adaptive measurements and rapid state purification.

  3. We examine the time reversal symmetry of quantum measurement sequences by introducing a forward and backward Janus sequence of measurements. If the forward sequence of measurements creates a sequence of quantum states in time, starting from an initial state and ending in a final state, then the backward sequence begins with the time-reversed final state, exactly retraces the intermediate states, and ends with the time-reversed initial state. We prove that such a sequence can always be constructed, showing that unless the measurements are ideal projections, it is impossible to tell if a given sequence of measurements is progressing forward or backward in time. A statistical arrow of time emerges only because typically the forward sequence is more probable than the backward sequence.

  4. The state of a continuously monitored qubit evolves stochastically, exhibiting competition between coherent Hamiltonian dynamics and diffusive partial collapse dynamics that follow the measurement record. We couple these distinct types of dynamics together by linearly feeding the collected record for dispersive energy measurements directly back into a coherent Rabi drive amplitude. Such feedback turns the competition cooperative, and effectively stabilizes the qubit state near a target state. We derive the conditions for obtaining such dispersive state stabilization and verify the stabilization conditions numerically. We include common experimental nonidealities, such as energy decay, environmental dephasing, detector efficiency, and feedback delay, and show that the feedback delay has the most significant negative effect on the feedback protocol. Setting the measurement collapse timescale to be long compared to the feedback delay yields the best stabilization.

  5. Two topics, evolving rapidly in separate fields, were combined recently: The out-of-time-ordered correlator (OTOC) signals quantum-information scrambling in many-body systems. The Kirkwood-Dirac (KD) quasiprobability represents operators in quantum optics. The OTOC was shown to equal a moment of a summed quasiprobability. That quasiprobability, we argue, is an extension of the KD distribution. We explore the quasiprobability's structure from experimental, numerical, and theoretical perspectives. First, we simplify and analyze Yunger Halpern's weak-measurement and interference protocols for measuring the OTOC and its quasiprobability. We decrease, exponentially in system size, the number of trials required to infer the OTOC from weak measurements. We also construct a circuit for implementing the weak-measurement scheme. Next, we calculate the quasiprobability (after coarse-graining) numerically and analytically: We simulate a transverse-field Ising model first. Then, we calculate the quasiprobability averaged over random circuits, which model chaotic dynamics. The quasiprobability, we find, distinguishes chaotic from integrable regimes. We observe nonclassical behaviors: The quasiprobability typically has negative components. It becomes nonreal in some regimes. The onset of scrambling breaks a symmetry that bifurcates the quasiprobability, as in classical-chaos pitchforks. Finally, we present mathematical properties. The quasiprobability obeys a Bayes-type theorem, for example, that exponentially decreases the memory required to calculate weak values, in certain cases. A time-ordered correlator analogous to the OTOC, insensitive to quantum-information scrambling, depends on a quasiprobability closer to a classical probability. This work not only illuminates the OTOC's underpinnings, but also generalizes quasiprobability theory and motivates immediate-future weak-measurement challenges.
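
     For orientation, the two-point Kirkwood-Dirac quasiprobability that this work extends takes the standard form

        \[ p_{\mathrm{KD}}(a,b) = \langle b | a \rangle \langle a | \rho | b \rangle, \]

     which sums to one over both indices but is generally nonreal; its negative or complex values are the nonclassical behaviors flagged in the abstract.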

  6. We investigate the determination of a Hamiltonian parameter in a quantum system undergoing continuous measurement. We demonstrate a computationally rapid yet statistically optimal method to estimate an unknown and possibly time-dependent parameter, where we maximize the likelihood of the observed stochastic readout. By dealing directly with the raw measurement record rather than the quantum state trajectories, the estimation can be performed while the data is being acquired, permitting continuous tracking of the parameter during slow drifts in real time. Furthermore, we incorporate realistic nonidealities, such as decoherence processes and measurement inefficiency. As an example, we focus on estimating the value of the Rabi frequency of a continuously measured qubit, and compare maximum likelihood estimation to a simpler fast Fourier transform. Using this example, we discuss how the quality of the estimation depends on both the strength and duration of the measurement; we also discuss the trade-off between the accuracy of the estimate and the sensitivity to drift as the estimation duration is varied.
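
     A minimal numerical sketch of this likelihood-based approach is given below, assuming an ideal detector and the standard quantum-Bayesian (Korotkov-style) update for a qubit monitored along σ_z while Rabi driven; all parameter values, the known initial state, and the grid search are illustrative choices rather than the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (hypothetical, not taken from the paper)
dt, T = 1e-3, 10.0     # integration step and total record duration
tau = 2.0              # characteristic measurement time
omega_true = 1.0       # true Rabi frequency to be estimated
n = int(T / dt)

def step(x, z, omega, dW):
    """One Ito step of ideal continuous z-monitoring plus a Rabi drive."""
    x_new = x + omega * z * dt - x * dt / (2 * tau) - x * z * dW / np.sqrt(tau)
    z_new = z - omega * x * dt + (1 - z ** 2) * dW / np.sqrt(tau)
    return x_new, z_new

# Generate a synthetic record r(t) = z(t) + sqrt(tau) * xi(t)
x, z = 1.0, 0.0        # known initial state along +x
record = np.empty(n)
for i in range(n):
    record[i] = z + rng.normal(0.0, np.sqrt(tau / dt))
    x, z = step(x, z, omega_true, (record[i] - z) * dt / np.sqrt(tau))

def log_likelihood(omega):
    """Gaussian log-likelihood of the raw record under a candidate omega,
    accumulated while filtering the same record with that omega."""
    x, z, logL = 1.0, 0.0, 0.0
    for r in record:
        logL -= (r - z) ** 2 * dt / (2 * tau)
        x, z = step(x, z, omega, (r - z) * dt / np.sqrt(tau))
    return logL

# Maximum-likelihood estimate over a grid of candidate frequencies
omegas = np.linspace(0.5, 1.5, 51)
print("MLE:", omegas[np.argmax([log_likelihood(w) for w in omegas])])
```

     Because the likelihood accumulates term by term as the record arrives, the same loop supports the real-time drift tracking mentioned above.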

  7. A novel method was recently proposed and experimentally realized for characterizing a quantum state by directly measuring its complex probability amplitudes in a particular basis using so-called weak values. Recently Vallone and Dequal showed theoretically that weak measurements are not a necessary condition to determine the weak value [Phys. Rev. Lett. 116, 040502 (2016)]. Here we report a measurement scheme used in a matter-wave interferometric experiment in which the neutron path system's quantum state was characterized via direct measurements using both strong and weak interactions. Experimental evidence is given that strong interactions outperform weak ones. Our results are not limited to neutron interferometry, but can be used in a wide range of quantum systems.

    2016

  9. We investigate the statistical arrow of time for a quantum system being monitored by a sequence of measurements. For a continuous qubit measurement example, we demonstrate that time-reversed evolution is always physically possible, provided that the measurement record is also negated. Despite this restoration of dynamical reversibility, a statistical arrow of time emerges, and may be quantified by the log-likelihood difference between forward and backward propagation hypotheses. We then show that such reversibility is a universal feature of non-projective measurements, with forward or backward Janus measurement sequences that are time-reversed inverses of each other.
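
     In symbols, the arrow-of-time measure used here is the log-likelihood ratio

        \[ \mathcal{R} = \ln \frac{P_F(r)}{P_B(r)}, \]

     comparing the probabilities of the same measurement record r under the forward and backward propagation hypotheses; its typically positive average is the statistical arrow of time.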

  10. Previous experimental tests of quantum contextuality based on the Bell-Kochen-Specker (BKS) theorem have demonstrated that not all observables among a given set can be assigned noncontextual eigenvalue predictions, but have never identified which specific observables must fail such assignment. Using neutron interferometry, we remedy this shortcoming by showing that BKS contextuality can be confined to particular observables in the form of anomalous weak values, which can be directly witnessed through weak measurements. We construct a confined contextuality witness from weak values, which we measure experimentally to obtain a 5σ average violation of the noncontextual bound, with one contributing term violating an independent bound by more than 99σ. This experimentally measured confined contextuality confirms the quantum pigeonhole effect, wherein eigenvalue assignments to contextual observables apparently violate the classical pigeonhole principle.

  11. We analyze the continuous measurement of two non-commuting observables for a qubit, and investigate whether the simultaneously observed noisy signals are consistent with the evolution of an equivalent classical system. Following the approach outlined by Leggett and Garg, we show that the readouts violate macrorealistic inequalities for arbitrarily short temporal correlations. Moreover, the derived inequalities are manifestly violated even in the absence of Hamiltonian evolution, unlike for Leggett-Garg inequalities that use a single continuous measurement. Such a violation should indicate the failure of at least one postulate of macrorealism: either physical quantities do not have well defined values at all times, or the measurement process itself disturbs what is being measured. For measurements of equal strength we are able to construct a classical stochastic model for a spin that perfectly emulates both the qubit evolution and the observed noisy signals, thus emulating the violations; interestingly, this model also requires an unphysical noise to emulate the readouts, which effectively restricts the ability of an observer to learn information about the spin.

  12. Using circuit QED, we consider the measurement of a superconducting transmon qubit via a coupled microwave resonator. For ideally dispersive coupling, ringing up the resonator produces coherent states with frequencies matched to transmon energy states. Realistic coupling is not ideally dispersive, however, so transmon-resonator energy levels hybridize into joint eigenstate ladders of the Jaynes-Cummings type. Previous work has shown that ringing up the resonator approximately respects this ladder structure to produce a coherent state in the eigenbasis (a dressed coherent state). We numerically investigate the validity of this coherent state approximation to find two primary deviations. First, resonator ring-up leaks small stray populations into eigenstate ladders corresponding to different transmon states. Second, within an eigenstate ladder the transmon nonlinearity shears the coherent state as it evolves. We then show that the next natural approximation for this sheared state in the eigenbasis is a dressed squeezed state, and derive simple evolution equations for such states using a hybrid phase-Fock-space description.

  13. Weak measurement has provided new insight into the nature of quantum measurement by demonstrating the ability to extract average state information without fully projecting the system. For single qubit measurements, this partial projection has been demonstrated with violations of the Leggett-Garg inequality. Here we investigate the effects of weak measurement on a maximally entangled Bell state through application of the Hybrid Bell-Leggett-Garg inequality (BLGI) on a linear chain of four transmon qubits. By correlating the results of weak ancilla measurements with subsequent projective readout, we achieve a violation of the BLGI with 27 standard deviations of certainty.

    2015

  15. In modern circuit QED architectures, superconducting transmon qubits are measured via the state-dependent phase and amplitude shift of a microwave field leaking from a coupled resonator. Determining this shift requires integrating the field quadratures for a nonzero duration, which can permit unwanted concurrent evolution. Here we investigate such dynamical degradation of the measurement fidelity caused by a detuned neighboring qubit. We find that in realistic parameter regimes, where the qubit ensemble-dephasing rate is slower than the qubit-qubit detuning, the joint qubit-qubit eigenstates are better discriminated by measurement than the bare states. Furthermore, we show that when the resonator leaks much more slowly than the qubit-qubit detuning, the measurement tracks the joint eigenstates nearly adiabatically. However, the measurement process also causes rare quantum jumps between the eigenstates. The rate of these jumps becomes significant if the resonator decay is comparable to or faster than the qubit-qubit detuning, thus significantly degrading the measurement fidelity in a manner reminiscent of energy relaxation processes.

  16. We present a comprehensive introduction to spacetime algebra that emphasizes its practicality and power as a tool for the study of electromagnetism. We carefully develop this natural (Clifford) algebra of the Minkowski spacetime geometry, with a particular focus on its intrinsic (and often overlooked) complex structure. Notably, the scalar imaginary that appears throughout the electromagnetic theory properly corresponds to the unit 4-volume of spacetime itself, and thus has physical meaning. The electric and magnetic fields are combined into a single complex and frame-independent bivector field, which generalizes the Riemann-Silberstein complex vector that has recently resurfaced in studies of the single photon wavefunction. The complex structure of spacetime also underpins the emergence of electromagnetic waves, circular polarizations, the normal variables for canonical quantization, the distinction between electric and magnetic charge, complex spinor representations of Lorentz transformations, and the dual (electric-magnetic field exchange) symmetry that produces helicity conservation in vacuum fields. This latter symmetry manifests as an arbitrary global phase of the complex field, motivating the use of a complex vector potential, along with an associated transverse and gauge-invariant bivector potential, as well as complex (bivector and scalar) Hertz potentials. Our detailed treatment aims to encourage the use of spacetime algebra as a readily available and mature extension to existing vector calculus and tensor methods that can greatly simplify the analysis of fundamentally relativistic objects like the electromagnetic field.
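
     As a one-line illustration of the formalism (in natural units with c = 1; sign and unit conventions vary between references), the electric and magnetic fields combine into the complex bivector

        \[ \mathbf{F} = \vec{E} + I \vec{B}, \]

     with I the unit pseudoscalar (the 4-volume element that plays the role of the scalar imaginary), and Maxwell's equations collapse to the single equation ∇F = J, with J the spacetime current.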

  17. We improve the precision of the interferometric weak-value-based beam deflection measurement by introducing a power recycling mirror, creating a resonant cavity. This results in all the light exiting to the detector with a large deflection, thus eliminating the inefficiency of the rare postselection. The signal-to-noise ratio of the deflection is itself magnified by the weak value. We discuss ways to realize this proposal, using a transverse beam filter and different cavity designs.

  18. Weak values arise experimentally as conditioned averages of weak (noisy) observable measurements that minimally disturb an initial quantum state, and also as dynamical variables for reduced quantum state evolution even in the absence of measurement. These averages can exceed the eigenvalue range of the observable ostensibly being estimated, which has prompted considerable debate regarding their interpretation. Classical conditioned averages of noisy signals only show such anomalies if the quantity being measured is also disturbed prior to conditioning. This fact has recently been rediscovered, along with the question whether anomalous weak values are merely classical disturbance effects. Here we carefully review the role of the weak value as both a conditioned observable estimation and a dynamical variable, and clarify why classical disturbance models will be insufficient to explain the weak value unless they can also simulate other quantum interference phenomena.

  19. We consider the discrimination of two pure quantum states with three allowed outcomes: a correct guess, an incorrect guess, and a nonguess. To find an optimum measurement procedure, we define a tunable cost that penalizes the incorrect guess and nonguess outcomes. Minimizing this cost over all projective measurements produces a rigorous cost bound that includes the usual Helstrom discrimination bound as a special case. We then show that nonprojective measurements can outperform this modified Helstrom bound for certain choices of cost function. The Ivanovic-Dieks-Peres unambiguous state discrimination protocol is recovered as a special case of this improvement. Notably, while the cost advantage of the latter protocol is destroyed with the introduction of any amount of experimental noise, other choices of cost function have optima for which nonprojective measurements robustly show an appreciable, and thus experimentally measurable, cost advantage. Such an experiment would be an unambiguous demonstration of a benefit from nonprojective measurements.
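
     The sketch below illustrates the basic trade-off with two fixed strategies rather than the paper's full optimization over measurements: a projective Helstrom measurement (never abstains, errs with the Helstrom probability) versus the Ivanovic-Dieks-Peres unambiguous measurement (never errs, abstains with probability equal to the state overlap), compared under a simple linear cost with hypothetical weights.

```python
import numpy as np

def helstrom_cost(s, c_err):
    """Projective strategy: cost = c_err * minimum error probability
    for equal-prior pure states with overlap s."""
    return c_err * 0.5 * (1.0 - np.sqrt(1.0 - s ** 2))

def idp_cost(s, c_non):
    """Unambiguous (nonprojective) strategy: cost = c_non * abstention
    probability, which equals the overlap s for equal priors."""
    return c_non * s

s = 0.5  # hypothetical overlap |<psi0|psi1>|
for c_err, c_non in [(1.0, 1.0), (10.0, 1.0), (1.0, 10.0)]:
    h, u = helstrom_cost(s, c_err), idp_cost(s, c_non)
    winner = "IDP (nonprojective)" if u < h else "Helstrom (projective)"
    print(f"c_err={c_err:>4}, c_non={c_non:>4}: "
          f"Helstrom {h:.3f} vs IDP {u:.3f} -> {winner}")
```

     Heavily penalizing errors (the middle case) already favors the nonprojective strategy, in line with the cost-dependent advantage described above.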

  20. We examine the results of the paper “Precision metrology using weak measurements” (Zhang et al. arXiv:1310.5302, 2013) from a quantum state discrimination point of view. The Heisenberg scaling of the photon number for the precision of the interaction parameter between coherent light and a spin one-half particle (or pseudo-spin) has a simple interpretation in terms of the interaction rotating the quantum state to an orthogonal one. To achieve this scaling, the information must be extracted from the spin rather than from the coherent state of light, limiting the applications of the method to phenomena such as cross-phase modulation. We next investigate the effect of dephasing noise and show a rapid degradation of precision, in agreement with general results in the literature concerning Heisenberg scaling metrology. We also demonstrate that a von Neumann-type measurement interaction can display a similar effect with no system/meter entanglement.

    2014

  22. We review and re-examine the description and separation of the spin and orbital angular momenta (AM) of an electromagnetic field in free space. While the spin and orbital AM of light are not separately meaningful physical quantities in orthodox quantum mechanics or classical field theory, these quantities are routinely measured and used for applications in optics. A meaningful quantum description of the spin and orbital AM of light was recently provided by several authors, which describes separately conserved and measurable integral values of these quantities. However, the electromagnetic field theory still lacks corresponding locally conserved spin and orbital AM currents. In this paper, we construct these missing spin and orbital AM densities and fluxes that satisfy the proper continuity equations. We show that these are physically measurable and conserved quantities. These are, however, not Lorentz-covariant, so only make sense in the single laboratory reference frame of the measurement probe. The fluxes we derive improve the canonical (nonconserved) spin and orbital AM fluxes, and include a ‘spin–orbit’ term that describes the spin–orbit interaction effects observed in nonparaxial optical fields. We also consider both standard and dual-symmetric versions of the electromagnetic field theory. Applying the general theory to nonparaxial optical vortex beams validates our results and allows us to discriminate between earlier approaches to the problem. Our treatment yields the complete and consistent description of the spin and orbital AM of free Maxwell fields in both quantum-mechanical and field-theory approaches.

  23. A central feature of quantum mechanics is that a measurement result is intrinsically probabilistic. Consequently, continuously monitoring a quantum system will randomly perturb its natural unitary evolution. The ability to control a quantum system in the presence of these fluctuations is of increasing importance in quantum information processing and finds application in fields ranging from nuclear magnetic resonance to chemical synthesis. A detailed understanding of this stochastic evolution is essential for the development of optimized control methods. Here we reconstruct the individual quantum trajectories of a superconducting circuit that evolves under the competing influences of continuous weak measurement and Rabi drive. By tracking individual trajectories that evolve between any chosen initial and final states, we can deduce the most probable path through quantum state space. These pre- and post-selected quantum trajectories also reveal the optimal detector signal in the form of a smooth, time-continuous function that connects the desired boundary conditions. Our investigation reveals the rich interplay between measurement dynamics, typically associated with wavefunction collapse, and unitary evolution of the quantum state as described by the Schrödinger equation. These results and the underlying theory, based on a principle of least action, reveal the optimal route from initial to final states, and may inform new quantum control methods for state steering and information processing.

  24. Large weak values have been used to amplify the sensitivity of a linear response signal for detecting changes in a small parameter, which has also enabled a simple method for precise parameter estimation. However, producing a large weak value requires a low postselection probability for an ancilla degree of freedom, which limits the utility of the technique. We propose an improvement to this method that uses entanglement to increase the efficiency. We show that by entangling and postselecting n ancillas, the postselection probability can be increased by a factor of n while keeping the weak value fixed (compared to n uncorrelated attempts with one ancilla), which is the optimal scaling with n that is expected from quantum metrology. Furthermore, we show the surprising result that the quantum Fisher information about the detected parameter can be almost entirely preserved in the postselected state, which allows the sensitive estimation to approximately saturate the relevant quantum Cramér-Rao bound. To illustrate this protocol we provide simple quantum circuits that can be implemented using current experimental realizations of three entangled qubits.

  25. We describe a method to perform any generalized purity-preserving measurement of a qubit with techniques tailored to superconducting systems. First, we consider two methods for realizing a two-outcome partial projection: using a thresholded continuous measurement in the circuit QED setup and using an indirect ancilla qubit measurement. Second, we decompose an arbitrary purity-preserving two-outcome measurement into single-qubit unitary rotations and a partial projection. Third, we systematically reduce any multiple-outcome measurement to a sequence of such two-outcome measurements and unitary operations. Finally, we consider how to define suitable fidelity measures for multiple-outcome generalized measurements.

  26. By combining the postulates of macrorealism with Bell-locality, we derive a qualitatively different hybrid inequality that avoids two loopholes that commonly appear in Leggett-Garg and Bell inequalities. First, locally-invasive measurements can be used, which avoids the "clumsiness" Leggett-Garg inequality loophole. Second, a single experimental ensemble with fixed analyzer settings is sampled, which avoids the "disjoint sampling" Bell inequality loophole. The derived hybrid inequality has the same form as the Clauser-Horne-Shimony-Holt Bell inequality; however, its quantum violation intriguingly requires weak measurements. A realistic explanation of an observed violation requires either the failure of Bell-locality, or a preparation-conspiracy of finely tuned and nonlocally-correlated noise. Modern superconducting and optical systems are poised to implement this test.

  27. We revisit the definitions of error and disturbance recently used in error-disturbance inequalities derived by Ozawa and others by expressing them in the reduced system space. The interpretation of the definitions as mean-squared deviations relies on an implicit assumption that is generally incompatible with the Bell-Kochen-Specker-Spekkens contextuality theorems, and which results in averaging the deviations over a non-positive-semidefinite joint quasiprobability distribution. For unbiased measurements, the error admits a concrete interpretation as the dispersion in the estimation of the mean induced by the measurement ambiguity. We demonstrate how to directly measure not only this dispersion but also every observable moment with the same experimental data, and thus demonstrate that perfect distributional estimations can have nonzero error according to this measure. We conclude that the inequalities using these definitions do not capture the spirit of Heisenberg's eponymous inequality, but do indicate a qualitatively different relationship between dispersion and disturbance that is appropriate for ensembles being probed by all outcomes of an apparatus. To reconnect with the discussion of Heisenberg, we suggest alternative definitions of error and disturbance that are intrinsic to a single apparatus outcome. These definitions naturally involve the retrodictive and interdictive states for that outcome, and produce complementarity and error-disturbance inequalities that have the same form as the traditional Heisenberg relation.

  28. By generalizing the quantum weak measurement protocol to the case of quantum fields, we show that weak measurements probe an effective classical background field that describes the average field configuration in the spacetime region between pre- and postselection boundary conditions. The classical field is itself a weak value of the corresponding quantum field operator and satisfies equations of motion that extremize an effective action. Weak measurements perturb this effective action, producing measurable changes to the classical field dynamics. As such, weakly measured effects always correspond to an effective classical field. This general result explains why these effects appear to be robust for pre- and postselected ensembles, and why they can also be measured using classical field techniques that are not weak for individual excitations of the field.

    2013

  30. Since its introduction 25 years ago, the quantum weak value has gradually transitioned from a theoretical curiosity to a practical laboratory tool. While its utility is apparent in the recent explosion of weak value experiments, its interpretation has historically been a subject of confusion. Here a pragmatic introduction to the weak value in terms of measurable quantities is presented, along with an explanation for how it can be determined in the laboratory. Further, its application to three distinct experimental techniques is reviewed. First, as a large interaction parameter it can amplify small signals above technical background noise. Second, as a measurable complex value it enables novel techniques for direct quantum state and geometric phase determination. Third, as a conditioned average of generalized observable eigenvalues it provides a measurable window into nonclassical features of quantum mechanics. In this selective review, a single experimental configuration is used to discuss and clarify each of these applications.
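
     For concreteness, the weak value discussed throughout this review has the standard form

        \[ A_w = \frac{\langle \psi_f | \hat{A} | \psi_i \rangle}{\langle \psi_f | \psi_i \rangle}, \]

     for preselection |ψ_i⟩ and postselection |ψ_f⟩; its real part appears as the conditioned average in the weak-coupling limit, while its imaginary part governs the postselected detector momentum shift.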

  31. We present a stochastic path integral formalism for continuous quantum measurement that enables the analysis of rare events using action methods. By doubling the quantum state space to a canonical phase space, we can write the joint probability density function of measurement outcomes and quantum state trajectories as a phase space path integral. Extremizing this action produces the most likely paths with boundary conditions defined by preselected and postselected states as solutions to a set of ordinary differential equations. As an application, we analyze continuous qubit measurement in detail and examine the structure of a quantum jump in the Zeno measurement regime.
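
     Schematically, the construction writes the joint probability density of the readout r(t) and the qubit state coordinates q(t) as

        \[ \mathcal{P}[q, r] = \int \mathcal{D}p \; e^{\mathcal{S}[q,p,r]}, \qquad \mathcal{S} = \int dt \left( -\mathbf{p} \cdot \dot{\mathbf{q}} + \mathcal{H}(\mathbf{q}, \mathbf{p}, r) \right), \]

     where p are the conjugate variables introduced by doubling the state space; extremizing S with pre- and postselected boundary conditions yields the ordinary differential equations for the most likely paths.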

  32. We consider the use of cyclic weak measurements to improve the sensitivity of weak-value amplification precision measurement schemes. Previous weak-value experiments have used only a small fraction of events, while discarding the rest through the process of "post-selection." We extend this idea by considering recycling of events which are typically unused in a weak measurement. Here we treat a sequence of polarized laser pulses effectively trapped inside an interferometer using a Pockels cell and polarization optics. In principle, all photons can be post-selected, which will improve the measurement sensitivity. We first provide a qualitative argument for the expected improvements from recycling photons, followed by the exact result for the recycling of collimated beam pulses, and numerical calculations for diverging beams. We show that beam degradation effects can be mitigated via profile flipping or Zeno reshaping. The main advantage of such a recycling scheme is an effective power increase, while maintaining an amplified deflection.

  33. We demonstrate that quantum instruments can provide a unified operational foundation for quantum theory. Since these instruments directly correspond to laboratory devices, this foundation provides an alternate, more experimentally grounded, perspective from which to understand the elements of the traditional approach. We first show that in principle all measurable probabilities and correlations can be expressed entirely in terms of quantum instruments without the need for conventional quantum states or observables. We then show how these states and observables reappear as derived quantities by conditioning joint detection probabilities on the first or last measurement in a sequence as a preparation or a post-selection. Both predictive and retrodictive versions of states and observables appear in this manner, as well as more exotic bidirectional and interdictive states and observables that cannot be easily expressed using the traditional approach. We also revisit the conceptual meaning of the Heisenberg and Schrödinger pictures of time evolution as applied to the various derived quantities, illustrate how detector loss can be included naturally, and discuss how the instrumental approach fully generalizes the time-symmetric two-vector approach of Aharonov et al. to any realistic laboratory situation.

  34. This thesis presents a general algebraic approach for indirectly measuring both classical and quantum observables, along with several applications. To handle the case of imperfectly correlated indirect detectors we generalize the observable spectra from eigenvalues to contextual values. Eigenvalues weight spectral idempotents to construct an observable, but contextual values can weight more general probability observables corresponding to indirect detector outcomes in order to construct the same observable. We develop the classical case using the logical approach of Bayesian probability theory to emphasize the generality of the concept. For the quantum case, we outline how to generalize the classical case in a straightforward manner by treating the classical sample space as a spectral idempotent decomposition of the enveloping algebra for a Lie group; such a sample space can then be rotated to other equivalent sample spaces through Lie group automorphisms. We give several classical and quantum examples to illustrate the utility of our approach. In particular, we use the approach to describe the theoretical derivation and experimental violation of generalized Leggett-Garg inequalities using a quantum optical setup. We also describe the measurement of which-path information using an electronic Mach-Zehnder interferometer. Finally, we provide a detailed and exact treatment of the quantum weak value, which appears as a general feature in conditioned observable measurements using a weakly correlated detector.

    2012

  36. We refute the widely held belief that the quantum weak value necessarily pertains to weak measurements. To accomplish this, we use the transverse position of a beam as the detector for the conditioned von Neumann measurement of a system observable. For any coupling strength, any initial states, and any choice of conditioning, the averages of the detector position and momentum are completely described by the real parts of three generalized weak values in the joint Hilbert space. Higher-order detector moments also have similar weak value expansions. Using the Wigner distribution of the initial detector state, we find compact expressions for these weak values within the reduced system Hilbert space. As an application of the approach, we show that for any Hermite-Gauss mode of a paraxial beam-like detector these expressions reduce to the real and imaginary parts of a single system weak value plus an additional weak-value-like contribution that only affects the momentum shift.

    2011

  38. We present a detailed motivation for and definition of the contextual values of an observable, which were introduced by Dressel et al. [Phys. Rev. Lett. 104, 240401 (2010)]. The theory of contextual values is a principled approach to the generalized measurement of observables. It extends the well-established theory of generalized state measurements by bridging the gap between partial state collapse and the observables that represent physically relevant information about the system. To emphasize the general utility of the concept, we first construct the full theory of contextual values within an operational formulation of classical probability theory, paying special attention to observable construction, detector coupling, generalized measurement, and measurement disturbance. We then extend the results to quantum probability theory built as a superstructure on the classical theory, pointing out both the classical correspondences to and the full quantum generalizations of both Lüders' rule and the Aharonov-Bergmann-Lebowitz rule in the process. As such, our treatment doubles as a self-contained pedagogical introduction to the essential components of the operational formulations for both classical and quantum probability theory. We find in both cases that the contextual values of a system observable form a generalized spectrum that is associated with the independent outcomes of a partially correlated and generally ambiguous detector; the eigenvalues are a special case when the detector is perfectly correlated and unambiguous. To illustrate the approach, we apply the technique to both a classical example of marble color detection and a quantum example of polarization detection. For the quantum example we detail two devices: Fresnel reflection from a glass coverslip, and continuous beam displacement from a calcite crystal. We also analyze the three-box paradox to demonstrate that no negative probabilities are necessary in its analysis. Finally, we provide a derivation of the quantum weak value as a limit point of a pre- and postselected conditioned average and provide sufficient conditions for the derivation to hold.
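
     A minimal numerical sketch of the contextual-values idea follows, using a hypothetical detector correlation strength eps for an ambiguous two-outcome qubit measurement and the pseudoinverse prescription discussed in this work; the recovered generalized spectrum is ±1/eps, which reduces to the eigenvalues ±1 for a perfectly correlated detector (eps = 1).

```python
import numpy as np

eps = 0.4  # hypothetical detector correlation strength (0 < eps <= 1)
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)

# Ambiguous two-outcome probability observables (POVM) for a z measurement
E = [0.5 * (I2 + s * eps * sz) for s in (+1, -1)]

# Detector response matrix M[k, j] = Tr(E_j P_k) over the eigenprojectors
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
M = np.array([[np.trace(Ej @ Pk).real for Ej in E] for Pk in P])

# Contextual values solve sum_j alpha_j E_j = sigma_z; the pseudoinverse
# selects the preferred solution when the system is underdetermined
alphas = np.linalg.pinv(M) @ np.array([1.0, -1.0])  # eigenvalues of sigma_z
print(alphas)  # -> [ 2.5, -2.5 ] = [ 1/eps, -1/eps ]
```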

  39. We theoretically investigate a generalized “which-path” measurement on an electronic Mach-Zehnder Interferometer (MZI) implemented via Coulomb coupling to a second electronic MZI acting as a detector. The use of contextual values, or generalized eigenvalues, enables the precise construction of which-path operator averages that are valid for any measurement strength from the available drain currents. The form of the contextual values provides direct physical insight about the measurement being performed, providing information about the correlation strength between system and detector, the measurement inefficiency, and the proper background removal. We find that the detector interferometer must display maximal wavelike behavior to optimally measure the particle-like which-path information in the system interferometer, demonstrating wave-particle complementarity between the system and detector. We also find that the degree of quantum erasure that can be achieved by conditioning on a specific detector drain is directly related to the ambiguity of the measurement. Finally, conditioning the which-path averages on a particular system drain using the zero-frequency cross correlations produces conditioned averages that can become anomalously large due to quantum interference; the weak-coupling limit of these conditioned averages can produce both weak and detector-dependent semiweak values.

  40. Unlike the real part of the generalized weak value of an observable, which can in a restricted sense be operationally interpreted as an idealized conditioned average of that observable in the limit of zero measurement disturbance, the imaginary part of the generalized weak value does not provide information pertaining to the observable being measured. What it does provide is direct information about how the initial state would be unitarily disturbed by the observable operator. Specifically, we provide an operational interpretation for the imaginary part of the generalized weak value as the logarithmic directional derivative of the post-selection probability along the unitary flow generated by the action of the observable operator. To obtain this interpretation, we revisit the standard von Neumann measurement protocol for obtaining the real and imaginary parts of the weak value and solve it exactly for arbitrary initial states and post-selections using the quantum operations formalism, which allows us to understand in detail how each part of the generalized weak value arises in the linear response regime. We also provide exact treatments of qubit measurements and Gaussian detectors as illustrative special cases, and show that the measurement disturbance from a Gaussian detector is purely decohering in the Lindblad sense, which allows the shifts for a Gaussian detector to be completely understood for any coupling strength in terms of a single complex weak value that involves the decohered initial state.
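
     For pure pre- and postselections the operational statement takes a compact form: perturbing the initial state along the unitary flow exp(-i ε Â) changes the postselection probability P_f(ε) at the logarithmic rate

        \[ \mathrm{Im}\, A_w = \frac{1}{2} \left. \frac{\partial}{\partial \varepsilon} \ln P_f(\varepsilon) \right|_{\varepsilon = 0}. \]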

  41. We review and clarify the sufficient conditions for uniquely defining the generalized weak value as the weak limit of a conditioned average using the contextual values formalism introduced in Dressel, Agarwal and Jordan (2010 Phys. Rev. Lett. http://dx.doi.org/10.1103/PhysRevLett.104.240401). We also respond to criticism of our work by Parrott (arXiv:1105.4188v1) concerning a proposed counter-example to the uniqueness of the definition of the generalized weak value. The counter-example does not satisfy our prescription in the case of an underspecified measurement context. We show that when the contextual values formalism is properly applied to this example, a natural interpretation of the measurement emerges and the unique definition in the weak limit holds. We also prove a theorem regarding the uniqueness of the definition under our sufficient conditions for the general case. Finally, a second proposed counter-example by Parrott (arXiv:1105.4188v6) is shown not to satisfy the sufficiency conditions for the provided theorem.

  42. We generalize the derivation of Leggett-Garg inequalities to systematically treat a larger class of experimental situations by allowing multi-particle correlations, invasive detection, and ambiguous detector results. Furthermore, we show how many such inequalities may be tested simultaneously with a single setup. As a proof of principle, we violate several such two-particle inequalities with data obtained from a polarization-entangled biphoton state and a semi-weak polarization measurement based on Fresnel reflection. We also point out a nontrivial connection between specific two-party Leggett-Garg inequality violations and convex sums of strange weak values.

    2010

  44. We introduce contextual values as a generalization of the eigenvalues of an observable that takes into account both the system observable and a general measurement procedure. This technique leads to a natural definition of a general conditioned average that converges uniquely to the quantum weak value in the minimal disturbance limit. As such, we address the controversy in the literature regarding the theoretical consistency of the quantum weak value by providing a more general theoretical framework and giving several examples of how that framework relates to existing experimental and theoretical results.
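
     In symbols, contextual values α_j expand the observable in the probability observables (POVM elements) Ê_j of the detector, Â = Σ_j α_j Ê_j, so that the general conditioned average

        \[ \langle A \rangle_f = \frac{\sum_j \alpha_j P(j, f)}{P(f)} \]

     is built entirely from measurable joint and postselection probabilities; in the minimal-disturbance limit it converges to the real part of the quantum weak value A_w.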

    2009

  46. We explore the nature of the classical propagation of light through media with strong frequency-dependent dispersion in the presence of a gravitational field. In the weak field limit, gravity causes a redshift of the optical frequency, which the slow-light medium converts into a spatially varying index of refraction. This results in the bending of a light ray in the medium. We further propose experimental techniques to amplify and detect the phenomenon using weak value measurements. Independent heuristic and rigorous derivations of this effect are given.
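
     Schematically (in the weak-field limit, with sign conventions depending on the setup), the gravitational potential Φ shifts the local optical frequency by δω/ω ≈ -Φ/c², and strong dispersion converts this into a spatially varying index,

        \[ \delta n(\mathbf{r}) \approx -\frac{dn}{d\omega} \, \frac{\omega \, \Phi(\mathbf{r})}{c^{2}}, \]

     so the enormous dn/dω of a slow-light medium turns a tiny redshift into an appreciable index gradient that bends the ray.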