
Add (check) full support for sampling in full parity with Lightning. #908

Merged
merged 62 commits on Sep 18, 2024
Changes from all commits (62 commits)
120bbed
include the new device base class
LuisAlfredoNu Sep 4, 2024
bdf4daa
add measurements
LuisAlfredoNu Sep 9, 2024
b05a6a4
State vector almos done
LuisAlfredoNu Sep 9, 2024
5dac907
tmp commit
LuisAlfredoNu Sep 9, 2024
bfd0771
Solve prop issue
LuisAlfredoNu Sep 9, 2024
a1ff6c6
ready measurenment class for LGPU
LuisAlfredoNu Sep 10, 2024
a0cfb1d
print helps for measurements
LuisAlfredoNu Sep 10, 2024
f472471
Merge gpuNewAPI_simulate
LuisAlfredoNu Sep 11, 2024
c7ac82d
grammar correction
LuisAlfredoNu Sep 11, 2024
6627913
cleaning code
LuisAlfredoNu Sep 11, 2024
6290953
apply format
LuisAlfredoNu Sep 11, 2024
219262b
delete usuless variables
LuisAlfredoNu Sep 11, 2024
bca1a74
delete prints
LuisAlfredoNu Sep 11, 2024
0f8f957
Revert change in measurenment test
LuisAlfredoNu Sep 11, 2024
a9ccf62
Merge branch 'gpuNewAPI_backend' into gpuNewAPI_simulate
LuisAlfredoNu Sep 11, 2024
a99b6e8
Merge branch 'gpuNewAPI_backend' into gpuNewAPI_simulate
LuisAlfredoNu Sep 11, 2024
0399f18
add simulate method
LuisAlfredoNu Sep 11, 2024
23d5696
apply format
LuisAlfredoNu Sep 11, 2024
585c313
Shuli suggestion
LuisAlfredoNu Sep 11, 2024
5eee8eb
apply format
LuisAlfredoNu Sep 11, 2024
be3c6f7
Develop Jacobian
LuisAlfredoNu Sep 11, 2024
7ff13b3
unlock the test for jacobian and adjoint-jacobian
LuisAlfredoNu Sep 11, 2024
3cec8b1
apply format
LuisAlfredoNu Sep 11, 2024
d4d79cb
unlock fullsupport
LuisAlfredoNu Sep 11, 2024
92089eb
Update pennylane_lightning/lightning_gpu/_mpi_handler.py
LuisAlfredoNu Sep 12, 2024
196042a
Apply suggestions from code review Vinvent's comments
LuisAlfredoNu Sep 12, 2024
b4ed1ae
Vincent's comments
LuisAlfredoNu Sep 12, 2024
695283b
Merge branch 'gpuNewAPI_simulate' of github.com:PennyLaneAI/pennylane…
LuisAlfredoNu Sep 12, 2024
1729d06
apply format
LuisAlfredoNu Sep 12, 2024
38ddbdb
Merge branch 'gpuNewAPI_simulate' into gpuNewAPI_AdjJaco
LuisAlfredoNu Sep 12, 2024
c9e6c42
Merge branch 'gpuNewAPI_AdjJaco' into gpuNewAPI_fullSupport
LuisAlfredoNu Sep 12, 2024
51b3824
add pytest skip for SparseHamiltonian due to bug for .sparse_matrix
LuisAlfredoNu Sep 12, 2024
6f0fffb
apply format
LuisAlfredoNu Sep 12, 2024
ffb79f1
spelling correction
LuisAlfredoNu Sep 13, 2024
5819efc
Apply suggestions from code review. Vincent's suggestion
LuisAlfredoNu Sep 13, 2024
2dbc7db
review comments
LuisAlfredoNu Sep 13, 2024
0630edf
apply format
LuisAlfredoNu Sep 13, 2024
3fa8409
apply format
LuisAlfredoNu Sep 13, 2024
ebe960d
added restriction on preprocess
LuisAlfredoNu Sep 13, 2024
af16b8d
Ali suggestion 1
LuisAlfredoNu Sep 13, 2024
0cb050f
add reset
LuisAlfredoNu Sep 16, 2024
54afeb5
apply_basis_state as abstract in GPU
LuisAlfredoNu Sep 16, 2024
ac87663
apply format
LuisAlfredoNu Sep 16, 2024
35270fb
Apply suggestions from code review Ali suggestion docs
LuisAlfredoNu Sep 16, 2024
f51cbb9
propagate namming suggestion
LuisAlfredoNu Sep 16, 2024
29675e7
Merge branch 'gpuNewAPI_simulate' into gpuNewAPI_AdjJaco
LuisAlfredoNu Sep 16, 2024
65e66e9
Apply suggestions from code review. Ali's suggestion 3
LuisAlfredoNu Sep 16, 2024
0472fdd
solve errors with kokkos
LuisAlfredoNu Sep 16, 2024
96728cb
apply format
LuisAlfredoNu Sep 16, 2024
5901a64
Merge branch 'gpuNewAPI_simulate' into gpuNewAPI_AdjJaco
LuisAlfredoNu Sep 16, 2024
112ead0
solve conflicts
LuisAlfredoNu Sep 16, 2024
1acf4db
Merge branch 'gpuNewAPI_backend' into gpuNewAPI_AdjJaco
LuisAlfredoNu Sep 16, 2024
87778d6
apply format
LuisAlfredoNu Sep 16, 2024
c493d47
solve issue with reset
LuisAlfredoNu Sep 16, 2024
faead9c
solve error with kokkos
LuisAlfredoNu Sep 17, 2024
bfcda74
Merge branch 'gpuNewAPI_AdjJaco' into gpuNewAPI_fullSupport
LuisAlfredoNu Sep 17, 2024
a734267
trigger CIs
LuisAlfredoNu Sep 17, 2024
1bb028b
Merge branch 'gpuNewAPI_backend' into gpuNewAPI_fullSupport
LuisAlfredoNu Sep 17, 2024
0f55794
trigger CIs
LuisAlfredoNu Sep 17, 2024
b5f5090
Apply suggestions from code review. Ali's suggestions
LuisAlfredoNu Sep 18, 2024
812fd84
Update pennylane_lightning/lightning_gpu/lightning_gpu.py
LuisAlfredoNu Sep 18, 2024
f63a234
raise error in test_device
LuisAlfredoNu Sep 18, 2024
152 changes: 134 additions & 18 deletions pennylane_lightning/lightning_gpu/lightning_gpu.py
@@ -18,6 +18,8 @@
"""

from ctypes.util import find_library
from dataclasses import replace
from functools import reduce
from importlib import util as imp_util
from pathlib import Path
from typing import Callable, Optional, Tuple, Union
@@ -26,10 +28,21 @@
import numpy as np
import pennylane as qml
from pennylane.devices import DefaultExecutionConfig, ExecutionConfig
from pennylane.devices.default_qubit import adjoint_ops
from pennylane.devices.modifiers import simulator_tracking, single_tape_support
from pennylane.devices.preprocess import (
decompose,
mid_circuit_measurements,
no_sampling,
validate_adjoint_trainable_params,
validate_device_wires,
validate_measurements,
validate_observables,
)
from pennylane.measurements import MidMeasureMP
from pennylane.operation import Operator
from pennylane.tape import QuantumScript, QuantumTape
from pennylane.operation import DecompositionUndefinedError, Operator, Tensor
from pennylane.ops import Prod, SProd, Sum
from pennylane.tape import QuantumScript
from pennylane.transforms.core import TransformProgram
from pennylane.typing import Result

@@ -65,16 +78,14 @@
LGPU_CPP_BINARY_AVAILABLE = True
except (ImportError, ValueError) as ex:
warn(str(ex), UserWarning)
- backend_info = None
LGPU_CPP_BINARY_AVAILABLE = False
backend_info = None


# The set of supported operations.
_operations = frozenset(
{
"Identity",
"BasisState",
"QubitStateVector",
"StatePrep",
"QubitUnitary",
"ControlledQubitUnitary",
"MultiControlledX",
@@ -133,7 +144,9 @@
"C(BlockEncode)",
}
)
# End the set of supported operations.

# The set of supported observables.
_observables = frozenset(
{
"PauliX",
@@ -145,6 +158,7 @@
"LinearCombination",
"Hermitian",
"Identity",
"Projector",
"Sum",
"Prod",
"SProd",
@@ -156,38 +170,72 @@
"""A function that determines whether or not an operation is supported by ``lightning.gpu``."""
# To avoid building matrices beyond the given thresholds.
# This should reduce runtime overheads for larger systems.
- return 0
if isinstance(op, qml.QFT):
return len(op.wires) < 10
if isinstance(op, qml.GroverOperator):
return len(op.wires) < 13
if isinstance(op, qml.PauliRot):
return False

return op.name in _operations
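The wire-count thresholds in `stopping_condition` above keep the device from materializing very large gate matrices. A standalone sketch of that guard, with an illustrative operation representation rather than PennyLane's actual classes:

```python
# Sketch of the matrix-size guard in stopping_condition(): operations whose
# dense matrix would be too large are declined here so the decompose transform
# expands them instead. Names and the string-based op model are illustrative.
_OPERATIONS = {"Identity", "PauliX", "CNOT", "QFT", "GroverOperator"}

def stopping_condition(name: str, num_wires: int) -> bool:
    if name == "QFT":
        return num_wires < 10      # avoid building a 2**10 x 2**10 matrix
    if name == "GroverOperator":
        return num_wires < 13
    if name == "PauliRot":
        return False               # always decomposed on this device
    return name in _OPERATIONS

print(stopping_condition("QFT", 4))    # True: small enough to keep native
print(stopping_condition("QFT", 12))   # False: will be decomposed
```

The cutoffs (10 and 13 wires) match the diff above; everything else here is a simplified stand-in.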


def stopping_condition_shots(op: Operator) -> bool:
"""A function that determines whether or not an operation is supported by ``lightning.gpu``
with finite shots."""
- return 0
if isinstance(op, (MidMeasureMP, qml.ops.op_math.Conditional)):
# LightningGPU does not support Mid-circuit measurements.
return False
return stopping_condition(op)


def accepted_observables(obs: Operator) -> bool:
"""A function that determines whether or not an observable is supported by ``lightning.gpu``."""
- return 0
return obs.name in _observables


def adjoint_observables(obs: Operator) -> bool:
"""A function that determines whether or not an observable is supported by ``lightning.gpu``
when using the adjoint differentiation method."""
- return 0
if isinstance(obs, qml.Projector):
return False

if isinstance(obs, Tensor):
if any(isinstance(o, qml.Projector) for o in obs.non_identity_obs):
return False
return True

if isinstance(obs, SProd):
return adjoint_observables(obs.base)

if isinstance(obs, (Sum, Prod)):
return all(adjoint_observables(o) for o in obs)

return obs.name in _observables
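The recursion in `adjoint_observables` above (reject `Projector`, unwrap `SProd`, require every operand of `Sum`/`Prod` to pass) can be shown in isolation. This sketch uses plain tuples `("Kind", operand, ...)` instead of PennyLane operator classes, so all names are illustrative:

```python
# Standalone mirror of the adjoint_observables() recursion above, with
# observables modeled as nested tuples rather than PennyLane operators.
SUPPORTED = {"PauliX", "PauliY", "PauliZ", "Hadamard", "Hermitian", "Identity"}

def supports_adjoint_obs(obs) -> bool:
    kind = obs[0]
    if kind == "Projector":            # Projector is never adjoint-differentiable
        return False
    if kind == "SProd":                # scalar * base: check the base
        return supports_adjoint_obs(obs[1])
    if kind in ("Sum", "Prod"):        # composite: every operand must pass
        return all(supports_adjoint_obs(o) for o in obs[1:])
    return kind in SUPPORTED

print(supports_adjoint_obs(("Sum", ("PauliZ",), ("SProd", ("PauliX",)))))  # True
print(supports_adjoint_obs(("Prod", ("PauliZ",), ("Projector",))))         # False
```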


def adjoint_measurements(mp: qml.measurements.MeasurementProcess) -> bool:
"""Specifies whether or not an observable is compatible with adjoint differentiation on DefaultQubit."""
- return 0
return isinstance(mp, qml.measurements.ExpectationMP)


def _supports_adjoint(circuit):
- return 0
if circuit is None:
return True

prog = TransformProgram()
_add_adjoint_transforms(prog)

try:
prog((circuit,))
except (DecompositionUndefinedError, qml.DeviceError, AttributeError):
return False
return True


def _adjoint_ops(op: qml.operation.Operator) -> bool:
"""Specify whether or not an Operator is supported by adjoint differentiation."""
- return 0
return not isinstance(op, qml.PauliRot) and adjoint_ops(op)


def _add_adjoint_transforms(program: TransformProgram) -> None:
@@ -203,9 +251,23 @@
"""

name = "adjoint + lightning.gpu"
- return 0
program.add_transform(no_sampling, name=name)
program.add_transform(
decompose,
stopping_condition=_adjoint_ops,
stopping_condition_shots=stopping_condition_shots,
name=name,
skip_initial_state_prep=False,
)
program.add_transform(validate_observables, accepted_observables, name=name)
program.add_transform(
validate_measurements, analytic_measurements=adjoint_measurements, name=name
)
program.add_transform(qml.transforms.broadcast_expand)
program.add_transform(validate_adjoint_trainable_params)


# LightningGPU specific methods
def check_gpu_resources() -> None:
"""Check the available resources of each Nvidia GPU"""
if find_library("custatevec") is None and not imp_util.find_spec("cuquantum"):
@@ -321,7 +383,24 @@
"""
Update the execution config with choices for how the device should be used and the device options.
"""
- return 0
updated_values = {}
if config.gradient_method == "best":
updated_values["gradient_method"] = "adjoint"
if config.use_device_gradient is None:
updated_values["use_device_gradient"] = config.gradient_method in ("best", "adjoint")
if config.grad_on_execution is None:
updated_values["grad_on_execution"] = True

new_device_options = dict(config.device_options)
for option in self._device_options:
if option not in new_device_options:
new_device_options[option] = getattr(self, f"_{option}", None)

# Set the MCMC defaults so the device options satisfy the requirements of ExecutionConfig
mcmc_default = {"mcmc": False, "kernel_name": None, "num_burnin": 0, "rng": None}
new_device_options.update(mcmc_default)

return replace(config, **updated_values, device_options=new_device_options)
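The method above never mutates the incoming (frozen) config: it collects updates in a dict and copies with `dataclasses.replace`. A minimal, self-contained mirror of that pattern, with a hypothetical `Config` class whose fields only mimic PennyLane's `ExecutionConfig`:

```python
from dataclasses import dataclass, field, replace
from typing import Optional

@dataclass(frozen=True)
class Config:
    gradient_method: str = "best"
    use_device_gradient: Optional[bool] = None
    device_options: dict = field(default_factory=dict)

def setup_execution_config(config: Config) -> Config:
    # Resolve "best" to the device's native method and fill unset choices,
    # then return a copy; the original config stays untouched.
    updated = {}
    if config.gradient_method == "best":
        updated["gradient_method"] = "adjoint"
    if config.use_device_gradient is None:
        updated["use_device_gradient"] = config.gradient_method in ("best", "adjoint")

    options = dict(config.device_options)
    options.setdefault("mcmc", False)   # default required by the config contract
    return replace(config, **updated, device_options=options)

new = setup_execution_config(Config())
print(new.gradient_method, new.use_device_gradient)  # adjoint True
```

Because `Config` is frozen, `replace` is the idiomatic way to produce the updated configuration without side effects on the caller's object.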

def preprocess(self, execution_config: ExecutionConfig = DefaultExecutionConfig):
"""This function defines the device transform program to be applied and an updated device configuration.
@@ -342,7 +421,28 @@
* Currently does not intrinsically support parameter broadcasting

"""
- return 0
exec_config = self._setup_execution_config(execution_config)
program = TransformProgram()

program.add_transform(validate_measurements, name=self.name)
program.add_transform(validate_observables, accepted_observables, name=self.name)
program.add_transform(validate_device_wires, self.wires, name=self.name)
program.add_transform(
mid_circuit_measurements, device=self, mcm_config=exec_config.mcm_config
)

program.add_transform(
decompose,
stopping_condition=stopping_condition,
stopping_condition_shots=stopping_condition_shots,
skip_initial_state_prep=True,
name=self.name,
)
program.add_transform(qml.transforms.broadcast_expand)

if exec_config.gradient_method == "adjoint":
_add_adjoint_transforms(program)
return program, exec_config

def execute(
self,
@@ -358,7 +458,19 @@
Returns:
TensorLike, tuple[TensorLike], tuple[tuple[TensorLike]]: A numeric result of the computation.
"""
- return 0
results = []
for circuit in circuits:
if self._wire_map is not None:
circuit, _ = qml.map_wires(circuit, self._wire_map)
results.append(
self.simulate(
circuit,
self._statevector,
postselect_mode=execution_config.mcm_config.postselect_mode,
)
)

return tuple(results)

def supports_derivatives(
self,
@@ -377,7 +489,11 @@
Bool: Whether or not a derivative can be calculated provided the given information

"""
- return 0
if circuit is None or (execution_config is None and circuit is None):
return True
if execution_config.gradient_method not in {"adjoint", "best"}:
return False
return _supports_adjoint(circuit=circuit)

def simulate(
self,
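Taken together, the `preprocess` flow added in this file chains validation and decomposition transforms into a single program that runs before execution. A self-contained sketch of that pattern, with circuits as plain dicts and all names illustrative rather than PennyLane's actual API:

```python
# Toy transform program: an ordered list of circuit -> circuit passes,
# mirroring how preprocess() composes validate_* and decompose above.
class TransformProgram:
    def __init__(self):
        self._transforms = []

    def add_transform(self, fn, **kwargs):
        self._transforms.append((fn, kwargs))

    def __call__(self, circuit):
        for fn, kwargs in self._transforms:
            circuit = fn(circuit, **kwargs)
        return circuit

def validate_observables(circuit, accepted):
    for obs in circuit["observables"]:
        if obs not in accepted:
            raise ValueError(f"Observable {obs} not supported")
    return circuit

# Hypothetical decomposition table for the sketch.
DECOMPOSITIONS = {"QFT": ["Hadamard", "ControlledPhaseShift"]}

def decompose(circuit, stopping_condition):
    ops = []
    for op in circuit["ops"]:
        if stopping_condition(op):
            ops.append(op)                       # natively supported: keep
        else:
            ops.extend(DECOMPOSITIONS.get(op, []))  # expand into supported gates
    return {**circuit, "ops": ops}

program = TransformProgram()
program.add_transform(validate_observables, accepted={"PauliZ", "PauliX"})
program.add_transform(decompose, stopping_condition=lambda op: op != "QFT")

circuit = {"ops": ["RX", "QFT", "CNOT"], "observables": ["PauliZ"]}
print(program(circuit)["ops"])  # ['RX', 'Hadamard', 'ControlledPhaseShift', 'CNOT']
```

The real device additionally appends the adjoint transforms when `gradient_method == "adjoint"`; the ordering of passes is what the transform program abstraction pins down.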
3 changes: 0 additions & 3 deletions tests/lightning_qubit/test_measurements_samples_MCMC.py
@@ -20,9 +20,6 @@
from conftest import LightningDevice as ld
from conftest import device_name

if device_name == "lightning.gpu":
pytest.skip("LGPU new API in WIP. Skipping.", allow_module_level=True)

if device_name != "lightning.qubit":
pytest.skip(
f"Device {device_name} does not have an mcmc option. Skipping.", allow_module_level=True
56 changes: 48 additions & 8 deletions tests/new_api/test_device.py
@@ -43,11 +43,7 @@
validate_measurements,
validate_observables,
)

if device_name == "lightning.gpu":
pytest.skip("LGPU new API in WIP. Skipping.", allow_module_level=True)

if device_name == "lightning.kokkos":
elif device_name == "lightning.kokkos":
from pennylane_lightning.lightning_kokkos.lightning_kokkos import (
_add_adjoint_transforms,
_adjoint_ops,
@@ -65,13 +61,31 @@
validate_measurements,
validate_observables,
)


if device_name == "lightning.tensor":
elif device_name == "lightning.gpu":
from pennylane_lightning.lightning_gpu.lightning_gpu import (
_add_adjoint_transforms,
_adjoint_ops,
_supports_adjoint,
accepted_observables,
adjoint_measurements,
adjoint_observables,
decompose,
mid_circuit_measurements,
no_sampling,
stopping_condition,
stopping_condition_shots,
validate_adjoint_trainable_params,
validate_device_wires,
validate_measurements,
validate_observables,
)
elif device_name == "lightning.tensor":
from pennylane_lightning.lightning_tensor.lightning_tensor import (
accepted_observables,
stopping_condition,
)
else:
raise TypeError(f"The device name: {device_name} is not a valid name")

if not LightningDevice._new_API:
pytest.skip("Exclusive tests for new device API. Skipping.", allow_module_level=True)
@@ -451,6 +465,11 @@ def test_execute_single_measurement(self, theta, phi, mp, dev):
if isinstance(mp.obs, qml.ops.LinearCombination) and not qml.operation.active_new_opmath():
mp.obs = qml.operation.convert_to_legacy_H(mp.obs)

if isinstance(mp.obs, qml.SparseHamiltonian) and dev.dtype == np.complex64:
pytest.skip(
reason="The conversion from qml.Hamiltonian to SparseHamiltonian is only possible with np.complex128"
)

qs = QuantumScript(
[
qml.RX(phi, 0),
@@ -644,6 +663,12 @@ def test_supports_derivatives(self, dev, config, tape, expected, batch_obs):
qml.Z(1) + qml.X(1),
qml.Hamiltonian([-1.0, 1.5], [qml.Z(1), qml.X(1)]),
qml.Hermitian(qml.Hadamard.compute_matrix(), 0),
qml.SparseHamiltonian(
qml.Hamiltonian([-1.0, 1.5], [qml.Z(1), qml.X(1)]).sparse_matrix(
wire_order=[0, 1, 2]
),
wires=[0, 1, 2],
),
qml.Projector([1], 1),
],
)
@@ -652,6 +677,11 @@ def test_derivatives_single_expval(
self, theta, phi, dev, obs, execute_and_derivatives, batch_obs
):
"""Test that the jacobian is correct when a tape has a single expectation value"""
if isinstance(obs, qml.SparseHamiltonian) and dev.dtype == np.complex64:
pytest.skip(
reason="The conversion from qml.Hamiltonian to SparseHamiltonian is only possible with np.complex128"
)
Reviewer note on lines +680 to +683: this limitation comes from hard-coded behavior in PennyLane (ref).


if isinstance(obs, qml.ops.LinearCombination) and not qml.operation.active_new_opmath():
obs = qml.operation.convert_to_legacy_H(obs)

@@ -708,6 +738,11 @@ def test_derivatives_multi_expval(
self, theta, phi, omega, dev, obs1, obs2, execute_and_derivatives, batch_obs
):
"""Test that the jacobian is correct when a tape has multiple expectation values"""
if isinstance(obs2, qml.SparseHamiltonian) and dev.dtype == np.complex64:
pytest.skip(
reason="The conversion from qml.Hamiltonian to SparseHamiltonian is only possible with np.complex128"
)

if isinstance(obs1, qml.ops.LinearCombination) and not qml.operation.active_new_opmath():
obs1 = qml.operation.convert_to_legacy_H(obs1)
if isinstance(obs2, qml.ops.LinearCombination) and not qml.operation.active_new_opmath():
@@ -1077,6 +1112,11 @@ def test_vjp_multi_expval(
self, theta, phi, omega, dev, obs1, obs2, execute_and_derivatives, batch_obs
):
"""Test that the VJP is correct when a tape has multiple expectation values"""
if isinstance(obs2, qml.SparseHamiltonian) and dev.dtype == np.complex64:
pytest.skip(
reason="The conversion from qml.Hamiltonian to SparseHamiltonian is only possible with np.complex128"
)

if isinstance(obs1, qml.ops.LinearCombination) and not qml.operation.active_new_opmath():
obs1 = qml.operation.convert_to_legacy_H(obs1)
if isinstance(obs2, qml.ops.LinearCombination) and not qml.operation.active_new_opmath():
3 changes: 0 additions & 3 deletions tests/new_api/test_expval.py
@@ -21,9 +21,6 @@
from conftest import PHI, THETA, VARPHI, LightningDevice, device_name
from pennylane.devices import DefaultQubit

- if device_name == "lightning.gpu":
-     pytest.skip("LGPU new API in WIP. Skipping.", allow_module_level=True)
-
if not LightningDevice._new_API:
pytest.skip("Exclusive tests for new API. Skipping.", allow_module_level=True)

Expand Down