Add (check) full support for sampling in full parity with Lightning. #908

Merged Sep 18, 2024 (62 commits)
Changes from 59 commits

Commits
120bbed
include the new device base class
LuisAlfredoNu Sep 4, 2024
bdf4daa
add measurements
LuisAlfredoNu Sep 9, 2024
b05a6a4
State vector almost done
LuisAlfredoNu Sep 9, 2024
5dac907
tmp commit
LuisAlfredoNu Sep 9, 2024
bfd0771
Solve prop issue
LuisAlfredoNu Sep 9, 2024
a1ff6c6
ready measurement class for LGPU
LuisAlfredoNu Sep 10, 2024
a0cfb1d
print helps for measurements
LuisAlfredoNu Sep 10, 2024
f472471
Merge gpuNewAPI_simulate
LuisAlfredoNu Sep 11, 2024
c7ac82d
grammar correction
LuisAlfredoNu Sep 11, 2024
6627913
cleaning code
LuisAlfredoNu Sep 11, 2024
6290953
apply format
LuisAlfredoNu Sep 11, 2024
219262b
delete useless variables
LuisAlfredoNu Sep 11, 2024
bca1a74
delete prints
LuisAlfredoNu Sep 11, 2024
0f8f957
Revert change in measurement test
LuisAlfredoNu Sep 11, 2024
a9ccf62
Merge branch 'gpuNewAPI_backend' into gpuNewAPI_simulate
LuisAlfredoNu Sep 11, 2024
a99b6e8
Merge branch 'gpuNewAPI_backend' into gpuNewAPI_simulate
LuisAlfredoNu Sep 11, 2024
0399f18
add simulate method
LuisAlfredoNu Sep 11, 2024
23d5696
apply format
LuisAlfredoNu Sep 11, 2024
585c313
Shuli suggestion
LuisAlfredoNu Sep 11, 2024
5eee8eb
apply format
LuisAlfredoNu Sep 11, 2024
be3c6f7
Develop Jacobian
LuisAlfredoNu Sep 11, 2024
7ff13b3
unlock the test for jacobian and adjoint-jacobian
LuisAlfredoNu Sep 11, 2024
3cec8b1
apply format
LuisAlfredoNu Sep 11, 2024
d4d79cb
unlock fullsupport
LuisAlfredoNu Sep 11, 2024
92089eb
Update pennylane_lightning/lightning_gpu/_mpi_handler.py
LuisAlfredoNu Sep 12, 2024
196042a
Apply suggestions from code review. Vincent's comments
LuisAlfredoNu Sep 12, 2024
b4ed1ae
Vincent's comments
LuisAlfredoNu Sep 12, 2024
695283b
Merge branch 'gpuNewAPI_simulate' of github.com:PennyLaneAI/pennylane…
LuisAlfredoNu Sep 12, 2024
1729d06
apply format
LuisAlfredoNu Sep 12, 2024
38ddbdb
Merge branch 'gpuNewAPI_simulate' into gpuNewAPI_AdjJaco
LuisAlfredoNu Sep 12, 2024
c9e6c42
Merge branch 'gpuNewAPI_AdjJaco' into gpuNewAPI_fullSupport
LuisAlfredoNu Sep 12, 2024
51b3824
add pytest skip for SparseHamiltonian due to a bug in .sparse_matrix
LuisAlfredoNu Sep 12, 2024
6f0fffb
apply format
LuisAlfredoNu Sep 12, 2024
ffb79f1
spelling correction
LuisAlfredoNu Sep 13, 2024
5819efc
Apply suggestions from code review. Vincent's suggestion
LuisAlfredoNu Sep 13, 2024
2dbc7db
review comments
LuisAlfredoNu Sep 13, 2024
0630edf
apply format
LuisAlfredoNu Sep 13, 2024
3fa8409
apply format
LuisAlfredoNu Sep 13, 2024
ebe960d
added restriction on preprocess
LuisAlfredoNu Sep 13, 2024
af16b8d
Ali suggestion 1
LuisAlfredoNu Sep 13, 2024
0cb050f
add reset
LuisAlfredoNu Sep 16, 2024
54afeb5
apply_basis_state as abstract in GPU
LuisAlfredoNu Sep 16, 2024
ac87663
apply format
LuisAlfredoNu Sep 16, 2024
35270fb
Apply suggestions from code review. Ali's docs suggestions
LuisAlfredoNu Sep 16, 2024
f51cbb9
propagate naming suggestion
LuisAlfredoNu Sep 16, 2024
29675e7
Merge branch 'gpuNewAPI_simulate' into gpuNewAPI_AdjJaco
LuisAlfredoNu Sep 16, 2024
65e66e9
Apply suggestions from code review. Ali's suggestion 3
LuisAlfredoNu Sep 16, 2024
0472fdd
solve errors with kokkos
LuisAlfredoNu Sep 16, 2024
96728cb
apply format
LuisAlfredoNu Sep 16, 2024
5901a64
Merge branch 'gpuNewAPI_simulate' into gpuNewAPI_AdjJaco
LuisAlfredoNu Sep 16, 2024
112ead0
solve conflicts
LuisAlfredoNu Sep 16, 2024
1acf4db
Merge branch 'gpuNewAPI_backend' into gpuNewAPI_AdjJaco
LuisAlfredoNu Sep 16, 2024
87778d6
apply format
LuisAlfredoNu Sep 16, 2024
c493d47
solve issue with reset
LuisAlfredoNu Sep 16, 2024
faead9c
solve error with kokkos
LuisAlfredoNu Sep 17, 2024
bfcda74
Merge branch 'gpuNewAPI_AdjJaco' into gpuNewAPI_fullSupport
LuisAlfredoNu Sep 17, 2024
a734267
trigger CIs
LuisAlfredoNu Sep 17, 2024
1bb028b
Merge branch 'gpuNewAPI_backend' into gpuNewAPI_fullSupport
LuisAlfredoNu Sep 17, 2024
0f55794
trigger CIs
LuisAlfredoNu Sep 17, 2024
b5f5090
Apply suggestions from code review. Ali's suggestions
LuisAlfredoNu Sep 18, 2024
812fd84
Update pennylane_lightning/lightning_gpu/lightning_gpu.py
LuisAlfredoNu Sep 18, 2024
f63a234
raise error in test_device
LuisAlfredoNu Sep 18, 2024
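
For orientation before the diffs: with this PR, lightning.gpu is expected to support finite-shot sampling through PennyLane's standard new-device API. Below is a minimal usage sketch, assuming a CUDA-capable installation of the lightning.gpu backend; the wire count, shot count, and circuit are illustrative only, not taken from this PR.

import pennylane as qml

# Hypothetical usage of the sampling support added in this PR.
dev = qml.device("lightning.gpu", wires=2, shots=1000)

@qml.qnode(dev)
def circuit(theta):
    qml.RX(theta, wires=0)
    qml.CNOT(wires=[0, 1])
    # Finite shots route execution through the device's sampling path.
    return qml.sample(qml.PauliZ(0))

samples = circuit(0.5)  # array of 1000 values in {+1, -1}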
154 changes: 136 additions & 18 deletions pennylane_lightning/lightning_gpu/lightning_gpu.py
@@ -18,6 +18,8 @@
"""

from ctypes.util import find_library
from dataclasses import replace
from functools import reduce
from importlib import util as imp_util
from pathlib import Path
from typing import Callable, Optional, Tuple, Union
@@ -26,10 +28,21 @@
import numpy as np
import pennylane as qml
from pennylane.devices import DefaultExecutionConfig, ExecutionConfig
from pennylane.devices.default_qubit import adjoint_ops
from pennylane.devices.modifiers import simulator_tracking, single_tape_support
from pennylane.devices.preprocess import (
decompose,
mid_circuit_measurements,
no_sampling,
validate_adjoint_trainable_params,
validate_device_wires,
validate_measurements,
validate_observables,
)
from pennylane.measurements import MidMeasureMP
from pennylane.operation import Operator
from pennylane.tape import QuantumScript, QuantumTape
from pennylane.operation import DecompositionUndefinedError, Operator, Tensor
from pennylane.ops import Prod, SProd, Sum
from pennylane.tape import QuantumScript
from pennylane.transforms.core import TransformProgram
from pennylane.typing import Result

@@ -65,16 +78,14 @@
LGPU_CPP_BINARY_AVAILABLE = True
except (ImportError, ValueError) as ex:
warn(str(ex), UserWarning)
backend_info = None
LGPU_CPP_BINARY_AVAILABLE = False
backend_info = None


# The set of supported operations.
_operations = frozenset(
{
"Identity",
"BasisState",
"QubitStateVector",
"StatePrep",
"QubitUnitary",
"ControlledQubitUnitary",
"MultiControlledX",
@@ -133,7 +144,9 @@
"C(BlockEncode)",
}
)
# End the set of supported operations.

# The set of supported observables.
_observables = frozenset(
{
"PauliX",
@@ -145,6 +158,7 @@
"LinearCombination",
"Hermitian",
"Identity",
"Projector",
"Sum",
"Prod",
"SProd",
@@ -156,38 +170,72 @@
"""A function that determines whether or not an operation is supported by ``lightning.gpu``."""
# To avoid building matrices beyond the given thresholds.
# This should reduce runtime overheads for larger systems.
return 0
if isinstance(op, qml.QFT):
return len(op.wires) < 10

if isinstance(op, qml.GroverOperator):
return len(op.wires) < 13

if isinstance(op, qml.PauliRot):
return False

return op.name in _operations
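
A note on the thresholds above: reporting a large QFT or GroverOperator as unsupported makes the decompose transform expand it into native gates instead of materializing its full matrix. A hedged illustration of the resulting behavior, calling the stopping_condition defined in this file with the wire counts hard-coded above:

import pennylane as qml

# A 9-wire QFT stays a single gate; a 12-wire QFT is decomposed instead.
assert stopping_condition(qml.QFT(wires=range(9))) is True
assert stopping_condition(qml.QFT(wires=range(12))) is False

# GroverOperator follows the same pattern with a 13-wire threshold.
assert stopping_condition(qml.GroverOperator(wires=range(12))) is True
assert stopping_condition(qml.GroverOperator(wires=range(13))) is False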


def stopping_condition_shots(op: Operator) -> bool:
"""A function that determines whether or not an operation is supported by ``lightning.gpu``
with finite shots."""
return 0
if isinstance(op, (MidMeasureMP, qml.ops.op_math.Conditional)):
# LightningGPU does not support Mid-circuit measurements.
return False

return stopping_condition(op)


def accepted_observables(obs: Operator) -> bool:
"""A function that determines whether or not an observable is supported by ``lightning.gpu``."""
return 0
return obs.name in _observables


def adjoint_observables(obs: Operator) -> bool:
"""A function that determines whether or not an observable is supported by ``lightning.gpu``
when using the adjoint differentiation method."""
return 0
if isinstance(obs, qml.Projector):
return False

if isinstance(obs, Tensor):
if any(isinstance(o, qml.Projector) for o in obs.non_identity_obs):
return False
return True

if isinstance(obs, SProd):
return adjoint_observables(obs.base)

if isinstance(obs, (Sum, Prod)):
return all(adjoint_observables(o) for o in obs)

return obs.name in _observables
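
The recursion above walks composite observables so that a Projector is rejected no matter how deeply it is nested. A small sketch of the intended behavior, with hypothetical inputs:

import pennylane as qml

# A Sum of adjoint-friendly terms is accepted ...
obs_ok = qml.sum(qml.s_prod(0.5, qml.PauliZ(0)), qml.PauliX(1))
assert adjoint_observables(obs_ok) is True

# ... but a Projector hidden inside a composite observable is not.
obs_bad = qml.sum(qml.PauliZ(0), qml.s_prod(2.0, qml.Projector([0], wires=1)))
assert adjoint_observables(obs_bad) is False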


def adjoint_measurements(mp: qml.measurements.MeasurementProcess) -> bool:
"""Specifies whether or not an observable is compatible with adjoint differentiation on DefaultQubit."""
return 0
return isinstance(mp, qml.measurements.ExpectationMP)


def _supports_adjoint(circuit):
return 0
if circuit is None:
return True

prog = TransformProgram()
_add_adjoint_transforms(prog)

try:
prog((circuit,))
except (DecompositionUndefinedError, qml.DeviceError, AttributeError):
return False
return True


def _adjoint_ops(op: qml.operation.Operator) -> bool:
"""Specify whether or not an Operator is supported by adjoint differentiation."""
return 0
return not isinstance(op, qml.PauliRot) and adjoint_ops(op)


def _add_adjoint_transforms(program: TransformProgram) -> None:
@@ -203,9 +251,23 @@
"""

name = "adjoint + lightning.gpu"
return 0
program.add_transform(no_sampling, name=name)
program.add_transform(
decompose,
stopping_condition=_adjoint_ops,
stopping_condition_shots=stopping_condition_shots,
name=name,
skip_initial_state_prep=False,
)
program.add_transform(validate_observables, accepted_observables, name=name)
program.add_transform(
validate_measurements, analytic_measurements=adjoint_measurements, name=name
)
program.add_transform(qml.transforms.broadcast_expand)
program.add_transform(validate_adjoint_trainable_params)
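
These transforms compose into a single program. A hedged sketch of how a caller could exercise it on a tape, mirroring the pattern _supports_adjoint uses above (the tape contents are illustrative):

import pennylane as qml
from pennylane.tape import QuantumScript
from pennylane.transforms.core import TransformProgram

prog = TransformProgram()
_add_adjoint_transforms(prog)

# An analytic expectation-value tape passes validation and decomposition ...
tape = QuantumScript([qml.RX(0.1, wires=0)], [qml.expval(qml.PauliZ(0))])
batches, post_fn = prog((tape,))
# ... while a finite-shot tape would raise, because of the no_sampling transform.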


# LightningGPU specific methods
def check_gpu_resources() -> None:
"""Check the available resources of each Nvidia GPU"""
if find_library("custatevec") is None and not imp_util.find_spec("cuquantum"):
@@ -321,7 +383,24 @@
"""
Update the execution config with choices for how the device should be used and the device options.
"""
return 0
updated_values = {}
if config.gradient_method == "best":
updated_values["gradient_method"] = "adjoint"
if config.use_device_gradient is None:
updated_values["use_device_gradient"] = config.gradient_method in ("best", "adjoint")
if config.grad_on_execution is None:
updated_values["grad_on_execution"] = True

new_device_options = dict(config.device_options)
for option in self._device_options:
if option not in new_device_options:
new_device_options[option] = getattr(self, f"_{option}", None)

# Set the MCMC defaults to satisfy the requirements of ExecutionConfig
mcmc_default = {"mcmc": False, "kernel_name": None, "num_burnin": 0, "rng": None}
new_device_options.update(mcmc_default)

return replace(config, **updated_values, device_options=new_device_options)
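
For readers less familiar with dataclasses.replace: it copies the (frozen) config and overrides only the named fields, which is why the method accumulates updated_values and applies them in one call. A standalone sketch of the same pattern, using a hypothetical stand-in for ExecutionConfig:

from dataclasses import dataclass, replace
from typing import Optional

@dataclass(frozen=True)
class MyConfig:  # hypothetical stand-in for ExecutionConfig
    gradient_method: str = "best"
    grad_on_execution: Optional[bool] = None

cfg = MyConfig()
new_cfg = replace(cfg, gradient_method="adjoint", grad_on_execution=True)
# cfg is untouched; new_cfg carries the overridden fields.
assert cfg.gradient_method == "best" and new_cfg.gradient_method == "adjoint"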

def preprocess(self, execution_config: ExecutionConfig = DefaultExecutionConfig):
"""This function defines the device transform program to be applied and an updated device configuration.
@@ -342,7 +421,28 @@
* Currently does not intrinsically support parameter broadcasting

"""
return 0
exec_config = self._setup_execution_config(execution_config)
program = TransformProgram()

program.add_transform(validate_measurements, name=self.name)
program.add_transform(validate_observables, accepted_observables, name=self.name)
program.add_transform(validate_device_wires, self.wires, name=self.name)
program.add_transform(
mid_circuit_measurements, device=self, mcm_config=exec_config.mcm_config
)

program.add_transform(
decompose,
stopping_condition=stopping_condition,
stopping_condition_shots=stopping_condition_shots,
skip_initial_state_prep=True,
name=self.name,
)
program.add_transform(qml.transforms.broadcast_expand)

if exec_config.gradient_method == "adjoint":
_add_adjoint_transforms(program)
return program, exec_config
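
Putting preprocess and execute together, the expected calling pattern under the new device API looks roughly like this. A sketch only, assuming a working lightning.gpu build; the circuit is illustrative:

import pennylane as qml
from pennylane.tape import QuantumScript

dev = qml.device("lightning.gpu", wires=1)
tape = QuantumScript([qml.RX(0.1, wires=0)], [qml.expval(qml.PauliZ(0))])

program, config = dev.preprocess()
new_tapes, post_fn = program((tape,))             # validate + decompose
results = post_fn(dev.execute(new_tapes, config))  # run and post-process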

def execute(
self,
@@ -358,7 +458,19 @@
Returns:
TensorLike, tuple[TensorLike], tuple[tuple[TensorLike]]: A numeric result of the computation.
"""
return 0
results = []
for circuit in circuits:
if self._wire_map is not None:
[circuit], _ = qml.map_wires(circuit, self._wire_map)
results.append(
self.simulate(
circuit,
self._statevector,
postselect_mode=execution_config.mcm_config.postselect_mode,
)
)

return tuple(results)

def supports_derivatives(
self,
@@ -377,7 +489,13 @@
Bool: Whether or not a derivative can be calculated given the provided information

"""
return 0
if execution_config is None and circuit is None:
return True
if execution_config.gradient_method not in {"adjoint", "best"}:
return False
if circuit is None:
return True
return _supports_adjoint(circuit=circuit)
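
A hedged illustration of the decision logic above; the device instantiation is hypothetical and assumes a working lightning.gpu build:

import pennylane as qml
from pennylane.devices import ExecutionConfig

dev = qml.device("lightning.gpu", wires=2)
assert dev.supports_derivatives()  # no info given: the device is capable
assert dev.supports_derivatives(ExecutionConfig(gradient_method="adjoint"))
assert not dev.supports_derivatives(ExecutionConfig(gradient_method="parameter-shift"))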

def simulate(
self,
3 changes: 0 additions & 3 deletions tests/lightning_qubit/test_measurements_samples_MCMC.py
@@ -20,9 +20,6 @@
from conftest import LightningDevice as ld
from conftest import device_name

if device_name == "lightning.gpu":
pytest.skip("LGPU new API in WIP. Skipping.", allow_module_level=True)

if device_name != "lightning.qubit":
pytest.skip(
f"Device {device_name} does not have an mcmc option. Skipping.", allow_module_level=True
48 changes: 45 additions & 3 deletions tests/new_api/test_device.py
@@ -44,9 +44,6 @@
validate_observables,
)

if device_name == "lightning.gpu":
pytest.skip("LGPU new API in WIP. Skipping.", allow_module_level=True)

if device_name == "lightning.kokkos":
from pennylane_lightning.lightning_kokkos.lightning_kokkos import (
_add_adjoint_transforms,
@@ -66,6 +63,25 @@
validate_observables,
)

if device_name == "lightning.gpu":
from pennylane_lightning.lightning_gpu.lightning_gpu import (
_add_adjoint_transforms,
_adjoint_ops,
_supports_adjoint,
accepted_observables,
adjoint_measurements,
adjoint_observables,
decompose,
mid_circuit_measurements,
no_sampling,
stopping_condition,
stopping_condition_shots,
validate_adjoint_trainable_params,
validate_device_wires,
validate_measurements,
validate_observables,
)


if device_name == "lightning.tensor":
from pennylane_lightning.lightning_tensor.lightning_tensor import (
@@ -451,6 +467,11 @@ def test_execute_single_measurement(self, theta, phi, mp, dev):
if isinstance(mp.obs, qml.ops.LinearCombination) and not qml.operation.active_new_opmath():
mp.obs = qml.operation.convert_to_legacy_H(mp.obs)

if isinstance(mp.obs, qml.SparseHamiltonian) and dev.dtype == np.complex64:
pytest.skip(
reason="The conversion from qml.Hamiltonian to SparseHamiltonian is only possible with np.complex128"
)

qs = QuantumScript(
[
qml.RX(phi, 0),
@@ -644,6 +665,12 @@ def test_supports_derivatives(self, dev, config, tape, expected, batch_obs):
qml.Z(1) + qml.X(1),
qml.Hamiltonian([-1.0, 1.5], [qml.Z(1), qml.X(1)]),
qml.Hermitian(qml.Hadamard.compute_matrix(), 0),
qml.SparseHamiltonian(
qml.Hamiltonian([-1.0, 1.5], [qml.Z(1), qml.X(1)]).sparse_matrix(
wire_order=[0, 1, 2]
),
wires=[0, 1, 2],
),
qml.Projector([1], 1),
],
)
@@ -652,6 +679,11 @@ def test_derivatives_single_expval(
self, theta, phi, dev, obs, execute_and_derivatives, batch_obs
):
"""Test that the jacobian is correct when a tape has a single expectation value"""
if isinstance(obs, qml.SparseHamiltonian) and dev.dtype == np.complex64:
pytest.skip(
reason="The conversion from qml.Hamiltonian to SparseHamiltonian is only possible with np.complex128"
)
Author comment on lines +680 to +683: this limitation comes from a hard-coded value in PennyLane (ref).


if isinstance(obs, qml.ops.LinearCombination) and not qml.operation.active_new_opmath():
obs = qml.operation.convert_to_legacy_H(obs)

@@ -708,6 +740,11 @@ def test_derivatives_multi_expval(
self, theta, phi, omega, dev, obs1, obs2, execute_and_derivatives, batch_obs
):
"""Test that the jacobian is correct when a tape has multiple expectation values"""
if isinstance(obs2, qml.SparseHamiltonian) and dev.dtype == np.complex64:
pytest.skip(
reason="The conversion from qml.Hamiltonian to SparseHamiltonian is only possible with np.complex128"
)

if isinstance(obs1, qml.ops.LinearCombination) and not qml.operation.active_new_opmath():
obs1 = qml.operation.convert_to_legacy_H(obs1)
if isinstance(obs2, qml.ops.LinearCombination) and not qml.operation.active_new_opmath():
@@ -1077,6 +1114,11 @@ def test_vjp_multi_expval(
self, theta, phi, omega, dev, obs1, obs2, execute_and_derivatives, batch_obs
):
"""Test that the VJP is correct when a tape has multiple expectation values"""
if isinstance(obs2, qml.SparseHamiltonian) and dev.dtype == np.complex64:
pytest.skip(
reason="The conversion from qml.Hamiltonian to SparseHamiltonian is only possible with np.complex128"
)

if isinstance(obs1, qml.ops.LinearCombination) and not qml.operation.active_new_opmath():
obs1 = qml.operation.convert_to_legacy_H(obs1)
if isinstance(obs2, qml.ops.LinearCombination) and not qml.operation.active_new_opmath():
3 changes: 0 additions & 3 deletions tests/new_api/test_expval.py
@@ -21,9 +21,6 @@
from conftest import PHI, THETA, VARPHI, LightningDevice, device_name
from pennylane.devices import DefaultQubit

if device_name == "lightning.gpu":
pytest.skip("LGPU new API in WIP. Skipping.", allow_module_level=True)

if not LightningDevice._new_API:
pytest.skip("Exclusive tests for new API. Skipping.", allow_module_level=True)

3 changes: 0 additions & 3 deletions tests/new_api/test_var.py
@@ -22,9 +22,6 @@
from conftest import PHI, THETA, VARPHI, LightningDevice, device_name
from pennylane.tape import QuantumScript

if device_name == "lightning.gpu":
pytest.skip("LGPU new API in WIP. Skipping.", allow_module_level=True)

if not LightningDevice._new_API:
pytest.skip("Exclusive tests for new API. Skipping.", allow_module_level=True)
