Commit 5231670: update sample docu
Signed-off-by: Dominik-Pinsel <dominik.pinsel@daimler.com>
DominikPinsel committed Jan 13, 2023
1 parent 6b97f06 commit 5231670
Showing 2 changed files with 152 additions and 13 deletions.
12 changes: 7 additions & 5 deletions docs/samples/Local TXDC Setup.md
@@ -2,7 +2,7 @@

This document describes how to set up two TXDConnector instances locally. The Supporting Infrastructure Deployment used by this example must never be used in production. The deployment of the two TXDConnector instances, done by this example, is not suitable for productive deployment scenarios.

@@ -14,7 +14,7 @@ productive deployment scenarios.

## Local Deployment

The Local TXDC Setup consists of three separate deployments: the Supporting Infrastructure, which is required to run connectors, and two TXDC Connector instances that can communicate with each other.

- [TXDC Supporting Infrastructure](../../edc-tests/src/main/resources/deployment/helm/supporting-infrastructure/README.md)
@@ -31,10 +31,10 @@ run connectors, and two different TXDC Connector instances, that can communicate

### Supporting Infrastructure

Before the connectors can be set up, the Supporting Infrastructure must be in place. It comes pre-configured with everything needed to run two connectors independently.

For this local test scenario,
the [Supporting Infrastructure](../../edc-tests/src/main/resources/deployment/helm/supporting-infrastructure/README.md)
of the TXDC Business Tests can be used.
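
A minimal sketch of installing it, assuming the chart can be deployed straight from a repository checkout; the namespace and release name are illustrative, and the linked README remains the authoritative procedure:

```shell
# Assumption: run from the repository root; "cx" namespace and release name are examples.
kubectl create namespace cx
helm dependency update edc-tests/src/main/resources/deployment/helm/supporting-infrastructure
helm install infrastructure edc-tests/src/main/resources/deployment/helm/supporting-infrastructure \
    --namespace cx
```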

@@ -112,3 +112,5 @@ helm install sokrates charts/tractusx-connector \
--set daps.clientId=E7:07:2D:74:56:66:31:F0:7B:10:EA:B6:03:06:4C:23:7F:ED:A6:65:keyid:E7:07:2D:74:56:66:31:F0:7B:10:EA:B6:03:06:4C:23:7F:ED:A6:65 \
--set backendService.httpProxyTokenReceiverUrl=http://backend:8080
```
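
The second TXDC Connector can be installed analogously under its own release name. A sketch, assuming Plato reuses the same chart; the client id placeholder is hypothetical and must be replaced with Plato's real DAPS value, and any `--set` flags hidden in the collapsed part of this diff apply as well:

```shell
# Hypothetical values: replace <plato-daps-client-id> with the client id configured for Plato.
helm install plato charts/tractusx-connector \
    --set daps.clientId=<plato-daps-client-id> \
    --set backendService.httpProxyTokenReceiverUrl=http://backend:8080
```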

> To try out the local setup, have a look at the [Transfer Example Documentation](Transfer%20Data.md)
153 changes: 145 additions & 8 deletions docs/samples/Transfer Data.md
@@ -3,36 +3,175 @@
This document will showcase a data transfer between two connectors.

For this transfer, connector **Bob** will act as data provider, and connector **Alice** will act as data consumer. But the roles could also be reversed.

> Please note: Before running the examples, the corresponding environment variables must be set.
> How to set up such an environment locally is documented in chapter 0.

**Contents**

0. (optional) Local Setup
1. Setup Data Offer
2. Request Contract Offers
3. Negotiate Contract
4. Transfer Data
5. Verify Data Transfer

## 0. (optional) Local Setup

To create a local setup with two connectors, have a look at
the [Local TXDC Setup Documentation](Local%20TXDC%20Setup.md).
It creates two connectors (Plato & Sokrates) with exposed Node Ports.

### See Node Ports using Minikube

Run the following command.

```shell
minikube service list
```

Minikube will then print out something like this:

```shell
|-------------|-----------------------|-----------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-------------|-----------------------|-----------------|---------------------------|
| cx | backend | frontend/8080 | http://192.168.49.2:31918 |
| | | backend/8081 | http://192.168.49.2:30193 | < Transfer Backend API
| cx | ids-daps | No node port |
| cx | plato-controlplane | default/8080 | http://192.168.49.2:31016 |
| | | control/8083 | http://192.168.49.2:32510 |
| | | data/8081 | http://192.168.49.2:30423 | < Plato Data Management API
| | | validation/8082 | http://192.168.49.2:30997 |
| | | ids/8084 | http://192.168.49.2:32709 | < Plato IDS API
| | | metrics/8085 | http://192.168.49.2:31124 |
| cx | plato-dataplane | No node port |
| cx | sokrates-controlplane | default/8080 | http://192.168.49.2:32297 |
| | | control/8083 | http://192.168.49.2:32671 |
| | | data/8081 | http://192.168.49.2:31772 | < Sokrates Data Management API
| | | validation/8082 | http://192.168.49.2:30540 |
| | | ids/8084 | http://192.168.49.2:32543 | < Sokrates IDS API
| | | metrics/8085 | http://192.168.49.2:30247 |
| cx | sokrates-dataplane | No node port |
| cx | vault | No node port |
| cx | vault-internal | No node port |
| cx | vault-ui | No node port |
| default | kubernetes | No node port |
| kube-system | kube-dns | No node port |
|-------------|-----------------------|-----------------|---------------------------|
```

The most important APIs used by this example are highlighted. How they are used is described in the subchapter 'Set Environment Variables, used by this example' below.
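
Individual service URLs can also be printed directly. This is a convenience, not part of the original walkthrough:

```shell
# Print only the URLs of the Plato control plane service.
minikube service -n cx plato-controlplane --url
```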

### See Node Ports using Kubernetes

Using plain Kubernetes, the Node Ports of each Service must be checked separately.

Run

```shell
kubectl describe service -n cx plato-controlplane
```

or

```shell
kubectl describe service -n cx sokrates-controlplane
```

Kubernetes will then print out something like this.

```shell
Name: plato-controlplane
Namespace: cx
Labels: app.kubernetes.io/component=edc-controlplane
app.kubernetes.io/instance=plato-controlplane
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=tractusx-connector-controlplane
app.kubernetes.io/part-of=edc
app.kubernetes.io/version=0.2.0
helm.sh/chart=tractusx-connector-0.2.0
Annotations: meta.helm.sh/release-name: plato
meta.helm.sh/release-namespace: cx
Selector: app.kubernetes.io/instance=plato-controlplane,app.kubernetes.io/name=tractusx-connector-controlplane
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.110.180.57
IPs: 10.110.180.57
Port: default 8080/TCP
TargetPort: default/TCP
NodePort: default 31016/TCP
Endpoints: 172.17.0.6:8080
Port: control 8083/TCP
TargetPort: control/TCP
NodePort: control 32510/TCP
Endpoints: 172.17.0.6:8083
Port: data 8081/TCP
TargetPort: data/TCP
NodePort: data 30423/TCP < Plato Data Management API
Endpoints: 172.17.0.6:8081
Port: validation 8082/TCP
TargetPort: validation/TCP
NodePort: validation 30997/TCP
Endpoints: 172.17.0.6:8082
Port: ids 8084/TCP
TargetPort: ids/TCP
NodePort: ids 32709/TCP < Plato IDS API
Endpoints: 172.17.0.6:8084
Port: metrics 8085/TCP
TargetPort: metrics/TCP
NodePort: metrics 31124/TCP
Endpoints: 172.17.0.6:8085
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
```

The most important APIs used by this example are highlighted. How they are used is described in the subchapter 'Set Environment Variables, used by this example' below.
In comparison to the Minikube example, this call shows only the ports. To call them, the Kubernetes Node IP / URL is also required. Where to get that IP may vary depending on how Kubernetes is deployed.
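
Two standard ways to look up the Node IP; which one applies depends on the cluster, which is an assumption about your setup:

```shell
# On Minikube, the node IP is reported directly.
minikube ip

# On a generic cluster, the INTERNAL-IP column lists the node addresses.
kubectl get nodes -o wide
```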

### Set Environment Variables, used by this example

The environment variables used by this example, each containing a URL, are:
- BOB_DATAMGMT_URL
- ALICE_DATAMGMT_URL
- BOB_IDS_URL
- ALICE_BACKEND_URL

Let's assume we will use Sokrates as Bob, and Plato as Alice.

**BOB_DATAMGMT_URL** must be the Node URL. In this local setup it would be `http://192.168.49.2:31772`

**ALICE_DATAMGMT_URL** must be the Node URL. In this local setup it would be `http://192.168.49.2:30423`

**BOB_IDS_URL** must be the internal Kubernetes URL. In this local setup it would be `http://sokrates-controlplane:8084`

**ALICE_BACKEND_URL** must be the Node URL. In this local setup it would be `http://192.168.49.2:30193`
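
Putting it together, the variables can be exported as below. This is a sketch for this local setup only; the IP `192.168.49.2` and the node ports are the example values from the listings above and will differ per machine:

```shell
# Assumption: Sokrates acts as Bob and Plato as Alice, with the Minikube node ports shown above.
export BOB_DATAMGMT_URL=http://192.168.49.2:31772
export ALICE_DATAMGMT_URL=http://192.168.49.2:30423
export BOB_IDS_URL=http://sokrates-controlplane:8084
export ALICE_BACKEND_URL=http://192.168.49.2:30193
```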

## 1. Setup Data Offer

Set up a data offer in **Bob**, so that **Alice** has something to consume.

In case you are unfamiliar with the EDC terms `Asset`, `Policy` or `ContractDefinition`, please have a look at the official open source documentation ([link](https://github.com/eclipse-edc/Connector/blob/main/docs/developer/architecture/domain-model.md)).

![Sequence 1](diagrams/transfer_sequence_1.png)

**Run**


The following commands will create an Asset, a Policy, and a Contract Definition.
For simplicity, `https://jsonplaceholder.typicode.com/todos/1` is used as the data source of the asset, but it could be any other API that is reachable from the Provider Data Plane.

```bash
curl -X POST "${BOB_DATAMGMT_URL}/data/assets" \
--header 'X-Api-Key: password' \
--header 'Content-Type: application/json' \
--data '{
@@ -73,7 +212,6 @@ curl -X POST "${BOB_DATAMGMT_URL}/data/policydefinitions" \
-s -o /dev/null -w 'Response Code: %{http_code}\n'
```
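
The body of the asset-creation request is collapsed in this diff view. For reference, a sketch of a complete call, assuming the usual Data Management API payload shape; the id, description, and property values are illustrative, not the ones from the collapsed diff:

```bash
curl -X POST "${BOB_DATAMGMT_URL}/data/assets" \
--header 'X-Api-Key: password' \
--header 'Content-Type: application/json' \
--data '{
    "asset": {
        "properties": {
            "asset:prop:id": "1",
            "asset:prop:description": "Demo Asset"
        }
    },
    "dataAddress": {
        "properties": {
            "type": "HttpData",
            "baseUrl": "https://jsonplaceholder.typicode.com/todos/1"
        }
    }
}' \
-s -o /dev/null -w 'Response Code: %{http_code}\n'
```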


```bash
curl -X POST "${BOB_DATAMGMT_URL}/data/contractdefinitions" \
--header 'X-Api-Key: password' \
@@ -157,7 +295,6 @@ export NEGOTIATION_ID=$( \
-s | jq -r '.id')
```


```bash
curl -X GET "${ALICE_DATAMGMT_URL}/data/contractnegotiations/${NEGOTIATION_ID}" \
--header 'X-Api-Key: password' \
@@ -201,7 +338,7 @@ export TRANSFER_ID=$( \
```
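
Because the negotiation runs asynchronously, the call above may need to be repeated until a final state is reached. A sketch of a polling loop, assuming the response exposes a `state` field and that `CONFIRMED` marks success in this EDC version:

```bash
# Hypothetical polling loop; endpoint, header, and jq usage match the calls above.
while true; do
  STATE=$(curl -X GET "${ALICE_DATAMGMT_URL}/data/contractnegotiations/${NEGOTIATION_ID}" \
    --header 'X-Api-Key: password' \
    -s | jq -r '.state')
  echo "Negotiation state: ${STATE}"
  [ "${STATE}" = "CONFIRMED" ] && break
  sleep 5
done
```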

```bash
curl -X GET "${ALICE_DATAMGMT_URL}/data/transferprocess/${TRANSFER_ID}" \
--header 'X-Api-Key: password' \
--header 'Content-Type: application/json' \
-s | jq
```
