CC '21 Artifacts #90 Overview


Communication-Safe Web Programming in TypeScript with Routed Multiparty Session Types

Anson Miu, Francisco Ferreira, Nobuko Yoshida and Fangyi Zhou

Our paper presents STScript, a toolchain that generates TypeScript APIs for communication-safe web development over WebSockets, and RouST, a new session type theory that supports multiparty communications with routing mechanisms.

This overview describes the steps to assess the practical claims of the paper using the artifact.

  1. Getting Started

    • 1.1 Obtain the Artifact (Docker image)
    • 1.2 Run the Artifact
    • 1.3 Artifact Layout
  2. Experiment Workflow

    • 2.1 End-to-End Tests
    • 2.2 Case Studies
    • 2.3 Performance Benchmarks
  3. Experiment Customisation

    • 3.1 Case Studies
    • 3.2 Performance Benchmarks


1 Getting Started

In this section, we outline how to access and run the artifact. We also introduce the layout of this repository, which is used to build the artifact Docker image.


1.1 Obtain the Artifact (Docker image)

We provide a Docker image with the necessary dependencies. The following steps assume a Unix environment with Docker properly installed; on other platforms supported by Docker, the image can be imported in a similar way.

Make sure that the Docker daemon is running. Load the Docker image (use sudo if necessary):

$ docker load < stscript-cc21-artifact.tar.gz

You should see the following output:

Loaded image: stscript-cc21-artifact

Alternatively, you can build the Docker image from source:

$ git clone --recursive \
	https://github.com/STScript-2020/cc21-artifact
$ cd cc21-artifact
$ docker build . -t "stscript-cc21-artifact"

1.2 Run the Artifact (Docker image)

To run the image, use the following command (with sudo if necessary):

$ docker run -it -p 127.0.0.1:5000:5000 \
		-p 127.0.0.1:8080:8080 -p 127.0.0.1:8888:8888 \
		stscript-cc21-artifact

This command gives you a terminal inside the container. To run the STScript toolchain (e.g. to show the help text):

stscript@stscript:~$ codegen --help

For example, consider the following command:

$ codegen ~/protocols/TravelAgency.scr TravelAgency A \
	browser -s S -o ~/case-studies/TravelAgency/client/src

This command:

  1. generates APIs for role A of the TravelAgency protocol specified in ~/protocols/TravelAgency.scr;

  2. implements role A as a browser endpoint, assuming role S to be the server; and

  3. outputs the generated APIs under ~/case-studies/TravelAgency/client/src.
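
To give a flavour of the communication safety that the generated APIs provide, the following self-contained TypeScript sketch illustrates the underlying typestate idea: each protocol state is a separate type that exposes only the messages permitted in that state, so the TypeScript Compiler rejects out-of-order communication. This is an illustration only, not the code that codegen emits, and all names in it are hypothetical.

// Sketch of the typestate idea behind session-typed APIs.
// NOT the output of codegen; all names here are hypothetical.

class Done {
  readonly kind = "Done";
}

class AwaitQuote {
  // Receiving the quote advances the session to its final state.
  receiveQuote(handler: (quote: number) => void): Done {
    handler(42); // placeholder for a message read from the WebSocket
    return new Done();
  }
}

class SendQuery {
  // Sending a query yields the next protocol state.
  sendQuery(destination: string): AwaitQuote {
    console.log(`query: ${destination}`); // placeholder for a WebSocket send
    return new AwaitQuote();
  }
}

// Well-typed: send a query, then receive the quote.
const done: Done = new SendQuery()
  .sendQuery("Edinburgh")
  .receiveQuote((quote) => console.log(`quote: ${quote}`));

// Ill-typed: receiving before sending is rejected by the compiler.
// new SendQuery().receiveQuote((quote) => {});  // compile-time error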


1.3 Artifact Layout

  • scribble-java contains the Scribble toolchain, a dependency of our toolchain used for handling multiparty protocol descriptions.
  • codegen contains the source code of our code generator, written in Python, which generates TypeScript code for implementing the provided multiparty protocol.
  • protocols contains various Scribble protocol descriptions, including those used in the case studies.
  • case-studies contains 3 case studies of implementing interactive web applications with our toolchain, namely Noughts and Crosses, Travel Agency, and Battleships.
  • perf-benchmarks contains the code to generate performance benchmarks, including a Jupyter notebook to visualise the benchmarks collected from an experiment run.
  • scripts contains convenience scripts for running the toolchain and building the case studies.
  • web-sandbox contains configuration files for web development, e.g. TypeScript configurations and NPM package.json files.


2 Experiment Workflow

In this section, we explain the workflow for carrying out the experiments to verify the claims made in our paper.


2.1 End-to-End Tests

To run the end-to-end tests:

# Run from any directory
$ run_tests

The end-to-end tests verify that:

  • STScript correctly parses the Scribble protocol specification files;
  • STScript correctly generates TypeScript APIs; and
  • the generated APIs type-check successfully under the TypeScript Compiler.

The protocol specification files, describing the multiparty communication, are located in ~/codegen/tests/system/examples. The generated APIs are saved under ~/web-sandbox (a sandbox environment set up for the TypeScript Compiler) and are deleted when the tests finish.

Verify that all tests pass. You should see the following output (the test execution time may vary):

-------------------------------------------------------
Ran 14 tests in 171.137s
OK

Passing the end-to-end tests means that our STScript toolchain correctly generates type-correct TypeScript code.


2.2 Case Studies

We include three case studies of realistic web applications, namely Noughts and Crosses, Travel Agency, and Battleships, implemented using the generated APIs. They demonstrate the expressiveness of the generated APIs and their compatibility with modern web programming practices.

2.2.1 Noughts and Crosses

This is the classic turn-based 2-player game as introduced in §5. To generate the APIs for both players and the game server:

# Run from any directory
$ build_noughts-and-crosses

To run the case study:

$ cd ~/case-studies/NoughtsAndCrosses
$ npm start

Visit http://localhost:8080 in two web browser windows side by side, one for each player. Play the game; you may refer to https://youtu.be/SBANcdwpYPw for an example game execution as a starting point.

You may also verify the following:

  1. Open 4 browser windows to play 2 games simultaneously. Observe that the state of each game board is consistent with its own game, i.e. moves do not get propagated to the wrong game.

  2. Open 2 browser windows to play a game, and close one of them mid-game. Observe that the remaining player is notified that their opponent has forfeited the match.

Additional Notes:

  • Refresh both web browsers to start a new game.
  • Stop the web application by pressing Ctrl+C on the terminal.

2.2.2 Travel Agency

This is the running example of our paper, as introduced in §1. To generate the APIs for both travellers and the agency:

# Run from any directory
$ build_travel-agency

To run the case study:

$ cd ~/case-studies/TravelAgency
$ npm start

Visit http://localhost:8080 in two web browser windows side by side, one for each traveller. Execute the Travel Agency service; you may refer to https://youtu.be/mZzIBYP_Xac for an example execution as a starting point.

  1. Log in as Friend and Customer on separate windows.

  2. As Friend, suggest 'Tokyo'. As Customer, query for 'Tokyo'. Expect to see that there is no availability.

  3. As Friend, suggest 'Edinburgh'. As Customer, query for 'Edinburgh'. Expect to see that there is availability, then ask Friend. As Friend, enter a valid numeric split and press OK. As Customer, enter any string for your name and any numeric value for credit card and press OK. Expect to see that both roles show success messages.

  4. Refresh both web browsers and log in as Friend and Customer on separate windows again. As Friend, suggest 'Edinburgh' again. As Customer, query for 'Edinburgh'. Expect to see that there is no availability, as the last seat has been taken.

Stop the web application by pressing Ctrl+C on the terminal.

2.2.3 Battleships

This is a turn-based 2-player board game with more complex application logic than Noughts and Crosses. To generate the APIs for both players and the game server:

# Run from any directory
$ build_battleships

To run the case study:

$ cd ~/case-studies/Battleships
$ npm start

Visit http://localhost:8080 in two web browser windows side by side, one for each player. Play the game; you may refer to https://youtu.be/cGrKIZHgAtE for an example game execution as a starting point.

Additional Notes:

  • Refresh both web browsers to start a new game.
  • Stop the web application by pressing Ctrl+C on the terminal.

2.3 Performance Benchmarks

We include a script to run the performance benchmarks introduced in Appendix C.1. By default, the script executes the same experiment configurations as the paper: it parameterises the Ping Pong protocol, with and without additional UI requirements, over 100 and 1000 messages, and runs each experiment 20 times. Refer to §3.2 for how to customise these parameters.
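
As a rough illustration of what each experiment measures, the following self-contained TypeScript sketch performs a Ping Pong round-trip loop against a WebSocket echo endpoint and reports the mean round-trip time. This is not the artifact's benchmark code; the URL, payload, and echo behaviour are assumptions for illustration.

// Sketch of a round-trip measurement loop (browser-side).
// NOT the artifact's benchmark code; URL and payload are assumptions.
function measurePingPong(url: string, numMsgs: number): Promise<number> {
  return new Promise((resolve, reject) => {
    const ws = new WebSocket(url);
    let remaining = numMsgs;
    let start = 0;
    ws.onopen = () => {
      start = performance.now();
      ws.send("PING");
    };
    ws.onmessage = () => {
      remaining -= 1;
      if (remaining === 0) {
        ws.close();
        resolve((performance.now() - start) / numMsgs); // mean round-trip time in ms
      } else {
        ws.send("PING");
      }
    };
    ws.onerror = () => reject(new Error("WebSocket error"));
  });
}

// e.g. measurePingPong("ws://localhost:5000", 1000).then(console.log);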

To run the performance benchmarks:

$ cd ~/perf-benchmarks
$ ./run_benchmark.sh

Note: If the terminal log gets stuck at Loaded client page, open a web browser and access http://localhost:5000.

Terminology Alignment

Note the following differences in terminology between the artifact and the paper:

  • The simple_pingpong example in the artifact refers to the Ping Pong protocol without UI requirements in the paper.
  • The complex_pingpong example in the artifact refers to the Ping Pong protocol with UI requirements in the paper.

To visualise the performance benchmarks, run:

$ cd ~/perf-benchmarks
$ jupyter notebook --ip=0.0.0.0
/* ...snip... */
	To access the notebook, open this file in a browser:
		/* ...snip... */
	Or copy and paste one of these URLs:
	   http://stscript:8888/?token=<token>
	or http://127.0.0.1:8888/?token=<token>

Use a web browser to open the URL in the terminal output beginning with http://127.0.0.1:8888. Open the STScript Benchmark Visualisation.ipynb notebook.

Click on Kernel -> Restart & Run All from the top menu bar.

Data Alignment

Tables 1 and 2 from the paper can be found at the bottom of the notebook.

Observations

Verify the following claims made in the paper against those tables.

  • Simple Ping Pong ("w/o req"):

    • The time taken by node is less than the time taken by react, supporting the claim that "the round trip time is dominated by the browser-side message processing time".

    • The delta (of mpst relative to bare) for the React endpoints is greater than the delta for the Node endpoints, supporting the claim that "mpst introduces overhead dominated by the React.js session runtime".

  • Complex Ping Pong ("w/ req"):

    • Compare the message processing time between Simple Ping Pong and Complex Ping Pong. The difference is greater for bare implementations than for mpst implementations, supporting the claim that "the UI requirements require bare to perform additional state updates and rendering, reducing the overhead relative to mpst".
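
For instance, with illustrative numbers only: if a bare endpoint processes a message in 10 ms and the corresponding mpst endpoint takes 12 ms, the delta of mpst relative to bare is 2 ms, i.e. a 20% overhead.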

Stop the notebook server by pressing Ctrl+C on the terminal, and confirm the shutdown command by entering y.



3 Experiment Customisation

In this section, we show how to customise the experiment workflow to implement your own use case.


3.1 Case Studies

We provide a step-by-step guide on implementing your own web applications using STScript in the wiki.

We use the Adder protocol as an example, but you are free to use your own. Example protocols (including Adder) can be found under ~/protocols.
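
For a flavour of the application logic involved, the following self-contained TypeScript sketch models an Adder-style exchange as a discriminated union of messages with an exhaustive handler. The authoritative protocol definition lives in ~/protocols/Adder.scr and the generated APIs, so treat all labels and payloads here as assumptions.

// Illustrative message types for an Adder-style protocol.
// The authoritative definition lives in ~/protocols/Adder.scr;
// all labels and payloads here are assumptions.
type ClientMsg =
  | { label: "ADD"; payload: [number, number] }
  | { label: "QUIT" };

type ServerMsg = { label: "RES"; payload: number };

// Exhaustive matching on the label means the TypeScript Compiler
// flags any protocol branch left unhandled.
function handle(msg: ClientMsg): ServerMsg | undefined {
  switch (msg.label) {
    case "ADD": {
      const [x, y] = msg.payload;
      return { label: "RES", payload: x + y };
    }
    case "QUIT":
      return undefined; // session ends
  }
}

console.log(handle({ label: "ADD", payload: [1, 2] })); // { label: "RES", payload: 3 }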


3.2 Performance Benchmarks

You can customise the number of messages (exchanged during the Ping Pong protocol) and the number of runs for each experiment. These parameters are represented in the run_benchmark.sh script by the -m and -r flags respectively.

For example, to set up two configurations (Ping Pong with 100 round trips and with 1000 round trips) and run each configuration 100 times:

$ cd ~/perf-benchmarks
$ ./run_benchmark.sh -m 100 1000 -r 100

Note: running ./run_benchmark.sh will clear any existing logs.

Refer to §2.3 for instructions on visualising the logs from the performance benchmarks.

Note: If you change the message configuration (i.e. the -m flag), update the NUM_MSGS tuple located in the first cell of the notebook as shown below:

# Update these variables if you wish to
# visualise other benchmarks.
VARIANTS = ('bare', 'mpst')
NUM_MSGS = (100, 1000)


Licence

This work is licensed under the Apache 2.0 Licence.
