
[Python] Added Tensorflow Model Handler #25368

Merged Feb 15, 2023 · 53 commits (changes shown from 49 commits)

Commits
758de33
go lints
riteshghorse Dec 7, 2022
7368f4c
Merge branch 'master' of https://github.com/apache/beam
riteshghorse Dec 12, 2022
a2c36ae
Merge branch 'master' of github.com:riteshghorse/beam
riteshghorse Dec 14, 2022
f397a04
Merge branch 'apache:master' into master
riteshghorse Dec 14, 2022
531867a
Merge branch 'master' of github.com:riteshghorse/beam
riteshghorse Feb 7, 2023
cb284ec
added tf model handler and tests
riteshghorse Feb 7, 2023
8193283
lint and formatting changes
riteshghorse Feb 7, 2023
d1eb67c
correct lints
riteshghorse Feb 7, 2023
5fb5cbb
more lints and formats
riteshghorse Feb 7, 2023
e1ec168
auto formatted with yapf
riteshghorse Feb 7, 2023
e7b5cf0
rm spare lines
riteshghorse Feb 7, 2023
70edea4
add readme file
riteshghorse Feb 7, 2023
3ed3160
test requirement file
riteshghorse Feb 8, 2023
800cc3a
add test to gradle
riteshghorse Feb 8, 2023
1bc4adf
add test tasks for tf
riteshghorse Feb 8, 2023
70b5a2b
unit test
riteshghorse Feb 8, 2023
f62c366
lints
riteshghorse Feb 8, 2023
eef7a25
updated inferenceFn type
riteshghorse Feb 8, 2023
1169246
add tox info for py38
riteshghorse Feb 8, 2023
520e192
pylint
riteshghorse Feb 8, 2023
4c43cc1
lints
riteshghorse Feb 8, 2023
8017a4d
using tfhub
riteshghorse Feb 10, 2023
1d98cdb
added tf model handler and tests
riteshghorse Feb 7, 2023
5b56a2f
lint and formatting changes
riteshghorse Feb 7, 2023
3ada016
correct lints
riteshghorse Feb 7, 2023
e8cee7b
more lints and formats
riteshghorse Feb 7, 2023
7a2c1a1
auto formatted with yapf
riteshghorse Feb 7, 2023
ee905ee
rm spare lines
riteshghorse Feb 7, 2023
b54436f
merge master
riteshghorse Feb 10, 2023
dd7c49d
test requirement file
riteshghorse Feb 8, 2023
86d7329
add test to gradle
riteshghorse Feb 8, 2023
8ca2a1d
add test tasks for tf
riteshghorse Feb 8, 2023
613068f
unit test
riteshghorse Feb 8, 2023
0fd2b30
lints
riteshghorse Feb 8, 2023
1e80e70
updated inferenceFn type
riteshghorse Feb 8, 2023
38210fc
add tox info for py38
riteshghorse Feb 8, 2023
521bd78
pylint
riteshghorse Feb 8, 2023
029cc95
lints
riteshghorse Feb 8, 2023
efec494
using tfhub
riteshghorse Feb 10, 2023
40568d4
tfhub example
riteshghorse Feb 13, 2023
4fe8a1d
update doc
riteshghorse Feb 13, 2023
ccf0422
Merge branch 'master' of https://github.com/apache/beam into tf-model…
riteshghorse Feb 13, 2023
df61e8c
Merge branch 'master' into tf-model-handler
riteshghorse Feb 13, 2023
0958576
merge master
riteshghorse Feb 13, 2023
368d87d
sort imports
riteshghorse Feb 13, 2023
a557fad
resolve pydoc,precommit
riteshghorse Feb 14, 2023
1b21874
resolve conflict
riteshghorse Feb 14, 2023
46fbde9
fix import
riteshghorse Feb 14, 2023
34e4505
fix lint
riteshghorse Feb 14, 2023
0fbb3d9
address comments
riteshghorse Feb 14, 2023
d298e42
fix optional inference args
riteshghorse Feb 15, 2023
2556534
change to ml bucket
riteshghorse Feb 15, 2023
627fdd9
fix doc
riteshghorse Feb 15, 2023
107 changes: 106 additions & 1 deletion sdks/python/apache_beam/examples/inference/README.md
@@ -32,6 +32,15 @@ because the `apache_beam.examples.inference` module was added in that release.
pip install apache-beam==2.40.0
```

### TensorFlow dependencies

The following installation requirement is for the TensorFlow model handler examples.

The RunInference API supports the TensorFlow framework. To use TensorFlow locally, first install `tensorflow`.
```
pip install tensorflow==2.11.0
```

### PyTorch dependencies

The following installation requirements are for the files used in these examples.
@@ -417,4 +426,100 @@ python -m apache_beam.examples.inference.onnx_sentiment_classification.py \
This writes the output to the output file path with contents like:
```
A comedy-drama of nearly epic proportions rooted in a sincere performance by the title character undergoing midlife crisis .;1
```

---
## MNIST digit classification with TensorFlow
[`tensorflow_mnist_classification.py`](./tensorflow_mnist_classification.py) contains an implementation for a RunInference pipeline that performs image classification on handwritten digits from the [MNIST](https://en.wikipedia.org/wiki/MNIST_database) database.

The pipeline reads rows of pixels corresponding to a digit, performs basic preprocessing (reshaping the input to 28x28), passes the pixels to the trained TensorFlow model with RunInference, and then writes the predictions to a text file.

### Dataset and model for digit classification

To use this transform, you need a dataset and model for digit classification.

1. Create a file named [`INPUT.csv`](gs://apache-beam-ml/testing/inputs/it_mnist_data.csv) that contains labels and pixels to feed into the model. Each row should have comma-separated elements. The first element is the label; all other elements are pixel values. The CSV should not have column headers. The content of the file should be similar to the following example:
```
1,0,0,0...
0,0,0,0...
1,0,0,0...
4,0,0,0...
...
```
2. Save the trained TensorFlow model to a directory named `MODEL_DIR`.
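
The CSV format expected in step 1 can be sketched as follows. The helper name `write_input_csv` is ours, not part of the example, and a real run would use actual MNIST pixel values rather than the all-zero placeholder row:

```python
# Hypothetical helper (not part of the Beam example): write MNIST-style rows
# as "label,pixel,pixel,..." with no header, which is the format the
# pipeline's preprocessing step expects.
def write_input_csv(path, samples):
  with open(path, "w") as f:
    for label, pixels in samples:
      f.write(",".join(str(v) for v in [label, *pixels]) + "\n")

# Tiny fake sample: label 1 followed by 784 zero-valued pixels (28x28).
write_input_csv("INPUT.csv", [(1, [0] * 784)])
```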


### Running `tensorflow_mnist_classification.py`

To run the MNIST classification pipeline locally, use the following command:
```sh
python -m apache_beam.examples.inference.tensorflow_mnist_classification \
--input INPUT \
--output OUTPUT \
--model_path MODEL_DIR
```
For example:
```sh
python -m apache_beam.examples.inference.tensorflow_mnist_classification \
--input INPUT.csv \
--output predictions.txt \
--model_path MODEL_DIR
```

This writes the output to `predictions.txt` with contents like:
```
1,1
4,4
0,0
7,7
3,3
5,5
...
```
Each line contains two comma-separated values: the actual label of the digit followed by the predicted label.
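
Because each output line pairs the true and predicted label, scoring a run is a one-liner away. The `accuracy` helper below is a hypothetical sketch, not part of the example:

```python
def accuracy(lines):
  # Each line is "true_label,predicted_label"; return the fraction that match.
  pairs = [line.strip().split(",") for line in lines if line.strip()]
  return sum(1 for true, pred in pairs if true == pred) / len(pairs)

# With open("predictions.txt") this would score a real run; here, a toy list.
print(accuracy(["1,1", "4,4", "0,7"]))  # prints 0.6666666666666666
```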

---
## Image segmentation with Tensorflow and TensorflowHub

[`tensorflow_image_segmentation.py`](./tensorflow_image_segmentation.py) contains an implementation for a RunInference pipeline that performs image segmentation using the [`mobilenet_v2`](https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4) architecture from TensorFlow Hub.

The pipeline reads images, performs basic preprocessing, passes the images to the TensorFlow implementation of RunInference, and then writes the predictions to a text file.

### Dataset and model for image segmentation

To use this transform, you need a dataset and model for image segmentation.

1. Create a directory named `IMAGE_DIR`. Create or download images and put them in this directory. We will use the [example image](https://storage.googleapis.com/download.tensorflow.org/example_images/) from TensorFlow.
2. Create a file named `IMAGE_FILE_NAMES.txt` that contains the names of the images in `IMAGE_DIR` that you want to use to run image segmentation. For example:
```
grace_hopper.jpg
```
3. Choose a TensorFlow `MODEL_PATH`. We will use the [mobilenet](https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4) model.
4. Note the path to the `OUTPUT` file. The pipeline writes its predictions to this file.

### Running `tensorflow_image_segmentation.py`

To run the image segmentation pipeline locally, use the following command:
```sh
python -m apache_beam.examples.inference.tensorflow_image_segmentation \
--input IMAGE_FILE_NAMES \
--image_dir IMAGES_DIR \
--output OUTPUT \
--model_path MODEL_PATH
```

For example, if you've followed the naming conventions recommended above:
```sh
python -m apache_beam.examples.inference.tensorflow_image_segmentation \
--input IMAGE_FILE_NAMES.txt \
--image_dir "https://storage.googleapis.com/download.tensorflow.org/example_images/" \
--output predictions.txt \
--model_path "https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4"
```
This writes the output to `predictions.txt` with contents like:
```
background
...
```
Each line contains a predicted label.
@@ -0,0 +1,129 @@
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import argparse
import logging
from typing import Iterable
from typing import Iterator

import numpy

import apache_beam as beam
import tensorflow as tf
from apache_beam.ml.inference.base import PredictionResult
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
from apache_beam.runners.runner import PipelineResult


class PostProcessor(beam.DoFn):
  """Process the PredictionResult to get the predicted label.
  Returns predicted label.
  """
  def process(self, element: PredictionResult) -> Iterable[str]:
    print("prediction result----> %s" % element)
    predicted_class = numpy.argmax(element.inference[0], axis=-1)
    labels_path = tf.keras.utils.get_file(
        'ImageNetLabels.txt',
        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long
    )
    imagenet_labels = numpy.array(open(labels_path).read().splitlines())
    predicted_class_name = imagenet_labels[predicted_class]
    # Yield the label as a single string; returning it directly would make
    # Beam iterate over the string character by character.
    yield predicted_class_name.title()


def parse_known_args(argv):
  """Parses args for the workflow."""
  parser = argparse.ArgumentParser()
  parser.add_argument(
      '--input',
      dest='input',
      required=True,
      help='Path to the text file containing image names.')
  parser.add_argument(
      '--output',
      dest='output',
      required=True,
      help='Path to save output predictions.')
  parser.add_argument(
      '--model_path',
      dest='model_path',
      required=True,
      help='Path to load the Tensorflow model for Inference.')
  parser.add_argument(
      '--image_dir', help='Path to the directory where images are stored.')
  return parser.parse_known_args(argv)


def filter_empty_lines(text: str) -> Iterator[str]:
  if len(text.strip()) > 0:
    yield text


def read_image(image_name, image_dir):
  # PIL is imported inside the function so that remote workers pick it up.
  from PIL import Image
  img = tf.keras.utils.get_file(image_name, image_dir + image_name)
  img = Image.open(img).resize((224, 224))
  img = numpy.array(img) / 255.0
  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)
  return img_tensor


def run(
    argv=None, save_main_session=True, test_pipeline=None) -> PipelineResult:
  """
  Args:
    argv: Command line arguments defined for this example.
    save_main_session: Used for internal testing.
    test_pipeline: Used for internal testing.
  """
  known_args, pipeline_args = parse_known_args(argv)
  pipeline_options = PipelineOptions(pipeline_args)
  pipeline_options.view_as(SetupOptions).save_main_session = save_main_session

  # In this example we will use the TensorflowHub model URL.
  model_loader = TFModelHandlerTensor(model_uri=known_args.model_path)

  pipeline = test_pipeline
  if not test_pipeline:
    pipeline = beam.Pipeline(options=pipeline_options)

  image = (
      pipeline
      | 'ReadImageNames' >> beam.io.ReadFromText(known_args.input)
      | 'FilterEmptyLines' >> beam.ParDo(filter_empty_lines)
      | "PreProcessInputs" >>
      beam.Map(lambda image_name: read_image(image_name, known_args.image_dir)))

  predictions = (
      image
      | "RunInference" >> RunInference(model_loader)
      | "PostProcessOutputs" >> beam.ParDo(PostProcessor()))

  _ = predictions | "WriteOutput" >> beam.io.WriteToText(
      known_args.output, shard_name_template='', append_trailing_newlines=False)

  result = pipeline.run()
  result.wait_until_finish()
  return result


if __name__ == '__main__':
  logging.getLogger().setLevel(logging.INFO)
  run()
@@ -0,0 +1,118 @@
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import argparse
import logging
from typing import Iterable
from typing import Tuple

import numpy

import apache_beam as beam
from apache_beam.ml.inference.base import KeyedModelHandler
from apache_beam.ml.inference.base import PredictionResult
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.tensorflow_inference import ModelType
from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerNumpy
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
from apache_beam.runners.runner import PipelineResult


def process_input(row: str) -> Tuple[int, numpy.ndarray]:
  data = row.split(',')
  label, pixels = int(data[0]), data[1:]
  pixels = [int(pixel) for pixel in pixels]
  # the trained model accepts the input of shape 28x28
  pixels = numpy.array(pixels).reshape((28, 28, 1))
  return label, pixels


class PostProcessor(beam.DoFn):
  """Process the PredictionResult to get the predicted label.
  Returns a comma separated string with true label and predicted label.
  """
  def process(self, element: Tuple[int, PredictionResult]) -> Iterable[str]:
    label, prediction_result = element
    prediction = numpy.argmax(prediction_result.inference, axis=0)
    yield '{},{}'.format(label, prediction)


def parse_known_args(argv):
  """Parses args for the workflow."""
  parser = argparse.ArgumentParser()
  parser.add_argument(
      '--input',
      dest='input',
      required=True,
      help='text file with comma separated int values.')
  parser.add_argument(
      '--output',
      dest='output',
      required=True,
      help='Path to save output predictions.')
  parser.add_argument(
      '--model_path',
      dest='model_path',
      required=True,
      help='Path to load the Tensorflow model for Inference.')
  return parser.parse_known_args(argv)


def run(
    argv=None, save_main_session=True, test_pipeline=None) -> PipelineResult:
  """
  Args:
    argv: Command line arguments defined for this example.
    save_main_session: Used for internal testing.
    test_pipeline: Used for internal testing.
  """
  known_args, pipeline_args = parse_known_args(argv)
  pipeline_options = PipelineOptions(pipeline_args)
  pipeline_options.view_as(SetupOptions).save_main_session = save_main_session

  # In this example we pass keyed inputs to RunInference transform.
  # Therefore, we use KeyedModelHandler wrapper over TFModelHandlerNumpy.
  model_loader = KeyedModelHandler(
      TFModelHandlerNumpy(
          model_uri=known_args.model_path, model_type=ModelType.SAVED_MODEL))

  pipeline = test_pipeline
  if not test_pipeline:
    pipeline = beam.Pipeline(options=pipeline_options)

  label_pixel_tuple = (
      pipeline
      | "ReadFromInput" >> beam.io.ReadFromText(known_args.input)
      | "PreProcessInputs" >> beam.Map(process_input))

  predictions = (
      label_pixel_tuple
      | "RunInference" >> RunInference(model_loader)
      | "PostProcessOutputs" >> beam.ParDo(PostProcessor()))

  _ = predictions | "WriteOutput" >> beam.io.WriteToText(
      known_args.output, shard_name_template='', append_trailing_newlines=True)

  result = pipeline.run()
  result.wait_until_finish()
  return result


if __name__ == '__main__':
  logging.getLogger().setLevel(logging.INFO)
  run()