Dev PR to update Docker environment to Ubuntu 22, Python 3.10 #840

Closed
wants to merge 21 commits into from

Commits (21):
4443eeb  Update base image to Ubuntu 22.04 (fpjentzsch, May 25, 2023)
a2a0ffe  Update pre-commit config (fpjentzsch, May 25, 2023)
161cc20  Update Brevitas, apply workarounds to fix quicktest (fpjentzsch, May 26, 2023)
a78cdb2  [CustomOp] Update naming of interfaces for code generation (auphelia, May 29, 2023)
c39cf90  Merge branch 'dev' into feature/2023_1 (auphelia, May 29, 2023)
19097bd  [TlastMarker] Update interface naming for code gen (auphelia, May 29, 2023)
3f43c8f  Merge branch 'feature/2023_1' into feature/update_container (auphelia, May 30, 2023)
aae59b1  [Zynq build] update PS IP version (fpjentzsch, Jun 1, 2023)
1679e01  Minor fixes and workarounds for version updates (fpjentzsch, Jun 1, 2023)
07afcbb  Merge pull request #824 from fpjentzsch/feature/update_container (fpjentzsch, Jun 1, 2023)
3d0b918  [CI] Update tool version in Jenkinsfile (auphelia, Jun 7, 2023)
1c36cdb  [Zynq build] Retrieve auxiliary IP versions from catalog (fpjentzsch, Jun 8, 2023)
a425893  Merge pull request #833 from fpjentzsch/feature/bd_ip_versions (auphelia, Jun 8, 2023)
8cb04e6  [Tests] Add node naming to cppsim tests (auphelia, Jun 9, 2023)
6ee4549  Merge dev into feature/2023_1 (auphelia, Jun 28, 2023)
21f191e  [gha] Update python version for pre-commit gha (auphelia, Jun 28, 2023)
d3465bc  [linting] Pre-commit with python 3.10 on all files (auphelia, Jun 28, 2023)
3497cfe  [deps/ci] Downgrade tool version for ci and update qonnx commit (auphelia, Jun 28, 2023)
c1b86d8  [Deps/tests] Update brevitas and delete finn_onnx export in brevitas … (auphelia, Jun 28, 2023)
14192c6  [Tests] Remove finn onnx export from end2end bnn tests (auphelia, Jul 4, 2023)
85f37d4  [Tests/Deps] Update qonnx commit and update tests (auphelia, Jul 5, 2023)

Changes from all commits:

2 changes: 1 addition & 1 deletion .github/workflows/pre-commit.yml
@@ -18,7 +18,7 @@ jobs:
       - name: Setup Python
         uses: actions/setup-python@v4
         with:
-          python-version: '3.8'
+          python-version: '3.10'

       - name: Run Lint
         uses: pre-commit/action@v3.0.0

11 changes: 6 additions & 5 deletions .pre-commit-config.yaml
@@ -29,11 +29,11 @@
 exclude: '^docs/conf.py'

 default_language_version:
-  python: python3.8
+  python: python3.10

 repos:
 - repo: https://github.com/pre-commit/pre-commit-hooks
-  rev: v4.2.0
+  rev: v4.4.0
   hooks:
   - id: trailing-whitespace
     exclude: '\.dat$'
@@ -56,15 +56,16 @@ repos:
   - id: isort

 - repo: https://github.com/psf/black
-  rev: 22.3.0
+  rev: 23.3.0
   hooks:
   - id: black
     language_version: python3
+    args: [--line-length=100]

 - repo: https://github.com/PyCQA/flake8
-  rev: 3.9.2
+  rev: 6.0.0
   hooks:
   - id: flake8
     # black-compatible flake-8 config
-    args: ['--max-line-length=88', # black default
+    args: ['--max-line-length=100', # black default
           '--extend-ignore=E203'] # E203 is not PEP8 compliant

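Both the GitHub Action and the local hooks now assume a Python 3.10 interpreter and a 100-character line limit. A minimal local sanity check before invoking the hooks (a sketch, not part of this PR) could be:

```python
import sys

# default_language_version is pinned to python3.10 in the updated config,
# so verify the local interpreter before running `pre-commit run --all-files`.
assert sys.version_info >= (3, 10), "need Python >= 3.10, found %d.%d" % sys.version_info[:2]
```
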
31 changes: 19 additions & 12 deletions docker/Dockerfile.finn
@@ -26,10 +26,10 @@
 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

-FROM pytorch/pytorch:1.7.1-cuda11.0-cudnn8-runtime
+FROM ubuntu:jammy-20230126
 LABEL maintainer="Yaman Umuroglu <yamanu@xilinx.com>"

-ARG XRT_DEB_VERSION="xrt_202210.2.13.466_18.04-amd64-xrt"
+ARG XRT_DEB_VERSION="xrt_202220.2.14.354_22.04-amd64-xrt"

 WORKDIR /workspace

@@ -57,12 +57,15 @@ RUN apt-get update && \
     unzip \
     zip \
     locales \
-    lsb-core
+    lsb-core \
+    python3 \
+    python-is-python3 \
+    python3-pip
 RUN echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config
 RUN locale-gen "en_US.UTF-8"

 # install Verilator from source to get the right version
-RUN apt-get install -y git perl python3 make autoconf g++ flex bison ccache libgoogle-perftools-dev numactl perl-doc libfl2 libfl-dev zlibc zlib1g zlib1g-dev
+RUN apt-get install -y git perl make autoconf g++ flex bison ccache libgoogle-perftools-dev numactl perl-doc libfl2 libfl-dev zlib1g zlib1g-dev
 RUN git clone https://github.com/verilator/verilator
 RUN cd verilator && \
     git checkout v4.224 && \
@@ -81,19 +84,23 @@ RUN rm /tmp/$XRT_DEB_VERSION.deb
 COPY requirements.txt .
 RUN pip install -r requirements.txt
 RUN rm requirements.txt
+
+# install PyTorch
+RUN pip install torch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116

 # extra Python package dependencies (for testing and interaction)
-RUN pip install pygments==2.4.1
-RUN pip install ipykernel==5.5.5
+RUN pip install pygments==2.14.0
+RUN pip install ipykernel==6.21.2
 RUN pip install jupyter==1.0.0 --ignore-installed
 RUN pip install markupsafe==2.0.1
-RUN pip install matplotlib==3.3.1 --ignore-installed
+RUN pip install matplotlib==3.7.0 --ignore-installed
 RUN pip install pytest-dependency==0.5.1
-RUN pip install pytest-xdist[setproctitle]==2.4.0
-RUN pip install pytest-parallel==0.1.0
+RUN pip install pytest-xdist[setproctitle]==3.2.0
+RUN pip install pytest-parallel==0.1.1
 RUN pip install "netron>=5.0.0"
-RUN pip install pandas==1.1.5
-RUN pip install scikit-learn==0.24.1
-RUN pip install tqdm==4.31.1
+RUN pip install pandas==1.5.3
+RUN pip install scikit-learn==1.2.1
+RUN pip install tqdm==4.64.1
 RUN pip install -e git+https://github.com/fbcotter/dataset_loading.git@0.0.4#egg=dataset_loading

 # extra dependencies from other FINN deps

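Since the base image is now plain Ubuntu rather than the pytorch/pytorch runtime image, PyTorch arrives via pip with CUDA 11.6 wheels. A quick in-container spot check of the pins above (a sketch; the cu116 wheels report versions like "1.13.1+cu116", hence the prefix match):

```python
import torch
import torchvision

# Versions pinned explicitly in the Dockerfile now that the base image
# no longer ships PyTorch.
assert torch.__version__.startswith("1.13.1"), torch.__version__
assert torchvision.__version__.startswith("0.14.1"), torchvision.__version__
# GPU visibility depends on the host driver and the docker run flags.
print("CUDA available:", torch.cuda.is_available())
```
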
5 changes: 3 additions & 2 deletions docker/finn_entrypoint.sh
@@ -54,8 +54,9 @@ recho () {
     echo -e "${RED}ERROR: $1${NC}"
 }

-# qonnx
-pip install --user -e ${FINN_ROOT}/deps/qonnx
+# qonnx (using workaround for https://github.com/pypa/pip/issues/7953)
+# to be fixed in future Ubuntu versions (https://bugs.launchpad.net/ubuntu/+source/setuptools/+bug/1994016)
+pip install --no-build-isolation --no-warn-script-location -e ${FINN_ROOT}/deps/qonnx
 # finn-experimental
 pip install --user -e ${FINN_ROOT}/deps/finn-experimental
 # brevitas

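The --no-build-isolation flag works around a setuptools issue in Ubuntu 22.04's packaging that breaks the editable qonnx install under pip's isolated builds (see the linked pip and Launchpad issues). If the same workaround ever needed to be applied from Python rather than the entrypoint script, a rough equivalent (a sketch; assumes FINN_ROOT is set as in the container) would be:

```python
import os
import subprocess
import sys

# Editable install without build isolation, mirroring the entrypoint's
# workaround for https://github.com/pypa/pip/issues/7953.
finn_root = os.environ["FINN_ROOT"]
subprocess.check_call(
    [
        sys.executable, "-m", "pip", "install",
        "--no-build-isolation", "--no-warn-script-location",
        "-e", os.path.join(finn_root, "deps", "qonnx"),
    ]
)
```
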
4 changes: 2 additions & 2 deletions fetch-repos.sh
@@ -27,9 +27,9 @@
 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

-QONNX_COMMIT="20a34289cf2297d2b2bbbe75d6ac152ece86e3b4"
+QONNX_COMMIT="0aec35a16948155e81c1640b71650206e733db3e"
 FINN_EXP_COMMIT="0aa7e1c44b20cf085b6fe42cff360f0a832afd2c"
-BREVITAS_COMMIT="c65f9c13dc124971f14739349531bbcda5c2a4aa"
+BREVITAS_COMMIT="9bb26bf2798de210a267d1e4aed4c20087e0e8a5"
 PYVERILATOR_COMMIT="766e457465f5c0dd315490d7b9cc5d74f9a76f4f"
 CNPY_COMMIT="4e8810b1a8637695171ed346ce68f6984e585ef4"
 HLSLIB_COMMIT="c17aa478ae574971d115afa9fa4d9c215857d1ac"

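Dependencies are managed as exact commit pins in fetch-repos.sh. To confirm that a previously fetched checkout matches the new pin (a sketch; the deps/qonnx location is an assumption about where fetch-repos.sh places the clone):

```python
import subprocess

QONNX_COMMIT = "0aec35a16948155e81c1640b71650206e733db3e"  # pin from this PR

# Compare the checked-out HEAD against the pinned commit.
head = subprocess.check_output(
    ["git", "-C", "deps/qonnx", "rev-parse", "HEAD"], text=True
).strip()
assert head == QONNX_COMMIT, f"qonnx at {head}, expected {QONNX_COMMIT}"
```
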
18 changes: 5 additions & 13 deletions notebooks/end2end_example/cybersecurity/dataloader_quantized.py
@@ -48,7 +48,6 @@ def __init__(
         onehot=False,
         train=True,
     ):
-
         self.dataframe = (
             pd.concat([pd.read_csv(file_path_train), pd.read_csv(file_path_test)])
             .reset_index()
@@ -77,9 +76,7 @@ def __getitem__(self, index):
         data_val = self.data[index][:-1]
         return data_val, target

-    def dec2bin(
-        self, column: pd.Series, number_of_bits: int, left_msb: bool = True
-    ) -> pd.Series:
+    def dec2bin(self, column: pd.Series, number_of_bits: int, left_msb: bool = True) -> pd.Series:
         """Convert a decimal pd.Series to binary pd.Series with numbers in their
         # base-2 equivalents.
         The output is a numpy nd array.
@@ -133,6 +130,7 @@ def integer_encoding(self, df):
     def quantize_df(self, df):
         """Quantized the input dataframe. The scaling is done by multiplying
         every column by the inverse of the minimum of that column"""
+
         # gets the smallest positive number of a vector
         def get_min_positive_number(vector):
             return vector[vector > 0].min()
@@ -178,24 +176,18 @@ def char_split(s):
             column_data = np.clip(
                 column_data, 0, 4294967295
             )  # clip due to overflow of uint32 of matlab code
-            column_data = self.round_like_matlab_series(
-                column_data
-            )  # round like matlab
+            column_data = self.round_like_matlab_series(column_data)  # round like matlab
             column_data = column_data.astype(np.uint32)  # cast like matlab

             if column == "rate":
                 column_data.update(pd.Series(dict_correct_rate_values))

             python_quantized_df[column] = (
-                self.dec2bin(column_data, maxbits, left_msb=False)
-                .reshape((-1, 1))
-                .flatten()
+                self.dec2bin(column_data, maxbits, left_msb=False).reshape((-1, 1)).flatten()
             )

         for column in python_quantized_df.columns:
-            python_quantized_df[column] = (
-                python_quantized_df[column].apply(char_split).values
-            )
+            python_quantized_df[column] = python_quantized_df[column].apply(char_split).values

         python_quantized_df_separated = pd.DataFrame(
             np.column_stack(python_quantized_df.values.T.tolist())

@@ -57,9 +57,7 @@ def make_unsw_nb15_test_batches(bsize, dataset_root):
     help='name of bitfile (i.e. "resizer.bit")',
     default="../bitfile/finn-accel.bit",
 )
-parser.add_argument(
-    "--dataset_root", help="dataset root dir for download/reuse", default="."
-)
+parser.add_argument("--dataset_root", help="dataset root dir for download/reuse", default=".")
 # parse arguments
 args = parser.parse_args()
 bsize = args.batchsize

14 changes: 6 additions & 8 deletions requirements.txt
@@ -3,19 +3,17 @@ clize==4.1.1
 dataclasses-json==0.5.7
 gspread==3.6.0
 ipython==8.12.2
-numpy==1.22.0
+numpy==1.24.1
 onnx==1.13.0
 onnxoptimizer
-onnxruntime==1.11.1
-pre-commit==2.9.2
+onnxruntime==1.15.0
+pre-commit==3.3.2
 protobuf==3.20.3
 psutil==5.9.4
-pyscaffold==3.2.1
-scipy==1.5.2
+pyscaffold==4.4
+scipy==1.10.1
 setupext-janitor>=1.1.2
 sigtools==2.0.3
-sphinx==5.0.2
-sphinx_rtd_theme==0.5.0
-toposort==1.5
+toposort==1.7.0
 vcdvcd==1.0.5
 wget==3.2

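A rebuilt environment can be spot-checked against the new pins to catch a stale image or cache (a sketch covering only a few of the updated packages):

```python
import numpy
import onnxruntime
import scipy

# Pins taken from the updated requirements.txt.
expected = {numpy: "1.24.1", onnxruntime: "1.15.0", scipy: "1.10.1"}
for module, version in expected.items():
    assert module.__version__ == version, (module.__name__, module.__version__)
```
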
5 changes: 4 additions & 1 deletion run-docker.sh
@@ -86,7 +86,7 @@ SCRIPTPATH=$(dirname "$SCRIPT")
 : ${ALVEO_BOARD="U250"}
 : ${ALVEO_TARGET_DIR="/tmp"}
 : ${PLATFORM_REPO_PATHS="/opt/xilinx/platforms"}
-: ${XRT_DEB_VERSION="xrt_202210.2.13.466_18.04-amd64-xrt"}
+: ${XRT_DEB_VERSION="xrt_202220.2.14.354_22.04-amd64-xrt"}
 : ${FINN_HOST_BUILD_DIR="/tmp/$DOCKER_INST_NAME"}
 : ${FINN_DOCKER_TAG="xilinx/finn:$(git describe --always --tags --dirty).$XRT_DEB_VERSION"}
 : ${FINN_DOCKER_PREBUILT="0"}
@@ -201,6 +201,9 @@ DOCKER_EXEC+="-e PYNQ_PASSWORD=$PYNQ_PASSWORD "
 DOCKER_EXEC+="-e PYNQ_TARGET_DIR=$PYNQ_TARGET_DIR "
 DOCKER_EXEC+="-e OHMYXILINX=$OHMYXILINX "
 DOCKER_EXEC+="-e NUM_DEFAULT_WORKERS=$NUM_DEFAULT_WORKERS "
+# Workaround for FlexLM issue, see:
+# https://community.flexera.com/t5/InstallAnywhere-Forum/Issues-when-running-Xilinx-tools-or-Other-vendor-tools-in-docker/m-p/245820#M10647
+DOCKER_EXEC+="-e LD_PRELOAD=/lib/x86_64-linux-gnu/libudev.so.1 "
 if [ "$FINN_DOCKER_RUN_AS_ROOT" = "0" ];then
     DOCKER_EXEC+="-v /etc/group:/etc/group:ro "
     DOCKER_EXEC+="-v /etc/passwd:/etc/passwd:ro "

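Preloading libudev works around FlexLM license-check hangs seen when running Xilinx tools inside containers (see the linked Flexera thread). From inside a running container, the setting can be confirmed with a one-off check (a sketch, not part of this PR):

```python
import os

# run-docker.sh now injects this environment variable for the FlexLM workaround.
expected = "/lib/x86_64-linux-gnu/libudev.so.1"
actual = os.environ.get("LD_PRELOAD")
assert actual == expected, f"LD_PRELOAD is {actual!r}, expected {expected!r}"
```
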
8 changes: 4 additions & 4 deletions setup.cfg
@@ -34,12 +34,12 @@
 name = finn
 description = A Framework for Fast, Scalable Quantized Neural Network Inference
 author = Yaman Umuroglu
-author-email = yamanu@xilinx.com
+author_email = yamanu@xilinx.com
 license = new-bsd
-long-description = file: README.md
-long-description-content-type = text/markdown
+long_description = file: README.md
+long_description_content_type = text/markdown
 url = https://xilinx.github.io/finn/
-project-urls =
+project_urls =
     Documentation = https://finn.readthedocs.io/
 # Change if running only on Windows, Mac or Linux (comma-separated)
 platforms = any

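The dashed metadata keys (author-email, long-description, project-urls) are legacy spellings that newer setuptools deprecates in favor of the underscore forms. A quick scan for leftovers (a sketch using only the standard library):

```python
from configparser import ConfigParser

# Flag any remaining dashed keys in [metadata]; after this PR there
# should be none.
cfg = ConfigParser()
cfg.read("setup.cfg")
dashed = [key for key in cfg["metadata"] if "-" in key]
assert not dashed, f"dashed metadata keys remain: {dashed}"
```
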
4 changes: 1 addition & 3 deletions src/finn/analysis/fpgadataflow/dataflow_performance.py
@@ -66,9 +66,7 @@ def dataflow_performance(model):
                 max_pred_latency = 0
             else:
                 # find max of any of predecessors
-                pred_latencies = map(
-                    lambda x: latency_at_node_output[x.name], predecessors
-                )
+                pred_latencies = map(lambda x: latency_at_node_output[x.name], predecessors)
                 max_pred_latency = max(pred_latencies)
             latency_at_node_output[node.name] = node_cycles + max_pred_latency
     critical_path_cycles = max(latency_at_node_output.values())

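For context, the reflowed expression is the heart of the pass: a node's output latency is its own cycle count plus the maximum latency among its predecessors, and the critical path is the maximum over all node outputs. A toy standalone version of that recurrence (a sketch with made-up node names and cycle counts):

```python
# Latencies already computed for upstream node outputs.
latency_at_node_output = {"conv0": 120, "pool0": 95}

node_cycles = 40  # cycle estimate for the current node
predecessors = ["conv0", "pool0"]

pred_latencies = map(lambda name: latency_at_node_output[name], predecessors)
max_pred_latency = max(pred_latencies)
latency_at_node_output["fc0"] = node_cycles + max_pred_latency  # 160

critical_path_cycles = max(latency_at_node_output.values())  # 160
```
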
4 changes: 2 additions & 2 deletions src/finn/analysis/fpgadataflow/post_synth_res.py
@@ -85,8 +85,8 @@ def get_instance_stats(inst_name):
         row = root.findall(".//*[@contents='%s']/.." % inst_name)
         if row != []:
             node_dict = {}
-            row = row[0].getchildren()
-            for (restype, ind) in restype_to_ind.items():
+            row = list(row[0])
+            for restype, ind in restype_to_ind.items():
                 node_dict[restype] = int(row[ind].attrib["contents"])
             return node_dict
         else:

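This hunk is a behavioral fix rather than formatting: Element.getchildren() was removed from xml.etree.ElementTree in Python 3.9, so it no longer exists on the Python 3.10 this PR moves to, and list(element) is the documented replacement. As a standalone sketch:

```python
import xml.etree.ElementTree as ET

row = ET.fromstring("<tablerow><tablecell contents='42'/><tablecell contents='7'/></tablerow>")

# On Python >= 3.9, row.getchildren() raises AttributeError; materialize
# the child elements with list() instead.
cells = list(row)
print([cell.attrib["contents"] for cell in cells])  # ['42', '7']
```
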
5 changes: 1 addition & 4 deletions src/finn/analysis/fpgadataflow/res_estimation.py
@@ -62,10 +62,7 @@ def res_estimation_complete(model):
         if is_fpgadataflow_node(node) is True:
             op_type = node.op_type
             inst = registry.getCustomOp(node)
-            if (
-                op_type == "MatrixVectorActivation"
-                or op_type == "VectorVectorActivation"
-            ):
+            if op_type == "MatrixVectorActivation" or op_type == "VectorVectorActivation":
                 orig_restype = inst.get_nodeattr("resType")
                 res_dict[node.name] = []
                 inst.set_nodeattr("resType", "dsp")

18 changes: 4 additions & 14 deletions src/finn/builder/build_dataflow.py
@@ -91,12 +91,8 @@ def resolve_build_steps(cfg: DataflowBuildConfig, partial: bool = True):
     return steps_as_fxns


-def resolve_step_filename(
-    step_name: str, cfg: DataflowBuildConfig, step_delta: int = 0
-):
-    step_names = list(
-        map(lambda x: x.__name__, resolve_build_steps(cfg, partial=False))
-    )
+def resolve_step_filename(step_name: str, cfg: DataflowBuildConfig, step_delta: int = 0):
+    step_names = list(map(lambda x: x.__name__, resolve_build_steps(cfg, partial=False)))
     assert step_name in step_names, "start_step %s not found" + step_name
     step_no = step_names.index(step_name) + step_delta
     assert step_no >= 0, "Invalid step+delta combination"
@@ -150,19 +146,13 @@ def build_dataflow_cfg(model_filename, cfg: DataflowBuildConfig):
     for transform_step in build_dataflow_steps:
         try:
             step_name = transform_step.__name__
-            print(
-                "Running step: %s [%d/%d]"
-                % (step_name, step_num, len(build_dataflow_steps))
-            )
+            print("Running step: %s [%d/%d]" % (step_name, step_num, len(build_dataflow_steps)))
             # redirect output to logfile
             if not cfg.verbose:
                 sys.stdout = stdout_logger
                 sys.stderr = stderr_logger
             # also log current step name to logfile
-            print(
-                "Running step: %s [%d/%d]"
-                % (step_name, step_num, len(build_dataflow_steps))
-            )
+            print("Running step: %s [%d/%d]" % (step_name, step_num, len(build_dataflow_steps)))
             # run the step
             step_start = time.time()
             model = transform_step(model, cfg)

14 changes: 4 additions & 10 deletions src/finn/builder/build_dataflow_config.py
@@ -267,9 +267,7 @@ class DataflowBuildConfig:

     #: When `auto_fifo_depths = True`, select which method will be used for
     #: setting the FIFO sizes.
-    auto_fifo_strategy: Optional[
-        AutoFIFOSizingMethod
-    ] = AutoFIFOSizingMethod.LARGEFIFO_RTLSIM
+    auto_fifo_strategy: Optional[AutoFIFOSizingMethod] = AutoFIFOSizingMethod.LARGEFIFO_RTLSIM

     #: Avoid using C++ rtlsim for auto FIFO sizing and rtlsim throughput test
     #: if set to True, always using Python instead
@@ -366,9 +364,7 @@ def _resolve_driver_platform(self):
         elif self.shell_flow_type == ShellFlowType.VITIS_ALVEO:
             return "alveo"
         else:
-            raise Exception(
-                "Couldn't resolve driver platform for " + str(self.shell_flow_type)
-            )
+            raise Exception("Couldn't resolve driver platform for " + str(self.shell_flow_type))

     def _resolve_fpga_part(self):
         if self.fpga_part is None:
@@ -410,8 +406,7 @@ def _resolve_vitis_platform(self):
             return alveo_default_platform[self.board]
         else:
             raise Exception(
-                "Could not resolve Vitis platform:"
-                " need either board or vitis_platform specified"
+                "Could not resolve Vitis platform:" " need either board or vitis_platform specified"
             )

     def _resolve_verification_steps(self):
@@ -429,8 +424,7 @@ def _resolve_verification_io_pair(self):
         )
         verify_input_npy = np.load(self.verify_input_npy)
         assert os.path.isfile(self.verify_expected_output_npy), (
-            "verify_expected_output_npy not found: "
-            + self.verify_expected_output_npy
+            "verify_expected_output_npy not found: " + self.verify_expected_output_npy
         )
         verify_expected_output_npy = np.load(self.verify_expected_output_npy)
         return (