From e6d67f2259335fd7cd5373f409477c4fd594f84d Mon Sep 17 00:00:00 2001
From: Aser-Atawya <130549698+Aser-Atawya@users.noreply.github.com>
Date: Wed, 12 Apr 2023 01:51:27 -0700
Subject: [PATCH] Update a Typo

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 8e4f7b9..eeb14e9 100644
--- a/README.md
+++ b/README.md
@@ -95,7 +95,7 @@ Complete the following steps:
    ```
 2. Call the `load_denormalized.sh` file using the `parallel` program from within the `load_tweets_parallel.sh` script.
 
-   You know you've completed this step correctly if the `check-answers.sh` script passes and the test badge turns green.
+   You know you've completed this step correctly if the `check_answers.sh` script passes and the test badge turns green.
 
 #### Normalized Data (unbatched)
 
@@ -113,7 +113,7 @@ so even when run in parallel it is still slower than the batched code.
 Parallel loading of the batched data will fail due to deadlocks.
 These deadlocks will cause some of your parallel loading processes to crash.
 So all the data will not get inserted,
-and you will fail the `check-answers.sh` tests.
+and you will fail the `check_answers.sh` tests.
 
 There are two possible ways to fix this.
 The most naive method is to catch the exceptions generated by the deadlocks in python and repeat the failed queries.
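
The second hunk's context mentions the naive deadlock fix: catch the exceptions generated by the deadlocks in python and repeat the failed queries. Below is a minimal sketch of that idea, assuming the loader talks to PostgreSQL through psycopg2; the function name `insert_batch_with_retry` and the `(sql, params)` batch structure are hypothetical stand-ins for whatever the assignment's real loading code uses.

```python
import time

import psycopg2
import psycopg2.errors


def insert_batch_with_retry(conn, batch, max_retries=5):
    """Insert one batch of rows, retrying whenever PostgreSQL aborts
    the transaction with a deadlock error.

    `conn` is an open psycopg2 connection; `batch` is a list of
    (sql, params) tuples -- both hypothetical stand-ins for the
    assignment's real data structures.
    """
    for attempt in range(max_retries):
        try:
            with conn:  # commits on success, rolls back on exception
                with conn.cursor() as cur:
                    for sql, params in batch:
                        cur.execute(sql, params)
            return  # batch committed successfully
        except psycopg2.errors.DeadlockDetected:
            # Another parallel loader acquired locks in the opposite
            # order, so PostgreSQL killed this transaction; back off
            # and repeat the whole batch.
            time.sleep(2 ** attempt)
    raise RuntimeError(f'batch still deadlocking after {max_retries} retries')
```

The exponential backoff (`2 ** attempt`) is one reasonable choice here: it reduces the chance that two loaders whose transactions were just aborted retry in lockstep and immediately deadlock again.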