I am writing a model to predict types of text, such as names or dates, on a PDF document.
The model uses nltk.word_tokenize and nltk.pos_tag.
When I try to run this on Kubernetes on Google Cloud Platform, I get the following error:
from nltk.tag import pos_tag
from nltk.tokenize import word_tokenize

tokenized_word = word_tokenize('x')
tagged_word = pos_tag(['x'])
Stack trace:
Resource punkt not found.
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('punkt')
Searched in:
- '/root/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
- '/env/nltk_data'
- '/env/share/nltk_data'
- '/env/lib/nltk_data'
- ''
But obviously downloading it to my local machine will not solve the problem when it has to run on Kubernetes, and we do not have NFS set up on the project yet.
I ended up solving this problem by adding the download of the NLTK packages to an init function:
import logging

import nltk
from nltk import word_tokenize, pos_tag

LOGGER = logging.getLogger(__name__)
LOGGER.info('Catching broad nltk errors')

DOWNLOAD_DIR = '/usr/lib/nltk_data'
LOGGER.info(f'Saving files to {DOWNLOAD_DIR}')

try:
    tokenized = word_tokenize('x')
    LOGGER.info(f'Tokenized word: {tokenized}')
except Exception as err:
    LOGGER.info(f'NLTK dependencies not downloaded: {err}')
    try:
        nltk.download('punkt', download_dir=DOWNLOAD_DIR)
    except Exception as e:
        LOGGER.info(f'Error occurred while downloading file: {e}')

try:
    tagged_word = pos_tag(['x'])
    LOGGER.info(f'Tagged word: {tagged_word}')
except Exception as err:
    LOGGER.info(f'NLTK dependencies not downloaded: {err}')
    try:
        nltk.download('averaged_perceptron_tagger', download_dir=DOWNLOAD_DIR)
    except Exception as e:
        LOGGER.info(f'Error occurred while downloading file: {e}')
I realize that this many try/except blocks are not needed. I also specify the download directory because, without it, NLTK seemed to download and unzip the tagger to /usr/lib, and NLTK does not look for the files there.
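Since the broad try/except blocks are not strictly necessary, the same logic can be written more tightly by probing for the resource first and catching the LookupError that nltk.data.find raises when a resource is missing. A sketch, assuming the same two packages and download directory as above (ensure_resource is a hypothetical helper name):

```python
import nltk

DOWNLOAD_DIR = '/usr/lib/nltk_data'  # one of the directories NLTK searches by default


def ensure_resource(package, resource_path):
    """Download `package` only if `resource_path` is not already present
    anywhere on nltk.data.path (nltk.data.find raises LookupError then)."""
    try:
        nltk.data.find(resource_path)
    except LookupError:
        nltk.download(package, download_dir=DOWNLOAD_DIR)
```

Calling ensure_resource('punkt', 'tokenizers/punkt') and ensure_resource('averaged_perceptron_tagger', 'taggers/averaged_perceptron_tagger') would then replace the two try/except pairs above.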
This will download the files on the first run in each new pod, and the files will persist until the pod dies.
The error was solved on a stateless Kubernetes deployment, which means this approach can also deal with non-persistent environments like App Engine, but it will not be the most efficient because the files need to be downloaded every time an instance spins up.
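To avoid the per-instance download entirely, one alternative (not part of the setup above) is to fetch the data once at image build time, so pods start with the files already in place. A minimal sketch, assuming the image is built from a Dockerfile that can run a script such as this hypothetical download_nltk_data.py via a RUN step:

```python
# download_nltk_data.py -- hypothetical build-time script; running it from a
# Dockerfile step such as `RUN python download_nltk_data.py` bakes the NLTK
# data into the image instead of fetching it on every pod start.
import nltk

# Directory on NLTK's default search path (see the error message above).
DOWNLOAD_DIR = '/usr/lib/nltk_data'


def download_all(packages=('punkt', 'averaged_perceptron_tagger')):
    """Fetch every required NLTK package into DOWNLOAD_DIR."""
    for package in packages:
        nltk.download(package, download_dir=DOWNLOAD_DIR)
```

With the data baked into the image, the init function only needs the cheap lookup path and nothing is downloaded at runtime.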