# UMGF: Multi-modal Graph Fusion for Named Entity Recognition with Targeted Visual Guidance

This repository contains the source code for the paper *UMGF: Multi-modal Graph Fusion for Named Entity Recognition with Targeted Visual Guidance*.

## Install

- Python 3.7
- transformers==3.4.0
- torch==1.7.1
- pytorch-crf==0.7.2
- pillow==7.1.2
- tqdm==4.62.3
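If it helps, the pinned dependencies above can be collected into a `requirements.txt` (a sketch based on the list above; note that Python 3.7 is the interpreter version, not a pip package):

```shell
# Write the pinned dependencies from the list above to requirements.txt.
cat > requirements.txt <<'EOF'
transformers==3.4.0
torch==1.7.1
pytorch-crf==0.7.2
pillow==7.1.2
tqdm==4.62.3
EOF

# Then install into a Python 3.7 environment, e.g.:
# pip install -r requirements.txt
```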

## Dataset

- You can download the original datasets from UMT.

## Preprocess

### Image

1. Download the Twitter images from UMT.
2. To detect visual objects, follow onestage_grounding, or download the pre-detected objects directly from twitter2015_img.tar.gz (password: l75t) and twitter2017_img.tar.gz (password: 2017).
3. Unzip the archives and put the images under the corresponding folder (e.g. `./data/twitter2015/image`).
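The unzip-and-place steps above can be sketched as follows. The target folders follow the example path given above, and the archive file names follow the downloads mentioned in step 2; the exact internal layout of the archives may differ, so adjust the extraction paths if needed:

```shell
# Create the expected image folders (paths follow the example above).
mkdir -p ./data/twitter2015/image ./data/twitter2017/image

# Extract each detected-object archive into its folder, if it has been downloaded.
if [ -f twitter2015_img.tar.gz ]; then
  tar -xzf twitter2015_img.tar.gz -C ./data/twitter2015/image
fi
if [ -f twitter2017_img.tar.gz ]; then
  tar -xzf twitter2017_img.tar.gz -C ./data/twitter2017/image
fi
```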

### Text

- The preprocessed text files are provided under the `./my_data/` folder.

## Run

### Train

```shell
python ddp_mmner.py --do_train --txtdir=./my_data/twitter2015 --imgdir=./data/twitter2015/image --ckpt_path=./model.pt --num_train_epoch=30 --train_batch_size=16 --lr=0.0001 --seed=2019
```

### Test

```shell
python ddp_mmner.py --do_test --txtdir=./my_data/twitter2015 --imgdir=./data/twitter2015/image --ckpt_path=./ddp_mner.pt --test_batch_size=32
```

## Acknowledgements

- Using these two datasets means you have read and accepted the copyright terms set by Twitter and the dataset providers.
- Parts of the code are adapted from: