diff --git a/language/capwap/README.md b/language/capwap/README.md
index 649c979..2f00b8b 100644
--- a/language/capwap/README.md
+++ b/language/capwap/README.md
@@ -274,7 +274,7 @@ python evaluation/score_captions.py \
 
 You can download all the pre-trained captioning models here:
 
 ```bash
-gsutil cp gs://capwap/models.zip .
+gcloud storage cp gs://capwap/models.zip .
 unzip models.zip && rm models.zip
 ```
diff --git a/language/xsp/README.md b/language/xsp/README.md
index 465c70c..a28b629 100644
--- a/language/xsp/README.md
+++ b/language/xsp/README.md
@@ -65,7 +65,7 @@ sh language/xsp/data_download.sh train_only
 
 You must also download resources for training the models (e.g., a pre-trained BERT model). Clone the [official BERT repository](https://github.com/google-research/bert) and download the BERT-Large, uncased model. We didn't use the original BERT-Large model in our main experimental results, but performance using BERT-Large is slightly behind BERT-Large+ on the Spider development set (see Table 3 in the main paper). You can ignore the vocabulary file in the zipped directory.
 
-Finally, for the input training vocabulary, please download the text file from [this link](https://storage.googleapis.com/xsp-files/input_bert_vocabulary.txt) or `gs://xsp-files/input_bert_vocabulary.txt` via `gsutils`. We recommend to save it in the `assets` directory for each run.
+Finally, for the input training vocabulary, please download the text file from [this link](https://storage.googleapis.com/xsp-files/input_bert_vocabulary.txt) or `gs://xsp-files/input_bert_vocabulary.txt` via `gcloud storage`. We recommend saving it in the `assets` directory for each run.
 
 ### For evaluation
 
@@ -273,4 +273,3 @@ An example of running this:
 ```
 python -m language.xsp.evaluation.filter_results dataset_predictions.txt
 ```
-