From b57854763bc3c6dd654dce01c9b331c8969ed8c2 Mon Sep 17 00:00:00 2001
From: risha-vijayvargiya-22
Date: Tue, 3 Mar 2026 09:04:55 +0000
Subject: [PATCH] chore: Migrate gsutil usage to gcloud storage

---
 language/capwap/README.md | 2 +-
 language/xsp/README.md    | 3 +--
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/language/capwap/README.md b/language/capwap/README.md
index 649c9799..2f00b8bf 100644
--- a/language/capwap/README.md
+++ b/language/capwap/README.md
@@ -274,7 +274,7 @@ python evaluation/score_captions.py \
 You can download all the pre-trained captioning models here:
 
 ```bash
-gsutil cp gs://capwap/models.zip .
+gcloud storage cp gs://capwap/models.zip .
 unzip models.zip && rm models.zip
 ```
 
diff --git a/language/xsp/README.md b/language/xsp/README.md
index 465c70ce..a28b6297 100644
--- a/language/xsp/README.md
+++ b/language/xsp/README.md
@@ -65,7 +65,7 @@ sh language/xsp/data_download.sh train_only
 
 You must also download resources for training the models (e.g., a pre-trained BERT model). Clone the [official BERT repository](https://github.com/google-research/bert) and download the BERT-Large, uncased model. We didn't use the original BERT-Large model in our main experimental results, but performance using BERT-Large is slightly behind BERT-Large+ on the Spider development set (see Table 3 in the main paper). You can ignore the vocabulary file in the zipped directory.
 
-Finally, for the input training vocabulary, please download the text file from [this link](https://storage.googleapis.com/xsp-files/input_bert_vocabulary.txt) or `gs://xsp-files/input_bert_vocabulary.txt` via `gsutils`. We recommend to save it in the `assets` directory for each run.
+Finally, for the input training vocabulary, please download the text file from [this link](https://storage.googleapis.com/xsp-files/input_bert_vocabulary.txt) or `gs://xsp-files/input_bert_vocabulary.txt` via `gcloud storage`. We recommend saving it in the `assets` directory for each run.
 
 ### For evaluation
 
@@ -273,4 +273,3 @@ An example of running this:
 ```
 python -m language.xsp.evaluation.filter_results dataset_predictions.txt
 ```
-
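For other files that may still reference `gsutil`, the same rewrite the patch applies by hand can be sketched as a one-line substitution (the filename `README_demo.md` is illustrative; assumes GNU sed for in-place editing):

```shell
# Create a demo file containing a gsutil invocation like the one in the patch.
printf 'gsutil cp gs://capwap/models.zip .\n' > README_demo.md

# Rewrite `gsutil ...` to the equivalent `gcloud storage ...` command.
sed -i 's/gsutil /gcloud storage /g' README_demo.md

cat README_demo.md   # -> gcloud storage cp gs://capwap/models.zip .
```

Both CLIs accept the same `cp <src> <dst>` positional arguments for this use, so README command lines migrate without further changes.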