69 changes: 69 additions & 0 deletions gedi/gediFinder.py
@@ -0,0 +1,69 @@
"""
This script is meant to create a GediFinder URL and get the corresponding list of granules within a
user-defined bounding box. This list can then be used in the Earthdata Search Tool to pull data from
within a bounding box.
"""
Contributor:

Request: before merging, it probably makes sense to remove most of this auto-generated preamble that Colab throws in when exporting. Lines 2-10, that is; the description on L11 seems helpful to me.

Collaborator (Author):

Just went through and cleaned up some of the commented stuff.


## Import necessary packages

# Use the requests package to retrieve the list of files from the GEDI Finder URL
import requests

## Define bounding box and other variables

### Bounding box variables
# Pacific Northwest bbox
ul_lat = 44.75
lr_lat = 44.25
ul_lon = -122.25
lr_lon = -122.75

### Constant values

# Server URL for the LP DAAC GEDI Finder service, which hosts GEDI data
lpdaac = 'https://lpdaacsvc.cr.usgs.gov/services/gedifinder'

# Different levels of GEDI data currently available to the public
productLevel1B = 'GEDI01_B'
productLevel2A = 'GEDI02_A'
productLevel2B = 'GEDI02_B'
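
# (GEDI01_B: geolocated waveforms; GEDI02_A: elevation and height metrics;
#  GEDI02_B: canopy cover and vertical profile metrics)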

# Product version number
version = '001'

# Create bounding box string for url
bbox = ','.join(map(str, [ul_lat, ul_lon, lr_lat, lr_lon]))
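# With the Pacific Northwest values above, bbox == '44.75,-122.25,44.25,-122.75'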

# Requested output format for the service response
output = 'json'

## Join elements of the GEDI Finder URL

# Join together components of url
urlList = [
f'product={productLevel1B}',
f'version={version}',
f'bbox={bbox}',
f'output={output}'
]

url = lpdaac + "?" + '&'.join(urlList)
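
# With the defaults above, url ==
# 'https://lpdaacsvc.cr.usgs.gov/services/gedifinder?product=GEDI01_B&version=001&bbox=44.75,-122.25,44.25,-122.75&output=json'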

## Get list of granules

# Make a GET request to the GEDI Finder service
response = requests.get(url)

# Verify the request succeeded; exit early on failure so granulesList is always defined
if response.ok:
    print('Success!')
    granulesList = response.json()['data']  # Pull granule URLs from the response
else:
    raise SystemExit(f'An error has occurred: HTTP {response.status_code}')

# Strip the server path from each granule URL, keeping only the file name
stripped_granulesList = [s.split('/')[-1] for s in granulesList]
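# Each entry should now look something like the following (illustrative name,
# not real data): 'GEDI01_B_2019108002011_O01959_T03909_02_003_01.h5'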

# Join the granule names and print them so the list can be copied into Earthdata Search
print(','.join(stripped_granulesList))
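
## Optional: query additional product levels

# A minimal sketch (not part of the original script) showing how the same
# request could be repeated for the other product levels defined above. The
# helper name fetch_granules is hypothetical; it simply reuses the URL
# pattern and response handling from this script.
def fetch_granules(product):
    query = '&'.join([
        f'product={product}',
        f'version={version}',
        f'bbox={bbox}',
        f'output={output}',
    ])
    resp = requests.get(f'{lpdaac}?{query}')
    resp.raise_for_status()  # raise on HTTP errors instead of returning bad data
    return [g.split('/')[-1] for g in resp.json()['data']]

# Example usage (uncomment to run; each call makes a network request):
# for product in (productLevel2A, productLevel2B):
#     print(product, ':', ','.join(fetch_granules(product)))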