
CacheCow

CacheCow is a distributed in-memory cache service that consists of a graph of intercommunicating nodes. Each node acts both as a generic endpoint that handles and redirects client requests and as a fragment of the distributed cache store.

Helpful Links: Google Drive, Design Document

Authors: Eliot Solomon, Halit Ozkaya, Mark Pepperl, Nadav Levanoni, Ryan Huckleberry

File Structure

  • cache-node: Javalin project that runs a cache node instance
  • monitor-node: React project that runs the monitoring node instance
  • performance-testing: Scripts for measuring cache performance metrics

Set Up

  1. Create a new directory
mkdir CacheCow
  2. Clone the repository into the new directory
git clone git@github.com:RiceComp413-Fall2022/CacheCow.git CacheCow

Running Locally

  1. Download Node.js if it's not already installed

  2. Set up the monitoring node

# Navigate to monitoring node directory
cd monitoring-node

# Install packages
npm install
  3. Launch a local cache cluster
# Navigate to cache node directory
cd cache-node

# Run 3 node cluster locally
./gradlew run --args 'local 0 7070'
./gradlew run --args 'local 1 7071'
./gradlew run --args 'local 2 7072'

# Note: use the -s flag to run in scalable mode
./gradlew run --args 'local 0 7070 -s'
./gradlew run --args 'local 1 7071 -s'
./gradlew run --args 'local 2 7072 -s'
  4. Check that the nodes are running
# Ping a cache node
curl -X GET "localhost:7070/v1/hello-world"
  5. Monitor the cache
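
If the monitor uses the default npm start script (an assumption; check its package.json if it differs), it can be launched from the monitoring node directory:

# Start the React monitoring dashboard (assumes the default npm start script)
npm start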

Running on AWS

  1. Download Python 3 if not already installed

  2. Download the AWS Command Line Interface

  3. Set up an AWS key pair

  • Navigate to EC2 Dashboard on AWS Management Console
  • Create a new key pair
  • Download the pair and save the file rootkey.csv to the root CacheCow directory
  4. Configure the AWS CLI
aws configure

# Respond to the prompts using your credentials in rootkey.csv
AWS Access Key ID []: <your-access-key>
AWS Secret Access Key []: <your-secret-key>
Default Region Name []: us-east-1
Default Output Format []: <enter>
  5. Install Python packages
pip3 install requests "boto3[crt]" fabric

# Note: if the boto3 install fails, try upgrading pip first
pip3 install --upgrade pip
  6. Launch a cluster on AWS
# Create a cluster with 2 nodes
python3 pasture.py create 2

# Note: use the -s flag to run in scalable mode
python3 pasture.py create 2 -s
  7. Delete the cluster
python3 pasture.py delete 2

Sending Requests

  1. Store a key-value pair
curl -X POST -H "Content-Type: text/plain" --data "<value>" "localhost:7070/v1/blobs/{key}/{version}"
  2. Fetch a value
curl -X GET "localhost:7070/v1/blobs/{key}/{version}"
  3. Clear all key-value pairs
curl -X DELETE "localhost:7070/v1/clear"
  4. Scale the cluster (Note: the cluster must be running in scalable mode)
curl -X POST "localhost:7070/v1/launch-node"
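
For a programmatic round trip against the same endpoints, here is a minimal sketch using the requests package installed in the AWS setup step. The key cow, version 1, and value moo are illustrative only, and the node is assumed to be listening on localhost:7070.

import requests

BASE = "http://localhost:7070/v1"

# Store the value "moo" under the illustrative key "cow", version 1
store = requests.post(f"{BASE}/blobs/cow/1", data="moo",
                      headers={"Content-Type": "text/plain"})
print("store:", store.status_code)

# Fetch the same key and version back
fetch = requests.get(f"{BASE}/blobs/cow/1")
print("fetch:", fetch.status_code, fetch.text)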

Performance Testing

There are multiple performance tests. Here, we will run long-tailed.py, which uses a heavy-tailed lognormal distribution to simulate cache-aside performance. The test is best run against a cache that can hold at most 100 keys. The distribution parameters generate 995 keys, 408 of which are unique, which ensures that the cache exercises its eviction policy.
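
As a rough sketch of that workload shape (the mu and sigma values below are illustrative assumptions, not the parameters used by long-tailed.py), a heavy-tailed set of keys can be drawn from a lognormal distribution with the Python standard library:

import random

# Illustrative parameters only: draw 995 key ids from a heavy-tailed lognormal
random.seed(413)
samples = [random.lognormvariate(3.0, 1.0) for _ in range(995)]
keys = [f"key-{int(s)}" for s in samples]

print(len(keys), "requests covering", len(set(keys)), "unique keys")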

  1. Move into the performance-testing directory
cd CacheCow/performance-testing
  2. Run the test script: long-tailed.py
python3 long-tailed.py --url <domain name>

For example,

python3 long-tailed.py --url localhost:7070
