Skip to content
Merged
Show file tree
Hide file tree
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension

Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
19 changes: 18 additions & 1 deletion Makefile
Original file line number Diff line number Diff line change
Expand Up @@ -47,13 +47,30 @@ benchmark: build
@echo "Running Benchmarks..."
go test ./benchmark -bench=. \
-benchmem \
-count=3 \
-count=4 \
-benchtime=2s \
-cpu=1,4 \
-timeout=30m \
| tee ./benchmark/benchmark_results.txt
@$(call success,"Standard benchmarks complete.")

benchmark\:new: build
@echo "Running Benchmarks..."
@bash ./tools/scripts/archive-benchmark.sh
@$(call success,"Archived old benchmarks.")
go test ./benchmark -bench=. \
-benchmem \
-count=4 \
-benchtime=2s \
-cpu=1,4 \
-timeout=30m \
| tee ./benchmark/benchmark_results.txt
@$(call success,"Standard benchmarks complete.")

benchmark\:report:
@bash ./tools/scripts/compare-benchmarks.sh


benchmark\:full: build
@echo "Running Benchmarks..."
go test ./benchmark -bench=. \
Expand Down
49 changes: 22 additions & 27 deletions README.md
Original file line number Diff line number Diff line change
Expand Up @@ -17,11 +17,7 @@ Seed is a CLI tool that helps you quickly create directory structures from a tre
- [Using spaces](#using-spaces)
- [Using JSON](#using-json)
- [Features](#features)
- [Benchmarks](#benchmarks)
- [Overview](#overview)
- [Time performance](#time-performance)
- [Memory Usage](#memory-usage)
- [In Depth](#in-depth)
- [Performance](#performance)
- [Contributing](#contributing)
- [Todo](#todo)
- [License](#license)
Expand Down Expand Up @@ -245,36 +241,34 @@ seed --format json -f path/to/structure.json

## Features

- 🚀 Fast directory structure creation
- 🚀 Super Fast directory structure creation
- 📋 Direct clipboard support
- 🌲 Supports standard tree format
- 🏗️ Supports JSON format
- 📁 Creates both files and directories

## Benchmarks
## Performance

### Overview
Seed is built with performance in mind. Here's a quick look at our parser performance:

#### Time performance
| Parser Type | Nodes | Time/Operation | Allocations/Operation | Memory/Operation |
|------------|-------|----------------|----------------------|------------------|
| ASCII Tree | 100 | ~3.7ms | 46 | ~14KB |
| ASCII Tree | 1000 | ~13.5ms | 76 | ~16KB |
| ASCII Tree | 5000 | ~76ms | 654 | ~54KB |
| JSON | 100 | ~3.8ms | 46 | ~14KB |
| JSON | 1000 | ~14.1ms | 78 | ~16KB |
| JSON | 5000 | ~79ms | 697 | ~56KB |

| nodes | ascii (ms) | json (ms) | difference |
|-------|------------|-----------|------------|
| 100 | 8.62 | 9.00 | +4.4% |
| 500 | 35.55 | 36.76 | +3.4% |
| 1000 | 64.48 | 66.23 | +2.7% |
| 5000 | 428.16 | 438.29 | +2.4% |
- Time and memory complexity are linear
- For detailed benchmarks, methodology, and historical data, see the [benchmark documentation](./benchmark/README.md).

#### Memory Usage
Run benchmarks locally:

| Nodes | ASCII (KB) | JSON (KB) | Difference |
|-------|------------|-----------|------------|
| 100 | 13.89 | 13.95 | +0.4% |
| 500 | 17.09 | 17.14 | +0.3% |
| 1000 | 24.82 | 25.11 | +1.2% |
| 5000 | 235.83 | 235.89 | +0.03% |

### In Depth

A more in depth analysis and breakdown of the benchmarks can be found [here](./benchmark/README.md)
```bash
make benchmark:new # Standard benchmarks
make benchmark:report # Compare against last benchmarks
```

## Contributing

Expand All @@ -289,7 +283,8 @@ Contributions are welcome! Please feel free to submit a Pull Request. For major
## Todo

- ~~Implement ability to parse from file path~~
- ~~Add JSON support ~~
- ~~Add JSON support~~
- ~~Benchmarks~~
- Increase package manager distribution
- apt
- pacman
Expand Down
163 changes: 74 additions & 89 deletions benchmark/README.md
Original file line number Diff line number Diff line change
@@ -1,108 +1,93 @@
# Benchmark Results
# Seed Performance Benchmarks

This document details the performance characteristics of the tree parsing implementations, comparing ASCII tree and JSON parsing methods across various node counts and input methods.

The benchmark results are in `./benchmark_results.txt`

## Environment

- **OS**: Darwin (macOS)
- **Architecture**: ARM64
- **CPU**: Apple M3 Pro

## Methodology

Benchmarks were conducted using Go's built-in testing framework with the following parameters:

- 100, 500, 1000, and 5000 nodes (files and dirs)
- 2 second runs
- 3 runs per test
- Single core and quad core
- File and String input
- Metrics measured:
- Time (ns/op)
- Memory allocation (B/op)
- Allocation count (allocs/op)
This document details Seed's performance characteristics and benchmarking methodology. Our benchmarks focus on real-world usage patterns while maintaining technical rigor.

## Key Findings

### Performance Comparison

#### time performance
| nodes | ascii (ms) | json (ms) | difference |
|-------|------------|-----------|------------|
| 100 | 8.62 | 9.00 | +4.4% |
| 500 | 35.55 | 36.76 | +3.4% |
| 1000 | 64.48 | 66.23 | +2.7% |
| 5000 | 428.16 | 438.29 | +2.4% |

#### Memory Usage
| Nodes | ASCII (KB) | JSON (KB) | Difference |
|-------|------------|-----------|------------|
| 100 | 13.89 | 13.95 | +0.4% |
| 500 | 17.09 | 17.14 | +0.3% |
| 1000 | 24.82 | 25.11 | +1.2% |
| 5000 | 235.83 | 235.89 | +0.03% |
- Both parsers handle typical project sizes (100-500 nodes) in under 10ms
- Memory usage scales linearly and stays under 20KB for common use cases
- Parser choice (ASCII vs JSON) has minimal impact on performance
- Multi-core scaling shows diminishing returns past 4 cores

## Benchmark Configuration
```go
go test ./benchmark -bench=. \
-benchmem \
-count=4 \
-benchtime=2s \
-cpu=1,4 \
-timeout=30m
```
All benchmarks:
- Run 4 iterations to ensure consistency
- Test both single and quad-core configurations
- Measure over a 2-second window
- Include memory allocation tracking

### Input Method Comparison (500 nodes)
### Real-World Context

- **String Input**
- Time: 35.82ms
- Memory: 40.62KB
- Allocations: 77/op
To put these numbers in perspective:

- **File Input**
- Time: 35.57ms
- Memory: 16.17KB
- Allocations: 77/op
| Structure Size | Example | Parse Time | Memory |
|---------------|---------|------------|---------|
| 100 nodes | Small React app | ~3.8ms | ~14KB |
| 500 nodes | Medium web project | ~8.5ms | ~15KB |
| 1000 nodes | Large monorepo | ~14ms | ~16KB |
| 5000 nodes | Extreme case | ~77ms | ~55KB |

## Analysis
### Input Method Comparison (500 nodes)
```
BenchmarkInputMethods/StringInput 259 8781598 ns/op 38524 B/op 46 allocs/op
BenchmarkInputMethods/FileInput 262 8743410 ns/op 14090 B/op 45 allocs/op
```
File input uses less memory due to streaming, while string input requires holding the entire structure in memory.

1. **Parsing Performance**
- ASCII tree parsing consistently outperforms JSON parsing
- Performance gap decreases with larger node counts
- Both methods show linear scaling with node count
## Performance Characteristics

2. **Memory Efficiency**
- Both parsers show similar memory patterns
- JSON consistently uses marginally more memory
- Differences in memory usage become negligible at larger node counts
### Memory Usage Pattern
- Base memory cost: ~14KB
- Linear scaling: ~8B per additional node
- Allocations increase significantly past 1000 nodes
- JSON parser has slightly higher allocation count for large structures

3. **Input Methods**
- File input shows significant memory advantages (~60% reduction)
- No performance penalty for file-based input
- Consistent allocation patterns across both input methods
### CPU Scaling
Performance improvement moving from 1 to 4 cores:
- Small structures (≤500 nodes): Minimal benefit
- Large structures (>1000 nodes): Up to 20% improvement
- Very large structures (5000+ nodes): Up to 35% improvement

## Practical Implications
## Running Benchmarks

1. **For Performance-Critical Applications**
- ASCII tree parsing offers a slight but consistent performance advantage
- Benefits are most noticeable with smaller node counts
- Consider using file-based input for better memory efficiency
### Standard Benchmark
```bash
make benchmark:new
```
This archives the current benchmarks and runs a fresh report
```bash
make benchmark
```
Should be used if you do NOT want to archive the current report and overwrite it instead

2. **For Memory-Constrained Systems**
- File input method is strongly recommended
- Both parsing methods show similar memory characteristics
- Memory usage scales linearly with node count
### Comparing Results
```bash
make benchmark:report
```
Compares current results with previous benchmark data.

3. **For General Usage**
- Both methods are viable for typical use cases
- Performance differences are minimal for most applications
- Choose based on your specific needs for data format and interoperability
## Historical Data

## Running the Benchmarks
Benchmarks are archived in `historical/` with the schema:
```
historical/
└── {int}_benchmark_results.txt
```
The lower the number, the older the run.

To run the benchmarks yourself:
## Contributing New Benchmarks

```bash
# Standard
make benchmark
# More intensive (time consuming)
make benchmark:full
```
## Notes
When adding features that could impact performance:
1. Add relevant benchmarks to `seed_test.go`
2. Include baseline numbers using `make benchmark:new`
3. Document performance characteristics in PR

- All benchmarks were run with Go's default settings
- Results may vary based on hardware and system load
- Memory statistics include both heap and stack allocations
- Each benchmark includes multiple runs to account for variance
For benchmark design guidelines, see `data_generators.go`.
Loading
Loading