diff --git a/migration/README.md b/migration/README.md
index c88f01e8..daa77ecf 100644
--- a/migration/README.md
+++ b/migration/README.md
@@ -4,6 +4,9 @@
 This section contains scripts to help before, during, and after migrations.
 ## [Mongosync Insights](mongosync_insights)
 This project parses **mongosync** logs and reads the internal database (metadata), generating a variety of plots to assist with monitoring and troubleshooting ongoing mongosync migrations.
 
+## [Toolbox](toolbox)
+Toolbox is a collection of helper scripts created by the Migration Factory team for data capture and analysis.
+
 ### License
 [Apache 2.0](http://www.apache.org/licenses/LICENSE-2.0)
diff --git a/migration/toolbox/README.md b/migration/toolbox/README.md
new file mode 100644
index 00000000..c8b8b820
--- /dev/null
+++ b/migration/toolbox/README.md
@@ -0,0 +1,112 @@
+# Toolbox
+Toolbox is a collection of helper scripts created by the Migration Factory team for data capture and analysis.
+
+## Database and Collection size
+
+**Script:** `collectionSizes.js`
+
+Lists all databases and collections (excluding system databases: `admin`, `config`, `local`) with their sizes in MB, sorted from largest to smallest.
+
+### Usage
+
+```bash
+mongosh "mongodb://localhost:27017" --quiet collectionSizes.js
+```
+
+Or with authentication:
+
+```bash
+mongosh "mongodb://user:password@localhost:27017" --quiet collectionSizes.js
+```
+
+### Example Output
+
+```
+Database | Collection | Size (MB)
+---------------------------------
+mydb | largeCollection | 1024.5 MB
+mydb | mediumCollection | 256.25 MB
+otherdb | smallCollection | 12 MB
+```
+
+## Index size, parameters and utilization
+
+**Script:** `probIndexesComplete.js`
+
+Collects index statistics across all user databases (excluding `admin`, `config`, `local`). For each index, it reports:
+- Database and collection name
+- Index name and type (common, TTL, Partial, text, 2dsphere, geoHaystack, or `[INTERNAL]` for `_id_`)
+- Whether the index is unique
+- Access count (ops) and when tracking started
+- Index size in MB and bytes
+
+### Usage
+
+```bash
+mongosh "mongodb://localhost:27017" --quiet probIndexesComplete.js
+```
+
+Or with authentication:
+
+```bash
+mongosh "mongodb://user:password@localhost:27017" --quiet probIndexesComplete.js
+```
+
+### Example Output
+
+```
+┌─────────┬────────┬────────────────┬──────────────┬────────────┬────────┬──────────┬──────────┬─────────┬─────────────────────────┐
+│ (index) │ db     │ collection     │ name         │ type       │ unique │ accesses │ size (MB)│ size    │ accesses_since          │
+├─────────┼────────┼────────────────┼──────────────┼────────────┼────────┼──────────┼──────────┼─────────┼─────────────────────────┤
+│ 0       │ mydb   │ users          │ _id_         │ [INTERNAL] │        │ 150      │ 0.25     │ 262144  │ 2024-01-15T10:30:00.000Z│
+│ 1       │ mydb   │ users          │ email_1      │ common     │ true   │ 1200     │ 0.12     │ 126976  │ 2024-01-15T10:30:00.000Z│
+│ 2       │ mydb   │ sessions       │ _id_         │ [INTERNAL] │        │ 50       │ 0.08     │ 81920   │ 2024-01-15T10:30:00.000Z│
+│ 3       │ mydb   │ sessions       │ expireAt_1   │ TTL        │        │ 0        │ 0.04     │ 40960   │ 2024-01-15T10:30:00.000Z│
+└─────────┴────────┴────────────────┴──────────────┴────────────┴────────┴──────────┴──────────┴─────────┴─────────────────────────┘
+```
+
+## Mongosync Limitations Checker
+
+**Script:** `mongosync_uniqueindex_limitation_checker.py`
+
+Detects a known mongosync limitation where a collection has two indexes with the exact same key pattern, one unique and one non-unique. This condition can cause mongosync to fail during migrations.
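+
+The check itself boils down to a set intersection over normalized index key patterns. A minimal, self-contained sketch with hypothetical index names (the real script does the same grouping per collection):
+
+```python
+# Flag key patterns that appear as both a unique and a non-unique index.
+indexes = [
+    {"name": "email_unique_idx", "key": (("email", 1),), "unique": True},
+    {"name": "email_idx", "key": (("email", 1),), "unique": False},
+]
+
+unique_keys = {idx["key"] for idx in indexes if idx["unique"]}
+non_unique_keys = {idx["key"] for idx in indexes if not idx["unique"]}
+
+for key in unique_keys & non_unique_keys:
+    print(f"limitation: key pattern {dict(key)} is both unique and non-unique")
+```
+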
+The script supports two modes:
+- **Online mode:** Connects directly to a MongoDB cluster via a connection string
+- **Offline mode:** Parses a `getMongoData` JSON file (no cluster access required)
+
+### Quick Usage
+
+**Offline (getMongoData):**
+```bash
+python3 mongosync_uniqueindex_limitation_checker.py --getmongodata <file>.json
+```
+
+**Online (MongoDB cluster):**
+```bash
+python3 mongosync_uniqueindex_limitation_checker.py --uri "mongodb+srv://USER:PASS@host"
+```
+
+For full documentation, filtering options, and examples, see [README_limitations_checker.md](README_limitations_checker.md).
+
+### License
+
+[Apache 2.0](http://www.apache.org/licenses/LICENSE-2.0)
+
+DISCLAIMER
+----------
+Please note: all tools/scripts in this repo are released for use "AS IS" **without any warranties of any kind**,
+including, but not limited to their installation, use, or performance. We disclaim any and all warranties, either
+express or implied, including but not limited to any warranty of noninfringement, merchantability, and/or fitness
+for a particular purpose. We do not warrant that the technology will meet your requirements, that the operation
+thereof will be uninterrupted or error-free, or that any errors will be corrected.
+
+Any use of these scripts and tools is **at your own risk**. There is no guarantee that they have been through
+thorough testing in a comparable environment and we are not responsible for any damage or data loss incurred with
+their use.
+
+You are responsible for reviewing and testing any scripts you run *thoroughly* before use in any non-testing
+environment.
+
+Thanks,
+The MongoDB Support Team
diff --git a/migration/toolbox/README_limitations_checker.md b/migration/toolbox/README_limitations_checker.md
new file mode 100644
index 00000000..6204709c
--- /dev/null
+++ b/migration/toolbox/README_limitations_checker.md
@@ -0,0 +1,260 @@
+# Mongosync Limitations Checker (Unified)
+
+This script detects a known **mongosync limitation**:
+
+> A collection that has two indexes with the exact same key pattern where one is **unique** and the other is **non-unique**.
+
+This condition can cause mongosync to fail or behave unexpectedly during migrations.
+The script is intended as a **pre-check** for MRAs and migration readiness reviews.
+
+---
+
+## What the script does
+
+For every collection it scans, the script:
+
+1. Retrieves all index definitions.
+2. Separates them into:
+   - **unique** indexes
+   - **non-unique** indexes
+3. Compares index key patterns.
+4. Flags a limitation when it finds the *same key pattern* in both groups.
+
+### Output
+
+- Prints a clean terminal report.
+- Optionally writes a JSON report using `--out`.
+
+Each finding includes:
+- `database`
+- `collection`
+- `index_keys`
+- `unique_index_names`
+- `non_unique_index_names`
+
+**Sample terminal output:**
+
+```
+Starting mongosync limitations checker (ONLINE).
+Input: mongodb+srv://...
+Checking for unique and non-unique indexes on the same field/s...
+Limitations found: 1
+
+- mydb.users | keys={'email': 1} | uniqueIndex=['email_unique_idx'] | non-uniqueIndex=['email_idx']
+
+Finishing mongosync limitations checker.
+```
+
+**Sample JSON output** (when using `--out`):
+
+```json
+[
+  {
+    "database": "mydb",
+    "collection": "users",
+    "index_keys": [["email", 1]],
+    "unique_index_names": ["email_unique_idx"],
+    "non_unique_index_names": ["email_idx"]
+  }
+]
+```
+
+---
+
+## What it runs against
+
+The script supports **two modes**.
+
+### Online mode (MongoDB cluster)
+
+Reads indexes directly from a MongoDB deployment using a connection string.
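+
+Under the hood, online mode simply walks databases, collections, and their `list_indexes()` output via PyMongo. A condensed sketch of that walk (placeholder URI; error handling omitted, though the real script skips collections it cannot read):
+
+```python
+from pymongo import MongoClient
+
+client = MongoClient("mongodb://localhost:27017")  # placeholder URI
+for db_name in client.list_database_names():
+    if db_name in ("admin", "local", "config"):  # system DBs are always skipped
+        continue
+    for coll_name in client[db_name].list_collection_names():
+        for idx in client[db_name][coll_name].list_indexes():
+            print(db_name, coll_name, idx.get("name"),
+                  dict(idx.get("key", {})), bool(idx.get("unique", False)))
+```
+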
+Supported:
+- MongoDB Atlas clusters
+- Self-managed replica sets / sharded clusters
+
+### Offline mode (getMongoData JSON)
+
+Runs without cluster access by parsing a `getMongoData` output JSON.
+
+---
+
+## Requirements
+
+### Offline mode
+- Python 3.7+
+- No external dependencies
+
+### Online mode
+- Python 3.7+
+- PyMongo:
+```bash
+python3 -m pip install pymongo
+```
+
+---
+
+## Atlas / SRV TLS note
+
+PyMongo uses the Python/OS trust store. On some machines you may need `certifi`:
+```bash
+python3 -m pip install certifi
+```
+Run the script with `--use-certifi-ca` when connecting to Atlas.
+
+---
+
+## Usage
+
+Exactly one mode flag is required.
+```bash
+python3 mongosync_uniqueindex_limitation_checker.py \
+  (--uri "<connection-string>" | --getmongodata <file>.json) \
+  [flags...]
+```
+
+---
+
+## Flags
+
+**Mode selection (required)**
+
+| Flag             | Description                                |
+| ---------------- | ------------------------------------------ |
+| `--uri`          | Online mode. Connect to a MongoDB cluster  |
+| `--getmongodata` | Offline mode. Parse getMongoData JSON      |
+
+---
+
+**Filters (apply to both modes)**
+
+| Flag            | Description                      |
+| --------------- | -------------------------------- |
+| `--include-dbs` | Comma-separated DB allow-list    |
+| `--exclude-dbs` | Comma-separated DB block-list    |
+| `--include-ns`  | Regex applied to `db.collection` |
+
+---
+
+**Output**
+
+| Flag    | Description                   |
+| ------- | ----------------------------- |
+| `--out` | Write findings to a JSON file |
+
+---
+
+**TLS helper (online only)**
+
+| Flag               | Description                                    |
+| ------------------ | ---------------------------------------------- |
+| `--use-certifi-ca` | Use certifi CA bundle (fixes Atlas TLS issues) |
+
+---
+
+## How to use the filters
+
+**Include / exclude DBs**
+
+```bash
+--include-dbs prod_01,prod_02
+--exclude-dbs test,staging
+```
+- System DBs (`admin`, `local`, `config`) are always skipped.
+
+**Namespace regex filter**
+
+The `--include-ns` flag accepts a regex pattern that is searched against the full namespace (`db.collection`):
+
+```bash
+--include-ns "^prod_"    # Namespaces starting with "prod_"
+--include-ns "\.users$"  # Collections ending with "users"
+--include-ns "orders"    # Namespaces containing "orders"
+```
+
+---
+
+## Examples
+
+### Offline (getMongoData)
+
+```bash
+python3 mongosync_uniqueindex_limitation_checker.py \
+  --getmongodata <file>.json
+```
+
+With JSON output:
+
+```bash
+python3 mongosync_uniqueindex_limitation_checker.py \
+  --getmongodata <file>.json \
+  --out <report>.json
+```
+
+Offline + DB filter:
+
+```bash
+python3 mongosync_uniqueindex_limitation_checker.py \
+  --getmongodata <file>.json \
+  --include-dbs <db1>,<db2> \
+  --out <report>.json
+```
+
+---
+
+### Online (non-SRV)
+
+```bash
+python3 mongosync_uniqueindex_limitation_checker.py \
+  --uri "mongodb://<user>:<password>@<host>:<port>/admin?appName=<app-name>" \
+  --out <report>.json
+```
+
+---
+
+### Online (Atlas SRV)
+
+```bash
+python3 mongosync_uniqueindex_limitation_checker.py \
+  --uri "mongodb+srv://USER:PASS@<cluster-host>/admin?appName=checker" \
+  --out <report>.json
+```
+
+If you see TLS errors:
+
+```bash
+python3 -m pip install certifi
+```
+
+Then:
+
+```bash
+python3 mongosync_uniqueindex_limitation_checker.py \
+  --uri "mongodb+srv://USER:PASS@<cluster-host>/admin?appName=checker" \
+  --use-certifi-ca \
+  --out <report>.json
+```
+
+---
+
+## Notes
+
+- The script is read-only.
+- Collections that raise permission errors are skipped.
+
+DISCLAIMER
+----------
+Please note: all tools/scripts in this repo are released for use "AS IS" **without any warranties of any kind**,
+including, but not limited to their installation, use, or performance.
+We disclaim any and all warranties, either
+express or implied, including but not limited to any warranty of noninfringement, merchantability, and/or fitness
+for a particular purpose. We do not warrant that the technology will meet your requirements, that the operation
+thereof will be uninterrupted or error-free, or that any errors will be corrected.
+
+Any use of these scripts and tools is **at your own risk**. There is no guarantee that they have been through
+thorough testing in a comparable environment and we are not responsible for any damage or data loss incurred with
+their use.
+
+You are responsible for reviewing and testing any scripts you run *thoroughly* before use in any non-testing
+environment.
+
+Thanks,
+The MongoDB Support Team
\ No newline at end of file
diff --git a/migration/toolbox/collectionSizes.js b/migration/toolbox/collectionSizes.js
new file mode 100644
index 00000000..f6c671e2
--- /dev/null
+++ b/migration/toolbox/collectionSizes.js
@@ -0,0 +1,62 @@
+// List of system databases to exclude
+const excludeDatabases = ['admin', 'config', 'local'];
+const byteToMB = (byte) => ((byte / 1024) / 1024).toFixed(2);
+const databaseInfo = [];
+
+// Function to check if an array contains a value
+const arrayContains = function(arr, val) {
+    return arr.indexOf(val) !== -1;
+};
+
+// Get all databases and exclude system ones
+const databases = db.adminCommand('listDatabases').databases.filter(function(database) {
+    return !arrayContains(excludeDatabases, database.name);
+});
+
+// Debugging: Log the databases found
+//print("Databases found (excluding system databases):");
+//databases.forEach(function(database) {
+//    print(" - " + database.name);
+//});
+
+for (var i = 0; i < databases.length; i++) {
+    const database = databases[i];
+    const currentDb = db.getSiblingDB(database.name);
+
+    // Debugging: Log the current database being processed
+    //print("Processing database: " + database.name);
+
+    // Use getCollectionNames()
+    const collections = currentDb.getCollectionNames();
+
+    // Debugging: Log collections found in the database
+    //print("Collections found in " + database.name + ":");
+    //if (collections.length === 0) {
+    //    print("  No collections found.");
+    //}
+    collections.forEach(function(collectionName) {
+        //print(" - " + collectionName);
+        const currentCollection = currentDb.getCollection(collectionName);
+        const stats = currentCollection.stats(); // Get collection stats
+
+        databaseInfo.push({
+            db: database.name,
+            collection: collectionName,
+            size_MB: parseFloat(byteToMB(stats.size)), // Collection size in MB
+            size: stats.size // Size in bytes
+        });
+    });
+}
+
+// Sort by size (descending order)
+databaseInfo.sort(function(a, b) {
+    return b.size - a.size;
+});
+
+// Print the sorted list of collections
+print("Database | Collection | Size (MB)");
+print("---------------------------------");
+for (var j = 0; j < databaseInfo.length; j++) {
+    const info = databaseInfo[j];
+    print(info.db + " | " + info.collection + " | " + info.size_MB + " MB");
+}
diff --git a/migration/toolbox/mongosync_uniqueindex_limitation_checker.py b/migration/toolbox/mongosync_uniqueindex_limitation_checker.py
new file mode 100644
index 00000000..d4d674f1
--- /dev/null
+++ b/migration/toolbox/mongosync_uniqueindex_limitation_checker.py
@@ -0,0 +1,340 @@
+#!/usr/bin/env python3
+
+from __future__ import annotations
+
+import argparse
+import json
+import re
+import sys
+from collections import defaultdict
+from typing import Any, Dict, Iterable, List, Optional, Set, Tuple
+
+
+# Order-preserving, hashable signature for an index key pattern
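+# For example, normalize_key_pattern({"email": 1, "age": -1}) (defined below)
+# yields (("email", 1), ("age", -1)): hashable, with field order preserved,
+# which matters for compound indexes.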
+KeySig = Tuple[Tuple[str, Any], ...]
+
+
+# -------------------------
+# Filter helpers
+# -------------------------
+
+def _parse_csv_set(value: Optional[str]) -> Optional[Set[str]]:
+    if not value:
+        return None
+    items = [v.strip() for v in value.split(",") if v.strip()]
+    return set(items) if items else None
+
+
+def _compile_regex(pattern: Optional[str]) -> Optional[re.Pattern]:
+    if not pattern:
+        return None
+    return re.compile(pattern)
+
+
+def ns_allowed(
+    db: str,
+    coll: str,
+    include_dbs: Optional[Set[str]],
+    exclude_dbs: Optional[Set[str]],
+    include_ns_re: Optional[re.Pattern],
+) -> bool:
+    # include/exclude DBs
+    if include_dbs is not None and db not in include_dbs:
+        return False
+    if exclude_dbs is not None and db in exclude_dbs:
+        return False
+
+    # system DBs are always excluded
+    if db in ("admin", "local", "config"):
+        return False
+
+    # include-ns regex on db.collection
+    if include_ns_re is not None:
+        ns = f"{db}.{coll}"
+        if not include_ns_re.search(ns):
+            return False
+
+    return True
+
+
+# -------------------------
+# Normalization + core logic
+# -------------------------
+
+def normalize_key_pattern(key_obj: Any) -> KeySig:
+    """
+    Normalize index key patterns into an order-preserving, hashable representation.
+
+    IMPORTANT: Order matters for compound indexes in MongoDB.
+    """
+
+    if isinstance(key_obj, dict):
+        return tuple((str(k), v) for k, v in key_obj.items())
+
+    if isinstance(key_obj, (list, tuple)):
+        pairs: List[Tuple[str, Any]] = []
+        for item in key_obj:
+            if isinstance(item, (list, tuple)) and len(item) == 2:
+                pairs.append((str(item[0]), item[1]))
+            else:
+                return (("<>", str(key_obj)),)
+        return tuple(pairs)
+
+    # Last resort: try .items() (dict-like)
+    try:
+        items = list(key_obj.items())  # type: ignore[attr-defined]
+        return tuple((str(k), v) for k, v in items)
+    except Exception:
+        return (("<>", str(key_obj)),)
+
+
+def find_limitations(index_rows: Iterable[Dict[str, Any]]) -> List[Dict[str, Any]]:
+    """
+    index_rows yields dicts shaped like:
+    {
+        "database": str,
+        "collection": str,
+        "index_name": str,
+        "key": <raw index key pattern>,
+        "unique": bool
+    }
+    """
+
+    per_collection: Dict[Tuple[str, str], Dict[KeySig, Dict[str, List[str]]]] = defaultdict(
+        lambda: defaultdict(lambda: {"unique": [], "non_unique": []})
+    )
+
+    for row in index_rows:
+        db = row.get("database")
+        coll = row.get("collection")
+        name = row.get("index_name", "")
+        key = row.get("key")
+        unique = bool(row.get("unique", False))
+
+        if not db or not coll or key is None:
+            continue
+
+        key_pattern = normalize_key_pattern(key)
+        bucket = "unique" if unique else "non_unique"
+        per_collection[(db, coll)][key_pattern][bucket].append(str(name))
+
+    limitations: List[Dict[str, Any]] = []
+
+    for (db, coll), by_key in per_collection.items():
+        for key_pattern, buckets in by_key.items():
+            if buckets["unique"] and buckets["non_unique"]:
+                limitations.append(
+                    {
+                        "database": db,
+                        "collection": coll,
+                        "index_keys": [list(kv) for kv in key_pattern],
+                        "unique_index_names": sorted(set(buckets["unique"])),
+                        "non_unique_index_names": sorted(set(buckets["non_unique"])),
+                    }
+                )
+
+    limitations.sort(key=lambda d: (d["database"], d["collection"], str(d["index_keys"])))
+    return limitations
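+
+
+# Example: two indexes on the same key pattern, one unique and one not,
+# produce a single finding (hypothetical names):
+#
+#   find_limitations([
+#       {"database": "mydb", "collection": "users", "index_name": "email_unique_idx",
+#        "key": {"email": 1}, "unique": True},
+#       {"database": "mydb", "collection": "users", "index_name": "email_idx",
+#        "key": {"email": 1}, "unique": False},
+#   ])
+#   -> [{"database": "mydb", "collection": "users", "index_keys": [["email", 1]],
+#        "unique_index_names": ["email_unique_idx"],
+#        "non_unique_index_names": ["email_idx"]}]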
+
+
+# -------------------------
+# Offline extractor (getMongoData)
+# -------------------------
+
+def iter_indexes_from_getmongodata(
+    docs: List[Dict[str, Any]],
+    include_dbs: Optional[Set[str]],
+    exclude_dbs: Optional[Set[str]],
+    include_ns_re: Optional[re.Pattern],
+) -> Iterable[Dict[str, Any]]:
+    for doc in docs:
+        if doc.get("section") != "data_info":
+            continue
+        if doc.get("subsection") != "indexes":
+            continue
+        if doc.get("error") is not None:
+            continue
+
+        params = doc.get("commandParameters") or {}
+        db = params.get("db")
+        coll = params.get("collection")
+        output = doc.get("output")
+
+        if not db or not coll or not isinstance(output, list):
+            continue
+
+        if not ns_allowed(db, coll, include_dbs, exclude_dbs, include_ns_re):
+            continue
+
+        for idx in output:
+            if not isinstance(idx, dict):
+                continue
+
+            yield {
+                "database": db,
+                "collection": coll,
+                "index_name": idx.get("name", ""),
+                "key": idx.get("key"),
+                "unique": bool(idx.get("unique", False)),
+            }
+
+
+# -------------------------
+# Online extractor (MongoDB cluster)
+# -------------------------
+
+def iter_indexes_from_cluster(
+    uri: str,
+    include_dbs: Optional[Set[str]],
+    exclude_dbs: Optional[Set[str]],
+    include_ns_re: Optional[re.Pattern],
+    use_certifi_ca: bool = False,
+) -> Iterable[Dict[str, Any]]:
+    try:
+        from pymongo import MongoClient
+    except Exception as e:
+        raise RuntimeError(f"PyMongo is required for --uri mode. Install with: pip install pymongo. Error: {e}")
+
+    client_kwargs: Dict[str, Any] = {}
+    if use_certifi_ca:
+        try:
+            import certifi
+            client_kwargs["tlsCAFile"] = certifi.where()
+        except Exception as e:
+            raise RuntimeError(
+                f"--use-certifi-ca requested but certifi not available. Install: pip install certifi. Error: {e}"
+            )
+
+    client = MongoClient(uri, **client_kwargs)
+    try:
+        db_names = client.list_database_names()
+        for db_name in db_names:
+            # DB-level filters first
+            if include_dbs is not None and db_name not in include_dbs:
+                continue
+            if exclude_dbs is not None and db_name in exclude_dbs:
+                continue
+            if db_name in ("admin", "local", "config"):
+                continue
+
+            db = client[db_name]
+            try:
+                coll_names = db.list_collection_names()
+            except Exception:
+                continue
+
+            for coll_name in coll_names:
+                if not ns_allowed(db_name, coll_name, include_dbs, exclude_dbs, include_ns_re):
+                    continue
+
+                coll = db[coll_name]
+                try:
+                    for idx in coll.list_indexes():
+                        yield {
+                            "database": db_name,
+                            "collection": coll_name,
+                            "index_name": idx.get("name", ""),
+                            "key": idx.get("key"),
+                            "unique": bool(idx.get("unique", False)),
+                        }
+                except Exception:
+                    continue
+    finally:
+        client.close()
+
+
+# -------------------------
+# Output helpers
+# -------------------------
+
+def print_report(limitations: List[Dict[str, Any]], title: str, input_label: str) -> None:
+    print(title)
+    print(f"Input: {input_label}")
+    print("Checking for unique and non-unique indexes on the same field/s...")
+    print(f"Limitations found: {len(limitations)}\n")
+
+    if not limitations:
+        print("No limitations found.")
+        return
+
+    for item in limitations:
+        ns = f"{item['database']}.{item['collection']}"
+        keys_dict = {k: v for k, v in item["index_keys"]}
+        print(
+            f"- {ns} | keys={keys_dict} "
+            f"| uniqueIndex={item['unique_index_names']} | non-uniqueIndex={item['non_unique_index_names']}"
+        )
+ ) + + mode = parser.add_mutually_exclusive_group(required=True) + mode.add_argument("--uri", help="MongoDB connection string (online mode).") + mode.add_argument("--getmongodata", help="Path to getMongoData JSON file (offline mode).") + + # Filters + parser.add_argument("--include-dbs", default=None, help="Comma-separated DB list to include (only these DBs).") + parser.add_argument("--exclude-dbs", default=None, help="Comma-separated DB list to exclude.") + parser.add_argument("--include-ns", default=None, help=r'Regex filter on namespace "db.collection". Example: "^prod_".') + + # Output / TLS helpers + parser.add_argument("--out", default=None, help="Write limitations to a JSON file.") + parser.add_argument( + "--use-certifi-ca", + action="store_true", + help="Online mode only: use certifi CA bundle (fixes CERTIFICATE_VERIFY_FAILED on some machines).", + ) + + args = parser.parse_args() + + include_dbs = _parse_csv_set(args.include_dbs) + exclude_dbs = _parse_csv_set(args.exclude_dbs) + include_ns_re = _compile_regex(args.include_ns) + + try: + if args.uri: + rows = iter_indexes_from_cluster( + args.uri, + include_dbs=include_dbs, + exclude_dbs=exclude_dbs, + include_ns_re=include_ns_re, + use_certifi_ca=args.use_certifi_ca, + ) + limitations = find_limitations(rows) + print_report(limitations, "Starting mongosync limitations checker (ONLINE).", args.uri) + + else: + with open(args.getmongodata, "r", encoding="utf-8") as f: + docs = json.load(f) + if not isinstance(docs, list): + print("ERROR: getMongoData JSON top-level must be a list.", file=sys.stderr) + return 2 + + rows = iter_indexes_from_getmongodata( + docs, + include_dbs=include_dbs, + exclude_dbs=exclude_dbs, + include_ns_re=include_ns_re, + ) + limitations = find_limitations(rows) + print_report(limitations, "Starting mongosync limitations checker (OFFLINE getMongoData).", args.getmongodata) + + if args.out: + with open(args.out, "w", encoding="utf-8") as f: + json.dump(limitations, f, indent=2) + print(f"\nWrote JSON report to: {args.out}") + + print("\nFinishing mongosync limitations checker.") + return 0 + + except Exception as e: + print(f"An error occurred: {e}", file=sys.stderr) + return 2 + + +if __name__ == "__main__": + raise SystemExit(main()) diff --git a/migration/toolbox/probIndexesComplete.js b/migration/toolbox/probIndexesComplete.js new file mode 100644 index 00000000..dd2be959 --- /dev/null +++ b/migration/toolbox/probIndexesComplete.js @@ -0,0 +1,49 @@ +const indexesUtilization = []; +const excludeDatabases = ['admin', 'config', 'local'] +const byteToMB = (byte) => ((byte/1024)/1024).toFixed(2); + +/* This version gets information for all non-system DBs. To limit it to specific DBs, edit the filter in the next line (e.g., by adding an explicit include list). 
+const databases = db.adminCommand('listDatabases').databases.filter(({ name }) => !excludeDatabases.includes(name));
+const project = { $project: {'ops': "$accesses.ops", 'accesses.since': 1, 'name': 1, 'key': 1, 'spec': 1} };
+
+
+for (const database of databases) {
+    const currentDb = db.getSiblingDB(database.name);
+
+    currentDb.getCollectionInfos({ type: "collection" }).forEach(function(collection){
+        const currentCollection = currentDb.getCollection(collection.name);
+
+        const indexes = currentCollection.getIndexes();
+        const indexesSize = currentCollection.stats().indexSizes;
+
+        currentCollection.aggregate( [ { $indexStats: { } }, project ] ).forEach(function(index){
+
+            const indexDetail = indexes.find(i => i.name === index.name);
+            const idxValues = Object.values(Object.assign({}, index.key));
+
+            let indexType = "common";
+            if(index.name === '_id_') indexType = '[INTERNAL]';
+            else if(idxValues.includes('2dsphere')) indexType = '2dsphere';
+            else if(idxValues.includes("geoHaystack")) indexType = 'geoHaystack';
+            else if(indexDetail?.textIndexVersion !== undefined) indexType = 'text';
+            else if(indexDetail?.expireAfterSeconds !== undefined) indexType = 'TTL';
+            else if(indexDetail?.partialFilterExpression !== undefined) indexType = 'Partial';
+
+            indexesUtilization.push({
+                db: database.name,
+                collection: collection.name,
+                name: index.name,
+                type: indexType,
+                unique: index.spec.unique,
+                accesses: index.ops,
+                'size (MB)': parseFloat(byteToMB(indexesSize[index.name])),
+                size: indexesSize[index.name],
+                accesses_since: index.accesses.since,
+            });
+        });
+    });
+}
+
+//const indexesProblematic = indexesUtilization.filter(index => {return index.type === 'TTL'})
+console.table(indexesUtilization);
+//console.table(indexesProblematic);
\ No newline at end of file